\begin{document}
\renewcommand\abstractname{\textbf{Abstract}} \title{The Kalman condition for the boundary controllability of coupled 1-d wave equations.}
\begin{center} S. Avdonin\textsuperscript{a}, L. de Teresa\textsuperscript{b}\\
\begin{footnotesize} \emph{\textsuperscript{a}University of Alaska Fairbanks}\\ \emph{\textsuperscript{b}Instituto de Matem\'aticas, Universidad Nacional Aut\'onoma de M\'exico, Circuito Exterior, G. U. 04510 D.F., M\'exico} \end{footnotesize} \end{center}
\noindent \rule{\textwidth}{0.4pt} \begin{abstract} \mbox{} \\ This paper proves the exact controllability of a system of $N$ one-dimensional coupled wave equations when a single control is exerted on a part of the boundary. We consider the case where the coupling matrix $A$ has distinct eigenvalues. We give a \emph{Kalman condition} (necessary and sufficient) and a description, non-optimal in general, of the attainable set. \\
\noindent\emph{Keywords:} Hyperbolic systems, Boundary Controllability, Kalman Rank condition, Divided differences. \end{abstract} \noindent \rule{\textwidth}{0.4pt}
\section{Statement of the Problem and Main Results} This work is devoted to the study of the controllability properties of the following hyperbolic system \begin{equation} \left\{ \begin{array}{ll} u_{tt} - u_{xx} + Au = 0, & \text{in $Q = (0,\pi) \times (0,T)$,}\\ u(0,t) = bf(t), \quad u(\pi,t) = 0 & \text{for $t \in (0,T)$,}\\ u(x,0) = u^0(x), \quad u_t (x,0) = u^1(x) & \text{for $x \in (0,\pi)$,} \end{array} \right. \end{equation} where $T > 0$ is given, $A \in \mathcal{L} (\mathbb R^N)$ is a given matrix, $b$ a given vector from $\mathbb R^N$ and $f \in L^2 (0,T)$ is a control function to be determined which acts on the system by means of the Dirichlet boundary condition at the point $x = 0$. The initial data $(u^0, u^1)$ will belong to a Hilbert space $\mathcal{H}$, which is to be specified in our main result. Our goal is to give necessary and sufficient conditions for the exact controllability of System (1) and the space $\mathcal{H}$ where this can be done.
We recall that System (1) is exactly controllable in $\mathcal{H}$ at time $T$ if, for every initial and final data $(u^0,u^1), (z^0,z^1)$, both in $\mathcal{H}$, there exists a control $f \in L^2 (0,T)$ such that the solution of System (1) corresponding to $(u^0,u^1,f)$ satisfies \begin{equation} u(x,T) = z^0 (x), \quad u_t(x,T) = z^1 (x). \end{equation}
Due to the linearity and time reversibility of System (1), this is equivalent to exact controllability from zero at time $T$. In other words, System (1) is exactly controllable if for every final state $(z^0,z^1) \in \mathcal{H}$, there exists a control $f \in L^2 (0,T)$ such that the solution $u$ to System (1) corresponding to $f$ satisfies (2) and \begin{equation} u(x,0) = 0 = u_t (x,0). \end{equation} For this reason, we will assume that $u^0 \equiv 0, u^1 \equiv 0$.
As of now, the controllability properties of System (1) are well known in the scalar case, i.e. when $N = 1$ (see for example \cite{fattorini1977}). When $N = 1$ and $b \not\equiv 0$, System (1) is exactly controllable in $\mathcal{H} = L^2 (0,\pi) \times H^{-1} (0,\pi)$ if $T \geq T_0 = 2\pi$.
Most of the known controllability results for (1) concern the case of two coupled equations (see \cite{acd2013, rd2011, bat2017}), and they are established for particular coupling matrices $A$. In the $d$-dimensional situation, that is, for a system of coupled wave equations in a domain $\Omega \subset \mathbb R^d$, Alabau-Boussouira and collaborators have obtained several results in the case of two equations for particular coupling matrices (see e.g. \cite{alabau2003, alabau2014, al2011} and the references therein).
On the other hand, controllability properties of linear ordinary differential systems are well understood. In particular, we have the famous Kalman rank condition (see for example \cite{kfa1969} Chapter 2, p.35). That is, if $N,M \in \mathbb{N}$ with $N,M \geq 1$, $A \in \mathcal{L}(\mathbb R^N)$ and $B \in \mathcal{L}(\mathbb R^M;\mathbb R^N)$, then the linear ordinary differential system $Y' = AY + Bu$ is controllable at time $T > 0$ if and only if \begin{equation} \label{kalman} \rank [A \mid B] = \rank [A^{N-1} B, A^{N-2} B, \cdots, B] = N, \end{equation} where $[A^{N-1}B, A^{N-2}B, \cdots, B] \in \mathcal{L}(\mathbb R^{MN};\mathbb R^N)$.
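As an illustrative numerical sketch (ours, not part of the paper's formal development), the rank condition (\ref{kalman}) can be checked directly; the function name and the example matrices below are hypothetical.

```python
import numpy as np

def kalman_rank(A, B):
    """Rank of the Kalman controllability matrix [B, AB, ..., A^{N-1}B]."""
    N = A.shape[0]
    blocks = [np.linalg.matrix_power(A, k) @ B for k in range(N)]
    return np.linalg.matrix_rank(np.hstack(blocks))

# A controllable single-input pair (B is the column vector b):
A = np.array([[0.0, 1.0],
              [2.0, 3.0]])
b = np.array([[1.0], [0.0]])
print(kalman_rank(A, b))   # 2, so (A, b) satisfies the Kalman rank condition

# A non-controllable pair: b is an eigenvector of the diagonal matrix A2.
A2 = np.array([[1.0, 0.0],
               [0.0, 2.0]])
print(kalman_rank(A2, b))  # 1 < 2, so the condition fails
```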
Recently, Liard and Lissy \cite{ll2017} gave a general Kalman condition for the internal controllability of $N$ coupled $d$-dimensional wave equations.
In the framework of parabolic coupled equations, \cite{abgd2011} gives a general Kalman rank condition for the null boundary controllability of $N$ coupled one-dimensional parabolic equations. The aim of this research is to establish general results, as in \cite{abgd2011}, in the case of one-dimensional coupled wave equations.
To state our results, we recall that the operator $-\partial_{xx}$ on $(0,\pi)$ with homogeneous Dirichlet boundary conditions admits a sequence of eigenvalues $\{\mu_k = k^2\}_{k=1}^\infty$ and eigenfunctions $\{\sin kx\}_{k=1}^\infty$. We note that this family of eigenfunctions is a Hilbert basis of $L^2 (0,\pi)$.
Our main result is the following:
\begin{thm} \label{thm1} Suppose that $A$ has $N$ distinct eigenvalues $\lambda_1, \ldots, \lambda_N$. Suppose that the following conditions hold: \begin{enumerate}[(i)]
\item $[A|b]$ satisfies the Kalman rank condition, \item \[ \mu_k - \mu_l \neq \lambda_i - \lambda_j, \quad \forall k,l \in \mathbb{N}, \forall 1 \leq i,j \leq N \text{ with $k \neq l$ and $i \neq j$}, \] \item $T \geq 2N\pi$. \end{enumerate} Then System (1)--(3) is exactly controllable in $\mathcal{H} = H^{N-1} (0,\pi;\mathbb R^N) \times H^{N-2} (0,\pi;\mathbb R^N)$.
Furthermore, if any of (i), (ii), or (iii) is not satisfied, then System (1)--(3) is not approximately controllable. In particular, if (i) or (iii) does not hold, then the codimension of the reachable set of System (1)--(3) in $L^2 (0,\pi;\mathbb R^N) \times H^{-1} (0,\pi;\mathbb R^N)$ is infinite. On the other hand, if (ii) fails, the sequence $\{k^2 + \lambda_l\}$, $k \in \mathbb{N}$, $l = 1, \ldots, N$, contains only a finite number of multiple points, so the codimension of the reachable set is finite. \end{thm}
\begin{rem}With respect to Theorem \ref{thm1}, we have the following remarks. \begin{itemize} \item Conditions (i) and (ii) are also necessary conditions that appear in \cite{abgd2011} for the null controllability of $N$ coupled one-dimensional parabolic equations. The hyperbolicity of the equations in our case requires a minimal control time. \item In general, the reachable space $\mathcal{H}$ is not optimal. In some particular situations it is possible to give an optimal description of the space. Examples include the cases when $N = 2$, when the coupling matrix is cascade, i.e., $A$ is lower triangular, or when $A$ is given in canonical form. \end{itemize} \end{rem}
\section{The Fourier Method and Existence of Solutions} In this section, we introduce the Fourier Method. Under the assumptions of Theorem \ref{thm1}, we denote by $\varphi_1, \ldots, \varphi_N$ the eigenvectors of $A$ corresponding to the eigenvalues $\lambda_1, \ldots, \lambda_N$. We denote by $\langle \cdot, \cdot \rangle$ the inner product in $\mathbb R^N$; then $A^*$ has eigenvalues $\overline{\lambda_i}$ and eigenvectors $\psi_i$, normalized so that \[ \langle \varphi_i, \psi_j \rangle = \delta_{ij}. \] Let us define $\Phi_{nj}(x) = \sin (nx) \varphi_j$. Then $\{\Phi_{nj}(x)\}$, $n \in \mathbb{N}$, $j = 1, \ldots, N$, is a Riesz basis in $L^2 (0,\pi;\mathbb R^N)$ with biorthogonal family $\{\Psi_{nj}(x)\}$ where \[ \Psi_{nj}(x) = \dfrac{2}{\pi} \sin (nx) \psi_j. \]
We then represent the solution $u$ of System (1) in the form of the series \begin{equation} \label{fouriersolution} u(x,t) = \sum_{n,j} a_{nj} (t) \Phi_{nj} (x) \end{equation} and set \begin{equation} \label{vxt} v(x,t) = g(t) \Psi_{kl} (x), \end{equation} where $g$ is a smooth compactly supported function, $g \in C_0^2 (0,T)$. Standard manipulations then yield equations for the coefficients $a_{nj} (t)$: \begin{align*} 0 &= \int_0^T \int_0^\pi \langle u_{tt} - u_{xx} + Au, v \rangle dx \: dt \\ &= \int_0^T \int_0^\pi \langle u, v_{tt} - v_{xx} + A^* v \rangle dx \: dt + \int_0^\pi \left[ \langle u_t, v\rangle - \langle u, v_t\rangle \right]_{t = 0}^T dx \\ &\quad - \int_0^T \left[ \langle u_x, v \rangle - \langle u, v_x \rangle \right]_{x = 0}^\pi dt\\ &= \int_0^T \int_0^\pi \langle u, \ddot{g} \Psi_{kl} + k^2 g \Psi_{kl} + \overline{\lambda_l} g \Psi_{kl} \rangle dx \: dt \\ &\quad - \dfrac{2}{\pi} \int_0^T k \langle b, \psi_l \rangle f(t) g(t) \: dt\\ &= \int_0^T a_{kl} [\ddot{g} + (k^2 + \overline{\lambda_l})g] \: dt - \dfrac{2k}{\pi} \int_0^T \langle b, \psi_l \rangle f(t) g(t) \: dt\\ &= \int_0^T [\ddot{a}_{kl} + (k^2 + \overline{\lambda_l})a_{kl}] g \: dt - \dfrac{2k}{\pi} \langle b, \psi_l \rangle \int_0^T f(t) g(t) \: dt. \end{align*} Thus we obtain the equations \begin{equation} \label{distinctdiffeq} \ddot{a}_{kl} + (k^2 + \overline{\lambda_l})a_{kl} = \dfrac{2k}{\pi} \langle b, \psi_l \rangle f(t) \end{equation} with zero initial conditions that follow from (3), i.e. \begin{equation} \label{distinctdiffeqic} a_{kl} (0) = 0 = \dot{a}_{kl}(0). \end{equation} We denote $k^2 + \overline{\lambda_l}$ by $\omega_{kl}^2$ and $\langle b, \psi_l \rangle$ by $\beta_l$. In the formulas below we assume that $\omega_{kl}^2 > 0$. In fact, if $\omega_{kl}^2 < 0$ or if $\omega_{kl}$ is not real, we need to replace trigonometric functions by hyperbolic ones (see e.g. \cite{ai1995} Section 3.2).
In the case where $\omega_{kl} = 0$, we will set $\frac{\sin (\omega_{kl}t)}{\omega_{kl}} = t$ (see e.g. \cite{ai1995} Sec. III.1).
The solution of (\ref{distinctdiffeq})--(\ref{distinctdiffeqic}) is given by the formula \begin{equation} \label{distinctakl} a_{kl} (t) = \dfrac{2k}{\pi} \beta_l \int_0^t f(\tau) \dfrac{\sin \omega_{kl} (t-\tau)}{\omega_{kl}} \: d\tau. \end{equation} By differentiating we obtain \begin{equation} \label{distinctakldot} \dot{a}_{kl} (t) = \dfrac{2k}{\pi} \beta_l \int_0^t f (\tau) \cos \omega_{kl} (t-\tau) \: d\tau. \end{equation} We now introduce the coefficients \begin{equation} \label{distinctckl} c_{kl} (t) = i \omega_{kl} a_{kl} (t) + \dot{a}_{kl} (t). \end{equation} We define $\omega_{-kl} = -\omega_{kl}$, $a_{-kl} = a_{kl}$, and $\dot{a}_{-kl} = \dot{a}_{kl}$ for $k \in \mathbb{K} = \{ \pm 1, \pm 2, \ldots\}$, $l \in \{1, \ldots, N\}$, and rewrite (\ref{distinctakl}) and (\ref{distinctakldot}) in the exponential form: \begin{equation} \label{distinctckl2} c_{kl} (t) = \dfrac{2k}{\pi} \beta_l \int_0^t f(\tau) e^{i \omega_{kl} (t-\tau)} \: d\tau. \end{equation} Taking into account that $\{\Phi_{nj}\}$ forms a Riesz basis in $L^2 (0,\pi;\mathbb R^N)$ and
\[ |\omega_{kl}| + 1 \asymp k, \: k \in \mathbb{K}, \] we conclude that \begin{equation} \label{ckluut}
\sum_{k \in \mathbb{K}} \dfrac{|c_{kl}(t)|^2}{k^2} \asymp \|u(\cdot,t)\|_{L^2(0,\pi;\mathbb R^N)}^2 + \|u_t (\cdot,t)\|_{H^{-1}(0,\pi;\mathbb R^N)}^2. \end{equation}
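As a numerical sanity check (ours, with arbitrary sample values standing in for $\omega_{kl}$, the constant $\frac{2k}{\pi}\beta_l$, and the control $f$), the Duhamel formula (\ref{distinctakl}) can be compared against a direct solve of the initial value problem (\ref{distinctdiffeq})--(\ref{distinctdiffeqic}):

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

omega, c = 2.0, 1.5                # sample values: omega_{kl} and (2k/pi) beta_l
f = lambda t: np.sin(3.0 * t)      # an arbitrary control

# Duhamel's formula: a(t) = c * int_0^t f(tau) sin(omega (t - tau)) / omega dtau
def a_duhamel(t):
    val, _ = quad(lambda tau: f(tau) * np.sin(omega * (t - tau)) / omega, 0.0, t)
    return c * val

# Direct solve of  a'' + omega^2 a = c f(t),  a(0) = a'(0) = 0.
sol = solve_ivp(lambda t, y: [y[1], -omega**2 * y[0] + c * f(t)],
                [0.0, 5.0], [0.0, 0.0], rtol=1e-10, atol=1e-12, dense_output=True)

for t in (1.0, 2.5, 5.0):
    assert abs(a_duhamel(t) - sol.sol(t)[0]) < 1e-6
```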
On the other hand, from the explicit form for $\omega_{kl}$, it follows that for any $t> 0$, the family $\{e^{i\omega_{kl}t}\}$ is either the union of a finite number of Riesz sequences if $t < 2\pi N$ or a Riesz sequence in $L^2(0,t)$ if $t \geq 2\pi N$ (see \cite{ai1995} Section II.4). We recall that a Riesz sequence is a Riesz basis in the closure of its linear span (see \cite{ai1995}). Therefore, from (\ref{distinctckl2}) it follows that for every fixed $t > 0$ \begin{equation} \label{cklf}
\sum_{k,l} \dfrac{|c_{kl}(t)|^2}{k^2} \prec \|f\|_{L^2(0,t)}^2. \end{equation} Here and below, the symbols $\asymp$ and $\prec$ in (\ref{ckluut}) and (\ref{cklf}) denote, respectively, two-sided and one-sided estimates with constants independent of the sequences $(c_{kl})$ and of the function $f$.
Additionally, it is not difficult (see \cite{ai1995} Sec.III.1) to check that
\[ \sum_{k,l} \dfrac{|c_{kl}(t+h) - c_{kl}(t)|^2}{k^2} \to 0, \quad h \to 0. \]
We combine our results in the following theorem.
\begin{thm} \label{thmdistinct} For any $f \in L^2 (0,T)$, there exists a unique generalized solution $u^f$ of the IBVP (1)--(3) such that \[ (u^f, u_t^f) \in C([0,T];L^2(0,\pi;\mathbb R^N) \times H^{-1} (0,\pi;\mathbb R^N)) =: \mathcal{V} \] and
\[ \|(u^f, u_t^f)\|_\mathcal{V} \prec \|f\|_{L^2(0,T)}. \] \end{thm}
\section{Controllability Results}
In this section we will prove Theorem \ref{thm1}. We assume that Conditions (i), (ii), and (iii) are satisfied. By Proposition 3.1 in \cite{abgd2011}, Condition (i) implies that $\beta_l \neq 0$ for all $l = 1, \ldots, N$. We then define $\gamma_{kl}$ to be \begin{equation} \label{distinctgammakl} \gamma_{kl} := c_{kl}(T) \left( \dfrac{2k}{\pi} \beta_l e^{i \omega_{kl} T} \right)^{-1} \end{equation} and rewrite (\ref{distinctckl2}) for $t = T$ in the form \[ \gamma_{kl} = (f, e_{kl})_{L^2 (0,T)}, \] where $e_{kl}(t) = e^{i \omega_{kl} t}$. We note that
\[ \sum_{k,l} |\gamma_{kl}|^2 \asymp \sum_{k,l} \dfrac{|c_{kl}(T)|^2}{k^2}. \]
We note that for $k$ fixed, the points $\omega_{kl}$ for $l = 1, \ldots, N$ are asymptotically close, i.e., these $N$ points lie inside an interval whose length tends to zero as $k$ tends to infinity. Therefore, the family $\{e_{kl}\}$ is not a Riesz basis in $L^2 (0,T)$ for any $T$. We therefore need to use the so-called exponential divided differences (EDD).
EDD were introduced in \cite{ai2001} and \cite{am2001} for families of exponentials whose exponents are close, that is, the difference between exponents tends to zero. Under precise assumptions, the family of EDD forms a Riesz basis in $L^2 (0,T)$. For each fixed $k$, we define \[ \tilde{e}_{k1} := [\omega_{k1}] = e^{i\omega_{k1}t}, \] and for $2 \leq l \leq N$ \[ \tilde{e}_{kl} := [\omega_{k1}, \omega_{k2}, \ldots, \omega_{kl}] = \sum_{j=1}^l \dfrac{e^{i \omega_{kj}t}}{\prod_{r \neq j} (\omega_{kj} - \omega_{kr})}. \] Under Condition (ii) of our theorem, we are able to use this formula for divided differences as opposed to the formula for generalized divided differences (see e.g. \cite{am2001}).
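The explicit sum above agrees with Newton's recursive definition of divided differences. The following sketch (ours, with an arbitrary time $t$ and a cluster of close frequencies as sample values) checks this numerically:

```python
import numpy as np

def dd_recursive(nodes, f):
    """Newton's divided difference [nodes[0], ..., nodes[-1]] of f."""
    vals = [f(x) for x in nodes]
    for level in range(1, len(nodes)):
        vals = [(vals[i + 1] - vals[i]) / (nodes[i + level] - nodes[i])
                for i in range(len(nodes) - level)]
    return vals[0]

def dd_explicit(nodes, f):
    """Explicit formula: sum_j f(x_j) / prod_{r != j} (x_j - x_r)."""
    return sum(f(xj) / np.prod([xj - xr for r, xr in enumerate(nodes) if r != j])
               for j, xj in enumerate(nodes))

t = 0.7
e = lambda w: np.exp(1j * w * t)   # the exponential e^{i w t} as a function of w
nodes = [5.0, 5.1, 5.25]           # close frequencies, as for omega_{k1}, ..., omega_{kN}
assert abs(dd_recursive(nodes, e) - dd_explicit(nodes, e)) < 1e-10
```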
From asymptotics theory and the explicit formula for $\omega_{kl}$, it follows that the generating function of the family of EDD $\{\tilde{e}_{kl}\}$ is a sine-type function (see \cite{ai1995, ai2001, am2001}). Hence, the family of EDD $\{\tilde{e}_{kl}\}$ forms a Riesz sequence in $L^2 (0,T)$ for $T \geq 2\pi N$. We then define \[ \tilde{\gamma}_{kl} = (f, \tilde{e}_{kl})_{L^2 (0,T)}. \]
Since $\{ \tilde{e}_{kl}\}$ is a Riesz sequence, $\{ (\tilde{\gamma}_{kl}) \mid f \in L^2 (0,T)\} = \ell^2$, i.e. any sequence from $\ell^2$ can be realized by some function $f \in L^2 (0,T)$ through the family $\{ \tilde{e}_{kl}\}$. We note that $| \omega_{kj} - \omega_{ki}| \asymp k^{-1}$ for $1 \leq i,j \leq N$, $i \neq j$. In particular, this implies that $|\tilde{\gamma}_{kl}| \prec k^{N-1} |\gamma_{kl}|$. Recalling Equations (\ref{distinctckl}) and (\ref{distinctgammakl}), we obtain \begin{equation} \label{tildegkl} \{ (\gamma_{kl}) \mid f \in L^2 (0,T)\} \supseteq \ell_{N-1}^2 \end{equation} where
\[ \ell_{N-1}^2 = \left\{ (a_{kl}) \mid \sum_{k,l} |k^{N-1} a_{kl}|^2 < \infty \right\}. \] Since $\{\Phi_{kl}\}$ forms a Riesz basis in $L^2 (0,\pi;\mathbb R^N)$, it follows from (\ref{distinctckl}), (\ref{distinctgammakl}), and (\ref{tildegkl}) that $(u(\cdot,t),u_t (\cdot,t)) \in H^{N-1}(0,\pi;\mathbb R^N) \times H^{N-2} (0,\pi;\mathbb R^N)$, and the positive part of Theorem \ref{thm1} is proved.
We will now prove the negative results in Theorem \ref{thm1}. We first assume that (i) and (iii) hold, but (ii) does not. Observe that (ii) can fail only for a finite number of indices (see \cite{abgd2011}), so we have \[ k_d^2 - l_d^2 = \lambda_{i_d} - \lambda_{j_d}, \quad 1 \leq d \leq m. \] In this situation, the family $\{e_{kl}\}$ given in (\ref{distinctckl2}) is clearly linearly dependent, since some function (or functions) appears twice in the family. Thus, according to Theorems I.2.1e and III.3.10e in \cite{ai1995}, System (1) is not approximately controllable for any $T > 0$.
Let us now suppose that (i) does not hold. This case is proved directly and is related to properties of exponential families (see \cite{ai1995} Sections I.1 and III.1).
If condition (iii) is not met, i.e. $T < 2\pi N$, then from \cite{ai2001} and \cite{am2001}, it follows that the family of EDD $\{\tilde{e}_{kl}\}$ is not a Riesz basis in $L^2 (0,T)$. In particular, we can split $\{\tilde{e}_{kl}\}$ into two subfamilies $\mathcal{E}_0$ and $\mathcal{E}_1$ such that $\mathcal{E}_0$ is a Riesz sequence in $L^2 (0,T)$ and $\mathcal{E}_1$ has infinite cardinality. This implies that $\{\tilde{e}_{kl}\}$ is not linearly independent and hence the reachable set has infinite codimension.
Thus we have proved the negative part of Theorem \ref{thm1}, and the proof is complete.
\section{A Particular Case: $N = 2$}
In the previous sections, we proved exact controllability with respect to a more regular space than the space of regularity for the system. This is typical of hybrid systems where clusters of close spectral points appear. However, in the case where $N = 2$, we are able to prove the sharp controllability result, i.e., to prove exact controllability in the space of sharp regularity of the system. To do this, we develop a new method based on the construction of a basis in a so-called asymmetric space. This method was proposed in \cite{ae2018} when investigating the controllability of another hybrid system of hyperbolic type -- the string with point masses. In the present paper, we extend this method to the vector case.
We consider System (1)-(3) with $N = 2$ and \begin{equation} \label{N2mat} b = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}.\end{equation} In other words, the boundary control acts only on the first equation and the second equation is controlled through its connection with the first. From now on, we will refer to this system as $\mathcal{S}_2$. The first question we ask is about the sharp regularity space. We claim that \[ u_1 (\cdot, t) \in L^2 (0,\pi), \quad u_2 (\cdot, t) \in H^1_0 (0,\pi). \]
From Theorem \ref{thmdistinct}, $(u_1 (\cdot, t), u_2 (\cdot,t)) \in L^2 (0,\pi)^2$. From the structure of the system, $u_2$ solves a wave equation with zero Dirichlet boundary conditions whose source term depends only on $u_1$. In particular, the Fourier coefficients of $u_2$ satisfy a system of linear nonhomogeneous ordinary differential equations, and solving this system by standard methods yields $u_2 (\cdot,t) \in H^1_0 (0,\pi)$.
The main result of this section is \begin{thm} \label{thmN2}
Assume, similarly to Theorem \ref{thm1}, that $A$ has two distinct eigenvalues $\lambda_1$, $\lambda_2$, that $b$ is given by (\ref{N2mat}) with $a_{21} \neq 0$ (so the Kalman rank condition for $[A|b]$ is fulfilled), that \[ \mu_k - \mu_l \neq \lambda_1 - \lambda_2, \quad \forall k,l \in \mathbb{N}, \text{ with $k \neq l$}, \] and that $T \geq 4\pi$. Then the reachable set of System $\mathcal{S}_2$, $\{(u^f (\cdot, T), u_t^f (\cdot,T)) \mid f \in L^2 (0,T)\}$, is equal to $\mathcal{H}_1$, where \[ \mathcal{H}_1 := \begin{pmatrix} L^2 (0,\pi) \\ H^1_0 (0,\pi) \end{pmatrix} \times \begin{pmatrix} H^{-1} (0,\pi) \\ L^2 (0,\pi) \end{pmatrix}. \]
If $T < 4\pi$, then the reachable set has infinite codimension in $\mathcal{H}_1$. \end{thm}
We will prove this theorem by considering the two possible cases, i.e., whether the matrix $A$ has two distinct eigenvalues or a repeated eigenvalue.
\begin{proof} We now return to the representation in (\ref{fouriersolution}): \begin{equation} \label{fouriersolutionN2} u(x,T) = \sum_{n,j} a_{nj} (T) \Phi_{nj} (x). \end{equation}
Since $N = 2$, we use EDD of order one, i.e., \[ \tilde{a}_{n1} = a_{n1}, \quad \tilde{a}_{n2} = \dfrac{a_{n2}-a_{n1}}{\omega_{n2} -\omega_{n1}}, \] where we suppress the argument $T$. We can rewrite (\ref{fouriersolutionN2}) in the form \begin{equation} \label{fouriersolutionN22} u(x,T) = \sum_{n,j} \tilde{a}_{nj} \tilde{\Phi}_{nj} (x). \end{equation} It is easy to verify that \begin{align} \tilde{\Phi}_{n1} (x) &= \Phi_{n1} (x) + \Phi_{n2} (x) = \sin (nx) (\varphi_1 + \varphi_2), \label{tildephi1}\\ \tilde{\Phi}_{n2} (x) &= \Phi_{n2} (x) (\omega_{n2} - \omega_{n1}) = \sin (nx) \varphi_2 (\omega_{n2} - \omega_{n1}). \label{tildephi2} \end{align}
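The equality of the two representations of $u(x,T)$ is a purely algebraic identity per mode $n$; a quick numerical check (ours, with arbitrary stand-in values for the basis vectors, coefficients, and frequencies) confirms it:

```python
import numpy as np

rng = np.random.default_rng(0)
phi1, phi2 = rng.normal(size=2), rng.normal(size=2)  # stand-ins for Phi_{n1}, Phi_{n2}
a1, a2 = rng.normal(), rng.normal()                  # coefficients a_{n1}, a_{n2}
w1, w2 = 3.0, 3.2                                    # omega_{n1}, omega_{n2}

# EDD of order one and the transformed basis vectors:
a1t, a2t = a1, (a2 - a1) / (w2 - w1)
phi1t = phi1 + phi2
phi2t = phi2 * (w2 - w1)

# a_{n1} Phi_{n1} + a_{n2} Phi_{n2}  =  atilde_{n1} Phitilde_{n1} + atilde_{n2} Phitilde_{n2}
assert np.allclose(a1 * phi1 + a2 * phi2, a1t * phi1t + a2t * phi2t)
```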
We note that $| \omega_{n2} - \omega_{n1}| \asymp n^{-1}$. We present the following lemma. \begin{lem} Eigenvectors $\varphi_1$ and $\varphi_2$ can be chosen such that \begin{equation} \label{phi12} \varphi_1 + \varphi_2 = \begin{pmatrix} \alpha \\ 0 \end{pmatrix}. \end{equation}
\begin{proof} In particular, we claim that the second components of $\varphi_1$ and $\varphi_2$ are nonzero. If this is true, then by appropriate scaling, we can obtain eigenvectors $\varphi_1$ and $\varphi_2$ whose second components add to zero. Suppose on the contrary that $\varphi_1$ has a zero second component. By scaling, we can assume that \[ \varphi_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}. \] By the biorthogonality of $\psi_1, \psi_2$, this implies that $\psi_2$ has the form \[ \psi_2 = \begin{pmatrix} 0 \\ x \end{pmatrix}, \] for some nonzero $x$. However, this contradicts the Kalman rank condition, as \[ \left\langle \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \psi_2 \right\rangle = 0. \] Hence, both $\varphi_1$ and $\varphi_2$ have nonzero second components and the lemma is proved. \end{proof} \end{lem} We can now express (\ref{fouriersolutionN22}) as \[ u(x,T) = \sum_n \sin (nx) \left[ \tilde{a}_{n1} \begin{pmatrix} \alpha \\ 0 \end{pmatrix} + \tilde{a}_{n2} \begin{pmatrix} \beta \\ \gamma \end{pmatrix} (\omega_{n2} - \omega_{n1}) \right]. \] Note that $\gamma \neq 0$.
We recall that $(\tilde{a}_{n1})$ and $(\tilde{a}_{n2})$ may be arbitrary $\ell^2$ sequences (when $f$ runs over $L^2 (0,T)$). Taking into account that $\{ \sin (nx)\}$ is an orthogonal basis in $L^2 (0,\pi)$, we begin by choosing the second component of $u(x,T)$ to be any target function from $H^1_0 (0,\pi)$, and thereby choosing $\tilde{a}_{n2}$ (recalling that $|\omega_{n2} - \omega_{n1}| \asymp n^{-1}$).
After choosing $\tilde{a}_{n2}$, we can then choose $\tilde{a}_{n1}$ so that the first component of $u(x,T)$ will coincide with any prescribed function from $L^2 (0,\pi)$. We can treat $u_t (x,T)$ in a similar fashion. This is due to the relation between sine and cosine and their appearance in $u (x,T)$ and $u_t(x,T)$. It is this relation that allows us to obtain controllability in time $T \geq 4\pi$. Thus, one of the cases for the positive part of Theorem \ref{thmN2} is proved. We note that the negative part of the theorem can be proved similarly to Theorem \ref{thm1}. \end{proof}
As a result of this, we have the following corollary. \begin{cor} The family $\{\tilde{\Phi}_{nj}\}$ constructed in (\ref{tildephi1})--(\ref{phi12}) forms a Riesz basis in the asymmetric space $L^2 (0,\pi) \times H^1 (0,\pi)$. \end{cor} \begin{proof} We have proved that every function from $L^2 (0,\pi) \times H^1 (0,\pi)$ can be represented in the form of a series with respect to the family $\{ \tilde{\Phi}_{nj}\}$ with $\ell^2$ coefficients. Uniqueness of the representation follows from the basis property of $\{\sin (nx)\}$ and linear independence of the eigenvectors $\varphi_1$ and $\varphi_2$. Finally, it is clear that
\[ \| u_1 (\cdot, T)\|_{L^2 (0,\pi)}^2 + \|u_2 (\cdot, T)\|_{H^1 (0,\pi)}^2 \asymp \sum_{n,j} | \tilde{a}_{nj}|^2. \] \end{proof}
As a remark, the latter sum is equivalent to $\|f\|^2$, where $f$ is the minimal-norm control corresponding to $u(\cdot, T)$. This control belongs to the closure of the linear span of $\{e^{i \omega_{nj} t}\}$ in $L^2 (0,T)$.
\section{Open Problems and Further Results}
When the coupling matrix $A$ is in lower triangular form, it is not difficult to generalize the results for coupled hyperbolic equations. That is, it is possible to prove exact controllability under the same assumptions as Theorem \ref{thm1} in the space $\mathcal{H} =\mathcal{H}^0 \times \cdots \times \mathcal{H}^{N-1}$, where $\mathcal{H}^k = H^k (0,\pi) \times H^{k-1} (0,\pi)$. On the other hand, given an arbitrary matrix $A$, if the Kalman rank condition holds, we can pass to a canonical version of the original system and obtain similar results for this \emph{transformed system}. The problem is that transforming back to the original system mixes the different components of $\mathcal{H}$, so an optimal description of the controllability space is no longer possible.
We have proved controllability under the assumption that the coupling matrix $A$ has $N$ distinct eigenvalues. It remains to prove that the system is controllable for a generic matrix $A$, assuming only that the Kalman rank condition is satisfied.
It remains an open problem to treat the boundary controllability of $N$ coupled wave equations in $\mathbb R^d$. The methods in this paper are not of use in the general situation or when the matrix $A$ depends on $(x,t)$.
As we were finishing the writing of this paper, \cite{bat2017} was published. In it, the case of two coupled one-dimensional wave equations with first-order coupling and a specific coupling matrix $A = A(x)$ is treated.
\section{Acknowledgments} A significant part of this research was made when S.~Avdonin visited UNAM supported by PREI, UNAM, Mexico. He is very grateful to the Department of Mathematics for its hospitality. S.~Avdonin was also supported in part by NSF grant DMS 1411564 and by the Ministry of Education and Science of Republic of Kazakhstan under the grant No. AP05136197. L. de Teresa was supported in part by PAPIIT-IN102116, UNAM, Mexico.
\end{document}
\begin{document}
\title{Better Experimental Design by Hybridizing Binary Matching with Imbalance Optimization}
\begin{abstract} We present a new experimental design procedure that divides a set of experimental units into two groups in order to minimize error in estimating an additive treatment effect. One concern when minimizing error at the experimental design stage is large covariate imbalance between the two groups. Another concern is robustness of the design to misspecification of the response model. We address both concerns in our proposed design: we first place subjects into pairs using optimal nonbipartite matching, making our estimator robust to complicated non-linear response models. Our innovation is to keep the matched pairs extant, take differences of the covariate values within each matched pair, and then use the greedy switching heuristic of Krieger et al. (2019) or rerandomization on these differences. This latter step greatly reduces covariate imbalance, to the rate $O_p(n^{-4})$ in the case of one uniformly distributed covariate. This rate combines the rate of the greedy switching heuristic, which is $O_p(n^{-3})$, with the rate of matching, which is $O_p(n^{-1})$. Further, our resultant designs are shown to be as random as matching, which makes them robust to unobserved covariates. When compared to previous designs, our approach exhibits significant improvement in the mean squared error of the treatment effect estimator when the response model is nonlinear and performs at least as well when the response model is linear. Our design procedure is available as a method in the open source \texttt{R} package \texttt{GreedyExperimentalDesign} on \texttt{CRAN}. \end{abstract}
\pagebreak
\section{Introduction}\label{sec:intro}
The setting we consider is the two-arm non-sequential randomized study with a continuous endpoint (e.g. a pill-placebo double-blind clinical trial assessing blood glucose improvement) that seeks inference for an additive treatment effect. There are $2n$ individuals which are sample-size balanced between the two groups. Each subject $i$ is placed into the treatment or control group and this information is encoded in $w_i \in \braces{-1, +1}$ where -1 denotes assignment to control and +1 denotes assignment to treatment. The vector $\bv{w} := \bracks{w_1~\ldots~w_{2n}}$ is called an \textit{assignment}, \textit{allocation}, or a \textit{randomization}. The collection of the legal $\bv{w}$ vectors and the probability distribution on the legal vectors in the experimental setting is known as an experimental \textit{design}, \textit{strategy}, \textit{algorithm} or \textit{procedure}.
There are $p$ covariates, the $j$th of which for subject $i$ is denoted $x_{ij}$; these are assumed known in advance of the randomization (the setting of known measurements is sometimes called \textit{offline} to distinguish it from the \textit{sequential} setting, the latter not being the setting considered in this paper). After any randomization, the values of the subjects' covariates in each group are approximately the same; and it is this fact that is the main reason randomization is employed when seeking causal inference \citep{Cornfield1959}. Thus, \textit{imbalance} (of which there are many principled metrics) in the covariate values between the treatment subjects and the control subjects should be small. Upon completion, the sample responses $y_i$, the result of an unknown process we call the \emph{response model}, are assessed. These sample responses are used to compute a causal estimate of the population average treatment effect $\beta_T$.
Our contribution is a new experimental design that provides (1) lower error in estimating the average treatment effect and (2) robustness whether the response model is linear or non-linear. If the response model is linear, the best experimental design is one that optimizes the treatment assignment for minimal covariate imbalance; we call such designs \emph{imbalance-optimizing designs}. If the response model is nonlinear, a good strategy is to optimize the treatment assignment by creating binary matches (i.e. nonbipartite pairings of subjects) with small intramatch covariate distance. If the response model is a combination of both linear and nonlinear, then both criteria (minimizing covariate imbalance and matching) become important. Our hybrid design does both: first it creates optimal nonbipartite matches on covariate distance and then optimizes intramatch assignments to provide small covariate imbalance across the whole sample. The latter can be accomplished by many methods; we investigate rerandomization and a pairwise greedy-switching heuristic (the subject of our previous work), where this heuristic now operates on the differences of the covariate values within each match rather than on the raw subject covariate values.
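To make the hybrid idea concrete, here is a simplified one-covariate sketch (ours, not the package implementation): pairing by sorting stands in for optimal nonbipartite matching, which it recovers exactly in one dimension, and rerandomization of the within-pair signs stands in for the greedy switching step.

```python
import numpy as np

def hybrid_design(x, n_rerand=1000, seed=0):
    """Match-then-optimize: pair subjects on the covariate, then rerandomize
    the within-pair assignments to minimize overall covariate imbalance."""
    rng = np.random.default_rng(seed)
    pairs = np.argsort(x).reshape(-1, 2)       # adjacent order statistics form pairs
    d = x[pairs[:, 0]] - x[pairs[:, 1]]        # within-pair covariate differences
    best_s, best_imb = None, np.inf
    for _ in range(n_rerand):                  # rerandomize signs on the differences
        s = rng.choice([-1, 1], size=len(pairs))
        imb = abs(s @ d)                       # |sum_i w_i x_i|, proportional to imbalance
        if imb < best_imb:
            best_s, best_imb = s, imb
    w = np.empty(len(x), dtype=int)
    w[pairs[:, 0]], w[pairs[:, 1]] = best_s, -best_s
    return w

x = np.random.default_rng(1).uniform(0, 3, size=40)  # 2n = 40, as in the illustration
w = hybrid_design(x)
assert w.sum() == 0                                  # forced sample-size balance
print(abs(x[w == 1].mean() - x[w == -1].mean()))     # small covariate imbalance
```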
To illustrate the advantages of our hybrid experimental design that both matches and imbalance-optimizes, consider linear and nonlinear response models, respectively $y_i = x_i + \half\beta_T w_i + \mathcal{E}_i$ and $y_i = x_i^2 + \half \beta_T w_i + \mathcal{E}_i$, where we set $\beta_T = 1$, the $x_i$'s are drawn iid from $U(0, 3)$ and the $\mathcal{E}_i$'s are drawn iid from $\normnot{0}{0.1^2}$. We use a small sample size of $2n = 40$ and consider three designs: (G) pure imbalance optimization via the greedy pair switching design which will be detailed in Section~\ref{sec:methodology}, (M) optimal non-bipartite matching and (MG) optimal non-bipartite matching followed by imbalance optimization via the greedy pair switching design. We simulate the above system 1,000 times and measure covariate imbalance using the metric log$_{10}(|\bar{\bv{x}}_T - \bar{\bv{x}}_C|)$ and squared error between the estimate $\ybar_T - \ybar_C$ and $\beta_T$. We average over the simulations to estimate both means and display the results in Table~\ref{tab:basic_demo}.
\begin{table}[h] \centering
\begin{tabular}{cccc|ccc} &&Average & Average & \multicolumn{3}{c}{p value for the average} \\ Response && Log$_{10}$ & Squared & \multicolumn{3}{c}{squared error comparison} \\
Model & Design & Imbalance & Error & G & M & MG \\ \hline Linear & G & -3.90 & 0.00099 & - & $\approx 0$ & 0.74 \\ Linear & M & -1.87 & 0.00135 & - & - & $\approx 0$ \\ Linear & MG & -4.55 & 0.00105 & - & - & - \\ \hline Nonlinear & G & - &0.04350 & - & $\approx 0$ & $\approx 0$ \\ Nonlinear & M & - &0.00614 & - & - & 0.08\\ Nonlinear & MG & - & 0.00273 & - & - & - \\ \end{tabular} \caption{Results of the illustrative simulation explained in the text.} \label{tab:basic_demo} \end{table}
In the linear model, the pure imbalance-optimizing design (G) provides better MSE performance than matching (M), and employing the hybrid design combining matching with imbalance-optimization (MG) does not penalize performance (i.e. G and MG have MSEs that are statistically equal). When the model is nonlinear, covariate imbalance is less relevant, and thus (M) provides much better MSE performance than (G); the hybrid design (MG) outperforms (M), although not statistically significantly here (we will see in the results section that this advantage likely appears only for very small sample sizes). Thus the hybrid (MG) design is robust and adaptive, as it has the best performance in both the linear and nonlinear settings.
When examining the imbalance, M achieves small covariate imbalance, but not when compared to G and MG; this is expected, as imbalance is not M's primary objective, whereas it is the primary objective of the other two. MG has smaller covariate imbalance than G, as it combines the reduction in imbalance from M with the largely independent reduction from G, the pure imbalance-optimizing procedure. The nearly order-of-magnitude difference in imbalance between MG and G does not translate to a significantly lower MSE, as there are diminishing returns to imbalance optimization; the theoretical reasons for this will become clear in the following section.
\section{Methodology}\label{sec:methodology}
\subsection{Setup and Assumptions}\label{subsec:setup}
We denote the responses $\bv{y} = \bracks{y_1, \ldots, y_{2n}}^\top$ where the number of subjects is $2n$ and $n$ is assumed even. The \emph{assignment} or \emph{allocation} vector is $\bv{w} = \bracks{w_1, \ldots, w_{2n}}^\top \in \braces{-1,+1}^{2n}$ whose entries are either $+1$ (the subject received $T$) or $-1$ (the subject received $C$). We define a \emph{design} $D$ as $W_D$, a discrete uniform random variable with support $\mathbb{W}_D \subseteq \braces{-1,+1}^{2n}$ and number of vectors $\abss{\mathbb{W}_D} \leq 2^{2n}$. We restrict the designs we consider to those that have the mirror property \citet[Assumption 2.2]{Kapelner2020}, \citet[Assumption 1b]{Kallus2018}, \citet{Nordin2020}, i.e. for any $\bv{w}$, if the treatment assignments and control assignments were switched then the resulting assignment would also be in the design: $\bv{w} \in \mathbb{W} \Rightarrow -\bv{w} \in \mathbb{W}$. We further restrict the designs we consider to be \emph{forced balance procedures} where all allocations have the same number of treated and control subjects \citep[Chapter 3.3]{Rosenberger2016}. This is a minor restriction; the least restrictive forced balance procedure is the standard design known as the balanced complete randomization design (BCRD) or completely random design, denoted $\mathbb{W}_{BCRD} = \braces{\bv{w}\,:\, \sum w_i = 0}$, which has $\binom{2n}{n}$ assignments. If a design further restricts this set, i.e. $\mathbb{W}_D \subset \mathbb{W}_{BCRD}$, we call $D$ a \emph{restricted design}.
We assume the only source of randomness in the response is the treatment assignment $\bv{w}$. This assumption on the source of randomness is termed the \emph{randomization model} \citep[Chapter 6.3]{Rosenberger2016}, the \textit{Fisher model} or the \textit{Neyman model}, whereby \qu{the $2n$ subjects are the population of interest; they are not assumed to be randomly drawn from a superpopulation} \citep[page 297]{Lin2013}.
As in \citet{Kallus2018}, we employ the simple \emph{differences-in-means} estimator,
\begin{eqnarray}\label{eq:estimator} \hat{\beta} := \frac{\bv{w}^\top \bv{y}}{2n} = \half(\Ybar_T - \Ybar_C) \end{eqnarray}
which estimates the population average treatment effect regardless of the response model and the nature of the treatment effect \citep[Example 4.7]{Lehmann1998}.
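As a sanity check of the identity in Equation~\ref{eq:estimator}, here is a minimal sketch in plain Python (the function name is ours, for illustration only):

```python
def diff_in_means_estimator(w, y):
    """Compute beta_hat = w^T y / (2n) and check it equals half the
    difference in arm means, per the estimator equation in the text.
    w has entries +1 (treated) and -1 (control); y are the responses."""
    two_n = len(y)
    beta_hat = sum(wi * yi for wi, yi in zip(w, y)) / two_n
    ybar_T = sum(yi for wi, yi in zip(w, y) if wi == 1) / (two_n // 2)
    ybar_C = sum(yi for wi, yi in zip(w, y) if wi == -1) / (two_n // 2)
    assert abs(beta_hat - 0.5 * (ybar_T - ybar_C)) < 1e-12
    return beta_hat
```

The identity holds because, under forced balance, $\bv{w}^\top \bv{y} = n(\Ybar_T - \Ybar_C)$.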
\subsection{Previous Literature}
Our design idea stems from a combination of two long-standing streams of research: first, the pure imbalance-optimizing design literature and second, the binary-matching bias-reducing literature.
For the pure imbalance-optimizing designs, we first reference the rerandomization design R. Here, one begins with vectors drawn from BCRD and discards those whose covariate imbalance is beyond a desired threshold. This idea goes back to Fisher, the father of randomized experiments, who was aware of imbalanced allocations and ironically warned against pure unrestricted randomization, the very procedure he is famous for.
Here, $\mathbb{W}_{R(a)} := \braces{\bv{w}\,:\, d_M(\bv{w}) \leq a, \bv{w} \in \mathbb{W}_{BCRD}}$ where
\begin{eqnarray}\label{eq:mahal} d_M(\bv{w}) := n (\bar{\bv{x}}_T - \bar{\bv{x}}_C)^\top \hat{\Sigma}_X^{-1} (\bar{\bv{x}}_T - \bar{\bv{x}}_C), \end{eqnarray}
a popular metric of covariate imbalance between the treatment and control groups known as the Mahalanobis distance, $a$ is an upper-bound acceptability threshold on the covariate imbalance, $\bar{\bv{x}}_T$ and $\bar{\bv{x}}_C$ are the sample averages of the $p$ covariates in the treatment and control groups respectively and $\hat{\Sigma}_X$ is the sample variance-covariance matrix of all $2n$ subjects. Since $a < \infty$ implies that $\mathbb{W}_{R(a)} \subset \mathbb{W}_{BCRD}$, rerandomization is a restricted design by construction. Assuming multivariate-normally distributed covariates and an additive treatment effect, but making no assumption on the nature of the response model, the multiplicative reduction in MSE of the difference-in-averages treatment estimator when using the R design is $\eta(p, a) R^2$ where
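A sketch of this imbalance metric, assuming \texttt{numpy} and the $\pm 1$ assignment coding of Section~\ref{subsec:setup}:

```python
import numpy as np

def mahalanobis_imbalance(X, w):
    """d_M(w) = n (xbar_T - xbar_C)^T SigmaHat^{-1} (xbar_T - xbar_C),
    where SigmaHat is the sample covariance matrix of all 2n subjects.
    X is a (2n x p) covariate matrix; w is in {-1,+1}^{2n}, sum(w) = 0."""
    n = len(w) // 2
    diff = X[w == 1].mean(axis=0) - X[w == -1].mean(axis=0)
    Sigma_inv = np.linalg.inv(np.cov(X, rowvar=False))
    return float(n * diff @ Sigma_inv @ diff)
```

Note that $d_M$ is invariant under mirroring ($\bv{w} \mapsto -\bv{w}$), since the arm means simply trade places.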
\begin{eqnarray}\label{eq:rerand_reduction} \eta(p, a) := 1 - \frac{2}{p} \frac{\gamma(p/2 + 1, a/2)}{\gamma(p/2, a/2)}, \end{eqnarray}
$\gamma(\cdot, \cdot)$ denotes the lower incomplete gamma function and $R^2$ is the variance explained in an OLS linear regression of the response on the covariates \citep{Morgan2012}.
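The factor $\eta(p, a)$ can be computed with a pure-Python series expansion of the lower incomplete gamma function (a sketch; the truncation at 200 terms is our assumption, adequate for moderate arguments):

```python
import math

def lower_inc_gamma(s, x, terms=200):
    """Series for the lower incomplete gamma function:
    gamma(s, x) = x^s e^{-x} * sum_{k>=0} x^k / (s (s+1) ... (s+k))."""
    total, term = 0.0, 1.0 / s
    for k in range(terms):
        total += term
        term *= x / (s + k + 1)
    return x ** s * math.exp(-x) * total

def eta(p, a):
    """The eta(p, a) MSE-reduction factor defined above."""
    return 1.0 - (2.0 / p) * (lower_inc_gamma(p / 2 + 1, a / 2)
                              / lower_inc_gamma(p / 2, a / 2))
```

As a check, $\eta(p, a) \to 1$ as $a \to 0$ and $\eta(p, a) \to 0$ as $a \to \infty$, i.e. tighter thresholds give larger reductions, with diminishing returns.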
This reduction in treatment estimator error comes from two sources: (1) a reduction of the imbalance in the $p$ covariates, which is under the experimenter's control, and (2) the degree of linearity of the covariates in the response as measured by $R^2$.
As for the first source of estimator error reduction, the lowering of imbalance achieved by rerandomization is only slightly more impressive than that of BCRD. There are many ideas for achieving a better reduction in covariate imbalance. First, \citet[Section 3.3]{Kallus2018} conjectured using a heuristic argument that the optimal reduction (i.e. for the best possible vector) is exponential, $2^{-\Omega(n)}$. Numerical optimization approaches such as that of \citet{Bertsimas2015} employ heuristics to approximate the optimal vector. Some heuristics come with theoretical guarantees; e.g. the greedy pair-switching design of \citet{Krieger2019} (henceforth denoted KAK19) uses a greedy heuristic that begins with a draw from BCRD, considers switching every T/C pair, retains the switch with the greatest reduction in the imbalance metric and stops switching when it is no longer fruitful. The resulting vectors have a provably very low imbalance, $O_p(n^{-(1 + 2/p)})$, which can be further enhanced by generating many $\bv{w}$'s and retaining only the best of those, akin to rerandomization. Moreover, since there is a provably low number of switches, the degree of randomness is nearly that of BCRD. We denote this restricted design as G.
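A univariate ($p = 1$) sketch of the greedy pair-switching idea, measuring imbalance by $|\bv{w}^\top \bv{x}|$ (our simplification for illustration, not the package implementation):

```python
def greedy_pair_switch(x, w):
    """Univariate greedy pair switching: starting from a balanced assignment
    w (entries +1/-1), repeatedly perform the single T/C swap that most
    reduces the imbalance |sum_i w_i x_i|; stop when no swap strictly helps."""
    d = sum(wi * xi for wi, xi in zip(w, x))
    while True:
        best_val, best_pair = abs(d), None
        for i in range(len(x)):
            if w[i] != 1:
                continue
            for j in range(len(x)):
                if w[j] != -1:
                    continue
                cand = abs(d - 2 * x[i] + 2 * x[j])  # effect of swapping i and j
                if cand < best_val:
                    best_val, best_pair = cand, (i, j)
        if best_pair is None:
            return w, d
        i, j = best_pair
        d = d - 2 * x[i] + 2 * x[j]
        w[i], w[j] = -1, 1
```

Each accepted swap strictly decreases $|d|$, so the loop terminates; swaps preserve the forced balance $\sum_i w_i = 0$.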
Better rates than those offered by G are unlikely to matter, as the imbalance reduction multiple (Equation~\ref{eq:rerand_reduction}) has diminishing returns in $a$. After a procedure such as KAK19, the performance of the estimator is much more sensitive to the second source of error, i.e. the strength of the covariates' effect on the response (as gauged by $R^2$). If the $R^2$ is low (in the case of a nonlinear response model and/or a high-noise setting), covariate imbalance reduction will not be too fruitful. Moreover, \citet[Section 2.3.3]{Kallus2018} proves that minimizing $a$ in rerandomization is fruitful for minimax estimator error reduction only in the case of a linear response model. \citet{Kallus2018} explains why this advantage is limited to linear response models: even if $d_M(\bv{w}) = 0$ (i.e. the first moments are matched perfectly), the covariate distributions in the two arms could be very different. In a nonlinear response model, the estimator performance can depend on more subtle differences in the covariates' distributions (e.g. their tail behavior).
The second stream of research, binary matching, implicitly attempts to equalize the covariate distributions in both arms. Here, the $2n$ subjects' indices are first organized into a \textit{matched pair structure} $\mathcal{M}_d$, a set whose elements are \textit{pairs} (sets of two subject indices). The structure $\mathcal{M}_d$ is created by first assuming a distance function $d(\bv{x}_r, \bv{x}_s) : \mathbb{R}^p \times \mathbb{R}^p \rightarrow \mathbb{R}_{\geq 0}$ whose inputs are subject covariates. A $2n \times 2n$ nonnegative symmetric matrix is then computed consisting of the distances for all pairs of subjects. The optimal match structure $\mathcal{M}_d^\star$ is the one of the $(2n)!/(2^n n!)$ possible pairings that minimizes the sum of the resulting intramatch distances, i.e. $\sum_{\braces{r,s} \in \mathcal{M}} d(\bv{x}_r, \bv{x}_s)$. The problem of producing the optimal match structure is known as \emph{optimal nonbipartite matching} and is solvable in polynomial time (see \citealp{Lu2011} for details and history). To create an assignment vector $\bv{w}$, the individual assignments within each of the $n$ pairs are randomly allocated to T/C or C/T via $n$ iid Bernoulli draws. We denote the optimal binary matching design with the Mahalanobis distance metric (Equation~\ref{eq:mahal}) as M and its allocation space by $\mathbb{W}_{M} = \{\bv{w}\,:\,w_r = -w_s, \braces{r,s} \in \mathcal{M}_{d_M}^\star\} \subset \mathbb{W}_{BCRD}$.
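For intuition, here is a brute-force sketch of optimal nonbipartite matching; it enumerates pairings recursively and is feasible only for small $2n$ (real applications use a polynomial-time solver):

```python
def optimal_nonbipartite_match(D):
    """Exhaustively find the pairing of indices {0, ..., 2n-1} minimizing
    the sum of intramatch distances D[i][j]; returns (total cost, pairs).
    Exponential time -- an illustrative sketch for small 2n only."""
    def rec(free):
        if not free:
            return 0.0, []
        i = free[0]  # the lowest free index must be matched with someone
        best_cost, best_pairs = float("inf"), None
        for j in free[1:]:
            rest = tuple(k for k in free if k != i and k != j)
            cost, pairs = rec(rest)
            if D[i][j] + cost < best_cost:
                best_cost, best_pairs = D[i][j] + cost, [(i, j)] + pairs
        return best_cost, best_pairs
    return rec(tuple(range(len(D))))
```

On four points at positions $0, 1, 10, 11$ with absolute-difference distances, the minimizer pairs the two close points on each end.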
Can binary matching play a role in experimental design? The M design was investigated by \citet{Greevy2004}, who reported lower imbalance and higher power using randomization tests. Moreover, matching in order to obtain robust estimation under a nonlinear response model has a long literature in observational studies; we recommend \citet{Stuart2010} for a broad overview. (Without the ability to manipulate $\bv{w}$, the matching procedure is quite different, as it is limited to fixed $w_i$'s, and is called \emph{bipartite matching}.) \citet{Rubin1979} was the first to show this robustness and recommended that $d$ be specified as the Mahalanobis distance of Equation~\ref{eq:mahal}.
Further, \citet[Section 2.3.2]{Kallus2018} proves that binary matching via $d$ is the minimax variance-minimizing experimental design for response functions that are Lipschitz continuous with respect to the metric $d$. It is reasonable to expect in a real-world experimental setting (for instance, a clinical trial in medicine) that the response would be continuous with derivatives not changing too quickly (i.e. satisfying the Lipschitz condition). Thus, to reduce the variance of the treatment estimator in nonlinear response models, we elect to use a binary matching design.
Our contribution is relatively simple: we first compute $\mathcal{M}_{d_M}^\star$ as in an M design; this binary pairing structure provides robustness to nonlinear response models. Then, instead of assigning the pairs' subjects via Bernoulli draws as explained above, we further restrict the pair assignments to insist on small imbalance by using R (resulting in the matching-then-rerandomization design we term MR) or by using G (resulting in the matching-then-greedy-pair-switching design we term MG). We turn to the details of MR and MG now.
\subsection{Our Algorithms}
First, $\mathcal{M}_{d_M}^\star$ is computed by running the optimal non-bipartite matching algorithm implemented by \citet{Beck2016} in the \texttt{R} package \texttt{nbpMatching}.
For R after M, $\mathbb{W}_{MR(a)} := \braces{\bv{w}\,:\, d_M(\bv{w}) \leq a, \bv{w} \in \mathbb{W}_M}$ i.e. we draw many different assignments from $\mathbb{W}_M$ and retain those whose Mahalanobis distances meet the threshold $a$ thereby finding small imbalances subject to the match structure.
For G after M, we cannot simply input $\bv{w} \in \mathbb{W}_M$ directly into KAK19, as this would violate the $\mathcal{M}_{d_M}^\star$ structure by breaking the binary pairings. Instead, we temporarily view the matched pairs as individual subjects (of which there are $n$) and use these as the input into KAK19; that is, the intramatch covariate vector differences temporarily play the role of the subjects' covariates. Put another way, the T arm now refers to a matched pair assigned T/C, where the covariate difference vector is computed by taking the first subject's covariates minus the second subject's covariates; the C arm is the opposite. In one iteration of KAK19, all \textit{pair of pairs switches} are considered: from (T/C)/(C/T) to (C/T)/(T/C) or vice versa. One of these will result in the largest reduction in imbalance as measured among the pairs, which is algebraically the same imbalance as measured among the entire $\bv{w}$ vector for the $2n$ subjects. This best pair of pairs switch is retained, and the algorithm stops when an iteration can no longer reduce the imbalance. This scheme thereby preserves $\mathcal{M}_{d_M}^\star$.
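A univariate sketch of this scheme, assuming the matched pairs are given as (first, second) covariate-value tuples and that the number of pairs $n$ is even:

```python
def mg_assignment(pairs):
    """Greedy pair-of-pairs switching on intramatch differences (univariate
    sketch). Each pair gets orientation +1 (T/C) or -1 (C/T); the subject-level
    imbalance sum_i w_i x_i equals sum_m s_m * delta_m, where delta_m is the
    within-pair covariate difference. Greedily switch one +1 pair with one
    -1 pair while the overall imbalance |d| strictly decreases."""
    deltas = [a - b for a, b in pairs]
    n = len(deltas)
    s = [1] * (n // 2) + [-1] * (n // 2)  # balanced starting orientation
    d = sum(sm * dm for sm, dm in zip(s, deltas))
    while True:
        best_val, best_swap = abs(d), None
        for i in range(n):
            if s[i] != 1:
                continue
            for j in range(n):
                if s[j] != -1:
                    continue
                cand = abs(d - 2 * deltas[i] + 2 * deltas[j])
                if cand < best_val:
                    best_val, best_swap = cand, (i, j)
        if best_swap is None:
            break
        i, j = best_swap
        d = d - 2 * deltas[i] + 2 * deltas[j]
        s[i], s[j] = -1, 1
    # expand pair orientations back to subject-level (first, second) assignments
    w = [wi for sm in s for wi in (sm, -sm)]
    return w, d
```

Because each pair always contributes one T and one C, the matched structure is preserved by construction; only the orientations change.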
Under our designs, to evaluate hypotheses concerning the value of $\beta_T$ we can use the randomization test and to construct approximate confidence intervals for $\beta_T$ we can invert the randomization test. For details, see \citet[Section 2.2]{Morgan2012} and KAK19, Section 6; both provide a careful step-by-step explanation and a literature review.
One can consider an alternative implementation of the G in MG as follows. After M, each iteration of G can perform one switch of one pair (instead of a switch of a pair of pairs). Here, we now have fewer switch possibilities, i.e. $n$ and not $\binom{n}{2}$, and we conjecture this procedure would not perform as well as the one proposed above.
One can also consider a GM (or RM) procedure i.e. to first imbalance-optimize via G (or R) and then employ binary matching. In that scenario, there will be a different $\mathcal{M}$ for each initial imbalance optimized vector based on a bipartite matching. The gain here would be that the imbalance-optimization would be slightly better but it comes at a cost: bipartite matches are much more constrained than nonbipartite matches and thus we conjecture that these designs will have much lower robustness to nonlinear models.
\subsection{Theoretical Investigation of MG and MR}\label{sec:theory}
\subsubsection{Imbalance}
The key idea of our designs is to retain the match structure while optimizing imbalance. Instead of employing G or R using $\mathbb{W}_{BCRD}$ as a base, we now employ G or R using $\mathbb{W}_{M}$ as a base. We are not aware of theoretical results for the imbalance after optimal nonbipartite matching when $p > 1$ and thus we focus on the case when $p = 1$.
In this univariate case, consider the order statistics $X_{(1)}, \ldots, X_{(2n)}$. Optimal matching creates $\mathcal{M} = \braces{\braces{2n, 2n-1}, \ldots, \braces{2,1}}$, i.e. pairing the largest covariate value with the second largest, the third largest with the fourth largest, etc. We consider the case where $X_1,\ldots,X_{2n}$ are iid $\stduniform$. Define $\Delta_i = X_{(2i)} - X_{(2i-1)}$ for $i = 1, 2, \ldots, n$. By \citet[Theorem 6.6c]{Dasgupta2011}, $\bracks{\Delta_1,\ldots,\Delta_n}$ has the same distribution as $\oneover{S}\bracks{E_1,\ldots,E_n}$, where $E_1,\ldots,E_{2n+1}$ are iid $\exponential{1}$ and $S := \sum_{i=1}^{2n+1} E_i = O_p(n)$. Since the denominator $S$ is common to all entries of the vector, applying the greedy pair-switching of KAK19 to $\bracks{\Delta_1,\ldots,\Delta_n}$ is the same (in distribution) as applying the algorithm to $\bracks{E_1,\ldots,E_n}$ and then dividing by $S$.
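This distributional fact can be checked numerically (a Monte Carlo sketch; each intramatch gap $\Delta_i$ is marginally a uniform spacing with mean $1/(2n+1)$):

```python
import random

def mean_intramatch_gap(two_n, reps, seed=0):
    """Monte Carlo estimate of E[Delta_i]: draw 2n iid U(0,1) values,
    sort, pair adjacent order statistics and average the within-pair gaps."""
    rng = random.Random(seed)
    total, count = 0.0, 0
    for _ in range(reps):
        xs = sorted(rng.random() for _ in range(two_n))
        for i in range(0, two_n, 2):
            total += xs[i + 1] - xs[i]
            count += 1
    return total / count
```

For $2n = 10$ the theoretical mean gap is $1/11 \approx 0.0909$, which the simulation recovers to within Monte Carlo error.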
The imbalance of assignments in G is $O_p(n^{-3})$ by KAK19 (Theorem 1), a distribution-free result. When dividing by $S$, which is $O_p(n)$, the resultant rate of our MG design is $O_p(n^{-4})$.
The imbalance of assignments in R is less than $a$ by construction. However, if $a$ is too small, practical computational constraints dictate that all the BCRD assignments one generates will have imbalance greater than $a$, making it impossible to locate any assignments in $\mathbb{W}_{R(a)}$. KAK19 reframes the discussion of R's imbalance by making this computational constraint explicit. Let $n_R$ denote the number of BCRD assignments one searches through for each resultant assignment in R. From each collection of $n_R$ assignments, the one that provides the minimum imbalance is retained. These retained vectors then have imbalance of order $n_R^{-1} O_p(n^{-1/2})$, where $O_p(n^{-1/2})$ is the imbalance of BCRD. It then follows, by the same argument as above, that upon dividing by $S$ the resultant imbalance of MR is $n_R^{-1} O_p(n^{-3/2})$.
Extending the theoretical results for the imbalance of MG beyond the assumption of uniformly distributed covariates is difficult. The main step needed to prove that the order of the imbalance of the assignments in G is $O_p(n^{-3})$ is showing that there exists a pair of covariate values to switch, $X_i, X_j$ with $w_i = 1$ and $w_j = -1$, that yields a necessary reduction in imbalance. The analogous result here would require locating $\Delta_i, \Delta_j$ with $w_i = 1$, $w_{i+1} = -1$, $w_j = -1$ and $w_{j+1} = 1$ that yields the same reduction in imbalance after switching both pairs. Showing the necessary reduction in imbalance assumed that the $X_i$'s are continuous and iid. We were able to use this result in this paper when the $X_i$'s were uniformly distributed; if the $X_i$'s are normally distributed, the argument is much more complicated, as the $\Delta_i$'s corresponding to covariate values in a tail are stochastically larger than the $\Delta_i$'s corresponding to covariate values in the center. The imbalance of MG in the normal case is shown by simulation to be lower than that of G (see Figure~\ref{fig:mse_density_plots_normal} in the Supporting Information).
\subsubsection{Degree of Randomness}
We now turn to the degree of randomness of MG and MR. We reiterate that in experimental design, it is important for the assignments to be highly random, as this randomness provides insurance against a large variance in the treatment effect estimator due to unobserved covariates \citep{Kapelner2020}. Both G and R are less random than BCRD since they are forms of restricted randomization. In the case of G, KAK19 (Theorem 2 and Proposition 1) shows that G's assignments are asymptotically as random as BCRD according to three degree-of-randomness metrics: (a) the pairwise entropy of assignments, (b) the standard error of the probability of pairwise assignments and (c) the accidental bias metric of \citet[Section 5]{Efron1971}, defined as the maximum eigenvalue of the variance-covariance matrix of $W$, the random variable producing assignments in any strategy (see Section~\ref{subsec:setup}). The reason for G's high degree of randomness is that the number of pairwise switches made in G is small, being only $O_p(\sqrt{n})$. Thus, since G starts with BCRD and its assignments are not changed significantly, the resulting $\mathbb{W}_G$ is very large. In the case of R, we directly limit the number of assignments via the cutoff $a$. As $a$ decreases, $\mathbb{W}_R$ shrinks relative to $\mathbb{W}_{BCRD}$, making R less random. Since $a$ can never be too small due to computational limitations, it follows that randomness is not substantially deteriorated in R. Thus, MG is asymptotically as random as M, and MR-in-practice is likely asymptotically as random as M as well. And M is highly random according to all three degree-of-randomness metrics, being only slightly less random than BCRD.
\section{Simulation Results}\label{sec:simulations}
To illustrate our main results we turn to simulations. All simulations herein were performed with \texttt{GreedyExperimentalDesign}, an \texttt{R} package available on \texttt{CRAN} whose core is implemented in \texttt{Java} for speed. We measure mean squared error (MSE) of the difference-in-averages estimator for an additive average treatment effect when the response model depends on underlying covariate(s).
We conjecture that (a) the MSE performance of MR/MG will be similar to imbalance-optimizing designs G/R when the response model is purely linear, (b) the MSE performance of MR/MG will be similar to the pure nonbipartite matching design M when the response model is purely nonlinear and (c) the MSE performance of MR/MG will be better than either exclusively imbalance-optimizing G/R or exclusively nonbipartite matching M when the response model is a hybrid of a strong linear component and strong nonlinear component.
To demonstrate the conjecture we consider $n=100$, two covariates and five response models of the form $\bv{Y} = \beta_T\bv{w} + f(x_1, x_2) + \bv{\errorrv}$, where $x_1, x_2$ are independent draws from $U(-\sqrt{3},+\sqrt{3})$ so that their variance is one and the $n$ components of $\bv{\errorrv}$ are iid draws from $\normnot{0}{0.5^2}$. The five models are (Z) a null zero model $f = 0$ for calibration and for ensuring the simulation is working, (L) a purely linear model $f = 3x_1 + 3x_2$, (LsNL) a mostly linear but slightly nonlinear model $f = 3x_1 + 3x_2 + x_1^2$, (LNL) a linear and nonlinear model $f = 3x_1 + 3x_2 + x_1^2 + x_2^2 + x_1 x_2$ and (NL) a purely nonlinear model $f = x_1^2 + x_2^2 + x_1 x_2$.
We then consider the six designs of the previous section, all of which feature an equiprobable draw from a set of mirrored allocations: (BCRD) balanced and completely randomized, (R) rerandomization of \citet{Morgan2012} where only the top 1\% of BCRD vectors are retained, (G) the greedy pair switching of KAK19, (M) optimal nonbipartite matching, and our two new hybrid approaches of (MG) optimal nonbipartite matching followed by greedy pair switching on the within-pair covariate differences and (MR) optimal nonbipartite matching followed by rerandomization on the within-pair covariate differences where the top 1\% of pairings are retained. Thus, we have $5 \times 6 = 30$ sample size-model-design simulation settings.
For each sample size-model-design simulation setting, we simulate 50 different covariate and noise value settings, $x_1, x_2, \mathcal{E}_1, \ldots, \mathcal{E}_n$. Within each specific covariate-noise realization, we draw 500 unique allocation vectors from each of the six design strategies. We define the MSE as $\expesubnostr{X_1, X_2, \bv{\errorrv}}{\cexpesubnostr{W_D}{(\hat{\beta}_T - \beta_T)^2}{X_1, X_2}}$ for each sample size-model-design setting. This MSE is estimated by first taking the arithmetic average of the 500 squared errors within each covariate-noise setting and then averaging these over the 50 covariate-noise settings. Estimates for the matching designs (M and MR) are usually computed as the average of the paired differences. This estimate is algebraically equivalent to the naive difference-in-averages estimate, and thus estimates are computed uniformly for all six designs.
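The nested Monte Carlo estimate of the MSE can be sketched as follows (the callables \texttt{draw\_setting} and \texttt{draw\_assignment} are hypothetical stand-ins for the covariate/noise generation and the design draws described above):

```python
def nested_mse(draw_setting, draw_assignment, beta_T,
               n_settings=50, n_draws=500):
    """Estimate E_{X,eps}[ E_W[ (beta_hat - beta_T)^2 | X ] ]: average the
    squared error over assignment draws within each covariate/noise setting,
    then average those inner means across settings."""
    outer = []
    for _ in range(n_settings):
        x, eps, response = draw_setting()  # one covariate-noise realization
        inner = []
        for _ in range(n_draws):
            w = draw_assignment(x)  # one allocation from the design
            y = [response(xi, wi) + ei for xi, wi, ei in zip(x, w, eps)]
            beta_hat = sum(wi * yi for wi, yi in zip(w, y)) / len(y)
            inner.append((beta_hat - beta_T) ** 2)
        outer.append(sum(inner) / len(inner))
    return sum(outer) / len(outer)
```

With the response model $y_i = \beta_T w_i$ and no noise, the estimator recovers $\beta_T$ exactly and the estimated MSE is zero.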
\begin{figure}
\caption{Density plots for MSE (on a log scale) over the covariate distributions, noise and assignments when $n=100$ for the five models (the vertically stacked panes) and six designs (depicted as different colors) explained in the text. The vertical lines indicate the MSE estimate (explained in the text) by design. Lines directly adjacent to one another are the result of a slight jitter. Statistically significant differences are tabulated in Table~\ref{tab:all_pvals}.}
\label{fig:mse_density_plots}
\end{figure}
Figure~\ref{fig:mse_density_plots} provides density plots of the MSE when $n=100$ for all five models and six designs, and Table~\ref{tab:all_pvals} tabulates the statistical significance of each design comparison. The null response model (Z) serves as a check on the integrity of our simulation: since the covariates are unrelated to the response, all designs perform equally, as seen in the top pane of Figure~\ref{fig:mse_density_plots}; for this reason it was omitted from Table~\ref{tab:all_pvals}. For all other response models, where the covariates inform the response, the figure and table illustrate the main thrust of this paper's contribution: (1) our designs, MR and MG, are always the best performers (or tied for the best performance), indicating robustness to linear, nonlinear and mixed linear-nonlinear response models. Other observations support this main thrust. (2) BCRD is the worst performer (or tied for the worst performance), which is expected given that it does not allocate subjects with the benefit of knowing the covariate values. (3) In the exclusively linear model (L), the pure imbalance-optimizing designs perform best, i.e. G, R, MR and MG all perform equally, with M lagging. (4) In the exclusively nonlinear model (NL), the matching designs perform best, i.e. M, MR and MG all perform equally. (5) In the linear-nonlinear models (LsNL, LNL), our matching-plus-imbalance-optimizing designs MR and MG are equally the best, the matching design M is second best and the pure imbalance-optimizing designs G and R lag behind M. (6) R lags behind G because its imbalance decreases at a much slower rate (see KAK19 for details); MR also lags behind MG for the same reason, but the difference is not appreciable in these simulations.
\begin{table}[h] \centering
\begin{subtable}{.5\linewidth}{ \begin{tabular}{rlllll}
& R & G & M & MR & MG \\
\hline BCRD & *** & *** & *** & *** & *** \\
R & & & & & \\
G & & & ** & & \\
M & & & & ** & ** \\
MR & & & & & \\
\end{tabular} } \caption{Model: L} \label{tab:pvals1} \end{subtable}~ \begin{subtable}{.5\linewidth}{ \begin{tabular}{rlllll}
& R & G & M & MR & MG \\
\hline BCRD & *** & *** & *** & *** & *** \\
R & & & *** & *** & *** \\
G & & & ** & *** & *** \\
M & & & & * & * \\
MR & & & & & \\
\end{tabular} } \caption{Model: LsNL} \label{tab:pvals2} \end{subtable}\\ ~\\~\\
\begin{subtable}{.5\linewidth}{ \begin{tabular}{rlllll}
& R & G & M & MR & MG \\
\hline BCRD & *** & *** & *** & *** & *** \\
R & & & *** & *** & *** \\
G & & & *** & *** & *** \\
M & & & & * & * \\
MR & & & & & \\
\end{tabular} } \caption{Model: LNL} \label{tab:pvals3} \end{subtable}~ \begin{subtable}{.5\linewidth}{ \begin{tabular}{rlllll}
& R & G & M & MR & MG \\
\hline BCRD & & *** & *** & *** & *** \\
R & & *** & *** & *** & *** \\
G & & & *** & *** & *** \\
M & & & & & \\
MR & & & & & \\
\end{tabular} } \caption{Model: NL} \label{tab:pvals4} \end{subtable}
\caption{Tukey-Kramer comparisons of the MSE for each design pair by model. * indicates $p < 0.05$, ** indicates $p < 0.01$ and *** indicates $p < 0.001$. The Z model is not displayed as none of its design pairs' MSEs tested statistically significant.} \label{tab:all_pvals} \end{table}
We also repeated the simulation for three other sample sizes ($n=32$, $n=132$ and $n=200$) and they all have qualitatively similar results. The tabulation of MSE estimates for all sample sizes, designs and response models can be found in Tables~\ref{fig:ols_n_32}-\ref{fig:ols_n_200} in the Supporting Information. The entire simulation was also repeated for the case of normally distributed covariates and the results were not substantially different (see Figure~\ref{fig:mse_density_plots_normal} in the Supporting Information).
\section{Concluding Remarks}\label{sec:conclusion}
We proposed a randomized experimental design that is a hybrid between two well-studied strategies: those that optimize covariate imbalance and those that use binary matching. The former is known to be minimax optimal when the response model is linear in the covariates and the latter is known to be minimax optimal when the response model is continuous with limits on its derivative with respect to all $p$ covariates. Additionally, we have theory that shows that fusing both strategies fortuitously further reduces the covariate imbalance, all while retaining a degree of randomization close to that of the classic completely random designs.
Because we fuse both types of design together without sacrificing the performance of either, our designs provide very low MSE when estimating a population average treatment effect in the purely linear case (because we optimize covariate imbalance), in the purely nonlinear case (because our assignments respect optimal nonbipartite matched pairs) and in response models that feature both linear and nonlinear components. Thus we expect our design to be very powerful when employed in real-world settings such as clinical trials and Internet-based experimentation.
There are many extensions of this work. First, there are different types of matching, such as ratio matching \citep[Section 3.1.2]{Stuart2010}, matching in more than two arms (ibid, Section 6.1.4), matching with discrete covariates (the Mahalanobis distance function is not the most appropriate here, as discussed in \citealt{Gu1993}) and matching with some covariates being more important than others. As some matches may not be acceptable, one can caliper match; this would result in a subset of the subjects being matched and a subset being unmatched. In this case, the G algorithm would have to be redesigned, but the R procedure would carry over fairly straightforwardly. Further, one may wish to use the OLS estimator instead of the classical difference-in-averages estimator (Equation~\ref{eq:estimator}). One can argue that restricted randomization leaves the experimenter vulnerable to high-MSE estimation due to imbalance in unobserved covariates \citep[Sections 2.2.1, 2.2.3, 2.3.1 and 2.3.3]{Kapelner2020}. Therein it is proven that restricted designs with a high degree of randomness, such as the hybrid designs in this work (see Section~\ref{sec:theory}), are not susceptible. Further, \citet[Equation 14]{Kapelner2020} showed that the OLS estimator improves asymptotic MSE by an order of magnitude in the sample size. However, imbalance-optimizing designs (such as R and G) still provide improvement in the linear case. Covariate adjustment cannot aid with the nonlinear component of the response model, and thus our hybrid designs should perform very well in conjunction with the OLS estimator on nonlinear models; indeed, the OLS estimator is actually preferred in matching designs \citep{Rubin1979}. Future work can also adapt this design to the sequential setting, where individuals in the experiment arrive one by one and must be assigned quickly to an arm.
We can apply the on-the-fly matching procedure in \citet{Kapelner2014} with the rerandomization approach in \citet{Zhou2018} in the same vein as our hybrid procedure.
\subsection*{Replication}
All figures and tables can be reproduced by running the \texttt{R} code found at \url{https://github.com/kapelner/GreedyExperimentalDesign/blob/master/hybrid_paper}.
\pagebreak \appendix
\section*{Supporting Information}\label{app} \renewcommand{\theequation}{A\arabic{equation}} \renewcommand{\thefigure}{A\arabic{figure}} \renewcommand{\thetable}{A\arabic{table}} \setcounter{figure}{0} \setcounter{table}{0}
\begin{figure}
\caption{The same density plots as Figure~\ref{fig:mse_density_plots} except that the underlying simulation used normally distributed covariates.}
\label{fig:mse_density_plots_normal}
\end{figure}
\begin{table}[ht] \centering \begin{tabular}{rrrrr}
\hline
& Estimate & Std. Error & t value & Pr($>$$|$t$|$) \\
\hline (Intercept) & 0.0100 & 0.0022 & 4.57 & 0.0000 \\
M & -0.0000 & 0.0031 & -0.01 & 0.9917 \\
R & -0.0000 & 0.0031 & -0.00 & 0.9999 \\
G & -0.0001 & 0.0031 & -0.04 & 0.9659 \\
MR & -0.0001 & 0.0031 & -0.02 & 0.9809 \\
MG & -0.0001 & 0.0031 & -0.02 & 0.9806 \\
L & 0.7016 & 0.0031 & 226.58 & 0.0000 \\
LsNL & 0.7284 & 0.0031 & 235.22 & 0.0000 \\
LNL & 0.8049 & 0.0031 & 259.91 & 0.0000 \\
NL & 0.1040 & 0.0031 & 33.59 & 0.0000 \\
M:L & -0.6887 & 0.0044 & -157.25 & 0.0000 \\
R:L & -0.6979 & 0.0044 & -159.37 & 0.0000 \\
G:L & -0.7016 & 0.0044 & -160.22 & 0.0000 \\
MR:L & -0.7016 & 0.0044 & -160.20 & 0.0000 \\
MG:L & -0.7016 & 0.0044 & -160.22 & 0.0000 \\
M:LsNL & -0.7131 & 0.0044 & -162.84 & 0.0000 \\
R:LsNL & -0.6918 & 0.0044 & -157.98 & 0.0000 \\
G:LsNL & -0.6970 & 0.0044 & -159.16 & 0.0000 \\
MR:LsNL & -0.7257 & 0.0044 & -165.71 & 0.0000 \\
MG:LsNL & -0.7259 & 0.0044 & -165.76 & 0.0000 \\
M:LNL & -0.7842 & 0.0044 & -179.07 & 0.0000 \\
R:LNL & -0.6965 & 0.0044 & -159.04 & 0.0000 \\
G:LNL & -0.7060 & 0.0044 & -161.20 & 0.0000 \\
MR:LNL & -0.7970 & 0.0044 & -181.98 & 0.0000 \\
MG:LNL & -0.7972 & 0.0044 & -182.03 & 0.0000 \\
M:NL & -0.0966 & 0.0044 & -22.05 & 0.0000 \\
R:NL & 0.0005 & 0.0044 & 0.11 & 0.9161 \\
G:NL & -0.0051 & 0.0044 & -1.17 & 0.2431 \\
MR:NL & -0.0962 & 0.0044 & -21.96 & 0.0000 \\
MG:NL & -0.0963 & 0.0044 & -21.99 & 0.0000 \\
\hline \end{tabular} \caption{OLS output for squared error regressed on design $\times$ response model in the simulation of Section~\ref{sec:simulations} for $n = 32$.} \label{fig:ols_n_32} \end{table}
\begin{table}[ht] \centering \begin{tabular}{rrrrr}
\hline
& Estimate & Std. Error & t value & Pr($>$$|$t$|$) \\
\hline (Intercept) & 0.0100 & 0.0022 & 4.57 & 0.0000 \\
M & -0.0000 & 0.0031 & -0.01 & 0.9917 \\
R & -0.0000 & 0.0031 & -0.00 & 0.9999 \\
G & -0.0001 & 0.0031 & -0.04 & 0.9659 \\
MR & -0.0001 & 0.0031 & -0.02 & 0.9809 \\
MG & -0.0001 & 0.0031 & -0.02 & 0.9806 \\
L & 0.7016 & 0.0031 & 226.58 & 0.0000 \\
LsNL & 0.7284 & 0.0031 & 235.22 & 0.0000 \\
LNL & 0.8049 & 0.0031 & 259.91 & 0.0000 \\
NL & 0.1040 & 0.0031 & 33.59 & 0.0000 \\
M:L & -0.6887 & 0.0044 & -157.25 & 0.0000 \\
R:L & -0.6979 & 0.0044 & -159.37 & 0.0000 \\
G:L & -0.7016 & 0.0044 & -160.22 & 0.0000 \\
MR:L & -0.7016 & 0.0044 & -160.20 & 0.0000 \\
MG:L & -0.7016 & 0.0044 & -160.22 & 0.0000 \\
M:LsNL & -0.7131 & 0.0044 & -162.84 & 0.0000 \\
R:LsNL & -0.6918 & 0.0044 & -157.98 & 0.0000 \\
G:LsNL & -0.6970 & 0.0044 & -159.16 & 0.0000 \\
MR:LsNL & -0.7257 & 0.0044 & -165.71 & 0.0000 \\
MG:LsNL & -0.7259 & 0.0044 & -165.76 & 0.0000 \\
M:LNL & -0.7842 & 0.0044 & -179.07 & 0.0000 \\
R:LNL & -0.6965 & 0.0044 & -159.04 & 0.0000 \\
G:LNL & -0.7060 & 0.0044 & -161.20 & 0.0000 \\
MR:LNL & -0.7970 & 0.0044 & -181.98 & 0.0000 \\
MG:LNL & -0.7972 & 0.0044 & -182.03 & 0.0000 \\
M:NL & -0.0966 & 0.0044 & -22.05 & 0.0000 \\
R:NL & 0.0005 & 0.0044 & 0.11 & 0.9161 \\
G:NL & -0.0051 & 0.0044 & -1.17 & 0.2431 \\
MR:NL & -0.0962 & 0.0044 & -21.96 & 0.0000 \\
MG:NL & -0.0963 & 0.0044 & -21.99 & 0.0000 \\
\hline \end{tabular} \caption{OLS output for squared error regressed on design $\times$ response model in the simulation of Section~\ref{sec:simulations} for $n = 100$.} \label{fig:ols_n_100} \end{table}
\begin{table}[ht] \centering \begin{tabular}{rrrrr}
\hline
& Estimate & Std. Error & t value & Pr($>$$|$t$|$) \\
\hline (Intercept) & 0.0073 & 0.0017 & 4.19 & 0.0000 \\
M & 0.0003 & 0.0025 & 0.11 & 0.9126 \\
R & -0.0000 & 0.0025 & -0.02 & 0.9856 \\
G & 0.0000 & 0.0025 & 0.01 & 0.9880 \\
MR & 0.0001 & 0.0025 & 0.05 & 0.9590 \\
MG & 0.0000 & 0.0025 & 0.02 & 0.9841 \\
L & 0.5650 & 0.0025 & 229.80 & 0.0000 \\
LsNL & 0.5907 & 0.0025 & 240.25 & 0.0000 \\
LNL & 0.6414 & 0.0025 & 260.86 & 0.0000 \\
NL & 0.0809 & 0.0025 & 32.92 & 0.0000 \\
M:L & -0.5575 & 0.0035 & -160.34 & 0.0000 \\
R:L & -0.5620 & 0.0035 & -161.64 & 0.0000 \\
G:L & -0.5650 & 0.0035 & -162.50 & 0.0000 \\
MR:L & -0.5650 & 0.0035 & -162.49 & 0.0000 \\
MG:L & -0.5650 & 0.0035 & -162.50 & 0.0000 \\
M:LsNL & -0.5813 & 0.0035 & -167.18 & 0.0000 \\
R:LsNL & -0.5634 & 0.0035 & -162.03 & 0.0000 \\
G:LsNL & -0.5674 & 0.0035 & -163.18 & 0.0000 \\
MR:LsNL & -0.5890 & 0.0035 & -169.39 & 0.0000 \\
MG:LsNL & -0.5890 & 0.0035 & -169.41 & 0.0000 \\
M:LNL & -0.6292 & 0.0035 & -180.95 & 0.0000 \\
R:LNL & -0.5565 & 0.0035 & -160.06 & 0.0000 \\
G:LNL & -0.5644 & 0.0035 & -162.33 & 0.0000 \\
MR:LNL & -0.6369 & 0.0035 & -183.17 & 0.0000 \\
MG:LNL & -0.6368 & 0.0035 & -183.15 & 0.0000 \\
M:NL & -0.0764 & 0.0035 & -21.97 & 0.0000 \\
R:NL & 0.0008 & 0.0035 & 0.22 & 0.8239 \\
G:NL & -0.0040 & 0.0035 & -1.15 & 0.2519 \\
MR:NL & -0.0765 & 0.0035 & -21.99 & 0.0000 \\
MG:NL & -0.0764 & 0.0035 & -21.97 & 0.0000 \\
\hline \end{tabular}
\caption{OLS output for squared error regressed on design $\times$ response model in the simulation of Section~\ref{sec:simulations} for $n = 132$.} \label{fig:ols_n_132} \end{table}
\begin{table}[ht] \centering \begin{tabular}{rrrrr}
\hline
& Estimate & Std. Error & t value & Pr($>$$|$t$|$) \\
\hline (Intercept) & 0.0050 & 0.0011 & 4.56 & 0.0000 \\
M & 0.0002 & 0.0015 & 0.11 & 0.9139 \\
R & 0.0000 & 0.0015 & 0.02 & 0.9863 \\
G & 0.0000 & 0.0015 & 0.01 & 0.9901 \\
MR & 0.0002 & 0.0015 & 0.14 & 0.8922 \\
MG & 0.0002 & 0.0015 & 0.15 & 0.8769 \\
L & 0.3512 & 0.0015 & 227.27 & 0.0000 \\
LsNL & 0.3669 & 0.0015 & 237.41 & 0.0000 \\
LNL & 0.4033 & 0.0015 & 260.98 & 0.0000 \\
NL & 0.0506 & 0.0015 & 32.74 & 0.0000 \\
M:L & -0.3482 & 0.0022 & -159.32 & 0.0000 \\
R:L & -0.3493 & 0.0022 & -159.82 & 0.0000 \\
G:L & -0.3512 & 0.0022 & -160.70 & 0.0000 \\
MR:L & -0.3512 & 0.0022 & -160.70 & 0.0000 \\
MG:L & -0.3512 & 0.0022 & -160.70 & 0.0000 \\
M:LsNL & -0.3633 & 0.0022 & -166.20 & 0.0000 \\
R:LsNL & -0.3490 & 0.0022 & -159.66 & 0.0000 \\
G:LsNL & -0.3514 & 0.0022 & -160.76 & 0.0000 \\
MR:LsNL & -0.3662 & 0.0022 & -167.54 & 0.0000 \\
MG:LsNL & -0.3662 & 0.0022 & -167.55 & 0.0000 \\
M:LNL & -0.3987 & 0.0022 & -182.43 & 0.0000 \\
R:LNL & -0.3503 & 0.0022 & -160.25 & 0.0000 \\
G:LNL & -0.3542 & 0.0022 & -162.04 & 0.0000 \\
MR:LNL & -0.4015 & 0.0022 & -183.69 & 0.0000 \\
MG:LNL & -0.4016 & 0.0022 & -183.74 & 0.0000 \\
M:NL & -0.0489 & 0.0022 & -22.37 & 0.0000 \\
R:NL & 0.0005 & 0.0022 & 0.22 & 0.8240 \\
G:NL & -0.0014 & 0.0022 & -0.65 & 0.5173 \\
MR:NL & -0.0487 & 0.0022 & -22.30 & 0.0000 \\
MG:NL & -0.0488 & 0.0022 & -22.34 & 0.0000 \\
\hline \end{tabular}
\caption{OLS output for squared error regressed on design $\times$ response model in the simulation of Section~\ref{sec:simulations} for $n = 200$.} \label{fig:ols_n_200} \end{table}
\end{document}
\begin{document}
\title{Hessian estimates for the sigma-2 equation in dimension four}
\author{Ravi Shankar and Yu Yuan}
\date{}
\maketitle
\begin{abstract} We derive a priori interior Hessian estimates and interior regularity for the $\sigma_2$ equation in dimension four. Our method also provides a new proof of the corresponding three dimensional results, as well as a Hessian estimate for smooth solutions satisfying a dynamic semi-convexity condition in higher dimensions $n\ge 5$. \end{abstract}
\section{Introduction}
\footnotetext[1]{\today}
In this article, we resolve the question of the interior a priori Hessian estimate and regularity for the $\sigma_{2}$ equation \begin{equation} \sigma_{2}\left( D^{2}u\right) =\sum_{1\leq i<j\leq n}\lambda_{i}\lambda _{j}=1 \label{s2} \end{equation} in dimension $n=4,$ where the $\lambda_{i}$'s are the eigenvalues of the Hessian $D^{2}u.$
\begin{thm} \label{thm:s2} Let $u$ be a smooth solution to (\ref{s2}) in the positive branch $\bigtriangleup u>0$ on $B_{1}(0)\subset\mathbb{R}^{4}$. Then $u$ has an implicit Hessian estimate \[
|D^{2}u(0)|\leq C(\left\Vert u\right\Vert _{C^{1}\left( B_{1}\left( 0\right) \right) }) \] with $\left\Vert u\right\Vert _{C^{1}\left( B_{1}\left( 0\right) \right) }=\left\Vert u\right\Vert _{L^{\infty}\left( B_{1}\left( 0\right) \right) }+\left\Vert Du\right\Vert _{L^{\infty}\left( B_{1}\left( 0\right) \right) }.$ \end{thm}
Combining Theorem \ref{thm:s2} with the gradient estimate for $\sigma_{k}$-equations by Trudinger \cite{T2} and also Chou-Wang \cite{CW} from the mid 1990s, we can bound $D^{2}u$ in terms of the solution $u$ in $B_{2}\left( 0\right) $ as \[
|D^{2}u(0)|\leq C(\left\Vert u\right\Vert _{L^{\infty}\left( B_{2}\left( 0\right) \right) }). \]
In higher $n\geq5$ dimensions, our method provides a Hessian estimate for smooth solutions satisfying a semi-convexity type condition with movable lower bound (\ref{dscx}), which is unconditionally valid in four dimensions by \eqref{sharp}.
\begin{thm} \label{thm:n5} Let $u$ be a smooth solution to (\ref{s2}) in the positive branch $\bigtriangleup u>0$ on $B_{1}(0)\subset\mathbb{R}^{n}$ with $n\geq5,$ satisfying a dynamic semi-convex condition \begin{equation} \lambda_{\min}\left( D^{2}u\right) \geq-c\left( n\right) \bigtriangleup u\ \ \ \text{with \ \ }c\left( n\right) =\frac{\sqrt{3n^{2}+1}-n+1}{2n}. \label{dscx} \end{equation} Then $u$ has an implicit Hessian estimate \[
|D^{2}u(0)|\leq C(n,\left\Vert u\right\Vert _{L^{\infty}\left( B_{1}\left( 0\right) \right) }). \]
\end{thm}
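The threshold $c(n)$ in the dynamic semi-convexity condition can be sanity-checked numerically (a verification sketch of ours, not part of the paper): at $n=4$ it equals $1/2=(n-2)/n$, the unconditional bound furnished by the sharp eigenvalue estimate, while for $n\ge 5$ it is strictly smaller than $(n-2)/n$, so the condition is a genuine extra assumption in higher dimensions.

```python
from math import sqrt

def c(n):
    # threshold in the dynamic semi-convexity condition (dscx)
    return (sqrt(3 * n**2 + 1) - n + 1) / (2 * n)

# at n = 4: c(4) = (sqrt(49) - 3) / 8 = 1/2 = (n - 2)/n,
# so (dscx) holds automatically in four dimensions
print(c(4))  # prints 0.5

# for n >= 5, c(n) < (n - 2)/n: (dscx) genuinely restricts
restrictive = [c(n) < (n - 2) / n for n in range(5, 50)]
```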
One application of the above estimates is the interior regularity (in fact, analyticity) of $C^{0}$ viscosity solutions to (\ref{s2}) in four dimensions, when the estimates are combined with the solvability of the Dirichlet problem with $C^{4}$ boundary data by Caffarelli-Nirenberg-Spruck \cite{CNS} and also Trudinger \cite{T1}. In particular, the solutions of the Dirichlet problem with $C^{0}$ boundary data for the four dimensional equation (\ref{s2}), in both the positive branch $\bigtriangleup u>0$ and the negative branch $\bigtriangleup u<0$, enjoy interior regularity.
Another consequence is a rigidity result for entire solutions to (\ref{s2}) of both branches with quadratic growth: all such solutions must be quadratic, provided that the smooth solutions in dimension $n\geq5$ also satisfy the dynamic semi-convex assumption (\ref{dscx}), or the symmetric condition $\lambda_{\max}\left( D^{2}u\right) \leq -c\left( n\right) \bigtriangleup u$ in the negative branch case. Warren's rare saddle entire solution to (\ref{s2}) shows that a certain convexity condition is necessary \cite{W}. Other earlier related results can be found in \cite{BCGJ} \cite{Y1} \cite{CY} \cite{CX} \cite{SY3}.
In two dimensions, an interior Hessian bound for \eqref{s2}, which is then the Monge-Amp\`{e}re equation $\sigma_{2}=\det D^{2}u=1$, was found via isothermal coordinates, readily available under the Legendre-Lewy transform, by Heinz \cite{H} in the 1950s. The dimension three case was settled via the minimal surface structure of equation (\ref{s2}) and a full strength Jacobi inequality by Warren-Yuan in the late 2000s \cite{WY}. In higher dimensions $n\geq4$, any effective geometric structure of (\ref{s2}) appears hidden, although the level set of the non-uniformly elliptic equation (\ref{s2}) is convex.
In recent years, Hessian estimates for convex smooth solutions of (\ref{s2}) have been obtained via a pointwise approach by Guan and Qiu \cite{GQ}. Hessian estimates for almost convex smooth solutions of (\ref{s2}) have been derived by a compactness argument in \cite{MSY}, and for semi-convex smooth solutions in \cite{SY1} by an integral method. However, we cannot extend these a priori estimates, including Theorem 1.2, to interior regularity statements for viscosity solutions of (\ref{s2}), because the smooth approximations may not preserve the convexity or semi-convexity constraints. Taking advantage of an improved regularity property for the equation satisfied by the Legendre-Lewy transform of almost convex viscosity solutions, interior regularity was reached in \cite{SY2}.
For the higher order equations $\sigma_{k}\left( D^{2}u\right) =1$ with $k\geq3$, which coincide with the Monge-Amp\`{e}re equation in dimension $n=k$, there are the famous singular solutions constructed by Pogorelov \cite{P} in the 1970s, later generalized in \cite{U1}. Worse singular solutions have been produced in recent years. Hessian estimates for solutions with certain strict $k$-convexity constraints to Monge-Amp\`{e}re equations and $\sigma_{k}$ equations ($k\geq2$) were derived by Pogorelov \cite{P} and Chou-Wang \cite{CW} respectively, using Pogorelov's pointwise technique. Urbas \cite{U2} \cite{U3} obtained (pointwise) Hessian estimates in terms of certain integrals of the Hessian for $\sigma_{k}$ equations. Recently, Mooney \cite{M} derived the strict 2-convexity of convex viscosity solutions to (\ref{s2}) and, consequently, relying on the solvability \cite{CNS} and a priori estimates \cite{CW}, gave a different proof of the interior regularity of those convex viscosity solutions.
Our proof of Theorem \ref{thm:s2} synthesizes the ideas of Qiu \cite{Q} with Chaudhuri-Trudinger \cite{CT} and Savin \cite{S}. Qiu showed that in dimension three, where a Jacobi inequality is valid (see Section \ref{sec:Jac} for definitions of the operators) $$ F_{ij}\partial_{ij}\ln\Delta u\ge \varepsilon F_{ij}(\ln\Delta u)_i(\ln\Delta u)_j, $$ a maximum principle argument leads to a doubling, or ``three-sphere'' inequality: $$
\sup_{B_1(0)}\Delta u\le C(n,\|u\|_{C^1(B_2(0))})\sup_{B_{1/2}(0)}\Delta u. $$ (A lower bound condition on $\sigma_3(D^2u)$, satisfied by convex solutions of (\ref{s2}) in general dimensions, permitted Guan-Qiu to exclude the inner ``sphere'' term $B_{1/2}(0)$ in the above inequality for their eventual Hessian estimates earlier in \cite{GQ}.) Iterating this ``three-sphere'' inequality shows that the Hessian is controlled by its maximum on any arbitrarily small ball. To put it another way, any blowup point propagates to a dense subset of $B_1(0)$. To rule out Weierstrass-type nowhere twice differentiable counterexamples, it suffices to find a single smooth point; Savin's small perturbation theorem \cite{S} then guarantees a smooth ball around any smooth point. In fact, partial regularity, such as Alexandrov's theorem, more than suffices. Chaudhuri and Trudinger \cite{CT} showed that $k$-convex functions satisfy an Alexandrov theorem if $k>n/2$. This gives a new compactness proof of the Hessian estimate and regularity for \eqref{s2} in dimension three without minimal surface arguments, and also of the Hessian estimate for \eqref{s2} in general dimensions under the semi-convexity assumption of \cite{SY1}, where a Jacobi inequality and Alexandrov twice differentiability are available.
In higher dimensions $n\ge 4$, there are three new difficulties. First, although the H\"older estimate for $k$-convex \textit{functions} may not be valid for $k\le n/2$, we can replace it with the interior gradient estimate for 2-convex \textit{solutions} in \cite{T2} \cite{CW}; this gives an Alexandrov theorem. The main hurdle is the Jacobi inequality, which fails in four and higher dimensions without a priori control on the minimum eigenvalue $\lambda_{min}$ of $D^2u$; the Jacobi inequality was discovered in \cite{SY1, SY3} for semi-convex solutions. Instead, we can only establish an ``almost-Jacobi inequality'', where $\varepsilon\sim 1+2\lambda_{min}/\Delta u$ in four dimensions. This choice of $\varepsilon$ degenerates to zero for the extreme configurations $(\lambda_1,\lambda_2,\lambda_{3},\lambda_4)=(a,a,a,-a+O(1/a))$. At first glance, $\varepsilon\to 0$ means Qiu's maximum principle argument fails: the positive term $\varepsilon|\nabla_F b|^2$ can no longer absorb the bad terms. On the other hand, for the extreme configurations, the equation becomes conformally uniformly elliptic. The usually defective lower order term $\Delta_F|Du|^2\gtrsim \sigma_1\lambda_{min}^2$ is then large enough to take control of the bad terms. The dynamic semi-convexity assumption \eqref{dscx} allows the outlined four dimensional arguments to continue working in higher dimensions $n\ge5$.
Using similar methods, a new proof of regularity for strictly convex solutions to the Monge-Amp\`ere equation is given in \cite{SY4}. Extrinsic curvature estimates for the scalar curvature equation in dimension four are obtained in \cite{Sh}, extending the dimension three result of Qiu \cite{Q1}. In forthcoming work, we will investigate the $\sigma_2$ Schouten tensor equation of conformal geometry with negative scalar curvature, as well as the improvement of the $W^{2,6+\delta}$ to $C^{1,1}$ estimate in \cite{D} to a $W^{2,6}$ to $C^{1,1}$ estimate.
In still higher dimensions $n\ge 5$, we are not even able to establish that $\ln\Delta u$ is sub-harmonic, $\varepsilon\ge 0$, without a priori conditions on the Hessian. The problem of regularity for such solutions also remains. Combining the Alexandrov theorem with the small perturbation theorem \cite[Theorem 1.3]{S} only shows that the singular set is closed with Lebesgue measure zero.
\section{Almost Jacobi inequality} \label{sec:Jac}
In \cite{SY3}, we established a Jacobi inequality for $b=\ln(\Delta u+C(n,K))$ under the semi-convexity assumption $\lambda_{min}(D^2u)\ge -K$, namely the quantitative subsolution inequality $$
\Delta_Fb:=F_{ij}\partial_{ij}b\ge \varepsilon\,F_{ij}b_ib_j=: \varepsilon|\nabla_Fb|^2, $$ where $\varepsilon=1/3$, and for the sigma-2 equation $F(D^2u)=\sigma_2(\lambda)=1$, we denote the linearized operator by the positive definite matrix \eqal{ \label{linearized}
(F_{ij})=\Delta u\,I-D^2u=\sqrt{2\sigma_2+|D^2u|^2}\,I-D^2u>0. } In dimension three, the above Jacobi inequality holds for $C(3,K)=0$ unconditionally; see \cite[$\text{p. 3207}$]{SY3} and Remark \ref{rem:3D}. In dimension four, we can establish an inequality for $b=\ln\Delta u$ without any Hessian conditions. The cost is that $\varepsilon$ depends on the Hessian, and $\varepsilon(D^2u)\to 0$ is allowed. We obtain an ``almost'' Jacobi inequality.
\begin{prop} \label{prop:Jac} Let $u$ be a smooth solution to $\sigma_2(\lambda)=1$, and $b=\ln\Delta u$. In dimension $n=4$, we have \eqal{ \label{aJac}
\Delta_Fb\ge\varepsilon\,|\nabla_Fb|^2, } where $$ \varepsilon=\frac{2}{9}\left(\frac{1}{2}+\frac{\lambda_{min}}{\Delta u}\right)> 0. $$ In higher dimensions $n\ge 5$, \eqref{aJac} holds for $$ \varepsilon= \frac{\sqrt{3n^2+1}-(n+1)}{3(n-1)}\left(\frac{\sqrt{3n^2+1}-(n-1)}{2n}+\frac{\lambda_{min}}{\Delta u}\right) $$ under the condition $$ \frac{\lambda_{min}}{\Delta u}\ge -\frac{\sqrt{3n^2+1}-(n-1)}{2n}. $$ Here, $\lambda_{min}$ is the minimum eigenvalue of $D^2u$. \end{prop}
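The two formulas for $\varepsilon$ in Proposition \ref{prop:Jac} are consistent: substituting $n=4$ into the general expression recovers the four dimensional one. A small numeric check (ours, not part of the paper; the variable $r$ stands for $\lambda_{min}/\Delta u$):

```python
from math import sqrt

def eps_general(n, r):
    # general-dimension epsilon; r plays the role of lambda_min / Delta u
    s = sqrt(3 * n**2 + 1)
    return (s - (n + 1)) / (3 * (n - 1)) * ((s - (n - 1)) / (2 * n) + r)

def eps_four(r):
    # the four dimensional formula
    return (2 / 9) * (1 / 2 + r)

agree = [abs(eps_general(4, r) - eps_four(r)) < 1e-12 for r in (-0.4, 0.0, 0.25)]
```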
An important ingredient for Proposition \ref{prop:Jac} is the following sharp control on the minimum eigenvalue. \begin{lem} \label{lem:sharp} Let $\lambda=(\lambda_1,\dots,\lambda_n)$ solve $\sigma_2(\lambda)=1$ with $\lambda_1+\cdots+\lambda_n>0$ and $\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n$. Then the following bound holds for $n>2$ and is sharp: \eqal{ \label{sharp}
\sigma_1(\lambda)>\frac{n}{n-2}|\lambda_{n}|. } \begin{comment} More generally, let $\lambda'=(\lambda_1,\dots,\lambda_{n-1},0)$, $\tau=(1,\dots,1,0)/\sqrt {n-1}$ and $\lambda'^\bot=\lambda'-(\lambda'\cdot\tau)\tau$ be its traceless part. Then \eqal{ \label{sharper}
\frac{|\lambda'^\bot|^2}{\sigma_1(\lambda)^2}\le 2\left(\frac{n-2}{n}\right)^2 \left(\frac{\sigma_1(\lambda)}{|\lambda_n|}-\frac{n}{n-2}\right). } \end{comment} \end{lem}
\begin{proof} The sharpness follows, letting $a\rightarrow\infty$, from the configurations \eqal{ \label{config} \lambda=\left(a,a,\dots,a,-\frac{(n-2)}{2}a+\frac{1}{(n-1)a}\right). } Next, if $\lambda_n\ge 0$, we have $$ \sigma_1=\lambda_1+\cdots+\lambda_n\ge n\lambda_n. $$ For $\lambda_n<0$, we write $\lambda'=(\lambda_1,\dots,\lambda_{n-1})$ and observe that $\lambda_n=(1-\sigma_2(\lambda'))/\sigma_1(\lambda')$. We must have $\sigma_2(\lambda')>1$, as $\sigma_1(\lambda')>0$ from (\ref{linearized}), so we write $$ \frac{\sigma_1(\lambda)}{-\lambda_n}=-1+\frac{\sigma_1(\lambda')^2}{\sigma_2(\lambda')-1}>-1+\frac{\sigma_1(\lambda')^2}{\sigma_2(\lambda')}. $$ We write $\sigma_1(\lambda')^2$ in terms of the traceless part $\lambda'^\bot$ of $\lambda'$ and $\sigma_2(\lambda')$: $$
\sigma_1(\lambda')^2=\frac{n-1}{n-2}(2\sigma_2(\lambda')+|\lambda'^\bot|^2). $$ It then follows \begin{align*} \frac{\sigma_1(\lambda)}{-\lambda_n}&>-1+\frac{2(n-1)}{n-2}= \frac{n}{n-2}. \end{align*} \begin{comment} For $\lambda_n<0$, we write $\lambda'=(\lambda_1,\dots,\lambda_{n-1})$ and observe that $\lambda_n=(1-\sigma_2(\lambda'))/\sigma_1(\lambda')$. We must have $\sigma_2(\lambda')>1$, so we write $$ \frac{\sigma_1(\lambda)}{-\lambda_n}=-1+\frac{\sigma_1(\lambda')^2}{\sigma_2(\lambda')-1}>-1+\frac{\sigma_1(\lambda')^2}{\sigma_2(\lambda')}. $$ We eliminate $\sigma_1(\lambda')^2$ in terms of the traceless part $\lambda'^\bot$: $$
\sigma_1(\lambda')^2=\frac{n-1}{n-2}(2\sigma_2(\lambda')+|\lambda'^\bot|^2). $$ By \eqref{sharp}, $\sigma_1(\lambda)>\frac{n}{2(n-1)}\sigma_1(\lambda')$, so we obtain \begin{align*}
\frac{\sigma_1(\lambda)}{-\lambda_n}&>-1+\frac{2(n-1)}{n-2}+\frac{n-1}{n-2}\frac{|\lambda'^\bot|^2}{\sigma_2(\lambda')}\ge \frac{n}{n-2}+2\left(\frac{n-1}{n-2}\right)^2\frac{|\lambda'^\bot|^2}{\sigma_1(\lambda')^2}\\
&> \frac{n}{n-2}+\frac{n^2}{2(n-2)^2}\frac{|\lambda'^\bot|^2}{\sigma_1(\lambda)^2}. \end{align*}
\end{comment} \end{proof}
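The extremal family \eqref{config} can also be verified numerically. The following floating point spot check (ours, not part of the paper) confirms that the family satisfies $\sigma_2(\lambda)=1$, obeys the strict bound \eqref{sharp}, and saturates it as $a\to\infty$:

```python
from itertools import combinations
from math import isclose

def sigma1(lam):
    return sum(lam)

def sigma2(lam):
    return sum(a * b for a, b in combinations(lam, 2))

def config(n, a):
    # the configurations from the proof of the lemma
    return [a] * (n - 1) + [-(n - 2) / 2 * a + 1 / ((n - 1) * a)]

for n in (3, 4, 5, 10):
    for a in (1.0, 10.0, 100.0):
        lam = config(n, a)
        assert isclose(sigma2(lam), 1.0, rel_tol=1e-6)     # on the level set
        assert sigma1(lam) > n / (n - 2) * abs(lam[-1])    # the strict bound
    # sharpness: the ratio tends to n/(n-2) for large a
    ratio = sigma1(config(n, 1e6)) / abs(config(n, 1e6)[-1])
    assert isclose(ratio, n / (n - 2), rel_tol=1e-6)
```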
As a consequence, we obtain the following quantitative ellipticity for equation (\ref{s2}).
\begin{cor} \label{cor:ellipticity} Let $\lambda=(\lambda_1,\dots,\lambda_n)$ solve $f(\lambda)=\sigma_2(\lambda)=1$, with $\lambda_1+\cdots+\lambda_n>0$. For $\lambda_1\ge \lambda_2\ge\cdots\ge \lambda_n$ and $f_i=\partial f/\partial\lambda_i$, we have \eqal{ \label{ellipticity} \frac{1}{\sigma_1}&\le f_1\le \left(\frac{n-1}{n}\right)\sigma_1,\\ \left(1-\frac{1}{\sqrt 2}\right)\sigma_1&\le f_i\le 2\left(\frac{n-1}{n}\right)\sigma_1,\qquad i\ge 2. } \end{cor}
\begin{proof} The upper bound for $f_1=\sigma_1-\lambda_1$ comes from the easy bound $n\lambda_1\ge\sigma_1$. The sharp upper bound for $f_n$ follows from \eqref{sharp}: $$ f_i\le f_n=\sigma_1-\lambda_n<\left(1+\frac{n-2}{n}\right)\sigma_1. $$ The $i=1$ lower bound goes as follows: $$
f_1=\sigma_1-\lambda_1=\frac{2+|(0,\lambda_2,\dots,\lambda_n)|^2}{\sigma_1+\lambda_1}>\frac{2}{\sigma_1+\lambda_1}>\frac{1}{\sigma_1}. $$ The $i\ge 2$ lower bounds for $f_i=\sigma_1-\lambda_i$ are true if $\lambda_i\le 0$. For $\lambda_i>0$, $$ f_i=\sigma_1-\lambda_i>\sigma_1-\sqrt{\frac{\lambda_1^2+\cdots+\lambda_i^2}{i}}>\left(1-i^{-1/2}\right)\sigma_1, $$ where we used $$
\sigma_1=\sqrt{2+|\lambda|^2}>\sqrt{\lambda_1^2+\cdots+\lambda_i^2}, $$ in the last inequality. \end{proof} \begin{rem} A sharp form of \eqref{ellipticity} for the $i\ge 2$ lower bounds and rougher upper bounds was first shown in \cite[(16)]{LT}. A rougher form of the lower bounds in \eqref{ellipticity}, enough for our proof of the doubling Proposition \ref{prop:doub}, also follows from \cite[Lemma 3.1]{CW}, \cite[Lemma 2.1]{CY}, and \cite[(2.4)]{SY1}.
\begin{comment}
Let us now derive \eqref{2ndLower}. Recalling from Lemma \ref{lem:sharp} the definitions $\lambda':=(\lambda_1,\dots,\lambda_{n-1},0)$, $\tau=(1,\dots,1,0)/\sqrt{n-1}$, and $\lambda'^\bot=\lambda'-(\lambda'\cdot\tau)\tau$, we write
\begin{align*}
\lambda_{n-1}&>\frac{\sigma_1(\lambda')}{n-1}-\frac{1}{\sqrt{n-1}}|\lambda'^\bot|\ge \frac{\sigma_1(\lambda)-\lambda_n}{n-1}-\frac{\sqrt{2}(n-2)}{n\sqrt{n-1}}\sigma_1(\lambda)\left(\frac{\sigma_1}{|\lambda_n|}-\frac{n}{n-2}\right)^{1/2}\\
&>\frac{\sigma_1(\lambda)}{n}\left[\frac{2}{n-1}-\frac{\sqrt 2(n-2)}{\sqrt{n-1}}\left(\frac{\sigma_1}{|\lambda_n|}-\frac{n}{n-2}\right)^{1/2}\right]. \end{align*}
\end{comment} \end{rem}
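The bounds of Corollary \ref{cor:ellipticity} can likewise be spot-checked in floating point on the extremal family from Lemma \ref{lem:sharp} (a verification sketch of ours, not part of the paper; `checks` is our hypothetical helper name):

```python
from math import sqrt

def checks(lam):
    # verify the corollary's bounds for one spectrum lam with sigma_2 = 1
    lam = sorted(lam, reverse=True)
    n = len(lam)
    s1 = sum(lam)                    # sigma_1 = Delta u > 0
    f = [s1 - l for l in lam]        # f_i = sigma_1 - lambda_i
    assert 1 / s1 <= f[0] <= (n - 1) / n * s1
    for fi in f[1:]:
        assert (1 - 1 / sqrt(2)) * s1 <= fi <= 2 * (n - 1) / n * s1

# spot checks on the extremal family of the lemma (sigma_2 = 1 by construction)
for n in (3, 4, 5, 8):
    for a in (0.8, 1.0, 3.0, 10.0):
        lam = [a] * (n - 1) + [-(n - 2) / 2 * a + 1 / ((n - 1) * a)]
        checks(lam)
```

Note that the upper bound on $f_n$ is nearly attained for large $a$, consistent with the sharpness of \eqref{sharp}.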
\begin{proof}[Proof of Proposition \ref{prop:Jac}] Step 1. Expression of the Jacobi inequality. After a rotation at $x=p$, we assume that $D^2u(p)$ is diagonal. Then $(F_{ij})=\text{diag}(f_i)$, where $f(\lambda)=\sigma_2(\lambda)$. The following calculation was performed in \cite[$\text{p. 4}$]{SY3} for $b=\ln(\Delta u+J)$ for some constant $J$. We repeat it below with $J=0$, for completeness. We start with the following formulas at $x=p$: \begin{align} \label{gradb}
&|\nabla_Fb|^2=\sum_{i=1}^nf_i\frac{(\Delta u_i)^2}{(\Delta u)^2},\\ \label{Deltab}
&\Delta_Fb=\sum_{i=1}^nf_i\left[\frac{\partial_{ii}\Delta u}{\Delta u}-\frac{(\partial_i\Delta u)^2}{(\Delta u)^2}\right]. \end{align} Next, we replace the fourth order derivatives $\partial_{ii}\Delta u=\sum_{k=1}^n\partial_{ii}u_{kk}$ in \eqref{Deltab} by third derivatives. By differentiating \eqref{s2}, we have \eqal{ \label{Ds2} \Delta_FDu=(F_{ij}u_{ijk})_{k=1}^n=0. } Differentiating \eqref{Ds2} and using \eqref{linearized}, we obtain at $x=p$, \begin{align*} \sum_{i=1}^nf_i\partial_{ii}\Delta u&=\sum_{k=1}^n\Delta_Fu_{kk}=\sum_{i,j,k=1}^nF_{ij}\partial_{ij}u_{kk}=-\sum_{i,j,k=1}^n\partial_kF_{ij}\partial_{ij}u_k\\ &=\sum_{i,j,k=1}^n-(\Delta u_k\delta_{ij}-u_{kij})u_{kij}=\sum_{i,j,k=1}^nu_{ijk}^2-\sum_{k=1}^n(\Delta u_k)^2. \end{align*} Substituting this identity into \eqref{Deltab} and regrouping terms of the forms $u_{\clubsuit\heartsuit\spadesuit}^2$, $u_{\clubsuit\clubsuit\heartsuit}^2,$ $u_{\heartsuit\heartsuit\heartsuit}^2$, and $(\Delta u_\clubsuit)^2$, we obtain $$ \Delta_Fb=\frac{1}{\sigma_1}\left\{6\sum_{i<j<k}u_{ijk}^2+\left[3\sum_{i\neq j}u_{jji}^2+\sum_iu_{iii}^2-\sum_i\left(1+\frac{f_i}{\sigma_1}\right) (\Delta u_i)^2\right]\right\}. $$ Accounting for \eqref{gradb}, we obtain the following quadratic: $$
(\Delta_Fb-\varepsilon|\nabla_Fb|^2)\sigma_1\ge 3\sum_{i\neq j}u_{jji}^2+\sum_{i}u_{iii}^2-\sum_i(1+\delta f_i/\sigma_1)(\Delta u_i)^2, $$ where $\delta:=1+\varepsilon$. As in \cite{SY3}, we fix $i$ and denote by $t=(u_{11i},\dots,u_{nni})$ and by $e_i$ the $i$-th basis vector of $\mbb R^n$. Then we recall equation (2.9) from \cite{SY3} for the $i$-th term above: $$
Q:=3|t|^2-2\langle e_i,t\rangle^2-(1+\delta f_i/\sigma_1)\langle(1,\dots,1),t\rangle^2. $$ The objective is to show that $Q\ge 0$. The idea in \cite{SY3} was to reduce the quadratic form to a two dimensional subspace. In that paper, $Q\ge 0$ was shown under a semi-convexity assumption on the Hessian. Here, we show how to remove this assumption in dimension four. For completeness, we repeat that reduction below.
Step 2. Anisotropic projection. Equation \eqref{Ds2} at $x=p$ shows that $\langle Df,t\rangle=0$, so $Q$ vanishes along a subspace. We can thus replace the vectors $e_i$ and $(1,\dots,1)$ in $Q$ with their projections: $$
Q=3|t|^2-2\langle E,t\rangle^2-(1+\delta f_i/\sigma_1)\langle L,t\rangle ^2, $$ where $$
E=e_i-\frac{\langle e_i,Df\rangle}{|Df|^2} Df,\qquad L=(1,\dots,1)-\frac{\langle (1,\dots,1),Df\rangle}{|Df|^2} Df. $$ Their rotational invariants can be calculated as in \cite[equation (2.10)]{SY3}: \eqal{ \label{invariants}
|E|^2=1-\frac{f_i^2}{|Df|^2},\qquad |L|^2=1-\frac{2(n-1)}{|Df|^2},\qquad E\cdot L=1-\frac{(n-1)\sigma_1 f_i}{|Df|^2}. }
The quadratic is mostly isotropic: if $t$ is orthogonal to both $E$ and $L$, then $Q=3|t|^2\ge 0$, so it suffices to assume that $t$ lies in the $\{E,L\}$ subspace. The matrix associated to the quadratic form is $$ Q=3I-2E\otimes E-\eta L\otimes L, $$ where $\eta=1+\delta f_i/\sigma_1=1+(1+\varepsilon)f_i/\sigma_1$. Since $Q$ is a quadratic form, its matrix is symmetric and has real eigenvalues. In the non-orthogonal basis $\{E,L\}$, the eigenvector equation is $$ \begin{pmatrix}
3-2|E|^2&-2E\cdot L\\
-\eta L\cdot E&3-\eta|L|^2 \end{pmatrix} \begin{pmatrix} \alpha\\ \beta \end{pmatrix} = \xi \begin{pmatrix} \alpha\\ \beta \end{pmatrix}. $$ The real eigenvalues of this matrix have the explicit form $$ \xi=\frac{1}{2}\left(tr\pm\sqrt{tr^2-4det}\right), $$ where the trace and determinant are given by $$
tr=6-2|E|^2-\eta|L|^2,\qquad det=9-6|E|^2-3\eta|L|^2+2\eta\left[|E|^2|L|^2-(E\cdot L)^2\right]. $$ It thus suffices to show that $tr\ge 0$ and $\det\ge 0$.
Step 3. Non-negativity of the trace of the quadratic form. In \cite{SY3}, the trace was shown to be positive; indeed, by \eqref{invariants}, \begin{align*}
tr&=6-2\left(1-\frac{f_i^2}{|Df|^2}\right)-\left(1+\delta\frac{f_i}{\sigma_1}\right)\left(1-\frac{2(n-1)}{|Df|^2}\right)\\ &>3-\delta\frac{f_i}{\sigma_1}=\frac{(3-\delta)\sigma_1+\delta\lambda_i}{\sigma_1}\\ &\ge 3-\delta\left(1+\frac{n-2}{n}\right)\ge 0, \end{align*} for any \eqal{ \label{trace} \delta\le \frac{3n}{2(n-1)}, } using the bound \eqref{sharp} in the case that $\lambda_i<0$.
Step 4. Non-negativity of the determinant of the quadratic form. Our new contribution here is to analyze the determinant in general. Again by \eqref{invariants}, the determinant is \begin{align*}
det&=\frac{6f_i^2}{|Df|^2}-\frac{3\delta f_i}{\sigma_1}+3\left(1+\frac{\delta f_i}{\sigma_1}\right)\boxed{\frac{2(n-1)}{|Df|^2}}\\
&+2\left(1+\frac{\delta f_i}{\sigma_1}\right)\left[\frac{2(n-1)\sigma_1 f_i}{|Df|^2}-\frac{n f_i^2}{|Df|^2}-\boxed{\frac{2(n-1)}{|Df|^2}}\,\,\right]\\
&>-\frac{3\delta f_i}{\sigma_1}+4\left(1+\frac{\delta f_i}{\sigma_1}\right)\frac{(n-1)\sigma_1 f_i}{|Df|^2}+\left[6-2n\left(1+\frac{\delta f_i}{\sigma_1}\right)\right]\frac{f_i^2}{|Df|^2}. \end{align*}
Since $f_i=\sigma_1-\lambda_i$ and $\sigma_1^2=2+|\lambda|^2$, we get $|Df|^2=(n-1)\sigma_1^2-2$, so we obtain an inequality in terms of $y:=f_i/\sigma_1$: \eqal{ \label{det}
det\cdot\frac{|Df|^2}{\sigma_1f_i}&>\boxed{\frac{6\delta}{\sigma_1^2}}-3(n-1)\delta+4(n-1)\left(1+\delta \frac{f_i}{\sigma_1}\right)+\left[6-2n\left(1+\delta\frac{f_i}{\sigma_1}\right)\right]\frac{f_i}{\sigma_1}\\
&>
\left(n-1\right)\left(4-3\delta\right)+\Big[6-2n+4\left(n-1\right)\delta\Big] y-2n\delta y^{2}\\
&=:q_\delta(y). }
\begin{rem} \label{rem:3D} In three dimensions, the almost Jacobi inequality \eqref{aJac} becomes a full strength one $\bigtriangleup_{F}b\geq\frac{1}{3}\left\vert \nabla _{F}b\right\vert ^{2},\ $because in \eqref{det}, $q_{4/3}\left( y\right) =\frac{8}{3}\frac{f_{i}}{\sigma_{1}}\left( 1+3\lambda_{i}/\sigma_{1}\right) >0\ $ by \eqref{sharp}. This was observed in \cite[$\text{p. 3207}$]{SY3}. \end{rem}
We write $q_\delta(y)=q_1(y)+\varepsilon\,r(y)$. The remainder: $$ r(y)=-3(n-1)+4(n-1)y-2ny^2=-3(n-1)+2ny\left(\frac{2(n-1)}{n}-y\right)>-3(n-1), $$ where we used $0<y=f_i/\sigma_1 \le f_n/\sigma_1 < 2(n-1)/n$; see \eqref{ellipticity} in Corollary \ref{cor:ellipticity}. To estimate $q_1(y)$, let us solve $0=q_1(y)=n-1+2(n+1)y-2ny^2$: \eqal{ y_n^\pm:=\frac{n+1\pm\sqrt{1+3n^2}}{2n},\qquad y^+_n\stackrel{n=4}{=}\,\frac{3}{2}. } Then $q_1(y)/(y_n^+-y)=2n(y-y_n^-)$. This linear function is minimized at the endpoint $y=0$, so if $y_n^+-y\ge 0$, we conclude $$ q_\delta(y)\ge -2ny_n^-(y_n^+-y)-3(n-1)\varepsilon\ge -2ny_n^-\left(y_n^+-\frac{f_n}{\sigma_1}\right)-3(n-1)\varepsilon=0, $$ provided \eqal{ \varepsilon&:= -\frac{2ny_n^-}{3(n-1)}\left(y_n^+-\frac{f_n}{\sigma_1}\right)\\ &=\frac{\sqrt{3n^2+1}-(n+1)}{3(n-1)}\left(\frac{\sqrt{3n^2+1}-(n-1)}{2n}+\frac{\lambda_n}{\sigma_1}\right)\\ &\stackrel{n=4}{=}\frac{2}{9}\left(\frac{1}{2}+\frac{\lambda_n}{\sigma_1}\right). } The condition $y_n^+-y=y_n^+-\frac{f_i}{\sigma_1}\ge 0$ for all $i$ is equivalent to dynamic semi-convexity, $$ \frac{\lambda_n}{\sigma_1}\ge -\frac{\sqrt{3n^2+1}-(n-1)}{2n}. $$ If $n=4$, all solutions satisfy this unconditionally, using \eqref{sharp}.
Let us now check that the trace condition \eqref{trace} is also satisfied; it suffices to have $\varepsilon<1/2$. Writing $\varepsilon=c(n)(c_n+\lambda_n/\sigma_1)$, it can be shown that $c(n)$ is an increasing function of $n$ bounded by $ (\sqrt{3}-1)/3<1/4$, and that $c_n$ is a decreasing function bounded by $(\sqrt{13}-1)/4<2/3$. Combined with $\lambda_n/\sigma_1\le 1/n\le 1/2$ (since $n\lambda_n\le\sigma_1$), we find that $\varepsilon<7/24$ for $n\ge 2$.
This completes the proof of Proposition \ref{prop:Jac} in dimension $n=4$ and in higher dimensions $n\ge 5$. \end{proof}
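The numerical claims in the last paragraph of the proof can be confirmed directly (a verification sketch of ours, not part of the paper; `cfac` and `cthr` are our hypothetical names for $c(n)$ and $c_n$):

```python
from math import sqrt

def cfac(n):
    # the increasing factor c(n) in the text
    return (sqrt(3 * n**2 + 1) - (n + 1)) / (3 * (n - 1))

def cthr(n):
    # the decreasing constant c_n in the text
    return (sqrt(3 * n**2 + 1) - (n - 1)) / (2 * n)

ns = range(2, 1000)
increasing = all(cfac(n) < cfac(n + 1) for n in ns)
decreasing = all(cthr(n) > cthr(n + 1) for n in ns)
bounded = all(cfac(n) < (sqrt(3) - 1) / 3 and cthr(n) <= (sqrt(13) - 1) / 4
              for n in ns)
# epsilon = c(n) (c_n + lambda_n/sigma_1) < 7/24, using lambda_n/sigma_1 <= 1/n
eps_small = all(cfac(n) * (cthr(n) + 1 / n) < 7 / 24 for n in ns)
```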
\section{The doubling inequality}
We now use the almost-Jacobi inequality in Proposition \ref{prop:Jac} to show an a priori doubling inequality for the Hessian.
\begin{prop} \label{prop:doub} Let $u$ be a smooth solution of sigma-2 equation \eqref{s2} on $B_4(0)\subset \mbb R^n$. If $n=4$, then the following inequality is valid: $$
\sup_{B_{2}(0)}\Delta u\le C(n)\exp\left(C(n)\|u\|_{C^1(B_3(0))}^2\right)\sup_{B_{1}(0)}\Delta u. $$ If $n\ge 5$, the inequality remains true, provided that on $B_3(0)$ the following semi-convexity type condition holds: \eqal{ \label{lower} \frac{\lambda_{min}(D^2u)}{\Delta u}\ge -c_n,\qquad c_n:=\frac{\sqrt{3n^2+1}-n+1}{2n}. } \end{prop}
\begin{proof}
The following test function on $B_3(0)$ is taken from \cite[Theorem 4]{GQ} and \cite[Lemma 4]{Q}: \eqal{ \label{Pdef}
P_{\alpha\beta\gamma}:=2\ln\rho(x)+\alpha (x\cdot Du-u)+\beta|Du|^2/2+\ln \max(\bar b,\gamma^{-1}). }
Here, $\rho(x)=3^2-|x|^2$, and $\bar b=b-\max_{B_1(0)}b$ for $b=\ln\Delta u$. We also define $\Gamma:=4+\|u\|_{L^\infty(B_3(0))}+\|Du\|_{L^\infty(B_3(0))}$ to gauge the lower order terms, and denote by $C=C(n)$ a dimensional constant which changes line by line and will be fixed in the end. A small positive dimensional constant $\gamma$, and smaller positive constants $\alpha, \beta$ depending on $\gamma$ and $\Gamma$, will be chosen later. We also adopt the summation convention over repeated indices for simplicity of notation, which was not possible in Section \ref{sec:Jac}.
Suppose the maximum of $P_{\alpha\beta\gamma}$ occurs at $x^*\in B_3(0)$. If $\bar b(x^*)\le \gamma^{-1}$, then we conclude that for $C$ large enough, \eqal{ \label{Pmax1} \max_{B_2(0)}P_{\alpha\beta\gamma}\le C+3\alpha\Gamma+\frac{1}{2}\beta\Gamma^2+\ln\gamma^{-1}. }
So we suppose that $\bar b(x^*)>\gamma^{-1}$. If $|x^*|\le 1$, then again we obtain \eqref{Pmax1}, so we also assume that $1<|x^*|<3$.
After a rotation about $x=0$, we assume that $D^2u(x^*)$ is diagonal, $u_{ii}=\lambda_i$, with $\lambda_1\ge \lambda_2\ge\cdots\ge \lambda_n$. At the maximum point $x^*$, we have $DP_{\alpha\beta\gamma}=0$, \eqal{ \label{max1} -\frac{\bar b_i}{\bar b}&=2\frac{\rho_i}{\rho}+\alpha x_ku_{ik}+\beta u_ku_{ik}\\ &=2\frac{\rho_i}{\rho}+\alpha x_i\lambda_i+\beta u_i\lambda_i, } and for $0\ge D^2P_{\alpha\beta\gamma}= (\partial_{ij}P_{\alpha\beta\gamma})$, we get \eqal{ 0\ge\Big(&-\frac{4\delta_{ij}}{\rho}-2\frac{\rho_i\rho_j}{\rho^2}+\alpha ( x_ku_{ijk}+ u_{ij})+\beta(u_ku_{ijk}+u_{ik}u_{jk})+\frac{\bar b_{ij}}{\bar b}-\frac{\bar b_i\bar b_j}{\bar b^2}\Big) } Contracting with $F_{ij}=\partial \sigma_2/\partial u_{ij}$ and using $$ F_{ij}u_{ijk}=0,\qquad F_{ij}u_{ij}=2\sigma_2=2,\qquad F_{ij}\delta_{ij}=(n-1)\sigma_1, $$ as well as diagonality at $x^*$, $(F_{ij})=(f_{i}\delta_{ij})$ for $f(\lambda)=\sigma_2(\lambda)$, we obtain at maximum point $x^*$, $$ 0\ge F_{ij}\partial_{ij}P_{\alpha\beta\gamma}> -4(n-1)\frac{\sigma_1}{\rho}-2\frac{f_i\rho_i^2}{\rho^2}+\beta f_i\lambda_i^2+\frac{f_i\bar b_{ii}}{\bar b}-\frac{f_i\bar b_{i}^2}{\bar b^2}. $$ Under the assumption that $n=3,4$, or instead that $n\ge 5$ with Hessian constraint \eqref{lower}, almost-Jacobi inequality Proposition \ref{prop:Jac} is valid, and we get for larger $C$, \eqal{ \label{maxP} 0\ge-C\frac{\sigma_1}{\rho}-2\frac{f_i\rho_i^2}{\rho^2}+\beta f_i\lambda_i^2+\left(c_n+\frac{\lambda_n}{\Delta u}\right)\frac{f_i\bar b_{i}^2}{\bar b}-\frac{f_i\bar b_{i}^2}{\bar b^2}. } If the nonnegative coefficient of $f_i\bar b_i^2/\bar b$ is positive, we can proceed as in Qiu's proof. In the alternative case, we must use the $\beta$ term.
We start with the latter case. Note that by \eqref{sharp} in Lemma \ref{lem:sharp}, condition \eqref{lower} $\lambda_n /\Delta u > -1/2 = - c_n$ is automatically satisfied for $n=4$, and $\lambda_n /\Delta u > -1/3 > -c_n/2$ for $n=3$.
\textbf{CASE} $-c_n\le \lambda_n/\Delta u\le -c_n/2$: It follows from \eqref{ellipticity} that $f_n\lambda_n^2\ge c(n)\sigma_1^3$. For larger $C$, $$
0\ge -C\frac{\sigma_1}{\rho^2}+\beta \sigma_1^3-Cf_i\frac{\bar b_i^2}{\bar b^2}. $$ Using \eqref{max1} and ellipticity \eqref{ellipticity}, we obtain $$ \beta \sigma_1^3\le C\frac{\sigma_1}{\rho^2}+C(\alpha^2+\beta^2\Gamma^2)\sigma_1^3. $$ If the small parameters satisfy \eqal{ \label{small1} \alpha^2\le \beta/(3C),\qquad \beta\le 1/(3C\Gamma^2), }
we obtain $\rho^2\sigma_1^2\le C/\beta$. Since $\sigma_1=\sqrt{2+|\lambda|^2}>\sqrt 2$, we have $\sigma_1^2>2\ln\sigma_1$, and we conclude from \eqref{Pdef} and \eqref{small1} that \eqal{ \label{Pmax2} P_{\alpha\beta\gamma}\le C+\ln\beta^{-1}. }
We next show that Qiu's argument goes through in the case that the ``almost'' Jacobi inequality becomes a regular Jacobi inequality.
\textbf{CASE $\lambda_n/\Delta u\ge -c_n/2$.} It follows that, after enlarging $C$, \eqref{maxP} can be reduced to $$ 0\ge -C\frac{\sigma_1}{\rho}-C\frac{f_i\rho_i^2}{\rho^2}+\beta f_i\lambda_i^2+(\bar b-C)f_{i}\frac{\bar b_i^2}{\bar b^2}. $$ Using $\bar b(x^*)\ge \frac{1}{2}\bar b(x^*)+\frac{1}{2}\gamma^{-1}$, we assume that $\gamma$ satisfies \eqal{ \label{small1a} \frac{1}{2}\gamma^{-1}\ge C, } so after enlarging $C$ again, we can further reduce it to \eqal{ \label{maxP1} 0\ge -C\frac{\sigma_1}{\rho}-C\frac{f_i\rho_i^2}{\rho^2}+\beta f_i\lambda_i^2+\bar bf_{i}\frac{\bar b_i^2}{\bar b^2}. }
\textbf{SUBCASE} $1<|x^*|<3$ and $x_1^2 > 1/n$: If the small parameters satisfy the condition \eqal{ \label{small2} \beta\le \alpha/(2n\Gamma),\qquad } we then obtain from \eqref{max1}, $$ \frac{\bar b_1^2}{\bar b^2}\ge \frac{1}{2}(\alpha/n-\beta\Gamma)^2\lambda_1^2-\frac{C}{\rho^2}\ge\frac{1}{8n^2}\alpha^2\lambda_1^2-\frac{C}{\rho^2}. $$ We assume that this gives a lower bound, or that $C/\rho^2\le \alpha^2\lambda_1^2/(16n^2)$: \eqal{ \label{B1} \frac{\bar b_1^2}{\bar b^2}\ge \frac{1}{16n^2}\alpha^2\lambda_1^2. } For if not, we get $\rho^2\lambda_1^2\le C/\alpha^2$. Since $\lambda_{1}\ge\sigma_1/n$, we can get $\rho^2\ln\sigma_1\le C/\alpha^2$. Using \eqref{Pdef} and \eqref{small1}, we would obtain \eqal{ \label{Pmax3} P_{\alpha\beta\gamma}\le C+2\ln\alpha^{-1}. } It follows then, from \eqref{B1} and \eqref{ellipticity}, that \eqref{maxP1} can be simplified to $$ 0\ge -C\frac{\sigma_1}{\rho^2}+\bar b\,f_1(\alpha^2 \lambda_1^2). $$ From \eqref{ellipticity}, there holds $f_1\lambda_1^2\ge \sigma_1/n^2$, so we conclude $\rho^2\bar b\le C/\alpha^2$. By \eqref{Pdef} and \eqref{small1}, we conclude a similar bound \eqref{Pmax3}: $$ P_{\alpha\beta\gamma}\le C+2\ln\alpha^{-1}. $$
\textbf{SUBCASE} $1<|x^*|<3$ and $x_k^2>1/n$ for some $k\ge 2$: Let us first note that $\sigma_1/\rho\le C f_k\rho_k^2/\rho^2$, by \eqref{ellipticity}. We apply $\bar b>\gamma^{-1}$ to \eqref{maxP1}: $$ 0\ge -C\frac{f_i\rho_i^2}{\rho^2}+\beta f_i\lambda_i^2+\gamma^{-1}f_{i}\frac{\bar b_i^2}{\bar b^2}. $$ Using the $DP=0$ equation \eqref{max1} and enlarging $C$, we obtain
\eqal{ \label{maxP2} 0&\ge -C\frac{f_i\rho_i^2}{\rho^2}+\beta f_i\lambda_i^2+\gamma^{-1}f_i\frac{\rho_i^2}{\rho^2}-C\gamma^{-1}\alpha^2f_ix_i^2\lambda_i^2-C\gamma^{-1}\Gamma^2\beta^2 f_i\lambda_i^2\\ &\ge \frac{f_i\rho_i^2}{\rho^2}(\gamma^{-1}-C)+\Gamma^{-2}f_i\lambda_i^2\Big((\Gamma^2\beta)-C\gamma^{-1}(\Gamma\alpha)^2-C\gamma^{-1}(\Gamma^2\beta)^2\Big). } The first term is handled if $\gamma^{-1}$ is large enough: $$ \gamma^{-1}\ge 2C. $$ We choose $\alpha,\beta$ as follows: \eqal{ \label{small3} \alpha=\gamma^4/\Gamma,\qquad \beta=\gamma^{6}/\Gamma^2. } Let us check that the previous $\alpha,\beta$ conditions \eqref{small1} and \eqref{small2} are satisfied for any $\gamma^{-1}\ge 2C$, if $C$ is large enough: $$ \frac{\alpha^2}{\beta}=\gamma^{2}\le \frac{1}{4C^2}<\frac{1}{3C},\qquad \frac{\Gamma\beta}{\alpha}=\gamma^2\le \frac{1}{4C^2}<\frac{1}{2n}. $$ Finally, the coefficient of $\Gamma^{-2}f_i\lambda_i^2$ in \eqref{maxP2} is $$ \gamma^6-C\gamma^7-C\gamma^{11}=\gamma^6(1-C\gamma-C\gamma^5)\ge \gamma^6\left(1-\frac{1}{2}-\frac{\gamma^4}{2}\right)>0. $$ Overall, we obtain a contradiction to \eqref{maxP2}.
We conclude that for all large $\gamma^{-1}\ge 2C$ and $\alpha,\beta$ satisfying \eqref{small3}, the maximum of $P_{\alpha\beta\gamma}$ obeys the largest of the $P$ bounds \eqref{Pmax1}, \eqref{Pmax2}, and \eqref{Pmax3}: $$ \max_{B_2(0)}P_{\alpha\beta\gamma}\le C+\ln\max(\gamma^{-1},\beta^{-1},\alpha^{-2})=C+\ln(\Gamma^2\gamma^{-8}). $$ We now choose large $\gamma^{-1}=2C=C(n)$. By \eqref{Pdef}, we obtain the doubling estimate $$ \frac{\max_{B_2(0)}\sigma_1}{\max_{B_1(0)}\sigma_1}\le \exp\exp\left(C+\ln\Gamma^2\right)=\exp (C\Gamma^2). $$ \end{proof}
We now modify the doubling inequality to account for ``moving centers''. We may control the global maximum by the maximum on any small ball. \begin{cor} \label{cor:doub} Let $u$ be a smooth solution of the sigma-2 equation \eqref{s2} on $B_4(0)\subset\mbb R^n$. If $n=4$, or if lower bound \eqref{lower} holds for $n\ge 5$, then the following inequality is true for any $y\in B_{1/3}(0)$ and $0<r<4/3$: \eqal{ \label{doubley}
\sup_{B_{2}(0)}\Delta u\le C(n,r,\|u\|_{C^1(B_3(0))})\sup_{B_{r}(y)}\Delta u. } \end{cor} \begin{proof} We first note that $$ B_1(0)\subset B_{4/3}(y)\subset B_{5/3}(y)\subset B_2(0), $$
for any $|y|<1/3$. By Proposition \ref{prop:doub}, we find an inequality independent of the center: \eqal{ \label{doublex}
\sup_{B_{5/3}(y)}\Delta u\le C(n)\exp\Big(C(n)\|u\|^2_{C^1(B_3(0))}\Big)\sup_{B_{4/3}(y)}\Delta u. } We iterate this inequality about $y$ using the rescalings $$ u_{k+1}(\bar x)=\left(\frac{5}{4}\right)^2u_k\left(\frac{4}{5}(\bar x-y)+y\right),\qquad u_0=u,\qquad k=0,1,2,\dots $$ It follows that each $u_k$ satisfies \eqref{doublex}. Denoting
$$
C_k=C(n)\exp \left[C(n)\|u_k\|^2_{C^1(B_3(0))}\right]\le C(n)\exp\left[ \left(\frac{5}{4}\right)^{4k}C(n)\|u\|^2_{C^1(B_3(0))}\right], $$ we obtain for $k=1,2,\dots$, $$
\sup_{B_{5/3}(y)}\Delta u\le C_0C_1\cdots C_{k}\sup_{B_{r_{k+1}}(y)}\Delta u\le C(k,n,\|u\|_{C^1(B_3(0))})\sup_{B_{r_{k+1}}(y)}\Delta u,\qquad r_k=\frac{5}{3}\left(\frac{4}{5}\right)^{k}. $$ Choosing $k$ such that $r_{k+1}\le r< r_k$, we combine this inequality with Proposition \ref{prop:doub} again to arrive at \eqref{doubley}. \end{proof}
\begin{rem} In the uniformly elliptic case, i.e. $a^{ij}b_{ij}\ge a^{ij}b_ib_j$ with $\lambda I\le (a^{ij})\le \Lambda I$, it follows from Trudinger \cite[$\text{p. 70}$]{T3} that a local Alexandrov maximum principle argument gives an integral doubling inequality: $$
\sup_{B_1(0)}b\le C\left(n,r,\frac{\Lambda}{\lambda}\right)\left(1+\|b\|_{L^n(B_r(0))}\right). $$
In the $\sigma_2$ case, we can find an integral doubling inequality by modifying Qiu's argument, but the non-uniform ellipticity adds a nonlinear weight to the integral: $$
\sup_{B_1(0)}\ln\Delta u\le C(n,r)\Gamma^2\left(1+\|(\Delta u)^{2/n}\ln\Delta u\|_{L^n(B_{r}(0))}\right). $$ This \textit{nonlinear} doubling inequality can be employed to reach Theorems \ref{thm:s2} and \ref{thm:n5}, as in Section \ref{sec:proof}, Step 3. \end{rem}
\section{Alexandrov regularity for viscosity solutions}
We modify the approach of Evans-Gariepy \cite{EG} and Chaudhuri-Trudinger \cite{CT} to show the following Alexandrov regularity. In \cite[Theorem 1, section 6.4]{EG}, the Alexandrov theorem is seen to arise from combining a gradient estimate with a ``$W^{2,1}$ estimate'' for convex functions. The latter can be heuristically understood from the a priori divergence structure calculation $$
\int_{B_1(0)}|D^2u|\,dx \leq \int_{B_1(0)} \Delta u\,dx\le C(n)\|u\|_{L^\infty(B_2(0))}. $$
However, for $k$-convex functions, there is no gradient estimate, in general, and only H\"older and $W^{1,n+}$ estimates for $k>n/2$. We are not able to use Chaudhuri and Trudinger's result in dimension $n=4$. Yet, 2-convex solutions of $\sigma_2=1$ have an even stronger interior Lipschitz estimate, by Trudinger \cite{T2}, and also Chou-Wang \cite{CW}, with a similar ``$W^{2,1}$ estimate'' from $\Delta u=\sqrt{2+|D^2u|^2}$, so the method of \cite{EG} and \cite{CT} can be applied verbatim. We record the modifications below, for completeness.
\begin{prop} \label{prop:Alex} Let $u$ be a viscosity solution of the sigma-2 equation \eqref{s2} on $B_4(0)$ with $\Delta u>0$. Then $u$ is twice differentiable almost everywhere in $B_4(0)$, i.e., for almost every $x\in B_4(0)$, there is a quadratic polynomial $Q$ such that $$
\sup_{y\in B_{r}(x)}|u(y)-Q(y)|=o(r^2). $$ \end{prop}
We begin the proof of this proposition by first recalling the weighted norm Lipschitz estimate \cite[Corollary 3.4, $\text{p. 587}$]{TW} for smooth solutions of $\sigma_2=1,\Delta u>0$ on a smooth, strongly convex domain $\Omega\subset\mbb R^n$:
\eqal{ \label{gradp}
\sup_{x,y\in\Omega:x\neq y}d_{x,y}^{n+1}\frac{|u(x)-u(y)|}{|x-y|}\le C(n)\int_\Omega |u|dx, } where $d_{x,y}=\min(d_x,d_y)$, and $d_x=\text{dist}(x,\partial\Omega)$. By solving the Dirichlet problem \cite{CNS} with smooth approximating boundary data, this pointwise estimate holds for the viscosity solution $u$ if $\Omega\subset\subset B_4(0)$; i.e., $u$ is locally Lipschitz. By Rademacher's theorem, $u$ is differentiable almost everywhere, with $Du\in L^\infty_{loc}$ equal to the weak (distribution) gradient. By Lebesgue differentiation, for almost every $x\in B_4(0)$, \eqal{ \label{gradi}
&\lim_{r\to 0}\Xint-_{B_r(x)}|Du(y)-Du(x)|dy=0. } For second order derivatives, we next recall the definition \cite[$\text{p. 306}$]{CT} that a continuous 2-convex function satisfies both $\Delta u>0$ and $\sigma_2>0$ in the viscosity sense. Since viscosity solution $u$ to $\sigma_2=1$ and $\Delta u>0$ is 2-convex, we deduce from \cite[Theorem 2.4]{CT} that the weak Hessian $\partial^2u$, interpreted as a vector-valued distribution, gives a vector-valued Radon measure $[D^2u]=[\mu^{ij}]$: $$ \int u\,\varphi_{ij}\, dx=\int \varphi \,d\mu^{ij},\qquad\varphi\in C^\infty_0(B_4(0)). $$
Let us outline another proof. For a 2-convex smooth function $u$, we note that $\sum_{j\neq i}D_{jj}u\ge \Delta u-\lambda_{\max}\geq 0$, where the last inequality follows from \eqref{linearized} with $2\sigma_{2}\geq 0$. Via smooth approximation in the $C^{0}/L^{\infty}$ norm, we see that $\mu^{i}$, and also $\mu^{e}$ for any unit vector $e$ in $\mathbb{R}^{n}$ in \cite[(2.7)]{CT}, are non-negative Borel measures for 2-convex continuous $u$; in turn, they are bounded on compact sets, that is, Radon measures. Readily, $\mu^{I_{n}}$ in \cite[(2.6)]{CT} is also a Radon measure for 2-convex continuous $u$. Consequently, for 2-convex continuous $u$, $D_{ii}u=\mu^{I_{n}}-\mu^{i}$, and also $D_{ee}u=\mu^{I_{n}}-\mu^{e}$ in \cite[(2.8)]{CT}, are Radon measures. This gives another way of showing that the Hessian measures $D_{ij}u=\mu^{ij}=\left(\mu^{e_{+}e_{+}}-\mu^{e_{-}e_{-}}\right) /2$, with $e_{+}=\left( \partial_{i}+\partial_{j}\right) /\sqrt{2}$ and $e_{-}=\left(\partial_{i}-\partial_{j}\right) /\sqrt{2}$ in \cite[(2.9)]{CT}, are Radon measures for all $1\leq i,j\leq n$ and 2-convex continuous $u$.
By Lebesgue decomposition, we write $[D^2u]=D^2u\,dx+[D^2u]_s$, where $D^2u\in L^1_{loc}$ denotes the absolutely continuous part with respect to $dx$, and $[D^2u]_s$ is the singular part. In particular, for $dx$-almost every $x$ in $B_4(0)$, \begin{align} \label{Hess1}
&\lim_{r\to 0}\Xint-_{B_r(x)}|D^2u(y)-D^2u(x)|dy=0,\\ \label{Hess2}
&\lim_{r\to 0}\frac{1}{r^n}\|[D^2u]_s\|(B_r(x))=0. \end{align}
Here, we denote by $\|[D^2u]_s\|$ the total variation measure of $[D^2u]_s$. In fact, these conditions plus \eqref{gradi} are precisely conditions (a)-(c) in \cite[Theorem 1, section 6.4]{EG}. We state their conclusion as a lemma, and include their proof of this fact in the Appendix.
\begin{lem} \label{lem:o(r^2)} Let $u\in C(B_4(0))$ have a weak gradient $Du\in L^1_{loc}$ which satisfies \eqref{gradi} for a.e. $x$, and a weak Hessian $\partial ^2u$ which induces a Radon measure $[D^2u]=D^2u\,dx+[D^2u]_s$ obeying conditions \eqref{Hess1} and \eqref{Hess2} for a.e. $x$. Then for a.e. $x\in B_4(0)$, it follows that \eqal{
\Xint-_{B_r(x)}\Big |u(y)-u(x)-(y-x)\cdot Du(x)-\frac{1}{2}(y-x)D^2u(x)(y-x)\Big|dy=o(r^2). } \end{lem}
Choose $x$ for which conditions \eqref{gradi}, \eqref{Hess1}, and \eqref{Hess2} are valid. Let $h(y)=u(y)-u(x)-(y-x)\cdot Du(x)-(y-x)\cdot D^2u(x)\cdot (y-x)/2$. Using \eqal{ \label{L^1}
\Xint-_{B_r(x)}|h(y)|dy=o(r^2), }
we will upgrade this to the desired $\|h\|_{L^\infty(B_{r/2}(x))}=o(r^2)$. The crucial ingredient is a pointwise estimate: for $0<2r<4-|x|$, \eqal{ \label{point}
\sup_{y,z\in B_r(x),y\neq z}\frac{|h(y)-h(z)|}{|y-z|}\le \frac{C(n)}{r}\Xint-_{B_{2r}(x)}|h(y)|dy+Cr, }
where $C=C(n)|D^2u(x)|$. This was shown as \cite[Lemma 3.1]{CT} for $k$-convex functions with $k>n/2$ using the H\"older estimate \cite[Theorem 2.7]{TW}, and \cite[Claim \#1, $\text{p. 244}$]{EG} for convex functions using a gradient estimate, respectively.
\begin{proof}[Proof of \eqref{point}] To establish \eqref{point}, we first let $g(y)=u(y)-u(x)-(y-x)\cdot Du(x)$; then $\sigma_2(D^2g(y))=1$ with $\Delta g(y)>0$, so gradient estimate \eqref{gradp} yields \begin{align} \label{pointLem} \nonumber
r^{n+1}\sup_{y,z\in B_r(x),y\neq z}&\frac{|g(y)-g(z)|}{|y-z|}\\ \nonumber
&=\text{dist}(\partial B_r(x),\partial B_{2r}(x))^{n+1}\sup_{y,z\in B_r(x),y\neq z}\frac{|g(y)-g(z)|}{|y-z|}\\ \nonumber
&\le \sup_{y,z\in B_{2r}(x),y\neq z}d_{y,z}^{n+1}\frac{|g(y)-g(z)|}{|y-z|}\\ \nonumber
&\stackrel{\eqref{gradp}}{\le}C(n)\int_{B_{2r}(x)}|g(y)|dy\\
&\le C(n)\int_{B_{2r}(x)}|h(y)|dy+C(n)|D^2u(x)|\,r^{n+2}, \end{align}
where $d_{y,z}:=\min(2r-|y-x|,2r-|z-x|)$. Next, we polarize $$ (y-x)\cdot D^2u(x)\cdot(y-x)-(z-x)\cdot D^2u(x)\cdot(z-x)=(y-x+z-x)\cdot D^2u(x)\cdot(y-z), $$ which gives $$
r^{n+1}\sup_{y,z\in B_r(x),y\neq z}\frac{|h(y)-h(z)|}{|y-z|}\le r^{n+1}\sup_{y,z\in B_r(x),y\neq z}\frac{|g(y)-g(z)|}{|y-z|}+C(n)r^{n+2}|D^2u(x)|. $$ This inequality and \eqref{pointLem} lead to \eqref{point}. \end{proof}
The rest of the proof follows \cite[Claim \#2, $\text{p. 244}$]{EG} or \cite[Proof of Theorem 1.1, $\text{p. 311}$]{CT} verbatim. We summarize the conclusion as a lemma and include its proof in the appendix.
\begin{lem} \label{lem:pto(r^2)}
Let $h(y)\in C(B_4(0))$ and $x\in B_4(0)$ satisfy integral \eqref{L^1} and pointwise \eqref{point} bounds for $0<2r<4-|x|$. Then $\sup_{B_{r/2}(x)}|h(y)|=o(r^2)$. \end{lem}
This completes the proof of Proposition \ref{prop:Alex}.
\begin{rem} In fact, Proposition \ref{prop:Alex} holds true for (continuous) viscosity solutions to $\sigma_{k}\left( D^{2}u\right) =1$ for $2\leq k\leq n/2$ in $n$ dimensions, because the needed conditions \eqref{gradp}-\eqref{Hess2} in the proof are all available. The twice differentiability a.e. of all $k$-convex functions with $k>n/2$ in $n$ dimensions, without their satisfying any equation, is the content of the theorems of Alexandrov \cite[$\text{p. 242}$]{EG} and Chaudhuri-Trudinger \cite{CT}. \end{rem}
\section{Proof of Theorems \ref{thm:s2} and \ref{thm:n5}} \label{sec:proof}
Step 1. After scaling $4^{2}u(x/4)$, we claim that the Hessian $D^2u(0)$ is controlled by $\|u\|_{C^1(B_4(0))}$. Otherwise, there exists a sequence of smooth solutions $u_k$ of \eqref{s2} on $B_4(0)$ with bound $\|u_k\|_{C^1(B_{3}(0))}\le A$, but $|D^2u_k(0)|\to\infty$, in either dimension $n=4$, or in higher dimension $n\ge 5$ with dynamic semi-convexity \eqref{lower}. By Arzela-Ascoli, a subsequence, still denoted by $u_k$, uniformly converges on $B_3(0)$. By the closedness of viscosity solutions (cf.\cite{CC}), the subsequence $u_k$ converges uniformly to a continuous viscosity solution, abusing notation, still denoted by $u$, of \eqref{s2} on $B_3(0)$; we included the non-uniformly elliptic convergence proof in the appendix, Lemma \ref{lem:conv}. By Alexandrov Proposition \ref{prop:Alex}, we deduce that $u$ is second order differentiable almost everywhere on $B_3(0)$. We fix such a point $x=y$ inside $B_{1/3}(0)$, and let $Q(x)$ be such that $u-Q=o(|x-y|^2)$.
Step 2. We apply Savin's small perturbation theorem \cite{S} to $v_k=u_k-Q$. Given small $0<r<4/3$, we rescale near $y$: $$ \bar v_k(\bar x)=\frac{1}{r^2}v_k(r\bar x+y). $$ Then \begin{align*}
\|\bar v_k\|_{L^\infty(B_1(0))}&\le \frac{\|u_k(r\bar x+y)-u(r \bar x+y) \|_{L^\infty(B_1(0))}} {r^2} + \frac{\|u(r\bar x+y)-Q(r \bar x+y) \|_{L^\infty(B_1(0))}}{r^2} \\
&\le \frac{\|u_k(r\bar x+y)-u(r \bar x+y) \|_{L^\infty(B_1(0))}} {r^2} + \sigma(r) \end{align*} for some modulus $\sigma(r)=o(r^2)/r^2$. And also $\bar v_k$ solves the elliptic PDE in $B_1(0)$ $$
G(D^2\bar w)=\Delta \bar w+\Delta Q-\sqrt{2+|D^2\bar w+D^2Q|^2}=0. $$
Note that $\sigma_2(D^2Q)=1$ with $\Delta Q>0$, so $G(0)=0$ with $G(M)$ smooth. Moreover, $|D^2G|\le C(n)$, and $G(M)$ is uniformly elliptic for $|M|\le 1$, with elliptic constants depending on $n,Q$.
Now we fix $r=r(n,Q,\sigma)=:\rho$ small enough such that $\sigma(\rho)<c_1/2,$ where $c_1$ is the small constant in \cite[Theorem 1.3]{S}. As $u_k$ converges uniformly to $u$, we have $\|\bar v_k\|_{L^\infty(B_1(0))} \le c_1$ for all large enough $k$. It follows from \cite[Theorem 1.3]{S} that $$
\|u_k-Q\|_{C^{2,\alpha}(B_{\rho/2}(y))}\le C(n,Q,\sigma), $$ with $\alpha=\alpha(n,Q,\sigma)\in(0,1)$. This implies $\Delta u_k\le C(n,Q,\sigma)$ on $B_{\rho/2}(y)$, uniform in $k$.
Step 3. Finally we apply doubling inequality \eqref{doubley} in Corollary \ref{cor:doub} to $u_k$ with $r=\rho/2$: $$
\sup_{B_{2}(0)}\Delta u_k\le C(n,\rho/2,\|u_k\|_{C^1(B_3(0))})C(n,Q,\sigma)\le C(n,Q,\sigma, A). $$ We deduce a contradiction to the ``otherwise blowup assumption" at $x=0$.
\begin{rem}
In fact, a similar proof directly establishes interior regularity for viscosity solution $u$ of \eqref{s2} in four dimensions, and then the Hessian estimate, instead of first obtaining the Hessian estimate, then the interior regularity as indicated in the introduction. By rescaling $\bar u(\bar x)=u(r\bar x+x_0)/r^2$ at various centers, it suffices to show smoothness in $B_1(0)$, if $u\in C(B_5(0))$. By Alexandrov Proposition \ref{prop:Alex}, we let $x=y$ be a second order differentiable point of $u$ in $B_{1/3}(0)$, with quadratic approximation $Q(x)$ and error $\sigma$ at $y$. By Savin's small perturbation theorem \cite[Theorem 1.3]{S}, we find a ball $B_\rho(y)$ with $\rho=\rho(n,Q,\sigma)$ on which $u$ is smooth, with estimates depending on $n,Q,\sigma$. Using \cite{CNS}, we find smooth approximations $u_k\to u$ uniformly on $B_{4}(0)$, with $|Du_k(x)|\le C(\|u\|_{L^\infty(B_4(0))})$ in $B_3(0)$ by the gradient estimate in \cite{T2} and also \cite{CW}. By the small perturbation theorem \cite[Theorem 1.3]{S}, it follows that $u_k\to u$ in $C^{2,\alpha}$ on $B_{\rho/2}(y)$. Applying doubling \eqref{doubley} to $u_k$ with $r=\rho/2$, we find that $\Delta u_k\le C(n,Q,\sigma,\|u\|_{L^\infty(B_4(0))})$ on $B_{2}(0)$. By Evans-Krylov, $u_k\to u$ in $C^{2,\alpha}(B_1(0))$. It follows that $u$ is smooth on $B_1(0)$.
From interior regularity, a compactness proof for a Hessian estimate would then follow by an application of the small perturbation theorem. Suppose $u_k\to u$ uniformly but $|D^2u_k(0)|\to\infty$. We observe that the limit $u$ is interior smooth. Applying Savin's small perturbation theorem to $u_k-u$, which solves a fully nonlinear elliptic PDE with smooth coefficients, implies a uniform bound on $D^2u_k(0)$ for large $k$, a contradiction. \end{rem}
\begin{rem} By combining Alexandrov Proposition \ref{prop:Alex} with \cite[Theorem 1.3]{S} as above, we find that general viscosity solutions of $\sigma_2=1$ on $B_1(0)\subset\mbb R^n$ with $\Delta u>0$ have partial regularity: the singular set is closed with Lebesgue measure zero. The same partial regularity also holds for ($k$-convex) viscosity solutions of equation $\sigma_k=1$, because Alexandrov Proposition \ref{prop:Alex} is valid for such solutions as noted in Remark 4.1. \end{rem}
\section{Appendix}
\begin{proof}[Proof of Lemma \ref{lem:o(r^2)}]
Choose $x\in B_4(0)$ for which conditions \eqref{gradi}, \eqref{Hess1}, and \eqref{Hess2} are valid. Given $r>0$ small enough for $B_{2r}(x)\subset B_4(0)$, we just assume $x=0$. Letting $\eta_\varepsilon(y)=\varepsilon^{-n}\eta(y/\varepsilon)$ be the standard mollifier, we set $u^\varepsilon(y)=\eta_\varepsilon\ast u(y)$ for $|y|<r$. Letting $Q^\varepsilon(y)=u^\varepsilon(0)+y\cdot Du^\varepsilon(0)+y\cdot D^2u(0)\cdot y/2$, we use Taylor's theorem for the linear part: $$ u^\varepsilon(y)-Q^\varepsilon(y)=\int_0^1(1-t)y\cdot [D^2u^\varepsilon(ty)-D^2u(0)]\cdot y\,dt. $$
Letting $\varphi\in C^2_c(B_r(0))$ with $|\varphi(y)|\le 1$, we average over $B_r=B_r(0)$: \eqal{ \label{avg} \Xint-_{B_r}\varphi(y)(u^\varepsilon(y)-Q^\varepsilon(y))dy&=\int_0^1(1-t)\left(\Xint-_{B_r}\varphi(y)y\cdot[D^2u^\varepsilon(ty)-D^2u(0)]\cdot y\,dy\right)dt\\ &=\int_0^1\frac{1-t}{t^2}\left(\Xint-_{B_{rt}}\varphi(t^{-1}z)z\cdot[D^2u^\varepsilon(z)-D^2u(0)]\cdot z\,dz\right)dt. } The first term converges to the Radon measure representation of the Hessian: \begin{align*}
g^\varepsilon(t)&:=\int_{B_{rt}}\varphi(t^{-1}z)z\cdot D^2u^\varepsilon(z)\cdot z\,dz\\
&\to\int_{B_{rt}}u(z)\partial_{ij}(z^iz^j\varphi(t^{-1}z))\,dz\qquad\text{as }\varepsilon\to0\\
&=\int_{B_{rt}}\varphi(t^{-1}z)z^iz^jd\mu^{ij}\\
&=\int_{B_{rt}}\varphi(t^{-1}z)z\cdot D^2u(z)\cdot z\,dz+\int_{B_{rt}}\varphi(t^{-1}z)z^iz^jd\mu^{ij}_s. \end{align*} It also has a bound which is uniform in $\varepsilon$: \begin{align*}
\frac{g^\varepsilon(t)}{r^nt^{n+2}}&\le\frac{r^2}{(rt)^n}\int_{B_{rt}}|D^2u^\varepsilon(z)|dz\\
&=\frac{r^2}{(rt)^n}\int_{B_{rt}}\left|\int_{\mbb R^n}D^2\eta_\varepsilon(z-\zeta)u(\zeta)\,d\zeta\right|dz\\
&=\frac{r^2}{(rt)^n}\int_{B_{rt}}\left|\int_{\mbb R^n}\eta_\varepsilon(z-\zeta)d[D^2u](\zeta)\right|dz\\
&\le \frac{Cr^2}{\varepsilon^n(rt)^n}\int_{B_{rt+\varepsilon}}|B_{rt}(0)\cap B_\varepsilon(\zeta)|\,d\|D^2u\|(\zeta)\\
&\le \frac{Cr^2}{\varepsilon^n(rt)^n}\min(rt,\varepsilon)^n\|D^2u\|(B_{rt+\varepsilon})\\
&\le Cr^2\frac{\|D^2u\|(B_{rt+\varepsilon})}{(rt+\varepsilon)^n}\\
&\le Cr^2. \end{align*}
In the last inequality, we used \eqref{Hess1} and \eqref{Hess2}, and denoted by $\|D^2u\|$ the total variation measure of $[D^2u]$. Note also, by \eqref{gradi}, \begin{align*}
|Du^\varepsilon(0)-Du(0)|&\le \int_{B_\varepsilon}\eta_\varepsilon(z)|Du(z)-Du(0)|dz\\
&\le C\Xint-_{B_\varepsilon}|Du(z)-Du(0)|dz\\
&=o(1)_\varepsilon. \end{align*} By the dominated convergence theorem, we send $\varepsilon\to 0$ in \eqref{avg}: \begin{align*}
\Xint-_{B_r}\varphi(y)(u(y)-Q(y))dy&\le Cr^2\int_0^1\Xint-_{B_{rt}}|D^2u(z)-D^2u(0)|dzdt+Cr^2\int_0^1\frac{\|[D^2u]_s\|(B_{rt})}{(rt)^n}dt\\
&=o(r^2), \end{align*}
using \eqref{Hess1} and \eqref{Hess2}. Taking the supremum over all such $|\varphi(y)|\le 1$, we conclude $\Xint-_{B_r}|h(y)|dy=o(r^2)$. This completes the proof. \end{proof}
\begin{proof}[Proof of Lemma \ref{lem:pto(r^2)}]
Given $x\in B_4(0)$ such that \eqref{L^1} and \eqref{point} are true, we let $0<2r<4-|x|$ and $0<\varepsilon<1/2$. Then by \eqref{L^1}, \begin{align*}
\left|\{z\in B_r(x):|h(z)|\ge \varepsilon r^2\}\right|&\le \frac{1}{\varepsilon r^2}\int_{B_r(x)}|h(z)|dz\\
&=\varepsilon^{-1}o(r^n)\\
&< \varepsilon|B_r(x)|, \end{align*} provided $r<r_0(\varepsilon,n,h)$. Then for each $y\in B_{r/2}(x)$, there exists $z\in B_r(x)$ such that $$
|h(z)|\le \varepsilon r^2\qquad\text{ and }\qquad |y-z|\le \varepsilon r. $$ By \eqref{point} and \eqref{L^1}, we obtain for such $y$, \begin{align*}
|h(y)|&\le |h(z)|+\frac{|h(y)-h(z)|}{|y-z|}\varepsilon r\\
&\le \varepsilon r^2+C(n)\varepsilon\Xint-_{B_{2r}(x)}|h(\zeta)|d\zeta+C(n,h)\varepsilon r^2\\
&\le C(n,h)\varepsilon r^2. \end{align*}
We conclude $\sup_{B_{r/2}(x)}|h(y)|=o(r^2)$. \end{proof}
The following is standard, but for lack of reference, we include a proof. \begin{lem} \label{lem:conv} If $u_k\to u$ is a uniformly convergent sequence of viscosity solutions on $B_1(0)$ of a fully nonlinear elliptic equation $F(D^2u,Du,u,x)=0$ continuous in all variables, then $u$ is a viscosity solution of $F$ on $B_1(0)$. \end{lem}
\begin{proof} We show that $u$ is a subsolution; the supersolution property is proved symmetrically. Suppose for some $x_0\in B_1(0)$, $0<r<\text{dist}(x_0,\partial B_1(0))$, and smooth $Q$ that $Q\ge u$ on $B_r(x_0)$ with equality at $x_0$. Set $$
Q_\varepsilon=Q+\varepsilon |x-x_0|^2-\varepsilon^4. $$ We observe that $$ u_k(x_0)-Q_\varepsilon(x_0)\ge u(x_0)-Q(x_0)+\varepsilon^4-o(1)_k>0 $$ for $k=k(\varepsilon)$ large enough. In the ring $B_{\varepsilon}(x_0)\setminus B_{\varepsilon/2}(x_0)$, we have $$ u_k(x)-Q_\varepsilon(x)<u(x)-Q(x)-\varepsilon^3/4+\varepsilon^4+o(1)_k<0 $$ for $\varepsilon=\varepsilon(r)$ small enough, and $k=k(\varepsilon)$ large enough. This means the maximum of $u_k-Q_\varepsilon$ occurs at some $x_\varepsilon\in B_{\varepsilon/2}(x_0)$. Since $u_k$ is a subsolution, we get $$ 0\le F(D^2Q_\varepsilon(x_\varepsilon),DQ_\varepsilon(x_\varepsilon),Q_\varepsilon(x_\varepsilon),x_\varepsilon)\to F(D^2Q(x_0),DQ(x_0),Q(x_0),x_0), $$ as $\varepsilon\to 0$. This completes the proof. \end{proof}
\noindent
\textbf{Acknowledgments.} Y.Y. is partially supported by an NSF grant.
\noindent DEPARTMENT OF MATHEMATICS, PRINCETON UNIVERSITY, PRINCETON, NJ 08544-1000
\textit{Email address:} rs1838@princeton.edu
\noindent DEPARTMENT OF MATHEMATICS, UNIVERSITY OF WASHINGTON, BOX 354350, SEATTLE, WA 98195
\textit{Email address:} yuan@math.washington.edu
\end{document} |
\begin{document}
\title{
A Sorting Algorithm Based on Calculation}
\author{\authorblockN{Sheng~Bao} ~\IEEEmembership{Student Member,~IEEE,} \\ \authorblockA{Dept. of Information Engineering, Nanjing Univ.\ of P \& T, Nanjing 210046, CHINA \\ Email: forrest.bao@gmail.com} \\\authorblockN{De-Shun~Zheng}\\ \authorblockA{Dept. of Telecommunication Engineering, Nanjing Univ.\ of P \& T, Nanjing 210046, CHINA \\ Email: gtzds@163.com}}
\maketitle
\begin{abstract} This article introduces an adaptive sorting algorithm that can relocate elements accurately by substituting their values into a function.
We focus on building this function, which is essentially the mapping relationship between record values and their corresponding sorted locations. The time complexity of this algorithm is $O(n)$ when the records are distributed uniformly. Additionally, a similar approach can be used in a searching algorithm.
\end{abstract}
\begin{keywords} Algorithm/protocol design and analysis, Sorting and searching, Data Structures \end{keywords}
\IEEEpeerreviewmaketitle
\section{Introduction}
We live in a world obsessed with keeping information, and to find it, we must keep it in some sensible order.\cite{Data_Structures_and_Program_Design_in_C++} Computers spend a considerable amount of their time keeping data in order.\cite{JAVA} The objective of the sorting method is to rearrange the records so that their keys are ordered according to some well-defined ordering rules.\cite{Algorithms_in_C++}
The essence of sorting is a mapping relationship between record values and their corresponding ordered positions. A perfect sorting algorithm would let us accomplish our goal via just one calculation: substituting the value of each element into the function, which returns its location.
This article describes a new sorting algorithm devoted to implementing the mapping relationship mentioned previously. Assuming the mapping relationship is linear, we devised two approaches: one depends on the maximum and minimum values of the records, and the other depends on the statistical properties of the records. Of course, the second one takes more time in determining the mapping relationship.
To make the mapping more accurate, a second mapping pass is devised for the intervals where the record density is high.
This algorithm consists of two parts, the mapping routine and the post-mapping routine; both will be discussed.
The performance of this algorithm is also discussed. Under a uniform distribution, its time complexity is $O(n)$.
\section{Preliminaries} In the following sections, we describe our algorithm for sorting an array of elements, which we call records. All array positions contain out-of-order records that are to be sorted. To simplify matters, we assume these records are all real numbers of the type double. \footnote{This ``double'' type is defined by the ANSI C++ standard.} Moreover, we assume that all of our operations can be done in main memory.
In the following discussion, the number of records is denoted by $N$. The sorting routine is viewed as putting $N$ records into $N$ prepared boxes. To identify these boxes, they are assigned indices, which are integers in the interval $[1,N]$. After the sorting routine, the records should be located in the boxes in ascending order.
At the end of this section,we name the function that will be introduced as guessing function.It is named from one of its properties is ``guessing" the location of records.The routine of substituting records into guessing function is called mapping. \section{Building the guessing function} \subsection{Basic properties of guessing function} The guessing function is defined following. \begin{Definition} Guessing function is such a function whose argument is the value of a record and returning value is the location of this record after soring. \end{Definition} The ideal guessing function should have following properties :
\begin{itemize} \item It should be a single-valued function. \item The function domain should be the interval between the values of the minimum and the maximum records. \item The function range should be $[1,N]$. \end{itemize}
It is easy to infer that the minimum record should be put into the first box whereas the maximum record should be put into the last box. Denote the maximum value of the records by $X_{max}$ and the minimum value by $X_{min}$.
\subsection{Two-terminals approach}
Based on the idea of building the function as simply as possible, we assume the guessing function is a linear function with two terminals, $(X_{min},1)$ and $(X_{max},N)$. The equation of the guessing function is \begin{equation} \frac {x-X_{min}}{X_{max}-X_{min}}= \frac {n-1} {N-1} \end{equation} where $x$ is the value of a record and $n$ is the index of the box where the record is located.
Thus \begin{equation} n=\frac {x-X_{min}}{X_{max}-X_{min}} (N-1)+1 \end{equation}
Since the indices of the boxes are integers, we need to round $n$ down. Then we obtain the simplest guessing function. \begin{equation} g_{1}(x)=\left \lfloor {\frac {x-X_{min}} {X_{max}-X_{min}} (N-1)} \right \rfloor +1 \end{equation}
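As a minimal sketch (ours, not part of the original text), the simplest guessing function above can be written in C++ as follows; all names are illustrative:

```cpp
#include <cassert>
#include <cmath>

// Guessing function I: map a record value x to a box index in [1, N]
// via the linear two-terminals rule. All names are illustrative.
int guess1(double x, double xmin, double xmax, int N) {
    double kGlobal = (N - 1) / (xmax - xmin);   // the global tangent
    return static_cast<int>(std::floor((x - xmin) * kGlobal)) + 1;
}
```

Note that the minimum record maps to box 1 and the maximum record to box $N$, as required.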
\begin{Definition} Global tangent is defined as the tangent of guessing function of all the records. \begin{equation} k_{global}={N-1 \over X_{max}-X_{min}} \label{global tangent} \end{equation} \end{Definition} The reason why we call it ``global tangent" will be explained later.
The guessing function can be rewritten as \begin{equation} g_{1}(x)=\left \lfloor (x-X_{min}) k_{global} \right \rfloor +1 \label{GFI} \end{equation}
\subsection{An alternative approach} We also devised an alternative approach that is adapted to normally distributed records.
According to the properties of the Gaussian distribution, almost all elements lie in the symmetric interval $(M-3\sigma,M+3\sigma)$, where $M$ is the mean and $\sigma$ is the standard deviation.\cite{Statistics}
We can assume the difference between a record's value and the mean lies in the interval $(-3\sigma,3\sigma)$ while the corresponding box indices run from 1 to $N$.
So we can also define $k_{global}$ as ${N \over 6\sigma}$. But there is a difference compared with the first approach. Such a mapping may lead to a box index greater than $N$ or less than 1, so a clamping routine is needed to limit box indices to $[1,N]$.
Since this approach needs at least two passes to obtain the statistical information, and it must check every index, it takes more time than the first one in building the guessing function. But its mapping may be more accurate.
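A sketch of this alternative approach (our illustration, assuming records concentrated in $(M-3\sigma, M+3\sigma)$): the index is computed from the mean and standard deviation and then clamped to $[1,N]$.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Alternative approach: global tangent N/(6*sigma); indices outside
// [1, N] are clamped by the round routine. Names are illustrative.
int guessStat(double x, double mean, double sigma, int N) {
    double kGlobal = N / (6.0 * sigma);
    int n = static_cast<int>(std::floor((x - (mean - 3.0 * sigma)) * kGlobal)) + 1;
    return std::clamp(n, 1, N);   // limit box index to [1, N]
}
```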
\subsection{Hash table and guessing function} One might consider our method to be just like a hash table, but in fact they are based on different principles. Moreover, the guessing function can be extended to a more precise one.
\subsection{More precise mapping: guessing function II} No matter which approach is adopted, one disadvantage of the previously defined guessing function is that records with similar values will be mapped into the same box. This is because the tangent of the guessing function that we used is a constant. An improved function that uses a variable tangent can map elements more accurately, since its tangent is adaptive to the density of the record values. Since we are going to introduce a better function, we denote the function in eq.\ref{GFI} as Guessing Function $\mathrm{I}$ and the following function as Guessing Function $\mathrm{II}$.
Guessing function $\mathrm{II}$ is based on guessing function $\mathrm{I}$. The only difference is that the tangent of Guessing Function $\mathrm{II}$ is a variable. The routine on every box is the same as the one performed in guessing function $\mathrm{I}$. The distribution array, whose elements are denoted $A[n]$, should be defined here. \begin{Definition} The distribution array is an array whose scale equals $N$ and whose element $A[n]$ is the number of records in boxes whose indices are not greater than $n$. \end{Definition}
Of course, the value of an array element whose index is less than 1 or greater than $N$ is $0$.
Then we can infer that \begin{Lemma} The final position of records in the $n$th box is between $A[n-1]+1$ and $A[n]$. \label{position range} \end{Lemma}
For any record, we have \begin{Lemma} \begin{equation} \frac {n-1} {k_{global}} + X_{min} \leq x < \frac {n} {k_{global}} +X_{min} \end{equation} where $x$ is its value and $n$ is the index of the box into which it is mapped by guessing function $\mathrm{I}$. \label{box value} \end{Lemma}
Combining Lemma \ref{position range} and Lemma \ref{box value}, we obtain that the guessing function $\mathrm{I}$ in this box has two terminals, $(\frac {n-1}{k_{global}} +X_{min},A[n-1]+1)$ and $(\frac {n}{k_{global}} +X_{min},A[n])$.
In particular, if this box is the first box, where $n=1$, the terminals of the guessing function will be $(X_{min},1)$ and $(\frac {1}{k_{global}} +X_{min},A[1])$. In the last box, the terminals should be $(\frac {N-1} {k_{global}} +X_{min},A[N-1]+1)$ and $({N \over k_{global}} +X_{min},A[N])$.
Then we can consider each box independently. Before applying guessing function $\mathrm{I}$ to each box, the local tangent of the guessing function should be introduced.
\begin{Definition} Local tangent of guessing function is defined as the tangent of the line that passes through point $(\frac {n-1}{k_{global}} +X_{min},A[n-1]+1)$ and $(\frac {n}{k_{global}} +X_{min},A[n])$ \end{Definition}
This definition is the reason why we call the tangent defined in eq.\ref{global tangent} as global tangent.
So we have \begin{eqnarray} k_{local}&=&\frac {A[n]-A[n-1]-1} {\Big ( {n \over k_{global}}+X_{min}\Big )- \Big ( {n-1 \over k_{global}}+X_{min} \Big )}\nonumber\\ &=&k_{global}(A[n]-A[n-1]-1) \label{klocal} \end{eqnarray}
Substituting the above information into eq.\ref{GFI}, we obtain the local guessing function in a box as \begin{equation} \left \lfloor \Big [ x-\frac {n-1}{k_{global}}-X_{min} \Big]k_{local} \right \rfloor+1 \end{equation}
Considering that the positions of records in this box start from $A[n-1]+1$, we obtain the guessing function of the entire records
\begin{equation} g_{2}(x)=
A[n-1]+\left \lfloor \Big [ x-\frac {n-1}{k_{global}}-X_{min} \Big]k_{local} \right \rfloor +1 \label{GFII} \end{equation}
where $n$ is calculated by eq.\ref{GFI} and $k_{local}$ is given by eq.\ref{klocal}.
We name eq.\ref{GFII} Guessing Function $\mathrm{II}$.
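A sketch of Guessing Function $\mathrm{II}$ (ours; it assumes the distribution array $A$ is already built, with $A[0]=0$ and the positions in the $n$th box starting at $A[n-1]+1$ as in Lemma \ref{position range}):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Guessing function II: refine the box index from guessing function I
// with the local tangent derived from the distribution array A.
int guess2(double x, double xmin, double xmax, int N,
           const std::vector<int>& A) {          // A[0] = 0, A has N+1 entries
    double kGlobal = (N - 1) / (xmax - xmin);
    int n = static_cast<int>(std::floor((x - xmin) * kGlobal)) + 1;  // first pass
    double kLocal = kGlobal * (A[n] - A[n - 1] - 1);                 // local tangent
    double left = (n - 1) / kGlobal + xmin;                          // left terminal of box n
    return A[n - 1] + static_cast<int>(std::floor((x - left) * kLocal)) + 1;
}
```

For instance, with records $\{0, 0.1, 0.2, 3\}$ and $N=4$, the first pass puts three records into box 1, and the local tangent then spreads them over positions 1 and 2.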
\subsection{The necessity of guessing function $\mathrm{II}$} Some of our tests indicate that the time elapsed by guessing function $\mathrm{II}$ is almost 5 times that of guessing function $\mathrm{I}$. If your record distribution is close to uniform, guessing function $\mathrm{I}$ is enough. But if your records are gathered in certain intervals, guessing function $\mathrm{II}$ may be needed.
\section{Post-mapping routines} Whether guessing function $\mathrm{I}$ or guessing function $\mathrm{II}$ is used, we cannot guarantee that every box contains only one record. To records in the same box, we apply a traditional sorting algorithm so that each box is sorted. One traversal pass will then retrieve the records and return a sorted array.
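Putting the pieces together, here is an end-to-end sketch (our illustration, using guessing function $\mathrm{I}$ and \texttt{std::sort} as a stand-in for any traditional in-box sort):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Map records into boxes, sort each box, then traverse the boxes once.
std::vector<double> guessSort(const std::vector<double>& records) {
    int N = records.size();
    if (N < 2) return records;
    auto mm = std::minmax_element(records.begin(), records.end());
    double xmin = *mm.first, xmax = *mm.second;
    if (xmin == xmax) return records;                 // all records equal
    double kGlobal = (N - 1) / (xmax - xmin);
    std::vector<std::vector<double>> boxes(N + 1);    // boxes 1..N
    for (double x : records)
        boxes[static_cast<int>(std::floor((x - xmin) * kGlobal)) + 1].push_back(x);
    std::vector<double> sorted;
    sorted.reserve(N);
    for (auto& box : boxes) {                         // post-mapping routine
        std::sort(box.begin(), box.end());
        sorted.insert(sorted.end(), box.begin(), box.end());
    }
    return sorted;
}
```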
\section{Performance analysis} \subsection{Time complexity} Under a uniform distribution, the time complexity of our algorithm is $O(n)$.
\begin{proof} The probability of a record being mapped into any given box is $1/N$. We can infer that the probability of a box containing no record is ${N \choose 0} (1-{1 \over {N}})^{N}=(1-{1 \over N})^{N}$. And we have $$ \lim_{N \to \infty} {(1-{1 \over N})^{N} = e^{-1}} $$
So the expected number of boxes containing no record is approximately $e^{-1}N$.
After the first mapping pass, the $N$ records are mapped into about $(1-e^{-1})N$ boxes. In these boxes, the expected number of records is $\frac {1} {1-e^{-1}}$ per box. Considering that the final position of every record is confined to the box into which it is mapped, in the second mapping pass the expected error interval of the mapping is less than $\frac {1} {2(1-e^{-1})}$.
So the $N$ records need in total fewer than $\frac {N} {2(1-e^{-1})}$ moves. Considering that the mapping operation and the construction of the distribution array $A[n]$ have linear time complexity, we can conclude that the time complexity is $O(n)$.\cite{Computer_Algorithms:_Introduction_to_Design_and_Analysis} \end{proof}
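The $e^{-1}$ empty-box fraction used in the proof can be checked empirically (our sketch; the seed and scale are arbitrary):

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

// Throw N uniformly mapped records into N boxes and report the
// fraction of boxes left empty; it approaches 1/e for large N.
double emptyBoxFraction(int N, unsigned seed) {
    std::mt19937 gen(seed);
    std::uniform_int_distribution<int> box(1, N);
    std::vector<char> occupied(N + 1, 0);
    for (int i = 0; i < N; ++i) occupied[box(gen)] = 1;
    int empty = 0;
    for (int n = 1; n <= N; ++n) empty += (occupied[n] == 0);
    return static_cast<double>(empty) / N;
}
```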
\subsection{Space complexity} Before mapping, the space for storing the results of guessing function $\mathrm{I}$ and $\mathrm{II}$ is proportional to $N$. Also, the space for the distribution array is proportional to $N$. The space for storing other variables is constant. So the space complexity of both guessing function $\mathrm{I}$ and $\mathrm{II}$ is $O(n)$.
\section{Comparison with other sorting algorithms}
Some tests were performed on a computer whose CPU is an AMD Athlon 2000+ and whose OS is Fedora Core 1 (Linux kernel 2.4.22-1). The testing programs were executed in multiuser text mode and compiled by gcc 3.3.2 without optimization. Uniformly distributed numbers ranging from $-20000000$ to $20000000$ were generated and sorted by the testing programs. Table \ref{comtable} lists the sorting time of the different algorithms as the number of records increases. Fig.\ref{uniformfig} also illustrates the elapsed-time comparison with some other algorithms. \begin{table*} \caption[comparison]{Sorting time of different algorithms} \begin{center}
\begin{tabular}[!ht]{c | r@{.}l r@{.}l r@{.}l r@{.}l r@{.}l} \hline Algorithms \slash Scale & \multicolumn{2}{c}{$2^8$} & \multicolumn{2}{c}{$2^{11}$} &\multicolumn{2}{c}{$2^{14}$} &\multicolumn{2}{c}{$2^{17}$} &\multicolumn{2}{c}{$2^{20}$}\\ \hline
\hline Quicksort\cite{Algorithms_in_C++} & 0&000075 &0&000525& 0&005425 & 0&058475 & 0&600225\\
\hline Guessing function
\\one pass mapping
\\two points approach
& 0&000025 &0&00025 &0&002575 &0&056725 &0&603525\\ \hline Guessing function
\\one pass mapping
\\alternative approach
& 0&000025 &0&00005 &0&00275 &0&05105 &0&60855\\ \hline Guessing function
\\two passes mapping
\\two points approach
& 0&000075 &0&0003 &0&00365 &0&0848 &N&A.\\ \hline Guessing function
\\two passes mapping
\\alternative approach
& 0&00005 &0&00045 &0&0043 &0&081975 &N&A.\\ \hline \end{tabular} \label{comtable} \end{center} \end{table*}
\begin{figure*}\caption{Elapsed-time comparison with other sorting algorithms (uniform distribution).}\label{uniformfig}
\end{figure*}
\end{document} |
\begin{document}
\theoremstyle{definition} \newtheorem{assumption}{Assumption} \newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{example}{Example} \newtheorem{definition}{Definition} \newtheorem{corollary}{Corollary} \newtheorem{prop}{Proposition}
\def\ind{\begin{picture}(9,8)
\put(0,0){\line(1,0){9}}
\put(3,0){\line(0,1){8}}
\put(6,0){\line(0,1){8}}
\end{picture}
} \def\nind{\begin{picture}(9,8)
\put(0,0){\line(1,0){9}}
\put(3,0){\line(0,1){8}}
\put(6,0){\line(0,1){8}}
\put(1,0){{\it /}}
\end{picture}
}
\newcommand\luke[1]{ {\textcolor{red}{{\sc Luke}: {\em #1}}} }
\setlength{\baselineskip}{1.5\baselineskip}
\title{\bf Model-free causal inference of binary experimental data} \author{Peng Ding\footnote{Department of Statistics, University of California, Berkeley. Address for correspondence: 425 Evans Hall, Berkeley, California, 94720, USA. Email: \url{pengdingpku@berkeley.edu}.} ~and Luke W. Miratrix\footnote{Graduate School of Education and Department of Statistics, Harvard University.
}
} \date{} \maketitle
\begin{abstract}
For binary experimental data, we discuss randomization-based inferential procedures that do not need to invoke any modeling assumptions. We also introduce methods for likelihood and Bayesian inference based solely on the physical randomization, without any hypothetical super population assumptions about the potential outcomes. These estimators have some properties superior to moment-based ones, such as only giving estimates in regions of feasible support. Due to the lack of identification of the causal model, we also propose a sensitivity analysis approach which allows for the characterization of the impact of the association between the potential outcomes on statistical inference.
\noindent {\bfseries Keywords}: Attributable effect; Average causal effect; Bayesian inference; Completely randomized experiment; Likelihood; Sensitivity analysis. \end{abstract}
\section{Introduction}
In randomized experiments, the outcome of interest is often binary, in which case the resulting data can be summarized by a $2\times 2$ table. Testing for significant relationships in $2\times 2$ tables has a long history in statistics. \citet{yates1984tests} provided a comprehensive review of this topic, to which Sir David Cox commented, ``discussion of tests for the $2\times 2$ tables can be described as a saga, a story with deep implications.''
In this paper, we give an in-depth discussion of estimating causal effects for those $2\times 2$ tables generated by completely randomized experiments. Under the potential outcomes framework \citep{neyman::1923, rubin::1974}, each unit has pretreatment potential outcomes corresponding to the potential treatments that unit could receive. Finite population causal inference \citep[cf.][]{rosenbaum::2002, imbens::2015book} focuses on the experimental units at hand, and treats all potential outcomes as fixed with the randomization of treatment assignment as the only source of randomness. This view allows for weak modeling assumptions and inferential methods that are valid due to the randomization mechanism itself rather than any stated belief in a data generating process. Furthermore, by focusing on the finite population, the precision of the usual difference-in-means estimator is greater than those of comparable infinite population models. Unfortunately, the uncertainty of the estimator depends on the association between the potential outcomes, an unidentifiable quantity that can complicate finite population inference \citep{neyman::1923, imbens::2015book}.
Binary outcomes, however, lend enough structure to the problem that these issues can be somewhat circumvented. Because of the discrete nature of the problem, there are only a small number of possible types of units that could exist, which allows for two things. First, we can achieve sharper bounds on the variance of the difference-in-means estimator. Second, we can actually implement model-free likelihood and Bayesian procedures for the usual treatment effects. These estimators have superior performance to the usual moment estimators because they exploit the structure of the problem in order to limit possible estimates to a restricted parameter space. In particular, the observed data assign zero likelihood outside a well-defined region of possibilities and so such procedures will not return any of these impossible estimates. Moment estimators, on the other hand, could return such values.
It is well known that the association between the potential outcomes plays an important role in estimating the average causal effect. Different approaches have been used to address this difficulty. Some restrict attention to testing the sharp null hypothesis of zero causal effect for all experimental units \citep{fisher::1935, copas::1973}. Some enumerate all possible combinations of the potential outcomes in order to construct exact confidence intervals \citep{rigdon2015randomization, li2015exact}. Some derive bounds on the variances of the estimators over all possible randomizations using the marginal distributions \citep{robins::1988, aronow::2014, ding::2015, fogarty2016discrete}. Some assume non-negative individual causal effects, allowing causal effects to be estimated directly \citep{rosenbaum::2001}, or use structures such as constant shifts \citep{rosenbaum::2002} or dilations to dictate all the individual outcomes \citep{rosenbaum1999reduced}. Recent work on Bayesian inference imputes missing potential outcomes based on their posterior predictive distributions, which requires modeling the potential outcomes as Binomial samples from a hypothetical infinite population \citep{ding::2015}.
The methods we present in this paper are distinct from these; as a ``reasoned basis'' \citep{fisher::1935}, the randomization itself allows for obtaining a likelihood function without any external modeling assumptions. To introduce this concept, we begin in Section \ref{sec::mono} (after setting up notation and background in Section \ref{sec::notation}) with the simple case with monotonicity, i.e., no units are negatively affected by treatment. Under monotonicity all causal parameters are identifiable, making this process more easily understood. We then relax monotonicity in Section \ref{sec::no-mono} to allow for a sensitivity analysis for the variance of the difference-in-means estimator as well as for our new likelihood and Bayesian inference. We finally extend these approaches to the attributable effect \citep{rosenbaum::2001} in Section \ref{sec::attributable}, showing that inference of the attributable effect does not depend on the association between potential outcomes. We use a real example to illustrate the theory and methods in Section \ref{sec::illustration} and give some concluding remarks in Section \ref{sec::discussion}. All proofs have been relegated to the Appendix.
\section{Potential Outcomes, Causal Estimands, and Observed Data} \label{sec::notation}
Consider an experiment with $N$ units, a binary treatment $W$, and a binary outcome $Y$. Under the Stable Unit Treatment Value Assumption \citep{rubin::1980}, we define $Y_i(w)$ as the potential outcome of unit $i$ under treatment $w$, with $w=1$ for treatment and $w=0$ for control, respectively. Therefore, the potential outcomes form an $N\times 2$ matrix $\{ (Y_i(1), Y_i(0)) \}_{i=1}^N$, which is sometimes referred to as the ``Science'' \citep{rubin::2005}. With a binary outcome, there are only four types of individuals possible, defined by the pair $(Y_i(1), Y_i(0) )$ of potential outcomes. In particular, if we imagine $Y$ being a binary outcome of survival status, those with $(Y_i(1), Y_i(0)) = (1,1)$ would always survive, those with $(Y_i(1), Y_i(0)) = (0,0)$ would never survive regardless of treatment, and so forth. The treatment has a positive impact for those with $(Y_i(1), Y_i(0)) = (1,0)$ and a negative impact for those with $(Y_i(1), Y_i(0)) = (0,1)$. Because there are only four types of units, the full $N \times 2$ Science Table can be summarized by a $2\times 2$ table formed by the cell counts $N_{jk}=\#\{ i: Y_i(1)=j,Y_i(0)=k \}$ for $j$ and $k=0,1$. This summary Science Table (see Table~\ref{tb::science}) contains all the information about the causal relationship between the treatment and outcome.
Causal effects are defined as comparisons between the potential outcomes. On the difference scale $\tau_i = Y_i(1) - Y_i(0)$ is the individual-level causal effect for unit $i.$ Define $p_w=\sum_{i=1}^N Y_i(w)/N$ as the proportion of the potential outcome $Y_i(w)$ being one. Then the average causal effect is defined as $$ \tau = \frac{1}{N}\sum_{i=1}^N \tau_i = p_1-p_0 = \frac{N_{10} - N_{01}}{N} . $$ We focus on $\tau$. It is conceptually straightforward to extend our discussion to other causal measures \citep{robins::1988, ding::2015}.
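As a tiny illustration (ours, not from the paper), the estimand $\tau$ can be computed directly from a hypothetical summarized Science Table; the cell counts below are made up:

```python
def average_causal_effect(N11, N10, N01, N00):
    """tau = (N10 - N01) / N from the summarized Science Table."""
    N = N11 + N10 + N01 + N00
    return (N10 - N01) / N

# Hypothetical cell counts: 4 units helped, 1 harmed, out of N = 10.
tau = average_causal_effect(N11=3, N10=4, N01=1, N00=2)  # 0.3
```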
Consider a completely randomized experiment with $N_1$ units receiving treatment and $N_0$ control. The observed outcomes are deterministic functions of the treatment assignment and potential outcomes, i.e., $Y_i^\text{obs} = W_i Y_i(1) + (1-W_i) Y_i(0)$. Because both the treatment assignments and observed outcomes are binary, there are four observed types of the units classified by $(W_i, Y_i^\text{obs})$, which gives a different $2\times 2$ table formed by the cell counts $n_{wy}^\text{obs} = \#\{ i: W_i=w, Y_i^\text{obs} = y \}$ for $w=0,1$ and $y=0,1$. See Table \ref{tb::obs}. This table is distinct from the unknown Science Table \ref{tb::science}.
\begin{table}[t] \parbox{.45\linewidth}{ \centering \caption{The summarized Science Table}\label{tb::science}
\begin{tabular}{|c|cc|c|} \hline
& $Y(1)=1$ & $Y(1)=0$ & row sum \\ \hline $Y(0)=1$ & $N_{11}$ & $N_{01} $ & $S = N_{11}+N_{01}$ \\ $Y(0)=0$ & $N_{10}$ & $N_{00} $ & $N-S$ \\ \hline \end{tabular} }
\parbox{.45\linewidth}{ \centering \caption{The observed Data}\label{tb::obs}
\begin{tabular}{|c|cc|c|} \hline
& $Y^{\text{obs}}=1$ & $Y^{\text{obs}}=0$ & row sum \\ \hline $W=1$ & $n_{11}^\text{obs} $ & $n_{10}^\text{obs} $ & $N_1$\\ $W=0$ & $n_{01}^\text{obs} $ & $n_{00}^\text{obs} $ & $N_0$\\ \hline \end{tabular} } \end{table}
Importantly, the potential outcomes, the cell counts $N_{jk}$'s, and the causal estimand $\tau $ are all fixed. The observed cell counts $n_{wy}^\text{obs}$'s, however, are random, but the randomness comes solely from the physical randomization of the treatment assignment.
Classic approaches use the physical randomization to justify exact tests for sharp null hypotheses that fully specify the associated Science Table \citep{fisher::1935, copas::1973, rosenbaum::2002, imbens::2015book}. The sharp null formulation can be further utilized to construct exact confidence intervals for causal effects by inverting randomization tests \citep{rosenbaum::2001, rigdon2015randomization, li2015exact}. We instead evaluate the repeated sampling properties of the estimators of causal effects, and then derive likelihood-based and Bayesian inference without imposing any modeling assumptions whatsoever.
\section{Inference Under Monotonicity} \label{sec::mono}
We first discuss an important simplifying case where the potential outcomes satisfy monotonicity: \begin{assumption} \label{assume::mono} (Monotonicity) $Y_i(1) \geq Y_i(0)$ for each unit $i$. \end{assumption}
Monotonicity means that treatment is not harmful to any unit, which rules out the existence of potentially harmed units with $(Y_i(1), Y_i(0)) = (0,1)$, making $N_{01}=0$. The case with $Y_i(1) \leq Y_i(0)$ for all $i$ is analogous. Monotonicity is not refutable based on the observed data as long as the treatment is not harmful to the outcome on average. Monotonicity is a strong assumption: it imposes a maximal correlation between the potential outcomes $Y(1) $ and $ Y(0)$, and guarantees the identifiability of all the cell counts $N_{jk}$'s, as described by Proposition~\ref{thm::identi-mono}:
\begin{prop} \label{thm::identi-mono} Under monotonicity, $N_{01}=0$ and we can identify (i.e., express parameters as expectations of observed data) the $N_{jk}$'s by \[ N_{11} = E\left( \frac{N}{N_0} n_{01}^\text{obs} \right),\quad N_{00} = E\left( \frac{N}{N_1} n_{10}^\text{obs} \right),\quad N_{10} = E\left( N - \frac{N}{N_0} n_{01}^\text{obs} - \frac{N}{N_1} n_{10}^\text{obs} \right). \] \end{prop}
Proposition \ref{thm::identi-mono} immediately results in unbiased moment estimators for the $N_{jk}$'s made by plugging in sample moments. In particular, $\widehat{N}_{10} = N - (N/N_0) n_{01}^{\text{obs}} - (N/N_1) n_{10}^{\text{obs}}$ and $$ \widehat{\tau} = \frac{\widehat{N}_{10}}{N} = 1 - \frac{n_{01}^\text{obs} }{N_0} - \frac{ n_{10}^\text{obs} }{N_1} = \frac{ n_{11}^\text{obs} }{N_1} - \frac{n_{01}^\text{obs} }{N_0} \equiv \widehat{p}_1 - \widehat{p}_0, $$ where $\widehat{p}_1$ and $\widehat{p}_0$ are the observed proportions of the outcomes being one under treatment and control, respectively. The mean and variance of $\widehat{\tau} $ then follow by extending \citet{neyman::1923}'s result. Monotonicity allows for estimation of the correlation of potential outcomes, giving:
\begin{prop} \label{thm::moments-mono} The randomization distribution of $\widehat{\tau} $ has mean $\tau $ and variance \begin{eqnarray} \text{var}(\widehat{\tau} ) = \frac{N}{N-1} \left\{ \frac{ p_1(1-p_1) }{ N_1 } + \frac{ p_0(1-p_0) }{ N_0 } - \frac{ \tau (1-\tau ) }{ N} \right\} . \label{eq::var-mono} \end{eqnarray} The variance can be estimated by plugging in: \begin{eqnarray} \widehat{V} = \frac{N}{N-1} \left\{ \frac{ \widehat{p}_1(1-\widehat{p}_1) }{ N_1 } + \frac{ \widehat{p}_0(1-\widehat{p}_0) }{ N_0 } - \frac{ \widehat{\tau} (1-\widehat{\tau} ) } { N} \right\} . \label{eq::var-esti-mono} \end{eqnarray} Furthermore, $ \left( \widehat{\tau} - \tau \right) / \widehat{V} ^{1/2} \rightarrow \mathcal{N}( 0, 1 )$ in distribution. \end{prop}
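A small Python sketch (ours, not the authors' code) of the plug-in estimates above, computed from the observed cell counts $n_{wy}^{\text{obs}}$:

```python
def neyman_mono(n11, n10, n01, n00):
    """Plug-in estimates under monotonicity from the observed 2x2 table:
    the difference in means tau-hat and the variance estimator V-hat."""
    N1, N0 = n11 + n10, n01 + n00
    N = N1 + N0
    p1_hat, p0_hat = n11 / N1, n01 / N0
    tau_hat = p1_hat - p0_hat
    V_hat = N / (N - 1) * (p1_hat * (1 - p1_hat) / N1
                           + p0_hat * (1 - p0_hat) / N0
                           - tau_hat * (1 - tau_hat) / N)
    return tau_hat, V_hat
```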
Unlike the classic \citet{neyman::1923} variance expression, all terms in expression \eqref{eq::var-mono} are identifiable. Although a moment estimator with an explicit form can be useful to illustrate sources of information, it might not make full use of the information and can sometimes give estimates outside of the parameter space. An alternative approach is to utilize likelihood and Bayesian inference for the parameters of interest, which restricts our attention to only those values that are possible. Now because $\{ (Y_i(1), Y_i(0))\}_{i=1}^N$ are fixed numbers, we cannot write down the likelihood function based on the usual Binomial models. We can, however, write it down according to an urn model induced by the completely randomized experiment. In particular, view the finite population as a fixed urn containing three types of balls corresponding to the three types of units defined by $(Y(1),Y(0)) = (1,1), (1,0),$ and $(0,0)$. We have $N_{11}$ balls of type $(1,1)$, $N_{10}$ balls of type $(1,0)$, and $N - N_{11} - N_{10}$ balls of type $(0,0)$. We can thus parametrize the population with only $N_{11}$ and $N_{10}$. A completely randomized experiment is then equivalent to drawing $N_1$ balls from this urn to form the treatment arm, and using the remaining $N_0$ balls to form the control arm. This allows for writing down the likelihood based on the observed data as a multivariate Hypergeometric distribution below.
\begin{theorem} \label{thm::like-mono} Under monotonicity, the likelihood function of $(N_{10}, N_{11})$ is \begin{eqnarray*}
\binom{ N_{11} }{ N_{11} - n_{01}^\text{obs} } \binom{N_{10}}{ n_{11}^\text{obs} + n_{01}^\text{obs} - N_{11} } \binom{ N - N_{10} - N_{11} }{ n_{10}^\text{obs} } \Big / \binom{N}{N_1} ,
\end{eqnarray*} for any $\left( N_{10}, N_{11} \right)$ in the region \begin{eqnarray} \label{eq::region}
\left\{ \left( N_{10}, N_{11} \right) : n_{01}^\text{obs} \leq N_{11} \leq n_{11}^\text{obs} + n_{01}^\text{obs} \leq N_{10}+N_{11} \leq N - n_{10}^\text{obs} \right\}. \end{eqnarray} The likelihood is zero elsewhere. \end{theorem}
There are several curious aspects and consequences to this theorem which we now discuss. First, before obtaining data, the condition $N_{10}+N_{11}+N_{00}=N$ restricts $(N_{10}, N_{11})$ to take $(N+2)(N+1)/2$ possible values, and $\tau$ can take values $k/N$ for any integer $k\in [-N,N].$ After observing the data, $(N_{10}, N_{11})$ can take only $(n_{11}^\text{obs}+1)(n_{00}^\text{obs}+1)<(N+2)(N+1)/2$ possible values, and there are at most $n_{11}^\text{obs} + n_{00}^\text{obs}+1$ possible values for $\tau $, a fact noticed by \citet{rigdon2015exact} from a different perspective.
Second, there are no modeling assumptions on the outcome. The likelihood is completely driven by the physical randomization. This idea is not entirely new: such an urn model was used in \cite{neyman::1923}'s seminal causal inference paper for deriving the unbiased moment estimator and confidence interval for $\tau $.
Third, the above allows for a maximum likelihood estimate of $\tau$, obtained by maximizing the likelihood over all possible $(N_{10}, N_{11})$. This likelihood function can also play a central role in model-free Bayesian inference. For example, if we put a uniform prior on the $(N+2)(N+1)/2$ feasible points of $(N_{10}, N_{11})$, the posterior distribution of $(N_{10}, N_{11})$ concentrates only on the $(n_{11}^\text{obs}+1)(n_{00}^\text{obs}+1)$ points within region (\ref{eq::region}) and is proportional to the likelihood. If we have prior information other than the uniform distribution, we could also incorporate it into our Bayesian inference. Based on the posterior distribution of $(N_{10}, N_{11})$, it is straightforward to obtain the posterior distribution of $\tau $.
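To make the theorem concrete, here is a Python sketch (ours) that evaluates this likelihood over the feasible region and maximizes it; under monotonicity $\tau = N_{10}/N$:

```python
from math import comb

def likelihood_mono(N10, N11, n11, n10, n01, n00):
    """Randomization likelihood of (N10, N11) under monotonicity;
    zero outside the feasible region."""
    N1, N0 = n11 + n10, n01 + n00
    N = N1 + N0
    if not (n01 <= N11 <= n11 + n01 <= N10 + N11 <= N - n10):
        return 0.0
    return (comb(N11, N11 - n01) * comb(N10, n11 + n01 - N11)
            * comb(N - N10 - N11, n10)) / comb(N, N1)

def mle_tau(n11, n10, n01, n00):
    """Maximum likelihood estimate of tau = N10 / N."""
    N = n11 + n10 + n01 + n00
    _, best_N10 = max((likelihood_mono(N10, N11, n11, n10, n01, n00), N10)
                      for N11 in range(N + 1) for N10 in range(N + 1 - N11))
    return best_N10 / N
```

For example, with one treated unit observed at $Y^{\text{obs}}=1$ and one control unit at $Y^{\text{obs}}=0$, the likelihood is maximized when every unit is of type $(1,0)$.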
\section{Inference Without Monotonicity} \label{sec::no-mono}
We next relax the monotonicity assumption. Without monotonicity, the unknown parameters in the Science Table, $(N_{11}, N_{10}, N_{01}, N_{00})$, are no longer identifiable by the observed data. This introduces an additional complication from before, but the overall intuition is the same. Without identifiability of $(N_{11}, N_{10}, N_{01}, N_{00})$, the sampling variance of $\widehat{\tau}$ cannot be identified by the observed data, the likelihood function will be flat over a region with multiple points, and Bayesian inference will be strongly driven by the prior distribution. We can, however, weaken monotonicity in such a way that preserves identifiability in a sensitivity analysis approach. This can also be used to generate estimation regions rather than point-estimates. Finally, this approach also allows for continued use of the likelihood approach discussed above.
The key insight is that, for a known $N_{01}$, all the cell counts of $N_{jk}$'s are identifiable, allowing us to parameterize our urn model with $(N_{10}, N_{11})$ as before. We therefore choose $N_{01}$ as the sensitivity parameter with $N_{01} = 0$ corresponding to monotonicity.
We first present some extensions of the previous propositions, and then discuss how to use them for this sensitivity analysis approach to variance estimation. We also extend the likelihood and Bayesian inference procedure from before.
\begin{prop} \label{thm::identifiability-no-mono} When $N_{01}$ is known, we can identify the $N_{jk}$'s by \begin{eqnarray*} N_{11} = E\left( \frac{N}{N_0} n_{01}^\text{obs} - N_{01} \right),\quad N_{00} = E\left( \frac{N}{N_1} n_{10}^\text{obs} - N_{01} \right), \quad N_{10} = E\left( N+N_{01}- \frac{N}{N_0} n_{01}^\text{obs} - \frac{N}{N_1} n_{10}^\text{obs} \right). \end{eqnarray*} \end{prop} The above derives from the marginal distributions of the potential outcomes imposing weak restrictions on the association. This restriction comes from the data being binary.
\begin{prop} \label{thm::bounds-sensitivity} The number of potentially harmed units, $N_{01}$, is bounded by \begin{eqnarray} \max(0, - N\tau ) \leq N_{01} \leq \min\{ Np_0, N(1-p_1) \} . \label{eq::frechet} \end{eqnarray} If we assume a non-negative correlation between the potential outcomes, the bounds become $$ \max(0, - N\tau ) \leq N_{01}\leq Np_0(1-p_1). $$ If we further assume that non-negative average causal effect $\tau \geq 0$, then the bounds become \begin{eqnarray} \label{eq::bound-n01} 0\leq N_{01}\leq Np_0(1-p_1) . \end{eqnarray} \end{prop}
The bounds in \eqref{eq::frechet} are the Fr\'echet--Hoeffding bounds \citep[cf.][]{nelsen2007introduction} for $N_{01}$ based on the marginal distributions of the potential outcomes. In many realistic cases, it seems plausible to assume a nonnegative association between the potential outcomes. Without loss of generality, we assume that our data have $\widehat{\tau}>0$, and therefore we either assume monotonicity or conduct a sensitivity analysis within the empirical range of \eqref{eq::bound-n01}.
\begin{prop} \label{thm::variance-bounds-non-mono} With a known $N_{01}$, the variance of $\widehat{\tau}$ is \begin{equation} \text{var}(\widehat{\tau} ) = \frac{N}{N-1} \left\{ \frac{p_1(1-p_1)}{N_1} + \frac{p_0(1-p_0)}{N_0} - \frac{ \tau (1-\tau ) }{N} - \frac{2N_{01}}{N^2} \right\} . \label{eq::var-for-N01} \end{equation} The bounds of the above variance over the possible values of $N_{01}$ as delineated by region \eqref{eq::bound-n01} are \begin{equation*} \frac{N}{N-1} \left\{ \frac{\frac{N_0}{N} p_1(1-p_1)}{N_1} + \frac{\frac{N_1}{N} p_0(1-p_0)}{N_0} \right\} \leq \text{var}(\widehat{\tau} ) \leq \frac{N}{N-1} \left\{ \frac{p_1(1-p_1)}{N_1} + \frac{p_0(1-p_0)}{N_0} - \frac{\tau (1-\tau )}{N} \right\} . \end{equation*} \end{prop}
The upper bound of $\text{var}(\widehat{\tau})$ corresponds to monotonicity, and the lower bound corresponds to uncorrelated potential outcomes.
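For a sensitivity analysis this variance formula is straightforward to evaluate over a grid of $N_{01}$ values. A plug-in sketch of \eqref{eq::var-for-N01} (function name ours):

```python
def var_tau_hat(n11, n10, n01, n00, N01):
    """Plug-in variance of tau_hat for a hypothesized N01 (eq. var-for-N01)."""
    N1, N0 = n11 + n10, n01 + n00
    N = N1 + N0
    p1, p0 = n11 / N1, n01 / N0
    tau = p1 - p0
    return (N / (N - 1)) * (p1 * (1 - p1) / N1 + p0 * (1 - p0) / N0
                            - tau * (1 - tau) / N - 2 * N01 / N ** 2)
```

The variance decreases linearly in $N_{01}$, so the monotonicity value $N_{01}=0$ always gives the most conservative estimate.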
\subsection{Variance estimation in a sensitivity analysis}
Although $\tau$ depends only on the marginal distributions of the potential outcomes, the variance of $\widehat{\tau}$ depends further on the association between the potential outcomes. \citet{ding::2015} showed that (\ref{eq::var-mono}) is an upper bound for the true sampling variance of $\widehat{\tau}$ without monotonicity. However, this result does not show explicitly the impact of the association between the potential outcomes on the variability of the estimator for $\tau$. Proposition~\ref{thm::variance-bounds-non-mono} does. In particular, we can conduct a sensitivity analysis by varying $N_{01}$ within \eqref{eq::bound-n01} to get a series of variance estimators according to \eqref{eq::var-for-N01}. If we believe that $N_{01}$ is in a specific range, we can take the maximum and minimum of the variances as a range of possible uncertainty estimates. Generally, as $N_{01}$ increases the variance goes down; the most conservative (largest) variance estimate corresponds to monotonicity.
\subsection{Likelihood and Bayesian inference} The discussion above allows for getting sharper estimates on the variance of the classic moment estimators, as compared to the classic Neyman approach. We can also extend the likelihood approach shown for monotonicity in a similar fashion to obtain estimators restricted to the support of the parameter space. For a fixed $N_{01}$, the likelihood function, based on an urn model with four types of balls, is given by the following theorem:
\begin{theorem} \label{thm::like-non-mono} Given a fixed $N_{01}$, the likelihood function for $(N_{10}, N_{11})$ is \begin{eqnarray} \label{eq::like-non-mono} \sum_{x\in \mathcal{F}} \binom{N_{11}}{x} \binom{N_{10}}{ n_{11}^\text{obs} - x } \binom{N_{01}}{ N_{01}+N_{11}-n_{01}^\text{obs}-x} \binom{N-N_{11}-N_{10}-N_{01}}{ n_{10}^\text{obs}+n_{01}^\text{obs}+x-N_{01}-N_{11} } \Big / \binom{N}{N_1} , \end{eqnarray} where the feasible region of the above summation is $\mathcal{F} = \left\{ x : L \leq x \leq U \right\}$ with \begin{eqnarray*} L = \max( 0, n_{11}^\text{obs}-N_{10}, N_{11}-n_{01}^\text{obs}, N_{01}+N_{11}-n_{10}^\text{obs}-n_{01}^\text{obs} ) ,\\ U = \min( N_{11}, n_{11}^\text{obs}, N_{01}+N_{11}-n_{01}^\text{obs}, N-N_{10}-n_{10}^\text{obs}-n_{01}^\text{obs} ). \end{eqnarray*}
Note that the $x$ in the sum in \eqref{eq::like-non-mono} represents the number of always-survivors randomized to the treatment group; the formula marginalizes over this to get the overall likelihood. When $N_{01} = 0$, the feasible region of $x$ collapses to the point $x=N_{11} - n_{01}^\text{obs}$, and the likelihood function in Theorem \ref{thm::like-non-mono} reduces to the one in Theorem \ref{thm::like-mono}. The proof of Theorem \ref{thm::like-non-mono} in the Appendix shows that, for fixed $0\leq N_{01} \leq N\widehat{p}_0 (1-\widehat{p}_1)$, the likelihood is zero outside the following region of $(N_{10}, N_{11})$: \begin{eqnarray} \label{eq::feasible} \begin{array}{rll} \max(0, n_{01}^\text{obs} - N_{01} ) \leq & N_{11} & \leq \min( n_{01}^\text{obs} + n_{11}^\text{obs}, N-n_{00}^\text{obs}-N_{01} ),\\ 0\leq &N_{10} & \leq N-n_{01}^\text{obs}-n_{10}^\text{obs},\\ \max(n_{11}^\text{obs}+n_{01}^\text{obs}-N_{01}, n_{11}^\text{obs})\leq & N_{10}+N_{11} & \leq N-n_{10}^\text{obs}. \end{array} \end{eqnarray} \end{theorem}
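The likelihood \eqref{eq::like-non-mono} involves only binomial coefficients and a short sum over the feasible $x$, so it is cheap to evaluate exactly. A direct transcription in Python using the standard library's `math.comb` (the guard for an infeasible $N_{00}$ and the function name are ours):

```python
from math import comb

def likelihood(N11, N10, N01, n11, n10, n01, n00):
    """Likelihood of (N10, N11) given a fixed N01 and the observed cell counts,
    marginalizing over x, the number of always-survivors assigned to treatment."""
    N1, N0 = n11 + n10, n01 + n00
    N = N1 + N0
    N00 = N - N11 - N10 - N01
    if N00 < 0:
        return 0.0  # infeasible Science table
    lo = max(0, n11 - N10, N11 - n01, N01 + N11 - n10 - n01)
    hi = min(N11, n11, N01 + N11 - n01, N - N10 - n10 - n01)
    total = 0
    for x in range(lo, hi + 1):  # empty when hi < lo, giving likelihood 0
        total += (comb(N11, x) * comb(N10, n11 - x)
                  * comb(N01, N01 + N11 - n01 - x)
                  * comb(N00, n10 + n01 + x - N01 - N11))
    return total / comb(N, N1)
```

With $N_{01}=0$ the sum has the single term $x = N_{11} - n_{01}^\text{obs}$, matching the monotonicity likelihood.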
We can then do a sensitivity analysis to see how the likelihood function and the maximum likelihood estimator change as we increase $N_{01}$. These curves can also be calculated for any estimand of interest as the population is fully specified by $( N_{11}, N_{10}) $, given $N_{01}$. For Bayesian inference, if we impose a uniform prior on $(N_{10}, N_{11})$, the posterior distribution of $(N_{10}, N_{11})$ is proportional to (\ref{eq::like-non-mono}). This immediately gives posterior distributions of $\tau$.
\citet{copas::1973} treated \eqref{eq::like-non-mono} as a likelihood function for $(N_{11}, N_{10}, N_{01})$, and observed its pathological behaviors due to the unidentifiability issue. An alternative Bayesian approach might impose a prior distribution on the sensitivity parameter $N_{01}$. Regardless of the identifiability issue, the posterior distributions of the parameters of interest will always be proper because of finite support. \citet{watson2014complications} gave a detailed discussion of Bayesian inference by imposing prior distributions on $(N_{11}, N_{10}, N_{01})$ and making connections to posterior predictive checks \citep{rubin::1984, rubin::1998}. However, inference might then be driven by the prior distribution of $N_{01}$, a parameter unidentifiable from the data. Therefore, we recommend the sensitivity analysis approach in both likelihood and Bayesian inference to explicitly show the impact of the correlation between potential outcomes.
\section{The Attributable Effect and the Treatment Effect on the Treated} \label{sec::attributable}
In the previous sections, we focused on the average treatment effect, which is a fixed parameter depending only on the Science table. In practice, other causal quantities may be of scientific interest. For instance, \cite{rosenbaum::2001} proposed to estimate the effect attributable to the treatment, $$ A = \sum_{i=1}^N W_i \tau_i, $$ which is closely related to the average treatment effect on the treated units $ \tau ^W = \sum_{i=1}^N W_i \tau_i / N_1 = A/N_1. $ Both causal quantities $A$ and $\tau^W$ depend on the treatment assignment as well as the Science table, and thus they are themselves random variables. Therefore, as \cite{rosenbaum::2001} suggested, we need to extend the traditional concepts of point and interval estimation to point and interval prediction of random variables in frequentist inference. Because $A$ and $\tau^W$ differ only by the fixed scaling factor $N_1$, we discuss only inference of the attributable effect $A.$
As shown in the proof of Theorem \ref{thm::like-non-mono}, the attributable effect can be written as \begin{eqnarray} A = n_{11}^\text{obs} + n_{01}^\text{obs} - N_{01} - N_{11} = n_{11}^\text{obs} + n_{01}^\text{obs} - S, \label{eq::attributable} \end{eqnarray} with $S=N_{01} + N_{11}$ defined in Table \ref{tb::science}. Note that $S$ is a parameter depending on the Science table (see Table \ref{tb::science}). Formula \eqref{eq::attributable} shows a linear relationship between $A$ and $S$, which makes statistical inference of $A$ simpler via statistical inference of $S$. To be more specific, if we had a point estimator $\widehat{S}$ for $S$, then we would have a point predictor $\widehat{A} = n_{11}^\text{obs} + n_{01}^\text{obs} - \widehat{S}$ for $A$. Furthermore, if we had an interval estimator $[\widehat{S}_l, \widehat{S}_u]$ for $S$, then we would have an interval predictor $[\widehat{A}_l, \widehat{A}_u]$ for $A$, where $\widehat{A}_l = n_{11}^\text{obs} + n_{01}^\text{obs} - \widehat{S}_l$ and $\widehat{A}_u= n_{11}^\text{obs} + n_{01}^\text{obs} - \widehat{S}_u$. We can thus separate out and capture the randomness in our target estimand with observed data, reducing the statistical uncertainty to a classic parameter estimation problem.
\subsection{Exact inference}
Randomization induces a Hypergeometric distribution $n_{01}^\text{obs}\sim H_S$, where $H_S$ has probability mass function $P(H_S=h) = \binom{S}{h}\binom{N-S}{N_0-h}/\binom{N}{N_0}$ for $ \max(0, S-N_1) \leq h \leq \min(S,N_0)$, because $n_{01}^\text{obs}$ counts how many of the $S$ units with $Y(0)=1$ fall in the control arm of size $N_0$. This Hypergeometric distribution depends on the unknown parameter $S$, and we can thus use the number of positive outcomes under control, $n_{01}^\text{obs}$, as our observed statistic for conducting inference on $S$. Fortunately, inference on $S$ based on the Hypergeometric $n_{01}^\text{obs}$ is a classical statistical problem. For example, we can conduct a series of tests $H_{0s}: S=s$, and calculate the $p$-value for each fixed $s$ by measuring the extremeness of $n_{01}^\text{obs}$ given $S$. A choice of the two-sided $p$-value is \begin{eqnarray} \label{eq::pvalue} p(s) = \sum_{ P(H_s=h) \leq P(H_s=n_{01}^\text{obs}) } P(H_s=h) , \end{eqnarray} i.e., the sum of all the probability masses that are smaller than or equal to the probability mass of the observed value of the Hypergeometric random variable. This effectively orders the possible values of $H_s$, given $s$, by their likelihood, and the sum in \eqref{eq::pvalue} captures the total probability mass in the tails given this ordering. The Hodges--Lehmann-type point estimator for $S$ corresponds to the $s$ values that attain the maximum $p$-value \citep{hodges1963estimates, rosenbaum::2002}; the point estimator may not be unique due to discreteness. The $1-\alpha$ interval estimator contains all the $s$ values such that $p(s) > \alpha$.
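Inverting this family of tests is a few lines of code. In the sketch below (function names ours) the pmf is written with $N_0$ draws, since $n_{01}^\text{obs}$ counts the $S$-units that land in the control arm; a small tolerance guards against ties among floating-point masses:

```python
from math import comb

def pmf(h, s, N, N0):
    """P(H_s = h): of the s units with Y(0)=1, h fall in the control arm."""
    return comb(s, h) * comb(N - s, N0 - h) / comb(N, N0)

def p_value(s, h_obs, N, N0):
    """Two-sided p-value: total mass of values no more likely than h_obs."""
    lo, hi = max(0, s - (N - N0)), min(s, N0)
    p_obs = pmf(h_obs, s, N, N0)
    return sum(pmf(h, s, N, N0) for h in range(lo, hi + 1)
               if pmf(h, s, N, N0) <= p_obs + 1e-12)
```

Scanning $s$ and keeping the values attaining the maximum $p$-value gives the Hodges--Lehmann-type estimate; for the data of Section~\ref{sec::illustration}, this recovers $S \in \{12, 13, 14\}$, i.e., $A \in \{9, 10, 11\}$.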
The choice of two-sided $p$-value in \eqref{eq::pvalue} leads to the same procedure as \cite{rosenbaum::2001} and \cite{rigdon2015randomization}. We note, however, that the classical literature on Fisher's exact test also proposed other choices of two-sided $p$-values based on a Hypergeometric random variable \citep[cf.][page 92]{agresti2013categorical}. Moreover, we could alternatively construct confidence intervals for $S$ directly, based on the Hypergeometric $n_{01}^\text{obs}$, without inverting tests; see \citet{wang2015exact} for classical methods and recent developments in constructing confidence intervals for Hypergeometric parameters. Overall, the relationship \eqref{eq::attributable} allows for constructing different point and interval estimators for $A$ based on different approaches for $S$, of which the previous approaches of \citet{rosenbaum::2001} and \citet{rigdon2015randomization} are special cases. Furthermore, to make exact inference of the attributable effect, \cite{rosenbaum::2001} invoked monotonicity, but our discussion above does not require it: the inference is identical with or without monotonicity.
\subsection{Neyman-type repeated sampling evaluation}
A natural estimator for $A$ is $N_1\widehat{\tau}$. The following proposition shows that $N_1\widehat{\tau}$ is an unbiased predictor of $A$, and the mean squared error for this prediction depends only on the marginal distribution of $Y(0)$.
\begin{prop} \label{thm::A-neyman} Over all possible randomizations, $E(A - N_1 \widehat{\tau} ) = 0$ and \begin{eqnarray} \text{var}( A - N_1 \widehat{\tau} ) = \frac{NN_1}{N_0} S_0^2= \frac{N^2N_1}{N_0(N-1)} p_0(1-p_0), \label{eq::attributable-effect} \end{eqnarray} where $S_0^2$ is the finite population variance of the control potential outcome. Therefore, $A$ can be unbiasedly predicted by $N_1\widehat{\tau} $ with estimated mean squared error $ N^2N_1\widehat{p}_0(1-\widehat{p}_0) / \{ N_0(N-1) \} $. \end{prop}
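A plug-in sketch of this predictor and its estimated mean squared error (function name ours):

```python
def predict_A(n11, n10, n01, n00):
    """Unbiased predictor N1 * tau_hat of the attributable effect A,
    with the estimated mean squared error from the proposition."""
    N1, N0 = n11 + n10, n01 + n00
    N = N1 + N0
    tau_hat = n11 / N1 - n01 / N0
    p0_hat = n01 / N0
    A_hat = N1 * tau_hat
    mse_hat = N ** 2 * N1 * p0_hat * (1 - p0_hat) / (N0 * (N - 1))
    return A_hat, mse_hat
```

For the data of Section~\ref{sec::illustration}, $(18, 14, 5, 16)$, this gives $\widehat{A} = N_1\widehat{\tau} \approx 10.38$.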
Proposition \ref{thm::A-neyman} does not rely on monotonicity. Moreover, the first identity in \eqref{eq::attributable-effect} also holds for general outcomes. Interestingly, the variance formula \eqref{eq::attributable-effect} does not depend on the association between the potential outcomes, which was hinted at by \citet{robins::1988} and \citet{hansen2009attributing}. In particular, by allowing the target of estimation to vary in a randomized experiment, one can seemingly avoid the unidentifiability issue, but the resulting analysis is then conditional, in some sense, on the realized assignment.
\subsection{Bayesian inference}
Bayesian posterior inference for $A$ is straightforward conditional on the observed data. Because of the linear relationship between $A$ and $S$ in \eqref{eq::attributable}, the posterior distribution of $S=N_{01} + N_{11}$ determines the posterior distribution of $A.$ Therefore, with fixed $N_{01}$ (zero under monotonicity and positive for a sensitivity analysis), obtaining the posterior distribution of $A$ is straightforward once we obtain the posterior distribution of $N_{11}.$
\section{Illustration} \label{sec::illustration}
We re-analyze the data in \citet[][pp. 191]{rosenbaum::2002} concerning death in the London underground. In the London underground, some train stations have a drainage pit below the tracks. When an ``incident'' happens (i.e., a passenger falls, jumps or is pushed from the station platform), such a pit is a place to escape contact with the wheels of the train. Researchers are interested in the mortality in stations with and without such a pit. In stations without a pit, only $5$ lived out of $21$ recorded ``incidents.'' For ``incidents'' in stations with a pit, $18$ out of $32$ lived. Therefore, the observed data can be summarized by $(n_{11}^\text{obs}, n_{10}^\text{obs}, n_{01}^\text{obs}, n_{00}^\text{obs}) = (18, 14, 5, 16)$, viewing ``pit'' versus ``no pit'' as treatment versus control, and life as the outcome. For illustration, we view this data set as from a hypothetical completely randomized experiment, ignoring any issues of confounding.
Under monotonicity, the moment-based estimator is $\widehat{\tau} = 0.324$, i.e., we estimate that the chance of survival is about $32$ percentage points higher for stations with a pit. Using the variance estimator in \eqref{eq::var-esti-mono} we end up with a confidence interval of $[0.106, 0.543]$, which is $13\%$ narrower than the usual one of $[0.072, 0.577]$. See the first row of Table~\ref{tb::rosenbaum}.
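These numbers can be reproduced with a short plug-in computation; the sketch below takes the improved estimator to be the plug-in version of the variance formula \eqref{eq::var-for-N01} evaluated at $N_{01}=0$, and Neyman's estimator to be the same expression without the $\tau(1-\tau)/N$ correction (function names ours):

```python
from math import sqrt

def tau_intervals(n11, n10, n01, n00, z=1.96):
    """tau_hat with Neyman-style and improved (N01 = 0) confidence intervals."""
    N1, N0 = n11 + n10, n01 + n00
    N = N1 + N0
    p1, p0 = n11 / N1, n01 / N0
    tau = p1 - p0
    base = p1 * (1 - p1) / N1 + p0 * (1 - p0) / N0
    v_neyman = N / (N - 1) * base
    v_improved = N / (N - 1) * (base - tau * (1 - tau) / N)
    neyman = (tau - z * sqrt(v_neyman), tau + z * sqrt(v_neyman))
    improved = (tau - z * sqrt(v_improved), tau + z * sqrt(v_improved))
    return tau, neyman, improved
```

With the counts $(18, 14, 5, 16)$ this recovers $\widehat{\tau} = 0.324$ with intervals $[0.072, 0.577]$ and $[0.106, 0.543]$, matching the first row of Table~\ref{tb::rosenbaum}.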
We then conduct a sensitivity analysis on monotonicity by varying the value of $N_{01}$, where $N_{01}=0$ corresponds to monotonicity, $N_{01}=5$ corresponds to independent potential outcomes, and $N_{01}=2$ is a value between these two extreme cases. Rows 2 and 3 of Table~\ref{tb::rosenbaum} show estimates and associated confidence intervals for these two positive values of $N_{01}$. The resulting intervals are narrower: if we believe that some units would be harmed by the treatment, we become more certain about the average causal effect. Our improved variance estimator \eqref{eq::var-esti-mono} and the Bayesian approach with a uniform prior both provide improved inference. The moment estimator is close to the Bayesian posterior modes, but there is a slight shift of about $2$ percentage points.
Figure \ref{fg::rosenbaum-post} shows the posterior distributions of $\tau $ with $N_{01}=0,2$ and $5$. The posterior distribution has a higher peak and lighter tails with larger $N_{01}$. This conforms to the frequentist property that the variance of $\widehat{\tau} $ becomes larger when $N_{01}$ gets smaller, with monotonicity being the extreme case.
\begin{table}[t] \centering \caption{Moment and Bayes estimators with $(n_{11}^\text{obs}, n_{10}^\text{obs}, n_{01}^\text{obs}, n_{00}^\text{obs}) = (18, 14, 5, 16)$. Each of columns 2--4 shows the point estimator, interval estimator and its length.} \label{tb::rosenbaum} \begin{tabular}{cccc} $N_{01}$& Neyman's variance & Improved variance & Bayes \\ $0$ &$0.324\ [0.072, 0.577]\ 0.505$& $0.324\ [0.106, 0.543]\ 0.437$ & $0.301\ [0.075, 0.509]\ 0.434$\\ $2$ &same as above & $0.324\ [0.119, 0.530]\ 0.411$& $0.301\ [0.075, 0.490]\ 0.415$\\ $5$ &same as above & $0.324\ [0.141, 0.508]\ 0.367$& $0.301\ [0.094, 0.472]\ 0.378$ \end{tabular} \end{table}
Regarding the attributable effect under monotonicity,
the Hodges--Lehmann-type estimator is $9$, $10$ or $11$, and the $95\%$ interval estimate is $[2,16]$. The posterior mode for $A$ is $10$, and the $95\%$ highest probability interval for $A$ is $[1,16]$. Figure \ref{fg::rosenbaum-attri} compares the posterior probabilities and standardized $p$-values for testing $A=a$, showing that they have similar shapes. The moment estimator for $A$ is $10.38$ with confidence interval $[1.56, 19.20]$. The moment estimator is outside of the range of the parameter because $A$ must be an integer. Worse, the associated interval estimate is wider, with an upper limit larger than $n_{11}^\text{obs} = 18$, the maximum possible value of $A$ under monotonicity due to $A=n_{11}^\text{obs} + n_{01}^\text{obs} - N_{11} \leq n_{11}^\text{obs}$.
\begin{figure}
\caption{Example with observed data $(n_{11}^\text{obs}, n_{10}^\text{obs}, n_{01}^\text{obs}, n_{00}^\text{obs}) = (18, 14, 5, 16)$}
\label{fg::rosenbaum-post}
\label{fg::rosenbaum-attri}
\end{figure}
\section{Discussion} \label{sec::discussion}
For binary experimental data, we proposed several model-free inferential procedures for the average treatment effect and the attributable effect. We believe demonstrating that likelihood and Bayesian estimation without modeling is possible is a worthwhile proof of concept for an alternate form of thinking about estimation when the assignment mechanism is known. For further connections and comparisons, see \citet{greenland1991logical}, \citet{ding::2014}, \citet{chiba2015exact}, and \citet{ding::2015}.
Some researchers have proposed randomization-based procedures for causal effects with noncompliance \citep{rubin::1998, imbens2005robust, keele2015randomization}, with general intermediate variables \citep{nolen2011randomization}, and with interference \citep{rosenbaum2012interference, rigdon2015exact}. It is our ongoing work to extend the current approaches to these settings.
\section*{Appendix}
We first prove the theorems. The propositions follow. \begin{proof} [Proof of Theorem \ref{thm::like-mono}] Under monotonicity, the units with $(W_i,Y_i^\text{obs}) = (1,1)$ are $(1,1)$ or $(1,0)$ units, the units with $(W_i, Y_i^\text{obs}) = (1,0)$ are all $(0,0)$ units, the units with $(W_i, Y_i^\text{obs}) = (0,1)$ are all $(1,1)$ units, and the units with $(W_i, Y_i^\text{obs})=(0,0)$ are $(0,0)$ or $(1,0)$ units. Define $N_{bc,w}$ as the number of $(b,c)$ units within observed treatment group $w$. Then the observed data allows us to obtain $$ \begin{array}{lll} N_{11,1} = N_{11}-n_{01}^\text{obs}, && N_{11,0} = n_{01}^\text{obs},\\ N_{00,1} = n_{10}^\text{obs}, && N_{00,0} = N_{00}-n_{10}^\text{obs},\\ N_{10,1} = n_{11}^\text{obs} - N_{11,1} = n_{11}^\text{obs}+n_{01}^\text{obs}-N_{11},&& N_{10,0} = N_{10}-N_{10,1}=N_{10}+N_{11}-n_{11}^\text{obs}-n_{01}^\text{obs}. \end{array} $$ The above shows that we know the number of each type of unit in both treatment arms, based on the observed counts and the totals $N_{bc}$. Because all the counts are nonnegative integers, we have the following restriction on $(N_{10}, N_{11})$: $$ n_{01}^\text{obs} \leq N_{11} \leq n_{11}^\text{obs} + n_{01}^\text{obs} \leq N_{10}+N_{11} \leq N - n_{10}^\text{obs}. $$ We can count that there are $ (n_{11}^\text{obs}+1)(n_{00}^\text{obs}+1)$ possible values for $(N_{10}, N_{11})$, and $ (n_{11}^\text{obs} + n_{00}^\text{obs} + 1)$ possible values for $\tau .$
The completely randomized experiment corresponds to an urn model. We have an urn with $N_{11}$ $(1,1)$ balls, $N_{10}$ $(1,0)$ balls, and $N_{00}$ $(0,0)$ balls. The experiment is that we randomly draw $N_1$ balls without replacement to form the treatment arm and use the remaining balls to form the control arm. We then observe the outcomes. The above restrictions allow us to determine, based on observed data, the count vector for the three types of balls $(N_{11,1}, N_{10,1}, N_{00,1})$ that we have in the treatment arm, and similarly for control. Therefore, the probability of obtaining $(N_{11,1}, N_{10,1}, N_{00,1})$ is a multivariate Hypergeometric distribution, given the values of $N_{11}$ and $N_{10}$. Express this in terms of the observed data to obtain $$ \binom{ N_{11} }{ N_{11,1} } \binom{ N_{10} }{ N_{10,1} } \binom{ N_{00} }{ N_{00,1} } \Big / \binom{N}{N_1} = \binom{ N_{11} }{ N_{11} - n_{01}^\text{obs} } \binom{N_{10}}{ n_{11}^\text{obs} + n_{01}^\text{obs} - N_{11} } \binom{ N - N_{10} - N_{11} }{ n_{10}^\text{obs} }
\Big / \binom{N}{N_1}. $$ This is the likelihood, a function of $N_{11}$ and $N_{10}$, our parameters. \end{proof}
\begin{proof} [Proof of Theorem \ref{thm::like-non-mono}] Without monotonicity, the observed data classified by $(W_i, Y_i^\text{obs})$ are mixtures: the observed group $(W_i, Y_i^\text{obs})=(1,1)$ contains $(1,1)$ and $(1,0)$ units, the observed group $(W_i, Y_i^\text{obs})=(1,0)$ contains $(0,1)$ and $(0,0)$ units, the observed group $(W_i, Y_i^\text{obs})=(0,1)$ contains $(1,1)$ and $(0,1)$ units, and the observed group $(W_i, Y_i^\text{obs})=(0,0)$ contains $(1,0)$ and $(0,0)$ units.
Assuming that $N_{11,1}=x$, we have $$ \begin{array}{lll} N_{11,1} = x, && N_{11,0} = N_{11}-x,\\ N_{10,1} = n_{11}^\text{obs} -x, && N_{10,0} = N_{10}+x-n_{11}^\text{obs},\\ N_{01,1} = N_{01}+N_{11}-n_{01}^\text{obs}-x, && N_{01,0}=n_{01}^\text{obs}+x-N_{11},\\ N_{00,1} = n_{10}^\text{obs}+n_{01}^\text{obs}+x-N_{01}-N_{11}, && N_{00,0} =N-N_{10}-x-n_{10}^\text{obs}-n_{01}^\text{obs}. \end{array} $$ As a byproduct, the attributable effect is $A = N_{10,1} - N_{01,1} = n_{11}^\text{obs} + n_{01}^\text{obs} - N_{01}- N_{11}.$ The above counts must all be non-negative, implying the following inequality on $x$:
$$ \max( 0, n_{11}^\text{obs}-N_{10}, N_{11}-n_{01}^\text{obs}, N_{01}+N_{11}-n_{10}^\text{obs}-n_{01}^\text{obs} ) \leq x\leq \min( N_{11}, n_{11}^\text{obs}, N_{01}+N_{11}-n_{01}^\text{obs}, N-N_{10}-n_{10}^\text{obs}-n_{01}^\text{obs} ) . $$ When $N_{01}=0$, the inequality collapses to $x=N_{11}-n_{01}^\text{obs}$, which is coherent with Theorem \ref{thm::like-mono}. The above inequality also imposes the following restrictions on $(N_{10},N_{11})$ for a given value of $N_{01}$ and the observed data: $$ \begin{array}{ll} 0\leq N_{11}, &0\leq n_{11}^\text{obs}, \\ 0\leq N_{01}+N_{11}-n_{01}^\text{obs},&0\leq N-N_{10}-n_{10}^\text{obs}-n_{01}^\text{obs},\\ n_{11}^\text{obs}-N_{10} \leq N_{11},& n_{11}^\text{obs}-N_{10} \leq n_{11}^\text{obs},\\ n_{11}^\text{obs}-N_{10} \leq N_{01}+N_{11}-n_{01}^\text{obs},& n_{11}^\text{obs}-N_{10}\leq N-N_{10}-n_{10}^\text{obs}-n_{01}^\text{obs},\\ N_{11}-n_{01}^\text{obs} \leq N_{11}, & N_{11}-n_{01}^\text{obs} \leq n_{11}^\text{obs}, \\ N_{11}-n_{01}^\text{obs} \leq N_{01}+N_{11}-n_{01}^\text{obs},& N_{11}-n_{01}^\text{obs} \leq N-N_{10}-n_{10}^\text{obs}-n_{01}^\text{obs},\\ N_{01}+N_{11}-n_{10}^\text{obs}-n_{01}^\text{obs} \leq N_{11},& N_{01}+N_{11}-n_{10}^\text{obs}-n_{01}^\text{obs} \leq n_{11}^\text{obs}, \\ N_{01}+N_{11}-n_{10}^\text{obs}-n_{01}^\text{obs} \leq N_{01}+N_{11}-n_{01}^\text{obs},& N_{01}+N_{11}-n_{10}^\text{obs}-n_{01}^\text{obs}\leq N-N_{10}-n_{10}^\text{obs}-n_{01}^\text{obs} . \end{array} $$ These inequalities can be simplified to \eqref{eq::feasible}. The inequality for $N_{01}$ is $N_{01}\leq n_{10}^\text{obs} + n_{01}^\text{obs}$, redundant over the sensitivity analysis region $N_{01}\leq N\widehat{p}_0(1-\widehat{p}_1)$, because $n_{10}^\text{obs} + n_{01}^\text{obs}\geq N\widehat{p}_0(1-\widehat{p}_1).$ \end{proof}
We next prove the propositions. These proofs rely on the following lemma:
\begin{lemma} \label{lemma::variance} Assume $(c_1, \ldots, c_N)$ are constants with $\bar{c}=\sum_{i=1}^N c_i/N$ and $S_c^2 = \sum_{i=1}^N (c_i-\bar{c})^2/(N-1)$. Let $(W_1,\ldots, W_N)$ be the treatment indicators of a completely randomized experiment. We have that $$ E\left( \sum_{i=1}^N W_ic_i \right) = N_1 \bar{c},\quad \text{var}\left( \sum_{i=1}^N W_ic_i \right) = \frac{N_1N_0}{N}S_c^2. $$ \end{lemma} See classical survey sampling textbooks \citep[e.g.,][]{cochran::1977} for the proof.
\begin{proof} [Proof of Proposition \ref{thm::identi-mono}] Verify that $$ E(n_{01}^\text{obs}) = E\left\{ \sum_{i=1}^N (1-W_i) Y_i(0) \right\} = \frac{N_0}{N} N_{11},\quad
E(n_{10}^\text{obs}) = E\left[ \sum_{i=1}^N W_i \{1-Y_i(1)\} \right] = \frac{N_1}{N} N_{00}. $$ The conclusion follows. \end{proof}
\begin{proof} [Proof of Proposition \ref{thm::moments-mono}] Following \cite{neyman::1923} (presented using modern notation in \citet{imbens::2015book}), $\widehat{\tau}$ is unbiased for $\tau $ with variance \begin{eqnarray} \label{eq::variance-neyman} \text{var}( \widehat{\tau} ) = \frac{S_1^2}{N_1} + \frac{S_0^2}{N_0} - \frac{S_\tau^2}{N}, \end{eqnarray} where \begin{eqnarray*} S_1^2 &=& \frac{1}{N-1} \sum_{i=1}^N \{ Y_i(1) - p_1 \}^2 = \frac{1}{N-1} (Np_1-Np_1^2) = \frac{N}{N-1} p_1(1-p_1),\\ S_0^2 &=& \frac{1}{N-1} \sum_{i=1}^N \{ Y_i(0) - p_0 \}^2 = \frac{1}{N-1} (Np_0-Np_0^2) = \frac{N}{N-1} p_0(1-p_0),\\ S_\tau^2&=&\frac{1}{N-1} \sum_{i=1}^N ( \tau_i-\tau )^2 = \frac{1}{N-1} \left(N_{10} - \frac{N_{10}^2}{N} \right) = \frac{N}{N-1} \tau (1-\tau ) \end{eqnarray*} are the finite population variances of $Y(1)$, $Y(0)$, and $\tau.$ For estimating the variance, note that the variance term $S_\tau^2$ is identifiable because $N_{01} =0$ under monotonicity, and the conclusion follows.
The consistency and asymptotic normality of $\widehat{\tau}$ follow from the finite population central limit theorem \citep{hajek::1960}, and the variance estimator can be obtained by a simple plug-in. \end{proof}
\begin{proof} [Proof of Proposition \ref{thm::identifiability-no-mono}] From Lemma \ref{lemma::variance}, we have $E(\widehat{p}_1)=p_1$ and $E(\widehat{p}_0)=p_0$. Then \begin{eqnarray*} E\left( \frac{N}{N_0} n_{01}^\text{obs} - N_{01} \right)&=& E\left( N\widehat{p}_0 - N_{01} \right) = Np_0-N_{01}=(N_{01}+N_{11})-N_{01}=N_{11},\\ E\left( \frac{N}{N_1} n_{10}^\text{obs} - N_{01} \right)&=& E\{ N(1-\widehat{p}_1) - N_{01} \}=N(1-p_1)-N_{01}=(N_{01}+N_{00})-N_{01}=N_{00},\\ E\left( N+N_{01}-\frac{N}{N_0}n_{01}^\text{obs} - \frac{N}{N_1}n_{10}^\text{obs} \right)&=& N+N_{01}-Np_0-N(1-p_1)=N_{10}. \end{eqnarray*} \end{proof}
\begin{proof} [Proof of Proposition \ref{thm::bounds-sensitivity}] As a byproduct of the derivations in the proof of Proposition \ref{thm::identifiability-no-mono}, we have \begin{eqnarray*} Np_0-N_{01} \geq 0,\quad N(1-p_1)-N_{01} \geq 0,\quad N+N_{01}-Np_0-N(1-p_1)\geq 0, \end{eqnarray*} which further implies $ \max(0, - N\tau ) \leq N_{01} \leq \min\{ Np_0, N(1-p_1) \} . $
Yule's measure of the correlation is $N_{11}N_{00} - N_{10}N_{01}$, which is the rescaled covariance of the potential outcomes. If this is non-negative, the correlation of the potential outcomes is non-negative. We also have that $N_{00} = N(1-p_1) - N_{01}$, $N_{11} = Np_0 - N_{01}$, and $N_{10} = N+N_{01}-N(1-p_1) - Np_0,$ giving \begin{eqnarray*} 0&\leq& N_{11}N_{00}-N_{10}N_{01} = (Np_0 - N_{01}) \{ N(1-p_1) - N_{01} \} - \{ N+N_{01}-N(1-p_1) - Np_0 \} N_{01} \\ &=& N^2p_0(1-p_1) - NN_{01}, \end{eqnarray*} or, equivalently, $N_{01} \leq Np_0(1-p_1).$ \end{proof}
\begin{proof} [Proof of Proposition \ref{thm::variance-bounds-non-mono}] According to the variance formula of $\widehat{\tau} $ in (\ref{eq::variance-neyman}), we need to calculate $S_\tau^2/N$ with a known $N_{01}$. We have \begin{eqnarray*} \frac{S_\tau^2}{N} &=& \frac{1}{(N-1)N} \left( \sum_{i=1}^N \tau_i^2 - N\tau^2 \right) = \frac{1}{N-1} \left\{ \frac{N_{10}+N_{01}}{N} - \left( \frac{ N_{10} - N_{01} }{N} \right)^2 \right\} = \frac{1}{N-1} \left(\tau - \tau^2 + \frac{2N_{01} }{ N} \right ), \end{eqnarray*} and the bounds follow directly from $0\leq N_{01} \leq Np_0(1-p_1)$. \end{proof}
\begin{proof} [Proof of Proposition \ref{thm::A-neyman}] We have $E(A) = N_1\tau = E(N_1\widehat{\tau} )$, and \begin{eqnarray*} \text{var}(A - N_1\widehat{\tau} )&=& \text{var}\left[ \sum_{i=1}^N W_i \{ Y_i(1)-Y_i(0) \} - \sum_{i=1}^N W_i Y_i(1) + \frac{N_1}{N_0}\sum_{i=1}^N (1-W_i) Y_i(0) \right]\\ &=&\text{var}\left[ \sum_{i=1}^N W_i \left\{ Y_i(1)-Y_i(0) - Y_i(1) -\frac{N_1}{N_0} Y_i(0) \right\} \right]\\ &=& \text{var}\left\{ \sum_{i=1}^N W_i Y_i(0) \cdot \frac{N}{N_0} \right\} = \frac{N^2}{N_0^2} \cdot \frac{N_1 N_0}{ N(N-1) } \cdot \sum_{i=1}^N \{ Y_i(0) - \bar{Y}(0) \}^2 \\ &=& \frac{N N_1 }{N_0} S_0^2 = \frac{N^2 N_1}{N_0(N-1)} p_0 (1-p_0), \end{eqnarray*} where the penultimate line of the proof is due to Lemma \ref{lemma::variance}. \end{proof}
\end{document}
\begin{document}
\title{Robust and efficient generator of almost maximal multipartite entanglement}
\author{Davide Rossini} \altaffiliation{Present address: International School for Advanced Studies (SISSA), Via Beirut 2-4, I-34014 Trieste, Italy.} \affiliation{NEST-CNR-INFM \& Scuola Normale Superiore,
Piazza dei Cavalieri 7, I-56126 Pisa, Italy}
\author{Giuliano Benenti} \affiliation{CNISM, CNR-INFM \& Center for Nonlinear and Complex systems, Universit\`{a} degli Studi dell'Insubria, via Valleggio 11, I-22100 Como, Italy} \affiliation{Istituto Nazionale di Fisica Nucleare, Sezione di Milano, via Celoria 16, I-20133 Milano, Italy}
\date{\today}
\begin{abstract} Quantum chaotic maps can efficiently generate pseudo-random states carrying almost maximal multipartite entanglement, as characterized by the probability distribution of bipartite entanglement between all possible bipartitions of the system. We show that such multipartite entanglement is robust, in the sense that, when realistic noise is considered, distillable entanglement of bipartitions remains almost maximal up to a noise strength that drops only polynomially with the number of qubits. \end{abstract}
\pacs{03.67.Mn, 03.67.Lx, 03.67.-a, 05.45.Mt}
\maketitle
Entanglement is not only the most intriguing feature of quantum mechanics, but also a key resource in quantum information science~\cite{nielsen,qcbook}. In particular, for quantum algorithms operating on pure states, multipartite (many-qubit) entanglement is a necessary condition to achieve an exponential speedup over classical computation~\cite{jozsa}. The entanglement content of random pure quantum states is almost maximal~\cite{page93,zyczkowski,winter}; such states find applications in various quantum protocols, like superdense coding of quantum states~\cite{hayden,winter}, remote state preparation~\cite{bennett}, and the construction of efficient data-hiding schemes~\cite{hayden2}. Moreover, it has been argued that random evolutions may be used to characterize the main aspects of noise sources affecting a quantum processor~\cite{saraceno}.
The preparation of a random state or, equivalently, the implementation of a random unitary operator requires a number of elementary one- and two-qubit gates exponential in the number $n_q$ of qubits, thus becoming rapidly unfeasible when increasing $n_q$. On the other hand, pseudo-random states approximating to the desired accuracy the entanglement properties of true random states may be generated efficiently, that is, polynomially in $n_q$~\cite{saraceno,weinstein,plenio}. In particular, quantum chaotic maps are efficient generators of multipartite entanglement among the qubits, close to that expected for random states~\cite{caves,weinstein}. A related crucial question is whether the generated entanglement is robust when taking into account unavoidable noise sources affecting a quantum computer, that in general turn pure states into mixtures, with a corresponding loss of quantum coherence and entanglement content. In this paper we give a positive answer to this question.
The number of measures needed to fully quantify multipartite entanglement grows exponentially with the number of qubits. Different measures capture various aspects of multipartite entanglement. Therefore, following Ref.~\cite{facchi}, we characterize multipartite entanglement by means of a function rather than with a single measure: we look at the probability density function of bipartite entanglement between all possible bipartitions of the system. For pure states the bipartite entanglement is the von Neumann entropy of the reduced density matrix of one of the two subsystems:
\[ E_{AB} (\ket{\psi} \bra{\psi})= - {\rm Tr} \left[ \rho_A \log_2 \rho_A \right] \equiv S (\rho_{A}) \, , \]
where $\rho_A = {\rm Tr_B} \, (\ket{\psi} \bra{\psi})$, and $A, B$ denote two subsystems made up of $n_A$ and $n_B$ qubits ($n_A + n_B = n_q$). For sufficiently large systems ($N \equiv 2^{n_q} \gg 1$), it is reasonable to consider only balanced bipartitions, i.e., with $n_A = n_B$, since the statistical weight of unbalanced ones becomes negligible~\cite{facchi}.
If the probability density has a large mean value $\langle E_{AB} \rangle \sim n_q/2$ ($\langle \, \cdot \, \rangle$ denotes the average over balanced bipartitions) and small relative standard deviation $\sigma_{AB}/\langle E_{AB}\rangle \ll 1$, we can conclude that genuine multipartite entanglement is almost maximal (note that for balanced bipartitions $E_{AB}$ is bounded within the interval $[0,n_q/2]$). This is the case for random states~\cite{facchi}.
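As an illustration of this characterization, the following minimal NumPy sketch (our own, not the authors' code) computes the von Neumann entropy over all balanced bipartitions of a Haar-random pure state; the mean should lie close to the random-state value $n_q/2 - 1/(2\ln 2)$, with a small relative width.

```python
import numpy as np
from itertools import combinations

def balanced_entropies(psi, n_q):
    """Von Neumann entropy S(rho_A) for every balanced bipartition
    (n_A = n_q/2 qubits in subsystem A) of the pure state psi."""
    entropies = []
    for A in combinations(range(n_q), n_q // 2):
        B = [q for q in range(n_q) if q not in A]
        # reshape amplitudes into a (2^{n_A} x 2^{n_B}) matrix
        M = psi.reshape([2] * n_q).transpose(list(A) + B).reshape(2 ** len(A), -1)
        s = np.linalg.svd(M, compute_uv=False)   # Schmidt coefficients
        p = s ** 2
        p = p[p > 1e-15]
        entropies.append(-np.sum(p * np.log2(p)))
    return np.array(entropies)

rng = np.random.default_rng(0)
n_q = 8
# Haar-random pure state: normalized complex Gaussian vector
psi = rng.normal(size=2 ** n_q) + 1j * rng.normal(size=2 ** n_q)
psi /= np.linalg.norm(psi)

E = balanced_entropies(psi, n_q)
print(E.mean())            # close to n_q/2 - 1/(2 ln 2)
print(E.std() / E.mean())  # small relative width
```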
\paragraph{The model.} The use of quantum chaos for efficient and robust generation of pseudo-random states carrying large multipartite entanglement is nicely illustrated by the example of the quantum sawtooth map~\cite{benenti01}. This map is described by the unitary operator $\hat{U}$: \begin{equation} \ket{{\psi}_{t+1}} = \hat{U} \ket{\psi_t} = e^{-iT\hat{n}^{2}/2} \, e^{ik(\hat{\theta} -\pi)^{2}/2} \ket{\psi_t} , \label{eq:quantmap} \end{equation} where $\hat{n} = -i \, \partial/\partial \theta$, $[\hat{\theta},\hat{n}]=i$ (we set $\hbar=1$) and the discrete time $t$ measures the number of map iterations. In the following we will always study map~\eqref{eq:quantmap} on the torus $0 \leq \theta < 2 \pi$, $- \pi \leq p < \pi$, where $p = T n$. With an $n_q$-qubit quantum computer we are able to simulate the quantum sawtooth map with $N = 2^{n_q}$ levels; as a consequence, $\theta$ takes $N$ equidistant values in the interval $0 \leq \theta < 2 \pi$, while $n$ ranges from $-N/2$ to $N/2 -1$ (thus setting $T=2\pi/N$). We are in the quantum chaos regime for map~\eqref{eq:quantmap} when $K\equiv kT >0$ or $K<-4$; in particular, in this work we focus on the case $K=1.5$.
There exists an efficient quantum algorithm for simulating the quantum sawtooth map~\cite{benenti01}. The crucial observation is that the operator $\hat{U}$ in Eq.~\eqref{eq:quantmap} can be written as the product of two operators, $\hat{U}_{k}= e^{ik(\hat{\theta}-\pi)^{2}/2}$ and $\hat{U}_{T}=e^{-iT\hat{n}^{2}/2}$, which are diagonal in the $\theta$ and in the $n$ representation, respectively. Therefore, the most convenient way to classically simulate the map is based on the forward-backward fast Fourier transform between the $\theta$ and $n$ representations, and requires $O(N\log_2 N)$ operations per map iteration. On the other hand, a quantum computer can implement the Fourier transform with massive parallelism, thus requiring only $O((\log_2 N)^2)$ one- and two-qubit gates to accomplish the same task~\cite{benenti01}.
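The classical FFT-based simulation just described can be sketched as follows (a NumPy illustration under the torus discretization of the text; the variable names and the initial state are ours):

```python
import numpy as np

def sawtooth_step(psi, k, T):
    """One iteration of the quantum sawtooth map:
    kick U_k diagonal in theta, then free rotation U_T diagonal in n."""
    N = psi.size
    theta = 2 * np.pi * np.arange(N) / N
    n = np.arange(-N // 2, N // 2)
    psi = np.exp(1j * k * (theta - np.pi) ** 2 / 2) * psi       # U_k in theta
    # FFT to the momentum representation (fftshift matches the n ordering)
    phi = np.fft.fftshift(np.fft.fft(psi)) / np.sqrt(N)
    phi = np.exp(-1j * T * n ** 2 / 2) * phi                    # U_T in n
    return np.fft.ifft(np.fft.ifftshift(phi)) * np.sqrt(N)      # back to theta

n_q = 8
N = 2 ** n_q
T = 2 * np.pi / N
K = 1.5
k = K / T

psi = np.zeros(N, complex); psi[0] = 1.0   # start from a basis state
for _ in range(30):
    psi = sawtooth_step(psi, k, T)
print(abs(np.linalg.norm(psi) - 1.0))      # unitarity is preserved
```

Each step costs two length-$N$ FFTs plus two diagonal multiplications, i.e. $O(N\log_2 N)$ operations, as stated above.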
In brief, the resources required by the quantum computer to simulate the sawtooth map are only logarithmic in the system size $N$, thus allowing an exponential speedup over any known classical computation.
\paragraph{Multipartite entanglement generation.} We first compute $\langle E_{AB} \rangle$ as a function of the number $t$ of iterations of map~\eqref{eq:quantmap}. Numerical data in Fig.~\ref{fig:EntGen} exhibit a fast convergence, within a few kicks, of this quantity to the value \begin{equation} \langle E^{\, {\rm rand}}_{AB} \rangle= \frac{n_q}{2} - \frac{1}{2 \ln 2} \label{eq:entpure} \end{equation} expected for a random state~\cite{page93}. More precisely, $\langle E_{AB} \rangle$ converges exponentially fast to $\langle E^{\,{\rm rand}}_{AB} \rangle$, with a convergence time scale $\propto n_q$ (see inset of Fig.~\ref{fig:EntGen}). Therefore, the average entanglement content of a true random state is reached to a fixed accuracy within $O(n_q)$ map iterations, namely $O(n_q^3)$ quantum gates. We stress that in our case a deterministic map is implemented, instead of random one- and two-qubit gates as in Ref.~\cite{plenio}. Of course, since the overall Hilbert space is finite, the above exponential decay in a deterministic map is possible only up to a finite time, and the maximal accuracy drops exponentially with the number of qubits. We also note that, due to the quantum chaos regime, the properties of the generated pseudo-random state do not depend on the initial conditions, which may be very different in character from the generated state (e.g., in the simulations of Fig.~\ref{fig:EntGen}, we start from completely disentangled states).
\begin{figure}\label{fig:EntGen}
\end{figure}
As discussed above, multipartite entanglement should generally be described in terms of a function, rather than by a single number. We therefore show in Fig.~\ref{fig:Isto_eps0} the probability density function $p(E_{AB})$ for the entanglement of all possible balanced bipartitions of the state $\ket{\psi_{t=30}}$. This function is sharply peaked around $\langle E^{\,{\rm rand}}_{AB}\rangle$, with a relative standard deviation $\sigma_{AB} / \left< E_{AB} \right>$ that drops exponentially with $n_q$ (see the inset of Fig.~\ref{fig:Isto_eps0}) and is already small ($\sim 0.1$) at $n_q=4$. For this reason, we can conclude that multipartite entanglement is large and that it is reasonable to use the first moment $\langle E_{AB} \rangle$ of $p(E_{AB})$ for its characterization. We have also calculated the corresponding probability densities for random states (dashed curves in Fig.~\ref{fig:Isto_eps0}); their average values and variances are in agreement with the values obtained from states generated by the sawtooth map. The fact that for random states the distribution $p(E_{AB})$ is peaked around a mean value close to the maximum achievable value $E_{AB}^{\rm max}=n_q/2$ is a manifestation of the ``concentration of measure'' phenomenon in a multi-dimensional Hilbert space \cite{zyczkowski,winter}.
\begin{figure}
\caption{(color online) Probability density
function of the bipartite von Neumann
entropy over all balanced bipartitions for the state $\ket{\psi_t}$,
after $30$ iterations of map~\eqref{eq:quantmap} at $K=1.5$.
Various histograms are for different numbers of qubits:
from left to right $n_q = 8, 10, 12$;
dashed curves show the corresponding probabilities for random states.
Inset: relative standard deviation
$\sigma_{AB} / \langle E_{AB} \rangle$ as a function of $n_q$
(full circles) and best exponential fit
$\sigma_{AB} / \langle E_{AB} \rangle \sim e^{- 0.48 \, n_q}$
(continuous line); data and best exponential fit
$\sigma_{AB} / \langle E_{AB} \rangle \sim e^{- n_q / 2}$
for random states are also shown (empty triangles, dashed line).}
\label{fig:Isto_eps0}
\end{figure}
\paragraph{Stability of multipartite entanglement.} In order to assess the physical significance of the generated multipartite entanglement, it is crucial to study its stability when realistic noise is taken into account. Hereafter we model quantum noise by means of noisy unitary gates, which result from an imperfect control of the quantum computer hardware~\cite{cirac95}. We follow the noise model of Ref.~\cite{rossini04}. One-qubit gates can be seen as rotations of the Bloch sphere about some fixed axis; we assume that unitary errors slightly tilt the direction of this axis by a random amount. Two-qubit controlled-phase shift gates are diagonal in the computational basis; we consider unitary perturbations by adding random small extra phases on all the computational basis states. Each noise parameter $\varepsilon_i$ is assumed to be randomly and uniformly distributed in the interval $[-\varepsilon, +\varepsilon]$, and errors affecting different quantum gates are assumed to be completely uncorrelated: every time we apply a noisy gate, the noise parameters randomly fluctuate in the (fixed) interval $[-\varepsilon, +\varepsilon]$.
Starting from a given initial state $\ket{\psi_0}$, the quantum algorithm for simulating the sawtooth map in the presence of unitary noise gives an output state $\ket{\psi_{\varepsilon_I,t}}$ that differs from the ideal output $\ket{\psi_t}$. Here $\varepsilon_I=(\varepsilon_1,\varepsilon_2,...,\varepsilon_{n_d})$ stands for all the $n_d$ noise parameters $\varepsilon_i$, which depend on the specific noise configuration ($n_d$ is proportional to the number of gates). Since we do not have any a priori knowledge of the particular values taken by the parameters $\varepsilon_i$, the expectation value of any observable $A$ for our $n_q$-qubit system will be given by ${\rm Tr} [\rho_{\varepsilon,t} A]$, where the density matrix $\rho_{\varepsilon,t}$ is obtained after averaging over noise: \begin{equation} \rho_{\varepsilon,t} = \left(\frac{1}{2\varepsilon}\right)^{n_d} \int d \varepsilon_I \ket{\psi_{\varepsilon_I,t}} \bra{\psi_{\varepsilon_I,t}} \, . \label{eq:rhomatr} \end{equation} The integration over $\varepsilon_I$ is estimated numerically by summing over $\mathcal{N}$ random realizations of noise, with a statistical error vanishing in the limit $\mathcal{N}\to \infty$. The mixed state $\rho_{\varepsilon}$ may also arise as a consequence of non-unitary noise; in this case Eq.~\eqref{eq:rhomatr} can be seen as an unraveling of $\rho_{\varepsilon}$ into stochastically evolving pure states $\ket{\psi_{\varepsilon_I}}$, each evolution being known as a quantum trajectory~\cite{brun}.
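The Monte-Carlo estimate of Eq.~\eqref{eq:rhomatr} can be illustrated with a simplified stand-in for the per-gate noise model above (random extra phases on the two diagonal operators of the map); this sketch is ours, not the authors' code:

```python
import numpy as np

def noisy_sawtooth_step(psi, k, T, eps, rng):
    """Sawtooth-map step with unitary phase noise: each diagonal phase
    acquires a random extra term drawn uniformly from [-eps, +eps]."""
    N = psi.size
    theta = 2 * np.pi * np.arange(N) / N
    n = np.arange(-N // 2, N // 2)
    psi = np.exp(1j * (k * (theta - np.pi) ** 2 / 2
                       + rng.uniform(-eps, eps, N))) * psi
    phi = np.fft.fftshift(np.fft.fft(psi)) / np.sqrt(N)
    phi = np.exp(-1j * T * n ** 2 / 2 + 1j * rng.uniform(-eps, eps, N)) * phi
    return np.fft.ifft(np.fft.ifftshift(phi)) * np.sqrt(N)

def noise_averaged_rho(psi0, k, T, eps, t, n_real, rng):
    """Estimate rho_{eps,t}: average the projectors |psi><psi|
    over n_real independent noise realizations."""
    N = psi0.size
    rho = np.zeros((N, N), complex)
    for _ in range(n_real):
        psi = psi0.copy()
        for _ in range(t):
            psi = noisy_sawtooth_step(psi, k, T, eps, rng)
        rho += np.outer(psi, psi.conj())
    return rho / n_real

rng = np.random.default_rng(1)
n_q, t = 4, 10
N = 2 ** n_q; T = 2 * np.pi / N; k = 1.5 / T
psi0 = np.zeros(N, complex); psi0[0] = 1.0
rho = noise_averaged_rho(psi0, k, T, eps=0.05, t=t, n_real=50, rng=rng)
print(np.trace(rho).real)   # trace is preserved (= 1)
```

Each realization stays pure (unitary noise); only the average over realizations is mixed, with purity ${\rm Tr}\,\rho^2 < 1$.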
We now focus on the entanglement content of $\rho_{\varepsilon,t}$. Unfortunately, for a generic mixed state of $n_q$ qubits no unambiguous quantitative characterization of entanglement is known~\cite{plenio07}. Nevertheless, it is possible to give numerically accessible lower and upper bounds for the bipartite {\it distillable entanglement} $E_{AB}^{(D)} (\rho_{\varepsilon})$: \begin{equation}
\max \left\{ S(\rho_{\varepsilon,A}) - S(\rho_\varepsilon), 0 \right\} \leq E_{AB}^{(D)} (\rho_{\varepsilon}) \leq \log_2 \| \rho_{\varepsilon}^{T_B} \| \, , \label{eq:entbounds} \end{equation} where $\rho_{\varepsilon,A} = {\rm Tr}_B (\rho_\varepsilon)$ and
$\| \rho_{\varepsilon}^{T_B} \| \equiv {\rm Tr} \sqrt{(\rho_\varepsilon^{T_B})^\dagger \, \rho_{\varepsilon}^{T_B}}$ denotes the trace norm of the partial transpose of $\rho_{\varepsilon}$ with respect to party $B$.
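Both bounds in Eq.~\eqref{eq:entbounds} are straightforward to evaluate numerically; a NumPy sketch (our own illustration, for a single bipartition):

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy -Tr[rho log2 rho]."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-15]
    return -np.sum(w * np.log2(w))

def distillable_bounds(rho, n_A, n_B):
    """Bounds of Eq. (entbounds) for a bipartition into n_A + n_B qubits:
    lower bound  max{S(rho_A) - S(rho), 0}   (coherent information),
    upper bound  log2 ||rho^{T_B}||_1        (logarithmic negativity)."""
    dA, dB = 2 ** n_A, 2 ** n_B
    r = rho.reshape(dA, dB, dA, dB)
    rho_A = np.einsum('ajbj->ab', r)          # partial trace over B
    lower = max(vn_entropy(rho_A) - vn_entropy(rho), 0.0)
    # partial transpose on B: swap the two B indices
    rTB = r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)
    trace_norm = np.sum(np.abs(np.linalg.eigvalsh((rTB + rTB.conj().T) / 2)))
    upper = np.log2(trace_norm)
    return lower, upper

# sanity check on a random pure state of 4 qubits:
# there S(rho) = 0, so the lower bound reduces to S(rho_A)
rng = np.random.default_rng(2)
psi = rng.normal(size=16) + 1j * rng.normal(size=16)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
lo, up = distillable_bounds(rho, 2, 2)
print(lo, up)   # lo <= up always
```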
In practice, we simulate the quantum algorithm for the quantum sawtooth map in the chaotic regime with noisy gates and evaluate the two bounds in Eq.~\eqref{eq:entbounds} for the distillable entanglement of the mixed state $\rho_{\varepsilon,t}$, obtained after averaging over $\mathcal{N}$ noise realizations. Satisfactory convergence of the lower and upper bounds is obtained after $\mathcal{N}\sim \sqrt{N}$ and $\mathcal{N}\sim N$ noise realizations, respectively. In Fig.~\ref{fig:Ent_dest_nqvar}, upper panels, we plot the first moment of the lower ($E_m$) and the upper ($E_M$) bound for the distillable entanglement as a function of the imperfection strength. The various curves are for different numbers $n_q$ of qubits; $\mathcal{N}$ depends on $n_q$ and is large enough to obtain negligible statistical errors (smaller than the size of the symbols). In the lower panels of Fig.~\ref{fig:Ent_dest_nqvar} we show the relative standard deviation of the probability density function (over all balanced bipartitions) for the distillable entanglement. As for pure states, we observe an exponential drop with $n_q$; the distribution width slightly broadens with increasing imperfection strength $\varepsilon$. We can therefore conclude that an average value of the bipartite distillable entanglement close to the ideal case $\varepsilon=0$ implies that multipartite entanglement is stable.
\begin{figure}
\caption{(color online)
Upper graphs: lower $\langle E_m \rangle$ (left panel)
and upper bound $\langle E_M \rangle$
(right panel) for the distillable entanglement
as a function of the noise strength at time $t=30$.
Various curves stand for different numbers of qubits:
$n_q=$ 4 (circles), 6 (squares), 8 (diamonds),
10 (triangles up), and 12 (triangles down).
Lower graphs: relative standard deviation of the probability density
function for distillable entanglement over all balanced bipartitions
as a function of $n_q$, for different noise strengths $\varepsilon$.
Dashed lines show a behavior
$\sigma / \left< E \right> \sim e^{-n_q/2}$ and are
plotted as guidelines.}
\label{fig:Ent_dest_nqvar}
\end{figure}
In order to quantify the robustness of multipartite entanglement with the system size, we define a perturbation strength threshold $\varepsilon^{(R)}$ at which the distillable entanglement bounds drop by a given fraction, for instance to $1/2$, of their $\varepsilon=0$ value, and analyze the behavior of $\varepsilon^{(R)}$ as a function of the number of qubits. Numerical results are plotted in Fig.~\ref{fig:EScal_eps_nq}; both for lower and upper bounds we obtain a power-law scaling close to \begin{equation} \varepsilon^{(R)} \sim 1/n_q \, . \label{eq:epsscaling} \end{equation}
\begin{figure}
\caption{(color online)
Perturbation strength at which the bounds of multipartite
entanglement halve (lower bound on the left panel, upper bound
on the right panel), as a function of the number of qubits.
Dashed lines are best power-law fits of
numerical data: $\varepsilon^{(R)} \sim n_q^{-0.79 \pm 0.01}$ at $t=15$,
$\varepsilon^{(R)} \sim n_q^{-0.9 \pm 0.01}$ at $t=30$, for both
lower and upper bounds.}
\label{fig:EScal_eps_nq}
\end{figure}
It is possible to give a semi-analytical derivation of the scaling~\eqref{eq:epsscaling} for the lower bound, based on the quantum Fano inequality~\cite{schumacher96}, which relates the entropy $S(\rho_{\varepsilon})$ to the fidelity $F = \langle \psi_t \vert \rho_{\varepsilon,t} \vert \psi_t \rangle$:
\[ S(\rho_{\varepsilon})\,\lesssim\, h(F) + (1 - F) \, \log_2 (N^2 - 1)\, , \]
where $h(x) = -x \log_2 (x) - (1-x) \log_2 (1-x)$ is the binary Shannon entropy. Since $F \simeq e^{- \gamma \varepsilon^2 n_g t}$~\cite{rossini04,bettelli04}, with $\gamma \sim 0.28$ and $n_g = 3 n_q^2 + n_q$ being the number of gates required for each map step, we obtain, for $\varepsilon^{2} n_g t \ll 1$, \begin{equation} S (\rho_{\varepsilon}) \, \le \, \gamma \varepsilon^2 n_g t \left[
- \log_2 ( \gamma \varepsilon^2 n_g t) + 2 n_q + \frac{1}{\ln 2} \right]. \label{eq:entrofano} \end{equation} For sufficiently large systems the second term dominates (for $n_q =12$ qubits, $t=30$ and $\varepsilon \sim 5 \times 10^{-3}$ the other terms are suppressed by a factor $\sim 1/ 10$) and, to a first approximation, we can only retain it. On the other hand, an estimate of the reduced entropy $S(\rho_A)$ is given by the bipartite entropy~\eqref{eq:entpure} of a pure random state~\cite{page93}. Therefore, from Eq.~\eqref{eq:entbounds} we obtain the following expression for the lower bound of the distillable entanglement: \begin{equation} E_{AB}^{(D)} (\rho_{\varepsilon}) \, \ge \, \frac{n_q}{2} - \frac{1}{2 \ln 2} - 6 \gamma n_q^3 \varepsilon^2 t \,. \label{eq:fanoscaling} \end{equation} From the threshold definition $E_{AB}^{(D)} (\rho_{\varepsilon^{(R)}}) = \frac{1}{2} E_{AB}^{(D)} (\rho_{0}) =\frac{1}{2}S(\rho_A)$ we get the scaling~\eqref{eq:epsscaling}, that is valid when $n_q \gg 1$: $\varepsilon^{(R)}_m \sim 1/\sqrt{24 \, \gamma \, n_q^2 \, t}$. Notice that, for small systems as the ones that can be numerically simulated (see data in Fig.~\ref{fig:EScal_eps_nq}), the first term of Eq.~\eqref{eq:entrofano} may introduce remarkable logarithmic deviations from the asymptotic power-law behavior. At any rate, the scaling derived from Eq.~\eqref{eq:fanoscaling} is in good agreement with our numerical data, and also reproduces the prefactor in front of the power-law decay \eqref{eq:epsscaling} up to a factor of two.
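The threshold implied by Eq.~\eqref{eq:fanoscaling} can be evaluated directly; a small numerical check (our own, with $\gamma = 0.28$ as quoted above) recovers the $\varepsilon^{(R)} \sim 1/n_q$ scaling:

```python
import numpy as np

gamma, t = 0.28, 30  # decay rate from the fidelity fit; map iterations

def eps_threshold(n_q):
    """Threshold from Eq. (fanoscaling): solve
    S(rho_A) - 6*gamma*n_q^3*eps^2*t = S(rho_A)/2 for eps,
    with S(rho_A) = n_q/2 - 1/(2 ln 2) the Page value."""
    S_A = n_q / 2 - 1 / (2 * np.log(2))
    return np.sqrt(S_A / (12 * gamma * n_q ** 3 * t))

for n_q in (4, 8, 12, 16):
    print(n_q, eps_threshold(n_q))
# for n_q >> 1 this approaches eps ~ 1/sqrt(24*gamma*n_q^2*t) ~ 1/n_q
```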
\paragraph{Conclusions.} We have shown that quantum chaotic maps provide a convenient tool to efficiently generate, {\it in a robust way}, an amount of multipartite entanglement close to that expected for truly random states. This result may become of practical relevance, since prototypes of quantum computers simulating these systems and, in particular, our specific model~\cite{emerson} have already been implemented experimentally on a three-qubit NMR-based quantum processor~\cite{cory,emerson}. The fact that the distillable entanglement of balanced bipartitions remains almost maximal, up to a noise strength which drops only {\it polynomially} with the number of qubits, supports the possibility that multipartite entanglement of a large number of qubits might be used as a real physical resource in quantum information protocols.
\begin{acknowledgments} We acknowledge support by MIUR-PRIN and EC-EUROSQIP. This work has been performed within the ``Quantum Information'' research program of Centro di Ricerca Matematica ``Ennio de Giorgi'' of Scuola Normale Superiore. \end{acknowledgments}
\end{document}
\begin{document}
\title{Fourier transform for D-algebras}
This paper is devoted to the construction of an analogue of the Fourier transform for a certain class of non-commutative algebras. The model example which initiated this study is the equivalence between derived categories of ${\cal D}$-modules on an abelian variety and ${\cal O}$-modules on the universal extension of the dual abelian variety by a vector space (see [L], [R2]). The natural framework for a generalization of this equivalence is provided by the language of $D$-algebras developed by A.~Beilinson and J.~Bernstein in [BB]. We consider a subclass of $D$-algebras we call special $D$-algebras. We show that whenever one has an equivalence of categories of ${\cal O}$-modules on two varieties $X$ and $Y$, it gives rise to a correspondence between special $D$-algebras on $X$ and $Y$ such that the corresponding derived categories of modules are equivalent. When $X$ is an abelian variety, $Y$ is the dual abelian variety, according to Mukai [M] the categories of ${\cal O}$-modules on $X$ and $Y$ are equivalent, so our construction gives in particular the Fourier transform between modules over rings of twisted differential operators ({\it tdo}, for short, see section \ref{gen-sec} for a definition) with non-degenerate first Chern class on $X$ and $Y$.
We also deal with the microlocal version of the Fourier transform. The microlocalization of a special filtered $D$-algebra on $X$ is an NC-scheme in the sense of Kapranov (see [K]) over $X$, i.e. a ringed space whose structure ring is complete with respect to the topology defined by commutator filtration (see section \ref{et-sec}). We show that in our situation the derived categories of coherent sheaves on microlocalizations are also equivalent. In the case of rings of twisted differential operators on the dual abelian varieties $X$ and $Y$ one can think about the corresponding microlocalized algebras as deformation quantizations of the cotangent spaces $T^*X$ and $T^*Y$. The projections $p_X:T^*X\rightarrow T_0^*X$ and $p_Y:T^*Y\rightarrow T_0^*Y$ can be considered as completely integrable systems with dual fibers (a choice of a non-degenerate tdo on $X$ induces an identification of the bases $T_0^*X$ and $T_0^*Y$ of these systems). We conjecture that our construction generalizes to other dual completely integrable systems. This hope is based on the following observation: the relative version of our transform gives a Fourier transform for modules over relative tdo's on dual families of abelian varieties, while deformation quantizations are usually subalgebras in these tdo's. An example of dual completely integrable systems appears in the geometric Langlands program (see [BD]). Namely, one may consider Hitchin systems for Langlands dual groups. The analogue of the Fourier transform in this situation should lead to the equivalence between modules over a microlocalized tdo on moduli spaces of principal bundles for Langlands dual groups.
An important aspect of our work is that in an appropriate sense, the microlocalized Fourier transform is in fact \'etale local. Given a special filtered $D$-algebra $\cal A$ on $X$, let $\cal A_{ml}$ be the corresponding microlocalized scheme. Then $\cal A_{ml}$ is a non-commutative thickening of the product of $X$ with a scheme $Z$. Denoting by $\Phi(\cal A)$ the corresponding $D$-algebra on $Y$, $\Phi(\cal A)_{ml}$ is a thickening of $Y\times Z$. We then prove that the microlocalized Fourier transform is \'etale local in $Z$. It should be noted that $\cal A_{ml}$ and $\Phi(\cal A)_{ml}$ are not schemes over $Z$, so one does not have a straightforward base-change argument. Rather,
we develop in section \ref{et-sec} the non-commutative version of the theory of \'etale morphisms in the framework of Kapranov's NC-schemes, and establish the equivalence by a version of the topological invariance of \'etale morphisms.
Our work is motivated in part by Krichever's construction of solutions of the KP-hierarchy, [Kr]. (See section \ref{geom-subsec}.)
Let $W$ be a smooth variety of dimension $r$, embedded in its Albanese variety, $X$. Let $D\subset W$ be an ample hypersurface and $V\subset H^0(D,{\cal O}_D(D))$ an $r$-dimensional basepoint free subspace. Let
$\phi:D\rightarrow\Bbb P(V^*)$ denote the corresponding morphism and let $U\subset D$ be an open subset such that $\phi|_U$ is \'etale. Let $Y=Pic^0(W)$. Then $V$ maps to the space of vector fields on $Y$, and hence one has a subalgebra of the differential operators on $Y$, consisting of those operators which differentiate only in the ``$V$" directions. Denote this algebra by $\Phi(\cal A_V)$. That is, $\Phi(\cal A_V)$ is dual to a $D$-algebra $\cal A_V$ on $X$. Then the microlocalizations of these $D$-algebras are thickenings of $Y\times\Bbb P(V^*)$ and $X\times\Bbb P(V^*)$ respectively. In particular, one has the \'etale localizations $\cal A_{ml,U}$ and $\Phi(\cal A)_{ml,U}$ supported on $X\times U$ and $Y\times U$ respectively. Let $U_{\infty}$ denote the formal neighborhood of $U$ in $W$. Then the diagonal embedding $U\rightarrow X\times U$ extends to an embedding $\Delta_{\infty}:U_{\infty}\rightarrow \cal A_{ml,U}$. On the other hand, $U\times Y$ sits in both $X\times Y$ and $\Phi(\cal A)_{ml,U}$. Denote by $\cal L_{\infty}$ the Fourier transform of $\Delta_{\infty *}(\cal O_{U_{\infty}})$. We prove that $\cal L_{\infty}$ is a locally free rank-one left $\cal O$ module on $\Phi(\cal A)_{ml,U}$, whose restriction to $U\times Y$ is the restriction of the Poincar\'e line bundle. Thus $\cal L_{\infty}$ is a deformation of the Poincar\'e line bundle. Furthermore, for any positive integer $k$,
the Fourier transform of $\Delta_{\infty *}(\cal O_{U_{\infty}}(kU))$ is $\cal L_{\infty}(k(U\times Y))$.
The point is that the ring $H^0(W,\cal O(*D))$ acts by $\cal A_{ml,U}$-endomorphisms on
\break $\Delta_{\infty *}(\cal O_{U_{\infty}}(*U))$. Functoriality of the Fourier transform then gives us a representation \begin{equation}\label{representation} H^0(W,\cal O(*D))\rightarrow End_{\Phi(\cal A)_{ml,U}}(\cal L_{\infty}(*(U\times Y)))\ .\end{equation} When $W$ is a curve and $D$ is a point, this representation reduces to the Burchnall-Chaundy [BC] representation of $H^0(W,\cal O(*D))$ by differential operators. We intend to study the representation (\ref{representation}) further in a future work. In particular, the problem of characterizing the image of this representation is quite interesting, and should lead to generalizations of the KP-hierarchy.
\begin{notation} Fix a scheme $S$. Given an $S$-scheme $U$, denote by $\pi^U_S$ the structural morphism. By ``associative $S$-algebra on $U$" we mean a sheaf of associative rings $\cal A$ on $U$ equipped with a morphism of sheaves of rings from ${\pi^U_S}^{-1}(\cal O_S)$ to the center of $\cal A$. We abbreviate ``$\otimes_{{\pi^U_S}^{-1}(\cal O_S)}$" by ``$\ots$". For a scheme $U$ we denote by $\cal D^b(U)$ the bounded derived category of quasicoherent sheaves on $U$. Throughout the paper, $X$ and $Y$ are flat, separated $S$-schemes. \end{notation}
\section{$D$-algebras and Lie algebroids}\label{gen-sec} \subsection{} Let us recall some definitions from [BB]. A {\it differential} $\cal O_X$-{\it bimodule} $M$ is a quasicoherent sheaf on $X \times_S X$ supported on the diagonal $X \subset X \times_S X$. One can consider the category of differential $\cal O_X$-bimodules as a subcategory in the category of all sheaves of $\cal O_X$-bimodules on $X$. A $D$-{\it algebra} on $X$ is a sheaf of flat, associative $S$-algebras $\cal A$ on $X$ equipped with a morphism of $S$-algebras $i: \cal O_X \rightarrow \cal A$ such that $\cal A$ is a differential $\cal O_X$-bimodule. This means that $\cal A$ has an increasing filtration $0=\cal A_{-1} \subset \cal A_0 \subset \cal A_1 \subset \cdots$ such that $\cal A = \cup \cal A_n$ and $ad(f) (\cal A_k) \subset \cal A_{k-1}$ for any $k \ge 0$ and $f \in \cal O_X$, where $ad(f)(m) := fm - mf$. We denote by $b(\cal A)$ the quasi-coherent sheaf on $X\times_S X$ (supported on the diagonal) corresponding to $\cal A$. Also we denote by $\cal M ( \cal A)$ the category of sheaves of left $\cal A$-modules on $X$ which are quasicoherent as $\cal O_X$-modules.
\subsection{}\label{circle} Let us describe some basic operations with $D$-algebras and modules over them. Let $\cal A_X$ and $\cal A_Y$ be $D$-algebras over $X$ and $Y$ respectively. One defines a $D$-algebra $\cal A_X\boxtimes_S \cal A_Y$ on $X\times_S Y$ by gluing $D$-algebras over products of affine opens $U\times_S V$ corresponding to $\cal A_X(U)\ots \cal A_Y(V)$. A module $M\in\cal M(\cal A_X\boxtimes_S \cal A_Y)$ is the same as a quasicoherent $\cal O_{X\times_S Y}$-module together with commuting actions of $p_X^{-1}(\cal A_X)$ and $p_Y^{-1}(\cal A_Y)$ which are compatible with the $\cal O_{X\times_S Y}$-module structure (where $p_X$ and $p_Y$ are projections from $X\times_S Y$ to $X$ and $Y$) . In particular, we have the natural structure of $D$-algebra on $p_X^*\cal A_X\simeq \cal A_X\boxtimes_S\cal O_Y$ and $p_Y^*\cal A_Y\simeq \cal O_X\boxtimes_S\cal A_Y$ and natural embeddings of $D$-algebras $p_X^*\cal A_X\hookrightarrow \cal A_X\boxtimes_S \cal A_Y$, $p_Y^*\cal A_Y\hookrightarrow \cal A_X \boxtimes_S\cal A_Y$. For a pair of modules $M_X\in\cal M(\cal A_X)$ and $M_Y\in\cal M(\cal A_Y)$ there is a natural structure of $\cal A_X\boxtimes_S\cal A_Y$-module on $M_X\boxtimes_S M_Y$.
Now assume that we have $D$-algebras $\cal A_X$, $\cal A_Y$, and $\cal A_Z$ on $X$, $Y$ and $Z$ respectively. Then we can define an operation $$\circ_{\cal A_Y}:\cal D^-(\cal M(\cal A_X\boxtimes_S\cal A_Y^{op}))\times \cal D^-(\cal M(\cal A_Y\boxtimes_S\cal A_Z))\rightarrow \cal D^-(\cal M(\cal A_X\boxtimes_S\cal A_Z)).$$ The definition is the globalization of the operation of tensor product of bimodules. Namely, for a pair of objects $M\in {\cal D}^-(\cal M(\cal A_X\boxtimes_S \cal A_Y^{op}))$ and $N\in {\cal D}^-(\cal M(\cal A_Y\boxtimes_S \cal A_Z))$ we can form the external tensor product $M\boxtimes_S N\in {\cal D}^-(\cal M(\cal A_{XYZ}))$ where $\cal A_{XYZ}= \cal A_X\boxtimes_S \cal A_Y^{op}\boxtimes_S\cal A_Y\boxtimes_S\cal A_Z$ is a $D$-algebra on $X\times_S Y\times_S Y\times_S Z$. Note that there is a natural structure of left $\cal A_Y\boxtimes_S\cal A_Y^{op}$-module on $b(\cal A_Y)$ given by the multiplication in $\cal A_Y$. Hence, we can consider the tensor product $$(\cal A_X\boxtimes_S b(\cal A_Y)\boxtimes_S\cal A_Z) \overset{\Bbb L}{\otimes}_ {\cal A_{XYZ}}
(M\,{\overset{\Bbb L}{\boxtimes}}_S\, N)$$ as an object in the category ${\cal D}^-(p_{XZ}^{-1}(\cal A_X\boxtimes_S\cal A_Z))$ where $p_{XZ}^{-1}$ denotes the sheaf-theoretical inverse image.
Finally, we set $$M\circ_{\cal A_Y}N=Rp_{XZ*}((\cal A_X\boxtimes_S b(\cal A_Y)\boxtimes_S\cal A_Z) \overset{\Bbb L}{\otimes}_{\cal A_{XYZ}} (M\,{\overset{\Bbb L}{\boxtimes}}_S\, N)) .$$
There is also the following equivalent definition: $$M\circ_{\cal A_Y}N= Rp_{XZ*}((M\boxtimes_S\cal A_Z)\overset{\Bbb L}{\otimes}_ {{\cal A_X}^{op}\boxtimes_S\cal A_Y\boxtimes_S\cal A_Z}({\cal A_X}^{op}\boxtimes_S N))\ .$$
Specializing to the case that $Z=S$ and $\cal A_Z=\cal O_S$, we see that every $\cal A_X\boxtimes_S\cal A_Y^{op}$-module $F$ on $X\times_S Y$ defines a functor $G\mapsto F\circ_{\cal A_Y} G$ from $\cal D^-\cal M(\cal A_Y)$ to $\cal D^-\cal M(\cal A_X)$.
\begin{Prop} \begin{enumerate} \item The operation $\circ$ is associative in the natural sense. \vskip 5pt \item One has $M\circ b(\cal A_Y)\simeq M$ and $b(\cal A_Y)\circ N\simeq N$ canonically, for
\break\hbox{$M\in \cal D^-\cal M(\cal A_X\boxtimes_S\cal A_Y^{op})$} and $N\in \cal D^-\cal M(\cal A_Y\boxtimes_S\cal A_Z)$. \vskip 5pt \item If $M$ is a differential $\cal O_X$-bimodule and $N$ is an $\cal O_{X\times_S Y}$-module, then
\break \hbox{$ b(M)\circ_{\cal O_X}N=p_X^{-1}(M)\otimes_{p_X^{-1}(\cal O_X)}N$.} \end{enumerate} \end{Prop}
It follows that if $\cal A$ is a $D$-algebra on $X$, the structural morphism on $\cal A$ may be viewed as a morphism $b(\cal A)\circ_{\cal O_X} b(\cal A)\rightarrow b(\cal A)$, and if $M$ is a left $\cal A$-module, the action of $\cal A$ on $M$ is given by a morphism $b(\cal A)\circ_{\cal O_X} M\rightarrow M$. Moreover, an $\cal A_X\boxtimes_S\cal A_Y^{op}$-module structure on an $\cal O_{X\times_S Y}$-module $M$ is the same as a pair of morphisms $b(\cal A_X)\circ_{\cal O_X} M\rightarrow M$ and $M\circ_{\cal O_Y}b(\cal A_Y)\rightarrow M$ making $M$ a (left $b(\cal A_X)$)-(right $b(\cal A_Y)$)-module with respect to $\circ$, such that the two module structures commute.
\subsection{} Recall that a {\it Lie algebroid} $L$ on $X$ is a (quasicoherent) $\cal O_X$-module equipped with a morphism of $\cal O_X$-modules $\sigma: L \rightarrow \cal T (:= \text{Der}_S \cal O_X = $\text{relative tangent sheaf of $X$}) and an $S$-linear Lie bracket $[\cdot,\cdot] : L\ots L \rightarrow L$ such that $\sigma$ is a homomorphism of Lie algebras and the following identity is satisfied: $$ [\ell_1, f \ell_2] = f\cdot [\ell_1, \ell_2] + \sigma (\ell_1) (f) \ell_2 $$ where $\ell_1, \ell_2 \in L, f \in \cal O_X$. To every Lie algebroid $L$ one can associate a $D$-algebra $\cal U (L)$ called the {\it universal enveloping algebra} of $L$. By definition $\cal U(L)$ is a sheaf of algebras equipped with the morphisms of sheaves $i: \cal O_X \rightarrow \cal U(L) , i_L: L \rightarrow \cal U(L)$, such that $\cal U(L)$ is generated, as an algebra, by the images of these morphisms and the only relations are:
\begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} \item $i$ is a morphism of algebras; \item $i_L$ is a morphism of Lie algebras; \item $i_L(f \ell)= i(f) i_L (\ell),\ [ i_L (\ell) , i(f)] = i (\sigma (\ell) (f))$, where $f \in \cal O_X, \ell \in L$. \end{enumerate}
\subsection{} Let $L$ be a Lie algebroid on $X$. A central extension of $L$ by $\cal O_X$ is a Lie algebroid $\tilde{L}$ on $X$ equipped with an embedding of $\cal O_X$ - modules $c:\cal O_X \hookrightarrow \tilde{L}$ such that $[c(1), \tilde{\ell}] = 0$ for every $\tilde{\ell} \in \tilde{L}$ ( in particular, $c(\cal O_X)$ is an ideal in $\tilde{L}$), and an isomorphism of Lie algebroids $\tilde{L}/c(\cal O_X) \simeq L$. For such a central extension we denote by $\cal U^{\circ}(\tilde{L})$ the quotient of $\cal U(\tilde{L})$ modulo the ideal generated by the central element $i(1) - i_{\tilde{L}} (c(1))$.
\begin{Lem}\label{locallyfree} Let $L$ be a locally free $\cal O_X$-module of finite rank. Then there is a bijective correspondence between isomorphism classes of the following data: \begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} \item a structure of a Lie algebroid on $L$ together with a central extension $\tilde{L}$ of $L$ by $\cal O_X$; \item a $D$-algebra $\cal A$ equipped with an increasing algebra filtration $\cal O_X = \cal A_0 \subset \cal A_1 \subset \cal A_2 \subset \dots$ such that $\cup \cal A_n = \cal A$, together with an isomorphism of the associated graded algebra $\operatorname{gr} \cal A$ with the symmetric algebra $S^\bullet L$. \end{enumerate} \end{Lem} The correspondence between (i) and (ii) maps a central extension $\tilde{L}$ to $\cal U^\circ (\tilde{L})$.
\subsection{} Assume that $X$ is smooth over $S$. Then one can take $L= \cal T$ with its natural Lie algebroid structure. The corresponding central extensions $\tilde{\cal T}$ of $\cal T$ by $\cal O$ are called {\it Picard algebroids} and the associated $D$-algebras are called {\it algebras of twisted differential operators}, or simply {\it tdo}'s. If $\cal D$ is a tdo and $\cal D_{-1} = 0 \subset \cal D_{0} \subset \cal D_1 \subset \cal D_2 \subset \dots$ is its maximal $D$-filtration, i.e. $$
{\cal D}_i = \{d \in {\cal D} \,|\, \operatorname{ad}(f)(d) \in {\cal D}_{i-1} \text{ for all } f\in \cal O_X \},$$ then $\operatorname{gr}{\cal D} \simeq S^\bullet \cal T$. \begin{Lem}\label{isomorphism} For a locally free $\cal O_X$-module of finite rank $E$ one has a canonical isomorphism $$ \operatorname{Ext}^1_{\cal O_{X \times_S X}}(\Delta_* E, \Delta_*\cal O_X) \simeq \operatorname{Hom}_{\cal O_X} (E, \cal T) \oplus \operatorname{Ext}^1_{\cal O_X} (E,\cal O_X), $$ where $X\overset \Delta\rightarrow X\times_S X$ is the diagonal embedding. \end{Lem}
\begin{pf} Since $\Delta_* E \simeq p^*_1 E \otimes_{\cal O_{X\times_S X}} (\cal O_{X\times_S X}/J)$, where $J$ is the ideal sheaf of the diagonal, we have an exact sequence $$ 0 \rightarrow \operatorname{Hom} (p^*_1 E \otimes_{\cal O_{X\times_S X}} J,\Delta_*\cal O_X) \rightarrow \operatorname{Ext}^1 (\Delta_*E, \Delta_*\cal O_X) \rightarrow \operatorname{Ext}^1 (p^*_1 E,\Delta_*\cal O_X) $$ Note that the first and last terms are isomorphic to $\operatorname{Hom}(E, \cal T)$ and $\operatorname{Ext}^1(E,\cal O_X)$ respectively. It remains to note that there is a canonical splitting $\Delta_*: \operatorname{Ext}^1(E,\cal O_X) \rightarrow \operatorname{Ext}^1(\Delta_* E, \Delta_* \cal O)$. \end{pf}
Note that the projection $\operatorname{Ext}^1(\Delta_*E, \Delta_*\cal O_X) \rightarrow \operatorname{Hom}(E, \cal T)$ can be described as follows. Given an extension \[ \begin{CD} 0 @>>> \Delta_* \cal O_X @>>> \tilde{E} @>>> \Delta_* E @>>> 0 \end{CD} \] the action of $J/J^2$ on $\tilde{E}$ induces the morphism $J/J^2 \otimes \tilde{E} \rightarrow \Delta_* \cal O$, which factors through $J/J^2 \otimes \Delta_* E$, since $J$ annihilates $\Delta_* \cal O$. Hence we get a morphism $\Delta_* E \rightarrow \Delta_* \cal T$.
Now if $\cal A$ is a $D$-algebra, equipped with a filtration $\cal A_{\bullet}$ such that $\operatorname{gr}\cal A \simeq S^{\bullet}(E)$, then we consider the corresponding extension of $\cal O_X$-bimodules \[ \begin{CD} 0 @>>> \cal O_X = \cal A_0 @>>> \cal A_1 @>>> E=\cal A_1/\cal A_0@>>> 0 \end{CD} \] as an element in $\operatorname{Ext}^1_{\cal O_{X \times X}}(\Delta_* E, \Delta_* \cal O_X)$. By definition, $\cal A$ is a tdo if the projection of this element to $\operatorname{Hom}_{\cal O_X}(E, \cal T)$ is a map $E \rightarrow \cal T$ which is an isomorphism.
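The basic example of a Picard algebroid, recalled here only for orientation, is the Atiyah algebra of a line bundle: for a line bundle $M$ on $X$ let $\tilde{\cal T}_M$ denote the sheaf of differential operators of order $\le 1$ acting on $M$; the principal symbol gives a central extension $$ 0 \rightarrow \cal O_X \rightarrow \tilde{\cal T}_M \rightarrow \cal T \rightarrow 0\ , $$ and $\cal U^{\circ}(\tilde{\cal T}_M) \simeq \cal D_M$, the ring of differential operators on $M$.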
\section{Equivalences of categories of modules over $D$-algebras}
\subsection{} Let $P$ be an object in ${\cal D}^b(X\times_S Y)$ and let $Q$ be an object in ${\cal D}^b(Y\times_S X)$ such that $$P\circ_{\cal O_Y} Q\simeq\Delta_*\cal O_X\ ,\quad Q\circ_{\cal O_X} P\simeq\Delta_*\cal O_Y\ ,$$ where $\Delta$ denotes the corresponding diagonal embedding. In this case the functors $$\Phi_P:M\mapsto P\circ M\ ,\quad \Phi_Q:N\mapsto Q\circ N$$ establish an equivalence of the categories ${\cal D}^-(Y)$ and ${\cal D}^-(X)$.
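Here $\circ$ denotes convolution of kernels; for the reader's convenience we recall the standard formula (stated under the usual convention): for $K_1$ on $X\times_S Y$ and $K_2$ on $Y\times_S Z$ one sets $$ K_1 \circ_{\cal O_Y} K_2 = Rp_{XZ*}\left(p_{XY}^* K_1 \overset{\Bbb L}{\otimes}_{\cal O_{X\times_S Y \times_S Z}} p_{YZ}^* K_2\right), $$ where $p_{XY}, p_{YZ}, p_{XZ}$ are the projections from $X\times_S Y\times_S Z$ onto the pairwise products.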
For example, we have these data in the following situation: $X$ is an abelian $S$-scheme, $Y=\hat{X}$ is the dual abelian $S$-scheme, $P=\cal P$ is the normalized Poincar\'e line bundle on $X \times_S \hat{X}$, $Q=\sigma^*\cal P^{-1}\omega_{X}^{-1}[-g]$ where $\sigma:\hat{X}\times_S X\rightarrow X\times_S \hat{X}$ is the permutation of factors, $g = \dim X$.
\subsection{} Let us call a quasicoherent sheaf $K$ on $X \times_S X$ {\it special} if there is a filtration $0 = K_{-1}\subset K_0 \subset K_1 \subset \dots$ of $K$ and a sequence of flat, quasicoherent $\cal O_S$-modules $F_i$ such that $\cup K_i = K$ and $K_i/K_{i-1} \simeq \Delta_* {\pi_S^X}^*(F_i)$ for every $i \ge 0$. We denote by $\cal S_X$ the exact category of special sheaves on $X \times_S X$. The following properties are easily verified. \begin{Lem}\label{sheaves on S} Let $F\in {\cal D}^-(\cal M(\cal O_S))$. \begin{enumerate} \item Let $G\in {\cal D}^-(\cal M(\cal O_{X\times_S Y}))$. Then $\Delta_{X*}{\pi_S^X}^*(F)\circ_{\cal O_X}G= {\pi_S^{X\times_S Y}}^*(F)\overset{\Bbb L}{\otimes}_{\cal O_{X\times_S Y}}G$. \item $P \circ_{\cal O_Y} \Delta_{Y*}{\pi_S^Y}^*(F) \circ_{\cal O_Y} Q=\Delta_{X*}{\pi_S^X}^*(F)$. \end{enumerate} \end{Lem} The following proposition then follows.
\begin{Prop} \begin{enumerate} \item For every $K \in \cal S_Y$, the functor $\cal M(\cal O_{Y\times Z})\rightarrow \cal M(\cal O_{Y\times Z})$, $M\mapsto K\circ_{\cal O_Y} M$, is exact. \vskip 5pt \item For every pair of special sheaves $K, K' \in \cal S_Y$, $ K{\circ}_{\cal O_Y} K'$ is special. \end{enumerate} \end{Prop}
\begin{Prop} The functor $\Phi: K \mapsto P \circ_{\cal O_Y} K \circ_{\cal O_Y} Q$ defines an equivalence of categories $\Phi: \cal S_Y \rightarrow \cal S_X$. \end{Prop} \begin{pf} From Lemma \ref{sheaves on S}, together with the fact that the
operation $\circ$ commutes with inductive limits, we have $\Phi(K) \in \cal S_X$ for every $K \in \cal S_Y$. It remains to notice that there is an inverse functor to $\Phi$ given by $$ \Phi^{-1}(K')=Q {\circ}_{\cal O_X} K' {\circ}_{\cal O_X} P $$ where $K' \in \cal S_X$. \end{pf}
\begin{Prop}\label{homomorphism} For $K, K' \in \cal S_Y, M \in \cal D^b(Y)$ one has a canonical isomorphism of $\cal O_X$-bimodules $$ \Phi(K \circ_{\cal O_{Y}} K') \simeq \Phi K \circ_{\cal O_X} \Phi K', $$ and a canonical isomorphism in $\cal D^b(X)$ $$ \Phi (K \circ_{\cal O_Y} M) \simeq \Phi K \circ_{\cal O_X} \Phi M. $$ \end{Prop}
\begin{Def} A {\it special $D$-algebra} on $X$ is a $D$-algebra $\cal A$ such that the sheaf $b(\cal A)$ on $X\times_S X$ is special.\end{Def}
It follows from the above proposition that for any special $D$-algebra $ \cal A$ on $Y$ there exists a canonical $D$-algebra $\Phi \cal A$ on $X$ such that $$b(\Phi\cal A)\simeq \Phi(b(\cal A)).$$ Namely, one just has to apply $\Phi$ to the structural morphisms $b(\cal A) \circ_{\cal O_{Y}} b(\cal A) \rightarrow b(\cal A)$ and $\Delta_* \cal O_{Y} \rightarrow b(\cal A)$. Furthermore, we now prove that the derived categories of modules over $\cal A$ and $\Phi \cal A$ are equivalent.
\begin{Thm}\label{main1} Assume that $P$ and $Q$ are quasi-coherent sheaves up to a shift (i.e. each has only one cohomology sheaf). Then for every special $D$-algebra $\cal A$ on $Y$ there is a canonical exact equivalence $\Phi: \cal D^b \cal M(\cal A) \rightarrow \cal D^b \cal M(\Phi \cal A)$ such that the following diagram of functors is commutative: \[ \begin{CD} \cal D^b \cal M(\cal A) @>\Phi>> \cal D^b \cal M(\Phi \cal A)\\ @VVV@VVV\\ \cal D^b (Y) @>\Phi>> \cal D^b (X) \end{CD} \] where the vertical arrows are the forgetful functors. \end{Thm} \begin{pf} Let us consider the following object in ${\cal D}^b(X\times_S Y)$: $$ \cal B = P {\circ}_{\cal O_{Y}} b(\cal A) = P \otimes_{\cal O_{X\times_S Y}} p_{Y}^*\cal A\ . $$ Note that $\cal B$ is actually concentrated in one degree, so we can consider it as a (perhaps shifted) quasicoherent sheaf on $X\times_S Y$. We claim that there is a canonical $\Phi \cal A\boxtimes \cal A^{op}$-module structure on $\cal B$. Indeed, it suffices to construct commuting actions $b(\Phi \cal A)\circ\cal B\rightarrow\cal B$ and $\cal B\circ b(\cal A)\rightarrow\cal B$ compatible with the $\cal O_{X\times_S Y}$-module structure. The right action of $\cal A$ is obvious, while the left action of $\Phi \cal A$ on $\cal B$ is given by the following map: $$ \Phi b(\cal A) \circ_{\cal O_{X}} \cal B = P \circ_{\cal O_{Y}} b(\cal A) {\circ}_{\cal O_{Y}} Q {\circ}_{\cal O_X} P \circ_{\cal O_{Y}} b(\cal A) \rightarrow P \circ_{\cal O_{Y}} b(\cal A) \circ_{\cal O_{Y}} b(\cal A) \rightarrow P \circ_{\cal O_{Y}} b(\cal A) = \cal B\ , $$ where the last arrow is induced by multiplication in $\cal A$. It is clear that this is a morphism of right $p_{Y}^{-1}\cal A$-modules. On the other hand, there is a natural isomorphism of sheaves on $X\times_S Y$ $$ b(\Phi \cal A)\circ_{\cal O_{X}} P\simeq P\circ_{\cal O_{Y}} b(\cal A)\circ_{\cal O_{Y}} Q \circ_{\cal O_X} P\simeq P\circ_{\cal O_{Y}} b(\cal A)\simeq\cal B\ . 
$$ One can easily check that the above left action of $p_X^{-1}\Phi \cal A$ on $\cal B$ is compatible with the natural left $p_X^{-1}\Phi \cal A$-module structure on $b(\Phi \cal A)\circ_{\cal O_{X}} P$ via this isomorphism. Thus, $\cal B$ is an object of ${\cal D}^b(\cal M(\Phi\cal A\boxtimes\cal A^{op}))$ (concentrated in one degree).
So we can define the functor $$ \Phi: {\cal D}^b\cal M(\cal A) \rightarrow {\cal D}^b \cal M(\Phi \cal A) : M \mapsto \cal B{\circ}_{\cal A} M\ . $$ Similarly, we define a (perhaps shifted) $\cal A\boxtimes\Phi \cal A^{op}$-module on $Y\times_S X$: $$ \cal B' = b(\cal A) \circ_{\cal O_Y} Q \simeq Q \circ_{\cal O_X} b(\Phi \cal A) $$ and the functor $$ \Phi^\prime: {\cal D}^b \cal M(\Phi \cal A) \rightarrow {\cal D}^b \cal M(\cal A) : N \mapsto \cal B^\prime {\circ}_{\Phi \cal A} N. $$ One has an isomorphism in the derived category of right $\cal O_Y\boxtimes\cal A$-modules on $Y\times_S Y$ $$ \cal B^\prime {\circ}_{\Phi \cal A} \cal B \simeq ( Q {\circ}_{\cal O_X} b(\Phi \cal A)){\circ}_{\Phi \cal A} \cal B \simeq Q {\circ}_{\cal O_X} \cal B \simeq Q {\circ}_{\cal O_X} P \circ_{\cal O_{Y}} b(\cal A) \simeq b(\cal A). $$ Similarly, there is an isomorphism in the derived category of left $\cal A\boxtimes\cal O_Y$-modules $$ \cal B^\prime{\circ}_{\Phi\cal A} \cal B \simeq \cal B^\prime{\circ}_{\Phi \cal A} (b(\Phi \cal A) {\circ}_{\cal O_X} P) \simeq \cal B^\prime\circ_{\cal O_X} P \simeq b(\cal A)\circ_{\cal O_{Y}} Q{\circ}_{\cal O_X} P \simeq b(\cal A). $$ Moreover, both these isomorphisms coincide with the following isomorphism of ${\cal O}_{Y\times_S Y}$-modules $$\cal B'\circ_{\Phi\cal A} \cal B\simeq Q\circ_{\cal O_X} b(\Phi\cal A)\circ_{\Phi\cal A} b(\Phi\cal A)\circ_{\cal O_X} P\simeq Q\circ_{\cal O_X} b(\Phi\cal A)\circ_{\cal O_X} P \simeq b(\cal A).$$ It follows that $\cal B^\prime{\circ}_{\Phi\cal A} \cal B \simeq b(\cal A)$ in the derived category of $\cal A\boxtimes\cal A^{op}$-modules.
Similarly, $\cal B{\circ}_{\cal A}\cal B'\simeq b(\Phi\cal A)$. It follows that the compositions $\Phi \Phi': {\cal D}^b \cal M(\Phi \cal A) \rightarrow {\cal D}^b \cal M(\Phi \cal A)$ and $\Phi' \Phi: {\cal D}^b \cal M(\cal A) \rightarrow \cal D^b \cal M(\cal A)$ are isomorphic to the identity functors.
The composition of $\Phi$ with the forgetful functor is easily computed: $$ \cal B {\circ}_{\cal A} M \simeq ( P {\circ}_{\cal O_Y} b(\cal A)) \circ_{\cal A} M \simeq P{\circ}_{\cal O_Y} M\ . $$ Hence, after forgetting the $\Phi \cal A$-module structure, we just get the transform with kernel $P$. \end{pf}
\begin{Rems} 1. In the situation of the theorem, if we have another special $D$-algebra $\cal A^{\prime}$ and a homomorphism of $D$-algebras $\cal A\rightarrow \cal A^{\prime}$, then we have the corresponding induction and restriction functors $M\mapsto \cal A^{\prime}\otimes_{\cal A} M$ and $N\mapsto N$ between the categories of $\cal A$-modules and $\cal A^{\prime}$-modules. It is easy to check that the corresponding derived functors commute with our functors $\Phi$ constructed for $\cal A$ and $\cal A^{\prime}$.
\noindent 2. Let $A$ be an abelian variety, $\hat{A}$ be the dual abelian variety. Then as was shown in [L] and [R2] the Fourier-Mukai equivalence ${\cal D}^b(A)\simeq{\cal D}^b(\hat{A})$ extends to an equivalence of the derived categories of $\cal D$-modules on $A$ and ${\cal O}$-modules on the universal extension of $\hat{A}$ by a vector space. The latter category is equivalent to the category of modules over the commutative sheaf of algebras $\cal A$ on $\hat{A}$ which is constructed as follows. Let $$0\rightarrow \cal O\rightarrow\cal E\rightarrow H^1(\hat{A},\cal O)\otimes\cal O\rightarrow 0$$ be the universal extension. Then $\cal A=\operatorname{Sym}(\cal E)/(1_{\cal E}-1)$ where $1_{\cal E}$ is the image of $1\in\cal O$ in $\cal E$. It is easy to see that $\cal A$ is the dual special $D$-algebra to the algebra of differential operators on $A$, so our theorem implies the mentioned equivalence of categories.
\noindent 3. In the case of abelian varieties one can generalize the notion of a special $D$-algebra as follows. Instead of considering special sheaves on $X\times X$ one can consider quasi-coherent sheaves on $X\times X$ admitting a filtration with quotients of the form $(\operatorname{id}, t_x)_*L$, where $(\operatorname{id}, t_x):X\rightarrow X\times X$ is the graph of the translation by some point $x\in X$ and $L$ is a line bundle algebraically equivalent to zero on $X$. Let us call such sheaves quasi-special. It is easy to see that quasi-special sheaves are flat over $X$ with respect to both projections $p_1$ and $p_2$, so the operation $\circ$ is exact on them. We can define a quasi-special algebra as a quasi-special sheaf $K$ on $X\times X$ together with an associative multiplication $K\circ K\rightarrow K$ admitting a unit $\Delta_*\cal O_X\rightarrow K$. Then there is a Fourier duality for quasi-special algebras and an equivalence of the corresponding derived categories. The proof of the above theorem works literally in this situation. Note that modules over quasi-special algebras form a much broader class of categories than those over special $D$-algebras. Among these categories we can find some categories of modules over 1-motives, and our Fourier duality coincides with the one defined by G. Laumon in [L]. For example, a homomorphism $\phi:\Bbb Z\rightarrow X$ defines a quasi-special algebra on $X$ which is the direct sum of the structure sheaves of the graphs of translations by $\phi(n)$, $n\in\Bbb Z$. The corresponding category of modules is the category of $\Bbb Z$-equivariant $\cal O_X$-modules. The Fourier dual algebra corresponds to the affine group over $\hat{X}$ which is an extension of $\hat{X}$ by the multiplicative group. \end{Rems}
\subsection{} Let $L$ be a Lie algebroid on $Y$ such that $L \simeq \cal O^d_{Y}$ as an $\cal O_{Y}$-module. Then for any central extension $\tilde{L}$ of $L$ by $\cal O_{Y}$, the $D$-algebra $\cal U^\circ (\tilde{L})$ is special. Furthermore, one has $\Phi \cal U^\circ (\tilde{L}) \simeq \cal U^\circ(\tilde{L^\prime})$ for some central extension $ \tilde{L^\prime}$ of a Lie algebroid $L^\prime$ on $X$ such that $L^\prime \simeq \cal O^d_X$ as an $\cal O_X$-module. Indeed, this follows essentially from Lemma \ref{locallyfree}. One just has to notice that if a $D$-algebra $\cal A$ on $Y$ has an algebra filtration $\cal A_\bullet$ with $\operatorname{gr} \cal A_\bullet \simeq S^\bullet (\cal O^d_{Y})$, then $\Phi \cal A$ has an algebra filtration $(\Phi \cal A)_\bullet$ with $\operatorname{gr} (\Phi \cal A)_\bullet \simeq S^\bullet(\cal O^d_X)$. Note that if $L$ is a successive extension of trivial bundles then the $D$-algebra $\cal U^\circ(\tilde{L})$ is still special, but $\Phi \cal U^\circ(\tilde{L})$ is not necessarily of the form $\cal U^\circ (\tilde{L'})$.
\subsection{} Assume now that $X$ is an abelian variety. Let $\cal P$ be a Picard algebroid on $\hat X$ and let $\cal D= \cal U^\circ(\cal P)$ be the corresponding tdo. Then $\cal P/ \cal O_{\hat X} \simeq \cal T_{\hat X} \simeq \hat {\frak g} \otimes_k \cal O _{\hat X}$ is a trivial $\cal O_{\hat X}$-module, hence $\Phi \cal D \simeq \cal U^{\circ}(\tilde{L^\prime})$ for some Lie algebroid ${L^\prime}$ on $X$ and its central extension $\tilde{L^\prime}$ by $\cal O_X$. \begin{Prop} Let $\cal D$ be a tdo on $\hat X$ and $\cal P$ the corresponding Picard algebroid. Then $\Phi\cal D$ is a tdo on $X$ if and only if the map $\hat{\frak g} \rightarrow H^1 (\hat X, \cal O)$, induced by the extension of $\cal O_{\hat X}$-modules $$ 0 \rightarrow \cal O_{\hat X}\rightarrow \cal P \rightarrow \hat{\frak g} \otimes_k \cal O_{\hat X}\rightarrow 0, $$ is an isomorphism. \end{Prop} \begin{pf} Let $\cal D_\bullet$ be the canonical filtration of $\cal D$. Then $\Phi \cal D$ is a tdo if and only if the class of the extension of $\cal O_X$-bimodules $$ 0 \rightarrow \cal O_X \simeq \Phi \cal D_0 \rightarrow \Phi \cal D_1 \rightarrow \Phi(\cal D_1/ \cal D_0) \simeq \hat{\frak g} \otimes \cal O_X \rightarrow 0 $$ induces an isomorphism $\hat{\frak g} \otimes \cal O_{X} \rightarrow \cal T_X$. Thus, it is sufficient to check that the components of the canonical decomposition $$ \operatorname{Ext}^1_{\cal O_{X \times X}} (\Delta_* \cal O_X, \Delta_* \cal O_X) \simeq H^0(X, \cal T)\oplus H^1(X, \cal O_X) $$ introduced in Lemma \ref{isomorphism}, get interchanged by the Fourier-Mukai transform, if we take into account the natural isomorphisms $$ H^0(X, \cal T) \simeq \frak g \simeq H^1(\hat X, \cal O), $$ $$ H^1 (X,\cal O_X) \simeq \hat{\frak g} \simeq H^0 (\hat X,\cal T). $$ We leave this to the reader as a pleasant exercise on the Fourier-Mukai transform. \end{pf}
\subsection{} Let us describe in more detail the data describing a Lie algebroid $L$ on an abelian variety $X$ such that $L \simeq V \otimes_k \cal O_X$ as an $\cal O_X$-module, where $V$ is a finite-dimensional $k$-vector space, together with a central extension $\tilde {L}$ of $L$ by $\cal O_X$. First of all, $V=H^0(X,L)$ has a structure of a Lie algebra, and the structural morphism $L \rightarrow \cal T$ is given by some $k$-linear map $\beta : V \rightarrow \frak g = H^0 (X,\cal T)$ which is a homomorphism of Lie algebras (where $\frak g$ is regarded as an abelian Lie algebra). The central extension $\tilde {L}$ is described (up to an isomorphism) by a class $\widetilde{\alpha}$ in the first hypercohomology space $\Bbb H^1(X,L^* \rightarrow \wedge^2 L^* \rightarrow \wedge^3 L^* \rightarrow \dots)$ of the truncated Koszul complex of $L$. In particular, we have the corresponding class $\alpha \in H^1(X, L^*)$, which is just the class of the extension of $\cal O_X$-modules $$ 0 \rightarrow \cal O_X \rightarrow \tilde{L} \rightarrow L \rightarrow 0\ . $$ We can consider $\alpha$ as a linear map $V \rightarrow H^1(X, \cal O_X) = \hat{\frak g}$. The maps $\alpha$ and $\beta$ get interchanged by the Fourier transform, up to a sign.
By definition, the $D$-algebra associated with $\widetilde{L}$ is a tdo if and only if $\beta:V\rightarrow\frak g$ is an isomorphism. If in addition $\alpha:V\rightarrow\hat{\frak g}$ is an isomorphism, then the dual $D$-algebra is also a tdo. Thus, we have a bijection between tdo's with non-degenerate first Chern class on $X$ and on $\hat{X}$ such that the corresponding derived categories of modules are equivalent. According to [BB], isomorphism classes of tdo's on $X$ are classified by $\Bbb H^2(X,\Omega^{\ge 1})$, which is an extension of $H^1(X,\Omega^1)\simeq \operatorname{Hom}(\frak g,\hat{\frak g})$ by $H^0(X,\Omega^2)=\wedge^2{\frak g}^*$. Let $U_X\subset \Bbb H^2(X,\Omega^{\ge 1})$ be the subset of elements with non-degenerate projection to $H^1(X,\Omega^1)$. The duality gives an isomorphism between $U_X$ and $U_{\hat{X}}$. It is easy to see that under this isomorphism multiplication by $\lambda\in k^*$ on $U_X$ corresponds to multiplication by $\lambda^{-1}$ on $U_{\hat{X}}$.
On the other hand, let $\cal A$ be a tdo with trivial $c_1$. In other words, $\cal A$ corresponds to some global 2-form $\omega$ on $X$. Modules over $\cal A$ are ${\cal O}$-modules equipped with a connection having the scalar curvature $\omega$. Let $\cal B$ be the dual $D$-algebra on $\hat{X}$ and let $\widetilde{L}\rightarrow L=H^0(X,T_X)\otimes{\cal O}_{\hat{X}}$ be the corresponding central extension of Lie algebroids. We claim that $L$ is just an ${\cal O}_{\hat{X}}$-linear commutative Lie algebra while the central extension $\widetilde{L}$ is given by the class $(e,\omega)\in H^1(L^*)\oplus H^0(\wedge^2 L^*)$, where $e$ is the canonical element in $H^1(L^*)\simeq H^1(\hat{X},{\cal O})\otimes H^1(\hat{X},{\cal O})^*$. Indeed, as an ${\cal O}_{\hat{X}}$-module $\widetilde{L}$ is a universal extension of $H^1(\hat{X},{\cal O})\otimes{\cal O}$ by ${\cal O}$. Hence, the Lie bracket defines a morphism of ${\cal O}$-modules $\wedge^2 L\rightarrow\widetilde{L}$. Since $H^0(\widetilde{L})=H^0({\cal O})$ it follows that $[\widetilde{L},\widetilde{L}]\subset {\cal O}\subset\widetilde{L}$. It is easy to see that the Lie bracket is just given by $\omega:\wedge^2 L\rightarrow{\cal O}$.
Recall that the N\'eron-Severi group of $X$ is identified with $\operatorname{Hom}^{\operatorname{sym}}(X,\hat{X})\otimes\Bbb Q$, where $\operatorname{Hom}^{\operatorname{sym}}(X,\hat{X})$ is the group of symmetric homomorphisms $X\rightarrow\hat{X}$. Namely, to a line bundle $L$ there corresponds a symmetric homomorphism $\phi_L:X\rightarrow\hat{X}$ sending a point $x$ to $t_x^*L\otimes L^{-1}$, where $t_x:X\rightarrow X$ is the translation by $x$. One has the natural homomorphism $c_1:NS(X)\rightarrow \Bbb H^2(X,\Omega^{\ge 1})$ sending a line bundle $L$ to the class of the ring $D_L$ of differential operators on $L$. For $\mu\in NS(X)$ we denote by $D_{\mu}$ the corresponding tdo. If $\mu\in NS(X)$ is a non-degenerate class, so that $c_1(\mu)\in U_X$, then the Fourier dual tdo to $D_{\mu}$ is \begin{equation}\label{detisom} \Phi(D_{\mu})=D_{-\mu^{-1}}. \end{equation} Indeed, it suffices to check this when $\mu$ is the class of a line bundle $L$, in which case it follows easily from the isomorphism $$\phi_L^*\det\Phi(L)\simeq L^{-\operatorname{rk}\Phi(L)}$$ and the fact that the dual tdo to $D_L$ acts on $\Phi(L)$.
Let $E$ be a coherent sheaf which is a module over some tdo on $X$ (then $E$ is automatically locally free). Following [BB] we say in this case that there is an integrable projective connection on $E$.
\begin{Prop} Let $E$ be a vector bundle on $X$ equipped with an integrable projective connection. Assume that $\det E$ is a non-degenerate line bundle. Then the $H^i\Phi(E)$ are vector bundles with canonical integrable projective connections, and the following equality holds: $$\phi_{\det E}^*c_1(\Phi(E)) =-\chi(X,E)\cdot\operatorname{rk} E\cdot c_1(E).$$ \end{Prop} \begin{pf} The first statement follows immediately from the fact that $\Phi(E)$ is quasi-isomorphic to a complex of modules over the tdo on $\hat{X}$ dual to $D_{(\det E)^{\frac{1}{r}}}$, where $r=\operatorname{rk} E$. On the other hand, this tdo acting on $\Phi(E)$ is isomorphic to $D_{(\det \Phi(E))^{\frac{1}{r'}}}$, where $r'=\operatorname{rk}\Phi(E)=\chi(X,E)$. Considering the classes of these dual tdo's and using the isomorphism (\ref{detisom}) applied to $\mu=\frac{1}{r}\phi_{\det E}$ we get the above formula. \end{pf}
\subsection{} The following two natural questions arise: 1) whether for every $\mu\in NS(X)$ there exists a vector bundle $E$ on $X$ which is a module over $D_{\mu}$, 2) what vector bundles on an abelian variety admit integrable projective connections. To answer these questions we use the following construction. Let $\pi:X_1\rightarrow X_2$ be an isogeny of abelian varieties and $E$ be a vector bundle with an integrable projective connection on $X_1$. Then there is a canonical integrable projective connection on $\pi_*E$. Indeed, the simplest way to see this is to use Fourier duality. If $E$ is a module over some tdo $D_{\lambda}$ on $X_1$ then $\Phi(E)$ is a module over the dual $D$-algebra $\Phi(D_{\lambda})$ on $\hat{X}_1$. Now we use the formula $$\pi_*E\simeq \Phi^{-1}\hat{\pi}^*(\Phi(E)),$$ where $\Phi^{-1}$ is the inverse Fourier transform on $X_2$, hence $\pi_*E$ is a module over $\Phi^{-1}\hat{\pi}^*\Phi(D_{\lambda})$ which is a tdo on $X_2$.
In particular, the push-forwards of line bundles under isogenies have canonical integrable projective connections. Also it is clear that if $E$ is a vector bundle with an integrable projective connection and $F$ is a flat vector bundle then $E\otimes F$ has a natural integrable projective connection.
Now we can answer the above questions.
\begin{Thm} For every $\mu\in NS(X)$ there exists a vector bundle $E$ which is a module over $D_{\mu}$. \end{Thm} \begin{pf} We can write $\mu=[L]/n$ where $n>0$ is an integer and $[L]$ is the class of a line bundle $L$ on $X$. Let $[n]_X:X\rightarrow X$ be the endomorphism of multiplication by $n$. Then $[n]_X^*(\mu)\in NS(X)$ is represented by a line bundle $L'$. Now we claim that the push-forward $[n]_{X,*}L'$ has a structure of a module over $D_{\mu}$. Indeed, it suffices to check that $$c_1([n]_{X,*}L')/\deg([n]_X)=\mu.$$ Let $\operatorname{Nm}_n: NS(X)\rightarrow NS(X)$ be the norm homomorphism corresponding to the isogeny $[n]_X$. Then the LHS of the above equality is $\operatorname{Nm}_n([L'])/\deg([n]_X)$. Hence, the pull-back of the LHS by $[n]_X$ is equal to $[L']=[n]_X^*(\mu)$, which implies our claim. \end{pf}
\begin{Thm} Let $E$ be an indecomposable vector bundle with an integrable projective connection on an abelian variety $X$. Then there exists an isogeny of abelian varieties $\pi:X'\rightarrow X$, a line bundle $L$ on $X'$ and a flat bundle $F$ on $X$ such that $E\simeq \pi_*L\otimes F$. \end{Thm} \begin{pf} The main idea is to analyze the sheaf of algebras $A=End(E)$. Namely, $A$ has a flat connection such that the multiplication is covariantly constant. In other words, it corresponds to a representation of the fundamental group $\pi_1(X)$ in automorphisms of the matrix algebra. Since all such automorphisms are inner we get a homomorphism $$\rho:\pi_1(X)\rightarrow PGL(E_0)$$ where $E_0$ is the fiber of $E$ at zero. Now the central extension $SL(E_0)\rightarrow PGL(E_0)$ induces a central extension of $\pi_1(X)=\Bbb Z^{2g}$ by the group of roots of unity of order $\operatorname{rk} E$. This central extension splits over some subgroup of finite index $H\subset\pi_1(X)$. In other words, the restriction of $\rho$ to $H$ lifts to a homomorphism $\rho_H:H\rightarrow GL(E_0)$. Let $\pi:\widetilde{X}\rightarrow X$ be the isogeny corresponding to $H$, so that $\widetilde{X}$ is an abelian variety with $\pi_1(\widetilde{X})=H$. Then $\rho_H$ defines a flat bundle $\widetilde{F}$ on $\widetilde{X}$ such that $$\pi^*A\simeq End(\widetilde{F})$$ as algebras with connections. It follows that $\pi^*E\simeq L\otimes\widetilde{F}$ for some line bundle $L$ on $\widetilde{X}$. Thus, $E$ is a direct summand of $\pi_*(L\otimes\widetilde{F})$. Note that there exists a flat bundle $F$ on $X$ such that $\widetilde{F}\simeq \pi^*F$ (again the simplest way to see this is to use the Fourier duality). Hence, by the projection formula $E$ is a direct summand of $\pi_*L\otimes F$. It remains to check that all indecomposable summands of the latter bundle have the same form. This follows from the following lemma. \end{pf}
\begin{Lem} Let $\pi:X_1\rightarrow X_2$ be an isogeny of abelian varieties, $L$ be a line bundle on $X_1$, $F$ be an indecomposable flat bundle on $X_1$. Assume that $\pi_*(L\otimes F)$ is decomposable. Then there exists a non-trivial factorization of $\pi$ into a composition $$X_1\stackrel{\pi'}{\rightarrow}X'_1\rightarrow X_2$$ such that $L\simeq (\pi')^*L'$ for some line bundle $L'$ on $X'_1$. \end{Lem} \begin{pf} By adjunction and the projection formula we have $$\operatorname{End}(\pi_*(L\otimes F))\simeq \operatorname{Hom}(\pi^*\pi_*(L\otimes F), L\otimes F)\simeq\oplus_{x\in K}\operatorname{Hom}(t_x^*L\otimes F,L\otimes F),$$ where $K\subset X_1$ is the kernel of $\pi$. If $t_x^*L\simeq L$ for some $x\in K$, $x\neq 0$, then $L$ descends to a line bundle on the quotient of $X_1$ by the subgroup generated by $x$. Otherwise, we get $\operatorname{End}(\pi_*(L\otimes F))\simeq\operatorname{End}(F)$, hence $\pi_*(L\otimes F)$ is indecomposable. \end{pf}
\section{Noncommutative \'Etale morphisms} \label{et-sec}
This section provides the setting we will need to discuss microlocalization.
\subsection{} \begin{Def} (cf. [K]) A ring homomorphism $A'\morph{\phi}A$ is called a central extension if $\phi$ is surjective, $\ker(\phi)$ is a central ideal and $\ker(\phi)^2=0$. \end{Def}
\begin{Def}\label{formally etale} Let $\text{\bf Rings}$ denote the category of associative rings. Let $\cal C\subset\text{\bf Rings}$ be a full subcategory. A morphism $R\morph{\alpha}S$ in $\cal C$ is formally \'etale if, for every commutative diagram of morphisms $\alpha, \beta, \gamma, \delta$ \vskip 2pt \begin{equation}\label{etale morphism} \begin{array}{ccc} R & \lrar{\delta} & A' \\ \ldar{\alpha} &\lbrurar{\epsilon} & \ldar{\gamma} \\ S & \lrar{\beta} & A \end{array} \end{equation} in $\cal C$ with $\gamma$ a central extension, there exists a unique morphism \dsp{S\morph{\epsilon}A'} such that diagram (\ref{etale morphism}) commutes. \end{Def}
\begin{Ex}\label{standard ring example} Let $R$ be a ring and let $a_0,a_1,\ldots,a_n\in R$. Let $S$ be the $R$-algebra generated by elements $z,u$ subject to the relations \begin{equation}\sum a_iz^i=0\ ,\ u\sum i a_iz^{i-1}=1\ ,\ \sum i a_iz^{i-1}u=1\ .\end{equation} Then the natural map \dsp{R\morph{\alpha}S} is formally \'etale in $\text{\bf Rings}$.\end{Ex} \begin{pf} Consider a commutative diagram \vskip 2pt \begin{equation} \begin{array}{ccc} R & \lrar{\delta} & A' \\ \ldar{\alpha} && \ldar{\gamma} \\ S & \lrar{\beta} & A \end{array} \end{equation} as in Definition \ref{formally etale}. Let $I=\ker(\gamma)$. Choose $x,y\in A'$ such that $\gamma(x)=\beta(z)$, $\gamma(y)=\beta(u)$. It must be shown that there are unique elements $p,q\in I$ such that \begin{equation} \begin{matrix} \sum \delta(a_i)(x+p)^i&=0\ ,\\ \ (y+q)\sum i \delta(a_i)(x+p)^{i-1}&=1\ ,\\ \ \sum i \delta(a_i)(x+p)^{i-1}(y+q)&=1\ .\end{matrix}\end{equation} Since $I^2=0$, these equations are uniquely solved by setting \begin{equation} p=-y\sum \delta(a_i)x^i\ \ ,\ \ q=y(1-\sum i\delta(a_i)(x+p)^{i-1} y)\ .\end{equation} \end{pf}
\subsection{} Let us recall some definitions from [K]. For any associative algebra $R$, the NC-filtration on $R$ is the decreasing filtration $\{F^d R\}_{d\ge 0}$ defined by setting $$F^d R=\sum_{i_1+\ldots+i_m=d} R\cdot R_{i_1} \cdot R \cdot \ldots \cdot R\cdot R_{i_m} \cdot R$$ where $R_0=R$, $R_{i+1}=[R,R_i]$ are the terms of the lower central series for $R$ considered as a Lie algebra (we use a different indexing from Kapranov's). This filtration is compatible with multiplication and the associated graded algebra is commutative.
The category ${\cal N}_d$ is the category of associative algebras $R$ with $F^{d+1}R=0$. For example, ${\cal N}_0$ is the category of commutative algebras. For every $d$ there is a pair of adjoint functors $r_d:{\cal N}_d\rightarrow{\cal N}_{d-1}$ and $i_d:{\cal N}_{d-1}\rightarrow{\cal N}_d$, where $i_d$ is the natural inclusion and $r_d(R)=R/F^dR$. Note that if $R\in{\cal N}_d$ then $F^dR\subset R$ is a central ideal with zero square. Thus, $R$ is a central extension of $r_d(R)$, and ${\cal N}_d$ is the category of rings $A$ which are obtained as a composition of $d$ central extensions $$ A\rightarrow A_1\rightarrow A_2\rightarrow \ldots\rightarrow A_d$$ with $A_d$ commutative.
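As an illustrative aside, note that $F^1 R = R\cdot [R,R]\cdot R$, so $r_1(R)=R/F^1R=R^{ab}$ is the abelianization of $R$; for the free algebra $R=k\langle x,y\rangle$ this gives $$ r_1(R) = k[x,y]\ , $$ while $R/F^2R$ is the largest quotient of $R$ in which the commutator $[x,y]$ is central and products of two commutators vanish.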
\begin{Lem}\label{der} Let $R\rightarrow S$ be a formally \'etale morphism in ${\cal N}_d$, $M$ be an $S^{ab}$-module. Then the natural map $\operatorname{Der}(S,M)\rightarrow\operatorname{Der}(R,M)$ is a bijection. \end{Lem}
\begin{pf} Given a central $S$-bimodule $M$ we can define a trivial central extension of $S$ by $M$: $\widetilde{S}=S\oplus M$. Then derivations from $S$ to $M$ are in bijective correspondence with splittings of the projection $\widetilde{S}\rightarrow S$, while derivations from $R$ to $M$ correspond to ring homomorphisms $R\rightarrow\widetilde{S}$ lifting the morphism $R\rightarrow S$. Since $R\rightarrow S$ is formally \'etale and $\widetilde{S}\rightarrow S$ is a central extension, every such lift extends uniquely to a splitting defined on $S$. Hence, the assertion. \end{pf}
\begin{Prop}\label{sasha's nice observation} Let \dsp{R\morph{\alpha}S} be a formally \'etale morphism in ${\cal N}_d$. Let \begin{equation} \begin{array}{ccc} R & \lrar{\delta} & A' \\ \ldar{\alpha} & & \ldar{\gamma} \\ S & \lrar{\beta} & A \end{array} \end{equation} be a commutative diagram in $\text{\bf Rings}$, such that $\beta$ is surjective and $\gamma$ is a central extension. Then $A'\in{\cal N}_d$.\end{Prop}
\begin{pf} Note that a priori we know from this diagram that $A\in{\cal N}_{d}$, hence $A'\in{\cal N}_{d+1}$. We want to prove that $F^{d+1}A'=0$. Since $F^{d+2}A'=0$ it suffices to prove that for every sequence of positive integers $i_1,\ldots,i_m$ such that $i_1+\ldots+i_m=d+1$ one has $$A'_{i_1}\cdot A'_{i_2}\cdot \ldots\cdot A'_{i_m}=0.$$ We use descending induction on $m$. Assume that this is true for $m+1$. Then we can define a map \begin{align*} &D:S^{m+d+1}\rightarrow I: \\ &(s_1,\ldots,s_{m+d+1})\mapsto [a'_1,[\ldots,[a'_{i_1},a'_{i_1+1}]\ldots]]\cdot [a'_{i_1+2},[\ldots,[a'_{i_1+i_2+1},a'_{i_1+i_2+2}]\ldots]]\cdot \ldots \end{align*} where $a'_i\in A'$ are such that $\gamma(a'_i)=\beta(s_i)$. This map is well-defined since the $a'_i$ are well-defined modulo $I$, which is a central ideal. Now the induction assumption implies that $D$ is a derivation in every argument. Hence, applying Lemma \ref{der} we conclude that $D=0$. \end{pf}
\begin{Thm}\label{enlarging the category} Let \dsp{R\morph{\alpha}S} be a formally \'etale morphism in ${\cal N}_d$. Then \dsp{R\morph{\alpha}S} is a formally \'etale morphism in $\text{\bf Rings}$.\end{Thm} \begin{pf} This follows easily from Proposition \ref{sasha's nice observation}.\end{pf}
Let ${\cal N}_{\infty}$ denote the category of rings that are complete with respect to the NC-filtration.
\begin{Thm} Let \dsp{R\morph{\alpha}S} be formally \'etale in ${\cal N}_{\infty}$, with $R\in{\cal N}_d$. Then $S\in{\cal N}_d$.\end{Thm}
\begin{pf} The natural morphism \dsp{R\rightarrow r_{d+i}(S)} is formally \'etale in ${\cal N}_{d+i}$ for all $i\ge 0$. Proposition \ref{sasha's nice observation} applied to the diagram \begin{equation} \begin{array}{ccc} R & \lrar{} & r_{d+i+1}(S) \\ \ldar{} & & \ldar{} \\ r_{d+i}(S) & \lrar{=} & r_{d+i}(S) \end{array} \end{equation} shows that $F^{d+i}(S)\subset F^{d+i+1}(S)$ for all $i\ge 1$. Since the NC-filtration is decreasing, this gives $F^{d+i}(S)=F^{d+i+1}(S)$ for all $i\ge 1$, and by completeness $F^{d+1}(S)=\bigcap_n F^n(S)=0$. Hence the assertion.\end{pf}
\begin{Ex}\label{standard example in Nd} Let $R$ be a ring belonging to ${\cal N}_d$ for some $d$. Let \dsp{R\morph{\alpha}S} be as in Example \ref{standard ring example}. Let $\hat S$ denote the completion of $S$ with respect to the NC-filtration. Then $\hat S$ belongs to ${\cal N}_d$ and the natural morphism $R\rightarrow \hat{S}$ is formally \'etale (in $\text{\bf Rings}$). As in the commutative case, we call such a morphism {\em standard}.\end{Ex}
\subsection{} The category $\operatorname{NCS}^d$ of NC-schemes of degree $d$ (in Kapranov's terminology ``NC-nilpotent of degree $d$'') is constructed in the same way as the commutative category of schemes, using ${\cal N}_d$ instead of ${\cal N}_0$ as coordinate rings of affine schemes. For a scheme $X\in \operatorname{NCS}^d$ we denote by $\operatorname{NCS}^d_{/X}$ the category of NC-schemes of degree $d$ over $X$. The morphisms in $\operatorname{NCS}^d_{/X}$ are denoted by $\operatorname{Hom}_X(\cdot,\cdot)$.
As in the affine case we have the natural adjoint functors $r_d:\operatorname{NCS}^d\rightarrow \operatorname{NCS}^{d-1}$ and $i_d:\operatorname{NCS}^{d-1}\rightarrow \operatorname{NCS}^d$. In particular, we have the abelianization functor $\operatorname{NCS}^d\rightarrow \operatorname{NCS}^0:X\mapsto X^{ab}$ given by the composition $r_1r_2\ldots r_d$.
A morphism $Z\rightarrow\widetilde{Z}$ of NC-schemes of degree $d$ is called a nilpotent thickening if it induces an isomorphism of underlying topological spaces and ${\cal O}_{\widetilde{Z}}\rightarrow{\cal O}_Z$ is a surjection with nilpotent kernel.
\begin{Def} A morphism $Y\rightarrow X$ in $\operatorname{NCS}^d$ is called formally smooth (resp. formally unramified) if for every nilpotent thickening $Z\subset \widetilde{Z}$ in $\operatorname{NCS}^d_{/X}$ the map $\operatorname{Hom}_X(\widetilde{Z},Y)\rightarrow\operatorname{Hom}_X(Z,Y)$ is surjective (resp. injective). A morphism is called formally \'etale if it is both formally smooth and formally unramified. A morphism is called \'etale if it is formally \'etale and the corresponding morphism of commutative schemes $Y^{ab}\rightarrow X^{ab}$ is locally of finite type. \end{Def}
\begin{Prop}\label{gen} Let $P$ be the property of being formally smooth (resp. formally unramified, resp. formally \'etale). \newline a) Let $f:Y\rightarrow X$ be a morphism in $\operatorname{NCS}^d$ with property $P$. Then the same property holds for $r_d(f)$. \newline b) If $f:Y\rightarrow X$ and $g:Z\rightarrow Y$ are morphisms in $\operatorname{NCS}^d$ having property $P$ then $f\circ g$ also has this property. \newline c) If $f:Y\rightarrow X$ is a formally unramified morphism in $\operatorname{NCS}^d$ and $g:Z\rightarrow Y$ is a morphism in $\operatorname{NCS}^d$ such that $f\circ g$ has property $P$, then $g$ also has this property. \newline d) An open morphism $U\rightarrow X$ is \'etale. \newline e) A morphism $f:Y\rightarrow X$ is \'etale if and only if there exists an open covering $X=\cup X_i$ and for every $i$ an open covering $Y_{ij}$ of $f^{-1}(X_i)$ such that all the induced morphisms $Y_{ij}\rightarrow X_i$ are \'etale. \end{Prop}
The proof is straightforward.
Theorem \ref{enlarging the category} has the following global version.
\begin{Thm} Let $f$ be a formally \'etale morphism in $\operatorname{NCS}^{d-1}$. Then $i_d(f)$ is a formally \'etale morphism in $\operatorname{NCS}^d$. \end{Thm}
Now we observe that the topological invariance of \'etale morphisms remains valid in the present context.
\begin{Thm} For any $X\in \operatorname{NCS}^d$ the canonical functor $Y\mapsto Y^{ab}$ from the category of \'etale $X$-schemes to that of \'etale $X^{ab}$-schemes is an equivalence. \end{Thm}
\begin{pf} First we claim that the functor in question is fully faithful. Indeed, let $Y_1, Y_2\in\operatorname{NCS}^d$ be \'etale $X$-schemes. Then since $Y_1^{ab}\rightarrow Y_1$ is a nilpotent thickening and $Y_2\rightarrow X$ is \'etale the natural map $$\operatorname{Hom}_X(Y_1,Y_2)\rightarrow\operatorname{Hom}_X(Y_1^{ab},Y_2)\simeq \operatorname{Hom}_{X^{ab}}(Y_1^{ab},Y_2^{ab})$$ is an isomorphism as required.
To prove the essential surjectivity of the functor it suffices to do so locally. Thus we may assume that the morphism $Y^{ab}\rightarrow X^{ab}$ is a standard \'etale extension of commutative rings $R^{ab}\rightarrow S^{ab}$ where $S^{ab}=(R^{ab}[z_0]/(f_0(z_0)))_{f'_0(z_0)}$ and $f_0$ is a monic polynomial. But such a morphism lifts to a standard \'etale morphism in ${\cal N}_d$, as in Example \ref{standard example in Nd}. \end{pf}
\begin{Cor} A morphism $f:Y\rightarrow X$ in $\operatorname{NCS}^d$ is \'etale if and only if there exists an open covering $Y=\cup Y_i$ such that all the induced morphisms $Y_{i}\rightarrow f(Y_i)$ are standard \'etale morphisms. \end{Cor}
\section{Microlocalization}
\subsection{} Let us return to the setting of Theorem \ref{main1}. We assume that the $D$-algebra $\cal A$ is equipped with an increasing algebra filtration $\cal A_{\bullet}$ (so $\cal A_i \cal A_j\subset \cal A_{i+j}$) such that $\cal A_{-1}=0$, $\cal A_0=\cal O_Y$, and the associated graded algebra $\operatorname{gr}(\cal A)$ is commutative and is generated by $\operatorname{gr}(\cal A)_1$ over $\cal O_Y$. In particular, the left and right actions of $\cal O_Y$ on $\operatorname{gr}(\cal A)_i$ coincide. We will call such a filtration {\it special} if there exists a sheaf of flat, commutative graded $\cal O_S$-algebras $C$, generated over $\cal O_S$ by $C_1$, and an isomorphism of graded algebras $\operatorname{gr}(\cal A)\simeq {\pi_S^Y}^*(C)$. By Lemma \ref{sheaves on S}, such an isomorphism induces an isomorphism $\operatorname{gr}(\Phi\cal A)\simeq {\pi_S^X}^*(C)$.
Given a $D$-algebra $\cal A$ with a special filtration $\cal A_{\bullet}$ we can form the corresponding sequence of graded algebras $\operatorname{gr}_{(n)}(\cal A)$ for $n\ge 0$ by setting $$\operatorname{gr}_{(n)}(\cal A)=\oplus_{i=0}^{\infty} \cal A_i/\cal A_{i-n-1}.$$ In particular, $\operatorname{gr}_{(0)}(\cal A)=\operatorname{gr}(\cal A)$ is commutative while for $n\ge 1$ there is a central element $t$ in $\operatorname{gr}_{(n)}(\cal A)_1=\cal A_1$ (corresponding to $1\in \cal A_0\subset \cal A_1$) such that $t^{n+1}=0$ and $\operatorname{gr}_{(n)}(\cal A)/(t)=\operatorname{gr}(\cal A)$. These algebras form a projective system via the natural projections $\operatorname{gr}_{(n+1)}(\cal A)\rightarrow \operatorname{gr}_{(n)}(\cal A)$.
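The relations satisfied by $t$ are immediate from the definitions; for the reader's convenience, writing $[a]\in\operatorname{gr}_{(n)}(\cal A)_i$ for the class of $a\in\cal A_i$, one has $$t\cdot [a]=[a]\in\cal A_{i+1}/\cal A_{i-n},\qquad t^{n+1}\cdot [a]=[a]=0 \text{ in } \cal A_{i+n+1}/\cal A_i,$$ since $a\in\cal A_i$, while $$\bigl(\operatorname{gr}_{(n)}(\cal A)/(t)\bigr)_i=(\cal A_i/\cal A_{i-n-1})/(\cal A_{i-1}/\cal A_{i-n-1})=\cal A_i/\cal A_{i-1}=\operatorname{gr}(\cal A)_i.$$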
Consider for each $n$ the NC-scheme $\Bbb P_{n}(\cal A)=\operatorname{Proj}(\operatorname{gr}_{(n)}(\cal A))$ corresponding to $\operatorname{gr}_{(n)}(\cal A)$ via the noncommutative analogue of the $\operatorname{Proj}$ construction. We denote by $\cal D^-(\Bbb P_{n}(\cal A))$ the (bounded from above) derived category of left quasi-coherent sheaves on $\Bbb P_{n}(\cal A)$. As in the commutative case there is a natural localization functor $M\mapsto\widetilde{M}$ from the category of graded $\operatorname{gr}_{(n)}(\cal A)$-modules to the category of quasi-coherent sheaves on $\Bbb P_{n}(\cal A)$.
If $M$ is a left $\cal A$-module (quasi-coherent over ${\cal O}_Y$) equipped with an increasing module filtration $M_{\bullet}$ then for every $n>0$ we can form the corresponding graded $\operatorname{gr}_{(n)}(\cal A)$-module $\oplus M_i/M_{i-n}$, hence the corresponding quasi-coherent sheaf $$\operatorname{ml}_{n}(M)=(\oplus_i M_i/M_{i-n})^{\widetilde{}}.$$
The above NC-schemes are connected by a sequence of closed embeddings $i_n:\Bbb P_n(\cal A)\hookrightarrow \Bbb P_{n+1}(\cal A)$ and the quasi-coherent sheaves
$\operatorname{ml}_n(M)$ satisfy $\operatorname{ml}_n(M)=\operatorname{ml}_{n+1}(M)|_{\Bbb P_n(\cal A)}$. In other words, the system $(\operatorname{ml}_n(M))$ corresponds to a quasi-coherent sheaf on the formal NC-scheme $\Bbb P_{\infty}(\cal A)=\operatorname{inj.}\lim\Bbb P_n(\cal A)$.
Our aim now is to establish an equivalence of derived categories of sheaves on $\Bbb P_n(\cal A)$ and $\Bbb P_n(\Phi \cal A)$. We will prove something stronger, namely, that such an equivalence exists \'etale locally on $\operatorname{Proj}(C)$.
We begin with a Zariski-local version. Clearly, $\circ$ commutes with flat base change on $S$, so we may assume that $S$ is affine. The isomorphisms $\operatorname{gr}(\cal A)\simeq {\pi_S^Y}^*(C)$ and $\operatorname{gr}(\Phi\cal A)\simeq {\pi_S^X}^*(C)$ give us isomorphisms $\Bbb P_1(\cal A)=Y\times_S\operatorname{Proj}(C)$ and $\Bbb P_1(\Phi\cal A)= X\times_S\operatorname{Proj}(C)$. Let $f$ be a section of $C_1$. It defines a Zariski open subset $D_f\subset\operatorname{Proj}(C)$ which is the spectrum of $C_{(f)}$, the degree zero part of the localization of $C$ by $f$. Hence, we have the corresponding open subset $Y\times_S D_f\subset\Bbb P_0(\cal A)$. Since $\Bbb P_n$ has the same underlying topological space as $\Bbb P_0$, we have the corresponding open subscheme $\Bbb P_n(\cal A)_f\subset\Bbb P_n(\cal A)$ for every $n\ge 0$. We claim that we can identify $\Bbb P_n(\cal A)_f$ with the spectrum of a sheaf of algebras over $Y$. Namely, we have the surjection $\cal A_1\rightarrow\operatorname{gr}(\cal A)_1$, and locally we can lift $f$ to a section $\widetilde{f}\in\cal A_1$. Consider the graded ${\cal O}_Y$-algebra $$\operatorname{gr}_{(n)}(\cal A)_{f}:=\operatorname{gr}_{(n)}(\cal A)_{\widetilde{f}}.$$ It is easy to see that this algebra does not depend on the choice of the lifting element $\widetilde{f}$ (since two liftings differ by a nilpotent), hence it is defined globally over $Y$. Now let $\operatorname{gr}_{(n)}(\cal A)_{(f)}$ be the degree zero component of $\operatorname{gr}_{(n)}(\cal A)_f$. Then $\operatorname{Spec}(\operatorname{gr}_{(n)}(\cal A)_{(f)})$ is an open subscheme of $\Bbb P_n(\cal A)$ with underlying open subset $Y\times_S D_f$, hence $\operatorname{Spec}(\operatorname{gr}_{(n)}(\cal A)_{(f)})=\Bbb P_n(\cal A)_f$.
For a graded $\operatorname{gr}_{(n)}(\cal A)$-module $M$ we have the corresponding quasi-coherent sheaf $\widetilde{M}$ on $\Bbb P_n(\cal A)$. The restriction of $\widetilde{M}$ to $\Bbb P_n(\cal A)_f$ is the sheaf associated with $\operatorname{gr}_{(n)}(\cal A)_{(f)}$-module $M_{(f)}$, the degree zero part in the localization of $M$ with respect to some local lifting of $f$.
Let us call a graded sheaf {\it graded special} if each of its graded components is a special sheaf.
\begin{Lem}\label{lemloc} For every element $f\in C_1$ the algebra $\operatorname{gr}_{(n)}(\cal A)_f$ is a graded special $D$-algebra on $Y$. There is a canonical isomorphism of graded algebras \begin{equation}\label{Philoc} \Phi(\operatorname{gr}_{(n)}(\cal A)_f)\simeq\operatorname{gr}_{(n)}(\Phi\cal A)_f. \end{equation} \end{Lem} \begin{pf} First of all, the Lemma is obvious for $n=0$: in this case $\operatorname{gr}(\cal A)_f\simeq {\pi_S^Y}^* (C_f)$ and $$\Phi(\operatorname{gr}(\cal A)_f)\simeq \operatorname{gr}(\Phi\cal A)_f\simeq {\pi_S^X}^* (C_f).$$ Now for $n>0$ consider the filtration of $\operatorname{gr}_{(n)}(\cal A)$ by the two-sided principal ideals $I^k=(t^k)$, where $t\in\operatorname{gr}_{(n)}(\cal A)_1$ is the central element corresponding to $1\in\cal A_1$. Then $I^0=\operatorname{gr}_{(n)}(\cal A)$, $I^{n+1}=0$, and $I^k/I^{k+1}\simeq\operatorname{gr}(\cal A)(-k)$ as $\operatorname{gr}_{(n)}(\cal A)$-modules for $0\le k\le n$. Localizing this filtration we get a filtration by two-sided ideals $I^k_f$ in $\operatorname{gr}_{(n)}(\cal A)_f$ with associated graded quotients $\operatorname{gr}(\cal A)_f(-k)$. Thus, $\operatorname{gr}_{(n)}(\cal A)_f$ is graded special.
To construct the isomorphism (\ref{Philoc}) we notice that since $\Phi(\operatorname{gr}_{(n)}(\cal A)_f)$ is a nilpotent extension of $\Phi(\operatorname{gr}(\cal A)_f)\simeq \operatorname{gr}(\Phi\cal A)_f$, any local lifting of $f$ is invertible in $\Phi(\operatorname{gr}_{(n)}(\cal A)_f)$. Therefore, by the universal property we get a homomorphism $$\operatorname{gr}_{(n)}(\Phi\cal A)_f\rightarrow \Phi(\operatorname{gr}_{(n)}(\cal A)_f).$$ Using the above filtration on $\operatorname{gr}_{(n)}$ one immediately checks that this is an isomorphism. \end{pf}
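For the reader's convenience, here is the degreewise computation behind the identification of the quotients of the $t$-adic filtration used above: in degree $i$ one has $$(I^k)_i=\cal A_{i-k}/\cal A_{i-n-1}\subset\cal A_i/\cal A_{i-n-1}, \qquad (I^k/I^{k+1})_i=\cal A_{i-k}/\cal A_{i-k-1}=\operatorname{gr}(\cal A)(-k)_i.$$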
\begin{Thm}\label{main2} Assume that $H^0(X,\cal O_X)=H^0(Y,{\cal O}_Y)=\Bbb C$. Then for every $n\ge 0$ there is a canonical equivalence of categories $$\Phi_{(n)}:\cal D^-(\Bbb P_{n}(\cal A))\rightarrow \cal D^-(\Bbb P_{n}(\Phi \cal A))$$ commuting with the functors $i_{n*}$ and $i_n^*$. Moreover, assume that $M$ is a left $\cal A$-module with an increasing module filtration such that for some integer $d$ and for $i$ sufficiently large $\Phi(M_i/M_{i-1})$ is concentrated in degree $d$ (as an object of $\cal D(X)$). Then $$\operatorname{ml}_n(\Phi(M))\simeq\Phi_{(n)}(\operatorname{ml}_n(M)).$$ \end{Thm} \begin{pf} The proof of this theorem is similar to the proof of Theorem \ref{main1}. First we note that the definition of the operation $\circ$ from \ref{circle} works for non-commutative schemes as well (the only difference is that now, whenever we need to take the opposite $D$-algebra, we have to pass to the opposite scheme as well). Now we just want to construct some quasi-coherent sheaves (perhaps shifted) on $\Bbb P_n(\cal A)\times\Bbb P_n(\Phi\cal A^{op})$ and on $\Bbb P_n(\Phi\cal A)\times\Bbb P_n(\cal A^{op})$ such that both of their $\circ$-compositions are equal to the structure sheaves of the diagonals (note that although there is no embedding of a noncommutative scheme $Z$ into $Z\times Z^{op}$, we can still define an analogue of the structure sheaf of the diagonal $\delta_Z$, which is a quasi-coherent sheaf on $Z\times Z^{op}$). The definition of these sheaves is as follows.
First we observe that $\Bbb P_n(\Phi\cal A)\times \Bbb P_n(\cal A^{op})=\operatorname{Proj}(\cal A_{XY})$ where $\cal A_{XY}$ is the following graded algebra on $X\times Y$: $$\cal A_{XY}=\oplus_i \operatorname{gr}_{(n)}(\Phi\cal A)_i\boxtimes \operatorname{gr}_{(n)}(\cal A^{op})_i.$$ Next we remark that the sheaf $$\cal B=P\circ_{\cal O_Y} b(\cal A)=b(\Phi\cal A)\circ_{\cal O_X} P$$ introduced in the proof of Theorem \ref{main1} has a natural filtration $$\cal B_i=P\circ_{\cal O_Y} b(\cal A)_i= b(\Phi\cal A)_i\circ_{\cal O_X} P.$$ It follows that the sheaf $\oplus_i\cal B_{2i}/\cal B_{2i-n}$ has a natural structure of graded $\cal A_{XY}$-module, so we can set $$\cal B_{ml}=(\oplus_i \cal B_{2i}/\cal B_{2i-n})^{\widetilde{}}$$ which is a quasi-coherent sheaf on $\Bbb P_n(\Phi\cal A)\times \Bbb P_n(\cal A^{op})$. Similarly, one can define the quasi-coherent sheaf $\cal B'_{ml}$ on $\Bbb P_n(\cal A)\times\Bbb P_n(\Phi\cal A^{op})$ starting with the sheaf $\cal B'=Q\circ_{\cal O_X} b(\Phi\cal A)$.
It remains to compute $\cal B_{ml}\circ_{\cal O_{\Bbb P_n(\cal A)}}\cal B'_{ml}$ and $\cal B'_{ml}\circ_{\cal O_{\Bbb P_n(\Phi\cal A)}}\cal B_{ml}$. The idea is the following: we cover $\operatorname{Proj}(C)$ by the affine subsets $D_f$, where $f$ runs through $C_1$. For every $f\in C_1$ we will construct a canonical isomorphism between the restrictions of $\cal B_{ml}\circ_{\cal O_{\Bbb P_n(\cal A)}}\cal B'_{ml}$ and of the structure sheaf of the diagonal in $\Bbb P_n(\Phi\cal A)\times \Bbb P_n(\Phi\cal A^{op})$ to the open subscheme $\Bbb P_n(\Phi\cal A)_f\times\Bbb P_n(\Phi\cal A^{op})_f$. These isomorphisms will be compatible on intersections, so they will glue into a global isomorphism.
The following notation will be useful: for a sheaf $\cal F$ on one of our schemes and an element $f\in C_1$ we denote by $\cal F_f$ the restriction of $\cal F$ to the open subscheme defined by $f$.
Under the identification of the underlying topological space of $\Bbb P_n(\Phi\cal A)\times\Bbb P_n(\cal A^{op})$ with $X\times\operatorname{Proj}(C)\times Y\times\operatorname{Proj}(C)$, the support of $\cal B_{ml}$ is the diagonal $X\times Y\times\operatorname{Proj}(C)$. Using this fact it is fairly easy to see that $$(\cal B_{ml}\circ_{\cal O_{\Bbb P_n(\cal A)}}\cal B'_{ml})_f= \cal B_{ml,f}\circ_{\cal O_{\Bbb P_n(\cal A)_f}}\cal B'_{ml,f}.$$ It remains to compute the $\circ$-composition in the RHS. This is easier than the original problem because the sheaf $\cal B_{ml,f}$ (resp. $\cal B'_{ml,f}$) lives on an affine scheme over $X\times Y$ (resp. $Y\times X$). Namely, $$\Bbb P_n(\Phi\cal A)_f\times\Bbb P_n(\cal A^{op})_f= \operatorname{Spec}(\operatorname{gr}_{(n)}(\Phi\cal A)_{(f)}\boxtimes\operatorname{gr}_{(n)}(\cal A^{op})_{(f)}).$$
According to Lemma \ref{lemloc} we have dual $D$-algebras $\operatorname{gr}_{(n)}(\cal A)_{(f)}$ on $Y$ and $\operatorname{gr}_{(n)}(\Phi\cal A)_{(f)}$ on $X$. Hence, we can apply Theorem \ref{main1} to these $D$-algebras. Let us denote by $\cal B_{(f)}$ the $\operatorname{gr}_{(n)}(\Phi\cal A)_{(f)}\boxtimes\operatorname{gr}_{(n)}(\cal A^{op})_{(f)}$-module constructed in the proof of the cited theorem (where it is called $\cal B$). We claim that there is a canonical isomorphism of $\cal B_{ml,f}$ with the sheaf on $\operatorname{Spec}(\operatorname{gr}_{(n)}(\Phi\cal A)_{(f)}\boxtimes\operatorname{gr}_{(n)}(\cal A^{op})_{(f)})$ obtained by localization of $\cal B_{(f)}$. This claim (together with an easy check of the compatibility of the isomorphisms on intersections) allows us to finish the proof by referring to Theorem \ref{main1}. It remains to construct an isomorphism between the two $\operatorname{gr}_{(n)}(\Phi\cal A)_{(f)}\boxtimes\operatorname{gr}_{(n)}(\cal A^{op})_{(f)}$-modules $\cal B_{(f)}$ and $(\oplus_i\cal B_{2i}/\cal B_{2i-n})_{(f\otimes f)}$ (the localization of the latter module is clearly $\cal B_{ml,f}$). Recall that by definition $\cal B_{(f)}=P\circ_{\cal O_Y}\operatorname{gr}_{(n)}(\cal A)_{(f)}$. Also, it is clear that $(\oplus_i\cal B_{2i}/\cal B_{2i-n})_{(f\otimes f)}$ is isomorphic to the degree zero part of the localization of $$\operatorname{gr}_{(n)}(\cal B)=\oplus_i\cal B_i/\cal B_{i-n}= P\circ_{\cal O_Y}\operatorname{gr}_{(n)}(\cal A)$$ by $\widetilde{f}\otimes 1$ and $1\otimes \widetilde{f'}$, where $\widetilde{f}$ is a local lifting of $f$ to $\Phi\cal A_1$ and $\widetilde{f'}$ is a local lifting of $f$ to $\cal A_1$. Thus, it suffices to construct a graded isomorphism between $P\circ_{\cal O_Y}\operatorname{gr}_{(n)}(\cal A)_f$ and $\operatorname{gr}_{(n)}(\cal B)_{\widetilde{f}\otimes 1,1\otimes \widetilde{f'}}$.
According to Lemma \ref{lemloc} we have $$P\circ_{\cal O_Y}\operatorname{gr}_{(n)}(\cal A)_f\simeq \operatorname{gr}_{(n)}(\Phi\cal A)_f\circ_{\cal O_X} P$$ so the assertion follows. \end{pf}
Note that we have canonical invertible ${\cal O}_{\Bbb P_n}$-bimodules on $\Bbb P_n(\cal A)$: $${\cal O}_{\Bbb P_n}(m)=\operatorname{ml}_n(\operatorname{gr}_{(n)}(\cal A)(m))$$ where $M\mapsto M(m)$ denotes the shift of grading. In particular, we have the automorphism $$M\mapsto M(1)={\cal O}(1)\otimes_{{\cal O}} M$$ of the category ${\cal D}^-(\Bbb P_n(\cal A))$. It is easy to see that the above equivalence respects these automorphisms.
\subsection{} One can generalize Theorem \ref{main1} to the case of NC-schemes of finite degree. Namely, there is a natural notion of the support of a quasi-coherent sheaf on such a scheme (just the support of the corresponding sheaf on the reduced commutative scheme), hence the definition of a $D$-algebra makes sense. Now the proof of Theorem \ref{main1} works almost literally in this case. Moreover, it seems plausible that for the NC-schemes $\Bbb P_n(\cal A)$ one can consider slightly more general $D$-algebras than the special ones. Namely, instead of requiring the existence of a filtration with graded factors isomorphic to ${\cal O}$ it suffices to require the existence of a filtration with factors ${\cal O}^{ab}$; in addition one should require the $D$-algebra to be flat as a left and right ${\cal O}$-module.
\subsection{\'Etale local version of the equivalence}
Let $U\rightarrow Z$ be an \'etale morphism of $S$-schemes, where $Z=\operatorname{Proj}(C)$. Then we have the corresponding \'etale morphism $Y\times U\rightarrow\Bbb P_1(\cal A)$. By the topological invariance of the \'etale category, for every $n\ge 1$ this morphism extends to an \'etale morphism of NC-schemes $$j:\Bbb P_n(\cal A)_U\rightarrow\Bbb P_n(\cal A).$$ Similarly we have an NC-scheme $\Bbb P_n(\Phi\cal A)_U$ and an \'etale morphism $j:\Bbb P_n(\Phi\cal A)_U\rightarrow\Bbb P_n(\Phi\cal A)$.
\begin{Thm} In the above situation the categories $\cal D^-(\Bbb P_n(\cal A)_U)$ and $\cal D^-(\Bbb P_n(\Phi\cal A)_U)$ are canonically equivalent. \end{Thm} \begin{pf} Recall that in the proof of Theorem \ref{main2} we constructed a quasi-coherent sheaf (up to shift) $\cal B_{ml}$ on $\Bbb P_n(\Phi\cal A)\times \Bbb P_n(\cal A^{op})$ supported on the diagonal $\Delta_Z:X\times Y\times Z\hookrightarrow X\times Z\times Y\times Z$. Moreover, the restriction of $\cal B_{ml}$ to $\Bbb P_1(\Phi\cal A)\times\Bbb P_n(\cal A^{op})$ is actually obtained from the sheaf $P$ on $X\times Y$ via first pulling back to $X\times Y\times Z$ and then pushing forward by $\Delta_Z$. We have the following diagram of \'etale morphisms of NC-schemes: \begin{equation} \begin{array}{ccc} \Bbb P_n(\Phi\cal A)_U\times \Bbb P_n(\cal A^{op})_U &\lrar{\operatorname{id}\times j}& \Bbb P_n(\Phi\cal A)_U\times \Bbb P_n(\cal A^{op})\\ \ldar{j\times\operatorname{id}} & & \ldar{j\times\operatorname{id}}\\ \Bbb P_n(\Phi\cal A)\times \Bbb P_n(\cal A^{op})_U &\lrar{\operatorname{id}\times j} &\Bbb P_n(\Phi\cal A)\times \Bbb P_n(\cal A^{op}) \end{array} \end{equation} Now we claim that there exists a quasi-coherent sheaf $\cal B_{ml,U}$ on $\Bbb P_n(\Phi\cal A)_U\times \Bbb P_n(\cal A^{op})_U$ supported on the diagonal $\Delta_U:X\times Y\times U\hookrightarrow X\times U\times Y\times U$ such that \begin{equation}\label{iso1} (\operatorname{id}\times j)_*\cal B_{ml,U}\simeq (j\times\operatorname{id})^*\cal B_{ml} \end{equation} \begin{equation}\label{iso2} (j\times\operatorname{id})_*\cal B_{ml,U}\simeq (\operatorname{id}\times j)^*\cal B_{ml}, \end{equation} and such that the restriction of $\cal B_{ml,U}$ to $\Bbb P_1(\Phi\cal A)_U\times\Bbb P_1(\cal A)$ is isomorphic to $\Delta_{U,*}(p_{XY}^*P)$. Indeed, consider the quasi-coherent sheaf $(j\times j)^*\cal B_{ml}$ on $\Bbb P_n(\Phi\cal A)_U\times \Bbb P_n(\cal A^{op})_U$. It is supported on $(j\times j)^{-1}(X\times Y\times Z)$
where $X\times Y\times Z$ is the relative diagonal in $X\times Y\times Z\times Z$. Now since $j$ is \'etale, the relative diagonal $X\times Y\times U$ is a connected component in $(j\times j)^{-1}(X\times Y \times Z)$. Now we just set $\cal B_{ml,U}$ to be the direct summand of $(j\times j)^*\cal B_{ml}$ concentrated on this component, i.e. $$\cal B_{ml,U}=(j\times j)^*\cal B_{ml}|_{X\times Y\times U}.$$ The above properties of $\cal B_{ml,U}$ are clear from this definition.
Similarly, we construct the sheaf $\cal B'_{ml,U}$ on $\Bbb P_n(\cal A)_U\times \Bbb P_n(\Phi\cal A^{op})_U$. It remains to compute the relevant $\circ$-products. This is easily done using the isomorphisms (\ref{iso1}), (\ref{iso2}). Namely, one should start by computing $(j\times\operatorname{id})_*(\cal B_{ml,U}\circ_{\Bbb P_n(\cal A)_U}\cal B'_{ml,U})$ on $\Bbb P_n(\Phi\cal A)\times\Bbb P_n(\Phi\cal A^{op})_U$. We have \begin{align*} &(j\times\operatorname{id})_*(\cal B_{ml,U}\circ_{\Bbb P_n(\cal A)_U}\cal B'_{ml,U})\simeq ((j\times\operatorname{id})_*\cal B_{ml,U})\circ_{\Bbb P_n(\cal A)_U}\cal B'_{ml,U}\simeq ((\operatorname{id}\times j)^*\cal B_{ml})\circ_{\Bbb P_n(\cal A)_U}\cal B'_{ml,U}\simeq\\ &\cal B_{ml}\circ_{\Bbb P_n(\cal A)}((j\times\operatorname{id})_*\cal B'_{ml,U})\simeq \cal B_{ml}\circ_{\Bbb P_n(\cal A)}((\operatorname{id}\times j)^*\cal B'_{ml})\simeq (\operatorname{id}\times j)^*(\cal B_{ml}\circ_{\Bbb P_n(\cal A)}\cal B'_{ml})\simeq\\ &(\operatorname{id}\times j)^*(\delta_{\Bbb P_n(\Phi\cal A)})\simeq (j\times\operatorname{id})_* (\delta_{\Bbb P_n(\Phi\cal A)_U} ). \end{align*} Now the situation looks locally as follows: we have an \'etale extension of NC-algebras $A\rightarrow A_1$, an $A_1\otimes A_1^{op}$-module $M$, and an isomorphism of $A\otimes A_1^{op}$-modules $M\simeq A_1$. Furthermore, we have a two-sided ideal $I\subset A$ such that $A/I$ is commutative and $IM=MI$, and we know that the induced isomorphism $M/IM\simeq A_1/IA_1$ is an isomorphism of $A_1/IA_1\otimes A_1/IA_1$-modules (notice that $A_1/IA_1$ is commutative). We claim that in such a situation the above isomorphism commutes with the left action of $A_1$. Indeed, the left action of $A_1$ on
$M$ induces a homomorphism $\phi:A_1\rightarrow A_1$ such that $\phi|_A=\operatorname{id}$ and $\phi\mod IA_1$ is the identity on $A_1/IA_1$. Now the formal \'etaleness implies that $\phi=\operatorname{id}$.
Thus, we conclude that $\cal B_{ml,U}\circ_{\Bbb P_n(\cal A)_U}\cal B'_{ml,U}\simeq \delta_{\Bbb P_n(\Phi\cal A)_U}$ as required. \end{pf}
\subsection{} The sheaf of rings ${\cal O}_{\Bbb P_n}$ on $\Bbb P_n$ can be naturally enlarged as follows. The central element $t\in\operatorname{gr}_{(n)}(\cal A)_1$ induces a sequence of embeddings of ${\cal O}_{\Bbb P_n}$-bimodules $${\cal O}_{\Bbb P_n}\rightarrow{\cal O}_{\Bbb P_n}(1)\rightarrow{\cal O}_{\Bbb P_n}(2)\rightarrow\ldots$$ Now using the natural morphisms $${\cal O}_{\Bbb P_n}(m)\otimes_{{\cal O}_{\Bbb P_n}}{\cal O}_{\Bbb P_n}(l)\rightarrow {\cal O}_{\Bbb P_n}(m+l)$$ we can define the ring structure on the direct limit $$\widetilde{{\cal O}}_{\Bbb P_n(\cal A)}= \operatorname{inj.}\lim({\cal O}\rightarrow{\cal O}(1)\rightarrow{\cal O}(2)\rightarrow\ldots)$$ For example, if $Y$ is smooth and $\cal A=D_Y$ is the sheaf of differential operators on $Y$ then $\widetilde{{\cal O}}_{\Bbb P_{\infty}}$ is the sheaf of (formal) pseudo-differential operators (the underlying topological space of $\Bbb P_{\infty}$ is the projectivized cotangent bundle of $Y$). The subsheaf ${\cal O}_{\Bbb P_{\infty}}$ consists of (formal) pseudo-differential operators of negative order.
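In local coordinates this is the familiar calculus of formal symbols; as a sketch, for $Y=\Bbb A^1$ and $\cal A=D_Y$ with the order filtration, sections of $\widetilde{{\cal O}}_{\Bbb P_{\infty}}$ are formal expressions $$P=\sum_{k\le m}a_k(x)\partial^k,\qquad a_k\in{\cal O}_Y,$$ multiplied by means of the expansion $$\partial^{-1}a=\sum_{j\ge 0}(-1)^j a^{(j)}\partial^{-1-j},$$ and the subsheaf ${\cal O}_{\Bbb P_{\infty}}$ consists of those $P$ with $m\le -1$.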
Now let $\Bbb P_1(\cal A)=Y\times Z$ and let $U\rightarrow Z$ be an \'etale morphism. Then one can define invertible ${\cal O}_{\Bbb P_n(\cal A)_U}$-bimodules ${\cal O}_{\Bbb P_n(\cal A)_U}(m)$ as follows. We can regard ${\cal O}_{\Bbb P_n(\cal A)}$ as a sheaf on $\Bbb P_n(\cal A)\times \Bbb P_n(\cal A^{op})$ supported on the diagonal. Let $V$ be a thickening of the diagonal in $\Bbb P_n(\cal A)\times\Bbb P_n(\cal A^{op})$ on which ${\cal O}_{\Bbb P_n(\cal A)}(m)$ lives. Then there is a canonical \'etale morphism $V_U\rightarrow V$ and an embedding $V_U\rightarrow \Bbb P_n(\cal A)_U\times\Bbb P_n(\cal A^{op})_U$. Now by definition ${\cal O}_{\Bbb P_n(\cal A)_U}(m)$ is obtained from ${\cal O}_{\Bbb P_n(\cal A)}(m)$ by first pulling back to $V_U$ and then pushing forward to $\Bbb P_n(\cal A)_U\times\Bbb P_n(\cal A^{op})_U$. It is easy to see that we still have morphisms of bimodules ${\cal O}(m)\rightarrow{\cal O}(m+1)$ and ${\cal O}(l)\otimes{\cal O}(m)\rightarrow{\cal O}(l+m)$, so we can define the algebra $\widetilde{{\cal O}}_{\Bbb P_n(\cal A)_U}$.
\begin{Thm} In the preceding two theorems one can replace the categories of ${\cal O}$-modules by the categories of $\widetilde{{\cal O}}$-modules. \end{Thm}
The proof is an application of the analogue of Theorem \ref{main1} for $D$-modules on NC-schemes.
\section{Noncommutative deformation of the Poincar\'e line bundle}\label{geom-subsec}
Consider the following data: \newline \noindent $W$ is a smooth projective variety over $\Bbb C$ of dimension $r$,\newline \noindent $D\subset W$ is a reduced irreducible effective divisor,\newline \noindent $V\subset H^0(D,{\cal O}_D(D))$ is an $r$-dimensional subspace, such that the corresponding rational morphism $\phi:D\rightarrow\Bbb P(V^*)$ is generically finite, \newline \noindent
$U\subset D$ is an open subset such that $\phi|_U$ is \'etale.
From the exact sequence $$0\rightarrow {\cal O}_W\rightarrow{\cal O}_W(D)\rightarrow{\cal O}_D(D)\rightarrow 0$$ we get a boundary homomorphism $$V\rightarrow H^0(D,{\cal O}_D(D))\rightarrow H^1(W,{\cal O}_W).$$ Now let $X$ be the Albanese variety of $W$ and $a:W\rightarrow X$ the Abel-Jacobi map (associated with some point of $W$). Then we have the canonical isomorphism $$H^1(X,{\cal O}_X)\widetilde{\rightarrow} H^1(W,{\cal O}_W),$$ in particular, we get a homomorphism $V\rightarrow H^1(W,{\cal O}_W)$. Let $$0\rightarrow{\cal O}_X\rightarrow{\cal E} \rightarrow H^1(X,{\cal O}_X)\otimes_{\Bbb C}{\cal O}_X\rightarrow 0$$ be the universal extension. Taking the pull-back of this extension under the map $V\rightarrow H^1(W,{\cal O}_W)$ we obtain an extension $$0\rightarrow{\cal O}_X\rightarrow{\cal E}_V\rightarrow V\otimes_{\Bbb C}{\cal O}_X\rightarrow 0.$$ Now we define a commutative sheaf of algebras on $X$ as follows: $$\cal A_V=\operatorname{Sym} ({\cal E}_V)/(1_{\cal E}-1)$$ where $1_{\cal E}$ is the image of $1\in{\cal O}_X\rightarrow{\cal E}_V$. Note that $\cal A_V$ is equipped with a filtration satisfying the conditions of the previous section. Also, by construction we have a canonical morphism of sheaves $${\cal E}_V\rightarrow a_*{\cal O}_W(D)$$ which induces a homomorphism of ${\cal O}_X$-algebras $${\cal A}_V\rightarrow a_*{\cal O}_W(*D)$$ compatible with the natural filtrations, where ${\cal O}_W(*D)= \operatorname{inj.}\lim {\cal O}_W(nD)$.
If the map $V\rightarrow H^1(W,{\cal O}_W)$ is an embedding then the dual $D$-algebra $\Phi{\cal A}_V$ is the algebra of differential operators ``in directions $V$'', where we consider $V$ as a subspace in $H^1(X,{\cal O}_X)\simeq H^0(\hat{X},T_{\hat{X}})$. By Theorem \ref{main1} we get a functor from the derived category of ${\cal A}_V$-modules to the derived category of $\Phi{\cal A}_V$-modules. We can restrict this functor to the category of ${\cal O}_W(*D)$-modules. For example, if $D$ is ample then the Fourier transform of ${\cal O}_W(*D)$ is a coherent $\Phi{\cal A}_V$-module. In the case when $W$ is a curve, $D$ is a point, and $a(D)=0\in X$ one can show that the latter $\Phi{\cal A}_V$-module is free of rank 1 at the general point. However, in general $\cal F({\cal O}_W(*D))$ is not free as a $\Phi\cal A_V$-module even at the general point unless $a(D)=0$. To get a module which is free of rank 1 at the general point we have to pass to microlocalization and use \'etale localization ``in vector fields direction'' as described below.
Let $D_{(n)}\subset W$ be the closed subscheme corresponding to the divisor $nD$. Then $\operatorname{Proj}(\operatorname{gr}_{(n)}({\cal O}_W(*D)))\simeq D_{(n)}$, hence by functoriality we have a morphism $$a_n:D_{(n)}\rightarrow\Bbb P_n(\cal A_V)$$ and an isomorphism $$a_{n,*}{\cal O}_{D_{(n)}}\simeq\operatorname{ml}_n({\cal O}_W(*D))$$ where ${\cal O}_W(*D)$ is considered as a $\cal A_V$-module.
Let us start with the case $n=1$. Note that $$a_1:D\rightarrow \Bbb P_1(\cal A_V)\simeq X\times \Bbb P(V^*)$$ is the natural map induced by $a$ and by $\phi$. Hence, applying the Fourier-Mukai transform to $a_{1,*}{\cal O}_D$ over a general point of $\Bbb P(V^*)$ one gets a free module of rank equal to the degree of $\phi$. To get a free module of rank 1 at the general point we use the \'etale base change $U\rightarrow\Bbb P(V^*)$. Namely, we replace $\Bbb P_1(\cal A_V)=X\times\Bbb P(V^*)$ by $\Bbb P_1(\cal A_V)_U=X\times U$ and $a_1$ by the morphism $$a^U_1:U\rightarrow\Bbb P_1(\cal A_V)_U=X\times U:u\mapsto (a(u),u).$$ Then the following lemma is clear.
\begin{Lem} The relative Fourier transform of $a^U_{1,*}{\cal O}_U$ is the line bundle $(id\times a|_U)^*({\cal P})$ on $\hat{X}\times U$, where $\cal P$ is the Poincar\'e line bundle. \end{Lem}
Now let $\Bbb P_n(\cal A_V)_U$ be the \'etale scheme over $\Bbb P_n(\cal A_V)$ which is a thickening of $X\times U$. Let also $U_{(n)}$ be the open subset of $D_{(n)}$ which is a nilpotent thickening of $U$. Then we have a commutative diagram \begin{equation} \begin{array}{ccc} U & \lrar{} & \Bbb P_n(\cal A_V)_U \\ \ldar{} & & \ldar{} \\ U_{(n)} & \lrar{} & \Bbb P_n(\cal A_V) \end{array} \end{equation} where the top horizontal arrow is the composition of $a_1^U$ and the closed embedding $\Bbb P_1(\cal A_V)_U\rightarrow\Bbb P_n(\cal A_V)_U$, the bottom horizontal arrow is the restriction of $a_n$. Since the right vertical morphism is \'etale this diagram gives rise to a morphism $$a^U_n: U_{(n)}\rightarrow\Bbb P_n(\cal A_V)_U$$
filling the diagonal in the above commutative square. It is easy to check that $a_n^U|_{U_{(n-1)}}=a_{n-1}^U$. Now we define the sequence of coherent sheaves on $\Bbb P_n(\Phi\cal A_V)_U$ by setting $$\cal L_n=\Phi_{(n)}(a^U_{n,*}{\cal O}_{U_{(n)}}).$$ These sheaves satisfy
$\cal L_{n+1}|_{\Bbb P_n}\simeq \cal L_n$, hence we can consider the projective limit $\cal L_{\infty}$ of $\cal L_n$ which is a coherent sheaf on the formal NC-scheme $\Bbb P_{\infty}(\Phi\cal A_V)_U$.
\begin{Prop} The ${\cal O}_{\Bbb P_{\infty}(\Phi\cal A_V)_U}$-module $\cal L_{\infty}$ is locally free of rank 1. \end{Prop} \begin{pf} One has an exact sequence $$0\rightarrow a_{n-1,*}^U\cal O_{U_{(n-1)}}(-1)\rightarrow a_{n,*}^U\cal O_{U_{(n)}}\rightarrow a_{1,*}^U\cal O_U\rightarrow 0.$$ Applying the functor $\Phi_{(n)}$ and passing to the limit we obtain an exact sequence $$0\rightarrow \cal L_{\infty}(-1) \stackrel{t}{\rightarrow} \cal L_{\infty}\rightarrow \cal L_1\rightarrow 0$$ where $t$ is the canonical central element in $\cal O(1)$. It remains to use the following simple algebraic fact. Let $A$ be a noetherian ring, $t\in A$ be a non zero divisor such that $At=tA$ and $A=\operatorname{proj.}\lim A/t^nA$. Let $M$ be a finitely generated left $A$-module such that $t$ is not a divisor of zero in $M$ and $M/tM$ is a free $A/tA$-module of rank 1. Then $M$ is a free $A$-module of rank 1. \end{pf}
Notice that in the case when $W=C$ is a curve, $D=P$ is a point, and $U=D$, the module $\cal L_{\infty}$ was used in [R1] to construct Krichever's solution to the KP hierarchy. In this case $\cal L_{\infty}$ is a locally free module of rank 1 over the microlocalization of the subalgebra ${\cal O}[\xi]$ in the ring of differential operators on the Jacobian $J(C)$ generated by the vector field $\xi$ which comes from the boundary homomorphism $H^0({\cal O}_P(P))\rightarrow H^1({\cal O}_C)$. The key point is that $\cal L_{\infty}$ also carries an action of the completion of ${\cal O}_C(*P)$ at $P$ (which is isomorphic to the ring of Laurent series) commuting with the action of pseudo-differential operators in $\xi$.
\noindent {\bf References}
\noindent [BB] A.~Beilinson, J.~Bernstein, {\it A proof of Jantzen conjectures}. I.~M.~Gelfand Seminar, 1--50, Adv. Soviet Math., 16, Part 1, Amer. Math. Soc., Providence, RI, 1993.
\noindent [BD] A.~Beilinson, V.~Drinfeld, {\it Quantization of Hitchin's fibration and Langlands' program}. Algebraic and geometric methods in mathematical physics (Kaciveli, 1993), 3--7, Math. Phys. Stud., 19, Kluwer Acad. Publ., Dordrecht, 1996.
\noindent [BC] J.~Burchnall, T.~Chaundy, {\it Commutative ordinary differential operators},
Proc. London Math. Soc., 21 (1923), 420--440; {\it Commutative ordinary differential operators II}, Proc. Royal Soc. London (A), 134 (1932), 471--485.
\noindent [Kr] I.M.~Krichever, {\it Algebro-geometric construction of the Zaharov-Shabat equations and their periodic solutions}. Soviet Math. Dokl. 17 (1976), 394--397; {\it Integration of nonlinear equations by the methods of algebraic geometry}, Funk. Anal. i Pril. 11 (1977), 15--31.
\noindent [L] G.~Laumon, {\it Transformation de Fourier g\'en\'eralis\'ee}, preprint alg-geom/9603004.
\noindent [K] M.~Kapranov, {\it Noncommutative geometry based on commutator expansions}, preprint math.AG/9802041.
\noindent [M] S.~Mukai, {\it Duality between $D(X)$ and $D(\hat{X})$ with its application to Picard sheaves}.
Nagoya Math. J. 81 (1981), 153--175.
\noindent [R1] M.~Rothstein, {\it Connections on the Total Picard Sheaf and the KP Hierarchy}, Acta Applicandae Math. 42 (1996), 297--308.
\noindent [R2] M.~Rothstein, {\it Sheaves with connection on abelian varieties}, Duke Math. Journal 84 (1996), 565--598.
{\sc Department of Mathematics, Harvard University, Cambridge,
MA 02138
Department of Mathematics, University of Georgia, Athens, GA 30602}
{\it E-mail addresses:} apolish@math.harvard.edu, rothstei@math.uga.edu
\end{document} |
\begin{document}
\title{Free quadri-algebras and dual quadri-algebras}
ABSTRACT. We study quadri-algebras and dual quadri-algebras. We describe the free quadri-algebra on one generator as a subobject of the Hopf algebra of permutations $\mathbf{FQSym}$, proving a conjecture due to Aguiar and Loday, using that the operad of quadri-algebras can be obtained from the operad of dendriform algebras by both black and white Manin products. We also give a combinatorial description of free dual quadri-algebras. A notion of quadri-bialgebra is also introduced, with applications to the Hopf algebras $\mathbf{FQSym}$ and $\mathbf{WQSym}$.\\
AMS CLASSIFICATION. 16W10; 18D50; 16T05.\\
KEYWORDS. Quadri-algebras; Koszul duality; Combinatorial Hopf algebras.
\tableofcontents
\section*{Introduction}
An algebra with an associativity splitting is an algebra whose associative product $\star$ can be written as a sum of a certain number of (generally nonassociative) products, satisfying certain compatibilities. For example, dendriform algebras \cite{FoissyDend,Loday} are equipped with two bilinear products $\prec$ and $\succ$, such that for all $x,y,z$: \begin{align*} (x\prec y)\prec z&=x\prec (y\prec z+y\succ z),\\ (x \succ y)\prec z&=x\succ(y\prec z),\\ (x \prec y+x\succ y)\succ z&=x\succ(y\succ z). \end{align*} Summing these axioms, we indeed obtain that $\star=\prec+\succ$ is associative. Another example is given by quadri-algebras, which are equipped with four products $\nwarrow$, $\swarrow$, $\searrow$ and $\nearrow$, in such a way that: \begin{itemize} \item $\leftarrow=\nwarrow+\swarrow$ and $\rightarrow=\searrow+\nearrow$ are dendriform products, \item $\uparrow=\nwarrow+\nearrow$ and $\downarrow=\swarrow+\searrow$ are dendriform products. \end{itemize} Shuffle algebras or the algebra of free quasi-symmetric functions $\mathbf{FQSym}$ are examples of quadri-algebras. No combinatorial description of the operad $\mathbf{Quad}$ of quadri-algebras is known, but a formula for its generating formal series is conjectured in \cite{Loday} and proved in \cite{Vallette}, as well as the koszulity of this operad. A description of $\mathbf{Quad}$ is given with the help of the black Manin product on nonsymmetric operads $\blacksquare$, namely $\mathbf{Quad}=\mathbf{Dend} \blacksquare \mathbf{Dend}$, where $\mathbf{Dend}$ is the nonsymmetric operad of dendriform algebras (this product is denoted by $\Box$ in \cite{Guo,Loday3}). It was also conjectured that the sub-quadri-algebra of $\mathbf{FQSym}$ generated by the permutation $(12)$ is free. We give here a proof of this conjecture (Corollary \ref{7}). 
We use for this that $\mathbf{Quad}$ is also equal to $\mathbf{Dend} \Box \mathbf{Dend}$ (Corollary \ref{5}), and consequently can be seen as a suboperad of $\mathbf{Dend} \otimes \mathbf{Dend}$: hence, free $\mathbf{Dend} \otimes \mathbf{Dend}$-algebras contain free quadri-algebras, a result which is applied to $\mathbf{FQSym}$. We also combinatorially describe the Koszul dual $\mathbf{Quad}^!$ of $\mathbf{Quad}$, and prove its koszulity with the rewriting method of \cite{Hoffbeck,Dotsenko,Loday2}.
The last section is devoted to a study of the compatibilities between the quadri-algebra structure of $\mathbf{FQSym}$ and its dual quadri-coalgebra structure: this leads to the notion of quadri-bialgebra (Definition \ref{10}). Another example of quadri-bialgebra is given by the Hopf algebra of packed words $\mathbf{WQSym}$. It is observed that, unlike the case of dendriform bialgebras, there is no rigidity theorem for quadri-bialgebras; indeed: \begin{itemize} \item $\mathbf{FQSym}$ and $\mathbf{WQSym}$ are not free quadri-algebras, nor cofree quadri-coalgebras. \item $\mathbf{FQSym}$ and $\mathbf{WQSym}$ are not generated, as quadri-algebras, by their primitive elements, in the quadri-coalgebraic sense.\\ \end{itemize}
{\bf Acknowledgments.} The research leading to these results was partially supported by the French National Research Agency under the reference ANR-12-BS01-0017. I would like to thank Bruno Vallette for his valuable comments, suggestions and help. \\
{\bf Notations.} \begin{enumerate} \item We denote by $K$ a commutative field. All the objects (vector spaces, algebras, coalgebras, operads$\ldots$) of this text are taken over $K$. \item For all $n \geq 1$, we denote by $[n]$ the set of integers $\{1,2,\ldots,n\}$. \end{enumerate}
\section{Reminders on quadri-algebras and operads}
\subsection{Definitions and examples of quadri-algebras}
\begin{defi}\begin{enumerate} \item A quadri-algebra is a family $(A,\nwarrow,\swarrow,\searrow,\nearrow)$, where $A$ is a vector space and $\nwarrow$, $\swarrow$, $\searrow$, $\nearrow$ are products on $A$, such that for all $x,y,z \in A$: \begin{align*} (x\nwarrow y)\nwarrow z&=x \nwarrow (y\star z),&(x\nearrow y) \nwarrow z&=x \nearrow (y\leftarrow z), &(x \uparrow y) \nearrow z&=x \nearrow (y \rightarrow z),\\ (x\swarrow y)\nwarrow z&=x \swarrow (y\uparrow z),&(x\searrow y) \nwarrow z&=x \searrow (y\nwarrow z), &(x \downarrow y) \nearrow z&=x \searrow (y \nearrow z),\\ (x\leftarrow y)\swarrow z&=x \swarrow (y\downarrow z),&(x\rightarrow y) \swarrow z&=x \searrow (y\swarrow z), &(x \star y) \searrow z&=x \searrow (y \searrow z), \end{align*} where: \begin{align*} \leftarrow&=\nwarrow+\swarrow,& \rightarrow&=\nearrow+\searrow,& \uparrow&=\nwarrow+\nearrow,& \downarrow&=\swarrow+\searrow, \end{align*}
\begin{align*} \star&=\nwarrow+\swarrow+\searrow+\nearrow=\leftarrow+\rightarrow=\uparrow+\downarrow. \end{align*} These relations will be considered as the entries of a $3\times 3$ matrix, and will be referred to as relations $(1,1)\ldots (3,3)$. \item A quadri-coalgebra is a family $(C,\Delta_\nwarrow,\Delta_\swarrow,\Delta_\searrow,\Delta_\nearrow)$, where $C$ is a vector space and $\Delta_\nwarrow$, $\Delta_\swarrow$, $\Delta_\searrow$, $\Delta_\nearrow$ are coproducts on $C$, such that: \begin{align*} (\Delta_\nwarrow\otimes Id)\circ \Delta_\nwarrow&=(Id \otimes \Delta_*)\circ \Delta_\nwarrow,& (\Delta_\swarrow\otimes Id)\circ \Delta_\nwarrow&=(Id \otimes \Delta_\uparrow)\circ \Delta_\swarrow,\\ (\Delta_\nearrow\otimes Id)\circ \Delta_\nwarrow&=(Id \otimes \Delta_\leftarrow)\circ \Delta_\nearrow,& (\Delta_\searrow\otimes Id)\circ \Delta_\nwarrow&=(Id \otimes \Delta_\nwarrow)\circ \Delta_\searrow,\\ (\Delta_\uparrow\otimes Id)\circ \Delta_\nearrow&=(Id \otimes \Delta_\rightarrow)\circ \Delta_\nearrow,& (\Delta_\downarrow\otimes Id)\circ \Delta_\nearrow&=(Id \otimes \Delta_\nearrow)\circ \Delta_\searrow, \end{align*} \begin{align*} (\Delta_\leftarrow\otimes Id)\circ \Delta_\swarrow&=(Id \otimes \Delta_\downarrow)\circ \Delta_\swarrow,\\ (\Delta_\rightarrow\otimes Id)\circ \Delta_\swarrow&=(Id \otimes \Delta_\swarrow)\circ \Delta_\searrow,\\ (\Delta_*\otimes Id)\circ \Delta_\searrow&=(Id \otimes \Delta_\searrow)\circ \Delta_\searrow, \end{align*} with: \begin{align*} \Delta_\leftarrow&=\Delta_\searrow+\Delta_\nearrow,&\Delta_\rightarrow&=\Delta_\nwarrow+\Delta_\swarrow,& \Delta_\uparrow&=\Delta_\nwarrow+\Delta_\nearrow,&\Delta_\downarrow&=\Delta_\swarrow+\Delta_\searrow, \end{align*}
\begin{align*} \Delta_*&=\Delta_\nwarrow+\Delta_\swarrow+\Delta_\searrow+\Delta_\nearrow. \end{align*}\end{enumerate}\end{defi}
{\bf Remarks.} \begin{enumerate} \item If $A$ is a finite-dimensional quadri-algebra, then its dual $A^*$ is a quadri-coalgebra, with $\Delta_\diamond =\diamond^*$ for all $\diamond \in \{\nwarrow,\swarrow,\searrow,\nearrow,\leftarrow,\rightarrow,\uparrow,\downarrow,\star\}$. \item If $C$ is a quadri-coalgebra (not necessarily finite-dimensional), then $C^*$ is a quadri-algebra, with $\diamond=\Delta_\diamond^*$ for all $\diamond \in \{\nwarrow,\swarrow,\searrow,\nearrow,\leftarrow,\rightarrow,\uparrow,\downarrow,\star\}$. \item Let $A$ be a quadri-algebra. Adding the relations of each row of the matrix of relations, we obtain: \begin{align*} (x \uparrow y)\uparrow z&=x \uparrow (y \star z),\\ (x\downarrow y)\uparrow z&=x \downarrow (y \uparrow z),\\ (x \star y) \downarrow z&=x \downarrow (y\downarrow z). \end{align*} Hence, $(A,\uparrow,\downarrow)$ is a dendriform algebra. Adding the relations of each column of the matrix of relations, we obtain: \begin{align*} (x \leftarrow y)\leftarrow z&=x \leftarrow (y \star z),&(x\rightarrow y)\leftarrow z&=x \rightarrow (y \leftarrow z), &(x \star y) \rightarrow z&=x \rightarrow (y\rightarrow z). \end{align*} Hence, $(A,\leftarrow,\rightarrow)$ is a dendriform algebra. The associative (non unitary) product associated to both these dendriform structures is $\star$. \item Dually, if $C$ is a quadri-coalgebra, $(C,\Delta_\uparrow,\Delta_\downarrow)$ and $(C,\Delta_\leftarrow,\Delta_\rightarrow)$ are dendriform coalgebras. The associated coassociative (non counitary) coproduct is $\Delta_*$. \end{enumerate}
{\bf Examples.} \begin{enumerate} \item Let $V$ be a vector space. The augmentation ideal of the tensor algebra $T(V)$ is given four products defined in the following way: for all $v_1,\ldots,v_k,v_{k+1},\ldots,v_{k+l}\in V$, $k,l \geq 1$, \begin{align*} v_1\ldots v_k \nwarrow v_{k+1}\ldots v_{k+l}&=\sum_{\substack{\sigma \in Sh(k,l),\\ \sigma^{-1}(1)=1,\:\sigma^{-1}(k+l)=k}} v_{\sigma^{-1}(1)}\ldots v_{\sigma^{-1}(k+l)},\\ v_1\ldots v_k \swarrow v_{k+1}\ldots v_{k+l}&=\sum_{\substack{\sigma \in Sh(k,l),\\ \sigma^{-1}(1)=k+1,\:\sigma^{-1}(k+l)=k}} v_{\sigma^{-1}(1)}\ldots v_{\sigma^{-1}(k+l)},\\ v_1\ldots v_k \searrow v_{k+1}\ldots v_{k+l}&=\sum_{\substack{\sigma \in Sh(k,l),\\ \sigma^{-1}(1)=k+1,\:\sigma^{-1}(k+l)=k+l}} v_{\sigma^{-1}(1)}\ldots v_{\sigma^{-1}(k+l)},\\ v_1\ldots v_k \nearrow v_{k+1}\ldots v_{k+l}&=\sum_{\substack{\sigma \in Sh(k,l),\\ \sigma^{-1}(1)=1,\:\sigma^{-1}(k+l)=k+l}} v_{\sigma^{-1}(1)}\ldots v_{\sigma^{-1}(k+l)}, \end{align*} where $Sh(k,l)$ is the set of $(k,l)$-shuffles, that is to say permutations $\sigma\in \mathfrak{S}_{k+l}$ such that $\sigma(1)<\ldots< \sigma(k)$ and $\sigma(k+1)<\ldots<\sigma(k+l)$. The associated associative product is the usual shuffle product. \item The augmentation ideal of the Hopf algebra $\mathbf{FQSym}$ of permutations introduced in \cite{Malvenuto} and studied in \cite{Duchamp} is also a quadri-algebra, as mentioned in \cite{Aguiar}. 
For all permutations $\alpha \in \mathfrak{S}_k$, $\beta \in \mathfrak{S}_l$, $k,l\geq1$: \begin{align*} \alpha \nwarrow \beta&=\sum_{\substack{\sigma \in Sh(k,l),\\ \sigma^{-1}(1)=1,\:\sigma^{-1}(k+l)=k}} (\alpha\otimes \beta)\circ \sigma^{-1},\\ \alpha \swarrow \beta&=\sum_{\substack{\sigma \in Sh(k,l),\\ \sigma^{-1}(1)=k+1,\:\sigma^{-1}(k+l)=k}} (\alpha\otimes \beta)\circ \sigma^{-1},\\ \alpha \searrow \beta&=\sum_{\substack{\sigma \in Sh(k,l),\\ \sigma^{-1}(1)=k+1,\:\sigma^{-1}(k+l)=k+l}} (\alpha\otimes \beta)\circ \sigma^{-1},\\ \alpha \nearrow \beta&=\sum_{\substack{\sigma \in Sh(k,l),\\ \sigma^{-1}(1)=1,\:\sigma^{-1}(k+l)=k+l}} (\alpha\otimes \beta)\circ \sigma^{-1}. \end{align*} As $\mathbf{FQSym}$ is self-dual, its coproduct can also be split into four parts, making it a quadri-coalgebra. As the pairing on $\mathbf{FQSym}$ is defined by $\langle \sigma,\tau\rangle=\delta_{\sigma,\tau^{-1}}$ for any permutations $\sigma,\tau$, we deduce that if $\sigma\in \mathfrak{S}_n$, $n\geq 1$, with the notations of \cite{Malvenuto}: \begin{align*} \Delta_\nwarrow(\sigma)&=\sum_{\sigma^{-1}(1),\sigma^{-1}(n) \leq i<n} Std(\sigma(1)\ldots \sigma(i))\otimes Std(\sigma(i+1)\ldots \sigma(n)),\\ \Delta_\swarrow(\sigma)&=\sum_{\sigma^{-1}(n) \leq i < \sigma^{-1}(1)} Std(\sigma(1)\ldots \sigma(i))\otimes Std(\sigma(i+1)\ldots \sigma(n)),\\ \Delta_\searrow(\sigma)&=\sum_{1\leq i< \sigma^{-1}(1) ,\sigma^{-1}(n)} Std(\sigma(1)\ldots \sigma(i))\otimes Std(\sigma(i+1)\ldots \sigma(n)),\\ \Delta_\nearrow(\sigma)&=\sum_{\sigma^{-1}(1) \leq i <\sigma^{-1}(n)} Std(\sigma(1)\ldots \sigma(i))\otimes Std(\sigma(i+1)\ldots \sigma(n)). \end{align*} The compatibilities between these products and coproducts will be studied in Proposition \ref{11}. 
For example: \begin{align*} (12)\nwarrow (12)&=(1342),&\Delta_\nwarrow((3412))&=(231)\otimes (1),&\Delta_\nwarrow((2143))&=(213)\otimes (1),\\ (12)\swarrow (12)&=(3142)+(3412),&\Delta_\swarrow((3412))&=(12)\otimes (12),&\Delta_\swarrow((2143))&=0,\\ (12)\searrow (12)&=(3124),&\Delta_\searrow((3412))&=(1)\otimes(312),&\Delta_\searrow((2143))&=(1)\otimes(132),\\ (12)\nearrow (12)&=(1234)+(1324),&\Delta_\nearrow((3412))&=0,&\Delta_\nearrow((2143))&=(21)\otimes (21). \end{align*} The dendriform algebra $(\mathbf{FQSym},\leftarrow,\rightarrow)$ and the dendriform coalgebra $(\mathbf{FQSym}, \Delta_\leftarrow,\Delta_\rightarrow)$ are described in \cite{FoissyDend,FoissyDend2}; the dendriform algebra $(\mathbf{FQSym},\uparrow,\downarrow)$ and the dendriform coalgebra $(\mathbf{FQSym}, \Delta_\uparrow,\Delta_\downarrow)$ are described in \cite{FoissyPatras}. Both dendriform algebras are free, and both dendriform coalgebras are cofree, by the dendriform rigidity theorem \cite{FoissyDend}. Note that $\mathbf{FQSym}$ is not free as a quadri-algebra, as $(1)\nwarrow (1)=0$. \item The dual of the Hopf algebra of totally assigned graphs \cite{Manchon} is a quadri-coalgebra. \end{enumerate}
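These shuffle formulas are easy to verify mechanically. The following Python sketch (our addition, not part of the paper) recomputes the four quadri-products of $(12)$ with itself in $\mathbf{FQSym}$, encoding permutations as tuples of values:

```python
from itertools import combinations

def shuffles(k, l):
    """(k,l)-shuffles: sigma in S_{k+l} increasing on 1..k and on k+1..k+l."""
    n = k + l
    result = []
    for block in combinations(range(1, n + 1), k):
        rest = [v for v in range(1, n + 1) if v not in block]
        result.append(tuple(list(block) + rest))  # the word sigma(1)...sigma(n)
    return result

def inverse(sigma):
    inv = [0] * len(sigma)
    for pos, val in enumerate(sigma, start=1):
        inv[val - 1] = pos
    return tuple(inv)

def quadri_product(alpha, beta, corner):
    """Terms of alpha <corner> beta, corner in {'nw','sw','se','ne'}."""
    k, l = len(alpha), len(beta)
    ab = list(alpha) + [b + k for b in beta]  # alpha tensor beta
    first = {'nw': 1, 'ne': 1, 'sw': k + 1, 'se': k + 1}[corner]
    last = {'nw': k, 'sw': k, 'ne': k + l, 'se': k + l}[corner]
    terms = []
    for sigma in shuffles(k, l):
        inv = inverse(sigma)
        if inv[0] == first and inv[-1] == last:
            # (alpha tensor beta) composed with sigma^{-1}
            terms.append(tuple(ab[i - 1] for i in inv))
    return sorted(terms)

for corner in ('nw', 'sw', 'se', 'ne'):
    print(corner, quadri_product((1, 2), (1, 2), corner))
```

The output reproduces the four products displayed above; since the four conditions on $\sigma^{-1}(1)$ and $\sigma^{-1}(k+l)$ partition $Sh(k,l)$, the four results sum to the full shuffle product $\star$.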
\subsection{Nonsymmetric operads}
We refer to \cite{Loday2,Markl,Vallette} for the usual definitions and properties of operads and nonsymmetric operads.\\
{\bf Notations and reminders.} \begin{itemize} \item Let $V$ be a vector space. The free nonsymmetric operad generated in arity $2$ by $V$ is denoted by $\mathbf{F}(V)$. If we fix a basis $(v_i)_{i\in I}$ of $V$, then for all $n \geq 1$, a basis of $\mathbf{F}(V)_n$ is given by the set of planar binary trees with $n$ leaves, whose $(n-1)$ internal vertices are decorated by elements of $\{v_i\mid i\in I\}$. The operadic composition is given by the grafting of trees on leaves. If $V$ is finite-dimensional, then for all $n\geq 1$, $\mathbf{F}(V)_n$ is finite-dimensional, and: $$dim(\mathbf{F}(V)_n)=\frac{1}{n}\binom{2n-2}{n-1}dim(V)^{n-1}.$$ \item Let $\mathbf{P}$ be a nonsymmetric operad and $V$ a vector space. A structure of $\mathbf{P}$-algebra on $V$ is a family of maps: $$\left\{\begin{array}{rcl} \mathbf{P}(n)\otimes V^{\otimes n}&\longrightarrow&V\\ p\otimes v_1\otimes \ldots \otimes v_n&\longrightarrow&p.(v_1,\ldots,v_n), \end{array}\right.$$ satisfying some compatibilities with the composition of $\mathbf{P}$. \item The free $\mathbf{P}$-algebra generated by the vector space $V$ is, as a vector space: $$F_\mathbf{P}(V)=\bigoplus_{n\geq 0} \mathbf{P}(n) \otimes V^{\otimes n};$$ the action of $\mathbf{P}$ on $F_\mathbf{P}(V)$ is given by: $$p.(p_1\otimes w_1,\ldots,p_n \otimes w_n)=p\circ (p_1,\ldots,p_n)\otimes w_1\otimes \ldots \otimes w_n.$$ \item Let $\mathbf{P}=(\mathbf{P}_n)_{n \geq 1}$ be a nonsymmetric operad. It is quadratic if: \begin{itemize} \item It is generated by $G_\mathbf{P}=\mathbf{P}_2$. \item Let $\pi_\mathbf{P}:\mathbf{F}(G_\mathbf{P})\longrightarrow \mathbf{P}$ be the canonical morphism from $\mathbf{F}(G_\mathbf{P})$ to $\mathbf{P}$; then its kernel is generated, as an operadic ideal, by $Ker(\pi_\mathbf{P})_3=Ker(\pi_\mathbf{P})\cap \mathbf{F}(G_\mathbf{P})_3$. \end{itemize} \end{itemize} If $\mathbf{P}$ is quadratic, we put $G_\mathbf{P}=\mathbf{P}_2$, and $R_\mathbf{P}=Ker(\pi_\mathbf{P})_3$. 
By definition, these two spaces entirely determine $\mathbf{P}$, up to an isomorphism. \\
{\bf Examples}. \begin{enumerate} \item The nonsymmetric operad $\mathbf{Quad}$ of quadri-algebras is quadratic. It is generated by $G_\mathbf{Quad}=Vect(\nwarrow,\swarrow,\searrow,\nearrow)$, and $R_\mathbf{Quad}$ is the linear span of the nine following elements: \begin{align*} \bdtroisun{$\nwarrow$}{$\nwarrow$}&-\bdtroisdeux{$\nwarrow$}{$\star$},& \bdtroisun{$\nwarrow$}{$\nearrow$}&-\bdtroisdeux{$\nearrow$}{$\leftarrow$},& \bdtroisun{$\nearrow$}{$\uparrow$}&-\bdtroisdeux{$\nearrow$}{$\rightarrow$},\\ \bdtroisun{$\nwarrow$}{$\swarrow$}&-\bdtroisdeux{$\swarrow$}{$\uparrow$},& \bdtroisun{$\nwarrow$}{$\searrow$}&-\bdtroisdeux{$\searrow$}{$\nwarrow$},& \bdtroisun{$\nearrow$}{$\downarrow$}&-\bdtroisdeux{$\searrow$}{$\nearrow$},\\ \bdtroisun{$\swarrow$}{$\leftarrow$}&-\bdtroisdeux{$\swarrow$}{$\downarrow$},& \bdtroisun{$\swarrow$}{$\rightarrow$}&-\bdtroisdeux{$\searrow$}{$\swarrow$},& \bdtroisun{$\searrow$}{$\star$}&-\bdtroisdeux{$\searrow$}{$\searrow$}. \end{align*} As $dim(F(G_\mathbf{Quad})_3)=32$, $dim(\mathbf{Quad}_3)=32-9=23$. \item The nonsymmetric operad $\mathbf{Dend}$ of dendriform algebras is quadratic. It is generated by $G_\mathbf{Dend}=Vect(\prec,\succ)$, and $R_\mathbf{Dend}$ is the linear span of the three following elements: \begin{align*} \bdtroisun{$\prec$}{$\prec$}&-\bdtroisdeux{$\prec$}{$\star$},& \bdtroisun{$\prec$}{$\succ$}&-\bdtroisdeux{$\succ$}{$\prec$},& \bdtroisun{$\succ$}{$\star$}&-\bdtroisdeux{$\succ$}{$\succ$}. \end{align*}\end{enumerate}
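The dimension counts above can be cross-checked against the free-operad formula: a planar binary tree with $n$ leaves has $n-1$ internal vertices, each carrying one decoration. A quick Python check (our addition, not part of the paper):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def planar_binary_trees(n):
    """Number of planar binary trees with n leaves (the Catalan number C_{n-1})."""
    if n == 1:
        return 1
    # split the leaves between the left and right subtrees of the root
    return sum(planar_binary_trees(k) * planar_binary_trees(n - k) for k in range(1, n))

def dim_free_operad(d, n):
    """dim F(V)_n for dim V = d: trees with n leaves, n-1 decorated vertices."""
    return planar_binary_trees(n) * d ** (n - 1)

print(dim_free_operad(4, 3))      # dim F(G_Quad)_3 = 32
print(dim_free_operad(4, 3) - 9)  # dim Quad_3 = 32 - 9 = 23
print(dim_free_operad(2, 3))      # dim F(G_Dend)_3 = 8
```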
The nonsymmetric operad $\mathbf{Quad}$ of quadri-algebras, being quadratic, has a Koszul dual $\mathbf{Quad}^!$. The following formulas for the generating formal series of $\mathbf{Quad}$ and $\mathbf{Quad}^!$ have been conjectured in \cite{Aguiar} and proved in \cite{Vallette}, as well as the koszulity:
\begin{prop}\label{2}\begin{enumerate} \item For all $n \geq 1$, $\displaystyle dim (\mathbf{Quad}(n))=\frac{1}{n}\sum_{j=n}^{2n-1}\binom{3n}{n+1+j}\binom{j-1}{j-n}$. This is sequence A007297 in \cite{Sloane}. \item For all $n \geq 1$, $dim(\mathbf{Quad}^!(n))=n^2$. \item The operad of quadri-algebras is Koszul. \end{enumerate}\end{prop}
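Since the operad is Koszul, the two generating series are compositional inverses of each other up to alternating signs: with $f(x)=\sum_{n\geq 1} n^2x^n$, the series $g(x)=\sum_{n\geq 1}\dim(\mathbf{Quad}(n))x^n$ satisfies $-f(-g(x))=x$. The following Python sketch (our verification, not part of the paper) recovers the first terms of A007297 this way and compares them with the closed formula, normalized by $1/n$:

```python
from math import comb

N = 6
# h(x) = -f(-x) where f(x) = sum_{n>=1} n^2 x^n is the Quad^! series
h = [0] + [(-1) ** (n + 1) * n * n for n in range(1, N + 1)]

def mul(a, b):
    """Product of polynomials truncated at degree N."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and bj and i + j <= N:
                c[i + j] += ai * bj
    return c

# compositional inverse g of h, solved degree by degree (h_1 = 1)
g = [0, 1] + [0] * (N - 1)
for n in range(2, N + 1):
    comp, p = [0] * (N + 1), [1] + [0] * N
    for k in range(1, N + 1):
        p = mul(p, g)                              # p = g^k, truncated
        comp = [c + h[k] * q for c, q in zip(comp, p)]
    g[n] = -comp[n]                                # forces [x^n] h(g(x)) = 0

print(g[1:6])  # [1, 4, 23, 156, 1162]

# cross-check against the closed formula (note the 1/n normalization)
for n in range(1, N):
    s = sum(comb(3 * n, n + 1 + j) * comb(j - 1, j - n) for j in range(n, 2 * n))
    assert n * g[n] == s
```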
\section{The operad of quadri-algebras and its Koszul dual}
\subsection{Dual quadri-algebras}
Algebras on $\mathbf{Quad}^!$ will be called dual quadri-algebras. This operad $\mathbf{Quad}^!$ is described in \cite{Vallette} in terms of the white Manin product. Let us give an explicit description.
\begin{prop} A dual quadri-algebra is a family $(A,\nwarrow,\swarrow,\searrow,\nearrow)$, where $A$ is a vector space and $\nwarrow,\swarrow,\searrow,\nearrow:A\otimes A \longrightarrow A$, such that for all $x,y,z \in A$: \begin{align*} (x \nwarrow y)\nwarrow z&=x \nwarrow (y\nwarrow z)=x\nwarrow (y\swarrow z)=x \nwarrow (y\searrow z)=x\nwarrow (y \nearrow z),\\ (x \nearrow y)\nwarrow z&=x \nearrow (y\nwarrow z)=x \nearrow(y \swarrow z),\\ (x \nwarrow y)\nearrow z&=(x \nearrow y)\nearrow z=x \nearrow (y\searrow z)=x \nearrow (y\nearrow z),\\ (x \swarrow y)\nwarrow z&=x \swarrow (y\nwarrow z)=x \swarrow (y\nearrow z),\\ (x \searrow y) \nwarrow z&=x \searrow (y\nwarrow z),\\ (x \swarrow y)\nearrow z&=(x \searrow y) \nearrow z=x \searrow (y \nearrow z),\\ (x \nwarrow y)\swarrow z&=(x\swarrow y)\swarrow z=x \swarrow (y\swarrow z)=x \swarrow (y \searrow z),\\ (x \searrow y)\swarrow z&=(x \nearrow y)\swarrow z=x \searrow(y \swarrow z),\\ (x \nwarrow y) \searrow z&=(x \swarrow y) \searrow z=(x \searrow y) \searrow z=(x \nearrow y) \searrow z=x \searrow (y\searrow z). \end{align*} These groups of relations are denoted by $(1)^!,\ldots, (9)^!$. Note that the four products $\nwarrow,\swarrow,\searrow,\nearrow$ are associative. \end{prop}
\begin{proof} We put $G=Vect(\nwarrow,\swarrow,\searrow,\nearrow)$ and $E$ the component of arity $3$ of the free nonsymmetric operad generated by $G$, that is to say: $$E=Vect\left(\bdtroisdeux{$f$}{$g$},\bdtroisun{$f$}{$g$}\mid f,g \in \{\nwarrow,\swarrow,\searrow,\nearrow\}\right).$$ We give $G$ a pairing such that the four products form an orthonormal basis of $G$. This induces a pairing on $E$: for all $x,y,z,t\in G$, \begin{align*} \langle \bdtroisun{$x$}{$y$},\bdtroisun{$z$}{$t$}\rangle&=\langle x,z\rangle \langle y,t\rangle,& \langle \bdtroisdeux{$x$}{$y$},\bdtroisdeux{$z$}{$t$}\rangle&=-\langle x,z\rangle \langle y,t\rangle,\\ \langle \bdtroisdeux{$x$}{$y$},\bdtroisun{$z$}{$t$}\rangle&=0,& \langle \bdtroisun{$x$}{$y$},\bdtroisdeux{$z$}{$t$}\rangle&=0. \end{align*} The quadratic nonsymmetric operad $\mathbf{Quad}$ is generated by $G=Vect(\nwarrow,\swarrow,\searrow,\nearrow)$ and the subspace of relations $R$ of $E$ corresponding to the nine relations (1,1)$\ldots$(3,3). The quadratic nonsymmetric operad $\mathbf{Quad}^!$ is generated by $G \approx G^*$ and the subspace of relations $R^\perp$ of $E$. As $dim(R)=9$ and $dim(E)=32$, $dim(R^\perp)=23$. A direct verification shows that the $23$ relations given in $(1)^!,\ldots, (9)^!$ are elements of $R^\perp$. As they are linearly independent, they form a basis of $R^\perp$. \end{proof} \\
{\bf Notations}. We consider: $$\mathcal{R}=\bigsqcup_{n=1}^\infty [n]^2.$$ The element $(i,j) \in [n]^2 \subset \mathcal{R}$ will be denoted by $(i,j)_n$ in order to avoid confusion. We graphically represent $(i,j)_n$ by putting in grey the boxes of coordinates $(a,b)$, $1\leq a \leq i$, $1 \leq b \leq j$,
of an $n \times n$ array, the boxes $(1,1)$, $(1,n)$, $(n,1)$ and $(n,n)$ being respectively top left, bottom left, top right and bottom right. For example: \begin{align*} (2,1)_3&=\blocun, &(1,1)_2&=\blocdeux, &(3,2)_4&=\bloctrois. \end{align*}
\begin{prop} Let $A_\mathcal{R}=Vect(\mathcal{R})$. We define four products $\nwarrow$, $\swarrow$, $\searrow$, $\nearrow$ on $A_\mathcal{R}$ by: \begin{align*} (i,j)_p \nwarrow (k,l)_q&=(i,j)_{p+q},&(i,j)_p \nearrow (k,l)_q&=(k+p,j)_{p+q},\\ (i,j)_p \swarrow (k,l)_q&=(i,p+l)_{p+q},& (i,j)_p \searrow (k,l)_q&=(k+p,l+p)_{p+q}. \end{align*} Then $(A_\mathcal{R},\nwarrow,\swarrow,\searrow,\nearrow)$ is a dual quadri-algebra. It is graded by putting the elements of $[n]^2 \in \mathcal{R}$ homogeneous of degree $n$, and the generating formal series of $A_\mathcal{R}$ is: $$\sum_{n=1}^\infty n^2X^n=\frac{X(1+X)}{(1-X)^3}.$$ Moreover, $A_\mathcal{R}$ is freely generated as a dual quadri-algebra by $(1,1)_1$. \end{prop}
\begin{proof} Let us take $(i,j)_p$, $(k,l)_q$ and $(m,n)_r \in \mathcal{R}$. Then: \begin{itemize} \item Each computation in $(1)^!$ gives $(i,j)_{p+q+r}$. \item Each computation in $(2)^!$ gives $(p+k,j)_{p+q+r}$. \item Each computation in $(3)^!$ gives $(p+q+m,j)_{p+q+r}$. \item Each computation in $(4)^!$ gives $(i,p+l)_{p+q+r}$. \item Each computation in $(5)^!$ gives $(p+k,p+l)_{p+q+r}$. \item Each computation in $(6)^!$ gives $(p+q+m,p+l)_{p+q+r}$. \item Each computation in $(7)^!$ gives $(i,p+q+n)_{p+q+r}$. \item Each computation in $(8)^!$ gives $(p+k,p+q+n)_{p+q+r}$. \item Each computation in $(9)^!$ gives $(p+q+m,p+q+n)_{p+q+r}$. \end{itemize} So $A_\mathcal{R}$ is a dual quadri-algebra. We now prove that $A_\mathcal{R}$ is generated by $(1,1)_1$. Let $B$ be the dual quadri-subalgebra of $A_\mathcal{R}$ generated by $(1,1)_1$, and let us prove that $(i,j)_n \in B$ by induction on $n$ for all $(i,j)_n \in \mathcal{R}$. This is obvious for $n=1$, as then $(i,j)_n=(1,1)_1$. Let us assume the result at rank $n-1$, with $n>1$. \begin{itemize} \item If $i\geq 2$ and $j \geq 2$, then $(1,1)_1 \searrow (i-1,j-1)_{n-1}=(i,j)_n$. By the induction hypothesis, $(i-1,j-1)_{n-1}\in B$, so $(i,j)_n \in B$. \item If $i\geq 2$ and $j=1$, then $(1,1)_1 \nearrow (i-1,1)_{n-1}=(i,1)_n$. By the induction hypothesis, $(i-1,1)_{n-1}\in B$, so $(i,1)_n \in B$. \item If $i=1$ and $j\geq 2$, then $(1,1)_1 \swarrow (1,j-1)_{n-1}=(1,j)_n$. By the induction hypothesis, $(1,j-1)_{n-1}\in B$, so $(1,j)_n \in B$. \item If $i=j=1$, then $(1,1)_1 \nwarrow (1,1)_{n-1}=(1,1)_n$. By the induction hypothesis, $(1,1)_{n-1}\in B$, so $(1,1)_n \in B$. \end{itemize} Finally, $B$ contains $\mathcal{R}$, so $B=A_\mathcal{R}$. \\
Let $C$ be the free $\mathbf{Quad}^!$-algebra generated by a single element $x$, homogeneous of degree $1$. As a graded vector space: $$C=\bigoplus_{n\geq 1} \mathbf{Quad}^!_n\otimes V^{\otimes n},$$ where $V=Vect(x)$. So for all $n \geq 1$, by Proposition \ref{2}, $dim(C_n)=n^2=dim((A_\mathcal{R})_n)$. There exists a surjective morphism of $\mathbf{Quad}^!$-algebras $\theta$ from $C$ to $A_\mathcal{R}$, sending $x$ to $(1,1)_1$. As $x$ and $(1,1)_1$ are both homogeneous of degree $1$, $\theta$ is homogeneous of degree $0$. As $A_\mathcal{R}$ and $C$ have the same generating formal series, $\theta$ is bijective, so $A_\mathcal{R}$ is isomorphic to $C$. \end{proof}\\
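The generation step of the proof can also be brute-forced: starting from $(1,1)_1$ and closing under the four products, each degree $n$ fills out all of $[n]^2$. A small Python sketch (our addition, not part of the paper), encoding $(i,j)_n$ as the triple $(i,j,n)$:

```python
# the four products of the proposition, on triples (i, j, n)
def nw(x, y):
    (i, j, p), (k, l, q) = x, y
    return (i, j, p + q)

def sw(x, y):
    (i, j, p), (k, l, q) = x, y
    return (i, p + l, p + q)

def se(x, y):
    (i, j, p), (k, l, q) = x, y
    return (k + p, l + p, p + q)

def ne(x, y):
    (i, j, p), (k, l, q) = x, y
    return (k + p, j, p + q)

N = 5
generated = {1: {(1, 1, 1)}}
for n in range(2, N + 1):
    generated[n] = set()
    for p in range(1, n):
        for x in generated[p]:
            for y in generated[n - p]:
                for prod in (nw, sw, se, ne):
                    generated[n].add(prod(x, y))

print([len(generated[n]) for n in range(1, N + 1)])  # [1, 4, 9, 16, 25]
```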
{\bf Examples.} Here are graphical examples of products. The result of the product is drawn in light gray: \begin{align*} \blocun \nwarrow \blocdeux&=\produitnw,&\blocun \swarrow \blocdeux&=\produitsw,& \blocun \searrow \blocdeux&=\produitse,&\blocun \nearrow \blocdeux&=\produitne. \end{align*} Roughly speaking, the products of $x \in [m]^2\subset \mathcal{R}$ and $y \in [n]^2\subset \mathcal{R}$ are obtained by putting $x$ and $y$ diagonally in a common array of size $(m+n) \times (m+n)$. This array is naturally decomposed into four parts, denoted by $nw$, $sw$, $se$ and $ne$ according to their direction. Then: \begin{enumerate} \item $x \nwarrow y$ is given by the black boxes in the $nw$ part. \item $x\swarrow y$ is given by the boxes in the $sw$ part which are simultaneously under a black box and to the left of a black box. \item $x\searrow y$ is given by the black boxes in the $se$ part. \item $x\nearrow y$ is given by the boxes in the $ne$ part which are simultaneously over a black box and to the right of a black box. \end{enumerate} Here are the results of the nine relations applied to $x=\blocun$, $y=\blocdeux$ and $z=\bloctrois$: \begin{align*} (1)^!&:\relationun&(2)^!&:\relationdeux&(3)^!&:\relationtrois\\[2mm] (4)^!&:\relationquatre&(5)^!&:\relationcinq&(6)^!&:\relationsix\\[2mm] (7)^!&:\relationsept&(8)^!&:\relationhuit&(9)^!&:\relationneuf \end{align*}
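The nine groups of relations can likewise be verified exhaustively on the small elements of $A_\mathcal{R}$; a Python sketch of this check (ours, not part of the paper):

```python
# the four products on triples (i, j, n) encoding (i,j)_n
def nw(x, y): (i, j, p), (k, l, q) = x, y; return (i, j, p + q)
def sw(x, y): (i, j, p), (k, l, q) = x, y; return (i, p + l, p + q)
def se(x, y): (i, j, p), (k, l, q) = x, y; return (k + p, l + p, p + q)
def ne(x, y): (i, j, p), (k, l, q) = x, y; return (k + p, j, p + q)

# each group lists the expressions that the relations (1)^!..(9)^! equate
groups = [
    lambda x, y, z: [nw(nw(x, y), z), nw(x, nw(y, z)), nw(x, sw(y, z)), nw(x, se(y, z)), nw(x, ne(y, z))],
    lambda x, y, z: [nw(ne(x, y), z), ne(x, nw(y, z)), ne(x, sw(y, z))],
    lambda x, y, z: [ne(nw(x, y), z), ne(ne(x, y), z), ne(x, se(y, z)), ne(x, ne(y, z))],
    lambda x, y, z: [nw(sw(x, y), z), sw(x, nw(y, z)), sw(x, ne(y, z))],
    lambda x, y, z: [nw(se(x, y), z), se(x, nw(y, z))],
    lambda x, y, z: [ne(sw(x, y), z), ne(se(x, y), z), se(x, ne(y, z))],
    lambda x, y, z: [sw(nw(x, y), z), sw(sw(x, y), z), sw(x, sw(y, z)), sw(x, se(y, z))],
    lambda x, y, z: [sw(se(x, y), z), sw(ne(x, y), z), se(x, sw(y, z))],
    lambda x, y, z: [se(nw(x, y), z), se(sw(x, y), z), se(se(x, y), z), se(ne(x, y), z), se(x, se(y, z))],
]

elems = [(i, j, n) for n in range(1, 4) for i in range(1, n + 1) for j in range(1, n + 1)]
violations = sum(
    1
    for x in elems for y in elems for z in elems
    for group in groups
    if len(set(group(x, y, z))) != 1
)
print(violations)  # 0
```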
{\bf Remarks.} \begin{enumerate} \item A description of the free $\mathbf{Quad}^!$-algebra generated by any set $\mathcal{D}$ is done similarly. We put: $$\mathcal{R}(\mathcal{D})=\bigsqcup_{n=1}^\infty [n]^2\times \mathcal{D}^n.$$ The four products are defined by: \begin{align*} ((i,j)_p,d_1,\ldots,d_p) \nwarrow ((k,l)_q,e_1,\ldots,e_q)&=((i,j)_{p+q},d_1,\ldots,d_p,e_1,\ldots,e_q),\\ ((i,j)_p,d_1,\ldots,d_p) \swarrow ((k,l)_q,e_1,\ldots,e_q)&=((i,p+l)_{p+q},d_1,\ldots,d_p,e_1,\ldots,e_q),\\ ((i,j)_p,d_1,\ldots,d_p) \searrow ((k,l)_q,e_1,\ldots,e_q)&=((k+p,l+p)_{p+q},d_1,\ldots,d_p,e_1,\ldots,e_q),\\ ((i,j)_p,d_1,\ldots,d_p) \nearrow ((k,l)_q,e_1,\ldots,e_q)&=((k+p,j)_{p+q},d_1,\ldots,d_p,e_1,\ldots,e_q). \end{align*} \item We can also deduce a combinatorial description of the nonsymmetric operad $\mathbf{Quad}^!$. As a vector space, $\mathbf{Quad}^!_n=Vect([n]^2)$ for all $n \geq 1$. The composition is given by: $$(i,j)_m \circ ((k_1,l_1)_{n_1},\ldots,(k_m,l_m)_{n_m})=(n_1+\ldots+n_{i-1}+k_i,n_1+\ldots+n_{j-1}+l_j)_{n_1+\ldots+n_m}.$$ In particular: \begin{align*} \nwarrow&=(1,1)_2,&\swarrow&=(1,2)_2,&\searrow&=(2,2)_2,&\nearrow&=(2,1)_2. \end{align*}\end{enumerate}
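The composition rule of the second remark can be checked against the explicit products of the previous proposition; a short Python verification (ours, not part of the paper):

```python
def compose(op, args):
    """(i,j)_m composed with ((k_1,l_1)_{n_1}, ..., (k_m,l_m)_{n_m})."""
    i, j, m = op
    assert len(args) == m
    ks, ls, ns = [a[0] for a in args], [a[1] for a in args], [a[2] for a in args]
    return (sum(ns[:i - 1]) + ks[i - 1], sum(ns[:j - 1]) + ls[j - 1], sum(ns))

# the four generators, as elements of Quad^!_2
NW, SW, SE, NE = (1, 1, 2), (1, 2, 2), (2, 2, 2), (2, 1, 2)

for p in range(1, 4):
    for q in range(1, 4):
        for x in [(i, j, p) for i in range(1, p + 1) for j in range(1, p + 1)]:
            for y in [(k, l, q) for k in range(1, q + 1) for l in range(1, q + 1)]:
                (i, j, _), (k, l, _) = x, y
                assert compose(NW, [x, y]) == (i, j, p + q)
                assert compose(SW, [x, y]) == (i, p + l, p + q)
                assert compose(SE, [x, y]) == (k + p, l + p, p + q)
                assert compose(NE, [x, y]) == (k + p, j, p + q)
print("composition rule matches the four explicit products")
```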
\begin{cor}\label{5} We define a nonsymmetric operad $\mathbf{Dias}$ in the following way: \begin{itemize} \item For all $n\geq 1$, $\mathbf{Dias}_n=Vect([n])$. The elements of $[n] \subseteq \mathbf{Dias}_n$ are denoted by $(1)_n,\ldots,(n)_n$ in order to avoid confusion. \item The composition is given by: $$(i)_m \circ ((j_1)_{n_1},\ldots,(j_m)_{n_m})=(n_1+\ldots+n_{i-1}+j_i)_{n_1+\ldots+n_m}.$$ \end{itemize} This is the nonsymmetric operad of associative dialgebras \cite{Loday}, that is to say algebras $A$ with two products $\vdash$ and $\dashv$ such that for all $x,y,z\in A$: \begin{align*} x\dashv(y\dashv z)&=x\dashv (y\vdash z)=(x\dashv y)\dashv z,\\ (x\vdash y)\dashv z&=x\vdash (y\dashv z),\\ (x\dashv y)\vdash z&=(x\vdash y)\vdash z=x\vdash (y\vdash z). \end{align*} We denote by $\Box$ and $\blacksquare$ the two Manin products on nonsymmetric operads of \cite{Vallette}. Then: \begin{align*} \mathbf{Quad}^!&=\mathbf{Dias}\otimes \mathbf{Dias}=\mathbf{Dias} \Box \mathbf{Dias}=\mathbf{Dias}\blacksquare \mathbf{Dias},\\ \mathbf{Quad}&=\mathbf{Dend} \blacksquare \mathbf{Dend}=\mathbf{Dend} \Box \mathbf{Dend}. \end{align*}
\end{cor}
\begin{proof} We denote by $\mathbf{Dias}'$ the nonsymmetric operad generated by $\dashv$ and $\vdash$ and the relations: \begin{align*} \bdtroisdeux{$\dashv$}{$\dashv$}&=\bdtroisdeux{$\dashv$}{$\vdash$}=\bdtroisun{$\dashv$}{$\dashv$},& \bdtroisdeux{$\vdash$}{$\dashv$}&=\bdtroisun{$\dashv$}{$\vdash$},& \bdtroisdeux{$\vdash$}{$\vdash$}&=\bdtroisun{$\vdash$}{$\dashv$}=\bdtroisun{$\vdash$}{$\vdash$}. \end{align*} First, observe that: \begin{align*} (1)_2\circ (I,(1)_2)&=(1)_2\circ(I,(2)_2)=(1)_2\circ((1)_2,I)=(1)_3,\\ (1)_2\circ((2)_2,I)&=(2)_2\circ(I,(1)_2)=(2)_3,\\ (2)_2\circ (I,(2)_2)&=(2)_2\circ((1)_2,I)=(2)_2\circ ((2)_2,I)=(3)_3. \end{align*} So there exists a morphism $\theta$ of nonsymmetric operad from $\mathbf{Dias}'$ to $\mathbf{Dias}$, sending $\dashv$ to $(1)_2$ and $\vdash$ to $(2)_2$. Note that $\theta(I)=(1)_1$.\\
Let us prove that $\theta$ is surjective. Let $n\geq 1$ and $i\in [n]$; we show that $(i)_n \in Im(\theta)$ by induction on $n$. If $n \leq 2$, the result is obvious. Let us assume the result at rank $n-1$, $n\geq 3$. If $i=1$, then: $$(1)_2\circ ((1)_1,(1)_{n-1})=(1)_n.$$ By the induction hypothesis, $(1)_{n-1} \in Im(\theta)$, so $(1)_n \in Im(\theta)$. If $i\geq 2$, then: $$(2)_2\circ ((1)_1,(i-1)_{n-1})=(i)_n.$$ By the induction hypothesis, $(i-1)_{n-1} \in Im(\theta)$, so $(i)_n \in Im(\theta)$. \\
It is proved in \cite{Loday} that $dim(\mathbf{Dias}'_n)=dim(\mathbf{Dias}_n)=n$ for all $n \geq 1$. As $\theta$ is surjective, it is an isomorphism. Moreover, let us consider the following map: $$\left\{\begin{array}{rcl} \mathbf{Dias}\otimes \mathbf{Dias}&\longrightarrow&\mathbf{Quad}^!\\ (i)_n \otimes (j)_n&\longrightarrow&(i,j)_n. \end{array}\right.$$ It is clearly an isomorphism of nonsymmetric operads. It is proved in \cite{Vallette} that $\mathbf{Dias} \Box \mathbf{Dias}=\mathbf{Quad}^!$. As $\mathbf{Dias}$ is the quadratic nonsymmetric operad generated by $(1)_2$ and $(2)_2$ with the space of relations $R_\mathbf{Dias}$ spanned by: $$\bdtroisun{$b$}{$a$}-\bdtroisdeux{$c$}{$d$}, (a,b,c,d)\in E= \left\{\begin{array}{c} ((1)_2,(1)_2,(1)_2,(1)_2),((1)_2,(1)_2,(1)_2,(2)_2),\\ ((2)_2,(1)_2,(2)_2,(1)_2),((1)_2,(2)_2,(2)_2,(2)_2),\\ ((2)_2,(2)_2,(2)_2,(2)_2)\end{array}\right\},$$ $\mathbf{Dias} \blacksquare \mathbf{Dias}$ is generated by $(1,1)_2$, $(1,2)_2$, $(2,1)_2$ and $(2,2)_2$ with the relations: \begin{align*} &\bdtroisun{$b$}{$a$}-\bdtroisdeux{$c$}{$d$},(a,b,c,d)\in E',\\ E'&=\{((a_1,a_2)_2,(b_1,b_2)_2,(c_1,c_2)_2,(d_1,d_2)_2)\mid (a_1,b_1,c_1,d_1),(a_2,b_2,c_2,d_2)\in E\}. 
\end{align*} This gives 25 relations, which are not linearly independent, and can be regrouped in the following way: \begin{align*} \bprimedtroisun{$11$}{$11$}&=\bdtroisdeux{$11$}{$11$}=\bdtroisdeux{$11$}{$12$}=\bdtroisdeux{$11$}{$21$}=\bdtroisdeux{$11$}{$22$},& \bprimedtroisun{$11$}{$21$}&=\bdtroisdeux{$21$}{$11$}=\bdtroisdeux{$21$}{$12$},\\ \bprimedtroisun{$21$}{$11$}&=\bdtroisdeux{$21$}{$21$}=\bprimedtroisun{$21$}{$21$}=\bdtroisdeux{$21$}{$22$},& \bprimedtroisun{$11$}{$12$}&=\bdtroisdeux{$12$}{$21$}=\bdtroisdeux{$12$}{$11$},\\ \bprimedtroisun{$11$}{$22$}&=\bdtroisdeux{$22$}{$11$},& \bprimedtroisun{$21$}{$12$}&=\bprimedtroisun{$21$}{$22$}=\bdtroisdeux{$22$}{$21$},\\ \bprimedtroisun{$12$}{$11$}&=\bdtroisdeux{$12$}{$12$}=\bdtroisdeux{$12$}{$22$}=\bprimedtroisun{$12$}{$12$},& \bprimedtroisun{$12$}{$21$}&=\bdtroisdeux{$22$}{$12$}=\bprimedtroisun{$12$}{$22$},\\ \bdtroisdeux{$22$}{$22$}&=\bprimedtroisun{$22$}{$11$}=\bprimedtroisun{$22$}{$12$}=\bprimedtroisun{$22$}{$21$}=\bprimedtroisun{$22$}{$22$}. \end{align*} where we denote $ij$ instead of $(i,j)_2$. So $\mathbf{Dias} \blacksquare \mathbf{Dias}$ is isomorphic to $\mathbf{Quad}^!$ via the isomorphism given by: $$\left\{\begin{array}{rcl} \mathbf{Quad}^!&\longrightarrow&\mathbf{Dias} \blacksquare \mathbf{Dias}\\ \nwarrow&\longrightarrow&(1,1)_2,\\ \swarrow&\longrightarrow&(1,2)_2,\\ \searrow&\longrightarrow&(2,2)_2,\\ \nearrow&\longrightarrow&(2,1)_2. \end{array}\right.$$ By Koszul duality, as $\mathbf{Dias}^!=\mathbf{Dend}$, we obtain the results for $\mathbf{Quad}$. \end{proof}
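The compositions of $\mathbf{Dias}$ used at the beginning of this proof can likewise be checked mechanically; the sketch below (our own illustration, with our own names) encodes the basis element $(i)_n$ as the pair `(i, n)`:

```python
# Basis element (i)_n of Dias encoded as (i, n); composition as displayed
# in Corollary 5.

def dias_compose(x, args):
    i, m = x
    assert len(args) == m
    ns = [n for (_, n) in args]
    j_i, _ = args[i - 1]
    return (sum(ns[:i - 1]) + j_i, sum(ns))

I = (1, 1)                       # the operad unit (1)_1
one2, two2 = (1, 2), (2, 2)      # the generators (1)_2 and (2)_2

# the nine compositions listed in the proof
assert dias_compose(one2, [I, one2]) == dias_compose(one2, [I, two2]) \
       == dias_compose(one2, [one2, I]) == (1, 3)
assert dias_compose(one2, [two2, I]) == dias_compose(two2, [I, one2]) == (2, 3)
assert dias_compose(two2, [I, two2]) == dias_compose(two2, [one2, I]) \
       == dias_compose(two2, [two2, I]) == (3, 3)
```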
\subsection{Free quadri-algebra on one generator}
As $\mathbf{Quad}=\mathbf{Dend} \Box \mathbf{Dend}$, $\mathbf{Quad}$ is the suboperad of $\mathbf{Dend} \otimes \mathbf{Dend}$ generated by the component of arity $2$. An explicit injection of $\mathbf{Quad}$ into $\mathbf{Dend} \otimes \mathbf{Dend}$ is given by:
\begin{prop}\label{6} The following defines an injective morphism of nonsymmetric operads: $$\Theta:\left\{\begin{array}{rcl} \mathbf{Quad}&\longrightarrow&\mathbf{Dend} \otimes \mathbf{Dend}\\ \nwarrow&\longrightarrow&\prec \otimes \prec\\ \swarrow&\longrightarrow&\prec \otimes \succ\\ \searrow&\longrightarrow&\succ \otimes \succ\\ \nearrow&\longrightarrow&\succ \otimes \prec. \end{array}\right.$$ \end{prop}
\begin{cor} \label{7} The quadri-subalgebra of $(\mathbf{FQSym},\nwarrow,\swarrow,\searrow,\nearrow)$ generated by $(12)$ is free. \end{cor}
\begin{proof} Both dendriform algebras $(\mathbf{FQSym},\downarrow,\uparrow)$ and $(\mathbf{FQSym},\leftarrow,\rightarrow)$ are free. So the $\mathbf{Dend} \otimes \mathbf{Dend}$-algebra $(\mathbf{FQSym} \otimes \mathbf{FQSym},\uparrow \otimes \leftarrow,\downarrow\otimes \leftarrow, \downarrow \otimes \rightarrow,\uparrow \otimes \rightarrow)$ is free. By restriction, the $\mathbf{Dend} \otimes \mathbf{Dend}$-subalgebra of $\mathbf{FQSym} \otimes \mathbf{FQSym}$ generated by $(1)\otimes (1)$ is free. By restriction, the quadri-subalgebra $A$ of $\mathbf{FQSym}\otimes \mathbf{FQSym}$ generated by $(1)\otimes (1)$ is free.
Let $B$ be the quadri-subalgebra of $\mathbf{FQSym}$ generated by $(12)$ and let $\phi:A\longrightarrow B$ be the unique morphism sending $(1)\otimes (1)$ to $(12)$. We denote by $\mathbf{FQSym}_{even}$ the subspace of $\mathbf{FQSym}$ formed by the homogeneous components of even degrees. It is clearly a quadri-subalgebra of $\mathbf{FQSym}$. As $(12)\in \mathbf{FQSym}_{even}$, $B\subseteq \mathbf{FQSym}_{even}$. We consider the map: $$\psi:\left\{\begin{array}{rcl} \mathbf{FQSym}_{even}&\longrightarrow&\mathbf{FQSym}\otimes \mathbf{FQSym}\\ \sigma\in \mathfrak{S}_{2n}&\longrightarrow&\begin{cases} \left(\frac{\sigma(1)+1}{2},\ldots,\frac{\sigma(n)+1}{2}\right)\otimes \left(\frac{\sigma(n+1)}{2},\ldots,\frac{\sigma(2n)}{2}\right)\\ \hspace{1cm} \mbox{ if $\sigma(1),\ldots,\sigma(n) $ are odd and $\sigma(n+1),\ldots,\sigma(2n) $ are even},\\ 0\mbox{ otherwise}. \end{cases} \end{array}\right.$$ Let $\sigma \in \mathfrak{S}_{2m}$, $\tau\in \mathfrak{S}_{2n}$. Let us prove that $\psi(\sigma\diamond \tau)=\psi(\sigma)\diamond \psi(\tau)$ for $\diamond \in \{\nwarrow,\swarrow,\searrow,\nearrow\}$.
{\it First case.} Let us assume that $\psi(\sigma)=0$. Then there exists $1\leq i\leq m$ such that $\sigma(i)$ is even, and $m+1\leq j \leq 2m$ such that $\sigma(j)$ is odd.
Let $\tau\in \mathfrak{S}_{2n}$. Let $\alpha$ be obtained by a shuffle of $\sigma$ and $\tau[2m]$. If the letter $\sigma(i)$ appears in $\alpha$ in one of the positions $1,\ldots,m+n$, then $\psi(\alpha)=0$. Otherwise, the letter $\sigma(i)$ appears in one of the positions $m+n+1,\ldots,2m+2n$, so $\sigma(j)$ also appears in one of these positions, as $i<j$, and $\psi(\alpha)=0$. In both cases, $\psi(\alpha)=0$, and we deduce that $\psi(\sigma\diamond \tau)=0=\psi(\sigma)\diamond \psi(\tau)$.
{\it Second case.} Let us assume that $\psi(\tau)=0$. By a similar argument, we show that $\psi(\sigma\diamond \tau)=0=\psi(\sigma)\diamond \psi(\tau)$.
{\it Last case.} Let us assume that $\psi(\sigma)\neq 0$ and $\psi(\tau)\neq 0$. We put $\sigma=(\sigma_1,\sigma_2)$ and $\tau=(\tau_1,\tau_2)$, where the letters of $\sigma_1$ and $\tau_1$ are odd and the letters of $\sigma_2$ and $\tau_2$ are even.
Then $\psi(\sigma \nwarrow \tau)$ is obtained by shuffling $\sigma$ and $\tau[2m]$, such that the first and last letters are letters of $\sigma$,
and keeping only permutations such that the $(m+n)$ first letters are odd (and the $(m+n)$ last letters are even). These words are obtained
by shuffling $\sigma_1$ and $\tau_1[2m]$ such that the first letter is a letter of $\sigma_1$, and by shuffling $\sigma_2$ and $\tau_2[2m]$,
such that the last letter is a letter of $\sigma_2$. Hence:
$$\psi(\sigma \nwarrow \tau) =\psi(\sigma)\, (\uparrow \otimes \leftarrow)\, \psi(\tau)=\psi(\sigma) \nwarrow \psi(\tau).$$
The proof for the three other quadri-algebra products is similar. \\
Consequently, $\psi$ is a quadri-algebra morphism. Moreover, $\psi \circ \phi((1)\otimes (1))=\psi(12)=(1) \otimes (1)$.
As $A$ is generated by $(1) \otimes (1)$, $\psi \circ \phi=Id_A$, so $\phi$ is injective, and $A$ is isomorphic to $B$. \end{proof}
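The map $\psi$ can be tested directly on small permutations; in the sketch below (our own illustration, with our own names), the odd letters of the first half are rescaled to $(\sigma(i)+1)/2$ so that each tensor factor is again a permutation word:

```python
# psi sends a permutation word of even length 2n to a pair of permutation
# words of length n, or to 0 if the first half is not made of the odd
# letters and the second half of the even letters.

def psi(sigma):
    n2 = len(sigma)
    assert n2 % 2 == 0
    n = n2 // 2
    if all(s % 2 == 1 for s in sigma[:n]) and all(s % 2 == 0 for s in sigma[n:]):
        return (tuple((s + 1) // 2 for s in sigma[:n]),
                tuple(s // 2 for s in sigma[n:]))
    return 0

assert psi((1, 2)) == ((1,), (1,))            # psi(12) = (1) x (1)
assert psi((2, 1)) == 0                       # the first half must be odd
assert psi((1, 3, 2, 4)) == ((1, 2), (1, 2))
assert psi((3, 1, 4, 2)) == ((2, 1), (2, 1))
```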
\subsection{Koszulity of $\mathbf{Quad}$}
The Koszulity of $\mathbf{Quad}$ is proved in \cite{Vallette} by the poset method. We give here a second proof, using the rewriting method of \cite{Hoffbeck,Dotsenko,Loday2}.
\begin{theo} The operads $\mathbf{Quad}$ and $\mathbf{Quad}^!$ are Koszul. \end{theo}
\begin{proof} By Koszul duality, it is enough to prove that $\mathbf{Quad}^!$ is Koszul. We choose the order $\searrow<\nearrow<\swarrow<\nwarrow$ for the four operations, and the order $\btroisun<\btroisdeux$ for the two planar binary trees of arity $3$. Relations $(1)^!,\ldots,(9)^!$ give $23$ rewriting rules: \begin{align*} \bdtroisdeux{$\nwarrow $}{$\nwarrow $},\bdtroisdeux{$\nwarrow $}{$\swarrow $}, \bdtroisdeux{$\nwarrow $}{$\searrow $}, \bdtroisdeux{$\nwarrow $}{$\nearrow $}&\longrightarrow \bdtroisun{$\nwarrow $}{$\nwarrow $},& \bdtroisdeux{$\nearrow $}{$\nwarrow $}, \bdtroisdeux{$\nearrow $}{$\swarrow $} &\longrightarrow \bdtroisun{$\nwarrow $}{$\nearrow $},\\ \bdtroisun{$\nearrow $}{$\nwarrow $}, \bdtroisdeux{$\nearrow $}{$\searrow $}, \bdtroisdeux{$\nearrow $ }{$\nearrow $}&\longrightarrow \bdtroisun{$\nearrow $}{$\nearrow $},& \bdtroisdeux{$\swarrow $}{$\nwarrow $},\bdtroisdeux{$\swarrow $}{$\nearrow $} &\longrightarrow \bdtroisun{$\nwarrow $}{$\swarrow $},\\ \bdtroisdeux{$\searrow $}{$\nwarrow $}&\longrightarrow \bdtroisun{$\nwarrow $}{$\searrow $},&
\bdtroisdeux{$\searrow $}{$\nearrow $},\bdtroisun{$\nearrow $}{$\swarrow $} &\longrightarrow \bdtroisun{$\nearrow $}{$\searrow $},\\
\bdtroisdeux{$\swarrow $}{$\swarrow $}, \bdtroisdeux{$\swarrow $}{$\searrow $}, \bdtroisun{$\swarrow $}{$\nwarrow $}&\longrightarrow \bdtroisun{$\swarrow $}{$\swarrow $},& \bdtroisdeux{$\searrow $}{$\swarrow $}, \bdtroisun{$\swarrow $}{$\nearrow $} &\longrightarrow \bdtroisun{$\swarrow $}{$\searrow $},\\ \bdtroisdeux{$\searrow $}{$\searrow $},\bdtroisun{$\searrow $}{$\nwarrow $}, \bdtroisun{$\searrow $}{$\swarrow $}, \bdtroisun{$\searrow $}{$\nearrow $} &\longrightarrow \bdtroisun{$\searrow $}{$\searrow $}. \end{align*}
There are $156$ critical monomials, and the $156$ corresponding diagrams are confluent. Hence, $\mathbf{Quad}^!$ is Koszul. We used a computer to find the critical monomials and to verify the confluence of the diagrams. \end{proof}
\section{Quadri-bialgebras}
\subsection{Units and quadri-algebras}
Let $A$ and $B$ be vector spaces. We put $A\overline{\otimes} B=(K\otimes B)\oplus (A\otimes B)\oplus (A\otimes K)$. Clearly, if $A,B,C$ are three vector spaces, $(A\overline{\otimes} B)\overline{\otimes} C=A\overline{\otimes} (B\overline{\otimes} C)$.
\begin{prop}\begin{enumerate} \item Let $A$ be a quadri-algebra. We extend the four products on $A\overline{\otimes} A$ in the following way: if $a,b \in A$, \begin{align*} a\nwarrow 1&=a,&a\nearrow 1&=0,&1\nwarrow a&=0,&1\nearrow a&=0,\\ a\swarrow 1&=0,&a\searrow 1&=0,&1\swarrow a&=0,&1\searrow a&=a. \end{align*} The nine relations defining quadri-algebras are true on $A\overline{\otimes} A\overline{\otimes} A$. \item Let $A,B$ be two quadri-algebras. Then $A\overline{\otimes} B$ is a quadri-algebra with the following products: \begin{itemize} \item if $a,a' \in A\sqcup K$, $b,b'\in B\sqcup K$, with $(a,a')\notin K^2$ and $(b,b') \notin K^2$ : \begin{align*} (a\otimes b)\nwarrow (a'\otimes b')&=(a\uparrow a')\otimes (b\leftarrow b'),&(a\otimes b)\nearrow (a'\otimes b')&=(a\uparrow a')\otimes (b\rightarrow b'),\\ (a\otimes b)\swarrow (a'\otimes b')&=(a\downarrow a')\otimes (b\leftarrow b'),&(a\otimes b)\searrow (a'\otimes b')&=(a\downarrow a')\otimes (b\rightarrow b'). \end{align*} \item If $a,a'\in A$: \begin{align*} (a\otimes 1)\nwarrow (a'\otimes 1)&=(a\nwarrow a')\otimes 1,&(a\otimes 1)\nearrow (a'\otimes 1)&=(a\nearrow a')\otimes 1,\\ (a\otimes 1)\swarrow (a'\otimes 1)&=(a\swarrow a')\otimes 1,&(a\otimes 1)\searrow (a'\otimes 1)&=(a\searrow a')\otimes 1. \end{align*} \item If $b,b'\in B$: \begin{align*} (1\otimes b)\nwarrow (1\otimes b')&=1\otimes (b\nwarrow b'),&(1\otimes b)\nearrow (1\otimes b')&=1\otimes (b\nearrow b'),\\ (1\otimes b)\swarrow (1\otimes b')&=1\otimes (b\swarrow b'),&(1\otimes b)\searrow (1\otimes b')&=1\otimes (b\searrow b'). \end{align*}\end{itemize}\end{enumerate}\end{prop}
\begin{proof} 1. It is shown by direct verifications. \\
2. As $(A,\uparrow,\downarrow)$ and $(B,\leftarrow,\rightarrow)$ are dendriform algebras, $A\otimes B$ is a $\mathbf{Dend} \otimes \mathbf{Dend}$-algebra, so is a quadri-algebra by Proposition \ref{6}, with $\nwarrow=\uparrow \otimes \leftarrow$, $\swarrow=\downarrow \otimes \leftarrow$, $\searrow=\downarrow \otimes \rightarrow$ and $\nearrow=\uparrow \otimes \rightarrow$. The extension of the quadri-algebra axioms to $A\overline{\otimes} B$ is verified by direct computations. \end{proof} \\
{\bf Remark.} There is a second way to give $A\overline{\otimes} B$ a structure of quadri-algebra with the help of the associativity of $\star$: \begin{align*} \mbox{If $a\in A$ or $a'\in A$, $b,b'\in K\oplus B$,}&\begin{cases} (a\otimes b)\nwarrow (a'\otimes b')&=(a\nwarrow a')\otimes (b\star b'),\\ (a\otimes b)\swarrow (a'\otimes b')&=(a\swarrow a')\otimes (b\star b'),\\ (a\otimes b)\searrow (a'\otimes b')&=(a\searrow a')\otimes (b\star b'),\\ (a\otimes b)\nearrow (a'\otimes b')&=(a\nearrow a')\otimes (b\star b'); \end{cases}\\ \\ \mbox{if $b,b'\in K\oplus B$},&\begin{cases} (1\otimes b) \nwarrow (1\otimes b')&=1\otimes (b\nwarrow b'),\\ (1\otimes b) \swarrow (1\otimes b')&=1\otimes (b\swarrow b'),\\ (1\otimes b) \searrow (1\otimes b')&=1\otimes (b\searrow b'),\\ (1\otimes b) \nearrow (1\otimes b')&=1\otimes (b\nearrow b'). \end{cases}\end{align*}
$A\otimes K$ and $K \otimes B$ are quadri-subalgebras of $A\overline{\otimes} B$, respectively isomorphic to $A$ and $B$.
\subsection{Definitions and example of $\mathbf{FQSym}$}
\begin{defi}\label{10} A quadri-bialgebra is a family $(A,\nwarrow,\swarrow,\searrow,\nearrow,\tilde{\Delta}_\nwarrow,\tilde{\Delta}_\swarrow,\tilde{\Delta}_\searrow,\tilde{\Delta}_\nearrow)$ such that: \begin{itemize} \item $(A,\nwarrow,\swarrow,\searrow,\nearrow)$ is a quadri-algebra. \item $(A,\tilde{\Delta}_\nwarrow,\tilde{\Delta}_\swarrow,\tilde{\Delta}_\searrow,\tilde{\Delta}_\nearrow)$ is a quadri-coalgebra. \item We extend the four coproducts in the following way: \begin{align*} \Delta_\nwarrow&:\begin{cases} A&\longrightarrow A\otimes A\\ a&\longrightarrow \tilde{\Delta}_\nwarrow(a)+a\otimes 1, \end{cases}&\Delta_\nearrow&:\begin{cases} A&\longrightarrow A\otimes A\\ a&\longrightarrow \tilde{\Delta}_\nearrow(a), \end{cases}\\ \\ \Delta_\swarrow&:\begin{cases} A&\longrightarrow A\otimes A\\ a&\longrightarrow \tilde{\Delta}_\swarrow(a), \end{cases}&\Delta_\searrow&:\begin{cases} A&\longrightarrow A\otimes A\\ a&\longrightarrow \tilde{\Delta}_\searrow(a)+1\otimes a. \end{cases} \end{align*} For all $a,b\in A$: \begin{align*} \Delta_\nwarrow(a \nwarrow b)&=\Delta_\uparrow(a)\nwarrow \Delta_\leftarrow(b)& \Delta_\nearrow(a \nwarrow b)&=\Delta_\uparrow(a)\nwarrow \Delta_\rightarrow(b)\\ \Delta_\nwarrow(a \swarrow b)&=\Delta_\uparrow(a)\swarrow \Delta_\leftarrow(b)& \Delta_\nearrow(a \swarrow b)&=\Delta_\uparrow(a)\swarrow \Delta_\rightarrow(b)\\ \Delta_\nwarrow(a \searrow b)&=\Delta_\uparrow(a)\searrow \Delta_\leftarrow(b)& \Delta_\nearrow(a \searrow b)&=\Delta_\uparrow(a)\searrow \Delta_\rightarrow(b)\\ \Delta_\nwarrow(a \nearrow b)&=\Delta_\uparrow(a)\nearrow \Delta_\leftarrow(b)& \Delta_\nearrow(a \nearrow b)&=\Delta_\uparrow(a)\nearrow \Delta_\rightarrow(b)\\ \\ \Delta_\swarrow(a \nwarrow b)&=\Delta_\downarrow(a)\nwarrow \Delta_\leftarrow(b)& \Delta_\searrow(a \nwarrow b)&=\Delta_\downarrow(a)\nwarrow \Delta_\rightarrow(b)\\ \Delta_\swarrow(a \swarrow b)&=\Delta_\downarrow(a)\swarrow \Delta_\leftarrow(b)& \Delta_\searrow(a \swarrow b)&=\Delta_\downarrow(a)\swarrow \Delta_\rightarrow(b)\\ \Delta_\swarrow(a \searrow b)&=\Delta_\downarrow(a)\searrow \Delta_\leftarrow(b)& \Delta_\searrow(a \searrow b)&=\Delta_\downarrow(a)\searrow \Delta_\rightarrow(b)\\ \Delta_\swarrow(a \nearrow b)&=\Delta_\downarrow(a)\nearrow \Delta_\leftarrow(b)& \Delta_\searrow(a \nearrow b)&=\Delta_\downarrow(a)\nearrow \Delta_\rightarrow(b) \end{align*} \end{itemize}\end{defi}
{\bf Remark.} In other words, for all $a,b\in A$: \begin{align*} \tilde{\Delta}_\nwarrow(a\nwarrow b)&=a'_\uparrow \uparrow b \otimes a''_\uparrow +a'_\uparrow\uparrow b'_\leftarrow \otimes a''_\uparrow \leftarrow b''_\leftarrow,\\ \tilde{\Delta}_\swarrow(a\nwarrow b)&=a'_\downarrow \uparrow b \otimes a''_\downarrow +a'_\downarrow\uparrow b'_\leftarrow \otimes a''_\downarrow \leftarrow b''_\leftarrow,\\ \tilde{\Delta}_\searrow(a\nwarrow b)&=a'_\downarrow \otimes a''_\downarrow \leftarrow b +a'_\downarrow\uparrow b'_\rightarrow \otimes a''_\downarrow \leftarrow b''_\rightarrow,\\ \tilde{\Delta}_\nearrow(a\nwarrow b)&=a'_\uparrow \otimes a''_\uparrow \leftarrow b +a'_\uparrow\uparrow b'_\rightarrow \otimes a''_\uparrow \leftarrow b''_\rightarrow,\\ \\ \tilde{\Delta}_\nwarrow(a\swarrow b)&=a'_\uparrow \downarrow b \otimes a''_\uparrow +a'_\uparrow\downarrow b'_\leftarrow \otimes a''_\uparrow \leftarrow b''_\leftarrow,\\ \tilde{\Delta}_\swarrow(a\swarrow b)&=b\otimes a+b'_\leftarrow \otimes a\leftarrow b''_\leftarrow +a'_\downarrow \downarrow b \otimes a''_\downarrow+a'_\downarrow\downarrow b'_\leftarrow \otimes a''_\downarrow \leftarrow b''_\leftarrow,\\ \tilde{\Delta}_\searrow(a\swarrow b)&=b'_\rightarrow \otimes a\leftarrow b''_\rightarrow +a'_\downarrow\downarrow b'_\rightarrow \otimes a''_\downarrow \leftarrow b''_\rightarrow,\\ \tilde{\Delta}_\nearrow(a\swarrow b)&=a'_\uparrow\downarrow b'_\rightarrow \otimes a''_\uparrow \leftarrow b''_\rightarrow,\\ \\ \tilde{\Delta}_\nwarrow(a\searrow b)&=a\downarrow b'_\leftarrow\otimes b''_\leftarrow +a'_\uparrow\downarrow b'_\leftarrow \otimes a''_\uparrow \rightarrow b''_\leftarrow,\\ \tilde{\Delta}_\swarrow(a\searrow b)&=b'_\leftarrow\otimes a \rightarrow b''_\leftarrow +a'_\downarrow\downarrow b'_\leftarrow \otimes a''_\downarrow \rightarrow b''_\leftarrow,\\ \tilde{\Delta}_\searrow(a\searrow b)&=b'_\rightarrow\otimes a \rightarrow b''_\rightarrow +a'_\downarrow\downarrow b'_\rightarrow \otimes a''_\downarrow \rightarrow 
b''_\rightarrow,\\ \tilde{\Delta}_\nearrow(a\searrow b)&=a\downarrow b'_\rightarrow\otimes b''_\rightarrow +a'_\uparrow\downarrow b'_\rightarrow \otimes a''_\uparrow \rightarrow b''_\rightarrow,\\ \\ \tilde{\Delta}_\nwarrow(a\nearrow b)&=a\uparrow b'_\leftarrow\otimes b''_\leftarrow +a'_\uparrow\uparrow b'_\leftarrow \otimes a''_\uparrow \rightarrow b''_\leftarrow,\\ \tilde{\Delta}_\swarrow(a\nearrow b)&=a'_\downarrow\uparrow b'_\leftarrow \otimes a''_\downarrow \rightarrow b''_\leftarrow,\\ \tilde{\Delta}_\searrow(a\nearrow b)&=a'_\downarrow \otimes a''_\downarrow \rightarrow b +a'_\downarrow\uparrow b'_\rightarrow \otimes a''_\downarrow \rightarrow b''_\rightarrow,\\ \tilde{\Delta}_\nearrow(a\nearrow b)&=a\otimes b+a'_\uparrow\otimes a''_\uparrow \rightarrow b+a\uparrow b'_\rightarrow\otimes b''_\rightarrow +a'_\uparrow\uparrow b'_\rightarrow \otimes a''_\uparrow \rightarrow b''_\rightarrow. \end{align*} Consequently, we obtain four dendriform bialgebras \cite{FoissyDend}: \begin{align*} &(A,\leftarrow,\rightarrow,\Delta_\leftarrow,\Delta_\rightarrow),& &(A,\downarrow^{op},\uparrow^{op},\Delta_\downarrow^{op},\Delta_\uparrow^{op}),& &(A,\rightarrow^{op},\leftarrow^{op},\Delta_\uparrow,\Delta_\downarrow),& &(A,\uparrow,\downarrow,\Delta_\rightarrow^{op},\Delta_\leftarrow^{op}). \end{align*}
\begin{prop} \label{11} The augmentation ideal of $\mathbf{FQSym}$ is a quadri-bialgebra. \end{prop}
\begin{proof} As an example, let us prove the last compatibility. Let $\sigma,\tau$ be two permutations, of respective lengths $k$ and $l$. Then $\Delta_\nearrow(\sigma \nearrow \tau)$ is obtained by shuffling in all possible ways the word $\sigma$ and the shifted word $\tau[k]$, such that the first letter comes from $\sigma$ and the last letter comes from $\tau[k]$, and then cutting the obtained words in such a way that $1$ is in the left part and $k+l$ in the right part. Hence, the left part should contain letters coming from $\sigma$, including $1$, and start with the first letter of $\sigma$, and the right part should contain letters coming from $\tau[k]$, including $k+l$, and end with the last letter of $\tau[k]$. There are four possibilities: \begin{itemize} \item The left part contains only letters from $\sigma$ and the right part contains only letters from $\tau[k]$. This gives the term $\sigma \otimes \tau$. \item The left part contains only letters from $\sigma$, and the right part contains letters from $\sigma$ and $\tau[k]$. This gives the term $\sigma_\uparrow' \otimes \sigma_\uparrow'' \rightarrow \tau$. \item The left part contains letters from $\sigma$ and $\tau[k]$, and the right part contains only letters from $\tau[k]$. This gives the term $\sigma \uparrow \tau_\rightarrow'\otimes \tau_\rightarrow''$. \item Both parts contain letters from $\sigma$ and $\tau[k]$. This gives the term $\sigma'_\uparrow \uparrow \tau'_\rightarrow \otimes \sigma''_\uparrow \rightarrow \tau''_\rightarrow$. \end{itemize} So: $$\Delta_\nearrow(\sigma \nearrow \tau)=\sigma \otimes \tau+\sigma_\uparrow' \otimes \sigma_\uparrow'' \rightarrow \tau +\sigma \uparrow \tau_\rightarrow'\otimes \tau_\rightarrow'' +\sigma'_\uparrow \uparrow \tau'_\rightarrow \otimes \sigma''_\uparrow \rightarrow \tau''_\rightarrow.$$ The other compatibilities are proved following the same lines. \end{proof}
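The combinatorics used in this proof can be checked by brute force on small permutations. In the sketch below (our own illustration, with our own names; coefficients, all equal to $1$, are ignored and only the words appearing in each product are tracked), a shuffle of $\sigma$ with the shifted word $\tau[k]$ is attributed to one of the four quadri products according to whether its first and last letters come from $\sigma$ or from $\tau[k]$:

```python
from itertools import combinations

def shuffles(u, v):
    """All shuffles of the words u and v (relative orders preserved)."""
    n = len(u) + len(v)
    out = []
    for pos in combinations(range(n), len(u)):
        w, iu, iv = [None] * n, iter(u), iter(v)
        for i in range(n):
            w[i] = next(iu) if i in pos else next(iv)
        out.append(tuple(w))
    return out

def quadri(sigma, tau):
    """The four quadri products of FQSym on basis words (supports only)."""
    k = len(sigma)
    shifted = tuple(t + k for t in tau)   # tau[k]
    prods = {'nw': [], 'sw': [], 'se': [], 'ne': []}
    for w in shuffles(sigma, shifted):
        first_s, last_s = w[0] <= k, w[-1] <= k   # letter from sigma?
        key = {(True, True): 'nw', (False, True): 'sw',
               (False, False): 'se', (True, False): 'ne'}[(first_s, last_s)]
        prods[key].append(w)
    return prods

p = quadri((1, 2), (1, 2))
assert sorted(p['ne']) == [(1, 2, 3, 4), (1, 3, 2, 4)]
assert sum(len(v) for v in p.values()) == 6   # all shuffles of 12 and 34
```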
\subsection{Other examples}
Let $F_\mathbf{Quad}(V)$ be the free quadri-algebra generated by $V$. As it is free, it is possible to define four coproducts satisfying the quadri-bialgebra axioms in the following way: for all $v\in V$, $$\tilde{\Delta}_\nwarrow(v)=\tilde{\Delta}_\swarrow(v)=\tilde{\Delta}_\searrow(v)=\tilde{\Delta}_\nearrow(v)=0.$$ It is naturally graded by putting the elements of $V$ in degree $1$.
\begin{prop} For any vector space $V$, $F_\mathbf{Quad}(V)$ is a quadri-bialgebra. \end{prop}
\begin{proof} We only have to prove the nine compatibilities of quadri-coalgebras. We consider: $$B_{(1,1)}=\{a\in F_\mathbf{Quad}(V)\mid (\Delta_\nwarrow\otimes Id)\circ \Delta_\nwarrow(a)=(Id \otimes \Delta)\circ \Delta_\nwarrow(a)\}.$$ First, for all $v \in V$: $$(\Delta_\nwarrow\otimes Id)\circ \Delta_\nwarrow(v) =v \otimes 1\otimes 1=(Id \otimes \Delta)\circ \Delta_\nwarrow(v),$$ so $V\subseteq B_{(1,1)}$. If $a,b\in B_{(1,1)}$ and $\diamond \in \{\nwarrow,\swarrow,\searrow,\nearrow\}$: \begin{align*} (\Delta_\nwarrow \otimes Id)\circ \Delta_\nwarrow(a\diamond b) &=((\Delta_\uparrow \otimes Id)\circ \Delta_\uparrow(a))\diamond ((\Delta_\leftarrow \otimes Id)\circ \Delta_\leftarrow(b))\\ &=((Id \otimes \Delta)\circ \Delta_\uparrow(a))\diamond ((Id \otimes \Delta)\circ \Delta_\leftarrow(b))\\ &=(Id \otimes \Delta)(\Delta_\uparrow(a)\diamond \Delta_\leftarrow(b))\\ &=(Id\otimes \Delta)\circ \Delta_\nwarrow(a\diamond b). \end{align*} So $a\diamond b \in B_{(1,1)}$, and $B_{(1,1)}$ is a quadri-subalgebra of $F_\mathbf{Quad}(V)$ containing $V$: $B_{(1,1)}=F_\mathbf{Quad}(V)$, and the quadri-coalgebra relation (1.1) is satisfied. The eight other relations can be proved in the same way. Hence, $F_\mathbf{Quad}(V)$ is a quadri-bialgebra. \end{proof} \\
{\bf Remarks.} \begin{enumerate} \item We deduce that $(F_\mathbf{Quad}(V),\leftarrow,\rightarrow,\Delta_\leftarrow,\Delta_\rightarrow)$ and $(F_\mathbf{Quad}(V),\uparrow,\downarrow,\Delta_\rightarrow^{op},\Delta_\leftarrow^{op})$ are bidendriform bialgebras, in the sense of \cite{FoissyDend,FoissyDend2}; consequently, $(F_\mathbf{Quad}(V),\leftarrow,\rightarrow)$ and $(F_\mathbf{Quad}(V),\uparrow,\downarrow)$ are free dendriform algebras. \item When $V$ is one-dimensional, here are the respective dimensions $a_n$, $b_n$ and $c_n$ of the homogeneous components, of the primitive elements, and of the dendriform primitive elements, of degree $n$, for these two dendriform bialgebras:
$$\begin{array}{c|c|c|c|c|c|c|c|c|c|c} n&1&2&3&4&5&6&7&8&9&10\\ \hline a_n&1&4&23&156&1\:162&9\:162&75\:819&644\:908&5\:616\:182&49\:826\:712\\ \hline b_n&1&3&16&105&768&6\:006&49\:152&415\:701&3\:604\:480&31\:870\:410\\ \hline c_n&1&2&10&64&462&3\:584&29\:172&245\:760&2\:124\:694&18\:743\:296 \end{array}$$ These are sequences A007297, A085614 and A078531 of \cite{Sloane}. \item Let $V$ be finite-dimensional. The graded dual $F_\mathbf{Quad}(V)^*$ of $F_\mathbf{Quad}(V)$ is also a quadri-bialgebra. By the bidendriform rigidity theorem \cite{FoissyDend,FoissyDend2}, $(F_\mathbf{Quad}(V)^*,\leftarrow,\rightarrow)$ and $(F_\mathbf{Quad}(V)^*,\uparrow,\downarrow)$ are free dendriform algebras. Moreover, for any $x,y \in V$, nonzero, $x \nwarrow y$ and $x \searrow y$ are nonzero elements of $Prim_\mathbf{Quad}(F_\mathbf{Quad}(V))$, which implies that $(F_\mathbf{Quad}(V)^*,\nwarrow,\swarrow,\searrow,\nearrow)$ is not generated in degree $1$, so is not free as a quadri-algebra. Dually, the quadri-coalgebra $F_\mathbf{Quad}(V)$ is not cofree. \end{enumerate}
We now give a similar construction on the Hopf algebra of packed words $\mathbf{WQSym}$, see \cite{Thibon} for more details on this combinatorial Hopf algebra. \begin{theo} For any nonempty packed word $w$ of length $n$, we put: \begin{align*} m(w)&=\max\{i \in [n]\mid w(i)=1\},&M(w)&=\max\{i\in [n]\mid w(i)=\max(w)\}. \end{align*} We define four products on the augmentation ideal of $\mathbf{WQSym}$ in the following way: if $u,v$ are packed words of respective lengths $k,l\geq 1$: \begin{align*} u\nwarrow v&=\sum_{\substack{Pack(w(1)\ldots w(k))=u,\\ Pack(w(k+1)\ldots w(k+l))=v,\\ m(w),M(w)\leq k}} w,& u\nearrow v&=\sum_{\substack{Pack(w(1)\ldots w(k))=u,\\ Pack(w(k+1)\ldots w(k+l))=v,\\ m(w)\leq k <M(w)}} w,\\ u\swarrow v&=\sum_{\substack{Pack(w(1)\ldots w(k))=u,\\ Pack(w(k+1)\ldots w(k+l))=v,\\ M(w)\leq k<m(w)}} w,& u\searrow v&=\sum_{\substack{Pack(w(1)\ldots w(k))=u,\\ Pack(w(k+1)\ldots w(k+l))=v,\\ k<m(w),M(w)}} w. \end{align*} We define four coproducts on the augmentation ideal of $\mathbf{WQSym}$ in the following way: if $u$ is a packed word of length $n\geq 1$, \begin{align*} \Delta_\nwarrow(u)&=\sum_{u(1),u(n)\leq i<\max(u)} u_{\mid [i]}\otimes Pack(u_{\mid [\max(u)]\setminus [i]}),\\ \Delta_\swarrow(u)&=\sum_{u(n)\leq i<u(1)} u_{\mid [i]}\otimes Pack(u_{\mid [\max(u)]\setminus [i]}),\\ \Delta_\searrow(u)&=\sum_{1\leq i<u(1),u(n)} u_{\mid [i]}\otimes Pack(u_{\mid [\max(u)]\setminus [i]}),\\ \Delta_\nearrow(u)&=\sum_{u(1)\leq i<u(n)} u_{\mid [i]}\otimes Pack(u_{\mid [\max(u)]\setminus [i]}). \end{align*} These products and coproducts make the augmentation ideal of $\mathbf{WQSym}$ a quadri-bialgebra. The induced Hopf algebra structure is the usual one. \end{theo}
\begin{proof} For all packed words $u,v$ of respective lengths $k,l \geq 1$: $$u\star v=\sum_{\substack{Pack(w(1)\ldots w(k))=u,\\ Pack(w(k+1)\ldots w(k+l))=v}} w.$$ So $\star$ is the usual product of $\mathbf{WQSym}$, and is associative. In particular, if $u,v,w$ are packed words of respective lengths $k,l,n \geq 1$: $$u\star (v\star w)=(u\star v)\star w=\sum_{\substack{Pack(x(1)\ldots x(k))=u,\\ Pack(x(k+1)\ldots x(k+l))=v,\\ Pack(x(k+l+1),\ldots,x(k+l+n))=w}} x.$$ Then each side of relations $(1,1)\ldots (3,3)$ is the sum of the terms in this expression such that: \begin{align*} &m(x),M(x)\leq k&&m(x)\leq k<M(x)\leq k+l&&m(x)\leq k <k+l<M(x)\\ &M(x)\leq k<m(x)\leq k+l&&k<m(x),M(x)\leq k+l&&k<m(x)\leq k+l<M(x)\\ &M(x)\leq k<k+l<m(x)&&k<M(x)\leq k+l<m(x)&&k+l<m(x),M(x) \end{align*} So $(\mathbf{WQSym},\nwarrow,\swarrow,\searrow,\nearrow)$ is a quadri-algebra. \\
For every packed word $u$ of length $n\geq 1$: $$\tilde{\Delta}(u)=\sum_{1\leq i<\max(u)}u_{\mid [i]}\otimes Pack(u_{\mid [\max(u)]\setminus [i]}).$$ So $\tilde{\Delta}$ is the usual coproduct of $\mathbf{WQSym}$ and is coassociative. Moreover: $$(\tilde{\Delta} \otimes Id)\circ \tilde{\Delta}(u)=(Id \otimes \tilde{\Delta})\circ \tilde{\Delta}(u)=\sum_{1\leq i<j<\max(u)} u_{\mid [i]} \otimes Pack(u_{\mid [j]\setminus [i]})\otimes Pack(u_{\mid [\max(u)]\setminus [j]}).$$ Then each side of relations $(1,1)\ldots (3,3)$ is the sum of the terms in this expression such that: \begin{align*} &u(1),u(n)\leq i&&u(1)\leq i<u(n)\leq j&&u(1)\leq i<j< u(n)\\ &u(n)\leq i<u(1)\leq j&&i<u(1),u(n)\leq j&&i<u(1)\leq j<u(n)\\ &u(n)\leq i<j< u(1)&&i<u(n)\leq j<u(1)&&j<u(1),u(n) \end{align*} So $(\mathbf{WQSym},\Delta_\nwarrow,\Delta_\swarrow,\Delta_\searrow,\Delta_\nearrow)$ is a quadri-coalgebra.\\
Let us prove, as an example, one of the compatibilities between the products and the coproducts. If $u,v$ are packed words of respective lengths $k,l \geq 1$, $\Delta_\nearrow(u\nearrow v)$ is obtained as follows: \begin{itemize} \item Consider all the packed words $w$ such that $Pack(w(1)\ldots w(k))=u$ and $Pack(w(k+1)\ldots w(k+l))=v$, with $1\notin \{w(k+1),\ldots,w(k+l)\}$ and $\max(w) \in \{w(k+1),\ldots,w(k+l)\}$. \item Cut all these words into two parts, by separating the letters into two parts according to their orders, such that the first letter of $w$ is in the left (smallest) part, and the last letter of $w$ is in the right (greatest) part, and pack the two parts. \end{itemize} If $u'\otimes u''$ is obtained in this way, before packing, $u'$ contains $1$, so contains letters $w(i)$ with $i\leq k$, and $u''$ contains $\max(w)$, so contains letters $w(i)$, with $i>k$. Four cases are possible. \begin{itemize} \item $u'$ contains only letters $w(i)$ with $i\leq k$, and $u''$ contains only letters $w(i)$ with $i>k$. Then $w=u(1)\ldots u(k)(v(1)+\max(u))\ldots (v(l)+\max(u))$ and $u'\otimes u''=u\otimes v$. \item $u'$ contains only letters $w(i)$ with $i\leq k$, whereas $u''$ contains letters $w(i)$ with $i\leq k$ and letters $w(j)$ with $j>k$. Then $u'$ is obtained from $u$ by taking letters $<i$, with $i\geq u(1)$, and $u''$ is a term appearing in $Pack(u_{\mid [k]\setminus [i]})\star v$, such that there exists $j>k-i$, with $u''(j)=\max(u'')$. Summing all the possibilities, we obtain $u'_\uparrow\otimes u''_\uparrow \rightarrow v$. \item $u'$ contains letters $w(i)$ with $i\leq k$ and letters $w(j)$ with $j>k$, whereas $u''$ contains only letters $w(i)$ with $i>k$. With the same type of analysis, we obtain $u\uparrow v'_\rightarrow \otimes v''_\rightarrow$. \item Both $u'$ and $u''$ contain letters $w(i)$ with $i\leq k$ and letters $w(j)$ with $j>k$. We obtain $u'_\uparrow \uparrow v'_\rightarrow\otimes u''_\uparrow \rightarrow v''_\rightarrow$. 
\end{itemize} Finally: $$\Delta_\nearrow(u\nearrow v)=u\otimes v+u'_\uparrow\otimes u''_\uparrow \rightarrow v+u\uparrow v'_\rightarrow \otimes v''_\rightarrow +u'_\uparrow \uparrow v'_\rightarrow\otimes u''_\uparrow \rightarrow v''_\rightarrow.$$ The fifteen remaining compatibilities are proved following the same lines. \end{proof}\\
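The restriction-and-packing coproduct used in this proof is easy to test mechanically; the following sketch (our own illustration, with our own names) checks coassociativity of $\tilde{\Delta}$ on a small packed word:

```python
def pack(w):
    """Pack a word: relabel its letters by their rank, starting at 1."""
    vals = sorted(set(w))
    return tuple(vals.index(x) + 1 for x in w)

def coproduct(u):
    """tilde-Delta(u): cut the packed word u by letter values."""
    terms = []
    for i in range(1, max(u)):
        left = tuple(x for x in u if x <= i)          # u restricted to [i]
        right = pack(tuple(x for x in u if x > i))    # repacked complement
        terms.append((left, right))
    return terms

# coassociativity on u = (1232): both iterated coproducts agree
u = (1, 2, 3, 2)
lhs = sorted((c, d, b) for (a, b) in coproduct(u) for (c, d) in coproduct(a))
rhs = sorted((a, c, d) for (a, b) in coproduct(u) for (c, d) in coproduct(b))
assert lhs == rhs == [((1,), (1, 1), (1,))]
```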
{\bf Examples.} \begin{align*} (12)\nwarrow (12)&=(1423),\\ (12)\swarrow (12)&=(1312)+(2312)+(2413)+(3412),\\ (12)\searrow (12)&=(1212)+(1213)+(2313)+(2314),\\ (12)\nearrow (12)&=(1223)+(1234)+(1323)+(1324). \end{align*}
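These example products can be reproduced by brute force, enumerating all packed words of length $4$ and sorting them into the four products according to $m(w)$ and $M(w)$; the sketch below (our own illustration, with words written as tuples, e.g. $(1423)$ as `(1, 4, 2, 3)`) recovers exactly the four sums above:

```python
from itertools import product as cartesian

def pack(w):
    vals = sorted(set(w))
    return tuple(vals.index(x) + 1 for x in w)

def packed_words(n):
    """All packed words of length n (image equal to some [m])."""
    return [w for w in cartesian(range(1, n + 1), repeat=n)
            if set(w) == set(range(1, max(w) + 1))]

def wqsym_products(u, v):
    k, l = len(u), len(v)
    prods = {'nw': [], 'sw': [], 'se': [], 'ne': []}
    for w in packed_words(k + l):
        if pack(w[:k]) != u or pack(w[k:]) != v:
            continue
        m = max(i + 1 for i in range(k + l) if w[i] == 1)        # m(w)
        M = max(i + 1 for i in range(k + l) if w[i] == max(w))   # M(w)
        if m <= k and M <= k:  prods['nw'].append(w)
        elif M <= k < m:       prods['sw'].append(w)
        elif k < m and k < M:  prods['se'].append(w)
        elif m <= k < M:       prods['ne'].append(w)
    return prods

p = wqsym_products((1, 2), (1, 2))
assert p['nw'] == [(1, 4, 2, 3)]
assert sorted(p['sw']) == [(1, 3, 1, 2), (2, 3, 1, 2), (2, 4, 1, 3), (3, 4, 1, 2)]
assert sorted(p['se']) == [(1, 2, 1, 2), (1, 2, 1, 3), (2, 3, 1, 3), (2, 3, 1, 4)]
assert sorted(p['ne']) == [(1, 2, 2, 3), (1, 2, 3, 4), (1, 3, 2, 3), (1, 3, 2, 4)]
```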
\begin{cor} $(\mathbf{WQSym},\rightarrow,\leftarrow)$ and $(\mathbf{WQSym},\downarrow,\uparrow)$ are free dendriform algebras. \end{cor}
{\bf Remarks.} \begin{enumerate} \item If $A$ is a quadri-algebra, we put: $$Prim_\mathbf{Quad}(A)=Ker(\tilde{\Delta}_\nwarrow)\cap Ker(\tilde{\Delta}_\swarrow)\cap Ker(\tilde{\Delta}_\searrow)\cap Ker(\tilde{\Delta}_\nearrow).$$ For any vector space $V$, $A=F_\mathbf{Quad}(V)$ is obviously generated by $Prim_\mathbf{Quad}(A)$, as $V \subseteq Prim_\mathbf{Quad}(A)$. \item Let us consider the quadri-bialgebra $\mathbf{FQSym}$. Direct computations show that: \begin{align*} Prim_\mathbf{Quad}(\mathbf{FQSym})_1&=Vect(1),\\ Prim_\mathbf{Quad}(\mathbf{FQSym})_2&=(0),\\ Prim_\mathbf{Quad}(\mathbf{FQSym})_3&=(0),\\ Prim_\mathbf{Quad}(\mathbf{FQSym})_4&=Vect((2413)-(2143),(2413)-(3412)); \end{align*} moreover, the homogeneous component of degree $4$ of the quadri-subalgebra generated by $Prim_\mathbf{Quad}(\mathbf{FQSym})$ has dimension $23$, with basis: $$(1234),(1243),(1324),(1342),(1423),(1432),(2134),(2314),(2341),(2431),$$ $$(3124),(3214),(3241),(3421),(4123),(4132),(4213),(4231),(4312),(4321),$$ $$(2143)+(2413),(3142)+(3412),(2143)-(3142).$$ So $\mathbf{FQSym}$ is not generated by $Prim_\mathbf{Quad}(\mathbf{FQSym})$, so is not isomorphic, as a quadri-bialgebra, to any $F_\mathbf{Quad}(V)$. A similar argument holds for $\mathbf{WQSym}$. \end{enumerate}
\end{document}
\begin{document}
\twocolumn[\icmltitle{Data Considerations in Graph Representation Learning for Supply Chain Networks} \begin{icmlauthorlist} \icmlauthor{Ajmal Aziz}{cam_engineering} \icmlauthor{Edward Elson Kosasih}{cam_engineering} \icmlauthor{Ryan-Rhys Griffiths}{cam_phys} \icmlauthor{Alexandra Brintrup}{cam_engineering} \end{icmlauthorlist}
\icmlaffiliation{cam_engineering}{Department of Engineering, University of Cambridge} \icmlaffiliation{cam_phys}{Department of Physics, University of Cambridge}
\icmlcorrespondingauthor{Edward Elson Kosasih}{eek31@cam.ac.uk}
\icmlkeywords{Graph Representation Learning, Machine Learning, Supply Chains, Knowledge Graphs, Inductive Knowledge Graph Completion, ICML} \vskip 0.3in]
\printAffiliationsAndNotice{}
\begin{abstract} Supply chain network data is a valuable asset for businesses wishing to understand their ethical profile, security of supply, and efficiency. Possession of a dataset alone, however, is not sufficient to enable actionable decisions, because the information it contains is incomplete. In this paper, we present a graph representation learning approach to uncover hidden dependency links that focal companies may not be aware of. To the best of our knowledge, our work is the first to represent a supply chain as a heterogeneous knowledge graph with learnable embeddings. We demonstrate that our representation facilitates state-of-the-art performance on link prediction of a global automotive supply chain network using a relational graph convolutional network. It is anticipated that our method will be directly applicable to businesses wishing to sever links with nefarious entities and mitigate risk of supply failure. More abstractly, it is anticipated that our method will be useful to inform representation learning of supply chain networks for downstream tasks beyond link prediction.\footnote{Code available at: \url{https://anonymous.4open.science/r/Link-Prediction-Supply-Chains-3D76/README.md}}
\end{abstract}
\section{Introduction}
\begin{figure}\label{fig:supply_network}
\end{figure}
Manufacturing firms with non-trivial product offerings scale up by procuring subcomponents, services, or capabilities \cite{barney1991}. Inevitably, due to labour cost arbitrage and an ever increasing focus on cost efficiencies, supply networks have become more global as firms position themselves to optimise profitability. Whilst globalisation and outsourcing can have financial benefits and lead to faster time to market for manufactured goods, a supply network leads to structural dependencies amongst firms and subsequent concentration of risk, leaving value chains vulnerable to disruptions. The effects of globalisation mean that individual firms have little control or visibility over their extended supply network, exacerbating the risk of disruption.
In particular, the lack of visibility may result in firms procuring goods and services from firms which are known to perform nefarious activities, examples of which include, but are not limited to, the engagement of child labour, unsustainable business practices, and more general violations of employment law. An illustrative example of the structure of a complex supply network is given in \autoref{fig:supply_network}, where the focal firm (customer) remains unaware of a Tier 2 supplier and is also supplied by a Tier 3 supplier.
Recently, methods that leverage web scraping, entity recognition, and labelling have been proposed to provide transparency of the supply chain \cite{WichmannBBWM20}. In these methods, entity recognition is used to derive nodes, with edges built through binary classification applied to text data about the entities. There are two main drawbacks to Natural Language Processing (NLP) based approaches: (i) it is implicitly assumed that all procurement activities are published as articles or metadata on the internet, and (ii) they are not statistically or otherwise verifiable.
In this work, we propose an automated approach to synthesise an appropriate representation for a downstream link prediction task. It is viewed that automated approaches may complement methods that gather incomplete information and help towards statistical verification of links that have been found. Specifically, we:
\begin{enumerate}
\item Introduce the first method to learn a heterogeneous graph (knowledge graph) of supply chain network data.
\item Leverage the learned representation to achieve state-of-the-art performance on link prediction using a relational graph convolution network. \end{enumerate}
\section{Background}
\subsection{Supply Chain Networks as Graphs} Representing supply chain networks as graphs was first proposed by~\cite{choi_supply_2001}. Since then, researchers have studied the impacts of ripple effects \cite{chauhan2020relationship,dolgui2018ripple}, demonstrated that supply chain networks naturally form hubs and exhibit scale-free characteristics, and even trained algorithms to locate hidden links in these networks~\cite{Brintrup2018} using manually-specified homogeneous graphs (single edge type and node type cf. Section 4). In this work, we build on this body of work by developing a heterogeneous supply chain graph representation that yields improved performance in the downstream task of link prediction.
\begin{figure}\label{fig:kg_extract}
\end{figure}
\subsection{Supply Chain Link Prediction}
\begin{figure*}
\caption{An illustrative subgraph from the developed supply chain knowledge graph, composed of multi-type nodes and edges.}
\label{fig:kg_extract_full}
\end{figure*}
\textbf{Link Prediction:} Various techniques have been proposed for link prediction in domains beyond supply chain applications. One of the most commonly-used techniques is based on computing similarity between pairs of nodes. Such similarities are derived based on handcrafted heuristics such as node degrees, or the number of common neighbours; these include the Jaccard Coefficient \citep{libennowell_link-prediction_2007}, Katz \citep{katz_new_1953}, LHN Index \citep{leicht_vertex_2006}, Preferential Attachment \citep{barabasi_emergence_1999}, Adamic-Adar \citep{adamic_friends_2003}, Resource Allocation \citep{zhou_predicting_2009} and path-based similarity \citep{lu_similarity_2009}.
While many existing heuristics-based similarity techniques could work well in practice, they rely on domain experts to handcraft features. Given that we work with larger datasets with more attributes, manually defining such formulae is expensive. Additionally, while handcrafted heuristics can work well in a particular application, transferring them to different contexts is likely to fail. For instance, \citep{kovacs_network-based_2019} shows that Common Neighbours (CN), a heuristic used in social network analysis, fails to perform in protein graph networks. This is due to the inductive bias stemming from CN's assumption of homophily (similar nodes are connected), an assumption that does not hold in protein networks. Such issues have also been observed in supply chains \citep{brintrup_predicting_2018}.
Attempts to extract features automatically have been made by node embedding algorithms such as DeepWalk \citep{perozzi_deepwalk_2014}, LINE \citep{tang_line_2015} and node2vec \citep{grover_node2vec_2016}. Here, nodes are represented as vectors derived from topological features obtained by performing various forms of random walk within the neighborhood of the nodes. Link prediction then becomes a binary classification task, where a decoder scores a pair of node embeddings to estimate the likelihood of an edge forming between them.
Recent approaches extract more complex node embeddings using graph neural networks (GNNs) \citep{hamilton_graph_2020,bruna_spectral_2014,duvenaud_convolutional_2015,kipf_semi-supervised_2017,niepert_learning_2016}. GNNs have outperformed many existing algorithms across various domains such as airline carrier networks, citation networks, political blogs, protein interactions, power grids, the router-level internet, and E. coli metabolite reactions \citep{zhang_weisfeiler-lehman_2017,zhang_link_2018,zhang_revisiting_2020,huang_graph_2021,teru_inductive_2020}.
While GNNs have been applied to extract node embeddings, they may also be used to learn representations of a triplet (a pair of nodes with an edge between them). One implementation of such a GNN is called the relational graph convolutional network.
\textbf{Relational Graph Convolutional Networks (RGCNs):} generate latent representations for entities within multi-relational graphs (or knowledge graphs) for downstream graph reasoning tasks \cite{1schlinchtkrull2018}. Our approach begins by leveraging the GraphSAGE architecture proposed by \citet{2017_Hamilton} to learn functions that inductively generate node embeddings for all entities in the knowledge graph. The inductive learning paradigm is chosen because supply chain networks evolve over time as companies (which act as autonomous agents in our network representation) choose their locations, product offerings, or procurement relationships. All entity types within the knowledge graph are initialised with a random embedding vector. The set of features for all nodes, $\mathbf{X} \in \mathbb{R}^{|\mathcal{V}|\times d}$, is chosen at random, where $d$ is the dimensionality of the feature vector associated with the nodes and is treated as a hyperparameter to be tuned during cross validation.
\section{Our Approach}
\subsection{Learning a Heterogeneous Graph Representation of a Supply Chain Network}
The formal definition of a knowledge graph varies between application fields. For the purposes of graph representation learning over supply chain networks, a definition in line with ~\citet{palumbo2020} is adopted. In this paradigm, a knowledge graph can be conceptualised as a 3-tuple/triplet $K = (\mathcal{V}, \mathcal{E}, \mathcal{O})$ where $\mathcal{V}$ is the set of entities (or nodes), $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of relations, and $\mathcal{O}$ is the ontology of the knowledge graph.
\textbf{The ontology:} defines the set of entity types, $\Lambda$, and the set of relation types, $\mathcal{R}$. Additionally, it assigns nodes to their entity type, $\mathcal{O}: u \in \mathcal{V} \mapsto \Lambda$, and entity types to their related properties, $\mathcal{O}: \epsilon \in \Lambda \mapsto \mathcal{R}_{\epsilon} \in \mathcal{R}$. Effectively, the ontology defines the underlying data structure of a knowledge graph. Within this context, the set of entity types comprises $\{$\verb|Company|, \verb|Capability|, \verb|Certification|, \verb|Product|, and \verb|Country|$\}$, where $|\Lambda| = 5$. The corresponding set of edges between entities ($\mathcal{R}_{\epsilon} \in \mathcal{R}$) is defined with business-specific use cases in mind. A pictorial representation of the defined ontology is shown in \autoref{fig:ontology_kg}. As \autoref{fig:ontology_kg} shows, only a single relation type, \verb|located_in|, is allowed for triplets containing the entity type \verb|Country|.
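As an illustration, the ontology can be encoded as the set of admissible (head type, relation, tail type) patterns and used to validate candidate triplets. The sketch below is ours and purely illustrative (the function and constant names are not from our codebase); the patterns mirror the triplet types listed later in \autoref{tab:triplets}.

```python
# Entity types Lambda and admissible triplet patterns of the ontology O.
ENTITY_TYPES = {"company", "capability", "certification", "product", "country"}

ONTOLOGY = {
    ("company", "buys_from", "company"),
    ("company", "makes_product", "product"),
    ("company", "has_capability", "capability"),
    ("company", "has_cert", "certification"),
    ("company", "located_in", "country"),
    ("capability", "capability_produces", "product"),
    ("product", "complimentary_product_to", "product"),
}

def is_admissible(head_type, relation, tail_type):
    """Check whether a candidate triplet conforms to the ontology."""
    return (head_type, relation, tail_type) in ONTOLOGY

# Only 'located_in' may target a country entity:
print(is_admissible("company", "located_in", "country"))  # True
print(is_admissible("company", "buys_from", "country"))   # False
```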
\begin{figure}
\caption{Developed ontology to populate the supply chain knowledge graph.}
\label{fig:ontology_kg}
\end{figure}
\begin{figure}
\caption{Example of how tabular data is converted to a knowledge graph for only two relation types ($C_n$ denoting companies and $P_n$ products), according to our defined ontology.}
\label{fig:table_converted_to_graph}
\end{figure}
\textbf{Populating the knowledge graph}: The ontology is populated through a tabular data structure comprising (incomplete) attribute information about companies within the automotive sector\footnote{The data is obtained from MarkLines, a company that specialises in automotive supply chain data collection.}. The tabular data is converted into multiple multipartite graphs to derive relations. For an indicative example, \autoref{fig:table_converted_to_graph} demonstrates this procedure for two relation types: (company, \verb|buys_from|, company) and (company, \verb|makes_product|, product). Where relationships could not be deduced from the tabular data, bipartite projections were taken over the entity set where information was missing. This is a crucial step as complementary capability and product offerings may embed inductive bias when predicting \verb|buys_from| relations.
Some links included within the ontology were not immediately available in collected data but could be deduced. In this case, a co-occurrence frequency was used to derive these relations. The intuition here is that if a company possesses a capability (e.g. Plastic Injection Moulding) and produces products (Seat Belts, Bumpers, etc.), then enough instances of co-occurrence of capabilities with the same product would imply that the capability and product can be tied into the \verb|capability_produces| relation.
The histogram of co-occurrence frequency is shown in \autoref{fig:weights_capability_product_projection}. As the data exhibits noise potentially due to spurious information, a cutoff threshold is required to filter relations based on co-occurrence frequency. This threshold is treated as a hyperparameter during training and can be optimised for whichever edge type a company may deem the riskiest. For example, if a company is interested in geographic risk, then the cutoff threshold is optimised for predicting \verb|buys_from| relationships successfully.
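The co-occurrence derivation described above can be sketched as follows (a minimal illustration with hypothetical names and toy data, not our production pipeline): every company that possesses a capability and makes a product contributes one count to that (capability, product) pair, and pairs whose count clears the cutoff threshold become \verb|capability_produces| relations.

```python
from collections import Counter

def derive_capability_produces(company_caps, company_products, threshold):
    """Derive (capability, capability_produces, product) relations by
    counting co-occurrences of capabilities and products across companies,
    keeping pairs whose frequency reaches the cutoff threshold."""
    cooccurrence = Counter()
    for company, caps in company_caps.items():
        for cap in caps:
            for prod in company_products.get(company, []):
                cooccurrence[(cap, prod)] += 1
    return {pair for pair, n in cooccurrence.items() if n >= threshold}

# Toy data: two of three moulding companies make bumpers.
caps = {"C1": ["moulding"], "C2": ["moulding"], "C3": ["moulding", "machining"]}
prods = {"C1": ["bumper"], "C2": ["bumper"], "C3": ["gear"]}
print(derive_capability_produces(caps, prods, threshold=2))
# {('moulding', 'bumper')}
```

In practice the threshold is the hyperparameter discussed above, tuned against whichever relation type the downstream task prioritises.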
\begin{figure}
\caption{Capabilities and products co-occurrence frequency weights.}
\label{fig:weights_capability_product_projection}
\end{figure}
The other edge type that has to be deduced from data is $\tau = \verb|complimentary_product_to|$. For this edge type, a bipartite graph consisting of relation type $\tau=\verb|makes_product|$ between companies and their respective product portfolio is leveraged. A bipartite projection is taken onto the product entities where the weights in the projection space indicate the number of times companies purchased similar products. ~\autoref{fig:bipartite_projection} shows the distribution of edge weights in the projection space. The cutoff threshold for introducing \verb|complimentary_product_to| relations is also treated as a hyperparameter during training.
\textbf{Triplets} or labelled directed edges are represented as factual tuples $(u, \tau, v)$ for $u, v \in \mathcal{V}$ and $\tau \in \mathcal{R}$. For example, for the edge type $\tau = \verb|has_capability|$, the edge $(u, \verb|has_capability|, v)$ with $u=\verb|Bill Forge|$ and $v=\verb|Forging|$ encodes known information about a company and its capability, since this relation type is restricted to the entity types \verb|Company| and \verb|Capability|. \autoref{tab:entities} and \autoref{tab:triplets} convey the extracted entities totalling $\sim$161k and extracted facts totalling $\sim$647k, respectively.
\begin{figure}
\caption{Bipartite projection weights for product entities. $\mathcal{V}_A$ and $\mathcal{V}_B$ denote the disjoint node sets of the bipartite graph.}
\label{fig:bipartite_projection}
\end{figure}
Finally, the learning objective is the same as knowledge graph completion, and is geared towards predicting missing edges to complete the knowledge graph representation.
\begin{table}[htbp!] \centering \begin{tabular}{lc} \toprule \textbf{Entity Type} & \textbf{Count} \\ \midrule company (e.g. General Motors) & 41,826 \\ product (e.g. Floor mat) & 119,618 \\ country (e.g. Germany) & 74 \\ capability (e.g. Machining) & 36 \\ certification (e.g. ISO9001) & 9 \\ \midrule Total & 161,563 \\ \bottomrule \end{tabular} \caption{Entity count contained within the supply chain knowledge graph.} \label{tab:entities} \end{table}
\begin{table}[htbp!] \centering \begin{tabular}{lc} \toprule \textbf{Triplet Type} & \textbf{Count} \\ \midrule (capability, capability\_produces, product) & 21,857 \\ (company, buys\_from, company) & 88,997 \\ (company, has\_capability, capability) & 83,787 \\ (company, has\_cert, certification) & 32,654 \\ (company, located\_in, country) & 40,421 \\ (company, makes\_product, product) & 119,618 \\ (product, complimentary\_product\_to, product) & 260,658 \\ \midrule Total & 647,992 \\ \bottomrule \end{tabular} \caption{Triplet count contained within the supply chain knowledge graph.} \label{tab:triplets} \end{table}
\subsection{Loss Function for Link Prediction} The latent embeddings $\mathbf{h}_u$ for nodes $u \in \mathcal{V}$ are generated using the GraphSAGE architecture in the minibatch setting. In the GraphSAGE paradigm, trainable functions are learned to generate compact embeddings by sampling and aggregating features from local neighbourhoods of nodes to be used in downstream tasks (link prediction in our case). The aggregator function $\textsc{aggregate}^\tau_k$ $\forall k \in \{1, \ldots, K\}$ for depth $K$ and $\tau \in \mathcal{R}$ as well as trainable weight matrices used in updating latent embedding $\mathbf{W}^K_\tau$ for $\tau \in \mathcal{R}$ are trained to minimise the binary cross entropy loss across all relation types. This choice of link prediction loss is similar to that proposed by~\citet{2018_Schlichtkrull} and is given as:
\begin{equation} \begin{split}
\mathcal{L} = -\sum_{(u,\tau, v),\, y} &y \log f(u, \tau, v) \, + \\&(1-y)\log\left( 1-f(u, \tau, v) \right) \end{split} \end{equation}
where triples $(u, \tau, v)$ for $u, v \in \mathcal{V}$ with relation $\tau \in \mathcal{R}$ are scored according to $f(u, \tau, v)$ based on an indicator $y \in \{0, 1\}$ denoting whether or not the triplet exists (detailed further in~\autoref{sec:experiments}). The score $f(\cdot)$ is derived from the $K$-th node embeddings for the source and destination nodes, $\mathbf{h}^K_u$ and $\mathbf{h}^K_v$ respectively, and was chosen as $f(u, \tau, v) = (\mathbf{h}^K_u)^T R_{\tau} \mathbf{h}^K_v$, which is the DistMult scoring function~\cite{yang2015embedding}. In this context, $R_{\tau} \in \mathbb{R}^{d \times d}$ is a diagonal matrix for every relation $\tau \in \mathcal{R}$ and $d$ is the size of the initialised node embeddings. The loss naturally incentivises the model to associate higher scores to observed triples and lower scores to unobserved triples.
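A minimal NumPy sketch of this scoring and loss is given below. It is illustrative only: the names are ours, the embeddings are random stand-ins for the learned GraphSAGE outputs, and we assume (as in RGCN-style link prediction) that a logistic sigmoid maps the raw DistMult score into $(0,1)$ before the cross entropy is taken.

```python
import numpy as np

def distmult_score(h_u, r_diag, h_v):
    """DistMult: (h_u)^T diag(r) h_v, a per-relation bilinear score."""
    return float(np.sum(h_u * r_diag * h_v))

def bce_loss(triples, labels, emb, rel_diag):
    """Binary cross entropy over scored triples. The sigmoid squashing of
    the raw score is an assumption on top of the equation in the text."""
    loss = 0.0
    for (u, tau, v), y in zip(triples, labels):
        s = distmult_score(emb[u], rel_diag[tau], emb[v])
        f = 1.0 / (1.0 + np.exp(-s))          # sigmoid
        f = np.clip(f, 1e-12, 1.0 - 1e-12)    # guard against log(0)
        loss -= y * np.log(f) + (1 - y) * np.log(1 - f)
    return loss / len(triples)

# Toy embeddings standing in for learned node representations:
rng = np.random.default_rng(0)
d = 8
emb = {u: rng.normal(size=d) for u in ["BillForge", "Forging", "Acme"]}
rel_diag = {"has_capability": rng.normal(size=d)}
triples = [("BillForge", "has_capability", "Forging"),
           ("Acme", "has_capability", "Forging")]
print(bce_loss(triples, [1, 0], emb, rel_diag))  # a positive scalar
```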
\section{Related Work}
The application of link prediction to supply networks has been scarce. To the best of our knowledge, Supply Network Link Prediction (SNLP) \citep{brintrup_predicting_2018} is the only published work to apply link prediction in this domain. SNLP was applied to the same automotive supply chain dataset that is used in our work. The authors represent the supply chain network as a homogeneous graph with one type of edge/relation, \verb|buys_from|, unlike our heterogeneous knowledge graph, which has multiple relation types.
This baseline model represents every node with a set of attributes derived from handcrafted heuristics, such as the number of existing suppliers, overlaps between both companies' product portfolios, product outsourcing associations and likelihood of having common buyers. This bears similarity with modern graph node embedding techniques, albeit their representations were not learnable. The approach treats link prediction as a binary classification problem given a pair of nodes with their respective attributes. They report an Area Under the Receiver Operating Curve (AUC) score of 0.76.
\section{Experiments and Results} \label{sec:experiments}
The task of relational link prediction is to discern whether a given edge $(u, \tau, v)$ is present in $\mathcal{E}^U - \mathcal{E}$ where $\mathcal{E}^U$ is the set of all possible edges and $\mathcal{E}$ is the set of captured edges in the knowledge graph representation. The set $\mathcal{E}^U - \mathcal{E}$ is the set of edges that have not been captured when building the supply chain knowledge graph, or are edges that will present themselves in the future (a new partnership between two companies is formed, new capabilities are invested in, etc.). The learning regime involves cross validation (70\% training, 20\% validation, and 10\% testing) by splitting the set of all actualised triples into a training, validation, and test set. Negative triplets (triplets which are not facts in the knowledge graph) are then generated by corrupting factual triplets: either the source or destination node ($u$ or $v$) is swapped, or a new relation type between the source and destination nodes is sampled uniformly. Models are assessed based on their capability to differentiate between factual and non-factual triplets. The task is therefore distilled into a binary classification task (for all relation types), and the commonly-used Area Under the Receiver Operating Curve (AUC) is used to assess model performance. To the best of our knowledge, the best reported AUC for this task is 0.76 (for the \verb|buys_from| relation in our context). As shown in~\autoref{tab:experimental_results}, our multi-relational model outperforms the existing baseline and extends the prediction task to multiple relations.
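The corruption scheme for generating negative triplets can be sketched as follows (hypothetical names, illustrative only; a production implementation would also resample whenever a corrupted triplet happens to coincide with a known fact).

```python
import random

def corrupt(triple, entities, relations, rng=random):
    """Produce a candidate negative triplet by replacing the source node,
    the destination node, or the relation type, chosen uniformly."""
    u, tau, v = triple
    choice = rng.choice(["source", "destination", "relation"])
    if choice == "source":
        return (rng.choice(entities), tau, v)
    if choice == "destination":
        return (u, tau, rng.choice(entities))
    return (u, rng.choice(relations), v)

random.seed(0)
neg = corrupt(("C1", "buys_from", "C2"),
              entities=["C1", "C2", "C3"],
              relations=["buys_from", "located_in"])
print(neg)  # a triplet differing from the fact in one position (or resampled)
```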
\begin{table}[htbp!] \begin{tabular}{@{}lccc@{}}
\toprule Relation Type & Train & Validation & Test \\ \midrule makes\_product & 1.000 & 0.996 & 0.989\\ has\_cert & 0.825 & 0.591 & 0.430 \\ complimentary\_product\_to & 0.997 & 1.000 & 1.000 \\ located\_in & 0.955 & 0.977 & 0.613 \\ has\_capability & 0.802 & 0.676 & 0.564 \\ buys\_from & \textbf{0.932} & \textbf{0.862} & \textbf{0.877} \\ capability\_produces & 0.993 & 1.000 & 1.000 \\ \bottomrule \end{tabular} \caption{AUC scores for training, validation, and test graphs. Note, test set results were not recalculated based on retraining with both training and validation edges. Results with bold face outperform existing benchmarks (SNLP). Other table entries represent novel relation types that have not been considered in prior work.} \label{tab:experimental_results} \end{table}
\section{Conclusion}
Due to the effects of globalisation, supply chains are becoming more complex, and obtaining visibility into interdependencies within the network has become a tremendous challenge. While better information extraction techniques have been developed, there remains a large gap towards obtaining a complete representation of the network. The raw data alone often has missing information due to a company's propensity to engage in secretive and competitive behavior. This information, however, is particularly important for supply chain practitioners to detect operational risks, such as unfair manufacturing practices and overreliance on certain sole suppliers. Graph representation learning, in the form of link prediction, can help impute such missing data.
Our paper proposes a novel method for learning a representation of a supply chain network as a heterogeneous graph, allowing us to predict the existence of various types of dependencies, as opposed to the incumbent state-of-the-art (SNLP) approach of predicting just one type of dependency using a homogeneous graph. Moreover, our embeddings are learnable, which may also be responsible for the improved performance relative to SNLP.
In future work we wish to perform an ablation study to isolate the contributions of the learnable embedding and heterogeneous graph components. An extension of this work will include exploration of graph learning techniques for multi-hop reasoning to detect more complex dependencies associated with paths in the graphs, as opposed to single links.
\end{document}
\begin{document}
\title{Efficient Quantum Compression for Ensembles of Identically Prepared Mixed States } \author{Yuxiang Yang, Giulio Chiribella, and Daniel Ebler} \affiliation{Department of Computer Science, The University of Hong Kong, Pokfulam Road, Hong Kong} \begin{abstract} We present one-shot compression protocols that optimally encode ensembles of $N$ identically prepared mixed states into $O(\log N)$ qubits. In contrast to the case of pure-state ensembles, we find that the number of encoding qubits drops down discontinuously as soon as a nonzero error is tolerated and the spectrum of the states is known with sufficient precision. For qubit ensembles, this feature leads to a 25\% saving of memory space. Our compression protocols can be implemented efficiently on a quantum computer.
\end{abstract} \maketitle
Storing data into the smallest possible space is of crucial importance in present-day digital technology, especially when dealing with large amounts of information and with limited memory space \cite{bigdata}. The need for saving space is even more pressing in the quantum domain, where storing data is an expensive task that requires sophisticated error correction techniques \cite{Memory1,Memory2,Memory3}.
For quantum data, Schumacher's compression \cite{Schumacher} and its extensions \cite{JozsaSchumacher,LoBound,Horodecki, Jozsa,BennetHarrowLloyd} provide optimal ways to store information in the asymptotic limit of many identical and independent uses of the same source.
However, in many situations there may be correlations from one use of the source to the next. In such situations, it is convenient to regard $N$ uses of the original source as a single use of a new source, which emits messages of length $N$. This scenario is an instance of one-shot quantum data compression \cite{datta}.
An important example of one-shot compression is when the states emitted at $N$ subsequent moments of time are perfectly correlated, resulting in codewords of the form $\rho_x^{\otimes N} $ for some density matrix $\rho_x$ and some random parameter $x$. This situation arises when the original source is an uncharacterized preparation device, which generates the same quantum state at every use. For quantum bits (qubits), Plesch and Bu\v zek \cite{SchurTransform2} observed that every ensemble of identically prepared pure states can be stored without any error into $\log (N +1)$ qubits, thus allowing for an exponential saving of memory space.
Recently, Rozema \emph{et al} \cite{SchurCompression} brought this idea into the realm of experiment, demonstrating a prototype of one-shot compression in a photonic setup.
The possibility of implementing one-shot compression in the lab opens new questions that require one to go beyond the ideal case of pure states and no errors.
First, due to the presence of noise, real-life implementations typically involve mixed states---think, e.g., of quantum information processing with NMR \cite{NMR}, where the standard is to have thermal states at a given temperature, or, more generally, of mixed-state quantum computing \cite{aharonov-kitaev,1bit,shor-jordan,datta2008quantum,lanyon-barbieri}. For mixed states, the basic principle of pure-state compression does not work: in the qubit case, for example, projecting the quantum state into the smallest subspace containing the codewords does not lead to any compression if the states $\rho_x^{\otimes N}$ are mixed, because in that case the smallest subspace is the whole Hilbert space. As a result, it is natural to search for compression protocols that work for mixed states and to ask which protocols achieve the best compression performance.
An even more important question is how the number of qubits needed to store data depends on the errors in the decoding. Tolerating a nonzero error is natural in real-life implementations, which typically suffer from noise and imperfections.
In this Letter we answer the above questions, proposing compression protocols for ensembles of identically prepared mixed states. We first analyze the zero-error scenario, showing that the storage of $N$ mixed qubits with known purity and unknown Bloch vector requires a quantum memory of at least $2 \log N$ qubits. The size of the required memory is twice that needed for pure states, but it is still exponentially smaller than the initial data size. The maximum compression is achieved by a protocol that does not require knowledge of the purity. We then investigate the more realistic case of protocols with an error tolerance. When the purity is known with sufficient precision, we find that tolerating an error, no matter how small, allows one to encode the initial data into only $3/2 \, \log N$ qubits, plus a small correction independent of $N$. Remarkably, the discontinuity in the error parameter takes place as soon as the prior knowledge of the purity is more precise than the knowledge that could be gained by measuring the $N$ input qubits. The existence of a discontinuity is a striking deviation from the pure-state case, for which we prove that there is no significant advantage in introducing an error tolerance. Furthermore, we show that our compression protocol can be implemented efficiently and that the compression rate is optimal under the requirements that the encoding be rotationally covariant and the decoding preserve the magnitude of the total angular momentum. These assumptions are relevant in physical situations where the mixed states are used as indicators of spatial directions \cite{demkowicz,bagan} and the decoding operations are limited by conservation laws \cite{reference, marvian-spekkens1,marvian-spekkens2,spekkens-marvian-WAY, ahmadi,marvian-spekkensNat}.
All our results can be generalized to quantum systems of arbitrary finite dimension, where we quantify how the presence of degeneracy in the spectrum affects the compression rates.
Let us start from the qubit case, assuming $N$ to be even for the sake of concreteness. We denote by $\map{E}: \spc{H}^{\otimes N}\to\spc{H}_{\rm enc}$ ($\map{D}: \spc{H}_{\rm enc}\to \spc{H}^{\otimes N})$ the encoding (decoding) channel, where $\spc H$ is the Hilbert space of a single qubit and $\spc H_{\rm enc}$ is the Hilbert space of the encoding system. For an ensemble of identically prepared qubit states $\{ \rho^{\otimes N}_x \, , p_x \}$ the average error of the compression protocol is \begin{align}\label{error}
e_N= \sum_{x} \, p_x \frac{ \left\|\rho^{\otimes N}_x-\map{D}\circ\map{E}\left(\rho_x^{\otimes N}\right)\right\|} 2 \, , \end{align}
$\| A \|$ denoting the trace norm.
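For concreteness, the error measure of Eq. (\ref{error}) can be evaluated numerically. The sketch below (ours, purely illustrative) computes the trace distance $\|\rho-\sigma\|/2$ for two single-qubit states with opposite Bloch vectors and purity $p = 0.75$; a perfect decoding gives distance zero.

```python
import numpy as np

def trace_distance(rho, sigma):
    """Trace distance ||rho - sigma|| / 2, with ||A|| the trace norm
    (the sum of the singular values of A)."""
    diff = rho - sigma
    return 0.5 * np.sum(np.linalg.svd(diff, compute_uv=False))

# Single-qubit states with Bloch vectors +z and -z at purity p = 0.75:
p = 0.75
rho_up = np.diag([p, 1 - p])
rho_down = np.diag([1 - p, p])
print(trace_distance(rho_up, rho_up))    # 0.0: perfect decoding
print(trace_distance(rho_up, rho_down))  # |2p - 1| = 0.5
```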
We consider ensembles where all the states $\rho_x$ have the same purity, which is assumed to be perfectly known (this assumption will be lifted later). Let us write $\rho_x$ as $\rho_{\st n}=p \, | \st n \>\< \st n| \, + (1-p) \, | - \st n \>\< -\st n |$, where $|\st n\>$ denotes the two-dimensional pure state with Bloch vector $\st n = (n_x,n_y,n_z)$ and $p \ge 1/2$ is the maximum eigenvalue.
We focus on mixed states $(p\not = 1)$, excluding the trivial case $p=1/2$, in which the ensemble consists of just one state. For $p\not \in \{ 1,1/2\}$, we call the ensemble $\{\rho_{\st n}^{\otimes N}\, , p_\st n\}$ complete if the probability distribution $p_{\st n} $ is dense in the unit sphere.
The typical example is an ensemble of mixed states with known purity and completely unknown Bloch vector. For every complete ensemble we demonstrate a sharp contrast between two types of compression: (i) zero-error compression, wherein the decoded state is equal to the initial state, and (ii) approximate compression, wherein small errors are tolerated.
In the zero-error case we have the following \begin{theo} \label{thm:zeroerror} The minimum number of logical qubits needed to compress a complete $N$-qubit ensemble is $ \lceil 2 \log (N+2)-2\rceil $. Every compression protocol that has zero error on a complete ensemble must have zero error on every ensemble of identically prepared mixed states and on every ensemble of permutationally invariant $N$-qubit states. \end{theo} Intuitively, the reason for the exponential reduction of the number of qubits is that the states in the ensemble are invariant under permutations and, therefore, they do not carry all the information that could be encoded into $N$ qubits. This observation was anticipated by Blume-Kohout \emph{et al} in the context of state discrimination and tomography \cite{estimation}. The key point of Theorem \ref{thm:zeroerror} is the optimality proof, which establishes that if a mixed-state ensemble is complete, then compressing it is as hard as compressing any arbitrary ensemble of permutationally invariant states \cite{supplemental}.
In preparation of our analysis of approximate compression, it is instructive to look into an optimal protocol achieving zero-error compression. The starting point is the Schur-Weyl duality \cite{fultonharris},
stating that there exists a basis in which the $N$-fold tensor action of the group $\grp{GL}(2)$ and the natural action of the permutation group $S_N$ are both block diagonal. In this basis, the Hilbert space of the $N$ qubits can be decomposed as \begin{align} \spc{H}^{\otimes N}\simeq\bigoplus_{j=0 }^{N/2}\left(\spc{R}_j\otimes\spc{M}_{j} \right) , \end{align}
where $j$ is the quantum number of the total angular momentum, $\spc{R}_j$ is a representation space, in which the group $\grp{GL}(2)$ acts irreducibly, and $\spc{M}_{j}$ is a multiplicity space, in which the group acts trivially.
Now, since the state $\rho_{\st n}^{\otimes N}$ is invariant under permutations of the $N$ qubits, one has \begin{align}\label{statedecomp}
\rho_{\st n}^{\otimes N} =\bigoplus_{j=0}^{N/2} \, q_{j,N} \, \left(\rho_{\st n,j}\otimes \frac{I_{m_j }}{m_j}\right), \end{align} where $q_{j,N}$ is a suitable probability distribution in $j$, $\rho_{\st n,j}$ is a quantum state on $\spc{R}_j$, $I_{m_j} $ is the identity on $\spc{M}_{j} $, and $m_j$ is the dimension of $\spc{M}_j$. From Eq. (\ref{statedecomp}) it is obvious that all information about the input state lies in the representation spaces.
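As a sanity check on the decomposition above, one can verify numerically that the representation and multiplicity dimensions account for the full $2^N$-dimensional Hilbert space. The sketch below uses the standard formulas $d_j = 2j+1$ and $m_j=\binom{N}{N/2-j}-\binom{N}{N/2-j-1}$; the expression for $m_j$ is not stated explicitly in the text and is assumed here for illustration.

```python
from math import comb

def schur_weyl_dims(N):
    """Dimensions (d_j, m_j) for each angular-momentum sector j of N qubits.

    d_j = 2j+1 is the dimension of the representation space R_j, and m_j is
    the dimension of the multiplicity space M_j (standard formula, assumed
    here for illustration). j runs over N/2, N/2-1, ..., down to 0 or 1/2.
    """
    sectors = {}
    # Work with k = N/2 - j, so j = N/2 - k with k = 0, 1, ..., floor(N/2).
    for k in range(N // 2 + 1):
        two_j = N - 2 * k          # 2j, kept integer to allow half-integer j
        d_j = two_j + 1
        m_j = comb(N, k) - (comb(N, k - 1) if k >= 1 else 0)
        sectors[two_j / 2] = (d_j, m_j)
    return sectors

N = 8
sectors = schur_weyl_dims(N)
# The blocks R_j (x) M_j must tile the whole 2^N-dimensional space:
assert sum(d * m for d, m in sectors.values()) == 2 ** N
# The zero-error encoding space, the direct sum of the R_j alone, has
# dimension (N/2 + 1)^2 for even N, as stated in the text:
assert sum(d for d, _ in sectors.values()) == (N // 2 + 1) ** 2
```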
Hence, $\rho_{\st n}^{\otimes N}$ can be encoded faithfully into the state $ \map E \left (\rho_{\st n}^{\otimes N} \right) = \bigoplus_{j} q_{j,N}\, \rho_{\st n, j}$. Such a state has an exponentially smaller support, contained in the space $\spc{H}_{N}:=\bigoplus_{j=0 }^{N/2}\spc{R}_j$, whose dimension is $\dim\spc{H}_{N}=\left( N/2+1\right)^2$. Hence, the initial state can be encoded into $\lceil\log \dim\spc{H}_{N}\rceil$ qubits---the amount declared in Theorem \ref{thm:zeroerror}. A perfect decoding is achieved by the channel
\begin{align}\label{channelD} \map{D}(\rho) : = \bigoplus_{j } \, \left( P_j \,\rho \, P_j \otimes \frac{ I_{m_j}}{m_j}\right) \, , \end{align} where $P_j$ is the projector on the representation space $\spc R_j$.
Considering that qubits are a costly resource, it is worth pointing out a slight modification of the above protocol, which uses approximately $\log N$ qubits and $\log N$ classical bits. The modified protocol consists in (i) measuring the value of $j$, thus projecting $N$ qubits into the state $\rho_{\st n,j} \otimes I_{m_j}/m_j$, (ii) discarding the multiplicity part, (iii) encoding the state $\rho_{\st n,j}$ into $\lceil \log (N+1)\rceil$ qubits, and (iv) transmitting the encoded state to the receiver, along with a classical message specifying the value of $j$. Knowing the value of $j$, the receiver can append an additional system in the state $I_{m_{j}}/{m_j}$ and embed the state $\rho_{\st n, j}\otimes I_{m_j}/m_j$ into the right subspace.
Let us now consider the more realistic case of approximate compression. Here, the number of encoding qubits drops discontinuously.
\begin{theo}\label{thm:faithful} For every allowed error rate $\epsilon >0 $ and for every complete qubit ensemble, there exists a number $N_0>0$ such that for any $N\ge N_0$ the ensemble can be encoded into $3/2 \log N+\log[4(2p-1)\sqrt{\ln(2/\epsilon)}]$ qubits with error smaller than $\epsilon$. \end{theo}
The idea is to work out the explicit form of the probability distribution $q_{j, N}$ in Eq. (\ref{statedecomp}), given by \begin{align}\nonumber q_{j,N}= \frac{2j+1}{2j_0} &\left[ B\left(N+1,p,\frac N2 + j+1 \right)\right.\nonumber \\ &\left. ~ -B\left(N+1,p,\frac N2 - j \right)\right] \label{dist} \end{align} where $B(n,p,k)$ is the binomial distribution with $n$ trials and success probability $p$, evaluated at $k$ successes, and $j_0 = (p-1/2)(N+1)$. For large $N$, the distribution $q_{j,N}$ is approximately the product of a linear function with the normal distribution of variance $(N+1)p(1-p)$ centered around $j_0$. In order to compress, we get rid of the tails: for every $\epsilon >0 $, we select a set $\set S_\epsilon : = \left\{ j_0 - \lfloor \sqrt{\ln(2/\epsilon)N} \rfloor , \dots, j_0 + \lfloor \sqrt{\ln(2/\epsilon)N} \rfloor \right\}$ and we compress the state $\rho_{\st n}^{\otimes N}$ into the encoding space $\spc{H}_{\rm enc}=\bigoplus_{j\in \set S_\epsilon }\spc{R}_j$, by applying the quantum channel \begin{align}\label{channelE} \map{E}(\rho):= \bigoplus_{j \in \set S_\epsilon} \, \operatorname{Tr}_{ \spc M_j} \, \left [ \, \Pi_j \, \rho \, \Pi_j \, \right] + \sum_{j\not \in\set S_\epsilon } \,\operatorname{Tr}\left[ \Pi_j \, \rho\right] \, \rho_0\, , \end{align} where $\Pi_j$ is the projector on $\spc R_j \otimes \spc M_j$, $\operatorname{Tr}_{\spc M_j}$ is the partial trace over $ \spc M_j$, and $\rho_0$ is a fixed state with support inside $\spc{H}_{\rm enc}$.
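The distribution of Eq. (\ref{dist}) can be evaluated directly. The sketch below, with $B(n,p,k)=\binom nk p^k(1-p)^{n-k}$ as our reading of the definition in the text, checks that for even $N$ the weights $q_{j,N}$ are non-negative and sum to one.

```python
from math import comb

def B(n, p, k):
    """Binomial distribution with n trials and success probability p, at k."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def q(j, N, p):
    """Probability q_{j,N} of the total-angular-momentum value j, Eq. (dist),
    for even N (so that j is an integer)."""
    j0 = (p - 0.5) * (N + 1)
    return (2 * j + 1) / (2 * j0) * (
        B(N + 1, p, N // 2 + j + 1) - B(N + 1, p, N // 2 - j))

N, p = 20, 0.6
dist = [q(j, N, p) for j in range(N // 2 + 1)]   # j = 0, 1, ..., N/2
assert all(x >= 0 for x in dist)                 # valid probabilities for p > 1/2
assert abs(sum(dist) - 1) < 1e-12                # the distribution is normalized
```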
The encoding space has dimension
\begin{align*} \dim\spc{H}_{\rm enc} & = \sum_{j\in\set S_\epsilon} \, (2j +1) \le \, (2j_0 +1) \left(2 \sqrt{ N \ln\frac{2}{\epsilon}}+ 1\right) \, , \end{align*} growing as $N^{3/2}$. The initial state can be recovered, up to error $\epsilon$, by a suitable decoding channel \cite{supplemental}.
Theorem \ref{thm:faithful} guarantees that $N$ identical copies of a mixed state with known purity can be stored, up to error $\epsilon$, into $3/2\log N$ qubits, plus an overhead that is doubly logarithmic in $1/\epsilon$. This result is good news for future implementations, because the overhead grows slowly with the required accuracy. For example, when $p=0.6$,
$N= 20$ identically prepared qubits with Bloch vectors pointing in arbitrary direction can be compressed into 8 qubits with an error smaller than $1\%$.
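The figures quoted above can be reproduced from the qubit count of Theorem \ref{thm:faithful}; the sketch below is our reading of that count, taking logarithms in base 2 and rounding up to an integer number of qubits.

```python
from math import ceil, log, log2, sqrt

def encoding_qubits(N, p, eps):
    """Qubit count of Theorem 2: 3/2 log N + log[4(2p-1) sqrt(ln(2/eps))],
    with logs in base 2, rounded up to an integer number of qubits."""
    return ceil(1.5 * log2(N) + log2(4 * (2 * p - 1) * sqrt(log(2 / eps))))

# The example in the text: N = 20 copies with p = 0.6 and error below 1%
# fit into 8 qubits.
assert encoding_qubits(20, 0.6, 0.01) == 8
```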
In addition to the fully quantum version of the protocol, one can construct a hybrid version where the initial state is stored partly into qubits and partly into classical bits, as discussed in the zero-error case. In the hybrid version, the discontinuity between zero-error and approximate compression pertains to the number of classical bits needed to communicate the value of $j$, which decreases from $\log N$ to $1/2 \log N$ as soon as a nonzero error is tolerated.
Our result highlights a radical difference between mixed and pure states: for mixed states, every finite error tolerance $\epsilon > 0$ allows one to reduce the size of the compression space from the original $2 \, \log N$ qubits to $3/2 \, \log N$ qubits. Such a discontinuity does not take place for pure states: for pure states with completely unknown Bloch vector, every compression protocol
with tolerance $\epsilon$ requires at least $(1- 2 \epsilon)\, \log N$ qubits \cite{supplemental}.
It is worth commenting on the importance of knowing the purity. Our approximate protocol requires the purity to be perfectly known, so that one can encode only the subspaces where the quantum number $j$ is in a strip around the most likely value. If the purity is only partially known, the protocol can be adapted by broadening the size of the strip, i.\,e., by changing the set $\set{S}_\epsilon$. Specifically, suppose that the eigenvalues of $\rho_{\st n}$ are known up to an error $\Delta p = O( N^{-\gamma})$, with $\gamma \ge 1/2$. In this case, the number of encoding qubits can be reduced to $3/2\, \log N+g(\epsilon,\gamma)$ where $g$ is a function depending on $\epsilon$ and $\gamma$, but not on $N$. Hence, the discontinuity between zero-error and approximate compression persists. However, the situation is different if the eigenvalues are known with less precision: if the error in the specification of the eigenvalues scales as $ N^{-\gamma}$ with $\gamma < 1/2$, then the number of encoding qubits becomes $(2-\gamma)\, \log N$. Quite intriguingly, the separation between the two regimes takes place exactly when the knowledge of the eigenvalues becomes more precise than the knowledge that could be extracted through spectrum estimation \cite{KeylWerner}. Note that our protocol can be combined for free with spectrum estimation, which only requires measuring the value of $j$.
However, the \emph{a posteriori} knowledge of the measurement outcome cannot replace the \emph{a priori} knowledge of the spectrum: indeed, finding the outcome $j$ leads to estimating the maximum eigenvalue as $\hat p =1/2 + j/(N+1)$ \cite{KeylWerner} and then to encoding the state $\rho_{\st n,j}$ into $\lceil \log(2j+1)\rceil$ qubits. In order to decode, the receiver needs a classical message communicating the value of $j$, which requires $\lceil \log (N/2+1) \rceil$ bits in the one-shot scenario. This leads to the same resource scaling as in the zero-error case, i.~e., approximately $\log N$ qubits to send the encoded state and $\log N$ bits to communicate $j$.
The protocol of Theorem \ref{thm:faithful} is optimal within the physically relevant class of protocols constrained by covariance under rotations and by the preservation of the magnitude of the angular momentum. More precisely, we have the following \cite{supplemental}. \begin{theo}\label{thmopt} Every compression protocol that encodes a complete $N$-qubit ensemble into $( 3/2 - \delta) \, \log N$ qubits with covariant encoding and a decoding that preserves the magnitude of the total angular momentum will necessarily have error $e \ge1/2$ in the asymptotic limit. \end{theo}
\begin{figure}
\caption{{\bf A quantum circuit for encoding.} The Schur transform turns the initial $N$ qubits together with $K= O(\log N)$ ancillary qubits into three registers: the index register $\spc J$, the representation register $\spc{R}$, and the multiplicity register $\spc{M}$. The multiplicity register is discarded. The index register is encoded into $N/2+1$ qubits by the position embedding $V_{N/2+1}$. The qubits in positions outside $\set S_\epsilon$ are discarded and the remaining qubits are reencoded into $\lceil \log |\set S_\epsilon| \rceil$ qubits. }
\label{fig:encoding}
\end{figure}
\begin{figure}
\caption{{\bf A quantum circuit for decoding.} The first operation is the position embedding $V_{| \set S_\epsilon|}$, which produces $|\set S_\epsilon|$ output qubits. The $j$th of these qubits controls the generation of a maximally mixed state of rank $m_j$ (achieved by the controlled operation $G_j$, represented explicitly in the blue inset for $m_j=4$). The third step is the initialization of $L= N/2+1 - |\set S_\epsilon|$ qubits which are put in positions corresponding to values of $j$ outside $\set S_\epsilon$. After a total of $N/2 +1$ qubits are in place, the inverse of the position embedding is performed, followed by the inverse of the Schur transform. The output of the circuit is a state on $N$ qubits and $ K = O(\log N)$ ancillas, which are finally discarded. }
\label{fig:decoding}
\end{figure}
Let us now discuss the complexity of the compression protocol. To operate on the input state we use the Schur transform \cite{harrowthesis,SchurTransform,SchurTransform2}, which transforms the initial $N$ qubits together with $O(\log N)$ ancillary qubits into three registers: (i) the index register, where the value of $j$ is stored into the state of $\log (N/2+1)$ qubits, (ii) the representation register, which uses $\log (N+1)$ qubits to encode the representation spaces, and (iii) the multiplicity register, where the multiplicity spaces are encoded into $O(N)$ qubits (see Fig. \ref{fig:encoding}).
Since the implementation of the Schur transform in a quantum circuit is approximate, we focus on approximate compression, so that the Schur transform error can be absorbed into the compression error. Let us analyze first the encoding. The first step is the approximate Schur transform, whose complexity is ${\rm poly}( N, \log 1/\epsilon')$, $\epsilon'$ being the approximation error \cite{harrowthesis,SchurTransform}. We set $\epsilon'$ to be vanishing exponentially in $N$, resulting in a complexity ${\rm poly} (N)$ for the implementation of the Schur transform.
After the Schur transform has been performed, the encoding circuit embeds the index register into an exponentially larger register of $N/2+1$ qubits, transforming the state $|j\>$ into the state where the $j$th qubit is set to $|1\>$ and the rest of the qubits are set to $|0\>$ \cite{SchurTransform2}.
We refer to this transformation as position embedding and denote it by $ V_{D}$, where $D$ is the dimension of the register that is being embedded (in this case $D=N/2+1$). The point of position embedding is to physically encode the value of $j$ in a form that makes it easy to check whether or not $j$ belongs to the set $\set S_\epsilon$. In fact, such a check can be equivalently implemented on a classical computer. After this step, the circuit discards the qubits in positions outside the set $\set S_\epsilon$ and transforms the remaining qubits into $\lceil \log |\set S_\epsilon| \rceil$ qubits, by applying $V^{-1}_{ |\set S_{\epsilon}|}$.
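On basis states, the position embedding acts as a classical unary encoding. The toy functions below (an illustrative classical analogue, not the actual circuit; the retained set is a hypothetical example) show how membership in $\set S_\epsilon$ reduces to inspecting individual qubit positions.

```python
def position_embed(j, D):
    """Position embedding V_D on a basis state: |j> -> unary string of D bits
    with a 1 at position j and 0 elsewhere (toy classical analogue)."""
    assert 0 <= j < D
    return tuple(1 if i == j else 0 for i in range(D))

def position_unembed(bits):
    """Inverse on the unary subspace: recover j from the position of the 1."""
    assert sum(bits) == 1
    return bits.index(1)

N = 10
D = N // 2 + 1                      # the index register holds j = 0, ..., N/2
S_eps = {2, 3, 4}                   # hypothetical retained set of j values
for j in range(D):
    bits = position_embed(j, D)
    # Keeping only the qubits at positions in S_eps discards exactly the
    # branches with j outside S_eps:
    kept = tuple(bits[i] for i in sorted(S_eps))
    assert (sum(kept) == 1) == (j in S_eps)
    assert position_unembed(bits) == j
```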
Now, the complexity of position embedding is upper bounded by $D (\log D)^2$ \cite{SchurTransform2}. Since $j$ ranges from 0 to $N/2$, the total complexity of the position embedding and of its inverse scales as $ N ( \log N )^2$.
From the above reasoning, it is clear that the bottleneck of the encoding is the implementation of the Schur transform, which leads to an overall complexity of ${\rm poly}(N)$ for the encoding circuit. The situation is similar for the decoding, which also uses position embedding to perform operations depending on $j$ (see Fig. \ref{fig:decoding}). The only new parts are the initialization of $N/2 + 1 - |\set S_\epsilon|$ qubits in the index register and the preparation of maximally mixed states of rank $m_j$ in the multiplicity register, which can be approximately generated with exponential precision in $O(N^2)$ operations \cite{supplemental}. Summing over the values of $j$ in $\set S_\epsilon$, we then obtain a number of operations upper bounded by $O( N^2) |\set S_\epsilon| = O( N^{5/2})$.
From the above count it is clear that the overall complexity is polynomial in $N$. In addition to the computational complexity, it is worth discussing the size of the ancillary systems needed in our compression protocol. Since the multiplicity register is discarded, the Schur transform in our protocol needs only an ancilla of $O(\log N)$ qubits \cite{estimation}. The position embeddings require ancillas of size $O(N)$, but, as mentioned earlier, they can be implemented on a classical computer. Hence, the total number of qubits that need to be kept coherent throughout our protocol scales only as $O(\log N)$.
Our compression protocol, presented for qubits, can be generalized to quantum systems of arbitrary dimension $d$. In this case, an ensemble of $N$ identically prepared rank-$r$ states with known spectrum can be compressed with error less than $\epsilon$ into approximately $ \left(2dr - r^2-1\right)/2 \, \log N$ qubits. In addition, one can take advantage of the presence of degeneracies and further reduce the number of qubits: every time the same eigenvalue appears in the spectrum the number of qubits is reduced by at least $1/2\log N$ (see \cite{supplemental} for the exact value). Again, the protocol can be implemented efficiently and is optimal under suitable symmetry assumptions \cite{supplemental}.
In this Letter we showed how to efficiently store ensembles of identically prepared quantum systems into an exponentially smaller memory space.
For mixed states we discovered that, whenever a nonzero error is allowed, the size of the memory is cut down in a discontinuous way, provided that the spectrum of the state is known with sufficient precision. Intriguingly, the dropoff in the memory size takes place as soon as the prior information about the eigenvalues is more than the information that could be extracted by a measurement on the input copies.
Our approximate compression protocols can be implemented efficiently on a quantum computer.
{\emph{Acknowledgments.} We thank M. Ozols and the referees of this Letter for a number of comments that stimulated substantial improvements of the original manuscript. This work is supported by the National Natural Science Foundation of China through Grant No. 11450110096, by the Foundational Questions Institute (Grant No. FQXi-RFP3-1325), by the 1000 Youth Fellowship Program of China, and by the HKU Seed Funding for Basic Research. }
\appendix
\begin{widetext}
\section{PROOF OF THEOREM 1}\label{app:opt_zeroerror} Here we show the optimality of the zero-error protocol presented in the main text. Specifically, we show that no zero-error protocol can compress a complete ensemble of mixed states into fewer than $ \lceil 2 \log (N+2)-2\rceil$ qubits.
\subsection{The zero error condition}
Zero-error compression requires that the average error vanish: \begin{align}\label{error} e_N= \sum_{\st n} \, p_{\st n}
\frac{ \left\|\rho^{\otimes N}_{\st n}-\map{D}\circ\map{E}\left(\rho_{\st n}^{\otimes N}\right)\right\|}{2} = 0 \, \, . \end{align}
This condition immediately implies $\|\map D\circ\map E(\rho_{\st n}^{\otimes N})-\rho_{\st n}^{\otimes N}\|=0$ for every $\st n$ outside a set of zero measure. Since the Hermitian operator $\map D\circ\map E(\rho_{\st n}^{\otimes N})-\rho_{\st n}^{\otimes N}$ has vanishing norm, it must be the null operator.
Hence, the channel $\map C : = \map D \circ\map E$ must fix $\rho_{\st n}^{\otimes N}$, namely \begin{align}\label{above} \map C(\rho_{\st n}^{\otimes N}) = \rho_{\st n}^{\otimes N} \end{align} for every $\st n$ except for a set of zero measure. Since the distribution $p_{\st n}$ is dense in the unit sphere, the above condition holds for a dense set of points on the Bloch sphere. As a result, for every Bloch vector $\st{n}$ there exists a sequence $\left\{\st n_k\right\}$ of Bloch vectors satisfying Eq. (\ref{above}) such that $\lim_{k\to \infty} \st n_k =\st n $ and $$\lim_{k\to\infty}\rho_{\st{n}_k}^{\otimes N}=\rho_{\st{n}}^{\otimes N} \, .$$ Consequently, we have \begin{align*}
\left\|\map D\circ\map E(\rho_{\st n}^{\otimes N})-\rho_{\st n}^{\otimes N}\right\|&=\left\|\map D\circ\map E\left(\lim_{k\to\infty}\rho_{{\st n}_k}^{\otimes N}\right)-\lim_{k\to\infty}\rho_{{\st n}_k}^{\otimes N}\right\|\\
&=\left\|\lim_{k\to\infty}\left[\map D\circ\map E(\rho_{{\st n}_k}^{\otimes N})-\rho_{{\st n}_k}^{\otimes N}\right]\right\|\\ &=0, \end{align*} which implies that $\map C(\rho_{\st n}^{\otimes N}) = \rho_{\st n}^{\otimes N} $ for every vector $\st{n}$ on the Bloch sphere.
\subsection{The algebra associated to the fixed points of a channel}
Here we develop a technique that generates fixed points of a given channel starting from an initial set of fixed points. Our technique is based on a result by Blume-Kohout \emph{et al.} \cite{IPS} that characterizes the fixed points of a quantum channel. Specifically, Theorem 5 of Ref. \cite{IPS} guarantees that one can find a decomposition of the Hilbert space as $ \spc H= \bigoplus_{k} \left( \spc{L}_k\otimes \spc{M}_k \right)$, with the property that the fixed points of a given channel acting on $\spc{H}$ are all the operators of the form \begin{align}\label{formfix} A=\bigoplus_k \left( A^{(k)}\otimes \omega^{(k)}_0 \right)\, , \end{align} where $A^{(k)}$ is an arbitrary matrix on $\spc{L}_k$ and $\omega^{(k)}_0$ is a fixed non-negative matrix on $\spc{M}_k$.
Using this characterization, we obtain the following result.
\begin{prop}\label{prop:fixalgebra}
Let ${\mathsf{Fix}} (\map C)$ be the set of fixed points of the channel $\map C$, let $\{ A_x \}_{x\in\set X} \subset {\mathsf{Fix}} (\map C)$ be a subset of non-negative fixed points, and let $\mu (\operatorname{d} x)$ be a non-negative measure on $\set X$. Then, the set of operators
\[ \spc A = E^{-1/2} \, {\mathsf{Fix}} (\map C) \, E^{-1/2}
\, , \qquad E : = \int \mu( \operatorname{d} x) \, \, A_x \, , \]
is a matrix $\ast$-algebra (i.~e.~a matrix algebra closed under adjoint). Moreover, one has $ E^{1/2} \spc A E^{1/2} \subseteq {\mathsf{Fix}}(\map C)$.
\end{prop}
[Notation: for a non-invertible operator $E$, we define $E^{-1}$ as the inverse on the support of $E$.]
{\bf Proof. } ~ Writing each operator $A_x$ in the form (\ref{formfix}), we obtain \[ E = \bigoplus_k \left( E^{(k)} \otimes \omega_0^{(k)}\right) \, , \qquad E^{(k)} = \int \mu(\operatorname{d} x) A_x^{(k)} \, . \] Hence, for a generic fixed point $A\in {\mathsf{Fix}}(\map C)$, decomposed as in Eq. (\ref{formfix}), we have \[ E^{-1/2} A E^{-1/2} = \bigoplus_k \left[ \left( E^{(k)} \right)^{-1/2} A^{(k)} \left( E^{(k)} \right)^{-1/2} \otimes P_k \right] \, , \] where $P_k$ is the projector on the support of $ \omega_0^{(k)}$. Since each $A^{(k)}$ is a generic operator on $\spc L_k$, we have \[ E^{-1/2} \, {\mathsf{Fix}} (\map C) \, E^{-1/2} = \bigoplus_k \left[ {\mathsf B} ( \spc S_k ) \otimes P_k \right] \, , \] where ${\mathsf B} ( \spc S_k )$ denotes the algebra of all linear operators on the subspace $\spc S_k = \mathsf{Supp} \left[ E^{(k)}\right]$. Hence, $\spc A = E^{-1/2} \, {\mathsf{Fix}} (\map C) \, E^{-1/2}$ is an algebra and is closed under adjoint.
On the other hand, we have \[ E^{1/2} \, \spc A \, E^{1/2} = \bigoplus_k \left[ {\mathsf B} ( \spc S_k ) \otimes \omega^{(k)}_0 \right] \, , \] meaning that every operator in $ E^{1/2} \, \spc A \, E^{1/2}$ is of the form (\ref{formfix})---that is, it is a fixed point. \qed
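Proposition \ref{prop:fixalgebra} can be illustrated on a toy example: a pinching channel on $\mathbb C^3$ with blocks of sizes 1 and 2, whose fixed points are exactly the block-diagonal matrices. The sketch below (plain Python, with a diagonal $E$ chosen for simplicity) checks numerically that $E^{-1/2}\,{\mathsf{Fix}}(\map C)\,E^{-1/2}$ is closed under products.

```python
def matmul(A, B):
    """Naive matrix product for small square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def conj(d_half, A):
    """Compute diag(d_half) @ A @ diag(d_half) for a diagonal conjugation."""
    n = len(A)
    return [[d_half[i] * A[i][j] * d_half[j] for j in range(n)] for i in range(n)]

def is_block_diag(A, blocks=((0,), (1, 2))):
    """Fixed points of the pinching channel C(rho) = sum_k P_k rho P_k
    are exactly the matrices that are block diagonal w.r.t. these blocks."""
    idx = {i: b for b, blk in enumerate(blocks) for i in blk}
    return all(A[i][j] == 0 for i in range(3) for j in range(3) if idx[i] != idx[j])

# Two block-diagonal fixed points and a positive block-diagonal average E:
B1 = [[2, 0, 0], [0, 1, 3], [0, 2, 1]]
B2 = [[1, 0, 0], [0, 4, 1], [0, 1, 2]]
E_inv_half = [1.0, 0.5, 0.25]          # E = diag(1, 4, 16); E^{-1/2} entrywise

A1 = conj(E_inv_half, B1)
A2 = conj(E_inv_half, B2)
product = matmul(A1, A2)

# Closure: A1 A2 = E^{-1/2} (B1 E^{-1} B2) E^{-1/2}, and B1 E^{-1} B2 is again
# block diagonal, i.e. again a fixed point of the pinching channel.
E_inv = [x * x for x in E_inv_half]
D = [[E_inv[i] if i == j else 0 for j in range(3)] for i in range(3)]
B3 = matmul(matmul(B1, D), B2)
assert is_block_diag(B3)
assert all(abs(product[i][j] - conj(E_inv_half, B3)[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```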
\subsection{The minimal algebra required by the zero error condition}
Let us apply Proposition \ref{prop:fixalgebra} to the channel $\map C = \map D\circ \map E$, resulting from the concatenation of the encoding and the decoding in a generic zero-error protocol. By the zero-error condition, all the states $\rho_{\st n}^{\otimes N}$ are fixed points. The states can be decomposed as
\begin{align}\label{statesagain} \rho_{\st n}^{\otimes N} = \bigoplus_{j=0}^{N/2} \, q_{j,N} \, \left(\rho_{\st n,j}\otimes \frac{I_{m_j }}{m_j}\right) \, .
\end{align}
\emph{A priori}, this block decomposition could be completely unrelated with the block decomposition of Eq. (\ref{formfix}). Proving that the two decompositions coincide will be the main part of our argument.
Choosing the measure $\mu(\operatorname{d} x)$ in Proposition \ref{prop:fixalgebra} to be the invariant measure over $\st n$, the average operator $E$ is given by \[ E = \bigoplus_{j=0}^{N/2} \, q_{j,N} \, \left( \frac{I_j}{d_j}\otimes \frac{I_{m_j }}{m_j}\right) \, . \] Hence, the algebra $\spc A$ defined in Proposition \ref{prop:fixalgebra} must contain all the operators of the form \[ E^{-1/2} \, \rho_{\st n}^{\otimes N} \, E^{-1/2} = \bigoplus_{j=0}^{N/2} \, \, \left( d_j \, \rho_{\st n,j}\otimes I_{m_j } \right) \, , \] for every unit vector $\st n$. Hence, $\spc A$ must contain the smallest algebra $\spc A_{\min}$ generated by the above operators. We will now characterize this algebra:
\begin{prop}\label{prop:top}
If the states in Eq. (\ref{statesagain}) are not maximally mixed, $\spc A_{\min}$ contains the matrix algebra of all operators on the symmetric subspace, corresponding to $j= N/2$ in the decomposition (\ref{statesagain}). \end{prop}
{\bf Proof. } ~ Let us express the state $\rho = p |0\>\<0| + (1-p) |1\>\<1|$ as $\rho = e^{-\beta Z}/\operatorname{Tr}[ e^{-\beta Z}]$, $Z = |0\>\<0| - |1\>\<1|$ for a suitable $\beta\ge 0$. By definition, for every unitary $U \in \grp{SU}(2)$, the algebra $\spc A_{\min}$ contains the operator \begin{align} \nonumber A_U &: = E^{-1/2} (U\rho U^{\dag})^{\otimes N} E^{-1/2}\\ \label{au} & = \bigoplus_{j=0}^{N/2} \, \frac{d_j}{\operatorname{Tr} \left[ e^{-\beta J^{(j)}_z}\right]} \, \left( U^{(j)} \, e^{-\beta J^{(j)}_z}U^{(j)\dag} \otimes I_{m_j } \right) \, ,
\qquad \qquad J_z^{(j)} = \sum_{m=-j}^j \, m \, |j,m\>\<j,m|
\end{align}
where $U^{(j)}$ denotes the $(2j+1)$-dimensional irreducible representation of $\grp{SU} (2)$. Moreover, since the algebra $\spc A_{\min}$ is closed under linear combinations, $\spc A_{\min}$ must contain the operator \[ X_l = \int \operatorname{d} U \, \chi_U^{(l)} ~ A_U \, , \] where $\chi_U^{(l)}$ are the characters of the irreducible representations of $\grp{SU}(2)$ given by $ \chi_U^{(l)} = \operatorname{Tr} [ U^{(l)}]$. Let us set $l= N$. In this case, the orthogonality of $\grp{SU}(2)$ matrix elements eliminates all terms in the block decomposition of $A_U$, except for the term with $j=N/2$. Notice that in this case the multiplicity subspace is trivial. Hence, one has \begin{align*} X_N & = \int \operatorname{d} U \, \chi_U^{(N)} \, d_{N/2} \, U^{(N/2)} \, \rho_{N/2} \, U^{(N/2)\dag} \qquad \qquad \rho_{N/2} = \frac{ e^{-\beta J_z^{(N/2)}}}{ \operatorname{Tr} \left[e^{-\beta J_z^{(N/2)}}\right]} \, . \end{align*}
The matrix elements of $X_N$ can be computed explicitly as \begin{align*}
\left\< \frac N2, n \right | \, X_N \, \left |\frac N2, n'\right\> & = \frac{d_{N/2} }{ \operatorname{Tr} \left[e^{-\beta J^{(N/2)}_z}\right]}\, \int \operatorname{d} U \, \chi_U^{(N)} \, \left [ \sum_{m=-N/2}^{N/2} \, e^{-\beta m} \, \left\< \frac N2, n \right | U^{(N/2)} \left |\frac N2,m\right\> \left\< \frac N2 , m\right| U^{(N/2)\dag} \left |\frac N2,n'\right\> \right] \\ &
= \delta_{n,n'} \, (-1)^{n} \, \frac{ d_{N/2} \, \left\< \frac N2, n, \frac N2, -n' | N, 0\right\>}{ d_N \operatorname{Tr} \left[e^{-\beta J^{(N/2)}_z}\right]}\, \left [ \sum_{m=-N/2}^{N/2} \, (-e^{-\beta })^m \, \overline{\left. \left\< \frac N2,m, \frac N2, -m\right| N,0\right\>} \right] \, \\
&= \delta_{n,n'} \, (-1)^{n} \, \frac{ d_{N/2} \, \left\< \frac N2, n, \frac N2, -n' | N, 0\right\>}{ d_N \operatorname{Tr} \left[e^{-\beta J^{(N/2)}_z}\right]}\, \left [ \sum_{m=-N/2}^{N/2} \, \, \frac{(N!)^2(-e^{-\beta } )^m }{(N/2-m)!(N/2+m)!\sqrt{(2N)!}} \right] \,\\
&=\delta_{n,n'} \, (-1)^{n+N/2} \, \frac{ d_{N/2}(N!)e^{\beta N/2}(1-e^{-\beta})^N \, \left\< \frac N2, n, \frac N2, -n' | N, 0\right\>}{ d_N \sqrt{(2N)!}\operatorname{Tr} \left[e^{-\beta J^{(N/2)}_z}\right]}\, ,
\end{align*}
$ \<j_1,m_1, j_2,m_2| J, M\>$ denoting the Clebsch-Gordan coefficient. The Clebsch-Gordan coefficient in the above expression is nonzero if and only if $n=n'$, and for $n=n'$ it is nonzero for every $n$. As a consequence, the operator $X_N$ is diagonal with nonzero diagonal entries, and therefore has full support.
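The Clebsch-Gordan coefficients entering $X_N$ admit, up to sign, the closed form used in the derivation above, $\<\frac N2,m,\frac N2,-m|N,0\> = (N!)^2/[(N/2-m)!\,(N/2+m)!\,\sqrt{(2N)!}\,]$. The exact-arithmetic sketch below checks that, for even $N$, these coefficients are all nonzero and correctly normalized.

```python
from fractions import Fraction
from math import factorial

def cg_squared(N, m):
    """Exact square of <N/2, m, N/2, -m | N, 0> for even N (integer j = N/2),
    using the closed form from the derivation in the text."""
    num = Fraction(factorial(N),
                   factorial(N // 2 - m) * factorial(N // 2 + m)) * factorial(N)
    return num * num / factorial(2 * N)

N = 12
squares = [cg_squared(N, m) for m in range(-N // 2, N // 2 + 1)]
assert all(s > 0 for s in squares)   # every diagonal entry of X_N is nonzero
assert sum(squares) == 1             # exact normalization of the coefficients
```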
Now, since $\spc A_{\min}$ is an algebra, it must contain $X_N$ as well as the whole Abelian algebra generated by it. In particular, it must contain the projector on the support of $X_N$---which is nothing but $ P_{N/2}$, the projector on the symmetric subspace. Moreover, it must contain all the operators of the form \[ A_{U,N/2} = P_{N/2} A_U P_{N/2} \propto U^{(N/2)} \, e^{-\beta J^{(N/2)}_z} \, U^{(N/2)\dag} \qquad \forall\,U \in \grp{SU}(2) \,. \] Finally, for $\beta \not = 0$, it is easy to see that the smallest algebra $\spc A_{\min, N/2}$ containing the above operators is the algebra $\mathsf{B} (\spc R_{N/2})$. This can be easily seen by von Neumann's double commutant theorem: if an operator $B$ commutes with the non-degenerate Hermitian operator $A_{U,N/2}$ for every $U$, then $B$ must be proportional to the identity. Hence, the double commutant of $\spc A_{\min, N/2}$---equal to $\spc A_{\min, N/2}$ itself---is the whole $\mathsf{B} (\spc R_{N/2})$. In conclusion, we have the inclusion $\mathsf{B} (\spc R_{N/2}) \subseteq \spc A_{\min, N/2} \subseteq \spc A_{\min}$. \qed
\begin{prop}\label{prop:todos} If the states in Eq. (\ref{statesagain}) are neither pure nor maximally mixed, then $\spc A_{\min}$ is the full algebra generated by the $N$-fold tensor representation of $\grp{GL}(2)$, namely
\[ \spc A_{\min} = \bigoplus_{j=0}^{N/2} \, \left[ \mathsf{B} ( \spc R_j) \otimes I_{m_j }\right] \, ,\] $\mathsf{B} ( \spc R_j) $ denoting the algebra of all linear operators on the representation space $ \spc R_j$. \end{prop} {\bf Proof. } ~ We prove that $\spc A_{\min}$ contains the algebra $ \mathsf{B} ( \spc R_j) \otimes I_{m_j }$ for every $j$. The proof is by induction, with $j$ starting from $N/2$ and going down to $0$. For $j=N/2$ we know that $\spc A_{\min}$ contains the algebra $\mathsf{B} (\spc R_{N/2})$ of all operators with support in the symmetric subspace. Let us assume that $\spc A_{\min}$ contains all the algebras $\mathsf{B} ( \spc R_j) \otimes I_{m_j }$ with $j \ge j_*+1$ and show that it must necessarily contain also the algebra $\mathsf{B} ( \spc R_{j_*}) \otimes I_{m_{j_*} }$. By construction, we know that $\spc A_{\min}$ contains all the operators $A_U$ of the form \begin{align} \nonumber A_U = \bigoplus_{j=0}^{N/2} \, \frac{d_j}{\operatorname{Tr} \left[ e^{-\beta J^{(j)}_z}\right]} \, \left( U^{(j)} \, e^{-\beta J^{(j)}_z}U^{(j)\dag} \otimes I_{m_j } \right) \, ,
\qquad \qquad J_z^{(j)} = \sum_{m=-j}^j \, m \, |j,m\>\<j,m| \, .
\end{align} Since the states in Eq. (\ref{statesagain}) are not pure, all the blocks in the sum are non-zero.
Moreover, the induction hypothesis implies that $\spc A_{\min}$ should also contain the operators $A_U'$ of the form
\[ A_U ' = \bigoplus_{j=0}^{j_*} \, \frac{ d_j}{\operatorname{Tr} \left[ e^{-\beta J^{(j)}_z}\right]} \, \left(U^{(j) } \, e^{-\beta J^{(j)}_z} U^{(j) \dag} \otimes I_{m_j } \right) \, , \qquad U \in \grp{SU}(2) \, . \]
Now, we can repeat the argument used in the proof of Proposition \ref{prop:top}: by linearity, $\spc A_{\min}$ must contain the operator \begin{align*} X_{2j_*} &= \int \operatorname{d} U \, \chi_U^{(2j_*)} ~ A'_U \\ & = \frac{ d_{j_*}}{\operatorname{Tr} \left[ e^{-\beta J^{(j_*)}_z}\right]} \int \operatorname{d} U \, \chi_U^{(2j_*)} ~ \, \left( U^{(j_*)} \, e^{-\beta J^{(j_*)}_z} \, U^{(j_*) \dag} \otimes I_{m_{j_*} } \right) \, . \end{align*} Explicit calculation (same as in Proposition \ref{prop:top}) shows that $X_{2j_*}$ has full rank on the subspace $\spc R_{j_*} \otimes \spc M_{j_*}$. Hence, the projector on the support of $X_{2j_*}$ is $ P_{j_*} = I_{j_*} \otimes I_{m_{j_*}}$. Since $\spc A_{\min}$ should contain this projector, it must also contain all operators of the form \begin{align*} A'_{U,j_*} & = P_{j_*} A'_U P_{j_*} \\ & \propto U^{(j_*)} e^{-\beta J^{(j_*)}_z } U^{(j_*) \dag} \otimes I_{m_{j_*}} \, , \qquad U\in \grp{SU}(2) \, . \end{align*} Again, using von Neumann's double commutant theorem, it is easy to show that the smallest algebra containing all the above operators is $ \mathsf{ B} (\spc R_{j_*}) \otimes I_{m_{j_*}}$. In conclusion, we have proved that $\spc A_{\min}$ must contain $ \mathsf{ B} (\spc R_{j_*}) \otimes I_{m_{j_*}}$. By induction, this proves the inclusion \[ \spc A_{\min} \supseteq \bigoplus_{j=0}^{N/2} \, \left[ \mathsf{B} ( \spc R_j) \otimes I_{m_j }\right] \, .\] On the other hand, the definition of $\spc A_{\min}$ implies the opposite inclusion. Hence, one must have the equality.
\qed
\subsection{Zero-error compression of a complete ensemble implies zero-error compression of every ensemble of permutationally invariant states}
Propositions \ref{prop:fixalgebra} and \ref{prop:todos} imply the following \begin{cor}\label{cor:done} If the states (\ref{statesagain}) are neither pure nor maximally mixed, every channel $\map C$ preserving them must preserve all permutationally invariant states. \end{cor}
{\bf Proof. } ~ By Propositions \ref{prop:fixalgebra} and \ref{prop:todos}, the channel $\map C$ must satisfy \[ {\mathsf{Fix}} (\map C) \supseteq \spc A_{\min} = \bigoplus_{j=0}^{N/2} \, \left[ \mathsf B ( \spc R_j) \otimes I_{m_j} \right] \, ,\] meaning that the full algebra generated by the tensor representation of $\grp{GL}(2)$ is contained in the set of fixed points. \qed
We are now in a position to prove Theorem 1 in the main text:
{\bf Proof of Theorem 1.} Suppose that a compression protocol has zero error on a complete ensemble of mixed states. Then, Corollary \ref{cor:done} implies that the protocol must have zero error on all permutationally invariant states. In particular, the protocol must be able to transmit without error the following ensemble of states with mutually orthogonal supports
\[S:= \left. \left\{ \rho_{j,m} = |j,m\>\< j,m| \otimes \frac{I_{m_j}}{m_j} , p_{j,m}=\frac{1}{D}\, \right |\, j = 0, \dots, N/2 \, , m = -j, \dots, j, \,D:=\sum_j d_j \right\} \, .\]
A lower bound on the dimension $d_{\rm enc}$ of the encoding space $\spc H_{\rm enc}$ is then obtained by considering the amount of classical information carried by $S$. In detail, the lower bound can be calculated using the monotonicity of Holevo's chi quantity under quantum data processing. Holevo's chi quantity of $S$ \cite{holevo} is defined as follows \begin{align*} \chi\left(S\right)&:=H\left(\sum_{j,m} p_{j,m}\rho_{j,m}\right)-\sum_{j,m}p_{j,m}H\left(\rho_{j,m}\right) \end{align*} with $H(\rho)$ being the von Neumann entropy of the state $\rho$. Since the chi quantity is non-increasing under quantum evolutions, in the zero-error scenario we have \begin{align}\label{chieq1} \chi\left(S\right)=\chi\left(S_{\rm enc}\right) \end{align} where $S_{\rm enc}$ is the encoded ensemble $S_{\rm enc}:=\{\map E(\rho_{j,m}), p_{j,m}\}$: monotonicity under the encoding gives $\chi(S_{\rm enc}) \le \chi(S)$, while monotonicity under the decoding, which reproduces $S$ exactly, gives the reverse inequality. On the other hand, the dimension of the encoding subspace is lower bounded by the chi quantity \cite{Horodecki} \begin{align}\label{chibound1} \log d_{\rm enc}\ge \chi\left(S_{\rm enc}\right). \end{align}
The chi quantity for the ensemble $S$ can be computed as $\chi\left(S\right)=\log D$ (the multiplicity entropies $\log m_j$ cancel between the two terms of $\chi$). Combining this equality with Eqs. (\ref{chieq1}) and (\ref{chibound1}) we get $$d_{\rm enc}\ge D=\left(\frac N2+1\right)^2,$$ which concludes the optimality proof. The protocol presented in the main text saturates this bound. \qed
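As a quick numerical cross-check of the counting above (an illustration only, not part of the proof), the following snippet verifies that $D = \sum_j d_j$ with $d_j = 2j+1$ equals $(N/2+1)^2$; for even $N$ the quantum number $j$ runs over the integers $0,1,\dots,N/2$:

```python
# Sanity check of the counting D = sum_j d_j, with d_j = 2j + 1 and
# j = 0, 1, ..., N/2 (even N, so that j takes integer values only).
def D(N):
    assert N % 2 == 0, "this check uses integer j, i.e. even N"
    return sum(2 * j + 1 for j in range(N // 2 + 1))

for N in (2, 4, 10, 100):
    # matches the closed form (N/2 + 1)^2 stated in the text
    assert D(N) == (N // 2 + 1) ** 2

print(D(100))  # total number of mutually orthogonal states for N = 100
```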
\section{PROOF OF THEOREM 2}
As stated in the main text, we assume $p>\frac12$, because for $p=1/2$ the ensemble is trivial, consisting only of the maximally mixed state.
We first notice that the error of the compression protocol is upper bounded as \begin{align}
e_{N}& = \frac 12 \, \left \| \rho_{\st n}^{\otimes N} - \map D \circ \map E \left ( \rho_{\st n}^{\otimes N} \right) \right\| \, , \qquad \forall \st n \in \mathbb S^2 \nonumber \\
& = \frac 12 \, \left \| \sum_{j \not \in \set S_\epsilon } \, q_{j,N} \, \left[ \rho_{\st n , j} \otimes \frac{ I_{m_j}}{m_j} - \map D( \rho_0 ) \right] \right \|\nonumber\\
&\le \sum_{j\not\in\set S_\epsilon}q_{j,N} \, \label{errorbound}, \end{align} the last step following from the triangle inequality and from the fact that the trace norm of the difference of two states is upper bounded by 2. Note that the upper bound is independent of $\st n$, meaning that the protocol works equally well for all states with the same spectrum (or equivalently, for all states with the same purity).
At this point, it is enough to prove that the upper bound vanishes in the large $N$ limit. To this purpose, we use the expression for $q_{j,N}$ [Eq. (5) in the main text] and observe that one has \begin{align}\label{F2} 1-e_N\ge&\sum_{j\in\set{S}_\epsilon}\frac{2(2j+1)}{j_0}B\left(N+1,p,\frac{N}{2}+j+1\right)-\sum_{j\in\set{S}_\epsilon}\frac{2(2j+1)}{j_0}B\left(N+1,p,\frac{N}{2}-j\right) \end{align} where $j_0=(2p-1)(N+1)/2$. The second summand in the r.h.s. of Eq. (\ref{F2}) is negligible in the large $N$ limit: precisely, it can be bounded as \begin{eqnarray} \sum_{j\in\set{S}_\epsilon}\frac{2(2j+1)}{j_0}B\left(N+1,p,\frac{N}{2}-j\right)&\le&\sum_{j=0}^{\frac{N}{2}}\frac{2(2j+1)}{j_0}B\left(N+1,p,\frac{N}{2}-j\right)\nonumber\\ &\le&\frac{1}{2p-1}\sum_{j=0}^{\frac{N}{2}}B\left(N+1,p,\frac{N}{2}-j\right)\nonumber\\ &\le&\frac{1}{2p-1}\exp\left[-\frac{2(2p-1)^2N^2}{N+1}\right]\label{F4} \end{eqnarray} having used Hoeffding's inequality in the last step. Hence, this term goes to zero exponentially fast with $N$.
Now, recall that we chose $\set S_\epsilon$ to be the interval \begin{align}\label{Sepsilon} \set{S}_\epsilon=\left[j_0-1/2-\sqrt{N\ln(2/\epsilon)},j_0-1/2+\sqrt{N\ln(2/\epsilon)}\right]. \end{align}
Setting $j_0-j-1/2= x$, we then obtain \begin{align*} e_N&\le1-\sum_{x=-\sqrt{N\ln(2/\epsilon)}}^{\sqrt{N\ln(2/\epsilon)}}\left(1-\frac{x}{j_0}\right)B\left(N+1,p,p(N+1)-x\right)+\frac{1}{2p-1}\exp\left[-\frac{2(2p-1)^2N^2}{N+1}\right]\\ &=1-\sum_{x=-\sqrt{N\ln(2/\epsilon)}}^{\sqrt{N\ln(2/\epsilon)}}B\left(N+1,p,p(N+1)-x\right)+\frac{1}{2p-1}\exp\left[-\frac{2(2p-1)^2N^2}{N+1}\right]\\ &\le 2\exp\left[\frac{2N}{N+1}\ln\frac{\epsilon}{2}\right]+\frac{1}{2p-1}\exp\left[-\frac{2(2p-1)^2N^2}{N+1}\right]\\ &\le \epsilon^{\frac{2N}{N+1}}+\frac{1}{2p-1}\exp\left[-\frac{2(2p-1)^2N^2}{N+1}\right] \end{align*} In the second-to-last step we used Hoeffding's inequality. Now it can be seen that the right hand side of the bound vanishes exponentially fast with $N$, and we can always find an $N_0$ such that $e_N\le\epsilon^{3/2}<\epsilon$ for any $N>N_0$. The dimension of the encoded system is now \begin{eqnarray*} d_{\rm enc}&=&\sum_{j\in\set{S}_\epsilon}(2j+1)\\ &\approx&2(2p-1)\sqrt{N\ln(2/\epsilon)}(N+1) \end{eqnarray*} An upper bound on the number of required qubits is then given by \begin{align*} \log d_{\rm enc}&\approx\log\left[2(2p-1)N\sqrt{N\ln\frac{2}{\epsilon}}\right]+\log\left(1+\frac1N\right)\\ &\le\frac32 \log N+\log\left[2(2p-1)\sqrt{\ln\frac2\epsilon}\right]+1 \end{align*} \qed
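The tail bound and the $\frac32\log N$ scaling can be checked numerically. The sketch below (an illustration, not part of the proof) takes $B(n,p,k)$ to be the binomial probability mass function, a reading consistent with Eq. (5): with this choice $q_{j,N}$ sums to one exactly, e.g. $q_{0,2}=p(1-p)$ for two copies. The values of $N$, $p$, $\epsilon$ are arbitrary choices for the demonstration:

```python
from math import comb, log, log2, sqrt

def binom_pmf(n, p, k):
    """B(n, p, k): binomial distribution with n trials and success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def q(j, N, p):
    """q_{j,N} as in Eq. (5) of the main text (even N, integer j)."""
    j0 = (p - 0.5) * (N + 1)
    return (2*j + 1) / (2*j0) * (binom_pmf(N + 1, p, N//2 + j + 1)
                                 - binom_pmf(N + 1, p, N//2 - j))

N, p, eps = 1000, 0.7, 0.01          # illustrative parameters
j0 = (p - 0.5) * (N + 1)
probs = [q(j, N, p) for j in range(N//2 + 1)]
assert abs(sum(probs) - 1) < 1e-9    # q_{j,N} is a normalized distribution

# The window S_eps of Eq. (6)-type form, centered at j0 - 1/2
width = sqrt(N * log(2 / eps))
in_window = [abs(j - (j0 - 0.5)) <= width for j in range(N//2 + 1)]
tail = sum(pj for pj, ok in zip(probs, in_window) if not ok)
assert tail < eps                    # e_N <= tail mass < epsilon

d_enc = sum(2*j + 1 for j, ok in enumerate(in_window) if ok)
print(f"tail = {tail:.2e}, log2(d_enc) = {log2(d_enc):.2f}, "
      f"1.5*log2(N) = {1.5 * log2(N):.2f}")
```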
\section{THE PURE STATE CASE: NO DISCONTINUOUS GAP BETWEEN ZERO-ERROR AND APPROXIMATE COMPRESSION} Here we prove that the type of discontinuity highlighted by our Theorems 1 and 2 is specific to mixed states.
Consider the pure state ensemble
$\left\{ \left( |\st n\>\<\st n| \right)^{\otimes N}\, , \operatorname{d}^2 \st n\right\}$, where $ |\st n\>$ is the pure qubit state with Bloch vector $\st n$ and $\operatorname{d}^2\st n$ is the invariant measure on the Bloch sphere.
Suppose that the state $\left( |\st n\>\<\st n|\right)^{\otimes N}$ is encoded into a state $\rho_{\st n,{\rm enc}}$ on a Hilbert space of dimension $d_{\rm enc}$. Assuming that the compression error is bounded by $\epsilon$, an argument by Horodecki \cite{Horodecki} gives a lower bound on $d_{\rm enc}$.
The argument relies on the following lemma, which is a consequence of the Alicki-Fannes inequality
\begin{lem}[\cite{alicki}] Let $\{ \rho_x\, , p_x\}$ be an ensemble of states and let $\{ \rho_{x,{\rm enc}} \, , p_x\}$ be the ensemble of the encoded states. If the compression protocol has error bounded by $\epsilon$, then the following inequality holds \begin{align}\label{chi bound}
\left|\chi\left(\left\{ \rho_x \, ,p_x \right\}\right) -\chi\left(\left\{ \rho_{x,{\rm enc}} \, , p_x\right\} \right) \right| \le 2\left[\epsilon\log d_{\rm in}+\eta(\epsilon)\right], \end{align} where $d_{\rm in}$ is the rank of the average state $\rho = \sum_x \, p_x \rho_x $ and $\eta(x)=-x\ln x$. \end{lem} In our case, $d_{\rm in}$ is the dimension of the symmetric subspace, namely \begin{align}\label{din} d_{\rm in} = d_{\frac{N}{2}}=N+1 \,. \end{align} Moreover, we have \begin{eqnarray}\label{chi}
\chi\left(\left\{ \left( |\st n\>\<\st n|\right)^{\otimes N} , \, \operatorname{d}^2 \st n \right\} \right)=H\left(I_{\frac{N}{2}}/d_{\frac{N}{2}}\right)=\log (N+1) \end{eqnarray} and, by Holevo's bound \cite{holevo}, \begin{align}\label{hb} \chi\left(\left\{ \rho_{\st n, {\rm enc}} \, , \operatorname{d}^2 \st n \right\} \right) \le \log d_{\rm enc} \, . \end{align} Hence, combining Eqs. (\ref{chi bound}), (\ref{din}), (\ref{chi}), and (\ref{hb}) we obtain the bound \begin{align*} \log d_{\rm enc}&\ge (1-2\epsilon)\log(N+1)-2\eta(\epsilon). \end{align*} Now, note that the r.h.s. is continuous in $\epsilon$ and tends to $\log (N+1)$ when $\epsilon$ tends to zero. The value $\log (N+1)$ is exactly the minimum number of qubits needed to encode a generic state in the symmetric subspace with zero error. Hence, as $\epsilon$ tends to zero, the number of qubits needed for approximate compression tends to the number of qubits needed for zero-error compression.
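The continuity claim is easy to visualize numerically. The sketch below evaluates the lower bound $(1-2\epsilon)\log(N+1)-2\eta(\epsilon)$ for decreasing $\epsilon$; following the lemma's definition we take $\eta$ with the natural logarithm and count qubits in base 2 (the choice of $N$ is an arbitrary illustration):

```python
from math import log, log2

def eta(x):
    """eta(x) = -x ln x, as defined in the lemma (natural logarithm)."""
    return -x * log(x) if x > 0 else 0.0

def qubit_lower_bound(N, eps):
    """(1 - 2 eps) log(N+1) - 2 eta(eps): qubits needed for eps-error compression."""
    return (1 - 2 * eps) * log2(N + 1) - 2 * eta(eps)

N = 1023  # zero-error compression of the symmetric subspace needs log2(N+1) = 10 qubits
for eps in (0.1, 0.01, 0.001, 1e-6):
    print(f"eps = {eps:g}: at least {qubit_lower_bound(N, eps):.4f} qubits")

# As eps -> 0 the bound approaches the zero-error requirement log2(N + 1):
assert abs(qubit_lower_bound(N, 1e-12) - log2(N + 1)) < 1e-9
```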
\section{PROOF OF THEOREM 3}\label{app:qubit-coverse}
Here we prove the optimality of our protocol among all compression protocols where the encoding is covariant and the decoding preserves the magnitude of the total angular momentum. Precisely, we assume that
\begin{enumerate}
\item the encoding space $\spc H_{\rm enc}$ supports a unitary representation of the group $\grp {SU} (2)$, denoted by $ \{ V_g ~|~ g\in\grp {SU} (2) \}$
\item the encoding channel satisfies the covariance condition
\begin{align}\label{covariance}
\map E \circ \map U_g = \map V_g \circ \map E \, , \qquad \forall g\in\grp {SU} (2) \, ,
\end{align}
where $\map U_g$ and $\map V_g$ are the unitary channels defined by $\map U_g (\cdot): = U_g \cdot U_g^\dag $ and $\map V_g (\cdot) := V_g \cdot V_g^\dag$.
\item the decoding channel $\map D$ preserves the magnitude of the total angular momentum, in the sense that, for every input state $\rho$, one has
\begin{align}\label{conservation}
\operatorname{Tr} \left [ \st K^2 \, \map D( \rho) \right] = \operatorname{Tr} \left[ \st J^2 \, \rho \right ] \, ,
\end{align} where $\st K= ( K_x,K_y,K_z)$ are the generators of the representation $\{ V_g \, , g \in \grp {SU} (2)\}$ and $ \st J = ( J_x, J_y, J_z)$ are the generators of the representation $\{U_g^{\otimes N} , g\in\grp {SU} (2)\}$.
\end{enumerate}
Under these conditions, we can prove the optimality of the protocol presented in Theorem 3 of the main text.
{\bf Proof of Theorem 3.} For the purpose of this proof, it is convenient to parametrize the mixed states $\rho_{\st n}$ as $\rho_g = U_g \rho U_g^\dag$, where $\rho$ is a fixed state and $g$ is a generic element of $\grp { SU} (2)$. Let us decompose the encoding space as \begin{align}\label{iso} \spc H_{\rm enc} = \bigoplus_j \, \left( \spc R_j \otimes \widetilde {\spc M}_j \right) \, , \end{align} where $j$ is the quantum number of the angular momentum, $\spc R_j$ is the corresponding representation space, and $\widetilde{\spc M}_j$ is a suitable multiplicity space. By definition, one has \begin{align} \nonumber \spc H_{\rm enc} & \supseteq \mathsf{Span} \left\{ \mathsf{Supp} \left [ \map E \left(\rho_g^{\otimes N}\right) \right] , g\in\grp {SU} (2) \right\} \\ \label{inclusion} & = \mathsf{Span} \left[ \mathsf{Supp} \left ( \Omega \right) \right] \, , \qquad \Omega : = \int \operatorname{d} g \, \map E \left(\rho_g^{\otimes N}\right) \, . \end{align} Since $\map E$ is covariant, the state $\Omega$ satisfies the relation $V_g \Omega V_g^\dag = \Omega\, , \forall g\in\grp {SU} (2)$. Hence, $\Omega$ can be written in the block diagonal form \[\Omega = \bigoplus_{j\in \set S} \left ( \frac { I_j}{d_j} \otimes \omega_j \right)\, , \] where $\omega_j$ is a suitable state on the multiplicity space and $\set S$ is a suitable set of values of the angular momentum number. Combining the above decomposition with Eq. (\ref{inclusion}), we obtain the bound \begin{align}\label{dencbound} d_{\rm enc} \ge \mathsf{rank} \, \Omega \ge \sum_{ j\in \set S} \, d_j \,. \end{align}
On the other hand, since the decoding preserves the magnitude of the angular momentum, one has
\begin{align*} \operatorname{Tr} [ \Pi_j \, \map D \circ\map E \left(\rho_g^{\otimes N}\right)] & = \operatorname{Tr} [ \widetilde \Pi_j \map E \left(\rho_g^{\otimes N}\right) ] \, , \qquad \forall j = 0, \dots, N/2 \, , \forall g \in\grp{SU} (2) \, , \end{align*} where $\Pi_j$ is the projector on $\spc R_j\otimes \spc M_j$ while $\widetilde \Pi_j$ is the projector on $\spc R_j\otimes \widetilde {\spc M}_j$. Hence, we have \begin{align} \sum_{j\in\set S} \operatorname{Tr}[ \Pi_j \map D \circ \map E\left( \rho_g^{\otimes N}\right)] = 1 \, , \qquad \forall g \in\grp{SU} (2) \, , \end{align} meaning that all the output states $\map D \circ \map E\left( \rho_g^{\otimes N}\right)$ are contained in the subspace $\spc H_N : = \bigoplus_{j\in\set S} \left(\spc R_j \otimes \spc M_j\right)$. Hence, we have \begin{align}
\nonumber e_N&= \frac 12 \left \| \rho_{g}^{\otimes N} - \ \map D \circ \map E \left(\rho_{g}^{\otimes N} \right) \right\| \qquad \forall g \in \grp {SU} (2)\\
\nonumber & \ge \frac 12 \left \| P_N \left[ \rho_{g}^{\otimes N} - \ \map D \circ \map E \left(\rho_{g}^{\otimes N} \right) \right] P_N \right \| + \frac 12 \left \| (I^{\otimes N} - P_N) \left[ \rho_{g}^{\otimes N} - \ \map D \circ \map E \left(\rho_{g}^{\otimes N} \right) \right] (I^{\otimes N} - P_N) \right\| \\
\nonumber &= \frac 12 \left \| (I^{\otimes N} - P_N) \rho_{g}^{\otimes N} (I^{\otimes N} - P_N) \right\| \\ &\ge \sum_{j\not\in\set{S}}\frac{q_{j,N}}{2} \label{approxt} \end{align} where $P_N$ is the projector on $\spc{H}_N$. Now we prove that any protocol with $d_{\rm enc}=O\left(N^{3/2-\delta}\right)$, $\delta >0$, will have a non-vanishing error. Recall from the main text that the probability distribution $q_{j,N}$ can be expressed as \begin{align}\label{dist} q_{j,N}= \frac{2j+1}{2j_0} &\left[ B\left(N+1,p,\frac N 2 + j+1 \right) -B\left(N+1,p,\frac N2 - j \right)\right] \end{align} where $B(n,p,k)$ is the binomial distribution with $n$ trials and success probability $p$, evaluated at $k$ successes, and $$j_0 = (p-1/2)(N+1) \, .$$ Combining Eq. (\ref{approxt}) with Eq. (\ref{dist}), we have \begin{align*} e_N&\ge\frac12-\frac12\sum_{j\in\set{S}}\frac{2j+1}{2j_0}B\left(N+1,p,\frac{N}{2}+j+1\right). \end{align*}
We split the set $\set S$ into two subsets $\set S_1$ and $\set S_2$, defined as \begin{align*} \set S_1&= \set S \cap \left [ j_0-\frac{\sqrt{cN}+1}2 , j_0+\frac{\sqrt{cN}+1} 2\right ] \\ \set S_2&=\set S\setminus \set S_1 \end{align*} where $c$ is an arbitrary constant.
The error is then bounded as \begin{align}\label{e-bound1} e_N&\ge\frac12 \left( 1 - s_1 -s_2\right) \, \qquad s_k : = \sum_{j\in\set{S}_k}\frac{2j+1}{2j_0}B\left(N+1,p,\frac{N}{2}+j+1\right) \, , ~ k = 1, 2 \, . \end{align} We now bound $s_1$ and $s_2$. Let us start from $s_1$: by definition, we have \begin{align} \nonumber s_1 &\le \frac{\max_{j\in\set S_1} ( 2j+1) }{2j_0}\, \sum_{j\in\set{S}_1} B\left(N+1,p,\frac{N}{2}+j+1\right) \nonumber\\ \nonumber &= O(1) \, \sum_{j\in\set{S}_1} B\left(N+1,p,\frac{N}{2}+j+1\right) \\
& \le O(1) \, |\set S_1| B\left(N+1,p,\frac{N}{2}+j_0+1\right) \nonumber\\
\label{baab} & = O\left( N^{-1/2}\right)\, |\set S_1|\, . \end{align} In turn, $|\set S_1|$ can be bounded using the relation \begin{align}
\nonumber |\set S_1| \, \left( \min_{j\in \set S_1} \, 2j+1 \right) & \le \sum_{j\in\set S_1} (2j+1) \\ \nonumber & \le d_{\rm enc} \\ & = O\left ( N^{3/2 -\delta}\right) \, , \end{align}
which implies $ |\set S_1 | \le O( N^{1/2-\delta})$. Inserting this relation into Eq. (\ref{baab}), we finally obtain \begin{align} \label{e-bound2} s_1 \le O\left( N^{-\delta}\right) \, . \end{align}
Regarding $s_2$, we have the bound \begin{align} s_2 &\le \frac{N+1}{j_0}\left[\sum_{j\le j_0 - \frac { \sqrt {cN} +1} 2 } \, B\left(N+1,p,\frac{N}{2}+j+1\right)\right] \nonumber\\ &=\frac{1}{p-1/2} \left[ \sum_{j\le j_0 - \frac { \sqrt {c N } + 1}2 } \, B\left(N+1,p,\frac{N}{2}+j+1\right) \right] \nonumber\\ &\le \frac{e^{-c/2}}{p-1/2} \, ,\label{e-bound3} \end{align} the last inequality coming from Hoeffding's bound.
Finally, combining the inequalities (\ref{e-bound1}), (\ref{e-bound2}), and (\ref{e-bound3}), we obtain the lower bound \begin{align*} e_N \ge \frac12 \left[ 1 - O\left(N^{-\delta}\right)-\frac{e^{-c/2}}{p-1/2} \right] \, . \end{align*} Since the constant $c$ is arbitrary, the bound becomes $e_N \ge 1/2 - O\left(N^{-\delta}\right)$. \qed
\section{UPPER BOUND ON THE COMPLEXITY OF GENERATING APPROXIMATE MAXIMALLY MIXED STATES}
The decoding requires the preparation of maximally mixed states to be placed in the multiplicity register. For a given value of $j$, this is accomplished by generating a maximally entangled state of rank $m_j$. In the following we present a three-step protocol for this purpose. \begin{enumerate}
\item Choose an integer $n=O(N)$ such that $m_j \in(2^{n-1}, 2^n]$. Prepare $n$ maximally entangled qubit pairs. The resulting state is $\rho=[|\Phi^+ \rangle \langle \Phi^+ |]^{\otimes n}$, with $|\Phi^+ \rangle = (|00\rangle + |11\rangle )/\sqrt{2}$, and lies in a space of dimension $2^{2n}$. \item Perform the measurement in the computational basis on one qubit of each entangled pair. The measurement outcomes of the individual qubit measurements are saved in a sequence of $n$ binary digits, which we denote by $\underline y$. \item Compare the string $\underline y$ with the binary expression of $m_j$. If $\underline y$, as a number, is larger than $ m_j$, the protocol fails and we have to restart by preparing again $n$ maximally entangled qubit pairs. Otherwise, we keep the remaining qubits, which, on average, will be in a maximally mixed state of rank $m_j$. \end{enumerate} The last step can be seen by writing down the quantum operation $\mathcal{C}_{\rm yes}$ corresponding to the successful outcomes of the projective measurement, given by \begin{align}
\mathcal{C}_{\rm yes}(\sigma) = \sum_{y\leq m_j} |\underline{y}\rangle \langle \underline{y} | \sigma |\underline{y}\rangle \langle \underline{y} | \nonumber \ . \end{align}
The protocol is successful in more than half of the cases. For that reason, the probability of failure vanishes exponentially in the number of repetitions $l$ as $p_{\rm no} \leq 2^{-l}$. To ensure that the error vanishes fast enough with the number of state copies $N$, we repeat the protocol up to $N$ times. The complexity of the protocol then consists of preparing the qubit pairs, which takes $O(N)$ steps, and of comparing the $n$-digit binary strings on a classical computer, which also takes $O(N)$ steps. With $N$ repetitions, the overall complexity is $O(N^2)$. It is safe to run the protocol $N$ times to ensure an exponentially vanishing error, because the complexity of the decoding is still dominated by the Schur transform.
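The measurement statistics of steps 2-3 can be simulated classically, since the computational-basis outcome on the measured halves is a uniformly random $n$-bit string. In the sketch below (an illustration with hypothetical parameter values, and with the convention that the accepted strings are $y = 0,\dots,m_j-1$, so that exactly $m_j$ outcomes survive and the success probability is $m_j/2^n > 1/2$) the accepted outcomes come out uniformly distributed, i.e. the kept qubits are maximally mixed over $m_j$ basis states:

```python
import random

def sample_kept_outcome(m_j, rng, max_rounds=64):
    """Classical simulation of steps 2-3: measuring one half of each Bell pair
    yields a uniformly random n-bit string y; we accept y in {0, ..., m_j - 1},
    leaving the unmeasured halves maximally mixed over m_j basis states."""
    n = max((m_j - 1).bit_length(), 1)   # smallest n with m_j <= 2^n
    for _ in range(max_rounds):
        y = rng.randrange(2 ** n)        # uniform outcome on the measured halves
        if y < m_j:                      # success probability m_j / 2^n > 1/2
            return y
    raise RuntimeError("all rounds failed; probability <= 2**-max_rounds")

rng = random.Random(0)
m_j = 13                                 # any rank in (2^(n-1), 2^n]; here n = 4
trials = 20000
counts = [0] * m_j
for _ in range(trials):
    counts[sample_kept_outcome(m_j, rng)] += 1
# Each kept outcome occurs with frequency ~ 1/m_j: the state is maximally mixed.
assert all(abs(c / trials - 1 / m_j) < 0.015 for c in counts)
```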
\section{ZERO-ERROR COMPRESSION FOR QUANTUM SYSTEMS OF DIMENSION $d>2$} In this and the following sections, we generalize our results to quantum systems of arbitrary finite dimension $d<\infty$.
\subsection{Upper bound on the number of encoding qubits}
\begin{theo}\label{thm2} In dimension $d$, every ensemble of $N$ identically prepared mixed states of rank $r$ can be encoded without error into less than $\left(2dr-r^2+r-2\right)/2 \, \log (N+d-1)$ qubits. \end{theo}
The proof is based on the Schur-Weyl duality, which allows one to decompose the $N$-copy Hilbert space as \[ \spc H^{\otimes N} \simeq \bigoplus_{ \lambda \in \spc Y_{N,d}} \, \left( \spc R_{ \lambda}\otimes \spc M_{ \lambda} \right) \, , \] where $\spc R_{ \lambda}$ is a representation space, $\spc M_{ \lambda}$ is a multiplicity space, and the sum runs over the set $ \spc Y_{N,d}$ of all Young diagrams of $N$ boxes arranged in at most $d$ rows, parametrized as $ \lambda = (\lambda_1,\dots, \lambda_d)$, with $\lambda_1 \ge \lambda_2\ge\dots\ge \lambda_d$, $\sum_{i=1}^d \lambda_i = N$. We use the notations $$d_\lambda=\dim \spc R_\lambda$$ and $$m_\lambda=\dim \spc M_\lambda.$$
Relative to this decomposition, every state of the form $\rho^{\otimes N}$ where $\rho$ has rank $r$ can be cast into the form $$\rho^{\otimes N} =\bigoplus_{\lambda\in\mathcal{{Y}}_{N,r}}q_{\lambda,N} \left(\rho_{\lambda}\otimes\frac{I_{m_{\lambda}}}{m_\lambda}\right) \, ,$$ where $\rho_{\lambda}$ is a quantum state on $\spc R_\lambda$, $I_{m_\lambda}$ is the identity on $\spc M_\lambda$, and $q_{\lambda,N}$ is a suitable probability distribution. Note that only the Young diagrams with at most $r$ rows are present here (for this fact, see e.g. \cite{ExactDistribution}).
The proof of Theorem \ref{thm2} makes use of the following lemmas:
\begin{lem}\label{bound-R} For every $\lambda\in\mathcal{{Y}}_{N,r}$, one has
$d_\lambda\le (N+d-1)^{(2dr-r^2-r)/2}$. \end{lem} {\bf Proof. } The dimension can be expressed as \begin{align}\label{dimensione} d_\lambda = \frac{\prod_{1\le i<j\le d}(\lambda_i-\lambda_j-i+j)}{\prod_{k=1}^{d-1}k!} \,, \end{align} cf. Eq. (III.10) of \cite{unitary-groups}. Since $\lambda_i=0$ for $i>r$, we have the following chain of (in)equalities \begin{align*} d_\lambda&= \frac{\prod_{1\le i<j\le r}(\lambda_i-\lambda_j-i+j) \, \cdot \, \prod_{1\le i\le r<j\le d}(\lambda_i -i+j) \, \cdot \, \prod_{r<i<j\le d}(j-i)}{\prod_{k=1}^{d-1}k!} \\ &\le \frac{(N+r-1)^{r\choose 2} \, \cdot \, (N+d-1)^{r(d-r)} \, \cdot \, \prod_{ l=1}^{d-r-1} l! }{\prod_{k=1}^{d-1}k!} \\ &\le \frac{(N+d-1)^{(2dr-r^2-r)/2}}{ \prod_{ k=d-r}^{d-1} k! } \, . \end{align*} \qed
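The dimension formula of Eq. (\ref{dimensione}) is easy to check numerically. The snippet below (illustrative only) implements it and verifies two standard special cases:

```python
from math import factorial, prod

def dim_R(lam, d):
    """Dimension of the SU(d) representation space R_lambda via the formula
    prod_{1<=i<j<=d}(lam_i - lam_j - i + j) / prod_{k=1}^{d-1} k!
    (rows are 0-indexed below, so -i + j becomes j - i)."""
    lam = list(lam) + [0] * (d - len(lam))          # pad empty rows with zeros
    num = prod(lam[i] - lam[j] + (j - i)
               for i in range(d) for j in range(i + 1, d))
    return num // prod(factorial(k) for k in range(1, d))

# d = 2: the one-row diagram (N, 0) is the spin-N/2 representation, dimension N + 1
assert dim_R((10, 0), 2) == 11
# d = 3: the diagram (2, 1, 0) is the adjoint representation of SU(3), dimension 8
assert dim_R((2, 1, 0), 3) == 8
print("dimension formula checks passed")
```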
\begin{lem}\label{denc} The total dimension of all the representation spaces corresponding to Young diagrams with no more than $r$ rows is upper bounded as \[ \sum_{\lambda\in\mathcal{{Y}}_{N,r}} \, d_\lambda < (N+d-1)^{\frac{2dr-r^2+ r-2}2} \, . \] \end{lem}
{\bf Proof. } By Lemma \ref{bound-R} one has \begin{align*}
\sum_{\lambda\in\mathcal{{Y}}_{N,r}} \, d_\lambda & \le (N+d-1)^{\frac{2dr-r^2-r}2} \, \left| \mathcal{{Y}}_{N,r} \right| \\
& < (N+d-1)^{\frac{2dr-r^2+r-2}2} \, , \end{align*}
having used the bound $ \left| \mathcal{{Y}}_{N,r} \right| \le {N+r-1\choose r-1} $ \cite{GW98} and the elementary estimate $ {N+r-1\choose r-1} \le (N+1)^{r-1} \le (N+ d-1)^{r-1} $. \qed
{\bf Proof of Theorem \ref{thm2}.} A zero-error compression protocol is given by the following encoding and decoding channels: \begin{align*} \map E (\rho) & = \bigoplus_{\lambda \in {\spc Y}_{N,r}} \operatorname{Tr}_{\spc M_\lambda} [\Pi_\lambda \rho \Pi_\lambda] \\ \map D(\rho') & = \bigoplus_{\lambda \in {\spc Y}_{N,r}} P_\lambda \rho' P_\lambda \otimes \frac { I_{m_\lambda}}{m_{\lambda}} \, , \end{align*} where $\Pi_\lambda$ is the projector on $\spc R_\lambda \otimes \spc M_\lambda$ and $ P_\lambda$ is the projector on $ \spc R_\lambda$. The encoding space is $\spc H_{\rm enc} = \bigoplus_{\lambda \in {\spc Y}_{N,r}} \spc R_\lambda$ and has dimension $d_{\rm enc}=\sum_{\lambda\in {\spc{Y}}_{N,r}}d_{\lambda}$, which we bound as \begin{align*} d_{\rm enc} &=\sum_{\lambda\in {\spc{Y}}_{N,r}}d_{\lambda} \\
&< (N+d-1)^{\frac{2dr-r^2+r-2 } 2} \, ,
\end{align*} having used Lemma \ref{denc}. \qed
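For small parameters, the bound of Lemma \ref{denc} can be verified by brute force: enumerate all Young diagrams in $\mathcal Y_{N,r}$, sum the dimensions $d_\lambda$, and compare with $(N+d-1)^{(2dr-r^2+r-2)/2}$. The sketch below does exactly this (the parameter triples are arbitrary illustrative choices):

```python
from math import factorial, prod

def dim_R(lam, d):
    """Dimension of R_lambda via the formula of Eq. (III.10) (0-indexed rows)."""
    lam = list(lam) + [0] * (d - len(lam))
    num = prod(lam[i] - lam[j] + (j - i) for i in range(d) for j in range(i + 1, d))
    return num // prod(factorial(k) for k in range(1, d))

def young_diagrams(N, r, max_part=None):
    """All Young diagrams of N boxes with at most r rows (the set Y_{N,r})."""
    if max_part is None:
        max_part = N
    if N == 0:
        yield ()
        return
    if r == 0:
        return
    for first in range(min(N, max_part), 0, -1):
        for rest in young_diagrams(N - first, r - 1, first):
            yield (first,) + rest

for d, r, N in [(2, 2, 20), (3, 2, 20), (4, 3, 15)]:
    d_enc = sum(dim_R(lam, d) for lam in young_diagrams(N, r))
    bound = (N + d - 1) ** ((2 * d * r - r * r + r - 2) / 2)
    assert d_enc < bound, (d, r, N, d_enc, bound)
    print(f"d={d}, r={r}, N={N}: d_enc = {d_enc} < bound = {bound:.4g}")
```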
\subsection{Lower bound on the number of encoding qubits used by the zero-error protocol}
Here we give a lower bound on the dimension of the encoding space in the zero-error protocol
described in the proof of Theorem \ref{thm2}. Precisely, we have the following
\begin{lem}\label{lem:upperbounddimensiond}
The total dimension of all the representation spaces corresponding to Young diagrams with no more than $r$ rows is lower bounded as \begin{align}\label{lowerboundsum} \sum_{\lambda\in\mathcal{{Y}}_{N,r}} \, d_\lambda \ge c (r,d) \, N^{\frac{2dr-r^2+ r-2}2} \, , \end{align} where $c(r,d)$ is a suitable function of $r$ and $d$ only.
\end{lem}
{\bf Proof. }
For simplicity, we use the notation $f (N, r,d)\gtrsim g(N,r,d)$ to mean that there exists a function $c (r,d)$ such that $ f( N,r,d) \ge c(r,d) g(N,r,d) $ for every $N$. If $f(N,r,d) \gtrsim g(N,r,d)$ and $g(N,r,d) \gtrsim f(N,r,d)$, then we write $ f(N,r,d) \approx g(N,r,d)$.
With this notation, we have \begin{align*}
d_\lambda \gtrsim \prod_{1\le i<j\le d} \, (\lambda_i -\lambda_j) \, ,
\end{align*}
having used Eq. (\ref{dimensione}).
Consider the case when $N$ is a multiple of $r(r+1)/2$ and define $s = 2N/[r(r+1)]$. Define the subset of Young diagrams
\begin{align*}
\set S_{\rm core} = \left\{ \lambda \in \mathcal Y_{N,r} ~|~ \lambda_i \in \left[ ( r-i+1) s - \frac s {2r} , ( r-i+1) s + \frac s{2r} \right] \, , \qquad \forall i = 1,\dots, r-1 \right\}
\end{align*} For every diagram in $\set S_{\rm core}$ we have the lower bound
\begin{align}
\nonumber d_\lambda & \gtrsim \left[ \prod_{1\le i<j\le r-1} \, (\lambda_i -\lambda_j) \right] \, \left[ \prod_{1\le i\le r-1} \, (\lambda_i -\lambda_r) \right] \, \left[ \prod_{1\le i <r< j\le d} \, \lambda_i \right] \, \left[ \prod_{r< j\le d} \, \lambda_r \right] \\
\nonumber & \ge \left\{ \prod_{1\le i<j\le r} \, \left[ ( j-i)s - \frac sr \right] \right\} \, \left\{ \prod_{1\le i\le r-1} \, \left[ ( r-i) s - \frac s2\right] \right\} \, \left\{ \prod_{1\le i\le r< j\le d} \, (r-i)s \right\} \, \left\{ \prod_{r< j\le d} \, \frac s 2 \right\} \\
\nonumber & \approx s^{\frac { 2dr-r^2 -r}2} \\
\label{core} & \approx N^{\frac { 2dr-r^2 -r}2} \, .
\end{align} Now, the total dimension of the subspaces with Young diagrams in $\set S_{\rm core}$ an be lower bounded as \begin{align*}
\sum_{\lambda \in \set S_{\rm core}} d_\lambda & \gtrsim N^{\frac { 2dr-r^2 -r}2} \, | \set S_{\rm core} | \\ & = N^{\frac { 2dr-r^2 -r}2} \, \left( \frac sr\right)^{r-1} \\ &\approx N^{\frac { 2dr-r^2 -r}2} \, N^{r-1} \\ & = N^{ \frac{ 2rd-r^2 +r -2}2} \, . \end{align*} Since $\set S_{\rm core}$ is a subset of $\spc Y_{N,r}$, we obtain Eq. (\ref{lowerboundsum}). \qed
Following the steps adopted in the $d=2$ case, it is also possible to show that the lower bound of Lemma \ref{lem:upperbounddimensiond} is actually a lower bound for \emph{every} zero-error protocol that works for a \emph{complete} ensemble of mixed states---i.~e.~for an ensemble of the form $\{ \rho_g^{\otimes N} \, , p_g \}$ where the state $\rho_g$ has non-degenerate spectrum and the support of the probability distribution $p_g$ is dense in $\grp {SU} (d)$. Essentially, the argument is based on the use of Proposition \ref{prop:todos}, which can be applied here to all the $\grp {SU} (2)$ subgroups of $\grp {SU} (d)$.
\section{APPROXIMATE COMPRESSION FOR QUANTUM SYSTEMS OF DIMENSION $d>2$}
\subsection{Compression protocol} Here we consider ensembles of $N$ identically prepared mixed states, each of them having the same spectrum. Every such ensemble can be written in the form $\{ \rho_g^{\otimes N} , p_g \}$, where $\rho_g$ is a density matrix of the form \[\rho_g = U_g \rho_0 U_g^\dag \, , \qquad g\in\grp{SU} (d ) \, , \] $\rho_0$ is a rank-$r$ density matrix with non-degenerate positive eigenvalues, and $p_g$ is a probability distribution over the group $\grp {SU} (d)$. For ensembles of this form, we have the following
\begin{theo}\label{thm2second}
For every $\epsilon >0$ there exists an integer $N_0$ such that for every $N \ge N_0$ the ensemble $\{\rho_g^{\otimes N} \, , p_g\}$ can be compressed with error less than $\epsilon$ into $N_{\rm enc}$ qubits, with
\[N_{\rm enc} \le \frac{2dr-r^2-1-m}{2}\log (N+d-1)+ \frac{m+r-1}{2}\log \left[ 4 d (d+1)\ln(N+1)+ 8 \ln\left(\frac{1}{\epsilon}\right) + O\left( \frac 1 {\sqrt N} \right)\right] \] and $m : = \sum_{i=1}^r \mu_i$, where $\mu_i$ is the cardinality of the set $\{ j : \, j > i \, , p_j = p_i \}$. We notice that $m=0$ when the spectrum is non-degenerate. \end{theo}
The proof of the theorem is based on the Schur-Weyl decomposition \begin{align}\label{stateschur} \rho_g^{\otimes N} =\bigoplus_{\lambda\in\mathcal{{Y}}_{N,r}}q_{\lambda,N} \left( U_{g}^{(\lambda)} \,\rho_{0,\lambda} \, U_{g}^{(\lambda) \, \dag}\otimes\frac{I_{m_{\lambda}}}{m_\lambda}\right) \, , \end{align} where $\rho_{0,\lambda}$ is a fixed density matrix on $\spc R_\lambda$ and $U_g^{(\lambda)}$ is the irreducible representation of $\grp {SU} (d)$ acting on $\spc R_\lambda$. The key point is that the probability distribution $q_{\lambda, N}$ is concentrated on the Young diagrams such that the vector \begin{align}\label{plambda} p_\lambda : = \left(\frac {\lambda_1} N , \dots, \frac{\lambda_d} N\right) \end{align} is close to the vector of the eigenvalues of $\rho_0$ \cite{Spectrum,CM}, listed as \begin{align}\label{pspec} p = (p_1, \dots, p_d) \, , \qquad p_1\ge p_2 \ge \cdots \ge p_r > p_{r+1} = \cdots = p_{d} = 0 \, . \end{align}
Precisely, we will use the following \begin{lem}[\cite{Spectrum,CM}]\label{lem}
Let $p_\lambda$ and $p$ be the vectors defined in Eqs. (\ref{plambda}) and (\ref{pspec}), respectively, and let $ d(a , b) : = \frac 12 \sum_i |a_i -b_i|$ be the total variation distance between two vectors. Then, one has \begin{align*} {\sf Prob}\left[\lambda: d(p_\lambda, p) >x\right] \le (N + 1)^{d(d+1)/2}\cdot e^{-2Nx^2} \, , \end{align*} with $ {\sf Prob}\left[\lambda: d(p_\lambda, p) >x\right] : = \sum_{\lambda : \, d(p_\lambda, p) >x } \, q_{\lambda,N}$, $q_{\lambda,N}$ being the probability distribution in Eq. (\ref{stateschur}). \end{lem}
The idea of the proof is to discard all Young diagrams whose probability vector $p_\lambda$ falls outside a ball of radius $O(1/\sqrt N)$ around the vector $p$. The dimensions of the subspaces associated to the remaining diagrams can be bounded with the following \begin{lem}\label{lem:dimensiondegenerate} The maximum dimension of a subspace $\spc R_\lambda$ satisfying $d (p_\lambda , p) \le x$ is upper bounded as \begin{align} d_\lambda \le (4 Nx + r)^{m} \, ( N + d-1)^{ \frac{2dr-r (r+1)}2 - m} \, . \end{align} \end{lem}
{\bf Proof. } The dimension can be bounded as \begin{align*} d_\lambda & = \frac{\prod_{1\le i<j\le d}(\lambda_i-\lambda_j-i+j) }{\prod_{k=1}^{d-1}k!} \\ &\le \frac{ \prod_{1\le i\le r} \left\{ \left[ \prod_{ i< j \le i+ \mu_i} (\lambda_i-\lambda_j-i+j) \right] \, \left[ \prod_{i+ \mu_i< j \le d} \, (\lambda_i-\lambda_j-i+j)\right] \right\} }{\prod_{k=1}^{d-1}k!} \\ &\le \frac{ \prod_{1\le i\le r} \left\{ \left[ \prod_{ i< j \le i+ \mu_i} (4 Nx + \mu_i) \right] \, \left[ \prod_{i+ \mu_i< j \le d} \, ( N + d-1)\right] \right\} }{\prod_{k=1}^{d-1}k!} \\ & \le \frac{
\prod_{1\le i\le r} (4 Nx + \mu_i)^{\mu_i} \, ( N + d-1)^{d-i-\mu_i} }{\prod_{k=1}^{d-1}k!} \\
& \le \frac{
(4 Nx + r)^{m} \, ( N + d-1)^{ \frac{2dr-r (r+1)}2 - m} }{\prod_{k=1}^{d-1}k!} \, , \end{align*}
having used the fact that the ball $\set S = \left\{\lambda \in \spc Y_{N,r}: \, d(p_\lambda, p) \le x \right\} $ is contained in the hypercube $\set S' = \{ \lambda \in \spc Y_{N,r}: | \lambda_i/N - p_i| \le 2 x \, , \forall i = 1,\dots, r -1 \}$, so that, for $p_i = p_j$, $i<j$, one has $\lambda_i - \lambda_j \le 4 N x$. \qed
\begin{lem}\label{lem:totaldim1} The total dimension of the subspaces satisfying $d (p_\lambda , p) \le x$ satisfies \[ \sum_{{\lambda \in \spc Y_{N,r}: \, d(p_\lambda, p) \le x } } \, d_\lambda \le \, ( N + d-1)^{ \frac{2dr-r (r+1)}2 - m} (4 Nx + r)^{m + r-1} \, .\]
\end{lem}
{\bf Proof. }
Immediate from Lemma \ref{lem:dimensiondegenerate} and from the fact that the ball $\set S = \left\{\lambda \in \spc Y_{N,r}: \, d(p_\lambda, p) \le x \right\} $ is contained in the hypercube $\set S' = \{ \lambda \in \spc Y_{N,r}: | \lambda_i/N - p_i| \le 2 x \, , \forall i = 1,\dots, r -1 \}$, yielding the bound
\[ |\set S| \le |\set S'| \le (4 Nx)^{r-1} \, .\] \qed
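Lemma \ref{lem:totaldim1} can also be checked by direct enumeration for small parameters. The sketch below (illustrative values; the degenerate spectrum $p = (1/2, 1/2, 0)$ gives $\mu_1 = 1$, $\mu_2 = 0$, hence $m = 1$) sums $d_\lambda$ over the diagrams inside the total-variation ball and compares with the stated bound:

```python
from math import factorial, prod

def dim_R(lam, d):
    """Dimension of R_lambda via the formula of Eq. (III.10) (0-indexed rows)."""
    lam = list(lam) + [0] * (d - len(lam))
    num = prod(lam[i] - lam[j] + (j - i) for i in range(d) for j in range(i + 1, d))
    return num // prod(factorial(k) for k in range(1, d))

def young_diagrams(N, r, max_part=None):
    """All Young diagrams of N boxes with at most r rows (the set Y_{N,r})."""
    if max_part is None:
        max_part = N
    if N == 0:
        yield ()
        return
    if r == 0:
        return
    for first in range(min(N, max_part), 0, -1):
        for rest in young_diagrams(N - first, r - 1, first):
            yield (first,) + rest

def tv(a, b):
    """Total variation distance d(a, b) = (1/2) sum_i |a_i - b_i|."""
    return sum(abs(x - y) for x, y in zip(a, b)) / 2

d, r, N, x = 3, 2, 30, 0.1
p = (0.5, 0.5, 0.0)     # degenerate spectrum: mu_1 = 1, mu_2 = 0, hence m = 1
m = 1
total = 0
for lam in young_diagrams(N, r):
    p_lam = tuple(l / N for l in lam) + (0.0,) * (d - len(lam))
    if tv(p_lam, p) <= x:
        total += dim_R(lam, d)
bound = (4 * N * x + r) ** (m + r - 1) * (N + d - 1) ** ((2*d*r - r*(r + 1)) // 2 - m)
assert 0 < total <= bound
print(total, bound)
```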
{\bf Proof of Theorem \ref{thm2second}.} To compress within an error $\epsilon$, we choose the encoding and decoding channels \begin{align*} \map E (\rho) & = \bigoplus_{\lambda \in \set S_\epsilon } \operatorname{Tr}_{\spc M_\lambda} [\Pi_\lambda \rho \Pi_\lambda] \, \oplus \, \operatorname{Tr} \left[ \rho \left(I^{\otimes N}- \Pi_\epsilon \right)\right] \, \rho_{\rm fail} \\ \map D(\rho') & = \bigoplus_{\lambda \in \set S_\epsilon} \, \left( P_\lambda \rho' P_\lambda \otimes \frac { I_{m_\lambda}}{m_{\lambda}} \right) \, , \end{align*} with $\Pi_{\epsilon} = \bigoplus_{\lambda\in\set S_\epsilon} \Pi_\lambda$, $ \mathsf{Supp} ( \rho_{\rm fail}) \subseteq \spc H_{\rm enc} = \bigoplus_{\lambda \in \set S_\epsilon} \, \spc R_\lambda$, and \begin{align*}
\set S_\epsilon:=\left\{\lambda\in\mathcal{{Y}}_{N,r}~|~ d(p_\lambda, p) \le x_\epsilon\right\} \, , \qquad x_\epsilon= \sqrt{\frac{d(d+1)/2\ln(N+1)+ \ln(1/\epsilon)}{ 2N}} \, . \end{align*} The value of $x_\epsilon$ is chosen in order to bound the compression error as
\begin{align*}
e_N &= \frac 12 \left\| \map D\circ \map E \left( \rho_g^{\otimes N} \right) - \rho_g^{\otimes N} \right\| \qquad \forall g\in\grp{SU} (d) \\ &\le \frac 12 \operatorname{Tr} \left[ \rho \left(I^{\otimes N}- \Pi_\epsilon \right)\right] \, \left\| \map D (\rho_{\rm fail}) - \rho_{g,\rm fail} \right\| \, , \qquad \rho_{g, \rm fail} : = \bigoplus_{ \lambda\not \in \set S_\epsilon} \frac{ q_{\lambda,N} }{ \operatorname{Tr} \left[ \rho \left(I^{\otimes N}- \Pi_\epsilon \right)\right] } \, \left( U_{g}^{(\lambda)} \,\rho_{0,\lambda} \, U_{g}^{(\lambda) \, \dag} \otimes\frac{I_{m_{\lambda}}}{m_\lambda} \right) \\
&\le \operatorname{Tr} \left[ \rho \left(I^{\otimes N}- \Pi_\epsilon \right)\right] \\
& = \, \sum_{\lambda \not \in \set S_\epsilon} \, q_{\lambda,N} \\
&\le (N + 1)^{d(d+1)/2}\cdot e^{-2Nx_\epsilon^2} \\
& = \epsilon \, , \end{align*} the last inequality coming from Lemma \ref{lem}. On the other hand, the encoding subspace has dimension \begin{align*} d_{\rm enc}& = \sum_{\lambda \in \set S_ \epsilon} \, d_\lambda \\
& \le ( N + d-1)^{dr - \frac{r (r+1)}2 - m} (4 Nx_\epsilon + r)^{m + r-1} \\
&\le ( N + d-1)^{dr - \frac{r (r+1)}2 - m} \, N^{\frac{m+ r-1}2} \, \left[4d(d+1)\ln (N+1) +8\ln \left(\frac 1\epsilon\right) + O\left( \frac 1 {\sqrt N} \right )\right]^{\frac{m+r-1}2} \\
&\le (N+d-1)^{ \frac{2 dr - r^2 - 1 - m }2 } \, \left[4d(d+1)\ln (N+1) +8\ln \left(\frac 1\epsilon\right) + O\left( \frac1 {\sqrt N} \right )\right]^{\frac{m+r-1}2} \, , \end{align*} having used Lemma \ref{lem:totaldim1} and the definition of $x_\epsilon$. Hence, the number of encoding qubits satisfies \begin{align*} N_{\rm enc} &\le \log d_{\rm enc} \\
&\le\frac{2rd-r^2-1-m}{2}\log (N+d-1)+ \frac{m+r-1}{2}\log \left[ 4 d (d+1)\ln(N+1)+ 8 \ln\left(\frac{1}{\epsilon}\right) + O\left( \frac 1 {\sqrt N} \right)\right] \, . \end{align*} \qed
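As a numerical sanity check (our own sketch, not part of the proof): the choice of $x_\epsilon$ makes the bound $(N+1)^{d(d+1)/2}e^{-2Nx_\epsilon^2}$ on the discarded probability equal to $\epsilon$ exactly, and the leading coefficient of $N_{\rm enc}$ in $\log N$ is $(2dr-r^2-1-m)/2$. The function names below are ours.

```python
import math

def x_eps(N, d, eps):
    """Threshold x_epsilon = sqrt(((d(d+1)/2) ln(N+1) + ln(1/eps)) / (2N))."""
    return math.sqrt((d * (d + 1) / 2 * math.log(N + 1) + math.log(1 / eps)) / (2 * N))

def tail_bound(N, d, x):
    """Bound (N+1)^(d(d+1)/2) * exp(-2 N x^2) on the probability outside S_eps."""
    return (N + 1) ** (d * (d + 1) / 2) * math.exp(-2 * N * x ** 2)

def leading_coefficient(d, r, m=0):
    """Coefficient of log N in the encoding qubit count: (2dr - r^2 - 1 - m) / 2."""
    return (2 * d * r - r ** 2 - 1 - m) / 2
```

For instance, `tail_bound(100, 2, x_eps(100, 2, 0.01))` evaluates to $0.01$ up to floating-point error.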
\subsection{Optimality proof in the presence of symmetry} Here we prove the converse of Theorem \ref{thm2second}. Our proof is valid for protocols where the encoding is covariant and the decoding preserves the \emph{nonabelian charges} \cite{algebra-preserve} identified by the Young diagrams.
Precisely, we assume that
\begin{enumerate}
\item the encoding space $\spc H_{\rm enc}$ supports a unitary representation of the group $\grp {SU} (d)$, denoted by $ \{ V_g ~|~ g\in\grp {SU} (d) \}$.
\item the encoding channel satisfies the covariance condition $ \map E \circ \map U_g = \map V_g \circ \map E$, $\forall g\in\grp {SU} (d)$.
\item the decoding channel $\map D$ preserves the nonabelian charges associated to $\grp {SU} (d)$, namely, for every input state $\rho$, one has
\begin{align}\label{conservation}
\operatorname{Tr} \left [ {\Pi}_\lambda \, \map D( \rho) \right] = \operatorname{Tr} \left[ \widetilde \Pi_\lambda \, \rho \right ] \, \qquad \forall \lambda \in \spc Y_{N,d} \, ,
\end{align} where $\widetilde \Pi_{\lambda}$ is the projector on the direct sum of all the invariant subspaces of $\spc H_{\rm enc}$ with Young diagram $\lambda$.
\end{enumerate} By the same argument as in the qubit case, the error of any compression protocol satisfying the above assumptions can be lower bounded as $e_N\ge (1/2)\sum_{\lambda\not\in\set S}q_{\lambda,N}$, with $\set S$ being a subset of $\spc{Y}_{N,r}$ specified by the protocol. The encoding dimension is given by $d_{\rm enc}=\sum_{\lambda\in\set S}d_{\lambda}$. We have the following theorem.
\begin{theo}\label{thmopt} Every compression protocol that encodes a complete $N$-qudit ensemble into $$\left(\frac{2dr-r^2-1-m}{2}-\delta\right)\log N \, , \qquad \delta > 0 \, ,$$
qubits, with covariant encoding and a decoding that preserves the nonabelian charges, will necessarily have error $e_N \ge 1/2$ in the asymptotic limit. Here $m : = \sum_{i=1}^r \mu_i$, where $\mu_i$ is the cardinality of the set $\{ j : \, j > i \, , p_j = p_i \}$. We notice that $m=0$ when the spectrum is non-degenerate. \end{theo}
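The degeneracy data $(\mu_i)$ and $m$ appearing in Theorem \ref{thmopt} are purely combinatorial and can be computed directly from the spectrum; a minimal sketch (the function name is ours):

```python
def degeneracy_data(p):
    """Given the nonzero spectrum p = (p_1, ..., p_r), return the list mu with
    mu_i = #{ j : j > i, p_j = p_i } and the total m = sum_i mu_i."""
    r = len(p)
    mu = [sum(1 for j in range(i + 1, r) if p[j] == p[i]) for i in range(r)]
    return mu, sum(mu)
```

For a non-degenerate spectrum it returns $m=0$; for $p=(1/2,1/4,1/4)$ it returns $\mu=(0,1,0)$ and $m=1$, consistent with the identity $m=\sum_l r_l(r_l-1)/2$ used in the proof of Lemma \ref{lemma-delta}.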
To prove the theorem, we first define the cubic lattice \begin{align}\label{H-epsilon}
\set H_\epsilon = \left\{ \lambda \in \spc Y_{N,r} ~\left|~ \lambda_i \in \left[ p_i N - \frac{\sqrt{ N ^{1+\epsilon}}} 2 , p_i N + \frac{\sqrt{N^{1+\epsilon}}} 2 \right] \, , \quad \forall~i=1,\dots, r-1 \right\}\right. \, \end{align} for any constant $\epsilon\in(0,1)$. With this definition, the total probability of the diagrams $\lambda\not\in\set H_\epsilon$ vanishes exponentially in $N$. Precisely, we have the following lemma. \begin{lem} \label{lemma-H-epsilon} For the set $\set H_\epsilon$ defined by Eq. (\ref{H-epsilon}), the following bound holds. $$\sum_{\lambda\not\in \set H_{\epsilon}}q_{\lambda,N}\le (N+1)^{\frac{d(d+1)}2}e^{-\frac{N^\epsilon}8}.$$ \end{lem} \begin{proof}
For any Young diagram $\lambda$ not in the set $\set H_\epsilon$, there exists at least one index $j$ such that $|\lambda_j-p_j N|\ge \sqrt{ N ^{1+\epsilon}}/2$. Thus we have $$d(p_\lambda,p)\ge \frac12\left|\frac{\lambda_j}{N}-p_j\right|\ge \frac1{4\sqrt{N^{1-\epsilon}}}.$$ Substituting this bound into Lemma \ref{lem}, we immediately get the desired estimate. \end{proof}
Now we proceed to bound the probability distribution $q_{\lambda,N}$ within the set $\set H_\epsilon$. Notice that the exact expression of $q_{\lambda,N}$ is given by \cite{ExactDistribution} \begin{align}\label{q-expression} q_{\lambda,N}=\frac{\det \Delta }{\det \Sigma}\cdot m_{\lambda} \end{align} where the matrix $\Sigma$ is independent of $N$ (and thus its expression is not relevant to bounding the probability) and $\Delta$ is an $r\times r$ matrix defined as follows: \begin{align} \Delta_{ij}=\left[\prod_{\beta=0}^{\mu_j-1}(\lambda_i+r-i-\beta)\right]p_j^{\lambda_i+r-i-\mu_j}, \end{align} with $\mu_i$ defined in Theorem \ref{thmopt}. Notice that we follow the convention $\prod_{i=0}^{-1}f(i)=1$. We first prove the following bound on $\det\Delta$.
\begin{lem} \label{lemma-delta} For any $\lambda$ in the set $\set H_\epsilon$ defined by Eq. (\ref{H-epsilon}), the following bound holds asymptotically for large $N$: $$\det\Delta\lesssim N^{\frac {(1+\epsilon)m}2}\left(\prod_{i=1}^{r}p_i^{\lambda_i}\right) \, , \qquad m=\sum_{i=1}^r \mu_i \, . $$
\end{lem}
\begin{proof} Suppose that there are $k$ distinct positive values in the spectrum, and the $i$-th biggest value has degeneracy $r_i$. We can then divide the set $\{1,\dots,r\}$ into $k$ subsets $\set L_1\cup\dots\cup\set L_k$, corresponding to the distinct eigenvalues, so that $\set L_i$ is the set of indices corresponding to the $i$-th biggest eigenvalue. Recalling that $r_j$ is the degeneracy of the $j$-th eigenvalue, we have $$\set L_i=\left\{\sum_{j=1}^{i-1}r_j+1,\dots,\sum_{j=1}^i r_j\right\}.$$ Notice that, by definition, one has \begin{align}\label{L-property} p_l=p_k\qquad\forall\,l,k\in\set L_i. \end{align} With the above definition, the spectrum now reads \begin{align*} \underbrace{p_1=\dots=p_{r_1}}_{\set L_1}>\underbrace{p_{r_1+1}=\dots=p_{r_1+r_2}}_{\set L_2}>\dots>\underbrace{p_{\sum_{i=1}^{k-1}r_i+1}=\dots=p_{r}}_{\set L_k}>p_{r+1}=\dots=p_{d}=0. \end{align*}
Correspondingly, we define a subgroup $\grp{P}_r$ of the group $\grp S_r$, consisting of the product of permutations that act within the subsets $\{\set L_i\}$. Precisely, \begin{align*}
\grp{P}_r:=\left\{ \sigma^{(1)} \times \sigma^{(2)} \times \cdots \times \sigma^{(k)} \, | \, \sigma^{(i)} \in\grp S_{r_i}; i=1,\dots,k\right\}. \end{align*}
With the above definition, we divide $\det\Delta$ into two terms \begin{equation}\label{main-bound} \begin{split} \det \Delta &=t_1+t_2\\
t_1&=\sum_{\sigma\in\grp{P}_r}{\rm sgn}(\sigma)\left(\prod_{i=1}^{r}\Delta_{i\,\sigma_i }\right)\\
t_2&=\sum_{\sigma\not\in\grp{P}_r}{\rm sgn}(\sigma)\left(\prod_{i=1}^{r}\Delta_{i\,\sigma_i }\right),
\end{split} \end{equation} denoting by $\sigma_i$ the index that comes from applying $\sigma$ to $i$.
Let us bound $t_1$. By definition, $\grp P_r$ contains every permutation $\sigma$ such that $p_i=p_{\sigma_i}$ for every $i$. Therefore, we have \begin{align*}
t_1&= \sum_{\sigma\in\grp{P}_r}{\rm sgn}(\sigma)\left\{\prod_{i=1}^{r}\left[\prod_{\beta=0}^{\mu_{\sigma_i}-1}(\lambda_i+r-i-\beta)\right]p_{\sigma_i}^{\lambda_i+r-i-\mu_{\sigma_i}}\right\}\\
& = \sum_{\sigma\in\grp{P}_r}{\rm sgn}(\sigma)\left\{\prod_{i=1}^{r}\left[\prod_{\beta=0}^{\mu_{\sigma_i}-1}(\lambda_i+r-i-\beta)\right]p_{i}^{\lambda_i+r-i}\right\}\left(\prod_{i=1}^{r}p_{\sigma_i}^{-\mu_{\sigma_i}}\right)\\
& = \sum_{\sigma\in\grp{P}_r}{\rm sgn}(\sigma)\left\{\prod_{i=1}^{r}\left[\prod_{\beta=0}^{\mu_{\sigma_i}-1}(\lambda_i+r-i-\beta)\right]p_{i}^{\lambda_i+r-i}\right\}\left(\prod_{i=1}^{r}p_{i}^{-\mu_i}\right)\\
& = \left(\prod_{i=1}^{r} p_{i}^{\lambda_i+r-i-\mu_{i}}\right)\sum_{\sigma\in\grp{P}_r}{\rm sgn}(\sigma)\left[\prod_{i=1}^{r}\prod_{\beta=0}^{\mu_{\sigma_i}-1}(\lambda_i+r-i-\beta)\right]. \end{align*} Since $i$ and $\sigma_i$ are always in the same subset $\set L_l$ (for suitable $l$), we can rewrite the term $\prod_{i=1}^{r}\prod_{\beta=0}^{\mu_{\sigma_i}-1}(\lambda_i+r-i-\beta)$ as $\prod_{l=1}^{k}\prod_{i\in \set L_l}\prod_{\beta=0}^{\mu_{\sigma_i}-1}(\lambda_i+r-i-\beta)$. We then have \begin{align*} t_1 & = \left(\prod_{i=1}^{r} p_{i}^{\lambda_i+r-i-\mu_{i}}\right) \sum_{\sigma\in\grp{P}_r}{\rm sgn}(\sigma)\left\{\prod_{l=1}^{k}\left[\prod_{i\in \set L_l}\prod_{\beta=0}^{\mu_{\sigma_i}-1}(\lambda_i+r-i-\beta)\right]\right\}\\
&=\left(\prod_{i=1}^{r} p_{i}^{\lambda_i+r-i-\mu_{i}}\right) \prod_{l=1}^{k}\left\{\sum_{\sigma^{(l)}\in\grp{S}_{r_l}}{\rm sgn}\left(\sigma^{(l)}\right)\left[\prod_{i\in \set L_l}\prod_{\beta=0}^{\mu_{\sigma^{(l)}_i}-1}(\lambda_i+r-i-\beta)\right]\right\}\\
& = \left(\prod_{i=1}^{r} p_{i}^{\lambda_i+r-i-\mu_{i}}\right) \prod_{l=1}^{k}\left\{\sum_{\sigma^{(l)}\in\grp{S}_{r_l}}{\rm sgn}\left(\sigma^{(l)}\right)\left[\prod_{i\in\set L_l}\left(\Delta_l\right)_{i\sigma^{(l)}_{i}}\right]\right\}\\
& = \left(\prod_{i=1}^{r} p_{i}^{\lambda_i+r-i-\mu_{i}}\right)\left(\prod_{l=1}^{k}~\det \Delta_l\right). \end{align*} Here $\Delta_l$ is an $r_l\times r_l$ matrix defined as \begin{align*} \left(\Delta_l\right)_{ij}=\prod_{\beta=0}^{r_l-j-1}(\lambda_i+r-i-\beta), \end{align*} observing that $\mu_j$ assumes the values $r_l-1,r_l-2,\dots,1,0$ for the indices in $\set L_l$. The determinant of $\Delta_l$ equals $\prod_{1\le i<j\le r_l}(\lambda_i-\lambda_j+j-i)$. Combining this with the definition of $\set H_\epsilon$ (\ref{H-epsilon}), we have \begin{align} t_1&=\left[\prod_{l=1}^{k}\prod_{1\le i<j\le r_l}(\lambda_i-\lambda_j+j-i)\right]\left(\prod_{i=1}^{r} p_{i}^{\lambda_i+r-i-\mu_{i}}\right)\nonumber\\ &\lesssim \left[\prod_{l=1}^{k}\left(\sqrt{N^{1+\epsilon}}\right)^{\frac{r_l(r_l-1)}2}\right]\left(\prod_{i=1}^{r} p_{i}^{\lambda_i+r-i-\mu_{i}}\right)\nonumber\\ &= N^{\frac{(1+\epsilon)m}{2}}\left(\prod_{i=1}^{r} p_{i}^{\lambda_i+r-i-\mu_{i}}\right)\nonumber\\ &\approx N^{\frac{(1+\epsilon)m}{2}}\left(\prod_{i=1}^{r} p_{i}^{\lambda_i}\right)\label{term1}. \end{align} The last step follows from the fact that $$m=\sum_{i=1}^{r} \mu_i=\sum_{i=1}^{k}\sum_{j=1}^{r_i}(r_i-j).$$
Next, we bound the second term $t_2$ in Eq. (\ref{main-bound}) as \begin{align*} |t_2|&\le \sum_{\sigma\not\in\grp{P}_r}\left(\prod_{i=1}^{r}\Delta_{i\,\sigma_i }\right)\\ &= \sum_{\sigma\not\in\grp{P}_r}\left\{\prod_{i=1}^{r}\left[\prod_{j=0}^{\mu_{\sigma_i}-1}(\lambda_i+r-i-j)\right]p_{\sigma_i}^{\lambda_i+r-i-\mu_{\sigma_i}}\right\}\\ &\le \sum_{\sigma\not\in\grp{P}_r}\left[\prod_{i=1}^{r}(N+r-1)^{\mu_{\sigma_i}}p_{\sigma_i}^{\lambda_i+r-i-\mu_{\sigma_i}}\right]\\ &= (N+r-1)^m\sum_{\sigma\not\in\grp{P}_r}\left[\prod_{i=1}^{r}p_{\sigma_i}^{\lambda_i+r-i-\mu_{\sigma_i}}\right]\\ &= (N+r-1)^m\sum_{\sigma\not\in\grp{P}_r}\left[\prod_{i=1}^{r}\left(\frac{p_{\sigma_i}}{p_{i}}\right)^{\lambda_i+r-i-\mu_{\sigma_i}}\right]\left[\prod_{j=1}^{r}p_{j}^{\lambda_j+r-j-\mu_{\sigma_j}}\right]\\ &= (N+r-1)^m\sum_{\sigma\not\in\grp{P}_r}\left[\prod_{i=1}^{r}\left(\frac{p_{\sigma_i}}{p_{i}}\right)^{Np_i+O(\sqrt{N^{1+\epsilon}})}\right]\left[\prod_{j=1}^{r}p_{j}^{\lambda_j+r-j-\mu_{\sigma_j}}\right]\\
&\approx (N+r-1)^m\sum_{\sigma\not\in\grp{P}_r}\exp\left[-N D(p||\sigma_p)\right]\left[\prod_{i=1}^{r}p_{i}^{\lambda_i+r-i-\mu_{\sigma_i}}\right], \end{align*}
where $D(p||q):=\sum_i p_i \ln(p_i/q_i)$ is the Kullback-Leibler divergence and $\sigma_p:=(p_{\sigma_1},\dots,p_{\sigma_r})$. Now, since $\sigma\not\in\grp{P}_r$, we always have $D(p||\sigma_p)>0$. Therefore, the second term in Eq. (\ref{main-bound}) vanishes exponentially in $N$. Combining this fact with Eq. (\ref{main-bound}) and Eq. (\ref{term1}), we get the desired bound on $\det \Delta$. \end{proof}
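Two ingredients of the proof above lend themselves to a direct numerical check: the matrix $\Delta$ with its empty-product convention, and the generalized Vandermonde identity $\det\Delta_l=\prod_{1\le i<j\le r_l}(\lambda_i-\lambda_j+j-i)$ used to evaluate $t_1$. A sketch in Python (ours, using exact rational arithmetic), writing $\ell_i=\lambda_i+r-i$:

```python
from fractions import Fraction
from math import prod

def Delta(lam, p, mu):
    """Delta_{ij} = [prod_{beta=0}^{mu_j-1} (ell_i - beta)] * p_j^(ell_i - mu_j),
    with ell_i = lam_i + r - i (1-based i, j); an empty product equals 1."""
    r = len(lam)
    ell = [lam[i] + r - (i + 1) for i in range(r)]
    return [[prod(ell[i] - b for b in range(mu[j])) * p[j] ** (ell[i] - mu[j])
             for j in range(r)] for i in range(r)]

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, out = len(M), 1, Fraction(1)
    for c in range(n):
        piv = next((k for k in range(c, n) if M[k][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        out *= M[c][c]
        for k in range(c + 1, n):
            f = M[k][c] / M[c][c]
            M[k] = [a - f * b for a, b in zip(M[k], M[c])]
    return sign * out

def Delta_block(ell):
    """(Delta_l)_{ij} = prod_{beta=0}^{n-j-1} (ell_i - beta), 1-based j, n = len(ell)."""
    n = len(ell)
    return [[prod(e - b for b in range(n - j)) for j in range(1, n + 1)] for e in ell]
```

For example, `det(Delta_block([5, 3, 2]))` equals $(5-3)(5-2)(3-2)=6$, as predicted by the identity.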
\begin{lem} \label{lemma-q/d} For any $\lambda$ in the set $\set H_\epsilon$ defined by Eq. (\ref{H-epsilon}), the following bound holds asymptotically for large $N$. $$\frac{q_{\lambda,N}}{d_{\lambda}}\lesssim N^{-\frac{2dr-r^2-1-(1+\epsilon)m}2}.$$ \end{lem}
\begin{proof} The dimension of $\spc M_\lambda$ is given by \begin{align*} m_\lambda&=\frac{N!}{\prod_{i=1}^{d}(\lambda_i+d-i)!}\prod_{1\le i<j\le d}(\lambda_i-\lambda_j+j-i) \end{align*} (see e.~g.~\cite{ExactDistribution}) and can be bounded as \begin{align*} m_\lambda &\le \frac 1 { \lambda_1^{d-1} \, \lambda_2^{d-2} \,\dots \, \lambda_r^{d-r} }\, {N \choose \lambda} \prod_{1\le i<j\le d}(\lambda_i-\lambda_j+ j-i) \\ &\lesssim N^{-\frac{2dr-r^2-r}2} \, {N \choose \lambda} \prod_{1\le i<j\le d}(\lambda_i-\lambda_j + j-i) \, \end{align*} for any $\lambda\in\set H_\epsilon$. Substituting the above bound and the bound in Lemma \ref{lemma-delta} into Eq. (\ref{q-expression}), we have \begin{align*} q_{\lambda,N}&\lesssim \frac{N^{\frac {(1+\epsilon)m}2}}{\det \Sigma} \left(\prod_{i=1}^{r}p_i^{\lambda_i}\right)\cdot N^{-\frac{2dr-r^2-r}2} \, {N \choose \lambda} \prod_{1\le i<j\le d}(\lambda_i-\lambda_j + j-i) \\ &\lesssim N^{-\frac{2dr-r^2-r-(1+\epsilon)m}2} m(N,p,\lambda) \prod_{1\le i<j\le d}(\lambda_i-\lambda_j + j-i) \\ &\lesssim N^{-\frac{2dr-r^2-1-(1+\epsilon)m}2} \prod_{1\le i<j\le d}(\lambda_i-\lambda_j + j-i) \end{align*} which holds for any $\lambda\in\set H_\epsilon$. The last inequality comes from the bound $m(N,p,\lambda)\lesssim N^{-\frac{r-1}2}$ on the multinomial probability $m(N,p,\lambda):={N \choose \lambda}\prod_{i=1}^{r}p_i^{\lambda_i}$. Finally, we get the desired bound on $q_{\lambda,N}/d_\lambda$ by combining the above bound with the expression of $d_{\lambda}$: $$d_\lambda=\frac{\prod_{1\le i<j\le d}(\lambda_i-\lambda_j-i+j)}{\prod_{k=1}^{d-1}k!}.$$ \end{proof}
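The formulas for $m_\lambda$ and $d_\lambda$ quoted in this proof can be cross-checked against Schur-Weyl duality, which requires $\sum_\lambda m_\lambda d_\lambda=d^N$, the sum running over Young diagrams with $N$ boxes and at most $d$ rows. A self-contained sketch (ours):

```python
from math import factorial, prod

def diagrams(N, d):
    """Young diagrams with N boxes and at most d rows, padded with zeros."""
    def rec(n, k, cap):
        if k == 0:
            return [[]] if n == 0 else []
        return [[a] + t for a in range(min(n, cap), -1, -1) for t in rec(n - a, k - 1, a)]
    return rec(N, d, N)

def m_lambda(lam, N):
    """Multiplicity N! / prod_i (lam_i + d - i)! * prod_{i<j} (lam_i - lam_j + j - i)."""
    d = len(lam)
    ell = [lam[i] + d - 1 - i for i in range(d)]   # ell_i = lam_i + d - i, 1-based i
    num = factorial(N) * prod(ell[i] - ell[j] for i in range(d) for j in range(i + 1, d))
    return num // prod(factorial(e) for e in ell)

def d_lambda(lam):
    """Irrep dimension prod_{i<j} (lam_i - lam_j + j - i) / prod_{k=1}^{d-1} k!."""
    d = len(lam)
    num = prod(lam[i] - lam[j] + j - i for i in range(d) for j in range(i + 1, d))
    return num // prod(factorial(k) for k in range(1, d))
```

For example, the identity gives $2^3=8$ for $(N,d)=(3,2)$ and $3^4=81$ for $(N,d)=(4,3)$.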
Finally, we can bound the error of any compression protocol with an encoding set $\set S$ and with the encoding dimension $d_{\rm enc}=O\left(N^{\frac{2dr-r^2-1-m}{2}-\delta}\right)$ as \begin{align*} e_N&\ge \frac12\sum_{\lambda\not\in\set S} q_{\lambda,N}\\ &=\frac12\left(1-\sum_{\lambda\in\set S} q_{\lambda,N}\right)\\ &\ge\frac12\left(1-\sum_{\lambda\not\in \set H_{\delta/m}}q_{\lambda,N}-\sum_{\lambda\in\set H_{\delta/m}\cap\set S} q_{\lambda,N}\right)\\ &\ge\frac12\left[1-\sum_{\lambda\not\in \set H_{\delta/m}}q_{\lambda,N}-\max_{\lambda\in\set H_{\delta/m}}\left(\frac{q_{\lambda,N}}{d_{\lambda}}\right)\sum_{\lambda\in\set S}d_{\lambda}\right]\\ &\ge\frac12\left[1-\sum_{\lambda\not\in \set H_{\delta/m}}q_{\lambda,N}-\max_{\lambda\in\set H_{\delta/m}}\left(\frac{q_{\lambda,N}}{d_{\lambda}}\right)\cdot d_{\rm enc}\right]\\ &\gtrsim\frac12\left[1-(N+1)^{\frac{d(d+1)}2}e^{-\frac{1}8 N^{\frac{\delta}m}}-N^{-\frac{\delta}{2}}\right]\\ &=\frac12\left(1-N^{-\frac{\delta}{2}}\right). \end{align*}
\end{widetext}
\end{document}
\begin{document}
\date{}
\title{Intrinsic and Apparent Singularities in Differentially Flat Systems, and Application to Global Motion Planning}
\begin{abstract} In this paper, we study the singularities of differentially flat systems with a view to providing global or semi-global motion planning solutions for such systems: flat outputs may fail to be globally defined, thus potentially preventing the planning of trajectories leaving their domain of definition, whose complement we call \emph{singular}. Such singular subsets are classified into two types: \emph{apparent} and \emph{intrinsic}. A rigorous definition of these singularities is introduced in terms of atlases and local charts in the framework of the differential geometry of jets of infinite order and Lie-B\"acklund isomorphisms. We then give an inclusion result that allows one to effectively compute all or part of the intrinsic singularities. Finally, we show how our results apply to the global motion planning of the celebrated example of the nonholonomic car. \end{abstract}
\keywords{differential flatness; jets of infinite order; Lie-B\"acklund isomorphism; atlas; local chart; apparent and intrinsic singularity; global motion planning}
\section{Introduction}
Differential flatness has become a central concept in non-linear control theory over the past two decades. See~\cite{FLMR_95,FLMR_99}, the overviews \cite{MMR-ecc,SRA} and \cite{Levine-09} for a thoroughgoing presentation.
Consider a non-linear system on a smooth $n$-dimensional manifold $X$ given by \begin{equation}\label{eq::explicit_form} \dot{x} = f(x,u) \end{equation} where $x \in X$ is the $n$-dimensional state vector and $u \in \mathbb R^m$ the input or control vector, with $m \leq n$ to avoid trivial situations.
We consider infinitely prolonged coordinates of the form $(x,\overline{u})\triangleq (x,u,\dot{u}, \ddot{u},\ldots)\in X\times\mathbb R^m_{\infty} \triangleq X\times \mathbb R^m\times \mathbb R^m\times\cdots$ where the latter cartesian product is made of a countably infinite number of copies of $\mathbb R^m$.
Roughly speaking, system~\eqref{eq::explicit_form} is said to be (differentially) flat\footnote{This is not a rigorous definition but rather an informal presentation, without advanced mathematics, of the flatness concept. Problems associated to this informal definition are reported in \cite[Section 5.2]{Levine-09}. For a rigorous definition, in the context of implicit systems, the reader may refer to definitions \ref{L-B-equi:def} and \ref{flatness:def} of Section \ref{sec::atlas}.} at a point $(x_0,\overline{u}_0 )\triangleq(x_0,u_0,\dot{u}_0,\ldots) \in X\times \mathbb R^m_{\infty}$, if there exists an $m$-dimensional vector $y = (y_1, \dots, y_m)$ satisfying the following conditions: \begin{itemize} \item $y$ is a smooth function of $x$, $u$ and time derivatives of $u$ up to a finite order $\beta= (\beta_1, \ldots, \beta_m)$, \textit{i.e.~} $y = \Psi(x,u,\dot{u},\dots, u^{(\beta)})$, where $u^{(\beta)}$ stands for $(u_{1}^{(\beta_{1})}, \ldots, u_{m}^{(\beta_{m})})$ and where $u_{i}^{(\beta_{i})}$ is the $\beta_{i}$th order time derivative of $u_{i}$, $i=1, \ldots,m$, in a neighborhood of the point $(x_0,\overline{u}_0)$; \item $y$ and its successive time derivatives $\dot{y}, \ddot{y}, \dots$ are locally differentially independent in this neighborhood; \item $x$ and $u$ are smooth functions of $y$ and its time derivatives up to a finite order $\alpha= (\alpha_1, \ldots,\alpha_m)$, \textit{i.e.~} $(x,u) = \Phi(y,\dot{y},\dots,y^{(\alpha)})$ in a neighborhood of the point $(y_0,\dot{y}_0,\ldots) \triangleq (\Psi(x_0,u_0,\dot{u}_0,\dots, u_0^{(\beta)}), \dot{\Psi}(x_0,u_0,\dot{u}_0,\dots, u_0^{(\beta+1)}), \ldots) $. \end{itemize} Then the vector $y$ is called a \emph{flat output}.
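As a concrete illustration of the three conditions (our own toy example, simpler than the car model studied later in this paper), consider the kinematic unicycle $\dot x_1=v\cos\theta$, $\dot x_2=v\sin\theta$, $\dot\theta=\omega$, with state $(x_1,x_2,\theta)$ and input $u=(v,\omega)$. The output $y=(x_1,x_2)$ is a flat output wherever $\dot y\neq 0$: the state and input are recovered from $y$, $\dot y$, $\ddot y$ as in the following sketch.

```python
import math

def unicycle_from_flat_output(y1d, y2d, y1dd, y2dd):
    """Recover (theta, v, omega) from the flat-output derivatives of
    y = (x1, x2) for x1' = v cos(theta), x2' = v sin(theta), theta' = omega.
    Singular where v = sqrt(y1d^2 + y2d^2) = 0 (the inversion degenerates)."""
    v = math.hypot(y1d, y2d)                       # forward speed, taken >= 0
    theta = math.atan2(y2d, y1d)
    omega = (y1d * y2dd - y2d * y1dd) / (y1d ** 2 + y2d ** 2)
    return theta, v, omega
```

For the unit-circle trajectory $y(t)=(\cos t,\sin t)$ this returns $v=1$ and $\omega=1$ for all $t$; the formulas degenerate exactly at $v=0$, which is the prototype of the singularities studied in this paper.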
Note that it is convenient to regard the above defined functions $\Phi$ and $\Psi$ as smooth functions over infinite order jet spaces endowed with the product topology\footnote{Recall that in this topology, a continuous function only depends on a finite number of variables, \textit{i.e.~}, in this context of jets of infinite order, on a finite number of successive derivatives of $u$ (see \textit{e.g.~} \cite[Section 5.3.2]{Levine-09}).} \cite{KLV_86,Zharinov-92,FLMR_99,Levine-09}. They are then called \emph{Lie-B\"acklund isomorphisms} and are inverses of each other (see \cite{FLMR_99,Levine-09}). However, these functions may be defined on suitable neighborhoods that need not cover the whole space. We thus may want to know where such isomorphisms do not exist at all, a set that may be roughly described as \emph{intrinsically singular}, thus motivating the present work: if two points are separated by such an intrinsic singularity, it is intuitively impossible to join them by a smooth curve satisfying the system differential equations and, thus, to globally solve the motion planning problem\footnote{By global motion planning problem, we mean that two arbitrary points of the infinite jet space associated to the system, once the set of intrinsic singularities has been removed, can be joined by a system's trajectory, and thus that this set is arcwise connected.}.
More precisely, the notions of \emph{apparent and intrinsic singularities} are introduced thanks to the construction of an \emph{atlas}, that we call \emph{Lie-B\"acklund atlas}, where \emph{local charts} are made of the open sets where the Lie-B\"acklund isomorphisms, defining the flat outputs, are nondegenerate, in the spirit of \cite{CE_14,CE_17} where a comparable idea was applied to a quadcopter model. Intrinsic singularities are then defined as points where flat outputs fail to exist, \textit{i.e.~} that are contained in none of the charts defined above. Other types of singularities are called apparent, as they can be ruled out by switching to another flat output well defined in an intersecting chart. Our intrinsic singularity notion may be seen as a generalization of the one introduced in \cite{Li-Respondek} in the particular case of two-input driftless systems such as cars with trailers, and restricted to the so-called $x$-flat outputs.
Our main result, apart from the above Lie-B\"acklund atlas and singularities definition, then concerns the inclusion of a remarkable and effectively computable set in the set of intrinsic singularities. Note that, since finitely computable necessary and sufficient conditions for the non-existence of flat outputs are not available in general \cite{Levine-09,Levine-11}, an easily computable complete characterization of the set of intrinsic singularities is not presently known, and it may be useful to label all or part of the singularities as intrinsic thanks to their membership in another set.
To briefly describe this result, we start from the necessary and sufficient conditions for the existence of local flat outputs of meromorphic systems of \cite{Levine-11}\footnote{Other approaches to flatness characterization may be found in \cite{ABMP-ieee-95,Cht,Antritt_2010}}. It consists in first transforming system \eqref{eq::explicit_form} into the locally equivalent implicit form: \begin{equation} \label{eq::implicit_form} F(x,\dot{x}) = 0 \end{equation} where $F$ is assumed meromorphic, and introducing the operator $\tau$, the trivial Cartan field on the manifold of global coordinates $(x,\dot{x}, \ddot{x},\ldots)$, given by $\tau = \sum_{i=1}^n \sum_{j \geq 0} x_i^{(j+1)} \frac{\partial}{\partial x_i^{(j)}}$. Then, we compute the \emph{diagonal} or \emph{Smith-Jacobson decomposition} \cite{Cohn,Levine-09} of the following polynomial matrix: \begin{equation}\label{polymat:eq} P(F) = \frac{\partial F}{\partial x} + \frac{\partial F}{\partial \dot{x}} \tau \end{equation} a matrix that describes the variational system associated to \eqref{eq::implicit_form}, and that lies in the ring of matrices whose entries are polynomials in the operator $\tau$ with meromorphic coefficients.
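For instance, for the implicit unicycle $F(x,\dot x)=\dot x_1\sin\theta-\dot x_2\cos\theta=0$ with $x=(x_1,x_2,\theta)$, a direct computation (ours, for illustration only) gives the $1\times 3$ polynomial matrix $P(F)=\big(\sin\theta\,\tau,\ -\cos\theta\,\tau,\ \dot x_1\cos\theta+\dot x_2\sin\theta\big)$. The sketch below applies $P(F)$ to a variation $\delta(t)$, using $\tau\delta=\dot\delta$; variations tangent to the constraint are exactly those it annihilates.

```python
import math

def PF_apply(x, xdot, delta, deltadot):
    """Apply P(F) = dF/dx + (dF/dxdot) * tau for the implicit unicycle
    F(x, xdot) = xdot1*sin(theta) - xdot2*cos(theta), with tau delta = deltadot.
    Arguments are triples (components along x1, x2, theta)."""
    _, _, th = x
    x1d, x2d, _ = xdot
    dF_dx = (0.0, 0.0, x1d * math.cos(th) + x2d * math.sin(th))
    dF_dxdot = (math.sin(th), -math.cos(th), 0.0)
    return (sum(a * b for a, b in zip(dF_dx, delta))
            + sum(a * b for a, b in zip(dF_dxdot, deltadot)))
```

Along the straight trajectory $x(t)=(t,0,0)$, the variation $\delta(t)=(0,\sin t,\cos t)$ satisfies $P(F)\delta=0$ (it is tangent to the constraint), while $\delta(t)=(0,\sin t,0)$ does not.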
We prove that the set of intrinsic singularities contains the set where $P(F)$ is \emph{not} hyper-regular (see \cite{Levine-09}). As a corollary, we deduce that if an equilibrium point is not first order controllable, then it is an intrinsic singularity.
These results are applied to the global motion planning problem of the well-known nonholonomic car, which is only used here as a benchmark in order to show how the classical and simple flatness-based motion planning methodology can be extended in the presence of singularities. It is also meant to help the reader verify that the introduced concepts, in the relatively arduous context of Lie-B\"acklund isomorphisms, are nevertheless intuitive and well suited to this situation.
Note that different approaches, also leading to global results, have already been extensively developed in the context of nonholonomic systems, based on controllability, Lie brackets of vector fields and piecewise trajectory generation by sinusoids \cite{Murray_93,Jean_96,Chitour_13,Jean_14}, or using Brockett-Coron stabilization results \cite{Brockett_83,Coron}. However, though some particular nonholonomic systems, such as the car example, happen to be flat, our approach applies to the class of flat systems, which is different and includes, \textit{e.g.~}, pendulum systems, unmanned aerial vehicles and many other systems that do not belong to the nonholonomic class (see \cite{MMR-ecc,SRA,Levine-09,CE_14,CE_17}).
Remark that, in the car example, the obtained intrinsic singularities are the same as the ones revealed in \cite{Murray_93,Jean_96,Chitour_13,Jean_14} where first order controllability fails to hold, or, according to \cite{Brockett_83,Coron}, where stabilization by continuous state feedback is impossible. However, the degree of generality of this coincidence is not presently known.
The paper is organized as follows. In section~\ref{sec::atlas}, we introduce the basic language of Lie-B\"acklund atlases and charts. This leads to a computational approach for calculating intrinsic singularities. In particular, their link with the lack of hyper-regularity of the polynomial matrix \eqref{polymat:eq} of the variational system is established in Proposition~\ref{hyperreg:prop} and Theorem~\ref{intrinsic:th}, and then specialized in Corollary~\ref{singularequilibrium:cor} to the case of equilibrium points.
In section~\ref{route:sec}, we apply our results to the nonholonomic car. We build an explicit Lie-B\"acklund atlas for this model, compute the set of intrinsic singularities and apply the atlas construction to trajectory planning where the route contains several apparent singularities and starts and ends at intrinsically singular points. Finally, conclusions are drawn in section~\ref{concl:sec}.
\section{Lie-B\"acklund Atlas, Apparent and Intrinsic Singularities} \label{sec::atlas}
Recall from the introduction that we consider the controlled dynamical system in explicit form \eqref{eq::explicit_form}, where $x$ evolves in some $n$-dimensional manifold $X$. The control input $u$ lies in $\mathbb R^m$. Then the system can be seen as the zero set of $\dot{x} - f(x,u)$ in $\mathrm{T} X \times \mathbb R^m$, where $\mathrm{T} X$ is the tangent bundle of $X$. From now on, we assume that the Jacobian matrix $\frac{\partial f}{\partial u}(x,u)$ has rank $m$ for every $(x,u)$.
Converting system \eqref{eq::explicit_form} into its implicit form consists in eliminating the input $u$ or, more precisely, in computing its image by the projection $\pi$ from $\mathrm{T} X \times \mathbb R^m$ onto $\mathrm{T} X$ to get the implicit relation \eqref{eq::implicit_form}, where we assume that $F: (x,\dot{x}) \in \mathrm{T} X \mapsto \mathbb R^{n-m}$ is a meromorphic function, with $m \leq n$.
Following~\cite{Levine-09,Levine-11}, we embed the state space associated to \eqref{eq::implicit_form} into a diffiety (see~\cite{Zharinov-92}), i.e. into the manifold $\mathfrak{X} \triangleq X \times \mathbb R^n_\infty$, where we have denoted by $\mathbb R^n_\infty$ the product of a countably infinite number of copies of $\mathbb R^n$, with coordinates $\overline{x} \triangleq (x,\dot{x},\ddot{x},\ldots, x^{(k)}, \ldots)$, endowed with the trivial Cartan field: $$\tau_{\mathfrak{X}} \triangleq \sum_{i=1}^n \sum_{j \geq 0} x_i^{(j+1)} \frac{\partial}{\partial x_i^{(j)}}.$$ Note that $\tau_{\mathfrak{X}}$ is such that the elementary relations $\tau_{\mathfrak{X}} x^{(k)}=x^{(k+1)}$ hold for all $k \in \mathbb N$. The integral curves of both \eqref{eq::explicit_form} and \eqref{eq::implicit_form} thus belong to the zero set of $\{F,\tau_{\mathfrak{X}}^{k} F \mid k \in \mathbb N\}$ in $\mathfrak{X}$. However, there might exist points $\overline{x} = (x,\dot{x},\ddot{x},\ldots, x^{(k)}, \ldots) \in \mathfrak{X}$ such that the fiber $\pi^{-1}(x,\dot{x})$ above $\overline{x}$ is empty, \textit{i.e.~} such that there does not exist a $u\in \mathbb R^m$ such that $\dot{x}-f(x,u)=0$. We naturally exclude such points. It is easily proven that the integral curves of \eqref{eq::explicit_form} and \eqref{eq::implicit_form} coincide on the set $\mathfrak{X}_0$ given by $$\mathfrak{X}_0 = \{\overline{x} \in \mathfrak{X} \mid \tau_{\mathfrak{X}}^k F(\overline{x}) = 0, \forall k \in \mathbb N\} \setminus \{\overline{x} \in \mathfrak{X} \mid \pi^{-1}(x,\dot{x}) = \emptyset \}.$$ Therefore, the system trajectories are uniquely defined by the triple $(\mathfrak{X}, \tau_{\mathfrak{X}},F)$ that we call \emph{the system} from now on (see \cite{Levine-09}). Without loss of generality, we may consider that this system is restricted to $\mathfrak{X}_0$.
In order to get rid of any reference to an explicit system, such as the complementary of the empty fibers of the projection $\pi$, we more generally assume that $\mathfrak{X}_0$ is an open dense subset\footnote{As a consequence of the implicit function theorem, the set of points where the fibers are empty is the complement of an open dense subset of the set $\{\overline{x} \in \mathfrak{X} \mid \tau_{\mathfrak{X}}^k F(\overline{x}) = 0, \forall k \in \mathbb N\}$.} of $\{\overline{x} \in \mathfrak{X} \mid \tau_{\mathfrak{X}}^k F(\overline{x}) = 0, \forall k \in \mathbb N\}$.
Let us recall the definitions of Lie-B\"acklund equivalence and local flatness for implicit systems (\cite{Levine-09,Levine-11}):
Consider two systems $(\mathfrak{X}, \tau_{\mathfrak{X}},F)$ and $(\mathfrak{Y}, \tau_{\mathfrak{Y}},G)$ where $\mathfrak{Y}\triangleq Y\times \mathbb R^{q}_{\infty}$, $Y$ being a $q$-dimensional smooth manifold, where $q\in\mathbb N$ is arbitrary, with global coordinates $\ol{y} \triangleq (y,\dot{y}, \ldots)$ and trivial Cartan field $\tau_{\mathfrak{Y}} \triangleq \sum_{i=1}^q \sum_{j \geq 0} y_i^{(j+1)} \frac{\partial}{\partial y_i^{(j)}}$. As before, we denote
by $\mathfrak{Y}_{0}$ an open dense subset of $\{ \ol{y}\in \mathfrak{Y} \mid \tau_{\mathfrak{Y}}^{k}G(\ol{y})=0,~\forall k\in \mathbb N \}$.
\begin{definition}\label{L-B-equi:def} We say that $(\mathfrak{X}, \tau_{\mathfrak{X}},F)$ and $(\mathfrak{Y}, \tau_{\mathfrak{Y}},G)$ are Lie-B\"acklund equivalent at a pair of points $(\ol{x}_0, \ol{y}_0)\in \mathfrak{X}_0\times\mathfrak{Y}_0$ if, and only if, \begin{itemize}
\item[(i)] there exist neighborhoods $\EuScript{X}_{0}$ of $\ol{x}_{0}$ in $\mathfrak{X}_{0}$, and $\EuScript{Y}_{0}$ of $\ol{y}_{0}$ in $\mathfrak{Y}_{0}$, and a one-to-one mapping $\Phi=(\varphi_{0},\varphi_{1},\ldots )$, meromorphic from $\EuScript{Y}_{0}$ to $\EuScript{X}_{0}$, satisfying $\Phi(\ol{y}_{0})=\ol{x}_{0}$ and such that the restrictions of the trivial Cartan fields ${\tau_{\mathfrak{Y}}}_{\big \vert \EuScript{Y}_{0}}$ and ${\tau_{\mathfrak{X}}}_{\big \vert \EuScript{X}_{0}}$ are $\Phi$-related, namely $\Phi_{\ast}{\tau_{\mathfrak{Y}}}_{\big \vert \EuScript{Y}_{0}}={\tau_{\mathfrak{X}}}_{\big \vert \EuScript{X}_{0}}$;
\item[(ii)] there exists a one-to-one mapping $\Psi=(\psi_{0},\psi_{1},\ldots )$, meromorphic from $\EuScript{X}_{0}$ to $\EuScript{Y}_{0}$, such that $\Psi(\ol{x}_{0})=\ol{y}_{0}$ and $\Psi_{\ast}{\tau_{\mathfrak{X}}}_{\big \vert \EuScript{X}_{0}}={\tau_{\mathfrak{Y}}}_{\big \vert \EuScript{Y}_{0}}$.
\end{itemize} The mappings $\Phi$ and $\Psi$ are called \emph{mutually inverse Lie-B\"acklund isomorphisms} at $(\ol{x}_{0},\ol{y}_{0})$.
The two systems $(\mathfrak{X}, \tau_{\mathfrak{X}},F)$ and $(\mathfrak{Y}, \tau_{\mathfrak{Y}},G)$ are called \emph{locally L-B equivalent} if they are L-B equivalent at every pair $(\ol{x}, \Psi(\ol{x}))=(\Phi(\ol{y}),\ol{y})$ of an open dense subset $\EuScript{Z}$ of $\mathfrak{X}_{0}\times \mathfrak{Y}_{0}$, with $\Phi$ and $\Psi$ mutually inverse Lie-B\"acklund isomorphisms on $\EuScript{Z}$. \end{definition}
Accordingly, \begin{definition}\label{flatness:def} The system $(\mathfrak{X}, \tau_{\mathfrak{X}},F)$ is said to be (differentially) flat at $\ol{x}_{0}$ if, and only if, it is Lie-B\"acklund equivalent to the trivial system $(\mathbb R^{m}_{\infty}, \tau,0)$ at $(\ol{x}_{0},\ol{y}_{0})$ where $\tau$ is the trivial Cartan field on $\mathbb R^m_{\infty}$ with global coordinates\footnote{The number of components of $y$ must be equal to $m$ (see \cite{FLMR_99,Levine-09}).} $\ol{y}= (y,\dot{y},\ldots)$, \textit{i.e.~} $\tau = \sum_{i=1}^{m}\sum_{j\geq 0} y_{i}^{(j+1)}\frac{\partial}{\partial y_{i}^{(j)}}$, and where $0$ indicates that there is no differential equation to satisfy. In this case, we say that $y$, or $\Psi$ by extension, is a \emph{local flat output}, well-defined and invertible from a neighborhood of $\ol{x}_{0}$ to a neighborhood of $\ol{y}_0$.
Finally, the system $(\mathfrak{X}, \tau_{\mathfrak{X}},F)$ is said to be locally (differentially) flat if it is flat at every point of an open dense subset $\EuScript{Z}$ of $\mathfrak{X}_{0}\times \mathbb R^{m}_{\infty}$. \end{definition}
\subsection{Lie-B\"acklund Atlas}\label{LB-atlas:subsec} From now on, we assume that system~\eqref{eq::explicit_form}, or equivalently \eqref{eq::implicit_form} or, also equivalently, system $(\mathfrak{X}, \tau_{\mathfrak{X}},F)$ is locally flat.
We now introduce the notion of a Lie-B\"acklund atlas for flat systems. It consists of a collection of charts on $\mathfrak{X}_0$, which we call \emph{Lie-B\"acklund charts and atlas}, and which will allow us to define a structure of infinite-dimensional manifold on a subset of $\mathfrak{X}_0$, which can be $\mathfrak{X}_0$ itself in some cases.
\begin{definition} \begin{itemize} \item[(i)] A \emph{Lie-B\"acklund chart} on $\mathfrak{X}_0$ is the data of a pair $(\EuScript{U}, \psi)$ where $\EuScript{U}$ is an open set of $\mathfrak{X}_0$ and $\psi : \EuScript{U} \rightarrow \mathbb R^m_{\infty}$ a local flat output, with local inverse $\varphi: \EuScript{V} \rightarrow \EuScript{U}$ with $\EuScript{V}$ open subset of $\psi(\EuScript{U})\subset \mathbb R^{m}_{\infty}$. \item[(ii)] Two charts $(\EuScript{U}_1,\psi_1)$ and $(\EuScript{U} _2,\psi_2)$ are said to be \emph{compatible} if, and only if, the mapping $$\psi_{1}\circ \varphi_{2}: \psi_{2}(\varphi_{1}(\EuScript{V}_1)\cap \varphi_2(\EuScript{V}_2) ) \subset \mathbb R^m_{\infty} \rightarrow \psi_{1}(\varphi_{1}(\EuScript{V}_1)\cap \varphi_2(\EuScript{V}_2) ) \subset \mathbb R^m_{\infty}$$ is a local Lie-B\"acklund isomorphism (with the same trivial Cartan field $\tau$ associated to both the source and the target) with local inverse $\psi_{2}\circ \varphi_{1}$, as long as $\varphi_{1}(\EuScript{V}_1)\cap \varphi_2(\EuScript{V}_2) \neq \emptyset$. \item[(iii)] An \emph{atlas} $\mathfrak{A}$ is a collection of compatible charts. \end{itemize} \end{definition}
For a given atlas $\mathfrak{A} = (\EuScript{U}_i,\psi_i)_{i \in I}$, let $\mathfrak{U}_\mathfrak{A}$ be the union $\mathfrak{U}_\mathfrak{A}\triangleq \bigcup_{i \in I} \EuScript{U}_i$.
Here our definition differs from the usual concept of atlas in finite dimensional differential geometry, since, on the one hand, diffeomorphisms are replaced by Lie-B\"acklund isomorphisms and, on the other hand, we do not require that $\mathfrak{U}_\mathfrak{A} = \mathfrak{X}_0$. The reason for this difference is precisely related to our objective, i.e. identifying the essential singularities of differentially flat systems. This will become clear in the sequel.
\subsection{Apparent and Intrinsic Flatness Singularities} It is clear from what precedes that if we are given two Lie-B\"acklund atlases, their union is again a Lie-B\"acklund atlas. Therefore the union of the charts of all atlases is well-defined, as is its complement, which we call the set of intrinsic flatness singularities, as stated in the next definition.
\begin{definition} \label{def::intrinsic-singularities} We say that a point in $\mathfrak{X}_0$ is an \emph{intrinsic flatness singularity} if it is excluded from all charts of every Lie-B\"acklund atlas. Every other singular point, namely every point $\bar{x}\not\in \EuScript{U}_i$ for some chart $(\EuScript{U}_i,\psi_i)$ but for which there exists another chart $(\EuScript{U}_j,\psi_j)$, $j\neq i$, such that $\bar{x}\in \EuScript{U}_j$, is called \emph{apparent}. \end{definition}
Clearly, this notion does not depend on the choice of atlas and charts. The concrete meaning of this notion is that at points that are intrinsic singularities there is no flat output, \textit{i.e.~} the system is not flat at these points.
On the other hand, points that are apparent singularities are singular for a given set of flat outputs, but well defined points for another set of flat outputs.
Note, moreover, that obtaining atlases may be very difficult in general situations, so a computable criterion to directly detect intrinsic singularities would be of great help. A simple result in this direction is presented in section~\ref{sec::criterion-intrinsic-singularities}.
\subsection{Intrinsic Flatness Singularities and Hyper-regularity} \label{sec::criterion-intrinsic-singularities}
The purpose of this section is to give a tractable sufficient condition for intrinsic singularity and an algorithm to effectively compute the associated points.
With the notations defined at the beginning of section~\ref{sec::atlas}, we next consider the variational equation, in polynomial form, of system \eqref{eq::implicit_form}:
\begin{equation}\label{polymatgen:eq} P(F) dx = 0, \quad P(F) = \frac{\partial F}{\partial x} + \frac{\partial F}{\partial \dot{x}} \tau_{\mathfrak{X}} \end{equation} where the entries of the $(n-m)\times n$ matrix $P( F)$ are polynomials in $\tau_{\mathfrak{X}}$ with meromorphic functions on $\mathfrak{X}$ as coefficients.
Recall that a square $n\times n$ polynomial matrix is said to be \emph{unimodular} if it is invertible and its inverse is also a matrix whose entries are polynomials in $\tau_{\mathfrak{X}}$ with meromorphic functions on $\mathfrak{X}$ as coefficients. It is important to remark that, since the coefficients are meromorphic functions, they are, in general, only locally defined. This local dependence will be omitted unless explicitly needed.
The $(n-m)\times n$ polynomial matrix $P(F)$ is said to be \emph{hyper-regular} if, and only if, there exist an $(n-m)\times (n-m)$ unimodular polynomial matrix $V$ and an $n\times n$ unimodular polynomial matrix $U$ such that \begin{equation}\label{hyper-reg0:eq} VP(F)U= \left( \begin{array}{cc} I_{n-m}&0_{(n-m)\times m}\end{array}\right). \end{equation}
In fact, it has been proven in \cite{Antritter-Middeke-09} (see also \cite[Proposition 1]{ACLM-scl}), that the latter definition may be simplified as follows: \begin{proposition} The polynomial matrix $P(F)$ is hyper-regular if, and only if, there exists an $n\times n$ unimodular polynomial matrix $U$ such that \begin{equation}\label{hyper-reg:eq} P(F)U= \left( \begin{array}{cc} I_{n-m}&0_{(n-m)\times m}\end{array}\right). \end{equation} \end{proposition}
\begin{proof} $P(F)$ is hyper-regular if, and only if, there are unimodular matrices $S$, of size $(n-m)\times (n-m)$, and $T$, of size $n\times n$, such that $SP(F)T = \left( \begin{array}{cc} I_{n-m}&0_{(n-m)\times m}\end{array}\right)$. Thus, using the identity \begin{equation*} \left( \begin{array}{cc} I_{n-m}&0_{(n-m)\times m}\end{array}\right)
= S^{-1} \left( \begin{array}{cc} I_{n-m}& 0_{(n-m)\times m} \end{array}\right) \left( \begin{array}{cc} S & 0_{(n-m)\times m} \\ 0_{m\times(n-m)} & I_{m} \end{array}\right) \end{equation*} we get \begin{equation*}
\left( \begin{array}{cc} I_{n-m}&0_{(n-m)\times m}\end{array}\right) = S^{-1} (SP(F)T)
\left( \begin{array}{cc} S & 0 \\ 0 & I_{m} \end{array}\right)
= P(F)
\Bigl( T \left( \begin{array}{cc} S & 0 \\ 0 & I_{m} \end{array}\right)\Bigr) \triangleq P(F)U \end{equation*} which proves \eqref{hyper-reg:eq}. The converse is trivial. \end{proof} We say that $P(F)$ is \emph{hyper-singular} at a given point if, and only if, it is not hyper-regular at this point, i.e. if this point does not belong to any neighborhood where $P(F)$ is hyper-regular or, in other words, if no unimodular matrix $U$ satisfying \eqref{hyper-reg:eq} exists at this point.
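To make hyper-regularity concrete, consider a toy constant-coefficient implicit system of our own choosing (not from the references), $F = \dot{x}_1 - x_2$, so that $P(F) = (\tau \;\; -1)$ with $n=2$, $m=1$. Since the coefficients are constant, $\tau$ commutes with them, and the identity $P(F)U = (I_1 \;\; 0)$ for the unimodular candidate $U$ below can be spot-checked by evaluating the polynomial entries at scalar values of $\tau$ (a degree-one polynomial identity holds if it holds at two points):

```python
# toy check of hyper-regularity for P(F) = [tau, -1] (constant coefficients,
# so tau may be evaluated at scalars); U is unimodular since det U = 1
def P(s):
    return [[s, -1]]

def U(s):
    return [[0, 1], [-1, s]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

for s in (0, 1, 2, 3):
    assert matmul(P(s), U(s)) == [[1, 0]]                            # P(F) U = (I_1  0)
    assert U(s)[0][0] * U(s)[1][1] - U(s)[0][1] * U(s)[1][0] == 1    # det U = 1
```

Here $U$ plays the role of the unimodular matrix of \eqref{hyper-reg:eq}; its inverse $\left(\begin{smallmatrix}\tau & -1\\ 1 & 0\end{smallmatrix}\right)$ is again polynomial in $\tau$.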
Let us denote by $\mathcal{S}_{F}$ the subset of $\mathfrak{X}_0$ where $P(F)$ is hyper-singular.
The following proposition clarifies some previous results of \cite{Levine-09,Levine-11} in the context of flat systems at a point:
\begin{proposition}\label{hyperreg:prop}
If system \eqref{eq::implicit_form} is flat at the point $\ol{x}_0 \in \mathfrak{X}_0$, then there exists a neighborhood $V$ of $\ol{x}_0$ where $P(F)$ is hyper-regular.
\end{proposition}
\begin{proof}
Assume that system \eqref{eq::implicit_form} is flat at the point $\ol{x}_0\in \mathfrak{X}_0$. Then, denoting as before $\ol{y} \triangleq (y,\dot{y}, \ddot{y},\ldots)$ and $\ol{x} \triangleq (x,\dot{x}, \ddot{x},\ldots)$, by definition, there exists a neighborhood $V$ of $\ol{x}_0$ and a flat output $\ol{y} = \Psi(\ol{x}) \triangleq (\Psi_0(\ol{x}), \Psi_1(\ol{x}), \Psi_2(\ol{x}),\ldots) \in \Psi(V) \subset \mathbb R^m_{\infty}$ for all $\ol{x} \in V$ and conversely, $\ol{x} = \Phi(\ol{y})\triangleq (\Phi_0(\ol{y}), \Phi_1(\ol{y}), \Phi_2(\ol{y}),\ldots)$ for all $\ol{y} \in \Psi(V)$ such that
$F(\Phi_0(\ol{y}), \Phi_1(\ol{y})) = F(\Phi_0(\ol{y}), \tau\Phi_0(\ol{y})) \equiv 0$.
Taking differentials, we show that $dy$ is a flat output of the variational system.
Considering the Jacobian matrix $d\Phi_0(\ol{y})$ (resp. $d\Psi_0(\ol{x})$) of the 0th order component $\Phi_0$ (resp. $\Psi_0$) of $\Phi$ (resp. $\Psi$), we denote by $P(\Phi_0)$ (resp. $P(\Psi_0)$) its polynomial matrix form with respect to $\tau$ (resp. w.r.t. $\tau_{\mathfrak{X}}$) (see \cite{Levine-09,Levine-11}).
Since $d\ol{y} = d\Psi(\ol{x})d\ol{x}$ and $d\ol{x}= d\Phi(\ol{y})d\ol{y}$, we get that $dx = P(\Phi_0) dy \in \mathrm{T}^{\ast}V$, $dy= P(\Psi_0)dx \in \mathrm{T}^{\ast}\Psi(V)$, $P(F)P(\Phi_0)\equiv 0$, and that $P(\Phi_0)$ is left-invertible, since $P(\Psi_0)P(\Phi_0)= I_m$.
We next consider the Smith-Jacobson decomposition, or diagonal decomposition \cite[Chap. 8]{Cohn}, of $P(F)$: there exists an $(n-m)\times (n-m)$ unimodular matrix $W$, an $n\times n$ unimodular matrix $U$ and an $(n-m)\times(n-m)$ diagonal matrix $\Delta$ such that $WP(F)U= \left( \begin{array}{cc}\Delta&0\end{array}\right)$. Partitioning $U$ into $\left(\begin{array} {cc} U_1&U_2\end{array}\right)$, we indeed get $WP(F)U_1=\Delta$ and $WP(F)U_2=0$, or $P(F)U_2=0$ since $W$ is unimodular. Thus, by elementary matrix algebra, taking account of the independence of the columns of both $U_2$ and $P(\Phi_0)$, one can choose $U$ such that $U_2=P(\Phi_0)$.
Following \cite{Fl-scl2,Levine-11} (see also \cite{ACLM-scl} in a more general context), we introduce the \emph{free} differential module $\mathfrak{K}[dy]$ finitely generated by $dy_1,\ldots, dy_m$ over the ring $\mathfrak{K}$ of meromorphic functions from $\mathfrak{X}_0$ to $\mathbb R$ and the differential quotient module $\mathfrak{H} \triangleq \mathfrak{K}[dx] / \mathfrak{K}[P(F)dx]$ where $\mathfrak{K}[P(F)dx]$ is the differential module generated by the rows of $P(F)dx$. Taking an arbitrary non zero element $z= (z_1, \ldots, z_m)$ in $\mathfrak{K}[dy]$, and its image $\xi= P(\Phi_0)z$, we immediately get $P(F)\xi= P(F)P(\Phi_0)z= 0$ which proves that $\xi$ is equivalent to zero in $\mathfrak{H}$. Since $U = \left(\begin{array} {cc} U_1&P(\Phi_0)\end{array}\right)$ is unimodular, it admits an inverse $V= \left(\begin{smallmatrix} V_1\\EuScript{V}_2\end{smallmatrix}\right)$ and thus $U_1V_1 + P(\Phi_0)V_2= I_n$. Multiplying on the left by $WP(F)$ and on the right by $\xi$, and using the relation $P(F)P(\Phi_0)= 0$, we get $0= WP(F)\xi= WP(F)U_1V_1\xi + WP(F)P(\Phi_0)V_2\xi=WP(F)U_1V_1\xi$. Consequently, recalling that $WP(F)U_1 = \Delta$, we have that $\zeta \triangleq V_1\xi = V_1P(\Phi_0)z$ satisfies $0= WP(F)U_1\zeta= \Delta \zeta$. Consequently, if the entries of the diagonal matrix $\Delta$ contain at least one polynomial of degree larger than 0 with respect to $\tau$, say $\delta_i$ for some $i = 1, \ldots, n-m$, then $\delta_i \zeta_i=0$, and since $\zeta_i\in \mathfrak{K}[dy]$, we have proven that the non zero component $\zeta_i$ is a torsion element of $\mathfrak{K}[dy]$, thus leading to a contradiction with the fact that $\mathfrak{K}[dy]$ is free (see \textit{e.g.~} \cite[Theorem 7.3, Chap. III]{Lang_02} or \cite[Corollary 2.2, Chap. 8, Sec. 8.2]{Cohn}). 
Therefore, the entries of the matrix $\Delta$ must belong to $\mathfrak{K}$, which implies that there exists a submatrix $U'_1$ such that $U' \triangleq \left(\begin{array}{cc}U'_1&P(\Phi_0)\end{array}\right)$ is unimodular and satisfies $WP(F)U'= \left( \begin{array}{cc}I_{n-m}&0\end{array}\right)$, and thus, according to \cite{Antritter-Middeke-09} or \cite[Proposition 1]{ACLM-scl}, that $P(F)$ must be hyper-regular in the considered neighborhood. \end{proof}
\begin{rem}
The above proof may be summarized by the following diagram of exact sequences: $$
\begin{array}{ccccccc} 0&\longrightarrow&\mathbb R^m_{\infty}& \begin{array}{c}{\scriptstyle \Phi}
\\\longrightarrow
\\\longleftarrow
\\{\scriptstyle \Psi}\end{array}&\mathfrak{X}_0& \stackrel{\scriptstyle{F}}{\longrightarrow}&0
\\ &&\hspace{-0.9em}{\scriptstyle d} \downarrow& &\hspace{0.5em}\downarrow {\scriptstyle d}&&
\\ 0&\longrightarrow&\mathrm{T}\mathbb R^m_{\infty}&\begin{array}{c} {\scriptstyle d\Phi}
\\\longrightarrow
\\\longleftarrow
\\{\scriptstyle d\Psi}\end{array}&\mathrm{T}\mathfrak{X}_0&\stackrel{\scriptstyle{P(F)}}{\longrightarrow} &0 \end{array} $$
Since $\mathrm{T}\mathbb R^m_{\infty}$ is isomorphic to the free differential module $\mathfrak{K}[dy]$, $\mathrm{T}\mathfrak{X}_0$, which may also be seen as a differential module, is necessarily free. In other words, the kernel of $P(F)$ must be equal to the image of $\mathrm{T}\mathbb R^m_{\infty}$ by the one-to-one linear map $d\Phi$, thus sending a basis of $\mathrm{T}\mathbb R^m_{\infty}$ (flat outputs) to a basis of $\mathrm{T}\mathfrak{X}_0$. \end{rem}
\begin{rem} Due to the Smith-Jacobson decomposition, the hyper-regularity property gives a practical row-reduction algorithm to compute $\mathcal{S}_{F}$ (see \cite{Antritter-Middeke-09} and the car example in section~\ref{intrinsic-subsec} below). The hyper-singular set is then deduced by complementarity.
\end{rem}
According to Proposition~\ref{hyperreg:prop}, it is clear that on $\mathcal{S}_{F}$, the system cannot be flat.
We thus have the following straightforward result:
\begin{theorem}\label{intrinsic:th} The set $\mathcal{S}_{F}$ is contained in the set of flatness intrinsic singularities of the system. \end{theorem}
In fact (see \cite{Fl-scl2,Levine-09}), $\mathcal{S}_{F}$ corresponds to the points where the system is no longer \mbox{F-controllable}, i.e. controllable in the sense of free modules, and therefore non flat (see \cite{CLM_91,FLMR_95,FLMR_99,Levine-09}). As a consequence of this theorem, the points where the matrix $P(F)$ is hyper-singular are automatically intrinsic singularities of the system.
Note that, at equilibrium points, F-controllability boils down to first order controllability, \textit{i.e.~} controllability of the tangent linear system. \begin{corollary}\label{singularequilibrium:cor} The set of equilibrium points that are not first order controllable is contained in the set of flatness intrinsic singularities of the system. \end{corollary}
\section{Applications: Route Planning For the Non Holonomic Car} \label{route:sec}
In this section, we show on a specific example how the theoretical analysis carried out above applies.
\subsection{Car Model} \label{sec::model}
The car (kinematic) model is made of the following set of explicit differential equations (see \textit{e.g.~} \cite{Murray_93}): \begin{equation}\label{carsys:eq} \left \{ \begin{array}{ccc} \dot{x} & = & u \cos \theta \\ \dot{y} & = & u \sin \theta \\ \dot{\theta} & = & \frac{u}{l} \tan \varphi \end{array} \right. \end{equation}
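For intuition, the explicit model \eqref{carsys:eq} can be integrated numerically. The following sketch is our own illustration (forward Euler scheme, with arbitrarily chosen step size and car length, not part of the original development); with zero steering angle the heading stays constant and the car moves along a straight line:

```python
import math

def simulate_car(x, y, theta, u, phi, l=2.0, dt=1e-3, steps=1000):
    # forward Euler integration of the kinematic car model
    for _ in range(steps):
        x += dt * u * math.cos(theta)
        y += dt * u * math.sin(theta)
        theta += dt * (u / l) * math.tan(phi)
    return x, y, theta

# straight line: phi = 0 keeps theta constant, so the car covers
# u * steps * dt = 1 unit along the x-axis
xf, yf, thf = simulate_car(0.0, 0.0, 0.0, u=1.0, phi=0.0)
```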
\begin{figure}
\caption{Car Model: the state vector is made of the coordinates $(x,y)$ of the rear axle's center and of the angle $\theta$ between the car's axis and the x-axis. The controls are the speed $u$ and the angle $\varphi$ between the wheels' axis and the car's axis. The length $l$ is the distance between the two axles.}
\label{fig::car_model}
\end{figure}
Details about the notations are given in the caption of figure~\ref{fig::car_model}. In explicit form, the system evolves in the manifold $\mathfrak{X}_1 = \mathbb R^2 \times \mathbb S^1 \times \mathbb R \times \mathbb S^1$ where the variables are $(x,y,\theta,u,\varphi)$. For the sake of clarity, we note $\mathfrak{X}_{11} = \mathbb R^2 \times \mathbb S^1$ for the space of state variables $(x,y,\theta)$ and $\mathfrak{X}_{12} = \mathbb R \times \mathbb S^1$ for the space of control variables $(u,\varphi)$. The tangent bundle of $\mathfrak{X}_{11}$ is denoted by $\mathrm{T} \mathfrak{X}_{11}$. This system can thus be seen as the zero set in $\mathrm{T} \mathfrak{X}_{11} \times \mathfrak{X}_{12}$ of the following function: $$ \mathfrak{F}(x,y,\theta,\dot{x},\dot{y},\dot{\theta},u,\varphi) = \left ( \begin{array}{c} \dot{x} - u \cos \theta \\ \dot{y} - u \sin \theta \\ \dot{\theta} - \frac{u}{l} \tan \varphi \end{array} \right ) $$
As in section~\ref{sec::atlas} and again following~\cite{Levine-09,Levine-11}, we consider the local implicit representation of the system, obtained by projecting $\mathfrak F$ on $\mathrm{T} \mathfrak{X}_{11}$ by the canonical projection $\pi: \mathrm{T} \mathfrak{X}_{11} \times \mathfrak{X}_{12} \rightarrow \mathrm{T} \mathfrak{X}_{11}$, which amounts to eliminating the controls. In this context, the dynamics \eqref{carsys:eq} are locally equivalent to the zero set of the following function: \begin{equation}\label{carimpsys:eq} F(x,y,\theta,\dot{x},\dot{y},\dot{\theta}) = \dot{x} \sin \theta - \dot{y} \cos \theta = 0. \end{equation}
We then embed the state space associated to \eqref{carimpsys:eq} into the diffiety $\mathfrak{X} = \mathbb R^2 \times \mathbb S^1 \times \mathbb R^3_\infty$, endowed with the trivial Cartan field: $\displaystyle \tau_{\mathfrak{X}} = \sum_{i=1}^3 \sum_{j \geq 0} x_i^{(j+1)} \frac{\partial}{\partial x_i^{(j)}}$, where we have set $x_1 = x,~ x_2 = y$ and $x_3 = \theta$.
The system trajectories now live in $\mathfrak{X}_0$, the subset of $\{\overline{x}\in \mathfrak{X} \mid \tau_{\mathfrak{X}}^{k} F =0, \forall k \in \mathbb N\}$, where we have excluded the set $\mathfrak{Z} \triangleq \{(x,y,\theta,\dot{x},\dot{y},\dot{\theta}) \in \mathrm{T} \mathfrak{X}_{11} \mid \dot{x} = \dot{y} = 0, \dot{\theta} \neq 0\}$ of points of $\mathrm{T}\mathfrak{X}_{11}$ where the fibers associated to $\pi$ are empty, \textit{i.e.~} the points of $\mathrm{T} \mathfrak{X}_{11}$ for which there exist no $u$ and $\varphi$ such that $F(x, \dot{x})=0$ (see section~\ref{sec::atlas}). Thus $$\mathfrak{X}_0 \triangleq \{\overline{x}\in \mathfrak{X} \mid \tau_{\mathfrak{X}}^{k} F =0, \forall k \in \mathbb N\} \setminus \mathfrak{Z}.$$
\subsection{Lie-B\"acklund Atlas for the Car Model}\label{atlas-car:subsec}
We now define an atlas on $\mathfrak{X}_0$ by simply enumerating the charts, as in \cite{CE_14,CE_17} in the context of quadcopters. Each chart is defined on an open set associated to a local Lie-B\"acklund isomorphism $\psi_i$ from $\mathfrak{X}_0$ to $\mathbb R^2_\infty$ with local inverse denoted by $\phi_i : \mathbb R^2_\infty \rightarrow \mathfrak{X}_0$. For simplicity's sake, we only define $\phi_i$ by its first three components. The other ones are deduced by differentiation, i.e. by applying $\tau_{\mathfrak{X}}$ to them an arbitrary number of times. A similar abuse of notation is used for the definition of $\psi_i$. A point in $\mathfrak{X}_0$ is denoted by $\mathfrak{x}$.
\begin{enumerate} \item Over $U_1 \triangleq \{\dot{x} \neq 0\}$, we take $y_1 = (x,y) = \psi_1(\mathfrak{x})$ and the inverse Lie-B\"acklund transform is given by: $$ \phi_1 = \left (\begin{array}{c} x \\ y \\ \tan^{-1}(\frac{\dot{y}}{\dot{x}}) \end{array} \right ) $$
\item Over $U_2 \triangleq \{\dot{y} \neq 0\}$, we take $y_2 = (x,y) = \psi_2(\mathfrak{x})$ and the inverse Lie-B\"acklund transform is given by: $$ \phi_2 = \left (\begin{array}{c} x \\ y \\ \mbox{cotan}^{-1}(\frac{\dot{x}}{\dot{y}}) \end{array} \right ) $$
\item Over $U_3 \triangleq \{\dot{\theta} \neq 0\}$, we take $y_3 = (\theta,x\sin \theta - y\cos \theta) = \psi_3(\mathfrak{x})$. Here for the sake of simplicity, we shall denote $(z_1, z_2)$ the components of $y_3$. In that case the inverse Lie-B\"acklund transform is given by: $$ \phi_3 = \left (\begin{array}{c} \frac{\dot{z}_2}{\dot{z}_1} \cos z_1 + z_2 \sin z_1 \\ \frac{\dot{z}_2}{\dot{z}_1} \sin z_1 - z_2 \cos z_1 \\ z_1 \end{array} \right ) $$
\item Finally note that the above charts do not contain the set $V = \mathfrak{X}_0 \setminus \left( \bigcup_{i=1}^3 U_i \right) = \{\dot{x} = \dot{y} = \dot{\theta} = 0\}$, which corresponds to the set of equilibrium points of the system. Note that, by the definition of $\mathfrak{X}_0$, $\dot{x} = \dot{y} = 0$ implies $\dot{\theta} = 0$. Therefore, $V= \mathfrak{X}_0 \setminus \left( \bigcup_{i=1}^3 U_i \right) = \{\dot{x} = \dot{y} = 0\}$ \end{enumerate}
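On the overlap $U_1 \cap U_2 = \{\dot{x}\neq 0, \dot{y}\neq 0\}$, the third components of $\phi_1$ and $\phi_2$ must recover the same heading angle $\theta$, up to the branch of the inverse tangent. A quick numerical spot-check of this compatibility (the helper names are ours, not from the paper):

```python
import math

def theta_chart1(xdot, ydot):
    # third component of phi_1, valid on U_1 = {xdot != 0}
    return math.atan(ydot / xdot)

def theta_chart2(xdot, ydot):
    # third component of phi_2, valid on U_2 = {ydot != 0};
    # arccot(z) = pi/2 - arctan(z) on the principal branch
    return math.pi / 2 - math.atan(xdot / ydot)

# on the overlap both charts recover the same theta
t1 = theta_chart1(0.3, 0.7)
t2 = theta_chart2(0.3, 0.7)
assert abs(t1 - t2) < 1e-12
```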
One can check that for all $i,j$, $Im(\phi_i) \subset \mathfrak{X}_0$ and that the $\psi_j \circ \phi_i$'s satisfy the compatibility definition of section \ref{LB-atlas:subsec} on $\mathbb R^2_\infty$. Therefore we have indeed defined an atlas of $\bigcup_{i=1}^3 U_i = \mathfrak{X}_0 \setminus \{\dot{x} = \dot{y} = 0\}$. Among other things, this allows us to conclude that the car dynamics is globally controllable provided one avoids the singular set $V$, as illustrated in section~\ref{route:sec}. Note that at this level, we are not able to conclude that the set $\{\dot{x} = \dot{y} = 0\}$ is an intrinsic flatness singularity since, according to definition~\ref{def::intrinsic-singularities} above, we still have to prove that no other atlas can contain this set, hence the importance of the next section based on the results of section~\ref{sec::criterion-intrinsic-singularities}.
\subsection{Flat Outputs and Intrinsic Flatness Singularities of the Car Example}\label{intrinsic-subsec}
One first considers the differential of the implicit equation: $$ dF = d \dot{x} \sin \theta + \dot{x} \cos \theta d \theta - d \dot{y} \cos \theta + \dot{y} \sin \theta d \theta = (\dot{x} \cos \theta + \dot{y} \sin \theta) d \theta + \sin \theta d \dot{x} - \cos \theta d \dot{y} $$
Note that, if $z$ is an arbitrary variable of the system, we have $d \dot{z} = d (\tau_{\mathfrak{X}} z) = \tau_{\mathfrak{X}} d z$, \textit{i.e.~} the exterior derivative $d$ commutes with the Cartan field $\tau_{\mathfrak{X}}$, and the matrix $P(F)$ reads: $$ P(F) = \left [ \begin{array}{ccc} (\sin \theta) \tau_{\mathfrak{X}} & - (\cos \theta) \tau_{\mathfrak{X}} & \dot{x} \cos \theta + \dot{y} \sin \theta \end{array} \right ] $$ thus satisfying $$P(F)\left( \begin{array}{c}dx\\dy\\d\theta\end{array}\right)=0$$ for all $dx, dy, d\theta$ that are differentials of the variables $x, y, \theta$ satisfying system \eqref{carimpsys:eq}.
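As a sanity check of the identity $P(F)\,(dx\;\,dy\;\,d\theta)^t = 0$, substitute the chart over $U_1$, $\theta = \tan^{-1}(\dot{y}/\dot{x})$, so that $\sin\theta = \dot{y}/\sqrt{\dot{x}^2+\dot{y}^2}$, $\cos\theta = \dot{x}/\sqrt{\dot{x}^2+\dot{y}^2}$ and $d\theta = (\dot{x}\,d\dot{y}-\dot{y}\,d\dot{x})/(\dot{x}^2+\dot{y}^2)$; the row then vanishes identically. The following numerical spot-check on random values is our own sketch:

```python
import math, random

random.seed(0)
for _ in range(100):
    xd, yd = random.uniform(0.1, 2.0), random.uniform(0.1, 2.0)
    dxd, dyd = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
    r2 = xd * xd + yd * yd
    r = math.sqrt(r2)
    sin_t, cos_t = yd / r, xd / r                 # theta from the chart on U_1
    A = xd * cos_t + yd * sin_t                   # coefficient of dtheta
    dtheta = (xd * dyd - yd * dxd) / r2           # differential of arctan(yd/xd)
    row = sin_t * dxd - cos_t * dyd + A * dtheta  # P(F) applied to (dx, dy, dtheta)
    assert abs(row) < 1e-12
```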
Now in the context of the car system given by \eqref{carimpsys:eq}, we are ready to prove the following:
\begin{proposition}\label{intrinsic-car:prop} The intrinsic singular set of system \eqref{carimpsys:eq}, given by $\{\dot{x}=\dot{y}=0\}$, is equal to $\mathcal{S}_{F}$. \end{proposition} \begin{proof} We compute the set where $P(F)$ is not hyper-regular. Let us define $$A = \dot{x} \cos \theta + \dot{y} \sin \theta.$$ Up to a column permutation, $P(F)$ reads $[A,(\sin \theta) \tau_{\mathfrak{X}},- (\cos \theta) \tau_{\mathfrak{X}}]$. Then the first column of $U$, say $u_1$ is $u_1 = [1/A,0,0]^t$ (the superscript $^t$ denotes the transposition operator). The second one $u_2$ is given by $[P_0,P_1,P_2]^t$ where $P_0, P_1, P_2$ are polynomials of $\tau_{\mathfrak{X}}$ with \sloppy$\dg{P_{0}} = 1 + \max_{i=1,2}\dg{P_{i}}$, such that $AP_0 + (\sin \theta) \tau_{\mathfrak{X}} P_1 - (\cos \theta) \tau_{\mathfrak{X}} P_2 = 0$, or $P_0= -\frac{1}{A}\left( (\sin \theta) \tau_{\mathfrak{X}} P_1 - (\cos \theta) \tau_{\mathfrak{X}} P_2 \right)$. The third column $u_3$ is obtained in the same way: $u_3 = \left[ P'_0,P'_1,P'_2\right]^t$ with $P'_0= -\frac{1}{A}\left( (\sin \theta) \tau_{\mathfrak{X}} P'_1 - (\cos \theta) \tau_{\mathfrak{X}} P'_2 \right)$ and $P'_1,P'_2$ such that the matrix $$\left[ \begin{array}{cc}P_1&P'_1\\P_2&P'_2\end{array}\right]$$ is unimodular. Therefore every decomposition exhibits at least one singularity defined by the vanishing of $A$. Moreover, it is readily seen that the following 0 degree choice $P_1=\sin\theta$, $P_2=-\cos\theta$, $P'_1= \cos\theta$, $P'_2=\sin\theta$ is such that $$U=\left[ \begin{array}{ccc} u_1&u_2&u_3\end{array}\right] = \left[\begin{array}{ccc} \frac{1}{A}&-\frac{1}{A}\tau_{\mathfrak{X}}&\frac{\dot{\theta}}{A}\\ 0&\sin\theta&\cos\theta\\ 0&-\cos\theta&\sin\theta \end{array}\right]$$ is singular if, and only if, $A=0$. We thus conclude that $P(F)$ is hyper-regular if and only if $A\not=0$.
Finally, the equation $A = \dot{x} \cos \theta + \dot{y} \sin \theta = 0$, combined with $F = \dot{x} \sin \theta - \dot{y} \cos \theta = 0$ leads to $\dot{x} = \dot{y} = 0$. We therefore have shown that $\mathcal{S}_F = \{\dot{x} = \dot{y} = 0\}$, in other words that the only obstruction to the hyper-regularity of $P(F)$ is a flat output singularity, hence intrinsic according to Theorem~\ref{intrinsic:th}. \end{proof}
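The last step of the proof amounts to noting that $F=0$ and $A=0$ form a linear system in $(\dot{x},\dot{y})$ whose matrix $\left(\begin{smallmatrix}\sin\theta & -\cos\theta\\ \cos\theta & \sin\theta\end{smallmatrix}\right)$ has determinant $\sin^2\theta+\cos^2\theta = 1$ for every $\theta$, so only the trivial solution remains. A minimal numerical confirmation (our own sketch):

```python
import math

# determinant of the matrix formed by the rows of F = 0 and A = 0:
# sin(t) * sin(t) - (-cos(t)) * cos(t) = 1 for every t
for k in range(16):
    th = 0.05 + k * math.pi / 8
    det = math.sin(th) ** 2 + math.cos(th) ** 2
    assert abs(det - 1.0) < 1e-12
```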
Note that this direct computation, from the variational system, of the intrinsic singularity confirms that the atlas construction of section~\ref{atlas-car:subsec} was complete, in the sense that adding more charts would not reduce the set of intrinsic singularities.
\begin{rem} Let us stress that the intrinsic singularity obtained in section \ref{atlas-car:subsec} and the planned trajectory of the next section~\ref{route:sec} do not depend on the choice of atlas and charts. Another choice, using \textit{e.g.~} the formulas given in \cite[Section 6.2.4]{Levine-09} would be equally possible, leading to a similar construction. \end{rem}
\begin{rem} In this example, we were able to prove that $\mathcal{S}_{F}$ is in fact equal to the set of intrinsic singularities of the system. It would be most interesting to have an idea of how general this situation is; however, examples where $\mathcal{S}_{F}$ does not coincide with the set of flatness intrinsic singularities of the system are presently unknown to the authors. \end{rem}
\subsection{Route Planning}\label{route-plan:subsec}
Next, we show how the previously built atlas can be used to control the car over a route along which there are several apparent and intrinsic singularities, such as the one depicted in figure~\ref{fig::car_route}.
\begin{figure}
\caption{Planned car route, parametrized by arc length.}
\label{fig::car_route}
\end{figure}
\begin{figure}
\caption{The speed corresponding to the route depicted in figure~\ref{fig::car_route}}
\label{fig::commands}
\end{figure}
\begin{figure}
\caption{The flat outputs parametrized first by arc length and then by time corresponding to the route depicted in figure~\ref{fig::car_route}}
\label{fig::flat_outputs}
\end{figure}
\begin{figure}
\caption{The angles $\theta$ and $\varphi$ parametrized by time, corresponding to the route depicted in figure~\ref{fig::car_route}. For the computation of $\varphi$, the car length has been chosen equal to $l=2$\,m.}
\label{fig::theta_phi}
\end{figure}
This route has been defined in several steps. First, the waypoints $A$ to $K$ were chosen in the $(x,y)$-plane so as to start from the equilibrium point $A$ (intrinsic singularity) along the $y$-axis, which is an apparent singularity for $y_1$ (see section~\ref{atlas-car:subsec}). The car accelerates up to $B$ and then travels at constant speed up to $C$, where it starts making a right turn up to $D$. The route between $C$ and $D$ has been designed by univariate spline fitting in order to join the previous vertical line to the horizontal segment $DE$, an apparent singularity for $y_2$. The next segment $FG$, after the arc $EF$, again designed by spline fitting, corresponds to a constant heading angle $\theta$, an apparent singularity for $y_3$. Finally, on the arc $HJ$, the car speed remains constant and then linearly decreases from $J$ to the end point $K$, which is an equilibrium point, thus an intrinsic singularity.
The whole route has been parametrized, in a first step, by its arc length variable on the interval $[0,L]$, with unit speed, in order to allow the design of an arbitrary speed profile over time.
The trajectory design is done according to the flatness-based method described in \cite{rouchon-et-al-ecc93,rouchon-et-al-cdc93} on each route section. The flat output used is $y_2$ on $AC$, $y_1$ on $CE$, indifferently $y_1$ or $y_2$ on $EG$, and $y_1$ on $GK$ since the component $y$ attains its minimum on this arc, thus with $\dot{y}=0$.
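This flatness-based design rests on recovering the state and controls from derivatives of the flat output: away from the singular set $\dot{x}=\dot{y}=0$, $\theta = \tan^{-1}(\dot{y}/\dot{x})$, $u = \pm\sqrt{\dot{x}^2+\dot{y}^2}$ and $\tan\varphi = l\,\dot{\theta}/u$. The following sketch of this recovery is our own illustration (the function name and the circular test trajectory are not from the paper):

```python
import math

def car_from_flat_output(xd, yd, xdd, ydd, l=2.0):
    # recover (theta, u, phi) from flat output derivatives, valid away
    # from the singular set xd = yd = 0 (positive speed branch chosen)
    theta = math.atan2(yd, xd)
    u = math.hypot(xd, yd)
    thetadot = (xd * ydd - yd * xdd) / (xd ** 2 + yd ** 2)
    phi = math.atan(l * thetadot / u)   # from thetadot = (u / l) tan(phi)
    return theta, u, phi

# circle of radius R traversed at unit speed: x = R cos(t/R), y = R sin(t/R)
R, t = 5.0, 0.3
xd, yd = -math.sin(t / R), math.cos(t / R)
xdd, ydd = -math.cos(t / R) / R, -math.sin(t / R) / R
theta, u, phi = car_from_flat_output(xd, yd, xdd, ydd, l=2.0)
# unit speed, and a constant steering angle atan(l / R)
```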
The obtained speed profile of the car is shown in figure~\ref{fig::commands}.
For the computation of $\varphi$, we exclude the end points where the speed vanishes and where, consequently, $\varphi$ is only asymptotically defined. See figure~\ref{fig::theta_phi}. Those points, which are indeed intrinsic singularities, can be approached as closely as we want, but exactly stopping at them with a prescribed orientation and bounded controls is impossible.
\section{Concluding Remarks}\label{concl:sec}
In this paper, the concepts of intrinsic and apparent flatness singularities have been defined. These notions are of paramount importance for global trajectory planning, namely planning through apparent singularities while avoiding intrinsic ones, with the possibility of approaching the latter as closely as desired.
We have also shown that intrinsic singularities include a remarkable set, namely the points where the matrix $P(F)$ of the variational system, which plays a major role in the process of flat output computation, is hyper-singular.
This analysis is illustrated by the global motion planning of a non holonomic car. In this context, we have exhibited an atlas of flat outputs and a complex trajectory safely passing through all possible charts of this atlas.
Note that this approach may be applied in the same way to other flat systems which do not belong to the class of nonholonomic systems. Moreover, it might be possible to extend it to the computation of the largest reachable set of a system.
\section*{References}
\end{document}
\begin{document}
\footnotetext{(M.B.A, G.E.M., R.V.) IMECC - UNICAMP, Departamento de Matemática, Rua Sérgio Buarque de Holanda, 651, 13083-970 Campinas-SP, Brazil \\ \textit{E-mail addresses: ra163508@ime.unicamp.br, mantovani.gabriel@gmail.com, varao@unicamp.br }}
\title{Chain Recurrence and Positive Shadowing in Linear Dynamics}
\begin{abstract}
We study positive shadowing and chain recurrence in the context of linear operators acting on Banach spaces, or even on normed vector spaces. We show that for linear operators there is only one chain recurrent set, and that this set is a closed invariant subspace. We prove that every chain transitive linear dynamical system with the positive shadowing property is frequently hypercyclic and, as a corollary, we obtain that every hypercyclic linear dynamical system with the positive shadowing property is frequently hypercyclic. \end{abstract}
\textit{Keywords: Chain recurrence, positive shadowing, frequently hypercyclic, non wandering set.}
\section{Introduction}
Linear Dynamics is the study of linear operators on topological vector spaces. It is relatively simple to describe the dynamical behavior of any linear operator on a finite dimensional vector space. However, when the dimension of the vector space is infinite it turns out that the dynamical behavior of linear operators becomes very rich. To grasp how rich the dynamics can be we mention Feldman's result \cite{feldman} in which he proves that there is a linear operator $T:X \rightarrow X$ on a Banach space $X$ which ``contains'' all topological dynamical systems on compact spaces. More precisely, given a continuous map $f:M \rightarrow M$ on a compact metric space $M$, there is a compact $T$-invariant subset $Y$ in $X$ such that $f$ is conjugate to $T|_Y$.
In linear dynamics one is often interested in the behavior of the orbits of a system. For instance, one might ask whether a system has a dense orbit. In topological dynamics a point whose orbit is dense is called a \emph{transitive point}, while in linear dynamics we call such a point a \emph{hypercyclic vector}. A system which has a hypercyclic vector is called a \textit{hypercyclic system}. Linear Dynamics is not solely influenced by Dynamical Systems; it is, of course, also influenced by Functional Analysis. That is why its definitions do not necessarily follow those typically found in compact dynamics. A final remark on the importance of linear dynamics is the classical open problem of Functional Analysis, the ``Invariant Subspace Problem'' (see \cite{dynamicsoflinearoperators}), which can be formulated in terms of closures of orbits of points, i.e., from a Dynamical Systems point of view.
There has been a recent effort to use tools that have helped dynamicists understand compact dynamical systems in the context of linear dynamical systems. As examples of such effort we may cite \cite{Messaoudi} and \cite{bernardes2020shadowing}, which applied the concepts of shadowing and hyperbolicity to linear dynamical systems, and \cite{brian}, which used entropy in the study of translation operators. The main focus of this paper is to use the concepts of positive shadowing and chain recurrence in the study of linear dynamical operators.
We may say that both shadowing and chain recurrence study how the system responds to pseudo trajectories. In words, a pseudo trajectory (or chain) is almost a piece of an orbit of the system. The difference from an actual orbit is that at each iteration of the dynamics a possible offset is added to the result. This notion of chain occurs naturally in computational dynamics, where almost any orbit calculated in a computer will be a pseudo orbit, since few computer operations are error free. The formal definitions of chain recurrence and positive shadowing will be given in their respective sections.
In this manuscript $X$ will always be a normed vector space (frequently a Banach space). We will denote by $\mathbb{K}$ the field over $X$, where $\mathbb{K} = \mathbb{R}$ or $\mathbb{K} = \mathbb{C}$. The symbol $\mathbb{N}$ denotes the set of natural numbers including $0$, that is, $\mathbb{N}=\{0,1,2,\ldots\}$.
We will now define some of the terms used in this article. Let $Y$ be a topological space and $f: Y \to Y$ a continuous function. We say that $f$ is \textbf{transitive} (or topologically transitive) if, for any pair of non-empty open sets $U, V \subset Y$, there is a natural number $N > 0$ such that $f^{N}(U) \cap V \neq \emptyset$. We say that $f$ is \textbf{topologically mixing} if, for any pair of non-empty open sets $U, V \subset Y$, there is $N \in \mathbb{N}$ such that, for every $n > N$, $f^{n}(U) \cap V \neq \emptyset$. It is a consequence of Birkhoff's Transitivity Theorem \cite{Birkhoff} that a linear dynamical system $(X,T)$, with $X$ a separable Banach space and $T : X \to X$ a bounded linear operator, is transitive if, and only if, it is hypercyclic.
An operator is frequently hypercyclic if it has a vector whose orbit visits each open set with positive lower density. Formally, the \textbf{lower density} of a subset of natural numbers $A$ is defined by \[\underline{dens}(A):=\liminf_{N\rightarrow\infty}\frac{\#(A\cap[1,N])}{N},\] where $\#(B)$ denotes the cardinality of the set $B$. A linear operator $T:X\rightarrow X$ on a separable normed space $X$ is \textbf{frequently hypercyclic} if there is $x\in X$ such that \[\underline{dens}(\{n\in\mathbb{N}\;:\;T^n(x)\in V\})>0\] for every non-empty open subset $V$ of $X$.
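Since the lower density is defined by a counting ratio, it is easy to estimate numerically. The following sketch is an illustration only (not part of the paper; the helper name is ours): it evaluates $\#(A\cap[1,N])/N$ for two standard sets, one of lower density $1/2$ and one of lower density $0$.

```python
# Illustrative sketch (not from the paper): estimating the lower density
# of a subset A of the natural numbers via the ratio #(A ∩ [1, N]) / N.

def density_ratio(A, N):
    """Return #(A ∩ [1, N]) / N, where A is given by a membership test."""
    return sum(1 for n in range(1, N + 1) if A(n)) / N

# The even numbers have lower density 1/2 ...
evens = lambda n: n % 2 == 0
# ... while the powers of 2 have lower density 0.
powers_of_two = lambda n: n & (n - 1) == 0

for N in (10**3, 10**4, 10**5):
    assert abs(density_ratio(evens, N) - 0.5) < 0.01
    assert density_ratio(powers_of_two, N) < 0.02
```

The liminf in the definition is approximated here by evaluating the ratio along a few increasing values of $N$.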
Our main result is
\begin{maintheorem} \label{gymstheorem} Let $X$ be a separable Banach space, and let $T : X \to X$ be an operator which is chain transitive and has the positive shadowing property. Then $T$ is topologically mixing and frequently hypercyclic. \end{maintheorem}
Since hypercyclic systems are chain transitive, an immediate consequence of the previous theorem is the following corollary.
\begin{corollary} \label{hyperimpliesfreq.hyper} Let $X$ be a separable Banach space. If $T : X \to X$ is hypercyclic and has the positive shadowing property, then $T$ is frequently hypercyclic and topologically mixing. \end{corollary}
We also prove
\begin{maintheorem}\label{theo:nonwandering} \label{nonwanderingfreq} Let $X$ be a normed vector space and $T$ a bounded linear operator on $X$ that has the positive shadowing property. Let $\Omega$ be the non-wandering set of $T$. Then the following hold:
\begin{enumerate}
\item $\Omega$ is the chain recurrent set of $T$;
\item $\Omega$ is a closed and invariant subspace of $X$;
\item Suppose further that $X$ is a separable Hilbert space and $T$ is self-adjoint. Then $T|_{\Omega} : \Omega \to \Omega$ is topologically mixing and frequently hypercyclic, in particular either $\Omega=\{0\}$ or $\Omega$ is infinite dimensional. \end{enumerate}
\end{maintheorem}
In Section 2 chain recurrent systems are defined and some elementary new results in the linear dynamical setting are obtained. In Section 3 we define the concept of positive shadowing and use its synergy with chain recurrence to obtain the above theorems, among other smaller results. The last section presents some open questions that emerged during the elaboration of this text, in order to motivate future research.
\section{The Chain Recurrent Subspace}
Let $(Y,d)$ be a metric space and $f: Y \rightarrow Y$ a continuous function. We say that a finite sequence $\{x_0, x_1,\ldots, x_n\}$ is an \textbf{$\epsilon$-chain} with $\epsilon>0$ if $n\in\mathbb{N}\setminus\{0\}$ and $d(f(x_i),x_{i+1})< \epsilon$ for every $ 0 \leq i < n$. Given two points $x,y\in Y$, we write $x\mathcal{R}y$ if the following holds: \begin{center}
given $\epsilon>0$ there is an $\epsilon$-chain beginning in $x$ and ending in $y$, $\{x_0=x,x_1,\ldots, x_{n-1},x_n=y\}$, and another beginning in $y$ and ending in $x$, $\{y_0=y, y_1,\ldots, y_{m-1},y_m=x\}$. \end{center} The set $CR(f)=\{x\in Y\;:\;x\mathcal{R}x\}$ is called the \textbf{chain recurrent set}. A point $x\in CR(f)$ is called a \textbf{chain recurrent point}. Restricted to $CR(f)$, the relation $\mathcal{R}$ is an equivalence relation. We will say that a dynamical system $(Y,f)$ is \textbf{chain transitive} if $x\mathcal{R} y$ for every $x,y\in Y$.
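The $\epsilon$-chain condition is finitary and can be tested numerically. As an illustration (not from the paper; the helper `is_epsilon_chain` is our own), the following sketch checks the condition $d(f(x_i),x_{i+1})<\epsilon$ for a map on the real line.

```python
# Illustrative helper (not from the paper): check whether a finite sequence
# is an epsilon-chain for a map f on the real line, i.e. whether
# |f(x_i) - x_{i+1}| < eps holds for every 0 <= i < n.

def is_epsilon_chain(f, points, eps):
    return all(abs(f(points[i]) - points[i + 1]) < eps
               for i in range(len(points) - 1))

# For the contraction f(x) = x/2, a genuine orbit segment is an eps-chain
# for every eps > 0,
f = lambda x: x / 2.0
orbit = [1.0, 0.5, 0.25, 0.125]
assert is_epsilon_chain(f, orbit, eps=1e-9)

# while perturbing one step by 0.3 breaks the chain condition for eps = 0.1.
perturbed = [1.0, 0.8, 0.25, 0.125]
assert not is_epsilon_chain(f, perturbed, eps=0.1)
```

A chain recurrent point is then a point admitting, for every $\epsilon>0$, such a chain that starts and ends at the point itself.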
As an example of a chain transitive system, one may easily check that the identity operator on any normed vector space is chain transitive; any hypercyclic operator on any normed vector space is chain transitive as well. In compact dynamics it is a relevant problem to find how many different chain recurrent classes there are in the space. The next two results state that a linear dynamical system has only one chain recurrent class, which we will simply call the chain recurrent set.
\begin{theorem} \label{spanofchainrecurrent}
Let $X$ be a normed vector space and $T$ a bounded linear operator acting on $X$. If $x \in X$ is chain recurrent, then every point of span$[x]$ is chain recurrent and span$[x]$ is contained in only one chain recurrent class. \end{theorem} \begin{proof}
Since $x$ is chain recurrent, given $\eps> 0$ there is an $\eps$-chain \[\{x_0=x, x_1, \ldots, x_n=x\},\] that is, $\|Tx_0 - x_1 \| < \eps,\ldots,\| T x_{n-1} - x \| < \eps$. One can readily verify that if $|\lambda| \in (0,1]$
then the finite sequence $\{\lambda x_0, \lambda x_1, \ldots, \lambda x_n\}$ is a finite $\eps$-chain that begins and ends in $\lambda x$. Therefore $\lambda x$ is chain recurrent for any $|\lambda| \in (0,1]$. We know that $0$ is always chain recurrent, since $T(0)=0$, therefore $\lambda x$ is chain recurrent for $|\lambda| \in [0,1]$.
The case $|\lambda| > 1$ is analogous: given $\eps > 0$, there is an $\eps/|\lambda|$-chain starting and ending in $x$, say $\{x_0, x_1, \ldots ,x_n\}$, and it is not hard to see that $\{\lambda x_0, \lambda x_1, \ldots , \lambda x_n\}$ is an $\eps$-chain starting and ending in $\lambda x$.
We have proved above that every point of the span$[x]$ is chain recurrent. It remains to prove that span$[x]$ belongs to just one recurrent class. Given $\lambda \in \mathbb{K}$ and $\eps>0$, we want to find an $\eps$-chain that begins in $x$ and ends in $\lambda x$. Let $k \in \mathbb{N}$ be such that $\displaystyle\dfrac{\|x-\lambda x\|}{k}<\frac{\eps}{2}$. For each $j\in\{0,\ldots, k\}$, consider $$ \displaystyle x^j=\left(1-\frac{j}{k}\right)x+\frac{j}{k}\lambda x \in \mbox{span}[x]. $$
We proved above that $x^j$ is chain recurrent. Now we can choose for each $j\in\{0,1,\ldots, k-1\}$ an $\eps/2$-chain $\{x_0^j=x^j, x_1^j,\ldots, x_{{n_j}-1}^j, x_{n_j}^j=x^j\}$. The sequence $$\{x_0^0=x, x_1^0,\ldots, x_{n_0-1}^0, x_0^1=x^1, x_1^1,\ldots, x_{n_1-1}^1,\ldots, x_0^{k-1}=x^{k-1},x^{k-1}_1,\ldots, x_{n_{k-1}-1}^{k-1}, x^k=\lambda x\}$$ is an $\eps$-chain from $x$ to $\lambda x$. Indeed, it is enough to show that $$\|Tx_{n_j-1}^j-x^{j+1}\|<\eps,\;\forall\;j\in\{0,1,\ldots, k-1\}.$$ This follows from the fact that
$$\|x^j-x^{j+1}\| = \left\|\left(1-\frac{j}{k}\right)x+\frac{j}{k}\lambda x - \left(1-\frac{j+1}{k}\right)x-\frac{j+1}{k}\lambda x\right\| = \frac{1}{k}\|x-\lambda x\|<\frac{\eps}{2}$$ and
$$\|Tx_{n_j-1}^j-x^{j+1}\|\leq \|Tx_{n_j-1}^j - x^j\|+\|x^j-x^{j+1}\|<\dfrac{\eps}{2}+\dfrac{\eps}{2}= \eps,$$ for all $j\in\{0,\ldots, k-1\}$. It remains to construct the reverse path, from $\lambda x$ to $x$. We saw that $\lambda x$ is chain recurrent. Then, if $\lambda\neq 0$, we can apply the previous argument to create an $\eps$-chain from $y=\lambda x$ to $\lambda'y=x$ with $\lambda'=\frac{1}{\lambda}$. If $\lambda=0$, that is, $\lambda x=0$, consider ${\lambda'}\neq 0$ such that $\|{\lambda'}x\|\leq\eps/2$ and use the previous argument to create an $\eps/2$-chain, $\{x_0=\lambda'x,x_1,\ldots, x_{n-1}, x_n= x\}$, which begins in $\lambda'x$ and ends in $x$. This yields an $\eps$-chain, $\{0,x_0=\lambda'x,x_1,\ldots, x_{n}=x\}$, starting at the origin and ending in $x$.
\end{proof}
\begin{corollary} \label{onerecurrenceclass} If $X$ is a normed vector space and $T$ is a bounded linear operator acting on $X$, then $T$ has only one chain recurrent class. \end{corollary}
\noindent \textbf{Proof:} It is clear that $0$ is chain recurrent for any linear operator $T$, therefore the chain recurrent set of $T$ is non-empty. Now, due to the previous result, every chain recurrent class has the origin of $X$ in common. Since these chain recurrent classes have a common point, they are the same.
\qed
In view of Corollary \ref{onerecurrenceclass}, we will refer to the chain recurrent class of an operator as the chain recurrent set. This corollary says that a bounded linear operator is chain transitive if, and only if, every point of the space is chain recurrent. Notice that the chain recurrent set of any operator is non-empty, since the origin is always contained in this set.
Let $Y$ be a set and $f : Y \to Y$ a function. A subset $A$ of $Y$ is \textbf{invariant} for $f$ if $f(A) \subset A$. The next corollary tells us that the chain recurrent set is a closed invariant subspace.
\begin{corollary} \label{chainspace} The chain recurrent set is a closed and invariant subspace. If $T$ is invertible, $T(CR(T))= CR(T)$. \end{corollary}
\noindent \textbf{Proof: } Theorem \ref{spanofchainrecurrent} tells us that $CR(T)$ is closed under scalar multiplication. Let $x,y \in CR(T)$, $\epsilon > 0$, and let $\{x_0=x,x_1,\ldots,x_n=0\}$, $\{y_0=y,y_1,\ldots,y_m=0\}$ be two $\epsilon/2$-chains that go from $x$ to $0$ and from $y$ to $0$, respectively. We may suppose $m>n$ (padding the shorter chain with zeroes if necessary, since $T0=0$), so that $$\{x+y=x_0+y_0,x_1+y_1, \ldots , x_n + y_n, 0 + y_{n+1} , \ldots ,0 + y_{m} = 0\}$$ is an $\epsilon$-chain that connects $x+y$ to $0$. A similar idea may be used to show that one can go from zero to $x+y$ with an $\epsilon$-chain. Therefore $x+y$ is chain recurrent.
The fact that the chain recurrent set is closed and invariant is true for topological dynamical systems in general. But we include the proof of these facts for the reader's convenience.
To show that $CR(T)$ is closed consider $\{x_n\}$ a sequence in $CR(T)$ converging to a point $x$ in $X$. Given $\epsilon > 0$ choose $x_n$ such that $\|x_n - x\| < \epsilon/2$ and $\|Tx_n-Tx\|<\eps/2$ and let $\{y_0=x_n,y_1,\ldots,y_N=x_n\}$ be an $\epsilon/2$-chain that goes from $x_n$ to $x_n$. It is immediate to see that $\{x,y_1,\ldots, y_{N-1},x\}$ is an $\epsilon$-chain that goes from $x$ to $x$. Therefore, $x\in CR(T)$.
We now prove the invariance. Given $x \in CR(T)$, there is an $\epsilon/(2 \max \{\|T\|,1\})$-chain, $\{x_0=x,x_1,\ldots,x_N=x\}$, that goes from $x$ to $x$. Define the chain $\{y_0=Tx, y_1=x_2 , y_2=x_3 , \ldots, y_{N-1} = x_N =x,y_N=Tx\}$. The only difficult step is to prove that $\|Ty_0 - y_1\|$ is smaller than $\eps$. But we have that $$\begin{array}{rcl}
\|Ty_0 - y_1\|&=&\|Ty_0 - x_2\|\\ \\
&=&\|T^2 x -Tx_1 + Tx_1 - x_2\|\\ \\
&\leq& \|T^2 x -Tx_1\| + \|Tx_1 - x_2\|\\ \\
&\leq&\|T\| \, \|T x -x_1 \| + \|Tx_1 - x_2\|\\ \\ &<&\epsilon. \end{array} $$
Suppose now that $T$ is invertible. To show that $T(CR(T))=CR(T)$ it is enough to prove that $T^{-1}(CR(T))\subset CR(T)$ since it is already proven that $T(CR(T))\subset CR(T)$. Let $x\in CR(T)$ and $\eps>0$. Then, there is an $\epsilon/(2 \max \{\|T^{-1}\|,1\})$-chain, $\{x_0=x,x_1,\ldots,x_N=x\}$, that goes from $x$ to $x$. The finite sequence $\{y_0=T^{-1}x, y_1=x_0=x , y_2=x_1 , \ldots, y_{N-1} = x_{N-2}, y_{N}=T^{-1}x\}$ is an $\eps$-chain from $T^{-1}x$ to $T^{-1}x$, since \[\begin{array}{rcl}
\|T(y_{N-1})-y_N\| & = &\|T(x_{N-2})-T^{-1}(x)\|\\
\\
& \leq &\|T(x_{N-2})-x_{N-1}\|+\|x_{N-1}-T^{-1}(x)\|\\
\\
&\leq&\|T(x_{N-2})-x_{N-1}\|+\|T^{-1}\|\|T(x_{N-1})-x\|\\
\\
&<&\eps. \end{array}\]
\qed
The next propositions give examples of operators that are chain transitive and operators that are not.
\begin{proposition} \label{unitaryimplieschaintransitivity}
Let $X$ be a normed vector space and $T:X \to X$ a bounded linear operator which is a surjective isometry (equivalently, $T$ is invertible and $\|T^{-1}\| = \|T\| = 1$). Then $T$ is chain transitive. In particular, if $X$ is an inner product space, every unitary operator $T:X \to X$ is chain transitive. \end{proposition}
\noindent \textbf{Proof:} Let $x\in X$ and $\epsilon > 0$. We will show that $x$ is a chain recurrent point. Choose $n \in \mathbb{N}$ such that $\dfrac{\|x\|}{n} < \dfrac{\epsilon}{2}$. Define the sequence $$\begin{array}{rcl} x_0 & = & x,\\ \\ x_1& = &T(x) + \dfrac{T^{-n+1}(x)}{n} - \dfrac{T(x)}{n},\\ \\ x_2 & = &T(x_1) + \dfrac{T^{-n+2}(x)}{n} - \dfrac{T^2(x)}{n} \;\;=\;\; T^2(x) + \dfrac{2T^{-n+2}(x)}{n} - \dfrac{2T^2(x)}{n},\\ &\vdots&\\ x_k &= &T(x_{k-1}) + \dfrac{T^{-n+k}(x)}{n} - \dfrac{T^k(x)}{n} \;\;=\;\; T^k(x) + \dfrac{kT^{-n+k}(x)}{n}- \dfrac{kT^k(x)}{n},\\ \end{array}$$
for every $1 \leq k \leq n$. Notice that $x_n = x$, and that $$
\|x_k - T(x_{k-1})\| = \left|\left| \dfrac{T^{-n+k}(x)}{n} - \dfrac{T^k(x)}{n}\right|\right| \leq \left| \left| \dfrac{T^{-n+k}(x)}{n} \right| \right| +\left| \left| \dfrac{T^{k}(x)}{n} \right| \right| < \epsilon $$ for $1 \leq k \leq n$. Therefore, $\{x_0,x_1,\ldots,x_n\}$ is an $\eps$-chain which begins and ends in $x$. This means that $x$ is a chain recurrent point.
\qed
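The chain constructed in the proof above is explicit, so it can be verified numerically. Below is a minimal sketch (our own illustration, not from the paper), taking as surjective isometry the rotation $T(z)=e^{i\theta}z$ on the complex plane and checking that $x_k = T^k(x) + \frac{k}{n}\left(T^{k-n}(x)-T^k(x)\right)$ is an $\eps$-chain that returns to $x$.

```python
import cmath

# Numerical check (illustration only) of the epsilon-chain built in the proof
# above, for the surjective isometry T(z) = e^{i*theta} * z on the complex plane:
#     x_k = T^k x + (k/n) * (T^{k-n} x - T^k x),  0 <= k <= n.

theta = 1.0
T = lambda z: cmath.exp(1j * theta) * z
Tpow = lambda z, k: cmath.exp(1j * theta * k) * z   # T^k; valid for k < 0 too

x = 3.0 + 4.0j          # ||x|| = 5
eps = 0.5
n = 25                  # chosen so that 2 * ||x|| / n = 0.4 < eps

chain = [Tpow(x, k) + (k / n) * (Tpow(x, k - n) - Tpow(x, k))
         for k in range(n + 1)]

assert chain[0] == x                                 # the chain starts at x
assert abs(chain[n] - x) < 1e-9                      # ... and returns to x
# every step satisfies ||x_k - T(x_{k-1})|| = ||T^{k-n}x - T^k x|| / n < eps
assert all(abs(chain[k] - T(chain[k - 1])) < eps for k in range(1, n + 1))
```

The step estimate is exactly the one in the proof: since $T$ is an isometry, each offset has norm at most $2\|x\|/n$.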
If $X$ is a normed vector space, a map $T: X \to X$ is \textbf{recurrent} if, for every $x \in X$, and for every open set $U \subset X$, with $x \in U$, there is some $n \in \mathbb{N} \setminus \{0\}$, such that $T^n(U) \cap U \neq \emptyset$. Clearly any recurrent operator is also chain transitive, but there are operators that are chain transitive and not recurrent. Indeed let $X = \ell_p(\mathbb{Z})$ for $1 \leq p \leq \infty$ and $T: X \to X$ be the shift $T(e_i) = e_{i-1}$. Then by the above proposition $T$ is chain transitive. For $x = e_0$ there is no $n>0$ such that $T^n(B(x,1/2)) \cap B(x,1/2) \neq \emptyset$, therefore $T$ is not recurrent.
Let $X$ be a normed vector space. We say that a bounded linear operator $T: X \to X$ is a \textbf{proper contraction} if $\|T\| < 1$ and a \textbf{contraction} if $\|T\| \leq 1$. We say that $T$ is a \textbf{proper dilation} if $T$ is invertible and $\|T^{-1}\|<1$, and a \textbf{dilation} if $T$ is invertible and $\|T^{-1}\| \leq 1$. An operator $T$ on a Banach space $B$ is said to be \textbf{hyperbolic} \cite{hyperbolic} if there is a splitting $$ B = B_s \oplus B_u, \hspace{1cm} T = T_s \oplus T_u, $$
where $B_s$ and $B_u$ are closed $T$-invariant linear subspaces of $B$, $T_s = T |_{B_{s}}$ is a proper contraction, and $T_u = T |_{B_u}$ is a proper dilation. It is common in the literature to assume that hyperbolic operators are invertible, but in this manuscript such assumption is not needed and therefore we do not assume it.
\begin{proposition}\label{CRtrivial} Let $X$ be a normed vector space and $T : X \to X$ a linear operator. If $T$ is a proper contraction, then the chain recurrent set of $T$ is the origin. \end{proposition}
\noindent \textbf{Proof: } Recall that $0$ is always in the chain recurrent set. Let $x\neq 0$. Since $\|Tx\|\leq\|T\|\,\|x\|<\|x\|$, we may choose $\delta >0$ small enough that $\epsilon := (\|x\| - \|Tx\|-\delta)(1-\|T\|)$ is positive. Consider $\{x_0=x,x_1,\ldots,x_N\}$ an $\epsilon$-chain that starts in $x$. Thus, we have that $$\begin{array}{rcl}
\|x_{N}\| &=&
\|x_{N} + (Tx_{N-1} - Tx_{N-1}) + \cdots + (T^{N-1}x_1 - T^{N-1}x_1) + (T^{N}x_0 - T^{N}x_0) \|\\ \\
&=&\|(x_{N} - Tx_{N-1}) + T(x_{N-1} - Tx_{N-2}) + \cdots + T^{N-1}(x_1 - Tx) + T^{N}x\|\\ \\
&\leq & \|(x_{N} - Tx_{N-1})\| + \|T\| \, \|x_{N-1} - Tx_{N-2}\| + \cdots + \|T^{N-1}\| \, \|x_1 - Tx\| + \|T^{N}x\|\\ \\
&\leq&\|T^N x\| + \dfrac{\epsilon}{1-\|T\|}\\ \\
&=&\|x\|-(\|Tx\|-\|T^Nx\|+\delta)\\ \\
&<& \|x\|, \end{array} $$
since $\|T^Nx\|\leq\|Tx\|$ (as $\|T\|<1$ and $N\geq 1$). In particular $x_N\neq x$, so there is no $\epsilon$-chain that starts and ends in $x$.
\qed
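The norm estimate in the proof above can be observed numerically: for a proper contraction, even an $\eps$-chain whose every step pushes outward as hard as the definition allows stays strictly below $\|x\|$ once $\eps$ is below the proof's threshold. A sketch (our own illustration; the specific numbers are arbitrary choices):

```python
# Illustration (not from the paper): for the proper contraction T(x) = x/2
# on the real line and x0 = 1, the proof's threshold is
#     eps < (||x0|| - ||T x0||) * (1 - ||T||) = 0.5 * 0.5 = 0.25.
# With eps = 0.2, the most outward-pushing admissible eps-chain never
# returns to x0; instead it converges to the fixed point of the greedy step.

T = lambda x: x / 2.0
x0 = 1.0
eps = 0.2                       # below the threshold 0.25

x = x0
for _ in range(1000):
    x = T(x) + 0.999 * eps      # offset of norm 0.1998 < eps at every step
    assert x < x0               # the chain never gets back to x0

# the greedy chain converges to (0.999 * eps) / (1 - 1/2) = 0.3996 < 1
assert abs(x - 2 * 0.999 * eps) < 1e-6
```

This matches the conclusion of the proposition: no $\eps$-chain starting at $x_0\neq 0$ can end at $x_0$, so $CR(T)=\{0\}$.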
\begin{lemma} \label{refereelemma} If $T$ is an invertible operator on the normed space $X$, then $CR(T)=CR(T^{-1})$. \end{lemma}
\noindent \textbf{Proof:}
Let $x\in CR(T)$, $\epsilon>0$ and $\{x_0=x,\ldots, x_n=x\}$ an $\epsilon/\|T^{-1}\|$-chain for $T$. It is then straightforward that $\{y_0=x, y_1=x_{n-1},\ldots, y_{n-1}=x_1,y_n=x\}$ is an $\epsilon$-chain for $T^{-1}$ from $x$ to $x$. This means that $CR(T)\subset CR(T^{-1})$ and interchanging the roles of $T$ and $T^{-1}$ we get the conclusion. \qed
\begin{corollary} \label{properdilation} If $T$ is a proper dilation, then $CR(T)=\{0\}$. \end{corollary}
\noindent \textbf{Proof:} It follows immediately from the above lemma and Proposition \ref{CRtrivial}.
\qed
The next theorem is interesting in itself and provides two corollaries. Corollary \ref{decompositionchainrecurrentset} will help us prove that $CR(T)=\{0\}$ when $T$ is hyperbolic. In \cite{fabricio} something weaker is proved, namely that $T$ is not chain recurrent when $T$ is hyperbolic. Corollary \ref{chainrecurrentselfadjoint} will be used in the proof of Theorem B.
\begin{theorem}\label{CRprojection}
Let $T$ be a bounded operator on a Banach space $X$ such that $X=M\oplus N$ where $M,N$ are $T$-invariant closed subspaces of $X$. Then $CR(T)\cap M=CR(T|_M)$. \end{theorem}
\noindent \textbf{Proof:} It is clear that $CR(T|_{M})\subset CR(T) \cap M$. We will show that $CR(T)\cap M\subset CR(T|_{M})$. By Theorem 2.10 of \cite{Brezis}, there is $\alpha>0$ such that \begin{equation}\label{brezisequation}\|m\|\leq\alpha\|m+n\| \mbox{ and }\|n\|\leq\alpha\|m+n\|\end{equation} whenever $m\in M$ and $n\in N$. Given $x\in CR(T)\cap M$ and $\eps>0$ there is an $\eps/\alpha$-chain, $\{x,x_1,x_2,\ldots,x_{n-1},x\}$, from $x$ to $x$. For each $i\in\{1,2,\ldots,n-1\}$ there are $z_i\in M$ and $w_i\in N$ such that $x_i=z_i+w_i$. We shall show that $\{x,z_1,z_2,\ldots,z_{n-1},x\}$ is an $\eps$-chain in $M$ from $x$ to $x$, which will complete the proof. Since $M$ and $N$ are $T$-invariant and $X=M\oplus N$, by (\ref{brezisequation}) we have that \[\begin{array}{l}
\|Tx-z_1\|\leq \alpha\|Tx-(z_1+w_1)\|<\alpha\dfrac{\eps}{\alpha}=\eps,\\ \\
\|Tz_i-z_{i+1}\|\leq \alpha\|T(z_i+w_i)-(z_{i+1}+w_{i+1})\|<\alpha\dfrac{\eps}{\alpha}=\eps\;\;\forall\; i\in\{1,2,\ldots,n-2\},\;\mbox{ and}\\ \\
\|Tz_{n-1}-x\|\leq \alpha\|T(z_{n-1}+w_{n-1})-x\|<\alpha\dfrac{\eps}{\alpha}=\eps.
\end{array}\] This guarantees that $\{x,z_1,z_2,\ldots,z_{n-1},x\}$ is an $\eps$-chain in $M$ beginning and ending in $x$, that is, $x\in CR(T|_{M})$.
\qed
\begin{corollary}\label{decompositionchainrecurrentset}
Let $T$ be a bounded operator on a Banach space $X$. Suppose that $X = M \oplus N$, where $M$ and $N$ are closed $T$-invariant subspaces of $X$. Then $CR(T)=CR(T|_M)\oplus CR(T|_N)$. \end{corollary}
\noindent \textbf{Proof:} Corollary \ref{chainspace} and the fact that $M$ and $N$ are closed and invariant subspaces of $X$ give us that $CR(T) \cap M$ and $CR(T) \cap N$ are closed invariant subspaces of $X$ and therefore of $CR(T)$. We know that every element $x \in CR(T)$ can be written in a unique manner as $x=m+n$, with $m \in M$ and $n \in N$. Following a similar reasoning as in the proof of Theorem \ref{CRprojection}, we have that both $m$ and $n$ are chain recurrent. Therefore $m \in CR(T) \cap M$ and $n \in CR(T) \cap N$; since $x$ is arbitrary, this implies that \begin{equation} \label{projecaodoCR} CR(T) = (CR(T) \cap M) \oplus (CR(T) \cap N) . \end{equation} By Theorem \ref{CRprojection} the above expression may be written as $$
CR(T) = CR(T|_M) \oplus CR(T|_N). $$
\qed
\begin{corollary}\label{chainrecurrentselfadjoint}
Let $T$ be a self-adjoint bounded operator on a Hilbert space $X$. Then the chain recurrent set of $T|_{CR(T)}$ coincides with $CR(T)$. In particular, $T|_{CR(T)}$ is chain transitive. \end{corollary}
\noindent \textbf{Proof:} Since $X$ is a Hilbert space and $CR(T)$ is a closed space, then $X=CR(T)\oplus (CR(T))^\perp$. We have already seen that $CR(T)$ is invariant for $T$ which implies that $(CR(T))^\perp$ is invariant for $T^*=T$. By Theorem \ref{CRprojection} $CR(T|_{CR(T)}) = CR(T)$.
\qed
The following results are immediate consequences of Corollary \ref{decompositionchainrecurrentset}.
\begin{corollary} Let $T$ be a hyperbolic operator on a Banach space $X$. Then $CR(T) = \{0\}$. \end{corollary}
\noindent \textbf{Proof:} It follows immediately from Corollaries \ref{decompositionchainrecurrentset} and \ref{properdilation} and Proposition \ref{CRtrivial}.
\qed
\begin{corollary}\label{CRforsubspaces}
Let $T$ be an operator on a Banach space $X$. Suppose that $X = M \oplus N$, where $M$ and $N$ are closed $T$-invariant subspaces of $X$. Then $T$ is chain transitive if, and only if, $T|_M$ and $T|_N$ are both chain transitive. \end{corollary}
The following example shows that $CR(T)$ can be non-trivial.
\begin{example} \label{refereeexample} Let $X$ be a non-trivial normed space, $T:X\rightarrow X$ a proper contraction and $I:X\rightarrow X$ the identity operator. The operator \[T\times I:X\times X\rightarrow X\times X,\] where $X\times X$ is endowed with any of the typical product norms, satisfies that $CR(T\times I)=\{0\}\times X$. \end{example}
The next result, which is used in the last section of this paper, asserts that chain transitivity is preserved under Cartesian product. This is an obvious consequence of Corollary \ref{decompositionchainrecurrentset} when one assumes that the spaces are Banach.
\begin{proposition} \label{multiplechainrecurrence} Let $T_1, T_2, \ldots, T_k$ be bounded chain transitive operators on normed vector spaces $X_1, X_2,\ldots, X_k$, respectively. Then the product $T_1 \times \cdots \times T_k :X_1\times \cdots \times X_k \rightarrow X_1 \times \cdots \times X_k$ is chain transitive. \end{proposition}
\noindent \textbf{Proof:} Consider any of the typical product norms in the product space. We will prove the case $k=2$; the proof of the general case follows by induction. Let $(x,y) \in X_1 \times X_2$. Given $\eps>0$, consider an $\eps$-chain $\{x_0=x,x_1,\ldots,x_n=0\}$ that connects $x$ with $0$. We can see that \[\{(x_0,y),(x_1,T_2(y)),(x_2,{T_2}^2(y)),\ldots,(x_n, {T_2}^n(y))\}\] is an $\eps$-chain that connects $(x,y)$ with $(0,{T_2}^n(y))$. Following the same reasoning we are able to create an $\eps$-chain that connects $(0,{T_2}^n(y))$ with $(0,0)$. To go from $(0,0)$ to $(x,y)$ we consider $\eps/2$-chains $\{x_0=0,x_1,\ldots,x_n=x\}$ in $X_1$ and $\{y_{0}=0,y_1,\ldots,y_m=y\}$ in $X_2$ that go from $0$ to $x$ and from $0$ to $y$, respectively. If $n=m$, the sequence $\{(x_0,y_0), (x_1,y_1),\ldots, (x_n,y_n)\}$ is an $\eps$-chain beginning in $(0,0)$ and ending in $(x,y)$. Assuming $n>m$, the finite sequence $$\{(0,0), (x_1,0), (x_2,0), \ldots,(x_{n-m},0),(x_{n-m+1},y_{1}), (x_{n-m+2},y_{2}), \ldots, (x_n,y_m) = (x,y)\}$$ is an $\eps$-chain connecting $(0,0)$ to $(x,y)$.
\qed
\section{Chain Recurrence and Positive Shadowing}
Let $(Y,d)$ be a metric space, $f : Y \to Y$ a continuous function, $\delta>0$ and $\{x_n\}_{n \in \mathbb{N}}$ a sequence in $Y$. We say that $\{x_n\}_{n \in \mathbb{N}}$ is a \textbf{positive $\delta$-pseudo orbit} of $f$ if $d(f(x_n),x_{n+1}) \leq \delta$ for all $n \in \mathbb{N}$. The function $f$ has the \textbf{positive shadowing property} if for each $\epsilon > 0$ there is $\delta > 0$ such that every positive $\delta$-pseudo orbit is $\epsilon$-shadowed by some $x \in Y$, i.e., there is $x \in Y$ such that $$ d(x_n,f^{n}(x)) < \epsilon \, \, \text{ for all }n \in \mathbb{N}. $$
It is easy to see that the identity map does not have the shadowing property; on the other hand, it is also easy to see that proper contractions and proper dilations have the shadowing property. More generally, any hyperbolic operator on a Banach space has the shadowing property \cite{Messaoudi}. In \cite{bernardes2020shadowing} a necessary and sufficient condition for a weighted shift to have the shadowing property is obtained. To illustrate the positive shadowing property we present two examples below. The first one is an operator that has an eigenvalue equal to $1$ (and therefore is not hyperbolic) and has positive shadowing. The second example is an invertible contraction that does not have the positive shadowing property.
\begin{example} \label{eigen1andshadowing}
In the space $\ell_p(\mathbb{N})$, for $1 \leq p \leq \infty$, consider the operator $T: \ell_p(\mathbb{N}) \to \ell_p(\mathbb{N})$ given by $T(e_0) = e_0$, and $T(e_i) = 2 e_{i - 1}$ if $i \geq 1$, where $\{e_i\}_{i\in\mathbb{N}}$ is the canonical basis. Note that there is an eigenspace associated with the eigenvalue $1$. This operator has positive shadowing. To see this consider the operator $Se_{i} = e_{i+1}/2$ for every $i \in \mathbb{N}$. Note that $\|S\|=1/2$ and that $S$ is a right inverse for $T$. Given any $\delta$-pseudo orbit $\{x_n\}_{n \in \mathbb{N}}$ note that the point $$ x = x_0 + S(x_1 - Tx_0) + S^{2}(x_2 - Tx_1) + \cdots $$ is in $\ell_p(\mathbb{N})$ and $2 \delta$-shadows the pseudo-orbit. Therefore $T$ has positive shadowing.
It is not hard to see that $T$ is transitive, and therefore chain transitive, when $1 \leq p < \infty$. Indeed, let $U$ and $V$ be two non-empty open subsets of $\ell_{p}(\mathbb{N})$. Since sequences with finitely many nonzero entries are dense in $\ell_{p}(\mathbb{N})$, we may pick $x=(x_0,x_1,\ldots,x_n,0,0,\ldots) \in U$ and $y=(y_0,y_1,\ldots,y_m,0,0,\ldots) \in V$. Consider $$ r = \sum_{i=0}^{n} x_i 2^{i} $$ and choose $k,l \in \mathbb{N}$ such that
$$ z = \big( x_0,x_1,\ldots,x_n,0,\ldots, 0 , \underbrace{-\dfrac{r}{2^k}}_{\text{position } k}, 0, \ldots, 0, \underbrace{\dfrac{y_{0}}{2^{l}}}_{\text{position } l}, \dfrac{y_{1}}{2^{l+1}}, \ldots, \dfrac{y_{m}}{2^{l+m}},0, 0, \ldots \big) \in U. $$ Hence $T^lz = y$ and so $T^{l}(U) \cap V \neq \emptyset$; therefore $T$ is transitive. Since $T$ has positive shadowing and is transitive, Theorem A gives us that $T$ is topologically mixing and frequently hypercyclic. \end{example}
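The shadow point $x = x_0 + S(x_1 - Tx_0) + S^{2}(x_2 - Tx_1) + \cdots$ from the example above can be computed explicitly for a finite pseudo orbit. The sketch below is our own illustration (not from the paper): it truncates $\ell_1(\mathbb{N})$ to finitely many coordinates, builds a randomly perturbed $\delta$-pseudo orbit, and verifies that the resulting point $\delta$-shadows it.

```python
import random

# Illustration of the shadowing construction in the example above, with
# vectors of l^1(N) represented as finite lists.  T(e_0) = e_0 and
# T(e_i) = 2 e_{i-1} for i >= 1; S(e_i) = e_{i+1}/2 is a right inverse
# of T with ||S|| = 1/2.  D is an ad hoc truncation dimension.

D = 60

def T(v):
    w = [0.0] * D
    w[0] = v[0] + 2.0 * v[1]
    for j in range(1, D - 1):
        w[j] = 2.0 * v[j + 1]
    return w

def S(v):
    w = [0.0] * D
    for j in range(1, D):
        w[j] = v[j - 1] / 2.0
    return w

norm = lambda v: sum(abs(a) for a in v)            # l^1 norm
add = lambda u, v: [a + b for a, b in zip(u, v)]
sub = lambda u, v: [a - b for a, b in zip(u, v)]

# a delta-pseudo orbit: each step is T plus a small random offset
random.seed(0)
delta, m = 0.1, 20
orbit = [[0.0] * D]
orbit[0][0] = 1.0
for _ in range(m - 1):
    nxt = T(orbit[-1])
    nxt[random.randrange(5)] += random.uniform(-delta / 2, delta / 2)
    orbit.append(nxt)

# shadow point  x = x_0 + sum_k S^k (x_k - T x_{k-1})
x = orbit[0][:]
for k in range(1, m):
    d = sub(orbit[k], T(orbit[k - 1]))
    for _ in range(k):
        d = S(d)
    x = add(x, d)

# T^n x stays within delta of x_n along the whole pseudo orbit
y = x
for n in range(m):
    assert norm(sub(y, orbit[n])) <= delta
    y = T(y)
```

Since $TS$ is the identity, $T^n x = x_n + \sum_{k>n} S^{k-n}(x_k - Tx_{k-1})$, and the geometric factor $\|S\|=1/2$ gives the error bound used in the example.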
\begin{example}
Consider the space of real Lebesgue integrable functions on $[1/2,1]$, $L_1([1/2,1])$, and the operator $T : L_1([1/2,1]) \to L_1([1/2,1])$ given by $T(f)(x)=xf(x)$ for every $x \in [1/2,1]$. Notice that $T$ is invertible. We will prove that this operator does not have the positive shadowing property; in fact, for each $\delta>0$ there is a $\delta$-pseudo orbit which cannot be $\eps$-shadowed for any $\eps>0$. Given $\delta>0$ consider the sequence $\{f_n^\delta\}_{n\in\mathbb{N}}$ where \[f_n^\delta(x)=\delta(x^{n-1}+x^{n-2}+\cdots+x+1).\] This sequence is a $\delta$-pseudo orbit, since
\[\begin{array}{rcl}\|T(f_n^\delta)-f_{n+1}^\delta\|_1 & = & \|\delta(x^{n-1}+x^{n-2}+\cdots+x+1)x - \delta(x^n+x^{n-1}+\cdots+x+1)\|_1\\ \\
&=&\|\delta(x^{n}+x^{n-1}+\cdots+x^2+x) - \delta(x^n+x^{n-1}+\cdots+x+1)\|_1\\ \\
&=& \|\delta\|_1 \;=\;\displaystyle\int_{1/2}^1\delta dx\;=\;\dfrac{\delta}{2}\;<\;\delta. \end{array}\] Let us show that $\{f_n^\delta\}$ previously defined cannot be $\eps$-shadowed for any $\eps>0$. Indeed, for any $g\in L^1([1/2,1])$ we have that $$\begin{array}{rcl}
\|T^{n}g - f_n^{\delta}\|_1& = &
\|x^n g(x) - \delta (x^{n-1} + \cdots + 1)\|_1\\ \\
&\geq&| \, \|x^ng(x) \|_1 - \|\delta (x^{n-1} + \cdots + 1)\|_1 \, |. \end{array} $$
Notice that since $x<1$ almost everywhere, $x^n g(x) \to 0$ almost everywhere as $n \to \infty$. Therefore, by the Dominated Convergence Theorem (with dominating function $|g|$), we have that $\|x^ng(x) \|_1 \to 0$ as $n \to \infty$, and so this term is bounded. The second term in the above difference is not bounded. Indeed, $$\begin{array}{rcl}
\displaystyle\|\delta (x^{n-1} + \cdots + 1)\|_1 &=& \displaystyle\delta \int_{1/2}^{1} x^{n-1} + \cdots + 1dx\\ \\ &= &\displaystyle \delta \left[ \dfrac{x^n}{n} + \dfrac{x^{n-1}}{n-1} + \cdots + x \right]_{1/2}^1\\ \\ &= &\displaystyle\delta \left( \dfrac{1}{n} + \dfrac{1}{n-1} + \cdots + 1 - \dfrac{1}{2^n} \dfrac{1}{n} - \dfrac{1}{2^{n-1}} \dfrac{1}{n-1} - \cdots - \dfrac{1}{2} \right)\\ \\ &=& \displaystyle \delta \sum_{k=1}^{n} \dfrac{1}{k}\left( 1 - \dfrac{1}{2^k} \right), \end{array} $$
using a comparison with the harmonic series, for instance the limit comparison test \cite{limit}, one can readily see that the right-hand side goes to $+\infty$ as $n \to + \infty$. This ensures that $\|T^ng-f_n^\delta\|_1\rightarrow\infty$; therefore the $\delta$-pseudo orbit $\{f_n^\delta\}_{n\in\mathbb{N}}$ cannot be $\eps$-shadowed for any $\eps>0$.
\end{example}
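The divergence at the end of the example above is easy to check numerically: the computed norms equal $\delta\sum_{k=1}^{n}\frac{1}{k}\left(1-\frac{1}{2^k}\right)$, which grows like the harmonic series. A small sketch (our own illustration; `norm_fn` is our name for these norms):

```python
import math

# Illustration (not from the paper): the norms ||delta(x^{n-1}+...+1)||_1
# computed in the example equal delta * sum_{k=1}^{n} (1/k)(1 - 2^{-k}),
# which diverges like the harmonic series (roughly log(n) - log(2)).

def norm_fn(n, delta=1.0):
    return delta * sum((1.0 / k) * (1.0 - 0.5 ** k) for k in range(1, n + 1))

# the sequence of norms is increasing and unbounded
assert norm_fn(10) < norm_fn(100) < norm_fn(10_000)
assert norm_fn(10_000) > math.log(10_000) - 1.0
```

Consequently no single orbit $\{T^n g\}$ can stay within a fixed distance of the pseudo orbit, which is exactly the failure of positive shadowing.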
It turns out that chain transitivity and positive shadowing have an interesting synergy that allows us to obtain the most important results of this paper. Chain transitivity allows us to connect different points of the space with $\epsilon$-chains, and the shadowing property tells us that there will be a point that shadows such a chain.
\begin{theorem} \label{positiveshadowingmixing} Let $X$ be a normed vector space, and $T : X \to X$ a bounded linear operator which is chain transitive and has the positive shadowing property. Then $T$ is topologically mixing. \end{theorem}
\noindent \textbf{Proof:} Let $U$, $V$ be non-empty open subsets of $X$. Let $x$ be a vector in $U$, and $y$ a vector in $V$ and let $\lambda > 0$ be such that $B(x,\lambda) \subset U $ and $B(y,\lambda) \subset V$. Let $\epsilon = \lambda/2$ and let $\delta>0$ be associated with this $\epsilon$ from the positive shadowing property. Since $T$ is chain transitive, there is a $\delta$-chain that goes from $x$ to the origin, $\{x_0,x_1,\ldots,x_n\}$, and a $\delta$-chain that goes from the origin to $y$, $\{y_0,y_1,\ldots,y_m\}$. Then we have that $$ \{x, x_1, x_2, \ldots, x_n, \underbrace{0, 0, \ldots ,0}_{\text{total of } k \text{ zeroes}} ,y_1, \ldots, y_{m-1}, y, Ty, T^2y, \ldots\} $$
is a positive $\delta$-pseudo orbit for every $k \in \mathbb{N}$, and therefore there is a $z_k \in X$ that $\epsilon$-shadows such pseudo orbit. Notice that $z_k \in U$ and $T^{n+m+k}z_k \in V$ for every $k \in \mathbb{N}$, hence $T^{n+m+k}U \cap V \neq \emptyset$ for every $k \in \mathbb{N}$.
\qed
The theorem above, together with Proposition \ref{multiplechainrecurrence}, provides the following corollary.
\begin{corollary} Let $T_i : X_i \to X_i$, $1\leq i \leq n$, be a finite family of linear dynamical systems such that, for each $i$, $(T_i,X_i)$ satisfies the hypotheses of Theorem \ref{positiveshadowingmixing}. Then $T_1 \times \ldots \times T_n : X_1 \times \ldots \times X_n \to X_1 \times \ldots \times X_n$ is topologically mixing. \end{corollary}
The next lemma allows us to obtain subsets of $\mathbb{N}$ that are big enough to have positive lower density and at the same time are spread sufficiently far apart. This lemma is crucial in the proof of our main result, Theorem \ref{gymstheorem}.
\begin{lemma}\cite[Lemma 6.19]{dynamicsoflinearoperators}\label{freqhypercycliclemma} Let $\{N_p\}_{p \geq 1}$ be any sequence of positive real numbers. Then one can find a sequence $\{\Delta_p\}$ of pairwise disjoint subsets of $\mathbb{N}$ such that \begin{enumerate}
\item Each set $\Delta_p$ has positive lower density;
\item $\min(\Delta_p) \geq N_p$ and $|n-m| \geq N_p + N_q$ whenever $n \neq m$ and $(n,m) \in \Delta_p \times \Delta_q$. \end{enumerate} \end{lemma}
\noindent \textbf{Proof of Theorem \ref{gymstheorem}:}
The conclusion of being topologically mixing is a direct consequence of Theorem \ref{positiveshadowingmixing}; we only need to prove that $T$ is frequently hypercyclic. Let $\{x_p\}_{p \in\mathbb{N}}$ be a countable dense set of vectors in $X$. For every $p \in \mathbb N$, let $\epsilon_p = 1/2^p$. By the positive shadowing property, for each $p\in\mathbb{N}$ there is a $\delta_p>0$ such that every $\delta_p$-pseudo orbit is $\epsilon_p$-shadowed by some point of $X$.
For each $x_p$ in the dense countable set, let $N_p$ be the size of a $\delta_p/2$-chain that connects $0$ to $x_p$ and then to $0$ again (by the size of a chain we mean its cardinality). By completing with zeroes, we may suppose $\{N_p\}_{p \in \mathbb{N}}$ is a strictly increasing sequence of even numbers of the form $N_p=2R_p$, with $R_p\in\mathbb{N}\setminus\{0\}$ and ${1}/{R_p}<{\delta_p}/{4}$ for every $p\in\mathbb{N}$. We may also suppose that it takes half of the chain to reach $x_p$ from $0$ and the other half to go from $x_p$ to $0$. Formally, our chain has the general form \begin{equation} \label{z0pseudoorbit} \{x_0^p=0,x_1^p, \ldots, x_{R_p}^p= x_p,\ldots, x_{N_p-1}^p = 0\}. \end{equation} To the sequence $\{N_p=2R_p\}_{p\in\mathbb{N}}$ we associate a sequence of subsets $\{\Delta_p\}_{p\in\mathbb{N}}$ of $\mathbb{N}$ given by Lemma \ref{freqhypercycliclemma}.
Below we describe an induction procedure to define a sequence of vectors $\{z_p\}_{p\in\mathbb{N}}$ in $X$ satisfying the following properties:
\begin{itemize}
\item[(a)] $\|z_p\|<\dfrac{1}{2^p}$ for every $p \in \mathbb{N}$;
\item[(b)] if $m \in \Delta_p$ then $\displaystyle \left\|\sum_{0\leq q\leq p}T^{m+R_p}(z_q)-x_p\right\|<\frac{1}{2^p}$ for every $p \in \mathbb{N}$;
\item[(c)] given $n,p \in \mathbb{N}$ with $n \notin \{ m, m+1, \ldots, m+ N_p-1: m \in \Delta_p\}$, then $\| T^{n} (z_p) \| < \dfrac{1}{2^p}$.
\end{itemize}
\noindent Since we assume $T^0 z_p=z_p$, the reader will notice that Property (a) is a consequence of Property (c), but we decided to make Property (a) explicit for the sake of clarity. Assume for now that such a sequence $\{z_p\}_{p \in \mathbb{N}}$ exists. In this case the vector \[z=\sum_{p\in\mathbb{N}}z_p\] is well defined, since Property (a) implies $\|z\|=\|\sum z_p\|\leq\sum1/2^p=2$. We claim that $z$ is a frequently hypercyclic vector for $T$. Indeed, let $V$ be a non-empty open subset of $X$. Consider $w\in V$ and $\lambda>0$ such that $B(w,\lambda)\subset V$, and let $q_0\in\mathbb{N}$ be such that $1/2^{q_0}<\lambda/2.$ By the density of $\{x_p\}_{p\in\mathbb{N}}$, we can choose $x_p\in B(w,\lambda/2)$ with $p> q_0$. For each $m\in \Delta_p$, we have by Properties (b), (c) and item 2 of Lemma \ref{freqhypercycliclemma} that $$
\|T^{m+R_p}(z)-x_p\| \leq \left\|\sum_{0\leq q\leq p}T^{m+R_p}(z_q)-x_p\right\|+\left\|\sum_{q>p}T^{m+R_p}(z_q)\right\| \leq $$ $$
\frac{1}{2^p} + \sum_{q>p} \left\|T^{m+R_p}(z_q)\right\| \leq \frac{1}{2^p}+\frac{1}{2^{p+1}}+\frac{1}{2^{p+2}}+\cdots = \frac{1}{2^{p-1}} < \dfrac{\lambda}{2}. $$
Thus $T^{m+R_p}(z)\in B(x_p,\lambda/2)\subset V$ for all $m\in\Delta_p$, that is,
\[\underline{\emph{dens}}(\{n\in\mathbb{N}\;:\;T^n(z)\in V\})\geq \underline{\emph{dens}}(\Delta_p+R_p){=}\underline{dens}(\Delta_p)>0,\] where $\Delta_p+R_p=\{m+R_p\;:\;m\in\Delta_p\}$ (see \cite{lowerdensity} for the equality above). This proves that $z$ is a frequently hypercyclic vector for $T$, and consequently that $T$ is frequently hypercyclic.
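The equality $\underline{dens}(\Delta_p+R_p)=\underline{dens}(\Delta_p)$ reflects the fact that lower density is invariant under shifts. A small numerical illustration of this invariance (our own sketch, with ad hoc names, for a set of density $1/2$):

```python
# Density of A within {0,...,n-1}; the lower density is the liminf of these
# ratios, and shifting A by a constant does not change it.
def density_up_to(A, n):
    return sum(1 for k in range(n) if k in A) / n

evens = set(range(0, 10**5, 2))
shifted = {k + 7 for k in evens}  # the shift A + 7
for n in (10**3, 10**4):
    assert abs(density_up_to(evens, n) - 0.5) < 0.01
    assert abs(density_up_to(shifted, n) - 0.5) < 0.01
```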
We now obtain the vectors $z_p$ with Properties (a), (b) and (c). We use an induction procedure; the first step is to obtain $z_0$. To this end, consider the sequence $\{\gamma_n^0\}_{n\in\mathbb{N}}$ given by \[\gamma_n^0=\left\{\begin{array}{ll} x_k^{0} & \mbox{if } n=m+k \mbox{ with } m\in\Delta_0 \mbox{ and } k\in\{0,1,\ldots, N_0-1\}\\ 0 &\mbox{if } n\in\mathbb{N}\setminus\{m,m+1,\ldots,m+N_0-1\;:\;m\in\Delta_0\}. \end{array}\right.\] By the definition in \eqref{z0pseudoorbit}, we have that $\{\gamma_n^0\}_{n\in\mathbb{N}}$ is a $\delta_0$-pseudo orbit. Thus the positive shadowing property guarantees the existence of $z_0\in X$ such that
\[\|T^n(z_0)-\gamma_n^0\|<\eps_0, \mbox{ for every }n\in\mathbb{N}.\] This immediately implies that $\|z_0\|=\|T^0(z_0)-0\|=\|T^0(z_0)-\gamma_0^0\|<\eps_0$, and therefore $z_0$ satisfies Property (a). Notice that for each $m\in\Delta_0$, \[\|T^{m+R_0}(z_0)-\gamma_{m+R_0}^0\|=\|T^{m+R_0}(z_0)-x_{R_0}^0\|=\|T^{m+R_0}(z_0)-x_0\|<\eps_0\] which implies that $z_0$ satisfies Property (b). For every $n\in\mathbb{N}\setminus\{m,\ldots, m+N_0-1\;:\;m\in\Delta_0\}$, $\|T^n(z_0)-0\|=\|T^n(z_0)\|<\eps_0\leq 1$ and therefore $z_0$ satisfies Property (c).
As part of the induction procedure, we now assume that we have $z_0,z_1,\ldots,z_{p-1}$ and obtain $z_p$, $p\geq1$. To this end, define the sequence $\{\gamma_n^p\}_{n\in\mathbb{N}}$ as follows: \[\gamma_n^p=\left\{\begin{array}{ll} x_k^{p}-\dfrac{k}{R_p}T^{n}(z_0+z_1+\ldots+z_{p-1})& \mbox{if } n=m+k \mbox{ with } m\in\Delta_p\\ & \mbox{and } k\in\{0,1,\ldots,R_p\}\\ x_k^{p}- \dfrac{N_p-k}{R_p}T^n(z_0+z_1+\ldots+z_{p-1})& \mbox{if } n=m+k \mbox{ with } m\in\Delta_p\\ &\mbox{and } k\in\{R_p+1,\ldots, N_p-1\}\\ \\ 0& \mbox{if }n\in\mathbb{N}\setminus\{m,\ldots, m+N_p-1\;:\;m\in\Delta_p\}.\\ \end{array}\right.\] We shall see below that $\{\gamma_n^p\}_{n\in\mathbb{N}}$ is a $\delta_p$-pseudo orbit. This will guarantee the existence of a $z_p$ which $\eps_p$-shadows $\{\gamma_n^p\}$, that is,
\[\|T^n(z_p)-\gamma_n^p\|<\eps_p=\frac{1}{2^p}.\]
\noindent Since $\gamma_0^p=0$, it follows that $\|z_p\|=\|T^0(z_p)-\gamma_0^p\|<\eps_p=1/2^p$. Thus, $z_p$ satisfies Property (a).
\noindent Notice that with $z_p$ obtained this way we have, for each $m\in\Delta_p$, that \[\|T^{m+R_p}(z_0+z_1+\ldots+z_p)-x_p\|<\eps_p=\frac{1}{2^p}.\] Indeed, since $z_p$ shadows $\{\gamma_{n}^p\}$ \[\begin{array}{rcl}
\epsilon_p>\|T^{m+R_p}(z_p)-\gamma_{m+R_p}^p\|&=&\|T^{m+R_p}(z_p)-x_p+T^{m+R_p}(z_0+\ldots+z_{p-1})\|\\
&=&\|T^{m+R_p}(z_0+\ldots+z_p)-x_p\|.\\ \end{array}\] Therefore, Property (b) is assured. Since $\gamma_{n}^p=0$ if $n \notin \{ m, m+1, \ldots, m+ N_p-1: m \in \Delta_p\}$ we have Property (c).
The last remaining step is to show that $\{\gamma_n^p\}_{n\in\mathbb{N}}$ is a $\delta_p$-pseudo orbit. First notice that from the definitions of $\Delta_p$ and $\Delta_q$ and item (2) of Lemma \ref{freqhypercycliclemma}, it follows that $\{k,\ldots, k+N_p\;:\;k\in\Delta_p\} \cap \{m,\ldots, m+N_q\;:\;m\in\Delta_q\} = \emptyset$ whenever $p\neq q$. This fact and Property (c) imply that if $n \in \{m,\ldots, m+N_p-1\;:\;m\in\Delta_p\}$, then \begin{equation}\label{soma}
\|T^{n}(z_0+z_1+\ldots+z_{p-1})\|=\left\|\sum_{0\leq q<p}T^n(z_q)\right\|\leq \sum_{0\leq q<p}\|T^n(z_q)\|\leq\sum_{0\leq q<p}\frac{1}{2^q}<2.
\end{equation} It is obvious that if $n,n+1\in\mathbb{N}\setminus\{m,\ldots,m+N_p-1\;:\;m\in\Delta_p\}$, then \[\|T(\gamma_n^p)-\gamma_{n+1}^p\|=0<\delta_p.\] If $n$ or $n+1$ belongs to $\{m,\ldots, m+N_p-1\;:\;m\in\Delta_p\}$, we have $5$ possibilities:
\noindent\emph{Case 1:} $n\in\mathbb{N}\setminus\{m,\ldots, m+N_p-1\;:\;m\in\Delta_p\}$ and
$n+1\in\Delta_p$.
In this case, we have
\[\gamma_n^p=0 \mbox{ and }\gamma_{n+1}^p=x_0^p-\frac{0}{R_p}T^{n+1}(z_0+\cdots+z_{p-1})=0.\]
Hence,
\[\|T(\gamma_n^p)-\gamma_{n+1}^p\|=0<\delta_p.\]
\noindent\emph{Case 2:} $n=m+k$ with $m\in\Delta_p$ and $k\in\{0,\ldots, R_p-1\}.$
In this case, we have \[\gamma_n^p=x_k^{p}-\dfrac{k}{R_p}T^n(z_0+\cdots+z_{p-1}) \mbox{ and } \gamma_{n+1}^p=x_{k+1}^p-\frac{k+1}{R_p}T^{n+1}(z_0+\cdots+z_{p-1}).\] Then,
\[\begin{array}{rcl}
\|T(\gamma_n^p)-\gamma_{n+1}^p\| &=&\displaystyle \left\|T(x_k^{p}) - \dfrac{k}{R_p}T(T^n(z_0+\cdots+z_{p-1}))\right. -\\
\\
&&\displaystyle \;\;\;- \left.\left(x_{k+1}^p-\dfrac{k+1}{R_p}T^{n+1}(z_0+\cdots+z_{p-1})\right)\right\|\\
\\
&\leq &\displaystyle \|T(x_k^{p})-x_{k+1}^p\|+\frac{1}{R_p}\|T^{n+1}(z_0+\cdots+z_{p-1})\|\\
\\
&<&\dfrac{\delta_p}{2}+\dfrac{\delta_p}{4}\cdot 2\\
\\
& = &\delta_p.
\end{array}\] The bound $\frac{1}{R_p}\|T^{n+1}(z_0+\cdots+z_{p-1})\|<\frac{\delta_p}{4}\cdot 2$ is assured by Equation (\ref{soma}) and the choice $1/R_p<\delta_p/4$.
\noindent\emph{Case 3:} $n=m+R_p$ with $m\in\Delta_p$.
In this case, \[\gamma_n^p=x_{R_p}^p-\dfrac{R_p}{R_p}T^n(z_0+\cdots+z_{p-1})=x_p-T^n(z_0+\cdots+z_{p-1}) \mbox{ and }\]\[\gamma_{n+1}^p = x_{R_p+1}^p-\dfrac{N_p-(R_p+1)}{R_p}T^{n+1}(z_0+\cdots+z_{p-1}).\] Thus,
\[\begin{array}{rcl}
\|T(\gamma_n^p)-\gamma_{n+1}^p\| &=& \displaystyle\left\|T(x_p)-T^{n+1}(z_0+\cdots+z_{p-1})-x_{R_p+1}^p+\frac{R_p-1}{R_p}T^{n+1}(z_0+\cdots+z_{p-1})\right\|\\
\\
&\leq&\|T(x_p)-x_{R_p+1}^p\|+\dfrac{1}{R_p}\|T^{n+1}(z_0+\cdots+z_{p-1})\|\\
\\
&<&\dfrac{\delta_p}2+\dfrac{\delta_p}{4}\cdot 2\;\;\;\;\;\;\;\;\mbox{( by Eq. (\ref{soma}))}\\
\\
&=&\delta_p.\\
\end{array}\]
\noindent\emph{Case 4:} $n=m+k$ with $m\in\Delta_p$ and $k\in\{R_p+1, R_p+2,\ldots, N_p-2\}$. This case is analogous to Case 2.
\noindent\emph{Case 5:} $n=m+N_p-1$ with $m\in\Delta_p$.
In this case, we have
\[\gamma_n^p=x_{N_p-1}^p-\frac{N_p-(N_p-1)}{R_p}T^n(z_0+\cdots+z_{p-1})=0-\dfrac{1}{R_p}T^n(z_0+\cdots+z_{p-1}) \mbox{ and }\gamma_{n+1}^p=0.\] By Equation (\ref{soma})
\[\|T(\gamma_n^p)-\gamma_{n+1}^p\|=\dfrac{1}{R_p}\|T^{n+1}(z_0+\cdots+z_{p-1})\|<\frac{\delta_p}{4}\cdot 2<\delta_p.\]
\noindent This concludes the proof that $\{\gamma_n^p\}_{n\in\mathbb{N}}$ is a $\delta_p$-pseudo orbit.
\qed
\begin{corollary} \label{hyperimpliesfreq.hyper} Let $X$ be a separable Banach space. If $T : X \to X$ is hypercyclic (or recurrent) and has the positive shadowing property then $T$ is frequently hypercyclic and topologically mixing. \end{corollary}
\begin{corollary} \label{prodct of shadowing and CR} If $T_i : X_i \to X_i$, for $1\leq i \leq n$, is a finite family of linear dynamical systems such that, for each $i$, $(T_i,X_i)$ satisfies the hypotheses of Theorem \ref{gymstheorem}, then $T_1 \times \ldots \times T_n : X_1 \times \ldots \times X_n \to X_1 \times \ldots \times X_n$ is frequently hypercyclic and topologically mixing. \end{corollary}
\noindent \textbf{Proof:} It follows from Theorem \ref{gymstheorem}, Proposition \ref{multiplechainrecurrence}, Theorem \ref{positiveshadowingmixing} and the fact that the shadowing property is preserved under Cartesian products.
\qed
Consider a topological dynamical system $(X,T)$. By an \textbf{invariant measure with full support} for $(X,T)$ we mean a measure $\mu$ defined on the Borel $\sigma$-algebra of $X$ such that $\mu(X)=1$, $\mu(U)>0$ for any non-empty open set $U \subset X$, and $\mu(T^{-1}A) = \mu(A)$ for any measurable set $A$.
\begin{corollary} Let $X$ be a separable and reflexive Banach space and $T : X \to X$ a chain transitive operator with the positive shadowing property. Then there is an invariant measure with full support for $T$. \end{corollary}
\noindent \textbf{Proof:} It follows from Theorem \ref{gymstheorem} and the fact that frequently hypercyclic operators on reflexive Banach spaces admit an invariant measure with full support \cite{invariant}.
\qed
\textbf{Devaney chaotic} systems are those that are transitive and have a dense set of periodic points \cite{devaney2}. The next corollary gives a sufficient condition for a system to have this property.
\begin{corollary} Let $X$ be a separable Banach space. If $T : X \to X$ has a dense set of periodic points and has the positive shadowing property then $T$ is Devaney chaotic. \end{corollary}
\noindent \textbf{Proof:} Since $CR(T)$ is closed and the set of periodic points is dense, we have $X=CR(T)$. Therefore, Theorem A implies that $T$ is frequently hypercyclic and topologically mixing.
\qed
\begin{corollary} If $X$ is a separable Hilbert space and $T$ is a unitary operator, then $T$ does not have the positive shadowing property. \end{corollary}
\noindent \textbf{Proof:} By Proposition \ref{unitaryimplieschaintransitivity}, unitary operators are chain transitive, but since $\|T\| = 1$ they can never be hypercyclic. Therefore, by Theorem A these operators cannot have positive shadowing.
\qed
Given a topological dynamical system $(Y,f)$, a point $x \in Y$ is said to be a \textbf{non-wandering point} if, for every open set $B$ that contains $x$ and for every $N \in \mathbb{N}$, there is $n>N$ such that $f^{n}(B) \cap B \neq \emptyset$. If $(X,T)$ is a linear dynamical system, linearity implies that the origin is always a non-wandering point of $X$. We call the set of non-wandering points the \textbf{non-wandering set}. The complement of the non-wandering set is the \textbf{wandering set}.
It is not difficult to see that if $T$ is recurrent, then every $x \in X$ is non-wandering, and that if $T$ has a dense set of non-wandering points, then $T$ is recurrent.
The next result can be easily proven (it is similar to Proposition 6 of \cite{Messaoudi}), but we state it since it is needed in the proof of Theorem B.
\begin{proposition} \label{shadowingforsubspaces}
Let $T$ be an operator on a Banach space $X$. Suppose that $X = M \oplus N$, where $M$ and $N$ are closed $T$-invariant subspaces of $X$. Then $T$ has positive shadowing property if, and only if, $T|_M$ and $T|_N$ both have positive shadowing property. \end{proposition}
\noindent \textbf{Proof of Theorem \ref{theo:nonwandering}:} It is easy to see that the non-wandering set is contained in the set of chain recurrent elements. For the reverse inclusion, if $x$ is chain recurrent and $U$ is an open set containing $x$, then concatenating $\delta$-chains from $x$ to itself yields a pseudo orbit that returns to $x$ infinitely often, and the shadowing property tells us that this pseudo orbit is shadowed by a real orbit. This proves that $x$ is a non-wandering point. Since $\Omega$ is equal to the chain recurrent set, Corollary \ref{chainspace} tells us that $\Omega$ is a closed and invariant subspace of $X$. This gives the first two conclusions.
By item 1, we have $\Omega=CR(T)$. By Corollary \ref{chainspace}, $\Omega$ is $T$-invariant. This implies that $\Omega^\perp$ is $T^*$-invariant, consequently, it is $T$-invariant, because $T=T^*$. Since $X$ is a Hilbert space and $\Omega$ is a closed subspace of $X$ then $X=\Omega\oplus\Omega^{\perp}$. Hence, by Proposition \ref{shadowingforsubspaces}, $T|_{\Omega}$ has positive shadowing property. Corollary \ref{chainrecurrentselfadjoint} guarantees that $T|_{\Omega}$ is chain transitive. Since $T|_{\Omega} : \Omega \to \Omega$ has positive shadowing property and is chain transitive, Theorem \ref{gymstheorem} gives that $T|_{\Omega}$ is topologically mixing and frequently hypercyclic.
\qed
\section{Open Questions}
In this section we leave some open questions for the reader. Question \ref{refereequestion} was kindly offered to us by the anonymous referee. Originally, this question only addressed the shadowing property, but we decided to expand it for chain recurrence as well.
\begin{question} Is there any simple criterion to decide if a system is chain recurrent? \end{question}
Since every hypercyclic operator is chain recurrent, Kitai's Hypercyclicity Criterion \cite{dynamicsoflinearoperators} gives a simple sufficient condition for chain recurrence. But since there are operators that are chain recurrent without being hypercyclic, e.g.\ the identity operator, it would be nice to find a tighter criterion.
\begin{question} Is there any simple criterion to decide if a system has positive shadowing? \end{question}
This question is not new and was addressed by other authors for the shadowing property. It seems that the notion of generalized hyperbolicity \cite{generalized} captures much of the essence of the shadowing property and could be equivalent to it. In view of the results presented in this text, positive shadowing might be a more relevant property than shadowing itself, and therefore worthy of a similar search for an equivalent notion. Example \ref{eigen1andshadowing} shows that if $T$ has a right inverse that is a proper contraction, then $T$ has positive shadowing.
\begin{question} \label{refereequestion}
Let $X$ be a normed vector space and $T:X \to X$ a linear operator with the positive shadowing property (and/or chain recurrence), and let $Y \subset X$ be a closed invariant subspace. Under which hypotheses can one guarantee that the operator $T|_Y$ has the shadowing property (and/or chain recurrence)? \end{question}
Proposition \ref{shadowingforsubspaces} and Corollary \ref{CRforsubspaces} provide partial answers to this question. New results in this direction may provide a proof of the last item of Theorem B under more general assumptions than $X$ being Hilbert and $T$ being self-adjoint.
\begin{question} Are the conclusions of Theorem A strong enough to imply the hypothesis? \end{question}
More precisely: does every frequently hypercyclic and topologically mixing linear dynamical system have positive shadowing? At the moment, the authors have no plausible argument in favor of such a conclusion, but we are also unable to provide a counter-example. Based on Corollary \ref{prodct of shadowing and CR}, one may also ask a weaker version of this question: if $(X,T)$ is a linear dynamical system such that every finite product $T \times \ldots \times T$ is topologically mixing and frequently hypercyclic, does this imply that $T$ has positive shadowing?
\noindent \textit{Acknowledgements: We would like to thank Prof. Udayan B. Darji for helpful comments on an earlier version of this text. We would also like to show our deep appreciation for the careful review and interesting suggestions given by the anonymous referee, which include, among others, Lemma \ref{refereelemma} and its proof, Corollary \ref{properdilation}, Example \ref{refereeexample} and Question \ref{refereequestion}. M.B.A. was supported by CAPES. R.V. was partially supported by CNPq.}
\end{document}
\begin{document}
\newcommand*\discR{
\psframe*[linecolor=red!80](0.28,0.28)
} \newcommand*\discG{
\psframe*[linecolor=green!80](0.28,0.28)
} \newcommand*\discB{
\psframe*[linecolor=blue!80](0.28,0.28)
} \newcommand*\discY{
\psframe*[linecolor=yellow!90](0.28,0.28)
} \newcommand*\Grid{
\psgrid[gridlabels=0,subgriddiv=3](0,0)(3,3)
} \newcommand*\Gridtwo{
\psgrid[gridlabels=0,subgriddiv=3](0,0)(9,9)
}
\title{A Note on Computable Proximity of $\mathcal{L}_1$-Discs\\ on the Digital Plane}
\author[J.F. Peters]{J.F. Peters$^{\alpha}$} \email{James.Peters3@umanitoba.ca} \address{\llap{$^{\alpha}$\,}Computational Intelligence Laboratory, University of Manitoba, WPG, MB, R3T 5V6, Canada and Department of Mathematics, Faculty of Arts and Sciences, Ad\.{i}yaman University, 02040 Ad\.{i}yaman, Turkey}
\author[K. Kordzaya]{K. Kordzaya$^{\beta}$} \email{korka@ciu.edu.ge}
\author[I. Dochviri]{I. Dochviri$^{\beta}$} \email{iraklidoch@yahoo.com}
\address{\llap{$^{\beta}$\,}Department of Mathematics, Caucasus International University, 73, Chargali str., 0192 Tbilisi, Georgia}
\subjclass[2010]{Primary 54E05 (Proximity); Secondary 68U05 (Computational Geometry)}
\date{}
\begin{abstract} {This paper investigates problems in the characterization of the proximity of digital discs. Based on the $\mathcal{L}_1$-metric structure for the 2D digital plane and using a Jaccard-like metric, we determine numerical characters for intersecting digital discs.} \end{abstract}
\keywords{Digital discs, $\mathscr{L}_1$-metric, Jaccard like metric, Proximity}
\subjclass[2010]{Primary 65D18, 68U05; Secondary 54E05}
\maketitle
\section{Introduction} This paper introduces a form of digital geometry in proximity spaces. The study of digital discs is connected to the discovery of proximal objects~\cite{Peters2016ISRLcomputationalProximity,DBLP:series/isrl/2014-63,Irakli2016MCStopologicalSorting}. These objects can often be represented as sets of points, which makes set-theoretic and topological methods very useful tools in the study of proximity relations. Digital geometry deals with geometric properties of objects on computer screens~\cite{Klette2004,Kopperman1991AMMonthyDigitalTopology,Kronheimer1992,Rosenfeld1979}.
\setlength{\intextsep}{0pt} \begin{wrapfigure}[8]{R}{0.25\textwidth}
\begin{minipage}{3.2 cm}
\centering \includegraphics[width=25mm]{digitalDiscs} \caption[]{\footnotesize Structures}
\label{fig:Digital discs}
\end{minipage}
\end{wrapfigure}
Many different computer screen images can be obtained via pixel lighting. A \emph{pixel} is the smallest element of a digital image and is usually identified with a point. In other words, we can describe images on the computer screen by their pixels, which have integer-valued coordinates, {\em i.e.}, a mathematical model of the computer screen is the digital plane $\mathbb{Z}^2$.
The importance of the notions of the circle and disc in Euclidean geometry is well known. In digital geometry, digital circles and digital discs have various important properties that are different from the Euclidean ones (see, {\em e.g.},~\cite{Nakamura1984CVGIPdigitalCircles,McIlroy1983ACMTGintegerGrids,Kim1984PAMIdigitalDiscs,Toutant2013DAMdigitalCircles,Andres2011LNCSdigitalCircles}). One of the reasonable realizations of a metric structure on the digital plane $\mathbb{Z}^2$ is the so-called $\mathcal{L}_1$ metric, defined by \[ d\left(p_1,p_2\right) = \abs{a_1-a_2} + \abs{b_1-b_2}, \quad\mbox{where } p_1 = (a_1,b_1) \mbox{ and } p_2 = (a_2,b_2), \] {\em i.e.}, $p_1$ and $p_2$ are pixels for our future considerations. Since pixel coordinates are pairs of integers, it is obvious that $d\left(p_1,p_2\right)\in \mathbb{Z}$ (the integers).
Based on the $\mathcal{L}_1$ metric, we define a digital circle with radius $r$ and center $x$ (denoted by $C_d(x,r)$) as follows: \[ C_d(x,r) = \left\{z\in \mathbb{Z}^2: d(x,z) = r\right\}. \] Moreover, we denote by $c\left(C_d(x,r)\right)$ the circumference of the circle $C_d(x,r)$ where $r\in \mathbb{N}\cup\left\{0\right\}$.
By a result of R. Klette and A. Rosenfeld~\cite{Klette2004}, it is known that $\pi_{\mathcal{L}_1} = \frac{c\left(C_d(x,r)\right)}{\mbox{diam}\left(C_d(x,r)\right)} = \frac{8r}{2r} = 4$, where $\mbox{diam}\left(C_d(x,r)\right)$ is the diameter of the circle $C_d(x,r)$. Using this fact, we easily obtain the following result.
\begin{lemma}\label{lemma:digitalCircle} Let $C_d(x,r)$ be a digital circle with center at point $x$ and radius $r\geq 1$ relative to the $\mathcal{L}_1$ metric. Then, for the number of pixels of $C_d(x,r)$, we have the formula \[ \mbox{card}\left(C_d(x,r)\right) = \frac{2c\left(C_d(x,r)\right)}{\pi_{\mathcal{L}_1}} = 4r. \] \end{lemma}
\noindent Fig.~\ref{fig:Digital discs} demonstrates the structural property of the digital disc, namely, \begin{align*} D_d\left(x,R\right) &= \left\{z\in\mathbb{Z}^2\mid d(x,z)\leq R\right\},\ \mbox{particularly:}\\ D_d\left(x,R\right) &= \left\{x\right\}\cup\left(\mathop{\bigcup}\limits_{r=1}^R C_d(x,r)\right), \ \mbox{where}\ R\in\mathbb{Z}. \end{align*}
\begin{lemma}\label{lemma:discPixels} If $D_d\left(x,R\right)$ is a digital disc relative to the $\mathcal{L}_1$ metric $d$, then the number of pixels forming the disc $D_d\left(x,R\right)$ can be computed by the formula $\mbox{card}\left(D_d\left(x,R\right)\right) = 2R^2 + 2R + 1$. \end{lemma} \begin{proof} Since $D_d\left(x,R\right) = \left\{x\right\}\cup\left(\mathop{\bigcup}\limits_{r=1}^R C_d(x,r)\right)$, we can write \[ \mbox{card}\left(D_d\left(x,R\right)\right) = 1 + \mbox{card}\left(C_d(x,1)\right) + \mbox{card}\left(C_d(x,2)\right) +\cdots+\mbox{card}\left(C_d(x,R)\right). \] Now, applying Lemma~\ref{lemma:digitalCircle}, we get \begin{align*} \mbox{card}\left(D_d\left(x,R\right)\right) &= 1 + 4 + 8 +\cdots+4R=\\
&= 1 + 4\left(\frac{1 + R}{2}R\right)=\\
&= 2R^2 + 2R + 1. \end{align*} \end{proof}
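Both counting formulas are easy to confirm by brute force on a bounded window of $\mathbb{Z}^2$. The following sketch (with our own helper names \verb|circle| and \verb|disc|) enumerates the lattice points at $\mathcal{L}_1$ distance exactly $r$, respectively at most $R$, from a centre:

```python
# Brute-force check of Lemma 1 (card C_d(x,r) = 4r) and Lemma 2
# (card D_d(x,R) = 2R^2 + 2R + 1) for the L1 metric on Z^2.
def circle(x, r):
    return {(a, b)
            for a in range(x[0] - r, x[0] + r + 1)
            for b in range(x[1] - r, x[1] + r + 1)
            if abs(a - x[0]) + abs(b - x[1]) == r}

def disc(x, R):
    return {(a, b)
            for a in range(x[0] - R, x[0] + R + 1)
            for b in range(x[1] - R, x[1] + R + 1)
            if abs(a - x[0]) + abs(b - x[1]) <= R}

for r in range(1, 30):
    assert len(circle((0, 0), r)) == 4 * r                # Lemma 1
for R in range(0, 30):
    assert len(disc((0, 0), R)) == 2 * R * R + 2 * R + 1  # Lemma 2
```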
\section{How Near are Digital Discs?} To solve a wide class of problems of computational proximity, we know that the Hausdorff metric is appropriate~\cite{Klette2004,Deza2009}. The Hausdorff metric (denoted by $d_H(A,B)$) measures the distance between the sets $A,B$ in a given metric space $(X,d)$ and is defined by \[ d_H(A,B) = \mbox{max}\left\{\mathop{\mbox{sup}}\limits_{x\in A}\mathop{\mbox{inf}}\limits_{y\in B}d(x,y), \mathop{\mbox{sup}}\limits_{y\in B}\mathop{\mbox{inf}}\limits_{x\in A}d(x,y)\right\}. \] If the sets $A,B$ are finite, we obtain the simplification of the Hausdorff metric by maxima and minima~\cite{Engelking1989}, {\em i.e.}, \[ d_H(A,B) = \mbox{max}\left\{\mathop{\mbox{max}}\limits_{x\in A}\mathop{\mbox{min}}\limits_{y\in B}d(x,y), \mathop{\mbox{max}}\limits_{y\in B}\mathop{\mbox{min}}\limits_{x\in A}d(x,y)\right\}. \] For intersecting sets $A$ and $B$, \emph{i.e.}, $A\cap B\neq \emptyset$, the Hausdorff metric guarantees that $d_H(A,B) = 0$. Such sets in the theory of proximity spaces~\cite[\S 8.4]{Engelking1989} are said to be trivially near. Therefore, if $A\cap B\neq \emptyset$ and $A\cap C\neq \emptyset$ hold in the metric space $(X,d)$, we cannot distinguish which of the sets $B,C$ is nearer to $A$. Hence, the application of the Hausdorff distance to the sorting of near sets is more suitable for disjoint sets.
\setlength{\intextsep}{0pt} \begin{wrapfigure}[10]{R}{0.25\textwidth}
\begin{minipage}{3.2 cm}
\centering \includegraphics[width=35mm]{fig2} \caption[]{Overlap} \label{fig:intersectionDiscs}
\end{minipage}
\end{wrapfigure} $\mbox{}$\\
Classification of images in computer science frequently needs the application of Jaccard-like metrics~\cite{Fujita2013JJIAMsetDistance}. We will use a simplified version to analyze the proximity of intersecting digital discs. It should be especially noticed that the problem connected with the intersection of planar discs was considered from a computer science perspective in~\cite{Sharir1985SIAMJCplanarDiscs}.
By the Jaccard-like metric $m$ we mean the distance function defined via the cardinality of the symmetric difference of two arbitrary nonempty finite sets $A$ and $B$, {\em i.e.}, \begin{align*} m(A,B) &= \mbox{card}\left(A\bigtriangleup B\right)\\
&= \mbox{card}\left(A\setminus B\right) + \mbox{card}\left(B\setminus A\right)\\
&= \mbox{card}\left(A\right) + \mbox{card}\left(B\right) - 2\mbox{card}\left(A\cap B\right). \end{align*}
It is obvious that if $\mbox{card}\left(A\right)\neq \mbox{card}\left(B\right)$ and both sets are finite while $A\cap B\neq\emptyset$, we get $m(A,B) \neq 0$. This raises the question of the computation of the proximity of intersecting digital discs such as the ones in Fig.~\ref{fig:intersectionDiscs}.
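A minimal computational sketch of this observation (the helper names are ours): the Jaccard-like metric is just the cardinality of the symmetric difference, and it is nonzero for intersecting digital discs of different cardinalities.

```python
# Jaccard-like metric m(A,B) = card(A symm-diff B)
#                            = card A + card B - 2 card(A intersect B).
def disc(x, R):
    return {(a, b)
            for a in range(x[0] - R, x[0] + R + 1)
            for b in range(x[1] - R, x[1] + R + 1)
            if abs(a - x[0]) + abs(b - x[1]) <= R}

def m(A, B):
    return len(A ^ B)  # symmetric difference of Python sets

A, B = disc((0, 0), 3), disc((2, 0), 2)
assert m(A, B) == len(A) + len(B) - 2 * len(A & B)
assert A & B and m(A, B) != 0  # intersecting discs of different size: m != 0
```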
\begin{theorem} Let $D_d(x,R_1)$ and $D_d(y,R_2)$ be digital discs such that $C_d(x,R_1)\cap C_d(y,R_2)\neq\emptyset$. Then \[ m\left(D_d(x,R_1),D_d(y,R_2)\right) = 2\left(R_1^2 + R_2^2 + R_1 + R_2 - 2kn + k + n\right), \] where $k$ and $n$ denote the number of pixels forming the width and height of the greatest rectangle subset of an intersection set. \end{theorem} \begin{proof} Applying Lemma~\ref{lemma:discPixels}, we obtain the following cardinal equalities: \begin{align*} m\left(D_d(x,R_1),D_d(y,R_2)\right) &= \mbox{card}\left(D_d(x,R_1)\right) + \mbox{card}\left(D_d(y,R_2)\right) - 2\mbox{card}\left(D_d(x,R_1)\cap D_d(y,R_2)\right)\\
&= 2\left(R_1^2 + R_2^2 + R_1 + R_2 + 1\right) - 2\mbox{card}\left(D_d(x,R_1)\cap D_d(y,R_2)\right)\\
&= 2\left(R_1^2 + R_2^2 + R_1 + R_2 + 1\right) - 2\left[kn + (k-1)(n-1)\right]\\
&= 2\left(R_1^2 + R_2^2 + R_1 + R_2 - 2kn + k + n\right) \end{align*} \end{proof}
\setlength{\intextsep}{0pt} \begin{wrapfigure}[11]{R}{0.25\textwidth}
\begin{minipage}{3.2 cm}
\centering \includegraphics[width=35mm]{fig3} \caption[]{Non-Intersecting Boundaries} \label{fig:discBdys}
\end{minipage}
\end{wrapfigure} $\mbox{}$\\
Notice that there is a situation in which two digital discs intersect but their boundaries do not (see, {\em e.g.},~Fig.~\ref{fig:discBdys}). Observe that in that case, we have $C_d\left(x,R_1-1\right)\cap C_d\left(y,R_2\right)\neq\emptyset$, or, equivalently, $C_d\left(x,R_1\right)\cap C_d\left(y,R_2-1\right)\neq\emptyset$.
\begin{theorem} Let $D_d\left(x,R_1\right)$ and $D_d\left(y,R_2\right)$ be digital discs such that $C_d\left(x,R_1\right)\cap C_d\left(y,R_2\right) = \emptyset$, but $C_d\left(x,R_1-1\right)\cap C_d\left(y,R_2\right)\neq\emptyset$. Then we have $\mbox{m}\left(D_d\left(x,R_1\right),D_d\left(y,R_2\right)\right) = 2\left(R_1^2+R_2^2+R_1+R_2+1-2kn\right)$, where $k$ and $n$ denote the number of pixels forming the width and height of the greatest rectangle subset of an intersection set. \end{theorem} \begin{proof} In this case, we can easily note that $\mbox{card}\left(D_d\left(x,R_1\right)\cap D_d\left(y,R_2\right)\right) = 2kn$. Hence, we have $\mbox{m}\left(D_d\left(x,R_1\right),D_d\left(y,R_2\right)\right) = 2\left(R_1^2+R_2^2+R_1+R_2+1-2kn\right)$. \end{proof}
Next, we need to represent the centers $x$ and $y$ of the discs $D_d\left(x,R_1\right)$ and $D_d\left(y,R_2\right)$ by pairs of integer coordinates as follows: $x = \left(\alpha,\beta\right)$ and $y = \left(\gamma,\delta\right)$. If one of the following equalities holds, $d(x,y) = \abs{\alpha - \gamma}$ or $d(x,y) = \abs{\beta - \delta}$, {\em i.e.}, the centers of the discs lie on horizontal or vertical axes (similar to the situations shown in Fig.~\ref{fig:discCentres} and Fig.~\ref{fig:nonCapBdys}), then we can measure the proximity of the discs via computation of the pixel cardinality of the intersection sets.
\begin{figure}\label{fig:discCentres}
\label{fig:nonCapBdys}
\label{fig:intersectingDiscs}
\end{figure} $\mbox{}$\\
\begin{theorem}\label{thm:coords} Let $D_d\left(x,R_1\right)$ and $D_d\left(y,R_2\right)$ be digital discs such that $x = \left(\alpha,0\right)$ and $y = \left(\gamma,0\right)$ with $\alpha < \gamma$ and $\gamma - \alpha \leq R_1 + R_2$. If $C_d\left(x,R_1\right)\cap C_d\left(y,R_2\right)\neq\emptyset$, then \[ \mbox{m}\left(D_d\left(x,R_1\right),D_d\left(y,R_2\right)\right) = \left(R_1 - R_2\right)^2 + 2\left(R_1 + R_2 + 1\right)\left(\gamma - \alpha\right) - \left(\gamma - \alpha\right)^2. \] \end{theorem} \begin{proof} Since $x = \left(\alpha,0\right)$, $y = \left(\gamma,0\right)$ and $C_d\left(x,R_1\right)\cap C_d\left(y,R_2\right)\neq\emptyset$, we claim that \[ D_d\left(x,R_1\right)\cap D_d\left(y,R_2\right) = D_d(k,r),\ \mbox{where} \]
$k = \left(\frac{\alpha + R_1 + \gamma - R_2}{2},0\right)$ and\\ $r = R_1 - \left(\frac{\alpha + R_1 + \gamma - R_2}{2} - \alpha\right) = \frac{R_1 + R_2 - (\gamma - \alpha)}{2}\in \mathbb{N}\cup \left\{0\right\}$.
Consequently, simplification of \[ \mbox{m}\left(D_d\left(x,R_1\right),D_d\left(y,R_2\right)\right) = 2\left(R_1^2 + R_2^2 + R_1 + R_2 + 1 - 2r^2 - 2r - 1\right) \] gives the needed expression \[ \mbox{m}\left(D_d\left(x,R_1\right),D_d\left(y,R_2\right)\right) = \left(R_1 - R_2\right)^2 + 2\left(R_1 + R_2 + 1\right)\left(\gamma - \alpha\right) - \left(\gamma - \alpha\right)^2. \] \end{proof}
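As a sanity check (our own, not part of the paper), one can compare the closed-form expression of this theorem with a brute-force computation of $m$ for discs whose centres lie on the horizontal axis and whose boundary circles intersect (which forces $R_1 + R_2 - (\gamma - \alpha)$ to be even, so that the radius of the intersection disc is an integer):

```python
# Brute-force verification of
#   m = (R1-R2)^2 + 2(R1+R2+1)(gamma-alpha) - (gamma-alpha)^2
# for collinear centres with intersecting boundary circles.
def disc(x, R):
    return {(a, b)
            for a in range(x[0] - R, x[0] + R + 1)
            for b in range(x[1] - R, x[1] + R + 1)
            if abs(a - x[0]) + abs(b - x[1]) <= R}

def m(A, B):
    return len(A ^ B)  # cardinality of the symmetric difference

def formula(R1, R2, d):
    return (R1 - R2) ** 2 + 2 * (R1 + R2 + 1) * d - d * d

# cases with |R1 - R2| <= d <= R1 + R2 and R1 + R2 - d even
for R1, R2, d in [(2, 2, 2), (3, 1, 2), (4, 2, 2), (3, 3, 4), (5, 2, 3)]:
    assert m(disc((0, 0), R1), disc((d, 0), R2)) == formula(R1, R2, d)
```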
Observe that Theorem~\ref{thm:coords} can be applied in similar cases when the intersection set of the digital discs itself is a disc.
This leads us to consider two intersecting digital discs with non-intersecting boundaries (see, {\em e.g.}, Fig.~\ref{fig:nonCapBdys}) whose centers lie on a common horizontal or vertical axis. In such cases, we obtain the following result.
\begin{corollary} Let $D_d\left(x,R_1\right)$ and $D_d\left(y,R_2\right)$ be intersecting digital discs that satisfy the conditions of Theorem~\ref{thm:coords}, except that $C_d\left(x,R_1\right)\cap C_d\left(y,R_2\right) = \emptyset$. Then we have \begin{align*} \mbox{m}\left(D_d\left(x,R_1\right),D_d\left(y,R_2\right)\right) &= 2\left(R_1^2 + R_2^2 + R_1 + R_2 - 2r_0^2 - 4r_0 - 1\right), \mbox{where}\\ r_0 &= \frac{R_1 + R_2 - (\gamma - \alpha) - 1}{2}. \end{align*} \end{corollary}
\end{document}
\begin{document}
\begin{abstract} We give an algebro-geometric construction of the Hitchin connection, valid also in positive characteristic (with a few exceptions). A key ingredient is a substitute for the Narasimhan-Atiyah-Bott K\"ahler form that realizes the Chern class of the determinant-of-cohomology line bundle on the moduli space of bundles on a curve. As a replacement we use an explicit realisation of the Atiyah class of this line bundle, based on the theory of the trace complex due to Beilinson-Schechtman and Bloch-Esnault. \end{abstract}
\title{The Hitchin Connection in Arbitrary Characteristic}
\section{Introduction} \subsection{}The Hitchin connection was originally introduced in \cite{hitchin:1990}, with a two-fold motivation. The first was an elucidation of the $2+1$ dimensional topological quantum field theory proposed by Witten to explain the polynomial Jones invariants for knots \cite{witten:1989, atiyah:1990}. The second was the question of the dependency of the geometric quantisation of a symplectic manifold on the choice of polarisation.
In a beautiful construction, Hitchin exhibited a flat projective connection on the bundles of non-abelian theta functions over the base of a family of compact Riemann surfaces. For a fixed Riemann surface, the corresponding vector space can be understood to be the geometric quantisation of the moduli space of flat unitary connections on the underlying surface. The latter carries a canonical symplectic structure, but the complex structure on the surface also equips the moduli space with a K\"ahler polarisation, and the connection indicates precisely how the quantisation varies.
Even though the construction of the connection uses analytic and K\"ahler techniques throughout, it was already observed by Hitchin that the end result could be interpreted entirely in terms of algebraic geometry, and should in fact hold in positive characteristic as well (see \cite[\S 5]{hitchin:1990a}). This in itself is not too surprising, bearing in mind that one of the sources of inspiration for Hitchin was the work of Welters {\cite{welters:1983}}, which generalised to positive characteristic the heat equation that (abelian) theta functions had classically been known to satisfy. Welters' work was probably the first in which a cohomological approach to heat equations was developed; the non-abelian situation is quite a bit more involved, however.
The aim of this paper now is to give a new, purely algebro-geometric, construction of the Hitchin connection, without using any analytic or K\"ahler techniques. This construction works as well in positive characteristic (apart from a few exceptions, see below), which as far as we are aware is a first, for either the Hitchin connection itself or any of the equivalent connections (such as the KZB or TUY/WZW connection from conformal field theory -- see however \cite{schechtman.varchenko:2019} for a recent study of the KZ equation in positive characteristic). We stress that the construction only involves (finite-dimensional) algebraic geometry, and in particular no infinite-dimensional representation theory -- the only prerequisites needed are covered by \cite{ega}.
Key elements in our construction are a framework for connections coming from heat operators in algebraic geometry, due to van Geemen and de Jong \cite{vangeemen.dejong:1998}, as well as a substitute for the Narasimhan-Atiyah-Bott K\"ahler form \cite{narasimhan:1970, atiyah.bott:1983}, which according to Quillen \cite{quillen:1985} realizes the Chern class of the determinant-of-cohomology line bundle. The serendipitous similarity between this K\"ahler form and the quadratic part of the Hitchin system was crucially used in \cite{hitchin:1990} to obtain the Hitchin connection in the complex case.
We compensate for the absence of this K\"ahler form by interpreting the cohomology class of the line bundle as an Atiyah class. Establishing the cohomological conditions of the theorem of van Geemen and de Jong by this alternative route forms the bulk of our work.
An essential ingredient of our construction is the description of the Atiyah algebra of the theta line bundle over the moduli space in terms of the first direct image of the Atiyah algebra of a universal bundle (Theorem \ref{maintracecompl}). A complete proof is given in section \ref{sectionbigproof} and in appendices \ref{appendixtracecomplex} and \ref{appendixsplitting}, whose aim is to give a simplified and self-contained presentation of the results used in the proof of this theorem, i.e. the theory of the trace complex \cite{beilinson.schechtman:1988}, \cite{bloch.esnault:2002} and some additional inputs worked out in \cite{sun.tsai:2004}, describing the behaviour of the above objects when replacing a universal bundle by its endomorphism bundle. We observe that the paper \cite{sun.tsai:2004} also describes a construction of the Hitchin connection, but the strategy in \cite{sun.tsai:2004} is different from ours: they construct the Hitchin connection by relying on another argument from \cite{faltings:1997}, whereas our approach seeks to verify directly the van Geemen--de Jong criterion for the liftability of a symbol map to a heat operator.
\subsection{} At this point, we would like to make a few comments on the relationship of this work to the existing literature. As already mentioned, we will follow the algebro-geometric framework of van Geemen and de Jong \cite{vangeemen.dejong:1998} for connections induced by heat operators. This provides a purely cohomological criterion for the existence of a heat operator with a prescribed symbol map.
In \cite[\S 2.3.8]{vangeemen.dejong:1998} van Geemen and de Jong show how their framework of connections induced by heat-operators easily re-captures Welters' construction of the Mumford-Welters projective connection on bundles of theta functions. The main point of their work is to use
this framework (which we recall below in Theorem \ref{vgdj}) to construct a Hitchin connection (in complex algebraic geometry) in the particular case of rank $2$ bundles on genus $2$ curves (which was excluded from Hitchin's original work, and indeed from ours as well). They do not re-establish the Hitchin connection in all other cases though, and in this sense
the present paper exactly complements their work.
We remark that several other algebro-geometric descriptions of connections on bundles of non-abelian theta functions have appeared in the literature -- e.g. \cite{faltings:1997, ramadas:1998, ginzburg:1995, ran:2006,sun.tsai:2004,ben-zvi.frenkel:2004}. It is not always clear, however, exactly how these connections are related, see e.g. \cite{faltings-vs-hitchin}, and for various reasons they are all restricted to characteristic zero. None of them directly uses the framework of van Geemen and de Jong. We note that many of the properties of Hitchin's original connection, like e.g. monodromy \cite{laszlo.pauly.sorger:2013} or projective flatness of strange duality maps \cite{belkale:2009}, have been proved with representation-theoretical methods, more precisely by using its equivalence, due to Laszlo \cite{laszlo:1998}, with the TUY/WZW connection on spaces of conformal blocks \cite{TUY:1989, tsuchimoto:1993}.
For most of the cited works the relationship with conformal blocks is undeveloped (they have of course other motivations: e.g. \cite{sun.tsai:2004}, which together with \cite{ginzburg:1995} is probably closest to our approach, is particularly focused on the logarithmic description of the connection as the curves degenerate to nodal singularities). We therefore thought it useful to establish the Hitchin connection itself, in the original context (moduli of bundles with trivial determinant over curves), in a purely algebro-geometric way that nevertheless manifestly gives the same connection as Hitchin, and to which Laszlo's theorem immediately applies. For completeness, we mention that there are several other constructions in the literature of a differential geometric or K\"ahler nature, e.g. \cite{andersen.gammelgaard.lauridsen:2012,axelrod.dellapietra.witten:1991,scheinost.schottenloher:1995}.
We want to mention that (because of Laszlo's theorem) the term \emph{Hitchin connection} is often loosely employed to refer to any of a number of equivalent projective connections. We shall use it in a much stricter sense however, as a connection arising through a heat operator with a prescribed symbol map (see below).
In this context the terminology \emph{non-abelian theta functions} is frequently used (including by us), even though that is in fact slightly misleading. Our construction of the connection only works for moduli spaces of bundles with trivial determinant, or equivalently, $\operatorname{SL}(r)$-principal bundles. At various places the (semi-)simplicity is crucial, and as far as we are aware there is currently no construction that works immediately for arbitrary reductive groups. Indeed, a connection for moduli of $\operatorname{GL}(r)$-principal bundles was crucially needed in \cite{belkale:2009}, but this was created out of an $\operatorname{SL}(r)$-connection and an (abelian) $\mathbb{G}_m$-connection.
\subsection{} As a motivation for looking at the Hitchin connection from a purely algebro-geometric point of view, we would like to highlight three contexts. The first is the Grothendieck-Katz $p$-curvature conjecture \cite{katz:1972}, which (roughly speaking) claims that every algebraic connection which is formulated in sufficient generality and has vanishing $p$-curvature when reduced mod $p$ for almost all $p$ should have finite monodromy in the complex case. Presumably motivated by this conjecture it was originally expected (see \cite[\S 7]{brylinski.mclaughlin:1994}) that the Hitchin connection would have finite monodromy. However, it was shown by Masbaum in \cite{masbaum:1999} that, for rank $2$, the image of the corresponding projective representation of the mapping class group will, for all genera and almost all levels, contain elements of infinite order. This came somewhat as a surprise, as the connection for abelian theta-functions was well known to have finite monodromy from Mumford's approach through theta groups. Masbaum was working with a skein-theoretic approach to these representations, but the equivalence of this picture with the Hitchin connection follows from the work of Andersen and Ueno \cite{andersen.ueno:2015} combined with Laszlo's theorem. Masbaum's result was also directly re-derived in an algebro-geometric context by Laszlo, Sorger, and the fourth named author \cite{laszlo.pauly.sorger:2013}. We hope that our construction can be a starting point for investigating the $p$-curvature of the Hitchin connection.
The second is the question of integrality of TQFTs, and the related topic of modular representations of the mapping class group. Various results have been obtained here through a skein-theoretic approach, cfr. \cite{gilmer:2004, gilmer.masbaum:2007, gilmer.masbaum:2014, gilmer.masbaum:2017}, but so far a geometric counterpart is missing. We again hope that the current work can help shed light on these issues.
Finally we would like to mention various generalisations of the connection constructed here, by looking at variations of the moduli problem of vector bundles on curves. A minor variation is by looking at moduli spaces of $G$-principal bundles, where $G$ is a semi-simple group. One could also equip the curve with marked points, and look for parabolic structures of the bundle at these points. All of these can be understood as special cases of the moduli problem of $\mathcal{G}$-torsors, where $\mathcal{G}$ is a parahoric Bruhat-Tits group scheme over the curve (see e.g. \cite{pappas.rapoport:2010,heinloth:2010,balaji-seshadri:2015}). We hope to come back to the Hitchin connection in this generality in the near future, and expect that the construction developed in this paper, bypassing the need for an explicit description of a K\"ahler form, will facilitate this.
\subsection{} The rest of the paper is organised as follows. In Section \ref{recaphitchin} a summary of Hitchin's work is given, explaining the context of variation of K\"ahler polarisation in geometric quantisation. There are essentially two parts to this: a general framework that gives conditions under which a projective connection exists (Theorem \ref{mainHitchin}), and a discussion of why these conditions are satisfied in the case of moduli spaces of flat unitary connections on surfaces. Though none of what follows later logically depends on this, we nevertheless wanted to include a brief overview of Hitchin's original construction to highlight the extent to which our exposition parallels his.
The remainder of the paper is then concerned with our algebro-geometric construction of the Hitchin connection. In Section \ref{contextvgdj}, after a quick review of Atiyah sequences and Atiyah classes, the notion of heat operators, their relation to connections, and the main framework of van Geemen and de Jong are introduced (Theorem \ref{vgdj}). We present the latter as a counterpart to Theorem \ref{mainHitchin}, and for completeness we have included a proof of it and of Hitchin's flatness criterion (Theorem \ref{thm_flatness}), to highlight that these results hold in arbitrary characteristic, as the original discussion in \cite{vangeemen.dejong:1998} was strictly speaking just in a complex context.
Section \ref{mainconstruction} then goes on to show that the conditions of Theorem \ref{vgdj} are indeed satisfied, culminating in Theorem \ref{existenceconnection}. The primary tool to this end is Proposition \ref{phi-rho-L}, and most of the rest of the section is essentially a (necessarily lengthy) \emph{mise en place} to obtain this result. As stated above, the key element is Theorem \ref{maintracecompl}, which realizes the Atiyah class of the determinant-of-cohomology line bundle as a particular extension, given as the first derived functor of the push down of the dual of the Atiyah sequence of the universal bundle on the moduli space of bundles. This provides an analogue to the theorem of Quillen that realizes the Chern class of the line bundle as a particular K\"ahler form. Just as in Hitchin's original approach, it is this particular realisation that allows us to verify the cohomological conditions of Theorem \ref{vgdj}. Theorem \ref{maintracecompl} is itself obtained from a variation on the theory of the trace complex, of which we give a self-contained account in Appendix \ref{appendixtracecomplex}. The proof of Theorem \ref{maintracecompl} takes up Section \ref{sectionbigproof}. Finally, the other appendices contain proofs of various facts we use in the main body of the article, but for which we could not find references in the generality we needed. \subsection{} To finish the introduction, we state the necessary restrictions on the characteristic $p$ of the base field $\Bbbk$, and their sources. The first limitation that we encounter is due to the use of the trace and the trace pairing: \[\begin{tikzcd} \operatorname{tr}:\mathcal{E} nd(E) \ar[r] & \mathcal{O}, &\operatorname{Tr}: \mathcal{E} nd(E) \times \mathcal{E} nd(E) \ar[r] & \mathcal{O}. \end{tikzcd} \] We need these to behave as they do in characteristic zero. In particular we want the trace $\operatorname{tr}$ to split equivariantly, i.e.
$\mathcal{E} nd(E)=\mathcal{E} nd^0(E)\oplus \mathcal{O}$, where $\mathcal{E} nd^0(E)$ is the kernel of $\operatorname{tr}$. This is induced from an $\operatorname{SL}(r)$-equivariant splitting of the short exact sequence of Lie algebras \[\begin{tikzcd} 0\ar[r] & \mathfrak{sl}(r)\ar[r] & \mathfrak{gl}(r)\ar[r] & \Bbbk \ar[r] & 0,\end{tikzcd}\] which requires $p \nmid r$. Secondly, we want the trace pairing $\operatorname{Tr}$, which is non-degenerate for all possible characteristics $p$ and all ranks $r=\operatorname{rk}(E)$, to remain non-degenerate when restricted to $\mathcal{E} nd^0(E)\times \mathcal{E} nd^0(E)$. This is again true if and only if $p \nmid r$.
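To make the role of the condition $p \nmid r$ concrete, the splitting and the induced projection onto $\mathcal{E} nd^0(E)$ (denoted $q$ later in the paper) can be written out explicitly; this is standard linear algebra over $\Bbbk$:
```latex
% Explicit form of the splitting, valid precisely when p does not divide r:
\[
  q(\phi) \;=\; \phi \;-\; \tfrac{1}{r}\operatorname{tr}(\phi)\,\operatorname{Id}_E,
  \qquad
  \phi \;=\; q(\phi) \;+\; \tfrac{1}{r}\operatorname{tr}(\phi)\,\operatorname{Id}_E .
\]
% If p divides r, then tr(Id_E) = r = 0 in k, so Id_E lies in End^0(E) and is
% orthogonal to all of End^0(E) for the trace pairing: Tr(Id_E, psi) = tr(psi) = 0
% for every trace-free psi. Hence both the splitting and the non-degeneracy of
% Tr on End^0(E) fail exactly when p | r.
```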
The second limitation is due to the use of differential operators (cf. \cite[IV, \S 16.8]{ega}) and their symbols: in characteristic $p>0$ one considers the algebra of differential operators associated to the Atiyah algebra $\mathcal{D}^{(1)}_{\mathcal{M}/S}(L)$ and defined as a quotient of its universal enveloping algebra -- see \cite[1.1.3]{beilinson.schechtman:1988}. Up to order $k=p-1$ these however coincide with $\mathcal{D}^{(k)}_{\mathcal{M}/S}(L)$, and we have the symbol map to $\Sym^k T_{\mathcal{M}/S}$ with its usual properties at our disposal. As the construction of connections via heat operators uses second order operators and their symbols, we exclude characteristic 2; in the flatness criterion also third-order symbols appear, hence there we also exclude $p=3$.
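A standard example illustrating why orders $\geq p$ are problematic: in characteristic $p$ the $p$-th power of a derivation is again a derivation, so it behaves like a first-order operator even though it is composed of $p$ first-order ones. Indeed, for $\partial = \frac{d}{dx}$ the iterated Leibniz rule gives
```latex
\[
  \partial^{p}(fg) \;=\; \sum_{i=0}^{p} \binom{p}{i}\, \partial^{i}f\, \partial^{p-i}g
  \;=\; f\,\partial^{p}g \;+\; (\partial^{p}f)\,g ,
\]
% since p divides the binomial coefficients for 0 < i < p. The naive symbol
% calculus therefore breaks down at order p, while operators of order at
% most p-1 behave as in characteristic zero.
```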
Furthermore, we also use trace complexes; the original reference avoids positive characteristic, but as we use only part of the theory we check in Appendix \ref{appendixtracecomplex} that everything works with the restrictions already in place: in order for the residue $\widetilde{\Res}$ from \cite[page 658]{beilinson.schechtman:1988} to be well defined, we need to avoid characteristic 2.
The third and last limitation is due to the formula in Thm. \ref{existenceconnection}, where there is a factor $\frac{1}{r+k}$. Hence we also need to assume that $p\nmid (r+k)$. \subsection{Acknowledgments} The authors would like to thank J\o rgen Andersen, Prakash Belkale, C\'edric Bonnaf\'e, Najmuddin Fakhruddin, Emilio Franco, Bert van Geemen, Jochen Heinloth, Nigel Hitchin, Gregor Masbaum, Swarnava Muk\-ho\-padh\-yay, Jon Pridham, Brent Pym, Pavel Safronov, Richard Wentworth and Hacen Zelaci for useful conversations and remarks at various stages of this work. This work grew out of another project of the first and third named authors that was joint with J\o rgen Andersen, Peter Gothen and Shehryar Sikander -- they thank all three of them for related discussions.
\section{Heat operators and connections - summary of the work of Hitchin}\label{recaphitchin} We outline in this section the original work of Hitchin that establishes the flat projective connection on bundles of non-abelian theta functions. Hitchin's motivation came from geometric quantisation and K\"ahler geometry, and he mainly used analytic or K\"ahler techniques. \subsection{Change of K\"ahler polarisation} Inspired by earlier work of Welters \cite{welters:1983}, the Hitchin connection was introduced in \cite{hitchin:1990} in the context of geometric quantisation: given a compact (real) symplectic manifold $(\mathcal{M},\omega)$, with pre-quantum line bundle $L$, Hitchin studied how the geometric quantisations with respect to different K\"ahler polarisations were related. In particular, he gave the following general criterion for the existence of a projective connection on the bundle of quantisations: \begin{theorem}[Hitchin, {\cite[Theorem 1.20]{hitchin:1990}}] \label{mainHitchin} Given a family of K\"ahler polarisations on $\mathcal{M}$, such that for each polarisation we have \begin{enumerate}[(a)] \item The map $$\begin{tikzcd}\cup [\omega]: H^0(\mathcal{M},T_{\mathcal{M}}) \ar[r] &H^1(\mathcal{M}, \mathcal{O}_{\mathcal{M}})\end{tikzcd}$$ is an isomorphism (this means that there are no holomorphic vector fields which fix $L$, i.e. $H^0(\mathcal{M}, \mathcal{D}^{(1)}_{\mathcal{M}}(L))=H^0(\mathcal{M}, \mathcal{O}_{\mathcal{M}})$); \item\label{hitchintwo} For each $s\in H^0(\mathcal{M},L)$ and tangent vector $\overset{.}{I}$ to the base of the family there exists a smoothly varying $$A(\overset{.}{I}, s)\in \mathbb{H}^1(\mathcal{M}, \mathcal{D}^{(1)}_{\mathcal{M}}(L)\overset{.s}{\rightarrow} L)$$ such that the symbol $-i\sigma_1 (A(\overset{.}{I}, s))$ equals the Kodaira-Spencer class $[\overset{.}{I}]$ in\linebreak $H^1(\mathcal{M}, T_{\mathcal{M}})$. 
\end{enumerate} Then this defines a projective connection on the bundle of projective spaces $\mathbb{P}(H^0(\mathcal{M}, L))$ over the base of the family. \end{theorem} Here $\mathcal{D}^{(1)}_{\mathcal{M}}(L)$ denotes the sheaf of first order differential operators on $L$ and $\sigma_1$ its symbol map to $T_{\mathcal{M}}$. The map $.s:\mathcal{D}^{(1)}_{\mathcal{M}}(L)\rightarrow L$ is just given by evaluating the differential operators on the section $s$, and $\mathbb{H}^1$ stands for the first hypercohomology group of the two-term complex.
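Concretely, in a \v{C}ech picture relative to an open cover $\{U_i\}$ of $\mathcal{M}$ (and up to sign conventions, which vary in the literature), a class in this hypercohomology group can be represented as follows:
```latex
% Cech description of a class in the first hypercohomology of the two-term
% complex D^(1)(L) --.s--> L; signs depend on conventions.
\[
  \big(\,D_{ij} \in \mathcal{D}^{(1)}_{\mathcal{M}}(L)(U_i \cap U_j),\;
        t_i \in L(U_i)\,\big),
  \qquad
  D_{jk} - D_{ik} + D_{ij} = 0,
  \qquad
  D_{ij}\,s = t_j - t_i ,
\]
% modulo coboundaries (E_i) |--> (E_j - E_i, E_i s) with E_i a first-order
% operator on U_i. The D_{ij} deform the pair (M, L), while the t_i record
% the corresponding deformation of the section s, matching the
% deformation-theoretic interpretation due to Welters recalled below.
```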
Note that the space of infinitesimal deformations of the pair $(\mathcal{M}, L)$ is given by\linebreak $H^1(\mathcal{M}, \mathcal{D}^{(1)}_{\mathcal{M}}(L))$, and likewise the space of infinitesimal deformations of the triple\linebreak $(\mathcal{M},L,s)$, for $s\in H^0(\mathcal{M},L)$, is given by $\mathbb{H}^1(\mathcal{M}, \mathcal{D}^{(1)}_{\mathcal{M}}(L)\overset{.s}{\rightarrow} L)$ (cfr. \cite[Proposition 1.2]{welters:1983}). \subsection{Moduli spaces of flat unitary connections} Moreover, Hitchin then showed that the conditions of Theorem \ref{mainHitchin} are satisfied in the case where $(\mathcal{M},\omega)$ is the space of flat, unitary, tracefree connections on the trivial rank $r$ bundle over a closed oriented surface $\mathcal{C}$ of genus $g\geq 2$ (with the exception of the case $r=2, g=2$), and $L=\mathcal{L}^k$ is a power of the positive generator $\mathcal{L}$ of its Picard group. This space is not quite a manifold, but its smooth locus is canonically a symplectic manifold, with $\omega$ the Goldman-Karshon symplectic form (which uses a Killing form on the Lie algebra of $\operatorname{SU}(r)$).
If $\mathcal{C}$ is equipped with the structure of a Riemann surface (or, equivalently, regarded as a smooth complex projective curve), then $\mathcal{M}$ can be understood as the moduli space of semi-stable rank $r$ vector bundles with trivial determinant, which is a projective variety. The symplectic form $\omega$ is then moreover a K\"ahler form, as discussed by Narasimhan \cite{narasimhan:1970} and Atiyah-Bott \cite{atiyah.bott:1983}. By Quillen's theorem \cite{quillen:1985}, the inverse $\mathcal{L}$ of the determinant-of-cohomology line bundle provides a pre-quantum line bundle.
In particular, we can understand the $A(\overset{.}{I}, s)$ as follows in this situation: we have the short exact sequence of complexes \begin{equation}\label{sesofcomplex}\begin{tikzcd}[row sep=small] 0\ar[r] &\mathcal{D}^{(1)}_{\mathcal{M}} (\mathcal{L}^k) \ar[r] \ar[d, ".s"] & \mathcal{D}^{(2)}_{\mathcal{M}} (\mathcal{L}^k)\ar[r] \ar[d, ".s"] & \Sym^2 T_{\mathcal{M}}\ar[r] \ar[d]& 0\\ 0\ar[r] & \mathcal{L}^k \ar[r] &\mathcal{L}^k \ar[r] & 0 \ar[r] &0. \end{tikzcd}\end{equation} This gives a connecting homomorphism \begin{equation}\label{boundary}\begin{tikzcd}\delta: H^0(\mathcal{M}, \Sym^2(T_{\mathcal{M}})) \ar[r] & \mathbb{H}^1(\mathcal{M}, \mathcal{D}^{(1)}_{\mathcal{M}}(\mathcal{L}^k)\overset{.s}{\rightarrow} \mathcal{L}^k).\end{tikzcd} \end{equation} On the other hand, the quadratic part of the Hitchin system (which also uses the Killing form) gives, for every holomorphic vector bundle $E$ on $\mathcal{C}$ with trivial determinant, a map $$\begin{tikzcd}\Sym^2 H^0(\mathcal{C},\mathcal{E}nd^0(E)\otimes K_{\mathcal{C}}) \ar[r] & H^0(\mathcal{C}, K^2_{\mathcal{C}}),\end{tikzcd}$$ where $K_{\mathcal{C}}$ is the canonical bundle of $\mathcal{C}$. Dualizing this, and using Serre duality on $\mathcal{C}$ gives, for each $E$, a map $$\begin{tikzcd}H^1(\mathcal{C}, T_{\mathcal{C}}) \ar[r] & \Sym^2 H^1(\mathcal{C}, \mathcal{E}nd^0(E)),\end{tikzcd}$$ where $\mathcal{E}nd^0(E)$ is the sheaf of trace-free endomorphisms of $E$. 
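The dualisation step can be spelled out. One uses Serre duality twice, the self-duality of $\mathcal{E}nd^0(E)$ via the trace pairing (which again needs $p \nmid r$; over $\mathbb{C}$, in Hitchin's setting, this is automatic), and the identification $(\Sym^2 V)^{\ast} \cong \Sym^2(V^{\ast})$, valid away from characteristic $2$:
```latex
% Serre duality on C gives H^0(C, K_C^2)^* = H^1(C, T_C) and, identifying
% End^0(E) with its dual via the trace pairing,
% H^0(C, End^0(E) (x) K_C)^* = H^1(C, End^0(E)). Dualizing the quadratic
% part of the Hitchin map therefore yields
\[
  H^1(\mathcal{C}, T_{\mathcal{C}})
  \;\cong\; H^0(\mathcal{C}, K^2_{\mathcal{C}})^{\ast}
  \longrightarrow
  \big(\Sym^2 H^0(\mathcal{C},\mathcal{E}nd^0(E)\otimes K_{\mathcal{C}})\big)^{\ast}
  \;\cong\;
  \Sym^2 H^1(\mathcal{C}, \mathcal{E}nd^0(E)).
\]
```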
Since for each stable $E$ the space $H^1(\mathcal{C}, \mathcal{E}nd^0(E))$ is the tangent space to the moduli space (in this case $\mathcal{M}$), we can write this as a map \begin{equation}\begin{tikzcd}\label{rho}\rho:H^1(\mathcal{C}, T_{\mathcal{C}}) \ar[r] &H^0(\mathcal{M}, \Sym^2 T_{\mathcal{M}}).\end{tikzcd}\end{equation} Composing this with (\ref{boundary}) gives a linear map $$\begin{tikzcd}A(., s): H^1(\mathcal{C}, T_{\mathcal{C}}) \ar[r] &\mathbb{H}^1(\mathcal{M}, \mathcal{D}^{(1)}_{\mathcal{M}}(\mathcal{L}^k)\overset{.s}{\rightarrow} \mathcal{L}^k)\end{tikzcd}$$ which depends smoothly on $s$, and which Hitchin shows (after a rescaling by $\frac{1}{r+k}$) to satisfy the condition in \ref{hitchintwo} of Theorem \ref{mainHitchin}. \begin{remark} Some key steps in Hitchin's approach were fundamentally differential geometric or K\"ahler in nature. In particular, the explicit description of the Narasimhan-Atiyah-Bott K\"ahler form, and its similarity to the symmetric two-tensors given by the symbol, were crucially used. \end{remark}
\section{Hitchin-type Connections in Algebraic Geometry}\label{contextvgdj} An algebro-geometric framework for connections determined by a heat equation (like the Hitchin connection) was developed by van Geemen and de Jong in \cite{vangeemen.dejong:1998}. Besides being set in algebraic geometry as opposed to K\"ahler geometry, this description is also more local, in contrast with the infinitesimal framework of Theorem \ref{mainHitchin} of Hitchin (the latter is not a substantial difference however, cfr. \cite[\S 2.3.4]{vangeemen.dejong:1998}). We summarise the main parts and some related prerequisites below.
From now on, everything will be defined over an algebraically closed field $\Bbbk$ of characteristic different from $2$. We have to exclude characteristic $2$ for a variety of reasons, but in particular because we will split the projection $T_{\mathcal{M}}^{\otimes 2}\rightarrow \Sym^2 T_{\mathcal{M}}$ throughout. In this general section, $\mathcal{M} \rightarrow S$ will be a smooth morphism of smooth schemes. \subsection{Atiyah Algebroids, (projective) connections, and Atiyah classes}\label{seqandcon} Our approach to connections essentially follows Atiyah's seminal exposition \cite{atiyah:1957}, but in this context we will phrase everything in terms of vector bundles rather than work with principal bundles. \subsubsection*{Atiyah algebroids} Let $\mathcal{D}^{(n)}_{\mathcal{M}}(E)$ be the sheaf of differential operators of order at most $n$ on a vector bundle $E$ over $\mathcal{M}$. The associated symbol map will be denoted $$\sigma_n:\mathcal{D}^{(n)}_{\mathcal{M}}(E)\rightarrow \Sym^n T_{\mathcal{M}}\otimes \mathcal{E}nd(E).$$ \begin{definition} The Atiyah sequence associated to a vector bundle $E\rightarrow \mathcal{M}$ is the top row of the following diagram \begin{equation*} \begin{tikzcd}[row sep=small] 0\ar[r]& \mathcal{E}nd(E)\ar[r]\ar[d,equal] & \mathcal{A}(E)\ar[r] \ar[d,hookrightarrow]& T_{\mathcal{M}}\ar[r] \ar[d, hookrightarrow, "-\otimes \operatorname{Id}_E"]&0\\ 0 \ar[r] & \mathcal{E}nd(E)\ar[r] & \mathcal{D}^{(1)}_{\mathcal{M}}(E)
\ar[r, "\sigma_1"] & T_{\mathcal{M}}\otimes \mathcal{E}nd(E)\ar[r] & 0.\end{tikzcd} \end{equation*} The middle term $\mathcal{A}(E)$ is called the Atiyah algebroid associated to $E$ (or, strictly speaking, to the frame bundle associated to $E$, which is a $\operatorname{GL}$-principal bundle). \end{definition} \begin{definition} We will denote by $\mathcal{A}_{\mathcal{M}/S}(E)$, the relative Atiyah algebroid associated to a vector bundle $E\rightarrow \mathcal{M}$, where $\mathcal{M}$ comes with a morphism $\pi: \mathcal{M}\rightarrow S$ onto a base scheme $S$. The associated relative Atiyah sequence is the top row of the following pull-back diagram: \begin{equation}\label{relatiyahsequence}
\begin{tikzcd}[row sep=small] 0\ar[r]& \mathcal{E}nd(E)\ar[r]\ar[d,equal] & \mathcal{A}_{\mathcal{M}/S}(E)\ar[r] \ar[d,hookrightarrow]& T_{\mathcal{M}/S}\ar[r] \ar[d, hookrightarrow]&0\\ 0 \ar[r] & \mathcal{E}nd(E)\ar[r] & \mathcal{A}(E)\ar[r] & T_{\mathcal{M}}\ar[r] & 0.\end{tikzcd} \end{equation} where $T_{\mathcal{M}/S}$ is the subsheaf of vector fields tangent along the fibers, i.e., $$T_{\mathcal{M}/S} = \Ker(T_{\mathcal{M}} \to \pi^* T_S).$$ \end{definition} Finally, we need to define the trace-free Atiyah algebroid for vector bundles with trivial determinant. Pushing out the standard Atiyah sequence by the trace map $\mathcal{E}nd(E)\rightarrow \mathcal{O}$ gives a morphism from the Atiyah sequence of $E$ to that of $\det(E)$. If the latter is trivial, its Atiyah sequence splits canonically, giving rise to a morphism $\tr:\mathcal{A}(E)\rightarrow \mathcal{O}$. We define the trace-free Atiyah algebroid $\mathcal{A}^0(E)$ to be the kernel of this map. This all fits together in a commutative diagram (with exact horizontal rows and exact left vertical column) \begin{equation*}
\begin{tikzcd}[row sep=small] 0\ar[r]& \mathcal{E}nd^0(E)\ar[r]\ar[d,hookrightarrow] & \mathcal{A}^0(E)\ar[r, "\sigma_1"] \ar[d,hookrightarrow]& T_{\mathcal{M}}\ar[r] \ar[d, equal]&0\\ 0 \ar[r] & \mathcal{E}nd(E)\ar[r] \ar[d, "\tr"] & \mathcal{A}(E)\ar[r,"\sigma_1"] \ar[d, "\tr + \sigma_1"]& T_{\mathcal{M}}\ar[r]\ar[d, equal] & 0 \\ 0\ar[r] & \mathcal{O}\ar[r] & \mathcal{A}(\det(E))\cong \mathcal{O}\oplus T_{\mathcal{M}}\ar[r] & T_{\mathcal{M}}\ar[r] & 0. \end{tikzcd} \end{equation*} The algebroid $\mathcal{A}^0(E)$ can be understood, in the language of principal bundles, as arising from the $\operatorname{SL}(r)$-principal frame bundle of $E$. Analogously there is also a relative version $\mathcal{A}^0_{\mathcal{M}/S}(E)$.
Assuming $p \nmid r$, we have a direct sum decomposition $\mathcal{E} nd(E) = \mathcal{E} nd^0(E) \oplus \mathcal{O}_{\mathcal{M}}$ and we denote by $q: \mathcal{E} nd(E) \to \mathcal{E} nd^0(E)$ the projection onto the first direct summand. In this case, the trace-free Atiyah algebroid is also canonically isomorphic to the \emph{projective} Atiyah algebroid, i.e. the push-out of the standard Atiyah sequence by the map $q$ as follows \begin{equation*} \begin{tikzcd}[row sep=small] 0 \ar[r] & \mathcal{E}nd(E) \ar[r]\ar[d,"q"] & \mathcal{A}(E) \ar[r]\ar[d] & T_\mathcal{M} \ar[r]\ar[d,equal] & 0 \\ 0 \ar[r] & \mathcal{E}nd^0(E) \ar[r] & \mathcal{A}^0(E) \ar[r] & T_\mathcal{M} \ar[r] & 0. \end{tikzcd} \end{equation*} We will make this identification throughout. \subsubsection*{Atiyah classes} We will also need a relative version of the Atiyah class for a line bundle $L$. There are a number of ways this can be defined; perhaps the easiest is by taking the top sequence of (\ref{relatiyahsequence}), tensoring it with $\Omega^1_{\mathcal{M}/S}$, and applying $\pi_*$ to obtain a long exact sequence (of course for line bundles we have canonically $\mathcal{E}nd(L)\cong\mathcal{O}$). \begin{definition} The image of the identity $\pi_\ast\operatorname{Id}\in\pi_*\big( \Omega^1_{\mathcal{M}/S}\otimes T_{\mathcal{M}/S}\big)$ under the connecting homomorphism yields a global section of $R^1\pi_* \big(\Omega^1_{\mathcal{M}/S} \otimes \mathcal{E}nd(L)\big) \cong R^1\pi_*\,\Omega^1_{\mathcal{M}/S}$, which we shall refer to as the \emph{relative Atiyah class}, and denote by $[L]$. \end{definition}
Note that the connecting homomorphism in the long exact sequence obtained by applying $\pi_*$ to the top sequence of (\ref{relatiyahsequence}) is given by cupping with $[L]$ and contracting. In the absolute case, the Atiyah class is the obstruction to the existence of a connection on $L$; a similar interpretation holds in the relative case, though we will not use this. If $\mathcal{M}$ is complex K\"ahler, $[L]$ is just the relative Chern class.
The following lemma probably dates back to \cite{atiyah:1957}; see e.g. \cite[p. 431]{looijenga:2013}. \begin{lemma}\label{extens} Let $X$ be a smooth algebraic variety, $L$ a line bundle on $X$, and $k$ a positive integer. Then we have an isomorphism of short exact sequences \begin{equation*} \begin{tikzcd}[row sep=small]
0 \ar[r] & \mathcal{O}_X \ar[r] \ar[d] & \mathcal{A}(L^{\otimes k}) \ar[r]\ar[d] & T_X \ar[r]\ar[d] & 0 \\
0 \ar[r] & \mathcal{O}_X \ar[r, "\frac{1}{k}"] & \mathcal{A}(L) \ar[r] & T_X \ar[r] & 0. \end{tikzcd} \end{equation*} \end{lemma} \subsubsection*{Projective connections} \begin{definition} Given a vector bundle $E$ on a variety $\mathcal{M}$, a \emph{(Koszul) connection} $\nabla$ on $E$ is an $\mathcal{O}_{\mathcal{M}}$-linear splitting of the Atiyah algebroid: $$\begin{tikzcd} 0\ar[r]& \mathcal{E}nd(E)\ar[r] & \mathcal{A}(E)\ar[r] &T_{\mathcal{M}} \ar[r] \ar[l, bend left=30, " \nabla", dashed] & 0.\end{tikzcd}$$ The connection is said to be \emph{flat} (or integrable) if $\nabla$ preserves the Lie brackets (where the Lie bracket on $\mathcal{A}(E)$ is just the commutator of differential operators). \end{definition} The Hitchin connection is a projective connection. There are a number of ways one can encode what a projective connection is: one could think in terms of principal $\operatorname{PGL}$-bundles, or work with the projectivisation $\mathbb{P}(E)$ of $E$, or work with twisted $\mathcal{D}$-modules (cf. \cite{beilinson.kazhdan:1990}, \cite[\S 1]{looijenga:2013}). In our context, the most useful one is the following. \begin{definition} Given a vector bundle $E$ on $\mathcal{M}$ as before, a projective connection is a splitting $$\begin{tikzcd} 0\ar[r]& \mathcal{E}nd(E)/\mathcal{O}_{\mathcal{M}} \ar[r] & \mathcal{A}(E)/{\mathcal{O}_{\mathcal{M}}}\ar[r] &T_{\mathcal{M}}\ar[r] \ar[l, bend left=30, dashed, " \nabla "] & 0.\end{tikzcd}$$ It is again flat if $\nabla$ preserves the Lie brackets. \end{definition} \subsection{Heat operators} Consider a smooth surjective morphism of smooth schemes $\pi:\mathcal{M}\rightarrow S$, and a line bundle $L\rightarrow \mathcal{M}$ such that $\pi_\ast L$ is locally free, hence a vector bundle. The connection we construct will live on the projectivisation $\mathbb{P}\pi_\ast L$, but everything below will be expressed in terms of vector bundles, not projective bundles.
We will denote by $\mathcal{D}^{(n)}_{\mathcal{M}/S}(L)$ the subsheaf of $\mathcal{D}^{(n)}_{\mathcal{M}}(L)$ consisting of differential operators of order at most $n$ that are $\pi^{-1}(\mathcal{O}_S)$-linear. The symbol maps $$\begin{tikzcd}\sigma_n: \mathcal{D}^{(n)}_{\mathcal{M}/S}(L) \ar[r] & \Sym^n T_{\mathcal{M}/S}\end{tikzcd} $$ take values in $\Sym^n T_{\mathcal{M}/S}$.
We are now interested in the sheaf $$\mathcal{W}_{\mathcal{M}/S}(L)=\mathcal{D}^{(1)}_{\mathcal{M}}(L)+\mathcal{D}^{(2)}_{\mathcal{M}/S}(L)\subset \mathcal{D}^{(2)}_{\mathcal{M}}(L).$$ Besides the second order symbol map $$\begin{tikzcd} \sigma_2: \mathcal{W}_{\mathcal{M}/S}(L) \ar[r] &\Sym^2T_{\mathcal{M}/S},\end{tikzcd}$$ on this sheaf of differential operators, there is a subprincipal symbol \begin{equation}\label{subprincipal} \begin{tikzcd}
\sigma_S: \mathcal{W}_{\mathcal{M}/S}(L) \ar[r] &\pi^\ast T_{S} ,
\qquad
\langle \sigma_S(D),d (\pi^\ast f) \rangle s = D(\pi^\ast f s) - \pi^\ast f D(s),\end{tikzcd} \end{equation} where $s$ is a local section of $L$ and $f$ a local section of $\mathcal{O}_S$; both well-definedness and the Leibniz rule follow from the property of the second order symbol \[
D(fg s) = \langle \sigma_2(D) , df \otimes dg \rangle s + f D(gs)+g D(fs)-fg D(s) . \] Thus we have a short exact sequence \begin{equation}\label{ses_W}
\begin{tikzcd}0\ar[r] & \mathcal{D}^{(1)}_{\mathcal{M}/S}(L)\ar[r] & \mathcal{W}_{\mathcal{M}/S}(L) \ar[r,"\sigma_S\oplus\sigma_2"] &\pi^* (T_S)\oplus \Sym^2 T_{\mathcal{M}/S}\ar[r] & 0.\end{tikzcd} \end{equation} We can now define \begin{definition}[{\cite[2.3.2]{vangeemen.dejong:1998}}] A \emph{heat operator} $D$ on $L$ is an $\mathcal{O}_S$-linear map of coherent sheaves $$\begin{tikzcd}D:T_S \ar[r] &\pi_\ast \mathcal{W}_{\mathcal{M}/S}(L)\end{tikzcd}$$ such that $\sigma_S \circ \widetilde{D}=\ensuremath{\text{Id}}$, where $\widetilde{D}$ is the equivalent (by adjunction) $\mathcal{O}_{\mathcal{M}}$-linear map $$\widetilde{D} : \pi^* T_S \to \mathcal{W}_{\mathcal{M}/S}(L).$$ Similarly, a \emph{projective heat operator} is a map $$\begin{tikzcd}D: T_S \ar[r] & \left( \pi_\ast \mathcal{W}_{\mathcal{M}/S}(L) \right) / \mathcal{O}_S.\end{tikzcd}$$ \end{definition} Given such a heat operator, we refer to $$\begin{tikzcd} \pi_*(\sigma_2) \circ D: T_S\ \ar[r] &\pi_\ast\Sym^2 T_{\mathcal{M}/S}\end{tikzcd} $$ as the \emph{symbol} of the heat operator.
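In suitable local coordinates the terminology becomes transparent (a sketch, assuming local coordinates $t_1,\ldots,t_m$ on $S$, vertical coordinates $x_1,\ldots,x_n$ on $\mathcal{M}$, and a local trivialization of $L$): the operator $\widetilde{D}(\pi^\ast \partial/\partial t_a)$ lies in $\mathcal{W}_{\mathcal{M}/S}(L)$ and has subprincipal symbol $\pi^\ast \partial/\partial t_a$, hence has the shape of a heat-equation operator
\[
\widetilde{D}\Big(\pi^\ast \frac{\partial}{\partial t_a}\Big) \;=\; \frac{\partial}{\partial t_a} \;+\; \sum_{i,j} \rho_a^{ij}(x,t)\, \frac{\partial^2}{\partial x_i\, \partial x_j} \;+\; \sum_{i} b_a^{i}(x,t)\, \frac{\partial}{\partial x_i} \;+\; c_a(x,t),
\]
with second order part purely vertical; the coefficients $\rho_a^{ij}$ recover the symbol $\pi_\ast(\sigma_2)\circ D$.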
A projective heat operator also has a well-defined symbol. \subsection{Heat operators and connections} Any heat operator gives rise to a connection on the locally free sheaf $\pi_* L$, as follows (cf. \cite[\S 2.3.3]{vangeemen.dejong:1998}). Given an open subvariety $U\subset S$ and $\theta \in T_S(U)$, we want a first order differential operator $$\begin{tikzcd}\nabla_{\theta}: \pi_* L\ \ar[r] &\pi_* L.\end{tikzcd}$$ If $s\in \pi_* L(U)$, we denote also by $s$ and $\pi^{-1}(\theta)$ the corresponding sections of $L(\pi^{-1}(U))$ and $\pi^{-1}(T_S)(\pi^{-1}(U))$ respectively. We can now put \begin{equation}\label{conn-heat-op}
\nabla_{\theta}s=D(\pi^{-1}(\theta))(s), \end{equation} since the latter indeed corresponds to a section of $\pi_* L(U)$. Moreover, the Leibniz rule is satisfied since the subprincipal symbol of $D(\pi^{-1}\theta)$ is $\pi^{-1}\theta$, so that for any $f\in \mathcal{O}_S(U)$ we have \[ \nabla_{\theta}(fs)=D(\pi^{-1}(\theta))(\pi^\ast(f) s) = \pi^\ast(\theta(f)) s+ \pi^\ast(f) D(\pi^{-1}(\theta))(s)= \theta(f) s+ f\nabla_{\theta}s, \] so $\nabla_{\theta}$ is indeed a first order differential operator with symbol $\theta$, and hence $\nabla$ is a Koszul connection.
The connection $\nabla$ will be flat if $D$ preserves the Lie brackets. If we have a projective heat operator, we still get a projective connection, and the same remark about flatness applies.
\subsection{A heat operator for a candidate symbol} As an algebro-geometric counterpart to Hitchin's Theorem \ref{mainHitchin}, van Geemen and de Jong investigated under what conditions a candidate symbol map $$\begin{tikzcd}\rho: T_S \ar[r]& \pi_\ast \Sym^2 T_{\mathcal{M}/S}\end{tikzcd}$$ actually arises as the symbol of a heat operator, i.e. whether it is possible to find a (projective) heat operator $D$ such that $\rho= \pi_*(\sigma_2) \circ D$. Before we can state their result, we need to recall two maps. The canonical short exact sequence $$\begin{tikzcd}0\ar[r]& T_{\mathcal{M}/S}\ar[r]& T_{\mathcal{M}}\ar[r]& \pi^*T_S\ar[r]& 0\end{tikzcd}$$ gives rise to the \emph{Kodaira-Spencer map} \begin{equation}\label{ks}\begin{tikzcd}\kappa_{\mathcal{M}/S}: T_{S} \ar[r] & R^1\pi_* T_{\mathcal{M}/S}.\end{tikzcd}\end{equation} Similarly, the short exact sequence \begin{equation}\label{sesnoext}\begin{tikzcd}0\ar[r]& T_{\mathcal{M}/S}\ar[r]& \mathcal{D}^{(2)}_{\mathcal{M}/S}(L)/\mathcal{O}_{\mathcal{M}}\ar[r]& \Sym^2 T_{\mathcal{M}/S}\ar[r]& 0 \end{tikzcd}\end{equation} gives rise to the connecting homomorphism \begin{equation}\begin{tikzcd}\label{mu-L}\mu_{L}:\pi_* \Sym^2 T_{\mathcal{M}/S} \ar[r] & R^1\pi_* T_{\mathcal{M}/S}.\end{tikzcd} \end{equation} We can now state \begin{theorem}[{van Geemen--de Jong, \cite[\S 2.3.7]{vangeemen.dejong:1998}}]\label{vgdj} With $L$ and $\pi:\mathcal{M}\rightarrow S$ as before, we have that if, for a given $\rho: T_S \rightarrow \pi_\ast \Sym^2 T_{\mathcal{M}/S}$, \begin{enumerate}[(a)] \item \label{vgdj-one} $\kappa_{\mathcal{M}/S}+\mu_{L} \circ \rho=0,$ \item \label{vgdj-two} cupping with the relative Atiyah class \[\begin{tikzcd}\cup [L]: \pi_*T_{\mathcal{M}/S}\ar[r] &R^1\pi_*\mathcal{O}_{\mathcal{M}}\end{tikzcd}\] is an isomorphism, and \item \label{vgdj-three} $\pi_*\mathcal{O}_{\mathcal{M}}=\mathcal{O}_S$, \end{enumerate} then there exists a unique projective heat operator $D$ whose symbol is $\rho$. 
\end{theorem} Note that even though the context of this theorem is entirely algebro-geometric and makes no reference to a symplectic form, the conditions are closely matched with those in Hitchin's Theorem \ref{mainHitchin}: the requirement of cupping with the Chern class being an isomorphism is identical in both cases, whereas from a quadratic symbol $\rho$ satisfying condition \ref{vgdj-one} we recover an element of the hypercohomology group in \ref{mainHitchin}.\ref{hitchintwo} via the long-exact sequence of hypercohomology obtained from (\ref{sesofcomplex}). Finally, \ref{vgdj-three} is an appropriate weakening of the premise that $\mathcal{M}$ is compact (and connected) in Theorem \ref{mainHitchin}. \begin{proof}
Consider the long-exact sequence associated to the short exact sequence (\ref{ses_W}),
\[
\begin{tikzcd}[row sep=small]
0\ar[r] & \pi_\ast \mathcal{D}^{(1)}_{\mathcal{M}/S}(L)\ar[r] & \pi_\ast \mathcal{W}_{\mathcal{M}/S}(L) \ar[r,"\pi_* \sigma_S\oplus \pi_* \sigma_2"] & T_S\oplus \pi_\ast \Sym^2 T_{\mathcal{M}/S} \ar[dll,swap,"\delta"] \\
& R^1 \pi_\ast \mathcal{D}^{(1)}_{\mathcal{M}/S}(L)\ar[r] & R^1 \pi_\ast \mathcal{W}_{\mathcal{M}/S}(L) \ar[r] & \dots \end{tikzcd}
\]
As $\cup[L]$ is the connecting homomorphism in the long-exact sequence associated with the first order symbol map on $\mathcal{D}^{(1)}_{\mathcal{M}/S}(L)$, condition \ref{vgdj-two} guarantees that $\pi_\ast \mathcal{O}_{\mathcal{M}} = \pi_\ast \mathcal{D}^{(1)}_{\mathcal{M}/S}(L)$, i.e. all global first order operators on $L$ along the fibers of $\pi$ are of order zero; by condition \ref{vgdj-three} both coincide with $\mathcal{O}_S$. Using this, we obtain a commutative diagram with exact rows and columns
\[
\begin{tikzcd}[row sep=small, column sep=tiny]
& 0 \ar[d] & 0 \ar[d] \\
0 \ar[r] & \pi_\ast \mathcal{O}_{\mathcal{M}} \ar[r] \ar[d] & \pi_\ast \mathcal{O}_{\mathcal{M}} \ar[r] \ar[d] & 0 \ar[d] \\
0\ar[r] & \pi_\ast \mathcal{D}^{(1)}_{\mathcal{M}/S}(L)\ar[r] \ar[d] & \pi_\ast \mathcal{W}_{\mathcal{M}/S}(L) \ar[r] \ar[d] & \Ker \delta \ar[r] \ar[d] & 0 \\
0 \ar[r] & 0 \ar[r] & \left( \pi_\ast \mathcal{W}_{\mathcal{M}/S}(L) \right) / \mathcal{O}_S \ar[r] \ar[d] & \Ker \delta \ar[r] \ar[d] & 0 \\
& & 0 & 0
\end{tikzcd}
\]
and therefore an isomorphism $ \left( \pi_\ast \mathcal{W}_{\mathcal{M}/S}(L) \right) / \mathcal{O}_S \to \Ker \delta $. It remains to show that our hypotheses imply that the image of the morphism
\[\begin{tikzcd}
T_S \ar[r] & T_S \oplus \pi_* \Sym^2 T_{\mathcal{M}/S} ,
\qquad
\theta \ar[r, mapsto] & (\theta,\rho(\theta))
\end{tikzcd}
\]
is contained in the kernel of the connecting homomorphism $\delta$. In order to do this, let us decompose $\delta=\delta_1 + \delta_2$ into its two components:
$$\begin{tikzcd}\delta_1: T_S \ar[r] & R^1\pi_*\mathcal{D}^{(1)}_{\mathcal{M}/S}(L)\ \ \ \ \textrm{and}\ \ \ \ \delta_2:\pi_*\Sym^2 T_{\mathcal{M}/S} \ar[r] & R^1\pi_*\mathcal{D}^{(1)}_{\mathcal{M}/S}(L).\end{tikzcd}$$
It is then straightforward to check that
$$R^1\pi_*(\sigma_1)\circ \delta_1= \kappa_{\mathcal{M}/S}\ \ \ \ \textrm{and}\ \ \ \ R^1\pi_*(\sigma_1)\circ \delta_2= \mu_{L}.$$
Finally, we observe that $\sigma_1$ induces an injective map $$\begin{tikzcd}R^1\pi_*(\sigma_1):R^1\pi_*\mathcal{D}^{(1)}_{\mathcal{M}/S}(L) \ar[r] & R^1\pi_* T_{\mathcal{M}/S},\end{tikzcd}$$ as the previous map in the long exact sequence $$\begin{tikzcd}\dots\ar[r] & \pi_{*}T_{\mathcal{M}/S}\ar[r, "{\cup [L]}"] & R^1\pi_*{\mathcal{O}_{\mathcal{M}}} \ar[r] & R^1\pi_*\mathcal{D}^{(1)}_{\mathcal{M}/S}(L) \ar[r] & R^1\pi_*T_{\mathcal{M}/S}\ar[r]& \dots\end{tikzcd} $$ is surjective by condition \ref{vgdj-two}. Thus $(\theta, \rho(\theta))\in \Ker \delta$ if and only if $(\kappa_{\mathcal{M}/S} + \mu_{L} \circ \rho)(\theta)=0$, for any local vector field $\theta$ on $S$. \end{proof}
\subsection{A flatness criterion} To complete our outline of the general part of the theory, we discuss a general flatness condition for connections constructed via Theorem \ref{vgdj}. It is a verbatim translation of Hitchin's original reasoning \cite[Thm. 4.9]{hitchin:1990} to the algebro-geometric setting, its central ingredient being the requirement that the symbols should Poisson-commute when viewed as homogeneous functions on the relative cotangent bundle.
\begin{theorem}\label{thm_flatness} Under the conditions of Theorem \ref{vgdj} and over a base field of characteristic different from 3, the projective connection constructed from a symbol $\rho$ is projectively flat if
\begin{enumerate}[(a)]
\item for all local sections $\theta,\theta'$ of $T_S$,
\[
\{ \rho(\theta), \rho(\theta') \}_{T^\ast_{\mathcal{M}/S}} = 0 ,
\]
\item the morphism $\mu_L$ is injective, and
\item there are no vertical vector fields, i.e. $\pi_\ast T_{\mathcal{M}/S}=0$.
\end{enumerate} \end{theorem} \begin{remark} In the statement and the proof of this theorem we use the fact that the natural morphism
\[
\pi_\ast \Sym^k T_{\mathcal{M}/S} \to \pi_\ast\mathcal{O}_{T^\ast_{\mathcal{M}/S}}
\]
is an isomorphism of Poisson algebras onto the weight $k$ part under the natural $\mathbb{G}_m$-action for $k \leq p-1$; here, the Poisson structure on the left is the one inherited from the commutator bracket on operators of order at most $k$, and the one on the right is the natural one on the cotangent bundle. \end{remark} \begin{proof}
As the connection is defined by projective heat operators (\ref{conn-heat-op}), its flatness is equivalent to the vanishing of the operator
\begin{equation}\label{comm-flatness}
[D(\theta),D(\theta')]-D([\theta,\theta']) \in \pi_{\ast} \left( \mathcal{D}^{(3)}_{\mathcal{M}/S}(L)+\mathcal{D}^{(2)}_{\mathcal{M}}(L) \right) \big/ \mathcal{O}_S .
\end{equation}
Now it follows from the preceding remark and condition (a) that
\[
\sigma_3([D(\theta),D(\theta')]) = \left\{ \sigma_2(D(\theta)),\sigma_2(D(\theta')) \right\}_{T^\ast_{\mathcal{M}/S}} = \left\{ \rho(\theta),\rho(\theta') \right\}_{T^\ast_{\mathcal{M}/S}} = 0.
\]
Therefore, the operator (\ref{comm-flatness}) is actually at most second order, and we furthermore claim that it really acts only along the fibers of $\mathcal{M} \rightarrow S$,
\[
[D(\theta),D(\theta')]-D([\theta,\theta']) \in \pi_{\ast} \left( \mathcal{D}^{(2)}_{\mathcal{M}/S}(L) \right) \big/ \mathcal{O}_S .
\]
This happens for the same reason the curvature $[\nabla_X,\nabla_Y]-\nabla_{[X,Y]}$ of a connection is of degree zero as a differential operator: one checks (using the subprincipal symbol (\ref{subprincipal})) that (\ref{comm-flatness}) is $\pi^{-1}\mathcal{O}_S$-linear.
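Explicitly, this verification runs as follows (a routine computation spelling out the claim): for a local function $f$ on $S$, the heat operator property $\sigma_S(D(\theta))=\pi^\ast\theta$ together with (\ref{subprincipal}) gives $D(\theta)(\pi^\ast f\, s) = \pi^\ast f\, D(\theta)(s) + \pi^\ast(\theta f)\, s$, and likewise with the pulled-back function $\pi^\ast(\theta' f)$ in place of $\pi^\ast f$. Hence
\[
[D(\theta),D(\theta')](\pi^\ast f\, s) = \pi^\ast f\, [D(\theta),D(\theta')](s) + \pi^\ast\big((\theta\theta'-\theta'\theta)f\big)\, s ,
\]
the cross terms $\pi^\ast(\theta f)\,D(\theta')(s)$ and $\pi^\ast(\theta' f)\,D(\theta)(s)$ cancelling in the commutator, while $D([\theta,\theta'])(\pi^\ast f\, s) = \pi^\ast f\, D([\theta,\theta'])(s) + \pi^\ast([\theta,\theta']f)\, s$. Subtracting the two expressions, the zeroth order terms cancel, which is the asserted $\pi^{-1}\mathcal{O}_S$-linearity of (\ref{comm-flatness}).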
Now we look at the short exact sequence (\ref{sesnoext}), and apply $\pi_\ast$. As $\mu_L$ is injective by condition (b) and there are no vertical vector fields by (c), we get
$$\pi_\ast\mathcal{D}^{(2)}_{\mathcal{M}/S}(L)\Big/\mathcal{O}_S \cong \pi_\ast T_{\mathcal{M}/S} = 0,$$
thus concluding the proof. \end{proof}
\subsection{The map $\mu_{L}$} Finally, we need to get a better understanding of the map $\mu_{L}$ from (\ref{mu-L}), for which we could simply refer to \cite[Cor. 2.4.6]{beilinson.bernstein:1993}. As the proof is not too complicated and uses only a fraction of the machinery of that paper, we thought it worthwhile to include it here. We thank an anonymous referee for pointing out considerable simplifications to our previous proof. \begin{proposition}\label{thm_mu_O} In the context outlined above (with $\pi:\mathcal{M}\rightarrow S$ a smooth morphism of smooth schemes and $L$ a line bundle on $\mathcal{M}$), we can write the connecting homomorphism (\ref{mu-L}) as \begin{equation*}
\mu_{L}= \cup [L] + \cup\left(-\frac{1}{2} [K_{\mathcal{M}/S}]\right),\end{equation*} where $K_{\mathcal{M}/S}$ is the relative canonical bundle of $\pi:\mathcal{M}\rightarrow S$. \end{proposition} Note that `half' of this statement ($\mu_{L}=\cup[L]+\mu_{\mathcal{O}_{\mathcal{M}}}$) appears in \cite[Lemma 1.16]{welters:1983}, except that Welters uses the extension class of the sheaf of principal parts $\mathcal{P}^{(1)}(L)$ of order $\leq 1$ instead of $\mathcal{D}^{(1)}_{\mathcal{M}/S}(L)$ to define $[L]$ and hence has a minus sign on the right-hand side. In a K\"ahler context, with $L$ a polarizing line bundle, the statement of Proposition \ref{thm_mu_O} is implied in \cite[p. 364]{hitchin:1990}. In the general complex analytic setting, a Dolbeault-theoretic approach is described in \cite[Appendix A.2]{boer:2008}\footnote{The formulas in \cite{beilinson.bernstein:1993} and \cite{boer:2008} are more general expressions that both specialise to the one given in Proposition \ref{thm_mu_O}, but appear different from each other in general.}. \begin{proof}
The proof follows from the identification of the opposite of the algebra of differential operators on $L$ with that of $L^{-1}\otimes K_{\mathcal{M}/S}$ via the adjoint differential operator $D^\circ$, as discussed for example in \cite[1.1.5.(iv)]{beilinson.schechtman:1988}.
Due to the identity $\mu_{L}=\cup[L]+\mu_{\mathcal{O}_{\mathcal{M}}}$ observed already by Welters (in arbitrary characteristic), it suffices to show that
\begin{equation}\label{muadj}
\mu_{L} = -\mu_{L^{-1}\otimes K}.
\end{equation}
For this, consider the adjoint map between sheaves of differential operators $\mathcal{A}_{\mathcal{M}/S}(E) \ni D \mapsto D^\circ \in \mathcal{A}_{\mathcal{M}/S}(E^\ast \otimes K_{\mathcal{M}/S})$ defined by the identity
\[
\langle e, D^\circ e^\circ \rangle = \langle De, e^\circ \rangle - \mathcal{L}_{\sigma_1 D}\langle e, e^\circ \rangle ,
\]
where $e$ and $e^\circ$ are arbitrary local sections of $E$ and $E^\circ := E^\ast \otimes K_{\mathcal{M}/S}$, respectively, and $\mathcal{L}$ is the Lie derivative on the relative canonical bundle. It is straightforward to verify that $D^\circ$ has symbol $\sigma_1(D^\circ) = - \sigma_1(D)$, and that for any regular local function $\phi$
\[
(\phi D)^\circ = \phi D^\circ - \langle \sigma_1 D, d_{\mathcal{M}/S}\phi \rangle ,
\]
so that $D\mapsto D^\circ$ is in particular $\pi^{-1}\mathcal{O}_S$-linear. This zeroth-order deviation from $\mathcal{O}_{\mathcal{M}}$-linearity may appear inconvenient at first sight, but it actually allows us to extend the adjoint to second order operators, via
\[
(\phi D_2)^\circ \circ D_1^\circ = D_2^\circ \circ (\phi D_1)^\circ + (\langle \sigma_1 D_1,d\phi \rangle D_2)^\circ .
\]
In this way we obtain a $\pi^{-1}\mathcal{O}_S$-linear isomorphism of short-exact sequences
\[
\begin{tikzcd}[row sep=small]
0 \ar[r] & \mathcal{D}^{(1)}_{\mathcal{M}/S}(L) \ar[r] \ar[d, swap, "{D \mapsto D^\circ}"] & \mathcal{D}^{(2)}_{\mathcal{M}/S}(L) \ar[r, "\sigma_2"] \ar[d, swap, "{D \mapsto D^\circ}"] & \Sym^2 T_{\mathcal{M}/S} \ar[r] \ar[d, "\ensuremath{\text{Id}}"] & 0 \\
0 \ar[r] & \mathcal{D}^{(1)}_{\mathcal{M}/S}(L^{-1} \otimes K_{\mathcal{M}/S}) \ar[r] & \mathcal{D}^{(2)}_{\mathcal{M}/S}(L^{-1} \otimes K_{\mathcal{M}/S}) \ar[r, "\sigma_2"] & \Sym^2 T_{\mathcal{M}/S} \ar[r] & 0 ,
\end{tikzcd}
\]
whose push-out along $\sigma_1$ gives
\[
\begin{tikzcd}[row sep=small]
0 \ar[r] & T_{\mathcal{M}/S} \ar[r] \ar[d, swap, "-\ensuremath{\text{Id}}"] & \mathcal{D}^{(2)}_{\mathcal{M}/S}(L)/\mathcal{O}_{\mathcal{M}} \ar[r, "\sigma_2"] \ar[d, swap, "{D \mapsto D^\circ}"] & \Sym^2 T_{\mathcal{M}/S} \ar[r] \ar[d, "\ensuremath{\text{Id}}"] & 0 \\
0 \ar[r] & T_{\mathcal{M}/S} \ar[r] & \mathcal{D}^{(2)}_{\mathcal{M}/S}(L^{-1} \otimes K_{\mathcal{M}/S})/\mathcal{O}_{\mathcal{M}} \ar[r, "\sigma_2"] & \Sym^2 T_{\mathcal{M}/S} \ar[r] & 0 ,
\end{tikzcd}
\]
which proves the necessary identity (\ref{muadj}). \end{proof}
\begin{remark} Note that the preceding result remains true in characteristic $p>0$ with $p\neq 2$, since we only use the isomorphism induced by $D \mapsto D^\circ$ between differential operators of order $\leq 2$. \end{remark}
\section{An algebro-geometric approach to the Hitchin connection for non-abelian theta-functions}\label{mainconstruction} In this section we construct the Hitchin connection in algebraic geometry. We want to invoke Theorem \ref{vgdj}, using the symbol $\rho$ from (\ref{rho}) on page \pageref{rho}. In order to verify that this theorem applies, we need to begin by examining the various ingredients of condition \ref{vgdj-one}.
Note that, compared to the situation of families of abelian varieties (cf. \cite{welters:1983}, \cite[\S2.3.8]{vangeemen.dejong:1998}), we need much more detailed knowledge of our candidate symbol, in order to establish flatness of the connection later on (which is done via other means for abelian varieties). \subsection{Basic facts about the moduli space of bundles}\label{sect_basicfacts} At this point we can turn our attention to the particular context we are interested in: the moduli theory of bundles on curves. In the rest of Section \ref{mainconstruction}, we shall denote by $\pi_s:\mathcal{C}\rightarrow S$ a smooth family of smooth projective curves of genus $g\geq 2$. This gives rise, for any integer $r\geq 2$, to a (coarse) relative moduli space of stable bundles of rank $r$ with trivial determinant over the same base, which we shall denote by $\pi_e:\mathcal{M}\rightarrow S$. If $g=2$, we will assume that $r\geq 3$. We shall denote the fibered product by the diagram \[\begin{tikzcd} \mathcal{C}\times_S \mathcal{M}\ar[d, "\pi_w"']\ar[r, "\pi_n"] & \mathcal{M}\ar[d, "\pi_e"] \\ \mathcal{C}\ar[r, "\pi_s"'] & S \end{tikzcd}\] and will simply put $$\pi_c=\pi_e\circ \pi_n=\pi_s\circ \pi_w.$$ Unfortunately $\mathcal{M}$ is only a coarse moduli space, and a universal bundle over $\mathcal{C}\times_S \mathcal{M}$ does not exist (one could argue that it exists over the stack of stable bundles $\mathfrak{M}\rightarrow S$, but does not descend to $\mathcal{M}$). Nevertheless, one can speak both of the Atiyah algebroid and Atiyah sequence of the virtual bundle (since these do descend to the coarse moduli space). There exists a unique line bundle $\mathcal{L}$ over $\mathcal{M}$, called the \emph{theta line bundle}, which is mapped to the relatively ample generator of the relative Picard variety $\Pic(\mathcal{M}/S)$ (see \cite{drezet.narasimhan:1989,hoffmann:2012}). 
In order to avoid making our notations heavier than needed, we shall henceforth pretend a universal bundle $\mathcal{E}\rightarrow \mathcal{C}\times_S\mathcal{M}$ exists. Note that this universal bundle is only unique up to tensor product with a line bundle coming from $\mathcal{M}$. However, the trace-free endomorphism bundle $\mathcal{E}nd^0(\mathcal{E})$ is unique. Similarly, the determinant-of-cohomology line bundle on $\mathcal{M}$ associated to a universal bundle $\mathcal{E}$, defined as in \cite{MK} \[
\lambda(\mathcal{E}) := \det R^\bullet \pi_{n \ast} (\mathcal{E}) , \] will depend on the choice of the universal bundle $\mathcal{E}$. We will use two well-known properties when considering vector bundles with trivial determinant. \begin{itemize} \item For any universal bundle $\mathcal{E}$ and any line bundle $\zeta$ on $\mathcal{C} \to S$ of relative degree $g-1$, we have the equality \cite{drezet.narasimhan:1989,hoffmann:2012} \begin{equation}\label{thetadet1} \mathcal{L}^{-1}=\lambda (\mathcal{E} \otimes \pi_w^*\zeta). \end{equation} \item For any universal bundle $\mathcal{E}$, we have the equalities \cite{LS} \begin{equation}\label{thetadet2} \mathcal{L}^{-2r} = K_{\mathcal{M}/S} = \lambda(\mathcal{E}nd^0(\mathcal{E})). \end{equation} \end{itemize} At various places we shall use the trace pairing $$\begin{tikzcd} \operatorname{Tr}:\mathcal{E}nd^0(\mathcal{E})\times \mathcal{E}nd^0(\mathcal{E})\ar[r]& \mathcal{O}_{\mathcal{C}\times_S\mathcal{M}}\end{tikzcd}$$ to identify $\mathcal{E}nd^0(\mathcal{E})$ with its dual $\mathcal{E}nd^0(\mathcal{E})^*$.
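As a consistency check for (\ref{thetadet1}) (a standard computation, using the fact that for a line bundle $N$ on $\mathcal{M}$ the determinant of cohomology satisfies $\lambda(\mathcal{F}\otimes \pi_n^\ast N) \cong \lambda(\mathcal{F}) \otimes N^{\otimes \chi}$, with $\chi$ the Euler characteristic of $\mathcal{F}$ on the fibers of $\pi_n$), note that its right-hand side is indeed independent of the choice of universal bundle: any two such choices differ as $\mathcal{E}' = \mathcal{E}\otimes\pi_n^\ast N$, and by Riemann--Roch
\[
\chi\big((\mathcal{E}\otimes\pi_w^\ast\zeta)\big|_{\text{fiber}}\big) = \deg(E\otimes\zeta) + r(1-g) = r(g-1) + r(1-g) = 0 ,
\]
since $\det E$ is trivial and $\deg\zeta = g-1$; hence $\lambda(\mathcal{E}'\otimes\pi_w^\ast\zeta)\cong\lambda(\mathcal{E}\otimes\pi_w^\ast\zeta)$.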
We will need a few other standard facts about the moduli space $\mathcal{M}$ as well: \begin{proposition}\label{basicfacts} We have \begin{enumerate}[(a)] \item $\pi_{n*}\mathcal{E}nd^0(\mathcal{E})=\{0\}$, \item $T_{\mathcal{M}/S}=R^1\pi_{n*}\mathcal{E}nd^0(\mathcal{E})$, \item\label{basicfactsthree} $\pi_{e*}T_{\mathcal{M}/S}=\{0\}$,
\item\label{basicfactsfour} $R^1\pi_{e*}\mathcal{O}_{\mathcal{M}}=\{0\}$.
\end{enumerate}
\end{proposition} The first two of these follow from basic deformation theory. For the last two, which are also well-known, we include a proof (due to Hitchin) using the Hitchin system in Appendix \ref{appendixbasicfacts}. \subsection{The Kodaira-Spencer Map} Our aim in this section is to give a description of the map $$\begin{tikzcd} \Phi: R^1\pi_{s*}T_{\mathcal{C}/S}\ar[r]& R^1\pi_{e*}T_{\mathcal{M}/S}\end{tikzcd}$$ (relating deformations of the curve to deformations of the moduli space) which makes the diagram of sheaves on $S$ \begin{equation}\label{kappaphi} \begin{tikzcd}[row sep=-0.5ex, column sep=large] & R^1\pi_{s*}T_{\mathcal{C}/S} \ar[dd, "\Phi"] \\ T_S \ar[ur, pos=0.6, "\kappa_{\mathcal{C}/S}"]\ar[dr, pos=0.7, "\kappa_{\mathcal{M}/S}" '] & \\ & R^1\pi_{e*}T_{\mathcal{M}/S}\\ \end{tikzcd}\end{equation} commute, where $\kappa_{\mathcal{C}/S}$ and $\kappa_{\mathcal{M}/S}$ are the Kodaira-Spencer maps, as in (\ref{ks}). This is a line of reasoning that essentially goes back to Narasimhan and Ramanan \cite{narasimhan.ramanan:1970}.
On $\mathcal{C}\times_S\mathcal{M}$ we have the trace-free relative Atiyah sequence \begin{equation}\label{relatseq}\begin{tikzcd} 0\ar[r] & \mathcal{E}nd^0(\mathcal{E}) \ar[r] & \mathcal{A}^0_{\mathcal{C}\times_S\mathcal{M}\big/\mathcal{M}}(\mathcal{E}) \ar[r] & T_{\mathcal{C}\times_S \mathcal{M} \big/ \mathcal{M}} \ar[r] & 0.\end{tikzcd}\end{equation} As we have that $\pi_{n*}\left(T_{\mathcal{C}\times_S \mathcal{M} \big/ \mathcal{M}}\right)=0$ and $R^2\pi_{n*}\mathcal{E}nd^0(\mathcal{E})=0$, applying $R^1\pi_{n*}$ gives the short exact sequence on $\mathcal{M}$ \begin{equation}\label{fromsernesi}\begin{tikzcd} 0\ar[r] & R^1\pi_{n*}\mathcal{E}nd^0(\mathcal{E})\ar[r] & R^1\pi_{n*}\mathcal{A}^0_{\mathcal{C}\times_S\mathcal{M}\big/\mathcal{M}}(\mathcal{E})\ar[r] & R^1\pi_{n*}T_{\mathcal{C}\times_S \mathcal{M}\big/\mathcal{M}}\ar[r] & 0 . \end{tikzcd}\end{equation}
In order to describe the Kodaira-Spencer map $\kappa_{\mathcal{M}/S}$, we need to start from the short exact sequence $$\begin{tikzcd} 0\ar[r] & T_{\mathcal{M}/S} \ar[r] & T_{\mathcal{M}}\ar[r] & \pi_e^*T_S\ar[r] & 0,\\ \end{tikzcd}$$ which is given (see e.g. \cite[\S 3.3.3]{sernesi:2006} for the case of a line bundle -- vector bundles are a straightforward generalisation of the description there, and are discussed in \cite[\S 2.3]{martinengo:2009}) by the pullback of (\ref{fromsernesi}) along the map \begin{equation*}\begin{tikzcd}\pi_e^*\kappa_{\mathcal{C}/S}:\pi_{e}^*T_S\ar[r]& R^1\pi_{n*}T_{\mathcal{C}\times_S \mathcal{M}\big/\mathcal{M}}\cong \pi_e^*\left(R^1\pi_{s*} T_{\mathcal{C}/S} \right).\end{tikzcd}\end{equation*}
If we apply $\pi_{e*}$ to this, we obtain finally \begin{lemma}\label{constrphi} The Kodaira-Spencer map $\kappa_{\mathcal{M}/S}$ is given by the composition of $\kappa_{\mathcal{C}/S}$ with $\Phi$, the connecting homomorphism of (\ref{fromsernesi}): \[\begin{tikzcd}[column sep=large, row sep=0ex] & R^1\pi_{s*}T_{\mathcal{C}/S}\cong \pi_{e*}\big( R^1\pi_{n*}T_{\mathcal{C}\times_S \mathcal{M}\big/ \mathcal{M}}\big) \ar[dd, shorten=-1ex, "\Phi"]\\ T_S\ar[ur, end anchor={[xshift=-4em,yshift=1ex]south}, pos=0.5, "\kappa_{\mathcal{C}/S}"] \ar[dr, end anchor={[xshift=-4em]north}, pos=0.5, "\kappa_{\mathcal{M}/S}" '] & \\ & R^1\pi_{e*}T_{\mathcal{M}/S}\cong R^1\pi_{e*}\left(R^1\pi_{n*}\mathcal{E}nd^0(\mathcal{E})\right). \end{tikzcd}\] \end{lemma}
\subsection{The Hitchin Symbol} We have already briefly encountered the Hitchin symbol in (\ref{rho}); we shall now give its precise definition in the appropriate relative setting. We start from the quadratic part of the Hitchin system, relative over $S$, and its associated symmetric bilinear form (temporarily denoted $B$) \[
\begin{tikzcd}[column sep=tiny, row sep=small]
T_{\mathcal{M}/S}^\ast \arrow[rr, "\operatorname{diag}"] \arrow[rd] && T_{\mathcal{M}/S}^\ast \otimes T_{\mathcal{M}/S}^\ast \arrow[ld, "B"] \\
& \pi_{n*}K^{\otimes 2}_{\mathcal{C}\times_S\mathcal{M}\big/\mathcal{M}} &
\end{tikzcd} \] Recall that the bilinear form $B$ is, in the explicit description of the relative cotangent bundle via Higgs fields $T^\ast_{\mathcal{M} /S} = {\pi_n}_\ast ( \mathcal{E}nd^0(\mathcal{E})\otimes K_{\mathcal{C} \times_S \mathcal{M} \big/ \mathcal{M}} )$, given by the trace \[
B(\phi,\psi) = \tr (\phi \circ \psi) . \] In particular, it factors further through the symmetric square $\Sym^2 T_{\mathcal{M}/S}^\ast$. Notice as well that since we assume the characteristic of the base field to be different from 2, the symmetric square is canonically identified with the symmetric 2-tensors, and in particular there is also a canonical identification \[
\left( \Sym^2 T_{\mathcal{M}/S}^\ast \right)^\ast \cong \Sym^2 T_{\mathcal{M}/S} . \] Taking the dual $B^\ast$ of $B$, using Serre duality relative to $\pi_n$ on the domain (where in particular $K_{\mathcal{C} \times_S \mathcal{M} / \mathcal{M}} = \pi_w^\ast K_{\mathcal{C} / S}$), and pushing down via ${\pi_e}_\ast$ we obtain a map ${\pi_e}_\ast \left( B^\ast \right)$ \[ \begin{tikzcd}
{\pi_e}_\ast R^1 {\pi_n}_\ast \pi_w^\ast T_{\mathcal{C} / S}\ar[r, "{\pi_e}_\ast B^\ast"]& {\pi_e}_\ast \Sym^2 T_{\mathcal{M} / S}.
\end{tikzcd} \] Combining this with flat base change \[
R^1 {\pi_n}_\ast \pi_w^\ast T_{\mathcal{C} / S} \cong \pi_e^\ast R^1 {\pi_s}_\ast T_{\mathcal{C} / S} , \] we make the following definition. \begin{definition}\label{hitchinsymbol} The Hitchin symbol $\rho^{\operatorname{Hit}}$ is defined as \[ \begin{tikzcd} \rho^{\operatorname{Hit}} := {\pi_e}_\ast \left( B^\ast \right) : R^1 {\pi_s}_\ast T_{\mathcal{C} / S} \ar[r]& {\pi_e}_\ast \Sym^2 T_{\mathcal{M} / S} . \end{tikzcd} \] \end{definition} The morphism $\rho^{\operatorname{Hit}}$ is in fact an isomorphism. As we do not need this fact directly, we have relegated it to the Appendix, see Lemma \ref{rho-Hit-isom}.
For our purpose of comparing the symbol map with the Kodaira--Spencer morphism in the general context of Theorem \ref{vgdj}, we need the following alternative description: consider first the surjective evaluation map on $\mathcal{C} \times_S \mathcal{M}$: \begin{equation}\label{eval-end} \begin{tikzcd}\pi_n^*\pi_{n*}(\mathcal{E}nd^0(\mathcal{E})\otimes \pi_w^*K_{\mathcal{C}/S}) \ar[r,"\operatorname{ev}"]& \mathcal{E}nd^0(\mathcal{E})\otimes \pi_w^*K_{\mathcal{C}/S}. \end{tikzcd} \end{equation} Dualizing (\ref{eval-end}), we get a morphism \[ \begin{tikzcd} \mathcal{E}nd^0(\mathcal{E})^*\otimes \pi_w^*T_{\mathcal{C}/S} \ar[r]& \pi_n^*\left(\pi_{n*}\left(\mathcal{E}nd^0(\mathcal{E})\otimes \pi_w^*K_{\mathcal{C}/S}\right)\right)^* \end{tikzcd} \] so that swapping the first tensor factor and composing with relative Serre duality for $\pi_n$ we obtain an $\mathcal{O}_{\mathcal{C}\times_S \mathcal{M}}$-linear morphism \begin{equation}\label{eval-dual} \begin{tikzcd}
\pi_w^*T_{\mathcal{C}/S} \ar[r,"\operatorname{ev}^\ast"] & \mathcal{E}nd^0(\mathcal{E})\otimes\pi_n^*(R^1\pi_{n*}(\mathcal{E}nd^0(\mathcal{E})^*)).
\end{tikzcd} \end{equation} We also use the trace pairing to identify $\operatorname{Tr}: \mathcal{E}nd^0(\mathcal{E})\overset{\cong}{\to} \mathcal{E}nd^0(\mathcal{E})^*$. Now we apply ${\pi_e}_\ast \circ R^1{\pi_n}_\ast$ to (\ref{eval-dual}) and, by the isomorphism $R^1\pi_{n*}\mathcal{E}nd^0(\mathcal{E})^*\cong R^1\pi_{n*}\mathcal{E}nd^0(\mathcal{E}) \cong T_{\mathcal{M}/S}$, the projection formula and base change, we obtain a map \begin{equation}\label{pre-symbol} \begin{tikzcd} R^1\pi_{s*}(T_{\mathcal{C}/S}) \ar[r]& \pi_{e*}\left(T_{\mathcal{M}/S}\otimes T_{\mathcal{M}/S}\right). \end{tikzcd} \end{equation} \begin{lemma}\label{defhitchinsymbol} The map (\ref{pre-symbol}) coincides with the Hitchin symbol of Definition \ref{hitchinsymbol}. \end{lemma} \begin{proof} The claimed identity follows from commutativity of the diagram \[ \begin{tikzcd}[column sep=tiny]
& R^1 {\pi_n}_\ast \pi_w^\ast T_{\mathcal{C} /S} \arrow[ld, swap, "R^1{\pi_n}_\ast(\operatorname{ev}^\ast)"] \arrow[rd, "B^\ast"] & \\
R^1 {\pi_n}_\ast \mathcal{E}nd^0(\mathcal{E}) \otimes R^1 {\pi_n}_\ast \left( \mathcal{E}nd^0(\mathcal{E})^\ast \right) \arrow[rr, "\ensuremath{\text{Id}} \otimes (R^1{\pi_n}_\ast \operatorname{Tr}^{-1})^\ast"] & & T_{\mathcal{M} /S} \otimes T_{\mathcal{M} /S}. \end{tikzcd} \] This in turn follows if we dualize and apply Serre duality, under which \[ \left( R^1 {\pi_n}_\ast (\operatorname{ev}^\ast ) \right)^\ast = {\pi_n}_\ast \left( \operatorname{ev} \otimes \ensuremath{\text{Id}} \right), \] (and similarly for the other arrow, where additionally $\operatorname{Tr} = \operatorname{Tr}^\ast$), and then observe that the natural pairing on $\mathcal{E}nd^0(\mathcal{E})^\ast \otimes \mathcal{E}nd^0(\mathcal{E})$ coincides with $B \circ (\operatorname{Tr}^{-1} \otimes \ensuremath{\text{Id}})$ by the definition of $B$ and $\operatorname{Tr}$. \end{proof} \subsection{The theta line bundle and its Atiyah algebroid} Next we need some observations about the Atiyah algebroid of the theta line bundle $\mathcal{L}$ (see Sect. \ref{sect_basicfacts}). We recall that $\mathcal{L}$ maps to the ample generator of $\Pic(\mathcal{M}/S)$ and that $\mathcal{L}$ is related to the determinant-of-cohomology line bundle as in (\ref{thetadet1}) and (\ref{thetadet2}).
In this setting, the Atiyah sequence for $\mathcal{L}$ relative to $S$ has a remarkably direct description in terms of the Atiyah sequence of the trace-free relative Atiyah algebroid of $\mathcal{E}$, \begin{equation}\label{at-alg} \begin{tikzcd} 0\ar[r]& \mathcal{E}nd^0(\mathcal{E}) \ar[r]& \mathcal{A}^0_{\mathcal{C}\times_S\mathcal{M}\big/\mathcal{M}}(\mathcal{E}) \ar[r]& \pi_w^*T_{\mathcal{C}/S}\cong T_{\mathcal{C}\times_S\mathcal{M}\big/\mathcal{M}} \ar[r]& 0. \end{tikzcd} \end{equation} Note that, since $\mathcal{E} nd^0(\mathcal{E})$ is uniquely defined, so is $\mathcal{A}^0_{\mathcal{C}\times_S\mathcal{M}\big/\mathcal{M}}(\mathcal{E})$. Indeed, we have \begin{theorem}\label{maintracecompl} The relative Atiyah sequence of the theta line bundle $\mathcal{L}$ is isomorphic to the first direct image $R^1 \pi_{n \ast}$ of the dual of (\ref{at-alg}):
\begin{equation}\label{dualR1} \begin{tikzcd}[column sep=small, row sep=small]
0 \ar[r]& R^1\pi_{n*}(K_{\mathcal{X}/\mathcal{M}})\cong \mathcal{O}_\mathcal{M} \ar[r] \ar[d, "\operatorname{Id}_{\mathcal{O}_\mathcal{M}}" swap] & R^1\pi_{n*}\left(\mathcal{A}^0_{\mathcal{X}/\mathcal{M}}(\mathcal{E})^\ast\right) \ar[r] \ar[d, "\cong"] & R^1\pi_{n*}\left(\mathcal{E}nd^0(\mathcal{E})^*\right) \ar[d, "\cong"] \ar[r]& 0 \\
0 \ar[r]& \mathcal{O}_\mathcal{M} \ar[r] & \mathcal{A}_{\mathcal{M}/S}(\mathcal{L}) \ar[r, "\sigma_1"] & T_{\mathcal{M}/S} \ar[r]& 0 . \end{tikzcd} \end{equation} \end{theorem} For a single fixed curve, this result was stated (without proof) in the announcement \cite{ginzburg:1995} (see Theorem 9.1), where it is attributed to Beilinson and Schechtman (even though it does not seem to appear in \cite{beilinson.schechtman:1988}); it can also be derived from results contained in \cite{sun.tsai:2004}. We give an independent proof in Section \ref{sectionbigproof}. \subsection{A comment on extensions of line bundles} Let $X$ be a scheme, $V$ and $L$ respectively a vector and a line bundle on $X$. Let moreover $F$ be an extension of $L$ by $V$ \[ \begin{tikzcd} 0\ar[r] & V \ar[r,"i"] & F \ar[r, "\pi"] & L \ar[r] & 0.\end{tikzcd} \]
By taking the dual and tensoring with $V\otimes L$ we get \[ \begin{tikzcd} 0 \ar[r] & V \ar[r] & F^* \otimes V \otimes L \ar[r] & V^*\otimes V\otimes L \ar[r] &0. \end{tikzcd}\] Consider now the injective natural map \begin{eqnarray*} \psi: L & \to & V^*\otimes V\otimes L \\ \ell & \mapsto & \operatorname{Id}_V \otimes \ell. \end{eqnarray*} \begin{lemma}\label{VBremark} There exists a canonical injection $\phi:F\hookrightarrow F^*\otimes V \otimes L$ so that the diagram \begin{equation}\label{ext2} \begin{tikzcd}[row sep=small] 0 \ar[r] & V \ar[d, equal] \ar[r, "i"] & F \ar[d, hookrightarrow, "\phi"] \ar[r, "-\pi "] & L \ar[d, " \psi ", hookrightarrow] \ar[r] & 0 \\ 0 \ar[r] & V \ar[r] & F^*\otimes V \otimes L \ar[r] & V^* \otimes V\otimes L \ar[r] & 0 \end{tikzcd} \end{equation} commutes. \end{lemma} \begin{proof}
We consider the natural $\mathcal{O}_X$-linear map $\alpha:F \otimes F \to F\otimes L$ defined by
\[
\alpha(f_1\otimes f_2) = f_1 \otimes \pi(f_2) - f_2 \otimes \pi(f_1)
\]
for local sections $f_1,f_2$ of $F$. Then it is easy to check that the image of $\alpha$ is the subbundle $V\otimes L \subset F\otimes L$. Now the map $\alpha$ naturally corresponds to an $\mathcal{O}_X$-linear map $\phi: F \to F^\ast \otimes V \otimes L$, which can be described locally in terms of a basis of local sections $\{e_i\}$ of $F$ and the dual basis $\{e_i^\ast\}$ of $F^\ast$ as
\[
\phi(f) = \sum_{i=1}^{\rk F} \left(e_i^\ast \otimes f \otimes \pi(e_i) - e_i^\ast \otimes e_i \otimes \pi(f) \right) .
\]
It is now straightforward to check that this $\phi$ makes the above diagram commute. \end{proof}
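For the reader's convenience, here is a sketch of the local verification, using only the definitions above. Identifying $\phi(f)$ with the map $g \mapsto \alpha(f \otimes g)$, we have, for local sections $f, g$ of $F$ and $v$ of $V$,
\[
\phi(f)(g) = f \otimes \pi(g) - g \otimes \pi(f) ,
\]
so that
\[
\phi(i(v))(g) = v \otimes \pi(g) , \qquad \phi(f)(i(v)) = - v \otimes \pi(f) = \psi(-\pi(f))(v) .
\]
The first identity says that $\phi \circ i$ agrees with the inclusion $V \cong L^\ast \otimes V \otimes L \hookrightarrow F^\ast \otimes V \otimes L$, giving the left square of (\ref{ext2}), while the second identity gives the right square.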
\subsection{Local freeness of $\pi_{e*}(\mathcal{L}^k)$} We will assume that the direct image $\pi_{e*}(\mathcal{L}^k)$ on $S$ is locally free. In characteristic zero this follows trivially from Kodaira vanishing, but in positive characteristic it is not known in general (though of course it trivially holds for large enough $k$). For $r=2$, however, it is proven in \cite{mehta.ramadas:1996}.
Note that in characteristic zero, a coherent sheaf with a flat projective connection will necessarily be locally free, but this need not be true in general.
\subsection{The relation between $\rho^{\operatorname{Hit}}, \Phi$, and $\mathcal{L}$} We can now state the final ingredient we will need to prove the existence of the Hitchin connection: \begin{proposition}\label{phi-rho-L} The sheaf morphism $\Phi$ from (\ref{kappaphi}) equals minus the composition $(\cup [\mathcal{L}]) \circ\rho^{\operatorname{Hit}}$ of the Hitchin symbol and the characteristic class $[\mathcal{L}]$, i.e. the following diagram of sheaves on $S$ commutes: \[ \begin{tikzcd}[row sep=small] R^1\pi_{s*} T_{\mathcal{C}/S} \ar[rr, "-\Phi"] \ar[dr, "\rho^{\operatorname{Hit}} "'] & & R^1\pi_{e*}T_{\mathcal{M}/S}.\\ & \pi_{e*}\Sym^2 T_{\mathcal{M}/S} \ar[ur, "\cup {[\mathcal{L}]} "'] & \end{tikzcd} \] \end{proposition} \begin{proof} We begin with the trace-free Atiyah sequence on $\mathcal{C}\times_S\mathcal{M}$ for $\mathcal{E}$, relative to $\pi_n$, as introduced in Section \ref{seqandcon}. To keep the notation light, we shall denote in this proof the Atiyah algebroid $\mathcal{A}^0_{\mathcal{C}\times_S \mathcal{M}\big/\mathcal{M}}(\mathcal{E})$ simply by $\mathcal{A}$. 
By using the evaluation maps, as in (\ref{eval-end}), dualizing, and tensoring with $\pi^*_wT_{\mathcal{C}/S}\otimes \mathcal{E}nd^0(\mathcal{E})$, we obtain the following natural map of exact sequences: \begin{equation}\label{doubleseq} \begin{tikzcd}[column sep=small, row sep=small] 0 \ar[r] & \mathcal{E}nd^0(\mathcal{E}) \ar[r]\ar[d,equal] & \begin{array}{@{}c@{}} \mathcal{E}nd^0(\mathcal{E})\otimes \\ \mathcal{A}^*\otimes \pi_w^*T_{\mathcal{C}/S} \end{array} \ar[r] \ar[d] & \begin{array}{@{}c@{}} \mathcal{E}nd^0(\mathcal{E})\ \otimes\\ \mathcal{E}nd^0(\mathcal{E})^*\otimes \pi_w^*T_{\mathcal{C}/S}\end{array} \ar[r] \ar[d] & 0 \\ 0 \ar[r] & \mathcal{E}nd^0(\mathcal{E}) \ar[r] & \begin{array}{@{}c@{}} \mathcal{E}nd^0(\mathcal{E})\ \otimes\\ \pi_n^*(\pi_{n*}(\mathcal{A}\otimes \pi_w^* K_{\mathcal{C}/S}))^*\end{array} \ar[r] & \begin{array}{@{}c@{}}\mathcal{E}nd^0(\mathcal{E})\ \otimes\\ \pi_n^*(\pi_{n*}(\mathcal{E}nd^0(\mathcal{E})\otimes \pi_w^*K_{\mathcal{C}/S}))^*\end{array} \ar[r] & 0. \end{tikzcd} \end{equation} By relative Serre duality for $\pi_n$, the lower exact sequence is equal to the following \begin{equation}\label{relserdual} \begin{tikzcd} 0\ar[r] & \begin{array}{@{}c@{}}\mathcal{E}nd^0(\mathcal{E})\ \otimes \\ \pi_n^*(R^1\pi_{n*}\pi_w^*K_{\mathcal{C}/S})\end{array}\ar[r] &\begin{array}{@{}c@{}} \mathcal{E}nd^0(\mathcal{E})\ \otimes\\ \pi_n^*(R^1\pi_{n*}\mathcal{A}^*)\end{array} \ar[r] & \begin{array}{@{}c@{}} \mathcal{E}nd^0(\mathcal{E})\ \otimes\\ \pi_n^*(R^1\pi_{n*}\mathcal{E}nd^0(\mathcal{E})^*)\end{array} \ar[r]& 0. 
\end{tikzcd} \end{equation} By plugging $V=\mathcal{E}nd^0(\mathcal{E})$, $L=\pi_w^*T_{\mathcal{C}/S}$ and $F=\mathcal{A}$ in Lemma \ref{VBremark}, we get a map of exact sequences \begin{equation}\label{finalmap} \begin{tikzcd}[row sep=small] 0 \ar[r] & \mathcal{E}nd^0(\mathcal{E}) \ar[d, equal] \ar[r] & \mathcal{A} \ar[r] \ar[d] & \pi_w^* T_{\mathcal{C}/S} \ar[d] \ar[r] & 0 \\ 0 \ar[r] & \mathcal{E}nd^0(\mathcal{E}) \ar[r] &\begin{array}{@{}c@{}} \mathcal{E}nd^0(\mathcal{E})\ \otimes\\ \mathcal{A}^*\otimes \pi_w^*T_{\mathcal{C}/S}\end{array} \ar[r] & \begin{array}{@{}c@{}} \mathcal{E}nd^0(\mathcal{E})\ \otimes\\ \mathcal{E}nd^0(\mathcal{E})^*\otimes \pi_w^*T_{\mathcal{C}/S} \end{array} \ar[r] & 0. \end{tikzcd} \end{equation} Hence, by composing the short exact sequence maps (\ref{finalmap}) and (\ref{doubleseq}), and using the isomorphism of the target exact sequence with that of (\ref{relserdual}), we get a new map of exact sequences: \begin{equation}\label{comm-at-ks} \begin{tikzcd}[column sep=scriptsize, row sep=small] 0 \ar[r] & \mathcal{E}nd^0(\mathcal{E}) \ar[d, equal] \ar[r] & \mathcal{A} \ar[r] \ar[d] & \pi_w^*T_{\mathcal{C}/S} \ar[r] \ar[d] & 0 \\ 0 \ar[r] & \begin{array}{@{}c@{}} \mathcal{E}nd^0(\mathcal{E})\ \otimes\\ \pi_n^*(R^1\pi_{n*}(\pi_w^*K_{\mathcal{C}/S}))\end{array} \ar[r] & \begin{array}{@{}c@{}} \mathcal{E}nd^0(\mathcal{E})\ \otimes \\ \pi_n^*(R^1\pi_{n*}\mathcal{A}^*) \end{array} \ar[r] & \begin{array}{@{}c@{}} \mathcal{E}nd^0(\mathcal{E})\ \otimes \\ \pi_n^*(R^1\pi_{n*}(\mathcal{E}nd^0(\mathcal{E})^*)) \end{array} \ar[r] & 0. \end{tikzcd} \end{equation}
Both sequences remain exact after applying the direct image $R^1\pi_{n*}$, and we obtain the commutative diagram \begin{equation}\label{R1comm-at-ks} \begin{tikzcd}[row sep=small] 0 \ar[r] & R^1\pi_{n*}\mathcal{E}nd^0(\mathcal{E}) \ar[d, equal] \ar[r] & R^1\pi_{n*}\mathcal{A} \ar[r] \ar[d] & R^1\pi_{n*}\pi_w^*T_{\mathcal{C}/S} \ar[d] \ar[r] & 0 \\ 0 \ar[r] & R^1\pi_{n*}\mathcal{E}nd^0(\mathcal{E})\ar[r] & \begin{array}{@{}c@{}} R^1\pi_{n*}\mathcal{E}nd^0(\mathcal{E})\\ \otimes\ (R^1\pi_{n*}\mathcal{A}^*)\end{array} \ar[r] & \begin{array}{@{}c@{}}R^1\pi_{n*}\mathcal{E}nd^0(\mathcal{E})\ \otimes\\ (R^1\pi_{n*}(\mathcal{E}nd^0(\mathcal{E})^*))\end{array} \ar[r] & 0. \end{tikzcd} \end{equation} We now apply $\pi_{e*}$ to both exact sequences in (\ref{R1comm-at-ks}). The claimed equality is proven once we consider the commutative diagram given by the connecting homomorphisms: \begin{equation}\label{hitcupL} \begin{tikzcd}[row sep=small]
R^1\pi_{s*}(T_{\mathcal{C}/S}) \ar[r, "-\Phi"]
\ar[d, swap, "{\rho^{\operatorname{Hit}}}"]
& R^1\pi_{e*}(T_{\mathcal{M}/S}) \ar[d, equal] \\
\pi_{e*}(T_{\mathcal{M}/S}\otimes T_{\mathcal{M}/S}) \ar[r,"{\cup [\mathcal{L}]} "] & R^1\pi_{e*}(T_{\mathcal{M}/S}). \end{tikzcd} \end{equation} Since the bottom row of (\ref{R1comm-at-ks}) is given by tensoring (\ref{dualR1}) by $R^1\pi_{n*}\mathcal{E}nd^0(\mathcal{E})$, by Theorem \ref{maintracecompl} the connecting homomorphism for the bottom row is given by the relative Atiyah class of $\mathcal{L}$. By Lemma \ref{defhitchinsymbol}, the left vertical map is given by the Hitchin symbol $\rho^{\operatorname{Hit}}$. Since the upper exact sequence of (\ref{comm-at-ks}) is the same as the sequence (\ref{relatseq}) but with one sign changed (as in (\ref{ext2})), by Lemma \ref{constrphi} the connecting homomorphism for the top row of (\ref{hitcupL}) is given by $-\Phi$. \end{proof}
\subsection{Existence and flatness of the connection} We can now summarize the algebro-geometric construction of the Hitchin connection: \begin{theorem} \label{existenceconnection} Let $k$ be a positive integer. Suppose a smooth family $\pi_{s}:\mathcal{C}\rightarrow S$ of projective curves of genus $g\geq 2$ (and $g\geq 3$ if $r=2$) is given as before, defined over an algebraically closed field of characteristic different from $2$, not dividing $r$ and $k+r$, and such that $\pi_{e*}(\mathcal{L}^k)$ is locally free. Then there exists a unique projective connection on the vector bundle $\pi_{e*}(\mathcal{L}^{k})$ of non-abelian theta functions of level $k$, induced by a heat operator with symbol $$\rho=\frac{1}{r+k}\,\left(\rho^{\operatorname{Hit}}\circ \kappa_{\mathcal{C}/S}\right).$$ \end{theorem} \begin{proof} We establish the existence of the projective connection by invoking Theorem~\ref{vgdj} for the line bundle $\mathcal{L}^{k}$ over $\mathcal{M}$. We recall from (\ref{thetadet2}) the equality $K_{\mathcal{M}/S}=\mathcal{L}^{-2r}$. From Proposition \ref{thm_mu_O} we therefore have that $$\mu_{\mathcal{L}^k}=\cup(r+k)[\mathcal{L}],$$ and hence (using Proposition \ref{phi-rho-L} and (\ref{kappaphi})) we have $$\mu_{\mathcal{L}^k}\circ\rho=\mu_{\mathcal{L}^k}\circ \frac{1}{r+k}\,\left(\rho^{\operatorname{Hit}}\circ \kappa_{\mathcal{C}/S}\right)=\left(\cup[\mathcal{L}]\right)\circ \rho^{\operatorname{Hit}}\circ \kappa_{\mathcal{C}/S}=-\Phi\circ \kappa_{\mathcal{C}/S} = -\kappa_{\mathcal{M}/S},$$ which establishes condition \ref{vgdj-one} of Theorem~\ref{vgdj}. 
Condition \ref{vgdj-two} is trivially satisfied because of Proposition \ref{basicfacts}, and condition \ref{vgdj-three} follows from the algebraic Hartogs's theorem \cite[Lemma 11.3.11]{vakil:2017}, together with the well-known fact that the relative coarse moduli space $\mathcal{M}^{\operatorname{ss}}$ of semi-stable bundles with trivial determinant (which is singular but normal) is proper over $S$, and if $g>2$ or $r>2$, the complement of $\mathcal{M}$ will have codimension greater than one in $\mathcal{M}^{\operatorname{ss}}$. \end{proof} As for the curvature of the connection, we have: \begin{theorem}\label{connection-flat} Suppose furthermore that the characteristic of the base field is different from 3. Then the projective connection constructed in Theorem \ref{existenceconnection} is flat. \end{theorem} \begin{proof}
We apply Theorem \ref{thm_flatness}: condition (a) holds since by definition of the Hitchin symbol the corresponding homogeneous functions on $T^\ast_{\mathcal{M}/S}$ are the quadratic components of the Hitchin system, and hence Poisson-commute,
\[
\left\{ \rho^{\operatorname{Hit}}(\theta),\rho^{\operatorname{Hit}}(\theta') \right\}_{T^\ast_{\mathcal{M}/S}} = 0 .
\]
Condition (b) is satisfied as $\mu_{\mathcal{L}^k}$ is injective (see Lemma \ref{mu-L-inj} in Appendix \ref{appendixbasicfacts}), and (c) holds by Proposition \ref{basicfacts}. \end{proof}
\section{Proof of Theorem \ref{maintracecompl}}\label{sectionbigproof} We shall need the theory of the \emph{trace complex}, due to Beilinson and Schechtman, or rather a variation thereon due to Bloch and Esnault -- see \cite{beilinson.schechtman:1988} and \cite{bloch.esnault:2002}. In Appendix \ref{appendixtracecomplex} a summary of this theory is given, and we refer to it for definitions of the complexes $\tensor*[^{tr\!\!}]{\mathcal{A}}{^\bullet}$, $\mathcal{B}^\bullet$, and $\tensor*[^0]{\mathcal{B}}{^\bullet}$. We will be applying the trace complex in our particular setting here, where $\mathcal{M}$ is as in Section \ref{sect_basicfacts}, $\mathcal{X} = \mathcal{C} \times_S \mathcal{M}$ and $f = \pi_n$. In this context we find that the trace complex simplifies significantly, to give Theorem \ref{maintracecompl}.
Before proving Theorem \ref{maintracecompl} we need to prove a few auxiliary results. \begin{lemma}\label{tecfacts} Following the above notation: \begin{enumerate}[(a)] \item the direct image $\pi_{n*}{}^0\mathcal{B}^0(\mathcal{E})$ equals 0; \item the natural map $R^1\pi_{n*}\mathcal{E}nd^0 (\mathcal{E}) \to R^1\pi_{n*}{}^0\mathcal{B}^0(\mathcal{E})$ is zero. \end{enumerate} \end{lemma} \begin{proof} Recall from Section \ref{rmk_tracelessB} that we have a short exact sequence $$ \begin{tikzcd}0 \ar[r]& \mathcal{E} nd^0(\mathcal{E}) \ar[r]& {}^0\mathcal{B}^0(\mathcal{E}) \ar[r]& \pi_n^{-1}T_{\mathcal{M}/S} \ar[r]& 0.\end{tikzcd}$$ By applying the direct image $\pi_{n*}$ we get $$\begin{tikzcd}[column sep=small] 0 \ar[r]& \pi_{n*}\mathcal{E} nd^0(\mathcal{E}) \ar[r]& \pi_{n*}{}^0\mathcal{B}^0(\mathcal{E}) \ar[r]& T_{\mathcal{M}/S} \ar[r]& R^1\pi_{n*}\mathcal{E} nd^0(\mathcal{E}) \ar[r]& R^1\pi_{n*}{}^0\mathcal{B}^0(\mathcal{E}) \ar[r]& \cdots .\end{tikzcd}$$ Now, by Proposition \ref{basicfacts} $(a)$ and $(b)$, $\pi_{n*}\mathcal{E} nd^0(\mathcal{E})=0$ and the map $T_{\mathcal{M}/S} \to R^1\pi_{n*}\mathcal{E} nd^0(\mathcal{E})$ is an isomorphism. The two claims follow. \end{proof} \begin{proposition}\label{isodirimage} There exists an isomorphism $\phi: R^1\pi_{n*}{}^0\mathcal{B}^{-1}(\mathcal{E}) \to R^0\pi_{n*}\mathcal{B}^\bullet(\mathcal{E} nd^0(\mathcal{E}))$ that makes the following diagram commute. \[ \begin{tikzcd}[column sep=small, row sep=small]
0 \ar[r]& R^1\pi_{n*}(K_{\mathcal{X}/\mathcal{M}}) \cong \mathcal{O}_\mathcal{M} \ar[r] \ar[d, "2r\cdot \operatorname{Id}_{\mathcal{O}_\mathcal{M}}"] & R^1\pi_{n*} {}^0\mathcal{B}^{-1}(\mathcal{E}) \ar[r] \ar[d, "\phi", "\cong" swap] & R^1\pi_{n*}\left(\mathcal{E}nd^0(\mathcal{E})\right)\cong T_{\mathcal{M}/S} \ar[d, "\cong"] \ar[r]& 0 \\
0 \ar[r] & R^0\pi_{n*}K_{\mathcal{X}/\mathcal{M}}[1] \cong \mathcal{O}_\mathcal{M} \ar[r] & R^0\pi_{n*}\mathcal{B}^\bullet(\mathcal{E} nd^0(\mathcal{E})) \ar[r] & T_{\mathcal{M}/S} \ar[r] & 0 . \end{tikzcd}\] In particular $\phi$ induces $2r\cdot \operatorname{Id}_{\mathcal{O}_\mathcal{M}}$ on $\mathcal{O}_\mathcal{M}$. \end{proposition} This Proposition can also be obtained by combining \cite[Thm. 3.7 and Cor. 3.12]{sun.tsai:2004}. To keep the paper self-contained, we give a complete but slightly different proof of this statement. \begin{proof} We construct $\phi$ in several steps, notably as the composition of three maps. First of all, let us define a map $$\begin{tikzcd}\phi_1: R^1\pi_{n*}{}^0\mathcal{B}^{-1}(\mathcal{E}) \ar[r]& R^0\pi_{n*}{}^0\mathcal{B}^\bullet(\mathcal{E}).\end{tikzcd}$$ For the sake of clarity, we recall the definition of the $0^{th}$ direct image $R^0\pi_{n*}{}^0\mathcal{B}^\bullet(\mathcal{E}).$ We choose an acyclic resolution of the complex ${}^0\mathcal{B}^\bullet(\mathcal{E})$ as follows \[ \begin{tikzcd}[row sep=small] {}^0\mathcal{B}^{-1}(\mathcal{E}) \ar[r]\ar[d, hook]& {}^0\mathcal{B}^0(\mathcal{E}) \arrow[d, hook] \\ \mathcal{C}^0({}^0\mathcal{B}^{-1}(\mathcal{E})) \ar[r, "\delta^0"]\ar[d, two heads] & \mathcal{C}^0({}^0\mathcal{B}^0(\mathcal{E}))\ar[d, two heads] \\ \mathcal{C}^1({}^0\mathcal{B}^{-1}(\mathcal{E})) \ar[r, "\delta^1"] & \mathcal{C}^1({}^0\mathcal{B}^0(\mathcal{E})) \end{tikzcd}\] We push this diagram forward through $\pi_n$ and consider the following one: \[ \begin{tikzcd}[row sep=small] \pi_{n*}\mathcal{C}^0({}^0\mathcal{B}^{-1}(\mathcal{E})) \ar[r, "\delta^0"]\ar[d, "d_{-1}"] & \pi_{n*}\mathcal{C}^0({}^0\mathcal{B}^0(\mathcal{E}))\ar[d, "d_0"] \\ \pi_{n*}\mathcal{C}^1({}^0\mathcal{B}^{-1}(\mathcal{E})) \ar[d, two heads] \ar[r, "\delta^1"] & \pi_{n*}\mathcal{C}^1({}^0\mathcal{B}^0(\mathcal{E})) \ar[d, two heads] \\ R^1\pi_{n*}{}^0\mathcal{B}^{-1}(\mathcal{E})\ar[r] & R^1\pi_{n*}{}^0\mathcal{B}^{0}(\mathcal{E}) 
\end{tikzcd}\] Remark that the lower horizontal arrow factors as $$R^1\pi_{n*}{}^0\mathcal{B}^{-1}(\mathcal{E}) \to R^1\pi_{n*} \mathcal{E}nd^0(\mathcal{E}) \to R^1\pi_{n*}{}^0\mathcal{B}^{0}(\mathcal{E}).$$ By definition we have that $R^0\pi_{n*}{}^0\mathcal{B}^{\bullet}(\mathcal{E}):= \Ker(B)/ \Ima(A)$, where \begin{equation*} \begin{tikzcd}[row sep=0pt] \pi_{n*}\mathcal{C}^0({}^0\mathcal{B}^{-1}(\mathcal{E})) \ar[r, "A"]& \pi_{n*}\mathcal{C}^0({}^0\mathcal{B}^0(\mathcal{E})) \oplus \pi_{n*}\mathcal{C}^1({}^0\mathcal{B}^{-1}(\mathcal{E}))\ar[r, "B"] & \pi_{n*} \mathcal{C}^1({}^0\mathcal{B}^0(\mathcal{E})) \\ (\gamma) \ar[r, mapsto]& (\delta^0(\gamma),d_{-1}(\gamma)) & \\
& (\alpha, \beta) \ar[r, mapsto] & d_0(\alpha) - \delta^1(\beta).\\ \end{tikzcd} \end{equation*} Hence we can define a map \begin{eqnarray*} \tilde{\phi}: \pi_{n*}\mathcal{C}^1({}^0\mathcal{B}^{-1}(\mathcal{E})) & \to & \Ker(B);\\ \beta & \mapsto & (\alpha, \beta); \end{eqnarray*} where $\alpha\in\pi_{n*}\mathcal{C}^0({}^0\mathcal{B}^0(\mathcal{E}))$ is uniquely defined by the formula $d_0(\alpha)=\delta^1(\beta)$. In fact we observe that Lemma \ref{tecfacts} implies that $d_0$ is injective and that $\Ima(\delta^1) \subseteq \Ima(d_0)$. The map $\tilde{\phi}$ descends to the first of our three maps: \begin{eqnarray*} \phi_1: R^1\pi_{n*}{}^0\mathcal{B}^{-1}(\mathcal{E}) & \to & R^0\pi_{n*}{}^0\mathcal{B}^\bullet(\mathcal{E});\\ \bar{\beta} & \mapsto & \overline{(\alpha,\beta)}; \end{eqnarray*} where the overline denotes taking the corresponding classes.
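Let us spell out why $\tilde{\phi}$ indeed lands in $\Ker(B)$: with $\alpha$ the unique solution of $d_0(\alpha) = \delta^1(\beta)$, by the very definition of $B$ we have
\[
B(\tilde{\phi}(\beta)) = B(\alpha,\beta) = d_0(\alpha) - \delta^1(\beta) = 0 .
\]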
The second map is defined as follows (see App. \ref{appendixsplitting} for the precise definitions of $\widehat{\mathrm{ad}}$ and $\widetilde{\mathrm{ad}}$): \begin{eqnarray*} \phi_2: R^0\pi_{n*}{}^0\mathcal{B}^\bullet(\mathcal{E}) & \to & R^0\pi_{n*}({}^0\mathcal{B}^{-1}(\mathcal{E}nd^0(\mathcal{E})) \to \mathcal{B}^0(\mathcal{E}nd^0(\mathcal{E})));\\ \overline{(\alpha,\beta)} & \mapsto & (\widetilde{\mathrm{ad}}(\alpha), \widehat{\mathrm{ad}}(\beta)); \end{eqnarray*} where we once more abuse notation (and the reader's patience) by also denoting by $\widehat{\mathrm{ad}}$ and $\widetilde{\mathrm{ad}}$ the induced maps on the direct images. Note also that here we consider $\widetilde{\mathrm{ad}}$ as defined on the quotient ${}^0\mathcal{B}^0(\mathcal{E})$ of the subsheaf $\mathcal{B}^0(\mathcal{E})\subset \mathcal{A}(\mathcal{E})$, and we are allowed to do so since the trivial sheaf is in $\Ker(\widetilde{\mathrm{ad}})$. Moreover, we can consider $\mathcal{B}^0(\mathcal{E}nd^0(\mathcal{E}))$ as the target space of $\widetilde{\mathrm{ad}}$, since the image of ${}^0\mathcal{B}^0(\mathcal{E})$ under $\widetilde{\mathrm{ad}}$ is contained in $\mathcal{B}^0(\mathcal{E}nd^0(\mathcal{E}))\subset \mathcal{A}(\mathcal{E}nd^0(\mathcal{E}))$.
The third map is induced on $R^0\pi_{n*}({}^0\mathcal{B}^{-1}(\mathcal{E}nd^0(\mathcal{E})) \to \mathcal{B}^0(\mathcal{E}nd(\mathcal{E})))$ by the natural inclusion ${}^0\mathcal{B}^{-1}(\mathcal{E}nd^0(\mathcal{E})) \hookrightarrow \mathcal{B}^{-1}(\mathcal{E}nd(\mathcal{E}))$. Hence this gives a natural map $$\begin{tikzcd}\phi_3: R^0\pi_{n*}({}^0\mathcal{B}^{-1}(\mathcal{E}nd^0(\mathcal{E})) \ar[r]& \mathcal{B}^0(\mathcal{E}nd^0(\mathcal{E}))) \ar[r]& R^0 \pi_{n*}\mathcal{B}^\bullet(\mathcal{E}nd^0(\mathcal{E})).\end{tikzcd}$$ It is a standard check that these three maps are well defined and pass to the quotient in cohomology.
The situation is now the following: we have two exact sequences and a map $\phi:= \phi_3\circ \phi_2 \circ \phi_1$ between extensions: \begin{equation*} \begin{tikzcd}[column sep=0.8em, row sep=small] 0 \ar[r] &[-0.5ex] R^1\pi_{n*}(K_{\mathcal{X}/\mathcal{M}}) \cong \mathcal{O}_\mathcal{M} \ar[r]\ar[d]& R^1\pi_{n*}{}^0\mathcal{B}^{-1}(\mathcal{E}) \ar[d, "\phi"]\ar[r]& R^1\pi_{n*}(\mathcal{E}nd^0(\mathcal{E}))\cong T_{\mathcal{M}/S} \ar[r]\ar[d]&[-0.5ex] 0\\ 0 \ar[r] &[-0.5ex] R^0\pi_{n*}(K_{\mathcal{X}/\mathcal{M}}[1])\cong \mathcal{O}_{\mathcal{M}} \ar[r]& R^0\pi_{n*}\mathcal{B}^\bullet(\mathcal{E}nd^0(\mathcal{E})) \ar[r]& T_{\mathcal{M}/S} \ar[r] &[-0.5ex] 0. \end{tikzcd} \end{equation*} Now, suppose we have a class $\bar{\beta}$ in $R^1\pi_{n*}{}^0\mathcal{B}^{-1}(\mathcal{E})$, and let $\beta$ be a local section of $\pi_{n*}\mathcal{C}^1({}^0\mathcal{B}^{-1}(\mathcal{E}))$ representing $\bar{\beta}$. If we denote as above by $\alpha\in \pi_{n*}\mathcal{C}^0({}^0\mathcal{B}^0(\mathcal{E}))$ the uniquely determined local section from the definition of $\tilde{\phi}$, then $\phi$ sends $\bar{\beta}$ to $\overline{(\widetilde{\mathrm{ad}}(\alpha),\widehat{\mathrm{ad}}(\beta))}$.
By Proposition \ref{ST310} we have a commutative diagram \begin{equation*} \begin{tikzcd}[row sep=small] 0 \ar[r] & K_{\mathcal{X}/\mathcal{M}} \ar[r]\ar[d, "\cdot 2r"]& {}^0\mathcal{B}^{-1}(\mathcal{E}) \ar[d, "\widehat{\mathrm{ad}}"]\ar[r]& \mathcal{E}nd^0(\mathcal{E}) \ar[r]\ar[d, "\mathrm{ad}_0"]& 0\\ 0 \ar[r] & K_{\mathcal{X}/\mathcal{M}}\ar[r]& {}^0\mathcal{B}^{-1}(\mathcal{E}nd^0(\mathcal{E})) \ar[r]& \mathcal{E}nd^0(\mathcal{E}nd^0(\mathcal{E})) \ar[r] & 0. \end{tikzcd} \end{equation*} This implies the claim about the restriction of $\phi$ to $\mathcal{O}_\mathcal{M}$. Thus $\phi$ also descends to an $\mathcal{O}_\mathcal{M}$-linear map $\phi^T: T_{\mathcal{M}/S} \to T_{\mathcal{M}/S}.$ Remark in fact that, again by Appendix \ref{appendixsplitting} and the observations on $\widetilde{\mathrm{ad}}$ made above, $\phi^T$ is induced by the adjoint map between the following exact sequences. \begin{equation*} \begin{tikzcd}[row sep=small] 0 \ar[r] & \mathcal{E}nd^0(\mathcal{E}) \ar[r]\ar[d, "\mathrm{ad}"]& {}^0\mathcal{B}^{0}(\mathcal{E}) \ar[d, "\widetilde{\mathrm{ad}}_0"]\ar[r]& \pi_{n}^{-1}(T_{\mathcal{M}/S}) \ar[r]\ar[d, "\operatorname{Id}", "\cong" swap]& 0\\ 0 \ar[r] & \mathcal{E}nd(\mathcal{E}nd^0(\mathcal{E})) \ar[r]& \mathcal{B}^{0}(\mathcal{E}nd^0(\mathcal{E})) \ar[r]& \pi_{n}^{-1}(T_{\mathcal{M}/S}) \ar[r] & 0. \end{tikzcd} \end{equation*} \end{proof} \begin{proof}[Proof of Theorem \ref{maintracecompl}]The isomorphism of exact sequences claimed in the theorem will follow by composing the following isomorphisms. In the diagram below they will be composed vertically from the first to the fifth. First we apply $R^1\pi_{n*}$ to the second identification from Theorem \ref{thmdualityB}. Then we compose with the map from Proposition \ref{isodirimage}. The third map is the isomorphism from Theorem \ref{easyBS} applied to $\mathcal{E} nd^0(\mathcal{E})$ (recall that $\lambda(\mathcal{E} nd^0(\mathcal{E})) = \mathcal{L}^{-2r}$). 
The fourth map is the canonical isomorphism $\mathcal{A}(\mathcal{L}^{-2r}) \cong \mathcal{A}(\mathcal{L}^{-1})$ obtained by appropriately scaling the extension, as in Lemma \ref{extens} with $k=2r$ and $L=\mathcal{L}^{-1}$. Finally, the fifth vertical isomorphism $\mathcal{A}(\mathcal{L}^{-1}) \to \mathcal{A}(\mathcal{L})$ is the canonical map between the Atiyah algebroid of $\mathcal{L}^{-1}$ and that of its dual $\mathcal{L}$ (with the opposite symbol map). Hence we obtain the following commutative diagram \begin{equation*} \begin{tikzcd}[column sep=small, row sep=small] 0 \ar[r]& \mathcal{O}_{\mathcal{M}} \ar[r]\ar[d, "\cong", "\operatorname{Id}_{\mathcal{O}_{\mathcal{M}}}" swap] & R^1\pi_{n*}(\mathcal{A}^0_{\mathcal{X}/\mathcal{M}}(\mathcal{E})^\ast) \ar[r]\ar[d, "\cong", "\widetilde{Res}" swap] & R^1\pi_{n*}(\mathcal{E} nd^0(\mathcal{E})^*) \ar[d, "\cong", "-\operatorname{Tr}" swap]\ar[r]& 0 \\ 0 \ar[r]& \mathcal{O}_{\mathcal{M}} \ar[d, "\cong", "2r\cdot \operatorname{Id}_{\mathcal{O}_{\mathcal{M}}}" swap] \ar[r]& R^1\pi_{n*}({}^0\mathcal{B}^{-1}(\mathcal{E})) \ar[r]\ar[d, "\phi"]& R^1\pi_{n*}(\mathcal{E} nd^0(\mathcal{E})) \ar[r]\ar[d, "\cong"]& 0 \\ 0 \ar[r]& \mathcal{O}_{\mathcal{M}} \ar[d, "\cong"] \ar[r]& R^0\pi_{n*}\mathcal{B}^{\bullet}(\mathcal{E} nd^0(\mathcal{E})) \ar[r]\ar[d, "\cong"]& T_{\mathcal{M}/S} \ar[r]\ar[d, "\cong"]& 0 \\ 0 \ar[r]& \mathcal{O}_{\mathcal{M}} \ar[d, "\cong"]\ar[r]& \mathcal{A}(\mathcal{L}^{-2r}) \ar[r, "\sigma_1"]\ar[d, "\cong"]& T_{\mathcal{M}/S} \ar[r]\ar[d, "\cong"]& 0 \\ 0 \ar[r]& \mathcal{O}_{\mathcal{M}} \ar[d, "\cong"] \ar[r, "\frac{1}{2r}"]& \mathcal{A}(\mathcal{L}^{-1}) \ar[d, "\cong"] \ar[r, "\sigma_1"]& T_{\mathcal{M}/S} \ar[d, "\cong"] \ar[r] & 0 \\ 0 \ar[r]& \mathcal{O}_{\mathcal{M}} \ar[r, "\frac{1}{2r}"]& \mathcal{A}(\mathcal{L}) \ar[r, "-\sigma_1"]& T_{\mathcal{M}/S} \ar[r] & 0. \end{tikzcd} \end{equation*} Note that the first right hand side vertical map is $-\operatorname{Tr}$. 
This shows that the upper short exact sequence is isomorphic to the standard Atiyah sequence of $\mathcal{L}$, as claimed in the Theorem. \end{proof}
\appendix \section{The trace complex, following Beilinson--Schechtman and Bloch--Esnault}\label{appendixtracecomplex} We give here a presentation of the parts of the theory of \emph{trace complexes} (due to Beilinson and Schechtman \cite[\S 2]{beilinson.schechtman:1988}, see also \cite{esnault.tsai:2000}) that we need. We then describe an alternative approach to the trace complexes, suggested by Bloch and Esnault \cite[\S 5.2]{bloch.esnault:2002}.
In fact, to suit our purposes, we make two minor variations: first, we make some small changes to ensure that the construction works in positive characteristic (apart from 2), and secondly, we phrase everything in a relative context. The latter is trivial on a technical level, but we do it as the Bloch-Esnault approach requires an extra condition, which, when we invoke it in the main part of the article, is only satisfied in a relative setting.
Section \ref{sectiononBS} below covers the original trace complex, and is just expository. In Section \ref{sectiononBE}, where the alternative of Bloch-Esnault is explained, we also give proofs for various assertions merely stated in \cite{bloch.esnault:2002}.
For the purpose of this appendix, we consider a family of smooth projective curves $f: \mathcal{X} \to \mathcal{M}$ of genus $g\geq 2$, relative to a smooth base scheme $S$, \[
\begin{tikzcd}[column sep=small, row sep=small]
\mathcal{X} \arrow[r, "f"] \arrow[rd] & \mathcal{M} \arrow[d] \\
& S ,
\end{tikzcd} \] together with a vector bundle $\mathcal{E} \to \mathcal{X}$. We shall write $\mathcal{E}^\circ$ for $\mathcal{E}^\ast \otimes K_{\mathcal{X}/\mathcal{M}}$.
The trace complex we are interested in describes the Atiyah algebroid $\mathcal{A}_{\mathcal{M}/S}(\det R^\bullet f_\ast \mathcal{E})$ (remark that our notation differs from Beilinson and Schechtman's: our $\mathcal{M}$ is their $S$, and our $S$ is just a point in \cite{beilinson.schechtman:1988}). \subsection{The Beilinson--Schechtman trace complex $\tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^\bullet}(\mathcal{E})$}\label{sectiononBS} \newcommand{\cC\times_S \cM}{\mathcal{C}\times_S \mathcal{M}} \subsubsection{Overview} The relative tangent bundle $T_{\mathcal{X}/S}$ contains as subsheaves $T_{\mathcal{X}/\mathcal{M}} \subset T_{f/S} \subset T_{\mathcal{X}/S}$, where (with $df: T_{\mathcal{X}/S}\rightarrow f^*T_{\mathcal{M}/S}$) \[
T_{f/S} := (df)^{-1} f^{-1}T_{\mathcal{M}/S} , \] and corresponding Atiyah algebroids \begin{equation*}
\mathcal{A}_{\mathcal{X}/\mathcal{M}}(\mathcal{E}) \hookrightarrow \mathcal{A}_{f/S}(\mathcal{E}) \hookrightarrow \mathcal{A}_{\mathcal{X}/S}(\mathcal{E}) . \end{equation*} The Beilinson--Schechtman trace complex is a three-term complex \[ \tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^\bullet}(\mathcal{E})=\left\{\begin{tikzcd} \tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^{-2}}(\mathcal{E}) \ar[r]& \tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^{-1}}(\mathcal{E}) \ar[r] & \tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^0}(\mathcal{E}) \end{tikzcd}\right\}, \] where $\tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^{-2}}(\mathcal{E})=\mathcal{O}_{\mathcal{X}}$, $\tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^0}(\mathcal{E}) =\mathcal{A}_{f/S}(\mathcal{E})$, and $\tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^{-1}}(\mathcal{E})$ is an extension (to be defined below in Section \ref{BS-construction}) \begin{equation}\label{at1} \begin{tikzcd} 0\ar[r]& K_{\mathcal{X}/\mathcal{M}} \ar[r] & \tensor*[^{\operatorname{tr}\!\!}]{\mathcal{A}}{^{-1}}(\mathcal{E}) \ar[r, "\operatorname{res}"] & \mathcal{A}_{\mathcal{X}/\mathcal{M}}(\mathcal{E}) \ar[r]& 0, \end{tikzcd} \end{equation} which fits into the following commutative diagram \begin{equation}\label{3cpxes} \begin{tikzcd}[row sep=small] & \mathcal{O}_{\mathcal{X}} \ar[r, equal] \ar[d, "d_{\mathcal{X}/\mathcal{M}}"] & \tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^{-2}}(\mathcal{E}) \ar[d, "{d_{\mathcal{X}/\mathcal{M}}}"] & & \\ 0 \ar[r] & K_{\mathcal{X}/\mathcal{M}} \ar[r] & \tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^{-1}}(\mathcal{E}) \ar[r, "\operatorname{res}"] \ar[d, "\operatorname{res}"]& \mathcal{A}_{\mathcal{X}/\mathcal{M}}(\mathcal{E}) \ar[r] \ar[d] & 0 \\
& & \tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^0}(\mathcal{E}) \ar[r, equal] & \mathcal{A}_{f/S}(\mathcal{E}). &
\end{tikzcd} \end{equation} The main use of the trace complex $\tensor*[^{\operatorname{tr}\!\!} ]{\mathcal{A}}{^\bullet}(\mathcal{E})$ is the following: \begin{theorem}[{\cite[Thm. 2.3.1]{beilinson.schechtman:1988}}]\label{main1} The relative Atiyah sequence of the determinant-of-cohomology line bundle $$ \lambda(\mathcal{E}) = \det R^\bullet f_\ast \mathcal{E} := \det f_\ast \mathcal{E}\otimes \left(\det R^1 f_\ast \mathcal{E}\right)^*$$ of $\mathcal{E}$ with respect to $f$ is canonically isomorphic to the short exact sequence \[ \begin{tikzcd}[column sep=tiny, row sep=small] 0\ar[r] & R^0f_*\left(\Omega^{\bullet}_{\mathcal{X}/\mathcal{M}}[2]\right) \ar[r]\ar[d,"\cong"]& R^0 f_\ast(\tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^\bullet}(\mathcal{E})) \ar[r] \ar[d, "\cong"] & R^0f_*\left(\left({ {\begin{array}{@{}c@{}} {\mathcal{A}_{\mathcal{X}/\mathcal{M}}(\mathcal{E})} \\ \downarrow \\ \mathcal{A}_{f/S}(\mathcal{E}) \end{array}} }\right)[1]\right) \ar[r]\ar[d,"\cong"] & 0\\ 0 \ar[r] & \mathcal{O}_{\mathcal{M}} \ar[r] & \mathcal{A}_{\mathcal{M}/S}(\lambda(\mathcal{E})) \ar[r, "\sigma_1"] & T_{\mathcal{M}/S} \ar[r] & 0. \end{tikzcd} \] \end{theorem} \subsubsection{Construction of $\tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^{-1}}(\mathcal{E})$}\label{BS-construction} Let $\Delta \cong \mathcal{X} \subset \mathcal{X} \times_\mathcal{M} \mathcal{X}$ denote the diagonal, and $p_1$ and $p_2$ the two projections of $\mathcal{X} \times_\mathcal{M} \mathcal{X}$ to $\mathcal{X}$. For each of the projections $p_1, p_2$ we have residue maps $\operatorname{Res}^1, \operatorname{Res}^2$ along the fibres (cf.\ \cite{tate:1968,beilinson:1980,braunling:2018}).
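Concretely, in a local fibre coordinate $x$, with $(x,y)$ the induced coordinates on $\mathcal{X} \times_\mathcal{M} \mathcal{X}$ near the diagonal $\{x = y\}$, the residue along the second factor is the usual one-variable residue taken at $y = x$. The following display is only an illustration of the standard local normalization (it is not used in the sequel):

```latex
\[
\operatorname{Res}^2\!\left( f(x,y)\, \frac{dx\, dy}{(y-x)^{k}} \right)
  \;=\; \frac{1}{(k-1)!}\,\bigl(\partial_y^{\,k-1} f\bigr)(x,x)\, dx ,
\]
```

that is, the coefficient of $(y-x)^{-1}$ in the Laurent expansion of $f(x,y)\,dy/(y-x)^{k}$ along the diagonal; $\operatorname{Res}^1$ is obtained by exchanging the roles of $x$ and $y$.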
The following is a key ingredient for us: \begin{lemma}[{\cite[\S 2.1.1.1]{beilinson.schechtman:1988}}] There exists a map $$\widetilde{\operatorname{Res}}: K_{\mathcal{X}/\mathcal{M}}\boxtimes K_{\mathcal{X}/\mathcal{M}}(3\Delta) \rightarrow \mathcal{O}_{\mathcal{X}},$$ which vanishes on $K_{\mathcal{X}/\mathcal{M}}\boxtimes K_{\mathcal{X}/\mathcal{M}}(\Delta)$, is symmetric with respect to transposition, and such that $d\widetilde{\operatorname{Res}}=\operatorname{Res}^1-\operatorname{Res}^2$. The restriction of $\widetilde{\Res}$ to $K_{\mathcal{X}/\mathcal{M}}\boxtimes K_{\mathcal{X}/\mathcal{M}}(2\Delta)$ gives a short exact sequence \begin{equation*}\begin{tikzcd}[row sep=small] 0 \ar[r] & K_{\mathcal{X}/\mathcal{M}}\boxtimes K_{\mathcal{X}/\mathcal{M}}(\Delta) \ar[r] & K_{\mathcal{X}/\mathcal{M}}\boxtimes K_{\mathcal{X}/\mathcal{M}}(2\Delta) \ar[dl, shorten=-2ex, "{\operatorname{res}_\Delta = \widetilde{\operatorname{Res}}}" description]
\\ &K_{\mathcal{X}/\mathcal{M}}\boxtimes K_{\mathcal{X}/\mathcal{M}}(2\Delta)_{|\Delta}\cong \mathcal{O}_{\mathcal{X}} \ar[r] & 0, \\ \end{tikzcd} \end{equation*} where the second map, induced by $\widetilde{\Res}$, coincides with restriction to the diagonal $\Delta$. \end{lemma} We shall also need a particular description of the sheaf of (relative) first order differential operators $\mathcal{D}^{(1)}_{\mathcal{X}/\mathcal{M}}(\mathcal{E})$ (see \cite[2.1.1.2]{beilinson.schechtman:1988} or the introduction of \cite{esnault.tsai:2000}, from which we borrow the notation). Here and in what follows, we identify sheaves supported on the diagonal $\Delta$ with sheaves on $\mathcal{X}$. The next lemma is easily deduced from the definition of the ``pole at $\Delta$'' map. \begin{lemma} The symbol short exact sequence for first order differential operators on $\mathcal{E}$ relative to $f$ is isomorphic to the exact sequence \begin{equation}\label{diagonals} \begin{tikzcd}[row sep=small]
0 \arrow[r] & \frac {\mathcal{E} \boxtimes \mathcal{E}^\circ (\Delta)}{\mathcal{E} \boxtimes \mathcal{E}^\circ} \arrow[r] \arrow[d, "\cong"] & \frac {\mathcal{E} \boxtimes \mathcal{E}^\circ (2\Delta)}{\mathcal{E} \boxtimes \mathcal{E}^\circ} \arrow[r] \arrow[d, "\delta"] & \frac {\mathcal{E} \boxtimes \mathcal{E}^\circ (2\Delta)}{\mathcal{E} \boxtimes \mathcal{E}^\circ (\Delta)} \arrow[r] \arrow[d, "\cong"] & 0 \\
0 \arrow[r] & \mathcal{E} nd(\mathcal{E}) \arrow[r] & \mathcal{D}^{(1)}_{\mathcal{X}/\mathcal{M}}(\mathcal{E}) \arrow[r,"\sigma_1"] & T_{\mathcal{X}/\mathcal{M}}\otimes \mathcal{E} nd(\mathcal{E}) \arrow[r] & 0,
\end{tikzcd} \end{equation} where $\delta$ is the ``pole at $\Delta$'' map defined by $$\delta(\psi)(e) = \operatorname{Res}^2(\langle \psi, p_2^*(e) \rangle),$$ for any local section $\psi$ of $\frac{\mathcal{E} \boxtimes \mathcal{E}^\circ (2\Delta)}{\mathcal{E} \boxtimes \mathcal{E}^\circ}$ and any local section $e$ of $\mathcal{E}$. Here $\langle -,- \rangle$ is the natural pairing $\mathcal{E}^\circ \times \mathcal{E} \to K_{\mathcal{X}/\mathcal{M}}$. \end{lemma} We consider now the natural exact sequence \begin{equation}\label{diagonals2} \begin{tikzcd}[row sep=tiny] 0 \ar[r] & \frac{\mathcal{E} \boxtimes \mathcal{E}^\circ}{\mathcal{E} \boxtimes \mathcal{E}^\circ (-\Delta)} \ar[d, equal] \ar[r] & \frac{\mathcal{E} \boxtimes \mathcal{E}^\circ(2\Delta)} {\mathcal{E}\boxtimes \mathcal{E}^\circ (-\Delta)} \ar[r] & \frac{\mathcal{E} \boxtimes \mathcal{E}^\circ(2\Delta)}{\mathcal{E} \boxtimes \mathcal{E}^\circ} \ar[r] \ar[d, equal] & 0\\ & \mathcal{E} nd(\mathcal{E}) \otimes K_{\mathcal{X}/\mathcal{M}} & & \mathcal{D}^{(1)}_{\mathcal{X}/\mathcal{M}}(\mathcal{E}). & \\ \end{tikzcd} \end{equation} Then the construction that defines the short exact sequence (\ref{at1}) is obtained by taking first the pull-back of (\ref{diagonals2}) to $\mathcal{A}_{\mathcal{X}/\mathcal{M}}(\mathcal{E})\subset \mathcal{D}^{(1)}_{\mathcal{X}/\mathcal{M}}(\mathcal{E})$, and then the push-out under the trace map $\mathcal{E} nd(\mathcal{E})\otimes K_{\mathcal{X}/\mathcal{M}} \stackrel{\operatorname{Tr}}{\to} K_{\mathcal{X}/\mathcal{M}}$, \begin{equation}\label{diag_A-1}
\begin{tikzcd}[row sep=small]
0 \arrow[r] & \frac{\mathcal{E} \boxtimes \mathcal{E}^\circ}{\mathcal{E} \boxtimes
\mathcal{E}^\circ (-\Delta)} \arrow[r] & \frac{\mathcal{E} \boxtimes \mathcal{E}^\circ(2\Delta)}{\mathcal{E} \boxtimes \mathcal{E}^\circ (-\Delta)} \arrow[r] & \frac{\mathcal{E} \boxtimes \mathcal{E}^\circ(2\Delta)}{\mathcal{E} \boxtimes \mathcal{E}^\circ} \arrow[r] & 0 \\
0 \arrow[r]& \mathcal{E} nd(\mathcal{E}) \otimes K_{\mathcal{X}/\mathcal{M}} \arrow[r] \arrow[d, "\operatorname{Tr}"]\ar[u, equal] & \tensor*[^{\operatorname{tr}\!\!}]{\widetilde{\mathcal{A}}}{^{-1}}(\mathcal{E}) \arrow[r] \arrow[d] \ar[u]& \mathcal{A}_{\mathcal{X}/\mathcal{M}}(\mathcal{E}) \arrow[r] \arrow[d, equal] \ar[u]& 0 \\
0 \arrow[r] & K_{\mathcal{X}/\mathcal{M}} \arrow[r] & \tensor*[^{\operatorname{tr}\!\!}]{\mathcal{A}}{^{-1}}(\mathcal{E}) \arrow[r] & \mathcal{A}_{\mathcal{X}/\mathcal{M}}(\mathcal{E}) \arrow[r] & 0.
\end{tikzcd} \end{equation} \subsection{The quasi-isomorphic Bloch--Esnault complex $\mathcal{B}^\bullet$}\label{sectiononBE} Following \cite{bloch.esnault:2002}, we will now construct a subcomplex $\mathcal{B}^\bullet(\mathcal{E}) \subset \tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^\bullet}(\mathcal{E})$ that is more convenient for computations. Its construction relies on the existence of a splitting of the short exact sequence \begin{equation}\label{TfS_ses}
\begin{tikzcd}
0 \arrow[r] & T_{\mathcal{X}/\mathcal{M}} \arrow[r] & T_{f/S} \arrow[r, "df"] & f^{-1} T_{\mathcal{M}/S} \arrow[r] \arrow[l, dashed, bend left=30] & 0.
\end{tikzcd} \end{equation} \begin{remark}\label{fibredproduct} Note that this condition is in particular satisfied whenever $\mathcal{X}$ is a fibered product $\mathcal{X} = \mathcal{Y} \times_{S} \mathcal{M}$ and $f = \pi_2$ is the projection, since then $T_{\mathcal{X}/S} \cong \pi_1^\ast T_{\mathcal{Y}/S} \oplus \pi_2^\ast T_{\mathcal{M}/S}$ and in particular \[
T_{f/S} \cong \pi_1^\ast T_{\mathcal{Y}/S} \oplus f^{-1} T_{\mathcal{M}/S} . \]\end{remark} \subsubsection{Construction of $\mathcal{B}^\bullet(\mathcal{E})$} The definition of $\mathcal{B}^{-1}(\mathcal{E})$ is analogous to that of $\tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^{-1}}(\mathcal{E})$ via the sub-quotient (\ref{diag_A-1}). One starts once again from the short exact sequence (\ref{diagonals2}), but pulls it back all the way to $\mathcal{E} nd(\mathcal{E}) \hookrightarrow \mathcal{D}^{(1)}_{\mathcal{X}/\mathcal{M}}(\mathcal{E})$, and then pushes out along the trace \begin{equation}\label{diag_B-1}
\begin{tikzcd}[row sep=small]
0 \arrow[r] & \frac{\mathcal{E}\boxtimes \mathcal{E}^\circ}{\mathcal{E}\boxtimes \mathcal{E}^\circ (-\Delta)} \arrow[r] & \frac{\mathcal{E}\boxtimes \mathcal{E}^\circ(2\Delta)}{\mathcal{E}\boxtimes \mathcal{E}^\circ (-\Delta)} \arrow[r] & \frac{\mathcal{E}\boxtimes \mathcal{E}^\circ(2\Delta)}{\mathcal{E}\boxtimes \mathcal{E}^\circ} \arrow[r] & 0 \\
0 \arrow[r]& \mathcal{E} nd(\mathcal{E}) \otimes K_{\mathcal{X}/\mathcal{M}} \arrow[r] \arrow[d, "\operatorname{Tr}"]\ar[u, equal] & \widetilde{\mathcal{B}}^{-1} \arrow[r] \arrow[d] \ar[u]& \mathcal{E} nd(\mathcal{E}) \arrow[r] \arrow[d, equal] \ar[u]& 0 \\
0 \arrow[r] & K_{\mathcal{X}/\mathcal{M}} \arrow[r] & \mathcal{B}^{-1} \arrow[r] & \mathcal{E} nd(\mathcal{E}) \arrow[r] & 0.
\end{tikzcd} \end{equation} Similarly, we define $\mathcal{B}^0(\mathcal{E})$ via the pull-back of the symbol exact sequence of $\tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^0}(\mathcal{E}) = \mathcal{A}_{f/S}(\mathcal{E})$ under the inclusion $f^{-1} T_{\mathcal{M}/S} \hookrightarrow T_{f/S}$ arising through the splitting condition on (\ref{TfS_ses}), so that we have the following diagram \begin{equation*}
\begin{tikzcd}[row sep=small]
0 \ar[r] & \mathcal{E} nd(\mathcal{E}) \ar[d, equals] \ar[r] & \mathcal{B}^0(\mathcal{E}) \ar[d] \ar[r] & f^{-1} T_{\mathcal{M}/S} \ar[d] \ar[r] & 0\\
0 \ar[r] & \mathcal{E} nd(\mathcal{E}) \ar[r] & \tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^0} = \mathcal{A}_{f/S}(\mathcal{E}) \ar[r] & T_{f/S} \ar[r] & 0.
\end{tikzcd} \end{equation*} Hence $\mathcal{B}^\bullet(\mathcal{E})$ is a subcomplex of $\tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^{\bullet}}(\mathcal{E})$, and the following holds true. \begin{proposition}[{\cite[Sect. 5.2]{bloch.esnault:2002}}] \label{quasiiso} If the short exact sequence (\ref{TfS_ses}) is split, the complex $\mathcal{B}^\bullet(\mathcal{E})$ is quasi-isomorphic to $\tensor*[^{\operatorname{tr}\!\! }]{\mathcal{A}}{^{\bullet}}(\mathcal{E})$. \end{proposition} \begin{corollary} The short exact sequence of complexes (\ref{3cpxes}) is quasi-isomorphic to \[ \begin{tikzcd}[row sep=small] & \mathcal{O}_{\mathcal{X}} \ar[r, equals] \ar[d, "d_{\mathcal{X}/\mathcal{M}}"]& \mathcal{B}^{-2}(\mathcal{E}) \ar[d]& & \\ 0\ar[r] & K_{\mathcal{X}/\mathcal{M}} \ar[r]& \mathcal{B}^{-1}(\mathcal{E}) \ar[r] \ar[d]& \mathcal{E}nd(\mathcal{E}) \ar[r] \ar[d] & 0 \\ & & \mathcal{B}^{0}(\mathcal{E}) \ar[r, equals] & \mathcal{B}^{0}(\mathcal{E}) . & \end{tikzcd} \] \end{corollary} Moreover, since we are considering only $0^{th}$ direct images, we can drop the degree $-2$ part of the first two complexes. Hence we obtain a short exact sequence of complexes, $$0 \to K_{\mathcal{X}/\mathcal{M}}[1] \to \mathcal{B}^\bullet(\mathcal{E}) \to \mathcal{C}^\bullet(\mathcal{E}) \to 0,$$ where $\mathcal{C}^{-1}(\mathcal{E}) := \mathcal{E} nd(\mathcal{E})$ and $\mathcal{C}^0(\mathcal{E}):= \mathcal{B}^0(\mathcal{E})$. We also observe that $\mathcal{C}^\bullet(\mathcal{E})$ is quasi-isomorphic to $f^{-1}T_{\mathcal{M}/S}$ since this is exactly the cokernel of $\mathcal{E} nd (\mathcal{E}) \to \mathcal{B}^0(\mathcal{E})$. Thus Theorem \ref{main1} now simplifies to \begin{theorem}\label{easyBS} We have an isomorphism of short exact sequences \begin{equation*}
\begin{tikzcd}[column sep=3ex, row sep=small]
0 \ar[r] & R^0f_*(K_{\mathcal{X}/\mathcal{M}}[1]) \ar[d, "\cong"] \ar[r] & R^0f_*(\mathcal{B}^\bullet(\mathcal{E})) \ar[d, "\cong"] \ar[r] & R^0f_*(\mathcal{E} nd(\mathcal{E}) \to \mathcal{B}^0(\mathcal{E})) \cong T_{\mathcal{M}/S} \ar[d, "\cong"] \ar[r] &0\\ 0 \ar[r] & \mathcal{O}_\mathcal{M} \ar[r] & \mathcal{A}_{\mathcal{M}/S}(\lambda(\mathcal{E})) \ar[r] & T_{\mathcal{M}/S} \ar[r] & 0.
\end{tikzcd} \end{equation*} \end{theorem} \begin{remark} We observe that both sides of the central vertical isomorphism depend on $\mathcal{E}$. \end{remark} \subsubsection{Traceless version $\tensor*[^0]{\mathcal{B}}{^\bullet}(\mathcal{E})$ of $\mathcal{B}^\bullet(\mathcal{E})$}\label{rmk_tracelessB} As expected, we define the subsheaf $\tensor*[^0]{\mathcal{B}}{^{-1}}(\mathcal{E})\subset\mathcal{B}^{-1}(\mathcal{E})$ via the pull-back of the short exact sequence defining $\mathcal{B}^{-1}(\mathcal{E})$ in (\ref{diag_B-1}) along the inclusion of traceless endomorphisms $\mathcal{E} nd^0(\mathcal{E}) \hookrightarrow \mathcal{E} nd(\mathcal{E})$, \begin{equation*}
\begin{tikzcd}[row sep=small]
0 \ar[r] & K_{\mathcal{X}/\mathcal{M}} \ar[d, equals] \ar[r] & \tensor*[^0]{\mathcal{B}}{^{-1}}(\mathcal{E}) \ar[d] \ar[r] & \mathcal{E} nd^0(\mathcal{E}) \ar[d] \ar[r] & 0\\
0 \ar[r] & K_{\mathcal{X}/\mathcal{M}} \ar[r] & \mathcal{B}^{-1}(\mathcal{E}) \ar[r] & \mathcal{E} nd(\mathcal{E}) \ar[r] & 0.
\end{tikzcd} \end{equation*} As we did before, we also introduce a quotient sheaf $\tensor*[^0]{\mathcal{B}}{^0}(\mathcal{E})$ of $\mathcal{B}^0(\mathcal{E})$, obtained as the push-out along $\mathcal{E} nd(\mathcal{E}) \rightarrow \mathcal{E} nd^0(\mathcal{E})$, that is, \begin{equation*}
\begin{tikzcd}[row sep=small]
0 \ar[r] & \mathcal{E} nd(\mathcal{E}) \ar[d] \ar[r] & {\mathcal{B}}^0(\mathcal{E}) \ar[d] \ar[r] & f^{-1}T_{\mathcal{M}/S} \ar[d, equals] \ar[r] & 0\\
0 \ar[r] & \mathcal{E} nd^0(\mathcal{E}) \ar[r] & \tensor*[^0]{\mathcal{B}}{^0}(\mathcal{E}) \ar[r] & f^{-1}T_{\mathcal{M}/S} \ar[r] & 0.
\end{tikzcd} \end{equation*} \subsubsection{Identification of $\mathcal{B}^{-1}(\mathcal{E})$ and $\tensor*[^{0}]{\mathcal{B}}{^{-1}}(\mathcal{E})$} The duality $$\mathcal{A}_{\mathcal{X}/\mathcal{M}}(\mathcal{E})^\ast \cong \mathcal{B}^{-1}(\mathcal{E})$$ was already stated as formula (5.31) in \cite{bloch.esnault:2002}. We give a proof here, in particular to include a discussion of the traceless case, and to control the necessary restrictions on the characteristic of the ground field. \begin{theorem}\label{thmdualityB} There is a canonical identification between the natural short exact sequences \[
\begin{tikzcd}[row sep=small]
0 \arrow[r] & T_{\mathcal{X}/\mathcal{M}}^\ast \arrow[r] \arrow[d, equal] & \mathcal{A}_{\mathcal{X}/\mathcal{M}}(\mathcal{E})^\ast \arrow[r] \arrow[d, "\cong"] & \mathcal{E} nd(\mathcal{E})^\ast \arrow[r] \arrow[d, "\cong", "- \operatorname{Tr}" swap] & 0 \\
0 \arrow[r] & K_{\mathcal{X}/\mathcal{M}} \arrow[r] & \mathcal{B}^{-1}(\mathcal{E}) \arrow[r] &
\mathcal{E} nd(\mathcal{E}) \arrow[r] & 0 .
\end{tikzcd} \] There is also a traceless analogue: \[
\begin{tikzcd}[row sep=small]
0 \arrow[r] & T_{\mathcal{X}/\mathcal{M}}^\ast \arrow[r] \arrow[d, equal] & \mathcal{A}^0_{\mathcal{X}/\mathcal{M}}(\mathcal{E})^\ast \arrow[r] \arrow[d, "\cong"] & \mathcal{E} nd^0(\mathcal{E})^\ast \arrow[r] \arrow[d, "\cong", "- \operatorname{Tr}" swap] & 0 \\
0 \arrow[r] & K_{\mathcal{X}/\mathcal{M}} \arrow[r] & \tensor*[^{0}]{\mathcal{B}}{^{-1}}
(\mathcal{E}) \arrow[r] & \mathcal{E} nd^0(\mathcal{E}) \arrow[r] & 0 .
\end{tikzcd} \] \end{theorem} \begin{remark} Note that the vertical maps on the RHS are given by the opposite of the isomorphism induced by the trace pairing. \end{remark} \begin{proof} Following \cite[Sect. 2.1.1.3]{beilinson.schechtman:1988}, let us define a pairing \begin{eqnarray*}
\mathcal{E} \boxtimes \mathcal{E}^\circ (2\Delta) \times \mathcal{E} \boxtimes \mathcal{E}^\circ (\Delta) & \to & \mathcal{O}_{\mathcal{X}};\\
(\psi_1,\psi_2) & \mapsto & \widetilde{\Res}(\psi_1\cdot{}^t\psi_2); \end{eqnarray*} where $\prescript{t}{}{\psi_2}$ denotes the transposition of $\psi_2$, that is, the pull-back under the map that exchanges the two factors of the fibered product $\mathcal{X} \times_{\mathcal{M}} \mathcal{X}$. This means that $\prescript{t}{}{\psi_2}$ is a section of $\mathcal{E}^\circ \boxtimes \mathcal{E}(\Delta)$. Then we observe that, after taking the trace $\operatorname{Tr} : \mathcal{E} \otimes \mathcal{E}^\circ \to K_{\mathcal{X}/\mathcal{M}}$ on each factor, the product $\psi_1\cdot{}^t\psi_2$ gives a section of $K_{\mathcal{X}/\mathcal{M}}\boxtimes K_{\mathcal{X}/\mathcal{M}}(3\Delta)$. Since $\widetilde{\Res}$ is zero on $K_{\mathcal{X}/\mathcal{M}}\boxtimes K_{\mathcal{X}/\mathcal{M}}(\Delta)$, the pairing descends to a pairing on the quotients \[
\langle - , - \rangle : \frac{\mathcal{E}\boxtimes \mathcal{E}^\circ (2\Delta)}{\mathcal{E}\boxtimes \mathcal{E}^\circ} \times \frac{\mathcal{E} \boxtimes
\mathcal{E}^\circ (\Delta)}{\mathcal{E} \boxtimes \mathcal{E}^\circ (-\Delta)} \to
\mathcal{O}_{\mathcal{X}} . \] We claim that this pairing is non-degenerate. In order to check this, observe that it is defined on the central terms of the two short exact sequences (\ref{diagonals}) and (\ref{diagonals2}), \[
\begin{tikzcd}[row sep=small, column sep=small]
\mathcal{E} nd(\mathcal{E}) \cong \frac{\mathcal{E}\boxtimes \mathcal{E}^\circ (\Delta)}
{\mathcal{E}\boxtimes \mathcal{E}^\circ} \ar[d, hook] & \frac{\mathcal{E}\boxtimes \mathcal{E}^\circ}{\mathcal{E}\boxtimes \mathcal{E}^\circ(-\Delta)}\cong \mathcal{E} nd(\mathcal{E})
\otimes K_{\mathcal{X}/\mathcal{M}} \ar[d, hook] & \\
\mathcal{D}^{(1)}_{\mathcal{X}/\mathcal{M}}(\mathcal{E}) \cong \frac{\mathcal{E} \boxtimes \mathcal{E}^\circ (2\Delta)}{\mathcal{E} \boxtimes \mathcal{E}^\circ} \ar[d, swap, "\sigma_1", two heads] \arrow[r, phantom, "\times"] & \frac{\mathcal{E} \boxtimes \mathcal{E}^\circ (\Delta)}{\mathcal{E} \boxtimes \mathcal{E}^\circ(-\Delta)} \ar[r, "{\langle - , - \rangle}"] \ar[d, two heads] & \mathcal{O}_{\mathcal{X}} \\
\mathcal{E} nd(\mathcal{E})\otimes T_{\mathcal{X}/\mathcal{M}} \cong \frac{\mathcal{E} \boxtimes
\mathcal{E}^\circ (2\Delta)}{\mathcal{E} \boxtimes \mathcal{E}^\circ (\Delta)} & \frac{\mathcal{E} \boxtimes \mathcal{E}^\circ (\Delta)}{\mathcal{E} \boxtimes \mathcal{E}^\circ}\cong \mathcal{E} nd(\mathcal{E}) &
\end{tikzcd} \] Using the fact that $\widetilde{\Res}$ vanishes on $K_{\mathcal{X}/\mathcal{M}}\boxtimes K_{\mathcal{X}/\mathcal{M}}(\Delta)$, we note that the pairing is identically zero when restricted to the product of the kernels $\frac{\mathcal{E} \boxtimes \mathcal{E}^\circ (\Delta)}{\mathcal{E}\boxtimes \mathcal{E}^\circ} \times \frac{\mathcal{E} \boxtimes \mathcal{E}^\circ}{\mathcal{E} \boxtimes \mathcal{E}^\circ(-\Delta)}$. Therefore it induces pairings on the products of the kernel of one sequence with the quotient of the other one, that is, on $\mathcal{E} nd(\mathcal{E}) \times \mathcal{E} nd(\mathcal{E})$ and $\mathcal{E} nd(\mathcal{E}) \otimes T_{\mathcal{X}/\mathcal{M}} \times \mathcal{E} nd(\mathcal{E})\otimes K_{\mathcal{X}/\mathcal{M}}$. \begin{lemma} The residue pairing $\langle - , - \rangle$ factorizes through the trace pairings $- \operatorname{Tr}$ on $\mathcal{E} nd(\mathcal{E}) \times \mathcal{E} nd(\mathcal{E})$ and $+ \operatorname{Tr}$ on $\mathcal{E} nd(\mathcal{E}) \otimes T_{\mathcal{X}/\mathcal{M}} \times \mathcal{E} nd(\mathcal{E})\otimes K_{\mathcal{X}/\mathcal{M}}$. \end{lemma} \begin{proof} Let $\psi_1$ be a local section of $\frac{\mathcal{E}\boxtimes \mathcal{E}^\circ (\Delta)} {\mathcal{E}\boxtimes \mathcal{E}^\circ} \subset \frac{\mathcal{E} \boxtimes \mathcal{E}^\circ (2\Delta)}{\mathcal{E} \boxtimes \mathcal{E}^\circ}$ and $\psi_2$ a local section of $\frac{\mathcal{E}\boxtimes \mathcal{E}^\circ (\Delta)} {\mathcal{E}\boxtimes \mathcal{E}^\circ(-\Delta)}$. As explained above, $\langle \psi_1, \psi_2 \rangle$ depends only on $\langle \psi_1, \overline{\psi_2} \rangle$, where $\overline{\psi_2}$ is the class of $\psi_2$ in $\frac{\mathcal{E}\boxtimes \mathcal{E}^\circ (\Delta)} {\mathcal{E}\boxtimes \mathcal{E}^\circ}$. It will be enough to do the computations locally.
Choose (as in \cite{esnault.tsai:2000}) a local coordinate $x$ at a point $p \in \mathcal{X}$ and let $(x,y)$ be the induced local coordinate at the point $(p,p) \in \Delta$. Then the local equation of $\Delta$ is $x-y = 0$. Let $e_i$ be a local basis of $\mathcal{E}$ and $e_j^*$ its dual basis. Then we can write the local sections $\psi_1$ and $\overline{\psi_2}$ as $$ \psi_1 = \sum_{i,j} e_i \otimes e_j^* \frac{\alpha_{ij}(x,y-x)} {y-x}dy \ \ \text{and} \ \ \overline{\psi_2} = \sum_{k,l} e_k \otimes e_l^* \frac{\beta_{kl}(x,y-x)} {y-x}dy $$ for some local regular functions $\alpha_{ij}$ and $\beta_{kl}$. Then the local sections $\phi_1$ and $\phi_2$ of $\mathcal{E} nd(\mathcal{E})$ associated to $\psi_1$ and $\overline{\psi_2}$ are given by $$ \phi_1 = \sum_{i,j} e_i \otimes e_j^* \alpha_{ij}(x,0) \ \ \text{and} \ \ \phi_2 = \sum_{k,l} e_k \otimes e_l^* \beta_{kl}(x,0). $$ Then we compute \begin{eqnarray*} \langle \psi_1, \overline{\psi_2} \rangle & = & \widetilde{\Res}\left( \sum_{ijkl} e_i \otimes e_l^* \cdot e_k \otimes e_j^* \frac{\alpha_{ij}(x,y-x) \beta_{kl}(y,x-y)}{-(x-y)^2} dxdy\right) \\
& = & \widetilde{\Res}\left( \sum_{ij} \frac{\alpha_{ij}(x,y-x) \beta_{ji}(y,x-y)}{-(x-y)^2} dxdy \right) \\
& = & - \sum_{ij} \alpha_{ij}(x,0) \beta_{ji}(x,0) = -
\operatorname{Tr}(\phi_1 \phi_2). \end{eqnarray*} The computations for the second case are similar. \end{proof} Since the trace pairing $ \operatorname{Tr}$ is non-degenerate, we deduce from the above Lemma that the pairing $\langle - , - \rangle$ is also non-degenerate.
Now, we observe that $\mathcal{A}_{\mathcal{X}/\mathcal{M}}(\mathcal{E}) \subset \frac{\mathcal{E} \boxtimes \mathcal{E}^\circ(2\Delta)}{\mathcal{E} \boxtimes \mathcal{E}^\circ}$ and that $\frac{\mathcal{E} \boxtimes \mathcal{E}^\circ(\Delta)}{\mathcal{E} \boxtimes \mathcal{E}^\circ(-\Delta)}\twoheadrightarrow \mathcal{B}^{-1}(\mathcal{E})$. We want to prove that the restriction $\langle \mathcal{A}_{\mathcal{X}/\mathcal{M}}(\mathcal{E}), - \rangle$ descends to $\mathcal{B}^{-1}(\mathcal{E})$; this follows from the definition of $\mathcal{A}_{\mathcal{X}/\mathcal{M}}(\mathcal{E})$ by pull-back via $T_{\mathcal{X}/\mathcal{M}}\otimes \mathcal{E} nd(\mathcal{E})$ and the definition of $\mathcal{B}^{-1}(\mathcal{E})$ by push-out via $\mathcal{E} nd(\mathcal{E})\otimes K_{\mathcal{X}/\mathcal{M}} \stackrel{\operatorname{Tr}}{\twoheadrightarrow} K_{\mathcal{X}/\mathcal{M}}$, and the duality between these two maps. Hence we obtain a non-degenerate pairing \[ \langle - , - \rangle: \mathcal{A}_{\mathcal{X}/\mathcal{M}}(\mathcal{E}) \times \mathcal{B}^{-1}(\mathcal{E}) \longrightarrow \mathcal{O}_{\mathcal{X}}. \] The same argument yields non-degeneracy of the traceless version of this pairing \[ \langle - , - \rangle: \mathcal{A}_{\mathcal{X}/\mathcal{M}}^0(\mathcal{E}) \times \tensor*[^{0}]{\mathcal{B}}{^{-1}}(\mathcal{E}) \longrightarrow \mathcal{O}_{\mathcal{X}}. \] \end{proof} \begin{remark} The duality between $\mathcal{A}_{\mathcal{X}/\mathcal{M}}(\mathcal{E})$ and $\mathcal{B}^{-1}(\mathcal{E})$ was constructed by Sun--Tsai in \cite[Lemma 4.11.2]{sun.tsai:2004} using a local description of $\mathcal{B}^{-1}(\mathcal{E})$. Note that their claim involves the Atiyah algebroid $\mathcal{A}_{\mathcal{X}/\mathcal{M}}(\mathcal{E}^*)$, which is isomorphic to $\mathcal{A}_{\mathcal{X}/\mathcal{M}}(\mathcal{E})$ but has opposite extension class. \end{remark}
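As a simple illustration of Theorem \ref{thmdualityB} in rank one (an observation, not needed later): for a line bundle $\mathcal{L}$ we have $\mathcal{E} nd(\mathcal{L}) \cong \mathcal{O}_{\mathcal{X}}$, and the identification becomes the statement that dualizing the relative Atiyah sequence of $\mathcal{L}$,

```latex
\[
0 \to \mathcal{O}_{\mathcal{X}} \to \mathcal{A}_{\mathcal{X}/\mathcal{M}}(\mathcal{L})
  \to T_{\mathcal{X}/\mathcal{M}} \to 0 ,
\]
```

yields the extension $0 \to K_{\mathcal{X}/\mathcal{M}} \to \mathcal{A}_{\mathcal{X}/\mathcal{M}}(\mathcal{L})^\ast \to \mathcal{O}_{\mathcal{X}} \to 0$ defining $\mathcal{B}^{-1}(\mathcal{L})$.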
\begin{remark} We note that
$$\frac{\mathcal{E}\boxtimes \mathcal{E}^\circ(\Delta)}{\mathcal{E}\boxtimes \mathcal{E}^\circ(-\Delta)}\cong \mathcal{D}^{(1)}_{\mathcal{X}/\mathcal{M}}(\mathcal{E}) \otimes K_{\mathcal{X}/\mathcal{M}}.$$
Thus the pairing $\langle -,- \rangle $ described in the above proof induces a natural isomorphism between $\mathcal{D}^{(1)}_{\mathcal{X}/\mathcal{M}}(\mathcal{E})^*$ and $\mathcal{D}^{(1)}_{\mathcal{X}/\mathcal{M}}(\mathcal{E}) \otimes K_{\mathcal{X}/\mathcal{M}}$. \end{remark}
\section{The splitting of the adjoint map}\label{appendixsplitting} In this appendix we collect some representation-theoretic facts needed in the proof of Proposition \ref{isodirimage}. We will work in the following framework: $\mathcal{E}$ denotes a rank $r$ vector bundle on a smooth algebraic variety $X$ and, as usual, $\mathcal{E}nd^0(\mathcal{E})$ denotes the sheaf of traceless endomorphisms of $\mathcal{E}$. We need the characteristic $p$ of the field $\Bbbk$ to be $0$ or not to divide $r$.
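The pointwise computations in this appendix rest on the elementary matrices $E_{ij}$ and the trace pairing. As a quick numerical sanity check, entirely outside the formal text, the following Python sketch verifies that $\operatorname{Tr}(E_{ij}E_{kl}) = \delta_{jk}\delta_{il}$, i.e.\ that the dual basis of $\{E_{ij}\}$ under the trace pairing is $\{E_{ji}\}$:

```python
# Illustrative numerical check (not part of the formal text): under the trace
# pairing, the dual basis of the elementary matrices {E_ij} is {E_ji},
# since Tr(E_ij E_kl) = delta_jk * delta_il.

r = 3  # rank; any r >= 1 works

def E(i, j):
    """Elementary matrix E_ij: a 1 in row i, column j, zeros elsewhere."""
    return [[1.0 if (a, b) == (i, j) else 0.0 for b in range(r)] for a in range(r)]

def matmul(X, Y):
    """Product of two r x r matrices given as lists of rows."""
    return [[sum(X[a][c] * Y[c][b] for c in range(r)) for b in range(r)] for a in range(r)]

def tr(X):
    """Trace of an r x r matrix."""
    return sum(X[a][a] for a in range(r))

for i in range(r):
    for j in range(r):
        for k in range(r):
            for l in range(r):
                pairing = tr(matmul(E(i, j), E(k, l)))
                assert pairing == (1.0 if (j == k and i == l) else 0.0)
```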
First we observe that we have two non-degenerate pairings induced by the trace, \begin{eqnarray} \operatorname{Tr} : \mathcal{E} nd(\mathcal{E})\times \mathcal{E} nd(\mathcal{E}) & \to & \mathcal{O}_X , \label{trone}\\ \operatorname{Tr} : \mathcal{E} nd(\mathcal{E} nd(\mathcal{E})) \times \mathcal{E} nd(\mathcal{E} nd(\mathcal{E})) & \to & \mathcal{O}_X , \label{trtwo} \end{eqnarray} which allow us to identify $\mathcal{E} nd (\mathcal{E})$ with $\mathcal{E} nd(\mathcal{E})^*$ and $\mathcal{E} nd(\mathcal{E} nd(\mathcal{E}))$ with $\mathcal{E} nd(\mathcal{E} nd(\mathcal{E}))^*$. Moreover, we denote by \begin{eqnarray} \mathrm{ad}: \mathcal{E} nd(\mathcal{E}) & \to & \mathcal{E} nd(\mathcal{E} nd(\mathcal{E})) \label{defad}\\ \alpha & \mapsto & (\beta \mapsto [\alpha, \beta]) \nonumber \end{eqnarray} the $\mathcal{O}_X$-linear map given by the adjoint, for any local sections $\alpha, \beta$ of $\mathcal{E} nd(\mathcal{E})$. \begin{lemma} \label{splitting} Let $\alpha,\beta$ be local sections of the vector bundle $\mathcal{E} nd(\mathcal{E})$. The $\mathcal{O}_X$-linear map \begin{eqnarray*} s:\mathcal{E} nd(\mathcal{E} nd(\mathcal{E})) \cong \mathcal{E} nd (\mathcal{E})\otimes \mathcal{E} nd(\mathcal{E}) & \to & \mathcal{E} nd(\mathcal{E})\\ \alpha \otimes \beta & \mapsto & \frac{1}{2r}[\beta,\alpha] \end{eqnarray*} satisfies $s \circ \mathrm{ad}(\alpha) = \alpha - \frac{\tr(\alpha)}{r}\operatorname{Id}_{\mathcal{E}}$, i.e.\ $s$ is a splitting of the restriction of $\mathrm{ad}$ to $\mathcal{E} nd^0(\mathcal{E})$. \end{lemma} \begin{proof} It will be enough to check the equality pointwise. The statement then reduces to checking that for an $r \times r$ matrix $A \in \mathrm{M}_r(\Bbbk)$ we have the equality $s \circ \mathrm{ad} (A) = A - \frac{\tr(A)}{r} I_r$. We consider the canonical basis $\{ E_{ij} \}$ with $1 \leq i,j \leq r$ of $\mathrm{M}_r(\Bbbk)$. The dual basis of $\{ E_{ij} \}$ under the trace pairing (\ref{trone}) is given by $\{ E_{ji} \}$.
The claim then follows by a straightforward computation: \begin{eqnarray*} s \circ \mathrm{ad}(A) & = & s \left( \sum_{i,j} E_{ji} \otimes [A,E_{ij}] \right) = \frac{1}{2r} \sum_{i,j} \left( AE_{ij}E_{ji} - E_{ij}AE_{ji} -
E_{ji}AE_{ij} + E_{ji}E_{ij}A \right) \\
& = & \frac{1}{2r} (2r A - 2\tr(A) I_r). \end{eqnarray*} \end{proof} \begin{lemma} \label{dualofsplitting} Using the identifications (\ref{trone}) and (\ref{trtwo}) given by the trace pairings, we denote by $s^* : \mathcal{E} nd(\mathcal{E}) \to \mathcal{E} nd(\mathcal{E} nd(\mathcal{E}))$ the dual of $s$. Then we have the equality $$ s^* = \frac{1}{2r} \mathrm{ad}.$$ \end{lemma} \begin{proof} As in the previous lemma we will check the equality pointwise. By the definition of the dual map $s^*$ and the trace pairings (\ref{trone}) and (\ref{trtwo}) it is easily seen that the claimed equality is equivalent to the equality $$ \tr (\mathrm{ad}(A) \cdot B \otimes C) = \tr (A [C,B])$$ for any matrices $A,B,C \in \mathrm{M}_r(\Bbbk)$. Note that the trace on the left-hand side is the trace on $\operatorname{End}(\mathrm{M}_r(\Bbbk)) \cong \mathrm{M}_r(\Bbbk) \otimes \mathrm{M}_r(\Bbbk)$. Again this equality is proved by a straightforward computation: \begin{eqnarray*} \tr(\mathrm{ad}(A) \cdot B \otimes C) & = & \sum_{i,j} \tr ( E_{ji} \otimes [A,E_{ij}] \otimes B \otimes C ) = \sum_{i,j} \tr(E_{ji} C ) \tr( [A,E_{ij}] B) \\ & = & \sum_{i,j} \tr(E_{ji} C ) \left( \tr (BAE_{ij}) - \tr (E_{ij}AB) \right) = \tr(BAC) - \tr(ABC) \\ & = & \tr(A[C,B]). \end{eqnarray*} \end{proof} We will also slightly abuse notation and denote by $\mathrm{ad}$ the $\mathcal{O}_X$-linear map $\mathcal{E} nd(\mathcal{E}) \to \mathcal{E} nd(\mathcal{E} nd^0(\mathcal{E}))$ induced by the one defined in (\ref{defad}). We will write instead $\mathrm{ad}_0:\mathcal{E} nd^0(\mathcal{E}) \to \mathcal{E} nd^0(\mathcal{E} nd^0(\mathcal{E}))$ for the restriction to $\mathcal{E} nd^0(\mathcal{E})$. \begin{proposition}\label{ST310} \begin{enumerate}[(a)] \item There exists an $\mathcal{O}_X$-linear map $$ \widetilde{\mathrm{ad}}: \mathcal{A}(\mathcal{E}) \to \mathcal{A}(\mathcal{E} nd^0(\mathcal{E})), $$ extending $\mathrm{ad}$ and inducing the identity on $T_X$.
Note that $\widetilde{\mathrm{ad}}$ factorizes through $\mathcal{A}^0(\mathcal{E})$. We shall denote by $$ \widetilde{\mathrm{ad}}_0 : \mathcal{A}^0(\mathcal{E}) \to \mathcal{A}(\mathcal{E} nd^0(\mathcal{E})) $$ the factorized map. \item There exists an $\mathcal{O}_X$-linear map $$ \widetilde{s}: \mathcal{A}(\mathcal{E} nd^0(\mathcal{E})) \to \mathcal{A}^0(\mathcal{E}), $$ extending $s: \mathcal{E} nd(\mathcal{E} nd^0(\mathcal{E})) \to \mathcal{E} nd^0(\mathcal{E})$, inducing the identity on $T_X$ and such that $\widetilde{s} \circ \widetilde{\mathrm{ad}}_0 = \operatorname{Id}_{\mathcal{A}^0(\mathcal{E})}$. \item With the notation of Appendix A, there exists an $\mathcal{O}_\mathcal{X}$-linear map $$\widehat{\mathrm{ad}}:{}^0\mathcal{B}^{-1}(\mathcal{E}) \to{}^0\mathcal{B}^{-1}(\mathcal{E} nd^0(\mathcal{E})),$$ lifting $\mathrm{ad}_0$ and inducing $2r \operatorname{Id}$ on the line subbundle $K_{\mathcal{X}/\mathcal{M}}$. \end{enumerate} \end{proposition} \begin{proof} Part (a) is proved in \cite{atiyah:1957} pages 188--189. \\ Part (b): We define $\widetilde{s}$ as the push-out of the exact sequence $$ 0 \to \mathcal{E} nd^0 (\mathcal{E} nd^0(\mathcal{E})) \to \mathcal{A}^0(\mathcal{E} nd^0(\mathcal{E})) \to T_X \to 0 $$ under the $\mathcal{O}_X$-linear map $s$. Then, by Lemma \ref{splitting}, since $s$ is a splitting of $\mathrm{ad}_0$, we see that the extension class of the push-out is the same as the extension class of $\mathcal{A}^0(\mathcal{E})$, hence these two vector bundles are isomorphic (see e.g. \cite{atiyah:1957} pages 188--189). \\ Part (c): We recall from Theorem \ref{thmdualityB} that there exist isomorphisms $$ \delta_{\mathcal{E}} : \mathcal{A}^0_{\mathcal{X}/\mathcal{M}}(\mathcal{E})^* \to {}^0\mathcal{B}^{-1}(\mathcal{E}) \ \ \text{and} \ \
\delta_{\mathcal{E} nd^0(\mathcal{E})} : \mathcal{A}^0_{\mathcal{X}/\mathcal{M}}(\mathcal{E} nd^0(\mathcal{E}))^* \to {}^0\mathcal{B}^{-1}(\mathcal{E} nd^0(\mathcal{E})) . $$ We then construct the map $\widehat{\mathrm{ad}}$ as the composition $$\widehat{\mathrm{ad}} = (2r) \delta_{\mathcal{E} nd^0(\mathcal{E})} \circ \widetilde{s}^* \circ \delta_{\mathcal{E}}^{-1}.$$ Then $\widehat{\mathrm{ad}}$ induces $(2r) \operatorname{Id}$ on $K_{\mathcal{X}/\mathcal{M}}$ and, by Lemma \ref{dualofsplitting}, $\widehat{\mathrm{ad}}$ lifts the map $\mathrm{ad}_0$. \end{proof} \begin{remark} Proposition \ref{ST310} coincides with \cite[Prop. 3.10]{sun.tsai:2004}. Our proof is different since we give a global construction of the liftings of the adjoint maps. \end{remark}
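The two matrix identities proved in Lemmas \ref{splitting} and \ref{dualofsplitting} can also be confirmed numerically. The following Python sketch, illustrative only and not part of the text, implements $\mathrm{ad}(A) = \sum_{i,j} E_{ji} \otimes [A, E_{ij}]$ and $s(\alpha \otimes \beta) = \frac{1}{2r}[\beta,\alpha]$ on elementary tensors, and checks $s \circ \mathrm{ad}(A) = A - \frac{\tr(A)}{r} I_r$ as well as $\tr(\mathrm{ad}(A)\cdot B\otimes C)=\tr(A[C,B])$ on random matrices:

```python
import random

r = 3  # rank

def E(i, j):
    """Elementary matrix E_ij."""
    return [[1.0 if (a, b) == (i, j) else 0.0 for b in range(r)] for a in range(r)]

def matmul(X, Y):
    return [[sum(X[a][c] * Y[c][b] for c in range(r)) for b in range(r)] for a in range(r)]

def add(X, Y):
    return [[X[a][b] + Y[a][b] for b in range(r)] for a in range(r)]

def scale(t, X):
    return [[t * X[a][b] for b in range(r)] for a in range(r)]

def bracket(X, Y):
    """Commutator [X, Y] = XY - YX."""
    return add(matmul(X, Y), scale(-1.0, matmul(Y, X)))

def tr(X):
    return sum(X[a][a] for a in range(r))

random.seed(0)
A = [[random.uniform(-1, 1) for _ in range(r)] for _ in range(r)]
B = [[random.uniform(-1, 1) for _ in range(r)] for _ in range(r)]
C = [[random.uniform(-1, 1) for _ in range(r)] for _ in range(r)]

# ad(A) = sum_{i,j} E_ji (x) [A, E_ij]  (dual bases E_ij, E_ji as in the proof),
# and s(alpha (x) beta) = (1/2r)[beta, alpha], so
# s(ad(A)) = (1/2r) sum_{i,j} [[A, E_ij], E_ji].
s_ad_A = [[0.0] * r for _ in range(r)]
for i in range(r):
    for j in range(r):
        s_ad_A = add(s_ad_A, scale(1.0 / (2 * r), bracket(bracket(A, E(i, j)), E(j, i))))

I = [[1.0 if a == b else 0.0 for b in range(r)] for a in range(r)]
expected = add(A, scale(-tr(A) / r, I))  # A - (tr A / r) Id
assert all(abs(s_ad_A[a][b] - expected[a][b]) < 1e-9 for a in range(r) for b in range(r))

# tr(ad(A) . B (x) C) = sum_{i,j} tr(E_ji C) tr([A, E_ij] B) = tr(A [C, B]).
lhs = sum(tr(matmul(E(j, i), C)) * tr(matmul(bracket(A, E(i, j)), B))
          for i in range(r) for j in range(r))
assert abs(lhs - tr(matmul(A, bracket(C, B)))) < 1e-9
```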
\section{Basic facts about the moduli space $\mathcal{M}$ through the Hitchin system}\label{appendixbasicfacts} In this appendix we give proofs for some of the basic facts about the moduli space of stable bundles $\mathcal{M}$ (as in Section \ref{sect_basicfacts}) that we use in the main body of the paper. These are essentially all well known, but we were unable to find references for them in the generality we need (outside the complex case). We therefore show here how they can all be obtained using the Hitchin system -- a strategy once again due to Hitchin (cf.\ \cite[\S 6]{hitchin:1987} and \cite[\S 5]{hitchin:1990}) -- via some minor adaptations to the algebro-geometric setting. \subsection{The moduli space of Higgs bundles and the Hitchin system} We will denote by $\mathcal{M}^{\operatorname{H}, \operatorname{ss}}$ the moduli space of semi-stable Higgs bundles with trivial determinant (and trace-free Higgs field) -- all still relative over $S$ as before. This space is singular but normal, and comes equipped with the Hitchin system, a projective morphism $\phi$ to the vector bundle $\pi_{\mathcal{H}}:\mathcal{H}\rightarrow S$ associated to the sheaf $\oplus_{i=2}^r \pi_{s *} K_{\mathcal{C}/S}^i$ over $S$. This morphism is equivariant with respect to the $\mathbb{G}_m$-action that scales the Higgs fields and acts with weight $i$ on $\pi_{s *} K^i_{\mathcal{C}/S}$.
The fibers of $\pi_{\operatorname{H}}:\mathcal{M}^{\operatorname{H}, \operatorname{ss}}\rightarrow S$ have a canonical (algebraic) symplectic structure on their smooth locus, which extends the one on $T^*_{\mathcal{M}/S}$. Closed points in $\mathcal{H}$ give rise to degree $r$ spectral covers of $\mathcal{C}$. The locus whose spectral curve is smooth is denoted by $\mathcal{H}^{\operatorname{reg}}$. \subsection{Proofs} \begin{proposition}[{Proposition \ref{basicfacts}(\ref{basicfactsthree})}] There are no global vector fields on $\mathcal{M}$: $$\pi_{e*}T_{\mathcal{M}/S}=\{0\}.$$ \end{proposition} \begin{proof} Elements of $\pi_{e*}T_{\mathcal{M}/S}$ would give rise to global functions on $T^*_{\mathcal{M}/S}$. As the complement of $\mathcal{M}$ in $\mathcal{M}^{\operatorname{H}, \operatorname{ss}}$ has sufficiently high codimension, these would extend by Hartogs's theorem to all of $\mathcal{M}^{\operatorname{H}, \operatorname{ss}}$. As they have weight $1$ under the $\mathbb{G}_m$-action, they have to be pulled back from functions on $\mathcal{H}$ of the same weight, but there are no such functions. \end{proof} \begin{proposition}\label{rho-Hit-isom} The Hitchin symbol $\rho^{\operatorname{Hit}}$ is an isomorphism. \end{proposition} \begin{proof} Elements of ${\pi_e}_\ast \Sym^2 T_{\mathcal{M} / S}$ can be understood as regular functions on the total space of $T^*_{\mathcal{M}/S}$, of degree $2$ on all tangent spaces. In turn these extend, by Hartogs's theorem, to $\mathcal{M}^{\operatorname{H},\operatorname{ss}}$, where they are of degree $2$ with respect to the $\mathbb{G}_m$-action that scales the Higgs field. As the Hitchin system is equivariant, they are moreover obtained from regular linear functions on the quadratic part of the Hitchin base, which is exactly given by $R^1 {\pi_s}_\ast T_{\mathcal{C} / S}$ through $\rho^{\operatorname{Hit}}$.
\end{proof} To establish that $\mu_{\mathcal{L}^k}$ is injective, we can again adapt the reasoning from \cite[\S 5]{hitchin:1990}. By Propositions \ref{thm_mu_O}, \ref{phi-rho-L} and \ref{rho-Hit-isom}, it suffices to show that $\Phi$ is injective.
\begin{lemma}[{\cite[Proposition 5.2]{hitchin:1990}}]\label{useful-lemma} There exists a canonical isomorphism $$\begin{tikzcd}\Psi:\pi_{\mathcal{H}*}\mathcal{O}_{\mathcal{H}}\otimes \mathcal{H}^*\ar[r] & R^1\pi_{\operatorname{H} *}\mathcal{O},\end{tikzcd}$$ of $\pi_{\mathcal{H}*}\mathcal{O}_{\mathcal{H}}$-modules which is equivariant with respect to the natural action of $\mathbb{G}_m$ on $\pi_{\mathcal{H}*}\mathcal{O}_{\mathcal{H}}\otimes \mathcal{H}^*$, and the natural action twisted by weight $-1$ on $R^1\pi_{\operatorname{H} *}\mathcal{O}$. \end{lemma} \begin{proof} Indeed, sections of $\mathcal{H}^*$ give rise to fiber-wise linear functions on $\mathcal{H}$, which pull back by $\phi$ to functions on $\mathcal{M}^{\operatorname{H}}$. As the latter has an algebraic symplectic structure on $\mathcal{M}^{\operatorname{H}, \operatorname{s}}$ extending the canonical one on $T^*_{\mathcal{M}/S}$, these give rise to Hamiltonian vector fields on $\mathcal{M}^{\operatorname{H}, \operatorname{s}}$ which are tangent to the fibres of $\phi$. Moreover, the inverse of the determinant-of-cohomology line bundle $\mathcal{L}$ naturally extends to $\mathcal{M}^{\operatorname{H}}$, and is relatively ample with respect to $\phi$. Taking the cup product with its relative Atiyah class gives a natural morphism $\pi_{\operatorname{H} *} T_{\mathcal{M}^{\operatorname{H}}/S}\rightarrow R^1\pi_{\operatorname{H} *}\mathcal{O}$. The composition gives a morphism $\mathcal{H}^*\rightarrow R^1\pi_{\operatorname{H} *}\mathcal{O}$, which naturally extends as a morphism of $\pi_{\mathcal{H}*}\mathcal{O}_{\mathcal{H}}$-modules to the desired morphism $\Psi$.
To show that $\Psi$ is an isomorphism, one can argue as follows: as $\pi_{\operatorname{H}}$ factors over $\pi_{\mathcal{H}}$, and the latter is an affine morphism, we have that $R^1\pi_{\operatorname{H} *}\mathcal{O}_{\mathcal{M}^{\operatorname{H}}}\cong \pi_{\mathcal{H}*}\left(R^1\phi_{*}\mathcal{O}_{\mathcal{M}^{\operatorname{H}}}\right)$. Now, through the theory of abelianisation, we know that over a locus $\mathcal{H}^{\circ}$ whose complement has sufficiently high codimension, the morphism $\phi$ is a family of (semi-)abelian varieties. The line bundle $\mathcal{L}$ restricts to an ample one on the fibres, and for those fibres $X$ it is known that cupping with $[\mathcal{L}]$ is an isomorphism $H^0(X, T_X)\rightarrow H^1(X, \mathcal{O}_X)$. As the vector fields on $\mathcal{M}^{\operatorname{H}}$ coming from $\mathcal{H}^*$ are independent, on each such $X$ the space $H^0(X, T_X)$ is spanned by these vector fields. As a result, we find that, on $\mathcal{H}^{\circ}$, $R^1\phi_{*}\mathcal{O}_{\mathcal{M}^{\operatorname{H}}}$ is a trivial vector bundle, and that the map $\Psi$ is indeed an isomorphism.
It is also straightforward to observe that the map $\Psi$ is in fact equivariant for the natural $\mathbb{G}_m$-action defined on all spaces, induced by the scaling of the Higgs fields, provided that we twist the action on $R^1\pi_{\operatorname{H}*}\mathcal{O}$ by weight $-1$. \end{proof}
\begin{proposition}[{Proposition \ref{basicfacts}(\ref{basicfactsfour})}] \label{nor1}We have that $R^1\pi_{e*}\mathcal{O}_{\mathcal{M}}=\{0\}$. \end{proposition} \begin{proof} It suffices to remark that sections of $R^1\pi_{e*}\mathcal{O}_{\mathcal{M}}$ correspond to sections of\\ $R^1\pi_{\operatorname{H}*}\mathcal{O}_{\mathcal{M}^{\operatorname{H},\operatorname{ss}}}$ of weight $0$, which would correspond under $\Psi$ to sections of weight $-1$, of which there are none. \end{proof} \begin{proposition}\label{cap-isom} The map $\cup[\mathcal{L}]:\pi_{e*}\Sym^2T_{\mathcal{M}/S}\rightarrow R^1\pi_{e*} T_{\mathcal{M}/S}$ is an isomorphism. \end{proposition} \begin{proof} We now want to restrict the isomorphism $\Psi$ from Lemma \ref{useful-lemma} to the sub-bundle of $\pi_{\mathcal{H}*}\mathcal{O}_{\mathcal{H}}\otimes \mathcal{H}^*$ of weight $2$, which corresponds exactly to fibre-wise linear functionals on $\pi_{s*}K^2_{\mathcal{C}/S}$, and which by relative Serre duality is given by $R^1\pi_{s*} T_{\mathcal{C}/S}$. On this space $\Psi$ restricts to give an isomorphism to $R^1\pi_{e*}T_{\mathcal{M}/S}$. To show that this is a multiple of $\Phi$, one can argue as follows: if $\mathcal{O}^{(1)}$ is the structure sheaf of the first order infinitesimal neighborhood of $\mathcal{M}$ in $\mathcal{M}^{\operatorname{H}}$ (cfr. \cite[\href{https://stacks.math.columbia.edu/tag/05YW}{Tag 05YW}]{stacks-project}), we have the short exact sequence on $\mathcal{M}$ $$\begin{tikzcd} 0\ar[r] & N^*_{\mathcal{M}/\mathcal{M}^{\operatorname{H}}} \ar[r] & \mathcal{O}^{(1)} \ar[r] & \mathcal{O}_{\mathcal{M}}\ar[r] & 0. \end{tikzcd} $$ Here $N^*_{\mathcal{M}/\mathcal{M}^{\operatorname{H}}}$ is the co-normal bundle of $\mathcal{M}$ in $\mathcal{M}^{\operatorname{H}}$, which is canonically isomorphic to the tangent bundle $T_{\mathcal{M}/S}$. As by Proposition \ref{nor1} we have that $R^1\pi_{e*}\mathcal{O}_{\mathcal{M}}=\{0\}$, this gives $$R^1\pi_{e*} T_{\mathcal{M}/S}\cong R^1\pi_{e*}\mathcal{O}^{(1)}.$$
If $\mathcal{I}$ is the ideal sheaf of $\mathcal{M}$ in $\mathcal{M}^{\operatorname{H}}$, we have that $\mathcal{O}^{(1)}=\left(\mathcal{O}_{\mathcal{M}^{\operatorname{H}}}\big/ \mathcal{I}^2\right) \Big|_{\mathcal{M}}$, and hence we have a restriction map $$\begin{tikzcd}R^1\pi_{\operatorname{H} *}\mathcal{O}_{\mathcal{M}^{\operatorname{H}}}\ar[r] & R^1\pi_{e*}\mathcal{O}^{(1)}\cong R^1\pi_{e*} T_{\mathcal{M}/S}\end{tikzcd},$$ which is the identity on $R^1\pi_{e*} T_{\mathcal{M}/S}$ (sitting inside $R^1\pi_{\operatorname{H} *}\mathcal{O}_{\mathcal{M}^{\operatorname{H}}}$ as the weight $1$ part). So we only need to keep track of first order information in the normal direction. We now claim that, for any $\Delta\in R^1\pi_{\operatorname{H} *}(\Omega^1_{\mathcal{M}^{\operatorname{H}}/S})$ which restricts to $\widetilde{\Delta}\in R^1\pi_{e*}(\Omega^1_{\mathcal{M}/S})$ the following diagram is commutative: \begin{equation}\label{lastdiagram}\begin{tikzcd}[row sep=small,column sep=small] \ &\pi_{e*}\Sym^2 T_{\mathcal{M}/S} \ar[rr, "{\cup 2\widetilde{\Delta}}"] \ar[dl, hook]&\ & R^1\pi_{e*}T_{\mathcal{M} / S} &\ \\ \pi_{\operatorname{H}*}\mathcal{O}_{\mathcal{M}^{\operatorname{H}}}\ar[dr, "d" '] &\ &\ &\ & R^1\pi_{e*}\mathcal{O}^{(1)}\ar[ul, "{\cong}" ']\\ \ & \pi_{\operatorname{H} *}\Omega^1_{\mathcal{M}^{\operatorname{H}}/S}\ar[r, "\omega", "\cong"'] & \pi_{\operatorname{H}*}T_{\mathcal{M}^{\operatorname{H}}/S}\ar[r, "{\cup\Delta}"] & R^1\pi_{\operatorname{H}*}\mathcal{O}_{\mathcal{M}^{\operatorname{H}}}\ar[ur, "{\text{restrict}}" '] &\ \end{tikzcd}\end{equation}
In \cite[page 379]{hitchin:1990} this was shown using holomorphic Darboux coordinates on the total space of $T^*_{\mathcal{M}/S}$, coming from (holomorphic) coordinates on $\mathcal{M}$. The reasoning does not, strictly speaking, need the latter choice, and it suffices to work with a local trivialisation of $T_{\mathcal{M}/S}$. In this sense it also goes through in an algebraic context, as follows. Let $U_{i}$ be a covering of $\mathcal{M}$ by open affines, such that $T_{\mathcal{M}/S}\big|_{U_{i}}$ is free. For a fixed $i$ we choose generators $e_1, \ldots, e_n$ of the latter. These can also be understood as functions $f_1, \ldots, f_n$ on $T^*_{\mathcal{M}/S}\big|_{U_{i}}$. If we denote the dual sections to $e_1, \ldots, e_n$ by $e^1, \ldots, e^n$, then we can interpret their pullbacks as one-forms on the total space of $T^*_{\mathcal{M}/S}\big|_{U_{i}}$. The tautological one-form $\theta$ on the total space of $T^*_{\mathcal{M}/S}$ can now be written locally as $\theta=\sum_{\alpha} f_{\alpha} e^{\alpha}$, and the canonical symplectic form is therefore $\omega=-d\theta=\sum_{\alpha} df_{\alpha}\wedge e^{\alpha}$. If a section of $\pi_{e*}\Sym^2 T_{\mathcal{M}/S}$ is locally written as $G=\sum_{\alpha,\beta}G^{\alpha\beta}e_{\alpha}\odot e_{\beta}$ (with the $G^{\alpha\beta} \in \mathcal{O}_{U_{i}}$), then the corresponding element of $\pi_{\operatorname{H}*}\mathcal{O}_{\mathcal{M}^{\operatorname{H}}}$ can be written as $\sum_{\alpha,\beta}G^{\alpha\beta}f_{\alpha}f_{\beta}$. The corresponding Hamiltonian vector field (with respect to $\omega$) is then locally written as $$-\sum_{\alpha,\beta,\gamma}e_{\gamma}(G^{\alpha\beta})f_{\alpha} f_{\beta}h^{\gamma}+2\sum_{\alpha,\beta}G^{\alpha\beta}f_{\alpha} e_{\beta},$$ (where, with a slight abuse of notation, we denote by $e_1, \ldots, e_n, h^1, \ldots, h^n$ the elements of the basis of $T_{\mathcal{M}^{\operatorname{H}}/S}$ dual to $e^1, \ldots, e^n, df_1, \ldots, df_n$).
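For the reader's convenience, this local expression can be checked directly; the following short computation is our own addition and assumes the sign convention $\iota_{X_G}\omega=-dG$ for the Hamiltonian vector field $X_G$ of a function $G$. Since the $G^{\alpha\beta}$ are pulled back from the base and are symmetric in $\alpha,\beta$, $$d\Big(\sum_{\alpha,\beta}G^{\alpha\beta}f_{\alpha}f_{\beta}\Big)=\sum_{\alpha,\beta,\gamma}e_{\gamma}(G^{\alpha\beta})f_{\alpha}f_{\beta}\,e^{\gamma}+2\sum_{\alpha,\beta}G^{\alpha\beta}f_{\alpha}\,df_{\beta}.$$ On the other hand, from $\omega=\sum_{\alpha} df_{\alpha}\wedge e^{\alpha}$ one gets $\iota_{e_{\beta}}\omega=-df_{\beta}$ and $\iota_{h^{\gamma}}\omega=e^{\gamma}$, so contracting the displayed vector field with $\omega$ indeed returns minus the differential above.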
After taking the cup product with $\Delta$ (which we represent by a \v{C}ech cohomology class with respect to the open covering $T^*_{U_{i}/S}$), and restricting to $\mathcal{O}^{(1)}$, this indeed gives $2G\cup \widetilde{\Delta}$. We conclude by applying this to $\Delta=[\mathcal{L}]$, in which case the `bottom path' of (\ref{lastdiagram}) is given by a component of the isomorphism $\Psi$.
\end{proof}
\begin{corollary}
The map $\Phi$ from (\ref{kappaphi}) is an isomorphism.
\end{corollary}
\begin{proof}
This follows immediately by combining Proposition \ref{rho-Hit-isom}, Proposition \ref{cap-isom}, and Proposition \ref{phi-rho-L}.
\end{proof} Finally, as a corollary we also obtain the last fact we need in the proof of the flatness of the Hitchin connection (Theorem \ref{connection-flat}): \begin{lemma}\label{mu-L-inj} The map $\mu_{\mathcal{L}^k}$ is injective. \end{lemma}
\def\cftil#1{\ifmmode\setbox7\hbox{$\accent"5E#1$}\else
\setbox7\hbox{\accent"5E#1}\penalty 10000\relax\fi\raise 1\ht7
\hbox{\lower1.15ex\hbox to 1\wd7{\hss\accent"7E\hss}}\penalty 10000
\hskip-1\wd7\penalty 10000\box7}
\def$'${$'$}
\end{document}
\begin{document}
\title[Homotopy of Ringed Finite Spaces]
{Homotopy of Ringed Finite Spaces}
\author{ Fernando Sancho de Salas} \address{Departamento de Matem\'{a}ticas and Instituto Universitario de F\'{\i}sica Fundamental y Matem\'{a}ticas (IUFFyM), Universidad de Salamanca, Plaza de la Merced 1-4, 37008 Salamanca, Spain}
\email{fsancho@usal.es}
\subjclass[2010]{14-XX, 55PXX, 05-XX, 06-XX}
\keywords{Finite spaces, quasi-coherent modules, homotopy}
\thanks {The author was supported by research project MTM2013-45935-P (MINECO)}
\begin{abstract} A ringed finite space is a ringed space whose underlying topological space is finite. The category of ringed finite spaces contains, fully faithfully, the category of finite topological spaces and the category of affine schemes. Any ringed space, endowed with a finite open covering, produces a ringed finite space. We study the homotopy of ringed finite spaces, extending Stong's homotopy classification of finite topological spaces to ringed finite spaces. We also prove that the category of quasi-coherent modules on a ringed finite space is a homotopy invariant. \end{abstract}
\maketitle
\section*{Introduction}
This paper deals with ringed finite spaces and quasi-coherent modules on them. Let us motivate why these structures deserve some attention (Theorems 1 and 2 below). Let $S$ be a topological space and let ${\mathcal U}=\{ U_1,\dots, U_n\}$ be a finite covering by open subsets. Let us consider the following equivalence relation on $S$: we say that $s\sim s'$ if ${\mathcal U}$ does not distinguish $s$ and $s'$; that is, if we denote $U^s=\underset{s\in U_i}\cap U_i$, then $s\sim s'$ iff $U^s=U^{s'}$. Let us denote by $X=S/\negmedspace\sim$ the quotient set, with the topology given by the following partial order: $[s]\leq [s']$ iff $U^s\supseteq U^{s'}$. Then $X$ is a finite $T_0$-topological space, and the quotient map $\pi\colon S\to X$ is continuous.
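As an aside (our illustration, not part of the original text), when $S$ itself is a finite set the construction of $X$ from $(S,{\mathcal U})$ is a short computation; the following Python sketch builds the sets $U^s$ and the resulting partial order:

```python
# A minimal sketch of the finite T0-space X = S/~ associated to a finite
# open covering U of S, with S finite so that everything is computable.
# For each s we form U^s = intersection of all U_i containing s;
# s ~ s' iff U^s = U^{s'}, and [s] <= [s'] iff U^s contains U^{s'}.

def finite_space(S, covering):
    """Return the points of X (as frozensets U^s) and its partial order."""
    U = {}
    for s in S:
        members = [frozenset(Ui) for Ui in covering if s in Ui]
        U[s] = frozenset(S).intersection(*members)
    points = set(U.values())          # one point [s] per distinct U^s
    # [s] <= [s']  iff  U^s contains U^{s'}  (frozenset '>=' is superset)
    leq = {(p, q) for p in points for q in points if p >= q}
    return points, leq

# Example: S = {1,2,3,4}, covering U1 = {1,2,3}, U2 = {2,3,4}.
# Then U^1 = {1,2,3}, U^2 = U^3 = {2,3}, U^4 = {2,3,4}: X has 3 points.
S = [1, 2, 3, 4]
points, leq = finite_space(S, [{1, 2, 3}, {2, 3, 4}])
```

Here the poset has $[2]=[3]$ above both $[1]$ and $[4]$, mirroring the fact that $\mathcal{U}$ only distinguishes points through the intersections of its members.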
Assume now that $S$ is a path connected, locally path connected and locally simply connected topological space and let ${\mathcal U}$ be a finite covering such that the $U^s$ are simply connected. Then (Theorem \ref{fin-sp-assoc-top}):
{\bf Theorem 1.} {\sl The functors
\[\aligned \left\{\aligned \text{Locally constant sheaves}\\ \text{of abelian groups on $S$}\endaligned \right\} & \overset{\longrightarrow}\leftarrow \left\{ \aligned \text{Locally constant sheaves}\\ \text{of abelian groups on $X$}\endaligned \right\} \\ {\mathcal M} &\to \pi_*{\mathcal M} \\ \pi^*{\mathcal N} &\leftarrow {\mathcal N} \endaligned \] are mutually inverse. In other words, $\pi_1(S,s)\to \pi_1(X,\pi(s))$ is an isomorphism between the fundamental groups of $S$ and $X$. Moreover, if the $U^s$ are homotopically trivial, then $\pi\colon S\to X$ is a weak homotopy equivalence, i.e., $\pi_i(S)\to \pi_i(X)$ is an isomorphism for any $i\geq 0$. }
Now, if we take the constant sheaf ${\mathbb Z}$ on $X$, it turns out that a sheaf of abelian groups on $X$ is locally constant if and only if it is a quasi-coherent ${\mathbb Z}$-module (Theorem \ref{qc-fts}). In conclusion, the category of representations of $\pi_1(S)$ on abelian groups is equivalent to the category of quasi-coherent ${\mathbb Z}$-modules on the finite space $X$.
Assume now that $S$ is a scheme and that the $U^s$ are affine schemes (a ${\mathcal U}$ with this condition exists if and only if $S$ is quasi-compact and quasi-separated). Let ${\mathcal O}_S$ be the structural sheaf of $S$ and put ${\mathcal O}=\pi_*{\mathcal O}_S$, which is a sheaf of rings on $X$. Now the result is (Theorem \ref{schemes}):
{\bf Theorem 2.} {\sl Let $S$ be a scheme, ${\mathcal U}$ a finite covering such that the $U^s$ are affine schemes and $(X,{\mathcal O})$ the ringed finite space constructed above. The functors \[\aligned \{\text{Quasi-coherent ${\mathcal O}_S$-modules} \} & \overset{\longrightarrow}\leftarrow \{\text{Quasi-coherent ${\mathcal O}$-modules} \} \\ {\mathcal M} &\to \pi_*{\mathcal M} \\ \pi^*{\mathcal N} &\leftarrow {\mathcal N} \endaligned \] are mutually inverse, i.e., the category of quasi-coherent modules on $S$ is equivalent to the category of quasi-coherent ${\mathcal O}$-modules on $X$. }
In \cite{EstradaEnochs} it is proved that the category of quasi-coherent sheaves on a quasi-compact and quasi-separated scheme $S$ is equivalent to the category of quasi-coherent $R$-modules, where $R$ is a ring representation of a finite quiver ${\mathcal V}$. Our point of view is that the quiver ${\mathcal V}$ may be replaced by a finite topological space $X$ and the representation $R$ by a sheaf of rings ${\mathcal O}_X$. The advantage is that the equivalence between quasi-coherent modules is obtained from a geometric morphism $\pi\colon S\to X$. Thus, this point of view may be used to prove cohomological results on schemes by proving them on a finite ringed space. For example, one can prove the Theorem of formal functions, Serre's criterion of affineness, flat base change or Grothendieck's duality in the context of ringed finite spaces (where the proofs are easier) obtaining those results for schemes as a particular case. Thus, the standard hypothesis of separated or semi-separated on schemes may be replaced by the less restrictive hypothesis of quasi-separated. This will be done in a future paper.
In algebraic geometry, quasi-coherent modules and their cohomology play an important role, as locally constant sheaves do in algebraic topology. Theorems 1 and 2 tell us that, under suitable conditions, these structures are determined by a finite model. All this led us to conclude that ringed finite spaces, and quasi-coherent modules on them, are worth studying in their own right.
By a ringed finite space we mean a ringed space $(X,{\mathcal O})$ whose underlying topological space $X$ is finite, i.e., it is a finite topological space endowed with a sheaf ${\mathcal O}$ of (commutative with unit) rings. It is well known (since Alexandroff) that a finite topological space is equivalent to a finite preordered set, i.e., giving a topology on a finite set is equivalent to giving a preorder relation. Giving a sheaf of rings ${\mathcal O}$ on a finite topological space $X$ is equivalent to giving, for each point $p\in X$, a ring ${\mathcal O}_p$, and for each $p\leq q$ a morphism of rings $r_{pq}\colon {\mathcal O}_p\to{\mathcal O}_q$, satisfying the obvious relations ($r_{pp}=\Id$ for any $p$ and $r_{ql}\circ r_{pq}=r_{pl}$ for any $p\leq q\leq l$). An ${\mathcal O}$-module ${\mathcal M}$ on $X$ is equivalent to the data: an ${\mathcal O}_p$-module ${\mathcal M}_p$ for each $p\in X$ and a morphism of ${\mathcal O}_p$-modules ${\mathcal M}_p\to{\mathcal M}_q$ for each $p\leq q$ (again with the obvious relations).
The category of ringed finite spaces is a full subcategory of the category of ringed spaces and it contains (fully faithfully) the category of finite topological spaces and the category of affine schemes (see Examples \ref{ejemplos}, (1) and (2)). If $(S,{\mathcal O}_S)$ is an arbitrary ringed space (a topological space, a differentiable manifold, a scheme, etc) and we take a finite covering ${\mathcal U}=\{ U_1,\dots,U_n\}$ by open subsets, there is a natural associated ringed finite space $(X,{\mathcal O}_X)$ and a morphism of ringed spaces $S\to X$ (see Examples \ref{ejemplos}, (3)). As mentioned above, a particular interesting case is when $S$ is a quasi-compact and quasi-separated scheme and ${\mathcal U}$ is a locally affine finite covering.
In Section 3 we study the homotopy of ringed finite spaces. We see how the homotopy relation of continuous maps between finite topological spaces can be generalized to morphisms between ringed finite spaces in such a way that Stong's classification (\cite{Stong}) of finite topological spaces (via minimal topological spaces) can be generalized to ringed finite spaces (Theorem \ref{homotopic-classification}). An important fact is that the category of quasi-coherent modules on a ringed finite space is a homotopy invariant: two homotopy equivalent ringed finite spaces have equivalent categories of quasi-coherent sheaves (Theorem \ref{homotinvariance}).
The results of this paper could be formulated in terms of posets and complexes. As in \cite{Barmak}, we have preferred the topological point of view of McCord, Stong and May.
This paper is dedicated to the beloved memory of Prof. Juan Bautista Sancho Guimer{\'a}. I learned from him most of the mathematics I know, in particular the use of finite topological spaces in algebraic geometry.
\section{Preliminaries}
In this section we recall elementary facts about finite topological spaces and ringed spaces. The reader may consult \cite{Barmak} for the results on finite topological spaces and \cite{GrothendieckDieudonne} for ringed spaces.
\subsection{Finite topological spaces}
\begin{defn} A finite topological space is a topological space with a finite number of points. \end{defn}
Let $X$ be a finite topological space. For each $p\in X$, we shall denote by $U_p$ the minimum open subset containing $p$, i.e., the intersection of all the open subsets containing $p$. These $U_p$ form a minimal base of open subsets.
\begin{defn} A finite preordered set is a finite set with a reflexive and transitive relation (denoted by $\leq$). \end{defn}
\begin{thm} {\rm (Alexandroff)} There is an equivalence between finite topological spaces and finite preordered sets.
\end{thm}
\begin{proof} If $X$ is a finite topological space, we define the relation: $$p\leq q\quad\text{iff}\quad p\in \bar q \quad (\text{i.e., if } q\in U_p) $$ Conversely, if $X$ is a finite preordered set, we define the following topology on $X$: the closure of a point $p$ is $\bar p=\{ q\in X: q\leq p\}$. \end{proof}
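The two directions of Alexandroff's correspondence are effectively computable. As an illustration (ours, not part of the original text), the following Python sketch builds the minimal open sets $U_p=\{ q: p\leq q\}$ from a preorder and recovers the preorder from them:

```python
# A small sketch of Alexandroff's correspondence on a finite set.
# From a preorder we build the minimal base U_p = {q : p <= q};
# conversely, the preorder is recovered as p <= q iff q lies in U_p.

def minimal_open_sets(points, leq):
    """U_p = {q : p <= q}; these form the minimal base of the topology."""
    return {p: frozenset(q for q in points if (p, q) in leq) for p in points}

def preorder_from_opens(points, opens):
    """Recover the preorder: p <= q iff q belongs to U_p."""
    return {(p, q) for p in points for q in points if q in opens[p]}

# The "V" poset: 0 <= 1 and 0 <= 2 (plus reflexivity).
points = [0, 1, 2]
leq = {(0, 0), (1, 1), (2, 2), (0, 1), (0, 2)}
opens = minimal_open_sets(points, leq)   # U_0 = {0,1,2}, U_1 = {1}, U_2 = {2}
assert preorder_from_opens(points, opens) == leq   # round trip
```

The round trip makes the equivalence of the theorem concrete: the topology and the preorder each determine the other.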
\begin{rem} \begin{enumerate} \item The preorder relation defined above does not coincide with that of \cite{Barmak}, but with its inverse. In other words, the topology that we have associated to a preorder above is the dual of the one considered in op.cit. \item If $X$ is a finite topological space, then $U_p=\{ q\in X: p\leq q\}$. Hence $X$ has a minimum $p$ if and only if $X=U_p$. \end{enumerate} \end{rem}
A map $f\colon X\to X'$ between finite topological spaces is continuous if and only if it is monotone: for any $p\leq q$, $f(p)\leq f(q)$.
\begin{prop} A finite topological space is $T_0$ (i.e., different points have different closures) if and only if the relation $\leq$ is antisymmetric, i.e., $X$ is a partially ordered finite set (a finite poset). \end{prop}
\begin{ejem}\label{covering}{\bf (Finite topological space associated to a finite covering)}. Let $S$ be a topological space and let ${\mathcal U}=\{U_1,\dots,U_n\}$ be a finite open covering of $S$. Let us consider the following equivalence relation on $S$: we say that $s\sim s'$ if ${\mathcal U}$ does not distinguish $s$ and $s'$, i.e., if we denote $U^s=\underset{s\in U_i}\bigcap U_i$, then $s\sim s'$ iff $U^s=U^{s'}$. Let $X=S/\negmedspace\sim$ be the quotient set with the topology given by the following partial order: $[s]\leq [s']$ iff $U^s\supseteq U^{s'}$. This is a finite $T_0$-topological space, and the quotient map $\pi\colon S\to X$, $s\mapsto [s]$, is continuous. Indeed, for each $[s]\in X$, one has that $\pi^{-1}(U_{[s]})=U^s$: \[ s'\in \pi^{-1}(U_{[s]})\Leftrightarrow [s']\geq [s]\Leftrightarrow U^{s'}\subseteq U^s\Leftrightarrow s'\in U^s.\]
We shall say that $X$ is the finite topological space associated to the topological space $S$ and the finite covering ${\mathcal U}$.
This construction is functorial in $(S,{\mathcal U})$: Let $f\colon S'\to S$ be a continuous map, ${\mathcal U}$ a finite covering of $S$ and ${\mathcal U}'$ a finite covering of $S'$ that is finer than $f^{-1}({\mathcal U})$ (i.e., for each $s'\in S'$, $U^{s'}\subseteq f^{-1}(U^{f(s')})$). If $\pi\colon S\to X$ and $\pi'\colon S'\to X'$ are the associated finite spaces, one has a continuous map $X'\to X$ and a commutative diagram \[\xymatrix{ S'\ar[r]^f\ar[d]_{\pi'} & S\ar[d]^\pi\\ X'\ar[r] & X. }\] This is an easy consequence of the following:
\begin{lem} $U^{s'_1}\subseteq U^{s'_2}\Rightarrow U^{f(s'_1)}\subseteq U^{f(s'_2)}$. \end{lem} \begin{proof} $U^{s'_1}\subseteq U^{s'_2} \Rightarrow s'_1\in U^{s'_2}\subseteq f^{-1}(U^{f(s'_2)}) \Rightarrow f(s'_1)\in U^{f(s'_2)} \Rightarrow U^{f(s'_1)}\subseteq U^{f(s'_2)}.$ \end{proof}
\end{ejem}
\subsection{Generalities on ringed spaces}
\begin{defn} A ringed space is a pair $(X,{\mathcal O})$, where $X$ is a topological space and ${\mathcal O}$ is a sheaf of (commutative with unit) rings on $X$. A morphism of ringed spaces $(X,{\mathcal O})\to (X',{\mathcal O}')$ is a pair $(f,f_\#)$, where $f\colon X\to X'$ is a continuous map and $f_\#\colon {\mathcal O}'\to f_*{\mathcal O}$ is a morphism of sheaves of rings (equivalently, a morphism of sheaves of rings $f^\#\colon f^{-1}{\mathcal O}'\to {\mathcal O}$). \end{defn}
\begin{defn} Let ${\mathcal M}$ be an ${\mathcal O}$-module (a sheaf of ${\mathcal O}$-modules). We say that ${\mathcal M}$ is {\it quasi-coherent} if for each $x\in X$ there exist an open neighborhood $U$ of $x$ and an exact sequence \[ {\mathcal O}_{\vert U}^I \to {\mathcal O}_{\vert U}^J\to{\mathcal M}_{\vert U}\to 0\] with $I,J$ arbitrary index sets. Briefly, ${\mathcal M}$ is quasi-coherent if it is locally the cokernel of a morphism of free modules.
\end{defn}
Let $f\colon X\to Y$ be a morphism of ringed spaces. If ${\mathcal M}$ is a quasi-coherent module on $Y$, then $f^*{\mathcal M}$ is a quasi-coherent module on $X$.
\section{Ringed finite spaces}
Let $X$ be a finite topological space. Recall that we have a preorder relation \[ p\leq q \Leftrightarrow p\in \bar q \Leftrightarrow U_q\subseteq U_p\]
Giving a sheaf $F$ of abelian groups (resp. rings, etc) on $X$ is equivalent to giving the following data:
- An abelian group (resp. a ring, etc) $F_p$ for each $p\in X$.
- A morphism of groups (resp. rings, etc) $r_{pq}\colon F_p\to F_q$ for each $p\leq q$, satisfying: $r_{pp}=\Id$ for any $p$, and $r_{ql}\circ r_{pq}=r_{pl}$ for any $p\leq q\leq l$. These $r_{pq}$ are called {\it restriction morphisms}.
Indeed, if $F$ is a sheaf on $X$, then $F_p$ is the stalk of $F$ at $p$, and it coincides with the sections of $F$ on $U_p$. That is \[ F_p=\text{ stalk of } F \text{ at } p = \text{ sections of } F \text{ on } U_p:=F(U_p)\] The morphisms $F_p\to F_q$ are just the restriction morphisms $F(U_p)\to F(U_q)$.
\begin{ejem} Given a group $G$, the constant sheaf $G$ on $X$ is given by the data: $G_p=G$ for any $p\in X$, and $r_{pq}=\Id$ for any $p\leq q$. \end{ejem}
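The data $(F_p, r_{pq})$ and their compatibility laws are finitary, so they can be checked mechanically. The following Python sketch (our own illustration; the encoding of groups by plain functions is a simplifying assumption) verifies the identity and composition laws for the constant sheaf on the poset $0\leq 1$, $0\leq 2$:

```python
# A hedged sketch: a sheaf of abelian groups on a finite preorder given by
# stalks F_p and restriction maps r_pq : F_p -> F_q for p <= q, subject to
# r_pp = id and r_ql o r_pq = r_pl.  Stalks are modelled by Python ints
# (the group Z); the constant sheaf has all restrictions the identity.

def check_sheaf_data(points, leq, restriction, sample=7):
    """Spot-check the identity and composition laws on one element."""
    for p in points:
        assert restriction[(p, p)](sample) == sample       # r_pp = id
    for (p, q) in leq:
        for (q2, l) in leq:
            # only compose along chains p <= q <= l inside the preorder
            if q2 == q and (p, l) in leq:
                composed = restriction[(q, l)](restriction[(p, q)](sample))
                assert composed == restriction[(p, l)](sample)
    return True

# The "V" poset: 0 <= 1 and 0 <= 2 (plus reflexivity).
points = [0, 1, 2]
leq = {(0, 0), (1, 1), (2, 2), (0, 1), (0, 2)}
identity = lambda x: x
constant_sheaf = {pq: identity for pq in leq}   # the constant sheaf Z
assert check_sheaf_data(points, leq, constant_sheaf)
```

A non-constant sheaf would simply replace some of the identity maps by other group homomorphisms, and the same checks apply.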
\begin{defn} A {\it ringed finite space} is a ringed space $(X,{\mathcal O} )$ such that $X$ is a finite topological space. \end{defn}
By the previous consideration, one has a ring ${\mathcal O}_p$ for each $p\in X$, and a morphism of rings $r_{pq}\colon {\mathcal O}_p\to{\mathcal O}_q$ for each $p\leq q$, such that $r_{pp}=\Id$ for any $p\in X$ and $r_{ql}\circ r_{pq}=r_{pl}$ for any $p\leq q\leq l$.
Giving a morphism of ringed spaces $(X,{\mathcal O})\to (X',{\mathcal O}')$ between two ringed finite spaces, is equivalent to giving:
- a continuous (i.e. monotone) map $f\colon X\to X'$,
- for each $p\in X$, a ring homomorphism $f^\#_p\colon {\mathcal O}'_{f(p)}\to {\mathcal O}_p$, such that, for any $p\leq q$, the diagram (denote $p' =f(p), q'=f(q)$) \[ \xymatrix{ {\mathcal O}'_{p'} \ar[r]^{f^\#_{p}} \ar[d]_{r_{p'q'}} & {\mathcal O}_{p}\ar[d]^{r_{pq}}\\ {\mathcal O}'_{q'} \ar[r]^{f^\#_{q}} & {\mathcal O}_{q}}\] is commutative. We shall denote by $\Hom(X,Y)$ the set of morphisms of ringed spaces between two ringed spaces $X$ and $Y$.
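As an illustration (ours; modelling ring homomorphisms by plain Python functions is a simplifying assumption), the two conditions above, monotonicity of $f$ and commutativity of the squares, can be checked mechanically:

```python
# A hedged sketch: checking that data (f, f#) defines a morphism of ringed
# finite spaces (X, O) -> (X', O').  We verify that f is monotone and that
# r_pq o f#_p = f#_q o r'_{p'q'} for every p <= q, spot-checked on a sample.

def is_morphism(leq_X, leq_Y, f, f_sharp, r_X, r_Y, sample):
    # monotone: p <= q implies f(p) <= f(q)
    for (p, q) in leq_X:
        if (f[p], f[q]) not in leq_Y:
            return False
    # compatibility of f# with the restriction maps
    for (p, q) in leq_X:
        lhs = f_sharp[q](r_Y[(f[p], f[q])](sample))   # f#_q o r'_{p'q'}
        rhs = r_X[(p, q)](f_sharp[p](sample))         # r_pq o f#_p
        if lhs != rhs:
            return False
    return True

# X: two comparable points 0 <= 1; Y: one point "*".  All rings are Z and
# all maps identities, i.e. the constant situation.
leq_X = {(0, 0), (1, 1), (0, 1)}
leq_Y = {("*", "*")}
identity = lambda x: x
f = {0: "*", 1: "*"}
f_sharp = {0: identity, 1: identity}          # ring maps Z -> Z
r_X = {pq: identity for pq in leq_X}
r_Y = {("*", "*"): identity}
assert is_morphism(leq_X, leq_Y, f, f_sharp, r_X, r_Y, 5)
```

Reversing the order on the target (so that $f$ is no longer monotone) makes the check fail, as it should.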
\begin{ejems}\label{ejemplos} \item[$\,\,$(1)] {\it Punctual ringed spaces}. A ringed finite space is called punctual if the underlying topological space has only one element. The sheaf of rings is then just a ring. We shall denote by $(*,A)$ the ringed finite space with topological space $\{*\}$ and ring $A$. Giving a morphism of ringed spaces $(X,{\mathcal O})\to (*,A)$ is equivalent to giving a ring homomorphism $A\to {\mathcal O}(X)$. In particular, the category of punctual ringed spaces is equivalent to the (dual) category of rings, i.e., the category of affine schemes. In other words, the category of affine schemes is a full subcategory of the category of ringed finite spaces, precisely the full subcategory of punctual ringed finite spaces.
Any ringed space $(X,{\mathcal O})$ has an associated punctual ringed space $(*,{\mathcal O}(X))$ and a morphism of ringed spaces $\pi\colon (X,{\mathcal O})\to (*,{\mathcal O}(X))$ which is universal for morphisms from $(X,{\mathcal O})$ to punctual spaces. In other words, the inclusion functor \[i\colon \{\text{Punctual ringed spaces}\} \hookrightarrow \{\text{Ringed spaces}\}\] has a left adjoint: $(X,{\mathcal O})\mapsto (*,{\mathcal O}(X))$. For any ${\mathcal O}(X)$-module $M$, $\pi^*M$ is a quasi-coherent module on $X$. We sometimes denote $\widetilde M:=\pi^*M$.
\item[$\,\,$(2)] {\it Finite topological spaces}. Any finite topological space $X$ may be considered as a ringed finite space, taking the constant sheaf ${\mathbb Z}$ as the sheaf of rings. If $X$ and $Y$ are two finite topological spaces, then giving a morphism of ringed spaces $(X,{\mathbb Z})\to (Y,{\mathbb Z})$ is just giving a continuous map $X\to Y$. Therefore the category of finite topological spaces is a full subcategory of the category of ringed finite spaces. The (fully faithful) inclusion functor \[ \aligned \{\text{Finite topological spaces}\} &\hookrightarrow \{\text{Ringed finite spaces} \}\\ X &\mapsto (X,{\mathbb Z})\endaligned\] has a left adjoint, that maps a ringed finite space $(X,{\mathcal O})$ to $X$. Of course, this can be done more generally, removing the finiteness hypothesis: the category of topological spaces is a full subcategory of the category of ringed spaces (sending $X$ to $(X,{\mathbb Z})$), and this inclusion has a left adjoint: $(X,{\mathcal O})\mapsto X$.
\item[$\,\,$(3)] Let $(S,{\mathcal O}_S)$ be a ringed space (a scheme, a differentiable manifold, an analytic space, ...). Let ${\mathcal U}=\{U_1,\dots,U_n\}$ be a finite open covering of $S$. Let $X$ be the finite topological space associated to $S$ and ${\mathcal U}$, and $\pi\colon S\to X$ the natural continuous map (Example \ref{covering}). We have then a sheaf of rings on $X$, namely ${\mathcal O}:=\pi_*{\mathcal O}_S$, so that $\pi\colon (S,{\mathcal O}_S)\to (X,{\mathcal O})$ is a morphism of ringed spaces. We shall say that $(X,{\mathcal O})$ is the {\it ringed finite space associated to the ringed space $S$ and the finite covering ${\mathcal U}$}. This construction is functorial in $(S,{\mathcal U})$, as in Example \ref{covering}.
\item[$\,\,$(4)] {\it Quasi-compact and quasi-separated schemes}. Let $(S,{\mathcal O}_S)$ be a scheme and ${\mathcal U}=\{U_1,\dots,U_n\}$ a finite open covering of $S$. We say that ${\mathcal U}$ is {\it locally affine} if for each $s\in S$, the intersection $U^s = \underset{s\in U_i}\cap U_i$ is affine. We have the following:
\begin{prop} Let $(S,{\mathcal O}_S)$ be a scheme. The following conditions are equivalent: \begin{enumerate} \item $S$ is quasi-compact and quasi-separated. \item $S$ admits a locally affine finite covering ${\mathcal U}$. \item There exist a finite topological space $X$ and a continuous map $\pi\colon S\to X$ such that $\pi^{-1}(U_x)$ is affine for any $x\in X$. \end{enumerate} \end{prop}
\begin{proof} (1) $\Rightarrow$ (2). Since $S$ is quasi-compact and quasi-separated, we can find a finite covering $U_1,\dots, U_n$ of $S$ by affine schemes and a finite covering $\{ U_{ij}^k\}$ of $U_i\cap U_j$ by affine schemes. Let ${\mathcal U}=\{ U_i, U_{ij}^k\}$ and let us see that it is a locally affine covering of $S$. Let $s\in S$. We have to prove that $U^s$ is affine. If $s$ only belongs to one $U_i$, then $U^s=U_i$ is affine. If $s$ belongs to more than one $U_i$, let us denote $U_{ij}^s= \underset{s\in U_{ij}^k}\cap U_{ij}^k$. Since $U_{ij}^k$ are affine schemes contained in an affine scheme (for example $U_i$), one has that $U_{ij}^s$ is affine. Now, $U^s=\underset{i,j}\cap U_{ij}^s$. Put $U^s=U_{i_1j_1}^s\cap \dots \cap U_{i_nj_n}^s$. Replacing each intersection $U_{i_rj_r}^s\cap U_{i_{r+1}j_{r+1}}^s$ by $U_{i_rj_r}^s\cap U_{j_{r}i_{r+1}}^s\cap U_{i_{r+1}j_{r+1}}^s$, we may assume that $j_k=i_{k+1}$, i.e. \[ U^s=U_{i_1i_2}^s\cap U_{i_2i_3}^s\cap U_{i_3i_4}^s\cap \dots \cap U_{i_{n-1}i_n}^s\] Now, $U_{i_1i_2}^s\cap U_{i_2i_3}^s$ is affine because it is the intersection of two affine subschemes of the affine scheme $U_{i_2}$. Then $U_{i_1i_2}^s\cap U_{i_2i_3}^s\cap U_{i_3i_4}^s$ is affine because $U_{i_1i_2}^s\cap U_{i_2i_3}^s$ and $U_{i_3i_4}^s$ are affine subschemes of the affine scheme $U_{i_3}$. Proceeding this way, one concludes.
(2) $\Rightarrow$ (3). It suffices to take $X$ as the finite topological space associated to $S$ and ${\mathcal U}$.
(3) $\Rightarrow$ (1). $S$ is covered by the affine open subsets $\{ \pi^{-1}(U_x)\}_{x\in X}$, and the intersections $ \pi^{-1}(U_x)\cap \pi^{-1}(U_{x'})$ are covered by the affine open subsets $\{ \pi^{-1}(U_y)\}_{y\in U_x\cap U_{x'}}$. Hence $S$ is quasi-compact and quasi-separated.
\end{proof}
\end{ejems}
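When $S$ is itself a finite set, the passage from a covering to its associated finite space can be carried out mechanically. Example \ref{covering} is not reproduced above, so the following Python sketch (the function name is ours) assumes the standard construction: a point of $X$ is the set of indices of the covering sets containing a given $s\in S$, ordered by inclusion, so that the minimal open subset of $\pi(s)$ pulls back to $U^s$.

```python
def finite_space_of_cover(S, cover):
    """Finite T_0 space associated to a finite cover of a finite set S.

    A point is the index trace I(s) = {i : s in cover[i]}; the preorder is
    I(s) <= I(s') iff I(s) is contained in I(s'), so that the minimal open
    set of I(s) corresponds to U^s, the intersection of the cover sets
    through s."""
    points = {frozenset(i for i, U in enumerate(cover) if s in U) for s in S}
    leq = {(p, q) for p in points for q in points if p <= q}
    return points, leq
```

For $S=\{1,2,3\}$ covered by $\{1,2\}$ and $\{2,3\}$, this produces a three-point space in which the class of the point $2$ lies above the other two, matching the fact that $U^2=\{2\}$ is the smallest of the sets $U^s$.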
\subsection{Quasi-coherent modules}
Let ${\mathcal M}$ be a sheaf of ${\mathcal O}$-modules on a ringed finite space $(X,{\mathcal O})$. Thus, for each $p\in X$, ${\mathcal M}_p$ is an ${\mathcal O}_p$-module and for each $p\leq q$ one has a morphism of ${\mathcal O}_p$-modules ${\mathcal M}_p\to{\mathcal M}_q$, hence a morphism of ${\mathcal O}_q$-modules \[{\mathcal M}_p\otimes_{{\mathcal O}_p}{\mathcal O}_q\to{\mathcal M}_q\]
\begin{rem} From the natural isomorphisms \[\Hom_{{\mathcal O}_{\vert U_p}}({\mathcal O}_{\vert U_p},{\mathcal M}_{\vert U_p})=\Gamma(U_p,{\mathcal M})={\mathcal M}_p=\Hom_{{\mathcal O}_p}({\mathcal O}_p,{\mathcal M}_p)\] it follows that, in order to define a morphism of sheaves of modules ${\mathcal O}_{\vert U_p}\to{\mathcal M}_{\vert U_p}$, it suffices to define a morphism of ${\mathcal O}_p$-modules ${\mathcal O}_p\to {\mathcal M}_p$; the latter is obtained from the former by taking the stalk at $p$. \end{rem}
\begin{thm}\label{qc} An ${\mathcal O}$-module ${\mathcal M}$ is quasi-coherent if and only if for any $p\leq q$ the morphism \[{\mathcal M}_p\otimes_{{\mathcal O}_p}{\mathcal O}_q\to{\mathcal M}_q\] is an isomorphism. \end{thm}
\begin{proof} If ${\mathcal M}$ is quasi-coherent, for each point $p$ one has an exact sequence: \[ {\mathcal O}_{\vert U_p}^I\to {\mathcal O}_{\vert U_p}^J \to {\mathcal M}_{\vert U_p} \to 0.\] Taking the stalk at $q\geq p$, one obtains an exact sequence \[ {\mathcal O}_q^I\to {\mathcal O}_q^J \to {\mathcal M}_q \to 0.\] On the other hand, tensoring the exact sequence at $p$ by $\otimes_{{\mathcal O}_p}{\mathcal O}_q$ yields an exact sequence \[ {\mathcal O}_q^I\to {\mathcal O}_q^J \to {\mathcal M}_p\otimes_{{\mathcal O}_p}{\mathcal O}_q \to 0.\] The conclusion follows.
Assume now that ${\mathcal M}_p\otimes_{{\mathcal O}_p}{\mathcal O}_q\to{\mathcal M}_q$ is an isomorphism for any $p\leq q$. We resolve ${\mathcal M}_p$ by free ${\mathcal O}_p$-modules: \[ {\mathcal O}_p^I\to {\mathcal O}_p^J \to {\mathcal M}_p \to 0.\]
We have then morphisms ${\mathcal O}_{\vert U_p}^I\to {\mathcal O}_{\vert U_p}^J \to {\mathcal M}_{\vert U_p}\to 0$. In order to see that this sequence is exact, it suffices to take the stalk at $q\geq p$. Now, the sequence obtained at $q$ coincides with the one obtained at $p$ (which is exact) after tensoring by $\otimes_{{\mathcal O}_p}{\mathcal O}_q$, hence it is exact. \end{proof}
\begin{ejem} Let $(X,{\mathcal O})$ be a ringed finite space, $A={\mathcal O}(X)$ and $\pi\colon (X,{\mathcal O})\to (*,A)$ the natural morphism. We know that for any $A$-module $M$, $\widetilde M:=\pi^*M$ is a quasi-coherent module on $X$. The explicit stalkwise description of $\widetilde M$ is given by: $(\widetilde M)_x=M\otimes_A{\mathcal O}_x$. \end{ejem}
\begin{cor}\label{corqc} Let $X$ be a ringed finite space with a minimum and $A=\Gamma(X,{\mathcal O})$. Then the functors \[\aligned \{\text{Quasi-coherent ${\mathcal O}$-modules} \} & \overset{\longrightarrow}\leftarrow \{ \text{$A$-modules}\} \\ {\mathcal M} &\to \Gamma(X,{\mathcal M}) \\ \widetilde M &\leftarrow M \endaligned \] are mutually inverse. \end{cor}
\begin{proof} Let $p$ be the minimum of $X$. Then $U_p=X$ and for any sheaf $F$ on $X$, $F_p=\Gamma(X,F)$. If ${\mathcal M}$ is a quasi-coherent module, then for any $x\in X$, ${\mathcal M}_x={\mathcal M}_p\otimes_{{\mathcal O}_p}{\mathcal O}_x$. That is, ${\mathcal M}$ is uniquely determined by its stalk at $p$, i.e., by its global sections. \end{proof}
This corollary is a particular case of the invariance of the category of quasi-coherent mo\-du\-les under homotopies (see Theorem \ref{homotinvariance}), because any ringed finite space with a minimum $p$ is contractible to $p$ (Remark \ref{contractible}).
\begin{thm}\label{schemes} Let $S$ be a quasi-compact and quasi-separated scheme and ${\mathcal U}=\{ U_1,\dots, U_n\}$ a locally affine finite covering. Let $(X,{\mathcal O})$ be the finite space associated to $S$ and ${\mathcal U}$, and $\pi\colon S\to X$ the natural morphism of ringed spaces (see Examples \ref{ejemplos}, (3) and (4)). One has:
1. For any quasi-coherent ${\mathcal O}_S$-module ${\mathcal M}$, $\pi_*{\mathcal M}$ is a quasi-coherent ${\mathcal O}$-module.
2. The functors $\pi^*$ and $\pi_*$ establish an equivalence between the category of quasi-coherent ${\mathcal O}_S$-modules and the category of quasi-coherent ${\mathcal O}$-modules.
Moreover, for any open subset $U$ of $X$, the morphism $\pi^{-1}(U)\to U$ satisfies 1. and 2. \end{thm}
\begin{proof} 1. We have to prove that $(\pi_*{\mathcal M})_p\otimes_{{\mathcal O}_p}{\mathcal O}_q\to(\pi_*{\mathcal M})_q$ is an isomorphism for any $p\leq q$. This is a consequence of the following fact: if $V\subset U$ are open and affine subsets of a scheme $S$ and ${\mathcal M}$ is a quasi-coherent module on $S$, the natural map ${\mathcal M}(U)\otimes_{{\mathcal O}_S(U)}{\mathcal O}_S(V)\to{\mathcal M}(V)$ is an isomorphism.
2. Let ${\mathcal M}$ be a quasi-coherent module on $S$. Let us see that the natural map $\pi^*\pi_*{\mathcal M}\to{\mathcal M}$ is an isomorphism. Taking the stalk at $s\in S$, one is reduced to the following fact: if $U$ is an affine open subset of $S$, then for any $s\in U$ the natural map ${\mathcal M}(U)\otimes_{{\mathcal O}_S(U)}{\mathcal O}_{S,s}\to {\mathcal M}_s$ is an isomorphism.
To conclude 2., let ${\mathcal N}$ be a quasi-coherent module on $X$ and let us see that the natural map ${\mathcal N}\to\pi_*\pi^*{\mathcal N}$ is an isomorphism. Taking the stalk at $p\in X$, we have to prove that ${\mathcal N}_p\to (\pi^*{\mathcal N})(U)$ is an isomorphism, with $U=\pi^{-1}(U_p)$. Notice that $U$ is an affine open subscheme and ${\mathcal O}_S(U)={\mathcal O}_p$. Thus, it suffices to prove that, for any $s\in U$, ${\mathcal N}_p\otimes_{{\mathcal O}_p}{\mathcal O}_{S,s}\to (\pi^*{\mathcal N})(U)\otimes_{{\mathcal O}_p}{\mathcal O}_{S,s}$ is an isomorphism. Denoting $q=\pi(s)$, one has that $(\pi^*{\mathcal N})(U)\otimes_{{\mathcal O}_p}{\mathcal O}_{S,s}= (\pi^*{\mathcal N})_s={\mathcal N}_q\otimes_{{\mathcal O}_q}{\mathcal O}_{S,s}$. Since ${\mathcal N}$ is quasi-coherent, ${\mathcal N}_q={\mathcal N}_p\otimes_{{\mathcal O}_p}{\mathcal O}_q$. The conclusion follows.
Finally, these same proofs work for $\pi\colon \pi^{-1}(U)\to U$, for any open subset $U$ of $X$. \end{proof}
\begin{thm}\label{qc-fts} Let $X$ be a finite topological space (${\mathcal O}={\mathbb Z}$). A sheaf ${\mathcal M}$ of abelian groups on $X$ is quasi-coherent if and only if it is locally constant, i.e., for each $p\in X$, ${\mathcal M}_{\vert U_p}$ is (isomorphic to) a constant sheaf. If $X$ is connected, this means that there exists an abelian group $G$ such that ${\mathcal M}_{\vert U_p}=G$ for every $p$. If $X$ is not connected, the latter holds in each connected component. \end{thm}
\begin{proof} Since ${\mathcal O}$ is the constant sheaf ${\mathbb Z}$, the quasi-coherence condition $$``{\mathcal M}_p\otimes_{{\mathcal O}_p}{\mathcal O}_q\to{\mathcal M}_q \text{ is an isomorphism}"$$ is equivalent to say that the restriction morphisms ${\mathcal M}_p\to{\mathcal M}_q$ are isomorphisms, i.e., ${\mathcal M}_{\vert U_p}$ is isomorphic to a constant sheaf. \end{proof}
Now let us prove a topological analog of Theorem \ref{schemes}. First let us recall a basic result about locally constant sheaves and the fundamental group.
\noindent{\it Locally constant sheaves and the fundamental group}.
Let $S$ be a path connected, locally path connected and locally simply connected topological space and let $\pi_1(S)$ be its fundamental group. Then there is an equivalence between the category of locally constant sheaves on $S$ (with fibre type $G$, an abelian group) and the category of representations of $\pi_1(S)$ on $G$ (i.e., morphisms of groups $\pi_1(S)\to \Aut_{{\mathbb Z}-\text{mod.}} G$). In particular, $S$ is simply connected if and only if any locally constant sheaf (of abelian groups) on $S$ is constant.
Now, the topological analog of Theorem \ref{schemes} is:
\begin{thm}\label{fin-sp-assoc-top} Let $S$ be a path connected, locally path connected and locally simply connected topological space and let ${\mathcal U}=\{ U_1,\dots,U_n\}$ be a locally simply connected finite covering of $S$, i.e., for each $s\in S$, the intersection $U^s:=\underset{s\in U_i}\cap U_i$ is simply connected. Let $X$ be the associated finite topological space and $\pi\colon S\to X$ the natural continuous map. Then
1. For any locally constant sheaf ${\mathcal L}$ on $S$, $\pi_*{\mathcal L}$ is a locally constant sheaf on $X$.
2. The functors $\pi^*$ and $\pi_*$ establish an equivalence between the category of locally constant sheaves on $S$ and the category of locally constant sheaves on $X$. In other words, $\pi_1(S)\to\pi_1(X)$ is an isomorphism.
Moreover, if the $U^s$ are homotopically trivial, then $\pi\colon S\to X$ is a weak homotopy equivalence, i.e., $\pi_i(S)\to\pi_i(X)$ is an isomorphism for any $i$. \end{thm}
\begin{proof} Let us recall that, on a simply connected space, every locally constant sheaf is constant. Let $x\leq x'$ in $X$, and put $x=\pi(s)$, $x'=\pi(s')$. Then $(\pi_*{\mathcal L})_x \to(\pi_*{\mathcal L})_{x'}$ is the restriction morphism $\Gamma(U^s,{\mathcal L})\to \Gamma(U^{s'},{\mathcal L})$, which is an isomorphism because ${\mathcal L}$ is a constant sheaf on $U^s$.
If ${\mathcal L}$ is a locally constant sheaf on $S$, the natural morphism $\pi^*\pi_*{\mathcal L}\to{\mathcal L}$ is an isomorphism, since taking fibre at $s\in S$ one obtains the morphism $\Gamma(U^s,{\mathcal L})\to {\mathcal L}_s$, which is an isomorphism because ${\mathcal L}$ is a constant sheaf on $U^s$. Finally, if ${\mathcal N}$ is a locally constant sheaf on $X$, the natural map ${\mathcal N}\to \pi_*\pi^*{\mathcal N}$ is an isomorphism: taking fibre at a point $x=\pi(s)$ one obtains the morphism ${\mathcal N}_x\to \Gamma(U^s, \pi^*{\mathcal N})$, which is an isomorphism because ${\mathcal N}$ is a constant sheaf on $U_x$ (and then $\pi^*{\mathcal N}$ is a constant sheaf on $U^s$).
Finally, if the $U^s$ are homotopically trivial, then $\pi_i(S)\to\pi_i(X)$ is an isomorphism for any $i\geq 0$ by McCord's theorem (see \cite{Barmak}, Theorem 1.4.2). \end{proof}
\begin{rem} The same proof works for a more general statement: let $S$ and $T$ be path connected, locally path connected and locally simply connected topological spaces, and let $f\colon S\to T$ be a continuous map for which there exists a basis-like open (and connected) cover ${\mathcal U}$ of $T$ such that $f^{-1}(U)$ is connected and $\pi_1(f^{-1}(U))\to \pi_1(U)$ is an isomorphism for every $U\in {\mathcal U}$. Then $\pi_1(S)\to\pi_1(T)$ is an isomorphism. It is an analogue of McCord's theorem for the fundamental group. See also \cite{Quillen}, Proposition 7.6. \end{rem}
\begin{rems} \begin{enumerate} \item Theorems \ref{schemes} and \ref{fin-sp-assoc-top} are not true for non-quasi-coherent modules. For example, if $S$ is a homotopically trivial topological space and ${\mathcal U}=\{ S\}$, then the associated finite space is just a point. If $\pi^*$ were an equivalence between the categories of sheaves on $S$ and $X$, this would imply that any sheaf on $S$ is constant. This is not true unless $S$ is a point. \item Theorems \ref{schemes} and \ref{fin-sp-assoc-top} are not true for coverings that are not locally affine (resp. locally simply connected). For example, take a scheme $S$ and ${\mathcal U}=\{ S\}$. The associated finite space is just a point. Then $\pi^*$ is an equivalence between quasi-coherent modules if and only if $S$ is affine. \end{enumerate} \end{rems}
\section{Homotopy}
For this section, we shall follow the lines of \cite{Barmak}, section 1.3, generalizing them to the ringed case.
Let $(X,{\mathcal O}_X)$ and $(Y,{\mathcal O}_Y)$ be two ringed spaces and let $(X\times Y,{\mathcal O}_{X\times Y})$ be the product ringed space: the topological space $X\times Y$ is the ordinary topological product and the sheaf of rings ${\mathcal O}_{X\times Y}$ is defined as ${\mathcal O}_{X\times Y}=\pi_X^{-1}{\mathcal O}_X\otimes_{\mathbb Z}\pi_Y^{-1}{\mathcal O}_Y$, where $\pi_X$, $\pi_Y$ are the projections of $X\times Y$ onto $X$ and $Y$ respectively.
Let us denote $I=[0,1]$, the unit interval. It is a ringed space (with ${\mathcal O}_I={\mathbb Z}$). For any ringed space $(X,{\mathcal O}_X)$, the ringed space $X\times I$ is given by the topological space $X\times I$ and the sheaf of rings ${\mathcal O}_{X\times I}=\pi_X^{-1}{\mathcal O}_X$. Then, for any open subsets $U\subseteq X$ and $V\subseteq I$, ${\mathcal O}_{X\times I}(U\times V)={\mathcal O}_X(U)^{\# V}$, where $\# V$ denotes the number of connected components of $V$.
For any $t\in I$, one has a morphism of ringed spaces $i_t\colon X\to X\times I$, defined by the continuous map $i_t(x)=(x,t)$ and the identity morphism of sheaves of rings $i_t^{-1}{\mathcal O}_{X\times I}={\mathcal O}_X\to {\mathcal O}_X$.
\begin{defn} Let $f,g\colon X\to Y$ be two morphisms of ringed spaces. We say that $f$ and $g$ are {\it homotopy equivalent}, $f\sim g$, if there exists a morphism of ringed spaces $H\colon X\times I\to Y$ such that $H_0=f$ and $H_1=g$ (for any $t\in I$, $H_t\colon X\to Y$ is the composition of $i_t\colon X\to X\times I$ with $H$). \end{defn}
We can then define the homotopy equivalence between ringed finite spaces:
\begin{defn} Two ringed finite spaces $X$ and $Y$ are said to be {\it homotopy equivalent}, denoted by $X\sim Y$, if there exist morphisms $f\colon X\to Y$ and $g\colon Y\to X$ such that $g\circ f \sim \Id_X$ and $f\circ g\sim \Id_Y$. \end{defn}
Let $f,g\colon X\to Y$ be two morphisms of ringed spaces, $S$ a subspace of $X$. We leave it to the reader to define the notion of being homotopic relative to $S$ and hence the notion of a strong deformation retract.
\subsection{Homotopy of ringed finite spaces}
Let us now reduce to ringed finite spaces. Let $X$, $Y$ be finite topological spaces and $\Hom(X,Y)$ the set of continuous maps, which is a finite set. This set has a preorder, the pointwise preorder: \[ f\leq g \iff f(x)\leq g(x) \text{ for any } x\in X,\] hence $\Hom(X,Y)$ is a finite topological space.
It is easy to prove that two continuous maps $f,g\colon X\to Y$ are homotopy equivalent if and only if they belong to the same connected component of $\Hom(X,Y)$. In other words, if we denote $f\equiv g$ if either $f\leq g$ or $f\geq g$, then $f\sim g$ if and only if there exists a sequence \[ f=f_0\equiv f_1 \equiv \cdots \equiv f_n=g,\qquad f_i\in\Hom(X,Y)\]
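For finite topological spaces this criterion is directly computable. The following Python sketch (helper names are ours) enumerates the order-preserving self-maps of a finite $T_0$ space, given its preorder as a set of pairs, and partitions them into the connected components of $\Hom(X,X)$, which by the criterion above are exactly the homotopy classes.

```python
from itertools import product

def continuous_maps(X, leq):
    """All order-preserving self-maps of the finite T_0 space (X, <=),
    where leq is the set of pairs (a, b) with a <= b."""
    X = list(X)
    maps = []
    for values in product(X, repeat=len(X)):
        f = dict(zip(X, values))
        if all((f[a], f[b]) in leq for (a, b) in leq):
            maps.append(f)
    return maps

def homotopy_components(maps, leq):
    """Connected components of Hom(X, X) under the pointwise preorder:
    f and g are linked when f <= g or g <= f pointwise."""
    def comparable(f, g):
        return (all((f[x], g[x]) in leq for x in f)
                or all((g[x], f[x]) in leq for x in f))
    comps = []
    for f in maps:
        hit = [c for c in comps if any(comparable(f, g) for g in c)]
        merged = [f] + [g for c in hit for g in c]
        comps = [c for c in comps if c not in hit] + [merged]
    return comps
```

On a two-point chain all three self-maps lie in a single component (the chain is contractible), while on the four-point model of the circle (two minimal points, each below two maximal points) the identity forms a component by itself.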
Assume now that $X$ and $Y$ are ringed finite spaces and $\Hom(X,Y)$ is the set of morphisms of ringed spaces. It is no longer a finite set; however, we can still define a preorder relation:
\begin{defn} Let $f,g\colon X\to Y$ be two morphisms of ringed spaces. We say that $f\leq g$ if:
(1) $f(x)\leq g(x)$ for any $x\in X$.
(2) For any $x\in X$ the triangle \[ \xymatrix{{\mathcal O}_{f(x)}\ar[rr]^{r_{f(x)g(x)}}\ar[rd]_{f^\#_x} & & {\mathcal O}_{g(x)}\ar[ld]^{g^\#_x}\\ & {\mathcal O}_x & }\] is commutative. We shall denote by $f\equiv g$ if either $f\leq g$ or $f\geq g$. \end{defn}
\begin{rems} \label{rem}
(a) Condition (1) is equivalent to saying that for any open subset $V$ of $Y$, one has $f^{-1}(V)\subseteq g^{-1}(V)$. Thus, for any sheaf $F$ on $X$, one has the restriction morphism $F(g^{-1}(V))\to F(f^{-1}(V))$, i.e., a morphism of sheaves $ g_*F\to f_*F$. By adjunction, one has, for any sheaf $G$ on $Y$, a morphism of sheaves $f^{-1}G\to g^{-1}G$, whose stalkwise description at a point $x$ is just the restriction morphism $r_{f(x)g(x)}\colon G_{f(x)}\to G_{g(x)}$. Thus, condition (2) is equivalent to saying that the triangle
\[ \xymatrix{f^{-1}{\mathcal O}_Y\ar[rr] \ar[rd]_{f^\#} & & g^{-1}{\mathcal O}_Y\ar[ld]^{g^\#}\\ & {\mathcal O}_X & }\] is commutative, or equivalently, that the diagram \[ \xymatrix{g_*{\mathcal O}_X\ar[rr] & & f_*{\mathcal O}_X \\ & {\mathcal O}_Y\ar[ul]^{g_\#} \ar[ur]_{f_\#} & }\] is commutative.
(b) If $f(x)=g(x)$ for any $x\in X$ (i.e., $f$ and $g$ coincide as continuous maps) and $f\leq g$, then $f=g$. \end{rems}
\begin{prop} Let $f,g\colon X\to Y$ be two morphisms of ringed finite spaces. Then $f$ and $g$ are homotopy equivalent if and only if there exists a sequence: \[ f=f_0\equiv f_1 \equiv \cdots \equiv f_n=g,\qquad f_i\in\Hom(X,Y) \] \end{prop}
\begin{proof} It is a consequence of the following lemmas. \end{proof}
\begin{lem} Let $f,g\colon X\to Y$ be two morphisms between ringed finite spaces. If $f\leq g$, then $f$ is homotopy equivalent to $g$. \end{lem}
\begin{proof} Let $H\colon X\times I\to Y$ be the map defined by $$H(x,t)=\left\{ \aligned f(x),&\text{ for } t=0 \\ g(x),&\text{ for }t>0\endaligned \right. .$$ For any $y\in Y$, $f^{-1}(U_y)\subseteq g^{-1}(U_y)$, because $f(x)\leq g(x)$ for any $x\in X$. It follows that $$H^{-1}(U_y)=(f^{-1}(U_y)\times I)\cup (g^{-1}(U_y)\times (0,1]).$$ Thus $H$ is continuous. Moreover, one has the exact sequence \[ 0\to {\mathcal O}_{X\times I}(H^{-1}(U_y))\to {\mathcal O}_{X\times I} (f^{-1}(U_y)\times I)\times {\mathcal O}_{X\times I}(g^{-1}(U_y)\times (0,1]) \to {\mathcal O}_{X\times I}(f^{-1}(U_y)\times (0,1]),\] i.e., an exact sequence \[ 0\to H_*{\mathcal O}_{X\times I} \to f_*{\mathcal O}_X \times g_*{\mathcal O}_X \to f_*{\mathcal O}_X.\] By Remark \ref{rem}, (a), one obtains a morphism ${\mathcal O}_Y\to H_*{\mathcal O}_{X\times I}$. Thus $H$ is a morphism of ringed spaces, and $H_0=f$, $H_1=g$.
\end{proof}
\begin{lem} Let $H\colon X\times I\to Y$ be a morphism of ringed spaces such that $H(x,t)=H(x,t')$ for any $t,t'>0$. Then $H_0\leq H_1$. \end{lem}
\begin{proof} Let us denote $f=H_0$, $g=H_1$.
1) $f(x)\leq g(x)$ for any $x\in X$. Let $y=f(x)$. Since $H$ is continuous, there exists $\epsilon >0$ such that $H(x,t)\in U_y$ for any $t<\epsilon$. Thus $g(x)=H_t(x)\in U_y$ for $0<t<\epsilon$, i.e., $g(x)\geq f(x)$.
2) For any $y\in Y$, $H^{-1}(U_y)$ is the union of $(f^{-1}(U_y)\times I)$ and $(g^{-1}(U_y)\times (0,1])$, whose intersection is $f^{-1}(U_y)\times (0,1]$. From the commutative diagram \[ \xymatrix{{\mathcal O}_{X\times I}(H^{-1}(U_y))\ar[r]\ar[d] & {\mathcal O}_{X\times I}(f^{-1}(U_y)\times I)={\mathcal O}_X(f^{-1}(U_y))\ar[d]^{\Id} \\ {\mathcal O}_X(g^{-1}(U_y))={\mathcal O}_{X\times I}(g^{-1}(U_y)\times (0,1])\ar[r] & {\mathcal O}_{X\times I}(f^{-1}(U_y)\times (0,1])={\mathcal O}_X(f^{-1}(U_y)) }\] one obtains a commutative diagram \[ \xymatrix{ H_*{\mathcal O}_{X\times I}\ar[rr] \ar[rd] & & f_*{\mathcal O}_X \\ & g_*{\mathcal O}_X \ar[ur] & }\] and composing with the morphism ${\mathcal O}_Y\to H_*{\mathcal O}_{X\times I}$, yields a commutative diagram \[ \xymatrix{ {\mathcal O}_Y\ar[rr]^{f_\#} \ar[rd]_{g_\#} & & f_*{\mathcal O}_X \\ & g_*{\mathcal O}_X \ar[ur] & }\]
Altogether, $f\leq g$.
\end{proof}
\begin{rem}\label{contractible} Any ringed finite space $X$ with a minimum $p$ is contractible to $p$, i.e. it is homotopy equivalent to the punctual ringed space $(p,{\mathcal O}_p)$. Indeed, one has a natural morphism $i_p\colon (p,{\mathcal O}_p)\to X$. On the other hand, since $p$ is the minimum, $X=U_p$ and ${\mathcal O}_p=\Gamma(X,{\mathcal O}_X)$, and we have the natural morphism (see Examples \ref{ejemplos}, (1)) $\pi\colon X\to (p,{\mathcal O}_p)$. The composition $\pi\circ i_p$ is the identity and $i_p\circ \pi \geq \Id_X$. \end{rem}
\begin{prop}\label{homotinvarianceProp} Let $f,g\colon X\to Y$ be two morphisms of ringed finite spaces. If $f\sim g$, then, for any quasi-coherent sheaf ${\mathcal M}$ on $Y$, one has $f^*{\mathcal M}=g^*{\mathcal M}$. \end{prop}
\begin{proof} We may assume that $f\leq g$. Then, for any $x\in X$, $$(f^*{\mathcal M})_x={\mathcal M}_{f(x)}\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_x = {\mathcal M}_{f(x)}\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_{g(x)}\otimes_{{\mathcal O}_{g(x)}}{\mathcal O}_x = {\mathcal M}_{g(x)}\otimes_{{\mathcal O}_{g(x)}}{\mathcal O}_x =(g^*{\mathcal M})_x$$ where the second equality is due to the hypothesis $f\leq g$ and the third one to the quasi-coherence of ${\mathcal M}$. \end{proof}
\begin{rems} \begin{enumerate} \item Proposition \ref{homotinvarianceProp} is not true if ${\mathcal M}$ is not quasi-coherent. For example, let $X$ be a finite topological space with a minimum $p$. Then $X$ is contractible to $p$, i.e., the identity $\Id\colon X\to X$ is homotopic to the constant map $g\colon X\to X$, $g(x)=p$. If ${\mathcal M}$ is a non constant sheaf on $X$, then $\Id^*{\mathcal M}$ is not equal to $g^*{\mathcal M}$ (they are not even isomorphic), since $g^*{\mathcal M}$ is a constant sheaf. \item We do not know if Proposition \ref{homotinvarianceProp} holds for general ringed spaces. \end{enumerate} \end{rems}
The following theorem is now straightforward (and it generalizes Corollary \ref{corqc}):
\begin{thm}\label{homotinvariance} If $X$ and $Y$ are homotopy equivalent ringed finite spaces, then their categories of quasi-coherent modules are equivalent. In other words, the category of quasi-coherent modules on a ringed finite space is a homotopy invariant. \end{thm}
\begin{rem} We do not know if this theorem holds for general (non finite) ringed spaces. \end{rem}
\subsection{Homotopy classification: minimal spaces}
Here we see that Stong's homotopical classification of finite topological spaces via minimal finite spaces (\cite{Stong}) can be reproduced in the ringed context.
First of all, let us prove that any ringed finite space is homotopy equivalent to its $T_0$-associated space. Let $X$ be a ringed finite space, $X_0$ its associated $T_0$-space and $\pi\colon X\to X_0$ the quotient map. Let us denote ${\mathcal O}_0=\pi_*{\mathcal O}$. Then $(X,{\mathcal O})\to (X_0,{\mathcal O}_0)$ is a morphism of ringed spaces. The preimage $\pi^{-1}$ gives a bijection between the open subsets of $X_0$ and the open subsets of $X$. Hence, for any $x\in X$, ${\mathcal O}_x={{\mathcal O}_0}_{\pi(x)}$, and any section $s\colon X_0\to X$ of $\pi$ is continuous and a morphism of ringed spaces. The composition $\pi\circ s$ is the identity and the composition $s\circ \pi$ is homotopic to the identity, because ${\mathcal O}_x={\mathcal O}_{s(\pi(x))}$. We have then proved:
\begin{prop}\label{sdr} $(X_0,{\mathcal O}_0) \hookrightarrow (X,{\mathcal O}_X)$ is a strong deformation retract. \end{prop}
Let $X$ be a ringed finite $T_0$-space. Let us generalize the notions of up beat point and down beat point to the ringed case.
\begin{defn} A point $p\in X$ is called a {\it down beat point} if $\bar p -\{ p\}$ has a maximum. A point $p$ is called an {\it up beat point} if $U_p- \{ p\}$ has a minimum $q$ and $r_{pq}\colon {\mathcal O}_p\to{\mathcal O}_q$ is an isomorphism. In any of these cases we say that $p$ is a {\it beat point} of $X$. \end{defn}
\begin{prop}\label{beating} Let $X$ be a ringed finite $T_0$-space and $p\in X$ a beat point. Then $X- \{ p\}$ is a strong deformation retract of $X$. \end{prop}
\begin{proof} Assume that $p$ is a down beat point and let $q$ be the maximum of $\bar p- \{ p\}$. Define the retraction $r\colon X\to X-\{ p\}$ by $r(p)=q$. It is clearly continuous (order preserving). It is a morphism of ringed spaces because one has the restriction morphism ${\mathcal O}_q\to{\mathcal O}_p$. If $i\colon X-\{ p\}\hookrightarrow X$ is the inclusion, then $i\circ r\leq \Id_X$ and we are done.
Assume now that $p$ is an up beat point and let $q$ be the minimum of $U_p- \{ p\}$. Define the retraction $r\colon X\to X-\{ p\}$ by $r(p)=q$. It is order preserving, hence continuous. By hypothesis the restriction morphism ${\mathcal O}_p\to{\mathcal O}_q$ is an isomorphism, so that $r$ is a morphism of ringed spaces. Finally, $i\circ r\geq \Id_X$ and we are done. \end{proof}
\begin{defn} A ringed finite $T_0$-space is a {\it minimal} ringed finite space if it has no beat points. A {\it core} of a ringed finite space $X$ is a strong deformation retract which is a minimal ringed finite space. \end{defn}
By Propositions \ref{sdr} and \ref{beating} we deduce that every ringed finite space has a core. Given a ringed finite space $X$, one can find a $T_0$-strong deformation retract $X_0\subseteq X$ and then remove beat points one by one to obtain a minimal ringed finite space. As in the topological case, the notable property of this construction is that the core of a ringed finite space is unique up to isomorphism; moreover, two ringed finite spaces are homotopy equivalent if and only if their cores are isomorphic.
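In the purely topological case (${\mathcal O}={\mathbb Z}$), every restriction $r_{pq}$ is an isomorphism, so the extra condition on up beat points is automatic, and the core can be computed by literally removing beat points one at a time, as in the proof of Proposition \ref{beating}. A minimal Python sketch (the function name is ours), with the preorder given as a set of pairs:

```python
def core(X, leq):
    """Core of a finite T_0 space: remove beat points until none remain.
    A beat point p has either a maximum strictly below it or a minimum
    strictly above it (topological case: no condition on the rings)."""
    X = set(X)
    def is_beat(p):
        down = [q for q in X if q != p and (q, p) in leq]
        up = [q for q in X if q != p and (p, q) in leq]
        has_max = any(all((r, q) in leq for r in down) for q in down)
        has_min = any(all((q, r) in leq for r in up) for q in up)
        return has_max or has_min
    while True:
        p = next((p for p in X if is_beat(p)), None)
        if p is None:
            return X
        X.remove(p)
```

A chain collapses to a single point, while the four-point model of the circle has no beat points and is its own core, in line with the uniqueness statement above.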
\begin{thm}\label{minimal} Let $ X$ be a minimal ringed finite space. A map $f \colon X \to X$ is homotopic to the identity if and only if $f = \Id_X$. \end{thm}
\begin{proof} We may suppose that $ f \leq \Id_X$ or $f \geq \Id_X$. Assume $ f \leq \Id_X$. By Remark \ref{rem}, (b), it suffices to prove that $f(x)=x$ for any $x\in X$. If not, let $p\in X$ be minimal among the points with $f(p)\neq p$. Hence $f(p)<p$ and $f(x)=x$ for any $x<p$. Then $f(p)$ is the maximum of $\bar p-\{ p\}$ (since $x=f(x)\leq f(p)$ for any $x<p$), which contradicts that $X$ has no down beat points.
Assume now that $f\geq \Id_X$. Again, it suffices to prove that $f(x)=x$ for any $x\in X$. If not, let $p\in X$ be maximal among the points with $f(p)\neq p$. Then $f(p)>p$ and $f(x)=x$ for any $x>p$. Hence $q=f(p)$ is the minimum of $U_p-\{ p\}$. Moreover $f$ is a morphism of ringed spaces, hence it gives a commutative diagram \[ \xymatrix {{\mathcal O}_q={\mathcal O}_{f(p)} \ar[r]^{\quad f^\#_p}\ar[d]_{\Id} &{\mathcal O}_p \ar[d]^{r_{pq}}\\ {\mathcal O}_q={\mathcal O}_{f(q)}\ar[r]^{\quad f^\#_q} & {\mathcal O}_q.}\] Moreover, since $f\geq \Id_X$, the triangles \[ \xymatrix{{\mathcal O}_{p}\ar[rr]^{r_{pq}}\ar[rd]_{\Id^\#_p} & & {\mathcal O}_{q}\ar[ld]^{f^\#_p}\\ & {\mathcal O}_p & }\quad \xymatrix{{\mathcal O}_{q}\ar[rr]^{r_{qq}}\ar[rd]_{\Id^\#_q} & & {\mathcal O}_{q}\ar[ld]^{f^\#_q}\\ & {\mathcal O}_q & }\] are commutative. One concludes that $r_{pq}$ is an isomorphism and $p$ is an up beat point of $X$, a contradiction. \end{proof}
\begin{thm}\label{homotopic-classification} (Classification Theorem). A homotopy equivalence between minimal ringed finite spaces is an isomorphism. In particular the core of a ringed finite space is unique up to isomorphism and two ringed finite spaces are homotopy equivalent if and only if they have isomorphic cores. \end{thm}
\begin{proof} Let $f \colon X \to Y$ be a homotopy equivalence between minimal ringed finite spaces and let $g \colon Y \to X$ be a homotopy inverse. Then $gf = \Id_X$ and $fg = \Id_Y$ by Theorem \ref{minimal}. Thus, $f$ is an isomorphism. If $X_1$ and $X_2$ are two cores of a ringed finite space $X$, then they are homotopy equivalent minimal ringed finite spaces, and therefore, isomorphic. Two ringed finite spaces $X$ and $Y$ have the same homotopy type if and only if their cores are homotopy equivalent, but this is the case only if they are isomorphic. \end{proof}
\end{document} |
\begin{document}
\maketitle
\begin{abstract} We study the dynamics of soliton solutions to the perturbed mKdV equation $\partial_t u = \partial_x(-\partial_x^2 u -2u^3) + \epsilon V u$, where $V\in \mathcal{C}^1_b(\mathbb{R})$, $0<\epsilon\ll 1$. This type of perturbation is non-Hamiltonian. Nevertheless, via symplectic considerations, we show that solutions remain $O(\epsilon \langle t\rangle^{1/2})$ close to a soliton on an $O(\epsilon^{-1})$ time scale. Furthermore, we show that the soliton parameters can be chosen to evolve according to specific exact ODEs on the shorter, but still dynamically relevant, time scale $O(\epsilon^{-1/2})$. Over this time scale, the perturbation can impart an $O(1)$ influence on the soliton position. \end{abstract}
\section{Introduction}
We consider the modified Korteweg-de Vries (mKdV) equation with a small external potential \begin{equation} \label{E:pmkdv} \partial_t u = \partial_x(-\partial_x^2 u -2u^3) + \epsilon V u \,, \end{equation} where $0<\epsilon\ll 1$, $V\in \mathcal{C}^1_b(\mathbb{R})$, i.e. $V$ and $V'$ are continuous and bounded.
The unperturbed case of \eqref{E:pmkdv}, \begin{equation} \label{E:mkdv} \partial_t u = \partial_x(-\partial_x^2 u -2u^3) \end{equation} is globally well-posed in $H^k$ for $k\geq 1$ (see Kenig-Ponce-Vega \cite{KPV}), and possesses single soliton solutions $u(x,t) = \eta(x,a+c^2t,c)$, for $a\in\mathbb{R}$ and $c\in\mathbb{R}\,\backslash\,\{0\}$, where $\eta(x,a,c)=c Q(c(x-a))$ with $Q(x)=\operatorname{sech}(x)$ (so that $-Q+Q''+2Q^3 = 0$). The solitons are orbitally stable as solutions to the unperturbed mKdV \eqref{E:mkdv} (see \cite{Be, Bo, W2, BSS}), i.e. the solutions stay close to the soliton manifold
$$M = \{ \; \eta(x,a,c) \; | \; a\in \mathbb{R}\,, c>0 \, \}$$ if they are initially close.
Our first main result, Theorem \ref{T:main1}, shows that this type of orbital stability remains true for the \emph{structurally} perturbed mKdV \eqref{E:pmkdv}, in the following sense: solutions which start an $H_x^1$ distance $\omega$ from the soliton manifold $M$ remain within an $H_x^1$ distance $(\omega +\epsilon t^{1/2})e^{C\epsilon t}$ up to time $\epsilon^{-1}\log \epsilon^{-1}$. Our second main result, Theorem \ref{T:main}, shows that on the shorter time scale $\epsilon^{-1/2}\log \epsilon^{-1}$, we can predict the location on the soliton manifold by solving a system of two ODEs for the position parameter $a$ and the scale parameter $c$. Strong agreement between this prediction and the numerical solution of \eqref{E:pmkdv} is illustrated in Fig.~\ref{F:fig1} and Fig.~\ref{F:fig2}. We prove the global well-posedness of \eqref{E:pmkdv} in $H^1_x$, by adapting the argument of Kenig-Ponce-Vega \cite{KPV}, in Apx.~\ref{A:wp}.
The forced KdV equation \begin{equation} \label{E:fkdv} \partial_t u = \partial_x(-u_{xx}-3u^2) + \epsilon f \end{equation} is a model for free-surface shallow water flow \cite{LYW} with contributions to $f$ arising from surface pressure and bottom topography. Numerics and experiments discussed in \cite{LYW} show that this type of perturbation can affect the evolution of a single soliton by generating a procession of small solitons ahead of, and dispersive waves behind, the primary soliton.
Both \eqref{E:pmkdv} and \eqref{E:fkdv} are specific instances of a family of gKdV equations with general perturbation $$\partial_t u = \partial_x(-u_{xx}-u^p) + \epsilon f$$ for $p\in \mathbb{N}$, $p\geq 2$, and $f=f(x,t,u)$. The case $p=3$ (mKdV) is the unique member of the gKdV family that avoids a certain anomaly with the symplectic structure. Specifically, for $p=3$, one has $\partial_x^{-1}\partial_c \eta \in L^2$, but this fails for $p\neq 3$. For $p=3$, one can symplectically project onto the tangent space of the soliton manifold $M$ rather than onto a skew space. The difference between $p=3$ and $p\neq 3$ is illustrated by the fact that the local virial estimate of Martel-Merle \cite{MM1} simplifies for $p= 3$. Nevertheless, we believe that the analysis of the paper carries over in some form to $p\neq 3$ and more general $f$ of the form $f(x,t,u)$. We chose \eqref{E:pmkdv} as the mathematically simplest case in which to illustrate our method.
\subsection{Statements of main results}
\begin{theorem}[orbital stability] \label{T:main1} Let $\delta>0$ and $a_0,c_0\in\mathbb{R}$ be such that $2\delta \leq c_0 \leq (2\delta)^{-1}$. Suppose $u(x,t)$ solves \eqref{E:pmkdv} with initial data $u(x,0)$ such that
$$\omega \defeq \|u(x,0) - \eta(x,a_0,c_0)\|_{H_x^1} \lesssim \epsilon^{1/2}\,.$$ Then there exist trajectories $a(t)$ and $c(t)$ such that the following hold, where $T$ is the maximal time such that $\delta\leq c(t) \leq \delta^{-1}$ for all $0\leq t \leq T$ and $w(x,t) \defeq u(x,t) - \eta(x,a(t),c(t))$. First, we have the following bounds on the deviation $w$: \begin{equation}
\label{E:thm1-est-w} \|w\|_{L^\infty_{[0,t]}H^1_x} +
\|e^{-\alpha|x-a|}w\|_{L^2_{[0,t]}H^1_x} \leq C(\omega + \epsilon t^{1/2}) e^{C\epsilon t}\,. \end{equation} Second, we have $T \geq C^{-1}\epsilon^{-1}$ and the following estimates for the trajectories $a(t)$ and $c(t)$: \begin{equation}
\label{E:thm1-est-par} \|\dot a - c^2 - \epsilon c^{-1} \langle V\eta,
(x-a)\eta\rangle\|_{L_{[0,t]}^1\cap L_{[0,t]}^\infty} + \|\dot c -
\epsilon\langle V\eta, \eta\rangle \|_{L_{[0,t]}^1\cap L_{[0,t]}^\infty}\leq C (\omega + \epsilon t^{1/2})^2 e^{C \epsilon t}\,. \end{equation} The constants $C$ in \eqref{E:thm1-est-w}, \eqref{E:thm1-est-par}
depend on $\|V\|_{\mathcal{C}^1}$ and $\delta$; here $\alpha>0$ is a fixed positive constant. \end{theorem}
We remark that the same result holds for $c_0<0$, since $\eta(x,a,-c)=-\eta(x,a,c)$.
\begin{theorem}[exact predictive dynamics] \label{T:main} Suppose $u(x,t)$ solves \eqref{E:pmkdv} with initial data $u(x,0)$ satisfying
$$\omega \defeq \|u(x,0) - \eta(x,a_0,c_0)\|_{H_x^1} \lesssim \epsilon^{1/2}$$ where $c_0>0$. Let $(a(t),c(t))$ evolve according to the ODE system \begin{equation} \label{E:thm-dym} \left\{ \begin{aligned} & \dot a = c^2 + \epsilon c^{-1}\langle V\eta, (x-a)\eta\rangle \\ &\dot c = \epsilon\langle V\eta, \eta\rangle \end{aligned} \right. \end{equation} with initial data $a(0)=a_0$, $c(0)=c_0$. Then for $$0\leq t\leq T=\sigma\epsilon^{-1/2}\log\epsilon^{-1}\,,
\qquad\sigma=\sigma(c_0,\|V\|_{\mathcal{C}^1_b})>0\,,$$ we have the following estimates with $w(x,t)=u(x,t)-\eta(x,a(t),c(t))\,:$ \begin{equation} \label{E:thm-est-w}
\|w\|_{L^\infty_{[0,t]}H^1_x}+ \|e^{-\alpha|x-a|}w\|_{L^2_{[0,t]}H^1_x}\leq C( \omega + \epsilon t^{1/2}) e^{C\epsilon^{1/2}t}\,, \end{equation}
where $C=C(c_0,\|V\|_{\mathcal{C}^1_b})$. \end{theorem} We remark that if one selects initial data so that $\omega \lesssim \epsilon^{3/4}$, then the two terms on the right-hand side of the estimate \eqref{E:thm-est-w} balance on the $\epsilon^{-1/2}$ time scale. In this case the bound becomes $\epsilon^{3/4} e^{C\epsilon^{1/2}t}$.
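To make the role of the ODE system \eqref{E:thm-dym} concrete, here is a minimal numerical sketch (not part of the analysis) that advances $(a,c)$ by classical fourth-order Runge-Kutta. We assume the profile normalization $\eta(x,a,c)=c\,\mathrm{sech}(c(x-a))$; the function names, grid, and step size are illustrative choices of ours.

```python
import numpy as np

def eta(x, a, c):
    # soliton profile; we assume the normalization eta(x,a,c) = c*sech(c*(x-a))
    return c / np.cosh(c * (x - a))

def modulation_rhs(a, c, V, x, eps):
    # right-hand side of the effective ODE system for (a, c);
    # the pairings <V*eta, (x-a)*eta> and <V*eta, eta> are approximated
    # by Riemann sums on the uniform grid x
    dx = x[1] - x[0]
    e2 = eta(x, a, c) ** 2
    adot = c ** 2 + eps * np.sum(V(x) * (x - a) * e2) * dx / c
    cdot = eps * np.sum(V(x) * e2) * dx
    return adot, cdot

def evolve(a0, c0, V, eps, t_final, dt, x):
    # classical fourth-order Runge-Kutta time stepping for (a, c)
    a, c = a0, c0
    for _ in range(int(round(t_final / dt))):
        k1 = modulation_rhs(a, c, V, x, eps)
        k2 = modulation_rhs(a + 0.5 * dt * k1[0], c + 0.5 * dt * k1[1], V, x, eps)
        k3 = modulation_rhs(a + 0.5 * dt * k2[0], c + 0.5 * dt * k2[1], V, x, eps)
        k4 = modulation_rhs(a + dt * k3[0], c + dt * k3[1], V, x, eps)
        a += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        c += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return a, c
```

With $V\equiv 0$ this reduces to the free soliton flow $\dot a=c^2$, $\dot c=0$, which the integrator reproduces up to rounding.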
\begin{figure}
\caption{With external potential given by $V_1$ as in \eqref{E:potential1}: the top plot shows the
rescaled evolution $U(X,T)$; the bottom two plots compare the evolution of the parameters obtained
by solving the ODE system with the exact PDE evolution, i.e.\ we fit the solution to
$\eta(X,\tilde A,\tilde C)$ and plot $T$ versus $\tilde A$ and $\tilde C$, respectively.}
\label{F:fig1}
\end{figure}
\subsection{Relation to recent work}
The energy-Lyapunov based methods for proving orbital stability of solitons subject to perturbations (of the data, as opposed to the structural perturbations considered here) were developed by Benjamin \cite{Be}, Bona \cite{Bo}, Weinstein \cite{W2}, and Grillakis-Shatah-Strauss \cite{GSS1, GSS2}. In the last decade several results have emerged using the same basic framework to address the dynamics of solitons for equations subject to structural perturbations \cite{BJ, FGJS, FTY, H, HL, HPZ, HZ1, HZ2, DV, AS-S, AS, M1, M2}. The nonlinear Schr\"odinger equation (NLS) with slowly varying potential was considered by Fr\"ohlich-Gustafson-Jonsson-Sigal \cite{FGJS}, and a result of ``orbital stability'' type was obtained; however, the estimates were not strong enough to obtain ``exact predictive dynamics''. Holmer-Zworski \cite{HZ2} obtained exact predictive dynamics plus refined accuracy by adopting the conceptual perspective of symplectic projection, but also, at the technical level, by finding an appropriate distortion of the soliton manifold that enabled refined Lyapunov estimates. This ``symplectic projection plus correction term method'' has been subsequently pursued in different contexts in Datchev-Ventura \cite{DV}, Holmer-Lin \cite{HL}, Holmer-Perelman-Zworski \cite{HPZ}, and Pocovnicu \cite{P}. To treat a problem in which the perturbation gives rise to significant dispersive radiation, a different approach was employed by Holmer \cite{H}. He treated the KdV equation with a slowly varying potential, and used the Martel-Merle local virial estimate \cite{M1,M2} to supplement the energy Lyapunov estimate. In this paper, we follow this approach as well. We show the method is sufficiently robust to handle small non-Hamiltonian perturbations, which had not been considered in any of the above papers. A stochastic variant of the problem we consider has been addressed by de Bouard--Debussche \cite{deB-Deb} without the use of the local virial estimate.
Work in progress by Holmer-Setayeshgar \cite{HS} will adapt the present paper to the stochastic setting and obtain a refinement of the results of \cite{deB-Deb}.
\subsection{Numerics} To solve \eqref{E:pmkdv} numerically we adapt the method in \cite{T}, which is based on the fast Fourier transform in $x$ followed by fourth-order Runge-Kutta for the resulting ODE in $t$. We use the rescaled coordinates $X=\epsilon^{1/3}x$, $T=\epsilon t$, and consider the equation on $[-\pi,\pi)$. If $U(X,T)$ solves $$\partial_TU = \partial_X(-\partial_X^2U-2U^3)+V(X)U\,,$$ with initial data $$U(0,X) = \eta(X,A_0,C_0)
= \eta(X,\epsilon^{1/3}a_0,\epsilon^{-1/3}c_0)\,,$$ then $u(x,t)=\epsilon^{1/3}U(\epsilon^{1/3}x,\epsilon t)$ gives a solution of \eqref{E:pmkdv} on $[-\pi/\epsilon^{1/3},\pi/\epsilon^{1/3})$ with initial data $u(0,x)=\eta(x,a_0,c_0)$ and periodic boundary conditions. Fig.~\ref{F:fig1} and Fig.~\ref{F:fig2} plot the evolution of the soliton initial data (after rescaling) in the external potentials \begin{equation} \label{E:potential1} V_1=-10\cos^2(6X) + 6\sin(10X)\,, \end{equation} \begin{equation} \label{E:potential2} V_2=8\cos^2(4X) - 4\sin(2X)\,, \end{equation} respectively. Note that to examine the solution $u(x,t)$ on the time interval $0\leq t\leq C\epsilon^{-1/2}$ (or $C\epsilon^{-1}$), we should let $U(X,T)$ evolve for time $C\epsilon^{1/2}$ (or $C$).
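For concreteness, the scheme just described can be sketched as follows. This is a hedged illustration, with resolution and time step chosen by us (not taken from \cite{T}), of the Fourier pseudospectral discretization in $X$ combined with classical fourth-order Runge-Kutta in $T$ for $\partial_T U=\partial_X(-\partial_X^2 U-2U^3)+V(X)U$ on $[-\pi,\pi)$; since the dispersive term is stiff, an explicit step requires $\Delta T$ of size roughly $k_{\max}^{-3}$.

```python
import numpy as np

N = 256                                    # illustrative resolution
X = -np.pi + 2 * np.pi * np.arange(N) / N  # uniform grid on [-pi, pi)
k = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers

def rhs(U, V):
    # F[d/dX(-d^2U/dX^2)] = i k^3 F[U]; cubic and potential terms in physical space
    Uh = np.fft.fft(U)
    linear = np.real(np.fft.ifft(1j * k ** 3 * Uh))
    cubic = np.real(np.fft.ifft(1j * k * np.fft.fft(-2.0 * U ** 3)))
    return linear + cubic + V * U

def rk4(U, V, dt, nsteps):
    # classical fourth-order Runge-Kutta in T
    for _ in range(nsteps):
        k1 = rhs(U, V)
        k2 = rhs(U + 0.5 * dt * k1, V)
        k3 = rhs(U + 0.5 * dt * k2, V)
        k4 = rhs(U + dt * k3, V)
        U = U + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return U
```

With $V\equiv 0$ and soliton data $U_0=C_0\,\mathrm{sech}(C_0 X)$, the discrete $L^2$ norm (the conserved momentum $P$) should be preserved up to discretization error, which gives a basic sanity check of the implementation.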
\begin{figure}
\caption{These plots are the analogs of Fig.~\ref{F:fig1}; the external potential is given by
$V_2$ as in \eqref{E:potential2}.}
\label{F:fig2}
\end{figure}
\section{Background on Hamiltonian structure} \label{S:background}
Let $J=\partial_x$, and regard $L^2(\mathbb{R};\mathbb{R})$ as a manifold with metric $\langle v_1,v_2\rangle = \int v_1 v_2\,dx$. We define the symplectic form by \begin{equation} \label{E:symp} \omega(v_1,v_2) = \langle v_1, J^{-1} v_2 \rangle = \langle v_1, \partial_x^{-1}v_2\rangle \,, \end{equation} where $J^{-1}$ is given by $$J^{-1}f(x)=\partial_x^{-1}f(x)\defeq\frac12\left(\int_{-\infty}^x - \int_x^{+\infty}\right)f(y)\,dy\,.$$ The mKdV equation \eqref{E:mkdv} is the Hamiltonian flow associated with $$H_0(u) = \frac12\int (u_x^2-u^4)\,,$$ i.e. we can write \eqref{E:mkdv} as \begin{equation} \label{E:mkdv2}\partial_t u = JH_0'(u)\,. \end{equation} Solutions to mKdV also satisfy conservation of mass $M(u)$ and momentum $P(u)$, where $$M(u) = \int u\, dx \,, \qquad P(u) = \frac12\int u^2\,dx \,.$$ We define the 2-dimensional manifold of solitons $M$ as
$$M = \{ \, \eta(\cdot, a, c) \, | \,\,a\in \mathbb{R}, c\in \mathbb{R}\,\backslash\, \{0\}\} \,.$$
The symplectic form \eqref{E:symp} restricted to $M$ is given by $\omega|_M=da\wedge dc$. We write $\eta=\eta(\cdot,a,c)$; the dependence of $\eta$ on $(a,c)$ is always meant implicitly. The tangent space at $\eta$ is given by $$T_\eta M = \operatorname{span} \{ \, \partial_a \eta, \partial_c \eta \, \}\,.$$ Note that $JH_0'(\eta) \in T_{\eta} M$, so the flow associated with \eqref{E:mkdv} remains on $M$ if it starts there. Specifically, direct computation shows \begin{equation} \label{E:JH0-prime} JH_0'(\eta) = c^2\partial_a\eta \,, \end{equation} which, together with \eqref{E:mkdv2}, explains the form of the expression for single solitons. Equivalently, the flow \eqref{E:mkdv2} restricted to $M$ (on which it remains) is given by \begin{equation} \label{E:free-flow} \left\{ \begin{aligned} &\dot a = c^2 \\ &\dot c = 0 \end{aligned} \right. \end{equation} One can also obtain \eqref{E:free-flow} by first restricting $H_0$ to $M$ to obtain $$H_0(\eta) = -\frac13 c^3 \,,$$ and then noticing that \eqref{E:free-flow} is just the solution to the Hamilton equations of motion for $H_0(\eta)$ with respect to
$\omega|_M$: \begin{equation} \label{E:free-ODEs} \left\{ \begin{aligned} &\dot a = -\frac{\partial H_0}{\partial c} = c^2 \\ &\dot c = \frac{\partial H_0}{\partial a} = 0 \end{aligned} \right. \end{equation} Note that we can write \eqref{E:JH0-prime} as \begin{equation} \label{E:JH0-prime-2} JH_0'(\eta) + c^2 JP'(\eta)=0 \,. \end{equation} From this we conclude that $L'(\eta) =0$, where \begin{equation} \label{E:L-classical} L(u) \defeq H_0(u) + c^2 P(u) \,, \end{equation} which is the Lyapunov functional used in the classical orbital stability theory, see \cite{W2}.
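For the reader's convenience, we sketch the direct verification of \eqref{E:JH0-prime}, assuming the standard normalization $\eta(x,a,c)=c\,Q(c(x-a))$ with $Q(y)=\operatorname{sech} y$, so that $Q''=Q-2Q^3$. First,
$$H_0'(\eta) = -\eta_{xx}-2\eta^3 = -c^3\bigl(Q''+2Q^3\bigr)(c(x-a)) = -c^3\,Q(c(x-a))\,,$$
and hence
$$JH_0'(\eta) = \partial_x\bigl(-c^3\,Q(c(x-a))\bigr) = -c^4\,Q'(c(x-a)) = c^2\,\partial_a\eta\,,$$
since $\partial_a\eta = -c^2\,Q'(c(x-a))$.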
Next, we define the symplectic orthogonal projection operator at $(a,c)$, $$\Pi_{a,c}: L^2 \cong T_{\eta}L^2 \to T_\eta M\,,$$ by requiring that $$\langle \Pi_{a,c}^\perp f, J^{-1}\partial_a \eta\rangle = \langle \Pi_{a,c}^\perp f, J^{-1}\partial_c \eta\rangle =0\,,$$ where $\Pi_{a,c}^\perp = I - \Pi_{a,c}$; equivalently, $$\Pi_{a,c}f = \langle f, J^{-1}\partial_c \eta\rangle\partial_a\eta - \langle f, J^{-1}\partial_a \eta\rangle\partial_c\eta\,.$$ Note that for mKdV, $$J^{-1}\partial_a\eta = -\eta\qquad\text{and}\qquad J^{-1}\partial_c\eta = c^{-1} (x-a)\eta\,.$$
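These two identities can be checked directly, again assuming the normalization $\eta(x,a,c)=c\,Q(c(x-a))$. Since $\partial_a\eta=-\partial_x\eta$ and $\eta$ decays at $\pm\infty$,
$$J^{-1}\partial_a\eta = \partial_x^{-1}(-\partial_x\eta) = -\eta\,;$$
similarly,
$$\partial_x\bigl[(x-a)\eta\bigr] = \eta + (x-a)\partial_x\eta = c\bigl(Q + c(x-a)Q'\bigr)(c(x-a)) = c\,\partial_c\eta\,,$$
so that $J^{-1}\partial_c\eta = c^{-1}(x-a)\eta$, the symmetric antiderivative producing no additive constant because $(x-a)\eta$ decays at $\pm\infty$.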
\section{Decomposition of the flow} \label{S:eff-dyn} We can arrange the modulation parameters $a(t)$ and $c(t)$ so that $$\Pi_{a(t),c(t)}\left[u(x,t) - \eta(x,a(t),c(t))\right]=0\,.$$ This is a standard fact and we recall it in the following \begin{lemma} \label{L:orth-decomp} Given $\tilde a$, $\tilde c$, there exist
$\delta_1>0$, $C>0$, such that if $u = \eta(\cdot, \tilde a,\tilde c)+\tilde w$ with $\|\tilde w\|_{H^1_x}\leq\delta_1$, then there exist unique $a$, $c$ such that \begin{equation} \label{E:decomp}w(x,t) \defeq u(x,t) - \eta(x,a(t),c(t)) \end{equation} satisfies the symplectic orthogonality conditions \begin{equation} \label{E:orth} \langle w,J^{-1}\partial_a\eta\rangle = \langle w, J^{-1}\partial_c\eta\rangle = 0\,. \end{equation}
Moreover, $$|a-\tilde a|\leq C\|\tilde w\|_{H^1_x}\,,\ \ |c-\tilde c|\leq C\|\tilde w\|_{H^1_x}\,.$$ \end{lemma} \begin{proof} Define $\phi:\,H^1_x\times\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}^2$ by $$\phi(v,a,c) = \begin{bmatrix} \langle v-\eta,\eta\rangle \\ \langle v-\eta,(x-a)\eta\rangle \end{bmatrix} $$
Using $\omega|_M=da\wedge dc$, we compute the Jacobian matrix of $\phi$ with respect to $(a,c)$ at $(\eta(\cdot,\tilde a,\tilde c),\tilde a,\tilde c)$: $$(D_{a,c}\phi)(\eta(\cdot,\tilde a,\tilde c),\tilde a,\tilde c) = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\,, $$ which implies, by the implicit function theorem, that the equation $\phi(u,a,c)=0$ can be solved for $(a,c)$ in terms of $u$ in a neighborhood of $\eta(\cdot, \tilde a,\tilde c)$. \end{proof} Now since $u=w+\eta$ and $u$ solves \eqref{E:pmkdv}, we compute \begin{equation} \begin{aligned} \label{E:lin-w} \partial_t w& = \partial_x(-\partial_x^2w-6\eta^2w-6\eta w^2-2w^3)+\epsilon Vw -F_0\\&=\partial_x(\mathcal{L}w-c^2 w - 6\eta w^2-2w^3)+\epsilon Vw -F_0\,, \end{aligned} \end{equation} where $$\mathcal{L} = -\partial_x^2-6\eta^2+c^2\,,$$ and $F_0$ results from the perturbation and from $\partial_t$ landing on the parameters: $$F_0 = (\dot a-c^2)\partial_a\eta + \dot c\partial_c \eta - \epsilon V \eta\,.$$ Next, decompose $F_0$ into the symplectically parallel part $\Pi_{a,c} F_0$ and the symplectically orthogonal part $\Pi_{a,c}^\perp F_0$; explicitly, \begin{equation} \label{E:F0-paralel} \Pi_{a,c} F_0 = (\dot a - c^2 - \epsilon \langle V\eta, J^{-1}\partial_c\eta\rangle)\partial_a\eta + (\dot c + \epsilon \langle V\eta,J^{-1}\partial_a\eta\rangle)\partial_c\eta\,, \end{equation} \begin{equation} \label{E:F0-perp} \Pi_{a,c}^\perp F_0 = -\epsilon V\eta + \epsilon \langle V\eta, J^{-1}\partial_c\eta\rangle\partial_a\eta - \epsilon \langle V\eta,J^{-1}\partial_a\eta\rangle\partial_c\eta\,. \end{equation} We now obtain the equations for the parameters: \begin{lemma}[effective dynamics] \label{L:ODEs} Given $V\in\mathcal{C}^1_b$, suppose that $w$ defined by \eqref{E:decomp} satisfies the orthogonality conditions \eqref{E:orth}. Then there exists $\alpha>0$ such that \begin{equation}
\label{E:eff-dyn} \| \partial_t \eta - c^2\partial_a\eta -
\epsilon\Pi_{a,c}(V\eta) \|_{T_{a,c}M} \lesssim
\|e^{-\alpha|x-a|}w\|_{H^1}^2 +
\epsilon\|e^{-\alpha|x-a|}w\|_{H^1}\,. \end{equation} Explicitly, \begin{equation}
\label{E:eff-dyn2} \begin{aligned}&\left|\dot a - c^2 - \epsilon\langle V\eta, J^{-1}\partial_c\eta\rangle\right| \lesssim
\|e^{-\alpha|x-a|}w\|_{H^1}^2
+ \epsilon\|e^{-\alpha|x-a|}w\|_{H^1}\,,\\
&\left|\dot c + \epsilon\langle V\eta, J^{-1}\partial_a\eta\rangle\right|
\lesssim \|e^{-\alpha|x-a|}w\|_{H^1}^2 +
\epsilon\|e^{-\alpha|x-a|}w\|_{H^1}\,. \end{aligned} \end{equation} \end{lemma} As all norms on a finite dimensional space are equivalent, we can take
$$\| \alpha \partial_a \eta + \beta \partial_c \eta \|_{T_{a,c}M} = |\alpha| + |\beta|\,.$$ \begin{proof} Recall that $$\partial_t w = JH_0''(\eta)w - J(6\eta w^2 + 2w^3) +\epsilon V w - F_0\,.$$ Write $\mathcal{R}$ for the error terms of the same order as the right-hand side of \eqref{E:eff-dyn}. Taking the derivative with respect to $t$ of $\langle w, J^{-1}\partial_a\eta\rangle$, we have \begin{equation} \label{E:der-a} \begin{aligned} \ &0=\langle\partial_tw,J^{-1}\partial_a\eta\rangle + \langle w,J^{-1}\partial_a\partial_t\eta\rangle\\ =& -\langle F_0,J^{-1}\partial_a\eta\rangle + \langle JH_0''(\eta)w,J^{-1}\partial_a\eta\rangle + \langle w,J^{-1}\partial_a\partial_t\eta\rangle + \mathcal{R} \\=& -\langle F_0,J^{-1}\partial_a\eta\rangle + \langle w, J^{-1}\partial_a(\partial_t\eta-JH_0'(\eta))\rangle+\mathcal{R} \\= &-\langle F_0,J^{-1}\partial_a\eta\rangle + \langle w, J^{-1}\partial_a(\Pi_{a,c}F_0)\rangle + \mathcal{R}\,, \end{aligned} \end{equation} where for the penultimate equality we have used $J^*J^{-1}=-I$ and the self-adjointness of $H_0''$, and for the last that $$\partial_t\eta-JH_0'(\eta) = (\dot a - c^2)\partial_a\eta + \dot c\,\partial_c\eta = \Pi_{a,c}F_0 + O(\epsilon)\partial_a\eta + O(\epsilon)\partial_c\eta\,.$$
Taking the derivative of $\langle w, J^{-1}\partial_c\eta\rangle$, a similar computation gives $$0=-\langle F_0,J^{-1}\partial_c\eta\rangle + \langle w, J^{-1}\partial_c(\Pi_{a,c}F_0)\rangle + \mathcal{R}\,.$$ Combining with \eqref{E:der-a}, and applying the orthogonality conditions to the second terms when $\partial_a$ and $\partial_c$ land on the coefficients of $\Pi_{a,c}F_0$, the lemma follows from Cauchy-Schwarz and the smallness of $w$. \end{proof}
\section{Local virial estimate} \label{S:virial}
In this section we review, and then apply, part of the local virial estimates due to Martel-Merle. Let $\Phi\in\mathcal{C}(\mathbb{R})$, $\Phi(x)=\Phi(-x)$, $\Phi^\prime\leq0$ on $(0,\infty)$, be such that $$\Phi(x)=1 \text{ on }[0,1]\,,\qquad\Phi(x)=e^{-x} \text{ on }[2,\infty)\,,\qquad e^{-x} \leq\Phi(x)\leq 3e^{-x}\text{ on }[0,\infty)\,.$$ Let $\Psi(x)=\int_0^x\Phi(y)\,dy$, and for $A\gg1$ set $\Psi_A(x)=A\Psi(x/A)$. We have the following \begin{lemma} [Martel-Merle \cite{MM1,MM2} local virial spectral estimate] \label{L:virial}There exist $A$ sufficiently large and $\lambda_0$ sufficiently small such that if $w$ satisfies the orthogonality conditions \eqref{E:orth}, then \begin{equation*} -\langle\Psi_A(x-a)w,\partial_x\mathcal{L}w\rangle\geq
\lambda_0\int(w_x^2+w^2)e^{-|x-a|/A}\,dx\,. \end{equation*} \end{lemma} Writing $\psi(\cdot)$ for $\Psi_A(\cdot-a)$, we now proceed as in \cite{MM1}: \begin{lemma}[local virial estimate] \label{L:app-virial} Suppose $V$ is bounded. Then there exist $\alpha>0$ and $\kappa_j>0$, $j=1,2$, such that if $w$ solves \eqref{E:lin-w} and satisfies the orthogonality conditions \eqref{E:orth}, then \begin{equation}
\label{E:app-virial-diff} \|e^{-\alpha|x-a|}w\|^2_{H^1_x}\leq -\kappa_1\partial_t\int\psi w^2\,dx
+\kappa_2\epsilon^2+\kappa_2\epsilon\|w\|^2_{H^1_x}\,. \end{equation} \end{lemma} \begin{proof} From the equation for $\partial_t w$, we have \begin{align*} \partial_t\int\psi w^2 = & -\dot a\int\psi^\prime w^2 + 2\int\psi w\partial_t w\\ =& -\dot a\int\psi^\prime w^2 + 2\int\psi w\partial_x(\mathcal{L}w) &&\leftarrow\text{I}+\text{II}\\ &- 2c^2\int\psi w\partial_x w - 12\int\psi w\partial_x(\eta w^2)&&\leftarrow\text{III}+\text{IV} \\ &- 4\int\psi w\partial_x(w^3) + 2\epsilon\int\psi Vw^2 - 2\int\psi wF_0 &&\leftarrow\text{V}+\text{VI}+\text{VII} \end{align*} Using integration by parts, $$\text{III} = c^2\int\psi^\prime w^2\,,$$ hence \begin{equation}
\label{E:v10} |\text{I}+\text{III}|=|-(\dot a-c^2)\int\psi^\prime w^2|\lesssim
\epsilon\|w\|^2_{H^1_x}+\|e^{-\alpha|x-a|}w\|_{H^1_x}^2\|w\|^2_{H^1_x} \end{equation} by \eqref{E:eff-dyn2}. From the boundedness of $\psi$ and
$V$ and the estimate $\|w\|_{L^\infty_x}\lesssim\|w\|_{H^1_x}$, we obtain \begin{equation} \label{E:v20} \begin{aligned}
|\text{IV}|&\lesssim\|e^{-\alpha|x-a|}w\|_{H^1_x}^2\|w\|_{H^1_x}\,,\\
|\text{V}|=\big|3\int\psi' w^4\big|&\lesssim \|w\|_{H^1_x}^2\|e^{-|x-a|/(2A)}w\|_{L^2_x}^2\,,\\
|\text{VI}|&\lesssim \epsilon\|w\|_{H^1_x}^2\,, \end{aligned} \end{equation} where for the second estimate we have used $\psi'=\Phi((x-a)/A)$ and the definition of $\Phi$. Decomposing the term VII as $$\text{VII} =-2\int\psi w\Pi F_0 - 2\int\psi w\Pi^\perp F_0 = \text{VIIA}+\text{VIIB}\,,$$ we have by Lemma \ref{L:ODEs} that \begin{equation}
\label{E:v30} \text{VIIA}\lesssim \epsilon\|w\|^2_{H^1_x} +
\|e^{-\alpha|x-a|}w\|_{H^1_x}^2\|w\|_{H^1_x}\,, \end{equation} and by $\Pi^\perp F_0\sim \epsilon\eta$ (see \eqref{E:F0-perp}) that for any $\mu>0$, \begin{equation} \label{E:v40} \text{VIIB} \lesssim
\epsilon\|e^{-\alpha|x-a|}w\|_{H^1_x} \lesssim \mu^{-1}\epsilon^2 + \mu \| e^{-\alpha|x-a|}w\|_{H_x^1}^2\,. \end{equation} Note that in the above estimates the value of $\alpha$ may change from line to line, but we can choose a single sufficiently small $\alpha$ that works for all of them.
By Lemma \ref{L:virial}, we have $$\text{II} = 2\langle\psi w,\partial_x(\mathcal{L}w)\rangle\leq
-\lambda_0\int(w_x^2+w^2)e^{-|x-a|/A}\,dx\,.$$ Combining with \eqref{E:v10}, \eqref{E:v20}, \eqref{E:v30} and \eqref{E:v40}, the estimate \eqref{E:app-virial-diff} follows by the smallness of
$\|w\|_{H^1_x}$, taking $A$ large enough so that $1/(2A)<\alpha$, and $\mu>0$ suitably small. \end{proof}
\section{Energy estimate} \label{S:energy}
In this section we formulate the energy estimate needed to control the error term $w$. Recall $\mathcal{L} = -\partial_x^2-6\eta^2+c^2$. Let $$\mathcal{E} = \frac12\langle\mathcal{L}w,w\rangle-2\int\eta w^3\,dx-\frac12\int w^4\,dx\,.$$
Note that $\mathcal{L}=H''_0(\eta)+c^2=L''(\eta)$, see \eqref{E:JH0-prime-2} and \eqref{E:L-classical}. We have classical coercivity properties for $\mathcal{L}$ (for a proof, see e.g. \cite[Prop $2.9$]{W1} or \cite[Prop $4.1$]{HZ1} for a more direct proof -- note that $\mathcal{L}$ is the operator $L_+$ considered there): \begin{lemma}[energy spectral estimate] \label{L:coer} Suppose that $w$ satisfies the orthogonality condition \eqref{E:orth}. Then \begin{equation}
\label{E:coer} \langle \mathcal{L}w,w\rangle \gtrsim \|w_x\|_{L^2}^2 + c^2\|w\|_{L_x^2}^2\,. \end{equation} \end{lemma}
Since we impose a lower bound on $c$ in Theorem \ref{T:main1}, it follows from \eqref{E:coer} that if $\|w\|_{H_x^1}$ is smaller than some ($\epsilon$-independent) constant, then
$$\|w(t) \|_{H_x^1}^2 \sim \mathcal{E}(t)\,.$$
\begin{lemma}[energy estimate] \label{L:energy} Suppose that $V\in\mathcal{C}_b^1$, $\delta_0>0$ and $w(x,t)$ are given, such that $\delta_0<c(t)<\delta_0^{-1}$, $w$ solves \eqref{E:lin-w}, and $w$ satisfies the orthogonality conditions \eqref{E:orth}. Then \begin{equation} \label{E:energy}
|\partial_t\mathcal{E}|\lesssim\epsilon\|w\|^2_{H^1_x} +
\epsilon\|e^{-\alpha|x-a|} w\|_{H^1_x} + \|w\|_{H^1_x}^2
\|e^{-\alpha|x-a|}w\|_{H^1_x}^2 + \|w\|_{H^1_x}^6\,, \end{equation} where the implicit constant depends on $\delta_0$ and the bounds on $V$ and $V'$. \end{lemma} \begin{proof} We compute \begin{align*}
\partial_t\mathcal{E} = &\langle\mathcal{L} w,\partial_t w\rangle + \dot c c\|w\|_{L^2_x}^2-6\langle(\dot a\partial_a\eta + \dot c\partial_c\eta)\eta w, w\rangle&&\leftarrow\text{I} + \text{II} +\text{III}\\ & -\langle\partial_t w,6\eta w^2+2w^3\rangle-2\langle (\dot a\partial_a\eta+\dot c\partial_c\eta),w^3\rangle &&\leftarrow\text{IV}+\text{V} \end{align*} Substitute \eqref{E:lin-w} into I: \begin{align*} \text{I} &= \langle\mathcal{L}w,\partial_x(\mathcal{L}w)\rangle-c^2\langle\mathcal{L}w,\partial_xw\rangle - \langle\mathcal{L}w,\partial_x(6\eta w^2 +2w^3)\rangle +\langle\mathcal{L}w,\epsilon Vw\rangle -\langle\mathcal{L}w,F_0\rangle\\ &=\text{IA}+\text{IB}+\text{IC}+\text{ID}+\text{IE} \end{align*} First, $\text{IA} = 0$. Integration by parts yields $\text{IB} = -6c^2\langle\eta\eta_x,w^2\rangle$. By the boundedness of $V$ and $V'$,
$$\text{ID} \lesssim \epsilon\|w\|_{H^1_x}^2\,,$$ and since $\mathcal{L}\partial_a\eta=0$ and $\mathcal{L}\partial_c\eta=-2c\,\eta$ (by direct computation), the self-adjointness of $\mathcal{L}$ together with the orthogonality conditions \eqref{E:orth} gives $$\text{IE} = -\langle\mathcal{L}w, \Pi F_0\rangle -\langle\mathcal{L}w,\Pi^\perp F_0\rangle=-\langle\mathcal{L}w,\Pi^\perp F_0\rangle\,,$$ but by \eqref{E:F0-perp}
$$|\langle\mathcal{L}w,\Pi^\perp F_0\rangle |\lesssim \epsilon
\|e^{-\alpha|x-a|}w\|_{H^1_x}\,,$$ hence
$$|\text{IE}|\lesssim\epsilon
\|e^{-\alpha|x-a|}w\|_{H^1_x}\,.$$ Combining, we obtain \begin{align} \label{E:e10} \text{I} &= \text{IB} + \text{IC} +
O\left(\epsilon\|w\|_{H^1_x}^2+\epsilon\|e^{-\alpha|x-a|}w\|_{H^1_x}\right)\\ \notag&= -6c^2\langle\eta\eta_x,w^2\rangle - \langle\mathcal{L}w,\partial_x(6\eta w^2 +2w^3)\rangle +
O\left(\epsilon\|w\|_{H^1_x}^2+\epsilon\|e^{-\alpha|x-a|}w\|_{H^1_x}\right) \,. \end{align} Substituting \eqref{E:lin-w} into IV, we have \begin{equation} \label{E:IC+IV}\text{IC}+\text{IV} = -\left\langle\partial_x(-c^2w-6\eta w^2-2w^3) + \epsilon Vw - F_0\,, 6\eta w^2 + 2w^3\right\rangle\,. \end{equation} By \eqref{E:eff-dyn2}, we have \begin{equation}
\label{E:est-dot-ac}|\dot a-c^2|\lesssim\epsilon +
\|e^{-\alpha|x-a|}w\|_{H^1_x}^2\,,\qquad |\dot c|\lesssim\epsilon +
\|e^{-\alpha|x-a|}w\|_{H^1_x}^2\,, \end{equation} hence
$$|\langle F_0\,, 6\eta w^2 +
2w^3\rangle|\lesssim\epsilon\|w\|_{H^1_x}^2 +
\|w\|_{H^1_x}^2\|e^{-\alpha|x-a|}w\|_{H^1_x}^2\,.$$ Note $$-\langle\partial_x(-c^2 w), 6\eta w^2 + 2w^3\rangle = -2c^2\int\eta' w^3\,dx\,.$$
Estimating the rest of the terms in \eqref{E:IC+IV} using Cauchy-Schwarz and that $\|w\|_{L^\infty_x}\lesssim\|w\|_{H^1_x}$, we obtain \begin{equation} \label{E:e20}\text{IC}+\text{IV} = -2c^2\langle\eta',w^3\rangle +
O(\epsilon\|w\|_{H^1_x}^2 +
\|e^{-\alpha|x-a|}w\|_{H^1_x}^2\|w\|_{H^1_x}^2 + \|w\|_{H^1_x}^6)\,. \end{equation} By \eqref{E:est-dot-ac} again, and that $\partial_x\eta=-\partial_a\eta$, we have \begin{equation} \label{E:e30} \text{II}+\text{V} = 2\dot a\langle\eta', w^3\rangle +
O(\epsilon\|w\|_{H^1_x}^2 +
\|e^{-\alpha|x-a|}w\|_{H^1_x}^2\|w\|_{H^1_x}^2)\,, \end{equation} and \begin{equation} \begin{aligned}
\label{E:e40} \text{IB}+\text{III} =& 6(\dot a-c^2)\langle\eta\eta_x,w^2\rangle-6\langle\dot c(\partial_c\eta)\eta,w^2\rangle\\\lesssim&\epsilon\|w\|_{H^1_x}^2 +
\|e^{-\alpha|x-a|}w\|_{H^1_x}^2\|w\|_{H^1_x}^2\,. \end{aligned} \end{equation}
Applying \eqref{E:est-dot-ac} again to the sum of \eqref{E:e20} and \eqref{E:e30}, and then combining with \eqref{E:e10} and \eqref{E:e40}, we obtain \eqref{E:energy}.
\end{proof}
\section{Proof of the main theorems} \label{S:pf-thm}
First, we give the proof of Theorem \ref{T:main1}.
Let $[0,T']$ be the maximal time interval so that \begin{equation} \label{E:bs-assump}
\|w(t)\|_{H_x^1} \leq \mu \langle t \rangle^{-1/4} \qquad \text{for all } 0\leq t\leq T'\,, \end{equation} for $\mu>0$ chosen small enough to ensure the validity of Lemmas \ref{L:orth-decomp}, \ref{L:ODEs}, \ref{L:app-virial}, and \ref{L:energy}, and also small enough to beat some constants in the estimates that follow (as explained below).
Let
$$\mathcal{V}(t) \defeq \int_0^t \|e^{-\alpha |x-a(s)|} w(s) \|_{H_x^1}^2 \, ds \,, \qquad \mathcal{F}(t)
\defeq \sup_{0\leq s \leq t} \|w(s) \|_{H_x^1}^2 \,.$$ Integrating the local virial estimate \eqref{E:app-virial-diff} gives \begin{equation} \label{E:loc-vir-bs} \mathcal{V}(t) \lesssim \mathcal{F}(t) + \epsilon^2 t + \epsilon\int_0^t \mathcal{F}(s) \,ds \,. \end{equation} Integrating \eqref{E:energy} over $0\leq t\leq \tau$ yields $$\mathcal{E}(\tau ) \leq \mathcal{E}(0) + \epsilon\int_0^\tau \mathcal{F}(s) \, ds + \epsilon \tau^{1/2}\mathcal{V}(\tau)^{1/2} + \mathcal{F}(\tau)\mathcal{V}(\tau) + \tau\mathcal{F}(\tau)^3 \,.$$
Using that $\mathcal{E}(\tau)\sim \|w(\tau)\|_{H_x^1}^2$, and then taking the sup of the above estimate over $0\leq \tau \leq t$, we obtain $$\mathcal{F}(t) \lesssim \mathcal{F}(0)+ \epsilon \int_0^t \mathcal{F}(s) \,ds + \epsilon t^{1/2} \mathcal{V}(t)^{1/2} + \mathcal{F}(t)\mathcal{V}(t) + t \mathcal{F}(t)^3\,.$$ By \eqref{E:bs-assump} and the estimate $\epsilon t^{1/2}\mathcal{V}(t)^{1/2} \leq \mu^{-1}\epsilon^2 t + \mu \mathcal{V}(t)$, this implies $$\mathcal{F}(t) \lesssim \epsilon \int_0^t \mathcal{F}(s) \,ds + \mathcal{F}(0)+ \mu^{-1}\epsilon^2 t + \mu \mathcal{V}(t)\,.$$ Substituting \eqref{E:loc-vir-bs} here, and taking $\mu$ (introduced in \eqref{E:bs-assump} above) small enough to beat the implicit constants, we obtain \begin{equation} \label{E:step100} \mathcal{F}(t) \lesssim \epsilon \int_0^t \mathcal{F}(s) \,ds + \mathcal{F}(0) + \epsilon^2 t \,. \end{equation} Hence, for some $\kappa>0$, $$\frac{d}{dt} \left(e^{-\kappa \epsilon t} \int_0^t \mathcal{F}(s) \,ds \right) \leq e^{-\kappa \epsilon t} (\mathcal{F}(0)+\epsilon^2 t)\,.$$ Integrating yields $$ \int_0^t \mathcal{F}(s) \,ds \lesssim (e^{\kappa \epsilon t}-1)(\epsilon^{-1}\mathcal{F}(0) + 1)\,.$$ Substituting this back into \eqref{E:step100}, $$\mathcal{F}(t) \lesssim e^{\kappa \epsilon t}\mathcal{F}(0) + \epsilon( (e^{\kappa \epsilon t}-1) + \epsilon t)\,.$$ For the second term, we may bound $(e^{\kappa \epsilon t}-1) + \epsilon t \lesssim \epsilon t e^{\kappa \epsilon t}$, so $$\mathcal{F}(t) \lesssim e^{\kappa \epsilon t}(\mathcal{F}(0)+ \epsilon^2 t)\,.$$ This enables us to reach time $\sigma \epsilon^{-1}\log \epsilon^{-1}$, for $\sigma>0$ small, while still reinforcing the bootstrap assumption \eqref{E:bs-assump}. Returning to \eqref{E:loc-vir-bs}, we obtain the bound for $\mathcal{V}(t)$, thus completing the proof of \eqref{E:thm1-est-w}. The $L^1_{[0,T]}$ estimates \eqref{E:thm1-est-par} follow from integrating \eqref{E:eff-dyn2} in time and applying \eqref{E:thm1-est-w}.
The $L^\infty_{[0,T]}$ estimates also follow from \eqref{E:eff-dyn2} by dropping the spatial localization in the terms on the right-hand side of \eqref{E:eff-dyn2} and applying the bound on
$\|w\|_{L_{[0,T]}^\infty H_x^1}$ given by \eqref{E:thm1-est-w}.
Now we discuss the proof of Theorem \ref{T:main}.
Let $\tilde a$, $\tilde c$ solve the ODE system \begin{equation*} \left\{ \begin{aligned} &\dot {\tilde a} - \tilde c^2 - \epsilon \tilde c^{-1}\langle V\tilde\eta, (x-\tilde a)\tilde\eta\rangle = 0 \\ &\dot {\tilde c} - \epsilon\langle V\tilde\eta, \tilde\eta\rangle = 0 \end{aligned} \right. \end{equation*} with initial data $\tilde a(0)=a_0$, $\tilde c(0) = c_0$, where
$\tilde \eta=\tilde c Q(\tilde c(x-\tilde a))$. Since $|\dot c|,\
|\dot{\tilde c}|\lesssim\epsilon$, we can assume $\delta_0<c,\ \tilde c<\delta_0^{-1}$ on $[0,T]$. Define $$\bar a = a - \tilde a\,,\ \ \bar c = c - \tilde c\,.$$ We have $$\langle V\eta, (x-a)\eta\rangle - \langle V\tilde\eta, (x-\tilde a)\tilde\eta\rangle = \beta_1(a-\tilde a) + \beta_2(c-\tilde c)\,,$$ where we have defined \begin{equation*} \begin{aligned} &\beta_1 = \frac{1}{a-\tilde a}\int\left(V(\frac{x}{\tilde c}+ a) - V(\frac{x}{\tilde c} + \tilde a)\right) x\eta^2\,dx\,,\\ &\beta_2 = \frac{1}{c-\tilde c}\int\left(V(\frac{x}{c}+ a) - V(\frac{x}{\tilde c} + a)\right)x\eta^2\,dx\,. \end{aligned} \end{equation*} Similarly, $$\frac{1}{c}\langle V\eta,\eta\rangle - \frac{1}{\tilde c}\langle V\tilde\eta, \tilde\eta\rangle = \gamma_1(a-\tilde a) + \gamma_2(c-\tilde c)\,,$$ where \begin{equation*} \begin{aligned} &\gamma_1 = \frac{1}{a-\tilde a}\int\left(V(\frac{x}{\tilde c}+ a) - V(\frac{x}{\tilde c} + \tilde a)\right) \eta^2\,dx\,,\\ &\gamma_2 = \frac{1}{c-\tilde c}\int\left(V(\frac{x}{c}+ a) - V(\frac{x}{\tilde c} + a)\right)\eta^2\,dx\,. \end{aligned} \end{equation*} Denote by $\mathcal{R}_1$, $\mathcal{R}_2$ the error terms in Lemma \ref{L:ODEs}, i.e. \begin{equation*} \left\{ \begin{aligned} &\dot { a} - c^2 - \epsilon c^{-1}\langle V \eta, (x- a) \eta\rangle -\mathcal{R}_1 = 0 \\ &\dot { c} - \epsilon\langle V \eta, \eta\rangle - \mathcal{R}_2 = 0\,. \end{aligned} \right. \end{equation*} Applying \eqref{E:thm1-est-par} to \eqref{E:eff-dyn2}, we obtain \begin{equation}
\label{E:est-R12} \|\mathcal{R}_j\|_{L_{[0,t]}^1} \leq C(\omega + \epsilon t^{1/2})^2 e^{C\epsilon^{1/2}t} \,,\ \ j=1\,,2\,. \end{equation} Note that $$\frac{\dot c}{c}-\frac{\dot{\tilde c}}{\tilde c}=\frac{\dot{\bar c}}{c}-\frac{\dot{\tilde c}}{c\tilde c}\bar c\,,$$ and $$c\dot a - \tilde c\dot{\tilde a} = c\dot{\bar a} + (c-\tilde c)\dot{\tilde a}\,;$$ denoting $$\theta_1 = \frac{1}{c}\left[(c^2+{\tilde c}^2+c\tilde c) - ({\tilde c}^2+\epsilon{\tilde c}^{-1} \langle V\tilde\eta,(x-\tilde a)\tilde\eta\rangle) + \epsilon\beta_2\right]\,,$$ and $$\theta_2=\frac{1}{\epsilon}\frac{\dot{\tilde c}}{\tilde c} =\frac{1}{\tilde c}\langle V\tilde\eta,\tilde\eta\rangle\,,$$ we obtain the equation for $(\bar a,\bar c)$: \begin{equation} \label{E:eqn-bar-ac} \begin{bmatrix} \bar a\\ \bar c \end{bmatrix}^\prime = \begin{bmatrix} \epsilon\beta_1 c^{-1} & \theta_1 \\ \epsilon c\gamma_1 & \epsilon(\theta_2+c\gamma_2) \end{bmatrix} \begin{bmatrix} \bar a\\ \bar c \end{bmatrix} + \begin{bmatrix} \mathcal{R}_1\\ \mathcal{R}_2 \end{bmatrix}\,. \end{equation} Write $$A(t) = \begin{bmatrix} \epsilon\beta_1 c^{-1} & \theta_1 \\ \epsilon c\gamma_1 & \epsilon(\theta_2+c\gamma_2) \end{bmatrix}\,. $$ From the boundedness of $\beta_j$, $\gamma_j$, $\theta_j$, $j=1,2$, which follows from the boundedness of $V$, $V'$, $c$ and $\tilde c$, we have the entrywise estimate \begin{equation}
|A(t)| \lesssim \begin{bmatrix} \epsilon & 1\\ \epsilon & \epsilon \end{bmatrix}\,. \end{equation} Write $p(t)=(\epsilon \bar a^2+\bar c^2)^{1/2}$; then by the above estimate
\begin{align*}|\dot p| &\lesssim\frac{1}{p}\left[\epsilon |\bar a|(\epsilon |\bar a|+|\bar c|+|\mathcal{R}_1|) + |\bar c|(\epsilon |\bar a|+\epsilon |\bar c|+|\mathcal{R}_2|)\right]\\&\lesssim\frac{1}{p}\left[\epsilon(\epsilon
\bar a^2+\bar c^2) + \epsilon^{1/2}(\epsilon\bar a^2+\bar c^2)+\epsilon|\bar a| |\mathcal{R}_1|+|\bar c||\mathcal{R}_2|\right]
\\&\lesssim \epsilon^{1/2}p+\epsilon^{1/2}|\mathcal{R}_1|+|\mathcal{R}_2|\,. \end{align*}
By Gronwall and $p(0)=0$, we obtain $$p(t)\leq Ce^{C\epsilon^{1/2}t}\int_0^t\left(\epsilon^{1/2}|\mathcal{R}_1|+|\mathcal{R}_2|\right)(s) \,ds\,.$$ Applying \eqref{E:est-R12}, we obtain $$p(t)\leq Ce^{C\epsilon^{1/2}t}(\omega+\epsilon t^{1/2})^2\,,$$ recalling the bounds on $t$ and $\omega$ in Theorem~\ref{T:main}, this gives $$p(t)\leq C \epsilon^{1/2}(\omega+\epsilon t^{1/2})e^{C\epsilon^{1/2}t}\,.$$ The bounds on $\bar a$ and $\bar c$ now follow from the definition of $p$:
$$|\bar a|\leq C(\omega+\epsilon t^{1/2}) e^{C\epsilon^{1/2}t}\,,$$
$$|\bar c|\leq C \epsilon^{1/2}(\omega+\epsilon t^{1/2}) e^{C\epsilon^{1/2}t}\,.$$ Comparing the above two estimates with \eqref{E:thm-est-w}, we conclude the proof of Theorem \ref{T:main}.
\begin{remark} \label{R:r10} The $\epsilon^{-1/2}$ constraint on the time scale stems from the fact that the eigenvalues of $\begin{bmatrix} \epsilon & 1\\ \epsilon &\epsilon\end{bmatrix}$ are only of order $\epsilon^{1/2}$. \end{remark}
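The eigenvalue computation behind this remark is immediate:

```latex
\det\begin{bmatrix} \epsilon-\lambda & 1\\ \epsilon & \epsilon-\lambda \end{bmatrix}
  =(\epsilon-\lambda)^2-\epsilon=0
  \quad\Longrightarrow\quad
  \lambda_{\pm}=\epsilon\pm\epsilon^{1/2}\,,
```

so $|\lambda_\pm|\sim\epsilon^{1/2}$ for small $\epsilon$, and the Gronwall factor $e^{|\lambda_\pm|t}$ stays bounded precisely on time scales $t\lesssim\epsilon^{-1/2}$.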
\appendix \section{Local and global well-posedness} \label{A:wp}
The global well-posedness for gKdV in the energy space was obtained by Kenig-Ponce-Vega in \cite{KPV}, where they introduced new and powerful local smoothing and maximal function estimates; in particular, they proved local well-posedness for \eqref{E:mkdv} in $H^s(\mathbb{R})$ for $s\geq 1/4$. To prove well-posedness for \eqref{E:pmkdv} at the $H^1$ level of regularity, the full strength of these estimates is not needed; we follow the presentation of \cite{HPZ}, Appendix A, and make the necessary modifications.
Let $Q_n=[n-\tfrac12, n+\tfrac12]$, and $\tilde Q_n=[n-1,n+1]$. An example of notation is:
$$\|u\|_{\ell_n^\infty L^2_T L^2_{Q_n}} =
\sup_n\|u\|_{L^2_{[0,T]}L^2_{Q_n}}\,.$$ Note that, since the intervals $\tilde Q_n$ have finite overlap, we have
$$\|u\|_{\ell_n^\infty L^2_T L^2_{Q_n}}\sim \|u\|_{\ell_n^\infty L^2_T L^2_{\tilde Q_n}}\,.$$We omit the $\epsilon$ in \eqref{E:pmkdv}, and consider \begin{equation} \label{E:pmkdv2} \partial_t u = \partial_x(-\partial_x^2 u -2u^3) + V u\,, \qquad V\in \mathcal{C}^1_b\,. \end{equation} As in \cite{HPZ}, we first prove a local smoothing estimate and a maximal function estimate (weak versions), by an integrating factor method: \begin{lemma} \label{L:ls-mf} Suppose that \begin{equation} \label{E:linear-part} v_t + v_{xxx} - V v = f\,, \end{equation} then there exists $C>0$, such that if
$$T\leq C(1+\|V\|_{L^\infty_x})^{-1}\,,$$ we have the energy and local smoothing estimates \begin{equation}
\label{E:ls} \|v\|_{L^\infty_T L^2_x} + \|v_x\|_{\ell_n^\infty L^2_T L^2_{Q_n}}\lesssim \|v_0\|_{L^2_x} +
\left\{\begin{aligned}&\|\partial_x^{-1}f\|_{\ell_n^1 L^2_T L^2_{Q_n}}\\&\|f\|_{L^1_TL^2_x}\end{aligned}\right. \end{equation} and the maximal function estimate \begin{equation}
\label{E:mf} \|v\|_{\ell_n^2 L^\infty_T L^2_{Q_n}} \lesssim
\|v_0\|_{L^2_x} + T^{1/2}\|v\|_{L^2_TH^1_x} +
T^{1/2}\|f\|_{L^2_TL^2_x}\,. \end{equation} The implicit constants are independent of $V$. \end{lemma} \begin{proof} Let $\phi(x)=-\tan^{-1}(x-n)$, and set $w(x,t)=e^{\phi(x)}v(x,t)$. By \eqref{E:linear-part}, $$\partial_t w + w_{xxx} - 3\phi' w_{xx} +3(-\phi''+(\phi')^2)w_x+(-\phi'''+3\phi''\phi'-(\phi')^3)w-V w=e^{\phi}f\,;$$ integrating its product with $2w$ over $x$,
$$\partial_t\|w\|_{L^2_x}^2 = -6\langle\phi',w_x^2\rangle + \langle 2\phi'''+2(\phi')^3,w^2\rangle + 2\langle V,w^2\rangle+2\langle e^{\phi}f,w\rangle\,;$$ integrating this identity over $[0,T]$, and using $\phi'(x)=-\langle x-n\rangle^{-2}$, we obtain \begin{align*}
\|w(T)&\|^2_{L^2_x} + 6\|\langle x-n\rangle^{-1}w_x\|_{L^2_T L^2_x}^2\\&\leq
\|w_0\|^2_{L^2_x} + C_1 T(1+\|V\|_{L^\infty_x})\|w\|^2_{L^\infty_T L^2_x} + C_1\int_0^{T}\left|\int e^{\phi} f w\,dx\right|\,dt\,, \end{align*} for some constant $C_1>0$. Replacing $T$ by $t$ and taking the supremum over $t\in [0,T]$, we obtain, for $T\leq
\tfrac12C_1^{-1}(1+\|V\|_{L^\infty_x})^{-1}$, the estimate
$$\|w\|^2_{L^\infty_T L^2_x} + \|\langle x-n\rangle^{-1}w_x\|_{L^2_T L^2_x}^2\lesssim \|w_0\|^2_{L^2_x} + \int_0^{T}\left|\int e^{\phi} f w\,dx\right|\,dt\,.$$ Since $0<e^{-\pi/2}\leq e^{\phi(x)}\leq e^{\pi/2}$, we can convert the above estimate back to an estimate for $v$:
$$\|v\|^2_{L^\infty_T L^2_x} + \|v_x\|_{L^2_T L^2_{Q_n}}^2\lesssim \|v_0\|^2_{L^2_x} + \int_0^{T}\left|\int e^{2\phi} f v\,dx\right|\,dt\,.$$ Estimating as
$$\int_0^{T}\left|\int e^{2\phi} f v\,dx\right|\,dt\lesssim \|f\|_{L^1_T L^2_x}\|v\|_{L^\infty_T L^2_x}\,,$$ and then taking the supremum in $n$ yields the second estimate in \eqref{E:ls}. Estimating instead as
$$\begin{aligned}\int_0^{T}\left|\int e^{2\phi} f v\,dx\right|\,dt
&= \int_0^{T}\left|\int(\partial_x^{-1}f)\,
\partial_x(e^{2\phi} v)\,dx\right|\,dt \\&\leq \sum_m\|\partial_x^{-1}
f\|_{L^2_T L^2_{Q_m}}\|\langle\partial_x\rangle v\|_{L^2_T L^2_{Q_m}}\\&\leq
\|\partial_x^{-1}f\|_{\ell_m^1 L^2_T L^2_{Q_m}}\|\langle\partial_x\rangle v\|_{\ell_m^\infty L^2_T L^2_{Q_m}} \end{aligned}$$ and then taking the supremum in $n$ yields the first estimate in \eqref{E:ls}.
For the proof of estimate \eqref{E:mf}, take $\phi(x)=1$ on $[n-\tfrac12,n+\tfrac12]$ and $\phi(x)=0$ outside $[n-1,n+1]$, set $w =\phi v$, and argue as above.
\end{proof} Using estimates in the above lemma, we can prove: \begin{theorem}[local well-posedness in $H^1_x$] \label{T:lwp} Suppose that \begin{equation}
\label{D:M} M\defeq \|V\|_{L^\infty_x}+\|V'\|_{L^\infty_x} <\infty\,. \end{equation} For any $R\geq 1$ and any $$T\lesssim\min(M^{-1}, R^{-2})\,,$$ the following hold: \begin{enumerate}
\item If $\|u_0\|_{H^1}\leq R$, there exists a solution $u(t)\in
\mathcal{C}([0,T];H^1_x)$ to \eqref{E:pmkdv2} on $[0,T]$ with initial data
$u_0(x)$ satisfying
$$\|u\|_{L^\infty_T H^1_x} +\|u_{xx}\|_{\ell^\infty_n L^2_T
L^2_{Q_n}}\lesssim R\,.$$
\item This solution $u(t)$ is unique among all solutions in
$\mathcal{C}([0,T];H^1_x)$.
\item The data-to-solution map $u_0 \mapsto u(t)$ is continuous
as a mapping $H^1 \rightarrow \mathcal{C}([0,T];H^1_x)$. \end{enumerate} \end{theorem} \begin{proof} We prove the existence by contraction in the space $X$, where
$$X = \{\,u\,|\, \|u\|_{\mathcal{C}([0,T];H^1_x)} + \|u_{xx}\|_{\ell^\infty_n L^2_T
L^2_{Q_n}} + \|u\|_{\ell^2_n L^\infty_T
L^2_{Q_n}}\leq CR\,\}\,,$$ where the constant $C$ is chosen large enough (say, $10$ times larger) to exceed the implicit constants in Lemma~\ref{L:ls-mf}. Given $u\in X$, let $\varphi(u)$ denote the solution to \begin{equation} \label{E:psi-u} \partial_t\varphi(u) + \partial_x^3\varphi(u) - V\varphi(u) = -2\partial_x(u^3)\,, \end{equation} with initial condition $\varphi(u)(0) = u_0$. A fixed point $\varphi(u)=u$ in $X$ will solve \eqref{E:pmkdv2}.
The local smoothing estimate \eqref{E:ls} applied to $v=\varphi(u)$ and the estimate
$$\|(u^3)_x\|_{L^1_T L^2_x}\lesssim T\|u\|_{L^\infty_T H^1_x}^3$$ give the estimate \begin{equation}
\label{E:ls-bd1} \|\varphi(u)\|_{L^\infty_T L^2_x}\lesssim
\|u_0\|_{H^1_x} + T\|u\|_{L^\infty_T H^1_x}^3\,. \end{equation} The maximal function estimate \eqref{E:mf} applied to $v=\varphi(u)$ and the estimate
$$\|(u^3)_x\|_{L^2_T L^2_x}\lesssim T^{1/2}\|u\|_{L^\infty_T H^1_x}^3$$ imply the estimate \begin{equation}
\label{E:mf-bd} \|\varphi(u)\|_{\ell_n^2 L^\infty_T L^2_{Q_n}}
\lesssim \|u_0\|_{L^2_x} + T\|\varphi(u)\|_{L^\infty_T H^1_x} +
T\|u\|_{L^\infty_T H^1_x}^3\,. \end{equation}
Now applying $\partial_x$ to \eqref{E:psi-u}, and denoting $v=\varphi(u)_x$ instead: $$v_t + v_{xxx} - V v = -2(u^3)_{xx} + V'\varphi(u)\,.$$ By \eqref{E:ls} again, \begin{equation}
\label{E:psi-u-ls} \|\varphi(u)_x\|_{L^\infty_T L^2_x} +
\|\varphi(u)_{xx}\|_{\ell_n^\infty L^2_T L^2_{Q_n}}\lesssim
\|u_0\|_{H^1_x} + \|(u^3)_x\|_{\ell_n^1 L^2_T L^2_{Q_n}}+\|V'\varphi(u)\|_{L^1_TL^2_x}\,. \end{equation} Applying the Gagliardo--Nirenberg inequality to $\phi(x)u$, where $\phi(x)=1$ on $[n-\tfrac12,n+\tfrac12]$ and $\phi(x)=0$ outside $[n-1,n+1]$, we obtain (writing $Q$ for $Q_n$ and $\tilde Q$ for $\tilde Q_n$ in what follows):
$$\|u\|_{L^\infty_Q}^2\lesssim (\|u\|_{L^2_{\tilde Q}} + \|u_x\|_{L^2_{\tilde Q}})\|u\|_{L^2_{\tilde Q}}\,,$$ hence
$$\|(u^3)_x\|_{L^2_Q}\lesssim
\|u_x\|_{L^2_Q}\|u\|_{L^\infty_Q}^2\lesssim
\|u_x\|_{L^2_Q}\|u\|_{L^2_{\tilde Q}}(\|u\|_{L^2_{\tilde Q}} +
\|u_x\|_{L^2_{\tilde Q}})\,.$$ Taking $L^2_T$ norm and applying the H\"{o}lder inequality, we obtain
$$\|(u^3)_x\|_{L^2_T L^2_Q} \lesssim
\|u_x\|_{L^\infty_T L^2_Q}\|u\|_{L^\infty_T L^2_{\tilde Q}}(\|u\|_{L^2_TL^2_{\tilde Q}} + \|u_x\|_{L^2_TL^2_{\tilde Q}})\,.$$ Taking $\ell^1_n$ norm and applying the H\"{o}lder inequality again yields
$$\|(u^3)_x\|_{\ell^1_nL^2_T L^2_Q} \lesssim
\|u_x\|_{\ell^\infty_nL^\infty_T L^2_Q}\|u\|_{\ell^2_nL^\infty_T L^2_{\tilde Q}}(\|u\|_{\ell^2_nL^2_TL^2_{\tilde Q}} +
\|u_x\|_{\ell^2_nL^2_TL^2_{\tilde Q}})\,.$$ Using the bounds
$\|u_x\|_{\ell^\infty_nL^\infty_T L^2_Q}\lesssim \|u_x\|_{L^\infty_T L^2_x}$,
$$\|u\|_{\ell^2_nL^2_TL^2_{\tilde Q}}\lesssim
\|u\|_{L^2_TL^2_x}\lesssim T^{1/2}\|u\|_{L^\infty_TL^2_x}$$ and
$$\|u_x\|_{\ell^2_nL^2_TL^2_{\tilde Q}}\lesssim
\|u_x\|_{L^2_TL^2_x}\lesssim T^{1/2}\|u_x\|_{L^\infty_TL^2_x}\,,$$ we obtain
$$\|(u^3)_x\|_{\ell_n^1 L^2_T L^2_{Q_n}}\lesssim T^{1/2}\|u\|_{L^\infty_T H^1_x}^2\|u\|_{\ell_n^2 L^\infty_T L^2_{Q_n}}\,.$$ Inserting this into \eqref{E:psi-u-ls} yields \begin{equation} \label{E:ls-bd2} \begin{aligned}
\|\varphi(u)_x\|&_{L^\infty_T L^2_x} + \|\varphi(u)_{xx}\|_{\ell_n^\infty L^2_T L^2_{Q_n}}\\&\lesssim \|u_0\|_{H^1_x} + T^{1/2}\|u\|_{L^\infty_T H^1_x}^2\|u\|_{\ell_n^2 L^\infty_T L^2_{Q_n}} +
T\|V'\|_{L^\infty_x}\|\varphi(u)\|_{L^\infty_T L^2_x}\,. \end{aligned} \end{equation}
Summing \eqref{E:ls-bd1}, \eqref{E:mf-bd} and \eqref{E:ls-bd2}, we obtain that $\|\varphi(u)\|_X\leq CR$ if $\|u\|_X\leq CR$ provided $T\leq C_0\min(M^{-1}, R^{-2})$, with $C_0$ small enough. Thus $\varphi:\ X\rightarrow X$. A similar argument establishes that $\varphi$ is a contraction on $X$.
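The contraction step is only sketched in the text; schematically (with constants suppressed, and using the factorization $u^3-v^3=(u^2+uv+v^2)(u-v)$), the difference $\varphi(u)-\varphi(v)$ solves the linear equation \eqref{E:linear-part} with $f=-2\partial_x(u^3-v^3)$ and zero initial data, so the same estimates give

```latex
\|\varphi(u)-\varphi(v)\|_X
  \lesssim \big(T^{1/2}+T\big)\,R^2\,\|u-v\|_X
  \leq \tfrac12\,\|u-v\|_X
\quad\text{for } T\leq C_0\min(M^{-1},R^{-2})\,,\ C_0 \text{ small.}
```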
Now suppose $u,v\in\mathcal{C}([0,T];H^1_x)$ solve \eqref{E:pmkdv2}. By \eqref{E:mf}, \begin{equation} \label{E:uv-mf}
\begin{aligned}\|u\|_{\ell_n^2 L^\infty_T L^2_{Q_n}}&\lesssim\|u_0\|_{L^2_x} + T\|u\|_{L^\infty_T H^1_x} +
T\|u\|_{L^\infty_T H^1_x}^3\,,\\
\|v\|_{\ell_n^2 L^\infty_T L^2_{Q_n}}&\lesssim\|v_0\|_{L^2_x} +
T\|v\|_{L^\infty_T H^1_x} + T\|v\|_{L^\infty_T H^1_x}^3\,. \end{aligned} \end{equation} Set $w=u-v$. Then, with $g=(u^3-v^3)/(u-v)=u^2+uv+v^2$, we have \begin{equation*} w_t+w_{xxx}+2(gw)_x-Vw=0\,. \end{equation*} Applying \eqref{E:ls} with $w_x$ in place of $v$, we obtain \begin{equation}
\label{E:w-ls} \|w_x\|_{L^\infty_T L^2_x}\lesssim\|(gw)_x\|_{\ell_n^1 L^2_T L^2_{Q_n}} + \|V'w\|_{L^1_T L^2_x}\,. \end{equation}
The terms of $\|(gw)_x\|_{\ell_n^1 L^2_T L^2_{Q_n}}$ can be bounded in the following manner: \begin{align}
\label{E:apx-100}\|u_x v w\|_{\ell_n^1 L^2_T L^2_{Q_n}} &\lesssim
\|u_x\|_{\ell_n^\infty L^\infty_T L^2_{Q_n}}\|vw\|_{\ell_n^1 L^2_T L^\infty_{Q_n}}\\
\notag &\lesssim\|u_x\|_{\ell_n^\infty L^\infty_T L^2_{Q_n}}(\|vw\|_{\ell_n^1 L^2_T L^1_{Q_n}}+\|(vw)_x\|_{\ell_n^1 L^2_T L^1_{Q_n}})\,. \end{align} The term in parentheses is bounded by
$$\|v\|_{\ell_n^2 L^2_T L^2_{Q_n}}\|w\|_{\ell_n^2 L^\infty_T L^2_{Q_n}} + \|v_x\|_{\ell_n^2 L^2_T L^2_{Q_n}}\|w\|_{\ell_n^2 L^\infty_T L^2_{Q_n}} + \|v\|_{\ell_n^2 L^\infty_T L^2_{Q_n}}\|w_x\|_{\ell_n^2 L^2_T L^2_{Q_n}}$$ which by \eqref{E:uv-mf} and
$$\|u_x\|_{\ell_n^\infty L^\infty_T L^2_{Q_n}}\lesssim\|u\|_{L^\infty_T H^1_x}\,,\qquad \|v\|_{\ell_n^2 L^2_T L^2_{Q_n}}\lesssim T^{1/2}\|v\|_{L^\infty_T L^2_x}$$ implies
$$\|u_x v w\|_{\ell_n^1 L^2_T L^2_{Q_n}} \lesssim_{\|u\|_{L^\infty_T H^1_x},\|v\|_{L^\infty_T H^1_x}}T^{1/2}(\|w\|_{\ell_n^2 L^\infty_T L^2_{Q_n}} + \|w\|_{L^\infty_T H^1_x})\,.$$ The same bounds hold for the other terms in $\|(gw)_x\|_{\ell_n^1 L^2_T L^2_{Q_n}}$; combined with $\|V'w\|_{L^1_T L^2_x}\lesssim T\|V'\|_{L^\infty_x}\|w\|_{L^\infty_T H^1_x}$, this establishes the estimate
$$\|w_x\|_{L^\infty_T L^2_x}\lesssim T^{1/2}(\|w\|_{\ell_n^2 L^\infty_T L^2_{Q_n}} + \|w\|_{L^\infty_T H^1_x})\,,$$ where the implicit constant depends on $\|u\|_{L^\infty_T H^1_x}$ and
$\|v\|_{L^\infty_T H^1_x}$. The same estimate follows for
$\|w\|_{L^\infty_T L^2_x}$ by applying \eqref{E:ls} with $w$ in place of $v$. Hence \begin{equation}
\label{E:apx-bdd-w1} \|w\|_{L^\infty_T H^1_x}\lesssim T^{1/2}(\|w\|_{\ell_n^2 L^\infty_T L^2_{Q_n}} + \|w\|_{L^\infty_T H^1_x})\,. \end{equation} On the other hand, applying \eqref{E:mf} with $w$ in place of $v$ yields \begin{equation}
\label{E:apx-200} \|w\|_{\ell_n^2 L^\infty_T L^2_{Q_n}}\lesssim T\|w\|_{L^\infty_T H^1_x} \end{equation} since e.g.
$$\|uvw_x\|_{L^2_TL^2_x}\lesssim T^{1/2}\|w\|_{L^\infty_T H^1_x}\|u\|_{L^\infty_T H^1_x}\|v\|_{L^\infty_T H^1_x}$$ which can be proved by the same method as in \eqref{E:apx-100}, and thus
$\|(gw)_x\|_{L^2_TL^2_x}\lesssim T^{1/2}\|w\|_{L^\infty_T H^1_x}$. Substituting \eqref{E:apx-200} into \eqref{E:apx-bdd-w1} implies $w\equiv0$ for $T$ sufficiently small, which establishes the uniqueness of solutions in $\mathcal{C}([0,T];H^1_x)$. The continuity of the data-to-solution map can be proved by a similar argument. \end{proof} We now prove global well-posedness in $H^1$ by (almost) conservation laws. \begin{theorem}[global well-posedness] Suppose $M<\infty$, where $M$ is defined in \eqref{D:M}. Then for every $u_0\in H^1$ there is a unique global solution $u\in C_{loc}([0,\infty);H^1_x)$ to \eqref{E:pmkdv2}, with
$\|u\|_{L^\infty_T H^1_x}$ controlled by $\|u_0\|_{H^1}$, $T$ and $M$. \end{theorem} \begin{proof} First, note that by the Gagliardo--Nirenberg inequality
$\|u\|_{L^4}^4\lesssim \|u\|_{L^2}^3 \|u_x\|_{L^2}$, we have
$$\|u_x\|_{L^2}^2 - \|u\|_{L^2}^3 \|u_x\|_{L^2} \leq H_0(u) \leq
\|u_x\|_{L^2}^2\,.$$ Applying the Peter--Paul inequality to the
$\|u\|_{L^2}^3 \|u_x\|_{L^2}$ term, in the form $\|u\|_{L^2}^3 \|u_x\|_{L^2}\leq \tfrac12\|u_x\|_{L^2}^2+\tfrac12\|u\|_{L^2}^6$, gives
$$\|u_x\|_{L^2}^2 + \|u\|_{L^2}^6\sim H_0(u) + \|u\|_{L^2}^6\,.$$ Suppose $u$ solves \eqref{E:pmkdv2}; then \begin{equation} \label{E:der-H0}
\begin{aligned}&\ \ \ \left|\frac{d}{dt} H_0(u)\right| = \left|\langle H_0'(u),JH_0'(u)+Vu\rangle\right| = \left|\langle H_0'(u),Vu\rangle\right|\\&\lesssim M(\|u_x\|_{L^2}^2 + \|u\|_{L^2}^2 +
\|u\|_{L^4}^4)\lesssim M(\|u_x\|_{L^2}^2 + \|u\|_{L^2}^2 +
\|u\|_{L^2}^3 \|u_x\|_{L^2})\\& \lesssim M(\|u_x\|_{L^2}^2 +
\|u\|_{L^2}^2 + \|u\|_{L^2}^6)\lesssim M(H_0(u)+ \|u\|_{L^2}^2 +
\|u\|_{L^2}^6)\,. \end{aligned} \end{equation} On the other hand, by \begin{equation*}
\left|\frac{d}{dt} P(u)\right| = \left|\langle u , Vu\rangle\right|\lesssim MP(u)\,, \end{equation*}
and the Gronwall inequality, we obtain a bound on $\|u\|_{L^\infty_T L^2_x}$ in terms of $\|u_0\|_{L^2}$ and $M$. Combining this with \eqref{E:der-H0} and applying Gronwall again, we obtain a bound on
$H_0(u)$ and hence on $\|u\|_{H^1_x}$. \end{proof} \begin{remark} Global well-posedness in $H^k_x$ for $k\geq 1$ can in fact be proved, provided $V\in\mathcal{C}^k_b$, by similar arguments. \end{remark}
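Schematically (constants suppressed), the two Gronwall applications in the proof above read:

```latex
% mass: |dP/dt| \lesssim M P gives
\|u(t)\|_{L^2}^2 \leq e^{CMt}\,\|u_0\|_{L^2}^2\,;
% energy: with B(t) := \|u(t)\|_{L^2}^2 + \|u(t)\|_{L^2}^6 now controlled,
% |dH_0/dt| \lesssim M(H_0 + B) gives
|H_0(u(t))| \leq e^{CMt}\Big(|H_0(u_0)| + CM\int_0^t B(s)\,ds\Big)\,.
```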
\end{document}
\begin{document}
\title[Associated forms: current progress and open problems]{Associated forms:
\\ current progress and open problems}\xdef\@thefnmark{}\@footnotetext{{\bf Mathematics Subject Classification:} 13A50, 14L24, 32S25.}\xdef\@thefnmark{}\@footnotetext{{\bf Keywords:} associated forms, isolated hypersurface singularities, the Mather-Yau theorem, classical invariant theory, Geometric Invariant Theory, contravariants of homogeneous forms.}
\author[Isaev]{Alexander Isaev}
\address{Mathematical Sciences Institute\\ Australian National University\\ Canberra, Acton, ACT 2601, Australia} \email{alexander.isaev@anu.edu.au}
\maketitle
\thispagestyle{empty}
\pagestyle{myheadings}
\begin{abstract} Let $d\ge 3$, $n\ge 2$. The object of our study is the morphism $\Phi$, introduced in earlier articles by J. Alper, M. Eastwood and the author, that assigns to every homogeneous form of degree $d$ on ${\mathbb C}^n$ for which the discriminant $\Delta$ does not vanish a form of degree $n(d-2)$ on the dual space called the associated form. This morphism is $\mathop{\rm SL}\nolimits_n$-equivariant and is of interest in connection with the well-known Mather-Yau theorem, specifically, with the problem of explicit reconstruction of an isolated hypersurface singularity from its Tjurina algebra. Letting $p$ be the smallest integer such that the product $\Delta^p\Phi$ extends to the entire affine space of degree $d$ forms, one observes that the extended map defines a contravariant. In the present paper we survey known results on the morphism $\Phi$, as well as the contravariant $\Delta^p\Phi$, and state several open problems. Our goal is to draw the attention of complex analysts and geometers to the concept of the associated form and the intriguing connection between complex singularity theory and invariant theory revealed through it. \end{abstract}
\section{Introduction}\label{intro} \setcounter{equation}{0}
In this paper we discuss a curious connection between complex singularity theory and classical invariant theory proposed in \cite{EI} and further explored in \cite{AI1}, \cite{AI2}, \cite{AIK}, \cite{F}, \cite{FI}. What follows is a survey of known results and open problems. It is written as a substantially extended version of our recent paper \cite{I3} and is intended mainly for complex analysts and geometers. Thus, some of our expositional and notational choices may not be up to the taste of a reader with background in algebra, for which we apologize.
Consider the vector space ${\mathbb C}[z_1,\dots,z_n]_d$ of homogeneous forms of degree $d$ on ${\mathbb C}^n$, where $n\ge 2$, $d\ge 3$. Fix $f\in{\mathbb C}[z_1,\dots,z_n]_d$ and look at the hypersurface $V_f:=\{z\in{\mathbb C}^n: f(z)=0\}$. We will be interested in the situation when the singularity of $f$ at the origin is isolated, or, equivalently, when the discriminant $\Delta(f)$ of $f$ does not vanish. In this case, define $M_f:={\mathbb C}[[z_1,\dots,z_n]]/(f_{z_{{}_1}},\dots,f_{z_{{}_n}})$ to be the {\it Milnor algebra}\, of the singularity. By the Mather-Yau theorem (see \cite{MY} and also \cite{Be}, \cite{Sh}, \cite[Theorem 2.26]{GLS}, \cite{GP}), the isomorphism class of $M_f$ determines the germ of the hypersurface $V_f$ at the origin up to biholomorphism, hence the form $f$ up to linear equivalence.
In fact, for a general isolated hypersurface singularity in ${\mathbb C}^n$ defined by (the germ of) a holomorphic function $F$, the Mather-Yau theorem states that, remarkably, the singularity is determined, up to biholomorphism, by $n$ and the isomorphism class of the {\it Tjurina algebra}\, $T_F:={\mathbb C}[[z_1,\dots,z_n]]/(F,F_{z_{{}_1}},\dots,F_{z_{{}_n}})$. The proof of the Mather-Yau theorem is not constructive, and it is an important open problem---called {\it the reconstruction problem}---to understand explicitly how the singularity is encoded in the corresponding Tjurina algebra. In this paper we concentrate on the homogeneous case as set out in the previous paragraph (notice that $T_f=M_f$). In this situation, the reconstruction problem was solved in \cite{IK}, where we proposed a simple algorithm for extracting the linear equivalence class of the form $f$ from the isomorphism class of $M_f$. An alternative (invariant-theoretic) approach to the reconstruction problem---which applies to the more general class of quasihomogeneous isolated hypersurface singularities---was initiated in article \cite{EI}, where we proposed a method for extracting certain numerical invariants of the singularity from its Milnor algebra (see \cite{I2} for a comparison of the two techniques). Already in the case of homogeneous singularities this approach leads to a curious concept that deserves attention regardless of the reconstruction problem and that is interesting from the purely classical invariant theory viewpoint. This concept is the focus of the present paper.
We will now briefly describe the idea behind it with details postponed until Section \ref{setup}. Let ${\mathfrak m}$ be the (unique) maximal ideal of $M_f$ and $\mathop{\rm Soc}\nolimits(M_f)$ the socle of $M_f$, defined as $\mathop{\rm Soc}\nolimits(M_f):=\{x\in M_f: x\,{\mathfrak m}=0\}$. It turns out that $M_f$ is a Gorenstein algebra, i.e., $\dim_{{\mathbb C}}\mathop{\rm Soc}\nolimits(M_f)=1$, and, moreover, that $\mathop{\rm Soc}\nolimits(M_f)$ is spanned by the image $\reallywidehat{\mathop{\rm Hess}\nolimits(f)}$ of the Hessian $\mathop{\rm Hess}\nolimits(f)$ of $f$ in $M_f$. Observing that $\mathop{\rm Hess}\nolimits(f)$ has degree $n(d-2)$, one can then introduce a form defined on the $n$-dimensional quotient ${\mathfrak m}/{\mathfrak m}^2$ with values in $\mathop{\rm Soc}\nolimits(M_f)$ as follows: $$ {\mathfrak m}/{\mathfrak m}^2 \to \mathop{\rm Soc}\nolimits(M_f), \quad x \mapsto y^{\,n(d-2)}, $$ where $y$ is any element of ${\mathfrak m}$ that projects to $x\in{\mathfrak m}/{\mathfrak m}^2$. There is a canonical isomorphism ${\mathfrak m}/{\mathfrak m}^2\cong {\mathbb C}^{n*}$ and, since $\reallywidehat{\mathop{\rm Hess}\nolimits(f)}$ spans the socle, there is also a canonical isomorphism $\mathop{\rm Soc}\nolimits(M_f) \cong {\mathbb C}$. Hence, one obtains a form ${\mathbf f}$ of degree $n(d-2)$ on ${\mathbb C}^{n*}$ (i.e., an element of ${\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$, where $e_1,\dots,e_n$ is the standard basis of ${\mathbb C}^n$), called the {\it associated form}\, of $f$.
The principal object of our study is the morphism $$ \Phi:X_n^d\to {\mathbb C}[e_1,\dots,e_n]_{n(d-2)},\quad f\mapsto{\mathbf f} $$ of affine algebraic varieties, where $X_n^d$ is the variety of forms in ${\mathbb C}[z_1,\dots,z_n]_d$ with nonzero discriminant. This map has a $\mathop{\rm GL}\nolimits_n$-equivariance property (see Proposition \ref{equivariance}), and one of the reasons for our interest in $\Phi$ is the following intriguing conjecture proposed in \cite{EI}, \cite{AI1}:
\begin{conjecture}\label{conj2} For every regular $\mathop{\rm GL}\nolimits_n$-invariant function $S$ on $X_n^d$ there exists a rational $\mathop{\rm GL}\nolimits_n$-invariant function $R$ on ${\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$ defined at all points of the set\, $\Phi(X_n^d)\subset {\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$ such that $R\circ\Phi=S$. \end{conjecture}
Observe that, if settled, Conjecture \ref{conj2} would imply an invariant-theoretic solution to the reconstruction problem in the homogeneous case. Indeed, on the one hand, it is well-known that the regular $\mathop{\rm GL}\nolimits_n$-invariant functions on $X_n^d$ separate the $\mathop{\rm GL}\nolimits_n$-orbits, and, on the other hand, the result of the evaluation of any rational $\mathop{\rm GL}\nolimits_n$-invariant function at the associated form ${\mathbf f}$ depends only on the isomorphism class of $M_f$. Thus, the conjecture would yield a complete system of biholomorphic invariants of homogeneous isolated hypersurface singularities constructed from the algebra $M_f$ alone. So far, Conjecture \ref{conj2} has been confirmed for binary forms (see \cite{EI}, \cite{AI2}), and its weaker variant (which does not require that the function $R$ be defined on the entire image of $\Phi$) has been established for all $n$ and $d$ (see \cite{AI1}).
The conjecture is also rather interesting from the purely invariant-theoretic point of view. Indeed, if settled, it would imply that the invariant theory of forms in ${\mathbb C}[z_1,\dots,z_n]_d$ can be extracted, by way of the morphism $\Phi$, from that of forms in ${\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$ at least at the level of rational invariant functions, or {\it absolute invariants}. Indeed, every absolute invariant of forms in ${\mathbb C}[z_1,\dots,z_n]_d$ can be represented as the ratio of two $\mathop{\rm GL}\nolimits_n$-invariant regular functions on $X_n^d$ (see \cite[Corollary 5.24 and Proposition 6.2]{Mu}).
The goal of the present survey is to draw the attention of the complex-analytic audience to the concept of the associated form and the curious connection between complex geometry and invariant theory manifested through it. In the paper, we focus on two groups of problems concerning associated forms. The first one is related to establishing Conjecture \ref{conj2} and is discussed in Sections \ref{results} and \ref{S:binaryquarticternarycubics}. The other one is also relevant to classical invariant theory but in a different way. Namely, letting $p$ be the smallest positive integer such that the product $\Delta^p\Phi$ extends to a morphism from ${\mathbb C}[z_1,\dots,z_n]_d$ to ${\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$, by utilizing the equivariance property of $\Phi$ one observes that this product defines a contravariant of degree $np(d-1)^{n-1}-n$ of forms in ${\mathbb C}[z_1,\dots,z_n]_d$. While it can be expressed via known contravariants for small values of $n$ and $d$ (see \cite{AIK} and Subsection \ref{contravarsmallnd} below), it appears to be new in general (cf.~\cite{D2}). We discuss this contravariant in Section \ref{S:contravariant} focussing on the problem of estimating the\linebreak integer $p$. Note that some of the details included in the survey have not been previously published.
For simplicity we have chosen to work over the field ${\mathbb C}$ although everything that follows applies verbatim to any algebraically closed field of characteristic zero, and many of the results do not in fact require algebraic closedness. We also note that all algebraic geometry in the paper is done for complex varieties (i.e., reduced separated schemes of finite type over ${\mathbb C}$) hence in the proofs it suffices to argue at the level of closed points, and this is what we do. In particular, when we speak about affine (resp.~projective) varieties, we only deal with the maximal spectra (resp.~maximal projective spectra) of the corresponding rings.
{\bf Acknowledgement.} We are grateful to M. Fedorchuk for many very helpful discussions.
\section{Preliminaries on associated forms}\label{setup} \setcounter{equation}{0}
In this section we provide an introduction to associated forms and their properties.
\subsection{The associated form of a nondegenerate form} For any finite collection of symbols $t_1,\dots,t_m$ we denote by ${\mathbb C}[t_1,\dots,t_m]$ the algebra of polynomials in these symbols with complex coefficients and by ${\mathbb C}[t_1,\dots,t_m]_k\subset{\mathbb C}[t_1,\dots,t_m]$ the vector space of homogeneous forms in $t_1,\dots,t_m$ of degree $k\ge 0$.
Clearly, we have $$ {\mathbb C}[t_1,\dots,t_m]=\bigoplus_{k=0}^{\infty}{\mathbb C}[t_1,\dots,t_m]_k. $$
We now fix $n\ge 2$ and let $e_1,\dots,e_n$ be the standard basis of ${\mathbb C}^n$. The group $\mathop{\rm GL}\nolimits_n:=\mathop{\rm GL}\nolimits_n({\mathbb C})$ (hence the group $\mathop{\rm SL}\nolimits_n:=\mathop{\rm SL}\nolimits_n({\mathbb C})$) acts on ${\mathbb C}^n$ via $$ (e_1,\dots,e_n)\mapsto (e_1,\dots,e_n)C,\,C\in\mathop{\rm GL}\nolimits_n, $$ or, equivalently, as \begin{equation} Cz=C(z_1,\dots,z_n):=(z_1,\dots,z_n)C^T,\,\,z=(z_1,\dots,z_n)\in{\mathbb C}^n,\,\,C\in\mathop{\rm GL}\nolimits_n.\label{actiononcn} \end{equation} This action induces an action on the space ${\mathbb C}[e_1,\dots,e_n]_k$: \begin{equation} (CF)(e_1,\dots,e_n):=F((e_1,\dots,e_n)C),\,F\in {\mathbb C}[e_1,\dots,e_n]_k,\,C\in\mathop{\rm GL}\nolimits_n.\label{dualactionfe} \end{equation}
Next, let us think of the coordinates $z_1,\dots,z_n$ on ${\mathbb C}^n$ with respect to the basis $e_1,\dots,e_n$ as the elements of the basis of ${\mathbb C}^{n*}$ dual to $e_1,\dots,e_n$. Then the dual action of $\mathop{\rm GL}\nolimits_n$ on ${\mathbb C}^{n*}$ is given by $$ (z_1,\dots,z_n)\mapsto (z_1,\dots,z_n)C^{-T},\,C\in\mathop{\rm GL}\nolimits_n. $$ Equivalently, if we identify a point $z^*\in{\mathbb C}^{n*}$ with its coordinate vector $(z_1^*,\dots,z_n^*)$ with respect to the basis $z_1,\dots,z_n$, this action is written as \begin{equation} Cz^*=C(z_1^*,\dots,z_n^*)=(z_1^*,\dots,z_n^*)C^{-1},\,z^*=(z_1^*,\dots,z_n^*)\in{\mathbb C}^{n*},\,C\in\mathop{\rm GL}\nolimits_n.\label{actiononcn*} \end{equation} It leads to an action on ${\mathbb C}[z_1,\dots,z_n]_k$: \begin{equation} (Cf)(z_1,\dots,z_n):=f\left((z_1,\dots,z_n)C^{-T}\right),\,f\in {\mathbb C}[z_1,\dots,z_n]_k,\,C\in\mathop{\rm GL}\nolimits_n.\label{actionfz} \end{equation} Two forms in either ${\mathbb C}[e_1,\dots,e_n]_k$ or ${\mathbb C}[z_1,\dots,z_n]_k$ that lie in the same $\mathop{\rm GL}\nolimits_n$-orbit are called {\it linearly equivalent}.
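As a quick sanity check of convention (\ref{actionfz}), the following SymPy sketch (the helper act is our own illustration, not from the literature) verifies on a sample binary cubic that $(Cf)(z):=f(zC^{-T})$ defines a left action, i.e., $(C_1C_2)f=C_1(C_2f)$:

```python
from sympy import symbols, Matrix, expand

z1, z2 = symbols('z1 z2')

def act(C, f):
    # (Cf)(z) := f(z C^{-T}), cf. (actionfz); z is the row vector (z1, z2)
    w = Matrix([[z1, z2]]) * C.inv().T
    return f.subs([(z1, w[0]), (z2, w[1])], simultaneous=True)

f = z1**3 + 2*z1*z2**2            # a sample binary cubic
C1 = Matrix([[1, 2], [0, 1]])
C2 = Matrix([[3, 0], [1, 1]])

# left-action property: (C1 C2) f = C1 (C2 f)
lhs = expand(act(C1 * C2, f))
rhs = expand(act(C1, act(C2, f)))
```

Replacing $C^{-T}$ by $C^{T}$ in the helper would produce a right action instead, which is why the inverse transpose appears in (\ref{actionfz}).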
Clearly, every element of ${\mathbb C}[z_1,\dots,z_n]_k$ can be thought of as a function on ${\mathbb C}^n$, so to every nonzero $f\in{\mathbb C}[z_1,\dots,z_n]_k$ we associate the hypersurface $$ V_f:=\{z\in{\mathbb C}^n: f(z)=0\} $$ and consider it as a complex space with the structure sheaf induced by $f$. The singular set of $V_f$ is then the critical set of $f$. In particular, if $k\ge 2$ the hypersurface $V_f$ has a singularity at the origin. We are interested in the situation when this singularity is isolated, or, equivalently, when $V_f$ is smooth away from 0. This occurs if and only if $f$ is {\it nondegenerate}, i.e., $\Delta(f)\ne 0$, where $\Delta$ is the {\it discriminant}\, (see \cite[Chapter 13]{GKZ}).
Fix $d\ge 3$ and define $$ X^d_n:=\{f\in{\mathbb C}[z_1,\dots,z_n]_d: \Delta(f)\ne 0\}. $$ Observe that $\mathop{\rm GL}\nolimits_n$ acts on the affine variety $X_n^d$ and note that every $f\in X_n^d$ is {\it stable}\, with respect to this action, i.e., the orbit of $f$ is closed in $X_n^d$ and has dimension $n^2$ (see \cite[Proposition 4.2]{MFK}, \cite[Corollary 5.24, Lemma 5.40]{Mu} and cf.~Subsection \ref{reviewGIT} below). It then follows by standard Geometric Invariant Theory arguments (see, e.g., \cite[Proposition 3.1]{EI}) that regular invariant functions on $X_n^d$ separate the $\mathop{\rm GL}\nolimits_n$-orbits. As explained in the introduction, this is one of the facts that link Conjecture \ref{conj2} with the reconstruction problem arising from the Mather-Yau theorem.
Fix $f\in X^d_n$ and consider the {\it Milnor algebra}\, of the singularity\ of $V_f$, which is the complex local algebra $$ M_f:={\mathbb C}[[z_1,\dots,z_n]]/(f_{z_{{}_1}},\dots,f_{z_{{}_n}}), $$ where ${\mathbb C}[[z_1,\dots,z_n]]$ is the algebra of formal power series in $z_1,\dots,z_n$ with complex coefficients. Since the singularity of $V_f$ is isolated, it follows from the Nullstellensatz that the algebra $M_f$ is Artinian, i.e., $\dim_{{\mathbb C}}M_f<\infty$. Therefore, $f_{z_{{}_1}},\dots,f_{z_{{}_n}}$ is a system of parameters in ${\mathbb C}[[z_1,\dots,z_n]]$, and, since ${\mathbb C}[[z_1,\dots,z_n]]$ is a regular local ring, $f_{z_{{}_1}},\dots,f_{z_{{}_n}}$ is a regular sequence in ${\mathbb C}[[z_1,\dots,z_n]]$. This yields that $M_f$ is a complete intersection (see \cite[\S 21]{Ma}).
It is convenient to utilize another realization of the Milnor algebra. Namely, it is easy to see that $M_f$ is isomorphic to the algebra ${\mathbb C}[z_1,\dots,z_n]/(f_{z_{{}_1}},\dots,f_{z_{{}_n}})$, so we write $$ M_f={\mathbb C}[z_1,\dots,z_n]/(f_{z_{{}_1}},\dots,f_{z_{{}_n}}). $$ Let ${\mathfrak m}$ denote the maximal ideal of $M_f$, which consists of all elements represented by polynomials in ${\mathbb C}[z_1,\dots,z_n]$ vanishing at the origin. By Nakayama's lemma, the maximal ideal is nilpotent and we let $\nu:=\max\{\eta\in{\mathbb N}: {\mathfrak m}^{\eta}\ne 0\}$ be the socle degree of $M_f$.
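For instance, for $f=z_1^3+z_2^3$ (so $n=2$, $d=3$) one has

```latex
M_f={\mathbb C}[z_1,z_2]/(3z_1^2,\,3z_2^2)
  =\mathop{\rm span}\nolimits_{{\mathbb C}}\{1,\,\widehat z_1,\,\widehat z_2,\,\widehat z_1\widehat z_2\}\,,
\qquad
{\mathfrak m}^2={\mathbb C}\,\widehat z_1\widehat z_2\ne 0\,,\quad {\mathfrak m}^3=0\,,
```

so $\dim_{{\mathbb C}}M_f=4=(d-1)^n$ and $\nu=2=n(d-2)$.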
Since $M_f$ is a complete intersection, by \cite{Ba} it is a {\it Gorenstein algebra}. This means that the {\it socle}\, of $M_f$, defined as $$ \mathop{\rm Soc}\nolimits(M_f):=\{x\in M_f : x\,{\mathfrak m}=0\}, $$ is a one-dimensional vector space over ${\mathbb C}$ (see, e.g., \cite[Theorem 5.3]{Hu}). We then have $\mathop{\rm Soc}\nolimits(M_f)={\mathfrak m}^{\nu}$. Furthermore, $\mathop{\rm Soc}\nolimits(M_f)$ is spanned by the projection $\reallywidehat{\mathop{\rm Hess}\nolimits(f)}$ to $M_f$ of the Hessian $\mathop{\rm Hess}\nolimits(f)$ of $f$ (see, e.g., \cite[Lemma 3.3]{Sai}). Since $\mathop{\rm Hess}\nolimits(f)$ has degree $n(d-2)$, it follows that $\nu=n(d-2)$. Thus, the subspace \begin{equation} \begin{array}{l} W_f:={\mathbb C}[z_1,\dots,z_n]_{n(d-2)-(d-1)}f_{z_{{}_1}}+\dots+\\
\\ \hspace{3cm}{\mathbb C}[z_1,\dots,z_n]_{n(d-2)-(d-1)}f_{z_{{}_n}}\subset{\mathbb C}[z_1,\dots,z_n]_{n(d-2)}\end{array}\label{subspace} \end{equation} has codimension 1, with the line spanned by $\mathop{\rm Hess}\nolimits(f)$ being complementary to it.
Denote by $\omega_f \co \mathop{\rm Soc}\nolimits(M_f)\rightarrow{\mathbb C}$ the linear isomorphism given by the condition $\omega_f(\reallywidehat{\mathop{\rm Hess}\nolimits(f)})=1$. Define a form ${\mathbf f}$ on ${\mathbb C}^{n*}$ as follows. Fix $z^*\in{\mathbb C}^{n*}$, let, as before, $z_1^*,\dots,z_n^*$ be the coordinates of $z^*$ with respect to the basis $z_1,\dots,z_n$, and set \begin{equation} {\mathbf f}(z^*):=\omega_f\left((z_1^*\widehat{z}_1+\dots+z_n^*\widehat{z}_n)^{n(d-2)}\right),\label{assocformdef} \end{equation} where $\widehat{z}_j$ is the projection to $M_f$ of the coordinate function $z_j\in{\mathbb C}[z_1,\dots,z_n]$.
Notice that if $i_1,\dots,i_n$ are nonnegative integers such that $i_1+\dots+i_n=n(d-2)$, the product $\widehat{z}_1^{i_1}\cdots \widehat{z}_n^{i_n}$ lies in $\mathop{\rm Soc}\nolimits(M_f)$, hence we have \begin{equation} \widehat{z}_1^{i_1}\cdots \widehat{z}_n^{i_n}=\mu_{i_1,\dots,i_n}(f) \reallywidehat{\mathop{\rm Hess}\nolimits(f)}\label{assocformexpppp} \end{equation} for some $\mu_{i_1,\dots,i_n}(f)\in{\mathbb C}$. In terms of the coefficients $\mu_{i_1,\dots,i_n}(f)$ the form ${\mathbf f}$ is written as \begin{equation} {\mathbf f}(z^*)=\sum_{i_1+\cdots+i_n=n(d-2)}\frac{(n(d-2))!}{i_1!\cdots i_n!}\mu_{i_1,\dots,i_n}(f) z_1^{* i_1}\cdots z_n^{* i_n}.\label{assocformexpp} \end{equation} One can view the expression in the right-hand side of (\ref{assocformexpp}) as an element of ${\mathbb C}[z_1^*,\dots,z_n^*]_{n(d-2)}$, where we regard $z_1^*,\dots,z_n^*$ as the basis of ${\mathbb C}^{n**}$ dual to the basis $z_1,\dots,z_n$ of ${\mathbb C}^{n*}$. Identifying $z^*_j\in{\mathbb C}^{n**}$ with $e_j\in{\mathbb C}^n$, we will think of ${\mathbf f}$ as the following element of ${\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$: \begin{equation} {\mathbf f}(e_1,\dots,e_n)=\sum_{i_1+\cdots+i_n=n(d-2)}\frac{(n(d-2))!}{i_1!\cdots i_n!}\mu_{i_1,\dots,i_n}(f) e_1^{i_1}\cdots e_n^{i_n}.\label{assocformexpp1} \end{equation} We call ${\mathbf f}$ given by expression (\ref{assocformexpp1}) the {\it associated form}\, of $f$.
\begin{example}\label{E:example1} \rm If $f = a_1 z_1^d + \cdots + a_n z_n^d$ for nonzero $a_i \in {\mathbb C}$, then one computes $\mathop{\rm Hess}\nolimits(f) = a_1 \cdots a_n(d(d-1))^n (z_1 \cdots z_n)^{d-2}$ and $$ {\mathbf f}(e_1,\dots,e_n) = \frac{1}{a_1 \cdots a_n} \frac{(n(d-2))!}{(d!)^n} e_1^{d-2} \cdots e_n^{d-2}. $$ \end{example}
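The example can be confirmed computationally. The following sympy sketch (our check, not part of the paper) takes $n=2$, $d=3$, $a_1=2$, $a_2=3$, computes $\mu_{1,1}(f)$ by reducing $z_1z_2$ and $\mathop{\rm Hess}\nolimits(f)$ modulo the Jacobian ideal, and compares the resulting coefficient of $e_1e_2$ in ${\mathbf f}$ with the closed formula above:

```python
from sympy import symbols, diff, hessian, groebner, cancel, factorial, Rational

z1, z2 = symbols('z1 z2')
a1, a2, d, n = 2, 3, 3, 2          # sample values; any nonzero a1, a2 work

f = a1*z1**d + a2*z2**d
G = groebner([diff(f, z1), diff(f, z2)], z1, z2, order='grevlex')

# mu_{1,1}(f): the scalar with z1*z2 = mu * Hess(f) in Soc(M_f)
H = hessian(f, (z1, z2)).det()
mu = cancel(G.reduce(z1*z2)[1] / G.reduce(H)[1])

# Coefficient of e1^{d-2} e2^{d-2} = e1*e2 in the associated form ...
coeff = factorial(n*(d - 2)) * mu
# ... matches the closed formula of the example
assert coeff == Rational(1, a1*a2) * factorial(n*(d - 2)) / factorial(d)**n
```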
\noindent More examples of calculating associated forms will be given in Section \ref{S:binaryquarticternarycubics}.
It is not hard to show that each $\mu_{i_1,\dots,i_n}$ is a regular function on the affine variety $X_n^d$ (see, e.g., \cite[Proposition 2.1]{I3}). Hence, we have \begin{equation} \mu_{i_1,\dots,i_n}=\frac{P_{i_1,\dots,i_n}}{\Delta^{p_{i_1,\dots,i_n}}}\label{formulaformus} \end{equation} for some $P_{i_1,\dots,i_n}\in{\mathbb C}[{\mathbb C}[z_1,\dots,z_n]_d]$ and nonnegative integer $p_{i_1,\dots,i_n}$. Here and in what follows for any affine variety $X$ over ${\mathbb C}$ we denote by ${\mathbb C}[X]$ its coordinate ring, which coincides with the ring ${\mathcal O}_X(X)$ of all regular functions on $X$. For example, ${\mathbb C}[z_1,\dots,z_n]={\mathbb C}[{\mathbb C}^n]$ and ${\mathbb C}[z_1^*,\dots,z_n^*]={\mathbb C}[{\mathbb C}^{n*}]$.
Thus, we have arrived at the morphism $$ \Phi \co X_n^d\rightarrow {\mathbb C}[e_1,\dots,e_n]_{n(d-2)},\quad f\mapsto {\mathbf f} $$ of affine algebraic varieties. Notice that by Example \ref{E:example1} this morphism is not injective.
Next, recall that for any $k\ge 0$ the {\it polar pairing}\, between the spaces ${\mathbb C}[z_1,\dots,z_n]_{k}$ and ${\mathbb C}[e_1,\dots,e_n]_{k}$ is given as follows:\begin{equation} \begin{array}{l} {\mathbb C}[z_1,\dots,z_n]_{k}\times{\mathbb C}[e_1,\dots,e_n]_{k}\to{\mathbb C},\\
\\ (g(z_1,\dots,z_n),F(e_1,\dots,e_n))\mapsto g\diamond F:=\\
\\ \hspace{5cm}g\left(\partial/\partial e_1,\dots,\partial/\partial e_n\right) F(e_1,\dots,e_n). \end{array}\label{polarpairing} \end{equation} This pairing is nondegenerate and therefore yields a canonical identification between ${\mathbb C}[e_1,\dots,e_n]_{k}$ and ${\mathbb C}[z_1,\dots,z_n]_{k}^*$ (see, e.g., \cite[Subsection 1.1.1]{D1} for details). Using this identification, one may regard the associated form as an element of ${\mathbb C}[z_1,\dots,z_n]_{n(d-2)}^*$, in which case the morphism $\Phi$ turns into a morphism from $X_n^d$ to ${\mathbb C}[z_1,\dots,z_n]_{n(d-2)}^*$; we denote it by $\widetilde\Phi$.
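The polar pairing is easy to experiment with directly. In the sketch below (ours, with $n=2$), $g\diamond F$ is evaluated by substituting $\partial/\partial e_j$ for $z_j$, and one sees that monomial bases pair diagonally, $z_1^{i}z_2^{j}\diamond e_1^{i}e_2^{j}=i!\,j!$, which is the source of the nondegeneracy used above:

```python
from sympy import symbols, diff, factorial, Poly

z1, z2, e1, e2 = symbols('z1 z2 e1 e2')

def polar(g, F):
    """g diamond F: substitute d/de_j for z_j in g, then apply to F."""
    return sum(c * diff(F, e1, i, e2, j)
               for (i, j), c in Poly(g, z1, z2).terms())

# Monomial bases pair diagonally: z1^i z2^j diamond e1^k e2^l equals i! j!
# if (i, j) = (k, l) and 0 otherwise, so the pairing is nondegenerate
assert polar(z1**2*z2, e1**2*e2) == factorial(2)*factorial(1)
assert polar(z1**2*z2, e1*e2**2) == 0
```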
The morphism $\widetilde\Phi$ admits a rather simple description. For $f\in X_n^d$, let $\widetilde\omega_f$ be the element of ${\mathbb C}[z_1,\dots,z_n]_{n(d-2)}^*$ such that: \begin{itemize}
\item[(i)] $\ker\widetilde\omega_f=W_f$ with $W_f$ introduced in (\ref{subspace}), and
\item[(ii)] $\widetilde\omega_f(\mathop{\rm Hess}\nolimits(f))=1$.
\end{itemize}
\noindent Clearly, $\mu_{i_1,\dots,i_n}(f)=\widetilde\omega_f(z_1^{i_1}\cdots z_n^{i_n})$ for $i_1+\dots+i_n=n(d-2)$. A straightforward calculation yields:
\begin{proposition}\label{newmap} The morphism $$ \widetilde\Phi: X_n^d\to {\mathbb C}[z_1,\dots,z_n]_{n(d-2)}^* $$ sends a form $f$ to $(n(d-2))!\,\widetilde\omega_f$.
\end{proposition}
The maps $\Phi$ and $\widetilde\Phi$ are rather natural; in particular, \cite[Proposition 2.1]{AI1} implies equivariance properties for them. Recall that the actions of $\mathop{\rm GL}\nolimits_n$ on ${\mathbb C}[z_1,\dots,z_n]_d$, ${\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$ are given by formulas (\ref{actionfz}), (\ref{dualactionfe}), respectively, and on the dual space ${\mathbb C}[z_1,\dots,z_n]_{n(d-2)}^*$ by \begin{equation} \begin{array}{l} (Ch)(g):=h(C^{-1}g),\\
\\ \hspace{2cm}h\in {\mathbb C}[z_1,\dots,z_n]_{n(d-2)}^*,\, g\in {\mathbb C}[z_1,\dots,z_n]_{n(d-2)},\,C\in\mathop{\rm GL}\nolimits_n. \end{array}\label{dualdualhg} \end{equation} The isomorphism ${\mathbb C}[e_1,\dots,e_n]_{n(d-2)}\simeq {\mathbb C}[z_1,\dots,z_n]_{n(d-2)}^*$ induced by the polar pairing is equivariant with respect to actions (\ref{dualactionfe}) and (\ref{dualdualhg}).
We will now state the equivariance properties of $\Phi$ and $\widetilde\Phi$:
\begin{proposition}\label{equivariance} For every $f\in X_n^d$ and $C\in\mathop{\rm GL}\nolimits_n$ one has \begin{equation} \Phi(Cf)=(\det C)^2\,\Bigl(C\Phi(f)\Bigr)\,\,\hbox{and}\,\,\,\widetilde\Phi(Cf)=(\det C)^2\,\Bigl(C\widetilde\Phi(f)\Bigr).\label{equivarphitildephi} \end{equation} In particular, the morphisms $\Phi$, $\widetilde\Phi$ are $\mathop{\rm SL}\nolimits_n$-equivariant. \end{proposition}
Note that the associated form of $f\in X_n^d$ arises from the following invariantly defined map $$ {\mathfrak m}/{\mathfrak m}^2 \to \mathop{\rm Soc}\nolimits(M_f),\quad x \mapsto y^{n(d-2)},\label{coordinatefree} $$ with $y\in{\mathfrak m}$ being any element that projects to $x\in{\mathfrak m}/{\mathfrak m}^2$. Indeed, ${\mathbf f}$ is derived from this map by identifying the target with ${\mathbb C}$ via $\omega_f$ and the source with ${\mathbb C}^{n*}$ by mapping the image of $\widehat{z}_j$ in ${\mathfrak m}/{\mathfrak m}^2$ to the element $z_j$ of the basis $z_1,\dots,z_n$ of ${\mathbb C}^{n*}$. It then follows that for any rational $\mathop{\rm GL}\nolimits_n$-invariant function $R$ on ${\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$ the value $R({\mathbf f})$ depends only on the isomorphism class of the algebra $M_f$. As stated in the introduction, this is another fact that links Conjecture \ref{conj2} with the reconstruction problem.
\subsection{The associated form of a finite morphism} As before, let $n\ge 2$ and $d\ge 3$. We will now generalize the above construction from forms $f\in X_n^d$ to finite morphisms ${\mathfrak f}=(f_1,\dots,f_n):{\mathbb C}^n\to{\mathbb C}^n$ defined by $n$ forms of degree $d-1$.
Consider the vector space $({\mathbb C}[z_1,\dots,z_n]_{d-1})^{\oplus n}$ of $n$-tuples ${\mathfrak f}=(f_1, \ldots, f_n)$ of forms of degree $d-1$. Recall that the {\it resultant}\, $\mathop{\rm Res}\nolimits$ on the space $({\mathbb C}[z_1,\dots,z_n]_{d-1})^{\oplus n}$ is a form with the property that $\mathop{\rm Res}\nolimits({\mathfrak f}) \neq 0$ if and only if $f_1, \ldots, f_n$ have no common zeroes away from the origin (see, e.g., \cite[Chapter 13]{GKZ}). For an element ${\mathfrak f} = (f_1, \ldots, f_n) \in ({\mathbb C}[z_1,\dots,z_n]_{d-1})^{\oplus n}$, we now introduce the algebra $$ M_{\mathfrak f} := {\mathbb C}[z_1,\dots,z_n]/ (f_1, \ldots, f_n) $$ and recall a well-known lemma (see, e.g., \cite[Lemma 2.4]{AI2} and \cite[p.~187]{SS}):
\begin{lemma}\label{fourconds} \it The following statements are equivalent: \begin{enumerate}
\item[\rm (1)] the resultant $\mathop{\rm Res}\nolimits(\mathfrak{f})$ is nonzero;
\item[\rm (2)] the algebra $M_\mathfrak{f}$ has finite vector space dimension;
\item[\rm (3)] the morphism $\mathfrak{f} \co {\mathbb C}^n \to {\mathbb C}^n$ is finite;
\item[\rm (4)] the $n$-tuple $\mathfrak{f}$ is a homogeneous system of parameters of ${\mathbb C}[z_1,\dots,z_n]$, i.e., the Krull dimension of $M_\mathfrak{f}$ is $0$. \end{enumerate} If the above conditions are satisfied, then $M_\mathfrak{f}$ is a local complete intersection {\rm (}hence Gorenstein{\rm )} algebra whose socle $\mathop{\rm Soc}\nolimits(M_{\mathfrak f})$ is generated in degree $n(d-2)$ by the image $\widehat{\mathop{\rm Jac}\nolimits(\mathfrak{f})}$ in $M_\mathfrak{f}$ of the Jacobian $\mathop{\rm Jac}\nolimits(\mathfrak{f})$ of\, $\mathfrak{f}$.\end{lemma}
\noindent In the above lemma $ \mathop{\rm Soc}\nolimits(M_\mathfrak{f}):=\{x\in M_\mathfrak{f} : x\,{\mathfrak m}=0\}, $ where the (unique) maximal ideal ${\mathfrak m}$ of $M_\mathfrak{f}$ consists of all elements represented by polynomials in ${\mathbb C}[z_1,\dots,z_n]$ vanishing at the origin.
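Condition (2) of the lemma can be tested in examples via Gröbner bases: for these homogeneous complete-intersection ideals, finite dimension of $M_{\mathfrak f}$ is equivalent to all monomials of degree $n(d-2)+1$ lying in the ideal. A sympy sketch (our illustration, not part of the paper), with $n=2$ and $d-1=2$:

```python
from sympy import symbols, groebner

z1, z2 = symbols('z1 z2')

def monomials_vanish(polys, deg):
    """True iff every monomial of degree `deg` reduces to 0 modulo the
    ideal generated by `polys` (Groebner normal form in grevlex order)."""
    G = groebner(polys, z1, z2, order='grevlex')
    return all(G.reduce(z1**i * z2**(deg - i))[1] == 0 for i in range(deg + 1))

# (z1^2, z2^2): common zero only at the origin, Res != 0; the algebra
# has basis 1, z1, z2, z1*z2 and socle degree n(d-2) = 2
assert monomials_vanish([z1**2, z2**2], 3)

# (z1^2, z1*z2): common zeroes along the line z1 = 0, Res = 0; here
# z2^k is nonzero in the quotient for every k, so it is infinite-dimensional
assert not monomials_vanish([z1**2, z1*z2], 3)
```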
Next, let $Y_n^{d-1}$ be the affine open subset of $({\mathbb C}[z_1,\dots,z_n]_{d-1})^{\oplus n}$ of all $n$-tuples of forms with nonzero resultant. Lemma \ref{fourconds} implies that for $\mathfrak{f}\in Y_n^{d-1}$ the subspace \begin{equation} \begin{array}{l} W_\mathfrak{f}:={\mathbb C}[z_1,\dots,z_n]_{n(d-2)-(d-1)}f_1+\dots+\\
\\ \hspace{3cm}{\mathbb C}[z_1,\dots,z_n]_{n(d-2)-(d-1)}f_n\subset{\mathbb C}[z_1,\dots,z_n]_{n(d-2)} \end{array}\label{subspace1} \end{equation} has codimension 1, with the line spanned by $\mathop{\rm Jac}\nolimits(\mathfrak{f})$ being complementary to it.
Fix $\mathfrak{f}\in Y_n^{d-1}$ and denote by $\omega_\mathfrak{f} \co \mathop{\rm Soc}\nolimits(M_\mathfrak{f})\rightarrow{\mathbb C}$ the linear isomorphism given by the condition $\omega_\mathfrak{f}(\widehat{\mathop{\rm Jac}\nolimits(\mathfrak{f})})=1$. Define a form ${\mathbf f}$ on ${\mathbb C}^{n*}$ as follows. Fix $z^*\in{\mathbb C}^{n*}$, let, as before, $z_1^*,\dots,z_n^*$ be the coordinates of $z^*$ with respect to the basis $z_1,\dots,z_n$, and set $$ {\mathbf f}(z^*):=\omega_\mathfrak{f}\left((z_1^*\widehat{z}_1+\dots+z_n^*\widehat{z}_n)^{n(d-2)}\right),\label{assocformdef1} $$ where $\widehat{z}_j$ is the projection to $M_\mathfrak{f}$ of the coordinate function $z_j\in{\mathbb C}[z_1,\dots,z_n]$.
If $i_1,\dots,i_n$ are nonnegative integers such that $i_1+\dots+i_n=n(d-2)$, the product $\widehat{z}_1^{i_1}\cdots \widehat{z}_n^{i_n}$ lies in $\mathop{\rm Soc}\nolimits(M_\mathfrak{f})$, hence we have $$ \widehat{z}_1^{i_1}\cdots \widehat{z}_n^{i_n}=\mu_{i_1,\dots,i_n}(\mathfrak{f}) \widehat{\mathop{\rm Jac}\nolimits(\mathfrak{f})}\label{assocformexpppp1} $$ for some $\mu_{i_1,\dots,i_n}(\mathfrak{f})\in{\mathbb C}$. In terms of the coefficients $\mu_{i_1,\dots,i_n}(\mathfrak{f})$ the form ${\mathbf f}$ is written as \begin{equation} {\mathbf f}(z^*)=\sum_{i_1+\cdots+i_n=n(d-2)}\frac{(n(d-2))!}{i_1!\cdots i_n!}\mu_{i_1,\dots,i_n}(\mathfrak{f}) z_1^{* i_1}\cdots z_n^{* i_n}.\label{assocformexpp2} \end{equation} One can view the expression in the right-hand side of (\ref{assocformexpp2}) as an element of ${\mathbb C}[z_1^*,\dots,z_n^*]_{n(d-2)}$, where we regard $z_1^*,\dots,z_n^*$ as the basis of ${\mathbb C}^{n**}$ dual to the basis $z_1,\dots,z_n$ of ${\mathbb C}^{n*}$. Identifying $z^*_j\in{\mathbb C}^{n**}$ with $e_j\in{\mathbb C}^n$, we will think of ${\mathbf f}$ as the following element of ${\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$: \begin{equation} \sum_{i_1+\cdots+i_n=n(d-2)}\frac{(n(d-2))!}{i_1!\cdots i_n!}\mu_{i_1,\dots,i_n}(\mathfrak{f}) e_1^{i_1}\cdots e_n^{i_n}.\label{assocformexpp3} \end{equation} We call ${\mathbf f}$ given by expression (\ref{assocformexpp3}) the {\it associated form}\, of $\mathfrak{f}$. Clearly, the associated form of $f\in X_n^d$ is the associated form of the gradient $(f_{z_{{}_1}},\dots,f_{z_{{}_n}})\in Y_n^{d-1}$.
We note that the associated form of $\mathfrak{f} \in Y_n^{d-1}$ arises from the following invariantly defined map $$ {\mathfrak m}/{\mathfrak m}^2 \to \mathop{\rm Soc}\nolimits(M_\mathfrak{f}),\quad x \mapsto y^{n(d-2)},\label{coordinatefree1} $$ with $y\in{\mathfrak m}$ being any element that projects to $x\in{\mathfrak m}/{\mathfrak m}^2$. Indeed, ${\mathbf f}$ is derived from this map by identifying the target with ${\mathbb C}$ via $\omega_{\mathfrak{f}}$ and the source with ${\mathbb C}^{n*}$ by mapping the image of $\widehat{z}_j$ in ${\mathfrak m}/{\mathfrak m}^2$ to the element $z_j$ of the basis $z_1,\dots,z_n$ of ${\mathbb C}^{n*}$.
Again, it is not hard to show that each $\mu_{i_1,\dots,i_n}$ is a regular function on the affine variety $Y_n^{d-1}$ (cf.~\cite[the proof of Proposition 2.1]{I3}). Hence, we have $$ \mu_{i_1,\dots,i_n}=\frac{P_{i_1,\dots,i_n}}{\mathop{\rm Res}\nolimits^{p_{i_1,\dots,i_n}}}\label{formulaformus1} $$ for some $P_{i_1,\dots,i_n}\in{\mathbb C}[({\mathbb C}[z_1,\dots,z_n]_{d-1})^{\oplus n}]$ and nonnegative integer $p_{i_1,\dots,i_n}$. Thus, we arrive at the morphism $$ \Psi \co Y_n^{d-1}\rightarrow {\mathbb C}[e_1,\dots,e_n]_{n(d-2)},\quad \mathfrak{f}\mapsto {\mathbf f} $$ of affine algebraic varieties. Using the polar pairing, we may regard the associated form as an element of ${\mathbb C}[z_1,\dots,z_n]_{n(d-2)}^*$, in which case $\Psi$ turns into a morphism from $Y_n^{d-1}$ to ${\mathbb C}[z_1,\dots,z_n]_{n(d-2)}^*$; we call it $\widetilde\Psi$.
The morphism $\widetilde\Psi$ is easy to describe. For $\mathfrak{f}\in Y_n^{d-1}$, denote by $\widetilde\omega_\mathfrak{f}$ the element of ${\mathbb C}[z_1,\dots,z_n]_{n(d-2)}^*$ such that: \begin{itemize}
\item[(i)] $\ker\widetilde\omega_\mathfrak{f}=W_\mathfrak{f}$ with $W_\mathfrak{f}$ introduced in (\ref{subspace1}), and
\item[(ii)] $\widetilde\omega_\mathfrak{f}(\mathop{\rm Jac}\nolimits(\mathfrak{f}))=1$.
\end{itemize}
\noindent Clearly, $\mu_{i_1,\dots,i_n}(\mathfrak{f})=\widetilde\omega_\mathfrak{f}(z_1^{i_1}\cdots z_n^{i_n})$ for $i_1+\dots+i_n=n(d-2)$. We have a fact analogous to Proposition \ref{newmap}:
\begin{proposition}\label{newmap1} The morphism $$ \widetilde\Psi: Y_n^{d-1}\to {\mathbb C}[z_1,\dots,z_n]_{n(d-2)}^* $$ sends an $n$-tuple $\mathfrak{f}$ to $(n(d-2))!\,\widetilde\omega_\mathfrak{f}$. \end{proposition}
We will now state the equivariance property of the morphisms $\Psi$, $\widetilde\Psi$. First, notice that for any $k$ the group $\mathop{\rm GL}\nolimits_n \times \mathop{\rm GL}\nolimits_n$ acts on the vector space $({\mathbb C}[z_1,\dots,z_n]_k)^{\oplus n}$ via $$ ((C_1, C_2) \mathfrak{f}) (z_1,\dots,z_n) := \mathfrak{f} ((z_1,\dots,z_n)C_1^{-T})C_2^{-1}\label{doubleaction} $$ for $\mathfrak{f}\in ({\mathbb C}[z_1,\dots,z_n]_k)^{\oplus n}$ and $C_1,C_2 \in \mathop{\rm GL}\nolimits_n$. We then have (see \cite[Lemma 2.7]{AI2}):
\begin{proposition}\label{L:equiv2} For every $\mathfrak{f}\in Y_n^{d-1}$ and $C_1,C_2 \in \mathop{\rm GL}\nolimits_n$ the following holds: \begin{equation} \begin{array}{l} \displaystyle\Psi((C_1,C_2) \mathfrak{f})=\det(C_1 C_2)\Bigl (C_1 \Psi(\mathfrak{f} )\Bigr)\,\,\hbox{and}\\
\\ \hspace{4cm}\widetilde\Psi((C_1,C_2) \mathfrak{f})=\det(C_1 C_2)\Bigl(C_1 \widetilde\Psi(\mathfrak{f})\Bigr). \end{array}\label{E:equiv2} \end{equation} \end{proposition}
We conclude this subsection by observing that the morphisms $\Phi$, $\widetilde\Phi$ can be factored as \begin{equation}
\Phi=\Psi\circ\nabla|_{X_n^d},\quad\widetilde\Phi=\widetilde\Psi\circ\nabla|_{X_n^d},\label{decomposition} \end{equation} where $\nabla$ is the {\it gradient morphism}: \begin{equation} \nabla: {\mathbb C}[z_1,\dots,z_n]_d\to ({\mathbb C}[z_1,\dots,z_n]_{d-1})^{\oplus n},\quad f\mapsto (f_{z_{{}_1}},\dots,f_{z_{{}_n}}).\label{gradientmorph} \end{equation}
\noindent Later on, this factorization will prove rather useful.
\subsection{Macaulay inverse systems and the image of $\Psi$}
We will now interpret the morphism $\Psi$ in different terms. Recall that the algebra ${\mathbb C}[e_1,\dots,e_n]$ is a ${\mathbb C}[z_1,\dots,z_n]$-module via differentiation: $$ \begin{array}{l} \displaystyle(g \diamond F) (e_1, \ldots, e_n) := g\left(\frac{\partial}{\partial e_1}, \ldots, \frac{\partial}{\partial e_n}\right)F(e_1, \ldots, e_n),\\
\\ \hspace{7cm}g\in{\mathbb C}[z_1,\dots,z_n],\, F\in{\mathbb C}[e_1,\dots,e_n]. \label{pp} \end{array} $$ Restricting this module structure to ${\mathbb C}[z_1,\dots,z_n]_k\times{\mathbb C}[e_1,\dots,e_n]_k$, we obtain the perfect polar pairing described in (\ref{polarpairing}).
For any $F\in{\mathbb C}[e_1,\dots,e_n]_k$, we now introduce a homogeneous ideal, called the {\it annihilator}\, of $F$, as follows: $$ F^{\perp} := \{g\in {\mathbb C}[z_1,\dots,z_n]: g \diamond F = 0 \}, $$ which is clearly independent of scaling and thus is well-defined for $F$ in the projective space ${\mathbb P}\,{\mathbb C}[e_1,\dots,e_n]_k$ (from now on, we will sometimes think of forms as elements of the corresponding projective spaces, which will be clear from the context). It is well-known that the quotient ${\mathbb C}[z_1,\dots,z_n]/F^{\perp}$ is a standard graded local Artinian Gorenstein algebra of socle degree $k$ and the following holds (cf.~\cite[Lemmas 2.12, 2.14]{IK}, \cite[Proposition 4]{Em}):
\begin{proposition} \label{prop-correspondence} The correspondence $F \mapsto {\mathbb C}[z_1,\dots,z_n]/F^{\perp}$ induces a bijection $$ {\mathbb P}\,{\mathbb C}[e_1,\dots,e_n]_k \to \left\{
\begin{array}{l}
\text{local Artinian Gorenstein algebras ${\mathbb C}[z_1,\dots,z_n]/I$}\\
\text{of socle degree $k$, where the ideal $I$ is homogeneous}\\
\end{array} \right\}. $$ \end{proposition}
We also note that the isomorphism classes of local Artinian Gorenstein algebras ${\mathbb C}[z_1,\dots,z_n]/I$ of socle degree $k$, where the ideal $I$ is homogeneous, are in bijective correspondence with the linear equivalence classes (i.e., $\mathop{\rm GL}\nolimits_n$-orbits) of nonzero elements of ${\mathbb C}[e_1,\dots,e_n]_k$ (see \cite[Proposition 17]{Em} and cf.~\cite[formula (5.7)]{I4}). This correspondence is induced by the map $F \mapsto {\mathbb C}[z_1,\dots,z_n]/F^{\perp}$, $F\in{\mathbb C}[e_1,\dots,e_n]_k$.
Any form $F\in{\mathbb C}[e_1,\dots,e_n]_k$ such that $F^{\perp}=I$ is called {\it a {\rm (}homogeneous{\rm )} Macaulay inverse system of\, ${\mathbb C}[z_1,\dots,z_n]/I$} and its image in ${\mathbb P}\,{\mathbb C}[e_1,\dots,e_n]_k$ is called {\it the {\rm (}homogeneous{\rm )} Macaulay inverse system of\, ${\mathbb C}[z_1,\dots,z_n]/I$}.
We have (see \cite[Proposition 2.11]{AI2}):
\begin{prop} \label{P:inverse-system} \it For any $\mathfrak{f} \in Y_n^{d-1}$, the associated form $\Psi(\mathfrak{f})$ is a Macaulay inverse system of the algebra $M_\mathfrak{f}$. \end{prop}
\noindent By Proposition \ref{P:inverse-system}, the morphism $\Psi$ can be thought of as a map assigning to every element $\mathfrak{f} \in Y_n^{d-1}$ a particular Macaulay inverse system of the algebra $M_\mathfrak{f}$. Similarly, $\Phi$ assigns to every element $f \in X_n^d$ a particular Macaulay inverse system of $M_f$.
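In Example \ref{E:example1} this can be seen by hand: for $f=z_1^3+z_2^3$ the associated form is $\frac{1}{18}e_1e_2$, and its annihilator in degree $d-1=2$ is spanned by the partial derivatives of $f$. A sympy check (ours, not part of the paper):

```python
from sympy import symbols, diff, Poly, Rational

z1, z2, e1, e2 = symbols('z1 z2 e1 e2')

def polar(g, F):
    """g diamond F: apply g(d/de1, d/de2) to F."""
    return sum(c * diff(F, e1, i, e2, j)
               for (i, j), c in Poly(g, z1, z2).terms())

# Associated form of f = z1^3 + z2^3 (Example with a1 = a2 = 1, n = 2, d = 3)
F = Rational(1, 18) * e1 * e2

# The partials f_z1 = 3 z1^2 and f_z2 = 3 z2^2 lie in the annihilator F^perp ...
assert polar(3*z1**2, F) == 0 and polar(3*z2**2, F) == 0
# ... while z1*z2 does not, so in degree d - 1 = 2 the annihilator F^perp
# picks out exactly the Jacobian ideal of f
assert polar(z1*z2, F) == Rational(1, 18)
```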
Let $U_n^{n(d-2)} \subset {\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$ be the locus of forms $F$ such that the subspace $F^{\perp} \cap{\mathbb C}[z_1,\dots,z_n]_{d-1}$ is $n$-dimensional and has a basis with nonvanishing resultant. A description of $U_n^{n(d-2)}$ was given in \cite[Theorem 3.5]{I6}. It follows from this description (and is easy to see independently) that $U_n^{n(d-2)}$ is locally closed in ${\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$, hence is a quasi-affine variety. By Proposition \ref{P:inverse-system} we have $\mathop{\rm im}\nolimits(\Psi)\subset U_n^{n(d-2)}$. In fact, one can show that $U_n^{n(d-2)}$ is exactly the image of $\Psi$:
\begin{prop} \label{P:image} \it $\mathop{\rm im}\nolimits(\Psi)=U_n^{n(d-2)}$. \end{prop}
\begin{proof} If $F \in U_n^{n(d-2)}$, then for the ideal $I\subset{\mathbb C}[z_1,\dots,z_n]$ generated by the subspace $F^{\perp} \cap{\mathbb C}[z_1,\dots,z_n]_{d-1}$ we have $I \subset F^{\perp}$. Hence, one has the inclusion $I_{n(d-2)} \subset F^{\perp}_{n(d-2)}$ of the $n(d-2)$th graded components of these ideals. As both $I_{n(d-2)}$ and $F^{\perp}_{n(d-2)}$ have codimension 1 in ${\mathbb C}[z_1,\dots,z_n]_{n(d-2)}$, it follows that $I_{n(d-2)}=F^{\perp}_{n(d-2)}$. By Proposition \ref{P:inverse-system}, for any basis $\mathfrak{f}$ of $F^{\perp} \cap{\mathbb C}[z_1,\dots,z_n]_{d-1}$ the associated form $\Psi(\mathfrak{f})$ is proportional to $F$, and therefore $F\in\mathop{\rm im}\nolimits(\Psi)$.\end{proof}
\noindent In the next subsection we will state a projectivized variant of this proposition.
\subsection{Projectivizations of $\Phi$ and $\Psi$} The constructions of the morphisms $\Phi$ and $\Psi$ can be projectivized. Let ${\mathbb P} X_n^d$ be the image of $X_n^d$ in the projective space ${\mathbb P}\,{\mathbb C}[z_1,\dots,z_n]_d$; it consists of all lines spanned by forms with nonzero discriminant. The discriminant on ${\mathbb C}[z_1,\dots,z_n]_d$ descends to a section of a line bundle over ${\mathbb P}\,{\mathbb C}[z_1,\dots,z_n]_d$, and ${\mathbb P} X_n^d$ is the affine open subset of ${\mathbb P}\,{\mathbb C}[z_1,\dots,z_n]_d$ where this section does not vanish (see Subsection \ref{reviewGIT} for details). The definition of the associated form of a form in $X_n^d$ (or, alternatively, equivariance property (\ref{equivarphitildephi})) yields that the morphism $\Phi$ descends to an $\mathop{\rm SL}\nolimits_n$-equivariant morphism $$ {\mathbb P}\Phi \co {\mathbb P} X_n^d \to {\mathbb P}\,{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}. $$ By Proposition \ref{P:inverse-system}, the morphism ${\mathbb P}\Phi$ can be regarded as a map assigning to every line ${\mathcal L}\in{\mathbb P} X_n^d$\, {\it the}\, Macaulay inverse system of the algebra $M_f$, where $f$ is any form that spans ${\mathcal L}$. Notice that by Example \ref{E:example1} this morphism is not injective.
Next, let $Z_n^{d-1}$ be the image of $Y_n^{d-1}$ in the Grassmannian $\mathop{\rm Gr}\nolimits(n,{\mathbb C}[z_1,\dots,z_n]_{d-1})$ of $n$-dimensional subspaces of ${\mathbb C}[z_1,\dots,z_n]_{d-1}$; it consists of all $n$-dimensional subspaces of ${\mathbb C}[z_1,\dots,z_n]_{d-1}$ having a basis with nonzero resultant. The resultant $\mathop{\rm Res}\nolimits$ on $({\mathbb C}[z_1,\dots,z_n]_{d-1})^{\oplus n}$ descends to a section of a line bundle over the Grassmannian $\mathop{\rm Gr}\nolimits(n, {\mathbb C}[z_1,\dots,z_n]_{d-1})$, and $Z_n^{d-1}$ is the affine open subset of $\mathop{\rm Gr}\nolimits(n,{\mathbb C}[z_1,\dots,z_n]_{d-1})$ where this section does not vanish (see Subsection \ref{reviewGIT}). Equivariance property (\ref{E:equiv2}) shows that the morphism $\Psi$ induces an $\mathop{\rm SL}\nolimits_n$-equivariant morphism $$ {\mathbb P}\Psi \co Z_n^{d-1} \to {\mathbb P}\,{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}. $$ By Proposition \ref{P:inverse-system}, the morphism ${\mathbb P}\Psi$ can be thought of as a map assigning to every subspace in $\mathop{\rm Gr}\nolimits(n,{\mathbb C}[z_1,\dots,z_n]_{d-1})$ the Macaulay inverse system of the algebra $M_\mathfrak{f}$, with $\mathfrak{f}=(f_1,\dots,f_n)$ being any basis of the subspace.
Proposition \ref{P:image} yields $\mathop{\rm im}\nolimits({\mathbb P}\Psi)={\mathbb P} U_n^{n(d-2)}$, where ${\mathbb P} U_n^{n(d-2)}$ is the image of $U_n^{n(d-2)}$ in the projective space ${\mathbb P}\,{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$. Clearly, ${\mathbb P} U_n^{n(d-2)}$ is locally closed, hence is a quasi-projective variety. With a little extra effort one obtains (see \cite[Proposition 2.13]{AI2}):
\begin{prop} \label{P:imagehat} \it The morphism ${\mathbb P}\Psi:Z_n^{d-1}\to{\mathbb P} U_n^{n(d-2)}$ is an isomorphism. \end{prop}
\begin{proof} The morphism $\chi \co {\mathbb P} U_n^{n(d-2)} \to Z_n^{d-1}$ given by $F \mapsto F^{\perp} \cap {\mathbb C}[z_1,\dots,z_n]_{d-1}$ yields the diagram $$\xymatrix{ Z_n^{d-1} \ar[r]^{\hspace{-0.3cm}{\mathbb P}\Psi} \ar[rd]^{{\mathrm {id}}} &{\mathbb P} U_n^{n(d-2)}\ar[d]^{\chi} \\
& Z_n^{d-1}, }$$ which is commutative by Proposition \ref{P:inverse-system}. As $\chi$ is separated, it follows that ${\mathbb P}\Psi$ is an isomorphism (see \cite[Remark 9.11]{GW}).\end{proof}
\noindent By Proposition \ref{P:imagehat}, the map ${\mathbb P}\Psi:Z_n^{d-1}\to{\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$ is a locally closed immersion, i.e., an isomorphism onto a locally closed subset of ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$.
Next, consider an open subset of ${\mathbb C}[z_1,\dots,z_n]_d$: $$ W_n^d:={\mathbb C}[z_1,\dots,z_n]_d\setminus\left\{f\in{\mathbb C}[z_1,\dots,z_n]_d:f_{z_{{}_1}},\dots,f_{z_{{}_n}}\,\hbox{are linearly dependent}\right\}. $$ Clearly, we have $X_n^d\subset W_n^d$. The gradient morphism $\nabla$ introduced in (\ref{gradientmorph}) induces the morphism $$ W_n^d\to\mathop{\rm Gr}\nolimits(n,{\mathbb C}[z_1,\dots,z_n]_{d-1}),\,\,\, f\mapsto\langle f_{z_{{}_1}},\dots,f_{z_{{}_n}}\rangle, $$ where $\langle{}\cdot{}\rangle$ denotes linear span. This morphism descends to an $\mathop{\rm SL}\nolimits_n$-equivariant morphism $$ {\mathbb P}\nabla\co{\mathbb P} W_n^d\to\mathop{\rm Gr}\nolimits(n,{\mathbb C}[z_1,\dots,z_n]_{d-1}), $$ where ${\mathbb P} W_n^d$ is the (open) image of $W_n^d$ in the projective space ${\mathbb P}\,{\mathbb C}[z_1,\dots,z_n]_d$. Clearly, ${\mathbb P}\nabla$ maps ${\mathbb P} X_n^d$ into $Z_n^{d-1}$, and from (\ref{decomposition}) we obtain \begin{equation}
{\mathbb P}\Phi={\mathbb P}\Psi\circ{\mathbb P}\nabla|_{{\mathbb P} X_n^d}.\label{decommpossition} \end{equation} This factorization will be important to us in connection with Conjecture \ref{conj2}.
\section{Results and open problems related to Conjecture \ref{conj2}}\label{results} \setcounter{equation}{0}
\subsection{Review of Geometric Invariant Theory}\label{reviewGIT} We start this section by giving a brief overview of some of the concepts of Geometric Invariant Theory, or GIT. The principal reference for GIT is \cite{MFK}, but we will follow the more elementary expositions given in \cite{Ne} and \cite[Chapter 9]{LR}.
First of all, recall that an (affine) algebraic group $G$ is called {\it reductive}\, if its unipotent radical is trivial. Since we only consider algebraic groups over ${\mathbb C}$, this condition is equivalent to $G$ being the affine algebraic complexification of a compact group $K$; in this case
$K\xhookrightarrow{}G$ realizes $G$ as the universal complexification of $K$ (see, e.g., \cite[p.~247]{VO}, \cite[Theorems 5.1, 5.3]{Ho}). The groups $\mathop{\rm GL}\nolimits_n$ and $\mathop{\rm SL}\nolimits_n$ are examples of reductive groups, being the affine algebraic complexifications of $\mathop{\rm U}\nolimits_n$ and $\mathop{\rm SU}\nolimits_n$, respectively.
Let $X$ be an algebraic variety and $G$ a reductive group acting algebraically on $X$. For any open $G$-invariant subset $U$ of $X$ denote by ${\mathcal O}_X(U)^G$ the algebra of $G$-invariant regular functions of $X$ on $U$. A {\it good quotient} of $X$ by $G$ is a pair $(Z,\pi)$, where $Z$ is an algebraic variety and $\pi \co X\rightarrow Z$ is a morphism such that: \begin{enumerate} \item[(P1)] $\pi$ is surjective; \item[(P2)] $\pi$ is $G$-invariant, i.e., $\pi(gx)=\pi(x)$ for all $g\in G$ and $x\in X$; \item[(P3)] $\pi$ is affine, i.e., the inverse image of an open affine subset of $Z$ is an open affine subset of $X$; \item[(P4)] the induced map $$ \pi^*\co\mathcal{O}_Z(U) \to \mathcal{O}_X(\pi^{-1}(U))^G,\quad f\mapsto f\circ\pi $$ is an isomorphism for every open subset $U\subset Z$. \end{enumerate}
The good quotient $Z$, if it exists, possesses the following additional properties: \begin{enumerate} \item[(P5)] for $x,x'\in X$ one has $\pi(x)=\pi(x')$ if and only if $\overline{G \cdot x}\cap\overline{G \cdot x'}\ne\emptyset$ (where $G \cdot x$ is the $G$-orbit of $x$), and every fiber of $\pi$ contains exactly one closed $G$-orbit (the unique orbit of minimal dimension); hence if $S_1$, $S_2$ are closed disjoint $G$-invariant subsets of $X$, then $\pi(S_1)\cap\pi(S_2)=\emptyset$;
\item[(P6)] if $U\subset X$ is a {\it saturated}\, open subset (i.e., an open subset satisfying\linebreak $U=\pi^{-1}(\pi(U))$), then $\pi(U)$ is open and $(\pi(U),\pi|_U)$ is a good quotient of $U$;
\item[(P7)] if $A$ is a $G$-invariant closed subset of $X$, then $\pi(A)$ is closed in $Z$ and $(\pi(A),\pi|_A)$ is a good quotient of $A$; \item[(P8)] if $X$ is normal, so is $Z$; \item[(P9)] if $Y$ is an algebraic variety and $\varphi \co X\rightarrow Y$ is a $G$-invariant morphism, then there exists a unique morphism $\tau_{\varphi} \co Z\rightarrow Y$ such that $\varphi=\tau_{\varphi}\circ\pi$. \end{enumerate}
In most situations, the construction of the morphism $\pi$ will be clear from the context; therefore we usually apply the term \lq\lq good quotient\rq\rq\, to the variety $Z$ rather than the pair $(Z,\pi)$. A good quotient, if it exists, is unique up to isomorphism and is denoted by $X/\hspace{-0.1cm}/ G$. If every fiber of $\pi$ consists of a single (closed) orbit, the quotient $X/\hspace{-0.1cm}/ G$ is called {\it geometric}.
We will now describe two cases when good quotients are known to exist.
{\bf Case 1.} Assume that $X$ is an affine variety, so $X=\mathop{\rm Spec}\nolimits{\mathbb C}[X]$, where the coordinate ring ${\mathbb C}[X]$ is finitely generated. Clearly, $G$ acts on ${\mathbb C}[X]$, and this action is rational (see, e.g., \cite[p.~47]{Ne} for the definition of a rational action on an algebra). We now note that over ${\mathbb C}$ the condition of reductivity for affine algebraic groups is equivalent to those of {\it linear reductivity}\, and {\it geometric reductivity}\, (see \cite[pp.~96--98]{LR} for details). Then by the Gordan-Hilbert-Mumford-Nagata theorem (see \cite{G}, \cite{Hi1}, \cite{Hi2}, \cite[p.~29]{MFK}, \cite{Na}), the algebra of invariants ${\mathbb C}[X]^G$ is finitely generated. Choose generators $f_1,\dots,f_m$ of ${\mathbb C}[X]^G$ and set $ \pi:= (f_1,\dots,f_m): X\rightarrow {\mathbb C}^m. $ Next, consider the ideal \begin{equation} I:=\{g\in{\mathbb C}[w_1,\dots,w_m]: g\circ\pi=0\}.\label{idealforquotient} \end{equation} Clearly, $I$ is a radical ideal in ${\mathbb C}[w_1,\dots,w_m]$ and ${\mathbb C}[X]^G\simeq{\mathbb C}[w_1,\dots,w_m]/I$. Let \begin{equation} Z:=\{w\in{\mathbb C}^m: g(w) = 0\,\,\hbox{for all}\,\, g\in I\}.\label{affinequotient} \end{equation} It can be shown that the affine variety $Z$ is a good quotient of $X$. In other words, one has $X/\hspace{-0.1cm}/ G=\mathop{\rm Spec}\nolimits{\mathbb C}[X]^G$.
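A standard toy illustration of Case 1 (not taken from the paper): let ${\mathbb C}^*$ act on ${\mathbb C}^2$ by $t\cdot(x,y)=(tx,t^{-1}y)$. The invariant ring is generated by $xy$, so ${\mathbb C}^2/\hspace{-0.1cm}/{\mathbb C}^*\simeq{\mathbb C}$ with $\pi(x,y)=xy$; the fiber over $0$ consists of three orbits (the two punctured axes and the origin), of which only the origin is closed, in accordance with (P5). A quick sympy check of the invariance:

```python
from sympy import symbols, simplify

x, y, t = symbols('x y t', nonzero=True)

# t . (x, y) = (t*x, y/t): the coordinate functions are not invariant ...
assert simplify(t*x - x) != 0
# ... but x*y is, and it generates the invariant ring of this C* action
assert simplify((t*x)*(y/t) - x*y) == 0
```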
If $V$ is a vector space over ${\mathbb C}$, then $V\setminus\{0\}/\hspace{-0.1cm}/{\mathbb C}^*$ is the projective space ${\mathbb P} V$, with $\pi:V\setminus\{0\}\to{\mathbb P} V$ being the natural projection. Note that every ${\mathbb C}^*$-invariant open subset of $V\setminus\{0\}$ is saturated with respect to $\pi$. Hence, by property (P6) we see that ${\mathbb P} X_n^d=X_n^d/\hspace{-0.1cm}/{\mathbb C}^*$ and ${\mathbb P} W_n^d=W_n^d/\hspace{-0.1cm}/{\mathbb C}^*$. Also, using properties (P6) and (P7) one observes that ${\mathbb P} U_n^{n(d-2)}=U_n^{n(d-2)}/\hspace{-0.1cm}/{\mathbb C}^*$.
Let $N:=\dim_{{\mathbb C}}V$. For $\ell\le N$ setting $$ S(\ell,V):=V^{\oplus \ell}\setminus\{(v_1,\dots,v_\ell)\in V^{\oplus \ell}: v_1,\dots, v_\ell\,\hbox{are linearly dependent}\}, $$ we have $\mathop{\rm Gr}\nolimits(\ell,V)=S(\ell,V)/\hspace{-0.1cm}/\mathop{\rm GL}\nolimits_\ell$ with $$ \pi\co S(\ell,V)\to \mathop{\rm Gr}\nolimits(\ell,V),\quad (v_1,\dots,v_\ell)\mapsto \langle v_1,\dots,v_\ell\rangle. $$ Since $Y_n^{d-1}$ is saturated in $S(n,{\mathbb C}[z_1,\dots,z_n]_{d-1})$, it follows that $Z_n^{d-1}=Y_n^{d-1}/\hspace{-0.1cm}/ \mathop{\rm GL}\nolimits_n$.
{\bf Case 2.} To describe this case, we need to give some definitions. Let $G$ be a reductive group with a linear representation $G\rightarrow\mathop{\rm GL}\nolimits(V)$ on a vector space $V$ of dimension $N$, and $X\subset V$ a $G$-invariant affine algebraic subvariety with the algebraic action of $G$ induced from that on $V$. A point $x\in X$ is called {\it semistable}\, if the closure of the orbit $G\cdot x$ does not contain $0$, {\it polystable}\, if $x\ne 0$ and $G\cdot x$ is closed, and {\it stable}\, if $x$ is polystable and $\dim G\cdot x=\dim G$ (or, equivalently, the stabilizer of $x$ is zero-dimensional). The three loci are denoted by $X^{\mathop{\rm ss}\nolimits}$, $X^{\mathop{\rm ps}\nolimits}$, and $X^{\mathop{\rm s}\nolimits}$, respectively. Clearly, $X^{\mathop{\rm s}\nolimits}\subset X^{\mathop{\rm ps}\nolimits} \subset X^{\mathop{\rm ss}\nolimits}$.
Let now $X \subset{\mathbb P} V$ be a $G$-invariant projective algebraic variety with the algebraic action of $G$ induced from that on ${\mathbb P} V$. Then the semistability, polystability and stability of a point $x\in X$ are understood as the corresponding concepts for some (hence every) point $\widehat x$ lying over $x$ in the affine cone $\widehat X\subset V$ over $X$. We denote the three loci by $X^{\mathop{\rm ss}\nolimits}$, $X^{\mathop{\rm ps}\nolimits}$, and $X^{\mathop{\rm s}\nolimits}$, respectively. One has $X^{\mathop{\rm s}\nolimits}\subset X^{\mathop{\rm ps}\nolimits}\subset X^{\mathop{\rm ss}\nolimits}$. The loci $X^{\mathop{\rm s}\nolimits}$ and $X^{\mathop{\rm ss}\nolimits}$ are open subsets of $X$, and the following holds: $$ \begin{array}{l} X^{\mathop{\rm ps}\nolimits}=\{x\in X^{\mathop{\rm ss}\nolimits}: \hbox{$G\cdot x$ is closed in $X^{\mathop{\rm ss}\nolimits}$}\},\\
\\ X^{\mathop{\rm s}\nolimits}= \{x\in X^{\mathop{\rm ss}\nolimits}: \hbox{$G\cdot x$ is closed in $X^{\mathop{\rm ss}\nolimits}$ and $\dim G\cdot x=\dim G$}\}. \end{array} $$
Choose coordinates $x_1,\dots,x_N$ in $V$. Then by \cite[Proposition 9.5.2.2]{LR}, the semi\-stability of a point $x\in X$ is characterized by the existence of a $G$-invariant homogeneous form of positive degree in $x_1,\dots,x_N$ nonvanishing at some (hence every) lift $\widehat x$ of $x$. In fact, for any nonnegative integer $k$ any element of ${\mathbb C}[x_1,\dots,x_N]_k$ can be identified with a global section of the $k$th tensor power $H^{\otimes k}$ of the {\it hyperplane line bundle}\, $H$ on ${\mathbb P} V$ (see, e.g., \cite[Example 13.16]{GW}). Therefore the condition of the nonvanishing of a $G$-invariant homogeneous form at $\widehat x$ is equivalent to that of the nonvanishing at $x$ of the corresponding global $G$-invariant section of a power\linebreak of $H$.
For instance, let us think of the Grassmannian $\mathop{\rm Gr}\nolimits(n,{\mathbb C}[z_1,\dots,z_n]_{d-1})$ as the projective variety in ${\mathbb P}\bigwedge^n{\mathbb C}[z_1,\dots,z_n]_{d-1}$ obtained via the Pl\"ucker embedding. It then follows that $Z_n^{d-1}$ lies in $\mathop{\rm Gr}\nolimits(n,{\mathbb C}[z_1,\dots,z_n]_{d-1})^{\mathop{\rm ss}\nolimits}$ since $Z_n^{d-1}$ consists exactly of the elements of $\mathop{\rm Gr}\nolimits(n,{\mathbb C}[z_1,\dots,z_n]_{d-1})$ at which the resultant $\mathop{\rm Res}\nolimits$, understood as the restriction of a global $\mathop{\rm SL}\nolimits_n$-invariant section of $H^{\otimes(d-1)^{n-1}}$ to $\mathop{\rm Gr}\nolimits(n,{\mathbb C}[z_1,\dots,z_n]_{d-1})$, does not vanish. This description of $Z_n^{d-1}$ is a consequence of \cite[p.~257, Corollary 2.3 and p.~427, Proposition 1.1]{GKZ}. Similarly, we have ${\mathbb P} X_n^d\subset{\mathbb P}{\mathbb C}[z_1,\dots,z_n]_d^{\mathop{\rm ss}\nolimits}$ since ${\mathbb P} X_n^d$ consists exactly of the elements of ${\mathbb P}{\mathbb C}[z_1,\dots,z_n]_d$ at which the discriminant $\Delta$, understood as a global $\mathop{\rm SL}\nolimits_n$-invariant section of $H^{\otimes n(d-1)^{n-1}}$, does not vanish. In fact, by \cite[Proposition 4.2]{MFK}, we have ${\mathbb P} X_n^d\subset{\mathbb P}{\mathbb C}[z_1,\dots,z_n]_d^{\mathop{\rm s}\nolimits}$.
Returning to the general setting, let $L:=H|_X$. One can show that for all sufficiently high $k$ every global section of $L^{\otimes k}$ is the restriction of a global section of $H^{\otimes k}$ to $X$ (see \cite[p.~13]{Ne}). Consider the algebra $$ R:=\bigoplus_{k=0}^{\infty}R_k, $$ where $R_k:=\Gamma(X,L^{\otimes k})$. It is finitely generated (see \cite[Chapter III, Theorem 5.2]{Hart} and \cite[Proposition 7.45]{GW}), and we have $X=\mathop{\rm Proj}\nolimits R$ (see \cite[Proposition 13.74]{GW}).
Now, the group $G$ rationally acts on $R$, and by the Gordan-Hilbert-Mumford-Nagata theorem the algebra of global $G$-invariant sections $$ R^G=\bigoplus_{k=0}^{\infty}R_k^G $$ is finitely generated over $R_0^G={\mathbb C}$. By \cite[Chapter III, \S1.3, Proposition 3]{Bo} (see also \cite[Lemma 13.10 and Remark 13.11]{GW}), we can find $p$ such that the {\it Veronese subalgebra} $$ R^{G(p)}:=\bigoplus_{k=0}^{\infty}R_{kp}^G $$ is generated in degree $1$ over $R_0^G$, namely $R^{G(p)}=R_0^G[R_p^G]$. Let $f_1,\dots,f_m$ be degree $1$ generators of $R^{G(p)}$, and consider the rational map $$ \pi\co X\xdashrightarrow{}{\mathbb P}^{m-1},\quad [x_1:\dots:x_N]\mapsto [f_1(x_1,\dots,x_N):\cdots:f_m(x_1,\dots,x_N)]. $$ By \cite[Proposition 9.5.2.2]{LR}, the indeterminacy locus of this rational map is exactly the complement to the semistable locus $X^{\mathop{\rm ss}\nolimits}$, so $\pi$ is a morphism from $X^{\mathop{\rm ss}\nolimits}$ to ${\mathbb P}^{m-1}$.
Now, consider the ideal $I$ defined by formula (\ref{idealforquotient}). This ideal is homogeneous and is generated by all forms $g$ in $w_1,\dots,w_m$ such that $g\circ\pi=0$. Clearly, $I$ is a radical ideal in ${\mathbb C}[w_1,\dots,w_m]$ and $R^{G(p)}\simeq{\mathbb C}[w_1,\dots,w_m]/I$. Then, analogously to (\ref{affinequotient}) we set $$ Z:=\{[w_1:\cdots:w_m]\in{\mathbb P}^{m-1}: g(w_1,\dots,w_m) = 0\,\,\hbox{for all}\,\, g\in I\}. $$ It can be shown that the projective variety $Z$ is a good quotient of $X^{\mathop{\rm ss}\nolimits}$. In other words, one has $X^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/ G=\mathop{\rm Proj}\nolimits R^{G(p)}$ (cf.~\cite[Proposition 13.12]{GW}), and by \cite[Remark 13.7]{GW} we see $X^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/ G=\mathop{\rm Proj}\nolimits R^G$.
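A classical example illustrating this construction is the action of $G=\mathop{\rm SL}\nolimits_2$ on $X={\mathbb P}{\mathbb C}[w_1,w_2]_2$. Here $R$ may be identified with ${\mathbb C}[a,b,c]$, where $aw_1^2+bw_1w_2+cw_2^2$ is a general binary quadratic, and $R^G$ is generated by the discriminant $b^2-4ac$. The semistable locus consists of the quadratics with distinct roots, and $X^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/ G=\mathop{\rm Proj}\nolimits\,{\mathbb C}[b^2-4ac]$ is a single point, reflecting the fact that the quadratics with distinct roots form a single $\mathop{\rm SL}\nolimits_2$-orbit in ${\mathbb P}{\mathbb C}[w_1,w_2]_2$. Note that $X^{\mathop{\rm s}\nolimits}=\emptyset$ since the stabilizer of every such quadratic is positive-dimensional.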
As the open subset $X^{\mathop{\rm s}\nolimits}\subset X^{\mathop{\rm ss}\nolimits}$ is saturated, $\pi(X^{\mathop{\rm s}\nolimits})\subset X^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/ G$ is a good quotient of $X^{\mathop{\rm s}\nolimits}$; this quotient is quasi-projective and geometric.
\subsection{Interpretation of Conjecture \ref{conj2} via GIT}
Recall that the image of the morphism ${\mathbb P}\Psi$ coincides with ${\mathbb P} U_n^{n(d-2)}$ (see Proposition \ref{P:imagehat}). By \cite[Theorem 1.2]{F} (see also \cite{FI}), the variety ${\mathbb P} U_n^{n(d-2)}$ lies in ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}$. Properties (\ref{E:equiv2}) and (P9) then show that there exists a morphism $$ \overline{{\mathbb P}\Psi}\co Z_n^{d-1}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n \to {\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n $$ of good GIT quotients by $\mathop{\rm SL}\nolimits_n$ such that the following diagram commutes: $$ \xymatrix{ Z_n^{d-1} \ar[r]^{\hspace{-0.8cm}{{\mathbb P}\Psi}} \ar[d]_{\pi_2} & {\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits} \ar[d]^{\pi_1} \\ Z_n^{d-1}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n \ar[r]^{\hspace{-1.2cm}{{\overline{{\mathbb P}\Psi}}}} &{\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n } $$ (here and below we denote by $\pi_1,\pi_2,\dots$ the relevant quotient projections). Notice that $Z_n^{d-1}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n$ is affine while ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n$ is projective.
Next, the morphism ${\mathbb P}\nabla|_{{\mathbb P} X_n^d}$ leads to a morphism of good affine GIT quotients $$
\overline{{\mathbb P}\nabla|_{{\mathbb P} X_n^d}}\co{\mathbb P} X_n^d /\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n \to Z_n^{d-1}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n $$ and a commutative diagram $$ \xymatrix{
{\mathbb P} X_n^d \ar[r]^{\hspace{-0.4cm}{{\mathbb P}\nabla|_{{\mathbb P} X_n^d}}} \ar[d]_{\pi_3} & Z_n^{d-1} \ar[d]^{\pi_2} \\
{\mathbb P} X_n^d/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n \ar[r]^{\hspace{-0.4cm}{\overline{{\mathbb P}\nabla|_{{\mathbb P} X_n^d}}}} &Z_n^{d-1}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n. } $$
Recalling factorization (\ref{decommpossition}), we now see that ${\mathbb P}\Phi$ maps ${\mathbb P} X_n^d$ to the semistable locus ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}$ and that the morphism $$ \overline{{\mathbb P}\Phi}\co{\mathbb P} X_n^d /\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n\to{\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n $$ corresponding to the commutative diagram $$ \xymatrix{ {\mathbb P} X_n^d \ar[r]^{\hspace{-0.8cm}{{\mathbb P}\Phi}} \ar[d]_{\pi_3} & {\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits} \ar[d]^{\pi_1} \\ {\mathbb P} X_n^d/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n \ar[r]^{\hspace{-1.2cm}{{\overline{{\mathbb P}\Phi}}}} &{\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n } $$ factors as \begin{equation}
{\overline{{\mathbb P}\Phi}}={\overline{{\mathbb P}\Psi}}\circ\overline{{\mathbb P}\nabla|_{{\mathbb P} X_n^d}}.\label{decomposi1} \end{equation}
We will now relate the above facts to Conjecture {\rm \ref{conj2}}. The following claim corrects the assertion made in \cite{AI2} that a positive answer to Question 3.1 stated therein yields the conjecture; the claim was suggested to us by M.~Fedorchuk.
\begin{claim}\label{claimconj} \it In order to establish Conjecture {\rm \ref{conj2}} it suffices to show that ${\overline{{\mathbb P}\Phi}}$ is an isomorphism onto a closed subset of an affine open subset of the GIT quotient\, ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n$. \end{claim}
\begin{proof} Let $U\subset{\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n$ be an affine open subset and $A\subset U$ a closed subset such that $$ {\overline{{\mathbb P}\Phi}}\co {\mathbb P} X_n^d/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n\to A $$ is an isomorphism. Fix a $\mathop{\rm GL}\nolimits_n$-invariant regular function $S$ on $X_n^d$. By property (P4) it is the pullback of a uniquely defined regular function $\bar S$ on ${\mathbb P} X_n^d/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n$. Let $T$ be the push-forward of $\bar S$ to $A$ by means of ${\overline{{\mathbb P}\Phi}}$. Since $A$ is closed in $U$ and $U$ is affine, the function $T$ extends to a regular function on $U$. The pull-back of this function by means of $\pi_1$ yields an $\mathop{\rm SL}\nolimits_n$-invariant regular function on the dense open subset $\pi_1^{-1}(U)$ of ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}$, hence a $\mathop{\rm GL}\nolimits_n$-invariant rational function $R$ on ${\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$. Clearly, $R$ is defined on $\mathop{\rm im}\nolimits(\Phi)$ and $R\circ\Phi=S$ as required by Conjecture {\rm \ref{conj2}}.\end{proof}
Factorization (\ref{decomposi1}) yields that in order to show that the map ${\overline{{\mathbb P}\Phi}}$ satisfies the condition stated in Claim \ref{claimconj}, it suffices to prove the following: \begin{itemize}
\item[(C1)] $\overline{{\mathbb P}\nabla|_{{\mathbb P} X_n^d}}$ is a closed immersion, i.e., an isomorphism onto a closed subset of $Z_n^{d-1}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n$;
\item[(C2)] ${\overline{{\mathbb P}\Psi}}$ is an isomorphism onto a closed subset of an affine open subset of ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n$.
\end{itemize}
Neither of conditions (C1), (C2) has been established in full generality, so we state:
\begin{openprob}\label{prob1} \it Prove that conditions {\rm(C1)} and {\rm (C2)} are satisfied for all $n\ge 2$, $d\ge 3$. \end{openprob}
\noindent Below, we will list known results leading towards settling these conditions.
\subsection{Results concerning conditions (C1) and (C2)} We start with condition (C1). First, we note that the locus ${\mathbb P} W_n^d$, where the morphism ${\mathbb P}\nabla$ is defined, contains the semistable locus ${\mathbb P}{\mathbb C}[z_1,\dots,z_n]_d^{\mathop{\rm ss}\nolimits}$ (see \cite[p.~452]{F}). Next, it is shown in \cite[Theorem 1.7]{F} that the morphism ${\mathbb P}\nabla$ preserves semistability, i.e., that the element\linebreak ${\mathbb P}\nabla(f)\in\mathop{\rm Gr}\nolimits(n,{\mathbb C}[z_1,\dots,z_n]_{d-1})$ is semistable whenever $f\in{\mathbb P} W_n^d$ is semistable. Denoting the restriction of ${\mathbb P}\nabla$ to ${\mathbb P}{\mathbb C}[z_1,\dots,z_n]_d^{\mathop{\rm ss}\nolimits}$ by the same symbol, we thus have a morphism of good GIT quotients $$ \overline{{\mathbb P}\nabla}\co{\mathbb P}{\mathbb C}[z_1,\dots,z_n]_d^{\mathop{\rm ss}\nolimits} /\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n \to \mathop{\rm Gr}\nolimits(n,{\mathbb C}[z_1,\dots,z_n]_{d-1})^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n $$ and a commutative diagram $$ \xymatrix{ {\mathbb P}{\mathbb C}[z_1,\dots,z_n]_d^{\mathop{\rm ss}\nolimits} \ar[r]^{\hspace{-0.4cm}{{\mathbb P}\nabla}} \ar[d]_{\pi_5} & \mathop{\rm Gr}\nolimits(n,{\mathbb C}[z_1,\dots,z_n]_{d-1})^{\mathop{\rm ss}\nolimits} \ar[d]^{\pi_4} \\ {\mathbb P}{\mathbb C}[z_1,\dots,z_n]_d^{\mathop{\rm ss}\nolimits} /\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n \ar[r]^{\hspace{-0.8cm}{\overline{{\mathbb P}\nabla}}} &\mathop{\rm Gr}\nolimits(n,{\mathbb C}[z_1,\dots,z_n]_{d-1})^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n. } $$
As each of the subsets ${\mathbb P} X_n^d$ and $Z_n^{d-1}$ is defined as the locus where an $\mathop{\rm SL}\nolimits_n$-invariant section of a power of the hyperplane bundle does not vanish, these subsets are saturated. Hence we can assume that the projection $\pi_3$ is the restriction of $\pi_5$ to ${\mathbb P} X_n^d$, the projection $\pi_2$ is the restriction of $\pi_4$ to $Z_n^{d-1}$, and $\overline{{\mathbb P}\nabla|_{{\mathbb P} X_n^d}}$ is the restriction of $\overline{{\mathbb P}\nabla}$ to ${\mathbb P} X_n^d/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n\subset {\mathbb P}{\mathbb C}[z_1,\dots,z_n]_d^{\mathop{\rm ss}\nolimits} /\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n $.
We now state:
\begin{theorem}\label{nablefedorchuk}\cite[Proposition 2.1, part (3)]{F}
The morphism $\overline{{\mathbb P}\nabla|_{{\mathbb P} X_n^d}}$ is finite and injective. \end{theorem}
By Theorem \ref{nablefedorchuk} and Zariski's Main Theorem (see \cite[Corollary 17.4.8]{TY}), condition (C1) will follow if we establish the normality of the (closed) image of $\overline{{\mathbb P}\nabla|_{{\mathbb P} X_n^d}}$ in $Z_n^{d-1}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n$. Thus, condition (C1) is a consequence of a positive answer to:
\begin{openprob}\label{probnormality}
\it Show that the image of $\overline{{\mathbb P}\nabla|_{{\mathbb P} X_n^d}}$ is normal for all $n\ge 2$, $d\ge 3$. \end{openprob}
While the above problem remains open in full generality, for the case $n=2$ we have the following result, which even gives the normality of $\mathop{\rm im}\nolimits(\overline{{\mathbb P}\nabla})$:
\begin{theorem}\label{isaevalper}\cite[Corollaries 5.5 and 6.6]{AI2} Assume that $n=2$. Then the morphism $\overline{{\mathbb P}\nabla}$ is finite and injective, and its image in $\mathop{\rm Gr}\nolimits(2,{\mathbb C}[z_1,z_2]_{d-1})^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_2$ is normal. \end{theorem}
Another positive result on condition (C1) concerns the case of ternary cubics (here $n=d=3$):
\begin{proposition}\label{ternarycubicsnabla}
The image $\mathop{\rm im}\nolimits(\overline{{\mathbb P}\nabla|_{{\mathbb P} X_3^3}})$ is a nonsingular curve in $Z_3^2/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_3$. \end{proposition}
\noindent Proposition \ref{ternarycubicsnabla} has never appeared in print as stated but easily follows from other published facts. Details will be given in Section \ref{S:binaryquarticternarycubics} (see Remark \ref{Rem:binaryquarticternarycubics}).
Next, we will discuss condition (C2). First of all, the following holds:
\begin{theorem}\label{psilocclosedimmer}\cite[Corollary 5.4]{FI} The morphism $\overline{{\mathbb P}\Psi}$ is a locally closed immersion. \end{theorem}
\begin{proof} The proof is primarily based on \cite[Theorem 1.2]{FI}, the main result of \cite{FI}, which states that ${\mathbb P}\Psi$ maps polystable points to polystable points. Once this difficult fact has been established, we proceed as follows.
Recall that by Proposition \ref{P:imagehat} the morphism ${\mathbb P}\Psi$ is a locally closed immersion; specifically, it is an isomorphism onto the $\mathop{\rm SL}\nolimits_n$-invariant locally closed subset ${\mathbb P} U_n^{n(d-2)}$ of the semistable locus ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}$. Therefore, property (P5) implies that $\overline{{\mathbb P}\Psi}$ is injective.
Next, consider the closure $Z$ of ${\mathbb P} U_n^{n(d-2)}$ in ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}$. Clearly, $Z$ is\linebreak $\mathop{\rm SL}\nolimits_n$-invariant. Then by property (P7) we see that $\pi_1(Z)$ is closed in the projective variety ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n$ and is a good quotient of $Z$. Since ${\mathbb P} U_n^{n(d-2)}$ is locally closed in ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}$, it is open in $Z$. Let us show that ${\mathbb P} U_n^{n(d-2)}$ is saturated in $Z$ as well. Fix $a\in{\mathbb P} U_n^{n(d-2)}$ and let $O$ be the unique closed\linebreak $\mathop{\rm SL}\nolimits_n$-orbit in $Z$ that lies in the closure of $\mathop{\rm SL}\nolimits_n\cdot\, a$ in $Z$ (see property (P5)). Set $\widetilde a:=({\mathbb P}\Psi)^{-1}(a)\in Z_n^{d-1}$ and consider the closed $\mathop{\rm SL}\nolimits_n$-orbit $\widetilde O$ that lies in the closure of $\mathop{\rm SL}\nolimits_n\cdot\,\widetilde a$ in $Z_n^{d-1}$. Appealing to \cite[Theorem 1.2]{FI} once again, we see that ${\mathbb P}\Psi(\widetilde O)$ is a closed $\mathop{\rm SL}\nolimits_n$-orbit in $Z$ and that ${\mathbb P}\Psi(\widetilde O)$ lies in the closure of $\mathop{\rm SL}\nolimits_n\cdot\,a$ in $Z$. It then follows that $O={\mathbb P}\Psi(\widetilde O)$, so $O$ is contained in ${\mathbb P} U_n^{n(d-2)}$. Since ${\mathbb P} U_n^{n(d-2)}$ is open in $Z$, the $\mathop{\rm SL}\nolimits_n$-orbit of every point of $Z$ that contains $O$ in its closure in fact lies in ${\mathbb P} U_n^{n(d-2)}$. This shows that ${\mathbb P} U_n^{n(d-2)}$ is saturated in $Z$ as claimed.
By property (P6) we then have that $\pi_1\left({\mathbb P} U_n^{n(d-2)}\right)$ is open in $\pi_1(Z)$ and is a good quotient of ${\mathbb P} U_n^{n(d-2)}$. Now, recall that ${\mathbb P} U_n^{n(d-2)}$ is isomorphic to the smooth variety $Z_n^{d-1}$, hence is normal. By property (P8) we therefore see that $\pi_1\left({\mathbb P} U_n^{n(d-2)}\right)$ is a normal variety. Zariski's Main Theorem now implies that $\overline{{\mathbb P}\Psi}$ is an isomorphism onto $\mathop{\rm im}\nolimits(\overline{{\mathbb P}\Psi})=\pi_1\left({\mathbb P} U_n^{n(d-2)}\right)$, hence is a locally closed immersion.\end{proof}
\noindent Despite the fact that ${\mathbb P}\Phi$ is not injective, factorization (\ref{decomposi1}) and Theorems \ref{nablefedorchuk}, \ref{psilocclosedimmer} imply
\begin{corollary}\label{barphiinj} The morphism $\overline{{\mathbb P}\Phi}$ is injective. \end{corollary}
Note that Theorem \ref{psilocclosedimmer} states that the map $\overline{{\mathbb P}\Psi}$ is an isomorphism onto the closed subset $\pi_1\left({\mathbb P} U_n^{n(d-2)}\right)$ of an open subset of ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n$ but does not assert that the open subset may be chosen to be affine as required by condition (C2). We will now make the following observation:
\begin{proposition}\label{invargeneral} Suppose that there exists a homogeneous $\mathop{\rm SL}\nolimits_n$-invariant $J$ on the space ${\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$ such that $U_n^{n(d-2)}$ is a closed subset of the complement to the zero locus of $J$. Then $\pi_1\left({\mathbb P} U_n^{n(d-2)}\right)$ is a closed subset of an affine open subset of ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n$, hence condition {\rm (C2)} is satisfied. \end{proposition}
\begin{proof} Let $U$ be the affine open subset of ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$ that consists of all elements of ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$ at which $J$, understood as a global $\mathop{\rm SL}\nolimits_n$-invariant section of a power of the hyperplane bundle, does not vanish. Then ${\mathbb P} U_n^{n(d-2)}$ is a closed subset of $U$. Since $U$ is a saturated open subset of ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}$, by property (P6) it follows that $\pi_1(U)$ is open in ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n$ and is a good quotient of $U$. As $U$ is affine, its good quotient is also affine, hence $\pi_1(U)$ is an affine open subset of ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n$. Since ${\mathbb P} U_n^{n(d-2)}$ is a closed $\mathop{\rm SL}\nolimits_n$-invariant subset of $U$, by property (P7) we see that $\pi_1\left({\mathbb P} U_n^{n(d-2)}\right)$ is a closed subset of $\pi_1(U)$.\end{proof}
Following Proposition \ref{invargeneral}, we now state:
\begin{openprob}\label{openprobexistinvari} \it Show that for all $n\ge 2$, $d\ge 3$ one can find a homogeneous $\mathop{\rm SL}\nolimits_n$-invariant on the space ${\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$ such that $U_n^{n(d-2)}$ is a closed subset of the complement to its zero locus. \end{openprob}
\noindent We note that in \cite[Theorem 3.5]{I6} we constructed a hypersurface in an affine variety $A$ containing $U_n^{n(d-2)}$ that does not intersect $U_n^{n(d-2)}$, but it is not clear whether this hypersurface arises as the zero set of an $\mathop{\rm SL}\nolimits_n$-invariant.
While Problem \ref{openprobexistinvari} remains open in full generality, it has been solved in the cases $n=2$ and $n=d=3$. To discuss the case $n=2$, let us recall that the {\it catalecticant}\, of a binary form $$ f=\sum_{i=0}^{2N}{2N \choose i}a_iw_1^{2N-i}w_2^i $$ of even degree $2N$ is \begin{equation} \mathop{\rm Cat}\nolimits(f):=\det\left(\begin{array}{cccc} a_0 & a_1 & \dots & a_N\\ a_1 & a_2 & \dots & a_{N+1}\\ \vdots & \vdots & \ddots & \vdots\\ a_N & a_{N+1} & \dots & a_{2N} \end{array} \right).\label{catalectform} \end{equation} It is an $\mathop{\rm SL}\nolimits_2$-invariant and does not vanish if and only if the $N+1$ partial derivatives of $f$ of order $N$ are linearly independent in ${\mathbb C}[w_1,w_2]_N$ (see, e.g., \cite[Lemma 6.2]{K}). Alternatively, the set where the catalecticant is nonzero is the complement to the closure of the locus of forms in ${\mathbb C}[w_1,w_2]_{2N}$ expressible as the sum of the $2N$th powers of $N$ linear forms (see, e.g., \cite[\S 208]{Ell} or \cite[\S 187]{GY}).
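The determinant (\ref{catalectform}) is straightforward to evaluate in practice. The following sketch (in sympy; the helper name \verb"catalecticant" is ours) treats two binary quartics, i.e., $N=2$: the quartic $w_1^4+w_2^4$ is a sum of two fourth powers of linear forms, so its catalecticant vanishes, while for $w_1^4+w_1^2w_2^2+w_2^4$ it does not.

```python
import sympy as sp

w1, w2 = sp.symbols('w1 w2')

def catalecticant(f, N):
    """Catalecticant of a binary form of even degree 2N, written as
    f = sum_i binom(2N, i) * a_i * w1^(2N-i) * w2^i."""
    P = sp.Poly(f, w1, w2)
    a = [P.coeff_monomial(w1**(2*N - i) * w2**i) / sp.binomial(2*N, i)
         for i in range(2*N + 1)]
    # (N+1) x (N+1) Hankel matrix of the normalized coefficients
    M = sp.Matrix(N + 1, N + 1, lambda r, c: a[r + c])
    return M.det()

# a sum of two 4th powers of linear forms: the catalecticant vanishes
print(catalecticant(w1**4 + w2**4, 2))                # 0
# a quartic whose 2nd-order partials are linearly independent
print(catalecticant(w1**4 + w1**2*w2**2 + w2**4, 2))  # 35/216
```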
Notice that the catalecticant is defined on the target space of the morphism $\Psi \co Y_2^{d-1} \to {\mathbb C}[e_1,e_2]_{2(d-2)}$. Let us denote the affine open subset of ${\mathbb C}[e_1,e_2]_{2(d-2)}$ where $\mathop{\rm Cat}\nolimits$ does not vanish by $V_2^{2(d-2)}$ and its image in ${\mathbb P}{\mathbb C}[e_1,e_2]_{2(d-2)}$ by ${\mathbb P} V_2^{2(d-2)}$. Clearly, ${\mathbb P} V_2^{2(d-2)}$ is the affine open subset of ${\mathbb P}{\mathbb C}[e_1,e_2]_{2(d-2)}$ that consists of all elements of ${\mathbb P}{\mathbb C}[e_1,e_2]_{2(d-2)}$ at which the catalecticant $\mathop{\rm Cat}\nolimits$, understood as a global $\mathop{\rm SL}\nolimits_2$-invariant section of $H^{\otimes(d-1)}$, does not vanish.
For binary forms the following holds:
\begin{theorem}\label{catnonzero}\cite[Proposition 4.3]{AI2} One has $U_2^{2(d-2)}=V_2^{2(d-2)}$. \end{theorem}
Next, we let $n=d=3$. Notice that in this case $n(d-2)=d=3$. Let $A_4$ be the {\it degree four Aronhold invariant}\, of ternary cubics. An explicit formula for $A_4$ can be found, for example, in \cite[p.~191]{Sal}. Namely, for a ternary cubic $$ \begin{array}{l} f(w_1,w_2,w_3)=aw_1^3+bw_2^3+cw_3^3+3dw_1^2w_2+3pw_1^2w_3+3qw_1w_2^2+\\
\\ \hspace{6cm}3rw_2^2w_3+3sw_1w_3^2+3tw_2w_3^2+6uw_1w_2w_3 \end{array} $$ one has \begin{equation} \begin{array}{l} A_4(f)=abcu-bcdp-acqr-abst-u(art+bps+cdq)+\\
\\ \hspace{1.5cm}aqt^2+ar^2s+bds^2+bp^2t+cd^2r+cpq^2-u^4+\\
\\ \hspace{1.5cm}2u^2(qs+dt+pr)-3u(drs+pqt)-q^2s^2-d^2t^2-\\
\\ \hspace{1.5cm}p^2r^2+dprt+prqs+dqst. \end{array}\label{aronhold4} \end{equation} Let us denote the affine open subset of ${\mathbb C}[e_1,e_2,e_3]_3$ where $A_4$ does not vanish by $V_3^3$ and its image in ${\mathbb P}{\mathbb C}[e_1,e_2,e_3]_3$ by ${\mathbb P} V_3^3$. Clearly, ${\mathbb P} V_3^3$ is the affine open subset of ${\mathbb P}{\mathbb C}[e_1,e_2,e_3]_3$ that consists of all elements of ${\mathbb P}{\mathbb C}[e_1,e_2,e_3]_3$ at which $A_4$, understood as a global $\mathop{\rm SL}\nolimits_3$-invariant section of $H^{\otimes 4}$, does not vanish.
For $n=d=3$ we have:
\begin{theorem}\label{ternarycubics}\cite[Proposition 4.1]{I5} One has $U_3^3=V_3^3$. \end{theorem}
Now, Claim \ref{claimconj}, Theorems \ref{isaevalper}, \ref{catnonzero}, \ref{ternarycubics} and Proposition \ref{ternarycubicsnabla} imply
\begin{corollary}\label{positivecor} Conjecture {\rm \ref{conj2}} is valid for $n=2$ and for $n=d=3$. \end{corollary}
In fact, in Section \ref{S:binaryquarticternarycubics} we will see that for $n=d=3$ factorizations (\ref{decomposition}), (\ref{decommpossition}), (\ref{decomposi1}) are not required. In this case, Conjecture \ref{conj2} can be obtained by studying the morphism $\Phi$ directly.
To conclude this subsection, we reiterate that in order to establish Conjecture \ref{conj2} in full generality it suffices to solve Open Problem \ref{prob1}, which in turn will follow from positive solutions to Open Problems \ref{probnormality} and \ref{openprobexistinvari}.
\subsection{A weak variant of Conjecture {\rm \ref{conj2}}} The initial version of Conjecture {\rm \ref{conj2}}, stated in \cite{EI}, did not contain the requirement that the $\mathop{\rm GL}\nolimits_n$-invariant rational function $R$ be defined at every point of the image of $\Phi$. It was formulated as follows:
\begin{conjecture}\label{conj3} For every regular $\mathop{\rm GL}\nolimits_n$-invariant function $S$ on $X_n^d$ there exists a rational $\mathop{\rm GL}\nolimits_n$-invariant function $R$ on ${\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$ such that $R\circ\Phi$ extends to a regular function on $X_n^d$ that coincides with $S$. \end{conjecture}
Note, for instance, that for the morphism $$ \varphi:{\mathbb C}\to{\mathbb C}^2,\quad z\mapsto (z,z) $$ the rational function $R:=z_1/z_2$ is not defined at $(0,0)=\varphi(0)$ but the pullback $R\circ\varphi$ extends to the regular function $1$ on ${\mathbb C}$. Conjecture \ref{conj3} does not rule out such situations, whereas Conjecture \ref{conj2} does. We stress that it is the stronger conjecture that is required for solving the reconstruction problem stated in the introduction.
The weaker conjecture has turned out to be easier to settle:
\begin{theorem}\label{weakerconjsettled}\cite[Theorem 4.1]{AI1} Conjecture {\rm\ref{conj3}} holds for all $n\ge 2$, $d\ge 3$. \end{theorem}
\begin{proof} The case $n=2$, $d=3$ is trivial, and we exclude it from consideration (note also that in this situation the result is contained in Corollary \ref{positivecor}). Then we have $n(d-2)\ge 3$. The proof is based on the fact that $\mathop{\rm im}\nolimits({\mathbb P}\Phi)$ intersects the locus of stable points ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm s}\nolimits}$. In fact, in \cite[Proposition 4.3]{AI1} we show that $\mathop{\rm im}\nolimits({\mathbb P}\Phi)$ contains an element with nonvanishing discriminant. Specifically, one can prove that the associated form of \begin{equation} f_0(z_1,\dots,z_n):=\left\{ \begin{array}{ll} \displaystyle \sum_{1\le i<j<k\le n}z_iz_jz_k & \hbox{if $d=3$,}\\
\\ \displaystyle \sum_{1\le i<j\le n}(z_i^{d-2}z_j^2+z_i^2z_j^{d-2}) & \hbox{if $d\ge 4$.} \end{array} \right.\label{deff0} \end{equation} is nondegenerate. Once this nontrivial statement has been established, we proceed as follows.
Consider the nonempty open $\mathop{\rm SL}\nolimits_n$-invariant subset $$ U:=({\mathbb P}\Phi)^{-1}\left({\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm s}\nolimits}\right)\subset{\mathbb P} X_n^d. $$ Since ${\mathbb P} X_n^d\subset{\mathbb P}{\mathbb C}[z_1,\dots,z_n]_d^{\mathop{\rm s}\nolimits}$, the set $U$ is saturated in ${\mathbb P} X_n^d$, hence $\pi_3(U)$ is a good geometric quotient of $U$, and we have the commutative diagram $$\xymatrix{
U \ar[r]^{\hspace{-1cm}{\mathbb P}\Phi|_U} \ar[d]^{\pi_{{}_3}|_U} & {\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm s}\nolimits} \ar[d]^{\pi_{{}_1}|_{{\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm s}\nolimits}}}\\ \pi_3(U)\ar[r]^{\hspace{0cm}\varphi} & Z, }$$
where $Z:=\pi_1\left({\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm s}\nolimits}\right)$ is a good geometric quotient of the stable locus ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm s}\nolimits}$ and $\varphi:=\overline{{\mathbb P}\Phi}|_{\pi_3(U)}$. Recall that by Corollary \ref{barphiinj} the morphism $\varphi$ is injective.
Next, since the set $\varphi(\pi_3(U))$ is constructible, it contains a subset $W$ that is open in the closed irreducible subvariety ${\mathcal R}:=\overline{\varphi(\pi_3(U))}$ of $Z$. Let ${\mathcal R}_{\rm sing}$ be the singular locus of ${\mathcal R}$. Then $W\setminus {\mathcal R}_{\rm sing}$ is nonempty and open in ${\mathcal R}$ as well, and we choose an open subset $O\subset Z$ such that $W\setminus {\mathcal R}_{\rm sing}=O\cap {\mathcal R}$. Clearly, $W\setminus {\mathcal R}_{\rm sing}$ is closed in $O$. Next, choose $V\subset O$ to be an affine open subset intersecting $W\setminus {\mathcal R}_{\rm sing}$. Then the set $\widetilde{\mathcal R}:=V\cap(W\setminus {\mathcal R}_{\rm sing})=V\cap {\mathcal R}$ is closed in $V$. Let $\widetilde U:=\varphi^{-1}(V)=\varphi^{-1}(\widetilde{\mathcal R})$. By construction $$
\widetilde\varphi:=\varphi|_{\widetilde U} \co \widetilde U \to \widetilde{\mathcal R}\subset V $$ is a bijective morphism from the open subset $\widetilde U$ of $U$ onto the smooth variety $\widetilde{\mathcal R}$. It now follows from Zariski's Main Theorem that $\widetilde\varphi$ is an isomorphism.
We will now argue as in the proof of Claim \ref{claimconj}. Fix a $\mathop{\rm GL}\nolimits_n$-invariant regular function $S$ on $X_n^d$. By property (P4), it is the pullback of a uniquely defined regular function $\bar S$ on ${\mathbb P} X_n^d/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_n$. Let $T$ be the push-forward of $\bar S|_{\widetilde U}$ to $ \widetilde{\mathcal R}$ by means of $\widetilde\varphi$. Since $ \widetilde{\mathcal R}$ is closed in $V$ and $V$ is affine, the function $T$ extends to a regular function on $V$. The pull-back of this function by means of $\pi_1|_{{\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm s}\nolimits}}$ yields an $\mathop{\rm SL}\nolimits_n$-invariant regular function on the dense open subset $\pi_1^{-1}(V)$ of ${\mathbb P}{\mathbb C}[e_1,\dots,e_n]_{n(d-2)}^{\mathop{\rm s}\nolimits}$, hence a $\mathop{\rm GL}\nolimits_n$-invariant rational function $R$ on ${\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$. Clearly, the composition $R\circ\Phi$ extends to a regular function on ${\mathbb C}[z_1,\dots,z_n]_d$, and the extension coincides with $S$.\end{proof}
As we have seen, the main part of the proof of Theorem \ref{weakerconjsettled} is the existence of $f_0\in X_n^d$ such that $\Delta(\Phi(f_0))\ne 0$ (see (\ref{deff0})). The existence of such a form also ensures that one can consider the iteration $\Phi^2$, viewed as a rational map from ${\mathbb C}[z_1,\dots,z_n]_d$ to ${\mathbb C}[z_1,\dots,z_n]_{n(n(d-2)-2)}$. This observation leads to the following natural question:
\begin{openprob}\label{probiterations} \it Is the iteration $\Phi^k$ a well-defined rational map for all $k\in{\mathbb N}${\rm ?} \end{openprob}
\noindent In the next section we will look at the iterations of the projectivized map ${\mathbb P}\Phi$ in two special cases: (i) $n=2$, $d=4$ and (ii) $n=d=3$.
\section{The morphism ${\mathbb P}\Phi$ for binary quartics and ternary cubics}\label{S:binaryquarticternarycubics} \setcounter{equation}{0}
To further clarify the nature of the morphisms $\Phi$, ${\mathbb P}\Phi$ and $\overline{{\mathbb P}\Phi}$, in this section we will consider two special cases for which we will present results of explicit calculations. Notice that for all pairs $n,d$ (excluding the trivial case $n=2$, $d=3$) one has $n(d-2)\ge d$, and the equality holds exactly for the pairs $n=2$, $d=4$ and $n=3$, $d=3$. These are the situations we will focus on below. In particular, we will provide an independent verification of Conjecture \ref{conj2} in each of the two cases. We will also see that in these situations the morphism ${\mathbb P}\Phi$ induces a unique equivariant involution on the variety ${\mathbb P} X_n^d$ with one orbit removed and that the involution can be understood via projective duality. For convenience, everywhere in this section we will identify the spaces ${\mathbb C}[z_1,\dots,z_n]_d$ and ${\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$ by means of identifying $z_j$ and $e_j$, thus the morphism ${\mathbb P}\Phi$ will be regarded as a map from ${\mathbb P} X_n^d$ to ${\mathbb P}{\mathbb C}[z_1,\dots,z_n]_{n(d-2)}^{\mathop{\rm ss}\nolimits}$. In this interpretation, it has the following equivariance property: \begin{equation} {\mathbb P}\Phi(C f)=C^{-T}{\mathbb P}\Phi(f),\,\, f\in{\mathbb P} X_n^d,\,\, C\in\mathop{\rm SL}\nolimits_n\label{equivartype2} \end{equation} (see (\ref{equivarphitildephi})). The material that follows can be found in articles \cite{Ea}, \cite{EI}, \cite{I1}, \cite{I2}, \cite{AIK}.
\subsection{Binary quartics} \label{S:binary-quartics} Let $n=2$, $d=4$. It is a classical result that every nondegenerate binary quartic is linearly equivalent to a quartic of the form \begin{equation} q_t(z_1,z_2):=z_1^4+tz_1^2z_2^2+z_2^4,\quad t\ne\pm 2\label{qt} \end{equation} (see \cite[\S 211]{Ell}). A straightforward calculation yields that the associated form of\linebreak $q_t$ is \begin{equation} {\mathbf q}_t(z_1,z_2):=\frac{1}{72(t^2-4)}(tz_1^4-12z_1^2z_2^2+tz_2^4).\label{bfqt} \end{equation} For $t\ne 0,\pm 6$ the quartic ${\mathbf q}_t$ is nondegenerate, and in this case the associated form of ${\mathbf q}_t$ is proportional to $q_t$, hence $({\mathbb P}\Phi)^2(q_t) = q_t$. As explained below, the exceptional quartics $q_0$, $q_6$, $q_{-6}$ are pairwise linearly equivalent.
It is easy to show that ${\mathbb P}{\mathbb C}[z_1,z_2]_4^{\mathop{\rm ss}\nolimits}$ is the union of ${\mathbb P} X_2^4$ (which coincides with ${\mathbb P}{\mathbb C}[z_1,z_2]_4^{\mathop{\rm s}\nolimits}$) and two orbits that consist of strictly semistable elements:\linebreak $O_1:=\mathop{\rm SL}\nolimits_2\cdot\, z_1^2z_2^2$, $O_2:=\mathop{\rm SL}\nolimits_2\cdot \,z_1^2(z_1^2+z_2^2)$, of dimensions 2 and 3, respectively. Notice that $O_1$ is closed in ${\mathbb P}{\mathbb C}[z_1,z_2]_4^{\mathop{\rm ss}\nolimits}$ and is contained in the closure of $O_2$. We then observe that ${\mathbb P}\Phi$ maps ${\mathbb P} X_2^4$ onto ${\mathbb P}{\mathbb C}[z_1,z_2]_4^{\mathop{\rm ss}\nolimits}\setminus (O_2\cup O_3)$, where $O_3:=\mathop{\rm SL}\nolimits_2\cdot\, q_0$ (as we will see shortly, $O_3$ contains the other exceptional quartics $q_6$, $q_{-6}$ as well). Also, notice that ${\mathbb P}\Phi$ maps the 3-dimensional orbit $O_3$ onto the 2-dimensional orbit $O_1$. In particular, ${\mathbb P}\Phi$ restricts to an equivariant involutive automorphism of ${\mathbb P} X_2^4\setminus O_3$, which for $t\ne 0,\pm 6$ establishes a duality between the quartics $C q_t$ and $C^{-T} q_{-12/t}$ with $C\in\mathop{\rm SL}\nolimits_2$, hence between the orbits $\mathop{\rm SL}\nolimits_2\cdot\, q_t$ and $\mathop{\rm SL}\nolimits_2\cdot\, q_{-12/t}$.
In order to understand the induced map $\overline{{\mathbb P}\Phi}$ of good GIT quotients, we note that the algebra of $\mathop{\rm SL}\nolimits_2$-invariants ${\mathbb C}[{\mathbb C}[z_1,z_2]_4]^{\mathop{\rm SL}\nolimits_2}$ is generated by the pair of elements $I_2$ and $\mathop{\rm Cat}\nolimits$, where $I_2$ has degree 2 and $\mathop{\rm Cat}\nolimits$ has degree 3 (see, e.g., \cite[\S\S 29, 30, 80]{Ell}). We have \begin{equation} \Delta=I_2^3-27\,\mathop{\rm Cat}\nolimits^2\label{deltabinquar} \end{equation} (see \cite[\S 81]{Ell}), and for a binary quartic of the form $$ f(z_1,z_2)=az_1^4+6bz_1^2z_2^2+cz_2^4 $$ the value of $I_2$ is computed as \begin{equation} \begin{array}{l} I_2(f)=ac+3b^2.\label{form1} \end{array} \end{equation} It then follows that the algebra ${\mathbb C}[{\mathbb P} X_2^4]^{\mathop{\rm SL}\nolimits_2}\simeq{\mathbb C}[X_2^4]^{\mathop{\rm GL}\nolimits_2}$ is generated by \begin{equation} J:=\frac{I_2^3}{\Delta}.\label{form2} \end{equation} Therefore, ${\mathbb P} X_2^4/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_2$ is the affine space ${\mathbb C}$, and ${\mathbb P}{\mathbb C}[z_1,z_2]_4^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_2$ can be identified with ${\mathbb P}^1$, where both $O_1$ and $O_2$ project to the point at infinity in ${\mathbb P}^1$.
Next, from formulas (\ref{catalectform}), \eqref{qt}, (\ref{deltabinquar}), \eqref{form1}, \eqref{form2} we calculate \begin{equation} J(q_t)=\frac{(t^2+12)^3}{108(t^2-4)^2}\quad\hbox{for all $t\ne \pm 2$.}\label{form3} \end{equation} Clearly, \eqref{form3} yields \begin{equation} J(q_0)=J(q_6)=J(q_{-6})=1,\label{Jeq1} \end{equation} which implies that $q_0$, $q_6$, $q_{-6}$ are indeed pairwise linearly equivalent as claimed above and that the orbit $O_3$ is described by the condition $J=1$.
Using (\ref{bfqt}), \eqref{form3} one obtains $$ J({\mathbf q}_t)=\frac{J(q_t)}{J(q_t)-1}\quad\hbox{for all $t\ne 0,\pm 6$.}\label{jtransfbinquar} $$ This shows that the map $\overline{{\mathbb P}\Phi}$ extends to the automorphism $\varphi$ of ${\mathbb P}^1$ given by $$ \zeta\mapsto\frac{\zeta}{\zeta-1}. $$ Clearly, one has $\varphi^{\,2}=\hbox{id}$, that is, $\varphi$ is an involution. It preserves ${\mathbb P}^1\setminus\{1,\infty\}$, which corresponds to the duality between the orbits $\mathop{\rm SL}\nolimits_2\cdot\, q_t$ and $\mathop{\rm SL}\nolimits_2\cdot\, q_{-12/t}$ for $t\ne 0,\pm 6$ noted above. Further, $\varphi(1)=\infty$, which agrees with \eqref{Jeq1} and the fact that $O_3$ is mapped onto $O_1$. We also have $\varphi(\infty)=1$, but this identity has no interpretation at the level of orbits. Indeed, ${\mathbb P}\Phi$ cannot be equivariantly extended to an involution of ${\mathbb P}{\mathbb C}[z_1,z_2]_4^{\mathop{\rm ss}\nolimits}$ as the fiber of the quotient ${\mathbb P}{\mathbb C}[z_1,z_2]_4^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_2$ over $\infty$ contains $O_1$, which cannot be mapped onto $O_3$ since $\dim O_1<\dim O_3$.
Finally, an explicit calculation shows that $\mathop{\rm Cat}\nolimits({\mathbf q}_t)\ne 0$ for all $t\ne\pm 2$ (cf.~Theorem \ref{catnonzero}). Consider the absolute invariant of binary quartics $$ K:=\frac{I_2^3}{27\mathop{\rm Cat}\nolimits^2}. $$ It is then easy to see that $K({\mathbf q}_t)=J(q_t)$ for all $t\ne\pm 2$, which independently establishes Conjecture \ref{conj2} for $n=2$, $d=4$ (cf.~Corollary \ref{positivecor}).
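The identities of this subsection are easy to confirm by machine. The following sketch is our own verification (all helper names are our choosing, not part of the original computations): it checks \eqref{form3}, \eqref{Jeq1}, the transformation rule for $J({\mathbf q}_t)$ and the equality $K({\mathbf q}_t)=J(q_t)$ in exact rational arithmetic.

```python
from fractions import Fraction as F

# Invariants of the biquadratic quartic a*z1^4 + 6b*z1^2*z2^2 + c*z2^4.
# I_2 is formula (form1); Cat is the catalecticant determinant
# a0*a2*a4 - a0*a3^2 - a1^2*a4 + 2*a1*a2*a3 - a2^3 with (a0,...,a4) = (a,0,b,0,c).
def I2(a, b, c):
    return a * c + 3 * b * b

def Cat(a, b, c):
    return a * b * c - b ** 3

def J(a, b, c):
    # J = I_2^3 / Delta with Delta = I_2^3 - 27 Cat^2  ((deltabinquar), (form2))
    i2, cat = F(I2(a, b, c)), F(Cat(a, b, c))
    return i2 ** 3 / (i2 ** 3 - 27 * cat ** 2)

def K(a, b, c):
    # the absolute invariant K = I_2^3 / (27 Cat^2)
    i2, cat = F(I2(a, b, c)), F(Cat(a, b, c))
    return i2 ** 3 / (27 * cat ** 2)

def J_qt(t):
    # q_t = z1^4 + t*z1^2*z2^2 + z2^4, i.e. (a, 6b, c) = (1, t, 1)
    t = F(t)
    return J(1, t / 6, 1)

# By (bfqt), bf(q_t) is proportional to t*z1^4 - 12*z1^2*z2^2 + t*z2^4;
# J and K are invariant under scaling, so the prefactor may be dropped.
def J_assoc(t):
    t = F(t)
    return J(t, F(-2), t)

def K_assoc(t):
    t = F(t)
    return K(t, F(-2), t)

for k in range(25):
    t = F(2 * k + 1, 6)          # odd numerator: avoids t = 0, +-2, +-6
    j = J_qt(t)
    assert j == (t ** 2 + 12) ** 3 / (108 * (t ** 2 - 4) ** 2)   # (form3)
    assert J_assoc(t) == j / (j - 1)                             # (jtransfbinquar)
    assert K_assoc(t) == j                                       # K(bf q_t) = J(q_t)

assert J_qt(0) == J_qt(6) == J_qt(-6) == 1                       # (Jeq1)
```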
\subsection{Ternary cubics} \label{S:cubics} Let $n=d=3$. Every nondegenerate ternary cubic is linearly equivalent to a cubic of the form \begin{equation} c_t(z_1,z_2,z_3):=z_1^3+z_2^3+z_3^3+tz_1z_2z_3,\quad t^3\ne -27,\label{ct} \end{equation} called {\it Hesse's canonical equation} (see, e.g., \cite[Theorem 1.3.2.16]{Sc}). The associated form of $c_t$ is easily found to be \begin{equation} {\mathbf c}_t(z_1,z_2,z_3):=-\frac{1}{24(t^3+27)}(tz_1^3+tz_2^3+tz_3^3-18z_1z_2z_3).\label{bfct} \end{equation} For $t\ne 0$, $t^3\ne 216$ the cubic ${\mathbf c}_t$ is nondegenerate, and in this case the associated form of ${\mathbf c}_t$ is proportional to $c_t$, hence $({\mathbb P}\Phi)^2 (c_t) = c_t$. Below we will see that the exceptional cubics $c_0$, $c_{6\tau}$, with $\tau^3=1$, are pairwise linearly equivalent.
It is well-known (see, e.g., \cite[Theorem 1.3.2.16]{Sc}) that ${\mathbb P}{\mathbb C}[z_1,z_2,z_3]_3^{\mathop{\rm ss}\nolimits}$ is the union of ${\mathbb P} X_3^3$ (which coincides with ${\mathbb P}{\mathbb C}[z_1,z_2,z_3]_3^{\mathop{\rm s}\nolimits}$) and the following three orbits that consist of strictly semistable forms: ${\rm O}_1:=\mathop{\rm SL}\nolimits_3\cdot\, z_1z_2z_3$, ${\rm O}_2:=\mathop{\rm SL}\nolimits_3\cdot\, (z_1z_2z_3+z_3^3)$,\linebreak ${\rm O}_3:=\mathop{\rm SL}\nolimits_3\cdot\, (z_1^3+z_1^2z_3+z_2^2z_3)$ (the cubics lying in ${\rm O}_3$ are called {\it nodal\,}). The dimensions of the orbits are 6, 7 and 8, respectively. Observe that ${\rm O}_1$ is closed in ${\mathbb P}{\mathbb C}[z_1,z_2,z_3]_3^{\mathop{\rm ss}\nolimits}$ and is contained in the closures of each of ${\rm O}_2$, ${\rm O}_3$. We then see that ${\mathbb P}\Phi$ maps ${\mathbb P} X_3^3$ onto ${\mathbb P}{\mathbb C}[z_1,z_2,z_3]_3^{\mathop{\rm ss}\nolimits}\setminus ({\rm O}_2\cup {\rm O}_3\cup {\rm O}_4)$, where ${\rm O}_4:=\mathop{\rm SL}\nolimits_3\cdot\, c_0$ (as explained below, ${\rm O}_4$ also contains the other exceptional cubics $c_{6\tau}$, with\linebreak $\tau^3=1$). Further, note that the 8-dimensional orbit ${\rm O}_4$ is mapped by ${\mathbb P}\Phi$ onto the 6-dimensional orbit ${\rm O}_1$ (thus the morphism of the stabilizers of $c_0$ and ${\mathbb P}\Phi(c_0)$ is an inclusion of a finite group into a two-dimensional group). Hence, ${\mathbb P}\Phi$ restricts to an equivariant involutive automorphism of ${\mathbb P} X_3^3\setminus {\rm O}_4$, which for $t\ne 0$, $t^3\ne 216$ establishes a duality between the cubics $C c_t$ and $C^{-T} c_{-18/t}$ with $C\in\mathop{\rm SL}\nolimits_3$, therefore between the orbits $\mathop{\rm SL}\nolimits_3\cdot\, c_t$ and $\mathop{\rm SL}\nolimits_3\cdot\, c_{-18/t}$.
To determine the induced map $\overline{{\mathbb P}\Phi}$ of GIT quotients, we recall that the algebra of $\mathop{\rm SL}\nolimits_3$-invariants ${\mathbb C}[{\mathbb C}[z_1,z_2,z_3]_3]^{\mathop{\rm SL}\nolimits_3}$ is generated by the two {\it Aronhold invariants}\, $A_4$, $A_6$, of degrees 4 and 6, respectively. Explicit formulas for these invariants are given, e.g., in \cite[\S\S 220, 221]{Sal}, \cite{C}, and we recall that the expression for $A_4$ was written down in (\ref{aronhold4}). One has \begin{equation} \Delta=A_6^2+64\, A_4^3\label{discrtercub} \end{equation} (see \cite{C}), and for a ternary cubic of the form \begin{equation} f(z_1,z_2,z_3)=az_1^3+bz_2^3+cz_3^3+6dz_1z_2z_3\label{generaltercubic} \end{equation} the value of $A_6$ is calculated as \begin{equation} \begin{array}{l} A_6(f)=a^2b^2c^2-20abcd^3-8d^6.\label{form11} \end{array} \end{equation} It then follows that the algebra ${\mathbb C}[{\mathbb P} X_3^3]^{\mathop{\rm SL}\nolimits_3}\simeq{\mathbb C}[X_3^3]^{\mathop{\rm GL}\nolimits_3}$ is generated by \begin{equation} {\rm J}:=\frac{64A_4^3}{\Delta}.\label{form21} \end{equation} Hence, ${\mathbb P} X_3^3/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_3$ is the affine space ${\mathbb C}$, and ${\mathbb P}{\mathbb C}[z_1,z_2,z_3]_3^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_3$ is identified with ${\mathbb P}^1$, where ${\rm O}_1$, ${\rm O}_2$, ${\rm O}_3$ project to the point at infinity in ${\mathbb P}^1$.
Further, from formulas (\ref{aronhold4}), \eqref{ct}, (\ref{discrtercub}), \eqref{form11}, \eqref{form21} we find \begin{equation} {\rm J}(c_t)=-\frac{t^3(t^3-216)^3}{1728(t^3+27)^3}\quad\hbox{for all $t$ with $t^3\ne -27$.}\label{form31} \end{equation} From identity \eqref{form31} one obtains \begin{equation} {\rm J}(c_0)={\rm J}(c_{6\tau})=0\quad\hbox{for $\tau^3=1$,}\label{Jeq11} \end{equation} which implies that the orbit ${\rm O}_4$ is given by the condition ${\rm J}=0$ and that the four cubics $c_0$, $c_{6\tau}$ are indeed pairwise linearly equivalent.
Using \eqref{bfct}, \eqref{form31} we see that $$ {\rm J}({\mathbf c}_t)=\frac{1}{{\rm J}(c_t)}\quad\hbox{for all $t\ne 0$ with $t^3\ne 216$.}\label{jtransftercubics} $$ This shows that the map $\overline{{\mathbb P}\Phi}$ extends to the involutive automorphism $\varphi$ of ${\mathbb P}^1$ given by $$ \zeta\mapsto\frac{1}{\zeta}. $$ This involution preserves ${\mathbb P}^1\setminus\{0,\infty\}$, which agrees with the duality between the orbits $\mathop{\rm SL}\nolimits_3\cdot\, c_t$ and $\mathop{\rm SL}\nolimits_3\cdot\, c_{-18/t}$ for $t\ne 0$, $t^3\ne 216$ established above. Next,\linebreak $\varphi(0)=\infty$, which corresponds to \eqref{Jeq11} and the fact that ${\rm O}_4$ is mapped onto ${\rm O}_1$. Also, one has $\varphi(\infty)=0$, but this identity cannot be illustrated by a correspondence between orbits. Indeed, ${\mathbb P}\Phi$ cannot be equivariantly extended to an involution of ${\mathbb P}{\mathbb C}[z_1,z_2,z_3]_3^{\mathop{\rm ss}\nolimits}$ as the fiber of the quotient ${\mathbb P}{\mathbb C}[z_1,z_2,z_3]_3^{\mathop{\rm ss}\nolimits}/\hspace{-0.1cm}/\mathop{\rm SL}\nolimits_3$ over $\infty$ contains ${\rm O}_1$, which cannot be mapped onto ${\rm O}_4$ since $\dim {\rm O}_1<\dim {\rm O}_4$.
Finally, an explicit calculation shows that $A_4({\mathbf c}_t)\ne 0$ for all $t^3\ne-27$ (cf.~Theorem \ref{ternarycubics}). Consider the absolute invariant of ternary cubics $$ {\rm K}:=\frac{A_6^2}{64A_4^3}+1. $$ It is then easy to see that ${\rm K}({\mathbf c}_t)={\rm J}(c_t)$ for all $t^3\ne-27$, which independently establishes Conjecture \ref{conj2} for $n=d=3$ (cf.~Corollary \ref{positivecor}).
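As in the case of binary quartics, these identities can be confirmed by machine. In the sketch below (ours), $A_6$ is taken from \eqref{form11}; for $A_4$ we use the classical expression $A_4=abcd-d^4$ on the family $az_1^3+bz_2^3+cz_3^3+6dz_1z_2z_3$, an assumption on our part that is consistent with \eqref{form31}.

```python
from fractions import Fraction as F

# Invariants of the cubic a*z1^3 + b*z2^3 + c*z3^3 + 6d*z1*z2*z3.
# A_6 is formula (form11); A_4 = abcd - d^4 is the classical expression
# on this family (our assumption, consistent with (form31) below).
def A4(a, b, c, d):
    return a * b * c * d - d ** 4

def A6(a, b, c, d):
    return (a * b * c) ** 2 - 20 * a * b * c * d ** 3 - 8 * d ** 6

def Jc(a, b, c, d):
    # J = 64 A_4^3 / Delta with Delta = A_6^2 + 64 A_4^3  ((discrtercub), (form21))
    a4, a6 = F(A4(a, b, c, d)), F(A6(a, b, c, d))
    return 64 * a4 ** 3 / (a6 ** 2 + 64 * a4 ** 3)

def Kc(a, b, c, d):
    # the absolute invariant K = A_6^2 / (64 A_4^3) + 1
    a4, a6 = F(A4(a, b, c, d)), F(A6(a, b, c, d))
    return a6 ** 2 / (64 * a4 ** 3) + 1

def J_ct(t):
    # c_t = z1^3 + z2^3 + z3^3 + t*z1*z2*z3, i.e. (a, b, c, 6d) = (1, 1, 1, t)
    t = F(t)
    return Jc(1, 1, 1, t / 6)

# By (bfct), bf(c_t) is proportional to t*(z1^3 + z2^3 + z3^3) - 18*z1*z2*z3,
# i.e. (a, b, c, 6d) = (t, t, t, -18); J and K are scale-invariant.
for k in range(25):
    t = F(2 * k + 1, 6)          # odd numerator: avoids t = 0 and t^3 = -27, 216
    j = J_ct(t)
    assert j == -t ** 3 * (t ** 3 - 216) ** 3 / (1728 * (t ** 3 + 27) ** 3)  # (form31)
    assert Jc(t, t, t, F(-3)) == 1 / j                                       # (jtransftercubics)
    assert Kc(t, t, t, F(-3)) == j                                           # K(bf c_t) = J(c_t)

assert J_ct(0) == J_ct(6) == 0                                               # (Jeq11)
```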
\begin{remark}\label{Rem:binaryquarticternarycubics}
The above considerations easily imply Proposition \ref{ternarycubicsnabla}. Indeed, we see that $\mathop{\rm im}\nolimits(\overline{{\mathbb P}\Phi})=\varphi({\mathbb C})={\mathbb P}^1\setminus\{0\}$ is a smooth curve. By factorization (\ref{decomposi1}) and Theorem \ref{psilocclosedimmer} it follows that $\mathop{\rm im}\nolimits(\overline{{\mathbb P}\nabla|_{X_3^3}})=(\overline{{\mathbb P}\Psi})^{\,-1}({\mathbb P}^1\setminus\{0\})$ is a nonsingular curve as required. \end{remark}
If we regard ${\mathbb P} X_3^3$ as the space of elliptic curves, the invariant ${\rm J}$ of ternary cubics translates into the $j$-invariant, and one obtains an equivariant involution on the locus of elliptic curves with nonvanishing $j$-invariant. It is well-known that every elliptic curve can be realized as a double cover of ${\mathbb P}^1$ branched over four points (see, e.g., \cite[p.~115, 117]{D1}, \cite[Exercise 22.37 and Proposition 22.38]{Harr}). Therefore, it is not surprising that the cases of binary quartics and ternary cubics considered above have many similarities.
\subsection{Rational equivariant involutions and projective duality}\label{uniqueness} We have seen that the map $\overline{{\mathbb P}\Phi}$ for binary quartics and ternary cubics yields involutions of ${\mathbb P}^1$. It is natural to ask whether there exist any other involutions of ${\mathbb P}^1$ that arise from rational equivariant involutions of ${\mathbb P}{\mathbb C}[z_1,z_2]_4$ and ${\mathbb P}{\mathbb C}[z_1,z_2,z_3]_3$ as above. Here for either $n=2$, $d=4$ or $n=d=3$ a rational map $\iota$ of ${\mathbb P}{\mathbb C}[z_1,\dots,z_n]_d$ is called equivariant if it satisfies $$ \iota(Cf)=C^{-T}\iota(f),\,\,C\in\mathop{\rm SL}\nolimits_n $$ for all $f$ lying in the domain of $\iota$ (cf.~(\ref{equivartype2})). The following result asserts that there are no possibilities other than ${\mathbb P}\Phi$:
\begin{theorem}\label{invclass}\cite[Theorem 2.1]{AIK} For each pair $n=2$, $d=4$ and $n=3$, $d=3$ the morphism ${\mathbb P}\Phi$ is the unique rational equivariant involution of ${\mathbb P}{\mathbb C}[z_1,\dots,z_n]_d$. \end{theorem}
We will now see that for $n=2$, $d=4$ and $n=d=3$ the unique rational equivariant involution of ${\mathbb P}{\mathbb C}[z_1,\dots,z_n]_d$, and therefore the orbit duality induced by ${\mathbb P}\Phi$, can be understood via projective duality. We first briefly recall this classical construction. For details the reader is referred to the comprehensive\linebreak survey \cite{T}.
Let $V$ be a vector space. The dual projective space $({\mathbb P} V)^*$ is the algebraic variety of all hyperplanes in $V$, which is canonically isomorphic to ${\mathbb P} V^*$. Let $X$ be a closed irreducible subvariety of ${\mathbb P} V$ and $X_{\mathop{\rm reg}\nolimits}$ the set of its regular points. Consider the affine cone $\widehat X\subset V$ over $X$. For every $x\in X_{\mathop{\rm reg}\nolimits}$ choose a point $\widehat x\in\widehat X$ lying over $x$. The cone $\widehat X$ is regular at $\widehat x$, and we consider the tangent space $T_{\widehat x}(\widehat X)$ to $\widehat X$ at $\widehat x$. Identifying $T_{\widehat x}(\widehat X)$ with a subspace of $V$, we now let $H_x$ be the collection of all hyperplanes in $V$ that contain $T_{\widehat x}(\widehat X)$ (clearly, this collection is independent of the choice of $\widehat x$ over $x$). Regarding every hyperplane in $H_x$ as a point in $({\mathbb P} V)^*$, we obtain the subset $$ {\mathcal H}:=\bigcup_{x\in X_{\mathop{\rm reg}\nolimits}}H_x \subset ({\mathbb P} V)^*. $$ The Zariski closure $X^*$ of ${\mathcal H}$ in $({\mathbb P} V)^*$ is then called the variety dual to $X$. Canonically identifying $(({\mathbb P} V)^*)^*$ with ${\mathbb P} V$, one has the reflexivity property $X^{**}=X$. Furthermore, if $X$ is a hypersurface, then for every $x\in X_{\mathop{\rm reg}\nolimits}$ the tangent space $T_{\widehat x}(\widehat X)$ is a hyperplane in $V$, hence a point of $({\mathbb P} V)^*$, and one obtains a natural map $$ \varphi: X_{\mathop{\rm reg}\nolimits}\to X^*,\quad x\mapsto T_{\widehat x}(\widehat X), $$ where $\widehat x\in\widehat X$ is related to $x\in X_{\mathop{\rm reg}\nolimits}$ as above.
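As a simple illustration of this construction (not needed in what follows), let $X=\{z^{T}\hspace{-0.05cm}Az=0\}\subset{\mathbb P} V$ be a smooth quadric, where $A$ is an invertible symmetric matrix. Then $T_{\widehat x}(\widehat X)=\{y\in V: \widehat x^{\,T}\hspace{-0.05cm}Ay=0\}$, so the map $\varphi$ sends $x$ to the point of $({\mathbb P} V)^*$ with coordinate vector $A\widehat x$, and $$ X^*=\{(z^*)^{T}\hspace{-0.05cm}A^{-1}z^*=0\}. $$ For instance, the conic $\{z_1z_3-z_2^2=0\}\subset{\mathbb P}^2$ has dual conic $\{4z_1^*z_3^*-(z_2^*)^2=0\}$, and inverting the matrix once more recovers the reflexivity property $X^{**}=X$.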
Observe now that in each of the two cases $n=2$, $d=4$ and $n=d=3$, for $f\in {\mathbb P} X_n^d$ the orbit $\mathop{\rm SL}\nolimits_n\cdot\,f$ is a smooth irreducible hypersurface in ${\mathbb P} X_n^d$, thus its closure $\overline{\mathop{\rm SL}\nolimits_n\cdot\,f}$ in ${\mathbb P}{\mathbb C}[z_1,\dots,z_n]_d$ is an irreducible (possibly singular) hypersurface. Therefore, one can consider the map $$ \varphi_f: \overline{\mathop{\rm SL}\nolimits_n\cdot\,f}_{\mathop{\rm reg}\nolimits}\to ({\mathbb P}{\mathbb C}[z_1,\dots,z_n]_d)^*\label{mapvarphismall} $$ constructed as above. Then we have
\begin{theorem}\label{mainaik}\cite[Theorem 2.2]{AIK} Suppose that we have either $n=2$, $d=4$, or $n=d=3$. Then for every $f\in{\mathbb P} X_n^d$ the restrictions ${\mathbb P}\Phi\big|_{\mathop{\rm SL}\nolimits_n\cdot\,f}$ and $\varphi_f\big|_{\mathop{\rm SL}\nolimits_n\cdot\,f}$ coincide upon the canonical identification $({\mathbb P}{\mathbb C}[z_1,\dots,z_n]_d)^*={\mathbb P}{\mathbb C}[z_1,\dots,z_n]_d^*$ and the identification ${\mathbb C}[z_1,\dots,z_n]_d^*={\mathbb C}[e_1,\dots,e_n]_d$ via the polar pairing. \end{theorem}
This theorem provides a clear explanation of the duality for orbits of binary quartics and ternary cubics that we observed earlier in this section. Indeed, suppose first that $n=2$, $d=4$. Then Theorem \ref{mainaik} yields that for $t\ne 0,\pm 6$ one has $\overline{\mathop{\rm SL}\nolimits_2\cdot \,q_t}^{\,*}\simeq\overline{\mathop{\rm SL}\nolimits_2\cdot\, q_{-12/t}}$ and $\overline{O}_3^{\,*}\simeq\overline{O}_1$. By reflexivity it then follows that $\overline{O}_1^{\,*}\simeq\overline{O}_3$. However, since $O_1$ is not a hypersurface, there is no natural map from $\overline{O}_1$ to its dual. This fact corresponds to the impossibility of extending ${\mathbb P}\Phi$ equivariantly to $O_1$.
Analogously, for $n=d=3$, Theorem \ref{mainaik} implies that for $t\ne 0$ and $t^3\ne 216$ we have $\overline{\mathop{\rm SL}\nolimits_3\cdot\,c_t}^{\,*}\simeq\overline{\mathop{\rm SL}\nolimits_3\cdot\,c_{-18/t}}$ and $\overline{{\rm O}}_4^{\,*}\simeq\overline{{\rm O}}_1$. By reflexivity one then has $\overline{{\rm O}}_1^{\,*}\simeq\overline{{\rm O}}_4$. Again, since ${\rm O}_1$ is not a hypersurface, there is no natural map from $\overline{{\rm O}}_1$ to its dual. This agrees with the nonexistence of an equivariant extension of ${\mathbb P}\Phi$ to ${\rm O}_1$.
\section{Results and open problems concerning the contravariant arising from the morphism $\Phi$}\label{S:contravariant} \setcounter{equation}{0}
\subsection{Covariants and contravariants} Recall that a regular function $\Gamma$ on the space ${\mathbb C}[z_1,\dots,z_n]_k\times_{{\mathbb C}}{\mathbb C}^n$ (i.e., an element of ${\mathbb C}[{\mathbb C}[z_1,\dots,z_n]_k\times_{{\mathbb C}}{\mathbb C}^n]$) is said to be a {\it covariant}\, of forms in ${\mathbb C}[z_1,\dots,z_n]_k$ if the following holds: $$ \begin{array}{l} \Gamma(f,z)=(\det C)^m\, \Gamma(C f,Cz)=(\det C)^m\, \Gamma(C f,z\, C^T),\\
\\ \hspace{3cm}f\in{\mathbb C}[z_1,\dots,z_n]_k,\,\,z=(z_1,\dots,z_n)\in{\mathbb C}^n,\,\,C\in\mathop{\rm GL}\nolimits_n, \end{array} $$ where $m$ is an integer called the {\it weight} of $\Gamma$ and $z\mapsto Cz=z\, C^T$ is the standard action of $\mathop{\rm GL}\nolimits_n$ on ${\mathbb C}^n$ (see (\ref{actiononcn})). Every homogeneous component of\, $\Gamma$ with respect to $z$ is automatically homogeneous with respect to $f$ and is also a covariant. Such covariants are called {\it homogeneous}\, and their degrees with respect to $f$ and $z$ are called the {\it degree}\, and {\it order}, respectively. We may view a homogeneous covariant $\Gamma$ of degree $D$ and order $K$ as the $\mathop{\rm SL}\nolimits_n$-equivariant morphism $$ {\mathbb C}[z_1,\dots,z_n]_k \to {\mathbb C}[z_1,\dots,z_n]_K,\quad f \mapsto (z \mapsto \Gamma(f,z))
$$
of degree $D$ with respect to $f$, which maps a form $f\in{\mathbb C}[z_1,\dots,z_n]_k$ to the form in ${\mathbb C}[z_1,\dots,z_n]_K$ whose evaluation at $z$ is $\Gamma(f,z)$. In what follows, we write $\Gamma(f)$ for the form $z \mapsto \Gamma(f,z)$ on ${\mathbb C}^n$. Covariants independent of $z$ (i.e., of order $0$) are called {\it relative invariants}. Note, for example, that the discriminant $\Delta$ is a relative invariant of forms in ${\mathbb C}[z_1,\dots,z_n]_k$ of weight $k(k-1)^{n-1}$, hence of degree $n(k-1)^{n-1}$ (see \cite[Chapter 13]{GKZ}).
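For instance, the Hessian determinant $$ \mathop{\rm Hess}\nolimits(f)(z):=\det\left(\frac{\partial^2 f}{\partial z_i\partial z_j}(z)\right)_{i,j=1}^{n} $$ is a homogeneous covariant of forms in ${\mathbb C}[z_1,\dots,z_n]_k$ of degree $n$, order $n(k-2)$ and weight $2$; for binary quartics ($n=2$, $k=4$) it has degree 2 and order 4.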
Next, we identify every element $z^*\in{\mathbb C}^{n*}$ with its coordinate vector $(z_1^*,\dots,z_n^*)$ with respect to the basis $z_1,\dots,z_n$ of ${\mathbb C}^{n*}$ and recall that $z^*\mapsto Cz^*=z^*\, C^{-1}$ is the standard action of $\mathop{\rm GL}\nolimits_n$ on ${\mathbb C}^{n*}$ (see (\ref{actiononcn*})). Then a regular function $\Lambda$ on the space ${\mathbb C}[z_1,\dots,z_n]_k\times_{{\mathbb C}}{\mathbb C}^{n*}$ (i.e., an element of ${\mathbb C}[{\mathbb C}[z_1,\dots,z_n]_k\times_{{\mathbb C}}{\mathbb C}^{n*}]$) is said to be a {\it contravariant}\, of forms in ${\mathbb C}[z_1,\dots,z_n]_k$ if one has $$ \begin{array}{l} \Lambda(f,z^*)=(\det C)^m\, \Lambda(C f,Cz^*)=(\det C)^m\, \Lambda(C f,z^* C^{-1}),\\
\\ \hspace{3cm}f\in{\mathbb C}[z_1,\dots,z_n]_k,\,\,z^*=(z_1^*,\dots,z_n^*)\in{\mathbb C}^{n*},\,\,C\in\mathop{\rm GL}\nolimits_n, \end{array} $$ where $m$ is a (nonnegative) integer called the {\it weight} of $\Lambda$. Again, every contravariant splits into a sum of homogeneous ones, and for a homogeneous contravariant its degrees with respect to $f$ and $z^*$ are called the {\it degree}\, and {\it class}, respectively. We may regard a homogeneous contravariant $\Lambda$ of degree $D$ and class $K$ as the $\mathop{\rm SL}\nolimits_n$-equivariant morphism $$ {\mathbb C}[z_1,\dots,z_n]_k \to {\mathbb C}[z_1^*,\dots,z_n^*]_K,\quad f \mapsto (z^* \mapsto \Lambda(f,z^*)) $$ of degree $D$ with respect to $f$. In what follows, we write $\Lambda(f)$ for the form\linebreak $z^* \mapsto \Lambda(f,z^*)$ on ${\mathbb C}^{n*}$.
If $n=2$, every homogeneous contravariant $\Lambda$ yields a homogeneous covariant $\widehat{\Lambda}$ via the formula \begin{equation} \widehat{\Lambda}(f)(z_1,z_2) := \Lambda(f)(-z_2, z_1),\,\,f\in{\mathbb C}[z_1,z_2]_k,\,\, (z_1,z_2)\in{\mathbb C}^2,\label{relcovcontrav} \end{equation} where $(-z_2, z_1)$ is viewed as a point in ${\mathbb C}^{2*}$. Analogously, every homogeneous covariant $\Gamma$ gives rise to a homogeneous contravariant $\widetilde{\Gamma}$ via the formula $$ \widetilde{\Gamma}(f)(z_1^*,z_2^*) := \Gamma(f)(z_2^*, -z_1^*),\,\,f\in{\mathbb C}[z_1,z_2]_k,\,\, (z_1^*,z_2^*)\in{\mathbb C}^{2*},\label{relcovcontrav1} $$ where $(z_2^*,-z_1^*)$ is regarded as a point in ${\mathbb C}^{2}$. Under these correspondences the degree and order of a homogeneous covariant translate into the degree and class of the corresponding homogeneous contravariant and vice versa.
\subsection{The contravariant arising from the morphism $\Phi$}
As before, fix $d\ge 3$ and recall that $\Phi$ is a morphism $$ \Phi \co X_n^d \to {\mathbb C}[e_1,\dots,e_n]_{n(d-2)} $$ defined on the locus $X_n^d$ of nondegenerate forms. From now on we identify the spaces ${\mathbb C}[e_1,\dots,e_n]_{n(d-2)}$ and ${\mathbb C}[z_1^*,\dots,z_n^*]_{n(d-2)}$ by identifying $e_j$ and $z_j^*$ and regard $\Phi$ as the morphism from $X_n^d$ to ${\mathbb C}[z_1^*,\dots,z_n^*]_{n(d-2)}$ given by formulas (\ref{assocformdef}), (\ref{assocformexpp}). The coefficients $\mu_{i_1, \ldots, i_n}$ that determine $\Phi$ (see (\ref{assocformexpppp})) are elements of the coordinate ring ${\mathbb C}[X_n^d] = {\mathbb C}[{\mathbb C}[z_1,\dots,z_n]_d]_{\Delta}$, i.e., have the form (\ref{formulaformus}). Let $p_{i_1,\dots,i_n}$ in formula (\ref{formulaformus}) be the minimal integer such that $\Delta^{p_{i_1,\dots,i_n}}\cdot\mu_{i_1,\dots,i_n}$ is a regular function on ${\mathbb C}[z_1,\dots,z_n]_d$ and $$ p:=\max\{p_{i_1,\dots,i_n}: i_1+\dots+i_n=n(d-2)\}. $$ Then the product $\Delta^p \Phi$ is the morphism $$ \Delta^p \Phi \co X_n^d \to {\mathbb C}[z_1^*,\dots,z_n^*]_{n(d-2)}, \quad f \mapsto \Delta(f)^p \Phi(f), $$ which extends to a morphism from ${\mathbb C}[z_1,\dots,z_n]_d$ to ${\mathbb C}[z_1^*,\dots,z_n^*]_{n(d-2)}$. We denote the extended map by the same symbol $\Delta^p \Phi$.
Notice that by Proposition \ref{equivariance} the morphism $$ \Delta^p \Phi \co {\mathbb C}[z_1,\dots,z_n]_d \to{\mathbb C}[z_1^*,\dots,z_n^*]_{n(d-2)} $$ is in fact a homogeneous contravariant of weight $pd(d-1)^{n-1}-2$. Since the class of $\Delta^p \Phi$ is $n(d-2)$, it follows that its degree is equal to $np(d-1)^{n-1}-n$. Observe that $p>0$ as the weight and the degree of a contravariant are always nonnegative.
In the next subsection we will see that $\Delta^p \Phi$ can be expressed via known contravariants for certain small values of $n$ and $d$. However, it appears that in full generality (i.e., for all $n\ge 2$, $d\ge 3$) the contravariant $\Delta^p \Phi$ has not been discovered prior to our work \cite{AIK}, \cite{I3}.
The contravariant $\Delta^p \Phi$ is rather mysterious; even its most basic properties are not yet understood. Indeed, the very first question that one encounters is:
\begin{openprob}\label{probdegreecontravariant} \it Compute the integer $p$. \end{openprob}
We will now state what is known regarding this problem, starting with the following theorem:
\begin{theorem}\label{mainoldpaper}\cite{AIK}, \cite{I3}. One has \begin{equation} p\le\left[\frac{n^{n-2}}{(n-1)!}\right],\label{estim} \end{equation} where $[x]$ denotes the largest integer that is less than or equal to $x$. Hence the degree of $\Delta^p \Phi$ does not exceed $n[n^{n-2}/(n-1)!](d-1)^{n-1}-n$. \end{theorem}
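For orientation, the bound (\ref{estim}) and the resulting degree bound are easily tabulated for small $n$; the snippet below (ours) does so and cross-checks the values quoted in this section.

```python
from math import factorial

def p_bound(n):
    # the right-hand side of (estim): the integer part of n^(n-2) / (n-1)!
    return n ** (n - 2) // factorial(n - 1)

def degree_bound(n, d):
    # the resulting bound on the degree of Delta^p Phi
    return n * p_bound(n) * (d - 1) ** (n - 1) - n

# p <= 1 for n = 2, 3; p <= 2 for n = 4; p <= 5 for n = 5
assert [p_bound(n) for n in (2, 3, 4, 5, 6)] == [1, 1, 2, 5, 10]

# consistency with the cases treated below: Delta*Phi has degree 4 for
# binary quartics and degree 6 for binary quintics
assert degree_bound(2, 4) == 4 and degree_bound(2, 5) == 6
```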
Observe that for $n=2,3$ the upper bound (\ref{estim}) yields $p=1$. However, (\ref{estim}) is not sharp in general. In the two propositions below we focus on the cases $n=4$, $n=5$ and find that for sufficiently small values of $d$ the estimate (\ref{estim}) can be improved.
Indeed, if $n=4$ inequality (\ref{estim}) yields $p\le 2$, whereas in fact the following holds:
\begin{proposition}\label{n=4}\cite{I3} For $n=4$ one has $$
\begin{array}{ll} p=1 & \hbox{if $3\le d\le 6$,}\\
\\ p\le 2 & \hbox{if $d\ge 7$.} \end{array}
$$ \end{proposition}
\noindent Next, for $n=5$ inequality (\ref{estim}) yields $p\le 5$, but there are in fact more precise bounds:
\begin{proposition}\label{n=5}\cite{I3} For $n=5$ one has $$
\begin{array}{ll} p=1 & \hbox{if $d=3$,}\\
\\ p\le 2 & \hbox{if $d=4$,}\\
\\ p\le 3 & \hbox{if $5\le d\le 8$,}\\
\\ p\le 4 & \hbox{if $9\le d\le 50$,}\\
\\ p\le 5 & \hbox{if $d\ge 51$.}\\
\\ \end{array}
$$ \end{proposition}
The method used in the proofs of Propositions \ref{n=4}, \ref{n=5} can be applied, in principle, to any $n\ge 2$. However, an analysis of this kind appears to be computationally quite challenging to perform in full generality, and we did not attempt to do so systematically. We only give a word of warning that, although one may get the impression that the method always yields that $p=1$ if $d=3$, this is in fact not the case as the example of $n=6$ shows. Indeed, for $n=6$, $d=3$ the approach utilized in the proofs of Propositions \ref{n=4}, \ref{n=5} only leads to the bound $p\le 2$.
Following the above discussion, we state a subproblem of Open Problem \ref{probdegreecontravariant}:
\begin{openprob}\label{openprobp>1} \it Is there an example with $p>1${\rm ?} \end{openprob}
In the next subsection we will look at the contravariant $\Delta^p\Phi$ in three special cases: (i) $n=2, d=4$, (ii) $n=2$, $d=5$, (iii) $n=d=3$. Recall that, by Theorem \ref{mainoldpaper}, in each of these cases we have $p=1$.
\subsection{The contravariant $\Delta^p \Phi$ for small values of $n$ and $d$}\label{contravarsmallnd}
\subsubsection{Binary Quartics} Let first $n=2$, $d=4$. In this case $\Delta \Phi$ is a contravariant of weight 10, degree 4 and class 4. We have the following identity of covariants of weight 6 (see (\ref{relcovcontrav})): \begin{equation} \widehat {\Delta \Phi}= \frac{1}{2^7 3^3}I_2 \mathop{\rm Hess}\nolimits - \frac{1}{2^4}\mathop{\rm Cat}\nolimits{\mathbf{id}},\label{covar1} \end{equation} where $I_2$ is the relative invariant of degree $2$ considered in Subsection \ref{S:binary-quartics}, and\linebreak ${\mathbf{id}}:f\mapsto f$ is the identity covariant. To verify (\ref{covar1}), it suffices to check it for the quartics $q_t$ introduced in (\ref{qt}). For these quartics the validity of (\ref{covar1}) is a consequence of formulas (\ref{bfqt})--(\ref{form1}).
Observe that formula (\ref{covar1}) is not a result of mere guesswork; it follows naturally from the well-known explicit description of the algebra of covariants of binary quartics. Indeed, this algebra is generated by $I_2$, the catalecticant $\mathop{\rm Cat}\nolimits$, the Hessian $\mathop{\rm Hess}\nolimits$ (which has degree 2 and order 4), the identity covariant ${\mathbf{id}}$ (which has degree 1 and order 4), and one more covariant of degree 3 and order 6 (see \cite[\S 145]{Ell}). Therefore $\widehat {\Delta \Phi}$, being a covariant of degree 4 and order 4, is necessarily a linear combination of $I_2\mathop{\rm Hess}\nolimits$ and $\mathop{\rm Cat}\nolimits{\mathbf {id}}$. The coefficients in the linear combination can be determined by computing $\Delta \Phi$, $I_2\mathop{\rm Hess}\nolimits$ and $\mathop{\rm Cat}\nolimits{\mathbf {id}}$ for particular nondegenerate quartics of simple form.
Formula (\ref{covar1}) yields an expression for the morphism $\Phi$ via $I_2$, $\mathop{\rm Cat}\nolimits$ and $\mathop{\rm Hess}\nolimits$. Namely, for $f\in X_2^4$ we obtain \begin{equation} \label{eqn-quartic} \Phi(f)(z_1^*,z_2^*)=\frac{1}{\Delta}\left(\frac{1}{2^7 3^3}I_2(f) \mathop{\rm Hess}\nolimits(f)(z_2^*,-z_1^*)-\frac{1}{2^4}\mathop{\rm Cat}\nolimits(f) f(z_2^*,-z_1^*)\right). \end{equation} One might hope that formula (\ref{eqn-quartic}) provides an extension of ${\mathbb P}\Phi$ beyond ${\mathbb P} X_2^4$. However, for $f=z_1^2z_2^2$ the second factor in the right-hand side of (\ref{eqn-quartic}) vanishes, which agrees with the fact, explained in Subsection \ref{S:binary-quartics}, that ${\mathbb P}\Phi$ does not have a natural continuation to the orbit $O_1=\mathop{\rm SL}\nolimits_2\cdot\,z_1^2z_2^2$.
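Identity (\ref{covar1}) can also be confirmed by machine on the family $q_t$. Note that $q_t$ and ${\mathbf q}_t$ are unchanged by the substitution $(z_1,z_2)\mapsto(-z_2,z_1)$, so the hat in (\ref{relcovcontrav}) acts trivially on this family and it suffices to compare coefficient triples. The following sketch (ours, with helper names of our choosing) does this in exact rational arithmetic:

```python
from fractions import Fraction as F

# A quartic A*z1^4 + M*z1^2*z2^2 + C*z2^4 is stored as the triple (A, M, C);
# note M = 6b in the notation of (form1).
def I2(q):
    A, M, C = q
    return A * C + M * M / F(12)

def Cat(q):
    A, M, C = q
    return A * M * C / F(6) - M ** 3 / F(216)

def hess(q):
    # Hessian f11*f22 - f12^2, which again lies in the biquadratic family
    A, M, C = q
    return (24 * A * M, 144 * A * C - 12 * M * M, 24 * C * M)

def scale(s, q):
    return tuple(s * x for x in q)

for k in range(20):
    t = F(2 * k + 1, 6)          # odd numerator: avoids t = 0, +-2
    q = (F(1), t, F(1))          # the quartic q_t of (qt)
    i2, cat = I2(q), Cat(q)
    delta = i2 ** 3 - 27 * cat ** 2
    # right-hand side of (covar1): (1/(2^7*3^3))*I2*Hess - (1/2^4)*Cat*id
    rhs = tuple(i2 / F(3456) * h - cat / F(16) * x for h, x in zip(hess(q), q))
    # left-hand side: Delta*Phi(q_t) = Delta(q_t) * bf(q_t), with bf(q_t) from (bfqt)
    lhs = scale(delta * F(1, 72) / (t ** 2 - 4), (t, F(-12), t))
    assert rhs == lhs
```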
\subsubsection{Binary Quintics} Suppose next that $n=2$, $d=5$. In this case the calculations are significantly more involved, and we will only provide a brief account of the result. In this situation $\Delta \Phi$ is a contravariant of weight 18, degree 6 and class 6. A generic binary quintic $f \in{\mathbb C}[z_1,z_2]_5$ is linearly equivalent to a quintic given by the {\it Sylvester canonical equation} \begin{equation} f = a X^5 + b Y^5 + c Z^5,\label{sylvcanform} \end{equation} where $X$, $Y$, $Z$ are linear forms satisfying $X+Y+Z=0$ (see, e.g., \cite[\S 205]{Ell}). The algebra of $\mathop{\rm SL}\nolimits_2$-invariants of binary quintics is generated by relative invariants of degrees 4, 8, 12, 18 with a relation in degree 36, and the algebra of covariants is generated by 23 fundamental homogeneous covariants (see \cite{Sy}), which we will write as $C_{i,j}$ where $i$ is the degree and $j$ is the order.
For $f\in{\mathbb C}[z_1,z_2]_5$ given in the form (\ref{sylvcanform}) the covariants relevant to our calculations are computed as follows: $$ \begin{array}{l} C_{4,0}(f,z)=a^2b^2+b^2c^2+a^2c^2-2abc(a+b+c), \\
\\ C_{8,0}(f,z)=a^2b^2c^2(ab+ac+bc),\hspace{0.8cm} C_{5,1}(f,z)=abc(bcX+acY+abZ), \\
\\ C_{2,2}(f,z)=abXY+acXZ+bcYZ, \hspace{0.3cm} C_{3,3}(f,z)=abcXYZ, \\
\\ C_{4,4}(f,z)=abc(aX^4+bY^4+cZ^4),\hspace{0.4cm} C_{1,5}(f,z)=f(z) = a X^5 + b Y^5 + c Z^5,\\
\\ \displaystyle C_{2,6}(f,z)=\frac{\mathop{\rm Hess}\nolimits(f)(z)}{400}=abX^3Y^3+bcY^3Z^3+acX^3Z^3. \end{array} $$ For instance, the discriminant can be written as $$ \Delta=C_{4,0}^2-128\,C_{8,0}. $$
The vector space of covariants of degree 6 and order 6 has dimension 4 and is generated by the products $$ \hbox{$C_{4,0}C_{2,6}$, $C_{1,5}C_{5,1}$, $C_{3,3}^2$, $C_{2,2}^3$, $C_{2,2}C_{4,4}$} $$ satisfying the relation $$ C_{4,0}C_{2,6} - C_{1,5}C_{5,1} + 9 C_{3,3}^2 - C_{2,2}^3 + 2 C_{2,2}C_{4,4}=0. $$ One can then explicitly compute $$ \widehat{\Delta \Phi}=\frac{1}{20}C_{4,0}C_{2,6}-\frac{3}{50}C_{1,5}C_{5,1}+\frac{27}{10}C_{3,3}^2-\frac{1}{10}C_{2,2}^3. $$
\subsubsection{Ternary cubics} Finally, we assume that $n=d=3$. In this case $\Delta \Phi$ is a contravariant of weight 10, degree 9 and class 3. Recall that the algebra of $\mathop{\rm SL}\nolimits_3$-invariants of ternary cubics is freely generated by the relative invariants $A_4$, $A_6$ (the Aronhold invariants considered in Subsection \ref{S:cubics}), and the ring of contravariants is generated over the algebra of $\mathop{\rm SL}\nolimits_3$-invariants by the Pippian $P$ of degree $3$ and class $3$, the Quippian $Q$ of degree $5$ and class $3$, the Clebsch transfer of the discriminant of degree $4$ and class $6$, and the Hermite contravariant of degree $12$ and class $9$ (see \cite{C}). For a ternary cubic of the form (\ref{generaltercubic}), the Pippian and Quippian are calculated as follows: $$ \hspace{-0.1cm}\begin{array}{l} P(f)(z_1^*,z_2^*,z_3^*) = -d(bcz_1^{*3}+acz_2^{*3}+abz_3^{*3})-(abc-4d^3)z_1^*z_2^*z_3^*,\\
\\ Q(f)(z_1^*,z_2^*,z_3^*) = (abc-10d^3)(bcz_1^{*3}+acz_2^{*3}+abz_3^{*3})-6d^2(5abc+4d^3)z_1^*z_2^*z_3^*. \end{array} $$ Since any contravariant of degree 9 and class 3 is a linear combination of $A_6 P$ and $A_4 Q$, it is easy to compute \begin{equation} \Delta \Phi = -\frac{1}{36}A_6 P - \frac{1}{27}A_4 Q.\label{contravar3} \end{equation} The above expression can be verified directly by applying it to the cubics $c_t$ defined in (\ref{ct}) and using formulas (\ref{bfct}), (\ref{discrtercub}), (\ref{form11}).
Identity (\ref{contravar3}) provides an expression for $\Phi$ in terms of $A_4$, $A_6$, $P$ and $Q$. Namely, on $X_3^3$ we have \begin{equation} \Phi = -\frac{1}{\Delta}\left(\frac{1}{36}A_6 P + \frac{1}{27}A_4 Q\right).\label{newexpr1} \end{equation} One might think that formula (\ref{newexpr1}) yields a continuation of ${\mathbb P}\Phi$ beyond ${\mathbb P} X_3^3$. However, for $f=z_1z_2z_3$ the second factor in the right-hand side of (\ref{newexpr1}) is zero, which illustrates the obstruction to extending ${\mathbb P}\Phi$ to the orbit ${\rm O}_1=\mathop{\rm SL}\nolimits_3\cdot\,z_1z_2z_3$ discussed in Subsection \ref{S:cubics}.
\end{document}
\begin{document}
\title{A variational approach to the alternating projections method}
\author{Carlo Alberto De Bernardi } \address{Dipartimento di Matematica per le Scienze economiche, finanziarie ed attuariali, Universit\`{a} Cattolica del Sacro Cuore, Via Necchi 9, 20123 Milano, Italy}
\email{carloalberto.debernardi@unicatt.it, carloalberto.debernardi@gmail.com}
\author{Enrico Miglierina } \address{Dipartimento di Matematica per le Scienze economiche, finanziarie ed attuariali, Universit\`{a} Cattolica del Sacro Cuore, Via Necchi 9, 20123 Milano, Italy}
\email{enrico.miglierina@unicatt.it}
\subjclass[2010]{Primary: 47J25; secondary: 90C25, 90C48}
\keywords{convex feasibility problem, stability, set-convergence, alternating projections method}
\thanks{ }
\begin{abstract}
The 2-sets convex feasibility problem aims at finding a point in
the nonempty intersection of two closed convex sets $A$ and $B$ in a Hilbert space $X$. The method of alternating projections is the simplest iterative procedure for finding a solution and it goes back to von Neumann.
In the present paper, we study some stability properties for this method in the following sense: we consider two sequences of sets,
each of them converging, with respect to the Attouch-Wets variational convergence, respectively, to $A$ and $B$. Given a starting point $a_0$, we consider the sequences of points obtained by projecting on the ``perturbed'' sets, i.e., the sequences $\{a_n\}$ and $\{b_n\}$ given by $b_n=P_{B_n}(a_{n-1})$ and $a_n=P_{A_n}(b_n)$.
Under appropriate geometrical and topological
assumptions on the intersection of the limit sets, we ensure that the sequences $\{a_n\}$ and $\{b_n\}$
converge in norm to a point in the intersection of $A$ and $B$. In particular, we consider both the case in which the intersection $A\cap B$ reduces to a singleton and the case in which the interior of $A \cap B$ is nonempty. Finally, we consider the case in which the limit sets $A$ and $B$ are subspaces. \end{abstract}
\maketitle
\section{Introduction} The 2-sets convex feasibility problem is the classical problem of finding a point in the nonempty intersection of two closed and convex sets $A$ and $B$ in a Hilbert space $X$ (see \cite[Section~4.5]{BorweinZhu} for some basic results on this subject). Many efforts have been devoted to the study of algorithmic procedures to solve convex feasibility problems, both from a theoretical and from a computational point of view (see, e.g., \cite{BauschkeBorwein,BCOMB,BorweinSimsTam,Censor,Hundal} and the references therein). The method of alternating projections is the simplest iterative procedure for finding a solution and it goes back to von Neumann \cite{vonNeumann}: let us denote by $P_A$ and $P_B$ the projections on the sets $A$ and $B$, respectively, and, given a starting point $c_0\in X$, consider the {\em alternating projections sequences} $\{c_n\}$ and $\{d_n\}$ given by $$d_n=P_{B}(c_{n-1})\ \ \text{and}\ \ c_n=P_{A}(d_n)\ \ \ \ \ (n\in\mathbb{N}).$$ If the sequences $\{c_n\}$ and $\{d_n\}$ converge in norm to a point in the intersection of $A$ and $B$, we say that the method of alternating projections converges.
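As an illustration (a hypothetical numerical example, not taken from the paper), the iteration can be run for two simple sets in $\mathbb{R}^2$: the closed unit disk $A$ centred at $(0,1)$ and the half-plane $B=\{(s,t):\,t\le 0\}$, which meet only at the origin.

```python
import math

def proj_disk(x, center=(0.0, 1.0), r=1.0):
    # Projection P_A onto the closed disk A of radius r centred at `center`.
    vx, vy = x[0] - center[0], x[1] - center[1]
    d = math.hypot(vx, vy)
    if d <= r:
        return x
    return (center[0] + r * vx / d, center[1] + r * vy / d)

def proj_halfplane(x):
    # Projection P_B onto the half-plane B = {(s, t) : t <= 0}.
    return (x[0], min(x[1], 0.0))

def alternating_projections(c0, steps):
    # d_n = P_B(c_{n-1}),  c_n = P_A(d_n)
    c = c0
    for _ in range(steps):
        d = proj_halfplane(c)
        c = proj_disk(d)
    return c

# A and B touch tangentially at the origin, so the intersection is {(0, 0)}
# and the iterates approach it, slowly (roughly like 1/sqrt(n)).
c = alternating_projections((3.0, 2.0), 40000)
```

Since the disk is rotund, the functional $(s,t)\mapsto -t$ strongly exposes $A$ at the origin; this is exactly the separation situation studied below, and the iterates indeed converge, although slowly, because the two sets meet tangentially.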
Many concrete problems in applications can be formulated as convex feasibility problems. Typical examples include the solution of convex inequalities, partial differential equations, the minimization of convex nonsmooth functions, medical imaging, computerized tomography and image reconstruction. For some details and other applications see, e.g., \cite{BauschkeBorwein} and the references therein.
In concrete applications, data are often affected by uncertainties. Hence stability of the solutions of a convex feasibility problem with respect to data perturbations is a desirable property, both from a theoretical and from a computational point of view. In the present paper we investigate some ``stability'' properties of the alternating projections method in the following sense. Let us suppose that $\{A_n\}$ and $\{B_n\}$ are two sequences of closed convex sets such that $A_n\rightarrow A$ and $B_n\rightarrow B$ for the Attouch-Wets {variational} convergence (see Definition~\ref{def:AW}) and let us introduce the definition of {\em perturbed alternating projections sequences}.
\begin{definition}\label{def:perturbedseq} Given $a_0\in X$, the {\em perturbed alternating projections sequences} $\{a_n\}$ and $\{b_n\}$, w.r.t. $\{A_n\}$ and $\{B_n\}$ and with starting point $a_0$, are defined inductively by
$$b_n=P_{B_n}(a_{n-1})\ \ \ \text{and}\ \ \ a_n=P_{A_n}(b_n) \ \ \ \ \ \ \ \ \ (n\in\mathbb{N}).$$ \end{definition}
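To make Definition~\ref{def:perturbedseq} concrete, here is a hypothetical numerical sketch (not from the paper): the limit sets are the closed unit disk $A$ centred at $(0,1)$ and the half-plane $B=\{(s,t):\,t\le 0\}$, while the perturbed sets $A_n$ (the disk shifted up by $1/n$) and $B_n=\{(s,t):\,t\le 1/n\}$ Attouch-Wets converge to $A$ and $B$.

```python
import math

def proj_A_n(x, n):
    # Projection onto A_n: the closed unit disk centred at (0, 1 + 1/n);
    # the sequence {A_n} Attouch-Wets converges to the disk A centred at (0, 1).
    cx, cy = 0.0, 1.0 + 1.0 / n
    vx, vy = x[0] - cx, x[1] - cy
    d = math.hypot(vx, vy)
    if d <= 1.0:
        return x
    return (cx + vx / d, cy + vy / d)

def proj_B_n(x, n):
    # Projection onto B_n = {(s, t) : t <= 1/n}, converging to {t <= 0}.
    return (x[0], min(x[1], 1.0 / n))

def perturbed_sequences(a0, steps):
    # b_n = P_{B_n}(a_{n-1}),  a_n = P_{A_n}(b_n)
    a = a0
    for n in range(1, steps + 1):
        b = proj_B_n(a, n)
        a = proj_A_n(b, n)
    return a

a = perturbed_sequences((3.0, 2.0), 40000)
```

Here $A\cap B=\{(0,0)\}$, and in this example the perturbed iterates still approach the origin, in accordance with the stability results proved below.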
\noindent Our aim is to find some conditions on the limit sets $A$ and $B$ that guarantee, for each choice of the sequences $\{A_n\}$ and $\{B_n\}$ and for each choice of the starting point $a_0$, the convergence in norm of the corresponding perturbed alternating projections sequences $\{a_n\}$ and $\{b_n\}$. If this is the case, we say that the couple $(A,B)$ is {\em stable}.
The results reported in this paper can be seen as a continuation of the research carried out in \cite{DebeMiglMol}. However, compared with the notion of stability studied in that paper, the approach developed here seems to be more interesting also from a computational point of view, since it does not require finding an exact solution of the ``perturbed problems'' (i.e., the problems given by the sets $A_n$ and $B_n$) but only computing projections onto the ``perturbed'' sets $A_n$ and $B_n$. Moreover, the techniques used in the proofs are completely different from those of \cite{DebeMiglMol}.
Clearly, in order for the couple $(A,B)$ to be stable, it is necessary that the alternating projections sequences $\{c_n\}$ and $\{d_n\}$ converge in norm (indeed, we can consider the particular case in which the sequences of sets $\{A_n\}$ and $\{B_n\}$ are given by $A_n=A$ and $B_n=B$, whenever $n\in\mathbb{N}$). Since, in general, this is not the case (see \cite{Hundal,MatouReich}), we shall restrict our attention to those situations in which the method of alternating projections converges. After some preliminaries, contained in Section~\ref{Notations and preliminaries}, we consider, in Sections~\ref{section:Infinite-dimensional}, \ref{Section:nonempty interior} and \ref{section:subspaces}, respectively, the following three cases: \begin{enumerate}
\item $A$ and $B$ are separated by a strongly exposing functional $f$ for the set $A$, i.e., there exist $x_0\in A\cap B$ and a linear continuous functional $f$ such that $\inf f(B)=f(x_0)=\sup f(A)$ and such that $f$ strongly exposes $A$ at $x_0$ (see Definition~\ref{def:strexp}); \item the intersection between $A$ and $B$ has nonempty interior; \item $A$ and $B$ are closed subspaces. \end{enumerate}
Observe that if (i) is satisfied then the method of alternating projections converges. Indeed, by \cite[Lemma~4.5.11]{BorweinZhu} or by \cite[Theorem~1.4]{KopReichHilbert}, the alternating projections sequences $\{c_n\}$ and $\{d_n\}$ satisfy $\|c_n-d_n\|\to0$. Then it is easy to verify that $f(c_n),f(d_n)\to f(x_0)$ and hence, since $f$ strongly exposes $A$ at $x_0$, we have that $c_n,d_n\to x_0$ in norm.
A similar assumption on the limit sets was considered by the authors and E.~Molho in the recent paper \cite{DebeMiglMol}, in which they proved, among other things, that if (i) is satisfied and if $x_n\in A_n,\,y_n\in B_n$ are such that $\|x_n-y_n\|$ coincides with the distance between $A_n$ and $B_n$ then $x_n,y_n\to x_0$ in norm (see the proof of \cite[Theorem~4.5]{DebeMiglMol}). In Section~\ref{section:Infinite-dimensional} of the present paper, we prove that if $A$ and $B$ are separated by a strongly exposing functional $f$ for the set $A$ then, for each choice of sequences $\{A_n\},\,\{B_n\}$ and starting point $a_0$, the corresponding perturbed alternating projections sequences $\{a_n\}$ and $\{b_n\}$ converge in norm to $x_0$ (cf. Theorem~\ref{puntolur} below). In this case, our approach is essentially based on suitable approximations of the sets $A_n$ and $B_n$ by convex and non-convex cones, respectively. This result also sheds new light on the celebrated example of Hundal (see \cite{Hundal}) of a convex feasibility problem in a Hilbert space whose corresponding alternating projections sequences do not converge in norm. There, $A$ is a convex cone and $B$ is a hyperplane touching the vertex of the cone $A$; this hyperplane is defined by a functional that does not strongly expose the vertex of the cone. Our result proves that, if we consider a hyperplane defined by a functional strongly exposing the vertex of the cone, we obtain not only the norm convergence of the alternating projections, but also the convergence of the perturbed alternating projections, i.e., the couple $(A,B)$ is stable.
{In Section~\ref{Section:nonempty interior}, we investigate to what extent it is possible to guarantee convergence of the perturbed alternating projections in the case in which $A\cap B$ is nonempty but does not reduce to a singleton. Example~\ref{ex: notconverge} shows that, in general, even in the finite-dimensional setting and even if $A\cap B$ is bounded, the couple $(A,B)$ may fail to be stable. On the other hand, Theorem~\ref{theorem:corpilur} ensures that the couple $(A,B)$ is stable whenever $\mathrm{int}\,(A\cap B)\neq \emptyset$. We point out that boundedness of $A \cap B$ is not required. Moreover, we apply the results of this section to investigate the convergence of perturbed alternating projections for the inequality constraints problem. }
Finally, the last section of the paper is devoted to the case (iii), where $A$ and $B$ are closed subspaces. The convex feasibility problem where $A$ and $B$ are subspaces is the original problem studied by von~Neumann. In his, now classical, theorem (see \cite{vonNeumann}), he proved that the alternating projections sequences $\{c_n\}$ and $\{d_n\}$ converge in norm to $P_{A\cap B}(c_0)$. This theorem was rediscovered by several authors and many alternative proofs were provided (see, e.g., \cite{KopReich,KopReichHilbert} and the references therein). In Section~\ref{section:subspaces}, we study the problem of convergence of perturbed alternating projections sequences in the case in which $A$ and $B$ are subspaces. Example~\ref{ex:duedimensionale} below shows that, even in the finite-dimensional setting, the perturbed projections sequences may be unbounded when $A\cap B\neq\{0\}$. For this reason, in Section~\ref{section:subspaces}, we focus on the situation in which $A$ and $B$ are closed subspaces such that $A\cap B=\{0\}$. It turns out that if $A+B$ is a closed subspace then the couple $(A,B)$ is stable (Theorem~\ref{prop:sottospazisommachiusa}). On the other hand, in Theorem~\ref{teo:sommaNONchiusa}, we provide a couple $(A,B)$ of closed subspaces such that $A\cap B=\{0\}$ and such that there exist sequences of sets $\{A_n\},\,\{B_n\}$ and a starting point $a_0$ such that the corresponding perturbed projections sequences are unbounded. Our construction is based on the example, contained in \cite{FranchettiLight86}, of two subspaces of a Hilbert space with non-closed sum such that the convergence of the corresponding alternating projection method is not geometric (for the definition of geometric convergence see \cite{FranchettiLight86}; see also \cite{PusReichZas} for some results concerning the convergence rate of the alternating projection algorithm for the case of $n$ subspaces).
\section{Notations and preliminaries}\label{Notations and preliminaries}
Throughout this paper, unless otherwise stated, $X$ denotes a real normed space with topological dual $X^*$. We denote by $B_X$ and $S_X$ the closed unit ball and the unit sphere of $X$, respectively.
For $x,y\in X$, $[x,y]$ denotes the closed segment in $X$ with endpoints $x$ and $y$.
For a subset $K$ of $X$, $\alpha>0$, and a functional $f\in S_{X^*}$ bounded on $K$, let $$S(f,\alpha,K)=\{x\in K;\, f(x)\geq\sup f(K)-\alpha\}$$ be the closed slice of $K$ given by $\alpha$ and $f$.
For $f\in S_{X^*}$ and $\alpha\in(0,1)$, we denote
$$C(f,\alpha)=\{x\in X; f(x)\geq \alpha\|x\|\},\
V(f,\alpha)=\{x\in X; f(x)\leq\alpha\|x\|\}.$$
It is easy to see that $C(f,\alpha)$ and $V(f,\alpha)$ are nonempty closed cones and that $C(f,\alpha)$ is convex.
For a subset $A$ of $X$, we denote by $\mathrm{int}\,(A)$, $\partial A$, ${\mathrm{conv}}\,(A)$ and $\overline{\mathrm{conv}}\,(A)$ the interior, the boundary, the convex hull and the closed convex hull of $A$, respectively.
We denote by $$\textstyle \mathrm{diam}(A)=\sup_{x,y\in A}\|x-y\|,$$ the (possibly infinite) diameter of $A$. For $x\in X$, let
$$\mathrm{dist}(x,A) =\inf_{a\in A} \|a-x\|.$$ Moreover, given $A,B$ nonempty subsets of $X$, we denote by $\mathrm{dist}(A,B)$ the usual ``distance'' between $A$ and $B$, that is, $$ \mathrm{dist}(A,B)=\inf_{a\in A} \mathrm{dist}(a,B).$$
Let us now introduce some definitions and basic properties concerning convergence of sets. By $\mathrm{c}(X)$ we denote the family of all nonempty closed subsets of $X$.
Let us introduce the (extended) Hausdorff metric $h$ on $\mathrm{c}(X)$. For $A,B\in\mathrm{c}(X)$, we define the excess of $A$ over $B$ as
$$e(A,B) = \sup_{a\in A} \mathrm{dist}(a,B).$$
\noindent Moreover, if $A\neq\emptyset$ and $B=\emptyset$ we put $e(A,B)=\infty$; if $A=\emptyset$ we put $e(A,B)=0$. For $A,B\in \mathrm{c}(X)$, we define
$$h(A,B)=\max \bigl\{ e(A,B),e(B,A) \bigr\}.$$
\begin{definition} A sequence $\{A_j\}$ in $\mathrm{c}(X)$ is said to Hausdorff converge to $A\in\mathrm{c}(X)$ if $$\textstyle \lim_j h(A_j,A) = 0.$$ \end{definition}
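For intuition, the excess and the Hausdorff distance can be computed directly for finite (hence closed) subsets of the real line; this toy computation, not part of the paper, also shows that the excess $e$ is not symmetric.

```python
def dist(x, A):
    # dist(x, A) = inf_{a in A} |a - x| (a minimum, since A is finite here)
    return min(abs(a - x) for a in A)

def excess(A, B):
    # e(A, B) = sup_{a in A} dist(a, B)
    return max(dist(a, B) for a in A)

def hausdorff(A, B):
    # h(A, B) = max{e(A, B), e(B, A)}
    return max(excess(A, B), excess(B, A))

A, B = [0.0, 1.0], [0.0, 3.0]
# dist(1, B) = 1 gives e(A, B) = 1, while dist(3, A) = 2 gives e(B, A) = 2.
```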
Next we recall the definition of the so called Attouch-Wets convergence (see, e.g., \cite[Definition~8.2.13]{LUCC}), which can be seen as a localization of the Hausdorff convergence. If $N\in\mathbb{N}$ and $A,C\in\mathrm{c}(X)$, define \begin{eqnarray*} e_N(A,C) &=& e(A\cap N B_X, C)\in[0,\infty),\\ h_N(A,C) &=& \max\{e_N(A,C), e_N(C,A)\}. \end{eqnarray*}
\begin{definition}\label{def:AW} A sequence $\{A_j\}$ in $\mathrm{c}(X)$ is said to Attouch-Wets converge to $A\in\mathrm{c}(X)$ if, for each $N\in\mathbb{N}$, $$\textstyle \lim_j h_N(A_j,A)= 0.$$ \end{definition}
Several times without mentioning it, we shall use
the following two results.
\begin{theorem}[{see, e.g., \cite[Theorem~8.2.14]{LUCC}}] The sequence of sets $\{A_n\}$ Attouch-Wets converges to $A$ if{f}
$$\textstyle \sup_{\|x\|\leq N}{|\mathrm{dist}(x,A_n)-\mathrm{dist}(x,A)|}\to 0 \ \ \ (n\to \infty),$$
whenever $N\in\mathbb{N}$. \end{theorem}
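A hypothetical one-dimensional illustration of this criterion (not from the paper): the intervals $A_n=[1/n,\,1+1/n]$ Attouch-Wets converge to $A=[0,1]$, and the supremum in the theorem equals $1/n$, as the grid approximation below confirms.

```python
def dist_interval(x, lo, hi):
    # Distance from the point x to the closed interval [lo, hi].
    return max(lo - x, x - hi, 0.0)

def sup_dist_gap(n, N=2, grid=2001):
    # Grid approximation of sup_{|x| <= N} |dist(x, A_n) - dist(x, A)|
    # for A_n = [1/n, 1 + 1/n] and A = [0, 1]; the true value is 1/n.
    gap = 0.0
    for i in range(grid):
        x = -N + 2.0 * N * i / (grid - 1)
        gap = max(gap, abs(dist_interval(x, 1.0 / n, 1.0 + 1.0 / n)
                           - dist_interval(x, 0.0, 1.0)))
    return gap
```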
\begin{fact}\label{fact:AW} Let $A$ be a nonempty closed convex set in a Banach space $X$. Suppose that
$\{A_n\}$ is a sequence of closed convex sets such that $A_n\rightarrow
A$ for the Attouch-Wets convergence. Then, if $\{a_n\}$ is a bounded sequence in $X$ such that $a_n\in A_n$ ($n\in\mathbb{N}$), we have that $\mathrm{dist}(a_n,A)\to 0$.
\end{fact}
\begin{definition}[{see, e.g., \cite[Definition~7.10]{FHHMZ}}]\label{def:strexp} Let $A$ be a nonempty subset of a normed space $X$. A point $a\in A$
is called a strongly exposed point of $A$ if there exists a
support functional $f\in X^*\setminus\{0\}$ for $A$ in $a$ $\bigl($i.e.,
$f (a) = \sup f(A)$$\bigr)$, such that $x_n\to a$ for all sequences
$\{x_n\}$ in $A$ such that $\lim_n f(x_n) = \sup f(A)$. In this
case, we say that $f$ strongly exposes $A$ at $a$. \end{definition}
\noindent Let us observe that $f\in S_{X^*}$ strongly exposes $A$ at $a$ if{f} $f(a)=\sup f(A)$ and $$\mathrm{diam}\bigl(S(f,\alpha,A)\bigr)\to0 \text{ as } \alpha\to 0^+.$$
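The slice characterization can be checked numerically in a hypothetical example (not from the paper): in $\mathbb{R}^2$, the functional $f(s,t)=s$ strongly exposes the closed unit disk $A$ at $(1,0)$, and the diameters of the slices $S(f,\alpha,A)$ shrink as $\alpha\to0^+$.

```python
import math

def slice_diameter(alpha, samples=2000):
    # Approximate diameter of S(f, alpha, A) = {(s, t) in A : s >= 1 - alpha},
    # where A is the closed unit disk and f(s, t) = s, so sup f(A) = 1.
    # The diameter is attained on the boundary circle, which we sample.
    pts = [(math.cos(2 * math.pi * i / samples),
            math.sin(2 * math.pi * i / samples)) for i in range(samples)]
    pts = [p for p in pts if p[0] >= 1.0 - alpha]
    return max(math.hypot(p[0] - q[0], p[1] - q[1]) for p in pts for q in pts)
```

The exact value is $2\sqrt{1-(1-\alpha)^2}$, which tends to $0$ with $\alpha$; the sampled diameters decrease accordingly.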
\noindent Let us recall that a {\em body} in $X$ is a closed convex set in $X$ with nonempty interior.
\begin{definition}[{see, e.g., \cite[Definition~1.3]{KVZ}}]
Let $A\subset X$ be a body. We say that $x\in\partial A$ is an
{\em LUR (locally uniformly rotund) point} of $A$ if for each
$\varepsilon>0$ there exists $\delta>0$ such that if $y\in A$
and $\mathrm{dist}((x+y)/2,\partial A)<\delta$ then $\|x-y\|<\varepsilon$. \end{definition}
If $A=B_X$, the previous definition coincides with the standard definition of local uniform rotundity of the norm at $x$. We say that $A$ is an {\em LUR body} if each point in $\partial A$ is an LUR point of $A$.
\begin{lemma}\label{slicelimitatoselur} Let $A$ be a body in $X$
and suppose that $a\in\partial A$ is an LUR point of $A$. Then, if
$f\in S_{X^*}$ is a support functional for $A$ in $a$, $f$
strongly exposes $A$ at $a$. \end{lemma}
The lemma is well known in the case in which the body is a ball (see, e.g., \cite[Exercise~8.27]{FHHMZ}); in the general case the proof is similar (see, e.g., \cite[Lemma~4.3]{DebeMiglMol}).
The next lemma gives a characterization of those functionals $f$ that strongly expose a set $A$ in terms of containment of $A$ in translations of cones of the form $C(f,\alpha)$.
\begin{lemma}\label{lemma:stronglyexposedVSopiccolo}
Let $A$ be a convex set in $X$ such that $0\in A$. Let $f\in S_{X^*}$ be such that
$f (0) = \inf f(A)$ and let $x_0\in S_X$ be such that $f(x_0)=1$. Let us consider $\varepsilon:(0,1)\to[0,\infty]$ defined by
$$\varepsilon(\alpha)=\inf\{\lambda>0;\, A\subset C(f,\alpha)-\lambda x_0\}\ \ \ \ (0<\alpha<1).$$
Then $\varepsilon(\alpha)$ is $o(\alpha)$ as $\alpha\to 0^+$ if{f} $({-f})$ strongly exposes $A$ at $0$. \end{lemma}
\begin{remark}\label{InfMin} Observe that if $\alpha\in(0,1)$ is such that $\varepsilon(\alpha)$ is finite then, in the definition of the function $\varepsilon$, the infimum is actually a minimum. Hence, in this case, we have that
$A\subset C(f,\alpha)-\varepsilon(\alpha) x_0.$ \end{remark} \begin{proof}[Proof of Lemma~\ref{lemma:stronglyexposedVSopiccolo}] Suppose, on the contrary, that $\varepsilon(\alpha)$ is not $o(\alpha)$ as $\alpha\to 0^+$. Then there exist $M>0$ and $\alpha_n\to0^+$ such that $\varepsilon(\alpha_n)>M\alpha_n$.
Let $z_n\in A\setminus[C(f,\alpha_n)-M\alpha_n x_0]$ and observe that
$$\textstyle f(z_n)+M{\alpha_n}=f(z_n+M\alpha_n x_0)<\alpha_n\|z_n+M\alpha_n x_0\|.$$
Hence it holds
$$\textstyle 0\leq f(z_n)<\alpha_n\|z_n+M\alpha_n x_0\|-M{\alpha_n}=\alpha_n(\|z_n+M\alpha_n x_0\|-{M}).$$
Then $\|z_n+M\alpha_n x_0\|>{M}$ and hence eventually $\|z_n\|>\frac M2$.
So, eventually we have
$$\textstyle 0\leq f(\frac{z_n}{\|z_n\|})<\alpha_n\frac{\|z_n+M\alpha_n x_0\|-M}{\|z_n\|}\leq \alpha_n\frac{\|z_n\|+M\alpha_n-M}{\|z_n\|}\leq \alpha_n.$$
In particular, we have $f(\frac{Mz_n}{2\|z_n\|})\to0$ as $n\to\infty$. Since $A$ is convex and $0\in A$, we have that eventually $\frac{Mz_n}{2\|z_n\|}\in A$, and hence that ${-f}$ does not strongly expose $A$ at $0$.
For the other implication, suppose that $\varepsilon(\alpha)$ is $o(\alpha)$ as $\alpha\to 0^+$. By Remark~\ref{InfMin}, we have that eventually (for $\alpha\to 0^+$) $\varepsilon(\alpha)$ is finite and $$A\subset C(f,\alpha)-\varepsilon(\alpha)x_0.$$ Let $x\in A\cap\{x\in X;\, f(x)\leq\alpha^2\}$, then eventually
$$\alpha\|x+\varepsilon(\alpha)x_0\|\leq f(x+\varepsilon(\alpha)x_0)= f(x)+\varepsilon(\alpha)f(x_0)\leq \alpha^2+\varepsilon(\alpha)$$
and hence $\|x\|\leq\frac{\varepsilon(\alpha)}{\alpha}+\varepsilon(\alpha)+\alpha$. This proves that $({-f})$ strongly exposes $A$ at $0$. \end{proof}
In the following two lemmas we analyse some relations between the Attouch-Wets convergence of a sequence of sets and the containment of the sets of the sequence in a cone of the form $V(f,\alpha)$ or $C(f,\alpha)$.
\begin{lemma}\label{lemma:definitivamentenellaconca} Let $B, B_n$ ($n\in\mathbb{N}$) be closed convex sets in $X$ such that $B_n\rightarrow
B$ for the Attouch-Wets convergence, and $f\in S_{X^*}$. Suppose that $x_0\in S_X$ is such that $f(x_0)=1$ and suppose that $0\in B\subset \{x\in X;\, f(x)\leq 0\}$.
Then, for each $\alpha\in(0,1)$ and $\varepsilon>0$, there exists $n_0\in \mathbb{N}$ such that $B_n\subset V(f,\alpha)+\varepsilon x_0$, whenever $n\geq n_0$. \end{lemma}
\begin{proof}
Suppose, on the contrary, that there exists a sequence of integers $\{n_k\}$ such that, for each $k\in\mathbb{N}$, there exists $$b_{n_k}\in B_{n_k}\setminus[V(f,\alpha)+\varepsilon x_0].$$
Since
$$\mathrm{dist}(B, C(f,\alpha)+\varepsilon x_0)>0,$$
by Fact~\ref{fact:AW}, we can suppose without any loss of generality that $\|b_{n_k}\|\geq1$ ($k\in\mathbb{N}$).
Since $b_{n_k}\not\in V(f,\alpha)+\varepsilon x_0$, we have
$$f(b_{n_k})>\alpha\|b_{n_k}-\varepsilon x_0\|+\varepsilon\geq\alpha\|b_{n_k}\|.$$
Let $\delta=\min\{\varepsilon,\alpha/2\}$. Since $0\in B$ and $B_n\rightarrow
B$ for the Attouch-Wets convergence, we can suppose without any loss of generality that, for each $k\in\mathbb{N}$, there exists $d_k\in (\delta B_X)\cap B_{n_k}$. Let
$$\textstyle w_k=\frac1{\|b_{n_k}\|}b_{n_k}+\frac{\|b_{n_k}\|-1}{\|b_{n_k}\|}d_k\in B_{n_k},$$
and observe that $\|w_k\|\leq1+\varepsilon$. Moreover, we have
$$\textstyle f(w_k)\geq f(b_{n_k})\frac1{\|b_{n_k}\|}-\|d_k\|\geq \alpha -\|d_k\|\geq \frac\alpha2.$$
Since $\{w_k\}$ is a bounded sequence, by Fact~\ref{fact:AW}, $\mathrm{dist}(w_k,B)\to0$.
Hence we get a contradiction since $\{w_k\}\subset \{x\in X;\, f(x)\geq\alpha/2\}$ and
$$\mathrm{dist}(B, \{x\in X;\, f(x)\geq\alpha/2\})>0.$$ \end{proof}
\begin{lemma}\label{lemma:definitivamentenelcono} Let $A, A_n$ ($n\in\mathbb{N}$) be closed convex sets in $X$ such that $A_n\rightarrow
A$ for the Attouch-Wets convergence, $f\in S_{X^*}$, $\alpha\in(0,1)$, and $\varepsilon>0$. Suppose that $x_0\in S_X$ is such that $f(x_0)=1$ and suppose that $0\in A\subset C(f,\alpha)-\varepsilon x_0$.
Then, for each $\beta\in(0,\alpha)$ and $\varepsilon'>\varepsilon$, there exists $n_0\in \mathbb{N}$ such that $A_n\subset C(f,\beta)-\varepsilon' x_0$, whenever $n\geq n_0$. \end{lemma}
\begin{proof}
Suppose on the contrary that there exists a sequence of integers $\{n_k\}$ such that, for each $k\in\mathbb{N}$, there exists $$a_{n_k}\in A_{n_k}\setminus[C(f,\beta)-\varepsilon' x_0].$$
Since $a_{n_k}+\varepsilon' x_0\not\in C(f,\beta)$, we have
\begin{equation}\label{eq:fuoridalcono} f(a_{n_k}+\varepsilon' x_0)=f(a_{n_k})+\varepsilon'<\beta\|a_{n_k}+\varepsilon' x_0\|. \end{equation}
Fix any $\gamma\in(\beta,\alpha)$ and let $M\geq1$ be such that $M>\frac{2\varepsilon'}{\alpha-\gamma}$. Finally, let $\theta\in (0,1)$ be such that
\begin{enumerate}
\item[(a)] $M-\theta>\frac{2\varepsilon'}{\alpha-\gamma} $; \item[(b)] $\frac{\beta M+\theta}{M-\theta}\leq\gamma$. \end{enumerate} Since $$\mathrm{dist}\bigl(C(f,\alpha)-\varepsilon x_0,V(f,\beta)-\varepsilon' x_0\bigr)>0,$$
by Fact~\ref{fact:AW}, we can suppose without any loss of generality that $\|a_{n_k}\|\geq M$ ($k\in\mathbb{N}$). Moreover, since $0\in A$ and $A_n\rightarrow A$ for the Attouch-Wets convergence, we can suppose without any loss of generality that, for each $k\in\mathbb{N}$, there exists $c_k\in A_{n_k}\cap\theta B_X$. Put, for each $k\in\mathbb{N}$,
$$\textstyle b_k=\frac M{\|a_{n_k}\|}a_{n_k}+\frac{\|a_{n_k}\|-M}{\|a_{n_k}\|}c_k\in A_{n_k},$$
and observe that $M-\theta\leq\|b_k\|\leq M+\theta$. Now, by (\ref{eq:fuoridalcono}), we have $f(a_{n_k})<\beta\|a_{n_k}\|$ and hence
$$f(b_k)\leq M\beta+\theta\leq\|b_k\|\frac{M\beta+\theta}{\|b_k\|}\leq\frac{M\beta+\theta}{M-\theta}\|b_k\|\leq\gamma\|b_k\|.$$ Moreover, since $\{b_k\}$ is bounded and $A\subset C(f,\alpha)-\varepsilon x_0$, by Fact~\ref{fact:AW}, we have that eventually
$f(b_k)\geq\alpha\|b_k\|-2\varepsilon'$ and hence that
$$\alpha\|b_k\|-2\varepsilon'\leq f(b_k)\leq\gamma \|b_k\|.$$
In particular, we have that eventually $\|b_k\|\leq\frac{2\varepsilon'}{\alpha-\gamma}<M-\theta$, a contradiction since $\|b_k\|\geq M-\theta$. \end{proof}
\section{The case where the intersection of limit sets is a singleton}\label{section:Infinite-dimensional}
In the rest of the paper, we suppose that $X$ is a real Hilbert space. If $u,v\in X\setminus\{0\}$, we denote as usual
$$\textstyle\cos(u,v)=\frac{\langle u,v\rangle}{\|u\|\|v\|},$$ where $\langle u,v\rangle$ denotes the inner product between $u$ and $v$.
If $K$ is a nonempty closed convex subset of $X$, let us denote by $P_K$ the projection onto the set $K$. Several times without mentioning it, we shall use the variational characterization of best approximations from convex sets in Hilbert spaces: let $K$ be as above, $x\in X$ and $y_0\in K$, then $y_0=P_K(x)$ if and only if \begin{equation}\label{eq:proiezionesuconvesso} \langle x-y_0,y-y_0\rangle\leq0 \ \ \ \ \text{whenever} \ y\in K. \end{equation} It is easy to see that, if $x\not\in K$, (\ref{eq:proiezionesuconvesso}) is equivalent to the following condition: \begin{equation}\label{eq:proiezionecoseno}
\|y-y_0\|\leq \|x-y\|\cos(y_0-y,x-y) \ \ \ \ \text{whenever} \ y\in K\setminus\{y_0\}. \end{equation} Moreover, if $K$ is a subspace of $X$ then (\ref{eq:proiezionesuconvesso}) becomes \begin{equation}\label{eq:proiezionesusottospazio} \langle x-y_0,y-y_0\rangle=0 \ \ \ \ \text{whenever} \ y\in K. \end{equation}
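Both characterizations can be sanity-checked numerically (a hypothetical example, not from the paper): project $x=(2,2)$ onto the closed unit ball of $\mathbb{R}^2$ and onto the subspace spanned by the first coordinate axis.

```python
import math

def proj_ball(x):
    # Projection onto the closed unit ball K of the Euclidean plane.
    d = math.hypot(x[0], x[1])
    return x if d <= 1.0 else (x[0] / d, x[1] / d)

def inner(u, v):
    return u[0] * v[0] + u[1] * v[1]

x = (2.0, 2.0)
y0 = proj_ball(x)

# Variational inequality: <x - y0, y - y0> <= 0 for y sampled over the ball K.
ok_ball = all(
    inner((x[0] - y0[0], x[1] - y0[1]),
          (r * math.cos(t) - y0[0], r * math.sin(t) - y0[1])) <= 1e-12
    for r in (0.0, 0.5, 1.0)
    for t in [2 * math.pi * i / 100 for i in range(100)]
)

# Subspace case: for K' = {(s, 0)} the projection of x is y0s = (x_1, 0) and
# the inner product <x - y0s, y - y0s> vanishes for every y in K'.
y0s = (x[0], 0.0)
ok_sub = all(abs(inner((x[0] - y0s[0], x[1] - y0s[1]), (s - y0s[0], 0.0))) < 1e-12
             for s in (-3.0, 0.0, 7.5))
```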
Let us recall the definition of stability for a couple $(A,B)$ of subsets of $X$.
\begin{definition}
Let $A$ and $B$ be closed convex subsets of $X$ such that $A\cap B$ is nonempty. We say that
the couple $(A,B)$ is {\em stable} if for each choice of sequences $\{A_n\},\{B_n\}\subset\mathrm{c}(X)$ converging for the Attouch-Wets convergence to $A$ and $B$, respectively, and for each choice of the starting point $a_0$, the corresponding perturbed alternating projections sequences $\{a_n\}$ and $\{b_n\}$ converge in norm. \end{definition}
\begin{remark} We remark that in the above definition we can equivalently require that there exists $c\in A\cap B$ such that $a_n,b_n\to c$ in norm. \end{remark}
\begin{proof}
It suffices to prove that if the perturbed alternating projections sequences $\{a_n\}$ and $\{b_n\}$ converge in norm then they both converge to a point in $A\cap B$.
Let us start by proving that if $a_n\to a$ then $a\in A\cap B$. It is not difficult to prove that, since $$a_{n+1}= P_{A_n}P_{B_n}a_n=P_{A}P_{B}a_n+(P_{A_n}P_{B_n}-P_{A}P_{B})a_n$$ and since $A_n\to A,B_n\to B$ for the Attouch-Wets convergence, we have $a=P_AP_B a$. By \cite[Facts~1.1, (ii)]{BauschkeBorwein93}, we have that $a\in A\cap B$. Similarly, it is easy to see that
$$b_{n+1}=P_{B_n}a_n=P_{B}a_n+(P_{B_n}-P_{B})a_n\to P_Ba=a,$$
and the proof is concluded.
\end{proof}
The main aim of this section is to prove that under the assumption that the sets $A$ and $B$ are separated by a strongly exposing functional $f$ for the set $A$ (i.e. condition (i) in the introduction) the couple $(A,B)$ is stable. The following theorem is the main result of this section.
\begin{theorem}\label{puntolur} Let $X$ be a Hilbert space and $A,B$
nonempty closed convex subsets of $X$. Let $\{A_n\}$ and $\{B_n\}$ be two sequences of closed convex sets such that $A_n\rightarrow A$ and $B_n\rightarrow B$ for the Attouch-Wets convergence. Suppose that there exist $y\in A\cap B$ and a linear continuous functional $f\in S_{X^*}$ such that $\inf f(B)=f(y)=\sup f(A)$ and such that $f$ strongly exposes $A$ at $y$. Then, for each $a_0\in X$, the corresponding perturbed alternating projections sequences $\{a_n\}$ and $\{b_n\}$ (with starting point $a_0$), converge to $y$ in norm. \end{theorem}
Before starting with the proof of the theorem we need some preliminary work. First of all, let us observe that, without any loss of generality, we can suppose that $y=0$ and, after replacing $f$ by $-f$, that $$\inf f(A)=f(0)=\sup f(B).$$
Suppose that $x_0\in S_X$ is such that $f(x_0)=1$, i.e., $f$ is represented by $x_0$, in the sense that $f(\cdot)=\langle x_0,\cdot\rangle$. Then it is straightforward to give the following representation of the cones $C(f,\alpha)$ and $V(f,\alpha)$, introduced at the beginning of Section~\ref{Notations and preliminaries}: if we define $$\textstyle C(\theta):=\{x\in X\setminus\{0\};\, \cos(x,x_0)\geq\sin(\theta)\}\cup\{0\}\ \ \ (\theta\in(0,\frac\pi2)),$$
then the set $C(\theta)$ coincides with $C(f,\sin\theta)$. Similarly, if we define $$\textstyle V(\theta):=\{x\in X\setminus\{0\};\, \cos(x,x_0)\leq\sin(\theta)\}\cup\{0\}\ \ \ (\theta\in(0,\frac\pi2)),$$
then the set $V(\theta)$ coincides with $V(f,\sin\theta)$. We shall need the following simple fact.
\begin{fact}\label{fact:coseno} Suppose that $\theta_1,\theta_2\in(0,\frac\pi2)$ are such that $\theta_1<\theta_2$. If $x\in C(\theta_2)\setminus\{0\}$ and $y\in V(\theta_1)\setminus\{0\}$ then $\cos(x,y)\leq\cos(\theta_2-\theta_1)$. \end{fact}
\begin{proof} For $z\in X\setminus\{0\}$ let us denote $\theta_z=\frac\pi2-\arccos \cos(z,x_0)$ and observe that $$z\in C(\theta_2)\Leftrightarrow \theta_z\geq\theta_2\ \ \ \text{and}\ \ \ z\in V(\theta_1)\Leftrightarrow \theta_z\leq\theta_1.$$ Let us define $x_1=x-f(x)x_0$ and $y_1=y-f(y)x_0$, and observe that $f(x)=\|x\|\sin\theta_x$, $\|x_1\|=\|x\|\cos\theta_x$, and similarly for $y$. Then
$$\textstyle \cos(x,y)=\frac{f(x)f(y)+\langle x_1,y_1\rangle}{\|x\|\|y\|}\leq\frac{f(x)f(y)}{\|x\|\|y\|}+\frac{\|x_1\|\|y_1\|}{\|x\|\|y\|}=\cos(\theta_x-\theta_y)\leq\cos(\theta_2-\theta_1).$$ \end{proof}
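Fact~\ref{fact:coseno} can also be checked numerically in the plane. The following Python sketch (ours, purely illustrative; the function name is an assumption) takes $X=\mathbb{R}^2$, $x_0=(0,1)$, so that $f(z)=z_2$, samples unit vectors in $C(\theta_2)$ and in $V(\theta_1)$, and verifies the bound $\cos(x,y)\leq\cos(\theta_2-\theta_1)$:

```python
import math

def max_cos_between_cones(th1, th2, steps=360):
    # X = R^2, x0 = (0, 1), f(z) = z[1]; a unit vector z(t) = (cos t, sin t)
    # satisfies cos(z, x0) = sin t, so z is in C(th2) iff sin t >= sin(th2)
    # and z is in V(th1) iff sin t <= sin(th1).
    worst = -1.0
    for i in range(steps + 1):
        tx = th2 + (math.pi - 2 * th2) * i / steps      # x in C(th2)
        x = (math.cos(tx), math.sin(tx))
        for j in range(steps):
            ty = -math.pi + 2 * math.pi * j / steps
            if math.sin(ty) > math.sin(th1):            # keep only y in V(th1)
                continue
            y = (math.cos(ty), math.sin(ty))
            worst = max(worst, x[0] * y[0] + x[1] * y[1])
    return worst

# the sampled maximum of cos(x, y) never exceeds cos(th2 - th1)
assert max_cos_between_cones(0.3, 0.9) <= math.cos(0.9 - 0.3) + 1e-9
```

The sampled maximum also approaches $\cos(\theta_2-\theta_1)$, so the bound is sharp, as the boundary case $\theta_x=\theta_2$, $\theta_y=\theta_1$ in the proof suggests.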
\begin{proof}[Proof of Theorem~\ref{puntolur}]
Fix $M>0$; it suffices to prove that the sequences $\{a_n\}$ and $\{b_n\}$ are eventually contained in $2M B_X$. Let $f\in S_{X^*}$ and $x_0\in X$ be as above. For $\alpha\in (0,1)$, let $$\varepsilon(\alpha)=\inf\{\lambda>0;\, A\subset C(f,\alpha)-\lambda x_0\}\in[0,\infty].$$
By Lemma~\ref{lemma:stronglyexposedVSopiccolo}, $\varepsilon(\alpha)$ is $o(\alpha)$ as $\alpha\to 0^+$. In particular, we can fix $\beta\in(0,1/3)$ such that if $\theta=\frac12\arcsin (2\beta)$ then $\varepsilon':=2\varepsilon(3\beta)\in\mathbb{R}$ and \begin{enumerate}
\item[(a)] $\varepsilon'\leq M/2$;
\item[(b)] $\sin\theta+\frac8M\varepsilon'\leq\sin(\frac43\theta)$;
\item[(c)] $\sin(2\theta)-\frac8M{\varepsilon'}\geq\sin(\frac53\theta)$;
\item[(d)] $\cos(\frac13\theta)+\frac2M{\varepsilon'}\leq\cos(\frac16\theta)$. \end{enumerate}
Since, by Remark~\ref{InfMin}, $0\in A\subset C(f,3\beta)-\varepsilon(3\beta) x_0$, by Lemma~\ref{lemma:definitivamentenelcono}, we have that eventually $$A_n\subset C(f,2\beta)-2\varepsilon(3\beta) x_0=C(2\theta)-\varepsilon' x_0.$$ Since $0\in B\subset\{x\in X;\, f(x)\leq0\}$, by Lemma~\ref{lemma:definitivamentenellaconca}, we have that eventually $$B_n\subset V(\theta)+\varepsilon' x_0.$$ Since $0\in A\cap B$, $A_n\rightarrow A$ and $B_n\rightarrow B$ for the Attouch-Wets convergence, eventually there exist $x_n\in A_n\cap \varepsilon'B_X$ and $y_n\in B_n\cap \varepsilon'B_X$.
\begin{claim} Eventually, if $a_n,b_n,b_{n+1}\not\in M B_X$, the following conditions hold: \begin{enumerate}
\item[(i)] $a_n-x_n\in C(\frac53\theta)$;
\item[(ii)] $b_n-x_n\in V(\frac43\theta)$;
\item[(iii)] $a_n-y_{n+1}\in C(\frac53\theta)$;
\item[(iv)] $b_{n+1}-y_{n+1}\in V(\frac43\theta)$. \end{enumerate} \end{claim}
\begin{proof}[Proof of the claim]
Let us prove (i) and (ii); the proof of (iii) and (iv) is similar. To prove (i), observe that, since $a_n\in A_n\subset C(2\theta)-\varepsilon' x_0$, we have
\begin{eqnarray*}
f(a_n-x_n)&\geq& f(a_n+\varepsilon'x_0)-2\varepsilon'\\
&\geq&\sin(2\theta)(\|a_n+\varepsilon'x_0\|)-2\varepsilon'\\
&\geq&\sin(2\theta)(\|a_n-x_n\|-2\varepsilon')-2\varepsilon'\\
\textstyle&=&\textstyle\|a_n-x_n\|(\sin(2\theta)-\frac{2\varepsilon'\sin(2\theta)+2\varepsilon'}{\|a_n-x_n\|})\\
\textstyle&\geq&\textstyle\|a_n-x_n\|(\sin(2\theta)-\frac8M{\varepsilon'}) \\
&\geq&\textstyle\|a_n-x_n\|\sin(\frac53\theta),
\end{eqnarray*}
where the last inequality holds by (c).
To prove (ii), we proceed similarly: observe that, since $b_n\in B_n\subset V(\theta)+\varepsilon' x_0$, we have
\begin{eqnarray*}
f(b_n-x_n)&\leq& f(b_n-\varepsilon'x_0)+2\varepsilon'\\
&\leq&\sin(\theta)(\|b_n-\varepsilon'x_0\|)+2\varepsilon'\\
&\leq&\sin(\theta)(\|b_n-x_n\|+2\varepsilon')+2\varepsilon'\\
\textstyle&=&\textstyle\|b_n-x_n\|(\sin\theta+\frac{2\varepsilon'\sin\theta+2\varepsilon'}{\|b_n-x_n\|})\\
\textstyle&\leq&\textstyle\|b_n-x_n\|(\sin\theta+\frac8M{\varepsilon'}) \\
&\leq&\textstyle\|b_n-x_n\|\sin(\frac43\theta),
\end{eqnarray*}
where the last inequality holds by (b). The claim is proved.
\end{proof}
Now, since $a_n=P_{A_n}b_n$ and $x_n\in A_n$, by (\ref{eq:proiezionecoseno}), it holds \begin{equation}\label{eq:normadecresce}
\|a_n-x_n\|\leq\|b_n-x_n\|\cos(a_n-x_n,b_n-x_n). \end{equation}
Then we can observe that, by (i) and (ii) in our claim and by Fact~\ref{fact:coseno}, we have that eventually, if $a_n,b_n\not\in M B_X$, it holds $\|a_n-x_n\|\leq\|b_n-x_n\|\cos(\frac13\theta)$ and hence
$$\textstyle \|a_n\|\leq\|a_n-x_n\|+\varepsilon'\leq(\|b_n\|+\varepsilon')\cos(\frac13\theta)+\varepsilon'\leq\|b_n\|(\cos(\frac13\theta)+\frac2M\varepsilon')\leq\|b_n\|\cos(\frac16\theta),$$
where the last inequality holds by (d). Similarly, since $b_{n+1}=P_{B_n}a_n$ and $y_{n+1}\in B_n$, it holds $\|b_{n+1}-y_{n+1}\|\leq\|a_n-y_{n+1}\|\cos(b_{n+1}-y_{n+1},a_n-y_{n+1})$. By (iii) and (iv) in our claim and by Fact~\ref{fact:coseno}, we have that eventually, if $a_n,b_{n+1}\not\in M B_X$, it holds $\|b_{n+1}-y_{n+1}\|\leq\|a_n-y_{n+1}\|\cos(\frac13\theta)$ and hence
$$\textstyle \|b_{n+1}\|\leq(\|a_n\|+\varepsilon')\cos(\frac13\theta)+\varepsilon'\leq\|a_n\|(\cos(\frac13\theta)+\frac2M\varepsilon')\leq\|a_n\|\cos(\frac16\theta),$$ where the last inequality holds by (d).
By (\ref{eq:normadecresce}) and by the observations above, there exists $n_0\in\mathbb{N}$ such that if $n\geq n_0$ then the following conditions hold: \begin{enumerate}
\item[($\alpha)$] if $a_n,b_n\not\in M B_X$ then $ \|a_n\|\leq\|b_n\|\cos(\frac16\theta)$, and if $a_n,b_{n+1}\not\in M B_X$ then $\|b_{n+1}\|\leq\|a_n\|\cos(\frac16\theta)$;
\item[($\beta)$] if $b_n\in M B_X$ then $\|a_n\|\leq\|b_n\|+2\varepsilon'\leq 2M$, and if $a_n\in M B_X$ then $\|b_{n+1}\|\leq\|a_n\|+2\varepsilon'\leq 2M$.
\end{enumerate}
Now, it is easy to see that there exists $n_1\geq n_0$ such that $a_{n_1}\in MB_X$ or $b_{n_1}\in MB_X$. Indeed, if we had $a_n,b_n\not\in M B_X$ for each $n\geq n_0$, then ($\alpha$) would give $\|a_{n+1}\|\leq\|a_n\|\cos^2(\frac16\theta)$ for each $n\geq n_0$ and hence, since $\cos(\frac16\theta)<1$, $a_n\to0$, a contradiction. By ($\beta$), and taking into account also ($\alpha$), we obtain that $a_n,b_n\in2M B_X$, whenever $n>n_1$. \end{proof}
\begin{corollary}\label{corollary:puntolur} Let $X$ be a Hilbert space, $B$
a nonempty closed convex subset of $X$, $A$ a body in $X$ and
$y\in\partial A$ an LUR point of $A$. Let $\{A_n\}$ and $\{B_n\}$
be two sequences of closed convex sets such that $A_n\rightarrow
A$ and $B_n\rightarrow B$ for the Attouch-Wets convergence.
Suppose that $A\cap B=\{y\}$.
Then, for each $a_0\in X$, the corresponding perturbed alternating projections sequences $\{a_n\}$ and $\{b_n\}$ (with starting point $a_0$), converge to $y$ in norm. \end{corollary}
\begin{proof}
Since $A\cap B=\{y\}$ and $y\in\partial A$, we have $(\mathrm{int}\, A)\cap B=\emptyset$; hence, by the Hahn-Banach separation theorem, there exists $f\in S_{X^*}$ such that $$\inf f(A)=f(y)=\sup f(B).$$
Since $y$ is an LUR point of $A$, by Lemma~\ref{slicelimitatoselur}, $f$ strongly exposes $A$ at $y$. The thesis follows by Theorem~\ref{puntolur}. \end{proof}
It is worth noting that, in the recent paper \cite{GrKaKuReich}, a result concerning the convergence of iterates of nonexpansive mappings has been obtained under a geometrical condition involving LUR points.
\section{The case where the interior of the intersection of the limit sets is nonempty} \label{Section:nonempty interior}
The main aim of this section is to prove that, under the assumption that the interior of $A\cap B$ is nonempty, the couple $(A,B)$ is stable.
We start with the following two-dimensional fact. Although the argument used is elementary, we include a sketch of a possible proof for the sake of completeness.
\begin{fact}\label{fact:norms}
Let $X$ be a Hilbert space and $\varepsilon,K>0$. Then there exists a constant $\mu>0$ such that, whenever $C$ is a closed convex subset of $X$ containing $\varepsilon B_X$ and $x\in KB_X$, we have \begin{equation}\label{eq:angleboundedawayfrom0}
\|x-P_Cx\|\leq\mu(\|x\|-\|P_C x\|). \end{equation} \end{fact}
\begin{proof} We claim that $\mu=K/\varepsilon$ works. Let us denote by $\theta(u,v)$ the angle between two nonzero vectors $u$ and $v$.
Let us denote $y=P_C x$. We can (and do) assume that $y$ and $x$ are not proportional (otherwise (\ref{eq:angleboundedawayfrom0}) holds trivially). Hence, since $\varepsilon B_X\subset C$, we have that $\varepsilon<\|y\|<\|x\|$. Let $Y=\mathrm{span}\{x,y\}$ and let $w\in\varepsilon S_Y$ be such that: \begin{enumerate}
\item the line containing $\{y,w\}$ is tangent to $\varepsilon B_Y$;
\item the segment $[y,w]$ intersects the segment $[0,x]$. \end{enumerate}
Observe that the existence of such an element $w$ is guaranteed by the fact that $\|x-y\|\leq \|x\|-\varepsilon$.
Since the vectors $w$ and $w-y$ are orthogonal, we clearly have $\sin\theta(-y,w-y)\geq \varepsilon/K $. Let us denote $z=\frac{\|y\|}{\|x\|}x$. By the variational characterization of best approximations from convex sets in Hilbert spaces and by the fact that $\|z\|=\|y\|$, we have: \begin{enumerate} \item $\theta(x-y,w-y)\geq \pi/2$; \item $\theta(-y,z-y)\leq \pi/2$. \end{enumerate}
It follows that $\theta(x-y,z-y)\geq \theta(-y,w-y)$ and hence that $$\|x-y\|\leq\frac{K}{\varepsilon}\|x-z\|=\frac{K}{\varepsilon}(\|x\|-\|y\|).$$
\end{proof}
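The constant $\mu=K/\varepsilon$ of Fact~\ref{fact:norms} can be tested numerically. The Python sketch below (ours, illustrative only) takes for $C$ a random half-plane containing $\varepsilon B_X$ in $X=\mathbb{R}^2$, for which $P_C$ has a closed form, and checks inequality (\ref{eq:angleboundedawayfrom0}) at random points of $KB_X$:

```python
import math, random

def worst_gap(eps=0.5, K=3.0, trials=5000, seed=0):
    # checks ||x - P_C x|| <= (K/eps) * (||x|| - ||P_C x||) for random
    # half-planes C = {z : <a, z> <= b} containing eps*B_X (||a|| = 1,
    # b >= eps) and random x in K*B_X; returns the maximum of lhs - mu*rhs
    rng = random.Random(seed)
    worst = -float("inf")
    for _ in range(trials):
        t = rng.uniform(0.0, 2.0 * math.pi)
        a = (math.cos(t), math.sin(t))
        b = rng.uniform(eps, 2.0)
        r, s = K * math.sqrt(rng.random()), rng.uniform(0.0, 2.0 * math.pi)
        x = (r * math.cos(s), r * math.sin(s))
        excess = max(0.0, a[0] * x[0] + a[1] * x[1] - b)
        p = (x[0] - excess * a[0], x[1] - excess * a[1])   # p = P_C x
        lhs = excess                                        # ||x - P_C x||
        rhs = (K / eps) * (math.hypot(*x) - math.hypot(*p))
        worst = max(worst, lhs - rhs)
    return worst

assert worst_gap() <= 1e-9
```

For a half-plane the bound can also be seen directly: if $d=\|x-P_Cx\|>0$ then $\|x\|^2-\|P_Cx\|^2=2db+d^2\geq2d\varepsilon$, while $\|x\|+\|P_Cx\|\leq2K$.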
The following theorem is the main result of this section and it is an application of the previous argument.
\begin{theorem}\label{theorem:corpilur} Let $X$ be a Hilbert space and $A,B$
nonempty closed convex subsets of $X$.
Suppose that $\mathrm{int}\,(A\cap B)\neq\emptyset$, then the couple $(A,B)$ is stable. \end{theorem}
\begin{proof}
Without any loss of generality, we can suppose that $0\in\mathrm{int}\, (A\cap B)$.
Let $\{A_n\}$ and $\{B_n\}$
be two sequences of closed convex sets such that $A_n\rightarrow
A$ and $B_n\rightarrow B$ for the Attouch-Wets convergence. Suppose that $\{a_n\}$ and $\{b_n\}$ are
the corresponding perturbed alternating projections sequences with respect to a given starting point $a_0$.
By Proposition 27 in \cite{PenotZalinescu} we have that $A_n\cap B_n \rightarrow A\cap B$ for the Attouch-Wets convergence. Hence, by Theorem 7.4.2 in \cite{Beer}, we can suppose without any loss of generality that there exists $\varepsilon>0$ such that $\varepsilon B_X\subset A_n\cap B_n $, whenever $n\in \mathbb{N}$. Since $0\in A_n\cap B_n$, we have that $\|a_n\|\leq\|b_n\|, \|b_n\|\leq\|a_{n-1}\|$ and hence there exists $K>0$ such that $\{a_n\},\{b_n\}\subset K B_X$. By Fact~\ref{fact:norms}, we have that there exists $\mu>0$ such that $\|a_n-b_n\|\leq\mu(\|b_n\|-\|a_n\|)$ and
$\|b_n-a_{n-1}\|\leq\mu(\|a_{n-1}\|-\|b_n\|)$. Hence
$$\textstyle \sum_{n=1}^N(\|a_n-b_n\|+\|b_n-a_{n-1}\|)\leq\sum_{n=1}^N\mu(\|a_{n-1}\|-\|a_n\|)=\mu(\|a_{0}\|-\|a_N\|).$$ This proves that the series $\sum_{n\in\mathbb{N}}(a_n-a_{n-1})$ is absolutely convergent and hence convergent, i.e., the sequence $\{a_n\}$ is convergent. Similarly, we have that also the sequence $\{b_n\}$ is convergent and the proof is complete. \end{proof}
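The stability asserted by Theorem~\ref{theorem:corpilur} can be illustrated numerically. In the Python sketch below (ours; the specific sets are chosen for convenience) we take $X=\mathbb{R}^2$, $A=\{x;\,x_1\leq1\}$, $B$ the closed ball of radius $\frac32$ centred at $(1,0)$, so that $0\in\mathrm{int}\,(A\cap B)$, and perturbed sets $A_n=\{x;\,x_1\leq1+\frac1n\}$ and $B_n$ the ball of the same radius centred at $(1,\frac1n)$; both sequences converge in the Attouch-Wets sense, and the perturbed iterates approach a point of $A\cap B$:

```python
import math

def proj_halfplane(x, c):
    # projection onto {z : z[0] <= c}
    return (min(x[0], c), x[1])

def proj_ball(x, center, r):
    # projection onto the closed ball of radius r about center
    d = math.hypot(x[0] - center[0], x[1] - center[1])
    if d <= r:
        return x
    t = r / d
    return (center[0] + t * (x[0] - center[0]),
            center[1] + t * (x[1] - center[1]))

a = (5.0, 4.0)                                  # starting point a_0
for n in range(1, 2001):
    b = proj_ball(a, (1.0, 1.0 / n), 1.5)       # b_{n+1} = P_{B_n} a_n
    a = proj_halfplane(b, 1.0 + 1.0 / n)        # a_{n+1} = P_{A_n} b_{n+1}

# the final iterate is (approximately) feasible for the limit sets A and B
assert a[0] <= 1.0 + 1e-2
assert math.hypot(a[0] - 1.0, a[1]) <= 1.5 + 1e-2
```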
By combining the results contained in Section~\ref{section:Infinite-dimensional} and the previous theorem we have the following corollary. This corollary describes the stability property for the couple $(A,B)$ where $A$ and $B$ are bodies.
\begin{corollary}\label{corollary:corpilur} Let $X$ be a Hilbert space, suppose that at least one of the following conditions holds.
\begin{enumerate}
\item[(i)] $A$ is a closed convex set with nonempty interior, $f\in X^*\setminus\{0\}$ is such that $f$ strongly exposes $A$ at the origin, and $B=\{x\in X;\, f(x)\geq \alpha\}$, where $\alpha\leq 0$.
\item[(ii)] $A,B$ are bodies in $X$ such that $A$ is LUR and $A\cap B\neq\emptyset$. \end{enumerate} Then the couple $(A,B)$ is stable.
\end{corollary}
\begin{proof}
(i) If $\alpha<0$ then $\mathrm{int}\,(A\cap B)\neq\emptyset$ and we can apply Theorem~\ref{theorem:corpilur}. If $\alpha=0$ apply Theorem~\ref{puntolur}.
\noindent (ii) If $\mathrm{int}\,(A\cap B)\neq\emptyset$ we can apply Theorem~\ref{theorem:corpilur}. If $\mathrm{int}\,(A\cap B)=\emptyset$, since $A$ and $B$ are bodies, we have that
$\mathrm{int}\,(A)\cap B=\emptyset$. Since $A$ is an LUR body, there exists $y\in\partial A$ such that $A\cap B=\{y\}$. Apply Corollary~\ref{corollary:puntolur}. \end{proof}
It is worth remarking that assumptions (i) and (ii) in Corollary \ref{corollary:corpilur} cannot be avoided if we ask for a stable couple of bodies. Indeed, when we consider two bodies with nonempty intersection, the typical situation in which (i) and (ii) fail is the following: there exists a functional $f\in X^*\setminus\{0\}$ separating the bodies $A$ and $B$, but $f$ strongly exposes neither $A$ nor $B$. The following simple 2-dimensional example shows that, in general, in this case we cannot guarantee that the couple $(A,B)$ is stable.
\begin{example}\label{ex: notconverge}
Let $X=\mathbb{R}^2$ and let us consider, for each $h\in\mathbb{N}$, the following subsets of $X$:
\begin{eqnarray*}
A&=&\textstyle {\mathrm{conv}}\,\{(1,1),(-1,1),(1,0),(-1,0)\};\\ C_{2h}&=&\textstyle{\mathrm{conv}}\,\{(1,1),(-1,1),(1,\frac1h),(-1,0)\};\\
C_{2h-1}&=&\textstyle{\mathrm{conv}}\,\{(1,1),(-1,1),(1,0),(-1,\frac1h)\};\\
B&=&\textstyle{\mathrm{conv}}\,\{(1,-1),(-1,-1),(1,0),(-1,0)\};\\ D_{2h}&=&\textstyle{\mathrm{conv}}\,\{(1,-1),(-1,-1),(1,-\frac1h),(-1,0)\};\\ D_{2h-1}&=&\textstyle{\mathrm{conv}}\,\{(1,-1),(-1,-1),(1,0),(-1,-\frac1h)\}.
\end{eqnarray*}
We claim that the couple $(A,B)$ is not stable. To prove this, let us consider the starting point $z_0=(0,0)$ and observe that, if we consider the points $a_k=(P_{C_1} P_{D_1})^k z_0$, it is clear that there exists $N_1\in\mathbb{N}$ such that $$\textstyle \|a_{N_1}-(1,0)\|<\frac12.$$
Define $A_n=C_1$ and $B_n=D_1$ whenever $1\leq n\leq N_1$. Similarly, if we consider the points $a_{N_1+k}=(P_{C_2} P_{D_2})^{k} a_{N_1}$ then there exists $N_2\in\mathbb{N}$ such that $$\textstyle \|a_{N_1+N_2}-(-1,0)\|<\frac12.$$ Define $A_n=C_2$ and $B_n=D_2$ whenever $N_1+1\leq n\leq N_1+N_2$. Then, proceeding inductively, it is easy to construct sequences $\{A_n\}$ and $\{B_n\}$ converging respectively to $A$ and $B$ for the Attouch-Wets convergence and such that the perturbed alternating projections sequences $\{a_n\}$ and $\{b_n\}$, w.r.t. $\{A_n\}$ and $\{B_n\}$ and with starting point $z_0$, do not converge. \end{example}
\subsection*{Inequality constraints}
Inequality constraints are a typical example of a problem that can be solved by projection and reflection methods (see, e.g., \cite[Remark~3.17]{BorweinSimsTam}), and they appear very often in mathematical programming. This problem turns out to be stable under mild assumptions. Indeed, in the rest of this section we show that, under suitable additional hypotheses, the method of perturbed alternating projections sequences can also be applied to deal with such a problem.
Given a closed convex cone $K$ in a Hilbert space $X$ (recall that a subset $K$ of $X$ is called a cone if $\lambda k\in K$, whenever $\lambda\in [0,\infty)$ and $k\in K$), we denote by $K^-$ its {\em negative polar cone}, i.e., the closed convex cone defined by $$K^-=\{x\in X;\, \langle x,k \rangle\leq 0, \ \text{whenever}\ k\in K\}.$$ Let us suppose that $a\in X\setminus\{0\}$, $b\in \mathbb{R}$, and define $A=\{x\in X;\, \langle a, x\rangle\leq b\}$. Then it is easy to observe that the following assertions hold true. \begin{itemize}
\item If $\mathrm{int}\, K\neq\emptyset$, $a_1,\ldots,a_n\in X$, $b_1,\ldots,b_n>0$ and $$B:=\{x\in X;\, \langle a_i, x\rangle\leq b_i,\ i=1,\ldots,n\}$$ then $\mathrm{int}\,(B\cap K)\neq \emptyset$.
\item If $\mathrm{int}\, K\neq\emptyset$ and $a\not\in K^-$ then $\mathrm{int}\,(A\cap K)\neq \emptyset$.
\item If $a\in\mathrm{int}\, (K^-)$ and $b=0$ then $A$ and $K$ are separated by a strongly exposing functional for the set $K$. \end{itemize}
\noindent Hence, by combining the previous observation, Theorem~\ref{theorem:corpilur}, and Theorem~\ref{puntolur}, we obtain the following result about the convergence of perturbed projections for the inequality constraints problem.
\begin{theorem} Let $K$ be a closed convex cone in a Hilbert space $X$. Suppose that at least one of the following conditions holds true. \begin{enumerate}
\item $\mathrm{int}\, K\neq\emptyset$, $a_1,\ldots,a_n\in X$, $b_1,\ldots,b_n>0$, and $$B:=\{x\in X;\, \langle a_i, x\rangle\leq b_i,\ i=1,\ldots,n\}.$$
\item $\mathrm{int}\, K\neq\emptyset$, $a\not\in K^-$, $b\in\mathbb{R}$, and $$B:=\{x\in X;\, \langle a, x\rangle\leq b\}.$$
\item $a\in\mathrm{int}\, (K^-)$ and $$B:=\{x\in X;\, \langle a, x\rangle\leq 0\}.$$ \end{enumerate}
Then the couple $(K,B)$ is stable. \end{theorem}
\noindent As a corollary, we obtain the following finite-dimensional result, where the cone is the standard nonnegative lattice cone in $\mathbb{R}^N$.
\begin{corollary}
Let $X=\mathbb{R}^N$ and $K=\{(x_k)_1^N\in\mathbb{R}^N;\, x_k\geq0, k=1,\ldots,N \}$. Suppose that at least one of the following conditions holds true.
\begin{enumerate}
\item $a_1,\ldots,a_n\in X$, $b_1,\ldots,b_n>0$, and $$B:=\{x\in X;\, \langle a_i, x\rangle\leq b_i,\ i=1,\ldots,n\}.$$
\item $a\not\in K^-$, $b\in\mathbb{R}$, and $$B:=\{x\in X;\, \langle a, x\rangle\leq b\}.$$
\item $a\in\mathrm{int}\, (K^-)$ and $$B:=\{x\in X;\, \langle a, x\rangle\leq 0\}.$$
\end{enumerate}
Then the couple $(K,B)$ is stable. \end{corollary}
\section{Perturbed alternating projections sequences for subspaces}\label{section:subspaces}
In this section, we study the convergence of the perturbed alternating projections sequences in the case where the limit sets are subspaces. The following elementary example shows that if the intersection of the subspaces is non-trivial, in general, convergence does not hold.
\begin{example}\label{ex:duedimensionale} Let $Z=\mathbb{R}^2$ and let us consider $A_n=A=B=\{(x,0)\in Z;\, x\in\mathbb{R}\}$ ($n\in\mathbb{N}$). For each $h\in\mathbb{N}$, let us consider the line $C_h=\{(x,\frac1h-\frac1{h^2}x);\,x\in\mathbb{R}\}$ passing through the points $(0,\frac1h)$ and $(h,0)$. Let us consider the starting point $z_0=(0,0)$ and observe that, if we consider the points $a_k=(P_A P_{C_1})^k z_0$, it is clear that there exists $N_1\in\mathbb{N}$ such that $\|a_{N_1}\|>\frac12$. Define $B_n=C_1$ whenever $1\leq n\leq N_1$. Similarly, if we consider the points $a_{N_1+k}=(P_A P_{C_2})^{k} a_{N_1}$ then there exists $N_2\in\mathbb{N}$ such that $\|a_{N_1+N_2}\|>1$. Define $B_n=C_2$ whenever $N_1+1\leq n\leq N_1+N_2$. Then, proceeding inductively, it is easy to construct a sequence $\{B_n\}$ such that the perturbed alternating projections sequences $\{a_n\}$ and $\{b_n\}$, w.r.t. $\{A_n\}$ and $\{B_n\}$ and with starting point $z_0$, are unbounded. \end{example}
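The mechanism of Example~\ref{ex:duedimensionale} is easy to reproduce numerically. The Python sketch below (ours, illustrative only) implements $P_A$ and $P_{C_h}$ in closed form and shows that, already for $h=1$, the iterates $(P_AP_{C_1})^kz_0$ move from the origin towards the point $(1,0)=A\cap C_1$, so their norms grow past $\frac12$:

```python
import math

def proj_line(z, h):
    # orthogonal projection onto C_h = {(x, 1/h - x/h^2) : x real}
    p0 = (0.0, 1.0 / h)
    d = (1.0, -1.0 / h ** 2)
    nd2 = d[0] ** 2 + d[1] ** 2
    t = ((z[0] - p0[0]) * d[0] + (z[1] - p0[1]) * d[1]) / nd2
    return (p0[0] + t * d[0], p0[1] + t * d[1])

def step(z, h):
    # one round of P_A P_{C_h}, A being the horizontal axis
    w = proj_line(z, h)
    return (w[0], 0.0)

z = (0.0, 0.0)                      # starting point z_0
for _ in range(20):
    z = step(z, 1)

assert math.hypot(*z) > 0.5         # the norm has grown past 1/2
```

For $h=1$ the composition reduces to $x\mapsto\frac{x+1}2$ on the axis, so the iterates converge to $(1,0)$; switching to $C_2,C_3,\ldots$ at the right moments, as in the example, makes the sequence unbounded.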
In order to avoid such a situation we consider the case in which the intersection of the subspaces reduces to the origin. We have the following theorem.
\begin{theorem}\label{prop:sottospazisommachiusa}
Let $X$ be a Hilbert space and suppose that $U,V\subset X$ are closed subspaces such that $U\cap V=\{0\}$ and $U+V$ is closed. Let $\{A_n\}$ and $\{B_n\}$ be two sequences of closed convex sets such that $A_n\rightarrow
U$ and $B_n\rightarrow V$ for the Attouch-Wets convergence.
Then, for each $a_0\in X$, the corresponding perturbed alternating projections sequences $\{a_n\}$ and $\{b_n\}$, with starting point $a_0$, converge to $0$ in norm. \end{theorem}
\noindent If $W$ is a subspace of $X$ and $\varepsilon\in(0,1)$, let $W(\varepsilon)\subset X$ be the set defined by $$W(\varepsilon)=\{w\in X\setminus\{0\};\,\exists u\in W\setminus\{0\}\ \text{such that}\ \cos(u,w)\geq1-\varepsilon\}\cup\varepsilon B_X.$$ An easy computation shows that:
\begin{equation}\label{eq:Uepsilon}
W(\varepsilon)=\{w\in X\setminus\{0\};\,\exists u\in W\cap\|w\|S_X\ \text{such that}\ \|u-w\|^2\leq2\varepsilon\|w\|^2\}\cup\varepsilon B_X. \end{equation} Before starting with the proof of the theorem we need the following two lemmas.
\begin{lemma}\label{lemma:defnelconoSottospazio}
Let $X$ be a Hilbert space and $U$ a subspace of $X$. Let $\{A_n\}$ be a sequence of closed convex sets such that $A_n\rightarrow
U$ for the Attouch-Wets convergence. Then, for each $\varepsilon\in(0,1)$, it eventually holds that $A_n\subset U(\varepsilon)$. \end{lemma}
\begin{proof} On the contrary, suppose that there exist $\varepsilon\in(0,1)$ and a sequence $\{n_k\}$ of integers such that, for each $k\in\mathbb{N}$, there exists $x_{n_k}\in A_{n_k}\setminus U(\varepsilon)$. Since $A_n\rightarrow
U$ for the Attouch-Wets convergence, we can suppose, without any loss of generality, that $\|x_{n_k}\|>1$ (indeed, we can observe that $\mathrm{dist}\bigl(U,X\setminus U(\varepsilon)\bigr)>0$ and use Fact~\ref{fact:AW}). Let $\gamma\in(0,1)$ be such that $\frac{(1-\varepsilon)(1+\frac\gamma{1-\varepsilon})}{(1-\frac\varepsilon2)(1-\gamma)}\leq1$ and let $k\in\mathbb{N}$ be such that there exists $z_k\in A_{n_k}\cap\gamma B_X$. Consider $$\textstyle w_k=\lambda x_{n_k}+(1-\lambda)z_k\in A_{n_k},$$
where $\lambda=\frac1{\|x_{n_k}\|}$, and observe that $1-\gamma\leq\|w_k\|\leq1+\gamma$ and that, for each $u\in U$, we have
\begin{eqnarray*}\textstyle \langle w_k,u\rangle&=&\lambda\langle x_{n_k},u\rangle+(1-\lambda)\langle z_{k},u\rangle\leq \|u\|(1-\varepsilon)\|\lambda x_{n_k}\|+\gamma\|u\|\\
&=& \textstyle \|u\|(1-\varepsilon)(1+\frac\gamma{1-\varepsilon})\\
&=& \textstyle
[(1-\frac\varepsilon2)\|u\|\|w_k\|]\frac{(1-\varepsilon)(1+\frac\gamma{1-\varepsilon})}{(1-\frac\varepsilon2)\|w_k\|}\\
&\leq& \textstyle
[(1-\frac\varepsilon2)\|u\|\|w_k\|]\frac{(1-\varepsilon)(1+\frac\gamma{1-\varepsilon})}{(1-\frac\varepsilon2)(1-\gamma)}\leq (1-\frac\varepsilon2)\|u\|\|w_k\|.
\end{eqnarray*}
Hence, $w_k\in A_{n_k}\setminus U(\frac\varepsilon2)$. Since $\{w_k\}$ is a bounded sequence, by Fact~\ref{fact:AW}, $\mathrm{dist}(w_k,U)\to0$.
We get a contradiction since
$$\textstyle \mathrm{dist}\bigl(U, X\setminus
U(\frac\varepsilon2)\bigr)>0.$$ \end{proof}
\begin{lemma}\label{lemma:approssimazioneprodotto}
Let $U,V$ be closed subspaces of a Hilbert space $X$ such that $U\cap V=\{0\}$ and $U+V$ is closed. Let $M\in(0,1)$. Then there exist $\varepsilon\in(0,M)$ and $\eta\in(0,1)$ such that, for each $x\in U(\varepsilon)\setminus MB_X$, $y\in V(\varepsilon)\setminus MB_X$ and $z\in\varepsilon B_X$, we have $\cos(x-z,y-z)\leq\eta$. \end{lemma}
\begin{proof}
By \cite[Lemma~3.5]{FranchettiLight86}, we have that
$$\textstyle \Omega:=\sup\{\langle a,b\rangle;\,a\in V\cap S_X,\ b\in U\cap S_X\}<1.$$
Fix any $\eta\in(\Omega,1)$ and take $\varepsilon\in(0,M)$ such that
$$\textstyle\bigl(\frac{M}{M-\varepsilon}\bigr)^2\bigl(\Omega+ \frac{15\sqrt\varepsilon}{M^2}\bigr)\leq\eta.$$ Suppose that
$x\in U(\varepsilon)\setminus MB_X$, $y\in V(\varepsilon)\setminus MB_X$ and $z\in\varepsilon B_X$. By (\ref{eq:Uepsilon}), there exist $u\in U\cap \|x\|S_X$ and $v\in V\cap \|y\|S_X$ such that $\|x-u\|\leq\sqrt{2\varepsilon}\|x\|$ and $\|y-v\|\leq\sqrt{2\varepsilon}\|y\|$. Hence, $x':=x-u-z\in 3\sqrt\varepsilon B_X$ and $y':=y-v-z\in 3\sqrt\varepsilon B_X$. Then we have:
\begin{eqnarray*}
\textstyle \langle x-z,y-z \rangle &=& \langle u+x',v+y'\rangle \\
&\leq&\textstyle \langle u,v\rangle+\langle
u,y'\rangle+\langle x',v\rangle+\langle x',y'\rangle\\
&\leq&\textstyle \Omega\|x\|\|y\|+ 3\sqrt\varepsilon\|x\|+3\sqrt\varepsilon\|y\|+9\varepsilon\\
&\leq&\textstyle \|x\|\|y\|(\Omega+ \frac{3\sqrt\varepsilon}{\|x\|}+\frac{3\sqrt\varepsilon}{\|y\|}+\frac{9\varepsilon}{\|x\|\|y\|})
\\
&\leq&\textstyle \|x\|\|y\|(\Omega+ \frac{6\sqrt\varepsilon}{M}+\frac{9\varepsilon}{M^2})\\
&\leq&\textstyle \|x\|\|y\|(\Omega+ \frac{15\sqrt\varepsilon}{M^2})\\
&\leq&\textstyle \|x-z\|\|y-z\|\frac{\|x\|}{\|x\|-\varepsilon}\frac{\|y\|}{\|y\|-\varepsilon}(\Omega+ \frac{15\sqrt\varepsilon}{M^2})\\
&\leq&\textstyle \|x-z\|\|y-z\|\bigl(\frac{M}{M-\varepsilon}\bigr)^2(\Omega+ \frac{15\sqrt\varepsilon}{M^2})\\
&\leq&\textstyle\eta \|x-z\|\|y-z\|.
\end{eqnarray*} \end{proof}
\noindent We are now ready to prove our theorem. \begin{proof}[Proof of Theorem~\ref{prop:sottospazisommachiusa}]
Fix $M\in(0,1)$; it suffices to prove that eventually $a_n, b_n\in 3 M B_X$ (recall that $\{a_n\}$ and $\{b_n\}$ are defined as in Definition~\ref{def:perturbedseq}). Let $\varepsilon\in(0,M)$ and $\eta\in(0,1)$ be given by Lemma~\ref{lemma:approssimazioneprodotto}. Let us consider the sets $U(\varepsilon),V(\varepsilon)$ and observe that, by Lemma~\ref{lemma:defnelconoSottospazio}, there exists $n_0\in\mathbb{N}$ such that if $n\geq n_0$ then $A_n\subset U(\varepsilon)$ and $B_n\subset V(\varepsilon)$. Let us fix $\varepsilon'\in(0,\varepsilon)$ such that $\eta+\frac{2\varepsilon'}M\leq\frac{\eta+1}2$; then there exists an integer $n_1\geq n_0$ such that, for each $n\geq n_1$, there exist $x_n\in A_n\cap\varepsilon' B_X$ and $y_n\in B_n\cap\varepsilon' B_X$.
Suppose that $n\geq n_1$; we can observe that:
\begin{itemize}
\item by (\ref{eq:proiezionecoseno}) and Lemma~\ref{lemma:approssimazioneprodotto}, if $a_n,b_n\not\in M B_X$, it holds $\|a_n-x_n\|\leq\|b_n-x_n\|\eta$ and hence
$$\textstyle \|a_n\|\leq\|a_n-x_n\|+\varepsilon'\leq\eta(\|b_n\|+\varepsilon')+\varepsilon'\leq\|b_n\|(\eta+\frac{2\varepsilon'}M)\leq\frac{\eta+1}2\|b_n\|;$$
\item similarly, if $a_n,b_{n+1}\not\in M B_X$, it holds
$$\textstyle \|b_{n+1}\|\leq\frac{\eta+1}2\|a_n\|;$$
\item by (\ref{eq:proiezionecoseno}), if $b_n\in M B_X$ then $\|a_n\|\leq\|b_n\|+2\varepsilon'\leq 3M$ and, similarly, if $a_n\in M B_X$ then $\|b_{n+1}\|\leq3M$.
\end{itemize}
By the observations above and since $\frac{\eta+1}2<1$, proceeding as at the end of the proof of Theorem~\ref{puntolur}, it easily follows that eventually $a_n,b_n\in 3M B_X$. \end{proof}
The remaining part of this section is devoted to proving that the assumption on the closedness of the sum of the subspaces, in Theorem~\ref{prop:sottospazisommachiusa}, cannot be removed. This result is contained in Theorem~\ref{teo:sommaNONchiusa} below and is inspired by the construction contained in \cite[Section~4]{FranchettiLight86}. Let $X=\ell_2$. For the sake of clarity, we point out that, in the sequel, we sometimes use the following notation: if, for each $h\in \mathbb{N}$, $x^h$ is an element of $X$, we denote by $\{x^h\}$ the corresponding sequence in $X$. Moreover, if $h\in\mathbb{N}$ is fixed, we can consider $x^h$ as a sequence of real numbers and we write $x^h=\{x^h_n\}_n$. Now, suppose that $\{\theta_n\}\subset\mathbb{R}$ is a bounded sequence and let us consider the linear continuous operator $D:X\to X$ given by $Dx=D\{x_n\}=\{\theta_n x_n\}$ ($x=\{x_n\}\in X$). Suppose that $b=\{b_n\}\in X$ and consider the closed convex subsets of $Z=X\oplus_2X$ defined as follows: $$ A=\{(x,0)\in Z;\,x\in X\}\ \ \ \ \text{and}\ \ \ \ V=\{(x,b+Dx)\in Z;\,x\in X\}. $$ Observe that $A$ is a subspace of $Z$ and $V$ is an affine set in $Z$. \begin{remark} \label{remark calcolo proiezioni}
If $(\alpha,\beta)\in Z$
then we obtain immediately that $P_A(\alpha,\beta)=(\alpha,0)$.
Now, let us suppose that $(\alpha,0)\in A$ and let us compute $P_V(\alpha,0)$. If we denote $P_V(\alpha,0)=(\{x_n\},\{b_n+\theta_n x_n\})$, by the characterization of best approximation in Hilbert space, we have, for each $\{y_n\}\in X$,
$$\textstyle \bigl\langle(\{x_n-\alpha_n\},\{b_n+\theta_n x_n\}),(\{y_n\},\{\theta_n y_n\})\bigr\rangle=0.$$
Hence, we must have $x_n-\alpha_n+b_n\theta_n+x_n\theta^2_n=0$, whenever $n\in\mathbb{N}$. That is, for each $n\in\mathbb{N}$, it holds
\begin{equation}\label{eq:proiez}
\textstyle x_n=\frac{\alpha_n-\theta_n b_n}{1+\theta_n^2}.
\end{equation} \end{remark}
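Formula (\ref{eq:proiez}) can be verified coordinatewise: for each $n$, it amounts to projecting the point $(\alpha_n,0)\in\mathbb{R}^2$ onto the line $\{(x,b_n+\theta_nx);\,x\in\mathbb{R}\}$, and the minimizer must annihilate the derivative of $x\mapsto(x-\alpha_n)^2+(b_n+\theta_nx)^2$. A quick Python check of this orthogonality condition (ours, illustrative only):

```python
def proj_formula(alpha, theta, b):
    # the coordinatewise projection formula of the Remark
    return (alpha - theta * b) / (1.0 + theta * theta)

for alpha, theta, b in [(1.0, 0.5, 0.2), (-2.0, 3.0, 1.0), (0.0, -0.25, 4.0)]:
    x = proj_formula(alpha, theta, b)
    # the residual (x - alpha, b + theta*x) must be orthogonal to the
    # direction (1, theta) of the line
    assert abs((x - alpha) + theta * (b + theta * x)) < 1e-12
```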
\begin{lemma}\label{lemma:convAWpersottospazi}
Let $Z$ be defined as above. Let $\{b^n\}\subset X$ be a norm null sequence. Let $D,D^n:X\to X$ ($n\in\mathbb{N}$) be linear bounded operators such that $D^n\to D$ in the operator norm. Then if we define
$$W=\{(x,Dx)\in Z;\,x\in X\}\ \ \ \text{and}\ \ \ W_n=\{(x,b^n+D^n x)\in Z;\,x\in X\}\ \ \ (n\in\mathbb{N})$$
we have that $W_n\rightarrow W$ for the Attouch-Wets convergence. \end{lemma}
\begin{proof}
Let us fix $N\in\mathbb{N}$. If $z=(x,Dx)\in W\cap N B_Z$ then we can consider $z'=(x,b^n+D^nx)\in W_n$ and observe that $$\|z-z'\|_Z=\|Dx-D^nx-b^n\|_X\leq N\|D-D^n\|+\|b^n\|_X.$$
Similarly, if $w=(y,b^n+D^ny)\in W_n\cap N B_Z$ then we can consider $w'=(y,Dy)\in W$ and observe that $$\|w-w'\|_Z=\|Dy-D^ny-b^n\|_X\leq N\|D-D^n\|+\|b^n\|_X.$$
Hence, $h_N(W,W_n)\leq N\|D-D^n\|+\|b^n\|\to0$ ($n\to\infty$), and the proof is concluded. \end{proof}
\begin{theorem}\label{teo:sommaNONchiusa} Let $Z$ be defined as above and $A=\{(x,0)\in Z;\,x\in X\}$, then there exist \begin{enumerate}
\item[(a)] $B$ a closed subspace of $Z$,
\item[(b)] $z_0\in Z$,
\item[(c)] $\{A_n\},\{B_n\}\subset c(Z)$
two sequences of sets converging to $A$ and $B$, respectively, for the Attouch-Wets convergence,
\end{enumerate}
such that the perturbed alternating projections sequences (w.r.t. $\{A_n\}$ and $\{B_n\}$ and with starting point $z_0$), are unbounded. \end{theorem}
\begin{proof}
Let us consider the sequence $\{a_n\}\subset\mathbb{R}$, given by $a_n=4^{-n}$, and let us consider the operator $D:X\to X$, given by $D\{x_n\}=\{a_n x_n\}$. Then define $B=\{(x,Dx)\in Z;\,x\in X\}$ and, for each $n\in\mathbb{N}$, put $A_n=A$. Now, consider any $z_0=(\{\alpha_n\},0)\in A$ such that $\alpha_n>0$ ($n\in\mathbb{N}$) and $\|z_0\|<1$.
Let us put, $N_0=1$ and, for each $n\in\mathbb{N}$, $\alpha_n^{0,1}=\alpha_n$. We shall define inductively (with respect to $h\in\mathbb{N}$) positive integers $N_h$, countable families of elements of $X$ $$\textstyle \{\alpha^{h,1}_n\}_n,\{\alpha^{h,2}_n\}_n,\{\alpha^{h,3}_n\}_n\ldots,$$ positive real numbers $M_h$, and sets $C_h\subset Z$ such that:
\begin{enumerate}
\item \label{induzione M} $2^h+h>(1+M_h)^2\sum_{n=h+1}^\infty(\alpha_n^{h-1,N_{h-1}})^2> 2^h$
\item \label{induzione C} $C_h=\{(x,b^h+D^hx)\in Z;\, x\in X\}$, where $D^h:X\to X$ is given by $D^h\{x_n\}=\{\theta^h_n x_n\}$ and where $b^h=\{b^h_n\}_n\in X$ and $\theta^h_n\in\mathbb{R}$ are given by
$$\textstyle b^h_n=\begin{cases}
0\ \ \ &\text{if}\ n\leq h\\
\alpha_n^{h-1,N_{h-1}} a_n\frac{1+M_h}{M_h} \ &\text{if}\ n> h
\end{cases}\ \ \ \ \text{and}\ \ \ \ \
\theta^h_n=\begin{cases}
a_n\ &\text{if}\ n\leq h\\
-\frac{1}{M_h}a_n\ &\text{if}\ n> h
\end{cases}
;$$
\item \label{induzione P(h,1)} $(\{\alpha^{h,1}_n\}_n,0)=P_A P_{C_h}(\{\alpha^{h-1,N_{h-1}}_n\}_n,0)$;
\item \label{induzione P(h,t+1)} $(\{\alpha^{h,t+1}_n\}_n,0)=P_A P_{C_h}(\{\alpha^{h,t}_n\}_n,0)$, $t\in\mathbb{N}$;
\item \label{induzione disuguaglianze} $2^h+h>\sum_{n=1}^\infty(\alpha_n^{h,N_{h}})^2\geq\sum_{n=h+1}^\infty(\alpha_n^{h,N_{h}})^2> 2^h$;
\item \label{induzione alfa} $\alpha^{h,t}_n>0$, whenever $n,t\in\mathbb{N}$.
\end{enumerate}
Let us show that this is possible.
Let $h\in\mathbb{N}$ and suppose we already
have $N_{h-1}\in\mathbb{N}$ and sequences $$\{\alpha^{h-1,1}_n\}_n,\ldots,\{\alpha^{h-1,N_{h-1}}_n\}_n\subset X$$ such that the following conditions hold:
\begin{itemize}
\item $\textstyle 2^{h-1}+h-1>\sum_{n=1}^\infty(\alpha_n^{h-1,N_{h-1}})^2$;
\item $\alpha^{h-1,N_{h-1}}_n>0$, whenever $n\in\mathbb{N}$.
\end{itemize}
(Observe that for $h=1$ the two conditions above are trivially satisfied since $\alpha_n^{0,N_0}=\alpha_n>0$ and $\sum_{n=1}^\infty(\alpha_n^{0,N_0})^2=\|z_0\|^2<1$.)
By combining these two relations, we obtain that
$$
2^{h}+h>\sum_{n=1}^\infty(\alpha_n^{h-1,N_{h-1}})^2>\sum_{n=h+1}^\infty(\alpha_n^{h-1,N_{h-1}})^2>0.
$$
Hence there exists a positive real number $M_h$ such that (\ref{induzione M}) holds true. Now, let us consider $C_h$ defined as in (\ref{induzione C}). Then, by the relations in (\ref{induzione P(h,1)}) and (\ref{induzione P(h,t+1)}), we define $\{\alpha^{h,t}_n\}_n$ ($t\in\mathbb{N}$). We just have to prove that there exists $N_h\in\mathbb{N}$ such that (\ref{induzione disuguaglianze}) is satisfied and that (\ref{induzione alfa}) holds true. By taking into account Remark \ref{remark calcolo proiezioni} and the fact that $(\{\alpha^{h,1}_n\}_n,0)=P_A P_{C_h}(\{\alpha^{h-1,N_{h-1}}_n\}_n,0)$, an easy computation shows that, for each $n>h$,
$$\textstyle \alpha^{h,1}_n=\alpha^{h-1,N_{h-1}}_n\frac{1+\frac{1+M_h}{M_h^2}a_n^2}{1+\frac1{M_h^2}a_n^2}.$$
Repeating the same argument $N$ times yields:
$$\textstyle \alpha^{h,N}_n=\alpha^{h-1,N_{h-1}}_n\frac{1+\frac{1+M_h}{M_h^2}a_n^2\sum_{l=0}^{N-1}(1+\frac1{M_h^2}a_n^2)^l}{(1+\frac1{M_h^2}a_n^2)^N}.$$
Moreover, for each $n\leq h$,
$$\textstyle \alpha^{h,1}_n=\alpha^{h-1,N_{h-1}}_n\frac{1}{1+a_n^2}.$$
Repeating the same argument $N$ times yields:
$$\textstyle \alpha^{h,N}_n=\alpha^{h-1,N_{h-1}}_n\frac{1}{(1+a_n^2)^N}.$$
Since $$\textstyle \frac{1+\frac{1+M_h}{M_h^2}a_n^2\sum_{l=0}^{N-1}(1+\frac1{M_h^2}a_n^2)^l}{(1+\frac1{M_h^2}a_n^2)^N}=\frac{-M_h+(1+M_h)(1+\frac1{M_h^2}a_n^2)^N}{(1+\frac1{M_h^2}a_n^2)^N}\to 1+M_h \ \ (N\to \infty)$$
and
$$
\textstyle \frac{1}{(1+a_n^2)^N}\to 0\ \ (N\to \infty),
$$
by (\ref{induzione M}) we obtain that there exists $N_h\in\mathbb{N}$ such that
$$\textstyle 2^h+h>\sum_{n=1}^\infty(\alpha_n^{h,N_{h}})^2\geq\sum_{n=h+1}^\infty(\alpha_n^{h,N_{h}})^2> 2^h.$$
Moreover, it follows immediately that condition (\ref{induzione alfa}) is satisfied.
Now, if $\sum_{k=0}^{h-1} N_k\leq n<\sum_{k=0}^{h} N_k$, put $B_n=C_h$. By our construction, it holds that $a_N=(\{\alpha_n^{h,N_h}\},0)$ where $N=\sum_{k=1}^{h} N_k$. In particular, $$\|b_N\|^2\geq\|P_{A} b_N\|^2=\|P_{A_N} b_N\|^2=\|a_N\|^2\geq \sum_{n=h+1}^\infty(\alpha_n^{h,N})^2> 2^h $$
and hence the sequences $\{a_n\}$ and $\{b_n\}$ are unbounded.
It remains to prove that $B_n\rightarrow B$ for the Attouch-Wets convergence or, equivalently, that $C_h\rightarrow B$ for the Attouch-Wets convergence. In view of Lemma~\ref{lemma:convAWpersottospazi}, it suffices to prove that the sequence $\{b^h\}$ is norm null and that $D^h\to D$ in the operator norm.
By the inequalities in (\ref{induzione M}) and (\ref{induzione disuguaglianze}), we have
$$\textstyle (1+M_h)^2(2^{h-1}+h-1)\geq(1+M_h)^2\sum_{n=h+1}^\infty(\alpha_n^{h-1,N_{h-1}})^2> 2^h,$$
and hence
$$\textstyle (1+M_h)^2>\frac{2^h}{2^{h-1}+h-1}.$$
Therefore the sequence $\{M_h\}$ is bounded away from $0$. Hence, the sequences $\{\frac1{M_h}\}$ and $\{\frac{1+M_h}{M_h}\}$ are bounded above by a positive constant $K$. Then, by the definition of $b^h$ in (\ref{induzione C}), we have
$$\textstyle \|b^h\|\leq Ka_h\|\{\alpha_n^{h-1,N_{h-1}}\}\|_X\leq \frac K{4^h}\|\{\alpha_n^{h-1,N_{h-1}}\}\|_X\leq\frac K{4^h}\sqrt{2^{h-1}+h-1},$$
where the last inequality holds by (\ref{induzione disuguaglianze}). Moreover, by the definition of $\theta_n^h$ in (\ref{induzione C}), we have that
$$\textstyle \|(D-D^h)x\|^2\leq \sum_{n=h+1}^\infty (a_n-\frac1{M_h}a_n)^2x_n^2\leq(1+K)^2a^2_{h+1}\|x\|^2\ \ \ \ \ (x=\{x_n\}\in X). $$
Therefore, finally we obtain that
$$\textstyle \|D-D^h\|\leq (1+K)a_{h+1}.$$
\end{proof}
\section*{Acknowledgments.} The research of the authors is partially supported by GNAMPA-INdAM, Project GNAMPA 2018. The research of the second author is partially supported by the Ministerio de Ciencia, Innovaci\'on y Universidades (MCIU), Agencia Estatal de Investigaci\'on (AEI) (Spain) and Fondo Europeo de Desarrollo Regional (FEDER) under project PGC2018-096899-B-I00 (MCIU/AEI/FEDER, UE).
The authors thank S.~Reich and E.~Molho for useful remarks that helped them in preparing this paper.
\end{document}
\begin{document}
\title{Quantum central limit theorem for continuous-time quantum walks on
odd graphs in quantum probability theory}
\begin{abstract}
The method of quantum probability theory requires only simple structural data of a graph and allows us to avoid the heavy combinatorial arguments often necessary to obtain a full description of the spectrum of the adjacency matrix. In the present paper, using this idea of calculating the probability amplitudes of a continuous-time quantum walk in terms of quantum probability theory, we investigate a quantum central limit theorem for continuous-time quantum walks on odd graphs.
{\bf Keywords: Continuous-time quantum walk, Spectral
distribution, Odd graph.}
{\bf PACs Index: 03.65.Ud } \end{abstract}
\section{Introduction} Two types of quantum walks, discrete and continuous time, were introduced as the quantum mechanical extension of the corresponding random walks and have been extensively studied over the last few years \cite{adz, Kempe}.
Random walks on graphs are the basis of a number of classical algorithms. Examples include 2-SAT (satisfiability for certain types of Boolean formulas), graph connectivity, and finding satisfying assignments for Boolean formulas. It is this success of random walks that motivated the study of their quantum analogs in order to explore whether they might extend the set of quantum algorithms. Several systems have been proposed as candidates to implement quantum random walks. These proposals include atoms trapped in optical lattices \cite{ap1}, cavity quantum electrodynamics (CQED) \cite{ap2} and nuclear magnetic resonance (NMR) in solid substrates \cite{ap3,ap4}. In liquid-state NMR systems \cite{ap5}, time-resolved observations of spin waves have been made \cite{ap6}. It has also been pointed out that a quantum walk can be simulated using classical waves instead of matter waves \cite{ap7,ap8}.
A study of quantum walks on simple graphs is well known in physics (see \cite{fls}). Recent studies of quantum walks on more general graphs were described in \cite{ccdfgs, fg,cfg,abnvw,aakv,mr,kem}. Some of these works study the problem in the important context of algorithmic problems on graphs and suggest that quantum walks are a promising algorithmic technique for designing future quantum algorithms.
There is an approach to the investigation of continuous-time quantum walks (CTQWs) on graphs that uses the spectral distribution associated with the adjacency matrix of the graph \cite{js,jsa, jsas, konno1, konno2}. The authors of Ref.\cite{js} introduced a method for calculating the probability amplitudes of the quantum walk based on the spectral distribution, which allows one to avoid the heavy combinatorial arguments often necessary to obtain a full description of the spectrum of the Hamiltonian. It is interesting to investigate the CTQW on a graph that grows as time goes by. In order to study a quantum system in full detail, its Hamiltonian needs to be diagonalized; with increasing dimension of the Hilbert space, the diagonalization of an operator becomes a very tedious task. We discuss this question for the CTQW in the form of a quantum central limit theorem. In this paper we investigate a quantum central limit theorem for the CTQW on growing odd graphs via the spectral distribution.
The organization of the paper is as follows. In Section 2, we give a brief outline of graphs and introduce the odd graphs. In Section 3, we review the stratification and quantum decomposition of the adjacency matrix of a graph. Section 4 is devoted to the method of computing the amplitudes of the CTQW through the spectral distribution $\mu$ of the adjacency matrix. In the first subsection of Section 5 we evaluate the CTQW on a finite odd graph, and in the second subsection we investigate a quantum central limit theorem for the CTQW on odd graphs. The paper ends with a brief conclusion and one appendix, which contains the determination of the spectral distribution by the method of continued fractions.
\section{Odd graph} We begin with a summary of basic notions of graphs and then introduce the odd graphs.
A graph is a pair $G=(V,E)$, where $V$ is a non-empty set and $E$ is a subset of $\{\{\alpha,\beta\}|\alpha, \beta\in V,\alpha\neq \beta \}$. Elements of $V$ and $E$ are called \emph{vertices} and \emph{edges}, respectively. Two vertices $\alpha, \beta\in V$ are called adjacent if $\{\alpha,\beta\}\in E$, and in this case we write $\alpha\sim \beta$. Let $l^2(V)$ denote the Hilbert space of $\mathbb{C}$-valued square-summable functions on
$V$; then $\{\ket{\alpha}|\ \alpha\in V\}$ is a complete orthonormal basis of $l^2(V)$. The adjacency matrix $A=(A_{\alpha \beta})_{\alpha,\beta\in V}$ is defined by
\[ A_{\alpha \beta} = \left\{ \begin{array}{ll} 1 & \mbox{if $ \alpha\sim \beta$}\\ 0 & \mbox{otherwise.} \end{array} \right. \]
which is considered as an operator acting on $l^2(V)$ in such a way that $$ A\ket{\alpha}=\sum_{\alpha\sim \beta}\ket{\beta}, \;\;\;\;\alpha\in V. $$ Obviously, (i) $A$ is symmetric; (ii) an element of $A$ takes a value in $\{0, 1\}$; (iii) a diagonal element of $A$ vanishes. Conversely, for a non-empty set $V$, a graph structure is uniquely determined by such a matrix indexed by $V$. The \emph{degree} or \emph{valency} of a vertex $\alpha\in V$ is defined by $$
\kappa(\alpha)=|\{\beta\in V| \alpha\sim \beta\}|, $$
where $|.|$ denotes the cardinality.
Let $S$ be the set of integers $S=\{1,2,...,2k-1\}$ for a fixed integer $k\geq 2$. We define $V$ to be the set of subsets of $S$ having cardinality $k-1$, i.e., \begin{equation}
V=\{\alpha \subset S:\ |\alpha|=k-1\}, \end{equation} and put \begin{equation}
E=\{\{\alpha,\beta\}|\alpha, \beta\in V,\alpha\cap \beta=\emptyset \}. \end{equation} The graph $(V,E)$ is called the odd graph of degree $k$ and is denoted by $O_k$. Obviously, $O_k$ is a regular graph of degree $k$. For $k=3$, the odd graph is the well-known Petersen graph (Fig.1). The odd graphs have been studied in algebraic graph theory, and some of their properties can be found in Refs.\cite{biggs1,biggs2}.
For $n=0,1,2,...,k-1$ we define $\epsilon_n$ as \[ \epsilon_n = \left\{ \begin{array}{ll} k-1-\frac{n}{2} & \mbox{if $n$ is even}\\ \frac{n-1}{2} & \mbox{if $n$ is odd,} \end{array} \right. \] then, by Proposition 4.1 of \cite{nob1}, for a pair $\alpha, \beta\in V$ we have \begin{equation}
\;\;\;\ |\alpha\cap \beta|=\epsilon_n \;\;\;\ \Longleftrightarrow \partial(\alpha, \beta)=n, \end{equation} where $\partial$ stands for the natural distance function. Due to the above relations based on distance function the odd graphs are distance regular graphs, i.e., for a given $i,j,l=0,1,2,...,$ the intersection number \begin{equation}
p_{ij}^l=|\{\gamma\in V| \partial(\alpha, \gamma)=i \ \mbox{and} \;\;\
\partial(\gamma, \beta)=j\}|, \end{equation} is determined independently of the choice of $\alpha, \beta\in V$ satisfying $\partial(\alpha, \beta)=l$. There are some well-known facts about the intersection numbers of distance regular graphs, for example $p_{j1}^i=0$ for $i\neq 0$ and $j\notin\{i-1, i, i+1 \}$ (for more details see Refs. \cite{cohen, bailey}). For convenience, set $b_i:=p_{i-1,1}^i$ ($1\leq i\leq d$), $c_i:=p_{i+1,1}^i$ ($0\leq i \leq d-1$), $a_i:=p_{i,1}^i$ ($0\leq i \leq d$), $k_i:=p_{i,i}^0$ ($0\leq i \leq d$) and $b_0=c_d=0$, where $d:=\max\{\partial(\alpha, \beta): \alpha, \beta\in V \}$ is called the diameter of the graph. Moreover \begin{equation} b_i+a_i+c_i=k, \;\;\;\;\ 0\leq i \leq d, \end{equation} where $k=k_1$ is the degree of the graph.
For the calculation of the Szeg\"{o}-Jacobi sequences $\{\omega_k\}$ and $\{\alpha_k\}$ in the next section we will require $b_i$ and $c_i$ \cite{jsa}, which for the odd graph are given by
\begin{equation}\label{interodd1} b_i\;=\;\cases{\frac{i}{2} & if $\;i \; \mbox{is even} $,\cr \frac{i+1}{2} & if $\;i \; \mbox{is odd} $\cr}\qquad \qquad (1\leq i\leq k-1), \end{equation} \begin{equation}\label{interodd2} c_i\;=\;\cases{k-\frac{i}{2} & if $\;i \; \mbox{is even} $,\cr k-\frac{i+1}{2} & if $\;i \; \mbox{is odd} $\cr}\qquad \qquad (0\leq i\leq k-2). \end{equation}
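The intersection numbers above can be checked directly on small cases. The following Python sketch (ours, purely illustrative and not part of the paper) builds $O_k$ from the $(k-1)$-subsets of $\{1,\dots,2k-1\}$, computes $b_i$ and $c_i$ from breadth-first strata around a reference vertex, and compares them with Eqs. (\ref{interodd1}) and (\ref{interodd2}) for $k=3$ (the Petersen graph) and $k=4$:

```python
from itertools import combinations
from collections import deque

def odd_graph(k):
    """O_k: vertices are the (k-1)-subsets of {1,...,2k-1};
    two vertices are adjacent iff the subsets are disjoint."""
    S = range(1, 2 * k)
    V = [frozenset(c) for c in combinations(S, k - 1)]
    adj = {v: [w for w in V if v.isdisjoint(w)] for v in V}
    return V, adj

def strata(o, adj):
    """BFS distances from the reference vertex o."""
    dist = {o: 0}
    queue = deque([o])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def b_c_from_graph(k):
    """Empirical intersection numbers: b_i = #neighbours one stratum down,
    c_i = #neighbours one stratum up (any vertex works, by distance regularity)."""
    V, adj = odd_graph(k)
    dist = strata(V[0], adj)
    d = max(dist.values())
    b, c = {}, {}
    for i in range(d + 1):
        beta = next(v for v in V if dist[v] == i)
        if i >= 1:
            b[i] = sum(1 for w in adj[beta] if dist[w] == i - 1)
        if i <= d - 1:
            c[i] = sum(1 for w in adj[beta] if dist[w] == i + 1)
    return b, c

# Closed forms of the paper's Eqs. (interodd1)-(interodd2)
def b_formula(i):
    return i // 2 if i % 2 == 0 else (i + 1) // 2

def c_formula(i, k):
    return k - i // 2 if i % 2 == 0 else k - (i + 1) // 2
```

For $k=3$ this reproduces the $3$-regular Petersen graph on $10$ vertices, and the empirical $b_i$, $c_i$ agree with the formulas for both $k=3$ and $k=4$.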
\section{Quantum Probabilistic Approach for CTQW} In this section we give some preliminaries required to describe the CTQW via the spectral distribution technique.
\subsection{Stratification} Due to definition of function $\partial$, the graph becomes a metric space with the distance $\partial$. We fix a point $o\in V$ as an origin of the graph, called reference vertex. Then, the graph is stratified into a disjoint union of strata: \begin{equation}\label{v1}
V=\bigcup_{i=0}^{\infty}V_i,\;\;\;\;\;\; V_i=\{\alpha\in V|\ \partial(o,\alpha)=i\}. \end{equation} With each stratum $V_i$ we associate a unit vector in $l^2(V)$ defined by \begin{equation}
\ket{\phi_{i}}=\frac{1}{\sqrt{|V_i|}}\sum_{\alpha\in V_{i}}\ket{i, \alpha}, \end{equation} where $\ket{i, \alpha}$ denotes the eigenket of the $\alpha$th vertex at the stratum $i$.
The closed subspace of $l^2(V)$ spanned by $\{\ket{\phi_{i}}\}$ is denoted by $\Lambda(G)$. Since $\{\ket{\phi_{i}}\}$ becomes a complete orthonormal basis of $\Lambda(G)$, we often write \begin{equation} \Lambda(G)=\sum_{i}\oplus \textbf{C}\ket{\phi_{i}}. \end{equation}
\subsection{Quantum decomposition} One can obtain a quantum decomposition associated with the stratification (\ref{v1}) for the adjacency matrices of this type of graphs as \begin{equation} A=A^{+}+A^{-}+A^0. \end{equation} where three matrices $A^+$, $ A^-$ and $A^0$ are defined as follows: for $\alpha\in V_i$ \[ (A^+)_{\beta\alpha} = \left\{ \begin{array}{ll} A_{\beta\alpha} & \mbox{if $ \beta\in V_{i+1}$}\\ 0 & \mbox{otherwise,} \end{array} \right. \] \[ (A^-)_{\beta\alpha} = \left\{ \begin{array}{ll} A_{\beta\alpha} & \mbox{if $ \beta\in V_{i-1}$}\\ 0 & \mbox{otherwise,} \end{array} \right. \] \[ (A^0)_{\beta\alpha} = \left\{ \begin{array}{ll} A_{\beta\alpha} & \mbox{if $ \beta\in V_{i}$}\\ 0 & \mbox{otherwise,} \end{array} \right. \] or, equivalently, for $\ket{i,\alpha}$, \begin{equation}\label{qd} A^{+}\ket{i,\alpha}=\sum_{\beta\in V_{i+1}}\ket{i+1,\beta}, \;\;\;\;\ A^{-}\ket {i,\alpha}=\sum_{\beta\in V_{i-1}}\ket{i-1,\beta}, \;\;\;\;\ A^{0}\ket{i,\alpha}=\sum_{\beta\in V_{i}}\ket{i,\beta}, \;\;\;\;\
\end{equation}
for $\{\alpha, \beta\}\in E$. If $\alpha\in V_i$ and $\{\alpha,\beta\}\in E$, then $\beta\in V_{i-1}\cup V_i\cup V_{i+1}$, where we tacitly understand that $V_{-1}=\emptyset$. The vector state corresponding to $\ket{o}=\ket{\phi_0}$, with $o\in V$ as the fixed origin, is analogous to the vacuum state in Fock space.
According to Ref.\cite{nob}, $\langle A^m\rangle$ coincides with the number of $m$-step walks starting and terminating at $o$. Moreover, by Lemma 2.2 of \cite{nob}, if $G$ is invariant under the quantum components $A^\varepsilon$, $\varepsilon\in \{+,-,0\}$, then there exist two Szeg\"{o}-Jacobi sequences $\{\omega_i\}_{i=1}^{\infty}$ and $\{\alpha_i\}_{i=1}^{\infty}$ derived from $A$, such that \begin{equation}\label{v5} A^{+}\ket{\phi_{i}}=\sqrt{\omega_{i+1}}\ket{\phi_{i+1}}, \;\;\;\ i\geq 0 \end{equation} \begin{equation}\label{v6} A^{-}\ket{\phi_{0}}=0, \;\;\ A^{-}\ket{\phi_{i}}=\sqrt{\omega_{i}}\ket{\phi_{i-1}}, \;\;\;\ i\geq 1 \end{equation} \begin{equation}\label{v7} A^{0}\ket{\phi_{i}}=\alpha_{i+1}\ket{\phi_{i}}, \;\;\;\ i\geq 0, \end{equation} where
$\sqrt{\omega_{i+1}}=\frac{|V_{i+1}|^{1/2}}{|V_{i}|^{1/2}}\kappa_{-(\beta)}$,
$\kappa_{-(\beta)}=|\{\alpha\in V_i| \alpha\sim \beta\}|$ for $\beta\in V_{i+1}$ and $\alpha_{i+1}=\kappa_{0(\beta)}$, such that
$\kappa_{0(\beta)}=|\{\alpha\in V_i| \alpha\sim \beta\}|$ for $\beta\in V_i$. In particular $(\Lambda(G), A^+, A^-)$ is an interacting Fock space associated with a Szeg\"{o}-Jacobi sequence $\{\omega_i\}$.
\subsection{Study of CTQW on a graph via spectral distribution of its adjacency matrix} The CTQW on a graph has been introduced as the quantum mechanical analogue of its classical counterpart, which is defined by replacing Kolmogorov's equation (master equation) of the continuous-time classical random walk on a graph \cite{ghwiss,nvkampen} \begin{equation} \frac{dP_m(t)}{dt} = \sum_{n=1}^{l}H_{mn}P_n(t), \;\;\ m=1,2,...,l \end{equation} with Schr\"{o}dinger's equation. The matrix $H$ is the Hamiltonian of the walk and $P_m(t)$ is the occupation probability of vertex $m$ at time $t$. It is natural to choose the Laplacian of the graph, defined as $L=A-D$, as the Hamiltonian of the walk, where $D$ is a diagonal matrix with entries $D_{jj}=\deg(\alpha_j)$.
Let $\ket{\phi(t)}$ be a time-dependent amplitude of the quantum process on graph $\Gamma$. The wave evolution of the quantum walk is \begin{equation}
i\hbar\frac{d}{dt}\ket{\phi(t)} = H\ket{\phi(t)}, \end{equation} where from now on we assume $\hbar = 1$, and $\ket{\phi_{0}}$ is the initial amplitude wave function of the particle. The solution is given by $\ket{\phi(t)} = e^{-iHt} \ket{\phi_{0}}$. On $d$-regular graphs, $D = dI$, and since $A$ and $D$ commute, we get \begin{equation} \label{eqn:phase-factor} e^{-itH} = e^{-it(A-dI)} = e^{itd}e^{-itA}. \end{equation} This introduces only an irrelevant global phase factor in the wave evolution. Hence we can take $H=A$. Thus, we have \begin{equation}
\ket{\phi(t)} = e^{-iAt}\ket{\phi_0}. \end{equation}
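As a sanity check of the phase-factor argument (our own numerical illustration, not part of the paper): for a $d$-regular graph the degree matrix is $d$ times the identity, so $e^{-it(A-dI)}$ and $e^{-itA}$ differ only by the global phase $e^{itd}$. The sketch below verifies this for the $4$-cycle ($d=2$) with a plain Taylor-series matrix exponential:

```python
import cmath

def mat_exp(X, terms=60):
    """exp(X) for a small complex matrix via plain Taylor series
    (adequate here since ||X|| is small)."""
    n = len(X)
    R = [[complex(i == j) for j in range(n)] for i in range(n)]  # identity
    T = [row[:] for row in R]
    for k in range(1, terms):
        T = [[sum(T[i][m] * X[m][j] for m in range(n)) / k for j in range(n)]
             for i in range(n)]
        R = [[R[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return R

# Adjacency matrix of the 4-cycle, a 2-regular graph (so D = 2I)
A = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
d, t, n = 2, 0.7, 4

# Compare e^{-it(A - dI)} with e^{itd} e^{-itA}: they should coincide entrywise
H = [[A[i][j] - d * (i == j) for j in range(n)] for i in range(n)]
lhs = mat_exp([[-1j * t * H[i][j] for j in range(n)] for i in range(n)])
expA = mat_exp([[-1j * t * A[i][j] for j in range(n)] for i in range(n)])
rhs = [[cmath.exp(1j * t * d) * expA[i][j] for j in range(n)] for i in range(n)]
```

Since the two evolutions agree up to a global phase, transition probabilities computed from $H=A$ and from the Laplacian coincide.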
One of our goals in this paper is the evaluation of probability amplitudes for the CTQW on graphs by using the method of spectral distribution associated with the adjacency matrix. The spectral properties of the adjacency matrix of a graph play an important role in many branches of mathematics and physics. The spectral distribution can be generalized in various ways. In this work, following Refs.\cite{js, nob}, we consider the spectral distribution $\mu$ of the adjacency matrix $A$: \begin{equation}\label{v2} \langle A^m\rangle=\int_{R}x^{m}\mu(dx), \;\;\;\;\ m=0,1,2,... \end{equation} where $\langle.\rangle$ is the mean value with respect to the state $\ket{\phi_0}$. For graphs admitting a quantum decomposition (QD), the ``moment'' sequence $\{\langle A^m\rangle\}_{m=0}^{\infty}$ is well defined \cite{js,nob}. Then the existence of a spectral distribution satisfying (\ref{v2}) is a consequence of Hamburger's theorem; see, e.g., Shohat and Tamarkin [\cite{st}, Theorem 1.2].
Using the quantum decomposition relations (\ref{v5})--(\ref{v7}) and the recurrence relation (\ref{op}) of the polynomials $P_n(x)$, the other matrix elements $\braket{\phi_{n}}{A^m\mid \phi_0}$ can be written as \begin{equation}\label{cw1} \braket{\phi_{n}}{A^m\mid \phi_0}=\frac{1}{\sqrt{\omega_1\omega_2\cdots \omega_{n} }}\int_{R}x^{m}P_{n}(x)\mu(dx), \;\;\;\;\ m=0,1,2,..., \end{equation} which is useful for obtaining the amplitudes of the CTQW in terms of the spectral distribution associated with the adjacency matrix of the graph \cite{js}.
Therefore, by using (\ref{cw1}), the probability amplitude of observing the walk at stratum $m$ at time $t$ can be obtained as \begin{equation}\label{v4} q_{m}(t)=\braket{\phi_{m}}{\phi(t)}=\braket{\phi_{m}}{e^{-iAt}\mid \phi_0}=\frac{1}{\sqrt{\omega_1\omega_2\cdots\omega_{m}}}\int_{R}e^{-ixt}P_{m}(x)\mu(dx). \end{equation} The conservation of probability $\sum_{m=0}^{\infty}{\mid \braket{\phi_{m}}{\phi(t)}\mid}^2=1$ follows immediately from Eq.(\ref{v4}) by using the completeness relation of the orthogonal polynomials $P_n(x)$. In Appendix A of Ref.\cite{js} it is proved that the walker has the same amplitude at the vertices belonging to the same stratum, i.e., we have $q_{im}(t)=\frac{q_{m}(t)}{\mid V_m\mid}, i=0,1,...,\mid V_m\mid$, where $q_{im}(t)$ denotes the amplitude of the walker at the $i$th vertex of the $m$th stratum.
Investigating the CTQW via the spectral distribution method paves the way to calculating the CTQW on infinite graphs and to approximating it by finite graphs (and vice versa) via the Gauss quadrature formula; in the case of infinite graphs, one can study the asymptotic behavior of the walk at large enough times by using the method of stationary phase approximation (for more details see [1]).
Indeed, the determination of $\mu(x)$ is the main problem in the spectral theory of operators; this is possible by using the continued fractions method, as explained in Appendix A.
\section{Quantum central limit theorem for CTQW on odd graphs} Having studied the CTQW on finite odd graphs using the method of the spectral distribution, we now investigate a quantum central limit theorem for the CTQW on these graphs, which is our main goal.
Using the stratification and quantum decomposition of Section 3, the relations of
Ref.\cite{jsa} (i.e., $\omega_i=c_{i-1}b_{i}, \;\;\;\ \alpha_i=a_1-b_{i-1}-c_{i-1}$) and Eqs.(\ref{interodd1}),
(\ref{interodd2}), we obtain the two Szeg\"{o}-Jacobi sequences $\{\omega_i\}$ and $\{\alpha_i\}$ for $O_k$ as follows:\\ if $i$ is odd, \begin{equation}\label{central2} \omega_i=\frac{i+1}{2}(k-\frac{i-1}{2}), \end{equation} if $i$ is even, \begin{equation} \omega_i=\frac{i}{2}(k-\frac{i}{2}) \end{equation} if $i$ is odd or even, \begin{equation} \alpha_i=0. \end{equation} \textbf{A. Finite $k$ case}
Let $\mu_k$ denote the spectral distribution of the odd graph $O_k$. Here, to study the CTQW on a finite odd graph, we consider the case $k=4$. Then we have \begin{equation} \omega_1=4,\;\ \omega_2=3, \;\ \omega_3=6, \;\ \alpha_1=\alpha_2=\cdots= 0. \end{equation} Therefore we obtain the Stieltjes transform \begin{equation} G_{\mu_4}(z)=\frac{z^3-9z}{z^4-13z^2+24}. \end{equation} In this case one can obtain the spectral distribution as follows: $$ \mu_4=\frac{\sqrt{73}}{292}(5+\sqrt{73})(\delta(x-\frac{1}{2}\sqrt{26-2\sqrt{73}})+ \delta(x+\frac{1}{2}\sqrt{26-2\sqrt{73}}))+ $$ \begin{equation} \frac{\sqrt{73}}{292}(-5+\sqrt{73})(\delta(x-\frac{1}{2}\sqrt{26+2\sqrt{73}})+ \delta(x+\frac{1}{2}\sqrt{26+2\sqrt{73}})). \end{equation} By using Eq.(\ref{v4}), the amplitudes of the walk at time $t$ are $$ q_o(t)=\frac{\sqrt{73}}{146}((5+\sqrt{73})\cos (\frac{1}{2}\sqrt{26-2\sqrt{73}})t+(-5+\sqrt{73})\cos( \frac{1}{2}\sqrt{26+2\sqrt{73}})t), $$ $$ q_1(t)=\frac{-i\sqrt{73}}{584}((5+\sqrt{73})(\sqrt{26-2\sqrt{73}})\sin (\frac{1}{2}\sqrt{26-2\sqrt{73}})t+ $$$$ (-5+\sqrt{73})(\sqrt{26+2\sqrt{73}})\sin( \frac{1}{2}\sqrt{26+2\sqrt{73}})t), $$ $$ q_2(t)=\frac{2\sqrt{3}}{\sqrt{73}}(-\cos (\frac{1}{2}\sqrt{26-2\sqrt{73}})t+ \cos( \frac{1}{2}\sqrt{26+2\sqrt{73}})t), $$ $$ q_3(t)=\frac{-i\sqrt{73}}{584\sqrt{2}}(-(13+\sqrt{73})(\sqrt{26-2\sqrt{73}})\sin (\frac{1}{2}\sqrt{26-2\sqrt{73}})t+ $$ \begin{equation} (-13 +\sqrt{73})(\sqrt{26+2\sqrt{73}})\sin( \frac{1}{2}\sqrt{26+2\sqrt{73}})t). \end{equation} \textbf{B. Quantum central limit theorem }
In the limit $k\longrightarrow \infty$, the odd graphs $O_k$ form a growing family of distance regular graphs. In the remainder of this section, we obtain the CTQW on the odd graphs $O_k$ for $k\longrightarrow \infty$ by applying the quantum central limit theorem, which is derived from quantum probabilistic techniques.
\textbf{Theorem (\cite{nob1})}. Let $\{G^{k}=(V^k, E^k)\}$ be a growing family of distance regular graphs. Let $A_k$ and $\{p_{ij}^{l}(k)\}$ be the adjacency matrix and the intersection numbers of $G^{k}$, respectively. Assume that the limits $$ \omega_i=\lim_{k\longrightarrow\infty}\frac{p_{i-1,1}^{i}(k)p_{i,1}^{i-1}(k)}{p_{11}^{0}(k)}=\lim_{k\longrightarrow\infty}\frac{b_{i}(k)c_{i-1}(k)}{k} $$ \begin{equation}\label{central1} \alpha_i=\lim_{k\longrightarrow\infty}\frac{p_{i-1,1}^{i-1}(k)}{\sqrt{p_{11}^{0}(k)}}=\lim_{k\longrightarrow\infty}\frac{a_1(k)-b_{i-1}(k)-c_{i-1}(k)}{\sqrt{k}} \end{equation} exist for all $i=1,2,...$. Let $\Lambda=(\Lambda, \{\ket{\psi_i}\}, B^+, B^-)$ be the interacting Fock space associated with $\{\omega_i\}$ and define a diagonal operator $B^0$ by $B^0\ket{\psi_i}=\alpha_{i+1}\ket{\psi_i}$. Then, for the quantum components $A_{k}^{\varepsilon}$, $\varepsilon\in\{+,-,0\}$, of the adjacency matrix $A_k$, it holds that \begin{equation} \lim_{k\longrightarrow\infty}\frac{A_{k}^{\varepsilon}}{\sqrt{p_{11}^{0}(k)}}=\lim_{k\longrightarrow\infty}\frac{A_{k}^{\varepsilon}}{\sqrt{k}}=B^{\varepsilon} \;\;\;\ \varepsilon\in\{+,-,0\}, \end{equation} in the stochastic sense. In particular, we have \begin{equation}
\lim_{k\longrightarrow\infty}\braket{\phi_m}{\frac{A_{k}^{\varepsilon}}{\sqrt{k}}|\phi_0}=\braket{\psi_m}{B^{\varepsilon}|\psi_0}, \end{equation} for $\varepsilon\in\{+,-,0\}$.
To state a quantum central limit theorem for the CTQW on the odd graph $O_k$, it is convenient to calculate the probability amplitudes as \begin{equation} q_m(t)=\lim_{k\longrightarrow
\infty}\braket{\phi_m}{e^{\frac{-itA_k}{\sqrt{k}}}|\phi_0}=\frac{1}{\sqrt{\omega_1\omega_2...\omega_m}}\int_{R}e^{-ixt}P_{m}(x)\mu_{\infty}(dx). \end{equation} According to the above theorem, we only need to find the two Szeg\"{o}-Jacobi sequences $\{\omega_i\}$ and $\{\alpha_i\}$, which, by using Eqs.(\ref{central1}) and (\ref{central2}), we obtain as follows:\\
if $i$ is odd, \begin{equation} \omega_i=\lim_{k\longrightarrow \infty}\frac{1}{k} \frac{i+1}{2}(k-\frac{i-1}{2})=\frac{i+1}{2}, \end{equation} if $i$ is even, \begin{equation} \omega_i=\lim_{k\longrightarrow \infty}\frac{1}{k} \frac{i}{2}(k-\frac{i}{2})=\frac{i}{2}, \end{equation} if $i$ is odd or even, \begin{equation} \alpha_i=0. \end{equation} Thus, $\{\omega_i\}=\{1,1,2,2,3,3,4,4,...\}$ as desired. Therefore the Stieltjes transform of the infinite odd graph is \begin{equation}\label{measurein} G_{\mu_\infty}(z)=\frac{1}{z-\frac{1}{z-\frac{1} {z-\frac{2}{z-\frac{2}{z-\frac{3}{z-\frac{3}{z-\cdots}}}}}}}, \end{equation} where the spectral distribution $\mu_\infty(x)$ corresponding to this Stieltjes transform is given by \cite{nob1} \begin{equation}
\mu_\infty(x)=|x|e^{-x^2}. \end{equation} Therefore we obtain the probability amplitude at time $t$ at the $0$-th stratum (the starting vertex): $$ q_0(t)=\int_{R}e^{-ixt}\mu_{\infty}(dx)=
\int_{-\infty}^{\infty}e^{-ixt}|x|e^{-x^2}dx $$ \begin{equation} =\frac{i\sqrt{\pi}t}{2}erf(it/2)e^{-\frac{t^2}{4}}+1. \end{equation} In the above calculation we used the formulas: $$ x^ne^{-ixt}=i^n \frac{d^n}{d t^n}e^{-ixt}, $$ \begin{equation}\label{deriv1} \int_{0}^{\infty}e^{-x^2}e^{-ixt}dx=\frac{\sqrt{\pi}}{2}(1-erf(it/2))e^{\frac{-t^2}{4}}, \end{equation} where $erf(x)$ stands for the error function, defined as \begin{equation} erf(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-s^2}ds; \end{equation} also, the derivative of the error function follows immediately from its definition: $\frac{\partial}{\partial x}erf(x)=\frac{2}{\sqrt{\pi}}e^{-x^2}$.
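The closed form for $q_0(t)$ can be verified numerically. The sketch below (ours, not part of the paper) uses the identity $i\,erf(it/2)=-\,\mathrm{erfi}(t/2)$ to stay in real arithmetic, computes $\mathrm{erfi}$ from its Maclaurin series, and compares the closed form with Simpson-rule integration of $2\int_0^\infty x e^{-x^2}\cos(xt)\,dx$ (the odd part of the integrand integrates to zero):

```python
import math

def erfi(x, terms=60):
    """Imaginary error function, erfi(x) = -i erf(ix), via the
    Maclaurin series (2/sqrt(pi)) * sum_n x^(2n+1) / (n! (2n+1))."""
    s, term = 0.0, x              # term = x^(2n+1) / n!
    for n in range(terms):
        s += term / (2 * n + 1)
        term *= x * x / (n + 1)
    return 2.0 / math.sqrt(math.pi) * s

def q0_closed(t):
    """The closed form 1 + (i sqrt(pi) t / 2) erf(it/2) e^{-t^2/4},
    rewritten as 1 - (sqrt(pi) t / 2) e^{-t^2/4} erfi(t/2)."""
    return 1.0 - math.sqrt(math.pi) * t / 2.0 * math.exp(-t * t / 4.0) * erfi(t / 2.0)

def q0_numeric(t, upper=8.0, n=4000):
    """Simpson's rule for 2 * int_0^inf x e^{-x^2} cos(x t) dx
    (the tail beyond `upper` is negligible)."""
    f = lambda x: 2.0 * x * math.exp(-x * x) * math.cos(x * t)
    h = upper / n
    s = f(0.0) + f(upper)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0
```

The two evaluations agree to high precision over a range of times, and $q_0(0)=1$ as required by normalization of $\mu_\infty$.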
By using Eqs.(\ref{v4}) and (\ref{deriv1}), we obtain the probability amplitude of the walk at time $t$ and the $m$-th stratum on the infinite odd graph in terms of $q_0(t)$ as:\\ if $m$ is odd \begin{equation} q_{m}(t)=\frac{1}{(\frac{m-1}{2})!(\frac{m+1}{2})!}P_m(i\frac{d}{dt})q_0(t), \end{equation} if $m$ is even \begin{equation} q_{m}(t)=\frac{1}{((\frac{m}{2})!)^2}P_m(i\frac{d}{dt})q_0(t), \end{equation} where the polynomials $\{P_m(i\frac{d}{dt})\}$ are defined recurrently by relation (\ref{op}) in terms of $i\frac{d}{dt}$. Finally, as examples, we obtain the probability amplitudes of the walk at strata $1$, $2$ and $3$ as follows:\\ $m=1$ \begin{equation} q_1(t)=P_1(i\frac{d}{dt})q_0(t)=i\frac{d}{dt}q_0(t)=\frac{\sqrt{\pi}}{4}(t^2-2)erf(it/2)e^{\frac{-t^2}{4}}-it/2, \end{equation} $m=2$ $$ q_2(t)=P_2(i\frac{d}{dt})q_0(t)=((i\frac{d}{dt})^2-1)q_0(t) $$ \begin{equation} =\frac{1}{8}(-i\sqrt{\pi}t^3\;erf(it/2)e^{\frac{-t^2}{4}}-2t^2+2i\sqrt{\pi}t\;erf(it/2)e^{\frac{-t^2}{4}}), \end{equation} $m=3$ $$ q_3(t)=\frac{1}{\sqrt{2}}P_3(i\frac{d}{dt})q_0(t)=\frac{1}{\sqrt{2}}((i\frac{d}{dt})^3-2i\frac{d}{dt})q_0(t) $$ \begin{equation} =\frac{1}{16\sqrt{2}}(-\sqrt{\pi}t^4erf(it/2)e^{\frac{-t^2}{4}}+2it^3+4\sqrt{\pi}t^2erf(it/2)e^{\frac{-t^2}{4}}-4it+ 4\sqrt{\pi}erf(it/2)e^{\frac{-t^2}{4}}). \end{equation} \section{Conclusion} In this paper, using the method of calculating the probability amplitudes of the continuous-time quantum walk on a graph via quantum probability theory, we have studied the continuous-time quantum walk on odd graphs as the graph grows. We have discussed this question as a quantum central limit theorem for the CTQW. It is interesting to investigate the CTQW on a growing family of graphs,
since it is plausible that the probability amplitudes of the CTQW converge to the uniform distribution; this is under investigation.
\setcounter{section}{0}
\setcounter{equation}{0}
\renewcommand{\theequation}{A-\roman{equation}}
{\Large{Appendix A}}\\ \textbf{\large{Determination of the spectral distribution by the continued fractions method}}
In this appendix we explain how to determine the spectral distribution $\mu(x)$ of the graphs by using the Szeg\"{o}-Jacobi sequences $(\{\omega_k\},\{\alpha_k\})$, where the parameters $\omega_k$ and $\alpha_k$ are defined in Subsection 3.2.
To this aim we may apply the canonical isomorphism from the interacting Fock space onto the closed linear span of the orthogonal polynomials determined by the Szeg\"{o}-Jacobi sequences $(\{\omega_i\},\{\alpha_i\})$. More precisely, the spectral distribution $\mu$ in question is characterized by the property of orthogonalizing the polynomials $\{P_n\}$ defined recurrently by $$ P_0(x)=1, \;\;\;\;\;\ P_1(x)=x-\alpha_1,$$ \begin{equation}\label{op} xP_n(x)=P_{n+1}(x)+\alpha_{n+1}P_n(x)+\omega_nP_{n-1}(x), \end{equation} for $n\geq 1$.
As shown in \cite{tsc}, the spectral distribution $\mu$ can be determined by the following identity: \begin{equation}\label{v3} G_{\mu}(z)=\int_{R}\frac{\mu(dx)}{z-x}=\frac{1}{z-\alpha_1-\frac{\omega_1}{z-\alpha_2-\frac{\omega_2} {z-\alpha_3-\frac{\omega_3}{z-\alpha_4-\cdots}}}}=\frac{Q_{n-1}^{(1)}(z)}{P_{n}(z)}=\sum_{l=1}^{n} \frac{A_l}{z-x_l}, \end{equation} where $G_{\mu}(z)$ is called the Stieltjes transform and $A_l$ is the coefficient in the Gauss quadrature formula corresponding to the roots $x_l$ of the polynomial $P_{n}(x)$, and where the polynomials $\{Q_{n}^{(1)}\}$ are defined recurrently as\\
$Q_{0}^{(1)}(x)=1$,\\
$Q_{1}^{(1)}(x)=x-\alpha_2$,\\
$xQ_{n}^{(1)}(x)=Q_{n+1}^{(1)}(x)+\alpha_{n+2}Q_{n}^{(1)}(x)+\omega_{n+1}Q_{n-1}^{(1)}(x)$,\\
for $n\geq 1$.
Now if $G_{\mu}(z)$ is known, then the spectral distribution $\mu$ can be recovered from $G_{\mu}(z)$ by means of the Stieltjes inversion formula: \begin{equation}\label{m1} \mu(y)-\mu(x)=-\frac{1}{\pi}\lim_{v\longrightarrow 0^+}\int_{x}^{y}Im\{G_{\mu}(u+iv)\}du. \end{equation} Substituting the right hand side of (\ref{v3}) in (\ref{m1}), the spectral distribution can be determined in terms of $x_l, l=1,2,...$, the roots of the polynomial $P_n(x)$, and the Gauss quadrature constants $A_l, l=1,2,...$, as \begin{equation}\label{m} \mu=\sum_l A_l\delta(x-x_l) \end{equation}
(for more details see Refs. \cite{js,jsa,tsc,st}).
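As an illustrative check of this machinery (ours, not part of the paper), one can run the recurrences for the odd graph $O_4$ of Section 5, where $\omega_1=4$, $\omega_2=3$, $\omega_3=6$ and all $\alpha_i=0$: the recurrence (\ref{op}) reproduces $P_4(x)=x^4-13x^2+24$ and $Q_3^{(1)}(x)=x^3-9x$, and the Gauss quadrature weights $A_l=Q_3^{(1)}(x_l)/P_4'(x_l)$ at the roots $x_l$ of $P_4$ reproduce the atoms of $\mu_4$ and sum to one:

```python
import math

omega = [4.0, 3.0, 6.0]           # Szego-Jacobi sequence of O_4; all alpha_i = 0

def next_poly(p_n, p_nm1, w):
    """P_{n+1}(x) = x P_n(x) - w P_{n-1}(x); coefficients low degree first."""
    out = [0.0] + list(p_n)
    for i, a in enumerate(p_nm1):
        out[i] -= w * a
    return out

P = [[1.0], [0.0, 1.0]]           # P_0 = 1, P_1 = x  (alpha_1 = 0)
for w in omega:
    P.append(next_poly(P[-1], P[-2], w))

Q = [[1.0], [0.0, 1.0]]           # Q^{(1)} runs on omega_2, omega_3, ...
for w in omega[1:]:
    Q.append(next_poly(Q[-1], Q[-2], w))

def peval(p, x):
    return sum(a * x ** i for i, a in enumerate(p))

def pderiv(p):
    return [i * a for i, a in enumerate(p)][1:]

# Roots of P_4(x) = x^4 - 13x^2 + 24: x = +/- (1/2) sqrt(26 -/+ 2 sqrt(73))
r = math.sqrt(73.0)
roots = [s * 0.5 * math.sqrt(26.0 + e * 2.0 * r)
         for e in (-1.0, 1.0) for s in (1.0, -1.0)]

# Gauss quadrature weights A_l = Q_3(x_l) / P_4'(x_l)
dP4 = pderiv(P[4])
weights = [peval(Q[3], x) / peval(dP4, x) for x in roots]
```

The computed weights coincide with the coefficients $\frac{\sqrt{73}}{292}(5\pm\sqrt{73})$ of the delta functions in $\mu_4$.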
{\bf Figure Captions}
{\bf Figure-1:} The Petersen graph.
\end{document}
\begin{document}
\title{Classification Constrained Dimensionality Reduction}
\author{Raviv~Raich,~\IEEEmembership{Member,~IEEE,}
Jose~A.~Costa,~\IEEEmembership{Member,~IEEE,}
Steven~B.~Damelin,~\IEEEmembership{Senior Member,~IEEE,}
and~Alfred~O.~Hero~III,~\IEEEmembership{Fellow,~IEEE} \thanks{This work was partially funded by the DARPA Defense Sciences Office under Office of Naval Research contract \#N00014-04-C-0437. Distribution Statement A. Approved for public release; distribution is unlimited. S. B. Damelin was supported in part by National Science Foundation grant no. NSF-DMS-0555839 and NSF-DMS-0439734 and by AFRL. } \thanks{R. Raich is with the Oregon State University, Corvallis. A. O Hero III is with the University of Michigan, Ann Arbor.
J. A. Costa is with the California Institute of Technology. S. B. Damelin is with the Georgia Southern University. }}
\markboth{ } {Raich, Costa, and Hero: Classification Constrained Dimensionality Reduction}
\maketitle
\begin{abstract} Dimensionality reduction is a topic of recent interest. In this paper, we present the classification constrained dimensionality reduction (CCDR) algorithm to account for label information. The algorithm can account for multiple classes as well as the semi-supervised setting. We present out-of-sample extensions for both labeled and unlabeled data. For unlabeled data, we introduce a method of embedding a new point as preprocessing to a classifier. For labeled data, we introduce a method that improves the embedding during the training phase using the out-of-sample extension. We investigate classification performance using the CCDR algorithm on hyper-spectral satellite imagery data. We demonstrate the performance gain for both local and global classifiers and demonstrate a $10\%$ improvement in the performance of the $k$-nearest neighbors algorithm. We present a connection between intrinsic dimension estimation and the optimal embedding dimension obtained using the CCDR algorithm. \end{abstract}
\begin{keywords} Classification, Computational Complexity, Dimensionality Reduction, Embedding, High Dimensional Data, Kernel, K-Nearest Neighbor, Manifold Learning, Probability, Out-of-Sample Extension. \end{keywords}
\IEEEpeerreviewmaketitle
\section{Introduction} \label{sec:intro}
In classification theory, the main goal is to find a mapping from an observation space ${\cal X}$, consisting of a collection of points in some ambient Euclidean space $\mathds{R}^{d}$, $d\geq 1$, into a set consisting of several different integer valued hypotheses. In some problems, the observations from the set ${\cal X}$ lie on an $m$-dimensional manifold $\mathcal{M}$ with $m < d$, and Whitney's embedding theorem tells us that, provided this manifold is smooth enough, there exists an embedding of $\mathcal{M}$ into $\mathds{R}^{2m+1}$. This motivates the approach taken by kernel methods in classification theory, such as support vector machines \cite{hastie:00} for example. Our interest is in finding an embedding of $\mathcal{M}$ into a lower dimensional Euclidean space.
\begin{figure}
\caption{PCA of a two-class classification problem.}
\label{fig:pca}
\end{figure}
Dimensionality reduction of high dimensional data was addressed in classical methods such as principal component analysis (PCA) \cite{Jain&Dubes:88} and multidimensional scaling (MDS) \cite{Torgerson:52, Cox&Cox:00}. In PCA, an eigendecomposition of the $d \times d$ empirical covariance matrix is performed and the data points are linearly projected along the $m$ ($0< m \le d$) eigenvectors with the largest eigenvalues. A problem that may occur with PCA for classification is demonstrated in Fig.~\ref{fig:pca}. When the information that is relevant for classification is present only in the eigenvectors associated with the small eigenvalues ($e_2$ in the figure), removal of such eigenvectors may result in severe degradation in classification performance. In MDS, the goal is to find a lower dimensional embedding of the original data points that preserves the relative distances between all the data points.
Both of these classical methods suffer greatly when the manifold is nonlinear. For example, PCA cannot offer dimensionality reduction for the classification of two classes, each lying on one of two concentric circles.
In \cite{Scholkopf:98:nc}, a nonlinear extension to PCA is presented. The algorithm is based on the ``kernel trick'' \cite{aizerman:64:arc}. Data points are nonlinearly mapped into a feature space, which in general has a higher (or even infinite) dimension than the original space, and then PCA is applied to the mapped data.
In the paper of Tenenbaum \emph{et al.}~\cite{Tenenbaum:00:science}, Isomap, a global dimensionality reduction algorithm, was introduced to take into account the fact that data points may lie on a lower dimensional manifold. Unlike MDS, geodesic distances (distances that are measured along the manifold) are preserved by Isomap. Isomap utilizes the classical MDS algorithm, but instead of using the matrix of Euclidean distances, it uses a modified version of it. Each point is connected only to points in its local neighborhood. A distance between a point and another point outside its local neighborhood is replaced with the sum of distances along the shortest path in the graph. This procedure modifies the squared-distance matrix, replacing Euclidean distances with geodesic ones.
In \cite{Belkin&Niyogi:NC03}, Belkin and Niyogi present a related Laplacian eigenmap dimensionality reduction algorithm. The algorithm minimizes the weighted sum of squared distances of the lower-dimensional data. The weight multiplying the squared distance between two low-dimensional data points is inversely related to the distance between the corresponding two high-dimensional data points. Therefore, a small distance between two high-dimensional data points encourages a small distance between the corresponding two low-dimensional data points. To preserve the geodesic distances, the weight of the distance between two points that do not share a local neighborhood is set to zero.
We refer the interested reader to the references below, and those cited therein, for some of the most commonly used additional algorithms within the class of \emph{manifold learning} algorithms and their different advantages relevant to our work: Locally Linear Embedding (LLE) \cite{Roweis:00:science}, Laplacian Eigenmaps \cite{Belkin&Niyogi:NC03}, Hessian Eigenmaps (HLLE) \cite{Donoho:03:PNAS}, Local Tangent Space Alignment \cite{zhang:05:siam}, Diffusion Maps \cite{coifman:06:ACHA}, and Semidefinite Embedding (SDE) \cite{weinberger:04:cvpr}.
The algorithms mentioned above consider the problem of learning a lower-dimensional embedding of the data. In classification, such algorithms can be used to preprocess high-dimensional data before performing the classification. This could potentially allow for a lower computational complexity of the classifier. In some cases, however, dimensionality reduction increases the computational complexity of the classifier. In fact, support vector machines adopt the opposing strategy: data points are projected onto a higher-dimensional space and classified there by a low computational complexity classifier. To guarantee a low computational complexity of the classifier of the low-dimensional data, a classification constrained dimensionality reduction (CCDR) algorithm was introduced in \cite{costa:05:icassp}. The CCDR algorithm is an extension of Laplacian eigenmaps \cite{Belkin&Niyogi:NC03} that incorporates class label information into the cost function, reducing the distance between points with similar labels. Another algorithm that incorporates label information is marginal Fisher analysis (MFA) \cite{yan:07:pami}, in which a constraint on the margin between classes is used to enforce class separation.
In \cite{costa:05:icassp}, the CCDR algorithm was only studied for two classes and its performance was illustrated for simulated data. In \cite{raich:06:icassp}, a multi-class extension to the problem was presented. In this paper, we introduce two additional components that make the algorithm computationally viable. The first is an out-of-sample extension for classification of unlabeled test points. Similar to the out-of-sample extension presented in \cite{Bengio:03}, one can utilize the Nystr\"{o}m formula for classification problems in which label information is available. The second is a method that uses this extension to improve the embedding of labeled points during the training phase. We study the algorithm performance as its various parameters (e.g., dimension, label importance, and local neighborhood) are varied. We study the performance of CCDR as preprocessing prior to implementation of several classification algorithms such as $k$-nearest neighbors, linear classification, and neural networks. We demonstrate a $10\%$ improvement over the $k$-nearest neighbors algorithm performance benchmark for this dataset. We address the issue of dimension estimation and its effect on classification performance.
The organization of this paper is as follows. Section \ref{sec:ccdr} presents the multiple-class CCDR algorithm. Section \ref{sec:study} provides a study of the algorithm using the Landsat dataset and Section \ref{sec:conclusion} summarizes our results.
\section{Dimensionality Reduction} Let $\mathcal{X}_n=\{\Vc{x}_1,\Vc{x}_2,\ldots,\Vc{x}_n\}$ be a set of $n$ points constrained to lie on an $m$-dimensional submanifold $\mathcal{M} \subseteq \mathds{R}^d$. In dimensionality reduction, our goal is to obtain a lower-dimensional embedding $\mathcal{Y}_n = \{\Vc{y}_1,\Vc{y}_2,\ldots,\Vc{y}_n\}$ (where $\Vc{y}_i \in \mathds{R}^m$ with $m<d$) that preserves local geometry information such that processing of the lower dimensional embedding $\mathcal{Y}_n$ yields comparable performance to processing of the original data points $\mathcal{X}_n$. Alternatively, we would like to learn the mapping $f: \mathcal{M} \subseteq \mathds{R}^d \to \mathds{R}^m$ that maps every data point $\Vc{x}_i$ to $\Vc{y}_i=f(\Vc{x}_i)$ such that some geometric properties of the high-dimensional data are preserved in the lower dimensional embedding. The first question that comes to mind is how to select $f$, or more specifically how to restrict the function $f$ so that we can still achieve our goal.
\subsection{Linear dimensionality reduction} \subsubsection{PCA} When principal component analysis (PCA) is used for dimensionality reduction, one considers a linear embedding of the form \begin{eqnarray*} \Vc{y}_i=f(\Vc{x}_i)={\bm A} \Vc{x}_i, \end{eqnarray*}
where ${\bm A}$ is $m \times d$. This embedding captures the notion of proximity in the sense that close points in the high dimensional space map to close points in the lower dimensional embedding, i.e., $\|\Vc{y}_i-\Vc{y}_j\| = \| {\bm A} (\Vc{x}_i - \Vc{x}_j)\| \le \|{\bm A}\| \|
\Vc{x}_i-\Vc{x}_j\|$. Let \[ \bar{\Vc{x}} = \frac{1}{n} \sum_{i=1}^n \Vc{x}_i \] and \[ \mathcal{C}_x= \frac{1}{n} \sum_{i=1}^n (\Vc{x}_i - \bar{\Vc{x}}) ( \Vc{x}_i - \bar{\Vc{x}})^T. \] Similarly, let \[ \bar{\Vc{y}} = \frac{1}{n} \sum_{i=1}^n \Vc{y}_i \] and \[ \mathcal{C}_y= \frac{1}{n} \sum_{i=1}^n (\Vc{y}_i - \bar{\Vc{y}}) ( \Vc{y}_i - \bar{\Vc{y}})^T. \] Since $\Vc{y}_i= {\bm A} \Vc{x}_i$, we have $\bar{\Vc{y}}= {\bm A} \bar{\Vc{x}}$ and $\mathcal{C}_y= {\bm A} \mathcal{C}_x {\bm A}^T$. In PCA, the goal is to find the projection matrix ${\bm A}$ that preserves most of the energy in the original data by solving \begin{eqnarray*} \max_{{\bm A}} \mbox{tr} \{\mathcal{C}_y( {\bm A})\} \quad \textrm{s.t.} \quad {\bm A}{\bm A}^T = {\bm I}, \end{eqnarray*} which is equivalent to \begin{eqnarray}\label{eq:pca} \max_{{\bm A}} \mbox{tr} \{{\bm A} \mathcal{C}_x {\bm A}^T \} \quad \textrm{s.t.} \quad {\bm A}{\bm A}^T = {\bm I}. \end{eqnarray} The solution to (\ref{eq:pca}), is given by ${\bm A}=[{\bf u}_1,{\bf u}_2,\ldots,{\bf u}_m]^T$, where ${\bf u}_i$ is the eigenvector of $\mathcal{C}_x$ corresponding to its $i$th largest eigenvalue. When the data lies on an $m$-dimensional hyperplane, the matrix $\mathcal{C}_x$ has only $m$ positive eigenvalues and the rest are zero. Furthermore, every $\Vc{x}_i$ belongs to $ \bar{\Vc{x}}
+\textrm{span} \{{\bf u}_1,{\bf u}_2,\ldots,{\bf u}_m\} \subseteq \mathds{R}^d$. In this case, the mapping PCA finds $f(\Vc{x})={\bm A} \Vc{x}$ is one-to-one and satisfies $\|f(\Vc{x}_i)-f(\Vc{x}_j)\| = \| {\bm A} (\Vc{x}_i - \Vc{x}_j) \| =
\| \Vc{x}_i-\Vc{x}_j \|$. Therefore, the lower-dimensional embedding preserves all the geometry information in the original dataset $\mathcal{X}$. We would like to point out that PCA can be written as \begin{eqnarray*}
\max_{\mathcal{Y}_n} \sum_{i,j} \| \Vc{y}_i-\Vc{y}_j\|^2 \quad \textrm{s.t.} \quad \Vc{y}_i = {\bm A} \Vc{x}_i~\textrm{and}~{\bm A}{\bm A}^T = {\bm I}. \end{eqnarray*}
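The PCA projection of (\ref{eq:pca}) can be sketched in a few lines. The following is an illustrative NumPy sketch under our column-data convention, not the implementation used in our experiments; the helper name \texttt{pca\_embed} is ours:

```python
import numpy as np

def pca_embed(X, m):
    """Top-m principal-component projection of (d, n) column data X.

    Returns the (m, d) projection matrix A solving the PCA problem and
    the centered embedding Y = A (X - mean). Illustrative sketch only.
    """
    xbar = X.mean(axis=1, keepdims=True)   # sample mean x-bar
    Xc = X - xbar                          # centered data
    C = Xc @ Xc.T / X.shape[1]             # empirical covariance C_x
    w, U = np.linalg.eigh(C)               # eigenvalues in ascending order
    A = U[:, ::-1][:, :m].T                # rows: top-m eigenvectors
    return A, A @ Xc
```

When the data lie exactly on an $m$-dimensional hyperplane, the returned embedding preserves all pairwise distances, as discussed above.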
\subsubsection{MDS}
Multidimensional Scaling (MDS) differs from PCA in the way the input is provided to it. While in PCA the original data $\mathcal{X}$ is provided, classical MDS requires only the set of all Euclidean pairwise distances $\{ \| \Vc{x}_i-\Vc{x}_j \| \}_{1 \le i < j \le n}$. As MDS uses only pairwise distances, the solution it finds is given up to translation and unitary transformation. Let
$\Vc{x}'_i=\Vc{x}_i-{\bf c}$; then the Euclidean distance $\|\Vc{x}'_i-\Vc{x}'_j\|$ is the same as $\|\Vc{x}_i-\Vc{x}_j\|$. Similarly, let $\Vc{U}$ be an arbitrary unitary matrix satisfying $\Vc{U}^T\Vc{U} = {\bm I}$ and define $\Vc{x}'_i=\Vc{U} \Vc{x}_i$. The distance $\|\Vc{x}'_i-\Vc{x}'_j\|$ is equal to
$\|\Vc{U}(\Vc{x}_i-\Vc{x}_j)\|$, which by the invariance of the Euclidean norm to a unitary transformation equals $\|\Vc{x}_i-\Vc{x}_j\|$. Denote the pairwise squared-distance matrix by $[\Vc{D}_2]_{ij} =
\|\Vc{x}_i-\Vc{x}_j\|^2$. By the definition of Euclidean distance, the matrix $\Vc{D}_2$ satisfies \begin{eqnarray}\label{mds:1} \Vc{D}_2= {\bf 1} \Vc{\phi}^T + \Vc{\phi} {\bf 1}^T - 2 \Vc{X}^T \Vc{X}, \end{eqnarray} where $\Vc{X} = [\Vc{x}_1,\Vc{x}_2,\ldots,\Vc{x}_n]$ and $ \Vc{\phi}=[
\|\Vc{x}_1\|^2,\|\Vc{x}_2\|^2,\ldots, \|\Vc{x}_n\|^2]^T$. To verify
(\ref{mds:1}), one can examine the $ij$-th term of $\Vc{D}_2$ and compare it with $\| \Vc{x}_i - \Vc{x}_j \|^2$. Denote the $n \times n$ centering matrix $\Vc{H} = {\bm I} - {\bf 1} {\bf 1}^T/n$. Multiplying both sides of (\ref{mds:1}) by $\Vc{H}$ on the left and on the right, together with a factor of $-\frac{1}{2}$, yields \[ -\frac{1}{2} \Vc{H} \Vc{D}_2 \Vc{H} = (\Vc{X} \Vc{H})^T (\Vc{X} \Vc{H}), \] which is the key to MDS: a factorization of $-\frac{1}{2} \Vc{H} \Vc{D}_2 \Vc{H}$ yields $\Vc{X}$ to within a translation and a unitary transformation. Consider the eigendecomposition $-\frac{1}{2} \Vc{H} \Vc{D}_2 \Vc{H} = \Vc{U} {\bm \Lambda} \Vc{U}^T$. A rank-$d$ $\Vc{X}$ can therefore be obtained as $\Vc{X} = {\bm \Lambda}_d^{\frac{1}{2}} \Vc{U}_d^T $, where ${\bm \Lambda}_d = \diag{[\lambda_1,\lambda_2,\ldots,\lambda_d]}$ and $\Vc{U}_d = [{\bf u}_1,{\bf u}_2,\ldots,{\bf u}_d]$. Note that $\Vc{X} \Vc{H}$ is a translated version of $\Vc{X}$, in which every column $\Vc{x}_i$ is translated to $\Vc{x}_i-\bar{\Vc{x}}$.
To use MDS for dimensionality reduction, we can consider a two step process. First a squared-distance matrix $\Vc{D}_2$ is obtained from the high-dimensional data $\mathcal{X}$. Then, MDS is applied to $\Vc{D}_2$ to obtain a low-dimensional ($m<d$) embedding $\Vc{X}_{m} = {\bm \Lambda}_m^{\frac{1}{2}} \Vc{U}_m^T$; the corresponding rank-$m$ approximation in the original coordinates is $\Vc{X} \Vc{U}_m \Vc{U}_m^T$. In the absence of noise, this procedure provides an affine transformation of the high-dimensional data and thus can be regarded as a linear method.
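The double-centering step above can be sketched as follows. This is an illustrative NumPy sketch of classical MDS from a squared-distance matrix; the helper name \texttt{classical\_mds} is ours:

```python
import numpy as np

def classical_mds(D2, m):
    """Recover an m-dimensional embedding from squared distances D2.

    Implements -0.5 H D2 H = (XH)^T (XH) followed by a truncated
    eigendecomposition. The output is unique only up to translation
    and a unitary transformation. Illustrative sketch only.
    """
    n = D2.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n    # centering matrix H
    B = -0.5 * H @ D2 @ H                  # Gram matrix of centered data
    w, U = np.linalg.eigh(B)
    w, U = w[::-1][:m], U[:, ::-1][:, :m]  # top-m eigenpairs
    return (U * np.sqrt(np.clip(w, 0, None))).T   # (m, n) embedding
```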
\subsection{Nonlinear dimensionality reduction}
Linear maps are limited as they cannot preserve the geometry of nonlinear manifolds.
\subsubsection{Kernel PCA} Kernel PCA is one of the first methods for dimensionality reduction of data on nonlinear manifolds. The method combines the dimensionality reduction capabilities of PCA on linear manifolds with a nonlinear embedding of data points in a higher (or even infinite) dimensional space using the ``kernel trick'' \cite{aizerman:64:arc}. In PCA, one finds the eigenvectors satisfying $\mathcal{C}_x {\bf v}_k = \lambda_k {\bf v}_k$. Since ${\bf v}_k$ can be written as a linear combination of the $\Vc{x}_i$'s: ${\bf v}_k = \sum_i \alpha_{ki} (\Vc{x}_i-\bar{\Vc{x}})$, one can replace ${\bf v}_k$ in the eigendecomposition, simplify, and obtain $\Vc{X} ({\bm K} {\bm \alpha}_k - \lambda_k {\bm \alpha}_k)=0$, where ${\bm K}_{ij}=(\Vc{x}_i-\bar{\Vc{x}})^T(\Vc{x}_j-\bar{\Vc{x}})$. Consider the mapping ${\bm \phi}: {\cal M} \to {\cal H}$ from the manifold to a Hilbert space. The ``kernel trick'' suggests replacing $\Vc{x}_i$ with ${\bm \phi}(\Vc{x}_i)$ and therefore rewriting the kernel as ${\bm K}_{ij}={\bm \phi}(\Vc{x}_i)^T{\bm \phi}(\Vc{x}_j)$. Further generalization can be made by setting ${\bm K}_{ij}=K(\Vc{x}_i,\Vc{x}_j)$ where $K(\cdot,\cdot)$ is positive semidefinite. The resulting vectors are of the form ${\bf v}_k = \sum_i \alpha_{ki} {\bm \phi}(\Vc{x}_i)$, thus implementing a nonlinear embedding.
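The kernel-trick construction above can be sketched as follows. This is an illustrative NumPy sketch; the choice of a Gaussian kernel and the helper name \texttt{kernel\_pca} are our assumptions, not part of \cite{Scholkopf:98:nc}:

```python
import numpy as np

def kernel_pca(X, m, gamma=1.0):
    """Kernel PCA with Gaussian kernel K(x, x') = exp(-gamma ||x - x'||^2).

    X : (d, n) data; returns an (m, n) nonlinear embedding. The centered
    kernel matrix plays the role of the Gram matrix K_ij in the text.
    """
    D2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    K = np.exp(-gamma * D2)                # kernel matrix
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H                         # double-centering of the kernel
    w, U = np.linalg.eigh(Kc)
    w, U = w[::-1][:m], U[:, ::-1][:, :m]  # top-m eigenpairs
    return (U * np.sqrt(np.clip(w, 0, None))).T
```

For instance, two classes lying on concentric circles, where linear PCA fails, become separable after such a nonlinear mapping.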
\subsubsection{ISOMAP}
In \cite{Tenenbaum:00:science}, Tenenbaum \emph{et al.}~find a nonlinear embedding that preserves the geodesic distance between points on the manifold rather than the Euclidean distance. Similar to MDS, where a lower dimensional embedding is found to preserve the Euclidean distances of high dimensional data, ISOMAP finds a lower dimensional embedding that preserves the geodesic distances between high-dimensional data points.
\subsubsection{Laplacian Eigenmaps} Belkin and Niyogi's Laplacian eigenmaps dimensionality reduction algorithm \cite{Belkin&Niyogi:NC03} takes a different approach. They consider a nonlinear mapping $f$ that minimizes the Laplacian \begin{eqnarray}
\arg \min_{\|f\|_{L^2(\mathcal{M})}=1} \int_{\mathcal{M}} \|\nabla f \|^2. \end{eqnarray} Since the manifold itself is not available but only data points on it are, the lower dimensional embedding is found by minimizing the graph Laplacian quadratic form \begin{eqnarray}\label{laplacian}
\sum_{i,j} w_{ij} \|\Vc{y}_i - \Vc{y}_j\|^2, \end{eqnarray} where $w_{ij}$ is the $ij$th element of the adjacency matrix, which is constructed as follows. For $k \in \mathds{N}$, a $k$-nearest neighbors graph is constructed with the points in $\mathcal{X}_n$ as the graph vertices. Each point $\Vc{x}_i$ is connected to its $k$-nearest neighboring points. Note that it suffices that either $\Vc{x}_i$ is among $\Vc{x}_j$'s $k$-nearest neighbors or $\Vc{x}_j$ is among $\Vc{x}_i$'s $k$-nearest neighbors for $\Vc{x}_i$ and $\Vc{x}_j$ to be connected.
For a fixed scale parameter $\epsilon > 0$, the weight associated with the two points $\Vc{x}_i$ and $\Vc{x}_j$ satisfies \begin{eqnarray*} w_{ij} = \left\{ \begin{array} {l c l}
\exp\left\{ - \| \Vc{x}_i - \Vc{x}_j \|^2 / \epsilon \right\} & \quad & \textrm {if $\Vc{x}_i$ and $\Vc{x}_j$ are connected} \\ 0 & \quad & \textrm{otherwise.} \end{array} \right. \end{eqnarray*}
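The graph construction and the resulting spectral embedding can be sketched as follows. This is an illustrative NumPy sketch, not Belkin and Niyogi's implementation; the helper name \texttt{laplacian\_eigenmap} is ours:

```python
import numpy as np

def laplacian_eigenmap(X, m, k=5, eps=1.0):
    """Laplacian-eigenmap embedding of (d, n) column data X.

    Builds the symmetric k-NN graph with Gaussian weights
    w_ij = exp(-||x_i - x_j||^2 / eps), then solves the generalized
    eigenproblem L u = lambda D u; the embedding uses the eigenvectors
    of the m smallest nonzero eigenvalues. Illustrative sketch only.
    """
    D2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    n = D2.shape[0]
    # connect i and j if either is among the other's k nearest neighbors
    order = np.argsort(D2, axis=1)[:, 1:k + 1]
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        adj[i, order[i]] = True
    adj = adj | adj.T
    W = np.where(adj, np.exp(-D2 / eps), 0.0)
    D = W.sum(axis=1)                      # vertex degrees (positive here)
    L = np.diag(D) - W                     # graph Laplacian
    d_isqrt = 1.0 / np.sqrt(D)
    Ln = d_isqrt[:, None] * L * d_isqrt[None, :]   # D^{-1/2} L D^{-1/2}
    w, U = np.linalg.eigh(Ln)
    return (d_isqrt[:, None] * U[:, 1:m + 1]).T    # skip the constant mode
```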
\section{Classification Constrained Dimensionality Reduction} \label{sec:ccdr}
\subsection{Statistical framework} To put the problem in a classification context, we consider the following model. Let $\mathcal{X}_n=\{\Vc{x}_1,\Vc{x}_2,\ldots,\Vc{x}_n\}$ be a set of $n$ points sampled from an $m$-dimensional submanifold $\mathcal{M} \subseteq \mathds{R}^d$. Each point $\Vc{x}_i \in \mathcal{M}$ is associated with a class label $c_i \in {\cal A}=\{0,1,2,\ldots,L\}$, where $c_i=0$ corresponds to the case of unlabeled data. We assume that pairs $(\Vc{x}_i,c_i) \in \mathcal{M} \times {\cal A}$ are drawn i.i.d.~from a joint distribution \begin{eqnarray}
P(\Vc{x},c)=p_x(\Vc{x}|c) P_c(c) = P_c(c|\Vc{x}) p_x(\Vc{x}), \end{eqnarray}
where $p_x(\Vc{x}) > 0$ and $p_x(\Vc{x}|c)>0$ (for $\Vc{x} \in \mathcal{M}$) are the marginal and the conditional probability density functions, respectively, satisfying $\int_\mathcal{M} p_x(\Vc{x})d\Vc{x} = 1$, $\int_\mathcal{M} p_x(\Vc{x}|c) d\Vc{x} = 1 $ and $P_c(c) > 0$ and $P_c(c|\Vc{x}) > 0$ are the a priori and a posteriori probability mass functions of the class label, respectively, satisfying $\sum_c P_c(c) = 1$ and
$\sum_c P_c(c|\Vc{x}) = 1$. While we treat unlabeled points of the form $(\Vc{x}_i,0)$ similarly to labeled points, we still make the following distinction. Consider the following mechanism for generating an unlabeled point. First, a class label $c \in
\{1,2,\ldots,L\}$ is generated from the labeled a priori probability mass function $P'_c(c)=P(c|c~\textrm{is labeled}) =
P_c(c)/\sum_{c'=1}^{L} P_c(c')$. Then $\Vc{x}_i$ is generated according to $p_x(\Vc{x}|c)$. To treat $c$ as an unobserved label, we marginalize $P(\Vc{x},c|c~\textrm{is labeled}) = p_x(\Vc{x}|c)P'_c(c)$ over $c$: \begin{eqnarray}
p_x(\Vc{x}|c=0)=\sum_{q=1}^L p_x(\Vc{x}|c=q) P'_c(q) =
\frac{\sum_{q=1}^L p_x(\Vc{x}|c=q) P_c(q)}{\sum_{c'=1}^L P_c(c')}. \end{eqnarray} This suggests that the conditional PDF of unlabeled points
$p_x(\Vc{x}|c=0)$ is uniquely determined by the class priors and the conditionals for labeled points. We would like to point out that this is one of the few treatments that can be offered for unlabeled points. For example, in anomaly detection, one may want to associate the unlabeled points with contaminated data, which can be represented as a density mixture of $p_x(\Vc{x}|c=0)$ and $\gamma(\Vc{x})$ (e.g., $\gamma(\Vc{x})$ uniform on $\mathcal{X}$).
In classification constrained dimensionality reduction, our goal is to obtain a lower-dimensional embedding $\mathcal{Y}_n = \{\Vc{y}_1,\Vc{y}_2,\ldots,\Vc{y}_n\}$ (where $\Vc{y}_i \in \mathds{R}^m$ with $m<d$) that preserves local geometry and that encourages clustering of points of the same class label. Alternatively, we would like to find a mapping ${\bm f}(\Vc{x},c): \mathcal{M} \times {\cal A} \to \mathds{R}^m$ for which $\Vc{y}_i ={\bm f}(\Vc{x}_i,c_i)$ that is smooth and that clusters points of the same label.
We introduce the class label indicator for data point $\Vc{x}_i$ as $c_{ki}= I( c_i = k )$, for $k=1,2,\ldots,L$ and $i=1,2,\ldots,n$. Note that when point $\Vc{x}_i$ is unlabeled, $c_{ki}=0$ for all $k$. Using the class indicator, we can write the number of points in class $k$ as $n_k = \sum_{i=1}^n c_{ki}$. If all points are labeled, then $n= \sum_{k=1}^L n_k$.
\subsection{Linear dimensionality reduction for classification} \subsubsection{LDA} Restricting the discussion to linear maps, one can extend PCA to take into account label information using the multi-class extension to Fisher's linear discriminant analysis (LDA). Instead of maximizing the trace of the data covariance matrix, LDA maximizes the ratio of the between-class-covariance to within-class-covariance. In other words, we obtain a linear transformation $\Vc{y}_i = f(\Vc{x}_i,c_i) = {\bm A} \Vc{x} _i$ with matrix ${\bm A}$ that is the solution to the following maximization: \begin{eqnarray}\label{eq:lda} \max_{{\bm A}} \mbox{tr} \{{\bm A} \mathcal{C}_B {\bm A}^T \} \quad \textrm{s.t.} \quad {\bm A} \mathcal{C}_W {\bm A}^T = {\bm I} , \end{eqnarray} where \[ \mathcal{C}_B = \frac{1}{n}\sum_{k=1}^L n_k (\bar{\Vc{x}}^{(k)} - \bar{\Vc{x}})(\bar{\Vc{x}}^{(k)} - \bar{\Vc{x}})^T \] is the between-class-covariance matrix, $\bar{\Vc{x}}^{(k)} = \sum_i c_{ki} \Vc{x}_i / n_k $ is the $k$th class center, $\bar{\Vc{x}} = \sum_i \Vc{x}_i / n $ is the center point of the dataset, \[ \mathcal{C}_W = \frac{1}{n} \sum_{k=1}^L n_k \mathcal{C}_W^{(k)}, \] is the within-class-covariance, and \[\mathcal{C}_W^{(k)} = \frac{ \sum_{i=1}^n c_{ki} (\Vc{x}_i - \bar{\Vc{x}}^{(k)}) ( \Vc{x}_i - \bar{\Vc{x}}^{(k)})^T }{ n_k} \] is the within-class-$k$ covariance matrix. In Fig.~\ref{fig:pca}, LDA selects an embedding that projects the data onto ${\bf e}_2$ since the maximum distance between classes is achieved along with a minimum class variance when projecting the data onto ${\bf e}_2$. We are interested in exploring a strategy that maximizes class separation in the lower dimensional embedding.
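The maximization (\ref{eq:lda}) reduces to a generalized eigenproblem $\mathcal{C}_B {\bf a} = \lambda \mathcal{C}_W {\bf a}$, which can be sketched as follows. This is an illustrative NumPy sketch assuming $\mathcal{C}_W$ is nonsingular; the helper name \texttt{lda\_embed} is ours:

```python
import numpy as np

def lda_embed(X, labels, m):
    """Multi-class LDA projection of (d, n) column data X.

    Whitens by C_W^{-1/2} to turn C_B a = lambda C_W a into a symmetric
    eigenproblem, keeping the m leading generalized eigenvectors. The
    rows of the returned A satisfy A C_W A^T = I. Illustrative sketch.
    """
    labels = np.asarray(labels)
    d, n = X.shape
    xbar = X.mean(axis=1, keepdims=True)
    CB = np.zeros((d, d)); CW = np.zeros((d, d))
    for k in np.unique(labels):
        Xk = X[:, labels == k]
        nk = Xk.shape[1]
        mk = Xk.mean(axis=1, keepdims=True)
        CB += (nk / n) * (mk - xbar) @ (mk - xbar).T   # between-class term
        CW += ((Xk - mk) @ (Xk - mk).T) / n            # within-class term
    wW, UW = np.linalg.eigh(CW)
    Wisqrt = UW @ np.diag(1.0 / np.sqrt(wW)) @ UW.T    # C_W^{-1/2}
    wB, UB = np.linalg.eigh(Wisqrt @ CB @ Wisqrt)
    A = (Wisqrt @ UB[:, ::-1][:, :m]).T                # m leading directions
    return A, A @ X
```

On data like Fig.~\ref{fig:pca}, with large within-class variance along ${\bf e}_1$ and class separation along ${\bf e}_2$, the leading LDA direction aligns with ${\bf e}_2$.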
\subsubsection{Marginal Fisher Analysis} Recent work \cite{yan:07:pami} presents marginal Fisher analysis (MFA), a method that minimizes the ratio between intraclass compactness and interclass separability. In its basic formulation, MFA is a linear embedding, in which $\Vc{y}_i={\bm A} \Vc{x}_i$, and considers two classes. The kernel trick is used to provide a nonlinear extension to MFA. To construct the cost function, two quantities are of interest: intraclass compactness and interclass separability. The intraclass compactness can be written as \begin{eqnarray}
\sum_{i,j} w_{ij} \|\Vc{y}_i -\Vc{y}_j\|^2, \end{eqnarray} where $w_{ij}$ is given by \begin{eqnarray} w_{ij}= (\sum_k c_{ki}c_{kj}) I( \Vc{x}_i \in N_{k_1}^+(\Vc{x}_j)~\textrm{or}~\Vc{x}_j \in N_{k_1}^+(\Vc{x}_i)) \end{eqnarray} and $N_{k}^+(\Vc{x})$ denotes the $k$-nn neighborhood of $\Vc{x}$ within the same class as $\Vc{x}$. Note that the term $\sum_k c_{ki}c_{kj}$ is one if $\Vc{x}_i$ and $\Vc{x}_j$ have the same label and zero otherwise. Similarly, the interclass separability can be written as \begin{eqnarray}
\sum_{i,j} w_{ij} \|\Vc{y}_i -\Vc{y}_j\|^2, \end{eqnarray} where $w_{ij}$ is given by \begin{eqnarray} w_{ij}= (1-\sum_k c_{ki}c_{kj}) I( \Vc{x}_i \in N_{k_2}^-(\Vc{x}_j)~\textrm{or}~\Vc{x}_j \in N_{k_2}^-(\Vc{x}_i)) \end{eqnarray} and $N_{k}^-(\Vc{x})$ denotes the $k$-nn neighborhood of $\Vc{x}$ outside the class of $\Vc{x}$.
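The two weight constructions above can be sketched as follows. This is an illustrative NumPy sketch; the helper name \texttt{mfa\_weights} and the symmetrization convention are our assumptions, not details of \cite{yan:07:pami}:

```python
import numpy as np

def mfa_weights(X, labels, k1=3, k2=3):
    """Intraclass (Wc) and interclass (Wp) MFA weight matrices.

    Wc[i, j] = 1 when x_i and x_j share a label and one lies in the
    other's within-class k1-NN set N^+; Wp[i, j] = 1 when their labels
    differ and one lies in the other's out-of-class k2-NN set N^-.
    Illustrative sketch only.
    """
    labels = np.asarray(labels)
    n = X.shape[1]
    D2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    same = labels[:, None] == labels[None, :]
    Wc = np.zeros((n, n)); Wp = np.zeros((n, n))
    for i in range(n):
        d = D2[i].astype(float).copy()
        d[i] = np.inf                               # exclude self
        din = np.where(same[i], d, np.inf)          # same-class candidates
        dout = np.where(~same[i], d, np.inf)        # other-class candidates
        for j in np.argsort(din)[:k1]:
            if np.isfinite(din[j]):
                Wc[i, j] = Wc[j, i] = 1.0
        for j in np.argsort(dout)[:k2]:
            if np.isfinite(dout[j]):
                Wp[i, j] = Wp[j, i] = 1.0
    return Wc, Wp
```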
\subsection{Dimensionality reduction for classification on nonlinear manifolds} Here, we review the CCDR algorithm \cite{costa:05:icassp} and its extension to multi-class classification.
To cluster lower dimensional embedded points of the same label, we associate each class with a class center, namely $\Vc{z}_k \in \mathds{R}^m$. We construct the following cost function:
\Eq{
\label{cost:function}
J(\mathcal{Z}_L,\mathcal{Y}_n) = \sum_{ki} c_{ki} \,
\| \Vc{z}_k -\Vc{y}_i \|^2 + \frac{\beta}{2} \sum_{ij} w_{ij} \, \| \Vc{y}_i -\Vc{y}_j \|^2 , }
where $\mathcal{Z}_L = \{\Vc{z}_1,\ldots,\Vc{z}_L\}$ and $\beta \geq 0$ is a regularization parameter. There are two terms on the RHS of (\ref{cost:function}). The first term encourages the concentration of points of the same label around their respective class center. The second term is as in (\ref{laplacian}), i.e., as in Laplacian eigenmaps \cite{Belkin&Niyogi:NC03}, and controls the smoothness of the embedding over the manifold. Large values of $\beta$ produce an embedding that ignores class labels, while small values of $\beta$ produce an embedding that ignores the manifold structure: the training data points then tend to collapse into the class centers, allowing many classifiers to produce perfect classification on the training data without any control of the generalization error (i.e., the classification error on unlabeled data). Our goal is to find $\mathcal{Z}_L$ and $\mathcal{Y}_n$ that minimize the cost function in (\ref{cost:function}).
Let $\tensor{C}$ be the $L \times n$ class membership matrix with $c_{ki}$ as its $ki$-th element, $\tensor{Z} = [\Vc{z}_1 , \ldots , \Vc{z}_L , \Vc{y}_1, \ldots ,\Vc{y}_n]$, and $\tensor{0}$ be the $L \times L$ all zeroes matrix and \[
{\bm G} = \left[
\begin{array}{cc}
\tensor{0} & \tensor{C} \\
\tensor{C}^T & \beta \tensor{W}
\end{array}
\right] \ . \]
Minimization over $\tensor{Z}$ of the cost function in (\ref{cost:function}) can be expressed as
\Eq{ \label{E:Optimization_Extended}
\min_{\footnotesize{\begin{array}{c} \tensor{Z} \tensor{D} \Vc{1} = \Vc{0} \\
\tensor{Z} \tensor{D} \tensor{Z}^T = \tensor{I} \end{array} }} \mbox{tr} \left( \tensor{Z} \tensor{L} \tensor{Z}^T \right) \ , }
where $\tensor{D}=\diag{{\bm G} \Vc{1}}$ and $\tensor{L}=\tensor{D}-{\bm G}$. To prevent the lower-dimensional points and the class centers from collapsing into a single point at the origin, the regularization $ \tensor{Z} \tensor{D} \tensor{Z}^T = \tensor{I} $ is introduced. The second constraint $\tensor{Z} \tensor{D} \Vc{1} = \Vc{0}$ is constructed to prevent a degenerate solution, e.g., $\Vc{z}_1=\ldots=\Vc{z}_L=\Vc{y}_1=\ldots=\Vc{y}_n$. This solution may occur since $\Vc{1}$ is in the null space of the Laplacian operator $\tensor{L}$, i.e., $\tensor{L}\Vc{1}=\Vc{0}$. The solution to (\ref{E:Optimization_Extended}) can be expressed in terms of the following generalized eigendecomposition \begin{eqnarray}\label{eq:eig:1} \tensor{L}^{(n)} {\bf u}_k^{(n)} = \lambda_k^{(n)} \tensor{D}^{(n)} {\bf u}_k^{(n)}, \end{eqnarray} where $\lambda_k^{(n)}$ is the $k$th eigenvalue and ${\bf u}_k^{(n)}$ is its corresponding eigenvector. Note that we include $^{(n)}$ to emphasize the dependence on the $n$ data points. Without loss of generality we assume $\lambda_1 \le \lambda_2 \le \ldots \le \lambda_{n+L}$. Specifically, matrix $\tensor{Z}$ is given by $[{\bf u}_2, {\bf u}_3, \ldots, {\bf u}_{m+1}]^T$, where the first $L$ columns correspond to the coordinates of the class centers, i.e., $\Vc{z}_k=\tensor{Z} {\bf e}_k$, and the following $n$ columns determine the embedding of the $n$ data points, i.e., $\Vc{y}_t=\tensor{Z} {\bf e}_{L+t}$. We use ${\bf e}_i$ to denote the canonical vector such that $[{\bf e}_i]_{s}= 1$ for element $s=i$ and zero otherwise.
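The construction of ${\bm G}$, $\tensor{D}$, $\tensor{L}$ and the solution of (\ref{E:Optimization_Extended}) can be sketched as follows. This is an illustrative NumPy sketch assuming every vertex of the augmented graph has positive degree (e.g., a connected graph); the helper name \texttt{ccdr\_embed} is ours:

```python
import numpy as np

def ccdr_embed(W, C, m, beta=1.0):
    """Solve the CCDR generalized eigenproblem L u = lambda D u.

    W : (n, n) symmetric adjacency weights, C : (L, n) class-membership
    matrix (C[k, i] = 1 iff x_i carries label k + 1), m : embedding
    dimension. Returns class centers Z (m, L) and embedding Y (m, n).
    Illustrative sketch only.
    """
    Lc, n = C.shape
    G = np.zeros((Lc + n, Lc + n))
    G[:Lc, Lc:] = C                       # centers <-> labeled points
    G[Lc:, :Lc] = C.T
    G[Lc:, Lc:] = beta * W                # beta-weighted data graph
    d = G.sum(axis=1)                     # degrees: D = diag(G 1)
    Lap = np.diag(d) - G                  # Laplacian L = D - G
    d_isqrt = 1.0 / np.sqrt(d)
    Ln = d_isqrt[:, None] * Lap * d_isqrt[None, :]   # D^{-1/2} L D^{-1/2}
    w, U = np.linalg.eigh(Ln)             # ascending eigenvalues
    ZY = (d_isqrt[:, None] * U[:, 1:m + 1]).T        # rows u_2 .. u_{m+1}
    return ZY[:, :Lc], ZY[:, Lc:]
```

The first $L$ columns of the generalized eigenvectors give the class centers and the remaining $n$ columns give the embedding, as described above.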
\subsection{Classification and computational complexity}
In classification, the goal is to find a classifier $a_x(\Vc{x}):\mathcal{M} \to {\cal A}$ based on the training data that minimizes the generalization error: \begin{eqnarray}\label{general} \hat{a} = \arg \min_{a \in {\cal F}} E[ I(a (\Vc{x}) \neq c)], \end{eqnarray} where the expectation is taken w.r.t.~the pair $(\Vc{x}, c)$. Since only samples from the joint distribution of $\Vc{x}$ and $c$ are available, we replace the expectation with a sample average w.r.t.~the training data $\frac{1}{n} \sum_{i=1}^n I(a(\Vc{x}_i) \neq c_i)$. During the minimization, we search over a set of classifiers $a_x(\Vc{x}):\mathcal{M} \subseteq \mathds{R}^d \to {\cal A}$, which is defined over a domain in $\mathds{R}^d$. In our framework, we suggest replacing a classifier $a_x(\Vc{x}):\mathcal{M} \subseteq \mathds{R}^d \to {\cal A}$ with dimensionality reduction via CCDR $f(\Vc{x}): \mathcal{M} \subseteq \mathds{R}^d \to \mathds{R}^m$ followed by a classifier on the lower-dimensional space $a_y({\bm y}): \mathds{R}^m \to {\cal A}$, i.e., $a_x = a_y \circ f$. The first advantage is that the search space for the minimization in (\ref{general}), defined over a $d$-dimensional space, can be reduced to an $m$-dimensional space. This results in significant savings in computational complexity if the complexity associated with the process of obtaining $f$ can be made low. In general, the classifier set ${\cal F}$ has to be rich enough to attain a low generalization error. The other advantage of our method lies in the fact that CCDR is designed to cluster points of the same label, thus allowing for a linear classifier or other low complexity classifiers. Therefore, further reduction in the size of the class ${\cal F}$ can be achieved in addition to the reduction due to a lower-dimensional domain. To classify a new data point, one has to apply CCDR to it.
If this is done by brute force, the new point is added to the set of training points with no label, a new weight matrix $W'$ is formed, and an eigendecomposition is carried out.
When performing CCDR, each of the $n(n-1)/2$ terms of the form $\| \Vc{x}_i - \Vc{x}_j \|^2$ requires $O(d)$ multiplications and summations, leading to a computational complexity of order $O(dn^2)$. Construction of a $k$-nearest neighbors graph requires $O(kn)$ comparisons per point and therefore a total of $O(kn^2)$. The total number of operations involved in constructing the graph is therefore $O((k+d)n^2)$. Next, an eigendecomposition is applied to $W'$, which is an $(L+n) \times (L+n)$ matrix; the associated computational complexity is $O(n^3)$. Therefore, the overall computational complexity of CCDR is $O(n^3)$. As explained earlier, this holds for both training and classification. We are interested in reducing the computational complexity of both training the classifier and classifying new points. For that purpose, we consider an out-of-sample extension of CCDR.
\section{Out-of-Sample Extension}
We start by rearranging the generalized eigendecomposition of the Laplacian in (\ref{eq:eig:1}) as \begin{eqnarray}\label{eq:eig:2} {\bm G}^{(n)} {\bf u}_l^{(n)} = (1-\lambda_l^{(n)}) \tensor{D}^{(n)} {\bf u}_l^{(n)}, \end{eqnarray} and recall that ${\bf u}_l^{(n)}=[{\bm z}_1(l),{\bm z}_2(l),\ldots,{\bm z}_L(l), \Vc{y}_1(l), \Vc{y}_2(l), \ldots, \Vc{y}_n(l)]^T$. Since we consider an $m$-dimensional embedding, we are only interested in the eigenvectors ${\bf u}_2,\ldots,{\bf u}_{m+1}$.
The $(L+i)$th equation (row), $i=1,2,\ldots,n$, of the eigendecomposition in (\ref{eq:eig:2}) can be written as \begin{eqnarray}\label{eq:y:sample} \Vc{y}_i^{(n)}(l)=\frac{1}{1- \lambda_l^{(n)}} \frac{ \sum_k c_{ki}{\bm z}_k^{(n)}(l)+ \beta \sum_j K(\Vc{x}_i,\Vc{x}_j) \Vc{y}_j^{(n)}(l)} { \sum_k c_{ki}+ \beta \sum_j K(\Vc{x}_i,\Vc{x}_j)}. \end{eqnarray} Similarly, the $k$th equation (row) of (\ref{eq:eig:2}) for $k=1,2,\ldots,L$ is given by \begin{eqnarray}\label{eq:z:sample} {\bm z}_k^{(n)}(l)= \frac{\sum_i c_{ki} \Vc{y}_i^{(n)}(l)}{(1-\lambda_l^{(n)}) n_k}. \end{eqnarray} Our interest is in finding a mapping $\Vc{f}(\Vc{x},c)$ that, in addition to mapping every $\Vc{x}_i$ to $\Vc{y}_i$, can perform an out-of-sample extension, i.e., is well-defined outside the set $\mathcal{X}$. We consider the following out-of-sample extension expression \begin{eqnarray}\label{eq:y:sample:oos} \Vc{f}_l^{(n)}(\Vc{x},c)=\frac{1}{1- \lambda_l^{(n)}} \frac{ I(c\neq 0) {\bm z}_c^{(n)}(l)+ \beta \sum_j K(\Vc{x},\Vc{x}_j) \Vc{y}_j^{(n)}(l)} { I(c\neq 0)+ \beta \sum_j K(\Vc{x},\Vc{x}_j)}, \end{eqnarray} where ${\bm z}^{(n)}$ is the same as in (\ref{eq:z:sample}). This formula can be explained as follows. First, the lower dimensional embedding $\Vc{y}^{(n)}_1,\ldots, \Vc{y}_n^{(n)}$ and the class centers $\Vc{z}_1^{(n)},\ldots,\Vc{z}_L^{(n)}$ are obtained through the eigendecomposition in (\ref{eq:eig:2}). Then, the embedding outside the sample set $\mathcal{X}$ is calculated via (\ref{eq:y:sample:oos}). Comparing $\Vc{f}_l^{(n)}(\Vc{x}_i,c_i)$ evaluated through (\ref{eq:y:sample:oos}) with (\ref{eq:y:sample}), we have $\Vc{f}_l^{(n)}(\Vc{x}_i,c_i) = \Vc{y}_i^{(n)}(l)$, i.e., the out-of-sample extension coincides with the solution we already have for the mapping at the data points $\mathcal{X}$.
Moreover, using this result one can replace all $\Vc{y}_i^{(n)}$ with $\Vc{f}_l^{(n)}(\Vc{x}_i,c_i)$ in (\ref{eq:y:sample:oos}) and obtain the following generalization of the eigendecomposition in (\ref{eq:eig:2}): \begin{eqnarray}\label{eq:f:eig} \Vc{f}_l^{(n)}(\Vc{x},c)=\frac{1}{1- \lambda_l^{(n)}} \frac{ I(c\neq 0) {\bm z}_c^{(n)}(l)+ \beta \sum_j K(\Vc{x},\Vc{x}_j) \Vc{f}_l^{(n)}(\Vc{x}_j,c_j) }{ I(c\neq 0)+ \beta \sum_j K(\Vc{x},\Vc{x}_j)}, \end{eqnarray} and \begin{eqnarray}\label{eq:z:f:eig} {\bm z}_k^{(n)}(l)= \frac{\sum_i c_{ki} \Vc{f}_l^{(n)}(\Vc{x}_i,c_i)}{(1-\lambda_l^{(n)}) n_k}. \end{eqnarray} In \cite{Bengio:04}, it is proposed that if the out-of-sample solution to the eigendecomposition problem associated with kernel PCA converges, it is given by the solution to the asymptotic equivalent of the eigendecomposition. Using similar machinery, we can provide an analogous result suggesting that if $\Vc{f}_l^{(n)}(\Vc{x},c) \to \Vc{f}_l^{(\infty)}(\Vc{x},c)$ as $n \to \infty$, then the asymptotic equivalents of (\ref{eq:f:eig}) and (\ref{eq:z:f:eig}) should provide the solution to the limit of $\Vc{f}_l^{(n)}(\Vc{x},c)$. The asymptotic analogues of (\ref{eq:y:sample}) and (\ref{eq:z:sample}) are described in the following. The mapping for labeled data $f_l(\Vc{x},c): \mathcal{M} \times {\cal A} \to \mathds{R}$ for $c=0,1,2,\ldots,L$ equivalent to equation (\ref{eq:y:sample}) is \begin{eqnarray}\label{map:f} f_l(\Vc{x},c)=\frac{1}{1-\lambda_l} \frac{ I(c \neq 0) {\bm z}_c(l)+ \beta' \sum_{c'=0}^L \int_{\mathcal{M}} K(\Vc{x},\Vc{x}') f_l(\Vc{x}',c') P(\Vc{x}',c') d\Vc{x}'} { I(c \neq 0) + \beta' \int_{\mathcal{M}} K(\Vc{x},\Vc{x}') p(\Vc{x}') d\Vc{x}'} \end{eqnarray} where ${\bm z}_c(l)$ for $c=1,2,\ldots,L$ is the equivalent of (\ref{eq:z:sample}) \begin{eqnarray}
{\bm z}_c(l)= \frac{\int_{\mathcal{M}} f_l(\Vc{x},c) p(\Vc{x}|c) d \Vc{x} } {1-\lambda_l}, \end{eqnarray} and $\beta'=\beta n$. Since we are interested in an $m$-dimensional embedding, we consider only $l=1,2,\ldots,m$, i.e., the eigenvectors that correspond to the $m$ smallest eigenvalues. To guarantee that the relevant eigenvectors are unique (up to a multiplicative constant), we require $\lambda_1<\lambda_2< \cdots < \lambda_{m+1} \le \lambda_{m+2} \le \cdots \le \lambda_{n}$.
The out-of-sample extension given by (\ref{eq:y:sample:oos}) can be useful in a couple of scenarios. The first is the classification of new unlabeled samples. We assume that $\{\Vc{y}_j\}_{j=1}^n$, $\{{\bm z}_k\}_{k=1}^L$, and $\{ \lambda_l \}_{l=1}^m$ are already obtained based on labeled (or partially labeled) training data and we would like to embed a new unlabeled data point. We consider using (\ref{eq:y:sample:oos}) with $c=0$, i.e., we can use $\Vc{f}(\Vc{x},0)$ to map a new sample $\Vc{x}$ to $\mathds{R}^m$. The immediate advantage is the savings in computational complexity, as we avoid performing an additional eigendecomposition that includes the new point.
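The out-of-sample formula (\ref{eq:y:sample:oos}) can be sketched in code as follows. This is a minimal illustration under our own assumptions (a Gaussian kernel; all function and variable names are ours), not the authors' implementation.

```python
import numpy as np

def gaussian_kernel(x, X, sigma=1.0):
    """K(x, x_j) for all training points x_j (rows of X)."""
    return np.exp(-np.sum((X - x) ** 2, axis=1) / (2.0 * sigma ** 2))

def oos_embed(x, X, Y, lam, beta, z=None, c=0):
    """Out-of-sample extension f(x, c) of (eq:y:sample:oos).
    X is (n, d) training inputs, Y is (n, m) their embeddings,
    lam is (m,) eigenvalues.  c = 0 means unlabeled; c = k > 0 adds
    the class-center term z[k-1] (an (m,) vector) to the average."""
    k = gaussian_kernel(x, X)
    num = beta * (k @ Y)          # beta * sum_j K(x, x_j) y_j(l)
    den = beta * k.sum()          # beta * sum_j K(x, x_j)
    if c != 0:                    # indicator I(c != 0)
        num = num + z[c - 1]
        den = den + 1.0
    return num / ((1.0 - lam) * den)
```

With `c=0` this maps a new unlabeled sample to $\mathds{R}^m$ without redoing the eigendecomposition, which is the use case described above.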
The second scenario involves the out-of-sample extension for labeled data. The goal here is not to classify the data, since the label is already available. Instead, we are interested in the training phase when $n$ is so large that the eigendecomposition is infeasible: a large amount of labeled training data is available, but due to the heavy computational complexity of the eigendecomposition in (\ref{eq:eig:1}) (or (\ref{eq:eig:2})), the data cannot be processed at once. We are therefore interested in developing a resampling method, which integrates $\Vc{f}^{(n)}_l(\Vc{x}, c)$ obtained for different subsamples of the complete data set.
\subsection{Classification Algorithms}
We consider three widespread algorithms: $k$-nearest neighbors, linear classification, and neural networks. A standard implementation of $k$-nearest neighbors was used, see \cite[p. 415]{hastie:00}. The linear classifier we implemented is given by \begin{eqnarray*} \hat{c}({\Vc{y}}) & = &
\arg \max_{c \in \{ {\cal A}_1,\ldots {\cal A}_L\}}
\Vc{y}^T {\bm \alpha}^{(c)} + \alpha_0^{(c)}\\
\bigl[
{\bm \alpha}^{({\cal A}_k)},
\alpha_0^{({\cal A}_k)}
\bigr]
& = & \arg \min_{[{\bm \alpha} , \alpha_0]} \sum_{i=1}^n (\Vc{y}_i^T {\bm \alpha} + \alpha_0 - c_{ki})^2, \end{eqnarray*} for $k=1,\ldots,L$. The neural network we implemented is a three-layer network with $d$ elements in the input layer, $2d$ elements in the hidden layer, and $6$ elements in the output layer (one for each class). Here $d$ was selected via the standard PCA procedure as the smallest dimension that explains $99.9\%$ of the energy of the data. A gradient method with 2000 iterations was used to train the network coefficients. The neural network is significantly more computationally burdensome than either the linear or the $k$-nearest neighbors classification algorithms.
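The one-vs-all least-squares linear classifier above can be sketched as follows. This is our own minimal illustration of the stated optimization, not the authors' code; the names are hypothetical.

```python
import numpy as np

def fit_linear(Y, C):
    """Least-squares fit of (alpha^(k), alpha0^(k)) for each class k.
    Y is (n, m) embedded points; C is (n, L) 0/1 class indicators.
    Appending a column of ones folds the intercept into the fit."""
    A = np.hstack([Y, np.ones((Y.shape[0], 1))])
    W, *_ = np.linalg.lstsq(A, C, rcond=None)   # W has shape (m + 1, L)
    return W[:-1], W[-1]                        # alphas, intercepts

def classify(y, alphas, alpha0s):
    """arg max over classes of y^T alpha^(c) + alpha0^(c)."""
    return int(np.argmax(y @ alphas + alpha0s))
```

On a toy separable one-dimensional embedding, the fitted scores pick the correct class on either side of the class boundary.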
\subsection{Data Description}
In this section, we examine the performance of the classification algorithms on the benchmark classification problem provided by the Landsat MSS satellite imagery database \cite{sat:data}. Each sample point consists of the intensity values of one pixel and its 8 neighboring pixels in 4 different spectral bands. The training data consist of 4435 36-dimensional points, of which 1072 are labeled as 1) red soil, 479 as 2) cotton crop, 961 as 3) grey soil, 415 as 4) damp grey soil, 470 as 5) soil with vegetation stubble, and 1038 as 6) very damp grey soil. The test data consist of 2000 36-dimensional points, of which 461 are labeled as 1) red soil, 224 as 2) cotton crop, 397 as 3) grey soil, 211 as 4) damp grey soil, 237 as 5) soil with vegetation stubble, and 470 as 6) very damp grey soil. In the following, each classifier is trained on the training data and evaluated on the entire test data. In Table~\ref{table:performance}, we present the ``best case'' performance of the neural network, the linear classifier, and $k$-nearest neighbors in three cases: no dimensionality reduction, dimensionality reduction via PCA, and dimensionality reduction via CCDR. The table presents the minimum probability of error achieved by varying the tuning parameters of the classifiers. The benefit of using CCDR is obvious, and we are prompted to further evaluate the performance gains attained using CCDR.
\begin{table}[htb!]
\begin{tabular}{l|r|r|r}
& Neural Net. & Lin. & $k$-nearest neigh. \\ \hline
No dim. reduc. & 83 \% & 22.7 \% & 9.65 \% \\
PCA & 9.75 \% & 23 \% & 9.35 \% \\
CCDR & 8.95 \% & 8.95 \% & 8.1 \% \end{tabular} \caption{Classification error probability}
\label{table:performance} \end{table}
\subsection{Regularization Parameter $\beta$}\label{sec:beta}
As mentioned earlier, the CCDR regularization parameter $\beta$ controls the contribution of the label information versus the contribution of the geometry described by the sample. We apply CCDR to the 36-dimensional data to create a 14-dimensional embedding while varying $\beta$ over a range of values. For justification of our choice of $d=14$ dimensions see Section \ref{sec:dim}. In the process of computing the weights $w_{ij}$ for the algorithm, we use $k$-nearest neighbors with $k=4$ to determine the local neighborhood. Fig.~\ref{fig:vs:beta} shows the classification error probability (dotted line) for the linear classifier vs. $\beta$ after preprocessing the data using CCDR with $k=4$ and dimension 14.
We observe that for a large range of $\beta$ the average classification error probability is greater than $0.09$ but smaller than $0.095$. This performance competes with the performance of $k$-nearest neighbors applied to the high-dimensional data, which is presented in \cite{hastie:00} as the leading classifier for this benchmark problem. Another observation is that for small values of $\beta$ (i.e., $\beta<0.1$) the probability of error is constant. For such small values of $\beta$, classes in the lower-dimensional embedding are well-separated and well-concentrated around the class centers. Therefore, the linear classifier yields perfect classification on the training set, and a fairly low constant probability of error on the test data is attained for low values of $\beta$. When $\beta$ is increased, we notice an increase in the classification error probability. This is due to the fact that the training data become nonseparable by any linear classifier as $\beta$ increases.
We perform a similar study of classification performance for $k$-nearest neighbors. In Fig.~\ref{fig:vs:beta}, the classification error probability is plotted (dashed line) vs. $\beta$. Here, we observe that an average error probability of $0.086$ can be achieved for $\beta\approx 0.5$. Therefore, $k$-nearest neighbors preceded by CCDR outperforms the straightforward $k$-nearest neighbors algorithm. We also observe that as $\beta$ decreases, the probability of error increases. This can be explained by the ability of $k$-nearest neighbors to utilize local information, i.e., local geometry. This information is discarded when $\beta$ is decreased.
We conclude that CCDR can generate lower-dimensional data that is useful for global classifiers, such as the linear classifier, by using a small value of $\beta$, and also for local classifiers, such as $k$-nearest neighbors, by using a larger value of $\beta$ and thus preserving local geometry information.
\begin{figure}
\caption{Probability of incorrect classification vs. $\beta$
for a linear classifier (dotted line $\circ$) and
for the $k$-nearest neighbors algorithm (dashed line $\diamond$) preprocessed by CCDR.
$80\%$ confidence intervals are presented as $\times$ for the linear classifier and as $+$
for the $k$-nearest neighbors algorithm.}
\label{fig:vs:beta}
\end{figure}
\subsection{Dimension Parameter}\label{sec:dim}
\begin{figure}
\caption{Probability of incorrect classification vs. CCDR's
dimension for a linear classifier (dotted line $\circ$) and
for the $k$-nearest neighbors algorithm (dashed line $\diamond$) preprocessed by CCDR.
$80\%$ confidence intervals are presented as $\times$ for the linear classifier and as $+$
for the $k$-nearest neighbors algorithm.}
\label{fig:vs:dim}
\end{figure}
While the data points in $\mathcal{X}_n$ may lie on a manifold of a particular dimension, the actual dimension required for classification may be smaller. Here, we examine classification performance as a function of the CCDR dimension. Using the entropic graph dimension estimation algorithm in \cite{costa:04:tsp}, we obtain the following estimated dimension for each class: \begin{center}
\begin{tabular}{|c ||c |c |c |c |c |c|} \hline class & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline \hline dimension & 13 & 7 & 13 & 10 & 6 & 13 \\ \hline \end{tabular}
\end{center} Therefore, if an optimal nonlinear embedding of the data could be found, we suspect that a dimension greater than $13$ may not yield significant improvement in classification performance. Since CCDR does not necessarily yield an optimal embedding, we choose CCDR embedding dimension as $d=14$ in Section \ref{sec:beta}.
In Fig.~\ref{fig:vs:dim}, we plot the classification error probability (dotted line) vs. CCDR dimension, with its confidence interval, for the linear classifier. We observe a decrease in error probability as the dimension increases. When the CCDR dimension is greater than $5$, the error probability seems fairly constant. This is an indication that a CCDR dimension of $5$ is sufficient for classification if one uses the linear classifier with $\beta=0.5$, i.e., the linear classifier cannot exploit the additional geometric information available in higher dimensions.
We also plot the classification error probability (dashed line) vs. CCDR dimension, with its confidence interval, for the $k$-nearest neighbors classifier. Generally, we observe a decrease in error probability as the dimension increases. When the CCDR dimension is greater than $5$, the error probability seems fairly constant. Already at CCDR dimension three, the classification error is below $0.1$. The minimum probability of error, on the other hand, is obtained at CCDR dimensions 12--14. This is in remarkable agreement with the dimension estimate of $13$ obtained using the entropic graph algorithm of \cite{costa:04:tsp}.
\subsection{CCDR's $k$-Nearest Neighbors Parameter}
\begin{figure}
\caption{Probability of incorrect classification vs. CCDR's
$k$-nearest neighbors parameter for a linear classifier (dotted line $\circ$) and
for the $k$-nearest neighbors algorithm (dashed line $\diamond$) preprocessed by CCDR.
$80\%$ confidence intervals are presented as $\times$ for the linear classifier and as $+$
for the $k$-nearest neighbors algorithm.}
\label{fig:vs:k}
\end{figure}
The last parameter we examine is the CCDR's $k$-nearest neighbors parameter. In general, as $k$ increases non-local distances are included in the lower-dimensional embedding. Hence, very large $k$ prevents the flexibility necessary for dimensionality reduction on (globally) non-linear (but locally linear) manifolds.
In Fig.~\ref{fig:vs:k}, the classification probability of error for the linear classifier (dotted line) is plotted vs. the CCDR's $k$-nearest neighbors parameter. A minimum is obtained at $k=3$ with probability of error of $0.092$. The classification probability of error for $k$-nearest neighbors (dashed line) is plotted vs. the CCDR's $k$-nearest neighbors parameter. A minimum is obtained at $k=4$ with probability of error of $0.086$.
\section{Conclusion} \label{sec:conclusion}
In this paper, we presented the CCDR algorithm for multiple classes. We examined the performance of various classification algorithms applied after CCDR to the Landsat MSS imagery dataset. We showed that for a linear classifier, decreasing $\beta$ yields improved performance, while for a $k$-nearest neighbors classifier, increasing $\beta$ yields improved performance. We demonstrated that both classifiers perform better in the much lower-dimensional CCDR embedding space than when applied to the original high-dimensional data. We also explored the effect of the parameter $k$ in the $k$-nearest neighbors construction of the CCDR weight matrix on classification performance. CCDR allows reduced-complexity classifiers, such as the linear classifier, to perform better than more complex classifiers applied to the original data. We are currently pursuing an out-of-sample extension to the algorithm that does not require rerunning CCDR on test and training data to classify new test points.
\end{document}
\begin{document}
\title{A note on saturation for Berge-$G$ hypergraphs}
\author{Maria Axenovich\thanks{Karlsruhe Institute of Technology, Karlsruhe, Germany}\and Christian Winter\thanks{Karlsruhe Institute of Technology, Karlsruhe, Germany}\thanks{Research supported in part by Talenx stipendium.}}
\maketitle
\begin{abstract} For a graph $G=(V,E)$, a hypergraph $H$ is called {\it Berge-$G$} if there is a hypergraph $H'$, isomorphic to $H$, so that $V(G)\subseteq V(H')$ and there is a bijection $\phi: E(G) \rightarrow E(H')$ such that for each $e\in E(G)$, $e \subseteq \phi(e)$. The set of all Berge-$G$ hypergraphs is denoted $\ensuremath{\mathcal{B}}(G)$.
A hypergraph $H$ is called Berge-$G$ {\it saturated} if it does not contain any subhypergraph from $\ensuremath{\mathcal{B}}(G)$, but adding any new hyperedge of size at least $2$ to $H$ creates such a subhypergraph.
Since each Berge-$G$ hypergraph contains $|E(G)|$ hyperedges, and adding a single hyperedge to a saturated hypergraph must create a Berge-$G$ subhypergraph, each Berge-$G$ saturated hypergraph must have at least $|E(G)|-1$ hyperedges.
We show that for each graph $G$ that is not a certain star and for any $n\geq |V(G)|$, there are Berge-$G$ saturated hypergraphs on $n$ vertices and exactly $|E(G)|-1$ hyperedges. This solves exactly the problem of finding a saturated hypergraph with the smallest number of hyperedges.
\end{abstract}
\section{Introduction}
For a graph $G=(V,E)$, a hypergraph $H$ is called {\it Berge-$G$} if there is a hypergraph $H'$, isomorphic to $H$, so that $V(G)\subseteq V(H')$ and there is a bijection $\phi: E(G) \rightarrow E(H')$ such that for each $e\in E(G)$, $e \subseteq \phi(e)$. The set of all Berge-$G$ hypergraphs is denoted $\ensuremath{\mathcal{B}}(G)$.
Here, for a graph or a hypergraph $F$, we shall always denote the vertex set of $F$ as $V(F)$ and the edge set of $F$ as $E(F)$. A copy of a graph $F$ in a graph $G$ is a subgraph of $G$ isomorphic to $F$. When clear from context, we shall drop the word ``copy" and just say that there is an $F$ in $G$.
Several classical questions regarding Berge-$G$ hypergraphs have been considered. Among those are extremal numbers for Berge-$G$ hypergraphs measuring the largest number of hyperedges or the largest weight of hypergraphs on $n$ vertices that contain no subhypergraph from $\ensuremath{\mathcal{B}}(G)$, see for example \cite{GP, G, GMT, PTTW}. In addition, Ramsey numbers for Berge-$G$ hypergraphs have been considered in \cite{AG, GYS, GYLSS}.
In this paper, we consider a saturation problem. Let $\ensuremath{\mathcal{F}}$ be a class of hypergraphs with edges of size at least two. A hypergraph $\ensuremath{\mathcal{H}}$ is called $\ensuremath{\mathcal{F}}$ {\it saturated} if it does not contain any subhypergraph isomorphic to a member of $\ensuremath{\mathcal{F}}$, but adding any new hyperedge of size at least $2$ to $\ensuremath{\mathcal{H}}$ creates such a subhypergraph.
The saturation problem for families of $k$-uniform hypergraphs has been treated by Pikhurko \cite{P}, see also \cite{P1}. Pikhurko \cite{P} proved, in particular, that for any $k$-uniform hypergraph $G$ there is an $n$-vertex $k$-uniform hypergraph $H$ that is $\{G\}$ saturated and has $O(n^{k-1})$ edges. This extends a result of K\'aszonyi and Tuza \cite{KT} who proved this fact for $k=2$, i.e., for graphs. See also a survey of Faudree et al. \cite{FFS}. Here $\{G\}$ saturated means that $H$ has no subhypergraph isomorphic to $G$ but adding any new hyperedge of size $k$ creates such a subhypergraph. This result is asymptotically tight for some $G$. The determination of a smallest size for $\{G\}$-saturated hypergraphs remains open in general. In the same setting of $k$-uniform hypergraphs, English et al. \cite{EGMT} proved that there are $\ensuremath{\mathcal{B}}_k(G)$ saturated hypergraphs on $n$ vertices and $O(n)$ hyperedges, where $\ensuremath{\mathcal{B}}_k(G)$ is the set of all $k$-uniform Berge-$G$ hypergraphs, $3\leq k\leq 5$. See also English et al. \cite{EGKMS}, for Berge saturation results on some special graphs. \\
We restrict our attention to the non-uniform case and Berge-$G$ hypergraphs.
For $n\geq |V(G)|$, let the {\it saturation number} for a Berge-$G$ hypergraph be defined as
$${\rm sat}(n, \ensuremath{\mathcal{B}}(G))= \min \{|E(\ensuremath{\mathcal{H}})|: ~ \ensuremath{\mathcal{H}} \mbox{ is a } \ensuremath{\mathcal{B}}(G) \mbox{ saturated hypergraph on } n \mbox{ vertices} \}.$$
Observe that for any nontrivial graph $G$, $${\rm sat}(n, \ensuremath{\mathcal{B}}(G))\geq |E(G)|-1.$$
Since no Berge-$G$ hypergraph has hyperedges of sizes less than $2$, we can assume that all hypergraphs considered have hyperedges of sizes at least $2$. We further assume that graphs considered have no isolated vertices. The following is the main result of this paper:
\begin{theorem}\label{thm:main}
Let $G=(V, E)$ be a graph with no isolated vertices, $n\geq |V(G)|$, and $m=|E(G)|-1$. Then $$ {\rm sat}(n, \ensuremath{\mathcal{B}}(G))= \begin{cases}
|E(G)|, & \mbox{ if } G \mbox{ is a star on at least four edges},\\
|E(G)| -1, & \mbox{ otherwise.} \end{cases}$$ Moreover, if $G_1$ is a star on at least $4$ edges and $G_2$ is any other graph, then $\ensuremath{\mathcal{H}}_t(n)$
and $\ensuremath{\mathcal{H}}(n, m)$ are Berge-$G_1$ and Berge-$G_2$ saturated hypergraphs, respectively. \end{theorem}
\noindent For a positive integer $n$, let $[n]=\{1, 2, \ldots, n\}$. We shorten $\{i,j\}$ as $ij$ when clear from context. If $F$ is a hypergraph and $e$ is a hyperedge, we denote by $F+e$, $F-e$, a hypergraph obtained by adding $e$ to $F$, deleting $e$ from $F$, respectively.\\
\noindent {\bf Construction of a hypergraph $\ensuremath{\mathcal{H}}_t(n)$:}\\ Let $n$ and $t$ be positive integers, $t\leq n$. Let $ \ensuremath{\mathcal{H}}_t(n) = ([n],\{[n], [n]-\{1\}, [n]-\{2\},\ldots, [n]-\{t-3\}, [t-3]\}).$\\
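The construction of $\ensuremath{\mathcal{H}}_t(n)$ can be sketched computationally; the snippet below (our own illustration, with hypothetical names) builds the hyperedge list and checks the degree property that every vertex lies in exactly $t-2$ hyperedges, which is what makes $\ensuremath{\mathcal{H}}_t(n)$ free of Berge-$S_t$ subhypergraphs.

```python
def H_t(n, t):
    """Hyperedges of H_t(n): [n], [n]-{1}, ..., [n]-{t-3}, and [t-3]."""
    full = frozenset(range(1, n + 1))
    return ([full]
            + [full - {i} for i in range(1, t - 2)]   # [n]-{1},...,[n]-{t-3}
            + [frozenset(range(1, t - 2))])           # the set [t-3]

def degrees(n, edges):
    """Degree of each vertex of [n]: how many hyperedges contain it."""
    return {v: sum(v in e for e in edges) for v in range(1, n + 1)}
```

For instance, for $n=10$ and $t=6$ there are $t-1=5$ hyperedges and every vertex has degree $t-2=4$.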
\noindent {\bf Construction of a set system $H'(n,m)$ and a hypergraph $\ensuremath{\mathcal{H}}(n, m)$:}\\ Let $n$ and $m$ be positive integers, $m\leq \binom{n}{2}$. Let $x= \min\{ m-1, n\}$.
Let $V' $ be a set of singletons, $V'\subseteq \{\{i\}\colon i\in[n]\}$, $|V'|=x$. Let $E'$ be an edge-set of an almost regular graph (the degrees of vertices differ by at most one) on the vertex set $[n]$, such that $|E'|=m - x-1$. Let $H' (n,m)= \{\varnothing\} \cup V' \cup E'$.\\
Informally, we build a set system $H'(n,m)$ of $m$ sets on the ground set $[n]$ by first picking an empty set, then as many as possible singletons, and then pairs, so that the pairs form an edge-set of an almost regular graph.\\
Let $$\ensuremath{\mathcal{H}}=\ensuremath{\mathcal{H}}(n,m)= ([n], \{ [n]-E: ~ E\in H'(n,m)\}).$$
Note that $|E(\ensuremath{\mathcal{H}})|=m$ and each hyperedge of $\ensuremath{\mathcal{H}}$ has size $n$, $n-1$, or $n-2$.\\
\noindent \textbf {Examples.} \\ If $n=4$ and $m=4$, we have: \begin{eqnarray*} H'(4,4)& = &\{ \varnothing, \{1\}, \{2\}, \{3\}\},\\ E(\ensuremath{\mathcal{H}}(4,4))& = & \{ [4], \{2,3,4\}, \{1,3,4 \}, \{1,2,4\}\}. \end{eqnarray*}
\noindent If $n=5$ and $m = 8$, we have $$H'(5,8)= \{ \varnothing, \{1\}, \{2\}, \{3\}, \{4\}, \{5\}, \{12\}, \{34\}\},$$ $$E(\ensuremath{\mathcal{H}}(5,8)) = \{ [5], \{2,3,4,5\}, \{1,3,4,5 \}, \{1,2, 4,5\}, \{1,2,3,5\}, \{1,2, 3,4\}, \{3,4, 5\}, \{1,2,5\}\}.$$
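The construction of $H'(n,m)$ and $\ensuremath{\mathcal{H}}(n,m)$ can be sketched in code. The almost regular pair selection below uses one simple greedy rule (always join the two lowest-degree non-adjacent vertices); this is our own choice of an almost regular graph, not prescribed by the definition, though it happens to reproduce the two examples above. All names are hypothetical.

```python
from itertools import combinations

def H_prime(n, m):
    """H'(n, m): the empty set, x = min(m-1, n) singletons, and
    m - x - 1 pairs forming an almost regular graph on [n]."""
    assert m <= n * (n - 1) // 2
    x = min(m - 1, n)
    sets = [frozenset()] + [frozenset([i]) for i in range(1, x + 1)]
    deg = {i: 0 for i in range(1, n + 1)}
    pairs = []
    while len(sets) + len(pairs) < m:
        # greedy: join the two lowest-degree, not-yet-adjacent vertices
        order = sorted(deg, key=lambda i: (deg[i], i))
        u, v = next((u, v) for u, v in combinations(order, 2)
                    if frozenset((u, v)) not in pairs)
        pairs.append(frozenset((u, v)))
        deg[u] += 1
        deg[v] += 1
    return sets + pairs

def H_hyperedges(n, m):
    """E(H(n, m)) = { [n] - E : E in H'(n, m) }."""
    full = frozenset(range(1, n + 1))
    return [full - E for E in H_prime(n, m)]
```

Running this on the two examples above recovers $H'(4,4)$ and the pairs $12$, $34$ of $H'(5,8)$.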
Let $H$ be a Berge-$G$ hypergraph. We call a copy $G'$ of $G$, where $V(G')\subseteq V(H)$ and the edges of $G'$ are contained in distinct hyperedges of $H$, an {\it underlying graph} of the Berge-$G$ hypergraph $H$. For example, if $G'$
is a triangle on vertices $1,2,3$, then the hypergraph $H=(\{1,2, 3, 4\}, \{\{1,2\}, \{2, 3, 4\}, \{1,2, 3, 4\}\})$ is Berge-$K_3$ and $G'$
is an underlying graph of the Berge-$K_3$ hypergraph $H$.
\section{Proof of the main theorem}
Let $S_t$ denote a star on $t$ vertices.
\begin{lemma}
Let $t\ge 5$, $n\geq t$. Then ${\rm sat}(n,\ensuremath{\mathcal{B}}(S_t))=t-1= |E(S_t)|$. \label{lem:35} \end{lemma} \begin{proof}
To show the lower bound, assume first that there is a hypergraph $\ensuremath{\mathcal{H}}$ on $t-2$ hyperedges and vertex set $[n]$ that is Berge-$S_t$ saturated.
Since the maximum degree of any member of $\ensuremath{\mathcal{B}}(S_t)$ is at least $t-1$, the maximum degree of $\ensuremath{\mathcal{H}}+e$ for any new edge $e$ of size at least $2$ is at least $t-1$. Note that $\ensuremath{\mathcal{H}}$ has at least $|V(S_t)|=t\geq 5$ vertices. Assume first that $\ensuremath{\mathcal{H}}$ contains an edge of size $2$, say $12$. Then any vertex in $\{3, \ldots, n\}$ does not belong to this edge, so it has degree at most $t-3$. Thus, for any $i,j\in \{3, \ldots, n\}$, $i\neq j$, the maximum degree of $\ensuremath{\mathcal{H}}+ij$ is at most $t-2$, implying that $ij \in E(\ensuremath{\mathcal{H}})$. Since the edge $12$ was chosen arbitrarily, we conclude that $\ensuremath{\mathcal{H}}$ contains all edges of size $2$. Thus $\ensuremath{\mathcal{H}}$ has at least $\binom{n}{2} \geq \binom{t}{2} > t-2$ edges, a contradiction. Therefore $\ensuremath{\mathcal{H}}$ has no hyperedges of size $2$. Assume next that all but at most one vertex, say $n$, belong to all hyperedges of $\ensuremath{\mathcal{H}}$. Then each hyperedge contains the set $[n-1]$, implying that each hyperedge is either $[n-1]$ or $[n]$, a contradiction to the fact that there are $t-2\geq 3$ distinct hyperedges in $E(\ensuremath{\mathcal{H}})$. Hence, there are two vertices, say $1$ and $2$, each with degree at most $t-3$. We know that $12 \not\in E(\ensuremath{\mathcal{H}})$ and that $\ensuremath{\mathcal{H}}+12$ has maximum degree at most $t-2$, a contradiction. Thus $\ensuremath{\mathcal{H}}$ is not $\ensuremath{\mathcal{B}}(S_t)$ saturated. \\
For the upper bound, we show that $\ensuremath{\mathcal{H}}_t$ is a $\ensuremath{\mathcal{B}}(S_t)$-saturated hypergraph. Recall that $\ensuremath{\mathcal{H}}= \ensuremath{\mathcal{H}}_t= ([n],\{[n], [n]-\{1\}, [n]-\{2\},\ldots, [n]-\{t-3\}, [t-3]\}).$
Note that each vertex of $\ensuremath{\mathcal{H}}$ has degree $t-2$. Thus $\ensuremath{\mathcal{H}}$ is $\ensuremath{\mathcal{B}}(S_t)$-free. Let $e\subseteq [n]$ of size at least $2$, such that $e\not\in E(\ensuremath{\mathcal{H}})$. Let $i,j \in e$, $i\neq j$. We shall show that $\ensuremath{\mathcal{H}}+e$ contains a Berge $S_t$ hypergraph.\\
Case 1. $i, j\in [t-3]$, without loss of generality $i=1, j=2$. Then the pairs $1n, 1(n-1), 13, \ldots, 1(t-2), 12$ are contained in $[n], [n]-\{2\},\ldots, [n]-\{t-3\}, [t-3], e$, respectively, and form an underlying graph of Berge-$S_t$ in $\ensuremath{\mathcal{H}}+e$.\\
Case 2. $i$ or $j$ is not in $[t-3]$. Let, without loss of generality, $i=n$. Then, without loss of generality, $j=n-1$ or $j=1$. Then the pairs $n2, n3, \ldots, n(t-2)$ are contained in $[n]-\{1\}, [n]-\{2\},\ldots, [n]-\{t-3\}$, respectively, and the pairs $1n, (n-1)n$ are contained in $[n], e$ or $e, [n]$, respectively. Thus all these $t-1$ pairs form an underlying graph of Berge-$S_t$ in $\ensuremath{\mathcal{H}}+e$. \end{proof}
\vskip 1cm
\begin{proof}[Proof of Theorem \ref{thm:main}] First we consider some special graphs: stars on at most three edges and a triangle. For the upper bounds on ${\rm sat}(n,\ensuremath{\mathcal{B}}(G))$ for $G=S_2, S_3, S_4, K_3$, consider the following hypergraphs in order for $n\geq 2$, $n\geq 3$, $n\geq 4$, and $n\geq 3$, respectively: $([n], \emptyset), ([n], \{[n]\}), ([n], \{[n], [n]-\{1\}\}), ([n], \{[n], [n]-\{1\}\})$.
It is easy to see that these hypergraphs are saturated for the respective Berge hypergraphs. Thus, for $G$ being one of these graphs, ${\rm sat}(n,\ensuremath{\mathcal{B}}(G))\leq |E(G)|-1$. Since the lower bound on ${\rm sat}(n,\ensuremath{\mathcal{B}}(G))$ is trivially $|E(G)|-1$, the theorem holds in this case. Lemma \ref{lem:35} implies that the theorem holds for all other stars.\\
From now on, let $G$ be a non-empty graph which is neither a star nor a $K_3$. Let $n$ be the number of vertices in $G$, $n\geq 4$. We shall further assume that $G$ has no isolated vertices and that $V(G)=[n]$. Let $m=|E(G)|-1$. We shall prove that $\ensuremath{\mathcal{H}}=\ensuremath{\mathcal{H}}(n,m)$ as defined in the introduction is a Berge-$G$ saturated hypergraph, i.e., such that it does not contain any member of $\ensuremath{\mathcal{B}}(G)$ as a subhypergraph and
such that for any new hyperedge $e$ of size at least two, $\ensuremath{\mathcal{H}}+e$ contains a Berge-$G$ sub-hypergraph. In fact, instead of $\ensuremath{\mathcal{H}}(n,m)$ we shall be mostly using the system $H'(n,m)$ also defined in the introduction. Note that $\ensuremath{\mathcal{H}}$ does not contain any member of $\ensuremath{\mathcal{B}}(G)$ since $\ensuremath{\mathcal{H}}$ has $|E(G)|-1$ edges. \\
Consider $e$, $e\subseteq [n]$, $|e|\geq 2$, $e\not\in E(\ensuremath{\mathcal{H}})$. Let $\{i,j\}\subseteq e$, $i\neq j$. Relabel the vertices of $G$ such that $ij \in E(G)$ and $i$ is a vertex of maximum degree in $G$. We shall show that $\ensuremath{\mathcal{H}}$ is a Berge-$(G-ij)$ hypergraph, thus showing that $\ensuremath{\mathcal{H}}+e$ is Berge-$G$. We shall prove one of the following equivalent statements: \\
\noindent (i) there is a bijection $\phi$ between $E(G-ij)$ and $E(\ensuremath{\mathcal{H}})$ such that $e' \subseteq \phi(e')$ for any $e'\in E(G-ij)$,\\ (ii) there is a bijection $f$ between $E(G-ij)$ and $H'=H'(n,m)$ such that for each $e'\in E(G-ij)$, $e' \cap f(e') = \varnothing$,\\ (iii) there is a perfect matching in a bipartite graph $F$ with one part $A= E(G) - \{ij\}$ and the other part $B=H'$ such that $e'\in A=E(G) - \{ij\}$ and $e''\in B= H'$ are adjacent in $F$ iff $e'\cap e'' = \varnothing$. \\
One can see that (i) and (ii) are equivalent by defining $\phi(e') $ to be
$[n]-f(e')$. The equivalence of (ii) and (iii) is clear since $|A|=|B|$. Next, we shall prove (iii).\\
In each of the cases below, we assume that there is no perfect matching in $F$; thus, by Hall's theorem, there is a set $S\subseteq A$ such that $|N_F(S)|<|S|$. Let $Q = B \setminus N_F(S)$. We see that each element of $Q$ intersects each edge in $S$. Let $G_S$ be a subgraph of $G$ with edge set $S$. Since each element in $Q$ has size one or two, $G_S$ has a vertex cover of size one or two. Thus $G_S$ is either a star, a triangle, or an edge-disjoint union of two stars. Clearly, $\emptyset$ is not in $Q$. Assume some singleton, say $\{1\}$, is in $Q$. Then $S$ forms a star with center $1$. Then all singletons $\{2\}, \{3\}, ...$ and $\emptyset$ are in $N_F(S)$.
If $S\neq A$, i.e., $|E(G)|-1>|S|$, then $|N_F(S)|\geq |S|$, a contradiction to our assumption on $S$. If $S=A$, i.e., $G$ is a union of a star and an edge $ij$, then, since $i$ is a vertex of maximum degree in $G$, we see that $G$ is a star, a contradiction. Thus we can assume that $Q$ contains only two-element sets; in particular, $H'$ contains two-element sets and thus, by the definition of $H'$, $|H'| >n+1$. Finally, since the empty set and all singletons are not in $Q$, they are in $N_F(S)$, so $|N_F(S)|\geq n$.
Thus $|S|\geq n+1$, and in particular, $S$ does not form a star. We observed earlier that we could assume that $G$ is not a star. \\\\
{\bf Case 1.} $G$ is a union of two stars.
We already excluded the case that $G$ is a star, so assume that $G$ is an edge-disjoint union of two stars with different centers. If one of the stars has at most two edges, then $|E(G)| \leq n+1$, and $|S|\leq n$, a contradiction. Thus each of the stars has at least $3$ edges.
Note that $G$ has at most $2n-1$ edges. In particular, since there are $n$ singletons and an empty set in $H'$ and $|H'|\leq2n-1$, we have that $E'$, the set of pairs from $H'$, has size at most $n-2$ and thus the graph on edge set $E'$ has maximum degree at most $2$. This implies that for every vertex there is a non-adjacent vertex in a graph with edge-set $E'$. Let $k$ be the integer such that $ik\not\in E'$. Relabel the vertices of $G$ such that $ij$ is an edge of $G$, and $i$ and $k$ are the centers of the stars whose union is $G$, and $j\neq k$. Since $ik\not\in E'$, it follows that $ik \not\in Q$. Since each pair from $Q$ forms a vertex cover of $G_S$, there is a pair different from $ik$ that forms a vertex cover of $G_S$.
Since $ik$ is a vertex cover of $G$, it is a vertex cover of $G_S$. Thus $G_S$ has two distinct vertex covers of size $2$. Then $G_S$ is a subgraph of a triangle with possibly some further edges incident to the same vertex of the triangle or a subgraph of a $C_4$. This implies that $|S|\leq n$, a contradiction.\\
\textbf {Case 2.} $G$ is not a union of two stars.
If $|Q|=1$, then $|N_F(S)|=|B|-1= |A|-1$. Since $|S|> |N_F(S)|= |A|-1$ and $S\subseteq A$, we have that $S=A$, hence $G_S= G - ij$.
Since there is a vertex cover of $G_S$ of size $2$, we have that $G_S= G - ij$ is a union of two stars $S', S''$, so $G$ is a union of two stars and an edge incident to a vertex of maximum degree of $G$. If the maximum degree of $G$ is at least four, then $i$ is a center of $S'$ or $S''$. Thus $G$ is a union of two stars, a contradiction. If the maximum degree of $G$ is at most $3$, then $|E(G)| \leq 7$.
On the other hand, $m=|H'| \ge n+2$. Thus $n+3\le |E(G)|\le 7$.
Thus $n=|V(G)| \leq 4$
and for each such choice of $n$ we reach a contradiction by the fact that $n+3\leq |E(G)|$. If $Q$ contains two disjoint edges, say $12$ and $34$, then $G_S$ can only be a subgraph of a $4$-cycle $13241$.
So, $|S|\leq 4\leq n$, a contradiction to our assumption that $|S|\geq n+1$. \\
Thus $Q$ contains edges that either form a star on at least three edges or a subgraph of a triangle. If the edges of $Q$ form a star on at least three edges, say $12, 13, 14, \ldots$, $S$ forms a star with center $1$, a contradiction. If the edges of $Q$ form a triangle, say $123$, then we arrive at a contradiction since no two-element set can at the same time intersect $12$, $23$, and $13$. Thus $Q$ contains exactly two adjacent edges, say $12$ and $13$. It follows that $S$ forms a star with center $1$ and maybe an edge $23$. Then $|S|\leq n$, a contradiction. Hence, there is a perfect matching in $F$ and thus $\ensuremath{\mathcal{H}}$ is Berge-$G$ saturated. \end{proof}
\section{Conclusions}
In this note, we completely determine ${\rm sat}(n, \ensuremath{\mathcal{B}}(G))$ for any $n\geq |V(G)|$ and show in particular that this function does not depend on $n$. There are many variations of saturation numbers for non-uniform hypergraphs that could be considered. Among those are functions optimising the total weight of a saturated hypergraph, i.e., the sum of cardinalities of all hyperedges, or functions optimising the size of a saturated multihypergraph. These have been considered by the second author in \cite{W}. One particularly interesting variation considered in \cite{W} is the following notion of saturation: a hypergraph $\ensuremath{\mathcal{H}}$ is called strongly $\ensuremath{\mathcal{F}}$ saturated with respect to a family of hypergraphs $\ensuremath{\mathcal{F}}$ if $\ensuremath{\mathcal{H}}$ does not contain any member of $\ensuremath{\mathcal{F}}$ as a subhypergraph, but replacing any hyperedge $e$ of $\ensuremath{\mathcal{H}}$ with $e\cup \{v\}$ for any vertex $v\not\in e$ such that $e\cup \{v\} \notin E(\ensuremath{\mathcal{H}})$ creates such a member of $\ensuremath{\mathcal{F}}$.\\
\noindent {\bf Acknowledgements~~~} We thank Casey Tompkins for useful discussions and carefully reading the manuscript.
\end{document}
\begin{document}
\begin{center} {\LARGE \bf The uniform normal form} \\ \mbox{}
\\ {\LARGE \bf of a linear mapping} \end{center} \begin{center} \mbox{}
\\ Richard Cushman\footnotemark \mbox{}
\\ Dedicated to my friend and colleague Arjeh Cohen on his retirement \end{center} \footnotetext{Department of Mathematics and Statistics, University of Calgary, \\ Calgary, Alberta, Canada, T2N 1N4.}
Let $V$ be a finite dimensional vector space over a field $\mathrm{k} $ of characteristic $0$. Let $A:V \rightarrow V$ be a linear mapping of $V$ into itself with characteristic polynomial ${\chi }_A$. The goal of this paper is to give a normal form for $A$, which yields a better description of its structure than the classical companion matrix. This normal form does not use a factorization of ${\chi }_A$ and requires only operations in the field $\mathrm{k} $ to compute.
\section{Semisimple linear mappings} \label{sec1}
We begin by giving a well known criterion to determine if the linear mapping $A$ is semisimple, that is, every $A$-invariant subspace of $V$ has an $A$-invariant complementary subspace.
Suppose that we can factor ${\chi }_A$, that is, find monic irreducible polynomials ${ \{ {\pi }_i \} }^m_{i=1}$, which are pairwise relatively prime, such that ${\chi }_A = {\prod }^m_{i=1} {\pi }^{n_i}_i$, where $n_i \in {\mathbb{Z}}_{\ge 1}$. Then \begin{displaymath} {\chi}'_A =\sum^m_{j=1} (n_j {\pi }^{n_j-1}_j {\pi }'_j) {\prod}_{i\ne j}{\pi }^{n_i}_i = \big( {\prod }^m_{\ell =1} {\pi }^{n_{\ell }-1}_{\ell } \big) \big( \sum^m_{j=1} (n_j {\pi }'_j) \prod_{i\ne j}{\pi }_i \big) . \end{displaymath} Therefore the greatest common divisor of ${\chi }_A$ and its derivative ${\chi }'_A$ is the polynomial $d= {\prod }^m_{\ell =1} {\pi }^{n_{\ell }-1}_{\ell }$. The polynomial $d$ can be computed using the Euclidean algorithm. Thus the square free factorization of ${\chi }_A$ is the polynomial $p = {\prod }^m_{\ell =1} {\pi }_{\ell } = {\chi }_A/d $, which can be computed without knowing a factorization of ${\chi }_A$.
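To make this concrete, here is a small illustration (ours, not the paper's) of computing $p = {\chi }_A/d$ with $d = \gcd ({\chi }_A, {\chi }'_A)$ by the Euclidean algorithm over $\mathbb{Q}$, representing polynomials as coefficient lists; all function names are hypothetical:

```python
from fractions import Fraction

# Polynomials as coefficient lists, lowest degree first:
# x^3 - 3x + 2 = (x-1)^2 (x+2)  ->  [2, -3, 0, 1].

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def deriv(p):
    return trim([Fraction(i) * c for i, c in enumerate(p)][1:]) or [Fraction(0)]

def divmod_poly(a, b):
    # Long division: returns (quotient, remainder).
    a = [Fraction(c) for c in a]
    b = trim([Fraction(c) for c in b])
    q = [Fraction(0)] * max(1, len(a) - len(b) + 1)
    while len(trim(a)) >= len(b) and trim(a) != [Fraction(0)]:
        a = trim(a)
        shift = len(a) - len(b)
        coef = a[-1] / b[-1]
        q[shift] = coef
        for i, c in enumerate(b):
            a[shift + i] -= coef * c
    return trim(q), trim(a)

def gcd_poly(a, b):
    # Euclidean algorithm; the result is normalized to be monic.
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while trim(b) != [Fraction(0)]:
        a, b = b, divmod_poly(a, b)[1]
    a = trim(a)
    return [c / a[-1] for c in a]

def square_free_part(chi):
    # p = chi / gcd(chi, chi'), as in the text.
    return divmod_poly(chi, gcd_poly(chi, deriv(chi)))[0]
```

For ${\chi }_A = (x-1)^2(x+2)$ this yields $d = x-1$ and $p = x^2+x-2 = (x-1)(x+2)$, without ever factoring ${\chi }_A$.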
The goal of the next discussion is to prove
\noindent \textbf{Claim 1.1} The linear mapping $A:V \rightarrow V$ is semisimple if $p(A) =0 $ on $V$.
Let $p = {\prod }^m_{j =1} {\pi }_j$ be the square free factorization of the characteristic polynomial ${\chi }_A $ of $A$. We now decompose $V$ into $A$-invariant subspaces. For each $1\le j \le m$ let $V_j = \{ v \in V \setrule \, {\pi }_j (A)v = 0 \} $. Then $V_j$ is an $A$-invariant subspace of $V$. For if $v \in V_j$, then ${\pi }_j(A)Av = A{\pi }_j(A)v =0$, that is, $Av \in V_j$. The following argument shows that $V = {\bigoplus }^m_{j=1} V_j$. Because the polynomials ${\prod }_{i\ne j}{\pi}_i$, $1 \le j \le m$, have greatest common divisor $1$, there are polynomials $f_j$, $1 \le j \le m$ such that $1 = \sum^m_{j=1} f_j \, \big( \prod_{i\ne j}{\pi }_i \big)$. Therefore every vector $v \in V$ can be written as \begin{displaymath} v = \sum^m_{j=1} f_j(A) \, \big( \prod_{i\ne j}{\pi }_i(A)v \big) = \sum^m_{j=1}f_j(A)v_j. \end{displaymath} Since ${\pi }_j(A) \big( \prod_{i\ne j}{\pi }_i(A)v \big) = p(A)v =0$, the vector $v_j = \prod_{i\ne j}{\pi }_i(A)v$ lies in $V_j$, and hence so does $f_j(A)v_j$. Therefore $V = \sum^m_{j=1}V_j$. If for $i \ne j$ we have $w \in V_i \cap V_j$, then for some polynomials $F_i$ and $G_j$ we have $ 1 = F_i {\pi }_i + G_j {\pi }_j$, because ${\pi }_i$ and ${\pi }_j$ are relatively prime. Consequently, $w = F_i(A){\pi }_i(A)w +G_j(A){\pi }_j(A)w =0$. So $V = \sum^m_{j=1} \oplus V_j$.
$\square $
We now prove
\noindent \textbf{Lemma 1.2} For each $1 \le j \le m$ there is a basis of the $A$-invariant subspace $V_j$ such that the matrix of $A$ is block diagonal.
\noindent \textbf{Proof.} Let $W$ be a minimal dimensional proper $A$-invariant subspace of $V_j$ and let $w$ be a nonzero vector in $W$. Then there is a minimal positive integer $r$ such that $A^r w \in {\mathop{\rm span}\nolimits }_{\mathrm{k} }\{ w, \, Aw, \, \ldots , \, A^{r-1}w \} = U$. We assert: the vectors ${\{ A^iw \} }^{r-1}_{i=0}$ are linearly independent. Suppose that there are $a_i \in \mathrm{k} $ for $0 \le i \le r-1$ such that $0 = a_0w + a_1Aw + \cdots + a_{r-1}A^{r-1}w$. Let $t \le r-1$ be the largest index such that $a_t \ne 0$. So $A^tw = -\frac{a_{t-1}}{a_t} A^{t-1}w - \cdots - \frac{a_0}{a_t} w$, that is, $A^t w \in {\mathop{\rm span}\nolimits }_{\mathrm{k} } \{ w, \, \ldots , \, A^{t-1}w \} $ and $t <r $. This contradicts the definition of the integer $r$. Thus the index $t$ does not exist. Hence $a_i =0$ for every $0 \le i \le r-1$, that is, the vectors ${\{ A^iw \} }^{r-1}_{i=0}$ are linearly independent.
The subspace $U$ of $W$ is $A$-invariant, for \begin{align} A(\sum^{r-1}_{j=0} b_j A^jw) & = \sum^{r-2}_{j=0} b_j A^{j+1}w +b_{r-1}A^r w, \, \, \, \mbox{where $b_j \in \mathrm{k}$} \notag \\ &\hspace{-.5in} = \sum^{r-1}_{j=1}b_{j-1}A^jw +b_{r-1}(\sum^{r-1}_{\ell =0} a_{\ell}A^{\ell }w), \quad \mbox{since $A^rw \in U$} \notag \\ & \hspace{-.5in} = b_{r-1}a_0w +\sum^{r-1}_{j=1}(b_{j-1}+b_{r-1}a_j)A^jw \in U. \notag \end{align}
Next we show that there is a monic polynomial $\mu $ of degree $r$ such that $\mu (A) =0$ on $U$. With respect the basis ${\{ A^iw \} }^{r-1}_{i=0}$ of $U$ we can write $A^rw = -a_0w - \cdots - a_{r-1}A^{r-1}w$. So $\mu (A)w =0$, where \begin{equation} \mu (\lambda ) = a_0 +a_1\lambda + \cdots + a_{r-1}{\lambda }^{r-1} +{\lambda }^r . \label{eq-twos1} \end{equation} Since $\mu (A)A^iw = A^i(\mu (A)w) =0$ for every $0 \le i \le r-1$, it follows that $\mu (A) = 0$ on $U$.
By the minimality of the dimension of $W$ the subspace $U$ cannot be proper. But $U \ne \{ 0 \}$, since $w \in U$. Therefore $U = W$. Since $U \subseteq V_j$, we obtain ${\pi }_j(A) u =0 $ for every $u \in U$. Because ${\pi }_j$ is irreducible, the preceding statement shows that ${\pi }_j$ is the minimum polynomial of $A$ on $U$. Thus ${\pi }_j$ divides $\mu $. Suppose that $\deg {\pi }_j =s < \deg \mu = r$. Then $A^s u^{\prime } \in {\mathop{\rm span}\nolimits }_k \{ u^{\prime }, \, \ldots \, A^{s-1}u^{\prime } \} = Y$ for some nonzero vector $u^{\prime }$ in $U$. By minimality, $Y = U$. But $\dim Y = s < \dim U = r$, which is a contradiction. Thus
${\pi }_j = \mu $. Note that the matrix of $A|U$ with respect to the basis ${\{ A^iw \} }^{r-1}_{i=0}$ is the $r \times r$ companion matrix \begin{equation} C_r = \mbox{\footnotesize $ \left( \begin{array}{lllcc} 0 & \cdots & \cdots & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & -a_1 \\ \vdots & 1 & \ddots & \vdots & \vdots \\ \vdots & & \ddots & 0 & -a_{r-2} \\ 0 & \cdots & \cdots & 1 & -a_{r-1} \end{array} \right) $,} \label{eq-twostars1} \end{equation} where ${\pi }_j = a_0 +a_1\lambda + \cdots + a_{r-1}{\lambda }^{r-1} +{\lambda }^r$.
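As a concrete aside (ours, not the paper's), the companion matrix (\ref{eq-twostars1}) is easy to assemble from the coefficient list $[a_0, \ldots , a_{r-1}]$ of a monic polynomial; a minimal sketch, with a hypothetical function name:

```python
def companion(coeffs):
    # Companion matrix C_r of the monic polynomial
    # a_0 + a_1 x + ... + a_{r-1} x^{r-1} + x^r, with coeffs = [a_0, ..., a_{r-1}].
    r = len(coeffs)
    C = [[0] * r for _ in range(r)]
    for i in range(1, r):
        C[i][i - 1] = 1            # ones on the subdiagonal
    for i in range(r):
        C[i][r - 1] = -coeffs[i]   # last column holds -a_0, ..., -a_{r-1}
    return C
```

For example, for $x^2 - 1$ (coefficient list $[-1, 0]$) this produces the matrix with rows $(0, 1)$ and $(1, 0)$, whose characteristic polynomial is indeed ${\lambda }^2 - 1$.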
Suppose that $U \ne V_j$. Then there is a nonzero vector $w' \in V_j \setminus U$. Let $r'$ be the smallest positive integer such that $A^{r'}w' \in {\mathop{\rm span}\nolimits }_{\mathrm{k} } \{ w', \, Aw', \, \ldots , \,$ $A^{r'-1}w' \} = U'$. Then by the argument in the preceding paragraph, $U'$ is a minimal $A$-invariant subspace of $V_j$ of dimension $r' =r$, whose minimal polynomial is ${\pi }_j$. Suppose that $U' \cap U \ne \{ 0 \} $. Then $U' \cap U$ is a nonzero $A$-invariant subspace of $U'$. By minimality $U' \cap U = U'$, that is, $U' \subseteq U$. But $r = \dim U = \dim U' = r'$. So $U = U'$. Thus $w'\in U'$ and $w' \notin U$, which is a contradiction. Therefore $U' \cap U = \{ 0 \} $. If $U \oplus U' \ne V_j$, we repeat the above argument with $U \oplus U'$ in place of $U$; after a finite number of repetitions we have $V_j = \sum^{\ell }_{i=1}\oplus U_i$, where for every $1 \le i \le \ell $ the subspace $U_i$ of $V_j$ is $A$-invariant with basis ${\{ A^k u_i \} }^{r-1}_{k =0}$ and the minimal polynomial
of $A|U_i$ is ${\pi }_j$. With respect to the basis ${\{ A^k u_i \}}^{(\ell , r-1)}_{(i,k) =(1,0)}$ of $V_j$ the matrix of $A$ is $\mathrm{diag}(C_r, \ldots , C_r)$, which is block diagonal.
$\square $
For each $1 \le j \le m $ applying lemma 1.2 to $V_j$ and using the fact that $V = \sum^m_{j=1} \oplus V_j$ we obtain
\noindent \textbf{Corollary 1.3} There is a basis of $V$ such that the matrix of $A$ is block diagonal.
\noindent \textbf{Proof of claim 1.1} Suppose that $U$ is an $A$-invariant subspace of $V$. Then by corollary 1.3 applied to $A|U$, there is a basis ${\varepsilon }_U$
of $U$ such that the matrix of $A|U$ is block diagonal. By corollary 1.3 there is a basis ${\varepsilon }_V$ of $V$ which extends the basis ${\varepsilon }_U$ such that the matrix of $A$ on $V$ is block diagonal. Let $W$ be the subspace of $V$ with basis ${\varepsilon }_W = {\varepsilon }_V \setminus {\varepsilon }_U$.
The matrix of $A|W$ is block diagonal. Therefore $W$ is $A$-invariant and $V = U \oplus W$ by construction. Consequently, $A$ is semisimple.
$\square $
\section{The Jordan decomposition of ${\bf A}$} \label{sec2}
Here we give an algorithm for finding the Jordan decomposition of the linear mapping $A$, that is, we find commuting semisimple and nilpotent linear maps $S$ and $N$ whose sum is $A$. The algorithm we present uses only the characteristic polynomial ${\chi }_A$ of $A$ and does {\em not} require that we know {\em any} of its factors. Our argument follows that of \cite{burgoyne-cushman}.
Let $p$ be the square free factorization of ${\chi }_A$. Let $M$ be the smallest positive integer such that ${\chi }_A$ divides $p^M$. Then $M \le \deg {\chi }_A$. Assume that $\deg {\chi }_A \ge 2$, for otherwise $S = A$. Write \begin{equation} S = A + \sum^{M-1}_{j=1}r_j(A){p(A)}^j, \label{eq-ones2} \end{equation} where $r_j$ is a polynomial whose degree is less than the degree of $p$. From the fact that ${\chi }_A$ divides $p^M$, it follows that ${p(A)}^M =0$.
We want to determine $S$ in the form (\ref{eq-ones2}) so that \begin{equation} p(S) =0. \label{eq-twos2} \end{equation} From claim 1.1 it follows that $S$ is semisimple.
We have to find the polynomials $r_j$ in (\ref{eq-ones2}) so that equation (\ref{eq-twos2}) holds. We begin by using the Taylor expansion of $p$. If (\ref{eq-ones2}) holds, then \begin{align} p(S) & = p\Big( A + \sum^{M-1}_{j=1}r_j(A){p(A)}^j \Big) \notag \\ & = p(A) + \sum^{M-1}_{i=1} p^{(i)}(A) \Big( \sum^{M-1}_{j=1}r_j(A){p(A)}^j \Big)^i, \notag \\ & \hspace{.5in}\parbox[t]{3.5in}{where $p^{(i)}$ is $\frac{1}{i!}$ times the ${\rm i}^{\rm th}$ derivative of $p$} \notag \\ & = p(A) + \sum^{M-1}_{i=1}\sum^{M-1}_{k=1} c_{k,i}\, {p(A)}^k p^{(i)}(A). \label{eq-twostars2} \end{align} Here $c_{k,i}$ is the coefficient of $z^k$ in $(r_1z + \cdots +\, r_{M-1}z^{M-1})^i$. Note that $c_{k,i} =0$ if $k<i$. A calculation shows that when $k \ge i$ we have \begin{equation} c_{k,i} = \sum_{\stackrel{{\alpha }_1 + \cdots + {\alpha }_{k-1}=i}{{\alpha }_1 + 2{\alpha }_2 + \cdots + (k-1){\alpha }_{k-1}= k}} \frac{i!}{{\alpha }_1! \cdots {\alpha }_{k-1}!} r^{{\alpha }_1}_1 \cdots r^{{\alpha }_{k-1}}_{k-1} . \label{eq-threestars2} \end{equation} Interchanging the order of summation in (\ref{eq-twostars2}) we get \begin{displaymath} p(S) = p(A) + \sum^{M-1}_{i=1}\Big( r_i(A)p^{(1)}(A) + e_i(A) \Big) {p(A)}^i, \end{displaymath} where $e_1=0$ and for $i \ge 2$ we have $e_i = \sum^{i}_{j=2} c_{i,j} p^{(j)}$. Note that $e_i$ depends on $r_1, \ldots , r_{i-1}$, because of (\ref{eq-threestars2}).
Suppose that we can find polynomials $r_i$ and $b_i$ such that \begin{equation} r_ip^{(1)} + e_i = b_i p -b_{i-1}, \label{eq-threes2} \end{equation} for every $1 \le i \le M-1$. Here $b_0 =1$. Then \begin{displaymath} \sum^{M-1}_{i=1}\big( r_i(A)p^{(1)}(A) + e_i(A) \big) {p(A)}^i = \sum^{M-1}_{i=1} \big( b_i(A)p(A) - b_{i-1}(A) \big) {p(A)}^i \, = \, -p(A), \end{displaymath} since $p^M(A) =0$ and $b_0 =1$, which implies $p(S) =0$, see (\ref{eq-twostars2}).
We now construct polynomials $r_i$ and $b_i$ so that (\ref{eq-threes2}) holds. We do this by induction. Since the polynomials $p$ and $p^{(1)}$ have no common nonconstant factors, their greatest common divisor is the constant polynomial $1$. Therefore by the Euclidean algorithm there are polynomials $g$ and $h$ with the degree of $h$ being less than the degree of $p$ such that \begin{equation} gp - hp^{(1)} = 1. \label{eq-fours2} \end{equation} \par Let $r_1 =h$, and $b_1 =g$. Using the fact that $b_0=1$ and $e_1=0$, we see that equation (\ref{eq-fours2}) is the same as equation (\ref{eq-threes2}) when $i=1$. Let $d_1 = 0$ and $q_0 = q_1 =0$. Now suppose that $n \ge 2$. By induction suppose that the polynomials $r_1, \ldots \, , r_{n-1}$, $e_1, \ldots , e_{n-1}$, $q_1, \ldots \, , q_{n-1}$ and $b_1, \ldots \, , b_{n-1}$ are known and that $r_i$ and $b_i$ satisfy (\ref{eq-threes2}) for every $1 \le i \le n-1$. Using the fact that the polynomials $r_1, \ldots , r_{n-1}$ are known, from formula (\ref{eq-threestars2}) we can calculate the polynomial $e_n = \sum^n_{j=2}c_{n,j}\, p^{(j)}$. For $n \ge 2$ define the polynomial $d_n$ by \begin{equation} d_n = q_{n-1} + h \sum^n_{i=1} g^{n-i} e_i. \label{eq-fives2} \end{equation} Note that the polynomials $q_{n-1}$, $g = b_1 $, $h= r_1$, and $e_i$ for $1 \le i \le n$ are now known. Thus the right hand side of (\ref{eq-fives2}) is known and hence so is $d_n$. Now define the polynomials $q_n$ and $r_n$ by dividing $d_n$ by $p$ with remainder, namely \begin{equation} d_n = q_n p + r_n . \label{eq-sixs2} \end{equation} Clearly, $q_n$ and $r_n$ are now known. Next for $n\ge 2$ define the polynomial $b_n$ by \begin{equation} b_n = -p^{(1)}q_n + g\sum^n_{i=1}g^{n-i}e_i. \label{eq-sevens2} \end{equation} Since the polynomials $p^{(1)}$, $q_n$, $g = b_1$, and $e_i$ for $1 \le i \le n$ are known, the polynomial $b_n$ is known. We now show that equation (\ref{eq-threes2}) holds.
\noindent \textbf{Proof.} We have already checked that (\ref{eq-threes2}) holds when $n=1$. By induction we assumed that it holds for every $1 \le i \le n-1$. Using the definition of $b_n$ (\ref{eq-sevens2}) and the induction hypothesis we compute \begin{align} b_n p - b_{n-1} & = \Big[ -p^{(1)}pq_n + pg\sum^n_{i=1} g^{n-i}e_i \Big] - \Big[ -p^{(1)}q_{n-1} + g\sum^{n-1}_{i=1}g^{n-1-i}e_i \Big] \notag \\ & \hspace{-.5in} = - p^{(1)}(q_n p - q_{n-1}) + pg\sum^n_{i=1}g^{n-i}e_i -\sum^{n-1}_{i=1}g^{n-i}e_i \notag \\ & \hspace{-.5in} = -p^{(1)}(-r_n + d_n-q_{n-1}) +(hp^{(1)}+1)\sum^n_{i=1}g^{n-i}e_i - \sum^{n-1}_{i=1}g^{n-i}e_i, \notag \\ & \hspace{.5in} \mbox{using (\ref{eq-fours2}) and (\ref{eq-sixs2})} \notag \\ & \hspace{-.5in} = p^{(1)}r_n -hp^{(1)}\sum^n_{i=1}g^{n-i}e_i + hp^{(1)}\sum^n_{i=1}g^{n-i}e_i + \sum^n_{i=1}g^{n-i}e_i - \sum^{n-1}_{i=1}g^{n-i}e_i, \notag \\ &\hspace{.5in} \mbox{using (\ref{eq-fives2})} \notag \\ & \hspace{-.5in}= p^{(1)}r_n +e_n. \tag*{$\square $} \end{align} This completes the construction of the polynomial $r_n$ in (\ref{eq-ones2}). Repeating this construction until $n = M-1$ we have determined the semisimple part $S$ of $A$. The commuting nilpotent part of $A$ is $N = A-S$.
$\square $
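To make the construction concrete, here is a hand-checkable instance of the first step of the iteration, which already suffices when $M = 2$: for $A = \left( \begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix} \right)$ we have ${\chi }_A = (\lambda -1)^2$, $p = \lambda - 1$, $p^{(1)} = 1$, and (\ref{eq-fours2}) holds with $g = 0$, $h = -1$, so $r_1 = -1$ and $S = A + r_1(A)p(A) = A - (A - I) = I$. A small sketch verifying this over the rationals (helper names are ours, not the paper's):

```python
from fractions import Fraction

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

# A = [[1, 1], [0, 1]]: chi_A = (x - 1)^2, square-free part p = x - 1, M = 2.
A = [[Fraction(1), Fraction(1)], [Fraction(0), Fraction(1)]]
I = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]

pA = mat_sub(A, I)   # p(A) = A - I
# Bezout relation g*p - h*p' = 1 with p' = 1 gives g = 0, h = -1,
# so r_1 = -1 and S = A + r_1(A) * p(A) = A - p(A).
S = mat_sub(A, pA)   # semisimple part (here S = I, so p(S) = 0)
N = mat_sub(A, S)    # nilpotent part, N^2 = 0, and S, N commute
```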
\section{Uniform normal form} \label{sec3}
In this section we give a description of the uniform normal form of a linear map $A$ of $V$ into itself. We assume that the Jordan decomposition of $A$ into its commuting semisimple and nilpotent summands $S$ and $N$, respectively, is known.
\subsection{Nilpotent normal form} \label{sec3subsec1}
In this subsection we find the Jordan normal form for a nilpotent linear transformation $N$.
Recall that a linear transformation $N:V \rightarrow V$ is said to be {\sl nilpotent of index} $n$ if there is an integer $n \ge 1$ such that $N^{n-1} \not = 0$ but $N^n = 0$. Note that the index of nilpotency $n$ need not be equal to $\dim V$. Suppose that for some positive integer $\ell \ge 1$ there is a nonzero vector $v$, which lies in $\ker N^{\ell } \setminus \ker N^{{\ell }-1}$. The set of vectors $\{ v, Nv, \ldots \, , N^{{\ell }-1}v \} $ is a \emph{Jordan chain} of \emph{length} ${\ell }$ with \emph{generating vector} $v$. The space $V^{\ell }$ spanned by the vectors in a given Jordan chain of length ${\ell }$ is an $N$-\emph{cyclic subspace} of $V$. Because $N^{\ell }v =0$, the subspace $V^{\ell }$ is $N$-invariant.
Since $\ker N|V^{\ell } = {\mathop{\rm span}\nolimits }_{\mathrm{k} } \{ N^{{\ell }-1}v \} $, the mapping
$N|V^{\ell }$ has exactly one eigenvector corresponding to the eigenvalue $0$.
\noindent \textbf{Claim 3.1.1} The vectors in a Jordan chain are linearly independent.
\noindent \textbf{Proof.} Suppose not. Then $0 = \sum^{{\ell }-1}_{i=0} {\alpha }_i\, N^iv$, where not every ${\alpha }_i \in \mathrm{k} $ is zero. Let $i_0$ be the smallest index for which ${\alpha }_{i_0} \not = 0$. Then \begin{equation} 0 = {\alpha }_{i_0}\, N^{i_0}v + \cdots \, + {\alpha }_{{\ell }-1} \, N^{{\ell }-1}v . \label{eq-sec4ss1one} \end{equation} Applying $N^{{\ell }-1-i_0}$ to both sides of (\ref{eq-sec4ss1one}) gives $0 = {\alpha }_{i_0}N^{{\ell }-1}v$. By hypothesis $v\not \in \ker N^{{\ell }-1}$, that is, $N^{{\ell }-1}v \ne 0$. Hence ${\alpha }_{i_0} =0$. This contradicts the definition of the index $i_0$. Therefore ${\alpha }_i =0$ for every $0 \le i \le \ell -1$. Thus the vectors ${\{ N^iv \}}^{\ell -1}_{i=0}$, which span the Jordan chain $V^{\ell }$, are linearly independent.
$\square $
With respect to the \emph{standard basis} $\{ N^{{\ell }-1}v, N^{\ell-2}v,
\ldots \, , Nv, v \}$ of $V^{\ell }$ the matrix of $N|V^{\ell }$ is the ${\ell } \times {\ell }$ matrix \begin{displaymath} \mbox{{\footnotesize $\left( \begin{array}{ccccc} 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & 1 &\ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & 0 \\ \vdots & \vdots & \vdots & \ddots & 1 \\ 0 & 0 & \cdots & \cdots & 0 \\ \end{array} \right) $},} \end{displaymath} which is a \emph{Jordan block} of size ${\ell }$.
We want to show that $V$ can be decomposed into a direct sum of $N$-cyclic subspaces. In fact, we show that there is a basis of $V$, whose elements are given by a dark dot $\bullet $ or an open dot $\circ $ in the diagram below such that the arrows give the action of $N$ on the basis vectors. Such a diagram is called the \emph{Young diagram of} $N$. \begin{displaymath} \begin{array}{l} \phantom{\bullet}\\ \phantom{\uparrow}\\ \phantom{\bullet}\\ \phantom{\uparrow}\\ \phantom{\bullet}\\ \phantom{\vdots}\\ \phantom{\bullet}\\ \phantom{\uparrow}\\ \phantom{circ}\\ \phantom{circ}\\ \phantom{x}\\ \end{array} \hspace{-1cm} \begin{array}{l} \begin{array}{l} \phantom{\bullet}\\ \phantom{\uparrow}\\ \phantom{\bullet}\\ \phantom{\uparrow}\\ \phantom{\bullet}\\ \phantom{\vdots}\\ \phantom{\bullet}\\ \phantom{x}\\ \end{array} \\ \phantom{\uparrow}\\ \phantom{circ}\\ \end{array} \hspace{-0.5cm} \begin{array}{l} \begin{array}{l} \phantom{\bullet}\\ \end{array}\\ \phantom{\uparrow}\\ \phantom{\bullet}\\ \phantom{\uparrow}\\ \phantom{\bullet}\\ \phantom{\vdots}\\ \phantom{\bullet}\\ \phantom{\uparrow}\\ \phantom{circ}\\ \end{array} \hspace{-0.4cm} \begin{array}{cllllllll} \bullet&\bullet&\bullet&\bullet&\circ&\circ&\circ\\ \uparrow&\uparrow&\uparrow&\uparrow\\ \bullet&\bullet&\bullet&\bullet\\ \uparrow&\uparrow&\uparrow&\uparrow\\ \bullet&\bullet&\bullet&\circ\\ \vdots&\vdots&\vdots\\ \bullet&\bullet&\circ&\\ \uparrow&\uparrow\\ \circ&\circ \end{array} \end{displaymath} \par \noindent \hspace{1.25in}\parbox[t]{4.5in}{Figure 3.1.1. The Young diagram of $N$.}
Note that the columns of the Young diagram of $N$ are Jordan chains with generating vector given by an open dot. The black dots form a basis for the image of $N$, whereas the open dots form a basis for a complementary subspace in $V$. The dots on or above the ${\ell }^{\rm th}$ row form a basis for $\ker N^{\ell }$ and the black dots in the first row form a basis for $\ker N \cap \mathrm{im}\, N$. Let $r_{\ell }$ be the number of dots in the ${\ell }^{\rm th}$ row. Then $r_{\ell } = \dim \ker N^{\ell } - \dim \ker N^{\ell -1}$. Thus the Young diagram of $N$ is unique.
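Since $\dim \ker N^{\ell } = \dim V - \mathrm{rank}\, N^{\ell }$, the row lengths satisfy $r_{\ell } = \mathrm{rank}\, N^{\ell -1} - \mathrm{rank}\, N^{\ell }$ and can be computed by Gaussian elimination alone. A small illustration of this (ours, with hypothetical names), assuming $N$ is given as a nilpotent matrix over $\mathbb{Q}$:

```python
from fractions import Fraction

def rank(M):
    # Rank by Gaussian elimination over the rationals.
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols, r = len(M), len(M[0]), 0
    for c in range(ncols):
        piv = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def young_rows(N):
    # Row lengths r_1, r_2, ... of the Young diagram of a nilpotent N:
    # r_l = dim ker N^l - dim ker N^(l-1) = rank N^(l-1) - rank N^l.
    n = len(N)
    P = [[Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    rows, prev = [], n           # rank of N^0 = I is n
    while prev > 0:              # terminates because N is nilpotent
        P = mat_mul(P, N)
        cur = rank(P)
        rows.append(prev - cur)
        prev = cur
    return rows
```

For one Jordan chain of length $3$ and one of length $1$ this returns the row lengths $[2, 1, 1]$, matching the diagram with columns of lengths $3$ and $1$.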
\noindent \textbf{Claim 3.1.2} There is a basis of $V$ that realizes the Young diagram of $N$.
\noindent \textbf{Proof}. Our proof follows that of Hartl \cite{hartl}. We use induction on the dimension of $V$. Since $\dim \ker N > 0$, it follows that $\dim \mathrm{im}\, N < \dim V$. Thus by the induction hypothesis, we may suppose that $\mathrm{im}\, N$ has a basis which is the union of $p$ Jordan chains $\{ w_i, Nw_i, \ldots \, , N^{m_i}w_i \} $ each of length $m_i+1$. The \linebreak vectors ${\{ N^{m_i}w_i \} }^p_{i=1}$ lie in $\mathrm{im}\, N \cap \ker N$ and in fact form a basis of this subspace. Since $\ker N$ may be larger than $\mathrm{im}\, N \cap \ker N$, choose vectors $\{ y_1, \ldots \, , y_q \} $ where $q$ is a nonnegative integer such that $\{ N^{m_1}w_1, \ldots \, , N^{m_p}w_p, $ $y_1, \ldots \, , y_q \} $ form a basis of $\ker N$.
Since $w_i \in \mathrm{im}\, N$ there is a vector $v_i$ in $V$ such that $w_i = Nv_i$. We assert that the $p$ Jordan chains \begin{displaymath} \{ v_i, Nv_i, \ldots \, , N^{m_i+1}v_i \} = \{ v_i, w_i, Nw_i, \ldots \, , N^{m_i}w_i \} \end{displaymath} each of length $m_i +2$ together with the $q$ vectors $\{ y_j \} $, which are Jordan chains of length $1$, form a basis of $V$. To see that they span $V$, let $v \in V$. Then $Nv \in \mathrm{im}\, N$. Using the basis of $\mathrm{im}\, N$ given by the induction hypothesis, we may write \begin{displaymath} Nv = \sum^p_{i=1}\sum^{m_i}_{\ell =0} {\alpha }_{i\ell } N^{\ell } w_i \, = \, N \Big( \sum^p_{i=1}\sum^{m_i}_{\ell =0} {\alpha }_{i\ell } N^{\ell }v_i \Big). \end{displaymath} Consequently, \begin{displaymath} v - \sum^p_{i=1}\sum^{m_i}_{\ell =0} {\alpha }_{i\ell } N^{\ell } v_i = \sum^p_{i=1}{\beta }_i N^{m_i+1}v_i + \sum^q_{\ell =1}{\gamma }_{\ell } y_{\ell }, \end{displaymath} since the vectors \begin{displaymath} \{ N^{m_1}w_1, \ldots \, , N^{m_p}w_p, y_1, \ldots \, , y_q \} = \{ N^{m_1+1}v_1, \ldots \, , N^{m_p+1}v_p, y_1, \ldots \, , y_q \} \end{displaymath} form a basis of $\ker N$. Linear independence is a consequence of the following counting argument. The number of vectors in the Jordan chains is \begin{align} \sum^p_{i=1}(m_i + 2) +q = \sum^p_{i=1}(m_i+1) +(p+q) \, = \, \dim \mathrm{im}\, N + \dim \ker N \, = \, \dim V. \tag*{$\square $} \end{align}
We note that finding the generating vectors of the Young diagram of $N$, or equivalently the Jordan normal form of $N$, involves only solving linear equations with coefficients in the field $\mathrm{k} $, and thus only operations in the field $\mathrm{k} $.
\subsection{Some facts about $S$} \label{sec3subsec2}
We now study the semisimple part $S$ of $A$.
\noindent \textbf{Lemma 3.2.1} $V = \ker S \oplus \mathrm{im}\, S$. Moreover, the characteristic polynomial ${\chi }_S(\lambda )$ of $S$ can be written as the product of ${\lambda }^n$, where $n = \dim \ker S$, and ${\chi }_{S|\mathrm{im}\, S}$, the characteristic polynomial of $S|\mathrm{im}\, S$. Note that ${\chi }_{S|\mathrm{im}\, S}(0) \ne 0$.
\noindent \textbf{Proof.} $\ker S$ is an $S$-invariant subspace of $V$. Since $Sv =0$ for every $v \in $ $\ker S$, the characteristic polynomial
of $S|\ker S$ is ${\lambda }^n$.
Because $S$ is semisimple, there is an $S$-invariant subspace $Y$ of $V$
such that $V = \ker S \oplus Y$. The linear mapping $S|Y:Y \rightarrow Y$ is invertible, for if $Sy=0$ for some $y \in Y$, then $S(y+u) =0$ for every $u \in \ker S$. Therefore $y+u \in \ker S$, which implies that
$y \in \ker S \cap Y = \{ 0 \} $, that is, $y=0$. So $S|Y$ is invertible.
Suppose that $y \in Y$, then $y = S\big( (S|Y)^{-1}y \big) \in \mathrm{im}\, S$. Thus $Y \subseteq \mathrm{im}\, S$. But $\dim \mathrm{im}\, S = \dim V - \dim \ker S = \dim Y$. So $Y = \mathrm{im}\, S$.
Since $\ker S \cap \mathrm{im}\, S = \{0 \} $, we see that $\lambda $ does
not divide the polynomial ${\chi }_{S|\mathrm{im}\, S}(\lambda )$.
Consequently, ${\chi }_{S|\mathrm{im}\, S}(0) \ne 0$. Since $V = \ker S \oplus \mathrm{im}\, S$, where $\ker S$ and $\mathrm{im}\, S$ are $S$-invariant subspaces of $V$, we obtain \begin{align} {\chi }_S(\lambda ) & = {\chi }_{\ker S}(\lambda ) \cdot
{\chi }_{S|\mathrm{im}\, S}(\lambda ) =
{\lambda }^n{\chi }_{S|\mathrm{im}\, S}(\lambda ). \tag*{$\square $} \end{align}
\noindent \textbf{Lemma 3.2.2} The subspaces $\ker S$ and $\mathrm{im}\, S$ are $N$-invariant and hence $A$-invariant.
\noindent \textbf{Proof.} Suppose that $x \in \mathrm{im}\, S$. Then there is a vector $v \in V$ such that $x = Sv$. So $Nx = N(Sv) = S(Nv) \in \mathrm{im}\, S$. In other words, $\mathrm{im}\, S$ is an $N$-invariant subspace of $V$. Because $\mathrm{im}\, S$ is also $S$-invariant and $A = S+N$, it follows that $\mathrm{im}\, S$ is an $A$-invariant subspace of $V$. Suppose that $x \in \ker S$, that is, $Sx =0$. Then $S(Nx) = N(Sx) =0$. So $Nx \in \ker S$. Therefore $\ker S$ is an $N$-invariant and hence $A$-invariant subspace of $V$.
$\square $
\subsection{Description of the uniform normal form} \label{sec3subsec3}
We now describe the uniform normal form of the linear mapping $A:V \rightarrow V$, using both its semisimple and nilpotent parts.
Since $A|\ker S = N|\ker S$, we can apply the discussion of \S 3.1 to obtain
a basis of $\ker S$ which realizes the Young diagram of $N|\ker S$, which, say, has $r$ columns. For $1 \le {\ell } \le r$ let $F_{q_{\ell }}$ be the space spanned by the generating
vectors of Jordan chains of $N|\ker S$ in $\ker S$ of length $m_{\ell }$.
By lemmas 3.2.1 and 3.2.2, $A|\mathrm{im}\, S$ is a linear mapping of $\mathrm{im}\, S$ into itself with invertible semisimple part $S|\mathrm{im}\, S$ and commuting nilpotent part $N|\mathrm{im}\, S$. Using the discussion of \S 3.1, for every $r+1 \le {\ell } \le p$ let $F_{q_{\ell}}$ be the space spanned by the generating vectors of the Jordan chains of $N|\mathrm{im}\, S$ in $\mathrm{im}\, S$ of length $m_{\ell }$, which occur in the remaining $p-r$ columns of the Young diagram of $N|\mathrm{im}\, S$.
Now we prove
\noindent \textbf{Claim 3.3.1} For each $1 \le {\ell } \le p$ the space $F_{q_{\ell }}$ is $S$-invariant.
\noindent \textbf{Proof.} Let $v^{\ell } \in F_{q_{\ell }}$. Then $\{ v^{\ell }, \, N v^{\ell }, \ldots , N^{m_{{\ell }}-1}v^{\ell } \} $ is a Jordan chain in the Young diagram of $N$ of length $m_{\ell }$ with generating vector $v^{\ell }$. For each $1 \le {\ell } \le r$ we have $F_{q_{\ell }} \subseteq \ker S$. So trivially $F_{q_{\ell }}$ is $S$-invariant, because $S =0$ on $F_{q_{\ell }}$. Now suppose that $r+1 \le {\ell } \le p$. Then $F_{q_{\ell }} \subseteq \mathrm{im}\, S $ and
$S|\mathrm{im}\, S$ is invertible. Furthermore, suppose that for some ${\alpha }_j \in \mathrm{k} $ with $0 \le j \le m_{\ell }-1$ we have $0 = \sum^{m_{\ell }-1}_{j=0} {\alpha }_j N^j(Sv^{\ell })$. Then $0 = S\big( \sum^{m_{\ell }-1}_{j=0}{\alpha }_j N^jv^{\ell } \big)$, because
$S|\mathrm{im}\, S$ and $N|\mathrm{im}\, S$ commute. Since $S|\mathrm{im}\, S$ is invertible, the preceding equality implies $0 = \sum^{m_{\ell }-1}_{j=0}{\alpha }_j N^jv^{\ell }$. Consequently, by claim 3.1.1 we obtain ${\alpha }_j =0 $ for every $0 \le j \le m_{\ell }-1$. In other words, $\{ Sv^{\ell }, \, N(Sv^{\ell }), \ldots , N^{m_{\ell }-1}(Sv^{\ell }) \} $ is
a Jordan chain of $N|\mathrm{im}\, S$ in $\mathrm{im}\, S$ of length $m_{\ell }$ with generating vector $Sv^{\ell }$. So $Sv^{\ell } \in F_{q_{\ell }}$. Thus $F_{q_{\ell }}$ is an $S$-invariant subspace of $\mathrm{im}\, S$ and hence is an $S$-invariant subspace of $V$, since $V = \mathrm{im}\, S \oplus \ker S$.
$\square $
An $A$-invariant subspace $U$ of $V$ is \emph{uniform} of \emph{height} $m-1$ if $N^{m-1}U \ne \{ 0 \} $, $N^m U =\{0 \} $, and $\ker N^{m-1} \cap U = NU$. For each $1 \le {\ell } \le r$ let $U^{q_{\ell }}$ be the space spanned by the vectors in the Jordan chains of length $m_{\ell }$ in the Young diagram of $N|\ker S$, and for $r+1 \le \ell \le p$ let $U^{q_{\ell }}$ be the space spanned by the vectors in the Jordan chains of length $m_{\ell }$ in the Young diagram of $N|\mathrm{im}\, S$.
\noindent \textbf{Claim 3.3.2} For each $1 \le {\ell } \le p$ the subspace $U^{q_{\ell }}$ is uniform of height $m_{\ell }-1$.
\noindent \textbf{Proof.} By definition $U^{q_{\ell }} = F_{q_{\ell }} \oplus NF_{q_{\ell }} \oplus \cdots \oplus N^{m_{\ell }-1}F_{q_{\ell }}$. Since $N^{m_{\ell }}F_{q_{\ell }} =\{ 0 \} $ but $N^{m_{\ell }-1}F_{q_{\ell }} \ne \{ 0 \} $, the subspace $U^{q_{\ell }}$ is $A$-invariant and of height $m_{\ell }-1$. To show that $U^{q_{\ell }}$ is uniform we need only show that $\ker N^{m_{\ell }-1} \cap U^{q_{\ell }} \subseteq NU^{q_{\ell }}$, since the inclusion of $NU^{q_{\ell }}$ in $\ker N^{m_{\ell }-1}$ follows from the fact that $N^{m_{\ell }}F_{q_{\ell }} =0$. Suppose that $u \in \ker N^{m_{\ell }-1} \cap U^{q_{\ell }}$. Then for every $0 \le i \le m_{\ell }-1$ there are unique vectors $f_i \in F_{q_{\ell }}$ such that $u = f_0 + Nf_1 + \cdots + N^{m_{\ell }-1}f_{m_{\ell }-1}$. Since $u \in \ker N^{m_{\ell }-1}$ we get $0 = N^{m_{\ell }-1}u = N^{m_{\ell }-1}f_0$. If $f_0 \ne 0$, then the preceding equality contradicts the fact that $f_0$ is a generating vector of a Jordan chain of $N$ of length $m_{\ell }$. Therefore $f_0 =0$, which means that $u = N(f_1 + \cdots + N^{m_{\ell }-2}f_{m_{\ell }-1}) \in NU^{q_{\ell }}$. This shows that $\ker N^{m_{\ell }-1} \cap U^{q_{\ell }} \subseteq NU^{q_{\ell }}$. Hence $\ker N^{m_{\ell }-1} \cap U^{q_{\ell }} = NU^{q_{\ell }}$, that is, the subspace $U^{q_{\ell }}$ is uniform of height $m_{\ell }-1$.
$\square $
Now we give an explicit description of the uniform normal form of the linear mapping $A$. For each $1 \le {\ell } \le p$ let ${\chi }_{S|F_{q_{\ell }}}$ be the characteristic polynomial of $S$ on $F_{q_{\ell }}$. From the fact that every summand in $U^{q_{\ell }} = F_{q_{\ell }} \oplus NF_{q_{\ell }} \oplus \cdots \oplus N^{m_{\ell }-1}F_{q_{\ell }}$ is $S$-invariant, it follows that the
characteristic polynomial ${\chi }_{S|U^{q_{\ell }}}$ of $S$ on $U^{q_{\ell }}$ is
${\chi }^{m_{\ell }}_{S|F_{q_{\ell }}}$. Since $V = \sum^p_{\ell =1}\oplus U^{q_{\ell }}$, we
obtain ${\chi }_S = \prod^p_{\ell =1}{\chi }^{m_{\ell }}_{S|F_{q_{\ell }}}$. Choose a basis ${\{ u^{\ell }_j \} }^{q_{\ell }}_{j=1}$
of $F_{q_{\ell }}$ so that the matrix of $S|F_{q_{\ell }}$ is the $q_{\ell } \times q_{\ell }$
companion matrix $C_{q_{\ell }}$ (\ref{eq-twostars1}) associated to the characteristic polynomial ${\chi }_{S|F_{q_{\ell }}}$. When $1 \le {\ell } \le r$ the companion matrix
$C_{q_{\ell }}$ is $0$ since $S|F_{q_{\ell }} =0$. With respect to the basis ${\{ u^{\ell }_j, \, Nu^{\ell }_j, \ldots , N^{m_{\ell }-1}u^{\ell }_j \} }^{q_{\ell }}_{j=1} $ of $U^{q_{\ell }}$
the matrix of $A|U^{q_{\ell }}$ is the $m_{\ell }q_{\ell } \times m_{\ell }q_{\ell }$ matrix \begin{displaymath} D_{m_{\ell }q_{\ell }} = \mbox{\footnotesize $\left( \begin{array}{cccccl} C_{q_{\ell }} & 0 & 0 & \cdots & \cdots & 0 \\ I & C_{q_{\ell }} & 0 & \cdots & \vdots & 0 \\ 0 & I & \ddots & & \vdots & \vdots \\ \vdots & & \ddots & \ddots & \vdots & \vdots \\ 0 & \cdots & 0 & I & C_{q_{\ell }} & 0 \\ 0 & \cdots & \cdots & 0 & I & C_{q_{\ell }} \end{array} \right) .$} \end{displaymath} Since $V = \sum^p_{{\ell }=1} \oplus U^{q_{\ell }}$, the matrix of $A$ is $\mathrm{diag}\, (D_{m_1q_1}, \ldots , D_{m_pq_p})$ with respect to the basis ${\{ u^{\ell }_j, \, Nu^{\ell }_j, \ldots , N^{m_{\ell }-1}u^{\ell }_j \} }^{(q_{\ell }, p)}_{(j, \ell ) = (1,1)}$. We call the preceding matrix the \emph{uniform normal form} for the linear map $A$ of $V$ into itself. We note that this normal form can be computed using only operations in the field $\mathrm{k} $ of characteristic $0$.
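For illustration (ours, not the paper's), the block matrix $D_{m_{\ell }q_{\ell }}$ can be assembled mechanically from the companion block $C_{q_{\ell }}$; a minimal sketch with a hypothetical function name:

```python
def uniform_block(C, m):
    # D: an m-by-m grid of q-by-q blocks with C on the block diagonal
    # and identity blocks on the first block subdiagonal.
    q = len(C)
    n = m * q
    D = [[0] * n for _ in range(n)]
    for b in range(m):                      # diagonal blocks C
        for i in range(q):
            for j in range(q):
                D[b * q + i][b * q + j] = C[i][j]
    for b in range(1, m):                   # subdiagonal identity blocks
        for i in range(q):
            D[b * q + i][(b - 1) * q + i] = 1
    return D
```

For a $1 \times 1$ companion block $C = (c)$ and $m = 2$ this produces $\left( \begin{smallmatrix} c & 0 \\ 1 & c \end{smallmatrix} \right)$, i.e.\ an ordinary Jordan block transposed.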
Using the uniform normal form of $A$ we obtain a factorization of its characteristic polynomial ${\chi }_A$ over the field $\mathrm{k} $.
\noindent \textbf{Corollary 3.3.3} ${\chi }_A (\lambda )=
\prod^p_{\ell =1}{\chi }^{m_{\ell }}_{S|F_{q_{\ell }}}(\lambda ) = {\lambda }^n \,
\prod^p_{\ell = r+1}{\chi }^{m_{\ell }}_{S|F_{q_{\ell }}}(\lambda )$, where $n = \sum^r_{\ell =1} m_{\ell } = \dim \ker S$.
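As a purely illustrative check of Corollary 3.3.3 (our addition, with all names hypothetical), the sketch below computes characteristic polynomials over the rationals with the Faddeev--LeVerrier recursion, which uses only field operations, and verifies that a small block matrix $D$ of the displayed form with two diagonal blocks $C$ satisfies ${\chi }_D = {\chi }_C^2$.

```python
from fractions import Fraction

def charpoly(M):
    # Coefficients [1, c1, ..., cn] of det(lambda*I - M), monic, in decreasing
    # powers, via the Faddeev--LeVerrier recursion in exact rational arithmetic.
    n = len(M)
    M = [[Fraction(v) for v in row] for row in M]
    coeffs = [Fraction(1)]
    Mk = [[Fraction(0)] * n for _ in range(n)]
    for k in range(1, n + 1):
        T = [row[:] for row in Mk]
        for i in range(n):
            T[i][i] += coeffs[-1]          # T = M_{k-1} + c_{k-1} I
        Mk = [[sum(M[i][j] * T[j][l] for j in range(n)) for l in range(n)]
              for i in range(n)]           # M_k = M T
        coeffs.append(-sum(Mk[i][i] for i in range(n)) / k)
    return coeffs

# Companion matrix C of chi(lambda) = lambda^2 + 1, and the 4x4 block matrix
# D = [[C, 0], [I, C]] of the displayed type (here with m_l = 2, q_l = 2).
C = [[0, -1], [1, 0]]
D = [[0, -1, 0, 0],
     [1,  0, 0, 0],
     [1,  0, 0, -1],
     [0,  1, 1, 0]]
```

The exact rational arithmetic reflects the remark that the normal form requires only operations in the base field.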
\end{document}
\begin{document}
\title{Fine properties of self-similar solutions of the Navier--Stokes equations}
\begin{abstract} We study the solutions of the nonstationary incompressible Navier--Stokes equations in $\R^d$, $d\ge2$, of self-similar form~$u(x,t)=\frac{1}{\sqrt t}U\bigl(\frac{x}{\sqrt t}\bigr)$, obtained from small and homogeneous initial data~$a(x)$. We construct an explicit asymptotic formula relating the self-similar profile~$U(x)$ of the velocity field to its corresponding initial datum $a(x)$. \end{abstract}
\section{Introduction}
In this paper we are concerned with the study of solutions of the elliptic problem \begin{equation} \label{elp} \begin{cases} -\frac{1}{2}U-\frac{1}{2}(x\cdot\nabla )U-\Delta U+(U\cdot\nabla) U+\nabla P=0\\ \nabla \cdot U=0, \end{cases} \qquad x\in\R^d, \end{equation} where $U=(U_1,\ldots,U_d)$ is a vector field in $\R^d$, $d\ge2$, $\nabla=(\partial_1,\ldots,\partial_d)$, and $P$ is a scalar function defined on~$\R^d$. Such a system arises from the nonstationary Navier--Stokes equations (NS), for an incompressible viscous fluid filling the whole of $\R^d$, when looking for a velocity field $u(x,t)$ and pressure $p(x,t)$ of forward self-similar form: $u(x,t)=\frac{1}{\sqrt t}U(x/\sqrt t)$ and $p(x,t)=\frac{1}{t} P(x/\sqrt t)$. An important motivation for studying the system~\eqref{elp} is that the corresponding self-similar velocity fields $u(x,t)$ describe the asymptotic behavior at large scales for a wide class of Navier--Stokes flows. Moreover, simple necessary and sufficient conditions for a solution of the Navier--Stokes equations to have an asymptotically self-similar profile for large~$t$ are available; see \cite{Pla98}. We refer to \cite{Can95} and \cite{Lem02} for more explanations and further motivations.
The problem that we address in the present paper is the study of the
asymptotic behavior as $|x|\to\infty$ of a large class of solutions to the system~\eqref{elp}.
The existence of nontrivial solutions of~\eqref{elp} has been known for more than sixty years. For example, in the three-dimensional case Landau observed that, by imposing an additional axisymmetry condition, one can construct, via ordinary differential equation methods, a one-parameter family $(U,P)$, smooth outside the origin and satisfying~\eqref{elp} in the pointwise sense for $x\not=0$ (see, e.g., \cite[p. 207]{Bat74}).
Landau's solutions have the additional property that $U$ is a homogeneous vector field of degree~$-1$ and $P$ is homogeneous of degree~$-2$, in such a way that the corresponding solution $(u,p)$ of (NS) turns out to be stationary. A uniqueness result by \v Sver\'ak \cite{Sve06} implies, on the other hand, that no solution with these properties exists in $\R^3$ other than Landau's axisymmetric ones. See also~\cite{KorSve} for a detailed study of the asymptotic properties of these flows.
The class of solutions to the system~\eqref{elp} is, however, much larger. Indeed, Giga and Miyakawa \cite{GM} proposed a general method, based on the analysis of the vorticity equation in Morrey spaces, for constructing nonstationary self-similar solutions of (NS). A more direct construction was later proposed by Cannone, Meyer and Planchon \cite{CanMP}, \cite{CanP96}; see also~\cite[Chapt. 23]{Lem02}. Now we know that to obtain new solutions $U$ of~\eqref{elp} we only have to choose vector fields $a(x)$ in $\R^d$, homogeneous of degree $-1$ and satisfying some mild smallness and regularity assumptions on the sphere $\S^{d-1}$: the simplest example in $\R^3$ is obtained by taking a small $\epsilon>0$ and letting \begin{equation} \label{simpa}
a(x)=\biggl(- \frac{\epsilon\, x_2}{|x|^2}, \frac{\epsilon\,x_1}{|x|^2}, 0\biggr), \end{equation} but a condition like
$a|_{\S^{d-1}}\in L^\infty(\S^{d-1})$ with small norm (or similar weaker conditions involving the $L^d$-norm or other Besov-type norms on the sphere) would be enough. The basic idea is that the Cauchy problem for Navier--Stokes can be solved, through the application of the contraction mapping theorem,
in Banach spaces made of functions invariant under the natural scaling. The profile $U$ of the self-similar solution $u$ obtained in this way (i.e. $U=u(x,1)$) then solves the elliptic system~\eqref{elp}.
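As a quick numerical sanity check (our addition, not part of the original construction), the sketch below verifies that the datum of~\eqref{simpa} is divergence-free and homogeneous of degree $-1$; the amplitude and sample point are arbitrary choices.

```python
import math

EPS = 0.1  # amplitude: any small epsilon > 0 (arbitrary choice)

def a(x):
    # the datum of (simpa): a(x) = eps * (-x2, x1, 0) / |x|^2
    r2 = x[0] ** 2 + x[1] ** 2 + x[2] ** 2
    return (-EPS * x[1] / r2, EPS * x[0] / r2, 0.0)

def divergence(f, x, h=1e-6):
    # div f at x by central differences
    d = 0.0
    for j in range(3):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        d += (f(xp)[j] - f(xm)[j]) / (2.0 * h)
    return d

x = (0.7, -1.3, 2.1)
div_at_x = divergence(a, x)
lam = 3.0
# homogeneity of degree -1: a(lam * x) = a(x) / lam
scaling_error = max(abs(a([lam * c for c in x])[j] - a(x)[j] / lam)
                    for j in range(3))
```

Both properties hold exactly; the numerical residuals come only from finite differencing and floating-point rounding.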
Regularity properties and uniqueness classes of these (small) self-similar solutions have been studied in different functional settings (see, e.g., \cite{MiuS06}, \cite{GerPS}) and are now quite well understood.
On the other hand, probably because of the lack of known relations between the self-similar profile~$U$ and the datum~$a$, the problem of the asymptotic behavior of the solutions $U$ obtained in this way had not previously been addressed in the literature, even in the case of self-similar flows emanating from the simplest data, such as in~\eqref{simpa}.
The main purpose of this paper is to construct an explicit formula relating $U(x)$ to $a(x)$, and valid asymptotically for $|x|\to\infty$.
We will also consider the more general problem of
constructing asymptotic profiles as $|x| \to\infty$ for (not necessarily self-similar) solutions $u(x,t)$ of the Navier--Stokes equations with slow decay at infinity
(typically, $|u(x,t)|\le C|x|^{-1}$). Our motivation for this generalization is that solutions with this type of decay have, in general, non-self-similar asymptotics for large time. In fact, Cazenave, Dickstein and Weissler showed that their large time behavior can be much more chaotic than for the solutions described by Planchon \cite{Pla98}. As shown in \cite{CazDW}, however, one can gain some understanding of the large time behavior of these solutions from the analysis of their spatial behavior at infinity.
\section{Main results and methods} \label{sec2}
\subsection{Notations and functional spaces}
If $\mathcal{Q}=(\mathcal{Q}_{j;h,k})$ and $B=(B_{h,k})$ are, respectively, a third-order and a second-order tensor in the Euclidean space $\R^d$, we denote by $\mathcal{Q}:B$ the vector field with components $$(\mathcal{Q}:B)_j=
\sum_{h,k=1}^d \mathcal{Q}_{j;h,k}B_{h,k}, \qquad j=1,\ldots,d.$$ Sometimes, in the proofs of our decay estimates, we will simply write $\mathcal{Q}B$ instead of $\mathcal{Q}:B$ when all components of such vectors can be bounded by the same quantities.
We denote the Gaussian function by
$$g_t(x)=(4\pi t)^{-d/2}e^{-|x|^2/(4t)}, \qquad x\in\R^d,\quad t>0.$$ As usual, we adopt the semi-group notation $e^{t\Delta}a=g_t*a$
for the solution of the heat equation $\partial_t u=\Delta u$ with $u|_{t=0}=a$, for an initial datum $a$ defined on the whole of~$\R^d$.
All the functions we deal with are supposed to be measurable. By definition, for any $\vartheta\ge0$ and $m\in \N$, \begin{equation} \begin{split} &f\in \dot E^m_\vartheta\iff
f\in C^m(\R^d\backslash\{0\})\quad\hbox{and}\quad|x|^{\vartheta+|\alpha|}\, \partial^\alpha f\in L^\infty(\R^d)\quad
\hbox{$\forall\,\alpha\in\N^d,\; |\alpha|\le m$}.\\ \end{split} \end{equation} We are especially interested in the case $\vartheta=1$. Indeed, the spaces $\dot E^m_1$ contain homogeneous functions of degree~$-1$ (and, in particular, the initial datum $a(x)$ given by~\eqref{simpa}).
The non-homogeneous counterpart of $\dot E^m_\vartheta$ is the smaller space $E_\vartheta^m$, which is defined by the additional requirement that $\partial^\alpha f\in L^\infty(\R^d)$
for all $|\alpha|\le m$. These spaces are equipped with their natural norm: \begin{equation*} \begin{split}
&\|f\|_{\dot E_\vartheta^m}=\max_{|\alpha|\le m}\,\sup_{x\in\R^d\backslash\{0\}} |x|^{\vartheta+|\alpha|}|\partial^\alpha f(x)|,\\
&\|f\|_{E_\vartheta^m}=\max_{|\alpha|\le m}\,\sup_{x\in\R^d} (1+|x|)^{\vartheta+|\alpha|}|\partial^\alpha f(x)|. \end{split} \end{equation*}
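The following small script (our addition, with an arbitrary sampling scheme) estimates the $\dot E^1_1$ norm of the model function $f(x)=1/|x|$ in $d=3$, for which the supremum equals $1$ and the gradient term attains it along the coordinate axes.

```python
import math, random

random.seed(0)

def f(x):
    # f(x) = 1/|x|, homogeneous of degree -1
    return 1.0 / math.sqrt(sum(c * c for c in x))

def grad_f(x):
    # grad f = -x/|x|^3
    r3 = sum(c * c for c in x) ** 1.5
    return [-c / r3 for c in x]

def weighted_sup_candidate(x):
    r = math.sqrt(sum(c * c for c in x))
    vals = [r * abs(f(x))]                        # |x|^theta |f|, theta = 1
    vals += [r ** 2 * abs(g) for g in grad_f(x)]  # |x|^{theta+1} |d_j f|
    return max(vals)

samples = [[random.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(2000)]
samples.append([1.0, 0.0, 0.0])  # the gradient factor is maximal on the axes
sup_est = max(weighted_sup_candidate(x) for x in samples)
```

Note that the weights $|x|^{\vartheta+|\alpha|}$ make the norm invariant under the scaling $f\mapsto \lambda f(\lambda\,\cdot)$, which is the point of these spaces.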
Our starting point is a classical result by Cannone, Meyer and Planchon about the construction of self-similar solutions of the Navier--Stokes equations \begin{equation*} \begin{cases}
\partial_t u+\nabla\cdot(u\otimes u)=\Delta u -\nabla p\\
\hbox{div}\,u=0\\
u|_{t=0}=a, \end{cases} \qquad x\in\R^d,\, t>0. \end{equation*} Even though their construction goes through under very general assumptions on the regularity of the initial data, here we are mainly interested in the following simple result:
\begin{theorem}[see \cite{CanMP}, \cite{CanP96}] \label{theoCMP} For all $m\in\N$ there exist $\epsilon,\beta>0$ such that for all initial datum $a\in \dot E^m_1$ homogeneous of degree~$-1$, divergence-free and satisfying \begin{equation} \label{smallCMP}
\|a\|_{\dot E_1^m} <\epsilon, \end{equation} there exists a unique self-similar solution $u(x,t)=\frac{1}{\sqrt t}U\Bigl(\frac{x}{\sqrt t}\Bigr)$
of the Navier--Stokes system (written in the usual integral form, see (NS) below) starting from~$a$, and such that $\|U\|_{E^m_1}<\beta$. Moreover, \begin{equation}
\label{proCMP}
U(x)=e^{\Delta}a+{\cal O}(|x|^{-2}), \qquad \hbox{as $|x|\to\infty$}. \end{equation} \end{theorem}
More precisely, Cannone, Meyer and Planchon prove that $U(x)=e^{\Delta}a(x)+{\cal R}(x)$, where the remainder term satisfies ${\cal R}\in E_2^m$. Their result was stated in dimension three, but their proof easily adapts to all $d\ge2$.
\subsection{Main results}
Our main result shows that one can give a much more precise asymptotic formula relating the profile $U(x)$ to the datum $a(x)$. It turns out that this asymptotic profile has a different structure in different space dimensions.
\begin{theorem} \label{theoss} Let $a(x)$ be a homogeneous datum of degree $-1$, smooth on the unit sphere $\S^{d-1}$ and satisfying the smallness condition~\eqref{smallCMP} for some $m\ge3$. Let $u(x,t)=\frac{1}{\sqrt t}U\Bigl(\frac{x}{\sqrt t}\Bigr)$ be the self-similar solution constructed in Theorem~\ref{theoCMP}. Then the following asymptotic profiles hold: \begin{subequations} \begin{itemize}
\item If $d=2$, we have as $|x|\to\infty$,
\begin{equation} \label{profss2D}
U(x)=a(x)-\log(|x|)\frac{\mathcal{Q}(x)\!:\!A}{|x|^6} + \mathcal{O}(|x|^{-3}). \end{equation} Here $A=(A_{h,k})$ is the $2\times 2$ matrix given by $A_{h,k}=\int_{\S^1} (a_ha_k)$ and $\mathcal{Q}(x)=\bigl(\mathcal{Q}_{j;h,k}(x)\bigr)$, where the $\mathcal Q_{j;h,k}$ are homogeneous polynomials of degree three (given by the explicit formula~\eqref{explicitQ} below).
\item For $d=3$, we have as $|x|\to\infty$, \begin{equation} \label{profss3D} U(x)=a(x)+\Delta a(x)-\P \nabla \cdot(a\otimes a)
-\frac{ \mathcal{Q}(x)\!:\!B}{|x|^7} + \mathcal{O}\bigl(|x|^{-5}\log|x|\bigr), \end{equation} for a $3\times 3$ constant real matrix $B=(B_{h,k})$ depending on~$a$. Here $\P=\mathrm{Id}-\nabla(\Delta)^{-1}\hbox{div}$ is the Leray--Hopf projector onto the divergence-free vector fields.
\item For $d\ge4$, the far-field asymptotics reads, as $|x|\to\infty$, \begin{equation} \label{profss4D}
U(x)=a(x)+\Delta a(x)-\P \nabla\cdot (a\otimes a)+ \mathcal{O}\bigl(|x|^{-5}\log|x|\bigr). \end{equation} \end{itemize} \end{subequations} \end{theorem}
In Section~\ref{sec9} we will restate and prove this theorem in a more general form, removing the assumption that $a$ is homogeneous. This more general theorem will also apply to solutions $u(x,t)$ of Navier--Stokes of non-self-similar form. On the other hand, we will not seek the greatest generality about the regularity of the datum: even though there is a considerable interest in studying self-similar solutions emanating from rough data (see \cite{Lem02}, \cite{Gru06}), in most of our statements we will assume that $a\in C^3(\S^{d-1})$, which is of course non-optimal, but permits us to greatly simplify the presentation of our results and to better emphasize the main ideas.
The method that we present in this paper would, in principle, allow one to compute the asymptotics of $U$ up to any order when $a$ is smooth on $\S^{d-1}$. The higher order terms, however, have quite complicated expressions.
The functions $\Delta a$ and $\P\nabla\cdot (a\otimes a)$ appearing in our expansions are both homogeneous of degree $-3$ and smooth outside the origin. Therefore, our asymptotic profiles imply that conclusion~\eqref{proCMP} of Theorem~\ref{theoCMP} can be improved into
$$U(x)=a(x)+\mathcal{O}\bigl(|x|^{-3}\log|x|\bigr), \qquad\hbox{as $|x|\to\infty$}, \qquad\hbox{if $d=2$},$$ and
$$U(x)=a(x)+\mathcal{O}\bigl(|x|^{-3}\bigr), \qquad\hbox{as $|x|\to\infty$}, \qquad\hbox{if $d\ge3$}.$$ The datum $a$ can be replaced here by its filtered version $e^{\Delta}a$.
It turns out that such improved estimates are optimal for generic
self-similar solutions. For example, in the two dimensional case, the logarithmic factor cannot be removed, since the improved bound $U(x)=a(x)+\mathcal{O}\bigl(|x|^{-3}\bigr)$ would require $\mathcal{Q}\!:\!A\equiv0$: such a stringent condition can be proved to be equivalent to the orthogonality relations $\int_{\S^1} a_1^2=\int_{\S^1} a_2^2$ and $\int_{\S^1} a_1a_2=0$.
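For the two-dimensional analogue of the datum~\eqref{simpa} (a hypothetical choice of ours, $a(x)=\epsilon(-x_2,x_1)/|x|^2$), these orthogonality relations do hold, so the logarithmic term vanishes for this particular datum even though it is present generically. The sketch below checks this with a Riemann sum on $\S^1$; the amplitude and grid size are arbitrary.

```python
import math

EPS = 0.05  # hypothetical small amplitude
N = 100000
dth = 2.0 * math.pi / N
I11 = I22 = I12 = 0.0
for i in range(N):
    th = i * dth
    a1 = -EPS * math.sin(th)  # trace on S^1 of a(x) = eps*(-x2, x1)/|x|^2
    a2 = EPS * math.cos(th)
    I11 += a1 * a1 * dth      # int_{S^1} a_1^2
    I22 += a2 * a2 * dth      # int_{S^1} a_2^2
    I12 += a1 * a2 * dth      # int_{S^1} a_1 a_2
```

The Riemann sum over a full period is exact (to machine precision) for trigonometric polynomials of this degree.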
\subsection{Main methods}
We will use the semigroup method and the theory of mild solutions of the Navier--Stokes equations as explained in detail in the books \cite{Can95} and~\cite{Lem02}. The main novelty of our approach relies on the use of the following ingredients: \begin{enumerate}
\item The first one is the use of remarkable, but not widely known, \emph{cancellation properties\/} hidden inside the kernel $\K(x,t)$ of the Oseen operator $e^{t\Delta}\P$, and inside other related operators, appearing in the integral formulation of Navier--Stokes.
To be more precise, we can write $\K(x,t)=\KK(x)+t^{-d/2}\K_2(x/\sqrt t)$, where $\KK(x)$ is a tensor whose components are homogeneous functions of degree~$-d$
(namely, second order derivatives of the fundamental solution of the Laplacian in~$\R^d$), and $\K_2$ is exponentially decaying as $|x|\to\infty$. Such decomposition already played an important role in our previous work~\cite{BraV07}, where we showed that solutions $u(x,t)$ \emph{arising from well-localized data\/} behave like
$$u(x,t)\sim \nabla_x\KK(x):{E}(t), \qquad\hbox{as $|x|\to\infty$},$$ where ${E}(t)$ is the energy matrix of the flow: ${E}(t)=\bigl(\int_0^t\!\int u_hu_k(y,s)\,dy\,ds\bigr)$.
A crucial fact in the proof of the results of the present paper will be the use of the identities, for $j=1,\ldots,d$, \begin{equation*} \int_{\S^{d-1}}\KK(\omega)\,d\omega=0, \qquad \int_{\S^{d-1}}\omega_j\nabla\KK(\omega)\,d\omega=0. \end{equation*} Such cancellations are somehow hidden in $\K$, because the non-homogeneous part $\K_2$ (and, {\it a fortiori\/}, the kernel $\K$) \emph{does not have\/} a vanishing integral on the sphere.
\item Our second ingredient is the use of \emph{asymptotic formulae for convolution integrals\/}: roughly speaking, these formulae
consist in deducing the exact profile as $|x|\to\infty$ of a convolution product $f*g(x)$, from information on the regularity, the cancellations, and the behavior at infinity of the two factors~$f$ and~$g$. In their simplest form, and for $f$ and $g$ ``well behaved'' at infinity, those formulae read \begin{equation} \label{afci} f*g(x)\sim\Bigl(\int f\Bigr)g(x)+\Bigl(\int g\Bigr)f(x),
\qquad \hbox{as $|x|\to\infty$}. \end{equation} We will apply several generalizations and variants of~\eqref{afci} in different situations (including the case of non-integrable functions), the factors $f$ and $g$ being either the Oseen kernel, the heat kernel, or a function related to the non-linearity. The assumptions for the validity of~\eqref{afci} are quite stringent (notice that~\eqref{afci} is obviously wrong if, {\it e.g.\/}, $f$ and $g$ are both Gaussian functions). Nevertheless, the method that we use here has wide applicability and can be used for constructing the far-field asymptotics of other equations. See, {\it e.g.\/}, \cite{BraK07} for an application to a class of convection equations with anomalous diffusion. \end{enumerate}
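As an illustrative numerical check of~\eqref{afci} (our addition, not part of the paper's argument), one can take $f=g$ to be a one-dimensional Lorentzian, for which the convolution is known in closed form; the remainder then decays like $|x|^{-4}$, consistent with the two-term profile. All names below are ours.

```python
import math

def f(x):
    # f = g: Lorentzian, decay |x|^{-2} = |x|^{-(d+alpha)} with d = 1, alpha = 1
    return 1.0 / (1.0 + x * x)

def conv_exact(x):
    # the convolution of two Cauchy-type profiles is again of Cauchy type:
    # (f*f)(x) = 2*pi/(4 + x^2)
    return 2.0 * math.pi / (4.0 + x * x)

def asym(x):
    # right-hand side of the asymptotic formula: (int f) g + (int g) f, int f = pi
    return 2.0 * math.pi * f(x)

# numerical check of the closed form at one point, by a Riemann sum
x0, L, N = 20.0, 1000.0, 400000
h = 2.0 * L / N
num = sum(f(x0 - (-L + k * h)) * f(-L + k * h) for k in range(N)) * h

# the remainder (f*f)(x) - asym(x) is exactly -6*pi/((4+x^2)(1+x^2)) ~ -6*pi/x^4
errors = [abs(conv_exact(x) - asym(x)) * x ** 4 for x in (20.0, 40.0, 80.0)]
```

The scaled errors approach $6\pi\approx 18.85$ from below, confirming the $|x|^{-4}$ remainder rate for this pair of factors.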
We will also make use of the so-called {\it bi-integral formula\/}. Such formula is obtained by simply iterating the usual integral formulation of the Navier--Stokes equations, which we now recall: \begin{equation*} \tag{NS} \begin{cases} u(t)=e^{t\Delta}a-\displaystyle\int_0^t e^{(t-s)\Delta}\P\nabla\cdot (u\otimes u)(s)\,ds\\ \hbox{div}\,a=0. \end{cases} \end{equation*} Using the Oseen kernel $\K(x, t)$, we can define the Navier--Stokes bilinear operator as \begin{equation*} \label{rep1} B(u,v)(t)=\int_0^t \K(t-s)*\nabla \cdot(u\otimes v)(s)\,ds. \end{equation*} Then (NS) can be written simply as $u=e^{t\Delta}a-B(u,u)$. The bi-integral formula is obtained by a straightforward iteration: \begin{equation} \label{bif} u(t)=e^{t\Delta}a-B(e^{t\Delta}a,e^{t\Delta}a)+2B(e^{t\Delta}a,B(u,u)) -B(B(u,u),B(u,u)). \end{equation}
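As a toy illustration of the algebra behind~\eqref{bif} (our addition), one can replace $B$ by a symmetric scalar bilinear map and check the iteration identity numerically; the factor $2$ in the cross term uses bilinearity and symmetry, as for the symmetrized Navier--Stokes operator.

```python
def B(u, v):
    # toy symmetric bilinear map standing in for the Navier--Stokes operator
    return u * v

a = 0.1  # toy "datum" (arbitrary small value)
u = a
for _ in range(100):
    u = a - B(u, u)  # fixed point of the toy integral equation u = a - B(u, u)

lhs = u
# one substitution of u = a - B(u, u) into itself, expanded by bilinearity:
rhs = a - B(a, a) + 2.0 * B(a, B(u, u)) - B(B(u, u), B(u, u))
```

The expansion $a-(a-u^2)^2=a-a^2+2au^2-u^4$ is an algebraic identity, so `lhs` and `rhs` agree up to the fixed-point residual.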
Roughly speaking, combining equation~\eqref{bif} with some nice properties of the heat kernel and fine decay estimates of the bilinear operator we can prove ({\it e.g.\/} when $d=3$) that $e^{t\Delta}a\sim a+t\Delta a$ and $B(e^{t\Delta}a,e^{t\Delta}a)(x)\sim t\P\nabla\cdot(a\otimes a)(x)$ as
$|x|\to\infty$. After obtaining an explicit far-field asymptotics for $u(x,t)$, it is easy to deduce, in the self-similar case, the behavior at infinity of the profile $U(x)$, by passing to self-similar variables and eliminating~$t$. The two last terms in the above bi-integral formula will contribute to the remaining terms in the right-hand side of expansion~\eqref{profss3D}.
Notice that, in the two-dimensional case, the term
$\P\nabla\cdot(a\otimes a)$ is not well-defined when $a$ is homogeneous of degree~$-1$. This explains the different structure of our asymptotic expansions in this case. The special structure of the asymptotic profiles in the two-dimensional case can be observed also if, instead of considering the behavior for $|x|\to\infty$ as we do in this paper, one focuses on the behavior of solutions for large time. (See, {\it e.g.\/}, \cite{GW3}).
For the sake of simplicity, in this paper we consider only data such that $a(x)\sim |x|^{-1}$ as $|x|\to\infty$, which is the natural assumption for the study of global strong solutions and related self-similarity phenomena.
However, the study of the asymptotic behavior for large~$|x|$ of solutions $u$
(possibly defined only locally in time) is also of interest in more general situations, such as $a(x)\sim |x|^{-\vartheta}$. The far-field behavior of the solution $u(x,t)$ of (NS) then depends mainly on the competition between three factors. The first one is the spatial localization of the datum (say, the value of the exponent~$\vartheta$) and the consequent space-time decay of the linear evolution $e^{t\Delta}a$. The other two factors are the action of the quadratic non-linearity $u\otimes u$ and of the
non-local operator $\P\hbox{div}(\cdot)$.
When $\vartheta>d+1$, the action of this nonlocal operator (whose kernel behaves at infinity like $|x|^{-d-1}$) is predominant, and is responsible for \emph{spatial spreading effects}. When $(d+1)/2<\vartheta<d+1$ (the limit case $\vartheta=(d+1)/2$ corresponding to the situations in which $u\otimes u$ decays like the kernel), the linear evolution becomes predominant and the spatial spreading phenomenon is not directly observed on the solution, but rather on its~\emph{fluctuation} $u-e^{t\Delta}a$. We refer to our previous paper~\cite{BraV07} for a sharp description of these issues.
The asymptotic profiles of $u$ as $|x|\to \infty$ in the cases $0<\vartheta<1$ and $1<\vartheta<(d+1)/2$
should have a slightly different structure, but they are not known with precision yet. The method that we use in this paper for $\vartheta=1$, and in particular the idea of iterating the Duhamel formula making use of the cancellations of the kernels, might be used to compute them. More and more iterations would be needed to deal with data decaying slower than $|x|^{-1}$, or to determine the asymptotics to a higher order. On the other hand, no iteration or cancellation property was needed for the faster decaying data studied in~\cite{BraV07}.
The plan of the paper is the following: we begin with the study of the Oseen kernel. In Section~\ref{sec4}, after some generalities about the asymptotic of convolutions, we describe the behavior at large distances of solutions to the heat equation. Section~\ref{sec5} is devoted to the (more or less standard) construction of solutions with a prescribed space-time decay. In Section~\ref{section6} we show how to use the cancellations of the Oseen kernel to get some new fine estimates. In the remaining part of the paper we will state and prove a more general form of Theorem~\ref{theoss}.
\section{Asymptotics and cancellations of the Oseen kernel $\K$ and of the kernel $F$} \label{sec3}
Let $\K(x,t)$ be the kernel of $e^{t\Delta}\P$ and let $F(x,t)$ be the kernel of $e^{t\Delta}\P\hbox{div}(\cdot)$. Both $\K(\cdot,t)$ and $F(\cdot, t)$ belong to $C^\infty(\R^d)$ and they satisfy the scaling properties $\K(x,t)=t^{-d/2}\K(x/\sqrt t,1)$ and $F(x,t)=t^{-(d+1)/2}F(x/\sqrt t,1)$.
Denote by $\Gamma$ the Euler Gamma function and by $\delta_{j,k}$ the Kronecker symbol. The following proposition extends and completes a lemma contained in~\cite{BraV07}. \begin{proposition} \label{asG} Let $\KK=(\KK_{j,k})$, where $\KK_{j,k}(x)$ is the homogeneous function of degree~$-d$ \begin{subequations} \begin{equation} \label{KK}
\KK_{j,k}(x)=\frac{\Gamma(d/2)}{2\pi^{d/2}}\cdot \frac{\bigl(-\delta_{j,k}|x|^2+dx_jx_k\bigr)} {|x|^{d+2}}, \end{equation} and $\FF=(\FF_{j;h,k})$, where $\FF_{j;h,k}=\partial_h\KK_{j,k}$, which we can write also as \begin{equation} \label{FF} \FF_{j;h,k}(x)=\frac{\Gamma\bigl(\frac{d+2}{2}\bigr)}{\pi^{d/2}}\cdot
\frac{\sigma_{j,h,k}(x)|x|^2-(d+2)x_jx_hx_k}{|x|^{d+4}}, \end{equation} where $\sigma_{j,h,k}(x)=\delta_{j,h}x_k+\delta_{h,k}x_j+\delta_{k,j}x_h$, for $j,h,k=1,\ldots,d$. \end{subequations} \begin{subequations} Then the following decompositions hold: \begin{equation} \label{prof KK}
\K(x,t)=\KK(x)+|x|^{-d}\Psi\Bigl(x/\sqrt t\Bigr), \end{equation} and \begin{equation} \label{profKF}
F(x,t)=\FF(x)+|x|^{-d-1}\widetilde\Psi\Bigl(x/\sqrt t\Bigr), \end{equation} where $\Psi$ and $\widetilde\Psi$ are smooth outside the origin and such that, for all $\alpha\in\N^d$, and $x\not=0$,
$|\partial^\alpha \Psi(x)|+|\partial^\alpha \widetilde\Psi(x)|\le Ce^{-c|x|^2}$. Here $C$ and $c$ are positive constants, depending on $|\alpha|$ but not on $x$. \end{subequations}
Moreover, the following cancellations hold: \begin{equation} \label{canc}
\left\{ \begin{aligned} & \displaystyle\int_{\S^{d-1}} \KK(\omega)\,d\sigma(\omega)=\displaystyle\int_{\S^{d-1}}
\omega_\ell\KK(\omega)\,d\sigma(\omega)=0\\ & \displaystyle\int_{\S^{d-1}}\FF(\omega)\,d\sigma(\omega)
=\displaystyle\int_{\S^{d-1}} \omega_{\ell}\FF(\omega)d\sigma(\omega)=0\\ &\displaystyle\int_{\S^{d-1}} \omega_{\ell}\omega_{m}\FF(\omega)d\sigma(\omega)=0, \qquad\qquad\ell,m=1,\ldots,d. \end{aligned} \right. \end{equation} \end{proposition}
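The first cancellations in~\eqref{canc} can be checked numerically in $d=2$, where $\KK$ restricted to the unit circle has the explicit form used below (our illustrative addition; the quadrature parameters are arbitrary).

```python
import math

def KK(w):
    # d = 2: KK_{j,k}(w) = (1/(2*pi)) * (-delta_{jk} + 2 w_j w_k) on |w| = 1
    c = 1.0 / (2.0 * math.pi)
    return [[c * (-(1.0 if j == k else 0.0) + 2.0 * w[j] * w[k])
             for k in range(2)] for j in range(2)]

N = 20000
dth = 2.0 * math.pi / N
M0 = [[0.0] * 2 for _ in range(2)]                      # int_{S^1} KK
M1 = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]  # int_{S^1} w_l KK
for i in range(N):
    th = i * dth
    w = (math.cos(th), math.sin(th))
    K = KK(w)
    for j in range(2):
        for k in range(2):
            M0[j][k] += K[j][k] * dth
            for l in range(2):
                M1[l][j][k] += w[l] * K[j][k] * dth

max0 = max(abs(M0[j][k]) for j in range(2) for k in range(2))
max1 = max(abs(M1[l][j][k]) for l in range(2) for j in range(2) for k in range(2))
```

All the integrals vanish: the diagonal terms because $\int \omega_j^2 = \pi$ cancels $-\pi\delta_{j,k}$, and the first moments by oddness.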
\begin{remark}
\label{remQ} The tensor of homogeneous polynomials $\mathcal{Q}(x)=\bigl(\mathcal Q_{j;h,k}(x)\bigr)$ appearing in the statement of Theorem~\ref{theoss} is defined by the relation
$\FF(x)=|x|^{-d-4}\mathcal{Q}(x)$, that is, with $\gamma_d=\Gamma(\frac{d+2}{2})/\pi^{d/2}$,
\begin{equation}
\label{explicitQ}
\mathcal{Q}_{j;h,k}(x)=\gamma_d\Bigl( (\delta_{j,h}x_k+\delta_{h,k} x_j+\delta_{k,j}x_h)|x|^2-(d+2)x_j x_h x_k\Bigr). \end{equation}
\end{remark}
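As a numerical consistency check (our addition), the relation $\FF_{j;h,k}=\partial_h\KK_{j,k}$ can be compared against the explicit formula~\eqref{explicitQ} by central differences in $d=3$; the sample point is arbitrary.

```python
import math

d = 3
gamma_d = math.gamma((d + 2) / 2) / math.pi ** (d / 2)  # = 3/(4 pi) for d = 3
c_KK = math.gamma(d / 2) / (2.0 * math.pi ** (d / 2))   # = 1/(4 pi) for d = 3

def delta(a, b):
    return 1.0 if a == b else 0.0

def KK(j, k, x):
    # (KK): c_KK * (-delta_{jk}|x|^2 + d x_j x_k)/|x|^{d+2}
    r2 = sum(c * c for c in x)
    return c_KK * (-delta(j, k) * r2 + d * x[j] * x[k]) / r2 ** ((d + 2) / 2)

def FF(j, h, k, x):
    # (FF): gamma_d * (sigma_{jhk}(x)|x|^2 - (d+2) x_j x_h x_k)/|x|^{d+4}
    r2 = sum(c * c for c in x)
    sigma = delta(j, h) * x[k] + delta(h, k) * x[j] + delta(k, j) * x[h]
    return gamma_d * (sigma * r2 - (d + 2) * x[j] * x[h] * x[k]) / r2 ** ((d + 4) / 2)

x = (0.9, -0.4, 1.2)
step = 1e-5
err = 0.0
for j in range(3):
    for h in range(3):
        for k in range(3):
            xp, xm = list(x), list(x)
            xp[h] += step
            xm[h] -= step
            dKK = (KK(j, k, xp) - KK(j, k, xm)) / (2.0 * step)
            err = max(err, abs(dKK - FF(j, h, k, x)))
```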
\noindent{\em Proof.} The symbol of $\K$ is \begin{equation*} \label{symb}
\widehat \K_{j,k}(\xi,t)=e^{-t|\xi|^2}\biggl(\delta_{j,k}-\frac{\xi_j\xi_k}{|\xi|^2}\biggr)
=e^{-t|\xi|^2}\delta_{j,k}-\int_t^\infty \xi_j\xi_ke^{-s|\xi|^2}\,ds. \end{equation*}
Taking the inverse Fourier transform we get \begin{equation*} \K_{j,k}(x,t)= \delta_{j,k}\,g_t(x)+\int_t^\infty\partial_j\partial_k g_s(x)\,ds\equiv \K^{(1)}_{j,k}(x,t)+\K^{(2)}_{j,k}(x,t). \end{equation*}
Computing the derivatives $\partial_j\partial_k g_s(x)$ and changing the variable $\lambda=\frac{|x|}{\sqrt{4s}}$ in the integral we get
\begin{equation*}
\K^{(2)}_{j,k}=\pi^{-d/2}|x|^{-d}\int_0^{|x|/\sqrt {4t}} \biggl( -\delta_{j,k}\, \lambda^{d-1}
+2\lambda^{d+1}\frac{x_j x_k}{|x|^2} \biggr)
e^{-\lambda^2}\,d\lambda. \end{equation*}
But, for all $r>0$ and $\alpha>-1$, $$ \int_0^r \lambda^\alpha e^{-\lambda^2}\,d\lambda =\frac{1}{2}\Gamma\biggl(\frac{\alpha+1}{2}\biggr) -\int_r^\infty\lambda^\alpha e^{-\lambda^2}\,d\lambda. $$ Choosing first $\alpha=d-1$, then $\alpha=d+1$ and using $\Gamma((d+2)/2)=(d/2)\Gamma(d/2)$, we get \begin{equation*} \label{bG}
\K^{(2)}_{j,k}(x,t) =\frac{|x|^{-d}}{2\pi^{d/2}}\Gamma\Bigl(\frac{d}{2}\Bigr)
\biggl[-\delta_{j,k}+d\frac{x_jx_k}{|x|^2}\biggr]+
|x|^{-d}\Psi_{j,k}(x/\sqrt {4t}). \end{equation*} Here, $\Psi=(\Psi_{j,k})$ is a family of functions such that, \begin{equation} \label{est Psi}
\forall\,\alpha\in\N^d, \quad|\partial^\alpha \Psi(y)|\le C_\alpha e^{-c|y|^2}, \qquad y\in\R^d. \end{equation} Observing that $\K^{(1)}_{j,k}$ can be bounded by the second term on the right-hand side and modifying, if necessary, the functions $\Psi_{j,k}$ (which can be done without affecting estimate~\eqref{est Psi}), we see that decomposition~\eqref{prof KK} holds. The decomposition~\eqref{profKF} is now an immediate consequence of the definition of $\FF(x)$.
Observe that $\KK_{j,k}=\partial_j\partial_k \E_d$, where $\E_d$ is the fundamental solution of $-\Delta$ in~$\R^d$. From the radial symmetry of $\E_d$, we immediately get \begin{equation*} \label{cancp}
\displaystyle\int_{\S^{d-1}} \KK_{j,k}(\omega)\,d\sigma(\omega)=\displaystyle\int_{\S^{d-1}}
\omega_j\KK_{j,k}(\omega)\,d\sigma(\omega)=0, \qquad j\not=k \end{equation*} and \begin{equation*}
\displaystyle\int_{\S^{d-1}}\FF(\omega)\,d\sigma(\omega)= \displaystyle\int_{\S^{d-1}} \omega_{\ell}\omega_{m}\FF(\omega)d\sigma(\omega)=0, \qquad\qquad\ell,m=1,\ldots,d. \end{equation*} Using again the radial symmetry of $\E_d$ and the fact that $\Delta \E_d=0$ on $\S^{d-1}$, we obtain $\int_{\S^{d-1}}\KK_{j,j}(\omega)\,d\sigma(\omega)=0$. This argument also shows that the identities \begin{equation*} \int_{\S^{d-1}}\omega_\ell\,\FF_{j;h,k}(\omega)\,d\sigma(\omega)=0, \qquad j,h,k,\ell=1,\ldots,d \end{equation*} can be reduced to the proof of the equality \begin{equation} \label{dirre} \int_{\S^{d-1}} \omega_\ell\,\partial_\ell\partial_j^2 \E_d(\omega)\,d\sigma(\omega) =\int_{\S^{d-1}}\omega_\ell\, \partial_\ell^3 \E_d(\omega)\,d\sigma(\omega), \qquad j\not=\ell. \end{equation} The fact that both terms in~\eqref{dirre} are zero follows from $\partial_j\partial_h\partial_k \E_d(\omega)=\mathcal{Q}_{j;h,k}(\omega)$, for $\omega\in\S^{d-1}$, and formula~\eqref{explicitQ}. In the computation, one needs to use the moment relation $$\int_{\S^{d-1}}\omega_j^2\,d\sigma(\omega)=\frac{1}{d}\int_{\S^{d-1}}\,d\sigma(\omega)$$ and the well-known identities (easily obtained via the Stokes formula) \begin{equation*} \left\{ \begin{aligned} &\displaystyle\int_{\S^{d-1}}\omega_j^4\,d\sigma(\omega)=\frac{3}{d(d+2)}\displaystyle\int_{\S^{d-1}}d\sigma(\omega)\\ &\displaystyle\int_{\S^{d-1}}\omega_j^2\omega_k^2\,d\sigma(\omega)=\frac{1}{d(d+2)}\displaystyle\int_{\S^{d-1}}d\sigma(\omega), \qquad j\not=k. \end{aligned} \right. \end{equation*}
$\Box$
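The sphere moment identities used at the end of the proof can be verified by direct quadrature on $\S^2$ (our illustrative addition; grid sizes are arbitrary).

```python
import math

Nphi, Nth = 400, 400
dphi = math.pi / Nphi
dth = 2.0 * math.pi / Nth
area = m2 = m4 = m22 = 0.0
for i in range(Nphi):
    phi = (i + 0.5) * dphi            # midpoint rule in the polar angle
    sp = math.sin(phi)
    for jj in range(Nth):
        th = jj * dth
        w1 = sp * math.cos(th)
        w2 = sp * math.sin(th)
        dA = sp * dphi * dth          # surface measure on S^2
        area += dA                    # total area, 4*pi
        m2 += w1 * w1 * dA            # int w_1^2 = area/3   (d = 3)
        m4 += w1 ** 4 * dA            # int w_1^4 = 3*area/(d(d+2)) = area/5
        m22 += (w1 * w2) ** 2 * dA    # int w_1^2 w_2^2 = area/(d(d+2)) = area/15
```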
\section{Far-field asymptotics of convolutions and application to the heat equation} \label{sec4}
The purpose of our next result is to describe the exact behavior as
$|x|\to \infty$ of the convolution product of two functions $f$ and $g$ from the asymptotic properties of each factor. We will consider only a simple particular situation that will be sufficient for our purposes.
\begin{proposition} \label{pro1} Let $d\ge1$ and $m\ge1$ be two integers. Let $f\in \dot E^m_\vartheta$ for some $0\le \vartheta<d$
and $g\in L^1(\R^d,(1+|x|)^m dx)\cap \dot E^0_{d+m}$. Then the convolution product $f*g$ satisfies \begin{equation} \label{cpp}
f*g(x)=\sum_{ \genfrac{}{}{0pt}{}{\gamma\in\N^d}{0\le|\gamma|\le m-1} } \frac{(-1)^{|\gamma|} }{\gamma!} \biggl(\int y^\gamma g(y)\,dy\biggr)\partial^\gamma f(x) \,+\, {\cal R}(x), \end{equation} where ${\cal R}(x)$ is a remainder term satisfying, for some constant $C>0$ independent of $f$ and $g$ and all $x\not=0$: \begin{equation} \label{reme}
\bigl| \mathcal{R}(x)\bigr|\le C|x|^{-m-\vartheta} \|f\|_{\dot E^m_\vartheta}
\Bigl( \|g\|_{\dot E^0_{d+m}}+ \|g\|_{L^1(\R^d,|x|^m\,dx)}\Bigr). \end{equation} \end{proposition}
\begin{remark}
The identity~\eqref{cpp} is useful, for large~$|x|$, when at least one derivative $|\partial ^{\gamma}f|$
decays at infinity exactly as $c_{\gamma}|x|^{-\vartheta-|\gamma|}$
(at least in some directions). In this case, ${\cal R}(x)$ is indeed a lower order term as $|x|\to\infty$. \end{remark}
\noindent{\em Proof.} We can assume, without restriction, that
$ \|f\|_{\dot E^m_\vartheta}= \|g\|_{\dot E^0_{m+d}}=1$. We have to estimate the difference between $\int f(x-y)g(y)\,dy$ and the first term on the right-hand side of~\eqref{cpp}. Such difference can be written as the sum of four terms $D_1+\cdots+D_4$, where \begin{equation*} \begin{split}
&D_1\equiv\int_{|y|\le |x|/2} \Bigl[f(x-y)-\sum_{|\gamma|\le m-1}
\frac{(-1)^{|\gamma|}}{\gamma!}\partial^\gamma f(x)y^\gamma\Bigr]g(y)\,dy,\\
&D_2\equiv\int_{|y|\le |x|/2} g(x-y) f(y)\,dy,\\
&D_3\equiv\int_{|y|\ge |x|/2,\; |x-y|\ge |x|/2} f(x-y)g(y)\,dy \end{split} \end{equation*} and
$$D_4\equiv - \sum_{|\gamma|\le m-1}
\frac{(-1)^{|\gamma|}}{\gamma!}\partial^\gamma f(x)
\int_{|y|\ge|x|/2} y^\gamma g(y)\,dy.$$
Using the Taylor formula, we see that
$$ |D_1|\le C |x|^{-\vartheta-m}\int_{|y|\le |x|/2}|y|^m|g(y)|\,dy,$$ which is bounded by the right-hand side of~\eqref{reme}. Direct estimates show that
$|D_2|$, $|D_3|$ and $|D_4|$ are bounded by $C|x|^{-\vartheta-m}$ as well.
$\Box$
\begin{remark} We can now give a more precise statement about the asymptotics claimed in~\eqref{afci}. The simplest result reads as follows: if $f,g\in E^1_{\alpha+d}$ (the non-homogeneous space) for some $\alpha>0$, then
$$f*g(x)=\Bigl(\int f\Bigr)g(x)+\Bigl(\int g\Bigr)f(x)+\mathcal{O}\bigl(|x|^{-d-\alpha^*}\bigr),$$
as $|x|\to\infty$, where $\alpha^*=\min\{2\alpha,\alpha+1\}$. When $\alpha=1$ the remainder must be replaced by $\mathcal{O}\bigl(|x|^{-d-2}\log(|x|)\bigr)$. The proof relies on the same argument as that used in the proof of Proposition~\ref{pro1}: the only difference is that the Taylor formula is applied to both $f$ and $g$, so that one has to introduce an additional term $D_5$ in the decomposition of $f*g$. Of course, one could state many variants of this result: here the most important condition was that the decay of the two factors $f$ and $g$ (or at least the decay of the factor with the least spatial localization) must improve under differentiation; but one could put, instead, a more general condition in terms of moduli of continuity. \end{remark}
Other useful functional spaces are, for $m\in\N$, $\vartheta\ge0$, \begin{equation*} X^m_\vartheta =
\Bigl\{ u\in L^1_{\rm loc}((0,\infty), C^m(\R^d))\colon \; \|u\|_{X_\vartheta^m}
\equiv \max_{|\alpha|\le m}\,
\mathop{\mathrm{ess\,sup}}_{x,t}\,(\sqrt t+|x|)^{\vartheta+|\alpha|}|\partial^\alpha_x u(x,t)|<\infty \Bigr\}. \end{equation*}
The use of such spaces for the Navier--Stokes equations is more or less classical (see, {\it e.g.\/}, \cite{CanP96}, \cite{CazDW}) but, unfortunately, there is no agreement on the notations.
The following lemma is elementary:
\begin{lemma} \label{heat l} Let $m\in\N$, $a\in\dot E^m_\vartheta$, with $0\le\vartheta<d$. Then there is a constant $C>0$, independent of $a$, such that \begin{equation*} \label{heat est}
\|e^{t\Delta}a\|_{X^m_\vartheta}\le C\|a\|_{\dot E^m_\vartheta}. \end{equation*} \end{lemma}
\noindent{\em Proof.} For $\alpha\in\N^d$, $|\alpha|\le m$, one writes $\partial_x^\alpha e^{t\Delta}a(x)=\int \partial_x^\alpha g_t(x-y)\,a(y)\,dy$
and splits the integral over $\R^d$ into three integrals, corresponding to the disjoint regions $|y|\le|x|/2$, $|x-y|< |x|/2$ and the complementary region in $\R^d$. In the second integral one first integrates by parts $|\alpha|$ times. Then the direct estimate
$|\partial_x^\alpha g_t(x)|\le C|x|^{-d-|\alpha|}$
gives the spatial decay $|\partial_x^\alpha e^{t\Delta}a(x)|\le C|x|^{-\vartheta-|\alpha|}$. On the other hand, $a$ belongs to the Lorentz space $L^{d/\vartheta,\infty}(\R^d)$, and $\partial_x^\alpha g_t\in L^{d/(d-\vartheta),1}(\R^d)$. Then the time decay estimate $\|\partial_x^\alpha e^{t\Delta}a\|_\infty\le Ct^{-(\vartheta+|\alpha|)/2}$ follows from the generalized Young inequality (see, {\it e.g.\/} \cite{Lem02}).
{
$\Box$}
As an application, we get the exact asymptotic profile as $|x|\to\infty$ for the solution of the Cauchy problem associated with the heat equation for slowly oscillating data. We first recall two standard notations: if $\beta\in\N^d$ we set $(2\beta-1)!!=\prod_{ \genfrac{}{}{0pt}{}{j=1,\ldots,d}{\beta_j\ge1} } 1\cdot3\cdots(2\beta_j-1)$ and $(2\beta)!!=\prod_{ \genfrac{}{}{0pt}{}{j=1,\ldots,d}{\beta_j\ge1} } 2\cdot 4\cdots 2\beta_j$. Now we can state the following: \begin{lemma} \label{as heat lem} (i) Let $m\ge1$ be an integer, $0\le\vartheta<d$ and $a\in \dot E^m_\vartheta$. Then, \begin{equation*}
\label{hke}
e^{t\Delta}a(x)=\sum_{|2\beta|\le m-1} \frac{(2t)^\beta}{(2\beta)!!} \,\partial^{2\beta}a(x)\,
+\,\mathcal{O}\bigl(t^{m/2}|x|^{-\vartheta-m}\bigr), \qquad\hbox{as $|x|\to\infty$,} \end{equation*} uniformly for $t>0$
({\it i.e.\/} the remainder term is bounded by $Ct^{m/2}|x|^{-\vartheta-m}$).
(ii) In particular, if $m\ge4$ and $a\in \dot E^m_1$: \begin{equation*}
\label{as heat lap}
e^{t\Delta}a(x) =a(x)+t\Delta a(x) +\mathcal{O}\bigl(t^2|x|^{-5}\bigr), \qquad
\hbox{as $|x|\to\infty$}, \end{equation*} uniformly for $t>0$. \end{lemma}
{
\noindent{\em Proof. }} Indeed, writing $e^{t\Delta}a(x)=g_t*a(x)$, we can apply Proposition~\ref{pro1}
with $g_t(x)=(4\pi t)^{-d/2}e^{-|x|^2/(4t)}$ instead of $g$. Observing that, for all $\beta\in\N^d$, $$\int y^{2\beta}g_t(y)\,dy=2^\beta(2\beta-1)!!\,t^\beta,$$ and that $\int y^\gamma g_t(y)\,dy=0$ if $\gamma\in\N^d$ is not of the form $\gamma=2\beta$, we obtain the result.
{
$\Box$}
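The Gaussian moment identity used in this proof can be checked numerically in dimension one; the following quadrature (our own sanity check, not part of the argument) compares $\int y^{2k}g_t(y)\,dy$ with $2^k(2k-1)!!\,t^k$:

```python
import math

def gauss_moment(k, t, n=100001, L=60.0):
    """Trapezoidal quadrature of ∫ y^(2k) g_t(y) dy, g_t the 1-D heat kernel."""
    h = 2.0 * L / (n - 1)
    total = 0.0
    for i in range(n):
        y = -L + i * h
        weight = h if 0 < i < n - 1 else h / 2.0
        total += weight * y ** (2 * k) * math.exp(-y * y / (4.0 * t)) \
                 / math.sqrt(4.0 * math.pi * t)
    return total

def predicted(k, t):
    """2^k (2k-1)!! t^k, the claimed value of the 2k-th moment."""
    double_fact = 1
    for j in range(1, 2 * k, 2):
        double_fact *= j                 # builds (2k-1)!!
    return 2 ** k * double_fact * t ** k

t = 0.7
for k in range(4):
    assert abs(gauss_moment(k, t) - predicted(k, t)) < 1e-6 * max(1.0, predicted(k, t))
```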
\section{Global existence of decaying solutions} \label{sec5}
We already recalled that $F$ denotes the kernel of $e^{t\Delta}\P\hbox{div}(\cdot)$ and, for $t>0$,
$F(x,t)=t^{-(d+1)/2}F(x/\sqrt t,1)$. It is also well known that $|\partial_x^\alpha F(x,1)|\le C_\alpha(1+|x|)^{-d-1-|\alpha|}$ for all $\alpha\in\N^d$. A quick way to prove this decay at infinity is to observe that the estimate is immediate, for $|x|\ge1$, for both terms on the right-hand side of equation~\eqref{profKF}. Moreover, it is clear from its definition that $F(\cdot,t)\in C^\infty(\R^d)$ for $t>0$.
Let us introduce the linear operator $\L$, defined on $d\times d$ matrices $w=(w_{h,k})$ by the relation \begin{equation} \label{Lw} \L(w)(x,t)=\int_0^t\!\!\int F(x-y,t-s)w(y,s)\,dy\,ds. \end{equation} More explicitly (and in accordance with the notation introduced in Section~\ref{sec2}),
the $j$-th component is given by \begin{equation*} \label{Lwj} \L(w)_j(x,t)=\int_0^t\!\!\int \sum_{h,k} F_{j;h,k}(x-y,t-s)w_{h,k}(y,s)\,dy\,ds. \end{equation*}
The interest of this operator is that the Navier--Stokes bilinear operator can be expressed as \begin{equation*}
\label{thi} B(u,v)=\L(u\otimes v). \end{equation*}
We start with a simple lemma (already known in a slightly less general form, see \cite{Miy02}, \cite{CazDW}).
\begin{lemma}
\label{sll} Let $m\in\N$ and $w=(w_{h,k})\in X^m_2$. Then $\L(w)\in X^m_1$ and, for some constant $C>0$ independent of $w$, \begin{equation} \label{Lcont}
\|\L(w)\|_{X^m_1}\le C\|w\|_{X^m_2}. \end{equation} \end{lemma}
{
\noindent{\em Proof. }} We can assume, with no loss of generality, that $\|w\|_{X^m_2}=1$. We start by writing \begin{equation} \label{dB} \partial^\alpha_x\L(w)(t)=\int_0^t\!\!\int \partial^\alpha_x F(x-y,t-s)w(y,s)\,dy\,ds. \end{equation}
Let $\alpha\in\N^d$ with $|\alpha|\le m$, and let $x\not=0$. We split the spatial integral in equation~\eqref{dB} into the three regions
$|y|\le |x|/2$, next $|x-y|\le |x|/2$, then ($|y|\ge|x|/2$ and $|x-y|\ge|x|/2$), and we denote by~$I_1$, $I_2$ and~$I_3$ the three corresponding integrals. From the estimate (deduced from~\eqref{profKF})
$|\partial^\alpha_x F(x,t)|\le C|x|^{-(d+|\alpha|)}t^{-1/2}$, and the estimate $|w(y,s)|\le |y|^{-1}s^{-1/2}$, we obtain immediately \begin{equation} \label{twe}
|I_1(x,t)|+|I_3(x,t)|\le C|x|^{-1-|\alpha|}. \end{equation}
We now treat $I_2$. When~$|\alpha|=0$ we can simply use the well-known fact that $\|F(\cdot,t)\|_1\le Ct^{-1/2}$
to obtain $|I_2(x,t)|\le C|x|^{-1}$. When $1\le |\alpha|\le m$
we integrate by parts as many times as needed, and use estimates of the form (deduced from the rescaling properties of $F$ recalled at the beginning of this section and the fact that $\partial^\alpha_x F(\cdot,1)\in L^1(\R^d,(1+|x|)^{|\alpha|}\,dx)$\,)
$$\|\, |\!\cdot\!|^\alpha \partial^\alpha_x F(\cdot,t-s)\|_1 \le C(t-s)^{-1/2}.$$
Then observing that $|\partial_y^\alpha w(y,s)|\le |y|^{-1-|\alpha|}s^{-1/2}$ for $|\alpha|\le m$, we conclude that $I_2$ can be estimated like $I_1$ and $I_3$ in~\eqref{twe}. Summarizing, we showed that \begin{equation} \label{spd}
\bigl|\partial^\alpha_x \L(w)(x,t) \bigr|\le C|x|^{-1-|\alpha|}. \end{equation}
There is a by now well-known strategy (see \cite{Miy02}) to deduce time decay estimates from the corresponding space decay estimates. Namely, by the semi-group property of the Oseen kernel, $$ \L(w)(t)=e^{t\Delta/2}\L(w)(t/2)+\int_{t/2}^t F(t-s)*w(s)\,ds\equiv K_1(t)+K_2(t).$$
From the Young inequality in Lorentz spaces, and observing that $\bigl\| \L(w)(t)\bigr\|_{L^{d,\infty}}$ is uniformly bounded, because of inequality~\eqref{spd}, we get $$
\bigl\|\partial^\alpha_x K_1(t)\bigr\|_\infty
\le \bigl\|\partial^\alpha_x g_{t/2}\bigr\|_{L^{d/(d-1),1}} \bigl\| \L(w)(t/2)\bigr\|_{L^{d,\infty}}
\le C\,t^{-(1+|\alpha|)/2}. $$ Moreover, \begin{equation*} \begin{split}
\bigl\|\partial_x^\alpha K_2(t)\bigr\|_\infty
\le \int_{t/2}^t \|F(t-s)\|_1\|\partial_x^\alpha w(s)\|_\infty\,ds\le C\,t^{-(1+|\alpha|)/2}. \end{split} \end{equation*}
In conclusion, we have shown that
$$\bigl|\partial^\alpha_x \L(w)(x,t)\bigr|
\le C\bigl( |x|^{-(1+|\alpha|)}\wedge t^{-(1+|\alpha|)/2}\bigr)\le C' (\sqrt t+|x|)^{-1-|\alpha|}.$$ This proves estimate~\eqref{Lcont}.
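The very last inequality is the elementary bound (valid for any exponent $\beta>0$, here with $\beta=1+|\alpha|$):

```latex
\min\bigl(|x|^{-\beta},\,t^{-\beta/2}\bigr)
  =\max\bigl(|x|,\sqrt t\bigr)^{-\beta}
  \le 2^{\beta}\bigl(\sqrt t+|x|\bigr)^{-\beta},
\qquad\hbox{since}\quad \sqrt t+|x|\le 2\max\bigl(\sqrt t,|x|\bigr).
```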
{
$\Box$}
We now follow the standard procedure for constructing global solutions to (NS) in the space $X^m_1$. Our starting point will be the following basic existence result, which is nothing but a reformulation of well-known results in the literature (see \cite{Can95}, \cite{CanMP}, \cite{CazDW}, \cite{Miy02}) in a slightly more general form.
\begin{proposition} \label{theorem1} Let $d\ge2$ and $m\ge0$ be two integers. There exist two constants $\epsilon>0$ and $M>0$ such that for every divergence-free vector field $a\in \dot E^m_1$ satisfying \begin{equation*}
\|a\|_{\dot E^m_1} <\epsilon, \end{equation*} there exists a unique solution $u\in X^m_1$ of (NS) starting from $a$
(in the sense that $u(t)\to a$ in $\mathcal{S}'(\R^d)$, as $t\to0$), such that $\|u\|_{X^m_1}\le \epsilon M$. \end{proposition}
{
\noindent{\em Proof. }} We only have to apply the size estimate for the linear evolution
$$\|e^{t\Delta} a\|_{X^m_1}\le C\|a\|_{\dot E^m_1}$$ (this is a particular case of Lemma~\ref{heat l}) and the corresponding estimate for the bilinear operator: \begin{equation*} \label{bicont}
\|B(u,v)\|_{X^m_1}\le C\|u\|_{X^m_1} \|v\|_{X^m_1}. \end{equation*} This last inequality is obtained by applying Lemma~\ref{sll} with $w=u\otimes v$. The existence of a solution~$u\in X^m_1$ (and its uniqueness in a ball of that space)
now follows from the contraction mapping theorem, as explained, {\it e.g.\/}, in Cannone's book~\cite{Can95}. Slightly modifying the estimates of the previous lemma we easily obtain, {\it e.g.\/}, the bound $|B(u,u)(x,t)|\le C|x|^{-3/2}t^{1/4}$, implying $B(u,u)(t)\to0$ in~$\mathcal{S}'(\R^d)$ as $t\to0$. Thus, from (NS), $u(t)\to a$ as $t\to0$ in the distributional sense.
{
$\Box$}
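A scalar caricature of this fixed-point argument may be helpful: if $\|e^{t\Delta}a\|\le\epsilon$ and $\|B(u,v)\|\le\eta\|u\|\|v\|$, the Picard iteration reduces, at the level of norms, to $x_{n+1}=\epsilon+\eta x_n^2$, which converges to the smaller root of $x=\epsilon+\eta x^2$ whenever $4\eta\epsilon<1$. The constants below are illustrative, not taken from the text:

```python
import math

# Norm-level model of the Picard scheme u_{n+1} = e^{tΔ}a + B(u_n, u_n):
# x_{n+1} = eps + eta * x_n^2, with eps = ||e^{tΔ}a|| and eta the
# continuity constant of B. Small-data condition: 4*eta*eps < 1.
eta, eps = 1.0, 0.2                     # here 4*eta*eps = 0.8 < 1
x = 0.0
for _ in range(80):
    x = eps + eta * x * x

x_star = (1.0 - math.sqrt(1.0 - 4.0 * eta * eps)) / (2.0 * eta)
assert abs(x - x_star) < 1e-12          # iterates converge to the small root
assert x_star <= 2.0 * eps              # the limit stays in a ball of radius O(eps)
```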
\begin{remark} In the particular case in which~$a$ is a homogeneous vector field of degree $-1$ in~$\R^d$, the solution~$u$ constructed in~Proposition~\ref{theorem1} is self-similar: \begin{equation*} u(x,t)=\frac{1}{\sqrt t} U\biggl(\frac{x}{\sqrt t}\biggr), \end{equation*} for some $U\in E^m_1$ (the non-homogeneous space). This easily follows from the scaling invariance of (NS) (see {\it e.g.\/}~\cite[Ch. 3]{Can95}). \end{remark}
\section{Fine estimates of the bilinear term} \label{section6}
It follows from Lemma~\ref{sll} that, for $w\in X^0_2$, we have \begin{equation} \label{trin}
|\L(w)(x,t)|\le C(t^{-1/2}\wedge |x|^{-1}). \end{equation} This was enough for constructing a decaying solution of (NS).
However, to obtain this decay estimate we used only a few properties of the kernel $F(x,t)$, namely, its pointwise decay and its rescaling properties. The next lemma will allow us to improve estimate~\eqref{trin} considerably, at least in the parabolic region $|x|\ge \sqrt t$. Its proof will make essential use of the \emph{cancellation properties\/} of the kernel $F(x,t)$ and requires some regularity for~$w$.
\begin{lemma} \label{fine estim} Let $w=(w_{h,k})$, with $w\in X^2_2$. Let $\L(w)$ be defined by equality~\eqref{Lw}. \begin{subequations} Then we have, for $d\ge3$, \begin{equation}
\label{decay Lw}
|\L(w)(x,t)|\le C\Bigl(t^{-1/2}\wedge \, t\,|x|^{-3}\Bigr). \end{equation} When $d=2$, we have the weaker estimate \begin{equation}
\label{fine est2}
|\L(w)(x,t)|\le Ct|x|^{-3}\log\Bigl(\frac{|x|}{\sqrt t}\Bigr), \qquad |x|\ge e\sqrt t. \end{equation} \end{subequations} Under the more stringent assumption $w\in X^3_2$, we have the following estimates for $\nabla\L(w)$: \begin{equation*} \label{grad Lw}
|\nabla \L(w)(x,t)|\le \begin{cases}
C\bigl(t^{-1}\wedge t|x|^{-4}\bigr), &\hbox{if $d\ge3$}\\
Ct^{-1} &\hbox{if $d=2$ and $|x|\le e\sqrt t$}\\
Ct|x|^{-4}\log\bigl(|x|/\sqrt t\bigr) &\hbox{if $d=2$ and $|x|\ge e\sqrt t$}. \end{cases} \end{equation*} In all these inequalities $C>0$ is a constant depending on $w$
only through its $\|\cdot\|_{X^2_2}$- or $\|\cdot\|_{X^3_2}$-norm, and independent of~$x$ and~$t$. \end{lemma}
{
\noindent{\em Proof. }} We can limit ourselves to the region $|x|\ge e\sqrt t$. Indeed, when $|x|\le e\sqrt t$ the result holds because of inequality~\eqref{Lcont}, which, in the special case $m=0,1$, implies
$|\L(w)(x,t)|\le Ct^{-1/2}$ and $|\nabla \L(w)(x,t)|\le Ct^{-1}$.
Let us decompose \begin{equation} \begin{split} \label{dec L}
\L(w)(x,t)&=\int_0^t\!\!\int_{|y|\le |x|/2}F(x-y,t-s)w(y,s)\,dy\,ds \\
&\qquad+\int_0^t\!\!\int_{|y|\le |x|/2}F(y,t-s)w(x-y,s)\,dy\,ds\\
&\qquad+\int_0^t\!\!\int_{|y|\ge |x|/2,\;|x-y|\ge |x|/2}F(x-y,t-s)w(y,s)\,dy\,ds\\
&\equiv\L_1+\L_2+\L_3. \end{split} \end{equation}
We start by estimating $\L_3$. Using $|F(x-y,t-s)|\le C|x-y|^{-d-1}\le C'|y|^{-d-1}$ (the two inequalities being valid in the region of~$\R^d$
where we perform the integration) and $|w(y,s)|\le |y|^{-2}$, we get $|\L_3(x,t)|\le Ct|x|^{-3}$.
In view of the use of the Taylor formula, we further decompose $\L_1$ (recalling also~\eqref{profKF}) as \begin{equation} \label{L1 dec} \begin{split}
\L_1=& \int_0^t\!\!\int_{|y|\le |x|/2}\bigl[F(x-y,t-s)-F(x,t-s)\bigr]w(y,s)\,dy\,ds\\
&\qquad + \FF(x)\!:\!\int_0^t \int_{|y|\le |x|/2} w(y,s)\,dy\,ds\\
&\qquad +|x|^{-d-1}\int_0^t \widetilde \Psi(x/\sqrt{t-s})\int_{|y|\le |x|/2} w(y,s)\,dy\,ds. \end{split} \end{equation}
Using $|\nabla F(x,t)|\le C|x|^{-d-2}$ together with $|y|\,|w(y,s)|\le C|y|^{-1}$
shows that the first term in~\eqref{L1 dec} is bounded by $Ct|x|^{-3}$.
When $d\ge3$, since $|w(y,s)|\le C|y|^{-2}$, the second term on the right-hand side of~\eqref{L1 dec} is also bounded by $Ct|x|^{-3}$. When $d=2$, we make use of the inequality $|w(y,s)|\le C(\sqrt s+|y|)^{-2}$ and of the change of variables $y=\sqrt s z$. This leads to a weaker upper bound of the form $Ct|x|^{-3}\log(|x|/\sqrt t)$, valid for $|x|\ge e\sqrt t$.
The simplest way to treat the third term on the right-hand side of~\eqref{L1 dec}
is to recall that $|\widetilde \Psi (x)|\le C$. In this way, one can proceed exactly as for the previous term and obtain the same bounds. This would be enough for the proof of this Lemma. However, for later use (namely, to shorten the proof of Lemma~\ref{lle} below), we want to prove that this last term in~\eqref{L1 dec}
is bounded, in the region $|x|\ge e\sqrt t$, by $Ct|x|^{-3}$ also when $d=2$. This is easy: indeed,
$\widetilde\Psi$ has fast decay at infinity; here, the inequality $|\widetilde\Psi(x)|\le C|x|^{-1}$ is enough to conclude.
We now consider $\L_2$. We decompose it as \begin{equation} \label{L2 dec} \begin{split}
\L_2= &\int_0^t\!\!\int_{|y|\le |x|/2} F(y,t-s) \bigl[w(x-y,s)-w(x,s)+y\cdot\nabla w(x,s)\bigr]\,dy\,ds\\
&\qquad+\int_0^t w(x,s)\int_{|y|\le |x|/2} F(y,t-s)\,dy\,ds \\
&\qquad - \int_0^t \nabla w(x,s)\cdot \int_{|y|\le |x|/2}y \,F(y,t-s)\,dy\,ds.\\ \end{split} \end{equation}
Now we use the inequalities $|\nabla^2 w(x,t)|\le C|x|^{-4}$
and $|y|^2\,|F(y,t-s)|\le C|y|^{-d+1}$, and obtain that the first term on the right-hand side in~\eqref{L2 dec} is bounded by $Ct|x|^{-3}$. We now conclude using the cancellations of the kernel~$F$:
more precisely, since $\int F(\cdot,t-s)\,dy=0$ and $|F(y,t-s)|\le C|y|^{-d-1}$ the second term is also bounded by $Ct|x|^{-3}$.
A crude estimate of the third term in~\eqref{L2 dec} would give a non-optimal bound of the form $Ct|x|^{-3}\log(|x|/\sqrt t)$ for large $|x|$, which is not enough. But, for $|x|\ge 2\sqrt{t}$, the third term in~\eqref{L2 dec} can be further decomposed as \begin{equation} \label{L23 dec} \begin{split}
&\int_0^t \nabla w(x,s)\cdot \int_{|y|\le \sqrt{t-s}} y\,F(y,t-s)\,dy\,ds\\
& \qquad+\int_0^t \nabla w(x,s)\cdot \int_{\sqrt{t-s}\le |y|\le |x|/2} y\,\FF(y)\,dy\,ds\\
& \qquad+\int_0^t \nabla w(x,s)\cdot \int_{\sqrt{t-s}\le |y|\le |x|/2} y\,|y|^{-d-1}\widetilde \Psi(y/\sqrt{t-s})\,dy\,ds.
\end{split} \end{equation}
Now it is easy to see that the first and the third term in~\eqref{L23 dec} are $\mathcal{O}(t|x|^{-3})$. But $\FF$ has vanishing first order moments on the sphere (see Proposition~\ref{asG}) so that the second term in~\eqref{L23 dec} is zero.
Summarizing, we have established inequality~\eqref{fine est2} in the two-dimensional case and inequality~\eqref{decay Lw} when $d\ge3$.
To prove the inequality for~$\nabla\L$, we fix $\ell\in\{1,\ldots,d\}$ and we write \begin{equation*}
\partial_\ell \L(w)(x,t)=\int_0^t\!\!\int F(x-y,t-s)\partial_\ell w(y,s)\,dy\,ds\equiv \widetilde \L_1+\widetilde\L_2+\widetilde\L_3, \end{equation*}
where the decomposition is obtained as before (see~\eqref{dec L}). The two terms $\widetilde \L_2$ and $\widetilde \L_3$ are treated exactly as before, but we now get an upper bound of the form $Ct|x|^{-4}$, since $\partial_\ell w$ (and its derivatives up to the second order) decays faster than $w$ (and its corresponding derivatives). Notice that here we need the assumption $w\in X^3_2$, which ensures a decay for the derivatives up to order three.
For treating~$\widetilde \L_1$ we integrate by parts. It is easy to see that the boundary term is bounded by $Ct|x|^{-4}$. The other term is $\int_0^t\int_{|y|\le |x|/2}\partial_\ell F(x-y,t-s)w(y,s)\,dy\,ds$, for which we obtain the usual bound $Ct|x|^{-4}$ when $d\ge3$ and
$Ct|x|^{-4}\log(|x|/\sqrt t)$ for $d=2$ and $|x|\ge e\sqrt t$.
{
$\Box$}
\begin{remark}
\label{rem fine} For later use, let us observe that if $u\in X^2_1$ is the solution constructed in Proposition~\ref{theorem1}, in the case $m\ge2$, then, applying Lemma~\ref{fine estim} to $w=u\otimes u$, so that $\L(w)=B(u,u)$, we get \begin{subequations} \begin{equation}
\label{buud}
|B(u,u)|(x,t)\le \begin{cases}
C(t^{-1/2}\wedge t|x|^{-3}) &\qquad\hbox{if $d\ge3$}\\
Ct^{-1/2} &\qquad\hbox{if $d=2$ and $|x|\le e\sqrt t$}\\
Ct|x|^{-3}\log(|x|/\sqrt t) &\qquad \hbox{if $d=2$ and $|x|\ge e\sqrt t$}. \end{cases} \end{equation} In the case $u\in X^3_1$ (this requires the more stringent assumption $a\in \dot E^3_1$ in Proposition~\ref{theorem1}), in addition to the above estimates, the bilinear term satisfies \begin{equation} \label{grad Buu}
|\nabla B(u,u)|(x,t)\le \begin{cases}
C(t^{-1}\wedge t|x|^{-4}), &\hbox{if $d\ge3$}\\
Ct^{-1} &\hbox{if $d=2$ and $|x|\le e\sqrt t$}\\
Ct|x|^{-4}\log(|x|/\sqrt t) &\hbox{if $d=2$ and $|x|\ge e\sqrt t$}. \end{cases} \end{equation} \end{subequations} These estimates will play an essential role in the study of the bi-integral formula~\eqref{biff} below. \end{remark} \begin{equation} \label{biff} u(t)=e^{t\Delta}a-B(e^{t\Delta}a,e^{t\Delta}a)+2B(e^{t\Delta}a,B(u,u)) -B(B(u,u),B(u,u)). \end{equation}
\section{Asymptotic profiles of the velocity field in the 2D case}
In the two-dimensional case, from Lemma~\ref{heat l} and Remark~\ref{rem fine} we get, for $(x,t)\in\R^2\times(0,\infty)$, \begin{equation}
\label{qw}
|e^{t\Delta}a\otimes B(u,u)(x,t)|\le
\begin{cases}
Ct^{-1} & \hbox{if $|x|\le e\sqrt t$}\\
Ct|x|^{-4}\log(|x|/\sqrt t) &\hbox{if $|x|\ge e\sqrt t$}. \end{cases} \end{equation} The last term in~\eqref{biff} satisfies, always for $(x,t)\in\R^2\times(0,\infty)$, an even stronger estimate, namely \begin{equation}
\label{qwB}
|B(u,u)\otimes B(u,u)(x,t)|\le
\begin{cases}
Ct^{-1} & \hbox{if $|x|\le e\sqrt t$}\\
Ct^2|x|^{-6}\log^2(|x|/\sqrt t) &\hbox{if $|x|\ge e\sqrt t$}. \end{cases} \end{equation}
The next lemma shows that the last two terms on the right-hand side of~\eqref{biff} can be considered as remainders, {\it i.e.\/}, they can be included in the $\mathcal{O}(t|x|^{-3})$ term.
\begin{lemma} \label{le5} Let $w=(w_{h,k})$ be defined on $\R^2\times (0,\infty)$, with $w_{h,k}(x,t)$ bounded by the right-hand side of~\eqref{qw}, or by the right-hand side of~\eqref{qwB}. Then, if~$\L(w)$ is given by~\eqref{Lw}, we have, for some $C>0$ independent of $x$ and~$t$,
$$|\L(w)(x,t)|\le C\Bigl(t^{-{1/2}}\wedge \,t\,|x|^{-3}\Bigr).$$ \end{lemma}
{
\noindent{\em Proof. }} Our assumptions imply $w\in X^0_2$. Then we deduce from Lemma~\ref{sll} that $|\L(w)(x,t)|\le Ct^{-1/2}$; therefore we can assume that
$|x|\ge e\sqrt t$. Then we split the spatial integral defining $\L$ (see~\eqref{Lw}) into the three regions
$|y|\le \sqrt s$, $\sqrt s\le |y|\le |x|/2$ and $|y|\ge |x|/2$. The first term that we obtain is bounded using $|F(x-y,t-s)|\le C|x|^{-3}$ (this is true only in 2D)
and $|w(y,s)|\le Cs^{-1}$. For the second term we use the same bound for $F$ and $|w(y,s)|\le Cs|y|^{-4}\log(|y|/\sqrt s)$. The last term is treated using the bound $|w(y,s)|\le C\sqrt s|y|^{-3}$ and
that $\|F(t-s)\|_1\le C(t-s)^{-1/2}$.
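The time integral arising in this last step is a Beta function; with the substitution $s=t\sigma$,

```latex
\int_0^t (t-s)^{-1/2}\sqrt s\,ds
  = t\int_0^1 \sigma^{1/2}(1-\sigma)^{-1/2}\,d\sigma
  = t\,B\bigl(\tfrac32,\tfrac12\bigr)
  = \frac{\Gamma(\frac32)\,\Gamma(\frac12)}{\Gamma(2)}\;t
  = \frac{\pi}{2}\,t.
```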
{
$\Box$}
The next lemma will be useful for treating the term $B(e^{t\Delta}a,e^{t\Delta}a)$ arising in~\eqref{biff}. Note that for $a\in \dot E^2_1$ we have, from Lemma~\ref{heat l}, $e^{t\Delta}a \otimes e^{t\Delta}a\in X^2_2$.
\begin{lemma} \label{lle} Let $w=(w_{h,k})$, with $w_{h,k}\in X^2_2$ for all $h,k=1,2$. Then we have \begin{equation} \label{exp Lw}
\L(w)(x,t)=\FF(x)\!:\!\int_0^t\!\!\int_{|y|\le|x|} w(y,s)\,dy\,ds+\mathcal{O}(t|x|^{-3}),
\qquad\hbox{as $|x|\to\infty$}, \end{equation}
uniformly with respect to $t$ in the region $|x|\ge e \sqrt t$. Here $\L(w)$ is given by~\eqref{Lw} and $\FF(x)$ is the homogeneous tensor of order three defined by equation~\eqref{FF}. \end{lemma}
{
\noindent{\em Proof. }} This follows from the proof of Lemma~\ref{fine estim}. Therein, we decomposed $\L(w)$ as the sum of several terms, all of which, except one, could be bounded by
$Ct|x|^{-3}$. The only term for which such an upper bound could break down was \begin{equation*}
\FF(x)\!:\!\int_0^t\int_{|y|\le|x|/2} w(y,s)\,dy\,ds \end{equation*}
(see the second term in the right-hand side of~\eqref{L1 dec}). A simple modification of the error term now shows that we can change the above domain of the spatial integral into $\{|y|\le |x|\}$.
{
$\Box$}
\begin{lemma} \label{lol}
Let $a(x)$ be a vector field defined on $\R^2$, such that $a\in \dot E^2_1$. Then, for $|x|\to\infty$ and uniformly in time, in the region $|x|\ge e\sqrt t$, we have: \begin{equation*} \label{inth2}
\int_0^t\!\!\int_{|y|\le |x|} (e^{s\Delta}a\otimes e^{s\Delta}a)(y)\,dy\,ds
=\int_0^t\int_{\sqrt s\le |y|\le |x|} (a\otimes a) (y)\,dy\,ds+\mathcal{O}(t\,1) \end{equation*}
(here and below $\mathcal{O}(t\,1)$ denotes a remainder function bounded by $Ct$ for $|x|\ge e\sqrt t$).
In particular, if $a$ is homogeneous in $\R^2$ of degree~$-1$ \begin{equation*} \label{int hh}
\int_0^t\!\!\int_{|y|\le |x|} (e^{s\Delta}a\otimes e^{s\Delta}a)(y)\,dy\,ds
= t\log\Bigl(\frac{|x|}{\sqrt t}\Bigr) \biggl(\int_{\S^1} a\otimes a\biggr)+\mathcal{O}(t\,1),
\quad\hbox{as $|x|\to\infty$}. \end{equation*} \end{lemma}
{
\noindent{\em Proof. }} Indeed, we can assume $|x|\ge \sqrt t$. Then $\displaystyle\int_0^t\!\!\displaystyle\int_{|y|\le \sqrt s} (e^{s\Delta}a\otimes e^{s\Delta}a)(y)\,dy\,ds$ is bounded by $Ct$. It remains to treat
$$\int_0^t\!\!\int_{\sqrt s\le |y|\le |x|} (e^{s\Delta}a\otimes e^{s\Delta}a)(y)\,dy\,ds,$$ which we can rewrite as the sum of four new integrals, using, for each of the two factors, the decomposition $e^{t\Delta}a(x)=a(x)+\mathcal{R}(x,t)$ obtained in Lemma~\ref{as heat lem} (in the case $\vartheta=1$, $m=2$). Here, $\mathcal{R}$ satisfies
$|\mathcal{R}(x,t)|\le Ct|x|^{-3}$. An easy calculation shows that the three integrals containing at least one factor $\mathcal{R}$ are bounded by $Ct$.
{
$\Box$}
\begin{theorem} \label{theo2D} Let $u(x,t)\in X^2_1$ be the global solution of the Navier--Stokes equations in $\R^2$, with datum $a\in \dot E^2_1$
(as constructed in Proposition~\ref{theorem1}). Then $u$ has the following profile for $|x|\to\infty$, uniformly with respect to~$t$ in the region~$|x|\ge e\sqrt t$: \begin{equation}
\label{pro2Dnss}
u(x,t)=a(x)-\FF(x)\!:\!\int_0^t\!\!\int_{\sqrt s\le |y|\le |x|} (a\otimes a)(y)\,dy\,ds\;+\; \mathcal{O}(t|x|^{-3}). \end{equation} Moreover, if $a$ is homogeneous of degree $-1$, then $u(x,t)=\frac{1}{\sqrt t}U\Bigl(\frac{x}{\sqrt t}\Bigr)$ is self-similar and the profile $U(x)$ is such that \begin{equation} \label{pro2Dss}
U(x)=a(x)-\log(|x|)\FF(x)\!:\!\biggl(\int_{\S^1}a\otimes a\biggr) + \mathcal{O}(|x|^{-3}),
\qquad\hbox{as $|x|\to\infty$}. \end{equation} \end{theorem}
{
\noindent{\em Proof. }} The first statement follows from the bi-integral formula~\eqref{biff} and our previous Lemmata. Indeed, as we have already observed, by Lemma~\ref{as heat lem}, we can write $e^{t\Delta}a(x)=a(x)+\mathcal{O}(t|x|^{-3})$. Next, writing $$B(e^{t\Delta}a,e^{t\Delta}a)=\L(e^{t\Delta}a\otimes e^{t\Delta}a),$$
we first apply Lemma~\ref{lle} with $w=e^{t\Delta}a\otimes e^{t\Delta}a$, and then Lemma~\ref{lol}. This shows that $-B(e^{t\Delta}a,e^{t\Delta}a)$ equals the second term on the right-hand side of~\eqref{pro2Dnss}, up to an error $\mathcal{O}(t|x|^{-3})$ for large~$|x|$. The last two terms in the bi-integral formula can also be included into the remainder term
$\mathcal{O}(t|x|^{-3})$, as shown by combining inequalities~\eqref{qw}-\eqref{qwB} with Lemma~\ref{le5}.
In the case of homogeneous data, an elementary computation shows that
$$\int_0^t\!\!\int_{\sqrt s\le |y|\le |x|} (a\otimes a)(y)\,dy\,ds
=\biggl(\int_{\S^1} a\otimes a\biggr)\biggl(t\log\Bigl(\frac{|x|}{\sqrt t}\Bigr)+\frac t2\biggr).$$ Then profile~\eqref{pro2Dss} follows from profile~\eqref{pro2Dnss} by passing to self-similar variables and eliminating~$t$.
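For the reader's convenience, the computation goes as follows: since $a\otimes a$ is homogeneous of degree $-2$ in $\R^2$, passing to polar coordinates gives

```latex
\int_{\sqrt s\le |y|\le |x|} (a\otimes a)(y)\,dy
  =\Bigl(\int_{\S^1} a\otimes a\Bigr)\int_{\sqrt s}^{|x|}\frac{dr}{r}
  =\Bigl(\int_{\S^1} a\otimes a\Bigr)\log\Bigl(\frac{|x|}{\sqrt s}\Bigr),
\qquad
\int_0^t \log\Bigl(\frac{|x|}{\sqrt s}\Bigr)ds
  = t\log\Bigl(\frac{|x|}{\sqrt t}\Bigr)+\frac t2.
```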
{
$\Box$}
\section{Asymptotics in the higher-dimensional case} \label{sec9}
We now establish the analogue of Lemma~\ref{lle} in the higher-dimensional case. \begin{lemma} \label{lem ter}
Let $w=(w_{h,k})$ with $w_{h,k}\in X^1_4$. Then we have, as $|x|\to\infty$,
and uniformly in time, for $|x|\ge e\sqrt t$, \begin{subequations} \begin{equation}
\label{exp Lw d}
\L(w)(x,t)=\FF(x)\!:\!\int_0^t\!\!\int w(y,s)\,dy\,ds \,+\, \mathcal{O}\bigl(t|x|^{-5}\log(|x|/\sqrt t) \bigr) \end{equation} for $d=3$, and \begin{equation} \label{exp Lw dd}
\L(w)(x,t)=\mathcal{O}\bigl(t|x|^{-5}\log(|x|/\sqrt t) \bigr) \end{equation} when $d\ge4$. \end{subequations} \end{lemma}
{
\noindent{\em Proof. }} We go back to the decomposition $\L=\L_1+\L_2+\L_3$ obtained in~\eqref{dec L}. Writing~$\L_1$ as in~\eqref{L1 dec} and using the estimate $|w(y,s)|\le C(\sqrt s+|y|)^{-4}$, the bound $|\nabla F(x,t)|\le C|x|^{-d-2}$,
and the fast decay of $\widetilde \Psi$ shows that the first and the third term in~\eqref{L1 dec} are bounded by $Ct|x|^{-5}$ (with an additional logarithmic factor $\log(|x|/\sqrt t)$, for the first term in~\eqref{L1 dec}, when $d=3$) for $|x|\ge e\sqrt t$. The second term in~\eqref{L1 dec} has the form
$$\FF(x)\!:\!\int_0^t\!\!\int_{|y|\le |x|/2} w(y,s)\,dy\,ds .$$
Using again that $|w(y,s)|\le C(\sqrt s +|y|)^{-4}$ and distinguishing between the cases $d=3$ and $d\ge4$ shows that this term can be written as the right-hand sides in~\eqref{exp Lw d}-\eqref{exp Lw dd}.
We now decompose $\L_2$, as \begin{equation} \label{L2 decc} \begin{split}
\L_2 &= \int_0^t\!\!\int_{|y|\le |x|/2} F(y,t-s)\bigl[w(x-y,s)-w(x,s)\bigr]\,dy\,ds\\
&\qquad +\int_0^t w(x,s)\int_{|y|\le |x|/2} F(y,t-s)\,dy\,ds. \end{split} \end{equation}
Since $|\nabla w(x,t)|\le C|x|^{-5}$, the first term in~\eqref{L2 decc} is bounded by $C|x|^{-5}t\log(|x|/\sqrt t)$
for $|x|\ge e\sqrt t$. Combining the estimate $|F(y, t-s)|\le C|y|^{-d-1}$ with the condition $\int F(\cdot,t-s)\,dy=0$ shows that the second term in~\eqref{L2 decc} is bounded by~$Ct|x|^{-5}$. Such a bound holds also for $\L_3$, as is easily checked using the usual spatial decay estimates of $F$ and $w$.
{
$\Box$}
Our next Lemma essentially states that if $a$ and $b$ are two functions defined on~$\R^d$ and well behaved at infinity (for example, the derivatives of $a$ and $b$ decay faster than $a$ and $b$
as $|x|\to\infty$), then \begin{equation*}
\label{fun h}
(e^{t\Delta}a)(e^{t\Delta}b)\sim e^{t\Delta}(ab), \qquad\hbox{as ${|x|\to\infty}$}. \end{equation*} More precisely, we have:
\begin{lemma} \label{fun l} Let $d\ge3$ and $a,b\in \dot E^1_1$. Then \begin{equation}
\label{fun rh} (e^{t\Delta}a)(e^{t\Delta}b)=e^{t\Delta}(ab)-2\int_0^t e^{(t-s)\Delta}\bigl[\nabla e^{s\Delta}a\cdot \nabla e^{s\Delta}b\bigr]\,ds. \end{equation} \end{lemma}
{
\noindent{\em Proof. }} Let $v=e^{t\Delta}a$ and $w=e^{t\Delta}b$. Then we have $\partial_t v=\Delta v$ and $\partial_t w=\Delta w$. Multiplying the first equation by $w$ and the second by $v$, we get \begin{equation*}
\partial_t(vw)=w\Delta v+v\Delta w=\Delta(vw)-2\nabla v\cdot\nabla w. \end{equation*} Since $d\ge3$, $ab$ is locally integrable in $\R^d$. Moreover, $(vw)(t)\to ab$ weakly as $t\to0^+$ (because $v(t)\to a$ and $w(t)\to b$ in $L^2_{\rm loc}(\R^d)$, for example, as $t\to0$). Then the conclusion follows from the Duhamel formula.
\nobreak {
$\Box$}
In the above lemma we only used, in fact, $a,b\in E^0_1$. The stronger assumption
$a,b\in \dot E^1_1$, however, ensures that the last term in~\eqref{fun rh} decays faster, as $|x|\to\infty$, than $e^{t\Delta}(ab)$.
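The pointwise identity $\partial_t(vw)=\Delta(vw)-2\nabla v\cdot\nabla w$ at the heart of the proof can be sanity-checked numerically on explicit one-dimensional heat solutions; the following finite-difference test is ours, purely for illustration:

```python
import math

def heat(x, t, t0):
    """Explicit solution of v_t = v_xx: the 1-D heat kernel started at time -t0."""
    return math.exp(-x * x / (4.0 * (t + t0))) / math.sqrt(4.0 * math.pi * (t + t0))

v1 = lambda x, t: heat(x, t, 0.25)
v2 = lambda x, t: heat(x, t, 0.50)
prod = lambda x, t: v1(x, t) * v2(x, t)

h = 1e-4                                    # central finite differences
d_t  = lambda f, x, t: (f(x, t + h) - f(x, t - h)) / (2 * h)
d_x  = lambda f, x, t: (f(x + h, t) - f(x - h, t)) / (2 * h)
d_xx = lambda f, x, t: (f(x + h, t) - 2 * f(x, t) + f(x - h, t)) / (h * h)

# check d/dt (v1 v2) = (v1 v2)_xx - 2 (v1)_x (v2)_x at a sample point
x0, t0 = 0.7, 0.3
lhs = d_t(prod, x0, t0)
rhs = d_xx(prod, x0, t0) - 2.0 * d_x(v1, x0, t0) * d_x(v2, x0, t0)
assert abs(lhs - rhs) < 1e-5
```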
We now give the higher-dimensional counterpart of Theorem~\ref{theo2D}.
\begin{theorem} \label{theorem3d} Let $u(x,t)\in X^3_1$ be the global solution of the Navier--Stokes equations starting from
$a\in \dot E^3_1$ (as constructed in Proposition~\ref{theorem1}). Then $u$ has the following profile as $|x|\to\infty$, uniformly in time for $|x|\ge e \sqrt t$. For $d=3$, \begin{subequations} \begin{equation}
\label{pro3Dnss} u(x,t) =e^{t\Delta}a(x)-t\,e^{t\Delta}\P\nabla\cdot(a\otimes a)-\FF(x)\!:\!\Lambda(t)
+\mathcal{O}\biggl(t^2|x|^{-5}\log\Bigl(\frac{|x|}{\sqrt t}\Bigr)\biggr), \end{equation}
for some matrix-valued function $\Lambda(t)=(\Lambda_{h,k}(t))$, satisfying $|\Lambda(t)|\le Ct^{3/2}$. Moreover, when $d\ge4$, \begin{equation} \label{pro4Dnss} u(x,t) = e^{t\Delta}a(x)-t\,e^{t\Delta}\P\nabla\cdot(a\otimes a)
+\mathcal{O}\biggl(t^2|x|^{-5}\log\Bigl(\frac{|x|}{\sqrt t}\Bigr)\biggr). \end{equation} \end{subequations} \end{theorem}
\begin{remark} The function $\Lambda(t)$ is not known explicitly, but it depends on $u$ and $a$ in an explicit way: see formula~\eqref{ssos} below. For more regular data, namely $a\in \dot E^4_1$, and recalling Lemma~\ref{as heat lem}
(applied with $m=4$ and $\vartheta=1$) one can replace in the above asymptotics the term $e^{t\Delta}a(x)$ with $a(x)+t\Delta a(x)$.
\end{remark}
{
\noindent{\em Proof. }} As in the proof of our previous theorem, we write $u$ by means of the bi-integral formula~\eqref{biff}. As an application of Lemma~\ref{fun l} we can rewrite (for $d\ge3$) the term $B(e^{t\Delta}a,e^{t\Delta}a)$ appearing in the bi-integral formula~\eqref{biff} in a more convenient form (we denote here by ${}^T\!\!A$ the transpose of the matrix $A$): \begin{equation}
\label{Baa} \begin{split} B(e^{t\Delta} &a ,e^{t\Delta}a) = \int_0^t F(t-s)* \bigl(e^{s\Delta}a\otimes e^{s\Delta}a\bigr)\,ds\\ &=\int_0^t e^{(t-s)\Delta}\P e^{s\Delta}\nabla\cdot(a\otimes a)\,ds
-2\int_0^t\!\!\int_0^s e^{(t-\tau)\Delta}\P\nabla\cdot\Bigl[\!{\phantom{\Bigl|}}^T\!\! \Bigl(\nabla\otimes e^{\tau\Delta}a\Bigr)
\Bigl(\nabla\otimes e^{\tau\Delta}a\Bigr)\Bigr]\,d\tau\,ds\\ &= t\, e^{t\Delta}\P\nabla\cdot(a\otimes a)-2\int_0^t(t-\tau) e^{(t-\tau)\Delta}\P\nabla\cdot
\Bigl[ \!{\phantom{\Bigl|}}^T\!\! \Bigl(\nabla\otimes e^{\tau\Delta}a\Bigr) \Bigl(\nabla\otimes e^{\tau\Delta}a\Bigr) \Bigr]\,d\tau\,, \end{split} \end{equation} where we applied Fubini's theorem in the last equality.
We set $$ \widetilde\L(w)(t)\equiv\int_0^t(t-\tau)F(t-\tau)*w(\tau)\,d\tau
\quad\hbox{and}\quad \overline\L(w)(t)\equiv\int_0^t \tau\,F(t-\tau)*w(\tau)\,d\tau
.$$ Note that, except for the additional factors $t-\tau$ or $\tau$, the operators $\widetilde{\L}$ and $\overline{\L}$ agree with the operator $\L$ introduced in~\eqref{Lw} and studied before. If we introduce the matrix $$
w_1\equiv
\!{\phantom{\Bigl|}}^T\!\! \Bigl(\nabla\otimes e^{\tau\Delta}a\Bigr) \Bigl(\nabla\otimes e^{\tau\Delta}a\Bigr), $$ then we can rewrite~\eqref{Baa} as \begin{equation*}
B(e^{t\Delta}a,e^{t\Delta}a)=t\,e^{t\Delta}\P\nabla\cdot(a\otimes a)-2\widetilde\L(w_1). \end{equation*} The estimates of Lemma~\ref{heat l} (in the case $m=\vartheta=1$) imply $w_1\in X^1_4$. But the result of Lemma~\ref{lem ter}, established before for the operator $\L$, can be easily adapted to the operators $\widetilde\L$ and $\overline\L$; indeed the factors $t-\tau$ and $\tau$ are harmless in our estimates due to the obvious inequalities $t-\tau\le t$ and $\tau\le t$. Thus, we get, for $d=3$, \begin{equation*}
\widetilde\L(w_1)= \FF(x)\!:\!\int_0^t\! (t-\tau)
\!\!\int w_1 \,dy\,d\tau
\,+\,\mathcal{O}\Bigl( t^2|x|^{-5}\log\bigl(|x|/\sqrt t\bigr)\Bigr). \end{equation*}
When $d\ge4$, we can simply write \begin{equation*}
\widetilde\L(w_1)=\mathcal{O}\bigl(t^2|x|^{-5}\log(|x|/\sqrt t)\bigr). \end{equation*}
It remains to write the asymptotics of (or to estimate) the last two terms $B(e^{t\Delta}a,B(u,u))$ and $B(B(u,u),B(u,u))$ appearing in the bi-integral formula~\eqref{biff}. Let $$w_2\equiv\tfrac{1}{t} \,e^{t\Delta}a\otimes B(u,u).$$ We get from Lemma~\ref{heat l} (applied with $m=1$ and $\vartheta=1$) and Remark~\ref{rem fine} that $w_2\in X^1_4$. In the same way, Remark~\ref{rem fine} ensures that, if we set $$w_3\equiv \tfrac{1}{t} \,B(u,u)\otimes B(u,u),$$ then $w_3\in X^1_4$. Therefore, Lemma~\ref{lem ter} (or more precisely, the adaptation of this lemma to $\overline\L(w_2)$ and $\overline\L(w_3)\,$) implies, for $d=3$, \begin{equation} \label{babuu} \begin{split} 2B(e^{t\Delta}a, & B(u,u))-B(B(u,u),B(u,u)) \\ &=\,\,\,2\overline\L(w_2)-\overline\L(w_3)\\
&=\,\,\, \FF(x)\!:\!\int_0^t s\!\!\int (2w_2-w_3)\,dy\,ds +\mathcal{O}\Bigl(t^2|x|^{-5}\log(|x|/\sqrt t)\Bigr),\\
\end{split} \end{equation}
as $|x|\to\infty$.
When $d\ge4$, the first term in the right-hand side of~\eqref{babuu} can be dropped. Therefore, the proof of the expansion~\eqref{pro4Dnss} follows from the bi-integral formula, collecting the above estimates.
In the case $d=3$, it is convenient to introduce the time-dependent matrix \begin{equation}
\label{ssos} \Lambda(t)=\int_0^t\!\!\int \Bigl[ -2(t-s)w_1-2s\, w_2+s\, w_3\Bigr]\,dy\,ds. \end{equation}
The expansion~\eqref{pro3Dnss} now follows by collecting all the above expressions. The estimate $|\Lambda(t)|\le Ct^{3/2}$ is immediate, because $w_1$, $w_2$ and $w_3$ belong to $X^1_4$.
{
$\Box$}
As an application of this theorem, we can complete the proof of Theorem~\ref{theoss} by giving the far-field asymptotics of self-similar solutions in the case $d\ge3$.
\noindent {\it End of the Proof of Theorem~\ref{theoss}.\/} We assumed that $a\in C^\infty(\S^{d-1})$ and that $a$ is homogeneous of degree~$-1$. From the second part of Lemma~\ref{as heat lem},
$$ e^{t\Delta}a(x)=a(x)+t\Delta a(x)+\mathcal{O}(t^2|x|^{-5}).$$ But the solution $u$ is of the self-similar form $u(x,t)=\frac{1}{\sqrt t}U(x/\sqrt t)$. Moreover, the linear part $e^{t\Delta}a$ and the nonlinear part $B(u,u)$ of $u$ are also of self-similar form, so that, with the same notation as in the previous proof, $w_j(y,s)=\frac{1}{s^2}W_j(y/\sqrt s)$, where $$W_j(y)=w_j(y,1), \qquad j=1,2,3.$$
It follows from~\eqref{ssos} that, in the case $d=3$, $\Lambda(t)$ is of the form $\Lambda(t)=t^{3/2}B$ for some \emph{constant\/} matrix $B=(B_{h,k})$. As with $\Lambda(t)$, the matrix $B$ is not known explicitly; however, it is possible to obtain an explicit integral formula relating $B$ to the datum $a$ and the profile $U$ by performing a self-similar change of variables in the integral~\eqref{ssos}. An easy computation yields \begin{equation} \label{Bmat} B=\frac{1}{3}\int \Bigl(-8W_1-4 W_2+2 W_3\Bigr)(y)\,dy. \end{equation}
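For the reader's convenience, we spell out this computation (a routine verification, included here for completeness): since $w_j(y,s)=\frac{1}{s^2}W_j(y/\sqrt s)$, the substitution $y=\sqrt s\,z$ (with $dy=s^{3/2}\,dz$ for $d=3$) gives
$$\int w_j(y,s)\,dy=s^{-1/2}\int W_j(z)\,dz,\qquad j=1,2,3.$$
Plugging this into~\eqref{ssos} and using $\int_0^t(t-s)s^{-1/2}\,ds=\tfrac43\,t^{3/2}$ and $\int_0^t s^{1/2}\,ds=\tfrac23\,t^{3/2}$, we obtain
$$\Lambda(t)=t^{3/2}\int\Bigl(-\tfrac{8}{3}W_1-\tfrac{4}{3}W_2+\tfrac{2}{3}W_3\Bigr)(y)\,dy,$$
in agreement with~\eqref{Bmat}.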
\begin{subequations} Now we can pass to self-similar variables in expansion~\eqref{pro3Dnss} and, after eliminating~$t$, we get, for $d=3$, \begin{equation} \label{pro3Dp} U(x)=a(x)+\Delta a(x)- e^{\Delta}\P \cdot\nabla (a\otimes a)
-\frac{ \mathcal{Q}(x)\!:\!B}{|x|^7} +\mathcal{O}\bigl(|x|^{-5}\log(|x|)\bigr), \end{equation}
as $|x|\to\infty$.
As before, for $d\ge4$, the far-field asymptotics has a simpler structure, namely, \begin{equation} \label{proDp}
U(x)=a(x)+\Delta a(x)- e^\Delta\P \cdot\nabla (a\otimes a)+ \mathcal{O}\bigl(|x|^{-5}\log(|x|)\bigr), \end{equation}
as $|x|\to\infty$. \end{subequations}
To finish the proof, it remains to show that we can drop the filtering operator $e^\Delta$ appearing in the right-hand side of equations~\eqref{pro3Dp} and~\eqref{proDp}. Recall that $a$ is smooth on the sphere. In fact, the condition $a\in C^\infty(\S^{d-1})$ will allow us to carry out the proof using only ``soft arguments''. The datum $a$ being homogeneous of degree~$-1$,
$\nabla\cdot (a\otimes a)$ is a homogeneous distribution of degree~$-3$
(here we need $d\ge3$), which agrees with a $C^\infty$ function outside the origin. But the matrix Fourier multiplier of the operator $\P$ (given by $\delta_{j,k}-\xi_j\xi_k|\xi|^{-2}$) is homogeneous of degree zero and smooth outside the origin. Then it follows (see, {\it e.g.\/}, \cite[p. 262]{Ste93}) that $\P\nabla\cdot(a\otimes a)$ is a homogeneous distribution of degree~$-3$ that agrees with a $C^\infty$ function outside the origin.
Now let $\chi\in C^\infty_0(\R^d)$ be a cut-off function equal to~$1$ in a neighborhood of the origin and write $$e^{\Delta}\P\nabla\cdot(a\otimes a)=e^{\Delta}\chi \P\nabla\cdot(a\otimes a)+
e^{\Delta}(1-\chi)\mathcal{A}(x),$$ where $\mathcal{A}(x)$ is a smooth function on $\R^d$, agreeing with $\P\nabla\cdot(a\otimes a)$ outside a neighborhood of the origin. In particular, $(1-\chi)\mathcal A\in E^m_3$, for all $m\in\N$.
Note that $e^{\Delta}\chi\P\nabla\cdot(a\otimes a)$ is an analytic function, given by $$ e^{\Delta}\chi\P\nabla\cdot(a\otimes a)(x)=\langle\chi\P\nabla\cdot(a\otimes a),g_1(x-\cdot)\rangle,$$ where $g_1$ is the standard Gaussian and $\langle\cdot,\cdot\rangle$ denotes the duality product between compactly supported distributions and $C^\infty$ functions. The properties of compactly supported distributions guarantee the existence of a compact set $K\subset\R^d$ and constants $C>0$, $M\in\N$ such that
$$ \Bigl| \langle\chi\P\nabla\cdot(a\otimes a),g_1(x-\cdot)\rangle \Bigr|
\le C\sum_{|\alpha|\le M}\sup_{y\in K}\partial^\alpha g_1(x-y)\le C'g_1(x/2)$$
for large enough~$|x|$. In particular,
$e^{\Delta}\chi\P\nabla\cdot(a\otimes a)=\mathcal{O}(|x|^{-5})$ as $|x|\to\infty$.
Let us now apply the asymptotic formula for convolution integrals~\eqref{cpp} with $g=g_1$ and $f=(1-\chi)\mathcal{A}$. We obtained this formula under the assumption $f\in \dot E^m_\vartheta$, with
$0\le\vartheta<d$. Here we have, instead, $f\in E^m_3\subset \dot E^m_3$, but it is easily checked that this formula remains valid in this case, also when $d=3$, with the same proof, since $f$ is locally integrable. Applying this formula with $m=2$, and using $\int g_1=1$ and $\int y\,g_1(y)\,dy=0$, we get, as $|x|\to\infty$, \begin{equation*} \begin{split}
e^{\Delta}(1-\chi)\mathcal{A}(x)&=g_1*f(x)=f(x)+\mathcal{O}(|x|^{-5})\\
&=\mathcal A(x) +\mathcal{O}(|x|^{-5})\\
& =\P\nabla\cdot(a\otimes a)(x)+\mathcal{O}(|x|^{-5}). \end{split} \end{equation*} Theorem~\ref{theoss} is now completely proved.
{
$\Box$}
\end{document}
\begin{document}
\author{ Heinz H.\ Bauschke\thanks{ Mathematics, University of British Columbia, Kelowna, B.C.\ V1V~1V7, Canada. E-mail: \texttt{heinz.bauschke@ubc.ca}.}, ~ Walaa M.\ Moursi\thanks{ Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, CA 94305, USA and Mansoura University, Faculty of Science, Mathematics Department, Mansoura 35516, Egypt. E-mail: \texttt{wmoursi@stanford.edu}.} ~and~ Xianfu Wang\thanks{ Mathematics, University of British Columbia, Kelowna, B.C.\ V1V~1V7, Canada. E-mail: \texttt{shawn.wang@ubc.ca}. }}
\title{\textsc{Generalized monotone operators\\ and their averaged resolvents}}
\date{February 22, 2019}
\maketitle
\begin{abstract} \noindent
The correspondence between the monotonicity of a (possibly) set-valued operator and the firm nonexpansiveness of its resolvent is a key ingredient in the convergence analysis of many optimization algorithms. Firmly nonexpansive operators form a proper subclass of the more general -- but still pleasant from an algorithmic perspective -- class of averaged operators. In this paper, we introduce the new notion of conically nonexpansive operators, which generalize nonexpansive mappings. We characterize averaged operators as being resolvents of comonotone operators under appropriate scaling. As a consequence, we characterize the proximal point mappings associated with hypoconvex functions as cocoercive operators or, equivalently, as displacement mappings of conically nonexpansive operators. Several examples illustrate our analysis and demonstrate the tightness of our results.
\end{abstract} {\small \noindent {\bfseries 2010 Mathematics Subject Classification:} {Primary 47H05, 47H09,
Secondary
49N15, 90C25. }
\noindent {\bfseries Keywords:} averaged operator, cocoercive operator, firmly nonexpansive mapping, hypoconvex function, maximally monotone operator, nonexpansive mapping, proximal operator. }
\section{Introduction}
In this paper, we assume that \begin{empheq}[box=\mybluebox]{equation*} \label{T:assmp} \text{$X$ is a real Hilbert space}, \end{empheq} with inner product $\innp{\cdot,\cdot}$ and induced norm $\norm{\cdot}$. Monotone operators form a beautiful class of operators that play a crucial role in modern optimization.
This class includes subdifferential operators of proper lower semicontinuous convex functions as well as matrices with positive semidefinite symmetric part. (For detailed discussions on monotone operators and the connection to optimization problems, we refer the reader to \cite{BC2017}, \cite{Borwein50}, \cite{Brezis}, \cite{BurIus}, \cite{Comb96}, \cite{Comb04}, \cite{Mord18}, \cite{Rock98}, \cite{Simons1}, \cite{Simons2}, \cite{Zeidler2a}, \cite{Zeidler2b}, and the references therein.)
The correspondence between the maximal monotonicity
of an operator and the firm nonexpansiveness of its \emph{resolvent} is of central importance from an algorithmic perspective: to find a critical point of the former, iterate the latter!
Indeed, firmly nonexpansive operators belong to the more general and pleasant class of \emph{averaged} operators. Let $x_0\in X$ and let $T\colon X\to X$ be averaged. Thanks to the Krasnosel'ski\u{\i}--Mann iteration (see \cite{krans}, \cite{Mann} and also \cite[Theorem~5.14]{BC2017}), the sequence $(T^n x_0)_{n\in \ensuremath{\mathbb N}}$ converges weakly to a fixed point of $T$. When $T$ is the \emph{proximal mapping} associated with a proper lower semicontinuous convex function $f$, the set of fixed points of $T$ is the set of critical points of $f$ or, equivalently, the set of minimizers of $f$. In fact, iterating $T$ in this case produces the famous proximal point algorithm,
see \cite{Rock76}. \emph{The main goal of this paper is to answer the question: Can we explore a new correspondence between a set-valued operator and its resolvent which generalizes the fundamental
correspondence between
monotone operators and
firmly nonexpansive mappings (see
\cref{fact:corres})? Our approach relies on the new notion of \emph{\text{conically nonexpansive}\ }operators as well as the notions of $\rho$-monotonicity (respectively $\rho$-comonotonicity) which, depending on the value of $\rho$, reduce to strong monotonicity, monotonicity or hypomonotonicity (respectively cocoercivity, monotonicity or cohypomonotonicity).}
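To make the preceding discussion concrete, the following minimal Python sketch (our illustration, not part of the paper) runs the proximal point algorithm for the convex function $f(x)=|x|$ on the real line, whose proximal mapping is the standard soft-thresholding operator; the iterates reach the minimizer~$0$.

```python
# Proximal point algorithm for the convex function f(x) = |x| on the real line.
# Its proximal mapping is soft thresholding:
#   prox_{lam*f}(x) = sign(x) * max(|x| - lam, 0).
import math

def prox_abs(x, lam):
    """Proximal mapping of lam*|.| (soft thresholding)."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

x, lam = 5.0, 1.0
for _ in range(20):          # iterate the proximal mapping
    x = prox_abs(x, lam)

print(x)  # 0.0 -- the minimizer of |.|
```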
Although some correspondences between a monotone operator $(\rho\geq 0)$ and its resolvent have been established in \cite{BMW2}, our analysis here not only provides more quantitative results but also goes beyond monotone operators. We now summarize the three main results of this paper:
\begin{itemize} \item[{\bf R1}] \namedlabel{R:1}{\bf R1} We show that, when $\rho>-1$, the resolvent of a $\rho$-monotone operator as well as the resolvent of its inverse are single-valued and have full domain. This allows us to extend the classical theorem by Minty (see \cref{thm:minty}) to this class of operators (see \cref{thm:Minty:type}).
\item[{\bf R2}] \namedlabel{R:2}{\bf R2} We characterize \text{conically nonexpansive}\ operators (respectively averaged operators and nonexpansive operators) to be resolvents of $\rho$-comonotone operators with $\rho>-1$ (respectively $\rho>-\tfrac{1}{2}$ and $\rho\ge -\tfrac{1}{2}$) (see \cref{cor:eq:nexp:-0.3} and also \cref{tab:1}). \item[{\bf R3}] \namedlabel{R:3}{\bf R3} As a consequence of \ref{R:2},
we obtain a novel characterization
of the proximal point mapping associated
with a \emph{hypoconvex} function\footnote{This is also
known as \emph{weakly convex} function.}
(under
appropriate scaling of the function)
to be
a \text{conically nonexpansive}\ mapping, or equivalently,
the displacement mapping
of a cocoercive
operator (see \cref{prop:hypocon:av}). \end{itemize}
The remainder of this paper is organized as follows.
\cref{sec:mono:comono} is devoted to the study of the
properties of $\rho$-monotone and
$\rho$-comonotone operators.
In \cref{sec:aver}, we provide a characterization
of averaged operators as resolvents of
$\rho$-comonotone operators.
\cref{sec:JA:RA} provides
useful correspondences between
an operator and its resolvent
as well as its reflected resolvent. In \cref{sec:linear}, we focus on $\rho$-monotone and
$\rho$-comonotone linear operators. In the final \cref{sec:hypocon},
we establish the connection to hypoconvex functions.
The notation we use is standard and follows, e.g.,
\cite{BC2017} or \cite{Rock98}.
\section{$\rho$-monotone and $\rho$-comonotone operators} \label{sec:mono:comono}
Let $A\colon X\rras X$. Recall that the \emph{resolvent} of $A$ is $J_A=(\ensuremath{\operatorname{Id}}+A)^{-1} $ and the \emph{reflected resolvent} of $A$ is $R_A=2J_A-\ensuremath{\operatorname{Id}} $, where $\ensuremath{\operatorname{Id}}\colon X\to X\colon x\mapsto x$. The \emph{graph} of $A$ is $\gra A=\menge{(x,u)\in X\times X}{u\in Ax}$. Let $T\colon X\to X$ and let $\alpha \in\left]0,1\right[$. Recall that \begin{enumerate}
\item
$T$ is \emph{nonexpansive}
if $(\forall (x,y)\in X\times X)$
$\norm{Tx-Ty}\le \norm{x-y}$. \item $T $ is \emph{$\alpha$-averaged} if there exists a nonexpansive operator $N\colon X\to X$ such that $T=(1-\alpha)\ensuremath{\operatorname{Id}}+\alpha N$; equivalently, $(\forall (x,y)\in X\times X)$ we have \begin{equation} (1-\alpha)\norm{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}^2 \le \alpha(\norm{x-y}^2-\norm{Tx-Ty}^2). \end{equation} \item $T $ is \emph{firmly nonexpansive} if $T$ is $\tfrac{1}{2}$-averaged. Equivalently, if $(\forall (x,y)\in X\times X)$ $\normsq{Tx-Ty}+\normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}\le \normsq{x-y}$. \end{enumerate}
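As a quick numerical sanity check (our illustration, relying on the standard fact that the proximal mapping of a proper lower semicontinuous convex function is firmly nonexpansive), the following Python snippet verifies the firm nonexpansiveness inequality for the soft-thresholding operator, i.e., the proximal mapping of $f(x)=|x|$:

```python
# Spot-check of firm nonexpansiveness,
#   |Tx - Ty|^2 + |(x - Tx) - (y - Ty)|^2 <= |x - y|^2,
# for T = prox of f(x) = |x| (soft thresholding with parameter 1).
import math, random

def T(x):
    return math.copysign(max(abs(x) - 1.0, 0.0), x)

rng = random.Random(0)
ok = all(
    (T(x) - T(y))**2 + ((x - T(x)) - (y - T(y)))**2 <= (x - y)**2 + 1e-12
    for x, y in ((rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(1000))
)
print(ok)  # True
```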
We begin this section by stating the following two useful facts. \begin{fact}{\rm (see, e.g.,
\cite[Theorem~2]{EckBer})}
\label{fact:corres} Let $D$ be a nonempty subset of $X$, let $T\colon D\to X$, and set $A=T^{-1}-\ensuremath{\operatorname{Id}}$. Then $T=J_A$. Moreover, the following hold: \begin{enumerate}
\item
$T$ is firmly nonexpansive if and only if
$A$ is monotone.
\item
$T$ is firmly nonexpansive and $D=X$
if and only if
$A$ is maximally monotone. \end{enumerate} \end{fact}
\begin{fact}[{\bf Minty's Theorem}]{\rm\cite{Minty}
(see also \cite[Theorem~21.1]{BC2017})} \label{thm:minty} Let $A\colon X\rras X$ be monotone. Then \begin{equation} \label{eq:Minty} \gra A=\menge{(J_A x, (\ensuremath{\operatorname{Id}}-J_A)x)}{x\in \ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+A)}. \end{equation} Moreover, \begin{equation} \label{eq:Minty:2} \text{$A$ is maximally monotone $\siff$ $\ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+A)=X$.} \end{equation} \end{fact}
\begin{defn} Let $A\colon X\rras X$ and let $\rho\in \ensuremath{\mathbb R}$.
Then \begin{enumerate} \item $A$ is \emph{$\rho$-monotone}
if $(\forall (x,u)\in \gra A)$ $(\forall (y,v)\in \gra A)$ we have \begin{equation} \label{eq:def:hmon} \innp{x-y,u-v}\ge \rho\normsq{x-y}. \end{equation} \item $A$ is \emph{maximally $\rho$-monotone} if $A$ is $\rho$-monotone and there is no $\rho$-monotone operator $B\colon X\rras X$ such that $\gra B$ properly contains $\gra A$, i.e., for every $(x,u)\in X\times X$, \begin{equation} (x,u)\in \gra A ~\siff~ (\forall (y,v)\in \gra A) ~\innp{x-y,u-v}\ge \rho\normsq{x-y}. \end{equation} \item $A$ is \emph{\bhmon} if $(\forall (x,u)\in \gra A)$ $(\forall (y,v)\in \gra A)$ we have \begin{equation} \label{eq:def:cohmon} \innp{x-y,u-v}\ge \rho\normsq{u-v}. \end{equation} \item $A$ is \emph{\bhmaxmon} if $A$ is \bhmon\ and there is no \bhmon\ operator $B\colon X\rras X$ such that $\gra B$ properly contains $\gra A$, i.e., for every $(x,u)\in X\times X$, \begin{equation} (x,u)\in \gra A ~\siff~ (\forall (y,v)\in \gra A) ~\innp{x-y,u-v}\ge \rho\normsq{u-v}. \end{equation} \end{enumerate} \end{defn} Some comments are in order. \begin{rem}\ \begin{enumerate} \item When $\rho=0$, both $\rho$-monotonicity of $A$
and $\rho$-comonotonicity of $A$
reduce to the monotonicity
of $A$; equivalently
to the monotonicity of $A^{-1}$.
\item When $\rho< 0$, $\rho$-monotonicity is known as $\rho$-\emph{hypomonotonicity}, see \cite[Example~12.28]{Rock98} and \cite[Definition~6.9.1]{BurIus}. In this case, the $\rho$-comonotonicity is also known as \emph{$\rho$-cohypomonotonicity} (see \cite[Definition~2.2]{CombPenn04}).
\item In passing, we point out that when $\rho>0$, $\rho$-monotonicity of $A$ reduces to $\rho$-strong monotonicity of $A$, while $\rho$-comonotonicity of $A$
reduces to $\rho$-cocoercivity\footnote{Let $\beta>0$ and let $T\colon X\to X$. Recall that $T$ is $\beta$-\emph{cocoercive} if $\beta T$ is firmly nonexpansive, i.e., $(\forall (x,y)\in X\times X)$ $\innp{x-y,Tx-Ty}\ge \beta \normsq{Tx-Ty}$. } of $A$.
\end{enumerate} \end{rem}
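These notions can be probed on the simplest possible example (our illustration): for the linear operator $A=\lambda\,\ensuremath{\operatorname{Id}}$ on $\ensuremath{\mathbb R}$, one has $\innp{x-y,Ax-Ay}=\lambda(x-y)^2$ and $\normsq{Ax-Ay}=\lambda^2(x-y)^2$, so $A$ is $\rho$-monotone if and only if $\rho\le\lambda$, and $\rho$-comonotone if and only if $\lambda\ge\rho\lambda^2$. The Python check below tests these inequalities on a grid of points:

```python
# Illustration (ours): for the linear operator A = lam*Id on R,
#   <x-y, Ax-Ay>  = lam   * (x-y)^2,
#   ||Ax-Ay||^2   = lam^2 * (x-y)^2,
# so A is rho-monotone iff rho <= lam, and rho-comonotone iff lam >= rho*lam^2.
from itertools import product

PTS = [i * 0.5 - 3.0 for i in range(13)]  # grid -3.0, -2.5, ..., 3.0

def rho_monotone(lam, rho):
    return all(lam * (x - y)**2 >= rho * (x - y)**2 - 1e-12
               for x, y in product(PTS, PTS))

def rho_comonotone(lam, rho):
    return all(lam * (x - y)**2 >= rho * (lam * (x - y))**2 - 1e-12
               for x, y in product(PTS, PTS))

lam = -0.25  # a hypomonotone example
print(rho_monotone(lam, -0.25), rho_monotone(lam, -0.2))     # True False
print(rho_comonotone(lam, -4.0), rho_comonotone(lam, -3.9))  # True False
```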
Unlike classical monotonicity, $\rho$-comonotonicity of $A$ is \emph{not} equivalent to
$\rho$-comonotonicity of $A^{-1}$. Instead, we have the following correspondences.
\begin{lemma} \label{lem:bmon:inv} Let $A\colon X\rras X$ and let $\rho\in \ensuremath{\mathbb R}$. The following are equivalent: \begin{enumerate} \item \label{lem:bmon:inv:i} $A$ is \rhmon. \item \label{lem:bmon:inv:ii} $A^{-1}-\rho\ensuremath{\operatorname{Id}}$ is monotone. \item \label{lem:bmon:inv:iii} $A^{-1}$ is $\rho$-monotone, i.e., $(\forall (x,u)\in \gra A^{-1})$ $(\forall (y,v)\in \gra A^{-1})$ $ \innp{x-y,u-v}\ge \rho \normsq{x-y}. $ \end{enumerate} \end{lemma} \begin{proof} ``\ref{lem:bmon:inv:i}$\RA$\ref{lem:bmon:inv:ii}": Let $\{(x,u),(y,v) \}\subseteq X\times X$. Then $\{(x,u),(y,v) \}\subseteq\gra (A^{-1}-\rho\ensuremath{\operatorname{Id}})$ $\siff$ [$u\in A^{-1}x-\rho x $ and $v\in A^{-1}y-\rho y $] $\siff$ $\{(x, u+\rho x),(y, v+\rho y)\}\subseteq \gra A^{-1}$ $\siff$ $\{( u+\rho x, x),(v+\rho y,y)\}\subseteq \gra A$ $\RA$ $\innp{x-y,u-v+\rho(x-y)}\ge \rho\normsq{x-y}$ $\siff$ $\rho\normsq{x-y}+\innp{x-y,u-v}\ge \rho\normsq{x-y}$ $\siff$ $\innp{u-v,x-y}\ge 0$.
``\ref{lem:bmon:inv:ii}$\RA$\ref{lem:bmon:inv:iii}": Let $\{(x,u),(y,v) \}\subseteq\gra A^{-1}$. Then $\{(x,u-\rho x),(y,v-\rho y) \}\subseteq\gra (A^{-1}-\rho \ensuremath{\operatorname{Id}})$. Hence $\innp{x-y, u-v-\rho(x-y)}\ge 0$; equivalently $\innp{x-y, u-v}\ge\rho\normsq{x-y}$.
``\ref{lem:bmon:inv:iii}$\RA$\ref{lem:bmon:inv:i}": Let $\{(x,u),(y,v) \}\subseteq X\times X$. Then $\{(x,u),(y,v) \}\subseteq\gra A$ $\siff$ $\{(u,x),(v,y) \}\subseteq \gra A^{-1}$ $\RA $ $\innp{x-y,u-v}\ge \rho\normsq{u-v}$. \end{proof}
\begin{lemma} \label{lem:gra:conn} Let $A\colon X\rras X$ and let $\rho\in \ensuremath{\mathbb R}$. Then the following hold: \begin{enumerate} \item \label{eq:grA:grainv} $\gra A=\menge{(u+\rho x,x)}{(x,u)\in \gra (A^{-1}-\rho \ensuremath{\operatorname{Id}})}$. \item \label{eq:grainv:grA} $\gra (A^{-1}-\rho \ensuremath{\operatorname{Id}})=\menge{(u,x-\rho u)}{(x,u)\in \gra A}$. \end{enumerate} \end{lemma} \begin{proof} \ref{eq:grA:grainv}: Let $(x,u)\in X\times X$. Then $(x,u)\in \gra (A^{-1}-\rho \ensuremath{\operatorname{Id}})$ $\siff$ $u\in A^{-1}x-\rho x$ $\siff$ $u+\rho x\in A^{-1} x$ $\siff$ $x\in A(u+\rho x)$ $\siff$ $(u+\rho x,x)\in \gra A$. This proves ``$\supseteq$" in \ref{eq:grA:grainv}. The opposite inclusion can be proved similarly. \ref{eq:grainv:grA}: The proof proceeds similar to that of \ref{eq:grA:grainv}. \end{proof}
\begin{lemma} \label{lem:bmax:inv} Let $A\colon X\rras X$ and let $\rho\in \ensuremath{\mathbb R}$. The following are equivalent: \begin{enumerate} \item \label{lem:bmax:inv:i} $A$ is \rhmaxmon. \item \label{lem:bmax:inv:ii} $A^{-1}-\rho\ensuremath{\operatorname{Id}}$ is maximally monotone. \end{enumerate} \end{lemma} \begin{proof} Note that \cref{lem:bmon:inv} implies that $A$ is \rhmon\ $\siff $ $A^{-1}-\rho\ensuremath{\operatorname{Id}}$ is monotone. ``\ref{lem:bmax:inv:i}$\RA$\ref{lem:bmax:inv:ii}": Let $(y,v)\in X\times X$. Then $(y,v)$ is monotonically related to $\gra(A^{-1}-\rho\ensuremath{\operatorname{Id}})$ $\siff$ $(\forall (x,u)\in \gra (A^{-1}-\rho \ensuremath{\operatorname{Id}}))$ $\innp{x-y,u-v}\ge 0$ $\siff$ $(\forall (x,u)\in \gra (A^{-1}-\rho \ensuremath{\operatorname{Id}}))$ $\innp{x-y,u-v}+\rho\normsq{x-y}\ge \rho\normsq{x-y}$ $\siff$ $(\forall (x,u)\in \gra (A^{-1}-\rho \ensuremath{\operatorname{Id}}))$ $\innp{x-y,u+\rho x-(v+\rho y)}\ge\rho\normsq{x-y}$. Because the last inequality holds for all $(x,u)\in \gra (A^{-1}-\rho \ensuremath{\operatorname{Id}})$, the parametrization of $\gra A$ given in \cref{lem:gra:conn}\ref{eq:grA:grainv} and
the \emph{maximal} $\rho$-comonotonicity of $A$
imply
that
$(v+\rho y, y) \in \gra A$. Therefore, by \cref{lem:gra:conn}\ref{eq:grainv:grA},
$(y,v)\in \gra (A^{-1}-\rho \ensuremath{\operatorname{Id}})$.
``\ref{lem:bmax:inv:ii}$\RA$\ref{lem:bmax:inv:i}": Let $(y,v)\in X\times X$. Then $(y,v)$ is $\rho$-comonotonically related to $\gra A$ $\siff$ $(\forall (x,u)\in \gra A)$ $\innp{x-y,u-v}\ge \rho \normsq{u-v}$ $\siff$ $(\forall (x,u)\in \gra A)$ $\innp{x-\rho u-(y-\rho v),u-v}\ge 0$. It follows from \cref{lem:gra:conn}\ref{eq:grainv:grA} and
the \emph{maximal} monotonicity of $A^{-1}-\rho\ensuremath{\operatorname{Id}}$ that $(v, y-\rho v)\in \gra (A^{-1}-\rho \ensuremath{\operatorname{Id}})$, equivalently, using \cref{lem:gra:conn}\ref{eq:grA:grainv}, $(y,v)\in \gra A$. \end{proof}
\begin{rem} Note that when $\rho<0$, the (maximal) monotonicity of $A^{-1}-\rho\ensuremath{\operatorname{Id}}$ is equivalent to the (maximal) monotonicity of the Yosida approximation $(A^{-1}-\rho\ensuremath{\operatorname{Id}})^{-1}$. Such a characterization is presented in \cite[Proposition~6.9.3]{BurIus}. \end{rem}
\begin{prop} \label{prop:surject} Let $A\colon X\rras X$ be \bhmaxmon\ where $\rho>-1$. Then $\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A^{-1})=X$.
\end{prop} \begin{proof} By \cref{lem:bmax:inv}, $A^{-1}-\rho \ensuremath{\operatorname{Id}}$
is maximally monotone.
Consequently, because $1+\rho>0$,
the operator $\tfrac{1}{1+\rho}(A^{-1}-\rho\ensuremath{\operatorname{Id}})$
is maximally monotone. Applying \cref{eq:Minty:2} to $\tfrac{1}{1+\rho}(A^{-1}-\rho\ensuremath{\operatorname{Id}})$ we have $\ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+A^{-1})=\ensuremath{\operatorname{ran}} ((1+\rho)\ensuremath{\operatorname{Id}}+(A^{-1}-\rho \ensuremath{\operatorname{Id}})) =(1+\rho)\ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+\tfrac{1}{1+\rho}(A^{-1}-\rho\ensuremath{\operatorname{Id}}))=(1+\rho)X=X$. \end{proof}
\begin{prop} \label{prop:gen:min} Let $A\colon X\rras X$. Then the following hold: \begin{enumerate} \item \label{prop:gen:min:i} $J_{A^{-1}}=\ensuremath{\operatorname{Id}}-J_A.$ \item \label{prop:gen:min:ii} $\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A^{-1})=\ensuremath{\operatorname{dom}}(\ensuremath{\operatorname{Id}}-J_A)=\ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+A)$. \end{enumerate} \end{prop} \begin{proof} \ref{prop:gen:min:i}: This follows from \cite[Proposition~23.7(ii)~and~Definition~23.1]{BC2017}. \ref{prop:gen:min:ii}: Using \ref{prop:gen:min:i}, we have $\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A^{-1}) =\ensuremath{\operatorname{dom}} (\ensuremath{\operatorname{Id}}+A^{-1})^{-1} =\ensuremath{\operatorname{dom}} J_{A^{-1}} =\ensuremath{\operatorname{dom}} (\ensuremath{\operatorname{Id}}-J_A) =(\ensuremath{\operatorname{dom}} \ensuremath{\operatorname{Id}})\cap (\ensuremath{\operatorname{dom}} J_A) =\ensuremath{\operatorname{dom}} J_A =\ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+A)$. \end{proof}
\begin{corollary}[\bf{surjectivity of $\ensuremath{\operatorname{Id}}+A$ and $\ensuremath{\operatorname{Id}}+A^{-1}$}] \label{cor:surj} Let $A\colon X\rras X$ be \bhmaxmon\ where $\rho>-1$. Then \begin{equation} \ensuremath{\operatorname{dom}} J_A=\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A)=X, \end{equation} and \begin{equation}
\ensuremath{\operatorname{dom}} (\ensuremath{\operatorname{Id}}-J_A)=\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A^{-1})=X.
\end{equation} \end{corollary} \begin{proof} Combine \cref{prop:surject}
and \cref{prop:gen:min}\ref{prop:gen:min:i}\&\ref{prop:gen:min:ii}. \end{proof}
\begin{prop}[\bf{single-valuedness of the resolvent}] \label{prop:s:v} Let $A\colon X\rras X$ be \bhmon\ where $\rho>-1$. Then $J_A=(\ensuremath{\operatorname{Id}}+A)^{-1}$ and $J_{A^{-1}}=\ensuremath{\operatorname{Id}}-J_A$ are at most single-valued. \end{prop} \begin{proof} Let $x\in \ensuremath{\operatorname{dom}} J_A=\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A)$ and let $(u,v)\in X\times X$. Then $\{u,v\}\subseteq J_A x$ $\siff$ [$x-u\in Au$ and $x-v\in Av$] $\RA$ $\innp{(x-u)-(x-v),u-v}\ge \rho\normsq{u-v}$ $\siff$ $-\normsq{u-v}\ge \rho\normsq{u-v}$. Since $\rho>-1$, the last inequality implies that $u=v$. Now combine with \cref{prop:gen:min}\ref{prop:gen:min:i}. \end{proof}
\begin{corollary}[{See also \cite[Proposition~3.4]{PhanDao18}}]
Let $A\colon X\rras X$ be \bhmaxmon\ where $\rho>-1$.
Then $J_A=(\ensuremath{\operatorname{Id}}+A)^{-1}$ and
$J_{A^{-1}}=\ensuremath{\operatorname{Id}}-J_A$ are single-valued
and $\ensuremath{\operatorname{dom}} J_A=\ensuremath{\operatorname{dom}} J_{A^{-1}}=X$. \end{corollary}
In \cref{ex:sv:fr:fail} below, we illustrate that the assumption that $\rho >-1$ is critical in the conclusion of \cref{cor:surj} and \cref{prop:s:v}.
\begin{example} \label{ex:sv:fr:fail} Suppose that $X\neq \{0\}$. Let $C$ be a nonempty closed convex subset of $X$, let $r\in \ensuremath{\mathbb R}_+$, set $B=-\ensuremath{\operatorname{Id}}-rP_C$, set $A=B^{-1}$ and set $\rho=-(1+r)\le -1$. Then the following hold: \begin{enumerate} \item \label{ex:sv:fr:fail:i} $B-\rho \ensuremath{\operatorname{Id}} $ is maximally monotone. \item \label{ex:sv:fr:fail:ii} $A$ is \rhmaxmon. \item \label{ex:sv:fr:fail:iii} $\ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+A)=\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A^{-1})=(\rho+1)C=-rC$. \item \label{ex:sv:fr:fail:iii:b} $\ensuremath{\operatorname{Id}}+A$ is surjective $\siff$ $[C=X \text{~and~}r>0]$. \item \label{ex:sv:fr:fail:iv} $J_A$ is at most single-valued $\siff$ $J_{A^{-1}}$ is at most single-valued $\siff$ $[C=X \text{~and~}r>0]$. \end{enumerate} \end{example}
\begin{proof} \ref{ex:sv:fr:fail:i}: Indeed, $B-\rho\ensuremath{\operatorname{Id}}=-\ensuremath{\operatorname{Id}}-rP_C+(1+r)\ensuremath{\operatorname{Id}}=r(\ensuremath{\operatorname{Id}}-P_C)$. It follows from \cite[Example~23.4~\&~Proposition~23.11(i)]{BC2017} that $\ensuremath{\operatorname{Id}}-P_C$ is maximally monotone. Because $r\ge 0$, the operator $B-\rho\ensuremath{\operatorname{Id}}=r(\ensuremath{\operatorname{Id}}-P_C)$ is maximally monotone as well.
\ref{ex:sv:fr:fail:ii}: Combine \ref{ex:sv:fr:fail:i} and \cref{lem:bmax:inv}.
\ref{ex:sv:fr:fail:iii}: The first identity is \cref{prop:gen:min}\ref{prop:gen:min:ii}. Now $\ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+A^{-1})=\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+B)=\ensuremath{\operatorname{ran}}(-rP_C) =-r\ensuremath{\operatorname{ran}} P_C=-rC=(\rho+1)C$.
\ref{ex:sv:fr:fail:iii:b}: This is a direct consequence of \ref{ex:sv:fr:fail:iii}.
\ref{ex:sv:fr:fail:iv}: The first equivalence follows from \cref{prop:gen:min}\ref{prop:gen:min:i}. Note that $[r=0 \text{~or~} C=\{0\}]$ $\siff rC=\{0\}$ $\siff rP_C\equiv 0$ $\siff$ $B=-\ensuremath{\operatorname{Id}}$ $\siff$ $\gra J_{A^{-1}}=\gra J_B=\{0\}\times X$. Now suppose that $r> 0$. Then
$J_{A^{-1}}=J_B=(\ensuremath{\operatorname{Id}}+B)^{-1}=(-rP_C)^{-1} =(\ensuremath{\operatorname{Id}}+N_C)\circ(-r^{-1}\ensuremath{\operatorname{Id}})$ which is at most single-valued $\siff $ $C=X$, by e.g., \cite[Theorem~7.4]{BC2017}. \end{proof}
\begin{prop} \label{prop:surj:ow} Let $A\colon X\ensuremath{\rightrightarrows} X$ be \bhmon, where $\rho>-1$, and such that $\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A)=X$. Then $A$ is \bhmaxmon. \end{prop} \begin{proof} Let $(x,u)\in X\times X$ such that $(\forall (y,v)\in \gra A)$ \begin{equation} \label{eq:min:od} \innp{x-y,u-v}\ge \rho\normsq{u-v}. \end{equation} It follows from the surjectivity of $\ensuremath{\operatorname{Id}}+A$ that there exists $(y,v)\in X\times X$ such that $v\in Ay$ and $x+u=y+v\in (\ensuremath{\operatorname{Id}} +A)y$. Consequently, \cref{eq:min:od} implies that $\rho\normsq{u-v}\le \innp{x-y,u-v} =\innp{-(u-v),u-v}=-\normsq{u-v}$. Hence, because $\rho> -1$, we have $u=v$ and thus $x=y$ which proves the maximality of $A$. \end{proof} \begin{thm}[{\bf Minty parametrization}] \label{thm:Minty:type} Let $A\colon X\rras X$ be \bhmon\ where $\rho>-1$. Then \begin{equation} \gra A=\menge{(J_A x, (\ensuremath{\operatorname{Id}}-J_A)x)}{x\in \ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+A)}. \end{equation} Moreover, $A$ is \bhmaxmon\ $\siff$ $\ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}+A)=X$, in which case \begin{equation} \gra A=\menge{(J_A x, (\ensuremath{\operatorname{Id}}-J_A)x)}{x\in X}. \end{equation} \end{thm} \begin{proof} Let $(x,u)\in X\times X$. In view of \cref{prop:s:v} we have $(x,u)\in \gra A$$\siff u\in Ax$ $\siff x+u\in x+Ax=(\ensuremath{\operatorname{Id}} +A)x$ $\siff x=J_A(x+u)$ $\siff$ [$z:=x+u\in \ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}} +A)$,
$x=J_A z$ and $u=x+u-x=x+u-J_A(x+u)=(\ensuremath{\operatorname{Id}}-J_A)z$]. The equivalence of maximal $\rho$-comonotonicity of $A$ and the surjectivity of $\ensuremath{\operatorname{Id}} +A$ follows from combining \cref{cor:surj} and \cref{prop:surj:ow}. \end{proof}
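The Minty parametrization can also be sanity-checked numerically (our illustration): take $A=-2\,\ensuremath{\operatorname{Id}}$ on $\ensuremath{\mathbb R}$, which is $\rho$-comonotone with $\rho=-\tfrac12>-1$ (indeed $\innp{x-y,Ax-Ay}=-2(x-y)^2$ and $\normsq{Ax-Ay}=4(x-y)^2$). Here $\ensuremath{\operatorname{Id}}+A=-\ensuremath{\operatorname{Id}}$ is surjective, $J_A=-\ensuremath{\operatorname{Id}}$, and every point $(J_Az,(\ensuremath{\operatorname{Id}}-J_A)z)$ indeed lies in $\gra A$:

```python
# Minty parametrization check for A = -2*Id on R:
#   gra A = {(J_A z, (Id - J_A) z) : z in X},  with J_A = (Id + A)^{-1} = -Id.

def A(x):
    return -2.0 * x

def J_A(z):
    return -z            # since Id + A = -Id

ok = all(A(J_A(z)) == z - J_A(z) for z in (-3.0, -1.0, 0.0, 0.5, 2.0))
print(ok)  # True
```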
\begin{corollary} \label{lem:gr:RA} Suppose that $A\colon X\rras X$ is maximally $\rho$-comonotone where $\rho>-1$
and let $(x,u)\in X\times X$. Then the following hold: \begin{enumerate} \item \label{lem:gr:RA:i} $ (x,u)\in \gra J_A\siff (u, x-u) \in \gra A $. \item \label{lem:gr:RA:ii} $ (x,u)\in \gra R_A\siff \bigl(\tfrac{1}{2}(x+u), \tfrac{1}{2}(x-u) \bigr) \in \gra A $. \end{enumerate} \end{corollary} \begin{proof} Let $(x,u)\in X\times X$
and note that in view of \cref{prop:s:v}
and \cref{thm:Minty:type}
$J_A\colon X\to X$ and consequently $R_A\colon X\to X$
are single-valued.
\ref{lem:gr:RA:i}: We have $(x,u)\in \gra J_A$ $\siff$ $u=J_Ax$ $\siff$ $x-u=(\ensuremath{\operatorname{Id}}-J_A)x$. Now use \cref{thm:Minty:type}.
\ref{lem:gr:RA:ii}: We have $(x,u) \in\gra R_A\siff u=R_A x=2J_Ax-x$ $\siff x+u=2J_A x$ $\siff J_A x=\tfrac{1}{2}(x+u)$ $\siff x-J_Ax=x-\tfrac{1}{2}(x+u)=\tfrac{1}{2}(x-u)$ $\siff (\tfrac{1}{2}(x+u), \tfrac{1}{2}(x-u)) \in \gra A$, where the last equivalence follows from \cref{thm:Minty:type}. \end{proof}
\section{$\rho$-comonotonicity and averagedness} \label{sec:aver} We start this section with the following definition. \begin{definition} \label{def:con:nonexp}
Let $T\colon X\to X$
and let $\alpha\in \left ]0,+\infty\right [$.
Then $T $ is $\alpha$-\emph{\text{conically nonexpansive}} if there
exists a nonexpansive operator $N\colon X\to X$
such that $T=(1-\alpha)\ensuremath{\operatorname{Id}}+\alpha N$. \end{definition}
\begin{remark} \label{rem:con:nonexp} In view of \cref{def:con:nonexp}, it is clear that $T$ is $\alpha$-averaged if and only if [$T$ is $\alpha$-\text{conically nonexpansive}\ and $\alpha\in \left ]0,1\right [$]. Similarly, $T$ is nonexpansive if and only if $T$ is $1$-\text{conically nonexpansive}. \end{remark}
The proofs of the next two results are straightforward and hence omitted.
\begin{lemma}
\label{lem:coco:conc}
Let $T\colon X\to X$
and let $\alpha\in \left ]0,+\infty\right [$.
Then
\begin{equation}
\text{ $T $ is $\alpha$-\text{conically nonexpansive}\
$\siff$
$\ensuremath{\operatorname{Id}}-T$ is $\tfrac{1}{2\alpha}$-cocoercive}.
\end{equation} \end{lemma}
\begin{lemma} \label{lem:lip:cute} Let $D$ be a nonempty subset of $X$, let $N\colon D\to X$ be nonexpansive, let $\alpha\in\left[1,+\infty\right[$, and set $T=(1-\alpha)\ensuremath{\operatorname{Id}}+\alpha N$. Then $(\forall (x,y)\in D\times D)$ we have \begin{equation} \norm{Tx-Ty}\le (2\alpha-1)\norm{x-y}, \end{equation} i.e., $T$ is Lipschitz continuous with constant $2\alpha-1$. \end{lemma}
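The constant $2\alpha-1$ above is attained, as the following small Python check (our illustration) shows with the nonexpansive choice $N=-\ensuremath{\operatorname{Id}}$, for which $T=(1-\alpha)\ensuremath{\operatorname{Id}}+\alpha N=(1-2\alpha)\ensuremath{\operatorname{Id}}$:

```python
# Tightness of the Lipschitz bound 2*alpha - 1: with N = -Id (nonexpansive)
# and alpha >= 1, T = (1 - alpha)*Id + alpha*N = (1 - 2*alpha)*Id.
alpha = 2.5

def N(x):
    return -x

def T(x):
    return (1 - alpha) * x + alpha * N(x)

pairs = [(-2.0, 1.0), (0.0, 3.0), (1.5, -0.5)]
bound_holds = all(
    abs(T(x) - T(y)) <= (2 * alpha - 1) * abs(x - y) + 1e-12 for x, y in pairs
)
tight = abs(T(1.0) - T(0.0)) == 2 * alpha - 1  # equality: constant is attained
print(bound_holds, tight)  # True True
```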
One can directly verify the following result. \begin{lemma} \label{lem:gen:Hilb} Let $(x,y)\in X\times X$ and let $\alpha\in \ensuremath{\mathbb R}$. Then \begin{equation} \alpha^2\normsq{x}-\normsq{(\alpha-1)x+y} =2\alpha\innp{x-y,y}-(1-2\alpha)\normsq{x-y}. \end{equation} \end{lemma}
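Since the lemma above is a purely algebraic identity, it can be spot-checked mechanically; the following Python snippet (our illustration) verifies it in $\ensuremath{\mathbb R}^2$ for several values of $\alpha$:

```python
# Numerical spot-check of the identity
#   alpha^2 ||x||^2 - ||(alpha - 1) x + y||^2
#     = 2 alpha <x - y, y> - (1 - 2 alpha) ||x - y||^2
# in R^2, for several alphas and vectors.

def nsq(v):
    return v[0] ** 2 + v[1] ** 2

def inner(u, v):
    return u[0] * v[0] + u[1] * v[1]

def check(alpha, x, y):
    z = ((alpha - 1) * x[0] + y[0], (alpha - 1) * x[1] + y[1])
    d = (x[0] - y[0], x[1] - y[1])
    lhs = alpha ** 2 * nsq(x) - nsq(z)
    rhs = 2 * alpha * inner(d, y) - (1 - 2 * alpha) * nsq(d)
    return abs(lhs - rhs) < 1e-9

ok = all(
    check(a, x, y)
    for a in (-1.0, 0.5, 2.0)
    for x in ((1.0, -2.0), (0.0, 3.0))
    for y in ((2.5, 1.0), (-1.0, 0.0))
)
print(ok)  # True
```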
\begin{lemma} \label{lem:av:ch} Let $D$ be a nonempty subset of $X$, let $N \colon D\to X$,
let $\alpha\in \ensuremath{\mathbb R}$ and set $T=(1-\alpha)\ensuremath{\operatorname{Id}}+\alpha N$. Then $N$ is nonexpansive if and only if $(\forall (x,y)\in D\times D)$ we have \begin{equation} 2\alpha\innp{Tx-Ty,(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}\ge (1-2\alpha)\normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}. \end{equation} \end{lemma}
\begin{proof}
Let $(x,y)\in D\times D$. Applying \cref{lem:gen:Hilb} with $(x,y)$ replaced by $(x-y, Tx-Ty)$, we learn that \begin{subequations} \begin{align} &\qquad2\alpha\innp{Tx-Ty,(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}- (1-2\alpha)\normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}\\ &=\alpha^2\normsq{x-y}-\normsq{(\alpha-1)(x-y)+(1-\alpha)(x-y)+\alpha(Nx-Ny)}\\ &=\alpha^2\big(\normsq{x-y}-\normsq{Nx-Ny}\big). \end{align} \end{subequations} Now $N$ is nonexpansive $\siff$ $\normsq{x-y}-\normsq{Nx-Ny}\ge 0$ and the conclusion directly follows. \end{proof}
We now provide new characterizations of averaged and nonexpansive operators. \begin{corollary} \label{cor:non:av:ch} Let $D$ be a nonempty subset of $X$, let $T\colon D\to X$, let $\alpha\in\left]0, +\infty\right[$ and let $(x,y)\in D\times D$.
Then the following hold:
\begin{enumerate}
\item
\label{cor:non:av:ch:i} $T$ is nonexpansive $\siff $ $2\innp{Tx-Ty,(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}\ge -\normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}$. \item \label{cor:non:av:ch:ii} $T$ is $\alpha$-\text{conically nonexpansive}\ $\siff$
$2\alpha\innp{Tx-Ty,(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}\ge (1-2\alpha)\normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}$. \end{enumerate} \end{corollary} \begin{proof} \ref{cor:non:av:ch:i}: Apply \cref{lem:av:ch} with $\alpha=1$.
\ref{cor:non:av:ch:ii}: A direct consequence of \cref{lem:av:ch}. \end{proof}
\begin{prop} \label{prop:gen:avn} Let $D$ be a nonempty subset of $X$, let $T\colon D\to X$, let $\alpha\in \left]0,+\infty\right[$, set $A=T^{-1}-\ensuremath{\operatorname{Id}}$
and set $N=\tfrac{1}{\alpha}T-\tfrac{1-\alpha}{\alpha}\ensuremath{\operatorname{Id}}$,
i.e.,
$T=J_A=(1-\alpha)\ensuremath{\operatorname{Id}}+\alpha N$. Then the following hold: \begin{enumerate} \item \label{prop:gen:avn:iii} $T$ is $\alpha$-\text{conically nonexpansive}\ $\siff$ $N$ is nonexpansive $\siff$ $A$ is $\big(\tfrac{1}{2\alpha}-1\big)$-comonotone. \item \label{prop:gen:avn:iv} {\rm [}$T$ is $\alpha$-\text{conically nonexpansive}\ and $D=X${\rm ]} $\siff$ {\rm [}$N$ is nonexpansive and $D=X${\rm ] } $\siff$ $A$ is maximally $\big(\tfrac{1}{2\alpha}-1\big)$-comonotone. \end{enumerate}
\end{prop} \begin{proof} \ref{prop:gen:avn:iii}: The first equivalence is \cref{def:con:nonexp}. We now turn to the second equivalence. ``$\RA$": Let $\{(x,u),(y,v)\}\subseteq \gra A$. Then $(x,u)=(T(x+u),(\ensuremath{\operatorname{Id}}-T)(x+u))$ and likewise $(y,v)=(T(y+v),(\ensuremath{\operatorname{Id}}-T)(y+v))$. It follows from \cref{lem:av:ch} applied with $(x,y)$ replaced by $(x+u,y+v)$
that $2\alpha\innp{x-y,u-v}\ge (1-2\alpha)\normsq{u-v}$. Since $\alpha>0$, the conclusion follows by dividing both sides of the last inequality by $2\alpha$. ``$\LA$":
Using \cref{thm:Minty:type}, we learn that $(\forall(x,y)\in D\times D)$ $\{(Tx,(\ensuremath{\operatorname{Id}}-T)x),(Ty,(\ensuremath{\operatorname{Id}}-T)y)\}\subseteq\gra A$ and hence $\innp{Tx-Ty,(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}\ge \bk{\tfrac{1}{2\alpha}-1}\normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}$. Thus $2\alpha\innp{Tx-Ty,(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}\ge(1-2\alpha)\normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}$. Now use \cref{lem:av:ch}.
\ref{prop:gen:avn:iv}: Note that $\ensuremath{\operatorname{dom}} N=\ensuremath{\operatorname{dom}} T=\ensuremath{\operatorname{ran}} T^{-1}=\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A)$. Now combine \ref{prop:gen:avn:iii}
and \cref{thm:Minty:type}.
\end{proof}
\begin{prop} \label{prop:-0.3:av} Let $D$ be a nonempty subset of $X$, let $T\colon D\to X$, let $\alpha\in \left]0,+\infty\right[$, set $A=T^{-1}-\ensuremath{\operatorname{Id}}$, i.e., $T=J_A$, and set $\rho=\tfrac{1}{2\alpha}-1>-1$. Then the following equivalences hold: \begin{enumerate}
\item \label{prop:-0.3:conic:ii} $T$ is $\alpha$-\text{conically nonexpansive}\ $\siff$ $A$ is $\rho$-comonotone. \item \label{prop:-0.3:conic:iii} {\rm [}$T$ is $\alpha$-\text{conically nonexpansive}\ and $D=X$ {\rm ]} $\siff$ $A$ is maximally $\rho$-comonotone. \item \label{prop:-0.3:nonexp:ii} $T$ is nonexpansive $\siff$ $A$ is $\bigl(-\tfrac{1}{2}\bigr)$-comonotone. \item \label{prop:-0.3:nonexp:iii} {\rm [}$T$ is nonexpansive and $D=X$ {\rm ]} $\siff$ $A$ is maximally $\bigl(-\tfrac{1}{2}\bigr)$-comonotone. \end{enumerate} If we assume that $\alpha\in \left]0,1\right[$, equivalently, $\rho>-\tfrac{1}{2}$, then we additionally have: \begin{enumerate}
\setcounter{enumi}{4}
\item \label{prop:-0.3:av:ii} $T$ is $\alpha$-averaged $\siff$ $A$ is \bmon. \item \label{prop:-0.3:av:iii} {\rm [}$T$ is $\alpha$-averaged and $D=X${\rm ]} $\siff$ $A$ is \bmaxmon. \end{enumerate}
\end{prop}
\begin{proof} \ref{prop:-0.3:conic:ii}\&\ref{prop:-0.3:conic:iii}: This follows from \cref{prop:gen:avn}\ref{prop:gen:avn:iii}\&\ref{prop:gen:avn:iv}. \ref{prop:-0.3:nonexp:ii}--\ref{prop:-0.3:av:iii}: Combine \ref{prop:-0.3:conic:ii}
and
\ref{prop:-0.3:conic:iii}
with \cref{rem:con:nonexp}. \end{proof} \begin{corollary}{\rm\bf(The characterization corollary).} \label{cor:eq:nexp:-0.3} Let $T\colon X\to X$. Then the following hold: \begin{enumerate}
\item \label{cor:eq:nexp:-0.3:i} $T$ is nonexpansive if and only if it is the resolvent of a maximally $\bigl(-\tfrac{1}{2}\bigr)$-comonotone operator $A\colon X\rras X$.
\item \label{cor:eq:nexp:-0.3:0} Let $\alpha\in \left]0,+\infty\right[$. Then $T$ is $\alpha$-\text{conically nonexpansive}\ if and only if it is the resolvent of a \bmon\ operator $A\colon X\rras X$, where $\rho=\tfrac{1}{2\alpha}-1>-1$ \big(i.e., $\alpha=\tfrac{1}{2(\rho+1)}$\big).
\item \label{cor:eq:nexp:-0.3:ii} Let $\alpha\in \left]0,1\right[$. Then $T$ is $\alpha$-averaged if and only if it is the resolvent of a \bmon\ operator $A\colon X\rras X$ where $\rho=\tfrac{1}{2\alpha}-1>-\tfrac{1}{2}$ (i.e., $\alpha=\tfrac{1}{2(\rho+1)}$). \end{enumerate} \end{corollary}
\begin{example}
Suppose that $U$ is a closed linear
subspace of $X$ and set $N=2P_U-\ensuremath{\operatorname{Id}}$.
Let
$\alpha\in \left]0,+\infty\right[$,
set $T_\alpha=(1-\alpha)\ensuremath{\operatorname{Id}}+\alpha N$,
and set
$A_\alpha=(T_\alpha)^{-1}-\ensuremath{\operatorname{Id}}$.
Then
$T_\alpha$ is $\alpha$-conically nonexpansive and
\begin{equation}
A_\alpha
=\begin{cases}
N_U,&\text{if }\alpha=\tfrac{1}{2};\\
\tfrac{2\alpha}{1-2\alpha}P_{U^\perp}, &\text{otherwise}.
\end{cases}
\end{equation}
Moreover, $A_\alpha$ is
$\bigl(\tfrac{1}{2\alpha}-1\bigr)$-comonotone.
\end{example}
\begin{proof}
First note that
$T_\alpha=(1-\alpha)\ensuremath{\operatorname{Id}}+\alpha (2P_U-\ensuremath{\operatorname{Id}})
=(1-2\alpha)\ensuremath{\operatorname{Id}}+2\alpha P_U$.
The case $\alpha=\tfrac{1}{2}$ is
clear by, e.g., \cite[Example~23.4]{BC2017}.
Now suppose that $\alpha\in \left[0,+\infty\right[ \smallsetminus \{\tfrac{1}{2}\}$,
and let $y\in X$.
Then $y\in A_\alpha x$
$\siff x+y\in (\ensuremath{\operatorname{Id}}+A_\alpha) x$
$\siff x=T_\alpha (x+y )=(1-2\alpha)
(x+y)+2\alpha P_U(x+y)$
$\siff x=x+y-2\alpha(\ensuremath{\operatorname{Id}}-P_U)(x+y)$
$\siff y=2\alpha P_{U^\perp}(x+y)
=2\alpha P_{U^\perp} x+2\alpha P_{U^\perp}y
=2\alpha P_{U^\perp} x+2\alpha y$.
Therefore,
$y=\tfrac{2\alpha}{1-2\alpha} P_{U^\perp} x$,
and the conclusion follows in view of
\cref{cor:eq:nexp:-0.3}\ref{cor:eq:nexp:-0.3:0}.
\end{proof}
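The formula for $A_\alpha$ can be checked numerically in $X=\ensuremath{\mathbb R}^2$ with $U$ the first coordinate axis. The sketch below (plain Python; all names ours) verifies that $(\ensuremath{\operatorname{Id}}+A_\alpha)T_\alpha=\ensuremath{\operatorname{Id}}$ and that the comonotonicity constant $\tfrac{1}{2\alpha}-1$ is attained with equality:

```python
import random

random.seed(1)
for alpha in (0.25, 0.75, 2.0):            # any alpha > 0 with alpha != 1/2
    c = 2*alpha/(1 - 2*alpha)              # A_alpha = c * P_{U^perp}
    rho = 1/(2*alpha) - 1                  # claimed comonotonicity constant
    for _ in range(20):
        z = (random.uniform(-1, 1), random.uniform(-1, 1))
        Tz = (z[0], (1 - 2*alpha)*z[1])    # T_alpha = (1-2alpha)Id + 2alpha P_U
        back = (Tz[0], (1 + c)*Tz[1])      # (Id + A_alpha) T_alpha z
        assert abs(back[0] - z[0]) < 1e-9 and abs(back[1] - z[1]) < 1e-9
    # on U^perp: <u, A u> = c t^2 and rho ||A u||^2 = rho c^2 t^2; since
    # rho * c = 1, the comonotonicity inequality holds with equality:
    assert abs(rho*c - 1) < 1e-9
```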
\begin{prop} \label{prop:av:-0.3} Let $A\colon X\rras X$ be such that $\ensuremath{\operatorname{dom}} A\neq \fady$, let $\rho\in \left]-1,+\infty\right[$,
set $D=\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A)$,
set $T = {J_A}$,
i.e.,
$A=T^{-1}-\ensuremath{\operatorname{Id}}$, and set $N= 2(\rho+1)T-(2\rho+1)\ensuremath{\operatorname{Id}}$, i.e., $T=\tfrac{2\rho+1}{2(\rho+1)}\ensuremath{\operatorname{Id}}+\tfrac{1}{2(\rho+1)}N$. Then the following equivalences hold: \begin{enumerate} \item \label{prop:av:-0.3:ii} $A$ is $\rho$-comonotone $\siff$ $N$ is nonexpansive.
\item \label{prop:av:-0.3:iii} $A$ is maximally $\rho$-comonotone $\siff$ $N$ is nonexpansive and $D=X$. \end{enumerate}
\end{prop} \begin{proof} \ref{prop:av:-0.3:ii}: Set $\alpha=\tfrac{1}{2(\rho+1)}$
and note that $\alpha>0$. It follows from \cref{prop:s:v} that $T=J_A$ is single-valued. Now use \cref{prop:gen:avn}\ref{prop:gen:avn:iii}. \ref{prop:av:-0.3:iii}: Combine \ref{prop:av:-0.3:ii}
and \cref{prop:gen:avn}\ref{prop:gen:avn:iv}. \end{proof}
\begin{prop} \label{prop:all:oth} Let $A\colon X\rras X$ be such that $\ensuremath{\operatorname{dom}} A\neq \fady$, let $\rho\in \left]-1,+\infty\right[$,
set $D=\ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}+A)$,
set $T={J_A}$,
i.e.,
$A=T^{-1}-\ensuremath{\operatorname{Id}}$,
and set $\alpha=\tfrac{1}{2(\rho+1)}$. Then we have the following equivalences: \begin{enumerate} \item \label{prop:all:oth:i:i}
$A$ is $\rho$-comonotone
$\siff$ $T$ is $\alpha$-\text{conically nonexpansive}.
\item \label{prop:all:oth:i:ii}
$A$ is maximally $\rho$-comonotone
$\siff$ $T$ is $\alpha$-\text{conically nonexpansive}\ and $D=X$. \item \label{prop:all:oth:ii}
$A$ is $\bigl(-\tfrac{1}{2}\bigr)$-comonotone
$\siff$ $T$ is nonexpansive. \item \label{prop:all:oth:iii} $A$ is maximally $\bigl(-\tfrac{1}{2}\bigr)$-comonotone $\siff$ $T$ is nonexpansive and $D=X$. \item \label{prop:all:oth:iv}\ {\rm[}$A$ is \bmon\ and $\rho>-\tfrac{1}{2}${\rm]} $\siff$ $T$ is $\alpha$-averaged. \item \label{prop:all:oth:v}\ {\rm [}$A$ is \bmaxmon\ and $\rho>-\tfrac{1}{2}${\rm]} $\siff$ {\rm[}$T$ is $\alpha$-averaged and $D=X${\rm]}. \end{enumerate} \end{prop}
\begin{proof} \ref{prop:all:oth:i:i}--\ref{prop:all:oth:v}: Use \cref{prop:-0.3:av}. \end{proof}
\begin{corollary} Let $A\colon X\rras X$ be maximally \bmon\ with $\rho>-\tfrac{1}{2}$. Then $J_A$ is $\tfrac{1}{2(\rho+1)}$-averaged. \end{corollary}
The following corollary provides an alternative proof to \cite[Proposition~6.9.6]{BurIus}.
\begin{corollary} \label{cor:zer:cl:con} Let $A\colon X\rras X$ be maximally \bmon\
with $\rho\ge -\tfrac{1}{2}$. Then $\ensuremath{\operatorname{zer}} A$ is closed and convex. \end{corollary} \begin{proof} It is clear that $\ensuremath{\operatorname{zer}} A=\ensuremath{\operatorname{Fix}} J_A$. The conclusion now follows from combining \cite[Corollary~4.14]{BC2017} and \cref{prop:all:oth}\ref{prop:all:oth:iii}. \end{proof}
Table~\ref{tab:1} below
summarizes the main results of this section.
\begin{table}[H]
\resizebox{0.78\textwidth}{!}{\begin{minipage}{\textwidth}
\begin{tabular}{p{3.5cm}|p{2.6cm}p{0.35cm}|p{2.2cm}
p{0.35cm}|p{3.6cm}p{0.35cm}|p{3.5cm}}
\toprule
$\rho$& $A$ & &$A^{-1}$& &$J_A$
& & $J_{A^{-1}}$ \\ \hline
\begin{tikzpicture}[baseline=-0.3ex]
\draw[>=triangle 45, <->] (-2,0) -- (1.5,0);
\draw [line width=0.75mm] (0,0)--(1.4,0);
\foreach \x in {0}
\draw (\x,1pt) -- (\x,-1pt)
node[anchor=north]{\footnotesize\x};
\node[circle,draw=black, fill=white,
inner sep=0pt,minimum size=5pt] at (0,0) {};
\end{tikzpicture}
& $\rho$-cocoercive &$\siff$
& $\rho$-strongly monotone & $\siff$
& $\tfrac{1}{2(\rho+1)}$-\text{conically nonexpansive}\
&$\siff$
& $(\rho+1)$-cocoercive
\\
\begin{tikzpicture}[baseline=-0.3ex]
\draw[>=triangle 45, <->] (-2,0) -- (1.5,0);
\draw [line width=0.75mm] (0,0)--(0,0);
\foreach \x in {0}
\draw (\x,1pt) -- (\x,-1pt)
node[anchor=north] {\footnotesize\x};
\node[circle,draw=black, fill=black,
inner sep=0pt,minimum size=5pt] at (0,0) {};
\end{tikzpicture}
&monotone &$\siff$
& monotone &$\siff$
&firmly nonexpansive&$\siff$
& firmly nonexpansive
\\
\begin{tikzpicture}[baseline=-0.3ex]
\draw[>=triangle 45, <->] (-2,0) -- (1.5,0);
\draw [line width=0.75mm] (-1/2,0)--(0,0);
\foreach \x in {0,-0.5}
\draw (\x,1pt) -- (\x,-1pt)
node[anchor=north] {\footnotesize\x};
\node[circle,draw=black, fill=white, inner sep=0pt,minimum size=5pt] at (0,0) {};
\node[circle,draw=black, fill=white, inner sep=0pt,minimum size=5pt] at (-0.5,0) {};
\end{tikzpicture}
&$\rho$-comonotone &$\siff$
& $\rho$-monotone &$\siff$
&$\tfrac{1}{2(\rho+1)}$-averaged &$\siff$
& $(\rho+1)$-cocoercive
\\
\begin{tikzpicture}[baseline=-0.3ex]
\draw[>=triangle 45, <->] (-2,0) -- (1.5,0);
\foreach \x in {-0.5}
\draw (\x,1pt) -- (\x,-1pt)
node[anchor=north] {\footnotesize \x};
\node[circle,draw=black, fill=black,
inner sep=0pt,minimum size=5pt] at (-0.5,0){};
\end{tikzpicture}
& $\rho$-comonotone
&$\siff$
& $\rho$-monotone
&$\siff$
& nonexpansive
&$\siff$
& $\tfrac{1}{2}$-cocoercive
\\
\begin{tikzpicture}[baseline=-0.3ex]
\draw[>=triangle 45, <->] (-2,0) -- (1.5,0);
\draw [line width=0.75mm] (-1,0)--(-0.5,0);
\foreach \x in {-1,-0.5}
\draw (\x,1pt) -- (\x,-1pt)
node[anchor=north] {\footnotesize \x};
\node[circle,draw=black, fill=white,
inner sep=0pt,minimum size=5pt] at
(-0.5,0) {};
\node[circle,draw=black, fill=white,
inner sep=0pt,minimum size=5pt] at
(-1,0) {};
\end{tikzpicture}
& $\rho$-comonotone
&$\siff$
& $\rho$-monotone
&$\siff$
& $\tfrac{1}{2(\rho+1)}$-\text{conically nonexpansive}\
&$\siff$
& $(\rho+1)$-cocoercive
\\
\begin{tikzpicture}[baseline=-0.3ex]
\draw[>=triangle 45, <->] (-2,0) -- (1.5,0);
\draw [line width=0.75mm] (-1.85,0)--(-1,0);
\foreach \x in {-1}
\draw (\x,1pt) -- (\x,-1pt)
node[anchor=north] {\footnotesize \x};
\node[circle,draw=black, fill=black,
inner sep=0pt,minimum size=5pt] at (-1,0){};
\end{tikzpicture}
& $\rho$-comonotone
&$\siff$
& $\rho$-monotone
&
$\RA$
& may fail to be at most single-valued
&
$\siff$
& may fail to be at most single-valued
\\
\toprule \end{tabular} \end{minipage}} \caption{Properties of an operator $A$ and its inverse $A^{-1} $ along with the corresponding resolvents $J_A$ and $J_{A^{-1}}$ respectively, for different values of $\rho\in \ensuremath{\mathbb R}$. Here, $A$ satisfies the implication: $\{(x,u),(y,v)\}\subseteq \gra A \RA \innp{x-y,u-v}\ge \rho\normsq{u-v}$.} \label{tab:1} \end{table}
\section{Further properties of the resolvent $J_A$ and the reflected resolvent $R_A$} \label{sec:JA:RA}
We start this section with the following useful lemma.
\begin{lemma} \label{lem:JA:RA:corr} Let $T\colon X\to X$ and let $\alpha\in \left[0,1\right[$. Then the following hold: \begin{enumerate} \item \label{lem:JA:RA:corr:ii} $T$ is $\alpha$-averaged $\siff$ $ 2T-\ensuremath{\operatorname{Id}}=(1-2\alpha)\ensuremath{\operatorname{Id}} +2\alpha N$
for some nonexpansive
$N\colon X\to X$. \item \label{lem:JA:RA:corr:iv} $[T=\tfrac{\alpha}{2}(\ensuremath{\operatorname{Id}}+N)$ and $N$ is nonexpansive$]$ $\siff $ $-(2T-\ensuremath{\operatorname{Id}})$ is $\alpha$-averaged\footnote{This is also known as $\alpha$-negatively averaged (see \cite[Definition~3.7]{Gisel17}).}, in which case $T$ is a Banach contraction with Lipschitz constant $\alpha<1$. \item \label{lem:JA:RA:corr:iii} $T$ is $\tfrac{1}{2}$-strongly monotone $\siff $ $2T-\ensuremath{\operatorname{Id}}$ is monotone. \end{enumerate} \end{lemma} \begin{proof}
\ref{lem:JA:RA:corr:ii}: We have: $T$ is $\alpha$-averaged $\siff $ [$T=(1-\alpha)\ensuremath{\operatorname{Id}}+\alpha N$ and $N$ is nonexpansive] $\siff$ [$2T-\ensuremath{\operatorname{Id}}=(2-2\alpha)\ensuremath{\operatorname{Id}}+2\alpha N-\ensuremath{\operatorname{Id}}=(1-2\alpha)\ensuremath{\operatorname{Id}}+2\alpha N$ and $N$ is nonexpansive].
\ref{lem:JA:RA:corr:iv}: Indeed, $[T=\tfrac{\alpha}{2}(\ensuremath{\operatorname{Id}}+N)$ and $N$ is nonexpansive$]$ $\siff $ $2T-\ensuremath{\operatorname{Id}}=(\alpha-1)\ensuremath{\operatorname{Id}}+\alpha N=-((1-\alpha)\ensuremath{\operatorname{Id}}+\alpha(-N))$, equivalently $2T-\ensuremath{\operatorname{Id}}$ is $\alpha$-negatively averaged.
\ref{lem:JA:RA:corr:iii}: We have: $T$ is $\tfrac{1}{2}$-strongly monotone $\siff$ $T-\tfrac{1}{2}\ensuremath{\operatorname{Id}} $ is monotone $\siff$ $2T-\ensuremath{\operatorname{Id}} $ is monotone. \end{proof}
Before we proceed, we recall the following useful fact (see, e.g., \cite[Proposition~4.35]{BC2017}). \begin{fact} \label{Proposition:4.35} Let $T\colon X\to X$, let $(x,y)\in X\times X$
and let $\alpha\in \left]0,1\right[$.
Then
\begin{equation} \text{$T$ is $\alpha$-averaged $\siff$ $\normsq{Tx-Ty}+(1-2\alpha) \normsq{x-y}
\le 2(1-\alpha)\innp{x-y,Tx-Ty}$.}
\end{equation} \end{fact}
\begin{prop} \label{p:corres:A:RA} Let $\alpha\in \left]0,1\right[$, let $\beta\in \bigl]-\tfrac{1}{2},+\infty\bigr[$, let $A\colon X\rras X$
and
suppose that $A$ is $\beta$-comonotone. Then the following hold: \begin{enumerate} \item \label{p:corres:A:RA:ii} $A$ is $\beta$-comonotone $\siff$ $J_A$ is $\tfrac{1}{2(1+\beta)}$-averaged $\siff$ $R_A=\big(1-\tfrac{1}{1+\beta}\big )\ensuremath{\operatorname{Id}}+\tfrac{1}{1+\beta}N$ for some nonexpansive $N\colon X\to X$. \item \label{p:corres:A:RA:iv} $A$ is $\beta$-strongly monotone $\siff$ $[J_A=\tfrac{1}{2(\beta+1)}(\ensuremath{\operatorname{Id}}+N)$ and $N$ is nonexpansive$]$ $\siff$
$-R_A$ is $\tfrac{1}{\beta+1}$-averaged, in which case $J_A$ is a Banach contraction with Lipschitz constant $\tfrac{1}{\beta+1}<1$. \item \label{p:corres:A:RA:iii} $A$ is nonexpansive $\siff$ $J_A$ is $\tfrac{1}{2}$-strongly monotone $\siff$ $R_A$ is monotone. \item \label{p:corres:A:RA:i} $A$ is $\alpha$-averaged $\siff$ $R_A$ is $\tfrac{1-\alpha}{\alpha}$-cocoercive. \item \label{p:corres:A:RA:i:d} $A$ is firmly nonexpansive $\siff$ $R_A$ is firmly nonexpansive. \end{enumerate} \end{prop} \begin{proof} Let $\{(x,u),(y,v)\}\subseteq X \times X$. Using \cref{lem:gr:RA}\ref{lem:gr:RA:i}, we have $ \{(x,u),(y,v)\}\subseteq\gra J_A$
$\siff \{(u, x-u),
(v,y-v)\}\subseteq\gra A$,
which we shall use repeatedly.
\ref{p:corres:A:RA:ii}: Let $\{(x,u),(y,v)\}\subseteq\gra J_A$. We have \begin{subequations}
\begin{align}
&~A \text{~is~} \beta\text{-comonotone~}
\nonumber\\
\siff&~\beta \normsq{(x-y)-(u-v)}\le \innp{(x-y)-(u-v), u-v}
\\
\siff&~\beta\normsq{x-y}+\beta\normsq{u-v}-2\beta\innp{x-y,u-v}
\le \innp{x-y,u-v}-\normsq{u-v}
\\
\siff&~ \beta\normsq{x-y}+(\beta+1)\normsq{u-v}
\le (2\beta+1) \innp{x-y,u-v}
\\
\siff & ~\normsq{u-v} +\tfrac{\beta}{\beta+1}\normsq{x-y}
\le \tfrac{2\beta+1}{\beta+1}
\innp{x-y,u-v}
\\
\siff &~ \normsq{u-v} +\big(1-\tfrac{1}{\beta+1}\big)\normsq{x-y}
\le 2\big(1-\tfrac{1}{2(\beta+1)}\big)
\innp{x-y,u-v}
\\
\siff &~ J_A \text{~is~} \tfrac{1}{2(\beta+1)}\text{-averaged},
\\
\siff &~ \text{$R_A=\big(1-\tfrac{1}{1+\beta}\big )\ensuremath{\operatorname{Id}}+\tfrac{1}{1+\beta}N$ for some nonexpansive $N\colon X\to X$,}
\end{align}
\end{subequations}
where the last two equivalences follow from
\cref{Proposition:4.35}
and
\cref{lem:JA:RA:corr}\ref{lem:JA:RA:corr:ii},
respectively.
\ref{p:corres:A:RA:iv}:
We start by proving the equivalence
of the first and third statements (see \cite[Proposition~5.4]{Gisel17} for ``$\RA$" and also \cite[Proposition~2.1(iii)]{MV18}).
Let $\{(x,u),(y,v)\}\subseteq \gra (-R_A)$, i.e.,
$\{(x,-u),(y,-v)\}\subseteq \gra R_A$.
In view of \cref{lem:gr:RA}\ref{lem:gr:RA:ii},
this is equivalent to
$ \{(\tfrac{1}{2}(x-u), \tfrac{1}{2}(x+u)),
(\tfrac{1}{2}(y-v), \tfrac{1}{2}(y+v))\}\subseteq\gra A$. We have \begin{subequations}
\begin{align}
& A \text{ is $\beta$-strongly monotone}
\nonumber\\
\siff~&
\innp{(x-y)+(u-v),(x-y)-(u-v)}\ge \beta \normsq{(x-y)-(u-v)}
\\
\siff~&
\normsq{x-y}-\normsq{u-v}\ge \beta\normsq{x-y}+\beta\normsq{u-v}
-2\beta\innp{x-y,u-v}
\\
\siff~&
2\beta\innp{x-y,u-v} \ge (\beta-1)\normsq{x-y}+(\beta+1)\normsq{u-v}
\\
\siff~&
\tfrac{ 2\beta}{\beta+1}\innp{x-y,u-v}
\ge \tfrac{\beta-1}{\beta+1}\normsq{x-y}+\normsq{u-v}
\\
\siff~&
2\bk{1-\tfrac{1}{\beta+1}}\innp{x-y,u-v}
\ge\bk{1-\tfrac{2}{\beta+1}}\normsq{x-y}+\normsq{u-v}
\\
\siff~& -R_A \text{~is~} \tfrac{1}{\beta+1}\text{-averaged},
\end{align}
\end{subequations}
where the last equivalence follows from
\cref{Proposition:4.35}.
Now apply
\cref{lem:JA:RA:corr}\ref{lem:JA:RA:corr:iv}
to prove the equivalence of the second and third statements
in \ref{p:corres:A:RA:iv}.
\ref{p:corres:A:RA:iii}:
Let $\{(x,u),(y,v)\}\subseteq\gra J_A$
and note that \cref{lem:gr:RA}\ref{lem:gr:RA:i}
implies that
$x-u\in Au$, $y-v\in Av$,
$2u-x\in (\ensuremath{\operatorname{Id}} - A)u$
and
$2v-y\in (\ensuremath{\operatorname{Id}} - A)v$.
It follows from \cref{cor:non:av:ch}\ref{cor:non:av:ch:i} applied with $(T,x,y)$ replaced by $(A,u,v)$ that \begin{subequations}
\begin{align}
A \text{~is nonexpansive}
\siff~& \innp{(x-y)-(u-v),2(u-v)-(x-y)}
\nonumber
\\
&\ge -\tfrac{1}{2}\normsq{2(u-v)-(x-y)}
\\
\siff~&-\normsq{x-y}-2\normsq{u-v}+3\innp{x-y,u-v}
\nonumber\\
&\ge
-2\normsq{u-v}-\tfrac{1}{2}\normsq{x-y}+2\innp{x-y,u-v}
\\
\siff ~& \innp{x-y, u-v}\ge \tfrac{1}{2}\normsq{x-y}
\\
\siff ~& J_A \text{~is~} \tfrac{1}{2}\text{-strongly monotone}
\\
\siff ~& R_A \text{~is~} \text{ monotone},
\end{align}
\end{subequations} where the last equivalence follows from \cref{lem:JA:RA:corr}\ref{lem:JA:RA:corr:iii}.
\ref{p:corres:A:RA:i}: Let $\{(x,u),(y,v)\}\subseteq X \times X$. Using \cref{lem:gr:RA} we have $ \{(x,u),(y,v)\}\subseteq\gra R_A$
$\siff \big\{\big(\tfrac{1}{2}(x+u), \tfrac{1}{2}(x-u)\big),
\big(\tfrac{1}{2}(y+v), \tfrac{1}{2}(y-v)\big)\big\}\subseteq\gra A$.
Let $\{(x,u),(y,v)\}\subseteq\gra R_A$. Applying \cref{cor:non:av:ch}\ref{cor:non:av:ch:ii} with $(T,x,y)$ replaced by $\big(A, \tfrac{1}{2}(x+u),\tfrac{1}{2}(y+v)\big)$ and \cref{rem:con:nonexp}, we learn that \begin{subequations}
\begin{align}
A \text{~is~} \alpha\text{-averaged~}
\siff~&
2\alpha\innp{\tfrac{1}{2}((x-y)-(u-v)),u-v}
\ge (1-2\alpha)\normsq{u-v}
\\
\siff~&\alpha\innp{x-y,u-v}-\alpha\normsq{u-v}
\ge (1-2\alpha)\normsq{u-v}
\\
\siff ~& \tfrac{\alpha}{1-\alpha}\innp{x-y, u-v}\ge \normsq{u-v},
\end{align}
\end{subequations}
equivalently
$R_A$ is $\tfrac{1-\alpha}{\alpha}$-cocoercive.
\ref{p:corres:A:RA:i:d}: Apply \ref{p:corres:A:RA:i} with $\alpha=\tfrac{1}{2}$. \end{proof}
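Item \ref{p:corres:A:RA:i} can be illustrated with a linear example in $\ensuremath{\mathbb R}^2$: take $A=(1-\alpha)\ensuremath{\operatorname{Id}}+\alpha N$ with $N$ a rotation (hence nonexpansive), so that $A$ is $\alpha$-averaged, and test the cocoercivity of $R_A=2J_A-\ensuremath{\operatorname{Id}}$ on random points. A numerical sketch (plain Python; all names ours):

```python
import math, random

def matvec(M, v): return [sum(M[i][j]*v[j] for j in range(2)) for i in range(2)]
def inv2(M):  # inverse of a 2x2 matrix
    d = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[M[1][1]/d, -M[0][1]/d], [-M[1][0]/d, M[0][0]/d]]

alpha, th = 0.25, 1.0
N = [[math.cos(th), -math.sin(th)], [math.sin(th), math.cos(th)]]   # rotation: nonexpansive
A = [[(1 - alpha)*(i == j) + alpha*N[i][j] for j in range(2)] for i in range(2)]
JA = inv2([[1 + A[0][0], A[0][1]], [A[1][0], 1 + A[1][1]]])         # J_A = (Id + A)^{-1}
RA = [[2*JA[i][j] - (i == j) for j in range(2)] for i in range(2)]  # reflected resolvent

beta = (1 - alpha)/alpha                 # claimed cocoercivity constant of R_A
random.seed(2)
for _ in range(100):
    u = [random.uniform(-1, 1) for _ in range(2)]   # u plays the role of x - y
    Ru = matvec(RA, u)
    # cocoercivity: <u, R_A u> >= beta ||R_A u||^2 (equality here, N is an isometry)
    assert sum(a*b for a, b in zip(u, Ru)) >= beta*sum(b*b for b in Ru) - 1e-9
```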
\begin{rem} \cref{p:corres:A:RA}\ref{p:corres:A:RA:ii} generalizes the conclusion of \cite[Proposition~5.3]{Gisel17}. Indeed, if $\beta>0$, then $A$ is $\beta$-cocoercive if and only if $R_A$ is $\tfrac{1}{\beta+1}$-averaged. \end{rem}
\section{$\rho$-monotone and $\rho$-comonotone linear operators} \label{sec:linear} Let $A\in \ensuremath{\mathbb R}^{n\times n}$
and set
$A_{s}=\tfrac{A+A\tran}{2}$.
In the following we use
$\lam_{\min}(A)$ and $\lam_{\max}(A)$ to denote the smallest and largest eigenvalues of $A$, respectively, provided all eigenvalues of $A$ are real. \begin{prop} \label{prop:linear:hypo} Let $A\in \ensuremath{\mathbb R}^{n\times n}$ and let $\rho\in \ensuremath{\mathbb R}$.
Then the following hold: \begin{enumerate} \item \label{prop:linear:hypo:i} $A$ is $\rho$-monotone $\siff$ $\lam_{\min}(A_{s})\ge \rho$. \item \label{prop:linear:hypo:ii} $A$ is $\rho$-comonotone $\siff$ $\lam_{\min}(A_{s}-\rho A\tran A)\ge 0$. \end{enumerate} \end{prop} \begin{proof} Let $ x\in \ensuremath{\mathbb R}^n$. \cref{prop:linear:hypo:i}: $A$ is $\rho$-monotone $\siff$ $\innp{x,Ax}\ge \rho \normsq{x}$ $\siff$ $\innp{x,(A-\rho \ensuremath{\operatorname{Id}})x}\ge 0$ $\siff$ $\innp{x,(A-\rho \ensuremath{\operatorname{Id}})_sx}\ge 0$ $\siff$ $\innp{x,(A_s-\rho \ensuremath{\operatorname{Id}})x}\ge 0$ $\siff$ $A_s-\rho \ensuremath{\operatorname{Id}}\succeq 0$ $\siff$ $A_s\succeq\rho \ensuremath{\operatorname{Id}}$ $\siff$ $\lam_{\min}(A_s)\ge \rho$. \cref{prop:linear:hypo:ii}: $A$ is $\rho$-comonotone $\siff$ $\innp{x,Ax}\ge \rho \normsq{Ax}$ $\siff$ $\innp{x,(A_s-\rho A\tran A)x}\ge 0$ $\siff$ $A_s-\rho A\tran A\succeq 0$ $\siff$ $\lam_{\min}(A_s-\rho A\tran A)\geq 0$. \end{proof}
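For instance, with $A=\bigl[\begin{smallmatrix}1&-2\\2&1\end{smallmatrix}\bigr]$ we get $A_s=\ensuremath{\operatorname{Id}}$ and $A\tran A=5\ensuremath{\operatorname{Id}}$, so $A$ is $\rho$-monotone exactly when $\rho\le 1$ and $\rho$-comonotone exactly when $\rho\le\tfrac{1}{5}$. A small numerical sketch (plain Python; helper names ours):

```python
import math

A = [[1.0, -2.0], [2.0, 1.0]]

def sym_part(M):
    return [[(M[i][j] + M[j][i])/2 for j in range(2)] for i in range(2)]

def lam_min_sym(S):                      # smallest eigenvalue of a symmetric 2x2 matrix
    tr = S[0][0] + S[1][1]
    det = S[0][0]*S[1][1] - S[0][1]*S[1][0]
    return (tr - math.sqrt(tr*tr - 4*det))/2

def AtA(M):                              # M^T M
    return [[sum(M[k][i]*M[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# rho-monotone iff lam_min(A_s) >= rho; here A_s = Id, so the threshold is 1:
assert abs(lam_min_sym(sym_part(A)) - 1.0) < 1e-12
# rho-comonotone iff lam_min(A_s - rho A^T A) >= 0; here A^T A = 5 Id, threshold 1/5:
P = AtA(A)
for rho, ok in ((0.19, True), (0.21, False)):
    S = [[sym_part(A)[i][j] - rho*P[i][j] for j in range(2)] for i in range(2)]
    assert (lam_min_sym(S) >= 0) == ok
```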
\begin{example}
\label{ex:linear:rmono:rcomono} Suppose that $N\colon X\to X$ is continuous and linear such that $N^*=-N$
and $N^2=-\ensuremath{\operatorname{Id}}$.
Then $N$ is nonexpansive.
Moreover, let $\lambda\in \left]0,1\right[$, set $T_\lambda=(1-\lambda)\ensuremath{\operatorname{Id}}+\lambda N$
and set
$A_\lambda=(T_\lambda)^{-1}-\ensuremath{\operatorname{Id}}$. Then the following hold: \begin{enumerate}
\item \label{ex:linear:rmono:rcomono:i}
We have \begin{equation} A_\lambda=\tfrac{\lambda}{(1-\lambda)^2+\lambda^2} \big((1-2\lambda)\ensuremath{\operatorname{Id}}-N\big). \end{equation} \item \label{ex:linear:rmono:rcomono:ii} $ A_\lambda$ is $\rho$-monotone with optimal $\rho =\tfrac{\lambda(1-2\lambda)}{\lam^2+(1-\lam)^2}$. \item \label{ex:linear:rmono:rcomono:iii} $ A_\lambda$ is $\rho$-comonotone with optimal $\rho= \tfrac{1-2\lambda}{2\lambda} $. \end{enumerate} \end{example} \begin{proof} Let $x\in X$. Then $\norm{Nx}^2 =\innp{Nx,Nx} =\innp{x,N^*Nx} =\innp{x,-N^2x} =\innp{x,x}=\normsq{x} $.
Hence $N$ is nonexpansive; in fact, $N$ is an isometry. Now set \begin{equation} B_\lambda = \tfrac{\lambda}{(1-\lambda)^2+\lambda^2} \big((1-2\lambda)\ensuremath{\operatorname{Id}}-N\big). \end{equation} \cref{ex:linear:rmono:rcomono:i}: We have \begin{subequations} \begin{align} (\ensuremath{\operatorname{Id}}+B_\lambda)T_\lambda &=\left(\ensuremath{\operatorname{Id}}+\tfrac{\lambda}{(1-\lambda)^2 +\lambda^2}\big((1-2\lambda)\ensuremath{\operatorname{Id}}-N\big)\right) \big((1-\lambda)\ensuremath{\operatorname{Id}}+\lambda N\big) \\ &=\tfrac{1}{(1-\lambda)^2+\lambda^2} \big((1-\lambda)\ensuremath{\operatorname{Id}}-\lambda N\big)\big((1-\lambda)\ensuremath{\operatorname{Id}}+\lambda N\big) \\ &=\tfrac{1}{(1-\lambda)^2+\lambda^2} \big((1-\lambda)^2\ensuremath{\operatorname{Id}}-\lambda^2N^2\big)=\ensuremath{\operatorname{Id}}. \end{align} \end{subequations} Similarly, one can show that $T_\lambda(\ensuremath{\operatorname{Id}}+B_\lambda)=\ensuremath{\operatorname{Id}}$ and the conclusion follows.
\cref{ex:linear:rmono:rcomono:ii}: Using \cref{ex:linear:rmono:rcomono:i}, we have \begin{subequations} \begin{align} \innp{x, A_\lambda x} &= \frac{\lambda}{(1-\lambda)^2+\lambda^2} \big((1-2\lambda)\norm{x}^2-\innp{Nx,x}\big)\\ &= \frac{\lambda(1-2\lambda)}{(1-\lambda)^2+\lambda^2} \norm{x}^2.
\label{sub:eq:lin} \end{align} \end{subequations} Here we used that $\innp{Nx,x}=\innp{x,N^*x}=-\innp{Nx,x}$, hence $\innp{Nx,x}=0$.
\cref{ex:linear:rmono:rcomono:iii}: Using \cref{ex:linear:rmono:rcomono:i}, we have \begin{subequations} \begin{align} \normsq{A_\lambda x} &= \frac{\lambda^2}{((1-\lambda)^2+\lambda^2)^2} \big((1-2\lam)^2\normsq{x}+\normsq{Nx}\big) \\ &= \frac{\lam^2}{((1-\lam)^2+\lam^2)^2} \big((1-2\lam)^2+1\big)\normsq{x}. \end{align} \end{subequations} Therefore, combining with \cref{sub:eq:lin} we obtain \begin{subequations} \begin{align} \innp{x, A_\lambda x} &= \frac{(1-2\lam)((1-\lam)^2+\lam^2)}{\lam ((1-2\lam)^2+1)} \cdot \frac{\lam^2((1-2\lam)^2+1)}{((1-\lam)^2+\lam^2)^2} \normsq{x} \\ &=\frac{(1-2\lam)((1-\lam)^2+\lam^2)}{\lam ((1-2\lam)^2+1)} \normsq{A_\lambda x}\\ &=\frac{1-2\lambda}{2\lambda} \normsq{A_\lambda x}, \end{align} \end{subequations} and the conclusion follows. \end{proof}
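In $X=\ensuremath{\mathbb R}^2$, the rotation $N=\bigl[\begin{smallmatrix}0&-1\\1&0\end{smallmatrix}\bigr]$ satisfies $N^*=-N$ and $N^2=-\ensuremath{\operatorname{Id}}$, so the example can be tested numerically. The sketch below (plain Python; names ours) checks the formula for $A_\lambda$ and both optimal constants:

```python
import random

lam_ = 0.3                                # any lambda in ]0,1[
d = (1 - lam_)**2 + lam_**2
s = lam_/d

def T(v):   # T_lambda = (1-lambda)Id + lambda N, with N = [[0,-1],[1,0]]
    return ((1 - lam_)*v[0] - lam_*v[1], lam_*v[0] + (1 - lam_)*v[1])

def A(v):   # A_lambda = (lambda/d)((1-2 lambda)Id - N), from item (i)
    return (s*((1 - 2*lam_)*v[0] + v[1]), s*(-v[0] + (1 - 2*lam_)*v[1]))

rho_mono = lam_*(1 - 2*lam_)/d            # optimal constant from item (ii)
rho_com = (1 - 2*lam_)/(2*lam_)           # optimal constant from item (iii)
random.seed(3)
for _ in range(50):
    v = (random.uniform(-1, 1), random.uniform(-1, 1))
    Tv, Av = T(v), A(v)
    ATv = A(Tv)                           # check (Id + A_lambda) T_lambda = Id
    assert abs(Tv[0] + ATv[0] - v[0]) < 1e-9 and abs(Tv[1] + ATv[1] - v[1]) < 1e-9
    ip = v[0]*Av[0] + v[1]*Av[1]
    assert abs(ip - rho_mono*(v[0]**2 + v[1]**2)) < 1e-9   # monotonicity is tight
    assert abs(ip - rho_com*(Av[0]**2 + Av[1]**2)) < 1e-9  # comonotonicity is tight
```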
\section{Hypoconvex functions} \label{sec:hypocon} In this section, we apply results in the previous sections to characterize proximal mappings of hypoconvex functions. We shall assume that $f\colon X\to \left]-\infty,+\infty\right]$ is a proper lower semicontinuous function minorized by a concave quadratic function: $\exists \nu\in\ensuremath{\mathbb R}, \beta\in \ensuremath{\mathbb R}, \alpha\geq 0$ such that
$$(\forall x\in X)\quad f(x)\geq -\alpha\|x\|^2-\beta\|x\|+\nu.$$ For $\mu>0$, the Moreau envelope of $f$ is defined by
$$e_{\mu}f(x)=\inf_{y\in X}\Big(f(y)+\frac{1}{2\mu}\|x-y\|^2\Big),$$ and the associated proximal mapping $\ensuremath{\operatorname{Prox}}_{\mu f}$ by \begin{equation} \ensuremath{\operatorname{Prox}}_{\mu f}(x)
=\underset{y\in X}{\ensuremath{\operatorname{argmin}}}\Big(f(y)+\frac{1}{2\mu}\|x-y\|^2\Big), \end{equation} where $x\in X$. We shall use $\partial f$ for the subdifferential mapping from convex analysis. \begin{definition}\label{def:abst} An abstract subdifferential $\pars$ associates a subset $\pars f(x)$ of $X$ to $f$ at $x\in X$, and it satisfies the following properties: \begin{enumerate} \item\label{i:p1} $\pars f=\partial f$ if $f$ is a proper lower semicontinuous convex function; \item\label{i:p1.5} $\pars f=\nabla f$ if $f$ is continuously differentiable; \item\label{i:p2} $0\in\pars f(x)$ if $f$ attains a local minimum at $x\in\ensuremath{\operatorname{dom}} f$; \item\label{i:p3} for every $\beta\in\ensuremath{\mathbb R}$,
$$\pars\Big(f+\beta\frac{\|\cdot-x\|^2}{2}\Big)=\pars f +\beta (\ensuremath{\operatorname{Id}}-x).$$ \end{enumerate} \end{definition} The Clarke--Rockafellar subdifferential, Mordukhovich subdifferential, and Fr\'echet subdifferential all satisfy Definition~\ref{def:abst}\ref{i:p1}--\ref{i:p3}, see, e.g., \cite{Clarke90}, \cite{Mord06, Mord18}, so each of them may serve as $\pars$. Related but different abstract subdifferentials have been used in \cite{aussel95, ioffe84, thibault95}.
Recall that, given $\lambda>0$, the function $f$ is $\tfrac{1}{\lambda}$-hypoconvex (see \cite{Rock98, Wang2010}) if \begin{equation}
f((1-\tau)x+\tau y)\le (1-\tau) f(x)
+\tau f(y) +\frac{1}{2\lambda}\tau(1-\tau)
\norm{x-y}^2, \end{equation} for all $(x,y)\in X\times X$ and all $\tau\in \left]0,1\right[$.
\begin{proposition}\label{p:sub:for}
If $f\colon X\to \left]-\infty,+\infty\right]$ is a proper
lower semicontinuous $\frac{1}{\lambda}$-hypoconvex function, then \begin{equation}\label{e:hypoc:sub}
\pars f=\partial\Big(f+\frac{1}{2\lambda}\|\cdot\|^2\Big)-\frac{1}{\lambda}\ensuremath{\operatorname{Id}}. \end{equation} Consequently, for a hypoconvex function the Clarke--Rockafellar, Mordukhovich, and Fr\'echet subdifferential operators all coincide. \end{proposition} \begin{proof}
For the convex function $f+\frac{1}{2\lambda}\|\cdot\|^2$, apply Definition~\ref{def:abst}\ref{i:p1} and
\ref{i:p3} to obtain
$$\partial\Big(f+\frac{1}{2\lambda}\|\cdot\|^2\Big)
=\pars\Big(f+\frac{1}{2\lambda}\|\cdot\|^2\Big) =\pars f+\frac{1}{\lambda}\ensuremath{\operatorname{Id}}$$ from which \eqref{e:hypoc:sub} follows. \end{proof}
Let $f^*$ denote the Fenchel conjugate of $f$. The following result is well known in $\ensuremath{\mathbb R}^n$, see, e.g., \cite[Exercise~12.61(b)(c), Example~11.26(d)~and~Proposition~12.19]{Rock98}, and \cite{Wang2010}. In fact, it also holds in a Hilbert space.
\begin{prop}\label{prop:hypo:func} The following are equivalent: \begin{enumerate}
\item
\label{prop:hypo:func:i}
$f$ is $\tfrac{1}{\lambda}$-hypoconvex.
\item
\label{prop:hypo:func:ii}
$f+\tfrac{1}{2\lam}
\normsq{\cdot}$ is convex.
\item
\label{prop:hypo:func:iii}
$\ensuremath{\operatorname{Id}}+\lambda \pars f$ is maximally monotone.
\item
\label{prop:hypo:func:vi} $(\forall \mu\in \left] 0,\lam\right[)$
$\ensuremath{\operatorname{Prox}}_{\mu f}$ is $\lambda/(\lambda-\mu)$-Lipschitz continuous with \begin{equation}
\label{eq:prox:res}
\ensuremath{\operatorname{Prox}}_{\mu f} =J_{\mu\pars f} =(\ensuremath{\operatorname{Id}}+\mu \pars f )^{-1}. \end{equation}
\item
\label{prop:hypo:func:v} $(\forall \mu\in \left] 0,\lam\right[)$
$\ensuremath{\operatorname{Prox}}_{\mu f}$ is
single-valued and continuous.
\end{enumerate}
\end{prop}
\begin{proof}
``\ref{prop:hypo:func:i}$\Leftrightarrow$\ref{prop:hypo:func:ii}":
Simple algebraic manipulations.
``\ref{prop:hypo:func:ii}$\Rightarrow$\ref{prop:hypo:func:iii}": As
$$\partial\Big(f+\frac{1}{2\lambda}\|\cdot\|^2\Big)=
\pars\Big(f+\frac{1}{2\lambda}\|\cdot\|^2\Big)=\pars f +\frac{1}{\lambda}\ensuremath{\operatorname{Id}}$$
is maximally monotone, so is $\ensuremath{\operatorname{Id}}+\lambda\pars f=\lambda\big(\pars f+\tfrac{1}{\lambda}\ensuremath{\operatorname{Id}}\big)$.
``\ref{prop:hypo:func:iii}$\Rightarrow$\ref{prop:hypo:func:vi}": By Definition~\ref{def:abst}\ref{i:p2} and \ref{i:p3},
$y\in \ensuremath{\operatorname{Prox}}_{\mu f}(x)$ implies that
$$0\in\pars\Big(f+\frac{1}{2\mu}\|\cdot-x\|^2\Big)(y)=\pars f(y)+\frac{1}{\mu}(y-x).$$
Thus, one has
\begin{equation}\label{e:prox:abst}
(\forall x\in X)\ \ensuremath{\operatorname{Prox}}_{\mu f}(x)\subseteq (\ensuremath{\operatorname{Id}}+\mu \pars f)^{-1}(x).
\end{equation}
Using
$$\ensuremath{\operatorname{Id}}+\mu\pars f=\frac{\lambda-\mu}{\lambda}\Big(\ensuremath{\operatorname{Id}}+\frac{\mu}{\lambda-\mu}(\ensuremath{\operatorname{Id}}+\lambda\pars f)\Big)$$
yields
$$(\ensuremath{\operatorname{Id}}+\mu\pars f)^{-1}=J_{A}\circ\Big(\frac{\lambda}{\lambda-\mu}\ensuremath{\operatorname{Id}}\Big),$$
where $A=\frac{\mu}{\lambda-\mu}(\ensuremath{\operatorname{Id}}+\lambda\pars f)$ is maximally monotone by the assumption.
Since
$J_{A}$ is nonexpansive on $X$, $(\ensuremath{\operatorname{Id}}+\mu\pars f)^{-1}$ is
$\lambda/(\lambda-\mu)$-Lipschitz.
Together with \eqref{e:prox:abst}, we obtain $\ensuremath{\operatorname{Prox}}_{\mu f}=(\ensuremath{\operatorname{Id}}+\mu\pars f)^{-1}.$
``\ref{prop:hypo:func:vi}$\Rightarrow$\ref{prop:hypo:func:v}": Clear.
``\ref{prop:hypo:func:v}$\Rightarrow$\ref{prop:hypo:func:ii}": Let $x\in X$ and let $\mu\in \left]0, \lambda\right[$.
We have
\begin{equation}
\label{e:conj}
e_{\mu}f(x)=\frac{1}{2\mu}\|x\|^2-\Big(f+\frac{1}{2\mu}\|\cdot\|^2\Big)^*\Big(\frac{x}{\mu}\Big),
\end{equation}
and $e_{\mu}f$ is locally Lipschitz, see, e.g., \cite[Proposition 3.3(b)]{Jourani14}. By \cite[Proposition 5.1]{Bernard05}, \ref{prop:hypo:func:v}
implies that $e_{\mu}f$ is Fr\'echet differentiable with $\nabla e_{\mu}f=\mu^{-1}(\ensuremath{\operatorname{Id}}-\ensuremath{\operatorname{Prox}}_{\mu f})$. Then $\big(f+\frac{1}{2\mu}\|\cdot\|^2\big)^*$ is Fr\'echet differentiable by \eqref{e:conj}. It follows from
\cite[Theorem 1]{Stromberg11} that $ f+\frac{1}{2\mu}\|\cdot\|^2$ is convex. Since this holds for every $\mu\in ]0,\lambda[$, \ref{prop:hypo:func:ii} follows.
\end{proof}
We now provide a new, refined characterization of hypoconvex functions in terms of the cocoercivity of their proximal operators or, equivalently, the conical nonexpansiveness of the displacement mappings of their proximal operators. \begin{theorem}
\label{prop:hypocon:av} Let $\mu\in \left] 0,\lambda\right[$. Then the following are equivalent. \begin{enumerate}
\item
\label{prop:hypocon:av:i}
$f$ is $\tfrac{1}{\lambda}$-hypoconvex.
\item
\label{prop:hypocon:av:ii}
$\ensuremath{\operatorname{Id}}-\ensuremath{\operatorname{Prox}}_{\mu f}$ is
$\tfrac{\lambda}{2(\lambda-\mu)}$-\text{conically nonexpansive}. \item \label{prop:hypocon:av:iii} $\ensuremath{\operatorname{Prox}}_{\mu f}$ is $\tfrac{\lambda-\mu}{\lambda}$-cocoercive. \end{enumerate} \end{theorem} \begin{proof} ``\ref{prop:hypocon:av:i}$\siff$\ref{prop:hypocon:av:ii}": Using $0<\tfrac{\mu}{\lambda}<1$ we have \begin{align*}
&\qquad\text{$f$ is $\tfrac{1}{\lambda}$-hypoconvex}
\\ &\siff \text{$\ensuremath{\operatorname{Id}}+\lambda \pars f$ is maximally monotone} \tag{\text{by \cref{prop:hypo:func}}} \\ &\siff \text{$\tfrac{\mu}{\lambda}\ensuremath{\operatorname{Id}}+ \mu\pars f$ is maximally monotone} \\ &\siff \text{$\mu\pars f$ is maximally $\bigl(-\tfrac{\mu}{\lambda}\bigr)$-monotone} \\ &\siff \text{$(\mu\pars f)^{-1}$ is maximally $\bigl(-\tfrac{\mu}{\lambda}\bigr)$-comonotone} \tag{\text{by \cref{lem:bmax:inv}}} \\ &\siff \text{$J_{(\mu\pars f)^{-1}}$ }
\text{is $\tfrac{\lambda}{2(\lambda-\mu)}$-\text{conically nonexpansive}} \tag{\text{by \cref{cor:eq:nexp:-0.3}\ref{cor:eq:nexp:-0.3:0}}} \\ &\siff \text{$ \ensuremath{\operatorname{Id}} - J_{\mu\pars f}$ }
\text{is $\tfrac{\lambda}{2(\lambda-\mu)}$-\text{conically nonexpansive}} \tag{\text{by \cref{prop:gen:min}\ref{prop:gen:min:i} }} \\ &\siff \text{$\ensuremath{\operatorname{Id}}-\ensuremath{\operatorname{Prox}}_{\mu f} $ is $\tfrac{\lambda}{2(\lambda-\mu)}$-\text{conically nonexpansive}} \tag{\text{by \cref{eq:prox:res}}}. \end{align*} ``\ref{prop:hypocon:av:ii}$\siff$\ref{prop:hypocon:av:iii}": Use \cref{lem:coco:conc}. \end{proof}
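The cocoercivity in \ref{prop:hypocon:av:iii} can be checked numerically in a scalar instance (an illustration only, not part of the argument): take $f(y)=|y|-\tfrac{1}{2\lambda}y^2$, so that $f+\tfrac{1}{2\lambda}|\cdot|^2=|\cdot|$ is convex and $f$ is $\tfrac{1}{\lambda}$-hypoconvex; the closed-form prox used below, a scaled soft-thresholding, is a routine computation introduced here for this check.

```python
import random

lam, mu = 2.0, 0.5
c = lam / (lam - mu)

def prox(x):
    # Prox_{mu f}(x) for f(y) = |y| - y^2/(2*lam): a scaled soft-thresholding
    if x > mu:
        return c * (x - mu)
    if x < -mu:
        return c * (x + mu)
    return 0.0

# cocoercivity with constant (lam - mu)/lam:
# <Tx - Ty, x - y> >= ((lam - mu)/lam) * |Tx - Ty|^2
random.seed(0)
rho = (lam - mu) / lam
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    d = prox(x) - prox(y)
    assert d * (x - y) >= rho * d * d - 1e-12
```

In the regions where the prox is linear with slope $c=\lambda/(\lambda-\mu)$, the cocoercivity inequality holds with equality, which is consistent with the constant $\tfrac{\lambda-\mu}{\lambda}$ being sharp.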
\begin{corollary}
\label{cor:Lips:grad} Suppose that $f\colon X\to \ensuremath{\mathbb R}$ is Fr\'echet differentiable such that $\grad f$ is Lipschitz with a constant $1/\lambda$. Then the following hold: \begin{enumerate}
\item
\label{cor:Lips:grad:i}
$\ensuremath{\operatorname{Id}} +\lambda \grad f$ is maximally monotone.
\item
\label{cor:Lips:grad:ii}
$f$ is $\tfrac{1}{\lambda}$-hypoconvex.
\item
\label{cor:Lips:grad:iii}
$f+\tfrac{1}{2\lambda}
\normsq{\cdot}$ is convex.
\item
\label{cor:Lips:grad:iv} $(\forall \mu\in \left] 0,\lambda \right[)$ $\ensuremath{\operatorname{Prox}}_{\mu f}$ is single-valued. \item \label{cor:Lips:grad:v} $(\forall \mu\in \left] 0,\lambda \right[)$ $\ensuremath{\operatorname{Prox}}_{\mu f}$ is $\tfrac{\lambda-\mu}{\lambda}$-cocoercive. \item \label{cor:Lips:grad:vi} $(\forall \mu\in \left] 0,\lambda \right[)$ $ \ensuremath{\operatorname{Prox}}_{\mu f} =J_{\mu\pars f} =(\ensuremath{\operatorname{Id}}+\mu \grad f )^{-1}. $ \item \label{cor:Lips:grad:vii} $(\forall \mu\in \left] 0,\lambda \right[)$ $\ensuremath{\operatorname{Id}}-\ensuremath{\operatorname{Prox}}_{\mu f}$ is $\tfrac{\lambda}{2(\lambda-\mu)}$-\text{conically nonexpansive}. \end{enumerate} \end{corollary} \begin{proof} Definition~\ref{def:abst}\ref{i:p1.5}
implies that $(\forall x\in X)$
$\pars f(x)=\{\grad f(x)\}$. \ref{cor:Lips:grad:i}: Indeed, $\lambda \grad f$ is nonexpansive. Now the conclusion follows from \cite[Example~20.29]{BC2017}. \ref{cor:Lips:grad:ii}--\ref{cor:Lips:grad:vii}: Combine \ref{cor:Lips:grad:i} with \cref{prop:hypo:func} and \cref{prop:hypocon:av}. \end{proof}
Finally, we give two examples to illustrate our results.
\begin{example} Suppose that $X=\ensuremath{\mathbb R}$. Let $\lambda>0$ and set $f_\lambda \colon x\mapsto \exp(x)-\tfrac{1}{2\lambda}x^2 $. Then $f_\lambda$ is $\tfrac{1}{\lambda}$-hypoconvex by \cref{prop:hypo:func}, $f'_\lambda\colon x\mapsto \exp(x) -\tfrac{x}{\lambda}$,
and $\ensuremath{\operatorname{Id}}+\lambda f'_\lambda=\lambda \exp $ is maximally monotone. Moreover, for every $\mu\in \left]0,\lambda\right ]$ we have \begin{subequations} \begin{align} \ensuremath{\operatorname{Prox}}_{\mu f_\lambda}(x) &=\bigl( \ensuremath{\operatorname{Id}}+\mu f'_\lambda\bigr)^{-1}(x) =\bigl((1-\tfrac{\mu}{\lambda})\ensuremath{\operatorname{Id}}+\mu\exp\bigr)^{-1}(x) \label{se:1} \\ &=\begin{cases} \ln\bigl(\tfrac{x}{\mu}\bigr) , &\text{if~~}\mu=\lambda;\\ \tfrac{\lambda x}{\lambda-\mu} -W
\bigl(\tfrac{\lambda\mu\exp(\lambda x/(\lambda-\mu ))}{\lambda-\mu }\bigr), &\text{if $\mu\in \left]0,\lambda\right[$}, \end{cases} \end{align} \end{subequations} where $W$ denotes the principal branch of the Lambert $W$ function and the first identity in \cref{se:1} follows from \cref{cor:Lips:grad}\ref{cor:Lips:grad:vi}. \end{example}
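As a numerical sanity check of the closed form above (illustrative only; the Newton iteration for the principal branch of the Lambert $W$ function is an implementation detail introduced here, not taken from the text), one can verify that the claimed prox satisfies the stationarity equation $(1-\tfrac{\mu}{\lambda})y+\mu e^{y}=x$:

```python
import math

def lambert_w(z, tol=1e-12):
    # Newton iteration for the principal branch W(z), z > 0:
    # solve w * exp(w) = z starting from w0 = log(1 + z)
    w = math.log(1.0 + z)
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def prox_closed_form(x, lam, mu):
    # claimed Prox_{mu f_lam}(x) for f_lam(y) = exp(y) - y^2/(2*lam), 0 < mu < lam
    c = lam / (lam - mu)
    return c * x - lambert_w(c * mu * math.exp(c * x))

# the prox must satisfy the stationarity condition (1 - mu/lam)*y + mu*exp(y) = x
lam, mu = 2.0, 0.5
for x in (-3.0, 0.0, 1.5):
    y = prox_closed_form(x, lam, mu)
    assert abs((1 - mu / lam) * y + mu * math.exp(y) - x) < 1e-9
```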
\begin{example}
Let $D$ be a nonempty closed convex subset of $X$,
let $\lambda>0$ and set
$f_\lambda
=
\iota_D -\tfrac{1}{2\lambda}\norm{\cdot}^2 $.
Then
$f_\lambda$ is $\tfrac{1}{\lambda}$-hypoconvex
by \cref{prop:hypo:func},
and
$\pars f_\lambda
=N_D-\tfrac{1}{\lambda}\ensuremath{\operatorname{Id}}$
by
Proposition~\ref{p:sub:for}. Moreover, for every $\lambda>0$,
$\ensuremath{\operatorname{Id}}+\lambda \pars f_\lambda=N_D$, which is
maximally monotone. Finally, using \cref{eq:prox:res}
and \cite[Example~23.4]{BC2017}
we have for every $\mu\in \left]0,\lambda\right [$
\begin{subequations}
\begin{align}
\ensuremath{\operatorname{Prox}}_{\mu f_\lambda}
&=\bigl( \ensuremath{\operatorname{Id}}+\mu \pars f_\lambda\bigr)^{-1}
=\bigl((1-\tfrac{\mu}{\lambda})\ensuremath{\operatorname{Id}}+\mu N_D\bigr)^{-1}\\
&=\bigl((1-\tfrac{\mu}{\lambda})(\ensuremath{\operatorname{Id}}+ N_D)\bigr)^{-1}
=P_D\circ \bigl(\tfrac{\lambda}{\lambda-\mu}\ensuremath{\operatorname{Id}}\bigr).
\end{align}
\end{subequations}
In particular, if $D$ is a closed convex cone
we learn that $ \ensuremath{\operatorname{Prox}}_{\mu f_\lambda}=\tfrac{\lambda}{\lambda-\mu}P_D$. \end{example}
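The cone case can be checked numerically. A brute-force sketch (with $X=\ensuremath{\mathbb R}$ and $D=\left[0,\infty\right[$, chosen only for illustration) compares the closed form $\tfrac{\lambda}{\lambda-\mu}P_D$ with a grid minimization of $y\mapsto \iota_D(y)-\tfrac{1}{2\lambda}y^2+\tfrac{1}{2\mu}(y-x)^2$:

```python
lam, mu = 2.0, 0.5

def objective(y, x):
    # iota_D(y) - y^2/(2*lam) + (y - x)^2/(2*mu) on D = [0, infinity)
    return -y * y / (2 * lam) + (y - x) ** 2 / (2 * mu)

def prox_grid(x, lo=0.0, hi=20.0, n=40001):
    # brute-force argmin over a fine grid of D (truncated at `hi`)
    best_y, best_v = lo, objective(lo, x)
    for i in range(1, n):
        y = lo + (hi - lo) * i / (n - 1)
        v = objective(y, x)
        if v < best_v:
            best_y, best_v = y, v
    return best_y

for x in (-1.0, 0.3, 2.0):
    closed = max(lam * x / (lam - mu), 0.0)   # (lam/(lam-mu)) * P_D(x)
    assert abs(prox_grid(x) - closed) < 1e-3
```

Note that the objective is strongly convex on $D$ precisely because $\mu<\lambda$, which is why the grid minimizer is well defined.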
\section*{Acknowledgment} HHB, WMM, and XW were partially supported by the Natural Sciences and Engineering Research Council of Canada.
\small
\end{document}
\begin{document}
\begin{frontmatter} \title{Nonlocal equations with regular varying decay solutions}
\author[chu]{Sujin Khomrutai\corref{cor1}} \ead{sujin.k@chula.ac.th}
\address[chu]{Department of Mathematics and Computer Science, Faculty of Science, Chulalongkorn University, Bangkok 10330, Thailand}
\cortext[cor1]{Corresponding author.}
\begin{keyword} Asymptotic behavior \sep nonlocal equations \sep regular varying functions \sep fractional Laplacian \sep dispersal tails \sep regular varying modified exponential series \MSC[2010] 35B40 \sep 45A05 \sep 45M05 \end{keyword}
\begin{abstract} We study the asymptotic behavior of nonlocal diffusion equations $\partial_tu=\mathcal{J} u-\chi_0u$ in $\mathbb{R}^n\times(0,\infty)$ and obtain a sufficient condition for solutions of the Cauchy problem to decay in time at the rate of a regular varying function. In the sufficient condition, a sharp bound of a certain form is required for the $k$-fold iterations $\mathcal{J}^ku_0$ or the kernels $J_k$. We prove the desired decay rate by analyzing the asymptotic behavior of a regular varying modified exponential series. Then we verify that the sufficient condition holds for most of the known radially symmetric kernels, and for some more general kernels, using the sharp Young's convolution inequality and a Fourier splitting argument. Classical results on the decay of solutions of these nonlocal diffusion equations are re-established and generalized. Finally, using our framework, we exhibit a kernel whose solutions have a prescribed regular varying decay, for a wide class of regular varying functions.
\end{abstract}
\end{frontmatter}
\section{Introduction}
In this work, we give a sufficient condition for solutions of the nonlocal equation \begin{align}\label{Eqn:main} \partial_tu=\int_{\mathbb{R}^n}J(x,y)u(y,t)dy-\chi_0u(x,t)\quad(x,t)\in\mathbb{R}^n\times(0,\infty), \end{align} to decay in time at the rate of a regular varying function. Here $J=J(x,y)$ is a given function, not necessarily radially symmetric, and $\chi_0>0$ is a constant. Nonlocal equations of this form have been used to model and study many phenomena such as diffusion, image enhancement \cite{GilboaEtal08}, phase transition \cite{BatesEtal99}, dispersal of a species by long-range effects \cite{Fife03}, etc. See also \cite{AndreuEtal10} and the references therein.
In the first step of our investigation, we express the solution of (\ref{Eqn:main}) as a power series in time involving the $k$-fold iterations \[
\mathcal{J}^k=\mathcal{J}\circ\cdots\circ\mathcal{J}\quad\mbox{($k$ terms of $\mathcal{J}$), acting on the initial condition $u_0=u|_{t=0}$}, \] where $\mathcal{J}$ is the integral operator \[ \mathcal{J} u_0(x)=\int_{\mathbb{R}^n}J(x,y)u_0(y)dy. \] Indeed, we have the representation formula for solution of (\ref{Eqn:main}) as \[ u(t)=e^{-\chi_0t}u_0+e^{-\chi_0t}\sum_{k=1}^\infty\frac{t^k}{k!}\mathcal{J}^ku_0. \]
Then we turn the investigation into bounding norms of $J_k$, the kernels of the operators $\mathcal{J}^k$, or norms of the functions $\mathcal{J}^ku_0$. To the author's knowledge, there have been no studies of nonlocal equations in this direction, where the asymptotic behavior of solutions is derived directly from the asymptotic behavior
of $J_k$ or of $\mathcal{J}^ku_0$ as $k\to\infty$. (The closest one is \cite{BrandleEtal11}, where compactly supported and Gaussian kernels $J$ are considered.) The benefit of taking this approach is that the results can be applied to nonlocal equations with real- or complex-valued kernels. This is in contrast with many works on asymptotic behavior of nonlocal equations that rely heavily on the positivity of the kernel. In fact, the positivity enables the application of comparison and barrier arguments. Another possible benefit of this approach is that it could lead to the study of non-symmetric nonlocal equations.
At an abstract level, we can prove the following general result. Assume that all $J_k$ or all $\mathcal{J}^ku_0$ lie in a Banach space $X$ with norm $\|\cdot\|$, and an estimate of the form \begin{align}\label{Tmp:JkJku0}
\|J_k\|\leq R_k\quad\mbox{or}\quad\|\mathcal{J}^ku_0\|\leq R_k \end{align} respectively, holds for all $k$ sufficiently large, where \[ R_k=R(k)=k^{\beta}L(k)\quad(\beta\in\mathbb{R}) \] is a regular varying function. Then we are able to prove that the solution of (\ref{Eqn:main}) satisfies the asymptotic behavior \[
\|u(t)\|\lesssim t^{\beta}L(t)\quad\mbox{as $t\to\infty$} \] in some suitable Banach space $Y$. This abstract result is valid for integral operators which can be either symmetric or non-symmetric, real-valued or complex-valued. We obtain the preceding result by establishing the asymptotic behavior of the exponential type power series \[ \sum_{k=N}^\infty\frac{(\alpha t)^k}{k!}R_k\asymp R(\alpha t)e^{\alpha t}\quad\mbox{as $t\to\infty$}. \]
Having the above abstract result, we now face a challenging new question. For a given kernel $J$, how do we get an inequality of the form (\ref{Tmp:JkJku0})? In this work we pursue this question for radially symmetric kernels, i.e.\ $J=J(x-y)$. Note that in this case the $k$-fold product kernel function is \[ J_k=J\ast\cdots\ast J,\quad\mbox{the $(k-1)$-times convolution}. \]
The Banach spaces $X,Y$ are $L^p(\mathbb{R}^n)$, where $1\leq p\leq\infty$. Note that the usual Young's convolution inequality is not enough to get ``good bounds $R_k$'' for $\|J_k\|_{L^p}$ or $\|\mathcal{J}^ku_0\|_{L^p}$, in the sense that the resulting power series for the solution does not exhibit a power decay in time, especially when $\chi_0=\|J\|_{L^1}$. Therefore some more sophisticated tools have to be employed.
The simplest convolution integral operators considered in this work are those having kernels possessing a higher integrability: $J\in L^1(\mathbb{R}^n)\cap L^r(\mathbb{R}^n)$ for some $r>1$. Such operators or kernels include \begin{itemize} \item[(1)] Continuous functions with compact support, \item[(2)] $(1-\triangle)^{-1}$ (the Bessel potential operator), \item[(3)] $(\lambda-\mathcal{L})^{-1}$, where $\lambda>0$ and $\mathcal{L}$ is an elliptic operator, \item[(4)] Weakly singular operators, etc. \end{itemize} For these kernels, we employ the sharp Young's convolution inequality to show that \[
\|J_k\|_{L^\infty}\lesssim k^{-n/2}\quad\mbox{for all $k$ sufficiently large}. \] The sharp constant in the sharp Young's (or Brascamp--Lieb) inequality plays a crucial role in getting this asymptotic bound. After establishing this fundamental result, we can apply the abstract result from the previous paragraph to get the asymptotic behavior of solutions of (\ref{Eqn:main}) when $u_0\in L^1(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^n)$. For the initial condition $u_0\in L^1(\mathbb{R}^n)$, the proof of our result directly gives a refined asymptotic behavior, partly generalizing the corresponding result in \cite{IgnatEtal08}.
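The decay $\|J_k\|_{L^\infty}\lesssim k^{-n/2}$ is easy to see numerically in a toy case (an illustration only, not part of the paper's argument): take $n=1$ and $J$ the indicator of $[-1/2,1/2]$, iterate discrete convolutions on a grid, and compare with the central limit heuristic; the grid parameters are arbitrary choices.

```python
import numpy as np

# J = indicator of [-1/2, 1/2]: compactly supported, in L^1 and L^infty, ||J||_{L^1} = 1
dx = 0.005
J = np.ones(200)                   # midpoint samples of J on [-1/2, 1/2]

Jk = J.copy()
sups = {1: Jk.max()}
for k in range(2, 31):
    Jk = np.convolve(Jk, J) * dx   # grid approximation of J_k = J_{k-1} * J
    sups[k] = Jk.max()

# ||J_k||_{L^infty} decays like k^{-n/2} (here n = 1); the central limit theorem
# even predicts the constant: sup J_k ~ 1/sqrt(2*pi*k*sigma^2) with sigma^2 = 1/12
for k in (10, 20, 30):
    predicted = 1.0 / np.sqrt(2 * np.pi * k / 12.0)
    assert abs(sups[k] / predicted - 1) < 0.05
```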
Next, we turn our study to the stable laws. For simplicity, we put $\chi_0=\|J\|_{L^1}=1$ in (\ref{Eqn:main}) and assume $J\geq0$. We assume in this case that the kernel has the expansion in the Fourier variables \[
\widehat{J}(\xi)=1-A|\xi|^\sigma(\ln(1/|\xi|))^\mu+o\bigl(|\xi|^\sigma(\ln(1/|\xi|))^\mu\bigr)\quad\mbox{as $|\xi|\to0$}. \] In the special case that \[ \mu=0,\quad\mbox{or}\quad\mu=1, \]
the results we obtain are the classical results in \cite{ChasseigneEtal06}. So we have generalized the results to all real numbers $\mu$. For stable laws with $0<\sigma<2$, the kernels possess no higher integrability: $\|J\|_{L^r}=\infty$ for all $r>1$ (see \cite{Feller71}). This means we cannot apply the results from the previous case. To compensate for this difficulty, we analyze the functions $\mathcal{J}^ku_0$ instead of the kernels $J_k$. As in \cite{ChasseigneEtal06}, an integrability assumption on $u_0$ and its Fourier transform $\widehat{u}_0$ has to be made. Now, thanks to the radial symmetry of $J$, we get that \[ \widehat{J}_k(\xi)=\widehat{J}(\xi)^k\quad\forall\,k\in\mathbb{N}. \] Then, employing a Fourier splitting argument on the frequency domain $\mathbb{R}^n$, we can prove the bound \[
\|\mathcal{J}^ku_0\|_{L^\infty}\lesssim(k\ln k)^{-n/\sigma}\quad\mbox{as $k\to\infty$}, \] and then the asymptotic behavior of solutions of (\ref{Eqn:main}) follows directly from the abstract result.
Finally, we extend the work to an arbitrary slowly varying function $L:(0,\infty)\to(0,\infty)$ and an arbitrary $\beta>0$. Under an assumption on $L$, we exhibit a kernel $J$ such that the solutions to (\ref{Eqn:main}) satisfy \[
\|u(t)\|_{L^p}\lesssim(tL(t))^{-\beta}\quad\mbox{as $t\to\infty$}. \]
\section{Preliminaries}\label{Sec:prelim}
\subsection*{a. Notation, basic facts, and conventions}
Let $\varGamma(s)=\int_0^\infty e^{-\tau}\tau^{s-1}d\tau$ be the Gamma function and \[ \mathcal{F}\{f\}(\xi)=\widehat{f}(\xi)=\int_{\mathbb{R}^n}f(x)e^{-ix\cdot\xi}dx \] the Fourier transform of $f$. We denote \begin{align*} &a_k\sim b_k\,\,\,\,\mbox{as $k\to\infty$}\quad\Leftrightarrow\quad\lim_{k\to\infty}\frac{a_k}{b_k}=1\\ &f(t)\sim g(t)\,\,\,\,\mbox{as $t\to\infty$}\quad\Leftrightarrow\quad\lim_{t\to\infty}\frac{f(t)}{g(t)}=1. \end{align*} We shall often use the fact that if two sequences $\{a_k\},\{b_k\}$ satisfy $a_k\sim b_k$ as $k\to\infty$ and $b_k\neq0$ for all $k$ large, then there is a constant $C>0$ such that \[ \frac{1}{C}b_k\leq a_k\leq Cb_k\quad\forall\,k\geq k_0. \] If two functions $f(t)\sim g(t)$ as $t\to\infty$ and $g(t)\neq0$ for all $t$ large, then there is a constant $C>0$ such that \[ \frac{1}{C}g(t)\leq f(t)\leq Cg(t)\quad\forall\,t\geq t_0, \] i.e.\ $f(t)\asymp g(t)$ as $t\to\infty$.
Throughout this work $J=J(x,y)$ is a function defined for $(x,y)\in\mathbb{R}^n\times\mathbb{R}^n$, $J$ may be complex-valued, and let $\mathcal{J}$ be the integral operator with kernel $J$, that is \[ \mathcal{J} u:=\int_{\mathbb{R}^n}J(x,y)u(y)dy. \] For each positive integer $k$, the $k$-fold product $\mathcal{J}^k=\mathcal{J}\circ\cdots\circ\mathcal{J}$ ($k$ terms of $\mathcal{J}$) is the integral operator whose kernel $J_k=J_k(x,y)$ is given by \begin{align*} &J_k(x,y)=\int_{(\mathbb{R}^n)^{k-1}}J(x,y_1)J(y_1,y_2)\cdots J(y_{k-1},y)dy_{k-1}\cdots dy_1. \end{align*} Thus \begin{align*} \mathcal{J}^k u(x)&=\int_{\mathbb{R}^n}J_k(x,y)u(y)dy\\ &=\int_{(\mathbb{R}^n)^k}J(x,y_1)J(y_1,y_2)\cdots J(y_{k-1},y)u(y)dydy_{k-1}\cdots dy_1. \end{align*} Note that, if the kernel is radially symmetric, i.e.\ $J(x,y)=J(x-y)$, then the corresponding kernel $J_k$ takes the form \[ J_k(x)=\int_{(\mathbb{R}^n)^{k-1}}J(x-y_1)J(y_1-y_2)\cdots J(y_{k-1})dy_{k-1}\cdots dy_1, \] from which it can be easily seen (via a simple change of variables) that $J_k$ is also radially symmetric. For a symmetric kernel $J$, its $k$-fold product kernels are known as the convolution $J_k=J\ast\cdots\ast J$.
\subsection*{b. Representation formula of solutions}
Next, we find a representation formula for solutions of Eqn.\ (\ref{Eqn:main}). We employ the canonical transformation \[ v=e^{\chi_0t}u, \] so that the equation becomes \[ v(t)=u_0+\int_0^t\mathcal{J} v(\tau)d\tau\quad(t\geq0). \] Formally performing the Picard iteration, it follows that $v$ should satisfy \begin{align*} &v(t)=u_0+\int_0^t\mathcal{J}\left(u_0+\int_0^{\tau_1}\mathcal{J} v(\tau_2)d\tau_2\right)d\tau_1\\ &\hphantom{v(t)}=u_0+t\mathcal{J} u_0+\int_0^t\int_0^{\tau_1}\mathcal{J}^2v(\tau_2)d\tau_2d\tau_1,\\ &v(t)=u_0+\int_0^t\mathcal{J}\left(u_0+\tau_1\mathcal{J} u_0+\int_0^{\tau_1}\int_0^{\tau_2}\mathcal{J}^2 v(\tau_3)d\tau_3d\tau_2\right)d\tau_1\\ &\hphantom{v(t)}= u_0+t\mathcal{J} u_0+\frac{t^2}{2!}\mathcal{J}^2u_0+\int_0^t\int_0^{\tau_1}\int_0^{\tau_2}\mathcal{J}^3v(\tau_3)d\tau_3d\tau_2d\tau_1,\\ &v(t)=u_0+t\mathcal{J} u_0+\cdots+\frac{t^k}{k!}\mathcal{J}^ku_0+\int_0^t\int_0^{\tau_1}\cdots\int_0^{\tau_{k}}\mathcal{J}^{k+1}v(\tau_{k+1})d\tau_{k+1}\cdots d\tau_1,\quad k\in\mathbb{N}. \end{align*} Hence if $\mathcal{J}^{k+1}u_0$ decays sufficiently fast as $k\to\infty$, we can conclude that $v$ must have the form \[ v(t)=u_0+t\mathcal{J} u_0+\cdots+\frac{t^k}{k!}\mathcal{J}^ku_0+\cdots. \]
Reversing the transformation above, we now make the following definition.
\begin{definition}\label{Def:SolGreen}
By a solution to the nonlocal equation (\ref{Eqn:main}) with a given initial value $u_0$, we mean the function \begin{align*} u(t)=\mathcal{G}(t)u_0:=e^{-\chi_0t}\sum_{k=0}^\infty\frac{t^k}{k!}\mathcal{J}^ku_0. \end{align*} The operator $\mathcal{G}(t)$ is the \textit{Green operator} for (\ref{Eqn:main}) whose kernel $G(x,y,t)$ is given by \begin{align*} G(x,y,t)=e^{-\chi_0t}\sum_{k=0}^\infty\frac{t^k}{k!}J_k(x,y), \end{align*} where each $J_k$ is the kernel of $\mathcal{J}^k$.
\end{definition}
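To see the series of Definition~\ref{Def:SolGreen} in action, here is a small sketch in which a $2\times2$ matrix stands in for the operator $\mathcal{J}$ (a toy finite-dimensional analogue, chosen only for illustration); the truncated series is checked against the equation $u'=\mathcal{J}u-\chi_0u$ by a centered finite difference.

```python
import math

def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def green_series(A, chi0, u0, t, terms=60):
    # u(t) = e^{-chi0 t} * sum_{k>=0} t^k/k! * A^k u0, truncated at `terms`
    out = [0.0] * len(u0)
    w = list(u0)          # w = A^k u0
    coeff = 1.0           # coeff = t^k / k!
    for k in range(terms):
        for i in range(len(out)):
            out[i] += coeff * w[i]
        w = mat_vec(A, w)
        coeff *= t / (k + 1)
    e = math.exp(-chi0 * t)
    return [e * x for x in out]

# check that the series satisfies u' = A u - chi0 u (centered finite difference)
A = [[0.2, 0.7], [0.4, -0.1]]
chi0, u0, t, h = 1.0, [1.0, -2.0], 1.3, 1e-5
up = green_series(A, chi0, u0, t + h)
um = green_series(A, chi0, u0, t - h)
u = green_series(A, chi0, u0, t)
rhs = [a - chi0 * b for a, b in zip(mat_vec(A, u), u)]
for i in range(2):
    assert abs((up[i] - um[i]) / (2 * h) - rhs[i]) < 1e-6
```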
The following result follows directly from the definition.
\begin{lemma}\label{Lem:JkL1}
If $J$ is a radially symmetric $L^1$ function, then $J_k\in L^1(\mathbb{R}^n)$ for all $k\in\mathbb{N}$ and \[
\|J_k\|_{L^1}\leq\|J\|_{L^1}^k. \] Moreover, if $u_0\in L^1(\mathbb{R}^n)$ then \[
\|\mathcal{G}(t)u_0\|_{L^1}\leq e^{-(\chi_0-\|J\|_{L^1})t}\|u_0\|_{L^1}. \]
\end{lemma}
\begin{proof} We have \begin{align*}
\|J_k\|_{L^1}&=\int_{\mathbb{R}^n}\left|\int_{(\mathbb{R}^n)^{k-1}}J(x-y_1)J(y_1-y_2)\cdots J(y_{k-1})dy_{k-1}\cdots dy_1\right|dx\\
&\leq\int_{(\mathbb{R}^n)^k}|J(x-y_1)||J(y_1-y_2)|\cdots|J(y_{k-1})|dxdy_{1}\cdots dy_{k-1}=\|J\|_{L^1}^k. \end{align*} For the second assertion, we use Young's inequality to get \begin{align*}
\|\mathcal{G}(t)u_0\|_{L^1}\leq e^{-\chi_0t}\sum_{k=0}^\infty\frac{t^k}{k!}\|J_k\|_{L^1}\|u_0\|_{L^1}=e^{-(\chi_0-\|J\|_{L^1})t}\|u_0\|_{L^1}.\qquad\mbox{\qed} \end{align*} \end{proof}
\subsection*{c. Some tools from Analysis}
We will need the following facts in our study of exponential type series whose coefficients are modified by a regular varying sequence.
\begin{lemma}[\cite{TricErde51}]\label{Lem:ratiogamma}
Let $\alpha,\beta\in\mathbb{R}$. Then the ratio of Gamma functions has the asymptotic expansion \[ \frac{\varGamma(s+\alpha)}{\varGamma(s+\beta)}=s^{\alpha-\beta}\left(1+\frac{(\alpha-\beta)(\alpha+\beta-1)}{2s}+O(s^{-2})\right)\quad\mbox{as $s\to\infty$}. \] In particular, we have \[ \frac{\varGamma(s+\alpha)}{\varGamma(s+\beta)}\leq Cs^{\alpha-\beta}\quad\mbox{as $s\to\infty$}. \] \end{lemma}
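Lemma \ref{Lem:ratiogamma} is easy to probe numerically via log-Gamma (an illustrative check; the parameter values $\alpha,\beta$ are arbitrary choices):

```python
import math

def gamma_ratio(s, a, b):
    # Gamma(s + a) / Gamma(s + b), computed stably via log-Gamma
    return math.exp(math.lgamma(s + a) - math.lgamma(s + b))

a, b = 1.7, -0.4
for s in (50.0, 500.0, 5000.0):
    # leading order: Gamma(s+a)/Gamma(s+b) ~ s^(a-b), with an O(1/s) correction
    assert abs(gamma_ratio(s, a, b) / s ** (a - b) - 1) < 1.0 / s
```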
\begin{lemma}[\cite{BealsWong10}]\label{Lem:Kummer}
For $a,b\in\mathbb{R}$ with $-b\not\in\mathbb{N}\cup\{0\}$, Kummer's confluent hypergeometric function of the first kind is defined by \[ M(a,b,s):=\sum_{k=0}^\infty\frac{(a)_k}{(b)_k}\frac{s^k}{k!}, \] where $(a)_k=a(a+1)\cdots(a+k-1)$ is the Pochhammer symbol. If $b>a>0$ then \[ M(a,b,s)\sim\frac{\varGamma(b)}{\varGamma(a)}s^{a-b}e^s\quad\mbox{as $s\to\infty$}. \]
\end{lemma}
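The asymptotics in Lemma \ref{Lem:Kummer} can likewise be checked by direct summation; the identity $M(1,2,s)=(e^s-1)/s$ serves as a closed-form control (the parameter values below are arbitrary, illustrative choices):

```python
import math

def kummer_M(a, b, s, terms=800):
    # M(a, b, s) = sum_k (a)_k/(b)_k * s^k/k!, via the term-by-term recurrence
    total, term = 1.0, 1.0
    for k in range(terms):
        term *= (a + k) / (b + k) * s / (k + 1)
        total += term
    return total

# closed-form control: M(1, 2, s) = (e^s - 1)/s
s = 5.0
assert abs(kummer_M(1.0, 2.0, s) - (math.exp(s) - 1) / s) < 1e-9

# asymptotics: M(a, b, s) ~ Gamma(b)/Gamma(a) * s^(a-b) * e^s as s -> infinity
a, b, s = 1.5, 3.0, 200.0
approx = math.gamma(b) / math.gamma(a) * s ** (a - b) * math.exp(s)
assert abs(kummer_M(a, b, s) / approx - 1) < 0.02
```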
Next, we recall the notion of regular varying functions.
\begin{definition}\label{Def:RegSlow}
A measurable function $R:[N_0,\infty)\to(0,\infty)$, where $N_0>0$, is called regular varying with index $\beta\in\mathbb{R}$ if it satisfies \[ \lim_{s\to\infty}\frac{R(\lambda s)}{R(s)}=\lambda^\beta\quad\mbox{for all $\lambda>0$}. \] A slowly varying function $L$ is a regular varying function with index $\beta=0$, or, it is characterized by \[ \lim_{s\to\infty}\frac{L(\lambda s)}{L(s)}=1\quad\mbox{for all $\lambda>0$}. \]
\end{definition}
It is a fact that $R$ is a regular varying function with index $\beta$ if and only if it can be expressed as \[ R(s)=s^\beta L(s), \] where $L$ is slowly varying. For further properties see \cite{Bingham89}.
We also need the following crucial lemma.
\begin{lemma}[\cite{Karamata30},\cite{Bingham89}]\label{Lem:Slow}
If $L$ is a slowly varying function and $\varepsilon>0$ then \begin{align*} &\sup_{\tau\leq s}\tau^{\varepsilon}L(\tau)\sim s^{\varepsilon}L(s),\\ &\sup_{\tau\geq s}\tau^{-\varepsilon}L(\tau)\sim s^{-\varepsilon}L(s), \end{align*} as $s\to\infty$. Here $f(s)\sim g(s)$ as $s\to\infty$ means $\lim_{s\to\infty}f(s)/g(s)=1$.
\end{lemma}
\begin{lemma}\label{Lem:Slow2}
If $L$ is a slowly varying function and $\varepsilon>0$ then \begin{align*} &\inf_{\tau\leq s}\tau^{-\varepsilon}L(\tau)\sim s^{-\varepsilon}L(s),\\ &\inf_{\tau\geq s}\tau^{\varepsilon}L(\tau)\sim s^{\varepsilon}L(s), \end{align*} as $s\to\infty$.
\end{lemma}
\begin{proof} Observe that if $L$ is slowly varying then so is $K=1/L$. We have \begin{align*} \inf_{\tau\leq s}\tau^{-\varepsilon}L(\tau)=\frac{1}{\sup_{\tau\leq s}\tau^\varepsilon K(\tau)}\sim\frac{1}{s^\varepsilon K(s)}=s^{-\varepsilon}L(s)\quad(s\to\infty), \end{align*} which proves the first assertion. The second one follows by the same argument.\qquad\mbox{\qed} \end{proof}
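Lemmas \ref{Lem:Slow} and \ref{Lem:Slow2} can be visualized with $L=\ln$, for which $\tau^{\varepsilon}L(\tau)$ is eventually increasing and $\tau^{-\varepsilon}L(\tau)$ eventually decreasing, so the suprema are attained at the endpoint $\tau=s$; a grid check (illustrative only, with arbitrary $\varepsilon$ and $s$):

```python
import math

eps = 0.5
L = math.log    # L(s) = ln s is slowly varying

def h_below(tau):   # tau^eps * L(tau), eventually increasing
    return tau ** eps * L(tau)

def h_above(tau):   # tau^(-eps) * L(tau), decreasing for tau > e^(1/eps)
    return tau ** (-eps) * L(tau)

for s in (1e2, 1e4):
    grid_below = [2 + (s - 2) * i / 9999 for i in range(10000)]      # [2, s]
    grid_above = [s * (1 + 99 * i / 9999) for i in range(10000)]     # [s, 100 s]
    assert abs(max(map(h_below, grid_below)) / h_below(s) - 1) < 1e-6
    assert abs(max(map(h_above, grid_above)) / h_above(s) - 1) < 1e-6
```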
Finally, for a radially symmetric kernel $J$, in order to get the $L^\infty$ bound for $k$-fold convolution kernels $J_k$ we shall need the following important result.
\begin{lemma}[Brascamp-Lieb inequality \citep{BrascampLieb76}, \citep{LiebLoss01}]\label{Lem:SharpYoung} Let $p_1,\ldots,p_k,r\in[1,\infty]$ ($k\geq2$) be such that \[ \frac{1}{p_1}+\cdots+\frac{1}{p_k}=k-1+\frac{1}{r}. \] Then \[
\|f_1\ast\cdots\ast f_k\|_{L^r}\leq\left(\prod_{l=1}^kC_{p_l}\right)^n\|f_1\|_{L^{p_1}}\cdots\|f_k\|_{L^{p_k}} \] for all $f_1\in L^{p_1}(\mathbb{R}^n),\ldots,f_k\in L^{p_k}(\mathbb{R}^n)$, where $C_p$ for each $1\leq p\leq\infty$ is defined by \[ C_p=\left(\frac{p^{1/p}}{q^{1/q}}\right)^{1/2},\quad\frac{1}{p}+\frac{1}{q}=1. \]
\end{lemma}
\section{Nonlocal equations with regular varying decay solutions}
Our first main result is a bound for an exponential type series \begin{align}\label{Series:Rk} \sum_{k=N}^\infty\frac{(\alpha t)^k}{k!}R_k\quad\mbox{when $R_k=k^{\beta}$}, \end{align} where $\beta\in\mathbb{R}$. Although the result is true for all real numbers $\beta$, the most important case in our study of nonlocal equations is when $\beta<0$.
\begin{theorem}\label{Thm:Kummer}
Let $N\in\mathbb{N}$ and $\alpha>0,\beta\in\mathbb{R}$ be constants. Then there are $C=C(N,\beta,\alpha)>0$ and $t_0>0$ such that \begin{align}\label{Est:beta} \frac{1}{C}(\alpha t)^\beta e^{\alpha t}\leq\sum_{k=N}^\infty\frac{(\alpha t)^k}{k!}k^{\beta}\leq C(\alpha t)^{\beta}e^{\alpha t}\quad\forall\,t\geq t_0. \end{align} In particular, \[ \sum_{k=N}^\infty\frac{(\alpha t)^k}{k!}k^\beta\asymp(\alpha t)^\beta e^{\alpha t}\quad\mbox{as $t\to\infty$}. \]
\end{theorem}
\begin{proof} First we prove the upper bound. By splitting the series into the sums over $N\leq k<N_0$ and that over $k\geq N_0$, where $N_0>\beta$ is fixed, and noting that the first sum obviously satisfies $\lesssim\langle\alpha t\rangle^\beta e^{\alpha t}$, it suffices to prove the desired upper estimate under the assumption that \[ N>\beta. \]
Replacing $k\to k+N$, we rewrite the series as \begin{align*} \sum_{k=N}^\infty\frac{(\alpha t)^k}{k!}k^{\beta}&=(\alpha t)^N\sum_{k=0}^\infty\frac{k!}{(k+N)!(k+N)^{-\beta}}\frac{(\alpha t)^k}{k!}. \end{align*}
\begin{claim} There is a constant $C_1=C_1(N,\beta)>0$ such that \[ (k+N)!(k+N)^{-\beta}\geq C_1\prod_{l=1}^k\left(l+N-\beta\right) \quad\forall\,k\geq0. \]
\end{claim}
\begin{proof} The desired estimate is equivalent to \begin{align*} (k+N)^{-\beta}\geq C_2\frac{\varGamma(k+N-\beta+1)}{\varGamma(k+N+1)}, \end{align*} where $C_2:=C_1/\varGamma(N-\beta+1)$. By the work of Tricomi and Erd\'elyi on the asymptotic ratio of Gamma functions (Lemma \ref{Lem:ratiogamma}) we have \[ \frac{\varGamma(k+N-\beta+1)}{\varGamma(k+N+1)}\sim(k+N)^{-\beta}\quad\mbox{as $k\to\infty$}, \] so there is a constant $C_3=C_3(N,\beta)>0$ such that \[ \frac{\varGamma(k+N-\beta+1)}{\varGamma(k+N+1)}\leq C_3(k+N)^{-\beta}\quad\forall\,k\geq0. \] Taking $C_2=1/C_3$ proves the claim.\qquad\mbox{\qed} \end{proof}
According to the claim, we now have \begin{align*} \sum_{k=N}^\infty\frac{(\alpha t)^k}{k!}k^{\beta}&=(\alpha t)^N\sum_{k=0}^\infty\frac{k!}{(k+N)!(k+N)^{-\beta}}\frac{(\alpha t)^k}{k!}\\ &\leq C_{N,\beta}(\alpha t)^N\sum_{k=0}^\infty\frac{k!}{(1+N-\beta)(2+N-\beta)\cdots(k+N-\beta)}\frac{(\alpha t)^k}{k!}\\ &=C_{N,\beta}(\alpha t)^N\sum_{k=0}^\infty\frac{(1)_k}{(1+N-\beta)_k}\frac{(\alpha t)^k}{k!}, \end{align*} where $(a)_k:=a(a+1)\cdots(a+k-1)$ denotes the Pochhammer symbol.
The last series on the right hand side above takes the form of \textit{Kummer's confluent hypergeometric function of the first kind} (Lemma \ref{Lem:Kummer}). As $t\to\infty$, we have that \[ \sum_{k=0}^\infty\frac{(1)_k}{(1+N-\beta)_k}\frac{(\alpha t)^k}{k!}\sim\frac{\varGamma(1+N-\beta)}{\varGamma(1)}(\alpha t)^{-(N-\beta)}e^{\alpha t}. \] So we can choose $t_0>0$ such that \begin{align*} \sum_{k=N}^\infty\frac{(\alpha t)^k}{k!}k^{\beta}\leq C_{N,\beta,\alpha}(\alpha t)^N(\alpha t)^{-N+\beta}e^{\alpha t}=(\alpha t)^\beta e^{\alpha t}\quad\mbox{for all $t\geq t_0$}, \end{align*} which implies the desired upper estimate.
Next we prove the lower estimate of (\ref{Est:beta}). We may consider $t\geq1$. Also, to derive the lower bound, we can take $N>\beta$. According to the proof of the claim above, there is a constant $\tilde{C}_1=\tilde{C}_1(N,\beta)>0$ such that \begin{align*} (k+N)!(k+N)^{-\beta} &\leq \tilde{C}_1\prod_{l=1}^k\left(l+N-\beta\right)\quad\forall\,k\geq0. \end{align*} Then \begin{align*} \sum_{k=N}^\infty\frac{(\alpha t)^k}{k!}k^\beta&\gtrsim(\alpha t)^N\sum_{k=0}^\infty\frac{k!}{(1+N-\beta)(2+N-\beta)\cdots(k+N-\beta)}\frac{(\alpha t)^k}{k!}\\ &=(\alpha t)^N\sum_{k=0}^\infty\frac{(1)_k}{(1+N-\beta)_k}\frac{(\alpha t)^k}{k!}, \end{align*} and so, by the asymptotic behavior of Kummer's confluent hypergeometric function of the first kind once again, we find that \[ \sum_{k=N}^\infty\frac{(\alpha t)^k}{k!}k^\beta\gtrsim(\alpha t)^\beta e^{\alpha t} \] as needed. \qquad\mbox{\qed}
\end{proof}
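A direct numerical check of (\ref{Est:beta}) is straightforward (illustrative only; the parameter values are arbitrary choices). Summing the series term-by-term in log-space avoids overflow of $(\alpha t)^k$ and $k!$:

```python
import math

def series(alpha, beta, t, N=1, terms=4000):
    # sum_{k>=N} (alpha t)^k / k! * k^beta, each term assembled in log-space
    total = 0.0
    log_at = math.log(alpha * t)
    for k in range(N, terms):
        total += math.exp(k * log_at - math.lgamma(k + 1) + beta * math.log(k))
    return total

alpha, beta, t = 0.7, -1.5, 300.0
predicted = (alpha * t) ** beta * math.exp(alpha * t)
ratio = series(alpha, beta, t) / predicted
assert 0.8 < ratio < 1.25
```

As the proof via Kummer's function suggests, the ratio tends to $1$ as $t\to\infty$, not merely staying between two constants.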
\begin{remark}
In the special case that $\alpha=1$, $N=0$, and $\beta=n$ is a positive integer, the summation \[ B_n(t)=e^{-t}\sum_{k=0}^\infty\frac{t^k}{k!}k^n \] is the Bell polynomial, which is a polynomial in $t$ of degree $n$.
\end{remark}
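These Bell (Touchard) polynomial identities are easy to confirm numerically; $B_n(t)$ is the $n$-th moment of a Poisson random variable with mean $t$ (the first three polynomials below are the standard ones):

```python
import math

def bell_moment(n, t, terms=120):
    # B_n(t) = e^{-t} * sum_{k>=0} t^k/k! * k^n  (n-th moment of Poisson(t))
    total, term = 0.0, 1.0   # term = t^k / k!
    for k in range(terms):
        total += term * k ** n
        term *= t / (k + 1)
    return math.exp(-t) * total

# Touchard/Bell polynomials: B_1(t) = t, B_2(t) = t^2 + t, B_3(t) = t^3 + 3t^2 + t
t = 3.0
assert abs(bell_moment(1, t) - t) < 1e-9
assert abs(bell_moment(2, t) - (t * t + t)) < 1e-9
assert abs(bell_moment(3, t) - (t ** 3 + 3 * t * t + t)) < 1e-8
```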
Now we study the case that the exponential type series (\ref{Series:Rk}) has \[ R_k=R(k)\quad\mbox{where $R(s)$ is a regular varying function}. \] See Definition \ref{Def:RegSlow} and Lemma \ref{Lem:Slow}.
The following result is a generalization of Theorem \ref{Thm:Kummer}, though, its proof relies crucially on the result of the preceding theorem.
\begin{theorem}\label{Thm:Rk}
Let $N\in\mathbb{N}$, $\alpha>0$, and $R$ be a regular varying function with index $\beta\in\mathbb{R}$. Let $R_k=R(k)$ for any positive integer $k$. Then there are constants $C,t_0>0$ such that \begin{align}\label{Est:Rk} \frac{1}{C}R(\alpha t)e^{\alpha t}\leq\sum_{k=N}^\infty\frac{(\alpha t)^k}{k!}R_k\leq CR(\alpha t)e^{\alpha t}\quad\mbox{for all $t\geq t_0$}. \end{align} In particular, \[ \sum_{k=N}^\infty\frac{(\alpha t)^k}{k!}R_k\asymp R(\alpha t)e^{\alpha t}\quad\mbox{as $t\to\infty$}. \]
\end{theorem}
\begin{proof} Let us split the series into \begin{align*} \sum_{N\leq k<t}\frac{(\alpha t)^k}{k!}R_k+\sum_{k\geq t}\frac{(\alpha t)^k}{k!}R_k=:\mathcal{S}_{1}+\mathcal{S}_{2}. \end{align*} We will use Lemma \ref{Lem:Slow} to prove the upper bound. Let $R(s)=s^{\beta}L(s)$ where $L$ is a slowly varying function and $\beta\in\mathbb{R}$. Take $\varepsilon>0$. Then we have \begin{align*} \mathcal{S}_1&=\sum_{N\leq k<t}\frac{(\alpha t)^k}{k!}k^{\beta-\varepsilon}\cdot(k^{\varepsilon}L(k))\\ &\leq\sup_{N\leq k\leq t}k^{\varepsilon}L(k)\sum_{N\leq k<t}\frac{(\alpha t)^k}{k!}k^{\beta-\varepsilon}. \end{align*} By Lemma \ref{Lem:Slow} we have that \begin{align*} \sup_{N\leq k\leq t}k^{\varepsilon}L(k)\sim t^\varepsilon L(t)\quad\mbox{as $t\to\infty$}, \end{align*} so we get by Theorem \ref{Thm:Kummer} that \begin{align*} \mathcal{S}_1&\lesssim t^{\varepsilon}L(t)\sum_{N\leq k<t}\frac{(\alpha t)^k}{k!}k^{\beta-\varepsilon}\lesssim t^{\varepsilon}L(t)(\alpha t)^{\beta-\varepsilon}e^{\alpha t}\\
&\lesssim\alpha^{-\varepsilon}(\alpha t)^\beta L(\alpha t)e^{\alpha t}=\alpha^{-\varepsilon}R(\alpha t)e^{\alpha t}, \end{align*} as $t\to\infty$. Here, in the last inequality, we have used that $L$ is slowly varying, hence \[ L(\alpha t)\sim L(t)\quad\mbox{as $t\to\infty$}. \] Note that we may take $\varepsilon=\alpha$ so that $\alpha^{-\varepsilon}$ is bounded by a constant independent of $\alpha>0$.
Next we establish the upper bound of $\mathcal{S}_2$. As $t\to\infty$, we have by Lemma \ref{Lem:Slow} that \[ \sup_{k\geq t}k^{-\varepsilon}L(k)\sim t^{-\varepsilon}L(t), \] hence there are constants $C,t_0>0$ such that \[ \sup_{k\geq t}k^{-\varepsilon}L(k)\leq Ct^{-\varepsilon}L(t)\quad\mbox{for all $t\geq t_0$}. \] Then we have, for $t\geq t_0$, that \begin{align*} \mathcal{S}_2&=\sum_{k\geq t}\frac{(\alpha t)^k}{k!}k^{\beta+\varepsilon}\cdot(k^{-\varepsilon}L(k))\leq\sup_{k\geq t}k^{-\varepsilon}L(k)\sum_{k\geq t}\frac{(\alpha t)^k}{k!}k^{\beta+\varepsilon}\\ &\leq Ct^{-\varepsilon}L(t)\sum_{k\geq t}\frac{(\alpha t)^k}{k!}k^{\beta+\varepsilon}\leq Ct^{-\varepsilon}L(t)(\alpha t)^{\beta+\varepsilon}e^{\alpha t}\\ &\leq C\alpha^{\varepsilon}(\alpha t)^\beta L(\alpha t)e^{\alpha t}=C\alpha^\varepsilon R(\alpha t)e^{\alpha t}. \end{align*} Combining the upper estimates for $\mathcal{S}_1,\mathcal{S}_2$, we obtain the upper estimate in (\ref{Est:Rk}).
It remains to prove the lower estimate in (\ref{Est:Rk}). Clearly, \[ \sum_{k=N}^\infty\frac{(\alpha t)^k}{k!}R_k\geq\mathcal{S}_1, \] so it suffices to establish the lower bound of $\mathcal{S}_1$. We use Lemma \ref{Lem:Slow2}. Consider \begin{align*} \mathcal{S}_1&=\sum_{N\leq k<t}\frac{(\alpha t)^k}{k!}k^{\beta+\varepsilon}\cdot(k^{-\varepsilon}L(k))\\ &\geq\inf_{N\leq k\leq t}k^{-\varepsilon}L(k)\sum_{N\leq k<t}\frac{(\alpha t)^k}{k!}k^{\beta+\varepsilon}\\ &\gtrsim t^{-\varepsilon}L(t)\sum_{N\leq k<t}\frac{(\alpha t)^k}{k!}k^{\beta+\varepsilon}\quad(\mbox{by Lemma \ref{Lem:Slow2}})\\ &\gtrsim t^{-\varepsilon}L(t)(\alpha t)^{\beta+\varepsilon}e^{\alpha t}, \end{align*} where, in the last inequality, we have used that \[ \sum_{N\leq k<t}\frac{(\alpha t)^k}{k!}k^{\beta+\varepsilon}\sim\sum_{k=N}^\infty\frac{(\alpha t)^k}{k!}k^{\beta+\varepsilon}\gtrsim(\alpha t)^{\beta+\varepsilon}e^{\alpha t}\quad\mbox{as $t\to\infty$, by Theorem \ref{Thm:Kummer}}. \] Finally, using the slow variation of $L$, we obtain the desired lower estimate for $\mathcal{S}_1$.\qquad\mbox{\qed}
\end{proof}
\begin{remark}
For the case that $R$ is regularly varying with index $\beta\leq0$, the preceding result was established in \cite{BinghamEtal83} using a probabilistic argument. Here we use an analytic argument and obtain the case $\beta>0$ as well.
\end{remark}
\begin{example} There are many regularly varying sequences, so by applying the preceding theorem we obtain the following conclusions. \begin{itemize} \item[(1)] For $L(s)=(\ln s)^\mu$, which clearly satisfies $\lim_{s\to\infty}L(\lambda s)/L(s)=1$ for all $\lambda>0$, we have \begin{align} \sum_{k=N}^\infty\frac{(\alpha t)^k}{k!}k^\beta(\ln k)^\mu\asymp (\alpha t)^\beta(\ln(\alpha t))^\mu e^{\alpha t}\quad\mbox{as $t\to\infty$}, \end{align} where $N\in\mathbb{N}$, $\alpha>0$, and $\beta,\mu\in\mathbb{R}$. \item[(2)] More generally, $L(s)=(\ln s)^{\mu_1}\cdots(\ln_ms)^{\mu_m}$ ($\ln_j=\ln\circ\cdots\circ\ln$, $j$ terms) is slowly varying, so we get \begin{align} \sum_{k=N}^\infty\frac{(\alpha t)^k}{k!}k^\beta(\ln k)^{\mu_1}\cdots(\ln_mk)^{\mu_m}\asymp(\alpha t)^\beta(\ln (\alpha t))^{\mu_1}\cdots(\ln_m(\alpha t))^{\mu_m}\quad\mbox{as $t\to\infty$}, \end{align} where $N\in\mathbb{N},\alpha>0$, and $\beta,\mu_1,\ldots,\mu_m\in\mathbb{R}$. \item[(3)] (Non-logarithmic slowly varying functions). One can show (see \cite{Bingham89}) that \[ L(s)=\exp\left((\ln s)^{\mu_1}\cdots(\ln_ms)^{\mu_m}\right) \] is slowly varying for any $\mu_1<1$ and $\mu_2,\ldots,\mu_m\in\mathbb{R}$. So we get \begin{align*} \sum_{k=N}^\infty\frac{(\alpha t)^k}{k!}k^\beta\exp\left((\ln k)^{\mu_1}\cdots(\ln_mk)^{\mu_m}\right)\asymp (\alpha t)^\beta\exp\left((\ln(\alpha t))^{\mu_1}\cdots(\ln_m(\alpha t))^{\mu_m}\right)\quad\mbox{as $t\to\infty$}, \end{align*} for all $N\in\mathbb{N},\alpha>0$, $\beta\in\mathbb{R}$, $\mu_1<1$, and $\mu_2,\ldots,\mu_m\in\mathbb{R}$. \item[(4)] There are also oscillating slowly varying functions (see \cite{Bingham89}), e.g.\ \[ L(s)=\exp\left((\ln s)^{1/3}\cos\left((\ln s)^{1/3}\right)\right). \] \item[(5)] By Karamata's representation theorem, $L$ is slowly varying if and only if there are measurable functions $c(s),\varepsilon(s)$ with $\lim_{s\to\infty}c(s)=c_0>0$ and $\lim_{s\to\infty}\varepsilon(s)=0$ such that \[ L(s)=c(s)\exp\left(\int_{s_0}^s\frac{\varepsilon(\tau)}{\tau}d\tau\right). \] \end{itemize}
\end{example}
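A quick numerical illustration of conclusion (1) above (a Python sketch, not part of the argument; the parameter values $\alpha=0.7$, $\beta=1.5$, $\mu=2$ are arbitrary choices): after multiplying by $e^{-\alpha t}$, the weighted sum is the expectation of $k^\beta(\ln k)^\mu$ under a Poisson distribution with mean $\alpha t$, which concentrates near $\alpha t$.

```python
import math

def weighted_sum(m, beta, mu, N=2):
    # e^{-m} * sum_{k>=N} m^k/k! * k^beta * (ln k)^mu; the Poisson(m) weights
    # are negligible outside m +/- 20*sqrt(m), so the sum is truncated there.
    lo = max(N, int(m - 20 * math.sqrt(m)))
    hi = int(m + 20 * math.sqrt(m))
    s = 0.0
    for k in range(lo, hi + 1):
        logw = k * math.log(m) - math.lgamma(k + 1) - m  # Poisson(m) log-pmf
        s += math.exp(logw) * k**beta * math.log(k)**mu
    return s

alpha, beta, mu = 0.7, 1.5, 2.0
for t in (1e3, 1e4, 1e5):
    m = alpha * t
    ratio = weighted_sum(m, beta, mu) / (m**beta * math.log(m)**mu)
    print(t, ratio)  # ratios tend to 1, matching the claimed asymptotics
```

The printed ratios confirm that the sum grows like $(\alpha t)^\beta(\ln(\alpha t))^\mu e^{\alpha t}$.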
\begin{remark} Using the result of Theorem \ref{Thm:Rk}, we will be able to give examples of nonlocal diffusion equations whose solutions decay at the rate of an arbitrary regularly varying function. \end{remark}
We conclude this section with the following sufficient condition on the kernel $J$ of (\ref{Eqn:main}) ensuring that solutions decay at the rate of a regularly varying function. The result is true not only for radially symmetric nonlocal equations but also for non-symmetric ones. Examples of equations satisfying the hypotheses of this theorem will be presented in a later section. For simplicity of presentation, we take $\chi_0=1$ in (\ref{Eqn:main}).
\begin{theorem}\label{Thm:GenDecay}
Let $\chi_0=1$ in (\ref{Eqn:main}) and let $R(t)=t^{\beta}L(t)$ be a regularly varying function with index $\beta\in\mathbb{R}$. Assume there is a positive integer $N\in\mathbb{N}$ such that either (i) $u_0\in L^1(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^n)$ and \begin{align}\label{Hyp:H2}\tag{H1} \begin{cases} \displaystyle
\sup_{x\in\mathbb{R}^n}\int_{\mathbb{R}^n}|J(x,y)|dy<\infty, \\
\\ \displaystyle
|J_k(x,y)|\leq R_k\quad\forall\,x,y\in\mathbb{R}^n, k=N,N+1,\ldots, \end{cases} \end{align} where $R_k:=R(k)$, or (ii) there is $1\leq p\leq\infty$ such that \begin{align}\label{Hyp:H3}\tag{H2} \begin{cases} \displaystyle \mathcal{J}_ku_0\in L^p(\mathbb{R}^n)&k=0,1,\ldots,N-1, \\
\\ \displaystyle
\|\mathcal{J}_ku_0\|_{L^p}\leq R_k&k=N,N+1,\ldots \end{cases} \end{align} Then the solution $u(t)$ of (\ref{Eqn:main}) satisfies \begin{align*}
\|u(t)\|_{L^q}\lesssim t^{\beta}L(t)\quad\mbox{as $t\to\infty$}, \end{align*} where $q=\infty$ in case (i) and $q=p$ in case (ii).
\end{theorem}
\begin{proof} We split the solution into \begin{align*} u(t)&=\mathcal{G}(t)u_0=e^{-t}\sum_{0\leq k<N}\frac{t^k}{k!}\mathcal{J}_ku_0+e^{-t}\sum_{k\geq N}\frac{t^k}{k!}\mathcal{J}_ku_0=:\mathcal{S}_1+\mathcal{S}_2. \end{align*} See Definition \ref{Def:SolGreen}.
Assume (i). By the first part of (\ref{Hyp:H2}) and Fubini's theorem, we get that \[
\int_{\mathbb{R}^n}|J_k(x,y)|dy\leq\int_{\mathbb{R}^n}\int_{(\mathbb{R}^n)^{k-1}}|J(x,y_1)J(y_1,y_2)\cdots J(y_{k-1},y)|dy_{k-1}\cdots dy_1dy\leq M^k \]
where $M=\sup_{x}\int_{\mathbb{R}^n}|J(x,y)|dy<\infty$. The first term $\mathcal{S}_1$ can be estimated by \begin{align*}
|\mathcal{S}_1|&\leq e^{-t}\sum_{0\leq k<N}\frac{t^k}{k!}\int_{\mathbb{R}^n}|J_k(x,y)u_0(y)|dy\\
&\leq e^{-t}\|u_0\|_{L^\infty}\left\{\max_{1\leq k\leq N-1}M^k\right\}\sum_{0\leq k<N}\frac{t^k}{k!}\\
&\leq C_{M,N}\|u_0\|_{L^\infty}e^{-t}\sum_{0\leq k<N}\frac{t^k}{k!}\\
&\lesssim\|u_0\|_{L^\infty}t^{\beta}L(t), \end{align*} as $t\to\infty$, since $e^{-t}\sum_{0\leq k<N}t^k/k!$ decays faster than any power of $t$. For the second term $\mathcal{S}_2$, we use the second part of the hypothesis (\ref{Hyp:H2}) and apply Theorem \ref{Thm:Rk} to get that \begin{align*}
|\mathcal{S}_2|&\leq e^{-t}\sum_{k\geq N}\frac{t^k}{k!}\int_{\mathbb{R}^n}|J_k(x,y)u_0(y)|dy\\
&\leq e^{-t}\|u_0\|_{L^1}\sum_{k\geq N}\frac{t^k}{k!}R_k\\
&\lesssim e^{-t}\|u_0\|_{L^1}R(t)e^{t}=\|u_0\|_{L^1}t^{\beta}L(t), \end{align*} as $t\to\infty$. Combining the estimates for $\mathcal{S}_1$ and $\mathcal{S}_2$, we get the desired estimate.
Now assume (ii). Again, we split $u(t)=\mathcal{S}_1+\mathcal{S}_2$. For $\mathcal{S}_1$, we apply the triangle inequality to get \begin{align*}
\|\mathcal{S}_1\|_{L^p}&\leq e^{-t}\sum_{0\leq k<N}\frac{t^k}{k!}\|\mathcal{J}_ku_0\|_{L^p}\\
&\leq\left(\max_{0\leq k\leq N-1}\|\mathcal{J}_ku_0\|_{L^p}\right)e^{-t}\sum_{0\leq k<N}\frac{t^k}{k!}\\ &\leq Ct^\beta L(t) \end{align*} as $t\to\infty$. For the second term, we use \begin{align*}
\|\mathcal{S}_2\|_{L^p}&\leq e^{-t}\sum_{k\geq N}\frac{t^k}{k!}\|\mathcal{J}_ku_0\|_{L^p}\\ &\leq e^{-t}\sum_{k\geq N}\frac{t^k}{k!}R_k\\ &\leq Ce^{-t}R(t)e^t=Ct^\beta L(t) \end{align*} by Theorem \ref{Thm:Rk}. Combining both estimates, we conclude the assertion for (ii).\qquad\mbox{\qed}
\end{proof}
\section{Integral operators with higher integrability}
In this section we apply results from the previous section to study the asymptotic behavior of solutions to the nonlocal equation (\ref{Eqn:main}) if the kernel $J$ is a radially symmetric $L^1$ function: \begin{align}\label{Hyp:alpha2}\tag{H3}
J=J(x-y),\quad\int_{\mathbb{R}^n}|J(x)|dx=1. \end{align} Although the decay estimate derived in this section is by now a classical result, our way of obtaining it provides an alternative point of view. Additionally, the bound on the kernels $J_k$ obtained in Proposition \ref{Prop:EstJk} is new and could be useful in other disciplines.
Before discussing our next main result, let us briefly recall the following basic fact.
\begin{lemma}[\cite{Feller71},\cite{Caravenna12}]
Let $f\in L^1(\mathbb{R}^n)$ and let $f_k:=f\ast\cdots\ast f$ denote the $k$-fold convolution of $f$. \begin{itemize} \item[(i)] $f_k\in L^\infty(\mathbb{R}^n)$ for some $k$ if and only if $\widehat{f}\in L^q(\mathbb{R}^n)$ for some $1\leq q<\infty$. \item[(ii)] If $f_N\in L^\infty(\mathbb{R}^n)$ for some $N$ then $f_k\in L^\infty(\mathbb{R}^n)$ for all $k\geq N$. \item[(iii)] If $f\in L^{1+\varepsilon_0}(\mathbb{R}^n)$ for some $\varepsilon_0>0$, then $f_k\in L^\infty(\mathbb{R}^n)$ for all $k$ large enough. \end{itemize}
\end{lemma}
\begin{proof}
(ii) is obvious. The proofs of (i) and (iii) can be found in \cite{Caravenna12,Feller71}. We will present a more precise assertion (Proposition \ref{Prop:EstJk}) than (iii), which also provides the bound $\|f_k\|_{L^\infty}\lesssim k^{-n/2}$ on the sup-norm. \qquad\mbox{\qed} \end{proof}
Now we present our next main result. The proof uses the Brascamp-Lieb inequality (or sharp Young's convolution inequality), Lemma \ref{Lem:SharpYoung}. We note that $J$ may be real- or complex-valued.
\begin{proposition}\label{Prop:EstJk} Assume (\ref{Hyp:alpha2}) and furthermore \begin{align}\label{Hyp:J1epsilon} J\in L^1(\mathbb{R}^n)\cap L^{1+\varepsilon_0}(\mathbb{R}^n)\quad\mbox{for some $0<\varepsilon_0\leq\infty$}. \end{align} Let $N=\lceil\frac{1}{\varepsilon_0}\rceil+1$. If $k\geq N$ then $J_k\in L^\infty(\mathbb{R}^n)\cap C(\mathbb{R}^n)$ and there are constants $C_n,\gamma>0$ such that \begin{align}\label{IntegralCond}
\|J_k\|_{L^\infty}\leq C_{n}\exp\left(\gamma\int_{\mathbb{R}^n}|J(x)|\ln|J(x)|dx\right)k^{-n/2}\quad\forall\,k\geq N. \end{align}
\end{proposition}
\begin{proof} Without loss of generality, we can assume $J\geq0$. Clearly, $J\in L^p(\mathbb{R}^n)$ for all $1\leq p\leq1+\varepsilon_0$ by interpolation. We apply the Brascamp-Lieb (or sharp Young) inequalities (see Lemma \ref{Lem:SharpYoung}). Take $k\geq N=\lceil\frac{1}{\varepsilon_0}\rceil+1$, and put \[ p_1=\cdots=p_k=\frac{k}{k-1},\quad r=\infty,\quad f_1=\cdots=f_k=J \] in the Brascamp-Lieb inequality. Note that $f_l=J\in L^{p_l}(\mathbb{R}^n)$ for all $l$. For each $p_l$, the H\"older conjugate is $q_l=k$. We calculate \begin{align*} C_{p_l}^k&=\left(\frac{(k/(k-1))^{(k-1)/k}}{k^{1/k}}\right)^{k/2} \\ &=\frac{1}{k^{1/2}}\left(1+\frac{1}{k-1}\right)^{(k-1)/2}\\ &\leq\frac{\sqrt{e}}{k^{1/2}}. \end{align*} Now we have $J_k(x)=f_1\ast\cdots\ast f_k$. So we obtain by Lemma \ref{Lem:SharpYoung} and the above calculations that \begin{align*}
\sup_{x\in\mathbb{R}^n}|J_k(x)|&\leq\frac{e^{n/2}}{k^{n/2}}\left(\int_{\mathbb{R}^n}J(x)^{k/(k-1)}dx\right)^{k-1}. \end{align*} The kernels $J_k$ are continuous, being convolutions of functions in conjugate Lebesgue spaces; thus $J_k\in BC(\mathbb{R}^n)$.
Consider the preceding integral as $k\to\infty$. We apply L'H\^{o}pital's rule and the dominated convergence theorem to get \begin{align*} \lim_{k\to\infty}(k-1)\ln\int_{\mathbb{R}^n} J(x)^{k/(k-1)}dx&=\lim_{\lambda\to0^+}\frac{1}{\lambda}\ln\int_{\mathbb{R}^n} J(x)^{\lambda+1}dx\quad(\lambda:=\frac{1}{k-1}),\\ &=\int_{\mathbb{R}^n}J(x)\ln J(x)dx<\infty. \end{align*} So there is a constant $\gamma>0$ such that the estimate \[
\sup_x|J_k(x)|\leq C_n\exp\left(\gamma\int J(x)\ln J(x)dx\right)k^{-n/2} \] is true for all $k\geq N$.\qquad\mbox{\qed} \end{proof}
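The rate $k^{-n/2}$ in the proposition can be observed numerically (a Python/NumPy illustration only, not part of the proof): for $n=1$ and $J$ the indicator of $[-1/2,1/2]$ (so $J\in L^\infty\subset L^{1+\varepsilon_0}$), the $k$-fold convolution $J_k$ is the centred Irwin--Hall density, whose supremum behaves like $\sqrt{6/(\pi k)}$.

```python
import numpy as np

# J = indicator of [-1/2, 1/2] (n = 1): the k-fold convolution J_k is the
# centred Irwin-Hall density, whose sup behaves like sqrt(6/(pi*k)).
dx = 0.01
xs = np.arange(-0.5 + dx/2, 0.5, dx)  # midpoint grid: total mass dx*len(xs) = 1
J = np.ones_like(xs)                  # density value 1 on the support

f = J.copy()
sups = {}
for k in range(2, 41):
    f = np.convolve(f, J) * dx        # continuous convolution on the grid
    sups[k] = f.max()

print(sups[40] * np.sqrt(40), np.sqrt(6 / np.pi))  # both ≈ 1.38
```

The two printed numbers agree to a few parts in a thousand, in line with the $k^{-1/2}$ rate (here $\int J\ln J=0$, so the exponential factor in the bound is $1$).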
\begin{remark} \begin{itemize} \item[(1)] In the preceding proposition, the kernels $J_k$ need not be bounded when $k$ is small. For instance, the Bessel potential operator \[ \mathcal{B}=(1-\triangle)^{-1}\quad\mbox{on $\mathbb{R}^n$ ($n>2$)} \] is known to have the kernel $B\in L^1(\mathbb{R}^n)\cap L^{1+\varepsilon_0}(\mathbb{R}^n)$ for any $\varepsilon_0<\frac{2}{n-2}$ and the $k$-fold iterated kernel \[ B_k\not\in L^\infty(\mathbb{R}^n)\quad\mbox{for $k<\frac{n}{2}$},\quad B_k\in L^\infty(\mathbb{R}^n)\quad\mbox{for all $k\geq\frac{n}{2}$}. \]
More generally, if $\mathcal{K}$ is a weakly singular integral operator, i.e.\ its kernel $K$ satisfies \[
|K(x)|\sim\frac{1}{|x|^{n-\alpha}}\quad\mbox{as $|x|\to0$}\quad(0<\alpha<n), \] $K$ is finite outside the diagonal, and $K$ decays sufficiently fast at infinity, then \[ K_l\not\in L^\infty(\mathbb{R}^n)\quad\mbox{for $l<\frac{n}{\alpha}$},\quad K_l\in L^\infty(\mathbb{R}^n)\quad\mbox{for $l\geq\frac{n}{\alpha}$}. \] The Bessel potential $\mathcal{B}$ is a weakly singular operator having $\alpha=2$ and exponential decay at infinity. \item[(2)] A more precise result compared to Proposition \ref{Prop:EstJk} was derived in \citep{KonMolVain17} (Lemma 5.4), using the \textit{local limit theorem}, but for a rather restricted class of kernel functions (see Eq.\ (32) and (33) in \citep{KonMolVain17}). More precisely, in order to apply the local limit theorem, it was assumed in \citep{KonMolVain17} that the kernel $J$ has \textit{ultra light tail}, i.e.\ \[
|J(x)|,\,\,|\nabla J(x)|\lesssim e^{-|x|^\alpha}\quad\mbox{where $\alpha>1$}. \]
It should be observed that the estimate in Proposition \ref{Prop:EstJk} is uniform, whereas Lemma 5.4 in \cite{KonMolVain17} is true only when $|x|\leq k$. More importantly, the estimate (48) derived in \cite{KonMolVain17} seems weaker in some cases than what we have shown here. \item[(3)] It should be noted that there are kernel functions which do not satisfy the criterion of Proposition \ref{Prop:EstJk}; that is, there are $J\in L^1(\mathbb{R}^n)$ such that \[
\|J\|_{L^{1+\varepsilon}}=\infty\quad\mbox{for all $\varepsilon>0$}. \] A basic example is \[
J(x)=\frac{1}{|x|^n\{1+(\ln|x|)^2\}} \]
for which $\|J\|_{L^1}<\infty$ but $\|J\|_{L^{1+\varepsilon}}=\infty$ for all $\varepsilon>0$. \end{itemize}
\end{remark}
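For the kernel in item (3) of the preceding remark, a quick numerical sanity check in dimension $n=1$ (a Python illustration only): the substitution $u=\ln|x|$ turns $\int_{\mathbb{R}}\frac{dx}{|x|\{1+(\ln|x|)^2\}}$ into $2\int_{-\infty}^{\infty}\frac{du}{1+u^2}=2\pi$, so $\|J\|_{L^1}=2\pi<\infty$, while near $x=0$ the function $J^{1+\varepsilon}$ behaves like $|x|^{-(1+\varepsilon)}$ up to logarithmic factors and is not integrable.

```python
import numpy as np

# n = 1: J(x) = 1/(|x| (1 + (ln|x|)^2)).  With u = ln|x|, dx/|x| = du, so
# ||J||_{L^1} = 2 * Integral_{-inf}^{inf} du/(1+u^2) = 2*pi.
du = 0.01
u = np.arange(-1000.0, 1000.0, du)     # the tail beyond |u| = 1000 is O(1e-3)
mass = 2 * np.sum(du / (1 + u**2))
print(mass, 2 * np.pi)  # both ≈ 6.28
```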
Using Theorem \ref{Thm:GenDecay} and Proposition \ref{Prop:EstJk}, we obtain the following decay property of solutions to (\ref{Eqn:main}).
\begin{theorem}\label{Thm:GreenJsigma}
Assume $J$ satisfies (\ref{Hyp:alpha2}) and (\ref{Hyp:J1epsilon}). If $u_0\in L^1(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^n)$, then the solution $u(t)$ of (\ref{Eqn:main}) satisfies \begin{align}\label{Est:GJsigma1}
\|u(t)\|_{L^\infty}\leq Ct^{-n/2}\quad\forall\,t\geq t_0. \end{align} Moreover, for each $1\leq q\leq\infty$, there is a constant $C>0$ independent of $q$ such that \begin{align}\label{Est:GJsigma3}
\|u(t)\|_{L^q}\leq Ct^{-\frac{n}{2}(1-\frac{1}{q})}\quad\forall\,t\geq t_0. \end{align}
\end{theorem}
\begin{proof} By Proposition \ref{Prop:EstJk}, we have \[
\|J_k\|_{L^\infty}\leq R_k:=Ck^{-n/2}\quad C=C(n,J)>0 \] for all $k\geq N=\lceil 1/\varepsilon_0\rceil+1$. It is now clear that $J$ satisfies (\ref{Hyp:H2}) in Theorem \ref{Thm:GenDecay}. Thus we obtain \[
\|u(t)\|_{L^\infty}\leq Ct^{-n/2}\quad\forall\,t\geq t_0>0. \] For (\ref{Est:GJsigma3}), we simply apply the interpolation \[
\|u(t)\|_{L^q}\leq\|u(t)\|_{L^\infty}^{1-\frac{1}{q}}\|u(t)\|_{L^1}^{\frac{1}{q}} \] and Lemma \ref{Lem:JkL1}.\qquad\mbox{\qed}
\end{proof}
If $J\in L^1(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^n)$, e.g.\ if $J$ is continuous with compact support, then the result in the preceding theorem can be strengthened: in this case, for any $u_0\in L^1(\mathbb{R}^n)$ (not necessarily in $L^\infty(\mathbb{R}^n)$), the solution of (\ref{Eqn:main}) satisfies \[
\|u(t)-e^{-t}u_0\|_{L^\infty}\lesssim t^{-n/2}\quad\mbox{as $t\to\infty$}. \] This refined asymptotic behavior was obtained in \cite{IgnatEtal08}.
Moreover, we have the following refined asymptotic behavior.
\begin{corollary}
Assume $J$ satisfies (\ref{Hyp:alpha2}) and (\ref{Hyp:J1epsilon}). Then for any $u_0\in L^1(\mathbb{R}^n)$, the solution of (\ref{Eqn:main}) satisfies \[
\left\|u(t)-e^{-t}\sum_{k=0}^{N-1}\frac{t^k}{k!}\mathcal{J}^ku_0\right\|_{L^\infty}\lesssim t^{-n/2}\quad\mbox{as $t\to\infty$}, \] where $N=\lceil\frac{1}{\varepsilon_0}\rceil+1$.
\end{corollary}
\begin{remark}\label{Rem:Stable}
In \citep{ChasseigneEtal06}, the nonlocal problem (\ref{Eqn:main}) was investigated with $\chi_0=\|J\|_{L^1}=1$. Assuming the kernel $J$ has the Fourier transform expansion \begin{align}\label{Fourier:J1}
\widehat{J}(\xi)=1-A|\xi|^\sigma+o(|\xi|^\sigma) \quad\mbox{as $\xi\to0$}, \end{align} and the initial function $u_0,\widehat{u}_0\in L^1(\mathbb{R}^n)$, the authors were able to prove the decay estimate \[
\|\mathcal{G}(t)u_0\|_{L^\infty}\leq Ct^{-n/\sigma}. \] This kind of kernel function arises in the context of \textit{stable laws} with index $\sigma$, and it was remarked in \cite{Feller71} (the remark after Theorem 2, XV.\ 5) that if $J$ is a stable law with index $0<\sigma<2$, then necessarily \[
\|J_k\|_{L^\infty}=\infty\quad\mbox{for all $k$}, \]
since otherwise, the pointwise bound should be $t^{-n/2}$ (the normal distributions). We will address the issue that $\|J_k\|_{L^\infty}=\infty$ for all $k$ in Section \ref{Subsec:PtStable}.
As noted in \cite{Alfaro17}, the expansion (\ref{Fourier:J1}) holds true for $J\geq0$ bounded and radially symmetric with $\|J\|_{L^1}=1$, whose algebraic tail is \begin{align*}
J(x)\sim\frac{1}{|x|^{n+2+\varepsilon}}\quad\varepsilon>0\,\,(\sigma=2),\quad J(x)\sim\frac{1}{|x|^\alpha}\quad n<\alpha<n+2\,\,(\sigma=\alpha-n\in(0,2)), \end{align*}
as $|x|\to\infty$. For the second type of tail, the second moment is infinite. At the critical algebraic tail \[
J(x)\sim\frac{1}{|x|^{n+2}}\quad\mbox{as $|x|\to\infty$}, \] it follows that the Fourier expansion of $J$ behaves as \[
\widehat{J}(\xi)=1-A|\xi|^2\ln(1/|\xi|)+o(|\xi|^2\ln(1/|\xi|))\quad\mbox{as $|\xi|\to0$}. \]
\end{remark}
\begin{remark}
It is not difficult to see, by chasing through the proofs of the results in this section, that if \[
\chi_0=\|J\|_{L^1}, \]
then the same decay of solutions still holds under the hypotheses (\ref{Hyp:alpha2}) and (\ref{Hyp:J1epsilon}). On the other hand, if $\chi_0>\|J\|_{L^1}$, the solution decays exponentially instead. This partially answers an open question raised in \cite{ChasseigneEtal06} about the asymptotic behavior of solutions to nonlocal diffusion equations when $\chi_0=1,\|J\|_{L^1}\neq1$. \end{remark}
\begin{remark}
In \cite{ChasseigneEtal14}, a fractional decay estimate similar to our result was proved. Under the assumption that $J\in C(\mathbb{R}^n)$ and $J\gtrsim|x-y|^{-(n+2\sigma)}$ for $|x-y|$ large, they obtained the asymptotic behavior \[
\|u(t)\|_{L^q}\leq C_qt^{-\frac{n}{2\sigma}(1-\frac{1}{q})} \] as $t\to\infty$. This estimate is true for $0<\sigma<1$ and $1\leq q<\infty$. More importantly, the constant $C_q$ depends on $q$. See also \citep{IgnatRossi09}.
\end{remark}
\section{Integral operators of stable laws}\label{Subsec:PtStable}
In this section, we study the nonlocal equation (\ref{Eqn:main}) when the kernel $J\geq0$ is radially symmetric and represents a stable law with index $0<\sigma<2$. For simplicity, we assume $\chi_0=\|J\|_{L^1}=1$. The decay estimates of solutions obtained in \cite{ChasseigneEtal06} will be reproved and generalized from the power series point of view (see Corollary \ref{Cor:DecaySigma}, and Theorem \ref{Thm:DecayLn} for the logarithmic perturbation case). In the next section, the results will be further generalized to obtain solutions of (\ref{Eqn:main}) decaying at the rate of a regularly varying function.
As in Remark \ref{Rem:Stable}, for stable laws $\|J_k\|_{L^\infty}=\infty$ for all $k$, so Proposition \ref{Prop:EstJk} does not apply. To deal with this situation, it is necessary to impose a certain integrability property on the initial condition $u_0$ so that $\mathcal{J}_ku_0\in L^p(\mathbb{R}^n)$ ($1\leq p\leq\infty$) for all $k$. This is essentially the key idea in deriving the decay estimate (but with $p=\infty$) in \cite{ChasseigneEtal06} via the Fourier splitting technique.
We show that $\|\mathcal{J}_ku_0\|_{L^p}$ is regularly varying in $k$, and the decay estimate then follows directly from Theorem \ref{Thm:Rk}.
Let us state the assumption for the following theorem: \begin{align}\tag{H4}\label{Fourier:Jsigma} \begin{cases}
\displaystyle J=J(|x|)\geq0,\quad\chi_0=\|J\|_{L^1}=1,\\
\\ \displaystyle
\widehat{J}(\xi)=1-A|\xi|^\sigma+o(|\xi|^\sigma)\,\,\mbox{as $\xi\to0$},\\
\\ \displaystyle \hspace{2cm}\mbox{where $0<\sigma\leq2,A>0$,}\\
\\ \displaystyle u_0\in L^1(\mathbb{R}^n),\,\,\widehat{u}_0\in L^1(\mathbb{R}^n). \end{cases} \end{align} Note that the assumption $\widehat{u}_0\in L^1(\mathbb{R}^n)$ implies $u_0\in L^\infty(\mathbb{R}^n)$ by the Fourier inversion formula. Such a kernel $J$ is also known as a dispersal kernel \cite{Alfaro17}.
\begin{theorem}\label{Thm:StableJk}
Assume (\ref{Fourier:Jsigma}). Let $1\leq p\leq\infty$. Then, \[ \mathcal{J}_ku_0\in L^{p}(\mathbb{R}^n)\quad\mbox{for all $k$}, \] and, furthermore, there is a positive integer $N=N(n,J)$ such that \begin{align}\label{Est1:ThmStableJk}
\|\mathcal{J}_ku_0\|_{L^{p}}\leq C(\|u_0\|_{L^1}+\|\widehat{u}_0\|_{L^1})k^{-\frac{n}{\sigma}(1-\frac{1}{p})}\quad\forall\,k\geq N, \end{align} where $C>0$ is a constant depending only on $n,J$.
\end{theorem}
\begin{proof} It suffices to establish the case $p=\infty$. In fact, after doing so, we simply apply the interpolation inequality \[
\|\phi\|_{L^p}\leq\|\phi\|_{L^1}^{1/p}\|\phi\|_{L^{\infty}}^{1-1/p}, \]
together with the preservation of $L^1$ norms (Lemma \ref{Lem:JkL1}): $\|\mathcal{J}_ku_0\|_{L^1}\leq\|u_0\|_{L^1}$.
Observe that $\widehat{J}_k=(\widehat{J})^k\in L^\infty(\mathbb{R}^n)$ with $|\widehat{J}_k(\xi)|\leq\|J\|_{L^1}^k=1$. By (\ref{Fourier:Jsigma}) and the Riemann-Lebesgue lemma, there are positive constants $\varrho_0,\delta,D$ (depending only on $J$) such that \begin{align*} \begin{cases}
|\widehat{J}(\xi)|\leq1-D|\xi|^\sigma&\mbox{for $|\xi|\leq \varrho_0$},\\
\\
|\widehat{J}(\xi)|\leq1-\delta&\mbox{for $|\xi|>\varrho_0$}. \end{cases} \end{align*} Observe that $\widehat{\mathcal{J}_ku_0}=\widehat{J}_k\widehat{u}_0\in L^1(\mathbb{R}^n)$, hence $\mathcal{J}_ku_0\in L^\infty(\mathbb{R}^n)$.
For $|\xi|\leq \varrho_0\theta_k$, where $\theta_k:=k^{1/\sigma}$, we have $|\xi|/\theta_k\leq\varrho_0$ so \begin{align*}
\left|\widehat{J}_k\left(\frac{\xi}{\theta_k}\right)\right|&=\left|\widehat{J}\left(\frac{\xi}{\theta_k}\right)\right|^{k}\\
&\leq\left(1-D\frac{|\xi|^\sigma}{k}\right)^{k}\\
&\leq e^{-D|\xi|^{\sigma}}, \end{align*} where we have used the elementary inequality \begin{align}\label{Tool:linexp} 1-Dx/k\leq e^{-Dx/k}\quad\mbox{for all $x\geq0$ and $D,k>0$}. \end{align}
On the other hand, if $|\xi|>\varrho _0\theta_k$ then \[
\left|\widehat{J}_k\left(\frac{\xi}{\theta_k}\right)\right|\leq\left(1-\delta\right)^{k}. \] Since $(k^{-n/\sigma})^{1/k}\to1$ as $k\to\infty$, there is $N>0$ (depending only on $n,J$) such that \[ (1-\delta)^{k}\leq k^{-n/\sigma}=\theta_k^{-n}\quad\mbox{for all $k\geq N$}. \]
Now fix $k\geq N$. We estimate the integral \begin{align*}
\int_{\mathbb{R}^n}\left|\widehat{\mathcal{J}_ku_0}\left(\frac{\xi}{\theta_k}\right)\right|d\xi&=\int_{|\xi|\leq \varrho_0\theta_k}\left|\widehat{\mathcal{J}_ku_0}\left(\frac{\xi}{\theta_k}\right)\right|d\xi+\int_{|\xi|>\varrho _0\theta_k}\left|\widehat{\mathcal{J}_ku_0}\left(\frac{\xi}{\theta_k}\right)\right|d\xi=:I_1+I_2. \end{align*} Then we have \begin{align*}
&I_1=\int_{|\xi|\leq \varrho_0\theta_k}\left|\widehat{J}_k\left(\frac{\xi}{\theta_k}\right)\right|\left|\widehat{u}_0\left(\frac{\xi}{\theta_k}\right)\right|d\xi\\
&\hphantom{I_1}\leq\int_{|\xi|\leq \varrho_0\theta_k}e^{-D|\xi|^\sigma}\|\widehat{u}_0\|_{L^\infty}d\xi\\
&\hphantom{I_1}\leq\|u_0\|_{L^1}\int_{\mathbb{R}^n}e^{-D|\xi|^\sigma}d\xi=:C_1\|u_0\|_{L^1}<\infty, \end{align*} and \begin{align*}
&I_2\leq\int_{|\xi|>\varrho_0\theta_k}(1-\delta)^{k}\left|\widehat{u}_0\left(\frac{\xi}{\theta_k}\right)\right|d\xi\\
&\hphantom{I_2}\leq\int_{\mathbb{R}^n}\theta_k^{-n}\left|\widehat{u}_0\left(\frac{\xi}{\theta_k}\right)\right|d\xi\\
&\hphantom{I_2}=\|\widehat{u}_0\|_{L^1}<\infty, \end{align*} where the last equality follows from the change of variables $\xi\mapsto\theta_k\xi$. Combining the estimates for $I_1,I_2$, we get that \[
\int_{\mathbb{R}^n}\left|\widehat{\mathcal{J}_ku_0}\left(\frac{\xi}{\theta_k}\right)\right|d\xi\leq C(\|u_0\|_{L^1}+\|\widehat{u}_0\|_{L^1}), \] for some constant $C>0$ depending only on $n,J$.
By the Fourier inversion formula we have \begin{align*}
k^{n/\sigma}\|\mathcal{J}_ku_0\|_{L^\infty}&\leq C_{n}\theta_k^n\|\widehat{\mathcal{J}_ku_0}\|_{L^1}\\
&=C_{n}\int_{\mathbb{R}^n}\left|\widehat{\mathcal{J}_ku_0}\left(\frac{\xi}{\theta_k}\right)\right|d\xi\\
&\leq C_{n,J}(\|u_0\|_{L^1}+\|\widehat{u}_0\|_{L^1}), \end{align*} therefore we obtain that \[
\|\mathcal{J}_ku_0\|_{L^\infty}\leq C_{n,J}(\|u_0\|_{L^1}+\|\widehat{u}_0\|_{L^1})k^{-n/\sigma}\quad\forall\,k\geq N. \] This completes the proof of the theorem.\qquad\mbox{\qed}
\end{proof}
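A concrete instance (a Python illustration, not part of the proof): for $n=1$, the Cauchy kernel $J(x)=\frac{1}{\pi(1+x^2)}$ has $\widehat{J}(\xi)=e^{-|\xi|}=1-|\xi|+o(|\xi|)$, i.e.\ $\sigma=1$ in (\ref{Fourier:Jsigma}). Here $J_k$ is again a Cauchy density with scale $k$, and Fourier inversion gives $\sup_xJ_k(x)=J_k(0)=\frac{1}{2\pi}\int e^{-k|\xi|}d\xi=\frac{1}{\pi k}$, exactly the rate $k^{-n/\sigma}$.

```python
import numpy as np

# Cauchy kernel (n = 1, sigma = 1): J_k has Fourier transform exp(-k|xi|),
# so sup J_k = J_k(0) = (1/2pi) * Integral exp(-k|xi|) d xi = 1/(pi*k).
dxi = 1e-4
for k in (10, 50, 100):
    xi = np.arange(-5.0, 5.0, dxi)   # exp(-k|xi|) is negligible beyond |xi| = 5
    sup_Jk = np.sum(np.exp(-k * np.abs(xi))) * dxi / (2 * np.pi)
    print(k, sup_Jk * np.pi * k)  # ≈ 1 for each k, i.e. decay exactly k^{-n/sigma}
```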
\begin{remark}
In the case $\sigma=2$, the estimate (\ref{Est1:ThmStableJk}) can be sharpened, with the dependence on the initial condition on the right-hand side removed. As was observed in \cite{ChasseigneEtal06}, if the Fourier transform of $J$ satisfies the second part of (\ref{Fourier:Jsigma}) with $\sigma=2$, then $J$ has a finite second moment, so the local limit theorem can be applied to get the sharper estimate.
\end{remark}
The following result was proved in \citep{ChasseigneEtal06}.
\begin{corollary}\label{Cor:DecaySigma}
Assume $J$ and $u_0$ satisfy (\ref{Fourier:Jsigma}). Then the solution $u(t)$ of (\ref{Eqn:main}) satisfies $u(t)\in L^p(\mathbb{R}^n)$ for all $t\geq0$, for any $1\leq p\leq\infty$. Furthermore, \[
\|u(t)\|_{L^p}\leq C\left(\|u_0\|_{L^1}+\|\widehat{u}_0\|_{L^1}\right)t^{-\frac{n}{\sigma}(1-\frac{1}{p})}\quad\mbox{as $t\to\infty$}, \] where $C>0$ is a constant depending on $n,J$.
\end{corollary}
\begin{proof}
By Theorem \ref{Thm:StableJk}, we have \[ \mathcal{J}_ku_0\in L^p(\mathbb{R}^n),\quad k=1,\ldots,N-1 \] and \[
\|\mathcal{J}_ku_0\|_{L^p}\leq R_k:=C(\|u_0\|_{L^1}+\|\widehat{u}_0\|_{L^1})k^{-\frac{n}{\sigma}(1-\frac{1}{p})},\quad k=N,N+1,\ldots, \] i.e.\ (\ref{Hyp:H3}) is true. According to part (ii) of Theorem \ref{Thm:GenDecay}, we then have \[
\|u(t)\|_{L^p}\leq C(\|u_0\|_{L^1}+\|\widehat{u}_0\|_{L^1})t^{-\frac{n}{\sigma}(1-\frac{1}{p})}\quad\forall\,t\geq t_0>0, \] which proves the corollary.\qquad\mbox{\qed}
\end{proof}
The borderline case in which $\widehat{J}$ has an asymptotic expansion with a logarithmic perturbation, \[
\widehat{J}(\xi)\sim1-A|\xi|^2(\ln1/|\xi|)\quad\mbox{as $\xi\to0$}, \]
was considered in the last section of \cite{ChasseigneEtal06}. (If $n=1$, this case corresponds to the Fourier transform of $J(x)\sim1/|x|^3$ as $|x|\to\infty$.) In this case, it was shown that the solution $u(t)$ to (\ref{Eqn:main}) has the asymptotic decay \[
\|u(t)\|_{L^\infty}\lesssim(t\ln t)^{-n/2}\quad\mbox{as $t\to\infty$}, \] for all $u_0\in L^1(\mathbb{R}^n)$, $\widehat{u}_0\in L^1(\mathbb{R}^n)$. We present the following generalization.
\begin{theorem}\label{Thm:Jksigmaln}
Assume \begin{align}\label{Fourier:Jkln}\tag{H5} \begin{cases}
\displaystyle J=J(|x|)\geq0,\quad \chi_0=\|J\|_{L^1}=1,\\
\\ \displaystyle
\widehat{J}(\xi)=1-A|\xi|^\sigma(\ln1/|\xi|)^\mu+o\left(|\xi|^\sigma(\ln1/|\xi|)^\mu\right)\quad\mbox{as $\xi\to0$,}\\
\\ \displaystyle \hspace{4cm}\mbox{where $\sigma\in(0,2],\mu\in\mathbb{R}$, $A>0$,}\\
\\ \displaystyle u_0\in L^1(\mathbb{R}^n),\,\,\widehat{u}_0\in L^1(\mathbb{R}^n). \end{cases} \end{align} Then, \[ \mathcal{J}_ku_0\in L^p(\mathbb{R}^n)\quad\mbox{for all $k$, for any $1\leq p\leq\infty$}, \] and, furthermore, there is a positive integer $N=N(n,J)$ such that \begin{align}
\|\mathcal{J}_ku_0\|_{L^p}\leq C\left(\|u_0\|_{L^1}+\|\widehat{u}_0\|_{L^1}\right)(k(\ln k)^\mu)^{-\frac{n}{\sigma}(1-\frac{1}{p})}\quad\forall\,k\geq N, \end{align} where $C>0$ is a constant depending only on $n,J$. \end{theorem}
\begin{proof}
Again it suffices to prove the result for $p=\infty$; the rest follows by interpolation. Let $u_0\in L^1(\mathbb{R}^n)$ be such that $\widehat{u}_0\in L^1(\mathbb{R}^n)$. By the asymptotic behavior of $\widehat{J}$ and the Riemann-Lebesgue lemma, there are $\delta>0$ and $0<\varrho_0<1$ such that \[
|\widehat{J}(\xi)|\leq \begin{cases}
\displaystyle 1-D|\xi|^\sigma\left|\ln\frac{1}{|\xi|}\right|^\mu&|\xi|\leq \varrho_0,\\
\\
\displaystyle 1-\delta&|\xi|>\varrho_0. \end{cases} \] The case $\mu=0$ was considered before. So assume $\mu\neq0$.
\textbf{Case I: $\mu>0$.} Let \[ \theta_k=(k(\ln k)^\mu)^{1/\sigma}\quad\mbox{and}\quad\varrho_k=\varrho_0k^{-\varepsilon},\quad0<\varepsilon<\min\{\sigma,1/\sigma\}. \] Note that $\varrho_k\theta_k\to\infty$ as $k\to\infty$ by the regular variation of $\varrho_k\theta_k$.
If $|\xi|\leq \varrho_k\theta_k$ then $\ln(\theta_k/|\xi|)\geq\ln(1/\varrho_k)=\varepsilon\ln k+\ln(1/\varrho_0)\geq\varepsilon\ln k$, and hence \begin{align*}
\left|\widehat{J}_k\left(\frac{\xi}{\theta_k}\right)\right|&\leq\left(1-D\frac{|\xi|^\sigma}{k(\ln k)^\mu}\left(\ln\frac{\theta_k}{|\xi|}\right)^\mu\right)^k\\
&\leq\left(1-D_1\frac{|\xi|^\sigma}{k}\right)^k,\\
&\leq e^{-D_1|\xi|^\sigma}. \end{align*}
If $\varrho_k\theta_k\leq|\xi|\leq \varrho_0\theta_k$ then $\ln(\theta_k/|\xi|)\geq\ln(1/\varrho_0)>0$ and \[
\frac{|\xi|^\varepsilon}{(\ln k)^\mu}\geq\frac{(\varrho_k\theta_k)^\varepsilon}{(\ln k)^\mu}=\varrho_0^\varepsilon k^{((1/\sigma)-\varepsilon)\varepsilon}(\ln k)^{(\mu\varepsilon/\sigma)-\mu}\geq c>0, \] for all $k\geq 2$. Hence \begin{align*}
\left|\widehat{J}_k\left(\frac{\xi}{\theta_k}\right)\right|&\leq\left(1-D\frac{|\xi|^{\sigma-\varepsilon}|\xi|^\varepsilon}{k(\ln k)^\mu}\left(\ln\frac{1}{\varrho_0}\right)^\mu\right)^k\\
&\leq\left(1-D_2\frac{|\xi|^{\sigma-\varepsilon}}{k}\right)^k\\
&\leq e^{-D_2|\xi|^{\sigma-\varepsilon}}, \end{align*} by (\ref{Tool:linexp}).
Finally, since $(k(\ln k)^\mu)^{1/k}\to1$ as $k\to\infty$, there is $N\geq2$ such that \[ (1-\delta)^k\leq(k(\ln k)^\mu)^{-n/\sigma}=\theta_k^{-n}\quad\mbox{for all $k\geq N$}, \] which gives \begin{align}\label{Tmp:powerln}
|\xi|>\varrho_0\theta_k,\,\,k\geq N\quad\Rightarrow\quad\left|\widehat{J}_k\left(\frac{\xi}{\theta_k}\right)\right|&\leq\theta_k^{-n}. \end{align} Now, for all $k\geq N$, we have by the preceding calculations that \begin{align*}
\int_{\mathbb{R}^n}\left|\widehat{\mathcal{J}_ku_0}\left(\frac{\xi}{\theta_k}\right)\right|d\xi&=\int_{\mathbb{R}^n}\left|\widehat{J}_k\left(\frac{\xi}{\theta_k}\right)\cdot\widehat{u}_0\left(\frac{\xi}{\theta_k}\right)\right|d\xi\\
&=\int_{|\xi|\leq \varrho_k\theta_k}\left|\cdots\right|d\xi+\int_{\varrho_k\theta_k<|\xi|\leq \varrho_0\theta_k}|\cdots|d\xi
+\int_{|\xi|>\varrho_0\theta_k}|\cdots|d\xi\\
&\leq\|\widehat{u}_0\|_{L^\infty}\int_{\mathbb{R}^n}e^{-D_1|\xi|^\sigma}+e^{-D_2|\xi|^{\sigma-\varepsilon}}d\xi+\int_{\mathbb{R}^n}\theta_k^{-n}\left|\widehat{u}_0\left(\frac{\xi}{\theta_k}\right)\right|d\xi\\
&\leq C(\|u_0\|_{L^1}+\|\widehat{u}_0\|_{L^1})<\infty. \end{align*} By the Hausdorff-Young inequality, we obtain that \begin{align*}
(k(\ln k)^\mu)^{n/{\sigma}}\|\mathcal{J}_ku_0\|_{L^\infty}&\lesssim\theta_k^{n}\int_{\mathbb{R}^n}\left|\widehat{\mathcal{J}_ku_0}(\xi)\right|d\xi\\
&=\int_{\mathbb{R}^n}\left|\widehat{\mathcal{J}_ku_0}\left(\frac{\xi}{\theta_k}\right)\right|d\xi\leq C(\|u_0\|_{L^1}+\|\widehat{u}_0\|_{L^1}). \end{align*} Hence we obtain \[
\|\mathcal{J}_ku_0\|_{L^\infty}\leq C(\|u_0\|_{L^1}+\|\widehat{u}_0\|_{L^1})(k(\ln k)^\mu)^{-n/\sigma}, \] which is the desired estimate when $\mu>0$.
\textbf{Case II: $\mu<0$.} If $|\xi|\leq1$ then we use the estimate $|\widehat{J}_k(\xi/\theta_k)|\leq1$. If $1<|\xi|\leq\varrho_0\theta_k$ then $\theta_k/|\xi|\leq\theta_k$ and $\ln\theta_k=(\ln k)/\sigma+\mu/\sigma\ln\ln k\leq(\ln k)/\sigma$ for all $k\geq3$, hence \begin{align*}
\left|\widehat{J}_k\left(\frac{\xi}{\theta_k}\right)\right|&\leq\left(1-D\frac{|\xi|^\sigma}{k(\ln k)^\mu}\left(\ln\frac{\theta_k}{|\xi|}\right)^\mu\right)^k\\
&\leq\left(1-D\frac{|\xi|^\sigma}{k(\ln k)^\mu}(\ln\theta_k)^\mu\right)^k\\
&\leq\left(1-D_1\frac{|\xi|^\sigma}{k}\right)^k\leq e^{-D_1|\xi|^\sigma}, \end{align*}
by (\ref{Tool:linexp}). We apply the estimate (\ref{Tmp:powerln}) above for $|\xi|>\varrho_0\theta_k$. Then we obtain \begin{align*}
\int_{\mathbb{R}^n}\left|\widehat{\mathcal{J}_ku_0}\left(\frac{\xi}{\theta_k}\right)\right|d\xi&=\int_{|\xi|\leq 1}\left|\cdots\right|d\xi+\int_{1<|\xi|\leq \varrho_0\theta_k}|\cdots|d\xi
+\int_{|\xi|>\varrho_0\theta_k}|\cdots|d\xi\\
&\leq\|\widehat{u}_0\|_{L^\infty}\left\{\omega_n+\int_{\mathbb{R}^n}e^{-D_1|\xi|^\sigma}d\xi\right\}+\int_{\mathbb{R}^n}\theta_k^{-n}\left|\widehat{u}_0\left(\frac{\xi}{\theta_k}\right)\right|d\xi\\
&\leq C(\|u_0\|_{L^1}+\|\widehat{u}_0\|_{L^1})<\infty. \end{align*} The rest of the proof now follows by the same argument as in the case $\mu>0$. \qquad\mbox{\qed}
\end{proof}
\begin{theorem}\label{Thm:DecayLn}
Assume (\ref{Fourier:Jkln}). Then the solution $u(t)$ of (\ref{Eqn:main}) satisfies $u(t)\in L^p(\mathbb{R}^n)$ for any $1\leq p\leq\infty$. Furthermore, \[
\|u(t)\|_{L^p}\leq C(\|u_0\|_{L^1}+\|\widehat{u}_0\|_{L^1})(t(\ln t)^\mu)^{-\frac{n}{\sigma}(1-\frac{1}{p})}\quad\mbox{as $t\to\infty$}, \] where $C>0$ is a constant depending only on $n$ and $J$.
\end{theorem}
\begin{proof} Simply apply Theorem \ref{Thm:Jksigmaln} and part (ii) of Theorem \ref{Thm:GenDecay}.\qquad\mbox{\qed}
\end{proof}
\begin{remark}
The argument used in the proof of the preceding theorem applies equally to $J\in L^1(\mathbb{R}^n)$ having the asymptotic expansion \[
\widehat{J}(\xi)=1-A|\xi|^\sigma(\ln1/|\xi|)^{\mu_1}(\ln_21/|\xi|)^{\mu_2}\cdots(\ln_m1/|\xi|)^{\mu_m}+\text{l.o.t.}\quad\mbox{as $\xi\to0$}, \] where $\ln_k=\ln\circ\cdots\circ\ln$ ($k$ terms), and we get the asymptotic behavior \[
\|u(t)\|_{L^p}\lesssim(t(\ln t)^{\mu_1}\cdots(\ln_mt)^{\mu_m})^{-\frac{n}{\sigma}(1-\frac{1}{p})}\quad\mbox{as $t\to\infty$}, \] provided $u_0\in L^1(\mathbb{R}^n),\widehat{u}_0\in L^1(\mathbb{R}^n)$, $1\leq p\leq\infty$, $0<\sigma\leq2,\mu_1,\ldots,\mu_m\in\mathbb{R}$.
\end{remark}
\section{Nonlocal equations with prescribed decay}
In this section we present a condition on $J$ that guarantees that the solution to (\ref{Eqn:main}) decays at the rate of a regularly varying function with negative index: \[
\|u(t)\|_{L^p}\lesssim (tL(t))^{-\beta}\quad\mbox{as $t\to\infty$}, \] where $\beta>0$ and $L:(0,\infty)\to(0,\infty)$ is slowly varying. By the smooth variation theorem for regularly varying functions, we can assume without loss of generality that $L$ is smooth; in particular, it is continuous. We need to impose an important hypothesis: \begin{align}\label{Hyp:Mono} \mbox{$L$ is eventually monotone, i.e.\ $\exists\,N_0>0$ such that $L$ is monotone on $[N_0,\infty)$.} \end{align}
The main hypothesis is \begin{align}\label{Hyp:H6}\tag{H6} \begin{cases}
\displaystyle J=J(|x|)\geq0,\quad\chi_0=\|J\|_{L^1}=1,\\
\\ \displaystyle
\widehat{J}(\xi)=1-A|\xi|^{\sigma}L\left(|\xi|^{-\gamma}\right)+o\left(|\xi|^{\sigma}L\left(|\xi|^{-\gamma}\right)\right)\quad\mbox{as $|\xi|\to0$},\\
\\ \displaystyle \hspace{3cm}\mbox{where $\sigma=\frac{n}{\beta}(1-\frac{1}{p}),\gamma>0$, $1<p\leq\frac{n}{(n-2\beta)_+}$},\\
\\ \displaystyle u_0\in L^1(\mathbb{R}^n),\,\,\widehat{u}_0\in L^1(\mathbb{R}^n). \end{cases} \end{align} Note that $0<\sigma\leq2$.
\begin{theorem}
Let $L:(0,\infty)\to(0,\infty)$ be a slowly varying function satisfying (\ref{Hyp:Mono}), and let $\beta>0$. Assume (\ref{Hyp:H6}) with \[ \gamma>\sigma\quad\mbox{if $L$ is eventually increasing},\quad\gamma=\sigma\quad\mbox{if $L$ is eventually decreasing}. \] Then there is a positive integer $N=N(n,J)$ such that $\mathcal{J}_ku_0\in L^p(\mathbb{R}^n)$ for all $k$ and \[
\|\mathcal{J}_ku_0\|_{L^p}\leq C\left(\|u_0\|_{L^1}+\|\widehat{u}_0\|_{L^1}\right)(kL(k))^{-\beta}\quad\forall\,k\geq N, \] where $C>0$ is a constant. Moreover, in this case, the solution $u(t)$ of (\ref{Eqn:main}) satisfies \[
\|u(t)\|_{L^p}\lesssim(tL(t))^{-\beta}\quad\mbox{as $t\to\infty$}. \]
\end{theorem}
\begin{proof} It is obvious that $\mathcal{J}_ku_0\in L^q(\mathbb{R}^n)$ for all $k$ and $1\leq q\leq\infty$. Since we are interested in the behavior of the solution of (\ref{Eqn:main}) as $t\to\infty$ and $N$ can be chosen arbitrarily (independently of $t$), the values of $L$ on $(0,N_0)$ are irrelevant. By redefining the function, we can assume that $L$ is monotone on $(0,\infty)$. If $\lim_{s\to\infty}L(s)$ is a finite positive number, then there is nothing to prove. So we will assume \begin{align} \lim_{s\to\infty}L(s)=\begin{cases} \infty&\mbox{if $L$ is increasing},\\ 0&\mbox{if $L$ is decreasing}. \end{cases} \end{align} By the hypothesis (\ref{Hyp:H6}) on $J$ and the Riemann--Lebesgue lemma, there are $\delta,D,\varrho_0>0$ such that \begin{align*}
|\widehat{J}(\xi)|\leq\begin{cases}
\displaystyle 1-D|\xi|^\sigma L(|\xi|^{-\gamma})&|\xi|\leq \varrho_0,\\
\\
\displaystyle 1-\delta&|\xi|>\varrho_0. \end{cases} \end{align*}
\textbf{Case I: $L(s)\to\infty$.} For this case, $\gamma>\sigma$. We define \[ \theta_k=(kL(k))^{1/\sigma}\quad\mbox{and}\quad\varrho_k=\varrho_0k^{-1/\gamma},\quad k=1,2,\ldots \]
If $|\xi|\leq\varrho_k\theta_k$ then $(|\xi|/\theta_k)^{-\gamma}\geq\varrho_k^{-\gamma}=\varrho_0^{-\gamma}k$. Since $L$ is increasing and is slowly varying, we have \[
L\left(\left(|\xi|/\theta_k\right)^{-\gamma}\right)\geq L(\varrho_0^{-\gamma}k)\sim L(k),\quad\mbox{as $k\to\infty$}. \] Thus for all $k$ sufficiently large, we have \begin{align*}
\left|\widehat{J}_k\left(\frac{\xi}{\theta_k}\right)\right|&\leq\left(1-D\frac{|\xi|^\sigma}{kL(k)}L\left(\left(|\xi|/\theta_k\right)^{-\gamma}\right)\right)^k\\
&\leq\left(1-D_1\frac{|\xi|^\sigma}{k}\right)^k\leq e^{-D_1|\xi|^\sigma}, \end{align*} by (\ref{Tool:linexp}).
Let $0<\varepsilon<\sigma$.
If $\varrho_k\theta_k<|\xi|\leq\varrho_0\theta_k$ then $(|\xi|/\theta_k)^{-\gamma}\geq\varrho_0^{-\gamma}$ and \[
\frac{|\xi|^\varepsilon}{L(k)}\geq\frac{(\varrho_k\theta_k)^\varepsilon}{L(k)}=\varrho_0^{\varepsilon}k^{(1/\sigma-1/\gamma)\varepsilon}L(k)^{\varepsilon/\sigma-1}\geq c>0, \] since $k^{(1/\sigma-1/\gamma)\varepsilon}L(k)^{\varepsilon/\sigma-1}$ is regularly varying with positive index. Hence \begin{align*}
\left|\widehat{J}_k\left(\frac{\xi}{\theta_k}\right)\right|&\leq\left(1-D\frac{|\xi|^{\sigma-\varepsilon}|\xi|^\varepsilon}{kL(k)}L(\varrho_0^{-\gamma})\right)^k\\
&\leq\left(1-D_2\frac{|\xi|^{\sigma-\varepsilon}}{k}\right)^k\\
&\leq e^{-D_2|\xi|^{\sigma-\varepsilon}}. \end{align*}
Finally, if $|\xi|>\varrho_0\theta_k$ then we have \begin{align*}
\left|\widehat{J}_k\left(\frac{\xi}{\theta_k}\right)\right|&\leq(1-\delta)^k\leq(kL(k))^{-n/\sigma}=\theta_k^{-n}, \end{align*} for all $k$ sufficiently large. Here we have used that $L$ is slowly varying, so $(\alpha_1 \sqrt{k})^{1/k}\leq(kL(k))^{1/k}\leq (\alpha_2k^2)^{1/k}$ for some constants $\alpha_1,\alpha_2>0$; since \[ \lim_{k\to\infty}(\alpha_1\sqrt{k})^{1/k}=\lim_{k\to\infty}(\alpha_2k^2)^{1/k}=1, \] we conclude that $(kL(k))^{1/k}\to1$ as $k\to\infty$, so $(1-\delta)^k\leq\theta_k^{-n}$ eventually.
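The comparison of the exponentially decaying $(1-\delta)^k$ with the regularly varying $\theta_k^{-n}$ can also be checked numerically for a concrete slowly varying $L$; the sketch below uses the assumed choice $L(k)=\ln k$ and illustrative values of $\delta$, $n$, $\sigma$:

```python
import math

# (1 - delta)^k decays exponentially, while theta_k^{-n} = (k L(k))^{-n/sigma}
# decays only like a power of k, so the former is eventually smaller.
# Illustrative parameters (assumptions): L(k) = ln k, delta = 0.1, n = 3, sigma = 1.5.
delta, n, sigma = 0.1, 3, 1.5

def theta_bound(k):
    return (k * math.log(k)) ** (-n / sigma)

assert all((1 - delta) ** k <= theta_bound(k) for k in range(200, 2000))
```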
Combining the above calculations we now get, for $k$ sufficiently large, that \begin{align*}
\int_{\mathbb{R}^n}\left|\widehat{\mathcal{J}_ku_0}\left(\frac{\xi}{\theta_k}\right)\right|d\xi&=\int_{\mathbb{R}^n}\left|\widehat{J}_k\left(\frac{\xi}{\theta_k}\right)\cdot\widehat{u}_0\left(\frac{\xi}{\theta_k}\right)\right|d\xi\\
&=\int_{|\xi|\leq \varrho_k\theta_k}\left|\cdots\right|d\xi+\int_{\varrho_k\theta_k<|\xi|\leq \varrho_0\theta_k}|\cdots|d\xi
+\int_{|\xi|>\varrho_0\theta_k}|\cdots|d\xi\\
&\leq\|\widehat{u}_0\|_{L^\infty}\int_{\mathbb{R}^n}e^{-D_1|\xi|^\sigma}+e^{-D_2|\xi|^{\sigma-\varepsilon}}d\xi+\theta_k^{-n}\int_{\mathbb{R}^n}|\widehat{u}_0(\xi/\theta_k)|d\xi\\
&\leq C(\|u_0\|_{L^1}+\|\widehat{u}_0\|_{L^1})<\infty. \end{align*} Applying the Hausdorff--Young inequality, we get \begin{align*}
(kL(k))^{n/\sigma}\|\mathcal{J}_ku_0\|_{L^\infty}&\lesssim\theta_k^n\int_{\mathbb{R}^n}\left|\widehat{\mathcal{J}_ku_0}(\xi)\right|d\xi\\
&=\int_{\mathbb{R}^n}\left|\widehat{\mathcal{J}_ku_0}\left(\frac{\xi}{\theta_k}\right)\right|d\xi\leq C(\|u_0\|_{L^1}+\|\widehat{u}_0\|_{L^1}). \end{align*} Therefore \begin{align*}
\|\mathcal{J}_ku_0\|_{L^\infty}\leq C(\|u_0\|_{L^1}+\|\widehat{u}_0\|_{L^1})(kL(k))^{-n/\sigma}. \end{align*} By interpolation we also get \begin{align*}
\|\mathcal{J}_ku_0\|_{L^p}\leq C(kL(k))^{-\frac{n}{\sigma}(1-\frac{1}{p})}=C(kL(k))^{-\beta}. \end{align*} Using Theorem \ref{Thm:GenDecay} (ii), it follows that \[
\|u(t)\|_{L^p}\lesssim(tL(t))^{-\beta}\quad\mbox{as $t\to\infty$}. \]
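The interpolation step above can be spelled out explicitly; assuming, as in the preceding sections, that $\mathcal{J}_ku_0$ denotes the $k$-fold convolution of $J$ with $u_0$, Young's inequality together with $\|J\|_{L^1}=1$ from (\ref{Hyp:H6}) gives

```latex
\|\mathcal{J}_ku_0\|_{L^p}
  \leq \|\mathcal{J}_ku_0\|_{L^1}^{1/p}\,
       \|\mathcal{J}_ku_0\|_{L^\infty}^{1-1/p}
  \leq \left(\|J\|_{L^1}^{k}\|u_0\|_{L^1}\right)^{1/p}
       \left[C\,(kL(k))^{-n/\sigma}\right]^{1-1/p}
  \lesssim (kL(k))^{-\frac{n}{\sigma}\left(1-\frac{1}{p}\right)}
  = (kL(k))^{-\beta}.
```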
\textbf{Case II: $L(s)\to0$.} For this case $\gamma=\sigma$. We employ a similar argument as in the proof of Theorem \ref{Thm:Jksigmaln}. Use $\theta_k=(kL(k))^{1/\sigma}$ as in the previous case. Let us split $\mathbb{R}^n$ into \begin{align*}
\{|\xi|\leq1\},\quad\{1<|\xi|\leq\varrho_0\theta_k\},\quad\{|\xi|>\varrho_0\theta_k\}. \end{align*}
If $|\xi|\leq1$, we employ the estimate $|\widehat{J}_k(\xi/\theta_k)|\leq1$. Assume $1<|\xi|\leq\varrho_0\theta_k$. Then \[
L((\theta_k/|\xi|)^\gamma)\geq L(\theta_k^\gamma)=L(kL(k)), \] where we have used that $L$ is now decreasing. Since $L(k)\to0$ as $k\to\infty$ and $L$ is decreasing, it follows that \[
L((\theta_k/|\xi|)^\gamma)\geq L(k)\quad\mbox{for all $k$ sufficiently large}. \] So, in this case, \begin{align*}
\left|\widehat{J}_k\left(\frac{\xi}{\theta_k}\right)\right|&\leq\left(1-D\frac{|\xi|^\sigma}{kL(k)}L\left((\theta_k/|\xi|)^\gamma\right)\right)^k\\
&\leq\left(1-D\frac{|\xi|^\sigma}{k}\right)^k\leq e^{-D|\xi|^\sigma}. \end{align*}
Finally, if $|\xi|>\varrho_0\theta_k$, then we have, as in the previous case, that \begin{align*}
\left|\widehat{J}_k\left(\frac{\xi}{\theta_k}\right)\right|&\leq\theta_k^{-n}, \end{align*} for all $k$ sufficiently large.
Combining the preceding calculations, we get \begin{align*}
\int_{\mathbb{R}^n}\left|\widehat{\mathcal{J}_ku_0}\left(\frac{\xi}{\theta_k}\right)\right|d\xi&=\int_{|\xi|\leq1}|\cdots|d\xi+\int_{1<|\xi|\leq\varrho_0\theta_k}|\cdots|d\xi+\int_{|\xi|>\varrho_0\theta_k}|\cdots|d\xi\\
&\leq C\|\widehat{u}_0\|_{L^\infty}\left\{\omega_n+\int_{\mathbb{R}^n}e^{-D|\xi|^\sigma}d\xi\right\}+\int_{\mathbb{R}^n}\theta_k^{-n}|\widehat{u}_0(\xi/\theta_k)|d\xi\\
&\leq C(\|u_0\|_{L^1}+\|\widehat{u}_0\|_{L^1})<\infty, \end{align*} hence \[
\|\mathcal{J}_ku_0\|_{L^\infty}\lesssim\|\widehat{\mathcal{J}_ku_0}\|_{L^1}\lesssim(kL(k))^{-n/\sigma}, \] for all $k$ sufficiently large. Invoking Theorem \ref{Thm:GenDecay} (ii), we obtain the desired asymptotic behavior of $u(t)$.\qquad\mbox{\qed}
\end{proof}
\end{document}
\begin{document}
\title{Exponential data encoding for quantum supervised learning}
\author{S.~Shin} \affiliation{Department of Physics and Astronomy,
Seoul National University, 08826 Seoul, South Korea}
\author{Y.~S.~Teo} \email{yong.siah.teo@gmail.com} \affiliation{Department of Physics and Astronomy,
Seoul National University, 08826 Seoul, South Korea}
\author{H.~Jeong} \email{h.jeong37@gmail.com} \affiliation{Department of Physics and Astronomy,
Seoul National University, 08826 Seoul, South Korea}
\begin{abstract}
Reliable quantum supervised learning of a multivariate function mapping depends on the expressivity of the corresponding quantum circuit and the measurement resources. We introduce exponential-data-encoding strategies that are hardware-efficient and optimal amongst all non-entangling Pauli-encoded schemes, and that suffice for a quantum circuit to express general functions having very broad Fourier frequency spectra using only exponentially few encoding gates. We show that such an encoding strategy not only reduces the quantum resources, but also exhibits a practical resource advantage during training, in contrast with known efficient classical strategies, when polynomial-depth training circuits are also employed. When computation resources are constrained, we numerically demonstrate that even exponential-data-encoding circuits with single-layer training modules can generally express functions that lie outside the classically-expressible region, thereby supporting the practical benefits of such a resource advantage. Finally, we illustrate the performance of exponential encoding in learning the potential-energy surface of the ethanol molecule and California's housing prices. \end{abstract}
\maketitle
\section{Introduction} In the current noisy intermediate-scale quantum~(NISQ) era~\cite{Preskill2018quantumcomputingin}, a variety of NISQ algorithms have already been proposed to exploit the computing potential of the noisy quantum devices that are currently at our disposal~\cite{Bromley:2020applications,Bharti:2022noisy,Finnila:1994quantum,Kadowaki:1998quantum,Aaronson:2011computational,Aaronson:2011linear-optical,Hamilton:2017gaussian,Trabesinger:2012quantum,Georgescu:2014quantum}. Among them, quantum machine learning (QML)~\cite{Schuld:2015introduction,Schuld:2019quantum,Carleo:2019machine,date2020quantum,Perez-Salinas:2020aa,dutta2021singlequbit,Goto:2021universal} has garnered huge attention in conjunction with the long-standing reputation of classical machine learning~(CML). A natural question arises: ``Under which circumstances is QML advantageous over CML?'' Owing to the apparent stability and widely-regarded success of CML, the quest to search for avenues where QML outperforms CML is challenging. Practicality issues aside, the richness of quantum-computing techniques in QML makes it an interesting subject in its own right that deserves deeper exploration~\cite{Schuld.2203.01340}. Many techniques in CML have since been converted to QML versions~\cite{RebentrostQSVM.2014, cong2019quantum, lloyd2018qgan, Liu2018anomaly, dunjko2016qrl}. Several performance aspects of QML have also been analyzed, such as expressive power \cite{Du:2020aa,Du:2022.expressivity}, generalization properties \cite{Banchi:2021generalization,Caro2021encodingdependent, Huang2021general}, and sample complexity \cite{Arunachalam:2017}.
The goal of ML is to learn a function mapping $f(\rvec{x})$ by training a particular computational model on a dataset $\{\rvec{x}_j\}$. The training accuracy relies on the function space the model generates~(its \emph{expressivity}). Classical neural networks, for instance, exhibit a ``universal approximation property'' that ensures universality even for single-hidden-layer models~\cite{hornik1989multilayer}. By the same token, the universality of quantum models has also been investigated~\cite{Goto:2021universal,Perez-Salinas:2020aa}. We shall focus on the important paradigm of \emph{variational quantum machine learning}, where a variational quantum circuit is optimized to approximate an unknown $f(\rvec{x})$ using a training dataset. In~\cite{Schuld:2021aa}, QML models are expressed as finite Fourier series with a frequency spectrum determined by the generator eigenvalues of the encoding gates. The authors showed that a sufficiently large circuit and arbitrary observable measurements can approximate any $f(\rvec{x})$ well, as finite Fourier series can approximate $L^2$ functions given sufficiently large frequency and coefficient spans~\cite{Weisz:2012aa}. A QML model may hence be treated as a Fourier-featured linear model~(FFLM)~\cite{Rahimi:2007random,Tancik:2020Fourier}.
In this article, based on the above \emph{FFLM computed by a variational NISQ circuit}~(QFFLM)~\cite{Biamonte:2021universal,Cerezo:2021variational,Cao:2019quantum,Endo:2021hybrid,McArdle:2020quantum}, we propose a much more efficient quantum (training-)data-encoding strategy that generates an exponentially large frequency spectrum in the number of encoding gates employed. For a fixed Fourier degree $d_\mathrm{F}$ (the maximum frequency over all $M$ variables), we show that such an \emph{exponential encoding} requires the least number of encoding gates \mbox{$N=\log_3(2d_\mathrm{F}+1)$} \emph{per encoded variable} amongst all non-entangling Pauli-encoded circuits, allowing exponential reduction of quantum resources compared to naive data-reuploading. Using FFLMs for ML, QFFLMs can possess a resource advantage in training compared to their classical counterparts~(CFFLMs). For supervised learning, we derive the criterion $N_\mathrm{gt} < O\!\left(\epsilon K^{M/2}\right)$ for the resource advantage, involving the number of single-qubit and CNOT gates $N_\mathrm{gt}$, FFLM dimension~$K^M$, and quantum-circuit sampling precision~$\epsilon$. When exponential encoding and a hardware-efficient {\it ansatz} are used, we show that this criterion is satisfied with $N_\mathrm{gt}=O(\mathrm{poly}(MN))$. Numerical demonstrations concerning superior expressivity and learning advantage with exponentially-encoded QFFLMs are also discussed.
\section{Optimal Pauli-data-encoded QML model with exponential encoding} The unitary $U_{\rvec{x};\rvec{\theta}}=W_2(\rvec{\theta}_2)V(\rvec{x})W_1(\rvec{\theta}_1)$ acting on an initialized pure state ket $\ket{\rvec{0}}$, followed by a single-qubit Pauli-$Z$ measurement, describes a rather general bounded QML model for large circuit depths~$L$ (elaborated in Fig.~\ref{fig:QSL_model}), defined by training parameters $\rvec{\theta}=(\rvec{\theta}_1^\top\,\,\rvec{\theta}_2^\top)^\top$ and training datum~$\rvec{x}$. By assigning $V(\rvec{x}) = \bigotimes_{m=1}^{M}\bigotimes_{n=1}^{N}\E{-\mathrm{i}\,\beta_{mn}x_m\,Z/2} = \bigotimes_{m=1}^{M}\sum_{\rvec{k}_m\in\{0,1\}^N} \ket{\rvec{k}_m}\E{-\mathrm{i}\,\lambda_{\rvec{k}_m}x_m}\bra{\rvec{k}_m}$ as a diagonal encoding unitary in the $N$-qubit standard basis~$\{\ket{\rvec{k}_m}\equiv\ket{\rvec{k}}\}$, where $\lambda_{\rvec{k}_m}$ are eigenvalues of $\sum_{n=1}^N \beta_{mn}Z_n/2$, the function $f_\mathrm{Q}(\rvec{x})$ expressible by the QML model is a finite $M$-variate Fourier series: \begin{equation}
f_\mathrm{Q}(\rvec{x})= \bra{\rvec{0}}U_{\rvec{x};\rvec{\theta}}^{\dag}\,Z_N\,U_{\rvec{x};\rvec{\theta}}\ket{\rvec{0}}=\!\!\!\!\!\!\!\!\!\!\sum_{n_1\in \Omega_1,n_2 \in \Omega_2,\ldots, n_M \in \Omega_M} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\widetilde{c}_{n_1,n_2,\ldots,n_M}\,\E{-\mathrm{i}\,\rvec{n}\bm{\cdot}\rvec{x}}\,,
\label{eq:quantum_model} \end{equation}
where $\widetilde{c}_{n_1,n_2,\ldots,n_M}$ are linear combinations involving $Z_N$, $W_2$ and $D_1=\sum_{\rvec{k}'}\ket{\rvec{k}'}\opinner{\rvec{k}'}{W_1}{\rvec{0}}\bra{\rvec{k}'}$~(see Appendix~\ref{app:deriv_1}). For sufficiently deep $W_l$s and extensive \emph{frequency spectra} $\Omega_m$s, $f_\mathrm{Q}(\rvec{x})$ is a finite Fourier series~[$|f_\mathrm{Q}(\rvec{x})|\leq1$] that approximates any target function $f(\rvec{x})$~\cite{Schuld:2021aa,Weisz:2012aa} well up to amplitude and period rescalings. Throughout the article, we focus on QML models of the \emph{parallel} kind shown in Fig.~\ref{fig:QSL_model}. As $\Omega_m$s are only determined by the type of encoding generators regardless of their positions in the circuit~\cite{Schuld:2021aa, Caro2021encodingdependent}, all analyses about the $\Omega_m$s extend to different \emph{ansatz} topologies such as serial circuits with alternating trainable and encoding modules.
\begin{figure}
\caption{Schematic QML model used to express an $M$-variate bounded $f_\mathrm{Q}(\rvec{x})$ [$|f_\mathrm{Q}(\rvec{x})|\leq1$]. Each training unitary $W_l(\rvec{\theta}_l)$ comprises $L$ layers of single-qubit~(each carrying three independent training parameters) and the full nearest-neighbor CNOT array. The unitary $V(\rvec{x})$ encodes each element $x_m$ of a training datum $\rvec{x}$ onto an $N$-qubit Pauli-$Z$ gate with prechosen weights $\beta_{m1},\ldots,\beta_{mN}$.}
\label{fig:QSL_model}
\end{figure}
To analyze how the choices of $\beta_{mn}>0$ affect the extensiveness of $\Omega_m$, we note that $\lambda_{\rvec{k}_m}=\sum_{n=1}^{N}a^{(\rvec{k}_m)}_{n}$, with $a^{(\rvec{k}_m)}_{n}=\pm\beta_{mn}/2$, are potentially $2^N$ distinct eigenvalues in the absence of degeneracy. Thus, the spectrum $\Omega_m=\{\lambda_{\rvec{k}_m}-\lambda_{\rvec{k}'_m}|\rvec{k}_m,\rvec{k}'_m\in\{0,1\}^N\}$ must contain \emph{at most} $4^N-2^N+1$ distinct frequencies~(including~zero). If we \emph{naively encode} all \mbox{$\beta_{mn}=1$}, then there will only be $2N+1$ distinct values: $\Omega_m = \{-N,-N+1,\ldots,-1,0,1,\ldots,N-1,N\}$; that is, we need $N=d_\mathrm{F}$ qubits per encoded variable to express a degree-$d_\mathrm{F}$ Fourier series.
With $N$ qubits, we can, instead, maximize the coverage of $\Omega_m$ by noting that each frequency element is a sum of $N$ numbers picked from the set $\{\beta_{mn},-\beta_{mn},0\}$, giving \emph{at most} $3^N$ distinct frequencies. We show in Appendix~\ref{app:exp_enc} that the \begin{equation} \text{\bf exponential encoding scheme}:\quad\beta_{mn} = 3^{n-1} \label{eq:main1} \end{equation} supplies us with a dense and exponentially-extensive spectrum $\Omega_m\equiv\Omega_\mathrm{exp}=\{-(3^{N}-1)/2, -(3^{N}-3)/2, \ldots, -1, 0 ,1 , \ldots, (3^{N}-3)/2, (3^{N}-1)/2\}$ for all $1\leq m\leq M$, with $d_\mathrm{F}=(3^N-1)/2$. Our first result~\eqref{eq:main1} implies that \mbox{$N=\log_3(2d_\mathrm{F}+1)$} qubits suffice to realize a degree-$d_\mathrm{F}$ Fourier series with this optimal encoding. The limitation to $3^N$ dense and distinct frequencies stems from the fact that Pauli rotations have eigenvalues that differ only in sign. One may improve the coverage to $O(4^N)$ using nonlocal Hermitian-generator encodings, although their hardware feasibility may be called into question~\cite{Caro2021encodingdependent}.
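The spectrum claim in~\eqref{eq:main1} amounts to a balanced-ternary enumeration: every integer in $[-(3^N-1)/2,(3^N-1)/2]$ is representable as $\sum_n\varepsilon_n3^{n-1}$ with $\varepsilon_n\in\{-1,0,1\}$. A small sketch comparing the naive and exponential encodings (modeling, as in the text, each frequency as a sum of $N$ terms drawn from $\{\beta_{mn},0,-\beta_{mn}\}$):

```python
from itertools import product

def spectrum(betas):
    """All frequencies obtainable by summing one element of
    {+b, 0, -b} for each encoding weight b."""
    return {sum(e * b for e, b in zip(eps, betas))
            for eps in product((-1, 0, 1), repeat=len(betas))}

N = 4
naive = spectrum([1] * N)                                 # beta_mn = 1
expo = spectrum([3 ** (n - 1) for n in range(1, N + 1)])  # beta_mn = 3^{n-1}

assert naive == set(range(-N, N + 1))                     # only 2N+1 frequencies
d_F = (3 ** N - 1) // 2
assert expo == set(range(-d_F, d_F + 1))                  # dense, 3^N frequencies
```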
\section{QFFLMs and resource advantage} As~\eqref{eq:quantum_model} gives rise to a finite Fourier series, we write $f_\mathrm{Q}(\rvec{x})=\rvec{c}_\mathrm{Q}(\rvec{\theta})\bm{\cdot}\rvec{\phi}(\rvec{x})$, with $\rvec{\phi}(\rvec{x})=\bigotimes^M_{m=1}\rvec{\phi}(x_m)$ and $\rvec{\phi}(x_m)=\sqrt{2}(2^{-1/2},\cos x_m,\sin x_m,\ldots,\cos(n_{(K-1)/2}x_m),\sin(n_{(K-1)/2}x_m))^\top$ is a $K$-dimensional feature column containing all Fourier-basis functions covering a frequency spectrum $\Omega_m$ per variable as training features $(K \equiv 2d_\mathrm{F}+1)$. Thus, we immediately recognize that $f_\mathrm{Q}(\rvec{x})$ is a QFFLM~\cite{Schuld:2019quantum,Schuld:2021supervised}.
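As a concrete illustration of the feature map, the column $\rvec{\phi}(\rvec{x})$ can be assembled as follows; this is a sketch assuming the dense integer spectrum $n_j=j$, $j=1,\ldots,(K-1)/2$, as produced by exponential encoding:

```python
import math

def phi_1d(x, K):
    """K-dimensional Fourier feature column for one variable:
    sqrt(2) * (1/sqrt(2), cos x, sin x, ..., cos(d x), sin(d x))."""
    d = (K - 1) // 2
    feats = [1 / math.sqrt(2)]
    for n in range(1, d + 1):
        feats += [math.cos(n * x), math.sin(n * x)]
    return [math.sqrt(2) * f for f in feats]

def kron(a, b):
    """Kronecker product of two flat vectors."""
    return [ai * bi for ai in a for bi in b]

def phi(xs, K):
    """M-variate feature vector as the Kronecker product of 1-d columns."""
    out = [1.0]
    for x in xs:
        out = kron(out, phi_1d(x, K))
    return out

assert len(phi([0.3, 1.1], 9)) == 81   # dimension K^M with K = 9, M = 2
```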
If an $M$-variate function $f(\rvec{x})\cong\rvec{c}\bm{\cdot}\rvec{\phi}(\rvec{x})$ is well-approximated by a Fourier series defined by $\rvec{c}$, we can train a $K^M$-dimensional FFLM either classically~(CFFLM) or with a quantum circuit~(QFFLM) to learn it. With a QFFLM, we may train $U_{\rvec{x};\rvec{\theta}}$ using training datasets $\{\rvec{x}_j,y_j= f(\rvec{x}_j)\}$. This involves minimizing a loss function, say the mean squared error~(MSE) $\mathcal{L}_{\rvec{\theta}}\propto\sum_j[f_\mathrm{Q}(\rvec{x}_j)-y_j]^2$. To clearly analyze the \emph{computational resources} (denoted by $\mathrm{resrc}$) needed in training QFFLMs, we shall consider general gradient-based optimization methods for minimizing $\mathcal{L}_{\rvec{\theta}}$, which entail the computation of $\partial_{\theta_k}\mathcal{L}_{\rvec{\theta}}\propto\sum_j[f_\mathrm{Q}(\rvec{x}_j) - y_j]\partial_{\theta_k}f_\mathrm{Q}(\rvec{x}_j)$ for every $\theta_k$. On NISQ devices, we may carry out the parameter-shift rule~\cite{Mitarai:2018quantum,Schuld:2019evaluating} $\partial_{\theta_k}f_\mathrm{Q}(\rvec{x}_j)=[f_\mathrm{Q}(\rvec{x}_j;\theta_k+\pi/2)-f_\mathrm{Q}(\rvec{x}_j;\theta_k-\pi/2)]/2$, so that all $f_\mathrm{Q}$s constituting $\partial_{\theta_k}\mathcal{L}_{\rvec{\theta}}$ (evaluated at \emph{different} circuit parameters) are independently sampled from the circuit.
Suppose that every model function $f_\mathrm{Q}(\rvec{x}_j)$ is sampled with some assigned precision $\epsilon$ from a circuit of $N_\mathrm{gt}$ single-qubit and CNOT gates. Then \emph{each $f_\mathrm{Q}$ sampling} incurs $O(N_\mathrm{gt}/\epsilon^2)$ gate operations. Tracking all function samplings leads to the overall $\nabla_{\rvec{\theta}}\mathcal{L}_{\rvec{\theta}}$ sampling resources $\mathrm{resrc}_\mathrm{Q}=O(N_\mathrm{gt}^2/\epsilon^2)$. We next compare $\mathrm{resrc}_\mathrm{Q}$ with the resources~$\mathrm{resrc}_\mathrm{C}$ for computing loss-function gradients with a CFFLM defined as $f_\mathrm{C}(\rvec{x})=\rvec{c}_\mathrm{C}\bm{\cdot}\rvec{\phi}(\rvec{x})$, which amounts to evaluating dot products. In terms of classical scalar addition and multiplication operations, we show that $\mathrm{resrc}_\mathrm{C}=\Omega(K^M)$ for known efficient classical strategies~\cite{gudenberg:inria-00074262,Johnson:1982extensions,Griewank:2008,margossianADreview,wu2019efficient,NIPS2014_310ce61c,guo2016quantization,Murphy:2012machine,Pearson:1901lines,Hotelling:1936relations}, which encompass both exact and approximate function computations. Note that while $\mathrm{resrc}_\mathrm{C}$ depends linearly on $K^M$, $\mathrm{resrc}_\mathrm{Q}$ relates to quantum-circuit properties and can depend logarithmically on $K^M$. A resource advantage requires $\mathrm{resrc}_\mathrm{Q}<\mathrm{resrc}_\mathrm{C}$, or \begin{equation}
N_\mathrm{gt} < O\!\left(\epsilon K^{M/2}\right)\,.
\label{eq:qadvantage} \end{equation} Here we assume that the basic operations for CFFLMs and QFFLMs are of equivalent cost. Although, in practice, basic quantum operations lag classical ones in time, the resulting complexity prefactor is a constant regardless of the model size, and is therefore negligible in \emph{resource-scaling} comparisons. Importantly, we observe that the computation bottlenecks during training originate from evaluating $f_\mathrm{Q}$ and $f_\mathrm{C}$. Hence, \eqref{eq:qadvantage} holds for any $\mathcal{L}_{\rvec{\theta}}$ that takes $f_{Q/C}$ as input, and even for gradient-free optimization methods. All relevant technical details are found in Appendix~\ref{app:resrc_qfflm}.
\section{Resource advantage with exponential encoding}
Care has to be taken in assigning $\epsilon$ for QFFLMs. In computing $\mathcal{L}_{\rvec{\theta}}$, fixing the circuit-sampling repetitions to $O(1/\epsilon^2)$ yields the \emph{same} precision $\epsilon$ for \emph{both} the estimators $\widehat{f_\mathrm{Q}(\rvec{x}_j)}$ and $\widehat{\partial_{\theta_k}f_\mathrm{Q}(\rvec{x}_j)}$. On the other hand, their respective \emph{desired} precisions $\epsilon_{f}$ and $\epsilon_{\partial f}$ should \emph{at most} scale with typical orders of $|f_\mathrm{Q}(\rvec{x}_j)|$ and $|\partial_{\theta_k}f_\mathrm{Q}(\rvec{x}_j)|$ so that estimations are not mere random guesses~\cite{McClean:2018barren}. This suggests the conservative choice $\epsilon=\min\{\epsilon_f,\epsilon_{\partial f}\} =\min\left\{\sqrt{\MEAN{|f_\mathrm{Q}(\rvec{x}_j)|^2}{}},\sqrt{\MEAN{|\partial_{\theta_k}f_\mathrm{Q}(\rvec{x}_j)|^2}{}}\right\}$.
When the training parameters $\rvec{\theta}_1$ and $\rvec{\theta}_2$ are randomly initialized to start the minimization of some loss function $\mathcal{L}_{\rvec{\theta}}$, the \emph{barren-plateau phenomenon} refers to the statements $\MEAN{\partial_{\theta_k} \mathcal{L}_{\rvec{\theta}}}{}=0$ (averaged over randomly-chosen $\rvec{\theta}$) and that $\VAR{}{\partial_{\theta_k} \mathcal{L}_{\rvec{\theta}}}$ vanishes exponentially with the size of circuit. Intuitively, the gradient landscape of $\mathcal{L}_{\rvec{\theta}}$ quickly becomes extremely flat with increasing qubit number~\cite{McClean:2018barren,Arrasmith:2021effect,Cerezo:2021cost,Holmes:2022connecting}. When barren plateaus exist, $\min\left\{\sqrt{\MEAN{|f_\mathrm{Q}(\rvec{x}_j)|^2}{}},\sqrt{\MEAN{|\partial_{\theta_k}f_\mathrm{Q}(\rvec{x}_j)|^2}{}}\right\}=O(\alpha^{-MN/2})$ for some \mbox{$\alpha>1$}~\cite{Thanasilp:2021subtleties}, so that $\epsilon=O(\alpha^{-MN/2})$. In our context, since the measurement observable is a Pauli operator, and the {\it ansatz} for $W_l$ in~Fig.~\ref{fig:QSL_model} tends to a two-design with $L=O(\mathrm{poly} (MN))$~\cite{harrow_random_2009,Puchala_Z._Symbolic_2017}, we derive in Appendix~\ref{app:BPP} that $\MEAN{f_\mathrm{Q}^2}{}=O(2^{-{MN}})$ and $\MEAN{(\partial_{\theta_k} f_\mathrm{Q})^2}{}\leq O(2^{-{MN}})$, where $\left<\,\,\bm{\cdot}\,\,\right>$ is an average over $W_l$s. This implies that $\alpha=2$.
When exponential encoding is used in the presence of barren plateaus, $K=3^N$ and criterion \eqref{eq:qadvantage} becomes $N_\mathrm{gt} < O\!\left((3/\alpha)^{MN/2}\right)$, telling us that a feasible resource advantage requires \mbox{$\alpha<3$}. From the previous section, as $\alpha=2$ for QFFLMs of polynomial-depth $W_l$s, an exponentially-encoded circuit of $N_\mathrm{gt}=O(\mathrm{poly}(MN))$ indeed permits a resource advantage as $O(\mathrm{poly}(MN))\ll O\!\left((3/2)^{MN/2}\right)$ for large~$MN$. The existence of such an advantage, even under the influence of barren plateaus, is \emph{only} possible with encodings that generate exponentially-large models.
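To see the scale of the advantage, one can compare a polynomial gate count against the bound $(3/\alpha)^{MN/2}$ with $\alpha=2$; the quadratic ansatz $N_\mathrm{gt}\sim c\,(MN)^2$ and the constant $c$ below are assumptions for illustration only:

```python
# Compare an assumed polynomial gate count N_gt ~ c*(M*N)^2 against the
# resource-advantage bound (3/alpha)^(M*N/2) with alpha = 2; the exponential
# bound dominates the polynomial gate count for large M*N.
def poly_gates(mn, c=10):
    return c * mn ** 2

def advantage_bound(mn, alpha=2.0):
    return (3.0 / alpha) ** (mn / 2)

# For small M*N the polynomial may exceed the bound; for larger M*N it cannot.
assert poly_gates(10) > advantage_bound(10)
assert poly_gates(60) < advantage_bound(60)
```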
All $M$-variate FFLM functions $f_\mathrm{model}(\rvec{x})$ are characterized by the convex subspace $C_{K^M}=\{\rvec{c}\,\,|\,\,|\rvec{c}\bm{\cdot}\rvec{\phi}(\rvec{x})|\leq1\text{ for all }\rvec{x}\in[0,2\pi)^M\}$ of $K^M$-dimensional $\rvec{c}$. In the simplest case, where $K=3$ and $M=1$, we show in Appendix~\ref{sec:CKM_struct} that the geometry of $C_{3}$ is that of a bicone. Generating \emph{any} $\rvec{c}_\mathrm{Q}(\rvec{\theta}) \in C_{K^M}$ with a QFFLM needs at least $K^{M}$ free (training-circuit) parameters~($\rvec{\theta}$). However, when the number of free parameters $N_\mathrm{tp} = O(N_\mathrm{gt})\geq K^{M}$, QFFLMs hold no resource advantage, as criterion~\eqref{eq:qadvantage} can never be satisfied. This means that resource-advantageous QFFLMs possess coefficients $\rvec{c}_\mathrm{Q}(\rvec{\theta})$ that necessarily cover a smaller subspace $S_\mathrm{Q}\subset C_{K^M}$. In this situation, where $N_\mathrm{tp} < K^M$, we say that the model is \emph{underparametrized}, and this is the only situation in which QFFLMs can be resource-advantageous. Underparametrized models are thus crucial for practical implementations, especially in realistic scenarios where $N_\mathrm{gt}$ is limited.
\begin{figure}\label{fig:model_geom}
\end{figure}
\begin{figure}\label{fig:sim}
\end{figure}
\section{Practical benefits of underparametrized QFFLMs} \subsection{Different expressible function spaces for two models} \label{sec:VA} The subspace of functions expressible by a Q(C)FFLM is given by $S_\mathrm{Q}$~($S_\mathrm{C}$) of a dimension \emph{no greater than} $N_\mathrm{tp}^\mathrm{Q(C)FFLM}$, which is the number of free (trainable) parameters, so that underparametrized models always generate subspaces of smaller dimensions than $C_{K^M}$. For example, if subspace $S \subseteq C_3$ is characterized by $N_\mathrm{tp}=1$ free parameter such that $\rvec{c}(\theta)=2^{-1}(\sin\theta, \cos\theta, \sin\theta\,\cos\theta)^\top$, then $S$ itself is a one-dimensional curve lying in $C_3$. For a QFFLM, the interplay between $N_\mathrm{tp}^\mathrm{QFFLM}$, {\it circuit-ansatz} choice [which fixes the form of $\rvec{c}(\rvec{\theta})$, and consequently $S_\mathrm{Q}$], and model dimension~$K^M$ determines if a target $f(\rvec{x})$ can be accurately expressed.
Suppose that $f(\rvec{x})$ is characterized by a $\rvec{c}\in C_{\kappa^M}$ of dimension $\kappa^M$, which is generally different from the model dimension $K^M$, and assume that $\kappa^M\gg1$ so that $\mathrm{resrc}$ is restricted to $O(\mathrm{polylog}(\kappa^M))$. Without \emph{a priori} information about $\rvec{c}$, there are two options for machine learning $f(\rvec{x})$ with an FFLM. Going by the classical route~[Fig.~\ref{fig:model_geom}(a)], as $\mathrm{resrc}_\mathrm{C}$ depends on the model dimension, we may construct a CFFLM of dimension $O(\mathrm{polylog}(\kappa^M))$ that is maximally permitted by the $\mathrm{resrc}$ constraint. Since any underparametrized CFFLM offers a worse expressivity than the fully-parametrized one, we choose the fully-parametrized CFFLM, which utilizes the same number of free parameters as the dimension, $N_\mathrm{tp}^{\mathrm{CFFLM}} = O(\mathrm{polylog}(\kappa^M))$. Such a fully-parametrized model can express the entire function space $S_\mathrm{C}=C_{O(\mathrm{polylog}(\kappa^M))}$ defined by its dimension. Going by the quantum route~[Fig.~\ref{fig:model_geom}(b)], under the \emph{same} $\mathrm{resrc}$ constraint, an $O(\kappa^M)$-dimensional QFFLM can be constructed with $O(\log(\kappa^M))$ encoding gates using the dense exponential encoding strategy in \eqref{eq:main1}. The polylogarithmic $\mathrm{resrc}$ constrains $N_\mathrm{tp}^{\mathrm{QFFLM}} = O(\mathrm{polylog}(\kappa^M)\epsilon)$, where $\epsilon$ depends on the circuit \emph{ansatz}. With $L=O(\mathrm{polylog}(\kappa^M))$, $\epsilon \sim \alpha^{-\log_3(\kappa^M)}$~($\alpha > 1$) in view of barren plateaus.
In the absence of barren plateaus, $\epsilon$ decays as a power law with the model size. Then, the dimensions of $S_\mathrm{Q}$ and $S_\mathrm{C}$ are both $O(\mathrm{polylog}(\kappa^M))$, and thus comparable in order. However, the QFFLM dimension $O(\kappa^M)$ results in an $O(N_\mathrm{tp}^{\mathrm{QFFLM}})$-dimensional $S_\mathrm{Q}$ that can exceed the boundaries of $S_\mathrm{C} = C_{O(\mathrm{polylog}(\kappa^M))}$, so that $\rvec{c}\notin C_{O(\mathrm{polylog}(\kappa^M))}$ may still be expressible by the QFFLM~(see Fig.~\ref{fig:model_geom}).
\begin{figure}\label{fig:ethanol}
\end{figure}
\subsection{Numerical simulations} Figure~\ref{fig:sim} supports the arguments of Section~\ref{sec:VA} with numerical simulations on learning univariate functions $f(x)$ of various $\rvec{c}$ distributions~($M=1,\kappa=81$). We chose a CFFLM corresponding to a $64$-dimensional model using $N_\mathrm{tp}^{\mathrm{CFFLM}}=64$ parameters. The dimension of this CFFLM is less than $\kappa^M$, and $S_\mathrm{C} = C_{64} \subset C_{81}$. We compare this with a shallow $[L=O(\log(MN))]$ 4-qubit exponentially-encoded QFFLM that is barren-plateau-free~\cite{Cerezo:2021cost} and offers a similar $\mathrm{resrc}$ order: it has $N_\mathrm{tp}^{\mathrm{QFFLM}}=16$ parameters and the shallowest possible $W_1$ and $W_2$ of $L=1$, where each single-qubit rotation in $W_l$ is parametrized by two training parameters corresponding to $Y$ and $Z$ gates. This QFFLM is of dimension~$81$, identical to that of $f(x)$, but possesses a \emph{16-dimensional} $S_\mathrm{Q} \subset C_{81}$ that can extend beyond $S_\mathrm{C} = C_{64}$ in certain directions. We define $r=\sqrt{\sum_{n=0}^{63} \vert c_n\vert^2 }/\sqrt{ \sum_{n=64}^{80} \vert c_n\vert^2}$, which measures the classical expressibility of $f(x)$, where $c_n$ is the target Fourier coefficient labeled with odd indices for cosines and even ones for sines. Figure~\ref{fig:sim} shows that when $r\ll1$, the QFFLM with exponential encoding expresses $f$ much more accurately, revealing the limitations of the CFFLM whenever $N_\mathrm{tp}^{\mathrm{CFFLM}}<\kappa^M$. Based on this general observation, QFFLMs with exponential encoding are attractive alternatives for learning general $f(\rvec{x})$ with high-frequency Fourier components.
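For concreteness, the classical-expressibility ratio $r$ defined above can be computed directly from a target's flattened Fourier-coefficient vector. The following is a minimal sketch; the coefficient vector and the helper name are our own illustrative choices, not data or code from the simulations:

```python
import numpy as np

def expressibility_ratio(c):
    """r = ||(c_0,...,c_63)|| / ||(c_64,...,c_80)|| for a flattened
    coefficient vector c of length kappa^M = 81 (M = 1 here)."""
    c = np.asarray(c, dtype=float)
    return np.linalg.norm(c[:64]) / np.linalg.norm(c[64:])

# A target concentrated in the high-index tail (indices >= 64) has r << 1;
# this is the regime where the 64-dimensional CFFLM fails.
c = np.zeros(81)
c[70] = 1.0   # dominant high-index (high-frequency) coefficient
c[5] = 0.1    # small low-index contribution
r = expressibility_ratio(c)
assert r < 1
```

Targets with $r\gg1$ lie almost entirely inside $S_\mathrm{C}=C_{64}$, which is why the CFFLM only struggles in the $r\ll1$ regime.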
Figure~\ref{fig:ethanol} presents a machine-learning example in physics where high-dimensional QFFLMs with exponential data encoding are applied to express the unknown target function. The \texttt{revised-MD17 dataset}~\cite{Christensen2020,MD17,Rupp:2012fast} is used to train QFFLMs to learn the potential-energy surface of the ethanol molecule based on its atomic positions and nuclear charges, preprocessed into an ($M=36$)-variate function-learning problem. To cope with such a large $M$, serial data-reuploading topologies are employed. The naive and exponential underparametrized QFFLMs respectively correspond to dimensions $7^{36}$ and $27^{36}$. Figure~\ref{fig:cali} compares the naive- and exponential-encoding schemes for learning California's housing prices~($M=8$). The quantum circuits employed for this problem take on a different \emph{ansatz}, namely the strongly-entangled-layers \emph{ansatz}~\cite{pennylane}. The Reader may consult Appendix~\ref{app:four_examples} for more details regarding the simulation of these problems.
We see that exponential encoding shows increasingly better training-loss performance as $L$ increases. On the other hand, the test-loss performances contrast sharply between the two machine-learning problems. For the potential-energy-surface learning problem, averaging over numerical experiments with various randomized seed values shows consistently higher test losses using exponential encoding. We speculate that this could arise from a model-frequency spectrum that is much larger than that of the actual target function, resulting in aliasing and overfitting~\cite{Peters:2022generalization}. In marked contrast, the test losses for exponential encoding in the housing-price learning problem coincide faithfully with the corresponding training losses, which are much lower than those of naive encoding for $L=3$. This suggests the existence of other target-function learning problems that do require the assistance of exponential encoding for resource efficiency. Further study of exponential-model generalizability is an interesting follow-up that is beyond the scope of this article.
\begin{figure}\label{fig:cali}
\end{figure}
\section{Discussions} We proposed the exponential data-encoding scheme for variational quantum machine learning, which generates a Fourier-featured linear model of exponentially large frequency spectra given a small number of encoding gates or qubits. This not only reduces computational resources compared to existing data-reuploading models, but also offers a training-resource advantage over their classical counterparts using only polynomial-depth circuits. Exponential encoding is an essential ingredient for a quantum resource advantage, since frequency spectra that are only polynomially large in the qubit or encoding-gate number can be constructed resource-efficiently with classical models, as in Ref.~\cite{classicalsurrogate}. For the same resource order, quantum and classical models exhibit very different function expressivities, such that the former can express functions outside the classically-expressible region.
In Refs.~\cite{Mitarai2018QCL,Caro:2021encoding-dependent}, an exponential number of basis functions was discussed, and the possibility of a resource advantage was hinted at. Here, we provided an explicit hardware-efficient methodology to control the presence of Fourier basis functions and showed the existence of a resource advantage under physically-realistic constraints. Basis transformations between Fourier and polynomial types are briefly discussed in Appendix~\ref{app:bases}.
While exponential encoding allows any quantum supervised-learning model to flexibly acquire an arbitrary Fourier-frequency spectrum just by adjusting the training-data encoding weights, the overall function expressivity still depends on the coverage of the Fourier coefficients, which is limited by the training-gate number and circuit-{\it ansatz} universality. In the NISQ era with limited quantum resources, one is restricted to variational-quantum models with either highly extensive spectra and underparametrized {\it ans{\"a}tze}, or non-extensive spectra and fully-parametrized {\it ans{\"a}tze}. Since potentially advantageous quantum supervised-learning models are the underparametrized ones, we believe that deeper studies of such models in terms of their generalizability and expressivity are pertinent to NISQ applications. Furthermore, our work suggests that the exploration of classical Fourier-featured models will help bridge concepts between classical and quantum learning methods, thereby unraveling the latter's hidden potentials.\\
\begin{acknowledgments}
The authors are grateful for the insightful and beneficial discussions with C.~Oh. This work is supported by HMC, the National Research Foundation of Korea (NRF) grants funded by the Korea government~(Grant Nos.~NRF-2020R1A2C1008609, NRF-2020K2A9A1A06102946, NRF-2019R1A6A1A10073437 and NRF-2022M3E4A1076099) \emph{via} the Institute of Applied Physics at Seoul National University, and the Institute of Information \& Communications Technology Planning \& Evaluation (IITP) grant funded by the Korea government (MSIT) (IITP-2021-0-01059 and IITP-2022-2020-0-01606). \end{acknowledgments}
\appendix
\begin{figure*}\label{fig:freq_spect}
\end{figure*}
\section{Exponential encoding schemes}
\subsection{Derivation of Eq.~\eqref{eq:quantum_model}} \label{app:deriv_1}
Beginning with the $M$-variate encoding unitary operator for $N$-qubit systems, \begin{equation}
V(\rvec{x}) = \bigotimes_{m=1}^{M}\bigotimes_{n=1}^{N}\E{-\mathrm{i}\,\beta_{mn}x_m\,Z/2} = \bigotimes_{m=1}^{M}\sum_{\rvec{k}_m\in\{0,1\}^N} \ket{\rvec{k}_m}\E{-\mathrm{i}\,\lambda_{\rvec{k}_m}x_m}\bra{\rvec{k}_m}\,, \end{equation} which comes from the product encoding in the single-qubit $Z$-Pauli rotation, the multivariate circuit-unitary action on the initial state ket $\ket{\rvec{0}}$ simplifies to \begin{widetext} \begin{align}
U_{\rvec{x};\rvec{\theta}}\ket{\rvec{0}}=&\,W_2(\rvec{\theta}_2)V(\rvec{x})W_1(\rvec{\theta}_1)\ket{\rvec{0}}\nonumber\\
=&\,W_2(\rvec{\theta}_2)\sum_{\rvec{k}'_1\in\{0,1\}^N}\ldots\sum_{\rvec{k}'_M\in\{0,1\}^N}\ket{\rvec{k}'_1,\rvec{k}'_2,\ldots,\rvec{k}'_M}\E{-\mathrm{i}\left(\lambda_{\rvec{k}_1'}x_1+\ldots+\lambda_{\rvec{k}_M'}x_M\right)}\bra{\rvec{k}'_1,\rvec{k}'_2,\ldots,\rvec{k}'_M}W_1(\rvec{\theta}_1)\ket{\rvec{0}}\nonumber\\
=&\,\sum_{\rvec{k}'_1\in\{0,1\}^N}\ldots\sum_{\rvec{k}'_M\in\{0,1\}^N}W_2D_1\ket{\rvec{k}'_1,\rvec{k}'_2,\ldots,\rvec{k}'_M}\E{-\mathrm{i}\sum^M_{m=1}\lambda_{\rvec{k}_m'}x_m}\,, \end{align} \end{widetext} where \begin{align}
D_1=&\,\sum_{\rvec{k}'_1\in\{0,1\}^N}\ldots\sum_{\rvec{k}'_M\in\{0,1\}^N}\ket{\rvec{k}'_1,\rvec{k}'_2,\ldots,\rvec{k}'_M}\nonumber\\
&\,\qquad\qquad\times\bra{\rvec{k}'_1,\rvec{k}'_2,\ldots\rvec{k}'_M}W_1\ket{\rvec{0}}\bra{\rvec{k}'_1,\rvec{k}'_2,\ldots,\rvec{k}'_M} \end{align} is diagonal in the multivariate computational basis $\{\ket{\rvec{k}'_1,\rvec{k}'_2,\ldots\rvec{k}'_M}\}$, and all arguments are dropped for the sake of notational simplicity. Therefore \begin{align}
f_\mathrm{Q}(\rvec{x})=&\,\sum_{\rvec{k}_1,\rvec{k}'_1\in\{0,1\}^N}\ldots\sum_{\rvec{k}_M,\rvec{k}'_M\in\{0,1\}^N}\E{-\mathrm{i}\sum^M_{m=1}\left(\lambda_{\rvec{k}_m'}-\lambda_{\rvec{k}_m}\right)x_m}\nonumber\\
&\,\qquad\times\bra{\rvec{k}_1,\rvec{k}_2,\ldots,\rvec{k}_M}W_2^\dag D_1^\dag Z_NW_2D_1\ket{\rvec{k}'_1,\rvec{k}'_2,\ldots,\rvec{k}'_M}\,. \end{align}
The integral differences $\lambda_{\rvec{k}_m'}-\lambda_{\rvec{k}_m}$ constitute the entire frequency spectrum $\Omega_m$ for the $m$th variable $x_m$. The explicit set $\Omega_m$ would then depend on the weights $\beta_{mn}$ attributed to the encoding of each qubit. More generally, we may write \begin{equation}
f_\mathrm{Q}(\rvec{x})=\sum_{n_1 \in \Omega_1}\sum_{n_2 \in \Omega_2}\ldots\sum_{n_M \in \Omega_M} \widetilde{c}_{n_1,n_2,\ldots,n_M}\,\E{-\mathrm{i}\,\rvec{n}\bm{\cdot}\rvec{x}} \end{equation} as a partial Fourier series in terms of the spectra $\Omega_1$, $\Omega_2$, \ldots, $\Omega_M$, where the coefficients $\widetilde{c}_{n_1,n_2,\ldots,n_M}$ are indeed linear combinations of the amplitudes $\bra{\rvec{k}_1,\rvec{k}_2,\ldots,\rvec{k}_M}W_2^\dag D_1^\dag Z_NW_2D_1\ket{\rvec{k}'_1,\rvec{k}'_2,\ldots,\rvec{k}'_M}$ with constituents $\rvec{k}_m,\rvec{k}'_m$ corresponding specifically to the correct integer $n_m$ for all $1\leq m\leq M$ simultaneously.
\begin{figure*}\label{fig:step}
\end{figure*}
\subsection{Recurrence relation in Eq.~(2)} \label{app:exp_enc}
We shall investigate the possible frequencies of the finite Fourier series induced by the QFFLM for different choices of the $\beta_{mn}$s. Here, we look for integer-valued $\beta_{mn}$s since we only consider, without loss of generality, the input domain $[-\pi,\pi)^M$. Only the $M=1$ case is necessary to understand the situation, as the \emph{frequency spectra} of $M$-variate Fourier series are just Cartesian products of univariate spectra. We start with only a single Pauli encoding ($N=1$). With $V(x) = \E{-\mathrm{i}\beta_1 x\,Z/2}$ and upon setting $\beta_1 = 1$, we have $\Omega^{(1)}=\{-1, 0 ,1\}$ as the frequency spectrum~[see Fig.~\ref{fig:freq_spect}(a)].
If we next append one more encoding gate $\E{-\mathrm{i} \beta_2 x\,Z/2}$ to $V(x)$, we have $\Omega^{(2)}=\{-\beta_2 - 1, -\beta_2 , -\beta_2 + 1, - 1, 0 , 1,\beta_2 - 1, \beta_2, \beta_2 + 1\}$ as illustrated in Fig.~\ref{fig:freq_spect}(b). More generally, let $\Omega^{(k)}$ denote the $k$th frequency spectrum resulting from the use of $k$ encoding gates. Then $\Omega^{(k)}$ contains all elements of $\Omega^{(k-1)}$, along with the new ones generated by adding $\pm\beta_k$ to all elements of $\Omega^{(k-1)}$: \begin{equation}
\Omega^{(k)} = \{ \Omega^{(k-1)} - \beta_k , \Omega^{(k-1)} , \Omega^{(k-1)} + \beta_k\},
\label{eq:Omega} \end{equation} where $\Omega^{(k-1)} + \beta_k$ denotes the set obtained by adding $\beta_k$ to every element of $\Omega^{(k-1)}$. As $\Omega^{(k)}$ always contains elements that are symmetrically distributed about zero, to generate a maximally non-degenerate integer-valued frequency spectrum, the inequality \begin{align}
\max\left\{\alpha \in \Omega^{(k-1)}\right\} &< \beta_k - \max\left\{\alpha \in \Omega^{(k-1)}\right\}\nonumber\\
\text{or}\quad2\,\max\left\{\alpha \in \Omega^{(k-1)}\right\} &< \beta_k
\label{eq:nondegencondition} \end{align} is to be satisfied, where we can easily deduce that $\max\left\{\alpha \in \Omega^{(k-1)}\right\} = \sum_{j=1}^{k-1}\beta_j$.
If we want to construct a dense integer-valued spectrum of consecutive integers centered about 0, the recursive equation \begin{equation}
2\sum_{j=1}^{k-1}\beta_j + 1 = \beta_k \end{equation} must be satisfied; setting $\beta_1 = 1$ then gives $\beta_k=3^{k-1}$, which is the dense exponential encoding scheme in the main text. We can also construct maximally non-degenerate frequency spectra by choosing widely-spaced $\beta_k$s satisfying the inequality~\eqref{eq:nondegencondition}. For example, instead of using $\beta_k = 3^{k-1}$, the definition $\beta_k = l^{k-1}$ with $l>3$ would generate a frequency spectrum of cardinality $3^N$ as well. Such a choice, however, would result in a sparse frequency distribution and possibly a larger maximum frequency.
We can rather flexibly control the elements in $\Omega$ using Eq.~\eqref{eq:Omega}. For example, if we want to set the maximum frequency to some value $\beta^*$, then we may simply choose $N$ $\beta_j$s that satisfy $\sum_{j=1}^{N} \beta_j = \beta^*$. If $\beta^*$ is smaller than $3^N$ and $\beta_j$s are all integers, then the cardinality of the frequency spectrum will be smaller than $3^N$. This leads to a degenerate frequency spectrum that contains repeated integral frequency values, which is equivalent to assigning larger Fourier-coefficient weights to these values. Therefore, if we have some prior knowledge about the target function, such as some intuition about its Fourier frequencies, we can flexibly tweak the weights $\beta_j$ in the QFFLM so that it possesses the desired frequencies, and intentionally introduce degeneracy in its frequency spectrum to enhance the expressive power.
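The recurrence in Eq.~\eqref{eq:Omega} and the dense-versus-sparse trade-off are easy to check numerically. Below is a small sketch (a helper of our own, not code from the simulations) that builds $\Omega^{(k)}$ for a given list of integer weights $\beta_k$:

```python
def spectrum(betas):
    """Frequency spectrum from the recurrence
    Omega^(k) = {Omega^(k-1) - beta_k, Omega^(k-1), Omega^(k-1) + beta_k},
    starting from Omega^(0) = {0}."""
    omega = {0}
    for b in betas:
        omega = omega | {w - b for w in omega} | {w + b for w in omega}
    return omega

N = 4  # number of encoding gates

# Dense exponential encoding beta_k = 3^(k-1): all consecutive integers
# in [-(3^N - 1)/2, (3^N - 1)/2], saturating the 3^N cardinality bound.
dense = spectrum([3**k for k in range(N)])
assert dense == set(range(-(3**N - 1) // 2, (3**N - 1) // 2 + 1))
assert len(dense) == 3**N

# A wider spacing beta_k = 5^(k-1) also satisfies the non-degeneracy
# inequality and gives 3^N distinct frequencies, but with gaps and a
# larger maximum frequency.
sparse = spectrum([5**k for k in range(N)])
assert len(sparse) == 3**N
assert max(sparse) > max(dense)
```

The same helper also illustrates degenerate spectra: any integer choice with $\sum_j\beta_j=\beta^*<3^N$ yields fewer than $3^N$ distinct frequencies, with the repeated values effectively receiving larger Fourier-coefficient weights.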
\begin{figure*}
\caption{Quality of 2D image expression of the handwritten digit ``2'', conveyed through the training accuracy in image regression. We consider $L=10$ layers for each trainable unitary $W_1$ and $W_2$, and only vary the training-data encoding scheme. QFFLM training is carried out with the Adam gradient optimizer of learning rate~0.03, for 300 iterative steps. To compare the expressivity performances of QFFLMs with different encoding schemes, we choose the three-tuple of encoding weights $\rvec{\beta}=(\beta_{11},\beta_{12},\beta_{13})=(\beta_{21},\beta_{22},\beta_{23})$ as (a)~$(1,1,1)$, (b)~$(1,2,3)$, (c)~$(1,3,9)$ and (d)~$(1,7,49)$. Reconstructed images~(c) and (d) are predictions of degree-$d_\mathrm{F}=3^3 = 27$ bivariate Fourier series that are respectively dense and sparse in frequency spectra.}
\label{fig:mnist}
\end{figure*}
\subsection{Additional examples with (non)dense exponential encoding} \label{app:four_examples}
Two additional important demonstrations of the function expressivity of QFFLMs shall be presented in this subsection. All simulations for QFFLMs are performed with the \texttt{Pennylane} Python library~\footnote{Visit \url{https://pennylane.ai/}.}. The loss function $\mathcal{L}_{\rvec{\theta}}$ chosen to quantify the model training accuracy is the mean squared error $\mathcal{L}_{\rvec{\theta}}\propto\sum_{j}[f_{\mathrm{Q}}(\rvec{x}_j)-y_j]^2$ between the QFFLM predictions $f_{\mathrm{Q}}(\rvec{x}_j)$ and the target outputs $y_j$ for the dataset $\{\rvec{x}_j\}$. Since the training dataset is large and covers the complete function period uniformly, the training accuracy of the QFFLM defined by $\mathcal{L}_{\rvec{\theta}}$ directly reflects the model expressivity. Hence, poorly-expressive models naturally result in a large nonzero $\mathcal{L}_{\rvec{\theta}}$ bias. When target functions are unknown, we propose a bottom-up approach: gradually increase the number of layers $L$ until both the saturated training- and test-loss values reach their respective minima, whilst keeping all other aspects of training constant.
\subsubsection{Step function reconstruction}
We employ the parallel QFFLM possessing a ``hardware-efficient'' \emph{ansatz} for $W_l$, as illustrated in Fig.~\ref{fig:QSL_model}, to investigate other examples of function expression. As the first example, we look at model expressivity for the univariate step function defined as \begin{equation}
f_{\mathrm{step}}(x) = \begin{cases}
\,\dfrac{1}{2}\, &\text{if } \, 0 \leq x \leq \pi\,, \\
-\dfrac{1}{2}\, &\text{if } \, -\pi < x < 0\,.
\end{cases} \end{equation} Since expressing plateaued functions requires Fourier-series models of very extensive frequency spectra $\Omega$, exponentially-encoded QFFLMs are ideal for this purpose. In Fig.~\ref{fig:step}, we see that increasing the qubit number quickly improves expressivity.
\subsubsection{Two-dimensional image regression}
We shall now discuss the second example of bivariate function expression. For this, we consider a 2D image of the handwritten digit~``2'' extracted from the MNIST dataset~\footnote{Visit the official MNIST website at \url{http://yann.lecun.com/exdb/mnist/}.} as the target image for demonstrating the expressive power of exponentially-encoded QFFLMs. The target image array has a resolution of $28 \times 28$, where each of the 784 array values has been normalized to have a magnitude bounded by~1. This array, therefore, corresponds to the set of outputs of a bivariate target function $|f(\rvec{x})|\leq1$ that is to be learnt with a QFFLM. We define this bivariate QFFLM using a ``hardware-efficient'' \emph{ansatz} with six qubits (three qubits for each feature variable $x_1$ and $x_2$).
Figure~\ref{fig:mnist} shows the expressivity of various QFFLMs with different $\beta_{mn}$ encoding schemes according to the resulting predicted images upon model training. Both the naive~[Fig.~\ref{fig:mnist}(a)] and linear-step~[Fig.~\ref{fig:mnist}(b)] encoding schemes show poor expressivities. The exponential-encoding schemes corresponding to Figs.~\ref{fig:mnist}(c) and~(d) result in $K^M=(3^3)^2=729$-dimensional QFFLMs. Because the predominantly uniform image-array values representing the image background are expressible only by a Fourier series with a large frequency spectrum, it turns out that a nondense exponential-encoding scheme ($\beta_{mn}=7^{n-1}$) gives a much more efficient QFFLM for expressing such handwritten images than the dense one ($\beta_{mn}=3^{n-1}$). Thus, if one has some prior information about the target function, one can control the weights $\beta_{mn}$ to achieve better expressivity.
\subsubsection{Molecular-dynamics dataset of ethanol}
\begin{figure*}
\caption{Circuit diagrams for training with the revised MD17 and California housing-price datasets. (a)~Both circuits share a very similar structure that involves an initial trainable block $A$ and three classical-data encoding (or reuploading) blocks $B_1$, $B_2$ and $B_3$. (b)~The specific details of $A$ and $B_l$ that depend on the task at hand differ by the number of encoding unitary operators ($\widetilde{V}_l$ or $V_l$ depending on which of the two tasks is executed) and trainable modules $W$, (c)~where the latter respectively involve $N=6$ and $N=8$ qubits. The structure of $V_l$ follows that given in Fig.~\ref{fig:QSL_model}. In~(c), the structure of $\widetilde{V}_l(\Phi_1)$ is explicitly shown as an example.}
\label{fig:SMcircuits}
\end{figure*}
The complete revised-MD17~(rMD17) data bank contains molecular-dynamics training datasets of ten molecules~\cite{Christensen2020}. The dataset of each molecule consists of four types of data, namely the atomic numbers, atomic spatial coordinates, atomic force vectors~(not considered in our context), and the energy scalars. Each dataset allows us to learn the potential-energy surface of the corresponding molecule. As an example, we select the dataset of the ethanol molecule, which contains nine atoms, where each input training datum comprises the set of nine atomic numbers $\{Z_j\}^9_{j=1}$ and the set of nine spatial-coordinate columns $\{\rvec{r}_j\}^9_{j=1}$ (each three-dimensional). This is paired with the corresponding energy scalar as the output training datum.
The raw input data are further preprocessed into two-body Coulomb-potential functions. More specifically, given the datum $\{\rvec{r}_j,Z_j\}^9_{j=1}$, we define $\Phi_{j_1 j_2} = Z_{j_1}Z_{j_2}/ |\rvec{r}_{j_1} - \rvec{r}_{j_2}|$ for $j_2>j_1$, where $Z_j$ is the atomic number of the $j$th atom, so that one obtains $M=9\cdot8/2 = 36$ new features per datum. As we are encoding these features into single-qubit gates, we further normalize each of them to lie within the interval $[0,2\pi]$. Each input datum is therefore now a 36-dimensional column $\rvec{\Phi}$. This preprocessing procedure suits the learning task since the potential-energy surface originates from Coulomb interactions among the atomic charges~\cite{MD17,Rupp:2012fast}. The output data were also standardized and normalized to lie in the range $[-1,1]$. We have chosen $2000$ data from the rMD17 ethanol dataset and split them into $1000$ training data and $1000$ test data. Training is done with the full batch using the Adam optimizer, and the test loss is computed from $200$ randomly-sampled test data out of the $1000$.
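This preprocessing step can be sketched as follows. The coordinates below are random placeholders rather than actual rMD17 data, and the per-datum min--max rescaling into $[0,2\pi]$ is an illustrative assumption of ours:

```python
import numpy as np

def coulomb_features(r, z):
    """Two-body Coulomb features Phi_{j1 j2} = Z_{j1} Z_{j2} / |r_{j1} - r_{j2}|
    over all pairs j2 > j1; r has shape (n_atoms, 3), z has shape (n_atoms,)."""
    n = len(z)
    feats = [z[j1] * z[j2] / np.linalg.norm(r[j1] - r[j2])
             for j1 in range(n) for j2 in range(j1 + 1, n)]
    return np.array(feats)

# Ethanol (C2H6O) has 9 atoms, giving M = 9*8/2 = 36 features per datum.
rng = np.random.default_rng(0)
r = rng.normal(size=(9, 3))                 # placeholder atomic coordinates
z = np.array([6, 6, 8, 1, 1, 1, 1, 1, 1])   # C, C, O, and six H atomic numbers
phi = coulomb_features(r, z)
assert phi.shape == (36,)

# Rescale into [0, 2*pi] before feeding the features to single-qubit gates.
phi_scaled = 2 * np.pi * (phi - phi.min()) / (phi.max() - phi.min())
assert phi_scaled.min() >= 0 and phi_scaled.max() <= 2 * np.pi
```

Each such 36-dimensional column $\rvec{\Phi}$ then constitutes one input datum for the serial data-reuploading circuit described next.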
For this problem, because of the large number $M=36$ of features, we employed a six-qubit circuit and reuploaded the data three times~[see Fig.~\ref{fig:SMcircuits}]. Single-qubit rotation gates $\mathrm{Rot}(\theta_1,\theta_2,\theta_3) = R_{Z}(\theta_1)R_Y(\theta_2)R_Z(\theta_3) = R_{Z}(\theta_1)F^{\dag}R_Z(\theta_2)F R_Z(\theta_3)$ are used to encode three features each, where $Y = F^{\dag}ZF$ and $R_G$ is the Pauli-$G=X,Y,Z$ rotation gate. Each reuploading is done with a reuploading block $B_{l=1,2,3}$. One such block is composed of two encoding layers (both equal to $V_l$) and two trainable layers ($W_{l1},W_{l2}$), which are interspersed. A trainable layer $W_{lj}$ is in turn composed of $L$ hardware-efficient layers. These encoding layers are used to encode a \emph{single} training input datum comprising all $M=36$ features packed into the column $\rvec{\Phi}$ using a total of 12 $\mathrm{Rot}$ gates. If we split this column into two---$\rvec{\Phi}=(\rvec{\Phi}_1^\top\,\,\,\rvec{\Phi}_2^\top)^\top$---each with 18 elements, then one $V_l$ uses six gates to encode the first $18$ elements of $\rvec{\Phi}$ ($\rvec{\Phi}_1$), and another $V_l$ uses the other six gates to encode the remaining 18 elements ($\rvec{\Phi}_2$). Such a split encoding is done three times, where each time a different weight $\beta_l$ is used. Remember that each such triple-encoding procedure is carried out on a \emph{single} datum each time, which is to be repeated for all data.
Therefore, in this serial (triple-reuploading) configuration, we effectively have three encoding gates supplying three different encoding weights $\beta_1$, $\beta_2$ and $\beta_3$ for every feature. The total number of free trainable circuit parameters is $L\times6\times3+3 \times L \times 6 \times 2\times3 = 126 L$.
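The $126L$ parameter count can be verified by tallying the trainable layers of this architecture; the following is a plain arithmetic check (the variable names are ours):

```python
# Parameter tally for the six-qubit, triple-reuploading circuit:
# every hardware-efficient layer carries 6 Rot gates with 3 angles each.
L = 4                  # hardware-efficient layers per trainable layer (example value)
per_layer = 6 * 3      # 6 qubits x 3 angles per Rot gate

initial_block = L * per_layer        # initial trainable block: L x 6 x 3
reuploading = 3 * 2 * L * per_layer  # 3 blocks x 2 trainable layers x L x 6 x 3

total = initial_block + reuploading
assert total == 126 * L   # 18L + 108L = 126L
```

The same tally with eight qubits, three angles per gate, and four trainable modules reproduces the $96L$ count quoted for the housing-price circuit below.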
We compared the performance between naive-encoding and our exponential-encoding models through the aforementioned data-reuploading scheme. Both models differ only by the encoding type, and all other settings are identical. They are trained with the same input/output data and evaluated with exactly the same test batch data. We also tested the performance by increasing $L$ and witnessed a significant enhancement in learning with exponential encoding. We note that naively- and exponential-encoded QFFLMs respectively generate $(2\cdot3+1)^{36}=7^{36}$ and $(3^3)^{36}=27^{36}$ Fourier basis functions, which are very architecturally challenging for CFFLMs to handle.
\subsubsection{California housing-price dataset}
This is one of the basic machine-learning benchmarking datasets provided by the \texttt{scikit-learn} Python package. It includes 20640 data, with each input datum being an eight-dimensional vector that stores information about a house, and the corresponding output datum recording the house price. The main task is to predict the housing price using an eight-featured model. We train both the naively- and exponentially-encoded QFFLMs with 16512 training data, and evaluate them with the rest as test data. Training is done with randomized batches of 8000 data using the Adam optimizer. Model testing is performed on the full set of test data. All input features are standardized and normalized to $[-\pi,\pi]$, and the output data are also normalized to $[0.03,1.0]$.
Here, we used exactly the same \emph{ansatz} as in Ref.~\cite{classicalsurrogate}, where the trainable unitary operators $W_l$ are defined using the \texttt{StronglyEntanglingLayers} Python subpackage from \texttt{Pennylane}~\cite{pennylane}. The Reader may refer to Fig.~\ref{fig:SMcircuits}(c) for a visual representation of the \emph{ansatz}. A total of four trainable unitary modules and three data-encoding unitary modules are used, which supply a total of $L\times8\times3+3\times L\times8\times3=96L$ free trainable-circuit parameters. As always, both QFFLMs are trained with exactly the same data: only the encoding strategy is different. Here, we again observe a better learning performance with exponential encoding~(see Fig.~\ref{fig:cali}).
\subsection{From the trigonometric basis to another} \label{app:bases}
The trigonometric bases that are inherent to FFLMs are canonical to Fourier-series representations. However, depending on the way training data are encoded, one is free to employ a different basis to express functions. For certain function classes that may be more naturally represented by polynomial functions, there is a reason to invoke a polynomial-type basis instead. If we rescale the (finite) domain of a univariate function $f(x)$ to $-1\leq x\leq 1$, we may make the connection between trigonometric and polynomial functions with the Chebyshev polynomials: \begin{align}
\mathrm{T}_n(\cos x) =&\, \cos(nx)\quad\quad\quad\,\,(\text{First kind})\nonumber\,,\\
\mathrm{U}_n(\cos x)\sin x =&\, \sin((n+1)x)\quad(\text{Second kind})\,. \end{align} Then, a straightforward encoding $x\mapsto\cos^{-1}x$ of training data $x$ allows us to map the trigonometric basis $\{1, \cos{x}, \sin{x}, \ldots, \cos{(nx)}, \sin{(nx)}\}$ into the new overcomplete basis set $\{1, x, \sqrt{1-x^2}, 2x^2-1, x\sqrt{1-x^2}, \ldots, \mathrm{T}_n(x), \mathrm{U}_{n-1}(x)\sqrt{1-x^2}\}$, where the Chebyshev polynomials of each kind are themselves a complete basis. Clearly, if we adopt the exponential-encoding scheme $\beta_n=3^{n-1}$, we can construct an exponentially large polynomial basis for a given number of encoding gates. Similarly, we can also construct an exponentially large nonlinear function basis by using such exponential weights and the encoding $x\mapsto\beta_n \cos^{-1}(g(x))$ for some nonlinear function $g(x)$. Similar kinds of nonlinear data encodings have been used to prove certain universality properties of QML models in Refs.~\cite{Perez-Salinas:2020aa, Goto:2021universal}.
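Both Chebyshev identities and the $x\mapsto\cos^{-1}x$ encoding are easy to verify numerically. Here is a minimal sketch using NumPy's first-kind Chebyshev class and the standard three-term recurrence $\mathrm{U}_n = 2x\,\mathrm{U}_{n-1}-\mathrm{U}_{n-2}$ for the second kind:

```python
import numpy as np

x = np.linspace(-1, 1, 201)
theta = np.arccos(x)  # the encoding x -> arccos(x), with theta in [0, pi]

# First kind: T_n(x) = cos(n * arccos x)
for n in range(6):
    Tn = np.polynomial.chebyshev.Chebyshev.basis(n)
    assert np.allclose(Tn(x), np.cos(n * theta), atol=1e-9)

# Second kind via the recurrence U_0 = 1, U_1 = 2x, U_n = 2x U_{n-1} - U_{n-2};
# check U_n(cos theta) * sin theta = sin((n+1) theta).
U = [np.ones_like(x), 2 * x]
for n in range(2, 6):
    U.append(2 * x * U[-1] - U[-2])
for n in range(6):
    assert np.allclose(U[n] * np.sin(theta), np.sin((n + 1) * theta), atol=1e-9)
```

The low-degree members reproduce exactly the overcomplete basis listed above, e.g. $\mathrm{T}_2(x)=2x^2-1$ and $\mathrm{U}_1(x)\sqrt{1-x^2}=2x\sqrt{1-x^2}$.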
\section{Barren-plateau phenomenon} \label{app:BPP}
We explicitly derive the corresponding data-encoding-independent barren-plateau statements for the mean squared-error loss function \begin{equation}
\mathcal{L}_{\rvec{\theta}}=\int(\mathrm{d}\rvec{x})[f_\mathrm{Q}(\rvec{x})-f(\rvec{x})]^2\,,
\label{eq:MSEcontinuous} \end{equation} where for simplicity we shall assume that the training dataset is sufficiently large, so that the discrete average over this dataset conveniently becomes an integral average with respect to the $M$-variate normalized measure $(\mathrm{d}\rvec{x})$ over the complete periods. The lengthy, yet straightforward, calculations may be broken down into several stages.
\subsection{Useful identities}
We generalize the analysis of the barren-plateau phenomenon to arbitrary $N$-qubit serial circuits containing ${L_{\mathrm{train}}}$ trainable unitary modules $\{W_l\}^{L_{\mathrm{train}}}_{l=1}$ that each have a circuit depth polynomial in $N$, so that a randomized $W_l$ for many classes of circuit \emph{ans{\"a}tze} (including those consisting of regularly repeated arrangements of randomized single-qubit and CNOT gates) may be approximated as a two-design~\cite{harrow_random_2009}. Such a generalization covers arbitrary data-reuploading cases, where classical-data encoding occurs at multiple instances throughout the quantum circuit. Without loss of generality, the numerical examples presented in the main text refer to ${L_{\mathrm{train}}}=2$.
In view of this, the following integration result \begin{align}
&\,\int(\mathrm{d} U)_\mathrm{Haar}\, U^*_{j'_1k'_1}U^*_{j'_2k'_2}U_{j_1k_1}U_{j_2k_2}\nonumber\\ =&\,\dfrac{\delta_{j_1,j'_1}\delta_{j_2,j'_2}\delta_{k_1,k'_1}\delta_{k_2,k'_2}+\delta_{j_1,j'_2}\delta_{j_2,j'_1}\delta_{k_1,k'_2}\delta_{k_2,k'_1}}{d^2-1}\nonumber\\
&\,-\dfrac{\delta_{j_1,j'_1}\delta_{j_2,j'_2}\delta_{k_1,k'_2}\delta_{k_2,k'_1}+\delta_{j_1,j'_2}\delta_{j_2,j'_1}\delta_{k_1,k'_1}\delta_{k_2,k'_2}}{d(d^2-1)}
\label{eq:Weingarten2} \end{align} in terms of the computational matrix elements $U_{jk}=\opinner{j}{U}{k}$ for a $d=2^{N}$-dimensional random unitary operator $U$ distributed according to the Haar measure $(\mathrm{d} U)_\mathrm{Haar}$, and the basic identity \begin{equation}
\Haar{U\,O\,U^\dag}=\int(\mathrm{d} U)_\mathrm{Haar}\,U\,O\,U^\dag=\frac{1}{d}\tr{O}\,1\,, \end{equation} are crucial~\cite{Puchala_Z._Symbolic_2017}; they are all we need to derive the statistical statements below. By tracking all indices, it is possible to derive another useful integral identity \begin{align}
&\,\Haar{U^{\otimes2}\,O\, U^{\dag\,\otimes2}}=\int(\mathrm{d} U)_\mathrm{Haar}\,U^{\otimes2}\,O\, U^{\dag\,\otimes2}\nonumber\\
=&\,\left[\dfrac{\tr{O}}{d^2-1}-\dfrac{\tr{O\tau}}{d(d^2-1)}\right]1+\left[\dfrac{\tr{O\tau}}{d^2-1}-\dfrac{\tr{O}}{d(d^2-1)}\right]\tau\,,
\label{eq:HaarV2} \end{align} where $\tau$ is the swap operator with the trace property $\tr{O_1\otimes O_2\,\tau}=\tr{O_1O_2}=\tr{U'^{\otimes2}\,O_1\otimes O_2\,U'^{\dag\otimes2}\,\tau}$ for any two observables $O_1$ and $O_2$, and unitary operator $U'$. When $O$ is a Pauli observable, we have \begin{equation}
\Haar{U^{\otimes2}\,O^{\otimes2}\, U^{\dag\,\otimes2}}=\dfrac{d\,\tau-1}{d^2-1}\,.
\label{eq:pauliV2} \end{equation}
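The first-moment identity $\Haar{U\,O\,U^\dag}=\tr{O}/d$ (times the identity) can be checked by Monte-Carlo sampling of Haar-random unitaries; the sketch below generates them via the QR decomposition of complex Ginibre matrices, with sample size and tolerance being our own illustrative choices:

```python
import numpy as np

def haar_unitary(d, rng):
    """Haar-random d x d unitary via QR of a complex Ginibre matrix,
    with the R-diagonal phases fixed so the distribution is exactly Haar."""
    g = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(g)
    phases = np.diagonal(r) / np.abs(np.diagonal(r))
    return q * phases  # multiply column j of q by phase_j

rng = np.random.default_rng(7)
d = 4                                          # N = 2 qubits
O = np.kron(np.diag([1.0, -1.0]), np.eye(2))   # a traceless Pauli observable

# Monte-Carlo estimate of <U O U^dag>_Haar; for tr(O) = 0 this should vanish.
n_samples = 4000
avg = sum(haar_unitary(d, rng) @ O @ haar_unitary(d, rng).conj().T * 0
          for _ in range(0))  # placeholder removed below
avg = np.zeros((d, d), dtype=complex)
for _ in range(n_samples):
    U = haar_unitary(d, rng)
    avg += U @ O @ U.conj().T
avg /= n_samples
assert np.allclose(avg, np.trace(O) / d * np.eye(d), atol=0.05)
```

The second-moment identity in Eq.~\eqref{eq:HaarV2} can be checked the same way by averaging $U^{\otimes2}O^{\otimes2}U^{\dag\otimes2}$, at the cost of $d^2\times d^2$ matrices.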
As we are discussing gradients, the forms of $W_l^{(1)}$ and $W_l^{(2)}$ in the derivative $\partial_{\mu l} W_l=-\frac{\mathrm{i}}{2}\,W^{(1)}_l\sigma_{\mu l} W^{(2)}_l$ with respect to the $\mu$th parameter $\theta_{\mu l}$ in the $l$th training module need not fulfill the architectural constraint of a two-design. This is the case when the $\theta_{\mu l}$ to which the gradient is taken lies near the edges of the trainable module $W_l$, so that either $W_l^{(1)}$ or $W_l^{(2)}$ is thin. Furthermore, for the majority of parameters in the bulk trainable modules $(1<l<{L_{\mathrm{train}}})$, both $W_l^{(1)}$ and $W_l^{(2)}$ need not possess Haar~first and second moments in order to obtain analytical statements. The subsequent calculations are also made \emph{independent} of the encoding unitary operators.
In subsequent discussions, we shall categorize the barren-plateau statements into three separate cases, namely \begin{align*}
\textbf{Case I:} & \,\,\,1<l<{L_{\mathrm{train}}}\,,\\
\textbf{Case II:} & \,\,\,l=1\,,\\
\textbf{Case III:} & \,\,\,l={L_{\mathrm{train}}}\,. \end{align*}
\subsection{A simple manifestation of two-design averages on $f_\mathrm{Q}(\rvec{x})$}
By nature of the ``hardware-efficient'' {\it ansatz}, random circuit initialization implies that \begin{widetext} \begin{align}
\Haar{f_\mathrm{Q}(\rvec{x})}=&\,\Haar{\opinner{\rvec{0}}{W_1^{\dag}V_1(\rvec{x})^{\dag}\ldots W_{{L_{\mathrm{train}}}-1}^{\dag}V_{{L_{\mathrm{train}}}-1}(\rvec{x})^{\dag} W_{L_{\mathrm{train}}}^{\dag}\,O\, W_{L_{\mathrm{train}}}V_{{L_{\mathrm{train}}}-1}(\rvec{x})W_{{L_{\mathrm{train}}}-1}\ldots V_1(\rvec{x})W_{1}}{\rvec{0}}}=0\,,\nonumber\\
\Haar{f_\mathrm{Q}^2(\rvec{x})}=&\,\Haar{\opinner{\rvec{0}}{W_1^{\dag\otimes2}V_1(\rvec{x})^{\dag\otimes2}\ldots W_{{L_{\mathrm{train}}}-1}^{\dag\otimes2}V_{{L_{\mathrm{train}}}-1}(\rvec{x})^{\dag\otimes2} W_{L_{\mathrm{train}}}^{\dag\otimes2}\,O^{\otimes2}\, W_{L_{\mathrm{train}}}^{\otimes2}V_{{L_{\mathrm{train}}}-1}(\rvec{x})^{\otimes2}\ldots V_1(\rvec{x})^{\otimes2}W_1^{\otimes2}}{\rvec{0}}}=\dfrac{1}{d+1}\,, \end{align} \end{widetext} upon recalling Eq.~\eqref{eq:pauliV2}. We now see a first consequence of random circuit initialization for two-design-approximable circuits, namely $\Haar{f_\mathrm{Q}(\rvec{x})}=0$ and $\Haar{f_\mathrm{Q}^2(\rvec{x})}=\VAR{}{f_\mathrm{Q}}\sim O(1/d)$; this is precisely the barren-plateau phenomenon. We shall see that the same mechanism gives rise to analogous statements for the loss function $\mathcal{L}_{\rvec{\theta}}$.
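These two moments can also be checked numerically. Since the first column of a Haar-random unitary is a Haar-random state, it suffices to sample normalized complex Gaussian vectors; $d=4$ and $O=Z\otimes1$ below are illustrative choices:

```python
import numpy as np

# Haar moments of f_Q = <0| U^dag O U |0> for a traceless Pauli O with d = 4.
rng = np.random.default_rng(1)
d = 4
z = np.array([1.0, 1.0, -1.0, -1.0])            # eigenvalues of O = Z tensor 1

n = 200000
psi = rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))
psi /= np.linalg.norm(psi, axis=1, keepdims=True)   # Haar-random states U|0>
f = (np.abs(psi) ** 2) @ z                      # samples of f_Q

print(f.mean())                                 # ~ 0
print(f.var())                                  # ~ 1/(d+1) = 0.2
```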
\subsection{$\MEAN{\partial_{\mu l} \mathcal{L}_{\rvec{\theta}}}{}=0$ for all cases}
The average of the derivative of $ \mathcal{L}_{\rvec{\theta}}$ with respect to $\theta_{\mu l}$, the $\mu$th parameter of the $l$th trainable module, \begin{equation}
\MEAN{\partial_{\mu l} \mathcal{L}_{\rvec{\theta}}}{}=2\int(\mathrm{d}\rvec{x})\,\MEAN{\left[f_\mathrm{Q}(\rvec{x})-f(\rvec{x})\right]\partial_{\mu l}f_\mathrm{Q}(\rvec{x})}{}\,, \end{equation} comprises the average of two terms, namely \begin{widetext}
\begin{align}
\partial_{\mu l}f_\mathrm{Q}(\rvec{x})=&\,\dfrac{\mathrm{i}}{2}\opinner{\rvec{0}}{B(\rvec{x})^\dag W^{(2)\dag}_l\sigma_\mu W^{(1)\dag}_l\Uenc{l}{\rvec{x}}^\dag A(\rvec{x})^\dag O A(\rvec{x})\Uenc{l}{\rvec{x}}W_lB(\rvec{x})}{\rvec{0}}+\mathrm{c.c.}\,,\nonumber\\
f_\mathrm{Q}(\rvec{x})\,\partial_{\mu l}f_\mathrm{Q}(\rvec{x})=&\,\dfrac{\mathrm{i}}{2}\opinner{\rvec{0}}{B(\rvec{x})^{\dag\otimes2}W^{(2)\dag\otimes2}_l1\otimes\sigma_\mu W^{(1)\dag\otimes2}_l\Uenc{l}{\rvec{x}}^{\dag\otimes2} A(\rvec{x})^{\dag\otimes2} O^{\otimes2} A(\rvec{x})^{\otimes2}\Uenc{l}{\rvec{x}}^{\otimes2}W^{\otimes2}_lB(\rvec{x})^{\otimes2}}{\rvec{0}}+\mathrm{c.c.}\,,
\end{align} \end{widetext} where $A(\rvec{x})=\prod^{l+1}_{l'={L_{\mathrm{train}}}}[\Uenc{l'}{\rvec{x}}W_{l'}]$ and $B(\rvec{x})=\prod^{1}_{l'=l-1}[\Uenc{l'}{\rvec{x}}W_{l'}]$ such that the unitary operator $U_{\rvec{x};\rvec{\theta}}=A(\rvec{x})\Uenc{l}{\rvec{x}}W_lB(\rvec{x})$ represents the complete ${L_{\mathrm{train}}}$-layered quantum circuit. As usual, the arguments $\rvec{\theta}_l$ are suppressed. The average of $\partial_{\mu l}f_\mathrm{Q}(\rvec{x})$ is easiest to treat.
\begin{flushleft}
\boxed{\MEAN{\partial_{\mu l}f_\mathrm{Q}(\rvec{x})}{}~\text{for {\bf Cases I and II}}} \end{flushleft}
As $\Haar{W_{L_{\mathrm{train}}}^{\dag}\Uenc{{L_{\mathrm{train}}}}{\rvec{x}}^{\dag}\, O\,\Uenc{{L_{\mathrm{train}}}}{\rvec{x}}W_{L_{\mathrm{train}}}}=0$ when $O$ is a Pauli operator, \begin{align}
\MEAN{\partial_{\mu l}f_\mathrm{Q}(\rvec{x})}{}=0\quad\text{for any $W^{(1)}_l$ and $W^{(2)}_l$}\,. \end{align}
\begin{flushleft}
\boxed{\MEAN{\partial_{\mu l}f_\mathrm{Q}(\rvec{x})}{}~\text{for {\bf Case III}}} \end{flushleft}
For parameters in the edge training module $W_{L_{\mathrm{train}}}$ next to $O$, we inspect the operator \begin{equation}
Q_1=W^{(2)\dag}_{L_{\mathrm{train}}}\sigma_\mu W^{(1)\dag}_{L_{\mathrm{train}}}\Uenc{{L_{\mathrm{train}}}}{\rvec{x}}^{\dag} O\, \Uenc{{L_{\mathrm{train}}}}{\rvec{x}}W^{(1)}_{L_{\mathrm{train}}}W^{(2)}_{L_{\mathrm{train}}}\,. \end{equation} Now, note that $\tr{\Uenc{{L_{\mathrm{train}}}-1}{\rvec{x}}^{\dag} Q_1 \Uenc{{L_{\mathrm{train}}}-1}{\rvec{x}}}=\tr{Q_1}=\tr{\sigma_\mu \sigma'(\rvec{x})}$, where \begin{equation}
\sigma'(\rvec{x})= W^{(1)\dag}_{L_{\mathrm{train}}}\Uenc{{L_{\mathrm{train}}}}{\rvec{x}}^{\dag} O\, \Uenc{{L_{\mathrm{train}}}}{\rvec{x}}W^{(1)}_{L_{\mathrm{train}}}
\label{eq:sigprime} \end{equation} is yet another (rotated) Pauli operator parametrized by $\rvec{x}$, so that $\sigma'^2=1$ and $\tr{\sigma'}=0$. As $\tr{\sigma_\mu\sigma'}$ is real, \begin{align}
\MEAN{\partial_{\mu l}f_\mathrm{Q}(\rvec{x})}{}=&\,\dfrac{\mathrm{i}}{2}\left<\opinner{\rvec{0}}{B(\rvec{x})^\dag Q_1B(\rvec{x})}{\rvec{0}}\right> +\,\mathrm{c.c.}\nonumber\\
=&\,\dfrac{\mathrm{i}}{2d}\left<\tr{\sigma_\mu\sigma'}\right>-\dfrac{\mathrm{i}}{2d}\left<\tr{\sigma_\mu\sigma'}\right>=0\,. \end{align}
\begin{flushleft}
\boxed{\MEAN{f_\mathrm{Q}(\rvec{x})\,\partial_{\mu l}f_\mathrm{Q}(\rvec{x})}{}~\text{for {\bf Cases I and II}}} \end{flushleft}
Upon taking the average over $W_{{L_{\mathrm{train}}}}$ using Eq.~\eqref{eq:pauliV2}, we have \begin{align}
&\,\MEAN{f_\mathrm{Q}(\rvec{x})\,\partial_{\mu l}f_\mathrm{Q}(\rvec{x})}{}\nonumber\\
=&\,\dfrac{\mathrm{i}}{2(d+1)}\opinner{\rvec{0}}{B(\rvec{x})^{\dag\otimes2} W^{(2)\dag\otimes2}_l1\otimes\sigma_\mu W^{(2)\otimes2}_lB(\rvec{x})^{\otimes2}}{\rvec{0}}+\mathrm{c.c.} \end{align} Since $W^{(2)\dag\otimes2}_l\,1\otimes\sigma_\mu\, W^{(2)\otimes2}_l=1\otimes W^{(2)\dag}_l\sigma_\mu W^{(2)}_l$ is the identity tensored with yet another (rotated) Pauli operator, the resulting expectation values are real regardless of whether $l=1$ or not (that is, whether $B(\rvec{x})=1$ correspondingly or not), so that $\MEAN{f_\mathrm{Q}(\rvec{x})\,\partial_{\mu l}f_\mathrm{Q}(\rvec{x})}{}=0$ for both cases.
\begin{flushleft}
\boxed{\MEAN{f_\mathrm{Q}(\rvec{x})\,\partial_{\mu l}f_\mathrm{Q}(\rvec{x})}{}~\text{for {\bf Case III}}} \end{flushleft}
For this case, we look at the operator \begin{align}
Q_2=&\,W^{(2)\dag\otimes2}_{L_{\mathrm{train}}}\sigma'(\rvec{x})\otimes\sigma_\mu\sigma'(\rvec{x})\,W^{(2)\otimes2}_{L_{\mathrm{train}}}\,. \end{align} From the realization that $\sigma'(\rvec{x})$ in Eq.~\eqref{eq:sigprime} is a Pauli operator, the following two trace properties \begin{align}
&\,\tr{\Uenc{{L_{\mathrm{train}}}-1}{\rvec{x}}^{\dag\otimes2} Q_2\,\Uenc{{L_{\mathrm{train}}}-1}{\rvec{x}}^{\otimes2}}\nonumber\\
=&\,\tr{\sigma'(\rvec{x})\otimes\sigma_\mu\sigma'(\rvec{x})}=0\,,\nonumber\\
&\,\tr{\Uenc{{L_{\mathrm{train}}}-1}{\rvec{x}}^{\dag\otimes2} Q_2\,\Uenc{{L_{\mathrm{train}}}-1}{\rvec{x}}^{\otimes2}\tau}\nonumber\\
=&\,\tr{\sigma'(\rvec{x})\sigma_\mu\sigma'(\rvec{x})}=\tr{\sigma_\mu}=0 \end{align} become apparent, so that the two-design average over $W_{{L_{\mathrm{train}}}-1}$ in Eq.~\eqref{eq:HaarV2} makes both terms vanish. The proofs for $\MEAN{\partial_{\mu l} \mathcal{L}_{\rvec{\theta}}}{}=0$ are therefore complete.
\subsection{$\VAR{}{\partial_{\mu l} \mathcal{L}_{\rvec{\theta}}}\leq O(1/d)$ for all cases}
We hereby show that $\VAR{}{\partial_{\mu l} \mathcal{L}_{\rvec{\theta}}}\lesssim O(1/d)$ for two-design training circuit modules. Using the shorthand $\overline{g(\rvec{x})}\equiv\int\,(\mathrm{d}\rvec{x})\,g(\rvec{x})$ to denote the training-data average, we first make use of the Cauchy--Schwarz inequality to obtain \begin{align}
|\partial_{\mu l} \mathcal{L}_{\rvec{\theta}}|^2=&\,4\left|\overline{\left[f_\mathrm{Q}(\rvec{x})-f(\rvec{x})\right]\partial_{\mu l}f_\mathrm{Q}(\rvec{x})}\right|^2\nonumber\\
\leq&\, 4\,\mathcal{L}_{\rvec{\theta}}\,\overline{\left|\partial_{\mu l}f_\mathrm{Q}(\rvec{x})\right|^2}\leq16\,\overline{\left|\partial_{\mu l}f_\mathrm{Q}(\rvec{x})\right|^2}\,, \end{align} or \begin{equation}
\VAR{}{\partial_{\mu l} \mathcal{L}_{\rvec{\theta}}}\leq 16\,\left<\overline{\left|\partial_{\mu l}f_\mathrm{Q}(\rvec{x})\right|^2}\right>\,, \end{equation} since $\MEAN{\partial_{\mu l} \mathcal{L}_{\rvec{\theta}}}{}=0$ as demonstrated previously. The crucial quantity is now the average \begin{widetext}
\begin{align}
\MEAN{|\partial_{\mu l}f_\mathrm{Q}(\rvec{x})|^2}{}=&\,-\dfrac{1}{4}\opinner{\rvec{0}}{B(\rvec{x})^{\dag\otimes2}W^{(2)\dag\otimes2}_l\Uenc{l}{\rvec{x}}^{\dag\otimes2}\sigma_\mu^{\otimes2} W^{(1)\dag\otimes2}_l A(\rvec{x})^{\dag\otimes2} O^{\otimes2} A(\rvec{x})^{\otimes2}\Uenc{l}{\rvec{x}}^{\otimes2}W^{\otimes2}_lB(\rvec{x})^{\otimes2}}{\rvec{0}}\nonumber\\
&\,+\dfrac{1}{4}\opinner{\rvec{0}}{B(\rvec{x})^{\dag\otimes2}W^{(2)\dag\otimes2}_l\Uenc{l}{\rvec{x}}^{\dag\otimes2}1\otimes\sigma_\mu W^{(1)\dag\otimes2}_l A(\rvec{x})^{\dag\otimes2} O^{\otimes2}\nonumber\\
&\,\qquad\qquad\qquad\qquad\qquad\qquad\times A(\rvec{x})^{\otimes2}\Uenc{l}{\rvec{x}}^{\otimes2}W^{(1)\otimes2}_l\sigma_\mu\otimes1W^{(2)\otimes2}_lB(\rvec{x})^{\otimes2}}{\rvec{0}}+\mathrm{c.c.}\,.
\end{align} \end{widetext}
\begin{flushleft}
\boxed{\MEAN{|\partial_{\mu l}f_\mathrm{Q}(\rvec{x})|^2}{}~\text{for {\bf Case I}}} \end{flushleft}
For general bulk modules, averaging over $W_{L_{\mathrm{train}}}$ yields \begin{align}
&\,\MEAN{|\partial_{\mu l}f_\mathrm{Q}(\rvec{x})|^2}{}\nonumber\\
=&\,-\dfrac{d}{4(d^2-1)}\left<\opinner{\rvec{0}}{B(\rvec{x})^{\dag\otimes2} W^{(2)\dag\otimes2}_l\sigma_\mu^{\otimes2}W^{(2)\otimes2}_l B(\rvec{x})^{\otimes2}}{\rvec{0}}\right>\nonumber\\
&\,+\dfrac{d}{4(d^2-1)}+\mathrm{c.c.}\,. \end{align} When $l>1$, \begin{equation}
\MEAN{|\partial_{\mu l}f_\mathrm{Q}(\rvec{x})|^2}{\mathrm{I}}=\dfrac{d^2}{2(d+1)(d^2-1)}\,. \end{equation}
\begin{flushleft}
\boxed{\MEAN{|\partial_{\mu l}f_\mathrm{Q}(\rvec{x})|^2}{}~\text{for {\bf Case II}}} \end{flushleft}
If $l=1$, we note that \begin{align}
\gamma_\mathrm{II}=&\,\left<\opinner{\rvec{0}}{W^{(2)\dag\otimes2}_1\sigma_\mu^{\otimes2}W^{(2)\otimes2}_1}{\rvec{0}}\right>\nonumber\\
=&\,\left<\opinner{\rvec{0}}{W^{(2)\dag}_1\sigma_\mu W^{(2)}_1}{\rvec{0}}^2\right>\leq1\,,
\label{eq:gammaII} \end{align} such that \begin{equation}
\MEAN{|\partial_{\mu,1}f_\mathrm{Q}(\rvec{x})|^2}{\mathrm{II}}=\dfrac{d(1-\gamma_\mathrm{II})}{2(d^2-1)}\leq\dfrac{d}{2(d^2-1)}\,. \end{equation}
\begin{flushleft}
\boxed{\MEAN{|\partial_{\mu l}f_\mathrm{Q}(\rvec{x})|^2}{}~\text{for {\bf Case III}}} \end{flushleft}
For this case, properties of the operators
\begin{align}
Q_{3\mathrm{a}}=&\,W^{(2)\dag\otimes2}_{L_{\mathrm{train}}}\sigma_\mu^{\otimes2} \sigma'(\rvec{x})^{\otimes2}\,W^{(2)\otimes2}_{L_{\mathrm{train}}}\,,\nonumber\\
Q_{3\mathrm{b}}=&\,W^{(2)\dag\otimes2}_{L_{\mathrm{train}}}1\otimes\sigma_\mu \sigma'(\rvec{x})^{\otimes2}\sigma_\mu\otimes1W^{(2)\otimes2}_{L_{\mathrm{train}}} \end{align} are necessary, where $\sigma'(\rvec{x})$ is defined in Eq.~\eqref{eq:sigprime}. To start off, \begin{align}
\tr{Q_{3\mathrm{a}}}=&\,\tr{\sigma_\mu\sigma'(\rvec{x})}^2=\tr{Q_{3\mathrm{b}}}\,. \end{align} For the trace properties with the swap operator, they are \begin{align}
&\,\tr{\Uenc{{L_{\mathrm{train}}}-1}{\rvec{x}}^{\dag\otimes2}Q_{3\mathrm{a}}\Uenc{{L_{\mathrm{train}}}-1}{\rvec{x}}^{\otimes2}\tau}\nonumber\\
=&\,\tr{\sigma_\mu\sigma'(\rvec{x})\sigma_\mu\sigma'(\rvec{x})}\equiv\gamma^{(2)}_\mathrm{III}(\rvec{x})\,,\nonumber\\
&\,\tr{\Uenc{{L_{\mathrm{train}}}-1}{\rvec{x}}^{\dag\otimes2}Q_{3\mathrm{b}}\Uenc{{L_{\mathrm{train}}}-1}{\rvec{x}}^{\otimes2}\tau}\nonumber\\
=&\,\tr{\sigma'(\rvec{x})\sigma_\mu \sigma_\mu\sigma'(\rvec{x})}=d\,. \end{align} These are critical in evaluating the average over $W_{{L_{\mathrm{train}}}-1}$ by invoking Eq.~\eqref{eq:pauliV2}: \begin{align}
\gamma^{(1)}_\mathrm{III}(\rvec{x})\equiv&\,\left<\tr{\sigma_\mu\sigma'(\rvec{x})}^2\right>\,,\nonumber\\
\gamma^{(2)}_\mathrm{III}(\rvec{x})\equiv&\,\left<\tr{[\sigma_\mu\sigma'(\rvec{x})]^2}\right>\,,\nonumber\\
\MEAN{|\partial_{\mu l}f_\mathrm{Q}(\rvec{x})|^2}{\mathrm{III}}
=&\,-\dfrac{\gamma^{(1)}_\mathrm{III}(\rvec{x})+\gamma^{(2)}_\mathrm{III}(\rvec{x})}{2d(d+1)}+\dfrac{\gamma^{(1)}_\mathrm{III}(\rvec{x})+d}{2d(d+1)}\nonumber\\
=&\,\dfrac{d-\gamma^{(2)}_\mathrm{III}(\rvec{x})}{2d(d+1)}\leq\dfrac{1}{d+1}\,.
\label{eq:gammaIII} \end{align} The final inequality is obtained from the fact that \begin{align}
\tr{[\sigma_\mu\sigma'(\rvec{x})]^2}^2\leq\tr{\sigma_\mu^2}\tr{[\sigma'(\rvec{x})\sigma_\mu\sigma'(\rvec{x})]^2}=d^2\,, \end{align} or $-d\leq\gamma^{(2)}_\mathrm{III}(\rvec{x})\leq d$, where we remind the reader that $\left<\,\,\bm{\cdot}\,\,\right>$ is the average over random $W_{L_{\mathrm{train}}}^{(1)}$s.
Collecting all results, we have \begin{align}
\VAR{}{\partial_{\mu l} \mathcal{L}_{\rvec{\theta}}}\leq&\,
\begin{cases}
\dfrac{8d^2}{(d+1)(d^2-1)} &\mathrm{for}~\textbf{Case I}\,,\\[2ex]
\dfrac{8d(1-\gamma_\mathrm{II})}{d^2-1}\leq\dfrac{8d}{d^2-1} &\mathrm{for}~\textbf{Case II}\,,\\[2ex]
\dfrac{8[d-\overline{\gamma_\mathrm{III}(\rvec{x})}]}{d(d+1)}\leq\dfrac{16}{d+1} &\mathrm{for}~\textbf{Case III}\,,
\end{cases} \end{align} where $\gamma_\mathrm{II}=\left<\opinner{\rvec{0}}{W^{(2)\dag}_1\sigma_\mu W^{(2)}_1}{\rvec{0}}^2\right>$ and $\gamma_\mathrm{III}(\rvec{x})=\left<\tr{[\sigma_\mu\sigma'(\rvec{x})]^2}\right>$. As special cases, one arrives at $\gamma_\mathrm{II}=1/(d+1)$ if $W_1^{(2)}$ is a two-design, and at $\overline{\gamma_\mathrm{III}(\rvec{x})}=-d/(d^2-1)$ if $W_{L_{\mathrm{train}}}^{(1)}$ is a two-design, where the latter is obtained from the identity \begin{align}
&\,\Haar{VAV^\dag BVCV^\dag}=\dfrac{1}{d^2-1}\left(\tr{A}\tr{C}B+\tr{B}\tr{AC}\,1\right)\nonumber\\
&\qquad\qquad\qquad-\dfrac{1}{d(d^2-1)}\left(\tr{A}\tr{B}\tr{C}\,1+\tr{AC}B\right) \end{align} that can be derived from Eq.~\eqref{eq:Weingarten2}. In these two-design special cases, all three bounds coincide and give $\VAR{}{\partial_{\mu l} \mathcal{L}_{\rvec{\theta}}}\leq8d^2/[(d+1)(d^2-1)]$ for all three cases and arbitrary ${L_{\mathrm{train}}}$.
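The identity for $\Haar{VAV^\dag BVCV^\dag}$ quoted above can be confirmed with a quick Monte-Carlo sketch; the matrices $A$, $B$ and $C$ and the dimension $d=2$ below are arbitrary illustrative choices:

```python
import numpy as np

# Monte-Carlo check of <V A V^dag B V C V^dag>_Haar against the closed form.
rng = np.random.default_rng(2)
d = 2
A = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=complex)   # generic illustrative matrix
B = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)   # Pauli X
C = np.diag([1.0, -1.0]).astype(complex)                # Pauli Z

def haar_unitary(dim, rng):
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

n = 50000
acc = np.zeros((d, d), dtype=complex)
for _ in range(n):
    V = haar_unitary(d, rng)
    acc += V @ A @ V.conj().T @ B @ V @ C @ V.conj().T
acc /= n

trA, trB, trC, trAC = map(np.trace, (A, B, C, A @ C))
exact = (trA * trC * B + trB * trAC * np.eye(d)) / (d**2 - 1) \
        - (trA * trB * trC * np.eye(d) + trAC * B) / (d * (d**2 - 1))
print(np.abs(acc - exact).max())                        # small Monte-Carlo error
```

For these particular $A$, $B$, $C$ the closed form evaluates to $B/2$, which the sampled average reproduces within statistical error.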
Finally, to obtain the results in the main text for ${L_{\mathrm{train}}}=2$, we simply substitute $N=d_\mathrm{F}$ for the naive encoding strategy and $N=\log_3(2d_\mathrm{F}+1)$ for the dense exponential encoding strategy.
\section{A resource advantage for the QFFLM} \label{app:resrc_qfflm}
Our task at hand is to train a given $M$-variate Fourier-featured linear model~(FFLM) using a gradient-based optimization method that minimizes the MSE loss function $\mathcal{L}_{\rvec{\theta}} \propto \sum_{j}(f_\mathrm{model}(\rvec{x}_j; \rvec{\theta}) - y_j)^2$. Explicitly, an FFLM takes the form \begin{equation}
f_\mathrm{model}(\rvec{x}_j ; \rvec{\theta}) = \rvec{c}_\mathrm{model}(\rvec{\theta})\cdot\rvec{\phi}(\rvec{x}_j)\,, \end{equation} where the $N_\mathrm{tp}$ trainable parameters are consolidated in $\rvec{\theta} \in \mathbb{R}^{N_\mathrm{tp}}$, and $\rvec{\phi}(\rvec{x})$ is the Fourier feature column. We may choose to train this computational model using either a classical computer or a variational NISQ device shown in Fig.~\ref{eq:quantum_model}. We replace the subscript ``model'' by ``C'' for the former~(CFFLM) and by ``Q'' for the latter~(QFFLM).
For a CFFLM, its Fourier coefficient column $\rvec{c}_{\mathrm{model}}=\rvec{c}_\mathrm{C}$ may generally depend on $\rvec{\theta}$ in a nonlinear fashion. If one parametrizes a $K^M$-dimensional $\rvec{c}_\mathrm{C}$ such that it spans the entire convex space $C_{K^M}$ (see Sec.~\ref{sec:CKM_struct} in this SM), we obtain a universal $K^M$-dimensional CFFLM. If one considers a QFFLM, then $\rvec{\theta} \in [-\pi,\pi)^{N_\mathrm{tp}}$ is the parameter column that configures $O(N_\mathrm{tp})$ trainable quantum gates. The corresponding $\rvec{c}_{\mathrm{model}}=\rvec{c}_\mathrm{Q}$ contains elements that are nonlinear functions of $\rvec{\theta}$. Analytical forms of $\rvec{c}_\mathrm{Q}$'s elements vary with the \emph{ansatz} used. For example, if the ``hardware-efficient'' \emph{ansatz} shown in Fig.~\ref{fig:QSL_model} of the main text is employed, then by considering $M=1$ for simplicity, we have $c_{\mathrm{Q},l}(\rvec{\theta}) = \sum_{(\rvec{m},\rvec{n})\in I_l} \chi^l_{(\rvec{m},\rvec{n})} T^{N_\mathrm{tp}}_{(\rvec{m},\rvec{n})}(\rvec{\theta})$ for $1\leq l\leq K$, where $\chi^l_{(\rvec{m},\rvec{n})}$ is the weight of $T^{N_\mathrm{tp}}_{(\rvec{m},\rvec{n})}(\rvec{\theta}) = (\cos{\theta_1})^{m_1}(\sin{\theta_1})^{n_1}\ldots (\cos{\theta_{N_\mathrm{tp}}})^{m_{N_\mathrm{tp}}}(\sin{\theta_{N_\mathrm{tp}}})^{n_{N_\mathrm{tp}}}$, and $I_l$ is the set of nonnegative $(\rvec{m},\rvec{n})$s for the $l$th element $c_{\mathrm{Q},l}$. In the canonical case where each training parameter $\theta_j$ appears once in the quantum circuit, $m_j + n_j = 2$ for every $j$.
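As a minimal illustration of the model class (with illustrative frequencies and coefficient values, not the parametrization used in the main text), a univariate FFLM $f(x)=\rvec{c}\bm{\cdot}\rvec{\phi}(x)$ may be sketched as:

```python
import numpy as np

def fourier_features(x, d_f):
    """Feature column phi(x) of dimension K = 2*d_f + 1 (frequencies 1..d_f assumed)."""
    n = np.arange(1, d_f + 1)
    return np.concatenate(([1.0], np.cos(n * x), np.sin(n * x)))

def fflm(x, c, d_f):
    """A K-dimensional FFLM: f(x) = c . phi(x)."""
    return c @ fourier_features(x, d_f)

d_f = 3                                            # K = 7
c = np.array([0.5, 1.0, 0.0, 0.0, 0.0, -0.25, 0.0])  # illustrative coefficients
print(fflm(0.0, c, d_f))                           # 0.5 + 1.0 = 1.5 (sines vanish at x = 0)
```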
Model training involves computation of the MSE loss-function gradient with respect to $\rvec{\theta}$ given a training dataset $\{\rvec{x}_j,y_j\}$: \begin{equation}
\dfrac{\partial \mathcal{L}_{\rvec{\theta}}}{\partial \theta_{k}} \propto \sum_j\dfrac{\partial\mathcal{L}_j(\rvec{\theta})}{\partial\theta_k} = \sum_j[f_{\mathrm{model}}(\rvec{x}_j; \rvec{\theta}) - y_j]\dfrac{\partial f_{\mathrm{model}}(\rvec{x}_j;\rvec{\theta})}{\partial \theta_{k}}\,.
\label{eq:lossgradient} \end{equation} We analyze and compare the gradient computational resources~$\mathrm{resrc}$ by counting the number of basic computation elements required to calculate Eq.~\eqref{eq:lossgradient} using both the CFFLM and QFFLM. The basic computation elements when using a CFFLM are scalar \textit{multiplication}, \textit{addition} and \textit{nonlinear operations}, which we shall consider to all be equivalent resource-wise; that is, computing $x\pm y$, $xy$, $\sqrt{y}$, $\cos(nx)$ or $\sin(nx)$ for scalars $x$ and $y$ amounts to the same resource usage~\cite{Griewank:2008}. We shall also disregard resource counts originating from memory allocation, storage and reading, as they can vary significantly with different techniques and memory architectures. For the QFFLM, we take the elementary quantum gates, which are the single-qubit rotation and CNOT gates, as basic computation elements.
From hereon, we shall only consider univariate models $f_{\mathrm{model}}(x_j;\rvec{\theta})$ without loss of generality, where the model ($\rvec{c}_{\mathrm{model}}$) dimension is set to $K$. For general $M$-variate models, all subsequent discussions still hold with $K \rightarrow K^M$.
\subsection{Exact $\mathcal{L}_{\rvec{\theta}}$-gradient computation for a CFFLM} \label{subsec:cfflm}
Fourier feature mapping [$x_j\mapsto\rvec{\phi}(x_j)$] is first performed \emph{once} on every training datum $x_j$. This preprocessing step requires $K = 2d_\mathrm{F} + 1$ (nonlinear) operations to calculate $\{1, \cos x_j , \sin x_j, \ldots, \cos(n_{d_\mathrm{F}}x_j), \sin(n_{d_\mathrm{F}}x_j) \}$. These results can be stored and reused during the entire training duration. After that, exact computation of $\partial \mathcal{L}_{\rvec{\theta}}/\partial \theta_{k}$ in~\eqref{eq:lossgradient} for a CFFLM using a classical computer may be carried out according to the following steps:\\[1ex]
\noindent For each $x_j$ and $\theta_k$, \begin{enumerate}[label=\textbf{\arabic*.}]
\item Calculate $f_\mathrm{C}(x_j; \rvec{\theta})$ given $\rvec{\theta}$.
\item Subtract the training output $y_j$ from the calculated $f_\mathrm{C}(x_j; \rvec{\theta})$.
\item Calculate the partial derivative $\partial f_\mathrm{C}(x_j; \rvec{\theta})/\partial\theta_k$.
\item Multiply answers from the second and third steps together. \end{enumerate}
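For the simple linear parametrization $\rvec{c}_\mathrm{C}(\rvec{\theta})=\rvec{\theta}$, the four steps above reduce to the following sketch (dataset sizes and values are illustrative):

```python
import numpy as np

# Steps 1-4 for the linearly parametrized CFFLM c_C(theta) = theta,
# for which R_I = R_II = 0 and d c_l / d theta_k = delta_{lk}.
def loss_gradient(theta, phi, y):
    """phi: |X| x K precomputed feature matrix, y: |X| training outputs."""
    f = phi @ theta                      # Step 1: K mult. + K add. per datum
    resid = f - y                        # Step 2: one subtraction per datum
    # Step 3: d f_C / d theta_k = phi_k(x_j); Step 4: multiply and sum over data
    return 2 * phi.T @ resid

K = 5
rng = np.random.default_rng(3)
phi = rng.standard_normal((20, K))       # illustrative precomputed features
theta_true = rng.standard_normal(K)
y = phi @ theta_true                     # noiseless targets for the sketch
grad = loss_gradient(theta_true, phi, y)
print(np.abs(grad).max())                # vanishes at the minimizer
```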
{\bf Step~1} requires $K$ multiplications and $K$ additions to calculate the inner product $f_\mathrm{C}(x_j;\rvec{\theta})=\rvec{c}_\mathrm{C}(\rvec{\theta})\bm{\cdot} \rvec{\phi}(x_j)$, amounting to a total of $2K$ operations. If one uses a computationally nontrivial parametrization for $\rvec{c}_\mathrm{C}(\rvec{\theta})$, then additional computational resources $R_\mathrm{I}$ would be needed. {\bf Step~2} only requires one subtraction operation. In {\bf Step~3}, we denote by $R_\mathrm{II}$ the amount of computational resources for the partial derivative, \begin{equation}
\dfrac{\partial f_\mathrm{C}(x_j;\rvec{\theta})}{\partial \theta_{k}} = \sum^K_{l=1}\phi_l(x_j)\dfrac{\partial c_{\mathrm{C},l}(\rvec{\theta})}{\partial \theta_{k}}\,.
\label{eq:deriv_cfflm} \end{equation} The actual value of $R_\mathrm{II}$ depends on the parametrization of $\rvec{c}_\mathrm{C}$. {\bf Step~4} involves just one multiplication operation.
For every single datum $x_j$, {\bf Steps~3} and~{\bf4} are repeated for each component of $\rvec{\theta}\in\mathbb{R}^{N_\mathrm{tp}}$. Therefore, the total number of basic operations $\mathrm{resrc}_\mathrm{C}$ for calculating $\partial\mathcal{L}(\rvec{\theta})/\partial\theta_k$ is \begin{equation}
\mathrm{resrc}_\mathrm{C}\propto2K + R_{\mathrm{I}} + 1 + N_\mathrm{tp} (R_{\mathrm{II}} + 1)\,, \end{equation} with additional $K$ operations from Fourier-feature preprocessing mentioned in the beginning of this subsection. Suppose that, now, $N_\mathrm{tp}=K$ and we simply parametrize $\rvec{c}_\mathrm{C}(\rvec{\theta}) = \rvec{\theta} \in \mathbb{R}^K$. Then, the CFFLM may be parametrized into a $K$-dimensional universal model such that \emph{both} $R_\mathrm{I}=0=R_\mathrm{II}$, since $\rvec{c}_\mathrm{C}(\bm{\cdot})$ \emph{in this case} is the identity function and $\partial c_{\mathrm{C},l}/\partial\theta_k=\delta_{l,k}$. Hence, for such a CFFLM, $\mathrm{resrc}_\mathrm{C}\propto3K+1$, or $\mathrm{resrc}_\mathrm{C}=\Omega(K)$.
If $N_\mathrm{tp} < K$, we may reduce the number of repetitions of {\bf Steps~3} and~{\bf4}. In this \textit{under-parametrized} case, however, $R_\mathrm{I}$ is generally positive since $\rvec{c}_\mathrm{C}$ is now a $K$-dimensional (nonlinear) vectorial function of the $N_\mathrm{tp}$ training parameters. The resources $R_{\mathrm{II}}$ for computing $\partial f_\mathrm{C}(x_j;\rvec{\theta})/\partial\theta_k$, equivalently its components $\partial c_{\mathrm{C},l}(\rvec{\theta})/\partial\theta_k$, are also generally nonzero. The class of techniques known as \emph{automatic differentiation}~(AD)~\cite{margossianADreview} may be used to compute such a partial derivative. An advantage over conventional finite-difference methods, for instance, is the need for computing only a single function $c_{\mathrm{C},l}(\rvec{\theta})$ per $\theta_k$ instead of a pair of them displaced differently in $\rvec{\theta}$, since AD stores and reuses the computed $c_{\mathrm{C},l}(\rvec{\theta})$s. As a consequence, the number of basic operations for computing every $\partial c_{\mathrm{C},l}(\rvec{\theta})/\partial\theta_k$ is $O(R_{\mathrm{I}})$~\cite{margossianADreview}. Finally, the $2K$ basic operations arising from dot-product computation in {\bf Step~1} still exist. It is now clear that positive $R_{\mathrm{I}}$ and $R_{\mathrm{II}}$ only increase $\mathrm{resrc}_\mathrm{C}$, and the dot-product computation remains the bottleneck even in under-parametrized cases.
In other words, regardless of whether a given CFFLM is under-parametrized or not, \begin{equation}
\mathrm{resrc}_\mathrm{C}=\Omega(K)\,.
\label{eq:cmplxC} \end{equation}
\subsection{Variational $\mathcal{L}_{\rvec{\theta}}$-gradient computation for a QFFLM} \label{subsec:qfflm}
Analogous to Fourier-feature preprocessing for the CFFLM, preprocessing for QFFLMs involves the multiplication of weights $\beta_n$ to each training data input $x_j$, the output of which is then encoded onto a single-qubit gate. The number of multiplications is the same as the number of encoding gates, which is $\log_3(K)$ using exponential encoding, and never exceeds the total number of basic gates ($N_\mathrm{gt}$). This sets up the ``quantum equivalent'' of Fourier-feature mapping. As with CFFLMs, this data preprocessing is performed only once and the resulting data-encoded gate $V$ is repeatedly used in the QFFLM training. Neglecting the data preprocessing step, the procedure for computing $\partial \mathcal{L}_{\rvec{\theta}}/\partial \theta_{k}$ in \eqref{eq:lossgradient} of a QFFLM is as follows:\\[1ex]
\noindent For each $x_j$ and $\theta_k$, \begin{enumerate}[label=\textbf{\arabic*.}]
\item Sample $f_\mathrm{Q}(x_j; \rvec{\theta})$ from the variational quantum circuit given $\rvec{\theta}$.
\item Subtract the training output $y_j$ from the sampled $f_\mathrm{Q}(x_j; \rvec{\theta})$.
\item Sample the partial derivative $\partial f_\mathrm{Q}(x_j; \rvec{\theta})/\partial\theta_k$ from the variational quantum circuit.
\item Multiply answers from the second and third steps together. \end{enumerate}
While {\bf Steps~2} and~{\bf4} are the same as in the case of CFFLMs, the key differences lie in {\bf Steps~1} and~{\bf3}. In {\bf Step~1}, $f_\mathrm{Q}(x_j;\rvec{\theta})$ is sampled from the variational NISQ circuit that contains $N_\mathrm{gt}$ basic gates, where $N_\mathrm{gt}$ is completely dependent on our choice of training-circuit \emph{ansatz}, and does not need to scale with the model dimension $K$. In order to sample $f_\mathrm{Q}(x_j;\rvec{\theta})$ up to a desired precision $\epsilon_f$, a total of $O(N_\mathrm{gt}/\epsilon_f^2)$ gate operations are needed. In {\bf Step~3}, the partial derivative \begin{equation}
\dfrac{\partial f_\mathrm{Q}(x_j; \rvec{\theta})}{\partial \theta_k} = \dfrac{1}{2}\left[f_\mathrm{Q}\!\left(x_j; \rvec{\theta} + \dfrac{\pi}{2} \rvec{e}_k\right) - f_\mathrm{Q}\!\left(x_j; \rvec{\theta} - \dfrac{\pi}{2} \rvec{e}_k\right)\right]
\label{eq:paramshift} \end{equation} may be expressed as a difference of two QFFLM functions with shifted circuit parameters using the parameter-shift~(PS) rule, where $\rvec{e}_k$ is the unit vector with $1$ in its $k$th component and $0$s otherwise. So, if $\partial f_\mathrm{Q}(x_j; \rvec{\theta})/\partial\theta_k$ is to be sampled up to some desired precision $\epsilon_{\partial f}$, then $2\,O(N_\mathrm{gt}/\epsilon_{\partial f}^2)$ gate operations are needed to sample both $f_\mathrm{Q}\!\left(x_j; \rvec{\theta} + \dfrac{\pi}{2} \rvec{e}_k\right)$ and $f_\mathrm{Q}\!\left(x_j; \rvec{\theta} - \dfrac{\pi}{2} \rvec{e}_k\right)$, since the desired precision $\epsilon_{\partial f}$ for $\partial f_\mathrm{Q}(x_j; \rvec{\theta})/\partial\theta_k$ matches the sampling precision of its two component QFFLM functions in Eq.~\eqref{eq:paramshift}. Including the additional scalar subtraction and multiplication operations gives a total of $2\,O(N_\mathrm{gt}/\epsilon_{\partial f}^2) + 2 $ basic operations as necessary computational resources. Since for every datum $x_j$, we again need to repeat {\bf Steps~3} and~{\bf4} $N_\mathrm{tp}$ times, the overall amount of $\mathcal{L}_{\rvec{\theta}}$-gradient calculation resources employed reads \begin{equation}
\mathrm{resrc}_\mathrm{Q}\propto O\!\left(\dfrac{N_\mathrm{gt}}{\epsilon_f^2}\right) + 1 + N_\mathrm{tp}\!\left[2O\!\left(\dfrac{N_\mathrm{gt}}{\epsilon_{\partial f}^2}\right) + 3\right]\,, \end{equation} apart from the additional $\log_3(K)$ data-preprocessing operations. As $N_\mathrm{tp} = O(N_\mathrm{gt})$, we have $\mathrm{resrc}_\mathrm{Q}=O(N_\mathrm{gt}/\epsilon_f^2) + O(N_\mathrm{gt}^2/\epsilon_{\partial f}^2)$.
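The exactness of the PS rule in Eq.~\eqref{eq:paramshift} can be illustrated with a hypothetical single-qubit toy circuit (not the full QFFLM), for which $f_\mathrm{Q}(\theta)=\langle0|R_x(\theta)^\dag Z R_x(\theta)|0\rangle=\cos\theta$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
Z = np.diag([1.0, -1.0]).astype(complex)        # Pauli Z (the measured observable)

def f_q(theta):
    """Toy model: f_Q(theta) = <0| R_x(theta)^dag Z R_x(theta) |0> = cos(theta)."""
    R = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X
    psi = R[:, 0]                               # R_x(theta) |0>
    return np.real(psi.conj() @ Z @ psi)

theta = 0.7
ps_grad = 0.5 * (f_q(theta + np.pi / 2) - f_q(theta - np.pi / 2))
print(ps_grad, -np.sin(theta))                  # PS rule reproduces the exact derivative
```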
We note that $\epsilon_f$ and $\epsilon_{\partial f}$ should not be arbitrarily chosen. In order to sample both $f_\mathrm{Q}(x_j;\rvec{\theta})$ and $\partial f_\mathrm{Q}(x_j;\rvec{\theta})/\partial\theta_k$ accurately, the desired errors $\epsilon_f$ and $\epsilon_{\partial f}$ should scale \emph{at most} with these respective magnitudes. It turns out that these requirements would also result in an additive error of $\mathcal{L}_{\rvec{\theta}}$-gradient sampling that scales at most with the magnitude of the $\mathcal{L}_{\rvec{\theta}}$-gradient. This is obvious from {\bf Step~4}, in which the two sampled estimators $\widehat{f_\mathrm{Q}(x_j;\rvec{\theta})}\sim f_\mathrm{Q}(x_j;\rvec{\theta})+\epsilon_f$ and $\widehat{\partial f_\mathrm{Q}(x_j;\rvec{\theta})/\partial\theta_k}\sim\partial f_\mathrm{Q}(x_j;\rvec{\theta})/\partial\theta_k+\epsilon_{\partial f}$, with respective additive errors $\epsilon_f$ and $\epsilon_{\partial f}$, are multiplied together: \begin{align}
\widehat{\dfrac{\partial \mathcal{L}_j}{\partial \theta_k}}&=[\widehat{f_\mathrm{Q}(x_j;\rvec{\theta})}-y_j]\widehat{\dfrac{\partial f_\mathrm{Q}(x_j ; \rvec{\theta})}{\partial \theta_k}} \nonumber\\
&\sim[f_\mathrm{Q}(x_j;\rvec{\theta})+\epsilon_f-y_j]\left[\dfrac{\partial f_\mathrm{Q}(x_j ; \rvec{\theta})}{\partial \theta_k}+\epsilon_{\partial f}\right] \nonumber\\
&\sim \dfrac{\partial \mathcal{L}_j}{\partial \theta_k} + \epsilon_f \dfrac{\partial f_\mathrm{Q}(x_j ; \rvec{\theta})}{\partial \theta_k} + \epsilon_{\partial f}f_\mathrm{Q}(x_j;\rvec{\theta})\,. \end{align} It is evident that if $\epsilon_f = O(\left\vert f(x_j;\rvec{\theta})\right\vert)$ and $\epsilon_{\partial f} = O(\left\vert \partial f(x_j ;\rvec{\theta})/\partial \theta_k \right\vert)$, then $\widehat{\partial \mathcal{L}_j/\partial\theta_k}\sim\partial \mathcal{L}_j/\partial\theta_k+O(\left\vert f(x_j;\rvec{\theta}) \partial f(x_j ;\rvec{\theta})/\partial \theta_k \right\vert)$.
Since $\epsilon_f$ and $\epsilon_{\partial f}$ are respectively the model-function sampling precisions for estimating $f_\mathrm{Q}(x_j;\rvec{\theta})$ and $\partial f_\mathrm{Q}(x_j;\rvec{\theta})/\partial\theta_k$, we may take the more conservative route and replace $\epsilon_f,\epsilon_{\partial f}\rightarrow\epsilon=\min\{\epsilon_f,\epsilon_{\partial f}\}$. Finally, the overall gradient-calculation resources for $\mathcal{L}_{\rvec{\theta}}$ are therefore \begin{equation}
\mathrm{resrc}_\mathrm{Q}=O\!\left(\dfrac{N_\mathrm{gt}^2}{\epsilon^2}\right)\,.
\label{eq:cmplxQ} \end{equation} Comparing Eqs.~\eqref{eq:cmplxC} and \eqref{eq:cmplxQ} here gives Eq.~\eqref{eq:qadvantage}.
\subsection{Approximate $\mathcal{L}_{\rvec{\theta}}$-gradient computation for a CFFLM}
Given the approximate nature of QFFLM computation that is intrinsic to circuit sampling, we further investigate whether $\mathrm{resrc}_\mathrm{C}$ can be improved if known approximation techniques are employed in $\mathcal{L}_{\rvec{\theta}}$-gradient computation. For a CFFLM, one may first consider approximating \emph{the form} of $\partial f_\mathrm{C}(x_j;\rvec{\theta})/\partial\theta_k$ with the finite-difference method. However, doing so offers no better computational-resource scaling than the AD~\cite{margossianADreview} utilized in the previous subsection.
Therefore, we focus on approximating the calculation of vectorial inner products. There do exist published works in the literature that discuss classical algorithms for high-dimensional inner-product search problems~\cite{wu2019efficient,NIPS2014_310ce61c,guo2016quantization}. These studies often rely on numerical routines for the maximum inner product search~(MIPS) problem of maximizing the inner product between a query vector and a set of search data vectors. However, such studies were aimed at enhancing the speed of these search problems, and do not directly address the resources needed to compute the inner products themselves.
Thus far, it appears that the only viable solution to reducing the $\mathrm{resrc}_\mathrm{C}=\Omega(K)$ scaling is to perform inner products between $K$-dimensional $\rvec{\phi}(x_j)$s and $\rvec{c}_\mathrm{C}$s using a projection method that projects these vectors onto an effective vector space of a smaller dimension. As usual, the $\mathcal{L}_{\rvec{\theta}}$-gradient computational resources using such a projection method shall neglect the data preprocessing step that computes $\rvec{\phi}(x_j)$ for all $x_j$ in the training set $X$ as described in Sec.~\ref{subsec:cfflm}. We highlight two popular projection methods that are each based on different numerical objectives.
\subsubsection{Random projection method}
We first discuss the \emph{random projection method}, where a randomized linear mapping $\mathcal{M}[\rvec{\phi}(x_j)]$ projects all $\rvec{\phi}(x_j)$s onto $(\widetilde{d}<K)$-dimensional column vectors. A straightforward way to generate such a map is to define $\mathcal{M}[\rvec{\phi}(x_j)]=\widetilde{d}^{-1/2}\dyadic{A}\rvec{\phi}(x_j)\equiv\widetilde{\rvec{\phi}}(x_j)$, where the $\widetilde{d}\times K$ random matrix $\dyadic{A}$ has elements independently and identically distributed according to the standard Gaussian distribution. It is well known that such random projections preserve the mutual distances between feature vectors $\rvec{\phi}(x_j)$, as stated by the following lemma~\cite{Johnson:1982extensions,JLnotes} satisfied by such a random linear map:
\begin{lemma}
\label{lem:JL}
\textit{[Johnson--Lindenstrauss~(JL)]} Given $0 < \widetilde{\epsilon} < 1$, the set $X$ of $\vert X \vert$ vectors in $\mathbb{R}^K$ and the effective dimension $\widetilde{d} \geq O(\log(\vert X \vert)/\widetilde{\epsilon}^2)$, there exists a linear map $\mathcal{M}: \mathbb{R}^K \rightarrow \mathbb{R}^{\widetilde{d}}$ such that
$$(1-\widetilde{\epsilon})\Vert \rvec{y}_1 - \rvec{y}_2 \Vert^2 \leq \Vert \mathcal{M}[\rvec{y}_1] - \mathcal{M}[\rvec{y}_2] \Vert^2 \leq (1+\widetilde{\epsilon})\Vert \rvec{y}_1 - \rvec{y}_2\Vert^2 $$
for all $\rvec{y}_1,\rvec{y}_2\in X$. \end{lemma}
\noindent Additional mandatory \emph{precomputation steps} are needed to set up the random projection method before actual CFFLM training commences. These include constructing the $ \widetilde{d}\times K$ random matrix $\dyadic{A}$, which takes $O(K\widetilde{d})$ resources if we treat the generation of \emph{one} Gaussian random variable as \emph{one} basic computational operation, and multiplying $\dyadic{A}$ to all $\rvec{\phi}(x_j)$s, which demands $O(\vert X\vert K\widetilde{d})$ resources. Furthermore, since the Nyquist--Shannon theorem implies that a target function well-approximated by a $K$-dimensional FFLM of largest Fourier frequency $d_\mathrm{F}=2K-1$ requires at least $O(K)$ equidistant training data points $\{x_j\}$ to avoid aliasing problems in expressing the function, we require $|X|=O(K)$. Therefore $\widetilde{d}=\Omega((\log K)/\widetilde{\epsilon}^2)$ is achieved with an inner-product complexity of $\Omega((\log K)/\widetilde{\epsilon}^2)$.
Thus, a randomly-projected CFFLM is still a $K$-dimensional Fourier-featured model, \begin{equation}
f_\mathrm{C}^{\text{(rand-proj)}}(x_j) = \widetilde{\rvec{c}_\mathrm{C}}^\top \widetilde{\rvec{\phi}}(x_j) = \widetilde{d}^{-1/2}\,\widetilde{\rvec{c}_\mathrm{C}}^\top \dyadic{A}\rvec{\phi}(x_j)=(\widetilde{d}^{-1/2}\dyadic{A}^\top\widetilde{\rvec{c}_\mathrm{C}})^\top\rvec{\phi}(x_j)\,, \end{equation} which is characterized by a $\widetilde{\rvec{c}_\mathrm{C}}$ that is $[\widetilde{d}=\Omega((\log K)/\widetilde{\epsilon}^2)]$-dimensional. In other words, the dot-product computation resources may be reduced from $O(K)$ to $\Omega((\log K)/\widetilde{\epsilon}^2)$, \emph{provided} that precomputation steps of complexity $O(K^2(\log K)/\widetilde{\epsilon}^2)$ are carried out prior to CFFLM training.
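As a concrete illustration of this construction, the following self-contained sketch (our own; the variable names and toy feature vectors are hypothetical, with plain Python standing in for a numerical library) builds the Gaussian matrix $\dyadic{A}$, projects two feature columns, and compares the reduced-dimension dot product with the exact one:

```python
import math
import random

random.seed(7)

def random_projection_matrix(d_tilde, K):
    """d_tilde x K matrix with i.i.d. standard-Gaussian entries."""
    return [[random.gauss(0.0, 1.0) for _ in range(K)] for _ in range(d_tilde)]

def project(A, v):
    """Compute (1/sqrt(d_tilde)) * A v, the JL-style random projection."""
    scale = 1.0 / math.sqrt(len(A))
    return [scale * sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def sq_dist(u, v):
    return sum((x - y) ** 2 for x, y in zip(u, v))

# Toy K-dimensional Fourier-feature columns, projected down to d_tilde dimensions.
K, d_tilde = 64, 32
phi1 = [math.cos(0.1 * j) for j in range(K)]
phi2 = [math.sin(0.2 * j) for j in range(K)]

A = random_projection_matrix(d_tilde, K)
tphi1, tphi2 = project(A, phi1), project(A, phi2)

# The d_tilde-dimensional dot product approximates the K-dimensional one,
# and squared distances are preserved up to a (1 +/- eps) distortion.
exact, approx = dot(phi1, phi2), dot(tphi1, tphi2)
ratio = sq_dist(tphi1, tphi2) / sq_dist(phi1, phi2)
```

Per the JL lemma, the distance distortion `ratio` concentrates around unity as $\widetilde{d}$ grows, at the cost of the $O(K\widetilde{d})$ precomputation of $\dyadic{A}$ and its application to every feature column.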
\subsubsection{Principal-component-analysis projection}
One may choose to perform a different kind of projection that is popular in machine learning. As there exist a few objectives that eventually lead to the same projection algorithm, we shall quote one exemplifying objective that is commonly considered. Suppose $\widetilde{\rvec{\phi}}(x_j)$ are the $(\widetilde{d}<K)$-dimensional projected feature column vectors of $\rvec{\phi}(x_j)$, and $\dyadic{B}$ is a $K\times\widetilde{d}$ approximate recovery matrix that gives the set of $K$-dimensional columns $\{\rvec{\phi}'(x_j)=\dyadic{B}\,\widetilde{\rvec{\phi}}(x_j)\}$. Henceforth, we shall assume that $\dyadic{B}$ is an isometry ($\dyadic{B}^\top\dyadic{B}=\dyadic{1}$). Then a reasonable prescription for defining the projected feature columns could be one which minimizes the average squared-error \begin{equation}
\mathcal{E}=\dfrac{1}{|X|}\sum_{x_j\in X}\Vert\rvec{\phi}(x_j)-\rvec{\phi}'(x_j)\Vert^2\,.
\label{eq:E1} \end{equation} Setting the variation \begin{equation}
\updelta\mathcal{E}=-\dfrac{2}{|X|}\sum_{x_j\in X}[\rvec{\phi}(x_j)-\rvec{\phi}'(x_j)]^\top\dyadic{B}\,\updelta\widetilde{\rvec{\phi}}(x_j) \end{equation} with respect to $\widetilde{\rvec{\phi}}(x_j)$ to zero yields the extremal equation \begin{equation}
\dyadic{B}^\top\,\rvec{\phi}(x_j)=\dyadic{B}^\top\,\rvec{\phi}'(x_j)\,, \end{equation} implying that the optimal projected feature columns are given by $\widetilde{\rvec{\phi}}(x_j)=\dyadic{B}^\top\rvec{\phi}(x_j)$.
The next task would be to minimize \begin{equation}
\mathcal{E}=\dfrac{1}{|X|}\sum_{x_j\in X}\left\|(\dyadic{1}-\dyadic{B}\,\dyadic{B}^\top)\,\rvec{\phi}(x_j)\right\|^2=K-\Tr{\bm{\Sigma}\,\dyadic{B}\,\dyadic{B}^\top}
\label{eq:E2} \end{equation}
with respect to all isometries $\dyadic{B}$, where $\bm{\Sigma}=\sum_{x_j\in X}\rvec{\phi}(x_j)\rvec{\phi}(x_j)^\top/|X|$ and a simplification to the second equality makes use of the fact that $\dyadic{1}-\dyadic{B}\,\dyadic{B}^\top$ is a projector and $\left\|\rvec{\phi}(x_j)\right\|^2=K$. By assigning a Lagrange matrix $\bm{\Lambda}$ for the isometry constraint, the relevant Lagrange function reads \begin{equation}
\mathcal{D}=K-\Tr{\bm{\Sigma}\,\dyadic{B}\,\dyadic{B}^\top}+\Tr{\bm{\Lambda}\,(\dyadic{B}^\top\dyadic{B}-\dyadic{1})}\,. \end{equation} Setting the variation of $\mathcal{D}$ with respect to $\dyadic{B}$ to zero gives the extremal equation \begin{equation}
\dyadic{B}^\top\,\bm{\Sigma}=\bm{\Lambda}\,\dyadic{B}^\top\,, \end{equation} upon solving which for $\bm{\Lambda}$ finally leads to the eigenmatrix equation \begin{equation}
\dyadic{B}^\top\,\bm{\Sigma}=\dyadic{B}^\top\,\bm{\Sigma}\,\dyadic{B}\,\dyadic{B}^\top\,. \end{equation} The solution of $\dyadic{B}$ to this equation is then $\dyadic{B}=\left(\rvec{s}_1\,\,\rvec{s}_2\,\,\ldots\,\,\rvec{s}_{\widetilde{d}}\right)$, where the $\rvec{s}_j$ are $\widetilde{d}$ eigenvectors of $\bm{\Sigma}$.
Therefore, the complete recipe to construct the optimal $\widetilde{d}$-dimensional projection column vectors $\widetilde{\rvec{\phi}}(x_j)$ that minimizes $\mathcal{E}$ in either \eqref{eq:E1} or \eqref{eq:E2} is to search for the projection matrix $\dyadic{B}^\top$ that houses $\widetilde{d}$ eigenvectors of $\bm{\Sigma}$ corresponding to its $\widetilde{d}$ \emph{largest} eigenvalues as its rows. This is a version of the so-called principal-component-analysis~(PCA) projection~\cite{Pearson:1901lines,Hotelling:1936relations,Murphy:2012machine} that applies to our context, which preserves the approximately recovered columns $\rvec{\phi}'(x_j)$ with respect to $\rvec{\phi}(x_j)$ \emph{via} the minimization of $\mathcal{E}$. The result is also an under-parametrized $K$-dimensional CFFLM: \begin{equation}
f_\mathrm{C}^{\text{(PCA-proj)}}(x_j) = \widetilde{\rvec{c}_\mathrm{C}}^\top \widetilde{\rvec{\phi}}(x_j) = \widetilde{\rvec{c}_\mathrm{C}}^\top \dyadic{B}^\top\rvec{\phi}(x_j)=(\dyadic{B}\,\widetilde{\rvec{c}_\mathrm{C}})^\top\rvec{\phi}(x_j)\,. \end{equation}
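The complete recipe above can be illustrated with a minimal sketch (ours; the eigenvectors of $\bm{\Sigma}$ are obtained here by naive power iteration with deflation, purely for transparency). The toy feature columns are chosen to lie in a two-dimensional subspace, so the $\widetilde{d}=2$ PCA projection should reconstruct them essentially exactly:

```python
import math

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def power_iteration(M, iters=1000):
    """Leading eigenpair of a symmetric PSD matrix via power iteration."""
    v = [math.sin(i + 1.0) for i in range(len(M))]  # generic starting vector
    for _ in range(iters):
        w = matvec(M, v)
        n = math.sqrt(sum(x * x for x in w))
        v = [x / n for x in w]
    lam = sum(vi * wi for vi, wi in zip(v, matvec(M, v)))
    return lam, v

# Toy feature columns in K = 4 dimensions, all inside a 2-D subspace.
phis = [[2.0, 0.0, 2.0, 0.0],
        [0.0, 1.0, 0.0, -1.0],
        [2.0, 1.0, 2.0, -1.0],
        [1.0, -1.0, 1.0, 1.0]]
K, d_tilde = 4, 2

# Sigma = (1/|X|) sum_j phi(x_j) phi(x_j)^T
Sigma = [[sum(p[i] * p[j] for p in phis) / len(phis) for j in range(K)]
         for i in range(K)]

# Top-d_tilde eigenvectors of Sigma via power iteration + deflation.
B_cols, M = [], [row[:] for row in Sigma]
for _ in range(d_tilde):
    lam, s = power_iteration(M)
    B_cols.append(s)
    M = [[M[i][j] - lam * s[i] * s[j] for j in range(K)] for i in range(K)]

def project(phi):   # tilde-phi = B^T phi
    return [sum(s[i] * phi[i] for i in range(K)) for s in B_cols]

def recover(tphi):  # phi' = B tilde-phi
    return [sum(s[i] * t for s, t in zip(B_cols, tphi)) for i in range(K)]

# Average squared reconstruction error E; ~0 since the data are rank-2.
err = sum(sum((p[i] - recover(project(p))[i]) ** 2 for i in range(K))
          for p in phis) / len(phis)
```

Since $\bm{\Sigma}$ here has rank 2, retaining its two largest-eigenvalue eigenvectors makes $\dyadic{B}\dyadic{B}^\top$ act as the identity on the data, driving $\mathcal{E}$ to zero, which is exactly the minimization described above.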
In terms of computational resources, there are still the necessary precomputation steps one needs to carry out. For the PCA projection, this includes the computation of $\bm{\Sigma}$ that incurs $O(|X|K^2)=O(K^3)$ as $|X|=O(K)$ from the Nyquist--Shannon theorem, the computation of its $\widetilde{d}$ largest eigenvalues that takes a worst-case complexity of $O(K^3)$, and the multiplication of $\dyadic{B}^\top$ with $|X|$ $\rvec{\phi}(x_j)$s [$O(|X|K\widetilde{d})=O(K^2\widetilde{d})$]. Hence, we see that $O(K^3)$ is the bottleneck for the precomputation steps.
\subsubsection{Overall dot-product computational resources}
For both random and PCA projections, it is possible to perform dot-product calculations using a set of projected $\widetilde{\rvec{\phi}}(x_j)$s in place of the original $\rvec{\phi}(x_j)$s, which may be of a much smaller dimension than $K$. By disregarding the precomputation steps, one may claim a reduction in dot-product computational resources with such projection methods. However, care has to be taken in comparing computational resources between CFFLMs and QFFLMs. Strictly speaking, only data preprocessing steps that map the training inputs $x_j$ to the Fourier feature columns $\rvec{\phi}(x_j)$ are common to these two different models, and may therefore be disregarded in the computational-resource comparisons. Otherwise, for a fair comparison between CFFLMs and QFFLMs, \emph{any} additional precomputation complexities must be accounted for in analyzing any method applied to these models. In view of this, there is really no advantage in using projection methods to approximate dot-product computations.
\subsection{General loss functions and non-gradient-based optimization}
Here we consider general loss functions of the form $\mathcal{L}_{\rvec{\theta}}\propto\sum_j\mathcal{L}[f_\mathrm{C/Q}(x_j;\rvec{\theta})]$ given a training dataset $\{x_j,y_j\}$, where $f_\mathrm{C/Q}$ refers to the model function from either a CFFLM or QFFLM. Its partial derivative with respect to $\theta_k$ reads \begin{align}
\dfrac{\partial \mathcal{L}_{\rvec{\theta}}}{\partial \theta_k} \propto&\, \sum_j\dfrac{\partial \mathcal{L}[f_\mathrm{C/Q}(x_j ; \rvec{\theta})]}{\partial f_\mathrm{C/Q}(x_j ; \rvec{\theta})} \dfrac{\partial f_\mathrm{C/Q}(x_j ; \rvec{\theta})}{\partial \theta_k}\nonumber\\
=&\,\sum_j \mathcal{F}[f_\mathrm{C/Q}(x_j ; \rvec{\theta})]\dfrac{\partial f_\mathrm{C/Q}(x_j ; \rvec{\theta})}{\partial \theta_k}\,,
\label{eq:generalloss} \end{align} where $\mathcal{F}=\mathcal{L}'$ is some (non-linear) functional of $f_\mathrm{C/Q}(x_j; \rvec{\theta})$. We note that some loss functions, such as the log-likelihood in unsupervised learning, do not require any \emph{answers} $y_j$. The following arguments therefore cover general machine-learning algorithms that use the function values $f_\mathrm{C/Q}$ as inputs to the loss function.
We see that such generality only modifies \textbf{Step~2} in Secs.~\ref{subsec:cfflm} and \ref{subsec:qfflm}, namely that data-output subtraction is generalized to computing the possibly nonlinear functional $\mathcal{F}$. Since any $\mathcal{F}$ is clearly a functional of $f_\mathrm{C/Q}$, its resource complexity is \emph{at least} that for evaluating $f_\mathrm{C/Q}$. In practical cases, it is sensible to consider an $\mathcal{L}_{\rvec{\theta}}$ such that its component gradient $\mathcal{L}'(y)$ and higher-order derivatives are themselves easily computable for any argument $y$, so that the computational resources for $\mathcal{F}$ reduce to just those for evaluating $f_\mathrm{C/Q}$. In other words, for such choices of $\mathcal{L}_{\rvec{\theta}}$, the overall gradient computation still takes $\Omega(K^M)$ basic operations for CFFLMs and $O(N_{gt}^2/\epsilon^2)$ basic operations for QFFLMs.
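A minimal numerical illustration of the chain rule in Eq.~\eqref{eq:generalloss} (ours; the log-cosh loss and the toy dataset are hypothetical stand-ins for a generic nonlinear $\mathcal{L}$), with the analytic gradient checked against finite differences:

```python
import math

# Toy 3-dimensional "CFFLM": f(x; c) = c^T phi(x), truncated Fourier feature map.
def phi(x):
    return [1.0, math.sqrt(2) * math.cos(x), math.sqrt(2) * math.sin(x)]

def f(x, c):
    return sum(ci * pi for ci, pi in zip(c, phi(x)))

# A generic non-quadratic loss L[f] = log cosh(f - y), so F = L' = tanh(f - y).
xs = [0.1, 0.7, 1.3, 2.9]
ys = [0.2, -0.4, 0.5, 0.1]
c = [0.3, -0.2, 0.5]

def loss(c):
    return sum(math.log(math.cosh(f(x, c) - y)) for x, y in zip(xs, ys))

def grad(c):
    # dL/dc_k = sum_j F[f(x_j)] * d f(x_j)/d c_k, with d f/d c_k = phi_k(x_j)
    g = [0.0, 0.0, 0.0]
    for x, y in zip(xs, ys):
        F = math.tanh(f(x, c) - y)  # F = L'
        for k, pk in enumerate(phi(x)):
            g[k] += F * pk
    return g

# Finite-difference check of the chain-rule gradient.
eps = 1e-6
g = grad(c)
fd = []
for k in range(3):
    cp, cm = c[:], c[:]
    cp[k] += eps
    cm[k] -= eps
    fd.append((loss(cp) - loss(cm)) / (2 * eps))
```

The two gradient evaluations agree to numerical precision, and the cost of $\mathcal{F}$ is indeed dominated by the model evaluation $f(x_j;\rvec{c})$ itself.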
\section{Structure of $C_{K^M}$} \label{sec:CKM_struct}
Upon identifying that $C_{K^M}=\{\rvec{c}\,|\,-1\leq f(\rvec{x})=\rvec{c}^\top\rvec{\phi}(\rvec{x})\leq1\text{ for all }\rvec{x}\in[0,2\pi)^M\}$ is the set of admissible columns $\rvec{c}$ such that the degree-$d_\mathrm{F}$ Fourier-series function $f(\rvec{x})$ is bounded between $-1$ and 1, we may first notice that $C_{K^M}$ is convex. To understand this, we suppose that $\rvec{c}_1,\rvec{c}_2\in C_{K^M}$. Then if $\rvec{c}=\mu\rvec{c}_1+(1-\mu)\rvec{c}_2$ is any convex sum of $\rvec{c}_1$ and $\rvec{c}_2$ defined by $0\leq\mu\leq1$, by the triangle inequality, \begin{equation}
\left|\rvec{c}^\top\rvec{\phi}(\rvec{x})\right|\leq\mu\left|\rvec{c}^\top_1\rvec{\phi}(\rvec{x})\right|+(1-\mu)\left|\rvec{c}^\top_2\rvec{\phi}(\rvec{x})\right|\leq1\,, \end{equation} saying that $\rvec{c}\in C_{K^M}$.
Unfortunately, the general geometrical structure of the convex body $C_{K^M}$ is analytically hard to ascertain. For univariate degree-1 Fourier series [$M=1$ and $d_\mathrm{F}=1$ (or $K=3$)], however, it is straightforward to find out the three-dimensional shape of $C_3$. In this case, all elements $\rvec{c}=(c_1\,\,c_2\,\,c_3)^\top$ in $C_3$ are such that the magnitude of $f(x)=c_1+\sqrt{2}\,c_2\cos x+\sqrt{2}\,c_3\sin x$ is less than or equal to unity. To exhaust all necessary and sufficient constraints on $\rvec{c}$, we first search for the one that imposes $|f(x)|\leq 1$ for all $x\in[0,2\pi)$. The boundary of $C_3$ contains all $\rvec{c}$s that satisfy the equation $\max_{x\in[0,2\pi)}|f(x)|=1$.
\begin{figure}
\caption{(a)~Monte~Carlo simulation of the boundary of $C_3$ and (b)~the exact surface.}
\label{fig:bicone}
\end{figure}
When $0\leq c_1\leq1$, \begin{align}
&\,\max_{x\in[0,2\pi)} |f(x)|\nonumber\\
=&\,\max_{x\in[0,2\pi)}\left\{\left|c_1+\sqrt{2(c^2_2+c^2_3)}\,\sin\!\left(x+\tan^{-1}\!\dfrac{c_2}{c_3}\right)\right|\right\}\nonumber\\
=&\,c_1+\sqrt{2(c^2_2+c^2_3)}=1\,. \end{align}
By repeating the same exercise for $-1\leq c_1<0$, we obtain $-c_1+\sqrt{2(c^2_2+c^2_3)}=1$. Hence, the $C_3$ boundary is the biconical surface described by $(1-|c_1|)^2=2(c^2_2+c^2_3)$, where the common base has radius~$1/\sqrt{2}$ and the bicone has height~$2$ (see Fig.~\ref{fig:bicone}).
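This biconical boundary is easy to verify numerically; the short script below (ours) samples points on the surface $(1-|c_1|)^2=2(c^2_2+c^2_3)$ and checks that the grid-approximated $\max_x|f(x)|$ equals unity:

```python
import math

def fmax(c1, c2, c3, n=20000):
    """Grid approximation of max_x |c1 + sqrt(2) c2 cos x + sqrt(2) c3 sin x|."""
    s2 = math.sqrt(2)
    return max(abs(c1 + s2 * (c2 * math.cos(x) + c3 * math.sin(x)))
               for x in (2 * math.pi * k / n for k in range(n)))

# Sample the claimed boundary surface (1 - |c1|)^2 = 2 (c2^2 + c3^2).
maxima = []
for c1 in (-0.9, -0.5, 0.0, 0.4, 0.8):
    r = (1 - abs(c1)) / math.sqrt(2)  # circle radius at height c1
    for ang in (0.0, 1.0, 2.5, 4.0):
        maxima.append(fmax(c1, r * math.cos(ang), r * math.sin(ang)))
```

Every sampled boundary point indeed yields $\max_x|f(x)|=1$ up to the $O(\Delta x^2)$ grid error, consistent with $\max_x|f(x)|=|c_1|+\sqrt{2(c_2^2+c_3^2)}$.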
\section{Technical details of numerical simulations concerning Fig.~\ref{fig:sim} and its extension} \label{sec:last_sec}
\begin{figure}
\caption{(a)~General circuit for the demonstrations discussed in Sec.~\ref{sec:last_sec} of this Supplemental Material, which deal with univariate target-function learning. The internal structures of the trainable modules in (b) and (c), which respectively apply for Figs.~\ref{fig:sim} and \ref{fig:inside}.}
\label{fig:last_circuits}
\end{figure}
\begin{figure}\label{fig:inside}
\end{figure}
All simulations concerning QFFLM training are performed with the \texttt{Pennylane} Python library, where the ``hardware-efficient'' circuit \emph{ansatz} described in the main text is employed. Those concerning CFFLM training are run with the \texttt{Pytorch} Python package for loss-function gradient computation purposes. We employ the Adam routine with a learning rate of~0.03 for both QFFLM and CFFLM optimization. A set of 200 equally-spaced training data points in the interval $[-\pi, \pi)$ is used to train both types of FFLMs, where full-batch learning is carried out with the complete training dataset. Iterative model training (gradient updates) is performed for a total of 500~training steps. The MSE $\mathcal{L}_{\rvec{\theta}}$ is used to measure the training accuracy, which in this case is also a direct measure of expressivity since the training dataset spans the entire function period with more than enough data points to accurately reconstruct the Fourier-series functions according to the Nyquist--Shannon theorem. Put differently, for sufficiently dense data points in the complete period, $\mathcal{L}_{\rvec{\theta}}$ is the discrete approximation of the average $L^2$ distance between the target function and the trained FFLM stated in Eq.~\eqref{eq:MSEcontinuous}.
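For reference, the Adam update used throughout follows its standard form; the sketch below (ours, with the common default $\beta_1=0.9$, $\beta_2=0.999$, $\epsilon=10^{-8}$, and a toy quadratic loss standing in for the full-batch MSE) spells out the iterations at the quoted learning rate of~0.03 over 500 steps:

```python
import math

def adam_minimize(grad, theta, lr=0.03, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=500):
    """Standard Adam iterations on a parameter list."""
    m = [0.0] * len(theta)  # first-moment estimate
    v = [0.0] * len(theta)  # second-moment estimate
    for t in range(1, steps + 1):
        g = grad(theta)
        for k in range(len(theta)):
            m[k] = beta1 * m[k] + (1 - beta1) * g[k]
            v[k] = beta2 * v[k] + (1 - beta2) * g[k] ** 2
            mhat = m[k] / (1 - beta1 ** t)   # bias correction
            vhat = v[k] / (1 - beta2 ** t)
            theta[k] -= lr * mhat / (math.sqrt(vhat) + eps)
    return theta

# Toy loss (x - 3)^2 + (y + 1)^2 standing in for the full-batch MSE;
# its gradient is [2(x - 3), 2(y + 1)], with minimizer (3, -1).
theta = adam_minimize(lambda th: [2 * (th[0] - 3.0), 2 * (th[1] + 1.0)],
                      [0.0, 0.0])
final_loss = (theta[0] - 3.0) ** 2 + (theta[1] + 1.0) ** 2
```

With these settings the iterate settles near the minimizer well within the 500-step budget used in the simulations.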
All QFFLMs employ a four-qubit circuit (see Fig.~\ref{fig:last_circuits} for reference), where all basic data-encoding gates are arranged in parallel and collectively denoted by the unitary operator $V(x)$. This is accompanied by two trainable unitary operators $W_1(\rvec{\theta}_1)$ and $W_2(\rvec{\theta}_2)$ sandwiching $V(x)$. The measurement observable is set as the single-qubit Pauli-$Z$ operator acting on the last qubit. Both $W_1(\rvec{\theta}_1)$ and $W_2(\rvec{\theta}_2)$ are made up of $L$ layers, each comprising single-qubit rotation gates followed by a nearest-neighbor CNOT-gate array. Each single-qubit rotation is encoded with two training parameters; for instance, in $W_l(\rvec{\theta}_l)$ ($l=1,2$), if $l'$ labels the layer number, then the training-parameter column $\rvec{\theta}_{ll'}=(\theta_{l,l',0}\,\,\theta_{l,l',1}\,\,\ldots\,\,\theta_{l,l',7})^\top$ consisting of eight parameters with respect to the $l'$th layer in $W_l(\rvec{\theta}_l)$ is encoded according to $\rvec{\theta}_{ll'}\rightarrow\bigotimes_{k=0}^{3} [R_Z(\theta_{l,l',2k})R_Y(\theta_{l,l',2k+1})]$. By denoting $U^{(s,s')}_\textsc{CNOT}$ as the two-qubit CNOT unitary operator acting on qubits~$s$ and $s'$, the nearest-neighbor CNOT-gate-array unitary operator is given by $U_\textsc{CNOTarr}=\prod^3_{s=1} U^{(s,s+1)}_\textsc{CNOT}$.
While the crucial numerical results are presented in Fig.~\ref{fig:sim}, we would like to elaborate on the expressivity of QFFLMs in relation to the number of layers $L$ per training module. In the main text, only single-layered ($L=1$) trainable modules are used for the purpose of a fair comparison with the CFFLM under a similar $\mathrm{resrc}$ order, and this clearly limits the expressivity of the QFFLM in general. Most notably, we revisit the case of $r=55.5$, where the target function $f$ is well-expressible with CFFLMs. Figure~\ref{fig:inside} in this subsection shows the performances for $L=1$ through~4. For the randomly-generated target Fourier-series function $f$ of dimension $\kappa=81$ considered, and also other tested random target functions, the figure confirms the expectation that increasing $L$ improves the QFFLM expressivity.
\begin{figure*}
\caption{Performances of CFFLMs and QFFLMs for $M=1$ and $\kappa=243$.}
\label{fig:5qubit}
\end{figure*}
Furthermore, we also provide Fig.~\ref{fig:5qubit} that shows similar behaviors in the CFFLM and QFFLM performances under the same resource-constraint conditions described in the main text, this time with a larger target Fourier-series-function dimension~$\kappa=243$. In this case, the CFFLM is $(N_\mathrm{tp}^\mathrm{CFFLM}=100)$-dimensional and universal in $C_{N_\mathrm{tp}^\mathrm{CFFLM}}$, whereas the QFFLMs considered here have either $L=1$ or $L=2$ training layers and are both $\kappa$-dimensional and nonuniversal with respect to $C_\kappa$. As mentioned in the main text, all single-qubit rotation gates are encoded with two training parameters respectively on the $Y$ and $Z$ Pauli gates. Figure~\ref{fig:5qubit}, therefore, supplies more examples demonstrating that a full-dimensional QFFLM that is nonuniversal can express functions outside the classically-expressible region, where a universal but lower-dimensional CFFLM shows signs of limited learning capacity.
\input{ExpEncQSL_rev2.bbl}
\end{document}
\begin{document}
\title{Perfect state transfer on a spin-chain without state initialization}
\author{C. Di Franco, M. Paternostro, and M. S. Kim}
\affiliation{School of Mathematics and Physics, Queen's University, Belfast BT7 1NN, United Kingdom}
\date{\today}
\begin{abstract} We demonstrate that perfect state transfer can be achieved using an engineered spin chain and clean local end-chain operations, without requiring the initialization of the state of the medium nor fine tuning of control-pulses. This considerably relaxes the prerequisites for obtaining reliable transfer of quantum information across interacting-spin systems. Moreover, it allows us to shed light on the interplay among purity, entanglement and operations on a class of many-body systems potentially useful for quantum information processing tasks. \end{abstract}
\pacs{03.67.-a, 03.67.Hk, 75.10.Pq}
\maketitle
The ability to prepare a fiducial state of a quantum system that has to accomplish a task of quantum communication or computation is one of seven {desiderata}, more commonly known as {\it DiVincenzo's criteria}~\cite{divincenzo}, that any reliable device for Quantum Information Processing (QIP) should meet. However, even the innocent request for a pure reference-state for the initialization of a QIP device is not easily granted, in practice, mainly due to the difficulty of preparing pure states of multipartite systems. A striking example is given by nuclear-magnetic-resonance QIP~\cite{nmr}, where the signal observed in an experiment comes from a chaotic ensemble of emitters, whose overall state is strongly mixed, and is ``reinterpreted'' quantum mechanically by relying on the concept of pseudo-purity~\cite{nmr2}. Another very important instance is provided by schemes for quantum state transfer (QST) in spin chains~\cite{unmodulated}. These have emerged as remarkable candidates for the realization of faithful short-distance transmission of quantum information~\cite{unmodulated2}. Although the preparation of the spin-medium in a fiducial pure state is an important step in the achievement of optimal transport fidelity, studies conducted on the effects of randomization of the chain's state have revealed that the process' efficiency gets spoiled, in a way that quantitatively depends on the mechanism assumed for such randomization~\cite{josephsonchain,rand}.
Here we show that the conditions about the initial state of a spin chain which enable perfect state transfer can be considerably reduced, without requiring fine tuning of control-pulses over the chain~\cite{fitz}. Specifically, we demonstrate a scheme for perfect QST that is able to bypass the initialization of the spin-medium in a {\it known pure} state. The scheme requires only end-chain single-qubit operations and a {\it single} application of a global unitary evolution and is thus fully within a scenario where the control over the core part of the spin medium is relaxed in favour of controllability of the first and last element of the chain. We show the flexibility of our protocol, which can be adapted to various interaction models. In fact, {\it we give the general conditions necessary to achieve perfect state transfer without state initialization via our scheme}. With minimal changes, one can use any Hamiltonian satisfying particular conditions on the time-evolution of two-site operators, as clearly identified in this paper.
In order to provide the details of our protocol, we address the cases of two models for QST in spin chains which have been widely used so far~\cite{cambridge,josephsonchain}. We start with a nearest-neighbor Ising coupling involving $N$ spin-$1/2$ particles that also experience a transverse magnetic field. Its Hamiltonian reads $\hat{{\cal H}}_1=\sum^{N-1}_{i=1}J_{i}\hat{Z}_{i}\hat{Z}_{i+1}+\sum^N_{i=1}B_i\hat{X}_i$. Here, $J_i$ is the interaction strength between spin $i$ and $i+1$ and $B_{i}$ is the strength of the coupling of spin $i$ to a local magnetic field. In our notation, $\hat{X},\,\hat{Y}$ and $\hat{Z}$ denote the $x,\,y$ and $z$ Pauli matrix, respectively. We choose $J_i=J\sqrt{4i(N-i)}$ and $B_i=J\sqrt{(2i-1)(2N-2i+1)}$ with $J$ being a characteristic energy scale that depends on the specific physical implementation of the model (we choose units such that $\hbar=1$ throughout the paper). By applying the single-spin operation $(\hat{\openone}+i\hat{X})/\sqrt 2$ on the first element of the chain and using the eigenstates $\ket{\pm}=(\ket{0}\pm\ket{1})/\sqrt{2}$ of the $\hat{X}$ operator as computational basis, end-to-end perfect QST is achieved via $\hat{{\cal H}}_1$, when $Jt^*=\pi/4$ ($t^*$ is the evolution time) and for the initial fiducial state $\ket{+...+}_{2,...,N}$ of the spin-medium~\cite{josephsonchain}. This result has been obtained by analyzing the system from an information-flux (IF) viewpoint~\cite{informationflux}. However, the state-transfer fidelity is sensitive to deviations of the initial state from the one being ideally required.
For the understanding of the following discussion, it is enough to mention that the IF is in general rather useful when information regarding multi-site correlation functions is needed~\cite{matryoshka}. The analysis is performed in the Heisenberg picture and requires $\hat{{\cal O}}(t)$, {\it i.e.} the time-evolved form of a given chain-operator $\hat{O}$. Here, for the purposes of our study, we concentrate on the evolution of the two-site operators $\hat{\openone}_i\hat{X}_{N-i+1}$, $\hat{Z}_i\hat{Y}_{N-i+1}$, and $\hat{Z}_i\hat{Z}_{N-i+1}$. At time $t^*=\pi/4J$, by solving the relevant Heisenberg equations, we have that \begin{equation}
\begin{split}
&\hat{\cal \openone}_i(t^*)\hat{\cal X}_{N-i+1}(t^*)=\hat X_i\hat \openone_{N-i+1},\\
&\hat{\cal Z}_i(t^*)\hat{\cal Y}_{N-i+1}(t^*)=\hat Y_i\hat Z_{N-i+1},\\
&\hat{\cal Z}_i(t^*)\hat{\cal Z}_{N-i+1}(t^*)=\hat Z_i\hat Z_{N-i+1}.
\end{split} \label{eq:evolution} \end{equation} Clearly, each of these two-site operators evolves in its swapped version, without any dependence on other chain's operators. This paves the way to the core of our protocol, which we now describe qualitatively. Qubit $1$ is initialized in the input state $\rho^{in}$ (either a pure or mixed state) we want to transfer and qubit $N$ is projected onto an eigenstate of $\hat{Z}$. Then the interaction encompassed by $\hat{\cal H}_1$ is switched on for a time $t^*=\pi/4J$, after which we end up with an entangled state of the chain. The amount of entanglement shared by the elements of the chain depends critically on their initial state, as it is commented later on. Regardless of the amount of entanglement being set, a $\hat{Z}$-measurement over the first spin projects the $N$-th one onto a state that is locally-equivalent to $\rho^{in}$. More specifically, if the product of the measurement outcomes at $1$ (after the evolution) and $N$ (before the evolution) is $+1$ ($-1$), the last spin will be in $\rho^{in}$ ($\hat{X}\rho^{in}\hat{X}$). In any case, apart from a simple single-spin transformation, perfect state transfer is achieved. For completeness of presentation, here we quantitatively assess the performance of our proposal.
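These operator relations can be checked directly for the smallest chain. The sketch below (ours, purely illustrative) verifies Eqs.~(\ref{eq:evolution}) numerically for $N=2$, where the engineered couplings reduce to $J_1=2J$ and $B_1=B_2=\sqrt{3}J$, using a plain Taylor-series matrix exponential:

```python
import math

def kron(A, B):
    """Kronecker product of two square matrices (lists of lists)."""
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def dagger(A):
    n = len(A)
    return [[A[j][i].conjugate() for j in range(n)] for i in range(n)]

def expm(A, order=60):
    """Taylor-series matrix exponential (adequate here: ||A t*|| ~ pi)."""
    n = len(A)
    R = [[complex(i == j) for j in range(n)] for i in range(n)]
    T = [row[:] for row in R]
    for k in range(1, order):
        T = [[sum(T[i][l] * A[l][j] for l in range(n)) / k for j in range(n)] for i in range(n)]
        R = [[R[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return R

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

J = 1.0  # N = 2: J_1 = J*sqrt(4*1*1) = 2J, B_1 = B_2 = J*sqrt(1*3) = sqrt(3)*J
ZZ, XI, IX = kron(Z, Z), kron(X, I2), kron(I2, X)
H = [[2 * J * ZZ[i][j] + math.sqrt(3) * J * (XI[i][j] + IX[i][j])
      for j in range(4)] for i in range(4)]

t_star = math.pi / (4 * J)
U = expm([[-1j * H[i][j] * t_star for j in range(4)] for i in range(4)])

def heisenberg(O):  # O(t*) = U^dag O U
    return matmul(dagger(U), matmul(O, U))

def close(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(4) for j in range(4))

ok1 = close(heisenberg(kron(I2, X)), kron(X, I2))  # 1_i X_N -> X_i 1_N
ok2 = close(heisenberg(kron(Z, Y)), kron(Y, Z))    # Z_i Y_N -> Y_i Z_N
ok3 = close(heisenberg(kron(Z, Z)), kron(Z, Z))    # Z_i Z_N -> Z_i Z_N
```

All three swapped-operator relations hold exactly at $t^*=\pi/4J$, with no dependence on other chain operators.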
We start by considering spins $2,...,N-1$ all prepared in (unknown) eigenstates of the $\hat{Z}$ operator. For simplicity, we assume a pure state $\ket{\psi}=\alpha\ket{0}+\beta\ket{1}$ to be transmitted and the last spin in $\ket{0}$, although the generalization is straightforward. For definiteness, a representative of the initial state of the medium is written as $\ket{a_2...a_{N-1}}_{2,...,N-1}$ with $\ket{a_i}_i$ the state of spin $i$ ($a_i=0,1$). The final state of the chain, $e^{-i\hat{\cal H}_1t^*}\ket{\psi}_1\ket{a_2...a_{N-1}}_{2,...,N-1}\ket{0}_N$, is $\ket{\Psi}_{F}=(1/\sqrt{2})\,[\ket{0}_1\ket{a_{N-1}...a_2}_{2,...,N-1}\ket{\psi}_N+i\ket{1}_1\ket{a^\perp_{N-1}...a^\perp_2}_{2,...,N-1}(\hat{X}\ket{\psi}_N)]$, where $\bra{a^\perp_i}{a_i}\rangle=0,~\forall{i}$. Thus, upon measurement of the first spin over the $\hat{Z}$ eigenbasis, the state of the last spin is clearly locally-equivalent to $\ket{\psi}$ (and separable with respect to the subsystem $\{2,...,N-1\}$). The form of $\ket{\Psi}_{F}$ reveals the core of our mechanism. In fact, before the measurement stage, a fraction of genuine $N$-party entanglement of Greenberger-Horne-Zeilinger (GHZ) form~\cite{ghz} is shared by the elements of the chain. Such fraction is maximum for $\langle{\psi}|{\hat{X}}|{\psi}\rangle=0$ and disappears if $\ket{\psi}$ is taken as an eigenstate of $\hat{X}$, showing that the state to be transmitted acts as a knob for the entanglement in the chain. This consideration can be extended to any other spin of the medium. Indeed, suppose that one of the central spins (labelled $j$) is prepared in an eigenstate $\ket{\pm}_j$ of $\hat{X}$. The final state of the chain after the evolution driven by $\hat{\cal H}_1$ is $(1/\sqrt{2})\,\ket{\pm}_{N-j+1}[\ket{0}_1\ket{a_{N-1}...a_2}_{(2,...,N-1)'}\ket{\psi}_N+i\ket{1}_1\ket{a^\perp_{N-1}...a^\perp_2}_{(2,...,N-1)'}(\hat{X}\ket{\psi}_N)]$, where $(2,...,N-1)'$ denotes the set of all spins from $2$ to $N-1$, spin $N-j+1$ excluded. 
This shows that, in general, the GHZ entanglement shared by the elements of the chain before the measurement stage will not include the spins that are mirror-symmetrical with respect to any element initially prepared in an eigenstate of $\hat{X}$.
We can now extend our analysis to the case of an initial mixed state of the spin-medium. As before, for simplicity, the state of the last spin is $\ket{0}$. By following the same steps as above, the final state of the system would be given by $\rho_F=[\ket{0}_1\!\bra{0}\otimes\rho\otimes\rho^{in}_N\!+\!\ket{1}_1\!\bra{1}\otimes\hat{\cal S}_2\rho\otimes\hat{\cal T}_2\rho^{in}_N-(i\ket{0}_1\!\bra{1}\otimes\hat{\cal S}_1\rho\otimes\hat{\cal T}_1\rho^{in}_N\!+h.c.)]/2$ with $\rho$ the density matrix of spins from $2$ to $N-1$ obtained by applying a mirror-inversion operation on their initial state and $\rho^{in}$ the density matrix of the state to transfer. We have defined $\hat{\cal S}_1\rho=\rho\prod_{i=2}^{N-1}\hat{X}_i$, $\hat{\cal S}_2\rho=\prod_{i=2}^{N-1}\hat{X}_i\rho\prod_{i=2}^{N-1}\hat{X}_i$, $\hat{\cal T}_1\rho^{in}_N=\rho^{in}_N\hat{X}_N$, $\hat{\cal T}_2\rho^{in}_N=\hat{X}_N\rho^{in}_N\hat{X}_N$. Again, the crucial point here is that, regardless of the amount of entanglement established between the spin-medium and the extremal elements of the chain ({\it i.e.} spins $1$ and $N$), upon $\hat{Z}$-measurement of $1$, the last spin is disconnected from the rest of the system, {\it whose initial state is inessential to the performance of the protocol} and could well be, for instance, a thermal state of the chain in equilibrium at finite temperature. In fact, the key requirements for our scheme are the arrangement of the proper time-evolution (to be accomplished within the coherence times of the system) and the performance of clean projective measurements on spin $1$ and, preventively, on $N$.
The last requirement of our scheme is particularly important and, in order to estimate its relevance, we evaluate the performance of the protocol against the purity of the initial state of spin $N$. For the sake of simplicity, we focus on the case in which a pure state is transmitted, the case of mixed states being easily deduced. Our instrument is the input-output transfer fidelity $F_{\text{transfer}}=\bra{\psi}\rho^{out}_N\ket{\psi}$ (with $\rho^{out}_N$ the state of the last qubit after the protocol), which is unity when the two states are the same and zero when they are mutually orthogonal. We have that $F_{\text{transfer}}=p_{00}+(1-p_{00})\text{Tr}(\ket{\psi}\!\bra{\psi}{\hat X}),$ where $p_{00}\in[0,1]$ is the population of $\ket{0}$ in the density matrix describing the initial state of spin $N$ (decomposed over the $\hat{Z}$-basis). The independence of the state fidelity from the coherences of the initial state of $N$ implies that it is effectively the same to operate with a pure state $\ket{\phi}_N=\sqrt{p_{00}}\ket{0}_N+e^{i\varphi}\sqrt{1-p_{00}}\ket{1}_N$ or the mixed one $\rho_N=p_{00}\ket{0}\bra{0}+\gamma\ket{0}\bra{1}+\gamma^*\ket{1}\bra{0}+(1-p_{00})\ket{1}\bra{1}$, with $\gamma$ being arbitrary. Any error in the QST process has to be ascribed to the fact that, for a non-unit value of $p_{00}$, the perfectly-transmitted state $\ket{\psi}$ has an admixture with the ``wrong'' state $\hat{X}\!\ket{\psi}$. This explains the dependence of $F_{\text{transfer}}$ on the state to be transmitted (more precisely, on $\langle{\psi}|{\hat{X}}|{\psi}\rangle$).
As anticipated, our results are not bound to the specific interaction model being considered but depend, more generally, on the way two-site operators evolve in time. Under different couplings, similar behaviors for objects like $\hat{\cal O}_i(t^*)\hat{\cal O}_{N-i+1}(t^*)$ can be observed, therefore leading to conclusions similar to those put forward in our discussion so far. In fact, with rather minor adjustments to the procedure described above, one can apply the scheme to $N$ spin-$1/2$ particles coupled via the $XX$ model $\hat{{\cal H}}_2=\sum^{N-1}_{i=1}K_{i}(\hat{X}_{i}\hat{X}_{i+1}+\hat{Y}_i\hat{Y}_{i+1})$ with $K_{i}=J\sqrt{i(N-i)}$. $\hat{\cal H}_2$ has been extensively analyzed~\cite{cambridge}: $1\rightarrow{N}$ perfect QST is achieved when the initial state of all the spins but the first one is $\ket{0}$. However, let us reason in terms of IF again, proceed as done above for the Ising model and look at the dynamics of two-site operators symmetrical with respect to the center of the chain. At time $t^*=\pi/4J$, we have that, for any $N$, $\hat{\cal \openone}_i(t^*)\hat{\cal Z}_{N-i+1}(t^*)=\hat Z_i\hat \openone_{N-i+1}$. On the other hand, for even $N$ we find \begin{equation}
\begin{split}
&\hat{\cal X}_i(t^*)\hat{\cal X}_{N-i+1}(t^*)=\hat X_i\hat X_{N-i+1},\\
&\hat{\cal X}_i(t^*)\hat{\cal Y}_{N-i+1}(t^*)=\hat Y_i\hat X_{N-i+1},
\end{split} \label{eq:evolutioncambridge} \end{equation} while for an odd number of spins in the chain we have \begin{equation}
\begin{split}
&\hat{\cal X}_i(t^*)\hat{\cal X}_{N-i+1}(t^*)=\hat Y_i\hat Y_{N-i+1},\\
&\hat{\cal X}_i(t^*)\hat{\cal Y}_{N-i+1}(t^*)=\hat X_i\hat Y_{N-i+1}.
\end{split} \label{eq:evolutioncambridge2} \end{equation} The procedure to follow has to be adjusted depending on the chain's length. In particular, the last spin has to be projected onto $\ket{\pm_N}=(\ket{0}\pm e^{iN\frac{\pi}{2}}\ket{1})/\sqrt{2}$. In what follows, we say that outcome $+1$ ($-1$) is found if a projection onto $\ket{+_N}$ ($\ket{-_N}$) is performed. This change of basis with respect to the protocol designed for the Ising model is due to the different form of the {\it transverse} nature of $\hat{\cal H}_2$. After the evolution $e^{-i\hat{\cal H}_2t}$, we measure the first spin over the $\hat{X}$ eigenbasis. The resulting output state depends on the product of the measurement outcomes at $1$ (after the evolution) and $N$ (before the evolution). If such product is $+1$ ($-1$), the transmitted state will be $(\hat{T}^N)^\dag\rho^{in}(\hat{T}^N)$ [$(\hat{T}^N)\rho^{in}(\hat{T}^N)^\dag$], where $\hat{T}=\ket{0}\bra{0}+e^{i\frac{\pi}{2}}\ket{1}\bra{1}$ (therefore, $\hat{T}^2=\hat{Z}$). Also in this case, apart from a single-spin transformation, perfect state transfer is achieved. A sketch of the general scheme for perfect state transfer is presented in Fig.~\ref{protocols}. \begin{figure}
\caption{Sketch of the scheme for perfect QST. $M_1$ and $M_2$ are measurements performed over a fixed basis, $\Sigma$ is a conditional operation, and $\hat{\cal H}$ is the Hamiltonian.}
\label{protocols}
\end{figure} For the XX model, we can write the final state of the system before the $M_2$-measurement stage (we consider the last spin in $\ket{+_N}$), as $\rho_F=\{\ket{+}_1\!\bra{+}\otimes\tilde{\rho}\otimes\tilde{\rho}^{in}_N\!+\!\ket{-}_1\!\bra{-}\otimes\hat{\cal S}_4\tilde{\rho}\otimes\hat{\cal T}_4\tilde{\rho}^{in}_N-[i(-1)^{N}\ket{+}_1\!\bra{-}\otimes\hat{\cal S}_3\tilde{\rho}\otimes\hat{\cal T}_3\tilde{\rho}^{in}_N+h.c.]\}/2$ with $\tilde{\rho}\!=\!\hat{A}^\dag\rho\hat{A}$, $\tilde{\rho}^{in}\!=\!(\hat{T}^N)^\dag\rho^{in}(\hat{T}^N)$, $\hat{A}=\prod_{i=2}^{N-1}\hat{T}_i^N$, $\hat{\cal S}_3\tilde{\rho}=\tilde{\rho}\prod_{i=2}^{N-1}\hat{Z}_i$, $\hat{\cal S}_4\tilde{\rho}=\prod_{i=2}^{N-1}\hat{Z}_i\tilde{\rho}\prod_{i=2}^{N-1}\hat{Z}_i$, $\hat{\cal T}_3\tilde{\rho}^{in}_N=\tilde{\rho}^{in}_N\hat{Z}_N$, $\hat{\cal T}_4\tilde{\rho}^{in}_N=\hat{Z}_N\tilde{\rho}^{in}_N\hat{Z}_N$.
In general, the protocol can be adapted to any Hamiltonian for which we can find a triplet of single-spin operators $\hat{\cal B},\hat{C},\hat{D}$ such that, for symmetric spin pairs, we have $\hat{\cal B}^{j_{O}}_i(t^*)\hat{C}_{N-i+1}\hat{\cal O}_{N-i+1}(t^*)=\hat{O}_{i}\hat{D}^{k_O}_{N-i+1}$. Here, $\hat{\cal B}_{i}$ ($\hat{D}_{N-i+1}$) provides the eigenbasis for the measurement over spin $i$ ($N-i+1$) of the chain after (before) the evolution, $\hat{C}_{N-i+1}$ is a decoding operation, $\hat{O}_i=\hat{\cal O}_i(0)=X,Y,Z$ and $j_O,k_O=0,1$, depending on the coupling model. For instance, Eqs.~(\ref{eq:evolution}) are obtained by taking $\hat{\cal B}_i=\hat{\cal Z}_i$, $\hat{C}_{N-i+1}=\openone$, $\hat{D}_{N-i+1}=\hat{Z}_{N-i+1}$ with $j_X=k_X=0,j_{Y,Z}=k_{Y,Z}=1$. We point out that even when these conditions are not fulfilled, as in Ref.~\cite{campos} where an antiferromagnetic Heisenberg chain is used, our protocol can still be rather successful. In such cases, through IF we can calculate an estimate of the average transfer fidelity~\cite{informationflux}. For instance, for a homogeneous XX model of $N=100$ spins with end-point coupling strengths $J_{1,N-1}$ such that $J_{1}=J_{N-1}\simeq{0.7}J$, the average transfer fidelity via our protocol is estimated to be $\ge0.87$.
As noticed for the case of $\hat{\cal H}_1$, the nature and amount of the entanglement generated during the performance of the protocol depend on the form of the initial state of the medium's spins. Multipartite entanglement shared by all (or some of) the elements of the chain, as well as purely bipartite entanglement involving the first and last qubits (as in the case when spins from $2$ to $N-1$ are in eigenstates of $\hat{X}$), can be generated. Nevertheless, unit fidelity of transfer is achieved whenever the right evolution time is chosen and perfect projective measurements are performed. This strongly supports the idea that QST protocols do not crucially rely on the specific nature and quality of the entanglement generated throughout the many-body dynamics, in stark contrast with other schemes for QIP~\cite{grover}.
On the other hand, the counterintuitive fact that $F_{\text{transfer}}=1$ regardless of the initial state of the medium could remind one, at first sight, of the idea of {\it deterministic quantum computation with one quantum bit} (DQC1) proposed in Ref.~\cite{knill}. In this model, a single pure two-level system and arbitrarily many ancillae prepared in a maximally mixed state are used in order to solve problems for which no efficient classical algorithm is known. The apparent similarity with our case is resolved by observing that in DQC1 the initial state is restricted to that particular instance (a pure single-qubit state and a maximally mixed state of all the other qubits), which can be seen as the ``fiducial'' state invoked in DiVincenzo's criterion. In contrast, our scheme completely relaxes the knowledge required about the state of the spin-medium, which might well be completely unknown to the agents that perform the QST process. The achievement of quantum computation with initial mixed states has also been analyzed in Ref.~\cite{parker}, where it has been shown that a single qubit supported by a collection of qubits in an arbitrary mixed state is sufficient to efficiently implement Shor's factorization algorithm. In this case, however, the performance of the protocol depends on the input state. Indeed, the average efficiency over all the possible random states (mixed or pure) is evaluated, but for some particular input states (for instance, $\ket{0...0}_{2,...,N}$) it can drop below the classical limit. In contrast, our scheme is independent of the initialization of the spin-medium and its efficiency cannot be spoiled by any input state.
It is worth clarifying an important aspect which is certainly apparent to the careful reader. The procedures described so far might remind one of the general scheme for one-way quantum computation put forward in Ref.~\cite{OW}. In both cases, the optimal result of a protocol depends on the performance of perfect projective measurements onto specific elements of a register and the feed-forward of a certain amount of classical information (in our case, the outcome of the measurements over spin $1$ and/or the initial projection of spin $N$). Moreover, as in the one-way model, in our proposal the ``pattern'' of quantum correlations depends on the initial state of the elements of the system. However, such an analogy cannot be pushed too far as, remarkably, the use of quantum entanglement in the two protocols is different. While the one-way model relies on a pre-built multipartite entangled resource (the graph state) which is progressively destroyed by a proper program of measurements, in our scheme the multipartite entanglement (if any) is built while the protocol is running. We just need a single measurement for the processing of the information encoded at the input state. In addition, at variance with a graph state, the preparation of some of the spins in the medium in states preventing their participation in a multipartite entangled state does not spoil the efficiency of the protocol, as we have demonstrated. This is not the case for a graph state built out of pairwise Ising interactions: the wrong initialization of a part of the register excludes it from the overall entangled state, and actually ``blocks'' the transfer of information through that region of the register.
Finally, we would like to stress the difference between our approach and those achieving perfect QST via mirror-inverting coupling models~\cite{mirror}. In our general protocol, mirror inversion is ``induced'' in models which would otherwise not allow it, by adjusting the pattern of quantum interferences within the spin medium via the encoding/decoding local stages. By means of these, one can avoid the pre-engineered fulfilment of precise conditions on the spectrum of each interacting spin~\cite{mirror,yung} which, combined with reflection symmetry, are required for mirror-inversion. Our models satisfy just the second of these conditions, perfect QST without initialization being achieved through the encoding and decoding steps we have described.
We have shown the existence of a simple control-limited scheme for the achievement of perfect QST in a system of interacting spins without the necessity of demanding state initialization. Our flexible protocol requires just {\it one-shot} unitary evolution and end-chain local operations. Its efficiency arises from the establishment of {\it correlations} between the first and last spin of the transmission chain. With the exception of limiting cases where the transfer is automatically achieved [as for the transfer of eigenstates of $\hat{X}_1$ ($\hat{Z}_1$) when model $\hat{\cal H}_1$ ($\hat{\cal H}_2$) is used], these are set regardless of the state of the spin medium, their amount being a case-dependent issue. The end-chain measurements, which are key to our scheme, ``adjust'' such correlations so as to achieve perfect QST. We hope that our study, which paves the way to a thorough investigation into the role played by multipartite entanglement in perfect QST, will help in the experimental realization of short-distance quantum communication in, for instance, engineered superconducting chains or patterned distributed nanosystems.
C.D.F. thanks D. Ballester and G. A. Paz-Silva for discussions. We acknowledge support from UK EPSRC and QIPIRC. M.P. is supported by the EPSRC (EP/G004579/1).
\end{document}
\begin{document}
\title{Bounds on mixed state entanglement}
\author{Bruno Leggio $^{1}$, Anna Napoli $^{2,4}$, Hiromichi Nakazato $^{5}$ and Antonino Messina $^{3,4}$}
\affiliation{$^{1}$ Laboratoire Reproduction et D{\'e}veloppement des Plantes, Univ Lyon, ENS de Lyon, UCB Lyon 1, CNRS, INRA, Inria, Lyon 69342, France\\ $^{2}$ Universit\`a degli Studi di Palermo, Dipartimento di Fisica e Chimica - Emilio Segr\`e, Via Archirafi 36, I-90123 Palermo, Italy\\ $^{3}$ Universit\`a degli Studi di Palermo, Dipartimento di Matematica ed Informatica, Via Archirafi, 34, I-90123 Palermo, Italy \\ $^{4}$ I.N.F.N. Sezione di Catania, Via Santa Sofia 64, I-95123 Catania, Italy\\ $^{5}$ Department of Physics, Waseda University, Tokyo 169-8555, Japan\\
}
\begin{abstract} In the general framework of $d\times d$ mixed states, we derive an explicit bound for bipartite NPT entanglement based on the mixedness characterization of the physical system. The result derived is very general, being based only on the assumption of finite dimensionality. In addition it turns out to be of experimental interest since some purity measuring protocols are known. Exploiting the bound in the particular case of thermal entanglement, a way to connect thermodynamic features to monogamy of quantum correlations is suggested, and some recent results on the subject are given a physically clear explanation. \end{abstract}
\maketitle
\section{Introduction} In dealing with mixed states of a physical system, one has to be careful when speaking about entanglement. The definition of bipartite mixed state entanglement is unique (although problems may arise in dealing with multipartite entanglement \cite{Guhne}), but its quantification relies on several different criteria and is not yet fully developed: many difficulties arise already in the definition of physically sensible measures \cite{Horodecki,Mintert}. The main problem affecting the few known mixed state entanglement measures is, indeed, the fact that extending a measure from the pure state case to the mixed state case usually requires challenging maximization procedures over all possible pure state decompositions \cite{Eisert}-\cite{Takayanagi}. Notwithstanding, the investigation of the connection between entanglement and the mixedness exhibited by a quantum system is of great interest, e.g. in quantum computation theory \cite{Jozsa,Datta} as well as in quantum teleportation \cite{Paulson}. The threshold of mixedness exhibited by a quantum system compatible with the occurrence of entanglement between two parties of the same system has been analyzed, leading for example to the so-called Kus-Zyczkowski ball of absolutely separable states \cite{Kus}-\cite{Aubrun}. Quite recently, possible links between entanglement and easily measurable observables have been exploited to define experimental protocols aimed at measuring quantum correlations \cite{Brida}-\cite{Johnston}. The use of measurable quantities as entanglement witnesses for a wide class of systems has been known for some time \cite{Toth,Wiesniak}, but an analogous possibility aiming at entanglement measures is a recent and growing challenge. To date, some bounds for entanglement have been measured in terms of correlation functions in spin systems \cite{Cramer} or using quantum quenches \cite{Cardy}.
Indeed an experimental measure of entanglement is, generally speaking, out of reach because of the difficulty in addressing local properties of many-particle systems and of the fundamental non-linearity of entanglement quantifiers. For such a reason the best one can do is to provide experimentally accessible bounds on some entanglement quantifiers \cite{Guhne2}. The aim of this paper is to derive a bound on the degree of entanglement in a general bipartition of a physical system in a mixed state. We are going to establish an upper bound to the Negativity $N$ \cite{Peres} in terms of the Linear Entropy $S_L$. We are thus studying what is called Negative Partial Transpose (NPT) entanglement. It should however be emphasized that a non-zero Negativity is a sufficient but not necessary condition to detect entanglement, since Positive Partial Transpose (PPT, or bound) entanglement exists across bipartitions of dimensions higher than $2\times 3$, and it cannot be detected by means of the Negativity criterion \cite{Horodecki2}. Our investigation contributes to the topical debate concerning a link between quantum correlations and mixedness \cite{Wei}. We stress that our result is of experimental interest since the bound on $N$ may easily be evaluated by measuring the Linear Entropy.
\section{An upper bound to the Negativity in terms of Linear Entropy}
Consider a $d$-dimensional system S in a state described by the density matrix $(0\leq p_i \leq 1, \;\;\;\forall i)$ \begin{equation}\label{rho} \rho=\sum_ip_i{\sigma_i} \end{equation} where each $\sigma_i$ represents a pure state, and define a bipartition into two subsystems S$_1$ and S$_2$ with dimensions $d_1$ and $d_2$ respectively ($d = d_1\cdot d_2$). It is common \cite{Peres} to define the Negativity as
\begin{equation}\label{N}
N=\frac{\|\rho^{T_1}\| -1}{d_m-1}=\frac{\textrm{Tr}\sqrt{\rho^{T_1}(\rho^{T_1})^{\dag}}-1}{d_m-1} \end{equation}
where $d_m = \min \{ d_1, d_2\}$, $\rho^{T_1}$ is the matrix obtained through a partial transposition with respect to the subsystem $S_1$ and $ \| \cdot \| $ is the trace norm ($ \| O \|\equiv \textrm{Tr}\{ \sqrt{O O^{\dag}}\}$). In what follows we will call $d_M= \max \{ d_1, d_2\}$. By construction, $0\leq N\leq 1$, with $N=1$ for maximally entangled states only. Furthermore, the Linear Entropy $S_L$ in our system is defined as
\begin{equation}\label{SL} S_L=\frac{d}{d-1}(1-\textrm{Tr} \rho^2)=\frac{d}{d-1}P_E \end{equation}
where $P_E=1-\textrm{Tr} \rho^2=1-\| \rho \|_2^2$ is a measure of mixedness in terms of the Purity $\textrm{Tr} \rho^2$ of the state, $ \| \rho \|_2 $ being the Hilbert-Schmidt norm of $\rho$ ($ \| O \|_2 \equiv \sqrt{\textrm{Tr} \{O O^{\dag}\}}$). By definition, $S_L = 0$ for any pure state while $S_L = 1$ for maximally mixed states. It is easy to see that there exists a link between the trace norm of an operator $O$ in a $d$-dimensional Hilbert space and its Hilbert-Schmidt norm. Such a link can be expressed as
\begin{equation}\label{normaO}
\| O \|^2=( \sum_{i=1}^d |\lambda_i | )^2 \leq d \sum_{i=1}^d | \lambda_i |^2 = d \| O \|^2_2 \end{equation} where $\lambda_i$ is the $i$-th eigenvalue of $O$ and the so-called Chebyshev sum inequality $\left( \sum_{i=1}^d a_i \right)^2 \leq d \sum_{i=1}^da_i^2$ has been used. Since, in addition, the Hilbert-Schmidt norm is invariant under partial transposition, one readily gets a first explicit link between Negativity and mixedness $P_E$, valid for generic $d$-dimensional systems, in the form of an upper bound, which reads
\begin{equation}\label{Q1} N\leq \frac{\sqrt{d}\sqrt{1-P_E}-1}{d_m-1}\equiv Q_1 \end{equation}
Equation (\ref{Q1}) provides an upper bound to the Negativity $N$ in terms of $P_E$ and thus, in view of equation (\ref{SL}), in terms of the Linear Entropy. This bound imposes a zero value for $N$ only for a maximally mixed state. It is known \cite{Thirring}, however, that no entanglement can survive in a state whose purity is smaller than or equal to $(d-1)^{-1}$. Also in the case of a pure (or almost pure) state, the bound becomes useless as long as the bipartition is not ``balanced'' (by ``balanced'' we mean a bipartition where $d_m=\sqrt{d}$). It indeed becomes greater than one (thus being unable to give information about entanglement) for mixedness smaller than $\frac{d-d_m^2}{d}$, which might even approach 1 in some specific cases (recall that, by definition, $d_m\leq d$). We however expect the entanglement bound to be trivial only in the case of pure states ($P_E = 0$). In the following we show that bound (\ref{Q1}) can be strengthened.
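As an aside, the quantities entering Eq. (\ref{Q1}) are straightforward to evaluate numerically. The following sketch (helper names are ours, not part of the paper) computes the Negativity and the bound $Q_1$ from a density matrix and checks the inequality on a maximally entangled state and on a generic random mixed state:

```python
import numpy as np

def partial_transpose(rho, d1, d2):
    """Partial transposition with respect to the first subsystem."""
    return rho.reshape(d1, d2, d1, d2).transpose(2, 1, 0, 3).reshape(d1 * d2, d1 * d2)

def negativity(rho, d1, d2):
    """N = (||rho^T1|| - 1)/(d_m - 1); trace norm via the eigenvalues of rho^T1."""
    lam = np.linalg.eigvalsh(partial_transpose(rho, d1, d2))
    return (np.abs(lam).sum() - 1.0) / (min(d1, d2) - 1.0)

def Q1(rho, d1, d2):
    """Purity-based bound of Eq. (Q1): (sqrt(d*(1 - P_E)) - 1)/(d_m - 1)."""
    purity = np.trace(rho @ rho).real          # equals 1 - P_E
    return (np.sqrt(d1 * d2 * purity) - 1.0) / (min(d1, d2) - 1.0)

# maximally entangled two-qubit state: N = Q1 = 1, the bound is tight here
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
bell = np.outer(phi, phi)
assert abs(negativity(bell, 2, 2) - 1.0) < 1e-9
assert abs(Q1(bell, 2, 2) - 1.0) < 1e-9

# the inequality N <= Q1 also holds for a generic random mixed state
rng = np.random.default_rng(1)
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = G @ G.conj().T
rho /= np.trace(rho).real
assert negativity(rho, 2, 2) <= Q1(rho, 2, 2) + 1e-12
```

Since $\rho^{T_1}$ is Hermitian, its trace norm is simply the sum of the absolute values of its eigenvalues, which keeps the computation elementary.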
\section{Strengthening the previous bound} Observe firstly that the rank $r_{\rho}$ of $\rho^{T_1}$ is not greater than $d_m^2$ (equal to $d$) when $\rho$ is pure (maximally mixed). For this reason we write
\begin{equation}\label{r(SL)} r(S_L)\equiv \max_{\{\rho : \textrm{Tr} \rho^2=1-\frac{d-1}{d}S_L\}}r_{\rho} \end{equation}
By construction, $r(0)=d_m^2$ since any pure state can be written in a Schmidt decomposition consisting of at most $d_m$ vectors, and $r(1) = d$ because a maximally mixed state is proportional to the identity. Since by definition $\left( \sum_{i=1}^d | \lambda_i |\right)^2=\left( \sum_{i=1}^{r(S_L)}{| \lambda_i |}\right)^2$ holds for any physical system, equation (\ref{Q1}) may be substituted by the following inequality
\begin{equation}\label{N_leq} N\leq \frac{\sqrt{r(S_L)}\sqrt{1-P_E}-1}{d_m-1} \end{equation}
Note however that there exist at least some physical systems for which the function in (\ref{r(SL)}), due to the maximization procedure involved in its definition, is always equal to $d$ in the range $S_L \in (0, 1]$, showing then a discontinuity at $S_L = 0$ as
\begin{equation}\label{SL_to_0} \lim_{S_L\to 0} r(S_L)=d\neq d_m^2=r(0) \end{equation}
Since we want our result to hold generally, independently of the particular system analyzed, equation (\ref{N_leq}) cannot improve equation (\ref{Q1}): even for slightly mixed states ($0 < S_L\ll 1$) we have a priori no information on $r(S_L)$, which might be equal to $d$, reducing equation (\ref{N_leq}) to (\ref{Q1}). Despite this, we may correct (\ref{N_leq}) exploiting the expectation that for very low mixedness some of these eigenvalues are much smaller than the other ones. Indeed, all the $r(S_L)$ non-vanishing eigenvalues appearing in (\ref{normaO}) are treated on an equal footing in going from $\| \rho^{T_1} \|$ to $\| \rho^{T_1} \|_2$. To properly take into account the difference between them, go back to equation (\ref{rho}) and define a reference pure state $\sigma_R$ at will among the ones having the largest occupation probability $p_R$. The spectrum of $\sigma_R^{T_1}$ consists of $n_p$ non-zero eigenvalues $\{\mu_{\alpha}^{(R)} \}$ $\left(\max n_p =d_m^2\right)$ and of $n_m=d-n_p$ zero eigenvalues $\{ \nu_{\alpha}^{(R)}\}$.
We call the former $\alpha$-class eigenvalues and the latter $\beta$-class eigenvalues; obviously the latter class does not contribute to $\| \sigma_R^{T_1}\|$. In order to strengthen (\ref{Q1}) we are interested in the spectrum of $\rho^{T_1}$ which, in general, consists of $d$ non-zero eigenvalues. Unfortunately, then, we cannot directly introduce analogous $\alpha$- and $\beta$-classes to identify which eigenvalues contribute to the sum involved in (\ref{normaO}) comparatively much less than the other ones, when the state $\rho$ possesses a low mixedness degree and is thus very close to a pure state. To overcome this difficulty let us consider a parameter-dependent class of density matrices associated to the given $\rho$
\begin{equation}\label{tau} \tau (x)=\sum_i q_i(x)\sigma_i \end{equation} with $x\geq 0$, such that, for all $i$,
\begin{equation}\label{x_to_x1} \lim_{x\to x_1} q_i(x)=p_i \;\;\;\; \lim_{x\to 0} q_i(x)=\delta_{iR} \end{equation} and such that all $q_i(x)$s are continuous functions of $x$. Thus $\tau(x)^{T_1}$ continuously connects $\rho^{T_1}$ and $\sigma_R^{T_1}$ and, as a consequence, any $\nu_{\beta}^{(R)}$ is continuously connected to a particular eigenvalue of $\rho^{T_1}$, which will be the corresponding mixed state $\beta$-class eigenvalue $\nu_{\beta}$. In such a way one can define the function $\nu_{\beta_0}(x)$ as the eigenvalue of $\tau(x)^{T_1}$ having the property
\begin{equation}\label{x_to_0} \lim_{x\to 0} \nu_{\beta_0}(x)=\nu_{\beta_0}^{(R)} \end{equation} and so the $\beta$-class eigenvalue for $\rho^{T_1}$ as \begin{equation}\label{nu_beta} \nu_{\beta_0}\equiv \lim_{x\to x_1} \nu_{\beta_0}(x) \end{equation}
We emphasize at this point that the results of this paper do not depend on the explicit functional dependence of $\tau(x)$ on $x$, which can be chosen at will provided it satisfies conditions (\ref{x_to_x1}). Indeed, $\tau(x)$ is just a mathematical tool, with (in general) no physical meaning. To save some writing and in view of equation (\ref{normaO}), we put \begin{equation}\label{A} A=\sum_{\alpha}\mu_{\alpha}^2 \;\;\;\; B=\sum_{\beta}\nu_{\beta}^2 \end{equation} and notice that $\textrm{Tr}(\rho^{T_1})^2=\textrm{Tr} \rho^2=A+B$. We can now state (see Appendix for a proof) the following.
{\bf Lemma 1} Given a state $\rho$ of a system in a $d$-dimensional Hilbert space, and the associated reference pure state $\sigma_R$, for any set of states $\tau(x)$ satisfying (\ref{tau}) and (\ref{x_to_x1}), there exists a value $\delta\geq x_1$ such that $1-A(x)-B(x)\geq B(x)d$ for any $x \in [0,\delta]$. This result allows us to find a function $w(S_L)$ such that $w(0) = d_m^2$ and
\begin{equation}\label{normarho}
\| \rho^{T_1} \|^2\leq w(S_L)=f(\|\rho^{T_1} \|^2_2) \end{equation}
Starting from the identity \begin{eqnarray}\label{identity}
\| \rho^{T_1} \|^2=(\sum_{\alpha}^{d_m^2}|\mu_{\alpha}|)^2+(\sum_{\beta}^{d-1}|\nu_{\beta}|)^2+
2\sum_{\alpha}^{d_m^2}|\mu_{\alpha}| \sum_{\beta}^{d-1}|\nu_{\beta}| \end{eqnarray} and applying the Chebyshev sum inequality term by term we obtain
\begin{equation}\label{rho_T1}
\| \rho^{T_1} \|^2\leq ( d_m\sqrt{A+B}+\sqrt{\frac{d-1}{d}}\sqrt{1-A-B} )^2 \end{equation} where Lemma 1 has been exploited. Expressing equation (\ref{rho_T1}) in terms of Negativity and Purity we finally get
\begin{equation}\label{Q2} N\leq \frac{d_m\sqrt{1-P_E}+\sqrt{\frac{d-1}{d}}\sqrt{P_E}-1}{d_m-1}\equiv Q_2 \end{equation}
Bound (\ref{Q2}) improves bound (\ref{Q1}) at high purity, i.e. when $S_L$ is small one has $Q_2 < Q_1$, while it generally becomes greater than $Q_1$ at low purity. In addition it still suffers from the same drawback as $Q_1$, not vanishing when $1-P_E=\frac{1}{d-1}$.
In such a case one has to consider the lower bound $\frac{1}{d-1}$ on purity, below which no entanglement survives. In order to take such a bound into account, instead of distinguishing among $\alpha$ and $\beta$ eigenvalues of $\rho^{T_1}$, we can divide them into positive ones $\{ \xi_i\}$ and negative ones $ \{\chi_i\}$. In this way, calling $n_-$ and $ (d-n_-)$ the numbers of negative and positive eigenvalues, respectively, and applying the Lagrange multiplier method to the function $\| \rho^{T_1} \|=\sum_i^{d-n_-} \xi_i-\sum_j^{n_-} \chi_j$ subjected to the constraints $\sum_i\xi_i^2+\sum_j \chi_j^2=1-P_E$ and $\sum_i\xi_i+\sum_j \chi_j=1$ one finds
\begin{equation}\label{norma_rho_leq}
\| \rho^{T_1} \|^2\leq\frac{d-2n_-+2\sqrt{n_-(d-n_-)(d(1-P_E)-1)}}{d} \end{equation}
Bound (\ref{norma_rho_leq}) can be exploited to show that no entanglement can survive at purity lower than $\frac{1}{d-1}$. Indeed, for entanglement to exist, one eigenvalue at least has to be negative. However, by normalization, it always has to be true that $\| \rho^{T_1} \| \geq 1$ and this implies that, as long as $n_-\geq 1$, purity $1-P_E$ can not be smaller than $\frac{1}{d-1}$ as expected. However, in general, the number of negative eigenvalues is not known. In these cases the best one can do is to look for the maximum, with respect to $n_-$, of the right-hand side of (\ref{norma_rho_leq}), leading unfortunately once again to bound (\ref{Q1}) on $N$. However, since always $N\leq \Theta (\frac{d-2}{d-1}-P_E)\equiv Q_3$, where $\Theta (x)$ is the Heaviside step function, defining $Q=\min\{Q_1, Q_2, Q_3\}$, we state our final and main result as
\begin{equation}\label{NQ} N \leq Q \end{equation} valid for every possible bipartition of a quantum system, independently of its (finite) dimension, its detailed structure or its properties. It is worth stressing that computing the Negativity quickly becomes a very hard task as the dimension of the Hilbert space grows, while the evaluation of purity can be performed without particular effort. We emphasize in addition that bound $Q$ in (\ref{NQ}) only depends on purity, and is completely determined once a bipartition of the physical system is fixed and purity is known. This means that an experimental measure of purity allows one to extract information about the maximal degree of bipartite entanglement one can find in the system under scrutiny. Some purity measuring protocols, or at least purity estimations based on experimental data, have been proposed. They are based on statistical analysis of homodyne distributions, obtained measuring radiation field tomograms \cite{Manko}, on the properties of graph states \cite{Wunderlich}, or on the availability of many different copies of the state over which separable measurements are performed \cite{Bagan}. In all the cases where a measure of purity is possible, an experimental estimation of bipartite entanglement is available thanks to (\ref{NQ}), which is then actually experimentally accessible.
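Once the bipartition is fixed, the combined bound is a function of purity alone and can be sketched in a few lines (our naming; the step-function convention at the purity threshold follows the no-entanglement condition discussed above):

```python
import numpy as np

def Q_bound(purity, d1, d2):
    """Q = min(Q1, Q2, Q3) of Eq. (NQ), depending only on the purity
    once the bipartition (d1, d2) is fixed."""
    d, dm = d1 * d2, min(d1, d2)
    PE = 1.0 - purity                                  # mixedness 1 - Tr(rho^2)
    Q1 = (np.sqrt(d * purity) - 1.0) / (dm - 1.0)
    Q2 = (dm * np.sqrt(purity) + np.sqrt((d - 1.0) / d * PE) - 1.0) / (dm - 1.0)
    Q3 = 1.0 if purity > 1.0 / (d - 1.0) else 0.0      # Theta((d-2)/(d-1) - P_E)
    return min(Q1, Q2, Q3)

print(Q_bound(1.0, 2, 2))    # pure two-qubit state: maximal entanglement allowed -> 1.0
print(Q_bound(0.25, 2, 2))   # maximally mixed two-qubit state: no entanglement -> 0.0
```

The two printed cases recover the expected extremes: for a pure state the bound is trivial ($Q=1$), while at purity $1/d$ the bound forces $N=0$.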
\section{Crossover between $Q_1$ and $Q_2$ and numerical results} As commented previously, bounds $Q_1$ (Eq. (\ref{Q1})) and $Q_2$ (Eq. (\ref{Q2})) can supply information about bipartite entanglement in two different setups: $Q_1$ is indeed accurate enough for a balanced bipartition (i.e. when $d_m\sim \sqrt{d}$) but fails when $\sqrt{d}\gg d_m$ since it rapidly becomes greater than 1. To solve this problem, we obtained the bound $Q_2$ which, by construction, provides nontrivial information about bipartite entanglement in an unbalanced bipartition $(\sqrt{d}\gg d_m )$, but may not work properly for a balanced one. It is actually a very easy task to show that our new bound $Q_2$ works better than the old one, $Q_1$, (i.e., $Q_1\geq Q_2$) when the purity $P$ is greater than a critical value given in terms of the total Hilbert space dimension and of the subdimensions of the bipartition, i.e. when
\begin{equation}\label{Pc} P(\rho)\geq \frac{d-1}{d(\sqrt{d}-d_m)^2+d-1}=P_c \end{equation}
Two limiting cases are easily studied directly from Eq. (\ref{Pc}): for a perfectly balanced bipartition ($\sqrt{d}= d_m$), one gets $P_c=1$ and, since by definition $\frac{1}{d}\leq P(\rho)\leq 1$, in this case the bound $Q_1$ is smaller (and therefore works better) than the bound $Q_2$ for any possible quantum state. In the opposite limit, in a strongly unbalanced bipartition, one can roughly approximate $(\sqrt{d}-d_m)^2\sim d$ and, since by definition $d\geq 4$, this leads to $P_c\sim \frac{d-1}{d^2+d-1} < \frac{1}{d}$. Taking into account the natural bounds for the purity of a quantum state, this in turn means that in such a limit $Q_1 > Q_2$ for any quantum state or, in other words, our new bound works always better. This behavior can be clearly seen in Fig. (\ref{Fig1}), where the dependence of $P_c$ on $d$ and $d_m$ is shown together with the natural limiting values of $P(\rho)$.
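These two limiting regimes can be read off directly by evaluating Eq. (\ref{Pc}); a quick numerical illustration (the function name is ours):

```python
import numpy as np

def critical_purity(d, dm):
    """P_c of Eq. (Pc): Q2 < Q1 for purities above P_c, Q1 < Q2 below it."""
    return (d - 1.0) / (d * (np.sqrt(d) - dm) ** 2 + d - 1.0)

# perfectly balanced bipartition (d_m = sqrt(d)): P_c = 1, Q1 wins everywhere
print(critical_purity(9, 3))       # -> 1.0

# strongly unbalanced bipartition: P_c collapses toward 1/d, Q2 wins everywhere
print(critical_purity(2048, 2))    # ~5.3e-4, below any physical purity except near 1/d
```
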
\begin{figure}
\caption{(Color online) Purity threshold $P_c$ given in Eq. (\ref{Pc}) (green surface) and natural purity limits 1 (blue upper surface) and $\frac{1}{d}$ (red lower surface) as functions of $d\in [4,1000]$ and $d_m\in [2,25]$ such that $d_m^2\leq d$. Values of purity below the green surface are such that $Q_1 < Q_2$, while values of purity above the green surface yield $Q_2 < Q_1$. It is clear that, when $d_m^2< d$, $P_c\sim \frac{1}{d}$, meaning that $Q_2 < Q_1$ for most quantum states. On the other hand, when $d_m^2\sim d$, $P_c\sim 1$ and $Q_1 < Q_2$ almost everywhere in state space. }
\label{Fig1}
\end{figure}
\begin{figure}
\caption{(Color online) $\Delta_1$ (light red triangles) and $\Delta_2$ (dark blue circles) evaluated for 1000 randomly generated bipartite states, with $d_m = d_M$ randomly chosen in [2, 10]. For these perfectly balanced bipartite states $\Delta_1 < \Delta_2$ everywhere in state space. }
\label{Fig2}
\end{figure}
\begin{figure}
\caption{ (Color online) $\Delta_1$ (light red triangles) and $\Delta_2$ (dark blue circles) evaluated for 1000 randomly generated bipartite states, with $d_M = d_m+60$ and $d_m$ randomly chosen in [2, 10]. Since the bipartitions are no longer perfectly balanced, there is a much broader mixing of values of $\Delta_1$ and $\Delta_2$. In particular, $\Delta_1$ has a much wider distribution of values, while $\Delta_2$ seems to have a much denser distribution around central values. The inset shows the difference $\Delta_1-\Delta_2$. }
\label{Fig3}
\end{figure}
\begin{figure}
\caption{(Color online) $\Delta_1$ (light red triangles) and $\Delta_2$(dark blue circles) evaluated for 1000 randomly generated bipartite states, with $d_M = d_m+70$ and $d_m$ randomly chosen in [2, 5]. For these strongly unbalanced bipartitions we always detect $\Delta_1>\Delta_2$. The inset shows the difference $\Delta_1-\Delta_2$. }
\label{Fig4}
\end{figure}
To better exemplify such a behavior, we report here results of numerical simulations performed with the aid of the $QI$ package for Mathematica \cite{Mathematica}, by which random quantum states have been generated in different dimensions, uniformly distributed according to different metrics. On these states, we tested bounds $Q_1$ and $Q_2$. Figures (\ref{Fig2})-(\ref{Fig4}) show the differences $\Delta_i=Q_i-N$ $(i=1,2)$ between the bounds and the Negativity of the state, once a bipartition has been fixed. In particular, in a first run of simulations (Fig. (\ref{Fig2})) we generated $10^3$ perfectly balanced bipartite states (such that $d_m=d_M=\sqrt{d}$), randomly choosing the dimension of the two subsystems for each quantum state within the range $d_M=d_m\in[2,10]$. The results in Fig. (\ref{Fig2}) clearly show that $\Delta_1< \Delta_2$ for all the analyzed states. The second run of simulations has been performed with $d_m$ randomly chosen in $[2, 14]$ and $d_M = d_m+60$. In such a case, as can be seen in Fig. (\ref{Fig3}), the difference $\Delta_1-\Delta_2$ has no fixed sign. The two subdimensions are, indeed, such that the critical value of purity $P_c$ in Eq. (\ref{Pc}) is neither extremely close to $\frac{1}{d}$ nor to 1. As can be noticed from the inset of Fig. (\ref{Fig3}), which shows the difference $\Delta_1-\Delta_2$, however, on average it is still true that $\Delta_1<\Delta_2$. The third set of numerical data, finally, has been obtained generating $10^3$ random states with subdimensions $d_M=d_m+70$ and $d_m$ randomly drawn in $[2,5]$. In this limit the value of $P_c$ is very close to the minimum of purity and we therefore expect $Q_2$ to work better than $Q_1$ for almost any state. This is indeed confirmed by the simulations shown in Fig. (\ref{Fig4}), in which $\Delta_2<\Delta_1$.
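A miniature version of these simulations is easy to reproduce in any numerical environment; the sketch below (random-state ensemble and helper names are ours, and only the balanced case is sampled) checks that $0\leq\Delta_1\leq\Delta_2$ state by state when $d_m=d_M$:

```python
import numpy as np

rng = np.random.default_rng(0)

def ptranspose(rho, d1, d2):
    return rho.reshape(d1, d2, d1, d2).transpose(2, 1, 0, 3).reshape(d1 * d2, d1 * d2)

def deltas(rho, d1, d2):
    """Delta_1 = Q1 - N and Delta_2 = Q2 - N for a given bipartite state."""
    d, dm = d1 * d2, min(d1, d2)
    N = (np.abs(np.linalg.eigvalsh(ptranspose(rho, d1, d2))).sum() - 1) / (dm - 1)
    P = np.trace(rho @ rho).real
    Q1 = (np.sqrt(d * P) - 1) / (dm - 1)
    Q2 = (dm * np.sqrt(P) + np.sqrt((d - 1) / d * (1 - P)) - 1) / (dm - 1)
    return Q1 - N, Q2 - N

# balanced 3x3 bipartition: P_c = 1, so Delta_1 <= Delta_2 for every sampled state
for _ in range(200):
    G = rng.normal(size=(9, 9)) + 1j * rng.normal(size=(9, 9))
    rho = G @ G.conj().T
    rho /= np.trace(rho).real
    D1, D2 = deltas(rho, 3, 3)
    assert -1e-10 <= D1 <= D2 + 1e-10
```
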
As an example of application of our results, consider a single two-level system interacting with a spin system composed of $n_s$ spins, each of which lives in a $d_s$-dimensional Hilbert space. The total system Hilbert space therefore has dimension $d = 2(d_s)^{n_s}$. Let us suppose the spin system is a chain of 10 spin-$\frac{1}{2}$ particles (a relatively small system, very far from its thermodynamic limit). The total Hilbert space dimension will then be $d = 2^{11}$, and considering the natural bipartition into the two-level system and the spin chain, one has $d_m = 2$ and $d_M = 2^{10}$. For such a system, the critical value of purity $P_c$ in Eq. (\ref{Pc}) is
\begin{equation}\label{Pcnum} P_c=\frac{2^{11}-1}{2^{11}(\sqrt{2^{11}}-2)^2+2^{11}-1}\sim 0.000534 \end{equation}
The lowest value of purity for which bipartite entanglement can survive is, as said before, $P_l=\frac{1}{d-1}\sim 0.00049$. Therefore, for all the total states having purity $0.000534\leq P(\rho)\leq 1$, bound $Q_2$ in Eq. (\ref{Q2}) works better than $Q_1$. Only for the small fraction of states having an extremely low purity, in the range $[0.00049, 0.000534]$, does bound $Q_1$ give better information than $Q_2$. This shows again that, for unbalanced bipartitions (and even in the case of a relatively small number of individual components of the total system), $Q_2$ works much better than $Q_1$.
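Both figures quoted in this example follow from Eq. (\ref{Pc}) and from the purity threshold $\frac{1}{d-1}$; a quick arithmetic check:

```python
import numpy as np

d = 2 * 2**10          # two-level system + chain of 10 spins 1/2: d = 2^11
dm = 2                 # smallest subdimension of the bipartition
Pc = (d - 1) / (d * (np.sqrt(d) - dm) ** 2 + d - 1)
Pl = 1 / (d - 1)       # purity below which no bipartite entanglement survives
print(round(Pc, 6), round(Pl, 6))   # -> 0.000534 0.000489
```
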
\section{ Application to thermal entanglement} Of particular interest is the application of the results of this paper to the case of thermal entanglement, where both the Linear Entropy and its link to Negativity acquire a much clearer meaning. A recent result \cite{Popescu} indeed shows how the canonical-ensemble description of thermal equilibrium stems from the existence of quantum correlations between a system and its thermal bath. In view of this, it has been shown that it is possible, with a very small statistical error, to replace the system + bath microcanonical ensemble with a pure state inside a suitable energy shell, still obtaining the appropriate thermal statistics characterizing the Gibbs distribution. In this context, then, the Linear Entropy of the mixed Gibbs state provides a system/bath entanglement measure. Equation (\ref{NQ}) can then be viewed as a monogamy relation, describing the competition between two kinds of quantum correlations: internal ones, measured by Negativity, and external ones, measured by Entropy. On the other hand, it is known that some thermodynamic quantities (such as heat capacity or internal energy) can be used as entanglement witnesses \cite{Wiesniak}, and recent works have shown an even closer link between heat capacity and entanglement for particular systems \cite{Leggio1,Leggio2}. The result of this paper suggests this link might hold quite generally. Indeed, in the case of a Gibbs equilibrium state, $P_E$ is given by the expression
\begin{equation}\label{PE} P_E=\sum_{i\neq j}\frac{e^{-\beta E_i}e^{-\beta E_j}}{Z^2}=\sum_{i\neq j}P_{E}^{ij} \end{equation} where $E_i$ is the $i$-th energy level of the system and $Z$ is its partition function, $\beta$ being the inverse temperature in units of $k_B$. The heat capacity of a finite-dimensional system reads
\begin{equation}\label{CV} C_V\equiv\beta^2(\langle H^2\rangle-\langle H\rangle^2)=\beta^2\sum_{i\neq j}P_E^{ij}\frac{(E_i-E_j)^2}{2} \end{equation}
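The pairwise expansion of $C_V$ can be cross-checked numerically against the variance form (a sketch with a hypothetical spectrum; note that the identity holds with the squared energy gap $(E_i-E_j)^2$):

```python
import numpy as np

# Cross-check of beta^2 * Var(H) = beta^2 * sum_{i!=j} P_E^{ij} (E_i-E_j)^2 / 2,
# with P_E^{ij} = exp(-beta E_i) exp(-beta E_j) / Z^2, on a hypothetical spectrum.
rng = np.random.default_rng(0)
E = rng.uniform(0.0, 5.0, size=9)   # hypothetical energy levels
beta = 0.7

p = np.exp(-beta * E)
Z = p.sum()
p /= Z                              # Gibbs populations e^{-beta E_i} / Z

# Variance form of the heat capacity.
C_V = beta**2 * ((p * E**2).sum() - (p * E).sum()**2)

# Pairwise form: the i = j terms vanish since (E_i - E_i)^2 = 0.
pairwise = 0.0
for i in range(len(E)):
    for j in range(len(E)):
        if i != j:
            pairwise += p[i] * p[j] * (E[i] - E[j])**2 / 2
C_V_pairwise = beta**2 * pairwise

print(np.isclose(C_V, C_V_pairwise))  # True
```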
There is thus a structural similarity between $P_E$ and $C_V$ as given by Equations (\ref{PE}) and (\ref{CV}), suggesting that a measurement of the latter, together with some knowledge of the energy spectrum of the physical system, might supply significant information on the Linear Entropy of the system and, as a consequence, on its maximal degree of internal bipartite entanglement. This motivates a more detailed future analysis of the relation between $P_E$ and $C_V$, which, in turn, might supply an experimentally accessible entanglement bound as well as highlight how the origin of thermodynamic properties is strongly related to non-classical correlations and monogamy effects. Such a connection, and the usefulness of the bounds derived in the previous sections, can be exemplified with a simple three-qutrit system with a parameter-dependent Hamiltonian
\begin{equation}\label{Hl} H_l=\omega J_z+\tau {\bf J}_1\cdot {\bf J}_2+\gamma({\bf J}_1\cdot {\bf J}_2)^2+k{\bf J}_0\cdot({\bf J}_1+{\bf J}_2) \end{equation} where ${\bf J}_i$ is the spin operator of the $i$-th particle, ${\bf J} ={\bf J}_0 +{\bf J}_1 +{\bf J}_2$, and $\omega$, $\tau$, $\gamma$, $k$ are real interaction parameters. This effective Hamiltonian describes a system consisting of two ultracold atoms (spins labeled 1 and 2) in a two-well optical lattice in the Mott insulator phase, where the tunneling term of the usual Bose-Hubbard picture enters as a second-order perturbative term; both atoms are coupled to a third atom (labeled 0) via a Heisenberg-like interaction. An external magnetic field is also present, uniformly coupled to the three atoms. Such a system is a generalization of the one studied in \cite{Leggio2}, where a deep connection between thermal entanglement and heat capacity in parameter space has been shown. Hamiltonian (\ref{Hl}) is analytically diagonalizable, thus allowing us to obtain explicit expressions for the thermodynamic quantities characterizing the Gibbs equilibrium state of the three-atom system, together with the Negativity of the reduced state of the two quadratically coupled spins.
\begin{figure}
\caption{Negativity of reduced state of the two ultracold atoms (full line) and Heat Capacity of the system (dashed line) versus quadratic interaction parameter $\gamma$. The other parameters have been fixed as $k_B T=2$, $\tau=3$, and $k=1$. }
\label{Fig5}
\end{figure}
\begin{figure}
\caption{Negativity of reduced state of the two ultracold atoms (full line), bound $Q_1$ (dotted line) and bound $Q_2$ (dashed line) versus quadratic interaction parameter $\gamma$. The other parameters have been fixed as $k_B T=2$, $\tau=3$, and $k=1$. }
\label{Fig6}
\end{figure}
\begin{figure}
\caption{Negativity of reduced state of the two ultracold atoms (full line), bound $Q_1$ (dotted line) and bound $Q_2$ (dashed line) versus Heisenberg interaction parameter $k$. The other parameters have been fixed as $k_B T=10$, $\tau=3$, and $\gamma=1$. }
\label{Fig7}
\end{figure}
\begin{figure}
\caption{Negativity of reduced state of the two ultracold atoms (full line), bound $Q_1$ (dotted line) and bound $Q_2$ (dashed line) versus temperature $T$ (in units of $k_B$). The interaction parameters have been fixed as $\tau=4$, $k=5$ and $\gamma=1$. }
\label{Fig8}
\end{figure}
The mathematical origin of the connection between heat capacity and Negativity was already discussed in \cite{Leggio2} and is ultimately due to the presence of level crossings in the low-lying energy eigenvalues of the system. Here we want to show how the strong connection between purity and Negativity, expressed by bound (\ref{NQ}), can give some hints toward a physical explanation of such an effect, and moreover to exemplify how bound (\ref{NQ}) can often supply important information on the amount of thermal entanglement. Figure (\ref{Fig5}) shows that the connection between thermal entanglement and heat capacity highlighted in \cite{Leggio2} is still present despite the interaction with a third atom. Figures (\ref{Fig6}) and (\ref{Fig7}) show bounds (\ref{Q1}) and (\ref{Q2}), together with the Negativity of the reduced state of the two atoms, versus an interaction parameter of the Hamiltonian. Figure (\ref{Fig8}), finally, shows the same quantities versus temperature for fixed Hamiltonian parameters. All energies in the plots are expressed in units of $\omega$. It is worth stressing that, in all these plots, bound $Q_3=\Theta(\frac{d-2}{d-1}-P_E)$ is not shown. The reason is that, in order to preserve thermal entanglement, the temperature in our simulations has to be kept at most of the same order of magnitude as the spin-spin interactions, and in such a regime $P_E$ has not yet crossed the threshold $\frac{d-2}{d-1}$, so that $Q_3$ is constantly equal to one. Figures (\ref{Fig6}) and (\ref{Fig8}) clearly show how bound $Q_1$ given in (\ref{Q1}) can become, as discussed, larger than 1. In all these cases (except for a small temperature range in Figure (\ref{Fig8})), however, (\ref{Q2}) is still able to provide a meaningful bound on the Negativity.
In all the plots shown, and in general every time bounds (\ref{Q1}) and (\ref{Q2}) are applied to the particular system analyzed here, one always obtains useful information about bipartite entanglement, in the form of an upper bound. Such a bound, moreover, gets very close to zero in some particular cases (see, for example, Figures (\ref{Fig7}) and (\ref{Fig8})), strongly restricting the allowed range of values for the Negativity. This shows that (\ref{NQ}) is able to produce non-trivial results.
It is worth noting that in Fig. (\ref{Fig5}) there exist ranges of the parameter $\gamma$ where the Negativity and the Heat Capacity exhibit simultaneous plateaus. This fact, previously shown and commented on in reference \cite{Leggio2}, together with Equation (\ref{NQ}) and the strong link between the Heat Capacity and the mixedness $P_E$ of a quantum state, supports the deduction that in the parameter regions of very low Negativity the Heat Capacity may be assumed to be almost constant.
\section{Conclusions} In this paper we derived a bound on the degree of information storable as bipartite quantum entanglement within an open $d$-dimensional quantum system, in terms of its Linear Entropy. Our result is quite general, holding for arbitrary bipartitions of an arbitrary system. We emphasize that our result is experimentally relevant in view of recently proposed protocols aimed at measuring the purity of the state of a quantum system. Inspired by the seminal paper of Popescu, Short and Winter \cite{Popescu}, our conclusions highlight the interplay between quantum entanglement inside a thermalized system and its physical properties. Our results are of interest not only for Quantum Information researchers but also for the growing cross-community of theoreticians and experimentalists investigating the subtle underlying link between quantum features and thermodynamics.
\section{Appendix} {\bf Proof of Lemma 1.} Let us first notice that both $B(t)$ and $1-A(t)-B(t)$ go to zero when the state $\tau(t)$ is pure. Indeed $A+B=\textrm{Tr}\, \tau^2$ equals the purity, while $B$ by construction vanishes when $\tau$ is pure (see Equations (9)-(13)). Notice further that both $A(t)$ and $B(t)$ are quadratic in $\nu_{\beta}(t)$ and $\mu_{\alpha}(t)$, and hence the statement of Lemma 1 is independent of their sign. Such a statement can be rewritten as
\begin{equation}\label{A1} \frac{1-A-B}{d}-B=l(A, B)\geq 0 \end{equation}
The function $l(A,B)$ satisfies (\ref{A1}) at the extremal points of its domain (corresponding to a pure and a maximally mixed state). For a maximally mixed state, calling $n_{\alpha}$ ($n_{\beta}$) the number of $\alpha$- ($\beta$-) class eigenvalues (with $n_{\alpha}+n_{\beta}=d$), one gets $A=\frac{n_{\alpha}}{d^2}$ and $B=\frac{n_{\beta}}{d^2}$, and thus $l=\frac{n_{\alpha}-1}{d^2}\geq 0$ since $n_{\alpha}\geq 1$. Let us now express $l(A,B)$ as
\begin{equation}\label{A2} h(\{\mu_{\alpha}\}, \{\nu_{\beta}\})=\frac{1}{d}\left( 1-\sum_{\alpha}^{n_{\alpha}}\mu_{\alpha}^2- \sum_{\beta}^{n_{\beta}}\nu_{\beta}^2\right)- \sum_{\beta}^{n_{\beta}}\nu_{\beta}^2 \end{equation}
We can investigate the interior points using the Lagrange multiplier method, taking into account the trace condition $\sum_{\alpha}^{n_{\alpha}}\mu_{\alpha}^2+ \sum_{\beta}^{n_{\beta}}\nu_{\beta}^2= 1$. This method yields a single stationary point, characterized by values of $\nu_{\beta}$ and $\mu_{\alpha}$ such that the corresponding state is mixed. It is straightforward to check that the function (\ref{A2}) is positive at this point. We then deduce that $l(A,B)\geq 0$, from which Lemma 1 directly follows. Finally, the range $\delta$ of validity of Lemma 1 is given by the requirement $q_R(t)\geq q_{i\neq R}(t)$; this property is necessary for a sensible definition of the reference pure state $\sigma_R$, which guarantees, in turn, that $B(t)$ vanishes on pure states.
\acknowledgments{ A.M. acknowledges the kind hospitality provided by HN at the Physics Department of Waseda University in November 2019. All the authors are grateful to Marek Kus for his constructive and stimulating suggestions and for carefully reading the manuscript. HN was partly supported by the Waseda University Grant for Special Research Projects (Project number: 2019C-256).}
\end{document}
\begin{document}
\history{} \doi{}
\title{Pulse-engineered Controlled-V gate and its applications on superconducting quantum device} \author{ \uppercase{Takahiko Satoh}\authorrefmark{1,2}, \uppercase{Shun Oomura}\authorrefmark{1,2}, \uppercase{Michihiko Sugawara}\authorrefmark{1,2}, and \uppercase{Naoki Yamamoto}\authorrefmark{1,2} } \address[1]{Quantum Computing Center, Keio University, Hiyoshi 3-14-1, Kohoku, Yokohama 223-8522, Japan} \address[2]{Graduate School of Science and Technology, Keio University, Hiyoshi 3-14-1, Kohoku, Yokohama 223-8522, Japan} \tfootnote{ This work was supported by MEXT Quantum Leap Flagship Program Grant Number JPMXS0118067285 and JPMXS0120319794. }
\markboth {Satoh \headeretal: Pulse-engineered Controlled-V gate and its applications on superconducting quantum device} {Satoh \headeretal: Pulse-engineered Controlled-V gate and its applications on superconducting quantum device}
\corresp{Corresponding author: Takahiko Satoh (email: satoh@sfc.wide.ad.jp).}
\begin{abstract} In this paper, we demonstrate that, by employing the OpenPulse design kit for IBM superconducting quantum devices, the controlled-V gate (CV gate) can be implemented in about half the gate time of the controlled-X gate (CX, or CNOT), and consequently with a 65.5\% reduction in gate time compared to the CX-based implementation of CV. Then, based on the theory of Cartan decomposition, we characterize the set of all two-qubit gates implementable with only two or three CV gates; using pulse-engineered CV gates enables us to implement these gates with shorter gate time and possibly better gate fidelity than the CX-based implementation, as we demonstrate in two examples. Moreover, we showcase the improvement of the linearly-coupled three-qubit Toffoli gate, implemented with the pulse-engineered CV gate, both in gate time and averaged output-state fidelity. These results imply the importance of our CV gate implementation technique, which, as an additional option for basis gate set design, may shorten the overall computation time and consequently improve the precision of quantum algorithms executed on a real device. \end{abstract} \begin{keywords} {Controlled-V gate, IBM Quantum device, OpenPulse} \end{keywords}
\titlepgskip=-15pt
\maketitle
\section{Introduction}
There are several types of platforms for implementing a quantum computer, such as superconducting, trapped-ion, and optical devices. In this paper, we study the problem of reducing the circuit depth (total gate time) on the superconducting quantum devices provided by IBM (called IBM Quantum), where Qiskit serves as the software development environment. Qiskit has two representation languages for designing quantum programs: OpenPulse \cite{qasm} and QASM.
OpenPulse is a language for specifying and physically controlling the pulses that realize a target quantum gate, which introduces a large degree of freedom into circuit design. As a result, OpenPulse can reduce the execution time through optimal pulse design for various types of quantum gates~\cite{gokhale2020optimized,earnest2021pulse}; it can also be applied to generate new gates specific to a particular physical simulation~\cite{stenger2021simulating}. Recently, a computational framework has been proposed to aid such synthesis problems~\cite{nguyen2021enabling}.
QASM is the language for circuit design with quantum gates. Physically, each gate is decomposed into a set of precisely calibrated gates chosen from a universal quantum gate set \cite{barenco1995elementary, brylinski2002universal}. The universal gate set used in IBM Quantum is composed of single-qubit gates and the Controlled-X (CX, often called CNOT) gate \cite{zulehner2019compiling}. The point of taking this fixed gate set is that, because it contains only one two-qubit interaction gate (i.e., the CX gate), the calibration process is relatively easy. In particular, the CX gate can be implemented precisely via the cross resonance (CR) Hamiltonian~\cite{kirchhoff2018optimized, cr, riron, sundaresan2020reducing, krantz2019quantum}, with the help of the echo scheme and the cancellation pulse technique~\cite{jikken}. However, the error rate of the CX gate is still much higher than that of single-qubit gates~\cite{exp}, due to its longer pulse length (gate time) and the effects of cross-talk \cite{murali2020software, mundada2019suppression, sarovar2020detecting}. Hence, if a quantum algorithm must be realized as a circuit with unnecessarily many CX gates due to the QASM constraint, the accuracy of the circuit will significantly decrease.
The above-mentioned issue may be resolved by adding some two-qubit gates to the default universal gate set composed of single-qubit gates and the CX gate. In this work, we take the Controlled-V (CV) gate, whose matrix representation in the computational basis is given by
\begin{equation}
CV =\left[\begin{array}{rrrr} 1&0&0&0\\ 0&1&0&0\\ 0&0&\frac{1+i}{2}&\frac{1-i}{2}\\ 0&0&\frac{1-i}{2}&\frac{1+i}{2} \end{array}\right], \end{equation}
which readily leads to the relation $CV^2=CX$. Note that, in the CX-based default implementation of the CV gate on QASM, one needs 2 CX gates to create the two-qubit interaction process, as shown in Fig.~\ref{fig:circuit_cv}.
\Figure[htb][width=240pt]{Openpulse/circuit_cv.pdf} {\label{fig:circuit_cv}Circuit diagram of CV gate in QASM-based implementation.}
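As a quick sanity check (not part of the original derivation), the relation $CV^2=CX$ can be verified numerically from the matrix above:

```python
import numpy as np

# Sanity check of the CV matrix and the relation CV^2 = CX.
CV = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, (1 + 1j) / 2, (1 - 1j) / 2],
               [0, 0, (1 - 1j) / 2, (1 + 1j) / 2]])
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])

print(np.allclose(CV @ CV, CX))  # True
```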
The main reason for choosing the CV gate is its potential to reduce the gate time in several QASM-based quantum algorithms. The point is that, by using OpenPulse, we can effectively implement the CV gate by simply halving the pulse length of the CR pulse used for generating the CX gate, as suggested by the relation $CV^2=CX$. That is, the gate time of the pulse-engineered CV gate is half that of the CX gate, while the QASM-based CV gate shown in Fig.~\ref{fig:circuit_cv} needs a gate time at least twice that of CX. Therefore, if some CX gates on a quantum circuit can be replaced with the same or a smaller number of CV gates, the total gate time of the circuit is reduced and thereby its accuracy will be improved. A typical example is the Toffoli gate; it needs at least 6 CX gates if only CX is available, but it can be implemented using 2 CX and 3 CV gates if the CV gate is also available \cite{barenco1995elementary, divincenzo1998quantum, cs}.
This paper is organized as follows. In Section II, we describe how to implement CV gate using OpenPulse and then show the experimental result; the gate time of the pulse-engineered CV gate is shortened by 65.5\% and the gate fidelity is improved by 0.66\%, compared to the default QASM-based implementation of CV gate. In Section III, we first use the theory of Cartan decomposition to characterize the set of all two-qubit gates implemented with only two or three CV gates; because the pulse-engineered CV gate can be implemented with shorter gate time, those two-qubit gates can also be implemented with shorter gate time and possibly better gate fidelity than the default CX-based one. Actually, we show the experimental demonstration to generate \(\sqrt{iSWAP}\) and \(\sqrt{SWAP}\) using the pulse-engineered CV gates and confirm that, in both cases, the gate fidelity is improved thanks to the shorter gate time. In Section IV, we showcase an efficient method for implementing a linearly-coupled three-qubit Toffoli gate using the pulse-engineered CV gate.
\section{Pulse-engineered CV gate} \label{sec_cv} \subsection{Cross resonance interaction} On IBM Quantum devices, the cross resonance (CR) interaction is used to couple two qubits \cite{cr}, by irradiating the control qubit with a microwave pulse at the transition frequency of the target qubit. The microwave pulse has a Gaussian-square envelope in the default setup; see Appendix~A. Under certain approximations, one obtains the following model CR Hamiltonian \cite{jikken,cs, riron, sundaresan2020reducing}:
\begin{align}
H_{CR} =& \sum_{P = I, X, Y, Z}\frac{\omega_{ZP}(A,\phi)}{2}Z\otimes P \notag\\
&+ \sum_{Q = X, Y, Z}\frac{\omega_{IQ}(A,\phi)}{2}I\otimes Q,
\label{hamii} \end{align}
where the qubit ordering is control\(\otimes\)target. \(\omega_{ZP}\) and \(\omega_{IQ}\) represent the interaction strengths, which are functions of the amplitude $A$ and the phase \(\phi\) of the microwave pulse. Note that the CR Hamiltonian is valid under the condition that a microwave pulse at the transition frequency of the target qubit is irradiated onto the control qubit. In the absence of noise, the two qubits are driven by the unitary operator
\begin{align}
U_{CR}
= \exp(-itH_{CR}).
\label{unii} \end{align}
\subsection{Pulse-engineered CX and CV gates} Let us define the general two-qubit unitary operator
\begin{equation} \label{eq_2q_ope}
[DE]^{\theta}=\exp \Big( -i\pi\frac{\theta}{2}D\otimes E \Big), \end{equation}
where $D$ and $E$ are arbitrary single-qubit operators. With this notation, the CX gate is represented as \cite{cr}:
\begin{equation} \label{cnotdeco}
CX = [ZI]^{1/2}[ZX]^{-1/2}[IX]^{1/2}. \end{equation}
That is, the two-qubit operation required to form the CX gate is provided solely by the $Z\otimes X$ term. However, the CR Hamiltonian (\ref{hamii}) contains terms other than the $Z\otimes X$ term, which thus must be eliminated by some means when implementing the CX gate via the CR Hamiltonian. This goal can be achieved by using the echo pulse sequence and applying a direct cancellation pulse on the target qubit, as illustrated in Fig.~\ref{cnot}; in other words, these techniques effectively generate the unitary evolution driven by the effective Hamiltonian, ${\tilde H}_{ZX}$, composed of only the $Z\otimes X$ term \cite{pro, sundaresan2020reducing}. In general, one can implement the unitary operator $[ZX]^{\theta}$ driven by the effective Hamiltonian ${\tilde H}_{ZX}$ by choosing the pulse duration $t$ so that $\theta = \omega_{ZX}(A,\phi)\, t/\pi$;
\begin{equation} \label{unitary} \begin{split}
[ZX]^{\theta}
&= {\tilde U}_{ZX} = \exp(-i t {\tilde H}_{ZX}), \\
{\tilde H}_{ZX}
&= \frac{\omega_{ZX}(A,\phi)}{2}Z\otimes X.
\end{split} \end{equation}
For the CX gate, the two-qubit interaction time $t_{CX}$ should be $t_{CX} = \pi/(2\,\omega_{ZX}(A,\phi))$ to realize $\theta = -1/2$.
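Since each factor in Eq.~\eqref{cnotdeco} involves a Pauli tensor product that squares to the identity, the decomposition can be verified with a few lines of NumPy (a sanity-check sketch; equality holds up to a global phase):

```python
import numpy as np

# Numerical check of CX = [ZI]^{1/2} [ZX]^{-1/2} [IX]^{1/2}, with
# [DE]^theta = exp(-i*pi*theta/2 * D (x) E). Because (D (x) E)^2 = I for
# Pauli factors, exp reduces to cos(pi*theta/2) I - i sin(pi*theta/2) D (x) E.
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def gate(D, E, theta):
    a = np.pi * theta / 2
    return np.cos(a) * np.eye(4) - 1j * np.sin(a) * np.kron(D, E)

U = gate(Z, I, 0.5) @ gate(Z, X, -0.5) @ gate(I, X, 0.5)
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]], dtype=complex)

phase = U[0, 0]                    # global phase (here exp(-i*pi/4))
print(np.allclose(U, phase * CX))  # True: equal up to a global phase
```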
\Figure[htb][width=220pt]{Openpulse/CNOT_PULSE.pdf} {Pulse schedule of CX gate for the control qubit 1 and the target qubit 4, implemented on IBM Q Toronto. Here, d1 and d4 denote the drive channels for local operations of qubits 1 and 4, while u3 is the control channel for CR-pulse responsible for the two-qubit interactions. $CR_+$ and $CR_-$ are the CR pulse shapes implementing $[ZX]^{1/2}$ on u3; d4 is the channel serving as the cancellation pulse. Two $\pi$-pulses on d1 placed before and after the first CR-pulse are used to realize the echo-scheme. The first Gaussian pulse on d4 corresponds to $[IX]$ in Eq.~\eqref{cnotdeco}, whereas $[ZI]$ is implemented without actual pulse-irradiation. \label{cnot} }
Next, from Eq. (\ref{cnotdeco}) and the fact that $IX$, $ZX$, and $ZI$ commute with each other, one can see that CV gate is decomposed as
\begin{equation}
CV = [ZI]^{1/4}[ZX]^{-1/4}[IX]^{1/4}. \end{equation}
In the present work, we directly implement \([ZX]^{-1/4}\) using OpenPulse, without decomposing this gate into multiple CX gates. As expected from Eq. (\ref{unitary}), the rotation angle \(\theta\) of the two-qubit interaction part \([ZX]^{\theta}\) is proportional to the duration of the CR pulse, as long as the effective Hamiltonian description holds. Thus, we can create \([ZX]^{-1/4}\) by taking the duration of the CR pulse \(t_{CV}\) as
\begin{align}
t_{CV} = \frac{\pi}{4\omega_{ZX}(A,\phi)}, \end{align}
which is half the CR pulse duration of the calibrated CX gate. The CR pulse envelope is a GaussianSquare pulse, i.e., a square pulse with Gaussian-shaped rising and falling edges~\cite{pro} (see also Appendix~\ref{sec_envelope}).
Note that, in all experimental demonstrations shown in this paper, we keep the basic structure of the pulse schedule and the amplitude parameters for the combined CR and cancellation pulses in Fig.~\ref{cnot} unchanged, whereas we replace the local gate parameters for $[IX]^{1/2}$ and $[ZI]^{1/2}$ in the CX pulse definition with those of $[IX]^{1/4}$ and $[ZI]^{1/4}$; moreover, the CR pulse duration is changed to the value corresponding to $[ZX]^{1/4}$.
\subsection{Experimental environment} In the present work, we used the 0th, 1st, and 4th qubits of ibmq\_toronto, as shown in Fig.~\ref{processor}. Single-qubit gate operations on qubits 0, 1, and 4 are realized by microwave irradiation on the drive channels d0, d1, and d4, respectively, whereas the CR pulses for the two-qubit interactions between qubits 0 and 1, and qubits 1 and 4, are applied to the control channels u0 and u3. Each experiment demonstrated in this paper was conducted with 8192 shots (meaning that 8192 measurements were performed for each circuit). There exist measurement errors that accidentally flip the detected bit; we applied the readout error mitigation technique~\cite{Qiskit} to correct for them. We list the single-qubit gate error and the readout error of the device in Table~\ref{oneperformance}. Also, the two-qubit CX gate errors are 1.065\% and 1.5969\% for the 0-1 and 1-4 qubit pairs, respectively.
\Figure[htb][width=4.5cm]{Openpulse/topology.pdf} {The coupling map of ibmq\_toronto processor. We used qubits 0, 1, and 4 for the experiments.\label{processor}}
\begin{table}[h]
\centering
\caption{Qubit performance of ibmq\_toronto processor.}
\begin{tabular}{ccc}
qubit&$\sqrt{X}$ gate error & Readout error \\\hline
0&2.78e-4&5.69e-2\\
1&6.97e-4&4.05e-2\\
4&3.42e-4&7.30e-2
\end{tabular}
\label{oneperformance} \end{table}
\subsection{Experimental Results} We implemented the gate \eqref{unitary} with several values of the pulse duration \(\tau_d\) of the two CR pulses, which correspond to $CR_-$ and $CR_+$ shown in Fig.~\ref{cnot}, from 45.5 ns to 161 ns. For each \(\tau_d\) we tested the following trial CV gate:
\begin{equation} \label{trial CV}
CV_{\rm trial}(\tau_d) = [ZI]^{1/4}[ZX]^{\theta(\tau_d)}[IX]^{1/4}, \end{equation}
where $\theta(\tau_d) = -\tau_d/(4 t_{CV})$. Note that the duration for realizing the CX gate is 196 ns (see Appendix~\ref{sec_cxpulse} for details); hence, from the relation $CV^2=CX$, ideally $\tau_d$ would be identical to $\tau_{CV}=196/2=98$ ns to realize the CV gate. We make this duration adjustment only for the flat-top part, keeping the Gaussian flanks fixed. We applied quantum process tomography (QPT) to the trial CV gate to evaluate its gate fidelity $F_p$ with respect to the ideal CV gate \cite{choi, schumacher1996sending, havel2003robust}. Note that we could alternatively use interleaved randomized benchmarking~\cite{cs} or the {\it randomized\_benchmarking} function in the Qiskit libraries~\cite{Qiskit} to estimate the gate fidelity.
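The theoretical curve plotted in Fig.~\ref{cv_fidelity} can be sketched as follows. Under the effective-Hamiltonian assumption, $[ZI]^{1/4}$ and $[IX]^{1/4}$ are exact and commute with $[ZX]^{\theta}$, so the process fidelity between the exact CV gate and $CV_{\rm trial}(\tau_d)$ depends only on the angle mismatch $\delta = \theta(\tau_d)+1/4$, giving $F_p=\cos^2(\pi\delta/2)$ (a sketch, not the experimental analysis code):

```python
import numpy as np

# Theoretical process fidelity of the trial CV gate vs. CR pulse duration.
# Only the angle mismatch delta = theta(tau_d) + 1/4 matters, since the local
# factors are common to both gates: F_p = cos^2(pi*delta/2).
t_CV = 98.0                              # ns, half the calibrated CX CR duration
tau_d = np.linspace(45.5, 161.0, 232)    # scanned CR pulse durations (0.5 ns grid)
theta = -tau_d / (4 * t_CV)              # theta(tau_d), linear in tau_d
delta = theta + 0.25
F_p = np.cos(np.pi * delta / 2) ** 2

best = tau_d[np.argmax(F_p)]
print(best)  # maximum attained at tau_d = t_CV = 98.0 ns
```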
\Figure[ht][width = 230pt]{Openpulse/CV_FIDELITY_average.pdf} {Gate fidelity of the trial CV gate \eqref{trial CV} to the ideal CV gate, as a function of the duration of the CR pulse. The red vertical line denotes half the duration of the CR pulse in the CX pulse schedule. The black line represents the theoretically calculated gate fidelity between the exact CV gate and the trial CV gate \eqref{trial CV}. Here, the physical control and target qubits are the 0th and the 1st ones depicted in Fig.~\ref{processor}, respectively. \label{cv_fidelity}}
Figure~\ref{cv_fidelity} shows the gate fidelity of the trial CV gate \eqref{trial CV} as a function of the CR pulse duration $\tau_d$, with and without readout mitigation; these are the averages of three experimental runs conducted on three different days. The black line represents the theoretically calculated gate fidelity between the exact CV gate and the trial CV gate~\eqref{trial CV}, as a function of the CR duration; in the latter, $[ZX]^\theta$ can be analytically calculated using Eq.~\eqref{eq_2q_ope}, with $|\theta(\tau_d)|$ increasing linearly with $\tau_d$. Also for reference, the gate fidelity of the CV gate implemented in the QASM format (denoted as QASM CV) is shown. First, note that the readout error mitigation works well and gives better fidelity values than the raw (unmitigated) results. The mitigated fidelity of the CV gate implemented with OpenPulse (denoted as Pulse CV) takes its maximum value of 99.23\% (averaged over the three days) at the CR duration $\tau_d=101.5$ ns, which is close to the expected value $\tau_{CV} = 98$ ns, i.e., half the CR pulse duration of the calibrated CX gate. Throughout all three experiments, the maximum is attained at 101.5 ns, which indicates that the optimal pulse duration is robust against calibration changes. Another important finding is that the maximum value of 99.23\% is 0.66\% higher than the CV gate fidelity achieved via the default QASM-based implementation using 2 CX gates.
Figure~\ref{cv_pulse} shows the actual pulse sequence of the CV gate implemented in (a) the default QASM format with 2 CX gates (see Fig.~\ref{fig:circuit_cv}) and (b) OpenPulse with the optimal pulse duration of 101.5 ns. The total gate time of the CV gate is 994 ns for the former, while it is 343 ns for the latter. Hence the present OpenPulse-based implementation achieves a 65.5\% reduction in the total gate time of the CV gate compared to the default one (a), in addition to the 0.66\% improvement in gate fidelity.
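The quoted reduction follows directly from the two measured gate times (a simple arithmetic check):

```python
# Gate-time comparison quoted in the text: the QASM-based CV gate takes 994 ns,
# the pulse-engineered CV gate 343 ns.
t_qasm, t_pulse = 994.0, 343.0
reduction = 1 - t_pulse / t_qasm
print(round(100 * reduction, 1))  # 65.5 (% reduction)
```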
\begin{figure}
\caption{(a) Pulse sequence for CV gate in the QASM implementation.
(b) Pulse sequence for CV gate implemented by OpenPulse with the CR pulse
duration 101.5 ns. }
\label{cv_pulse}
\end{figure}
\section{Two-qubit gate design with CV Gates} \label{sec_twoqgate} Arbitrary two-qubit gates can be implemented with three CX gates~\cite{geo, vidal2004universal}. However, generating two-qubit interactions only with CX gates can unnecessarily prolong the gate time. In this section, we study the set of two-qubit gates that can be configured with up to three CV gates instead of the same number of CX gates, based on the theory of Cartan decomposition. In particular, we consider the \textsc{$\sqrt{SWAP}$}\xspace and \textsc{$\sqrt{iSWAP}$}\xspace gates as examples; they can be implemented with three and two CV gates, respectively, and thus the resulting gate time is obviously shortened compared to the default CX-based implementations. We have also experimentally confirmed that the gate fidelity of these CV-based gates is superior to that of the CX-based ones.
\subsection{Cartan decomposition} The Cartan decomposition theorem states that an arbitrary two-qubit unitary operation \(U\in SU(4)\) can be represented in the form
\begin{equation} \label{cartan}
U = k_1\exp\{\frac{i}{2}(aX\otimes X+bY\otimes Y+cZ\otimes Z)\}k_2, \end{equation}
where \(k_1,k_2\in SU(2)\otimes SU(2)\) are local single-qubit operations. When two-qubit unitaries $U$ and $V$ are connected through \(U = k_1 V k_2 \), we say that \(U\) and \(V\) are locally equivalent.
The Cartan decomposition is directly used to construct the Weyl chamber, which provides a clear view of the geometric structure of the set of all non-local two-qubit gates. The Weyl chamber is illustrated as the tetrahedron \(OA_1A_2A_3\) in Fig.~\ref{Weyl-Chamber}(a); the point $[a, b, c]$ represents a locally equivalent class of two-qubit gates \cite{geo, opt}. Shown in Fig.~\ref{Weyl-Chamber}(b) are particularly important points corresponding to familiar two-qubit gates: $L=[\pi/2,0,0]$ for \{CX,~CY,~CZ\}, $A_2=[\pi/2,\pi/2,0]$ for \{DCX,~iSWAP\}, $A_3=[\pi/2,\pi/2,\pi/2]$ for SWAP, and $B_3=[\pi/4,\pi/4,\pi/4]$ for \textsc{$\sqrt{SWAP}$}\xspace. Note from Eq.~\eqref{cnotdeco} that CX is locally equivalent to $[ZX]^{-1/2}$, which is further locally equivalent to $[XX]^{-1/2}$ and thus identified with $L=[\pi/2, 0, 0]$. From this view, it is clear that CV corresponds to $C_1=[\pi/4, 0, 0]$.
\begin{figure}
\caption{(a) Weyl chamber (tetrahedron \(OA_1A_2A_3\)) contains all the locally equivalent class of two-qubit operations, with the exception of points on its base (see the caption of Fig. \ref{2D}). (b) Five important points in the Weyl chamber; at each point typical locally equivalent gates are indicated. (c) Colored area, i.e., the union of tetrahedra \(OB_1B_2B_3\) and \(A_1C_1C_2C_3\) in the Weyl chamber, shows the set of two-qubit gate realized with three CV gates. All the points in the figures are defined as \(B_1 =[3\pi/4,0,0]\), \(B_2 =[3\pi/8,3\pi/8,0]\), \(B_3 =[\pi/4,\pi/4,\pi/4]\), \(C_1 =[\pi/4,0,0]\), \(C_2 =[5\pi/8,3\pi/8,0]\), \(C_3 =[3\pi/4,\pi/4,\pi/4]\). }
\label{Weyl-Chamber}
\end{figure}
\begin{figure}\label{2D}
\end{figure}
A particularly useful result provided by this geometric picture is that \(n~(\geq 3)\) repetitions of \([\gamma,0,0]\) with $\gamma\in (0, \pi/2]$ can create any two-qubit gate $[a,b,c]$ that satisfies the following condition:
\begin{equation} \label{CU3D} 0\leq a+b+c\leq n\gamma, ~ a-b-c\geq \pi-n\gamma. \end{equation}
This equation implies that $n=3$ operations of CX (or any gate locally equivalent to \([\pi/2, 0, 0]\)) together with appropriate local gates can span the entire Weyl chamber, i.e., the tetrahedron \(OA_1A_2A_3\); that is, as is well known, 3 CX gates can generate an arbitrary two-qubit unitary gate. Similarly, by using two \([\gamma,0,0]\) gates, we can create any two-qubit gate $[a,b,0]$ that satisfies the following condition:
\begin{equation} \label{CU2D} 0\leq a+b\leq 2\gamma, ~ a-b\geq \pi-2\gamma. \end{equation}
Thus, two CX gates can generate any two-qubit gate represented by the point inside the triangle $OA_1A_2$, which corresponds to the base of the Weyl chamber (see Fig.~\ref{2D}).
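Conditions \eqref{CU3D} and \eqref{CU2D} can be wrapped in a small predicate (a Python sketch; the function name is ours, and points related to a reachable point by Weyl-chamber symmetries would have to be checked separately):

```python
import numpy as np

def reachable(target, gamma, n):
    """Check the reachability condition for n uses of [gamma, 0, 0].

    For n >= 3 the target is any point [a, b, c] of the Weyl chamber;
    for n == 2 the reachable set lies in the base of the chamber (c == 0).
    """
    a, b, c = target
    if n == 2 and not np.isclose(c, 0):
        return False
    return (0 <= a + b + c <= n * gamma) and (a - b - c >= np.pi - n * gamma)

# gamma = pi/4 corresponds to the CV gate C1 = [pi/4, 0, 0];
# gamma = pi/2 corresponds to CX and its local equivalents.
```

With $\gamma=\pi/4$ and $n=2$ this reproduces the statements below: CX $=[\pi/2,0,0]$ is reachable with two CV gates, while iSWAP/DCX $=[\pi/2,\pi/2,0]$ is not.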
\subsection{Configurable CV-based two-qubit gates} We can now characterize the set of two-qubit gates generated by two or three operations of CV gate represented by $C_1=[\pi/4, 0, 0]$.
First, Eq.~\eqref{CU2D} with $\gamma=\pi/4$ indicates that 2 CV gates can generate any unitary gate represented by a point in the locally equivalent areas \(OLB\) and \(A_1LC\) illustrated in Fig.~\ref{2D} \cite{opt}. These areas are included in the triangle \(OA_1A_2\). Hence, there exist gates that 2 CX gates can generate but 2 CV gates cannot, such as DCX (the double-CX gate, i.e., a two-qubit gate composed of two back-to-back CX gates with alternating controls) or, equivalently, iSWAP represented by \(A_2=[\pi/2, \pi/2,0]\). However, there are still many useful two-qubit gates in \(OLB\) and \(A_1LC\), and it is thus important to have the pulse-engineered CV gate for generating those gates with significantly shorter gate time and possibly better gate fidelity than the default QASM-based implementation using only CX. For example, the controlled-$U$ gate plays an essential role in several quantum algorithms such as the Quantum Fourier Transform; fortunately, an arbitrary controlled-$U$ gate is specified by a point \([\gamma,0, 0]\) on the line $OL$ or $A_1 L$ and thus can be generated using two CV gates.
Second, Eq.~\eqref{CU3D} with $n=3$ and $\gamma=\pi/4$ elucidates the set of two-qubit gates that can be generated with 3 CV gates, depicted as the colored area in Fig.~\ref{Weyl-Chamber}(c). We can expect the same advantage as in the 2 CV case when implementing two-qubit gates contained in this area via three pulse-engineered CV gates.
\subsection{Efficient implementation of \textsc{$\sqrt{iSWAP}$}\xspace and \textsc{$\sqrt{SWAP}$}\xspace via pulse-engineered CV gates} Here we experimentally demonstrate the implementation of the following two two-qubit gates via the pulse-engineered CV gates. That is, we consider the \textsc{$\sqrt{iSWAP}$}\xspace gate represented by the point \(B=[\pi/4, \pi/4, 0]\) in Fig.~\ref{2D}:
\begin{align} \sqrt{iSWAP} =\left[\begin{array}{rrrr} 1&0&0&0\\ 0&\frac{1}{\sqrt{2}}&\frac{i}{\sqrt{2}}&0\\ 0&\frac{i}{\sqrt{2}}&\frac{1}{\sqrt{2}}&0\\ 0&0&0&1 \end{array}\right], \end{align}
and \textsc{$\sqrt{SWAP}$}\xspace gate represented by the point $B_3=[\pi/4, \pi/4, \pi/4]$ in Fig.~\ref{Weyl-Chamber}:
\begin{align}
&\sqrt{SWAP} =\left[\begin{array}{rrrr} 1&0&0&0\\ 0&\frac{1+i}{2}&\frac{1-i}{2}&0\\ 0&\frac{1-i}{2}&\frac{1+i}{2}&0\\ 0&0&0&1 \end{array}\right]. \end{align} Each of these gates together with some single-qubit gates can construct a universal gate set.
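As a quick sanity check, squaring the two matrices above reproduces iSWAP and SWAP, respectively; a minimal numpy snippet:

```python
import numpy as np

s2 = 1 / np.sqrt(2)
# sqrt(iSWAP), point B = [pi/4, pi/4, 0]
SQISWAP = np.array([[1, 0, 0, 0],
                    [0, s2, 1j * s2, 0],
                    [0, 1j * s2, s2, 0],
                    [0, 0, 0, 1]])
# sqrt(SWAP), point B3 = [pi/4, pi/4, pi/4]
SQSWAP = np.array([[1, 0, 0, 0],
                   [0, (1 + 1j) / 2, (1 - 1j) / 2, 0],
                   [0, (1 - 1j) / 2, (1 + 1j) / 2, 0],
                   [0, 0, 0, 1]])

ISWAP = np.array([[1, 0, 0, 0], [0, 0, 1j, 0],
                  [0, 1j, 0, 0], [0, 0, 0, 1]])
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                 [0, 1, 0, 0], [0, 0, 0, 1]])
```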
Recall that the Cartan decomposition \eqref{cartan} of a two-qubit unitary matrix $U$ is not unique. We therefore used the decomposition algorithm \texttt{TwoQubitBasisDecomposer} implemented in Qiskit \cite{Qiskit}. Figure~\ref{SQISWAP_CIRC} shows two decomposed gate layouts of \textsc{$\sqrt{iSWAP}$}\xspace based on CX (middle) and CV (lower), which we call \textsc{$\sqrt{iSWAP}_{CX}$}\xspace and \textsc{$\sqrt{iSWAP}_{CV}$}\xspace, respectively. The case of \textsc{$\sqrt{SWAP}$}\xspace is shown in Fig.~\ref{SQSWAP_CIRC}, where the CX- and CV-based decompositions are called \textsc{$\sqrt{SWAP}_{CX}$}\xspace and \textsc{$\sqrt{SWAP}_{CV}$}\xspace, respectively. Here, $U_2(\phi,\lambda)$ and $U_3(\theta, \phi, \lambda)$ are the single-qubit gates of the QASM language~\cite{cross2017open}, defined as follows: \begin{align} U_2(\phi,\lambda)= \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & -e^{i\lambda} \\ e^{i\phi} & e^{i(\phi + \lambda)} \\ \end{bmatrix},\\ U_3(\theta, \phi, \lambda)= \begin{bmatrix} \cos(\theta/2) & -e^{i\lambda}\sin(\theta/2) \\ e^{i\phi}\sin(\theta/2) & e^{i(\phi + \lambda)}\cos(\theta/2) \\ \end{bmatrix} \end{align}
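The two parametrized gates are related by $U_2(\phi,\lambda) = U_3(\pi/2, \phi, \lambda)$, which a short snippet confirms (the helper names are ours):

```python
import numpy as np

def u3(theta, phi, lam):
    """QASM U3 gate as defined above."""
    return np.array([
        [np.cos(theta / 2), -np.exp(1j * lam) * np.sin(theta / 2)],
        [np.exp(1j * phi) * np.sin(theta / 2),
         np.exp(1j * (phi + lam)) * np.cos(theta / 2)]])

def u2(phi, lam):
    """QASM U2 gate; equals U3 evaluated at theta = pi/2."""
    return np.array([[1, -np.exp(1j * lam)],
                     [np.exp(1j * phi), np.exp(1j * (phi + lam))]]) / np.sqrt(2)
```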
The pulse schedules corresponding to these four decomposed circuits are shown in Figs.~\ref{sqi_pulse} and \ref{sq_pulse}, where the pulse for CX and CV were implemented with the optimized CR duration time identified in Section~\ref{sec_cv}.
\begin{figure}\label{SQISWAP_CIRC}
\end{figure}
\begin{figure}\label{SQSWAP_CIRC}
\end{figure}
\label{app_swappulse} \begin{figure}\label{sqi_pulse}
\end{figure}
\begin{figure}\label{sq_pulse}
\end{figure}
Figure~\ref{sqi_pulse} shows that the total gate time of \textsc{$\sqrt{iSWAP}_{CV}$}\xspace is 308 ns shorter than that of \textsc{$\sqrt{iSWAP}_{CX}$}\xspace. Also, from Fig.~\ref{sq_pulse} we find that the total gate time of \textsc{$\sqrt{SWAP}_{CV}$}\xspace is $532$ ns shorter than that of \textsc{$\sqrt{SWAP}_{CX}$}\xspace. We computed the gate fidelities of these four gates to their ideal counterparts using QPT. The results are summarized in Table~\ref{swaptable}, together with the gate times; the \textsc{$\sqrt{iSWAP}$}\xspace (\textsc{$\sqrt{SWAP}$}\xspace) gate built from CV gates achieves a fidelity higher by 0.87\% (2.14\%) than the default CX-based implementation. This is likely due to the shortened gate time realized via the pulse-engineered CV gate. Note that, when \textsc{$\sqrt{iSWAP}$}\xspace or \textsc{$\sqrt{SWAP}$}\xspace is involved in larger quantum circuits, the gate-time advantage of the CV-based implementation may lead to a significant improvement in the fidelity of those circuits.
\begin{table}[ht]
\centering
\caption{
Gate fidelity and gate time of $\sqrt{iSWAP}$ and $\sqrt{SWAP}$,
implemented with the default CX gate or the pulse-engineered CV gate.
For each input, we performed 8192 shots and calculated the gate fidelity
$F_P$ with the use of read-out error mitigation.}
\centering
\begin{tabular}{ccccc}
Gate & \#CX & \#CV & Fidelity ($F_{P}$) & Gate time (ns)\\\hline
$\textsc{$\sqrt{iSWAP}_{CX}$}\xspace$ & 2 & -- & 0.9765 & 1064\\
$\textsc{$\sqrt{iSWAP}_{CV}$}\xspace$ & -- & 2 & 0.9852 & 756\\
\textsc{$\sqrt{SWAP}_{CX}$}\xspace & 3 & -- & 0.9604 & 1631\\
\textsc{$\sqrt{SWAP}_{CV}$}\xspace & -- & 3 & 0.9818 & 1099\\
\end{tabular}
\label{swaptable} \end{table}
\section{High-speed and high-precision Toffoli gate with CV gates}
The pulse-engineered CV gate can be applied to improve the speed and precision of bigger size gates beyond the two-qubit case. As a demonstration, here we study the three-qubit Toffoli gate (or the Controlled-Controlled-X gate). The idea presented here is applicable to the general multi-qubit Toffoli gate appearing in many long-term algorithms such as QRAM database~\cite{giovannetti2008quantum} and the diffusion operator in Grover's search algorithm~\cite{grover1997quantum}.
\subsection{Gate implementation for linearly-coupled three qubits} \begin{figure}\label{toffoli_circ}
\end{figure}
If three qubits are fully connected, then we can construct the Toffoli gate using 6 CX gates (and some single-qubit gates), while the combination of 3 CV and 2 CX gates also yields the Toffoli gate; hence the pulse-engineered CV gate enables reducing the total gate time. However, the standard structure of the current IBM Quantum devices is a linear coupling of qubits, in which case the number of necessary gates increases.
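The textbook construction of the Toffoli gate from 2 CX and 3 controlled-$\sqrt{X}$ (CV) gates on fully connected qubits can be verified numerically. The embedding helper below is ours, and the snippet assumes full connectivity; the linear-coupling circuits \textsc{$TOF_{CV}$}\xspace and \textsc{$TOF_{CX}$}\xspace discussed next add SWAPs on top of this idea:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
V = (X + 1j * np.eye(2)) / (1 + 1j)   # V = sqrt(X), so V @ V == X

def controlled(U, c, t, n=3):
    """Embed a controlled-U gate (control c, target t) into n qubits.

    Qubit 0 is taken as the most significant bit of the basis index.
    """
    N = 2 ** n
    M = np.zeros((N, N), dtype=complex)
    for j in range(N):
        bits = [(j >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[c] == 0:
            M[j, j] = 1.0                      # control off: identity
        else:
            for out in (0, 1):                 # control on: apply U to target
                new = list(bits)
                new[t] = out
                i = sum(b << (n - 1 - q) for q, b in enumerate(new))
                M[i, j] = U[out, bits[t]]
    return M

# Toffoli from 2 CX and 3 CV; gates listed in the order they act on the state
gates = [controlled(V, 1, 2),
         controlled(X, 0, 1),
         controlled(V.conj().T, 1, 2),
         controlled(X, 0, 1),
         controlled(V, 0, 2)]
U_circ = np.eye(8, dtype=complex)
for g in gates:
    U_circ = g @ U_circ

TOFFOLI = np.eye(8)
TOFFOLI[[6, 7]] = TOFFOLI[[7, 6]]   # flip the target when both controls are 1
```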
Here we consider two different constructions of the Toffoli gate, with and without CV gates: the \textsc{$TOF_{CV}$}\xspace and \textsc{$TOF_{CX}$}\xspace gates shown in Fig.~\ref{toffoli_circ}; note that $q_j~(j=0,1,4)$ represents the $j$th qubit of the ibmq\_tronto device shown in Fig.~\ref{processor}, and thus $q_0$ and $q_4$ are not directly connected. The \textsc{$TOF_{CV}$}\xspace gate uses 3 CX and 3 CV gates; hence, with the pulse-engineered CV gate, the total gate time of \textsc{$TOF_{CV}$}\xspace becomes shorter than that of the textbook Toffoli with 6 CX gates as well as that of \textsc{$TOF_{CX}$}\xspace. Note that a SWAP gate is built in to connect $q_0$ and $q_4$; consequently, \textsc{$TOF_{CV}$}\xspace exchanges $q_0$ and $q_1$ while maintaining the functionality of the Toffoli gate. The pure Toffoli composed of \textsc{$TOF_{CV}$}\xspace and a subsequent SWAP gate needs 6 CX and 3 CV gates, meaning that it can still be realized with a shorter gate time than \textsc{$TOF_{CX}$}\xspace thanks to the pulse engineering of CV.
\subsection{Experimental results}
We conducted an experiment to compare the actual performance of \textsc{$TOF_{CV}$}\xspace (3 CX and 3 CV) with that of \textsc{$TOF_{CX}$}\xspace (8 CX), where the pulse-engineered CV is used in the former, on the ibmq\_tronto processor shown in Fig.~\ref{processor}. The pulse sequences corresponding to these Toffoli gates are depicted in Fig.~\ref{toffoli_pulse}. As expected, the total gate times are 1778 ns and 2835 ns for \textsc{$TOF_{CV}$}\xspace and \textsc{$TOF_{CX}$}\xspace, respectively, suggesting that \textsc{$TOF_{CV}$}\xspace would have better precision than \textsc{$TOF_{CX}$}\xspace.
\begin{figure}\label{toffoli_pulse}
\end{figure}
Since QPT requires an excessive number of experiments, we have adopted the quantum state tomography and calculated the state fidelity \cite{magesan2011gate}:
\begin{equation}
F_{s}(\rho_{\rm exp}, \rho_{\rm ide}) =
Tr[\sqrt{\sqrt{\rho_{\rm exp}}\rho_{\rm ide}\sqrt{\rho_{\rm exp}}}]^2, \label{fidelitysiki2} \end{equation}
where $\rho_{\rm ide}$ denotes the ideal target density matrix and $\rho_{\rm exp}$ denotes the density matrix reconstructed in the experiment via state tomography. As input states to the Toffoli gate, we prepared the 12 states listed in Table~\ref{Toffolitable}, where $\ket{\pm}=(\ket{0}\pm\ket{1})/\sqrt{2}$. We performed 8192 shots (measurements) for each initial state and calculated $F_{s}(\rho_{\rm exp}, \rho_{\rm ide})$. Table~\ref{Toffolitable} summarizes the results, showing the superiority of \textsc{$TOF_{CV}$}\xspace for all input states except $\ket{111}$. Overall, \textsc{$TOF_{CV}$}\xspace achieves a 4.06~\% higher average fidelity than \textsc{$TOF_{CX}$}\xspace. This advantage of the CV-based gate over the conventional CX-based one is larger than in the previous case shown in Table~\ref{swaptable}, simply because the overall circuit is longer.
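The state fidelity \eqref{fidelitysiki2} is straightforward to evaluate with a matrix square root; a minimal sketch (the function name is ours):

```python
import numpy as np
from scipy.linalg import sqrtm

def state_fidelity(rho_exp, rho_ide):
    """Uhlmann state fidelity F_s = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = sqrtm(rho_exp)
    return np.real(np.trace(sqrtm(s @ rho_ide @ s))) ** 2
```

For commuting density matrices this reduces to the classical Bhattacharyya overlap $\big(\sum_i \sqrt{p_i q_i}\big)^2$, which provides a convenient check.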
\begin{table}[ht]
\centering
\caption{
Fidelity $F_{\text{s}}$ for the two types of Toffoli gates. }
\label{Toffolitable}
\begin{tabular}{c|cc}
Input states & \textsc{$TOF_{CX}$}\xspace & \textsc{$TOF_{CV}$}\xspace \\\hline
$\ket{000}$ & 0.9287 & 0.9773\\ \hline
$\ket{001}$ & 0.9538 & 0.9711\\ \hline
$\ket{010}$ & 0.8886 & 0.9318\\ \hline
$\ket{011}$ & 0.9344 & 0.9450\\ \hline
$\ket{100}$ & 0.8697 & 0.9259\\ \hline
$\ket{101}$ & 0.9113 & 0.9481\\ \hline
$\ket{110}$ & 0.9167 & 0.9284\\ \hline
$\ket{111}$ & 0.9217 & 0.9089\\ \hline
$\ket{+10}$ & 0.8454 & 0.9462\\ \hline
$\ket{1+0}$ & 0.9046 & 0.9341\\ \hline
$\ket{++1}$ & 0.8860 & 0.9513\\ \hline
$\ket{--1}$ & 0.8603 & 0.9408\\ \hline
Average & 0.9018 & 0.9424\\
\end{tabular} \end{table}
\section{Conclusion}
Using only CX gates for entangling qubits in quantum computation is now a \textit{de facto} standard, and IBM Quantum is no exception. While this approach is less burdensome for calibration, it has the disadvantage that some gate/circuit structures become redundant. To resolve this issue, in this paper we proposed using CV gates in addition to the default gate set; OpenPulse allows us to realize a CV gate with a shorter gate time than that of the CX gate, as well as than that of the default CV implementation composed of 2 CX gates. The parameters of the corresponding CR Hamiltonian for realizing such a pulse-engineered CV gate are the same as those of the CX gate, except for the pulse length and some local gate parameters, meaning that the additional calibration burden is not significant. In particular, the result of Section II (Fig.~4) indicates that the optimal pulse length does not change between calibrations. The gate-time improvement in circuit design, which eventually leads to a gate-fidelity improvement, has been demonstrated with the \(\sqrt{SWAP}\), \(\sqrt{iSWAP}\), and Toffoli gates. Note that the gate-fidelity improvements were modest (0.66\% for the CV implementation, 0.87\% for $\sqrt{iSWAP}$, and 2.14\% for $\sqrt{SWAP}$), which may be due to the presence of ZZ interactions that cannot be counteracted by the echo scheme~\cite{jikken} employed in our method. Suppression of the ZZ interactions~\cite{kandala2011demonstration,PRXQuantum.1.020318,mitchell2021hardware,wei2021quantum} would allow us to further improve the gate performance.
In summary, from the practicality and feasibility viewpoint, we believe that the new gate set containing the proposed pulse-engineered CV gate can be used to effectively reduce the redundancy of several quantum circuits, thereby realizing shorter total gate times and eventually improving several quantum algorithms. To investigate a wider range of applications, we plan to carry out a comparative verification of the proposed method on a larger circuit or a near-term quantum algorithm.
\appendix \section{}
\subsection{Gaussian square pulse envelope} \label{sec_envelope}
In all experiments we employed the Gaussian-square pulse composed of a constant-amplitude part of length (width) \(\tau_w\) and Gaussian rising and falling edges of length \(\tau_r\). The overall pulse waveform $f(t)$, as a function of time $t$, is thus given by
\begin{align*}
f(t) =
\begin{cases}
A\exp( -\frac{1}{2\sigma^2}(t - \tau_r)^2), ~~
(0 \leq t < \tau_r) \\
A, ~~ (\tau_r \leq t < \tau_r + \tau_w) \\
A\exp( -\frac{1}{2\sigma^2}(t -(\tau_r+\tau_w))^2), ~~
(\tau_r + \tau_w \leq t < \tau_d), \\
\end{cases} \end{align*}
where $A$ is the maximum amplitude and $\sigma^2$ is the variance of the Gaussian edges. Note that the overall pulse length, or duration, is defined as
\begin{align}
\tau_d = 2\tau_r+\tau_w. \label{length} \end{align}
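A sketch of this envelope in code, with the Gaussian edges centered at $\tau_r$ and $\tau_r+\tau_w$ so that the waveform is continuous where it meets the flat top (parameter values in the check are illustrative):

```python
import numpy as np

def gaussian_square(t, A, tau_r, tau_w, sigma):
    """Gaussian-square envelope: Gaussian rise, flat top of width tau_w,
    Gaussian fall; total duration tau_d = 2*tau_r + tau_w."""
    t = np.asarray(t, dtype=float)
    rise = A * np.exp(-(t - tau_r) ** 2 / (2 * sigma ** 2))
    fall = A * np.exp(-(t - (tau_r + tau_w)) ** 2 / (2 * sigma ** 2))
    return np.where(t < tau_r, rise,
                    np.where(t < tau_r + tau_w, A, fall))
```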
\subsection{Optimal pulse duration of CX gate} \label{sec_cxpulse}
Figure 1 shows the pulse schedule for implementing the CX gate, where the CR pulse duration is 196 ns and accordingly the total gate time is 462 ns; this is the value that achieves the maximum gate fidelity. Here we give the details of the OpenPulse experiment used to identify this optimal duration.
The experiment was conducted in the same setting described in Section II-C, using qubits 1 and 4. We evaluated the following trial CX gate while varying the duration $\tau_d \in [144, 259]$ ns:
\begin{equation}
CX_{\rm{trial}}(\tau_d) = [ZI]^{1/2}[ZX]^{\theta(\tau_d)}[IX]^{1/2}, \end{equation}
where $\theta(\tau_d)=-\tau_d / 2\tau_{CX}$ with the nominal value $\tau_{CX}=196$ ns. The yellow and blue lines in Fig.~\ref{cx_fidelity} depict the gate fidelity between the pulse-engineered $CX_{\rm{trial}}(\tau_d)$ and the ideal CX gate, with and without readout error mitigation, respectively. The black dotted line depicts the gate fidelity between the theoretical $CX_{\rm{trial}}(\tau_d)$ and the ideal CX gate. The figure thus shows that the optimal duration is exactly the nominal value, $\tau_d=\tau_{CX}=196$ ns, at which the theoretical gate fidelity is unity.
\Figure[htb][width = 240pt] {Openpulse/CX_FIDELITY_average.pdf} {Gate fidelity of the trial CX gate (17) to the ideal CX gate, as a function of the duration of CR pulse. \label{cx_fidelity}}
\section*{Acknowledgment} The results presented in this paper were obtained in part using an IBM Q quantum computing system as part of the IBM Q Network. The views expressed are those of the authors and do not reflect the official policy or position of IBM or the IBM Q team.
\EOD
\end{document}
\begin{document}
\title{Factorization with a logarithmic energy spectrum of a central potential} \author{Ferdinand Gleisberg} \affiliation{Institut f\"ur Quantenphysik and Center for Integrated Quantum Science and Technology ($\rm {IQ}^{\rm {ST}}$), Universit\"at Ulm, D--89069 Ulm, Germany} \email{ferdinand.gleisberg@alumni.uni-ulm.de}
\author{Wolfgang P. Schleich} \affiliation{Institut f\"ur Quantenphysik and Center for Integrated Quantum Science and Technology ($\rm {IQ}^{\rm {ST}}$), Universit\"at Ulm, D--89069 Ulm, Germany} \affiliation{~Hagler Institute for Advanced Study at Texas A \& M University, Texas A \& M AgriLife Research, Institute for Quantum Studies and Engineering (IQSE) and Department of Physics and Astronomy, Texas A \& M University, College Station, Texas 77843-4242, USA}
\begin{abstract} We propose a method to factor numbers based on two interacting bosonic atoms in a central potential where the single-particle spectrum depends logarithmically on the radial quantum numbers of the zero angular momentum states. The bosons initially prepared in the ground state are excited by a sinusoidally time-dependent interaction into a state characterized by the quantum numbers which represent the factors of a number encoded in the frequency of the perturbation. We also discuss the full single-particle spectrum and limitations of our method caused by decoherence. \end{abstract}
\maketitle
\section{Introduction}\label{Intro}
It is well-known that the decomposition of a positive integer into a product of prime factors is a hard problem in number theory: no polynomial-time classical algorithm is known, which makes it attractive for cryptographic applications.\cite{Hardy} For example, decoding a message encrypted with the famous RSA protocol \cite{RSA} requires the decomposition of a large semiprime, {\it i.e.} a number that is the product of two primes, in a reasonable time. Such a decomposition can easily be prevented by choosing larger and larger semiprimes. Whenever the topic of prime factorization is mentioned, be it in a discussion or in an article, it does not take long until the name Peter Shor appears: on a large ideal quantum computer Shor's factoring algorithm \cite{Shor} takes only polynomial time and is therefore expected to break the RSA scheme in the future.
As an alternative method we have studied the factorization of integers using bosonic atoms in one- and two-dimensional potentials both with a logarithmic energy spectrum.\cite{Gleisberg2013, Gleisberg2018,Gleisberg2015} Bosons in a spherically symmetric {\em harmonic} potential as well as in a spherical box provide textbook examples for the study of thermodynamics of the Bose--Einstein condensation.\cite{Pethick,Bagnato2019}
Our present theoretical study is motivated by the possibility to create and control nearly any kind of trap using adiabatic potentials, as stated in Ref. \onlinecite{Garraway2016}. For the presentation of our work we have chosen a pedagogical approach. We constructed numerically a central potential with a logarithmic energy spectrum. Two bosons initially trapped in the ground state of this potential are excited by a periodic perturbation with a frequency that encodes the semiprime we want to factor. After some time the bosons are found, with a probability of about one half, in a state where the energies of the individual bosons contain the factors of the semiprime. A measurement of these energies then provides the factors we are looking for. The spherical symmetry of the unperturbed potential is crucial for our protocol. Among the difficulties of realizing spherical symmetry experimentally we mention that an environment free of gravity is required.\cite{Lundblad}
Our article is organized as follows. In Section \ref{Toolbox} we introduce the logarithmic energy spectrum and discuss the distribution of a given energy onto two single-particle states. In Section \ref{1DPot} the Schr{\"o}dinger equation in three dimensions is solved, and it is found that the $s$ states, {\it i.e.} the states with zero azimuthal quantum number, are sufficient to determine the potential with a logarithmic energy spectrum. In Section \ref{Dimensions} we take into account the boundary condition at the origin and demonstrate that the single-particle $s$ states exhibit an energy spectrum similar to the one introduced in Section \ref{Toolbox}. Section \ref{TimeDepPert} discusses the realization of our factorizing scheme by two bosonic atoms moving in the central potential determined in Section \ref{1DPot}; when excited by a time-dependent interaction, the bosons undergo a transition into the factor state. In Section \ref{AppSol} we present the solution of the Schr{\"o}dinger equation within the rotating-wave approximation, while the calculation can be found in Apps. \ref{coupling} and \ref{RWA}. We mention that after a measurement of the single-particle energies at randomly chosen times the factor state is found with a probability of about one half. Limitations of our method caused by decoherence are discussed in Section \ref{Limitations}, followed by a short summary. An elementary discussion of the absence of accidental degeneracy in our logarithmic spectrum can be found in App. \ref{accidental}.
\section{Mathematical toolbox}\label{Toolbox}
In the present section we first introduce the logarithmic energy spectrum and discuss its special role in finding the factors of an integer. We then turn to the distribution of a given energy onto two subsystems. This discussion constitutes the foundation for our factorization protocol.
Our factorization scheme is based on a logarithmic energy spectrum of the type \begin{equation}\label{SinglePartSpect} E_k(L) \equiv \hbar \omega_0 \ln\left(\frac{k}{L}+1\right),\quad k=0,1,2,\ldots \end{equation} with $E_0(L)=0$. Here, the constant $L$ plays the role of a scaling parameter and $\hbar\omega_0$ is the unit of energy.
In order to find the factors of a given semiprime $N=q_1\cdot q_2$ we try to distribute the energy \begin{equation}\label{2PartE} E_{\rm total}(N;L) \equiv\hbar\omega_0\,\ln\left(\frac{N}{L^2}\right) \end{equation} onto {\em two} subsystems with spectrum (\ref{SinglePartSpect}) and get \begin{eqnarray}\label{Distrib} E_{\rm total}(N;L) & = & \hbar\omega_0\ln\left(\frac{q_1}{L}\right)+\hbar\omega_0\ln\left(\frac{q_2}{L}\right)\\ \label{Distrib2} &=& E_{q_1-L} + E_{q_2-L} \end{eqnarray} where we have used Eq. (\ref{SinglePartSpect}). Since the parameter $L$ appears in the indices of the energies in Eq. (\ref{Distrib2}), it has to be an integer. Since no negative indices are present in Eq. (\ref{SinglePartSpect}), $N$ must not contain factors $q_i<L$. Moreover, a factor $q_i=L$ leads to the unwanted case that the total energy (\ref{Distrib}) may be transferred to one subsystem while the other one is in the ground state $E_0(L)$, and no factorization takes place. We conclude that we have to remove the factors $2,3,\ldots L$, which can be done by simple division, before our factorization protocol can be applied. If $L$ is chosen to be unity, the trivial factorization $N=1\times N$ cannot be excluded. Moreover, in Section \ref{Dimensions} we shall see that $L$ has to be odd. Therefore, throughout our article we consider the case $L\ge 3$. The question of uniqueness of the distribution (\ref{Distrib}) is easily answered: the fundamental theorem of arithmetic guarantees that the decomposition of the integer $N$ is unique if both factors $q_1$ and $q_2$ are prime.
For our factorization protocol the subsystems have to be brought into a state with total energy (\ref{Distrib}) followed by a measurement of the energies of the subsystems which easily allows the determination of the factors $q_i$ as is described in Sect. \ref{AppSol}. In the remainder of our article we shall concentrate on the factorization of semiprimes.
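The distribution (\ref{Distrib})--(\ref{Distrib2}) amounts to searching for indices $m,n$ with $E_m(L)+E_n(L)=E_{\rm total}(N;L)$, i.e. $(m+L)(n+L)=N$. A small illustration (the function names are ours; factors $\le L$ are assumed to have been divided out beforehand, as discussed above):

```python
import numpy as np

def factor_indices(N, L):
    """Indices (m, n) with E_m(L) + E_n(L) = hbar*w0*ln(N/L^2),
    equivalent to the factorization N = (m + L) * (n + L)."""
    for m in range(N):
        q1 = m + L
        if q1 * q1 > N:
            break                      # no factor found up to sqrt(N)
        if N % q1 == 0:
            return m, N // q1 - L
    return None

def E(k, L):
    """Single-particle spectrum of Eq. (SinglePartSpect), in units of hbar*w0."""
    return np.log(k / L + 1)
```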
\section{From three dimensions to one dimension}\label{1DPot}
In the present section we realize the subsystem with spectrum (\ref{SinglePartSpect}) by a particle of mass $\mu$ moving in three dimensions in a central potential $V(r)$ which we shall determine.
We start with the Schr\"odinger equation in spherical polar coordinates \begin{equation}\label{SchrEq} \left[-\frac{\hbar^2}{2\mu}\Delta + V(r)-E\right] \varphi(r,\Theta,\Phi)=0 \end{equation} and consider the wave functions \begin{equation}\label{3DFunction} \varphi_{k,\ell,m}(r,\Theta,\Phi)\equiv R_{k,\ell}(r)\,Y_\ell^m(\Theta,\Phi) \end{equation} which are simultaneous eigenfunctions of the Hamiltonian $\hat H$, the square of the angular momentum $\hat L^2$, and its $z$-component $\hat L_z$ which form a complete commuting set of operators with eigenvalues $E_{k,\ell}$, $\hbar^2\,\ell(\ell+1)$ and $\hbar\, m$, respectively. The radial quantum number $k$ as well as the azimuthal quantum number $\ell$ takes values $0,1,2,\ldots$ while the magnetic quantum number $m$ takes the $2\ell+1$ values $-\ell\ldots\ell$. The functions $Y_\ell^m(\Theta, \Phi)$ are the spherical harmonics. In what follows we shall use a short-hand notation for the three quantum numbers ${\bf k}\equiv (k,\ell,m)$.
Because the solution of Eq. (\ref{SchrEq}) can be found in most textbooks we jump directly to the radial equation valid in the region $r \ge 0$ \begin{equation}\label{RadialEq} \left[-\frac{\hbar^2}{2\mu}\frac{1}{r}\frac{d^2}{dr^2}r+\frac{\hbar^2\,\ell(\ell+1)}{2\mu r^2}+V(r)-E_{k,\ell}\right]\,R_{k,\ell}(r)=0 \end{equation} with the condition that $R_{k,\ell}(r)$ has to be square integrable and finite at the origin $r=0$. We consider $s$ states ($\ell=0$) and set \begin{equation}\label{RadialFkt} R_{k,0}(r)=\frac{u_{k,0}(r)}{r} \end{equation} with the boundary condition \begin{equation}\label{bc} u_{k,0}(0)=0. \end{equation} Moreover, we write the cartesian coordinate $x$ for the variable $r$ and assume a symmetric potential $V(x)=V(-x)$ where now $-\infty<x<\infty$. With these modifications it is easy to change Eq. (\ref{RadialEq}) into the equation \begin{equation}\label{1DSchrEq} \left[-\frac{\hbar^2}{2\mu}\frac{d^2}{dx^2}+V(x;L)-E_{k}(L)\right]u_{k}(x;L)=0 \end{equation} where for the moment we do not take into account the boundary conditions (\ref{bc}) and omit the $s$ state index $\ell=0$. This is the well-known Schr\"odinger equation for a particle of mass $\mu$ moving in the one-dimensional potential $V(x;L)$ with wave functions $u_{k}(x;L)$ which are even (odd) for even (odd) indices $k$. Here we have changed our notation in order to emphasize that the energies $E_{k}(L)$ (\ref{SinglePartSpect}), the potential $V(x;L)$, and the wave functions $u_{k}(x;L)$ depend on the scaling parameter $L$. Our iteration algorithm to determine the potential $V(x;L)$ from the single particle spectrum (\ref{SinglePartSpect}) is based on the Hellmann-Feynman theorem and is described in a previous article.\cite{Mack2010} In Fig. 1 we show $V(x;L=3)$ together with the eigenfunctions $u_k(x;L)$ for $0\le k\le 6$.
\begin{figure}
\caption{ One-dimensional potential $V(\xi;L)$ (dotted line) creating a logarithmic energy spectrum for a scaling parameter $L=3$ as a function of the dimensionless coordinate $\xi\equiv \alpha \, x$ with $\alpha^2\equiv\mu \omega_0 /\hbar$. This potential is determined numerically by an iteration algorithm based on a perturbation theory using the Hellmann-Feynman theorem and is designed to obtain a logarithmic dependence of the energy eigenvalues $E_k (L)$ on the quantum number $k$ as given in Eq. (\ref{SinglePartSpect}). In the neighborhood of the origin the potential is approximately harmonic, whereas for large values of $\xi$ it is logarithmic. Solid lines depict the numerically determined energy wave functions of the first 7 states as functions of the dimensionless position. Both the energies $E_k(L)$, $k=0,1,\ldots 6$ (dashed lines) and the potential $V(\xi;L)$ are shown in units of $\hbar\omega_0$. }
\label{Fc1}
\end{figure}
Note that states with quantum numbers $\ell>0$ were not needed for the determination of the potential $V(x;L)$. Some aspects of the full spectrum $E_{k,\ell}(L)$, however, are discussed in App. \ref{accidental}.
\section{Energy spectrum of $s$ states}\label{Dimensions}
We continue to limit ourselves to $s$ states only and suppress the index $\ell=0$. In the last section the potential $V(x;L)$ and the functions $u_k(x;L)$ were determined numerically and displayed in Fig. \ref{Fc1}. The three-dimensional potential $V(r;L)$ as well as the eigenfunctions $u_k(r;L)$ follow simply by replacing the coordinate $x$ by $r$, where now only the region $r\ge 0$ is considered. Figure \ref{Fc2} shows the potential $V({\bf r};L)$ with position vector $\bf r$ in the $x$-$y$ plane.
\begin{figure}
\caption{ Three-dimensional potential $V(r;L=3)$ in units of $\hbar \omega_0$ creating the logarithmic energy spectrum Eq. (\ref{Etilde}) with scaling parameter $K=2$ as a function of the dimensionless coordinates $\xi=\alpha x$ and $\eta=\alpha y$ plotted in the plane $z=0$. }
\label{Fc2}
\end{figure} Here in three dimensions only {\em odd} solutions $u_k(x;L)$ of Eq. (\ref{1DSchrEq}) can satisfy the boundary condition (\ref{bc}). Therefore, energies $E_{k}(L)$ as well as eigenfunctions $u_k(x;L)$ with {\em even} index $k$ which were present in one dimension in Eq. (\ref{1DSchrEq}) do not appear anymore in three dimensions.
We now show that the remaining spectrum $E_{k}(L)$ indeed has the form of Eq. (\ref{SinglePartSpect}) and therefore guarantees the validity of the results of Section \ref{Toolbox} which we need for our factorization procedure. We rewrite the energies with odd radial quantum numbers $k=2j+1$ \begin{equation}\label{Rewrite} E_{2j+1}(L)=\hbar\omega_0 \ln\left(\frac{2j+1}{L}+1\right) \end{equation} with $j=0,1,2,3\ldots$ and shift them by $-\hbar\omega_0\ln(1/L+1)$. It is easy to verify that the new spectrum is identical to the single-particle spectrum (\ref{SinglePartSpect}) \begin{equation}\label{Etilde}
E_j^{\rm 3d}(K)=\hbar\omega_0\ln\left(\frac{j}{K}+1\right) \end{equation} except that $L$ has to be replaced by a new scaling parameter \begin{equation}\label{Lprime} K=\frac{L+1}{2}. \end{equation}
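The identity behind this shift, $E_{2j+1}(L)-E_1(L)=\hbar\omega_0\ln(j/K+1)$ with $K=(L+1)/2$, is quickly verified numerically (a sketch in units of $\hbar\omega_0$):

```python
import numpy as np

L = 3                    # scaling parameter of the 1-d spectrum (odd)
K = (L + 1) // 2         # new scaling parameter, Eq. (Lprime)

j = np.arange(50)
E_odd = np.log((2 * j + 1) / L + 1)   # odd-index energies E_{2j+1}(L)
shift = np.log(1 / L + 1)             # ground-state energy E_1(L)
E_3d = np.log(j / K + 1)              # shifted spectrum, Eq. (Etilde)
```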
In order that $K$ is a positive integer the parameter $L$ has to be odd. All the statements made in Section \ref{Toolbox} referring to the scaling length $L$ remain valid here provided $L$ is replaced by $K$. The eigenfunctions $v_j(r;K)$ belonging to $E_j^{\rm 3d}(K)$ are \begin{equation} v_{j}(r;K) \equiv u_{2j+1}(r;L). \end{equation} Figure 3 shows the radial functions \begin{equation}\label{Rad3d} R_j(r)=\frac{v_j(r;K)}{r} \end{equation} for indices $k=0,\ldots 5$ together with the potential $V(r;L=3)$ and the energy levels $E_j^{\rm 3d}(K=2)$ (\ref{Etilde}). \begin{figure}
\caption{ Central potential $V(\rho;L=3)$ creating the loga\-rith\-mic energy spectrum $E^{\rm 3d}_j(K=2)$ (\ref{Etilde}) in units of $\hbar\omega_0$ as function of the dimensionless radius $\rho\equiv \alpha \, r$ together with the corresponding radial functions $R_j(\rho)$ (\ref{Rad3d}) of the first 6 states in their dependence on the dimensionless radius. Note that the energies have been shifted in order that the ground state has an energy zero here. }
\label{Fc3}
\end{figure} To simplify the notation we pass over to the bra-ket formalism. The single-particle Schr\"odinger equation for the $s$ states reads \begin{equation} {\hat H}(K)\, \ket{j} = E_j^{\rm 3d}(K) \,\ket{j}\qquad j=0,1,2,\ldots \end{equation} where the quantum numbers $\ell=m=0$ are suppressed. The hamiltonian ${\hat H}(K)$ is characterized by the parameter $K$ (\ref{Lprime}).
The Schr{\"o}dinger equation for two non-interacting bosons is \begin{equation} \left({\hat H}_{1,2}(K)-E_{m,n}(K)\right)\, \ket{m,n}_B = 0 \end{equation} with \begin{eqnarray}\label{H12} {\hat H}_{1,2}(K)& = &{\hat H}_1(K)+{\hat H}_2(K)\\ E_{m,n}(K) & = & E_{m}^{\rm 3d}(K)+ E_{n}^{\rm 3d}(K)\label{Epq} \end{eqnarray} in accordance with Eqs. (\ref{Etilde}) and (\ref{Lprime}). Note that bosonic two-particle states are defined by \begin{eqnarray}\label{Bosons} \ket{m,n}_B \equiv \frac{1}{\sqrt{2}}\left(\ket{m,n}+\ket{n,m}\right),\\ \ket{m,m}_B \equiv \ket{m,m}.\nonumber \end{eqnarray} If two identical non-interacting bosons are in a state with energy \begin{equation}\label{FactorState} \hbar\omega_0\,\ln\left(\frac{N}{K^2}\right)=E_{p-K}^{\rm 3d}+E_{q-K}^{\rm 3d} \end{equation} where $N\equiv p\cdot q$ is semiprime, then according to Eqs. (\ref{Distrib2}) and (\ref{Etilde}) the bosons are in the state $\ket{p-K,q-K}_B$, which we call the factor state. A measurement of the energy of one of the bosons can only result in $\hbar \omega_0 \ln(p/K)$ or $\hbar \omega_0 \ln(q/K)$ and immediately yields the prime factors $p$ and $q$, respectively. In the remainder of the article we suppress the scaling parameter $K$ as well as the suffix $B$ and the superscript $\rm{3d}$ in order to simplify the notation.
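Equation (\ref{FactorState}) and the readout of the factors can be illustrated numerically. The sketch below (with the illustrative choices $K=2$ and $N=21$, energies in units of $\hbar\omega_0$) checks that the factor-state energy equals $\hbar\omega_0\ln(N/K^2)$ and that exponentiating a measured single-particle energy recovers a prime factor:

```python
import math

K = 2                  # scaling parameter (corresponding to L = 3)
p, q = 7, 3            # prime factors of the semiprime N = 21 (illustrative)
N = p * q

def E(j):
    """Single-particle s levels E_j^3d(K) in units of hbar*omega0."""
    return math.log(j / K + 1)

# The two-boson factor state |p-K, q-K>_B has energy ln(N/K^2).
assert abs(E(p - K) + E(q - K) - math.log(N / K**2)) < 1e-12

# Measuring one boson's energy yields ln(p/K) or ln(q/K);
# exponentiating recovers the corresponding prime factor.
factor = round(K * math.exp(E(p - K)))
assert factor == p and N % factor == 0
```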
\section{Time dependent perturbation}\label{TimeDepPert} In the present section we describe how the factorization protocol of Section \ref{Toolbox} can be realized with two interacting identical bosons placed in the three-dimensional potential shown in Fig. 2 with the single-particle spectrum (\ref{Etilde}), which is non-degenerate for $s$ states. We prepare the two bosons in the ground state $\ket{\bf{0,0}}$ and at $t=0$ a perturbation \begin{equation}\label{Perturb} \delta V({\bf r}_1,{\bf r}_2;t) = \gamma \sin(\omega_{\rm ext}t) \,w({\bf r}_1,{\bf r}_2) \end{equation} is switched on. The frequency $\omega_{\rm ext}$ is chosen later in a way suitable for the factorization procedure.
The time evolution of the two-particle ket $\ket{\Psi (t)}$ is now governed by the Schr\"odinger equation in three dimensions \begin{equation}\label{SchrEqPert} i\hbar\frac{d}{dt} \ket{\Psi(t)}=[{\hat H}_{1,2} +\delta V(t) ]\ket{\Psi(t)} \end{equation} with the unperturbed stationary equations \begin{equation} {\hat H}_{1,2} \,\,\ket{{\bf k}_1,{\bf k}_2} = E_{{\bf k}_1,{\bf k}_2} \ket{{\bf k}_1,{\bf k}_2}. \end{equation} We insert the expansion of the solution $\ket{\Psi(t)}$ in terms of the two-particle eigenkets $\ket{{\bf k}_1,{\bf k}_2}$ of the unperturbed Hamiltonian ${\hat H}_{1,2}$ \begin{equation}\label{Expand} \ket{\Psi(t)}=\sum_{{\bf k}_1,{\bf k}_2} \, b_{\,{\bf k}_1,{\bf k}_2}(t) e^{-i E_{{\bf k}_1,{\bf k}_2} t/\hbar} \, \ket{{\bf k}_1,{\bf k}_2} \end{equation} into Eq. (\ref{SchrEqPert}) and arrive at the coupled system \begin{widetext} \begin{eqnarray}\label{CoupSys} \hspace{-1.5cm} i\hbar\, {\dot b}_{\,{\bf k}_1, {\bf k}_2}(t)= \gamma \sin(\omega_{\rm ext} t) \sum_{{\bf k}'_1,{\bf k}'_2} e^{i(E_{{\bf k}_1, {\bf k}_2}-E_{{\bf k}'_1,{\bf k}'_2}) t/\hbar}\,
W_{{\bf k}_1 ,{\bf k}_2; {\bf k}'_1, {\bf k}'_2}\, b_{\,{\bf k}'_1,{\bf k}'_2}(t)\\ b_{\,{\bf k}_1,{\bf k}_2}(0)=1 \mbox{ { } for { } } k_1+k_2+\ell_1+\ell_2=0 \mbox{ { } and { } } b_{\,{\bf k}_1,{\bf k}_2}(0)=0 \mbox{ { } otherwise}\nonumber \end{eqnarray} \end{widetext} which has to be solved for the probability amplitudes $b_{\,{\bf k}_1, {\bf k}_2}(t)$. Here the indices ${\bf k}_1$ {\it etc.} represent the triple of quantum numbers ${\bf k}_1\equiv (k_1,\ell_1,m_1)$ introduced in Section \ref{1DPot}. The eigenkets $\ket{{\bf{k}_1},{\bf{k}_2}}$ of ${\hat H}_{1,2}$, the amplitudes $b_{\,{\bf k}_1,{\bf k}_2}(t)$, and the matrix elements \begin{equation}\label{MElem} W_{{\bf k}_1,{\bf k}_2;{\bf k}'_1,{\bf k}'_2}\equiv \bra{{\bf k}_1,{\bf k}_2} w({\bf {\hat r}}_1,{ \bf {\hat r}}_2) \ket{{\bf k}'_1,{\bf k}'_2} \end{equation} are ``bosonic'' ones in the sense of Eq. (\ref{Bosons}) and are built out of the eigenkets $\ket{{\bf k}_1,{\bf k}_2}$ of ${\hat H}_{1,2}$ and the spatial part $w$ of the perturbation $\delta {\hat V}$. Moreover, in the summations in Eqs. (\ref{Expand}) and (\ref{CoupSys}) the same state must not be counted twice.
In App. \ref{coupling} we study the matrix element (\ref{MElem}) for the case of a contact interaction between the particles, while in App. \ref{RWA} we use the rotating wave approximation to reduce the system (\ref{CoupSys}) to the much simpler system (\ref{b00Short}) and (\ref{bpqShort}) of only two differential equations for two probability amplitudes, namely that of the ground state and that of the factor state, respectively. We solve them in the next section.
\section{Approximate solution}\label{AppSol}
With the help of the so-called secular or rotating wave approximation (RWA) \cite{Cohen} we have reduced the infinite system (\ref{CoupSys}) to only two first-order differential equations with constant coefficients, (\ref{b00Short}) and (\ref{bpqShort}). App. \ref{RWA} presents the calculation. Together with the initial conditions $b_{0,0}(0)=1$ and $b_{p-K,q-K}(0)=0$ the equations are immediately solved. The resulting probability amplitudes are for the ground state \begin{equation} b_{0,0}(t)=\cos(\Omega t) \end{equation}
and \begin{equation}\label{b_sin} b_{p-K,q-K}(t)=\sin(\Omega t) \end{equation}
for the factor state, respectively, and the so-called Rabi frequency is \begin{equation}\label{Rabi} \Omega=\frac{\gamma}{2\hbar}\,W_{0,0;p-K,q-K} \end{equation} which is proportional to the interaction matrix element (\ref{W02}). In Sect. \ref{Toolbox} and \ref{Dimensions} it was shown that if the bosons are in the factor state $\ket{p-K,q-K}$ they have a two-particle energy $\hbar\omega_0\ln(N/K^2)$ with $N=p\cdot q$ (\ref{FactorState}).
As mentioned there the factors $p$ or $q$ are determined by a measurement of single-particle energies (\ref{Distrib}) and the factorization protocol has ended successfully.
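The analytic amplitudes above can be checked against the reduced RWA system of App. \ref{RWA}. A short Python sketch with illustrative values of $\gamma$ and the matrix element (in units with $\hbar=1$):

```python
import math

hbar = 1.0
gamma, W = 0.1, 0.5                 # illustrative coupling and matrix element
Omega = gamma * W / (2 * hbar)      # Rabi frequency, Eq. (Rabi)

b00 = lambda t: math.cos(Omega * t)   # ground-state amplitude
bpq = lambda t: math.sin(Omega * t)   # factor-state amplitude

# Check that the analytic solution satisfies the reduced RWA system
#   i*hbar*db00/dt = (gamma/2i)  W * bpq
#   i*hbar*dbpq/dt = -(gamma/2i) W * b00
# which, after dividing by i*hbar, is real:
#   db00/dt = -Omega * bpq,   dbpq/dt = +Omega * b00
h = 1e-6
for t in (0.3, 1.7, 4.2):
    d_b00 = (b00(t + h) - b00(t - h)) / (2 * h)
    d_bpq = (bpq(t + h) - bpq(t - h)) / (2 * h)
    assert abs(d_b00 + Omega * bpq(t)) < 1e-8
    assert abs(d_bpq - Omega * b00(t)) < 1e-8

# Probability is conserved: |b00|^2 + |bpq|^2 = 1 at all times.
assert abs(b00(2.0)**2 + bpq(2.0)**2 - 1.0) < 1e-12
```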
At time $t$ the system can be found in the factor state with probability $|b_{p-K,q-K}(t)|^2$, and at times equal to an odd multiple of $\pi / (2\Omega)$ with certainty. Unfortunately, the Rabi frequency $\Omega$ is not known. Instead, we content ourselves with measuring at a time chosen at random from a time interval $[0,T]$ much larger than $\pi/\Omega$. Using Eq. (\ref{b_sin}) it is easy to see that the probability of finding the factor state is then about one half. The measurement of a single-particle energy gives one of the factors, while the other one follows from division.
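That the success probability for a random measurement time is about one half follows from the time average of $\sin^2(\Omega t)$ over an interval with $\Omega T\gg 1$; a quick Monte Carlo check (all values illustrative):

```python
import math
import random

random.seed(1)
Omega = 1.0
T = 200 * math.pi / Omega          # interval much longer than a Rabi period

# Probability of finding the factor state at a uniformly random time in [0, T]:
# the sample mean of sin^2(Omega t) approaches 1/2 for Omega*T >> 1.
samples = [math.sin(Omega * random.uniform(0, T))**2 for _ in range(200000)]
p_factor = sum(samples) / len(samples)
assert abs(p_factor - 0.5) < 0.01
```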
An estimate of a suitable measurement time, obtained by guessing the factors $p$ and $q$ and thereby determining the Rabi frequency (\ref{Rabi}), was presented in a previous article.\cite{Gleisberg2013}
\section{Limitations}\label{Limitations}
In the present section we shall sketch what prevents our protocol from factoring larger and larger semiprimes. According to Ref. \onlinecite{Schleich}
there is a high probability for the periodic transition into the factor state as long as the difference between the energies of this state and of the next off-resonant state, respectively, is larger than the energy $\hbar\Omega$ of the Rabi oscillation \begin{equation}
\hbar \omega_0\left|\ln\left(\frac{N\pm 1}{K^2}\right)-\ln\left(\frac{N}{K^2}\right)\right| \approx \frac{\hbar \omega_0}{N} \gg \hbar\Omega . \end{equation} Because the Rabi frequency $\Omega$ defined by Eq. (\ref{Rabi}) is proportional to the strength $\gamma$ of the perturbation (\ref{Perturb}), this condition can easily be satisfied by choosing $\gamma$ as small as needed. Unfortunately, a second condition arises from Section \ref{AppSol}, where the time of measurement of the energies of the two bosons was chosen randomly from an interval $[0,T]$. To find the factor state with a probability of $\approx 1/2$ the length $T$ of the interval had to fulfill the condition $\Omega \,T\gg 1$. On the other hand, the system has to be {\em free of decoherence} during this time interval, {\it i.e.} $T<T_{\rm dec}$, leading to two inequalities the Rabi frequency has to fulfill \begin{equation} \Omega \gg \frac{1}{T_{\rm dec}} \quad \mbox{and} \quad \Omega \ll \frac{\omega_0}{N}. \end{equation} Our aim is to find an upper limit for the number $N$ to be factored. In our articles Refs. \onlinecite{Gleisberg2013} and \onlinecite{Gleisberg2015}, for different experimental situations and models of the spatial part of the interaction, an $N$-dependence of the transition matrix element \begin{equation} W_{0,0;p-K,q-K} \propto N^{-1/2} \end{equation} was found in rough approximation; the same holds, of course, for the Rabi frequency $\Omega$ (\ref{Rabi}). The semiprime $N$ to be factored therefore has an upper limit \begin{equation} N < \min \left( \left[\frac{\gamma T_{\rm dec}}{\hbar}\right]^2 ,\left[\frac{\hbar\omega_0}{\gamma}\right]^2\right). \end{equation} Assuming that according to Eq. (\ref{Perturb}) the interaction strength $\gamma$ can be chosen at will, this relation shows that the crucial limiting factor for the magnitude of $N$ is the decoherence time $T_{\rm dec}$.
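The displayed bound can be explored numerically. Note that the two limits coincide for $\gamma_{\rm opt}=\hbar\sqrt{\omega_0/T_{\rm dec}}$, where the bound takes its largest value $N_{\max}=\omega_0 T_{\rm dec}$; the parameter values below are purely hypothetical and serve only to illustrate this:

```python
import math

# Hypothetical values for illustration only (not taken from any experiment):
hbar = 1.0545718e-34                  # J s
omega0 = 2 * math.pi * 1e3            # trap frequency, rad/s
T_dec = 10.0                          # decoherence time, s

def N_max(gamma):
    """Upper limit for the semiprime N at interaction strength gamma."""
    return min((gamma * T_dec / hbar)**2, (hbar * omega0 / gamma)**2)

# Both limits coincide for gamma_opt = hbar * sqrt(omega0 / T_dec),
# where the bound is maximal and equals omega0 * T_dec.
gamma_opt = hbar * math.sqrt(omega0 / T_dec)
assert abs(N_max(gamma_opt) - omega0 * T_dec) < 1e-6 * omega0 * T_dec
assert N_max(0.5 * gamma_opt) < N_max(gamma_opt)
assert N_max(2.0 * gamma_opt) < N_max(gamma_opt)
```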
\section{Summary}
In the present article we have proposed a method to find the factors of a semiprime $N$ based on the quantum dynamics of two identical bosonic atoms moving in a spherically symmetric trap whose $s$ states exhibit a logarithmic single particle spectrum.
In the first part of our work we have determined a central potential with a logarithmic energy spectrum. First we calculated numerically a one-dimensional potential from a logarithmic single-particle spectrum. Because of the close relationship between three-dimensional spherically symmetric and one-dimensional problems, the central potential was then easily found. As expected, this potential had an energy spectrum with a logarithmic $s$-wave part but with a scaling length different from the one in the one-dimensional spectrum.
In the second part of our work we attacked the problem of how to bring the bosons into the factor state. The bosons were excited from their ground state by a periodic time-dependent contact interaction whose frequency was determined by the number $N$ to be factored. To exclude transitions between non-$s$ states we discussed {\it in extenso} the absence of degeneracy. Then we showed within the framework of the well-known rotating wave approximation that the bosons performed a Rabi oscillation between the ground state and the factor state. The latter was found with a probability of about one half when the energies of the bosons were measured at a randomly chosen time. From these the factors of $N$ were easily determined and our factorization protocol ended successfully.
\begin{acknowledgments} We thank M. A. Efremov and M. Freyberger for stimulating discussions on this topic. WPS is grateful to the Hagler Institute for Advanced Study at Texas A\&M University for a Faculty Fellowship and to Texas A\&M University AgriLife Research for its support. The research of the IQST is financially supported by the Ministry of Science, Research and Arts Baden-W{\"u}rttemberg. \end{acknowledgments}
\appendix
\section{Matrix elements of the interaction}\label{coupling}
In this appendix we study the matrix element (\ref{MElem}) \begin{equation}\label{A1} W_{{\bf k}_1,{\bf k}_2;{\bf k}'_1,{\bf k}'_2} \equiv \bra{{\bf k}_1,{\bf k}_2} w({\bf {\hat r}}_1,{\bf {\hat r}}_2) \ket{{\bf k}'_1,{\bf k}'_2} \end{equation} assuming a contact interaction between the particles \begin{equation}\label{SimpleEx} w({\bf r}_1,{\bf r}_2) = \delta^{(3)}({\bf r}_1-{\bf r}_2). \end{equation} With the help of (\ref{SimpleEx}) the transition matrix element can be represented by the eigenfunctions $\varphi_{\bf k}(\bf r)$ (\ref{3DFunction}) introduced in section \ref{1DPot}: \begin{equation}\label{A3} W_{{\bf k}_1,{\bf k}_2;{\bf k}'_1,{\bf k}'_2} \equiv \int d^3 r \, \varphi_{{\bf k}_1}({\bf r})^\ast \varphi_{{\bf k}_2}({\bf r})^\ast\, \varphi_{{\bf k}'_1}({\bf r}) \varphi_{{\bf k}'_2}({\bf r}). \end{equation} Having in mind that we start our procedure at time $t=0$ with the two particles in the ground state $\ket{{\bf 0},{\bf 0}}$ we consider the matrix elements $W_{\bf{0,0};\bf{k}_1,\bf{k}_2}$ for a transition into some excited state $\ket{{\bf k}_1,{\bf k}_2}$. It is not difficult to derive the expression \begin{widetext} \begin{equation}\label{W02} W_{\bf{0,0};\bf{k}_1,\bf{k}_2}=\frac{1}{4\pi}\int\,dr\,r^2\,R_{0,0}(r)^2\,R_{k_1,\ell_1}(r)\,R_{k_2,\ell_2}(r) \delta_{\ell_1,\ell_2} \,\delta_{m_1+m_2,0}. \end{equation} \end{widetext}
Here we have substituted Eq. (\ref{3DFunction}) for the eigenfunctions $\varphi_{{\bf k}}({\bf r})$, moreover we applied the well-known orthonormality of the spherical harmonics \begin{equation}\label{orthrel} \int\!\int d\Omega\,Y_{\ell_1}^{m_1\ast}(\theta,\varphi)\,Y_{\ell_2}^{m_2}(\theta,\varphi)=\delta_{\ell_1,\ell_2}\,\delta_{m_1,m_2} \end{equation} and the relation for their complex conjugate \begin{equation} Y_\ell^{m\ast}(\theta,\varphi)=Y_\ell^{-m}(\theta,\varphi). \end{equation} Note that $Y_0^0 \equiv 1/\sqrt{4\pi}$.
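For $m_1=m_2=0$ the spherical harmonics reduce to Legendre polynomials, $Y_\ell^0=\sqrt{(2\ell+1)/(4\pi)}\,P_\ell(\cos\theta)$, so the selection rule $\delta_{\ell_1,\ell_2}$ in Eq. (\ref{W02}) rests on Legendre orthogonality, $\int_{-1}^{1}P_{\ell_1}(x)P_{\ell_2}(x)\,dx = 2\,\delta_{\ell_1,\ell_2}/(2\ell_1+1)$. A numerical sketch of this check using Gauss-Legendre quadrature:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Gauss-Legendre quadrature with 20 nodes is exact for the
# polynomial integrands P_l1 * P_l2 of degree l1 + l2 <= 39.
x, w = leggauss(20)

def P(l, x):
    """Legendre polynomial P_l evaluated at x."""
    c = np.zeros(l + 1)
    c[l] = 1.0
    return legval(x, c)

for l1 in range(4):
    for l2 in range(4):
        integral = np.sum(w * P(l1, x) * P(l2, x))
        expected = 2.0 / (2 * l1 + 1) if l1 == l2 else 0.0
        assert abs(integral - expected) < 1e-12
```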
In the next appendix we use the matrix element (\ref{W02}) when we return to the system of coupled equations (\ref{CoupSys}) which we shall simplify considerably.
\section{Rotating wave approximation (RWA)}\label{RWA}
In this appendix we set all magnetic quantum numbers $m_i=0$ and omit them henceforth. This assumption will be justified in the calculation below. A single-particle state is now characterized by only two quantum numbers, $k$ and $\ell$. We study the sub-system \begin{eqnarray}\label{GenSys} i\hbar\, {\dot b}_{\,0,0;0,0}(t) & = & \gamma \sin(\omega_{\rm ext} t) \\
& & \times \sum_{k_1,k_2,\ell} e^{-i(E_{k_1,\ell}+E_{k_2,\ell}) t/\hbar} \nonumber \\
& & \times \,\,\, W_{0,0;0,0;k_1,\ell,k_2,\ell} \,\, b_{\, {k_1},\ell;k_2,\ell}(t) \nonumber \end{eqnarray} of the system (\ref{CoupSys}) with the matrix element (\ref{W02}) and a zero ground state energy of the two bosons.
The essence of the RWA applied to (\ref{GenSys}) is simply to keep all terms with constant coefficients on the right-hand side and to neglect all oscillating terms. The external frequency $\omega_{\rm ext}$ is now chosen such that the energy $\hbar\omega_{\rm ext}$ agrees with the energy \begin{equation}\label{omegaExt} E_{p-K,0;q-K,0}=E_{p-K,0}+E_{q-K,0}=\hbar\omega_0\ln\left(\frac{N}{K^2}\right) \end{equation} of the factor state and is thus determined by the number $N=p\cdot q$ to be factored. Consider now the time dependent factors \begin{eqnarray}\label{constant} & & \hspace{-0.4cm} \frac{1}{2i}\,\left[e^{i(E_{p-K,0}+E_{q-K,0})t/\hbar}-e^{-i(E_{p-K,0}+E_{q-K,0})t/\hbar}\right] \\ & & \times \,\, e^{-i(E_{k_1,\ell}+E_{k_2,\ell})t/\hbar} \nonumber \end{eqnarray} which appear on the right-hand side of (\ref{GenSys}). Note that the sine has been expanded into exponentials here. Assuming $p\ge q$, only the term with $k_1=p-K$, $k_2=q-K$ and $\ell=0$ survives the application of the RWA, with the constant coefficient $(2i)^{-1}$. Appendix \ref{accidental} discusses the absence of accidental degeneracy in the single particle spectrum $E_{k,\ell}$ (\ref{SinglePartSpect}) which is demonstrated in Figure \ref{Fc7}. None of the terms with $\ell\ge 1$ may therefore lead to additional constant terms in (\ref{constant}). We note in passing that for $\ell=0$ the $(2\ell+1)$-fold degeneracy with respect to the magnetic quantum number $m$ reduces to unity, as was mentioned above.
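Since the $s$-state energies are $E_{k,0}=\hbar\omega_0\ln((k+K)/K)$, the resonance condition $E_{k_1,0}+E_{k_2,0}=\hbar\omega_{\rm ext}$ amounts to $(k_1+K)(k_2+K)=N$. For a semiprime $N$ the factor state is the unique resonant $s$-state pair (transitions into $\ell\ge 1$ states being excluded by the absence of accidental degeneracy). A brute-force check in Python with the illustrative values $K=2$, $N=21$:

```python
K = 2
p, q = 7, 3
N = p * q

# Resonance condition E_{k1,0} + E_{k2,0} = hbar*omega_ext amounts to
# (k1 + K) * (k2 + K) = N; enumerate all solutions with k1 >= k2 >= 0.
resonant = [(k1, k2)
            for k1 in range(N) for k2 in range(k1 + 1)
            if (k1 + K) * (k2 + K) == N]

# For semiprime N the only solution is the factor state (p-K, q-K).
assert resonant == [(p - K, q - K)]
```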
With these results it is easy to see that (\ref{GenSys}) is reduced to the equation \begin{equation}\label{b00Short} i\hbar{\dot b}_{\,0,0}(t) =
\frac{\gamma}{2i} W_{0, 0 ; {p-K}, {q-K} }\, b_{ p-K, q-K}(t) \end{equation} where the index $\ell=0$ present in the matrix elements and in the probability amplitudes is omitted here for convenience. To derive a second equation we select the term with $k_1=p-K$ and $k_2=q-K$ from (\ref{CoupSys}) and proceeding like before we get \begin{equation}\label{bpqShort} i\hbar{\dot b}_{\,p-K,q-K}(t) =
-\frac{\gamma}{2i} W_{p-K, q-K ; 0,0 }\, b_{ 0, 0}(t). \end{equation} Equations (\ref{b00Short}) and (\ref{bpqShort}) characterize the dynamics of the two-boson system driven by the periodic perturbation (\ref{Perturb}) with frequency (\ref{omegaExt}). Together with the initial conditions $b_{0,0}(0)=1$ and $b_{p-K,q-K}(0)=0$ and the symmetry \begin{equation}\label{Wsymm} W_{m,n;0,0}=W_{0,0;m,n} \end{equation} they are solved in Sect. \ref{AppSol}. \begin{figure}
\caption{ Scaled effective potential (solid line) formed by the angular momentum barrier (dotted) and the potential $V(\rho;L=3)$ (dashed) which in the quantum case creates the loga\-rith\-mic energy spectrum (\ref{Etilde}) for a scaling parameter $K=2$ (\ref{Lprime}) as a function of the dimensionless radius $\rho\equiv \alpha_{\rm cl} \, r$. The horizontal line $E=0.86\, V_0$ denotes the energy of the radial coordinate $r(t)$ of the classical particle moving {\em periodically} from the left turning point to the right one and back. Of course, $\Theta(t)$ is {\em not} periodic, as is the orbit $r(\Theta)$ shown in Fig. \ref{Fc5}. }
\label{Fc4}
\end{figure}
\section{Absence of accidental degeneracy}\label{accidental}
The energy spectra of {\em any} central potential exhibit the $(2\ell+1)$-fold ``essential degeneracy'' as the energy levels $E_{k,\ell}$ do not depend on the magnetic quantum number $m$. It has been proven long ago that the only potentials that show {\em accidental} degeneracy are the Coulomb and the harmonic-oscillator potentials.\cite{Bertrand} This is a consequence of the existence of a conserved quantity which does not commute with any member of a complete system of commuting operators of the problem.\cite{LandauIII} In the Coulomb case this is the well-known Runge-Lenz vector.\cite{Runge, Lenz} The conserved quantity for the harmonic-oscillator problem will be discussed below. We conclude that in our potential, which is neither of these, accidental degeneracy is absent. Nevertheless we shall study this problem in more detail here.
\subsection{Trajectory of a classical particle}\label{degeneracy}
\begin{figure}
\caption{ Trajectory $r(\Theta)$ of a classical particle with mass $\mu$ and energy $E=0.86\,V_0$ moving in the effective potential shown in Fig. 4. The trajectory starts at an inner turning point and an angle $\Theta=0$. After having covered five periods it reaches an inner turning point at an angle $\Theta\approx 11\pi/8$. }
\label{Fc5}
\end{figure}
We recall that the trajectories of a classical particle in the harmonic-oscillator as well as in the Coulomb potential are closed, the latter for negative energies only. Following the textbook \onlinecite{Goldstein} we calculate the trajectory of a classical particle of mass $\mu$, energy $E$ and angular momentum $J$ moving in the effective potential \begin{equation} V_{\rm eff}(r) =\frac{J^2}{2\mu r^2}+V_0 \,v(r) \end{equation} with an energy $E=0.86\,V_0$ periodically between both turning points as displayed in Fig. \ref{Fc4}, while five periods of the trajectory $r(\Theta)$ are shown in Fig. \ref{Fc5}, where the dimensionless radius $\rho = \alpha_{\rm cl}\, r$ with \begin{equation} \alpha_{\rm cl}=\left(\frac{\mu V_0}{J^2}\right)^{1/2} \end{equation} was used.
The potential determined numerically from the spectrum (\ref{SinglePartSpect}) and shown in Fig. 2 is denoted by $v(\rho)$. It is evident that the orbit of the particle does not close but precesses around the force center thus indicating the absence of accidental degeneracy.
\subsection{Energy spectrum}
The most direct way to check for degeneracy is simply to calculate the energies $E_{k,\ell}$ for the potential under consideration with radial and azimuthal quantum numbers $k$ and $\ell$, respectively. If two or more of the energies with different indices are equal, degeneracy is present; otherwise it is not.
\begin{figure}
\caption{ Lowest scaled energies of the three-dimensional harmonic oscillator. The scheme of levels $E_n=\hbar \omega(n+3/2)$ shows degeneracy as the principal quantum number $n=2k+\ell$ depends on both the radial quantum number $k$ and the azimuthal quantum number $\ell$. For example the level $n=2$ is doubly degenerate, for the quantum numbers $k=1, \ell=0$ and $k=0, \ell=2$, respectively. }
\label{Fc6}
\end{figure} \begin{figure}
\caption{ Scaled energies of a particle with mass $\mu$ moving in a three-dimensional potential leading to a spectrum with $s$ state part (\ref{Etilde}) and scaling parameter $K=2$ of Eq. (\ref{Lprime}). Every energy level is characterized by {\em two} quantum numbers $k$ and $\ell$, respectively. No principal quantum number can be identified and evidently no accidental degeneracy takes place. }
\label{Fc7}
\end{figure}
Before we turn to our potential $v(\rho)$ we recall the situation for the three-dimensional harmonic oscillator where the lowest energy levels are displayed in Fig. \ref{Fc6}.
The energies $E_{k,\ell}$ depend on a combination of both indices $k$ and $\ell$ namely on the principal quantum number $n=2k+\ell$ leading to degeneracy of the levels $E_n=\hbar \omega (n+3/2)$ as can be checked from levels with $n=2,3,4$ of the figure.\cite{Cohen} If the x- and y-axis are oriented along the symmetry axes of the elliptic orbit of the oscillator then it can be shown that the additional integral of motion reduces to the scalar function $E_x - E_y$, the difference between the energies of the projections of the motion onto the x- and the y-axis, respectively.\cite{Khriplovich}
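The degeneracy pattern of Fig. \ref{Fc6} can be reproduced by counting the pairs $(k,\ell)$ with $2k+\ell=n$; weighting each pair by its essential $(2\ell+1)$-fold $m$-degeneracy gives the familiar total degeneracy $(n+1)(n+2)/2$ of the isotropic oscillator. A short counting sketch:

```python
# Degeneracy of the 3-d isotropic oscillator level n = 2k + l:
# count the pairs (k, l) and weight each by the essential (2l+1)-fold
# m-degeneracy; the total equals the textbook value (n+1)(n+2)/2.
for n in range(8):
    pairs = [(k, l) for k in range(n + 1) for l in range(n + 1)
             if 2 * k + l == n]
    total = sum(2 * l + 1 for _, l in pairs)
    assert total == (n + 1) * (n + 2) // 2

# The level n = 2 is (accidentally) shared by (k, l) = (1, 0) and (0, 2).
pairs_n2 = [(k, l) for k in range(3) for l in range(3) if 2 * k + l == 2]
assert sorted(pairs_n2) == [(0, 2), (1, 0)]
```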
With the help of the potential $v(\rho)$ we solved the radial equation (\ref{RadialEq}) numerically. The lowest energy levels $E_{k,\ell}$ are displayed in Fig. \ref{Fc7}. At first sight the scheme of the energies resembles that of the harmonic oscillator potential. But looking more closely we observe that the levels which in the harmonic oscillator scheme of Fig. \ref{Fc6} were degenerate with each other now differ slightly. We conjecture that the higher energy levels behave similarly and no accidental degeneracy is present. We emphasize once more that the $(2\ell+1)$-fold {\em essential} degeneracy with respect to the magnetic quantum number $m$ is caused by the central potential $v(\rho)$.
\end{document} |
\begin{document}
\title{The Deffuant model on $\Z$ with higher-dimensional
opinion spaces}
\begin{abstract} When it comes to the mathematical modelling of social interaction patterns, a number of different models have emerged and been studied over the last decade, in which individuals randomly interact on the basis of an underlying graph structure and share their opinions. A prominent example of the so-called bounded confidence models is the one introduced by Deffuant et al.: Two neighboring individuals will only interact if their opinions do not differ by more than a given threshold $\theta$. We consider this model on the line graph $\mathbb{Z}$ and extend the results that have been achieved for the model with real-valued opinions by considering vector-valued opinions and general metrics measuring the distance between two opinion values. As in the univariate case there turns out to exist a critical value $\theta_\text{\upshape c}$ for $\theta$ at which a phase transition in the long-term behavior takes place, but $\theta_\text{\upshape c}$ depends on the initial distribution in a more intricate way than in the univariate case. \end{abstract}
\section{Introduction}\label{intro}
Consider a simple graph $G=(V,E)$ and assume the vertex set $V$ to be either finite or countably infinite with bounded maximal degree. The vertices are assumed to represent individuals and each of them is assigned an opinion value. The edges in $E$ -- being connections between individuals -- are understood to embody the possibility of mutual influence. For that reason it is no restriction to focus on connected graphs, as the components could be treated individually otherwise. From different directions, including the social sciences, physics and mathematics, interest has been raised in various models of what is called {\itshape opinion dynamics}, which deals with the evolution of such a system under a given set of interaction rules. These models are qualitatively different but share similar ideas, see \cite{Survey} for an extensive survey.\\[0.5em] \noindent The {\em Deffuant model} (introduced by Deffuant et al.\ \cite{Model}) is one of those and features two parameters, the confidence bound $\theta>0$ and the convergence parameter $\mu\in(0,\tfrac 12]$, shaping the willingness to approach the other individual's opinion in a compromise. There are two types of randomness in the model: One is the random {\em initial configuration}, meaning that at time $t=0$ the vertices are assigned identically distributed opinions; the other is the {\em random encounters} thereafter. Serving as a regime for the latter, all the edges in $E$ are assigned unit rate Poisson processes, which are independent of one another and the initial configuration. Whenever a Poisson event occurs on an edge, the corresponding adjacent vertices interact in the manner described below. Just like in most of the analyses of this model, we will consider i.i.d.\ initial opinion values, but comment on how the considerations can be generalized.
By $\eta_t(v)$ we denote the opinion value at vertex $v\in V$ at time $t\geq0$. The current value will not change until at some future time $t$ a Poisson event occurs at one of the edges incident to $v$, say $e=\langle u,v\rangle$, which then might cause an update. Let $\eta_{t-}(u):=\lim_{s\uparrow t}\eta_s(u)=a$ and $\eta_{t-}(v):=\lim_{s\uparrow t}\eta_s(v)=b$ be the two opinion values of $u$ and $v$, just before this happens.
If these opinions lie at a distance less than the confidence bound $\theta$ from one another, they will symmetrically take a step, whose size is scaled by $\mu$, towards a common compromise; if not, they stay unchanged. Although there is a section on vector-valued binary opinions in the original paper by Deffuant et al.\ \cite{Model}, using a different model, the Deffuant model with the interaction rule just described was originally defined only for real-valued opinions with the absolute value as notion of distance. In order to broaden the original scope of this model to vector-valued opinions, the natural replacement for the absolute value is the Euclidean distance $$d(x,y)=\n{ x-y}=\Big(\sum_{i=1}^k (x_i-y_i)^2\Big)^{1/2},\text{ for all }x,y\in\R^k.$$ Given this measure of distance, the rule for opinion updates in the Deffuant model reads as follows:
\begin{equation*}
\eta_t(u) = \left\{ \begin{array}{ll}
a+\mu(b-a) & \mbox{if $\n{ a-b}\leq\theta$,} \\
a & \mbox{otherwise}
\end{array} \right.
\end{equation*} and similarly
\begin{align}\label{dynamics}\end{align}
\begin{equation*}
\eta_t(v) = \left\{ \begin{array}{ll}
b+\mu(a-b) & \mbox{if $\n{ a-b}\leq\theta$,} \\
b & \mbox{otherwise.}
\end{array} \right.
\end{equation*} Note that choosing $k=1$ gives back the original model.
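The update rule (\ref{dynamics}) is easily stated in code; a minimal Python sketch (function and variable names are ours, not part of the model's standard notation):

```python
import numpy as np

def deffuant_update(a, b, theta, mu):
    """One interaction of the Deffuant model: the two opinions take a
    symmetric step towards each other iff they are within distance theta."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    if np.linalg.norm(a - b) <= theta:
        return a + mu * (b - a), b + mu * (a - b)
    return a, b

# Vector-valued opinions (k = 2) with theta = 0.5 and mu = 1/2:
a, b = [0.1, 0.2], [0.3, 0.1]
a2, b2 = deffuant_update(a, b, theta=0.5, mu=0.5)
assert np.allclose(a2 + b2, np.add(a, b))   # the pair average is preserved
assert np.allclose(a2, b2)                  # mu = 1/2 gives immediate agreement

# Opinions farther apart than theta stay unchanged:
a3, b3 = deffuant_update([0.0, 0.0], [1.0, 0.0], theta=0.5, mu=0.5)
assert np.allclose(a3, [0.0, 0.0]) and np.allclose(b3, [1.0, 0.0])
```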
As the assumptions on the graph force $E$ to be countable, there will almost surely be neither two Poisson events occurring simultaneously nor a limit point in time for the Poisson events on edges incident to one fixed vertex. Yet beyond that there is a more subtle issue in how the simple pairwise interactions shape transitions of the whole system in the infinite setting, which puts into question whether the whole process is well-defined by the update rule (\ref{dynamics}). For infinite graphs with bounded degree, however, this problem is settled by standard techniques in the theory of interacting particle systems, see Thm.\ 3.9 on p.\ 27 in \cite{Liggett}. \vspace*{1em}
\noindent One of the most natural questions in this context -- motivated by interpretations coming from social science -- seems to be, under what conditions the individual opinions will converge to a common consensus in the long run and under what conditions they are going to split up into groups of individuals holding different opinions instead. In this regard let us define the following types of scenarios for the asymptotic behavior of the Deffuant model on a connected graph as time tends to infinity:
\begin{definition}\label{states} \begin{enumerate}[(i)] \item {\itshape No consensus}\\ There will be finally blocked edges, i.e.\ edges $e=\langle u,v\rangle$ s.t. $$\n{\eta_t(u)-\eta_t(v)}>\theta,$$ for all times $t$ large enough. Hence the vertices fall into different opinion groups. \item {\itshape Weak consensus}\\ Every pair of neighbors $\{u,v\}$ will finally concur, i.e. $$\lim_{t\to\infty}\n{\eta_t(u)-\eta_t(v)}=0.$$ \item {\itshape Strong consensus}\\ The value at every vertex converges, as $t\to\infty$, to a common limit $l$, where $$l=\begin{cases}\text{the average of the initial opinion values},&\text{if }G\text{ is finite}\\
\E\eta_0,&\text{if }G\text{ is infinite}\end{cases}$$ and $\mathcal{L}(\eta_0)$ denotes the distribution of the initial opinion values.\end{enumerate} \end{definition}
\noindent The first analyses of the Deffuant model and similar opinion dynamics were strongly simulation-based and thus confined to a finite number of agents. In \cite{sim1} for example, Fortunato simulated the long-term behavior of the Deffuant model on four different kinds of finite graphs: Two deterministic examples -- the complete graph and the square lattice -- as well as two random graphs -- those given by the Erd\H{o}s-R\'enyi model as well as the Barab\'asi-Albert model. He found strong numerical evidence that, given initial opinions that are independently and uniformly distributed on $[0,1]$, a confidence threshold $\theta$ less than $\tfrac12$ leads to a fragmentation of opinions, while $\theta>\tfrac12$ leads to a consensus -- irrespective of the underlying graph structures that were considered. Later, the simulation studies were extended to the generalization of the Deffuant model to higher-dimensional opinion values, see for instance \cite{sim2}.
There are however crucial differences between the interactions on a finite compared to an infinite graph. In the finite case, statements about consensus or fragmentation tend to be valid not with probability $1$ but at best with a probability that is close to $1$: In the standard case of i.i.d.\ $\text{\upshape unif}([0,1])$ initial opinions for example, any non-trivial confidence bound, i.e.\ $\theta\in(0,1)$, can lead to either consensus or fragmentation depending on the initial values and the order of interactions. Furthermore, the fact that the dynamics (\ref{dynamics}) preserves the opinion average of two interacting agents implies that strong consensus follows from weak consensus on a finite graph. This does not have to hold in an infinite setting.
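The average-preserving nature of (\ref{dynamics}) and the emergence of consensus on a finite graph for a trivial confidence bound can be illustrated by a small simulation on a path; picking a uniformly random edge per event stands in for the Poisson clocks (all parameter choices are for illustration only):

```python
import random

random.seed(0)
n, mu, theta = 10, 0.5, 1.0        # path graph with 10 agents, trivial bound
ops = [random.random() for _ in range(n)]   # i.i.d. unif([0,1]) initial opinions
avg0 = sum(ops) / n

# Each event picks a uniform edge <i, i+1> of the path and, since the
# opinions always lie within theta = 1, averages the two opinions (mu = 1/2).
for _ in range(50000):
    i = random.randrange(n - 1)
    a, b = ops[i], ops[i + 1]
    if abs(a - b) <= theta:
        ops[i], ops[i + 1] = a + mu * (b - a), b + mu * (a - b)

assert abs(sum(ops) / n - avg0) < 1e-9     # the global average is preserved
assert max(ops) - min(ops) < 1e-3          # opinions converge to a consensus
```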
The first major step in terms of a theoretical analysis of the model on an infinite graph was taken by Lanchier \cite{Lanchier}, who treated the model on the line graph $\Z$ -- similarly with an i.i.d.\ $\text{\upshape unif}([0,1])$ configuration. His main result implies that there is a phase transition at $\theta=\tfrac12$ from a.s.\ no consensus to a.s.\ weak consensus. These findings were reproven and slightly sharpened by Häggström \cite{ShareDrink} to the statement of Theorem \ref{on Z} below, using a non-random pairwise averaging procedure on $\Z$ which he termed {\em Sharing a drink} (SAD) to get a workable representation of the opinion values at times $t>0$.
Using his line of argument, the results were generalized to initial distributions other than $\text{\upshape unif}([0,1])$ by Häggström and Hirscher \cite{Deffuant} as well as Shang \cite{Shang}, independently. In \cite{Deffuant}, the analysis of the Deffuant model was in addition to that extended to other infinite graphs, namely higher-dimensional integer lattices $\Z^d$ and the infinite cluster of supercritical i.i.d.\ bond percolation on these lattices.
\vspace*{1em} \noindent In this paper we stay on the infinite line graph, that is the integer numbers $\Z$ with consecutive integers forming an edge. The direction in which we want to broaden the analysis is -- as already indicated -- the generalization of the Deffuant model on $\Z$ to vector-valued opinions. In Section 2, we give a brief summary of the results for real-valued opinions derived in \cite{Deffuant}, together with the key ideas and tools that were used there.
In Section 3 we establish corresponding results for the case of higher-dimen\-sional opinions, sticking, as indicated above, to the Euclidean norm as the measure of distance between the opinions of interacting agents. The main results of this section (Theorems \ref{nogap} and \ref{gapsEucl}) match the statement for real-valued opinions (Theorem \ref{gen}) in the sense that the radius of the initial distribution as well as the largest gap in its support -- generalized in Definitions \ref{radius} and \ref{gap} -- determine the critical value for $\theta$ at which there is a phase transition from a.s.\ no consensus to a.s.\ strong consensus. While the concept of a distribution's radius transfers straightforwardly to higher dimensions, that of a gap has to be properly redefined and investigated. Doing this, we can in fact characterize the support of the opinion values at times $t>0$, see Proposition \ref{supp_t}. Even though we consider the initial opinions to be i.i.d.\ throughout the paper, the remark after Theorem \ref{gapsEucl} mentions how the arguments can be extended to particular dependent initial configurations in the way it was done in \cite{Deffuant}.
Section 4 finally deals with the generalization of the Deffuant model to distance measures other than the Euclidean, in both one and higher dimensions. We pin down properties a general metric $\rho$ (used to determine whether two opinions are close enough to compromise or not) needs to have in order to allow for the results from Section 3 to be preserved (see Theorem \ref{nogaprho} and \ref{gapsrho}). Examples are given to illustrate the necessity of the requirements imposed on $\rho$.
\vspace*{1em} \noindent At this point it should be mentioned that the vectorial model that was already introduced in the original paper by Deffuant et al.\ \cite{Model} and analyzed quite recently by Lanchier and Scarlatos \cite{Lanchier2} does not fit the general framework of this paper. Unlike all opinion dynamics considered here, its update rule is different from (\ref{dynamics}) and especially not average preserving, leading to substantial qualitative differences.
\section{Background on the univariate case}\label{1d}
\begin{theorem}[\bf Lanchier]\label{on Z}
Consider the Deffuant model on the graph $(\Z,E)$, where $E=\{\langle v,v+1\rangle, v\in\Z\}$
with i.i.d.\ {\upshape unif}$([0,1])$ initial configuration and fixed $\mu\in(0,\tfrac12]$.
\begin{enumerate}[(i)]
\item If $\theta>\tfrac12$, the model converges almost surely to strong consensus, i.e. with
probability $1$ we have: $\lim_{t\to\infty}\eta_t(v)=\tfrac12$ for all $v\in\Z$.
\item If $\theta<\tfrac12$ however, the integers a.s.\ split into (infinitely many) finite clusters
of neighboring individuals asymptotically agreeing with one another, but no global consensus is approached.
\end{enumerate} \end{theorem}
\noindent Accordingly, for independent initial opinions that are uniform on $[0,1]$, the critical value $\theta_\text{c}$ equals $\frac12$, with subcritical values of $\theta$ leading a.s.\ to no consensus and supercritical ones a.s.\ to strong consensus. The case when the confidence bound actually takes on the value $\theta_\text{c}$ remains an open problem. The ideas Häggström \cite{ShareDrink} used to reprove the above result were adapted to accommodate more general univariate initial distributions, leading to a similar statement for all distributions having a first moment $\E\eta_0\in\R\cup\{-\infty,+\infty\}$ (see Thm.\ 2.2 in \cite{Deffuant}), which reads as follows:
\begin{theorem}\label{gen} Consider the Deffuant model on $\Z$ with real-valued i.i.d.\ initial opinions. \begin{enumerate}[(a)]
\item Suppose the initial opinion of all agents follows an arbitrary bounded distribution $\mathcal{L}(\eta_0)$
with expected value $\E\eta_0$ and $[a,b]$ being the smallest closed interval containing its support.
If $\E\eta_0$ does not lie in the support, let $I\subseteq[a,b]$ be the maximal, open interval such that
$\E\eta_0$ lies in $I$ and $\Prob(\eta_0\in I)=0$. In this case let $h$ denote the length of $I$, otherwise
set $h=0$.
Then the critical value for $\theta$, where a phase transition from a.s.\ no consensus to a.s.\ strong
consensus takes place, becomes $\theta_\text{\upshape c}=\max\{\E\eta_0-a,b-\E\eta_0,h\}$.
The limit value in the supercritical regime is $\E\eta_0$.
\item Suppose the initial opinions' distribution is unbounded but its expected value exists, either in the
strong sense, i.e.\ $\E\eta_0\in\R$, or the weak sense, i.e.\ $\E\eta_0\in\{-\infty,+\infty\}$.
Then the Deffuant model with arbitrary fixed parameter $\theta\in(0,\infty)$ will a.s.\ behave
subcritically, meaning that no consensus will be approached in the long run. \end{enumerate} \end{theorem}
\noindent
The situation at criticality is unsolved with the exception of the case when the gap around the mean
is larger than its distance to the extremes of the initial distribution's support. Given this condition, however,
the following proposition (which is Prop.\ 2.4 in \cite{Deffuant}) settles the question about the long-term
behavior for critical $\theta$:
\begin{proposition}\label{crit}
Let the initial opinions be again i.i.d.\ with $[a,b]$ being the smallest closed interval containing
the support of the marginal distribution,
and the latter feature a gap $(\alpha,\beta)$ of width $\beta-\alpha>\max\{\E\eta_0-a,b-\E\eta_0\}$ around its
expected value $\E\eta_0\in[a,b]$.\vspace*{0.5em}
\noindent At criticality, that is for $\theta=\theta_\text{\upshape c}
=\max\{\E\eta_0-a,b-\E\eta_0,\beta-\alpha\}=\beta-\alpha$, we get
the following: If both $\alpha$ and $\beta$ are atoms of the distribution $\mathcal{L}(\eta_0)$, i.e.\
$\Prob(\eta_0=\alpha)>0$ and $\Prob(\eta_0=\beta)>0$, the system approaches a.s.\ strong consensus. However, it
will a.s.\ lead to no consensus if either $\Prob(\eta_0=\alpha)=0$ or $\Prob(\eta_0=\beta)=0$. \end{proposition}
\noindent
Since the same line of reasoning was used in both \cite{ShareDrink} and \cite{Deffuant} to derive the results
we just stated, it is worth taking a closer look on the key concepts involved, especially as they will be the
foundation for most of the conclusions drawn in the upcoming sections.
The presumably most central among these is the idea of {\em flat points}. If $\E\eta_0\in\R$, a vertex $v\in\Z$
is called {\em$\epsilon$-flat to the right} in the initial configuration $\{\eta_0(u)\}_{u\in\Z}$ if for all
$n\geq0$:
\begin{equation}\label{rflat}
\frac{1}{n+1}\sum_{u=v}^{v+n}\eta_0(u)\in\left[\E\eta_0-\epsilon,\E\eta_0+\epsilon\right].
\end{equation}
It is called {\em$\epsilon$-flat to the left} if the above condition is met with the sum running
from $v-n$ to $v$ instead. Finally, $v$ is called {\em two-sidedly $\epsilon$-flat} if for all $m,n\geq0$
\begin{equation}\label{tflat}
\frac{1}{m+n+1}\sum_{u=v-m}^{v+n}\eta_0(u)\in\left[\E\eta_0-\epsilon,\E\eta_0+\epsilon\right].
\end{equation}
However, in order to understand how vertices being one- or two-sidedly $\epsilon$-flat in the initial
configuration play an important role in the further evolution of the configuration another concept is
indispensable, namely the non-random pairwise averaging procedure Häggström \cite{ShareDrink} called
{\em Sharing a drink} (SAD).
Think of glasses being placed at all integers, the one at site $0$ being
brimful, all others empty. Just as in the Deffuant model, neighbors interact and share, but this time without
randomness and confidence bound. In other words, we start with the initial profile $\{\xi_0(v)\}_{v\in\Z}$,
given by $\xi_0(0)=1$ and $\xi_0(v)=0$ for all $v\neq0$, and a finite sequence $(e_n)_{n=1}^N$ of edges
along which updates of the form (\ref{dynamics}) are performed, i.e.\ for the profile $\{\xi_n(v)\}_{v\in\Z}$
after step $n$ and $e_{n+1}=\langle u,u+1\rangle$ we get
$\{\xi_{n+1}(v)\}_{v\in\Z}$ by
\begin{equation}\label{transf}\begin{array}{rl}\xi_{n+1}(u)&\!\!\!=\,(1-\mu)\,\xi_{n}(u)+\mu\,\xi_{n}(u+1),\\
\xi_{n+1}(u+1)&\!\!\!=\,\mu\,\xi_{n}(u)+(1-\mu)\,\xi_{n}(u+1);\end{array}
\end{equation}
all other values stay unchanged.
Elements of $[0,1]^\Z$ that can be obtained in such a way are called
SAD-profiles. The crucial connection to the Deffuant model is that the opinion value $\eta_t(0)$ at any given
time $t>0$ can be written as a weighted average of values at time $t=0$ with weights given by an SAD-profile
(see La.\ 3.1 in \cite{ShareDrink}). The fact that all SAD-profiles share certain properties (the most important
being unimodality) renders it possible to derive characteristics of the future evolution of the Deffuant dynamics
given the initial configuration. For instance, the opinion value at a two-sidedly $\epsilon$-flat vertex in the
initial configuration can never move further than $6\epsilon$ away from the mean (see La.\ 6.3 in
\cite{ShareDrink}).
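To make the SAD procedure concrete, the following sketch implements the averaging update (\ref{transf}) on a finite section of $\Z$ and checks two of its key properties, conservation of mass and unimodality of the resulting profile. This is only an illustration; the function names and the finite truncation of $\Z$ are our own choices.

```python
import random

def sad_profile(n_sites, edges, mu=0.5):
    """Sharing a drink: start with a brimful glass at site 0, empty
    glasses elsewhere, and apply the averaging update along each
    edge <u, u+1> in the given order -- no confidence bound here."""
    xi = {v: 0.0 for v in range(-n_sites, n_sites + 1)}
    xi[0] = 1.0
    for u in edges:
        a, b = xi[u], xi[u + 1]
        xi[u], xi[u + 1] = (1 - mu) * a + mu * b, mu * a + (1 - mu) * b
    return xi

random.seed(1)
edges = [random.randint(-4, 3) for _ in range(50)]  # edges <u, u+1>
xi = sad_profile(5, edges, mu=0.3)

# The update only redistributes liquid, so the total stays 1 ...
assert abs(sum(xi.values()) - 1.0) < 1e-9
# ... and the resulting profile is unimodal, as all SAD-profiles are:
vals = [xi[v] for v in range(-5, 6)]
peak = vals.index(max(vals))
assert all(vals[i] <= vals[i + 1] + 1e-12 for i in range(peak))
assert all(vals[i] >= vals[i + 1] - 1e-12 for i in range(peak, len(vals) - 1))
```

Unimodality is exactly the property that makes SAD-profiles useful as weight vectors in the representation of $\eta_t(0)$ mentioned above.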
These two vital ingredients -- flat points and SAD-profiles -- of the line of argument in \cite{ShareDrink}
and Sect.\ 2 in \cite{Deffuant} can be adapted in order to analyze the Deffuant model with
vector-valued opinions, as we will see in the following section.
\section{Deffuant model with multivariate opinions and the Euclidean norm as measure of distance}
Having characterized the long-term behavior of the Deffuant dynamics on $\Z$ starting from a general univariate i.i.d.\ configuration, the next step of generalization with regard to the marginal initial distribution is, as indicated in the introduction, to allow for vectors instead of numbers to represent the opinions. Like in the univariate case, we want the initial opinions to be independent and identically distributed, just now with some common distribution $\mathcal{L}(\eta_0)$ on $\R^k$. This will ensure ergodicity of the setting (with respect to shifts) as before.
In this section we will consider $\R^k$ to be equipped with the Borel $\sigma$-algebra generated by the Euclidean norm, denoted by $\mathcal{B}^k$.
\begin{definition}\label{radius} If the distribution of $\eta_0$ has a finite expectation, define its {\em radius} by $$R:=\inf\left\{r>0,\;\Prob\big(\eta_0\in B[\E\eta_0,r]\big)=1\right\},$$ where $B[y, r]:=\{x\in\R^k,\;\n{ x-y}\leq r\}$ denotes the closed Euclidean ball with radius $r$ around $y$. Note that the radius of an unbounded distribution is infinite. \end{definition}
\noindent The notion of $\epsilon$-flatness easily translates to the new setting by just replacing the intervals by balls: If $\E\eta_0\in\R^k$, a vertex $v\in\Z$ is called $\epsilon$-flat to the right in the initial configuration $\{\eta_0(u)\}_{u\in\Z}$ if for all $n\geq0$: \begin{equation}\label{eucflat}
\frac{1}{n+1}\sum_{u=v}^{v+n}\eta_0(u)\in B[\E\eta_0,\epsilon], \end{equation} similarly for $\epsilon$-flatness to the left and two-sided $\epsilon$-flatness -- compare with (\ref{rflat}) and (\ref{tflat}).
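Condition (\ref{eucflat}) quantifies over all $n\geq0$, so numerically one can only test finitely many windows: the sketch below can refute $\epsilon$-flatness to the right but never certify it. The function name and the alternating example configuration are our own illustrations.

```python
import math

def eps_flat_right(eta, v, mean, eps, n_max):
    """Test the running-average condition for windows v..v+n,
    n = 0..n_max. Only a finite check: True does not certify
    genuine eps-flatness, which requires the bound for ALL n >= 0."""
    k = len(mean)
    acc = [0.0] * k
    for n in range(n_max + 1):
        acc = [a + c for a, c in zip(acc, eta[v + n])]
        avg = [a / (n + 1) for a in acc]
        if math.dist(avg, mean) > eps:
            return False
    return True

# Opinions in R^2 alternating between two antipodal points: the running
# averages from site 0 are (1/(n+1), 0) for even n and (0, 0) for odd n.
eta = {u: (1.0, 0.0) if u % 2 == 0 else (-1.0, 0.0) for u in range(40)}
assert eps_flat_right(eta, 0, (0.0, 0.0), 1.0, 30)      # worst window is n = 0
assert not eps_flat_right(eta, 0, (0.0, 0.0), 0.9, 30)  # fails already at n = 0
```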
With these notions in hand we can state and prove a higher-dimensional analogue of Theorem \ref{gen}, valid for initial distributions whose support does not feature a substantial gap around the mean. The proof of this result will be a fairly straightforward adaptation of the methods for the univariate case indicated in Section \ref{1d}. In contrast, the more general case treated in Theorem \ref{gapsEucl} requires invoking more intricate geometrical considerations.
\begin{theorem}\label{nogap} In the Deffuant model on $\Z$ with the underlying opinion space $(\R^k,\n{\,.\,})$ and an initial opinion distribution $\mathcal{L}(\eta_0)$ we have the following limiting behavior: \begin{enumerate}[(a)] \item If $\mathcal{L}(\eta_0)$ has radius $R\in[0,\infty)$ and mass around its mean, i.e. \begin{equation}\label{matm} \Prob\big(\eta_0\in B[\E\eta_0,r]\big)>0 \text{ for all }r>0, \end{equation} the critical parameter is $\theta_\text{\upshape c}=R$, meaning that for $\theta<R$ we have a.s.\ no consensus and for $\theta>R$ a.s.\ strong consensus. \item Let $\eta_0=(\eta_0^{(1)},\dots,\eta_0^{(k)})$ be the random initial opinion vector. If at least one of the coordinates $\eta_0^{(i)}$ has an unbounded marginal distribution whose expected value exists (regardless of whether finite, $+\infty$ or $-\infty$), then the limiting behavior will a.s.\ be no consensus, irrespective of $\theta$. \end{enumerate} \end{theorem}
\begin{proof} \begin{enumerate}[(a)] \item Showing the first part requires, just as in the univariate case (covered by part (a) of Theorem \ref{gen}), little more than following the arguments in the last two sections of \cite{ShareDrink}: the central arguments go through even for vector-valued opinions, as the crucial properties of the absolute value that were used are shared by its replacement in higher dimensions, the Euclidean norm. Because of that, we only sketch the main line of reasoning and refer to Sect.\ 6 in \cite{ShareDrink} and Sect.\ 2 in \cite{Deffuant} for a more thorough presentation of the arguments.
First of all, the (multivariate) Strong Law of Large Numbers -- in the following abbreviated by SLLN -- tells us that the averages in (\ref{eucflat}) for large $n$ are close to the mean in Euclidean distance. For $\epsilon>0$ fixed, choose $N\in\N$ such that the event $$A:=\bigg\{\frac{1}{n+1}\sum_{u=1}^{n+1}\eta_0(u)\in B[\E\eta_0,\tfrac{\epsilon}{3}]\ \text{for all }n\geq N\bigg\}$$ has positive probability. Using (\ref{matm}) and the fact that the initial opinions are i.i.d., we can locally modify the configuration to conclude that the event $\{\eta_0(v)\in B[\E\eta_0,\tfrac{\epsilon}{3}] \text{ for }v=1,\dots,N+1\}\cap A$ has positive probability, implying the $\epsilon$-flatness to the right of site $1$ -- just as it was done in La.\ 4.2 in \cite{ShareDrink}.
For $\theta<R$, the probability of $\{\eta_0\notin B[\E\eta_0,\theta+\epsilon]\}$ is non-zero for $\epsilon$ small enough, hence a vertex can be at distance larger than $\theta$ from $B[\E\eta_0,\epsilon]$ initially. Due to the independence of initial opinions, the event that site $-1$ is $\epsilon$-flat to the left, $1$ is $\epsilon$-flat to the right and $\eta_0(0)\notin B[\E\eta_0,\theta+\epsilon]$ has positive probability. Using the SAD representation, it follows -- mimicking Prop.\ 5.1 in \cite{ShareDrink} -- that given such an initial configuration the opinion value at site $1$ will be a convex combination of averages in (\ref{eucflat}) for all times $t>0$ and thus in $B[\E\eta_0,\epsilon]$, due to the convexity of Euclidean balls. The same holds for site $-1$ and the half-line to the left. Consequently, the edges $\langle-1,0\rangle$ and $\langle0,1\rangle$ will stay blocked for ever. Ergodicity of the initial opinion sequence ensures that with probability $1$ (infinitely many) vertices will get isolated that way, which settles the subcritical case.
In the supercritical regime, i.e.\ $\theta>R$, we focus on two-sidedly $\epsilon$-flat vertices: If site $0$ is $\epsilon$-flat to the left and $1$ is $\epsilon$-flat to the right, both are two-sidedly $\epsilon$-flat -- using again the convexity of $B[\E\eta_0,\epsilon]$. By independence this event has positive probability, and by ergodicity we will a.s.\ have (infinitely many) two-sidedly $\epsilon$-flat vertices. Mimicking La.\ 6.3 in \cite{ShareDrink} literally, we find that vertices which are two-sidedly $\epsilon$-flat in the initial configuration will never move further than $6\epsilon$ away from the mean, irrespective of future interactions. Choosing $\epsilon>0$ small, say such that $7\epsilon<\theta-R$, will ensure that updates along edges incident to two-sidedly $\epsilon$-flat vertices will never be prevented by the distance of opinions exceeding the confidence bound.
The proof of Prop.\ 6.1 in \cite{ShareDrink}, which states that neighbors will either finally concur or the edge between them be blocked for large $t$, can be adopted as well: Its central idea -- borrowed from physics -- that every individual starts with an initial amount of energy that is then partly transferred, partly lost in interactions works regardless of whether the opinions $\{\eta_t(v)\}_{v\in \Z}$ are shaped by numbers or vectors. In the current setting, merely the term $W_t(v)=(\eta_t(v))^2$, which defines the energy at vertex $v$ at time $t$, has to be read as a dot product. Again, if the opinions $\eta_{t}(u),\eta_{t}(v)$ of two neighbors are within the confidence bound but $\n{\eta_{t}(u)-\eta_{t}(v)}\geq\delta$ for some fixed $\delta>0$, $W_t(u)+W_t(v)$ decreases by at least $2\mu(1-\mu)\delta^2$ when they compromise. This cannot happen infinitely often with positive probability, as the expected energy at time $t=0$ is $\E W_0(v)=\E(\eta_0^{\,2})<\infty$ and the expectation of $W_t(v)$ is both non-increasing in $t$ and non-negative. For details see Prop.\ 6.1 and La.\ 6.2 in \cite{ShareDrink}.
The considerations above show that two-sidedly $\epsilon$-flat vertices and their neighbors have to finally concur with probability $1$, forcing the opinion values of the neighbors to eventually lie at a distance strictly less than $7\epsilon$ from the mean as well. By our choice of $\epsilon$, this conclusion propagates inductively, showing that the limiting behavior will a.s.\ be strong consensus if we let $\epsilon$ tend to $0$.
\item In order to prove the second claim, we use part (b) of Theorem \ref{gen}, focusing on the $i$th coordinate only. Fix $\theta\in(0,\infty)$. Since $$|x_i-y_i|\leq\n{x-y} \text{ for all vectors }x,y\in\R^k \text{ and }i\in\{1,\dots,k\},$$ a distance of more than $\theta$ in the $i$th coordinate of the opinion vectors for two neighbors $u,v$ implies that the edge between them is blocked. The arguments used for unbounded distributions in Theorem \ref{gen} (see Thm.\ 2.2 in \cite{Deffuant}) show that under the given conditions, there are a.s.\ vertices that differ by more than $\theta$ from both their neighbors in the $i$th coordinate (with respect to the absolute value) in the initial configuration, and this will not change no matter whom their neighbors compromise with. Consequently, the corresponding opinion vectors will always be at Euclidean distance more than $\theta$. \end{enumerate} \end{proof}\vspace*{-1em}
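The energy argument used in the proof can also be checked directly: one compromise preserves the sum of the two opinion vectors while lowering the pair energy $W(u)+W(v)$ by exactly $2\mu(1-\mu)\n{\eta(u)-\eta(v)}^2$, in any dimension. A minimal numerical sketch (the helper names and sample vectors are ours):

```python
def energy(x):
    # W(v) = <eta(v), eta(v)>, the "energy" from the proof, as a dot product
    return sum(c * c for c in x)

def compromise(x, y, mu):
    # one pairwise averaging update along an edge, bound assumed to be met
    x2 = tuple((1 - mu) * a + mu * b for a, b in zip(x, y))
    y2 = tuple(mu * a + (1 - mu) * b for a, b in zip(x, y))
    return x2, y2

mu = 0.3
x, y = (2.0, -1.0, 0.5), (0.5, 1.0, -0.5)
x2, y2 = compromise(x, y, mu)

# the coordinatewise sum (hence the pair average) is preserved ...
assert all(abs((a + b) - (c + d)) < 1e-12 for a, b, c, d in zip(x, y, x2, y2))
# ... while the pair energy drops by exactly 2*mu*(1-mu)*||x - y||^2
drop = energy(x) + energy(y) - energy(x2) - energy(y2)
dist2 = sum((a - b) ** 2 for a, b in zip(x, y))
assert abs(drop - 2 * mu * (1 - mu) * dist2) < 1e-12
```

Since the drop is quadratic in the distance, compromises between opinions at distance at least $\delta$ each cost at least $2\mu(1-\mu)\delta^2$, which is the bound used above.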
\begin{remark}
Much as in the univariate setting, the case where none of the unbounded coordinates of $\eta_0$ has
an expected value (neither finite nor $+\infty$ nor $-\infty$) remains unsolved by Theorem \ref{nogap}. \end{remark}
\noindent When it comes to bounded initial distributions which do have a large gap around the mean, the picture in higher dimensions changes drastically -- something that \par\begingroup \rightskip15em\noindent will require several preliminary results before we are ready to state and prove this section's main result, Theorem \ref{gapsEucl}. The major difference to the univariate case is that with higher-dimensional opinions the update along some edge $\langle u,v\rangle$ can actually lead to a situation where both $u$ and $v$ come closer to the opinion of a third vertex $w$, which lies within the confidence bound of neither $\eta(u)$ nor $\eta(v)$, see the picture on the right. \par\endgroup
In the case of real-valued opinions this is impossible, because in that setting an update along
$\langle u,v\rangle$ always increases $\min\{|\eta(u)-\eta(w)|, |\eta(v)-\eta(w)|\}$, if $\eta(w)$ does not lie in between $\eta(u)$ and $\eta(v)$.
To illustrate how this changes the conditions, let us consider the initial distributions $\text{\upshape unif}(S^{k-1})$, where $S^{k-1}$ denotes the Euclidean unit sphere in $\R^k$. For $k=1$ this is just $\text{\upshape unif}(\{-1,1\})$, which by Theorem \ref{gen} has the trivial critical value $\theta_\text{c}=2$. For $k\geq2$ however, the fact that opinions close to each other can compromise in order to form a central opinion will bring $\theta_\text{c}$ down to the radius $1$ of the distribution, as we will see in the sequel.
The statement of the main result in this section, Theorem \ref{gapsEucl}, closely resembles that of Theorem \ref{gen} (a); only the notion of a gap in the initial distribution has to be reinterpreted in the higher-dimensional setting, which makes the proof of this generalized result rather technical. However, while establishing auxiliary results, we will gain additional information about the set of opinion values that can occur in the Deffuant model at times $t>0$, depending on the initial distribution and the confidence bound. When it comes to the initial distribution $\mathcal{L}(\eta_0)$, the most important features besides its expected value are its support and the corresponding radius.
\begin{definition} Consider an $\R^k$-valued random variable $\zeta$. Its {\itshape support} is the following subset of $\R^k$, which is closed with respect to the Euclidean metric: $$\supp(\zeta):=\left\{x\in\R^k,\;\Prob\big(\zeta\in B[x,r]\big)>0\ \text{for all }r>0\right\}.$$ \end{definition}
\noindent Observe that this definition corresponds to the standard notion of {\em spectrum of a measure} (see for example Thm.\ 2.1 and Def.\ 2.1 in \cite{Partha}) -- applied to the distribution of a random variable.
If the initial distribution has a finite expectation, the radius can also be written as $$R=\sup\left\{\n{\E\eta_0-x},\; x\in\supp(\eta_0)\right\},$$ as the following proposition shows.
\begin{proposition}\label{Radius} If $\E\eta_0\in\R^k$, we have \begin{equation}\label{rad} \inf\left\{r>0,\;\Prob\big(\eta_0\in B[\E\eta_0,r]\big)=1\right\}= \sup\left\{\n{\E\eta_0-x},\; x\in\supp(\eta_0)\right\}.\end{equation} \end{proposition}
\begin{proof} First, consider a set $A$ which is compact in $(\R^k,\n{\,.\,})$ and a subset of the complement of $\supp(\eta_0)$. The claim is that these properties of $A$ imply $\Prob(\eta_0\in A)=0$. Indeed, for every $x\in A\subseteq (\supp(\eta_0))^\text{\upshape c}$ there exists $r_x>0$ s.t.\ $\Prob\big(\eta_0\in B[x,r_x]\big)=0$. Let $B(y,r)$ denote the open Euclidean ball with radius $r$ around $y$, then $\{B(x,r_x),\;x\in A\}$ is an open cover of $A$, which by compactness has a finite subcover $\{B(x_i,r_{x_i}),\;1\leq i\leq n\}$. Consequently $$\Prob(\eta_0\in A)\leq\Prob\Big(\eta_0\in \bigcup_{i=1}^n B[x_i,r_{x_i}]\Big)=0.$$ If $r$ is greater than the supremum in (\ref{rad}) it follows that $\supp(\eta_0)\subseteq B(\E\eta_0,r)$. Since $$\big(B(\E\eta_0,r)\big)^\text{\upshape c}=\Bigg(B[\E\eta_0,r+1]\setminus B(\E\eta_0,r)\Bigg)\cup \Bigg(\bigcup_{q\in\mathbb{Q}^k\setminus B[\E\eta_0,r+1]}B[q,1]\Bigg)$$ and the right-hand side is a countable union of nullsets with respect to $\mathcal{L}(\eta_0)$, we get $\Prob\big(\eta_0\in B[\E\eta_0,r]\big)=1$, which means that $r$ is greater or equal to the infimum in (\ref{rad}).
On the other hand, if $r$ is less than the supremum, there exists a point $x\in\supp(\eta_0)\setminus B[\E\eta_0,r]$, which consequently has a positive distance $\delta$ to the closed ball $B[\E\eta_0,r]$. This gives $$\Prob\big(\eta_0\in B[\E\eta_0,r]\big)\leq1-\Prob\big(\eta_0\in B[x,\tfrac{\delta}{2}]\big)<1.$$ In other words, $r$ does not appear in the set the infimum is taken over. Putting both arguments together proves (\ref{rad}). \end{proof}
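As a quick sanity check, the identity (\ref{rad}) can be verified numerically for a toy discrete distribution; both the example distribution and the grid search below are our own illustrations.

```python
import math

# Toy discrete distribution on R^2: masses 1/2, 1/4, 1/4 on three points.
support = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
probs = [0.5, 0.25, 0.25]
mean = tuple(sum(p * x[i] for p, x in zip(probs, support)) for i in range(2))
assert mean == (0.5, 0.5)

# Right-hand side of the identity: the largest distance from the mean
# to a support point, here sqrt(1.5**2 + 0.5**2) = sqrt(2.5).
sup_side = max(math.dist(mean, x) for x in support)
assert abs(sup_side - math.sqrt(2.5)) < 1e-12

# Left-hand side: the smallest r on a fine grid such that the closed
# ball B[mean, r] carries the full mass.
def mass_in_ball(r):
    return sum(p for p, x in zip(probs, support) if math.dist(mean, x) <= r)

inf_side = min(r / 1000 for r in range(0, 3000) if mass_in_ball(r / 1000) == 1.0)
assert abs(inf_side - sup_side) < 1e-3
```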
\begin{definition}\label{Dtheta} \begin{enumerate}[(i)] \item For a finite graph $G=(V,E)$ and an edge $e=\langle u,v\rangle\in E$ let the update described in (\ref{dynamics}), considered as a deterministic map on the set of $\R^k$-valued profiles, be denoted by $T_e^\theta$. So if $T_e^\theta$ is applied to $\xi=\{\xi(v)\}_{v\in V}$ it just means that all values stay unchanged with the only exception of \begin{equation}\label{T_e}\left(\begin{array}{c}T_e^\theta\xi(u)\\T_e^\theta\xi(v)\end{array}\right) =\left(\begin{array}{c}(1-\mu)\,\xi(u)+\mu\,\xi(v)\\\mu\,\xi(u)+(1-\mu)\,\xi(v)\end{array}\right)\quad\text{if } \n{\xi(u)-\xi(v)}\leq\theta.\end{equation} \item Consider a finite section $\{1,\dots,n\}$ of the line graph, a finite sequence $(e_i)_{i=1}^N$ of edges $e_i\in\{\langle1,2\rangle,\dots,\langle n-1,n\rangle\}$ and some values $x_1,\dots,x_n$ in $\supp(\eta_0)$. Such a triple will from now on be called a {\itshape finite configuration}.\\ To {\em update the configuration} (with respect to $\theta$) will mean that we take $x_1,\dots,x_n$ as initial opinions, i.e.\ we set $\eta_0(v)=x_v$ for all $v\in\{1,\dots,n\}$, and then apply $T_{e_N}^\theta\circ T_{e_{N-1}}^\theta\circ\ldots\circ T_{e_1}^\theta$ to $\{\eta_0(v)\}_{v\in \{1,\dots,n\}}$.
Slightly abusing the notation, let the outcome, i.e.\ the final opinion values $\{T_{e_N}^\theta\circ\ldots\circ T_{e_1}^\theta\,\eta_0(v)\}_{v\in \{1,\dots,n\}}$, be denoted by $\{\eta_N(1),\dots,\eta_N(n)\}$.
\item Let $\nu$ denote the initial distribution $\mathcal{L}(\eta_0)$. For $\theta>0$, let $\D_\theta(\nu)$ denote the set of vectors in $\R^k$ which the opinion values of finite configurations can collectively approach, if updated according to confidence bound $\theta$. More precisely, $x\in\D_\theta(\nu)$ if and only if for all $r>0$, there exist some $n\in\N=\{1,2,\dots\}$, $x_1,\dots,x_n\in\supp(\eta_0)$ and $(e_i)_{i=1}^N$ as above, such that updating the configuration with respect to $\theta$ yields $\eta_N(v)\in B[x,r]$ for all $v\in\{1,\dots,n\}$. \end{enumerate} \end{definition}
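To make Definition \ref{Dtheta} concrete, the following sketch implements the map $T_e^\theta$ of (\ref{T_e}) and updates a small finite configuration; the helper names, toy values and edge sequences are our own choices.

```python
import math

def update(eta, u, theta, mu):
    """Apply T_e^theta along the edge e = <u, u+1> in place: the pair
    compromises only if within Euclidean distance theta."""
    x, y = eta[u], eta[u + 1]
    if math.dist(x, y) <= theta:
        eta[u] = tuple((1 - mu) * a + mu * b for a, b in zip(x, y))
        eta[u + 1] = tuple(mu * a + (1 - mu) * b for a, b in zip(x, y))

def run_configuration(values, edges, theta, mu=0.5):
    """Update a finite configuration (initial values on sites 1..n and
    an edge sequence) and return the final opinion profile."""
    eta = {v + 1: x for v, x in enumerate(values)}
    for u in edges:
        update(eta, u, theta, mu)
    return eta

# Three sites on the line; each pair of neighbors starts within theta,
# so repeated updates pull all values toward the common average 0.5.
eta = run_configuration([(0.0,), (0.5,), (1.0,)], [1, 2] * 40, theta=0.8)
spread = max(x[0] for x in eta.values()) - min(x[0] for x in eta.values())
assert spread < 1e-3

# With a bound below the initial distance, nothing ever happens.
eta2 = run_configuration([(0.0,), (1.0,)], [1] * 10, theta=0.5)
assert eta2 == {1: (0.0,), 2: (1.0,)}
```

In the notation of the definition, the first run shows that $\tfrac12$ belongs to $\D_\theta(\nu)$ for any $\nu$ whose support contains $0$, $\tfrac12$ and $1$, provided $\theta\geq0.8$.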
\noindent It is worth emphasizing that finite configurations are supposed to mimic the dynamics of the Deffuant model, interpreting $(e_i)_{i=1}^N$ as the locations of the first $N$ Poisson events on the edges $\langle0,1\rangle,\langle1,2\rangle,\dots,\langle n-1,n\rangle,\langle n,n+1\rangle$ in (strict) chronological order. In this respect, given $\theta$, we can choose the sequence $(e_i)_{i=1}^N$ such that it contains only Poisson events causing an actual update, by simply eliminating all events on edges where the opinions of the two vertices are more than $\theta$ apart.
Note that according to the definition, $\D_\theta(\nu)$ depends on $\supp(\eta_0)$ and $\theta$, as well as $\mu$, the latter being less obvious. See Example \ref{mu} below for an instance where $\mu$ actually makes a difference. Let us now turn to various properties of the set $\D_\theta(\nu)$.
\begin{lemma}\label{properties} Fix the distribution $\nu$ of $\eta_0$ and let $\D_\theta(\nu)$ and $R$ be defined as above. \begin{enumerate}[(a)]
\item $\D_\theta(\nu)$ is closed and increases with $\theta$.
\item $\supp(\eta_0)\subseteq \D_\theta(\nu)\subseteq \overline{\conv(\supp(\eta_0))}\subseteq B[\E\eta_0,R]$ for all
$\theta>0$, where $\conv(A)$ denotes the convex hull, $\overline{A}$ the closure of a set $A$. \end{enumerate} \end{lemma}
\begin{proof} \begin{enumerate}[(a)]
\item The first claim follows directly from the definition: For a sequence $(x_n)_{n\in\N}$ in $\D_\theta(\nu)$
such that $\n{ x-x_n}\to 0$ and every $r>0$, there exists some $x_n \in B[x,\tfrac{r}{2}]$. Due to $x_n\in
\D_\theta(\nu)$, there exists a finite configuration with all final opinion values in $B[x_n,\tfrac{r}{2}]$.
But since $B[x_n,\tfrac{r}{2}]\subseteq B[x,r]$, this implies $x\in\D_\theta(\nu)$.
As for the second claim, since we are free to choose the edge sequence in finite configurations, it is obvious
that making $\theta$ larger only allows for more options when we are to come up with a setting that brings the
opinion values collectively inside $B[x,r]$ for some given $x\in\R^k$ and $r>0$.
\item The first inclusion is trivial, as for $x\in\supp(\eta_0)$ the finite configuration with $n=1,\ x_1=x$ will do.
The second inclusion is due to the fact that every update of opinions is a convex combination, see (\ref{T_e}).
Consequently, all final opinion values of finite configurations lie within $\conv(\supp(\eta_0))$. The last
inclusion, which is meaningful only for $R<\infty$, follows from Proposition \ref{Radius} and the fact that
$B[\E\eta_0,R]$ is both convex and closed. \end{enumerate} \end{proof}
\noindent It should be mentioned that an easy corollary to Carath\'eodory's Theorem on the convex hull states that the convex hull of a compact set in $\R^k$ is compact as well. If $\eta_0$ has a bounded support, this implies that the convex hull of $\supp(\eta_0)$ is actually closed, i.e.\ $\overline{\conv(\supp(\eta_0))}=\conv(\supp(\eta_0))$.
\begin{example}\label{jump} To get familiar with the idea behind $\D_\theta(\nu)$, let us consider the discrete real-valued initial distribution given by $\Prob(\eta_0=\tfrac1n)=\tfrac{1}{2^n}, n\in\N$. It is not hard to see that this implies $\supp(\eta_0)=\{\tfrac1n,\; n\in\N\}\cup\{0\}$. Having the Taylor expansion of the logarithm in mind we find $$\E\eta_0=\sum_{n=1}^\infty \frac{1}{n\,2^n}=-\left(-\sum_{n=1}^\infty \frac{(\tfrac12)^n}{n}\right)
=-\ln(1-\tfrac12)=\ln(2).$$ By Theorem \ref{gen} we get $\theta_c=R=\ln(2)$, since $\Prob(\eta_0\in[0,1])=1$ and the largest gap in between the point masses is $\tfrac12$.
For two point masses situated at $x$ and $y$ at distance $0<\n{ x-y}\leq\theta$, all convex combinations of $x,y$ are in $\D_\theta(\nu)$: For $\alpha\in[0,1]$ and $r>0$, take $m,n\in\N$ s.t.\
$$\left|\frac{m}{m+n}-\alpha\right|\leq\frac{r}{4\,\max\{\n{x},\n{y}\}}.$$ Let us set up a finite configuration with $m+n$ vertices, $x_1=\ldots=x_m=x$ and $x_{m+1}=\ldots=x_{m+n}=y$ as well as enough Poisson events on every edge (in an appropriate order) such that -- having updated the configuration according to the edge sequence -- the outcome $\eta_N(v)$ will be at distance less than $\tfrac{r}{2}$ from the average $\tfrac{m}{m+n}\,x+\tfrac{n}{m+n}\,y$ for all $v\in\{1,\dots,m+n\}$. Since all the opinion values lie in an interval of length at most $\theta$ in the beginning and hence always will, we could choose the edge sequence by always taking the edge with largest current discrepancy next, to see that a finite sequence with the claimed property exists. This will ensure \begin{align*} \n{\eta_N(v)-(\alpha x+(1-\alpha)y)}&\leq\tfrac{r}{2}+\n{(\tfrac{m}{m+n}\,x+\tfrac{n}{m+n}\,y)-(\alpha x+(1-\alpha)y)}\\
&\leq\tfrac{r}{2}+|\tfrac{m}{m+n}-\alpha|\cdot\n{x}+|\alpha-\tfrac{m}{m+n}|\cdot\n{y}\\ &\leq r, \end{align*} hence $\alpha x+(1-\alpha)y\in\D_\theta(\nu)$. This observation together with the fact that gaps of width larger than $\theta$ can not be bridged leads to $$\D_\theta(\nu)=[0,\tfrac{1}{n_\theta}]\cup\{\tfrac1n,\; n< n_\theta\},$$ where $n_\theta:=\max\{n\in\N,\;\tfrac{1}{n-1}-\tfrac{1}{n}>\theta\}$. \end{example}
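The quantities in Example \ref{jump} are easy to confirm numerically: the mean of the distribution is $\ln 2$, and $n_\theta$ can be found by scanning the gaps between consecutive atoms. A sketch assuming $\theta<\tfrac12$ (so that $n_\theta$ is well defined); the helper function is ours.

```python
import math

# Mean of the distribution P(eta_0 = 1/n) = 2^{-n}: the series
# sum 1/(n 2^n) equals -ln(1 - 1/2) = ln 2.
mean = sum(1 / (n * 2 ** n) for n in range(1, 200))
assert abs(mean - math.log(2)) < 1e-12

# n_theta = max{n : 1/(n-1) - 1/n > theta}. The gap between the atoms
# 1/(n-1) and 1/n shrinks in n, so we scan until it drops below theta.
def n_theta(theta):
    n = 2
    while 1 / n - 1 / (n + 1) > theta:
        n += 1
    return n

assert n_theta(0.4) == 2   # only the gap 1 - 1/2 = 1/2 exceeds 0.4
assert n_theta(0.1) == 3   # gaps 1/2 and 1/6 exceed 0.1; 1/12 does not
```

With $\theta=0.1$, for instance, this confirms $\D_\theta(\nu)=[0,\tfrac13]\cup\{1,\tfrac12\}$ in the formula above.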
\begin{lemma}\label{circle} \begin{enumerate}[(a)]
\item For all $x\in\R^k$ and $0\leq \delta<\tfrac{\theta}{2}$, the set $\D_\theta(\nu)\cap B[x,\delta]$ is convex.
\item If $R<\infty$, then $\D_{2R}(\nu)=\overline{\conv(\supp(\eta_0))}=\conv(\supp(\eta_0))$.
\item The connected components of $\D_\theta(\nu)$ are convex and at distance at least $\theta$ from one another.
If $\D_\theta(\nu)$ is connected, then $\D_\theta(\nu)=\overline{\conv(\supp(\eta_0))}$.
\item If $R<\infty$ and $\nu$ has mass around its mean, i.e.\ condition (\ref{matm}) holds, then
$\D_{\theta}(\nu)=\conv(\supp(\eta_0))$ already for $\theta>R$.
\item For $R<\infty$, the set-valued mapping
$$\begin{cases}(0,\infty)\to\mathcal{B}^k\\
\vartheta\mapsto \D_\vartheta(\nu)\end{cases}$$ is piecewise constant with only
finitely many jumps on $[\delta,\infty)$ for all $\delta>0$.
\item If $\D_\theta(\nu)$ is connected and $\E\eta_0$ is finite, then $\E\eta_0\in\D_\theta(\nu)$. \end{enumerate} \end{lemma}
\begin{proof} \begin{enumerate}[(a)]
\item The proof of the first part of this lemma follows the idea of the above example. Let $y,z\in\D_\theta(\nu)$
and their distance be $0<\n{ y-z}\leq 2\delta<\theta$.
Let $\epsilon=\theta-2\delta>0$. For any $\epsilon\geq r>0$, there exist finite configurations $\chi_1$ and
$\chi_2$ with final values in $B[y,\tfrac{r}{4}]$ and $B[z,\tfrac{r}{4}]$ respectively.
For $\alpha\in[0,1]$ choose again $m,n\in\N$ s.t.\
$$\left|\frac{m}{m+n}-\alpha\right|\leq\frac{r}{4\,\max\{\n{y},\n{z}\}}.$$
We define a new finite configuration by putting $m$ copies of $\chi_1$ and $n$ copies of $\chi_2$ next
to each other: Their finite sections of the line graph (together with the assigned initial values) will be
concatenated blockwise -- the order among the blocks being irrelevant -- by adding an edge between two
consecutive blocks in order to form the underlying line graph of a larger finite configuration. To get an
edge sequence for the whole configuration we will simply string together the edge sequences of the
individual copies, again in a blockwise manner and arbitrary order.
Updating according to the edge sequence will then bring all the opinion values within distance $\theta$
of one another. Therefore, by adding a large enough (but finite) number of Poisson events on each edge (appropriately ordered as before), we can bring the final outcomes arbitrarily close, say within distance $\tfrac{r}{4}$, to the average of the initial values, which we denote by $\overline{x}$. From the
properties of the chosen building blocks, $\chi_1$ and $\chi_2$, it readily follows
that the initial average is at distance at most $\tfrac{r}{4}$ from $\tfrac{m}{m+n}\,y+\tfrac{n}{m+n}\,z$.
This entails for every vertex $v$ of the finite configuration
\begin{align*}
\n{\eta_N(v)-(\alpha y+(1-\alpha)z)}&\leq\tfrac{r}{4}+\n{\overline{x}-(\alpha y+(1-\alpha)z)}\\
&\hspace{-2cm}\leq\tfrac{r}{4}+\tfrac{r}{4}+\n{(\tfrac{m}{m+n}\,y+\tfrac{n}{m+n}\,z)-(\alpha y+(1-\alpha)z)}\\
&\hspace{-2cm}\leq\tfrac{r}{2}+|\tfrac{m}{m+n}-\alpha|\cdot\n{y}+|\alpha-\tfrac{m}{m+n}|\cdot\n{z}\\
&\hspace{-2cm}\leq r,
\end{align*}
which shows $\alpha y+(1-\alpha)z\in\D_\theta(\nu)$.
\item By Lemma \ref{properties} it is enough to show $\D_{2R}(\nu)\supseteq\conv(\supp(\eta_0))$. Thus, letting
$x,y\in\supp(\eta_0)\subseteq B[\E\eta_0,R]$, we have to show that $\conv(\{x,y\})\subseteq\D_{2R}(\nu)$.
But since $\n{x-y}$ can be at most $2R$, this is done as described in Example \ref{jump}, with the line segment
$\conv(\{x,y\})$ now playing the role of the interval considered there.
\item First of all, the connected components of $\D_\theta(\nu)$ are actually path-connected and moreover the paths
can be chosen to be polygonal chains: Assume that a connected component $C$ contains more than one path-connected
component. Fix one such, say $C_1$. Due to connectedness of $C$, a second one $C_2$ must exist s.t.\ the
Euclidean distance between $C_1$ and $C_2$ is $0$. But part (a) then implies that also $C_1\cup C_2$ is
path-connected, a contradiction.
Moreover, using the statement of part (a) we can transform any curve in $\D_\theta(\nu)$ to a polygonal
chain which completely lies in $\D_\theta(\nu)$.
Let us turn to the convexity of connected components. Fix a component $C$ of $\D_\theta(\nu)$ and $x,y\in C$,
s.t.\ $\n{x-y}\geq\theta$, since otherwise (a) guarantees
$$\conv(\{x,y\})=\{\alpha x+(1-\alpha)y,\;\alpha\in[0,1]\}\subseteq C.$$
By the above, there exists a polygonal chain in $\D_\theta(\nu)$, say
$$l:=\begin{cases}[0,1]\to\R^k\\s\mapsto l(s)\end{cases}$$
such that $l(0)=x,\ l(1)=y$ and $l$ is continuous and piecewise linear.
Let us define $x_0=x$ and, as long as $\n{x_j-y}\geq\theta$, set $x_{j+1}=l(s_j)$ with
$s_j:=\max\{s\in[0,1],\;\n{x_j-l(s)}=\tfrac{\theta}{2}\}$; once $\n{x_j-y}<\theta$, set $x_{j+1}=y$. Using (a) and these intermediate points shows that we can assume
without loss of generality a certain sparseness of the chain, namely that its intermediate
points $x_1,\dots,x_n$ are s.t.\ pairwise distances in $\{x=x_0,x_1,\dots,x_n,x_{n+1}=y\}$ are at least
$\tfrac{\theta}{2}$ and hence $n\leq\tfrac{2L}{\theta}$, where $L$ denotes the length of the original chain.
Note that the modification of the polygonal chain as just described will only decrease its length.
Given a polygonal chain in $\D_\theta(\nu)$ connecting $x$ and $y$, let us assume that the minimal
angle at an intermediate point is $\pi-2\alpha<\pi$, attained at $x_j$.
Considering $B[x_j,\tfrac{\theta}{2}]$ and using (a)
once more, we can replace $x_j$ by the two intersection points $x_j^{(1)},x_j^{(2)}$ of the ball's
boundary and the chain, and conclude that the polygonal chain through the points
$x,x_1,\dots,x_{j-1},x_j^{(1)},x_j^{(2)},x_{j+1},\dots,x_n,y$ still lies in $\D_\theta(\nu)$ and is
shorter by at least $\theta\cdot(1-\cos(\alpha))$.
We can then sparsify the updated chain as described above and denote the result by $l_1$.
Iterating the whole procedure gives a sequence $(l_m)_{m\in\N}$ of shorter and shorter polygonal chains in
$\D_\theta(\nu)$ connecting $x$ and $y$. Since the length is bounded below by $\n{x-y}$, the interior angles
must approach $\pi$ uniformly. Let $\pi-2\alpha_1,\dots,\pi-2\alpha_n$ be the angles at $x_1,\dots,x_n$.
An easy geometric argument yields that all points on the chain are at distance at most
$$\sum_{j=1}^n\tan(2\alpha_1+\dots+2\alpha_j)\,L\leq \tfrac{8nL}{\pi}\sum_{j=1}^n\alpha_j\leq
\tfrac{16L^2}{\pi\theta}\sum_{j=1}^n\alpha_j$$
from the line through $x$ and $x_1$, if $\sum_{j=1}^n\alpha_j\leq\tfrac{\pi}{8}$, as $\tan(z)\leq \tfrac{4}{\pi}z$
for all $z\in[0,\tfrac{\pi}{4}]$. This also holds for the endpoint $y$, which is why the maximal distance
of a point on the chain to the line segment between $x$ and $y$ is bounded by
$\tfrac{32L^2}{\pi\theta}\sum_{j=1}^n\alpha_j$.
Let $n_m$ and $(\alpha_j^{(m)})_{j=1}^{n_m}$ correspond to $l_m$. Then
$$\sum_{j=1}^{n_m}\alpha_j^{(m)}\leq\tfrac{2L}{\theta}\max_{1\leq j\leq n_m}\alpha_j^{(m)}
\stackrel{m\to\infty}{\longrightarrow} 0$$
implies that the sequence $(l_m)_{m\in\N}$ must approach the line segment between $x$ and $y$, i.e.\
$\conv(\{x,y\})=\{\alpha x+(1-\alpha)y,\;\alpha\in[0,1]\}$, uniformly -- in the sense that
$$\max_{s\in l_m}\min_{z\in\conv(\{x,y\})}\n{s-z}\to 0\quad\text{as }m\to\infty.$$
Since $C$, being a connected component of the closed set $\D_\theta(\nu)$, is closed, we find
$\conv(\{x,y\})\subseteq C$, which proves the convexity of $C$.
If there were two points in different connected components, say $x\in C_1$ and $y\in C_2$, with
$\n{x-y}<\theta$, part (a) would imply, as before, that $C_1\cup C_2$ is connected -- a contradiction;
hence distinct components are at distance at least $\theta$. Finally, if $\D_\theta(\nu)$ is
connected, what we just proved shows that it is convex. Being a closed convex superset of $\supp(\eta_0)$, it satisfies
$$\overline{\conv(\supp(\eta_0))}\subseteq\D_\theta(\nu),$$
which by Lemma \ref{properties} is all that needed to be shown.
\item Let us now assume that $\nu$ has not only a finite radius but also mass around its mean, that is
$\E\eta_0\in \supp(\eta_0)$. For $\theta>R$, $\D_\theta(\nu)$ is then
connected, which by part (c) implies the claim. Indeed, let $\epsilon\in(0,\theta-R)$ and choose a point $x$
in $B[\E\eta_0,\epsilon]\cap\supp(\eta_0)$. By the choice of $\epsilon$, all points in $B[\E\eta_0,R]$
are at distance less than $\theta$ from $x$, which by the reasoning in part (a) and
$\D_\theta(\nu)\subseteq B[\E\eta_0,R]$ (see Lemma \ref{properties}) implies
$\conv(\{x,y\})\subseteq \D_\theta(\nu)$ for all $y\in\D_\theta(\nu)$, hence the connectedness of
$\D_\theta(\nu)$.
\item The first thing to notice is that, given $R<\infty$, for all $\theta>0$ the set $\D_\theta(\nu)$ has
finitely many connected components. Indeed, choose a point $x_i$ in each component; by (c) the open balls
$B(x_i,\tfrac{\theta}{2})$ must be disjoint, and they lie within $B(\E\eta_0,R+\tfrac{\theta}{2})$.
Comparing volumes, there can't be more than $(\tfrac{2R+\theta}{\theta})^k$ of them.
Let $C_1,\dots,C_n$ be the connected components of $\D_\delta(\nu)$, for some $\delta>0$, and
$d\geq\delta$ the minimal distance between them. When $\theta$ is made larger than $d$, at least two of
the components merge. Hence there can be only $n-1$ further jumps. For $\delta\leq\theta<d$ we have
$\D_\theta(\nu)=\D_\delta(\nu)$.
\item Let us assume the contrary, i.e.\ $\E\eta_0\notin \D_\theta(\nu)$. As this set is closed, there
exists some $y\in\D_\theta(\nu)$ such that the Euclidean distance from $\E\eta_0$ to $\D_\theta(\nu)$
is given by $\n{\E\eta_0-y}>0$.
Choosing $x:=\tfrac12(\E\eta_0+y)$ and using the convexity of $\D_\theta(\nu)$
-- if there existed $z\in\D_\theta(\nu)$ such that $(z-y)\cdot(x-y)>0$, then $y$ would not be closest to $\E\eta_0$
in $\D_\theta(\nu)$ --
as well as $\supp(\eta_0)\subseteq \D_\theta(\nu)$, we find
$$\E\big((\eta_0-x)\cdot(y-x)\big)>0\quad\text{but}\quad(\E\eta_0-x)\cdot(y-x)<0,$$
a contradiction. \end{enumerate} \end{proof}
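The blockwise mixing construction used in part (a) can also be observed numerically. The following Python sketch (illustrative only, not part of the paper; it assumes the symmetric compromise parameter $\mu=\tfrac12$, and \texttt{deffuant\_sweeps} is a hypothetical helper) places $m$ agents at $y$ next to $n$ agents at $z$ on a finite line graph, with $\n{y-z}\leq\theta$, and sweeps the edges until the values settle near the weighted average $\tfrac{m}{m+n}\,y+\tfrac{n}{m+n}\,z$:

```python
# Sketch (assumption: symmetric compromise mu = 1/2): finite Deffuant
# dynamics on a line graph, illustrating the mixing argument of part (a).

def deffuant_sweeps(values, theta, mu=0.5, sweeps=200):
    """Repeatedly update every edge <i, i+1> of the line graph in order.

    An update replaces (x, y) by (x + mu*(y - x), y + mu*(x - y))
    whenever |x - y| <= theta; otherwise the edge is blocked.
    """
    vals = list(values)
    for _ in range(sweeps):
        for i in range(len(vals) - 1):
            x, y = vals[i], vals[i + 1]
            if abs(x - y) <= theta:
                vals[i] = x + mu * (y - x)
                vals[i + 1] = y + mu * (x - y)
    return vals

# m copies of opinion y = 0.0 next to n copies of z = 1.0 (distance <= theta):
m, n, theta = 2, 3, 1.0
final = deffuant_sweeps([0.0] * m + [1.0] * n, theta)
target = n / (m + n)   # the weighted average m/(m+n)*y + n/(m+n)*z = 3/5
print(all(abs(v - target) < 1e-6 for v in final))  # True
```

Since each update preserves the sum of the two involved opinions, the consensus value is forced to be the average of the initial values, exactly as exploited in the proof.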
\begin{example}
\begin{enumerate}[(a)]
\item
To get an impression of how $\D_\theta(\nu)$ grows with $\theta$, let us consider the initial distribution
on $\R^3$ given by $\text{\upshape unif}(\{(2,1,0),(2,-1,0),(-2,0,1),(-2,0,-1)\})$, i.e.\ featuring four point
masses at the given vertices. It is easy to check that $\E\eta_0=(0,0,0)$ and $R=\sqrt{5}$, see Figure \ref{convex}.
Since all pairwise distances are at least $2$, $\D_\theta(\nu)=\supp(\eta_0)$ for $\theta<2$.
For $\theta\geq2$ the opinion values $(2,1,0)$ and $(2,-1,0)$ can compromise, and so can
$(-2,0,1)$ and $(-2,0,-1)$. This implies that $\D_\theta(\nu)$ contains both line segments
$\{(2,\alpha,0),\;\alpha\in[-1,1]\}$ and $\{(-2,0,\alpha),\;\alpha\in[-1,1]\}$.
The latter are at distance $4$, hence we can conclude
$$\D_\theta(\nu)=\begin{cases}
\{(2,1,0),(2,-1,0),(-2,0,1),(-2,0,-1)\},&\!\text{for }\theta<2\\
\{(2,\alpha,0),(-2,0,\alpha),\;\alpha\in[-1,1]\},&\!\text{for }\theta\in[2,4)\\
\conv(\{(2,1,0),(2,-1,0),(-2,0,1),(-2,0,-1)\}),&\!\text{for }\theta>4.
\end{cases}$$
\begin{figure}
\caption{$\D_\theta(\nu)$ for $\eta_0$ being uniformly distributed on the set\\
$\{(2,1,0),(2,-1,0),(-2,0,1),(-2,0,-1)\}$, evolving with growing $\theta$.}
\label{convex}
\end{figure}
\noindent
For $\theta=4$, the answer depends on whether the values $(-2,0,0)$ and $(2,0,0)$ can be achieved or merely approximated
by finite configurations, in other words on $\mu$ (see also Example \ref{mu}). Note how $\D_\theta(\nu)$ grows
by forming local convex hulls.
If we choose $\text{\upshape unif}(\{(0.99,1,0),(0.99,-1,0),(-0.99,0,1),(-0.99,0,-1)\})$ to be the initial
distribution instead, we can observe a certain chain-reaction effect: $\theta\geq2$ brings the point masses
pairwise within the confidence bound as before, but this time their convex hulls as well. So for this distribution
$\nu$ we find
$$\D_\theta(\nu)=\begin{cases}
\supp(\eta_0),&\text{for }\theta<2\\
\conv(\supp(\eta_0)),&\text{for }\theta\geq2.
\end{cases}$$
\item
Example \ref{jump} already shows that the mapping $\vartheta\mapsto \D_\vartheta(\nu)$ can have infinitely
(but still countably) many jumps on $(0,\infty)$. Taking the discrete initial distribution given by
$$\Prob(\eta_0=2^n)=\tfrac{1}{3^{n}}\text{ and }\Prob(\eta_0=-2^n)=\tfrac{1}{3^{n}},\text{ for }n\in\N,$$
shows that part (e) of Lemma \ref{circle} doesn't hold in the case $R=\infty$, even under the
weaker condition that $\E\eta_0$ is finite.
\item
Coming back to the example mentioned above, where $\eta_0\sim\text{\upshape unif}(S^{k-1})$ for some $k\geq 2$, it is
not hard to see that $\D_\theta(\nu)=B[\mathbf{0},1]$ for all $\theta>0$. Indeed, since
$\supp(\eta_0)=S^{k-1}$ is connected and $\supp(\eta_0)\subseteq\D_\theta(\nu)$, it has to be contained in a
connected component of $\D_\theta(\nu)$. All such are convex by Lemma \ref{circle}, hence
$\conv(S^{k-1})=B[\mathbf{0},1]\subseteq\D_\theta(\nu)$. The reverse inclusion follows directly from part (b)
of Lemma \ref{properties}. \end{enumerate} \end{example}
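The stated quantities in part (a) are easy to verify by hand; for the record, a short numeric check (illustrative only, not part of the argument) of the mean, the radius $R$ and the smallest pairwise distance of the four point masses:

```python
# Illustrative numeric check for part (a) of the example:
# mean, radius R and pairwise distances of the four point masses.
from itertools import combinations
from math import dist, sqrt

points = [(2, 1, 0), (2, -1, 0), (-2, 0, 1), (-2, 0, -1)]

mean = tuple(sum(c) / len(points) for c in zip(*points))   # expectation of eta_0
R = max(dist(p, mean) for p in points)                     # radius of the support
gaps = sorted(dist(p, q) for p, q in combinations(points, 2))

print(mean)                      # (0.0, 0.0, 0.0)
print(abs(R - sqrt(5)) < 1e-12)  # True: R = sqrt(5)
print(gaps[0])                   # 2.0, the smallest pairwise distance
```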
\begin{definition}\label{suppdef} For $\theta>0$ and $t\geq0$, let the {\em support of the distribution of $\eta_t$} be denoted by $\supp_\theta(\eta_t)$. \end{definition}
\noindent The support of $\eta_t$ evidently depends on $\theta$. However, for $t=0$ it holds that $\supp_\theta(\eta_0)=\supp(\eta_0)$ irrespective of $\theta$, as the dynamics of the model is not yet involved. Note that at values of $\theta$ where $\D_\theta(\nu)$ increases, $\supp_\theta(\eta_t)$ can actually depend on $\mu$ as well, see Example \ref{mu} below. Let us next derive properties of $\supp_\theta(\eta_t)$ similar to those of $\D_\theta(\nu)$. \begin{lemma}\label{suppt}
\begin{enumerate}[(a)]
\item For $0<s<t$ we get $\supp_\theta(\eta_s)=\supp_\theta(\eta_t)$.
\item $\supp_\theta(\eta_t)$ increases with $\theta$ and for all $\theta>0$:
$$\supp(\eta_0)\subseteq \supp_\theta(\eta_t)\subseteq\overline{\conv(\supp(\eta_0))}\subseteq B[\E\eta_0,R].$$
\end{enumerate} \end{lemma}
\begin{proof}
\begin{enumerate}[(a)]
\item $\supp_\theta(\eta_s)\subseteq\supp_\theta(\eta_t)$ readily follows from the fact that, for every set $A$,
$\Prob(\eta_s(v)\in A)>0$ implies $\Prob(\eta_t(v)\in A)>0$: with positive probability
there won't be any Poisson events on the edges $\langle v-1,v\rangle$ and $\langle v,v+1\rangle$ in the time
interval $[s,t]$, forcing $\eta_s(v)=\eta_t(v)$.
But the reverse inclusion is also true. To see this we will locally modify the configuration:
$x\in\supp_\theta(\eta_t)$ if and only if for all $r>0$ there exists some $n\in\N$ such that the following
event has positive probability: $\eta_t(0)\in B[x,r]$ and at least one of the edges $\langle -n,-n+1\rangle,
\dots,\langle -1,0\rangle$ as well as at least one of $\langle 0,1\rangle,\dots,\langle n-1,n\rangle$ has not
experienced any Poisson event up to time $t$. Moreover, the event that the Poisson events occurring on
$\langle -n,-n+1\rangle,\dots,\langle n-1,n\rangle$ up to time $t$ already occur, in the same order, up to time $s$
(and that no further events occur) also has positive probability. Since the Poisson events are independent of
the starting configuration, such a modification of the interactions shows $\Prob(\eta_s(0)\in B[x,r])>0$.
\item To prove the monotonicity in $\theta$, we will dissect the event described in part (a) a little more closely.
For $x\in\supp_\vartheta(\eta_t)$ and $r>0$, let us consider the event that $\eta_t(0)\in B[x,r]$ and at least one
of the edges between $-n$ and $0$ as well as between $0$ and $n$ has not experienced any Poisson event up to time
$t$. For sufficiently large $n$ this has positive probability as mentioned before. Fix $n$ to be large enough in this
respect and denote the corresponding event by $A$.
Let again $(e_i)_{i=1}^N$ encode the chronologically ordered locations of the random but finite number of
Poisson events occurring up to time $t$ on the edge set $\langle -n,-n+1\rangle,\dots,\langle n-1,n\rangle$.
Further, let $(e_{i_j})_{j=1}^{N'}$ be the subsequence of $(e_i)_{i=1}^N$ which contains only those edges on which a
difference exceeding the confidence bound prevented the occurring Poisson event from invoking an actual update
of opinions.
Since $N$ is a.s.\ finite and, for fixed $N\in\N$, there are only finitely many choices for the sequence
$(e_i)_{i=1}^N$ and its corresponding subsequence, we can partition the event $A$ into $\{A_m,\;m\in\N\}$
according to the different choices of $(e_i)$ and $(e_{i_j})$. Note that for two subsequences to be considered equal, not only
their length and ordered elements must coincide, but also the set of indices $\{i_j,\;1\leq j\leq N'\}$ has to
be identical. From $\Prob(A)>0$ we can conclude that there must be some $A_m$ which has positive probability.
In other words, there exists a set $C\subseteq(\R^k)^{2n-1}$ s.t.\
$$\Prob\big((\eta_0(v))_{v=-n+1}^{n-1}\in C\big)>0$$
and given a starting configuration in $C$, Poisson events on the edges given by the fixed sequence $(e_i)_{i=1}^N$
corresponding to $A_m$ will ensure, in the Deffuant model with confidence bound $\vartheta$, that the final value
at $0$ is in $B[x,r]$.
Let $B$ be the event that the locations of all Poisson events on the edge set
$\{\langle -n,-n+1\rangle,\dots,\langle n-1,n\rangle\}$ up to $t$ are given by the subsequence of $(e_i)_{i=1}^N$
which is obtained by removing the elements of $(e_{i_j})$.
Given $B$ and $\{(\eta_0(v))_{v=-n+1}^{n-1}\in C\}$, the dynamics of the Deffuant model with confidence
bounds $\vartheta$ and $\theta\geq\vartheta$ respectively will coincide up to time $t$ between the two edges
without Poisson events shielding $0$ from $-n$ and $n$. Since $B$ has positive probability and the Poisson
events are independent of $\{(\eta_0(v))_{v=-n+1}^{n-1}\in C\}$ this implies that $x\in\supp_\vartheta(\eta_t)$
forces $x\in\supp_{\theta}(\eta_t)$ for all $\theta\geq\vartheta$, hence the claimed monotonicity.
When it comes to the second statement, the first inclusion was actually proved in (a) as the argument
used in order to show $\supp_\theta(\eta_s)\subseteq\supp_\theta(\eta_t)$ is also valid for $s=0$.
The second and third inclusion can be verified as in part (b) of Lemma \ref{properties}.
\end{enumerate} \end{proof}
\noindent The following proposition reveals how the set $\D_\theta(\nu)$ comes into play in the analysis of the long-term behavior of the Deffuant model. \begin{proposition}\label{supp_t}
If $\vartheta\mapsto \D_\vartheta(\nu)$ has no jump in $[\theta-\epsilon,\theta+\epsilon]$ for fixed
$\theta$ and some $\epsilon>0$, the following equality holds true for all $t>0$:
$$\supp_{\theta}(\eta_t)=\D_\theta(\nu).$$ \end{proposition}
\begin{proof} Before proving this result, we want to mention that the continuity assumption can be weakened when $R<\infty$: if $\vartheta\mapsto\D_\vartheta(\nu)$ has no jump at $\theta$, part (e) of Lemma \ref{circle} already implies that $\D_\vartheta(\nu)$ is constant on an interval $[\theta-\epsilon,\theta+\epsilon]$ for suitably small $\epsilon>0$.
Let us first focus on the inclusion $\supp_{\theta}(\eta_t)\supseteq\D_\theta(\nu)$. For every fixed $x$ in $\D_\theta(\nu)=\D_{\theta-\epsilon}(\nu)$ and all $r>0$, there exists a finite configuration with $n\in\N$, $x_1,\dots,x_n\in\supp(\eta_0)$ and edge sequence $(e_i)_{i=1}^N$, s.t.\ updating the configuration with respect to the confidence bound $\theta-\epsilon$ yields $\eta_N(v)\in B[x,r]$ for all $v\in\{1,\dots,n\}$. Let further $t>0$ be fixed. Due to $x_v\in\supp(\eta_0)$, we get $\Prob(\eta_0\in B[x_v,\epsilon])>0$.
Consequently, in the Deffuant model on $\Z$ the following event has positive probability: $\eta_0(v)\in B[x_v,\epsilon]$ for all $v\in\{1,\dots,n\}$, up to time $t$ Poisson events have occurred on neither $\langle0,1\rangle$ nor $\langle n,n+1\rangle$ and the locations of the events on $\langle1,2\rangle,\dots,\langle n-1,n\rangle$ are chronologically ordered given by $(e_i)_{i=1}^N$. Note that every Poisson event which leads to an update in the given finite configuration does the same in this configuration of the whole model with respect to parameter $\theta$, as the margins coming from slightly altered initial values are convex combinations of the initial margins $\eta_0(v)-x_v$ and thus always bounded by $\epsilon$. This shows $\Prob(\eta_t(1)\in B[x,r+\epsilon])>0$, hence $x\in \supp_{\theta}(\eta_t)$.
When it comes to the reverse inclusion, consider again the Deffuant model with confidence bound $\theta$. By definition, $x\in\supp_{\theta}(\eta_t)$ if and only if for all $r>0:$ $\Prob(\eta_t(v)\in B[x,r])>0$. But every such value $\eta_t(v)$ is formed by (finitely many) convex combinations starting from a finite collection of initial values $\{\eta_0(u)\}_{u=v-k}^{v+l}$. Part (a) of Lemma \ref{circle} shows that $\eta_{s-}(u),\eta_{s-}(v)\in\D_{\theta+\epsilon}(\nu)$ implies $\eta_{s}(u),\eta_{s}(v)\in\D_{\theta+\epsilon}(\nu)$ after an update along the edge $\langle u,v\rangle$ at time $s$, since this can only occur if the former are at distance less than or equal to $\theta$. Thus, due to $\{\eta_0(u)\}_{u=v-k}^{v+l}\subseteq\supp(\eta_0)\subseteq\D_{\theta+\epsilon}(\nu)$, an inductive argument verifies $\eta_t(v)\in\D_{\theta+\epsilon}(\nu)$ and hence $$\supp_{\theta}(\eta_t)\subseteq\overline{\D_{\theta+\epsilon}(\nu)}=\D_{\theta+\epsilon}(\nu)=\D_\theta(\nu).
$$ \end{proof} \medskip\noindent Note that if $\vartheta\mapsto \D_\vartheta(\nu)$ has a jump at $\theta$, the subtle issue with critical compromises, as considered in Proposition \ref{crit}, reappears. To make this point clear, let us consider the initial distribution $\nu=\text{\upshape unif}(\{\tfrac14,\tfrac34\})$, for which we find $$\D_{\tfrac12}(\nu)=\supp_{\tfrac12}(\eta_t)=[\tfrac14,\tfrac34].$$ Taking $\eta_0\sim\text{\upshape unif}\big([0,\tfrac14]\cup[\tfrac34,1]\big)$ instead yields $$[0,1]=\D_{\tfrac12}(\nu)\supsetneq\supp_{\tfrac12}(\eta_t)=[0,\tfrac14]\cup[\tfrac34,1].$$
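The contrast between these two distributions shows up already at the level of a single update: at confidence bound $\theta=\tfrac12$, the point masses $\tfrac14$ and $\tfrac34$ sit at distance exactly $\theta$ and can still compromise, whereas pairs such as $0.1$ and $0.9$ from the continuous distribution are strictly further apart and stay blocked. A minimal Python sketch (illustrative only; it assumes the symmetric compromise $\mu=\tfrac12$ and that an update fires when the distance is at most $\theta$, as used in the proof above):

```python
# Toy illustration of the critical-compromise issue at a jump of
# theta -> D_theta(nu).  Assumption: symmetric compromise mu = 1/2.
def update(x, y, theta, mu=0.5):
    """One Deffuant update along an edge: compromise iff |x - y| <= theta."""
    if abs(x - y) <= theta:
        return x + mu * (y - x), y + mu * (x - y)
    return x, y  # blocked edge

theta = 0.5
print(update(0.25, 0.75, theta))  # (0.5, 0.5): distance exactly theta, update fires
print(update(0.10, 0.90, theta))  # (0.1, 0.9): distance 0.8 > theta, blocked
```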
\begin{definition}\label{gap} Given an initial distribution $\mathcal{L}(\eta_0)=\nu$, define the length of the {\em largest gap} in its support as $$h:=\inf\{\theta>0,\;\D_\theta(\nu)\text{ is connected}\}.$$ \end{definition} Following this definition we get $h=0$ for $\nu=\text{\upshape unif}(S^{k-1})$ and $k\geq2$, but $h=2$ for $\nu=\text{\upshape unif}(S^0)$. Considering the other two distributions appearing in the above example, we observe that $\text{\upshape unif}(\{(2,1,0),(2,-1,0),(-2,0,1),(-2,0,-1)\})$ has $h=4$, while $\text{\upshape unif}(\{(0.99,1,0),(0.99,-1,0),(-0.99,0,1),(-0.99,0,-1)\})$ has $h=2$. In addition, parts (b) and (d) of Lemma \ref{circle} tell us that $h\leq 2R$ if $R$ is finite and $h\leq R$ if additionally $\E\eta_0\in\supp(\eta_0)$.
Having generalized the notion of a gap in a distribution on $\R$ to higher dimensions finally allows us to formulate and prove a result corresponding to the cases of Theorem \ref{gen} that were omitted by Theorem \ref{nogap}.
\begin{theorem}\label{gapsEucl} Consider the Deffuant model on $\Z$ with an initial distribution on $(\R^k,\n{\,.\,})$ that is bounded, i.e.\ $$R=\inf\left\{r>0,\;\Prob\big(\eta_0\in B[\E\eta_0,r]\big)=1\right\}<\infty,$$ and $h$ being the length of the largest gap in its support. Then the critical value for the confidence bound, where a phase transition from a.s.\ no consensus to a.s.\ strong consensus takes place is $\theta_\text{\upshape c}=\max\{R,h\}$. \end{theorem}
\begin{proof} Having analyzed the qualitative differences invoked by higher-dimen\-sional opinion values, the proof of this theorem is to a large extent similar to the one of part (a) of Thm.\ 2.2 in \cite{Deffuant}, which is Theorem \ref{gen} in the foregoing section. Let us consider the following three scenarios:
\begin{enumerate}[(i)] \item {\em For $\theta<h$ we cannot have consensus:}\\ By definition of $h$ the set $\D_{\theta+\epsilon}(\nu)$ is not connected for $\epsilon>0$ sufficiently small; by Lemma \ref{circle} (e) we can choose $\epsilon$ such that $\vartheta\mapsto \D_\vartheta(\nu)$ has no jump at $\theta+\epsilon$ and thus (by Proposition \ref{supp_t}) get $\D_{\theta+\epsilon}(\nu)=\supp_{\theta+\epsilon}(\eta_t)$ for all $t>0$. In addition, Lemma \ref{circle} (c) tells us that there exist two connected components, say $C_1$ and $C_2$, both being convex and at distance at least $\theta+\epsilon$ from the corresponding complementary part of $\supp_{\theta+\epsilon}(\eta_t)$, i.e.\ $\n{x-y}\geq\theta+\epsilon$ for all $x\in C_i, y\in \supp_{\theta+\epsilon}(\eta_t)\setminus C_i$ and $i=1,2$.
By Lemma \ref{suppt} we know that $\supp(\eta_0)\subseteq\supp_{\theta}(\eta_t)\subseteq\supp_{\theta+\epsilon}(\eta_t)$. In the Deffuant model with confidence bound $\theta$ opinions in $C_1$ cannot compromise with opinions in $\supp_{\theta}(\eta_t)\setminus C_1\subseteq\supp_{\theta+\epsilon}(\eta_t)\setminus C_1$ and thus never leave the convex set $C_1$. The same holds for $C_2$.
Consequently, $\Prob(\eta_0(v)\in C_i)=\Prob(\eta_t(v)\in C_i)>0$, for $i=1,2$. For a fixed vertex $v$, it follows from the independence of initial opinions that $\Prob(\eta_0(v)\in C_1,\eta_0(v+1)\in C_2)>0$, which dooms the edge $\langle v, v+1\rangle$ to be blocked for all $t\geq0$, due to $\n{\eta_t(v)-\eta_t(v+1)}\geq\theta+\epsilon$. Ergodicity of the initial configuration ensures that a.s.\ infinitely many neighboring vertices will be prevented from compromising by holding opinions in $C_1$ and $C_2$ respectively, hence no consensus in the long run.
\item {\em For $\theta<R$ we cannot have consensus:}\\
Given $\theta<R$, there exists some $y\in\supp(\eta_0)\setminus B[\E\eta_0,\theta+2\epsilon]$ for fixed
$\epsilon\in\big(0,\tfrac{R-\theta}{2}\big)$. Choose $z$ to be the point on the line segment connecting
$\E\eta_0$ and $y$ which has Euclidean distance $\epsilon$ to $\E\eta_0$, see the picture to the right.
With help of this point, define the half-space $H:=\{x\in\R^k,\; (x-z)\cdot(y-z)\leq0\}$.
Clearly, $B[\E\eta_0,\epsilon]\subseteq H$ and, by the same argument as in part (e) of
Lemma \ref{circle}, $\Prob(\eta_0\in H)>0$, as the contrary would imply
$$\E[(\eta_0-z)\cdot(y-z)]>0>(\E\eta_0-z)\cdot(y-z),$$
a contradiction.
Using this auxiliary construction, we can finish the proof of this subcase following the argument in the proof
of Theorem \ref{gen} (b), see Thm.\ 2.2 in \cite{Deffuant}. As the distribution is bounded, the SLLN states
\begin{equation}\label{SLLN}
\Prob\left(\lim_{n\to\infty}\frac{1}{n}\sum_{u=v+1}^{v+n}\eta_0(u)=\E\eta_0\right)=1.
\end{equation}
Consequently, for sufficiently large $N\in\N$ the following event has non-zero probability:
\begin{equation*}
A_N:=\left\{\frac{1}{n}\sum_{u=v+1}^{v+n}\eta_0(u)\in H\text{ for all }n\geq N\right\}.
\end{equation*}
Let $\xi$ denote the (real-valued) distribution of $(\eta_0-z)\cdot(y-z)$ and $\xi|_{(-\infty,0]}$ its
distribution conditioned on the event $\{(\eta_0-z)\cdot(y-z)\leq0\}=\{\eta_0\in H\}$. Obviously,
$\xi|_{(-\infty,0]}$ is stochastically dominated by $\xi$, i.e.\ $\xi|_{(-\infty,0]}\preceq\xi$, which implies
$$\left(\bigotimes_{u=v+1}^{v+N}\xi|_{(-\infty,0]}\right)\otimes\left(\bigotimes_{u>v+N}\xi\right)
\preceq\bigotimes_{u\geq v+1}\xi .$$
Let $B$ be the event $\{\eta_0(v+1)\in H,\dots,\eta_0(v+N)\in H\}$, which has non-zero probability
by independence, and
$$A_1:=\left\{\frac{1}{n}\sum_{u=v+1}^{v+n}\eta_0(u)\in H\text{ for all }n\in\N\right\}.$$
Rewriting the event $A_N$ as
\begin{equation*}
A_N=\left\{\frac{1}{n}\sum_{u=v+1}^{v+n}\big(\eta_0(u)-z\big)\cdot\big(y-z\big)\leq0\text{ for all }n\geq N\right\},
\end{equation*}
the stochastic domination from above yields:
\begin{align*}
\Prob(A_1)&\geq\Prob(A_1\cap B)=\Prob(A_N\cap B)=\Prob(A_N|B)\cdot\Prob(B)\\
&\geq\Prob(A_N)\cdot\Prob(B)>0.
\end{align*}
The very same ideas as in the proof of Prop.\ 5.1 in \cite{ShareDrink} show that if $A_1$ occurs and the edge
$\langle v,v+1\rangle$ doesn't allow for an update up to time $t>0$, irrespectively of the dynamics on
$\{u\in\Z, u\geq v+1\}$, we get that $\eta_t(v+1)$ is a convex combination of the averages
$\{\frac{1}{n}\sum_{u=v+1}^{v+n}\eta_0(u),\; n\in\N\}$, hence in $H$ as the latter is convex.
By symmetry, the same holds for site $v-1$ and the half-line to the left, i.e.\ $\{u\in\Z, u\leq v-1\}$.
Independence of the initial opinions therefore guarantees that with positive probability, the initial
configuration can be such that $\eta_0(v)\in B(y,\epsilon)$ and the values at sites $v-1$ and $v+1$ are doomed
to stay in $H$, blocking the edges adjacent to $v$ once and for all, as the distance of $y$ to $H$ is at least
$\theta+\epsilon$. Ergodicity makes sure that with probability $1$ infinitely many sites will get stuck
this way.
\item {\em For $\theta>\max\{R,h\}$ we get a.s.\ strong consensus:}\\ Choose $\beta$ such that $0<\beta<\theta-\max\{R,h\}$. By definition of $h$ and Lemma \ref{circle} (e), $\E\eta_0\in\D_{\theta-\beta}(\nu)$. Because of that, for all $\epsilon>0$, there exists a finite configuration such that the final opinion values all lie in $B[\E\eta_0,\tfrac{\epsilon}{6}]$, i.e.\ $n\in\N$, $x_1,\dots,x_n\in\supp(\eta_0)$ and an edge sequence $(e_i)_{i=1}^N$ from $\{\langle1,2\rangle,\dots,\langle n-1,n\rangle\}$, s.t.\ updating the configuration with respect to the confidence bound $\theta-\beta$ yields $\eta_N(v)\in B[\E\eta_0,\tfrac{\epsilon}{6}]$ for all $v\in\{1,\dots,n\}$, see Definition \ref{Dtheta}. From this point on, we can go about as in step (ii) of the proof of Thm.\ 2.2 (a) in \cite{Deffuant}:
Let us consider some fixed time point $t>0$ and the corresponding configuration $\{\eta_t(v)\}_{v\in\Z}$.
With probability 1, there exists an infinite increasing sequence of not necessarily consecutive edges
$(\langle v_k,v_k+1\rangle)_{k\in\N}$ to the right of site $1$, on which no Poisson event has occurred up to
time $t$.
Let $l_k:=v_{k+1}-v_k,\text{ for } k\in\N,$ denote the random lengths of the intervals in between and
$l_0:=v_1-v_0+1$ the one of the interval including $1$, where $\langle v_0-1,v_0\rangle$ is the first
edge to the left of $1$ without Poisson event. Since the involved Poisson processes are independent,
it is easy to verify that the $l_k,\ k\in\N_0=\{0,1,2,\dots\}$, are i.i.d., having a geometric distribution
on $\N$ with parameter $\text{e}^{-t}$.
For $\delta>0$, let $A_\delta$ be the event that $l_0$ is finite and only finitely many of the events
$\{l_k\geq k\,\tfrac{\delta}{R}\},\ k\in\N,$ occur. Then their independence and the Borel-Cantelli lemma tell
us that $A_\delta$ has probability $1$. On $A_\delta$ however the following holds a.s.\ true:
\begin{align*}
\limsup_{v\to\infty}\Big\lVert\frac{1}{v}\sum_{u=1}^{v}\eta_t(u)-\E\eta_0\Big\rVert_2
&=\limsup_{v\to\infty}\Big\lVert\frac{1}{v}\sum_{u=1}^{v}\big(\eta_t(u)-\E\eta_0\big)\Big\rVert_2\\
&=\limsup_{v\to\infty}\Big\lVert\frac{1}{v}\sum_{u=v_0}^{v}\big(\eta_t(u)-\E\eta_0\big)\Big\rVert_2\\
&\leq\limsup_{v\to\infty}\Big\lVert\frac{1}{v}\sum_{u=v_0}^{v}\big(\eta_0(u)-\E\eta_0\big)\Big\rVert_2
+\delta\\
&=\limsup_{v\to\infty}\Big\lVert\frac{1}{v}\sum_{u=1}^{v}\big(\eta_0(u)-\E\eta_0\big)\Big\rVert_2
+\delta\\
&=\delta.
\end{align*}
The second and the second-to-last equalities follow from the finiteness of $v_0$, and the last equality from
the SLLN applied to the sequence $(\eta_0(u))_{u\geq1}$, stating
$$\lim_{v\to\infty}\frac{1}{v}\sum_{u=1}^{v}\eta_0(u)=\E\eta_0\text{ almost surely.}$$
The inequality is due to the fact that the Deffuant model is mass-preserving in the sense that
$\eta_t(u)+\eta_t(v)=\eta_{t-}(u)+\eta_{t-}(v)$ in (\ref{dynamics}), hence for all $k\in\N$:
$\sum_{u=v_0}^{v_k}\eta_0(u)=\sum_{u=v_0}^{v_k}\eta_t(u)$. For the average at time $t$ running from $v_0$
to some $v\in\{v_k+1,\dots, v_{k+1}\}$ to differ by more than $\delta$ from the one at time 0, the interval
has to be of length more than $k\,\tfrac{\delta}{R}$, since $v_k\geq k$ and $\n{\eta_t(u)-\E\eta_0}\in[0,R]$ for
all $t,u$. This, however, will happen only finitely many times.
Since $\delta>0$ was arbitrary, we have established that even for $t>0$
\begin{equation}\label{outerpart}
\lim_{v\to\infty}\frac{1}{v}\sum_{u=1}^{v}\eta_t(u)=\E\eta_0 \text{ almost surely.}
\end{equation}
Now we are going to use the finite configuration from above and a conditional version of the so-called
{\em local modification}, a technique often used in percolation theory.
Due to (\ref{outerpart}), there exists some integer $k$ s.t.\ the event
$$A:=\left\{\frac{1}{v}\sum_{u=1}^{v}\eta_t(u)\in B[\E\eta_0,\tfrac{\epsilon}{3}]\text{ for all }v\geq kn\right\}$$
has probability greater than $1-\text{e}^{-2t}$.
Let $B$ in turn be the event that there was no Poisson event on $\langle 0,1\rangle$ and
$\langle kn,kn+1\rangle$ up to time $t$, hence $\Prob(B)=\text{e}^{-2t}$. Finally, let $C$ be the event
that the initial values satisfy
$$\eta_0(ln+i)\in B[x_i,\min\{\beta,\tfrac{\epsilon}{6}\}],\text{ for all }0\leq l\leq k-1\text{ and }1\leq i\leq n,$$
and the Poisson firings on the edges $\langle 0,1\rangle,\dots,\langle kn,kn+1\rangle$ up to time $t$ are given
by a concatenation of the $k$ finite sequences given by shifting $(e_i)_{i=1}^N$ $ln$ vertices to the right,
$0\leq l\leq k-1$. In other words, up to time $t$ there are no Poisson events on the $k+1$ edges
$\{\langle0,1\rangle,\langle n,n+1\rangle,\dots,\langle kn,kn+1\rangle\}$ and the dynamics in the $k$ blocks
$\{ln+1,\dots,(l+1)n\}$ resembles the dynamics of the finite configuration, accordingly leading to
$\eta_t(v)\in B[\E\eta_0,\tfrac{\epsilon}{3}]$ for all $v\in\{1,\dots,kn\}$, see also the proof of
Proposition \ref{supp_t}. Note that $C$ has non-zero probability, $C\subseteq B$ and also $A\cap B$ has strictly
positive probability as $\Prob(A\cap B^\mathsf{c})\leq\Prob(B^\mathsf{c})=1-\text{e}^{-2t}<\Prob(A)$.
Consider two configurations $\{\eta_0'(v)\}_{v\in\Z}$ and $\{\eta_0''(v)\}_{v\in\Z}$, independent from each other and
having the same distribution as $\{\eta_0(v)\}_{v\in\Z}$ underlying the dynamics of the Deffuant model. Then also
the compound configuration
$$\tilde{\eta}_0(v)=\begin{cases}\eta_0'(v),&\text{for }v\in\{1,\dots,kn\}\\
\eta_0''(v),&\text{for }v\notin\{1,\dots,kn\}\end{cases}$$
has the i.i.d.\ distribution of the initial configuration.
With positive probability $A\cap B$ occurs for the initial configuration $\{\eta_0''(v)\}_{v\in\Z}$ and
$C$ for the initial configuration $\{\eta_0'(v)\}_{v\in\Z}$. The fact that $(\tilde{\eta}_s(v))_{v\in\Z}$
equals $\{\eta_s'(v)\}_{v\in\Z}$ on $\{1,\dots,kn\}$ and $\{\eta_s''(v)\}_{v\in\Z}$ outside $\{1,\dots,kn\}$ for
$s\in[0,t]$ given $B$, together with the independence of the involved building block configurations, shows
that with positive probability $A\cap B\cap C'$ holds for the configuration at time $t$, where
$$C'=\left\{\eta_t(v)\in B[\E\eta_0,\tfrac{\epsilon}{3}]\text{ for all }v\in\{1,\dots,kn\}\right\}.$$
An easy calculation reveals that $A\cap C'$ implies the $\epsilon$-flatness
to the right of site $1$ in the configuration at time $t$. By symmetry in left and right, the same holds true
for the site $0$ and $\epsilon$-flatness to the left with respect to the configuration $\{\eta_t(v)\}_{v\in\Z}$.
As the two parts $\{\eta_t(v)\}_{v\leq0}$ and $\{\eta_t(v)\}_{v\geq1}$ of the configuration at time $t$ are
conditionally independent given there was no Poisson event on the edge $\langle0,1\rangle$ up to time $t$, we
have actually shown that the origin is two-sidedly $\epsilon$-flat with respect to the configuration
$\{\eta_t(v)\}_{v\in\Z}$ with positive probability.
The supercritical case is now settled as in part (a) of Theorem \ref{nogap}. Following the reasoning of
Sect.\ 6 in \cite{ShareDrink}, the proof of La.\ 6.3 there tells us that a two-sidedly $\epsilon$-flat vertex will
never move further than $6\epsilon$ away from the mean and Prop.\ 6.1 guarantees that two neighbors will a.s.\
either finally concur or end up further than $\theta$ apart from each other. Choosing
$0<\epsilon<\tfrac{\theta-R}{6}$ the latter is impossible for vertices neighboring a two-sidedly $\epsilon$-flat
vertex, which means that they will a.s.\ finally concur and the same holds true for every vertex by induction.
Ergodicity of the setting at time $t$ guarantees that there will be a.s.\ (infinitely many) two-sidedly
$\epsilon$-flat vertices forcing almost sure strong consensus. \end{enumerate}\vspace*{-0.9em} \end{proof}
\begin{remark} It is worth emphasizing that only the support and expected value of a bounded initial distribution determine the critical value for $\theta$: As long as it does not affect the support, the dependence relations between the coordinates of the random vector $\eta_0$ do not influence the critical parameter $\theta_\text{\upshape c}$.
Furthermore, having proved this result for more general multivariate distributions, part (a) of Theorem \ref{nogap} becomes a special case of Theorem \ref{gapsEucl}, since using part (d) of Lemma \ref{circle} shows that the maximal gap in a distribution of $\eta_0$ with mass around its mean cannot be larger than its radius, i.e.\ $h\leq R$.\\[0.5em] \noindent Finally, the requirement that the initial opinions are independent is not as vital as it might seem. The independence was merely used to guarantee that we can locally modify initial configurations and still obtain events with positive probability. Consequently, the i.i.d.\ property can be replaced by the weaker condition that $\{\eta_0(v)\}_{v\in\Z}$ is a stationary sequence, ergodic with respect to shifts and allowing conditional probabilities such that the conditional distribution of $\eta_0(0)$ given $\{\eta_0(v)\}_{v\in\Z\setminus\{0\}}$ almost surely has the same support as the marginal distribution $\mathcal{L}(\eta_0)$, with the above conclusions remaining valid. This last condition is a natural extension to continuous state spaces of the well-known {\em finite energy condition} from percolation theory -- for a more detailed discussion of this extension to dependent initial opinions, see Sect.\ 2.2 in \cite{Deffuant}. \end{remark}
\begin{example}\label{sphere} \begin{enumerate}[(a)] \item With Theorem \ref{gapsEucl} in hand, we can finally settle the case of $\eta_0\sim\text{\upshape unif}(S^{k-1})$. Irrespective of $k$, this distribution has radius $R=1$, but for $k=1$ the maximal gap is $h=2$, while for $k>1$ it is $h=0$. By the above theorem, we can conclude $$\theta_\text{\upshape c}=\max\{R,h\}=\begin{cases}
2,&\text{for }k=1\\
1,&\text{for }k\geq2.
\end{cases}$$
In short, the fact that $S^{k-1}$ is disconnected for $k=1$ but connected for $k\geq2$ makes all the difference. \item If the random vector $\eta_0$ has independent coordinates, each being Bernoulli distributed with parameter $p\in(0,1)$, i.e.\ for all $1\leq i\leq k$ $$\Prob\big(\eta_0^{(i)}=1\big)=1-\Prob\big(\eta_0^{(i)}=0\big)=p,$$ its support is the hypercube $\{0,1\}^k$ and the expected value $\E\eta_0=p\,\mathbf{e}$, where $\mathbf{e}$ is the $k$-dimensional vector of all ones. The radius of this initial distribution is $R=\max\{\n{\E\eta_0-\mathbf{0}},\n{\E\eta_0-\mathbf{e}}\}=\sqrt{k}\,\max\{p,1-p\}.$ It is not hard to see that a distribution with the hypercube as its support has the maximal gap $h=1$. Indeed, for $\theta<1$ no two opinion values can interact, for $\theta>1$ all neighboring corners get within the confidence bound and their pairwise convex hulls form the edges of the hypercube, hence their union is a connected set giving $\D_\theta(\nu)=[0,1]^k$, for $\theta>1$, by means of Lemma \ref{circle}.
In conclusion, the Deffuant model with this initial distribution features the critical value $$\theta_\text{\upshape c}=\begin{cases}
1,&\text{for }k=1\text{ or }k=2,3 \text{ and }p\in[1-\tfrac{1}{\sqrt{k}},\tfrac{1}{\sqrt{k}}]\\
\sqrt{k}\,\max\{p,1-p\},&\text{for }k\geq4 \text{ or } k=2,3 \text{ and }
p\notin[1-\tfrac{1}{\sqrt{k}},\tfrac{1}{\sqrt{k}}].
\end{cases}$$ As stated in the above remark, the independence of the individual coordinates is not essential, as long as the support stays unchanged. A relation like $\eta_0^{(1)}=1-\eta_0^{(2)}$ in the Bernoulli example with parameter $p=\tfrac12$ however, will influence both $\supp(\eta_0)$ and as a consequence $\theta_\text{\upshape c}$ as well. \end{enumerate} \end{example}
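The arithmetic behind this critical value is easy to check by brute force. The following small Python sketch is purely illustrative (the helpers `radius_bernoulli` and `theta_c_bernoulli` are our own naming, not part of the formal argument); it computes the Euclidean radius of the Bernoulli hypercube distribution over all $2^k$ corners and combines it with the maximal gap $h=1$:

```python
import math

def radius_bernoulli(k, p):
    """Euclidean radius of the distribution with independent Bernoulli(p)
    coordinates: the largest distance from the mean p*(1,...,1) to a
    corner of the hypercube {0,1}^k, by brute force over all corners."""
    mean = [p] * k
    corners = ([(n >> i) & 1 for i in range(k)] for n in range(2 ** k))
    return max(math.dist(mean, c) for c in corners)

def theta_c_bernoulli(k, p):
    """Critical confidence bound max{R, h}, with maximal gap h = 1
    for the hypercube support (as derived in the example above)."""
    return max(radius_bernoulli(k, p), 1.0)

# the brute-force radius agrees with the closed form sqrt(k)*max{p, 1-p}
for k in (1, 2, 3, 4):
    for p in (0.2, 0.5, 0.9):
        closed_form = math.sqrt(k) * max(p, 1 - p)
        assert abs(radius_bernoulli(k, p) - closed_form) < 1e-9
```

For $k=2$ and $p=\tfrac12$ this yields $R=\tfrac{1}{\sqrt2}<1=h$, hence $\theta_\text{\upshape c}=1$, matching the first case of the displayed formula.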
\begin{example}\label{mu} There is one more crucial change when the opinions in the Deffuant model on $\Z$ are given by vectors instead of real numbers. The parameter $\mu$, shaping the size of compromising steps, which was of no particular interest so far, can actually play a crucial role in the critical case.
In order to verify this claim, let us consider the two-dimensional initial distribution given by $\text{\upshape unif}(\{(0,0),(1,0),(\tfrac{1}{\pi},1)\})$. Given $\theta=1$ we have $$[0,1]\times\{0\}\subseteq\supp_\theta(\eta_t)\text{ for all }t>0,$$ following the reasoning of Example \ref{jump}. But the point $(\tfrac{1}{\pi},0)$ can only be approximated, never attained by $\eta_t(v)$, if, for example, $\mu$ is rational. For $\mu=\tfrac{1}{\pi}$ on the other hand, $\eta_t(v)=(\tfrac{1}{\pi},0)$ with positive probability, which leads to $\supp(\eta_t)=\conv(\supp(\eta_0))$.
Note that for this distribution, we have $h=1>R$, since $\E\eta_0=\tfrac13\,(1+\tfrac{1}{\pi},1)$. Similarly to the proof of the above theorem, we can conclude that the Deffuant model on $\Z$ with confidence bound $\theta=\theta_\text{\upshape c}=1$ and this initial distribution approaches almost surely no consensus for $\mu\in(0,\tfrac12]\cap\mathbb{Q}$ and almost surely strong consensus for $\mu=\tfrac{1}{\pi}$:
If $\mu$ is rational, vertices holding the initial opinion $(\tfrac{1}{\pi},1)$ can never compromise with those holding an opinion $(a,0)$, since $a$ is rational and can therefore not equal $\tfrac{1}{\pi}$. Consequently, we will a.s.\ have no consensus due to blocked edges.
If $\mu=\tfrac{1}{\pi}$ however, we can come up with a finite configuration allowing for the local modification, which guarantees the existence of two-sidedly $\epsilon$-flat vertices. Actually $n=3$ is enough and $$x_1=(1,0),\ x_2=(0,0),\ x_3=(\tfrac{1}{\pi},1)$$ will be an appropriate choice of starting values, if the edge sequence $(e_i)_{i=1}^N$ begins with $e_1=\langle1,2\rangle,\ e_2=\langle2,3\rangle$, since that will bring the value at site $1$ to $(1-\tfrac{1}{\pi},0)$, the one at $2$ to $(\tfrac{1}{\pi},\tfrac{1}{\pi})$ and the one at $3$ to $(\tfrac{1}{\pi},1-\tfrac{1}{\pi})$, all lying in $B[\E\eta_0,\tfrac12]$, and thus their pairwise distances are all less than the confidence bound. If the edge sequence contains the edge pair $(\langle1,2\rangle,\langle2,3\rangle)$ enough times, the final values of the finite configuration will all lie at Euclidean distance at most $\tfrac\epsilon3$ from the initial average $\tfrac13(x_1+x_2+x_3)=\E\eta_0$ for any fixed $\epsilon>0$. Note that in the present case, when transforming the finite configuration into a part of the dynamics on the whole line graph, we don't have to worry about taking small balls around the initial values $x_i$ in order to get an event $C$ with positive probability, since the $x_i$ are atoms of the initial distribution. Taking small balls would actually invalidate the argument due to the fact that the parameter $\theta$ is pinned to the critical value $\theta_\text{\upshape c}=1$, not allowing for small marginals.\\[0.5em] \noindent Another fact that can be seen from this example is that the mapping $\vartheta\mapsto \D_\vartheta(\nu)$ does not have to be continuous from the right in the sense that $\D_\theta(\nu)=\bigcap_{\vartheta>\theta} \D_\vartheta(\nu)$. Given $\mu\in\mathbb{Q}$ we get for this initial distribution
$$\D_\theta(\nu)=\begin{cases}
\supp(\eta_0),&\text{for }\theta<1\\
[0,1]\times\{0\}\cup\{(\tfrac{1}{\pi},0)\},&\text{for }\theta=1\\
\conv(\supp(\eta_0)),&\text{for }\theta>1,
\end{cases}$$ hence there can actually be a double jump. \end{example}
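The two explicit compromise steps in the finite configuration above are easy to replay numerically. In the following illustrative Python sketch (the helper `update` is our own naming for the symmetric compromise rule (\ref{dynamics})), the values after the Poisson events on $\langle1,2\rangle$ and $\langle2,3\rangle$ come out exactly as claimed:

```python
import math

MU = 1 / math.pi     # compromise parameter mu = 1/pi from the example
THETA = 1.0          # critical confidence bound theta = theta_c = 1

def update(a, b, mu=MU):
    """Symmetric Deffuant compromise: both opinions move the fraction
    mu of their difference toward each other."""
    a_new = tuple(x + mu * (y - x) for x, y in zip(a, b))
    b_new = tuple(y + mu * (x - y) for x, y in zip(a, b))
    return a_new, b_new

# starting values x1, x2, x3 of the finite configuration
eta = {1: (1.0, 0.0), 2: (0.0, 0.0), 3: (1 / math.pi, 1.0)}

# Poisson event on edge <1,2>: the distance is exactly 1 <= theta
assert math.dist(eta[1], eta[2]) <= THETA
eta[1], eta[2] = update(eta[1], eta[2])

# Poisson event on edge <2,3>: again distance exactly 1 <= theta
assert math.dist(eta[2], eta[3]) <= THETA
eta[2], eta[3] = update(eta[2], eta[3])

# afterwards all pairwise distances lie strictly below the bound
assert max(math.dist(eta[i], eta[j]) for i in eta for j in eta) < THETA
```

The resulting values are $(1-\tfrac{1}{\pi},0)$, $(\tfrac{1}{\pi},\tfrac{1}{\pi})$ and $(\tfrac{1}{\pi},1-\tfrac{1}{\pi})$, so further compromises along this block are never blocked.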
\section{Metrics other than the Euclidean distance}
Having investigated the changes that multidimensional opinion values cause in the Deffuant model, another interesting aspect is the impact of the measure of distance between two opinions. What happens if we apply some general metric $\rho$ other than the natural choice given by the Euclidean norm?
Although this generalization does not entirely fit the framework as laid out in Section 1, it is not worth repeating all the definitions as one would simply have to replace all appearing distances $\n{x-y}$ by $\rho(x,y)$ correspondingly. Note however that switching to a general metric $\rho$ influences the dynamics of the Deffuant model only in determining which opinion values are within `speaking distance', that is allowing for an update if neighbors with corresponding opinions interact. Once the two values are close enough in this respect, the updated opinion values will just be the convex combinations described in (\ref{dynamics}), even if the straight line connecting both values might no longer be the geodesic between them (as in the Euclidean case) and the steps taken towards the arithmetic average can be of different length if $\rho$ is not translation invariant.
With respect to the considerations in the foregoing section, the following properties of a distance measure play an important role.
\begin{definition}\label{extracon} Consider a metric $\rho$ on $\R^k$. \begin{enumerate}[(i)] \item Let the metric $\rho$ be called {\itshape sensitive to coordinate} $i$, if there exists a function $\varphi:[0,\infty)\to[0,\infty)$ such that
$\lim_{s\to\infty}\varphi(s)=\infty$ and for any two vectors $x, y\in\R^k$ with $|x_i-y_i|> s$, it holds that $\rho(x,y)>\varphi(s)$. \item Call $\rho$ {\em locally dominated by the Euclidean distance}, if there exist some $\gamma,c>0$ such that for $x,y\in\R^k$ with $\n{x-y}\leq\gamma$ it holds that
\begin{equation}\label{domi}\rho(x,y)\leq c\cdot||x-y||_2.\end{equation} \item Finally, let $\rho$ be called {\itshape weakly convex} if for all $x,y,z\in\R^k$: $$\rho(x, \alpha y+(1-\alpha)\,z)\leq \max\{\rho(x,y),\rho(x,z)\}\quad\text{for all }\alpha\in[0,1].$$ \end{enumerate} \end{definition}
\noindent The convexity of balls $B_\rho(x,r)=\{y\in\R^k,\;\rho(x,y)<r\}$ generated by the metric is a crucial feature. It is not hard to check that the balls generated by $\rho$ are convex if and only if the metric is weakly convex: Sufficiency is obvious, since $y,z\in B_\rho(x,r)$ immediately gives $\conv(\{y,z\})\subseteq B_\rho(x,r)$. As to necessity, if there are $x,y,z\in\R^k$, $\alpha\in(0,1)$ s.t.\ $\rho(x, \alpha y+(1-\alpha)\,z)>\max\{\rho(x,y),\rho(x,z)\}$, we can choose $r\in(\max\{\rho(x,y),\rho(x,z)\},\rho(x, \alpha y+(1-\alpha)\,z))$ and conclude that $B_\rho(x,r)$ cannot be convex. It should be mentioned that when talking about the metric space $(\R^k,\rho)$, we will always assume that it is equipped with the Borel $\sigma$-algebra generated by the metric $\rho$.
If $\rho$ is locally dominated by the Euclidean distance, we can find a constant $C=C(\theta)$ such that (\ref{domi}) holds in fact for all $x,y\in\R^k$ with $\rho(x,y)\leq\theta$ if $c$ is replaced by $C$: If $\n{x-y}>\gamma$ but $\rho(x,y)\leq\theta$, we can conclude that $$\rho(x,y)\leq\theta\leq\tfrac{\theta}{\gamma}\,\n{x-y},$$ hence $C:=\max\{c,\tfrac{\theta}{\gamma}\}$ will do.
\begin{definition} Let the Deffuant model with respect to a general distance measure $\rho$ be defined just as in Section \ref{intro}, with the only change that the restriction of the confidence bound in (\ref{dynamics}) will now rule that Poisson events cause updates only if $\rho(a,b)\leq\theta$, where $a,b$ denote the opinion values at the corresponding vertices. As the convexity of balls is enormously important in the analysis presented in the foregoing section, in what follows $\rho$ will be assumed to be weakly convex.
No consensus still means that we have finally blocked edges, that is some $\langle u,v\rangle$ s.t.\ $\rho(\eta_{t}(u),\eta_{t}(v))>\theta$ for all $t$ large enough. Similarly, the convergence notion in the definition of consensus is now based on the distance $\rho$.
As before, the initial opinions are i.i.d.\ with some common distribution $\mathcal{L}(\eta_0)$ on $\R^k$. If the distribution of $\eta_0$ has a finite expectation, we define its radius with respect to $\rho$ as $$R_\rho:=\inf\left\{r>0,\;\Prob\big(\eta_0\in B_\rho(\E\eta_0,r)\big)=1\right\},$$ similarly to the Euclidean case, see Definition \ref{radius}.
Likewise, the notion of $\epsilon$-flatness transfers to the new setting as follows: A vertex $v\in\Z$ is called {\em $\epsilon$-flat (with respect to $\rho$)} to the right in the initial configuration $\{\eta_0(u)\}_{u\in\Z}$ if for all $n\geq0$: \begin{equation}\label{rhoflat}
\frac{1}{n+1}\sum_{u=v}^{v+n}\eta_0(u)\in B_\rho(\E\eta_0,\epsilon), \end{equation} similarly for $\epsilon$-flatness to the left and two-sided $\epsilon$-flatness. \end{definition}
\noindent By imposing appropriate additional restrictions on the weakly convex metric $\rho$ and the initial distribution, we can retrieve the result of Theorem \ref{nogap} also in this generalized setting. The extra restriction on $\mathcal{L}(\eta_0)$ is that $\E[\eta_0^{\,2}]$ is finite, as this is no longer directly implied by the finiteness of the initial distribution's radius (just think of a bounded metric). The Cauchy-Schwarz inequality implies that this constraint is equivalent to the finiteness of the entries in the covariance matrix corresponding to the distribution of $\eta_0$, which is why we will simply refer to it as having a finite second moment, just as in the univariate case.
Finally, note that if we fix an initial distribution $\mathcal{L}(\eta_0)$, due to the update rule (\ref{dynamics}), all possible future opinion values lie in the convex hull of its support, $\conv(\supp\eta_0)$. For this reason it will suffice in every respect that $\rho$ is weakly convex (and possibly locally dominated by the Euclidean norm) on $\conv(\supp\eta_0)$ only, not the entire $\R^k$.
\begin{theorem}\label{nogaprho} In the Deffuant model on $\Z$ with the underlying opinion space $(\R^k,\rho)$ and an initial opinion distribution $\mathcal{L}(\eta_0)$ we have the following limiting behavior: \begin{enumerate}[(a)] \item If $\rho$ is locally dominated by the Euclidean distance and $\mathcal{L}(\eta_0)$ has a finite second moment, a finite radius $R_\rho\in[0,\infty)$ and mass around its mean, i.e.\ \begin{equation}\label{rhomatm} \Prob\big(\eta_0\in B_\rho(\E\eta_0,r)\big)>0 \text{ for all }r>0, \end{equation} the critical parameter is $\theta_\text{\upshape c}=R_\rho$, meaning that for $\theta<R_\rho$ we have a.s.\ no consensus and for $\theta>R_\rho$ a.s.\ strong consensus. \item Let $\eta_0=(\eta_0^{(1)},\dots,\eta_0^{(k)})$ be the random initial opinion vector. If one of the coordinates $\eta_0^{(i)}$ has an unbounded marginal distribution (with respect to the absolute value), its expected value exists (regardless of whether finite, $+\infty$ or $-\infty$) and $\rho$ is sensitive to this coordinate, the limiting behavior will a.s.\ be no consensus, irrespective of $\theta$. \end{enumerate} \end{theorem}
\begin{proof} \begin{enumerate}[(a)] \item The proof of this theorem is exactly the same as the proof of Theorem \ref{nogap}. One only has to check that the additional requirements on $\rho$ make up for the crucial properties of the Euclidean norm that were used in the cited proof. The (multivariate) SLLN states that the averages in (\ref{rhoflat}) for large $n$ are close to the mean in Euclidean distance, hence with respect to $\rho$ due to (\ref{domi}). Local modification of the initial profile will then guarantee the existence of one-sidedly $\epsilon$-flat vertices.
The crucial role of $\epsilon$-flat vertices is preserved by the weak convexity of $\rho$: The proof of Prop.\ 5.1 in \cite{ShareDrink} shows that given an edge $\langle v-1,v\rangle$ along which there have been no updates yet, the opinion value at $v$ is a convex combination of averages as in (\ref{rhoflat}), hence lies in $B_\rho(\E\eta_0,\epsilon)$ as well, if $v$ was $\epsilon$-flat to the right with respect to the initial configuration, due to convexity of the $\rho$-balls.
As to the supercritical regime, the a.s.\ existence of two-sidedly $\epsilon$-flat vertices follows from the a.s.\ existence of one-sidedly $\epsilon$-flat vertices and the i.i.d.\ property of the initial configuration, just as in the Euclidean case. The weak convexity of $\rho$ is needed once more to conclude that the opinion values of two-sidedly $\epsilon$-flat vertices stay close to the mean, just as in La.\ 6.3 in \cite{ShareDrink}.
When we want to apply the argument of Prop.\ 6.1 in \cite{ShareDrink}, stating that neighbors will a.s.\ either finally concur or the edge between them be blocked for large $t$, it is essential that condition (\ref{domi}), together with the finite second moment, allows us once again to borrow the energy idea. The extra condition of a finite second moment implies the finiteness of the expected initial energy $\E[W_0(v)]=\E[\eta_0(v)^{2}]$, as mentioned just before the theorem. If the opinions $\eta_{t}(u),\eta_{t}(v)$ of two neighbors are within the confidence bound with respect to $\rho$ but $\rho(\eta_{t}(u),\eta_{t}(v))\geq\delta$ for some $\delta>0$, then due to (\ref{domi}):
$||\eta_{t}(u)-\eta_{t}(v)||_2\geq\tfrac{\delta}{C}$, where $C=\max\{c,\tfrac{\theta}{\gamma}\}>0$, see the comments after Definition \ref{extracon}. This will cause an energy loss of at least $2\mu(1-\mu)(\tfrac{\delta}{C})^2$ when they compromise. Again, this cannot happen infinitely often with positive probability as the expected energy at time $t=0$ is finite and the expected total energy preserved over time.
\item Given $\rho$ is sensitive to coordinate $i$, the proof idea for the second claim can be reused as well. The sensitivity leads to the fact that there is some $s>0$ s.t.\ $|x_i-y_i|> s$ implies $\rho(x,y)>\theta$. As alluded to in the proof of Theorem \ref{nogap}, the arguments used for unbounded distributions in Thm.\ 2.2 in \cite{Deffuant} show that under the given conditions, there are a.s.\ vertices that differ by more than $s$ from both their neighbors in the $i$th coordinate (with respect to the absolute value) in the initial configuration, and this will not change no matter with whom their neighbors compromise. Consequently, the corresponding opinion vectors will always be at $\rho$-distance more than $\theta$. \end{enumerate} \end{proof}\vspace*{-1em}
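The energy bookkeeping in part (a) rests on the identity that one compromise between opinions at Euclidean distance $d$ destroys exactly $2\mu(1-\mu)d^2$ of the total energy $\sum_v W_t(v)$. A short numerical sketch of this identity (illustrative Python; the helper names are ours):

```python
import math
import random

def compromise(a, b, mu):
    """One Deffuant update of two opinion vectors a, b."""
    a_new = [x + mu * (y - x) for x, y in zip(a, b)]
    b_new = [y + mu * (x - y) for x, y in zip(a, b)]
    return a_new, b_new

def energy(v):
    # W(v) = squared Euclidean norm of the opinion vector
    return sum(x * x for x in v)

random.seed(1)
for _ in range(100):
    k = random.randint(1, 5)
    mu = random.uniform(0.0, 0.5)
    a = [random.uniform(-2.0, 2.0) for _ in range(k)]
    b = [random.uniform(-2.0, 2.0) for _ in range(k)]
    a2, b2 = compromise(a, b, mu)
    loss = energy(a) + energy(b) - energy(a2) - energy(b2)
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    # each update destroys exactly 2*mu*(1-mu)*d^2 of energy
    assert abs(loss - 2 * mu * (1 - mu) * d2) < 1e-9
```

Since the expected energy at time $0$ is finite and the expected total energy is preserved over time, a loss of at least $2\mu(1-\mu)(\tfrac{\delta}{C})^2$ per update can occur only finitely often, which is exactly the contradiction used in the proof.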
\begin{example}
\begin{enumerate}[(a)]
\item The $L^p$-norm for general $p\in[1,\infty]$ on $\R^k$ is defined as follows:
$$\lVert x\rVert_p:=\Big(\sum_{i=1}^k |x_i|^p\Big)^{\tfrac1p}\quad\text{for } p\in[1,\infty)\text{ and}\quad
\lVert x\rVert_{\infty}:=\max_{1\leq i\leq k} |x_i|.$$
In fact, these norms are all equivalent. More precisely, for $1\leq q<p\leq\infty$:
$$\lVert x\rVert_p\leq\lVert x\rVert_q\leq k^{\big(\tfrac1q-\tfrac1p\big)}\,\lVert x\rVert_p.$$
This implies for all $p\in[1,\infty]$:
$$\lVert x\rVert_p\leq\sqrt{k}\;\n{x}.$$
In other words all induced metrics $\rho(x,y)=\lVert x-y\rVert_p$, are -- to be precise globally --
dominated by the Euclidean distance.
It is easy to check that the norm axioms guarantee the convexity of balls, hence
the metric induced by $\lVert\,.\,\rVert_p$ is weakly convex for any $p\in[1,\infty]$.
Furthermore, $\lVert x\rVert_p\geq k^{\big(\tfrac1p-1\big)}\lVert x\rVert_1\geq k^{\big(\tfrac1p-1\big)}|x_i|$
for all $1\leq i\leq k$ implies sensitivity to every coordinate. In conclusion, both parts of Theorem \ref{nogaprho}
can be applied to the Deffuant model with the metric induced by some $L^p$-norm, i.e.\
$\rho(x,y)=\lVert x-y\rVert_p$, $p\in[1,\infty]$, as distance measure.
\item
If the definition of $\lVert\,.\,\rVert_p$ is extended to values for $p$ in $(0,1)$, the corresponding
functions are not subadditive, hence do not induce a metric.
Raised to the power $p$, we get the distance measures
$$\rho_p(x,y):=\big(\lVert x-y\rVert_p\big)^{p}=\sum_{i=1}^k |x_i-y_i|^p,$$ which
are in fact metrics for all $p\in(0,\infty)$ and obviously sensitive to every coordinate. For $p\in(0,1)$
these metrics fail to have convex balls. For $p\in[1,\infty)$ however, they are weakly convex which can be seen
from the weak convexity of $\lVert\,.\,\rVert_p$ as follows:
\begin{align*}\rho_p(x, \alpha y+(1-\alpha)\,z)&=\big(\big\lVert x-\big(\alpha y+(1-\alpha)\,z\big)\big\rVert_p\big)^p\\
&\leq\big(\max\{\lVert x-y\rVert_p,\lVert x-z\rVert_p\}\big)^p\\
&=\max\{\rho_p(x,y),\rho_p(x,z)\}.
\end{align*}
The metrics $\rho_p,\ p\in[1,\infty)$ are no longer equivalent to the Euclidean distance, but still locally
dominated in the sense of (\ref{domi}). In conclusion, Theorem \ref{nogaprho} equally applies to the Deffuant model
where distances are taken with respect to $\rho_p$.
More generally, given $\phi=(\phi_i)_{i=1}^k$ with non-negative functions $\phi_i$ defined on $\R_{\geq0}$ we
can consider
$$\rho_\phi(x,y):=\sum_{i=1}^k\phi_i\big(|x_i-y_i|\big).$$
For this to be a proper metric, the $\phi_i$ have to be convex and satisfy $\phi_i(s)=0$ if and only if $s=0$.
Defined this way, $\rho_\phi$ is convex, in particular weakly convex. It is automatically locally dominated by the Euclidean
distance and sensitive to coordinate $i$ if and only if $\phi_i(s)$ is unbounded as $s\to\infty$.
\end{enumerate}
\end{example}
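Two of the claims in this example -- the global domination $\lVert x\rVert_p\leq\sqrt{k}\,\n{x}$ and the failure of convex balls for $\rho_p$ with $p\in(0,1)$ -- can be probed numerically. In the following Python sketch (function names are ours), the last lines exhibit an explicit triple for which a $\rho_{1/2}$-ball fails to be convex:

```python
import math
import random

def lp_norm(x, p):
    """l^p norm on R^k, including the maximum norm for p = inf."""
    if p == math.inf:
        return max(abs(t) for t in x)
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def rho_p(x, y, p):
    """The distance measure (||x - y||_p)^p from part (b)."""
    return sum(abs(s - t) ** p for s, t in zip(x, y))

random.seed(0)
for _ in range(200):
    k = random.randint(1, 6)
    x = [random.uniform(-3.0, 3.0) for _ in range(k)]
    euclid = math.sqrt(sum(t * t for t in x))
    for p in (1, 1.5, 2, 3, math.inf):
        # global domination ||x||_p <= sqrt(k) * ||x||_2
        assert lp_norm(x, p) <= math.sqrt(k) * euclid + 1e-9

# for p = 1/2 the midpoint of y and z escapes every rho_p-ball
# around x that contains both endpoints: the balls are not convex
x, y, z = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
mid = tuple(0.5 * (s + t) for s, t in zip(y, z))
assert rho_p(x, mid, 0.5) > max(rho_p(x, y, 0.5), rho_p(x, z, 0.5))
```

Here $\rho_{1/2}(x,\text{mid})=2\sqrt{0.5}\approx1.41$, while both endpoints $y$ and $z$ are at $\rho_{1/2}$-distance $1$ from $x$.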
\begin{example} \label{discr}
The extra condition (\ref{domi}) cannot be dropped. Let us consider the discrete metric
$\rho(x,y)=\mathbbm{1}_{\{x\neq y\}}$ -- which is weakly convex -- on $\R$. Clearly, it is not locally dominated by the
Euclidean metric. Let $\eta_0$ have the mixed distribution with constant density $\tfrac14$ on $[-1,1]$ and point
mass $\tfrac12$ at $0$. Hence $\mathcal{L}(\eta_0)$ has expectation $0$ and radius 1 (actually both with respect to
$\rho$ and the Euclidean distance). Regarding (\ref{rhomatm}), we find $\Prob(\eta_0\in B_\rho(0,\epsilon))\geq\tfrac12$
for all $\epsilon>0$. Take $\mu\in(0,\tfrac12]$ to be a transcendental number (e.g.\ $\tfrac{1}{\pi}$).
Furthermore, we choose $\theta\geq2$, which obviously makes blocked edges impossible.
At every time $t$, $\eta_t(v)$ is a finite (but random) convex combination of the initial opinions
$\{\eta_0(y)\}_{y\in\Z}$, say
\begin{equation}\label{SADrep}\eta_t(v)=\sum_{y\in\Z}\xi_{v,t}(y)\,\eta_0(y),\end{equation}
which is the SAD representation, see La.\ 3.1 in \cite{ShareDrink}. Almost surely, there are two edges that do not
experience Poisson events up to time $t$ and enclose $v$. It is not hard to show -- by induction on the (a.s.\ finitely
many) Poisson events occurring up to time $t$ on the edges between those two -- that the non-zero factors $\xi_{v,t}(y)$
in the representation of $\eta_t(v)$ are (random) polynomials in $\mu$ with integer coefficients. Furthermore, for
$y\neq v$ they have no constant term, for $y=v$ the constant term equals $1$:
At time $0$ we find $\xi_{u,0}(y)=\mathbbm{1}_{\{u=y\}}$ for all $u,y\in\Z$. With a Poisson event at
time $s$ on the edge $\langle u, u+1\rangle$ that actually causes an update, the coefficients change according to
\begin{equation*}\begin{array}{rl}\xi_{u,s}(y)&\!\!\!=\,(1-\mu)\,\xi_{u,s-}(y)+\mu\,\xi_{u+1,s-}(y)\\
\xi_{u+1,s}(y)&\!\!\!=\,\mu\,\xi_{u,s-}(y)+(1-\mu)\,\xi_{u+1,s-}(y),\end{array}
\end{equation*}
for all $y\in\Z$, compare with (\ref{transf}). This establishes the induction step.\\[1em]
\noindent
Using the representation (\ref{SADrep}) we find for two neighbors $u,v$:
$$\eta_t(v)-\eta_t(u)=\sum_{y\in\Z}\big(\xi_{v,t}(y)-\xi_{u,t}(y)\big)\,\eta_0(y).$$
As $\xi_{v,t}(v)-\xi_{u,t}(v)$ is a polynomial in $\mu$ with integer coefficients and constant term $1$, it cannot vanish at the transcendental value of $\mu$.
Additionally, due to the fact that $\theta\geq2$, the $\xi$-factors only depend on the Poisson events, which implies
that the two random variables
$$X:=\frac{1}{\xi_{v,t}(v)-\xi_{u,t}(v)}\;\sum_{y\neq v}\big(\xi_{v,t}(y)-\xi_{u,t}(y)\big)\,\eta_0(y)$$
and $\eta_0(v)$ are independent. Since $\Prob(\eta_0(v)=0)=\Prob(\eta_0(v)\neq0)=\tfrac12$, we get
$$\Prob(\eta_t(v)-\eta_t(u)\neq0)\geq\Prob(X=0,\eta_0(v)\neq0)+\Prob(X\neq0,\eta_0(v)=0)=\tfrac12.$$
This leads to
$$\Prob\Big(\limsup_{t\to\infty}\rho\big(\eta_t(u),\eta_t(v)\big)=1\Big)\geq\tfrac12$$
for all neighbors $u,v$, which renders even weak consensus impossible.
In fact, with this choice of initial distribution and metric, the Deffuant model exhibits a limiting behavior
that is not a.s.\ approaching one of the scenarios described in Definition \ref{states}, since it does not
feature blocked edges, nor almost sure consensus formation in the long run -- instead at any time $t$ the opinions
of two neighbors are with probability at least $\tfrac12$ at distance $1$: always on speaking terms but not converging.
Since the confidence bound $\theta\geq2$ never restricts an update, we can find out what happens by looking at the Deffuant model employing
the Euclidean distance instead. By Theorem \ref{nogap} all opinions will a.s.\ approach the mean $0$,
but whenever two of them do not coincide they are at $\rho$-distance 1. \end{example}
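The induction behind the SAD representation can be mirrored computationally: encode each $\xi_{u,t}(y)$ as an integer-coefficient polynomial in $\mu$ (here a dict mapping degree to coefficient, our ad-hoc representation) and apply the displayed transformation for randomly chosen Poisson events. The constant term then stays $1$ exactly for $y=u$ and $0$ otherwise, and each row of coefficients sums to the constant polynomial $1$. A sketch:

```python
import random

def poly_add(p, q):
    """Add two integer-coefficient polynomials in mu (degree -> coeff)."""
    r = dict(p)
    for d, c in q.items():
        r[d] = r.get(d, 0) + c
        if r[d] == 0:
            del r[d]
    return r

def mu_times(p):
    """Multiply a polynomial by mu: shift all degrees up by one."""
    return {d + 1: c for d, c in p.items()}

def one_minus_mu_times(p):
    """(1 - mu) * p = p - mu * p."""
    return poly_add(p, {d: -c for d, c in mu_times(p).items()})

n = 6  # a finite window of vertices with no Poisson events on its boundary
xi = [[{0: 1} if u == y else {} for y in range(n)] for u in range(n)]

random.seed(7)
for _ in range(50):  # 50 Poisson events on random inner edges <u, u+1>
    u = random.randrange(n - 1)
    for y in range(n):
        a, b = xi[u][y], xi[u + 1][y]
        xi[u][y] = poly_add(one_minus_mu_times(a), mu_times(b))
        xi[u + 1][y] = poly_add(mu_times(a), one_minus_mu_times(b))

for u in range(n):
    row_sum = {}
    for y in range(n):
        # constant term 1 exactly for y = u, otherwise no constant term
        assert xi[u][y].get(0, 0) == (1 if y == u else 0)
        row_sum = poly_add(row_sum, xi[u][y])
    # the SAD coefficients of each vertex sum to the constant polynomial 1
    assert row_sum == {0: 1}
```

In particular, $\xi_{v,t}(v)-\xi_{u,t}(v)$ always has constant term $1$, so it is a non-zero integer polynomial and cannot vanish at a transcendental $\mu$, in line with the argument above.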
\begin{example}
To illustrate the importance of the sensitivity in part (b) of Theorem \ref{nogaprho}, let us consider the
two metrics $d(x,y)=\n{x-y}$, that is the Euclidean metric, and
$$\rho(x,y)=\begin{cases}\n{x-y},&\text{if }\n{x-y}\leq1\\
1,&\text{otherwise.}
\end{cases}$$
Evidently, $\rho$ is not sensitive to any coordinate, and it is not hard to check that it is weakly convex either:
For $r<1$ the balls $B_{\rho}(x,r)$ are the same as the Euclidean balls, for $r\geq1$ we get
$B_{\rho}(x,r)=\R^k$. So in either case it is a convex set.
For simplicity, let us take $k$ to be $1$ -- the Euclidean distance is then induced by the absolute value --
and choose the standard normal distribution $\mathcal{N}(0,1)$ as initial distribution. Due to $\rho(x,y)\leq|x-y|$,
$\rho$ is locally dominated by the Euclidean distance. As the normal distribution has a finite second moment
and mass around its mean, part (a) of Theorem \ref{nogaprho} shows that in the Deffuant model using $\rho$
as the distance measure, the radius $R_\rho=1$ marks the critical value for $\theta$ at which we have a
phase transition from a.s.\ no consensus to a.s.\ strong consensus.
In the Deffuant model using the Euclidean distance however, there will a.s.\ be no consensus irrespective
of $\theta$ according to Theorem \ref{gen} (b). \end{example}
\noindent The final aim will now be to prove a generalization of Theorem \ref{gapsEucl} to the Deffuant model with a general metric $\rho$ instead of the Euclidean one. In order to do this, we have to transfer the necessary auxiliary results leading to Theorem \ref{gapsEucl}, essentially by replacing all occurring Euclidean distances by distances with respect to $\rho$; this, however, requires some small adjustments.
\begin{definition} Consider a random variable $\xi$ on $(\R^k,\rho)$. The {\itshape support} of its distribution is the following subset of $\R^k$, closed with respect to $\rho$: \begin{equation}\label{suprho} \supp(\xi):=\left\{x\in\R^k,\;\Prob\big(\xi\in B_\rho(x,r)\big)>0\ \text{for all }r>0\right\}. \end{equation} \end{definition}
\begin{remark} The last argument in the proof of Proposition \ref{Radius} shows $\supp(\eta_0)\subseteq B_\rho[\E\eta_0,R_\rho]$ for all initial distributions bounded with respect to $\rho$. The first part of its proof, i.e.\ showing that $\supp(\eta_0)\subseteq B[\E\eta_0,r]$ implies $\Prob(\eta_0\in B[\E\eta_0,r])=1$, is based on the theorem of Heine-Borel, stating that closed and bounded sets are compact in $(\R^k,\n{\,.\,})$, which does not hold for general metric spaces. For the discrete metric (see Example \ref{discr}) and a probability measure without point masses, the set defined in (\ref{suprho}) is in fact empty.
If however $(\R^k,\rho)$ is separable -- i.e.\ there exists a countable dense subset -- we get $\Prob(\xi\in\supp(\xi))=1$ for any random variable $\xi$ (see e.g.\ Thm.\ 2.1, p.\ 27 in \cite{Partha}), and thus the full statement of Proposition \ref{Radius}.
Given $\rho$ is locally dominated by the Euclidean distance, we can immediately conclude that $(\R^k,\rho)$ is separable, since due to (\ref{domi}) the set $\mathbb{Q}^k$ is not only dense in $(\R^k,\n{\,.\,})$ but also in $(\R^k,\rho)$.
In conclusion, if $(\R^k,\rho)$ is separable and $\eta_0$ has a finite expectation, its distribution's radius can be written as $R_\rho=\sup\{\rho(\E\eta_0,x),\; x\in\supp(\eta_0)\}$. \end{remark}
\noindent Adjusting the definition of $\D_\theta(\nu)$ (see Definition \ref{Dtheta}) to the general setting by substituting $\rho$-balls for Euclidean balls -- let us denote the resulting set by $\D_\theta^\rho(\nu)$ -- allows us to reuse the arguments in the lemmas dealing with its properties. Although it references Proposition \ref{Radius}, the proof of Lemma \ref{properties} only needed $\supp(\eta_0)\subseteq B[\E\eta_0,R]$, hence its statement is true for any weakly convex $\rho$ -- with the terms related to closure now referring to the topology generated by $\rho$.
As the final conclusions similar to Theorem \ref{gapsEucl} will require $\rho$ to be locally dominated by the Euclidean distance, let us assume for the remainder of this section that $\rho$ is not only weakly convex but also (\ref{domi}) holds.
When it comes to the central Lemma \ref{circle}, the claims that can be modified, without major effort, to hold for such $\rho$ as well read as follows (again connectedness and closure refer to the topology generated by $\rho$):
\begin{lemma}\label{circlerho} Let $\rho$ be a weakly convex metric locally dominated by the Euclidean distance. \begin{enumerate}[(a)]
\item For all $x\in\R^k$ and $0\leq \delta<\tfrac{\theta}{2}$, the set $\D_\theta^\rho(\nu)\cap B_\rho[x,\delta]$
is convex.
\item The connected components of $\D_\theta^\rho(\nu)$ are convex and at $\rho$-distance at least
$\theta$ from one another. If $\D_\theta^\rho(\nu)$ is connected, then
$\D_\theta^\rho(\nu)=\overline{\conv(\supp(\eta_0))}$.
\item If $R_\rho<\infty$ and $\nu$ has mass around its mean, i.e.\ condition (\ref{rhomatm}) holds, then
$\D_{\theta}^\rho(\nu)=\overline{\conv(\supp(\eta_0))}$ for all $\theta>R_\rho$.
\item If $\D_\theta^\rho(\nu)$ is connected and $\E\eta_0$ is finite, then $\E\eta_0\in\D_\theta^\rho(\nu)$. \end{enumerate} \end{lemma}
\begin{proof} The proof is essentially identical to the one of Lemma \ref{circle}. In part (a) we only have to choose $m,n\in\N$ such that
$$\left|\frac{m}{m+n}-\alpha\right|\leq \frac{\min\{\tfrac{r}{4c},\tfrac{\gamma}{2}\}}{\max\{\n{y},\n{z}\}}.$$ Then $$\n{(\tfrac{m}{m+n}\,y+\tfrac{n}{m+n}\,z)-(\alpha y+(1-\alpha)z)}\leq
|\tfrac{m}{m+n}-\alpha|\cdot\n{y}+|\alpha-\tfrac{m}{m+n}|\cdot\n{z}\leq\gamma,$$ which together with (\ref{domi}) implies
\begin{align*}
\rho\big(\eta_N(v),\alpha y+(1-\alpha)z\big)&\leq
\tfrac{r}{2}+\rho\big(\tfrac{m}{m+n}\,y+\tfrac{n}{m+n}\,z,\alpha y+(1-\alpha)z\big)\\
&\leq\tfrac{r}{2}+c\,\n{(\tfrac{m}{m+n}\,y+\tfrac{n}{m+n}\,z)-(\alpha y+(1-\alpha)z)}\\
&\leq \tfrac{r}{2}+c\,(|\tfrac{m}{m+n}-\alpha|\cdot\n{y}+|\alpha-\tfrac{m}{m+n}|\cdot\n{z})
\leq r.
\end{align*}
\noindent As to part (b), we can follow the first part of the proof of Lemma \ref{circle} (c), replacing every Euclidean distance by $\rho$, until the angles are considered. Since $B_\rho[x_j,\tfrac{\theta}{2}]$ might be oddly shaped, we define $r:=\min\{\tfrac{\theta}{2c},\gamma\}>0$ and consider the Euclidean ball $B[x_j,r]$, which by (\ref{domi}) is contained in $B_\rho[x_j,\tfrac{\theta}{2}]$. Cutting short an angle $\alpha$ as described there will now reduce the (Euclidean) length of the polygonal chain by at least $2r\cdot(1-\cos(\alpha))$, and the argument goes through, yielding that the Euclidean closure of the component $C$, connected with respect to $\rho$, contains $\conv(\{x,y\})$. It follows from the generalized statement of Lemma \ref{properties} that, being a component of $\D_\theta^\rho(\nu)$, $C$ is $\rho$-closed. Using (\ref{domi}), this in turn implies that $C$ is also closed with respect to the Euclidean distance and hence contains $\conv(\{x,y\})$. The rest of the claim easily follows, again by replacing $\n{x-y}$ by $\rho(x,y)$.
\noindent Part (c) is an easy consequence of the arguments leading to (a) and (b), and can be verified just as in the proof of Lemma \ref{circle} (d).
\noindent Finally, the only insight needed to accept the proof of Lemma \ref{circle} (f) as proof of claim (d) above is that $\D_\theta^\rho(\nu)$, being closed in $(\R^k,\rho)$, is also closed in the Euclidean space $(\R^k,\n{\,.\,})$, due to (\ref{domi}). \end{proof}
\begin{definition} Corresponding to Definition \ref{suppdef}, let the support of the distribution of $\eta_t$ in the Deffuant model with parameter $\theta$ and distance measure $\rho$ be denoted by $\supp_\theta^\rho(\eta_t)$.
Respectively, the length of the largest gap in $\supp(\eta_0)$ with respect to $\rho$ will be given by $$h_\rho:=\inf\{\theta>0,\;\D_\theta^\rho(\nu)\text{ is connected in } (\R^k,\rho)\},$$ compare with Definition \ref{gap}. \end{definition}
\noindent Scrutiny of the arguments in the proof of Lemma \ref{suppt} reveals that the corresponding statements are also true with $\supp_\theta^\rho(\eta_t)$ in place of $\supp_\theta(\eta_t)$ and $B_\rho[\E\eta_0,R_\rho]$ substituting $B[\E\eta_0,R]$ -- actually even for metrics which are only weakly convex and not locally dominated by the Euclidean distance, since only the convexity of $B_\rho[\E\eta_0,R_\rho]$ is needed. Concerning Proposition \ref{supp_t}, however, we will not bother with the proof of a similar statement for the Deffuant model with general $\rho$. The only fact needed in the upcoming theorem is $$\supp_\theta^\rho(\eta_t)\subseteq\D_{\theta+\epsilon}^\rho(\nu)\quad\text{for }\epsilon>0,$$ which readily follows from the last argument in the proof of this very proposition. Having followed up the crucial intermediate steps makes it possible to slightly modify the proof of Theorem \ref{gapsEucl} in order to get an argument establishing the following result:
\begin{theorem}\label{gapsrho} Consider the Deffuant model on $\Z$ with opinion values in $(\R^k,\rho)$, where the corresponding distance measure $\rho$ is a weakly convex metric, locally dominated by the Euclidean distance. Assume it features an initial opinion distribution which has a finite second moment and is bounded with respect to $\rho$, i.e.\ $$R_\rho=\inf\left\{r>0,\;\Prob\big(\eta_0\in B_\rho[\E\eta_0,r]\big)=1\right\}<\infty.$$ If $h_\rho$ denotes the length of the largest gap in its support, then the critical value for the confidence bound, where a phase transition from a.s.\ no consensus to a.s.\ strong consensus takes place is $\theta_\text{\upshape c}=\max\{R_\rho,h_\rho\}$. \end{theorem}
\begin{proof} As mentioned, the reasoning follows closely the proof of Theorem \ref{gapsEucl}. In case (i), where $\theta<h_\rho$ we can conclude from Lemma \ref{suppt} and the above remarks that for $\epsilon>0$ such that $\theta+\epsilon<h_\rho$ it follows that $$\supp(\eta_0)\subseteq\supp_\theta^\rho(\eta_t)\subseteq\D_{\theta+\epsilon}^\rho(\nu).$$ The set $\D_{\theta+\epsilon}^\rho(\nu)$ is not connected (with respect to $\rho$) by definition of $h_\rho$, hence comprises convex components $C_1$ and $C_2$ at $\rho$-distance at least $\theta+\epsilon$ (see Lemma \ref{circlerho}). Again, we can choose the components such that $\Prob(\eta_0\in C_i)>0$ for $i=1,2$, since if we had $\Prob(\eta_0\in C_1)=1$, the fact that $C_1$ is closed with respect to $\rho$ would give $\supp(\eta_0)\subseteq C_1$ and so (using its convexity and the generalization of Lemma \ref{properties}) $$\D_{\theta+\epsilon}^\rho(\nu)\subseteq\overline{\conv(\supp(\eta_0))}\subseteq C_1.$$ But $C_1=\D_{\theta+\epsilon}^\rho(\nu)$ contradicts the disconnectedness.
Consequently, for a fixed vertex $v$, independence of the initial opinions guarantees that the event $\{\eta_0(v)\in C_1,\eta_0(v+1)\in C_2\}$ has positive probability, which dooms the edge $\langle v, v+1\rangle$ to be blocked by $\rho(\eta_t(v),\eta_t(v+1))\geq\theta+\epsilon$ for all $t\geq0$. Indeed, in the Deffuant model with parameter $\theta$, $\eta_t(v)$ cannot leave the convex set $C_1$ since $\supp_\theta^\rho(\eta_t)\setminus C_1$, being a subset of $\D_{\theta+\epsilon}^\rho(\nu)\setminus C_1$, is at distance at least $\theta+\epsilon$ from $C_1$ for all $t$. The same holds for $\eta_t(v+1)$ and $C_2$ respectively. Due to ergodicity, the existence of blocked edges is therefore an almost sure event.\\[1em] \noindent The analysis of case (ii), $\theta<R_\rho$, likewise requires only minor adjustments of the argument in the proof of Theorem \ref{gapsEucl}. To begin with, the finite second moment of $\eta_0$ implies $\E\eta_0\in\R^k$, which is not ensured by $R_\rho<\infty$ itself. This time, let $y$ be an element of $\supp(\eta_0)\setminus B_\rho[\E\eta_0,\theta+2\epsilon]$, which is non-empty for $\epsilon\in(0,\tfrac{R_\rho-\theta}{2})$. Since both $B_\rho[y,\theta+\epsilon]$ and $B_\rho[\E\eta_0,\epsilon]$ are convex and closed -- with respect to $\rho$ and thus $\n{\,.\,}$ due to (\ref{domi}) -- as well as disjoint, we can choose $z_1\in B_\rho[y,\theta+\epsilon]$ and $z_2\in B_\rho[\E\eta_0,\epsilon]$ such that $$\n{z_1-z_2}=\min\{\n{a-b},\;a\in B_\rho[y,\theta+\epsilon]\text{ and }b\in B_\rho[\E\eta_0,\epsilon]\}>0$$ and then define $z=\tfrac12(z_1+z_2)$ and the half-space $H$ with respect to this point $z$ accordingly. Note that $H$ contains $B_\rho[\E\eta_0,\epsilon]$ and is disjoint from $B_\rho[y,\theta+\epsilon]$, just as in the Euclidean setting, because of the convexity of $\rho$-balls and the choice of $z_1,z_2$. 
Moreover, the local domination property (\ref{domi}) forces $B_\rho[\E\eta_0,\epsilon]$ to be a superset of $B[\E\eta_0,\delta]$, where $\delta=\min\{\tfrac{\epsilon}{c},\gamma\}$, and thus ensures that $\E\eta_0$ lies in the Euclidean interior of $H$. Having established this, we can follow the rest of the argument (beginning with (\ref{SLLN}), which again follows from the finite second moment of $\eta_0$) literally, keeping in mind that $y$ has $\rho$-distance larger than $\theta+\epsilon$ to $H$.\\[1em] \noindent Finally, in the supercritical case (iii), i.e.\ $\theta>\max\{R_\rho,h_\rho\}$, we only have to take Lemma \ref{circlerho} as a replacement for Lemma \ref{circle} and again write $\rho$ for the appearing Euclidean distances. It is crucial to notice that limits with respect to the Euclidean distance as in the SLLN and (\ref{outerpart}) are also limits with respect to $\rho$, once again using (\ref{domi}). Furthermore, in several places either the triangle inequality or the convexity of Euclidean balls was used, but being a weakly convex metric, $\rho$ has the corresponding properties. When using the idea of energy to conclude that two neighbors will a.s.\ either finally concur or end up with opinions further than $\theta$ apart from each other, the fact that $\rho$ is locally dominated by the Euclidean distance is indispensable; it is employed as in the proof of Theorem \ref{nogaprho} (a). This is also where the finiteness of the second moment is needed. \end{proof}
\begin{example} To discern to what extent the results of this section actually add something to the univariate case as well, let us finally consider a metric on $\R$ which is not translation invariant. One can take for example
$\rho(x,y)=|x^3-y^3|$ for all $x,y\in\R$. This metric $\rho$ obviously generates convex balls; in other words, it is weakly convex. However, since
$$\frac{|x^3-y^3|}{|x-y|}=|x^2+xy+y^2|\to \infty\quad \text{as } x,y\to\infty$$ it is not locally dominated by the absolute value. Nevertheless, as long as we consider a fixed bounded distribution this problem can be overcome -- as was pointed out just before Theorem \ref{nogaprho} -- since on any bounded interval (\ref{domi}) holds for $\rho$ and some properly chosen $c>0$.
If we consider the initial distribution $\nu=\text{\upshape unif}\{-\tfrac12,\tfrac12\}$, which has radius $R_\rho=\tfrac18$, we can conclude from Theorem \ref{gapsrho}, that the critical value for the confidence bound is $\theta_\text{\upshape c}=\rho(-\tfrac12,\tfrac12)=\tfrac14$. Unlike the Euclidean case, this value will change with a translation of the initial distribution: Taking $\eta_0+\tfrac32$ instead of $\eta_0$, in other words $\nu=\text{\upshape unif}\{1,2\}$ as marginal distribution for the initial configuration, we find $R_\rho=\tfrac{37}{8}$ and $\theta_\text{\upshape c}=\rho(1,2)=7$. \end{example}
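\noindent The elementary computations in this example are easy to double-check numerically. The following sketch is our own illustration, not part of the argument (the helper names are ours); it evaluates the radius $R_\rho$ and the critical value $\theta_\text{\upshape c}=\max\{R_\rho,h_\rho\}$ for both initial distributions, using that for a two-atom distribution the largest gap $h_\rho$ is simply the $\rho$-distance between the atoms.

```python
from fractions import Fraction

def rho(x, y):
    # the non-translation-invariant metric rho(x, y) = |x^3 - y^3| on R
    return abs(x**3 - y**3)

def radius_and_critical_value(atoms):
    # atoms: the two equally likely values of a uniform two-point distribution
    mean = sum(atoms, Fraction(0)) / len(atoms)
    R = max(rho(mean, a) for a in atoms)              # radius R_rho
    h = max(rho(a, b) for a in atoms for b in atoms)  # largest gap h_rho
    return R, max(R, h)

# nu = unif{-1/2, 1/2} and the shifted version nu = unif{1, 2}
R1, theta1 = radius_and_critical_value([Fraction(-1, 2), Fraction(1, 2)])
R2, theta2 = radius_and_critical_value([Fraction(1), Fraction(2)])
print(R1, theta1, R2, theta2)
```

Exact rational arithmetic via \texttt{Fraction} reproduces the values $R_\rho=\tfrac18$, $\theta_\text{\upshape c}=\tfrac14$ and $R_\rho=\tfrac{37}{8}$, $\theta_\text{\upshape c}=7$ stated above.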
\makebox[0.8\textwidth][l]{
\begin{minipage}[t]{\textwidth}
{\sc \small Timo Hirscher\\
Department of Mathematical Sciences,\\
Chalmers University of Technology,\\
412 96 Gothenburg, Sweden.}\\
hirscher@chalmers.se
\end{minipage}}
\end{document}
\begin{document}
\title{The frequency of elliptic curve groups over prime finite fields}
\begin{abstract} Letting $p$ vary over all primes and $E$ vary over all elliptic curves over the finite field $\mathbb F_p$, we study the frequency with which a given group $G$ arises as a group of points $E(\mathbb F_p)$. It is well-known that the only permissible groups are of the form $G_{m,k}:=\mathbb Z/m\mathbb Z\times \mathbb Z/mk\mathbb Z$. Given such a candidate group, we let $M(G_{m, k})$ be the frequency with which the group $G_{m, k}$ arises in this way. Previously, the second and fourth named authors determined an asymptotic formula for $M(G_{m, k})$ assuming a conjecture about primes in short arithmetic progressions. In this paper, we prove several unconditional bounds for $M(G_{m, k})$, pointwise and on average. In particular, we show that $M(G_{m, k})$ is bounded above by a constant multiple of the expected quantity when $m\le k^A$ and that the conjectured asymptotic for $M(G_{m, k})$ holds for almost all groups $G_{m, k}$ when $m\le k^{1/4-\epsilon}$. We also apply our methods to study the frequency with which a given integer $N$ arises as the group order $\#E(\mathbb F_p)$. \end{abstract}
\section{Introduction}\label{intro}
Given an elliptic curve $E$ over the prime finite field $\mathbb F_p$, we let $E(\mathbb F_p)$ denote its set of $\mathbb F_p$ points. It is well-known that $E(\mathbb F_p)$ admits the structure of an abelian group, and in fact, \[ E(\mathbb F_p)\cong G_{m,k}:= \mathbb Z/m\mathbb Z\times\mathbb Z/mk\mathbb Z \] for some positive integers $m$ and $k$. It is natural to wonder which groups of the form $G_{m,k}$ arise in this way and how often they occur as $p$ varies over all primes and $E$ varies over all elliptic curves over $\mathbb F_p$. The former problem, of characterizing which groups are realized in this way, was studied in~\cite{BPS:2012, CDKS1}, while the frequency of occurrence was studied by the second and fourth named authors in~\cite{DS-MEG}. In the present work, we explore the frequency of occurrence further.
Given a group $G$ of the form $G_{m,k}=\mathbb Z/m\mathbb Z\times\mathbb Z/mk\mathbb Z$, we set $N=|G|=m^2k$ and let $M_p(G)$ denote the weighted number of isomorphism classes of elliptic curves over $\mathbb F_p$ with group isomorphic to $G$, that is to say \[
M_p(G)=\sum_{\substack{E/\mathbb F_p\\ E(\mathbb F_p)\cong G}}\frac{1}{|\Aut_p(E)|}, \] where the sum is taken over all isomorphism classes of elliptic curves over $\mathbb F_p$ and
$|\Aut_p(E)|$ is the number of $\mathbb F_p$-automorphisms of $E$. It is worth noting here that $|\Aut_p(E)|=2$ for all but a bounded number of isomorphism classes $E$ over $\mathbb F_p$, and hence \[ M_p(G) = \frac{1}{2} \#\{ E/\mathbb F_p : E(\mathbb F_p)\cong G \} +O(1). \]
In \cite{DS-MEG}, the authors studied the weighted number of isomorphism classes of elliptic curves over any prime finite field with group of points isomorphic to $G$, i.e., they studied \[ M(G):=\sum_pM_p(G) . \]
The primes counted by $M(G)$ must lie in a very short interval near $N=|G|$. This is because the Hasse bound implies that $p+1-2\sqrt p<N<p+1+2\sqrt p$, which is equivalent to saying that \[ N^{-} := N+1-2\sqrt{N} < p < N+1+2 \sqrt{N}=:N^+. \] Even the Riemann hypothesis does not guarantee the existence of a prime in such a short interval. Hence the main theorem of \cite{DS-MEG} can only be proven under an appropriate conjecture concerning the distribution of primes in short intervals. In the statement below, we refer to the conjecture assumed in~\cite{DS-MEG} as the Barban-Davenport-Halberstam (BDH) estimate for short intervals.
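Indeed, the two ranges are exactly equivalent: writing $t=p-N$, the Hasse condition $|p+1-N|<2\sqrt p$ squares to
\[
(t+1)^2<4p=4(N+t), \quad\text{that is,}\quad (t-1)^2<4N,
\]
which is precisely $|p-(N+1)|<2\sqrt N$, i.e.\ $N^-<p<N^+$.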
Before stating the main theorem of~\cite{DS-MEG}, we fix some more notation. Given a group $G=G_{m,k}$, we let $\Aut(G)$ denote its automorphism group (as an abstract group). This should not be confused with $\Aut_p(E)$ as defined above, which refers to the set of $\mathbb F_p$-automorphisms of the elliptic curve $E$. We also define the function \eq{define K(G)}{ K(G)= \prod_{\ell\nmid N}\left(1-\frac{\leg{N-1}{\ell}^2\ell+1}{(\ell-1)^2(\ell+1)}\right) \prod_{\ell\mid m}\left(1-\frac{1}{\ell^2}\right) \prod_{\substack{\ell\mid k\\ \ell\nmid m}}\left(1-\frac{1}{\ell(\ell-1)}\right), } where the products are taken over all primes $\ell$ satisfying the stated conditions and $\leg{\cdot}{\ell}$ denotes the usual Kronecker symbol. In~\cite{DS-MEG}, the function $K(G)$ was only computed for odd order groups, and its definition contained a mistake. It was corrected to the form that we give here in~\cite{DS-MEG-corr}. Note that the function $K(G)$ is bounded between two constants independently of the parameters $m$ and $k$. In paraphrased form, the main theorem of~\cite{DS-MEG} is as follows.
\begin{thm}[David-Smith] \label{meg-rephrased} Assume that the BDH estimate for short intervals holds. Fix $A,B>0$. Then for every nontrivial, odd order group $G=G_{m,k}$, we have that \[
M(G)=\left(K(G) + O_{A,B}\left( \frac{1}{(\log|G|)^{B}} \right)\right)\frac{|G|^2}{|\Aut (G)|\log |G|}
\asymp \frac{mk^2}{\phi(m)\phi(k)\log k}, \] provided that $m\le (\log k)^A$. \end{thm}
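\noindent Since every factor in \eqref{define K(G)} is $1+O(1/\ell^2)$, the Euler product converges quickly and $K(G)$ is well approximated by a truncation. The following numerical sketch is our own (the helper names \texttt{primes\_up\_to} and \texttt{K\_truncated} do not come from the paper); it uses that $\leg{N-1}{\ell}^2$ equals $0$ if $\ell\mid N-1$ and $1$ otherwise.

```python
def primes_up_to(n):
    # simple sieve of Eratosthenes
    s = [True] * (n + 1)
    s[0] = s[1] = False
    for i in range(2, int(n**0.5) + 1):
        if s[i]:
            s[i*i::i] = [False] * len(s[i*i::i])
    return [p for p, is_p in enumerate(s) if is_p]

def K_truncated(m, k, cutoff=10**4):
    # truncate the three Euler products defining K(G_{m,k}) at primes <= cutoff
    N = m * m * k
    K = 1.0
    for l in primes_up_to(cutoff):
        if N % l != 0:
            e = 0 if (N - 1) % l == 0 else 1  # squared Kronecker symbol
            K *= 1 - (e * l + 1) / ((l - 1)**2 * (l + 1))
        elif m % l == 0:
            K *= 1 - 1 / l**2
        else:  # l divides k but not m
            K *= 1 - 1 / (l * (l - 1))
    return K

print(K_truncated(1, 5), K_truncated(2, 3))
```

The output is consistent with $K(G)$ being bounded between two absolute constants, and the truncation is already stable at this cutoff.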
For precise details concerning the conjecture assumed to prove Theorem~\ref{meg-rephrased}, we refer the reader to \cite{DS-MEG}. We note that the result of Theorem~\ref{meg-rephrased} is restricted to the range $m\le (\log k)^A$. However, we believe that it should hold in the range $m\le k^A$. Proving such a result at the present time would, however, require an even stronger hypothesis than the one assumed in \cite{DS-MEG}. Unconditionally, it is possible to obtain upper bounds of the correct order of magnitude in this larger range. This is the content of our first theorem.
\begin{thm} \label{CDKS} Fix $A>0$ and consider integers $m$ and $k$ with $1\le m\le k^A$. Let $G=G_{m,k}$, $N=|G|=m^2k$, and \[ \delta = \frac{\phi(m)\log(2N)}{N}
\sum_{\substack{ N^-<p\le N^+ \\ p\equiv 1\pmod m }} \sqrt{(p-N^-)(N^+-p)}, \] and note that $\delta\ll1$ by the Brun-Titchmarsch inequality. For any fixed $\lambda>1$, \[
\delta^\lambda \cdot \frac{|G|^2}{ |\Aut (G)|\log(2|G|)} \ll M(G) \ll \delta^{1/\lambda} \cdot \frac{|G|^2}{ |\Aut (G)|\log(2|G|)}, \] the implied constants depending at most on $A$ and $\lambda$. \end{thm}
Employing the above result together with the Bombieri-Vinogradov theorem, we also show that the lower bound implicit in Theorem~\ref{meg-rephrased} holds for a positive proportion of groups $G$.
\begin{thm} \label{CDKS2} Consider numbers $x$ and $y$ with $1\le x\le \sqrt{y}$. Then there are absolute positive constants $c_1$ and $c_2$ such that \[
M(G_{m, k}) \ge c_1\cdot \frac{|G_{m,k}|^2}{|\Aut (G_{m,k})|\log(2|G_{m,k}|)} \] for at least $c_2xy$ pairs $(m,k)$ with $m\le x$ and $k\le y$. \end{thm}
\begin{rmk} It is not possible for such a lower bound to hold for all groups $G=G_{m, k}$. As was noted in~\cite{BPS:2012}, several groups of this form do not arise in this way at all. For example, the group $G_{11,1}$ never occurs as the group of points on any elliptic curve over any finite field. \end{rmk}
Our final result for $M(G_{m, k})$ is that on average the full asymptotic of Theorem~\ref{meg-rephrased} holds unconditionally.
\begin{thm} \label{CDKS3} Fix $\epsilon>0$ and $A\ge1$. For $2\le x\le y^{1/4-\epsilon}$ we have that \als{
\frac{1}{xy}\sum_{\substack{m\le x,\, k\le y \\ mk>1}} \left| M(G_{m,k}) - \frac{K(G_{m, k}) |G_{m, k}|^2}{|\Aut (G_{m, k})|\log|G_{m, k}|} \right|
\ll \frac{y}{(\log y)^A}, } the implied constant depending at most on $A$ and $\epsilon$. Moreover, if the generalized Riemann hypothesis is true, then the same result is true for $x\le y^{1/2-\epsilon}$. \end{thm}
In \cite{DS-MEN, DS-MEN-corr}, the second and fourth named authors studied the related question of how many elliptic curves over $\mathbb F_p$ have a given number of points, that is to say the asymptotic behaviour of \[
M(N) := \sum_p \sum_{\substack{ E/\mathbb F_p\\ \#E(\mathbb F_p)=N}}\frac{1}{|\Aut_p(E)|} . \] It was shown in \cite{DS-MEN, DS-MEN-corr} that \[ M(N)\sim K(N)\cdot \frac{N^2}{\phi(N)\log N} \quad(N\to\infty) \] under suitable assumptions on the distribution of primes in short arithmetic progressions, where \eq{define K(N)}{ K(N) = \prod_{\ell\nmid N} \left(1- \frac{ \leg{N-1}{\ell}^2\ell+1}{(\ell-1)^2(\ell+1)} \right)
\prod_{\ell|N} \left(1-\frac{1}{\ell^{\nu_\ell(N)}(\ell-1)}\right). } Here $\nu_\ell(N)$ denotes the usual $\ell$-adic valuation of $N$. As one might expect, the methods of this paper apply to the study of $M(N)$ as well.
We start by recording the obvious identity \[ M(N) = \sum_{m^2k=N} M(G_{m, k}) . \] Then it is possible to show that, as expected, most of the contribution to $M(N)$ comes from groups $G_{m, k}$ with $m$ small, that is to say groups that are nearly cyclic.
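\noindent The index set of this identity is easily enumerated: the candidate groups of order $N$ correspond exactly to the pairs $(m,k)$ with $m^2k=N$. As a small illustration (the helper name is ours, not from the paper):

```python
def group_shapes(N):
    # all pairs (m, k) with m^2 * k = N, i.e. the candidate groups G_{m,k} of order N
    shapes = []
    m = 1
    while m * m <= N:
        if N % (m * m) == 0:
            shapes.append((m, N // (m * m)))
        m += 1
    return shapes

print(group_shapes(36))
```

For $N=36$ this yields $(m,k)\in\{(1,36),(2,9),(3,4),(6,1)\}$; the pair with $m=1$ corresponds to the cyclic group, which carries most of the contribution to $M(N)$ by the theorem below.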
\begin{thm}\label{MEN-1} For $N\ge1$ and $x\ge1$, we have that \[ M(N) = \sum_{\substack{m^2k=N \\ m\le x}} M(G_{m, k})
+ O\left( \frac{N^2}{x\phi(N)\log(2N)} \right) . \] \end{thm}
Finally, we conclude with two more results on $M(N)$.
\begin{thm} \label{MEN-2} Let $N\ge1$ and set \[ \eta = \frac{\log(2N)}{N}
\sum_{N^-<p\le N^+} \sqrt{(p-N^-)(N^+-p)} , \] and note that $\eta\ll1$ by the Brun-Titchmarsch inequality. For any fixed $\lambda>1$, \[ \eta^\lambda \cdot \frac{N^2}{\phi(N) \log(2N) } \ll M(N) \ll \eta^{1/\lambda} \cdot \frac{N^2}{\phi(N)\log(2N)}, \] the implied constants depending at most on $\lambda$. \end{thm}
\begin{thm} \label{MEN-3} Fix $A>0$. For $x\ge1$, we have that \als{
\frac{1}{x}\sum_{1<N\le x} \left| M(N) - \frac{K(N)N^2}{\phi(N)\log N} \right|
\ll_A \frac{x}{(\log x)^A} . } \end{thm}
The present paper also includes an appendix (by Greg Martin and the second and fourth named authors) giving a probabilistic interpretation to the Euler factors arising in the constants $K(N)$ and $K(G)$ defined by~\eqref{define K(G)} and~\eqref{define K(N)}, respectively. This interpretation is similar to the heuristic leading to the conjectural constants in related conjectures on properties of the reductions of a fixed global elliptic curve $E$ over the rationals (e.g., the Lang-Trotter conjectures~\cite{LT:1976} and the Koblitz~\cite{Kob:1988} conjecture)
with the additional feature that the Euler factors at the primes $\ell$ dividing $N$ or $|G|$ are related to certain matrix counts over $\mathbb Z / \ell^e \mathbb Z$ for $e$ large enough.
\subsection*{Notation} Given a natural number $n$, we denote by $P^+(n)$ and $P^-(n)$ its largest and smallest prime factor, respectively, with the convention that $P^+(1)=1$ and $P^-(1)=\infty$. Moreover, we let $\tau_r(n)$ denote the coefficient of $1/n^s$ in the Dirichlet series $\zeta(s)^r$. In particular, $\tau_r(n)=r^{\omega(n)}$ for square-free integers $n$, where $\omega(n)$ denotes the number of distinct prime factors of $n$. In the special case when $r=2$, we simply write $\tau(n)$ in place of $\tau_2(n)$, which counts the number of divisors of $n$. We write $f*g$ to denote the Dirichlet convolution of the arithmetic functions $f$ and $g$, defined by $(f*g)(n)=\sum_{ab=n}f(a)g(b)$. As usual, given a Dirichlet character $\chi$, we write $L(s,\chi)$ for its Dirichlet series. In addition, we make use of the notation \[
E(x,h;q) := \max_{(a,q)=1} \left| \sum_{\substack{ x <p \le x + h \\ p\equiv a\pmod q}} \log p - \frac{h}{\phi(q)} \right| . \] Finally, for $d\in\mathbb Z$ that is not a square and for $z\ge1$, we let \[ \mathcal L(d) = L\left(1,\left(\frac{d}{\cdot}\right)\right) = \prod_{\ell} \left( 1- \frac{\leg{d}{\ell}}{\ell}\right)^{-1} \quad\text{and}\quad \mathcal L(d;z) = \prod_{\ell\le z} \left( 1- \frac{\leg{d}{\ell}}{\ell}\right)^{-1}. \]
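\noindent For concreteness, $\tau_r(n)$ counts the ordered factorizations $n=a_1\cdots a_r$ into $r$ positive integers, so it can be computed by iterating Dirichlet convolution with the constant function $1$. A brief sketch of ours (deliberately naive, not an efficient implementation):

```python
def tau_r(r, n):
    # coefficient of 1/n^s in zeta(s)^r: the number of ways to write n
    # as an ordered product of r positive integers
    if r == 0:
        return 1 if n == 1 else 0
    # Dirichlet convolution: tau_r = 1 * tau_{r-1}
    return sum(tau_r(r - 1, n // d) for d in range(1, n + 1) if n % d == 0)

print(tau_r(2, 12), tau_r(3, 30))
```

One checks that $\tau_2(12)=\tau(12)=6$ and, since $30$ is square-free with $\omega(30)=3$, that $\tau_3(30)=3^3=27$, as stated above.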
\section{Outline of the proofs}\label{outline}
In this section, we outline the chief ideas that go into the proofs of our main results. However, most of our remarks concern the proofs of Theorems~\ref{CDKS} and~\ref{CDKS3}. This is primarily because the remaining results are essentially corollaries of these theorems. In particular, the main ingredient in the proof of Theorem~\ref{MEN-1} is Theorem~\ref{CDKS}, and the main ingredients in the proof of Theorem~\ref{MEN-3} are Theorems~\ref{CDKS3} and~\ref{MEN-1} together with a short computation. Theorem~\ref{MEN-2} is not truly a corollary, but its proof is essentially the same as that of Theorem~\ref{CDKS}. The proof of Theorem~\ref{CDKS2} is somewhat different. The ideas involved in its proof are essentially the same as those used to show Theorem 1.6 of~\cite{CDKS1} together with an application of Theorem~\ref{CDKS}. All of this will be expounded further in Section~\ref{proofs}, where we complete the proofs of all six results.
For the remainder of this section, we focus our attention on outlining the main ingredients in the proofs of Theorems~\ref{CDKS} and~\ref{CDKS3}. Throughout, we fix a group $G=G_{m,k}=\mathbb Z/m\mathbb Z\times\mathbb Z/mk\mathbb Z$, and we set $N=|G|=m^2k$. Moreover, given a prime $p\equiv 1\pmod {m}$, we set \begin{eqnarray} \label{def-dp} d_{m,k}(p)= \frac{(p-1-N)^2-4N}{m^2}=\left(\frac{p-1}{m}-mk\right)^2-4k. \end{eqnarray} Often, when the dependence on $m$ and $k$ is clear from the context, we will simply write $d(p)$ in place of $d_{m,k}(p)$. Our starting point is the following lemma, whose proof is based on Deuring's work \cite{Deu} and its generalization due to Schoof \cite{Schoof}. We shall give the details of its proof in Section \ref{deuring lemma}.
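Note that the two expressions in \eqref{def-dp} indeed agree: since $N=m^2k$,
\[
\left(\frac{p-1}{m}-mk\right)^2-4k=\frac{(p-1-m^2k)^2}{m^2}-\frac{4m^2k}{m^2}=\frac{(p-1-N)^2-4N}{m^2}.
\]
In particular, for $p\equiv1\pmod m$ the quantity $d_{m,k}(p)$ is an integer.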
\begin{lma}\label{formula for M(G)} For any $m,k\in\mathbb N$, we have that \[ M(G_{m, k}) = \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }}\sum_{\substack{f^2\mid d(p) ,\, (f,k)=1 \\
d(p)/f^2\equiv 1,0\pmod 4 }}
\frac{\sqrt{|d(p)|} \mathcal L(d(p)/f^2)}{2\pi f} . \] \end{lma}
For the proof of Theorem \ref{CDKS}, we shall use the following simplified but weaker version of Lemma \ref{formula for M(G)}.
\begin{cor}\label{bounds for M(G)} For any $m,k\in\mathbb N$, we have that \[ \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }}
\sqrt{|d(p)|} \mathcal L(d(p))
\ll M(G_{m, k}) \ll
\sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }} \frac{|d(p)|^{3/2}}{\phi(|d(p)|)}
\mathcal L(d(p)). \] \end{cor}
\begin{proof} For the lower bound, note that the term $f=1$ in Lemma \ref{formula for M(G)} always contributes to $M(G_{m, k})$, since $d(p)\equiv 0,1\mod 4$ for all $m,k$ and $p\equiv 1\mod m$. For the upper bound, notice that \[ \mathcal L(d(p)/f^2) \le \frac{f}{\phi(f)} \mathcal L(d(p)) . \] Since \[
\sum_{f|n} \frac{1}{\phi(f)}\ll \frac{n}{\phi(n)} , \] the claimed upper bound follows. \end{proof}
Evidently, Lemma \ref{formula for M(G)} and Corollary \ref{bounds for M(G)} reduce the estimation of $M(G_{m, k})$ to estimating an average of Dirichlet series evaluated at 1. In order to do so, we expand the Dirichlet series as an infinite sum and invert the order of summation by putting the sum over primes $p$ inside. For each fixed $n$ in the Dirichlet sum, understanding this sum over primes involves understanding the distribution of the set \eq{our set}{ \left\{\frac{p-1}{m}: N^-<p<N^+,\ p\equiv 1\mod m\right\} } in arithmetic progressions $a\mod b$, where the modulus $b=b(n)$ depends on $n$ and other parameters which are less essential. Already when $b=m=1$, this problem is very hard and unsolved, even if we assume the validity of the Riemann Hypothesis. In order to limit the size of the moduli $b$ that are involved, we need to truncate the Dirichlet series that appear before inverting the order of summation. We could do this for each individual Dirichlet series, using character sum estimates such as the P\'olya-Vinogradov inequality or Burgess's bounds as in \cite{DS-MEN,DS-MEG}, but this would still leave us to deal with rather large moduli $b$. Instead, we use the following result, which implies that for {\it most} characters $\chi$, $L(1,\chi)$ can be approximated by a very short Euler product, and then by a sum over integers $n$ supported only on small primes.
\begin{lma} \label{lemmashortproduct} Let $\alpha \geq 1$ and $Q\ge3$. There is a set $\mathcal{E}_\alpha(Q)\subset[1,Q]\cap\mathbb Z$ of at most $Q^{2/\alpha}$ integers such that if $\chi$ is a Dirichlet character modulo $q\le\exp\{(\log Q)^2\}$ whose conductor does not belong to $\mathcal{E}_\alpha(Q)$, then \[ L(1,\chi) = \prod_{\ell\le (\log Q)^{8\alpha^2}} \left(1-\frac{\chi(\ell)}{\ell}\right)^{-1} \left(1 + O_\alpha\left(\frac1{(\log Q)^\alpha}\right)\right). \] \end{lma}
\begin{proof} By a classical result, essentially due to Elliott (see \cite[Proposition 2.2]{GS}), we know that there is a set $\mathcal{E}_\alpha(Q)$ of at most $Q^{2/\alpha}$ integers from $[1,Q]$ such that \[ L(1,\psi) = \prod_{\ell \leq (\log{Q})^{8\alpha^2}} \left( 1 - \frac{\psi(\ell)}{\ell} \right)^{-1}
\left( 1 + O\left(\frac{\alpha}{(\log Q)^\alpha}\right) \right) \] for all primitive characters $\psi$ of conductor in $[1,Q]\setminus\mathcal{E}_\alpha(Q)$. So if $\chi$ is a Dirichlet character modulo $q\le \exp\{(\log Q)^2\}$ induced by $\psi$ and the conductor of $\psi$ is in $[1,Q]\setminus \mathcal{E}_\alpha(Q)$, then \als{ L(1, \chi)
&= \prod_{\ell \mid q} \left( 1 - \frac{\psi(\ell)}{\ell} \right)
\prod_{\ell \leq (\log{Q})^{8\alpha^2}} \left( 1 - \frac{\psi(\ell)}{\ell} \right)^{-1}
\left( 1 + O\left(\frac{\alpha}{(\log Q)^\alpha}\right) \right)\\
&= \prod_{\ell \mid q,\,\ell > (\log Q)^{8\alpha^2} } \left( 1 - \frac{\psi(\ell)}{\ell} \right)
\prod_{\ell \le (\log Q)^{8\alpha^2} } \left( 1 - \frac{\chi(\ell)}{\ell} \right)^{-1}
\left( 1 + O\left(\frac{\alpha}{(\log Q)^\alpha}\right) \right) . } Finally, note that \[ \log\left(\prod_{\ell \mid q,\,\ell > (\log Q)^{8\alpha^2} }\left( 1 - \frac{\psi(\ell)}{\ell} \right) \right)
\ll \sum_{\ell \mid q,\,\ell > (\log Q)^{8\alpha^2} } \frac{1}{\ell}
\le \frac{\omega(q) }{(\log Q)^{8\alpha^2}} \ll \frac{1}{(\log Q)^{8\alpha^2-2}}, \] since $\omega(q)\le \log q/\log 2\ll (\log Q)^2$, which completes the proof of the lemma. \end{proof}
Expanding the short product in the above lemma leads to an approximation of $L(1,\chi)$ by a sum over $(\log Q)^A$-smooth integers, and we know that very few of these integers exceed $Q^\epsilon$:
\begin{lma}\label{smooth} Let $f:\mathbb{N}\to\{z\in\mathbb C:|z|\le1\}$ be a completely multiplicative function. For $u\ge1$ and $x\ge10$ we have that \[ \prod_{p\le x}\left(1-\frac{f(p)}p\right)^{-1}
= \sum_{\substack{P^+(n)\le x\\n\le x^u}} \frac{f(n)}n+ O\left(\frac{\log x}{e^u}\right) . \] \end{lma}
\begin{proof} We have that \als{
\left| \prod_{p\le x}\left(1-\frac{f(p)}p\right)^{-1}
- \sum_{\substack{P^+(n)\le x\\n\le x^u}} \frac{f(n)}n \right|
= \left| \sum_{\substack{P^+(n)\le x\\ n > x^u }} \frac{f(n)}n \right|
&\le \frac{1}{e^u} \sum_{P^+(n)\le x} \frac{1}{n^{1-1/\log x}} \\
&\ll \frac{1}{e^u} \exp\left\{\sum_{p\le x} \frac{1}{p^{1-1/\log x}} \right\} . } So using the formula $p^{1/\log x}=1+O(\log p/\log x)$ and the prime number theorem, we obtain the claimed result. \end{proof}
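\noindent To see the lemma in action, take the completely multiplicative $f\equiv1$ with $x=30$ and $u=2$, where the error term is $O(\log x/e^u)=O(0.46)$. The following sketch of ours (deliberately naive helpers) compares the full product with the truncated sum over $30$-smooth $n\le x^u=900$.

```python
x, u = 30, 2
limit = x ** u  # sum over x-smooth n up to x^u

def is_prime(p):
    return p >= 2 and all(p % q for q in range(2, int(p**0.5) + 1))

def is_smooth(n, bound):
    # True if the largest prime factor of n is at most bound
    for p in range(2, bound + 1):
        while n % p == 0:
            n //= p
    return n == 1

product = 1.0
for p in range(2, x + 1):
    if is_prime(p):
        product *= 1 / (1 - 1 / p)

smooth_sum = sum(1 / n for n in range(1, limit + 1) if is_smooth(n, x))
print(product, smooth_sum)
```

The truncated sum falls short of the full product only by the tail over smooth $n>x^u$, an amount well within the error bound of the lemma.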
Combining Lemmas \ref{lemmashortproduct} and \ref{smooth}, we may replace $L(1,\chi)$ by a very short sum for most characters $\chi$, which means that we only need information about the distribution of the set \eqref{our set} for very small moduli. This leads to the following fundamental result, which is an improvement of Theorem \ref{meg-rephrased}. It will be proven in Section \ref{approx}.
\begin{thm}\label{approximation of M(G)} Fix $\alpha\ge1$ and $\epsilon\le 1/3$, and consider integers $m$ and $k$ with $1\le m\le k^{\alpha}$ and $k$ large enough so that $k^{\frac{1}{2}-\epsilon} \ge(\log k)^{\alpha+2}$. Set $G=G_{m,k}$, and consider $h\in[mk^{\epsilon},m \sqrt{k}/(\log k)^{\alpha+2}]$. Then \als{
M(G) &= \frac{K(G) |G|^2}{|\Aut (G)|\log|G|}
+ O_{\alpha,\epsilon} \left(\frac{k}{(\log k)^{\alpha}}
+ \frac{\sqrt{k}}{h} \sum_{q\le k^{\epsilon} } \tau_3(q)
\int_{N^-}^{N^+} E(y,h; qm ) \mathrm{d} y\right), } where $K(G)$ is defined by \eqref{define K(G)}. \end{thm}
Even though we cannot estimate the error term for any given values of $m$ and $k$, we can do so if we average over $m$ and $k$ using the following result, which is a consequence of Theorem 1.1 in \cite{Kou}.
\begin{lma}\label{bv-short}Fix $\epsilon>0$ and $A\ge1$. For $x\ge h\ge2$ and $1\le Q^2\le h/x^{1/6+\epsilon}$, we have that \[ \int_x^{2x}\sum_{q\le Q} E(y,h;q) \mathrm{d} y \ll\frac{xh}{(\log x)^A}. \] If, in addition, the Riemann hypothesis for Dirichlet $L$-functions is true, then the above estimate holds when $1\le Q^2\le h/x^\epsilon$. \end{lma}
Theorem \ref{approximation of M(G)} and Lemma \ref{bv-short} lead to a proof of Theorem \ref{CDKS3} in a fairly straightforward way as we will see in Section \ref{proofs}.
Next, we turn to the proof of Theorem \ref{CDKS}. Using Corollary~\ref{bounds for M(G)} and H\"older's inequality, we reduce the proof of this result to that of controlling sums of the form \eq{CDKS - key sum}{ \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }}
\left(\frac{|d(p)|}{\phi(|d(p)|) }\right)^s \mathcal L(d(p))^r , } where we take $r>0$ to prove the implicit upper bound and $r<0$ for the lower bound.
Nevertheless, we only seek an upper bound for the sum in \eqref{CDKS - key sum}, even for the lower bound in Theorem \ref{CDKS}. Therefore we can replace the sum over primes with a sum over almost primes and use sieve methods to detect the latter kind of integers. More precisely, we will majorize the characteristic function of primes $\le 2N$ by a convolution $\lambda*1$, where $\lambda$ is a certain truncation of the M\"obius function. This will be done using the {\it fundamental lemma of sieve methods}, which we state below in the form found in \cite[Lemma 5]{FI}. We could have also used Selberg's sieve, but the calculations are actually simpler when using Lemma \ref{lemma-FI}.
\begin{lma}\label{lemma-FI} Let $y\ge2$ and $D=y^u$ with $u\ge2$. There exist two arithmetic functions $\lambda^\pm:\mathbb{N}\to[-1,1]$, supported on $\{d\in\mathbb{N}:P^+(d)\le y,\,d\le D\}$, for which \[ \begin{cases}
(\lambda^-*1)(n)=(\lambda^+*1)(n)=1 &\text{if}\ P^-(n)>y,\\
(\lambda^-*1)(n)\le0\le(\lambda^+*1)(n) &\text{otherwise}. \end{cases} \] Moreover, if $g:\mathbb{N}\to\mathbb{R}$ is a multiplicative function with $0\le g(p)\le\min\{2,p-1\}$ for all primes $p\le y$, and $\lambda\in\{\lambda^+,\lambda^-\}$, then \[ \sum_d \frac{ \lambda(d)g(d) }{d}
= (1+O(e^{-u})) \prod_{p\le y} \left(1-\frac{g(p)}p\right). \] \end{lma}
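As a concrete (and much cruder) illustration of the two-sided bounds in Lemma \ref{lemma-FI}, one can take Brun's pure sieve, where $\lambda^{\pm}$ are truncations of the M\"obius function by the number of prime factors; the Bonferroni inequalities then give the same sandwich property. The following Python sketch (an illustration only, not the construction of \cite{FI}, and with no truncation at level $D$) verifies the inequalities $(\lambda^-*1)(n)\le{\bf 1}_{P^-(n)>y}\le(\lambda^+*1)(n)$ numerically.

```python
from itertools import combinations

# Brun's pure sieve: lambda^+(d) = mu(d) on squarefree d | P(y) with omega(d) <= 2t,
# lambda^-(d) = mu(d) with omega(d) <= 2t + 1.  By the Bonferroni inequalities these
# sandwich the indicator of P^-(n) > y; the lemma in the text uses a finer construction.

def prime_factors(n):
    """Return the set of prime factors of n."""
    fs, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            fs.add(p)
            n //= p
        p += 1
    if n > 1:
        fs.add(n)
    return fs

def brun_sum(n, y, cutoff):
    """(lambda * 1)(n): sum of mu(d) over squarefree d | n with P^+(d) <= y, omega(d) <= cutoff."""
    small = [p for p in prime_factors(n) if p <= y]
    total = 0
    for r in range(min(cutoff, len(small)) + 1):
        total += (-1) ** r * sum(1 for _ in combinations(small, r))
    return total

y, t = 7, 1
for n in range(2, 2000):
    rough = all(p > y for p in prime_factors(n))   # is P^-(n) > y ?
    upper = brun_sum(n, y, 2 * t)                  # (lambda^+ * 1)(n)
    lower = brun_sum(n, y, 2 * t + 1)              # (lambda^- * 1)(n)
    assert lower <= (1 if rough else 0) <= upper
```

The truncation level $2t$ plays the role of the support bound in the lemma; the refined $\lambda^\pm$ of the text additionally satisfy the product asymptotic with error $O(e^{-u})$.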
Combining Lemmas \ref{lemmashortproduct} and \ref{lemma-FI}, we are led to the following key result, which will be proven in Section \ref{proof of Prop startwiththat}. As we will see in the same section, Theorem \ref{CDKS} is an easy consequence of this intermediate result.
\begin{prop}\label{startwiththat} Let $m,k\in\mathbb N$ and set $N=m^2k$. For any $r\in\mathbb{R}$ and $s\ge0$, we have that \[ \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }}
\left(\frac{|d(p)|}{\phi(|d(p)|) }\right)^s \mathcal L(d(p))^r
\ll_{r,s} \left(\frac{k}{\phi (k)}\right)^r \frac{\sqrt{N}}{\phi(m)\log(2k)}. \] \end{prop}
\section{Completion of the proof of the main results}\label{proofs}
In this section we prove Theorems \ref{CDKS}-\ref{MEN-3}. We start by stating a preliminary result, which is Lemma 15 of \cite{DS-MEG} in slightly altered form.
\begin{lma}\label{Aut(G)} For $m,k\in\mathbb N$, we have that \[
\frac{|{\Aut}(G_{m,k})|}{|G_{m,k}|}
= m\phi(m) \frac{\phi(k)}{k} \prod_{\substack{\ell | m \\ \ell\nmid k}} \left(1-\frac{1}{\ell^2} \right) . \] \end{lma}
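Lemma \ref{Aut(G)} can be checked for small $m$ and $k$ by counting automorphisms of $G_{m,k}=\mathbb Z/m\mathbb Z\times\mathbb Z/mk\mathbb Z$ by brute force: a homomorphism is determined by the images of the two generators, the image of the first generator being constrained to have order dividing $m$. The Python sketch below (with illustrative helper names) compares this count against the formula of the lemma.

```python
from fractions import Fraction

def prime_factors(n):
    fs, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            fs.add(p)
            n //= p
        p += 1
    if n > 1:
        fs.add(n)
    return fs

def phi(n):
    """Euler's totient function."""
    r = Fraction(n)
    for p in prime_factors(n):
        r *= Fraction(p - 1, p)
    return int(r)

def aut_brute(m, k):
    """Count automorphisms of Z/m x Z/mk by brute force: a map (x,y) -> x*a + y*b
    is a well-defined endomorphism iff m*a = 0, and an automorphism iff bijective."""
    mk = m * k
    G = [(x, y) for x in range(m) for y in range(mk)]
    A = [a for a in G if (m * a[1]) % mk == 0]  # m*a = 0 in G
    count = 0
    for a in A:
        for b in G:
            img = {((x * a[0] + y * b[0]) % m, (x * a[1] + y * b[1]) % mk) for x, y in G}
            if len(img) == len(G):
                count += 1
    return count

def aut_formula(m, k):
    """|Aut(G_{m,k})| from the lemma: |G| * m*phi(m)*phi(k)/k * prod_{l|m, l∤k}(1 - 1/l^2)."""
    val = Fraction(m * phi(m) * phi(k), k) * (m * m * k)
    for l in prime_factors(m):
        if k % l != 0:
            val *= 1 - Fraction(1, l * l)
    return val

assert all(aut_brute(m, k) == aut_formula(m, k) for m, k in [(1, 4), (2, 3), (3, 2), (2, 2)])
```

For instance, $G_{2,3}\cong\mathbb Z/2\mathbb Z\times\mathbb Z/6\mathbb Z$ has automorphism group of order $12$, in agreement with the formula.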
\begin{proof}[Proof of Theorem \ref{CDKS}] The claimed inequalities are a consequence of Corollary \ref{bounds for M(G)}, Proposition \ref{startwiththat}, and H\"older's inequality. Indeed, let $\mu=\lambda/(\lambda-1)$, so that $1/\lambda+1/\mu=1$. Then we have that \als{ M(G_{m, k})
&\ll \sum_{\substack{ N^-<p<N^+ \\ p\equiv 1\mod m}} \sqrt{|d(p)|} \frac{|d(p)|}{\phi(|d(p)|)} \mathcal L(d(p)) \\
&\le \left( \sum_{\substack{ N^-<p<N^+ \\ p\equiv 1\mod m}} \sqrt{|d(p)|}\right)^{\frac{1}{\lambda}}
\left( \sum_{\substack{ N^-<p<N^+ \\ p\equiv 1\mod m}}
\sqrt{|d(p)|} \left(\frac{|d(p)|}{\phi(|d(p)|)}\right)^\mu \mathcal L(d(p))^{\mu}\right)^{\frac{1}{\mu}} \\
&\ll
\left( \sum_{\substack{ N^-<p<N^+ \\ p\equiv 1\mod m}} \frac{\sqrt{(N^+-p)(p-N^-)}}{m} \right)^{\frac{1}{\lambda}}
\left( \sum_{\substack{ N^-<p<N^+ \\ p\equiv 1\mod m}}
\sqrt{k} \left(\frac{|d(p)|}{\phi(|d(p)|)}\right)^\mu \mathcal L(d(p))^{\mu}\right)^{\frac{1}{\mu}} , }
since $|d(p)|=(N^+-p)(p-N^-)/m^2\ll N/m^2=k$. So the definition of $\delta$ and Proposition \ref{startwiththat} imply that \[ M(G_{m, k}) \ll_{\lambda,A} \delta^{1/\lambda} \frac{km}{\phi(m)\log(2N)} \frac{k}{\phi(k)}. \] Hence the upper bound in Theorem \ref{CDKS} follows by Lemma \ref{Aut(G)}.
The proof of the lower bound is similar, having as a starting point the inequality \[
\sum_{\substack{ N^-<p<N^+ \\ p\equiv 1\mod m}} \sqrt{|d(p)|}
\le \left( \sum_{\substack{ N^- <p \le N^+ \\ p\equiv 1\pmod{m} }}
\sqrt{|d(p)|} \mathcal L(d(p)) \right)^{\frac{1}{\lambda}}
\left( \sum_{\substack{ N^- <p \le N^+ \\ p\equiv 1\pmod{m} }}
\frac{\sqrt{|d(p)|}}{\mathcal L(d(p))^{\mu/\lambda}} \right)^{\frac{1}{\mu}} . \] \end{proof}
\begin{proof}[Proof of Theorem \ref{MEN-2}] The proof of Theorem \ref{MEN-2} is completely analogous to the proof of Theorem \ref{CDKS}. The only difference is that instead of starting with Corollary \ref{bounds for M(G)}, we observe that \[
\sum_{N^-<p<N^+} \sqrt{|D_N(p)|} \mathcal L(D_N(p))
\ll M(N) \ll
\sum_{N^-<p<N^+} \frac{|D_N(p)|^{3/2}}{\phi(|D_N(p)|)}
\mathcal L(D_N(p)), \] a consequence of relation~\eqref{reduction to class number avg} below with $n=1$. \end{proof}
\begin{proof}[Proof of Theorem~\ref{CDKS2}] Note that when $m=k=1$ and $N=1$, we have $N^+=4$ and $N^-=0$, and thus the primes 2 and 3 belong to the set $\{N^-<p\le N^+: p\equiv 1\pmod m\}$. So, by Theorem \ref{CDKS}, it suffices to show Theorem \ref{CDKS2} when $y$ is large enough. We further assume that $x\in\mathbb N$, which we may certainly do. Observe that $(N^+-p)(p-N^-)\asymp N$ for $p\in((\sqrt{N}-1/2)^2,(\sqrt{N}+1/2)^2)$, and thus \[ \frac{1}{N/(\phi(m)\log(2N))} \sum_{\substack{N^-<p<N^+ \\ p\equiv 1\mod{m} }} \sqrt{ (N^+-p)(p-N^-)}
\gg \frac{\phi(m)}{\sqrt{N}} \sum_{\substack{(\sqrt{N}-1/2)^2<p<(\sqrt{N}+1/2)^2 \\ p\equiv 1\mod{m} }} \log p . \] So, if we set \[
C(m,k) = \frac{|G_{m, k}|^2}{|\Aut(G_{m, k})| \log(2|G_{m, k}|)} \asymp \frac{mk^2}{\phi(m)\phi(k)\log(mk)}, \] then Theorem \ref{CDKS} with $\lambda=2$ implies that \als{ \sum_{\substack{3x/4<m\le x \\ y/100<k\le y}} \sqrt{ \frac{M(G_{m, k})}{C(m,k)} }
&\gg \sum_{\substack{3x/4<m\le x \\ y/100<k\le y}} \frac{\phi(m)}{x\sqrt{y}}
\sum_{\substack{ (m\sqrt{k}-1/2)^2<p< (m\sqrt{k}+1/2)^2 \\ p\equiv 1\pmod m}} \log p \\
&\ge\sum_{3x/4<m\le x} \sum_{\substack{ x^2y/3 < p \le 4x^2y/9 \\ p\equiv 1\pmod m }} \frac{\phi(m)\log p}{x\sqrt{y}}\sum_{ \substack{y/100<k \le y \\ (\sqrt p -1/2)^2/m^2<k < (\sqrt p+1/2)^2/m^2 }} 1 , } provided that $y$ is large enough. Note that \[ \frac{(\sqrt p+1/2)^2 - (\sqrt p-1/2)^2}{m^2} = \frac{2\sqrt{p}}{m^2} \ge \frac{2x\sqrt{y/3}}{x^2} > 1, \] by our assumptions that $x\le\sqrt{y}$. Since we also have that $(\sqrt p-1/2)^2/m^2>y/100$ and that $(\sqrt p+1/2)^2/m^2\le y$ for $y$ large enough and $m$ and $p$ as above, we conclude that \[ \sum_{\substack{3x/4<m\le x \\ y/100<k\le y}} \sqrt{ \frac{M(G_{m, k})}{C(m,k)} }
\gg \frac{1}{x^2} \sum_{3x/4<m\le x} \phi(m)
\sum_{\substack{ x^2y/3 < p \le 4x^2y/9 \\ p\equiv 1\pmod m }} \log p. \] This last double sum equals \[ \sum_{3x/4<m\le x} \phi(m) \cdot
\frac{x^2y}{9\phi(m)} + O_A\left( \frac{x^3y}{(\log y)^A}\right) \gg x^3y, \] by the Bombieri-Vinogradov theorem. Therefore we conclude that \[ \sum_{\substack{3x/4<m\le x \\ y/100<k\le y}} \sqrt{ \frac{M(G_{m,k})}{C(m,k)} } \gg xy. \] Since the summands are all $\ll1$ in this range by Theorem \ref{CDKS} (recall that $\delta\ll1$ there), we obtain Theorem \ref{CDKS2}. \end{proof}
\begin{proof}[Proof of Theorem \ref{CDKS3}] Let $\theta$ be a parameter, which we take to be $1/2$ or $1/4$, according to whether we assume the generalized Riemann hypothesis or not. We then suppose that $1\le x\le y^{\theta-\epsilon}$. Note that Theorem \ref{CDKS} and Lemma \ref{Aut(G)} imply that \[
\sum_{\substack{m\le x,\, k\le y/(\log y)^A \\ mk>1}} \left| M(G_{m,k}) - \frac{K(G_{m, k}) |G_{m, k}|^2}{|\Aut (G_{m, k})|\log|G_{m, k}|} \right|
\ll \frac{xy^2}{(\log y)^A} . \] We break the remaining range of $m$ and $k$ into dyadic intervals, hence reducing Theorem \ref{CDKS3} to showing that \[ E:= \sum_{\substack{ x/2< m\le x \\ y/2< k\le y}}
\left| M(G_{m,k}) - \frac{K(G_{m, k}) |G_{m, k}|^2}{|\Aut (G_{m, k})|\log|G_{m, k}|} \right| \ll_{\epsilon,A} \frac{xy^2}{(\log y)^A} \] for $x\le y^{\theta-\epsilon}$. (Note that these might be different values of $x,y$ and $\epsilon$ than the ones we started with.) We apply Theorem \ref{approximation of M(G)} with $h= (x^2y)^{1/2}/(\log y)^{A+2}$ for all $m\in[x/2,x]$ and $k\in[y/2,y]$, to deduce that \als{ E &\ll \frac{\sqrt{y}}{h} \sum_{\substack{ x/2< m\le x \\ y/2< k\le y}} \sum_{q \le k^{\epsilon} } \tau_3(q)
\int_{(m^2k)^-}^{(m^2k)^+} E(t,h; qm ) \mathrm{d} t
+ \frac{xy^2}{(\log y)^A} \\
&=: E'+ \frac{xy^2}{(\log y)^A} , } say. Putting the sum over $k$ inside, we find that \als{ E'&\ll \frac{\sqrt{y}}{h} \sum_{x/2<m\le x} \sum_{q \le y^{\epsilon} } \tau_3(q)
\int_{x^2y/10}^{2x^2y} E(t,h; qm)
\left(\sum_{ \substack{ y/2<k\le y \\ t^-/m^2<k < t^+/m^2}} 1 \right) \mathrm{d} t \\
&\ll \frac{y}{hx} \sum_{m\le x} \sum_{q \le y^{\epsilon} } \tau_3(q)
\int_{x^2y/10}^{2x^2y} E(t,h; qm) \mathrm{d} t
\le \frac{y}{hx} \sum_{m\le x} \sum_{q \le y^{\epsilon} } \tau_4(q)
\int_{x^2y/10}^{2x^2y} E(t,h; q) \mathrm{d} t . } We note that $E(u,h; b) \ll \sqrt{h/\phi(b)} \sqrt{E(u,h;b)}$, by the Brun-Titchmarsh inequality. So the Cauchy-Schwarz inequality and Lemma \ref{bv-short} imply that \als{ E'&\ll \frac{y}{x h} \left(\sum_{b\le xy^{3\epsilon}} \tau_4(b)^2 \int_{x^2y/10}^{2x^2y}
\frac{h}{\phi(b)} \mathrm{d} t\right)^{\frac{1}{2}}
\left( \sum_{b\le xy^{3\epsilon}} \int_{x^2y/10}^{2x^2y} E(t,h; b) \mathrm{d} t\right)^{\frac{1}{2}} \\
&\ll \frac{y}{x h} \left( x^2yh(\log y)^{16} \cdot \frac{x^2yh}{(\log y)^{2A+16}} \right)^{\frac{1}{2}}
= \frac{xy^2}{(\log y)^A} , } which completes the proof of Theorem \ref{CDKS3}. \end{proof}
\begin{proof}[Proof of Theorem \ref{MEN-1}]
Theorem~\ref{CDKS} implies that \[ M(G_{m, k}) \ll \frac{k^{3/2}}{\phi (k)} \frac{\sqrt{N}}{\phi(m)\log(2k)} = \frac{mk^2}{\phi(k)\phi(m)\log(2k)}\le \frac{Nmk}{\phi(N)\phi(m)\log(2k)}. \] Therefore, \[ \sum_{\substack{m^2k=N \\ m>x}} M(G_{m, k})
\ll \sum_{\substack{m^2|N \\ x<m\le\sqrt{N} }} \frac{N^2}{m\phi(m)\phi(N)\log(2N/m^2)}
\ll \frac{N^2}{x\phi(N)\log(2N)}, \] which completes the proof of Theorem \ref{MEN-1}. \end{proof}
\begin{proof}[Proof of Theorem \ref{MEN-3}] In view of Theorem \ref{MEN-1}, it suffices to show that \[
\sum_{1<N\le x} \left| \sum_{\substack{ m^2k=N \\ m\le (\log x)^A}} M(G_{m, k}) - \frac{K(N)N^2}{\phi(N)\log N} \right|
\ll_A \frac{x^2}{(\log x)^A}, \] where $K(N)$ is defined by~\eqref{define K(N)}. Note that \als{
&\sum_{1<N\le x} \left| \sum_{\substack{ m^2k=N \\ m\le (\log x)^A}} M(G_{m, k})
- \sum_{\substack{ m^2k=N \\ m\le (\log x)^A}} \frac{K(G_{m, k}) |G_{m, k}|^2}{|\Aut (G_{m, k})|\log|G_{m, k}|} \right| \\
&\quad\le \sum_{\substack{1<m^2k\le x \\ m\le (\log x)^A}}
\left| M(G_{m, k}) - \frac{K(G_{m, k}) |G_{m, k}|^2}{|\Aut (G_{m, k})|\log|G_{m, k}|} \right| \\
&\quad\le \sum_{1\le 2^j\le (\log x)^{A} }
\sum_{\substack{k\le x/4^j \\ 2^j\le m<2^{j+1} \\ m^2k>1 }}
\left| M(G_{m, k}) - \frac{K(G_{m, k}) |G_{m, k}|^2}{|\Aut (G_{m, k})|\log|G_{m, k}|} \right| \\
&\quad\ll_A \sum_{1\le 2^j\le (\log x)^{A}} \frac{x^2}{8^j(\log x)^A} \ll \frac{x^2}{(\log x)^A} } by Theorem~\ref{CDKS3}. So it suffices to show that \eq{MEN-3 goal}{
\sum_{1<N\le x} \frac{N}{\log N} \left| \sum_{\substack{ m^2k=N \\ m\le (\log x)^A}} \frac{K(G_{m, k}) |G_{m, k}|}{|\Aut (G_{m, k})|}
- \frac{K(N)N}{\phi(N)} \right|
\ll_A \frac{x^2}{(\log x)^A} . } In fact, Lemma \ref{Aut(G)} implies that \als{
\frac{ K(G_{m, k}) |G_{m, k}|}{|\Aut(G_{m, k})|}
&= \frac{k}{m\phi(m)\phi(k)} \prod_{\substack{\ell | m \\ \ell\nmid k}} \left(1-\frac{1}{\ell^2} \right)^{-1} K(G_{m, k}) \\
&= \frac{N}{m^2\phi(N)}\prod_{\ell|(m,k)} \left(1-\frac{1}{\ell}\right)^{-1}
\prod_{\substack{\ell | m \\ \ell\nmid k}} \left(1-\frac{1}{\ell^2} \right)^{-1} K(G_{m, k}) \\
&= \frac{N}{m^2\phi(N)} \prod_{\ell\nmid N}\left(1-\frac{\leg{N-1}{\ell}^2\ell+1}{(\ell-1)^2(\ell+1)}\right)
\prod_{\ell\mid (m,k)}\left(1+\frac{1}{\ell}\right)
\prod_{\substack{\ell\mid k\\ \ell\nmid m}}\left(1-\frac{1}{\ell(\ell-1)}\right) . } Therefore, \als{
\sum_{\substack{m^2k = N \\ m\le (\log x)^A}} \frac{K(G_{m, k}) |G_{m, k}|}{|\Aut(G_{m, k})|}
&= \sum_{m^2k = N} K(G_{m, k}) \frac{|G_{m, k}|}{|\Aut(G_{m, k})|} + O\left(\frac{N}{(\log x)^A\phi(N)}\right) \\ &= \frac{N}{\phi(N)} \prod_{\ell\nmid N}\left(1-\frac{\leg{N-1}{\ell}^2\ell+1}{(\ell-1)^2(\ell+1)}\right) \cdot S(N)
+ O\left(\frac{N}{(\log x)^A\phi(N)}\right), } where \[ S(N) = \sum_{m^2k=N} \frac{1}{m^2} \prod_{\ell\mid (m,k)}\left(1+\frac{1}{\ell}\right)
\prod_{\substack{\ell\mid k\\ \ell\nmid m}}\left(1-\frac{1}{\ell(\ell-1)}\right) . \] Note that \als{ S(\ell^v) &= 1-\frac{1}{\ell(\ell-1)} + \sum_{1\le j\le v/2} \frac{1}{\ell^{2j}} \left(1+\frac{{\bf 1}_{j<v/2}}{\ell}\right) \\
&=1-\frac{1}{\ell(\ell-1)} + \sum_{1\le j\le v/2} \frac{1}{\ell^{2j}}
+ \sum_{1\le j\le v/2} \frac{{\bf 1}_{j<v/2}}{\ell^{2j+1}} \\
&=1-\frac{1}{\ell(\ell-1)} + \sum_{i=2}^v \frac{1}{\ell^i} = 1-\frac{1}{\ell^v(\ell-1)} . } So we conclude that \[
\sum_{\substack{m^2k = N \\ m\le (\log x)^A}} \frac{K(G_{m, k}) |G_{m, k}|}{|\Aut(G_{m, k})|}
= \frac{K(N) N}{\phi(N)} + O\left(\frac{N}{(\log x)^A\phi(N)}\right) , \] which yields relation \eqref{MEN-3 goal}, thus completing the proof of Theorem \ref{MEN-3}. \end{proof}
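The evaluation of $S(\ell^v)$ carried out in the proof above can be confirmed in exact arithmetic by computing $S(N)$ for prime powers $N=\ell^v$ directly from its definition; the following Python sketch does this with rational arithmetic.

```python
from fractions import Fraction

def prime_factors(n):
    fs, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            fs.add(p)
            n //= p
        p += 1
    if n > 1:
        fs.add(n)
    return fs

def S(N):
    """S(N) = sum over factorizations N = m^2 * k of
    (1/m^2) * prod_{l | gcd(m,k)} (1 + 1/l) * prod_{l | k, l∤m} (1 - 1/(l(l-1)))."""
    total = Fraction(0)
    m = 1
    while m * m <= N:
        if N % (m * m) == 0:
            k = N // (m * m)
            term = Fraction(1, m * m)
            for l in prime_factors(m) & prime_factors(k):
                term *= 1 + Fraction(1, l)
            for l in prime_factors(k) - prime_factors(m):
                term *= 1 - Fraction(1, l * (l - 1))
            total += term
        m += 1
    return total

# Compare with the closed form S(l^v) = 1 - 1/(l^v (l-1)) obtained in the proof.
for l in (2, 3, 5, 7):
    for v in range(1, 7):
        assert S(l ** v) == 1 - Fraction(1, l ** v * (l - 1))
```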
\section{Reduction to an average of Dirichlet series}\label{deuring lemma}
In this section, we prove Lemma \ref{formula for M(G)} using the theory developed by Deuring~\cite{Deu} and somewhat generalized by Schoof~\cite{Schoof}. As before, we fix a group $G=G_{m,k}=\mathbb Z/m\mathbb Z\times\mathbb Z/mk\mathbb Z$, and we set $N=|G|=m^2k$. Given a prime $p$ and an integer $n$ such that $n^2|N$, we define \[
M_p(N;n)=\sum_{\substack{E/\mathbb F_p\\ \#E(\mathbb F_p)=N\\ E(\mathbb F_p)[n]\cong G_{n,1}}}\frac{1}{|\Aut_p(E)|}, \] the weighted number of isomorphism classes of elliptic curves over $\mathbb F_p$ which have exactly $N$ rational points and whose rational $n$-torsion subgroup is isomorphic to $G_{n,1}=\mathbb Z/n\mathbb Z\times\mathbb Z/n\mathbb Z$. It is not hard to relate $M_p(G)$ to a sum involving $M_p(N;n)$. This is accomplished via an inclusion-exclusion argument, which gives the relation \begin{equation}\label{inclusion exclusion}
M_p(G)=\sum_{r^2 | k} \mu(r)M_p(N; r m). \end{equation}
In~\cite{Schoof}, Schoof essentially gave a formula for $M_p(N;n)$ in terms of class numbers. However, one needs to exercise care here as Schoof counts each $\mathbb F_p$-isomorphism class $E$ with weight $1$ instead of with weight $1/|\Aut_p(E)|$ as we do here. Given a negative discriminant $D$, we let $H(D)$ denote the \textit{Kronecker class number}, which is defined as \[ H(D)=\sum_{ \substack{f^2\mid D\\ D/f^2 \equiv 0,1\pmod{4} }} \frac{h(D/f^2)}{w(D/f^2)}. \] Here, as usual, $h(d)$ denotes the (ordinary) class number of the unique imaginary quadratic order of discriminant $d$, and $w(d)$ denotes the cardinality of its unit group. Then letting \[ D_N(p)=(p+1-N)^2-4p=(p-1-N)^2-4N \]
and reworking the proofs of~\cite[Lemma 4.8 and Theorem 4.9]{Schoof} to count each class $E$ with weight $1/|\Aut_p(E)|$, we arrive at the formula \eq{reduction to class number avg}{ M_p(N;n)=
\begin{cases}
H\left(\frac{D_N(p)}{n^2}\right)&\text{if }p\in(N^-,N^+)\text{ and }p\equiv 1\pmod n,\\
0&\text{otherwise}.
\end{cases} } Note here that $D_N(p)/n^2$ is a negative discriminant whenever $p\in(N^-,N^+)$, $p\equiv 1\pmod n$, and $n^2\mid N$.
\begin{lma}\label{Deuring for groups} Let $m,k\in\mathbb N$ and recall that $d(p) = d_{m,k}(p)$ is defined by \eqref{def-dp}. If $p\in (N^-,N^+)$ and $p\equiv 1\pmod m$, then \[ M_p(G_{m, k})= \sum_{\substack{f^2\mid d(p),\, (f,k)=1\\ \frac{d(p)}{f^2}\equiv 0,1\pmod 4}}\frac{h(d(p)/f^2)}{w(d(p)/f^2)}. \] Otherwise, $M_p(G_{m, k})=0$. \end{lma}
\begin{rmk} The above formula is amenable to computation. Indeed, given a prime $p$ and any $m$ and $k$, very simple modifications to the usual quadratic forms algorithm for computing class numbers (see~\cite[pp.~99--100]{BV:2007} for example) make it possible to compute $M_p(G_{m, k})$ using at most $O(k)$ arithmetic operations, which is reasonable for small $k$. If we put \begin{equation*} H_k(D)=\sum_{\substack{f^2\mid D,\, (f,k)=1\\ \frac{D}{f^2}\equiv 0,1\pmod 4}}\frac{h(D/f^2)}{w(D/f^2)} \end{equation*} for each negative discriminant $D$ and each positive integer $k$, then the only modifications needed are as follows. When the algorithm produces the (not necessarily primitive) form $ax^2+bxy+cy^2$, say with $(a,b,c)=f\ge1$, it is counted subject to the following rules, provided that $(f,k)=1$. \begin{enumerate} \item Forms proportional to $x^2+y^2$ are counted with weight $1/4$. \item Forms proportional to $x^2+xy+y^2$ are counted with weight $1/6$. \item All other forms are counted with weight $1/2$. \end{enumerate} Similarly, tables of $M(G_{m, k})$ or $M_p(G_{m, k})$ values can be computed for $m$ and $k$ of modest size by simultaneously computing a table of values of $H_k(D)$. \end{rmk}
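The procedure described in the remark can be sketched as follows: enumerate all reduced forms $ax^2+bxy+cy^2$ of discriminant $D<0$ (that is, $-a<b\le a\le c$, with $b\ge0$ when $a=c$), not necessarily primitive, and weight each form according to its content $f=(a,b,c)$. The Python sketch below (the function name $H_k$ is illustrative) computes $H_k(D)$ in this way; by Lemma \ref{Deuring for groups}, $M_p(G_{m,k})=H_k(d(p))$ for admissible $p$.

```python
from fractions import Fraction
from math import gcd

def H_k(D, k=1):
    """H_k(D): enumerate the reduced forms (a, b, c) of discriminant D = b^2 - 4ac < 0,
    keep those whose content f = gcd(a, b, c) is coprime to k, and weight them by
    1/4 (multiples of x^2 + y^2), 1/6 (multiples of x^2 + xy + y^2), 1/2 (all others)."""
    assert D < 0 and D % 4 in (0, 1)
    total = Fraction(0)
    a = 1
    while 3 * a * a <= -D:          # reduced forms satisfy 3a^2 <= |D|
        for b in range(-a + 1, a + 1):
            if (b * b - D) % (4 * a) != 0:
                continue
            c = (b * b - D) // (4 * a)
            if c < a or (a == c and b < 0):
                continue             # not reduced
            f = gcd(gcd(a, abs(b)), c)
            if gcd(f, k) != 1:
                continue
            if (a, b, c) == (f, 0, f):
                total += Fraction(1, 4)
            elif (a, b, c) == (f, f, f):
                total += Fraction(1, 6)
            else:
                total += Fraction(1, 2)
        a += 1
    return total
```

For example, $H(-16)=h(-16)/2+h(-4)/4=3/4$, coming from the forms $x^2+4y^2$ and $2x^2+2y^2$, and dropping the imprimitive form gives $H_2(-16)=1/2$.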
\begin{proof} It follows from~\eqref{reduction to class number avg} that $M_p(G)=0$ unless $p\in(N^-,N^+)$ and $p\equiv 1\pmod m$. Therefore, assume that $p\in (N^-,N^+)$ and $p\equiv 1\pmod m$, and write $k=s^2t$ with $t$ square-free. Combining relations~\eqref{inclusion exclusion} and~\eqref{reduction to class number avg} with the definition of the Kronecker class number, we find that \begin{equation*} \begin{split} M_p(G) =\sum_{\substack{r\mid s\\ p\equiv 1\pmod{rm}}}\mu(r)H\left(\frac{D_N(p)}{(rm)^2}\right) &=\sum_{\substack{r\mid s\\ p\equiv 1\pmod{rm}}}\mu(r)H\left(\frac{d(p)}{r^2}\right)\\ &=\sum_{\substack{r\mid s\\ p\equiv 1\pmod{rm}}}\mu(r)
\sum_{\substack{f^2\mid\frac{d(p)}{r^2}\\ \frac{d(p)}{(rf)^2}\equiv 0,1\pmod 4}}
\frac{h(d(p)/(rf)^2)}{w(d(p)/(rf)^2)}\\ &=\sum_{\substack{r\mid s\\ p\equiv 1\pmod{rm}}}\mu(r)
\sum_{\substack{f^2\mid d(p),\, r\mid f\\ \frac{d(p)}{f^2}\equiv 0,1\pmod 4}}\frac{h(d(p)/f^2)}{w(d(p)/f^2)}. \end{split} \end{equation*} Now interchanging the sum over $r$ with the sum over $f$ and recalling the identity \begin{equation*} \sum_{r\mid n}\mu(r)=\begin{cases}1&\text{if }n=1,\\ 0&\text{otherwise},\end{cases} \end{equation*} we arrive at the formula \begin{equation*} \begin{split} M_p(G) &= \sum_{\substack{f^2\mid d(p)\\ (f,s,(p-1)/m)=1\\ \frac{d(p)}{f^2}\equiv 0,1\pmod 4}}\frac{h(d(p)/f^2)}{w(d(p)/f^2)}. \end{split} \end{equation*}
In order to complete the proof, it is sufficient to show that, in the above sum, the condition $(f,s,(p-1)/m)=1$ implies the simpler condition $(f,k)=1$, the converse implication being immediate. To this end, we write $p=1+jm$ and assume that $(f,s,(p-1)/m)=(f,s,j)=1$. Then $d(p)=(j-mk)^2-4k$, and the condition $d(p)/f^2\equiv 0, 1\pmod 4$ may be rewritten as \begin{equation}\label{disc condition} (j-mk)^2-4k\equiv 0, f^2\pmod{4f^2}. \end{equation} Now let $\ell$ be any prime dividing $(f,k)$. Then the above congruence implies that $\ell\mid j$, but that implies that $\ell^2\mid (j-mk)^2$. Whence $\ell^2\mid 4k$. If $\ell$ is odd, then we have that $\ell^2\mid k$, and hence $\ell\mid (f,s,j)=1$, which is a contradiction. If $\ell=2$, then we divide~\eqref{disc condition} through by $4$ to obtain \begin{equation*} \left(\frac{j}{2}-m\frac{k}{2}\right)^2-k\equiv 0, \frac{f^2}{4}\pmod{f^2}. \end{equation*} Since $\ell=2\mid (f,k)$, we have that $k$ is even and congruent to a difference of two squares modulo $4$. This in turn implies that $k\equiv 0\pmod 4$, i.e., $2\mid s$. Thus, in this case we also have the contradiction $\ell=2\mid (f,s,j)=1$. Therefore, we conclude that $(f,k)=1$, and this completes the proof of the lemma. \end{proof}
Lemma~\ref{Deuring for groups} together with the class number formula immediately yields Lemma \ref{formula for M(G)}.
\section{Local computations}\label{local computations}
In this section we gather some local computations which we will need in the proofs of Theorem \ref{approximation of M(G)} and Proposition \ref{startwiththat}. As before, we continue to assume that $m, k,$ and $N$ are positive integers with $N=|G_{m, k}|=m^2k$.
\begin{lma}\label{generic quad lemma} Let $\ell$ be an odd prime. For $e\ge1$, $(d,\ell)=1$ and $(a,b)=1$, we have that \[ \#\{j\in\mathbb Z/\ell^e\mathbb Z : j^2\equiv d\pmod{\ell^e}\} = 1+ \leg{d}{\ell} \] and \[ \#\{j\in\mathbb Z/\ell^e\mathbb Z : j^2\equiv d\pmod{\ell^e},\,(a+bj,\ell)=1\} = 1+ \leg{a^2-db^2}{\ell}^2 \leg{d}{\ell} . \] \end{lma}
\begin{proof} The first formula is classical. For the second, we first note that if $\leg{d}{\ell}=-1$, then $\leg{a^2-db^2}{\ell}^2=1$, and the formula holds. Now assume that $\leg{d}{\ell}=1$, so that there are exactly two solutions to the congruence $j^2\equiv d\mod{\ell^e}$, say $\pm j_0$. If $\ell\mid b$, then the condition $(a+bj,\ell)=1$ is satisfied trivially for all $j\in\mathbb Z$, and the claimed result follows. Finally, if $\ell\nmid b$, then we need to exclude exactly one of the solutions when $a\equiv \pm bj_0\mod{\ell}$, that is to say when $a^2\equiv b^2d\mod{\ell}$. So the claimed formula holds in this last case too. \end{proof}
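Both counting formulas of Lemma \ref{generic quad lemma} are easy to verify numerically; the sketch below does so for small prime powers, computing the Legendre symbol via Euler's criterion.

```python
def legendre(d, l):
    """Legendre symbol (d/l) for an odd prime l, via Euler's criterion."""
    d %= l
    if d == 0:
        return 0
    return 1 if pow(d, (l - 1) // 2, l) == 1 else -1

def count_roots(l, e, d, ab=None):
    """#{j mod l^e : j^2 = d (mod l^e)}, optionally restricted to (a + b*j, l) = 1."""
    mod = l ** e
    js = [j for j in range(mod) if (j * j - d) % mod == 0]
    if ab is not None:
        a, b = ab
        js = [j for j in js if (a + b * j) % l != 0]
    return len(js)

for l in (3, 5, 7):
    for e in (1, 2):
        for d in range(1, l):                        # (d, l) = 1
            assert count_roots(l, e, d) == 1 + legendre(d, l)
            for a, b in [(1, 1), (2, 3), (1, l)]:    # (a, b) = 1
                expected = 1 + legendre(a * a - d * b * b, l) ** 2 * legendre(d, l)
                assert count_roots(l, e, d, (a, b)) == expected
```

Here $\leg{a^2-db^2}{\ell}^2$ is implemented as the square of the Legendre symbol, i.e.\ the indicator of $\ell\nmid a^2-db^2$, exactly as in the lemma.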
We set \eq{T(n) def}{ T(n) = \sum_{d\mod n} \leg{d-4k}{n} \#\{j\mod n: j^2\equiv d\mod n,\ (N+1+jm,n)=1\}. }
\begin{prop}\label{odd primes prop} Let $\ell$ be a prime not dividing $2k$ and $w\ge1$. Then \[ \frac{T(\ell^w)}{\ell^{w-1}} = - \leg{m(N-1)}{\ell}^2 +
\begin{cases}
\ell - 1 - \leg{k}{\ell}
&\mbox{if $w$ is even},\cr
- 1
&\mbox{if $w$ is odd}.
\end{cases} \] \end{prop}
\begin{proof} We write $T(\ell^w)=T_1(\ell^w)+T_2(\ell^w)$, where $T_1(\ell^w)$ is the same sum as $T(\ell^w)$ with the additional restriction that $\ell|d$ and $T_2(\ell^w)$ is the remaining sum. First, we calculate $T_1(\ell^w)$. We have that \als{ T_1(\ell^w)
&= \sum_{\substack{d\mod{\ell^w}\\\ell|d}}
\leg{d-4k}{\ell^w} \sum_{\substack{j\mod{\ell^w} \\ j^2\equiv d\mod {\ell^w}}}\leg{N+1+jm}{\ell}^2 \\
&= \sum_{\substack{d\mod{\ell^w}\\\ell|d}}
\leg{-4k}{\ell}^w \leg{N+1}{\ell}^2 \sum_{\substack{j\mod{\ell^w},\, \ell|j \\ j^2\equiv d\mod {\ell^w}}} 1 \\
&= \leg{-k}{\ell}^w \leg{N+1}{\ell}^2 \sum_{\substack{j\mod{\ell^w} \\ \ell|j }} 1
= \leg{-k}{\ell}^w \leg{N+1}{\ell}^2 \ell^{w-1} . } Finally, we compute $T_2(\ell^w)$. Applying Lemma \ref{generic quad lemma}, we find that \als{ T_2(\ell^w) &= \sum_{\substack{d\mod{\ell^w}\\ (d,\ell)=1}} \leg{d-4k}{\ell}^w
\left( 1+\leg{(N+1)^2-dm^2}{\ell}^2 \leg{d}{\ell} \right) \\
&= \ell^{w-1} \sum_{d\mod{\ell}} \leg{d-4k}{\ell}^w
\left( 1+\leg{(N+1)^2-dm^2}{\ell}^2 \leg{d}{\ell} \right) - \ell^{w-1}\leg{-k}{\ell}^w. } If $\ell\mid m$, then $\leg{(N+1)^2-dm^2}{\ell}=1$ for all $d\mod\ell$. On the other hand, if $\ell \nmid m$, then there is precisely one $d\mod\ell$ such that $(N+1)^2-dm^2\equiv 0\mod{\ell}$, for which we have that \[ \leg{d-4k}{\ell}^{w}=\leg{m^2d-4m^2k}{\ell}^{w} = \leg{(N-1)^2}{\ell}^{w} = \leg{N-1}{\ell}^2 \quad\text{and}\quad \leg{d}{\ell} = \leg{N+1}{\ell}^2. \] Thus, whether $\ell$ divides $m$ or not, we have \[ \frac{T_2(\ell^w)}{\ell^{w-1}} = - \leg{-k}{\ell}^w - \leg{m(N-1)(N+1)}{\ell}^2
+ \sum_{d\mod\ell} \leg{d-4k}{\ell}^{w} \left( 1+ \leg{d}{\ell} \right), \] which implies that \als{ \frac{T(\ell^w)}{\ell^{w-1}} & = \leg{-k}{\ell}^w \leg{N+1}{\ell}^2 - \leg{-k}{\ell}^w - \leg{m(N-1)(N+1)}{\ell}^2 \\
&\quad + \sum_{d\mod\ell} \leg{d-4k}{\ell}^{w} \left( 1+ \leg{d}{\ell} \right) . }
Note that if $\ell|N+1$, then $\leg{-k}{\ell}=1$ and thus \[ \leg{-k}{\ell}^w \leg{N+1}{\ell}^2 - \leg{-k}{\ell}^w - \leg{m(N-1)(N+1)}{\ell}^2 = -1 = - \leg{m(N-1)}{\ell}^2, \] whereas if $\ell\nmid N+1$, then \[ \leg{-k}{\ell}^w \leg{N+1}{\ell}^2 - \leg{-k}{\ell}^w - \leg{m(N-1)(N+1)}{\ell}^2 = - \leg{m(N-1)}{\ell}^2 . \] So \als{ \frac{T(\ell^w)}{\ell^{w-1}}
&= - \leg{m(N-1)}{\ell}^2
+ \sum_{d\mod\ell} \leg{d-4k}{\ell}^{w} \left( 1+ \leg{d}{\ell} \right) . } If now $w$ is odd, then \[
\sum_{d\mod\ell} \leg{d-4k}{\ell}^{w} \left( 1+ \leg{d}{\ell} \right) = \sum_{d\mod\ell} \leg{d-4k}{\ell}\leg{d}{\ell} = -1 , \] using for example \cite[Exercise 1.1.9]{Stepanov} since $(2k, \ell)=1$. Finally, if $w$ is even, then \[
\sum_{d\mod\ell} \leg{d-4k}{\ell}^{w} \left( 1+ \leg{d}{\ell} \right)
= \ell-1 + \sum_{\substack{d\mod\ell \\ d\not\equiv 4k\mod{\ell}}} \leg{d}{\ell}
= \ell-1 - \leg{k}{\ell} , \] which completes the proof of the proposition. \end{proof}
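Proposition \ref{odd primes prop} can be tested numerically by computing $T(\ell^w)$ directly from the definition \eqref{T(n) def} and comparing with the closed form; for $n=\ell^w$ the Jacobi symbol $\leg{\cdot}{\ell^w}$ is the $w$-th power of the Legendre symbol. A Python sketch (an independent check, not part of the proof):

```python
def legendre(d, l):
    """Legendre symbol (d/l) for an odd prime l, via Euler's criterion."""
    d %= l
    if d == 0:
        return 0
    return 1 if pow(d, (l - 1) // 2, l) == 1 else -1

def T(l, w, m, k):
    """T(l^w) computed directly from its definition, with N = m^2 * k."""
    N, n = m * m * k, l ** w
    total = 0
    for d in range(n):
        sym = legendre(d - 4 * k, l) ** w        # Jacobi symbol ((d - 4k) / l^w)
        if sym == 0:
            continue
        cnt = sum(1 for j in range(n)
                  if (j * j - d) % n == 0 and (N + 1 + j * m) % l != 0)
        total += sym * cnt
    return total

def T_formula(l, w, m, k):
    """Right-hand side of the proposition, valid when l does not divide 2k."""
    N = m * m * k
    main = (l - 1 - legendre(k, l)) if w % 2 == 0 else -1
    return l ** (w - 1) * (main - legendre(m * (N - 1), l) ** 2)

for l, m, k in [(3, 1, 1), (5, 2, 3), (7, 3, 2), (3, 2, 1)]:    # l never divides 2k
    for w in (1, 2):
        assert T(l, w, m, k) == T_formula(l, w, m, k)
```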
\begin{cor}\label{formula for P(ell)} For a prime $\ell$ not dividing $2k$, we have that \[ P(\ell) := 1 + \sum_{w\ge1} \frac{T(\ell^w)}{\ell^{2w-1}(\ell- \leg{m}{\ell}^2) }
= \frac{\ell^3-\leg{m}{\ell}^2\ell^2 - (1+\leg{m}{\ell}^2\leg{N-1}{\ell}^2)\ell - 1- \leg{N-1}{\ell}^2\leg{k}{\ell} }
{(\ell^2-1)(\ell-\leg{m}{\ell}^2)} . \] \end{cor}
\begin{proof} Proposition \ref{odd primes prop} and a straightforward computation imply that \[ P(\ell) = \frac{\ell^3-\leg{m}{\ell}^2\ell^2 - (1+\leg{m}{\ell}^2\leg{N-1}{\ell}^2)\ell + \leg{m}{\ell}^2-\leg{m(N-1)}{\ell}^2-1-\leg{k}{\ell} } {(\ell^2-1)(\ell-\leg{m}{\ell}^2)} . \] Finally, note that \[ \leg{m(N-1)}{\ell}^2+\leg{k}{\ell} - \leg{m}{\ell}^2 = \leg{N-1}{\ell}^2\leg{k}{\ell} , \]
since $\leg{k}{\ell}=\leg{m}{\ell}^2=1$ if $\ell|N-1$. \end{proof}
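Since, by Proposition \ref{odd primes prop}, $T(\ell^w)/\ell^{w-1}$ depends only on the parity of $w$, the series defining $P(\ell)$ splits into two geometric series, with $\sum_{w\text{ odd}}\ell^{-w}=\ell/(\ell^2-1)$ and $\sum_{w\text{ even}}\ell^{-w}=1/(\ell^2-1)$. The following sketch sums the series exactly and compares with the closed form of the corollary.

```python
from fractions import Fraction

def legendre(d, l):
    """Legendre symbol (d/l) for an odd prime l, via Euler's criterion."""
    d %= l
    if d == 0:
        return 0
    return 1 if pow(d, (l - 1) // 2, l) == 1 else -1

def P_series(l, m, k):
    """1 + sum_{w>=1} T(l^w) / (l^(2w-1) (l - (m/l)^2)), summed exactly by parity of w."""
    N = m * m * k
    a = legendre(m * (N - 1), l) ** 2
    x_odd = Fraction(-a - 1)                         # T(l^w)/l^(w-1) for odd w
    x_even = Fraction(l - 1 - legendre(k, l) - a)    # T(l^w)/l^(w-1) for even w
    C = l - legendre(m, l) ** 2
    return 1 + (x_odd * l + x_even) / ((l * l - 1) * C)

def P_closed(l, m, k):
    """Closed form from the corollary."""
    N = m * m * k
    mu = legendre(m, l) ** 2
    n1 = legendre(N - 1, l) ** 2
    num = l ** 3 - mu * l ** 2 - (1 + mu * n1) * l - 1 - n1 * legendre(k, l)
    return Fraction(num, (l * l - 1) * (l - mu))

for l, m, k in [(3, 1, 1), (5, 2, 3), (7, 3, 2), (3, 2, 1), (5, 5, 2)]:
    assert P_series(l, m, k) == P_closed(l, m, k)
```

The test cases include the two boundary situations handled separately in the proof, namely $\ell\mid m$ and $\ell\mid N-1$.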
\section{Proof of Proposition \ref{startwiththat}}\label{proof of Prop startwiththat}
This section is dedicated to the proof of Proposition \ref{startwiththat}, which gives an upper bound of the conjectured order of magnitude for the average of special values \[ \mathcal L(d(p)) = L \left(1, \displaystyle{\left( \frac{d(p)}{\cdot}\right)} \right) \] summed over integers with no small prime factors. A key role will be played by the fundamental lemma of sieve methods, i.e. Lemma \ref{lemma-FI}.
\begin{proof}[Proof of Proposition \ref{startwiththat}] We shall employ the notation \[
\rho(n) := \frac{|n|}{\phi(|n|)} = \prod_{\ell | n} \left(1-\frac{1}{\ell}\right)^{-1} . \] We will simplify the sum we are estimating with an application of the Cauchy-Schwarz inequality but, first, we massage the $L$-functions that appear in it. Note that if $p=1+jm$, then $d(p)=(j-mk)^2 - 4k \equiv j^2\pmod k$. So \als{ \mathcal L(d(p))^r
= \prod_{ \substack{ \ell | k \\ \ell\nmid j}} \left(1-\frac{1}{\ell} \right)^{-r}
\prod_{ \ell \nmid k} \left(1-\frac{ \leg{d(p)}{\ell} }{\ell}\right)^{-r}
\ll_r \rho(k)^r \rho((j,k))^{|r|} \mathcal L(k^2d(p))^r, } and consequently, \[ S := \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }}
\rho(d(p))^{s} \mathcal L(d(p))^r
\ll_r \rho(k)^r \sum_{\substack{N^-<p<N^+ \\ p =1+jm,\, j\in\mathbb N }} \rho((j,k))^{|r|} \rho(d(p))^s
\mathcal L(k^2d(p))^r. \] Hence the Cauchy-Schwarz inequality yields that \eq{C-S}{ \frac{S}{\rho(k)^r }
\ll_r \left(\sum_{\substack{N^-<p<N^+ \\ p =1+jm }} \rho((j,k))^{2|r|} \rho(d(p))^{2s} \right)^{\frac{1}{2}}
\left(\sum_{\substack{N^-<p<N^+ \\ p\equiv 1\pmod m }}
\mathcal L(k^2d(p))^{2r} \right)^{\frac{1}{2}}
=: \sqrt{S_1 S_2} , } say.
First, we estimate $S_1$. Note that \[
\rho(n)^v \asymp_v \prod_{\ell|n} \left(1+\frac{v}{\ell}\right)
= \sum_{a|n} \frac{\mu^2(a) \tau_{v}(a) }{a} , \] for any $v\ge0$. Since \[
\sum_{\substack{a|n \\ a>x}} \frac{\mu^2(a) \tau_v(a)}{a}
\le \frac{1}{x}\sum_{a|n} \mu^2(a)v^{\omega(a)}
= \frac{(v+1)^{\omega(n)}}{x} \ll_{v,\epsilon} \frac{n^\epsilon}{x} , \] we find that \eq{C-S-e1}{ S_1 &\ll_r \sum_{\substack{ N^-<p <N^+ \\ p=1+jm}}
\left( \sum_{\substack{a|(k,j) \\ a\le k^{1/5} }}\frac{\mu^2(a)\tau_{2|r|}(a)}{a} + O_r(k^{-1/6}) \right)
\left( \sum_{\substack{b|d(p) \\ b\le k^{1/5} }}\frac{\mu^2(b)\tau_{2s}(b)}{b} + O_s(k^{-1/6})\right) \\
&= \sum_{ \substack{ a,b\le k^{1/5} \\ a|k }} \frac{\mu^2(a)\mu^2(b)\tau_{2|r|}(a)\tau_{2s}(b) }{ab}
\sum_{\substack{N^-<p<N^+ \\ p=1+jm \\ a|j,\ b| d(p) }} 1
+O_{r,s}( k^{11/30} ), } using the trivial estimate $\#\{N^-<p<N^+:p\equiv1\mod m\}\ll \sqrt{N}/m=\sqrt{k}$. The innermost sum in the second line of \eqref{C-S-e1} equals \[ \sum_{\substack{ h \in \mathbb Z/[a,b]\mathbb Z \\ h \equiv 0 \pmod a \\ (h-mk)^2 \equiv 4k \pmod b}}
\sum_{\substack{ N^-<p <N^+ \\ p=1+jm \\ j\equiv h \pmod{[a,b]} }} 1
\ll \frac{\sqrt{N}}{\phi(m[a,b])\log(2k)}
\sum_{\substack{ h \in \mathbb Z/[a,b]\mathbb Z \\ h\equiv 0 \pmod a \\ (h-mk)^2 \equiv 4k \pmod b}} 1
\le \frac{\sqrt{N} \tau(b) }{\phi(m[a,b])\log(2k)} , \] where the first inequality follows from the Brun-Titchmarsh inequality and the second from the fact that $b$ is square-free. Since $\phi(m[a,b])\ge\phi(m)\phi([a,b])$, relation \eqref{C-S-e1} becomes \eq{estimation of S_1}{
S_1 &\ll_{r,s} \frac{\sqrt{N}}{\phi(m)\log(2k)} \sum_{ \substack{ a,b\le k^{1/5} \\ a|k }}
\frac{\mu^2(a)\mu^2(b)\tau_{2|r|}(a)\tau_{2s}(b)^2 }{a\cdot b\cdot \phi([a,b])}
+ k^{11/30}
\ll_{r,s} \frac{\sqrt{N}}{\phi(m)\log(2k)} . }
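The divisor-sum expansion used at the start of the estimation of $S_1$ rests on the exact identity $\prod_{\ell|n}(1+v/\ell)=\sum_{a|n}\mu^2(a)\tau_v(a)/a$, where $\tau_v(a)=v^{\omega(a)}$ for square-free $a$; the following sketch checks it in rational arithmetic.

```python
from fractions import Fraction

def prime_factors(n):
    fs, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            fs.add(p)
            n //= p
        p += 1
    if n > 1:
        fs.add(n)
    return fs

def squarefree_divisors(n):
    """All square-free divisors of n, as products of subsets of its prime factors."""
    divs = [1]
    for p in prime_factors(n):
        divs += [d * p for d in divs]
    return divs

# prod_{l | n} (1 + v/l) = sum over squarefree a | n of v^omega(a) / a
for n in range(1, 200):
    for v in (1, 2, 3):
        lhs = Fraction(1)
        for l in prime_factors(n):
            lhs *= 1 + Fraction(v, l)
        rhs = sum(Fraction(v ** len(prime_factors(a)), a) for a in squarefree_divisors(n))
        assert lhs == rhs
```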
Next, we turn to the estimation of \[ S_2 = \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }} \mathcal L(k^2d(p))^{2r} . \] Our first task is to replace the $L$-values that appear in the above sum with truncated Euler products. We set \[ S_3 = \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }} \mathcal L(k^2d(p);z^{80000})^{2r} \] with $z=\log(4k)$ and estimate the error \[ R:=S_2-S_3 \]
using Lemma~\ref{lemmashortproduct}. First note that since $d(p)$ is a discriminant and $|d(p)|\le 4k$ for $p\in (N^-,N^+)$, it follows that \eq{leg-k^2d(p)}{ \leg{k^2d(p)}{\cdot} }
is periodic modulo $k|d(p)|\le 4k^2$ and its conductor cannot exceed $|d(p)|\le 4k$. Thus, we may apply Lemma~\ref{lemmashortproduct} with $\alpha=100$ and $Q=4k$. Now let $d_1=d_1(p)$ be the discriminant of the quadratic number field $\mathbb Q(\sqrt{d(p)})$, so that the character in \eqref{leg-k^2d(p)} is induced by the primitive character $\leg{d_1}{\cdot}$. If $|d_1| \notin \mathcal{E}_{100}(4k)$, then we can approximate $\mathcal L(k^2d(p))^{2r}$ very well by $\mathcal L(k^2d(p);z^{80000})^{2r}$. Otherwise, we write $d(p) = d_1 b^2$ and note that \[ \mathcal L(k^2d(p))^{2r}
\le \rho(kb)^{2|r|} \mathcal L(d_1)^{2r}
\ll_r \rho(kb)^{2|r|}\cdot \begin{cases}
(\log|d_1|)^{2r} &\text{if}\ r\ge0,\cr
|d_1|^{1/8} &\text{if}\ r<0,
\end{cases} \] the second estimate being a consequence of Siegel's theorem. In any case, we find that \[ \mathcal L(k^2d(p))^{2r}
\ll_r (\rho(kb))^{2|r|} |d_1|^{1/8} \ll_r (kb |d_1|)^{1/8} \le (k |d(p)|)^{1/8} \le (2k)^{1/4}. \] Combining the above, we arrive at the estimate \als{
R &\ll_{r} \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }} \frac{(\log\log k)^{2|r|} }{\log^{100}(2k)}
+ \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m \\ |d_1| \in\mathcal{E}_{100}(4k)}} k^{1/4} . }
Note that if $p=1+jm$ is such that $|d_1| \in\mathcal{E}_{100}(4k)$, then $d(p)=d_1 b^2$ for some $b\in\mathbb{N}$, or equivalently, $(j-mk)^2-d_1 b^2=4k$. So for each fixed $d_1$ with $|d_1|\in \mathcal{E}_{100}(4k)$, there are at most $4\tau(4k)\ll k^{1/100}$ admissible values of $j$ (and hence of $p$). Consequently, \eq{estimation of R}{ R \ll_{r}
\sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }} \frac{(\log\log k)^{2|r|} }{\log^{100}(2k)}
+ k^{1/4} \cdot k^{1/100} \cdot |\mathcal{E}_{100}(4k)|
\ll_{r} \frac{\sqrt{N} }{ \log(2k)\phi(m)}, } by Lemma~\ref{lemmashortproduct} and the Brun-Titchmarsh inequality.
Finally, we turn to the estimation of $S_3$. First, note that \[ \mathcal L(k^2d(p);z^{80000})^{2r}
\ll_r \mathcal L(k^2d(p);\sqrt{z})^{2r}
\ll_r \prod_{ \substack{ \ell\nmid 2pk \\ 2|r|+1< \ell \le \sqrt{z} }} \left(1+2r\cdot \frac{\leg{d(p)}{\ell} }{\ell}\right) , \] by Mertens' estimate, which immediately implies that \[ S_3 \ll_r
\sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod{m} }}
\prod_{ \substack{ \ell\nmid 2pk \\ 2|r|+1 < \ell \le \sqrt{z} }}
\left(1+2r\cdot \frac{\leg{d(p)}{\ell} }{\ell}\right) . \] We cannot estimate this sum as it is because that would require information about primes in arithmetic progressions that are currently not available. We refer the reader to~\cite{DS-MEG} for a more detailed discussion about this issue. Instead, we extend the summation from primes $p$ to integers $n$ with no prime factors $\le k^{1/8}$ and we apply Lemma~\ref{lemma-FI} with $D=k^{1/4}$ and $y=k^{1/8}$. Hence \eq{S3-S4}{ S_3 \ll_r \sum_{\substack{N^-< n <N^+\\ n \equiv 1\pmod {m} }}(\lambda^+*1)(n)
\prod_{ \substack{2|r|+1 < \ell\le\sqrt{z} \\ \ell\nmid 2nk }} \left(1+2r\cdot \frac{\leg{d(n)}{\ell}}{\ell}\right)=:S_4, } by the positivity of the above Euler product. Expanding this product to a sum, opening the convolution $(\lambda^+*1)(n)$, and interchanging the order of summation yields \als{ S_4 &=
\sum_{\substack{\ell|a\ \Rightarrow\ 2|r|+1<\ell\le \sqrt{z} \\ (a,2k)=1}}\frac{\mu^2(a) \tau_{2r}(a) }{a}
\sum_{\substack{N^-<n<N^+\\ (n,a)=1 \\ n\equiv 1\pmod{m} }}
(\lambda^+*1)(n) \leg{d(n)}{a} \nonumber \\
&= \sum_{\substack{\ell|a\ \Rightarrow\ 2|r|+1<\ell\le \sqrt{z} \\ (a,2k)=1}}\frac{\mu^2(a)\tau_{2r}(a) }{a}
\sum_{\substack{b \leq k^{1/4}\\ (b,am)=1}}\lambda^+(b)
\sum_{\substack{N^-<n<N^+\\ (n,a)=1,\, b|n \\ n\equiv 1\pmod{m} }}\leg{d(n)}{a} . } Splitting the integers $n\in(N^-,N^+)$ according to the congruence class of $d(n)\pmod a$, we deduce that \eq{S_3}{
S_4 = \sum_{\substack{\ell |a\ \Rightarrow\ 2|r|+1<\ell\le \sqrt{z} \\ (a,2k)=1}}
\frac{\mu^2(a)\tau_{2r}(a)}{a}
\sum_{\substack{b \leq k^{1/4}\\ (b,am)=1}}\lambda^+(b)
\sum_{c\in\mathbb Z/a\mathbb Z} \leg{c}{a} S(a,b,c), } where \[ S(a,b,c)
:= \#\left\{ N^-<n<N^+ :
\begin{array}{ll}
n\equiv 1\pmod {m} & (n,a)=1 \\
n\equiv 0\pmod b & d(n)\equiv c\pmod{a}
\end{array}
\right\}. \] We fix $a$, $b$ and $c$ as above and calculate $S(a,b,c)$. Set $n=1+j m$, and define $\Delta(j)=(j-mk)^2-4k$, so that $d(n)=\Delta(j)$. Note that $n$ is counted by $S(a,b,c)$ if and only if $mk - 2\sqrt{k} < j < mk+2\sqrt{k}$, $\Delta(j)\equiv c\pmod a$, $1+jm\equiv0\pmod b$ and $(1 + jm, a) = 1$. Thus we have that \begin{equation}\label{T} S(a,b,c)=\left(\frac{4\sqrt{k}}{ab}+O(1)\right)J(a,b,c), \end{equation} where \[ J(a,b,c) :=\#\{j\in \mathbb Z/ab\mathbb Z :
\Delta(j)\equiv c\pmod a,\
1 + jm \equiv 0 \pmod b,\
(1 + jm, a) = 1\}. \] By the Chinese remainder theorem, we find that \[ J(a,b,c)=U(a,c) := \#\{j\in \mathbb Z/a\mathbb Z: \Delta(j)\equiv c\pmod a,\ (1+ jm,a) = 1\}, \] since $(b,m)=1$, and thus there is exactly one solution modulo $b$ to the equation $1+jm\equiv 0\pmod b$. Note that $U(a,c)\le \tau(a)$ by Lemma \ref{generic quad lemma} and that \[ \sum_{c\in\mathbb Z/a\mathbb Z}\leg{c}{a} U(a,c) = T(a), \] where $T(a)$ is defined by relation \eqref{T(n) def}. Together with relations~\eqref{S_3} and~\eqref{T}, this implies that \als{
S_4 &= 4\sqrt{k} \sum_{\substack{\ell|a\ \Rightarrow\ 2|r|+1<\ell\le \sqrt{z} \\ (a,2k)=1}}\frac{\mu^2(a)\tau_{2r}(a)T(a)}{a^2}
\sum_{\substack{b \leq k^{1/4}\\ (b,am)=1}}\frac{\lambda^+(b)}b \\
&\quad + O \left( k^{1/4} \sum_{ P^+(a)\le\sqrt{z} } \mu^2(a)\tau_{2|r|}(a)\tau(a) \right) . } The error term in the above estimate is \[
\ll k^{1/4} \sum_{ P^+(a)\le\sqrt{z} } \mu^2(a)\tau_{2|r|}(a)\tau(a)
= k^{1/4} \prod_{\ell\le \sqrt{z}} (1+4|r|) \ll_r k^{1/3} . \]
Finally, note that $|T(a)|\le \tau(a)$ for square-free values of $a$, by Proposition \ref{odd primes prop}. So applying Lemma~\ref{lemma-FI} we conclude that \als{
S_4 &\ll_r \sqrt{k} \sum_{\substack{P^+(a)\le\sqrt{z} \\ (a,2k )=1}}\frac{\mu^2(a) \tau(a) \tau_{2|r|}(a)}{a^2}
\prod_{\substack{\ell\le k^{1/8}\\ \ell\nmid am}}\left(1-\frac1{\ell}\right)+k^{1/3} \\
&\ll \sqrt{k}\sum_{\substack{P^+(a)\le\sqrt{z} \\ (a,2k )=1}}\frac{\mu^2(a) \tau(a) \tau_{2|r|}(a)}{a^2}
\frac1{\log(2k)} \frac{m}{\phi(m)} \frac{a}{\phi(a)} +k^{1/3} . } Inserting this estimate in \eqref{S3-S4}, we obtain the upper bound \eq{S_3 e2}{
S_3 \ll_r \frac{\sqrt{k}}{\log(2k)}\frac{m}{\phi(m)}\sum_{(a,2k)=1}\frac{\mu^2(a)\tau(a)\tau_{2|r|}(a)}{a\phi(a)}
\ll_r \frac{\sqrt{k}}{\log(2k)}\frac{m}{\phi(m)}. } Combining the above inequality with relations \eqref{C-S}, \eqref{estimation of S_1}, and \eqref{estimation of R} completes the proof of the proposition. \end{proof}
\section{Approximating $M(G)$}\label{approx}
In this section, we prove Theorem \ref{approximation of M(G)}. We start with a preliminary lemma.
\begin{lma}\label{prime sum} Let $N=m^2k>1$ and $d(p)=d_{m,k}(p)$. If $1\le q\le h \le \sqrt{N}$ and $(a,q)=1$, then \[
\sum_{\substack{N^-<p\le N^+ \\ p\equiv a\mod q }} \sqrt{|d(p)|}
= \frac{2\pi mk}{\phi(q)\log N}
+ O\left( \frac{h}{\sqrt{N}} \cdot \frac{mk}{q} + \frac{\sqrt{k}}{h\log N} \int_{N^-}^{N^+} E(y,h;q) dy \right) . \] \end{lma}
\begin{proof} We note the trivial bound $\#\{t<p\le t+h:p\equiv a\mod q\}\ll h/q$, which we will use several times throughout the proof. We have that \eq{prime sum log insertion}{
\sum_{\substack{N^-<p\le N^+ \\ p\equiv a\mod q }} \sqrt{|d(p)|}
= \sum_{\substack{N^-<p\le N^+ \\ p\equiv a\mod q }} \frac{\sqrt{|d(p)|}\log p}{\log N}
+ O\left( \frac{\sqrt{k}}{q} \right) . } Note that if $t=N+1+2\sqrt{N}u_0$ and $u_0\in[-1+2\eta,1-\eta]$ with $\eta:=h/\sqrt{4N}$, then \als{
\sqrt{|d(t)|} = 2\sqrt{k}\cdot \sqrt{1-u_0^2}
&= \frac{2\sqrt{k}}{\eta}\int_{u_0-\eta}^{u_0} \sqrt{1-u^2} \,\mathrm{d} u
+ O\left(\frac{\eta\sqrt{k}}{\sqrt{1-u_0^2}}\right) \\
&=\frac{4mk}{h}\int_{u_0-\eta}^{u_0} \sqrt{1-u^2} \, \mathrm{d} u
+ O\left(\frac{h\sqrt{k}}{\sqrt{4N-(N+1-t)^2}}\right) . }
Therefore \als{
\sum_{\substack{N^-<p\le N^+ \\ p\equiv a\mod q }} \frac{\sqrt{|d(p)|}\log p}{\log N}
&= \sum_{\substack{10h+ N^-<p\le -10h+N^+\\ p\equiv a\mod q }} \frac{\sqrt{|d(p)|}\log p}{\log N}
+ O\left( \frac{h^{1/2}N^{1/4}}{m} \cdot \frac{h}{q}\right) \\
&= \frac{4mk}{h\log N}\sum_{\substack{N^- +10h<p\le N^+ - 10h \\ p\equiv a\mod q }} (\log p)
\int_{\frac{p-N-1-h}{2\sqrt{N}}}^{\frac{p-N-1}{2\sqrt{N}}}
\sqrt{1-u^2} \,\mathrm{d} u \\
&\quad + O\left( \sum_{\substack{N^- +10h<p\le N^+ - 10h \\ p\equiv a\mod q }}
\frac{h\sqrt{k}}{\sqrt{(N^+-p)(p-N^-)}}
+ \frac{h^{3/2}N^{1/4}}{mq} \right) \\
&= \frac{4mk}{h\log N} \int_{-1+9\eta}^{1-10\eta} \sqrt{1-u^2}
\sum_{\substack{N+1+2u\sqrt{N}<p\le N+1+2u\sqrt{N}+h \\ N^- +10h<p\le N^+ - 10h \\ p\equiv a\mod q }}
(\log p) \,\mathrm{d} u \\
&\quad + O\left( \sum_{\substack{N^- +10h<p\le N^+ - 10h \\ p\equiv a\mod q }}
\frac{h\sqrt{k}}{\sqrt{(N^+-p)(p-N^-)}}
+ \frac{h^{3/2}N^{1/4}}{mq} \right) . } First, we simplify the main term. If $u\in[-1+10\eta,1-11\eta]$, then the condition that $N^- +10h<p\le N^+ - 10h$ can be discarded. On the other hand, if $u\in[-1,1]\setminus[-1+10\eta,1-11\eta]$, then \als{
\sqrt{1-u^2}
\sum_{\substack{N+1+2u\sqrt{N}<p\le N+1+2u\sqrt{N}+h \\ N^- +10h<p\le N^+ - 10h \\ p\equiv a\mod q }}
(\log p)
&\le \sqrt{1-u^2}
\sum_{\substack{N+1+2u\sqrt{N}<p\le N+1+2u\sqrt{N}+h \\ p\equiv a\mod q }}
(\log p) \\
&\ll \sqrt{\eta}\cdot \frac{h\log N}{q} . } Therefore \als{ &\int_{-1+9\eta}^{1-10\eta} \sqrt{1-u^2}
\sum_{\substack{N+1+2u\sqrt{N}<p\le N+1+2u\sqrt{N}+h \\ N^- +10h<p\le N^+ - 10h \\ p\equiv a\mod q }}
(\log p) \,\mathrm{d} u \\
&\qquad= \int_{-1}^1 \sqrt{1-u^2}
\sum_{\substack{N+1+2u\sqrt{N}<p\le N+1+2u\sqrt{N}+h \\ p\equiv a\mod q }}
(\log p) \, \mathrm{d} u
+ O\left( \frac{\eta^{3/2} h\log N}{q} \right) \\
&\qquad= \int_{-1}^{1} \sqrt{1-u^2} \frac{h}{\phi(q)} \, \mathrm{d} u
+ O\left( \int_{-1}^1 E(N+1+2u\sqrt{N},h;q) \mathrm{d} u + \frac{h^{5/2}\log N}{N^{3/4} q} \right) \\
&\qquad=\frac{\pi}{2}\cdot \frac{h}{\phi(q)} +
O\left( \frac{1}{\sqrt{N}} \int_{N^-}^{N^+} E(y,h;q) \mathrm{d} y + \frac{h^{5/2} \log N}{N^{3/4} q} \right) . } Consequently, \als{
\sum_{\substack{N^-<p\le N^+ \\ p\equiv a\mod q }} \sqrt{|d(p)|}
&= \frac{2\pi mk}{\phi(q)\log N}
+ O\left( \sum_{\substack{N^- +10h<p\le N^+ - 10h \\ p\equiv a\mod q }} \frac{h\sqrt{k}}{\sqrt{(N^+-p)(p-N^-)}}\right) \\
&+O\left( \frac{\sqrt{k}}{h\log N} \int_{N^-}^{N^+} E(y,h;q) dy + \frac{\sqrt{k}}{q} + \frac{h^{3/2}N^{1/4}}{mq}\right) , } where the term $\sqrt{k}/q$ inside the big-Oh comes from \eqref{prime sum log insertion}.
It remains to bound \[ \sum_{\substack{N^- +10h<p\le N^+ - 10h \\ p\equiv a\mod q }} \frac{1}{\sqrt{(N^+-p)(p-N^-)}} . \] We break this sum into two pieces, according to whether $p\le N+1$ or $p>N+1$. Note that \als{ \sum_{\substack{N^- +10h<p\le N+1 \\ p\equiv a\mod q }} \frac{1}{\sqrt{(N^+-p)(p-N^-)}}
&\ll N^{-1/4}\sum_{\substack{N^- +10h<n\le N+1 \\ n\equiv a\mod q }} \frac{1}{\sqrt{n-N^-}} . } We cover the range of summation by intervals of length $h$ to find that \als{ \sum_{\substack{N^- +10h<p\le N+1 \\ p\equiv a\mod q }} \frac{1}{\sqrt{(N^+-p)(p-N^-)}}
&\ll N^{-1/4} \sum_{1\le j\le 2\sqrt{N}/h} \frac{1}{\sqrt{jh}} \cdot
\sum_{\substack{N^-+jh<n\le N^-+jh+h \\ n\equiv a\mod q }} 1 \\
&\ll \frac{\sqrt{h}}{N^{1/4}q} \sum_{1\le j\le 2\sqrt{N}/h} \frac{1}{\sqrt{j}}
\ll \frac{1}{q} . }
Similarly, we find that \[ \sum_{\substack{N+1<p\le N^+ -10h \\ p\equiv a\mod q }} \frac{1}{\sqrt{(N^+-p)(p-N^-)}} \ll \frac{1}{q} \] too, which implies that \als{
\sum_{\substack{N^-<p\le N^+ \\ p\equiv a\mod q }} \sqrt{|d(p)|}
&= \frac{2\pi mk}{\phi(q)\log N}
+ O\left( \frac{\sqrt{k}}{h\log N} \int_{N^-}^{N^+} E(y,h;q) dy + \frac{h\sqrt{k}}{q} + \frac{h^{3/2}N^{1/4}}{mq}\right) . } Since $h^{3/2}=N^{3/4} (h/\sqrt{N})^{3/2}\le N^{3/4} (h/\sqrt{N})$, the lemma follows. \end{proof}
Using the above result and the results of Section \ref{local computations}, we will prove Theorem \ref{approximation of M(G)}. But first, we need to introduce some additional notation and state another intermediate result. Set \eq{J_r(v) def}{ J_r(v) = \{1\le j\le 2^{2v+3}:(j-mk)^2\equiv 4k+4^vr\pmod{2^{2v+3}}, \, jm\equiv 0\pmod 2\} } and \eq{J(v)}{
\mathcal J(v) = \frac{1}{2^{v_0-1}} \sum_{r\in\{0,1,4,5\}} \frac{|J_r(v)|}{2-\leg{r}{2}}, \quad\text{where}\quad v_0 =
\begin{cases}
2 &\text{if}\ 2\nmid m,\cr
3 &\text{if}\ 2|m.
\end{cases} } Finally, set \[ \mathcal J = \sum_{\substack{v\ge0 \\ (2^v,k)=1}} \frac{\mathcal J(v)}{8^v} . \] Then we have the following formula.
\begin{lma}\label{formula for J} \[ \mathcal J =
\begin{cases}
\frac{2}{3} &\mbox{if $2\nmid mk$},\cr
\frac{3}{2} &\mbox{if $2\mid (m,k)$},\cr
1 &\mbox{if $2\mid mk$, $2\nmid (m,k)$}.
\end{cases} \] \end{lma}
We postpone the proof of this lemma till the last section.
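In the meantime, the claimed values can be checked numerically straight from the definitions \eqref{J_r(v) def} and \eqref{J(v)}. The sketch below uses exact rational arithmetic; it assumes, as established in the last section, that $\mathcal J(v)$ is eventually constant in $v$ when $k$ is odd, so the geometric tail of $\mathcal J$ can be summed in closed form.

```python
from fractions import Fraction as F

def J_size(r, v, m, k):
    # |J_r(v)| computed directly from the definition
    M = 2 ** (2 * v + 3)
    return sum(1 for j in range(1, M + 1)
               if ((j - m * k) ** 2 - 4 * k - 4 ** v * r) % M == 0
               and (j * m) % 2 == 0)

KRON2 = {0: 0, 1: 1, 4: 0, 5: -1}       # Kronecker symbol (r/2)

def cal_J_v(v, m, k):
    v0 = 2 if m % 2 else 3
    return sum(F(J_size(r, v, m, k), 2 - KRON2[r])
               for r in (0, 1, 4, 5)) / 2 ** (v0 - 1)

def cal_J(m, k):
    if k % 2 == 0:
        return cal_J_v(0, m, k)         # only v = 0 satisfies (2^v, k) = 1
    head = sum(cal_J_v(v, m, k) / F(8) ** v for v in range(3))
    # J(v) is constant for v >= 3, so sum the geometric tail exactly
    return head + cal_J_v(3, m, k) * F(1, 8 ** 3) * F(8, 7)

assert cal_J(1, 3) == F(2, 3)                          # 2 does not divide mk
assert cal_J(2, 4) == cal_J(6, 2) == F(3, 2)           # 2 | (m, k)
assert cal_J(2, 3) == cal_J(2, 5) == cal_J(2, 9) == cal_J(3, 2) == 1
```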
\begin{proof}[Proof of Theorem \ref{approximation of M(G)}] We will prove the theorem with $8\epsilon$ in place of $\epsilon$, where $8\epsilon\in(0,1/3]$, and for $k$ large enough in terms of $\epsilon$; this is clearly sufficient. Our starting point is Lemma \ref{formula for M(G)}, which states that \[ M(G)
=\sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }}
\sum_{\substack{f^2\mid d(p), \, (f,k)=1 \\ d(p)/f^2\equiv 1,0\pmod 4 }}
\frac{\sqrt{|d(p)|} \mathcal L(d(p)/f^2) }{2\pi f} , \] where $N=m^2k$ and $d(p)=d_{m,k}(p)=((p-N-1)^2-4N)/m^2$ as usual. If $p=1+jm$, then $d(p)=(j-mk)^2-4k$. Therefore, if $\ell$ is an odd prime dividing $k$, so that $(\ell,f)=1$ for $f$ as in the above sum, then \[ \leg{d(p)/f^2}{\ell} =\leg{d(p)}{\ell} = \leg{j}{\ell}^2 . \] Next, we write $f=2^vg$ with $g$ odd and consider $r\in\{0,1,4,5\}$ such that $d(p)/f^2\equiv r\mod 8$. Then we have that $\leg{d(p)/f^2}{2}=\leg{r}{2}$. Moreover, since $g^2\equiv1\mod{8}$, we have that \[ d(p)/f^2 \equiv d(p)/2^{2v}\mod{8} . \]
Therefore, the conditions $f^2|d(p)$ and $d(p)/f^2\equiv r\mod 8$ are equivalent to having $d(p)\equiv 4^v r\mod{2^{2v+3}}$ and $g^2|d(p)$. Setting \[
\rho(g,d) = \prod_{\ell|g} \left(1-\frac{\leg{d}{\ell}}{\ell} \right)^{-1} \] then gives us that \[ \mathcal L(d(p)/f^2)
= \mathcal L((2kg)^2d(p)) \frac{\rho(g,d(p)/g^2)}{1-\leg{r}{2}/2} \prod_{\ell|k,\,\ell\nmid 2j}\left(1-\frac{1}{\ell}\right)^{-1}. \] Since \[
\prod_{\ell|k,\,\ell\nmid 2j}\left(1-\frac{1}{\ell}\right)^{-1}
= \sum_{\substack{a|k \\ (a,2j)=1}} \frac{\mu^2(a)}{\phi(a)}, \] we deduce that \als{ M(G)
&= \sum_{r\in\{0,1,4,5\}} \frac{1}{2-\leg{r}{2}} \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }}
\sum_{\substack{a|k \\ (a,2j)=1}}
\sum_{\substack{v\ge 0,\, (2^v,k)=1 \\ d(p)\equiv 4^v r\mod{2^{2v+3}} }}
\sum_{\substack{g^2\mid d(p)\\ (g,2k)=1 }} \frac{\mu^2(a) \sqrt{|d(p)|}}{\pi 2^v\phi(a)g} \\
&\qquad \times \rho(g,d(p)/g^2) \mathcal L((2kg)^2d(p) ) . } We now use Lemma~\ref{lemmashortproduct} to replace the $L$-value $\mathcal L((2kg)^2d(p))$ by a suitably truncated product. Arguing as in the proof of relation~\eqref{estimation of R}, we note that $\leg{(2kg)^2d(p)}{\cdot}$ is a character modulo
$2kg|d(p)|\le 16k^{5/2}$ with conductor not exceeding $|d(p)|\le 4k$. Thus, we may apply Lemma~\ref{lemmashortproduct} with $Q=4k$ and $5\alpha$ in place of $\alpha$ to replace $\mathcal L((2kg)^2d(p))$ by $\mathcal L((2kg)^2d(p); z)$, where we take $z=(\log(4k))^{200\alpha^2}$. The result is that \als{ M(G)
&= \sum_{r\in\{0,1,4,5\}} \frac{1}{2-\leg{r}{2}} \sum_{\substack{N^-<p<N^+ \\ p=1+jm,\,j\ge1 }}
\sum_{\substack{a|k \\ (a,2j)=1}}
\sum_{\substack{ (2^v,k)=1 \\ d(p)\equiv 4^v r\mod{2^{2v+3}} }}
\sum_{\substack{g^2\mid d(p) \\ (g,2k)=1 }} \frac{\mu^2(a) \sqrt{|d(p)|}}{\pi 2^v\phi(a) g} \\
&\qquad \times \rho(g,d(p)/g^2) \mathcal L((2kg)^2d(p) ; z )
+O_{\alpha}\left(\frac{k}{(\log k)^{\alpha}} \right) . } Next, we notice that we can truncate the sums over $a,g$ and $v$ at the cost of a small error term. More precisely, using the crude bound \[
\rho(g,d(p)/g^2)\mathcal L((2kg)^2d(p);z) \ll \frac{g}{\phi(g)} \log(2kg|d(p)|)\ll (\log k)^2 , \] we find that the contribution to $M(G)$ by those summands with $\max\{a,g,2^v\}>k^\epsilon$ is \eq{divisor bound}{ \ll \frac{\sqrt{k} (\log k)^3}{k^\epsilon} \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }}
\sum_{\substack{a|k \\ (2^vg)^2|d(p)}} 1
\ll_\epsilon k^{(1-\epsilon)/2} \sum_{\substack{ N^-<n<N^+ \\ n\equiv 1\mod m }} 1
\ll k^{1-\epsilon/2} } by the bound $\tau(n)\ll_\delta n^\delta$, with $\delta < \epsilon/4$. Moreover, \[
\mathcal L((2kg)^2d(p) ; z)
= \sum_{\substack{P^+(n)\le z \\ (n,2kg)=1}} \frac{\leg{d(p)}{n} }{n}
= \sum_{\substack{P^+(n)\le z,\, n\le k^\epsilon \\ (n,2kg)=1}} \frac{ \leg{d(p)}{n} }{n}
+ O_{\epsilon,\alpha}\left( (\log k)^{-\alpha-10}\right) \] by Lemma \ref{smooth}. Therefore, \als{
M(G) &= \sum_{r\in\{0,1,4,5\}} \frac{1}{2-\leg{r}{2}} \sum_{\substack{a|k,\, a\le k^\epsilon \\ (a,2)=1}}
\sum_{\substack{2^v\le k^\epsilon \\ (2^v,k)=1}}
\sum_{\substack{g\le k^\epsilon \\ (g,2k)=1 }}
\sum_{\substack{P^+(n)\le z,\, n\le k^\epsilon \\ (n,2kg)=1}} \frac{\mu^2(a)}{\pi 2^v\phi(a) g n} \\
&\qquad \times
\sum_{\substack{N^-<p<N^+ \\ p=1+jm,\,j\ge1 \\ (a,j)=1,\, g^2|d(p) \\ d(p)\equiv 4^v r \mod{2^{2v+3}} }}
\rho(g,d(p)/g^2) \leg{d(p)}{n}\sqrt{|d(p)|}
+ O_{\alpha,\epsilon} \left(\frac{k}{(\log k)^{\alpha}}\right) . }
We note that if $d(p)/g^2\equiv b\mod{g}$, then $\leg{d(p)/g^2}{\ell} = \leg{b}{\ell}$ for all $\ell|g$ and consequently, $\rho(g,d(p)/g^2)=\rho(g,b)$. So summing over possible choices for $d(p)/g^2\mod g$ and $d(p)\mod n$, we deduce that \als{
M(G) &= \sum_{r\in\{0,1,4,5\}} \frac{1}{2-\leg{r}{2}} \sum_{\substack{a|k,\, a\le k^\epsilon \\ (a,2)=1}}
\sum_{\substack{2^v\le k^\epsilon \\ (2^v,k)=1}}
\sum_{\substack{g\le k^\epsilon \\ (g,2k)=1 }}
\sum_{\substack{P^+(n)\le z,\, n\le k^\epsilon \\ (n,2kg)=1}} \frac{\mu^2(a)}{\pi 2^v \phi(a) g n } \\
&\qquad \times \sum_{b=1}^g \rho(g,b)
\sum_{c=1}^n\leg{c}{n}
S_r(v,a,g,b,n,c) + O_{\alpha,\epsilon} \left(\frac{k}{(\log k)^{\alpha}}\right) , } where \[ S_r(v,a,g,b,n,c) : = \sum_{\substack{N^-<p\le N^+ \\ p=1+jm,\,j\ge 1,\, (j,a)=1 \\ d(p)\equiv bg^2\mod{g^3} \\
d(p)\equiv 4^vr \mod{2^{2v+3}},\, d(p)\equiv c\mod{n} }} \sqrt{|d(p)|} . \] We write $p=1+jm$ and note that $(1+jm,2agn)=1$ if $k$ is large enough, since $2agn\le 2k^{3\epsilon}\le 2k^{1/8}$ by assumption, and $p>N^-=(m\sqrt{k}-1)^2$. Moreover, with this notation we have that $d(p)=\Delta(j):=(j-mk)^2-4k$. So, if we set \als{ J_r(v,a,g,b,n,c) =
\left\{ j\mod{2^{2v+3} a g^3n}:
\begin{array}{rl}
\Delta(j) \equiv 4^v r \pmod{2^{2v+3} },& \Delta(j) \equiv b g^2 \pmod{g^3}, \\
\Delta(j)\equiv c\mod{n},& (j,a)=1,\\
(1+jm, agn)=1,& jm\equiv 0\pmod2
\end{array}\right\}, } then we find that \[ S_r(v,a,g,b,n,c) = \sum_{j\in J_r(v,a,g,b,n,c)}
\sum_{\substack{N^-<p\le N^+ \\ p\equiv 1+jm\mod{2^{2v+3}ag^3nm}}}
\sqrt{|d(p)|} . \] Applying Lemma \ref{prime sum} with $h$ as in the statement of the theorem, we deduce that \als{
\frac{S_r(v,a,g,b,n,c)}{ |J_r(v,a,g,b,n,c)|}
&= \frac{2\pi mk}{\phi(2^{2v+3}ag^3nm)\log N} \\
&\quad + O\left( \frac{k}{4^vag^3n(\log k)^{\alpha+1}}
+ \frac{\sqrt{k}}{h\log k} \int_{N^-}^{N^+} E(y,h;2^{2v+3}ag^3nm) \mathrm{d} y \right) , } by our assumption that $h\le m\sqrt{k}/(\log k)^{\alpha+1}$ and that $m\le \sqrt{k}$. In order to compute the contribution of the above error term to $M(G)$, we note that \als{
\sum_{b=1}^g \rho(g,b) \sum_{c=1}^n |J_r(v,a,g,b,n,c)|
&\le \sum_{b=1}^g \sum_{c=1}^n
\sum_{ \substack{ j\mod {2^{2v+3} ag^3n} \\ \Delta(j)\equiv bg^2\pmod{g^3} \\
\Delta(j)\equiv 4^vr \pmod{2^{2v+3}} \\ 2|jm,\ \Delta(j)\equiv c \mod{n} }} \frac{g}{\phi(g)}
= \sum_{ \substack{ j\mod {2^{2v+3} a g^3n} \\ g^2 | \Delta(j),\, 2|jm \\ \Delta(j)\equiv 4^vr\pmod{2^{2v+3}} }} \frac{g}{\phi(g)} \\
&= \frac{g}{\phi(g)} agn \sum_{ \substack{ j\mod {2^{2v+3} g^2} \\ 2|jm,\ g^2 | \Delta(j) \\ \Delta(j)\equiv 4^vr\pmod{2^{2v+3}} }} 1
\ll \frac{ag^2n}{\phi(g)} \cdot \tau(g) \cdot |J_r(v)| }
by the Chinese remainder theorem and Lemma \ref{generic quad lemma}, where $J_r(v)$ is defined by \eqref{J_r(v) def}. Since we also have that $|J_r(v)|\ll \mathcal J(v) \ll1$ by Lemmas \ref{prime 2 lma1} and \ref{prime 2 lma2} below, we conclude that \als{ M(G) &= \frac{2mk}{\log N}\sum_{r\in\{0,1,4,5\}} \frac{1}{2-\leg{r}{2}}
\sum_{\substack{a|k,\, a\le k^\epsilon \\ (a,2)=1}}
\sum_{\substack{2^v\le k^\epsilon \\ (2^v,k)=1}}
\sum_{\substack{g\le k^\epsilon \\ (g,2k)=1 }}
\sum_{\substack{P^+(n)\le z,\, n\le k^\epsilon \\ (n,2kg)=1}}
\frac{\mu^2(a)}{2^{3v+v_0}\phi(a)\phi(g^4an^2m)} \\
&\qquad\times \sum_{b=1}^g \rho(g,b)
\sum_{c=1}^n \leg{c}{n}|J_r(v,a,g,b,n,c)|
+ O_{\alpha,\epsilon}\left( \frac{k}{(\log k)^\alpha} + E\right), } where $v_0$ is defined by \eqref{J(v)} and \[ E:= \frac{\sqrt{k}}{h} \sum_{q\le 8k^{7\epsilon}} \tau_3(q) \int_{N^-}^{N^+} E(y,h;mq) \mathrm{d} y , \] since, for any $q\in\mathbb N$, we have that \[
\sum_{\substack{q=2^{2v+3}ag^3n \\ a|k,\ (a,2)=(gn,2k)=1}} \tau(g) \le \sum_{g|q} \tau(g) = \tau_3(q) . \] If we set \[ I(g,b) = \#\{1\le j\le g^3: \Delta(j)\equiv bg^2\pmod{g^3}, \, (1+jm,g)=1\} \] and \[
F(a) = \#\{ 1\le j\le a: (j,a)=1,\,(1+jm,a)=1\} = \prod_{\ell^w \| a} \ell^{w-1} \left(\ell-1-\leg{m}{\ell}^2\right) , \] then the Chinese remainder theorem implies that \als{
\sum_{c=1}^n \leg{c}{n} |J_r(v,a,g,b,n,c)|
&= F(a) \cdot |J_r(v)| \cdot I(g,b) \sum_{c=1}^n \leg{c}{n}
\sum_{\substack{ j\mod n \\ \Delta(j)\equiv c\mod{n} \\(1+jm,n)=1 }}1 \\
&= F(a) \cdot |J_r(v)|\cdot I(g,b) \cdot T(n), } where $T(n)$ is defined by \eqref{T(n) def}. Therefore, \[ M(G) = \frac{mk}{\phi(m)\log N} S_1S_2S_3+ O_{\alpha,\epsilon}\left( \frac{k}{(\log k)^\alpha} + E\right), \] where \[ S_1 = \sum_{r\in\{0,1,4,5\}} \frac{2}{2-\leg{r}{2}}
\sum_{\substack{2^v\le k^\epsilon \\ (2^v,k)=1}} \frac{|J_r(v)|}{2^{3v+v_0}}
=\mathcal J + O(k^{-\epsilon}), \]
by the trivial estimate $|J_r(v)|\ll 4^v$, \als{
S_2 = \sum_{\substack{a|k,\,a\le k^\epsilon \\(a,2)=1}} \frac{\mu^2(a) F(a)}{\phi(a)a}
\prod_{\ell|a,\,\ell\nmid m} \frac{\ell}{\ell-1}
&= \prod_{\substack{\ell|k\\ \ell\ne 2}} \left( 1+ \frac{\ell-1-\leg{m}{\ell}^2}{(\ell-1)(\ell-\leg{m}{\ell}^2)}\right)
+ O(k^{-\epsilon/2}) \\
&= \prod_{\substack{\ell|k\\ \ell\ne 2}} \frac{\ell^2- \leg{m}{\ell}^2 \ell -1 }{(\ell-1)(\ell-\leg{m}{\ell}^2)} + O(k^{-\epsilon/2}) } by arguing as in relation \eqref{divisor bound}, and \[ S_3 = \sum_{\substack{g\le k^\epsilon \\ (g,2k)=1 }} \sum_{b=1}^g \frac{\rho(g,b) I(g,b) S_4(g) }{g^4}
\prod_{\ell|g,\,\ell\nmid m} \frac{\ell}{\ell-1} \] with \[
S_4(g) = \sum_{\substack{P^+(n)\le z,\, n\le k^\epsilon \\ (n,2kg)=1}} \frac{T(n)}{n^2}\prod_{\ell|n,\,\ell\nmid m}\frac{\ell}{\ell-1} . \] In the above, to factor $\phi(g^4 a n^2 m)$, we have used the identity $$
\phi(g^4 a n^2 m) = \phi(m) g^4 a n^2 \prod_{\ell|g,\,\ell\nmid m} \frac{\ell -1}{\ell} \;
\prod_{\ell|a,\,\ell\nmid m} \frac{\ell - 1}{\ell} \;
\prod_{\ell|n,\,\ell\nmid m} \frac{\ell - 1}{\ell} $$ which holds since $a,n$ and $g$ are pairwise coprime. Note that \als{
I(g,b) &= \prod_{\ell^w\| g} \#\{j\mod {\ell^{3w}}: (j-mk)^2\equiv 4k+bg^2\mod{\ell^{3w}},\, (1+jm,\ell)=1\} \\
& = \prod_{\ell|g} \left(1+ \leg{(N+1)^2-(4k+bg^2)m^2}{\ell}^2 \leg{4k+bg^2}{\ell}\right) \\
& = \prod_{\ell|g} \left(1+ \leg{N-1}{\ell}^2\leg{k}{\ell} \right) }
by Lemma \ref{generic quad lemma}, which is applicable here because $4k+bg^2\equiv 4k\not\equiv 0\mod\ell$ for all primes $\ell|g$. So we see that $I(g,b)$ is independent of $b$, which implies that \als{ \sum_{b=1}^g \rho(g,b) I(g,b)
&= I(g,0) \prod_{\ell^w\|g} \left( \sum_{b=1}^{\ell^w} \frac{1}{1-\leg{b}{\ell}/\ell} \right) \\
&= I(g,0) \prod_{\ell^w\|g} \left(\ell^{w-1}+ \ell^{w-1}\frac{\ell-1}{2}\frac{1}{1-1/\ell}
+\ell^{w-1}\frac{\ell-1}{2}\frac{1}{1+1/\ell}\right) \\
&= g I(g,0) \prod_{\ell|g} \frac{\ell^2+\ell+1}{\ell(\ell+1)} . } Thus we conclude that \[ S_3 = \sum_{\substack{g\le k^\epsilon \\ (g,2k)=1 }}
\frac{S_4(g)}{g^3} \prod_{\ell|g} \frac{(1+\leg{N-1}{\ell}^2\leg{k}{\ell})(\ell^2+\ell+1)}{(\ell-\leg{m}{\ell}^2)(\ell+1)}. \] Moreover, if $P(\ell)$ is as in Corollary \ref{formula for P(ell)}, then we have that \[
S_4(g) = \frac{P}{\prod_{\ell|g} P(\ell)} \left( 1
+ O\left(\frac{1}{(\log k)^{\alpha+1}}\right) \right) , \quad\text{where}\quad P : = \prod_{\ell\nmid 2k} P(\ell) . \] Therefore \als{ S_3\left( 1+ O\left(\frac{1}{(\log k)^{\alpha+1}}\right) \right)
&= P \cdot \prod_{\ell\nmid2k} \left( 1 + \sum_{w\ge1} \frac{(1+\leg{N-1}{\ell}^2\leg{k}{\ell})(\ell^2+\ell+1)}{\ell^{3w}(\ell-\leg{m}{\ell}^2)(\ell+1)P(\ell)}\right) \\
&= \prod_{\ell\nmid2k} \left( P(\ell) + \frac{1+\leg{N-1}{\ell}^2\leg{k}{\ell}}{(\ell^{2}-1)(\ell-\leg{m}{\ell}^2)} \right) \\
&= \prod_{\ell\nmid2k} \frac{\ell^3 - \leg{m}{\ell}^2 \ell^2 - (1+\leg{m(N-1)}{\ell}^2)\ell }{(\ell^{2}-1)(\ell-\leg{m}{\ell}^2)}\\
&= \prod_{\ell\nmid2N} \left( 1- \frac{\ell \leg{N-1}{\ell}^2+1}{(\ell^{2}-1)(\ell-1)} \right) . } Consequently, \als{ M(G) &= \frac{\mathcal J mk}{\phi(m)\log N} \prod_{\ell\nmid2N} \left( 1- \frac{\ell \leg{N-1}{\ell}^2+1}{(\ell^{2}-1)(\ell-1)} \right)
\prod_{\substack{\ell|k \\ \ell>2}} \left( 1+ \frac{\ell-1-\leg{m}{\ell}^2}{(\ell-1)(\ell-\leg{m}{\ell}^2)}\right) \\
&\quad + O_{\alpha,\epsilon}\left( \frac{k}{(\log k)^\alpha} + E\right) . } So the theorem follows by the above estimates together with Lemmas \ref{Aut(G)} and \ref{formula for J}. \end{proof}
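As a quick sanity check on the multiplicative formula for $F(a)$ used in the proof above, one can compare the direct count with the Euler product for small odd $a$; the following brute-force sketch is illustrative only.

```python
from math import gcd

def leg_sq(m, l):
    # (m/l)^2 for an odd prime l: 1 unless l divides m
    return 0 if m % l == 0 else 1

def F_direct(a, m):
    # F(a) = #{1 <= j <= a : (j, a) = 1, (1 + jm, a) = 1}
    return sum(1 for j in range(1, a + 1)
               if gcd(j, a) == 1 and gcd(1 + j * m, a) == 1)

def F_formula(a, m):
    # product over l^w || a of l^(w-1) (l - 1 - (m/l)^2), for odd a
    out, n, l = 1, a, 3
    while n > 1:
        w = 0
        while n % l == 0:
            n //= l
            w += 1
        if w:
            out *= l ** (w - 1) * (l - 1 - leg_sq(m, l))
        l += 2
    return out

for a in (3, 5, 9, 15, 21, 35, 45):     # small odd moduli
    for m in (1, 2, 3, 5, 6, 10):
        assert F_direct(a, m) == F_formula(a, m)
```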
\section{Powers of 2}\label{2}
The goal of this section is to show Lemma \ref{formula for J}, which gives the value of \begin{equation*} \mathcal J = \sum_{\substack{v\ge0 \\ (2^v,k)=1}} \frac{\mathcal J(v)}{8^v}, \end{equation*} where \begin{equation*}
\mathcal J(v) = \frac{1}{2^{v_0-1}} \sum_{r\in\{0,1,4,5\}} \frac{|J_r(v)|}{2-\leg{r}{2}}, \quad\quad v_0 =
\begin{cases}
2 &\text{if}\ 2\nmid m,\cr
3 &\text{if}\ 2|m,
\end{cases} \end{equation*} and \begin{equation*} J_r(v) = \{1\le j\le 2^{2v+3}:(j-mk)^2\equiv 4k+4^vr\pmod{2^{2v+3}}, \, jm\equiv 0\pmod 2\}. \end{equation*} We start with the following standard lemma.
\begin{lma}\label{generic quad lemma - prime 2} We have that \[ \#\{j\in\mathbb Z/8\mathbb Z : j^2\equiv d\pmod{8}\} =
\begin{cases}
2 &\text{if}\ d\equiv 0,4\mod 8,\\
4 &\text{if}\ d\equiv 1\mod 8,\\
0 &\text{otherwise}.
\end{cases} \] Moreover, if $d$ is odd and $e\ge3$, then \[ \#\{j\in\mathbb Z/2^e\mathbb Z : j^2\equiv d\pmod{2^e}\} =
\begin{cases}
4 &\text{if}\ d\equiv 1\mod 8, \\
0 &\text{otherwise}.
\end{cases} \] \end{lma}
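Both displayed counts can be confirmed by exhaustive computation; the following brute-force check is a sanity test, not a proof.

```python
def sq_count(d, e):
    # number of j mod 2^e with j^2 = d (mod 2^e)
    M = 2 ** e
    return sum(1 for j in range(M) if (j * j - d) % M == 0)

# mod 8: two solutions when d = 0, 4; four when d = 1; none otherwise
assert [sq_count(d, 3) for d in range(8)] == [2, 4, 0, 0, 2, 0, 0, 0]

# odd d and e >= 3: four solutions iff d = 1 (mod 8)
for e in range(3, 9):
    for d in range(1, 2 ** e, 2):
        assert sq_count(d, e) == (4 if d % 8 == 1 else 0)
```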
We shall use the above lemma to calculate $|J_r(v)|$ and $\mathcal J(v)$ when $(2^v,k)=1$. First, we note that if $v\ge1$, then $k$ must be odd and \eq{J_r(v) alt}{
|J_r(v)| = \begin{cases}
2\cdot \#\{ j\mod{ 2^{2v+1} } : j^2\equiv k+4^{v-1} r\mod{2^{2v+1}} \} &\text{if}\ 2|m,\\
0 &\text{if}\ 2\nmid m.
\end{cases} }
Indeed, when $v\ge1$, the relation $(j-mk)^2\equiv 4k+4^vr\mod{2^{2v+3}}$ implies that $2|(j-mk)$. Since $k$ is odd and we also have that $jm\equiv 0\mod{2}$, we deduce that $2\mid (m,j)$. Hence, $|J_r(v)|=0$ when $2\nmid m$. Assuming that $2\mid m$, we write $j=mk+2j'$ and find that \als{
|J_r(v)| &= \#\{j'\mod{2^{2v+2}} : j'^2\equiv k+4^{v-1} r\mod{2^{2v+1}} \} \\
&= 2\cdot \#\{j\mod{2^{2v+1} } : j^2\equiv k+4^{v-1} r\mod{2^{2v+1}} \} , } as claimed.
\begin{lma}\label{prime 2 lma1} Let $v\ge0$ with $(2^v,k)=1$. If $m$ is odd, then \[ \mathcal J(v) =
\begin{cases}
1 &\mbox{if $v=0$ and $2|k$},\\
\frac{2}{3} &\mbox{if $v=0$ and $2\nmid k$},\\
0 &\text{if $v\ge1$ and $2\nmid k$}.
\end{cases} \] \end{lma}
\begin{proof} The case $v\ge1$ follows by \eqref{J_r(v) alt}. Assume now that $v=0$. Since $m$ is odd, the condition $jm\equiv 0\pmod 2$ implies that every $j\in J_r(v)$ is even. Writing $j=2j'$, we deduce that \[
|J_r(0)| = \#\{j'\mod 4: (2j'-mk)^2\equiv 4k+r\mod{8}\} . \]
If $k$ is odd, then we must have that $(2j'-mk)^2-4k\equiv -3\mod8$ and thus $r=5$, in which case $|J_r(0)|=4$; otherwise $|J_r(0)|=0$. So \[ \mathcal J(0) = \frac{1}{2} \cdot \frac{4}{2-(-1)} = \frac{2}{3} . \]
Finally, assume that $k$ is even. Writing $z=j'-mk/2$, our task reduces to counting solutions to $4z^2\equiv r\mod 8$ with $1\le z\le 4$. If $r\in\{1,5\}$, then there are no such solutions, whereas if $r\in\{0,4\}$, then there are precisely two such solutions. Consequently, when $m$ is odd and $k$ is even, \[ \mathcal J(0)=\frac{1}{2} \left( \frac{2}{2-0} + \frac{2}{2-0} \right) = 1 , \] and the lemma follows in this case too. \end{proof}
\begin{lma}\label{prime 2 lma2} Let $v\ge0$ with $(2^v,k)=1$, and suppose that $2|m$. If $2|k$, then \[ \mathcal J(0) = \frac{3}{2} . \] If $k\equiv 1\pmod 8$, then \[ \mathcal J(v) = \begin{cases}
\frac{5}{6} &\mbox{if $v=0$}, \\
1 &\mbox{if $v=1$}, \\
2 &\mbox{if $v=2$}, \\
\frac{14}{3} &\mbox{if $v\ge3$}.
\end{cases} \] If $k\equiv 3,7\pmod 8$, then \[ \mathcal J(v) = \begin{cases}
\frac{5}{6} &\mbox{if $v=0$}, \\
\frac{4}{3} &\mbox{if $v=1$}, \\
0 &\mbox{if $v\ge2$}.
\end{cases} \] If $k\equiv 5\pmod 8$, then \[ \mathcal J(v) = \begin{cases}
\frac{5}{6} &\mbox{if $v=0$}, \\
1 &\mbox{if $v=1$}, \\
\frac{8}{3} &\mbox{if $v=2$}, \\
0 &\mbox{if $v\ge3$}.
\end{cases} \] \end{lma}
\begin{proof}
First, we calculate $|J_r(0)|$. Note that the condition $jm\equiv 0\mod2$ is trivially satisfied now since $2|m$. Therefore, a change of variable and Lemma \ref{generic quad lemma - prime 2} imply that \eq{J_r(0)}{
|J_r(0)| = \#\{j\mod 8: j^2\equiv 4k+r\mod{8}\} =
\begin{cases}
2 &\text{if}\ 4k+r\equiv 0,4\mod 8,\\
4 &\text{if}\ 4k+r\equiv 1\mod 8,\\
0 &\text{if}\ 4k+r\equiv 5\mod 8 .
\end{cases} } Thus, \[ \mathcal J(0) = \begin{cases}
\frac{1}{4}\left( \frac{2}{2-0}+\frac{4}{2-1} + \frac{2}{2-0} + \frac{0}{2-(-1)} \right) = \frac{3}{2}
&\text{if}\ 2|k,\\
\frac{1}{4} \left(\frac{2}{2-0}+\frac{0}{2-1} + \frac{2}{2-0}+ \frac{4}{2-(-1)}\right) = \frac{5}{6}
&\text{if}\ 2\nmid k .
\end{cases} \]
Next assume that $v\ge1$, and note that the condition $(2^v,k)=1$ means that we need only consider this case when $k$ is odd. By relation \eqref{J_r(v) alt}, we have that \[
|J_r(v)| = 2\cdot \#\{ j\mod{2^{2v+1}}: j^2\equiv k+4^{v-1} r\mod{2^{2v+1}} \}. \]
Now if $v\ge2$, then Lemma~\ref{generic quad lemma - prime 2} implies that $|J_r(v)|=2\cdot 4=8$ or $|J_r(v)|=0$ according to whether $k+4^{v-1}r\equiv 1\mod{8}$ or not. Therefore, when $v\ge2$, \[ \mathcal J(v) = \begin{cases}
\frac{1}{4} \left(\frac{8}{2-0}+\frac{8}{2-0}\right) = 2
&\text{if}\ v=2\ \text{and}\ k\equiv 1\mod 8,\\
\frac{1}{4} \left( \frac{8}{2-1} + \frac{8}{2-(-1)}\right) = \frac{8}{3}
&\text{if}\ v=2\ \text{and}\ k\equiv 5\mod 8,\\
\frac{1}{4} \left(\frac{8}{2-0}+\frac{8}{2-1} + \frac{8}{2-0}+ \frac{8}{2-(-1)}\right) = \frac{14}{3}
&\text{if}\ v\ge3\ \text{and}\ k\equiv 1\mod 8,\\
0 &\text{otherwise} .
\end{cases} \] Finally, we consider the case $v=1$. Using Lemma~\ref{generic quad lemma - prime 2} again, we have \[
|J_r(1)| = 2\cdot \#\{ j\mod{8}: j^2\equiv k+r\mod 8 \}
= \begin{cases}
4 &\text{if}\ k+r\equiv 0,4\mod 8,\\
8 &\text{if}\ k+r\equiv 1\mod 8,\\
0 &\text{otherwise}.
\end{cases} \] Therefore, \[ \mathcal J(1) = \begin{cases}
\frac{1}{4} \cdot \frac{8}{2-0} = 1
&\text{if}\ k\equiv 1,5\mod 8,\\
\frac{1}{4} \left( \frac{4}{2-1} + \frac{4}{2-(-1)} \right) = \frac{4}{3}
&\text{if}\ k\equiv 3,7 \mod 8 ,
\end{cases} \] which completes the proof of the lemma. \end{proof}
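The case analysis of the last two lemmas is easy to confirm by machine: the following illustrative check computes $\mathcal J(v)$ directly from the definitions \eqref{J_r(v) def} and \eqref{J(v)} and compares it with the tabulated values.

```python
from fractions import Fraction as F

def J_size(r, v, m, k):
    # |J_r(v)| by exhaustive search over j mod 2^(2v+3)
    M = 2 ** (2 * v + 3)
    return sum(1 for j in range(1, M + 1)
               if ((j - m * k) ** 2 - 4 * k - 4 ** v * r) % M == 0
               and (j * m) % 2 == 0)

KRON2 = {0: 0, 1: 1, 4: 0, 5: -1}       # Kronecker symbol (r/2)

def cal_J_v(v, m, k):
    v0 = 2 if m % 2 else 3
    return sum(F(J_size(r, v, m, k), 2 - KRON2[r])
               for r in (0, 1, 4, 5)) / 2 ** (v0 - 1)

# m odd (first lemma)
assert cal_J_v(0, 3, 5) == F(2, 3) and cal_J_v(0, 3, 4) == 1
assert cal_J_v(1, 3, 5) == cal_J_v(2, 3, 5) == 0

# m even (second lemma), sample k in each residue class mod 8
assert cal_J_v(0, 2, 2) == F(3, 2)
assert [cal_J_v(v, 2, 9) for v in range(5)] == [F(5, 6), 1, 2, F(14, 3), F(14, 3)]
assert [cal_J_v(v, 2, 3) for v in range(4)] == [F(5, 6), F(4, 3), 0, 0]
assert [cal_J_v(v, 2, 5) for v in range(5)] == [F(5, 6), 1, F(8, 3), 0, 0]
```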
Lemma \ref{formula for J} now follows as a direct consequence of Lemmas \ref{prime 2 lma1} and \ref{prime 2 lma2}.
\appendix
\section{by Chantal David, Greg Martin and Ethan Smith}
The purpose of this appendix is to give a probabilistic interpretation to the Euler factors arising in $K(G)\frac{|G|}{|\Aut(G)|}$ and $K(N)\frac{N}{\phi(N)}$, where $K(G)$ and $K(N)$ are defined by~\eqref{define K(G)} and~\eqref{define K(N)}, respectively. Given a prime $\ell$, we let $\nu_\ell(\cdot)$ denote the usual $\ell$-adic valuation. For each integer $e\ge 1$, we also let $\GL_2(\mathbb Z/\ell^e\mathbb Z)$ denote the usual group of invertible $2\times 2$ matrices with entries from $\mathbb Z/\ell^e\mathbb Z$. We denote the $2\times 2$ identity matrix by $I$. The main results of this appendix are as follows.
\begin{thm}\label{K(N) interpretation} For each positive integer $N$, \begin{equation*} \frac{K(N)\cdot N}{\phi(N)} =\prod_\ell\left(\lim_{e\rightarrow\infty}\frac{\ell^e\cdot\#\{\sigma\in\GL_2(\mathbb Z/\ell^e\mathbb Z) : \det(\sigma)+1-\tr(\sigma)\equiv N\pmod{\ell^e}\}}{\#\GL_2(\mathbb Z/\ell^e\mathbb Z)}\right), \end{equation*} where the product is taken over all primes $\ell$. Furthermore, the sequences defining the Euler factors are constant for $e>\nu_\ell(N)$. \end{thm}
\begin{rmk} If $\mu$ denotes the Haar measure on the space of $2\times 2$ matrices over the $\ell$-adic integers $\mathbb Z_\ell$, normalized so that $\mu\left(\GL_2(\mathbb Z_\ell)\right)=1$, then the Euler factor of $K(N)\frac{N}{\phi(N)}$ for the prime $\ell$ may be viewed as the density function for the probability measure on $\mathbb Z_\ell$ defined by the pushforward of $\mu$ via the map $\det+1-\tr:\GL_2(\mathbb Z_\ell)\rightarrow\mathbb Z_\ell$. \end{rmk}
\begin{thm}\label{K(G) interpretation} For each pair of positive integers $m$ and $k$, put $G=G_{m,k}=\mathbb Z/m\mathbb Z\times\mathbb Z/mk\mathbb Z$. Then \begin{equation*}
\frac{K(G)\cdot |G|}{|\Aut(G)|} =\prod_\ell\left(\lim_{e\rightarrow\infty}\frac{\ell^e\cdot\#\left\{\sigma\in\GL_2(\mathbb Z/\ell^e\mathbb Z) :
\begin{array}{l}\det(\sigma)+1-\tr(\sigma)\equiv |G|\pmod{\ell^e},\\
\sigma\equiv I\pmod{\ell^{\nu_\ell(m)}},\\
\sigma\not\equiv I\pmod{\ell^{\nu_\ell(m)+1}}
\end{array}\right\}}
{\#\GL_2(\mathbb Z/\ell^e\mathbb Z)}\right), \end{equation*}
where the product is taken over all primes $\ell$. Furthermore, the sequences defining the Euler factors are constant for $e>\nu_\ell(|G|)$. \end{thm}
For the remainder of this appendix, we assume that $e, n, N,$ and $\ell$ are positive integers with $\ell$ prime and $n^2\mid N$. Later we will also assume that $N=|G|=m^2k$. For convenience, we let \begin{equation*} C_{N,n}(\ell^e)=\left\{\sigma\in\GL_2(\mathbb Z/\ell^e\mathbb Z) :
\det(\sigma)+1-\tr(\sigma)\equiv N\pmod{\ell^e},\ \sigma\equiv I\pmod{\ell^{\nu_\ell(n)}}\right\}. \end{equation*} In the case that $\ell\nmid n$, we note that the condition $\sigma\equiv I\pmod{\ell^{\nu_\ell(n)}}$ is vacuous. As usual, $\leg{\cdot}{\ell}$ denotes the Kronecker symbol modulo $\ell$.
\begin{lma}\label{matrix count mod ell} If $\ell\nmid n$, then \begin{equation*} \#C_{N,n}(\ell)= \ell\left(\ell^2-\leg{N}{\ell}^2\ell-1-\leg{N-1}{\ell}^2\right). \end{equation*} \end{lma}
\begin{proof} We first observe that $\#C_{N,n}(\ell)$ is equal to the number of quadruples $(a,b,c,d)$ satisfying $0\le a,b,c,d<\ell$ and \begin{align} ad-bc+1-(a+d)&\equiv N\pmod\ell,\label{det-tr cond}\\ ad-bc&\not\equiv 0\pmod\ell\label{det cond}. \end{align} The lemma follows by first counting the number of quadruples satisfying~\eqref{det-tr cond} and then removing the number of quadruples satisfying~\eqref{det-tr cond} that do not satisfy~\eqref{det cond}.
Rearranging, we see that the condition~\eqref{det-tr cond} may be rewritten as \begin{equation*} (a-1)(d-1)-bc\equiv N\pmod\ell. \end{equation*} It is clear that any choice of $a,b,c$ with $a\ne 1$ uniquely determines $d$. On the other hand, if $a=1$, then there are $\ell$ choices for $d$, and the pair $(b,c)$ must satisfy $bc\equiv -N\pmod\ell$. Therefore, there are \begin{equation*} \ell^3+\left(1-\leg{N}{\ell}^2\right)\ell^2-\ell \end{equation*} solutions $(a,b,c,d)$ to~\eqref{det-tr cond} with $0\le a,b,c,d<\ell$.
We now count the number of quadruples $(a,b,c,d)$ with $0\le a,b,c,d<\ell$ for which~\eqref{det-tr cond} holds but~\eqref{det cond} does not. These are the quadruples that satisfy the system \begin{align*} a+d&\equiv 1-N\pmod\ell,\\ ad&\equiv bc\pmod\ell. \end{align*} It is clear that any choice of $a$ uniquely determines $d$. If $a=0$ or $a=1-N$, then there are $2\ell-1$ choices for the pair $(b,c)$. On the other hand, if $a\ne 0,1-N$, there are only $\ell-1$ choices for $(b,c)$. Therefore, there are \begin{equation*} \ell^2+\leg{N-1}{\ell}^2\ell \end{equation*} solutions $(a,b,c,d)$ to~\eqref{det-tr cond} with $0\le a,b,c,d<\ell$ for which~\eqref{det cond} does not hold. \end{proof}
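The closed form in Lemma~\ref{matrix count mod ell} is easy to confirm by exhaustive search over matrices modulo $\ell$ for small primes. The following Python sketch is an independent sanity check, not part of the argument; the helper names are ours, and \texttt{chi} stands in for the squared Kronecker symbol, which is $0$ if $\ell\mid x$ and $1$ otherwise.

```python
from itertools import product

def count_C(N, ell):
    # Brute force over all quadruples (a, b, c, d) mod ell:
    # require det + 1 - tr ≡ N (mod ell) and det a unit mod ell.
    total = 0
    for a, b, c, d in product(range(ell), repeat=4):
        det = (a * d - b * c) % ell
        if det != 0 and (det + 1 - a - d) % ell == N % ell:
            total += 1
    return total

def lemma_formula(N, ell):
    # chi(x) plays the role of the squared Kronecker symbol (x|ell)^2.
    chi = lambda x: 0 if x % ell == 0 else 1
    return ell * (ell**2 - chi(N) * ell - 1 - chi(N - 1))

for ell in (2, 3, 5, 7):
    for N in range(1, 15):
        assert count_C(N, ell) == lemma_formula(N, ell)
print("Lemma verified for ell in {2, 3, 5, 7}")
```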
\begin{prop}\label{matrix proportions for ell not dividing N} If $\ell\nmid N$, then \begin{equation*} \#C_{N,n}(\ell^e)=\ell^{3(e-1)+1}\left(\ell^2-\ell-1-\leg{N-1}{\ell}^2\right) \end{equation*} for every $e\ge 1$. \end{prop}
\begin{proof} The case $e=1$ is treated in Lemma~\ref{matrix count mod ell}, and so we assume that $e\ge 2$. Since any $\sigma\in C_{N,n}(\ell^e)$ must reduce modulo $\ell$ to a matrix in $C_{N,n}(\ell)$, it suffices to count the number of matrices in $C_{N,n}(\ell^e)$ that reduce to a given matrix in $C_{N,n}(\ell)$. To this end, we assume that $\sigma_0\in C_{N,n}(\ell)$ and $\sigma\in C_{N,n}(\ell^e)$ is such that $\sigma\equiv\sigma_0\pmod\ell$. Thus, we may write \begin{equation*} \sigma_0=\begin{pmatrix}a_0&b_0\\ c_0&d_0\end{pmatrix}\quad\text{and}\quad \sigma=\begin{pmatrix}a_0+a\ell&b_0+b\ell\\ c_0+c\ell&d_0+d\ell\end{pmatrix} \end{equation*} with $0\le a_0,b_0,c_0,d_0<\ell$ and $0\le a,b,c,d<\ell^{e-1}$. Note that the condition $\det\sigma\not\equiv 0\pmod\ell$ is necessarily satisfied since $\det\sigma\equiv\det\sigma_0\pmod\ell$ and $\sigma_0\in C_{N,n}(\ell)$. Therefore, $\sigma\in C_{N,n}(\ell^e)$ if and only if \begin{equation}\label{lift det-tr cond} a_0d_0-b_0c_0+1-a_0-d_0 +(a(d_0-1)+d(a_0-1)-b_0c-bc_0)\ell +(ad-bc)\ell^2 \equiv N\pmod{\ell^e}. \end{equation} Since $\sigma_0\in C_{N,n}(\ell)$, it follows that $a_0d_0-b_0c_0+1-a_0-d_0=N+k_0\ell$ for some $k_0$, and hence condition~\eqref{lift det-tr cond} reduces to \begin{equation*} k_0+ ((d_0-1)a-c_0b-b_0c+(a_0-1)d) +(ad-bc)\ell \equiv 0\pmod{\ell^{e-1}}. \end{equation*} Since $\ell\nmid N$, $\sigma_0$ cannot be the identity matrix modulo $\ell$, and so the linear form $(d_0-1)a - c_0 b - b_0 c + (a_0-1) d$ in the variables $a,b,c,d$ has at least one coefficient that is nonzero modulo $\ell$. Suppose, for example, that $d_0-1\not\equiv 0\pmod\ell$. Then for each triple $(b,c,d)$, there is a unique choice of $a$ satisfying the above congruence. Therefore, there are exactly $\ell^{3(e-1)}$ solutions $(a,b,c,d)$ with $0\le a,b,c,d<\ell^{e-1}$. \end{proof}
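As with the lemma, the proposition can be spot-checked by brute force for small moduli. The sketch below (our own naming, purely illustrative) enumerates all matrices modulo $\ell^e$ for a few pairs $(\ell,e)$ with $\ell\nmid N$ and compares against the closed form.

```python
from itertools import product

def count_C(N, ell, e):
    # Brute-force #C_{N,n}(ell^e) when ell does not divide n, so the
    # congruence sigma ≡ I (mod ell^{nu_ell(n)}) is vacuous.
    q = ell**e
    total = 0
    for a, b, c, d in product(range(q), repeat=4):
        det = (a * d - b * c) % q
        if det % ell != 0 and (det + 1 - a - d) % q == N % q:
            total += 1
    return total

def prop_formula(N, ell, e):
    # Valid only when ell does not divide N.
    chi = 0 if (N - 1) % ell == 0 else 1
    return ell**(3 * (e - 1) + 1) * (ell**2 - ell - 1 - chi)

assert count_C(1, 2, 2) == prop_formula(1, 2, 2)   # == 16
assert count_C(5, 2, 3) == prop_formula(5, 2, 3)   # == 128
assert count_C(1, 3, 2) == prop_formula(1, 3, 2)   # == 405
assert count_C(2, 3, 2) == prop_formula(2, 3, 2)   # == 324
print("Proposition verified in small cases")
```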
Let $\Mat_2(\mathbb Z/\ell^k\mathbb Z)$ denote the ring of $2\times 2$ matrices with entries from $\mathbb Z/\ell^k\mathbb Z$. In order to compute $\#C_{N,n}(\ell^e)$ when $\ell\mid N$, we need to know the number of matrices in $\Mat_2(\mathbb Z/\ell^k\mathbb Z)$ with each given determinant.
\begin{prop}\label{det prop} Let $M$ be a positive integer, and let $r=\nu_\ell(M)$. Then for every $s\ge 0$, we have \begin{equation*} \#\left\{\sigma\in\Mat_2(\mathbb Z/\ell^{r+s}\mathbb Z) : \det(\sigma)\equiv M\pmod{\ell^{r+s}}\right\} =\ell^{2(r-1)}\left(\ell^{3s}(\ell+1)(\ell^{r+1}-1)+\delta(s)\right), \end{equation*} where $\delta(s)$ is defined by \begin{equation*} \delta(s):=\begin{cases} 1&\text{if }s=0,\\ 0&\text{otherwise}. \end{cases} \end{equation*} \end{prop}
For the proof of Proposition~\ref{det prop}, we first make a simple reduction and fix some notation. Given any positive integer $M$, we write $M=\ell^rM'$ with $r=\nu_\ell(M)$ and $(M',\ell)=1$. Since the determinant maps $\GL_2(\mathbb Z/\ell^{r+s}\mathbb Z)$ onto $(\mathbb Z/\ell^{r+s}\mathbb Z)^*$, there is an $\alpha\in\GL_2(\mathbb Z/\ell^{r+s}\mathbb Z)$ such that $\det(\alpha)\equiv M'\pmod{\ell^{r+s}}$. Since the map $\sigma\mapsto\alpha\sigma$ is a bijection of $\Mat_2(\mathbb Z/\ell^{r+s}\mathbb Z)$ and since $\det(\sigma)\equiv\ell^rM'\pmod{\ell^{r+s}}$ if and only if $\det(\alpha^{-1}\sigma)\equiv\ell^r\pmod{\ell^{r+s}}$, it follows that \begin{equation*} \#\left\{\sigma\in\Mat_2(\mathbb Z/\ell^{r+s}\mathbb Z) : \det(\sigma)\equiv M\pmod{\ell^{r+s}}\right\} =\#F(r,s), \end{equation*} where \begin{equation*} F(r,s):=\left\{\sigma\in\Mat_2(\mathbb Z/\ell^{r+s}\mathbb Z) : \det(\sigma)\equiv \ell^r\pmod{\ell^{r+s}}\right\}. \end{equation*} Thus, $\#\left\{\sigma\in\Mat_2(\mathbb Z/\ell^{r+s}\mathbb Z) : \det(\sigma)\equiv M\pmod{\ell^{r+s}}\right\}$ depends only on the power of $\ell$ dividing $M$ and not on the $\ell$-free part of $M$. With this in mind, we define \begin{equation*} f(r,s):=\#F(r,s), \end{equation*} where we adopt the natural convention that $f(0,0)=1$. Proposition~\ref{det prop} then follows easily by induction on $r$ using the following lemma.
\begin{lma} For every $s\ge 0$, we have \begin{align*} f(0,s)&=\ell^{3s-2}(\ell^2-1)+\ell^{-2}\delta(s),\\ f(1,s)&=\ell^{3s}(\ell+1)(\ell^2-1)+\delta(s),\\ f(r,s)&=\ell^{3(r+s-1)}(\ell+1)(\ell^2-1)+\ell^4f(r-2,s),\quad r\ge 2. \end{align*} \end{lma}
\begin{proof} By convention we have $f(0,0)=1$. For $s\ge 1$, we have the well-known formula \begin{equation*} f(0,s)=\#\SL_2(\mathbb Z/\ell^s\mathbb Z)=\ell^{3s-2}(\ell^2-1). \end{equation*} This proves the first formula given in the statement of the lemma.
Now assume that $r\ge 1$. If $r=1$ and $s=0$, then we have \begin{equation*} f(1,0)=\#\Mat_2(\mathbb Z/\ell\mathbb Z)-\#\GL_2(\mathbb Z/\ell\mathbb Z)=\ell^3+\ell^2-\ell. \end{equation*} We observe that any $\sigma\in F(r,s)$ must reduce modulo $\ell$ to some $\sigma_0\in F(1,0)$. Thus, we assume that $\sigma_0\in F(1,0)$, and we write \begin{equation*} \sigma_0=\begin{pmatrix}a_0&b_0\\ c_0&d_0\end{pmatrix}\quad\text{and}\quad \sigma=\begin{pmatrix}a_0+a\ell&b_0+b\ell\\ c_0+c\ell&d_0+d\ell\end{pmatrix}, \end{equation*} with $0\le a_0, b_0, c_0, d_0<\ell$ and $0\le a,b,c,d<\ell^{r+s-1}$. By definition, we see that $\sigma\in F(r,s)$ if and only if \begin{equation*} a_0d_0-b_0c_0+(d_0a-c_0b-b_0c+a_0d)\ell+(ad-bc)\ell^2\equiv\ell^r\pmod{\ell^{r+s}}. \end{equation*} If $\sigma_0$ is not the zero matrix modulo $\ell$, then there are exactly $\ell^{3(r+s-1)}$ choices of $(a,b,c,d)$ satisfying the above congruence. On the other hand, if $\sigma_0$ is the zero matrix (which is always an element of $F(1,0)$), the above congruence condition reduces to \begin{equation}\label{zero matrix cond} (ad-bc)\ell^2\equiv \ell^r\pmod{\ell^{r+s}}. \end{equation} If $r=1$, then there can be no solutions to~\eqref{zero matrix cond} with $s\ge 1$. Therefore, \begin{equation*} f(1,s)=\ell^{3s}(f(1,0)-1)=\ell^{3s}(\ell^3+\ell^2-\ell-1) =\ell^{3s}(\ell+1)(\ell^2-1) \end{equation*} when $s\ge 1$, and this completes the proof of the second formula stated in the lemma. On the other hand, if $r\ge 2$, then condition~\eqref{zero matrix cond} reduces to \begin{equation*} (ad-bc)\equiv\ell^{r-2}\pmod{\ell^{r-2+s}}. \end{equation*} There are $\ell^4f(r-2,s)$ solutions to this congruence with $0 \leq a,b,c,d < \ell^{r+s-1}$. Whence \begin{equation*} \begin{split} f(r,s)&=\ell^{3(r+s-1)}(f(1,0)-1)+\ell^4f(r-2,s)\\ &=\ell^{3(r+s-1)}(\ell+1)(\ell^2-1)+\ell^4f(r-2,s) \end{split} \end{equation*} for $r\ge 2$, and this completes the proof of the lemma. \end{proof}
\begin{prop}\label{ell divides N but not n} If $v=\nu_\ell(N)\ge 1$ and $\ell\nmid n$, then \begin{equation*} \#C_{N,n}(\ell^e) =\ell^{3e-v-2}(\ell+1)\left(\ell^{v+1}-\ell^v-1\right) \end{equation*} for every $e>v$. \end{prop}
\begin{proof} By Lemma~\ref{matrix count mod ell}, we have \begin{equation}\label{base case} \#C_{N,n}(\ell)=\ell(\ell^2-2)=\ell^3-2\ell, \end{equation} and so we may assume that $e\ge 2$. We proceed in a manner similar to the proof of Proposition~\ref{matrix proportions for ell not dividing N}. In particular, we assume that $\sigma_0\in C_{N,n}(\ell)$ and count the number of $\sigma\in C_{N,n}(\ell^e)$ that reduce to $\sigma_0$ modulo $\ell$. Writing \begin{equation*} \sigma_0=\begin{pmatrix}a_0&b_0\\ c_0&d_0\end{pmatrix}\quad\text{and}\quad \sigma=\begin{pmatrix}a_0+a\ell&b_0+b\ell\\ c_0+c\ell&d_0+d\ell\end{pmatrix} \end{equation*} with $0\le a_0,b_0,c_0,d_0<\ell$ and $0\le a,b,c,d<\ell^{e-1}$, we deduce that the quadruple $(a,b,c,d)$ must satisfy~\eqref{lift det-tr cond}. As in the proof of Proposition~\ref{matrix proportions for ell not dividing N}, if $\sigma_0$ is not the identity matrix, there are exactly $\ell^{3(e-1)}$ choices for $(a,b,c,d)$.
Now suppose that $\sigma_0$ is the identity matrix. (Note that the identity matrix is always an element of $C_{N,n}(\ell)$ when $\ell\mid N$.) Then writing $N=\ell^vN'$ with $v=\nu_\ell(N)\ge 1$ and $(N',\ell)=1$, we see that condition~\eqref{lift det-tr cond} reduces to \begin{equation}\label{id case} (ad-bc)\ell^2\equiv N'\ell^{v}\pmod{\ell^{e}}. \end{equation} Clearly, there are no solutions to this congruence unless $v\ge 2$. Therefore, if $v=1$ and $e\ge 2$, we have that \begin{equation*} \#C_{N,n}(\ell^e) =\ell^{3(e-1)}(\ell^3-2\ell-1) =\ell^{3e-3}(\ell+1)(\ell^2-\ell-1). \end{equation*} Now, suppose that $v\ge 2$ and $e\ge 3$. Then~\eqref{id case} reduces to \begin{equation} (ad-bc)\equiv N'\ell^{v-2}\pmod{\ell^{e-2}}. \end{equation} The number of solutions to this congruence with $0\le a,b,c,d<\ell^{e-1}$ is equal to \begin{equation*} \ell^4\#\{\alpha\in\Mat_2(\mathbb Z/\ell^{e-2}\mathbb Z): \det(\alpha)\equiv N'\ell^{v-2}\pmod{\ell^{e-2}}\}. \end{equation*} Since we are assuming that $v<e$, Proposition~\ref{det prop} implies that the above count is equal to \begin{equation*} \ell^4\ell^{2(v-3)}\ell^{3(e-v)}(\ell+1)(\ell^{v-1}-1) =\ell^{3e-v-2}(\ell+1)(\ell^{v-1}-1). \end{equation*} Putting everything together, we find that \begin{equation*} \begin{split} \#C_{N,n}(\ell^e) &=\ell^{3(e-1)}(\ell^3-2\ell-1)+\ell^{3e-v-2}(\ell+1)(\ell^{v-1}-1)\\ &=\ell^{3e-v-2}(\ell+1)\left(\ell^{v+1}-\ell^v-1\right) \end{split} \end{equation*} for $v\ge 2$. \end{proof}
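A brute-force check of the proposition for $v=1$ and $v=2$ (again illustrative only, with our own helper names):

```python
from itertools import product

def count_C(N, ell, e):
    # Same brute force as before: det + 1 - tr ≡ N (mod ell^e), det a unit.
    q = ell**e
    total = 0
    for a, b, c, d in product(range(q), repeat=4):
        det = (a * d - b * c) % q
        if det % ell != 0 and (det + 1 - a - d) % q == N % q:
            total += 1
    return total

def nu(ell, x):
    # the ell-adic valuation of x
    v = 0
    while x % ell == 0:
        x //= ell
        v += 1
    return v

def prop_formula(N, ell, e):
    v = nu(ell, N)   # the proposition requires v >= 1 and e > v
    return ell**(3 * e - v - 2) * (ell + 1) * (ell**(v + 1) - ell**v - 1)

assert count_C(2, 2, 2) == prop_formula(2, 2, 2)   # v = 1
assert count_C(4, 2, 3) == prop_formula(4, 2, 3)   # v = 2
assert count_C(3, 3, 2) == prop_formula(3, 3, 2)   # v = 1
print("Proposition verified for v = 1, 2")
```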
Recall our standing assumption that $n^2\mid N$. \begin{thm}\label{complete matrix count thm} Let $u=\nu_\ell(n)$ and $v=\nu_\ell(N)$. Then for every $e>v$, we have \begin{equation*} \#C_{N,n}(\ell^e) =\begin{cases} \ell^{3(e-1)+1}\left(\ell^2-\ell-1-\leg{N-1}{\ell}^2\right)&\text{if }u=0\text{ and }v= 0,\\ \ell^{3e-v-2}(\ell+1)\left(\ell^{v+1}-\ell^v-1\right)&\text{if }u=0\text{ and }v\ge 1,\\ \ell^{3e-v-2}(\ell+1)(\ell^{v-2u+1}-1)&\text{if }1\le u\le v/2,\\ 0&\text{if } 0\le v/2<u. \end{cases} \end{equation*} Therefore, for every $e>v$, we have \begin{equation*} \begin{split} \frac{\ell^e\#C_{N,n}(\ell^e)}{\#\GL_2(\mathbb Z/\ell^e\mathbb Z)} &=\begin{cases} \displaystyle \left(1-\frac{\leg{N-1}{\ell}^2\ell+1}{(\ell-1)^2(\ell+1)}\right)&\text{if }u=0\text{ and }v= 0,\\ \displaystyle \frac{\ell}{\ell-1}\left(1-\frac{1}{\ell^v(\ell-1)}\right)&\text{if }u=0\text{ and }v\ge 1,\\ \displaystyle \frac{\ell}{\ell^{2u}(\ell-1)}\left(\frac{\ell^{v+1}-\ell^{2u}}{\ell^{v+1}-\ell^v-1}\right)\left(1-\frac{1}{\ell^v(\ell-1)}\right)&\text{if }1\le u\le v/2,\\ 0&\text{if }0\le v/2<u. \end{cases} \end{split} \end{equation*} \end{thm}
\begin{proof} Note that the second assertion of the theorem follows from the first together with the well-known formula \begin{equation*} \#\GL_2(\mathbb Z/\ell^e\mathbb Z)=\ell^{4(e-1)+1}(\ell+1)(\ell-1)^2, \end{equation*} and so it suffices to prove the first assertion of the theorem.
The first two cases have already been addressed by Propositions~\ref{matrix proportions for ell not dividing N} and~\ref{ell divides N but not n}. Therefore, we may assume that $u\ge 1$. Supposing that $\sigma\in C_{N,n}(\ell^e)$, we may write \begin{equation*} \sigma=\begin{pmatrix}1+a\ell^u&b\ell^u\\ c\ell^u&1+d\ell^u\end{pmatrix} \end{equation*} with $0\le a,b,c,d<\ell^{e-u}$ chosen such that \begin{equation*} (ad-bc)\ell^{2u}\equiv N'\ell^v\pmod{\ell^e}. \end{equation*} This congruence clearly has no solutions if $e>v$ and $2u>v$. Therefore, we may assume that $2\le 2u\le v<e$. In this case the above congruence is equivalent to the condition \begin{equation*} (ad-bc)\equiv N'\ell^{v-2u}\pmod{\ell^{e-2u}} \end{equation*} for $0 \leq a,b,c,d < \ell^{e-u}$. Applying Proposition~\ref{det prop} with $r=v-2u$ and $s=e-v>0$, we find that \begin{equation*} \begin{split} \#C_{N,n}(\ell^e) &=\ell^{4u}\ell^{2(v-2u-1)}\ell^{3(e-v)}(\ell+1)(\ell^{v-2u+1}-1)\\ &=\ell^{3e-v-2}(\ell+1)(\ell^{v-2u+1}-1). \end{split} \end{equation*} \end{proof}
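Finally, the cases $u\ge 1$ of Theorem~\ref{complete matrix count thm}, including the vanishing case $u>v/2$, can be confirmed by enumerating only the matrices $\sigma\equiv I\pmod{\ell^u}$. A Python sketch with hypothetical helper names:

```python
def count_C(N, n_val, ell, e):
    # Brute-force #C_{N,n}(ell^e): sigma ≡ I (mod ell^u) with u = nu_ell(n),
    # det(sigma) a unit, and det + 1 - tr ≡ N (mod ell^e).
    def nu(x):
        v = 0
        while x % ell == 0:
            x //= ell
            v += 1
        return v
    q, step = ell**e, ell**nu(n_val)
    diag = range(1, q + 1, step)      # diagonal entries ≡ 1 (mod ell^u)
    off = range(0, q, step)           # off-diagonal entries ≡ 0 (mod ell^u)
    total = 0
    for a in diag:
        for d in diag:
            for b in off:
                for c in off:
                    det = (a * d - b * c) % q
                    if det % ell != 0 and (det + 1 - a - d) % q == N % q:
                        total += 1
    return total

# third case of the theorem: 1 <= u <= v/2
assert count_C(4, 2, 2, 3) == 2**(9 - 2 - 2) * 3 * (2**1 - 1)   # 96
assert count_C(9, 3, 3, 3) == 3**(9 - 2 - 2) * 4 * (3**1 - 1)   # 1944
# fourth case: u > v/2 forces the count to vanish
assert count_C(4, 4, 2, 3) == 0
print("theorem cases with u >= 1 verified")
```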
We are now ready to give the proofs of Theorems~\ref{K(N) interpretation} and~\ref{K(G) interpretation}.
\begin{proof}[Proof of Theorems~\ref{K(N) interpretation} and~\ref{K(G) interpretation}]
Theorem~\ref{K(N) interpretation} follows easily from~\eqref{define K(N)} and the cases of Theorem~\ref{complete matrix count thm} with $\nu_\ell(n)=u=0$. For the proof of Theorem~\ref{K(G) interpretation}, we let $N=m^2k=|G|$, and for each prime $\ell$, we put \begin{equation*} v_\ell(N,n):=\frac{\ell^e\#C_{N,n}(\ell^e)}{\#\GL_2(\mathbb Z/\ell^e\mathbb Z)} \end{equation*} with $e=e_\ell>\nu_\ell(N)$. We then compute the absolutely convergent infinite product \begin{equation*} \prod_\ell\left(v_\ell(N,m)-v_\ell(N,\ell m)\right) \end{equation*} in two different ways. On the one hand, by definition of the $v_\ell(N,n)$, the above expression is equal to \begin{equation*} \prod_\ell\left(\frac{\ell^e\cdot\#\left\{\sigma\in\GL_2(\mathbb Z/\ell^e\mathbb Z) :
\begin{array}{l}\det(\sigma)+1-\tr(\sigma)\equiv N\pmod{\ell^e},\\
\sigma\equiv I\pmod{\ell^{\nu_\ell(m)}},\\
\sigma\not\equiv I\pmod{\ell^{\nu_\ell(m)+1}}
\end{array}\right\}}
{\#\GL_2(\mathbb Z/\ell^e\mathbb Z)}\right). \end{equation*}
On the other hand, by comparing~\eqref{define K(G)} and Lemma~\ref{Aut(G)} with Theorem~\ref{complete matrix count thm}, we see that it is equal to $K(G)\frac{|G|}{|\Aut(G)|}$. \end{proof}
\end{document}
\begin{document}
\title[Congruence Topology]{The Congruence Topology, \\ Grothendieck Duality and Thin groups} \author[A. Lubotzky]{Alexander Lubotzky} \address{Institute of Mathematics\\ Hebrew University\\ Jerusalem 9190401, Israel\\ alex.lubotzky@mail.huji.ac.il} \author[T.N. Venkataramana]{T.N. Venkataramana} \address{Tata Institute of Fundamental Research\\ Homi Bhabha Road\\ Colaba, Mumbai 400005, India\\ venky@math.tifr.res.in} \maketitle
\baselineskip 16pt
\begin{abstract}
This paper answers a question raised by Grothendieck in 1970 on the ``Grothendieck closure'' of an integral linear group and proves a conjecture of the first author made in 1980. This is done by a detailed study of the congruence topology of arithmetic groups, obtaining, along the way, an arithmetic analogue of a classical result of Chevalley for complex algebraic groups. As an application we also deduce a group theoretic characterization of thin subgroups of arithmetic groups. \end{abstract}
\section*{0. Introduction}
If $\varphi: G_1 \to G_2$ is a polynomial map between two complex varieties, then the image of a Zariski closed subset of $G_1$ is not, in general, closed in $G_2$. But here is a classical result:
\begin{thm*}[Chevalley] \label{Chevalleythm} If $\varphi$ is a polynomial homomorphism between two complex algebraic groups, then $\varphi(H)$ is closed in $G_2$ for every closed subgroup $H$ of $G_1$. \end{thm*}
There is an arithmetic analogue of this issue: Let $G$ be a $\mathbb{Q}$-algebraic group, and let $\mathbb{A}_f = \prod^*_{p\ \mathrm{prime}} \mathbb{Q}_p$ be the ring of finite ad\`eles over $\mathbb{Q}$ (a restricted product). The topology of $G(\mathbb{A}_f)$ induces the congruence topology on $G(\mathbb{Q})$. If $K$ is a compact open subgroup of $G(\mathbb{A}_f)$, then $\Gamma = K \cap G(\mathbb{Q})$ is called a congruence subgroup of $G(\mathbb{Q})$. This defines the congruence topology on $G(\mathbb{Q})$ and on all its subgroups. A subgroup of $G(\mathbb{Q})$ which is closed in this topology is called congruence closed. A subgroup $\Delta$ of $G(\mathbb{Q})$ commensurable with $\Gamma$ is called an arithmetic group.
Now, if $\varphi:G_1 \to G_2$ is a $\mathbb{Q}$-morphism between two $\mathbb{Q}$-groups which is a surjective homomorphism (as $\mathbb{C}$-algebraic groups), then the image of an arithmetic subgroup $\Delta$ of $G_1$ is an arithmetic subgroup of $G_2$ (\cite[Theorem 4.1 p.~174]{Pl-Ra}), but the image of a congruence subgroup is not necessarily a congruence subgroup. It is well known that $\text{\rm SL}_n(\mathbb{Z})$ has congruence subgroups whose images under the adjoint map $\text{\rm SL}_n(\mathbb{Z}) \to \text{\rm PSL}_n(\mathbb{Z}) \hookrightarrow \text{Aut} (M_n(\mathbb{Z}))$ are not congruence subgroups (see \cite{Ser} and Proposition \ref{congruenceimage} below for an exposition and explanation). So, the direct analogue of Chevalley's theorem does not hold. Still, in this case, if $\Gamma$ is a congruence subgroup of $\text{\rm SL}_n(\mathbb{Z})$, then $\varphi(\Gamma)$ is a normal subgroup of $\overline{\varphi(\Gamma)}$, the (congruence) closure of $\varphi(\Gamma)$ in $\text{\rm PSL}_n(\mathbb{Z})$, and the quotient is a finite abelian group. Our first technical result says that the general case is similar. It is especially important for us that when $G_2$ is simply connected, the image of a congruence subgroup of $G_1$ is a congruence subgroup in $G_2$ (see Proposition \ref{arithmeticchevalley} (ii) below).
Before stating the result, we give the following definition and fix some notation for the rest of the paper:
Let $G$ be a linear algebraic group over $\mathbb{C}$, let $G^0$ denote its connected component, and let $R = R(G)$ denote its solvable radical, i.e.\ the largest connected normal solvable subgroup of $G$. We say that $G$ is \emph{essentially simply connected} if $G_{ss}:= G^0/R$ is simply connected.
Given a subgroup $\Gamma$ of $GL_n$, we will throughout the paper denote by $\Gamma^0$ the intersection of $\Gamma$ with $G^0$, where $G^0$ is the connected component of the Zariski closure $G$ of $\Gamma$. Therefore, $\Gamma^0$ is always a finite index normal subgroup of $\Gamma$.
The notion ``essentially simply connected'' will play an important role in this paper due to the following proposition, which can be considered as the arithmetic analogue of Chevalley's result above:
\begin{prop}\label{arithmeticchevalley}
\begin{enumerate}[(i)] \item If $\varphi: G_1 \to G_2$ is a surjective (over $\mathbb{C}$) algebraic homomorphism between two $\mathbb{Q}$-defined algebraic groups, then for every congruence closed subgroup $\Gamma$ of $G_1 (\mathbb{Q})$, the image $\varphi(\Gamma^0)$ is normal in its congruence closure $\overline{\varphi(\Gamma^0)}$ and $\overline{\varphi(\Gamma^0)}/\varphi (\Gamma^0)$ is a finite abelian group. \item If $G_2$ is essentially simply connected, and $\Gamma$ a congruence subgroup of $G_1$ then $\overline{\varphi(\Gamma)} = \varphi (\Gamma)$, i.e., the image of a congruence subgroup is congruence closed. \end{enumerate} \end{prop}
This analogue of Chevalley's theorem, and a result of \cite{Nori}, \cite{Weis} enable us to prove:
\begin{prop}\label{congruenceimage} If $\Gamma_1 \le \text{\rm GL}_n(\mathbb{Z})$ is a congruence closed subgroup (i.e. closed in the congruence topology) with Zariski closure $G$, then there exists a congruence subgroup $\Gamma$ of $G$, such that $[\Gamma, \Gamma] \le \Gamma_1^0 \le \Gamma$. If $G$ is essentially simply connected then the image of $\Gamma _1$ in $G/R(G)$ is actually a congruence subgroup. \end{prop}
We apply Proposition \ref{arithmeticchevalley} (ii) in two directions:
\begin{enumerate}[(A)]
\item Grothendieck-Tannaka duality for discrete groups, and
\item A group theoretic characterization of thin subgroups of arithmetic groups. \end{enumerate}
\subsection*{Grothendieck closure} In \cite{Gro}, Grothendieck was interested in the following question:
\begin{question} \label{isocompletion} Assume $\varphi: \Gamma_1 \to \Gamma_2$ is a homomorphism between two finitely generated residually finite groups inducing an isomorphism $\hat\varphi:\hat\Gamma_1 \to \hat\Gamma_2$ between their profinite completions. Is $\varphi$ already an isomorphism?
\end{question}
To tackle Question \ref{isocompletion}, he introduced the following notion. Given a finitely generated group $\Gamma$ and a commutative ring $A$ with identity, let $Cl_A(\Gamma)$ be the group of all automorphisms of the forgetful functor from the category $\text {Mod}_A(\Gamma)$ of all finitely generated $A$-modules with $\Gamma$ action to $\text {Mod}_A(\{ 1 \})$, preserving tensor product. Grothendieck's strategy was the following: he showed that, under the conditions of Question \ref{isocompletion}, $\varphi$ induces an isomorphism from $\text {Mod}_A(\Gamma_2)$ to $\text {Mod}_A(\Gamma_1)$, and hence also between $Cl_A (\Gamma_1)$ and $Cl_A(\Gamma_2)$. He then asked: \begin{question} \label{closureisomorphism} Is the natural map $\Gamma \hookrightarrow Cl_{\mathbb{Z}} (\Gamma)$ an isomorphism for a finitely generated residually finite group? \end{question}
An affirmative answer to Question \ref{closureisomorphism} would imply an affirmative answer to Question \ref{isocompletion}. Grothendieck then showed that arithmetic groups with the (strict) congruence subgroup property do indeed satisfy $Cl_{\mathbb{Z}}(\Gamma)\simeq \Gamma$.
Question \ref{closureisomorphism} basically asks whether $\Gamma$ can be recovered from its category of representations. In \cite{Lub}, the first author phrased this question in the framework of Tannaka duality, which asks a similar question for compact Lie groups. He also gave a more concrete description of $Cl_\mathbb{Z}(\Gamma)$:
\begin{equation}\label{profinitegrothendieck} Cl_\mathbb{Z} (\Gamma) = \{ g \in \hat \Gamma | \hat\rho (g) (V) = V,\quad \forall \quad (\rho, V) \in \text {Mod}_\mathbb{Z} (\Gamma)\}.\end{equation}
Here $\hat\rho$ is the continuous extension $\hat\rho: \hat \Gamma \to \text{Aut} (\hat V)$ of the original representation $\rho: \Gamma \to \text{Aut} (V)$.
However, it is also shown in \cite{Lub} that the answer to Question \ref{closureisomorphism} is negative. The counterexamples provided there are the arithmetic groups for which the weak congruence subgroup property holds but not the strict one, i.e.\ the congruence kernel is finite but non-trivial. It was conjectured in \cite[Conj A, p. 184]{Lub} that for an arithmetic group $\Gamma$, $Cl_\mathbb{Z} (\Gamma) = \Gamma$ if and only if $\Gamma$ has the (strict) congruence subgroup property. The conjecture was left open even for $\Gamma = \text{\rm SL}_2 (\mathbb{Z})$.
In the almost 40 years since \cite{Lub} was written, various counterexamples were given to Question \ref{isocompletion} (\cite{Pl-Ta1}, \cite{Ba-Lu}, \cite{Br-Gr}, \cite{Py}), which also yield counterexamples to Question \ref{closureisomorphism}; but it was not even settled whether $Cl_\mathbb{Z}(F) = F$ for finitely generated non-abelian free groups $F$.
We can now answer this and, in fact, prove the following surprising result, which gives an essentially complete answer to Question \ref{closureisomorphism}.
\begin{thm}\label{maintheorem} Let $\Gamma$ be a finitely generated subgroup of $\text{\rm GL}_n(\mathbb{Z})$. Then $\Gamma$ satisfies Grothendieck-Tannaka duality, i.e.\ $Cl_\mathbb{Z}(\Gamma) = \Gamma$, if and only if $\Gamma$ has the congruence subgroup property, i.e., for some (and consequently for every) faithful representation $\Gamma \rightarrow \text{\rm GL}_m(\mathbb Z)$ such that the Zariski closure $G$ of $\Gamma$ is essentially simply connected, every finite index subgroup of $\Gamma$ is closed in the congruence topology of $\text{\rm GL}_m(\mathbb{Z})$. In such a case, the image of the group $\Gamma$ in the semi-simple (simply connected) quotient $G/R$ is a congruence arithmetic group. \end{thm}
The theorem is surprising, as it shows that the cases proved by Grothendieck himself (which motivated him to suggest that the duality holds in general) are essentially the only cases where this duality holds.
Let us note that the assumption on $G$ is not really restrictive. In Lemma \ref{simplyconnectedsaturate}, we show that for every $\Gamma \le \text{\rm GL}_n(\mathbb{Z})$ we can find an ``over'' representation of $\Gamma$ into $\text{\rm GL}_m (\mathbb{Z})$ (for some $m$) whose Zariski closure is essentially simply connected.
Theorem \ref{maintheorem} implies Conjecture A of \cite{Lub}.
\begin{corr}\label{lubconjecture} If $G$ is a simply connected semisimple $\mathbb{Q}$-algebraic group, and $\Gamma$ a congruence subgroup of $G(\mathbb{Q})$, then $Cl_\mathbb{Z} (\Gamma) = \Gamma$ if and only if $\Gamma$ satisfies the (strict) congruence subgroup property. \end{corr}
In particular: \begin{corr}\label{0.10} $Cl_\mathbb{Z}(F) \neq F$ for every finitely generated free group on at least two generators; furthermore, $Cl_\mathbb{Z} (\text{\rm SL}_2(\mathbb{Z})) \neq \text{\rm SL}_2 (\mathbb{Z})$. \end{corr} In fact, it will follow from our results that $Cl_\mathbb{Z}(F)$ is uncountable. \\
Before moving on to the last application, let us say a few words about how Proposition \ref{arithmeticchevalley} helps to prove a result like Theorem \ref{maintheorem}. The description of $Cl_\mathbb{Z} (\Gamma)$ as in Equation \ref{profinitegrothendieck} implies that \begin{equation}\label{eq0.2} Cl_\mathbb{Z} (\Gamma) = \underset{\rho}{\lim\limits_{\leftarrow}} \quad \overline{\rho(\Gamma)} \end{equation} where the limit is over all $(\rho, V)$ with $V$ a finitely generated abelian group, $\rho: \Gamma \to \text{Aut} (V)$ a representation, and $\overline{\rho (\Gamma)} = \hat\rho (\hat\Gamma)\cap \text{Aut} (V) \subseteq \text{Aut} (\hat V)$. This is an inverse limit of countable discrete groups, so one cannot say much about it unless the connecting homomorphisms are surjective, which is, in general, not the case. Now, $\overline{\rho(\Gamma)}$ is the congruence closure of $\rho(\Gamma)$ in $\text{Aut} (V)$, and Proposition \ref{arithmeticchevalley} shows that the corresponding maps are ``almost'' onto, and are even surjective if the modules $V$ are what we call here ``simply connected representations'', namely those for which $V$ is torsion free (and hence isomorphic to $\mathbb{Z}^n$ for some $n$) and the Zariski closure of $\rho(\Gamma)$ in $\text{Aut} (\mathbb{C} \underset{\mathbb{Z}}{\otimes} V) = \text{\rm GL}_n (\mathbb{C})$ is essentially simply connected. We show further that the category $\text {Mod}_{\mathbb{Z}}(\Gamma)$ is ``saturated'' with such modules (see Lemma \ref{simplyconnectedsaturate}), and we deduce that one can compute $Cl_\mathbb{Z}(\Gamma)$ as in Equation \ref{profinitegrothendieck} by considering only simply connected representations. We can then use Proposition \ref{arithmeticchevalley}(ii), and get a fairly good understanding of $Cl_\mathbb{Z}(\Gamma)$. This enables us to prove Theorem \ref{maintheorem}.
In addition, we also deduce: \begin{corr}\label{simplyconnectedonto} If $(\rho, V)$ is a simply connected representation, then the induced map $Cl_\mathbb{Z}(\Gamma) \to \text{Aut} (V)$ is onto $Cl_\rho(\Gamma) := \overline{\rho(\Gamma)}$, the congruence closure of $\rho(\Gamma)$. \end{corr}
From Corollary \ref{simplyconnectedonto} we can deduce our last application.
\subsection*{Thin groups} In recent years, following \cite{Sar}, there has been a lot of interest in the distinction between thin subgroups and arithmetic subgroups of algebraic groups. Let us recall:
\begin{definition}\label{0.12} A subgroup $\Gamma \le \text{\rm GL}_n(\mathbb{Z})$ is called {\bf thin} if it is of infinite index in $G \cap \text{\rm GL}_n(\mathbb{Z})$, where $G$ is its Zariski closure in $\text{\rm GL}_n$. For a general group $\Gamma$, we will say that it is a {\bf thin group} (or that it {\bf has a thin representation}) if for some $n$ there exists a representation $\rho:\Gamma \to \text{\rm GL}_n(\mathbb{Z})$ for which $\rho(\Gamma)$ is thin. \end{definition}
During the last five decades a lot of attention was given to the study of arithmetic groups, with many remarkable results, especially for those of higher rank (cf. \cite{Mar}, \cite{Pl-Ra} and the references therein). Much less is known about thin groups. For example, it is not known if there exists a thin group with property $(T)$. Also, given a subgroup of an arithmetic group (say, given by a set of generators) it is difficult to decide whether it is thin or arithmetic (i.e., of finite or infinite index in its integral Zariski closure).
It is therefore of interest and perhaps even surprising that our results enable us to give a purely group theoretical characterization of thin groups $\Gamma \subset GL_n(\mathbb{Z})$. Before stating the precise result, we make the topology on $Cl_\mathbb{Z}(\Gamma)$ explicit.
If we take the class of simply connected representations $(\rho,V)$ for computing the group $Cl_\mathbb{Z}(\Gamma)$, one can then show that $Cl_\mathbb{Z}(\Gamma)/\Gamma$ is a {\it closed} subspace of the product $\prod _\rho (Cl_\rho(\Gamma)/\Gamma)$, where each $Cl_\rho(\Gamma)/\Gamma$ is given the discrete topology. This is the topology on the quotient space $Cl_\mathbb{Z}(\Gamma)/\Gamma$ in the following theorem. We can now state:
\begin{thm}\label{thincriterion} Let $\Gamma$ be a finitely generated $\mathbb{Z}$-linear group. Then $\Gamma$ is a thin group if and only if it satisfies (at least) one of the following conditions: \begin{enumerate} \item $\Gamma$ is not $FAb$ (namely, it has a finite index subgroup with infinite abelianization), or \item $Cl_\mathbb{Z} (\Gamma)/\Gamma $ is not compact. \end{enumerate} \end{thm}
\noindent {\bf Warning} \ There are groups $\Gamma$ which can be realized both as arithmetic groups as well as thin groups. For example, the free group is an arithmetic subgroup of $\text{\rm SL}_2(\mathbb{Z})$, but at the same time a thin subgroup of every semisimple group, by a well known result of Tits \cite{Ti}. In our terminology this is a thin group.
T.N.V. thanks the Math Department of the Hebrew University for great hospitality while a major part of this work was done. He would also like to thank the JC Bose fellowship (SR/S2/JCB-22/2017) for support during the period 2013-2018.
The authors thank the Math Department of the University of Marseilles, and the conference at Oberwolfach, where the work was completed. We would especially like to thank Bertrand Remy for many interesting discussions and for his warm hospitality.
A.L. is indebted to ERC, NSF and BSF for support.
\section{Preliminaries on Algebraic Groups over $\field{Q}$}
We recall the definition of an essentially simply connected group: \begin{defn} Let $G$ be a linear algebraic group over $\field{C}$ with maximal connected normal solvable subgroup $R$ (i.e.\ the radical of $G$) and identity component $G^0$. We say that $G$ is {\bf essentially simply connected} if the semi-simple part $G^0/R=H$ is simply connected. \end{defn}
Note that $G$ is essentially simply connected if and only if the quotient $G^0/U$ of the group $G^0$ by its unipotent radical $U$ is a product $H_{ss}\times S$ with $H_{ss}$ simply connected and semi-simple and $S$ a torus.
For example, a semi-simple connected group is essentially simply connected if and only if it is simply connected. The group $\mathbb{G}_m\times \text{\rm SL}_n$ is essentially simply connected; however, the radical of the group $\text{\rm GL}_n$ is the group $R$ of scalars and $\text{\rm GL}_n/R=\text{\rm SL}_n/\text{centre}$, so $\text{\rm GL}_n$ is {\it not} essentially simply connected. We will show later (Lemma \ref{surjectivemorphisms}(iii)) that every group has a finite cover which is essentially simply connected.
\begin{lemma} \label{essentiallysimplyconnected} Suppose $G\subset G_1\times G_2$ is a subgroup of a product of two essentially simply connected linear algebraic groups $G_1,G_2$ over $\field{C}$; suppose that the projection $\pi _i$ of $G$ to $G_i$ is surjective for $i=1,2$. Then $G$ is also essentially simply connected. \end{lemma}
\begin{proof} Assume, as we may, that $G$ is connected. Let $R$ be the radical of $G$. The projection of $R$ to $G_i$ is normal in $G_i$ since $\pi_i: G\rightarrow G_i$ is surjective. Moreover, $G_i/\pi _i(R)$ is the image of the semi-simple group $G/R$; the latter has a Zariski dense compact subgroup, hence so does $G_i/\pi _i(R)$; therefore, $G_i/\pi _i(R)$ is reductive and is its own commutator. Hence $G_i/\pi _i(R)$ is semi-simple and hence $\pi _i(R)=R_i$ where $R_i$ is the radical of $G_i$. Let $R^* = G \cap (R_1 \times R_2)$. Since $R_1 \times R_2$ is the radical of $G_1 \times G_2$, it follows that $R^*$ is a solvable normal subgroup of $G$ and hence its connected component is contained in $R$. Since $R \subseteq R_1 \times R_2$, it follows that $R$ is precisely the connected component of the identity of $R^*$. We then have the inclusion $G/R^*\subset G_1/R_1\times G_2/R_2$ with projections again being surjective.
By assumption, each $G_i/R_i=H_i$ is semi-simple, simply connected. Moreover $G/R^*=H$ where $H$ is connected, semi-simple. Thus we have the inclusion $H\subset H_1\times H_2$. Now, $H\subset H_1\times H_2$ is such that the projections of $H$ to $H_i$ are surjective, and each $H_i$ is simply connected. Let $K$ be the kernel of the map $H\rightarrow H_1$ and $K^0$ its identity component. Then $H/K^0 \rightarrow H_1$ is a surjective map of connected algebraic groups with finite kernel. The simple connectedness of $H_1$ then implies that $H/K^0=H_1$ and hence that $K=K^0 \subset \{1\}\times H_2$ is normal in $H_2$.
Write $H_2=F_1\times \cdots \times F_t$ where each $F_i$ is {\it simple} and simply connected. Now, $K$ being a closed normal subgroup of $H_2$ must be equal to $\prod _{i\in X} F_i$ for some subset $X$ of $\{ 1, \cdots, t\}$,
and is simply connected. Therefore, $K=K^0$ is simply connected.
From the preceding two paragraphs, we have that both $H/K$ and $K$ are simply connected, and hence so is $H = G/R^*$. Since $R$ is the connected component of $R^*$ and $G/R^*$ is simply connected, it follows that $G/R = G/R^*$ and hence $G/R$ is simply connected. This completes the proof of the lemma. \end{proof}
\subsection{Arithmetic Groups and Congruence Subgroups} \label{congruence}
In the introduction, we defined the notions of arithmetic and congruence subgroups of $G(\field{Q})$ using the adelic language. One can define the notion of an arithmetic (resp. congruence) group in more concrete terms as follows. Given a linear algebraic group $G\subset \text{\rm SL}_n$ defined over $\field{Q}$, we will say that a subgroup $\Gamma \subset G(\field{Q})$ is an {\it arithmetic group} if it is commensurable with $G\cap \text{\rm SL}_n(\mathbb{Z})=G(\mathbb{Z})$; that is, the intersection $\Gamma \cap G(\mathbb{Z})$ has finite index both in $\Gamma$ and in $G(\mathbb{Z})$. It is well known that the notion of an arithmetic group does not depend on the specific linear embedding $G\subset \text{\rm SL}_n$. As in \cite{Ser}, we may define the {\it arithmetic completion} $\widehat{G}$ of $G(\field{Q})$ as the completion of $G(\field{Q})$ with respect to the topology obtained by designating the arithmetic groups as a fundamental system of neighbourhoods of the identity in $G(\field{Q})$.
Given $G\subset \text{\rm SL}_n$ as in the preceding paragraph, we will say that an arithmetic group $\Gamma \subset G(\field{Q})$ is a {\it congruence subgroup} if there exists an integer $m \geq 2$ such that $\Gamma $ contains the ``principal congruence subgroup'' $G(m\mathbb{Z})=\text{\rm SL}_n(m\mathbb{Z})\cap G$, where $\text{\rm SL}_n(m\mathbb{Z})$ is the kernel of the residue class map $\text{\rm SL}_n(\mathbb{Z})\rightarrow \text{\rm SL}_n(\mathbb{Z}/m\mathbb{Z})$. We then get the structure of a topological group on $G(\field{Q})$ by designating the congruence subgroups of $G(\field{Q})$ as a fundamental system of neighbourhoods of the identity. The completion of $G(\field{Q})$ with respect to this topology is denoted $\overline{G}$. Again, the notion of a congruence subgroup does not depend on the specific linear embedding $G\rightarrow \text{\rm SL}_n$.
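For illustration (a standard computation, not used in the sequel): in the case $G=\text{\rm SL}_2$, the reduction map mod $m$ is surjective, so the index of a principal congruence subgroup is the order of the corresponding finite group:

```latex
\[
[\,\text{\rm SL}_2(\mathbb{Z}) : \text{\rm SL}_2(m\mathbb{Z})\,]
\;=\; |\text{\rm SL}_2(\mathbb{Z}/m\mathbb{Z})|
\;=\; m^{3}\prod_{p\mid m}\Bigl(1-\frac{1}{p^{2}}\Bigr).
\]
```

For instance, $m=2$ gives index $8\cdot(1-\tfrac14)=6$, and $m=3$ gives index $27\cdot(1-\tfrac19)=24$.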
Since every congruence subgroup is an arithmetic group, there exists a map $\pi: \widehat{G}\rightarrow \overline{G}$, which is easily seen to be surjective, and the kernel $C(G)$ of $\pi$ is a compact profinite subgroup of $\widehat{G}$. This is called the {\it congruence subgroup kernel}. One says that $G(\field{Q})$ has the {\it congruence subgroup property} if $C(G)$ is trivial. This is easily seen to be equivalent to the statement that every arithmetic subgroup of $G(\field{Q})$ is a congruence subgroup.
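Two classical examples may help calibrate the notion; we record them here only for orientation. By results of Bass--Milnor--Serre and Mennicke in higher rank, and by the analysis of the rank one case:

```latex
\[
C(\text{\rm SL}_n)=\{1\}\quad (n\geq 3),
\qquad
C(\text{\rm SL}_2)\ \text{is infinite}.
\]
```

Thus $\text{\rm SL}_n(\mathbb{Z})$ has the congruence subgroup property for $n\geq 3$, while $\text{\rm SL}_2(\mathbb{Z})$ possesses arithmetic subgroups that are not congruence subgroups.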
It is known (see p.~108, last but one paragraph of \cite{Ra2} or \cite{Ch}) that solvable groups $G$ have the congruence subgroup property.
Moreover, every solvable subgroup of $\text{\rm GL}_n(\mathbb{Z})$ is polycyclic, and in a polycyclic group every subgroup is an intersection of finite index subgroups. Hence every solvable subgroup of an arithmetic group is congruence closed. We will use these facts frequently in the sequel.
Another (equivalent) way of viewing the congruence completion is (see \cite{Ser}, p.~276, Remarque) as follows: let $\mathbb{A} _f$ be the ring of finite adeles over $\field{Q}$, equipped with the standard adelic topology and let $\mathbb{Z} _f \subset \mathbb{A} _f$ be the closure of $\mathbb{Z}$. Then the group $G({\mathbb A} _f)$ is also a locally compact group and contains the group $G(\field{Q})$. The congruence completion $\overline{G}$ of $G(\field{Q})$ may be viewed as the closure of $G(\field{Q})$ in $G({\mathbb A} _f)$.
\begin{lemma} \label{surjectivemorphisms} Let $H,H^*$ be linear algebraic groups defined over $\field{Q}$. \begin{enumerate}[(i)] \item Suppose $H^* \rightarrow H$ is a surjective $\field{Q}$-morphism. Let $(\rho, W _{\field{Q}})$ be a representation of $H$ defined over $\field{Q}$. Then there exists a faithful $\field{Q}$-representation $(\tau, V_{\field{Q}})$ of $H^*$ such that $(\rho,W)$ is a sub-representation of $(\tau, V)$.
\item If $H^*\rightarrow H$ is a surjective map defined over $\field{Q}$ , then the image of an arithmetic subgroup of $H^*$ under the map $H^*\rightarrow H$ is an arithmetic subgroup of $H$.
\item If $H$ is connected, then there exists a connected essentially simply connected algebraic group $H^*$ with a surjective $\mathbb{Q}$-defined homomorphism $H^*\to H$ with finite kernel.
\item If $H^*\rightarrow H$ is a surjective homomorphism of algebraic $\field{Q}$-groups which are essentially simply connected, then the image of a congruence subgroup of $H^*(\field{Q})$ is a congruence subgroup of $H(\field{Q})$.
\end{enumerate}
\end{lemma}
\begin{proof} Let $\theta: H^*\rightarrow \text{\rm GL}(E)$ be a faithful representation of the linear algebraic group $H^*$ defined over $\field{Q}$ and $\tau =\rho \oplus \theta$ as $H^*$-representation. Clearly $\tau$ is faithful for $H^*$ and contains $\rho$. This proves (i).
Part (ii) is the statement of Theorem (4.1) of \cite{Pl-Ra}.
We now prove (iii). Write $H=R G$ as a product of its radical $R$ and a semi-simple group $G$. Let $H^*_{ss}\rightarrow G$ be the simply connected cover of $G$. Hence $H^*_{ss}$ acts on $R$ through $G$, via this covering map. Define $H^*=R\rtimes H ^*_{ss}$ as a semi-direct product. Clearly, the map $H^*\rightarrow H$ has finite kernel and satisfies the properties of (iii).
To prove (iv), we may assume that $H$ and $H^*$ are connected. If $U^*,U$ are the unipotent radicals of $H^*$ and $H$, the assumptions of (iv) do not change for the quotient groups $H^*/U^*$ and $H/U$. Moreover, since $H^*$ is the semi-direct product of $U^*$ and $H^*/U^*$ (and similarly for $H,U$) and the unipotent $\field{Q}$-algebraic group $U$ has the congruence subgroup property, it suffices to prove (iv) when both $H^*$ and $H$ are reductive. By assumption, $H^*$ and $ H$ are essentially simply connected; i.e. $H^*=H^*_{ss}\times S^*$ and $H=H_{ss}\times S$ where $S,S^*$ are tori and $H^*_{ss},H_{ss}$ are simply connected semi-simple groups. Thus we have connected reductive $\field{Q}$-groups $H^*,H$ with a surjective map such that their derived groups are simply connected (and semi-simple), and the abelianization $(H^*)^{ab}$ is a torus (similarly for $H$).
Now, $[H^*,H^*]=H^*_{ss}$ is a simply connected semi-simple group and hence it is a product $F_1\times \cdots\times F_s$ of simply connected $\field{Q}$-simple algebraic groups $F_i$. Being a factor of $[H^*,H^*]=H^*_{ss}$, the group $[H,H]=H_{ss}$ is a product of a (smaller) number of these $F_i$'s. After a renumbering of the indices, we may assume that $H_{ss}$ is a product $F_1\times \cdots \times F_r$ for some $r\leq s$ and the map $\pi$ on $H^*_{ss}$ is the projection to the first $r$ factors. Hence the image of a congruence subgroup of $H^*_{ss}$ is a congruence subgroup of $H_{ss}$.
The tori $S^*,S$ have the congruence subgroup property by a result of Chevalley (as already stated at the beginning of this section, this is true for all solvable algebraic groups). Hence the image of a congruence subgroup of $S^*$ is a congruence subgroup of $S$. We thus need only prove that every subgroup of the reductive group $H$ of the form $\Gamma _1\Gamma _2$, where $\Gamma _1\subset H_{ss}$ and $\Gamma _2\subset S$ are congruence subgroups, is itself a congruence subgroup of $H$. We use the adelic form of the congruence topology. Suppose $K$ is a compact open subgroup of $H(\mathbb{A}_f)$, where $\mathbb{A} _f$ is the ring of finite adeles. The image of $H(\mathbb{Q})\cap K$ under the quotient map $H\rightarrow H^{ab}=S$ is a congruence subgroup in the torus $S$, and hence $H(\mathbb{Q})\cap K' \subset (H_{ss}(\mathbb{Q} )\cap K) (S(\mathbb{Q} )\cap K)$ for some possibly smaller open subgroup $K'\subset H(\mathbb{A}_f)$. This proves (iv). \end{proof}
Note that parts (iii) and (iv) prove Proposition \ref{arithmeticchevalley}(ii).
\section{The Arithmetic Chevalley Theorem}
In this section, we prove Proposition \ref{arithmeticchevalley}(i). Assume that $\varphi: G_1 \rightarrow G_2$ is a surjective morphism of $\field{Q}$-algebraic groups. We are to prove that $\varphi (\Gamma^0)$ contains the commutator subgroup of a congruence subgroup of $G_2(\field{Q})$ containing it.
Before starting on the proof, let us note that in general, the image of a congruence subgroup of $G_1(\mathbb Z)$ under $\varphi$ need not be a congruence subgroup of $G_2(\mathbb Z)$. The following proposition characterizes, in a fairly general situation, when the image is a congruence subgroup.
\begin{prop} \label{congruence image} Let $\pi : G_1\rightarrow G_2$ be a finite covering of semi-simple algebraic groups defined over $\field{Q}$ with $G_1$ simply connected and $G_2$ not. Assume $G_1 (\mathbb{Q})$ is dense in $G_1(\mathbb{A}_f)$. Write $K$ for the kernel of $\pi$ and $K_f$ for the kernel of the map $G_1(\mathbb{A} _f)\rightarrow G_2(\mathbb{A} _f)$. Let $\Gamma $ be a congruence subgroup of $G_1(\field{Q})$ and $H$ its closure in $G_1(\mathbb{A} _f)$. Then the image $\pi (\Gamma) \subset G_2(\field{Q})$ is a congruence subgroup if and only if $K H\supset K_f$ .
\end{prop}
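A minimal instance of the setup of Proposition \ref{congruence image}, recorded only for orientation and assuming the standard place-by-place identification of the kernel: take the covering $\pi: \text{\rm SL}_2\rightarrow \text{\rm PGL}_2$. Then

```latex
\[
K=\{\pm I\}\cong \mathbb{Z}/2,
\qquad
K_f=\ker\bigl(\text{\rm SL}_2(\mathbb{A}_f)\rightarrow \text{\rm PGL}_2(\mathbb{A}_f)\bigr)
\;\cong\; \prod _p \{\pm I\},
\]
```

since $\pm I$ lies in $\text{\rm SL}_2(\mathbb{Z}_p)$ at every prime. Thus $K$ is a single copy of $\mathbb{Z}/2$ while $K_f$ is an infinite product of such copies, so the condition $KH\supset K_f$ genuinely constrains the closure $H$ of $\Gamma$.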
Before proving the proposition, let us note that while $K$ is finite, the group $K_f$ is a product of infinitely many finite abelian groups and that $K_f$ is central in $\overline{G_1}$. This implies
\begin{corr} \begin{enumerate} [(i)]
\item There are infinitely many congruence subgroups $\Gamma _i$ such that each $\pi (\Gamma _i)$ is a non-congruence subgroup of finite index in its congruence closure $\overline{\Gamma _i}$, and these indices are unbounded.
\item For each of these $\Gamma = \Gamma_i$, the image $\pi (\Gamma)$ contains the commutator subgroup $[\overline{\Gamma},\overline{\Gamma}]$, and is normal in $\overline{\Gamma}$ (with abelian quotient).
\end{enumerate}
\end{corr}
We now prove Proposition \ref{congruence image}.
\begin{proof} Let $G_3$ be the image of the group of rational points $G_1(\field{Q})$: \[G_3=\pi (G_1(\field{Q}))\subset G_2(\field{Q}).\] Define a subgroup $\Delta$ of $G_3$ to be a {\it quasi-congruence subgroup} if the inverse image $\pi ^{-1}(\Delta)$ is a congruence subgroup of $G_1(\field{Q})$. Note that the quasi-congruence subgroups of $G_3 $ are exactly the images of congruence subgroups of $G_1(\field{Q})$ under $\pi$. It is routine to check that by declaring quasi-congruence subgroups to be open, we get the structure of a topological group on $G_3$. This topology is weaker than or equal to the arithmetic topology on $G_3$; however, it is strictly stronger than the congruence topology on $G_3$. The last assertion follows from the fact that the completion of $G_3=G_1(\field{Q})/K(\field{Q})$ in the quasi-congruence topology is the quotient $\overline{G_1}/K$, where $\overline{G_1}$ is the congruence completion of $G_1 (\field{Q})$, whereas the completion of $G_3$ with respect to the congruence topology is $\overline{G_1}/K_f$.
Now let $\Gamma \subset G_1(\field{Q})$ be a congruence subgroup and $\Delta_1=\pi (\Gamma)$; let $\Delta_2$ be its congruence closure in $G_3$. Then both $\Delta_1$ and $\Delta_2$ are open in the quasi-congruence topology on $G_3$. Denote by $G_3 ^*$ the completion of $G_3$ with respect to the quasi-congruence topology, so $G^*_3 = \overline{G_1}/K$ and denote by $\Delta_1^*,\Delta_2^*$ the closures of $\Delta_1,\Delta_2$ in $G_3^*$. We then have the equalities \[\Delta_2/\Delta_1= \Delta_2^*/\Delta_1^*, \quad \Delta_2^*= \Delta^*_1 K_f/K. \]
Hence $\Delta_1 ^*=\Delta_2 ^*$ if and only if $K\Delta_1^*\supset K_f$. This proves Proposition \ref{congruence image}.
The proof shows that $\Delta_1^*$ is normal in $\Delta_2^*$ (since $ K_f$ is central) with abelian quotient. The same is true for $\Delta_1$ in $\Delta_2$ and the corollary is also proved. \end{proof}
To continue with the proof of Proposition \ref{arithmeticchevalley}, assume, as we may (by replacing $G_1$ with the Zariski closure of $\Gamma$), that $G_1$ has no non-trivial characters defined over $\field{Q}$. Indeed, suppose that $G_1$ is the Zariski closure of $\Gamma \subset G_1(\mathbb Z)$ and let $\chi :G_1 \rightarrow {\mathbb G}_m$ be a non-trivial (and therefore surjective) homomorphism defined over $\field{Q}$; then the image of the arithmetic group $G_1(\mathbb Z)$ in ${\mathbb G}_m(\field{Q})$ is a Zariski dense arithmetic group. However, the only arithmetic groups in ${\mathbb G}_m(\field{Q})$ are finite and cannot be Zariski dense in ${\mathbb G}_m$. Therefore, $\chi $ cannot be non-trivial. We may also assume that $G_1$ is connected.
We start by proving Proposition \ref{arithmeticchevalley} in the case that $\Gamma $ is a congruence subgroup.
If we write $G_1=R _1H_1$ where $H_1$ is semi-simple and $R_1$ is the radical, we may assume that $G_1$ is essentially simply connected (Lemma \ref{surjectivemorphisms}(iii)), without affecting the hypotheses or the conclusion of Proposition \ref{arithmeticchevalley}.
Hence $G_1 = R_1 \rtimes H_1$ is a semi-direct product. Then clearly, every congruence subgroup of $G_1 $ contains a congruence subgroup of the form $\Delta \rtimes \Phi$ where $\Delta \subset R_1$ and $\Phi \subset H_1$ are congruence subgroups. Similarly, write $G_2=R_2H_2$. Since $\varphi $ is easily seen to map $R_1$ onto $R_2$ and $H_1$ onto $H_2$, it is enough to prove the proposition for $R_1$ and $H_1$ separately.
We first recall that if $G$ is a solvable linear algebraic group defined over $\field{Q}$ then the congruence subgroup property holds for $G$, i.e., every arithmetic subgroup of $G$ is a congruence subgroup (for a reference see p.~108, last but one paragraph of \cite{Ra2} or \cite{Ch}). Consequently, by Lemma \ref{surjectivemorphisms} (ii), the image of a congruence subgroup in $R_1$ is an arithmetic group in $R_2$ and hence a congruence subgroup. Thus we dispose of the solvable case.
In the case of semi-simple groups, denote by $H_2^*$ the simply connected cover of $H_2$. The map $\varphi : H_1 \rightarrow H_2$ lifts to a map from $H_1$ to $H_2^*$. For simply connected semi-simple groups, a surjective map from $H_1$ to $H_2^*$ sends a congruence subgroup to a congruence subgroup by Lemma \ref{surjectivemorphisms}(iv).
We are thus reduced to the situation $H_1=H_2^*$ and $\varphi: H_1\rightarrow H_2$ is the simply connected cover of $H_2$.
By our assumptions, $H_1$ is now connected, simply connected and semi-simple. We claim that for any non-trivial $\mathbb{Q}$-simple factor $L$ of $H_1$, the group $L(\mathbb{R})$ is not compact. Otherwise, the image of the arithmetic group $\Gamma$ in $L$ would be finite; since $\Gamma$ is Zariski dense in $H_1$, this image is Zariski dense in $L$, which would force the connected group $L$ to be trivial, a contradiction. The strong approximation theorem (\cite[Theorem 7.12]{Pl-Ra}) now gives that $H_1(\mathbb{Q})$ is dense in $H_1(\mathbb{A}_f)$. Proposition \ref{congruence image} can therefore be applied to finish the proof of Proposition \ref{arithmeticchevalley} in the case where $\Gamma$ is a congruence subgroup.
We need to show that the result also holds in the more general case when $\Gamma$ is only congruence closed. To this end, let us formulate the following proposition, which is of independent interest.
\begin{prop}\label{pr2.3} Let $\Gamma \subseteq \text{\rm GL}_n(\mathbb{Z}), G$ its Zariski closure and $Der = [G^0, G^0]$. Then $\Gamma$ is congruence closed if and only if $\Gamma \cap Der$ is a congruence subgroup of $Der$. \end{prop}
\begin{proof} If $G^0$ has no toral factors, this is proved in \cite{Ve}; in fact, in this case a congruence closed Zariski dense subgroup is a congruence subgroup. (Note that this is stated there for general $G$, but the assumption that there is no toral factor was mistakenly omitted, as the proof there shows.)
Now, if there is a toral factor, we may assume $G$ is connected, so $G^{ab} = U \times S$ where $U$ is unipotent and $S$ is a torus. Now $\Gamma\cap [G, G]$ is Zariski dense and congruence closed, so it is a congruence subgroup by \cite{Ve} as before. For the other direction, note that the image of $\Gamma$ in $U \times S$, being solvable, is always congruence closed, so the proposition follows. \end{proof}
Now we can end the proof of Proposition \ref{arithmeticchevalley} for congruence closed subgroups by looking at $\varphi $ on $G_3 = \overline{\Gamma}$, the Zariski closure of $\Gamma$, and applying the proof above to $Der (G^0_3)$. It also proves Proposition \ref{congruenceimage}.
Of course, Proposition \ref{pr2.3} is the general form of the following result from \cite{Ve} (based on \cite{Nori} and \cite{Weis}), which is, in fact, the core of Proposition \ref{pr2.3}. \begin{prop} \label{noriconsequence} Suppose $G$ is simply connected and $\Gamma \subset G(\mathbb Z)$ is a Zariski dense subgroup which is closed in the congruence topology. Then $\Gamma $ is itself a congruence subgroup. \end{prop}
\section{The Grothendieck closure}
\subsection{The Grothendieck Closure of a group $\Gamma$}
\begin{defn} Let $\rho : \Gamma \rightarrow \text{\rm GL}(V)$ be a representation of $\Gamma$ on a lattice $V$ in a $\field{Q}$-vector space $V\otimes \field{Q}$. Then we get a continuous homomorphism $\widehat{\rho}: \widehat{\Gamma}\rightarrow \text{\rm GL}(\widehat{V})$ (where, for a group $\Delta$, $\widehat{\Delta}$ denotes its profinite completion) which extends $\rho$ . \\
Denote by $Cl_{\rho }(\Gamma)$ the subgroup of the profinite completion of $\Gamma$ which preserves the lattice $V$: $Cl_{\rho}(\Gamma)= \{g\in \widehat{\Gamma}: \widehat{\rho}(g)(V)\subset V\}$. In fact, since $\det (\hat\rho(g)) = \pm 1$ for every $g \in \Gamma$ and hence also for every $g \in \hat \Gamma$, we have $\hat\rho (g) (V) = V$ for $g \in Cl_\rho (\Gamma)$, and hence $Cl_\rho (\Gamma)$ is a subgroup of $\hat\Gamma$. We denote by $Cl(\Gamma)$ the subgroup \begin{equation}\label{eq3.1} Cl(\Gamma)= \{g\in \widehat{\Gamma}: \widehat{\rho} (g) (V)\subset V \ \text{ for all lattices } V \}.\end{equation} Therefore, $Cl(\Gamma)=\cap_{\rho} Cl_{\rho}(\Gamma)$ where $\rho$ runs through all integral representations of the group $\Gamma$.
Suppose now that $V$ is any finitely generated abelian group (not necessarily a lattice, i.e. not necessarily torsion-free) which is also a $\Gamma$-module. The torsion subgroup of $V$ is finite, of some exponent $n$, and $nV$ is torsion free. Since $\Gamma$ acts on the finite group $V/nV$ through a finite quotient, via say $\rho$, it follows that $\widehat{\Gamma}$ also acts on the finite group $V/nV$ via $\widehat{\rho}$. Thus, for $g\in \widehat{\Gamma}$ we have $\widehat{\rho}(g)(V/nV)=V/nV$. Suppose now that $g\in Cl(\Gamma)$. Then $g(nV)=nV$ by the definition of $Cl(\Gamma)$. Hence $g(V)/nV=V/nV$ for $g\in Cl(\Gamma)$; this is an equality in the quotient group $\widehat{V}/nV$. It shows that $g(V)\subset V+nV=V$, so $Cl(\Gamma)$ preserves {\it all} finitely generated abelian groups $V$ which are $\Gamma$-modules.
By $Cl_{\mathbb Z}(\Gamma)$ we denote the {\it Grothendieck closure} of the (finitely generated) group $\Gamma$. It is essentially a result of \cite{Lub} that the Grothendieck closure $Cl_{\mathbb Z}(\Gamma)$ is the same as the group $Cl(\Gamma)$ defined above (in \cite{Lub}, the group considered was the closure with respect to {\it all} finitely generated $\mathbb Z$-modules which are also $\Gamma$-modules, whereas we consider only those finitely generated $\mathbb Z$-modules which are $\Gamma$-modules and which are torsion-free; the argument of the preceding paragraph shows that these closures are the same). From now on, we identify the Grothendieck closure $Cl_{\mathbb Z}(\Gamma)$ with the foregoing group $Cl(\Gamma)$. \end{defn}
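As a sanity check on the definition, here is a standard worked example, with the representation chosen only for illustration. Take $\Gamma =\mathbb{Z}$, acting on the lattice $V=\mathbb{Z}^2$ by the unipotent representation

```latex
\[
\rho (k)=\begin{pmatrix}1 & k\\ 0 & 1\end{pmatrix}
\quad (k\in \mathbb{Z}),
\qquad
\widehat{\rho}(a)=\begin{pmatrix}1 & a\\ 0 & 1\end{pmatrix}
\quad (a\in \widehat{\mathbb{Z}}).
\]
```

For $y\in \mathbb{Z}$, the vector $\widehat{\rho}(a)(x,y)=(x+ay,\,y)$ lies in $\mathbb{Z}^2$ for all $x,y$ if and only if $a\in \mathbb{Z}$. Hence $Cl_{\rho}(\mathbb{Z})=\mathbb{Z}$, and a fortiori $Cl(\mathbb{Z})=\mathbb{Z}$ (the inclusion $\Gamma \subseteq Cl(\Gamma)$ holds for every $\Gamma$).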
\begin{notation}\label{BDG} Let $\Gamma$ be a group, $V$ a finitely generated torsion-free abelian group which is a $\Gamma$-module and $\rho: \Gamma \rightarrow \text{\rm GL}(V)$ the corresponding $\Gamma$-action. Denote by $G_{\rho}$ the Zariski closure of the image $\rho(\Gamma )$ in $\text{\rm GL}(V\otimes \field{Q})$, and $G_{\rho}^0$ its connected component of identity. Then both $G_{\rho},G_{\rho}^0$ are linear algebraic groups defined over $\field{Q}$, and so is $Der_\rho = [G^0_\rho, G^0_\rho]$.
Let $B = B_{\rho}(\Gamma)$ denote the subgroup $\widehat{\rho}(\widehat{\Gamma})\cap \text{\rm GL}(V)$. Since the profinite topology of $\text{\rm GL}(\hat V)$ induces the congruence topology on $\text{\rm GL}(V)$, the group $B_\rho(\Gamma)$ is the congruence closure of $\rho(\Gamma)$ in $\text{\rm GL}(V)$.
We denote by $D=D_{\rho}(\Gamma)$ the intersection of $B$ with the derived subgroup $Der_\rho = [G^0,G^0]$. We thus have an exact sequence \[1 \rightarrow D \rightarrow B \rightarrow A \rightarrow 1, \] where $A= A_\rho (\Gamma)$ is an extension of a finite group $G/G^0$ by an abelian group (the image of $B\cap G^0$ in the abelianization $(G^0)^{ab}$ of the connected component $G^0$). \end{notation}
\subsection{Simply Connected Representations}
\begin{defn} \label{simplyconnectedrepresentations} We will say that $\rho$ is {\bf simply connected} if the group $G = G_{\rho}$ is {\it essentially simply connected}. That is, if $U$ is the unipotent radical of $G$, the quotient $G^0/U$ is a product $H\times S$ where $H$ is semi-simple and simply connected and $S$ is a torus. \end{defn}
An easy consequence of Lemma \ref{essentiallysimplyconnected} is that simply connected representations are closed under direct sums.
\begin{lemma} \label{SCdirectsum} Let $\rho _1,\rho _2$ be two simply connected representations of an abstract group $\Gamma$. Then the direct sum $\rho _1\oplus \rho _2$ is also simply connected. \end{lemma}
We also have:
\begin{lemma}\label{surjectiveclosure} Let $\rho: \Gamma \rightarrow \text{\rm GL}(W)$ be a sub-representation of a representation $\tau: \Gamma \rightarrow \text{\rm GL}(V)$ such that both $\rho, \tau$ are simply connected. Then the map $r: B_{\tau}(\Gamma )\rightarrow B_{\rho}(\Gamma )$ is surjective. \end{lemma}
\begin{proof} The image of $B_{\tau}(\Gamma)$ in $B_{\rho}(\Gamma)$ contains the image of $D_{\tau}$. By Proposition \ref{pr2.3}, $D_\tau$ is a congruence subgroup of the algebraic group $Der_{\tau}$. The map $Der_{\tau}\rightarrow Der_{\rho}$ is a surjective map between simply connected groups. Therefore, by part (iv) of Lemma \ref{surjectivemorphisms}, the image of $D_{\tau}$ is a congruence subgroup $F$ contained in $D_{\rho}$. Now, by Proposition \ref{pr2.3}, $D_\rho \cdot \rho(\Gamma)$ is congruence closed, hence equal to $B_\rho$, the congruence closure of $\rho(\Gamma)$; consequently $B_\tau \to B_\rho$ is surjective. \end{proof}
\subsection{Simply-Connected to General}
\begin{lemma} \label{simplyconnectedsaturate} Every (integral) representation $\rho: \Gamma \rightarrow \text{\rm GL}(W)$ is a sub-representation of a faithful representation $\tau: \Gamma \rightarrow \text{\rm GL}(V)$ where $\tau$ is simply connected. \end{lemma}
\begin{proof} Let $\rho : \Gamma \rightarrow \text{\rm GL}(W)$ be a representation. Let $Der$ be the derived subgroup of the identity component of the Zariski closure $H= G_\rho $ of $\rho (\Gamma)$. Then, by Lemma \ref{surjectivemorphisms}(iii), there exists a map $H^* \rightarrow H^0$ with finite kernel such that $H^*$ is connected and $H^*/U^*= (H^*)_{ss}\times S^*$ where $H^*_{ss}$ is a simply connected semi-simple group. Denote by $W_{\field{Q}}$ the $\field{Q}$-vector space $W\otimes \field{Q}$. By Lemma \ref{surjectivemorphisms}(i), $\rho :H^0 \rightarrow \text{\rm GL}(W_{\field{Q}})$ may be considered as a sub-representation of a faithful representation $(\theta, E_{\field{Q}}) $ of the covering group $H^*$. \\
By (ii) of Lemma \ref{surjectivemorphisms}, the image of an arithmetic subgroup of $H^*$ is an arithmetic group of $H$. Moreover, as $H(\mathbb{Z})$ is virtually torsion free, one may choose a normal, torsion-free arithmetic subgroup $\Delta \subset H(\mathbb Z)$ such that the map $H^*\rightarrow H^0 $ splits over $\Delta$. In particular, the map $H^*\rightarrow H^0$ splits over a normal subgroup $N$ of $\Gamma$ of finite index. Thus, $\theta$ may be considered as a representation of the group $N$. \\
Consider the induced representation $Ind _N^{\Gamma}(W_{\field{Q}})$. Since $W_{\field{Q}}$ is a representation of $\Gamma$, it follows that $Ind_N^{\Gamma}(W_{\field{Q}})=W_{\field{Q}} \otimes Ind_N^{\Gamma}({\rm triv}_N)\supset W_{\field{Q}}$. Since, by the first paragraph of this proof, $W _{\field{Q}}\subset E_{\field{Q}}$ as $H^*$-modules, it follows that $W _{\field{Q}} \mid _N \subset E_{\field{Q}}$ and hence $W_{\field{Q}}\subset Ind _N^{\Gamma}(E_{\field{Q}})=:V_{\field{Q}}$. Write $\tau =Ind_N^{\Gamma}(E_{\field{Q}})$ for the representation of $\Gamma$ on $V_{\field{Q}}$. The normality of $N$ in $\Gamma$ implies that the restriction $\tau \mid _N$ is contained in a direct sum of the $N$-representations $n\to\theta(\gamma n\gamma^{-1})$ as $\gamma$ varies over the finite set $\Gamma/N$.
Write $G_{\theta \mid _N}$ for the Zariski closure of the image $\theta (N)$. Since this Zariski closure is $H^*$ and the group $H^*_{ss}$ is simply connected, each $\theta $ composed with conjugation by $\gamma$ is a simply connected representation of $N$. It follows from Lemma \ref{SCdirectsum} that $\tau \mid _N$ is simply connected. Since simple connectedness of a representation is the same for subgroups of finite index, it follows that $\tau $, as a representation of $\Gamma$, is simply connected.
We have now proved that there exists a $\Gamma$-equivariant embedding of the module $(\rho,W_{\field{Q}})$ into $(\tau, V_{\field{Q}})$, where $W,V$ are lattices in the $\field{Q}$-vector spaces $W_{\field{Q}},V_{\field{Q}}$. Each element of a basis of the lattice $W$ is then a $\field{Q}$-linear combination of basis elements of $V$; the finite generation of $W$ then implies that there exists an integer $m$ such that $mW\subset V$, and this inclusion is an embedding of $\Gamma$-modules. Clearly, the module $(\rho,W)$ is isomorphic to $(\rho,mW)$, the isomorphism given by multiplication by $m$. Hence the lemma follows. \end{proof}
The following is the main technical result of this section, from which the main results of this paper are derived:
\begin{proposition} \label{surjective} The group $Cl(\Gamma)$ is the inverse limit of the groups $B_{\rho}(\Gamma)$ where $\rho$ runs through simply connected representations and $B_\rho(\Gamma)$ is the congruence closure of $\rho(\Gamma)$. Moreover, if $\rho: \Gamma \rightarrow \text{\rm GL}(W)$ is simply connected, then the map $Cl(\Gamma)\rightarrow B_{\rho} (\Gamma)$ is surjective. \end{proposition}
\begin{proof} Denote temporarily by $Cl(\Gamma)_{sc}$ the subgroup of elements of $\widehat{\Gamma}$ which stabilize the lattice $V$ for all {\it simply connected} representations $(\tau, V)$. Let $W$ be an arbitrary finitely generated torsion-free lattice which is also a $\Gamma$-module; denote by $\rho$ the action of $\Gamma$ on $W$. \\
By Lemma \ref{simplyconnectedsaturate}, there exists a simply connected representation $(\tau, V)$ which contains $(\rho, W)$. If $g\in Cl(\Gamma)_{sc}$, then $\widehat{\tau}(g)(V)\subset V$; since $\Gamma$ is dense in $\widehat{\Gamma}$ and stabilizes $W$, it follows that for all $x\in \widehat{\Gamma}$, $\widehat{\tau}(x)(\widehat{W})\subset \widehat{W}$; in particular, for $g\in Cl(\Gamma)_{sc}$, $\widehat{\rho}(g)(W)= \widehat{\tau}(g)(W)\subset \widehat{W}\cap V=W$. Thus, $Cl(\Gamma)_{sc}\subset Cl(\Gamma)$. \\
The group $Cl(\Gamma)$ is, by definition, the set of all elements $g$ of the profinite completion $\widehat{\Gamma}$ which stabilize all $\Gamma$ stable torsion free lattices. It follows in particular, that these elements $g$ stabilize all $\Gamma$-stable lattices $V$ associated to simply connected representations $(\tau,V)$; hence $Cl(\Gamma)\subset Cl(\Gamma)_{sc}$. The preceding paragraph now implies that $Cl(\Gamma)=Cl(\Gamma)_{sc}$. This proves the first part of the proposition (see Equation \ref{eq0.2}).
Since $\Gamma$ is finitely generated, it has only countably many integral representations, and in particular we can enumerate the simply connected ones among them. Write $\rho_1,\rho _2, \cdots, \rho _n, \cdots$ for the sequence of simply connected representations of $\Gamma$. Write $\tau _n$ for the direct sum $\rho _1\oplus \rho _2 \oplus \cdots \oplus \rho _n$. Then $\tau _n\subset \tau _{n+1}$ and by Lemma \ref{SCdirectsum} each $\tau _n$ is simply connected; moreover, the simply connected representation $\rho _n$ is contained in $\tau _n$. \\
By Lemma \ref{surjectiveclosure}, it follows that $Cl(\Gamma)$ is the inverse limit of the {\it totally ordered family} $B_{\tau _n}(\Gamma)$; moreover, $B_{\tau _{n+1}}(\Gamma)$ maps {\bf onto} $B_{\tau _n}(\Gamma)$. By taking inverse limits, it follows that $Cl(\Gamma)$ maps {\it onto} the group $B_{\tau _n}(\Gamma)$ for every $n$. It follows, again from Lemma \ref{surjectiveclosure}, that every $B_{\rho _n}(\Gamma)$ is a homomorphic image of $B_{\tau _n}(\Gamma)$ and hence of $Cl(\Gamma)$. This proves the second part of the proposition. \end{proof}
\begin{defn} Let $\Gamma$ be a finitely generated group. We say that $\Gamma$ is $FAb$ if the abelianization $\Delta ^{ab}$ is finite for every finite index subgroup $\Delta\subset \Gamma$. \end{defn}
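Standard examples, recorded only to fix ideas:

```latex
\[
\text{\rm SL}_n(\mathbb{Z})\ (n\geq 3)\ \text{is }FAb,
\qquad
F_2\ \text{and}\ \text{\rm SL}_2(\mathbb{Z})\ \text{are not }FAb,
\]
```

since every finite index subgroup of $\text{\rm SL}_n(\mathbb{Z})$, $n\geq 3$, has finite abelianization, whereas $\text{\rm SL}_2(\mathbb{Z})$ contains free subgroups of finite index, and finite index subgroups of a free group are free, with infinite abelianization.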
\begin{cor}\label{closureinverse} If $\Gamma$ is $FAb$ then for every simply connected representation $\rho$, the congruence closure $B_\rho(\Gamma)$ of $\rho(\Gamma)$ is a congruence subgroup and $Cl(\Gamma)$ is an inverse limit over a totally ordered set $\tau_n$ of simply connected representations of $\Gamma$, of congruence groups $B_n$ in groups $G_n= G_{{\tau_n}}$ with $G^0_n$ simply connected. Moreover, the maps $B_{n+1}\rightarrow B_n$ are surjective. Hence the maps $Cl(\Gamma)\rightarrow B_n$ are all surjective. \end{cor}
\begin{proof} If $\rho: \Gamma \rightarrow \text{\rm GL}(V)$ is a simply connected representation, then for some finite index subgroup $\Gamma ^0$ the image $\rho (\Gamma ^0)$ has connected Zariski closure, and by assumption, $G^0/U=H\times S$ where $S$ is a torus and $H$ is simply connected semi-simple. Since the group $\Gamma$ is $FAb$, it follows that $S=1$ and hence $G^0=Der (G^0)$. Now Proposition \ref{noriconsequence} implies that $B_{\rho }(\Gamma)$ is a congruence subgroup of $G_{\rho}(V)$. The Corollary is now immediate from Proposition \ref{surjective}, taking $B_n=B_{\tau _n}$ as in its proof. \end{proof}
We can now prove Theorem 0.5. Let us first prove the direction claiming that the congruence subgroup property implies $Cl(\Gamma) = \Gamma$. This was proved for arithmetic groups $\Gamma$ by Grothendieck, and we follow here the proof in \cite{Lub}, which works for general $\Gamma$. Indeed, if $\rho: \Gamma \to \text{\rm GL}_n(\mathbb{Z})$ is a faithful simply connected representation such that $\rho(\Gamma)$ satisfies the congruence subgroup property, then the map $\hat\rho: \hat\Gamma \to \text{\rm GL}_n (\widehat{\mathbb{Z}})$ is injective. Now $\hat\rho \left(Cl(\Gamma)\right) \subseteq \text{\rm GL}_n (\mathbb{Z}) \cap \hat\rho (\hat\Gamma)$, and the latter is exactly the congruence closure of $\rho (\Gamma)$. By our assumption, $\rho(\Gamma)$ is congruence closed, so this closure equals $\rho(\Gamma)$. In summary, $\hat\rho (\Gamma) \subseteq \hat\rho \left(Cl (\Gamma)\right)\subseteq\rho(\Gamma) = \hat\rho(\Gamma)$. As $\hat\rho$ is injective, $\Gamma = Cl (\Gamma)$.
In the opposite direction, assume $Cl(\Gamma) = \Gamma$. By the description of $Cl(\Gamma)$ in (0.1) or in (3.1), it follows that $Cl(\Gamma') = \Gamma'$ for every finite index subgroup $\Gamma'$ of $\Gamma$ (see \cite[Proposition 4.4]{Lub}). Now, if $\rho$ is a faithful simply connected representation of $\Gamma$, it is also a faithful simply connected representation of $\Gamma'$, and by Proposition 3.6, $\rho\left(Cl(\Gamma')\right)$ is congruence closed. In our case this means that $\rho(\Gamma')$ is congruence closed for every finite index subgroup $\Gamma'$, i.e. $\rho(\Gamma)$ has the congruence subgroup property.
\section{Thin Groups}
Let $\Gamma$ be a finitely generated $\mathbb{Z}$-linear group, i.e. $\Gamma \subset \text{\rm GL}_n(\mathbb{Z})$ for some $n$. Let $G$ be its Zariski closure in $\text{\rm GL}_n(\mathbb{C})$ and $\Delta = G \cap \text{\rm GL}_n(\mathbb{Z})$. We say that $\Gamma$ is a \emph{thin} subgroup of $G$ if $[\Delta:\Gamma] = \infty$; otherwise $\Gamma$ is an arithmetic subgroup of $G$. In general, given $\Gamma$ (say, by a set of generators), it is a difficult question to determine whether $\Gamma$ is thin or arithmetic. Our next result nevertheless gives a group theoretic characterization for the {\bf abstract} group $\Gamma$ to be thin. But first a warning: an abstract group can sometimes appear as an arithmetic subgroup and sometimes as a thin subgroup. For example, the free group on two generators $F = F_2$ is a finite index subgroup of $\text{\rm SL}_2(\mathbb{Z})$, and so arithmetic. But at the same time, by a well known result of Tits [Ti], $\text{\rm SL}_n(\mathbb{Z})$ contains a copy of $F$ which is Zariski dense in $\text{\rm SL}_n$; so $F$ also appears as a thin subgroup. To be precise, let us define:
\begin{definition}\label{thindefinition} A finitely generated $\mathbb{Z}$-linear group $\Gamma$ is called a {\bf thin group} if it has a faithful representation $\rho: \Gamma \to \text{\rm GL}_n(\mathbb{Z})$ for some $n \in \mathbb{N}$, such that $\rho(\Gamma)$ is of infinite index in $\overline{\rho(\Gamma)}^Z \cap \text{\rm GL}_n (\mathbb{Z})$, where $\overline{\rho(\Gamma)}^Z $ is the Zariski closure of $\rho(\Gamma)$ in $\text{\rm GL}_n$. Such a $\rho$ will be called a thin representation of $\Gamma$. \end{definition}
We have assumed that $i: \Gamma \subset GL_n(\mathbb Z)$. Assume also, as we may (see Lemma \ref{simplyconnectedsaturate}), that the representation $i$ is simply connected. By Proposition \ref{surjective}, the group $Cl(\Gamma)$ is the subgroup of $\widehat{\Gamma}$ which preserves the lattices $V_n$ for a totally ordered set (with respect to the relation of being a subrepresentation) of faithful simply connected integral representations $(\rho _n,V_n)$ of $\Gamma$, with the maps $Cl(\Gamma) \rightarrow B_n$ being surjective, where $B_n$ is the congruence closure of $\rho_n(\Gamma)$ in $GL(V_n)$. Hence, $Cl(\Gamma)$ is the inverse limit (as $n$ varies) of the congruence closed subgroups $B_n$, and $\Gamma$ is the inverse limit of the images $\rho _n(\Gamma)$. Equip $B_n/\rho _n(\Gamma)$ with the discrete topology. Consequently, $Cl(\Gamma)/\Gamma$ is a closed subspace of the Tychonoff product $\prod _n (B_{n}/\rho_n(\Gamma))$. This is the topology on $Cl(\Gamma)/\Gamma$ considered in the following theorem.
\begin{thm}\label{thin compact} Let $\Gamma$ be a finitely generated $\mathbb{Z}$-linear group, i.e. $\Gamma \subset \text{\rm GL}_m (\mathbb{Z})$ for some $m$. Then $\Gamma$ is \emph{not} a thin group if and only if $\Gamma$ satisfies both of the following two properties:
\begin{enumerate}[(a)]
\item $\Gamma$ is an $FAb $ group (i.e. for every finite index subgroup $\Lambda $ of $\Gamma$, $\Lambda/[\Lambda, \Lambda]$ is finite), and
\item The group $Cl(\Gamma)/\Gamma$ is compact.
\end{enumerate}
\end{thm}
\begin{proof} Assume first that $\Gamma$ is a thin group. If $\Gamma$ is not $FAb$ we are done. So, assume $\Gamma$ is $FAb$. We must now prove that $Cl(\Gamma)/\Gamma$ is not compact. We know that $\Gamma$ has a faithful thin representation $\rho :\Gamma \rightarrow GL_n(\mathbb Z)$ which, in addition, is simply connected. This induces a surjective map (see Proposition \ref{surjective}) $Cl(\Gamma)\rightarrow B_\rho (\Gamma)$, where $B_\rho (\Gamma)$ is the congruence closure of $\rho (\Gamma)$ in $GL_n(\mathbb Z)$.
As $\Gamma$ is $FAb$, $B_\rho(\Gamma)$ is a congruence subgroup, by Corollary \ref{closureinverse}.
But as $\rho$ is thin, $\rho (\Gamma)$ has infinite index in $B_\rho (\Gamma)$. Thus, $Cl(\Gamma)/\Gamma$ is mapped {\it onto} the discrete infinite quotient space $B_\rho (\Gamma)/\rho (\Gamma)$. Hence $Cl(\Gamma)/\Gamma$ is not compact.
Assume now that $\Gamma$ is not a thin group. This implies that for every faithful integral representation $\rho$, the image $\rho(\Gamma)$ is of finite index in its integral Zariski closure. We claim that $\Gamma/[\Gamma, \Gamma]$ is finite. Otherwise, as $\Gamma$ is finitely generated, $\Gamma$ maps onto $\mathbb{Z}$. The group $\mathbb{Z}$ has a Zariski dense integral representation $\tau$ into $\mathbb{G}_a\times S$, where $S$ is a torus: take any integral matrix $g \in \text{\rm SL}_n(\mathbb{Z})$ which is neither semi-simple nor unipotent and whose semisimple part has infinite order, and let $\tau$ send a generator of $\mathbb{Z}$ to $g$. Then both the unipotent and the semisimple part of the Zariski closure $H$ of $\tau(\mathbb{Z})$ are non trivial, and $H(\mathbb{Z})$ cannot contain $\tau(\mathbb{Z})$ as a subgroup of finite index, since $H(\mathbb{Z})$ is commensurable to $\mathbb{G}_a (\mathbb{Z}) \times S(\mathbb{Z})$ and both factors are non trivial and infinite. The representation $\rho \times \tau$ (where $\rho$ is any faithful integral representation of $\Gamma$) then gives a thin representation of $\Gamma$. This proves that $\Gamma/[\Gamma, \Gamma]$ is finite. A similar argument (using an induced representation) works for every finite index subgroup, hence $\Gamma$ satisfies $FAb$.
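To make the choice of $g$ above concrete (the following explicit matrix is an illustration of ours, not part of the original argument), one may take in $\text{\rm SL}_4(\mathbb{Z})$ the block matrix
$$
g=\begin{pmatrix} 1&1&0&0\\ 0&1&0&0\\ 0&0&2&1\\ 0&0&1&1 \end{pmatrix},
$$
whose Jordan decomposition has unipotent part the upper-left Jordan block and semisimple part the lower-right hyperbolic block, which has infinite order; thus $g$ is neither semi-simple nor unipotent.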
We now prove that $Cl(\Gamma)/\Gamma$ is compact. We already know that $\Gamma$ is $FAb$, so by Corollary \ref{closureinverse}, $Cl(\Gamma) = \underset{\leftarrow}{\lim} B_{\rho_n} (\Gamma)$, where $B_n = B_{\rho_n} (\Gamma)$ are congruence groups with surjective homomorphisms $B_{n + 1} \to B_n$. Note that as $\Gamma$ has a faithful integral representation, we can assume that all the representations $\rho_n$ in the sequence are faithful and \begin{equation}\label{inverse limit} \Gamma = \lim_{\stackrel{\longleftarrow}{n}} \rho_n(\Gamma).\end{equation} This implies that $Cl(\Gamma)/\Gamma = \lim\limits_{\stackrel{\longleftarrow}{n}} B_n/\rho_n (\Gamma)$. Now, by our assumption, each $\rho_n(\Gamma)$ is of finite index in $B_n = B_{\rho_n} (\Gamma)$. So $Cl(\Gamma)/\Gamma$ is an inverse limit of finite sets and hence compact. \end{proof}
\section{Grothendieck closure and super-rigidity}
Let $\Gamma$ be a finitely generated group. We say that $\Gamma$ is \emph{integral super-rigid} if there exists an algebraic group $G \subseteq \text{\rm GL}_m(\mathbb{C})$ and an embedding $i:\Gamma_0 \to G$ of a finite index subgroup $\Gamma_0$ of $\Gamma$, such that for every integral representation $\rho:\Gamma \to \text{\rm GL}_n(\mathbb{Z})$ there exists an algebraic representation $\tilde \rho: G\to\text{\rm GL}_n(\mathbb{C})$ such that $\rho$ and $\tilde \rho\circ i$ agree on some finite index subgroup of $\Gamma_0$. Note: $\Gamma$ is integral super-rigid if and only if a finite index subgroup of $\Gamma$ is integral super-rigid.
Examples of such super-rigid groups are, first of all, the irreducible (arithmetic) lattices in high rank semisimple Lie groups, but also the (arithmetic) lattices in the rank one simple Lie groups $Sp(n, 1)$ and $\mathbb{F}_4^{(-20)}$ (see \cite{Mar}, \cite{Cor}, \cite{Gr-Sc}). But \cite{Ba-Lu} shows that there are such groups which are thin groups.
Now, let $\Gamma$ be a subgroup of $\text{\rm GL}_m(\mathbb{Z})$ whose Zariski closure is essentially simply connected. We say that $\Gamma$ satisfies the \emph{congruence} \emph{subgroup} \emph{property} (CSP) if the natural extension of $i:\Gamma \to\text{\rm GL}_m(\mathbb{Z})$ to $\hat \Gamma$, i.e. the map $\tilde i: \hat\Gamma \to \text{\rm GL}_m(\widehat{\mathbb{Z}})$, has finite kernel.
\begin{thm}\label{superrigid} Let $\Gamma \subseteq \text{\rm GL}_m (\mathbb{Z})$ be a finitely generated subgroup satisfying $(FAb)$. Then \begin{enumerate}[{\rm(a)}]
\item $Cl(\Gamma)/\Gamma$ is compact if and only if $\Gamma$ is an arithmetic group which is integral super-rigid.
\item $Cl(\Gamma)/\Gamma $ is finite if and only if $\Gamma$ is an arithmetic group satisfying the congruence subgroup property. \end{enumerate} \end{thm}
\begin{remarks}\label{super} \begin{enumerate}[{\rm(a)}] \item The finiteness of $Cl(\Gamma)/\Gamma$ implies, in particular, its compactness, so Theorem \ref{superrigid} recovers the well known fact (see \cite{BMS}, \cite{Ra2}) that the congruence subgroup property implies super-rigidity.
\item As explained in \S 2 (based on \cite{Ser}) the simple connectedness is a necessary condition for the CSP to hold. But by Lemma \ref{simplyconnectedsaturate}, if $\Gamma$ has any embedding into $\text{\rm GL}_n(\mathbb{Z})$ for some $n$, it also has a simply connected one.
\end{enumerate}
\end{remarks}
We now prove Theorem \ref{superrigid}.
\begin{proof} Assume first that $Cl(\Gamma)/\Gamma$ is compact. In this case, by Theorem \ref{thin compact}, $\Gamma$ must be an arithmetic subgroup of some algebraic group $G$. Without loss of generality (using Lemma \ref{simplyconnectedsaturate}) we can assume that $G$ is connected and simply connected; call this representation $\rho: \Gamma \to G$. Let $\theta$ be any other representation of $\Gamma$.
Let $\tau =\rho \oplus \theta$ be the direct sum. The group $G_{\tau}$ is a subgroup of $G_{\rho}\times G_{\theta}$ with surjective projections. Since both $\tau $ and $\rho $ are embeddings of the group $\Gamma$, and $\Gamma$ does not have thin representations, it follows (from Corollary \ref{closureinverse}) that the projection $\pi : G_{\tau} \rightarrow G_{\rho}$ yields an isomorphism of the arithmetic groups $\tau (\Gamma)\subset G_{\tau}(\mathbb Z)$ and $\rho (\Gamma)\subset G_{\rho}(\mathbb Z)$.
Assume, as we may, that $\Gamma$ is torsion-free; we have already established that $\Gamma$ is an arithmetic group. Every arithmetic group in $G_{\tau}(\mathbb Z)$ is virtually a product of the form $U_{\tau}(\mathbb Z)\rtimes H_{\tau}(\mathbb Z)$, where $U_{\tau}$ and $H_{\tau}$ are the unipotent and semi-simple parts of $G_{\tau}$ respectively (note that $G_{\tau}^0$ cannot have a torus as a quotient since $\Gamma$ is $FAb$). Hence $\Gamma\cap U_{\tau}(\mathbb Z)$ may also be described as the virtually maximal normal nilpotent subgroup of $\Gamma$. Similarly for $\Gamma\cap U_{\rho}(\mathbb Z)$. This proves that the groups $U_{\tau}$ and $U_{\rho}$ have isomorphic arithmetic groups, which shows that $\pi: U_{\tau} \rightarrow U_{\rho}$ is an isomorphism. Otherwise $Ker(\pi)$, which is a $\mathbb{Q}$-defined normal subgroup of $U_\tau$, would have an infinite intersection with the arithmetic group $\Gamma\cap U_\tau$.
Therefore, the arithmetic groups in $H_{\tau}$ and $H_{\rho}$ are isomorphic and the isomorphism is induced by the projection $H_{\tau}\rightarrow H_{\rho}$. Since $H_{\rho}$ is simply connected by assumption, and is a factor of $H_{\tau}$, it follows that $H_{\tau}$ is a product $H_{\rho}H$ where $H$ is a semi-simple group defined over $\mathbb{Q}$ with $H(\mathbb Z)$ Zariski dense in $H$. But the isomorphism of the arithmetic groups in $H_{\tau}$ and $H_{\rho}$ then shows that the group $H(\mathbb Z)$ is finite, which means that $H$ is finite. Therefore, $\pi: H_{\tau}^0\rightarrow H_{\rho}$ is an isomorphism, and so the map $G_{\tau}^0\rightarrow G_{\rho}$ is also an isomorphism, since it is a surjective morphism between groups of the same dimension and $G_{\rho}$ is simply connected.
This proves that $\Gamma$ is a super-rigid group.
In \cite{Lub}, it was proved that if $\Gamma$ satisfies super-rigidity in some simply connected group $G$, then (up to finite index) $Cl(\Gamma)/\Gamma$ is in 1-1 correspondence with $C(\Gamma) = \text{\rm Ker} (\hat \Gamma\to G(\widehat{\mathbb{Z}}))$. This finishes the proof of both parts (a) and (b). \end{proof}
\begin{remark}\label{csp and superrigid} In the situation of Theorem \ref{superrigid}, $\Gamma$ is an arithmetic group satisfying super-rigidity. The difference between parts (a) and (b) is whether $\Gamma$ also satisfies the CSP. As of now, there is no known arithmetic group (in a simply connected group) which satisfies super-rigidity without satisfying the CSP. The conjecture of Serre about the congruence subgroup problem predicts that arithmetic lattices in rank one Lie groups fail to have the CSP. These include Lie groups like $Sp (n, 1)$ and $\mathbb{F}_4^{(-20)}$, for which super-rigidity was shown (after Serre had made his conjecture). Potentially, the arithmetic subgroups of these groups can have $Cl(\Gamma)/\Gamma$ compact and not finite. But (some) experts seem to believe now that these groups do satisfy the CSP. In any case, as of now, we do not know any subgroup $\Gamma$ of $\text{\rm GL}_n(\mathbb{Z})$ with $Cl(\Gamma)/\Gamma$ compact and not finite.
\end{remark}
\end{document} |
\begin{document}
\title{Stopped processes and Doob's optional sampling theorem} \author{Jacobus J. Grobler \\ Research Unit for Business Mathematics and Informatics\\ North-West University (Potchefstroom Campus),\\ Potchefstroom 2520,\\ South Africa\\ email: jacjgrobler@gmail.com\\ \\ Christopher Michael Schwanke \\ Department of Mathematics\\ Lyon College\\ Batesville, AR 72501, USA\\ and\\ Research Unit for Business Mathematics and Informatics\\ North-West University (Potchefstroom Campus),\\ Potchefstroom 2520,\\ South Africa\\ email: cmschwanke26@gmail.com } \maketitle \begin{abstract} Using the spectral measure $\mu_{\mathbb{S}}$ of the stopping time $\mathbb{S},$ we define the stopping element $X_{\mathbb{S}}$ as a Daniell integral $\int X_t\,d\mu_{\mathbb{S}}$ for an adapted stochastic process $(X_t)_{t\in J}$ that is a Daniell summable vector-valued function. This is an extension of the definition previously given for right-order-continuous sub-martingales with the Doob-Meyer decomposition property. The more general definition of $X_{\mathbb{S}}$ necessitates a new proof of Doob's optional sampling theorem, because the definition given earlier for sub-martingales implicitly used Doob's theorem applied to martingales. We provide such a proof, thus removing the heretofore necessary assumption of the Doob-Meyer decomposition property in the result.
Another advancement presented in this paper is our use of unbounded order convergence, which properly characterizes the notion of almost everywhere convergence found in the classical theory. Using order projections in place of the traditional indicator functions, we also generalize the notion of uniformly integrable sequences. In an essential ingredient to our main theorem mentioned above, we prove that uniformly integrable sequences that converge with respect to unbounded order convergence also converge to the same element in $\mathcal{L}^1$. \end{abstract}
Keywords: Vector lattice, Riesz space, stochastic process, stopping time, stopped process.
AMS Classification: 46B40, 46G10, 47N30, 60G20.
\section{Introduction} In this paper we study the stopped process of a stochastic process in Riesz spaces. The notion of a stopped process is fundamental to the study of stochastic processes, since it is often used to extend results that are valid for bounded processes to unbounded processes. In~\cite{G2} we defined stopped processes for a class of submartingales and expressed the need for a definition applicable to more general processes. In this paper we introduce a much more general definition of stopped processes using the Daniell integral. The definition applies to every process that is Daniell integrable with reference to a certain spectral measure; this class of processes includes the important class of right-continuous processes.
Considering the classical case, let $(\Omega,\mathfrak{F},P)$ be a probability space, and let $(X_t=X(t,\omega))_{t\in J,\omega\in\Omega}$ be a stochastic process in $L^1(\Omega,\mathfrak{F},P)$ adapted to the filtration $(\mathfrak{F}_t)$ of sub-$\sigma$-algebras of $\mathfrak{F}.$ If the real valued non-negative stochastic variable $\mathbb{S}(\omega)$ is a stopping time for the filtration, then the stopped process is the process (see~\cite[Proposition 2.18]{KS}) $$ (X_{t\wedge \mathbb{S}})_{t\in J}=(X(t\wedge \mathbb{S}(\omega),\omega))_{t\in J,\, \omega\in\Omega}. $$ The paths of this process are equal to $X_t(\omega)$ up to time $\mathbb{S}(\omega),$ and from then on they remain constant with value $X_{\mathbb{S}}(\omega)=X(\mathbb{S}(\omega),\omega).$
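In this classical setting the stopped process can equivalently be written pointwise (a standard identity, recalled here only for orientation):
$$
X_{t\wedge \mathbb{S}}=X_t\,\mathbf{1}_{\{\mathbb{S}>t\}}+X_{\mathbb{S}}\,\mathbf{1}_{\{\mathbb{S}\le t\}}.
$$
Both summands only involve values of the process up to time $t.$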
The difficulty encountered in the abstract case is to define what we shall call the {\em stopping element} $X_{\mathbb{S}},$ needed in the definition of the stopped process $X_{t\wedge\mathbb{S}}.$ We note that $X_{\mathbb{S}}$ can be interpreted as an element of a vector-valued functional calculus on $\mathfrak{E}$ induced by the vector function $t\mapsto X_t,$ in the same way as, for a real-valued function $t\mapsto f(t),$ the element $f(X),$ $X\in\mathfrak{E},$ is an element of the functional calculus on $\mathfrak{E}$ induced by $f.$ The latter element can be obtained as a limit of simple elements of the form $\sum_{i=1}^n f(t_i)(E_{i+1}-E_i),$ with $E_i$ elements of the spectral system of $X$ with reference to a weak order unit $E$ (by taking $f(t)=t,$ the reader will recognize this as Freudenthal's spectral theorem). The element $f(X)$ can then be interpreted as an integral $$ \int_{\mathbb{R}} f(t)\,d\mu_E(t), $$ with $\mu_E$ the spectral measure that is the extension to the Borel algebra in $\mathbb{R}$ of the (vector-) measure defined on left-open right-closed intervals $(a,b]$ by $\mu_E[(a,b]]=E_b-E_a$ (see~\cite[Sections IV.10, XI.5-7]{Vu}). Our approach will be to define a similar functional calculus for vector-valued functions, and we do this by employing the vector-valued Daniell integral as defined in~\cite{G12}.
Having a more general definition of $X_{\mathbb{S}}$ implies that a new proof of Doob's optional sampling theorem is needed. The reason is that a special case of Doob's theorem (for martingales) is implicitly contained in the definition of $X_{\mathbb{S}}$ as given in \cite{G2}. Such a proof also shows that the definition given in \cite{G2} is a valid definition for the case considered there.
A novelty in this paper is that we do not use order convergence in the definition of a continuous stochastic process, but unbounded order (uo-) convergence. This gives us a better model of the classical case, where convergence of a stochastic process is defined to mean that for almost every $\omega$ the paths $X_t(\omega)$ of the process are continuous functions of $t.$ It is known that uo-convergence in function spaces is the correct notion to describe almost everywhere convergence. We also generalize the fact from integration theory that a sequence which is pointwise almost everywhere convergent and uniformly integrable is convergent in $L^1.$ This generalization is needed in the proof of Doob's optional sampling theorem.
We finally remark that we use, following \cite{G12, Pr}, the Daniell integral for vector-valued functions in our work. It turns out that Daniell's integral fits perfectly into the Riesz space setting in which we describe stochastic processes. In fact, Daniell's original 1918 paper~\cite{PJD} was the first paper using what we today call Riesz space theory.
\section{Preliminaries} Let $\mathfrak{E}$ be a Dedekind complete, perfect Riesz space with weak order unit $E.$ We assume $\mathfrak{E}$ to be separated by its order continuous dual $\mathfrak{E}^\sim_{00}.$ For the theory of Riesz spaces (vector lattices) we refer the reader to the following standard texts~\cite{AB2,LZ,MN, Sch, Z1, Z2}. For results on topological vector lattices the standard references are~\cite{AB1,F}. We denote the {\it universal completion} of $\mathfrak{E},$ which is an $f$-algebra that contains $\mathfrak{E}$ as an order dense ideal, by $\mathfrak{E}^u$ (the fact that it is an ideal follows from~\cite[Lemma 7.23.15 and Definition 7.23.19]{AB1}). Its multiplication is an extension of the multiplication defined on the principal ideal $\mathfrak{E}_E,$ and $E$ is the algebraic unit and a weak order unit for $\mathfrak{E}^u$ (see \cite{Z1}). The set of order bounded band preserving operators, called orthomorphisms, is denoted by $\operatorname{Orth}(\mathfrak{E}).$ We refer to \cite{Donner, G9} for the definition and properties of the \emph{sup-completion} $\mathfrak{E}^s$ of a Dedekind complete Riesz space $\mathfrak{E}.$ It is a unique Dedekind complete ordered cone that contains $\mathfrak{E}$ as a sub-cone of its group of invertible elements, and its most important property is that it has a largest element.
Since $\mathfrak{E}^s$ is Dedekind complete, every subset of $\mathfrak{E}^s$ has a supremum in $\mathfrak{E}^s.$ Also, for every $C\in\mathfrak{E}^s,$ we have $C=\sup\{X\in\mathfrak{E}: X\le C\},$ and $\mathfrak{E}$ is a solid subset of $\mathfrak{E}^s.$
A {\em conditional expectation} $\mathbb{F}$ defined on $\mathfrak{E}$ is a strictly positive order continuous linear projection with range a Dedekind complete Riesz subspace $\mathfrak{F}$ of $\mathfrak{E}.$ It has the property that it maps weak order units onto weak order units. It may be assumed, as we will do, that $\mathbb{F}E=E$ for the weak order unit $E.$ The space $\mathfrak{E}$ is called {\em $\mathbb{F}$-universally complete} (respectively, {\em $\mathbb{F}$-universally complete in $\mathfrak{E}^u$}) if, whenever $X_\alpha\uparrow$ in $\mathfrak{E}$ and $\mathbb{F}(X_\alpha)$ is bounded in $\mathfrak{E}$ (respectively in $\mathfrak{E}^u$), then $X_\alpha\uparrow X$ for some $X\in\mathfrak{E}.$ If $\mathfrak{E}$ is $\mathbb{F}$-universally complete in $\mathfrak{E}^u,$ then it is $\mathbb{F}$-universally complete.
\textit{We shall henceforth tacitly assume that $\mathfrak{E}$ is $\mathbb{F}$-universally complete in $\mathfrak{E}^u.$}
For an order closed subspace $\mathfrak{F}$ of $\mathfrak{E},$ we shall denote the set of all order projections in $\mathfrak{E}$ by $\mathfrak{P},$ and its subset of all order projections mapping $\mathfrak{F}$ into itself by $\mathfrak{P}_{\mathfrak{F}}.$ The latter set can be identified with the set of all order projections of the vector lattice $\mathfrak{F}$ (see~\cite{G1}).
B.A. Watson~\cite{W} proved that if $\mathfrak{G}$ is an order closed Riesz subspace of $\mathfrak{E}$ with $\mathfrak{F}\subset\mathfrak{G},$ then there exists a unique conditional expectation $\mathbb{F}_{\mathfrak{G}}$ on $\mathfrak{E}$ with range $\mathfrak{G}$ and $\mathbb{F}\mathbb{F}_{\mathfrak{G}}=\mathbb{F}_{\mathfrak{G}}\mathbb{F}=\mathbb{F}$ (see~\cite{G2,W}). We shall also use the fact (see~\cite[Theorem 3.3 and Proposition 3.4]{G2}) that $Z=\mathbb{F}_{\mathfrak{G}}(X)$ if and only if $$ \mathbb{F}(\mathbb{P} Z)=\mathbb{F}(\mathbb{P} X) \mbox{ holds for every projection $\mathbb{P}\in\mathfrak{P}_{\mathfrak{G}}$}. $$
The conditional expectation $\mathbb{F}$ may be extended to the sup-completion in the following way: for every $X\in\mathfrak{E}^s,$ define $\mathbb{F} X:=\sup_{\alpha}\mathbb{F} X_\alpha\in\mathfrak{E}^s$ for any upward directed net $X_\alpha\uparrow X$, $X_\alpha\in\mathfrak{E}.$ This is well defined (see~\cite{G8}). We define $\operatorname{dom}^+ \mathbb{F}:=\{0\le X\in\mathfrak{E}^s:\ \mathbb{F}(X)\in\mathfrak{E}^u\}.$ Then $\operatorname{dom}^+\mathbb{F}\subset\mathfrak{E}^u$ (see~\cite[Proposition 2.1]{G6}) and we define $\operatorname{dom}\mathbb{F}:=\operatorname{dom}^+\mathbb{F}-\operatorname{dom}^+\mathbb{F}.$ Since we are assuming that $\mathfrak{E}$ is $\mathbb{F}$-universally complete in $\mathfrak{E}^u,$ we have $\operatorname{dom}\mathbb{F}=\mathfrak{E}.$
If $XY\in \operatorname{dom}\mathbb{F}$ (with the multiplication taken in the $f$-algebra $\mathfrak{E}^u$), where $Y\in \mathfrak{E}$ and $X\in \mathfrak{F}=\mathcal{R}(\mathbb{F}),$ we have that $\mathbb{F}(XY)= X \mathbb{F}(Y)$. This fundamental fact is referred to as the \emph{averaging property} of $\mathbb{F}$ (see~\cite{G1}).
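For orientation, in the classical setting, with $\mathbb{F}$ the conditional expectation $\mathbb{E}(\cdot\mid\mathfrak{G})$ with respect to a sub-$\sigma$-algebra $\mathfrak{G}$ on $L^1(\Omega,\mathfrak{F},P),$ the averaging property is the familiar identity
$$
\mathbb{E}(XY\mid\mathfrak{G})=X\,\mathbb{E}(Y\mid\mathfrak{G}) \quad\text{for $\mathfrak{G}$-measurable } X,
$$
valid whenever $XY$ is integrable.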
Let $\Phi$ be the set of all $\phi\in\mathfrak{E}^\sim_{00}$ satisfying $|\phi|(E)=1$ and extend $|\phi|$ to $\mathfrak{E}^s$ by continuity. Define $\mathscr{P}$ to be the set of all Riesz seminorms defined on $\mathfrak{E}$ by $p_{\phi}(X):=|\phi|(\mathbb{F}(|X|)),$ where $\phi\in\Phi.$ We define the space $\mathscr{L}^1:=(\mathfrak{E},\sigma(\mathscr{P}))$ and have that $\mathcal{L}^1:=\{X\in\mathfrak{E}^u: p_\phi(X)<\infty\mbox{ for all } \phi\in\Phi\},$ equipped with the locally solid topology $\sigma(\mathscr{L}^1,\mathscr{P})$ (for the proof see~\cite{G9}).
We next define the space $\mathcal{L}^2$ to consist of all $X\in\mathcal{L}^1$ satisfying $|X|^2\in\mathcal{L}^1,$ where the product is taken in the $f$-algebra $\mathfrak{E}^u.$ Thus, $\mathcal{L}^2:=\{X\in\mathfrak{E}^u: |\phi|(\mathbb{F}(|X|^2))<\infty\mbox{ for all } \phi\in\Phi\}.$ For $X\in\mathcal{L}^2$ we define the Riesz seminorm $q_{\phi}(X):=\big(|\phi|(\mathbb{F}(|X|^2))\big)^{1/2}.$ We denote the set of all these seminorms by $\mathscr{Q}$ and equip $\mathcal{L}^2$ with the weak topology $\sigma(\mathscr{Q}).$
The spaces $\mathscr{L}^1$ and $\mathscr{L}^2$ are topologically complete (see~\cite{G7} and \cite{G9}, and note that this may not be true without the assumption that $\mathfrak{E}$ is $\mathbb{F}$-universally complete in $\mathfrak{E}^u$).
A {\em filtration} on $\mathfrak{E}$ is a set $(\mathbb{F}_t)_{t\in J}$ of conditional expectations satisfying $\mathbb{F}_s=\mathbb{F}_s\mathbb{F}_t$ for all $s<t.$ We denote the range of $\mathbb{F}_t$ by $\mathfrak{F}_t.$ A {\em stochastic process} in $\mathfrak{E}$ is a function $t\mapsto X_t\in\mathfrak{E},$ for $t\in J,$ with $J\subset\mathbb{R}^+$ an interval. The stochastic process $(X_t)_{t\in J}$ is {\em adapted to the filtration} if $X_t\in\mathfrak{F}_t$ for all $t\in J.$
We shall write $\mathfrak{P}_t$ to denote the set of all order projections that map $\mathfrak{F}_t$ into itself, and we recall that $\mathbb{F}_t\mathbb{P}=\mathbb{P}\mathbb{F}_t$ holds for all $\mathbb{P}\in\mathfrak{P}_t.$ The projections in $\mathfrak{P}_t$ are the {\em events} up to time $t,$ and $\mathfrak{P}_t$ is a complete Boolean algebra.
Let $(\mathbb{F}_t)_{t\in J}$ be a filtration on $\mathfrak{E}$. We recall that $$ \mathfrak{F}_{t+}:=\bigcap_{s>t}\mathfrak{F}_s, $$ and the filtration is called right continuous if $\mathfrak{F}_{t+}=\mathfrak{F}_t$ for all $t\in J.$ Since we are assuming that $\mathfrak{E}$ is $\mathbb{F}$-universally complete in $\mathfrak{E}^u$, there exists a unique conditional expectation $\mathbb{F}_{t+}$ from $\mathfrak{E}$ onto $\mathfrak{F}_{t+}$ satisfying $\mathbb{F}\mathbb{F}_{t+}=\mathbb{F}_{t+}\mathbb{F}=\mathbb{F}$ (see~\cite[Proposition 3.8]{G2}). The set of order projections in the space $\mathfrak{F}_{t+}$ will be denoted by $\mathfrak{P}_{t+}.$
If $(X_t)$ is a stochastic process adapted to $({\mathbb F}_t, \mathfrak{F}_t)$, we call $(X_t, \mathfrak{F}_t,\mathbb{F}_t)$ a \emph{supermartingale} (respectively \emph{submartingale}) if ${\mathbb F}_t(X_s)\leq X_t$ (respectively ${\mathbb F}_t(X_s)\geq X_t$) for all $t\leq s$. If the process is both a sub- and a supermartingale, it is called a \emph{martingale}. A stochastic process $(X_t)$ is said to be uo-convergent to $X\in\mathfrak{E}$ as $t$ tends to $s,$ if $$
\operatorname{o}-\lim_{t\to s}|X_t-X|\wedge Z=0 $$ for every positive $Z\in\mathfrak{E}.$
We call a stochastic process uo-continuous at a point $s$ if
$$
\operatorname{uo}-\lim_{t\to s}X_t=X_s.
$$ In function spaces, uo-convergence corresponds to pointwise almost everywhere convergence. Hence, using uo-convergence to define continuity in the abstract case yields a direct generalization of the notion of pathwise continuity. The definitions of right-uo-continuity and left-uo-continuity at a point $s$ use uo-convergence from the right or from the left, respectively.
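The correspondence between uo-convergence and pointwise convergence can be traced in a small numerical sketch (an assumption for illustration: the Riesz space is realized as real functions on $(0,1]$, sampled on a finite grid, with weak order unit the constant function $1$). The functions $X_n = n\,\chi_{(0,1/n]}$ converge pointwise to $0$, so the capped residuals $|X_n-0|\wedge Z$ vanish for every fixed positive $Z$, even though $\sup|X_n|=n$ blows up.

```python
import numpy as np

# Illustration only: functions on (0,1] sampled on a grid; uo-convergence
# reduces to pointwise convergence here.
x = np.linspace(0.0, 1.0, 1001)[1:]        # grid points in (0, 1]

def X(n):
    # X_n = n on (0, 1/n], 0 elsewhere
    return np.where(x <= 1.0 / n, float(n), 0.0)

Z = np.ones_like(x)                        # a fixed positive element Z

def capped(n):
    # the uo test quantity |X_n - 0| ^ Z
    return np.minimum(np.abs(X(n)), Z)

# the capped residual eventually vanishes on the whole grid ...
assert np.max(capped(2000)) == 0.0
# ... yet X_n itself is unbounded in sup-norm, so convergence is not uniform
assert np.max(np.abs(X(500))) == 500.0
```

The same sequence also fails to converge relatively uniformly, which is why uo-convergence, not norm-type convergence, is the right abstraction of almost everywhere convergence.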
The band generated by $(tE-X)^+$ in $\mathfrak{E}$ is denoted by $\mathfrak{B}_{(tE>X)}$ and the projection of $\mathfrak{E}$ onto this band by $\mathbb P_{(tE>X)}.$ The component of $E$ in $\mathfrak{B}_{(tE>X)}$ is denoted by $E_t^\ell,$ i.e., $E_t^\ell=\mathbb P_{(tE>X)}E.$ The system $(E_t^\ell)_{t\in J}$ is an increasing left-continuous system, called the {\em left-continuous spectral system of $X.$} Also, if $\overline{E}^r_t$ is the component of $E$ in the band generated by $(X-tE)^+$ and $E^r_t:=E-\overline{E}^r_t,$ the system $(E^r_t)$ is an increasing right-continuous system of components of $E,$ called the {\em right-continuous spectral system} of $X$ (see~\cite{LZ,G2}). The next definition was given in~\cite{G2}.
\begin{definition} \rm A {\em stopping time} for the filtration $(\mb{F}_t,\mathfrak{F}_t)_{t\in J}$ is an orthomorphism $\mb{S}\in\operatorname{Orth}(\mathfrak{E})$ such that its right-continuous spectral system $(\mb{S}^r_t)$ of projections satisfies $\mb{S}^r_t\in\mathfrak{P}_t.$ It is called an {\em optional time} if the condition holds for its left-continuous system $(\mb{S}_t^\ell)_{t\in J}.$ \end{definition}
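In the classical setting an orthomorphism acts as multiplication by a measurable function, and the two spectral systems become the familiar indicator families $\chi_{\{X<t\}}$ and $\chi_{\{X\le t\}}$. A minimal finite-dimensional sketch (the five-point $\Omega$ and all values are assumptions for illustration):

```python
import numpy as np

# Finite model: E = (1,...,1) is the weak order unit and X acts as
# coordinatewise multiplication.  Then:
#   E^l_t = component of E in the band generated by (tE - X)^+  = 1_{X < t}
#   E^r_t = E minus the component in the band generated by (X - tE)^+ = 1_{X <= t}
X = np.array([0.0, 1.0, 1.0, 2.0, 3.0])
E = np.ones_like(X)

def E_left(t):
    return (X < t).astype(float)      # left-continuous spectral system

def E_right(t):
    return E - (X > t).astype(float)  # right-continuous spectral system

assert np.array_equal(E_left(1.0),  np.array([1., 0., 0., 0., 0.]))
assert np.array_equal(E_right(1.0), np.array([1., 1., 1., 0., 0.]))
assert np.array_equal(E_right(3.0), E)   # both systems increase to E
```

The stopping-time condition then says exactly that the event $\{S\le t\}$ (the component $E^r_t$) is measurable at time $t$, matching the classical definition.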
We recall that a stopping time is also an optional time and that the two concepts coincide for right-continuous filtrations. We shall use the following notation: $C^\ell_t:=\mb{S}_t^\ell E$ and $C^r_t:=\mb{S}^r_tE.$ The processes $C^\ell_\mb{S}:=(C^\ell_t)_{t\in J}$ and $C^r_\mb{S}:=(C^r_t)_{t\in J}$ are processes of components of $E.$ We have the following reformulation of the definition:
\begin{proposition} The orthomorphism $\mb{S}$ is a stopping time for the filtration $(\mb{F}_t,\mathfrak{F}_t)$ if and only if the stochastic process $C^r_\mb{S}$ is adapted to the filtration. Similarly, $\mb{S}$ is an optional time if and only if the stochastic process $C^\ell_\mb{S}$ is adapted to the filtration. \end{proposition}
We recall (see~\cite{G2}) that the set of events (order projections) determined prior to the stopping time $\mb{S}$ is defined to be the family of projections $$ \mathfrak{P}_\mb{S}:=\{\mathbb P\in\mathfrak{P}\,:\, \mathbb P\mb{S}^r_t\mb{F}_t=\mb{F}_t\mathbb P\mb{S}^r_t\mbox{ for all $t$}\}. $$ $\mathfrak{P}_\mb{S}$ is a complete Boolean sub-algebra of $\mathfrak{P}$ and $\mathbb P\in\mathfrak{P}_\mb{S}$ if and only if $\mathbb P\mb{S}^r_t\in\mathfrak{P}_t$ for every $t\in J.$ The set $$ \mathfrak{C}_\mb{S}:=\{\mathbb P E\,:\, \mathbb P\in\mathfrak{P}_\mb{S}\} $$ is a Boolean algebra of components of $E$ and we denote by $\mathfrak{F}_\mb{S}$ the order closed Riesz subspace of $\mathfrak{E}$ generated by $\mathfrak{C}_\mb{S}.$ By~\cite[Proposition 5.4]{G2}, there exists a unique conditional expectation $\mb{F}_\mb{S}$ that maps $\mathfrak{E}$ onto $\mathfrak{F}_\mb{S}$ with the property that $\mb{F}=\mb{F}\mb{F}_\mb{S}=\mb{F}_\mb{S}\mb{F}.$
Similarly, if $\mb{S}$ is an optional time for the filtration $(\mb{F}_t,\mathfrak{F}_t),$ we find in~\cite{G2} that the Boolean algebra $\mathfrak{P}_{\mb{S}+}$ of events determined immediately after $\mb{S}$ is the Boolean algebra of projections given by \[ \mathfrak{P}_{\mb{S}+}:=\{\mathbb P\in\mathfrak{P}\, :\, \mathbb P\mb{S}_t^r\mb{F}_{t+} = \mb{F}_{t+}\mathbb P\mb{S}_t^r\mbox{ for all }t\}. \] This is again a complete Boolean algebra of projections and $$ \mathfrak{C}_{\mb{S}+}:=\{\mathbb P E\,:\, \mathbb P\in\mathfrak{P}_{\mb{S}+}\} $$ is a complete Boolean algebra of components of $E.$ We define the space $\mathfrak{F}_{\mb{S}+}$ to be the Dedekind complete Riesz space generated by $\mathfrak{C}_{\mb{S}+}.$ Since the space contains $\mathfrak{F}_0,$ there exists a unique conditional expectation, denoted by $\mb{F}_{\mb{S}+},$ that maps $\mathfrak{E}$ onto $\mathfrak{F}_{\mb{S}+}$ with the property that $\mb{F}=\mb{F}\mb{F}_{\mb{S}+}=\mb{F}_{\mb{S}+}\mb{F}.$
Our aim is now to define the stopping element $X_\mb{S}$ for a stopping time $\mb{S},$ and having done that, the process $(X_{t\wedge\mb{S}})$ will be called the {\em stopped process.}
\section{Definition of the stopping element $X_\mb{S}$}
Let $J=[a,b]$ and consider the optional time $\mb{S}$ for the filtration $(\mb{F}_t,\mathfrak{F}_t)_{t\in J}$ defined on $\mathfrak{E}$ with spectral interval contained in $J.$ Its left-continuous spectrum of band projections $(\mb{S}^\ell_t)_{t\in J}$ is then adapted to the filtration, meaning that $\mb{S}^\ell_t$ is a band projection in the Dedekind complete space $\mathfrak{F}_t.$ Therefore, the component $C^\ell_t$ of $E$ is an element of $\mathfrak{F}_t.$ We define a vector measure $\mu_\mb{S}$ on the intervals $[s,t)$ by setting $$ \mu_\mb{S}[t_{i-1},t_i):=C^\ell_{t_i}-C^\ell_{t_{i-1}}. $$ We refer the reader to~\cite[Section XI.5]{Vu} for a proof that this defines a $\sigma$-additive measure on the algebra of all finite unions of disjoint sub-intervals of the form $[s,t)$ in $J$ (which can be extended to a $\sigma$-additive measure on the $\sigma$-algebra of Borel subsets of $J$).
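Classically, with $\mb{S}$ acting as multiplication by a function $S(\omega)$, one has $C^\ell_t=\chi_{\{S<t\}}$, so $\mu_\mb{S}[s,t)=\chi_{\{s\le S<t\}}$ is the component of $E$ carried by the event that $S$ lands in $[s,t)$. A finite sketch (the four-point $\Omega$ and the values of $S$ are assumptions for illustration):

```python
import numpy as np

# S given coordinatewise on a finite Omega; C^l_t = 1_{S < t} is its
# left-continuous system of components, and mu_S[s,t) := C^l_t - C^l_s.
S = np.array([0.2, 0.5, 0.5, 0.9])

def C_left(t):
    return (S < t).astype(float)

def mu(s, t):
    # mu_S on the interval [s, t); equals the indicator 1_{s <= S < t}
    return C_left(t) - C_left(s)

# finite additivity over adjacent intervals:
assert np.array_equal(mu(0.0, 0.4) + mu(0.4, 1.0), mu(0.0, 1.0))
# mu_S[s,t) is the component of E carried by the event {s <= S < t}:
assert np.array_equal(mu(0.4, 0.6), np.array([0., 1., 1., 0.]))
```

The $\sigma$-additivity cited from~\cite{Vu} is the abstract counterpart of the telescoping identity checked in the first assertion.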
Next, let $\pi=\{a=t_0<t_1<\cdots<t_n=b\}$ be a partition of $J.$ We define $\mb{L}$ to be the Riesz space of all right-continuous simple processes of the form $$ X^\pi_t:=\sum_{t_i\in\pi}X_{t_i}\chi_{[t_{i-1}, t_i)},\ \ X_{t_i}\in\mathfrak{F}_{t_i}, $$ where $\pi$ varies over all partitions of $J$ and $\chi_S$ is the indicator function of the set $S.$
We note that the process $(X^\pi_t)_{t\in J}$ is not in general adapted to the filtration $(\mb{F}_t,\mathfrak{F}_t),$ because $X^\pi_{t_{i-1}}=X_{t_i}$ and $X_{t_i}$ need not be an element of $\mathfrak{F}_{t_{i-1}}.$ We have that $X^\pi_{t}\in\mathfrak{F}_{t_i}$ for all $t_{i-1}\le t<t_i.$
The next proposition holds. \begin{proposition} With pointwise ordering, $\mb{L}$ is a Riesz subspace of the Dedekind complete Riesz space $\mathfrak{E}^J$ of all $\mathfrak{E}$-valued functions defined on the interval $J.$ \end{proposition}
Let $$ I_\mb{S}(X_t^\pi):=\sum_{t_i\in\pi}X_{t_i}(C^\ell_{t_i}-C^\ell_{t_{i-1}}) =\sum_{t_i\in\pi}X_{t_i}\mu_\mb{S}[t_{i-1},t_i), $$ where the product of $X\in\mathfrak{E}$ and the component $C=\mathbb P E,$ $\mathbb P\in\mathfrak{P},$ is defined to be $\mathbb P X.$ We can therefore also write $$ I_\mb{S}(X_t^\pi):=\sum_{t_i\in\pi}(\mb{S}^\ell_{t_i}-\mb{S}^\ell_{t_{i-1}})X_{t_i}. $$ \begin{proposition} The operator $I_\mb{S}:\mb{L}\to\mathfrak{E}$ has the following properties. \begin{enumerate} \item[\rm(1)] $I_\mb{S}$ is positive and linear; \item[\rm(2)] $I_\mb{S}$ is $\sigma$-order continuous, i.e., if $X_{n,t}\in \mb{L}$ and $X_{n,t}\downarrow_n 0$ for each $t\in J,$ then $I_\mb{S}(X_{n,t})\downarrow_n 0;$ \item[\rm(3)] $I_\mb{S}$ is a lattice homomorphism. \end{enumerate} The operator $I_\mb{S}$ is therefore a positive vector-valued Daniell integral defined on $\mb{L}.$ \end{proposition} {\em Proof.} Property (1) needs no proof. To prove (2), let $X_{n,t}\in\mb{L}$ satisfy $X_{n,t}\downarrow 0$ for every $t\in [a,b].$ Let $$ X_{1,t}=\sum_{i=1}^N\xi_{i}\chi_{[t_{i-1},t_i)}(t),\quad \xi_i\in\mathfrak{F}_{t_i}, $$ and let $\epsilon>0$ be arbitrary. 
For each $i,$ $1\le i\le N,$ let $$ \mathfrak{B}_{1,i}:=\mathfrak{B}_{(\xi_{i}>\epsilon E)} =\mathfrak{B}_{(X_{1,t}\chi_{[t_{i-1},t_{i})}(t)>\epsilon E)}, $$ i.e., the band generated by $(\xi_{i}-\epsilon E)^+$ in $\mathfrak{E}.$ We also define $$ \mathfrak{B}_{n,i}:=\mathfrak{B}_{(X_{n,t}\chi_{[t_{i-1},t_{i})}(t)>\epsilon E)} \mbox{ for }n\ge 1, $$ and we denote the band projection onto $\mathfrak{B}_{n,i}$ by $\mathbb P_{n,i}$ for each $i$ and $n.$ Then, since $X_{n,t}\downarrow$ for all $t,$ we have that $$ \mathfrak{B}_{n+1,i}\subset \mathfrak{B}_{n,i}\subset \mathfrak{B}_{1,i} \mbox{ for }1\le i\le N. $$ For $n\ge 1,$ let $B_{n,i}\subset J$ be defined by $$ B_{n,i}:=\{t\in[t_{i-1},t_i)\,:\, \mathbb P_{n,i}X_{n,t}>0\}. 
$$ The definition of the simple function $X_{n,t}$ implies that each $B_{n,i}$ is a finite union of left-closed right-open intervals; note also that, by the definition of $\mathbb P_{n,i},$ we have $\mathbb P_{n,i}X_{n,t}>\epsilon E$ for every $t\in B_{n,i}.$ Since, for every fixed $t\in J,$ we have $X_{n,t}\downarrow 0,$ it follows that $B_{n,i}\downarrow \emptyset.$ The vector measure $\mu_\mb{S}$ is $\sigma$-additive and, since $\mu_\mb{S}(J)=C^\ell_b-C^\ell_a\in\mathfrak{E},$ it follows that $\mu_\mb{S}(B_{n,i})\downarrow 0.$ Then, \begin{align*} I_\mb{S}(X_{n,t}\chi_{[t_{i-1},t_{i})}(t)) &=I_\mb{S}[(\mathbb P_{n,i}X_{n,t}+\mathbb P^d_{n,i}X_{n,t})(\chi_{B_{n,i}}(t)+ \chi_{B^c_{n,i}}(t))]\\ &\le (\mathbb P_{n,i}\xi_{i}+\epsilon E)\mu_\mb{S}(B_{n,i})+\epsilon(\mathbb P_i-\mathbb P_{i-1})E\\ &\le (\xi_{i}+\epsilon E)\mu_\mb{S}(B_{n,i})+\epsilon (\mathbb P_i-\mathbb P_{i-1})E. \end{align*} It therefore follows, for each $\epsilon>0$ and $i,$ $1\le i\le N,$ that $$ o-\lim_{n\to\infty}I_\mb{S}(X_{n,t}\chi_{[t_{i-1},t_i)}(t))\le \epsilon(\mathbb P_i-\mathbb P_{i-1})E. $$ Summing over $i,$ we get $$ \inf_nI_\mb{S}(X_{n,t})\le \epsilon(\mathbb P_b-\mathbb P_{a})E. $$ This holds for every $\epsilon>0$ and so $I_\mb{S}(X_{n,t})\downarrow 0.$
We now prove (3). Let $X=X_t^\pi$ and $Y=Y_t^\pi$ be two elements of $\mb{L}$ written with the same partition $\pi$ of $J.$ Then, with $\Delta\mb{S}^\ell_{t_i}:=(\mb{S}^\ell_{t_i}-\mb{S}^\ell_{t_{i-1}}),$ \begin{multline*} I_\mb{S}(X\vee Y)=\sum_{i=1}^n \Delta\mb{S}^\ell_{t_i}(X_{t_i}\vee Y_{t_i}) =\sum_{i=1}^n \Delta\mb{S}^\ell_{t_i}X_{t_i}\vee \Delta\mb{S}^\ell_{t_i}Y_{t_i}\\ =\bigvee_{i=1}^n\Delta\mb{S}^\ell_{t_i}X_{t_i}\vee \Delta\mb{S}^\ell_{t_i}Y_{t_i} =\bigvee_{i=1}^n\Delta\mb{S}^\ell_{t_i}X_{t_i}\vee \bigvee_{i=1}^n\Delta\mb{S}^\ell_{t_i}Y_{t_i} =I_\mb{S}(X)\vee I_\mb{S}(Y). \end{multline*} Thus $I_\mb{S}$ is a Riesz homomorphism.\phantom{em}
$\Box$
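The lattice homomorphism property (3) can be checked numerically in the classical finite model (all concrete values below are assumptions for illustration): the increments $\Delta\mb{S}^\ell_{t_i}$ act as multiplications by disjoint indicators $\chi_{\{t_{i-1}\le S<t_i\}}$, so $I_\mb{S}$ evaluates a simple process at the partition interval containing $S(\omega)$, and suprema pass through.

```python
import numpy as np

S  = np.array([0.15, 0.45, 0.45, 0.85])    # stopping time on a 4-point Omega
pi = [0.0, 0.25, 0.5, 0.75, 1.0]           # partition of J = [0, 1]

def I(X_at):
    # I_S(X^pi) = sum_i Delta S^l_{t_i} X_{t_i};  X_at[i] = the value X_{t_i}
    total = np.zeros_like(S)
    for i in range(1, len(pi)):
        inc = ((pi[i - 1] <= S) & (S < pi[i])).astype(float)  # Delta S^l_{t_i}
        total += inc * X_at[i]
    return total

Xv  = [None, np.full(4, 1.0), np.full(4, 4.0), np.full(4, 2.0), np.full(4, 3.0)]
Yv  = [None, np.full(4, 5.0), np.full(4, 0.0), np.full(4, 6.0), np.full(4, 1.0)]
XvY = [None] + [np.maximum(a, b) for a, b in zip(Xv[1:], Yv[1:])]

# lattice homomorphism: I_S(X v Y) = I_S(X) v I_S(Y)
assert np.array_equal(I(XvY), np.maximum(I(Xv), I(Yv)))
```

The disjointness of the increments is exactly what makes the middle steps of the multline computation in the proof valid.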
Applying the Daniell extension procedure to the primitive positive integral $I_\mb{S},$ we obtain an integral defined on the Riesz space $\mc{L}_\mb{S}$ of all Daniell $\mb{S}$-summable vector-valued functions; this extended integral has the special property that it is a Riesz homomorphism.
\begin{theorem}
An adapted left-continuous process $(X_t)$ that is bounded by an $\mb{S}$-summable vector-valued function $X$ is $\mb{S}$-summable. In particular, if $|X_t|\le ME,$ $M\ge 0,$ then $X=(X_t)$ is $\mb{S}$-summable. \end{theorem} {\em Proof.} Let $\pi_n=\{a=t^{(n)}_0<t^{(n)}_1<\ldots<t^{(n)}_{2^n}=b\}$ be a dyadic partition of $[a,b].$ Define the element $X_n$ by \begin{equation} X_n(t):=\sum_{i=1}^{2^n} X_{t^{(n)}_{i-1}}\chi_{[t^{(n)}_{i-1},t^{(n)}_i)}(t). \end{equation} Then $X_n(t)$ belongs to $\mb{L}$ and we claim that $X_n(t)$ converges to $X_t$ at every point $t\in[a,b].$
Fix an element $t_0\in[a,b].$ Then, for each $n$ we have that $t_0\in [t^{(n)}_{i-1},t^{(n)}_i)\in\pi_n$ for a unique $i,$ $1\le i\le 2^n,$ and $X_n(t_0)=X_{t^{(n)}_{i-1}}.$ If, at some stage, $t_0$ is the left endpoint of an interval $[t^{(n')}_{i-1},t^{(n')}_i),$ then, for all finer partitions, it will remain the left endpoint of some interval in that partition and so $X_n(t_0)=X_{t_0}$ for all $n\ge n'.$ We may therefore assume that $t_0>t^{(n)}_{i-1}$ for all $n$ (here, by abusing the notation, $t^{(n)}_{i-1}$ will always denote the left endpoint of the unique interval of $\pi_n$ to which $t_0$ belongs; $i$ will therefore also depend on $n$). Since $t^{(n)}_i-t^{(n)}_{i-1}<(b-a)2^{-n},$ we have that $t^{(n)}_{i-1}\uparrow t_0$ as $n$ tends to infinity, and by the left continuity of $(X_t)$ we have that $X_n(t_0)=X_{t^{(n)}_{i-1}}$ converges to $X_{t_0}$ in order as $n\to\infty.$
Since $(X_t)$ is bounded by an $\mb{S}$-summable function $X,$ the Lebesgue domination theorem for the Daniell integral implies that $(X_t)$ is summable and that, with convergence in order, \begin{equation*} \lim_{n\to\infty}I_\mb{S} (X_n(t))=I_\mb{S} (X_t).\tag*{$\Box$} \end{equation*}
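The dyadic scheme of the proof can be traced numerically in the classical model (a sketch; the two-point $\Omega$, the process $X_t(\omega)=t\,w(\omega)$, and all constants are assumptions): the left-endpoint approximants $I_\mb{S}(X_n)$ approach the stopped value $X_{S(\omega)}(\omega)$.

```python
import numpy as np

S = np.array([0.3, 0.7])     # stopping time on a 2-point Omega, values in [0,1)
w = np.array([2.0, -1.0])    # X_t(omega) = t * w(omega), continuous in t

def I_dyadic(n):
    # I_S applied to the simple process X_n built on the dyadic partition,
    # using the left-endpoint value X_{t_{i-1}} on each interval [t_{i-1}, t_i)
    ts = np.linspace(0.0, 1.0, 2 ** n + 1)
    total = np.zeros_like(S)
    for i in range(1, len(ts)):
        inc = ((ts[i - 1] <= S) & (S < ts[i])).astype(float)
        total += inc * (ts[i - 1] * w)
    return total

# the approximants converge to the stopped element X_S = S * w
err = np.max(np.abs(I_dyadic(12) - S * w))
assert err < 1e-3
```

Each refinement moves the sampled left endpoint up toward $S(\omega)$, which is the numerical shadow of the left-continuity argument in the proof.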
\begin{definition}\label{X_S} For $X\in\mc{L}_\mb{S}$ we define $$ X_\mb{S}:=I_\mb{S}(X_t). $$ \end{definition}
For the proof of the next result we need the following fact about unbounded order convergence.
\begin{proposition}\label{proposition 3.4} Let $\mathfrak{E}$ be a Dedekind complete Riesz space with weak order unit $E.$ Then the following are equivalent. \begin{itemize} \item[\rm(1)] The sequence $(X_n)\subset\mathfrak{E}$ is uo-convergent to $X\in\mathfrak{E}.$
\item[\rm(2)] $(|X_n-X|\wedge kE)$ is order convergent to $0$ in $\mathfrak{E}$ for every $k\in\N.$ \item[\rm(3)] The sequence $(X_n)$ is order convergent to $X$ in $\mathfrak{E}^u.$ \end{itemize} \end{proposition} {\em Proof.} The implication (1)$\implies$(2) is clear.
(2)$\implies$(3): For every $k\in\N,$ there exists a sequence $(V_n^{(k)})$ in $\mathfrak{E}$ such that $V_n^{(k)}\downarrow_n 0$ and $$
|X-X_n|\wedge kE\le V^{(k)}_n. $$ Let $$
\mathfrak{B}^{(k)}:=\bigcap_{n\in\N}\mathfrak{B}_{(kE>|X-X_n|)} $$ and let $\mathbb P_k$ be the projection onto $\mathfrak{B}^{(k)}.$ We note that $\mathfrak{B}^{(k)}$ is an increasing sequence of bands and so also is the sequence of projections $\mathbb P_k.$ We also have that $$
\mathbb P_k(|X-X_n|)=\mathbb P_k(|X-X_n|\wedge kE)\le\mathbb P_k V_n^{(k)}\mbox{ for all }n. $$ We now put $\Q_1:=\mathbb P_1,\ \Q_2:=(\mathbb P_2-\mathbb P_1),\ldots, \Q_n:=(\mathbb P_n-\mathbb P_{n-1}),\ldots.$ Then $(\Q_n)$ is a sequence of disjoint projections. We define $$ V_n:=\sup_{k\in\N}\Q_kV_n^{(k)}\mbox{ for all }n. $$ This supremum exists in $\mathfrak{E}^u$ for each $n\in \N.$ Now, since $E$ is a weak order unit, we have for every $n\in\N$ that $$
|X-X_n|\wedge kE\uparrow |X-X_n|, $$ and so $$
|X-X_n|=\sup_{k\in\N}\Q_k(|X-X_n|)\le\sup_{k\in\N} \Q_kV^{(k)}_n=V_n\downarrow 0\mbox{ in }\mathfrak{E}^u. $$ Hence, $(X_n)$ is order convergent to $X$ in $\mathfrak{E}^u.$
(3)$\implies$(1): By assumption there exists a sequence $(Z_n)$ in $\mathfrak{E}^u$ such that $|X-X_n|\le Z_n\downarrow 0.$ Consider an arbitrary $U\in\mathfrak{E}.$ Then $$
|X-X_n|\wedge U\le Z_n\wedge U\in\mathfrak{E}, $$ since $\mathfrak{E},$ being Dedekind complete, is an ideal in $\mathfrak{E}^u$ (see our remark in Section 2). Moreover, since $Z_n\downarrow 0$ in $\mathfrak{E}^u,$ it follows that $Z_n\wedge U\downarrow 0$ in $\mathfrak{E}$ by the order denseness of $\mathfrak{E}$ in $\mathfrak{E}^u.$ Therefore (1) holds.\phantom{em}
$\Box$
\begin{proposition} Let $(X^n_t)$ be a sequence in $\mc{L}_\mb{S}$ that is uo-convergent to $X_t\in\mc{L}_\mb{S}$ at each point $t\in J.$ Then $(X^n_\mb{S})$ is uo-convergent to $X_\mb{S}.$ \end{proposition}
{\em Proof.} The constant vector-valued functions $t\mapsto kE$ are in $\mb{L}$ and therefore Daniell integrable. This shows that the sequence $|X^n_t-X_t|\wedge kE,$ which is order convergent to $0$ at each point $t,$ is pointwise bounded by the integrable function $t\mapsto kE.$ Therefore, Lebesgue's dominated convergence theorem for the Daniell integral implies that $I_\mb{S}(|X^n_t-X_t|\wedge kE)$ is order convergent to $0.$ But, since $I_\mb{S}$ is a Riesz homomorphism, $$
I_\mb{S}(|X^n_t-X_t|\wedge kE)=|I_\mb{S} (X^n_t)-I_\mb{S}(X_t)|\wedge I_\mb{S}(kE)=
|I_\mb{S} (X^n_t)-I_\mb{S}(X_t)|\wedge kE, $$ which shows that $X^n_\mb{S}=I_\mb{S}(X^n_t)\overset{uo}{\to}I_\mb{S}(X_t)=X_\mb{S}.$\phantom{em}
$\Box$
\section{Uniform Integrability} In this section we generalize the notion of uniform integrability. There are several ways in which one can do this, due to the different modes of convergence we have. It seems that convergence in $\mc{L}^1$ is the right notion to use in our case. The role of the integral is played by a conditional expectation $\mb{F}$ that is defined on the Dedekind complete Riesz space $\mathfrak{E}.$ Our assumptions on $\mathfrak{E}$ are as we stated them in section~2. We recall that the Riesz semi-norm $p_\phi$ is defined as $$
p_\phi(X):=|\phi|(\mb{F}(|X|)),\ \ \phi\in\Phi $$ and the topology of $\mc{L}^1$ is the locally solid topology $\sigma(\mc{L}^1,\mc{P}).$
\begin{definition}\label{UniformInt}\rm The sequence $(X_n)$ in $\mathfrak{E}$ is called \textit{$\mc{L}^1$-uniformly integrable} whenever we have that, for every $p_\phi\in\mc{P},$ \begin{equation}\label{eqUniformInt}
p_\phi(\mathbb P_{(|X_n|\ge \lambda E)}|X_n|)\to 0 \mbox{ as $0\le\lambda\uparrow\infty$ uniformly in $n.$} \end{equation} This means that for every $\epsilon>0$ and every $p_\phi\in\mc{P}$ there exists some $\lambda_0$ (depending on $\epsilon$ and $p_\phi$) such that for all $\lambda\ge\lambda_0$ we have that $$
p_\phi(\mathbb P_{(|X_n|\ge \lambda E)}|X_n|) <\epsilon \mbox{ for all $n\in\N.$} $$
A stronger notion, which may be called \textit{order-uniform integrability}, is to require $$
\sup_{n\in\N}\mb{F}(\mathbb P_{(|X_n|\ge \lambda E)}|X_n|)\downarrow 0 \mbox{ as $\lambda\uparrow\infty$}. $$ \end{definition} If $(X_n)$ is order-uniformly integrable, we have for every $n$ that $$
p_\phi(\mathbb P_{(|X_n|\ge \lambda E)}|X_n|)\le p_\phi(\sup_{n\in\N}\mb{F}(\mathbb P_{(|X_n|\ge \lambda E)}|X_n|))\downarrow 0. $$
It follows that $p_\phi(\mathbb P_{(|X_n|\ge \lambda E)}|X_n|)\to 0$ uniformly in $n$ as $\lambda\uparrow\infty.$ Thus, order-uniform integrability of $(X_n)$ implies $\mc{L}^1$-uniform integrability of $(X_n).$
We note that for each fixed $n$ we have that $\mathbb P_{(|X_n|>\lambda E)}\downarrow 0$ as $\lambda\uparrow\infty.$ Therefore, for each fixed $n,$ $\mathbb P_{(|X_n|>\lambda E)}|X_n|\downarrow 0$ as $\lambda\uparrow\infty$ and, since $\mb{F}$ is order continuous, also $\mb{F}(\mathbb P_{(|X_n|>\lambda E)}|X_n|)\downarrow 0$ as $\lambda\uparrow\infty$ for each fixed $n.$ Therefore, if $(X_n)$ has only a finite number of non-zero elements, it is clear (see~\cite[Theorem 16.1]{LZ}) that $(X_n)$ is order-uniformly integrable.
If $(X_n)$ is a bounded sequence in $\mc{L}^2,$ i.e., if for every $q_\phi\in \mc{Q}$ there exists a constant $M_\phi$ such that $q_\phi(X_n)\le M_\phi$ for all $n\in \N,$ then, by the Cauchy--Schwarz inequality, $$
p_\phi(\mathbb P_{(|X_n|>\lambda E)}|X_n|)\le q_\phi(\mathbb P_{(|X_n|>\lambda E)}E)q_\phi(X_n)\le q_\phi(\mathbb P_{(|X_n|>\lambda E)}E)M_\phi. $$
Therefore, if $q_\phi(\mathbb P_{(|X_n|>\lambda E)}E)\to 0$ uniformly in $n$ as $\lambda\uparrow\infty,$ then $(X_n)$ is $\mc{L}^1$-uniformly integrable. The next proposition can be compared to~\cite[Theorem I.2.1]{DU}.
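The contrast behind this remark can be checked in the classical setting (an assumption for illustration: $\Omega=(0,1]$ with Lebesgue measure and $\mb{F}$ the integral): $X_n=n\,\chi_{(0,1/n]}$ is bounded in $L^1$ but its tails never shrink, while $Y_n=\sqrt{n}\,\chi_{(0,1/n]}$ is bounded in $L^2$ and its tails decay like $1/\lambda$, as the Cauchy--Schwarz estimate predicts.

```python
import math

def tail(c, n, lam):
    # integral of c * 1_{(0,1/n]} over the set where that function is >= lam
    return c / n if c >= lam else 0.0

lam = 10.0
# X_n = n * 1_{(0,1/n]}: tail mass stays equal to 1 for all large n -> not UI
sup_tail_X = max(tail(n, n, lam) for n in range(1, 10**4))
# Y_n = sqrt(n) * 1_{(0,1/n]}: worst tail is ~1/lam -> uniformly integrable
sup_tail_Y = max(tail(math.sqrt(n), n, lam) for n in range(1, 10**4))

assert sup_tail_X == 1.0
assert sup_tail_Y <= 1.0 / lam + 1e-9
```

Increasing `lam` leaves `sup_tail_X` at $1$ but drives `sup_tail_Y` to $0$, which is exactly the uniform-in-$n$ tail condition of Definition~\ref{UniformInt} in the classical case.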
\begin{proposition}\label{4P1} Let $0\le X\in\mathfrak{E}$ and let $(\mathbb P_{t})_{0\le t<\infty}$ be projections in $\mathfrak{P}.$ Then, given any $p_\phi\in\mc{P}$ and $\epsilon>0,$ there exists a $\delta>0$ such that $p_\phi(\mathbb P_tE)<\delta$ implies $p_\phi(\mathbb P_{t} X)<\epsilon.$ Thus, if $p_\phi(\mathbb P_{t} E)$ converges to $0,$ then $p_\phi(\mathbb P_{t} X)$ converges to $0.$ \end{proposition} {\em Proof.} Assume that the proposition is false. Then there exist an element $\phi\in\Phi$ and some $\epsilon>0$ such that, for every $k,$ there exists a projection $\mathbb P_{t_k}$ satisfying $p_\phi(\mathbb P_{t_k}E)<2^{-k}$ and $p_\phi(\mathbb P_{t_k}X)>\epsilon.$ Define $$ \Q_k:=\mathbb P_{t_k}\vee \mathbb P_{t_{k+1}}\vee\ldots. $$ Then $\Q_k\downarrow$ and \begin{equation}\label{eq4.2.1} p_\phi(\Q_kX)\ge p_\phi(\mathbb P_{t_k}X)>\epsilon. \end{equation} But $$ p_\phi(\Q_kE)\le \sum_{j=k}^\infty p_\phi(\mathbb P_{t_j} E)\le 2^{1-k} \downarrow 0 \mbox{ as $k\to\infty$}. $$
Since $\mb{F}$ is strictly positive, $p_\phi=|\phi|\mb{F}$ is strictly positive on the carrier band $C_{\phi}$ of $\phi.$ Therefore, $\Q_kE \downarrow 0$ on $C_\phi.$ But then $\Q_kX\downarrow 0$ on $C_\phi$ and by the order continuity of $p_\phi,$ it follows that $p_\phi(\Q_kX)\downarrow 0.$ This contradicts (\ref{eq4.2.1}). \phantom{em}
$\Box$
The next theorem is also a generalization of a well-known fact about uniform integrability.
\begin{theorem}\label{TH43} The sequence $(X_n)$ in $\mathfrak{E}$ is uniformly integrable if and only if it satisfies the following conditions: \begin{enumerate} \item[{\rm(1)}] $(X_n)$ is a bounded set in $\mc{L}^1$;
\item[{\rm(2)}] For every $p_\phi\in\mc{P},$\ \ $p_\phi(\mathbb P|X_n|)\to 0$ uniformly in $n$ as $p_\phi(\mathbb P E)\to 0,$ i.e., given
$\epsilon>0$ and $p_\phi\in\mc{P},$ there exists a $\delta>0$ such that, if $p_\phi(\mathbb P E)\le\delta,$ then $p_\phi(\mathbb P|X_n|)<\epsilon$ for all $n\in\N.$ \end{enumerate} \end{theorem} {\em Proof.} Suppose that $(X_n)$ is a bounded set in $\mc{L}^1$ and that it is uniformly continuous, i.e., that $(X_n)$ satisfies condition (2). Then, by Chebyshev's inequality, we have $$
\mb{F}(\mathbb P_{(|X_n|\ge tE)}E)\le \frac{1}{t}\mb{F}(|X_n|), $$ which implies that $$
p_\phi(\mathbb P_{(|X_n|\ge tE)}E)\le\frac{1}{t}p_\phi(X_n)\le \frac{M_\phi}{t}, $$
where $M_\phi\ge 0$ is a bound for $p_\phi(X_n),$ which exists by the boundedness of $(X_n)$ in $\mc{L}^1.$ It follows that $p_\phi(\mathbb P_{(|X_n|\ge tE)}E)\to 0$ uniformly in $n$ as $t\uparrow\infty.$ It then follows from (2) that $$
p_\phi(\mathbb P|X_n|)\to 0 \mbox{ uniformly in $n$} $$ and so $(X_n)$ is uniformly integrable.
Conversely, if $(X_n)$ is uniformly integrable, we have for every $p_\phi\in\mc{P}$ that \begin{align}\label{equation4.3}
p_\phi(\mathbb P|X_n|)&=p_\phi(\mathbb P\mathbb P_{(|X_n|\ge tE)}|X_n|)
+p_\phi(\mathbb P\mathbb P_{(|X_n|< tE)}|X_n|) \nonumber\\
&\le p_\phi(\mathbb P_{(|X_n|\ge tE)}|X_n|) + tp_\phi(\mathbb P E). \end{align}
By the uniform integrability, we can choose, for given $\epsilon>0,$ a number $t_0$ such that the first term is less than $\epsilon/2$ for all $n.$ We then have, for $p_\phi(\mathbb P E)<\epsilon/(2t_0),$ that $p_\phi(\mathbb P|X_n|)<\epsilon$ for all $n,$ thus proving that condition (2) holds.
\noindent Taking the projection $\mathbb P$ in (\ref{equation4.3}) equal to the identity $I,$ it follows that for large $t$ (depending on $p_\phi$) we have $$
p_\phi(|X_n|)\le \epsilon+tp_\phi(E)=:M_\phi<\infty. $$ Since this holds for arbitrary $p_\phi\in\mc{P},$ the set $(X_n)$ is bounded in $\mc{L}^1.$\phantom{em}
$\Box$
\begin{corollary}\label{L:Xn+X} If $(X_n)$ and $(Y_n)$ are uniformly integrable sequences, then $(X_n+Y_n)$ is also uniformly integrable. In particular, if $X\in\mathfrak{E}$ then $(X_n+X)$ is uniformly integrable. \end{corollary} {\em Proof.} It is clear that if $(X_n)$ and $(Y_n)$ are bounded sequences in $\mc{L}^1$ then $(X_n+Y_n)$ is also a bounded sequence in $\mc{L}^1.$ Also, since they are uniformly integrable, they are uniformly continuous, i.e., condition (2) in Theorem~\ref{TH43} holds for both of them. But then, for every $p_\phi\in\mc{P},$ we have that if $p_\phi(\mathbb P E)\to 0,$ then $$
p_\phi(\mathbb P|X_n+Y_n|)\le p_\phi(\mathbb P|X_n|)+p_\phi(\mathbb P |Y_n|)\to 0 $$ uniformly in $n.$ By Theorem~\ref{TH43}, this implies that $(X_n+Y_n)$ is uniformly integrable. \phantom{em}
$\Box$
Below we denote unbounded order convergence of a sequence $(X_n)$ to an element $X$ by $X_n\overset{uo}{\to}X.$
\begin{lemma}\label{L:zero} If $X_n\overset{uo}{\to}0$ and $(X_n)$ is uniformly integrable, then $X_n\to 0$ in $\mc{L}^1.$ \end{lemma}
{\em Proof.} Suppose that $X_n\overset{uo}{\to}0$ and that $(X_n)$ is uniformly integrable. Let $\epsilon>0$ and $p_\phi\in\mc{P}$ be given. Then, it follows from the uniform integrability, that \begin{align*}
p_\phi(X_n)&=p_\phi(\mathbb P_{(|X_n|\geq\lambda E)}X_n)
+p_\phi(\mathbb P_{(|X_n|<\lambda E))}X_n)\\
&\le p_\phi(\mathbb P_{(|X_n|\geq\lambda E)}X_n)
+p_\phi(|X_n|\wedge \lambda E)\\
&<\epsilon/2 + p_\phi(|X_n|\wedge \lambda_0 E), \end{align*} for some $\lambda_0>0$ and for all $n\in\N.$ Since $X_n\overset{uo}{\to}0$ by assumption, and since $p_\phi$ is order continuous, there exists some $N\in\N$ such that for all $n\ge N$ the last term above is less than $\epsilon/2.$ Thus, $p_\phi(X_n)\to 0,$ and this holds for every $p_\phi\in\mc{P}.$ \phantom{em}
$\Box$
\begin{theorem}\label{T:FXn->FX} If $X_n\overset{uo}{\to}X$ and $(X_n)$ is uniformly integrable, then $X_n\to X$ in $\mc{L}^1.$ \end{theorem} {\em Proof.} Suppose that $X_n\overset{uo}{\to}X$ and that $(X_n)$ is uniformly integrable. For each $n\in\mb{N}$ define $C_n:=X_n-X$. Then $C_n\overset{uo}{\to}0$, and by Corollary~\ref{L:Xn+X}, we know that $(C_n)$ is uniformly integrable. Thus, by Lemma~\ref{L:zero}, it is true that $C_n\to 0$ in $\mc{L}^1.$ But this is equivalent to $X_n\to\, X$ in $\mc{L}^1.$ \phantom{em}
$\Box$
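A classical instance of the theorem (an assumption for illustration: $\Omega=(0,1]$ with Lebesgue measure and $\mb{F}$ the integral): the truncations $X_n=\min(n,x^{-1/2})$ converge almost everywhere, hence uo, to $X=x^{-1/2}$; being dominated by the integrable $X$ they are uniformly integrable, and the $L^1$-error admits a closed form that visibly tends to $0$.

```python
import math

def l1_error(n):
    # X_n = min(n, x^{-1/2}) differs from X = x^{-1/2} only on (0, 1/n^2),
    # where the L^1 gap is  int_0^{1/n^2} x^{-1/2} dx - n * (1/n^2)
    #                     = 2/n - 1/n = 1/n
    return 2.0 / n - n * (1.0 / n ** 2)

assert math.isclose(l1_error(10), 0.1)
assert l1_error(1000) < l1_error(10)   # the L^1 error decreases to 0
```

Without the uniform integrability the conclusion fails: the sequence $n\,\chi_{(0,1/n]}$ is uo-convergent to $0$ but keeps $L^1$-norm $1$.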
\noindent \textbf{Conclusion} If the sequences $(X_{\mb{S}_n})$ and $(X_{\mb{T}_n})$ are uniformly integrable and uo-converge in $\mathfrak{E}$ to $X_\mb{S}$ and $X_\mb{T}$, respectively, then it is easy to see that for any band projection $\mathbb P$ we have $\mathbb P(X_{\mb{S}_n})\overset{uo}{\to}\mathbb P(X_\mb{S})$ and $\mathbb P(X_{\mb{T}_n})\overset{uo}{\to}\mathbb P(X_\mb{T})$. It is also easy to see that $(\mathbb P(X_{\mb{S}_n}))$ and $(\mathbb P(X_{\mb{T}_n}))$ are uniformly integrable. Therefore, by Theorem~\ref{T:FXn->FX}, we have $\mathbb P X_{\mb{S}_n}\to\mathbb P X_\mb{S}$ and $\mathbb P X_{\mb{T}_n}\to\mathbb P X_\mb{T}$ in $\mc{L}^1.$ This fact will be used in the proof of Doob's optional sampling theorem below.
\begin{definition}\rm (see~\cite[Problem 3.11]{KS}) Let $(\mathfrak{F}_n)$ be a decreasing sequence of Dedekind complete Riesz subspaces of $\mathfrak{E},$ i.e., $$ \mathfrak{F}_{n+1}\subseteq \mathfrak{F}_{n}\subseteq \mathfrak{E}, $$ with $\mathfrak{F}_n$ the range of a conditional expectation $\mb{F}_n:\mathfrak{E}\to\mathfrak{F}_n$ satisfying $\mb{F}_n\mb{F}_m=\mb{F}_m\mb{F}_n=\mb{F}_m$ if $m>n.$ The process $(X_n)$ with $X_n\in\mathfrak{F}_n$ and $\mb{F}_{n+1}(X_n)\ge X_{n+1}$ is called a {\em backward submartingale}. \end{definition} We note that $\mathfrak{F}_\infty:=\bigcap_n\mathfrak{F}_n$ is a Dedekind complete Riesz space that is contained in each of the spaces $\mathfrak{F}_n,$ and so there exists a conditional expectation $\mb{F}_\infty:\mathfrak{E}\to\mathfrak{F}_\infty$ with the property that for each $n$ we have $\mb{F}_\infty\mb{F}_n=\mb{F}_n\mb{F}_\infty=\mb{F}_\infty.$ Furthermore, applying $\mb{F}_\infty$ to both sides of the inequality in the definition, we find that $\mb{F}_\infty(X_n)\ge\mb{F}_\infty(X_{n+1})$ for all $n,$ i.e., the sequence $(\mb{F}_\infty(X_n))$ is decreasing. It is also easy to show by induction that $\mb{F}_n(X_1)\ge X_n$ for all $n.$
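The induction mentioned at the end of the remark is one line; using the composition rule $\mb{F}_{n+1}\mb{F}_n=\mb{F}_{n+1}$ from the definition and the positivity of conditional expectations, the step from $n$ to $n+1$ reads:

```latex
% induction step for  F_n(X_1) >= X_n :
\mb{F}_{n+1}(X_1)=\mb{F}_{n+1}\mb{F}_n(X_1)
  \ge \mb{F}_{n+1}(X_n)  % induction hypothesis and positivity of F_{n+1}
  \ge X_{n+1}.           % backward submartingale inequality
```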
\begin{example}\rm Let $(X_t,\mb{F}_t)_{t\in J}$ be a submartingale. With $J=[a,b],$ we have for any sequence of real numbers $t_n\downarrow a$ that $(X_{t_n},\mb{F}_{t_n})_{n\in \N}$ is a backward submartingale. In this case, $\mb{F}_\infty=\mb{F}_a=\mb{F}.$ \end{example}
Since we work in the setting where we do not have an integral, but a fixed conditional expectation $\mb{F},$ we shall assume for all backward submartingales considered that $\mb{F}_\infty=\mb{F}.$
\begin{proposition}\label{4P2} Let $(X_n)$ be a backward submartingale with $\mb{F}_\infty=\mb{F}.$ If the sequence $(\mb{F}(X_n))$ is bounded below, i.e., if $$ Y=\inf_{n\in\N}\mb{F}(X_n)\mbox{ exists in }\mathfrak{E}, $$ then the sequence $(X_n)$ is uniformly integrable. \end{proposition} {\em Proof.} By Jensen's inequality, $(X_n^+, \mathfrak{F}_n)$ is also a backward submartingale. Hence, for $\lambda>0,$ we find by the Chebyshev inequality that for each $n,$ $$
\lambda\mb{F}(\mathbb P_{(|X_n|>\lambda E)}E)\le\mb{F}(|X_n|) =-\mb{F}(X_n)+2\mb{F}(X^+_n)\le -Y+2\mb{F}(X_1^+). $$ It follows that \begin{equation}\label{4E1}
\lim_{\lambda\to\infty}p_\phi(\mathbb P_{(|X_n|>\lambda E)}E)=0 \mbox{ uniformly in $n,$} \end{equation} and therefore also \begin{equation}\label{4E2} \lim_{\lambda\to\infty}p_\phi(\mathbb P_{(X_n^+>\lambda E)}E)=0 \mbox{ uniformly in $n.$} \end{equation} Using the backward submartingale property of $(X_n^+),$ we have \begin{multline}\label{4E3} \mb{F}(\mathbb P_{(X_n^+>\lambda E)}X_n^+) \le\mb{F}(\mathbb P_{(X_n^+>\lambda E)}\mb{F}_n X_1^+)\\ =\mb{F}\mb{F}_n(\mathbb P_{(X_n^+>\lambda E)}X_1^+) =\mb{F}(\mathbb P_{(X_n^+>\lambda E)}X_1^+). \end{multline} Hence, we have for any $p_\phi\in\mc{P},$ that \begin{equation}\label{E47} p_\phi(\mathbb P_{(X_n^+>\lambda E)}X_n^+)\le p_\phi(\mathbb P_{(X_n^+>\lambda E)}X_1^+). \end{equation} We now apply Proposition \ref{4P1} to find for every $\epsilon>0,$ a $\delta>0$ such that, if $p_\phi(\mathbb P_{(X_n^+>\lambda E)}E)<\delta,$ then $p_\phi(\mathbb P_{(X_n^+>\lambda E)}X_1^+)<\epsilon.$ From (\ref{4E2}), there exists some $\lambda_0$ such that, for $\lambda>\lambda_0,$ $p_\phi(\mathbb P_{(X_n^+>\lambda E)}E)<\delta$ for all $n\in\N.$ It then follows from (\ref{E47}) that for all $\lambda>\lambda_0,$ we have $p_\phi(\mathbb P_{(X_n^+>\lambda E)}X_n^+)<\epsilon$ for all $n\in \N.$ This shows that the backward submartingale $(X_n^+)$ is uniformly integrable.
We next show that the sequence $(X_n^-)$ is also uniformly integrable. Note that $\mathbb P_{(X_n^->\lambda E)}=\mathbb P_{(X_n<-\lambda E)}$ and that for $m<n,$ we have $X_n\le\mb{F}_nX_m.$ Now, \begin{multline}\label{4E4} 0\ge\mb{F}(\mathbb P_{(X_n<-\lambda E)}X_n)=\mb{F}(X_n)-\mb{F}(\mathbb P_{(X_n\ge-\lambda E)}X_n)\\ \ge\mb{F}(X_n)-\mb{F}(\mathbb P_{(X_n\ge-\lambda E)}\mb{F}_nX_m)\\ \ge\mb{F}(X_n)-\mb{F}(\mathbb P_{(X_n\ge-\lambda E)}X_m)\\ =\mb{F}(X_n)-\mb{F}(X_m)+\mb{F}(\mathbb P_{(X_n<-\lambda E)}X_m). \end{multline} Since $\mb{F}(X_n)\downarrow_n Y,$ the sequence $(X_n)$ is a Cauchy sequence in $\mc{L}^1.$ For a given $\epsilon>0,$ we can choose $m=m(\epsilon)$ such that for all $n>m,$ we have $$ p_\phi(X_m-X_n)<\epsilon/2. $$
Also, by Proposition~\ref{4P1}, there exists a $\delta>0,$ such that
$p_\phi(\mathbb P_{(|X_n|>\lambda E)}E)<\delta,$ implies that
$p_\phi(\mathbb P_{(|X_n|>\lambda E)}X_m)<\epsilon/2.$ Using (\ref{4E1}), we can find a $\lambda_0$ such that for all $\lambda>\lambda_0$ and all $n\in \N,$
$p_\phi(\mathbb P_{(|X_n|>\lambda E)}E)<\delta$ and therefore $p_\phi(\mathbb P_{(|X_n|>\lambda E)}X_m)<\epsilon/2.$ But
$\mathbb P_{(X_n^->\lambda E)}\le \mathbb P_{(|X_n|>\lambda E)}$ and therefore, for all $\lambda>\lambda_0,$ $$ p_\phi(\mathbb P_{(X_n^->\lambda E)}X_m)<\epsilon/2 \mbox{ for all $n\in\N.$} $$ We now use the inequality in (\ref{4E4}): For all $n>m(\epsilon)$ we have \begin{align}\label{4E5} \mb{F}(\mathbb P_{(X_n^->\lambda E)}X_n^-)
&=|\mb{F}(\mathbb P_{(X_n^->\lambda E)}X_n)|\nonumber \\ &=-\mb{F}(\mathbb P_{(X_n<-\lambda E)}X_n) \\ &\le(\mb{F}(X_m)-\mb{F}(X_n))-\mb{F}(\mathbb P_{(X_n<-\lambda E)}X_m) \end{align} and so, for all $n>m$ we get $$ p_\phi(\mathbb P_{(X_n^->\lambda E)}X_n^-)\le p_\phi(X_m-X_n)+ p_\phi(\mathbb P_{(X_n^->\lambda E)}X_m)<\epsilon/2+\epsilon/2=\epsilon $$ for all $\lambda>\lambda_0.$
For $n=1,2,\ldots,m$ we have that $p_\phi(\mathbb P_{(X_n^->\lambda E)}X_n^-)\downarrow 0$ as $\lambda\to\infty,$ so we can find $\lambda_n$ such that for $\lambda>\lambda_n,$ we have $p_\phi(\mathbb P_{(X_n^->\lambda E)}X_n^-)<\epsilon.$ If $\lambda>\max\{\lambda_0,\lambda_1,\ldots,\lambda_m\},$ we have that $$ p_\phi(\mathbb P_{(X_n^->\lambda E)}X_n^-)<\epsilon \mbox{ for all $n\in\N.$} $$ Thus, $(X_n^-)$ is uniformly integrable. Our final result, that $(X_n)=(X_n^+-X_n^-)$ is uniformly integrable, follows from Corollary \ref{L:Xn+X}.\phantom{em}
$\Box$
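In the classical setting the uniform integrability proved above reads $\sup_n \mathbb E\big[|X_n|\,\chi_{\{|X_n|>\lambda\}}\big]\to 0$ as $\lambda\to\infty$. The following Monte Carlo sketch (hypothetical families, chosen only for illustration) contrasts a uniformly integrable family with one that is not:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100_000
U = rng.random(m)                 # sample points of the underlying space

def tail_mass(x, lam):
    """Classical analogue of F(P_(|X_n|>lam E)|X_n|): E[|x| 1_{|x|>lam}]."""
    ax = np.abs(x)
    return (ax * (ax > lam)).mean()

Z = U ** -0.25                    # integrable (E[Z] = 4/3) but unbounded

for lam in [2.0, 10.0]:
    # Uniformly integrable family X_n = Z/n: one dominating tail bound.
    sup_ui = max(tail_mass(Z / n, lam) for n in range(1, 51))
    # Not uniformly integrable: X_n = n * 1_{U < 1/n} keeps tail mass ~ 1.
    sup_bad = max(tail_mass(n * (U < 1.0 / n), lam) for n in range(1, 101))
    print(f"lambda={lam}: UI sup={sup_ui:.4f}, non-UI sup={sup_bad:.4f}")
```

As $\lambda$ grows, the supremum for the dominated family shrinks, while the escaping-mass family keeps a tail mass of about one for every $\lambda$.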
\section{The optional sampling theorem} As remarked in the introduction, we have to prove the optional sampling theorem using Definition~\ref{X_S}.
\begin{theorem} Let $(X_t)_{t\in J}$ be a right-uo-continuous submartingale and let $\mb{S}\le\T$ be two optional times of the filtration $(\mb{F}_t,\mathfrak{F}_t).$ Then, if either \begin{enumerate} \item $\T$ is bounded or \item $(X_t)$ has a last element, \end{enumerate} we have $$ \mb{F}_{\mb{S}+}X_\T\ge X_\mb{S}. $$ If $\mb{S}$ and $\T$ are stopping times, one has $$ \mb{F}_{\mb{S}}X_\T\ge X_\mb{S}. $$ \end{theorem} {\em Proof.} Let $\pi_n=\{a=t_0<t_1<\ldots<t_{2^n}=b\}$ be a dyadic partition of $J=[a,b]$ and define the sequence $(\mb{S}_n)$ by putting \begin{equation}\label{5E1} \mb{S}_n=\sum_{i=1}^{2^n}t_i(\mb{S}^\ell_{t_i}-\mb{S}^\ell_{t_{i-1}}) =\sum_{i=1}^{2^n}t_i\Delta\mb{S}^\ell_i=\sum_{i=1}^{2^n}t_i\mb{S}^\ell_{t_i}(\mb{S}_{t_{i-1}}^{\ell})^d \end{equation} and similarly, \begin{equation}\label{5E2} \T_n=\sum_{j=1}^{2^n}t_j(\T^\ell_{t_j}-\T^\ell_{t_{j-1}}) =\sum_{j=1}^{2^n}t_j\Delta\T^\ell_j=\sum_{j=1}^{2^n}t_j\T^\ell_{t_j}(\T_{t_{j-1}}^{\ell})^d. \end{equation} We now write them both as sums with respect to the partition $\{\Delta\mb{S}^\ell_i\Delta\T^\ell_j\}_{i,j=1}^{2^n},$ i.e., we get \begin{equation}\label{5E3} \mb{S}_n=\sum_{i=1}^{2^n}\sum_{j=1}^{2^n}s_{ij}\Delta\T^\ell_j\Delta\mb{S}^\ell_i,\ s_{ij}=t_i\mbox{ and } \T_n=\sum_{i=1}^{2^n}\sum_{j=1}^{2^n}t_{ij}\Delta\T^\ell_j\Delta\mb{S}^\ell_i,\ t_{ij}=t_j. \end{equation} Now, $\mb{S}\le\T$ implies that for each fixed $n,$ $\mb{S}_n\le\T_n$ and so $s_{ij}\le t_{ij};$ this implies that $t_i\Delta\T^\ell_j\Delta\mb{S}^\ell_i\le t_j\Delta\T^\ell_j\Delta\mb{S}^\ell_i$ for all $i,j$ such that $\Delta\T^\ell_j\Delta\mb{S}^\ell_i\ne 0.$
Each $\mb{S}_n$ and each $\T_n$ is a stopping time for the filtration and by Freudenthal's theorem, $\mb{S}_n\downarrow \mb{S}$ and $\T_n\downarrow\T.$
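Classically, $\mb{S}_n$ corresponds to rounding the optional time $\mb{S}$ up to the dyadic grid on $[a,b]$; the rounded times decrease to $\mb{S}$ and preserve the order $\mb{S}\le\T$. A small sketch (pathwise, on sampled values; the function name is our own) checks these properties:

```python
import numpy as np

def dyadic_round_up(s, a, b, n):
    """Classical analogue of the stopping times S_n in (5E1): round the
    optional time S up to the dyadic grid a + i*(b-a)/2^n, i = 0..2^n."""
    h = (b - a) / 2 ** n
    i = np.ceil((s - a) / h)
    return a + i * h

rng = np.random.default_rng(1)
a, b = 0.0, 1.0
S = rng.random(10)                                 # [a,b]-valued "optional time"
T = np.minimum(S + rng.random(10) * (1 - S), b)    # T >= S

for n in range(1, 8):
    Sn = dyadic_round_up(S, a, b, n)
    Tn = dyadic_round_up(T, a, b, n)
    assert np.all(Sn >= S) and np.all(Tn >= T)            # approximation from above
    assert np.all(Sn <= dyadic_round_up(S, a, b, n - 1))  # S_n decreases in n
    assert np.all(Sn <= Tn)                               # S <= T is preserved
print("dyadic approximations verified")
```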
With these definitions for $\mb{S}_n$ and $\T_n,$ we have that \begin{equation}\label{5E4} X_{\mb{S}_n}=\sum_{i=1}^{2^n}\sum_{j=1}^{2^n}\Delta\T^\ell_j\Delta\mb{S}^\ell_i X_{t_{i}}\mbox{ and } X_{\T_n}=\sum_{i=1}^{2^n}\sum_{j=i}^{2^n}\Delta\T^\ell_j\Delta\mb{S}^\ell_i X_{t_{j}}. \end{equation} Next, we put \begin{equation}\label{5E5} \mb{F}_{\mb{S}_n}:=\sum_{i=1}^{2^n}\sum_{j=1}^{2^n}\mb{F}_{t_{i}}\Delta\T^\ell_j\Delta\mb{S}^\ell_i =\sum_{i=1}^{2^n}\mb{F}_{t_{i}}\Delta\mb{S}^\ell_i. \end{equation} It is readily checked that $\mb{F}_{\mb{S}_n}$ is a strictly positive, order continuous projection that maps $E$ onto $E,$ i.e., $\mb{F}_{\mb{S}_n}$ is a conditional expectation. Its range is the direct sum $$ \bigoplus_{i=1}^{2^n}\Delta\mb{S}^\ell_i\mathfrak{F}_{t_i} $$ of the bands $\Delta\mb{S}_i^\ell \mathfrak{F}_{t_i}$ (since $\mb{F}_{t_i}$ and $\Delta\mb{S}_i^\ell$ commute), and the projections in $\mathfrak{E}$ that belong to this space are exactly those projections in $\mathfrak{E}$ that belong to $\mathfrak{P}_t$ for all $t$ such that $\mb{S}_n \le tE.$ Therefore, the space $\mathfrak{F}_{\mb{S}_n},$ which is by definition the space generated by these projections, is equal to the space $\bigoplus_{i=1}^{2^n}\Delta\mb{S}^\ell_i\mathfrak{F}_{t_i}.$ Moreover, $$
\mb{F}\mb{F}_{\mb{S}_n}=\sum_{i=1}^{2^n}\mb{F}\mb{F}_{t_{i}}\Delta\mb{S}^\ell_i=\mb{F}\sum_{i=1}^{2^n}\Delta\mb{S}^\ell_i=\mb{F}, $$ and similarly $\mb{F}_{\mb{S}_n}\mb{F}=\mb{F}.$ Therefore, $\mb{F}_{\mb{S}_n}$ is the unique conditional expectation with range $\mathfrak{F}_{\mb{S}_n}$ satisfying these two conditions.
For each fixed $i,$ consider the sum \begin{align}\label{equation5.6a} \sum_{j=i}^{2^n}\mb{F}_{t_{i}}\Delta\T^\ell_j\Delta\mb{S}^\ell_i X_{t_j} &=\mb{F}_{t_i}\Delta\T^\ell_{2^n}\Delta\mb{S}^\ell_i X_{t_{2^n}}+\ldots +\mb{F}_{t_i}\Delta\T^\ell_{i}\Delta\mb{S}^\ell_i X_{t_{i}} \end{align} and note that, in the first term, $$ \Delta\T^\ell_{2^n}\Delta\mb{S}^\ell_i=\Delta \mb{S}^\ell_i(\T^\ell_{t_{2^n}}(\T^{\ell}_{t_{2^n-1}})^d)=\Delta \mb{S}^\ell_i(\T^{\ell}_{t_{2^n-1}})^d\in\mathfrak{P}_{t_{2^n-1}}. $$ Therefore, \begin{align*} \mb{F}_{t_i}\Delta\T^\ell_{2^n}\Delta\mb{S}^\ell_i X_{t_{2^n}}&=\mb{F}_{t_i}\mb{F}_{t_{2^n-1}} \Delta\T^\ell_{2^n}\Delta\mb{S}^\ell_i X_{t_{2^n}}\\ &=\mb{F}_{t_i}\Delta\T^\ell_{2^n}\Delta\mb{S}^\ell_i \mb{F}_{t_{2^n-1}} X_{t_{2^n}}\\ &\ge\mb{F}_{t_i}\Delta\T^\ell_{2^n}\Delta\mb{S}^\ell_i X_{t_{2^n-1}}. \end{align*} Substituting this inequality in Equation (\ref{equation5.6a}), and repeating the process we finally arrive at \begin{equation} \sum_{j=i}^{2^n}\mb{F}_{t_{i}}\Delta\T^\ell_j\Delta\mb{S}^\ell_i X_{t_j} \ge\mb{F}_{t_{i}}\Delta\mb{S}^\ell_i\Big(\sum_{j=i}^{2^n}\Delta\T^\ell_j\Big) X_{t_i} =\mb{F}_{t_{i}}\Delta\mb{S}^\ell_i X_{t_i} =\Delta\mb{S}^\ell_iX_{t_i}. \end{equation} Thus, \begin{equation}\label{5E6} \mb{F}_{\mb{S}_n}(X_{\T_n})=\sum_{i=1}^{2^n}\sum_{j=i}^{2^n}\mb{F}_{t_{i}}\Delta\T^\ell_j\Delta\mb{S}^\ell_i X_{t_j} \ge \sum_{i=1}^{2^n}\Delta\mb{S}^\ell_i X_{t_i}=X_{\mb{S}_n}. \end{equation} (This is Doob's optional sampling theorem for this special case.)
For every $\mathbb P\in\mathfrak{P}_{\mb{S}_n},$ we therefore have that \begin{equation}\label{5E7} \mb{F}(\mathbb P X_{\T_n})=\mb{F}\mb{F}_{\mb{S}_n}\mathbb P X_{\T_n}= \mb{F}\mathbb P(\mb{F}_{\mb{S}_n}X_{\T_n})\ge\mb{F}(\mathbb P X_{\mb{S}_n}). \end{equation} By~\cite[Proposition 5.15]{G2}, we have that $\displaystyle \mathfrak{P}_{\mb{S}+}=\bigcap_{n=1}^\infty \mathfrak{P}_{\mb{S}_n}.$ Therefore, by (\ref{5E7}), \begin{equation}\label{5E7b} \mb{F}(\mathbb P X_{\T_n})\ge\mb{F}(\mathbb P X_{\mb{S}_n})\mbox{ holds for all $\mathbb P\in\mathfrak{P}_{\mb{S}+}.$} \end{equation} If $\mb{S}$ is a stopping time, it follows from~\cite[Proposition 5.9]{G2} that since $\mb{S}\le\mb{S}_n,$ we have $\mathfrak{P}_{\mb{S}}\subset\mathfrak{P}_{\mb{S}_n}$ and so this inequality holds in that case also for all $\mathbb P\in\mathfrak{P}_\mb{S}.$
Applying the arguments above to $\mb{S}_{n+1}\le\mb{S}_{n}$ we get as in (\ref{5E6}) that $$ \mb{F}_{\mb{S}_{n+1}}(X_{\mb{S}_{n}})\ge X_{\mb{S}_{n+1}}\mbox{ for all $n$}, $$ which implies that $(X_{\mb{S}_n},\mb{F}_{\mb{S}_n})$ is a backward submartingale and so $(\mb{F}(X_{\mb{S}_n}))$ is a decreasing sequence and, using (\ref{5E4}), $\mb{F}(X_{\mb{S}_n})\ge\mb{F}(X_a)$ for all $n.$ Applying Proposition~\ref{4P2}, we have that the sequence $(X_{\mb{S}_n})$ is uniformly integrable. The same is true for the sequence $(X_{\T_n}).$
We now note that $$ X_{\mb{S}_n}=I_\mb{S}(X^n),\mbox{ and }X_{\T_n}=I_{\T}(X^n), $$ with $$ X^n_t=\sum_{i=1}^{2^n}X_{t_i}\chi_{[t_{i-1},t_i)}(t). $$ Our assumption that $X=(X_t)$ is uo-right-continuous implies that in each point $t$ we have that $X^n_t$ is uo-convergent to $X_t.$ Now let $k\in\N$ and define the constant process $kE$ by $(kE)_t=kE.$ This process is Daniell integrable since $I_\mb{S}(kE)=kE\in\mathfrak{E}.$ Consider the process $X^n\wedge kE=(X^n_t\wedge kE).$ Then $X^n\wedge kE$ converges in order pointwise to $X\wedge kE.$ By Lebesgue's dominated convergence theorem, we get that $I_\mb{S}(X^n\wedge kE)$ is order convergent to $I_\mb{S}(X\wedge kE).$ But $I_\mb{S}$ is a Riesz homomorphism, so we get that $$ I_\mb{S}(X^n\wedge kE)=I_\mb{S}(X^n)\wedge I_\mb{S}(kE)=I_\mb{S}(X^n)\wedge kE. $$
Therefore, by Proposition~\ref{proposition 3.4}, $X_{\mb{S}_n}=I_\mb{S}(X^n)$ is uo-convergent to $I_\mb{S}(X)=X_{\mb{S}}$ and the same holds for $I_\T(X^n)$ and $I_\T(X).$ By our result for a uo-convergent sequence that is uniformly integrable, we have that $p_\phi(\mathbb P X_{\mb{S}_n})$ converges to $p_\phi(\mathbb P X_\mb{S})$ and also $p_\phi(\mathbb P X_{\T_n})$ converges to $p_\phi(\mathbb P X_\T),$ for every $\mathbb P\in\mathfrak{P}_{\mb{S}+}$ and for every $p_\phi\in\mc{P}.$ Recalling that $p_\phi=|\phi|\mb{F}$ for $\phi\in\Phi$, this implies, using~(\ref{5E7b}), that $$
|\phi|\mb{F}(\mathbb P X_\T)\ge |\phi|\mb{F}(\mathbb P X_\mb{S}),\mbox{ for every $\mathbb P\in\mathfrak{P}_{\mb{S}+}.$} $$ But since $\mathfrak{E}^\sim_{00}$ separates the points of $\mathfrak{E},$ we get $$ \mb{F}(\mathbb P X_\T)\ge \mb{F}(\mathbb P X_\mb{S}),\mbox{ for every $\mathbb P\in\mathfrak{P}_{\mb{S}+}$}, $$ and thus $$ \mb{F}\mb{F}_{\mb{S}+}(\mathbb P X_\T)=\mb{F}\mathbb P(\mb{F}_{\mb{S}+}X_\T) \ge \mb{F}\mathbb P (X_\mb{S}). $$ Since this holds for every $\mathbb P\in\mathfrak{P}_{\mb{S}+},$ we have that $$ \mb{F}_{\mb{S}+}X_\T \ge X_\mb{S}. $$ This proves the theorem if $\mb{S}, \T$ are optional times. In the case that they are stopping times, the theorem holds since the inequalities hold for all $\mathbb P\in\mathfrak{P}_\mb{S}. $ \phantom{em}
$\Box$
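A classical sanity check of the theorem: for a discrete-time submartingale (a random walk with upward drift) and bounded stopping times $S\le T$, optional sampling gives $\mathbb E[X_S]\le\mathbb E[X_T]$. The following simulation (an illustrative analogue only, not the Riesz-space statement itself) verifies this numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
paths, steps = 20_000, 50

# X_k = sum of i.i.d. steps with positive mean: a discrete-time submartingale.
inc = rng.choice([-1.0, 1.0], size=(paths, steps), p=[0.4, 0.6])
X = np.concatenate([np.zeros((paths, 1)), inc.cumsum(axis=1)], axis=1)

# S = first time the walk hits level 3, capped at 30; T = 50.  Then S <= T,
# and both are bounded stopping times of the walk's natural filtration.
hit = np.argmax(X >= 3, axis=1)        # first index with X >= 3 (0 if never)
hit[~(X >= 3).any(axis=1)] = 30
S = np.minimum(hit, 30)
T = np.full(paths, steps)

XS = X[np.arange(paths), S]
XT = X[np.arange(paths), T]
print(XS.mean(), XT.mean())            # optional sampling: E[X_S] <= E[X_T]
assert XS.mean() <= XT.mean()
```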
\begin{thebibliography}{99} \bibitem{AB1} Aliprantis, C.D. and Burkinshaw, O., \textit{Locally solid Riesz spaces}, Academic Press, New York, San Francisco, London, 1978. \bibitem{AB2} Aliprantis, C.D. and Burkinshaw, O., \textit{Positive Operators}, Academic Press Inc., Orlando, San Diego, New York, London, 1985.
\bibitem{DU} Diestel, J. and Uhl, J.J., \textit{Vector measures}, Amer. Math. Soc., Providence, RI, 1977.
\bibitem{PJD} Daniell, P.J., \emph{A general form of integral}, Annals of Mathematics, Second Series \textbf{19} (1918), no.~4, 279--294. (https://www.jstor.org/stable/1967495) \bibitem{Donner} Donner, K., {\em Extension of Positive Operators and Korovkin Theorems}, Lecture Notes in Mathematics, Volume 904, Springer-Verlag, Berlin, Heidelberg, New York, 1982. \bibitem{F} Fremlin, D.H., \textit{Topological Riesz spaces and measure theory,} Cambridge University Press, Cambridge, 1974.
\bibitem{G1} Grobler, J.J., {Continuous stochastic processes in Riesz spaces: the Doob-Meyer decomposition}, \textit{Positivity} \textbf{14} (2010), 731--751. \bibitem{G2} Grobler, J.J., {Doob's optional sampling theorem in Riesz spaces}, \textit{Positivity} \textbf{15} (2011), 617--637.
\bibitem{G12} Grobler, J.J., {101 Years of vector lattice theory. A vector-valued Daniell integral}, \textit{Preprint, Research Gate}, 2020. \bibitem{G6} Grobler, J.J., Labuschagne, C.C.A. and Maraffa, V., {Quadratic variation of martingales in Riesz spaces}, \textit{J. Math. Anal. Appl.} \textbf{410} (2014), 418--426. \bibitem{G7} Grobler, J.J. and Labuschagne, C.C.A., {The It\^o integral for Brownian motion in vector lattices: Part 1}, \textit{J. Math. Anal. Appl.} \textbf{423} (2015), 797--819. doi 10.1016/j.jmaa.2014.08.013. \bibitem{G8} Grobler, J.J. and Labuschagne, C.C.A., {The It\^o integral for Brownian motion in vector lattices: Part 2}, \textit{J. Math. Anal. Appl.} \textbf{423} (2015), 820--833. doi 10.1016/j.jmaa.2014.09.063.
\bibitem{G9} Grobler, J.J. and Labuschagne, C.C.A., {The It\^o integral for martingales in vector lattices}, \textit{J. Math. Anal. Appl.} \textbf{450} (2017), 1245--1274. doi 10.1016/j.jmaa.2017.01.081.
\bibitem{KS} Karatzas, I. and Shreve, S.E., \textit{Brownian motion and stochastic calculus}, Graduate Texts in Mathematics, Springer, New York, Berlin, Heidelberg, 1991.
\bibitem{LZ} Luxemburg, W.A.J. and Zaanen, A.C., \textit{Riesz Spaces I}, North-Holland Publishing Company, Amsterdam, London, 1971.
\bibitem{MN} Meyer-Nieberg, P., \textit{Banach Lattices,} Springer-Verlag, Berlin, Heidelberg, New York, 1991. \bibitem{Pr} Protter, P.E., \textit{Stochastic integration and Differential equations,} Springer-Verlag, Berlin, Heidelberg, New York, 2005.
\bibitem{Sch} Schaefer, H.H., \textit{Banach lattices and positive operators,} Springer-Verlag, Berlin, Heidelberg, New York, 1974.
\bibitem{Vu} Vulikh, B.Z., \textit{Introduction to the theory of partially ordered spaces,} Wolters-Noordhoff Scientific Publishers, Groningen, 1967. \bibitem{W} Watson, B.A., An And\^o-Douglas type theorem in Riesz spaces with a conditional expectation, {\em Positivity} \textbf{13} (2009), no.~3, 543--558. \bibitem{Z1} Zaanen, A.C., \textit{Riesz spaces II}, North-Holland, Amsterdam, New York, 1983. \bibitem{Z2} Zaanen, A.C., \textit{Introduction to Operator theory in Riesz spaces}, Springer-Verlag, Berlin, Heidelberg, New York, 1991.
\end{thebibliography}
\end{document}
$$ \end{lemma}
{\em Proof.} Let $m, n\in\mb{N}$, and let $s,t>0$. Since $V$ is in the band generated by $W,$ we have \[ (tW)\wedge V\uparrow V \mbox{ as } t\uparrow \infty. \] It follows from~\cite[Theorem 30.5]{LZ} and the order continuity of $\mb{F}$ that if $t\to\infty,$ \[
\mb{F}(\mathbb P(|X_n|\ge s(tW\wedge V)))(|X_n|)\downarrow_t\mb{F}(\mathbb P(|X_n|\ge sV))(|X_n|). \] Thus there exists a sequence $U_m\downarrow 0$ such that \[
\mb{F}(\mathbb P(|X_n|\ge s(tW\wedge V)))(|X_n|)\leq U_m+\mb{F}(\mathbb P(|X_n|\ge sV))(|X_n|) \] holds for sufficiently large $t$. Moreover, since
$\mb{F}(\mathbb P^n_\lambda, V)|X_n|\downarrow_\lambda 0$ uniformly in $n$ as $\lambda\uparrow\infty,$ there exists $Z_m\downarrow 0$ such that \[
\underset{n}{\sup}\mb{F}(\mathbb P(|X_n|\ge sV))(|X_n|)\le Z_m \] holds for sufficiently large $s$. Then $(U_m+Z_m)\downarrow 0$ and for sufficiently large $s$ and $t$ we have \begin{align*}
\mb{F}(\mathbb P(|X_n|\ge stW))(|X_n|)&\leq\mb{F}(\mathbb P(|X_n|\ge s(tW\wedge V)))(|X_n|)\\
&\leq U_m+\mb{F}(\mathbb P(|X_n|\ge sV))(|X_n|)\\ &\leq U_m+Z_m. \end{align*} It follows that $\mb{F}(\mathbb P^n_\lambda, W)\downarrow 0$ uniformly in $n$. \phantom{em}
$\Box$
\begin{lemma}\label{L:Xn+X} If $(X_n)$ is uniformly integrable, then so is $(X_n+X)$ for any $X\in\mathfrak{E}$. \end{lemma}
{\em Proof.} Suppose $(X_n)$ is uniformly integrable, and let $X\in\mathfrak{E}$. Let $W$ be a positive weak order unit for which $W\geq |X|$ (e.g.\ take $W=|X|+E$). Since $(X_n)$ is uniformly integrable, it follows from Lemma~\ref{L:VW} that there exists $Z_m\downarrow 0$ such that for every $m\in\mb{N}$ we have \[
\sup_n\mb{F}(\mathbb P(|X_n|\ge \lambda W))(|X_n|)\leq Z_m \] holds for large enough $\lambda$. Let $m\in\mb{N}$ be fixed. Then for $\lambda$ sufficiently large, \begin{align*}
&\mb{F}(\mathbb P(|X_n+X|\ge \lambda W))(|X_n+X|)\\
&\leq\mb{F}(\mathbb P(|X_n+X|\ge \lambda W))(|X_n|)+\mb{F}(\mathbb P(|X_n+X|\ge \lambda W))(|X|)\\
&\leq\mb{F}(\mathbb P(|X_n|+|X|\ge \lambda W))(|X_n|)+\mb{F}(\mathbb P(|X_n|+|X|\ge \lambda W))(|X|)\\
&\leq\mb{F}(\mathbb P(|X_n|+W\ge \lambda W))(|X_n|)+\mb{F}(\mathbb P(|X_n|+W\ge \lambda W))(|X|)\\
&=\mb{F}(\mathbb P(|X_n|\ge (\lambda-1)W))(|X_n|)+\mb{F}(\mathbb P(|X_n|\ge (\lambda-1)W))(|X|)\\
&\leq Z_m+\mb{F}(\mathbb P(|X_n|\ge (\lambda-1)W))(|X|). \end{align*} Thus for sufficiently large $t\in\mb{N}$ we have \[
\mb{F}(\mathbb P(|X_n+X|\ge tW))(|X_n+X|)\leq Z_m+\mb{F}(\mathbb P(|X_n|\ge (t-1)W))(|X|). \]
Choose $W$ (as we may) equal to $|X|+E.$ By Chebyshev's inequality, we get \begin{align*}
\mb{F}(\mathbb P(|X_n|\ge (t-1)W))(E)&\le\mb{F}(\mathbb P(|X_n|\ge (t-1)E))(E)\\
&\le \frac{1}{t-1}\mb{F}(|X_n|). \end{align*}
But, since $(X_n)$ is uniformly integrable, $\mb{F}(|X_n|)$ is order bounded: let $Z_m'\downarrow 0$ be such that $\mb{F}(\mathbb P(|X_n|\ge tE)|X_n|)\le Z_m'$ for all $n$ and for all $t\ge c.$ Then, \begin{multline*}
\mb{F}(|X_n|)=\mb{F}(\mathbb P(|X_n|\ge cE)|X_n|)+\mb{F}(\mathbb P(|X_n|< cE)|X_n|)\\ \le Z'_m+cE\le Z'_1+cE =U\in\mathfrak}\def\mc{\mathcal}\def\mb{\mathbb}\def\ms{\mathscr{E}. \end{multline*}
Thus, $\mb{F}(\mathbb P(|X_n|\ge(t-1)W)E)$ converges $U$-uniformly to $0$ as $t\to\infty$ uniformly in $n.$ Let $p_\phi\in \mathscr{P}$ and $\epsilon>0$ be arbitrary; then there exists some $\delta>0$ such that, for all $n\in \N,$ $$
\mb{F}(\mathbb P(|X_n|\ge (t-1)W))(E)<\delta U \implies
p_\phi(\mb{F}(\mathbb P(|X_n|\ge (t-1)W))(|X|))<\epsilon. $$
Choose $t_0$ so large that $(t-1)^{-1}<\delta.$ Then we have for all $t\ge t_0$ that $p_\phi(\mb{F}(\mathbb P(|X_n|\ge (t-1)W))(|X|))<\epsilon$
uniformly in $n.$ We now apply Proposition 4.3 by taking $Z''_k$ to be the sequence
$\frac{1}{t_k-1}(Z_1'+cE)$ with $t_k\uparrow\infty.$ Then there exists a sequence $U_k\downarrow 0$ such that $\mb{F}(\mathbb P(|X_n|\ge (t-1)W))(|X|)\downarrow_t 0$ uniformly in $n$ and the rest of the argument follows.
But since $\mb{F}(\mathbb P(|X_n|\ge (t-1)W))(|X|)\downarrow_t 0$, there exists $U_m\downarrow 0$ such that $\mb{F}(\mathbb P(|X_n|\ge (t-1)W))(|X|)\leq U_m$ holds for sufficiently large $t\in\mb{N}$ (recall that $m$ is fixed). Therefore, we have \[
\sup_n\mb{F}(\mathbb P(|X_n+X|\ge \lambda W))(|X_n+X|)\leq Z_m+U_m \] holds for sufficiently large $\lambda$. Invoking Lemma~\ref{L:VW} once again, we conclude that $(X_n+X)$ is uniformly integrable. \phantom{em}
$\Box$
Used in the proof:
\begin{enumerate} \item The definitions of $\mathfrak{P}_\mb{S},$ $\mathfrak{P}_{\mb{S}+},$ $\mathfrak{F}_\mb{S},$ $\mathfrak{F}_{\mb{S}+},$ $\mb{F}_\mb{S},$ $\mb{F}_{\mb{S}+}.$ \item If $\mb{S}_n$ is a Freudenthal simple element for $\mb{S},$ then for all projections $\mathbb P\in \mathfrak{P}_{\mb{S}_n}$ one has $$ \mb{F}_a(\mathbb P X_{\mb{S}_n})\le\mb{F}_a(\mathbb P X_{\T_n}), $$ which implies $X_{\mb{S}_n}=\mb{F}_{\mb{S}_n}X_{\mb{S}_n}\le\mb{F}_{\mb{S}_n}X_{\T_n}$ (this is almost the result we want for simple optional times). \item $\mathfrak{F}_{\mb{S}+}=\bigcap_{n=1}^\infty \mathfrak{F}_{\mb{S}_n}.$ This implies that the first inequality in 2 also holds for all $\mathbb P\in \mathfrak{P}_{\mb{S}+}.$ \item If $\mb{S}$ is a stopping time, then $\mb{S}\le\mb{S}_n$ implies $\mathfrak{F}_\mb{S}\subset\mathfrak{F}_{\mb{S}_n}.$ \item Similarly as in 2, $(X_{\mb{S}_n})$ is a backward submartingale with $(\mb{F}_a(X_{\mb{S}_n}))$ decreasing and bounded below by $\mb{F}(X_a)$ (remember $J=[a,b]$).
The same is true for $\T_n$ replacing $\mb{S}_n.$ \item The conditions in 5 imply that the sequences $(X_{\mb{S}_n})$ and $(X_{\T_n})$ are ``uniformly integrable'', which implies that if they converge in order in $\mathfrak{E}$ to $X_\mb{S}$ and $X_\T$ respectively, then $\mb{F}_a(\mathbb P X_\mb{S})=\lim_{n}\mb{F}_a(\mathbb P X_{\mb{S}_n})$ and the same for $\T$ replacing $\mb{S}.$ Using 2 it follows that $$ \mb{F}_a(\mathbb P X_\mb{S})\le \mb{F}_a(\mathbb P X_\T)\mbox{ for all }\mathbb P\in\mathfrak{P}_{\mb{S}+}. $$ (Here we need to prove a theorem: if a sequence is uo-convergent and uniformly integrable, then it converges in order. I guess it is true.)
The sequence $(X_n)$ in $\mathfrak{E}$ is called \textit{$\mb{F}$-uniformly integrable} whenever $\mb{F}(\mathbb P_\lambda|X_n|):=\mb{F}(\mathbb P(|X_n|\ge \lambda E)|X_n|)\downarrow 0$ as $0\le\lambda\uparrow\infty$ uniformly in $n.$ That is, there exists a sequence $Z_m\downarrow 0$ such that for every $m$ there exists some $M\ge 0$ such that if $\lambda>M,$ then
$\sup_n\mb{F}(\mathbb P_\lambda |X_n|)\le Z_m.$
\item $\lim X_{\mb{S}_n}=X_\mb{S}$ and $\lim X_{\T_n}=X_{\T},$ i.e., the condition in 6 is fulfilled (this last condition uses Lebesgue's theorem for the Daniell integral). \end{enumerate}
Let $\mb{F}$ be a conditional expectation on $\mathfrak{E}.$ We call two events $\mathbb P$ and $\Q$ in $\mathfrak{P}$ \textit{$\mb{F}$-conditionally independent} whenever $$ \mb{F} (\mathbb P\Q)\mb{F}=(\mb{F}\mathbb P)(\mb{F}\Q)\mb{F}=(\mb{F}\Q)(\mb{F}\mathbb P)\mb{F}, $$
which is equivalent to $\mb{F}(\mathbb P\Q)|_{\mathfrak{F}}=(\mb{F}\mathbb P)(\mb{F}\Q)|_{\mathfrak{F}}=(\mb{F}\Q)(\mb{F}\mathbb P)|_{\mathfrak{F}}.$ By~\cite[Lemma 4.2]{G4}, $(\mb{F}\mathbb P)(\mb{F}\Q)\mb{F}=(\mb{F}\Q)(\mb{F}\mathbb P)\mb{F}$ and so it is sufficient to define conditional independence by stating only one of the conditions.
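In the classical case, with the band projections $\mathbb P$, $\Q$ replaced by multiplication by indicators, the defining identity reads $\mathbb E[\chi_A\chi_B\mid\mathcal G]=\mathbb E[\chi_A\mid\mathcal G]\,\mathbb E[\chi_B\mid\mathcal G]$. A finite-space sketch (a hypothetical example; conditional expectation is block averaging) verifies one such identity:

```python
import numpy as np

def cond_exp(x, partition):
    """Conditional expectation on a finite uniform space: block averages."""
    y = np.empty(len(x))
    for block in partition:
        y[block] = x[block].mean()
    return y

# Omega = {0,...,7} with the uniform measure; G is generated by two blocks.
G = [[0, 1, 2, 3], [4, 5, 6, 7]]
omega = np.arange(8)
A = ((omega // 2) % 2 == 1).astype(float)  # indicator of the event A
B = (omega % 2 == 1).astype(float)         # indicator of the event B

lhs = cond_exp(A * B, G)                   # E[1_A 1_B | G]
rhs = cond_exp(A, G) * cond_exp(B, G)      # E[1_A | G] E[1_B | G]
assert np.allclose(lhs, rhs)
print("A and B are conditionally independent given G")
```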
A class $\mathfrak{C}$ of projections is $\mb{F}$-independent if, for every choice of a finite number of elements $\mathbb P_j,\ j=1,\ldots,n,$ of $\mathfrak{C},$ we have
$$
\mb{F}\Big(\prod_{j=1}^n\mathbb P_j\Big)\Big|_{\mathfrak{F}}=\prod_{j=1}^n\mb{F}\mathbb P_j\Big|_{\mathfrak{F}}.
$$
Classes $\mathfrak{C}_\alpha$ are called $\mb{F}$-independent if, for any choice of projections $\mathbb P_\alpha\in\mathfrak{C}_\alpha,$ the chosen class is $\mb{F}$-independent.
For every $t\in\mb{R}$ let $\mathfrak{B}_t$ be the band generated by $(tE-X)^+=\mathfrak{B}(tE>X)$ with band projection $\mathbb P_t=\mathbb P(tE>X).$ Let $\mathfrak{P}(X)$ be the order complete Boolean subalgebra of $\mathfrak{P}$ generated by all $\mathbb P_t.$ We note that $(tE-X)^+$ is an element of the Riesz space $[X,\mathfrak{F}]$ generated by the element $X$ and $\mathfrak{F}=\mb{F}(\mathfrak{E})$ and so the projections $\mathbb P_t$ are projections in this space, i.e., $\mathfrak{P}(X)\subset\mathfrak{P}_{[X,\mathfrak{F}]}.$ We say that two elements $X,Y\in\mathfrak{E}$ are $\mb{F}$-conditionally independent if the classes $\mathfrak{P}(X)$ and $\mathfrak{P}(Y)$ are $\mb{F}$-independent.
We say that the element $X$ is independent of the algebra $\mathfrak{G}$ of projections whenever the algebras $\mathfrak{G}$ and $\mathfrak{P}(X)$ are $\mb{F}$-conditionally independent. This means that for any $\Q\in\mathfrak{G}$ and $\mathbb P\in\mathfrak{P}(X),$ we have that $\mathbb P$ and $\Q$ are $\mb{F}$-conditionally independent.
We can also define classes of elements in $\mathfrak{E}$ to be $\mb{F}$-conditionally independent and so it is also meaningful to speak of Riesz subspaces of $\mathfrak{E}$ being $\mb{F}$-conditionally independent.
Denoting the order closed Riesz subspace generated by two subsets $\mathfrak{G}$ and $\mathfrak{H}$ of $\mathfrak{E}$ by $[\mathfrak{G},\mathfrak{H}],$ we recall (\cite[Corollary 4.8]{G4}) that elements $X$ and $Y$ in the Riesz space $\mathfrak{E}$ are $\mb{F}$-conditionally independent if and only if the Riesz subspaces $[X,\mathfrak{F}]$ and $[Y,\mathfrak{F}]$ of $\mathfrak{E}$ are $\mb{F}$-conditionally independent. Since these spaces contain $\mathfrak{F},$ we have by a result of Bruce Watson~\cite{W} that if $\mathfrak{E}$ is $\mb{F}$-universally complete, then there exists a unique conditional expectation $\mb{F}_X:\mathfrak{E}\to [X,\mathfrak{F}]$ such that $\mb{F}\mb{F}_X=\mb{F}_X\mb{F}=\mb{F}.$
We recall the following fact from measure theory (the Doob--Dynkin lemma): Let $\Omega$ be a set and $(E,\mathfrak{S})$ a measurable space. Let $X:\Omega\to E$ be a function and suppose that $Y:\Omega\to\mathbb{R}$ is a $\sigma(X)$-measurable function. Then there exists a measurable function $f:E\to \mathbb{R}$ such that $Y=f(X).$
We need an analogue of this fact in the abstract setting in order to state the Markov property. Firstly, we recall the right-continuous spectral system of an element $X\in\mathfrak{E}:$ Let $E$ be a weak order unit for $\mathfrak{E}.$ For $t\in\mathbb{R}$ we define $\overline{E}_t^r$ to be the component of $E$ in the band generated by $(X-tE)^+=(tE-X)^-$ and set $E_t^r:=E-\overline{E}_t^r.$ Then the system $(E_t^r)_{t\in\mathbb{R}}$ is an increasing right-continuous system of components of $E.$ We proved in~\cite[Lemma 3.7]{G3} that $\mu_X(a,b]:=E_b^r-E^r_a$ defines a measure on the algebra of all left-open right-closed intervals and that it can be extended to a countably additive vector measure on the Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R})$ (see also~\cite[Chapter XI, Section 5]{Vu}). Its values lie in the set $\{\mathbb{P} E:\ \mathbb{P}\in\mathfrak{P}(X)\}.$ The measure $\mu_X$ is a Boolean measure and as such also satisfies the condition that $\mu_X(A\cap B)=\mu_X(A)\wedge \mu_X(B)$ (\cite[Theorem XI.5.(c)]{Vu}). Since we will always work with the right-continuous spectral system, but will have to distinguish between the spectral systems of different elements, we henceforth denote the right-continuous spectral system of the element $X$ by $(E^X_t)_{t\in\mathbb{R}}.$
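For orientation, the following classical instantiation (stated here only as an illustration) may be kept in mind: if $\mathfrak{E}=L^1(\Omega,\Sigma,\mu)$ with weak order unit $E=\mathbf{1}_\Omega,$ then the band generated by $(X-tE)^+$ consists of the functions supported on $\{X>t\},$ so that
\[
E^X_t=\mathbf{1}_{\{X\le t\}},\qquad \mu_X(a,b]=E^X_b-E^X_a=\mathbf{1}_{\{a<X\le b\}},\qquad \mu_X(A)=\mathbf{1}_{X^{-1}(A)}\ \text{ for }A\in\mathcal{B}(\mathbb{R}),
\]
and the Boolean property $\mu_X(A\cap B)=\mu_X(A)\wedge\mu_X(B)$ reduces to $\mathbf{1}_{X^{-1}(A)\cap X^{-1}(B)}=\mathbf{1}_{X^{-1}(A)}\wedge\mathbf{1}_{X^{-1}(B)}.$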
Assuming that $\mathfrak{E}$ has a weak order unit, the functional calculus can be extended to bounded Borel measurable functions (reference to be added). For a bounded measurable function $f$ the integral $$ f(X):=\int_{\mathbb{R}} f(t)\,d\mu_X(t)\in \mathfrak{E} $$ exists as a limit of measurable step functions in the same way as, for the function $f(t)=t,$ the integral exists in the proof of Freudenthal's theorem (see~\cite[Theorem 40.3]{LZ}). For an arbitrary positive measurable function the integral also exists, but its value is not necessarily in $\mathfrak{E},$ but in its sup-completion $\mathfrak{E}^s.$ The Borel measurable function $f$ is then said to be integrable with respect to the Boolean measure $\mu_X$ whenever both $\int_{\mathbb{R}} f^+\,d\mu_X$ and $\int_{\mathbb{R}} f^-\,d\mu_X$ are elements of $\mathfrak{E},$ and in that case the integral is defined to be their difference.
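In particular, for a bounded element $X$ (say $|X|\le ME$) the choice $f(t)=t$ recovers the element itself; in this notation Freudenthal's spectral theorem reads
\[
X=\int_{\mathbb{R}} t\,d\mu_X(t),
\]
the approximating spectral step elements converging $E$-uniformly to $X.$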
The question of which elements $Y$ can be written as $Y=f(X)$ was answered in part by V.I.~Sobolev (see~\cite[Theorem XI.7.a]{Vu}). If $M$ denotes the set of values of $\mu_X,$ i.e., if $M=\{\mu_X(A): A\in\mathcal{B}(\mathbb{R})\},$ then the bounded element $Y\in\mathfrak{E}$ can be written as $f(X)$ for some Borel measurable function $f$ if and only if $E_t^Y\in M$ for all $t\in\mathbb{R}.$ In particular, if $A\in\mathcal{B}(\mathbb{R}),$ then $\mu_X(A)=I_A(X).$
\begin{lemma}\label{lemma 1.1} Let $\mathfrak{E}$ be a Dedekind complete Riesz space with weak order unit $E.$ Let $\sigma(X)$ be the $\sigma$-algebra of components of $E$ generated by the components $E^X_t,$ $t\in\mathbb{R}.$ Then, for each $E_\alpha\in\sigma(X),$ there exists a Borel measurable set $A_\alpha\in\mathcal{B}(\mathbb{R})$ such that $\mu_X(A_\alpha)=E_\alpha.$ \end{lemma} {\em Proof.} We note that for $E_{s,t}:=E^X_t-E^X_s$ the interval $I_{s,t}:=(s,t]\in\mathcal{B}(\mathbb{R})$ satisfies $\mu_X(I_{s,t})=E_{s,t}.$ Consider the set $\mathfrak{D}$ of all $E_\alpha\in\sigma(X)$ that are contained in the range of $\mu_X.$ Then the set of all $E_{s,t}$ is a $\pi$-system contained in $\mathfrak{D}.$ We claim that $\mathfrak{D}$ is a Dynkin system: \begin{enumerate} \item[(a)] The element $E\in\mathfrak{D},$ for $\mu_X(-\infty,\infty)=E^X_{\infty}-E^X_{-\infty}=E-0=E. $ \item[(b)] Let $E_\alpha, E_\beta\in\mathfrak{D}$ with $E_\alpha\le E_\beta.$ Let $\mu_X(A_\alpha)=E_\alpha$ and $\mu_X(A_\beta)=E_\beta$ with $A_\alpha, A_\beta\in\mathcal{B}(\mathbb{R}).$ We first note that the disjoint complement $E_\alpha^d$ of $E_\alpha$ also belongs to $\mathfrak{D},$ because from $E=\mu_X(\mathbb{R})=\mu_X(A_\alpha\cup A_\alpha^c)=\mu_X(A_\alpha)+\mu_X(A_\alpha^c)=E_\alpha+\mu_X(A_\alpha^c)$ it follows that $\mu_X(A_\alpha^c)=E-E_\alpha=E_\alpha^d.$ Now, $E_\beta-E_\alpha=E_\beta\wedge E_\alpha^d=\mu_X(A_\beta)\wedge\mu_X(A_\alpha^c)=\mu_X(A_\beta\cap A_\alpha^c)$ by our earlier remark that $\mu_X$ is a Boolean measure. So $E_\beta-E_\alpha\in\mathfrak{D}.$ \item[(c)] Let $E_n\in\mathfrak{D}$ with $E_n\uparrow E_0.$ Let $\mu_X(A_n)=E_n.$ We claim that $\mu_X(\bigcup_nA_n)=E_0.$ Define the disjoint sequence of sets $B_n$ in the usual way by $B_1=A_1,$ $B_{n+1}=A_{n+1}\setminus\bigcup_{j=1}^n B_j.$ Then $\bigcup_nB_n= \bigcup_nA_n=:A_0,$ and, since $E_n\uparrow,$ $$ \mu_X(B_1)=\mu_X(A_1)=E_1,\quad \mu_X(B_2)=E_2-E_1,\ \ldots,\ \mu_X(B_n)=E_n-E_{n-1}. $$ It follows that $$ \mu_X(A_0)=\sum_{n=1}^\infty\mu_X(B_n)=\sum_{n=1}^\infty (E_{n}-E_{n-1})=E_0. $$ We therefore have that $E_0\in\mathfrak{D}.$ \end{enumerate} It follows that $\mathfrak{D}$ contains the $\sigma$-algebra generated by the images of all left-open right-closed intervals, i.e., $\sigma(X)\subset \mathfrak{D}.$ Thus, for every $E_\alpha\in\sigma(X),$ there exists an element $A_\alpha\in\mathcal{B}(\mathbb{R})$ such that $\mu_X(A_\alpha)=E_\alpha.$
$\Box$
In general the algebra of components of $E$ generated by the components $E^X_t,$ $t\in\mathbb{R},$ may not be a $\sigma$-algebra. This may depend on $X$ but also on the Riesz subspace $\mathfrak{F}$ of $\mathfrak{E}$ to which $X$ belongs. One can define an element $X\in\mathfrak{E}$ to be a \textit{measurable element} of $\mathfrak{E}$ if the Boolean $\sigma$-algebra $\sigma(X)$ generated by $\{E^X_t :\ t\in\mathbb{R}\}$ is equal to the order complete Boolean algebra generated by this set. This is equivalent to saying that the Boolean algebra $\mathfrak{P}(X)$ is the $\sigma$-algebra generated by the projections $\mathbb{P}(tE>X),$ $t\in\mathbb{R}.$
\noindent \textit{Conjecture:\,} {\sl If $\mathfrak{E}$ is a super-Dedekind complete Riesz space, then every $X\in\mathfrak{E}$ is measurable.}
\begin{proposition} Let $\mathfrak{E}$ be a Dedekind complete Riesz space with weak order unit $E$ and let $X$ be a measurable element of $\mathfrak{E}.$ Let $\mathfrak{G}_X$ be the Riesz subspace of $\mathfrak{E}$ generated by the algebra $\mathfrak{P}(X).$ Then, for every $Y\in\mathfrak{G}_X$ there exists a real valued Borel measurable function $f$ defined on $\mathbb{R}$ such that $Y=f(X).$ \end{proposition}
{\em Proof.} Note that our assumption is that $\mathfrak{P}(X)=\sigma(X).$ Let $Y\in\mathfrak{G}_X$ be bounded, say $|Y|\le ME.$ Then, by Freudenthal's theorem, there exists a sequence of simple elements of the form $$ s_n=\sum_{i\in\pi}t_iE_i, \qquad E_i\in\sigma(X), $$ that converges $E$-uniformly to $Y.$ By Lemma~\ref{lemma 1.1}, there exists, for each $i,$ a Borel measurable set $A_i$ such that $E_i=\mu_X(A_i).$ Hence, $$ s_n=\sum_{i\in\pi}t_i\mu_X(A_i)=\int_{\mathbb{R}}\Big(\sum_{i\in\pi}t_iI_{A_i}\Big)\,d\mu_X=\sigma_n(X), $$ with $\sigma_n$ the real step function $\sigma_n(t)=\sum_{i\in\pi}t_iI_{A_i}(t).$ It follows easily that, since $s_n\to Y$ and $\sigma_n\to f,$ we have $Y=f(X).$
The extension to arbitrary $Y\in\mathfrak{G}_X$ presents no difficulties.
$\Box$
Remark: It seems that the definition of a Markov process used by S.E. Shreve~\cite[page 76]{Sh} can be used in our case only if we make the extra assumption that the elements $X_t$ in the stochastic process $(X_t,\mathfrak{F}_t,\mathbb{F})_{t\in J}$ are measurable elements of the Riesz space $\mathfrak{E}.$
\section{Markov Processes}
In the theory of Markov processes it is convenient to use some of the classical notation. The order closed Riesz subspace generated in $\mathfrak{E}$ by $\mathfrak{F}$ and the elements $X_1,X_2,\ldots,X_n\in \mathfrak{E},$ that is, the space $[X_1,\ldots,X_n,\mathfrak{F}],$ will be denoted by $\mathfrak{F}_{X_1,X_2,\ldots,X_n}.$ The unique conditional expectation that maps $\mathfrak{E}$ onto this subspace will be denoted by $\mathbb{F}(\cdot\,|\,X_1,X_2,\ldots,X_n).$ This is the conditional expectation that satisfies $\mathbb{F}\,\mathbb{F}(\cdot\,|\,X_1,X_2,\ldots,X_n)=\mathbb{F}(\cdot\,|\,X_1,X_2,\ldots,X_n)\,\mathbb{F}=\mathbb{F}.$
Similarly, the order closed Riesz subspace generated by $\mathfrak{F}$ and $\{X_s : s\ge t\}$ will be denoted by $\mathfrak{F}_{X_{s,s\ge t}}$ and the conditional expectation onto this space by $\mathbb{F}(\cdot\,|\,X_{s,s\ge t}).$ The notation $\mathfrak{F}_{X_{s,s\le t}}$ needs no explanation. It defines a filtration on $\mathfrak{E}$ to which $(X_t)$ is adapted, which Karatzas and Shreve~\cite{KS} denote by $\mathfrak{F}^X_t.$ We shall also adopt this simpler notation.
We give abstract formulations of several definitions used in the literature. In each case the citation refers to the classical definition.
1. \cite[Definition 1.1]{BG}. Theorem 4.5.4 in Ash and Gardner~\cite{AG}.
\begin{definition}[{\rm Blumenthal, R.M.; Getoor, R.K.}] Let $\mathfrak{E}$ be a Dedekind complete Riesz space with a weak order unit $E.$ Let $(X_t,\mathfrak{F}_t,\mathbb{F}_t)_{t\in J}$ be a stochastic process adapted to the filtration $(\mathfrak{F}_t,\mathbb{F}_t).$ The process is called a \textit{Markov process} if $\mathfrak{F}_t$ and $\mathfrak{F}_{X_{s,s\ge t}}$ are $\mathbb{F}_t$-conditionally independent, i.e., \begin{equation} \mathbb{F}_t(\mathbb{P}\mathbb{Q})\mathbb{F}_t=\mathbb{F}_t(\mathbb{P})\mathbb{F}_t(\mathbb{Q})\mathbb{F}_t \quad\mbox{for all }\mathbb{P}\in\mathfrak{P}_t,\ \mathbb{Q}\in\mathfrak{P}(\mathfrak{F}_{X_{s,s\ge t}}). \end{equation} \end{definition}
2. \cite[Definition 4.5.1]{AG}. Theorem 1.3(iii) in Blumenthal and Getoor.
\begin{definition}[{\rm Ash and Gardner}] Let $\mathfrak{E}$ be a Dedekind complete Riesz space with a weak order unit $E.$ Let $(X_t,\mathfrak{F}_t,\mathbb{F}_t)_{t\in J}$ be a stochastic process adapted to the filtration $(\mathfrak{F}_t,\mathbb{F}_t).$ The process is called a \textit{Markov process} if for every Borel set $B\in\mathcal{B}(\mathbb{R})$ and all $s,t\in J$ with $s<t,$ \begin{equation}
\mathbb{F}_s(I_B(X_t))=\mathbb{F}(I_B(X_t)\,|\,X_s). \end{equation} Equivalently: for any Borel measurable function $g$ for which $g(X_t)\in\mathfrak{E},$ \begin{equation}
\mathbb{F}_s(g(X_t))=\mathbb{F}(g(X_t)\,|\,X_s). \end{equation} \end{definition} Note that by the definition of the measure $\mu_{X_t},$ equation (2.3) can be written as \begin{equation}
\mathbb{F}_s(\mu_{X_t}(B))=\mathbb{F}(\mu_{X_t}(B)\,|\,X_s) \end{equation} and the next equation as \begin{equation}
\mathbb{F}_s\Big(\int_{\mathbb{R}} g\,d\mu_{X_t}\Big)=\mathbb{F}\Big(\int_{\mathbb{R}} g\,d\mu_{X_t}\,\Big|\,X_s\Big) \end{equation}
3. \cite[Definition 2.3.6]{Sh}. See Ash and Gardner: Comments 4.5.2(c). \begin{definition}[{\rm Shreve, S.E.}] Let $\mathfrak{E}$ be a Dedekind complete Riesz space with a weak order unit $E.$ Let $(X_t,\mathfrak{F}_t,\mathbb{F}_t)_{t\in J}$ be a stochastic process adapted to the filtration $(\mathfrak{F}_t,\mathbb{F}_t).$ The process is called a \textit{Markov process} if for all $s\le t$ in $J$ and for every nonnegative Borel measurable function $f,$ there exists a Borel measurable function $g$ such that \begin{equation} \mathbb{F}_s(f(X_t))=g(X_s). \end{equation} \end{definition}
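As a purely classical illustration of this definition (not part of the abstract framework): for standard Brownian motion $(B_t)$ on a probability space, the function $g$ can be written down explicitly. For $s<t$ and nonnegative Borel measurable $f,$ independence and stationarity of the increment $B_t-B_s$ give
\[
\mathbb{E}\big(f(B_t)\,\big|\,\mathcal{F}_s\big)=g(B_s),\qquad g(x)=\frac{1}{\sqrt{2\pi(t-s)}}\int_{\mathbb{R}} f(y)\,e^{-(y-x)^2/2(t-s)}\,dy.
\]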
4. \cite[Definition 10.5.4]{Kuo}. See Ash and Gardner: Comments 4.5.3(b).
\begin{definition}[{\rm Kuo}] Let $\mathfrak{E}$ be a Dedekind complete Riesz space with a weak order unit $E.$ Let $(X_t,\mathfrak{F}_t,\mathbb{F}_t)_{t\in J}$ be a stochastic process adapted to the filtration $(\mathfrak{F}_t,\mathbb{F}_t).$ The process is called a \textit{Markov process} if for $a\le t_1<t_2<\cdots<t_n<t\le b,$ we have the equality \begin{equation}
\mathbb{F}(\mathbb{P}(X_t\le xE)E\,|\,X_{t_1},X_{t_2},\ldots,X_{t_n})=\mathbb{F}(\mathbb{P}(X_t\le xE)E\,|\,X_{t_n}). \end{equation} \end{definition}
Note that our notation for $\mathbb{P}(X_t\le xE)E$ is $E^{X_t}_x.$ So the equation above becomes \begin{equation}
\mathbb{F}(E^{X_t}_x\,|\,X_{t_1},X_{t_2},\ldots,X_{t_n})=\mathbb{F}(E^{X_t}_x\,|\,X_{t_n}). \end{equation}
5. \cite[Definition 4.1]{VW} \begin{definition}[{\rm Vardy, J. and Watson, B.A.}]
Let $\mathfrak{E}$ be a Dedekind complete Riesz space with a weak order unit $E.$ Let $(X_t,\mathfrak{F}_t,\mathbb{F}_t)_{t\in J}$ be a stochastic process adapted to the filtration $(\mathfrak{F}_t,\mathbb{F}_t).$ The process is called a \textit{Markov process} if for any set of points $a\le t_1<t_2<\cdots<t_n<t\le b$ and for any component $E^{X_t}_\alpha$ of $E,$ we have the equality \begin{equation}
\mathbb{F}(E^{X_t}_\alpha\,|\,X_{t_1},X_{t_2},\ldots,X_{t_n})=\mathbb{F}(E^{X_t}_\alpha\,|\,X_{t_n}). \end{equation} This is equivalent to \begin{equation}
\mathbb{F}(\cdot\,|\,X_{t_1},X_{t_2},\ldots,X_{t_n})\mathbb{F}_t=\mathbb{F}(\cdot\,|\,X_{t_n})\mathbb{F}_t. \end{equation}
\end{definition}
Definition 2.3 appears to us to be the strongest of these, so we take Definition 2.3 as our definition of a Markov process.
\begin{lemma} Let $(X_t,\mathfrak{F}_t)_{t\in J}$ be a Markov process and suppose that $f$ and $g$ are nonnegative functions as in the definition. Then for $s\le t$ in $J$ we have \begin{equation}
g(X_s)=\mathbb{F}(f(X_t)\,|\,X_s). \end{equation} \end{lemma}
{\em Proof.}\ Since $\mathfrak{F}_{X_s}\subset\mathfrak{F}_s,$ the properties of a conditional expectation give $\mathbb{F}(\mathbb{F}_s(f(X_t))\,|\, X_s)=\mathbb{F}(f(X_t)\,|\, X_s).$ But, since $(X_t)$ is a Markov process, $\mathbb{F}_s(f(X_t))=g(X_s),$ and so, substituting this in the first equation, we get $$
\mathbb{F}(f(X_t)\,|\, X_s)=\mathbb{F}(g(X_s)\,|\,X_s)=g(X_s). $$
$\Box$
\begin{corollary} If $(X_t,\mathfrak{F}_t)$ is a Markov process, then it satisfies the condition of Definition 2.2 (i.e., the definition found in Ash and Gardner~\cite{AG}). \end{corollary} {\em Proof.\ } If it is a Markov process, then, for $s\le t$ and any Borel measurable $f,$ we have for some Borel measurable $g$ that $$
\mathbb{F}_s(f(X_t))=g(X_s)=\mathbb{F}(f(X_t)\,|\,X_s), $$ which is the condition of the Ash--Gardner definition.
$\Box$
The equivalent formulation of the Ash--Gardner definition is that for all Borel measurable sets $B$ and all $s,t\in J$ with $s<t,$ $$
\mathbb{F}_s(I_B(X_t))=\mathbb{F}(I_B(X_t)\,|\,X_s). $$
We say that a stochastic process $(X_t)$ has the Markov property with respect to a finite set $I=\{t_1<t_2<\cdots<t_n\}$ if, for all Borel sets $B\in\mathcal{B}(\mathbb{R}),$ we have $$
\mathbb{F}(I_B(X_{t_n})\,|\,X_{t_1},X_{t_2},\ldots,X_{t_{n-1}})=\mathbb{F}(I_B(X_{t_n})\,|\,X_{t_{n-1}}). $$
In the proof of the next result, we use the monotone class theorem for order complete algebras of projections. The difference from the usual application is that we apply it to upward directed systems, whereas in the classical case of $\sigma$-algebras it is applied to increasing sequences.
\begin{enumerate}
\item[(a)] A class of projections is called a {\em $\pi$-class} if it is closed under multiplication. \item[(b)] A class of projections is called a {\em $d$-class} if \begin{enumerate} \item[(i)] the identity operator $\mathbb{I}$ is in the class; \item[(ii)] if $\mathbb{P}\le \mathbb{Q},$ with both $\mathbb{P}$ and $\mathbb{Q}$ in the class, then $\Q-\mathbb{P}=\mathbb{Q}\mathbb{P}^d$ is in the class; \item[(iii)] if $\mathbb{P}_\alpha$ is in the class and $\mathbb{P}_\alpha\uparrow\mathbb{P},$ then $\mathbb{P}$ is in the class. \end{enumerate} \item[(c)] The monotone class theorem: if a $d$-class contains a $\pi$-class, then it contains the order complete algebra generated by the $\pi$-class. \end{enumerate} The only point where the proof of $(c)$ differs from the proof in the countable case (see for instance~\cite[Theorem 1.3.9]{A&D}) is in showing that the supremum of an arbitrary set of projections $\mathbb{P}_\alpha$ belonging to a $d$-system is again in the algebra. Here we use the standard method of first forming the set of all finite suprema of the $\mathbb{P}_\alpha,$ which is an upward directed set of projections having the same supremum as the original set.
\begin{proposition} Let $(X_t,\mathfrak{F}^X_t)_{t\in J}$ be a stochastic process. If $(X_t)_{t\in I}$ has the Markov property for all finite subsets $I\subset J,$ then $(X_t,\mathfrak{F}^X_t)_{t\in J}$ satisfies the Ash--Gardner definition of a Markov process. \end{proposition}{\em Proof.} We have to prove that for every Borel set $B$ and for $s<t,$ $$
\mathbb{F}(I_B(X_t)\,|\,X_r, r\le s)=\mathbb{F}(I_B(X_t)\,|\,X_s). $$ To do this, we note that the Boolean algebra of projections generated by $\mathfrak{P}_{X_r,r\le s}$ is equal to the Boolean algebra of projections generated by all finite families of Boolean algebras $\mathfrak{P}_{X_{t_1},X_{t_2},\ldots,X_{t_n}}$ with $t_1<t_2<\ldots<t_n\le s.$ Using a defining property of a conditional expectation operator (see~\cite[Theorem 3.3]{G2}), we will prove that for all projections $\mathbb{P}$ belonging to $\mathfrak{P}_{X_{r,r\le s}},$ we have \begin{equation}
\mathbb{F}(\mathbb{P}\, I_B(X_t))=\mathbb{F}\big(\mathbb{P}\,\mathbb{F}(I_B(X_t)\,|\,X_s)\big). \label{equation 2.12}\end{equation}
This then implies that $\mathbb{F}(I_B(X_t)\,|\,X_s)=\mathbb{F}(I_B(X_t)\,|\,X_r, r\le s).$
We do this using the monotone class theorem: Let $\mathfrak{P}_s$ be the set of all projections $\mathbb{P}$ satisfying Equation~(\ref{equation 2.12}). For any $\mathbb{P}\in\mathfrak{P}_{X_{t_1},\ldots,X_{t_n}},$ we have, by the definition of conditional expectation, that \begin{align*}
\mathbb{F}(\mathbb{P}\, I_B(X_t))&=\mathbb{F}\big(\mathbb{P}\,\mathbb{F}(I_B(X_t)\,|\,X_{t_1},\ldots,X_{t_n},X_s)\big) \\
&=\mathbb{F}\big(\mathbb{P}\,\mathbb{F}(I_B(X_t)\,|\,X_s)\big), \end{align*} with the last equality by our assumption that $(X_t)$ has the Markov property for finite sets. Thus, $\mathfrak{P}_{X_{t_1},\ldots,X_{t_n}}\subset \mathfrak{P}_s.$ This holds for any choice of indices, and so the Boolean algebra generated by the algebras $\mathfrak{P}_{X_{t_1},\ldots,X_{t_n}}$ is a Boolean algebra of projections (and therefore a $\pi$-class) contained in $\mathfrak{P}_s.$ It is clear that $\mathbb{I}\in\mathfrak{P}_s,$ because $\mathbb{I}\in \mathfrak{P}_{X_{t_1},\ldots,X_{t_n}}.$ Also, if $\mathbb{P}_\alpha\uparrow \mathbb{P}$ and every $\mathbb{P}_\alpha$ satisfies Equation~(\ref{equation 2.12}), then by the order continuity of the relevant conditional expectation operators, $\mathbb{P}$ also satisfies Equation~(\ref{equation 2.12}). By the monotone class theorem this shows that $\mathfrak{P}_s$ contains the order complete algebra generated by all the $\mathfrak{P}_{X_{t_1},\ldots,X_{t_n}}$ for any choice of indices. As this algebra is equal to the algebra $\mathfrak{P}_{X_{r},r\le s},$ Equation~(\ref{equation 2.12}) holds for all $\mathbb{P}$ in this algebra and we are done.
$\Box$
\begin{proposition} If $(X_t,\mathfrak{F}_t)$ is a Markov process and if $E_\alpha$ is a component of $E$ such that $E_\alpha\in\mathfrak{F}_{X_{r,r\ge t}},$ then \begin{equation}
\mathbb{F}_t(E_\alpha)=\mathbb{F}(E_\alpha\,|\,X_t). \end{equation} \end{proposition}
Using this result one can prove: if $(X_t,\mathfrak{F}_t)$ is a stochastic process adapted to the filtration $(\mathfrak{F}_t),$ then it is a Markov process if for each $t$ the Riesz spaces $\mathfrak{F}_t$ and $\mathfrak{F}_{X_{s,s\ge t}}$ are conditionally independent given $X_t.$ This is the Blumenthal--Getoor definition.
We next prove that if $X_t$ is measurable, then any one of Definitions 2.1, 2.2, 2.4 and 2.5 implies Definition 2.3 (Shreve), which we took as our definition.
The next lemma was proved by Vardy and Watson~\cite{VW}. \begin{lemma} If $\displaystyle X_n=\sum_{k=1}^n Y_k$ for $n=1,2,\ldots,$ and if the $(Y_k)$ are $\mathbb{F}$-conditionally independent, then $(X_n)$ is a Markov process with respect to the filtration $\mathfrak{F}^Y_n=\mathfrak{F}_{Y_1,Y_2,\ldots,Y_n}$ and hence with respect to the filtration $\mathfrak{F}^X_n=\mathfrak{F}_{X_1,X_2,\ldots,X_n}.$ \end{lemma}
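In the classical case the proof of this lemma reduces to a familiar computation: writing $X_{n+1}=X_n+Y_{n+1}$ with $Y_{n+1}$ independent of $Y_1,\ldots,Y_n$ and with distribution $\nu_{n+1},$ one gets
\[
\mathbb{E}\big(f(X_{n+1})\,\big|\,X_1,\ldots,X_n\big)=g(X_n),\qquad g(x)=\int_{\mathbb{R}} f(x+y)\,d\nu_{n+1}(y),
\]
so that conditioning on the whole past enters only through $X_n.$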
Hence, we also derive the well-known result:
\begin{theorem} A process $(X_t)$ with independent increments is a Markov process. \end{theorem}
\begin{corollary} A Brownian motion $(B_t)$ is a Markov process. \end{corollary}
\section{Representation by Brownian integrals}
\section*{Bibliography}
\input{BibliografieMarkov.tex}
\end{document}
\section{Preliminaries} We assume $\mathfrak{E}$ to be a Dedekind complete Riesz space with weak order unit $E,$ separated by its order continuous dual $\mathfrak{E}^\sim_{00}.$ We also assume that $\mathfrak{E}$ is {\em perfect,} i.e., $\mathfrak{E}=(\mathfrak{E}^\sim_{00})^\sim_{00}.$ For the theory of Riesz spaces (vector lattices) we refer the reader to the following standard texts~\cite{AB2,LZ,MN,Sch,Z1,Z2}. For results on topological vector lattices the standard references are~\cite{AB1,F}. We denote the {\it universal completion} of $\mathfrak{E},$ which is an $f$-algebra containing $\mathfrak{E}$ as an order dense ideal, by $\mathfrak{E}^u.$ Its multiplication is an extension of the multiplication defined on the principal ideal $\mathfrak{E}_E,$ and $E$ is the algebraic unit and a weak order unit for $\mathfrak{E}^u$ (see \cite{Z1}). The set of order bounded band preserving operators, called orthomorphisms, is denoted by $\operatorname{Orth}(\mathfrak{E}).$ We refer to \cite{Donner,G9} for the definition and properties of the \emph{sup-completion} $\mathfrak{E}^s$ of a Dedekind complete Riesz space $\mathfrak{E}.$ It is the unique Dedekind complete ordered cone that contains $\mathfrak{E}$ as a sub-cone of its group of invertible elements, and its most important property is that it has a largest element.
Since $\mathfrak{E}^s$ is Dedekind complete and has a largest element, every subset of $\mathfrak{E}^s$ has a supremum in $\mathfrak{E}^s.$ Also, for every $C\in\mathfrak{E}^s,$ we have $C=\sup\{X\in\mathfrak{E}: X\le C\},$ and $\mathfrak{E}$ is a solid subset of $\mathfrak{E}^s.$
As mentioned in the introduction, a {\em conditional expectation} $\mathbb{F}$ defined on $\mathfrak{E}$ is a strictly positive order continuous linear projection whose range is a Dedekind complete Riesz subspace $\mathfrak{F}$ of $\mathfrak{E},$ with the property that $\mathbb{F}$ maps weak order units onto weak order units. It may be assumed, as we will do, that $\mathbb{F}E=E$ for the weak order unit $E.$ The space $\mathfrak{E}$ is called {\em $\mathbb{F}$-universally complete} (respectively, {\em $\mathbb{F}$-universally complete in $\mathfrak{E}^u$}) if, whenever $X_\alpha\uparrow$ in $\mathfrak{E}$ and $\mathbb{F}(X_\alpha)$ is bounded in $\mathfrak{E}$ (respectively, in $\mathfrak{E}^u$), then $X_\alpha\uparrow X$ for some $X\in\mathfrak{E}.$ If $\mathfrak{E}$ is $\mathbb{F}$-universally complete in $\mathfrak{E}^u,$ then it is $\mathbb{F}$-universally complete.
\textit{We shall assume henceforth that $\mathfrak{E}$ is $\mathbb{F}$-universally complete in $\mathfrak{E}^u.$}\\ It follows that if $\mathfrak{G}$ is an order closed Riesz subspace of $\mathfrak{E}$ with $\mathfrak{F}\subset\mathfrak{G},$ then there exists a unique conditional expectation $\mathbb{F}_{\mathfrak{G}}$ on $\mathfrak{E}$ with range $\mathfrak{G}$ and $\mathbb{F}\mathbb{F}_{\mathfrak{G}}=\mathbb{F}_{\mathfrak{G}}\mathbb{F}=\mathbb{F}$ (see~\cite{G2,W}).
The conditional expectation $\mathbb{F}$ may be extended to the sup-completion in the following way: for every $X\in\mathfrak{E}^s,$ define $\mathbb{F}X:=\sup_{\alpha}\mathbb{F}X_\alpha\in\mathfrak{E}^s$ for any upward directed net $X_\alpha\uparrow X$ with $X_\alpha\in\mathfrak{E};$ this is well defined (see~\cite{G8}). We define $\operatorname{dom}^+ \mathbb{F}:=\{0\le X\in\mathfrak{E}^s:\ \mathbb{F}(X)\in\mathfrak{E}^u\}.$ Then $\operatorname{dom}^+\mathbb{F}\subset\mathfrak{E}^u$ (see~\cite[Proposition 2.1]{G6}) and we define $\operatorname{dom}\mathbb{F}:=\operatorname{dom}^+\mathbb{F}-\operatorname{dom}^+\mathbb{F}.$ If $\mathfrak{E}$ is $\mathbb{F}$-universally complete in $\mathfrak{E}^u,$ then $\operatorname{dom}\mathbb{F}=\mathfrak{E}.$
If $XY\in \operatorname{dom}\mathbb{F}$ (with the multiplication taken in the $f$-algebra $\mathfrak{E}^u$), where $Y\in \mathfrak{E}$ and $X\in \mathfrak{F}=\mathcal{R}(\mathbb{F}),$ we have that $\mathbb{F}(XY)= X \mathbb{F}(Y).$ This fundamental fact is referred to as the \emph{averaging property} of $\mathbb{F}$ (see~\cite{G1}).
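In the probabilistic model $\mathfrak{E}=L^1(\Omega,\Sigma,\mu)$ with $\mathbb{F}=\mathbb{E}(\cdot\,|\,\Sigma_0)$ for a sub-$\sigma$-algebra $\Sigma_0\subset\Sigma,$ the averaging property is the familiar rule of ``taking out what is known'':
\[
\mathbb{E}(XY\,|\,\Sigma_0)=X\,\mathbb{E}(Y\,|\,\Sigma_0)\qquad\text{for }\Sigma_0\text{-measurable }X\text{ with }XY\in L^1(\Omega).
\]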
Let $\Phi$ be the set of all $\phi\in\mathfrak{E}^\sim_{00}$ satisfying $|\phi|(E)=1$ and extend $|\phi|$ to $\mathfrak{E}^s$ by continuity. Define $\mathscr{P}$ to be the set of all Riesz seminorms defined on $\mathfrak{E}^s$ by $p_{\phi}(X):=|\phi|(\mb{F}(|X|)),$ where $\phi\in\Phi.$ Similarly, define the set $\mathscr{Q}$ of Riesz seminorms $q_\phi,$ $\phi\in\Phi,$ by putting $q_{\phi}(X):=(|\phi|(\mb{F}(|X|^2)))^{1/2},$ where the product is formed in the $f$-algebra $\mathfrak{E}^u.$ We define the space $\mc{L}^1:=\{X\in\mathfrak{E}^s:\ p_\phi(X)<\infty\mbox{ for all } \phi\in\Phi\},$ equipped with the locally solid topology $\sigma(\mathscr{P})$ (for the proof see~\cite{G9}). The space $\mc{L}^2$ then consists of all $X\in\mathfrak{E}^s$ satisfying $q_{\phi}(X)<\infty\mbox{ for all } \phi\in\Phi,$ equipped with the locally solid topology $\sigma(\mathscr{Q}).$ We have
$\mc{L}^2=\{X\in\mathfrak{E}^s\,:\, |X|^2\in\mathfrak{E}\}.$ One also has the Cauchy inequality $p_{\phi}(XY)\le q_{\phi}(X)q_{\phi}(Y)$ for all $X,Y\in \mc{L}^2,$ and hence $XY\in\mc{L}^1$ for all $X,Y\in\mc{L}^2.$ The spaces $\mc{L}^1$ and $\mc{L}^2$ are topologically complete (see~\cite{G7} and~\cite{G9}, and note that this may fail without the assumption that $\mathfrak{E}$ is $\mb{F}$-universally complete in $\mathfrak{E}^u$).
A {\em filtration} on $\mathfrak{E}$ is a set $(\mb{F}_t)_{t\in J}$ of conditional expectations satisfying $\mb{F}_s=\mb{F}_s\mb{F}_t$ for all $s<t.$ We denote the range of $\mb{F}_t$ by $\mathfrak{F}_t.$ A {\em stochastic process} in $\mathfrak{E}$ is a function $t\mapsto X_t\in\mathfrak{E},$ for $t\in J,$ with $J\subset\mb{R}^+$ an interval. The stochastic process $(X_t)_{t\in J}$ is {\em adapted to the filtration} if $X_t\in\mathfrak{F}_t$ for all $t\in J.$ If $(X_t)$ is a stochastic process adapted to $(\mb{F}_t, \mathfrak{F}_t)$, we call $(X_t, \mathfrak{F}_t,\mb{F}_t)$ a \emph{supermartingale} (respectively \emph{submartingale}) if $\mb{F}_t(X_s)\leq X_t$ (respectively $\mb{F}_t(X_s)\geq X_t$) for all $t\leq s$. If the process is both a sub- and a supermartingale, it is called a \emph{martingale}.
The submartingale $(X_t,\mathfrak{F}_t,\mb{F}_t)_{t\in J}$ is said to have a {\em Doob-Meyer decomposition} if $X_t=M_t+A_t$ with $(M_t)$ a martingale and $(A_t)$ a right-continuous increasing process. Conditions for a submartingale to have a unique Doob-Meyer decomposition can be found in~\cite{G1,G9}. If $(X_t)$ is a martingale, the increasing process in the decomposition of the submartingale $(X_t^2)$ is denoted by $(\<X\>_t),$ and this process is called the {\em compensator} of the martingale $(X_t).$
The stochastic process $(X_t)_{t\in J=[a,b]}$ is called \textit{locally H\"older continuous with exponent $\gamma$} (also \textit{$\gamma$-H\"older continuous}) if there exist a number $\delta>0$ and a strictly positive orthomorphism $\mb{S}$ such that for all $s,t\in[a,b]$ satisfying $0<|t-s|\mb{I}\le\mb{S}$ on a band $\mathfrak{C},$ one has
$|X_t-X_s|\le\delta|t-s|^\gamma E \mbox{ on the band }\mathfrak{C}.$ The maximal band for which this can hold for given $s$ and $t$ is the band
$\mathfrak{B}(|t-s|E\le\mb{S} E)=\{(|t-s|E-\mb{S} E)^+\}^d$ (see~\cite{G4,G5}). We note that if $\delta_n=s_n-t_n,$ then $(\delta_n E-\mb{S} E)^+\downarrow 0$ as $\delta_n\downarrow 0.$ Therefore $\mathfrak{B}_n:=\mathfrak{B}(\delta_n E\le\mb{S} E)\uparrow \mathfrak{E}.$
If $(X_t)$ is a locally $\gamma$-H\"older continuous submartingale with Doob-Meyer decomposition $X_t=M_t+A_t,$ then $(A_t)$ is also locally $\gamma$-H\"older continuous (see~\cite{G10}).
For $\pi=\{a=t_0<t_1<\cdots<t_n=t\}$ a partition of the interval $[a,t]$ with mesh $|\pi|,$ we put $ V_t^{(p)}(\pi):=\sum_{i=1}^n|X_{t_{i}}-X_{t_{i-1}}|^p.$ The element $ V_t^{(p)}(X):=\sup_{\pi}V_t^{(p)}(\pi)\in\mathfrak{E}^s $ is called the {\em $(p)$-variation of $X$ on $[a,t]$.} We say $X$ has {\em finite $(p)$-variation on $[a,t]$} if $V_t^{(p)}(X)\in\mathfrak{E}^u,$ and that $X$ has {\em finite ${(p)}$-variation} if it has finite $(p)$-variation on $[a,t]$ for every $t\in[a,b].$ If $V_b^{(p)}(X)\in\mathfrak{E}^u,$ we say that $X$ is of {\em bounded $(p)$-variation}. The function $t\mapsto V_t^{(p)}(X)$ is called the {\em total $(p)$-variation process of $X$.} If $p=2$ we call the variation the {\em quadratic variation}.
The component of $E$ in the band $\mathfrak{B}(tE>X)$ is denoted by $E_t^\ell,$ and $(E_t^\ell)_{t\in J}$ is an increasing left-continuous system, called the {\em left-continuous spectral system of $X$.} Also, if $\overline{E}^r_t$ is the component of $E$ in the band generated by $(X-tE)^+$ and $E^r_t:=E-\overline{E}^r_t,$ the system $(E^r_t)$ is an increasing right-continuous system of components of $E,$ called the {\em right-continuous spectral system} of $X$ (see~\cite{LZ,G2}). For the filtration $(\mb{F}_t,\mathfrak{F}_t)_{t\in J},$ we denote by $\mathfrak{P}_t$ the set of all order projections in the space $\mathfrak{F}_t.$ A {\em stopping time} for this filtration is an orthomorphism $\mb{S}\in\operatorname{Orth}(\mathfrak{E})$ whose right-continuous spectral system $(\mb{S}^r_t)$ of projections satisfies $\mb{S}^r_t\in\mathfrak{P}_t;$ if this holds for the left-continuous system, it is called an {\em optional time}. We refer the reader to~\cite{G2,G10} for the definition of the element $X_{\T},$ where $X$ is a submartingale adapted to the filtration $(\mb{F}_t,\mathfrak{F}_t)_{t\in[a,b]}$ and $\T$ is a stopping time for the filtration. We note in this respect that the element $X_\T$ is defined first for an increasing process, then for a martingale, and the two are put together for a submartingale.
The process $(X_{t\wedge\T})$ is called the {\em stopped process}. If there exists a non-decreasing sequence $(\T_n)$ of stopping times such that $\T_n\uparrow b\mb{I}$ and such that for each $n$ the process $(X_{t\wedge \T_n})_{t\in[a,b]}$ is a martingale, then $(X_t)$ is called a {\em local martingale}.
The stochastic process $(B_t,\mathfrak{F}_t,\mb{F}_t)$ is called an $\mb{F}$-conditional {\em Brownian motion} if, for all $0\le s<t,$ the following conditions hold: \begin{enumerate} \item[(1)] $B_0=0;$ \item[(2)] the increment $B_t-B_s$ is $\mb{F}$-conditionally independent of $\mathfrak{F}_s;$ \item[(3)] $\mb{F}[(B_t-B_s)^2]=(t-s)E;$ \item[(4)] $\mb{F}[(B_t-B_s)^4]=3(t-s)^2E.$ \end{enumerate} An $\mb{F}$-conditional Brownian motion is a martingale that is locally H\"older continuous with exponent $\gamma$ for every $\gamma\in(0,\tfrac 14)$ and whose compensator satisfies $\<B\>_t=tE$ (see~\cite[Definition 5.2]{G4}).
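The form of the compensator can be checked directly from the definition; the following short verification is only a sketch, using properties (2) and (3), the averaging property, and the martingale property of $(B_t).$ For $0\le s<t,$ the conditional independence in (2) allows (3) to be read with $\mb{F}_s$ in place of $\mb{F},$ so that \begin{align*} \mb{F}_s[B_t^2-tE]&=\mb{F}_s[(B_t-B_s)^2]+2B_s\mb{F}_s[B_t-B_s]+B_s^2-tE\\ &=(t-s)E+0+B_s^2-tE=B_s^2-sE. \end{align*} Hence $(B_t^2-tE)$ is a martingale and, by the uniqueness of the Doob-Meyer decomposition of $(B_t^2),$ the compensator of $B$ is $\<B\>_t=tE.$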
The integrals we use are the stochastic integral with respect to a martingale and the Dobrakov integral, for a summary of which we refer the reader to~\cite{G9}. The latter integral is defined for an $\mc{L}^2$-valued function $X(t)$ with respect to an operator-valued measure $\mu_A$ defined on the Borel $\sigma$-algebra $\mc{B}(J)$ with values in $L(\mc{L}^2,\mc{L}^1).$ The vector measure $\mu_A$ on $\mc{B}(J)$ is the extension of a measure defined on intervals by an integrable increasing right-continuous process $(A_t)_{t\in J},$ with $\mu_A(a,b]:=A_b-A_a,$ and we assume $A_t\in\mc{L}^2$ for all $t\in J.$ The operator is then the multiplication operator (with the product formed in the $f$-algebra $\mathfrak{E}^u$) and has values in $\mc{L}^1.$ We assume, as in~\cite{G9}, that all bounded intervals in $\mb{R}^+$ have finite semivariation. The set of all Dobrakov integrable functions will be denoted by $L^1([a,b],\mu_A),$ and by $L^2([a,b],\mu_A)$ we denote the space of all $\mu_A$-integrable functions $X$ from $[a,b]$ into $\mathfrak{E}^s$ satisfying $|\phi|\mb{F}\int_a^b |X|^2\,d\mu_A<\infty$ for all $\phi\in\mathfrak{E}^\sim_{00}.$
If $(M_t,\mathfrak{F}_t,\mb{F}_t)$ is a martingale with compensator $\<M\>,$ the closure of the set $\mb{L}$ of all simple predictable processes in $L^2([a,b],\mu_{\<M\>})$ is denoted by $L^2_{\operatorname{pred}}([a,b],\<M\>).$ The It\^o integral $I^M(X)=\int_a^bX_t\,d\mu_{\<M\>}$ is defined for all $X\in L^2_{\operatorname{pred}}([a,b],\<M\>).$ Moreover, $I_t^M(X)=\int_a^tX_u\,d\mu_{\<M\>}$ is a martingale. The domain of $I^M$ can be extended further to the space $\mc{L}_{\operatorname{pred}}(L^2[a,b],\<M\>),$ but the resulting indefinite integral is no longer a martingale, only a {\em local martingale} (for the details, see~\cite{G9}). In the special case that $M$ is a Brownian motion, the space $\mc{L}_{\operatorname{pred}}(L^2[a,b],\<M\>)$ is denoted by $\mc{L}_{ad}(L^2[a,b]).$
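For orientation (a sketch; the precise construction via the measure $\mu_{\<M\>}$ is in~\cite{G9}): for a simple predictable process $X_u=\sum_{i=1}^n X_{i-1}I_{(t_{i-1},t_i]}(u)$ with $X_{i-1}\in\mathfrak{F}_{t_{i-1}},$ the It\^o integral takes the familiar form $$ I^M(X)=\sum_{i=1}^n X_{i-1}(M_{t_i}-M_{t_{i-1}}), $$ with the products formed in the $f$-algebra $\mathfrak{E}^u,$ and $L^2_{\operatorname{pred}}([a,b],\<M\>)$ is the closure of the set of such processes.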
\section{The cross-variation process} Let $\mathfrak{E}$ be a Dedekind complete perfect vector lattice, and assume that $\mb{F}$ is a conditional expectation defined on $\mathfrak{E}$ and that $\mathfrak{E}$ is $\mb{F}$-universally complete in $\mathfrak{E}^u.$ We consider the set $\mc{M}_2$ of right order continuous martingales $(X_t)_{t\in[a,b]}$ satisfying $X_t\in\mc{L}^2$ for every $t\in[a,b];$ $\mc{M}_2^c$ will denote the set of order continuous martingales in $\mc{M}_2.$ For $X\in\mc{M}_2,$ we have that $X^2$ is a nonnegative submartingale and hence of class DL (see~\cite[Definition 7.3]{G1}). Therefore, by~\cite[Theorems 5.11 and 5.12]{G9}, $X^2$ has a unique Doob-Meyer decomposition $$ X^2=M+A, $$ where $M$ is a right continuous martingale and $A$ is a natural increasing process with $A_0=0.$ As was mentioned above, the compensator $(\lr{X}_t)$ of $X$ is defined to be the process $(A_t).$ For the properties of $\lr{X}$ inherited from those of $X,$ we refer to~\cite[Theorem 3.3]{G10}.
\begin{definition}\label{cross-variation}\rm For martingales $X,Y\in\mc{M}_2$ we define their \textit{cross-variation process $\lr{X,Y}$} by $$ \lr{X,Y}_t:=\tfrac 14[\lr{X+Y}_t-\lr{X-Y}_t],\ \ t\in[a,b]. $$ We say $X$ and $Y$ are {\em orthogonal} if $\lr{X,Y}_t=0$ for all $t\in[a,b].$ \end{definition}
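As an immediate illustration of the definition (a sketch): since $(2X)^2-4\lr{X}=4(X^2-\lr{X})$ is a martingale, the uniqueness of the Doob-Meyer decomposition gives $\lr{2X}=4\lr{X},$ and therefore $$ \lr{X,X}_t=\tfrac 14[\lr{X+X}_t-\lr{X-X}_t]=\tfrac 14\lr{2X}_t=\lr{X}_t, $$ so the cross-variation of a martingale with itself is its compensator.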
\begin{proposition}
$XY-\lr{X,Y}$ is a martingale. \end{proposition} {\em Proof.} A linear combination of martingales is again a martingale. Thus, if $X$ and $Y$ are in $\mc{M}_2,$ then both the processes $(X+Y)^2-\langle X+Y\rangle$ and $(X-Y)^2-\langle X-Y\rangle$ are martingales, and so is their difference $4XY-\langle X+Y\rangle+\lr{X-Y}.$ It follows that $$ XY-\lr{X,Y}=XY-\tfrac 14[\lr{X+Y}-\lr{X-Y}] $$ is a martingale.\phantom{em}
$\Box$
\begin{proposition}\label{uniqueness of cv} If $X,Y\in\mc{M}_2$ then $\lr{X,Y}$ is the only process of the form $A=A^{(1)}-A^{(2)},$ with $A^{(j)}$ adapted natural increasing processes, such that $XY-A$ is a martingale. In particular, $\lr{X,X}=\lr{X}.$ \end{proposition} {\em Proof.} By definition, $\lr{X,Y}$ is the difference of two processes as required and $XY-\lr{X,Y}$ is a martingale. This proves the existence of $A.$
To prove uniqueness, assume that $A=A^{(1)}-A^{(2)}$ and $B=B^{(1)}-B^{(2)}$ are two processes such that $XY-A=M$ and $XY-B=N$ are martingales, where the processes $A^{(j)}$ and $B^{(j)}$ are adapted natural increasing processes. Hence $M_t+A_t=N_t+B_t.$ Put $(C_t):=(A_t-B_t)=(N_t-M_t)$ for $t\in[a,b]$ and note that $(C_t)$ is a martingale. An inspection of the proof of the uniqueness of the Doob-Meyer decomposition in~\cite[Theorem 7.5]{G1} shows that $A_t=B_t$ for all $t.$
Finally, since $\lr{X}$ is an adapted natural increasing process satisfying the condition that $X^2-\lr{X}$ is a martingale, it follows that $\lr{X,X}=\lr{X}.$\phantom{em}
$\Box$
We call a process $A$ with the property that it is the difference of two adapted natural increasing processes $A^{(j)}$ as in the Proposition above, a \textit{regular process}.
\begin{proposition} For $X,Y\in\mc{M}_2$ the following identities hold: \begin{enumerate} \item[{(i)}] If $a\le s<t\le b,$ then \begin{align*} \mb{F}_s[(X_t-X_s)(Y_t-Y_s)] &= \mb{F}_s[(X_tY_t-X_sY_s)] \\ &= \mb{F}_s[\lr{X,Y}_t-\lr{X,Y}_s]. \end{align*} Hence, if $X$ and $Y$ are orthogonal, then $XY$ is a martingale and the increments of $X$ and $Y$ over $[s,t]$ are conditionally uncorrelated. \item[(ii)] If $a\le s<t\le u<v\le b,$ then $$ \mb{F}[(X_v-X_u)(Y_t-Y_s)]=0. $$ Thus, the expectation of a product of increments of two martingales over non-overlapping intervals is zero.
\item[(iii)] If $a\le u<v\le b$, then \begin{multline*} \mb{F}[(X_v-X_u)(Y_v-Y_u)-(\<X,Y\>_v-\<X,Y\>_u)]= \\ \mb{F}[(X_vY_v-\<X,Y\>_v)-(X_uY_u-\<X,Y\>_u)]=0. \end{multline*} It follows that the expectation of the product of terms of the form $(X_vY_v-\<X,Y\>_v)-(X_uY_u-\<X,Y\>_u)$ and of the form $(X_v-X_u)(Y_v-Y_u)-(\<X,Y\>_v-\<X,Y\>_u)$ taken over non-overlapping intervals are zero. \end{enumerate}
\end{proposition} {\em Proof.} (i) For $a\le s<t\le b,$ \begin{align*} \mb{F}_s[(X_t-X_s)(Y_t-Y_s)] &=\mb{F}_s[X_tY_t-X_sY_t-X_tY_s+X_sY_s] \\ &=\mb{F}_s[X_tY_t]-X_sY_s-X_sY_s+X_sY_s \\ &=\mb{F}_s[X_tY_t-X_sY_s]. \end{align*} Put $M_t=X_tY_t-\lr{X,Y}_t.$ Then, since $M_t$ is a martingale, $$ \mb{F}_s[X_tY_t-X_sY_s]-\mb{F}_s[\lr{X,Y}_t-\lr{X,Y}_s]=\mb{F}_s[M_t-M_s]=0. $$ This proves (i).
(ii) For $a\le s<t\le u<v\le b,$ \begin{multline*} \mb{F}[(X_v-X_u)(Y_t-Y_s)]=\mb{F}\{\mb{F}_u[(X_v-X_u)(Y_t-Y_s)]\}\\ =\mb{F}[(Y_t-Y_s)\mb{F}_u(X_v-X_u)]=0. \end{multline*}
(iii) If $a\le u<v\le b$, the identity follows immediately from (i). The remaining statement then follows as in the proof of (ii). \phantom{em}
$\Box$
\begin{proposition}\label{properties cross-variation} Let $X, Y, Z$ be elements of $\mc{M}_2.$ The following properties hold. \begin{enumerate} \item[{(i)}] $\lr{\alpha X+\beta Y,Z}=\alpha\lr{X,Z}+\beta\lr{Y,Z}$ for all real numbers $\alpha, \beta.$ \item[{(ii)}] $\lr{X,Y}=\lr{Y,X}.$
\item[{(iii)}] $|\lr{X,Y}|^2\le\lr{X}\lr{Y}.$ \item[{(iv)}] Let $V_t^1(X)$ be the variation of $X$ over the interval $[a,t].$ Then, for $s<t,$ $$ V_t^1(\lr{X,Y})-V_s^1(\lr{X,Y})\le\tfrac 12[\lr{X}_t-\lr{X}_s+\lr{Y}_t-\lr{Y}_s]. $$ \end{enumerate} Thus, the cross-variation process defines a bilinear transformation on $\mc{M}_2\times\mc{M}_2.$ \end{proposition} {\em Proof.} We show firstly that (ii) holds. From $(-X)^2=X^2$ it follows that $\lr{-X}=\lr{X}.$ Consequently, $$ \lr{X,Y}-\lr{Y,X}=\tfrac 14[\lr{Y-X}-\lr{X-Y}]=\tfrac 14[\lr{Y-X}-\lr{Y-X}]=0, $$ i.e., $\lr{X,Y}=\lr{Y,X}.$
\noindent (i) $\lr{\alpha X+\beta Y,Z}$ is the unique regular process $A$ such that $(\alpha X+\beta Y)Z-A$ is a martingale. But $\alpha\lr{X,Z}+\beta\lr{Y,Z}$ is a regular process $B$ such that $(\alpha X+\beta Y)Z-B=\alpha XZ+\beta YZ-B $ is a martingale. Hence $A=B$ and (i) holds.
\noindent (iii) Using (ii), we have \begin{align*} 0\le\lr{\alpha X+Y}&=\lr{\alpha X+Y,\alpha X+Y} \\ &=\alpha^2\lr{X,X}+2\alpha\lr{X,Y}+\lr{Y,Y}\\ &=\alpha^2\lr{X}+2\alpha\lr{X,Y}+\lr{Y}. \end{align*} Hence, for every $t\in[a,b]$ and all $\alpha\in\mb{R},$ \begin{equation}\label{quadratic inequality} 0\le \alpha^2\lr{X}_t+2\alpha\lr{X,Y}_t+\lr{Y}_t. \end{equation} Let $\Omega$ be the Stone space of the Boolean algebra $\ms{B}_{\mathfrak{E}}$ of all bands in $\mathfrak{E}.$ Then $C^\infty(\Omega)$ is Riesz isomorphic to $\mathfrak{E}^u,$ which is, as we remarked in the introduction, an $f$-algebra (in fact, it inherits its $f$-algebra structure from $C^\infty(\Omega)$). Thus $\mathfrak{E}^u$ and $C^\infty(\Omega)$ are also isomorphic as $f$-algebras (see~\cite[Section 50]{LZ} and \cite[Chapter 7]{AB1}). Identifying an element $X\in\mathfrak{E}^u$ with its image, we consider $X$ to be a real function on $\Omega.$ Thus the inequality (\ref{quadratic inequality}) becomes an inequality involving real numbers, i.e., for every fixed $t\in[a,b]$ and $\omega\in\Omega,$ we have $$ 0 \le \alpha^2\lr{X}_t(\omega)+2\alpha\lr{X,Y}_t(\omega)+\lr{Y}_t(\omega) \mbox{ for all }\alpha\in\mb{R}. $$ Since the discriminant of this quadratic in $\alpha$ is nonpositive, for every fixed $t\in[a,b]$ and $\omega\in\Omega$ we have \begin{equation} \lr{X,Y}_t(\omega)^2\le \lr{X}_t(\omega)\lr{Y}_t(\omega), \end{equation} i.e., $\lr{X,Y}_t^2\le \lr{X}_t\lr{Y}_t$ holds in $\mathfrak{E}^u$ for all $t\in[a,b],$ and this proves (iii).
\noindent (iv) For $a\le s<t\le b,$ we have \begin{align*}
|\lr{X,Y}_t-\lr{X,Y}_s|&=\tfrac 14|(\lr{X+Y}_t-\lr{X-Y}_t)-(\lr{X+Y}_s-\lr{X-Y}_s)|\\
&=\tfrac 14|(\lr{X+Y}_t-\lr{X+Y}_s)+(\lr{X-Y}_s-\lr{X-Y}_t)|\\
&\le\tfrac 14 [|\lr{X+Y}_t-\lr{X+Y}_s|+|\lr{X-Y}_s-\lr{X-Y}_t|]\\ &=\tfrac 14 [(\lr{X+Y}_t-\lr{X+Y}_s)+(\lr{X-Y}_t-\lr{X-Y}_s)]\\ &=\tfrac 14 [(\lr{X+Y}_t+\lr{X-Y}_t)-(\lr{X+Y}_s+\lr{X-Y}_s)]. \end{align*} But, using (i) and (ii) and the fact that $\lr{X}=\lr{X,X},$ we get \begin{multline*} \lr{X+Y}=\lr{X+Y,X+Y}=\lr{X,X+Y}+\lr{Y,X+Y}\\ =\lr{X,X}+2\lr{X,Y}+\lr{Y,Y} =\lr{X}+2\lr{X,Y}+\lr{Y} \end{multline*} and also $\lr{X-Y}=\lr{X}-2\lr{X,Y}+\lr{Y}.$ Therefore, for $a\le s<t\le b,$ we have \begin{align}
|\lr{X,Y}_t-\lr{X,Y}_s|&\le\tfrac 12 [(\lr{X}_t+\lr{Y}_t)-(\lr{X}_s+\lr{Y}_s)] \nonumber\\ &=\tfrac 12 [\lr{X}_t-\lr{X}_s+\lr{Y}_t-\lr{Y}_s]. \end{align} Let $V_t^1\lr{X,Y}$ be the total variation of $\lr{X,Y}$ over the interval $[a,t];$ then $V_t^1\lr{X,Y}-V_s^1\lr{X,Y}$ is the variation of $\lr{X,Y}$ over the interval $[s,t].$ For any partition $\pi=\{s=t_0<t_1<\cdots<t_n=t\}$ of $[s,t],$ we have \begin{align*}
\sum_{k=1}^n|\lr{X,Y}_{t_k}-\lr{X,Y}_{t_{k-1}}| &\le\tfrac 12\left[\sum_{k=1}^n(\lr{X}_{t_k}-\lr{X}_{t_{k-1}})+\sum_{k=1}^n(\lr{Y}_{t_k}-\lr{Y}_{t_{k-1}})\right]\\ &=\tfrac 12 [\lr{X}_t-\lr{X}_s+\lr{Y}_t-\lr{Y}_s]. \end{align*} It follows that $V^1_t\lr{X,Y}-V^1_s\lr{X,Y}\le \tfrac 12 [\lr{X}_t-\lr{X}_s+\lr{Y}_t-\lr{Y}_s],$ and this completes the proof.\phantom{em}
$\Box$
\begin{remark}\label{remark 3.6}\rm Considering the proof of (iii) above, we see that if $X\in\mathfrak}\def\mc{\mathcal}\def\mb{\mathbb}\def\ms{\mathscr{E}^u$ and if $P(\alpha,X)$ is a proposition that holds for all $\alpha\in\mb{R}}\def\C{\mb{C}}\def\N{\mb{N}$ in the $f$-algebra $\mathfrak}\def\mc{\mathcal}\def\mb{\mathbb}\def\ms{\mathscr{E}^u,$ then the proposition $P(\alpha Y, X)$ is also true for any given $Y\in\mathfrak}\def\mc{\mathcal}\def\mb{\mathbb}\def\ms{\mathscr{E}^u.$ This follows by representing the elements as elements of $C^\infty(\Omega)$ for some compact topological space $\Omega;$ then the proposition $P(\alpha, X)$ holds for every $\alpha$ and every $\omega\in\Omega.$ Therefore, since $\alpha Y(\omega)$ is again a real number, one has that $P(\alpha Y(\omega), X(\omega))$ must also be true for every $\omega.$ Thus, the proposition holds for $\alpha$ replaced by $\alpha Y.$ \end{remark}
For the quadratic variation of a $\gamma$-H\"older continuous martingale $X,$ we have the formula \begin{equation}\label{eq3.4}
\lim_{|\pi|\to 0}V_t^{(2)}(\pi,X)=\lim_{|\pi|\to 0} \sum_{k=1}^m(X_{t_k}-X_{t_{k-1}})^2=\<X\>_t, \end{equation} with convergence in $\mc{L}^1$-conditional probability (see~\cite[Theorem 6.1]{G10} for the definition). We prove a similar characterization for the cross-variation process. Although the proof is similar to that of (\ref{eq3.4}), there are some technical points that warrant the inclusion of a proof.
\begin{theorem}Let $X,Y$ be $\gamma$-H\"older continuous martingales. Then, \begin{equation}
\lim_{|\pi|\to 0} \sum_{k=1}^m(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})=\<X,Y\>, \end{equation}
where $\pi=\{t_0,t_1,\ldots,t_m\}$ is a partition of the interval $[a,b]$ with mesh $|\pi|$ and the convergence is in $\mc{L}^1$-conditional probability. \end{theorem}
{\em Proof.\ } Let $\displaystyle m_t(Z;\pi):=\sup_{1\le k\le m}|Z_{t_k}-Z_{t_{k-1}}|$ for the stochastic process $(Z_t)$ and let \begin{equation*} CV_t(\pi,X,Y):=\sum_{k=1}^m(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}}). \end{equation*} Consider first the bounded case, i.e., assume that for some $K>0,$ $$
\sup\limits_{s\in[a,b]}\{|X_s|,|Y_s|,\<X\>_s,\<Y\>_s\}\le KE. $$ Then, \begin{align*} &\mb{F}[CV_t(\pi,X,Y)-\<X,Y\>_t]^2 \\ &=\mb{F}\left[\sum_{k=1}^m(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})- (\<X,Y\>_{t_k}-\<X,Y\>_{t_{k-1}})\right]^2 \\ &=\sum_{k=1}^m\mb{F}\left[(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})- (\<X,Y\>_{t_k}-\<X,Y\>_{t_{k-1}})\right]^2 \\ &\le 2\sum_{k=1}^m\mb{F}\left[(X_{t_k}-X_{t_{k-1}})^2(Y_{t_k}-Y_{t_{k-1}})^2+ (\<X,Y\>_{t_k}-\<X,Y\>_{t_{k-1}})^2\right] \\
&\le 2\sum_{k=1}^m\mb{F}\left[|X_{t_k}-X_{t_{k-1}}|(|X_{t_k}|+|X_{t_{k-1}}|)(Y_{t_k}-Y_{t_{k-1}})^2\right] \\
&\qquad+2\mb{F}\, m_t(\<X,Y\>;\pi)\sum_{k=1}^m|\<X,Y\>_{t_k}-\<X,Y\>_{t_{k-1}}|\\ &\le 4K\mb{F}\, m_t(X;\pi)\sum_{k=1}^m(Y_{t_k}-Y_{t_{k-1}})^2 \\ &\qquad+2\mb{F}\, m_t(\<X,Y\>;\pi)\sum_{k=1}^m[(\<X\>_{t_k}-\<X\>_{t_{k-1}})+(\<Y\>_{t_k}-\<Y\>_{t_{k-1}})]\mbox{ \rm by (3.3)}\\ &\le 4K\mb{F}\, m_t(X;\pi)V_t^{(2)}(\pi, Y) +2\mb{F}\, m_t(\<X,Y\>;\pi)(\<X\>_{t_m}+\<Y\>_{t_m})\\ &\le 4K\mb{F}\, m_t(X;\pi)V_t^{(2)}(\pi,Y) + 4K\mb{F}\, m_t(\<X,Y\>;\pi). \end{align*} From \cite[Lemma 6.2]{G10}, we have that $\mb{F}[V_t^{(2)}(\pi,Y)]^2\le 6K^4E,$ and so applying the Cauchy inequality to the first term, we get (as in the proof of the formula for one variable in \cite{G11}) that it is bounded by $$ 4K\sqrt{6K^4}\sqrt{\mb{F}\, m_t(X;\pi)^2}. $$
Now, as $|\pi|\to 0,$ the $\gamma$-H\"older continuity of $X$ and of $\lr{X,Y}$ implies that in both terms the factor $m_t$ tends to zero. Therefore, the right-hand side tends to zero, and we have proved the desired result for bounded martingales. The extension to the general case proceeds via localization, exactly as in the proof of the corresponding result for the quadratic variation (see~\cite[Theorem 6.1]{G10}).\phantom{em} $\Box$
\begin{corollary} If $\sup\{|X_s|,|Y_s|,\<X\>_s,\<Y\>_s\}\le KE,\ s\in[a,t],$ then $$ \mb{F}(CV_t(\pi,X,Y))^2\le 2K\mb{F} m_t(X;\pi)V_t^{(2)}(\pi,Y). $$
\end{corollary}
\section{The cross-variation formula} Let $M=(M_t)_{t\in[a,b]}$ and $N=(N_t)_{t\in [a,b]}$ belong to the set $\mc{M}_2^c$ of continuous square integrable martingales and let $X\in L^2_{\operatorname{pred}}([a,b],\<M\>)$ and $Y\in L^2_{\operatorname{pred}}([a,b],\<N\>)$ (see~\cite[Definition 5.3]{G9}). Let $I^M_t(X):=\int_a^t X_s\,dM_s$ and $I^N_t(Y):=\int_a^t Y_s\,dN_s.$ Then we have, by~\cite[Theorem 6.5]{G9}, $$ \<I^M(X)\>_t=\int_a^tX_u^2\,d\<M\>_u \mbox{ and } \<I^N(Y)\>_t=\int_a^tY_u^2\,d\<N\>_u. $$ Our aim is to prove the cross-variation formula \begin{equation} \<I^M(X),I^N(Y)\>_t=\int_a^tX_uY_u\,d\<M,N\>_u,\ \ t\in[a,b]. \end{equation} We first consider the case where $X$ and $Y$ are simple predictable adapted processes.
\begin{lemma} Let $M$ and $N$ be martingales in $\mc{M}^c_2$ and let $X$ and $Y$ be simple adapted predictable processes. Then, for all $a\le s<t\le b,$ we have \begin{equation} \mb{F}_s[(I_t^M(X)-I_s^M(X))(I_t^N(Y)-I_s^N(Y))]=\mb{F}_s\left[\int_s^t X_uY_u\,d\<M,N\>_u\right]. \end{equation} Consequently, \begin{equation} \<I^M(X),I^N(Y)\>_t=\int_a^tX_uY_u\,d\<M,N\>_u,\ \ t\in[a,b]. \end{equation} \end{lemma} {\em Proof.}\ Let $\pi=\{a=t_0<t_1<\cdots<t_n=b\}$ be a partition of the interval $[a,b]$ and let $$ X_u=\sum_{i=1}^n X_{i-1}I_{(t_{i-1},t_i]}(u)\mbox{ and }Y_u=\sum_{i=1}^n Y_{i-1}I_{(t_{i-1},t_i]}(u), $$ with $X_i, Y_i\in\mathfrak{F}_{t_i}.$ Let $a\le s<t\le b$ and suppose that $t_{k-1}\le s<t_k$ and $t_{\ell}\le t\le t_{\ell+1}.$ Then, using Remark~\ref{remark 3.6}(ii) and Proposition~\ref{proposition 3.4},
\begin{align*} &\mb{F}_s[(I_t^M(X)-I_s^M(X))(I_t^N(Y)-I_s^N(Y))]\\ &=\mb{F}_s[(X_{k-1}(M_{t_k}-M_s)+\sum_{i=k}^{\ell-1}X_i(M_{t_{i+1}}-M_{t_i})+X_\ell(M_t-M_{t_{\ell}}))\cdot\\ &\qquad \qquad \qquad\qquad \cdot(Y_{k-1}(N_{t_k}-N_s)+\sum_{i=k}^{\ell-1}Y_i(N_{t_{i+1}}-N_{t_i}) +Y_\ell(N_t-N_{t_{\ell}}))]\\ &=\mb{F}_s[X_{k-1}Y_{k-1}(M_{t_k}-M_s)(N_{t_k}-N_s)\\ &\qquad \qquad \qquad\qquad +\sum_{i=k}^{\ell-1}X_i Y_i(M_{t_{i+1}}-M_{t_i})(N_{t_{i+1}}-N_{t_i})+\\ & \ \ \ \qquad \qquad \qquad\qquad \qquad\qquad +X_\ell Y_\ell(M_t-M_{t_{\ell}})(N_t-N_{t_{\ell}})] \end{align*}
\begin{align*} &=\mb{F}_s[X_{k-1}Y_{k-1}(\<M,N\>_{t_k}-\<M,N\>_{s})\\ &\qquad \qquad \qquad\qquad +\sum_{i=k}^{\ell-1}X_i Y_i(\<M,N\>_{t_{i+1}} -\<M,N\>_{t_i})+\\ & \ \ \ \qquad \qquad \qquad\qquad \qquad\qquad +X_\ell Y_\ell(\<M,N\>_t-\<M,N\>_{t_{\ell}})]\\ &=\mb{F}_s\left[\int_s^t X_uY_u\,d\<M,N\>_u\right]. \end{align*} But, by Proposition~\ref{proposition 3.4}, $$ \mb{F}_s[(I_t^M(X)-I_s^M(X))(I_t^N(Y)-I_s^N(Y))]=\mb{F}_s[(I_t^M(X)I_t^N(Y)-I_s^M(X)I_s^N(Y))], $$ from which we conclude that $I_t^M(X)I_t^N(Y)-\int_a^tX_uY_u\,d\<M,N\>_u$ is a martingale. It follows from Proposition~\ref{uniqueness of cv} that \begin{equation*} \<I^M(X),I^N(Y)\>_t=\int_a^tX_uY_u\,d\<M,N\>_u,\ \ t\in[a,b].\tag*{\phantom{em}
$\Box$} \end{equation*}
\begin{corollary} Let $M$ and $N$ be martingales in $\mc{M}^c_2$ and let $X$ be a simple predictable process. Then \begin{equation} \<I^M(X),N\>_t=\int_a^tX_u\,d\<M,N\>_u,\ \ t\in[a,b]. \end{equation} \end{corollary} {\em Proof.\ } Take $Y_t=E,$ $t\in[a,b],$ in the lemma.\phantom{em}
$\Box$
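As a consistency check (a sketch, not needed in the sequel): taking $N=M$ and $Y=X$ in the lemma, and using $\<M,M\>=\<M\>$ from Proposition~\ref{uniqueness of cv}, the cross-variation formula for simple processes reduces to $$ \<I^M(X)\>_t=\<I^M(X),I^M(X)\>_t=\int_a^t X_u^2\,d\<M\>_u, $$ which agrees with the formula for the compensator of the stochastic integral quoted at the beginning of this section.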
In order to extend the result to the general case, we need the following inequality due to Kunita and Watanabe~\cite{KW} (see also~\cite[Proposition 3.2.14]{KS}).
\begin{theorem}{ \rm (Kunita-Watanabe inequality)}\label{Kunita Watanabe}
If $M, N\in\mc{M}_2^c$ and if $X\in L^2_{\operatorname{pred}}([a,b],\<M\>_t)$ and $Y\in L^2_{\operatorname{pred}}([a,b],\<N\>_t),$ then \begin{equation}
\left(\int_a^t|X_sY_s|\,dV^1_s\right)^2 \le\left(\int_a^tX_u^2\,d\<M\>_u\right)\left(\int_a^tY_u^2\,d\<N\>_u\right), \ t\in[a,b], \end{equation} where $V^1_s$ denotes the total variation of $\<M,N\>$ over $[a,s].$ \end{theorem} {\em Proof.}\ By definition there exist nets of simple predictable processes $(X_{\gamma})$ converging to $X$ in $L^2([a,b],\<M\>)$ and $(Y_{\delta})$ converging to $Y$ in $L^2([a,b],\<N\>).$ For any fixed pair $(\gamma,\delta),$ write the processes $X_{s,\gamma}$ and $Y_{s,\delta},$ $a\le s\le t,$ with respect to the same partition $\pi=\{a=s_0<s_1<\cdots<s_n=t\}$ as follows: $$ X_{s,\gamma}=\sum_{i=1}^nX_iI_{(s_{i-1},s_i]}(s),\ Y_{s,\delta}=\sum_{i=1}^nY_iI_{(s_{i-1},s_i]}(s). $$ From Proposition~\ref{properties cross-variation} (iv), replacing $M$ by $\alpha M$ and $N$ by $\beta N$ for arbitrary real numbers $\alpha$ and $\beta,$ we have \begin{align*}
\alpha\beta (V^1_{s_i}-V^1_{s_{i-1}})&\le|\alpha||\beta|(V^1_{s_i}-V^1_{s_{i-1}})\\ &\le\frac{1}{2}[\alpha^2(\<M\>_{s_i}-\<M\>_{s_{i-1}})+\beta^2(\<N\>_{s_i}-\<N\>_{s_{i-1}})]. \end{align*}
But then, using Remark~\ref{remark 3.6}, we get by replacing $\alpha$ by $\alpha |X_i|$ and $\beta$ by $|Y_i|,$ $$
2\alpha|X_iY_i|(V^1_{s_i}-V^1_{s_{i-1}})\le\alpha^2|X_i|^2(\<M\>_{s_i}-\<M\>_{s_{i-1}})
+|Y_i|^2(\<N\>_{s_i}-\<N\>_{s_{i-1}}). $$ Summing both sides of this inequality over $i$ yields, for all $\alpha\in\mb{R},$ \begin{align*}
2\alpha\int_a^t|X_{s,\gamma}Y_{s,\delta}|\,dV^1_s
\le\alpha^2\int_a^t|X_{s,\gamma}|^2\,d\<M\>_s+\int_a^t|Y_{s,\delta}|^2\,d\<N\>_s. \end{align*} This implies that $$
\left[\int_a^t|X_{s,\gamma}Y_{s,\delta}|\,dV^1_s\right]^2\le
\int_a^t|X_{s,\gamma}|^2\,d\<M\>_s\cdot\int_a^t|Y_{s,\delta}|^2\,d\<N\>_s. $$ Hence, letting $X_{s,\gamma}\to X_s$ and $Y_{s,\delta}\to Y_s,$ we obtain the required result.\phantom{em}
$\Box$
\begin{lemma} If $M,N\in\mc{M}_2^c,$ if $X\in L^2_{\operatorname{pred}}([a,b],\<M\>)$ and if $(X_\alpha)$ is a net in $L^2_{\operatorname{pred}}([a,b],\<M\>)$ such that for some $t\in [a,b]$ $$
\lim_{\alpha}\int_a^t|X_{\alpha,u}-X_u|^2\,d\<M\>_u=0 \mbox{ in order}, $$ then $$ \lim_{\alpha}\<I(X_\alpha),N\>_s=\<I(X),N\>_s, \ a\le s\le t, \mbox{ in order}. $$ \end{lemma} {\em Proof.}\ It follows from Proposition~\ref{properties cross-variation} (iii) that for $a\le s\le t,$ we have \begin{align*}
|\<I(X_\alpha)-I(X),N\>_s|^2&\le \<I(X_\alpha)-I(X)\>_s\<N\>_s \\
&\le \int_a^t|X_{\alpha,u}-X_u|^2\,d\<M\>_u\cdot\<N\>_t. \end{align*} If $0\le a_\alpha$ in an $f$-algebra and $a_\alpha^2\to 0$ in order, then $a_\alpha\to 0$ in order. It follows that $\lim_{\alpha}\<I(X_\alpha),N\>_s=\<I(X),N\>_s$ in order, for $a\le s\le t.$ This completes the proof.\phantom{em}
$\Box$
As was mentioned, if $\mathfrak{E}$ is $\mb{F}$-universally complete in $\mathfrak{E}^u,$ then, for each $\phi\in\Phi,$ the seminorms $p_\phi$ and $q_\phi$ are complete norms when restricted to the carrier band of $\phi$ (see~\cite[Proposition 4.4]{G9}). Therefore, if $(X_\alpha)$ is a net in $\mc{L}^1$ that converges to an element $X\in \mc{L}^1$ and if $\mathbb P_\phi$ is the projection onto the carrier band of $\phi,$ then $(X_\alpha)$ has a subsequence $(X_{\alpha_n})$ such that $p_\phi(X_{\alpha_n}-X)$ converges to $0.$ But if a sequence converges in norm, it has a subsequence that converges in order to the limit element. Thus, for every fixed $\phi\in\Phi$ there exists a subsequence of the net $(X_\alpha)$ (depending on $\phi$) that converges in order to $X.$ We will use this fact in the proof of the next lemma.
\begin{lemma} If $M,N\in\mc{M}^c_2$ and $X\in L^2_{\operatorname{pred}}([a,b],\<M\>),$ then \begin{equation} \<I^M(X),N\>_t=\int_a^t X_u\,d\<M,N\>_u,\ \ a\le t\le b. \end{equation} \end{lemma} {\em Proof.}\ Fix an element $\phi\in\Phi.$ Let $(X_\alpha)$ be a net of simple predictable processes that converges in $L^2_{\operatorname{pred}}([a,b],\<M\>)$ to $X.$ Thus, $$
\overline{q}_\phi(X_{\alpha}-X)^2=|\phi|\mb{F}\left(\int_a^b|X_{\alpha,u}-X_u|^2\,d\<M\>_u\right)\to 0. $$
With $Y_\alpha:= \int_a^b|X_{\alpha,u}-X_u|^2\,d\<M\>_u,$ this means that $p_\phi(Y_\alpha)\to 0.$ By our remark before the lemma, this means that there exists a subsequence $(Y_{\alpha_n})$ that converges to $0$ in $\mc{L}^1_\phi$ and consequently a subsequence (that we will again denote by $(Y_{\alpha_n})$) that converges in order to zero on the carrier band of $\phi.$ Thus, by the definition of $Y_\alpha,$ we have that there exists a sequence of simple predictable processes $(X_{\alpha_n})$ such that $$
\mathbb P_\phi\left(\int_a^b |X_{\alpha_n,u}-X_u|^2\,d\<M\>_u\right)\to 0\ \mbox{in order} $$ and consequently, also for any $t\in[a,b],$ we have
$\mathbb P_\phi(\int_a^t |X_{\alpha_n,u}-X_u|^2\,d\<M\>_u)\to 0\ \mbox{in order}.$ By Lemma~\ref{lemma 4.4}, we have that
$$ \lim_{n\to\infty}\mathbb P_\phi\<I(X_{\alpha_n}),N\>_s=\mathbb P_\phi\<I(X),N\>_s \mbox{ in order, for }a\le s\le t . $$ But, by Corollary~\ref{4.2}, we have that for each $n$ \begin{equation}\label{equation 4.7} \<I^M(X_{\alpha_n}),N\>_t=\int_a^tX_{\alpha_n,u}\,d\<M,N\>_u,\ \ t\in[a,b]. \end{equation} Considering the right hand side, we find by the Kunita--Watanabe inequality that for $t\in[a,b],$ $$
\int_a^t|X_{\alpha_n,u}-X_u|\,dV^1_u
\le\left(\int_a^t|X_{\alpha_n,u}-X_u|^2\,d\<M\>_u\right)^{1/2}\<N\>_t^{1/2} $$ and by what we proved above, we get that $$
\mathbb P_\phi\left|\int_a^t(X_{\alpha_n,u}-X_u)\,d\<M,N\>_u\right|\le\mathbb P_\phi\int_a^t|X_{\alpha_n,u}-X_u|\,dV^1_u\to 0\mbox{ in order}. $$ It follows from letting $n$ tend to infinity in Equation~(\ref{equation 4.7}), that the left hand side converges in order to $\<I^M(X),N\>_t$ on the carrier band of $\phi,$ and the right hand side converges in order to $\int_a^tX_u\,d\<M,N\>_u$ on the carrier band of $\phi$ by Lemma~\ref{lemma 4.4}. Thus, $$ \mathbb P_\phi\<I^M(X),N\>_t=\mathbb P_\phi\int_a^tX_u\,d\<M,N\>_u $$ and this holds for every fixed $\phi.$ This completes the proof.\phantom{em}
$\Box$
\begin{theorem}\label{theorem 4.6} If $M,N\in\mc{M}_2^c,$ $X\in L^2_{\operatorname{pred}}([a,b],\<M\>),$ $Y\in L^2_{\operatorname{pred}}([a,b],\<N\>),$ then \begin{equation} \<I^M(X),I^N(Y)\>_t=\int_a^t X_uY_u\,d\<M,N\>_u,\ t\in[a,b], \end{equation} and equivalently, \begin{equation} \mb{F}_s[(I^M_t(X)-I^M_s(X))(I^N_t(Y)-I^N_s(Y))]=\mb{F}_s\left[\int_s^t X_uY_u\,d\<M,N\>_u\right]. \end{equation} \end{theorem} {\em Proof.}\ The preceding lemma states that $$ d\<M,I^N(Y)\>_u=Y_u\,d\<M,N\>_u,\ \ a\le u\le b. $$ Replace in Lemma~\ref{lemma 4.5} $N$ by $I^N(Y),$ then we have \begin{align*} \<I^M(X),I^N(Y)\>_t&=\int_a^t X_u\,d\<M,I^N(Y)\>_u \\ &=\int_a^t X_uY_u\,d\<M,N\>_u, \end{align*} where we formally replaced $d\<M,I^N(Y)\>_u$ in the integral by $Y_u\,d\<M,N\>_u.$ To see that this can be done, note that if $$ \sum_{i=1}^n X_i\mu_{\<M,I^N(Y)\>}(S_i)=\sum_{i=1}^n X_i\int_{S_i}Y_u\,d(\<M,N\>), $$ is an approximating sum for the integral $\int_a^t X_u\,d\<M,I^N(Y)\>_u,$ and if $$ \sum_{j=1}^{n_i} Y_j\mu_{\<M,N\>}(S_{ij}) $$ is an approximating sum for the integral $\int_{S_i}Y_u\,d\<M,N\>_u,$ then $$ \sum_{i=1}^n\sum_{j=1}^{n_i}X_iY_j\mu_{\<M,N\>}(S_{ij}) $$ is an approximating sum for the integral $\int_a^t X_u\,d\<M,I^N(Y)\>_u$ as well as for the integral $\int_a^t X_uY_u\,d\<M,N\>_u.$ Therefore the two integrals are equal.\phantom{em}
$\Box$
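In the classical probabilistic setting, the identity $\<I^M(X),I^N(Y)\>_t=\int_a^t X_uY_u\,d\<M,N\>_u$ is already visible at the level of partition sums: the discrete bracket of two stochastic integrals is exactly $\sum_i X_iY_i\,\Delta M_i\Delta N_i.$ The following NumPy sketch checks this discrete identity for $M=N=B$ a simulated Brownian path; the integrands, grid and seed are our own illustrative choices, and the measure-free Riesz-space setting of the paper is not modelled.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 1000, 1e-3
dB = rng.normal(0.0, np.sqrt(dt), n)       # increments of one Brownian path, M = N = B
B = np.concatenate([[0.0], np.cumsum(dB)])

X = np.sin(B[:-1])                         # left-point (predictable) integrands
Y = np.cos(B[:-1])

dIX = X * dB                               # increments of I(X) and I(Y) on the partition
dIY = Y * dB

# discrete bracket <I(X), I(Y)>: cumulative sum of products of increments
bracket = np.cumsum(dIX * dIY)
# right-hand side: sum of X_i Y_i d<B>_i with discrete d<B>_i = (dB_i)^2
rhs = np.cumsum(X * Y * dB**2)

assert np.allclose(bracket, rhs)           # the identity is exact on every partition
```

On a fixed partition the two sides agree term by term, which is why the limiting statement only requires convergence of the partition sums.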
\begin{theorem}\label{calculus 1} Let $M$ be a continuous martingale and let $X\in L^2_{\operatorname{pred}}([a,b],\<M\>).$ The It\^o integral $I^M(X)$ is the unique continuous martingale $\Phi$ that satisfies \begin{equation} \<\Phi,N\>_t=\int_a^t X_u\,d\<M,N\>_u,\ t\in[a,b], \end{equation} for every continuous martingale $N.$ \end{theorem} {\em Proof.}\ We know that $I^M(X)$ satisfies this condition by Lemma~\ref{lemma 4.5}. Suppose that $\Phi$ also satisfies the condition for all continuous martingales $N.$ Then $\<\Phi-I^M(X),N\>=0$ for all continuous martingales $N.$ Replace $N$ by $\Phi-I^M(X),$ then $$
\<\Phi-I^M(X)\>=\<\Phi-I^M(X),\Phi-I^M(X)\>=0. $$ But, this means that the quadratic variation of the martingale $\Phi-I^M(X)$ is zero (see~\cite{G10}). Consequently $\Phi-I^M(X)=0$ and we are done.\phantom{em}
$\Box$
It is important that the calculus used (and proved) in Theorem~\ref{theorem 4.6} also holds for the It\^o integral: in the ``stochastic differential'' notation, if $dN=X\,dM,$ then $Y\,dN=XY\,dM.$ This is the content of the next theorem.
\begin{theorem}\label{theorem 4.8} Let $M$ be a continuous martingale and let $X\in L^2([a,b],\<M\>).$ Let $N:=I^M(X)$ and suppose furthermore that $Y\in L^2([a,b],\<N\>).$ Then $XY\in L^2([a,b],\<M\>)$ and $I^N(Y)=I^M(XY).$ \end{theorem} {\em Proof.}\ By assumption, we have for the martingale $N,$ that $N_t=\int_a^t X_u\,dM_u=I^M(X)_t.$ By the properties of the It\^o integral, this implies that for the quadratic variation of $N,$ we have $$ \<N\>_t=\int_a^t X_u^2\,d\<M\>_u. $$ Thus, $d\<N\>=X^2\,d\<M\>.$ Applying the calculus in Theorem~\ref{theorem 4.6}, we get $$ \phi\mb{F}\int_a^b X_u^2Y_u^2\,d\<M\>_u=\phi\mb{F}\int_a^bY_u^2\,d\<N\>_u<\infty,\mbox{ for all $\phi\in\Phi.$ } $$ This shows that $XY\in L^2([a,b],\<M\>).$
For any continuous martingale $\tilde{N},$ we have by Lemma~\ref{lemma 4.5} that \begin{align*} \<I^M(XY),\tilde{N}\>_t&=\int_a^t X_uY_u\,d\<M,\tilde{N}\>_u\\ &=\int_a^t Y_u\,d\<N,\tilde{N}\>_u\\ &=\<I^N(Y),\tilde{N}\>_t. \end{align*} Therefore, $I^M(XY)=I^N(Y)$ follows from the uniqueness part of Theorem~\ref{calculus 1}. \phantom{em}
$\Box$
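The substitution rule $Y\,dN=XY\,dM$ of the last theorem is an exact identity for the discrete left-point sums that define the integrals. A small NumPy check in the classical simulated setting (the processes and seed are illustrative choices, not part of the paper's framework):

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt = 500, 1e-3
dB = rng.normal(0.0, np.sqrt(dt), n)       # Brownian increments, M = B
B = np.concatenate([[0.0], np.cumsum(dB)])

X = 1.0 + B[:-1]**2                        # left-point sampled integrand
dN = X * dB                                # dN = X dB defines N = I^B(X)
N = np.concatenate([[0.0], np.cumsum(dN)])
Y = np.exp(-N[:-1])                        # integrand against N

lhs = np.cumsum(Y * dN)                    # discrete I^N(Y)
rhs = np.cumsum(Y * X * dB)                # discrete I^B(XY)
assert np.allclose(lhs, rhs)               # Y dN = XY dB term by term
```

The limiting statement $I^N(Y)=I^M(XY)$ then only concerns the convergence of these sums, which is what the theorem supplies.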
\section{Exponential processes}
Exponential processes play an important role in Girsanov's theorem. Given a stochastic process $(X_t)_{t\in[0,a]}$ and a Brownian motion $(B_t)_{t\in[0,a]},$ both adapted to the filtration $(\mathfrak{F}_t,\mb{F}_t),$ the exponential process $Z_t(X)$ is a transformation of $X$ that is a local martingale and, under certain conditions, a martingale.
We recall from~\cite[Section 3]{G8} and~\cite[Section 7]{G9}, that $\mc{L}_{ad}(L_2[0,a])=\mc{L}_{pred}(L^2[0,a],\<B\>).$
\begin{definition}({\rm\cite[Definition 8.7.1, p 137]{Kuo} and~\cite[Problem 3.2.28]{KS}})\label{definition 5.1a} The exponential process given by $X\in \mc{L}_{ad}(L_2[0,a])$ is defined to be the stochastic process \begin{equation} Z_t(X)=\exp\left[\int_0^t X_s\,dB_s-\frac{1}{2}\int_0^t X_s^2\,ds\right],\ \ 0 \le t \le a. \end{equation} \end{definition}
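In the classical setting, the two integrals in Definition~\ref{definition 5.1a} can be accumulated with left-point sums on a discretised Brownian path. A minimal NumPy sketch; the helper name \texttt{exponential\_process}, the grid and the seed are our own illustrative choices:

```python
import numpy as np

def exponential_process(X, dB, dt):
    """Z_t = exp( int_0^t X dB - 1/2 int_0^t X^2 ds ) on a uniform grid,
    with X sampled at the left endpoints (predictability)."""
    ito = np.concatenate([[0.0], np.cumsum(X * dB)])
    comp = np.concatenate([[0.0], np.cumsum(0.5 * X**2 * dt)])
    return np.exp(ito - comp)

rng = np.random.default_rng(2)
n, dt = 400, 1e-3
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate([[0.0], np.cumsum(dB)])

# X identically 0 gives Z identically 1
assert np.allclose(exponential_process(np.zeros(n), dB, dt), 1.0)

# constant X = c gives the closed form exp(c B_t - c^2 t / 2)
c, t = 0.7, dt * np.arange(n + 1)
Z = exponential_process(np.full(n, c), dB, dt)
assert np.allclose(Z, np.exp(c * B - 0.5 * c**2 * t))
```

For $X\equiv 0$ the process is identically one, and for constant $X$ it reduces to driftless geometric Brownian motion.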
In order to prove the main property of exponential processes (Theorem~\ref{theorem 5.2} below), we need the following fact that is interesting in its own right.
\begin{lemma}\label{Lemma 5.2aa} Let $(\mathfrak{F}_t,\mb{F}_t)_{t\in[0,a]}$ be a filtration and let $(B_t,\mathfrak{F}_t)$ be a Brownian motion. If $(X_t,\mathfrak{F}_t)_{t\in[0,a]}$ is a bounded stochastic process, then the martingale $$ N_t:=\int_0^t X_s\,dB_s,\ t\in[0,a], $$ satisfies \begin{equation} \mb{F}(N_v-N_u)^4\le C(v-u)^2\ \mbox{ for all $0\le u<v\le a.$} \end{equation} Therefore, the process $(N_t)$ is $\gamma$-H\"older continuous for $\gamma\in (0,\tfrac 14).$ \end{lemma} {\em Proof.} We first prove that the inequality holds for the simple adapted step process $$ X_t = \sum^r_{i=1} X_{i-1}1_{(t_{i-1},t_i]}(t), $$ with $X_{i-1}\in\mathfrak{F}_{t_{i-1}}$ and $0=t_0<t_1<\cdots<t_r=t.$ In this case, for $0\le u<v\le t,$ we have \begin{align*} &\mb{F}(N_v-N_u)^4\\ &=\mb{F}\left(\int_u^v X_s\,dB_s\right)^4 \\ &=\mb{F}\left(X_{k-1}(B_{t_k}-B_{u})+ \sum^{\ell-1}_{i=k+1}X_{i-1}(B_{t_i} - B_{t_{i-1}})+ X_{\ell-1}(B_v - B_{t_{\ell-1}})\right)^4. \end{align*} Writing, for notational convenience, $u=t_{k-1}$ and $v=t_\ell,$ we have \begin{multline}\label{equation 5.2a} \mb{F}(N_v-N_u)^4 =\mb{F}\left(\sum^{\ell}_{i=k}X_{i-1}(B_{t_i} - B_{t_{i-1}})\right)^4 \\ =\mb{F}\big(\sum_{i,j,m,n=k}^\ell X_{i-1}X_{j-1}X_{m-1}X_{n-1} (B_{t_i} - B_{t_{i-1}})(B_{t_j} - B_{t_{j-1}})\cdot \\ \cdot(B_{t_m} - B_{t_{m-1}})(B_{t_n} - B_{t_{n-1}})\big). 
\end{multline} Since $(B_t)$ is a Brownian motion, we have for $u<v$, that $\mb{F}(B_v-B_u)=\mb{F}(B_v-B_u)^3=0,\ \mb{F}(B_v-B_u)^2=(v-u)E,$ and $\mb{F}(B_v-B_u)^4=3(v-u)^2E.$ By considering the intervals in the following equations to be non-overlapping and in increasing order, we have that \begin{multline*} \mb{F}(X_{i-1}X_{j-1}X_{m-1}X_{n-1} (B_{t_i} - B_{t_{i-1}})(B_{t_j}-B_{t_{j-1}})\cdot\\ \cdot(B_{t_m}-B_{t_{m-1}})(B_{t_n}-B_{t_{n-1}}))\\ =\mb{F}\mb{F}_{t_{n-1}}(X_{i-1} (B_{t_i}-B_{t_{i-1}})X_{j-1}(B_{t_j}-B_{t_{j-1}})\cdot \\ \phantom{MMMM}\cdot X_{m-1}(B_{t_m}-B_{t_{m-1}})X_{n-1}(B_{t_n}-B_{t_{n-1}}))\\ =\mb{F}(X_{i-1} (B_{t_i}-B_{t_{i-1}})X_{j-1}(B_{t_j}-B_{t_{j-1}})\cdot \\\cdot X_{m-1}(B_{t_m}-B_{t_{m-1}})X_{n-1}\mb{F}_{t_{n-1}}(B_{t_n}-B_{t_{n-1}}))=0. \end{multline*} Similarly, using $\mb{F}=\mb{F}\mb{F}_{t_{n-1}},$ we have \begin{align*} &\mb{F}(X_{i-1}^2X_{m-1}X_{n-1} (B_{t_i} - B_{t_{i-1}})^2(B_{t_m}-B_{t_{m-1}})(B_{t_n}-B_{t_{n-1}}))\\ &=\mb{F}(X_{i-1}X_{m-1}^2X_{n-1} (B_{t_i} - B_{t_{i-1}})(B_{t_m}-B_{t_{m-1}})^2(B_{t_n}-B_{t_{n-1}}))\\ &=\mb{F}(X_{i-1}^3X_{n-1} (B_{t_i} - B_{t_{i-1}})^3(B_{t_n}-B_{t_{n-1}}))\\ &=\mb{F}(X_{i-1}X_{n-1}^3 (B_{t_i} - B_{t_{i-1}})(B_{t_n}-B_{t_{n-1}})^3)\\ &=0 \end{align*}
and, with $|X_t|\le C$ for all $t,$ \begin{align*} &\mb{F}(X_{i-1}X_{m-1}X_{n-1}^2 (B_{t_i}-B_{t_{i-1}})(B_{t_m}-B_{t_{m-1}})(B_{t_n}-B_{t_{n-1}})^2)\\ &=\mb{F}\mb{F}_{t_{n-1}}( X_{i-1}(B_{t_i}-B_{t_{i-1}})X_{m-1}(B_{t_m}-B_{t_{m-1}})X_{n-1}^2(B_{t_n}-B_{t_{n-1}})^2)\\ &=\mb{F}(X_{i-1}(B_{t_i}-B_{t_{i-1}})X_{m-1}(B_{t_m}-B_{t_{m-1}})X_{n-1}^2(t_n-t_{n-1}))\\ &\le C^2(t_n-t_{n-1})\mb{F}(X_{i-1}(B_{t_i}-B_{t_{i-1}})X_{m-1}(B_{t_m}-B_{t_{m-1}}))\\ &=0. \end{align*} It follows that in (\ref{equation 5.2a}) the only non-zero terms are the terms that are products of linear square factors, or linear factors to the fourth power, i.e.,
\begin{align*} &\mb{F}(N_v-N_u)^4 \\ &=\mb{F}\big(\sum_{i,j=k}^\ell \mb{F}_{t_{i-1}}X_{i-1}^4(B_{t_i} - B_{t_{i-1}})^4 +X_{i-1}^2X_{j-1}^2(B_{t_i} - B_{t_{i-1}})^2(B_{t_j} - B_{t_{j-1}})^2\big)\\ &\le C^4\mb{F}\big(\sum_{i,j=k}^\ell \mb{F}_{t_{i-1}}(B_{t_i} - B_{t_{i-1}})^4 + \mb{F}_{t_{i-1}}\mb{F}_{t_{j-1}}(B_{t_i} - B_{t_{i-1}})^2(B_{t_j} - B_{t_{j-1}})^2\big)\\ &=C^4\sum_{i,j=k}^\ell \big(3(t_i-t_{i-1})^2+(t_i - t_{i-1})(t_j-t_{j-1})\big)E\\ &\le 3C^4\left(\sum_{i,j=k}^\ell (t_i-t_{i-1})^2+(t_i - t_{i-1})(t_j-t_{j-1})\right)E\\ &=3C^4\big(\sum_{i=k}^\ell(t_i-t_{i-1})\big)^2E=3C^4(v-u)^2E.\\ \end{align*} We have thus shown that \begin{equation}\label{equation 5.3a} \mb{F}\left(\int_u^v X_s\,dB_s\right)^4\le 3C^4(v-u)^2E \end{equation}
whenever $(X_s)$ is a bounded simple adapted stochastic process. Now, if $(X_s)$ is a bounded adapted stochastic process (and therefore a bounded element of $L^2_{\text{ad}}([0,b],\mc{L}^2)$) with $|X_s|\le C,$ there exists a net $(X^\alpha_s)$ of bounded simple processes with $|X^\alpha_s|\le C$ for all $\alpha,$ that converges to $(X_s)$ and such that the It\^o integrals $I(X^\alpha)$ converge in $\mc{L}^2$ to $I(X)$ (see~\cite[Lemma 4.7]{G7}). But then, for every $0<t\le b,$ we have that $\mb{F}(I_t(X^\alpha))$ converges to $\mb{F}(I_t(X)).$ It follows from equation (\ref{equation 5.3a}) applied to $X^\alpha$ that equation (\ref{equation 5.3a}) also holds for $X.$
The final conclusion follows from the Kolmogorov--\v{C}entsov theorem (see~\cite{G4,G5}).\phantom{em}
$\Box$
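The key moment computation in the lemma is $\mb{F}(B_v-B_u)^4=3(v-u)^2E;$ combined with $|X_t|\le C$ it yields the stated bound $3C^4(v-u)^2.$ In the classical scalar case this moment identity has an easy Monte Carlo confirmation (sample size, seed and tolerance below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
u, v = 0.2, 0.5                            # any 0 <= u < v
# B_v - B_u is normal with mean 0 and variance v - u
samples = rng.normal(0.0, np.sqrt(v - u), 200_000)

m4 = np.mean(samples**4)                   # Monte Carlo estimate of E(B_v - B_u)^4
target = 3 * (v - u)**2                    # fourth moment of N(0, v - u)
assert abs(m4 - target) < 0.1 * target     # within 10% (statistical tolerance)
```

With $2\times 10^5$ samples the relative standard error of the fourth-moment estimate is well under one percent, so the 10% tolerance is generous.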
\begin{theorem}\label{theorem 5.2} Let $X\in \mc{L}_{ad}(L_2[0,a])$ and let $(B_t)$ be a Brownian motion with respect to the filtration $(\mathfrak{F}_t).$ For $t\in[0,a],$ let $$ Y_t:=\int_0^t X_s\,dB_s-\tfrac 12\int_0^tX_s^2\,ds. $$ If $$ Z_t(X)=\exp(Y_t),\ 0\le t\le a, $$ then \begin{equation}\label{equation 5.2} Z_t(X)=E+\int_0^tZ_sX_s\,dB_s. \end{equation} \end{theorem} {\em Proof.} Write $(Y_t)_{t\in[0,a]}$ as $$ Y_t=N_t+A_t $$ with $N_t=\int_0^tX_s\,dB_s$ and $A_t=-\tfrac 12\int_0^tX_s^2\,ds.$ Then, since $X\in \mc{L}_{ad}(L_2[0,a]),$ the process $N$ is a local martingale by~\cite[Theorem 4.2]{G8}.
We first assume that the processes $(X_t)$ and $(N_t)$ are bounded and then complete the proof by the process of localization~(see~\cite[Section 5]{G10} and \cite[Section 3.2.2]{G11}). Thus we assume that both $|X_t|\le CE$ and $|N_t|\le CE$ for some positive number $C.$
From the first inequality, we have that $(X_s)\in L^2_{ad}([0,t],\mc{L}^2),$ which implies that $N_t=I^B(X)_t$ is a continuous martingale which is, by Lemma~\ref{Lemma 5.2aa}, $\gamma$-H\"older continuous. Also, this inequality implies that $|A_t|\le\tfrac 12 C^2tE\le \tfrac 12 aC^2E$ and, together with the second inequality, it follows that $(Y_t)$ is uniformly bounded in $\mathfrak{E}_E.$ Hence, also $(Z_t)$ is uniformly bounded and so $Z_t\in L^2_{\operatorname{pred}}([0,t],\<N\>)$ (see~\cite[Proposition 3.6]{G11}).
From the second inequality, we have that $N_t\in L^2_{\operatorname{pred}}([0,t],\<N\>)$ (see~\cite[Proposition 3.6]{G11}). Applying Theorem~\ref{theorem 4.8}, we have that $ZX\in L^2([0,t],\mc{L}^2)$ and $\int_0^t Z_s\,dN_s=\int_0^tZ_sX_s\,dB_s.$ Also, by its definition, $dA_s=-\tfrac 12 X_s^2\,ds.$
Applying It\^o's formula~(see \cite{G11}) for the function $\theta(x)=e^x,$ we get that \begin{align} Z_t&=Z_0+\int_0^t\theta'(Y_s)dN_s+\int_0^t \theta'(Y_s)\,dA_s+\tfrac 12\int_0^t \theta''(Y_s)\,d\<N\>_s \nonumber\\ &=E+\int_0^t Z_s X_s\,dB_s-\tfrac 12\int_0^t Z_sX_s^2\,ds+\tfrac 12\int_0^t Z_sX_s^2\,ds\nonumber\\ &=E+\int_0^t Z_s X_s\,dB_s, \end{align} and we note in passing that in this special case, we have, by \cite[Theorem 4.13]{G7}, that $(Z_t)$ is a martingale. This proves the theorem for the bounded case.
We proceed to the general case by the process of localization (see~\cite[Section 5]{G10}). Let $$
\mathfrak{B}_t^n:=\bigcup_{s<t}\left(\mathfrak{B}(|N_s|> nE)\cup\mathfrak{B}(|X_s| > nE)\right), $$ and let $\mathbb P^{(n)}_t$ be the projection onto $\mathfrak{B}_t^n.$ For each fixed $n,$ the system of band projections $(\mathbb P_t^{(n)},\mb{I})_{t\in[0,a]}$ is a right continuous increasing system of projections. Thus, as in the paper cited above, it defines a stopping time $\mb{S}_n$ of the filtration $(\mathfrak{F}_t)$ and $\mb{S}_n\uparrow a\mb{I}$ (see~\cite[Proposition 5.1]{G10}). Note that $\mathbb P_t^{(n)}$ is the projection onto the band $\mathfrak{B}(tE>\mb{S}_nE).$ We denote $\mb{I}-\mathbb P_t^{(n)}$ by $\Q_t^{(n)},$ the projection onto the band $\mathfrak{B}(tE\le \mb{S}_nE).$
For every fixed $n,$ we have \begin{equation}
\Q_t^{(n)}(|Z_t|)=\Q_t^{(n)}(|Z_{t\wedge\mb{S}_n}|)\le nE \mbox{ and }\Q_t^{(n)}(|X_t|)=\Q_t^{(n)}(|X_{t\wedge\mb{S}_n}|)\le nE. \end{equation}
Consequently, $\Q_t^{(n)}(|Z_tX_t|)\le n^2E$ which implies that $$ \Q_t^{(n)}(Z_tX_t)\in L^2_{\operatorname{pred}}([0,t],\mc{L}^2). $$ But then, since $\mathbb P_t^{(n)}\uparrow \mb{I},$ we have $\Q_t^{(n)}\downarrow 0$ and so, in $\mathfrak{E}^u,$ it follows that $$ Z_tX_t-\mathbb P_t^{(n)}(Z_tX_t)=\Q_t^{(n)}Z_tX_t\to 0. $$ Thus $Z_tX_t$ is the order limit in $\mathfrak{E}^u$ of a sequence of elements belonging to $L^2([0,t],\mc{L}^2)$ and therefore belongs to $\mc{L}_{ad}(L^2[0,t]).$ This shows that the integral $\int_0^t Z_sX_s\,dB_s$ exists and is a local martingale.
We next claim that $N_{t\wedge \mb{S}_n}$ is a martingale. First we show that $A_{t\wedge\mb{S}_n}$ is bounded by a constant: we have \begin{align*}
|A_{t\wedge \mb{S}_n}|=\tfrac 12\int_0^{t\wedge\mb{S}_n}X_s^2\,ds =\tfrac 12\int_0^t\Q^{(n)}_sX_s^2\,ds \le\tfrac 12 n^2tE\le\tfrac 12 n^2aE. \end{align*} Therefore, $(\Q^{(n)}_tX_t)$ belongs to $L^2([0,t],\mc{L}^2)$ which in its turn shows that, for every $n,$ $$ M^{(n)}_t:=N_{t\wedge\mb{S}_n}=\int_0^{t\wedge\mb{S}_n}X_s\,dB_s=\int_0^t\Q^{(n)}_sX_s\,dB_s $$ is a martingale.
{\em We claim that $(M^{(n)}_t)$ is a bounded martingale for each fixed $n.$}
Since $n$ is fixed, we shall write $\mb{S}$ for $\mb{S}_n$ and similarly $\mathbb P_t$ for $\mathbb P^{(n)}_t$ and $\Q_t$ for $\Q^{(n)}_t.$ Let $\pi=\{0=t_0<t_1<\ldots <t_m=t\}$ be a partition of the interval $[0,t],$ and let $$ \mb{S}_\pi=\sum_{i=1}^m t_{i-1}(\mathbb P_{t_i}-\mathbb P_{t_{i-1}})=\sum_{i=1}^m t_{i-1}\Delta\mathbb P_{t_i} $$ be a lower approximating Freudenthal sum for $\mb{S}.$ Then $\mb{S}_\pi$ is a stopping time for the filtration $(\mathfrak{F}_s,\mb{F}_s)_{0\le s\le t}$ and $$ \int_0^{\mb{S}_\pi}X_s\,dB_s=\sum_{i=1}^m\Delta\mathbb P_{t_i}\int_0^{t_{i-1}}X_s\,dB_s=\sum_{i=1}^m\Delta\mathbb P_{t_i}N_{t_{i-1}}. $$ Hence, $$
\left|\int_0^{t\wedge\mb{S}_\pi}X_s\,dB_s\right|=\sum_{i=1}^m\Delta\mathbb P_{t_i}|N_{t_{i-1}}|\le nE. $$ On the other hand, $$ \int_0^{t\wedge\mb{S}_\pi}X_s\,dB_s=\int_0^tY^\pi_s\,dB_s, $$ where $$ (Y^\pi_s)=\sum_{i=1}^m\Q_{t_{i-1}}X_sI_{(t_{i-1},t_i]}(s). $$
Now, if $|\pi|\to 0,$ we have $(Y^\pi_s)\to (\Q_sX_{s-})=(\Q_sX_{s}),$ since we assume $(X_s)$ to be order continuous in $s.$ Since $|\Q_sX_s|\le nE,$ it follows from Lebesgue's theorem that $$
\int_0^t\phi\mb{F}(|\Q_sX_s-Y^\pi_s|^2)\,ds\to 0 \mbox{ as } |\pi|\to 0. $$ Using the defining property of the It\^o integral, we then have $$
\lim_{|\pi|\to 0}\phi\mb{F}(|\int_0^t\Q_sX_s\,dB_s-\int_0^tY^\pi_s\,dB_s|^2)=0. $$ Thus, $(\int_0^tY^\pi_s\,dB_s)$ converges in $\mc{L}^2$ to $\int_0^t \Q_sX_s\,dB_s.$ But, $$
\left|\int_0^tY^\pi_s\,dB_s\right|=\left|\int_0^{t\wedge\mb{S}}\def\T{\mb{T}_\pi} X_s\,dB_s\right|\le nE, $$ from which we conclude (by the Birkhoff inequality) that also $$
\left|\int_0^{t\wedge\mb{S}}X_s\,dB_s\right|=\left|\int_0^{t}\Q_sX_s\,dB_s\right|\le nE. $$ This establishes our claim.
We can now apply the first part of the proof. For each fixed $n,$ let $\tilde{X}^{(n)}_t:=\Q^{(n)}_tX_t.$ Then $(\tilde{X}^{(n)}_t)$ is a bounded adapted process and, by our preceding step, $(\tilde{N}^{(n)}_t):=(\int_0^t\tilde{X}^{(n)}_s\,dB_s)$ is a bounded martingale. Therefore, if $$ \tilde{Y}^{(n)}_t:=\int_0^t\tilde{X}^{(n)}_s\,dB_s-\tfrac 12\int_0^t[\tilde{X}^{(n)}_s]^2\,ds $$ and if $$ \tilde{Z}^{(n)}_t(\tilde{X}^{(n)})=\exp(\tilde{Y}^{(n)}_t), $$ then $$ \tilde{Z}^{(n)}_t(\tilde{X}^{(n)})=E+\int_0^t\tilde{Z}^{(n)}_s\tilde{X}^{(n)}_s\,dB_s. $$ However, since $\Q^{(n)}_t$ is the projection onto the band $\mathfrak{B}(tE\le \mb{S}_nE),$ we have $\Q^{(n)}_t(\tilde{X}^{(n)}_t)=\Q^{(n)}_tX_t,$ $\Q^{(n)}_t(\tilde{Z}^{(n)}_t)=\Q^{(n)}_tZ_t$ and therefore, on the band $\Q^{(n)}_t\mathfrak{E},$ we have $$ \Q^{(n)}_t[Z_t(X)]=\Q^{(n)}_t\left(E+\int_0^tZ_sX_s\,dB_s\right). $$ Since $\mb{S}_n\uparrow a\mb{I},$ it follows that, in order in $\mathfrak{E}^u,$ the left-hand side converges to $Z_t(X)$ and the right-hand side to $E+\int_0^tZ_sX_s\,dB_s.$ \phantom{em}
$\Box$
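In the classical simulated setting, the conclusion $Z_t(X)=E+\int_0^t Z_sX_s\,dB_s$ can be checked path by path: the closed-form exponential and the Euler scheme for $dZ=ZX\,dB,$ $Z_0=1,$ should agree up to discretisation error. A NumPy sketch (integrand, step size, seed and tolerance are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n, dt = 10_000, 1e-4
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate([[0.0], np.cumsum(dB)])
X = np.cos(B[:-1])                         # a bounded adapted integrand

# closed form Z_t = exp( int X dB - 1/2 int X^2 ds ) on the grid
Z = np.exp(np.cumsum(X * dB) - np.cumsum(0.5 * X**2 * dt))
Z = np.concatenate([[1.0], Z])

# Euler scheme for dZ = Z X dB, Z_0 = 1
Ze = np.empty(n + 1)
Ze[0] = 1.0
for k in range(n):
    Ze[k + 1] = Ze[k] * (1.0 + X[k] * dB[k])

# the two approximations agree up to discretisation error
assert abs(Ze[-1] / Z[-1] - 1.0) < 0.05
```

Shrinking the step size tightens the agreement, in line with the theorem's exact identity in the limit.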
We note that, if $Z_t(X)$ is a martingale, then $\mb{F}(Z_t(X))=E$ for all $t\in[0,T],$ because, for all $t\in[0,T],$ we have $$ \mb{F}(Z_t(X))=Z_0(X)=E. $$ That the converse is true, is the content of the next theorem.
\begin{theorem} If $X\in\mc{L}_{ad}(L^2[0,T])$ satisfies the condition that $$ \mb{F}(Z_t(X))=E \mbox{ for all } t\in[0,T], $$ then the exponential process $(Z_t(X))_{t\in[0,T]}$ is a martingale. \end{theorem} {\em Proof.}
From equation (\ref{equation 5.2}) it follows that $Z_t(X)$ is a local martingale. Hence there exists an increasing sequence of stopping times $\mb{T}_n\uparrow T$ such that the stopped processes $$ Z_t^n(X):=Z_{t\wedge \mb{T}_n}(X) $$ are martingales. Therefore, $$ \mb{F}_s(Z_t^n(X))=Z_s^n(X),\ \ 0\le s\le t,\ n\ge 1. $$ We now let $n$ tend to infinity and get, by Fatou's lemma, that \begin{equation} \mb{F}_s(Z_t(X))=\mb{F}_s(\liminf Z_t^n(X))\le \liminf \mb{F}_s(Z_t^n(X)) =\liminf Z^n_s(X) = Z_s(X). \end{equation} It follows that $(Z_t(X))_{0\le t\le T}$ is a supermartingale. Suppose that it is not a martingale. Then, for some $s,t\in[0,T],$ $s<t,$ we have that $Z_s(X)-\mb{F}_s(Z_t(X))>0$ and since $\mb{F}$ is strictly positive, we get $$ \mb{F}(Z_s(X))>\mb{F}(\mb{F}_s(Z_t(X)))=\mb{F}(Z_t(X)), $$ contradicting our assumption that $\mb{F}(Z_s(X))=\mb{F}(Z_t(X))=E$ for all $s,\ t\in[0,T].$\phantom{em}
$\Box$
\section{Integration by parts formula for martingales}\label{sec 7}
Let $(X_t)$ and $(Y_t)$ be two semi-martingales, $X_t=X_0+M_t+B_t$ and $Y_t=Y_0+N_t+C_t$, with $0\le t\le a$, where $M_t,N_t$ are martingales and $B_t,C_t$ are regular adapted processes satisfying $B_0=0$ and $C_0=0.$ Then the integration by parts formula holds, i.e., \begin{equation} X_tY_t-X_0Y_0=\int_0^tX_s\,dY_s+\int_0^t Y_s\,dX_s+\<M,N\>_t. \end{equation} To prove this, we have to apply It\^o's rule for two-dimensional processes. We therefore include the proof of It\^o's rule for a multi-dimensional process (see~\cite[Theorem 3.3.6, page 153]{KS}). We shall prove the following.
\begin{theorem}\label{theorem 8.7} Let $M_t=(M_t^{(1)},M_t^{(2)},\ldots,M_t^{(n)},\mathfrak{F}_t)_{t\in[0,a]}$ be a vector of continuous martingales, $A_t:=(A_t^{(1)},A_t^{(2)},\ldots,A_t^{(n)},\mathfrak{F}_t)_{t\in[0,a]}$ a vector of adapted processes of bounded variation with $A_0=0$ and set $X_t=X_0+M_t+A_t,\ 0\le t\le a,$ where $X_0=(X_0^{(i)})$ and $X_0^{(i)}\in\mathfrak{F}_0.$ Let $f(t,x):[0,a]\times\mb{R}^n\to\mb{R}$ be of class $C^{1,2}.$ Then, for $t\in[0,a],$ \begin{multline} f(t,X_t)=f(0,X_0)+\int_0^t\frac{\partial}{\partial t}f(s,X_s)\,ds +\sum_{i=1}^n\int_0^t\frac{\partial}{\partial x_i}f(s,X_s)\,dA_s^{(i)} \\ +\sum_{i=1}^n\int_0^t\frac{\partial}{\partial x_i}f(s,X_s)\,dM_s^{(i)} \\ +\frac 12\sum_{i=1}^n\sum_{j=1}^n\int_0^t\frac{\partial^2}{\partial x_i\partial x_j}f(s,X_s) \,d\<M^{(i)},M^{(j)}\>_s. \end{multline} \end{theorem} {\em Proof.} We refer the reader to the proof of the It\^o formula in the one-dimensional case as given in~\cite[Section 3.2]{G11}, and, without going too much into the detail, show how the proof of the multi-dimensional case follows. For notational convenience, set $z_t=(t,x_t)$ (thus $Z_t=(t,X_t)$) and consider the bounded case. For a partition $\pi=\{0=t_0<t_1<\cdots<t_m=t\},$ we have (with the products interpreted as inner products) \begin{multline}\label{equation 8.15} f(t,X_t)-f(0,X_0)=f(Z_t)-f(Z_0)= \\
=\sum_{k=1}^m\nabla f(Z_{t_{k-1}})(Z_{t_k}-Z_{t_{k-1}})+\frac{1}{2}\sum_{k=1}^m[Y_k(Z_{t_k}-Z_{t_{k-1}})](Z_{t_k}-Z_{t_{k-1}}). \end{multline} The derivative is the total derivative, i.e., $$ \nabla f=(\frac{\partial f}{\partial t},\frac{\partial f}{\partial x_1},\ldots,\frac{\partial f}{\partial x_n}) =(f_t,f_{x_1},\ldots,f_{x_n})$$ and $Y_k$ is an
$(n+1)\times (n+1)$-matrix $Y_k=(Y^{k}_{ij})$ where $Y^k_{ij}$ is the image under representation of the continuous function $\displaystyle\frac{\partial^2f}{\partial z_i\partial z_j}(\eta^k_{ij})=f_{z_iz_j}(\eta^k_{ij})$ as constructed in the one-dimensional case in~\cite[Lemma 3.9]{G11}. The next step is to substitute $Z_t=(t,X_0+M_t+A_t)$ in (\ref{equation 8.15}) and then to approximate the resulting sums as $|\pi|$ tends to zero.
The terms involving the first order derivatives yield the first four terms in the theorem exactly by the same arguments as in \cite{G11}.
In the last term, we have that all mixed terms that contain a factor of the form $(t_i-t_j),$ or
$(A_{t_i}-A_{t_j})$ tend to zero and consequently we have to show that, as $|\pi|$ tends to zero, the sum \begin{equation} \frac 12\sum_{k=1}^m[Y_k(M_{t_k}-M_{t_{k-1}})](M_{t_k}-M_{t_{k-1}}) \end{equation} tends to the last term. The first step there is to show that we can replace $Y_k$ by the matrix $\displaystyle\left(\frac{\partial^2 f}{\partial x_i\partial x_j}(t_{k-1},X_{t_{k-1}})\right)=\big(f_{x_ix_j}(t_{k-1},X_{t_{k-1}})\big).$ To do this, we consider the difference \begin{multline}
\left|\sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n Y^k_{ij}(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}}) \right.\right. \\
\left.\left. -\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right]\right|\\
=\left|\sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n (Y^k_{ij}-f_{x_ix_j}(t_{k-1},X_{t_{k-1}}))(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right]\right|\\
\le \sum_{i=1}^n\sum_{j=1}^n \sup_{1\le k\le m}|Y^k_{ij}-f_{x_ix_j}(t_{k-1},X_{t_{k-1}})|\sum_{k=1}^m
|M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}}||M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}}|. \end{multline} Now for each fixed $i,j$ and $\phi\in\Phi,$ we have that (see \cite[Lemma 3.9(c)]{G11}) $$
\phi\mb{F}(m^{ij})=\phi\mb{F}(\sup_{1\le k\le m}|Y^k_{ij}-f_{x_ix_j}(t_{k-1},X_{t_{k-1}})|)\to 0 \mbox{ as } |\pi|\to 0. $$ Also,
since $\mb{F}\sum_{k=1}^m(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})^2\le\mb{F}(M^{(i)}_{t_m})^2\le K^2E$ (see~\cite[Lemma 6.2]{G11}), \begin{align*} \mb{F}(\sum_{k=1}^m
|M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}}||M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}}|) &\le\mb{F}\left(V^{(2)}_t(\pi,M^{i})^{1/2} V^{(2)}_t(\pi,M^{j})^{1/2}\right) \\ &\le \left(\mb{F} V^{(2)}_t(\pi,M^{i})\mb{F} V^{(2)}_t(\pi,M^{j})\right)^{1/2} \\ &\le \left(K^4E\right)^{1/2}\\ &=K^2E. \end{align*}
It now follows from the Cauchy inequality and Corollary \ref{corollary 8.5} that $Y^k$ can be replaced by the matrix $\displaystyle\left(f_{x_ix_j}(t_{k-1},X_{t_{k-1}})\right).$
The final step is to consider \begin{multline*} \sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right.\\ \left.-\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}})\right] \\ =\sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right.\\ \left.\phantom{\sum_{i=1}^n}-(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}})\right]. \\ \end{multline*} Now, using arguments used in the proof of Theorem~\ref{theorem 8.5}, we get \begin{multline*}
\mb{F}\left|\sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right.\right.\\
\left.\left.\phantom{\sum_{i=1}^n}-(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}})\right]\right|^2 \\ =\mb{F}\left[\sum_{k=1}^m\sum_{i=1}^n\sum_{j=1}^n [f_{x_ix_j}(t_{k-1},X_{t_{k-1}})]^2[(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right.\\ \left.\phantom{\sum_{i=1}^n}-\big(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}}\big)]^2\right]\\
\le 2 \|f_{x_ix_j}\|^2_\infty\mb{F}\left[\sum_{k=1}^m\sum_{i=1}^n\sum_{j=1}^n (M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})^2(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})^2\right.\\ \left.\phantom{\sum_{i=1}^n}+\big(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}}\big)^2\right] \\
\le 4K\|f_{x_ix_j}\|^2_\infty\left[\sum_{i=1}^n\sum_{j=1}^n \left(\mb{F} m_t(M^{(i)},\pi)V^{(2)}(\pi,M^{(j)}) \right.\right. \\ \left.\phantom{\sum_{i=1}^n}\left.+\mb{F} m_t(\<M^{(i)},M^{(j)}\>,\pi)\right)\right]. \end{multline*} The remainder of the proof to show that the right-hand side tends to zero, proceeds as in the proof of formula (\ref{equation 8.12}).\phantom{em}
$\Box$
To prove the integration by parts formula, we apply the above Theorem, taking $f(t,x,y):=xy$ (which does not depend on $t$). Then, \begin{multline*} X_tY_t-X_0Y_0 =f(t,X_t,Y_t)-f(0,X_0,Y_0)\\ =\int_0^tY_s\,dB_s+\int_0^tX_s\,dC_s+\int_0^tY_s\,dM_s+\int_0^tX_s\,dN_s \\ +\tfrac 12\int_0^t\,d\<M,N\>_s+\tfrac 12\int_0^t\,d\<N,M\>_s\\ =\int_0^tX_s\,dY_s+\int_0^tY_s\,dX_s+\<M,N\>_t. \end{multline*}
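On any partition, integration by parts is an exact algebraic identity for left-point sums, with the discrete bracket $\sum_k\Delta X_k\Delta Y_k$ playing the role of $\<M,N\>_t$: indeed $X_{k+1}Y_{k+1}-X_kY_k=X_k\Delta Y_k+Y_k\Delta X_k+\Delta X_k\Delta Y_k.$ A NumPy check on simulated semimartingale paths (the drifts, initial values and seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n, dt = 1000, 1e-3
dB1 = rng.normal(0.0, np.sqrt(dt), n)
dB2 = rng.normal(0.0, np.sqrt(dt), n)

# two semimartingales: martingale part + bounded-variation drift
X = np.concatenate([[1.0], 1.0 + np.cumsum(dB1 + dt)])        # X = 1 + B1_t + t
Y = np.concatenate([[2.0], 2.0 + np.cumsum(dB2 - 0.5 * dt)])  # Y = 2 + B2_t - t/2

dX, dY = np.diff(X), np.diff(Y)
lhs = X[-1] * Y[-1] - X[0] * Y[0]
# left-point sums plus the discrete bracket; telescopes exactly
rhs = np.sum(X[:-1] * dY) + np.sum(Y[:-1] * dX) + np.sum(dX * dY)
assert np.isclose(lhs, rhs)
```

Because the identity telescopes on every partition, the limiting formula only requires that the three sums converge, which is what the multi-dimensional It\^o formula delivers.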
\section{Girsanov's Theorem}
Let $(B_t)_{t\in[0,a]}$ be a Brownian motion adapted to the filtration $(\mathfrak{F}_t,\mb{F}_t)_{t\in[0,a]}.$ Let $X=(X_t)_{t\in[0,a]}$ belong to the space $\mc{L}_{ad}(L_2[0,a])$ and define the exponential process \begin{equation} Z_t=Z_t(X):=\exp\left[\int_0^t X_s\,dB_s-\tfrac 12 \int_0^t X_s^2\,ds\right]. \end{equation} As we have shown in Theorem 4.9, we have that \begin{equation} Z_t=E+\int_0^t Z_sX_s\,dB_s, \end{equation} which shows that $(Z_t)$ is a local martingale, and it is a martingale if $\mb{F}(Z_t)=E$ for all $t\in[0,a]$ (see Theorem 6.5).
We now define a new filtration on the space $\mathfrak{F}=\mathfrak{F}_a$ as follows: \begin{equation} \widetilde{\mb{F}}_s(Y):=Z_s^{-1}\mb{F}_s(Z_aY) \mbox{ for }0\le s\le a,\ Y\in\mathfrak{F}. \end{equation} We shall denote the range of $\widetilde{\mb{F}}_s$ by $\widetilde{\mathfrak{F}}_s.$ \begin{proposition} If $(Z_s)$ is a martingale, then \begin{enumerate} \item For each $s,$ $\widetilde{\mb{F}}_s$ is a conditional expectation. \item The system $(\widetilde{\mathfrak{F}}_t,\widetilde{\mb{F}}_t)_{t\in[0,a]}$ is a filtration on $\mathfrak{F}.$ \item $\mathfrak{F}_t=\widetilde{\mathfrak{F}}_t.$ \item If $0\le s\le t\le a,$ then $\widetilde{\mb{F}}_s(Y)=Z_s^{-1}\mb{F}_s(Z_tY).$ \end{enumerate} \end{proposition} {\em Proof.} 1. Clearly $\widetilde{\mb{F}}_s$ is positive for every $s.$ Moreover, since by assumption $(Z_s)$ is a martingale, $$ \widetilde{\mb{F}}_s(E)=Z_s^{-1}(\mb{F}_s(Z_aE))=Z_s^{-1}\mb{F}_s(Z_a)=Z_s^{-1}Z_s=E. $$ If $Y>0,$ then $\widetilde{\mb{F}}_s(Y)=Z_s^{-1}\mb{F}_s(Z_aY)>0$ because $\mb{F}_s$ is strictly positive and multiplication by $Z_s^{-1}$ is a strictly positive operator. We show that it is a projection: \begin{align*} \widetilde{\mb{F}}_s^2(Y)&=Z_s^{-1}\mb{F}_s(Z_a\widetilde{\mb{F}}_s(Y))=Z_s^{-1}\mb{F}_s(Z_aZ_s^{-1}\mb{F}_s(Z_aY))\\ &=Z_s^{-1}\mb{F}_s(Z_aY)\mb{F}_s(Z_aZ_s^{-1})=Z_s^{-1}\mb{F}_s(Z_aY)Z_s^{-1}\mb{F}_sZ_a\\ &=Z_s^{-1}\mb{F}_s(Z_aY)Z_s^{-1}Z_s=Z_s^{-1}\mb{F}_s(Z_aY)=\widetilde{\mb{F}}_s(Y). \end{align*}
2. Let $0\le s<t\le a,$ then \begin{align*} \widetilde{\mb{F}}_s\widetilde{\mb{F}}_t(Y)&=Z_s^{-1}\mb{F}_s(Z_a\widetilde{\mb{F}}_t(Y))=Z_s^{-1}\mb{F}_s(Z_aZ_t^{-1}\mb{F}_t(Z_aY))\\ &=Z_s^{-1}\mb{F}_s\mb{F}_t(Z_aZ_t^{-1}\mb{F}_t(Z_aY))=Z_s^{-1}\mb{F}_s\mb{F}_t(Z_aY)\mb{F}_t(Z_aZ_t^{-1})\\ &=Z_s^{-1}\mb{F}_s(Z_aY)Z_t^{-1}Z_t=Z_s^{-1}\mb{F}_s(Z_aY)=\widetilde{\mb{F}}_s(Y). \end{align*}
3. If $Y\in{\mathfrak{F}}_t$ then $$ \widetilde{\mb{F}}_t(Y)=Z_t^{-1}\mb{F}_t(Z_a Y)=Z_t^{-1}Y\mb{F}_t(Z_a)=Z_t^{-1}YZ_t=Y. $$ Hence, $Y\in\widetilde{\mathfrak{F}}_t,$ and so $\mathfrak{F}_t\subset\widetilde{\mathfrak{F}}_t.$ Conversely, if $Y\in\widetilde{\mathfrak{F}}_t,$ then, since $\widetilde{\mb{F}}_t$ is a projection, we have $$ Y=\widetilde{\mb{F}}_t(Y)=Z_t^{-1}\mb{F}_t(Z_aY) $$ and so $Z_tY\in\mathfrak{F}_t\subset\mathfrak{F}_t^u.$ It follows that $Y=Z_t^{-1}(Z_tY)\in\mathfrak{F}_t^u\cap\mathfrak{E}.$ Thus, $Y\in\mathfrak{F}_t$ and so we are done.
4.\ Let $Y\in\mathfrak{F}_t.$ Then, since $\mb{F}_s\mb{F}_t=\mb{F}_s$ and $\mb{F}_t(Z_a)=Z_t,$ we have \begin{multline*} \widetilde{\mb{F}}_s(Y)=Z_s^{-1}\mb{F}_s(Z_aY)=Z_s^{-1}\mb{F}_s\mb{F}_t(Z_aY)\\ =Z_s^{-1}\mb{F}_s\big(Y\mb{F}_t(Z_a)\big) =Z_s^{-1}\mb{F}_s(Z_tY). \end{multline*} \phantom{em}
$\Box$
\begin{example}(\rm See~\cite[Page 191, Section 3.3.5]{KS})\end{example} Let $(\Omega,\mathfrak{F},P)$ be a probability space, let $(\mathfrak{F}_t)_{t\in[0,a]}$ be a filtration of sub-$\sigma$-algebras of $\mathfrak{F}$ and let $(X_t)_{t\in[0,a]}$ be a measurable adapted process satisfying $$ P\left[\int_0^a X_t^2\,dt<\infty\right]=1. $$ Suppose that
$E[Z_t(X)]=\int_\Omega Z_t(X)\,dP=1$ for $0\le t\le a.$ It is known that this condition implies that $(Z_t)_{t\in[0,a]}$ is a martingale with respect to the filtration $(\mathfrak{F}_t).$ Define probability measures $\widetilde{P}_t(A)=E[1_AZ_t],\ A\in\mathfrak{F}_t,\ t\in[0,a].$ It then follows that the conditional expectations with reference to these measures, denoted by $\widetilde{E}(Y\,|\,\mathfrak{F}_s),$ satisfy the Bayes' rule (\cite[Lemma 3.5.3]{KS}) $$
\widetilde{E}(Y\,|\,\mathfrak{F}_s)=\frac{1}{Z_s}E(Z_tY\,|\,\mathfrak{F}_s),\ \mbox{$P$- and $\widetilde{P}_a$-almost everywhere} $$ for all $0\le s\le t\le a$ and every $\mathfrak{F}_t$-measurable $Y.$ This result follows easily from the definition of the measures and of conditional expectation. It is the motivation for defining the conditional expectations $\widetilde{\mb{F}}_t,$ $t\in[0,a],$ in our measure-free approach.
The proposition above shows that for a given process $X,$ filtration $(\mathfrak{F}_t,\mb{F}_t)$ and Brownian motion $(B_t,\mathfrak{F}_t,\mb{F}_t),$ the filtration can be transformed into a new filtration $(\mathfrak{F}_t,\widetilde{\mb{F}}_t).$ Girsanov's theorem shows how to transform the given Brownian motion $(B_t,\mathfrak{F}_t,\mb{F}_t)$ into a Brownian motion $(\widetilde{B}_t,\mathfrak{F}_t,\widetilde{\mb{F}}_t)$ (see~\cite{Gi} and \cite{CM}).
\begin{theorem}{\rm(Girsanov (1960), Cameron and Martin (1944))}\label{theorem 7.3} Let the exponential process $Z=Z(X),$ as defined in (\ref{equation 8.1}), be a martingale. Then, the process $(\widetilde{B}_t,\mathfrak{F}_t,\widetilde{\mb{F}}_t)_{t\in[0,a]},$ defined by \begin{equation} \widetilde{B}_t:= B_t -\int_0^t X_s\,ds,\ \ 0\le t\le a, \end{equation} is a Brownian motion. \end{theorem}
We shall use L\'evy's characterization of Brownian motion to prove the theorem. The result in~\cite{G11} shows that we have to prove that $\widetilde{B}$ is $\gamma$-H\"older continuous and has quadratic variation $\<\widetilde{B}\>_t=tE.$ In order to do this, we first prove the following result from which the desired result easily follows.
\begin{proposition} Let $Z=Z(X)$ be a martingale. If $M$ is a $\gamma$-H\"older continuous martingale, then the same is true for the process $\widetilde{M}$ defined by \begin{equation} \widetilde{M}_t:=M_t-\int_0^tX_s\,d\<M,B\>_s,\ \ 0\le t\le a. \end{equation} Furthermore, if $N$ is another $\gamma$-H\"older continuous martingale and if $$ \widetilde{N}_t=N_t-\int_0^tX_s\,d\<N,B\>_s,\ \ 0\le t\le a, $$ then $$ \<\widetilde{M},\widetilde{N}\>_t=\<{M},{N}\>_t,\ \ 0\le t\le a. $$ \end{proposition} {\em Proof.} We assume that $M$ and $N$ are bounded martingales with bounded quadratic variations and also that $Z_t(X)$ and $\int_0^tX_s^2\,ds$ are bounded by some $ME.$ It then follows from the Kunita-Watanabe inequality (Theorem~\ref{Kunita Watanabe}) that $$
\left|\int_0^tX_s\,d\<M,B\>_s\right|^2\le \int_0^tX_s^2\,ds\int_0^t\,d\<M\>_s\le\<M\>_t\int_0^t X_s^2\,ds\le M^2E. $$ Therefore, $\widetilde{M}$ is also bounded. We now apply the integration by parts formula, derived in Section~\ref{sec 7}, together with Equation~(\ref{equation 8.2}), to get \begin{align*} Z_t\widetilde{M}_t&=Z_t\widetilde{M}_t-Z_0\widetilde{M}_0 \\ &=\int_0^t Z_u\,d\widetilde{M}_u+\int_0^t\widetilde{M}_u\,dZ_u+\<Z,M\>_t \\ &=\left(\int_0^t Z_u\,dM_u-\int_0^t Z_uX_u\,d\<M,B\>_u\right)+\int_0^t\widetilde{M}_uZ_uX_u\,dB_u+\<Z,M\>_t. \end{align*} By the cross-variation formula (Theorem~\ref{theorem 4.6}), we have that \begin{align*} \int_0^t Z_uX_u\,d\<M,B\>_u&=\<I^B(Z_uX_u),I^M(E)\>_t \\ &=\<Z-E,M\>_t=\<Z,M\>_t-\<E,M\>_t=\<Z,M\>_t, \end{align*} for, $M=EM$ is a martingale, implying that $M$ and $E$ are orthogonal, i.e., $\<E,M\>_t=0.$ Thus, it follows that \begin{equation} Z_t\widetilde{M}_t=\int_0^tZ_u\,dM_u+\int_0^t\widetilde{M}_uZ_uX_u\,dB_u. \end{equation} The right-hand side is a martingale relative to the filtration $(\mathfrak{F}_t,\mb{F}_t)$ and so, for $s\le t\le a,$ we have from Proposition~\ref{proposition 8.1}(4) that \begin{equation} \widetilde{\mb{F}}_s(\widetilde{M}_t)=Z_s^{-1}\mb{F}_s(Z_t\widetilde{M}_t)=Z_s^{-1}Z_s\widetilde{M}_s=\widetilde{M}_s, \end{equation} and we have shown that $(\widetilde{M}_t,\mathfrak{F}_t,\widetilde{\mb{F}}_t)$ is a martingale.
Again applying the integration by parts formula, we get \begin{align*} &\widetilde{M}_t\widetilde{N}_t-\<M,N\>_t \\ &=\int_0^t\widetilde{M}_u\,d\widetilde{N}_u+\int_0^t\widetilde{N}_u\,d\widetilde{M}_u\\ &=\int_0^t\widetilde{M}_u\,d\left(N_u-\int_0^u X_s\,d\<N,B\>_s\right)+\int_0^t\widetilde{N}_u\,d\left(M_u-\int_0^uX_s\,d\<M,B\>_s\right)\\ &=\int_0^t\widetilde{M}_u\,dN_u+\int_0^t\widetilde{N}_u\,dM_u-\left[\int_0^t\widetilde{M}_u X_u\,d\<N,B\>_u +\int_0^t\widetilde{N}_uX_u\,d\<M,B\>_u\right]. \end{align*} This shows that $\widetilde{M}_t\widetilde{N}_t-\<M,N\>_t$ is a semi-martingale. Applying the integration by parts formula once more, we get \begin{multline}\label{equation 8.8} Z_t[\widetilde{M}_t\widetilde{N}_t-\<M,N\>_t] =\int_0^tZ_u\,d[\widetilde{M}_u\widetilde{N}_u-\<M,N\>_u]+ \\ +\int_0^t[\widetilde{M}_u\widetilde{N}_u-\<M,N\>_u]\,dZ_u+\<I^N(\widetilde{M})+I^M(\widetilde{N}),Z\>_t. \end{multline} The second integral is, using Equation~(\ref{equation 8.2}), equal to \begin{equation}\label{equation 8.9} \int_0^t[\widetilde{M}_u\widetilde{N}_u-\<M,N\>_u]Z_uX_u\,dB_u. \end{equation} By the preceding integral representation of $\widetilde{M}_u\widetilde{N}_u-\<M,N\>_u,$ we use the cross-variation formula to derive that \begin{align} &\int_0^tZ_u\,d[\widetilde{M}_u\widetilde{N}_u-\<M,N\>_u] \nonumber \\ &=\int_0^t Z_u\widetilde{M}_u\,dN_u+\int_0^t Z_u\widetilde{N}_u\,dM_u \nonumber \\ &\phantom{\int_0^t Z_u\widetilde{M}_u\,dN_u}-\int_0^t\widetilde{M}_uZ_uX_u\,d\<N,B\>_u -\int_0^t\widetilde{N}_uZ_uX_u\,d\<M,B\>_u \nonumber \\ &=\int_0^t Z_u\widetilde{M}_u\,dN_u+\int_0^t Z_u\widetilde{N}_u\,dM_u \nonumber \\ &\phantom{\int_0^t Z_u\widetilde{M}_u\,dN_u}-\<\int_0^t\widetilde{M}\,dN,Z\>_t-\<\int_0^t\widetilde{N}\,dM,Z\>_t. \label{equation 8.10}
\end{align} Substituting Equations (\ref{equation 8.9}) and (\ref{equation 8.10}) in Equation (\ref{equation 8.8}), we get \begin{align} Z_t[\widetilde{M}_t\widetilde{N}_t&-\<M,N\>_t] \nonumber \\ &=\int_0^t Z_u\widetilde{M}_u\,dN_u+\int_0^t Z_u\widetilde{N}_u\,dM_u +\int_0^t[\widetilde{M}_u\widetilde{N}_u-\<M,N\>_u]Z_uX_u\,dB_u. \label{equation 8.11} \end{align} Equation~(\ref{equation 8.11}) shows that $(Z_t[\widetilde{M}_t\widetilde{N}_t-\<M,N\>_t],\mathfrak{F}_t,\mb{F}_t)_{t\in[0,a]}$ is a martingale. It follows from the definition of the transformed filtration that $$ \widetilde{\mb{F}}_s(\widetilde{M}_t\widetilde{N}_t-\<M,N\>_t)=Z_s^{-1}\mb{F}_s(Z_t[\widetilde{M}_t\widetilde{N}_t-\<M,N\>_t]) =Z_s^{-1}Z_s[\widetilde{M}_s\widetilde{N}_s-\<M,N\>_s]=\widetilde{M}_s\widetilde{N}_s-\<M,N\>_s. $$ Therefore, $([\widetilde{M}_t\widetilde{N}_t-\<M,N\>_t],\mathfrak{F}_t,\widetilde{\mb{F}}_t)_{t\in[0,a]}$ is a martingale. But $\<\widetilde{M},\widetilde{N}\>$ is the unique process $A$ such that $\widetilde{M}\widetilde{N}-A$ is a martingale, and this shows that $$ \<\widetilde{M},\widetilde{N}\>_t=\<M,N\>_t,\ \ 0\le t\le a. $$ \phantom{em}
$\Box$
{\em Proof of Girsanov's theorem:}\ Put $M=N=B$ in Proposition 8.4. Then, by Equation~(\ref{equation 8.5}), we get that $$ \widetilde{B}_t=B_t-\int_0^tX_s\,d\<B,B\>_s=B_t-\int_0^tX_s\,d\<B\>_s=B_t-\int_0^tX_s\,ds, \ \ 0\le t\le a, $$ is a continuous martingale satisfying $\<\widetilde{B}\>_t=\<\widetilde{B},\widetilde{B}\>_t=\<B,B\>_t=\<B\>_t=tE.$ It follows from L\'evy's theorem (see~\cite[Theorem 4.6]{G11}) that $(\widetilde{B}_t,\mathfrak{F}_t,\widetilde{\mb{F}}_t)_{t\in[0,a]}$ is a Brownian motion.\phantom{em}
$\Box$
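As a quick sanity check (an illustration added here, not taken from the cited sources), consider the simplest case of a constant process $X_s=\mu E$ for a scalar $\mu.$ The exponential process and the transformed Brownian motion then take the familiar form:

```latex
% Constant-drift special case of Girsanov's theorem (illustrative).
% With X_s = \mu E the exponential process is the classical
% geometric-type martingale, and the theorem removes the drift \mu t:
\[
Z_t(X)=\exp\Big(\mu B_t-\tfrac{1}{2}\mu^2 t\Big),
\qquad
\widetilde{B}_t=B_t-\int_0^t \mu\,ds=B_t-\mu t,\qquad 0\le t\le a,
\]
\[
\text{so that } (\widetilde{B}_t,\mathfrak{F}_t,\widetilde{\mb{F}}_t)_{t\in[0,a]}
\text{ is a Brownian motion.}
\]
```

In the classical probabilistic picture this is exactly the Cameron-Martin shift: with respect to the transformed expectations the drifted path looks like a driftless Brownian motion.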
In many applications one needs the multidimensional Girsanov theorem. We state it here but will not prove it, since the proof does not require any new ideas.
Let $B=(B_t,\mathfrak{F}_t)_{t\in[0,a]}=((B_t^{(1)},\ldots,B_t^{(m)}),\mathfrak{F}_t)_{t\in[0,a]}$ be an $m$-dimensional Brownian motion and let $X=((X_t^{(1)},\ldots,X_t^{(m)}),\mathfrak{F}_t)_{t\in[0,a]}$ be a vector of adapted processes satisfying $X^{(i)}_t\in\mc{L}_{ad}^2(L^2[0,a])$ for $1\le i\le m.$
Then, for each $i,$ the stochastic integral $I^{B^{(i)}}(X^{(i)})$ is defined and is a local martingale.
Set \begin{equation}
Z_t(X):=\exp\left[\sum_{i=1}^m\int_0^tX_s^{(i)}\,dB_s^{(i)}-\frac 12\int_0^t\|X_s\|^2\,ds\right]. \end{equation}
Then we have, as in the one dimensional case, \begin{equation} Z_t(X)=1+\sum_{i=1}^m\int_0^tZ_s(X)X_s^{(i)}\,dB_s^{(i)}. \end{equation}
We then have the following theorem.
\begin{theorem} Assume that $Z(X)=(Z_t(X))$ is a martingale. Define the process $$ \widetilde{B}=((\widetilde{B}_t^{(1)},\ldots,\widetilde{B}_t^{(m)}),\mathfrak{F}_t)_{t\in[0,a]} $$ by \begin{equation} \widetilde{B}_t^{(i)}:= {B}_t^{(i)}-\int_0^t{X}_s^{(i)}\,ds,\ \ 1\le i\le m. \end{equation} Then, the process $(\widetilde{B}_t,\mathfrak{F}_t)_{t\in[0,a]}$ is an $m$-dimensional Brownian motion adapted to the filtration $(\mathfrak{F}_t,\widetilde{\mb{F}}_t).$ \end{theorem}
\textbf{Acknowledgement}
The research of the second named author was supported by the National Research Foundation (Grant No. 87502).
\input{Bibliografie.tex}
\begin{lemma} Let $X,Y$ be martingales satisfying $\sup\{|X_s|,|Y_s|,\<X\>_s,\<Y\>_s\}\le KE$ for all $s\in[a,t].$ Let $\pi=\{a=t_0<t_1<\cdots<t_m=t\}$ be a partition of $[a,t].$ Then \begin{equation} \mb{F}[CV_t(\pi)]^2\le 6K^4E. \end{equation} \end{lemma} {\em Proof.} We recall that it follows from the definition of the cross variation $\<X,Y\>$ that it can be written as the difference of two adapted natural positive increasing processes $A$ and $B,$ namely $A=\tfrac 14\<X+Y\>$ and $B=\tfrac 14\<X-Y\>.$ Therefore, since $\<X\pm Y\>\le 2(\<X\>+\<Y\>)\le 4KE,$ we have that $\sup\{A,B\}\le KE$ and hence $|\<X,Y\>|\le KE.$ For $0\le k\le m-1$ we have \begin{align*} &\mb{F}_{t_k}\left[\sum_{j=k+1}^m(X_{t_j}-X_{t_{j-1}})(Y_{t_j}-Y_{t_{j-1}})\right]\\ &=\mb{F}_{t_k}\left[\sum_{j=k+1}^m X_{t_j}Y_{t_j}-\mb{F}_{t_{j-1}}(X_{t_j}Y_{t_{j-1}}+X_{t_{j-1}}Y_{t_j}-X_{t_{j-1}}Y_{t_{j-1}})\right]\\ &=\mb{F}_{t_k}\left[\sum_{j=k+1}^m X_{t_j}Y_{t_j}-X_{t_{j-1}}Y_{t_{j-1}}\right]\\ &=\mb{F}_{t_k}(X_{t_m}Y_{t_m}-X_{t_k}Y_{t_k})\\ &=\mb{F}_{t_k}(\<X,Y\>_{t_m}-\<X,Y\>_{t_k})\\ &=\mb{F}_{t_k}(\<X,Y\>_{t_m}-(A_{t_k}-B_{t_k})) \\ &\le\mb{F}_{t_k}(\<X,Y\>_{t_m}+B_{t_k})\\ &\le\mb{F}_{t_k}(\<X,Y\>_{t_m}+B_{t_m})\\ &\le 2KE. \end{align*} Hence, \begin{align*} &\mb{F}\left[\sum_{k=1}^{m-1}(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})\sum_{j=k+1}^m(X_{t_j}-X_{t_{j-1}})(Y_{t_j}-Y_{t_{j-1}})\right]\\ &=\mb{F}\left[\sum_{k=1}^{m-1}(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})\,\mb{F}_{t_k}\!\left(\sum_{j=k+1}^m(X_{t_j}-X_{t_{j-1}})(Y_{t_j}-Y_{t_{j-1}})\right)\right]\\ &=\mb{F}\left[\sum_{k=1}^{m-1}(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})\,\mb{F}_{t_k}(\<X,Y\>_{t_m}-\<X,Y\>_{t_k})\right]\\ &\le\mb{F}\left[\sum_{k=1}^{m-1}(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})\,\mb{F}_{t_k}(\<X,Y\>_{t_m}+B_{t_m})\right]\\ &=\mb{F}\left[(\<X,Y\>_{t_m}+B_{t_m})\sum_{k=1}^{m-1}(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})\right] \\ &=\mb{F}\left[(\<X,Y\>_{t_m}+B_{t_{m}})(\<X,Y\>_{t_{m-1}}-\<X,Y\>_{t_0})\right]\\ &\le 4K^2E. \end{align*} Similarly, \begin{multline*} \mb{F}\left[\sum_{k=1}^{m-1}(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})\sum_{j=k+1}^m(X_{t_j}-X_{t_{j-1}})(Y_{t_j}-Y_{t_{j-1}})\right]\\ \ge \mb{F}\left[(\<X,Y\>_{t_m}-A_{t_{m}})(\<X,Y\>_{t_{m-1}}-\<X,Y\>_{t_0})\right]\ge -4K^2E. \end{multline*} So, we arrive at $$ \left|\mb{F}\left[\sum_{k=1}^{m-1}(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})\sum_{j=k+1}^m(X_{t_j}-X_{t_{j-1}})(Y_{t_j}-Y_{t_{j-1}})\right]\right|\le 4K^2E. $$
\noindent \end{document}
\section{The Wiener integral} Let $F_{B_t}(x):=\mb{F}(\mathbb P(B_t\le xE))E$ be the distribution function of $B_t.$ We proved in~\cite{G11} that for all $s<t,$ the distribution function of $B_t-B_s$ is given by $$ F_{B_t-B_s}(x)=\frac{1}{\sqrt{2\pi(t-s)}}\left[\int_{-\infty}^ xe^{-\frac{y^2}{2(t-s)}}dy\right]E, $$ which means that $B_t-B_s$ is normally distributed with mean $0$ and variance $t-s.$ Taking $s=0,$ we get $$ F_{B_t}(x)=\frac{1}{\sqrt{2\pi t}}\left[\int_{-\infty}^ xe^{-\frac{y^2}{2t}}dy\right]E, $$ and so $$ dF_{B_t}(x)=\frac{1}{\sqrt{2\pi t}}e^{-\frac{x^2}{2t}}dxE . $$ It follows from~\cite{G11} that $$ \mb{F}(B_t)=\left[\frac{1}{\sqrt{2\pi t}}\int_{-\infty}^\infty xe^{-\frac{x^2}{2t}}dx \right]E=0. $$
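For later reference, the same density also gives the second moment, consistent with the quadratic variation $\<B\>_t=tE$ (a routine check, added for illustration):

```latex
% Second moment of B_t computed from the Gaussian density:
\[
\mb{F}(B_t^2)=\left[\frac{1}{\sqrt{2\pi t}}\int_{-\infty}^\infty
x^2 e^{-\frac{x^2}{2t}}\,dx\right]E=tE .
\]
```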
We now consider a function $f\in L^2[a,b],$ i.e., a function that is square integrable with respect to Lebesgue measure on $[a,b].$ The stochastic process $X_t=f(t)E,\ t\in[a,b],$ then satisfies $X_t\in\mc{L}^2$ for almost every $t\in[a,b]$ because, for any $\phi\in\Phi,$ we have $\phi\mb{F}(|X_t|^2)=\phi\mb{F}(|f(t)|^2E)=|f(t)|^2\phi\mb{F}(E)=|f(t)|^2<\infty$ for almost all $t\in [a,b].$ Then, $$
\overline{q}(X_t)^2=\int_a^b \phi\mb{F}(|X_t|^2)\,dt=\int_a^b|f(t)|^2\,dt<\infty $$ holds for every $q\in\mathscr{Q},$ which shows that $X_t\in L^2([a,b],\mc{L}^2)$ (see~\cite{G7}). Since the process $X_t$ is adapted to any filtration on $\mathfrak{E},$ it is, in particular, adapted to the filtration $(\mathfrak{F}_t,\mb{F}_t)$ with reference to which $(B_t)$ is a Brownian motion. In the notation used in~\cite{G7}, it belongs to $ L^2_{ad}([a,b],\mc{L}^2)$ and so the It\^o integral of the process exists. In this case, the integral is called the {\em Wiener integral} of $f.$ Of course it is much easier to define the Wiener integral directly and not via the It\^o integral, but then we would simply repeat the arguments used in~\cite{G7} to define the It\^o integral. For a step function $f=\sum_{i=1}^n a_iI_{[t_{i-1},t_i)},$ where $t_0=a$ and $t_n=b,$ the Wiener integral equals $$ I(f)=\sum_{i=1}^n a_i(B_{t_i}-B_{t_{i-1}})=\sum_{i=1}^n f(t_{i-1})(B_{t_i}-B_{t_{i-1}}). $$ A special property of the Wiener integral, not necessarily shared by the It\^o integral, is the following:
\begin{theorem} For each $f\in L^2[a,b],$ the Wiener integral $\int_a^b f(t)\,dB_t$ is a Gaussian random variable with mean $0$ and variance $\|f\|^2=\int_a^b |f(t)|^2\,dt.$ \end{theorem} {\em Proof.} The increments $B_{t_i}-B_{t_{i-1}}$ of the Brownian motion, taken over non-overlapping intervals, are independent Gaussian variables with mean $0.$ A calculation with the characteristic function then shows that a linear combination of these independent increments is again Gaussian with mean $0.$ Since the variance of $B_t-B_s$ equals $t-s,$ that of $a(B_t-B_s)$ equals $a^2(t-s).$ Therefore, if $\sigma^2$ is the variance of $I(f)$ above, we get $$
\sigma^2=\sum_{i=1}^n a_i^2(t_i-t_{i-1})=\sum_{i=1}^n f(t_{i-1})^2(t_i-t_{i-1})=\int_a^b|f(t)|^2\,dt. $$ Thus the theorem is true for step functions.
Consider the mapping $f(t)\mapsto f(t)E$ from $L^2[a,b]\to L^2_{ad}([a,b],\mc{L}^2).$ As was remarked above, this mapping is an isometric embedding of $L^2[a,b]$ into $L^2_{ad}([a,b],\mc{L}^2).$ Since for an arbitrary $f\in L^2[a,b]$ there exists a sequence of step functions $f_n$ in $L^2[a,b]$ that converges in norm to $f,$ the images of the elements in the sequence are simple adapted processes that converge in $L^2([a,b],\mc{L}^2)$ to $(X_t).$ Hence, the Wiener integrals $I(f_n)$ (which are the same as the It\^o integrals $I(X_n)$) converge to the Wiener integral $I(f)=\int_a^b f(t)\,dB_t$ and the convergence is in the space $\mc{L}^2.$ If we denote the standard deviation of $I(f_n)$ by $\sigma_n,$ then, since $\sigma_n^2=\|f_n\|^2,$ we have that $\sigma_n$ is a convergent sequence, converging to $\sigma:=\|f\|.$ We thus have a sequence $I(f_n)$ of normally distributed stochastic variables with mean $0$ and variance $\sigma_n^2$ that converges in $\mc{L}^2$ to the stochastic variable $I(f).$ We have to prove that $I(f)$ is also normally distributed with mean zero and with variance $\sigma^2.$
The distribution function of $I(f_n)$ satisfies $$
F_n(x):=F_{I(f_n)}(x)=\frac{1}{\sqrt{2\pi}\|f_n\|}\left[\int_{-\infty}^x \exp\left(-{\tfrac{y^2}{2\|f_n\|^2}}\right)\,dy \right]E $$
For every fixed $x\in\mb{R},$ $$
\lim_{n\to\infty}F_n(x)=\frac{1}{\sqrt{2\pi}\|f\|}\left[\int_{-\infty}^x \exp\left(-{\tfrac{y^2}{2\|f\|^2}}\right)\,dy \right]E, $$
since $\|f_n\|\to\|f\|$ as $n\to\infty.$
On the other hand, as we shall prove in the sequel, since $I(f_n)\to I(f)$ in $\mc{L}^2,$ we have that $F_n(x)\to F(x):=F_{I(f)}(x)$ as $n\to\infty.$
Hence, $$
F(x):=\frac{1}{\sqrt{2\pi}\|f\|}\left[\int_{-\infty}^x \exp\left({-\tfrac{y^2}{2\|f\|^2}}\right)\,dy \right]E $$
and so we conclude that $I(f)$ is normally distributed with mean $0$ and variance $\|f\|^2.$
This concludes the proof of the theorem.\phantom{em}
$\Box$
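To illustrate the theorem just proved (an added example, not from the source), take $f(t)=t$ on $[a,b]=[0,1]$:

```latex
% Wiener integral of f(t) = t on [0,1]: a Gaussian variable with
% mean 0 and variance \|f\|^2 = \int_0^1 t^2 dt = 1/3.
\[
I(f)=\int_0^1 t\,dB_t,\qquad
\sigma^2=\|f\|^2=\int_0^1 t^2\,dt=\tfrac{1}{3}.
\]
```

Approximating $f$ by the step functions $f_n=\sum_{i=1}^n \tfrac{i-1}{n}\, I_{[(i-1)/n,\,i/n)}$ gives $\sigma_n^2=\sum_{i=1}^n\big(\tfrac{i-1}{n}\big)^2\tfrac 1n\to\tfrac13,$ exactly as in the approximation argument above.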
We recall that the sequence $(X_n)$ order converges in conditional probability to $X$ if for each $\epsilon>0,$ we have $$
\lim_{n\to\infty}|\phi|\mb{F}(\mathbb P(|X-X_n|\ge \epsilon E))=0 $$ for every $\phi\in\Phi.$ It follows from Chebyshev's inequality that if $X_n\to X$ in $\mc{L}^2,$ then $$
|\phi|\mb{F}(\mathbb P(|X-X_n|\ge\epsilon E))\le\frac{1}{\epsilon^2}|\phi|\mb{F}(|X-X_n|^2), $$ and so $X_n\to X$ in conditional probability. The next proposition is part of Theorem 7.1.7 in~\cite{A&D}. In the proof we will write, for an element $X\in\mathfrak{E}$ and a scalar $\lambda,$ that $X\le \lambda$ or $X\ge\lambda,$ meaning that $X\le\lambda E$ or $X\ge\lambda E.$
\begin{proposition} Let $X_n, X$ be elements of $\mc{L}^2$ with $\mb{F}$-conditional distribution functions $F_n(x), F(x).$ If $X_n$ converges in probability to $X,$ then $F_n(x)$ converges to $F(x)$ in all continuity points of $F.$ \end{proposition} {\em Proof.} We have \begin{align*} F_n(x)&=\mb{F}(\mathbb P(X_n\le x)E) \\ &=\mb{F}(\mathbb P(X_n\le x, X>x+\epsilon)E)+\mb{F}(\mathbb P(X_n\le x, X\le x+\epsilon)E)\\
&\le\mb{F}(\mathbb P(|X_n-X|\ge\epsilon )E)+\mb{F}(\mathbb P(X\le x+\epsilon)E)\\
&=\mb{F}(\mathbb P(|X_n-X|\ge\epsilon)E)+F(x+\epsilon), \end{align*} and \begin{align*} F(x-\epsilon)&=\mb{F}(\mathbb P(X\le x-\epsilon)E)\\ &=\mb{F}(\mathbb P(X\le x-\epsilon, X_n>x)E)+\mb{F}(\mathbb P(X\le x-\epsilon,X_n\le x)E)\\
&\le \mb{F}(\mathbb P(|X_n-X|\ge \epsilon )E)+\mb{F}(\mathbb P(X_n\le x)E)\\
&=\mb{F}(\mathbb P(|X_n-X|\ge\epsilon )E)+F_n(x). \end{align*}
Hence, for every $\phi\in\Phi,$ $$
|\phi|F(x-\epsilon)\le \liminf_{n\to\infty} |\phi|F_n(x)\le\limsup_{n\to\infty}|\phi|F_n(x)\le |\phi|F(x+\epsilon) $$ for every $\epsilon>0.$ This shows that in every continuity point of $F,$ we have that $$
\lim_{n\to\infty}|\phi|F_n(x)=|\phi|F(x) $$ Therefore, $\lim_{n\to\infty}F_n(x)=F(x)$ in every continuity point of $F.$ \phantom{em}
$\Box$
\begin{corollary} If $X_n$ converges to $X$ in $\mc{L}^2,$ then its $\mb{F}$-conditional distribution function converges in every point of continuity to the $\mb{F}$-conditional distribution function of $X.$ \end{corollary}
\section*{Appendix A: Cross-variation formula} We prove that the cross-variation formula for martingales holds. Although the proof is analogous to the proof of the quadratic variation formula \begin{equation}
\lim_{|\pi|\to 0} \sum_{k=1}^m(X_{t_k}-X_{t_{k-1}})^2=\<X\>, \end{equation} given in~\cite{G10}, there are some differences.
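Recall (an added remark) that the cross variation can be recovered from quadratic variations by polarization, which is what drives the analogy:

```latex
% Polarization: the cross variation in terms of quadratic variations.
% Taking X = Y = B recovers <B,B> = <B>, i.e. tE for Brownian motion.
\[
\<X,Y\>=\tfrac{1}{4}\big(\<X+Y\>-\<X-Y\>\big),
\qquad
\<B,B\>=\tfrac{1}{4}\<2B\>=\<B\>.
\]
```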
\begin{theorem}\label{theorem 8.5} Let $X,Y$ be $\gamma$-continuous martingales. Then, \begin{equation}\label{equation 8.12}
\lim_{|\pi|\to 0} \sum_{k=1}^m(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})=\<X,Y\>, \end{equation}
where $\pi=\{t_0,t_1,\ldots,t_m\}$ is a partition of the interval $[a,b]$ with mesh $|\pi|$ and the convergence is in $\mc{L}^1$-conditional probability (see~\cite[Section 6]{G10}). \end{theorem} {\em Proof.\ } Set \begin{equation*} CV_t(\pi):=\sum_{k=1}^m(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}}). \end{equation*} We first note that if $X,Y$ are in $\mc{M}_2,$ and if $0\le s<t\le u<v\le a,$ then \begin{multline*} \mb{F}[(X_v-X_u)(Y_t-Y_s)]=\mb{F}\{\mb{F}_u[(X_v-X_u)(Y_t-Y_s)]\}\\ =\mb{F}[(Y_t-Y_s)\mb{F}_u(X_v-X_u)]=0. \end{multline*} Thus, the expectations of increments over non-overlapping intervals are zero. Also, $$ \mb{F}[(X_v-X_u)(Y_v-Y_u)]=\mb{F}[X_vY_v-\mb{F}_u(X_uY_v+X_vY_u)+X_u Y_u]=\mb{F}[X_v Y_v- X_u Y_u]. $$ Then, since $XY-\<X,Y\>$ is a martingale, \begin{multline*} \mb{F}[(X_v-X_u)(Y_v-Y_u)-(\<X,Y\>_v-\<X,Y\>_u)]= \\ \mb{F}[(X_vY_v-\<X,Y\>_v)-(X_uY_u-\<X,Y\>_u)]=0. \end{multline*} It follows that the expectation of the product of terms of the form $(X_vY_v-\<X,Y\>_v)-(X_uY_u-\<X,Y\>_u)$ and of the form $(X_v-X_u)(Y_v-Y_u)-(\<X,Y\>_v-\<X,Y\>_u),$ taken over non-overlapping intervals, will be zero.
We firstly consider the case where $$
\sup\{|X_s|,|Y_s|,\<X\>_s,\<Y\>_s\}\le KE,\ s\in[a,t], $$ and, for a stochastic process $Z=(Z_s)_{s\in[a,t]},$ we use the notation
$\displaystyle m_t(Z;\pi):=\sup_{1\le k\le m}|Z_{t_k}-Z_{t_{k-1}}|.$ Then, \begin{align*} &\mb{F}[CV_t(\pi)-\<X,Y\>_t]^2 \\ &=\mb{F}\left[\sum_{k=1}^m(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})- (\<X,Y\>_{t_k}-\<X,Y\>_{t_{k-1}})\right]^2 \\ &=\sum_{k=1}^m\mb{F}\left[(X_{t_k}-X_{t_{k-1}})(Y_{t_k}-Y_{t_{k-1}})- (\<X,Y\>_{t_k}-\<X,Y\>_{t_{k-1}})\right]^2 \\ &\le 2\sum_{k=1}^m\mb{F}\left[(X_{t_k}-X_{t_{k-1}})^2(Y_{t_k}-Y_{t_{k-1}})^2+ (\<X,Y\>_{t_k}-\<X,Y\>_{t_{k-1}})^2\right] \\
&\le 2\sum_{k=1}^m\mb{F}\left[|X_{t_k}-X_{t_{k-1}}|(|X_{t_k}|+|X_{t_{k-1}}|)(Y_{t_k}-Y_{t_{k-1}})^2\right] \\
&\phantom{jacobus}+2\mb{F} m_t(\<X,Y\>;\pi)\sum_{k=1}^m|\<X,Y\>_{t_k}-\<X,Y\>_{t_{k-1}}|\\ &\le 4K\mb{F} m_t(X;\pi)\sum_{k=1}^m(Y_{t_k}-Y_{t_{k-1}})^2 \\ &\phantom{jacobus}+2\mb{F} m_t(\<X,Y\>;\pi)\sum_{k=1}^m\left[(\<X\>_{t_k}-\<X\>_{t_{k-1}})+(\<Y\>_{t_k}-\<Y\>_{t_{k-1}})\right]\mbox{ \rm by (3.3)}\\ &\le 4K\mb{F} m_t(X;\pi)V_t^{(2)}(\pi, Y) +2\mb{F} m_t(\<X,Y\>;\pi)(\<X\>_{t_m}+\<Y\>_{t_m})\\ &\le 4K\mb{F} m_t(X;\pi)V_t^{(2)}(\pi,Y) + 4K\mb{F} m_t(\<X,Y\>;\pi). \end{align*} From \cite[Lemma 6.2]{G10}, we have that $\mb{F}[V_t^{(2)}(\pi,Y)]^2\le 6K^4E$ and so, applying the Cauchy inequality to the first term, we get (as in the proof of the formula for one variable in \cite{G11}) that it is bounded by $$ 4K\sqrt{6K^4}\sqrt{\mb{F} m_t(X;\pi)^2}. $$
Now, if $|\pi|\to 0,$ the $\gamma$-H\"older continuity of $X$ and of $\<X,Y\>$ implies that in both terms the factor $m_t$ tends to zero. Therefore, the right-hand side tends to zero and we have proved the desired result for bounded martingales. The extension to the general case now proceeds via localization, exactly as in the proof of the corresponding result for the quadratic variation (see~\cite[Section 6]{G10}).\phantom{em}$\Box$
\begin{corollary}\label{corollary 8.5} If $\sup\{|X_s|,|Y_s|,\<X\>_s,\<Y\>_s\}\le KE,\ s\in[a,t],$ then $$ \mb{F}(CV_t(\pi))^2\le 2K\mb{F} m_t(X;\pi)V_t^{(2)}(\pi,Y). $$
\end{corollary}
\section*{Appendix B: A multi-dimensional version of It\^o's rule} In order to prove the integration by parts formula, we used the following multi-dimensional version of It\^o's rule (see~\cite[Theorem 3.3.6, page 153]{KS}).
\begin{theorem}\label{theorem 8.7} Let $M_t=(M_t^{(1)},M_t^{(2)},\ldots,M_t^{(n)},\mathfrak{F}_t)_{t\in[0,a]}$ be a vector of local continuous martingales, $B_t:=(B_t^{(1)},B_t^{(2)},\ldots,B_t^{(n)},\mathfrak{F}_t)_{t\in[0,a]}$ a vector of adapted processes of bounded variation with $B_0=0,$ and set $X_t=X_0+M_t+B_t,\ 0\le t\le a,$ where $X_0$ is an $\mathfrak{F}_0$-measurable element in $\mb{R}^n.$ Let $f(t,x):[0,a]\times\mb{R}^n\to\mb{R}$ be of class $C^{1,2}.$ Then, for $t\in[0,a],$ \begin{multline} f(t,X_t)=f(0,X_0)+\int_0^t\frac{\partial}{\partial t}f(s,X_s)\,ds +\sum_{i=1}^n\int_0^t\frac{\partial}{\partial x_i}f(s,X_s)\,dB_s^{(i)} \\ +\sum_{i=1}^n\int_0^t\frac{\partial}{\partial x_i}f(s,X_s)\,dM_s^{(i)} \\ +\frac 12\sum_{i=1}^n\sum_{j=1}^n\int_0^t\frac{\partial^2}{\partial x_i\partial x_j}f(s,X_s) \,d\<M^{(i)},M^{(j)}\>_s \end{multline} \end{theorem} {\em Proof.} We refer the reader to the proof of the It\^o formula in the one-dimensional case as given in~\cite[Section 3.2]{G11}, and, without going too much into the detail, show how the proof of the multi-dimensional case follows. For notational convenience, set $z_t=(t,x_t)$ (thus $Z_t=(t,X_t)$) and consider the bounded case. For a partition $\pi=\{0=t_0<t_1<\cdots<t_m=t\},$ we have (with the products interpreted as inner products) \begin{multline}\label{equation 8.15} f(t,X_t)-f(0,X_0)=f(Z_t)-f(Z_0)= \\
=\sum_{k=1}^m\nabla f(Z_{t_{k-1}})(Z_{t_k}-Z_{t_{k-1}})+\frac{1}{2}\sum_{k=1}^m[Y_k(Z_{t_k}-Z_{t_{k-1}})](Z_{t_k}-Z_{t_{k-1}}). \end{multline} The derivative is the total derivative, i.e., $$ \nabla f=\left(\frac{\partial f}{\partial t},\frac{\partial f}{\partial x_1},\ldots,\frac{\partial f}{\partial x_n}\right) =(f_t,f_{x_1},\ldots,f_{x_n})$$ and $Y_k$ is an
$(n+1)\times (n+1)$-matrix $Y_k=(Y^{k}_{ij})$ where $Y^k_{ij}$ is the image under representation of the continuous function $\displaystyle\frac{\partial^2f}{\partial z_i\partial z_j}(\eta^k_{ij})=f_{z_iz_j}(\eta^k_{ij})$ as constructed in the one-dimensional case in~\cite[Lemma 3.9]{G11}. The next step is to substitute $Z_t=(t,X_0+M_t+B_t)$ in (\ref{equation 8.15}) and then to approximate the resulting sums as $|\pi|$ tends to zero.
The terms involving the first order derivatives yield the first four terms in the theorem exactly by the same arguments as in \cite{G11}.
In the last term, we have that all mixed terms that contain a factor of the form $(t_i-t_j),$ or
$(B_{t_i}-B_{t_j})$ tend to zero, and consequently we have to show that, as $|\pi|$ tends to zero, the sum \begin{equation} \frac 12\sum_{k=1}^m[Y_k(M_{t_k}-M_{t_{k-1}})](M_{t_k}-M_{t_{k-1}}) \end{equation} tends to the last term. The first step there is to show that we can replace $Y_k$ by the matrix $\displaystyle\left(\frac{\partial^2 f}{\partial x_i\partial x_j}(t_{k-1},X_{t_{k-1}})\right)=\big(f_{x_ix_j}(t_{k-1},X_{t_{k-1}})\big).$ To do this we consider the difference \begin{multline} \sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n Y^k_{ij}(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}}) \right. \\ \left. -\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right]\\ =\sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n (Y^k_{ij}-f_{x_ix_j}(t_{k-1},X_{t_{k-1}}))(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right]\\
\le CV_t(\pi)\sup_{\substack{1\le k\le m \\ 1\le i,j\le n}}|Y^k_{ij}-f_{x_ix_j}(t_{k-1},X_{t_{k-1}})|. \end{multline} As in \cite[Lemma 3.9(c)]{G11}, it follows that $$
\phi\mb{F}\left(\sup_{\substack{1\le k\le m \\ 1\le i,j\le n}}|Y^k_{ij}-f_{x_ix_j}(t_{k-1},X_{t_{k-1}})|\right)\to 0 \mbox{ as $|\pi|\to 0$}. $$ It now follows from the Cauchy inequality and Corollary \ref{corollary 8.5} that $Y^k$ can be replaced by the matrix $\displaystyle\left(f_{x_ix_j}(t_{k-1},X_{t_{k-1}})\right).$
The final step is to consider \begin{multline*} \sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right.\\ \left.-\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}})\right] \\ =\sum_{k=1}^m\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})\big[(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\big.\\ \big.-(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}})\big] \end{multline*} Now, using arguments used in the proof of Theorem~\ref{theorem 8.5}, we get \begin{multline*}
\mb{F}\left|\sum_{k=1}^m\left[\sum_{i=1}^n\sum_{j=1}^n f_{x_ix_j}(t_{k-1},X_{t_{k-1}})(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right.\right.\\
\left.\left.\phantom{\sum_{i=1}^n}-(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}})\right]\right|^2 \\ =\mb{F}\left[\sum_{k=1}^m\sum_{i=1}^n\sum_{j=1}^n [f_{x_ix_j}(t_{k-1},X_{t_{k-1}})]^2[(M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})\right.\\ \left.\phantom{\sum_{i=1}^n}-\big(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}}\big)]^2\right]\\
\le 2 \|f_{x_ix_j}\|^2_\infty\mb{F}\left[\sum_{k=1}^m\sum_{i=1}^n\sum_{j=1}^n (M^{(i)}_{t_k}-M^{(i)}_{t_{k-1}})^2(M^{(j)}_{t_k}-M^{(j)}_{t_{k-1}})^2\right.\\ \left.\phantom{\sum_{i=1}^n}+\big(\<M^{(i)},M^{(j)}\>_{t_k}-\<M^{(i)},M^{(j)}\>_{t_{k-1}}\big)^2\right] \\
\le 4K\|f_{x_ix_j}\|^2_\infty\left[\sum_{i=1}^n\sum_{j=1}^n \left(\mb{F} m_t(M^{(i)},\pi)V^{(2)}(\pi,M^{(j)}) \right.\right. \\ \left.\phantom{\sum_{i=1}^n}\left.+\mb{F} m_t(\<M^{(i)},M^{(j)}\>,\pi)\right)\right]. \end{multline*} The remainder of the proof to show that the right-hand side tends to zero, proceeds as in the proof of formula (\ref{equation 8.12}).
\section{Exponential processes}
First approach following Kuo: Denote by $\mc{L}_{\operatorname{pred}}(L^1[a,b],\<M\>)$ the set of all elements $(X_t)$ satisfying \begin{enumerate} \item[\rm{1.}] $(X_t)$ is predictable adapted in $\mathfrak{E}^u$ (see~\cite[Section 7]{G9});
\item[\rm{2.}] $X$ is integrable, i.e., $\int_a^b|X_t|\,d\<M\>_t\in\mathfrak{E}^u,$ \end{enumerate}
and recall that $(X_t)\in\mc{L}_{\operatorname{pred}}(L^2[a,b],\<M\>)$ if the first condition above holds and if $|X|^2$ is integrable. \begin{definition}\label{Ito process}({\rm Kuo, Definition 7.4.2}) An It\^o process $X=(X_t)_{t\in[a,b]},$ $X_a\in\mathfrak{F}_a,$ is a stochastic process defined by \begin{equation} X_t:=X_a+\int_a^t Y_s\,dB_s +\int_a^t Z_s\,ds,\ \ a\le t\le b, \end{equation} where $Y=(Y_s)\in \mc{L}_{\operatorname{pred}}(L^2[a,b],\<B\>)$ and $Z=(Z_t)\in\mc{L}_{\operatorname{pred}}(L^1[a,b],\<B\>).$ \end{definition}
\noindent General form of It\^o's formula
\begin{theorem} Let $(X_t)$ be an It\^o process defined by~(\ref{Ito process}) and suppose that $\theta(t,x)$ is a continuous function with continuous partial derivatives $\theta_t,$ $\theta_x$ and $\theta_{xx}.$ Then \begin{multline} \theta(t,X_t)=\theta(a,X_a)+\int_a^t\theta_x(s,X_s)Y_s\,dB_s \\ +\int_a^t\left[\theta_t(s,X_s)+\theta_x(s,X_s)Z_s+\frac{1}{2}\theta_{xx}(s,X_s)Y_s^2\right]\,ds \end{multline} \end{theorem}
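As a simple application (an added illustration), take $\theta(t,x)=x^2,$ so that $\theta_t=0,$ $\theta_x=2x$ and $\theta_{xx}=2$:

```latex
% Ito's formula applied to theta(t,x) = x^2 for the Ito process
% dX_t = Y_t dB_t + Z_t dt:
\[
X_t^2=X_a^2+\int_a^t 2X_sY_s\,dB_s
      +\int_a^t\big(2X_sZ_s+Y_s^2\big)\,ds,
\qquad a\le t\le b .
\]
```

The term $\int_a^t Y_s^2\,ds$ is the quadratic variation correction that distinguishes the It\^o calculus from the classical chain rule.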
\begin{corollary}(\textrm{Kuo, Example 7.4.4}) Let $Y=(Y_t)\in\mathcal{L}(L^2,\<B\>)$ and consider the It\^o process \begin{equation}\label{exp process} X_t=\int_0^t Y_s\,dB_s-\frac{1}{2}\int_0^tY_s^2\,ds, \quad 0\le t\le 1, \end{equation} and the function $\theta(t,x)=e^x$ for all $t.$ Then, applying the theorem above, we get
\begin{multline} \exp\left(\int_0^t Y_s\,dB_s-\frac{1}{2}\int_0^tY_s^2\,ds\right)= E+\int_0^tY_s\exp(X_s)\,dB_s \\ +\int_0^t\left[\exp(X_s)\left(-\frac{1}{2}Y_s^2\right)+\frac{1}{2}\exp(X_s)Y_s^2\right]\,ds \\ =E+\int_0^t Y_s\exp\left(\int_0^s Y_r\,dB_r-\frac{1}{2}\int_0^sY_r^2\,dr\right)dB_s. \end{multline} \end{corollary}
The process $(X_t)$ defined in~(\ref{exp process}) plays a central role in Girsanov's theorem, which answers the question of which processes $(Y_t)$ are such that $(X_t)$ is a martingale. It is clear that $(X_t)$ is a local martingale.
\begin{definition}(\textrm{Kuo, Definition 8.7.1, p.~137}) The exponential process given by $Y\in \mathcal{L}(L^2[a,b],\<B\>)$ is defined to be the stochastic process \begin{equation} \mathscr{E}_{Y,t}=\exp\left[\int_0^t Y_s\,dB_s-\frac{1}{2}\int_0^t Y_s^2\,ds\right],\ \ 0 \le t \le T. \end{equation} \end{definition}
It follows from the corollary above that \begin{equation}\label{exp process2} \mathscr{E}_{Y,t}=E+\int_0^t Y_s\mathscr{E}_{Y,s}\, dB_s,\ \ 0\le t\le T. \end{equation}
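In the classical setting, where the weak order unit $E$ is the constant function $1$, the martingale property suggested by the last identity can be illustrated by Monte Carlo: for constant $Y_s=\sigma$ the exponential process is $\exp(\sigma B_t-\sigma^2 t/2)$ and has mean one. A small sketch (sample size and seed are illustrative):

```python
import numpy as np

# For constant Y_s = sigma, the exponential process at time t is
# exp(sigma * B_t - sigma^2 t / 2) with B_t ~ N(0, t); its mean is 1.
rng = np.random.default_rng(1)
sigma, t, n_paths = 0.5, 1.0, 200_000
B_t = rng.normal(0.0, np.sqrt(t), n_paths)
exp_proc = np.exp(sigma * B_t - 0.5 * sigma**2 * t)
print(exp_proc.mean())   # close to 1
```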
\section{Abandoned, entered and stopped processes} Let $\mathbb{S}$ be a stopping time for the filtration $(\mathbb{F}_t,\mathfrak{F}_t)_{t\in J}$ and let $(X_t)_{t\in J}$ be an adapted stochastic process. Then the projections onto the band $\mathfrak{B}(tE\le\mathbb{S} E)$ are the projections $\mathbb P^{\mathbb{S}}_t=(\mathbb{S}_t^\ell)^d\in\mathfrak{P}_t$ and therefore the process $(A^{\mathbb{S},X}_t)=(\mathbb P^{\mathbb{S}}_tX_t)_{t\in J}$ is an adapted stochastic process. We call $(A^{\mathbb{S},X}_t)$ the process that {\em abandons $(X_t)$ at time $\mathbb{S}$,} or simply the {\em abandoned process}. On the band $\mathfrak{B}(tE\le\mathbb{S} E)$ it has the value $X_t$ and on the band $\mathfrak{B}(tE>\mathbb{S} E)$ it is zero.
Similarly, the projections onto the band $\mathfrak{B}(tE\ge\mathbb{S} E),$ i.e., the projections $\mathbb{S}^r_t\in\mathfrak{P}_t,$ will be denoted by $\mathbb{Q}^{\mathbb{S}}_t.$ The adapted process $(B^{\mathbb{S},X}_t)=(\mathbb{Q}^{\mathbb{S}}_tX_t)_{t\in J}$ will be called the {\em process that enters $(X_t)$ at time $\mathbb{S}$,} or the {\em entered process}. On the band $\mathfrak{B}(tE<\mathbb{S} E)$ it has the value $0$ and on the band $\mathfrak{B}(tE\ge\mathbb{S} E)$ it has the value $X_t.$
If $X_t\ge 0$ for all $t\in J,$ we define the adapted process $(X_{\mathbb{S},t})$ by \begin{equation} X_{\mathbb{S},t}:=A^{\mathbb{S},X}_t\wedge B^{\mathbb{S},X}_t\in\mathfrak{E}. \end{equation} We call the point $t_0$ a discontinuity of $\mathbb{S}$ if the right-continuous spectral system $(\mathbb{S}^\ell_t)$ is constant on an interval to the right of $t_0;$ otherwise we call $t_0$ a point of continuity. The set of points of discontinuity of $\mathbb{S}$ will be denoted by $D.$ We define $X_\mathbb{S}$ as follows: \begin{equation} X_\mathbb{S}:=\begin{cases}
&\sup_{t\in J\setminus D}X_{\mathbb{S},t}=\sup_{t\in J}A^{\mathbb{S},X}_t\wedge B^{\mathbb{S},X}_t \\
&\inf_{t>t_0}A^{\mathbb{S},X}_t\wedge B^{\mathbb{S},X}_t \mbox{ if $t_0\in D.$}\end{cases} \end{equation} In order to define $X_\mathbb{S}$ for general elements, we
note that the construction above yields $X_\mathbb{S}\in\mathfrak{E}^u,$ which is an $f$-algebra. We therefore take $X_t^+$ and $X_t^-$ and define, as above, two elements $X_\mathbb{S}^+$ and $X_\mathbb{S}^-.$ The element $X_\mathbb{S}$ is then defined as \begin{equation} X_\mathbb{S}:=X_\mathbb{S}^+ -X_\mathbb{S}^-\in\mathfrak{E}^u. \end{equation} The {\em stopped process} is then defined in the usual way as $(X_{t\wedge\mathbb{S}})_{t\in J}.$
\begin{lemma} The system of elements $\{X_{\mb{S}}\def\T{\mb{T},t}\,|\,t\in J\}$ is a disjoint system. \end{lemma}
{\em Proof.}\ Let $s<t.$ It is then clear that at least one of the factors $A_t^{\mathbb{S},X},$ $B_t^{\mathbb{S},X},$ $A_s^{\mathbb{S},X}$ or $B_s^{\mathbb{S},X}$ is zero. As they are all positive, it follows that $|X_{\mathbb{S},t}|\wedge|X_{\mathbb{S},s}|=0.$
$\Box$
\begin{example} \rm Consider the case mentioned in the introduction where $(X_t=X(t,\omega))_{t\in J,\omega\in\Omega}$ is a stochastic process in $L^1(\Omega,\mathfrak{F},P)$ adapted to the filtration $(\mathfrak{F}_t)$ of $\sigma$-algebras contained in $\mathfrak{F}.$ If $\mathbb{S}(\omega)$ is a stopping time for the filtration, then the process $(X_{\mathbb{S},t}(\omega))$ is defined by the elements $$ X_{\mathbb{S},t}(\omega)=\begin{cases} 0 & \mbox{if $t<\mathbb{S}(\omega)$ or if $t>\mathbb{S}(\omega)$} \\ X_t(\omega) & \mbox{if $t=\mathbb{S}(\omega).$}\end{cases} $$ \end{example}
We consider the case in which $\mathbb{S}$ is a simple function. In that case $\mathbb{S}$ can be written as $$ \mathbb{S}=\sum_{i=1}^n t_i\mathbb P_{t_i}, \quad \mathbb P_{t_i}\in\mathfrak{P}_{t_i}, \quad a\le t_1<t_2<\cdots<t_n\le b, $$ with the projections satisfying $\mathbb P_{t_i}\mathbb P_{t_j}=0$ if $i\ne j.$ In this case the decreasing function $(\mathbb P^{\mathbb{S}}_t)_{t\in J}$ has the finite range consisting of the projections $$\Big\{\mathbb{I}=\sum_{j=1}^n\mathbb P_{t_j},\ \sum_{j=2}^{n}\mathbb P_{t_j},\ \sum_{j=3}^{n}\mathbb P_{t_j},\ \ldots,\ \mathbb P_{t_n} \Big\}=\{\mathbb{Q}_1,\mathbb{Q}_2,\ldots,\mathbb{Q}_n\}. $$ It can therefore be written as $$ (\mathbb P^{\mathbb{S}}_t)_{t\in J}=\Big(\sum_{i=1}^n \mathbb{Q}_iI_{(t_{i-1},t_i]}(t)\Big)_{t\in J},\ \ (J=[a,b]). $$ Hence, $$ A_t^{\mathbb{S},X}=\sum_{i=1}^n\mathbb{Q}_i X_t I_{(t_{i-1},t_i]}(t). $$ On the other hand, $\mathbb{Q}_t^{\mathbb{S}}$ is the increasing system $\{\mathbb P_{t_1},\mathbb P_{t_1}+\mathbb P_{t_2},\ldots, \mathbb P_{t_1}+\cdots+\mathbb P_{t_n}\}.$ So, $$ B^{\mathbb{S},X}_t=\sum_{i=1}^n\mathbb P_{t_i}X_tI_{(t_{i-1},t_i]}(t). $$ It follows that $X_\mathbb{S}=\sum_{i=1}^n \mathbb P_{t_i}X_{t_i}.$
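The abandoned/entered construction for a simple stopping time can be made concrete on a finite sample space, where the band projections become multiplication by indicator functions. The following sketch (sizes and seed are illustrative, and measurability details are ignored) checks, for a positive process, that the supremum of $A^{\mathbb{S},X}_t\wedge B^{\mathbb{S},X}_t$ over $t$ recovers $X_{\mathbb{S}}=\sum_i \mathbb{P}_{t_i}X_{t_i}$:

```python
import numpy as np

# Finite sample space: S takes values in {t_1, t_2, t_3}; the abandoned process
# is X_t * 1{t <= S}, the entered process is X_t * 1{t >= S}, their infimum is
# X_t * 1{t = S}, and the supremum over t gives X_S.
rng = np.random.default_rng(2)
times = np.array([1.0, 2.0, 3.0])                  # t_1 < t_2 < t_3
n_omega = 5
S = rng.choice(times, n_omega)                     # simple stopping time
X = rng.uniform(0.0, 1.0, (len(times), n_omega))   # positive process X_{t_i}(omega)

abandoned = X * (times[:, None] <= S[None, :])     # A_t^{S,X}
entered = X * (times[:, None] >= S[None, :])       # B_t^{S,X}
X_S_t = np.minimum(abandoned, entered)             # nonzero only where t = S(omega)
X_S = X_S_t.max(axis=0)                            # sup over t

expected = np.array([X[np.where(times == s)[0][0], w] for w, s in enumerate(S)])
print(np.allclose(X_S, expected))                  # True
```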
We remark that, for simple functions $\mathbb{S},$ we defined in \cite{G1,G2} the element $X_\mathbb{S}$ by this formula and then extended the definition to the case of submartingales.
Next question: Let $\mathfrak{F}_\mathbb{S}$ be the Dedekind complete Riesz subspace of $\mathfrak{E}$ generated by the abandoned process $(A^{\mathbb{S},X}_t)$; is it the same as the one we defined earlier? It contains $\mathfrak{F}_0=\mathfrak{F},$ so there exists a conditional expectation $\mathbb{F}_\mathbb{S}$ that maps $\mathfrak{E}$ onto $\mathfrak{F}_\mathbb{S}.$ Can we characterize it in terms of the filtration $(\mathbb{F}_t,\mathfrak{F}_t)$?
What is $(\mathbb{A}_t^{\mathbb{S},X}):=(\mathbb P_t^{\mathbb{S}}\mathbb{F}_t)$? What is $(\mathbb{B}_t^{\mathbb{S},X}):=(\mathbb{Q}_t^{\mathbb{S}}\mathbb{F}_t)$? What is their infimum $\mathbb{F}_{\mathbb{S},t}=\mathbb{A}_t^{\mathbb{S},X}\wedge \mathbb{B}_t^{\mathbb{S},X}$? Is the supremum of this system equal to $\mathbb{F}_\mathbb{S}$? Note that $\mathbb{A}_t^{\mathbb{S},X}$ is not a conditional expectation: it maps $E$ to $\mathbb P_t^{\mathbb{S}}E,$ which is a weak order unit for the projection band that is the range of $\mathbb P_t^{\mathbb{S}}.$
If $(X_t)$ is a submartingale, then for all $s<t$ we have $$ \mathbb{A}_s^{\mathbb{S},X}A_t^{\mathbb{T},X}=\mathbb P_s^{\mathbb{S}}\mathbb{F}_s\mathbb P_t^{\mathbb{T}}X_t =\mathbb{F}_s\mathbb P_s^{\mathbb{S}}\mathbb P_t^{\mathbb{T}}X_t=\mathbb{F}_s\mathbb P_s^{\mathbb{S}}X_t=\mathbb P_s^{\mathbb{S}}\mathbb{F}_sX_t\ge \mathbb P_s^{\mathbb{S}} X_s=A_s^{\mathbb{S},X}. $$
We have in mind a simple proof of Doob's optional sampling theorem: $\mathbb{F}_\mathbb{S} X_\mathbb{T}\ge X_\mathbb{S}$ for stopping times $\mathbb{S}\le\mathbb{T}.$
\end{document}
\begin{document}
\begin{center}
\Large
\textbf{Exact dynamics of moments and correlation functions for fermionic Poisson-type GKSL equations}
\large
\textbf{Iu.A. Nosal}\footnote{Faculty of Physics, Lomonosov Moscow State University.},
\textbf{A.E. Teretenkov}\footnote{Department of Mathematical Methods for Quantum Technologies,
Steklov Mathematical Institute of Russian Academy of Sciences, Moscow, Russia.\\ E-mail:taemsu@mail.ru}
\end{center}
\footnotesize The Gorini-Kossakowski-Sudarshan-Lindblad equation of Poisson type for the density matrix is considered. The Poisson jumps are assumed to be unitary operators with generators which are quadratic in fermionic creation and annihilation operators. The explicit dynamics of the density matrix moments and of the Markovian multi-time ordered correlation functions is obtained.\\
\textit{AMS classification:} 81S22, 82C31, 81Q05, 81Q80
\textit{Keywords:} GKSL equation, irreversible quantum dynamics, Poisson stochastic process, exact solution, fermions \normalsize
\section{Introduction}
In this work we obtain both the fermionic analog of the results of \cite{Teretenkov20} and a new result on multi-time ordered Markovian correlation functions. The latter is also important due to the modern interest in non-Markovian effects, which manifest themselves only at the level of multi-time Markovian correlation functions rather than master equations \cite{Gullo14}. As in \cite{Teretenkov20} we consider the equation for the density matrix \begin{equation}\label{eq:mainEq} \frac{d}{dt} \rho_t = \mathcal{L}(\rho_t), \qquad \mathcal{L}(\rho) = \sum_{k=1}^K \lambda_k (U_k \rho U_k^{\dagger} - \rho), \qquad \lambda_k >0, \end{equation} where $ U_k $ are unitary operators. Let us also note that the generator $ \mathcal{L} $ has the \textit{Gorini-Kossakowski-Sudarshan-Lindblad} (GKSL) form \cite{gorini1976completely, lindblad1976generators} \begin{equation*} \mathcal{L}(\rho) = \sum_{k=1}^K \left(L_k \rho L_k^{\dagger} - \frac12 L_k^{\dagger} L_k\rho- \frac12 \rho L_k^{\dagger} L_k\right), \end{equation*} where $ L_k = \sqrt{\lambda_k} U_k $.
In our case $ U_k $ are exponentials of quadratic forms in fermionic creation and annihilation operators in a finite-dimensional Hilbert space. Such generators naturally arise in the case of averaging with respect to classical Poisson processes with intensities $ \lambda_k $ and unitary jumps $ U_k $ \cite{Kummerer87}, so we call them Poisson-type generators. For the infinite-dimensional Hilbert space such generators were discussed in \cite{Holevo96, Holevo98}. Let us note that Poisson processes and the corresponding quantum Markov equations arise in physical applications \cite{accardi2002quantum, vacchini2009quantum, Basharov2014, TrubBash2018}. Unitary evolution with the fermionic quadratic generators mentioned above was discussed in \cite{Fried1953, Ber86, Dodonov83}. Its bosonic counterpart was discussed in \cite{Fried1953, Ber86, Manko79, Manko87, dodonov2002nonclassical, dodonov2003theory, Cheb11, Cheb12}.
Now let us specify the exact mathematical formulation of our problem and main results. We use notation similar to \cite{Ter17, Ter19}. We consider the finite-dimensional Hilbert space $\mathbb{C}^{2^n}$. In such a space one can (see \cite[p. 407]{Takht11} for explicit formulae) define $ n $ pairs of fermionic creation and annihilation operators satisfying \textit{canonical anticommutation relations} (CARs): $ \{\hat{c}_i^{\dagger}, \hat{c}_j \} = \delta_{ij}, \{\hat{c}_i, \hat{c}_j\} = 0 $. Let us define the $2n$-dimensional vector $\mathfrak{c} = (\hat{c}_1, \ldots, \hat{c}_n, \hat{c}_1^{\dagger}, \ldots, \hat{c}_n^{\dagger})^T$ of creation and annihilation operators. We denote quadratic forms in such operators by $ \mathfrak{c}^T K \mathfrak{c} $, $ K \in \mathbb{C}^{2n \times 2n} $. Define the $2n \times 2n$-dimensional matrix \begin{equation*} E = \biggl( \begin{array}{cc} 0 & I_n \\ I_n & 0 \end{array} \biggr), \end{equation*} where $ I_n $ is the identity matrix from $ \mathbb{C}^{n \times n} $. Then CARs take the form $ \{f^T\mathfrak{c}, \mathfrak{c}^Tg \} = f^TEg$, $ f, g \in \mathbb{C}^{2n}$. We also define the $\sim$-conju\-ga\-tion of matrices by the formula \begin{equation*} \tilde{K} = E \overline{K} E, \qquad K \in \mathbb{C}^{2n \times 2n}, \end{equation*} where the overline is an (elementwise) complex conjugation.
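The conventions above can be checked numerically for a concrete realization of the CARs. The sketch below uses the Jordan-Wigner construction for $n=2$ (one possible choice; the existence of such operators is what is quoted from the literature) and verifies the relation $\{f^T\mathfrak{c}, \mathfrak{c}^Tg\} = f^TEg$:

```python
import numpy as np

# Jordan-Wigner realization of two fermionic modes on C^4:
# c_1 = sigma^- (x) I, c_2 = sigma_z (x) sigma^-.
I2 = np.eye(2)
sm = np.array([[0.0, 1.0], [0.0, 0.0]])           # sigma^-
sz = np.diag([1.0, -1.0])

c1 = np.kron(sm, I2)
c2 = np.kron(sz, sm)
ops = [c1, c2, c1.conj().T, c2.conj().T]          # the vector (c1, c2, c1^dag, c2^dag)

def anti(a, b):                                   # anticommutator {a, b}
    return a @ b + b @ a

n = 2
E = np.block([[np.zeros((n, n)), np.eye(n)], [np.eye(n), np.zeros((n, n))]])

# Check {f^T c, c^T g} = (f^T E g) * identity for random complex f, g
rng = np.random.default_rng(3)
f = rng.normal(size=2 * n) + 1j * rng.normal(size=2 * n)
g = rng.normal(size=2 * n) + 1j * rng.normal(size=2 * n)
lhs = anti(sum(f[i] * ops[i] for i in range(2 * n)),
           sum(g[j] * ops[j] for j in range(2 * n)))
print(np.allclose(lhs, (f @ E @ g) * np.eye(4)))  # True
```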
\begin{theorem}\label{th:main}
Let the density matrix $ \rho_t $ satisfy Eq.~\eqref{eq:mainEq}, where the unitary operators $ U_k$, $ k=1, \ldots, K $, are defined by the formulae $ U_k = e^{- \frac{i}{2} \mathfrak{c}^T H_k \mathfrak{c} } $, $ H_k = -H_k^T = -\tilde{H}_k \in \mathbb{C}^{2n \times 2 n} $. Then
1) Dynamics of the moments has the form
\begin{equation}\label{eq:momDynam}
\langle \otimes_{m=1}^M \mathfrak{c} \rangle_t = e^{\sum_{k=1}^K\lambda_k (\otimes_{m=1}^M O_k - I_{(2n)^M}) t} \langle \otimes_{m=1}^M \mathfrak{c} \rangle_0, \qquad O_k = e^{-i E H_k},
\end{equation}
where the average is defined by the formula $ \langle \otimes_{m=1}^M \mathfrak{c} \rangle_t \equiv \mathrm{tr} \; (\rho_t \otimes_{m=1}^M \mathfrak{c}) $ and $ I_{(2n)^M} $ is the identity matrix in $ \mathbb{C}^{2n} \otimes \cdots \otimes \mathbb{C}^{2n} = \mathbb{C}^{(2n)^M} $.
2) If we denote $ L_{M, m} = \sum_{k=1}^K\lambda_k (\otimes_{r=1}^m O_k - I_{(2n)^m}) \otimes I_{(2n)^{M-m}}$ for $ m =1, \ldots, M $, then the dynamics of the Markovian multi-time ordered correlation functions has the form
\begin{equation}\label{eq:corrEvol}
\langle \mathfrak{c}(t_M) \otimes \ldots \otimes \mathfrak{c}(t_1) \rangle = e^{L_{M,1} (t_M - t_{M-1})} \ldots e^{L_{M,M} t_1} \langle \otimes_{m=1}^M \mathfrak{c} \rangle_0,
\end{equation}
where $ t_M \geqslant \ldots \geqslant t_1 \geqslant 0 $ and the tensor $ \langle \mathfrak{c}(t_M) \otimes \ldots \otimes \mathfrak{c}(t_1) \rangle $ is defined by its elements
\begin{equation}\label{eq:corrDef}
\langle \mathfrak{c}_{j_M}(t_M) \ldots \mathfrak{c}_{j_1}(t_1) \rangle \equiv \tr( \mathfrak{c}_{j_M} e^{\mathcal{L}(t_M - t_{M-1})} \ldots \mathfrak{c}_{j_2} e^{\mathcal{L} (t_2 - t_1)}\mathfrak{c}_{j_1} e^{\mathcal{L} t_1}\rho_0),
\end{equation}
where $ j_m =1, \ldots, 2n, $ for all $ m =1, \ldots, M $. \end{theorem}
In definition \eqref{eq:corrDef} of the Markovian multi-time ordered correlation functions we follow \cite{Gullo14}.
In particular, for the first and second moments we have \begin{equation*} \langle\mathfrak{c}\rangle_t = e^{\sum_{k=1}^K \lambda_k(O_k - I_{2n}) t} \langle\mathfrak{c}\rangle_0, \qquad \langle\mathfrak{c} \otimes \mathfrak{c}\rangle_t = e^{\sum_{k=1}^K \lambda_k (O_k \otimes O_k - I_{4 n^2}) t} \langle \mathfrak{c} \otimes \mathfrak{c} \rangle_0 \end{equation*} and for two-time correlation functions we have \begin{equation*} \langle\mathfrak{c}(t_2) \mathfrak{c}(t_1) \rangle = e^{L_{2,1} (t_2 - t_1)} e^{L_{2,2} t_1} \langle \mathfrak{c} \otimes \mathfrak{c} \rangle_0. \end{equation*} Note that $ \langle\mathfrak{c}(t) \mathfrak{c}(t) \rangle = e^{L_{2,2} t} \langle \mathfrak{c} \otimes \mathfrak{c} \rangle_0 = e^{\sum_{k=1}^K \lambda_k (O_k \otimes O_k - I_{4 n^2}) t} \langle \mathfrak{c} \otimes \mathfrak{c} \rangle_0 = \langle\mathfrak{c} \otimes \mathfrak{c}\rangle_t $.
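The moment formula \eqref{eq:momDynam} can be verified numerically in the smallest nontrivial case. The sketch below (a single mode $n=1$, a single jump $K=1$, with illustrative values of $h$, $\lambda$ and $t$) evolves the density matrix directly through the vectorized GKSL generator and compares the first moments with the closed formula:

```python
import numpy as np

def expm(a, terms=60):
    # plain Taylor series; adequate for the small matrices of moderate norm below
    out = np.eye(a.shape[0], dtype=complex)
    term = np.eye(a.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ a / k
        out = out + term
    return out

# One fermionic mode (n = 1) on C^2 and a single jump (K = 1).
c = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # annihilation operator
frak_c = [c, c.conj().T]                                # the vector (c, c^dag)
E = np.array([[0.0, 1.0], [1.0, 0.0]])
h, lam, t = 0.9, 0.7, 1.3
H = np.array([[0.0, h], [-h, 0.0]])                     # real antisymmetric: H = -H^T = -H~

# Unitary jump U = exp(-(i/2) c^T H c) and the matrix O = exp(-i E H)
Q = 0.5 * sum(H[i, j] * frak_c[i] @ frak_c[j] for i in range(2) for j in range(2))
U = expm(-1j * Q)
O = expm(-1j * E @ H)

# GKSL evolution of rho, row-major vectorization: vec(U rho U^dag) = (U (x) conj(U)) vec(rho)
S = lam * (np.kron(U, U.conj()) - np.eye(4))
rho0 = np.array([[0.25, 0.3], [0.3, 0.75]], dtype=complex)
rho_t = (expm(S * t) @ rho0.reshape(-1)).reshape(2, 2)

# First moments: direct traces against the closed formula of the theorem
m0 = np.array([np.trace(rho0 @ x) for x in frak_c])
direct = np.array([np.trace(rho_t @ x) for x in frak_c])
formula = expm(lam * (O - np.eye(2)) * t) @ m0
print(np.allclose(direct, formula))                     # True
```

The row-major vectorization convention `vec(A X B) = (A (x) B^T) vec(X)` matches NumPy's `reshape`, which is why the superoperator is `kron(U, conj(U))`.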
\section{Dynamics in Heisenberg picture}
In this section we prove Th.~\ref{th:main}. The main idea consists in passing to the Heisenberg picture. To do so, let us calculate the conjugate generator $ \mathcal{L}^* $ defined by the relation \begin{equation}\label{eq:defConjGen} \mathrm{tr} \, \hat{X} \mathcal{L}(\rho) = \mathrm{tr} \, \mathcal{L}^* (\hat{X})\rho \end{equation} for arbitrary matrices $ \rho, \hat{X} \in \mathbb{C}^{2^n \times 2^n} $. We need lemma 1 from \cite{Ter17} in the case $ A = i H $, $ B =0 $, which takes the following form.
\begin{lemma}
\label{lem:orthTransform}
Let $ H = -H^T \in \mathbb{C}^{2 n \times 2n} $, then $ e^{ \frac{i}{2} \mathfrak{c}^T H \mathfrak{c} } \mathfrak{c} e^{- \frac{i}{2} \mathfrak{c}^T H \mathfrak{c} } = O \mathfrak{c}$, where $O = e^{-i E H} $. \end{lemma} Let us note that, in accordance with lemma 4 from \cite{Ter17}, if $ \tilde{H} = - H$, then the matrix $ \frac12 \mathfrak{c}^T H \mathfrak{c} $ is self-adjoint. Thus, the operators $ U_k $ defined in Th.~\ref{th:main} are indeed unitary.
\begin{lemma}
\label{lem:conjGen}
Let $\mathcal{L} $ be defined by \eqref{eq:mainEq} with $ U_k = e^{- \frac{i}{2} \mathfrak{c}^T H_k \mathfrak{c} } $, $ H_k = -H_k^T = -\tilde{H}_k \in \mathbb{C}^{2n \times 2 n} $, and let $ \mathcal{L}^*$ be defined by formula \eqref{eq:defConjGen}; then
\begin{equation*}
\mathcal{L}^*(\otimes_{m=1}^M \mathfrak{c}) = \sum_{k=1}^K \lambda_k (\otimes_{m=1}^M O_k - I_{(2n)^M}) \otimes_{m=1}^M \mathfrak{c} , \qquad O_k = e^{-i E H_k}.
\end{equation*} \end{lemma}
\begin{demo}
By the cyclic property of trace we have $ \tr \hat{X} (U_k \rho U_k^{\dagger} - \rho) = \tr (U_k^{\dagger} \hat{X} U_k - \hat{X}) \rho $. Hence, by Eq.~\eqref{eq:defConjGen} we obtain
\begin{equation*}
\mathcal{L}^* (\hat{X}) = \sum_{k=1}^K \lambda_k (U_k^{\dagger} \hat{X} U_k - \hat{X}).
\end{equation*}
Taking the elements of the tensor $ \otimes_{m=1}^M \mathfrak{c} $ as $ \hat{X} $ we obtain
\begin{equation*}
\mathcal{L}^*(\otimes_{m=1}^M \mathfrak{c} )
= \sum_{k=1}^K \lambda_k (U_k^{\dagger} (\otimes_{m=1}^M \mathfrak{c} ) U_k - \otimes_{m=1}^M \mathfrak{c} ) = \sum_{k=1}^K \lambda_k ( \otimes_{m=1}^M (U_k^{\dagger} \mathfrak{c} U_k) - \otimes_{m=1}^M \mathfrak{c}).
\end{equation*}
By lemma \ref{lem:orthTransform}, we have $ U_k^{\dagger} \mathfrak{c} U_k= e^{\frac{i}{2} \mathfrak{c}^T H_k \mathfrak{c} } \mathfrak{c} e^{- \frac{i}{2} \mathfrak{c}^T H_k \mathfrak{c} } = e^{-i E H_k} \mathfrak{c} = O_k \mathfrak{c} $.
Thus, we obtain
\begin{equation*}
\mathcal{L}^*(\otimes_{m=1}^M \mathfrak{c} )
= \sum_{k=1}^K \lambda_k ( \otimes_{m=1}^M ( O_k \mathfrak{c} ) - \otimes_{m=1}^M \mathfrak{c}) = \sum_{k=1}^K \lambda_k (\otimes_{m=1}^M O_k - I_{(2n)^M}) \otimes_{m=1}^M \mathfrak{c}. \quad \quad \qed
\end{equation*} \end{demo}
\noindent\textbf{Proof of Th.~\ref{th:main}}.
1) Taking into account lemma \ref{lem:conjGen} we obtain the Heisenberg evolution of the operators $ \otimes_{m=1}^M \mathfrak{c} $ in the following explicit form:
\begin{equation}\label{eq:HeisEvol}
e^{\mathcal{L}^* t}(\otimes_{m=1}^M \mathfrak{c}) = e^{ \sum_{k=1}^K \lambda_k (\otimes_{m=1}^M O_k - I_{(2n)^M}) t} \otimes_{m=1}^M \mathfrak{c}
\end{equation}
Then taking into account the definition of the average from the statement of Th.~\ref{th:main} we have
\begin{align*}
\langle \otimes_{m=1}^M \mathfrak{c} \rangle_t &\equiv \mathrm{tr} \; (\otimes_{m=1}^M \mathfrak{c} \rho_t) = \mathrm{tr} \; ( \otimes_{m=1}^M \mathfrak{c} e^{\mathcal{L} t}(\rho_0)) = \mathrm{tr} \; (e^{\mathcal{L}^* t}(\otimes_{m=1}^M \mathfrak{c})\rho_0) = \\
&= e^{ \sum_{k=1}^K \lambda_k (\otimes_{m=1}^M O_k - I_{(2n)^M}) t} \mathrm{tr} \; (\otimes_{m=1}^M \mathfrak{c} \rho_0) = e^{ \sum_{k=1}^K \lambda_k (\otimes_{m=1}^M O_k - I_{(2n)^M}) t} \langle \otimes_{m=1}^M \mathfrak{c} \rangle_0.
\end{align*}
Thus, we obtain \eqref{eq:momDynam}.
2) As for the moments, let us pass to the Heisenberg picture in definition \eqref{eq:corrDef}: \begin{equation*}
\tr( \mathfrak{c}_{j_M} e^{\mathcal{L}(t_M - t_{M-1})} \ldots \mathfrak{c}_{j_{2}} e^{\mathcal{L} (t_2 - t_1)}\mathfrak{c}_{j_1} e^{\mathcal{L} t_1}\rho_0) = \tr \rho_0 e^{\mathcal{L}^* t_1}((e^{\mathcal{L}^* (t_2 - t_1)} ((\ldots e^{\mathcal{L}^* (t_M - t_{M-1})} \mathfrak{c}_{j_M}\ldots)\mathfrak{c}_{j_2} ))\mathfrak{c}_{j_1}). \end{equation*} By formula \eqref{eq:HeisEvol} taking into account the definition of $ L_{1,1} $ we have \begin{equation*} e^{\mathcal{L}^* (t_M - t_{M-1})} \mathfrak{c}_{j_M} = (e^{L_{1,1} (t_M - t_{M-1})} \mathfrak{c})_{j_M}, \end{equation*} then \begin{align*} e^{\mathcal{L}^* (t_{M-1} - t_{M-2})}((e^{\mathcal{L}^* (t_M - t_{M-1})} \mathfrak{c}_{j_M}) \mathfrak{c}_{j_{M-1}}) &= e^{\mathcal{L}^* (t_{M-1} - t_{M-2})}( (e^{L_{1,1} (t_M - t_{M-1})} \mathfrak{c})_{j_M} \mathfrak{c}_{j_{M-1}}) =\\ = e^{\mathcal{L}^* (t_{M-1} - t_{M-2})}( (e^{L_{1,1} (t_M - t_{M-1})} \mathfrak{c}) \otimes \mathfrak{c})_{j_M j_{M-1}} &= e^{\mathcal{L}^* (t_{M-1} - t_{M-2})}( e^{L_{2,1} (t_M - t_{M-1})} \mathfrak{c} \otimes \mathfrak{c})_{j_M j_{M-1}} =\\ = ( e^{L_{2,1} (t_M - t_{M-1})} e^{\mathcal{L}^* (t_{M-1} - t_{M-2})}(\mathfrak{c} \otimes \mathfrak{c}))_{j_M j_{M-1}} &= (e^{L_{2,1} (t_M - t_{M-1})} e^{L_{2,2} (t_{M-1} - t_{M-2})} \mathfrak{c} \otimes \mathfrak{c})_{j_M j_{M-1}} \end{align*} Analogously we have \begin{equation*} e^{\mathcal{L}^* t_1}((e^{\mathcal{L}^* (t_2 - t_1)} ((\ldots e^{\mathcal{L}^* (t_M - t_{M-1})} \mathfrak{c}_{j_M}\ldots)\mathfrak{c}_{j_2} ))\mathfrak{c}_{j_1}) = e^{L_{M,1} (t_M - t_{M-1})} \ldots e^{L_{M,M} t_1} \otimes_{m=1}^M \mathfrak{c} \end{equation*} By averaging with respect to the initial state $ \rho_0 $ we obtain \eqref{eq:corrEvol}. \qed
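The two-time formula just proved can likewise be checked numerically in the single-mode case (all numerical values below are illustrative):

```python
import numpy as np

def expm(a, terms=60):
    # plain Taylor series matrix exponential for small matrices
    out = np.eye(a.shape[0], dtype=complex)
    term = np.eye(a.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ a / k
        out = out + term
    return out

# Toy model: one mode (n = 1), one jump (K = 1).
c = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
frak_c = [c, c.conj().T]
E = np.array([[0.0, 1.0], [1.0, 0.0]])
h, lam = 0.9, 0.7
H = np.array([[0.0, h], [-h, 0.0]])
Q = 0.5 * sum(H[i, j] * frak_c[i] @ frak_c[j] for i in range(2) for j in range(2))
U = expm(-1j * Q)
O = expm(-1j * E @ H)
S = lam * (np.kron(U, U.conj()) - np.eye(4))       # vectorized GKSL generator
rho0 = np.array([[0.25, 0.3], [0.3, 0.75]], dtype=complex)

def semigroup(x, s):
    # e^{L s}(x) acting on an arbitrary 2x2 matrix x (row-major vectorization)
    return (expm(S * s) @ x.reshape(-1)).reshape(2, 2)

t1, t2 = 0.6, 1.4
# Direct two-time correlation, element (j2, j1): tr(c_{j2} e^{L(t2-t1)}(c_{j1} e^{L t1}(rho0)))
direct = np.array([[np.trace(a @ semigroup(b @ semigroup(rho0, t1), t2 - t1))
                    for b in frak_c] for a in frak_c])

# Closed formula with L_{2,1} = lam (O (x) I - I) and L_{2,2} = lam (O (x) O - I)
L21 = lam * (np.kron(O, np.eye(2)) - np.eye(4))
L22 = lam * (np.kron(O, O) - np.eye(4))
m2 = np.array([np.trace(rho0 @ a @ b) for a in frak_c for b in frak_c])
formula = (expm(L21 * (t2 - t1)) @ expm(L22 * t1) @ m2).reshape(2, 2)
print(np.allclose(direct, formula))                # True
```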
\section{Conclusions}
In this work we have considered the evolution of the density matrix in accordance with the GKSL equation \eqref{eq:mainEq}. In part 1) of Th.~\ref{th:main} we have obtained the fermionic analog of Th.~1 from \cite{Teretenkov20}. We have also obtained multi-time ordered Markovian correlation functions, which is a generalization of the single-time formula \eqref{eq:momDynam} to the multi-time case. This is important due to the modern discussion of quantum Markovianity, which necessarily (according to \cite{Gullo14}) leads to the very special form \eqref{eq:corrDef} for multi-time ordered correlation functions in addition to the GKSL form of the master equations. The explicit expression for these correlation functions in our case is presented in part 2) of Th.~\ref{th:main}. The study of Markovian and non-Markovian effects is important now due to the rising interest in open quantum systems, the range of approaches to which is becoming ever wider \cite{Trushechkin19, Trushechkin19a, Luchnikov19, Teretenkov19a, Teretenkov19b}. A possible direction of future development consists in the calculation of more general multi-time observables, e.g. unordered correlation functions in 2D echo-spectroscopy \cite{Plenio13}.
\end{document}
\begin{document}
\pagestyle{myheadings} \newcommand\testopari{\sc Giacomo Canevari and Pierluigi Colli} \newcommand\testodispari{\sc Solvability and asymptotic analysis of a phase field system} \markboth{\testodispari}{\testopari} \renewcommand{\thefootnote}{\fnsymbol{footnote}}
\thispagestyle{empty}
{\cred \begin{center} {{\bf\huge Solvability and asymptotic analysis\\[1.5mm] of a generalization of the Caginalp\\[3mm] phase field system\footnote{{\bf Acknowledgment.}\quad\rm The financial support of the MIUR-PRIN Grant 2008ZKHAHN \emph{``Phase transitions, hysteresis and multiscaling''} and of the IMATI of CNR in Pavia is gratefully acknowledged.}}}
{\large\sc Giacomo Canevari and Pierluigi Colli}
{\sl Dipartimento di Matematica ``F. Casorati'', Universit\`a di Pavia\\%[1mm] Via Ferrata, 1, 27100 Pavia, Italy\\%[1mm] E-mail: {\tt giacomo.canevari@gmail.com \ \ pierluigi.colli@unipv.it}}
\end{center}
\vskip6mm \begin{abstract}
We study a diffusion model of phase field type, which consists of a system of two partial differential equations involving as variables the thermal displacement, that is, basically the time integral of the temperature, and the order parameter. Our analysis covers the case of a non-smooth (maximal monotone) graph along with a smooth anti-monotone function in the phase equation. Thus, the system turns out to be a generalization of the well-known Caginalp phase field model for phase transitions, obtained by including a diffusive term for the thermal displacement in the balance equation. Systems of this kind have been extensively studied by Miranville and Quintanilla. We prove existence and uniqueness of a weak solution to the initial-boundary value problem, as well as various regularity results ensuring that the solution is strong and with bounded components. Then we investigate the asymptotic behaviour of the solutions as the coefficient of the diffusive term for the thermal displacement tends to $0$ and prove convergence to the Caginalp phase field system as well as error estimates for the difference of the solutions. \vskip3mm \noindent {\bf Key words:} phase field model, well-posedness, regularity, asymptotic behaviour, error estimates. \vskip3mm \noindent {\bf AMS (MOS) Subject Classification:} 35K55, 35B30, 35B40, 80A22.
\end{abstract}
\section{Introduction} \label{intro} This paper is concerned with the initial and boundary value problem: \begin{equation} w_{tt} - \alpha\Delta w_t - \beta\Delta w + u_t = f \quad \textrm{ in } \Omega \times (0,T) \label{1} \end{equation} \begin{equation} u_t - \Delta u + \gamma (u) + g(u) \ni w_t \quad \textrm{ in } \Omega \times (0,T) \label{2} \end{equation} \begin{equation} \partial_n w = \partial_n u = 0 \qquad \textrm{on } \Gamma\times (0, T) \label{3} \end{equation} \begin{equation} w(\cdot, \, 0) = w_0\, , \quad w_t (\cdot, \, 0) = v_0 \, , \quad u(\cdot, \, 0) = u_0 \qquad \textrm{ in } \Omega \label{4} \end{equation} where $\Omega \subset \mathbb{R}^3 $ is a bounded domain with smooth boundary $\Gamma$, $T> 0$ represents some finite time, and $\partial_n $ denotes the outward normal derivative on $\Gamma$. Moreover, $\alpha$ and $\beta$ are two positive parameters, $\gamma : \mathbb{R} \to 2^{\mathbb{R}} $ is a maximal monotone graph (one can see \cite[in particular pp. 43--45]{Brezis} or \cite{Barbu}), $g : \mathbb{R} \to \mathbb{R}$ is a Lipschitz-continuous function, $ f$ is a given source term in equation \eqref{1} and $w_0,\, v_0 , \, u_0$ stand for initial data. The inclusion (in place of the equality) in \eqref{2} is due to the presence of the possibly multivalued graph $\gamma$.
Equations \eqref{1}--\eqref{2} yield a system of phase field type. Such systems have been introduced (cf.~\cite{caginalp}) in order to include phase dissipation effects in the dynamics of moving interfaces arising in thermally induced phase transitions. In our case, we move from the following expression for the total free energy \begin{equation}
\Psi (\theta, u) = \int_\Omega \left( - \frac12 \theta^2 - \theta u + \phi (u) + G(u) + \frac12 |\nabla u |^2 \right) \label{5} \end{equation} where the variables $\theta$ and $u$ denote the (relative) temperature and order parameter, respectively. Let us notice from the beginning that our $w$ represents the thermal displacement variable, related to $\theta$~by \begin{equation} w(\cdot, \, t) = w_0 + (1* \theta) (\cdot, \, t) = w_0 + \int_0^t\!\! \theta (\cdot, \, s)\, ds , \quad \ t\in [0,T]. \label{6} \end{equation} In \eqref{5}, $ \phi : [0,+\infty] \to \mathbb{R} $ is the convex and lower semicontinuous function such that $\phi(0) = 0 = \min \phi$ and its subdifferential $\partial \phi$ coincides with $\gamma $, while $G$ stands for a smooth, in general concave, function such that $G' = g$. A typical example for $\phi $ and $G$ is the double obstacle case \begin{equation} \phi(u) = I_{[-1, +1]} (u) =
\begin{cases} 0 & \text{if $|u|\leq 1$} \\
+\infty &\text{if $|u| >1$} \end{cases} , \quad \ G(u) = 1- u^2 \label{7} \end{equation} so that the two wells of the sum $\phi (u) + G(u)$ are located in $ -1$ and $+1$, and one of the two is preferred as minimum of the potential in \eqref{5} according to whether the temperature $\theta$ is negative or positive. Indeed, note the presence of the term $- \theta u $ besides $\phi (u) + G(u)$ in the expression of $\Psi$.
The example given in \eqref{7} is inspired by the systematic approach of Michel Fr\'emond to non-smooth thermomechanics: we refer to the monograph \cite{fremond}, which also deals with phase change models. In the case of \eqref{7} the subdifferential
of the indicator function of the interval $[-1, +1]$ reads $$ \xi \in \partial I_{[-1, +1]} (u) \quad \hbox{ if and only if } \quad \xi \ \left\{ \begin{array}{ll} \displaystyle \leq \, 0 \ &\hbox{if } \ u=-1 \\[0.1cm]
= \, 0 \ &\hbox{if } \ |u| < 1 \\[0.1cm] \geq \, 0 \ &\hbox{if } \ u = + 1 \\[0.1cm] \end{array} \right. . $$ Let us point out that, with a different terminology motivated by earlier studies on the Stefan problem~\cite{duvaut}, some authors (cf.~\cite{fremond}) prefer to name ``freezing index'' the variable $w$ defined by \eqref{6}, having also in mind applications to frost propagation in porous media.
Another meaningful variable of the Stefan problem is the enthalpy $e$, which in our case is defined by $$ e= - d_\theta \Psi \quad (- \hbox{ the variational derivative of $\Psi $ with respect to } \theta ) , $$ whence $ e = \theta + u = w_t + u $. Then, the governing balance and phase equations are given~by \begin{equation} e_{t} + \Div {\bf q} = f \label{1phys} \end{equation} \begin{equation} u_t + d_u \Psi =0 \label{2phys} \end{equation} where ${\bf q} $ denotes the thermal flux vector and $d_u \Psi$ stands for the variational derivative of $\Psi$ with respect to $u$. Hence, \eqref{2phys} reduces exactly to \eqref{2} along with the Neumann homogeneous boundary condition for $u$. If we assume the classical Fourier law $ {\bf q} = - \nabla \theta $ (for the moment let us take the heat conductivity coefficient just equal to 1), then \eqref{1phys} is nothing but the usual energy balance equation as in the Caginalp model~\cite{caginalp}. This is also as in the weak formulation of the Stefan problem, in which the mere pointwise inclusion $ u \in \left( \partial I_{[-1, +1]}\right)^{-1} ( \theta)$, or equivalently $ \theta \in \partial I_{[-1, +1]} ( u)$, replaces \eqref{2}.
Another approach, which is by now well established, consists in adopting the so-called Cattaneo-Maxwell law (see, e.g., \cite{cgg1, MQ1} and references therein): such a law reads \begin{equation} {\bf q} + \varepsilon {\bf q}_t = - \nabla \theta , \quad \hbox{ for } \, \varepsilon > 0 \, \hbox{ small}, \label{qeps} \end{equation} and leads to the following equation \[ \varepsilon \theta_{tt} + \theta - \Delta \theta + \varepsilon u_{tt} + u_t = f \quad \textrm{ in } \Omega \times (0,T) \label{1mv} \] which has been investigated in \cite{MQ1}. On the other hand, if we solve \eqref{qeps} with respect to ${\bf q} $ we find $$ {\bf q} = {\bf q_0} + k* \nabla \theta , \ \hbox{ where } \, (k* \nabla \theta) (x,t) := \int_0^t \!\! k(t-s) \nabla \theta (x,s) ds , $$ ${\bf q_0} (x,t)$ is known and can be incorporated in the source term, $k (t) $ is a given kernel (depending on $\varepsilon$ of course): from \eqref{1phys} we obtain the balance equation for the standard phase field model with memory which has a hyperbolic character and has been extensively studied in \cite{cgg1, cgg2}.
In \cite{gn1,gn2,gn3,gn4} Green and Naghdi presented an alternative approach based on a thermomechanical theory of deformable media. This theory takes advantage of an entropy balance rather than the usual entropy inequality. If we restrict our attention to the heat conduction, these authors proposed three different theories, labeled as type I, type II and type III, respectively. In particular, when type I is linearized, we recover the classical theory based on the Fourier law \begin{equation} {\bf q} = - \alpha \nabla w_t , \quad \alpha >0 \ \hbox{ (type I). } \label{typeI} \end{equation} Furthermore, linearized versions of the two other theories yield \begin{equation} {\bf q} = - \beta \nabla w , \quad \beta >0 \ \hbox{ (type II) } \label{typeII} \end{equation} and \begin{equation} {\bf q} = - \alpha \nabla w_t - \beta \nabla w \quad \hbox{(type III). } \label{typeIII} \end{equation} Note that here we have used the thermal displacement \eqref{6} (instead of $\theta$) to write such laws. We also point out that \eqref{typeII}--\eqref{typeIII} have been recently discussed, applied and compared by Miranville and Quintanilla in \cite{MQ2, MQ3, MQ4} (there the reader can find a rich list of references as well). In particular, \eqref{typeIII} leads via \eqref{1phys} to our equation \eqref{1}; further, a no flux boundary condition for $ {\bf q}$ corresponds to $ \partial_n w = 0 $ in \eqref{3}.
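Similarly, the way \eqref{typeIII} leads to \eqref{1} can be sketched by a formal substitution into the balance equation \eqref{1phys} (recall that $e = w_t + u$):

```latex
% Substituting the type III law q = -\alpha\nabla w_t - \beta\nabla w
% into e_t + \Div q = f, with e = w_t + u:
\[
\partial_t ( w_t + u ) + \Div \left( - \alpha \nabla w_t - \beta \nabla w \right) = f ,
\qquad \hbox{i.e.} \qquad
w_{tt} - \alpha \Delta w_t - \beta \Delta w + u_t = f ,
\]
% which is the strong form of the first equation of the system
% (cf. the regularity theorem stated below).
```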
Thus, the system \eqref{1}--\eqref{4} results from \eqref{1phys}--\eqref{2phys} when \eqref{5} and \eqref{typeIII} are postulated. We are interested in the study of existence, uniqueness and regularity of the solution to the initial-boundary value problem \eqref{1}--\eqref{4} when $\gamma$ is an arbitrary maximal monotone graph, possibly multivalued, singular and with bounded domain. Of course, the choice of $\Psi$ shaped by a multiwell potential $ u \mapsto - w_t u + \phi (u) + G(u)$ is recovered as a particular case. Then we study the asymptotic behaviour of the problem as $\beta \searrow 0$, obtaining convergence of solutions to the problem with $\beta=0$, which corresponds to \eqref{typeI}, the (type I) case of Green and Naghdi. We also prove two error estimates for the difference of solutions in suitable norms, showing a linear rate of convergence in both cases. In a subsequent study we plan to investigate the analogous limit $\alpha \searrow 0$, so as to obtain the (type II) case in \eqref{typeII}.
The paper is organized as follows. In Section~\ref{wepo} we state the main results related to the problem~\eqref{1}--\eqref{4}: existence and uniqueness of a weak solution, regularity results yielding a strong solution, and further regularity results ensuring the boundedness of $u, \, w_t $ and of the appropriate selection of $\gamma (u)$. Section~\ref{sec: Pa} contains the statements related to the asymptotic limit as $\beta \searrow 0$: precisely, the convergence result and the error estimates under different assumptions on the data. In Section~\ref{no-un} we introduce some notation and present the uniqueness proof. The approximation of the problem~\eqref{1}--\eqref{4} via a Faedo-Galerkin scheme and the derivation of the uniform a priori estimates are carried out in Section~\ref{app}. Regularity and boundedness properties for the solutions are proved in Sections~\ref{reg1}--\ref{reg3}. Finally, the details of the asymptotic analysis as $\beta \searrow 0$ are developed in Section~\ref{beta=0}.
} \section{Well-posedness and regularity for $\alpha, \beta >0$} \label{wepo} We collect the assumptions on the data, state the precise formulation of the problem and present the main results we achieve. Let $\Omega\subseteq\mathbb{R}^3$ be a bounded {\cred smooth domain}
with boundary $\Gamma = \partial\Omega$ {\cred and} let $T > 0$. Set {\cred $Q:= \Omega \times (0,T)$. We assume that} \begin{equation} \alpha\, , \; \beta \in (0, +\infty) \label{alpha, beta} \end{equation} \begin{equation} \label{f} f \in \vett{L^2}{H^1(\Omega)'} + \vett{L^1}{L^2(\Omega)} \end{equation} \begin{equation} \gamma\subseteq\mathbb{R}\times\mathbb{R} \, \textrm{ is a maximal monotone graph, with } \, \gamma(0)\ni 0 \label{gamma} \end{equation} \begin{equation} \phi:\mathbb{R}\longrightarrow [0, +\infty] \, \textrm{ is convex and lower-semicontinuous} \label{phi} \end{equation} \begin{equation} \phi(0) = 0 \ \textrm{ and } \ \partial\phi = \gamma \label{phi gamma} \end{equation} \begin{equation} g: \mathbb{R} \longrightarrow \mathbb{R} \, \textrm{ is Lipschitz-continuous} \label{g} \end{equation} \begin{equation} w_0 \in H^1(\Omega) \, , \quad v_0\in L^2(\Omega) \, , \quad u_0\in L^2(\Omega) \, , \quad \phi(u_0)\in L^1(\Omega) . \label{initial data} \end{equation} The effective domain of $\gamma$ will be denoted by $D(\gamma)$. We consider
\noindent \textbf{Problem $\left(\textbf{P}_{\alpha, \beta}\right)$.} Find $(w, \, u, \, \xi)$ satisfying \begin{equation} w \in \vett{W^{1, \, \infty}}{L^2(\Omega)} \cap \vett{H^1}{H^1(\Omega)} \label{w} \end{equation} \begin{equation} w_{tt} \in \vett{L^1}{H^1(\Omega)'} \label{w_tt} \end{equation} \begin{equation} u \in \vett{H^1}{H^1(\Omega)'} \cap C^0\left([0, T]; \, L^2(\Omega)\right) \cap \vett{L^2}{H^1(\Omega)} \label{u} \end{equation} \begin{equation} \xi\in L^2(Q) \, , \qquad u\in D(\gamma) \ \textrm{ and } \ \xi\in\gamma(u)\ \textrm{ a.e. in } \, Q \label{xi} \end{equation} \begin{equation} \begin{split} \dual{w_{tt}(t)}{v} + \alpha\scal{\nabla w_t(t)}{\nabla v}_{L^2(\Omega)} + \beta\scal{\nabla w(t)}{\nabla v}_{L^2(\Omega)} + \dual{u_t(t)}{v} = \dual{f(t)}{v} \\ \qquad\textrm{for all } v\in H^1(\Omega) \textrm{ and a.a. } t\in (0, T) \end{split} \label{eq. A weak} \end{equation} \begin{equation} \begin{split} \dual{u_t(t)}{v} + \scal{\nabla u(t)}{\nabla v}_{L^2(\Omega)} + \scal{\xi(t)}{v}_{L^2(\Omega)} + \scal{g(u)(t)}{v}_{L^2(\Omega)} = \scal{w_t(t)}{v}_{L^2(\Omega)} \\ \qquad \textrm{for all } v\in H^1(\Omega) \textrm{ and a.a. } t\in(0, T) \end{split} \label{eq. B weak} \end{equation} \begin{equation} {\cred w(0) = w_0\ \textrm{ in } \, H^1(\Omega) \, , \quad w_t(0) = v_0 \ \textrm{ in } \, H^1(\Omega)'\, , \quad u(0) = u_0 \ \textrm{ in } \, L^2(\Omega). } \label{initial condition} \end{equation} We can prove the well-posedness of this problem.
\begin{theor}[Existence and uniqueness] Let assumptions \eqref{alpha, beta}--\eqref{initial data} hold. Then Problem~$\left(\textbf{P}_{\alpha, \beta}\right)$ has a unique solution. \end{theor}
Next, in addition to \eqref{alpha, beta}--\eqref{initial data}, we suppose \begin{equation} f \in L^2(0,T;L^2(\Omega)) + \vett{L^1}{H^1(\Omega)} \label{f strong} \end{equation} \begin{equation} w_0 \in H^2(\Omega) \, , \quad \partial_n w_0 = 0 \, \textrm{ on } \, {\cred \Gamma} \, , \quad v_0 \in H^1(\Omega) \, , \quad u_0\in H^1(\Omega) \, ; \label{initial data strong} \end{equation} in this case, we are able to prove a regularity result, which allows us to {\cred solve a strong formulation of Problem~$\left(\textbf{P}_{\alpha, \beta}\right)$}.
\begin{theor}[Regularity and strong solution] \label{th: strong solution} Assume \eqref{f strong}--\eqref{initial data strong} in addition to \eqref{alpha, beta}--\eqref{initial data}. Then the unique solution $(w, \, u, \, \xi)$ of Problem~$\left(\textbf{P}_{\alpha, \beta}\right)$ fulfills \begin{equation} w \in \vett{W^{1, \, \infty}}{H^1(\Omega)} \cap \vett{H^1}{H^2(\Omega)} \label{w strong} \end{equation} \begin{equation} w_{tt} \in \vett{L^1}{L^2(\Omega)} \label{w_tt strong} \end{equation} \begin{equation} u \in \vett{H^1}{L^2(\Omega)} \cap C^0\left([0, T]; \, H^1(\Omega)\right) \cap \vett{L^2}{H^2(\Omega)} \, . \label{u strong} \end{equation} In particular, $(w, \, u, \, \xi)$ solves Problem~$\left(\textbf{P}_{\alpha, \beta}\right)$ in a strong sense, that is, {\cred $w$ and $u$ satisfy} \[ w_{tt} - \alpha\Delta w_t - \beta\Delta w + u_t = f \quad \textrm{ a.e. in } Q \] \[ u_t - \Delta u + \xi + g(u) = w_t, \quad {\cred \xi \in \gamma(u)} \quad \textrm{ a.e. in } Q \] \[ \partial_n w = \partial_n u = 0 \quad \textrm{ a.e. on } \Gamma\times (0, T) \, . \] \end{theor}
The aim of the subsequent results is to provide $L^\infty$ estimates. We will need to further strengthen the hypotheses on the initial data. {\cred For $s\in D(\gamma)$ let us denote} by $\gamma^0(s)$ the element of $\gamma(s)$ having minimal modulus. {\cred Then, we require that} \begin{equation} u_0 \in H^2(\Omega) \, , \quad \partial_n u_0 = 0 \,\textrm{ on } \, {\cred \Gamma} \label{u0 strong} \end{equation} \begin{equation} u_0 \in D(\gamma) \quad \textrm{a.e. in } \Omega \, , \quad \ \gamma^0(u_0)\in L^2(\Omega) \, . \label{u0 gamma} \end{equation}
\begin{theor}[Further regularity] \label{th: regularity} If the conditions \eqref{alpha, beta}--\eqref{initial data}, \eqref{f strong}--\eqref{initial data strong} and \eqref{u0 strong}--\eqref{u0 gamma} hold, then the solution $(w, \, u, \, \xi)$ of Problem~$\left(\textbf{P}_{\alpha, \beta}\right)$ fulfills \begin{equation} u \in \vett{W^{1, \, \infty}}{L^2(\Omega)} \cap \vett{H^1}{H^1(\Omega)} \cap \vett{L^\infty}{H^2(\Omega)} \, . \label{u stronger} \end{equation} \end{theor}
The above results still hold if the dimension $N$ of the domain $\Omega$ is arbitrary. On the other hand, {\cred \eqref{u stronger} implies in particular that $u$ is continuous from $[0,T]$ to the space $H^s(\Omega)$ for all $s<2$; hence, if $N \leq 3$ and $s$ is sufficiently close to $2$, it turns out that $H^s(\Omega) \subset C^0(\overline\Omega) $ and consequently} \[ u \in C^0(\overline Q) \, . \]
Finally, we assume for the data {\cred enough regularity to get $L^\infty$ estimates for} $w_t$ and $\xi$. The hypothesis $N\leq 3$ is essential in the proof of the following {\cred result}. \begin{theor}[$L^\infty$ estimate for $w_t$ and $\xi$] \label{th: L^infty estimate} In addition to {\cred assumptions} \eqref{alpha, beta}--\eqref{initial data}, \eqref{f strong}--\eqref{initial data strong} and \eqref{u0 strong}--\eqref{u0 gamma}, we ask \begin{equation} f \in \vett{L^\infty}{L^2(\Omega)} + \vett{L^r}{H^1(\Omega)} \quad \textrm{ for some } \, r > 4/3 \label{f stronger} \end{equation} \begin{equation} \gamma^0(u_0) \in L^\infty(\Omega) \, . \label{gamma strong} \end{equation} Then we have \[ w_t \in L^\infty(Q) \, , \qquad \xi \in L^\infty(Q) \, . \] \end{theor}
{\cblu \begin{remark} All the statements contained in this paper still hold if $\Omega\subseteq\mathbb{R}^3$ is, for instance, a convex polyhedron, for which standard results on Sobolev embeddings and regularity for elliptic problems apply. \end{remark}}
\section{Asymptotic behaviour as $\beta\searrow 0$} \label{sec: Pa} Let us fix the parameter $\alpha$ once and for all. We shall concentrate on the asymptotic behaviour of the solution {\cred as} $\beta \searrow 0$, so we let $\beta$ vary in a bounded subset of $(0, +\infty)$. We {\cred allow} the source term and the initial data in Problem~$\left(\textbf{P}_{\alpha, \beta}\right)$ {\cred to} vary with $\beta$, by replacing $f$, $w_0$, $v_0$ and $u_0$ in {\cred \eqref{eq. A weak} and \eqref{initial condition}} with $f_\beta$, $w_{0, \beta}$, $v_{0,\,\beta}$ and $u_{0, \beta}$ respectively. We will denote by $(w_\beta, \, u_\beta, \, \xi_\beta)$ the solution to Problem~$\left(\textbf{P}_{\alpha, \beta}\right)$.
If we set $\beta = 0$ in the statement of Problem~$\left(\textbf{P}_{\alpha, \beta}\right)$, we get a first-order (in time) system of differential equations in the variable $w_t$, which is of physical relevance {\cred (recall that $w_t=\theta$)}. However, we avoid this change of variable in order to preserve the formalism. We {\cred introduce} the formulation of Problem~$\left(\textbf{P}_{\alpha}\right)$, in which $\beta$ is set equal to zero.
\noindent \textbf{Problem $\left(\textbf{P}_{\alpha}\right)$.} Find $(w, \, u, \, \xi)$ satisfying \eqref{w}--\eqref{xi} as well as {\cred \begin{equation} \begin{split} \dual{w_{tt}(t)}{v} + \alpha\scal{\nabla w_t(t)}{\nabla v}_{L^2(\Omega)} + \dual{u_t(t)}{v} = \dual{f(t)}{v}\\ \hbox{for all \,$v\in H^1(\Omega)$ \, and a.a. \,$t\in (0,T)$} \end{split} \label{eq. Aa weak} \end{equation} \begin{equation} \begin{split} \dual{u_t(t)}{v} + \scal{\nabla u(t)}{\nabla v}_{L^2(\Omega)} + \scal{(\xi + g(u))(t)}{v}_{L^2(\Omega)} = \scal{w_t(t)}{v}_{L^2(\Omega)}\\ \hbox{for all \, $v\in H^1(\Omega)$ \, and a.a.\, $t\in (0,T)$} \end{split} \label{eq. Ba weak} \end{equation} \begin{equation} {\cred w(0) = w_0\ \textrm{ in } \, H^1(\Omega) \, , \quad w_t(0) = v_0 \ \textrm{ in } \, H^1(\Omega)'\, , \quad u(0) = u_0 \ \textrm{ in } \, L^2(\Omega). } \label{initial condition a} \end{equation} }
We state at first the well-posedness of Problem~$\left(\textbf{P}_{\alpha}\right)$ and a convergence result.
\begin{theor}[Well-posedness for $\left(\textbf{P}_{\alpha}\right)$] \label{th: well-posedness Pa} If the hypotheses \eqref{f}--\eqref{initial data} hold, then Problem~$\left(\textbf{P}_{\alpha}\right)$ admits exactly one solution. \end{theor} \begin{theor}[Convergence as $\beta\searrow 0$] \label{th: convergence} We assume {\cred \eqref{f}--\eqref{initial data} and} \begin{equation} f_\beta \rightharpoonup f \quad \textrm{in } \vett{L^2}{H^1(\Omega)'} + \vett{L^1}{L^2(\Omega)} \label{f convergence beta} \end{equation} \begin{equation} w_{0, \beta} \rightharpoonup w_0 \quad \textrm{in } H^1(\Omega) \, , \qquad v_{0, \beta} \rightharpoonup v_0 \, , \quad u_{0, \beta} \rightharpoonup u_0 \quad \textrm{in } L^2(\Omega) . \label{data convergence beta} \end{equation} Then, the convergences \[ w_\beta \rightharpoonup^* w \quad \textrm{in } \vett{W^{1, \, \infty}}{L^2(\Omega)} \, , \qquad w_\beta \rightharpoonup w \quad \textrm{in } \vett{H^1}{H^1(\Omega)} \] \[ u_\beta \rightharpoonup u \quad \textrm{in } \vett{H^1}{H^1(\Omega)'} \cap \vett{L^2}{H^1(\Omega)} \] \[ \xi_\beta \rightharpoonup \xi \quad \textrm{in } L^2(Q) \] hold{\cred , where $(w,u,\xi)$ denotes the solution to Problem $\left(\textbf{P}_{\alpha}\right)$.} \end{theor}
With slightly strengthened hypotheses, we are able to prove strong convergence of the {\cred solution and even} to give an estimate for the convergence {\cred rate}.
\begin{theor}[{\cred First error estimate}] \label{th: first estimate error} In addition to \eqref{gamma}--\eqref{g} and \eqref{f convergence beta}--\eqref{data convergence beta}, we assume
\begin{equation} \norm{f_\beta - f}_{\vett{L^2}{H^1(\Omega)'} + \vett{L^1}{L^2(\Omega)}} \leq c\,\beta \label{f rate a} \end{equation} \begin{equation} {\cred \norm{w_{0, \beta} - w_0}_{H^1(\Omega)} }+ \norm{v_{0, \beta} - v_0}_{H^1(\Omega)'} + \norm{u_{0, \beta} - u_0}_{L^2(\Omega)} \leq c\, \beta \label{data rate a} \end{equation} for some constant $c$ which is independent of $\beta$. Then {\cred one has} the estimate{\cred \begin{equation} \begin{split} \norm{w_\beta - w}_{\vett{H^1}{L^2(\Omega)}\cap\vett{L^\infty}{H^1(\Omega)}} \hskip2cm\\ + \norm{u_\beta - u}_{\vett{L^\infty}{L^2(\Omega)}\cap\vett{L^2}{H^1(\Omega)}} \leq c\,\beta \end{split} \label{stimaerr1} \end{equation} } where $c$ does not depend on $\beta$. \end{theor}
If $\gamma$ is a (single-valued) smooth function, and if enough regularity on the data is assumed, it is possible to obtain much stronger estimates. The assumption $N\leq 3$ on the spatial dimension is essential for the proof of the following result.
\begin{theor}[{\cred Second error estimate}] \label{th: second estimate error} {\cred Let \eqref{gamma}--\eqref{g}, \eqref{f convergence beta}--\eqref{data convergence beta} hold {\cred and} \begin{equation} \gamma: D(\gamma)\longrightarrow \mathbb{R} \ \textrm{ be single-valued and locally Lipschitz-continuous.} \label{gamma lipshitz} \end{equation} Moreover, assume that the data $\{ f_\beta, \, w_{0, \beta},\, v_{0, \beta}, \, u_{0, \beta} \}$, as well as $\{ f, \, w_{0},\, v_{0}, \, u_{0} \}$, satisfy \eqref{f strong}--\eqref{initial data strong}, \eqref{u0 strong}--\eqref{u0 gamma}, \eqref{f stronger}--\eqref{gamma strong} along with \begin{equation} \norm{f_\beta}_{\vett{L^\infty}{L^2(\Omega)} + \vett{L^r}{H^1(\Omega)} }
+ \norm{ u_{0, \beta}}_{H^2(\Omega)} + \norm{ \gamma(u_{0, \beta}) }_{L^\infty (\Omega)} \leq c \label{debole1} \end{equation} \begin{equation} \norm{f_\beta - f}_{\vett{L^2}{L^2(\Omega)} + \vett{L^1}{H^1(\Omega)}} \leq c\, \beta \label{f rate b} \end{equation} \begin{equation} \norm{w_{0, \beta} - w_0}_{H^2(\Omega)} + \norm{v_{0, \beta} - v_0}_{H^1(\Omega)} + \norm{u_{0, \beta} - u_0}_{H^1(\Omega)} \leq c\, \beta \label{data rate strong} \end{equation} where $r>4/3$. Then the estimate \begin{equation} \begin{split} \norm{w_\beta - w}_{\vett{W^{1,\infty}}{H^1(\Omega)}\cap\vett{H^1}{H^2(\Omega)}} \hskip4cm\\ + \norm{u_\beta - u}_{\vett{H^1}{L^2(\Omega)}\cap\vett{L^\infty}{H^1(\Omega)}\cap\vett{L^2}{H^2(\Omega)}} \leq c \, \beta \end{split} \label{stimaerr2} \end{equation} holds for a suitable constant $c$, which may depend on $\alpha$ but not on $\beta$. } \end{theor}
{\cred \section{Notation and uniqueness proof} \label{no-un} Before proving the above } results, for the sake of convenience we fix some notation: \[ Q_t = \Omega \times {\cred (0, t) \quad \textrm{for } 0 \leq t \leq T , \quad \ Q=Q_T ,} \] \[ H = L^2(\Omega) \, , \ \quad V = H^1(\Omega) \, , \quad\ {\cred W = \left\{v\in H^2(\Omega): \: \partial_n v = 0 \quad \textrm{a.e. on } \Gamma \right\}. } \] We embed $H$ in $V'$ by means of the formula \[ \dual{y}{v} = \scal{y}{v}_H \qquad \textrm{for all } y\in H \, , \; v\in V \, . \] Furthermore, the same symbol $\norm{\cdot}_H$ will denote both the norm in $L^2(\Omega)$ and in $L^2(\Omega)^{{\cred N}}$; we adopt the same convention for $\norm{\cdot}_V$. If $a$, $b$ are functions of the space and time variables, we introduce the convolution product with respect to time \[ (a * b)(t) = \int_0^t a(s)b(t - s) ds \, , \qquad 0 \leq t \leq T \, . \] We also point out that the symbols $c$, $c_i$ -- even in the same formula -- stand for different constants, depending on $\Omega$, $T$ and the data, but not on the parameters $\alpha$, $\beta$. However, since we will be interested in the study of the convergence as $\beta\searrow 0$, if a constant $c$ depends on $\alpha$ and $\beta$ in such a way that it remains bounded whenever $\alpha$ and $\beta$ do, then we still use the notation $c$. A constant depending on the data and on $\alpha$, but not on $\beta$, may be denoted by $c_\alpha$ or $c_{\alpha, i}$ {\cred or simply $c$, as it will happen in Section~\ref{beta=0}.}
{\cred In our computations, we will often exploit the H\"older and Young inequalities to infer} \[ \int_{Q_t} ab \leq \frac{1}{2\sigma} \int_0^t \nh{a(s)}ds + \frac{\sigma}{2} \int_0^t \nh{b(s)} ds \] where $a, \, b \in L^2(Q)$ and $\sigma > 0$ is arbitrary. We point out another inequality which will turn out to be useful: if $\varphi\in\vett{H^1}{H}$, then the fundamental {\cred t}heorem of calculus and the H\"older inequality entail \begin{equation}
{\cred \nh{\varphi(t)} = \left\| \varphi(0) + \int_0^t\varphi_t(s)ds\right\|_H^2 \leq 2\nh{\varphi(0)} + 2T \int_0^t \nh{\varphi_t(s)} ds } \label{fond. t. calculus} \end{equation} for all $0 \leq t \leq T$. Now, let us concentrate on the uniqueness proof.
Let $(w_1, \, u_1, \, \xi_1)$ and $(w_2, \, u_2, \, \xi_2)$ be solutions to Problem~$\left(\textbf{P}_{\alpha, \beta}\right)$; we claim that they coincide. Setting $w = w_1 - w_2$, \mbox{$u = u_1 - u_2$} and $\xi = \xi_1 - \xi_2$, we easily get \begin{equation} \dual{w_{tt}(t)}{v} + \alpha\scal{\nabla w_t(t)}{\nabla v}_H + \beta\scal{\nabla w(t)}{\nabla v}_H + \dual{u_t(t)}{v} = 0 \label{A uniq.} \end{equation} \begin{equation} \begin{split} \dual{u_t(t)}{v} + \scal{\nabla u(t)}{\nabla v}_H + \scal{\xi(t)}{v}_H + \scal{g(u_1)(t) - g(u_2)(t)}{v}_H = \scal{w_t(t)}{v}_H \end{split} \label{B uniq.} \end{equation} for all $v\in V$ and a.a. $0 \leq t \leq T$, along with the initial conditions {\cred \begin{equation} w(0) = w_t(0) = u(0) = 0 \, . \label{3-2bis} \end{equation} } We choose $v = u(t)$ in equation \eqref{B uniq.} and integrate over $(0, t)$; thus, we obtain \[ \frac{1}{2}\nh{u(t)} + \int_0^t \nh{\nabla u(s)} ds + \int_{Q_t}\xi u = - \int_{Q_t} \left(g(u_1) - g(u_2)\right)u + \int_{Q_t} w_t u \, . \] Accounting for the Lipschitz-continuity of $g$, the H\"older inequality and the monotonicity of $\gamma$, from the above equality we easily derive \begin{equation} \frac{1}{2}\nh{u(t)} + \int_0^t \nh{\nabla u(s)} ds \leq c \int_0^t\nh{u(s)}ds + \int_{Q_t} w_t u \, . \label{B uniq..} \end{equation} Integrating equation \eqref{A uniq.} in time (this is possible thanks to \eqref{w}) and taking the initial data {\cred \eqref{3-2bis}} into account, we have \begin{equation} \scal{w_{t}(t)}{v}_H + \alpha\scal{\nabla w(t)}{\nabla v}_H + \beta\scal{1*\nabla w(t)}{\nabla v}_H + \scal{u(t)}{v}_H = 0 \, ; \label{A new uniq.} \end{equation} we choose $v = w_t(t)$ in \eqref{A new uniq.} and integrate over $(0, t)$.
Noticing that the equality \begin{equation} \scal{1*\nabla w(t)}{ {\cred \nabla} w_t(t)}_H = \frac{d}{dt} \scal{1*\nabla w(t)}{\nabla w(t)}_H - \nh{\nabla w(t)} \label{derivative} \end{equation} holds, we get \begin{equation} \begin{split} \int_0^t\nh{w_t(s)}ds + \frac{\alpha}{2} \nh{\nabla w(t)} = - \beta\scal{1*\nabla w(t)}{\nabla w(t)}_H \\ + \beta\int_0^t\nh{\nabla w(s)} ds - \int_{Q_t} uw_t \, . \end{split} \label{A new uniq..} \end{equation} The H\"older inequality and \eqref{fond. t. calculus} allow us to deal with the right-hand side of this formula: \begin{equation} - \beta\scal{1*\nabla w(t)}{\nabla w(t)}_H \leq \frac{c\beta^2}{\alpha} \int_0^t\nh{\nabla w(s)}ds + \frac{\alpha}{4}\nh{\nabla w(t)} \, . \\ \label{A diseg} \end{equation} Collecting {\cred now} \eqref{B uniq..}, \eqref{A new uniq..} and \eqref{A diseg}, it follows that \[ \begin{split} \frac{1}{2}\nh{u(t)} + \int_0^t \nh{\nabla u(s)} ds + \int_0^t\nh{w_t(s)}ds + \frac{\alpha}{4} \nh{\nabla w(t)} \\ \leq c \int_0^t\nh{u(s)}ds + c\left(\beta + \frac{\beta^2}{\alpha}\right) \int_0^t\nh{\nabla w(s)}ds \, ; \end{split} \] {\cred then, by} applying the Gronwall lemma {\cred and recalling \eqref{3-2bis}}, we obtain $ {\cred u=w= 0}$ almost everywhere in $Q$. A comparison in \eqref{eq. B weak} and the density of $H^1(Q)$ as a subspace of $L^2(Q)$ entail $\xi = 0$ almost everywhere in $Q$; thus, the proof of uniqueness is complete.
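We conclude this section by justifying, for the reader's convenience, the identity \eqref{derivative} used above: since $\partial_t (1 * \nabla w) = \nabla w$, the Leibniz rule gives

```latex
% Differentiating t -> (1 * \nabla w(t), \nabla w(t))_H by the product rule,
% using \partial_t (1 * \nabla w) = \nabla w:
\[
\frac{d}{dt} \scal{1*\nabla w(t)}{\nabla w(t)}_H
= \nh{\nabla w(t)} + \scal{1*\nabla w(t)}{\nabla w_t(t)}_H ,
\]
% and rearranging yields exactly \eqref{derivative}.
```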
\section{Approximation and a priori estimates} \label{app} We are going to prove the existence of a solution to Problem~$\left(\textbf{P}_{\alpha, \beta}\right)$ via a Faedo-Galerkin method. {\cred First, we approximate the graph $\gamma$ with its Yosida regularization: for all $\varepsilon \in (0,1]$ say, we let \[ {\cred \gamma_\varepsilon := \frac{1}{\varepsilon}\left\{I - \left(I + \varepsilon\gamma\right)^{-1}\right\} \quad \hbox{ and } \quad \phi_\varepsilon(s) := \min_{\tau\in\mathbb{R}}\left\{\frac{1}{2\varepsilon}\abs{\tau - s}^2 + \phi(\tau)\right\} \quad \hbox{ for }\, s\in\mathbb{R} } \] where $I$ denotes the identity on $\mathbb{R}$. We recall that $\phi_\varepsilon$ is a nonnegative, convex and differentiable function, $\gamma_\varepsilon$ is Lipschitz-continuous, monotone and {\cred \begin{equation} \label{prope} \gamma_\varepsilon(0) = 0 \, , \quad \phi_\varepsilon ' = \gamma_\varepsilon \, , \quad 0 \leq \phi_\varepsilon(s) \leq \phi(s) \, , \quad \abs{\gamma_\varepsilon(s)} \leq \abs{\gamma^0(s)} \ \quad \forall\ \varepsilon > 0, \ s\in\mathbb{R} \end{equation} } {\cred (see, e.g., \cite[Prop.~2.6, p.~28 and Prop.~2.11, p.39]{Brezis} or \cite[pp.~57--58]{Barbu}).}
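As a concrete illustration (an example only, not needed in the sequel), if $\gamma = \partial I_{[-1, +1]}$ as in the Stefan-type case recalled in the Introduction, then the resolvent $(I + \varepsilon\gamma)^{-1}$ is the projection onto $[-1, +1]$ and the above definitions give

```latex
% Yosida regularization of \gamma = \partial I_{[-1,+1]}:
% (I + \varepsilon\gamma)^{-1} is the projection onto [-1,+1], whence
\[
\gamma_\varepsilon(s) =
\begin{cases}
(s + 1)/\varepsilon & \hbox{if } s < -1 \\
0 & \hbox{if } -1 \leq s \leq 1 \\
(s - 1)/\varepsilon & \hbox{if } s > 1
\end{cases}
\qquad
\phi_\varepsilon(s) = \frac{1}{2\varepsilon} \, \mathrm{dist} \left( s, [-1, +1] \right)^2 .
\]
```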
We look for a solution of the approximating problem} in a finite-dimensional subspace $V_n\subseteq V$, choosing a sequence $\left\{V_n\right\}$ filling up $V$; then we get a priori estimates and use compactness arguments to take the limit {\cred as} $n \longrightarrow +\infty$. {\cred In a second step we let $\varepsilon \searrow 0$.}
A special choice of the approximating subspaces will be useful. Let $\left\{v_i\right\}_{i\in\mathbb{N}}$ be an orthonormal basis for $V$ satisfying \begin{equation} {\cred - \Delta v_i = \lambda_i v_i \quad \textrm{ in }\, \Omega , \quad \quad \partial_n v_i = 0 \quad \textrm{ on } \, \Gamma } \label{numeroform} \end{equation} where $\left\{\lambda_i\right\}_{i\in\mathbb{N}}$ are the eigenvalues of the Laplace operator with homogeneous Neumann boundary conditions; also, let $V_n$ be the subspace of $V$ spanned by $v_1, \, \ldots, \, v_n$, for all $n\in\mathbb{N}$. Thus, we have defined an increasing sequence of subspaces whose {\cred union} is dense in $V$, and hence in $H$; furthermore, we notice that the regularity of {\cred $\Omega$} implies ${\cred V_n\subseteq W }$, for all $n\in\mathbb{N}$.
As approximations of the data $w_0$, $v_0$, $u_0$ we choose the projections on $V_n$: let $w_{0, n}$ be the projection of $w_0$, with respect to $V$, and let $v_{0, n}$, $u_{0, n}$ be the projections of $v_0$, $u_0$, with respect to $H$. We notice that \begin{equation} w_{0, n} \longrightarrow w_0 \ \textrm{ in } V \, , \quad v_{0, n} \longrightarrow v_0 \ \textrm{ in } H \, , \quad u_{0, n} \longrightarrow u_0 \ \textrm{ in } H \, . \label{data convergence} \end{equation} We also need to regularize the source term $f$: so, we first write {\cred \begin{equation} f = f^{(1)} + f^{(2)} \, , \quad \textrm{where } \, f^{(1)}\in\vett{L^2}{V'} \, \textrm{ and } \, f^{(2)}\in \vett{L^1}{H} \, , \label{f splittata} \end{equation} } then we assume $f_n^{(1)}$, $f_n^{(2)}$ to be functions in $C^0\left([0, T]; \, V'\right)$, $C^0\left([0, T]; \, H\right)$ respectively, such that \begin{equation} f_n^{(1)} \longrightarrow f^{(1)} \ \textrm{ in } \vett{L^2}{V'} \, , \quad f_n^{(2)} \longrightarrow f^{(2)} \ \textrm{ in } \vett{L^1}{H} \, ; \label{f convergence} \end{equation} we also set $f_n = f_n^{(1)} + f_n^{(2)}$.
Now we are ready to state the approximate problem. For the sake of simplicity, we do not make the dependence of the solution {\cred on} $\varepsilon$ explicit.
\textbf{Problem $\left(\textbf{P}_{\alpha, \beta}\right)_{n, \, \varepsilon}$}. Find $T_n\in (0, T]$ and $(w_n, u_n)$ satisfying \[ w_n\in C^2([0, T_n]; \, V_n) \, , \qquad u_n\in C^1([0, T_n]; \, V_n) \] \begin{equation} \begin{split} \scal{\partial^2_{t} w_n(t)}{v}_H + \alpha \scal{\nabla \partial_t w_n (t)}{\nabla v}_H + \beta \scal{\nabla w_n(t)}{\nabla v}_H + \scal{\partial_t u_n(t)}{v}_H\\ = \dual{f_n(t)}{v} \qquad \textrm{for all } v\in V_n \textrm{ and all } t \in [0, T_n] \end{split} \label{eq. A n} \end{equation} \begin{equation} \begin{split} \scal{\partial_{t} u_n(t)}{v}_H + \scal{\nabla u_n (t)}{\nabla v}_H + \scal{\gamma_\varepsilon(u_n)(t)}{v}_H + \scal{g(u_n)(t)}{v}_H \\ = \scal{\partial_t w_n(t)}{v}_H \qquad \textrm{for all } v\in V_n \textrm{ and all } t \in [0, T_n] \end{split} \label{eq. B n} \end{equation} {\cred \begin{equation} \label{in.cond} w_n(0) = w_{0, n} \, , \qquad \partial_t w_n(0) = v_{0, n} \, , \qquad u_{n}(0) = u_{0, n} \, . \end{equation} } Writing $w_n$ and $u_n$ as linear combinations of $v_1, \, \ldots , \, v_n$ with time-dependent coefficients, and testing equations \eqref{eq. A n} and \eqref{eq. B n} {\cred by} $v = v_1, \, \ldots , \, v_n$, we obtain a system of ordinary {\cred differential} equations, for whose local existence and uniqueness standard results apply. Thus, Problem~$\left(\textbf{P}_{\alpha, \beta}\right)_{n, \, \varepsilon}$ admits a solution, defined on some interval $[0, \, T_n]$. The following estimates imply that these solutions can be extended over the whole interval $[0, T]$.
\textbf{First a priori estimate.} We choose $v = {\cred u_n}(t)$ in equation \eqref{eq. B n} and integrate over~$(0, t)$: \[ \begin{split} \frac{1}{2}\nh{u_n(t)} + \int_0^t \nh{\nabla u_n(s)} ds + \int_{Q_t} \gamma_\varepsilon(u_n)u_n \\ = - \int_{Q_t} g(u_n)u_n + \int_{Q_t} u_n\partial_t w_n + \frac{1}{2} \nh{u_{0, n}} \, . \end{split} \] The last term in the left-hand side is non negative, because $\gamma_\varepsilon$ is {\cred increasing} and $\gamma_\varepsilon(0) = 0$; it will be ignored in the following estimates. Meanwhile, the right-hand side can be easily estimated using the Lipschitz-continuity of $g$ and \eqref{data convergence}; so we get \begin{equation} \frac{1}{2}\nh{u_n(t)} + \int_0^t \nh{\nabla u_n(s)} ds \leq c \int_0^t \nh{u_n(s)} ds + \int_{Q_t} u_n\partial_t w_n + c \, . \label{test 1, 2} \end{equation} Following the same computation {\cred as in} the uniqueness proof, we integrate equation \eqref{eq. A n} with respect to time: \begin{equation} \begin{split} \scal{\partial_{t} w_n(t)}{v}_H + \alpha \scal{\nabla w_n (t)}{\nabla v}_H + \beta \scal{1*\nabla w_n(t)}{\nabla v}_H + \scal{ u_n(t)}{v}_H \qquad \\ = \dual{1*f_n^{(1)}(t)}{v} + \scal{1*f_n^{(2)}(t)}{v}_H + \scal{v_{0, n} + u_{0, n}}{v}_H + \alpha\scal{\nabla w_{0, n}}{\nabla v}_H \end{split} \label{new A n} \end{equation} for all $v\in V_n$ and $0 \leq t \leq T_n$. We take $v = \partial_t w_n(t)$ in the previous equation and integrate over $(0, t)$. Recalling the identity \eqref{derivative}, we have \begin{equation} \begin{split} \int_0^t \nh{\partial_tw_n(s)} ds + \frac{\alpha}{2}\nh{\nabla w_n(t)} = \sum_{i = 1}^7 T_i(t) + \frac{\alpha}{2}\nh{\nabla w_{0, n}} \end{split} \label{test 1, 1} \end{equation} where we have set \[ T_1(t) = \beta\int_0^t\nh{\nabla w_n(s)}ds \, , \quad T_2(t) = - \beta\scal{1*\nabla w_n(t)}{\nabla w_n(t)}_H \] \[ T_3(t) = - \int_{Q_t}\!\! u_n\partial_t w_n \, , \quad {\cred T_4(t) = \int_0^t\!\! 
\left\langle 1*f_n^{(1)}(s), \partial_t w_n (s)\right\rangle ds }\, , \quad T_5(t) = \int_{Q_t}\!\!\left(1*f_n^{(2)}\right)\partial_t w_n \] \[ {\cred T_6(t) = \int_0^t\scal{v_{0, n} + u_{0, n}}{\partial_t w_n(s)}_H ds \, , \quad T_7(t) = \alpha \int_0^t\scal{\nabla w_{0, n}}{\nabla\partial_t w_n(s)}_H ds } \, . \] We do not need any estimate on terms $T_1$ and $T_3$. With simple applications of the H\"older inequality, we estimate $T_2$, $T_5$ and $T_6$: \[ T_2(t) \leq \frac{\alpha}{8}\nh{\nabla w_n(t)} + \frac{c\beta^2}{\alpha}\int_0^t\nh{\nabla w_n(s)} ds \] \[ T_5(t) \leq \frac{1}{4}\int_0^t \nh{\partial_t w_n(s)} ds + \int_0^t \nh{1*f^{(2)}_n(s)} ds \] \[ T_6(t) \leq \frac{1}{4}\int_0^t \nh{\partial_tw_n(s)} ds + c \nh{v_{0, n}} + c\nh{u_{0, n}} \, . \] We deal with $T_7$ by direct integration and the use of the H\"older inequality: \[ T_7(t) = \alpha\scal{\nabla w_n(t)}{\nabla w_{0, n}}_H - \alpha\nh{\nabla w_{0, n}} \leq \frac{\alpha}{8} \nh{\nabla w_n(t)} + \alpha\nh{\nabla w_{0, n}} \, . \] Now we pay attention to $T_4$ and integrate by parts in time:{\cred \[ \begin{split} T_4(t) = \dual{1*f_n^{(1)}(t)}{w_n(t)} - \int_0^t \dual{f_n^{(1)}(s)}{w_n(s)}ds \leq \frac{1}{2\sigma} \norm{1*f_n^{(1)}(t)}^2_{V'} \\ + \frac{\sigma}{2} \norm{w_n(t)}^2_V + \frac{1}{2}\int_0^t\norm{f_n^{(1)}(s)}^2_{V'}ds + \frac{1}{2}\int_0^t \norm{w_n(s)}^2_V ds \, , \end{split} \] } where $\sigma> 0$ is arbitrary, to be set later. According to the definition of the norm in $V$ and the inequality \eqref{fond. t. calculus}, we have \[ \begin{split} T_4(t)\leq \frac{1}{2\sigma} \norm{1*f_n^{(1)}(t)}^2_{V'} + \sigma T\int_0^t\nh{\partial_t w_n(s)}ds + \frac{\sigma}{2}\nh{\nabla w_n(t)} \\ + \frac{1}{2}\int_0^t\norm{f_n^{(1)}(s)}^2_{V'}ds + T\int_0^t\left(\int_0^s \nh{\partial_t w_n(\tau)}d\tau\right)ds \\ + \frac12 \int_0^t\nh{\nabla w_n(s)}ds + T\left(\sigma + 1\right)\nh{w_{0, n}} \, . 
\end{split} \] We collect all the terms containing $\norm{\partial_t w_n}_{L^2(0, t; \, H)}$ and $\norm{\nabla w_n(t)}_H$ in the left-hand side of \eqref{test 1, 1}; their coefficients turn out to be, respectively, \[ k_1 = \frac{1}{2} - T\sigma \, , \quad k_2 = \frac{1}{2}\left(\frac{\alpha}{2} - \sigma \right) \, . \] We choose $\sigma \ {\cred \leq} \ \min\left\{\alpha/4, \, 1/(4T) \right\}$, so that $k_1 \geq 1/4$, $k_2 \geq \alpha/8$. We also remark that the assumptions \eqref{f convergence} and \eqref{data convergence} enable us to get a bound for terms involving ${\cred f_n^{(1)}, \,f_n^{(2)} } $ and the initial data. Finally, adding \eqref{test 1, 2} and \eqref{test 1, 1} and taking into account all the previous inequalities, we obtain \[ \begin{split} \frac{1}{2}\nh{u_n(t)} + \int_0^t \nh{\nabla u_n(s)} ds + \frac{1}{4} \int_0^t \nh{\partial_tw_n(s)} ds + \frac{\alpha}{8}\nh{\nabla w_n(t)} \\ \leq c\int_0^t \nh{u_n(s)} ds + T\int_0^t\left(\int_0^s \nh{\partial_t w_n(\tau)}d\tau\right)ds + c_\alpha \int_0^t\nh{\nabla w_n(s)}ds + c_\alpha \, . \end{split} \] The Gronwall lemma entails{\cred \begin{equation} \norm{u_n}_{\vett{L^\infty}{H} \cap \vett{L^2}{V}} +
\norm{w_n}_{\vett{H^1}{H}} + \sqrt{\alpha} \norm{w_n}_{\vett{L^\infty}{V}} \leq c_{\alpha} \, . \label{estimate-1} \end{equation} }
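For the reader's convenience, the integral form of the Gronwall lemma applied here may be recalled as follows (a standard statement, not specific to this problem): if $\psi \in L^1(0, T)$ is nonnegative, then

```latex
% Integral form of the Gronwall lemma: a linear integral bound
% on psi yields an exponential pointwise bound.
\[
\psi(t) \leq a + b\int_0^t \psi(s)\, ds \quad \textrm{for a.a. } t\in(0, T)\, , \ a, b \geq 0
\qquad \Longrightarrow \qquad
\psi(t) \leq a\, e^{bt} \leq a\, e^{bT} \, .
\]
```

Note that the double integral term above is reduced to this form by the elementary bound $T\int_0^t\left(\int_0^s \nh{\partial_t w_n(\tau)}d\tau\right)ds \leq T^2 \int_0^t \nh{\partial_t w_n(s)}\, ds$.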
\textbf{Second a priori estimate.} Since $\phi_\varepsilon$ is at most of quadratic growth, by definition, and $\gamma_\varepsilon$ is Lipschitz-continuous, from the estimate \eqref{estimate-1} we directly derive \begin{equation} \norm{\phi_\varepsilon(u_n)}_{\vett{L^\infty}{L^1(\Omega)}} \leq c'_{\alpha, 1} \label{estimate 2, 1} \end{equation} \begin{equation} \norm{\gamma_\varepsilon(u_n)}_{L^2(Q)} \leq c'_{\alpha, 2}\, ; \label{estimate 2, 2} \end{equation} where the symbols $c'_{\alpha, i}$ denote positive constants, possibly depending on $\varepsilon$ and $\alpha$, but not on $n$ and $\beta$.
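As a sketch of the quadratic growth property (under the customary normalizations $\phi_\varepsilon(0) = 0$ and $0 \in \gamma(0)$, which we assume here), recall that the Yosida approximation $\gamma_\varepsilon$ is Lipschitz-continuous with constant $1/\varepsilon$ and $\gamma_\varepsilon(0) = 0$, whence:

```latex
% Quadratic growth of the regularized potential phi_eps:
% |gamma_eps(s)| <= |s|/eps implies, upon integration,
\[
0 \leq \phi_\varepsilon(r) = \int_0^r \gamma_\varepsilon(s)\, ds \leq \frac{r^2}{2\varepsilon}
\qquad \textrm{for all } r\in\mathbb{R} \, ,
\]
```

so that the $\vett{L^\infty}{H}$ bound on $u_n$ provided by \eqref{estimate-1} immediately yields \eqref{estimate 2, 1}.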
{\cblu By \eqref{numeroform}, we can easily check that \[ \scal{y}{z}_H = \scal{P_ny}{z}_H \qquad \textrm{for all } y\in V\, , \quad z\in V_n \] where $P_ny$ is the projection of $y$ onto $V_n$ with respect to the scalar product of $V$. Then,} as we have a uniform estimate for $u_n$ in $\vett{L^2}{V}$, it is not difficult to extract from \eqref{eq. B n} the property \begin{equation} \norm{\partial_t u_n}_{\vett{L^2}{V'}} \leq c'_{\alpha, 3} \, . \label{estimate 2, 3} \end{equation}
\textbf{Third a priori estimate.} We take $v = \partial_t w_n(t)$ as a test function in equation \eqref{eq. A n} and integrate over $(0, t)$; {\cred thanks to} the H\"older inequality, we get \begin{equation} \begin{split} \frac{1}{2}\nh{\partial_t w_n(t)} + \alpha\int_0^t\nh{\nabla\partial_t w_n(s)}ds + \frac{\beta}{2} \nh{\nabla w_n(t)} \hskip2.5cm \\ \leq \int_0^t \dual{f_n^{(1)} - \partial_t u_n(s)}{\partial_t w_n(s)} ds + \int_0^t\norm{f_n^{(2)}(s)}_H \norm{\partial_t w_n(s)}_H ds \\ + \frac{1}{2}\nh{v_{0,\, n}} + \frac{\beta}{2}\nh{\nabla w_{0, n}} \, . \end{split} \label{test 3, 1} \end{equation} We consider the term involving $f_n^{(1)} - \partial_t u_n$: \[ \begin{split} \int_0^t \dual{f_n^{(1)} - \partial_t u_n(s)}{\partial_t w_n(s)} ds \leq \frac{c}{\alpha} \int_0^t\norm{f_n^{(1)}(s)}^2_{V'}ds + \frac{c}{\alpha} \int_0^t\norm{\partial_t u_n(s)}^2_{V'}ds \\ + \frac{\alpha}{2} \int_0^t\nh{\partial_t w_n(s)} ds + \frac{\alpha}{2} \int_0^t \nh{\nabla\partial_t w_n(s)} ds \, . \end{split} \] Because of the estimate \eqref{estimate 2, 3} {\cred and the properties} \eqref{f convergence} and \eqref{data convergence}, from \eqref{test 3, 1} we deduce \[ \begin{split} \frac{1}{2}\nh{\partial_t w_n(t)} + \frac{\alpha}{2} \int_0^t\nh{\nabla\partial_t w_n(s)}ds + \frac{\beta}{2} \nh{\nabla w_n(t)} \\ \leq c' + \frac{\alpha}{2} \int_0^t\nh{\partial_t w_n(s)}ds + \int_0^t\norm{f_n^{(2)}(s)}_H \norm{\partial_t w_n(s)}_H ds \, , \end{split} \] where $c'$ depends on $\varepsilon, \, \alpha$. Hence, by a generalized version of the Gronwall lemma {\cred (see, e.g., \cite[pp. 156--157]{Brezis})}, we {\cred infer that} \begin{equation} \norm{w_n}_{\vett{W^{1, \, \infty}}{H}} + {\cred \sqrt{\alpha}} \norm{w_n}_{\vett{H^1}{V}} \leq c'_{\alpha, 4} \, . \label{estimate 3, 1} \end{equation}

\textbf{Passage to the limit as $n \longrightarrow + \infty$.} From the estimates \eqref{estimate-1}, \eqref{estimate 2, 1}--\eqref{estimate 2, 3}, \eqref{estimate 3, 1}, with standard arguments of weak or weak* compactness we can find functions $(w_\varepsilon, \, u_\varepsilon)$ such that, possibly taking a subsequence as $n\longrightarrow + \infty$, {\cred \begin{eqnarray}
w_n \rightharpoonup^* w_\varepsilon && \textrm{in } \ \vett{W^{1, \, \infty}}{H} \cap \vett{L^\infty}{V} \label{1conv}\\ w_n \rightharpoonup w_\varepsilon && \textrm{in }\ \vett{H^1}{V} \label{2conv}\\ u_n \rightharpoonup u_\varepsilon && \textrm{in } \ \vett{H^1}{V'} \cap \vett{L^2}{V} \label{3conv}\\ u_n \rightharpoonup^* u_\varepsilon && \textrm{in } \ \vett{L^\infty}{H} \, .\label{4conv} \end{eqnarray} Note that \eqref{2conv} implies } the strong convergence \begin{equation} w_n \longrightarrow w_\varepsilon \qquad \textrm{in } C^0\left([0, T]; \, H\right) ; \label{w C0} \end{equation} on the other hand, {\cred the generalised Ascoli theorem and the Aubin-Lions lemma (see, e.g., \cite[pp.~57--58]{Lions} and \cite[Sect.~8, Cor.~4]{Simon}) entail \begin{equation} u_n \longrightarrow u_\varepsilon \quad \textrm{ strongly in } C^0\left([0, T]; \, V' \right) \textrm{ and in } L^2(Q) ; \label{u C0} \end{equation} thus}, since $g$ and $\gamma_\varepsilon$ are Lipschitz-continuous,
we easily check that \[ g(u_n) \longrightarrow g(u_\varepsilon) \quad \textrm{ {\cred and} } \quad \gamma_\varepsilon(u_n) \longrightarrow \xi_\varepsilon \quad {\cred \textrm{ strongly in } L^2(Q),} \] {\cred where $\xi_\varepsilon = \gamma_\varepsilon(u_\varepsilon )$. We then take the limit as $n\longrightarrow + \infty$ in \eqref{eq. A n}--\eqref{in.cond}} and see that $(w_\varepsilon, \, u_\varepsilon, \, \xi_\varepsilon)$ fulfill{\cred s} equations {\cred \eqref{xi}--\eqref{initial condition}, where $\gamma $ is replaced by $\gamma_\varepsilon$. Indeed, by \eqref{w C0}--\eqref{u C0} and \eqref{data convergence}}, it is obvious that $w_\varepsilon(0) = w_0$, $u_\varepsilon(0) = u_0$. To deal with the last initial condition properly, we fix a test function $v\in V_m$, where $m\geq 1$ is arbitrary, and we integrate in time equation \eqref{eq. A n}; we get equation \eqref{new A n}, {\cred for $0 \leq t \leq T$ and} $n\geq m$. Arguing as in {\cred \cite[pp.~12--13]{Lions}}, we can take the limit in \eqref{new A n}, \eqref{eq. B n} and check that $(w_\varepsilon, \, u_\varepsilon, \, \xi_\varepsilon)$ fulfills \begin{equation} \begin{split} \dual{\partial_tw_\varepsilon(t)}{v} = - \alpha\scal{\nabla w_\varepsilon(t)}{\nabla v}_H - \beta\scal{1*\nabla w_\varepsilon(t)}{\nabla v}_H \\ - \dual{u_\varepsilon(t)}{v} + \dual{1*f(t)}{v} + \alpha\scal{\nabla w_0}{\nabla v}_H + \scal{v_0 + u_0}{v}_H \end{split} \label{eq. A int} \end{equation} \begin{equation} \dual{\partial_tu_\varepsilon(t)}{v} + \scal{\nabla u_\varepsilon(t)}{\nabla v}_H + \scal{\xi_\varepsilon(t)}{v}_H + \scal{g(u_\varepsilon)(t)}{v}_H = \scal{\partial_tw_\varepsilon(t)}{v}_H \label{eq. B int} \end{equation} for {\cred a.a. $t\in(0,T)$,} $m\geq 1$ and $v\in V_m$; by a density argument, the same equalities hold when $v\in V$. Since the right-hand side in \eqref{eq. 
A int} is a continuous function in $[0, T]$, taking $t = 0$ we find that \[ {\cred \dual{\partial_t w_\varepsilon(0)}{v}} = \scal{v_0}{v}_H \qquad \textrm{for all } v\in V \] {\cred whence the second of \eqref{initial condition} follows.}
\textbf{Fifth a priori estimate.} As a consequence of the weak lower semicontinuity of the norm in a Banach space, $(w_\varepsilon, \, u_\varepsilon, \, \xi_\varepsilon)$ satisfies the estimate \eqref{estimate-1}; we now need to improve estimates \eqref{estimate 2, 1}--\eqref{estimate 2, 3}, \eqref{estimate 3, 1}.
We first notice that, because of the Lipschitz-continuity of $\gamma_\varepsilon$, $\xi_\varepsilon(t)\in V$ for all $t$; thus, we can choose $v = \xi_\varepsilon(t)$ in equation \eqref{eq. B int} and integrate over $(0, t)$, to get \begin{equation} \begin{split} \int_{Q_t} \partial_t u_\varepsilon \,\xi_\varepsilon + \int_{Q_t} \gamma_\varepsilon'(u_\varepsilon)\abs{\nabla u_\varepsilon}^2 + \int_0^t\nh{\xi_\varepsilon(s)}ds = \int_{Q_t} g(u_\varepsilon)\,\xi_\varepsilon + \int_{Q_t} \partial_t w_\varepsilon \,\xi_\varepsilon \, . \end{split} \label{test ?} \end{equation} {\cred In view of \eqref{prope},} we have \[ \int_{Q_t} \partial_t u_\varepsilon \, \xi_\varepsilon = \int_{Q_t} \frac{\partial}{\partial t}\left(\phi_\varepsilon(u_\varepsilon)\right) = \norm{\phi_\varepsilon(u_\varepsilon(t))}_{L^1(\Omega)} - \norm{\phi_\varepsilon(u_0)}_{L^1(\Omega)} \, ; \] on the other hand, because of the Lipschitz continuity of $g$, \[ \int_{Q_t} g(u_\varepsilon)\xi_\varepsilon \leq c \int_{Q_t} \left(\abs{u_\varepsilon} + 1\right)\xi_\varepsilon \leq c\int_0^t \left(\nh{u_\varepsilon(s)} + 1\right)ds + \frac{1}{2}\int_0^t \nh{\xi_\varepsilon(s)} ds \, . \] From these estimates and \eqref{test ?}, we derive{\cred \[ \begin{split} \int_\Omega \phi_\varepsilon(u_\varepsilon)(t) + \int_{Q_t}\gamma_\varepsilon'(u_\varepsilon)\abs{\nabla u_\varepsilon}^2 + \frac{1}{2}\int_0^t\nh{\xi_\varepsilon(s)}ds \hskip1.5cm\\ {}\leq c\int_0^t\nh{u_\varepsilon(s)}ds + c\int_0^t\nh{\partial_t w_\varepsilon(s)}ds + \int_\Omega\phi_\varepsilon(u_0) + c \, . \end{split} \] } We notice that the second term in the left-hand side is nonnegative, because of the monotonicity of $\gamma_\varepsilon$. Secondly, accounting for \eqref{estimate-1}, {\cred \eqref{prope} and} \eqref{initial data}, {\cred we infer that \begin{equation} \norm{\phi_\varepsilon(u_\varepsilon)}_{\vett{L^\infty}{L^1(\Omega)}} + \norm{\gamma_\varepsilon(u_\varepsilon)}_{L^2(Q)} \leq c_\alpha \, . 
\label{estimate-4} \end{equation} } Now, by comparison in the equation \eqref{eq. B int}, we have \begin{equation} \norm{\partial_t u_\varepsilon}_{\vett{L^2}{V'}} \leq {\cred c_\alpha} \, ; \label{estimate 4, 3} \end{equation} {\cred and consequently we can also establish the estimate \eqref{estimate 3, 1}, now} for a constant which is independent of $\varepsilon$.
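In detail, the comparison argument consists in bounding $\dual{\partial_t u_\varepsilon(t)}{v}$ by the remaining terms of \eqref{eq. B int}: for every $v \in V$ with $\norm{v}_V \leq 1$ one has, for some constant $c$ depending only on $g$,

```latex
% Comparison in the second equation: every term except the first
% is controlled by quantities already estimated.
\[
\abs{\dual{\partial_t u_\varepsilon(t)}{v}}
\leq c \left( \norm{\nabla u_\varepsilon(t)}_H + \norm{\xi_\varepsilon(t)}_H
+ \norm{u_\varepsilon(t)}_H + 1 + \norm{\partial_t w_\varepsilon(t)}_H \right) ,
\]
```

and the right-hand side is bounded in $L^2(0, T)$ thanks to \eqref{estimate-1} and \eqref{estimate-4}.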
\textbf{Passage to the limit as $\varepsilon \searrow 0$.} We are able to repeat the compactness argument {\cred as above} and find $(w, \, u, \, \xi)$, {\cred a candidate for the solution to} Problem~$\left(\textbf{P}_{\alpha, \beta}\right)$, as a limit of a subsequence of $(w_\varepsilon, \, u_\varepsilon, \, \xi_\varepsilon)$. The proof will be easily completed by {\cred the} passage to the limit as $\varepsilon \searrow 0$, provided that we deduce \eqref{xi}.
By construction, we can assume that \[ \xi_\varepsilon \rightharpoonup \xi \ \textrm{ in } L^2(Q) \, , \qquad u_\varepsilon \longrightarrow u \ \textrm{ in } L^2(Q) \, , \] from which the equality \[ \lim_{\varepsilon\searrow 0}\int_Q \xi_\varepsilon u_\varepsilon = \int_Q \xi u \] follows; {\cred at this point, we apply \cite[Prop.~1.1, p.~42]{Barbu} and deduce \eqref{xi}. Thus,} the proof of the existence of a solution {\cred to} Problem~$\left(\textbf{P}_{\alpha, \beta}\right)$ is complete.
\section{Regularity and strong solutions} \label{reg1} This section is devoted to the derivation of further a priori estimates on the {\cred approximating} solutions $(w_n, \, u_n, \, \xi_n)$, which are independent of $n$ and $\varepsilon$, under stronger assumptions. The same compactness and passage-to-the-limit arguments then apply, and this will prove Theorem \ref{th: strong solution}. We first notice that the hypotheses {\cred \eqref{initial data strong} and $V_n \subseteq W$} make it possible to assume \begin{equation} {\cred w_{0, n} \longrightarrow w_0 \ \textrm{ in } W\, , \qquad v_{0, n} \longrightarrow v_0 \ \hbox{ and } \ u_{0, n} \longrightarrow u_0 \ \textrm{ in } V \, ;} \label{piuconvstr} \end{equation} on the other hand, owing to \eqref{f strong}, we can require $f_n^{(1)}\in L^2(Q)$, $f_n^{(2)}\in \vett{L^1}{V}$ for all $n\in\mathbb{N}$ and \begin{equation} f_n^{(1)} \longrightarrow f^{(1)} \ \textrm{ in } L^2(Q)\, , \qquad f_n^{(2)} \longrightarrow f^{(2)} \ \textrm{ in } \vett{L^1}{V} . \label{fpiuconvstr} \end{equation}
\textbf{Sixth a priori estimate.} We choose $v = \partial_t w_n(t)$ in the equation \eqref{eq. A n} and integrate over $(0, t)$; an application of the H\"older inequality yields \begin{equation} \begin{split} \frac{1}{2}\nh{\partial_t w_n(t)} + \alpha\int_0^t\nh{\nabla\partial_t w_n(s)}ds + \frac{\beta}{2}\nh{\nabla w_n(t)} \leq - \int_{Q_t}\partial_t u_n\partial_t w_n \\ + \int_0^t\norm{f_n(s)}_H \norm{\partial_t w_n(s)}_H ds + \frac{1}{2}\nh{v_{0, n}} + \frac{\beta}{2}\nh{\nabla w_{0, n}} \, . \end{split} \label{test 5, 1} \end{equation} Now{\cred , we take} $v = \partial_t u_n(t)$ in \eqref{eq. B n} and integrate over $(0, t)$; recalling that $\gamma_\varepsilon = \phi_\varepsilon'$, using the H\"older inequality and the Lipschitz-continuity of $g$, we get \begin{equation} \begin{split} \frac{1}{2} \int_0^t\nh{\partial_t u_n(s)}ds + \frac{1}{2}\nh{\nabla u_n(t)} + \norm{\phi_\varepsilon(u_n(t))}_{L^1(\Omega)} \\ \leq \int_{Q_t} \partial_t u_n \, \partial_t w_n + c\int_0^t \left(\nh{u_n(s)} + 1\right)ds + \norm{\phi_\varepsilon(u_{0, n})}_{L^1(\Omega)} \, . \end{split} \label{test 5, 3} \end{equation} Adding \eqref{test 5, 1} and \eqref{test 5, 3}, thanks to the assumptions \eqref{initial data}, \eqref{data convergence}, the inequality \eqref{fond. t. calculus} and $\phi_\varepsilon \leq \phi$, we finally have \[ \begin{split} \frac{1}{2}\nh{\partial_t w_n(t)} + \alpha\int_0^t\nh{\nabla\partial_t w_n(s)}ds + \frac{\beta}{2}\nh{\nabla w_n(t)} \\ + \frac{1}{2} \int_0^t\nh{\partial_t u_n(s)}ds + \frac{1}{2}\nh{\nabla u_n(t)} + \norm{\phi_\varepsilon(u_n(t))}_{L^1(\Omega)} \\ \leq c\int_0^t \left(\int_0^s\nh{\partial_t u_n(\tau)}d\tau\right)ds + \int_0^t \norm{f_n(s)}_H\norm{\partial_t w_n (s)}_H ds + c \, . \end{split} \] The generalised Gronwall lemma {\cred (see, e.g., \cite[pp. 
156--157]{Brezis})} enable{\cred s} us to achieve \begin{equation} \norm{w_n}_{\vett{W^{1, \infty}}{H}} + {\cred \sqrt{\alpha}} \norm{w_n}_{\vett{H^1}{V}} + {\cred \sqrt{\beta}} \norm{w_n}_{\vett{L^\infty}{V}} \leq c_1 \label{estimate 5, 1} \end{equation} \begin{equation} \norm{u_n}_{\vett{H^1}{H} \cap \vett{L^\infty}{V}} \leq c_2. \label{estimate 5, 2} \end{equation}
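The generalised Gronwall lemma invoked here may be recalled, in the form we believe is being used (cf. \cite[pp. 156--157]{Brezis}), as follows: if $\psi \geq 0$, $a \geq 0$ and $m \in L^1(0, T)$, $m \geq 0$, then

```latex
% Generalized (quadratic) Gronwall lemma: an L^1-in-time kernel m
% acting on psi itself, with psi^2 on the left, yields a linear bound.
\[
\frac{1}{2}\,\psi^2(t) \leq \frac{1}{2}\, a^2 + \int_0^t m(s)\, \psi(s)\, ds
\quad \textrm{for a.a. } t \in (0, T)
\qquad \Longrightarrow \qquad
\psi(t) \leq a + \int_0^t m(s)\, ds \, .
\]
```

This is precisely what allows the forcing term $\int_0^t \norm{f_n(s)}_H\norm{\partial_t w_n(s)}_H\, ds$, with $f_n$ merely in $\vett{L^1}{H}$, to be absorbed.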
\begin{remark} Only the hypotheses \eqref{alpha, beta}--\eqref{initial data} and $f\in\vett{L^1}{H}$ have been effectively exploited in the proof of this estimate. \end{remark} \begin{remark} By means of \eqref{estimate 5, 1}--\eqref{estimate 5, 2}, the estimates \eqref{estimate-4}--\eqref{estimate 4, 3} can be {\cred rewritten} in terms of some constant which is independent of $\alpha$. \end{remark}
\textbf{Seventh a priori estimate.} We take $v = -\Delta u_n(t)$ in equation \eqref{eq. B n}; this is possible, because of the special choice of the approximating space $V_n$. We integrate over $(0, t)$ and use the H\"older inequality and the Lipschitz continuity of $g$: \[ \begin{split} \frac{1}{2}\nh{\nabla u_n(t)} + \int_0^t \nh{\Delta u_n(s)} ds + \int_{Q_t}\gamma'_\varepsilon(u_n)\abs{\nabla u_n}^2 \\ = - \int_{Q_t} g'(u_n)\abs{\nabla u_n}^2 - \int_{Q_t} \partial_t w_n \Delta u_n + \frac{1}{2}\nh{\nabla u_{0, n}} \\ \leq c\norm{\nabla u_n}^2_{\vett{L^2}{H}} + \frac{1}{2}\norm{\partial_t w_n}^2_{\vett{L^2}{H}} + \frac{1}{2}\int_0^t\nh{\Delta u_n(s)} ds + \frac{1}{2}\nh{\nabla u_{0, n}} \, . \end{split} \] The monotonicity of $\gamma_\varepsilon$ ensures that the last term in the left-hand side is nonnegative. Owing to {\cred conditions \eqref{piuconvstr}} on the data and estimates \eqref{estimate 5, 1}, \eqref{estimate 5, 2}, we have \[ \frac{1}{2}\int_0^t\nh{\Delta u_n(s)} ds \leq c \qquad \textrm{for all } 0\leq t \leq T \, ; \] hence, on account of this inequality, the estimate \eqref{estimate 5, 2} and the boundary conditions for $u_n$, {\cred known} regularity results for elliptic problems entail \begin{equation} \norm{u_n}_{\vett{L^2}{W}} \leq c_3 \, , \label{estimate 6, 1} \end{equation} where ${\cred c_3}$ does not depend on $\alpha$, $\beta$.
\textbf{Eighth a priori estimate.} Since $w_n \in C^2([0, T]; \, V_n)$, the special choice of $V_n$ enable{\cred s} us to take $v = - \Delta \partial_t w_n(t)$ as a test function in the equation \eqref{eq. A n}. We integrate over $(0, t)$ and use the H\"older inequality: \begin{equation} \begin{split} \frac{1}{2}\nh{\nabla\partial_t w_n(t)} + \alpha\int_0^t\nh{\Delta\partial_t w_n(s)}ds + \frac{\beta}{2}\nh{\Delta w_n(t)} \\ {}\leq {\cred \frac{\alpha}{2}\int_0^t\nh{\Delta\partial_t w_n(s)} ds + \frac{1}{\alpha} \int_0^t \nh{\partial_t u_n(s)} ds + \frac{1}{\alpha}\int_0^t\nh{f^{(1)}_n(s)}ds }\\ {}- \int_{Q_t} f_n^{(2)}\Delta\partial_t w_n + \frac{1}{2}\nh{\nabla v_{0, n}} + \frac{\beta}{2}\nh{\Delta w_{0, n}} \, . \end{split} \label{test 7, 1} \end{equation} For the term involving $f_n^{(2)}$, we integrate by parts in space, {\cred recalling} that $\partial_n v = 0$ for all $v\in V_n$: \begin{equation} \abs{\int_{Q_t} f_n^{(2)}\Delta\partial_t w_n} = \abs{\int_{Q_t}\nabla f_n^{(2)}\cdot\nabla \partial_t w_n} \leq \int_0^t \norm{\nabla f_n^{(2)}{\cred (s)}}_H \norm{\nabla \partial_t w_n(s)}_H ds \, . \label{test 7,1bis} \end{equation} {\cred Then, in view of \eqref{piuconvstr}, \eqref{fpiuconvstr}, \eqref{estimate 5, 2} and owing to the generalized Gronwall lemma (see \cite[pp. 156--157]{Brezis}), from \eqref{test 7, 1}--\eqref{test 7,1bis} we obtain} \begin{equation} \norm{w_n}_{\vett{W^{1, \infty}}{V}} + {\cred \sqrt{\alpha}} \norm{w_n}_{\vett{H^1}{W}} + {\cred \sqrt{\beta}} \norm{w_n}_{\vett{L^\infty}{W}} \leq c_{\alpha, 4} \, . \label{estimate 7, 1} \end{equation} Finally, if we choose $v = \partial_t^2 w_n(t)$ in the equation \eqref{eq. A n}, we get \[
{\cred \nh{\partial_t^2 w_n(t)} \leq \left\{ \alpha\norm{\partial_t w_n(t)}_W + \beta\norm{w_n(t)}_W + \norm{\partial_t u_n(t)}_H + \norm{f_n(t)}_H \right\} \norm{\partial_t^2 w_n(t)}_H \, ;}
\] thanks to the estimates above, it is easy to derive \begin{equation} \norm{\partial_t^2 w_n}_{\vett{L^1}{H}} \leq {\cred c_{\alpha, 5} } \, . \label{estimate 7, 3} \end{equation}
Having established all the a priori estimates corresponding to \eqref{w strong}--\eqref{u strong} on the solutions of the approximating problem, we have completed the proof of Theorem \ref{th: strong solution}.
\section{Further regularity} \label{reg2} Throughout this section we assume \eqref{u0 strong} {\cred and} \eqref{u0 gamma} in addition to all the hypotheses we had in {\cred Section~\ref{reg1}}. As we are interested in {\cred proving} Theorem~\ref{th: regularity}, we need to derive further estimates on the solution of the approximating problem. By the stronger assumptions on the initial data, we can require \begin{equation} u_{0, n} \longrightarrow u_0 \quad \textrm{ in } W \, . \label{u0 convergence strong} \end{equation}
Consider the equation \eqref{eq. B n} and differentiate it with respect to time, obtaining \[ \begin{split} \scal{\partial_t^2 u_n(t)}{v}_H + \scal{\nabla\partial_t u_n(t)}{\nabla v}_H + \scal{\gamma'_\varepsilon(u_n(t))\partial_t u_n(t)}{v}_H \\ + \scal{g'(u_n(t))\partial_t u_n(t)}{v}_H = \scal{\partial_t^2 w_n(t)}{v}_H \end{split} \] for all $v\in V_n$ and a.a. {\cred $t\in (0,T)$}. We choose $v = \partial_t u_n(t)$ as an admissible test function, integrate over $(0, t)$ and use the Lipschitz continuity of $g$ to get \begin{equation} \begin{split} \frac{1}{2}\nh{\partial_t u_n(t)} + \int_0^t \nh{\nabla \partial_t u_n(s)} ds + \int_{Q_t} \gamma'_\varepsilon(u_n)\abs{\partial_t u_n}^2 {\cred {}\leq c\int_0^t \nh{\partial_t u_n(s)} ds} \\ {}+ \int_0^t \norm{\partial_t^2 w_n(s)}_H\norm{\partial_t u_n(s)}_H ds + \frac{1}{2} \nh{\partial_t u_n(0)} \, . \end{split} \label{test 8, 1} \end{equation} Since the last term in the left-hand side is nonnegative because of {\cred the monotonicity of $\gamma_\varepsilon$}, if we had a bound for the last term in the right-hand side, we could use the generalized Gronwall lemma to conclude. In order to provide such an estimate, we set $t = 0$, $v = \partial_t u_n(0)$ in the equation \eqref{eq. B n}; we obtain \[ \nh{\partial_t u_n(0)} \leq \left\{ \norm{\Delta u_{0, n}}_H + \norm{\gamma_\varepsilon(u_{0, n})}_H + \norm{g(u_{0, n})}_H + \norm{v_{0, n}}_H \right\} \norm{\partial_t u_n(0)}_H \] and thus, taking into account the Lipschitz {\cred continuity of $g$, we infer} \[ \begin{split} \norm{\partial_t u_n(0)}_H \leq \norm{\Delta u_{0, n}}_H + \norm{\gamma'_\varepsilon}_{L^\infty(\mathbb{R})}\norm{u_{0, n} - u_0}_H + \norm{\gamma_\varepsilon(u_0)}_H\\
+ c \left( \norm{u_{0, n}}_H + 1\right) + \norm{v_{0, n}}_H . \end{split} \] Now, assumptions \eqref{u0 convergence strong} and \eqref{data convergence}, as well as \eqref{u0 gamma} and $\abs{\gamma_\varepsilon} \leq \abs{\gamma^0}$, enable us to achieve \begin{equation} \norm{\partial_t u_n(0)}_H \leq c \label{test 8, 2} \end{equation} for all $\varepsilon > 0$ and $n$ large enough, depending on $\varepsilon$; these requirements on the parameters are not restrictive, as we first take the limit for $n \longrightarrow + \infty$, then for $\varepsilon \searrow 0$. From \eqref{test 8, 1} and \eqref{test 8, 2} we {\cred deduce that} \begin{equation} \norm{u_n}_{\vett{W^{1, \infty}}{H} \cap \vett{H^1}{V}} \leq {\cred c_{\alpha, 6}} \, . \label{estimate 8, 1} \end{equation}
Finally, we consider equation \eqref{eq. B n} and we rewrite it in the form \[ \scal{\nabla u_n(t)}{\nabla v}_H + \scal{\gamma_\varepsilon(u_n(t))}{v}_H = \scal{F_n(t)}{v} \, , \] for all $v\in V_n$ and a.a. {\cred $t\in (0,T)$, where $F_n = \partial_t w_n - \partial_t u_n - g(u_n) $}. Testing the previous equation with $v = -\Delta u_n(t)$ and integrating by parts in space, we obtain \[ \nh{\Delta u_n(t)} + \int_\Omega \gamma'_\varepsilon(u_n(t))\abs{\nabla u_n(t)}^2 \leq \norm{F_n(t)}_H \norm{\Delta u_n(t)}_H \qquad \textrm{for all } 0\leq t \leq T \, . \] Since the estimates \eqref{estimate 5, 1} and \eqref{estimate 8, 1} entail \[ \norm{F_n}_{\vett{L^\infty}{H}} \leq c \, , \] we can apply the regularity results for elliptic problems and deduce \begin{equation} \norm{u_n}_{\vett{L^\infty}{W}} \leq {\cred c_{\alpha, 7}} \, , \label{estimate 8, 2} \end{equation} thus concluding the proof of Theorem \ref{th: regularity}.
\section{$L^\infty$ estimates} \label{reg3} The aim of this section is to obtain $L^\infty$ estimates on $w_t$ and on $\xi$, under the hypotheses \eqref{f stronger} and \eqref{gamma strong}.
We first deal with $w_t$. Setting $\varphi = \alpha w_t + \beta w$, Theorem \ref{th: strong solution} entails that the equalities \[ \frac{1}{\alpha}\varphi_t - {\cred \Delta\varphi } = \frac{\beta}{\alpha} w_t - u_t + f \quad \textrm{ in } Q \, , \qquad \partial_n \varphi = 0 \quad \textrm{ on }\Gamma \times (0, T) \] hold almost everywhere. Furthermore, the assumption \eqref{f stronger}, the estimates \eqref{estimate 5, 1} and {\cred \eqref{estimate 8, 1}} and the continuous embedding $V \hookrightarrow L^6(\Omega)$ {\cred (valid if $\Omega\subseteq\mathbb{R}^3$ is a bounded Lipschitz domain)}, yield \[ \frac{\beta}{\alpha} w_t - u_t + f \in \vett{L^\infty}{H} + \vett{L^r}{L^6(\Omega)} \, , \qquad \textrm{with } r > 4/3 \, . \] Under these conditions, {\cred Theorem~7.1 in \cite[p.~181]{LSU}} applies and {\cred ensures that} $\varphi\in L^\infty(Q)$. Since we already know {\cred that $w\in L^\infty(Q)$ (as it is implied, for example, by \eqref{estimate 7, 1})}, we have $w_t\in L^\infty(Q)$ and \begin{equation} \norm{w_t}_{L^\infty(Q)} \leq \frac{1}{\alpha}\norm{\varphi}_{L^\infty(Q)} + \frac{c\beta}{\alpha} \norm{w}_{\vett{{\cred L^\infty}}{W}} \leq c_{\alpha, 8} \, . \label{estimate 9, 1} \end{equation} We notice that, for fixed $\alpha$ and $\beta$ varying in a bounded set, an upper bound for the constant $c_{\alpha, 8}$ can be found.
In order to prove an $L^\infty$ estimate for $\xi$, we consider the solution $(w_\varepsilon, \, u_\varepsilon)$ to the approximating problem, in which the Yosida regularization appears; we then fix $p\in(1, \, +\infty)$ and get a bound for $\norm{\gamma_\varepsilon(u_\varepsilon)}_{L^p(Q)}$, which is independent of $p$ and $\varepsilon$. From this, we will obtain a uniform bound for \[ \norm{\xi_\varepsilon}_{L^\infty(Q)} = \lim_{p\rightarrow+\infty} \norm{\gamma_\varepsilon(u_\varepsilon)}_{L^p(Q)} \, , \] and, via a weak* compactness argument, $\xi\in L^\infty(Q)$. For the sake of simplicity, from now on we drop the subscript $\varepsilon$ from the solution.
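The limit relation used here is the standard fact that, on the finite-measure set $Q$, the $L^p$ norms recover the $L^\infty$ norm; in sketch form:

```latex
% On |Q| < infinity: the L^p norms approach the L^infty norm,
% so a p-independent L^p bound yields an L^infty bound.
\[
\norm{v}_{L^p(Q)} \leq \abs{Q}^{1/p}\, \norm{v}_{L^\infty(Q)} \, ,
\qquad
\norm{v}_{L^\infty(Q)} \leq \liminf_{p \rightarrow +\infty} \norm{v}_{L^p(Q)} \, ;
\]
```

hence a bound for $\norm{\gamma_\varepsilon(u_\varepsilon)}_{L^p(Q)}$ which is uniform in $p$ and $\varepsilon$ yields the desired $L^\infty$ bound.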
We know that the equalities \begin{equation} u_t - \Delta u + \gamma_\varepsilon(u) + g(u) = w_t \quad \textrm{in }Q \, , \label{u_vareps} \end{equation} \[ \partial_n u = 0 \quad \textrm{on }\Gamma\times (0, T) \, , \qquad u(0) = u_0 \quad \textrm{in } \Omega \] hold a.e.; we choose $\abs{\gamma_\varepsilon(u)}^{p - 1}\gamma_\varepsilon(u)$ as a test function, by which we multiply both sides of the equation \eqref{u_vareps} -- this is admissible since $u\in L^\infty(Q)$. Integrating over $Q$, we get \begin{equation} \begin{split} \int_Q \frac{\partial}{\partial t}\phi_{\varepsilon, \, p}(u) + \int_Q \nabla u \cdot \nabla\left(\abs{\gamma_\varepsilon(u)}^{p - 1}\gamma_\varepsilon(u)\right) + \int_Q \abs{\gamma_\varepsilon(u)}^{p + 1} \\ = \int_Q \left(w_t - g(u)\right)\abs{\gamma_\varepsilon(u)}^{p - 1}\gamma_\varepsilon(u) \, , \end{split} \label{test 9, 1} \end{equation} where we have set \[ \phi_{\varepsilon, \, p}(r) = \int_0^r \abs{\gamma_\varepsilon(s)}^{p - 1}\gamma_\varepsilon(s)\, ds \qquad \textrm{for all } r\in\mathbb{R} \, ; \] $\gamma_\varepsilon$ is increasing and $\gamma_\varepsilon(0) = 0$, so we have $\phi_{\varepsilon, \, p}\geq 0$ for all $\varepsilon$, $p$. Since $w_t, \, u\in L^\infty(Q)$ and $g$ is continuous, for the right-hand side we have \[ \abs{\int_Q \left(w_t - g(u)\right)\abs{\gamma_\varepsilon(u)}^{p - 1}\gamma_\varepsilon(u)} \leq c_\alpha \norm{\gamma_\varepsilon(u)}^p_{L^p(Q)} \, ; \] on the other hand, a direct calculation and the monotonicity of $\gamma_\varepsilon$ show that \[ \nabla u \cdot \nabla\left(\abs{\gamma_\varepsilon(u)}^{p - 1}\gamma_\varepsilon(u)\right) = p\gamma'_\varepsilon(u)\abs{\gamma_\varepsilon(u)}^{p - 1}\abs{\nabla u}^2 \geq 0 \qquad \textrm{a.e. in } Q \, . 
\] Collecting all the information we have {\cred obtained} so far, from \eqref{test 9, 1} we derive \begin{equation} \int_\Omega \phi_{\varepsilon, \, p}(u(T)) + \norm{\gamma_\varepsilon(u)}^{p + 1}_{L^{p+1}(Q)} \leq c_\alpha \norm{\gamma_\varepsilon(u)}^p_{L^p(Q)} + \int_\Omega \phi_{\varepsilon, \, p}(u_0) \label{test 9, 2} \end{equation} and, since the first term can be ignored, we need only to find an estimate for the last term. We recall that, for the Yosida approximation of a maximal monotone graph, the inequality \[ \abs{\gamma_\varepsilon(s)} \leq \abs{\gamma^0(s)} \qquad \textrm{for all } s\in D(\gamma)\, , \quad \varepsilon > 0 \] holds {\cred (see, e.g., \cite[Prop.~2.6, p.~28]{Brezis})}; according to that, we have \[ \begin{split} \int_\Omega \phi_{\varepsilon, \, p}(u_0) \leq \int_\Omega \abs{\gamma^0(u_0)}^p \abs{u_0} \leq \frac{p}{p + 1}\int_\Omega \abs{\gamma^0(u_0)}^{p + 1} + \frac{1}{p + 1}\int_\Omega\abs{u_0}^{p + 1} \\ \leq \frac{p}{p + 1}\int_\Omega \abs{\gamma^0(u_0)}^{p + 1} + \frac{1}{p + 1}\norm{u_0}_{L^{p + 1}(\Omega)}^{p + 1}\, , \end{split} \] where the H\"older and Young inequalities {\cred have been used. We} recall that $u_0\in L^\infty(\Omega)$ by the assumption \eqref{u stronger} {\cred and also} notice that the same inequalities imply \[ c_\alpha\norm{\gamma_\varepsilon(u)}^p_{L^p(Q)} \leq \frac{p}{p + 1}\norm{\gamma_\varepsilon(u)}^{p + 1}_{L^{p+1}(Q)} + \frac{c_\alpha}{p + 1} \, . 
\] Now, we come back to the equation \eqref{test 9, 2}; according to the previous estimates, we {\cred infer that} \[ \frac{1}{p + 1}\norm{\gamma_\varepsilon(u)}^{p + 1}_{L^{p+1}(Q)} \leq \frac{p}{p + 1}\norm{\gamma^0(u_0)}^{p + 1}_{L^{p+1}(\Omega)} + \frac{1}{p + 1}\norm{u_0}_{L^{p + 1}(\Omega)}^{p + 1} + \frac{c_\alpha}{p + 1} \] and, hence, \[ \begin{split} \norm{\gamma_\varepsilon(u)}_{L^{p+1}(Q)} \leq \left\{p \norm{\gamma^0(u_0)}^{p + 1}_{L^{p+1}(\Omega)} + \norm{u_0}_{L^{p + 1}(\Omega)}^{p + 1} + c_\alpha \right\}^{1/(p + 1)} \\ \leq c_\alpha\left\{ \norm{\gamma^0(u_0)}_{L^\infty(\Omega)} + \norm{u_0}_{L^\infty(\Omega)} + 1\right\} \, , \end{split} \] which provides the desired estimate and concludes the proof.
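The Young inequality used twice above, written for the conjugate exponents $r = (p + 1)/p$ and $r' = p + 1$, reads:

```latex
% Young's inequality with exponents r = (p+1)/p and r' = p+1:
% 1/r + 1/r' = p/(p+1) + 1/(p+1) = 1.
\[
ab \leq \frac{a^r}{r} + \frac{b^{r'}}{r'}
= \frac{p}{p + 1}\, a^{(p + 1)/p} + \frac{1}{p + 1}\, b^{p + 1}
\qquad \textrm{for all } a, b \geq 0 \, ;
\]
```

choosing $a = \abs{\gamma_\varepsilon(u)}^p$, so that $a^{(p + 1)/p} = \abs{\gamma_\varepsilon(u)}^{p + 1}$, produces exactly the coefficients $p/(p + 1)$ and $1/(p + 1)$ appearing in the estimates above.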
\section{Well-posedness of $\left(\textbf{P}_{\alpha}\right)$ {\cred and} convergence as $\beta\searrow 0$} \label{beta=0} Now we set the notation as in {\cred Section} \ref{sec: Pa}, since we are interested in the proof of Theorems~\ref{th: well-posedness Pa}--\ref{th: second estimate error}. We assume that the hypotheses \eqref{alpha, beta}--\eqref{initial data} are satisfied, and we start by studying the convergence as $\beta\searrow 0$, by a compactness argument.
\textbf{Convergence as $\beta\searrow 0$.} {\cred We recall the a priori estimates \eqref{estimate 3, 1}, \eqref{estimate-1}, \eqref{estimate-4}, \eqref{estimate 4, 3} which are independent of $\beta$ and thus hold also for $(w_\beta, \, u_\beta, \, \xi_\beta)$. Moreover, adopting the notation of \eqref{f splittata}--\eqref{f convergence}, by a comparison in \eqref{eq. A weak} we find out that $\{ \partial_t^2 w_\beta - f^{(2)}_\beta \} $ is uniformly bounded in $\vett{L^2}{V'}$. Therefore, we can find a subsequence $\beta_k\searrow 0$ and functions $w, \, u, \, \xi$ such that \[ w_{\beta_k} \rightharpoonup^* w \ \hbox{ in } \ \vett{W^{1, \, \infty}}{H} \, , \qquad w_{\beta_k} \rightharpoonup w \ \hbox{ in } \ \vett{H^1}{V} \] \[ \partial_t^2 w_{\beta_k} - f^{(2)}_{\beta_k} \ \rightharpoonup \ w_{tt} - f^{(2)} \ \hbox{ in } \ \vett{L^2}{V'} \] \[ u_{\beta_k} \ \hbox{ tends to } \ u \ \hbox{ weakly in } \, \vett{H^1}{V'} \cap \vett{L^2}{V}, \, \hbox{ whence strongly in } \, L^2(Q), \] \[ \xi_{\beta_k} \rightharpoonup \xi \ \hbox{ in } \ L^2(Q) \] as $k\longrightarrow +\infty$, and here part of \eqref{f convergence beta} has been used. Then, in view of \eqref{g}, \eqref{f convergence beta} and \eqref{data convergence beta}, we can pass to the limit in \eqref{eq. A weak} and \eqref{eq. B weak}, as well as in the initial conditions \eqref{initial condition} which can be recovered weakly in $V'$ at least. On the other hand, $u\in D(\gamma)$ and $\xi\in\gamma(u)$ a.e. in $Q$ follow as a consequence of the above convergences and \cite[Lemma~1.3, p.~42]{Barbu}.}
\textbf{Uniqueness for $\left(\textbf{P}_{\alpha}\right)$.} By applying the previous result with $f_\beta = f$, $w_{0, \beta} = w_0$, $v_{0, \beta} = v_0$ and $u_{0, \beta} = u_0$ given, we obtain the existence of a solution to Problem~$\left(\textbf{P}_{\alpha}\right)$; we still have to prove the uniqueness. Let $(w_1, \, u_1, \, \xi_1)$ and $(w_2, \, u_2, \, \xi_2)$ be solutions of $\left(\textbf{P}_{\alpha}\right)$; we write down the equations for the differences $w = w_1 - w_2$, $u = u_1 - u_2$, $\xi = \xi_1 - \xi_2$ and integrate the first one with respect to time: \[ \scal{w_{t}(t)}{v}_H + \alpha\scal{\nabla w(t)}{\nabla v}_H + \scal{u(t)}{v}_H = 0\, , \] \[ \dual{u_t(t)}{v} + \scal{\nabla u(t)}{\nabla v}_H + \scal{\xi(t)}{v}_H + \scal{g(u_1)(t) - g(u_2)(t)}{v}_H = \scal{w_t(t)}{v}_H \, , \] {\cred to be complemented with null initial conditions as in \eqref{3-2bis}.} We set $v = w_t(t)$ in the first equation and $v = u(t)$ in the second one, integrate over $(0, t)$ and add the two equations; it is straightforward to obtain \[ \int_0^t \nh{w_t(s)} ds + \frac{\alpha}{2}\nh{\nabla w(t)} + \frac{1}{2}\nh{u(t)} + \int_0^t \nh{\nabla u(s)} ds \leq c\int_0^t \nh{u(s)} ds \, . \] According to the Gronwall lemma {\cred and owing to $w(0)=0$, it turns out that $w = u = 0$} a.e. in $Q$ and, by comparison in the second equation, $\xi = 0$ a.e. in $Q$.
\textbf{Error equations.} Because of the uniqueness, the whole family $\left\{(w_\beta, \, u_\beta, \, \xi_\beta)\right\}_{\beta > 0}$ {\cred converges}, as $\beta\searrow 0$, to the solution $(w, \, u, \, \xi)$ of Problem~$\left(\textbf{P}_{\alpha}\right)$. So, it makes sense to study the speed of this convergence. To this end, we set $\widehat{w}_\beta = w_\beta - w$, $\widehat{u}_\beta = u_\beta - u$, $\widehat{\xi}_\beta = \xi_\beta - \xi$ and consider the problem obtained for these variables, by subtracting side by side the equations of Problems~$\left(\textbf{P}_{\alpha, \beta}\right)$ and~$\left(\textbf{P}_{\alpha}\right)$. For all $v\in V$ and a.a. {\cred $t\in (0,T)$, the equalities} \begin{equation} \begin{split} \dual{\partial_t^2\widehat{w}_\beta(t)}{v} + \alpha\scal{\nabla \partial_t\widehat{w}_\beta(t)}{\nabla v}_H + \beta\scal{\nabla w_\beta(t)}{\nabla v}_H + \dual{\partial_t \widehat{u}_\beta(t)}{v} \\ = \dual{\widehat{f}_\beta(t)}{v} \end{split} \label{eq A err} \end{equation} \begin{equation} \begin{split} \dual{\partial_t\widehat{u}_\beta(t)}{v} + \scal{\nabla \widehat{u}_\beta(t)}{\nabla v}_H + \scal{\widehat{\xi}_\beta(t)}{v}_H + \scal{g(u_\beta)(t) - g(u)(t)}{v}_H \\ = \scal{\partial_t\widehat{w}_\beta(t)}{v}_H \end{split} \label{eq B err} \end{equation}
are satisfied, as well as the {\cred initial conditions} \[ \widehat{w}_\beta(0) = \widehat{w}_{0, \beta} \, , \qquad \partial_t\widehat{w}_\beta(0) = \widehat{v}_{0, \beta} \, , \qquad \widehat{u}_\beta(0) = \widehat{u}_{0, \beta} \, , \] where {\cred $ \widehat{f}_\beta = f_\beta - f = \widehat{f}_\beta^{(1)} + \widehat{f}_\beta^{(2)},$ \[ \begin{split} \widehat{f}_\beta^{(1)}= f_\beta^{(1)} - f^{(1)} \ \longrightarrow \ 0 \ \hbox{ in } \, L^2 (0,T; V') \\ \widehat{f}_\beta^{(2)}= f_\beta^{(2)} - f^{(2)} \ \longrightarrow \ 0 \ \hbox{ in } \, L^1 (0,T; H) \end{split} \] (cf.~\eqref{f rate a}),} $\widehat{w}_{0, \beta} := w_{0, \beta} - w_0$, $\widehat{v}_{0, \beta} := v_{0, \beta} - v_0$, and $\widehat{u}_{0, \beta} := u_{0, \beta} - u_0$.
\textbf{First estimate for the convergence error.} Now, we want to show Theorem~\ref{th: first estimate error}, so we assume all the needed hypotheses. Choose $v = \widehat{u}_\beta(t)$ in the equation \eqref{eq B err} and integrate over $(0, t)$; by the monotonicity of $\gamma$ and the Lipschitz-continuity of $g$, we easily derive \begin{equation} \frac{1}{2}\nh{\widehat{u}_\beta(t)} + \int_0^t \nh{\nabla \widehat{u}_\beta(s)} ds \leq {\cred \frac{1}{2}\nh{\widehat{u}_{0, \beta}} + c \int_0^t \nh{\widehat{u}_\beta(s)} ds + \int_{Q_t} \widehat{u}_\beta\,\partial_t\widehat{w}_\beta \, }. \label{test err 1, 1} \end{equation} We integrate with respect to time the equation \eqref{eq A err}:{\cred \[ \begin{split} \scal{\partial_t \widehat{w}_\beta(t)}{v}_H + \alpha \scal{\nabla \widehat{w}_\beta (t)}{\nabla v}_H + \beta \scal{1*\nabla w_\beta(t)}{\nabla v}_H + \scal{\widehat{u}_\beta(t)}{v}_H \\ = \langle 1*\widehat{f}_\beta(t), v \rangle + \scal{\widehat{v}_{0, \beta} + \widehat{u}_{0, \beta}}{v}_H + \alpha\scal{\nabla\widehat{w}_{0, \beta}}{\nabla v}_H \, . \end{split} \] We set $v = \partial_t \widehat{w}_\beta$ and integrate over $(0, t)$; keeping only the first two terms in the left-hand side, we obtain \begin{equation} \begin{split} \int_0^t \nh{\partial_t\widehat{w}_\beta(s)} ds + \frac{\alpha}{2}\nh{\nabla\widehat{w}_\beta(t)} \leq \frac{\alpha}{2}\nh{\nabla\widehat{w}_{0, \beta}} \\ - \beta \scal{1*\nabla w_\beta(t)}{\nabla \widehat{w}_\beta (t)}_H + \beta\int_0^t\!\! \scal{\nabla w_\beta(s)}{\nabla \widehat{w}_\beta(s)}_H ds -\int_{Q_t} \widehat{u}_\beta\,\partial_t\widehat{w}_\beta \\ + \int_0^t\!\! \left\langle 1*\widehat{f}_\beta^{(1)}(s) + \widehat{v}_{0, \beta}, \partial_t \widehat{w}_\beta (s)\right\rangle ds + \int_{Q_t}\!\!\left(1* \widehat{f}_\beta^{(2)}+ \widehat{u}_{0, \beta}\right)\partial_t \widehat{w}_\beta + \cblu{\alpha\int_{Q_t} \nabla\widehat{w}_{0, \beta} \nabla\partial_t\widehat{w}_\beta} \, . 
\end{split} \label{test err 1, 2} \end{equation} Due to the Young and H\"older inequalities and the boundedness of $\{w_\beta\}$ in $L^2(0,T;V)$, we have that \begin{equation} \begin{split} - \beta \scal{1*\nabla w_\beta(t)}{\nabla \widehat{w}_\beta (t)}_H \leq \frac{c}{\alpha} \beta^2 \int_0^t\!\! \nh{\nabla w_\beta (s)} ds + {\cblu \frac{\alpha}{12}}\nh{\nabla\widehat{w}_\beta(t)} \\ \leq c\beta^2 + {\cblu \frac{\alpha}{12}}\nh{\nabla\widehat{w}_\beta(t)} \end{split} \label{nuova1} \end{equation} and \begin{equation} \beta\int_0^t\!\! \scal{\nabla w_\beta(s)}{\nabla \widehat{w}_\beta(s)}_H ds \leq c\beta^2 + \alpha \int_0^t\!\! \nh{\nabla\widehat{w}_\beta(s)}ds \, , \label{nuova2} \end{equation} {\cblu \begin{equation} \alpha \int_{Q_t}\nabla\widehat{w}_{0, \beta} \nabla \partial_t\widehat{w}_\beta \leq \frac{\alpha}{12}\nh{\nabla\widehat{w}_\beta(t)} + c\alpha\nh{\nabla\widehat{w}_{0, \beta}} \, . \label{nuova5} \end{equation} } On the other hand, arguing as in the estimate of the term $T_4(t) $ of \eqref{test 1, 1} we deduce that \begin{equation} \begin{split} \int_0^t\!\! \left\langle 1*\widehat{f}_\beta^{(1)}(s) + \widehat{v}_{0, \beta}, \partial_t \widehat{w}_\beta (s)\right\rangle ds \\ = \left\langle 1*\widehat{f}_\beta^{(1)}(t) + \widehat{v}_{0, \beta}, \widehat{w}_\beta (t )\right\rangle - \int_0^t\!\! \left\langle \widehat{f}_\beta^{(1)}(s), \widehat{w}_\beta (s)\right\rangle ds \\ \leq c \left( \int_0^t\norm{\widehat{f}_\beta^{(1)}(s)}_{V'}^2 ds + \Vert\widehat{v}_{0, \beta} \Vert_{V'}^2 + \Vert\widehat{w}_{0, \beta} \Vert_{H}^2 \right) + \frac14 \int_0^t\!\! \nh{\partial_t\widehat{w}_\beta(s)} ds \\ + {\cblu\frac{\alpha}{12}}\nh{\nabla\widehat{w}_\beta(t)} + c \int_0^t\!\! \left( \int_0^s \nh{\partial_t\widehat{w}_\beta(\tau)} d\tau \right) ds + c \, \alpha \int_0^t\!\! 
\nh{\nabla\widehat{w}_\beta(s)}ds \end{split} \label{nuova3} \end{equation} Finally, we observe that \begin{equation} \int_{Q_t}\!\!\left(1* \widehat{f}_\beta^{(2)}+ \widehat{u}_{0, \beta}\right)\partial_t \widehat{w}_\beta \leq c \left( \norm{\widehat{f}_\beta^{(2)} }^2_{L^1(0,T;H)} + \Vert\widehat{u}_{0, \beta} \Vert_{H}^2 \right) + \frac14 \int_0^t\!\! \nh{\partial_t\widehat{w}_\beta(s)} ds \\ \label{nuova4} \end{equation} Now we add \eqref{test err 1, 1} and \eqref{test err 1, 2}; collecting also all the estimates in \eqref{nuova1}--\eqref{nuova4}, we find out that \[ \begin{split} \frac{1}{2}\nh{\widehat{u}_\beta(t)} + \int_0^t \nh{\nabla \widehat{u}_\beta(s)} ds + \frac{1}{2}\int_0^t \nh{\partial_t\widehat{w}_\beta(s)} ds + \frac{\alpha}{4}\nh{\nabla\widehat{w}_\beta(t)} \\ \leq c\beta^2 + c \left( \norm{ \widehat{f}_\beta^{(1)} }_{L^2(0,T;V')}^2 + \norm{\widehat{f}_\beta^{(2)} }^2_{L^1(0,T;H)} + \Vert\widehat{u}_{0, \beta} \Vert_{H}^2 + \Vert\widehat{v}_{0, \beta} \Vert_{V'}^2
+ \Vert\widehat{w}_{0, \beta} \Vert_{V}^2 \right) \\ + c\int_0^t \nh{\widehat{u}_\beta(s)} ds + c \int_0^t\!\! \left( \int_0^s \nh{\partial_t\widehat{w}_\beta(\tau)} d\tau \right) ds + c \, \alpha \int_0^t\!\! \nh{\nabla\widehat{w}_\beta(s)}ds . \end{split} \] At this point, it suffices to recall \eqref{f rate a}--\eqref{data rate a} and apply the Gronwall lemma to obtain the conclusion of Theorem \ref{th: first estimate error}. }
\textbf{Second estimate for the convergence error.} {\cred Our aim is to prove Theorem \ref{th: second estimate error}, whose hypotheses are assumed to be satisfied. Thus, we can apply Theorems~\ref{th: regularity} and~\ref{th: L^infty estimate} to get a bound \begin{equation} \norm{u_\beta}_{L^\infty(Q)} + \norm{u}_{L^\infty(Q)} + \norm{\xi_\beta}_{L^\infty(Q)} + \norm{\xi}_{L^\infty(Q)} \leq c_\alpha \label{compactness} \end{equation} where $c_\alpha$ is independent of $\beta$. Now, if $\gamma$ is a maximal monotone graph which reduces to a single-valued function in its domain, then $D(\gamma)$ is an open interval $(a, \, b)$ and, if $b < +\infty$, then $\gamma(r) \nearrow +\infty$ as $r \nearrow b$; similarly, if $a > -\infty$ then $\gamma(r) \searrow -\infty$ as $r\searrow a$. In any case, the condition \eqref{compactness} implies the existence of some compact interval $K\subseteq D(\gamma)$ such that $u_\beta(\overline Q)\subseteq K$ for all $\beta>0$ and $u(\overline Q)\subseteq K$. Since $\gamma$ is assumed to be locally Lipschitz-continuous (cf.~\eqref{gamma lipshitz}), thanks to \eqref{stimaerr1} we immediately deduce that \[ \norm{\xi_\beta - \xi}_{L^\infty(0,T; H)} \leq c \norm{u_\beta - u}_{L^\infty(0,T; H)}\leq c \beta \, . \] Moreover, by suitably modifying $g$ we can set $\widehat{\xi}_\beta \equiv 0$ in equation~\eqref{eq B err}, without loss of generality.
We start by taking $v = \partial_t\widehat{w}_\beta$ in \eqref{eq A err}, $v = \partial_t\widehat{u}_\beta$ in \eqref{eq B err}, integrating both equations over $(0, t)$ and adding side by side. Thanks to the Lipschitz-continuity of $g$ and the Young and H\"older inequalities, it is straightforward to obtain \[ \begin{split} \frac{1}{2}\nh{\partial_t\widehat{w}_\beta(t)} + \alpha \int_0^t\!\!\nh{\nabla\partial_t\widehat{w}_\beta(s)}ds + \int_0^t\!\!\nh{\partial_t\widehat{u}_\beta(s)}ds +\frac{1}{2}\nh{\nabla\widehat{u}_\beta(t)} \\ \leq \frac{\beta^2}{\alpha} \norm{w_\beta }_{L^2(0,T;V)}^2 + \frac{\alpha}{4}\int_0^t\nh{\nabla \partial_t\widehat{w}_\beta(s)}ds + c\int_0^t\nh{\widehat{u}_\beta(s)}ds + \frac12\int_0^t\nh{\partial_t\widehat{u}_\beta(s)}ds \\ + \frac1\alpha \norm{\widehat{f}_\beta^{(1)}}^2_{L^2(0,T;V')} + \frac{\alpha}4 \norm{\partial_t\widehat{w}_\beta}^2_{L^2(0,T;H)} + \frac{\alpha}{4}\int_0^t\!\!\nh{\nabla\partial_t\widehat{w}_\beta(s)}ds\\ + \int_0^t\!\!\norm{\widehat{f}_\beta^{(2)}(s)}_H \norm{\partial_t\widehat{w}_\beta(s)}_H ds + \frac{1}{2}\nh{\widehat{v}_{0, \beta}} + \frac{1}{2}\nh{\nabla\widehat{u}_{0, \beta}} \, . \end{split} \] Taking into account conditions \eqref{f rate a}, \eqref{data rate strong} and the previous estimate \eqref{stimaerr1}, we easily have \[ \begin{split} \frac{1}{2}\nh{\partial_t\widehat{w}_\beta(t)} + \frac{\alpha}2
\int_0^t\nh{\nabla\partial_t\widehat{w}_\beta(s)}ds + \frac{1}{2} \int_0^t\nh{\partial_t\widehat{u}_\beta(s)}ds +\frac{1}{2}\nh{\nabla\widehat{u}_\beta(t)} \\ \leq c\, \beta^2 + \int_0^t\norm{\widehat{f}_\beta^{(2)} (s)}_H\norm{\partial_t\widehat{w}_\beta(s)}_H ds \end{split} \] whence, by \eqref{f rate a} and a generalised Gronwall lemma {\cred (cf., e.g., \cite[Lemme~A5, p.~157]{Brezis})}, we infer that \begin{equation} \norm{\widehat{w}_\beta}_{ \vett{W^{1, \, \infty}}{H} \cap \vett{H^1}{V}} + \norm{\widehat{u}_\beta}_{ \vett{H^1}{H} \cap \vett{L^\infty}{V}} \leq c\, \beta \label{estimate err 2} \end{equation} where the constant $c$ obviously depends on $\alpha$.
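For the reader's convenience, we recall the form of the generalised Gronwall lemma being invoked here (cf.~\cite[Lemme~A5, p.~157]{Brezis}): if $\phi : [0,T] \to [0, +\infty)$ is continuous, $a \geq 0$, $m \in L^1(0,T)$ with $m \geq 0$ a.e., and
\[
\frac{1}{2}\, \phi^2(t) \leq \frac{1}{2}\, a^2 + \int_0^t m(s)\, \phi(s)\, ds \quad \hbox{for all } \, t \in [0,T] \, ,
\]
then
\[
\phi(t) \leq a + \int_0^t m(s)\, ds \quad \hbox{for all } \, t \in [0,T] \, .
\]
Here it is applied with $\phi = \norm{\partial_t\widehat{w}_\beta}_H$ and $m = \norm{\widehat{f}_\beta^{(2)}}_H$, whose $L^1(0,T)$ norm is of order $\beta$ owing to \eqref{f rate a}.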
Next, observe that the assumptions on the data are strong enough to guarantee that \eqref{eq A err} and \eqref{eq B err} can be reformulated as \begin{equation} \partial_t^2 \widehat{w}_\beta - \alpha \Delta \partial_t\widehat{w}_\beta = \beta\Delta w_\beta - \partial_t \widehat{u}_\beta + \widehat{f}_\beta \quad \hbox{ a.e. in } \, Q \label{equazA} \end{equation} \begin{equation} \partial_t\widehat{u}_\beta - \Delta \widehat{u}_\beta + g(u_\beta ) - g(u) = \partial_t \widehat{w}_\beta \quad \hbox{ a.e. in } \, Q \label{equazB} \end{equation} along with the homogeneous Neumann boundary conditions for both $\widehat{w}_\beta$ and $\widehat{u}_\beta$.
In view of \eqref{estimate err 2}, by a comparison of terms in \eqref{equazB} it is standard to deduce that $\norm{\Delta \widehat{u}_\beta}_{L^2(0,T;H)} \leq c\, \beta$ and consequently, owing to elliptic regularity estimates, we obtain
\begin{equation} \norm{\widehat{u}_\beta}_{\vett{L^2}{W}} \leq c_\alpha\beta \, . \label{estimate err 2, 3} \end{equation} At this point, let us emphasize that for the proof of \eqref{estimate err 2} and \eqref{estimate err 2, 3} we have just used the control \eqref{f rate a} on the difference $\widehat{f}_\beta$.
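In detail, the comparison in \eqref{equazB} leading to the bound on $\Delta\widehat{u}_\beta$ reads
\[
\norm{\Delta \widehat{u}_\beta}_{L^2(0,T;H)} \leq \norm{\partial_t\widehat{u}_\beta}_{L^2(0,T;H)} + \norm{g(u_\beta) - g(u)}_{L^2(0,T;H)} + \norm{\partial_t\widehat{w}_\beta}_{L^2(0,T;H)} \leq c\, \beta \, ,
\]
where each term on the right-hand side is estimated through \eqref{estimate err 2}, using the Lipschitz-continuity of $g$ for the middle one.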
We now pay attention to the equation \eqref{equazA} and multiply both sides by $-\Delta\partial_t\widehat{w}_\beta $, which belongs to $L^2(Q)$ (cf.~\eqref{w strong}), and integrate, also by parts, over $Q_t$. By means of the H\"older and Young inequalities, we infer that \[ \begin{split} \frac{1}{2}\nh{\nabla\partial_t\widehat{w}_\beta(t)} + \alpha \int_0^t\nh{\Delta\partial_t\widehat{w}_\beta(s)}ds \leq \frac{1}{2}\nh{\nabla\widehat{v}_{0, \beta}} + \frac{\beta^2}{\alpha} \int_0^t\nh{\Delta w_\beta(s)}ds\\ + \frac{2}{\alpha}\int_0^t\nh{\partial_t\widehat{u}_\beta(s)}ds + \frac{2}{\alpha} \norm{\widehat{f}_\beta^{(1)}}^2_{L^2(0,T;H)} + \frac{\alpha}{2}\int_0^t\nh{\Delta\partial_t\widehat{w}_\beta(s)}ds \\ + \int_0^t \norm{\nabla \widehat{f}_\beta^{(2)} (s) }_H \norm{\nabla\partial_t\widehat{w}_\beta(s)}_H ds . \end{split} \] Hence, recalling the uniform boundedness of $\{w_\beta\}$ in $L^2(0,T; W)$, we use \eqref{data rate strong}, \eqref{estimate err 2}, \eqref{f rate b} and apply the generalised Gronwall lemma as before to obtain \[ \nh{\nabla\partial_t\widehat{w}_\beta(t)} + \int_0^t\nh{\Delta\partial_t\widehat{w}_\beta(s)}ds \leq c\, \beta^2. \] Now, by virtue of \eqref{fond. t. calculus} and \eqref{data rate strong} we also infer \[ \norm{\Delta \widehat{w}_\beta (t)}_H \leq c\, \beta \quad \hbox{ for all } \, t\in [0,T] . \] Then, standard elliptic regularity properties and the previous estimates \eqref{estimate err 2} and \eqref{estimate err 2, 3} lead us to \eqref{stimaerr2}, thus completing the proof of Theorem~\ref{th: second estimate error}.
}
\end{document} |
\begin{document}
\iffalse
\Panzahl {3} \Pautor {Martin Fuchs}
\Panschrift {Saarland University \\ Department of Mathematics \\
P.O. Box 15 11 50 \\ 66041 Saarbr\"ucken \\
Germany} \Pepost {fuchs@math.uni-sb.de}
\Ptitel {On the local boundedness of generalized minimizers of variational problems with linear growth}
\Pjahr {2017} \Pnummer {390}
\Pdatum {\today}
\Pcoautor {Jan Müller} \Pcoanschrift {Saarland University \\ Department of Mathematics \\
P.O. Box 15 11 50 \\ 66041 Saarbr\"ucken \\
Germany} \Pcoepost {jmueller@math.uni-sb.de}
\qPautor {Xiao Zhong} \qPanschrift { FI-00014 University of Helsinki \\ Department of Mathematics\\
P.O. Box 68 (Gustaf Hällströmin katu 2b) \\ 00100 Helsinki} \qPepost {xiao.x.zhong@helsinki.fi}
\Ptitelseite
\fi
\parindent2ex
\title{On the local boundedness of generalized minimizers of variational problems with linear growth}
\noindent \\ \textbf{AMS classification}: 49N60, 49Q20, 49J45 \noindent \\ \textbf{Keywords}: variational problems of linear growth, TV-regularization, denoising and inpainting of images, local boundedness of solutions.
\begin{abstract} We prove local boundedness of generalized solutions to a large class of variational problems of linear growth including boundary value problems of minimal surface type and models from image analysis related to the procedure of TV--regularization occurring in connection with the denoising of images, which might even be coupled with an inpainting process. Our main argument relies on a Moser--type iteration procedure. \end{abstract} \blfootnote{ \begin{large}\Letter\end{large}Michael Bildhauer (bibi@math.uni-sb.de), \\ Martin Fuchs (fuchs@math.uni-sb.de),\\
Jan Müller (corresponding author) (jmueller@math.uni-sb.de),\\ Saarland University (Department of Mathematics), P.O. Box 15 11 50, 66041 Saarbrücken, Germany,\\ Xiao Zhong (xiao.x.zhong@helsinki.fi),\\
FI-00014 University of Helsinki (Department of Mathematics), P.O. Box 68 (Gustaf Hällströmin katu 2b), 00100 Helsinki (Finland). }
\section{Introduction} In this note we investigate variational problems of linear growth defined for functions $u : \Omega \to \mathbb{R}^N$ on a domain $\Omega \subset \mathbb{R}^{n}$. The general framework of this kind of problem is explained e.g. in the monographs \cite{Giu,GMS1,GMS2,AFP,Bi}, where the reader interested in the subject will find many further references as well as the definitions of the underlying spaces such as $ \mbox{BV} (\Omega,\mathbb{R}^N)$ and $W^{1,p} (\Omega,\mathbb{R}^N)$ (and their local variants), consisting of all functions having finite total variation and of the mappings with first order distributional derivatives located in the Lebesgue class $L^p (\Omega,\mathbb{R}^N)$, respectively. We will mainly concentrate on the case $n \ge 2$, assuming that $\Omega$ is a bounded Lipschitz region; the case $n = 1$ can be included as well, but it is accessible by much simpler means, as outlined for example in \cite{BGH} and \cite{FMT}. To begin with, we consider the minimization problem \begin{equation} \label{G1} J [w] := \int_\Omega F (\nabla w) \, \mathrm{d}x \to \min \ \mbox{in} \ u_0 + \weenull (\Omega,\mathbb{R}^N) \end{equation} with boundary datum \begin{equation} \label{G2} u_0 \in W^{1,1} (\Omega,\mathbb{R}^N)\, , \end{equation} where $ \weenull (\Omega,\mathbb{R}^N)$ is the class of all functions from the Sobolev space $W^{1,1} (\Omega,\mathbb{R}^N)$ having vanishing trace (see, e.g., \cite{Ad}). Throughout this note we will assume that the energy density $F : \mathbb{R}^{N\times n} \to [0, \infty)$ satisfies the following hypotheses: \begin{gather} \label{G3} F \in C^2 (\mathbb{R}^{N\times n})\text{ and (w.l.o.g.) } \ F (0) = 0;\\
\label{G5} \nu_1 |P| - \nu_2 \le F (P) \le \nu_3|P| + \nu_4;\\
\label{growth} 0\leq D^2F(P)(Q,Q)\leq \nu_5\frac{1}{1+|P|}|Q|^2, \end{gather} with suitable constants $\nu_1, \nu_3, \nu_5 > 0, \ \nu_2, \nu_4 \ge 0$ and for all $P,Q\in\mathbb{R}^{N\times n}$. For notational simplicity, we collect the constants $\nu_i$ in a tuple \[ \nu:=\big(\nu_1,...,\nu_5\big). \] \begin{remark}\label{rm1.1}
We note that the above assumptions on $F$ particularly imply \begin{equation} \label{G4}
|DF (P)| \le c(n)\cdot\max\{\nu_1,\nu_3\}, \end{equation} which is a consequence of the linear growth condition \gr{G5} combined with the fact that $F$ is a convex function, which follows from the first inequality in \gr{growth}. A short proof of estimate \gr{G4} is given in \cite{Da}, Lemma 2.2 on p. 156. Moreover, the convexity of $F$ together with \gr{G5} also yields \[
0 = F (0) \ge F (P) - P:D F (P) \ge \nu_1 |P| - \nu_2 - P : DF (P), \] hence \begin{equation} \label{G6}
DF (P) : P \ge \nu_1 |P| - \nu_2\, , \ P \in \mathbb{R}^{N\times n}. \end{equation} \end{remark} As a matter of fact, problem (\ref{G1}) has to be replaced by its relaxed variant (see, e.g., \cite{AFP}, p. 303 and Theorem 5.47 on p. 304, or \cite{Bi}, chapter 4, as well as \cite{Bi2}) \begin{eqnarray} \label{G7}
\widetilde{J} [w] &: =& \int_\Omega F \left(\nabla^a w\right) \, \mathrm{d}x + \int_\Omega F^\infty \left(\frac{\nabla^s w}{|\nabla^s w|}\right) \mathrm{d} \left|\nabla^s w\right| \\[2ex] &&+ \int_{\partial \Omega} F^\infty \left(\left(u_0 - w\right) \otimes\mathfrak{n}\right) \mathrm{d} \mathcal{H}^{n - 1} \to \min \mbox{ in } BV(\Omega,\mathbb{R}^N). \nonumber \end{eqnarray} Here $\nabla w = \nabla^a w \, {\cal{L}}^n + \nabla^s w$ is the Lebesgue decomposition of the measure $\nabla w$, $F^\infty$ is the recession function of $F$, i.e. \[ F^\infty (P) = \lim_{t \to \infty} \frac{1}{t} \,F (tP), \ P \in \mathbb{R}^{N\times n}, \] $\mathcal{H}^{n - 1}$ is Hausdorff's measure of dimension $n - 1$, and $\mathfrak{n}$ denotes the outward unit normal to $\partial \Omega$. By construction, problem (\ref{G7}) admits at least one solution, and the main result of \cite{Bi2} (compare Theorem 3 in this reference) states: \begin{theorem} Let (\ref{G2}) - (\ref{growth}) hold together with $n = 2$ and $N=1$. Assume in addition that $F$ is of class $C^2$ satisfying for some $\mu > 1$ the condition of $\mu$-ellipticity \begin{equation} \label{G8}
\nu_6\left(1 + |P|\right)^{- \mu} |Q|^2 \le D^2 F (P) (Q, Q), \ P, Q \in \mathbb{R}^2\, , \end{equation} with a constant $\nu_6 > 0$. \begin{enumerate} \item[a)] Assume $\mu \le 3$ in (\ref{G8}). Then (\ref{G7}) admits a solution $u^\ast$ in the space $W^{1,1} (\Omega)$. For each subdomain $\Omega^\ast \subset \subset \Omega$ we have \[
\int_{\Omega^\ast} \left|\nabla u^\ast \right| \ln (1 + |\nabla u^\ast|) \, \mathrm{d}x < \infty\, , \] and any BV-solution $u$ of (\ref{G7}) differs from $u ^\ast$ by an additive constant. \item[b)] If the case $\mu < 3$ is considered, then $u^\ast$ from a) is actually of class $C^{1,\alpha} (\Omega)$ for any $\alpha \in (0, 1)$. \end{enumerate} \end{theorem} \begin{remark}\label{rm1.2} The above results extend to vector valued functions $u : \Omega \to \mathbb{R}^{N}, N \ge 2$, provided we impose the structure condition \begin{equation} \label{G9}
F (P) = \widetilde{F} \big(|P|\big) \end{equation} for a suitable function $\widetilde{F} : [0, \infty) \to [0, \infty)$ of class $C^2$ which satisfies appropriate requirements implying \gr{G3}-\gr{growth} for $F$. For details we refer to the appendix. \end{remark} \begin{remark} The main feature of Theorem 1.1 is that the ellipticity condition (\ref{G8}) together with an upper bound on the parameter $\mu$ is sufficient for obtaining a minimizer in a Sobolev class or even in a space of smooth functions. At the same time, the counterexample in section 4.4 of \cite{Bi} shows the sharpness of the limitation $\mu \le 3$. \end{remark} \noindent Our first result concerns the situation where we drop condition (\ref{G8}) or allow values $\mu > 3$ even without restriction on the dimension $n$. \begin{theorem} Under the assumptions (\ref{G2}) - (\ref{growth}) the variational problem (\ref{G7}) has a solution $u \in BV(\Omega,\mathbb{R}^N)$, which in addition is a locally bounded function, i.e. $u\in BV(\Omega,\mathbb{R}^N)\cap L^\infty_\mathrm{loc}(\Omega,\mathbb{R}^N)$. \end{theorem}
\begin{remark} Note that we merely impose (\ref{G2}) on the boundary data. If we assume $u_0 \in L^\infty (\Omega,\mathbb{R}^N)$, then any solution $u$ of (\ref{G7}) is in the space $L^\infty (\Omega,\mathbb{R}^N)$, which follows from the results in \cite{BF6}. \end{remark} \noindent Next, we look at a variational problem originating in the work of Rudin, Osher and Fatemi \cite{ROF} on the denoising of images. To be precise, we assume that $n=2$, $N=1$ and consider a measurable subset (``the inpainting region'') $D$ of $\Omega\subset\mathbb{R}^2$ such that \begin{equation} \label{G10} 0 \le {\cal {L}}^2 (D) < {\cal{L}}^2 (\Omega)\, , \end{equation} where ${\cal{L}}^2 (D) = 0$ corresponds to the case of ``pure denoising''. Moreover, we consider given (noisy) data $f : \Omega - D \to \mathbb{R}$ such that \begin{equation} \label{G11} f \in L^{2} (\Omega- D) \end{equation} and pass to the problem \begin{equation} \label{G12}
K [w] := \int_\Omega F (\nabla w) \, \mathrm{d}x + \lambda \int_{\Omega - D} |w - f|^2 \, \mathrm{d}x \to \min \ \mbox{in} \ W^{1,1} (\Omega) \, , \end{equation} where $\lambda > 0$ is some parameter and $F$ satisfies (\ref{G3}) - (\ref{growth}). The problem (\ref{G12}) can be regarded as a model for the inpainting of images combined with simultaneous denoising. The relaxed version of (\ref{G12}) reads as \begin{eqnarray} \label{G13}
\widetilde{K} [w] &: =& \int_\Omega F \left(\nabla^a w\right) \, \mathrm{d}x + \int_\Omega F^\infty \left(\frac{\nabla^s w}{|\nabla^s w|}\right) \mathrm{d} \left|\nabla^s w\right| \\[2ex]
&&+ \lambda \int_{\Omega - D} |w - f|^2 \, \mathrm{d}x \to \min \ \mbox{in} \ \mbox{BV} (\Omega) \, , \nonumber \end{eqnarray} and concerning the regularity of solutions of (\ref{G13}) we obtained in \cite{BFT}, Theorem 2: \begin{theorem} Consider a density $F$ as in Theorem 1.1 for which (\ref{G8}) holds with $\mu < 2$. Moreover, we replace (\ref{G11}) with the stronger condition $f \in L^\infty (\Omega - D)$. Then the problem (\ref{G13}) (and thereby (\ref{G12})) admits a unique solution $u$ for which we have interior $C^{1, \alpha}$-regularity on the domain $\Omega$. \end{theorem} \begin{remark} The result of Theorem 1.3 extends to domains $\Omega$ in $\mathbb{R}^{n}$ with $n \ge 3$, where we might even include the vector case of functions $u : \Omega \to \mathbb{R}^{N}$, provided we have (\ref{G9}) in case $N>1$. We refer to \cite{Ti}. The reader should also note that boundedness of the data $f$ implies the boundedness of solutions to (\ref{G13}) (see, e.g., \cite{BF2}). \end{remark} \begin{remark}
In the paper \cite{BFMT} the reader will find some intermediate regularity results for solutions $u$ of (\ref{G13}) saying that even without the assumption $f \in L^\infty (\Omega - D)$ the solution $u$ belongs to some Sobolev class. With respect to these results we can even replace the ``data term'' $ \int_{\Omega- D} |u - f|^2 \, \mathrm{d}x$ by more general expressions (with appropriate variants of (\ref{G11})), however, in any case $\mu$-ellipticity (\ref{G8}) together with an upper bound on $\mu$ is required. \end{remark} \begin{remark} The counterexamples from \cite{FMT} show that for $\mu > 2$ we can not in general hope for the solvability of problem (\ref{G12}), which means that for these examples any solution $u$ of (\ref{G13}) belongs to $\mbox{BV} (\Omega) - W^{1,1} (\Omega)$. \end{remark} In the spirit of Theorem 1.2 we have the following weak regularity result for problem (\ref{G13}). \begin{theorem} Let (\ref{G3}) - (\ref{growth}) hold, let $D$ satisfy (\ref{G10}), suppose that $n=2$, $N=1$ and consider data $f$ with (\ref{G11}). Then problem (\ref{G13}) admits a solution $u$ in the space $\mbox{BV} (\Omega) \cap L^\infty_{\mathrm{loc}} (\Omega)$, which is unique in the case of pure denoising (i.e. $D = \emptyset$). \end{theorem} \noindent Our paper is organized as follows: in Section 2 we introduce a new type of linear regularization of the problems (\ref{G7}) and (\ref{G13}) by means of $\mu$-elliptic functionals including results on the regularity and the convergence properties of the family of approximate solutions $u_\delta$. In Section 3 we then derive local uniform bounds of the type \begin{equation} \label{G14}
\sup_{\delta > 0} \|u_\delta\|_{L^\infty (\Omega^\ast,\mathbb{R}^N)} \le c (\Omega^\ast) < \infty \end{equation} for subdomains $\Omega^\ast \subset \subset \Omega$ by a Moser-type iteration procedure, which yields the result of Theorem 1.2 by passing to the limit $\delta \downarrow 0$. In the last section we will deduce the statement of Theorem 1.4 from the proof of Theorem 1.2.
\section{$\mu$-elliptic regularization}
In the context of variational problems of linear growth, it is a common approach to consider a sequence of regularizing functionals whose minimizers are sufficiently smooth and converge to a solution of the actual problem. In our previous works (cf. e.g. \cite{BF1,BF2,BF3,FMT}) this was achieved by adding a Dirichlet term $\delta\int_\Omega |\nabla w|^2\, \mathrm{d}x$ for a decreasing sequence $\delta\downarrow 0$. For fixed $\delta$, we then deal with a quadratic elliptic functional and therefore have the well developed machinery for this type of problem at our disposal, as outlined e.g. in the classical monograph \cite{GT}. However, in the situation of Theorems 1.2 and 1.4, a quadratic regularization and the resulting inhomogeneity between the linear and the quadratic term cause some difficulties. We therefore prefer to work with a linear regularization, for which the notion of $\mu$-ellipticity (cf. \gr{G8}) turns out to be the correct framework in terms of existence and regularity of approximating solutions. Let us first consider the situation of Theorem 1.2, where, just for technical simplicity, we replace (\ref{G2}) by the requirement that \begin{equation} \label{H1} u_0 \in W^{1,p} (\Omega,\mathbb{R}^N)\, \end{equation} for some $p>1$. We would like to note that the limit case $p=1$ can be included via a suitable approximation (cf. \cite{BF7} and in particular the work \cite{Bi2}, where the approximation is made explicit in the two-dimensional case). We may therefore actually drop (\ref{H1}) and return to the original hypothesis (\ref{G2}). Now for $0 < \delta < 1$ let \begin{equation} \label{H2} J_\delta [w] :=\delta \int_\Omega F_\mu(\nabla w)\, \mathrm{d}x + J [w] \to \min \ \mbox{in} \ u_0 + \weenull(\Omega,\mathbb{R}^N), \end{equation} where $F_\mu:\mathbb{R}^{N\times n}\rightarrow [0,\infty)$ is chosen to satisfy \begin{gather} \label{G3'} F_\mu \in C^2 (\mathbb{R}^{N\times n})\text{ and (w.l.o.g.) } \ F_\mu (0) = 0\, ;\\
\label{G5'} \widetilde{\nu}_1 |P| - \widetilde{\nu}_2 \le F_\mu (P) \le \widetilde{\nu}_3|P| + \widetilde{\nu}_4;\\
\label{growth'} \widetilde{\nu}_5\left(1 + |P|\right)^{- \mu} |Q|^2 \leq D^2F_\mu(P)(Q,Q)\leq \widetilde{\nu}_6\frac{1}{1+|P|}|Q|^2, \end{gather} with suitable constants $\widetilde{\nu}_1, \widetilde{\nu}_3, \widetilde{\nu}_5,\widetilde{\nu}_6 > 0, \ \widetilde{\nu}_2, \widetilde{\nu}_4 \ge 0$, some $\mu\in (1,\infty)$ and for all $P,Q\in\mathbb{R}^{N\times n}$. Again we set \[
\widetilde{\nu}:=\big(\widetilde{\nu}_1,...,\widetilde{\nu}_6\big). \] We further note that the above assumptions imply \begin{align}\label{muconst}
DF_\mu(P):P\geq \widetilde{\nu}_1|P|-\widetilde{\nu}_2,\quad P\in \mathbb{R}^{N\times n}. \end{align} If the vector case $N>1$ is considered, we impose a structure condition on $F_\mu$ in the spirit of \gr{G9}, i.e. \[
F_\mu(P)=\widetilde{F}_\mu\big(|P|\big) \] for some $\mu$-elliptic function $\widetilde{F}_\mu:\mathbb{R}\to\mathbb{R}$, implying the above assumptions for $F_\mu$ (compare the appendix). A convenient choice for $F_\mu$ is e.g. given by \[
F_\mu(P)=\Phi_\mu\big(|P|\big), \] where $\Phi_\mu$ is defined by \begin{align*} \Phi_\mu(r):=\int_0^r\int_0^s(1+t)^{-\mu}\,dt\,ds,\;r\geq 0, \end{align*} which means \begin{align*} \left\{\begin{aligned}
&\Phi_\mu(r)=\frac{1}{\mu-1}r+\frac{1}{\mu-1}\frac{1}{\mu-2}(r+1)^{-\mu+2}-\frac{1}{\mu-1}\frac{1}{\mu-2},\;\mu\neq 2,\\
\,\\
&\Phi_2(r)=r-\ln(1+r),\;r\geq 0. \end{aligned}\right. \end{align*} \begin{lemma}\label{lem2.1} If we fix $1<\mu<1+\frac{2}{n}$, then we have: \begin{enumerate} \item[a)] Problem \gr{H2} admits a unique solution $u_\delta\in u_0+\weenull(\Omega,\mathbb{R}^N)$. Moreover, $u_\delta\in C^{1,\alpha}(\Omega,\mathbb{R}^N)$, though not necessarily with bounds uniform with respect to $\delta$.
\item[b)] $\displaystyle\sup_\delta \int_\Omega |\nabla u_\delta| \, \mathrm{d}x < \infty$\, ; \item[c)] $\displaystyle\int_\Omega D F_{\delta,\mu} (\nabla u_\delta) \cdot \nabla \varphi \, \mathrm{d}x = 0 $ for any $\varphi \in \weenull(\Omega,\mathbb{R}^N), \ F_{\delta,\mu} (P) :=\delta F_\mu(P)+ F (P)$\, . \item[d)] Each $L^1$-cluster point of the family $u_\delta$ is a solution of problem (\ref{G7}). \end{enumerate} \end{lemma} \begin{proof} It is easy to see that under our assumptions on $F$ the density $F_{\delta,\mu}$ is $\mu$-elliptic itself in the sense of \gr{G8}, so that we may cite the results from \cite{BF8} for part a). Parts b) and c) are clear from the fact that $u_\delta$ minimizes $J_\delta$. For part d) we observe that due to b) and the $BV$-compactness property (see Theorem 3.23 on p. 132 in \cite{AFP}), there exists a function $\overline{u}\in BV(\Omega,\mathbb{R}^N)$ such that $u_\delta\rightarrow \overline{u}$ in $L^1(\Omega)$ for some sequence $\delta\downarrow 0$. Thanks to the lower semicontinuity of the functional $\widetilde{J}$ from \gr{G7}, it follows that \[ \widetilde{J}[\overline{u}]\leq \liminf_{\delta\rightarrow 0}\widetilde{J}[u_\delta]=\liminf_{\delta\rightarrow 0}J[u_\delta]\leq \liminf_{\delta\rightarrow 0}J_\delta[u_\delta]\leq \liminf_{\delta\rightarrow 0}J_\delta[v]=J[v], \] where $v\in u_0+\weenull(\Omega,\mathbb{R}^N)$ is arbitrary. But since in \cite{BF9} it was proved that the set of $\widetilde{J}$-minimizers coincides with the set of all $L^1$-limits of $J$-minimizing sequences, the above chain of inequalities implies the claimed minimality. \end{proof}
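For completeness, let us verify the closed form of $\Phi_\mu$ given above: for $\mu \neq 2$ (recall that $\mu > 1$) the inner integral equals
\[
\int_0^s (1+t)^{-\mu}\, dt = \frac{1}{\mu-1}\Big(1 - (1+s)^{1-\mu}\Big),
\]
and a further integration gives
\[
\Phi_\mu(r) = \frac{1}{\mu-1}\, r - \frac{1}{\mu-1}\, \frac{(1+r)^{2-\mu} - 1}{2-\mu} = \frac{1}{\mu-1}\, r + \frac{1}{\mu-1}\frac{1}{\mu-2}\, (r+1)^{-\mu+2} - \frac{1}{\mu-1}\frac{1}{\mu-2}\, ,
\]
while for $\mu = 2$ the inner integral is $1 - (1+s)^{-1}$, whence $\Phi_2(r) = r - \ln(1+r)$.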
Next we consider the setting of Theorem 1.4. Keep in mind that in this situation we restrict ourselves to $n=2$ and $N=1$. Since we merely assume $f\in L^2(\Omega-D)$, we need to ``cut-off'' the data in order to obtain a sufficiently smooth approximation. This means that for $\delta\in (0,1)$ we set \[
f_\delta:\Omega-D\rightarrow\mathbb{R},\,f_\delta(x):=\left\{\begin{aligned}f(x),&\,\text{ if }\,|f(x)|\leq \delta^{-1}, \\ \delta^{-1},&\,\text{ if }\,|f(x)|> \delta^{-1}\end{aligned}\right. \] and consider the problem \begin{align}\label{Kdel} \begin{split}
K_\delta[w]:=\delta\int_\Omega F_\mu(\nabla w)\, \mathrm{d}x+ \int_\Omega F (\nabla w) \, \mathrm{d}x + \lambda \int_{\Omega - D} &|w - f_\delta|^2 \, \mathrm{d}x\\
&\to\min \text{ in }W^{1,1}(\Omega). \end{split} \end{align}
\begin{lemma} If we fix $1<\mu<2$, then we have: \begin{enumerate} \item[a)] Problem \gr{Kdel} admits a unique solution $\widetilde{u}_\delta\in W^{1,1}(\Omega,\mathbb{R}^N)$. Moreover, $\widetilde{u}_\delta\in C^{1,\alpha}(\Omega,\mathbb{R}^N)$, though not necessarily with bounds uniform with respect to $\delta$.
\item[b)] $\displaystyle\sup_\delta \int_\Omega |\nabla \widetilde{u}_\delta| \, \mathrm{d}x < \infty$,\; $\displaystyle\sup_\delta \int_{\Omega-D} |\widetilde{u}_\delta|^2 \, \mathrm{d}x < \infty$ ; \item[c)] $\displaystyle\int_\Omega D F_{\delta,\mu} (\nabla \widetilde{u}_\delta) \cdot \nabla \varphi \, \mathrm{d}x+\lambda\int_{\Omega-D} (\widetilde{u}_\delta-f_\delta)\varphi\, \mathrm{d}x = 0$ for any $\varphi \in W^{1,1}(\Omega,\mathbb{R}^N)$,
$F_{\delta,\mu} (p) :=\delta F_\mu(p)+ F (p)$. \item[d)] Each $L^1$-cluster point of the sequence $\widetilde{u}_\delta$ is a solution of problem (\ref{G13}). \end{enumerate} \end{lemma}
\begin{proof} Since $f_\delta\in L^\infty(\Omega)$ for each fixed value of $\delta$, we are in the situation of \cite{BFT}, where we remark that the density $F_{\delta,\mu}$ is $\mu$-elliptic thanks to our assumptions on $F$. We can therefore apply the results of this work, which give us the claim of part a). Parts b) and c) are once again clear from the minimality of the $\widetilde{u}_\delta$, where for the second bound in b) we have to make use of the fact that $f_\delta\to f$ in $L^2(\Omega-D)$. It thus remains to justify d). By the bounds of part b), the family $\widetilde{u}_\delta$ is bounded uniformly in $W^{1,1}(\Omega)$; hence, by the $BV$-compactness property, there exists an $L^1$-cluster point $\hat{u}\in BV(\Omega)$ along some sequence $\delta\downarrow 0$. From the lower semicontinuity of the relaxation $\widetilde{K}$ it then follows, for arbitrary $v\in W^{1,1}(\Omega)$, \begin{align*}
\widetilde{K}[\hat{u}]&\leq \liminf_{\delta\downarrow 0}\widetilde{K}[\widetilde{u}_\delta]\leq \liminf_{\delta\downarrow 0} \left[K_\delta[\widetilde{u}_\delta]+\lambda\int_{\Omega-D} \Big(|\widetilde{u}_\delta-f|^2-|\widetilde{u}_\delta-f_\delta|^2\Big)\, \mathrm{d}x\right]\\
&\leq \liminf_{\delta\downarrow 0}\left[K_\delta[v]+\lambda\int_{\Omega-D} \Big(|\widetilde{u}_\delta-f|^2-|\widetilde{u}_\delta-f_\delta|^2\Big)\, \mathrm{d}x\right]\\
&=K[v]+\liminf_{\delta\downarrow 0}\lambda\int_{\Omega-D} \Big(|\widetilde{u}_\delta-f|^2-|\widetilde{u}_\delta-f_\delta|^2\Big)\, \mathrm{d}x\\
&=K[v]+\liminf_{\delta\downarrow 0}\int_{\Omega-D} \Big(|f|^2-|f_\delta|^2+2\widetilde{u}_\delta(f_\delta-f)\Big)\, \mathrm{d}x=K[v], \end{align*} since $f_\delta\rightarrow f$ in $L^2(\Omega-D)$ and $\widetilde{u}_\delta$ is uniformly bounded in $L^2(\Omega-D)$. The claimed minimality of $\hat{u}$ now follows from the fact that any function $w\in BV(\Omega)$ can be approximated by a sequence $w_k\in C^\infty(\Omega)\cap W^{1,1}(\Omega)$ such that $\widetilde{K}[w]=\lim_{k\rightarrow\infty}\widetilde{K}[w_k]$ (cf. Lemmas 2.1 and 2.2 in \cite{FT}). \end{proof}
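The last step of the chain rests on the pointwise algebraic identity $|\widetilde{u}_\delta-f|^2-|\widetilde{u}_\delta-f_\delta|^2=|f|^2-|f_\delta|^2+2\widetilde{u}_\delta(f_\delta-f)$, which can be confirmed symbolically in the scalar case:

```python
import sympy as sp

# Symbolic check (scalar case) of the identity used in the final equality:
# (u - f)^2 - (u - f_d)^2 = f^2 - f_d^2 + 2*u*(f_d - f).
u, f, fd = sp.symbols('u f f_d', real=True)
lhs = (u - f) ** 2 - (u - fd) ** 2
rhs = f ** 2 - fd ** 2 + 2 * u * (fd - f)
identity_holds = sp.simplify(lhs - rhs) == 0
```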
\section{Proof of Theorem 1.2} We consider the general case $n\geq 2$, $N\geq 1$. Our starting point is the Euler equation from Lemma 2.1 c) \begin{align}\label{Eeq} \delta \int_\Omega DF_\mu(\nabla u_\delta):\nabla\varphi\, \mathrm{d}x+\int_\Omega DF(\nabla u_\delta):\nabla\varphi\, \mathrm{d}x=0, \end{align}
where we choose $\varphi=\eta^2|u_\delta|^su_\delta$ for some positive exponent $s$ and a function $\eta\in C^1_0(\Omega)$, $0\leq\eta\leq 1$, which is an admissible choice due to Lemma 2.1 a). We observe \[
\nabla \varphi=u_\delta\otimes\Big(2\eta |u_\delta|^s\nabla\eta+\eta^2\nabla\big(|u_\delta|^s\big)\Big)+\eta^2|u_\delta|^s\nabla u_\delta \] and therefore \begin{align}\label{3.9} \begin{split} DF&(\nabla u_\delta):\nabla \varphi\\
=&2\eta |u_\delta|^s DF(\nabla u_\delta):(u_\delta\otimes\nabla\eta)+\eta^2DF(\nabla u_\delta):\Big(u_\delta\otimes\nabla\big(|u_\delta|^s\big)\Big)\\
&\hspace{5.5cm}+\eta^2|u_\delta|^s DF(\nabla u_\delta):\nabla u_\delta=:T_1+T_2+T_3. \end{split} \end{align} Note that due to \gr{G6} we have
\[
T_3= \eta^2|u_\delta|^s DF(\nabla u_\delta):\nabla u_\delta\geq \nu_1\eta^2|\nabla u_\delta||u_\delta|^s-\nu_2\eta^2|u_\delta|^s. \] For the term $T_2$ of \gr{3.9}, we use the structure condition \gr{G9} (in case $N>1$) and get \begin{align*}
DF(\nabla u_\delta):\Big(\eta^2u_\delta\otimes\nabla\big(|u_\delta|^s\big)\Big)=\frac{\widetilde{F}'\big(|\nabla u_\delta|\big)}{|\nabla u_\delta|}\Big(\eta^2u_\delta\otimes\nabla \big(|u_\delta|^s\big)\Big):\nabla u_\delta. \end{align*} From \begin{align*}
\Big(\eta^2u_\delta\otimes\nabla \big(|u_\delta|^s\big)\Big):\nabla u_\delta&=\frac{1}{2}s|u_\delta|^{s-1}\eta^2\nabla|u_\delta|\cdot \nabla|u_\delta|^2\\
&=s|u_\delta|^{s}\eta^2\nabla|u_\delta|\cdot \nabla|u_\delta|\geq 0 \end{align*} we then obtain the estimate \begin{align*}
DF(\nabla u_\delta):\nabla \varphi\geq 2\eta |u_\delta|^s DF(\nabla u_\delta):(u_\delta\otimes\nabla\eta) +\nu_1\eta^2|\nabla u_\delta||u_\delta|^s-\nu_2\eta^2|u_\delta|^s \end{align*} and similarly (compare the definition of $F_\mu$ and recall inequality \gr{muconst}) \begin{align*}
DF_\mu(\nabla u_\delta):\nabla \varphi\geq 2\eta |u_\delta|^s DF_\mu(\nabla u_\delta):(u_\delta\otimes\nabla\eta) +\widetilde{\nu}_1\eta^2|\nabla u_\delta||u_\delta|^s-\widetilde{\nu}_2\eta^2|u_\delta|^s. \end{align*} Note that in the scalar case these inequalities are valid without condition \gr{G9}. The Euler equation (\ref{Eeq}) then implies (using the boundedness of $DF$ and $DF_\mu$, compare \gr{G4}) \begin{align}\label{3.14} \begin{split}
\int_\Omega |\nabla u_\delta|&|u_\delta|^s\eta^2\, \mathrm{d}x\leq c \left[\int_\Omega \eta^2|u_\delta|^s\, \mathrm{d}x+\int_\Omega |u_\delta|^{s+1}\eta|\nabla\eta|\, \mathrm{d}x\right] \end{split} \end{align} for some constant $c=c(\nu,\widetilde{\nu})$. In the next step we set \[
v:=|u_\delta|^{s+1}\eta^2. \] Then \[
|\nabla v|\leq (s+1)\eta^2|u_\delta|^s\big|\nabla\big(|u_\delta|\big)\big|+2|u_\delta|^{s+1}\eta|\nabla\eta|\leq c(n)(s+1)\eta^2|u_\delta|^s|\nabla u_\delta|+2|u_\delta|^{s+1}\eta|\nabla\eta|. \]
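The product rule behind this gradient bound can be sanity-checked symbolically in the one-dimensional scalar case, where $u>0$ so that $|u|=u$ (the exponent $s=3$ below is a sample value chosen for the test):

```python
import sympy as sp

# One-dimensional scalar check of the product rule for v = |u|^{s+1} eta^2:
# on a region where u > 0,  v' = (s+1) u^s u' eta^2 + 2 u^{s+1} eta eta'.
x = sp.symbols('x', real=True)
s = 3                                    # sample positive exponent (assumption)
u = sp.Function('u', positive=True)(x)   # u > 0, so |u| = u
eta = sp.Function('eta')(x)

v = u ** (s + 1) * eta ** 2
expected = (s + 1) * u ** s * sp.diff(u, x) * eta ** 2 \
    + 2 * u ** (s + 1) * eta * sp.diff(eta, x)
difference = sp.simplify(sp.diff(v, x) - expected)
```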
Furthermore, from the Sobolev-Poincar\'e inequality we have \[
\int_\Omega |\nabla v|\, \mathrm{d}x\geq c(n)\left(\int_\Omega |v|^\frac{n}{n-1}\, \mathrm{d}x\right)^\frac{n-1}{n}, \] and we can therefore estimate the left-hand side of \gr{3.14} from below by \begin{align*}
\int_\Omega |u_\delta|^s|\nabla u_\delta|\eta^2\, \mathrm{d}x\geq \frac{c(n)}{s+1}\left[\left(\int_\Omega |u_\delta|^{(s+1)\frac{n}{n-1}}\eta^\frac{2n}{n-1}\, \mathrm{d}x\right)^\frac{n-1}{n}-2\int_\Omega |u_\delta|^{s+1}\eta|\nabla\eta|\, \mathrm{d}x\right]. \end{align*} We insert this into inequality \gr{3.14} which then yields \begin{align}\label{3.15}
\left(\int_\Omega |u_\delta|^{(s+1)\frac{n}{n-1}}\eta^\frac{2n}{n-1}\, \mathrm{d}x\right)^\frac{n-1}{n}\leq c(s+1)\left[\int_\Omega |u_\delta|^s\eta^2\, \mathrm{d}x+\int_\Omega |u_\delta|^{s+1}\eta |\nabla\eta|\, \mathrm{d}x\right] \end{align} with a constant $c=c(n,\nu,\widetilde{\nu})$. Now we fix some open ball $B_{R_0}$ inside $\Omega$. For any $j\in {\mathbb{N}}_0$ we set \[ R_j:=\frac{n-1}{n}R_0+\Big(\frac{n-1}{n}\Big)^j\frac{R_0}{n} \] and consider the sequence of concentric open balls $B_j$ of radius $R_j$ inside $B_0=B_{R_0}$. Note that \[ \bigcap_{j=0}^\infty B_j\supset B_{\frac{n-1}{n}R_0}=:B_\infty. \] We further choose smooth functions $\eta_j\in C_0^\infty(B_j)$ such that $\eta_j\equiv 1$ on $B_{j+1}$, $0\leq\eta_j\leq 1$ and \[
|\nabla\eta_j|\leq \frac{2}{R_j-R_{j+1}}=c(R_0,n)\Big(\frac{n}{n-1}\Big)^j. \] Then, together with the choice $s_j:=\big(\frac{n}{n-1}\big)^j-1$, the inequality \gr{3.15} implies \begin{align}\label{3.16}
\left(\int_{B_{j+1}}|u_\delta|^{(\frac{n}{n-1})^{j+1}}\, \mathrm{d}x\right)^\frac{n-1}{n}\leq c\Big(\frac{n}{n-1}\Big)^{2j}\left[\int_{B_j}|u_\delta|^{s_j}\, \mathrm{d}x+\int_{B_j}|u_\delta|^{s_j+1}\, \mathrm{d}x\right],\quad\forall j\in{\mathbb{N}}, \end{align} with a constant $c=c(n,\nu,\widetilde{\nu},R_0)$.
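The bookkeeping behind the radii and cut-off functions can be tabulated numerically (our own check; the values $n=3$, $R_0=1$ are sample choices): the radii $R_j$ decrease to $\frac{n-1}{n}R_0$, and the gradient bound $2/(R_j-R_{j+1})$ grows exactly like $(\frac{n}{n-1})^j$ with constant $c(R_0,n)=2n^2/R_0$.

```python
import numpy as np

# Check of the iteration bookkeeping:
#   R_j = (n-1)/n * R0 + ((n-1)/n)^j * R0/n  decreases to (n-1)/n * R0,
#   2/(R_j - R_{j+1}) = (2 n^2 / R0) * (n/(n-1))^j.
n, R0 = 3, 1.0            # sample values (assumption)
q = (n - 1) / n
R = np.array([q * R0 + q ** j * R0 / n for j in range(25)])
grad_bound = 2.0 / (R[:-1] - R[1:])
ratio = grad_bound / (1.0 / q) ** np.arange(24)   # should be constant
```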
In the following, we fix the value of the parameter $\delta\in (0,1)$ and note that by Hölder's inequality we have \begin{align}\label{3.6} \begin{split}
&\int_{B_j}|u_\delta|^{s_j}\, \mathrm{d}x\leq \left(\int_{B_j}|u_\delta|^{s_j+1}\, \mathrm{d}x\right)^\frac{s_j}{s_j+1}\cdot \left(\int_{B_j}1\, \mathrm{d}x\right)^\frac{1}{s_j+1}\\
&\leq c(R_0,n)\left(\int_{B_j}|u_\delta|^{s_j+1}\, \mathrm{d}x\right)^\frac{s_j}{s_j+1}. \end{split} \end{align} Next we let \[
a_j:=\max\left\{1,\int_{B_j}|u_\delta|^{\big(\frac{n}{n-1}\big)^j}\, \mathrm{d}x\right\} \] and obtain from \gr{3.16} \begin{align*}
\big(a_{j+1}\big)^{\frac{n-1}{n}}&\leq \max\left\{1,c \Big(\frac{n}{n-1}\Big)^{2j}\bigg[\int_{B_j}|u_\delta|^{s_j}\, \mathrm{d}x+\int_{B_j}|u_\delta|^{s_j+1}\, \mathrm{d}x\bigg]\right\}\\
&\leq c \Big(\frac{n}{n-1}\Big)^{2j} \max\left\{1,\int_{B_j}|u_\delta|^{s_j}\, \mathrm{d}x+\int_{B_j}|u_\delta|^{s_j+1}\, \mathrm{d}x\right\}. \end{align*} On the right-hand side we apply inequality \gr{3.6} with the result \[
\big(a_{j+1}\big)^{\frac{n-1}{n}}\leq c \Big(\frac{n}{n-1}\Big)^{2j} \max\left\{1,\int_{B_j}|u_\delta|^{s_j+1}\, \mathrm{d}x+\bigg(\int_{B_j}|u_\delta|^{s_j+1}\, \mathrm{d}x\bigg)^{\frac{s_j}{s_j+1}}\right\} \] for a suitable positive constant $c=c(n,\nu,\widetilde{\nu},R_0)$, hence we arrive at
\begin{align}\label{3.7} \big(a_{j+1}\big)^\frac{n-1}{n}\leq c\Big(\frac{n}{n-1}\Big)^{2j}\cdot a_j\quad\forall j\in{\mathbb{N}}. \end{align} Through an iteration, we obtain from \gr{3.7} \begin{align}\label{3.8} \begin{split}
&\|u_\delta\|_{L^{s_j+1}(B_j,\mathbb{R}^N)}\\
&\leq \big(a_j\big)^{\big(\frac{n-1}{n}\big)^j}\leq c^{\;\sum\limits_{k=1}^{j-1}\big(\frac{n-1}{n}\big)^{k}}\Big(\frac{n}{n-1}\Big)^{\;\sum\limits_{k=1}^{j-1}2k\big(\frac{n-1}{n}\big)^{k}}\cdot \max\Big\{1,\|u_\delta\|_{L^{\frac{n}{n-1}}(\Omega,\mathbb{R}^N)}\Big\}, \end{split} \end{align} and since \[ \sum\limits_{k=1}^\infty \Big(\frac{n-1}{n}\Big)^k=n-1 \quad\text{as well as}\quad \sum\limits_{k=1}^\infty 2k\Big(\frac{n-1}{n}\Big)^k=2n(n-1), \] we may pass to the limit $j\rightarrow\infty$ which yields \begin{align*}
\sup_{x\in B_\infty}|u_\delta(x)|=\lim_{j\rightarrow\infty}\|u_\delta\|_{L^{s_j+1}(B_\infty,\mathbb{R}^N)}\leq c^{n-1}\Big(\frac{n}{n-1}\Big)^{2n(n-1)}\cdot \max\Big\{1,\|u_\delta\|_{L^{\frac{n}{n-1}}(\Omega,\mathbb{R}^N)}\Big\} \end{align*}
The right-hand side is bounded independently of the parameter $\delta$: by Lemma 2.1 b) the family $u_\delta$ is uniformly bounded in $W^{1,1}(\Omega,\mathbb{R}^N)$ and hence, by Sobolev's embedding, in $L^\frac{n}{n-1}(\Omega,\mathbb{R}^N)$. Consequently, $\sup_{x\in B_\infty}|u_\delta(x)|$ is bounded by a constant which does not depend on $\delta$, so the $L^1$-limit $\overline{u}$ of the $u_\delta$ is locally bounded as well. This finishes the proof of Theorem 1.2.\qed
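The two series values used in the passage to the limit can be evaluated exactly with a computer algebra system (our own check, for a few sample values of $n$):

```python
import sympy as sp

# Exact evaluation of the series used in the limit j -> infinity:
#   sum_{k>=1} ((n-1)/n)^k = n - 1,
#   sum_{k>=1} 2k ((n-1)/n)^k = 2 n (n-1).
k = sp.symbols('k', integer=True, positive=True)
checks = []
for n in (2, 3, 5):                    # sample dimensions (assumption)
    q = sp.Rational(n - 1, n)
    S1 = sp.summation(q ** k, (k, 1, sp.oo))
    S2 = sp.summation(2 * k * q ** k, (k, 1, sp.oo))
    checks.append(S1 == n - 1 and S2 == 2 * n * (n - 1))
```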
\section{Proof of Theorem 1.4} Recall that in the setting of Theorem 1.4 we restrict ourselves to the case $n=2$ and $N=1$. Let $\widetilde{u}_\delta$ denote the solution from Lemma 2.2 a) and assume henceforth that we are in the situation of Theorem 1.4. Let $x_0\in \Omega$. We choose $R_0>0$ small enough such that $B_{R_0}(x_0)\subset \Omega$ and \begin{equation} \label{r0} \int_{B_{R_0}(x_0)-D}\vert f\vert^2\, \mathrm{d}x<\varepsilon_0. \end{equation} Here $\varepsilon_0>0$ is a small parameter which will be fixed below. Let $\eta\in C^\infty_0(\Omega)$ be a non-negative cut-off function with support in $B_{R_0}(x_0)$ and let $s\ge 0$. By Lemma 2.2 a), we can use the function \[ \varphi= \vert \widetilde{u}_\delta\vert^s\widetilde{u}_\delta\eta^2\] as a test function in the Euler equation of Lemma 2.2 c), and we obtain \begin{equation}\label{start1} \delta\int_\Omega DF_\mu(\nabla\widetilde{u}_\delta)\cdot\nabla \varphi\, \mathrm{d}x+\int_\Omega DF(\nabla \widetilde{u}_\delta) \cdot\nabla \varphi\, \mathrm{d}x+\lambda\int_{\Omega-D} (\widetilde{u}_\delta-f_\delta)\varphi\, \mathrm{d}x=0. \end{equation} Note that \[ \nabla \varphi=(s+1)\vert \widetilde{u}_\delta\vert^s \eta^2\nabla \widetilde{u}_\delta +2\vert \widetilde{u}_\delta\vert^s\widetilde{u}_\delta\eta\nabla\eta.\] Thus by (\ref{G4}), (\ref{G6}) and \gr{muconst}, we have \begin{equation}\label{eq1} \begin{aligned} DF_{\delta,\mu}&(\nabla \widetilde{u}_\delta)\cdot \nabla \varphi\ge \\ & \delta (s+1)\widetilde{\nu}_1 \vert \widetilde{u}_\delta\vert^s\vert \nabla \widetilde{u}_\delta\vert\eta^2-\delta(s+1)\widetilde{\nu}_2\vert \widetilde{u}_\delta\vert^s\eta^2-
2\delta|DF_\mu|\vert \widetilde{u}_\delta\vert^{s+1}\eta\vert\nabla\eta\vert\\ &+(s+1)\nu_1 \vert \widetilde{u}_\delta\vert^s\vert \nabla \widetilde{u}_\delta\vert\eta^2-(s+1)\nu_2\vert \widetilde{u}_\delta\vert^s\eta^2-
2|DF|\vert \widetilde{u}_\delta\vert^{s+1}\eta\vert\nabla\eta\vert. \end{aligned} \end{equation} We also note that, since $|f_\delta|\leq |f|$, \begin{equation}\label{eq2} 2\lambda (\widetilde{u}_\delta-f_\delta)\widetilde{u}_\delta\vert \widetilde{u}_\delta\vert^s\eta^2\ge -2\lambda f_\delta\widetilde{u}_\delta\vert \widetilde{u}_\delta\vert^s\eta^2\ge -2\lambda \vert f\vert \vert \widetilde{u}_\delta\vert^{s+1}\eta^2. \end{equation} Now it follows from (\ref{start1}), (\ref{eq1}) and (\ref{eq2}) that \begin{equation}\label{start2} \begin{aligned} &(\delta+1)(s+1)\int_\Omega \vert \widetilde{u}_\delta\vert^s\vert\nabla \widetilde{u}_\delta\vert\eta^2\, \mathrm{d}x \\
&\le c(\delta+1)(s+1)\left[
\int_\Omega \vert \widetilde{u}_\delta\vert^s\eta^2\, \mathrm{d}x+\int_\Omega \vert \widetilde{u}_\delta\vert^{s+1}\eta\vert\nabla \eta\vert\, \mathrm{d}x\right]
+2\lambda \int_{\Omega-D} \vert f\vert \vert \widetilde{u}_\delta\vert^{s+1}\eta^2\, \mathrm{d}x \end{aligned} \end{equation} with a constant $c=c(\nu,\widetilde{\nu})$. As in the proof of Theorem 1.2, we let \[ v=\vert \widetilde{u}_\delta\vert^{s+1}\eta^2.\] Then \[ \vert \nabla v\vert \le (s+1)\vert \widetilde{u}_\delta\vert^s\vert \nabla \widetilde{u}_\delta\vert\eta^2+2\vert \widetilde{u}_\delta\vert^{s+1}\eta\vert\nabla\eta\vert\] and by the Sobolev inequality, we further have \[ c(n)\left(\int_\Omega v^2\, \mathrm{d}x\right)^{\frac{1}{2}}\le \int_\Omega \vert\nabla v\vert\, \mathrm{d}x.\] Thus (\ref{start2}) implies \begin{equation}\label{start4} \begin{aligned}
(\delta+1) \left(\int_\Omega|v|^2 \, \mathrm{d}x\right)^{\frac{1}{2}} \le c(\delta+1)(s+1)\left[
\int_\Omega \vert \widetilde{u}_\delta\vert^s\eta^2\, \mathrm{d}x+\int_\Omega \vert \widetilde{u}_\delta\vert^{s+1}\eta\vert\nabla \eta\vert\, \mathrm{d}x\right]\\
+\underset{\mbox{$=:T$}}{\underbrace{2\lambda \int_{\Omega-D} \vert f\vert \vert \widetilde{u}_\delta\vert^{s+1}\eta^2\, \mathrm{d}x}},
\end{aligned} \end{equation} with a constant $c=c(n,\nu,\widetilde{\nu})$. We will estimate the term $T$ in the following way: by the Hölder inequality and (\ref{r0}), \begin{equation*} \begin{aligned} 2\lambda \int_{\Omega-D} \vert f\vert\vert \widetilde{u}_\delta\vert^{s+1}\eta^2\, \mathrm{d}x&\le 2\lambda \left( \int_{B_{R_0}(x_0)-D} \vert f\vert^2\, \mathrm{d}x\right)^{\frac{1}{2}}\left(\int_\Omega \vert \widetilde{u}_\delta\vert^{2(s+1)}\eta^4\, \mathrm{d}x\right)^{\frac{1}{2}}\\ &\le 2\lambda \varepsilon_0^{1/2}\left(\int_\Omega \vert \widetilde{u}_\delta\vert^{2(s+1)}\eta^4\, \mathrm{d}x\right)^{\frac{1}{2}}. \end{aligned} \end{equation*} If we choose $\varepsilon_0$ small such that \[ 2\lambda \varepsilon_0^{1/2}\leq\frac{1}{2}<\frac{\delta+1}{2},\] then the term $T$ can be absorbed in the left-hand side of (\ref{start4}), and we deduce from this inequality \begin{equation}\label{start5} \begin{aligned}
\left(\int_\Omega |\widetilde{u}_\delta|^{2(s+1)}\eta^4\, \mathrm{d}x\right)^{\frac{1}{2}} \le 2c(s+1)\left[
\int_\Omega \vert \widetilde{u}_\delta\vert^s\eta^2\, \mathrm{d}x+\int_\Omega \vert \widetilde{u}_\delta\vert^{s+1}\eta\vert\nabla \eta\vert\, \mathrm{d}x\right]. \end{aligned} \end{equation} Note that this is just inequality \gr{3.15} (with $n=2$) from the preceding section, so that from this point on we can simply repeat the arguments which were used to obtain the uniform local boundedness of $u_\delta$ in the proof of Theorem 1.2. This finishes our proof. \qed
\noindent\begin{Large}\textbf{Appendix: discussion of the structure condition \gr{G9}}\end{Large}
\noindent For the interested reader we explain Remark \ref{rm1.2} in more detail. \setcounter{section}{1} \renewcommand{\thesection}{\Alph{section}} \setcounter{theorem}{1} \begin{lemma}
Consider a function $\widetilde{F}:[0,\infty)\rightarrow [0,\infty)$ of class $C^2$ satisfying (with constants $\nu_1,\nu_3,\nu_5>0$, $\nu_2,\nu_4\geq 0$) \begin{enumerate}
\item[(A1)]$\widetilde{F}(0)=0$,
\item[(A2)]$\widetilde{F}'(0)=0$,
\item[(A3)]$\nu_1t-\nu_2\leq \widetilde{F}(t)\leq \nu_3t+\nu_4$,
\item[(A4)]$\widetilde{F}''(t)\geq 0$,
\item[(A5)]$\displaystyle\widetilde{F}''(t)\leq \nu_5\frac{1}{1+t}$ \end{enumerate}
for any $t\geq 0$. Then we have \gr{G3}-\gr{growth} for the density $F(P):=\widetilde{F}\big(|P|\big)$, $P\in\mathbb{R}^{N\times n}$. If in addition for some $\nu_6>0$ and $\mu>1$ \begin{enumerate}
\item[(A6)]$\displaystyle\min\limits_{t\geq 0}\bigg\{\frac{\widetilde{F}'(t)}{t},\widetilde{F}''(t)\bigg\}\geq \nu_6 (1+t)^{-\mu}$, \end{enumerate} we obtain the condition \gr{G8} of $\mu$-ellipticity for $F$. \end{lemma} \begin{remark}
The hypotheses (A1-6) hold for the function $\widetilde{F}(t):=\Phi_\mu(t)$ defined before Lemma \ref{lem2.1}. \end{remark}
\noindent \textit{Proof of Lemma A.1}. The validity of \gr{G3} and \gr{G5} is immediate. From (A2) and (A4) we deduce that the non-negative function $\widetilde{F}'$ is increasing, and on account of (A3) it has a finite limit (recall Remark \ref{rm1.1}). Since in addition (A2) and (A5) give $\widetilde{F}'(t)\leq \nu_5 t$, we obtain \begin{align}
\tag{A7} 0\leq \widetilde{F}'(t)\leq \nu_5\frac{t}{1+t},\;t\geq 0, \end{align} provided we replace $\nu_5$ from (A5) by a larger constant if necessary. Next we observe the formula \begin{align*}
D^2F(P)(Q,Q)=\frac{1}{|P|}\widetilde{F}'\big(|P|\big)\left[|Q|^2-\frac{(P: Q)^2}{|P|^2}\right]+\widetilde{F}''\big(|P|\big)\frac{(P: Q)^2}{|P|^2},\;\;P,Q\in \mathbb{R}^{N\times n}, \end{align*} implying the estimate \begin{align}\tag{A8} \begin{split}
\min\Big\{\widetilde{F}''\big(|P|\big),\frac{1}{|P|}\widetilde{F}'\big(|P|\big)\Big\}|Q|^2\leq D^2F(P)(Q,Q)\\
\leq \max\Big\{\widetilde{F}''\big(|P|\big),\frac{1}{|P|}\widetilde{F}'\big(|P|\big)\Big\}|Q|^2,\;\;P,Q\in\mathbb{R}^{N\times n}. \end{split} \end{align} In conclusion, \gr{growth} follows from (A4), (A5), (A7) and (A8). In the same manner \gr{G8} is deduced from the additional hypothesis (A6). \qed
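The Hessian representation underlying (A8) can be sanity-checked numerically against a second order difference quotient. The density $\widetilde{F}(t)=\sqrt{1+t^2}$ used below is our own sample choice for this test (the identity itself is generic for densities of the form $F(P)=\widetilde{F}(|P|)$):

```python
import numpy as np

# Finite-difference check of the Hessian formula
#   D^2F(P)(Q,Q) = F~'(t)/t (|Q|^2 - (P:Q)^2/t^2) + F~''(t) (P:Q)^2/t^2,
# t = |P| (Frobenius norm), for the sample density F~(t) = sqrt(1 + t^2).
rng = np.random.default_rng(0)
P = rng.standard_normal((2, 3))
Q = rng.standard_normal((2, 3))

def F(M):
    return np.sqrt(1.0 + np.sum(M * M))

t = np.sqrt(np.sum(P * P))
d1 = t / np.sqrt(1.0 + t ** 2)          # F~'(t)
d2 = 1.0 / (1.0 + t ** 2) ** 1.5        # F~''(t)
PQ = np.sum(P * Q)

formula = d1 / t * (np.sum(Q * Q) - PQ ** 2 / t ** 2) + d2 * PQ ** 2 / t ** 2
eps = 1e-4
fd = (F(P + eps * Q) - 2.0 * F(P) + F(P - eps * Q)) / eps ** 2
```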
\begin{tabular}{l l} Michael Bildhauer (bibi@math.uni-sb.de) & Xiao Zhong (xiao.x.zhong@helsinki.fi) \\ Martin Fuchs (fuchs@math.uni-sb.de) & FI-00014 University of Helsinki \\ Jan Müller (jmueller@math.uni-sb.de) & Department of Mathematics\\ Saarland University & P.O. Box 68 (Gustaf Hällströmin katu 2b) \\ Department of Mathematics & 00100 Helsinki\\ P.O. Box 15 11 50 & Finland\\ 66041 Saarbrücken \\ Germany
\end{tabular}
\end{document} |
\begin{document}
\title{A Second Order Method for Solving 3D Elasticity Equations with Complex Interfaces}
\author{Bao Wang$^{1}$, Kelin Xia$^{1}$ and Guo-Wei Wei$^{1,2,3}$ \footnote{ Corresponding author. Email: wei@math.msu.edu} \\ \\ \small \it $^1$Department of Mathematics, Michigan State University, East Lansing, MI 48824, USA \\ \small \it $^2$Department of Electrical and Computer Engineering, \\ \small \it Michigan State University, East Lansing, MI 48824, USA \\ \small \it $^3$Center for Mathematical Molecular Biosciences, \\ \small \it Michigan State University, East Lansing, MI 48824, USA }
\date{\today}
\maketitle
\begin{abstract} Elastic materials are ubiquitous in nature and indispensable components in man-made devices and equipment. When a device or a piece of equipment involves composite or multiple elastic materials, elasticity interface problems come into play. The solution of three dimensional (3D) elasticity interface problems is significantly more difficult than that of their elliptic counterparts due to the coupled vector components and cross derivatives in the governing elasticity equation. This work introduces the matched interface and boundary (MIB) method for solving 3D elasticity interface problems. The proposed MIB method utilizes fictitious values on irregular grid points near the material interface to replace function values in the discretization, so that the elasticity equation can be discretized using standard finite difference schemes as if there were no material interface. The interface jump conditions are rigorously enforced on the intersecting points between the interface and the mesh lines; this enforcement determines the fictitious values. A number of new techniques are developed to construct efficient MIB schemes for dealing with cross derivatives in the coupled governing equations. The proposed method is extensively validated for both weak and strong discontinuities of the solution, both piecewise constant and position-dependent material parameters, both smooth and nonsmooth interface geometries, and both small and large contrasts in the Poisson's ratio and shear modulus across the interface. Numerical experiments indicate that the present MIB method is of second order convergence in both $L_\infty$ and $L_2$ error norms. \end{abstract}
{\it Keywords:}~ Elasticity Interface Problem; Complex interface; Matched interface and boundary.
{\setcounter{tocdepth}{4} \tableofcontents}
\section{Introduction}
Although materials, such as solids, are composed of atoms or molecules, which are discrete in nature, continuum models based on continuum mechanics are highly accurate and applicable to length scales much greater than inter-atomic distances \cite{ZhanChen:2010a}. One of the most widely applied continuum models is elasticity theory, which describes how solid materials return to their original shapes after being deformed by applied forces. Linear elasticity theory is often employed when the deformation is relatively small. In such a case, the stress-strain relation is governed by the constitutive equation. One class of elastic materials is the isotropic homogeneous class, whose constitutive equations can be uniquely determined by any two of six moduli, namely, the bulk modulus, Young's modulus, Lam\'{e}'s first parameter, the shear modulus, Poisson's ratio and the p-wave modulus \cite{Anandarajah:2010}. For isotropic inhomogeneous materials, the inhomogeneity is often modeled by position-dependent moduli in their constitutive equations. For example, in seismic wave equations, inhomogeneity is accounted for by position-dependent Lam\'{e} parameters \cite{shearer:1999}. Similar models have also been employed in the elasticity analysis of biomolecules \cite{Wei:2009,Wei:2013,KLXia:2013d}.
Interface description in elasticity modeling is indispensable whenever elastic materials encounter rapid changes or discontinuities in material properties due to voids, pores, inclusions, dislocations, cracks or composite structures \cite{Dvorak:2013,Fries:2010,Sukumar:2001,Stolarska:2001}. The resulting problem is called an elasticity interface problem, which is of considerable importance in man-made materials, devices, equipment, tissue engineering, biomedical science and biophysics \cite{Sukumar:2001,Stolarska:2001,Wei:2009,Wei:2013,KLXia:2013d}. Mathematically, discontinuities in elasticity interface problems can be classified into two types, namely, strong ones and weak ones. Strong discontinuities refer to situations in which the displacement has jumps across the interface, while weak discontinuities have a continuous displacement but with jumps in the gradient of the displacement. In general, analytical solutions to elasticity interface problems are difficult to obtain, except for simple interface geometries. In the 1950s, Eshelby found that under a uniformly applied stress, an infinite and elastically isotropic system with an ellipsoidal inhomogeneity has a uniform eigenstrain distribution inside the ellipsoidal domain \cite{Eshelby:1956,Eshelby:1957}. For arbitrarily shaped inhomogeneities, semianalytic approaches have been proposed for finding stress tensors \cite{Mathiesen:2008}.
Numerical approaches, such as finite element methods (FEMs), boundary element methods (BEMs) and finite difference methods (FDMs), are the main workhorse for elasticity interface problems arising from practical applications. Based on the computational meshes used, these methods can be classified into two types, i.e., schemes that utilize body-fitting meshes and algorithms based on special interface treatments. Body-fitting meshes are generated according to the geometry of the interface so that no mesh lines cut through the interface. In the first type of methods, locally adaptive meshes are frequently employed based on local refinement techniques \cite{XuZL:2003}. In the second type of algorithms, regular meshes that may cut through the interface are used. Consequently, sophisticated numerical schemes are needed to incorporate the interface conditions into element shape functions or operator discretizations. The immersed interface method (IIM), originally proposed for elliptic interface problems \cite{LeVeque:1994}, has been developed to solve two-dimensional (2D) elasticity interface problems with isotropic homogeneous media \cite{YangXZ:2003}. This finite difference based approach achieves second order accuracy. A second-order sharp numerical method has been developed for linear elasticity equations \cite{Theillard:2013}.
Many finite element based methods have also been proposed for elasticity interface problems. Among them, the partition of unity method (PUM), the generalized finite element method (GFEM) and the extended finite element method (XFEM) are designed to capture the non-smooth property of the solution over the interface \cite{Sukumar:2001,Stolarska:2001,Fries:2010}. Enrichment functions are utilized to handle the material interface. Discontinuous Galerkin based methods have also been constructed to deal with strong and weak discontinuities \cite{Hansbo:2002,Becker:2009,Mergheim:2006}. Recently, the immersed finite element (IFE) method has been developed to solve elasticity problems with interface jump conditions \cite{LiZL:2005,XieH:2011}. This approach locally modifies finite element basis functions to enforce the jump conditions across the interface. Most recently, a Nitsche type method has been proposed for elasticity interface problems \cite{Michaeli:2013}.
There are a few numerical issues in the solution of elasticity interface problems. One issue is dealing with complex interface geometry. It is easy to construct a numerical method for specially designed simple interface shapes; however, it is a challenge to automatically deal with complex interface geometries. Another issue is developing robust numerical schemes for handling interfaces of Lipschitz continuity or geometric singularities, such as cusps, sharp edges, tips and self-intersecting surfaces \cite{HouSM:2012}. It is still a major challenge to develop second order accurate schemes for arbitrarily complex interface geometries in a three-dimensional (3D) setting. One example of arbitrarily complex interface geometries is protein molecular surfaces \cite{Yu:2007,Yu:2007a,Geng:2007a}. Yet another issue is position-dependent material parameters; it is necessary for numerical methods to be able to treat spatially varying coefficients. Additionally, taking care of strong discontinuities, handling large contrasts between material parameters across the interfaces and treating the Poisson's ratio near the incompressible limit are also important numerical issues in elasticity interface problems \cite{LinT:2012,LinT:2013}.
Finally, it is a challenge to develop second order accurate schemes for arbitrarily complex interface geometries in a 3D setting. Since the elasticity equation is a vector equation, the presence of three coupled displacement components places an extraordinary demand on the efficiency of numerical schemes. Although many elegant and efficient algorithms have been developed for 2D and 3D elasticity interface problems, to the best of our knowledge, there is little literature on second order convergent schemes for arbitrarily complex interface geometries in 3D, including interfaces of Lipschitz continuity.
The matched interface and boundary (MIB) method was originally constructed for dealing with material interfaces in Maxwell's equations \cite{Zhao:2004} and the Poisson equation \cite{Yu:2007,Yu:2007a,Zhou:2006c,Zhou:2006d,Geng:2007a}. The essential idea of the MIB method is to extend the solution beyond the interface so that the derivatives near the interface can be discretized as if there were no interface. The extension along the interface is carried out by iteratively incorporating the lowest order of interface conditions so that, in principle, arbitrarily high order accuracy can be achieved. Sixteenth order accuracy was achieved for simple interface geometries \cite{Zhao:2004,Zhou:2006c} and up to sixth order convergence was realized for 3D complex interface shapes \cite{Yu:2007a}. For arbitrarily complex interfaces with geometric singularities, robust second order numerical convergence was observed \cite{Yu:2007,Yu:2007a,Geng:2007a}. In the past decade, the MIB method has been successfully applied to a variety of problems. For example, in computational biophysics, an MIB based Poisson-Boltzmann solver, MIBPB \cite{DuanChen:2011a}, has been developed for the analysis of the electrostatic potential of biomolecules \cite{Yu:2007, Geng:2007a,Zhou:2008b}, molecular dynamics \cite{Geng:2011} and charge transport phenomena \cite{QZheng:2011a, QZheng:2011b}. Zhao has constructed second order and fourth order MIB schemes for the Helmholtz problems \cite{SZhao:2010a,SZhao:2008a}. A second order MIB method has been developed by Zhou and coworkers to solve the Navier-Stokes equations with discontinuous viscosity and density \cite{YCZhou:2012a}. Recently, the MIB method has been extended to solve elliptic equations with multi-material interfaces \cite{KLXia:2011}.
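The fictitious-value idea can be illustrated by a one-dimensional toy problem (a sketch of our own, not the authors' 3D scheme): for $-(\beta u')'=0$ with a single interface at $\gamma\in(x_i,x_{i+1})$, the two jump conditions $[u]=0$ and $[\beta u']=0$, discretized with one-sided linear interpolation through $(u_i,F)$ and $(G,u_{i+1})$, determine the two fictitious values $F$ (the ``$-$'' solution extended to $x_{i+1}$) and $G$ (the ``$+$'' solution extended to $x_i$). For a piecewise linear exact solution this recovers the smooth extensions exactly; all numerical values below are assumed sample data.

```python
import numpy as np

# 1D toy version of the fictitious-value construction (illustration only).
h, x_i = 0.1, 0.4
gamma = 0.443                          # interface location in (x_i, x_i + h)
theta = (gamma - x_i) / h
beta_m, beta_p = 1.0, 10.0             # sample material parameters

# exact piecewise-linear solution: u-(x) = 2x + 1, matched at the interface
a, b = 2.0, 1.0
u_gamma = a * gamma + b
u_i = a * x_i + b                                            # u-(x_i)
u_ip1 = u_gamma + (beta_m / beta_p) * a * (x_i + h - gamma)  # u+(x_{i+1})

# [u] = 0 and [beta u'] = 0  ->  2x2 system for the fictitious values (F, G)
A = np.array([[theta, theta - 1.0],
              [beta_m, beta_p]])
rhs = np.array([theta * u_ip1 - (1.0 - theta) * u_i,
                beta_m * u_i + beta_p * u_ip1])
F, G = np.linalg.solve(A, rhs)

F_exact = a * (x_i + h) + b                                 # extension of u-
G_exact = u_gamma + (beta_m / beta_p) * a * (x_i - gamma)   # extension of u+
```

With the fictitious values in hand, the standard three-point stencil can be applied at the irregular points as if there were no interface, which is the mechanism the MIB method generalizes to 3D.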
The objective of the present work is to develop MIB schemes for solving 3D elasticity interface problems. We consider both smooth and sharp interfaces for isotropic homogeneous and inhomogeneous elastic materials. First, we extend our earlier MIB method for elliptic interface problems to elasticity counterparts. To this end, we take care of both central derivatives and cross derivatives in the elasticity equation. Several numerical techniques, namely disassociation, extrapolation, and neighbor combination, are proposed to compute the fictitious values for the discretization of the cross derivatives. Additionally, to make the present MIB method efficient for dealing with three coupled vector components, we carefully optimize our algorithms so that the resulting discretization matrix is as symmetric and diagonally dominant as possible. Moreover, to handle geometric singularities, we develop a technique to simultaneously employ two sets of interface conditions from two intersecting points where the interface meets mesh lines.
Finally, we validate the proposed MIB method for a wide variety of elasticity interface problems, including large contrasts in material parameters across the interface, strong interface discontinuities, sharp-edged interfaces and variable material coefficients.
The rest of this paper is organized as follows. The formulation of 3D elasticity interface problems is presented in Section \ref{theory}. Section \ref{algorithm} is devoted to the construction of MIB algorithms for elasticity interface problems. Methods for determining fictitious values are proposed for both central derivatives and cross derivatives in the elasticity equation. The present methods are extensively validated by analytical tests with complex interface geometries, including interfaces of Lipschitz continuity in Section \ref{validation}. We demonstrate that the second order accuracy is achieved by the proposed MIB method. This paper ends with a conclusion.
\section{Formulation of the elasticity interface problem} \label{theory}
The 3D linear elasticity motion considered in the present work is governed by the following linear elasticity equations \begin{equation} \label{elas} \nabla \cdot \mathbb{T}+\mathbf{F}=\frac{d^2\mathbf{u}}{dt^2}, \end{equation} where $\mathbb{T}$ is the stress tensor, $\mathbf{F}=\left(F_1(\mathbf{x}), F_2(\mathbf{x}), F_3(\mathbf{x})\right)^T:=(F_1, F_2, F_3)^T$ is the external force on the elastic body, $\mathbf{u}=\left(u_1(\mathbf{x}), u_2(\mathbf{x}), u_3(\mathbf{x})\right)^T$ is a displacement vector, $\mathbf{x}=\left(x, y, z\right)^T$ is a position vector, and $*^T$ is the transpose of quantity $*$. \\
For isotropic homogeneous media, the stress tensor $\mathbb{T}$ is a 3 by 3 symmetric matrix which has the form \begin{equation} \mathbb{T}=\lambda \mathrm{tr}(\sigma)I+2\mu\sigma, \end{equation} where $\lambda$ is Lam\'{e}'s first parameter, $\mu$ is the shear modulus, $I$ is a 3 by 3 identity matrix, and $\sigma$ is the strain tensor, which can be further written as \begin{equation} \sigma=\frac{1}{2}\left(\nabla \mathbf{u}+(\nabla\mathbf{u})^T\right). \end{equation} The static state elasticity equation is given by \begin{equation} \label{elas_s} \nabla \cdot \mathbb{T}+\mathbf{F}=\mathbf{0}. \end{equation} In the present work, we focus on the static state problem (\ref{elas_s}).
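The constitutive map above can be sketched in a few lines (our own illustration; the Lam\'{e} constants below are arbitrary sample values). A useful sanity check is that the resulting stress is symmetric and that a pure dilation $\nabla\mathbf{u}=I$ produces the isotropic stress $(3\lambda+2\mu)I$:

```python
import numpy as np

# Isotropic constitutive law: T = lambda * tr(sigma) * I + 2 * mu * sigma,
# with the (linearized) strain sigma = (grad u + (grad u)^T) / 2.
lam, mu = 1.2, 0.8                     # sample Lame constants (assumption)

def stress(grad_u):
    sigma = 0.5 * (grad_u + grad_u.T)  # strain tensor
    return lam * np.trace(sigma) * np.eye(3) + 2.0 * mu * sigma

rng = np.random.default_rng(1)
T = stress(rng.standard_normal((3, 3)))   # generic displacement gradient
T_dilation = stress(np.eye(3))            # pure dilation: grad u = I
```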
\subsection{Interface jump conditions} Consider a two-phase elastic body having two different elastic materials in a rectangular prism domain $\Omega \subset \mathbb{R}^3$. The two phase elastic motion is separated by an arbitrarily complex interface $\Gamma$, which splits the whole domain $\Omega$ into $\Omega^+$ and $\Omega^-$, i.e., $\Omega=\Omega^+\cup\Gamma\cup\Omega^-$, as illustrated in Fig. \ref{inter_pro}.
\begin{figure}
\caption{Illustration of the elasticity interface problem at a cross section $(x=x_i)$. The whole domain consists of two subdomains $\Omega^+$ and $\Omega^-$ by the interface $\Gamma$.}
\label{inter_pro}
\end{figure}
\begin{lemma} For the 3D elasticity equations of the static state (\ref{elas_s}), if the source term $\mathbf{F}$ has a potential function representation $U$, i.e., $\nabla U=\mathbf{F}$, then the static state elasticity equations can be written as a homogeneous equation. More precisely, there exists another 3 by 3 matrix $\tilde{\mathbb{T}}$ such that $$ \nabla\cdot\tilde{\mathbb{T}}=\mathbf{0}, $$ where $\mathbf{0}$ is the 3D zero vector. \end{lemma}
\begin{definition} \textbf{Weak Solution:} $\tilde{\mathbb{T}}$ is said to be the weak solution of the homogeneous equation $\nabla\cdot\tilde{\mathbb{T}}=\mathbf{0}$ provided $$ \int_{\Omega}\nabla\phi\cdot \tilde{\mathbb{T}}d{\bf r}=0, $$ holds for all $\phi\in C^\infty_0(\Omega)$, where $C^\infty_0(\Omega)$ is the space of smooth functions with compact support on $\Omega$, and $d{\bf r}=dxdydz$ is the volume integral element. \end{definition}
\begin{theorem} Let $\mathbb{T}$ be a second order tensor in $\mathbb{R}^3$, which can be written as a $3$ by $3$ matrix, and consider the elasticity equations \begin{equation} \nabla \cdot \mathbb{T}+\mathbf{F}=\mathbf{0}, \end{equation} where $\mathbf{F}$ is a $3$-dimensional vector-valued function and $\mathbf{0}\in \mathbb{R}^3$. If the force term has a potential function $U$, i.e., $\nabla U=\mathbf{F}$, then across the interface the weak solution satisfies the following interface conditions \begin{equation} \label{inter2} [\mathbb{T}\cdot \mathbf{n}]=\mathbf{T}, \end{equation} where $\mathbf{T}$ is a $3$-dimensional vector-valued function, $[*]$ is the jump of quantity $*$ across the interface, and $\mathbf{n}$ is the normal direction of the interface. \end{theorem}
\begin{remark} If the material has no fracture, which corresponds to a weak discontinuity in the linear elasticity interface problem, the following interface condition is enforced in traditional elasticity interface problems $$ [\mathbf{u}]=\mathbf{0}. $$ However, fractures often occur in realistic materials, which corresponds to a strong discontinuity. In this work, our numerical scheme is designed for both strong and weak discontinuities of elasticity interface problems. \end{remark}
\subsection{Linear elasticity interface problem} In this work, we only consider the static state elasticity equation (\ref{elas_s}). As discussed above, the 3D elasticity interface problem can be formulated as \begin{eqnarray}
\nabla\cdot\mathbb{T}+\mathbf{F} &=& \mathbf{0}, \quad \mbox{in} \quad \Omega^+\cup\Omega^-,\\
\left[\mathbf{u}\right] \mid_\Gamma&=& \mathbf{b}, \quad \mbox{on} \quad \Gamma,\\
\left[\mathbb{T}\cdot\mathbf{n}\right]\mid_\Gamma &=& \mathbf{T}, \quad \mbox{on} \quad \Gamma,\\
\mathbf{u} &=& \mathbf{u}^0, \quad \mbox{on} \quad \partial\Omega. \end{eqnarray} Here $\mathbf{u}=(u_1, u_2, u_3)^T: \Omega\rightarrow \mathbb{R}^3$ is the displacement field and $\mathbf{n}=(n_1, n_2, n_3)^T$ is the unit outer normal vector to the interface $\Gamma$. Function $\mathbf{F}$, as stated above, is a 3D vector-valued body force field. Vector $\mathbf{u}^0=(u_1^0, u_2^0, u_3^0)^T$ prescribes the Dirichlet boundary condition. For the elasticity interface problem, if the vector $\mathbf{b}=(b_1, b_2, b_3)^T$ is nonzero, the problem is said to have a strong discontinuity; otherwise, a weak discontinuity. The vector-valued function $\mathbf{T}=(\phi, \psi, \eta)^T$ is the jump of the traction $\mathbb{T}\cdot\mathbf{n}$ across the interface $\Gamma$.
In material science, the stress-strain relation is usually described by the constitutive equation, which in terms of Lam\'{e}'s parameters can be expressed as \begin{equation} \label{strain_stress} \mathbb{T}=\lambda \mathrm{tr}(\sigma)I+2\mu\sigma. \end{equation} Here the strain tensor $\sigma$ is defined as $$ \sigma=\frac{1}{2}\left(\nabla \mathbf{u}+(\nabla \mathbf{u})^T\right). $$
Dramatically different elasticity behaviors can be observed between inhomogeneous and homogeneous media. To take this property into consideration, we elaborate the elasticity interface problem in both situations. For inhomogeneous material, Lam\'{e}'s parameters are position dependent, i.e., $\lambda=\lambda(x,y,z)$ and $\mu=\mu(x,y,z)$. Using the constitutive equation in Eq. (\ref{strain_stress}), the governing equation of the elasticity interface problem can be expressed as \begin{eqnarray} \label{el11} \nabla\lambda(\nabla \cdot \mathbf{u})+\nabla\mu \cdot \left[\nabla\mathbf{u}+(\nabla\mathbf{u})^T \right] +(\lambda+\mu)\nabla\nabla \cdot \mathbf{u} +\mu \nabla^2\mathbf{u}=-\mathbf{F}. \end{eqnarray} We can spell out all the terms as follows, \begin{equation} \label{el11a} \resizebox{0.92\hsize}{!}{$ (\lambda+2\mu)\frac{\partial^2 u_1}{\partial x^2}+\mu\frac{\partial^2 u_1}{\partial y^2}+\mu\frac{\partial^2 u_1}{\partial z^2}+(\lambda+\mu)\frac{\partial^2 u_2}{\partial x\partial y}+(\lambda+\mu)\frac{\partial^2 u_3}{\partial x\partial z}+ \lambda_x\left(\frac{\partial u_1}{\partial x}+\frac{\partial u_2}{\partial y}+\frac{\partial u_3}{\partial z}\right)+2\mu_x\frac{\partial u_1}{\partial x}+\mu_y\left(\frac{\partial u_1}{\partial y}+\frac{\partial u_2}{\partial x}\right)+\mu_z\left(\frac{\partial u_1}{\partial z}+\frac{\partial u_3}{\partial x}\right)=-F_1, $ } \end{equation} \begin{eqnarray} \label{el12} \resizebox{0.92\hsize}{!}{$ \mu\frac{\partial^2 u_2}{\partial x^2}+(\lambda+2\mu)\frac{\partial^2 u_2}{\partial y^2}+\mu\frac{\partial^2 u_2}{\partial z^2}+(\lambda+\mu)\frac{\partial^2 u_1}{\partial x\partial y}+(\lambda+\mu)\frac{\partial^2 u_3}{\partial y\partial z}+ \mu_x\left(\frac{\partial u_2}{\partial x}+\frac{\partial u_1}{\partial y}\right)+\lambda_y\left(\frac{\partial u_1}{\partial x}+\frac{\partial u_2}{\partial y}+\frac{\partial u_3}{\partial z}\right)+2\mu_y\frac{\partial u_2}{\partial y}+\mu_z\left(\frac{\partial u_2}{\partial z}+\frac{\partial u_3}{\partial y}\right)=-F_2, $ } \end{eqnarray} \begin{eqnarray} \label{el13} \resizebox{0.92\hsize}{!}{$ \mu\frac{\partial^2 u_3}{\partial x^2}+\mu\frac{\partial^2 u_3}{\partial y^2}+(\lambda+2\mu)\frac{\partial^2 u_3}{\partial z^2}+(\lambda+\mu)\frac{\partial^2 u_1}{\partial x\partial z}+(\lambda+\mu)\frac{\partial^2u_2}{\partial y\partial z}+ \mu_x\left(\frac{\partial u_3}{\partial x}+\frac{\partial u_1}{\partial z}\right)+\mu_y\left(\frac{\partial u_3}{\partial y}+\frac{\partial u_2}{\partial z}\right)+\lambda_z\left(\frac{\partial u_1}{\partial x}+\frac{\partial u_2}{\partial y}+\frac{\partial u_3}{\partial z}\right)+ 2\mu_z\frac{\partial u_3}{\partial z}=-F_3. $ } \end{eqnarray} With the constitutive equations, the jump conditions for the stress tensor can be represented as \begin{eqnarray} \label{el17} \left[\left(\lambda\left(\frac{\partial u_1}{\partial x}+\frac{\partial u_2}{\partial y}+\frac{\partial u_3}{\partial z}\right)+2\mu \frac{\partial u_1}{\partial x} \right)n_1+\mu\left(\frac{\partial u_2}{\partial x}+\frac{\partial u_1}{\partial y}\right)n_2+\mu\left(\frac{\partial u_3}{\partial x}+\frac{\partial u_1}{\partial z}\right)n_3\right]\mid_\Gamma=\phi, \ \mbox{on}\ \Gamma, \\ \left[\mu\left(\frac{\partial u_1}{\partial y}+\frac{\partial u_2}{\partial x}\right)n_1+\left(\lambda\left(\frac{\partial u_1}{\partial x}+\frac{\partial u_2}{\partial y}+\frac{\partial u_3}{\partial z}\right)+2\mu\frac{\partial u_2}{\partial y} \right)n_2+\mu\left(\frac{\partial u_3}{\partial y}+\frac{\partial u_2}{\partial z}\right)n_3\right]\mid_\Gamma=\psi, \ \mbox{on}\ \Gamma, \\ \left[\mu\left(\frac{\partial u_1}{\partial z}+\frac{\partial u_3}{\partial x}\right)n_1+\mu\left(\frac{\partial u_2}{\partial z}+\frac{\partial u_3}{\partial y}\right)n_2+\left(\lambda\left(\frac{\partial u_1}{\partial x}+\frac{\partial u_2}{\partial y}+\frac{\partial u_3}{\partial z}\right)+2\mu\frac{\partial u_3}{\partial z} \right)n_3\right]\mid_\Gamma=\eta, \ \mbox{on}\ \Gamma.
\end{eqnarray} Together with the Dirichlet boundary conditions and the displacement jump conditions, we obtain the general formulation of the linear elasticity interface problem in inhomogeneous media.
For homogeneous material, algebraic relations exist between the different elasticity moduli, i.e., Bulk modulus $K$, Young's modulus $E$, Lam\'{e}'s first parameter $\lambda$, Shear modulus $\mu$, Poisson's ratio $\nu$ and P-wave modulus $M$. For instance, Lam\'{e}'s parameters can be represented by Young's modulus $E$ and Poisson's ratio $\nu$ as $$ \mu=\frac{E}{2(1+\nu)},\qquad \lambda=\frac{E\nu}{(1+\nu)(1-2\nu)}. $$ Due to the constant moduli, the governing equation can be simplified as \begin{eqnarray} \label{el11_simplify} (\lambda+\mu)\nabla\nabla \cdot \mathbf{u} +\mu \nabla^2\mathbf{u}=-\mathbf{F}. \end{eqnarray}
With the above algebraic relations of elasticity moduli, the governing equation can be further written as, \begin{eqnarray}\label{elas_eq1} 2(1-\nu)\frac{\partial^2 u_1}{\partial x^2}+(1-2\nu)\frac{\partial^2 u_1}{\partial y^2}+(1-2\nu)\frac{\partial^2 u_1}{\partial z^2}+\frac{\partial^2 u_2}{\partial x\partial y}+\frac{\partial^2 u_3}{\partial x\partial z}=f_1,\\ (1-2\nu)\frac{\partial^2 u_2}{\partial x^2}+2(1-\nu)\frac{\partial^2 u_2}{\partial y^2}+(1-2\nu)\frac{\partial^2 u_2}{\partial z^2}+\frac{\partial^2 u_1}{\partial x\partial y}+\frac{\partial^2 u_3}{\partial y\partial z}=f_2,\\ (1-2\nu)\frac{\partial^2 u_3}{\partial x^2}+(1-2\nu)\frac{\partial^2 u_3}{\partial y^2}+2(1-\nu)\frac{\partial^2 u_3}{\partial z^2}+\frac{\partial^2 u_1}{\partial x\partial z}+\frac{\partial^2 u_2}{\partial y\partial z}=f_3. \end{eqnarray} Here $(f_1,f_2,f_3)$ are prescribed right-hand side terms, related to the body force by $(f_1,f_2,f_3)=(-\frac{F_1}{\mu+\lambda},-\frac{F_2}{\mu+\lambda},-\frac{F_3}{\mu+\lambda})$.
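The modulus relations used in this simplification can be checked exactly with rational arithmetic. The sketch below (the helper name and the values of $E$ and $\nu$ are illustrative) verifies that $\mu/(\lambda+\mu)=1-2\nu$ and $(\lambda+2\mu)/(\lambda+\mu)=2(1-\nu)$, which are precisely the coefficients appearing in Eq. (\ref{elas_eq1}) after dividing Eq. (\ref{el11_simplify}) by $\lambda+\mu$, and that the P-wave modulus $M=2\mu(1-\nu)/(1-2\nu)$ equals $\lambda+2\mu$.

```python
from fractions import Fraction

def lame_from_E_nu(E, nu):
    """Lame parameters from Young's modulus E and Poisson's ratio nu."""
    mu = E / (2 * (1 + nu))
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    return lam, mu

E, nu = Fraction(10), Fraction(1, 4)      # illustrative values
lam, mu = lame_from_E_nu(E, nu)           # lam = mu = 4 for this choice

# Coefficients of the simplified homogeneous-media equation:
assert mu / (lam + mu) == 1 - 2 * nu
assert (lam + 2 * mu) / (lam + mu) == 2 * (1 - nu)
# P-wave modulus used later: M = 2*mu*(1-nu)/(1-2*nu) = lam + 2*mu.
assert 2 * mu * (1 - nu) / (1 - 2 * nu) == lam + 2 * mu
```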
Also the second set of jump conditions can be rewritten as, \begin{eqnarray} \label{deri_jump1}
\left[\frac{2\mu}{1-2\nu}\left((1-\nu)\frac{\partial u_1}{\partial x}+\nu\frac{\partial u_2}{\partial y}+\nu\frac{\partial u_3}{\partial z}\right)n_1+\mu\left(\frac{\partial u_1}{\partial y}+\frac{\partial u_2}{\partial x}\right)n_2+\mu\left(\frac{\partial u_1}{\partial z}+\frac{\partial u_3}{\partial x}\right)n_3\right]|_\Gamma=\phi, \ \mbox{on}\ \Gamma, \\
\left[\mu\left(\frac{\partial u_1}{\partial y}+\frac{\partial u_2}{\partial x}\right)n_1+\frac{2\mu}{1-2\nu}\left(\nu\frac{\partial u_1}{\partial x}+(1-\nu)\frac{\partial u_2}{\partial y}+\nu\frac{\partial u_3}{\partial z}\right)n_2+\mu\left(\frac{\partial u_3}{\partial y}+\frac{\partial u_2}{\partial z}\right)n_3\right]|_\Gamma=\psi, \ \mbox{on}\ \Gamma, \\
\left[\mu\left(\frac{\partial u_1}{\partial z}+\frac{\partial u_3}{\partial x}\right)n_1+\mu\left(\frac{\partial u_2}{\partial z}+\frac{\partial u_3}{\partial y}\right)n_2+\frac{2\mu}{1-2\nu}\left(\nu\frac{\partial u_1}{\partial x}+\nu\frac{\partial u_2}{\partial y}+(1-\nu)\frac{\partial u_3}{\partial z}\right)n_3\right]|_\Gamma=\eta, \ \mbox{on}\ \Gamma. \end{eqnarray}
The above two sets of equations, together with the Dirichlet boundary conditions and the jump conditions on the displacement, constitute the general formulation of the linear elasticity interface problem in homogeneous media.
\section{Method and algorithm}\label{algorithm}
In this section, the MIB method for elliptic interface problems is extended to solve elasticity interface problems.
Due to the existence of the interface, the direct application of standard central finite difference (CFD) schemes leads to a dramatic decrease in the accuracy and convergence of the numerical solution. To maintain the designed order of accuracy, the MIB method extends function values across the interface. The resulting extended function values are called fictitious values, which are employed, together with function values on the other side of the interface, for the CFD discretization of the PDE across the interface. For example, at a grid point $(i, j, k)$ near the interface, if its finite difference scheme refers to some grid points on the other side of the interface, fictitious values from the other side of the interface are utilized in the finite difference discretization. To extend the function values to the other side of the interface and enable the MIB discretization of second order convergence, the interface conditions on both function values and normal derivatives are utilized and enforced.
A grid point at which a fictitious value is required is called an irregular grid point, while grid points where no fictitious value is required are called regular grid points. Loosely speaking, irregular grid points form extended domains on both sides of the interface. The extended domains ensure that the standard central finite difference scheme can be uniformly applied without the loss of numerical accuracy.
Additionally, derivatives involved in the elasticity equation are classified into central derivatives and cross derivatives. Central derivatives involve only one direction, while cross derivatives refer to more than one direction. These two situations are to be handled in different manners in the present method. Additional care is needed for discretizing cross derivatives to the second order accuracy.
Moreover, interfaces are classified into smooth ones and nonsmooth ones. The nonsmooth interfaces are Lipschitz continuous with geometric singularities, such as cusps, tips and/or sharp edges. Maintaining the designed order of accuracy is much more difficult for nonsmooth interfaces.
\subsection{General MIB algorithms for Laplace operator}\label{smoothinterface}
\subsubsection{Simplification of interface jump conditions}
Since the interface normal direction varies along the interface, which is very troublesome from a computational perspective, it is necessary to define a set of local coordinates at each intersection point of the interface and the Cartesian mesh, so that different interface geometries can be treated in a systematic manner. In this section, we present the local coordinate transformation formula. At a specific intersection point, the local coordinate system is chosen to be $(\xi, \eta, \zeta)$, where $\xi$ is along the normal direction and $\eta$ is in the $xy$ plane. This local coordinate system can be obtained from the Cartesian coordinate system via the following transformation \begin{equation} \label{trans} \left(
\begin{array}{c}
\xi \\
\eta\\
\zeta
\end{array} \right)= \mathbf{P}\cdot \left(
\begin{array}{c}
x \\
y\\
z
\end{array} \right), \end{equation} where $\mathbf{P}\doteq \{P(i, j)\}_{i, j=1, 2, 3}$ is a transformation matrix \begin{equation} \label{trans_mat} \mathbf{P}= \left(
\begin{array}{ccc}
\sin\phi\cos\theta & \sin\phi\sin\theta &\cos\phi \\
-\sin\theta & \cos\theta &0\\
-\cos\phi\cos\theta &-\cos\phi\sin\theta &\sin\phi
\end{array} \right), \end{equation} where $\theta$ and $\phi$ are the azimuth and zenith angle with respect to the normal direction, respectively.
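A minimal numerical sketch of this transformation (the helper name and angle values are ours): the rows of $\mathbf{P}$ are orthonormal, so $\mathbf{P}$ is a rotation and its inverse is simply its transpose, which is convenient when mapping local derivatives back to Cartesian coordinates.

```python
import numpy as np

def transform_matrix(theta, phi):
    """P of Eq. (trans_mat): row 1 is the interface normal (xi),
    rows 2 and 3 are the tangential directions (eta, zeta)."""
    return np.array([
        [ np.sin(phi)*np.cos(theta),  np.sin(phi)*np.sin(theta), np.cos(phi)],
        [-np.sin(theta),              np.cos(theta),             0.0        ],
        [-np.cos(phi)*np.cos(theta), -np.cos(phi)*np.sin(theta), np.sin(phi)],
    ])

P = transform_matrix(theta=0.7, phi=1.1)   # illustrative angles
# Orthonormal rows: local coordinates are (xi, eta, zeta)^T = P (x, y, z)^T
# and the inverse transformation is P^T.
assert np.allclose(P @ P.T, np.eye(3))
```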
In the new local coordinate system, the interface conditions on function values and normal derivatives become (here, for simplicity, we only discuss the constant material parameter case; spatially dependent material parameters can be treated similarly) \begin{equation} \label{jump_val}
[\mathbf{u}]|_\Gamma=\mathbf{b}, \end{equation} and \begin{equation} \label{deri_jump_normal}
[\mathbb{T}\cdot\xi]|_\Gamma=\mathbf{T}. \end{equation}
To achieve better stability and higher efficiency, which is essential for the present 3D vector equation, only the lowest order jump conditions are utilized in the MIB method. Therefore, we avoid generating high order (derivative) jump conditions, even though they are used in arbitrarily high order MIB methods \cite{Zhao:2004,Zhou:2006c,Yu:2007}. However, we hope to have as many low order jump conditions as possible so as to gain flexibility in dealing with complex interface geometries. To this end, we differentiate the jump condition of the vector function to derive two additional sets of interface jump conditions along the $\eta$ and $\zeta$ directions, respectively \begin{equation} \label{jump1}
[\mathbf{u}_\eta]|_\Gamma=\left(-\sin\theta \frac{\partial \mathbf{u^+}}{\partial x}+\cos\theta \frac{\partial \mathbf{u^+}}{\partial y}\right)-\left(-\sin\theta \frac{\partial \mathbf{u^-}}{\partial x}+\cos\theta \frac{\partial \mathbf{u^-}}{\partial y}\right), \end{equation} and \begin{equation} \label{jump2}
[\mathbf{u}_\zeta]|_\Gamma=\left(-\cos\phi\cos\theta \frac{\partial \mathbf{u^+}}{\partial x}-\cos\phi\sin\theta \frac{\partial \mathbf{u^+}}{\partial y}+\sin\phi \frac{\partial \mathbf{u^+}}{\partial z}\right)-\left(-\cos\phi\cos\theta \frac{\partial \mathbf{u^-}}{\partial x}-\cos\phi\sin\theta \frac{\partial \mathbf{u^-}}{\partial y}+\sin\phi \frac{\partial \mathbf{u^-}}{\partial z}\right), \end{equation} where $\mathbf{u}=(u_1, u_2, u_3)^T$.
In summary, at a specific intersection point of the interface and the mesh line, there are four sets of interface conditions (\ref{jump_val})-(\ref{jump2}), which only refer to the function values and lowest order derivatives. This property is crucial to endow the MIB method with high efficiency and stability in handling complex interface geometries since no higher order derivative is referred in determining fictitious values. Additionally, lowest order derivatives lead to a more banded matrix and a smaller condition number, which are crucial in solving 3D vector interface problems.
In the MIB method, the function values near the interface are extended across the interface by introducing fictitious values. The extension is done along one mesh line at a time, so that it is locally a 1D-like scheme for a higher dimensional interface. Fictitious values can be determined by the aforementioned interface conditions (\ref{jump_val})-(\ref{jump2}). These conditions involve eighteen derivatives $\frac{\partial \mathbf{u}^{\pm}}{\partial x_j}$, where $x_j=x,y,z$ and ${\bf u}=(u_1,u_2,u_3)^T$. These derivatives are to be evaluated on the interface and thus are called interfacial derivatives. Due to the geometric complexity, some of these eighteen interfacial derivatives can be very difficult to compute numerically. In general, these interfacial derivatives are grouped into six sets because $u_1$, $u_2$ and $u_3$ can be treated in a similar manner in most situations.
In a second order scheme, we typically have two (sets of) fictitious values along one specific mesh line at one time. However, there are four sets of interface conditions. Therefore,
two sets of interface conditions are redundant. This redundancy gives two more degrees of freedom for us to design efficient and robust second order schemes in a complex interface geometry. Our basic idea is to algebraically eliminate two sets of interfacial derivatives that are the most difficult to compute by using two sets of redundant interface conditions. Therefore, at each intersection point we only need to evaluate four sets of derivatives that are relatively easy to approximate numerically.
The two sets of derivatives that are to be eliminated are selected by the following principles. \begin{itemize} \item Two sets of fictitious values along the mesh line that intersects with the interface are determined at one time.
\item Two sets of derivatives along the mesh line that intersects with the interface must be kept.
\item In the remaining four sets of derivatives, select two sets that are most difficult to evaluate due to the local geometry and eliminate them by two sets of jump conditions. \end{itemize}
Denote $\mathbf{T}:=(T_1, T_2, T_3)^T$ in interface conditions (\ref{deri_jump_normal}), and by further introducing the matrix notation, the interface conditions (\ref{deri_jump_normal}-\ref{jump2}) can be rewritten as: \begin{eqnarray*} \label{jump} &\left(T_1, T_2, T_3, [\frac{\partial u_1}{\partial \eta}], [\frac{\partial u_2}{\partial \eta}], [\frac{\partial u_3}{\partial \eta}], [\frac{\partial u_1}{\partial \zeta}], [\frac{\partial u_2}{\partial \zeta}], [\frac{\partial u_3}{\partial \zeta}] \right)^T \\&= \mathbf{C} \left(\frac{\partial u^+_1}{\partial x}, \frac{\partial u^-_1}{\partial x}, \frac{\partial u^+_1}{\partial y}, \frac{\partial u^-_1}{\partial y}, \frac{\partial u^+_1}{\partial z}, \frac{\partial u^-_1}{\partial z}, \frac{\partial u^+_2}{\partial x}, \frac{\partial u^-_2}{\partial x}, \frac{\partial u^+_2}{\partial y}, \frac{\partial u^-_2}{\partial y}, \frac{\partial u^+_2}{\partial z}, \frac{\partial u^-_2}{\partial z}, \frac{\partial u^+_3}{\partial x}, \frac{\partial u^-_3}{\partial x}, \frac{\partial u^+_3}{\partial y}, \frac{\partial u^-_3}{\partial y}, \frac{\partial u^+_3}{\partial z}, \frac{\partial u^-_3}{\partial z}\right)^T \end{eqnarray*} where $$ \mathbf{C}= \left(
\begin{array}{ccc}
C_{1, 1} & C_{1, 2} &C_{1, 3} \\
C_{2, 1} & C_{2, 2} &C_{2, 3}\\
C_{3, 1} &C_{3, 2} &C_{3, 3}
\end{array} \right), $$
$$ C_{1, 1}= \left( \begin{array}{cccccc}
M^+P(1, 1) & -M^-P(1, 1) &\mu^+P(1, 2) &-\mu^-P(1, 2) &\mu^+P(1, 3) &-\mu^-P(1, 3)\\
\lambda^+P(1, 2) & -\lambda^-P(1, 2) &\mu^+P(1, 1) &-\mu^-P(1, 1) &0 &0\\
\lambda^+P(1, 3) & -\lambda^-P(1, 3) &0 &0 &\mu^+P(1, 1) &-\mu^-P(1, 1)
\end{array} \right) $$
$$ C_{1, 2}= \left( \begin{array}{cccccc}
\mu^+P(1, 2) & -\mu^-P(1, 2) &\lambda^+P(1, 1) &-\lambda^-P(1, 1) &0 &0\\
\mu^+P(1, 1) & -\mu^-P(1, 1) &M^+P(1, 2) &-M^-P(1, 2) &\mu^+P(1, 3) &-\mu^-P(1, 3)\\
0 & 0 &\lambda^+P(1, 3) &-\lambda^-P(1, 3) &\mu^+P(1, 2) &-\mu^-P(1, 2)
\end{array} \right) $$
$$ C_{1, 3}= \left( \begin{array}{cccccc}
\mu^+P(1, 3) & -\mu^-P(1, 3) &0 &0 &\lambda^+P(1, 1) &-\lambda^-P(1, 1)\\
0 &0& \mu^+P(1, 3) &-\mu^-P(1, 3) &\lambda^+P(1, 2) & -\lambda^-P(1, 2)\\
\mu^+P(1, 1) & -\mu^-P(1, 1) &\mu^+P(1, 2) &-\mu^-P(1, 2) &M^+P(1, 3) &-M^-P(1, 3)
\end{array} \right) $$
$$ C_{2, 1}= \left( \begin{array}{cccccc}
P(2, 1) & -P(2, 1) &P(2, 2) &-P(2, 2) &0 &0\\
0 &0& 0 &0 &0 & 0\\
0 &0& 0 &0 &0 & 0
\end{array} \right) $$
$$ C_{2, 2}= \left( \begin{array}{cccccc}
0 &0& 0 &0 &0 & 0\\
P(2, 1) & -P(2, 1) &P(2, 2) &-P(2, 2) &0 &0\\
0 &0& 0 &0 &0 & 0
\end{array} \right) $$
$$ C_{2, 3}= \left( \begin{array}{cccccc}
0 &0& 0 &0 &0 & 0\\
0 &0& 0 &0 &0 & 0\\
P(2, 1) & -P(2, 1) &P(2, 2) &-P(2, 2) &0 &0
\end{array} \right) $$
$$ C_{3, 1}= \left( \begin{array}{cccccc}
P(3, 1) & -P(3, 1) &P(3, 2) &-P(3, 2) &P(3, 3) &-P(3, 3)\\
0 &0& 0 &0 &0 & 0\\
0 &0& 0 &0 &0 & 0
\end{array} \right) $$
$$ C_{3, 2}= \left( \begin{array}{cccccc}
0 &0& 0 &0 &0 & 0\\
P(3, 1) & -P(3, 1) &P(3, 2) &-P(3, 2) &P(3, 3) &-P(3, 3)\\
0 &0& 0 &0 &0 & 0
\end{array} \right) $$ $$ C_{3, 3}= \left( \begin{array}{cccccc}
0 &0& 0 &0 &0 & 0\\
0 &0& 0 &0 &0 & 0\\
P(3, 1) & -P(3, 1) &P(3, 2) &-P(3, 2) &P(3, 3) &-P(3, 3)
\end{array} \right). $$
In the above expressions, $M=\frac{2\mu(1-\nu)}{1-2\nu}$; here $M$, $\lambda$ and $\mu$ are the P-wave modulus, Lam\'{e}'s first parameter and the shear modulus, respectively. Here $*^+$ and $*^-$ are the limiting values of the quantity $*$ inside and outside the interface, respectively.
\begin{lemma} Consider the matrix: $$ A\doteq \left( \begin{array}{cccccc}
M^+P(1, 1) & -M^-P(1, 1) &\mu^+ P(1, 2) &-\mu^- P(1, 2) &\mu^+ P(1, 3) &-\mu^-P(1, 3)\\
P(2, 1) &-P(2, 1)&P(2, 2) &-P(2, 2) &0 &0\\
P(3, 1) &-P(3, 1) &P(3, 2) &-P(3, 2) &P(3, 3) &-P(3, 3)
\end{array} \right) $$ where $M^+, M^-, \mu^+, \mu^-, P(i, j), i, j=1, 2, 3$ are the same as above. Then for all $1\leq l, m\leq 6$ with $l\neq m$, there exist constants $a, b, c$ such that the $l$-th and $m$-th elements of the vector $aA(1, :)+bA(2, :)+cA(3, :)$ are both zero, where $A(1, :), A(2, :), A(3, :)$ are the first, second and third rows of the matrix $A$. \end{lemma}
\begin{proof} If $l=5, m=6$ or $l=6, m=5$, we simply let $a=0, b=1, c=0$; then it is obvious that the $5$-th and $6$-th elements of the vector $aA(1, :)+bA(2, :)+cA(3, :)$ are both zero. \\
Otherwise, we let: $$ a=A(2, l)A(3, m)-A(3, l)A(2, m), $$ $$ b=A(3, l)A(1, m)-A(1, l)A(3, m), $$ $$ c=A(1, l)A(2, m)-A(2, l)A(1, m), $$ then the $l$-th and $m$-th elements of the vector $aA(1, :)+bA(2, :)+cA(3, :)$ are both zero, since each is a $3\times 3$ determinant with a repeated column. \end{proof}
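This determinant argument can be confirmed numerically; the following sketch checks, for an arbitrary $3\times 6$ matrix (randomly generated here for illustration), that the cross-product coefficients from the proof annihilate any chosen pair of columns.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))   # an arbitrary 3-by-6 matrix, as in the lemma

for l in range(6):
    for m in range(6):
        if l == m:
            continue
        # cross-product coefficients from the proof (0-based indices)
        a = A[1, l]*A[2, m] - A[2, l]*A[1, m]
        b = A[2, l]*A[0, m] - A[0, l]*A[2, m]
        c = A[0, l]*A[1, m] - A[1, l]*A[0, m]
        combo = a*A[0] + b*A[1] + c*A[2]
        # combo[l] and combo[m] are 3x3 determinants with a repeated
        # column, hence (numerically) zero
        assert abs(combo[l]) < 1e-9 and abs(combo[m]) < 1e-9
```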
Now suppose that according to the local geometry the $l$-th and $m$-th elements of the array: \begin{equation} \label{derivs} \left(\frac{\partial \mathbf{u}^+}{\partial x}, \frac{\partial \mathbf{u}^-}{\partial x}, \frac{\partial \mathbf{u}^+}{\partial y}, \frac{\partial \mathbf{u}^-}{\partial y}, \frac{\partial \mathbf{u}^+}{\partial z}, \frac{\partial \mathbf{u}^-}{\partial z}\right), \end{equation} are to be eliminated, where $1\leq l, m \leq 6, l\neq m$. We are going to seek the combined interface conditions for computing the two pairs of fictitious values at the two irregular grid points. \\
First, if $l=5, m=6$ or $l=6, m=5$ then we simply employ the interface conditions Eqs. (\ref{jump_val}) and (\ref{jump1}) for computing the fictitious values. Otherwise, we have the following results.
\begin{lemma} For given $1\leq l, m\leq 6$ with $l\neq m$ and $\{l, m\}\neq\{5, 6\}$, there exist constants $a_i, b_i, c_i, d_i, e_i, f_i, g_i, i=1, 2, 3$ such that the $l$-th, $m$-th, $(l+6)$-th, $(m+6)$-th, $(l+12)$-th and $(m+12)$-th elements of the following vectors are all zero: $$ a_1C(1, :)+b_1C(4, :)+c_1C(7, :)+d_1C(5, :)+e_1C(8, :)+f_1C(6, :)+g_1C(9, :), $$ $$ a_2C(2, :)+b_2C(4, :)+c_2C(7, :)+d_2C(5, :)+e_2C(8, :)+f_2C(6, :)+g_2C(9, :), $$ $$ a_3C(3, :)+b_3C(4, :)+c_3C(7, :)+d_3C(5, :)+e_3C(8, :)+f_3C(6, :)+g_3C(9, :), $$ where $C(i, :), i=1,...,9$ is the $i$-th row of the above matrix $C$. \end{lemma}
\begin{proof} We only show that there exist constants $a_1, b_1, c_1, d_1, e_1, f_1, g_1$ such that the result stated in the lemma is true for the first vector; the other two cases are analogous. \\
Consider the following three matrices: $$ A_1\doteq \left( \begin{array}{cccccc}
M^+P(1, 1) & -M^-P(1, 1) &\mu^+ P(1, 2) &-\mu^- P(1, 2) &\mu^+ P(1, 3) &-\mu^-P(1, 3)\\
P(2, 1) &-P(2, 1)&P(2, 2) &-P(2, 2) &0 &0\\
P(3, 1) &-P(3, 1) &P(3, 2) &-P(3, 2) &P(3, 3) &-P(3, 3)
\end{array} \right) $$
$$ A_2\doteq \left( \begin{array}{cccccc}
\mu^+P(1, 2) & -\mu^-P(1, 2) &\lambda^+ P(1, 1) &-\lambda^- P(1, 1) &0 &0\\
P(2, 1) &-P(2, 1)&P(2, 2) &-P(2, 2) &0 &0\\
P(3, 1) &-P(3, 1) &P(3, 2) &-P(3, 2) &P(3, 3) &-P(3, 3)
\end{array} \right) $$
$$ A_3\doteq \left( \begin{array}{cccccc}
\mu^+P(1, 3) & -\mu^-P(1, 3) &0 &0 &\lambda^+ P(1, 1) &-\lambda^-P(1, 1)\\
P(2, 1) &-P(2, 1)&P(2, 2) &-P(2, 2) &0 &0\\
P(3, 1) &-P(3, 1) &P(3, 2) &-P(3, 2) &P(3, 3) &-P(3, 3)
\end{array} \right) $$ According to the previous lemma, let \begin{eqnarray*} a_1=A_1(2, l)A_1(3, m)-A_1(3, l)A_1(2, m)=A_2(2, l)A_2(3, m)-A_2(3, l)A_2(2, m)&\\ =A_3(2, l)A_3(3, m)-A_3(3, l)A_3(2, m)=C(4, l)C(7, m)-C(7, l)C(4, m), \end{eqnarray*} $$ b_1=A_1(3, l)A_1(1, m)-A_1(1, l)A_1(3, m)=C(7, l)C(1, m)-C(1, l)C(7, m), $$ $$ c_1=A_1(1, l)A_1(2, m)-A_1(2, l)A_1(1, m)=C(1, l)C(4, m)-C(4, l)C(1, m), $$ $$ d_1=A_2(3, l)A_2(1, m)-A_2(1, l)A_2(3, m)=C(8, l+6)C(1, m+6)-C(1, l+6)C(8, m+6), $$ $$ e_1=A_2(1, l)A_2(2, m)-A_2(2, l)A_2(1, m)=C(1, l+6)C(5, m+6)-C(5, l+6)C(1, m+6), $$ $$ f_1=A_3(3, l)A_3(1, m)-A_3(1, l)A_3(3, m)=C(9, l+12)C(1, m+12)-C(1, l+12)C(9, m+12), $$ $$ g_1=A_3(1, l)A_3(2, m)-A_3(2, l)A_3(1, m)=C(1, l+12)C(6, m+12)-C(6, l+12)C(1, m+12), $$ then the $l$-th and $m$-th elements of the following vectors are all zero: $$ a_1A_1(1, :)+b_1A_1(2, :)+c_1A_1(3, :), $$ $$ a_1A_2(1, :)+d_1A_2(2, :)+e_1A_2(3, :), $$ $$ a_1A_3(1, :)+f_1A_3(2, :)+g_1A_3(3, :). $$ Noting the relationship between the matrix $C$ and the matrices $A_1, A_2$ and $A_3$, it follows that the $l$-th, $m$-th, $(l+6)$-th, $(m+6)$-th, $(l+12)$-th and $(m+12)$-th elements of the following vector are all zero: $$ a_1C(1, :)+b_1C(4, :)+c_1C(7, :)+d_1C(5, :)+e_1C(8, :)+f_1C(6, :)+g_1C(9, :). $$ \end{proof}
According to the above lemma, if $\{l, m\}\neq\{5, 6\}$, the following two sets of interface conditions can be employed to compute the fictitious values; these conditions do not contain the $l$-th and $m$-th elements of (\ref{derivs}).
The first set of interface condition is due to the jump of function values, i.e., \begin{eqnarray} \label{inter1}
[u_1]|_\Gamma=b_1, \end{eqnarray} \begin{eqnarray} \label{inter22}
[u_2]|_\Gamma=b_2, \end{eqnarray} \begin{eqnarray} \label{inter3}
[u_3]|_\Gamma=b_3. \end{eqnarray} The other set is due to derivatives, \begin{eqnarray} \label{inter4} a_1T_1+b_1\left[\frac{\partial u_1}{\partial \eta}\right]+c_1\left[\frac{\partial u_1}{\partial \zeta}\right]+d_1\left[\frac{\partial u_2}{\partial \eta}\right]+e_1\left[\frac{\partial u_2}{\partial \zeta}\right]+ f_1\left[\frac{\partial u_3}{\partial \eta}\right]+g_1\left[\frac{\partial u_3}{\partial \zeta}\right]=\\\nonumber \left(a_1C(1, :)+b_1C(4, :)+c_1C(7, :)+d_1C(5, :)+e_1C(8, :)+f_1C(6, :)+g_1C(9, :)\right)\cdot \mathbf{\alpha}, \end{eqnarray}
\begin{eqnarray} \label{inter5} a_2T_2+b_2\left[\frac{\partial u_1}{\partial \eta}\right]+c_2\left[\frac{\partial u_1}{\partial \zeta}\right]+d_2\left[\frac{\partial u_2}{\partial \eta}\right]+e_2\left[\frac{\partial u_2}{\partial \zeta}\right]+ f_2\left[\frac{\partial u_3}{\partial \eta}\right]+g_2\left[\frac{\partial u_3}{\partial \zeta}\right]=\\\nonumber \left(a_2C(2, :)+b_2C(4, :)+c_2C(7, :)+d_2C(5, :)+e_2C(8, :)+f_2C(6, :)+g_2C(9, :)\right)\cdot \mathbf{\alpha}, \end{eqnarray}
\begin{eqnarray} \label{inter6} a_3T_3+b_3\left[\frac{\partial u_1}{\partial \eta}\right]+c_3\left[\frac{\partial u_1}{\partial \zeta}\right]+d_3\left[\frac{\partial u_2}{\partial \eta}\right]+e_3\left[\frac{\partial u_2}{\partial \zeta}\right]+ f_3\left[\frac{\partial u_3}{\partial \eta}\right]+g_3\left[\frac{\partial u_3}{\partial \zeta}\right]=\\\nonumber \left(a_3C(3, :)+b_3C(4, :)+c_3C(7, :)+d_3C(5, :)+e_3C(8, :)+f_3C(6, :)+g_3C(9, :)\right)\cdot \mathbf{\alpha}, \end{eqnarray}
where $\mathbf{\alpha}$ denotes the $18$-vector of interfacial derivatives on the right-hand side of the matrix relation above, and $a_1=a_2=a_3=C(4, l)C(7, m)-C(7, l)C(4, m)$,
$b_1=C(7, l)C(1, m)-C(1, l)C(7, m)$,
$c_1=C(1, l)C(4, m)-C(4, l)C(1, m)$,
$d_1=C(8, l+6)C(1, m+6)-C(1, l+6)C(8, m+6)$,
$e_1=C(1, l+6)C(5, m+6)-C(5, l+6)C(1, m+6)$,
$f_1=C(9, l+12)C(1, m+12)-C(1, l+12)C(9, m+12)$,
$g_1=C(1, l+12)C(6, m+12)-C(6, l+12)C(1, m+12)$,
$b_2=C(7, l)C(2, m)-C(2, l)C(7, m)$,
$c_2=C(2, l)C(4, m)-C(4, l)C(2, m)$,
$d_2=C(8, l+6)C(2, m+6)-C(2, l+6)C(8, m+6)$,
$e_2=C(2, l+6)C(5, m+6)-C(5, l+6)C(2, m+6)$,
$f_2=C(9, l+12)C(2, m+12)-C(2, l+12)C(9, m+12)$,
$g_2=C(2, l+12)C(6, m+12)-C(6, l+12)C(2, m+12)$,
$b_3=C(7, l)C(3, m)-C(3, l)C(7, m)$,
$c_3=C(3, l)C(4, m)-C(4, l)C(3, m)$,
$d_3=C(8, l+6)C(3, m+6)-C(3, l+6)C(8, m+6)$,
$e_3=C(3, l+6)C(5, m+6)-C(5, l+6)C(3, m+6)$,
$f_3=C(9, l+12)C(3, m+12)-C(3, l+12)C(9, m+12)$,
$g_3=C(3, l+12)C(6, m+12)-C(6, l+12)C(3, m+12)$.
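As a sanity check, the lemma and the coefficient formulas above can be verified numerically. The sketch below (helper names and the material parameter values are ours, chosen only for illustration) assembles the $9\times 18$ matrix $C$ from its $3\times 6$ blocks and confirms that the first combined vector vanishes at the six targeted entries for every admissible pair $(l, m)$.

```python
import numpy as np

def transform_matrix(theta, phi):
    """Transformation matrix P of Eq. (trans_mat)."""
    return np.array([
        [ np.sin(phi)*np.cos(theta),  np.sin(phi)*np.sin(theta), np.cos(phi)],
        [-np.sin(theta),              np.cos(theta),             0.0        ],
        [-np.cos(phi)*np.cos(theta), -np.cos(phi)*np.sin(theta), np.sin(phi)],
    ])

def build_C(P, lam, mu, M):
    """Assemble the 9x18 matrix C from its 3x6 blocks C_{i,j}."""
    lp, lm_ = lam
    mp, mm = mu
    Mp, Mm = M
    def pair(ap, am, col):  # (a^+ P(1,col), -a^- P(1,col)), col is 1-based
        return [ap * P[0, col-1], -am * P[0, col-1]]
    C11 = [pair(Mp, Mm, 1) + pair(mp, mm, 2) + pair(mp, mm, 3),
           pair(lp, lm_, 2) + pair(mp, mm, 1) + [0, 0],
           pair(lp, lm_, 3) + [0, 0] + pair(mp, mm, 1)]
    C12 = [pair(mp, mm, 2) + pair(lp, lm_, 1) + [0, 0],
           pair(mp, mm, 1) + pair(Mp, Mm, 2) + pair(mp, mm, 3),
           [0, 0] + pair(lp, lm_, 3) + pair(mp, mm, 2)]
    C13 = [pair(mp, mm, 3) + [0, 0] + pair(lp, lm_, 1),
           [0, 0] + pair(mp, mm, 3) + pair(lp, lm_, 2),
           pair(mp, mm, 1) + pair(mp, mm, 2) + pair(Mp, Mm, 3)]
    eta  = [P[1, 0], -P[1, 0], P[1, 1], -P[1, 1], 0, 0]
    zeta = [P[2, 0], -P[2, 0], P[2, 1], -P[2, 1], P[2, 2], -P[2, 2]]
    z = [0] * 6
    rows = [C11[i] + C12[i] + C13[i] for i in range(3)]
    rows += [eta + z + z, z + eta + z, z + z + eta]      # rows C(4..6)
    rows += [zeta + z + z, z + zeta + z, z + z + zeta]   # rows C(7..9)
    return np.array(rows, dtype=float)

P = transform_matrix(0.8, 0.6)
C = build_C(P, lam=(1.0, 2.0), mu=(0.5, 1.5), M=(2.0, 5.0))  # illustrative moduli
for l in range(6):
    for m in range(6):
        if l == m or {l, m} == {4, 5}:   # {l,m}={5,6} in 1-based indexing
            continue
        a1 = C[3, l]*C[6, m] - C[6, l]*C[3, m]
        b1 = C[6, l]*C[0, m] - C[0, l]*C[6, m]
        c1 = C[0, l]*C[3, m] - C[3, l]*C[0, m]
        d1 = C[7, l+6]*C[0, m+6] - C[0, l+6]*C[7, m+6]
        e1 = C[0, l+6]*C[4, m+6] - C[4, l+6]*C[0, m+6]
        f1 = C[8, l+12]*C[0, m+12] - C[0, l+12]*C[8, m+12]
        g1 = C[0, l+12]*C[5, m+12] - C[5, l+12]*C[0, m+12]
        combo = (a1*C[0] + b1*C[3] + c1*C[6]
                 + d1*C[4] + e1*C[7] + f1*C[5] + g1*C[8])
        for idx in (l, m, l+6, m+6, l+12, m+12):
            assert abs(combo[idx]) < 1e-9
```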
In the following, we omit the discussion of the case $l=5, m=6$ or $l=6, m=5$, which is essentially the same as the other cases.
\subsubsection{General fictitious scheme}
Consider the geometry illustrated in Fig. \ref{3D_inter}: two sets of fictitious values $\mathbf{f}(i, j, k):=(f_1^c(i, j, k), f_2^c(i, j, k), f_3^c(i, j, k))^T$ and $\mathbf{f}(i, j+1, k):=(f_1^c(i, j+1, k), f_2^c(i, j+1, k), f_3^c(i, j+1, k))^T$ are to be determined at the irregular grid points $(i, j, k)$ and $(i, j+1, k)$ for discretizing central derivatives. \begin{figure}\label{3D_inter}
\end{figure}
We denote the left domain as $\Omega^+$ and the right one as $\Omega^-$. In this case, $\mathbf{u}^+$, $\mathbf{u}^-$, $\mathbf{u}_y^+$, and $\mathbf{u}_y^-$ at $(x_0, y_0, z_0)$ can be easily approximated through interpolations and standard finite difference (FD) schemes from information in $\Omega^+$ and $\Omega^-$, respectively: \begin{equation} \mathbf{u}^+=(\omega_{0, j-1}, \omega_{0, j}, \omega_{0, j+1})\cdot\left(\mathbf{u}(i, j-1, k), \mathbf{u}(i, j, k), \mathbf{f}(i, j+1, k)\right)^T, \end{equation} \begin{equation} \mathbf{u}^-=(\tilde{\omega}_{0, j}, \tilde{\omega}_{0, j+1}, \tilde{\omega}_{0, j+2})\cdot\left(\mathbf{f}(i, j, k), \mathbf{u}(i, j+1, k), \mathbf{u}(i, j+2, k)\right)^T, \end{equation} \begin{equation} \mathbf{u}_y^+=(\omega_{1, j-1}, \omega_{1, j}, \omega_{1, j+1})\cdot\left(\mathbf{u}(i, j-1, k), \mathbf{u}(i, j, k), \mathbf{f}(i, j+1, k)\right)^T, \end{equation} \begin{equation} \mathbf{u}_y^-=(\tilde{\omega}_{1, j}, \tilde{\omega}_{1, j+1}, \tilde{\omega}_{1, j+2})\cdot\left(\mathbf{f}(i, j, k), \mathbf{u}(i, j+1, k), \mathbf{u}(i, j+2, k)\right)^T, \end{equation} where $\omega_{m, n}$, $\tilde{\omega}_{m, n}$ denote the interpolation or finite difference weights; the first subscript $m$ represents either interpolation ($m=0$) or the first order derivative ($m=1$) at the interface point $(x_0, y_0, z_0)$, while the second subscript $n$ denotes the node index. All the coefficients/weights are generated from standard Lagrange polynomials \cite{fonberg:1998}.
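Such interpolation and first-derivative weights can be generated directly from the Lagrange basis; a minimal sketch (the helper `lagrange_weights` is ours, shown for three nodes) follows.

```python
import numpy as np

def lagrange_weights(nodes, x0):
    """Interpolation weights w_{0,n} and first-derivative weights w_{1,n}
    at x0, from the Lagrange polynomial through the given 1D nodes."""
    nodes = np.asarray(nodes, dtype=float)
    k = len(nodes)
    w0, w1 = np.zeros(k), np.zeros(k)
    for n in range(k):
        others = np.delete(nodes, n)
        denom = np.prod(nodes[n] - others)
        w0[n] = np.prod(x0 - others) / denom
        # derivative of the n-th Lagrange basis at x0 (product rule)
        w1[n] = sum(np.prod(x0 - np.delete(others, i))
                    for i in range(k - 1)) / denom
    return w0, w1

w0, w1 = lagrange_weights([0.0, 1.0, 2.0], x0=0.4)
vals = np.array([3.0, 5.0, 7.0])      # samples of u(y) = 2*y + 3
assert abs(w0 @ vals - 3.8) < 1e-12   # interpolates u(0.4) = 3.8
assert abs(w1 @ vals - 2.0) < 1e-12   # recovers u'(0.4) = 2
```

For second order MIB schemes, three-point stencils as above suffice; the same routine applies to any node triple along the mesh line being processed.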
We only need to compute two of the remaining four vector-valued interface quantities. If $\mathbf{u}_x^-$ and $\mathbf{u}_z^-$ can be conveniently computed, then $\mathbf{u}_x^+$ and $\mathbf{u}_z^+$ are eliminated by the above elimination process by setting $l=1$ and $m=5$.
Here we provide a detailed scheme to approximate $\frac{\partial \mathbf{u}^-}{\partial z}$; other derivatives can be approximated in the same manner. Without loss of generality, we only demonstrate how to approximate the first component $\frac{\partial u_1^-}{\partial z}$.
As shown in Fig. \ref{3D_inter}, to approximate $\frac{\partial u_1^-}{\partial z}$, we need $u_1$ values along the auxiliary line $y=y_0$ on the $yz$-plane. However, these values are unavailable on the grid and have to be approximated by the interpolation schemes along the $y$-direction. Therefore six more auxiliary grid points are involved. In this situation, $\frac{\partial u_1^-}{\partial z}|_{(x_0, y_0, z_0)}$ can be approximated as \begin{eqnarray} \frac{\partial u_1^-}{\partial z}= \left( \begin{array}{c}
w_{1, k} \\
w_{1, k+1} \\
w_{1, k+2} \end{array} \right)^T\cdot \left( \begin{array}{ccccccccc}
\omega_{0, j} &\omega_{0, j+1}& \omega_{0, j+2} &0 &0 & 0 &0 &0 &0\\
0 &0& 0 &\omega_{0, j}^{'} &\omega_{0, j+1}^{'} & \omega_{0, j+2}^{'} &0 &0 &0\\
0 &0& 0 &0 &0 & 0 &\omega_{0, j}^* &\omega_{0, j+1}^* &\omega_{0, j+2}^* \end{array} \right) \cdot \mathbf{U}. \end{eqnarray} Here $\mathbf{U}=(f_1^c(i, j, k), u_1(i, j+1, k), u_1(i, j+2, k), u_1(i, j, k+1), u_1(i, j+1, k+1),u_1(i, j+2, k+1), u_1(i, j, k+2), u_1(i, j+1, k+2), u_1(i, j+2, k+2))^T$.
By solving the above six interface conditions Eqs. (\ref{inter1})-(\ref{inter6}) together, the six fictitious values $\mathbf{f}(i, j, k)$ and $\mathbf{f}(i, j+1, k)$ can easily be represented in terms of 48 function values and 12 interface jump conditions around them.
\subsubsection{Matrix optimization} The MIB matrix is banded because the interfaces are 2D surfaces and, in a second order MIB scheme, there is typically only one fictitious value on each side of the interface. However, to determine each pair of fictitious values, 12 auxiliary grid points are involved, and their distribution affects the convergence property of the resulting MIB matrix. In most cases, the choice of these 12 auxiliary grid points is not unique. In general, it is very important to make the MIB matrix as symmetric and diagonally dominant as possible so as to accelerate the convergence of the resulting linear algebraic solver. This aspect is more important for elasticity interface problems than for elliptic interface problems because the matrix size is much larger. We therefore select the 12 auxiliary grid points as close to the interface as possible. This strategy has been employed in our earlier MIBPB II software package \cite{Yu:2007,Yu:2007a} for solving elliptic interface problems; a more detailed description can be found elsewhere \cite{Yu:2007a}. In the present work, we utilize the same strategy to construct the MIB matrix for elasticity interface problems.
\subsubsection{Fictitious scheme for interface with large curvatures}
The key assumption in the above scheme is that there are at least two grid points on each mesh line inside a subdomain, so that fictitious values on the mesh line can be determined. However, when the curvature of the interface is very large, this requirement cannot be guaranteed for all mesh sizes.
\begin{figure}
\caption{An illustration of the disassociation type of irregular grid points at cross section $(x=x_i)$. Fictitious values $\mathbf{f}(i, j, k)$ cannot be computed from $y$-direction by the aforementioned scheme. Nevertheless, they can be computed from the $z$-direction. In the discretization schemes, the fictitious value at $(i, j, k)$ found from the vertical direction is utilized for both vertical and horizontal discretizations of the derivatives in the governing equation. }
\label{dis}
\end{figure}
As shown in Fig. \ref{dis}, the above scheme is not applicable for finding the fictitious values at grid point $(i, j, k)$ along the $y$-direction, since there is only one point inside the interface along the $y$-direction, which is not enough for the interpolation. Nevertheless, there is no difficulty in finding the fictitious values at grid point $(i, j, k)$ along the $z$-direction. Hence, it is possible to replace the fictitious values to be found along the $y$-direction with those found along the $z$-direction or the $x$-direction.
Note that this replacement does not reduce the numerical accuracy in general: if the fictitious values found along the $z$-direction have numerical accuracy $O(h^m)$ for some integer $m$, where $h$ is the grid size of the uniform mesh, then this estimate holds for the fictitious values at $(i, j, k)$ no matter how they are determined.
\begin{remark} In principle, to make the final algebraic linear system as symmetric and banded as possible, if the fictitious values can be found along the given direction, one should avoid using the disassociation technique. \end{remark}
\subsubsection{Fictitious scheme for interface with sharp edge}\label{sharpinterface}
Geometric singularities, such as tips, cusps and self-intersecting surfaces, are ubiquitous in science and engineering problems. In the presence of geometric singularities, the schemes proposed above may not work because the fictitious values cannot be determined. Therefore, it is crucial to develop special schemes for determining fictitious values near geometric singularities.
According to the local interface geometry, the sharp-edged interface can be classified into two classes, one is locally convex, the other is locally concave, as shown in Fig. \ref{sharp}. \begin{figure}
\caption{Illustration of two types of sharp-edged interfaces at cross section $(x=x_i)$. The jump conditions of the function values at the points $o_1$ and $o_2$ (two red points) and the jumps of derivatives at $o_1$ are employed to compute fictitious values at the two blue irregular grid points. }
\label{sharp}
\end{figure}
Let us discuss how to determine the fictitious values at grid point $(i, j, k)$ for the convex interface case; the other case can be treated in the same manner.
In the MIB scheme, a pair of fictitious values on a mesh line is determined at a time. Suppose that the fictitious values at grid point $(i, j, k)$ are to be determined along the positive $y$-direction and the interface intersects the mesh line at point $o_1$; then the MIB scheme determines the fictitious values at $(i, j, k)$ and $(i, j+1, k)$ simultaneously.
In the left chart of Fig. \ref{sharp}, the point $(i, j-1, k)$ is referenced in the discretization of the interface conditions (\ref{inter1})-(\ref{inter6}). Due to the sharp-edged interface, $(i, j-1, k)$ is not in the same subdomain as $(i, j, k)$, and the fictitious values at grid point $(i, j, k)$ cannot be calculated directly from the interface conditions (\ref{inter1})-(\ref{inter6}). In this case, one more set of fictitious values, at grid point $(i, j-1, k)$, is involved, so that there are nine fictitious values to be determined while only six interface conditions are available.
Note that the jump of the function values at point $o_2$, which is another intersection point of the interface with the mesh line, can be utilized to compute fictitious values. Now there are nine interface conditions, namely, three jumps of function values at $o_1$, three jumps of function values at $o_2$ and three jumps of derivatives at $o_1$.
The discretization of interface conditions (\ref{inter1})-(\ref{inter6}) in this sharp-edged interface situation is obtained simply by replacing $\mathbf{u}(i, j-1, k)$ with the fictitious values $\mathbf{f}(i, j-1, k)$, where $\mathbf{f}(i, j-1, k):=(f_1^c(i, j-1, k), f_2^c(i, j-1, k), f_3^c(i, j-1, k))^T$ denotes the fictitious values at node $(i, j-1, k)$. Three more interface conditions at $o_2$ can be discretized as \begin{equation} \label{inter7}
[\mathbf{u}]|_{o_2}=\left(\omega'_{0, j-1}, \omega'_{0, j}, \omega'_{0, j+1}\right)\cdot\left((\mathbf{f}(i, j-1, k), \mathbf{u}(i, j, k), \mathbf{u}(i, j+1, k))^T-(\mathbf{u}(i, j-1, k), \mathbf{f}(i, j, k), \mathbf{u}(i, j+1, k))^T\right). \end{equation}
The fictitious values $\mathbf{f}(i, j-1, k), \mathbf{f}(i, j, k)$ and $\mathbf{f}(i, j+1, k)$ can then be calculated from the modified discretization of interface conditions (\ref{inter1})-(\ref{inter6}) and Eq. (\ref{inter7}).
\subsubsection{Second Order MIB Finite Difference for Central Derivatives} All the fictitious values referenced in the MIB discretization of the central derivatives can be obtained by the above schemes. At any grid point, the second order MIB method applies to all the central derivatives appearing in the governing equations of the elasticity interface problem. At an irregular grid point, if the CFD scheme references some grid points on the other side of the interface, the MIB scheme simply replaces the function values at those points by their fictitious values. For instance, the second order MIB finite differences for $\frac{\partial^2 u_1}{\partial y^2}$ at grid points $(i, j, k)$ and $(i, j+1, k)$ in the left chart of Fig. \ref{sharp} are given, respectively, by: \begin{eqnarray*} \frac{\partial^2 u_1}{\partial y^2}(i, j, k)=\frac{1}{h^2}\left(f^c_1(i, j-1, k)-2u_1(i, j, k)+f^c_1(i, j+1, k)\right), \end{eqnarray*} and \begin{eqnarray*} \frac{\partial^2 u_1}{\partial y^2}(i, j+1, k)=\frac{1}{h^2}\left(f^c_1(i, j, k)-2u_1(i, j+1, k)+u_1(i, j+2, k)\right). \end{eqnarray*}
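The replacement rule above can be sketched in a few lines: the stencil itself is the standard 3-point central difference, and only the data source for the across-interface neighbor changes. This is an illustrative sketch, not the authors' code; the test function and the stand-in fictitious value are hypothetical.

```python
def d2_central(u_left, u_center, u_right, h):
    """Standard 3-point stencil (u_{j-1} - 2 u_j + u_{j+1}) / h^2."""
    return (u_left - 2.0 * u_center + u_right) / (h * h)

def u(y):
    """Smooth test function with d2u/dy2 = 2 everywhere."""
    return y * y

h = 0.1
# Regular point: all three stencil nodes lie in the same subdomain.
regular = d2_central(u(-h), u(0.0), u(h), h)

# Irregular point: the node at y = h lies across the interface, so its
# function value is replaced by the fictitious value f_c, which stands
# for the smooth extension of u from this side (here taken exactly).
f_c = u(h)
irregular = d2_central(u(-h), u(0.0), f_c, h)
# Both calls evaluate the same stencil; only the data source differs.
```

The point of the sketch is that the MIB discretization never alters the finite difference weights; accuracy hinges entirely on how well the fictitious value approximates the smooth extension.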
\subsection{General MIB algorithms for cross derivatives}
The cross derivatives in the elasticity equations make the second order CFD scheme more complicated, as the points referenced in the CFD schemes include not only the nearest neighbor points but also the next nearest neighbor points. This situation does not occur in elliptic interface problems.
A critical idea of the MIB method is to reduce high dimensional problems to locally lower dimensional problems. As such, in determining fictitious values for elliptic interface problems, the MIB scheme carries out 1D-like extensions, which makes the MIB method highly efficient for versatile interface geometries and geometric singularities. A similar idea is applied to the present elasticity interface problem in determining fictitious values for both central and cross derivatives. Based on local interface geometric information, different schemes are designed, including the disassociation type, the extrapolation type and the neighbor combination type.
\subsubsection{Disassociation scheme}
First we define the disassociation type of fictitious values. \begin{definition} An irregular grid point associated with the cross derivatives is said to be of disassociation type provided that it is also an irregular grid point associated with the central derivatives. \end{definition}
The fictitious values on the disassociation type of irregular grid points for cross derivatives can be replaced by fictitious values found for the central derivatives. Their order of approximation was analyzed in an earlier paper \cite{Zhou:2006d}.
As illustrated in Fig. \ref{dis}, grid point $(i, j, k)$ is irregular not only for the central derivatives but also for the cross derivatives. In this case, the fictitious values for the central derivatives at grid point $(i, j, k)$ are obtained by the numerical scheme proposed for the central derivatives.
\subsubsection{Extrapolation scheme}
\begin{figure}
\caption{Illustration of the extrapolation type of irregular grid points used in cross derivatives at cross section $(z=z_k)$. In the left case, the fictitious value at the bottom red point and the function values at the other two red points are employed to approximate the fictitious value at $(i, j, k)$. In the middle case, the function value at the rightmost red point and the fictitious values at the other two red points are utilized to extrapolate the fictitious value at $(i, j, k)$. In the right case, the fictitious values at the three red points are used to approximate the fictitious value at $(i, j, k)$. }
\label{ext1}
\end{figure}
If a grid point is irregular in the CFD scheme of the cross derivatives but regular in that of the central derivatives, the aforementioned disassociation technique fails. In this case, if there exists a direction along which three values (function values or fictitious values) are available, then the extrapolation method is applied. Suppose that we are seeking the fictitious values at grid point $(i, j, k)$ and project the problem onto the $xy$-plane; according to the local geometry, the MIB scheme can be classified into three cases. \begin{itemize} \item Scheme I. Two function values and one fictitious value are used for the extrapolation. The function values at grid points $(i, j+2, k)$ and $(i, j+3, k)$ and the fictitious value at $(i, j+1, k)$ are available and are used to extrapolate the fictitious value for the cross derivatives at grid point $(i, j, k)$; see the left chart of Fig. \ref{ext1}.
\item Scheme II. One function value and two fictitious values are used for the extrapolation. The function value at grid point $(i+3, j, k)$ and the fictitious values at $(i+1, j, k)$ and $(i+2, j, k)$ are available and are used to extrapolate the fictitious value for the cross derivatives at grid point $(i, j, k)$; see the middle chart of Fig. \ref{ext1}.
\item Scheme III. Three fictitious values, at grid points $(i, j+1, k)$, $(i, j+2, k)$ and $(i, j+3, k)$, are applied to extrapolate the fictitious value for the cross derivatives at grid point $(i, j, k)$; see the right chart of Fig. \ref{ext1}. \end{itemize}
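All three schemes share the same one-dimensional extrapolation along a mesh line; they differ only in whether the three supplied values are function values or previously found fictitious values. A minimal sketch (not the authors' code) of the quadratic extrapolation on a uniform mesh:

```python
def extrapolate_quadratic(v1, v2, v3):
    """Lagrange extrapolation to node j from values at nodes j+1, j+2, j+3
    on a uniform mesh; the basis polynomials evaluated at node j give the
    weights (3, -3, 1)."""
    return 3.0 * v1 - 3.0 * v2 + v3

# Exact for quadratics: f(y) = y^2 sampled at y = 1, 2, 3 extrapolates
# to f(0) = 0.
val = extrapolate_quadratic(1.0, 4.0, 9.0)
```

Because the extrapolant is the quadratic through the three nodes, the rule is exact for quadratic data and hence third order accurate locally, consistent with the second order global scheme.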
Now we consider a very special case, in which fictitious values for cross derivatives cannot be obtained with the above schemes.
\begin{figure}
\caption{Illustration of a single point situation at cross section $(x=x_i)$. All the nearest and next nearest neighbor grid points are referenced in the second order CFD scheme at grid point $(i, j, k)$. First, the fictitious values at its two nearest neighbor grid points, $(i, j+1, k)$ and $(i, j, k+1)$, can be determined by the fictitious scheme for sharp-edged interfaces. Second, by the neighbor combination scheme, the fictitious values at the blue point $(i, j+1, k+1)$ can be approximated by the function or fictitious values at the three black points.}
\label{1pt}
\end{figure}
As illustrated in Fig. \ref{1pt}, the interface is a sphere centered at $(i, j, k)$ with radius less than the grid size $h$. The CFD scheme at grid point $(i, j, k)$ references all of its nearest and next nearest neighbor points. Note that all these points are on the other side of the interface. To attain a convergent discretization at grid point $(i, j, k)$, all the fictitious values at its neighbor points must be found. Without loss of generality, let us only consider how to find the fictitious values for $u_1$ at grid points $(i, j+1, k), (i, j, k+1)$ and $(i, j+1, k+1)$. By symmetry, the other fictitious values can be obtained in the same manner.
First, the fictitious values at grid points $(i, j+1, k)$ and $(i, j, k+1)$ can be found by the sharp interface scheme for central derivatives. Denote the obtained fictitious values by $f_1^c(i, j+1, k)$ and $f_1^c(i, j, k+1)$, respectively. Further, let the analytic extensions of the exact solution at these grid points be $\hat{u}_1(i, j+1, k)$ and $\hat{u}_1(i, j, k+1)$. The numerical extension based on the above MIB scheme satisfies $f_1^c(i, j+1, k)=\hat{u}_1(i, j+1, k)+O(h^3)$ and $f_1^c(i, j, k+1)=\hat{u}_1(i, j, k+1)+O(h^3)$. \\
Now the only fictitious value to be determined is $f_1^c(i, j+1, k+1)$. Based on the Taylor expansion and the above MIB extension estimates, the following relations hold on a uniform Cartesian mesh with grid size $h$. $$ u_1(i, j+1, k+1)=u_1(i, j, k)+\frac{\partial u_1 }{\partial y}(i, j, k)h+\frac{\partial u_1 }{\partial z}(i, j, k)h+O(h^2) $$
$$ u_1(i, j+1, k)=u_1(i, j, k)+\frac{\partial u_1 }{\partial y}(i, j, k)h+O(h^2) $$
$$ u_1(i, j, k+1)=u_1(i, j, k)+\frac{\partial u_1 }{\partial z}(i, j, k)h+O(h^2) $$
$$ f_1^c(i, j+1, k)=\hat{u}_1(i, j+1, k)+O(h^3), $$ $$ f_1^c(i, j, k+1)=\hat{u}_1(i, j, k+1)+O(h^3), $$ where $h$ is the size of the Cartesian mesh.
Therefore, let the fictitious value at grid point $(i, j+1, k+1)$ be: $$ f_1^c(i, j+1, k+1)=f_1^c(i, j+1, k)+f_1^c(i, j, k+1)-u_1 (i, j, k). $$ By direct calculation, the following estimate holds: $$ f_1^c(i, j+1, k+1)=\hat{u}_1(i, j+1, k+1)+O(h^2), $$ where $\hat{u}_1(i, j+1, k+1)$ is the analytic extension of the exact solution at grid point $(i, j+1, k+1)$.
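The $O(h^2)$ estimate for this neighbor combination rule can be checked numerically. The sketch below (not the authors' code) applies the rule to a smooth stand-in function with a nonzero cross derivative and measures the observed convergence rate between two mesh sizes.

```python
import math

def combo_error(h):
    """Error of f(j+1,k+1) = f(j+1,k) + f(j,k+1) - u(j,k) for a smooth
    stand-in function u(y,z) = exp(y+z), which has u_yz != 0 so the
    leading O(h^2) term of the rule does not vanish."""
    u = lambda y, z: math.exp(y + z)
    approx = u(h, 0.0) + u(0.0, h) - u(0.0, 0.0)
    return abs(approx - u(h, h))

e1, e2 = combo_error(0.1), combo_error(0.05)
rate = math.log(e1 / e2, 2)   # observed rate close to 2
```

Halving $h$ reduces the error by roughly a factor of four, confirming the second order local estimate stated above.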
\begin{remark} The proposed scheme for finding the fictitious values at $(i, j+1, k+1)$ may reduce the numerical accuracy locally; however, based on numerous numerical tests, the overall scheme still exhibits second order convergence globally. \end{remark}
\subsubsection{Second Order MIB Finite Difference for Cross Derivatives} All the fictitious values at irregular grid points are guaranteed to be found by the above extension and combination schemes. The local combination scheme may lead to some reduction in numerical accuracy; however, in most cases this scheme is needed only rarely. Based on our numerous numerical tests, the MIB scheme still attains second order accuracy in both the $L_\infty$ and $L_2$ error norms for the elasticity interface problem.
Similar to the MIB discretization of the central derivatives, in the discretization of the cross derivatives, when a grid point from the other subdomain is referenced, the fictitious values at that point replace the function values in the CFD discretization. For instance, the MIB discretization of $\frac{\partial^2 u_1}{\partial y\partial z}$ at grid point $(i, j, k)$ in Fig. \ref{1pt} is given by:
\begin{eqnarray*} \frac{\partial^2 u_1}{\partial y\partial z}(i, j, k)=\frac{f_1^c(i, j+1, k+1)+f_1^c(i, j-1, k-1)-f_1^c(i, j+1, k-1)-f_1^c(i, j-1, k+1)}{4h^2}. \end{eqnarray*}
\section{Numerical experiments} \label{validation}
Numerous numerical tests are designed in this section to investigate the accuracy, efficiency and robustness of the proposed MIB method for solving 3D elasticity interface problems with both smooth and non-smooth material interfaces. We consider a large number of complex geometric shapes, including sphere, hemisphere, ellipsoid, cylinder, torus, acorn-like, apple-shaped, flower-like, and pentagon-star shapes in our tests. Both piecewise constant material parameters and position-dependent material parameters are tested in our investigation. Furthermore, problems with small and large contrast in Poisson's ratio and shear modulus across the interface are also examined.
The standard bi-conjugate gradient method is employed to solve the linear algebraic system generated by the MIB discretization of the governing equations of the elasticity interface problems. Numerical solutions are compared with the designed analytical solutions. Both the $L_2$ and $L_\infty$ error measurements are employed in examining the accuracy and convergence of the MIB algorithm for 3D elasticity interface problems: $$ L_\infty(u_k):=\max_{m, n, l}{|u_k(m, n, l)-\hat{u}_k(m, n, l)|}, \quad k=1, 2, 3, $$ and $$ L_2(u_k):=\sqrt{\frac{1}{n_x n_y n_z}\sum^{n_x}_{m=1}\sum^{n_y}_{n=1}\sum^{n_z}_{l=1}(u_k(m, n, l)-\hat{u}_k(m, n, l))^2}, $$ where $m=1, 2, \cdots, n_x$, $n=1, 2, \cdots, n_y$, $l=1, 2, \cdots, n_z$, and $u_k$ and $\hat{u}_k$ are the numerical and exact solutions, respectively. Here $L_\infty$ is the maximum error over all the grid points in the computational domain.
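The two norms above are straightforward to evaluate on a Cartesian grid. The sketch below (not the authors' code) computes both on a tiny made-up $2\times2\times2$ grid; the array values are purely illustrative.

```python
import math

def error_norms(u_num, u_exact):
    """L_inf and L_2 errors between two solutions stored as nested
    lists indexed [m][n][l] on an n_x * n_y * n_z grid."""
    diffs = [abs(a - b)
             for pm, qm in zip(u_num, u_exact)
             for pn, qn in zip(pm, qm)
             for a, b in zip(pn, qn)]
    l_inf = max(diffs)                                   # max over all nodes
    l_2 = math.sqrt(sum(d * d for d in diffs) / len(diffs))  # RMS error
    return l_inf, l_2

# Hypothetical 2 x 2 x 2 numerical and exact solutions.
u_num = [[[1.0, 2.1], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.2]]]
u_exact = [[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]]
l_inf, l_2 = error_norms(u_num, u_exact)   # l_inf picks the 0.2 outlier
```

Note that $L_\infty$ is dominated by the single worst node (typically near the interface), whereas $L_2$ averages over the whole domain, which is why both are reported in the tables below.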
\subsection{Smooth interface}
\subsubsection{Piecewise constant shear modulus}
In this section, the proposed MIB method is tested for the piecewise constant material parameters associated with smooth material interfaces. Problems with both large and small contrasts of Poisson's ratio and shear modulus across the interface are considered in our investigation.
\textbf{Example~1.} In this example, the computational domain is set to $[-3, 3]\times[-3, 3]\times[-3, 3]$ and the interface is the sphere defined by $x^2+y^2+z^2=4$. A sphere is the simplest irregular or complex interface in 3D. The exact solution is designed to be $$ u_1(x, y, z)= \left\{\begin{array}{ll} x^2+y^2+z^2-4+\cos(x)\cos(y)\cos(z), &\ \ \mbox{in}\ \ \Omega^+,\\ \cos(x)\cos(y)\cos(z), &\ \ \mbox{in}\ \ \Omega^-, \end{array}\right. $$ $$ u_2(x, y, z)= \left\{\begin{array}{ll} x^2+y^2+z^2-4+xy+\cos(x)\cos(y)\cos(z), &\ \ \mbox{in}\ \ \Omega^+,\\ xy+\cos(x)\cos(y)\cos(z), &\ \ \mbox{in}\ \ \Omega^-, \end{array}\right. $$ and $$ u_3(x, y, z)= \left\{\begin{array}{ll} x^2+y^2+z^2-4+yz+\cos(x)\cos(y)\cos(z), &\ \ \mbox{in}\ \ \Omega^+,\\ yz+\cos(x)\cos(y)\cos(z), &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$
Note that the above solution guarantees the continuity of the solution across the interface. The Dirichlet boundary conditions and interface jump conditions can be derived from the above exact solution. We consider a series of three cases to test the robustness of the proposed MIB method for large contrasts in material parameters across the interface.
\textbf{Case~1.} First, let the piecewise constant Poisson's ratio and shear modulus be $$ \nu= \left\{\begin{array}{ll} \nu^+=0.20, &\ \ \mbox{in}\ \ \Omega^+,\\ \nu^-=0.24, &\ \ \mbox{in}\ \ \Omega^-, \end{array}\right. $$ and $$ \mu= \left\{\begin{array}{ll} \mu^+=1500000, &\ \ \mbox{in}\ \ \Omega^+,\\ \mu^-=2000000, &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$
Table \ref{Ex1_Case1_linf} lists the grid refinement analysis for the $L_\infty$ error of Case 1 of Example 1. We obtain quite robust second order accuracy in the $L_\infty$ error norm. It is also interesting to examine the convergence in the $L_2$ error norm. Table \ref{Ex1_Case1_2} presents the grid refinement analysis for the $L_2$ error of Case 1 of Example 1. We again find highly accurate solutions.
\begin{table} \caption{ The $L_\infty$ errors for the Case 1 of Example 1.} \label{Ex1_Case1_linf} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $\ L_\infty(u_1) $ & Order &$ L_\infty(u_2)$ & Order &$ L_\infty(u_3)$ &Order \\ \hline $10\times 10\times10$ &$6.70\times 10^{-2}$ & &$6.31\times 10^{-2}$ & &$5.68\times 10^{-2}$ & \\ $20\times 20\times20$ &$1.39\times 10^{-2}$ &2.27 &$1.36\times 10^{-2}$ &2.21 &$1.31\times 10^{-2}$ & 2.12 \\ $40\times40\times40$ &$2.72\times 10^{-3}$ &2.35 &$2.94\times 10^{-3}$ &2.21 &$2.69\times 10^{-3}$ &2.28 \\ $80\times80\times80$ &$7.58\times 10^{-4}$ &1.84 &$7.28\times 10^{-4}$ &2.01 &$7.17\times 10^{-4}$ &1.91 \\ \hline \end{tabular} \end{table}
\begin{table} \caption{ The $L_2$ errors for the Case 1 of Example 1.} \label{Ex1_Case1_2} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $L_2(u_1)$ &Order &$L_2(u_2)$ &Order &$L_2(u_3)$ &Order \\ \hline $10\times 10\times10$ &$1.30\times 10^{-2}$ & &$1.29\times 10^{-2}$ & &$1.30\times 10^{-2}$ & \\ $20\times 20\times20$ &$3.20\times 10^{-3}$ &2.02 &$3.20\times 10^{-3}$ &2.01 &$3.15\times 10^{-3}$ &2.05 \\ $40\times40\times40$ &$8.34\times 10^{-4}$ &1.94 &$8.39\times 10^{-4}$ &1.93 &$8.27\times 10^{-4}$ &1.93 \\ $80\times80\times80$ &$2.25\times 10^{-4}$ &1.89 &$2.25\times 10^{-4}$ &1.90 &$2.23\times 10^{-4}$ &1.89 \\ \hline \end{tabular}
\end{table}
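The "Order" columns in these tables are the observed convergence rates $\log_2(e_{\text{coarse}}/e_{\text{fine}})$ between successive mesh doublings. As a quick sketch (not the authors' code), the computation below reproduces the $L_\infty(u_1)$ column of Table \ref{Ex1_Case1_linf}.

```python
import math

# L_inf(u_1) errors from Table Ex1_Case1_linf at n = 10, 20, 40, 80
# grid points per direction.
errors = [6.70e-2, 1.39e-2, 2.72e-3, 7.58e-4]

# Observed rate between consecutive refinements: log2(e_coarse / e_fine).
orders = [math.log(errors[i] / errors[i + 1], 2) for i in range(3)]
# Rounded to two decimals this gives 2.27, 2.35, 1.84, matching the table.
```

Rates hovering around 2 across all refinements are what "second order accuracy" means in the grid refinement analyses throughout this section.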
\begin{figure}
\caption{Numerical solution to the sphere interface problem of Case 1 with 40 grid points along each direction. Left chart: $u_1$; Middle chart: $u_2$; Right chart: $u_3$. }
\label{shpere_sol}
\end{figure}
\begin{figure}
\caption{Numerical error in solving the sphere interface problem of Case 1 with 40 grid points along each direction. Left chart: $u_1$; Middle chart: $u_2$; Right chart: $u_3$. }
\label{shpere_err}
\end{figure}
Figures \ref{shpere_sol} and \ref{shpere_err} illustrate the solution and error with 40 grid points along each direction. Apparently, the errors are quite small.
\textbf{Case~2.} In this case, we test the proposed MIB method for large contrasts in material parameters across the interface. We set a contrast of 1000 in the Poisson's ratio across the interface, $$ \nu= \left\{\begin{array}{ll} \nu^+=0.00024, &\ \ \mbox{in}\ \ \Omega^+,\\ \nu^-=0.24, &\ \ \mbox{in}\ \ \Omega^-, \end{array}\right. $$ while the shear modulus remains unchanged, $$ \mu= \left\{\begin{array}{ll} \mu^+=1500000, &\ \ \mbox{in}\ \ \Omega^+,\\ \mu^-=2000000, &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$
Table \ref{Ex1_Case2_linf} lists the grid refinement analysis for the $L_\infty$ error. Similarly, Table \ref{Ex1_Case2_2} gives the grid refinement analysis for the $L_2$ error of Case 2. It is seen that neither the accuracy nor the convergence is affected by the large contrast in the Poisson's ratio across the interface.
\begin{table} \caption{The $L_\infty$ errors for the Case 2 of Example 1.} \label{Ex1_Case2_linf} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $\ L_\infty(u_1) $ & Order &$ L_\infty(u_2)$ & Order &$ L_\infty(u_3)$ &Order \\ \hline $10\times 10\times10$ &$6.21\times 10^{-2}$ & &$5.95\times 10^{-2}$ & &$5.45\times 10^{-2}$ & \\ $20\times 20\times20$ &$1.55\times 10^{-2}$ &2.00 &$1.55\times 10^{-2}$ &1.94 &$1.53\times 10^{-2}$ & 1.83 \\ $40\times40\times40$ &$3.13\times 10^{-3}$ &2.31 &$3.45\times 10^{-3}$ &2.17 &$3.28\times 10^{-3}$ &2.22 \\ $80\times80\times80$ &$8.19\times 10^{-4}$ &1.93 &$7.89\times 10^{-4}$ &2.13 &$7.88\times 10^{-4}$ &2.06 \\ \hline \end{tabular} \end{table}
\begin{table} \caption{The $L_2$ errors for the Case 2 of Example 1.} \label{Ex1_Case2_2} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $L_2(u_1)$ &Order &$L_2(u_2)$ &Order &$L_2(u_3)$ &Order \\ \hline $10\times 10\times10$ &$1.29\times 10^{-2}$ & &$1.28\times 10^{-2}$ & &$1.28\times 10^{-2}$ & \\ $20\times 20\times20$ &$3.29\times 10^{-3}$ &1.97 &$3.28\times 10^{-3}$ &1.96 &$3.22\times 10^{-3}$ &1.99 \\ $40\times40\times40$ &$8.49\times 10^{-4}$ &1.95 &$8.53\times 10^{-4}$ &1.94 &$8.41\times 10^{-4}$ &1.94 \\ $80\times80\times80$ &$2.29\times 10^{-4}$ &1.89 &$2.29\times 10^{-4}$ &1.90 &$2.28\times 10^{-4}$ &1.89 \\ \hline \end{tabular} \end{table}
\textbf{Case~3.} Having tested the proposed MIB method for large contrast in the Poisson's ratio, let us enlarge the contrast of the shear modulus across the interface, while the Poisson's ratio is unchanged, $$ \nu= \left\{\begin{array}{ll} \nu^+=0.20, &\ \ \mbox{in}\ \ \Omega^+,\\ \nu^-=0.24, &\ \ \mbox{in}\ \ \Omega^-, \end{array}\right. $$ and $$ \mu= \left\{\begin{array}{ll} \mu^+=2000, &\ \ \mbox{in}\ \ \Omega^+,\\ \mu^-=2000000, &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$
\begin{table} \caption{The $L_\infty$ errors for the Case 3 of Example 1.} \label{Ex13inf} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $\ L_\infty(u_1) $ & Order &$ L_\infty(u_2)$ & Order &$ L_\infty(u_3)$ &Order \\ \hline $10\times 10\times10$ &$6.70\times 10^{-2}$ & &$6.31\times 10^{-2}$ & &$5.68\times 10^{-2}$ & \\ $20\times 20\times20$ &$1.40\times 10^{-2}$ &2.26 &$1.41\times 10^{-2}$ &1.94 &$1.31\times 10^{-2}$ &2.12 \\ $40\times40\times40$ &$2.72\times 10^{-3}$ &2.36 &$2.39\times 10^{-3}$ &2.16 &$2.69\times 10^{-3}$ &2.28 \\ $80\times80\times80$ &$7.58\times 10^{-4}$ &1.85 &$7.28\times 10^{-4}$ &1.72 &$7.17\times 10^{-4}$ &1.91 \\ \hline \end{tabular} \end{table}
\begin{table} \caption{The $L_2$ errors for the Case 3 of Example 1.} \label{Ex1_Case3_2} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $L_2(u_1)$ &Order &$L_2(u_2)$ &Order &$L_2(u_3)$ &Order \\ \hline $10\times 10\times10$ &$1.30\times 10^{-2}$ & &$1.29\times 10^{-2}$ & &$1.30\times 10^{-2}$ & \\ $20\times 20\times20$ &$3.19\times 10^{-3}$ &2.03 &$3.20\times 10^{-3}$ &2.01 &$3.15\times 10^{-3}$ &2.05 \\ $40\times40\times40$ &$8.34\times 10^{-4}$ &1.94 &$8.39\times 10^{-4}$ &1.94 &$8.27\times 10^{-4}$ &1.93 \\ $80\times80\times80$ &$2.25\times 10^{-4}$ &1.89 &$2.24\times 10^{-4}$ &1.91 &$2.23\times 10^{-4}$ &1.89 \\ \hline \end{tabular} \end{table}
Table \ref{Ex13inf} gives the grid refinement analysis for the $L_\infty$ error of Case 3. In Table \ref{Ex1_Case3_2}, we provide the grid refinement analysis for the $L_2$ error of Case 3. Obviously, the accuracy and convergence are the same as those in Case 1. Therefore, the proposed method is very robust against large contrasts in material parameters. We have obtained second order accuracy in both $L_\infty$ and $L_2$ error norms in all three cases in Example 1.
\textbf{Example~2.} In this example, we modify the interface geometry. Let the computational domain be $[-3, 3]\times[-3, 3]\times[-3, 3]$ and the interface be given as a hemisphere $$ \left\{\begin{array}{ll} x^2+y^2+z^2=4,\\ z\geq 0, \end{array}\right. $$
To ensure the continuity of the solution across the interface, the analytic solution adopted in this example is the same as that in Example 1. We again test the numerical scheme for three different cases of Poisson's ratio and shear modulus; in each case the material parameters are inherited from the corresponding case in Example 1.
\textbf{Case~1.} Table \ref{Ex2_Case1_linf} gives the grid refinement analysis of the $L_\infty$ error. Similarly, the grid refinement analysis of the $L_2$ error is presented in Table \ref{Ex2_case1_2}. It is seen that both the level of accuracy and the order of convergence are the same as those in Case 1 of Example 1, which suggests that the proposed method is insensitive to the change in the geometry.
\begin{table} \caption{The $L_\infty$ errors for the Case 1 of Example 2.} \label{Ex2_Case1_linf} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $\ L_\infty(u_1) $ & Order &$ L_\infty(u_2)$ & Order &$ L_\infty(u_3)$ &Order \\ \hline $10\times 10\times10$ &$6.38\times 10^{-2}$ & &$5.93\times 10^{-2}$ & &$6.17\times 10^{-2}$ & \\ $20\times 20\times20$ &$1.35\times 10^{-2}$ &2.24 &$1.33\times 10^{-2}$ &2.16 &$1.35\times 10^{-2}$ & 2.19 \\ $40\times40\times40$ &$2.67\times 10^{-3}$ &2.34 &$2.97\times 10^{-3}$ &2.16 &$2.70\times 10^{-3}$ &2.32 \\ $80\times80\times80$ &$6.28\times 10^{-4}$ &2.09 &$6.52\times 10^{-4}$ &2.19 &$5.91\times 10^{-4}$ &2.19 \\ \hline \end{tabular} \end{table}
\begin{table} \caption{The $L_2$ errors for the Case 1 of Example 2.} \label{Ex2_case1_2} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $L_2(u_1)$ &Order &$L_2(u_2)$ &Order &$L_2(u_3)$ &Order \\ \hline $10\times 10\times10$ &$1.33\times 10^{-2}$ & &$1.32\times 10^{-2}$ & &$1.57\times 10^{-2}$ & \\ $20\times 20\times20$ &$3.35\times 10^{-3}$ &1.99 &$3.33\times 10^{-3}$ &1.99 &$3.44\times 10^{-3}$ &2.19 \\ $40\times40\times40$ &$8.61\times 10^{-4}$ &1.96 &$8.61\times 10^{-4}$ &1.95 &$8.65\times 10^{-4}$ &1.99 \\ $80\times80\times80$ &$2.01\times 10^{-4}$ &2.10 &$2.02\times 10^{-4}$ &2.09 &$2.01\times 10^{-4}$ &2.11 \\ \hline \end{tabular} \end{table}
Figures \ref{hemi_sol} and \ref{hemi_err} show the numerical solution and error of Case 1 of Example 2, respectively, with 40 grid points along each direction of the computational domain.
\begin{figure}
\caption{Numerical solution to Case 1 of the hemisphere interface problem with 40 grid points along each direction of the computational domain. Left chart: $u_1$; Middle chart: $u_2$; Right chart: $u_3$. }
\label{hemi_sol}
\end{figure}
\begin{figure}
\caption{Numerical error in solving Case 1 of the hemisphere interface problem with 40 grid points along each direction of the computational domain. Left chart: $u_1$; Middle chart: $u_2$; Right chart: $u_3$. }
\label{hemi_err}
\end{figure}
\textbf{Case~2.} Table \ref{Ex2_Case2_linf} gives the grid refinement analysis of the $L_\infty$ error for the large contrast in the Poisson's ratio across the interface. The numerical behavior is quite similar to that of Case 2 of Example 1. \begin{table} \caption{The $L_\infty$ errors for the Case 2 of Example 2.} \label{Ex2_Case2_linf} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $\ L_\infty(u_1) $ & Order &$ L_\infty(u_2)$ & Order &$ L_\infty(u_3)$ &Order \\ \hline $10\times 10\times10$ &$5.87\times 10^{-2}$ & &$5.59\times 10^{-2}$ & &$5.93\times 10^{-2}$ & \\ $20\times 20\times20$ &$1.48\times 10^{-2}$ &1.99 &$1.50\times 10^{-2}$ &1.90 &$1.58\times 10^{-2}$ & 1.91 \\ $40\times40\times40$ &$3.00\times 10^{-3}$ &2.30 &$3.47\times 10^{-3}$ &2.11 &$3.30\times 10^{-3}$ &2.26 \\ $80\times80\times80$ &$6.80\times 10^{-4}$ &2.14 &$7.00\times 10^{-4}$ &2.31 &$6.72\times 10^{-4}$ &2.30 \\ \hline \end{tabular} \end{table}
Table \ref{Ex2_case2_2} lists the grid refinement analysis of the $L_2$ error for the case of a large contrast in Poisson's ratio across the interface. We observe second order convergence in the $L_2$ error norm.
\begin{table} \caption{The $L_2$ errors for the Case 2 of Example 2.} \label{Ex2_case2_2} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $L_2(u_1)$ &Order &$L_2(u_2)$ &Order &$L_2(u_3)$ &Order \\ \hline $10\times 10\times10$ &$1.33\times 10^{-2}$ & &$1.32\times 10^{-2}$ & &$1.62\times 10^{-2}$ & \\ $20\times 20\times20$ &$3.42\times 10^{-3}$ &1.96 &$3.40\times 10^{-3}$ &1.96 &$3.55\times 10^{-3}$ &2.19 \\ $40\times40\times40$ &$8.73\times 10^{-4}$ &1.97 &$8.75\times 10^{-4}$ &1.96 &$8.89\times 10^{-4}$ &2.00 \\ $80\times80\times80$ &$2.02\times 10^{-4}$ &2.11 &$2.02\times 10^{-4}$ &2.11 &$2.13\times 10^{-4}$ &2.06 \\ \hline \end{tabular} \end{table}
\textbf{Case~3.} Table \ref{Ex2_Case3_linf} offers the grid refinement analysis of the $L_\infty$ error for the case of a large contrast in shear modulus across the interface. Table \ref{Ex2_case3_2} gives the corresponding analysis of the $L_2$ error. In all three cases of Example 2, second order convergence in both the $L_\infty$ and $L_2$ errors is essentially reached. The level of accuracy is the same as that found in Example 1.
\begin{table} \caption{The $L_\infty$ errors for Case 3 of Example 2.} \label{Ex2_Case3_linf} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $\ L_\infty(u_1) $ & Order &$ L_\infty(u_2)$ & Order &$ L_\infty(u_3)$ &Order \\ \hline $10\times 10\times10$ &$6.38\times 10^{-2}$ & &$5.93\times 10^{-2}$ & &$6.17\times 10^{-2}$ & \\ $20\times 20\times20$ &$1.35\times 10^{-2}$ &2.24 &$1.33\times 10^{-2}$ &2.16 &$1.35\times 10^{-2}$ &2.19 \\ $40\times40\times40$ &$2.67\times 10^{-3}$ &2.34 &$2.97\times 10^{-3}$ &2.16 &$2.70\times 10^{-3}$ &2.32 \\ $80\times80\times80$ &$6.28\times 10^{-4}$ &2.09 &$6.52\times 10^{-4}$ &2.19 &$5.91\times 10^{-4}$ &2.19 \\ \hline \end{tabular} \end{table}
\begin{table} \caption{The $L_2$ errors for the Case 3 of Example 2.} \label{Ex2_case3_2} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $L_2(u_1)$ &Order &$L_2(u_2)$ &Order &$L_2(u_3)$ &Order \\ \hline $10\times 10\times10$ &$1.33\times 10^{-2}$ & &$1.32\times 10^{-2}$ & &$1.57\times 10^{-2}$ & \\ $20\times 20\times20$ &$3.35\times 10^{-3}$ &1.99 &$3.33\times 10^{-3}$ &1.99 &$3.44\times 10^{-3}$ &2.19 \\ $40\times40\times40$ &$8.60\times 10^{-4}$ &1.96 &$8.61\times 10^{-4}$ &1.95 &$8.65\times 10^{-4}$ &1.99 \\ $80\times80\times80$ &$2.01\times 10^{-4}$ &2.10 &$2.00\times 10^{-4}$ &2.10 &$2.00\times 10^{-4}$ &2.11 \\ \hline \end{tabular} \end{table}
\textbf{Example~3.} In this example, the computational domain is set to be: $[-3, 3]\times[-4, 4]\times[-2, 2]$ with an ellipsoid interface defined as $\frac{x^2}{4}+\frac{y^2}{9}+z^2=1$.
The Dirichlet boundary condition and interface jump conditions are determined from the following exact solutions $$ u_1(x, y, z)= \left\{\begin{array}{ll} \frac{x^2}{4}+\frac{y^2}{9}+z^2-1+\cos(x)\cos(y)\cos(z), &\ \ \mbox{in}\ \ \Omega^+,\\ \cos(x)\cos(y)\cos(z), &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$ $$ u_2(x, y, z)= \left\{\begin{array}{ll} \frac{x^2}{4}+\frac{y^2}{9}+z^2-1+xy+\cos(x)\cos(y)\cos(z), &\ \ \mbox{in}\ \ \Omega^+,\\ xy+\cos(x)\cos(y)\cos(z), &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$ and $$ u_3(x, y, z)= \left\{\begin{array}{ll} \frac{x^2}{4}+\frac{y^2}{9}+z^2-1+yz+\cos(x)\cos(y)\cos(z), &\ \ \mbox{in}\ \ \Omega^+,\\ yz+\cos(x)\cos(y)\cos(z), &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$ Note that solution continuity across the interface is again satisfied, since the jump terms vanish on the interface. In this example, the three cases of material parameters used in the previous two examples are adopted to examine the sensitivity of the proposed MIB method to changes in interface geometry.
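The continuity of these exact solutions can be checked directly: in each component the jump $u^+ - u^-$ equals the level-set function of the ellipsoid, which vanishes on the interface. A small sketch for $u_1$ (function names are ours):

```python
import math

def phi(x, y, z):
    """Level-set function of the ellipsoid interface of Example 3."""
    return x**2 / 4 + y**2 / 9 + z**2 - 1

def u1_plus(x, y, z):   # exact u_1 in Omega^+
    return phi(x, y, z) + math.cos(x) * math.cos(y) * math.cos(z)

def u1_minus(x, y, z):  # exact u_1 in Omega^-
    return math.cos(x) * math.cos(y) * math.cos(z)

# On the interface phi = 0, so the jump [u_1] = u1_plus - u1_minus = phi vanishes.
x, y, z = 2.0 * math.cos(0.3), 0.0, math.sin(0.3)  # a point with phi(x, y, z) = 0
assert abs(phi(x, y, z)) < 1e-12
assert abs(u1_plus(x, y, z) - u1_minus(x, y, z)) < 1e-12
```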
\textbf{Case~1.} Grid refinement analysis of the $L_\infty$ error for the ellipsoid interface is presented in Table \ref{Ex3_case1_linf}, and a similar analysis of the $L_2$ error is listed in Table \ref{Ex3_case1_2}. Again, we see the same type of behavior in accuracy and convergence as in the previous examples.
\begin{table} \caption{The $L_\infty$ errors for Case 1 of Example 3.} \label{Ex3_case1_linf} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $\ L_\infty(u_1) $ & Order &$ L_\infty(u_2)$ & Order &$ L_\infty(u_3)$ &Order \\ \hline $10\times 10\times10$ &$3.96\times 10^{-2}$ & &$5.23\times 10^{-2}$ & &$3.16\times 10^{-2}$ & \\ $20\times 20\times20$ &$1.52\times 10^{-2}$ &1.38 &$1.31\times 10^{-2}$ &2.00 &$8.07\times 10^{-3}$ & 1.97 \\ $40\times40\times40$ &$2.82\times 10^{-3}$ &2.43 &$3.45\times 10^{-3}$ &1.93 &$1.90\times 10^{-3}$ &2.09 \\ $80\times80\times80$ &$6.99\times 10^{-4}$ &2.01 &$8.81\times 10^{-4}$ &1.97 &$4.83\times 10^{-4}$ &1.98 \\ \hline \end{tabular} \end{table}
\begin{table} \caption{The $L_2$ errors for the Case 1 of Example 3.} \label{Ex3_case1_2} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $L_2(u_1)$ &Order &$L_2(u_2)$ &Order &$L_2(u_3)$ &Order \\ \hline $10\times 10\times10$ &$1.16\times 10^{-2}$ & &$1.45\times 10^{-2}$ & &$6.72\times 10^{-3}$ & \\ $20\times 20\times20$ &$3.54\times 10^{-3}$ &1.71 &$3.96\times 10^{-3}$ &1.88 &$1.86\times 10^{-3}$ &1.85 \\ $40\times40\times40$ &$8.61\times 10^{-4}$ &2.04 &$1.08\times 10^{-3}$ &1.88 &$4.78\times 10^{-4}$ &1.96 \\ $80\times80\times80$ &$2.23\times 10^{-4}$ &1.95 &$2.86\times 10^{-4}$ &1.92 &$1.26\times 10^{-4}$ &1.93 \\ \hline \end{tabular} \end{table}
The numerical solution and error of the ellipsoid interface problem are illustrated in Figs. \ref{ell_sol}-\ref{ell_err} with 40 grid points along each direction of the computational domain.
\begin{figure}
\caption{Numerical solution to Case 1 of the ellipsoid interface problem with 40 grid points along each direction of the computational domain. Left chart: $u_1$; Middle chart: $u_2$; Right chart: $u_3$. }
\label{ell_sol}
\end{figure}
\begin{figure}
\caption{Numerical error for solving Case 1 of the ellipsoid interface problem with 40 grid points along each direction of the computational domain. Left chart: $u_1$; Middle chart: $u_2$; Right chart: $u_3$. }
\label{ell_err}
\end{figure}
\textbf{Case~2.} Grid refinement analysis of the $L_\infty$ error is presented in Table \ref{Ex3_case2_linf}, and a similar analysis of the $L_2$ error is given in Table \ref{Ex3_case2_2}. \begin{table} \caption{The $L_\infty$ errors for Case 2 of Example 3.} \label{Ex3_case2_linf} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $\ L_\infty(u_1) $ & Order &$ L_\infty(u_2)$ & Order &$ L_\infty(u_3)$ &Order \\ \hline $10\times 10\times10$ &$5.00\times 10^{-2}$ & &$5.10\times 10^{-2}$ & &$3.37\times 10^{-2}$ & \\ $20\times 20\times20$ &$1.26\times 10^{-2}$ &1.99 &$1.37\times 10^{-2}$ &1.90 &$8.01\times 10^{-3}$ & 2.07 \\ $40\times40\times40$ &$3.24\times 10^{-3}$ &1.96 &$3.63\times 10^{-3}$ &1.92 &$2.00\times 10^{-3}$ &2.00 \\ $80\times80\times80$ &$7.73\times 10^{-4}$ &2.07 &$9.98\times 10^{-4}$ &1.87 &$5.15\times 10^{-4}$ &1.96 \\ \hline \end{tabular} \end{table}
\begin{table} \caption{The $L_2$ errors for Case 2 of Example 3.} \label{Ex3_case2_2} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $L_2(u_1)$ &Order &$L_2(u_2)$ &Order &$L_2(u_3)$ &Order \\ \hline $10\times 10\times10$ &$1.20\times 10^{-2}$ & &$1.50\times 10^{-2}$ & &$6.93\times 10^{-3}$ & \\ $20\times 20\times20$ &$3.77\times 10^{-3}$ &1.67 &$4.13\times 10^{-3}$ &1.70 &$1.91\times 10^{-3}$ &1.86 \\ $40\times40\times40$ &$9.18\times 10^{-4}$ &2.04 &$1.12\times 10^{-3}$ &1.88 &$4.88\times 10^{-4}$ &1.97 \\ $80\times80\times80$ &$2.36\times 10^{-4}$ &1.96 &$2.94\times 10^{-4}$ &1.93 &$1.30\times 10^{-4}$ &1.91 \\ \hline \end{tabular} \end{table}
\textbf{Case~3.} Grid refinement analysis of the $L_\infty$ error is presented in Table \ref{Ex3_case3_linf}, and the corresponding analysis in terms of the $L_2$ error is given in Table \ref{Ex3_case1_3}.
Second order convergence of the MIB algorithm is essentially observed in all the numerical tests of Example 3.
\begin{table} \caption{The $L_\infty$ errors for the Case 3 of Example 3.} \label{Ex3_case3_linf} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $\ L_\infty(u_1) $ & Order &$ L_\infty(u_2)$ & Order &$ L_\infty(u_3)$ &Order \\ \hline $10\times 10\times10$ &$3.97\times 10^{-2}$ & &$5.23\times 10^{-2}$ & &$3.16\times 10^{-2}$ & \\ $20\times 20\times20$ &$1.22\times 10^{-2}$ &1.70 &$1.31\times 10^{-2}$ &2.00 &$8.07\times 10^{-3}$ &1.97 \\ $40\times40\times40$ &$2.82\times 10^{-3}$ &2.11 &$3.45\times 10^{-3}$ &1.93 &$1.90\times 10^{-3}$ &2.07 \\ $80\times80\times80$ &$6.99\times 10^{-4}$ &2.01 &$8.81\times 10^{-4}$ &1.97 &$4.82\times 10^{-4}$ &1.98 \\ \hline \end{tabular} \end{table}
\begin{table} \caption{The $L_2$ errors for the Case 3 of Example 3.} \label{Ex3_case1_3} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $L_2(u_1)$ &Order &$L_2(u_2)$ &Order &$L_2(u_3)$ &Order \\ \hline $10\times 10\times10$ &$1.16\times 10^{-2}$ & &$1.45\times 10^{-2}$ & &$7.72\times 10^{-3}$ & \\ $20\times 20\times20$ &$3.24\times 10^{-3}$ &1.84 &$3.96\times 10^{-3}$ &1.99 &$1.86\times 10^{-3}$ &2.05 \\ $40\times40\times40$ &$8.61\times 10^{-4}$ &1.91 &$1.08\times 10^{-3}$ &1.87 &$4.78\times 10^{-4}$ &1.96 \\ $80\times80\times80$ &$2.23\times 10^{-4}$ &1.95 &$2.86\times 10^{-4}$ &1.92 &$1.26\times 10^{-4}$ &1.92 \\ \hline \end{tabular} \end{table}
\begin{remark} In all of the above examples, the continuity of the solution across the interface, i.e., the no-fracture condition, has been carefully maintained in designing the analytical solutions. However, in real-world problems, fractures at the interface are very common. In the following two numerical experiments, the continuity condition on the function values across the interface is dropped, and we test our method on general jumps of the function values across the interface. Numerically, this situation is slightly more difficult to deal with. \end{remark}
\textbf{Example~4.} The computational domain is set to be $[-2, 2]\times[-2, 2]\times[-2, 4.4]$ with a cylinder interface defined as $$ \left\{\begin{array}{l} x^2+y^2=\frac{\pi^2}{4},\\ 0\leq z\leq \pi. \end{array}\right. $$
The Dirichlet boundary condition and interface conditions are determined from the following exact solutions. $$ u_1(x, y, z)= \left\{\begin{array}{ll} x^2+y^2+z^2-4, &\ \ \mbox{in}\ \ \Omega^+,\\ \cos(x)\cos(y)\cos(z), &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$
$$ u_2(x, y, z)= \left\{\begin{array}{ll} x^2+y^2+z^2+xy-4, &\ \ \mbox{in}\ \ \Omega^+,\\ xy+\cos(x)\cos(y)\cos(z), &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$ and $$ u_3(x, y, z)= \left\{\begin{array}{ll} x^2+y^2+z^2+yz-4, &\ \ \mbox{in}\ \ \Omega^+,\\ yz+\cos(x)\cos(y)\cos(z), &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$
The values of the Poisson's ratio and shear modulus are, respectively, set to $$ \nu= \left\{\begin{array}{ll} \nu^+=0.20, &\ \ \mbox{in}\ \ \Omega^+,\\ \nu^-=0.24, &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$ and $$ \mu= \left\{\begin{array}{ll} \mu^+=1500000, &\ \ \mbox{in}\ \ \Omega^+,\\ \mu^-=2000000, &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$
Table \ref{Ex4_linf} offers the grid refinement analysis of the $L_\infty$ error for Example 4. Similar grid refinement analysis of the $L_2$ error is given in Table \ref{Ex4_2} for Example 4. A high level of accuracy and a robust order of convergence are observed from these tests.
\begin{table} \caption{The $L_\infty$ errors of Example 4.} \label{Ex4_linf} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $\ L_\infty(u_1) $ & Order &$ L_\infty(u_2)$ & Order &$ L_\infty(u_3)$ &Order \\ \hline $20\times 20\times20$ &$4.68\times 10^{-3}$ & &$4.68\times 10^{-3}$ & &$7.07\times 10^{-3}$ & \\ $40\times 40\times40$ &$1.16\times 10^{-3}$ &2.01 &$1.17\times 10^{-3}$ &2.00 &$1.74\times 10^{-3}$ & 2.02 \\ $80\times80\times80$ &$2.87\times 10^{-4}$ &2.02 &$2.91\times 10^{-4}$ &2.00 &$4.23\times 10^{-4}$ &2.04 \\ \hline \end{tabular} \end{table}
\begin{table} \caption{The $L_2$ errors of Example 4.} \label{Ex4_2} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $L_2(u_1)$ &Order &$L_2(u_2)$ &Order &$L_2(u_3)$ &Order \\ \hline $20\times 20\times20$ &$1.04\times 10^{-3}$ & &$1.04\times 10^{-3}$ & &$1.58\times 10^{-3}$ & \\ $40\times 40\times40$ &$2.61\times 10^{-4}$ &1.99 &$2.62\times 10^{-4}$ &1.99 &$3.86\times 10^{-4}$ &2.03 \\ $80\times80\times80$ &$6.69\times 10^{-5}$ &1.96 &$6.77\times 10^{-5}$ &1.95 &$9.77\times 10^{-5}$ &1.98 \\ \hline \end{tabular} \end{table}
\begin{figure}
\caption{Numerical solution to the cylinder interface problem with 40 grid points along each direction of the computational domain. Left chart: $u_1$; Middle chart: $u_2$; Right chart: $u_3$. }
\label{cyl_sol}
\end{figure}
\begin{figure}
\caption{Numerical error of solving the cylinder interface problem with 40 grid points along each direction of the computational domain. Left chart: $u_1$; Middle chart: $u_2$; Right chart: $u_3$. }
\label{cyl_err}
\end{figure}
The numerical solution and error are depicted in Figs. \ref{cyl_sol} and \ref{cyl_err}, respectively, where 40 grid points are used along each direction of the computational domain. The error is very small in all three solution components.
\textbf{Example~5.} Geometric complexity is a major issue in interface problems. It is often the case that numerical methods designed for simple interface geometries do not work for complex interface geometries. In this example, we consider a more complicated interface, which is defined to be a torus $$ \left\{\begin{array}{l} x(u, v)=(R+r\cos v)\cos u,\\ y(u, v)=(R+r\cos v)\sin u,\\ z(u, v)=r\sin v, \end{array}\right. $$ where $u, v\in [0,2\pi]$ are two parameters. The computational domain is set to be $[-10, 10]\times[-10, 10]\times[-5, 5]$.
The above torus can also be represented as $$ (R-\sqrt{x^2+y^2})^2+z^2=r^2. $$ We set $R=4, r=2$ in our numerical experiments.
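As a quick consistency check between the parametric and implicit descriptions of the torus, one can verify numerically that parametrized points satisfy the implicit equation (a sketch; function names are ours, and the standard parametrization with $\cos v$ in both horizontal components is assumed):

```python
import math

R, r = 4.0, 2.0  # torus radii used in Example 5

def torus_point(u, v):
    """Standard torus parametrization matching the implicit form above."""
    return ((R + r * math.cos(v)) * math.cos(u),
            (R + r * math.cos(v)) * math.sin(u),
            r * math.sin(v))

def torus_implicit(x, y, z):
    """(R - sqrt(x^2 + y^2))^2 + z^2 - r^2, zero on the torus surface."""
    return (R - math.hypot(x, y))**2 + z**2 - r**2

# Every parametrized point satisfies the implicit equation (up to round-off).
for u in (0.0, 1.0, 2.5):
    for v in (0.0, 0.7, 3.1):
        assert abs(torus_implicit(*torus_point(u, v))) < 1e-10
```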
The Poisson's ratio, shear modulus and designed analytic solutions of Example 4 are adopted for this example. Grid refinement analysis in terms of the $L_\infty$ error is given in Table \ref{Ex5_linf}. A similar grid refinement analysis in terms of the $L_2$ error is given in Table \ref{Ex5_2}. Although there is some small fluctuation in the convergence order, second order convergence is essentially obtained in this test.
\begin{table} \caption{The $L_\infty$ errors of Example 5.} \label{Ex5_linf} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $\ L_\infty(u_1) $ & Order &$ L_\infty(u_2)$ & Order &$ L_\infty(u_3)$ &Order \\ \hline $20\times 20\times20$ &$2.04\times 10^{-1}$ & &$2.04\times 10^{-1}$ & &$1.12\times 10^{-1}$ & \\ $40\times 40\times40$ &$4.14\times 10^{-2}$ &2.30 &$4.05\times 10^{-2}$ &2.33 &$2.34\times 10^{-2}$ & 2.09 \\ $80\times80\times80$ &$1.24\times 10^{-2}$ &1.74 &$1.09\times 10^{-2}$ &1.89 &$4.66\times 10^{-3}$ &1.97 \\ \hline \end{tabular}
\end{table}
\begin{table} \caption{The $L_2$ errors of Example 5.} \label{Ex5_2} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x \times n_y \times n_z$ & $L_2(u_1)$ &Order &$L_2(u_2)$ &Order &$L_2(u_3)$ &Order \\ \hline $20\times 20\times20$ &$4.54\times 10^{-2}$ & &$4.52\times 10^{-2}$ & &$1.87\times 10^{-2}$ & \\ $40\times 40\times40$ &$1.12\times 10^{-2}$ &1.71 &$1.10\times 10^{-2}$ &2.04 &$4.40\times 10^{-3}$ &2.09 \\ $80\times80\times80$ &$2.92\times 10^{-3}$ &2.04 &$2.90\times 10^{-3}$ &1.92 &$1.12\times 10^{-3}$ &1.97 \\ \hline \end{tabular}
\end{table}
\begin{figure}
\caption{Numerical solution to the torus interface problem with 40 grid points along each direction of the computational domain. Left chart: $u_1$; Middle chart: $u_2$; Right chart: $u_3$. }
\label{torus_sol}
\end{figure}
\begin{figure}
\caption{Numerical error for solving the torus interface problem with 40 grid points along each direction of the computational domain. Left chart: $u_1$; Middle chart: $u_2$; Right chart: $u_3$. }
\label{torus_err}
\end{figure}
Figures \ref{torus_sol} and \ref{torus_err} illustrate the numerical solution and the error on a $40\times40\times40$ mesh. Note that the errors appear larger in this test. However, the amplitude of the solution is also much larger, owing to the much larger computational domain.
\textbf{Example~6.} For the last example with a smooth interface and piecewise constant material parameters, we consider a more complicated interface geometry, i.e., a flower-like cylinder interface. The interface can be represented as $$ \left\{\begin{array}{ll} r=\frac{5}{2}+\frac{5}{7}\sin{5\theta},\\ -\frac{2}{3}\leq z \leq \frac{2}{3}, \end{array}\right. $$ where $r=\sqrt{x^2+y^2}$ and $\theta=\arctan{\frac{y}{x}}$. The computational domain is set to $[-5, 5]\times[-5, 5]\times[-2, 2]$.
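For reference, the flower-like cylinder can be encoded as a simple membership test on a Cartesian grid. A minimal sketch (the function name is ours; `atan2` is an implementation choice, since $\arctan(y/x)$ is ill-defined at $x=0$):

```python
import math

def inside_flower_cylinder(x, y, z):
    """Membership test for the flower-like cylinder interface:
    r <= 5/2 + (5/7) sin(5*theta) and -2/3 <= z <= 2/3."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x)  # polar angle; atan2 handles x = 0 safely
    r_interface = 5.0 / 2.0 + (5.0 / 7.0) * math.sin(5.0 * theta)
    return r <= r_interface and -2.0 / 3.0 <= z <= 2.0 / 3.0

print(inside_flower_cylinder(1.0, 0.0, 0.0))  # -> True  (well inside the flower)
print(inside_flower_cylinder(4.5, 0.0, 0.0))  # -> False (beyond the maximal radius)
```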
Material parameters and exact solutions designed in Example 4 are utilized in this example. Grid refinement analysis in terms of the $L_\infty$ error is given in Table \ref{Ex66_linf} and a similar analysis in terms of the $L_2$ error is given in Table \ref{Ex66_2}. Good convergence orders are attained.
\begin{table} \caption{The $L_\infty$ errors of Example 6.} \label{Ex66_linf} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} Grid Size &$L_\infty(u_1)$ &Order &$L_\infty(u_2) $ &Order &$L_\infty(u_3) $ &Order \\ \hline $0.5$ &$4.29\times 10^{-2}$ & &$4.49\times 10^{-2}$ & &$1.95\times 10^{-2}$ & \\ $0.25$ &$9.04\times 10^{-3}$ &2.25 &$9.46\times 10^{-3}$ &2.25 &$4.97\times 10^{-3}$ & 1.97 \\ $0.125$ &$1.96\times 10^{-3}$ &2.21 &$2.17\times 10^{-3}$ &2.03 &$7.02\times 10^{-4}$ & 2.82 \\ \hline \end{tabular}
\end{table}
\begin{table} \caption{The $L_2$ errors of Example 6.} \label{Ex66_2} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} Grid Size & $L_2(u_1) $ &Order &$L_2(u_2) $ &Order & $L_2(u_3) $ &Order \\ \hline $0.5$ &$4.12\times 10^{-3}$ & &$4.96\times 10^{-3}$ & &$2.35\times 10^{-3}$ & \\ $0.25$ &$9.90\times 10^{-4}$ &2.06 &$1.11\times 10^{-3}$ &2.16 &$4.10\times 10^{-4}$ &2.52 \\ $0.125$ &$2.11\times 10^{-4}$ &2.23 &$2.38\times 10^{-4}$ &2.22 &$7.68\times 10^{-5}$ &2.42 \\ \hline \end{tabular}
\end{table}
\begin{figure}
\caption{Numerical solution to the flower interface problem with grid size 0.125. Left chart: $u_1$; Middle chart: $u_2$; Right chart: $u_3$.}
\label{flower_sol}
\end{figure}
\begin{figure}
\caption{Numerical error for solving the flower interface problem with grid size 0.125. Left chart: $u_1$; Middle chart: $u_2$; Right chart: $u_3$.}
\label{flower_err}
\end{figure}
Figures \ref{flower_sol} and \ref{flower_err} demonstrate the solution and error of the flower-like interface problem with grid size $0.125$. Note that the error amplitude depends on the mesh size.
\subsubsection{Position dependent shear modulus}
Spatially varying shear modulus occurs frequently in natural and man-made materials and devices. The ability to deal with position-dependent material parameters cannot be overemphasized for practical applications. For example, protein molecules can have a variable shear modulus \cite{KLXia:2013d}.
In this subsection, we consider cases where the shear modulus is given as a position-dependent function.
\textbf{Example~7.} In this example, we consider the problem defined in Example 1, but replace the shear modulus of Example 1 with the following position-dependent function $$ \mu= \left\{\begin{array}{ll} \mu^+=1500000+(x+y+z), &\ \ \mbox{in}\ \ \Omega^+,\\ \mu^-=2000000+xyz, &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$
The error analysis is given in Tables \ref{Ex6_linf} and \ref{Ex6_2} for $L_\infty$ and $L_2$, respectively. Essentially, second order convergence is obtained. The level of accuracy is the same as that obtained for Example 1, which indicates that the proposed MIB method is not sensitive to position-dependent material parameters.
\begin{table} \caption{The $L_\infty$ errors of Example 7.} \label{Ex6_linf} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x\times n_y\times n_z$ &$L_\infty(u_1)$ &Order &$L_\infty(u_2) $ &Order &$L_\infty(u_3) $ &Order \\ \hline $10\times 10\times10$ &$6.61\times 10^{-2}$ & &$6.27\times 10^{-2}$ & &$5.67\times 10^{-2}$ & \\ $20\times 20\times20$ &$1.37\times 10^{-2}$ &2.27 &$1.34\times 10^{-2}$ &2.27 &$1.31\times 10^{-2}$ & 2.11 \\ $40\times 40\times40$ &$2.66\times 10^{-3}$ &2.36 &$2.84\times 10^{-3}$ &2.24 &$2.67\times 10^{-3}$ & 2.29 \\ $80\times80\times80$ &$7.41\times 10^{-4}$ &1.90 &$7.15\times 10^{-4}$ &1.99 &$7.26\times 10^{-4}$ &1.89 \\ \hline \end{tabular}
\end{table}
\begin{table} \caption{The $L_2$ errors of Example 7.} \label{Ex6_2} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x\times n_y\times n_z$ & $L_2(u_1) $ &Order &$L_2(u_2) $ &Order & $L_2(u_3) $ &Order \\ \hline $10\times 10\times10$ &$1.28\times 10^{-2}$ & &$1.28\times 10^{-2}$ & &$1.29\times 10^{-2}$ & \\ $20\times 20\times20$ &$3.18\times 10^{-3}$ &2.01 &$3.19\times 10^{-3}$ &2.00 &$3.14\times 10^{-3}$ & 2.04 \\ $40\times 40\times40$ &$8.30\times 10^{-4}$ &1.94 &$8.36\times 10^{-4}$ &1.93 &$8.26\times 10^{-4}$ &1.93 \\ $80\times80\times80$ &$2.24\times 10^{-4}$ &1.89 &$2.24\times 10^{-4}$ &1.90 &$2.23\times 10^{-4}$ &1.89 \\ \hline \end{tabular}
\end{table}
\textbf{Example~8.} To further test the performance of the proposed method in dealing with variable material parameters, we modify Example 4 by setting the shear modulus to the following spatially dependent functions $$ \mu= \left\{\begin{array}{ll} \mu^+=1500000+2000(x+y+z), &\ \ \mbox{in}\ \ \Omega^+,\\ \mu^-=2000000+1500xyz, &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$
\begin{table} \caption{The $L_\infty$ errors of Example 8.} \label{Ex7_linf} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x\times n_y\times n_z$ &$L_\infty(u_1)$ &Order &$L_\infty(u_2) $ &Order &$L_\infty(u_3) $ &Order\\ \hline $10\times 10\times10$ &$1.85\times 10^{-2}$ & &$1.85\times 10^{-2}$ & &$3.14\times 10^{-2}$ & \\ $20\times 20\times20$ &$4.68\times 10^{-3}$ &1.98 &$4.68\times 10^{-3}$ &1.98 &$7.07\times 10^{-3}$ & 2.15 \\ $40\times 40\times40$ &$1.15\times 10^{-3}$ &2.02 &$1.17\times 10^{-3}$ &2.00 &$1.74\times 10^{-3}$ & 2.02 \\ $80\times80\times80$ &$2.99\times 10^{-4}$ &1.94 &$3.19\times 10^{-4}$ &1.87 &$4.23\times 10^{-4}$ &2.01 \\ \hline \end{tabular}
\end{table}
\begin{table} \caption{The $L_2$ errors of Example 8.} \label{Ex7_2} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x\times n_y\times n_z$ & $L_2(u_1) $ &Order &$L_2(u_2) $ &Order & $L_2(u_3) $ &Order \\ \hline $10\times 10\times10$ &$4.16\times 10^{-3}$ & &$4.16\times 10^{-3}$ & &$7.47\times 10^{-3}$ & \\ $20\times 20\times20$ &$1.04\times 10^{-3}$ &2.00 &$1.04\times 10^{-3}$ &2.00 &$1.58\times 10^{-3}$ &2.24 \\ $40\times 40\times40$ &$2.63\times 10^{-4}$ &1.98 &$2.64\times 10^{-4}$ &1.98 &$3.92\times 10^{-4}$ &2.01 \\ $80\times80\times80$ &$6.82\times 10^{-5}$ &1.95 &$7.07\times 10^{-5}$ &1.90 &$1.00\times 10^{-4}$ &1.97 \\ \hline \end{tabular}
\end{table}
The $L_\infty$ and $L_2$ errors are analyzed in Tables \ref{Ex7_linf} and \ref{Ex7_2}, respectively.
\textbf{Example~9.}
In this numerical experiment, we further investigate the robustness of the proposed MIB algorithm with respect to a position-dependent shear modulus. We repeat Example 8, but change the shear modulus to the following functions: $$ \mu= \left\{\begin{array}{ll} \mu^+=1500000+2000(x^2+y^2+z^2), &\ \ \mbox{in}\ \ \Omega^+,\\ \mu^-=2000000+1500x^2y^2z^2, &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$
The $L_\infty$ and $L_2$ errors are analyzed in Tables \ref{Ex8_linf} and \ref{Ex8_2}, respectively.
\begin{table} \caption{The $L_\infty$ errors of Example 9.} \label{Ex8_linf} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x\times n_y\times n_z$ &$L_\infty(u_1)$ &Order &$L_\infty(u_2) $ &Order &$L_\infty(u_3) $ &Order \\ \hline $20\times 20\times20$ &$1.67\times 10^{-1}$ & &$1.65\times 10^{-1}$ & &$9.39\times 10^{-2}$ & \\ $40\times 40\times40$ &$4.20\times 10^{-2}$ &1.99 &$5.36\times 10^{-2}$ &1.62 &$2.66\times 10^{-2}$ & 1.82 \\ $80\times80\times80$ &$9.97\times 10^{-3}$ &2.07 &$9.71\times 10^{-3}$ &2.48 &$5.96\times 10^{-3}$ &2.43 \\ \hline \end{tabular}
\end{table}
\begin{table} \caption{The $L_2$ errors of Example 9.} \label{Ex8_2} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} $n_x\times n_y\times n_z$ & $L_2(u_1) $ &Order &$L_2(u_2) $ &Order & $L_2(u_3) $ &Order \\ \hline $20\times 20\times20$ &$4.27\times 10^{-2}$ & &$4.26\times 10^{-2}$ & &$1.78\times 10^{-2}$ & \\ $40\times 40\times40$ &$1.10\times 10^{-2}$ &1.96 &$1.09\times 10^{-2}$ &1.97 &$4.48\times 10^{-3}$ &1.99 \\ $80\times80\times80$ &$2.80\times 10^{-3}$ &1.97 &$2.78\times 10^{-3}$ &1.97 &$1.08\times 10^{-3}$ &2.05 \\ \hline \end{tabular}
\end{table}
\begin{remark} From the above three examples, we observe second order accuracy in both the $L_\infty$ and $L_2$ norms for elasticity interfaces with position-dependent material parameters. Additionally, the level of accuracy is not affected by the spatially varying material parameters. \end{remark}
\subsection{Nonsmooth interfaces}
Nonsmooth interfaces are omnipresent in practical applications and give rise to challenges for numerical algorithm design. In this section, we consider a few elasticity interface problems with geometric singularities.
\textbf{Example~10.} In this example, let us consider an apple-like interface \cite{Yu:2007a} $$ \rho=1.9\left(1-\cos{\phi}\right), $$ where $\rho=\sqrt{x^2+y^2+z^2}$ and $\phi=\arccos{\frac{z}{\rho}}$. The computational domain is set to $[-5, 4.6]\times[-5, 4.6]\times[-8, 4]$.
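Multiplying $\rho=1.9(1-\cos\phi)$ through by $\rho$ and using $\cos\phi = z/\rho$ gives the equivalent implicit form $\rho^2 = 1.9(\rho - z)$, which is convenient for locating interface crossings on a Cartesian grid. A minimal sketch (the function name is ours):

```python
import math

def apple_levelset(x, y, z):
    """Implicit form of the apple-like surface rho = 1.9 (1 - cos phi):
    rho^2 - 1.9 (rho - z), obtained by multiplying by rho and using
    cos phi = z / rho. It vanishes on the surface."""
    rho = math.sqrt(x * x + y * y + z * z)
    return rho * rho - 1.9 * (rho - z)

# Surface points: phi = pi/2 gives rho = 1.9, phi = pi gives rho = 3.8.
assert abs(apple_levelset(1.9, 0.0, 0.0)) < 1e-12
assert abs(apple_levelset(0.0, 0.0, -3.8)) < 1e-12
```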
The values of the Poisson's ratio and shear modulus are, respectively $$ \nu= \left\{\begin{array}{ll} \nu^+=0.24, &\ \ \mbox{in}\ \ \Omega^+,\\ \nu^-=0.20, &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$ and $$ \mu= \left\{\begin{array}{ll} \mu^+=2000000, &\ \ \mbox{in}\ \ \Omega^+,\\ \mu^-=1500000, &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$
The Dirichlet boundary condition and interface jump conditions can be determined by the following exact solution $$ u_1(x, y)= \left\{\begin{array}{ll} \cos{x}\cos{y}\cos{z}+xyz, &\ \ \mbox{in}\ \ \Omega^+,\\ 3, &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$
$$ u_2(x, y)= \left\{\begin{array}{ll} \cos{x}\cos{y}\cos{z}+x^2+y^2+z^2, &\ \ \mbox{in}\ \ \Omega^+,\\ 3, &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$ and $$ u_3(x, y)= \left\{\begin{array}{ll} \cos{x}\cos{y}\cos{z}, &\ \ \mbox{in}\ \ \Omega^+,\\ 3, &\ \ \mbox{in}\ \ \Omega^-. \end{array}\right. $$
Grid refinement analysis in terms of $L_\infty$ error is given in Table \ref{Ex10_linf}. A similar $L_2$ error analysis is given in Table \ref{Ex10_2}. The level of accuracy and the order of convergence are similar to those observed in earlier cases.
\begin{table} \caption{The $L_\infty$ errors of Example 10.} \label{Ex10_linf} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} Grid Size &$L_\infty(u_1)$ &Order &$L_\infty(u_2) $ &Order &$L_\infty(u_3) $ &Order \\ \hline $0.6$ &$5.08\times 10^{-2}$ & &$5.18\times 10^{-2}$ & &$6.60\times 10^{-2}$ & \\ $0.3$ &$1.39\times 10^{-2}$ &1.87 &$1.41\times 10^{-2}$ &2.06 &$1.74\times 10^{-2}$ & 1.92 \\ $0.15$ &$3.07\times 10^{-3}$ &2.18 &$3.77\times 10^{-3}$ &2.18 &$4.09\times 10^{-3}$ &2.09 \\ \hline \end{tabular}
\end{table}
\begin{table} \caption{The $L_2$ errors of Example 10.} \label{Ex10_2} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} Grid Size & $L_2(u_1) $ &Order &$L_2(u_2) $ &Order & $L_2(u_3) $ &Order \\ \hline $0.6$ &$5.86\times 10^{-3}$ & &$6.11\times 10^{-3}$ & &$9.44\times 10^{-3}$ & \\ $0.3$ &$1.51\times 10^{-3}$ &2.17 &$1.61\times 10^{-3}$ &1.92 &$2.55\times 10^{-3}$ &1.89 \\ $0.15$ &$3.77\times 10^{-4}$ &2.07 &$3.96\times 10^{-4}$ &2.02 &$6.69\times 10^{-4}$ &1.93 \\ \hline \end{tabular}
\end{table}
\begin{figure}
\caption{Solution to the apple-like interface problem with grid size 0.15. Left chart: $u_1$; Middle chart: $u_2$; Right chart: $u_3$.}
\label{app_sol}
\end{figure}
\begin{figure}
\caption{Numerical error for solving the apple-like interface problem with grid size 0.15. Left chart: $u_1$; Middle chart: $u_2$; Right chart: $u_3$.}
\label{app_err}
\end{figure}
Figures \ref{app_sol} and \ref{app_err} illustrate the numerical solution and error of solving the apple-like interface problem with grid size $0.15$. The grid refinement analyses in Tables \ref{Ex10_linf} and \ref{Ex10_2} indicate second order convergence in both the $L_2$ and $L_\infty$ error norms. Note that the largest error does not occur at the geometric singularity, which indicates that the proposed MIB method works well for geometric singularities.
\textbf{Example~11.} Next, we consider an oak-acorn interface geometry \cite{Yu:2007a} $$ \left\{\begin{array}{ll} \left(\frac{x}{d}\right)^2+\left(\frac{y}{d}\right)^2=(z-q)^2, &\ \ \mbox{if}\ \ z>0,\\ x^2+y^2+(z-g)^2=R^2, &\ \ \mbox{if}\ \ z\leq 0, \end{array}\right. $$ where $q=-\frac{6}{7}$, $g=\frac{1}{2}$, $R=\frac{15}{7}$ and $d=\sqrt{\frac{R^2-g^2}{q^2}}$. The computational domain is set to $[-5, 4.6]\times[-5, 4.6]\times[-5, 4.6]$. Note that this interface has a tip.
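With these parameters, the two pieces match continuously at $z=0$: there the cone gives $x^2+y^2 = d^2 q^2 = R^2-g^2$, which is the same circle as the sphere piece $x^2+y^2 = R^2-g^2$. A quick numerical check (variable names are ours):

```python
import math

# Parameters of the oak-acorn interface.
q, g, R = -6.0 / 7.0, 0.5, 15.0 / 7.0
d = math.sqrt((R * R - g * g) / (q * q))

# Radius of the cone piece at z = 0: (x/d)^2 + (y/d)^2 = (0 - q)^2.
r_cone = d * abs(q)
# Radius of the sphere piece at z = 0: x^2 + y^2 + g^2 = R^2.
r_sphere = math.sqrt(R * R - g * g)

# The two pieces meet along the same circle, so the surface is continuous at z = 0.
assert abs(r_cone - r_sphere) < 1e-12
```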
The material parameters and exact solutions in Example 10 are adopted. Grid refinement analysis in terms of $L_\infty$ error is given in Table \ref{Ex11_linf}. In Table \ref{Ex11_2}, similar analysis in terms of $L_2$ error is also given. These results show that the second order convergence in both $L_2$ and $L_\infty$ error norms is achieved.
\begin{table} \caption{The $L_\infty$ errors of Example 11.} \label{Ex11_linf} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} Grid Size &$L_\infty(u_1)$ &Order &$L_\infty(u_2) $ &Order &$L_\infty(u_3) $ &Order \\ \hline $0.48$ &$3.90\times 10^{-2}$ & &$4.28\times 10^{-2}$ & &$6.18\times 10^{-2}$ & \\ $0.24$ &$9.92\times 10^{-3}$ &1.98 &$1.01\times 10^{-2}$ &2.08 &$1.19\times 10^{-2}$ &2.38 \\ $0.12$ &$2.29\times 10^{-3}$ &2.12 &$2.54\times 10^{-3}$ &1.99 &$2.60\times 10^{-3}$ &2.19 \\ \hline \end{tabular}
\end{table}
\begin{table} \caption{The $L_2$ errors of Example 11.} \label{Ex11_2} \centering \begin{tabular}{lllllllll} \hline
\cline{1-7} Grid Size & $L_2(u_1) $ &Order &$L_2(u_2) $ &Order & $L_2(u_3) $ &Order \\ \hline $0.48$ &$5.91\times 10^{-3}$ & &$6.37\times 10^{-3}$ & &$7.44\times 10^{-3}$ & \\ $0.24$ &$1.36\times 10^{-3}$ &2.17 &$1.48\times 10^{-3}$ &2.11 &$1.88\times 10^{-3}$ &1.98 \\ $0.12$ &$3.25\times 10^{-4}$ &2.07 &$3.60\times 10^{-4}$ &2.04 &$4.06\times 10^{-4}$ &2.21 \\ \hline \end{tabular}
\end{table}
\begin{figure}
\caption{Numerical solution to the acorn interface problem with grid size 0.12. Left chart: $u_1$; Middle chart: $u_2$; Right chart: $u_3$.}
\label{acorn_sol}
\end{figure}
\begin{figure}
\caption{Numerical error for solving the acorn interface problem with grid size 0.12. Left chart: $u_1$; Middle chart: $u_2$; Right chart: $u_3$.}
\label{acorn_err}
\end{figure}
The geometry, numerical solution and error distribution are provided in Figures \ref{acorn_sol} and \ref{acorn_err}, which are computed with grid size $0.12$ in all directions. Again, the largest error occurs away from the tip, which indicates the robustness of the present MIB method in dealing with geometric singularity.
\textbf{Example~12.} Finally, let us extend the benchmark pentagon-star interface test used in 2D to a 3D one, which is a more complicated interface with a very sharp edge. We set the interface as $$ \phi(r, \theta)=\left\{\begin{array}{ll} \frac{R\sin{(\theta_t/2)}}{\sin{(\theta_t/2+\theta-\theta_r-2\pi(i-1)/5})}-r, &\ \ \theta_r+\pi(2i-2)/5\leq \theta <\theta_r+\pi(2i-1)/5,\\ \frac{R\sin{(\theta_t/2)}}{\sin{(\theta_t/2-\theta+\theta_r+2\pi(i-1)/5})}-r, &\ \ \theta_r+\pi(2i-3)/5\leq \theta <\theta_r+\pi(2i-2)/5, \end{array}\right. $$ where $\theta_t=\frac{\pi}{5}$, $\theta_r=\frac{\pi}{7}$, $R=\frac{6}{7}$ and $i=1, 2, 3, 4, 5$. Furthermore, we have $r=\sqrt{x^2+y^2}$ and $\theta=\arctan{\frac{y}{x}}$. The $z$-direction of the interface is constrained by $$ -\frac{\sqrt{3}}{2}\leq z \leq \frac{\sqrt{3}}{2}. $$ The computational domain is set to $[-1.3, 1.1]\times[-1.3, 1.1]\times[-1.3, 1.1]$. The material parameters and exact solutions in Example 10 are utilized for this problem.
Grid refinement analysis in terms of the $L_\infty$ error is given in Table \ref{Ex12_linf}, and in terms of the $L_2$ error in Table \ref{Ex12_2}. The grid refinement analysis shows that second order convergence is obtained in both the $L_2$ and $L_\infty$ error norms.
\begin{table} \caption{The $L_\infty$ errors of Example 12.} \label{Ex12_linf} \centering \begin{tabular}{lllllll} \hline
Grid Size &$L_\infty(u_1)$ &Order &$L_\infty(u_2)$ &Order &$L_\infty(u_3)$ &Order \\ \hline $0.12$ &$1.27\times 10^{-3}$ & &$1.57\times 10^{-3}$ & &$1.75\times 10^{-2}$ & \\ $0.06$ &$2.08\times 10^{-4}$ &2.61 &$3.98\times 10^{-4}$ &1.98 &$1.76\times 10^{-4}$ &3.31 \\ $0.03$ &$3.80\times 10^{-5}$ &2.45 &$6.56\times 10^{-5}$ &2.60 &$2.79\times 10^{-5}$ &2.66 \\ \hline \end{tabular}
\end{table}
\begin{table} \caption{The $L_2$ errors of Example 12.} \label{Ex12_2} \centering \begin{tabular}{lllllll} \hline
Grid Size &$L_2(u_1)$ &Order &$L_2(u_2)$ &Order &$L_2(u_3)$ &Order \\ \hline $0.12$ &$3.71\times 10^{-4}$ & &$1.67\times 10^{-4}$ & &$3.57\times 10^{-4}$ & \\ $0.06$ &$3.79\times 10^{-5}$ &3.29 &$4.47\times 10^{-5}$ &1.90 &$3.76\times 10^{-5}$ &3.25 \\ $0.03$ &$7.67\times 10^{-6}$ &2.30 &$1.07\times 10^{-5}$ &2.60 &$6.08\times 10^{-6}$ &2.63 \\ \hline \end{tabular}
\end{table}
\begin{figure}
\caption{Numerical solution to the pentagon star interface problem with grid size 0.03. Left chart: $u_1$; Middle chart: $u_2$; Right chart: $u_3$. }
\label{pen_sol}
\end{figure}
\begin{figure}
\caption{Numerical error for solving the pentagon star interface problem with grid size 0.03. Left chart: $u_1$; Middle chart: $u_2$; Right chart: $u_3$.}
\label{pen_err}
\end{figure}
To give a visualization of the numerical solution, error and interface geometry of the pentagon-star-like interface problem, we provide Figs. \ref{pen_sol} and \ref{pen_err}, which are plotted with grid size $0.03$. In general, the error is very small. Additionally, the largest error does not occur at the sharp edge of the interface. Therefore, from the above three test examples, we can conclude that the proposed MIB method is very robust in handling geometric singularities.
\section{Conclusion}
In this work, we develop the matched interface and boundary (MIB) method for solving three dimensional (3D) elasticity interface problems. Both isotropic homogeneous material and isotropic inhomogeneous material are considered in the theoretical modeling and numerical computation. In particular, the isotropic inhomogeneous material is described by a strain-stress constitutive law with a position-dependent modulus function.
Most previous effort on the MIB method has been devoted to elliptic interface problems. Its essential idea is to replace function values at irregular grid points with fictitious values in the discretization, so that standard finite difference schemes can be systematically employed as if there were no interface. Interface jump conditions are enforced at the intersection points between the interface and the mesh lines, which in turn determines the fictitious values. In principle, the MIB method developed for one interface problem can be utilized for solving another interface problem, because the MIB procedure does not depend on the form of the partial differential equation. However, elasticity interface equations are exceptional because they involve both central derivatives and cross derivatives, which lead to new difficulties in determining fictitious values. Additionally, the elasticity interface equation is a vector equation with three deformation components in a 3D setting, which places greater demands on numerical schemes in terms of computer memory storage and convergence speed in solving the linear algebraic system. Consequently, a new MIB method has been developed in this work to address these issues. To attain second order convergence in the MIB scheme, a number of techniques for central derivatives and cross derivatives are proposed in this work. For central derivatives, techniques such as local coordinate transformation, disassociation, and two sets of jump conditions are utilized, while for cross derivatives, disassociation, extrapolation and neighbor combination techniques are proposed to determine fictitious values. The resulting large sparse linear systems for the coupled vector equations are solved efficiently by using the biconjugate gradient method.
The proposed MIB method has been validated by using a variety of benchmark examples. In terms of interface complexity, we considered both smooth and nonsmooth interfaces. Smooth interface geometries include the sphere, hemisphere, genus-1 torus, flower and cylinder. In the category of nonsmooth interface geometries, apple-shaped, oak-acorn-shaped and pentagon-star interfaces are considered. It is well known that in order to achieve second order convergence, nonsmooth interface geometries require special considerations in the interface algorithm design. The robustness of the MIB method is demonstrated by showing that the largest error occurs away from the geometric singularities.
The proposed MIB method has also been tested for elasticity interface problems with both weak and strong discontinuities in the solutions. Another standard test is the stability of the numerical schemes for large contrasts in material parameters across the interface. These aspects are investigated with numerous examples. We have demonstrated that the proposed MIB method is not sensitive to changes in solution discontinuity or material contrast.
Finally, two classes of material parameters, namely, piecewise constant and spatially varying Poisson's ratio and shear modulus, are considered in our numerical experiments. We have demonstrated with extensive numerical examples that the proposed MIB method achieves second order convergence in both $L_\infty$ and $L_2$ error norms for all the tests described above. Additionally, the level of MIB accuracy is not affected by the above-mentioned test issues. We therefore believe that the present MIB method is ready for application to real-world problems. In fact, application to complex biomolecular systems is under consideration.
\section*{Acknowledgments}
This work was supported in part by NSF grants IIS-1302285 and DMS-1160352, NIH grant R01GM-090208 and MSU Center for Mathematical Molecular Biosciences Initiative.
\section*{Literature cited} \renewcommand\refname{}
\end{document} |
\begin{document}
\begin{abstract} We consider the dynamical properties of $C^{\infty}$-variations of the flow on an aperiodic Kuperberg plug ${\mathbb K}$. Our main result is that there exists a smooth 1-parameter family of plugs ${\mathbb K}_{\epsilon}$ for ${\epsilon}\in (-a,a)$ and $a<1$, such that: (1) The plug ${\mathbb K}_0 = {\mathbb K}$ is a generic Kuperberg plug; (2) For ${\epsilon}<0$, the flow in the plug ${\mathbb K}_{\epsilon}$ has two periodic orbits that
bound an invariant cylinder, all other orbits of the flow are wandering, and the flow has topological entropy zero; (3) For ${\epsilon}>0$, the flow in the plug ${\mathbb K}_{\epsilon}$ has positive topological entropy, and an abundance of periodic orbits.
\end{abstract}
\title{Aperiodicity at the boundary of chaos}
\thanks{2010 {\it Mathematics Subject Classification}. Primary 37C10, 37C70, 37B25, 37B40}
\author{Steven Hurder} \address{Steven Hurder, Department of Mathematics, University of Illinois at Chicago, 322 SEO (m/c 249), 851 S. Morgan Street, Chicago, IL 60607-7045} \email{hurder@uic.edu} \thanks{Preprint date: March 25, 2016; revised September 25}
\author{Ana Rechtman} \address{Ana Rechtman, Instituto de Matem\'aticas, Universidad Nacional Aut\'onoma de M\'exico, Ciudad Universitaria, 04510 Ciudad de M\'exico, Mexico} \email{rechtman@im.unam.mx}
\date{}
\keywords{Kuperberg flows, aperiodic flows, topological entropy}
\maketitle
\section{Introduction} \label{sec-intro}
In this paper, we analyze the dynamical properties of flows in a $C^{\infty}$-neighborhood of the Kuperberg flows introduced in \cite{Kuperberg1994}, or to be more precise, of {\it generic
Kuperberg flows} as introduced in \cite{HR2016}. The Kuperberg flows are exceptional for the simplicity of their explicit construction in \cite{Kuperberg1994},
and this explicitness makes it a straightforward process to construct 1-parameter families of $C^\infty$-deformations of a given generic Kuperberg flow.
We show in this work that the Kuperberg flows are furthermore remarkable, in that there are $C^{\infty}$-nearby flows with simple dynamics, and that there are $C^{\infty}$-nearby flows with positive topological entropy and an abundance of periodic orbits.
The construction of a Kuperberg flow is based on the construction of an aperiodic
plug, which we call a \emph{Kuperberg Plug}, and is denoted in this paper by
${\mathbb K}_0$. A plug is a manifold with boundary endowed with a flow that
enables the modification of a given flow inside a flow-box, so that
after modification, there are orbits that enter the flow-box and never
exit. Moreover, Kuperberg's construction does this modification without introducing additional periodic orbits.
Expository treatments of Kuperberg's construction were given by Ghys \cite{Ghys1995} and Matsumoto \cite{Matsumoto1995}, and in the first chapters of the authors' work \cite{HR2016}. Our work also introduced new concepts for the study of the dynamical properties of the Kuperberg flows, which allow one to investigate many further remarkable aspects of these flows.
As a consequence of Katok's theorem on $C^2$-flows on $3$-manifolds \cite{Katok1980},
the topological entropy of a Kuperberg flow is zero. In \cite{HR2016} we developed a technique based on the introduction of an ``almost transverse'' rectangle inside the plug and a pseudogroup modeling the return map of the flow to the rectangle, to make an explicit computation of the topological entropy. This computation revealed
chaotic behavior for the flow, but evolving at so slow a rate that it does not result in positive topological entropy. We proved that, under some extra hypotheses, such generic Kuperberg flows have positive ``slow entropy'', and that the rates of chaotic behavior grow at a precise subexponential, but non-polynomial rate, as discussed in the proof of \cite[Theorem~21.10]{HR2016}. The calculation behind the proof of this result suggests that for some Kuperberg-like flows near a generic Kuperberg flow, there should be actual chaotic behavior, and also positive entropy.
In this paper we prove the following two theorems which make these remarks more precise.
\begin{thm}\label{thm-main1} There exists a $C^{\infty}$ 1-parameter family of plugs ${\mathbb K}_{\epsilon}$ for ${\epsilon}\in (-1,0]$ such that: \begin{enumerate} \item The plug ${\mathbb K}_0$ is a Kuperberg plug; \item For ${\epsilon}<0$, the flow in the plug ${\mathbb K}_{\epsilon}$ has two periodic orbits that
bound an invariant cylinder, and every other orbit belongs to the wandering set, and thus the flow has topological entropy zero. \end{enumerate} \end{thm} The proof of Theorem~\ref{thm-main1} uses the same technical tools as developed in the previous works \cite{Kuperberg1994,Kuperbergs1996,Ghys1995,Matsumoto1995,HR2016} for the study of the dynamics of Kuperberg flows, and the result is notable mainly for its contrast with the following result, that a $C^{\infty}$-neighborhood of a generic Kuperberg flow also contains flows which have exceptionally wild dynamics. In particular, we obtain the following result: \begin{thm}\label{thm-main2} There exists a $C^{\infty}$ 1-parameter family of plugs ${\mathbb K}_{\epsilon}$ for ${\epsilon}\in [0,a)$, $a > 0$,
such that: \begin{enumerate} \item The plug ${\mathbb K}_0$ is a generic Kuperberg plug; \item For ${\epsilon}>0$, the flow in ${\mathbb K}_{\epsilon}$ has positive topological entropy, and an abundance of periodic orbits. \end{enumerate} \end{thm} The proof of Theorem~\ref{thm-main2} is based on the understanding of the dynamics of standard Kuperberg flows developed in \cite{HR2016}, and in particular uses in a fundamental way the technique of relating the dynamics of a Kuperberg-like flow to the dynamics of its return map to an almost transverse rectangle.
The construction of the plugs ${\mathbb K}_{\epsilon}$ in the proofs of both Theorems~\ref{thm-main1} and \ref{thm-main2} follows closely the original construction by K. Kuperberg in \cite{Kuperberg1994}, which begins with a modified version of the original Wilson Plug \cite{Wilson1966},
where the modification given in Section~\ref{subsec-wilson} removes the stable periodic orbits for the flow and replaces them with unstable periodic orbits. The construction of the 1-parameter family of plugs is described in the first part of this work, Section~\ref{subsec-kuperberg}, and closely follows the construction in \cite{HR2016}.
The \emph{Radius Inequality} is the main condition in Kuperberg's construction that is used to prove that the flow is aperiodic. The only change in the requirements of the construction of the flows $\Phi_t^{\epsilon}$ which we study, is that the Radius Inequality gets replaced for ${\epsilon} \ne 0$ with the \emph{Parametrized Radius Inequality}, as stated in Section~\ref{subsec-kuperberg}. For ${\epsilon}=0$ we recover the original Radius Inequality for Kuperberg flows.
Section~\ref{sec-radiuslevel} introduces some of the main tools for the study of the dynamics of Kuperberg flows, the radius and level functions, and also gives some of the immediate consequences for the study of the dynamics that are independent of the value of ${\epsilon}$ in the Parametrized Radius Inequality.
As we mentioned above, one of the main techniques in \cite{HR2016} is to introduce a pseudogroup modeling the dynamics of the Kuperberg flow. The other main technique is the study of surfaces tangent to the flow, known as propellers, that were used to describe the topological properties of the minimal set of a generic Kuperberg flow. These surfaces are introduced here in Section~\ref{sec-propeller}.
The plugs ${\mathbb K}_{\epsilon}$ for ${\epsilon}<0$ have rather simple dynamical properties, as stated in Theorem~\ref{thm-main1} and described in Section~\ref{sec-enegative}. The study of these flows does not require any extra hypotheses in the construction.
On the other hand, the study of the dynamical properties of plugs ${\mathbb K}_{\epsilon}$ which satisfy the Parametrized Radius Inequality for ${\epsilon} \geq 0$ is extraordinarily complicated. The detailed analysis in \cite{HR2016} of the standard Kuperberg flows
required the introduction of the {\it generic hypothesis} in that work (see Hypothesis~\ref{hyp-SRI} below), in order to deduce a wide range of properties for the flows. For the case when ${\epsilon} > 0$, many of the corresponding results do not hold, so we consider in this work only the question of the existence of compact invariant sets for the flows, such that the restricted dynamics of the flow admits a ``horseshoe'' in its transversal model. Even with this more restricted goal, it is still necessary to impose geometric hypotheses on the construction as stated in Hypotheses~\ref{hyp-monotone} and \ref{hyp-offset}, in order to obtain
the proof of Theorem~\ref{thm-main2}.
The construction of the pseudogroup acting on an almost transverse rectangle ${\bf R}_{0}\subset {\mathbb K}_{\epsilon}$ is done in Section~\ref{sec-pseudogroups}. Instead of introducing a pseudogroup analogous to the one in Chapter 9 of \cite{HR2016}, we introduce the simplest pseudogroup that allows us to prove Theorem~\ref{thm-main2}. Section~\ref{sec-horseshoe} is dedicated to the proof that this pseudogroup contains ``horseshoe maps'', and the proof introduces surfaces analogous to the propellers introduced in \cite{HR2016} which are used to define these maps. In Section~\ref{sec-entropy} we describe the situation in which these horseshoe maps can be embedded in the flow $\Phi_t^{\epsilon}$, so that they generate positive entropy for the flow, and not just for the pseudogroup. Then in Section~\ref{subsec-admissible}, we discuss the construction of examples of $C^{\infty}$-deformations of a generic Kuperberg flow, such that the hypotheses of Theorem~\ref{thm-entropypositive} are satisfied, completing the proof of Theorem~\ref{thm-main2}.
The results in this paper are inspired by the work in the paper \cite{HR2016}, and for the sake of brevity, we are forced to refer occasionally to results in \cite{HR2016}. However, the novel results used in the proof of Theorems~\ref{thm-main1} and \ref{thm-main2} are explained and proved here. Further questions and open problems concerning the
dynamical properties of the many variations of Kuperberg flows are
discussed in the paper \cite{HKR2016}.
\section{Construction of parametrized families}\label{sec-plugs}
In this section, we present the construction of the 1-parameter family of plugs ${\mathbb K}_{\epsilon}$ for ${\epsilon}\in (-a,a)$, and introduce some of the techniques used for the study of their dynamical properties. This follows closely the outline of the construction and study of the usual Kuperberg flows in Chapters $2$ and $3$ of \cite{HR2016}.
In Section~\ref{subsec-wilson} we give the construction of the modified Wilson plug as introduced by Kuperberg in \cite{Kuperberg1994}.
The family of plugs is constructed in Section~\ref{subsec-kuperberg}, and the Parametrized Radius Inequality is introduced.
\subsection{Plugs}\label{subsec-plugs}
A $3$-dimensional plug is a manifold $P$ endowed with a vector field ${\mathcal X}$ satisfying the following ``plug conditions''. The 3-manifold $P$ is of the form $D \times [-2,2]$, where $D$ is a compact 2-manifold with boundary $\partial D$. Set $$\partial_v P = \partial D \times [-2,2] \quad , \quad \partial_h^- P = D \times \{-2\} \quad , \quad \partial_h^+ P = D \times \{2\} \ .$$ Then the boundary of $P$ has a decomposition $$\partial P ~ = ~ \partial_v P \cup \partial_h P ~ = ~ \partial_v P \cup \partial_h^-P \cup \partial_h^+ P \ .$$ Let $\frac{\partial}{\partial z}$ be the \emph{vertical} vector field on $P$, where $z$ is the coordinate of the interval $[-2,2]$.
The vector field ${\mathcal X}$ must satisfy the conditions: \begin{itemize} \item[(P1)] \emph{vertical at the boundary}: ${\mathcal X}=\frac{\partial}{\partial z}$ in a neighborhood of $\partial P$; thus, $\partial_h^- P$ and $\partial_h^+ P$ are the entry and exit regions of $P$ for the flow of ${\mathcal X}$, respectively; \item[(P2)] \emph{entry-exit condition}: if a point $(x,-2)$ is in the same trajectory as $(y,2)$, then $x=y$. That is, an orbit that traverses $P$, exits just in front of its entry point; \item[(P3)] \emph{trapped orbit}: there is at least one entry point whose entire \emph{forward} orbit is contained in $P$; we will say that its orbit is \emph{trapped} by $P$; \item[(P4)] \emph{tame}: there is an embedding $i \colon P\to {\mathbb R}^3$ that preserves the vertical direction. \end{itemize}
Note that conditions (P2) and (P3) imply that if the forward orbit of a point $(x,-2)$ is trapped, then the backward orbit of $(x,2)$ is also trapped.
A {\it semi-plug} is a manifold $P$ endowed with a vector field ${\mathcal X}$ as above, satisfying conditions (P1), (P3) and (P4), but not necessarily (P2). The concatenation of a semi-plug with an inverted copy of it, that is a copy where the direction of the flow is inverted, is then a plug.
Note that condition (P4) implies that given any open ball $B(\vec{x},{\epsilon}) \subset {\mathbb R}^3$ with ${\epsilon} > 0$, there exists a modified embedding $i' \colon P \to B(\vec{x},{\epsilon})$ which preserves the vertical direction again. Thus, a plug can be used to change a vector field ${\mathcal Z}$ on any $3$-manifold $M$ inside a flowbox, as follows. Let ${\varphi} \colon U_x \to (-1,1)^3$ be a coordinate chart which maps the vector field ${\mathcal Z}$ on $M$ to the vertical vector field $\frac{\partial}{\partial z}$. Choose a modified embedding $i' \colon P \to B(\vec{x},{\epsilon}) \subset (-1,1)^3$, and then replace the flow $\frac{\partial}{\partial z}$ in the interior of $i'(P)$ with the image of ${\mathcal X}$. This results in a flow ${\mathcal Z}'$ on $M$.
The entry-exit condition implies that a periodic orbit of ${\mathcal Z}$ which meets $\partial_h P$ in a non-trapped point, will remain periodic after this modification. An orbit of ${\mathcal Z}$ which meets $\partial_h P$ in a trapped point never exits the plug $P$, hence after modification, limits to a closed invariant set contained in $P$. A closed invariant set contains a minimal set for the flow, and thus, a plug serves as a device to insert a minimal set into a flow.
\subsection{The modified Wilson plug ${\mathbb W}$}\label{subsec-wilson}
We next introduce the ``modified Wilson Plug'', which was
the starting point of Kuperberg's construction of her plug. We add to the construction one
hypothesis that seems to be implicitly assumed by certain
conclusions stated in \cite{Ghys1995,Matsumoto1995}, and that is
part of the generic hypothesis used in \cite{HR2016}. This
hypothesis is not needed to construct Kuperberg's aperiodic plug,
but it seems necessary in order to prove the results in this paper.
Consider the rectangle, as illustrated in Figure~\ref{fig:wilson1}, \begin{equation}\label{eq-rectangle} {{\bf R}} = [1,3]\times[-2,2] = \{(r,z) \mid 1 \leq r \leq 3 ~ \& ~ -2 \leq z \leq 2\} \ . \end{equation}
For a constant $0 < g_0\leq 1$, choose a $C^\infty$-function $g \colon {\bf R} \to [0,g_0]$ which satisfies the ``vertical'' symmetry condition $g(r,z) = g(r,-z)$. Also, require that $g(2,-1) = g(2,1) = 0$, that $g(r,z) = g_0$ for $(r,z)$ near the boundary of ${\bf R}$, and that $g(r,z) > 0$ otherwise. We may take $g_0 = 1/10$ for example.
We make the following additional hypothesis on the function $g$ in the construction. \begin{hyp}\label{hyp-genericW} We require the function $g$ satisfy the above conditions, and in addition: \begin{equation}\label{eq-generic1}
g(r,z) = g_0 \quad \text{for} \quad (r-2)^2 + (|z| -1)^2 \geq {\epsilon}_0^2 \end{equation}
where $0 < {\epsilon}_0 < 1/4$ is sufficiently small, as will be specified later in Section~\ref{subsec-kuperberg}. In addition, we require that $g(r,z)$ is monotone increasing as a function of the distance $\displaystyle \sqrt{(r-2)^2 + (|z| -1)^2}$ from the special points where it vanishes, and that $g$ is non-degenerate. Non-degenerate means that the matrix of second partial derivatives at each vanishing point is invertible. \end{hyp}
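To make the conditions on $g$ concrete, the following sketch (ours, with the arbitrary choices $g_0 = 1/10$ and ${\epsilon}_0 = 1/5$; this model profile is only piecewise-smooth at distance ${\epsilon}_0$ from the zeros, where a genuine construction would mollify the transition) exhibits the required qualitative behavior: quadratic vanishing at $(2,\pm 1)$, vertical symmetry, monotonicity in the distance to the zeros, and the constant value $g_0$ elsewhere.

```python
g0, eps0 = 0.1, 0.2   # hypothetical choices with 0 < eps0 < 1/4

def g(r, z):
    """Model profile: vanishes quadratically at (2, +-1), symmetric in z,
    monotone in the distance to the zeros, and equal to g0 at distance >= eps0."""
    rho2 = (r - 2)**2 + (abs(z) - 1)**2
    return g0 * min(rho2 / eps0**2, 1.0)

assert g(2, 1) == 0 and g(2, -1) == 0        # the two singular points
assert g(1, -2) == g0 and g(3, 2) == g0      # g = g0 near the boundary of R
assert g(2, 0.9) == g(2, -0.9)               # vertical symmetry g(r,z) = g(r,-z)
assert 0 < g(2, 1.05) < g(2, 1.1) < g0       # monotone in the distance
```

Near a zero this profile equals $g_0\,\rho^2/{\epsilon}_0^2$, so its Hessian there is a positive multiple of the identity, which is the non-degeneracy required above.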
Define the vector field ${\mathcal W}_v = g \cdot \frac{\partial}{\partial z}$ which has two singularities, $(2,\pm 1)$, and is otherwise everywhere vertical. The flow lines of this vector field are illustrated in Figure~\ref{fig:wilson1}.
\begin{figure}
\caption{Vector field ${\mathcal W}_v$ }
\label{fig:wilson1}
\end{figure}
Next, choose a $C^\infty$-function $f \colon {\bf R} \to [-1,1]$ which satisfies the following conditions: \begin{enumerate} \item[(W1)] $f(r,-z) = -f(r, z)$ ~ [\emph{anti-symmetry in z}] \item[(W2)] $f(r,z) = 0$ for $(r,z)$ near the boundary of ${\bf R}$ \item[(W3)] $f(r,z) \geq 0$ for $-2 \leq z \leq 0$. \item[(W4)] $f(r,z) \leq 0$ for $0 \leq z \leq 2$. \item[(W5)] $f(r,z) =1$ for $5/4 \leq r \leq 11/4$ ~ \text{and} ~ $-7/4 \leq z \leq -1/4$. \item[(W6)] $f(r,z) = -1$ for $5/4 \leq r \leq 11/4$ ~ \text{and}~ $1/4 \leq z \leq 7/4$. \end{enumerate} Condition (W1) implies that $f(r,0) =0$ for all $1 \leq r \leq 3$.
Next, define the manifold with boundary \begin{equation}\label{eq-wilsoncylinder} {\mathbb W}=[1,3] \times {\mathbb S}^1\times[-2,2] \cong {\mathbf R} \times {\mathbb S}^1 \end{equation} with cylindrical coordinates $x = (r, \theta,z)$. That is,
${\mathbb W}$ is a solid cylinder with an open core removed, obtained by rotating the rectangle ${\bf R}$, considered as embedded in ${\mathbb R}^3$, around the $z$-axis.
Extend the functions $f$ and $g$ above to ${\mathbb W}$ by setting $f(r, \theta, z) = f(r, z)$ and $g(r, \theta, z) = g(r, z)$, so that they are invariant under rotations around the $z$-axis. Define the Wilson vector field on ${\mathbb W}$ by \begin{equation}\label{eq-wilsonvector} {\mathcal W} = g(r, \theta, z) \frac{\partial}{\partial z} + f(r, \theta, z) \frac{\partial}{\partial \theta} \end{equation} Let $\Psi_t$ denote the flow of ${\mathcal W}$ on ${\mathbb W}$. Observe that the vector field ${\mathcal W}$ is vertical near the boundary of ${\mathbb W}$ and horizontal along the periodic orbits. Also, ${\mathcal W}$ is tangent to the cylinders $\{r=const.\}$. The flow of $\Psi_t$ on the cylinders $\{r=const.\}$ is illustrated (in cylindrical coordinate slices) by the lines in Figures~\ref{fig:flujocilin} and \ref{fig:Reebcyl}.
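Since ${\mathcal W} = g\,\partial_z + f\,\partial_\theta$ is tangent to each cylinder $\{r=const.\}$ and the $\theta$-component does not change $z$, the trapping of orbits on the full cylinder $\{r=2\}$ reduces to the scalar dynamics $\dot z = g(2,z)$. The following numerical sketch (ours, with an arbitrary model profile for $g$ vanishing quadratically at $z = \pm 1$) illustrates how the forward orbit from $z=-2$ accumulates on the periodic orbit ${\mathcal O}_1$ at $z=-1$ without ever crossing it:

```python
g0, eps0 = 0.1, 0.2   # model constants, not prescribed by the construction

def g(r, z):
    # Vanishes quadratically at (2, +-1); equals g0 at distance >= eps0.
    rho2 = (r - 2)**2 + (abs(z) - 1)**2
    return g0 * min(rho2 / eps0**2, 1.0)

# Forward Euler for z' = g(2, z) on the invariant cylinder {r = 2}.
z, dt = -2.0, 0.01
for _ in range(20000):        # integrate up to t = 200
    z += dt * g(2.0, z)

assert -2.0 < z < -1.0        # the orbit never crosses the periodic orbit O_1
assert abs(z + 1.0) < 0.05    # ... and limits to it as t grows
```

The quadratic vanishing of $g$ makes the approach to $z=-1$ only algebraic in time, which is the mechanism behind the trapped orbits in items (4) and (5) of Proposition~\ref{prop-wilsonproperties}.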
\begin{figure}
\caption{$r \approx 1,3$}
\caption{$r \approx 2$}
\caption{$r = 2$}
\caption{${\mathcal W}$-orbits on the cylinders $\{r=const.\}$ }
\label{fig:flujocilin}
\end{figure}
\begin{figure}\caption{${\mathcal W}$-orbits on the Reeb cylinder ${\mathcal R}$}\label{fig:Reebcyl}
\end{figure}
Define the closed subsets: \begin{eqnarray*} {\mathcal C} ~ & \equiv & ~ \{r=2\} \quad \text{[\emph{The ~ Full ~ Cylinder}]}\\ {\mathcal R} ~ & \equiv & ~ \{(2,\theta,z) \mid -1 \leq z \leq 1\} \quad \text{[\emph{The ~ Reeb ~ Cylinder}]}\\ {\mathcal A} ~ & \equiv & ~ \{z=0\} \quad \text{[\emph{The ~ Center ~ Annulus}]}\\ {\mathcal O}_i ~ & \equiv & ~ \{(2,\theta,(-1)^i) \} \quad \text{[\emph{Periodic Orbits, i=1,2}]} \end{eqnarray*} Note that ${\mathcal O}_1$ is the lower boundary circle of the Reeb cylinder ${\mathcal R}$, and ${\mathcal O}_2$ is the upper boundary circle.
We give some of the basic properties of the Wilson flow. Let $R_{\varphi} \colon {\mathbb W} \to {\mathbb W}$ be rotation by the angle $\varphi$. That is, $R_{\varphi}(r,\theta,z) = (r, \theta + \varphi, z)$. \begin{prop}\label{prop-wilsonproperties} Let $\Psi_t$ be the flow on ${\mathbb W}$ defined above, then: \begin{enumerate} \item $R_{\varphi} \circ \Psi_t = \Psi_t \circ R_{\varphi}$ for all $\varphi$ and $t$. \item The flow $\Psi_t$ preserves the cylinders $\{r=const.\}$ and in particular
preserves the cylinders ${\mathcal R}$ and ${\mathcal C}$. \item ${\mathcal O}_i$ for $i=1,2$ are the periodic orbits for $\Psi_t$. \item For $x = (2,\theta,-2)$, the forward orbit $\Psi_t(x)$ for $t > 0$ is trapped. \item For $x = (2,\theta,2)$, the backward orbit $\Psi_t(x)$ for $t < 0$ is trapped. \item For $x = (r,\theta,z)$ with $r \ne 2$, the orbit $\Psi_t(x)$ terminates in the top face $\partial_h^+ {\mathbb W}$ for some $t \geq 0$, and terminates in $\partial_h^- {\mathbb W}$ for some $t \leq 0$. \item The flow $\Psi_t$ satisfies the entry-exit condition (P2) for plugs. \end{enumerate} \end{prop} \proof The only assertion that needs a comment is the last, which follows by (W1) and the symmetry condition imposed on the functions $g$ and $f$. \endproof
\eject
\subsection{The family of spaces ${\mathbb K}_{\epsilon}$}\label{subsec-kuperberg}
The construction of the family of Kuperberg Plugs ${\mathbb K}_{\epsilon}$ begins with the modified Wilson Plug ${\mathbb W}$ with vector field ${\mathcal W}$ constructed in Section~\ref{subsec-wilson}, and follows the original construction of K. Kuperberg, except for the choices of self-embeddings. The parameter ${\epsilon}$ is a real number, and admits negative and positive values, though for ${\epsilon} > 0$ we will later assume that ${\epsilon}$ is ``sufficiently small''. For ${\epsilon}=0$ we recover the original Kuperberg Plug. Moreover, as proved in Sections~\ref{sec-enegative} and \ref{sec-entropy}, this is the only plug in the family that has no periodic orbits.
The construction follows the steps in Chapter 3 of \cite{HR2016}. The first step is to re-embed the manifold ${\mathbb W}$ in ${\mathbb R}^3$ as a {\it folded
figure-eight}, as shown in Figure~\ref{fig:8doblado}, preserving the vertical direction.
\begin{figure}
\caption{Embedding of Wilson Plug ${\mathbb W}$ as a {\it folded figure-eight} }
\label{fig:8doblado}
\end{figure}
The fundamental idea of the Kuperberg Plug is to construct two insertions of ${\mathbb W}$ in itself. The subtlety of the original construction arises in the precise requirements on this insertion, which were chosen so that the periodic orbits of the flow are ``cut open'' by a non-periodic orbit for the inserted plug.
Here, we modify this construction so that for values of ${\epsilon} \ne 0$, the self-insertion again intercepts the periodic orbits, but the resulting variation of the radius inequality results in a family of plugs with much different characteristics than Kuperberg's original construction. Finally, in Section~\ref{sec-entropy}, we impose further restrictions on the insertion maps for ${\epsilon} > 0$, so that the entropies of the resulting Kuperberg flows can be more readily calculated.
Consider in the annulus $[1,3] \times {\mathbb S}^1$ two topological closed disks $L_i$, for $i=1,2$, whose boundaries are composed of two arcs: $\alpha^\prime_i$ in the interior of $[1,3] \times {\mathbb S}^1$, and $\alpha_i$ in the outer boundary circle $\{r=3\}$, as depicted in Figure~\ref{fig:insertiondisks}. To be precise, let $\zeta_1 = \pi/4$ and $\zeta_2 = - \pi/4$, then let $\alpha_i$ be the arcs defined by $$
\alpha_1 ~ = ~ \{(3, \theta) \mid ~ |\theta - \zeta_1| \leq 1/10\} \quad , \quad
\alpha_2 ~ = ~ \{(3, \theta) \mid ~ |\theta - \zeta_2| \leq 1/10\} $$ We let $\alpha_i'$ be the curves which in polar coordinates $(r,\theta)$ are parabolas with minimum value $r = 3/2$ and base the line segment $\alpha_i$, as depicted in Figure~\ref{fig:insertiondisks}. We choose an explicit form for the embedded curves, for example, given by $\displaystyle \alpha_i' \equiv \{ r = 3/2 + 300/2 \cdot (\theta - \zeta_i)^2 \}$.
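The coefficient $300/2 = 150$ is chosen so that the parabola has its vertex at $r=3/2$ and returns to the outer circle $\{r=3\}$ exactly at the endpoints $\theta = \zeta_i \pm 1/10$ of $\alpha_i$. A quick arithmetic check (ours) of this choice:

```python
import math

zeta_1 = math.pi / 4                        # the arc alpha_1 is centered here
r = lambda theta: 3/2 + 150 * (theta - zeta_1)**2

assert r(zeta_1) == 1.5                     # vertex: the minimum value r = 3/2
assert abs(r(zeta_1 + 1/10) - 3.0) < 1e-12  # endpoints land on {r = 3}
assert abs(r(zeta_1 - 1/10) - 3.0) < 1e-12
```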
\begin{figure}
\caption{ The disks $L_1$ and $L_2$ }
\label{fig:insertiondisks}
\end{figure}
Consider the closed sets $D_i \equiv L_i \times[-2,2] \subset {\mathbb W}$, for $i = 1,2$. Note that each $D_i$ is homeomorphic to a closed $3$-ball, that $D_1 \cap D_2 = \emptyset$, and each $D_i$ intersects the cylinder ${\mathcal C} = \{r=2\}$ in a rectangle. We label also the top and bottom faces of these regions \begin{equation}\label{eq-regions} L_1^{\pm}= L_1\times \{\pm 2\} ~ , ~ L_2^{\pm} = L_2\times \{\pm 2\}. \end{equation}
The next step is to define families of insertion maps $\sigma_i^{\epsilon} \colon D_i \to {\mathbb W} $, for $i=1,2$, in such a way that for ${\epsilon}=0$ the periodic orbits ${\mathcal O}_{1}$ and ${\mathcal O}_{2}$ for the ${\mathcal W}$-flow intersect $\sigma_i^{\epsilon}(L_i^-)$ in points corresponding to ${\mathcal W}$-trapped points. Consider the two disjoint arcs $\beta_i'$ in the inner boundary circle $\{r=1\}$, \begin{eqnarray*}
\beta_1' & = & \{(1, \theta) \mid ~ |\theta - (\zeta_1 + \pi) | \leq 1/10\}\\
\beta_2' & = & \{(1, \theta) \mid ~ |\theta - (\zeta_2 + \pi) | \leq 1/10\} \end{eqnarray*}
Now choose a smooth family of orientation preserving diffeomorphisms $\sigma_i^{\epsilon} \colon \alpha_i' \to \beta_i'$, $i=1,2$, for $-a \leq {\epsilon} \leq a$, where $a < {\epsilon}_0$ is sufficiently small. Extend these maps to smooth embeddings $\sigma_i^{\epsilon} \colon D_i \to {\mathbb W} $, for $i=1,2$, as illustrated in Figure~\ref{fig:twisted}. We require the following conditions for all ${\epsilon}$ and for $i=1,2$: \begin{itemize} \item[(K1)] $\sigma_i^{\epsilon}(\alpha_i'\times z)=\beta_i'\times z$ for all $z\in [-2,2]$; that is, the interior arc $\alpha_i^\prime$ is mapped to the boundary arc $\beta_i'$. \item[(K2)] Setting ${\mathcal D}_i^{\epsilon} = \sigma_i^{\epsilon}(D_i)$, we have ${\mathcal D}_1^{\epsilon} \cap {\mathcal D}_2^{\epsilon} =\emptyset$; \item[(K3)] For every $x \in L_i$, the image ${\mathcal I}_{i,x}^{\epsilon} \equiv \sigma_i ^{\epsilon} (x \times [-2,2])$ is an arc contained in a trajectory of ${\mathcal W}$; \item[(K4)] We have $\sigma_1 ^{\epsilon} (L_1 \times \{-2\}) \subset \{z < 0\}$ and $\sigma_2 ^{\epsilon} (L_2 \times \{2\}) \subset \{z > 0\}$; \item[(K5)] Each slice $\sigma_i ^{\epsilon} (L_i\times\{z\})$ is transverse to the vector field ${\mathcal W}$, for all $-2\leq z \leq 2$; \item[(K6)] ${\mathcal D}_i ^{\epsilon}$ intersects the periodic orbit ${\mathcal O}_i$, and is disjoint from ${\mathcal O}_j$ for $j \ne i$. \end{itemize}
The ``horizontal faces'' of the embedded regions ${\mathcal D}_i^{\epsilon}\subset {\mathbb W}$ are labeled by \begin{equation}\label{eq-tongues} {\mathcal L}_1^{{\epsilon}\pm}= \sigma_1^{\epsilon}(L_1\times \{\pm 2\}) ~ , ~ {\mathcal L}_2^{{\epsilon}\pm} = \sigma_2^{\epsilon}(L_2\times \{\pm 2\}). \end{equation}
Note that the arcs ${\mathcal I}_{i,x}^{\epsilon}$ run from $\sigma_i^{\epsilon}(x \times \{-2\})$ to $\sigma_i^{\epsilon}(x \times \{2\})$ following a ${\mathcal W}$-trajectory, and traverse the insertion from one face to the other. Since ${\mathcal W}$ is vertical near the boundary of ${\mathbb W}$ and horizontal at the two periodic orbits, conditions (K3)--(K6) imply that the arcs ${\mathcal I}_{i,x}^{\epsilon}$ are vertical near the inserted curve $\sigma_i ^{\epsilon} (\alpha_i')$ and horizontal at the intersection of the insertion with the periodic orbit ${\mathcal O}_i$. Thus, the embeddings of the surfaces $\sigma_i ^{\epsilon} (L_i\times\{z\})$ make a {\it half turn} upon insertion, for each $-2 \leq z \leq 2$,
as depicted in Figure~\ref{fig:twisted}. The turning is clockwise for the bottom insertion $i=1$ as in Figure~\ref{fig:twisted}, and counter-clockwise for the upper insertion $i=2$.
\begin{figure}
\caption{The image of $L_1\times [-2,2]$ in ${\mathbb W}$ under $\sigma_1$ }
\label{fig:twisted}
\end{figure}
The first insertion $\sigma_1^{\epsilon}(D_1)$ in Figure~\ref{fig:twisted} intersects the first periodic orbit of ${\mathcal W}$ and is disjoint from the second periodic orbit.
The second insertion $\sigma_2^{\epsilon}(D_2)$ is disjoint from the first insertion and from the first periodic orbit, and it intersects the second periodic orbit.
The embeddings $\sigma_i ^{\epsilon}$ are also required to satisfy two further conditions: \begin{itemize} \item[(K7)] For $i=1,2$, the disk $L_i$ contains a point $(2,\theta_i)$ such that
the image under $\sigma_i ^{\epsilon}$ of the vertical segment
$(2,\theta_i)\times[-2,2] \subset D_i \subset {\mathbb W}$ is contained
in $\{r=2+{\epsilon}\} \cap \{\theta_i^- \leq \theta \leq \theta_i^+\}$,
and for
${\epsilon}=0$ it is contained in $\{r=2\} \cap \{\theta_i^- \leq \theta \leq \theta_i^+\} \cap
\{z=(-1)^i\}$.\\ \item[(K8)] {\it Parametrized Radius Inequality}: For all $x' = (r', \theta', -2) \in L_i^-$, let $x = (r, \theta,z) =
\sigma_i^{\epsilon}(r', \theta', -2) \in {\mathcal L}_i^{{\epsilon}-}$, then $r < r'+{\epsilon}$ unless $x' = (2,\theta_i, -2)$ and then $r=2+{\epsilon}$. \end{itemize}
Note that by (K3) we have $r(\sigma_i^{\epsilon}(r', \theta', z')) = r(\sigma_i^{\epsilon}(r', \theta', -2))$ for all $-2 \leq z' \leq 2$, so that (K8) holds for all points $x' = (r', \theta', z') \in L_i \times [-2,2]$.
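To see the role of the parameter in (K8), consider an illustrative value such as ${\epsilon} = 1/10$ (any $0 < {\epsilon} < a$ behaves in the same way). By (K7), the special vertical segment through $(2,\theta_i)$ satisfies
\begin{equation*}
r\left(\sigma_i^{{\epsilon}}(2,\theta_i,z')\right) ~ = ~ 2 + {\epsilon} ~ > ~ 2 \quad {\rm for ~ all} ~ -2 \leq z' \leq 2 ~ ,
\end{equation*}
so its image lies strictly outside the cylinder $\{r=2\}$, whereas for ${\epsilon}=0$ the image lies on the cylinder $\{r=2\}$ at height $z=(-1)^i$. This shift of the special radius value is the variation of the radius inequality responsible for the different characteristics of the resulting plugs.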
Observe that for ${\epsilon}=0$ we recover Kuperberg's Radius Inequality, one of the most fundamental features of the original construction. Figure~\ref{fig:modifiedradius} represents the radius inequality for ${\epsilon}<0$, ${\epsilon} =0$, and ${\epsilon}>0$. Note that in the third illustration (c), for the case ${\epsilon}>0$, the insertion as illustrated has a vertical shift upwards. This is not required by conditions (K7) and (K8), but will be used to prove Theorem~\ref{thm-main2}, as explained in Section~\ref{subsec-transverse}. (The vertex points $\{ v_1^{\epsilon}, v_2^{\epsilon}\}$ defined in \eqref{eq-boundarypoints} correspond to this vertical offset.)
\begin{figure}
\caption{${\epsilon} < 0$}
\caption{${\epsilon} = 0$}
\caption{${\epsilon} > 0$}
\caption{ The modified radius inequality for the cases ${\epsilon}<0$, ${\epsilon}=0$ and ${\epsilon}>0$}
\label{fig:modifiedradius}
\end{figure}
\begin{remark}\label{rmk-domains} We add the following hypothesis relating the value of ${\epsilon}_0$ in \eqref{eq-generic1} to the insertions: let ${\epsilon}_0$, as introduced in Hypothesis~\ref{hyp-genericW}, be sufficiently small so that the ${\epsilon}_0$-neighborhood of the periodic orbits ${\mathcal O}_i$ intersects the insertion regions ${\mathcal D}_i^{\epsilon}$ in the interior of their faces, for all ${\epsilon}$ considered. \end{remark}
Finally, define ${\mathbb K}_{\epsilon}$ to be the quotient manifold obtained from ${\mathbb W}$ by identifying the sets $D_i$ with ${\mathcal D}_i^{\epsilon}$. That is, for each point $x \in D_i$ identify $x$ with $\sigma_i^{\epsilon}(x) \in {\mathbb W}$, for $i = 1,2$.
The restricted ${\mathcal W}$-flow on the inserted disk ${\mathcal D}_i^{\epsilon} = \sigma_i^{\epsilon}(D_i)$ is not compatible with the restricted ${\mathcal W}$-flow on $D_i$. Thus, to obtain a smooth vector field ${\mathcal X}_{\epsilon}$ from this construction, it is necessary to modify ${\mathcal W}$ on each insertion ${\mathcal D}_i^{\epsilon}$.
The idea is to replace the vector field ${\mathcal W}$ on the interior of each region ${\mathcal D}_i^{\epsilon}$ with the image vector field. This requires a minor technical step first.
Smoothly reparametrize the image of ${\mathcal W}|_{D_i}$ under $\sigma_i^{\epsilon}$ on an open neighborhood of the boundary of ${\mathcal D}_i^{\epsilon}$ so that it agrees with the restriction of ${\mathcal W}$ to the same neighborhood. This is possible since the vector field ${\mathcal W}$ is vertical on a sufficiently small open neighborhood of $\partial D_i\cap \partial {\mathbb W}$, so is mapped by $\sigma_i^{\epsilon}$ to an orbit segment of ${\mathcal W}$ by (K3). We obtain a vector field ${\mathcal W}_i'$ on ${\mathcal D}_i^{\epsilon}$ with the same orbits as the image of ${\mathcal W}|_{D_i}$.
Then modify ${\mathcal W}$ on each insertion ${\mathcal D}_i^{\epsilon}$, replacing it with the modified image ${\mathcal W}_i'$. Let ${\mathcal W}'$ denote the vector field on ${\mathbb W}$ after these modifications, and note that ${\mathcal W}'$ is smooth. By the modifications made above, the vector field ${\mathcal W}^\prime$ descends to a smooth vector field on ${\mathbb K}_{\epsilon}$ denoted by ${\mathcal K}_{\epsilon}$. Let $\Phi_t^{\epsilon}$ denote the flow of the vector field ${\mathcal K}_{\epsilon}$ on ${\mathbb K}_{\epsilon}$. The resulting space ${\mathbb K}_{\epsilon} \subset {\mathbb R}^3$ is illustrated in Figure~\ref{fig:K}. Note that the flow of ${\mathcal K}_{\epsilon}$ on ${\mathbb K}_{\epsilon}$ clearly satisfies the plug conditions (P1) and (P4) of Section~\ref{sec-plugs}, while Proposition~\ref{prop-ee} below will show that the condition (P2) is also satisfied. For the cases ${\epsilon} < 0$, the trapped
orbit condition (P3) is shown by Corollary~\ref{cor-trapped},
so that the flow of ${\mathcal K}_{\epsilon}$ on ${\mathbb K}_{{\epsilon}}$ is a plug in the sense of
Section~\ref{sec-plugs}.
\begin{figure}
\caption{ The Kuperberg Plug ${\mathbb K}_{\epsilon}$}
\label{fig:K}
\end{figure}
\section{Level and radius functions}\label{sec-radiuslevel}
In this section we recall some of the results on the orbit behavior of the flows ${\mathcal K}_{\epsilon}$
that are independent of ${\epsilon}$ and that will be useful for the study of the dynamics in the plugs ${\mathbb K}_{\epsilon}$. As with the study of the Kuperberg flow ${\mathcal K}_0$, an orbit in ${\mathbb K}_{\epsilon}$ is formed by concatenating pieces of orbits of the Wilson plug, where these orbit segments are the result of the construction of ${\mathbb K}_{\epsilon}$ via the insertion maps $\sigma_i^{\epsilon}$. Thus, understanding the way these arcs concatenate is a fundamental aspect of understanding the dynamics of the flow in ${\mathbb K}_{\epsilon}$.
We start by introducing notation that will be used throughout this work, along with some basic concepts which are fundamental for relating the dynamics of the two vector fields ${\mathcal W}$ and ${\mathcal K}_{\epsilon}$. These results are contained in the literature \cite{Ghys1995,HR2016, Kuperberg1994, Kuperbergs1996, Matsumoto1995}, though in a variety of differing notations and presentations. We adopt the notation of \cite{HR2016}, and parts of the following text are also adapted from that work.
Recall that ${\mathcal D}_i^{\epsilon} = \sigma_i^{\epsilon}(D_i)$ for $i =1,2$ are solid $3$-disks embedded in ${\mathbb W}$.
Introduce the sets: \begin{equation}\label{eq-notchedW} {\mathbb W}'_{\epsilon} ~ \equiv ~ {\mathbb W} - \left \{ {\mathcal D}_1^{\epsilon} \cup {\mathcal D}_2^{\epsilon} \right\} \quad , \quad \widehat{{\mathbb W}}_\e~ \equiv ~ \overline{{\mathbb W} - \left \{ {\mathcal D}_1^{\epsilon} \cup {\mathcal D}_2^{\epsilon} \right\}} ~ . \end{equation} The closure $\widehat{{\mathbb W}}_\e$ of ${\mathbb W}'_{\epsilon}$ is the \emph{pi\`{e}ge de Wilson creus\'{e}} as defined in \cite[page 292]{Ghys1995}. The compact space $\widehat{{\mathbb W}}_\e \subset {\mathbb W}$ is the result of ``drilling out'' the interiors of ${\mathcal D}_1^{\epsilon}$ and $ {\mathcal D}_2^{\epsilon}$, as the terminology \emph{creus\'{e}} suggests.
For $x, y \in {\mathbb K}_{\epsilon}$, we say that $x \prec_{{\mathcal K}_{\epsilon}} y$ if there exists $t \geq 0$ such that $\Phi_t^{\epsilon}(x) = y$. Likewise, for $x',y' \in {\mathbb W}$, we say that $x' \prec_{{\mathcal W}} y'$ if there exists $t \geq 0$ such that $\Psi_t(x') = y'$.
Let $\tau \colon {\mathbb W} \to {\mathbb K}_{\epsilon}$ denote the quotient map,
which for $i = 1,2$, identifies a point $x \in D_i$ with its
image $\sigma_i^{\epsilon}(x) \in {\mathcal D}_i^{\epsilon}$. Although $\tau$ depends on ${\epsilon}$, we denote it simply by $\tau$. Then the restriction $\tau' \colon {\mathbb W}'_{\epsilon} \to {\mathbb K}_{\epsilon}$ is injective and onto. Let $(\tau')^{-1} \colon {\mathbb K}_{\epsilon} \to {\mathbb W}'_{\epsilon}$ denote the inverse map, which followed by the inclusion ${\mathbb W}'_{\epsilon} \subset {\mathbb W}$, yields the (discontinuous) map $\tau^{-1} \colon {\mathbb K}_{\epsilon} \to {\mathbb W}$, where, for $i=1,2$, we have: \begin{equation}\label{eq-radiusdef} \tau^{-1}(\tau(x)) =x ~ {\rm for} ~ x \in D_i ~ ,~ {\rm and} ~ \sigma_i^{\epsilon}(\tau^{-1}(\tau(x))) = x ~ {\rm for} ~ x \in {\mathcal D}_i^{\epsilon}~. \end{equation} For $x \in {\mathbb K}_{\epsilon}$, let $x=(r,\theta,z)$ be defined as the ${\mathbb W}$-coordinates of $\tau^{-1}(x) \in {\mathbb W}'_{\epsilon}$. In this way, we obtain (discontinuous) coordinates $(r,\theta,z)$ on ${\mathbb K}_{\epsilon}$.
In particular, let $r \colon {\mathbb W}'_{\epsilon} \to [1,3]$ be the restriction of the radius coordinate on ${\mathbb W}$. This function extends to the \emph{radius function} of ${\mathbb K}_{\epsilon}$, again denoted by $r$, by setting $r(x) = r(\tau^{-1}(x))$ for $x \in {\mathbb K}_{\epsilon}$.
The flow of the vector field ${\mathcal W}$ on ${\mathbb W}$ preserves the radius function on ${\mathbb W}$, so $x' \prec_{{\mathcal W}} y'$ implies that $r(x') = r(y')$. However, $x \prec_{{\mathcal K}_{\epsilon}} y$ need not imply that $r(x) = r(y)$, and the
points of discontinuity are the transition points defined below.
Let $\partial_h^-{\mathbb K}_{\epsilon} \equiv \tau(\partial_h^- {\mathbb W}\setminus (L_1^-\cup L_2^-))$ and $\partial_h^+{\mathbb K}_{\epsilon} \equiv \tau(\partial_h^+ {\mathbb W}\setminus (L_1^+\cup L_2^+))$ denote the bottom and top horizontal faces of ${\mathbb K}_{\epsilon}$, respectively. Note that the vertical boundary component $\partial_v{\mathbb K}_{\epsilon} \equiv \tau(\widehat{{\mathbb W}}_\e \cap \partial_v {\mathbb W})$ is tangent to the flow.
Points $x' \in \partial_h^- {\mathbb W}$ and $y' \in \partial_h^+ {\mathbb W}$ are said to be \emph{facing}, and we write $x' \equiv y'$, if $x' = (r, \theta, -2)$ and $y' = (r, \theta, 2)$ for some $r$ and $\theta$. There is also a notion of facing points for $x, y \in {\mathbb K}_{\epsilon}$, if either of two cases are satisfied: \begin{itemize} \item For $x = \tau(x') \in \partial_h^-{\mathbb K}_{\epsilon}$ and $y = \tau(y') \in \partial_h^+{\mathbb K}_{\epsilon}$, if $x' \equiv y'$ then $x \equiv y$. \item For $i=1,2$, for $x', y' \in \partial^{\pm} {\mathbb W}$ and $x = \sigma_i^{\epsilon}(x')$ and $y = \sigma_i^{\epsilon}(y')$, if $x' \equiv y'$ then $x \equiv y$. \end{itemize}
The context in which the notation $x \equiv y$ is used dictates which usage applies.
Consider the embedded disks ${\mathcal L}_i^{{\epsilon}\pm} \subset {\mathbb W}$ defined by \eqref{eq-tongues}, which appear as the faces of the insertions in ${\mathbb W}$. Their images in the quotient manifold ${\mathbb K}_{\epsilon}$ are denoted by: \begin{equation}\label{eq-sections} E^{{\epsilon}}_1= \tau({\mathcal L}_1^{{\epsilon}-}) ~ , ~ S^{{\epsilon}}_1= \tau({\mathcal L}_1^{{\epsilon}+}) ~ , ~ E^{{\epsilon}}_2= \tau({\mathcal L}_2^{{\epsilon}-}) ~ , ~ S^{{\epsilon}}_2= \tau({\mathcal L}_2^{{\epsilon}+}) ~ . \end{equation} Note that $\tau^{-1}(E^{{\epsilon}}_i) = L_i^-$, while $\tau^{-1}(S^{{\epsilon}}_i) = L_i^+$. The {\it transition points} of an orbit of ${\mathcal K}_{\epsilon}$ are those points which meet the $E^{{\epsilon}}_i$, $S^{{\epsilon}}_i$, $\partial_h^-{\mathbb K}_{\epsilon}$ or $\partial_h^+{\mathbb K}_{\epsilon}$, for $i=1,2$.
They are classified as either \emph{primary} or \emph{secondary} transition points, where $x\in {\mathbb K}_{\epsilon}$ is: \begin{itemize} \item a \emph{primary entry point} if $x\in \partial_h^-{\mathbb K}_{\epsilon}$; \item a \emph{primary exit point} if $x \in \partial_h^+{\mathbb K}_{\epsilon}$; \item a \emph{secondary entry point} if $x \in E^{{\epsilon}}_1 \cup E^{{\epsilon}}_2$; \item a \emph{secondary exit point} if $x \in S^{{\epsilon}}_1 \cup S^{{\epsilon}}_2$. \end{itemize} If a ${\mathcal K}_{\epsilon}$-orbit contains no transition points, then it lifts to a ${\mathcal W}$-orbit in ${\mathbb W}$ which flows from $\partial_h^- {\mathbb W}$ to $\partial_h^+ {\mathbb W}$.
A \emph{${\mathcal W}$-arc} is a closed segment $[x,y]_{{\mathcal K}_{\epsilon}} \subset {\mathbb K}_{\epsilon}$ of
the flow of ${\mathcal K}_{\epsilon}$ whose endpoints $\{x,y\}$ are the only transition
points in $[x,y]_{{\mathcal K}_{\epsilon}}$. The open interval $(x,y)_{{\mathcal K}_{\epsilon}}$ is then the image under $\tau$ of a unique ${\mathcal W}$-orbit segment in ${\mathbb W}'_{\epsilon}$, denoted by $(x',y')_{{\mathcal W}}$ (see Figure~\ref{fig:cWarcs}).
Let $[x', y']_{{\mathcal W}}$ denote the closure of $(x',y')_{{\mathcal W}}$ in $\widehat{{\mathbb W}}_\e$, then we say that $[x', y']_{{\mathcal W}}$ is the \emph{lift} of $[x,y]_{{\mathcal K}_{\epsilon}}$. Note that the radius function $r$ is constant along $[x', y']_{{\mathcal W}}$.
The properties of the Wilson flow ${\mathcal W}$ on $\widehat{{\mathbb W}}_\e$ determine the
endpoints of lifts $[x',y']_{{\mathcal W}}$. We state the six cases which
arise explicitly, as they will be cited in later arguments. For a
proof we refer to Lemma~4.1 of \cite{HR2016}. Figure~\ref{fig:cWarcs} helps in visualizing these cases.
\begin{lemma}\label{lem-cases1} Let $[x,y]_{{\mathcal K}_{\epsilon}} \subset {\mathbb K}_{\epsilon}$ be a ${\mathcal W}$-arc, and let $[x',y']_{{\mathcal W}} \subset \widehat{{\mathbb W}}_\e$ denote its lift.
\begin{enumerate}
\item {\bf (p-entry/entry)} If $x$ is a primary entry point, then $\displaystyle x' \in \partial_h^- {\mathbb W} \setminus (L_1^- \cup L_2^- )$, and if $y$ is an entry point, then $y$ is a secondary entry point and $y' \in {\mathcal L}_i^{{\epsilon}-}$ for $i =1$ or $2$.
\item {\bf (p-entry/exit)} If $x$ is a primary entry point, then $x' \in \partial_h^-
{\mathbb W} \setminus ( L_1^- \cup L_2^- )$, and if $y$ is an exit point,
then $\displaystyle y' \in \partial_h^+{\mathbb W}\setminus (L_1^+\cup L_2^+)$ is a primary exit point, and by the entry/exit condition on ${\mathbb W}$ we have $x\equiv y$.
\item {\bf (s-entry/entry)} If $x$ is a secondary entry point, then $x' \in L_i^-$, for $i =1$ or $2$, and if $y$ is an entry point, then we have $y' \in {\mathcal L}_j^{{\epsilon}-} $ where $j =1,2$ is not necessarily equal to $i$.
\item {\bf (s-entry/exit)} If $x$ is a secondary entry point, then $x' \in L_i^-$, for $i =1$ or $2$, and if $y$ is an exit point, then $y$ is a secondary exit point, $y'
\in L_i^+$ and $x \equiv y$ by the entry/exit condition of ${\mathbb W}$.
\item {\bf (s-exit/entry)} If $x$ is a secondary exit point, then $x' \in {\mathcal L}_i^{{\epsilon}+}$, for $i =1$ or $2$, and if $y$ is an entry point, so that $y' \in {\mathcal L}_j^{{\epsilon}-}$ then $j=2$ if $i=2$, and $j =1,2$ if $i=1$.
\item {\bf (s-exit/exit)} If $x$ is a secondary exit point, then $x' \in {\mathcal L}_i^{{\epsilon}+}$, for $i =1$ or $2$, and if $y$ is a primary
exit point, then $y' \in \partial_h^+ {\mathbb W} \setminus ( L_1^+ \cup L_2^+)$. If $y$ is a secondary exit point, then $y' \in L_j^+$, where $j =1$ or $2$ is not necessarily equal to $i$. \end{enumerate} \end{lemma}
Figure~\ref{fig:cWarcs} illustrates some of the notions discussed in this section.
The disks $L_1^-$ and $L_2^-$ contained in $\partial_h^-{\mathbb W}$ are drawn in the bottom face, though they are partially obscured by the inner cylindrical boundary $\{r=1\}$. The image of $L_1^-$ under $\sigma_1^{\epsilon}$ is the entry face of the insertion region in the lower half of the cylinder, while the image of $L_2^-$ under $\sigma_2^{\epsilon}$ is the entry face of the insertion region in the upper half of the core cylinder. Analogously, the disks $L_1^+$ and $L_2^+$ in $\partial_h^+{\mathbb W}$ are mapped to the exit regions of the insertions. The intersections of the ${\mathcal W}$-periodic orbits ${\mathcal O}_i$ with the insertions in ${\mathbb W}$ are illustrated, as well as two ${\mathcal W}$-arcs in ${\mathbb W}'_{\epsilon}$ that belong to the same orbit. One ${\mathcal W}$-arc goes from $\partial_h^-{\mathbb W}$ to ${\mathcal L}_1^{{\epsilon}-}$, hence from a primary entry point to a secondary entry point (as in Lemma \ref{lem-cases1}(1)). The second ${\mathcal W}$-arc goes from ${\mathcal L}_1^{{\epsilon}+}$ to ${\mathcal L}_1^{{\epsilon}-}$, thus from a secondary exit point to a secondary entry point (as in Lemma \ref{lem-cases1}(5)).
\begin{figure}
\caption{${\mathcal W}$-arcs lifted to ${\mathbb W}'_{\epsilon}$ }
\label{fig:cWarcs}
\end{figure}
Introduce the radius coordinate function along ${\mathcal K}_{\epsilon}$-orbits, where for $x \in {\mathbb K}_{\epsilon}$, set $\rho_x^{\epsilon}(t) \equiv r(\Phi_t^{\epsilon}(x))$. Note that if $\Phi_{t}^{\epsilon}(x)$ is not a transition point then the function $\rho_x^{\epsilon}(t)$ is locally constant at $t$, and thus if a ${\mathcal K}_{\epsilon}$-arc $\{\Phi_t^{\epsilon}(x) \mid t_0 \leq t \leq t_{1}\}$ contains no transition point, then $\rho_x^{\epsilon}(t)=\rho_x^{\epsilon}(t_0)$ for all $t_0 \leq t \leq t_1$.
The \emph{level function} along an orbit indexes the discontinuities of the radius function. Given $x \in {\mathbb K}_{\epsilon}$, set $n_x(0) = 0$, and for $t > 0$, define \begin{equation}\label{def-level+} n_x(t) = \# \left\{ (E^{{\epsilon}}_1 \cup E^{{\epsilon}}_2) \cap \Phi_s^{\epsilon}(x) \mid 0 < s \leq t \right\} - \# \left\{ (S^{{\epsilon}}_1 \cup S^{{\epsilon}}_2) \cap \Phi_s^{\epsilon}(x) \mid 0 < s \leq t \right\} . \end{equation}
That is, $n_x(t)$ is the total number of secondary entry points, minus
the total number of secondary exit points, traversed by the flow of $x$ over the interval
$0 < s \leq t$.
The function can be extended to negative time by setting, for $t < 0$,
\begin{equation}\label{def-level-} n_x(t) = \# \left\{ (S^{{\epsilon}}_1 \cup S^{{\epsilon}}_2) \cap \Phi_s^{\epsilon}(x) \mid t < s \leq 0 \right\} - \# \left\{ (E^{{\epsilon}}_1 \cup E^{{\epsilon}}_2) \cap \Phi_s^{\epsilon}(x) \mid t < s \leq 0 \right\}. \end{equation}
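For example (a schematic orbit, not one arising from a specific choice of insertions), suppose the forward orbit of $x$ meets $E^{{\epsilon}}_1$ at time $s_1$, then $E^{{\epsilon}}_2$ at time $s_2$, and then $S^{{\epsilon}}_2$ at time $s_3$, with $0 < s_1 < s_2 < s_3 \leq t$ and no other transition points. Then \eqref{def-level+} gives
\begin{equation*}
n_x(t) ~ = ~ 2 - 1 ~ = ~ 1 ~ ,
\end{equation*}
while the radius function $\rho_x^{\epsilon}(s)$ is constant on each of the intervals $(0, s_1)$, $(s_1, s_2)$, $(s_2, s_3)$ and $(s_3, t]$, possibly jumping at the transition times $s_1, s_2, s_3$.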
We use throughout this work a Riemannian metric on the tangent bundle to ${\mathbb K}_{\epsilon}$. The Wilson plug ${\mathbb W}$ has a natural product Riemannian metric, where the rectangle ${\bf R}$ in \eqref{eq-rectangle} has the product euclidean metric, and the circle factor ${\mathbb S}^1$ has length $2\pi$. Then modify this metric along the insertions $\sigma_i^{\epsilon}$ to smooth them out, and so obtain a Riemannian metric on ${\mathbb K}_{\epsilon}$.
For $x' \prec_{{\mathcal W}} y'$ in ${\mathbb W}$, let $d_{{\mathbb W}}(x',y')$ denote the
path length of the ${\mathcal W}$-orbit segment $[x' ,y']_{{\mathcal W}}$ between
them. Similarly, for $x \prec_{{\mathcal K}_{\epsilon}} y$ in ${\mathbb K}_{\epsilon}$, let
$d_{{\mathbb K}_{\epsilon}}(x,y)$ denote the path length of the ${\mathcal K}_{\epsilon}$-orbit segment $[x,y]_{{\mathcal K}_{\epsilon}}$. Note that if $[x,y]_{{\mathcal K}_{\epsilon}}$ is a ${\mathcal W}$-arc with lift $[x',y']_{{\mathcal W}}$ then we have $d_{{\mathbb K}_{\epsilon}}(x,y) = d_{{\mathbb W}}(x',y')$ by the choice of the metric on ${\mathbb W}$.
In Chapter 4 of \cite{HR2016} (Lemmas~4.3 and 4.4 and Corollary~4.5) the following basic length estimates are established for the Kuperberg Plug ${\mathbb K}_0$, and the proofs for this case carry over directly to the flows on the plugs ${\mathbb K}_{\epsilon}$: \begin{enumerate}
\item Let $0 < \delta < 1$. There exists $L(\delta) > 0$ such that for any $\xi \in {\mathbb W}$ with $|r(\xi) -2| \geq \delta$, the total ${\mathcal W}$-orbit segment $[x',y']_{{\mathcal W}}$ through $\xi$ has length bounded above by $L(\delta)$. \item There exists $0 < d_{min} <
d_{max}$ such that if $[x',y']_{{\mathcal W}} \subset \widehat{{\mathbb W}}_\e$ is the lift of
a ${\mathcal W}$-arc $[x,y]_{{\mathcal K}_{\epsilon}}$, then we have the uniform estimate \begin{equation}\label{eq-segmentlengths} d_{min} \leq d_{{\mathbb W}}(x',y') \leq d_{max} ~ . \end{equation} \end{enumerate} Thus for $[x,y]_{{\mathcal K}_{\epsilon}} \subset {\mathbb K}_{\epsilon}$ a \emph{${\mathcal W}$-arc}, there is a uniform length estimate
\begin{equation}\label{eq-segmentlengths2} d_{min} \leq d_{{\mathbb K}_{\epsilon}}(x,y) \leq d_{max} ~ . \end{equation}
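These bounds combine in the obvious way: if the orbit segment $[x,y]_{{\mathcal K}_{\epsilon}}$ is a concatenation of $k$ ${\mathcal W}$-arcs, then summing \eqref{eq-segmentlengths2} over the arcs yields
\begin{equation*}
k \, d_{min} ~ \leq ~ d_{{\mathbb K}_{\epsilon}}(x,y) ~ \leq ~ k \, d_{max} ~ ,
\end{equation*}
so the number of transition points along an orbit segment grows linearly with its path length.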
We end this section by recalling some technical results, which are key to analyzing the orbits of the flows $\Phi_t^{\epsilon}$ on ${\mathbb K}_{\epsilon}$. For a proof of the first of these, we refer to \cite[Proposition 5.5]{HR2016}, whose proof carries over directly to the case of the flow $\Phi_t^{{\epsilon}}$ on ${\mathbb K}_{\epsilon}$.
\begin{prop} \label{prop-shortcut4} Let $x \in {\mathbb K}_{\epsilon}$. For $n \geq 3$, assume that we are given successive transition points $x_{\ell} = \Phi_{t_{\ell}}^{{\epsilon}}(x)$ for $0 = t_0 < t_1 < \cdots < t_{n-1} < t_n$. Let $[x_\ell',y_{\ell+1}']_{{\mathcal W}}$ be the lift of the ${\mathcal W}$-arc $[x_\ell,x_{\ell+1}]_{{\mathcal K}_{\epsilon}}$ for all $0\leq \ell \leq n-1$.
Suppose that $n_{x_0}(t) \geq 0$ for all $0 \leq t < t_n$, and that $n_{x_0}(t_{n-1}) = 0$. Then $x_0' \prec_{{\mathcal W}} y_n'$, and hence $r(x_0') = r(y_n')$. Moreover, if $x_0$ is an entry point and $x_n$ is an exit point, then
$x_0\equiv x_n$. \end{prop}
For a proof of the next result, we refer to the proof of \cite[Proposition 5.6]{HR2016}. \begin{prop}\label{prop-ee} Let $x \in \partial_h^-{\mathbb K}_{\epsilon}$ be a primary entry point, and label the successive transition points by $x_{\ell} = \Phi_{t_{\ell}}^{\epsilon}(x)$ for $0 = t_0 < t_1 < \cdots < t_{n-1} < t_n$.
If $x_n$ is a primary exit point, then $x \prec_{{\mathcal W}} x_n$ and hence $x \equiv x_n$. Moreover, $n_x(t) \geq 0$ for $0 \leq t < t_{n}$. \end{prop}
Note that this implies that the flow of ${\mathcal K}_{\epsilon}$ on ${\mathbb K}_{\epsilon}$ satisfies condition (P2) of Section~\ref{sec-plugs}. The following result proves that ${\mathbb K}_{\epsilon}$ satisfies condition (P3) for all ${\epsilon}$, implying that ${\mathbb K}_{\epsilon}$ is a plug.
\begin{cor}\label{cor-trapped} Let $x\in \partial_h^-{\mathbb K}_{\epsilon}$ be a primary entry point with $r(x)=2$. Then the forward ${\mathcal K}_{\epsilon}$-orbit of $x$ is trapped. \end{cor}
\proof Assume that the forward ${\mathcal K}_{\epsilon}$-orbit of $x$ is not trapped, and let $x_{\ell} = \Phi_{t_{\ell}}^{\epsilon}(x)$ for $0 = t_0 < t_1 < \cdots < t_{n-1} < t_n$ be the transition points. Then $x_n$ is a primary exit point, and by Proposition~\ref{prop-ee}, $x\prec_{\mathcal W} x_n$, which is impossible since $r(x)=2$. \endproof
\section{Propellers and double propellers in ${\mathbb W}$}\label{sec-propeller}
Propellers, simple and double, as defined in Chapters 11, 12 and 13 of \cite{HR2016}, are obtained from the Wilson flow of selected arcs in $\partial_h^-{\mathbb W}$. Double propellers are formed by the union of two (simple) propellers, as described below. These geometric structures associated to the Kuperberg flows $\Phi_{t}^{{\epsilon}}$ are an essential tool for analyzing their dynamical properties.
Consider a curve in the entry region, ${\gamma} \subset \partial_h^- {\mathbb W}$, with a parametrization ${\gamma}(s) = (r(s), \theta(s), -2)$ for $0 \leq s \leq 1$,
and assume that the map ${\gamma} \colon [0,1] \to \partial_h^- {\mathbb W}$ is a homeomorphism onto its image. We use the notation ${\gamma}_s = {\gamma}(s)$ when convenient, so that
${\gamma}_0 = {\gamma}(0)$ denotes the initial point and ${\gamma}_1 = {\gamma}(1)$ denotes the terminal point of ${\gamma}$. For $\delta > 0$, assume that $r({\gamma}_0) = 3$, $r({\gamma}_1) = 2+\delta$, and $2 + \delta < r({\gamma}_s) < 3$ for $0 < s < 1$.
The ${\mathcal W}$-orbits of the points in ${\gamma}$ traverse ${\mathbb W}$ from $ \partial_h^- {\mathbb W}$ to $ \partial_h^+ {\mathbb W}$, and hence the flow of ${\gamma}$ generates a compact invariant surface $P_{{\gamma}} \subset {\mathbb W}$. The surface $P_{{\gamma}}$ is parametrized by $(s,t) \mapsto \Psi_t({\gamma}(s))$ for $0 \leq s \leq 1$ and $0 \leq t \leq T_s$, where $T_s$ is the exit time for the ${\mathcal W}$-flow of ${\gamma}(s)$. Observe that as $s \to 1$ and $\delta \to 0$, the exit time $T_s \to \infty$.
The surface $P_{{\gamma}}$ is called a {\it propeller}, due to the nature of its shape in ${\mathbb R}^3$. It takes the form of a ``tongue'' wrapping around the core cylinder ${\mathcal C}(2 + \delta)$ which contains the orbit of ${\gamma}_1$. To visualize the shape of this surface, consider the case where ${\gamma}$
is topologically transverse to the cylinders ${\mathcal C}(r_0) = \{r=r_0\}$ for $2+ \delta \leq r_0 \leq 3$. The transversality assumption implies that the radius $r({\gamma}_s)$ is \emph{monotone decreasing} as $s$ increases.
Figure~\ref{fig:propeller} illustrates the surface $P_{{\gamma}}$ as a ``flattened'' propeller on the right, and its embedding in ${\mathbb W}$ on the left. As $\delta \to 0$ the surface approaches the cylinder ${\mathcal C}= \{r=2\}$ in an infinite spiraling manner.
\begin{figure}
\caption{ Embedded and flattened finite propeller}
\label{fig:propeller}
\end{figure}
We comment on the details in Figure~\ref{fig:propeller}. The horizontal boundary $\partial_h P_{{\gamma}}$ is composed of the initial curve ${\gamma} \subset \partial_h^- {\mathbb W}$, and its mirror image $\overline{\gamma} \subset \partial_h^+ {\mathbb W}$ obtained via the entry/exit condition on the Wilson Plug. The vertical boundary $\partial_v P_{{\gamma}}$ is composed of the vertical segment ${\gamma}_0 \times [-2,2]$ in $\partial_v {\mathbb W}$, and the orbit $\{\Psi_t({\gamma}_1) \mid 0 \leq t \leq T_1\}$, which is the inner (or long) edge in the interior of ${\mathbb W}$. One way to visualize the surface is to consider the product surface ${\gamma} \times [-2,2]$, and then deform it by an isotopy which follows the flow lines of ${\mathcal W}$, as illustrated in Figure~\ref{fig:propeller}. On the right hand side of the figure, some of the orbits in the propeller are illustrated, while on the left hand side only the boundary orbit is shown.
Consider the orbit $\{\Psi_t({\gamma}_1) \mid 0 \leq t \leq T_1\}$ of the endpoint ${\gamma}_1$ with $r({\gamma}_1) = 2+\delta$.
The path $t \mapsto \Psi_t({\gamma}_1)$ makes a certain number of turns in the positive ${\mathbb S}^1$-direction before reaching the core annulus ${\mathcal A}$ at $z = 0$. The Wilson vector field ${\mathcal W}$ is vertical on the annulus ${\mathcal A}$, so the flow of ${\gamma}_1$ crosses ${\mathcal A}$, after which the orbit $\Psi_t({\gamma}_1)$ turns in the negative direction while ascending, until it reaches $\partial_h^+ {\mathbb W}$. The point where the flow $\Psi_t({\gamma}_1)$ intersects ${\mathcal A}$ is called the \emph{tip} of the propeller $P_{{\gamma}}$.
The anti-symmetry of the vector field ${\mathcal W}$ implies that the number of turns in one direction (considered as a real number) equals the number of turns in the opposite direction. To be precise, for ${\gamma}_1 = (r_1, \theta_1, -2)$ in coordinates, let $\Psi_t({\gamma}_1) = (r_1(t), \theta_1(t), z_1(t))$ in coordinates. The function $z_1(t)$ is monotone increasing, and by the symmetry, we have $z_1(T_1 /2) = 0$. Thus, the tip is the point $\Psi_{T_1 /2}({\gamma}_1)$.
Next, for fixed $0 \leq a < 2\pi$, consider the intersection of $P_{{\gamma}}$ with a slice
\begin{equation}\label{eq-slices} {\bf R}_{a} \equiv \{\xi = (r, a, z) \mid ~ 1 \leq r \leq 3 ~, ~ -2 \leq z \leq 2\} ~ . \end{equation} Each rectangle ${\bf R}_{a}$ is tangent to the Wilson flow along the annulus ${\mathcal A}$, and also near the boundaries of ${\mathbb W}$, but is transverse to the flow at all other points. The case when $a = \theta_1(T_1/2)$ is special, as the tip of the propeller is tangent to ${\bf R}_{a}$.
Assume that $a \ne \theta_1(T_1/2)$, then the orbit $\Psi_t({\gamma}_1)$ intersects ${\bf R}_{a}$ in a series of points on the line ${\mathcal C}(2+\delta) \cap {\bf R}_{a}$ that are paired, as illustrated in the right hand side of Figure~\ref{fig:arcspropeller}. Moreover, the intersection $P_{{\gamma}} \cap {\bf R}_{a}$ consists of a finite sequence of arcs between the symmetrically paired points of $\Psi_t({\gamma}_1) \cap {\bf R}_{a}$.
The number of such arcs equals, up to plus or minus one, the number of times the curve $\Psi_t({\gamma}_1)$ makes a complete turn around the cylinder ${\mathcal C}(2+\delta)$.
\begin{figure}
\caption{ Trace of propellers in ${\bf R}_{0}$}
\label{fig:arcspropeller}
\end{figure}
We comment on the details of Figure~\ref{fig:arcspropeller}, which illustrates the case of ${\bf R}_{0}$ where $a=\theta_0$.
The vertical line between the points $(2,\theta_0,-1)$ and $(2,\theta_0,1)$ (marked in the figure simply by $z=-1$ and $z=1$, respectively) is the trace of the Reeb cylinder in ${\bf R}_{0}$. The trace of a propeller in ${\bf R}_{0}$ is a collection of arcs having their endpoints on the vertical line $\{r=2 + \delta\}$. In the left hand figure, $r({\gamma}_1)=2$ and the propeller in consideration is infinite, as defined below; the curves form an infinite family accumulating on the vertical line, of which just four arcs are shown. The right hand figure illustrates the case $r({\gamma}_1)>2$, where the propeller is finite.
Finally, consider the case where $\delta \to 0$, so that the endpoint ${\gamma}_1$ of ${\gamma}$ lies in the cylinder ${\mathcal C}$. Then for $0 \leq s < 1$, we have $r({\gamma}(s)) > 2$, so the $\Psi_t$-flow of ${\gamma}_s \in \partial_h^- {\mathbb W}$ escapes from ${\mathbb W}$. Define the curve $\overline{\gamma}$ in $\partial_h^+ {\mathbb W}$ to be the trace of these facing endpoints in $\partial_h^+ {\mathbb W}$, parametrized by $\overline{\gamma}(s)$ for $0 \leq s < 1$, where ${\gamma}(s) \equiv \overline{\gamma}(s)$. Define $\displaystyle \overline{\gamma}_1 = \lim_{s\to 1} ~ \overline{\gamma}(s)$ so that $\overline{\gamma}_1 \equiv {\gamma}_1$ also.
Note that the forward $\Psi_t$-orbit of ${\gamma}_1$ is asymptotic to the periodic orbit ${\mathcal O}_1$, while the backward $\Psi_t$-orbit of $\overline{\gamma}_1$ is asymptotic to the periodic orbit ${\mathcal O}_2$. Introduce their ``pseudo-orbit'', \begin{equation}\label{eq-Zset} {\mathcal Z}_{{\gamma}} = {\mathcal Z}_{{\gamma}}^- ~ \cup ~ {\mathcal Z}_{{\gamma}}^+ ~ , ~ {\mathcal Z}_{{\gamma}}^- = \{\Psi_t({\gamma}_1) \mid t \geq 0 \} ~ \text{and} ~ {\mathcal Z}_{{\gamma}}^+ = \{\Psi_t(\overline{\gamma}_1) \mid t \leq 0 \} \end{equation} Each curve ${\mathcal Z}_{{\gamma}}^{\pm}$ traces out a semi-infinite ray in ${\mathcal C}$ which spirals from the bottom or top face to a periodic orbit, and thus ${\mathcal Z}_{{\gamma}}$ traces out two semi-infinite curves in ${\mathcal C}$ spiraling to the periodic orbits ${\mathcal O}_1 \cup {\mathcal O}_2$.
For $0 < \delta \leq 1$, denote by ${\gamma}^{\delta}$ the curve with image ${\gamma}([0, \delta])$, parametrized by \begin{equation}\label{eq-sparem} {\gamma}^{\delta}(s) = {\gamma}(\delta \cdot s) . \end{equation}
\begin{defn}\label{def-infpropeller} Let ${\gamma}$ be a curve parametrized by ${\gamma} \colon [0,1]\to {\mathbb W}$ as above, with $r({\gamma}_0) = 3$ and $r({\gamma}_1)=2$. Introduce the \emph{infinite propeller} and its closure in ${\mathbb W}$: \begin{equation}\label{eq-infpropeller} P_{{\gamma}} ~ \equiv ~ {\mathcal Z}_{{\gamma}} ~ \cup ~ \bigcup_{\delta > 0} ~ P_{{\gamma}^\delta} \quad , \quad \overline{P}_{{\gamma}} ~ \equiv ~ \overline{\bigcup_{\delta > 0} ~ P_{{\gamma}^\delta}} \end{equation} \end{defn}
As observed in \cite{HR2016}, the closure $\overline{P}_{{\gamma}}$ of an
infinite propeller contains the Reeb cylinder ${\mathcal R}$, with $\overline{P}_{{\gamma}} ~ = ~ P_{{\gamma}} ~ \cup ~ {\mathcal R}$.
We now introduce the notion of double propellers in ${\mathbb W}$.
Consider a smooth curve $\Gamma \subset \partial_h^-{\mathbb W}$ parametrized by $\Gamma \colon [0,2] \to \partial_h^- {\mathbb W}$, with the notation $\Gamma_s = \Gamma(s)$, such that:
\begin{enumerate} \item $r(\Gamma_s) \geq 2$ for all $0 \leq s \leq 2$; \item $r(\Gamma_0) = r(\Gamma_2) = 3$, so that both endpoints lie in the boundary $\partial_h^- {\mathbb W} \cap \partial_v {\mathbb W}$;
\item $\Gamma$ is topologically transverse to the cylinders ${\mathcal C}(r)$ for $2 \leq r \leq 3$, except at the midpoint $\Gamma_1$. \end{enumerate} It then follows that $r(\Gamma_s) \geq r(\Gamma_1) = 2+\delta$ for all $0 \leq s \leq 2$, for some $\delta\geq 0$.
See Figure~\ref{fig:Gamma1} for an illustration in the case when $\delta = 0$.
Assume that $\delta > 0$, so that $r(\Gamma_s) > 2$ for all $0 \leq s \leq 2$; then the ${\mathcal W}$-orbit of each $\Gamma_s$ traverses ${\mathbb W}$.
The $\Psi_t$-flow of the points in $\Gamma$ forms a compact surface embedded in ${\mathbb W}$, whose boundary is contained in the boundary of ${\mathbb W}$, and thus the surface separates ${\mathbb W}$ into two connected components. This surface is denoted $P_{\Gamma}$ and called the \emph{double propeller} defined by the $\Psi_t$-flow of $\Gamma$.
Consider the curves ${\gamma}, \kappa \subset \partial_h^- {\mathbb W}$ obtained by dividing
the curve $\Gamma$ into two segments at the midpoint $s=1$. Parametrize these curves as follows: \begin{eqnarray*}
{\gamma} = \Gamma \ | \ [0,1] ~ & , & ~ {\gamma}(s) = \Gamma(s) ~ \text{for} ~ 0 \leq s \leq 1 \\
\kappa= \Gamma \ | \ [1,2] ~ & , & ~ \kappa(s) = \Gamma(2-s) ~ \text{for} ~ 0 \leq s \leq 1 \end{eqnarray*} The orbit $\displaystyle \{\Psi_t(\Gamma_1) \mid 0 \leq t \leq T_1\}$ forms the \emph{long boundary} of the propellers $P_{{\gamma}}$ and $P_{\kappa}$ generated by the ${\mathcal W}$-flow of these curves. Then $P_{\Gamma}$ is viewed as the gluing of $P_{{\gamma}}$ and $P_{\kappa}$ along the long boundary, hence the notation ``double propeller'' for $P_{\Gamma}$.
\begin{figure}
\caption{ A curve $\Gamma={\gamma}\cup \kappa$ in $\partial_h^-{\mathbb W}$}
\label{fig:Gamma1}
\end{figure}
If $\delta =0$, define two infinite propellers $P_{{\gamma}}$ and $P_{\kappa}$ as in Definition~\ref{def-infpropeller}, and then define
$\displaystyle P_{\Gamma} = P_{{\gamma}} \cup P_{\kappa}$,
where the $\Psi_t$-orbit ${\mathcal Z}_{{\gamma}}$ of the midpoint $\Gamma_1$, defined as in \eqref{eq-Zset}, is again common to both $P_{{\gamma}}$ and $P_{\kappa}$, and $P_{\Gamma}$ is viewed as the gluing of the two propellers along an ``infinite zipper''.
\begin{defn}\label{def-doublepropeller} Let $\Gamma$ be as above with $r(\Gamma_1)=2$. Let ${\gamma}^{\delta}$ and $\kappa^{\delta}$ for $0 <\delta\leq 1$ be the curves as defined in \eqref{eq-sparem}. The \emph{infinite double propeller} is the union: \begin{equation}\label{eq-doublepropeller} P_{\Gamma} ~ \equiv ~ {\mathcal Z}_{{\gamma}} ~ \cup ~ \bigcup_{\delta > 0} ~ \left\{P_{{\gamma}^{\delta}} \cup P_{\kappa^{\delta}}\right\} \end{equation} \end{defn}
\section{Global dynamics for ${\epsilon}<0$}\label{sec-enegative}
We now analyze the global dynamics of the flows $\Phi_t^{{\epsilon}}$ on ${\mathbb K}_{\epsilon}$ for the case when ${\epsilon} < 0$. Recall that the assumption ${\epsilon} < 0$ means that the self-insertion maps $\sigma_i^{\epsilon}$ for $i=1,2$ of the Wilson Plug ${\mathbb W}$ do not penetrate far enough into ${\mathbb W}$ to break the two periodic orbits of the Wilson flow $\Psi_t$ by trapping the orbits attracted to these periodic orbits, as is the case when ${\epsilon}=0$. As a consequence, we show that the flow $\Phi_t^{{\epsilon}}$ has simple dynamics: there is an invariant set that is a cylinder whose boundary components are the two periodic orbits, and these are the only periodic orbits of the flow in the plug ${\mathbb K}_{\epsilon}$.
On the technical level, by the Parametrized Radius Inequality condition (K8), we have the strict inequality $r' < r$, as illustrated in Figure~\ref{fig:modifiedradius}(A). That is, the radius coordinate is strictly increasing at a secondary entry point and strictly decreasing at a secondary exit point. Thus, there exists some $\Delta>0$, depending on ${\epsilon}$, such that whenever an orbit hits an entry point, the radius increases by at least $\Delta$. The main consequence of this fact is that, unless an orbit hits the cylinder $\tau({\mathcal C})$, where ${\mathcal C}$ is the cylinder of radius 2 in ${\mathbb W}$, its behavior is the same as in the Wilson plug. In spite of the simplicity of this portrait of the dynamics of the flow $\Phi_t^{{\epsilon}}$, the proofs of these claims use key aspects of the analysis of the flow $\Phi_t^{0}$ that were developed in \cite{HR2016}.
Note that as ${\epsilon} < 0$ tends to $0$, the constant $\Delta$ also tends to zero, and as a consequence a given non-trapped orbit in ${\mathbb K}_{{\epsilon}}$ becomes increasingly long, as the number of possible reinsertions of the orbit through the faces of the insertions increases to infinity. The geometry of the parametrized flow $\Phi_t^{{\epsilon}}$ as a function of ${\epsilon} < 0$ is reminiscent of the Moving Leaf Lemma in \cite{EMS1977}, which is the key to analyzing the dynamics of flows in counter-examples to the Periodic Orbit Conjecture, as constructed in \cite{EpsteinVogt1978,Sullivan1976}.
\subsection{Orbit lifting property}\label{subsec-neglemma}
The first step in the analysis of the dynamics of the flow $\Phi_t^{{\epsilon}}$ is to establish the following key technical result, which should be compared with Propositions 6.5 and 6.7 of \cite{HR2016}. Recall that we defined $\rho_x^{\epsilon}(t) \equiv r(\Phi_t^{\epsilon}(x))$. \begin{prop}\label{prop-negativemain} Let $x$ be a primary or secondary entry point of ${\mathbb K}_{\epsilon}$ and $y$ an exit point with $x\equiv y$. Assume that either: \begin{enumerate} \item $r(x)>2$, or \item $r(x)<2$ and $\rho_x^{\epsilon}(t)\neq 2 $ for all $t\geq 0$. \end{enumerate} Then $x\prec_{{\mathcal K}_{\epsilon}} y$. Moreover, the collection of lifts of the ${\mathcal W}$-arcs in $[x,y]_{{\mathcal K}_{\epsilon}}$ contains all the ${\mathcal W}$-arcs of the ${\mathcal W}$-orbit of $x'$ that are in $\widehat{{\mathbb W}}_\e$, where $\tau(x')=x$. \end{prop}
\proof Let $x=x_0$ and
$0 = t_0 < t_1 < \cdots < t_n < \cdots $ with $x_{\ell} =
\Phi_{t_{\ell}}^{\epsilon}(x)$ be the transition points for the positive
${\mathcal K}_{\epsilon}$-orbit of $x=x_0$. Note that if the forward orbit of $x_0$ is finite, meaning that there exists $t>0$ such that $\Phi_t^{{\epsilon}}(x_0)$ is a primary exit point, then there are a finite number of transition points. For $\ell \geq 0$, let $[x_{\ell}', y_{\ell +1}']_{{\mathcal W}} \subset \widehat{{\mathbb W}}_\e$ be the lift of the ${\mathcal K}_{\epsilon}$-arc $[x_{\ell}, x_{\ell + 1}]_{{\mathcal K}_{\epsilon}} \subset {\mathbb K}_{\epsilon}$.
Assume that $y$ is not in the ${\mathcal K}_{\epsilon}$-orbit of $x_0$. We prove that this implies that there exists $s>0$ such that for every $t\geq s$ we have $n_{x_0}(t)>0$ and deduce a contradiction from this.
If $x_1$ is an exit point, then $[x_0',y_1']_{\mathcal W}$ is a complete ${\mathcal W}$-orbit traveling from $\partial_h^-{\mathbb W}$ to $\partial_h^+{\mathbb W}$. Thus, by definition we have $x_0'\equiv y_1'$, or equivalently that $x_0\equiv x_1=y$ and we obtain a contradiction to our assumption. Thus, we can assume that $x_1$ is a secondary entry point and $n_{x_0}(t_1)=1$.
Next, we prove that $n_{x_0}(t)\geq 0$ for all $t >0$. Suppose not; then there exists $t>0$ such that $n_{x_0}(t)<0$. Let $\ell>1$ be the first index such that $n_{x_0}(t_\ell)=0$ and $n_{x_0}(t_{\ell+1})=-1$. Then $x_\ell$ and $x_{\ell+1}$ are both exit points, so by Proposition~\ref{prop-shortcut4} we obtain that $x_0'\prec_{\mathcal W} y_\ell'$ and $x_0\equiv x_{\ell+1}$, implying that $x_{\ell+1}=y$, which contradicts the assumption that $y$ is not in the ${\mathcal K}_{\epsilon}$-orbit of $x_0$. Thus $n_{x_0}(t) \geq 0$ for all $t >0$.
Next, suppose there exists $\ell>1$ such that $n_{x_0}(t_\ell)=0$, and let $\ell > 1$ be the least such index. If $x_{\ell}$ is a primary exit point, then by Proposition~\ref{prop-ee}, the claim of the Proposition follows. Otherwise, $x_{\ell +1}$ is defined, and both $x_{\ell-1}$ and $x_{\ell}$ are exit points, so by Proposition~\ref{prop-shortcut4} we obtain that $x_0'\prec_{\mathcal W} y_{\ell +1}'$.
Then by Proposition~\ref{prop-shortcut4} $x_1\equiv x_{\ell-1}$, and it follows that the ${\mathcal W}$-arc $[x_\ell',y_{\ell+1}']_{\mathcal W}$ is the ${\mathcal W}$-arc following $[x_0',y_{1}']_{\mathcal W}$ in the intersection of the ${\mathcal W}$-orbit of $x_0'$ with $\widehat{{\mathbb W}}_{\epsilon}$.
We then apply this argument repeatedly to every ${\mathcal K}_{\epsilon}$-arc $[x_{\ell}, x_{\ell+1}]_{{\mathcal K}_{\epsilon}}$ for which $n_{x_0}(s) = 0$ for $t_{\ell} \leq s < t_{\ell+1}$, to conclude that $[x_{\ell}, x_{\ell+1}]_{{\mathcal K}_{\epsilon}}$ lifts to an arc $[x_{\ell}', y_{\ell+1}']_{{\mathcal W}} \subset \widehat{{\mathbb W}}_\e$. Note that for each such $\ell$, the ${\mathcal W}$-arc $[x_{\ell}', y_{\ell+1}']_{{\mathcal W}}$ is contained in the ${\mathcal W}$-orbit of $x_0'$, and for distinct $\ell > 0$ these ${\mathcal W}$-arcs are disjoint. On the other hand, the ${\mathcal W}$-orbit of $x_0'$ is finite as $r(x_0')\neq 2$ by assumption. We conclude that there is at most a finite number of indices $\ell$ such that $n_{x_0}(t_\ell)=0$. Thus, there exists $s_0>0$ such that $n_{x_0}(t)>0$ for all $t\geq s_0$.
There are now two possible cases to analyze: either the ${\mathcal K}_{\epsilon}$-orbit of $x_0$ is finite and thus exits ${\mathbb K}_{\epsilon}$ at a primary exit point, or the ${\mathcal K}_{\epsilon}$-orbit of $x_0$ is infinite.
Assume first that the ${\mathcal K}_{\epsilon}$-orbit of $x_0$ is finite. Then there exists $\ell>1$ such that $x_\ell$ is a primary exit point. Let $n=n_{x_0}(t_{\ell-1})\geq 0$. Since $n_{x_0}(t)>0$ for $t\geq s_0$ we can assume that $n>0$. Then
$x_{\ell-1}$ must be a secondary exit point. Otherwise, the ${\mathcal W}$-arc $[x_{\ell-1}',y_\ell']_{\mathcal W}$ goes from an entry to an exit point in ${\mathbb W}$, which must then be facing.
This would imply that $x_{\ell-1}$ is a primary entry point, contrary to assumptions.
Thus, $x_{\ell-1}$ is a secondary exit point and therefore
$n_{x_0}(t_{\ell-2})=n+1$. Then there exists $k>0$, chosen to be
the smallest index such that $n_{x_0}(t_{k})=n$ and $n_{x_0}(t)\geq
n$ for every $t_k\leq t<t_\ell$, implying that
$n_{x_0}(t_{k-1})=n-1$. Then $x_k$ is an entry point and
Proposition~\ref{prop-shortcut4} applied to the ${\mathcal K}_{\epsilon}$-arc $[x_k,
x_{\ell}]_{{\mathcal K}_{\epsilon}}$ implies that $x_k\equiv x_\ell$. But then $x_k$
is a primary entry point and thus $x_k=x_0$, which is again a contradiction as we assumed that $n_{x_0}(t_{k})=n>0$.
Finally, consider the case where the ${\mathcal K}_{\epsilon}$-orbit of $x_0$ is infinite. Recall that we may assume that $n_{x_0}(t)>0$ for all $t\geq s_0$. We then have to consider the two following situations: either the level function $n_{x_0}(t)$ admits an upper bound for $t \geq 0$, or the function $n_{x_0}(t)$ is unbounded.
Assume that $n_{x_0}(t)$ grows without bound. We show that this leads to a contradiction. Let $\ell_i$ be the first index such that $n_{x_0}(t_{\ell_i})=i$, thus $\ell_0=0$, $\ell_1=1$ and all the points $x_{\ell_i}$ are secondary entry points. We claim that \begin{equation}\label{eq-radiusunbounded} \rho_{x_0}^{\epsilon}(t_{\ell_i})\geq r(x)+ n_{x_0}(t_{\ell_i}) \cdot \Delta =r(x)+i \cdot \Delta \ . \end{equation} Let us prove the claim by induction on $i$. For $i=1$, condition (K8) in the construction of ${\mathbb K}_{\epsilon}$ implies that $\rho_{x_0}^{\epsilon}(t_1)\geq r(x)+\Delta$. Assume that the inequality is satisfied for $i=k-1$. If $\ell_k=\ell_{k-1}+1$, the claim follows from (K8). If not, observe that $n_{x_0}(t_{\ell_k-1})=k-1$, thus by Proposition~\ref{prop-shortcut4} we have that $x_{\ell_{k-1}}'\prec_{\mathcal W} y_{\ell_k-1}'$ and $\rho_{x_0}^{\epsilon}(t_{\ell_{k-1}})=\rho_{x_0}^{\epsilon}(t_{\ell_k-1})$. Since $x_{\ell_k}$ is a secondary entry point, we have that $\rho_{x_0}^{\epsilon}(t_{\ell_k})\geq r(x)+k \cdot \Delta$, proving the claim. Since $\Delta>0$ and the function $\rho_{x_0}^{\epsilon}(t)$ is bounded above by $3$, this is impossible, and thus the level function $n_{x_0}(t)$ must be bounded.
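For the reader's convenience, the final contradiction can be made explicit: combining the claim \eqref{eq-radiusunbounded} with the bound $\rho_{x_0}^{\epsilon}(t) \leq 3$ yields \begin{equation*} r(x) + i \cdot \Delta ~ \leq ~ \rho_{x_0}^{\epsilon}(t_{\ell_i}) ~ \leq ~ 3 \quad \Longrightarrow \quad i ~ \leq ~ \frac{3 - r(x)}{\Delta} ~ , \end{equation*} so the indices $\ell_i$ exist only for $i \leq (3-r(x))/\Delta$, and the level function cannot grow without bound.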
It remains to consider the case where there exists some $N > 0$ such that $n_{x_0}(t)\leq N$ for all $t\geq 0$. Then there exists $0 \leq n_0 \leq N$ which is the least integer such that there exists
$0 < \ell_0 < \ell_1 < \cdots < \ell_k < \cdots$ such that $n_{x_0}(t_{\ell_j}) = n_0$. That is,
$\displaystyle n_0 = \liminf_{\ell \to \infty} ~ n_{x_0}(t_{\ell}) \leq N$.
Since $n_0$ is the least such integer, there exists $k \geq 0$ such that $n_{x_0}(t) \geq n_0$ for all $t \geq t_{\ell_k}$.
Then the segment $[x_{\ell_k}, x_{\ell_{k+1} + 1}]_{{\mathcal K}_{\epsilon}}$
satisfies the hypotheses of Proposition~\ref{prop-shortcut4}, so
we have $x_{\ell_k}' \prec_{{\mathcal W}} y_{\ell_{k+1} + 1}'$. Thus the
lifted ${\mathcal W}$-arcs $[x_{\ell_m}', y_{\ell_m+1}']_{\mathcal W}$ for $m\geq
k$ belong to the ${\mathcal W}$-orbit of $x_{\ell_k}'$. Then by the estimates \eqref{eq-segmentlengths} and \eqref{eq-segmentlengths2} above, considering the lengths of the lifted ${\mathcal W}$-orbit segments $[x_{\ell_m}', y_{\ell_{m} + 1}']_{{\mathcal W}}$ yields the estimate $\displaystyle d_{min}\cdot (m-k)\leq L(2-r(x_{\ell_k}'))$ for all $m\geq k$.
However, we can choose $m$ arbitrarily large, and so
also $(m-k)$, which yields a contradiction.
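Spelled out, the estimate $d_{min}\cdot (m-k)\leq L(2-r(x_{\ell_k}'))$ rearranges to \begin{equation*} m ~ \leq ~ k + \frac{L(2-r(x_{\ell_k}'))}{d_{min}} ~ , \end{equation*} so only finitely many indices $m \geq k$ can occur, contradicting the choice of the infinite sequence $\ell_0 < \ell_1 < \cdots$.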
So far we have proved that $x\prec_{{\mathcal K}_{\epsilon}} y$. Observe that $x_1$ is a secondary entry point, and assume $y=x_n$ for some $n>0$. Let $\ell_i$ be the indices $1\leq \ell_i\leq n$ such that $n_{x_0}(t_{\ell_i})=0$. Then $n_{x_0}(t_{\ell_i-1})=1$ and Proposition~\ref{prop-shortcut4} implies $x_1\equiv x_{\ell_1-1}$. Thus the ${\mathcal W}$-arc $[x_{\ell_1}',y_{\ell_1+1}']_{\mathcal W}$ is the second arc in the ${\mathcal W}$-orbit of $x'$ that belongs to $\widehat{{\mathbb W}}_\e$. The same argument proves that $x_{\ell_1+1}\equiv x_{\ell_2-1}$ and that $[x_{\ell_2}',y_{\ell_2+1}']_{\mathcal W}$ is the third arc in the ${\mathcal W}$-orbit of $x'$ that belongs to $\widehat{{\mathbb W}}_\e$. We conclude that the arcs $[x_{\ell_i}',y_{\ell_i+1}']_{\mathcal W}$, for all $i$, are all the arcs in the ${\mathcal W}$-orbit of $x'$ that belong to $\widehat{{\mathbb W}}_\e$. \endproof
We point out two important corollaries of Proposition~\ref{prop-negativemain} and its proof.
\begin{cor}\label{cor-negfinite} Let $x\in {\mathbb K}_{\epsilon}$ be such that $\rho_x^{\epsilon}(t)\neq 2$ for every $t\geq 0$. If $x$ is not a transition point, then the forward $ {\mathcal K}_{\epsilon}$-orbit of $x$ exits ${\mathbb K}_{\epsilon}$. \end{cor}
\proof Let $x\in {\mathbb K}_{\epsilon}$ be such that $\rho_x^{\epsilon}(t)\neq 2$ for every $t\geq 0$, and assume that the forward orbit never exits ${\mathbb K}_{\epsilon}$. Since for every $t\geq 0$ the ${\mathcal W}$-orbit of $\tau^{-1}(\Phi_t^{\epsilon}(x))$ is finite, the assumption implies that the forward ${\mathcal K}_{\epsilon}$-orbit of $x$ has infinitely many transition points. Then the level function $n_{x_0}(t)$ is either bounded or unbounded, and the proof of Proposition~\ref{prop-negativemain} yields a contradiction in both cases. \endproof
\begin{cor}\label{cor-negperiodic} The ${\mathcal K}_{\epsilon}$-orbit ${\mathcal O}_i^{\epsilon}$ containing $\tau({\mathcal O}_i)$, for $i=1,2$, is periodic. \end{cor}
\proof The arc $\tau({\mathcal O}_i)$, for $i=1,2$, intersects the entry region $E^{{\epsilon}}_i$ at a point $p_i^- \in E^{{\epsilon}}_i$ whose radius coordinate satisfies $r(p_i^-) > 2$, as illustrated in Figure~\ref{fig:modifiedradius}(A). Then by the proof of Proposition~\ref{prop-negativemain}, the ${\mathcal K}_{{\epsilon}}$-orbit of $p_i^-$ contains the facing point $y_i \in S^{{\epsilon}}_i$, which is by construction the other endpoint of the arc $\tau({\mathcal O}_i)$. \endproof
We end this section with a brief discussion of the trapped set of ${\mathbb K}_{\epsilon}$ for ${\epsilon}<0$. Corollary~\ref{cor-trapped} states that for $x\in \partial_h^-{\mathbb K}_{\epsilon}$ with $r(x)=2$ the ${\mathcal K}_{\epsilon}$-orbit of $x$ is trapped. The same argument can be applied to secondary entry points with radius 2. Thus if $x$ is a primary entry point with $r(x)\neq 2$ and such that there exists $t>0$ with $\rho_x^{\epsilon}(t)=2$, the orbit of $x$ is trapped. Thus the trapped set changes with ${\epsilon}$: it is composed of the curve $$\{\tau(x')\in \partial_h^-{\mathbb K}_{\epsilon} \mid x'=(2,\theta,-1)\},$$ and the curve of primary entry points $x\in \partial_h^-{\mathbb K}_{\epsilon}$ such that the ${\mathcal K}_{\epsilon}$-orbit of $x$ hits $E_i^{\epsilon}$ in a point with radius 2. Observe that these conclusions hold for negative orbits also.
\subsection{Existence of exactly two periodic orbits}\label{subsec-periodic}
Corollary~\ref{cor-negperiodic} implies that the plugs ${\mathbb K}_{\epsilon}$ have at least two periodic orbits, corresponding to the periodic orbits of the Wilson plug ${\mathbb W}$. We next show that these are the only periodic orbits of the flow $\Phi_t^{\epsilon}$. Moreover, these orbits form the boundary of an invariant cylinder ${\mathfrak{M}}_{\epsilon} \subset {\mathbb K}_{\epsilon}$, whose limit as ${\epsilon}\to 0$ is the invariant open set ${\mathfrak{M}}_0\subset {\mathbb K}_0$, introduced in \cite{HR2016} and discussed in further detail below.
The proof of the following result is based in spirit, and also in many details, on the proof of the aperiodicity of the plug ${\mathbb K}_0$ as given in \cite[Theorem 8.1]{HR2016}. \begin{thm}\label{thm-2periodicnegative} For ${\epsilon}<0$, the flow $\Phi_t^{\epsilon}$ has exactly two periodic orbits, ${\mathcal O}_1^{\epsilon}$ and ${\mathcal O}_2^{\epsilon}$. \end{thm}
\proof
Suppose there exists a periodic orbit inside ${\mathbb K}_{\epsilon}$, and let $x$ be a point on it. Let $0 \leq t_0 < t_1 < \cdots < t_n < \cdots $ with $x_{\ell} = \Phi_{t_{\ell}}^{\epsilon}(x)$ be the transition points for the ${\mathcal K}_{\epsilon}$-orbit $\{\Phi_t ^{\epsilon} (x) \mid t \geq 0\}$, and suppose that $t_n > 0$ is the first subsequent transition point with $x_n = \Phi_{t_n}^{\epsilon} (x_0) = x_0$, so that $\Phi_{t + t_n}^{\epsilon} (x) = \Phi_{t}^{\epsilon} (x)$ for all $t$.
Let $[x_{\ell}', y_{\ell +1}']_{{\mathcal W}} \subset \widehat{{\mathbb W}}_\e$ be the lift of the ${\mathcal K}_{\epsilon}$-arc $[x_{\ell}, x_{\ell + 1}]_{{\mathcal K}_{\epsilon}} \subset {\mathbb K}_{\epsilon}$, for $0 \leq \ell \leq n$, and let $r_\ell$ be the radius coordinate of the arc $[x_{\ell}', y_{\ell +1}']_{{\mathcal W}}$. Periodicity of the orbit implies that $[x_{0}', y_{1}']_{{\mathcal W}} = [x_{n}', y_{n+1}']_{{\mathcal W}}$. Moreover, the $r$-coordinate is constant on each ${\mathcal W}$-arc $[x_{\ell}', y_{\ell +1}']_{{\mathcal W}}$, so among the finitely many arcs it attains a minimal value $r_0$. Without loss of generality, assume this minimum occurs for $[x_{0}', y_{1}']_{{\mathcal W}}$.
Observe that since the radius is minimal on $[x_{0}', y_{1}']_{{\mathcal W}}$ and the radius strictly increases at secondary entry points, $x_0$ is a secondary exit point and $x_1$ is a secondary entry point. Assume that $n_{x_0}(t_{n-1})=k$ for some integer $k$; we next analyze the different situations depending on the value of $k$.
If $k<0$, let $\ell>1$ be the first index such that $n_{x_0}(t_\ell)=-1$, then $n_{x_0}(t_{\ell-1})=0$ and Proposition~\ref{prop-shortcut4} implies that $x_0'\prec_{{\mathcal W}}y_\ell'$. Thus $r_{\ell-1}=r_0$ and $r_\ell<r_0$, contradicting the fact that $r_0$ is minimum. Thus $n_{x_0}(t)\geq 0$ for all $t\geq0$.
If $k=0$, Proposition~\ref{prop-shortcut4} implies that $x_0'\prec_{{\mathcal W}}y_n'$, and thus $r_{n-1}=r_0=r_n$, which is a contradiction because the orbit cannot pass a transition point with the radius coordinate staying constant.
If $k>0$, let us consider the toy case where the level is strictly increasing, or in other words, the case where all the transition points $x_\ell$ for $1\leq \ell \leq n-1$ are secondary entry points. Since $x_0=x_n$ is a secondary exit point, Proposition~\ref{prop-shortcut4} implies that $x_{n-1}\equiv x_n$ and $x_{n-2}'\prec_{{\mathcal W}} y_{n+1}'=y_1'$. Thus $r_{n-3}<r_{n-2}=r_n=r_0$, contradicting the minimality of $r_0$.
If the level is not strictly increasing, consider first the case when $k>1$. We use a technique introduced by Ghys in \cite[page 299]{Ghys1995}, which defines a monotone increasing function $i(a)$ derived from the level function $n_{x_0}(t)$. For $0\leq a\leq k-1$, set $i(a)$ such that $n_{x_0}(t_{\ell_{i(a)}})=a$ and $n_{x_0}(t)\geq a$ for all $t\geq t_{\ell_{i(a)}}$. Then $0=i(0)<i(1)<\cdots <i(k-1)<n-1$ and $x_{\ell_{i(a)}}$ is a secondary entry point for all $a$. Moreover, $x_{\ell_{i(k-1)}}\prec_{\mathcal W} x_n$, by Proposition~\ref{prop-shortcut4}, and thus $r_{\ell_{i(k-1)}}=r_n=r_0$. Then $r_0=r_{\ell_{i(0)}}<r_{\ell_{i(1)}}<\cdots<r_{\ell_{i(k-1)}}=r_0$, a contradiction.
\begin{figure}
\caption{The functions $n_{x_0}$ and $i(a)$ }
\label{fig:steps}
\end{figure}
We are left with the case $k=1$ and $n_{x_0}(t_n)=0$. Thus $1=n_{x_0}(t_1)=n_{x_0}(t_{n-1})$ and $x_1'\prec_{\mathcal W} y_n'$. Moreover, since $x_1$ is a secondary entry point and $x_n$ is a secondary exit point, then $x_1\equiv x_n=x_0$. This implies that, in ${\mathbb W}$, $x_0'\prec_{\mathcal W} x_1'\prec_{\mathcal W} x_0'$, and hence the arc $[x_0',y_1']_{\mathcal W}$ is one of the arcs ${\mathcal O}_i\cap {\mathbb W}'_{\epsilon}$ for $i=1,2$.
Thus the only periodic orbits are the ${\mathcal K}_{\epsilon}$-orbits of $\tau({\mathcal O}_i)$ for $i=1,2$. \endproof
The discussion of the trapped set at the end of Section~\ref{subsec-neglemma} and Theorem~\ref{thm-2periodicnegative} yield the following assertion in Theorem~\ref{thm-main1}. \begin{cor}\label{cor-wandering} All orbits of the flow $\Phi_t^{\epsilon}$ are wandering, except for the periodic orbits $\tau({\mathcal O}_i)$ for $i=1,2$. \end{cor}
\subsection{Invariant sets}\label{subsec-invariant}
The flow $\Psi_t$ on the modified Wilson Plug ${\mathbb W}$ preserves the Reeb cylinder ${\mathcal R}\subset {\mathbb W}$ introduced in Section~\ref{subsec-wilson}. The restriction of the flow to this invariant set consists of the two periodic orbits ${\mathcal O}_i$, $i=1,2$, for the flow, along with orbits asymptotic to these orbits, as illustrated by the restriction of the flow to the central band in Figure~\ref{fig:flujocilin}(C).
Consider the intersection ${\mathcal R}_{\epsilon}' = {\mathcal R} \cap \widehat{{\mathbb W}}_\e$, where $\widehat{{\mathbb W}}_\e$ is the closure of ${\mathbb W}'_{\epsilon}$ as defined in \eqref{eq-notchedW}. Then ${\mathcal R}_{\epsilon}'$ is a compact submanifold of ${\mathcal R}$ with boundary, which is the union of the two arcs ${\mathcal O}_i\cap {\mathbb W}'_{\epsilon}$ which are contained in the ${\mathcal W}$-periodic orbits, and the boundary of the ``notches'', that is the intersections of ${\mathcal R}$ with ${\mathcal D}_i^{\epsilon}$ for $i=1,2$. This set is illustrated in Figure~\ref{fig:notches}.
We consider in this section the image of $\tau({\mathcal R}_{\epsilon}')$ under the flow $\Phi_t^{\epsilon}$.
First, recall that for ${\epsilon} =0$, the images $\tau({\mathcal O}_i \cap \widehat{{\mathbb W}}_\e) \subset {\mathbb K}_0$ of these orbits are not periodic.
One of the main results in \cite{HR2016} is that under some generic assumptions, the unique minimal set for the flow $\Phi_t^0$ equals the closure of the ${\mathcal K}_0$-flow of the image of the Reeb cylinder $\tau({\mathcal R}_0')$, and has the structure of a \emph{zippered lamination}. For a precise definition of a zippered lamination we refer to \cite[Chapter~19]{HR2016}.
\begin{figure}
\caption{ The notched cylinder ${\mathcal R}'$ embedded in ${\mathbb W}$}
\label{fig:notches}
\end{figure}
Observe that the set $\tau({\mathcal R}_{\epsilon}') \subset {\mathbb K}_{\epsilon}$ is not invariant under the flow $\Phi_t^{\epsilon}$ since the Reeb cylinder ${\mathcal R}$ intersects the inserted regions ${\mathcal D}_i^{\epsilon}$ for $i=1,2$. Instead, we must consider the submanifold ${\mathfrak{M}}_{\epsilon}$ obtained by applying the flow $\Phi_t^{\epsilon}$ to the image $\tau({\mathcal R}_{\epsilon}')$: \begin{equation}\label{eq-invariantMe} {\mathfrak{M}}_{\epsilon} ~ \equiv ~ \{\Phi_t^{\epsilon}(\tau({\mathcal R}_{\epsilon}')) \mid -\infty < t < \infty \} \subset {\mathbb K}_{\epsilon} ~. \end{equation} For ${\epsilon}< 0$, Corollary~\ref{cor-negperiodic} implies that ${\mathcal K}_{\epsilon}$-orbits of the boundary segments $\tau({\mathcal O}_i\cap {\mathbb W}'_{\epsilon})$ are periodic orbits. In fact, a much stronger statement is true.
\begin{prop} For ${\epsilon}<0$, the set ${\mathfrak{M}}_{\epsilon}$ is homeomorphic to a cylinder. \end{prop}
\proof
Consider the intersections of ${\mathcal R}$ with ${\mathcal D}_i^{\epsilon}$ for $i=1,2$. The notch formed by deleting the intersection with ${\mathcal D}_1^{\epsilon}$ has three boundary curves: label the curve in ${\mathcal L}_1^{{\epsilon}-}$ by ${\gamma}_{\epsilon}'$. The facing curve transverse to the flow is labeled by $\overline{\gamma}_{\epsilon}'$. Analogously, the notch formed by deleting the intersection with ${\mathcal D}_2^{\epsilon}$ has three boundary curves, where $\lambda_{\epsilon}'$ labels the curve transverse to the flow in ${\mathcal L}_2^{{\epsilon}-}$, and $\overline{\lambda}_{\epsilon}'$ is the facing curve. These curves are
illustrated in Figure~\ref{fig:notches}.
The projections of the curves ${\gamma}_{\epsilon}'$ and $\lambda_{\epsilon}'$ to ${\mathbb K}_{\epsilon}$ are denoted by ${\gamma}_{\epsilon} = \tau({\gamma}_{\epsilon}')$ and $\lambda_{\epsilon} = \tau(\lambda_{\epsilon}')$. These are curves in the entry regions $E^{{\epsilon}}_1$ and $E^{{\epsilon}}_2$, respectively. In the same way consider the curves $\overline{\gamma}_{\epsilon}=\tau(\overline{\gamma}_{\epsilon}')$ and $\overline{\lambda}_{\epsilon}=\tau(\overline{\lambda}_{\epsilon}')$ which are contained in the exit regions $S^{{\epsilon}}_1$ and $S^{{\epsilon}}_2$.
The Parametrized Radius Inequality for ${\epsilon}<0$ implies that each point on these four curves lies in the region $\{r> 2\}$. Thus, by Proposition~\ref{prop-negativemain} the ${\mathcal K}_{\epsilon}$-orbit of each point $x\in {\gamma}_{\epsilon}$ passes through the facing point $y\in \overline{{\gamma}_{\epsilon}}$. Thus, the $\Phi_t^{\epsilon}$-flow of the curve ${\gamma}_{\epsilon}$ generates a compact surface denoted by $\Sigma_{{\gamma}_{\epsilon}} \subset {\mathbb K}_{\epsilon}$ that ``fills the notch'' created by deleting the rectangular region ${\mathcal R} \cap {\mathcal D}_1^{\epsilon}$. Analogously, the $\Phi_t^{\epsilon}$-flow of the curve $\lambda_{\epsilon}$ generates a compact surface denoted by $\Sigma_{\lambda_{\epsilon}} \subset {\mathbb K}_{\epsilon}$ that ``fills the notch'' created by deleting the rectangular region ${\mathcal R} \cap {\mathcal D}_2^{\epsilon}$. Both of these surfaces with boundary are diffeomorphic to a rectangular region, but the embeddings of this region in ${\mathbb K}_{\epsilon}$ become increasingly ``twisted'' as ${\epsilon} \to 0$. In any case, the surface ${\mathfrak{M}}_{\epsilon}$ obtained by attaching these surfaces $\Sigma_{{\gamma}_{\epsilon}}$ and $\Sigma_{\lambda_{\epsilon}}$ along their boundary to the boundary of the notches in the image $\tau({\mathcal R}_{\epsilon}')$ is homeomorphic to a cylinder whose boundary components are the periodic orbits ${\mathcal O}_i^{\epsilon}$ for $i=1,2$. \endproof
Let us describe the surfaces $\Sigma_{{\gamma}_{\epsilon}}$ and $\Sigma_{\lambda_{\epsilon}}$. The description below proceeds analogously to that of the surface ${\mathfrak{M}}_0$ obtained from the $\Phi_t^0$-flow of the Reeb cylinder ${\mathcal R}_0' \subset {\mathbb K}_0$, which is described in complete detail in Chapter~12 of \cite{HR2016}. We describe the surface $\Sigma_{{\gamma}_{\epsilon}}$ below; the description of the surface $\Sigma_{\lambda_{\epsilon}}$ is exactly analogous.
Consider the curve ${\gamma}_{\epsilon} \subset E^{{\epsilon}}_1$ and the corresponding curve ${\gamma}_{\epsilon}'' = \tau^{-1}({\gamma}_{\epsilon}) \subset L_1^-$. Note that the curves ${\gamma}_{\epsilon}'$ and ${\gamma}_{\epsilon}''$ satisfy $\tau({\gamma}_{\epsilon}') = \tau({\gamma}_{\epsilon}'') = {\gamma}_{\epsilon}$, but the curve ${\gamma}_{\epsilon}'$ lies in the cylinder of points $r=2$ in ${\mathbb W}$, while the curve ${\gamma}_{\epsilon}''$ is contained in the bottom face of ${\mathbb W}$. In particular, the radius coordinate along ${\gamma}_{\epsilon}''$ is strictly greater than 2. Let $r_1\geq 2+\Delta$ be the minimum radius attained along the curve. Then the $\Psi_t$-flow of ${\gamma}_{\epsilon}''$ generates a finite propeller $P_{{\gamma}_{\epsilon}}$ as described in Section~\ref{sec-propeller}. Consider the notched propeller $P_{{\gamma}_{\epsilon}}'=P_{{\gamma}_{\epsilon}}\cap {\mathbb W}'_{\epsilon}$. Then $\tau(P_{{\gamma}_{\epsilon}}')\subset {\mathfrak{M}}_{\epsilon}$, and it is attached to $\tau({\mathcal R}_{\epsilon}')$ via the insertion map $\sigma_1^{\epsilon}$.
Since this propeller is finite, its intersection with $E^{{\epsilon}}_1$ and $E^{{\epsilon}}_2$ is along a finite number of curves, whose radius coordinate is bounded below by $r_1+\Delta$. Each of these curves generates a finite propeller, simple or double, that is attached to $\tau(P_{{\gamma}_{\epsilon}}')$ via the map $\sigma_i^{\epsilon}$, for $i=1$ or 2. Iterating this process a finite number of steps, we construct the compact surface $\Sigma_{{\gamma}_{\epsilon}}$. Analogously, we obtain the surface $\Sigma_{\lambda_{\epsilon}}$. We conclude that ${\mathfrak{M}}_{\epsilon}$ is formed from the union of $\tau({\mathcal R}_{\epsilon}')$ with a finite number of finite propellers. An important observation is that the points in ${\mathfrak{M}}_{\epsilon}$ with radius coordinate equal to 2 are exactly those in $\tau({\mathcal R}_{\epsilon}')$.
\begin{remark} The propellers in the construction might have ``bubbles'' formed by double propellers, as described in Chapters~15 and 18 of \cite{HR2016}. These are compact surfaces that are attached to internal notches, which are notches in a propeller that do not intersect the boundary of the propeller. \end{remark}
Observe that as ${\epsilon}\to 0$, the first propeller attached to $\tau({\mathcal R}_{\epsilon}')$ becomes arbitrarily long, and when ${\epsilon}=0$ we obtain an infinite propeller. The set ${\mathfrak{M}}_0$ is no longer a cylinder and has a very complicated structure. We can thus see ${\mathfrak{M}}_0$ as the limit of the embedded cylinders ${\mathfrak{M}}_{\epsilon}$. This phenomenon is analogous to the behavior of the leaves of a compact foliation as they approach the bad set, as described in \cite{EMS1977}.
To complete the proof of Theorem~\ref{thm-main1}, choose a $C^{\infty}$-family of embeddings $\sigma_i^{\epsilon}$ which satisfy the Parametrized Radius inequality, for $-1 \leq {\epsilon} \leq 0$. Let $\Phi_t^{\epsilon}$ be the resulting flows on the plug ${\mathbb K}_{{\epsilon}}$. Then the results of Section~\ref{sec-enegative} show that these flows satisfy the assertions of Theorem~\ref{thm-main1}.
\section{Geometric hypotheses for ${\epsilon} \geq 0$}\label{sec-generic}
The dynamical properties of the Kuperberg flows $\Phi_t^{\epsilon}$ when ${\epsilon} \geq 0$ are far more subtle than when ${\epsilon} < 0$, and obtaining our results on the global dynamics of $\Phi_t^{\epsilon}$ when ${\epsilon} \geq 0$
requires that we impose a variety of additional hypotheses on the construction of $\Phi_t^{\epsilon}$. In this section, we formulate some basic additional geometric assumptions for the insertion maps $\sigma_i^{\epsilon}$ beyond what we specified in Section~\ref{subsec-kuperberg}. The basic point is to require that they have the geometric shape which is intuitively implicit in Figures~\ref{fig:insertiondisks}, \ref{fig:twisted} and \ref{fig:modifiedradius}(C).
First, we note a straightforward consequence of the Parametrized Radius Inequality (K8) in Section~\ref{subsec-kuperberg}. Recall that $\theta_i$ is the radian angle coordinate specified in (K8) such that for $x' = (2, \theta_i, -2) \in L_i$ we have $r(\sigma_i^{\epsilon}(2, \theta_i, -2)) = 2+{\epsilon}$.
\begin{lemma}\label{lem-r0} For ${\epsilon}>0$ there exists $2+{\epsilon} <r_{\epsilon}<3$ such that $r(\sigma_i^{\epsilon}(r_{\epsilon},\theta_i, -2))= r_{\epsilon}$. \end{lemma}
\proof Since $r(\sigma_i^{\epsilon}(2,\theta_i, -2))= 2+{\epsilon}$ and $r(\sigma_i^{\epsilon}(3,\theta_i, -2))<3$, the continuous function $r \mapsto r(\sigma_i^{\epsilon}(r,\theta_i, -2)) - r$ is positive at $r=2$ and negative at $r=3$, so by the Intermediate Value Theorem there exists $2+{\epsilon} <r_{\epsilon}<3$ such that $r(\sigma_i^{\epsilon}(r_{\epsilon},\theta_i, -2))= r_{\epsilon}$. \endproof
We then add an additional assumption on the insertion maps $\sigma_i^{\epsilon}$ for $i=1,2$ that the radius is \emph{decreasing} under the insertion map, for $r \geq r_{\epsilon}$.
\begin{hyp} \label{hyp-monotone} Let $r_{\epsilon}$ be the smallest value with $2+{\epsilon} <r_{\epsilon}<3$ such that $r(\sigma_i^{\epsilon}(r_{\epsilon},\theta_i, -2))= r_{\epsilon}$. Then assume that $r(\sigma_i^{\epsilon}(r,\theta_i, -2)) < r$ for $r > r_{\epsilon}$.
\end{hyp}
Next, we introduce a hypothesis on the insertion maps $\sigma_i^{\epsilon}$ for $i=1,2$ which, in essence, guarantees that the images of the level curves for $r'=c$ are ``quadratic'' for $c$ near the value $r=2$, as pictured in Figure~\ref{fig:modifiedradius}. Recall our notational conventions. For $i=1,2$, let $x' = (r', \theta', -2) \in L_i^-$ denote a point in the domain of $\sigma_i^{\epsilon}$ and denote its image by
$(r, \theta,z) = \sigma_i^{\epsilon}(x') \in {\mathcal L}_i^{{\epsilon}-} \subset {\mathbb W}$.
Let $\pi_z \colon {\mathbb W} \to \partial_h^- {\mathbb W}$ denote the projection along the $z$-coordinate, so $\pi_z(r, \theta, z) = (r, \theta,-2)$.
We first assume that $\sigma_i^{\epsilon}$ restricted to the bottom face, $\sigma_i^{\epsilon} \colon L_i^- \to {\mathcal L}_i^{{\epsilon}-} \subset {\mathbb W}$, has image transverse to the vertical fibers of $\pi_z$.
Then $\pi_z \circ \sigma_i^{\epsilon} \colon L_i^- \to {\mathbb W}$ is a diffeomorphism into the face $\partial_h^- {\mathbb W}$. Denote the image set of this map by ${\mathfrak{D}}_i \subset \partial_h^- {\mathbb W}$. Then we can define
the inverse map
\begin{equation}\label{eq-inverse} \Upsilon_i^{\epsilon} = (\pi_z \circ \sigma_i^{\epsilon})^{-1} \colon {\mathfrak{D}}_i \to L_i^- \ . \end{equation} In particular, express the inverse map $x' = \Upsilon_i^{\epsilon}(x)$ in polar coordinates as: \begin{equation}\label{eq-coordinatesVT} x' = (r',\theta', -2) = \Upsilon_i^{\epsilon}(r, \theta,-2) = (r(\Upsilon_i^{\epsilon}(r,\theta,-2)), \theta(\Upsilon_i^{\epsilon} (r,\theta,-2)), -2) = (R_{i,r}^{\epsilon}(\theta) , \Theta_{i,r}^{\epsilon}(\theta), -2) ~. \end{equation}
In the following hypothesis, we impose uniform conditions on the derivatives of the maps $\Upsilon_i^{\epsilon}$. Recall that $0 < {\epsilon}_0 < 1/4$ was specified in Hypothesis~\ref{hyp-genericW}, and we assume that $0 < {\epsilon} < {\epsilon}_0$.
\begin{hyp} [{\it Strong Radius Inequality}] \label{hyp-SRI}
For $i=1,2$, assume that:
\begin{enumerate} \item \label{item-SRI-2} $\sigma_i^{\epsilon} \colon L_i^- \to {\mathbb W}$ is transverse to the fibers of $\pi_z$; \item \label{item-SRI-1} $r = r(\sigma_i^{\epsilon}(r',\theta',-2))< r'+{\epsilon}$, except for $x' = (2,\theta_i, -2)$ and then $r=2+{\epsilon}$;
\item \label{item-SRI-3} $\Theta_{i,r}^{\epsilon}(\theta)$ is an increasing function of $\theta$ for each fixed $r$; \item \label{item-SRI-4} For $2-{\epsilon}_0 \leq r \leq 2+{\epsilon}_0$ and $i = 1,2$, assume that $R_{i,r}^{\epsilon}(\theta)$ has non-vanishing derivative, except when $\theta = \overline{\theta}_i$ as
defined by $\Upsilon_i^{\epsilon}(2+{\epsilon},\overline{\theta}_i,-2)= (2,\theta_i,-2)$;
\item For $2-{\epsilon}_0 \leq r \leq 2+{\epsilon}_0$ and $\theta_i -{\epsilon}_0 \leq \theta \leq \theta_i + {\epsilon}_0$ for $i = 1,2$, assume that \begin{equation}\label{eq-quadratic} \frac{d}{d\theta} \Theta_{i,r}^{\epsilon}(\theta) > 0 \quad, \quad \frac{d^2}{d\theta^2} R_{i,r}^{\epsilon}(\theta) > 0. \end{equation} Thus for $2-{\epsilon}_0 \leq r \leq 2+ {\epsilon}_0$, the graph of $R_{i,r}^{\epsilon}(\theta)$ is parabolic with vertex $\theta = \overline{\theta}_i$. \end{enumerate} Consequently, each surface ${\mathcal L}_i^{{\epsilon}-}$ is transverse to the coordinate vector fields $\partial/\partial \theta$ and $\partial/\partial z$ on ${\mathbb W}$.
\end{hyp}
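To fix ideas, the following is one illustrative model for the coordinate functions of $\Upsilon_i^{\epsilon}$ near the vertex; the hypotheses do not single out this form, and the constants $a_i, c_i > 0$ are introduced purely for illustration:
\begin{equation*}
\Theta_{i,r}^{\epsilon}(\theta) \;=\; \theta_i + a_i\,(\theta - \overline{\theta}_i) \quad , \quad R_{i,r}^{\epsilon}(\theta) \;=\; (r - {\epsilon}) + c_i\,(\theta - \overline{\theta}_i)^2 \ ,
\end{equation*}
for $2-{\epsilon}_0 \leq r \leq 2+{\epsilon}_0$ and $\theta$ near $\overline{\theta}_i$. In this model, $\frac{d}{d\theta}\,\Theta_{i,r}^{\epsilon}(\theta) = a_i > 0$ and $\frac{d^2}{d\theta^2}\,R_{i,r}^{\epsilon}(\theta) = 2c_i > 0$ as in \eqref{eq-quadratic}, the derivative of $R_{i,r}^{\epsilon}$ vanishes only at $\theta = \overline{\theta}_i$, and $\Upsilon_i^{\epsilon}(2+{\epsilon}, \overline{\theta}_i, -2) = (2, \theta_i, -2)$ as required.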
\begin{remark}
Hypotheses~\ref{hyp-monotone} and \ref{hyp-SRI} combined imply that $r_{\epsilon}$ is the unique value with $2+{\epsilon} <r_{\epsilon}<3$ for which
$r(\sigma_i^{\epsilon}(r_{\epsilon},\theta_i, -2))= r_{\epsilon}$. It follows that the
radius function $\rho_x^{\epsilon}$ at a secondary entry point $x$ with $r(x)>r_{\epsilon}$ is strictly increasing.
\end{remark}
\section{A pseudogroup model}\label{sec-pseudogroups}
The analysis of the dynamical properties of the standard Kuperberg flow $\Phi_t$, as carried out in \cite{HR2016}, was based on the introduction of an ``almost transverse'' rectangle ${\bf R}_{0} \subset {\mathbb K}$, and a detailed study of the dynamics of the induced pseudogroup for the return map ${\widehat{\Phi}}$ of the flow to ${\bf R}_{0}$. Our analysis of the dynamics of the flows $\Phi_t^{{\epsilon}}$ for the case when ${\epsilon} > 0$ follows a similar approach. We utilize the rectangle ${\bf R}_{0}$ defined in \eqref{eq-goodsection} below, the same as in \cite{HR2016}, but we use a simplified model for the pseudogroup, which incorporates the induced return maps to ${\bf R}_{0}$ for both of the flows $\Psi_t$ and $\Phi_t^{\epsilon}$. The dynamics of the induced map ${\widehat \Psi}$ from the Wilson flow is fairly straightforward to analyze, while that of the induced map $\widehat{\Phi^\e}$ from the Kuperberg flow is extraordinarily complicated, so we restrict attention to the dynamics of selected maps defined by $\widehat{\Phi^\e}$, which makes the task more manageable.
The goal is to show that for ${\epsilon} > 0$, the dynamics of the return map $\widehat{\Phi^\e}$ contains disjoint families of ``horseshoes'', as will be proved in Section~\ref{sec-horseshoe}, which then yields the conclusions of Theorem~\ref{thm-main2}. The first step is to introduce the pseudogroup $\widehat{\cG}_{\epsilon}$ and the $\psg$ $\widehat{\cG}_{\epsilon}^*$ in this section, as Definitions~\ref{def-pseudogroup1} and \ref{def-pseudogroup1a} below. In the next Section~\ref{sec-horseshoe}, we show that the action of $\widehat{\cG}_{\epsilon}^*\subset \widehat{\cG}_{\epsilon}$ contains invariant horseshoe dynamical subsystems.
\subsection{A good rectangle}\label{subsec-crosssection}
We introduce the ``almost transversal'' rectangle to the flows $\Psi_t$ and $\Phi_t^{{\epsilon}}$ which is used for the study of the return dynamics of their flows, and for the construction of associated pseudogroups.
Choose a value of $\theta_0$ such that the rectangle ${\bf R}_{0}$ as defined in cylindrical coordinates, \begin{equation}\label{eq-goodsection} {\bf R}_{0} \equiv \{\xi = (r, \theta_0, z) \mid ~ 1 \leq r \leq 3 ~, ~ -2 \leq z \leq 2\} \,\subset {\mathbb W}'_{\epsilon} ~ , \end{equation} is disjoint from both the regions $D_i$ and their insertions ${\mathcal D}_i^{\epsilon}$ for $i=1,2$, as defined in Section~\ref{subsec-kuperberg}.
For example, for the curves $\alpha_i$ and $\beta_i'$ defined in Section~\ref{subsec-kuperberg}, we take $\theta_0 = \pi$ so that ${\bf R}_{0}$ is between the embedded regions ${\mathcal D}_i^{\epsilon}$ for $i=1,2$ as illustrated in Figure~\ref{fig:KR}.
\begin{figure}
\caption{ The rectangle ${\bf R}_{0}$ in the Kuperberg plug
${\mathbb K}_{\epsilon}$}
\label{fig:KR}
\end{figure}
As ${\bf R}_{0} \subset {\mathbb W}'_{\epsilon}$, the quotient map $\tau \colon {\mathbb W} \to {\mathbb K}_{\epsilon}$ is injective on ${\bf R}_{0}$. We use a slight abuse of notation, and also denote the image $\tau({\bf R}_{0}) \subset {\mathbb K}_{\epsilon}$ by ${\bf R}_{0}$ with coordinates $r = r(\xi)$ and $z = z(\xi)$ for $\xi \in {\bf R}_{0}$.
The periodic points in ${\bf R}_{0}$ for the Wilson flow are denoted by
\begin{equation}\label{eq-omegas}
\omega_1 = {\mathcal O}_1 \cap {\bf R}_{0} = (2,\theta_0, -1) \quad , \quad \omega_2 = {\mathcal O}_2 \cap {\bf R}_{0} = (2,\theta_0, 1) ~ . \end{equation} For $i=1,2$, the first transition point for the forward orbit of $\omega_i$ is denoted by $p_i^{-} = \tau({\mathcal L}_i^{{\epsilon}-} \cap {\mathcal O}_i)$, and for the backward orbit the first transition point is the special exit point
$p_i^{+} = \tau({\mathcal L}_i^{{\epsilon}+} \cap {\mathcal O}_i)$.
Define a metric on ${\bf R}_{0}$ by $\displaystyle d_{{\bf R}_{0}}(\xi, \xi') = \sqrt{(r' - r)^2 + (z'-z)^2}$, for $\xi = (r, \theta_0, z)$ and $\xi' = (r', \theta_0, z')$.
We next introduce the first return map ${\widehat \Psi}$ on ${\bf R}_{0}$ for the Wilson flow $\Psi_t$.
The map ${\widehat \Psi}$ is defined at $\xi \in {\bf R}_{0}$ if there is a ${\mathcal W}$-orbit segment $[\xi, \eta]_{{\mathcal W}}$ with $\eta \in {\bf R}_{0}$ and its interior $(\xi, \eta)_{{\mathcal W}}$ is disjoint from ${\bf R}_{0}$. We then set ${\widehat \Psi}(\xi) = \eta$. Thus, the domain of ${\widehat \Psi}$ is the set: \begin{equation}\label{eq-domainwhPsi}
Dom({\widehat \Psi}) \equiv \left\{ \xi \in {\bf R}_{0} \mid \exists ~ t > 0 ~
\text{ such ~ that} ~ \Psi_t(\xi) \in {\bf R}_{0} ~ \text{and} ~
\Psi_s(\xi)\notin {\bf R}_{0} ~ \text{for}~ 0<s<t \right\}. \end{equation} The radius function is constant along the orbits of the Wilson flow, so that $r({\widehat \Psi}(\xi)) = r(\xi)$ for all $\xi \in Dom({\widehat \Psi})$. Also, note that the points $\omega_i$ for $i=1,2$ defined in \eqref{eq-omegas} are fixed-points for ${\widehat \Psi}$.
For all other points $\xi \in {\bf R}_{0}$ with $\xi \ne \omega_i$, it was assumed in Section~\ref{subsec-wilson} that the function $g(r,\theta, z) > 0$, so the ${\mathcal W}$-orbit of $\xi$ has a ``vertical drift'' arising from the term $g(r, \theta, z) \frac{\partial}{\partial z}$ in the formula \eqref{eq-wilsonvector} for ${\mathcal W}$.
The precise description of the domain $ Dom({\widehat \Psi})$ is discussed in detail in Chapter~9 of \cite{HR2016}, to which we refer the reader for further details. For our applications here, it suffices to note that the domain $ Dom({\widehat \Psi})$ contains an open neighborhood of the vertical line segment ${\mathcal R} \cap {\bf R}_{0}$. The dynamical properties of $\Psi_t$ on $ Dom({\widehat \Psi})$ are described in Proposition~\ref{prop-wilsonproperties}, and illustrated in Figures~\ref{fig:flujocilin} and \ref{fig:Reebcyl}.
Next, let $\widehat{\Phi^\e}$ denote the first return map on ${\bf R}_{0}$ for the Kuperberg flow $\Phi_t^{\epsilon}$. The domain of $\widehat{\Phi^\e}$ is the set: \begin{equation}\label{eq-domainwhPhi} Dom(\widehat{\Phi^\e}) \equiv \left\{ \xi \in {\bf R}_{0} \mid \exists ~ t > 0 ~ \text{ such ~ that} ~ \Phi_t^{\epsilon}(\xi) \in {\bf R}_{0} ~ \text{and} ~ \Phi_s^{\epsilon}(\xi)\notin {\bf R}_{0}
~ \text{for} ~ 0<s<t \right\} . \end{equation}
The precise description of the domain $Dom(\widehat{\Phi^\e})$ is very complicated, due to the nature of the orbits of $\Phi_t^{\epsilon}$ as the union of ${\mathcal W}$-arcs for the Wilson flow. Moreover, the map $\widehat{\Phi^\e} \colon Dom(\widehat{\Phi^\e}) \to {\bf R}_{0}$ has many points of discontinuity, which arise when an orbit is tangent to the section ${\bf R}_{0}$ along the line ${\bf R}_{0}\cap {\mathcal A}$.
The properties of the return map $\widehat{\Phi^\e}$ for the case ${\epsilon}=0$ are discussed in detail in Chapter~9 of \cite{HR2016}. We adapt these results as required for the case ${\epsilon} > 0$.
\subsection{The pseudogroup $\widehat{\cG}_{\epsilon}$}\label{subsec-pseudogroup}
A key idea, introduced in the work \cite{HR2016}, is to associate a pseudogroup ${\mathcal G}_K$ to the return map ${\widehat{\Phi}}$ and study the dynamics of this pseudogroup. This approach allows a more careful analysis of the interaction of the return maps ${\widehat \Psi}$ and ${\widehat{\Phi}}$ in determining the dynamics of $\Phi_t^{\epsilon}$. In this paper, we work with a pseudogroup $\widehat{\cG}_{\epsilon}$ generated by the return maps \emph{for both flows}, and then show that under the proper hypotheses, the orbits of $\widehat{\cG}_{\epsilon}$ in an invariant ``horseshoe'' subset of ${\bf R}_{0}$ agree with the orbits of $\widehat{\Phi^\e}$.
First, we recall the formal definition of a pseudogroup modeled on a space $X$. \begin{defn}\label{def-pseudogroup} A pseudogroup ${\mathcal G}$ modeled on a topological space $X$ is a collection of homeomorphisms between open subsets of $X$ satisfying the following properties: \begin{enumerate} \item For every open set $U \subset X$, the identity $Id_U \colon U \to U$ is in ${\mathcal G}$. \item For every $\varphi \in {\mathcal G}$ with $\varphi \colon U_{\varphi} \to V_{\varphi}$ where $U_{\varphi}, V_{\varphi} \subset X$ are open subsets of $X$, then also $\varphi^{-1} \colon V_{\varphi} \to U_{\varphi}$ is in ${\mathcal G}$. \item For every $\varphi \in {\mathcal G}$ with $\varphi \colon U_{\varphi} \to V_{\varphi}$ and each open subset $U' \subset U_{\varphi}$, then the restriction $\varphi \mid U'$ is in ${\mathcal G}$. \item For every $\varphi \in {\mathcal G}$ with $\varphi \colon U_{\varphi} \to V_{\varphi}$ and every $\varphi' \in {\mathcal G}$ with $\varphi' \colon U_{\varphi'} \to V_{\varphi'}$, if $V_{\varphi} \subset U_{\varphi'}$ then the composition $\varphi' \circ \varphi$ is in ${\mathcal G}$. \item If $U \subset X$ is an open set, $\{U_{\alpha} \subset X \mid \alpha \in {\mathcal A}\}$ are open sets whose union is $U$, $\varphi \colon U \to V$ is a homeomorphism to an open set $V \subset X$ and for each $\alpha \in {\mathcal A}$ we have $\varphi_{\alpha} = \varphi \mid U_{\alpha} \colon U_{\alpha} \to V_{\alpha}$ is in ${\mathcal G}$, then $\varphi$ is in ${\mathcal G}$. \end{enumerate} \end{defn}
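A standard example, included for orientation only (it is not specific to the flows considered here): a homeomorphism $f \colon X \to X$ generates the pseudogroup
\begin{equation*}
{\mathcal G}_f \;=\; \left\{ \varphi \colon U \to V \;\Big|\; \text{each } x \in U \text{ has an open neighborhood } U_x \subset U \text{ with } \varphi|U_x = f^{n(x)}|U_x, \; n(x) \in {\mathbb Z} \right\} \ ,
\end{equation*}
where $U, V \subset X$ are open. The locally constant exponent $n(x)$ is needed precisely so that property (5) on unions of maps holds; retaining only the restrictions $f^n|U$ with a single exponent $n$ on each domain yields a collection satisfying properties (1) to (4), but not in general (5).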
We first introduce a map which encodes a part of the insertion dynamics of the return map $\widehat{\Phi^\e}$. The reader interested in a more complete development of these ideas can consult Chapter~9 of \cite{HR2016}.
Let $U_{\phi_1^+} \subset Dom(\widehat{\Phi^\e})$ be the
subset of ${\bf R}_{0}$ consisting of points
$\xi \in Dom(\widehat{\Phi^\e})$ with $\eta = \widehat{\Phi^\e}(\xi)$, such that the ${\mathcal K}_{\epsilon}$-arc $[\xi, \eta]_{{\mathcal K}_{\epsilon}}$
contains a single transition point $x$, with $x \in E^{{\epsilon}}_1$.
Note that for such $\xi$, we see from Figures~\ref{fig:cWarcs} and \ref{fig:KR} that its ${\mathcal K}_{\epsilon}$-orbit exits the surface $E^{{\epsilon}}_1$ as the ${\mathcal W}$-orbit of a point $x' \in L_1^-$ with $\tau(x') = x$, flowing upwards from $\partial_h^- {\mathbb W}$ until it intersects ${\bf R}_{0}$ again. If the ${\mathcal K}_{\epsilon}$-orbit of $\xi$ enters $E^{{\epsilon}}_1$ but exits through $S^{{\epsilon}}_1$ before crossing ${\bf R}_{0}$, then it is not considered to be in the domain $U_{\phi_1^+}$, as it contains more than one transition point between $\xi$ and $\eta$.
Let $\phi_1^+ \colon U_{\phi_1^+} \to V_{\phi_1^+}$ denote the map defined by the restriction of $\widehat{\Phi^\e}$. As the ${\mathcal K}_{\epsilon}$-arcs $[\xi, \eta]_{{\mathcal K}_{\epsilon}}$ defining $\phi_1^+$ do not intersect ${\mathcal A}$, the restricted map $\phi_1^+$ is continuous. Observe that the action of the map $\phi_1^+$ corresponds to a flow through a transition point which increases the level function $n_x(t)$ by $+1$.
We can now define the pseudogroup $\widehat{\cG}_{\epsilon}$ acting on ${\bf R}_{0}$ associated to the return maps ${\widehat \Psi}$ and $\widehat{\Phi^\e}$.
\begin{defn}\label{def-pseudogroup1}
Let $\widehat{\cG}_{\epsilon}$ denote the pseudogroup generated by the collection of all maps formed by compositions of the maps
\begin{equation}\label{eq-generators}
\{Id, \phi_1^+, {\widehat \Psi}|U \mid U \subset Dom({\widehat \Psi}) ~ {\rm is ~ open ~ and}~ {\widehat \Psi}|U ~{\rm is~ continuous}\} \end{equation}
and their restrictions to open subsets in their domains. \end{defn}
The notion of a \emph{$\psg$} was introduced by Matsumoto in \cite{Matsumoto2010}; it is a subset of the maps in a pseudogroup that satisfies conditions (1) to (4) of Definition~\ref{def-pseudogroup}, but need not satisfy the condition (5) on unions of maps.
\begin{defn}\label{def-pseudogroup1a}
Let $\widehat{\cG}_{\epsilon}^*$ denote the collection of all maps formed by compositions of the maps
in \eqref{eq-generators} above, and their restrictions to open subsets in their domains. \end{defn}
Note that $\widehat{\cG}_{\epsilon}^*$ is a $\psg$ contained in $\widehat{\cG}_{\epsilon}$ but is not a pseudogroup itself. A key point in the proof of
Proposition~\ref{prop-iteration0} later in this work is that for appropriately chosen $k \geq \ell({\epsilon})$, there is a well-defined non-trivial element ${\varphi}_k={\widehat \Psi}^k\circ\phi_1^+ \in \widehat{\cG}_{\epsilon}^*$, which is a concatenation of a power of the Wilson return map ${\widehat \Psi}$ with the first return map $\phi_1^+$ of the flow $\Phi_t^{\epsilon}$.
We conclude with an observation and a fundamental technical result which relates the orbits of the map ${\widehat \Psi}$ with those of the map $\widehat{\Phi^\e}$, and encodes the existence of ``shortcuts'' as given in Proposition~\ref{prop-shortcut4}. We require the following result, which is analogous to Proposition~\ref{prop-negativemain}. Recall that the constant $r_{\epsilon}> 2$ was introduced in Lemma~\ref{lem-r0}, and that we assume that Hypothesis~\ref{hyp-monotone} is satisfied. \begin{prop}\label{prop-positivemain} Let $x$ be a primary or secondary entry point of ${\mathbb K}_{\epsilon}$ and $y$ the exit point with $x\equiv y$. If $r(x)\geq r_{\epsilon}$, then $x \prec_{{\mathcal K}_{\epsilon}} y$, and the collection of lifts of the ${\mathcal W}$-arcs in $[x,y]_{{\mathcal K}_{\epsilon}}$ contains all the ${\mathcal W}$-arcs of the ${\mathcal W}$-orbit of $x'$ that are in $\widehat{{\mathbb W}}_\e$, where $\tau(x')=x$. \end{prop} \proof The proof follows in the same way as that of Proposition~\ref{prop-negativemain}, where we note that for $r > r_{\epsilon}$ the radius strictly grows along any orbit of $\Phi_t^{{\epsilon}}$ when entering a face of one of the insertions. \endproof
Introduce the local coordinate function $\widetilde{r}(\xi)$, defined for points in ${\bf R}_{0}$ whose forward orbit hits $E_i^{\epsilon}$, for $i=1,2$, before reaching any other transition point or returning to ${\bf R}_{0}$. Let $x\in E_i^{\epsilon}$ be the first secondary entry point in the forward orbit of $\xi$, and set $\widetilde{r}(\xi)=r(x)$. Then, for $\xi$ in the domain $U_{\phi_1^+} \subset {\bf R}_{0}$ of the map $\phi_1^+ \in \widehat{\cG}_{\epsilon}$, we have $\widetilde{r}(\xi) = r(\phi_1^+(\xi))$. Recall that $\xi \in U_{\phi_1^+}$ if the forward ${\mathcal K}_{\epsilon}$-orbit of $\xi$ hits $E^{{\epsilon}}_1$ before returning to ${\bf R}_{0}$, so in particular $U_{\phi_1^+}$ contains an open neighborhood of the set $\{\widetilde{r}=2\}\subset {\bf R}_{0}$.
Next, define the subset
\begin{equation}\label{eq-domainre} U_{r_{\epsilon}} ~ = ~ \{ \xi \in Dom(\widehat{\Phi^\e}) \mid \widetilde{r}(\xi) > r_{\epsilon}\} \subset {\bf R}_{0} \ . \end{equation}
\begin{cor}\label{cor-positivemain}
Let $\xi \in U_{r_{\epsilon}}$ and let $\eta \in {\bf R}_{0}$ be
contained in the forward ${\mathcal W}$-orbit of $\xi$. Then there exists some $\ell > 0$ such that $\eta = (\widehat{\Phi^\e})^{\ell}(\xi)$.
\end{cor}
\proof
If the ${\mathcal W}$-arc $[\xi, \eta]_{{\mathcal W}}$ does not intersect an entry region ${\mathcal L}_i^{{\epsilon}-}$ then it is also a ${\mathcal K}_{\epsilon}$-arc, and so the result follows. Otherwise, let $x \in {\mathcal L}_i^{{\epsilon}-}$ be the first transition point along $[\xi, \eta]_{{\mathcal W}}$ and let $y \in {\mathcal L}_j^{{\epsilon}+}$ be the last exit point. Then $x\equiv y$, and $\xi \in U_{r_{\epsilon}}$ implies that
$r(x)> r_{\epsilon}$, so by Proposition~\ref{prop-positivemain} we have that
$y$ and $\eta$ are in the forward ${\mathcal K}_{\epsilon}$-orbit of $\xi$. Thus, there exists some $\ell > 0$ such that $\eta = (\widehat{\Phi^\e})^{\ell}(\xi)$.
\endproof
\section{Horseshoes for the pseudogroup dynamics}\label{sec-horseshoe}
In this section, we show that for $0<{\epsilon}<{\epsilon}_0$ sufficiently small, the $\psg$ $\widehat{\cG}_{\epsilon}^*$ contains a map $\varphi$ with ``horseshoe dynamics''. In fact, the precise statement, Theorem~\ref{thm-horseshoe} below, gives an even stronger conclusion, showing that there exists a countable collection of such maps, each with an invariant Cantor set for which the action is coded by a shift map, and each disjoint from the others. The proof of Theorem~\ref{thm-horseshoe} will be via a construction, which uses the relations between the action of the maps ${\widehat \Psi}$ and $\phi_1^+$ on ${\bf R}_{0}$, and the geometry of the traces of propellers and double propellers in ${\bf R}_{0}$.
\subsection{Traces of propellers}\label{subsec-props}
We describe the surfaces in the Wilson plug ${\mathbb W}$ that are generated by the Wilson flow of curves in the bottom face $\partial_h^-{\mathbb W}$, and especially those curves which cross the circle $\{r=2\}\subset \partial_h^-{\mathbb W}$. These surfaces are similar to the double propellers introduced in Definition~\ref{def-doublepropeller}.
Let $\Gamma:[0,2]\to L_1^-\subset \partial_h^-{\mathbb W}$ be a curve such that: \begin{itemize} \item $r(\Gamma(0))=r(\Gamma(2))=3$; \item $r(\Gamma(t))>r(\Gamma(1))$ for all $t\neq 1$ and $r(\Gamma(1))<2$; \item $\Gamma$ is transverse to the circle $\{r=2\}$. \end{itemize} We will call such a curve a \emph{traversing curve}. Let $t_1<t_2$ be such that $r(\Gamma(t_1))=r(\Gamma(t_2))=2$. We can divide $\Gamma$ in three overlapping parts, as illustrated in Figure~\ref{fig:Gammapositive}:
\begin{itemize} \item ${\gamma}$ the curve $\Gamma([0,t_1])$; \item $\widetilde{\gamma}$ the curve $\Gamma([t_1,t_2])$; \item $\kappa$ the curve $\Gamma([t_2,2])$. \end{itemize}
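For instance, an illustrative choice of traversing curve (assuming, for this example only, that $L_1^-$ contains the indicated annular sector) is given in polar coordinates on $\partial_h^-{\mathbb W}$ by
\begin{equation*}
\Gamma(t) \;=\; \left( \, 3 - (3 - r_{\min})\left(1 - (t-1)^2\right) \, , \; \theta(t) \, , \; -2 \, \right) \quad , \quad 0 \leq t \leq 2 \ ,
\end{equation*}
where $r_{\min} < 2$ is a constant and $\theta(t)$ is chosen so that the image lies in $L_1^-$. Then $r(\Gamma(0)) = r(\Gamma(2)) = 3$, and the radius attains its strict minimum $r(\Gamma(1)) = r_{\min} < 2$ at $t=1$. Since $\frac{d}{dt}\, r(\Gamma(t)) = 2(3-r_{\min})(t-1)$ vanishes only at $t=1$, the curve is transverse to the circle $\{r=2\}$, and $t_1 < 1 < t_2$ are the solutions of $(t-1)^2 = 1 - 1/(3 - r_{\min})$.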
\begin{figure}
\caption{The curve $\Gamma={\gamma}\cup\widetilde{\gamma}\cup
\kappa$ in $\partial_h^-{\mathbb W}$}
\label{fig:Gammapositive}
\end{figure}
Label the interior endpoints of these curves as $q_1' = \Gamma(t_1) \in {\gamma} \cap \widetilde{\gamma}$ and $q_2' = \Gamma(t_2) \in \widetilde{\gamma} \cap \kappa$. Note that the forward orbits of the points $\{q_1', q_2'\} \subset \partial_h^-{\mathbb W}$ by the flow $\Psi_t$ spiral upward to the periodic orbit ${\mathcal O}_1$, and so are trapped in the Wilson plug ${\mathbb W}$.
Recall from Section~\ref{sec-radiuslevel} that we say points $x' \in \partial_h^- {\mathbb W}$ and $y' \in \partial_h^+ {\mathbb W}$ are \emph{facing}, and we write $x' \equiv y'$, if $x' = (r, \theta, -2)$ and $y' = (r, \theta, 2)$ for some $r$ and $\theta$. Let $\overline{\Gamma} \subset \partial_h^+ {\mathbb W}$ be the facing curve to $\Gamma$, and denote by $\overline{\gamma}$, $\overline{\kappa}$ and $\overline{\widetilde{\gamma}}$ the corresponding segments of $\overline{\Gamma}$ facing to the curves ${\gamma}$, $\kappa$ and $\widetilde{\gamma}$. Then also define $\overline{q}_1' = \overline{\Gamma}(t_1) \in \overline{\gamma} \cap \overline{\widetilde{\gamma}}$ and $\overline{q}_2' = \overline{\Gamma}(t_2) \in \overline{\widetilde{\gamma}} \cap \overline{\kappa}$. The antisymmetry assumption on the vector field ${\mathcal W}$ implies that the forward $\Psi_t$ flow of a point $\Gamma(t)\in \partial_h^- {\mathbb W}$ terminates in the facing point $\overline{\Gamma}(t)\in \partial_h^+ {\mathbb W}$, except for the cases when $t = t_1 , t_2$.
Now consider the surface $P_\Gamma\subset {\mathbb W}$ generated by the $\Psi_t$-flow of a traversing curve $\Gamma$. The flows of the points $q_1' , q_2' \in \Gamma$ are trapped in ${\mathbb W}$, hence the surface $P_\Gamma$ is non-compact. On the other hand, by the conditions (W1) to (W6) on the function $f$ in the definition of the Wilson flow ${\mathcal W}$ in Section~\ref{subsec-wilson}, the vector field ${\mathcal W}$ is transverse to ${\bf R}_{0}$ away from the boundary cylinders $r=1$ and $r=3$, and the annulus $\{z=0\} = {\mathcal A}$. It follows that each connected component of $P_{\Gamma} \cap {\bf R}_{0}$ is a closed embedded curve in ${\bf R}_{0}$. We next analyze the properties of these curves.
Observe that the $\Psi_t$-flows of the curve segments ${\gamma}$ and $\kappa$ generate infinite propellers $P_{\gamma}$ and $P_\kappa$ as in Definition~\ref{def-infpropeller}, and illustrated in Figure~\ref{fig:propeller}, whose trace on ${\bf R}_{0}$ is pictured in Figure~\ref{fig:arcspropeller}(A). We denote the curves in $P_{\gamma}\cap{\bf R}_{0}$ by ${\gamma}_0(\ell)$, indexed by the integers $\ell> 0$. Recall that these are simple arcs whose endpoints lie on the vertical line $\{r=2\}\cap {\bf R}_{0}$. The lower endpoint $p_1^-(1)$ of ${\gamma}_0(1)$ is the first intersection of the forward ${\mathcal W}$-orbit of $q_1'$ with ${\bf R}_{0}$. Then for $\ell > 1$, the lower endpoint $p_1^-(\ell)$ of ${\gamma}_0(\ell)$ is ${\widehat \Psi}^{\ell -1}(p_1^-(1))$.
Likewise, the upper endpoint $p_1^+(1)$ of ${\gamma}_0(1)$ is the first
intersection of the backward ${\mathcal W}$-orbit of $\overline{q}_1'$ with ${\bf R}_{0}$, and for $\ell > 1$ the upper endpoint of ${\gamma}_0(\ell)$ is ${\widehat \Psi}^{1-\ell}(p_1^+(1))$.
Analogously, we denote the curves in $P_\kappa\cap{\bf R}_{0}$ by $\kappa_0(\ell)$, indexed by the integers $\ell>0$. Again, these are simple arcs whose endpoints lie on the vertical line $\{r=2\}\cap {\bf R}_{0}$. The lower endpoint $p_2^-(1)$ of $\kappa_0(1)$ is the first intersection of the forward ${\mathcal W}$-orbit of $q_2'$ with ${\bf R}_{0}$. Then for $\ell > 1$, the lower endpoint $p_2^-(\ell)$ of $\kappa_0(\ell)$ is ${\widehat \Psi}^{\ell -1}(p_2^-(1))$.
Likewise, the upper endpoint $p_2^+(1)$ of $\kappa_0(1)$ is the first
intersection of the backward ${\mathcal W}$-orbit of $\overline{q}_2'$ with ${\bf R}_{0}$, and for $\ell > 1$ the upper endpoint of $\kappa_0(\ell)$ is ${\widehat \Psi}^{1-\ell}(p_2^+(1))$.
Note that these two infinite families of curves $\{{\gamma}_0(\ell) \mid \ell \geq 1\}$ and $\{\kappa_0(\ell) \mid \ell \geq 1\}$ are interlaced: between two ${\gamma}_0$-curves there is a $\kappa_0$-curve, and vice-versa. To be more precise, the geometry of the Wilson flow maps the lower endpoint $q_2'$ of the ``upper'' curve $\kappa$ in Figure~\ref{fig:Gammapositive} to the point $p_2^-(1) \in {\bf R}_{0}$, while the lower endpoint $q_1'$ of the ``lower'' curve ${\gamma}$ is mapped to $p_1^-(1) \in {\bf R}_{0}$ which lies \emph{above} $p_2^-(1)$. That is, we have $z(p_1^-(1)) > z(p_2^-(1))$. The return map of the Wilson flow then preserves this local order, so that we have \begin{equation}\label{eq-interlaced} z(p_2^-(1)) <z(p_1^-(1)) < z(p_2^-(2)) <z(p_1^-(2)) < \cdots < z(p_2^-(\ell)) <z(p_1^-(\ell)) < \cdots < -1 \ . \end{equation}
In order to complete the description of the curves in $P_{\Gamma} \cap {\bf R}_{0}$ we must consider the $\Psi_t$ flow of the third curve segment $\widetilde{\gamma}$, and how the endpoints of the curves in its intersections with ${\bf R}_{0}$ are attached to the endpoints of the curves ${\gamma}_0(\ell)$ and $\kappa_0(\ell)$ for $\ell \geq 1$.
The curve $\widetilde{\gamma}$ coincides with $\Gamma\cap \{r\leq2\}$ and its endpoints are $q_1' , q_2' \in \Gamma$. These points are trapped in ${\mathbb W}$ for the forward $\Psi_t$ flow, hence the forward flow of $\widetilde{\gamma}$ under $\Psi_t$ defines a non-compact surface $P_{\widetilde{\gamma}}$ in the region $\{r\leq2\}\subset {\mathbb W}$.
We denote the curves in $P_{\widetilde{\gamma}}\cap {\bf R}_{0}$ by $\widetilde{\gamma}_0(\ell)$, indexed by the integers $\ell> 0$. Consider the point $\Gamma(1)\in \widetilde{\gamma}$, whose radius coordinate is less than 2. It follows that the ${\mathcal W}$-orbit of $\Gamma(1)$ is finite, and thus intersects ${\bf R}_{0}$ in a finite number $n$ of points. The integer $n$ depends on the choice of the insertion map $\sigma_1^{\epsilon}$, but we omit this dependence from the notation, as it simplifies the presentation and does not impact the results below.
Moreover, without loss of generality, we may assume that the intersection of the ${\mathcal W}$-orbit of $\Gamma(1)$ with the annulus ${\mathcal A}$ is not contained in ${\bf R}_{0}$. The symmetry of the Wilson flow with respect to the annulus ${\mathcal A}$ implies that the trace of the ${\mathcal W}$-orbit of $\Gamma(1)$ on ${\bf R}_{0}$ forms a symmetric pattern, where points of intersection are paired, each point below the center line $\{z=0\} \cap {\bf R}_{0}$ paired with a symmetric copy above this line. Hence, $n$ is an even number.
The intersection $P_{\widetilde{\gamma}}\cap {\bf R}_{0}$ consists of an infinite collection of arcs, with endpoints in $\{r=2\} \cap {\bf R}_{0}$. For $1\leq \ell\leq n/2$, label the curve with endpoints $\{p_1^-(\ell) , p_2^-(\ell) \}$ by $\widetilde{\gamma}_0(\ell)$. Thus, $\widetilde{\gamma}_0(\ell)$
is a ``parabolic curve'' in the region $\{z < 0\} \cap {\bf R}_{0}$, as illustrated in Figure~\ref{fig:tracePG}.
By the anti-symmetry of the flow $\Psi_t$, for each $1\leq \ell\leq n/2$ there is a corresponding ``parabolic curve'' $\overline{\widetilde{\gamma}}_0(\ell)$ in the region $\{z > 0\} \cap {\bf R}_{0}$ with endpoints $\{p_1^+(\ell) , p_2^+(\ell) \}$. Let $\Gamma_0(\ell) \subset {\bf R}_{0}$ denote the closed curve
obtained by joining the endpoints of the curve $\widetilde{\gamma}_0(\ell)$ with the endpoints of the curve $\overline{\widetilde{\gamma}}_0(\ell)$ via the curves
$\{{\gamma}_0(\ell), \kappa_0(\ell)\}$. The curves $\Gamma_0(\ell)$ for $1 \leq \ell \leq n/2$ are illustrated in Figure~\ref{fig:tracePG} as the closed curves that \emph{do not contain} the vertical arc ${\mathcal R} \cap {\bf R}_{0}$ in their interiors.
The trace of $P_{\widetilde{\gamma}}$ on ${\bf R}_{0}$ also contains an infinite number of connected arcs that do not contain points of the orbit of $\Gamma(1)$, and each of these arcs is again symmetric with respect to $\{z=0\}$. For each point $p_1^-(\ell)$ with $\ell > n/2$ we obtain an arc in ${\bf R}_{0}$ with $p_1^-(\ell)$ as the lower endpoint, and $p_1^+(\ell)$ as the upper endpoint. When joined with the arc ${\gamma}_0(\ell)$ having the same endpoints, we obtain a closed curve $\Gamma_0^{{\gamma}}(\ell)$ for $\ell > n/2$, that contains the vertical arc ${\mathcal R} \cap {\bf R}_{0}$ in its interior.
Similarly, for each point $p_2^-(\ell)$ with $\ell > n/2$ we obtain an arc in ${\bf R}_{0}$ with $p_2^-(\ell)$ as the lower endpoint, and $p_2^+(\ell)$ as the upper endpoint. When joined with the arc $\kappa_0(\ell)$ having the same endpoints, we obtain a closed curve $\Gamma_0^{\kappa}(\ell)$ for $\ell > n/2$, that also contains the vertical arc ${\mathcal R} \cap {\bf R}_{0}$ in its interior. Moreover, the closed curves $\Gamma_0^{{\gamma}}(\ell)$ and $\Gamma_0^{\kappa}(\ell)$ are nested for $\ell > n/2$. These curves are illustrated in Figure~\ref{fig:tracePG} as the infinite sequence of nested closed curves that contain the vertical arc ${\mathcal R} \cap {\bf R}_{0}$ in their interiors.
The curves $\Gamma_0(\ell) \subset {\bf R}_{0}$ for $1 \leq \ell \leq n/2$, and
$\Gamma_0^{{\gamma}}(\ell), \Gamma_0^{\kappa}(\ell) \subset {\bf R}_{0}$ for $\ell >
n/2$, can be visualized in terms of a modification of the
illustration of a standard propeller in
Figure~\ref{fig:propeller}. Since the curve $\Gamma$ crosses the
circle $\{r=2\} \cap \partial_h^-{\mathbb W}$, the flow of the curves
${\gamma}\cup\tilde{{\gamma}}$ and $\tilde{{\gamma}}\cup\kappa$ each develops a
singularity along this circle, which results in an infinite cylinder
attached to the surface pictured in Figure~\ref{fig:propeller}. This
infinite cylinder wraps around and is asymptotic to the Reeb cylinder
${\mathcal R}$ in ${\mathbb W}$. Its intersections with ${\bf R}_{0}$ yield the nested
family $\Gamma_0^{{\gamma}}(\ell)$ for $\ell > n/2$. A similar statement holds
for the flow of the curve $\kappa$, yielding the nested curves
$\Gamma_0^{\kappa}(\ell)$ for $\ell > n/2$. The curves $\Gamma_0(\ell)$ for
$1 \leq \ell \leq n/2$, are obtained from the flow of the point $\Gamma(1)$, which now lies in the region $r < 2$.
\begin{figure}
\caption{Trace of the surface $P_{\gamma}$ on ${\bf R}_{0}$}
\label{fig:tracePG}
\end{figure}
\subsection{The transverse hypotheses} \label{subsec-transverse}
We formulate the conditions on a Kuperberg plug ${\mathbb K}_{\epsilon}$ for ${\epsilon} > 0$ which will be assumed in the following sections of the text, where horseshoe dynamics are exhibited for the action of the $\psg$ $\widehat{\cG}_{\epsilon}^*$ on ${\bf R}_{0}$.
Assume that the flow $\Psi_t$ on the Wilson Plug satisfies Hypothesis~\ref{hyp-genericW}, and that the construction of the flow $\Phi_t^{\epsilon}$ on ${\mathbb K}_{\epsilon}$ satisfies the conditions (K1) to (K8) in Section~\ref{subsec-kuperberg}, and Hypotheses~\ref{hyp-monotone} and \ref{hyp-SRI}.
Observe that for a plug ${\mathbb K}_{\epsilon}$ defined by the embeddings $\sigma_i^{\epsilon}$ satisfying the Parametrized Radius Inequality (K8), the curve $\theta' \mapsto \sigma_1^{\epsilon}(2,\theta',-2)$ in ${\mathcal L}_1^{{\epsilon}-}$ contains two points with $r=2$. In other words, the image under $\sigma_i^{\epsilon}$ of the circle of radius $r = 2$ intersects the cylinder ${\mathcal C}\subset {\mathbb W}$ of points with radius $r= 2$ twice. Thus the curve $\widetilde{r}=2$ in ${\bf R}_{0}$ intersects the vertical line $r=2$ in two points, as in Figure~\ref{fig:Gprime}.
\begin{figure}
\caption{The points $v_1^{\epsilon}$ and $v_2^{\epsilon}$ in ${\bf R}_{0}$}
\label{fig:Gprime}
\end{figure}
Hypotheses~\ref{hyp-monotone} and \ref{hyp-SRI} imply that the parabolic
curve of points in ${\bf R}_{0}$ whose forward orbit hits $E^{{\epsilon}}_1$ in points
of radius $r_{\epsilon}$ is tangent to the vertical line of radius equal to
$r_{\epsilon}$. That is, the curve $\widetilde{r}=r_{\epsilon}$ is tangent to the
vertical line $r=r_{\epsilon}$. Moreover, by the Parametrized Radius Inequality, the parabolic curve of points $\widetilde{r}=2$ is tangent to the vertical line of radius equal to $2+{\epsilon}$, and by Lemma~\ref{lem-r0}, we have that $r_{\epsilon}>2+{\epsilon}$, as illustrated in Figure~\ref{fig:Gprime}.
Next, assume that ${\epsilon} > 0$ is sufficiently small so that $2+{\epsilon} < r_{\epsilon} < 2 + {\epsilon}_0/2$. This implies that the parabolic curves $\widetilde{r}=2$ and $\widetilde{r}=r_{\epsilon}$ in Figure~\ref{fig:Gprime} intersect the vertical line $r=2$ transversally. Recall that the orbits of the points $\xi\in {\bf R}_{0}$ such that $\widetilde{r}(\xi)=r_{\epsilon}$ hit $E^{{\epsilon}}_1$ in points of radius $r_{\epsilon}$.
Label the points of intersection of the curve $\widetilde{r}=r_{\epsilon}$ with the
vertical line $r=2$ by \begin{equation}\label{eq-boundarypoints} \{r=2\} \cap \{\widetilde{r}=r_{\epsilon}\} \cap {\bf R}_{0} = \{ v_1^{\epsilon}, v_2^{\epsilon}\} \ , \end{equation}
where $z(v_1^{\epsilon}) < z(v_2^{\epsilon}) < 0$ by Hypothesis (K4). We will assume in addition that $z(v_1^{\epsilon}) > -1$, so that the curve $\widetilde{r}=r_{\epsilon}$ has a vertical offset. Note that for ${\epsilon}=0$, the Radius Inequality implies that $r_{\epsilon} = 2$ and $v_1^{\epsilon} = v_2^{\epsilon}$ is the unique point $\omega_1 \in {\bf R}_{0}$ defined by \eqref{eq-omegas}, so that $z(v_1^{\epsilon}) +1 =0$.
Observe that Hypothesis~\ref{hyp-monotone} implies that if $r(\xi)\geq r_{\epsilon}$ then $\widetilde{r}(\xi)\geq r_{\epsilon}$, while condition (K8) implies that if $r(\xi)<r_{\epsilon}$ then $\widetilde{r}(\xi)\geq r(\xi)-{\epsilon}$.
Moreover, Hypothesis~\ref{hyp-SRI} implies that the curves $\{\widetilde{r}=cst\}$ are parabolic; that is, the $r$-coordinate depends quadratically on the $z$-coordinate.
Finally, the domain $U_{\phi_1^+}$ of the map $\phi_1^+$ contains a neighborhood of the set $\{\widetilde{r}=2\}$, and thus we can assume that for ${\epsilon} > 0 $ sufficiently small we have that \begin{equation}\label{eq-funddomain}
V_0 = \{x\in {\bf R}_{0} \,|\, r(x)\geq 2, \, \widetilde{r}(x)\leq r_{\epsilon}\} \subset U_{\phi_1^+} ~. \end{equation} That is, the set bounded by the vertical line of radius 2 and the parabolic curve of points whose forward orbit hit $E^{{\epsilon}}_1$ in points of radius $r_{\epsilon}$, is contained in $U_{\phi_1^+}$.
\subsection{Invariant Cantor sets} \label{subsec-horseshoemap}
We assume that the conditions of Section~\ref{subsec-transverse} are satisfied.
Let $\Gamma'\subset {\mathcal L}_1^{{\epsilon}-}$ be the intersection of the cylinder $\{r=2\}\subset {\mathbb W}$ and ${\mathcal L}_1^{{\epsilon}-}$, and set $\Gamma=(\sigma_1^{\epsilon})^{-1}(\Gamma')\subset L_1^-$. Observe that $\Gamma$ is a traversing curve with parabolic shape, with its endpoints in the circle $\{r=3\}$, and admits a parametrization as in Section~\ref{subsec-props}. In particular, we can divide it into three parts, $\Gamma={\gamma}\cup \widetilde{\gamma}\cup \kappa$ with $\Gamma\cap \{r\geq 2\}={\gamma}\cup\kappa$, as in Figure~\ref{fig:Gammapositive}.
Let $V \subset L_1^-$ be the compact subset bounded by $\Gamma$ and the circle $\{r=r_{\epsilon}\}$, as illustrated in Figure~\ref{fig:UV}. Let $V' = \tau(V) \subset E^{{\epsilon}}_1 \subset {\mathbb K}$ be its image in the entry region $E^{{\epsilon}}_1$. Then the set $V_0$ defined in \eqref{eq-funddomain} is identified with the set of points in ${\bf R}_{0}$ for which the first transition point of their $\Phi_t^{\epsilon}$-flow lies in $V'\subset E^{{\epsilon}}_1$.
\begin{figure}
\caption{ The shaded region $V$ in $L_1^-$}
\label{fig:UV}
\end{figure}
Recall that the \emph{trace} of the Reeb cylinder in ${\bf R}_{0}$ is the line segment
${\mathcal R} \cap {\bf R}_{0} = \{(r,z) \mid r=2, -1\leq z\leq 1\}$.
Let $P_\Gamma \subset {\mathbb W}$ be the propeller surface generated by the
$\Psi_t$ flow of the traversing curve $\Gamma$.
Let $n > 0$ denote the number of intersections of the flow of the point $\Gamma(1)$ with the surface ${\bf R}_{0}$, which we assume is an even integer as discussed in Section~\ref{subsec-props}. Then the trace $P_\Gamma \cap {\bf R}_{0}$ consists of an infinite collection of closed curves, which we call the $\Gamma_0$-curves. Recall that these have two types: the closed curves that \emph{do not contain} the trace of the Reeb cylinder in their interior, which are denoted by $\Gamma_0(\ell) \subset {\bf R}_{0}$ for $0 < \ell \leq n/2$. We also have the two infinite families of closed curves
$\{ \Gamma_0^{{\gamma}}(\ell) \mid \ell > n/2\}$ and $\{ \Gamma_0^{\kappa}(\ell) \mid \ell > n/2\}$ which contain the trace of the Reeb cylinder in their interior, as illustrated in Figure~\ref{fig:tracePG}.
For $\ell>n/2$, set $\Gamma_0(\ell)=\Gamma_0^{\gamma}(\ell)\cup \Gamma_0^\kappa(\ell)$.
By the symmetry of the Wilson flow, each $\Gamma_0$ curve is symmetric with respect to $\{z=0\}$, so that the observations for the forward $\Psi_t$ flow of curves in $\partial_h^- {\mathbb W}$ also apply to the reverse flow of curves in $\partial_h^+ {\mathbb W}$.
The traversing curve $\Gamma={\gamma}\cup \widetilde{\gamma}\cup \kappa$ intersects the circle $\{r=2\} \cap L_1^-$ twice, in the points $\{q_1' , q_2'\}$, where $q_1'$ is the inner endpoint of ${\gamma}$, and $q_2'$ is the inner endpoint of $\kappa$. The ${\mathcal W}$-orbits of these points intersect the vertical segment $\{r=2, \, z<-1\}$ in an increasing sequence of interlaced points, as in \eqref{eq-interlaced}, which limit to $p_1^-$ as $\ell \to \infty$. For each $\ell>0$, the lower intersection point $p_2^-(\ell)$ is the lower endpoint of the curve $\kappa_0(\ell)$, and the upper intersection point $p_1^-(\ell)$ is the lower endpoint of the curve ${\gamma}_0(\ell)$.
By Hypothesis~\ref{hyp-SRI}, the curves ${\gamma}_0(\ell)$ and $\kappa_0(\ell)$ have parabolic shape near their intersection with the vertical line $r=2$; that is, the $z$-coordinate depends quadratically on the $r$-coordinate. As the points in the intersection $\Gamma_0(\ell)\cap\{r=2\}$ tend to $p_1^-$, the curves ${\gamma}_0(\ell)$ and $\kappa_0(\ell)$ accumulate on the trace of the Reeb cylinder, ${\mathcal R} \cap {\bf R}_{0}$, which is a vertical line segment, and these curves become increasingly vertical as they approach ${\mathcal R} \cap {\bf R}_{0}$.
We next require a technical result, that states that the curves ${\gamma}_0(\ell)$ and $\kappa_0(\ell)$ are in ``general position'' with respect to the curves $\widetilde{r} = 2$ and $\widetilde{r} = r_{\epsilon}$. Recall that we assume the conditions of Section~\ref{subsec-transverse} are satisfied by the given map $\sigma_1^{\epsilon}$, and in particular, the vertical offset $z(v_1^{\epsilon})>-1$ for the points defined by \eqref{eq-boundarypoints}.
\begin{lemma} \label{lem-perturbation} For ${\epsilon} > 0$ sufficiently small, there exists $\ell({\epsilon}) > 0$ such that for $\ell \geq \ell({\epsilon})$, the curves ${\gamma}_0(\ell)$ and $\kappa_0(\ell)$ intersect the curves $\widetilde{r} = 2$ and $\widetilde{r} = r_{\epsilon}$ in four points, where the intersections are transverse. \end{lemma} \proof The assumption that the vertical coordinate $z(v_1^{\epsilon}) > -1$ implies that for $\ell$ sufficiently large, the curves ${\gamma}_0(\ell)$ and $\kappa_0(\ell)$ intersect the curve in ${\bf R}_{0}$ defined by $\widetilde{r}=2$ transversely. Let $\ell({\epsilon})$ be the first index for which this holds. Then by the nested properties of the curves ${\gamma}_0(\ell)$ and $\kappa_0(\ell)$, the transversality property holds for all $\ell \geq \ell({\epsilon})$ as well. \endproof
The conclusion of Lemma~\ref{lem-perturbation} is illustrated in Figure~\ref{fig:positive1}. Note that the constant $\ell({\epsilon})$ also depends on the choice of the embedding $\sigma_1^{\epsilon}$, but for simplicity we omit this dependence from the notation.
\begin{prop}\label{prop-iteration0} Let ${\epsilon} > 0$, and let $V_0$ and $\ell({\epsilon})>0$ be defined as above. Fix $k \geq \ell({\epsilon})$, and define ${\varphi}_k={\widehat \Psi}^k\circ\phi_1^+ \in \widehat{\cG}_{\epsilon}^*$. Then for $V_0$ as defined in \eqref{eq-funddomain}, the following results hold: \begin{enumerate} \item $U_k={\varphi}_k(V_0)\cap V_0\neq \emptyset$; \item $U_k$ intersects the curve $\{\widetilde{r}=2\}$ along two disjoint arcs $\alpha$ and $\beta$. \end{enumerate} \end{prop}
\proof The image $\phi_1^+(V_0)\subset {\bf R}_{0}$ is the set of first intercept points in ${\bf R}_{0}$ for the $\Psi_t$-flow of the region $V \subset L_1^-$, where
$V$ is illustrated in Figure~\ref{fig:UV}. Then the trace on ${\bf R}_{0}\subset {\mathbb W}$ of the ${\mathcal W}$-orbits of $V$ is the union of the sets $V_{\ell}={\widehat \Psi}^{\ell} \circ \phi_1^+(V_0)\subset {\bf R}_{0}$ for $\ell>0$. The region $V_\ell$ is bounded by a segment in the vertical line $\{r= r_{\epsilon}\} \cap {\bf R}_{0}$, and by the curve $\Gamma_0(\ell)$, where we identify the rectangles ${\bf R}_{0}$ in ${\mathbb W}$ and ${\mathbb K}_{\epsilon}$.
For $k \geq \ell({\epsilon})$, $\Gamma_0(k)$ intersects the curves $\{\widetilde{r}=2\}$ and $\{\widetilde{r}=r_{\epsilon}\}$ transversely, and each intersection consists of four points. Then $V_k$ intersects the region bounded by the curve $\{\widetilde{r}=r_{\epsilon}\}$, implying that $U_k\neq \emptyset$ and that $U_k\cap \{\widetilde{r}=2\}$ has two connected components, which are labeled $\alpha$ for the lower one, and $\beta$ for the upper one, as in Figure~\ref{fig:positive1}. Then $U_k$ is bounded by ${\gamma}_0(k)$, $\kappa_0(k)$, the vertical line $\{r=2\}$ and the curve $\{\widetilde{r}=r_{\epsilon}\}$.
Observe that $U_k$ is bounded by parts of the curves ${\gamma}_0(k)$ and $\kappa_0(k)$. In Figure~\ref{fig:positive1} we are assuming that $k\leq n/2$, and the case $k>n/2$ is analogous since the shape of ${\gamma}_0(k)$ and $\kappa_0(k)$ near $V_0$ is analogous for the two cases of $k$, as depicted in Figure~\ref{fig:tracePG}.
\endproof
\begin{figure}
\caption{The region $U_k$ and the curves $\alpha$ and $\beta$ in ${\bf R}_{0}$}
\label{fig:positive1}
\end{figure}
\begin{remark}\label{rmk-curves} A priori, the boundary of $U_k$ is contained in the union of ${\gamma}_0(k)$, $\kappa_0(k)$, the curve $\{\widetilde{r}=r_{\epsilon}\}$ and the vertical line $\{r=2\}$. The assumption that $z(v_1^{\epsilon}) > -1$ implies that this region is disjoint from the vertical line $\{r=2\}$. \end{remark}
\subsection{The shift map} \label{subsec-labelhorseshoemap}
Let ${\epsilon} > 0$, and let $V_0$ and $\ell({\epsilon})>0$ be defined as in Section~\ref{subsec-horseshoemap}. Fix $k \geq \ell({\epsilon})$ and define ${\varphi}_k={\widehat \Psi}^k\circ\phi_1^+$ and $U_k={\varphi}_k(V_0)\cap V_0$, as in Proposition~\ref{prop-iteration0}.
We describe the set $U_k \cap {\varphi}_k(U_k)$ in detail, showing that it is composed of two connected components, and sketch the description of the set $U_k\cap {\varphi}_k(U_k\cap{\varphi}_k(U_k))$. The recursive construction then yields a collection of $2^n$ disjoint compact regions in the image of ${\varphi}_k^n$, and their infinite intersection defines a Cantor set invariant under the action of ${\varphi}_k$, on which the restricted action is conjugate to the full shift.
Observe that $U_k\subset V_0\subset U_{\phi_1^+}$. The forward $\Phi_t^{\epsilon}$-flow of $U_k$ intersects the secondary entry region $E^{{\epsilon}}_1$ in the subset $U_k'\subset E^{{\epsilon}}_1$. Then $U_k'$ is bounded by the curve $\{r=r_{\epsilon}\}$, and the curves ${\gamma}'(1,k)$ and $\kappa'(1,k)$, obtained by flowing the curves ${\gamma}_0(k)$ and $\kappa_0(k)$, respectively, forward to $E^{{\epsilon}}_1$. Set $U_k''=\tau^{-1}(U_k')\subset L_1^-$, bounded by the circle $\{r=r_{\epsilon}\}$ and the curves ${\gamma}(1,k)=\tau^{-1}({\gamma}'(1,k))$ and $\kappa(1,k)=\tau^{-1}(\kappa'(1,k))$. By construction, each of ${\gamma}(1,k)$ and $\kappa(1,k)$ intersects the circle $\{r=2\}$ twice, thus both ${\gamma}(1,k)$ and $\kappa(1,k)$ are traversing curves, as in Figure~\ref{fig:U}.
\begin{figure}
\caption{The region $U_k''$ and parts of the curves
$\kappa(1,k)$ and ${\gamma}(1,k)$ in $L_1^-$}
\label{fig:U}
\end{figure}
The surfaces $P_{{\gamma}(1,k)}$ and $P_{\kappa(1,k)}$ in ${\mathbb W}$, generated by the curves ${\gamma}(1,k)$ and $\kappa(1,k)$, intersect the rectangle ${\bf R}_{0}$ in a similar manner as $P_\Gamma$, as in Figure~\ref{fig:tracePG}. Let ${\gamma}_0(1,k;\ell)$ and $\kappa_0(1,k;\ell)$, for $\ell\geq 1$ unbounded, be the curves in the trace of the surfaces $P_{{\gamma}(1,k)}$ and $P_{\kappa(1,k)}$ on ${\bf R}_{0}$. The shape of these curves is analogous to that of the curves $\Gamma_0(\ell)$ in the trace of $P_\Gamma$. Observe that for each $\ell>0$, the curves ${\gamma}_0(1,k;\ell)$ and $\kappa_0(1,k;\ell)$ are contained in the region bounded by $\Gamma_0(\ell)$, and as $\ell\to \infty$ these curves accumulate on ${\mathcal R}\cap {\bf R}_{0}$.
Since $U_k'' \subset L_1^-$ is bounded by ${\gamma}(1,k)$, $\kappa(1,k)$ and the circle $\{r=r_{\epsilon}\}$, the region ${\widehat \Psi}^{\ell}\circ\phi_1^+(U_k)$ is bounded by the curves ${\gamma}_0(1,k;\ell)$, $\kappa_0(1,k;\ell)$ and the vertical line $\{r=r_{\epsilon}\}$.
Consider now the case $\ell=k\geq \ell({\epsilon})$. Then ${\varphi}_k(U_k)={\widehat \Psi}^k\circ\phi_1^+(U_k)$ intersects the set $\{r< 2\}$ along one connected component that is U-shaped and bounded by ${\gamma}_0(1,k;k)\cap \{r<2\}$ and $\kappa_0(1,k;k)\cap \{r<2\}$. Also, ${\varphi}_k(U_k)$ has two connected components in $\{r\geq 2\}$, each corresponding to one of the connected components in ${\gamma}_0(1,k;k)\cap \{r\geq 2\}$. Hence $U_k \cap {\varphi}_k(U_k)$ has two connected components. Let $W_0$ be the component on the left and $W_1$ the component on the right as in Figure~\ref{fig:positive2}. The sets $W_0$ and $W_1$ depend on $k$, but we omit this dependence in the notation.
\begin{figure}
\caption{The regions $W_0$ and $W_1$ in ${\bf R}_{0}$}
\label{fig:positive2}
\end{figure}
\begin{lemma}\label{lem-iteration1} For $i=0,1$, the sets $W_i$ are non-empty and intersect the curve $\{\widetilde{r}=2\}$ along two arcs $\alpha_i\subset \alpha$ and $\beta_i\subset \beta$. \end{lemma}
\proof The discussion above implies that the sets $W_i$ are non-empty. Also, by construction $W_i\cap\alpha$ and $W_i\cap\beta$ are non-empty for $i=0,1$; set $\alpha_i=W_i\cap \alpha$ and $\beta_i=W_i\cap \beta$ for $i=0,1$. \endproof
We can now iterate the construction. For $i_1=0,1$, consider the set ${\varphi}_k(W_{i_1})={\widehat \Psi}^k\circ\phi_1^+(W_{i_1})$. Since the curves bounding $W_{i_1}$ cross the curve $\{\widetilde{r}=2\}$, the set ${\varphi}_k(W_{i_1})\subset {\varphi}_k(U_k)$ intersects the side $\{r<2\}$ in a U-shaped set, and ${\varphi}_k(W_{i_1})\cap \{r\geq 2\}$ has two connected components, one inside $W_0$ and the other inside $W_1$. Thus we can define the four sets $$W_{i_1,i_2}\,=\,{\varphi}_k(W_{i_1})\cap W_{i_2}\,\subset\,W_{i_2} \qquad \text{for}\qquad i_1,i_2=0,1.$$ In complete analogy with Lemma~\ref{lem-iteration1}, we have that each $W_{i_1,i_2}$ is non-empty and intersects $\{\widetilde{r}=2\}$ along two arcs $\alpha_{i_1,i_2}\subset \alpha_{i_2}$ and $\beta_{i_1,i_2}\subset \beta_{i_2}$. Observe that ${\varphi}_k^{-1}(W_{i_1,i_2})\subset W_{i_1}$.
Iterating this construction, we conclude that for any $n>0$ and for any sequence $I=\{i_1,i_2, \ldots, i_n\}\in \{0,1\}^n$ the set $W_I=W_{i_1,i_2, \ldots, i_n}$ satisfies: \begin{itemize} \item $W_I$ is non-empty; \item $W_I$ intersects the curve $\{\widetilde{r}=2\}$ along two arcs $\alpha_I$ and $\beta_I$. \end{itemize}
We make two observations. For a sequence $I\in \{0,1\}^n$, $I=\{i_1,i_2,\ldots, i_n\}$ and a point $\xi\in W_I$, we have \begin{eqnarray} {\varphi}_k^{-1}(\xi) & \in & W_{i_1,i_2,\ldots, i_{n-1}} \label{eq-shift1}\\ {\varphi}_k^{-2}(\xi)={\varphi}_k^{-1}\circ {\varphi}_k^{-1}(\xi) & \in & W_{i_1,i_2,\ldots, i_{n-2}}, \label{eq-shift2} \end{eqnarray} and so forth, so that by induction we have ${\varphi}_k^{-(n-1)}(\xi)\in W_{i_1}$. Also by construction, $$W_I\subset W_{i_2,i_3,\ldots,i_n}\subset W_{i_3,i_4,\ldots,i_n}\subset \cdots \subset W_{i_n}.$$
We can now define
\begin{equation}\label{eq-cantorset} W({\varphi}_k) = \bigcap_{n\geq 1} \, \bigcap_{I=\{i_1,i_2,\ldots, i_n\}} ~ W_I \end{equation} which is a Cantor set that is invariant under the map ${\varphi}_k$.
Observe that each $\xi \in W({\varphi}_k)$ is uniquely defined by an infinite string $\displaystyle I_{\xi} = \{ i_1,i_2,\ldots, i_n, \ldots\}\in \{0,1\}^{\mathbb N}$, which we call the shift coordinates on $W({\varphi}_k)$. The identities \eqref{eq-shift1}, \eqref{eq-shift2} and their generalization imply that under this identification, the map ${\varphi}_k$ acts on the points of $W({\varphi}_k)$ via the right shift on the shift coordinates. We then have the standard observation about such maps: \begin{lemma}\label{lem-periodic} The periodic orbits for the restricted action ${\varphi}_k \colon W({\varphi}_k) \to W({\varphi}_k)$ are dense in $W({\varphi}_k)$. \end{lemma}
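The shift coordinates and the density of periodic orbits can be illustrated by a short computation. This is a generic fact about the full shift on two symbols, not specific to the construction above; the direction of the shift and the finite truncation of sequences are bookkeeping conventions of this illustrative sketch.

```python
# Illustrative sketch of the full 2-shift on binary codings; generic
# symbolic dynamics, not part of the geometric construction.

def shift(word):
    """One step of the shift map on a finite truncation of a binary
    sequence: drop the first symbol."""
    return word[1:]

def periodic_approximation(word, n):
    """An n-periodic sequence agreeing with `word` on its first n
    symbols, i.e. lying in the same cylinder set W_I with |I| = n."""
    block = word[:n]
    reps = -(-len(word) // n)  # ceiling division
    return (block * reps)[:len(word)]

xi = [0, 1, 1, 0, 1, 0, 0, 1]   # truncated shift coordinates of a point
assert shift(xi) == [1, 1, 0, 1, 0, 0, 1]

# Density of periodic orbits: every cylinder set contains a periodic
# sequence, obtained by repeating the defining block.
p = periodic_approximation(xi, 4)
assert p[:4] == xi[:4]          # p lies in the cylinder W_{0,1,1,0}
assert p == [0, 1, 1, 0, 0, 1, 1, 0]
```

In these terms, Lemma~\ref{lem-periodic} is the statement that every cylinder set $W_I$ contains such a periodic sequence.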
Finally, observe that for $k' > k \geq \ell({\epsilon})$, the regions bounded by the curves $\Gamma_0(k')$ and $\Gamma_0(k)$ are disjoint, hence we have that $W({\varphi}_{k'}) \cap W({\varphi}_k) = \emptyset$.
Combining the above results and observations, we have shown:
\begin{thm}\label{thm-horseshoe} For $0<{\epsilon}<{\epsilon}_0$ sufficiently small, let $V_0$ and $\ell({\epsilon})>0$ be defined as above. For each $k \geq \ell({\epsilon})$, define ${\varphi}_k={\widehat \Psi}^k\circ\phi_1^+ \in \widehat{\cG}_{\epsilon}^*$, and let $W({\varphi}_k)$ be defined by \eqref{eq-cantorset}. Then $W({\varphi}_k)$ is a Cantor set which is invariant under the action of ${\varphi}_k$, and the action admits a dynamical coding as a full shift space. Thus, the action of the $\psg$ $\widehat{\cG}_{\epsilon}^*$ on ${\bf R}_{0}$ contains an infinite number of disjoint horseshoe dynamical systems, each with a dense set of periodic orbits. \end{thm}
\section{Topological entropy for ${\epsilon} > 0$}\label{sec-entropy}
The pseudogroup $ \widehat{\cG}_{\epsilon}$ acting on ${\bf R}_{0}$ was introduced in Definition~\ref{def-pseudogroup1}, with generators obtained from the return maps ${\widehat \Psi}$ and $\widehat{\Phi^\e}$. Then in Section~\ref{sec-horseshoe}, for ${\epsilon} > 0$ sufficiently small and after imposing a sequence of assumptions on the geometry of the construction of the flow $\Phi_t^{\epsilon}$, it was shown that the action of $\widehat{\cG}_{\epsilon}$ contains disjoint families of invariant Cantor sets. In this section, we impose one further condition on the construction of the flow $\Phi_t^{\epsilon}$ which suffices to imply that each of these ``horseshoe dynamical systems'' is realized by the flow $\Phi_t^{\epsilon}$. It follows immediately from this that the flow $\Phi_t^{\epsilon}$ has positive topological entropy and has infinite families of periodic orbits.
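As a point of reference, the entropy contributed by a single horseshoe can be made explicit: the full shift on two symbols has topological entropy $\log 2$, the exponential growth rate of the number of admissible words. The following minimal numerical check of this standard fact is illustrative only and not specific to the flows considered here.

```python
import math

# Entropy of the full 2-shift as a word-growth rate: every binary word
# is admissible, so N(n) = 2**n and log N(n) / n = log 2 for all n.

def word_count(n):
    """Number of admissible words of length n for the full 2-shift."""
    return 2 ** n

rates = [math.log(word_count(n)) / n for n in (1, 10, 50)]
assert all(abs(r - math.log(2)) < 1e-12 for r in rates)
```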
Recall that the map ${\varphi}_k={\widehat \Psi}^k\circ\phi_1^+$ was introduced in Proposition~\ref{prop-iteration0}, and realizes a part of the dynamical properties of the action of the $\psg$ $\widehat{\cG}_{\epsilon}^*$ on ${\bf R}_{0}$. However, the map ${\varphi}_k$
need not be realized by the return map of the flow $\Phi_t^{\epsilon}$. Recall that $\phi_1^+ = \widehat{\Phi^\e} | U_{\phi_1^+}$, where $\phi_1^+$ is defined at $\xi \in U_{\phi_1^+}$, with $\eta = \phi_1^+(\xi)$, if there is a ${\mathcal K}_{\epsilon}$-arc $[\xi, \eta]_{{\mathcal K}_{\epsilon}}$
which contains a single transition point $x \in E^{{\epsilon}}_1$. On the other hand, we have that $\zeta = {\widehat \Psi}^k(\eta)$ if there exists a ${\mathcal W}$-arc $[\eta, \zeta]_{{\mathcal W}}$ in ${\mathbb W}$, which is independent of the return map $\widehat{\Phi^\e}$.
The strategy in this section is to apply Corollary~\ref{cor-positivemain} to conclude that there is also a ${\mathcal K}_{\epsilon}$-arc $[\eta, \zeta]_{{\mathcal K}_{\epsilon}}$ in ${\mathbb K}_{\epsilon}$ between $\eta$ and $\zeta$. When applied to points $\xi \in W({\varphi}_k)$, this will imply that the map ${\varphi}_k \colon W({\varphi}_k) \to W({\varphi}_k)$ represents a subsystem of the return map $\widehat{\Phi^\e}$, and hence gives information about the dynamics of the flow $\Phi_t^{\epsilon}$. The difficulty is that Corollary~\ref{cor-positivemain} assumes that $\eta \in U_{r_{\epsilon}}$ where $U_{r_{\epsilon}}$ is defined by \eqref{eq-domainre}. We first obtain conditions on the map $\sigma_1^{\epsilon}$ which will imply that this requirement is satisfied for $\xi \in W({\varphi}_k)$, so that we can then use Theorem~\ref{thm-horseshoe} to obtain a proof of Theorem~\ref{thm-main2}.
\subsection{Wilson dynamics}\label{subsec-wilsonsteps} The orbits of the map ${\widehat \Psi}$ on ${\bf R}_{0}$ are simple to describe, in that for any point $\xi \in {\bf R}_{0}$ with $r(\xi) \ne 2$, the orbit is finite in both forward and backward directions. However, as the value of $r(\xi)$ tends to $r=2$, the lengths of these finite orbits increase, as condition \eqref{eq-generic1} implies that the vertical distance between the iterations ${\widehat \Psi}^{\ell}(\xi)$ becomes arbitrarily small for $r({\widehat \Psi}^{\ell}(\xi))$ near $2$ and vertical coordinate $z({\widehat \Psi}^{\ell}(\xi))$ near either $z = \pm 1$. We give an approximation of the distance between the points ${\widehat \Psi}^{\ell}(\xi)$ and ${\widehat \Psi}^{\ell+1}(\xi)$, following the same approach as in Chapter 17 of \cite{HR2016}. These estimates are then used to impose restrictions on the insertion maps $\sigma_1^{\epsilon}$ so that Corollary~\ref{cor-positivemain} can be applied.
Recall that the functions $f$ and $g$ were chosen in Section~\ref{subsec-wilson}; they are constant in the coordinate $\theta$, with \begin{equation}\label{eq-wilson2} {\mathcal W} =g(r, \theta, z) \frac{\partial}{\partial z} + f(r, \theta, z) \frac{\partial}{\partial \theta} ~ . \end{equation}
Hypothesis~\ref{hyp-genericW} and condition \eqref{eq-generic1} imply that there exist constants $A_g,B_g,C_g$ such that the quadratic form $Q_g(u,v) = A_g \cdot u^2 + 2B_g \cdot uv + C_g \cdot v^2$ defined by the Hessian of $g$ at $\omega_1$ is positive definite. Set
$$Q_0(r,z) = \left( d_{{\bf R}_{0}}((r -2 ) , (z+1))\right)^2 = (r-2)^2 + (z+1)^2$$
then it follows that there exists $D_g > 0$ such that
\begin{equation}\label{eq-quadraticest1}
| g(r, \theta, z) - Q_g(r-2, z+1) | ~ \leq ~ D_g \cdot (|r-2|^3 + |z+1|^3) \quad {\rm for} ~ Q_0(r,z) \leq {\epsilon}_0^2 \end{equation} where ${\epsilon}_0$ is the constant defined in \eqref{eq-generic1}. The condition \eqref{eq-quadraticest1} implies that for $(r,z)$ sufficiently close to $(2,-1)$, the error term on the right-hand side can be made arbitrarily small relative to the distance squared from the special point $\omega_1 = (2,-1)$. We also observe that \eqref{eq-quadraticest1} implies that there exist constants $0 < \lambda_1 \leq \lambda_2$ such that
\begin{equation}\label{eq-quadraticest2} \lambda_1 \cdot Q_0(r,z) \leq g(r, \theta, z) \leq \lambda_2 \cdot Q_0(r,z) \quad {\rm for} ~ Q_0(r,z) \leq {\epsilon}_0^2 \ . \end{equation} Next, consider the action of the maps ${\widehat \Psi}^{\ell}$ for $\ell > 0$.
Let $\xi \in {\bf R}_{0}$ with $2 \leq r(\xi) \leq 2 + {\epsilon}_0$ and $-7/4 \leq z(\xi) \leq -1/4$, such that ${\widehat \Psi}(\xi)$ is defined and $z({\widehat \Psi}(\xi))< 0$.
Let $T(\xi) > 0$ be defined by ${\widehat \Psi}(\xi) = \Psi_{T(\xi) }(\xi)$. Then the $z$-coordinate of ${\widehat \Psi}(\xi)$ is given by
\begin{equation}\label{eq-coordinates}
z({\widehat \Psi}(\xi)) - z(\xi) ~ = ~ \int_0^{T(\xi) } ~ g(\Psi_s(\xi)) ~ ds ~ \geq ~ 0 \ . \end{equation} If $\xi \ne \omega_1$ then $g(\Psi_s(\xi))$ is positive along the orbit segment for $0 \leq s \leq T(\xi)$, hence $ z({\widehat \Psi}(\xi)) - z(\xi) > 0$. Moreover, combining the estimates \eqref{eq-quadraticest2} and \eqref{eq-coordinates} and the estimate $T(\xi) \geq 4\pi$ on the return time for the flow $\Psi_t$ to ${\bf R}_{0}$ for $r(\xi) \geq 2$, we obtain: \begin{lemma}\label{lem-stepesitimates} For $\delta > 0$, suppose that $\xi = (r,z) \in {\bf R}_{0}$ satisfies $\delta < d_{{\bf R}_{0}}((r -2 ) , (z+1)) <{\epsilon}_0$. Then \begin{equation}\label{eq-stepesitimates}
z({\widehat \Psi}(\xi)) - z(\xi) ~ \geq ~ 4 \pi \cdot \lambda_1 \delta^2 \ . \end{equation} \end{lemma}
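A sketch of how the estimate \eqref{eq-stepesitimates} arises from \eqref{eq-coordinates} and \eqref{eq-quadraticest2}, under the simplifying assumption that $Q_0(\Psi_s(\xi)) \geq \delta^2$ persists along the orbit segment (for instance when the distance $\delta$ is realized by the radius coordinate, which is invariant under the Wilson flow):

```latex
% Sketch: combining (eq-coordinates), (eq-quadraticest2) and the
% return-time bound T(xi) >= 4*pi, assuming Q_0 >= delta^2 along the
% orbit segment.
\begin{align*}
z({\widehat \Psi}(\xi)) - z(\xi)
  &= \int_0^{T(\xi)} g(\Psi_s(\xi))\, ds
      && \text{by \eqref{eq-coordinates}} \\
  &\geq \lambda_1 \int_0^{T(\xi)} Q_0(\Psi_s(\xi))\, ds
      && \text{by \eqref{eq-quadraticest2}} \\
  &\geq \lambda_1 \, \delta^2 \, T(\xi)
   ~\geq~ 4 \pi \cdot \lambda_1 \, \delta^2 \ .
\end{align*}
```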
\subsection{Admissible deformations}\label{subsec-deformations}
We assume that for $0 < {\epsilon} < {\epsilon}_0$ the conditions of Section~\ref{sec-horseshoe} are satisfied, so that the hypotheses of Proposition~\ref{prop-iteration0} and Theorem~\ref{thm-horseshoe} are satisfied.
Recall that for the insertion map $\sigma_1^{\epsilon}$ the points $\{v_1^{\epsilon} , v_2^{\epsilon} \}\subset {\bf R}_{0}$ are defined by \eqref{eq-boundarypoints} and satisfy \begin{equation}\label{eq-offset0} r(v_1^{\epsilon}) = r(v_2^{\epsilon}) = 2 \quad , \quad -1 < z(v_1^{\epsilon}) < z(v_2^{\epsilon}) < 0 \ . \end{equation}
Define \begin{equation}\label{eq-offset1} \delta(\sigma_1^{\epsilon}) = z({\widehat \Psi}^{-1}(v_1^{\epsilon})) + 1 > 0\ . \end{equation}
\begin{hyp}\label{hyp-offset} Assume that ${\epsilon} > 0$ is sufficiently small so that \begin{equation}\label{eq-offset2}
z(v_2^{\epsilon}) - z(v_1^{\epsilon}) < 4 \pi \cdot \lambda_1 \cdot \delta(\sigma_1^{\epsilon})^2 \ . \end{equation} \end{hyp} Hypothesis~\ref{hyp-SRI} implies that the boundary curve $\{\widetilde{r}=r_{\epsilon}\}\subset {\bf R}_{0}$ of the region $V_0$ is parabolic, as illustrated in Figure~\ref{fig:Gprime}, which implies that for a fixed $v_1^{\epsilon} \in {\bf R}_{0}$ and for ${\epsilon} >0$ sufficiently small, the condition \eqref{eq-offset2} will be satisfied.
Let $V_0$ and $\ell({\epsilon})>0$ be defined as in Theorem~\ref{thm-horseshoe}.
Fix $k \geq \ell({\epsilon})$, and define ${\varphi}_k={\widehat \Psi}^k\circ\phi_1^+$. Set $U_k={\varphi}_k(V_0)\cap V_0$ as defined in Proposition~\ref{prop-iteration0}. Then let $W({\varphi}_k)$ be defined by \eqref{eq-cantorset}. By Theorem~\ref{thm-horseshoe} we have that ${\varphi}_k \colon W({\varphi}_k) \to W({\varphi}_k)$.
\begin{prop}\label{prop-realized} Assume that Hypothesis~\ref{hyp-offset} holds for the insertion map $\sigma_1^{\epsilon}$. Then for each $\xi \in W({\varphi}_k)$ with $\eta = {\varphi}_k(\xi)$, there exists a ${\mathcal K}_{\epsilon}$-arc $[\xi, \eta]_{{\mathcal K}_{\epsilon}}$ in ${\mathbb K}_{\epsilon}$. Thus, there exists $\ell_{\xi} \geq 1$ such that $\eta = \widehat{\Phi^\e}^{\ell_{\xi}}(\xi)$. \end{prop} \proof Let $(r,z) \in {\bf R}_{0}$ satisfy $Q_0(r,z) \leq {\epsilon}_0^2$ and $z \geq z({\widehat \Psi}^{-1}(v_1^{\epsilon}))$. Then $Q_0(r,z) \geq Q_0(2,z) \geq \delta(\sigma_1^{\epsilon})^2$, so that by \eqref{eq-stepesitimates} we have \begin{equation} z({\widehat \Psi}(r,z)) -z ~ \geq ~ 4 \pi \cdot \lambda_1 \cdot \delta(\sigma_1^{\epsilon})^2 > z(v_2^{\epsilon}) - z(v_1^{\epsilon}) \ . \end{equation} Now let $\eta \in V_0$, for $V_0$ defined by \eqref{eq-funddomain}. Then $z(v_1^{\epsilon}) \leq z(\eta) \leq z(v_2^{\epsilon})$ by the convexity of the region $V_0$. It then follows from \eqref{eq-stepesitimates} applied to $(r,z) = \eta$ that \begin{equation} z({\widehat \Psi}^{-1}(\eta)) = z + \left(z({\widehat \Psi}^{-1}(\eta)) -z \right) \leq z - 4 \pi \cdot \lambda_1 \cdot \delta(\sigma_1^{\epsilon})^2 < z(v_2^{\epsilon}) - \left( z(v_2^{\epsilon}) - z(v_1^{\epsilon}) \right) = z(v_1^{\epsilon}) \ . \end{equation} That is, the image ${\widehat \Psi}^{-1}(V_0)$ lies below the line $\{z= z(v_1^{\epsilon})\}$, and hence lies outside the region bounded by the curve $\widetilde{r} = r_{\epsilon}$ whose lower edge lies on this line by the choice of $v_1^{\epsilon}$ in \eqref{eq-boundarypoints} and the convexity of the region $V_0$. That is, for all $\eta \in V_0$ we have $\widetilde{r}({\widehat \Psi}^{-1}(\eta))> r_{\epsilon}$.
Let $\xi \in W({\varphi}_k)$ and set $\eta = {\varphi}_k(\xi) = {\widehat \Psi}^k\circ\phi_1^+(\xi) \in W({\varphi}_k) \subset V_0$.
Then for the point $\zeta = {\widehat \Psi}^{-1}(\eta)$ we have $\widetilde{r}(\zeta) > r_{\epsilon}$ by the above estimates.
Following the ${\mathcal K}_{\epsilon}$-orbit of $\zeta$ backwards and applying Proposition~\ref{prop-positivemain} and Corollary~\ref{cor-positivemain} we obtain that $\xi$ and $\zeta$ are in the same ${\mathcal K}_{\epsilon}$-orbit.
Now consider the ${\mathcal W}$-orbit segment $[\zeta, \eta]_{{\mathcal W}}$.
If $[\zeta, \eta]_{{\mathcal W}}$ does not contain any transition
points, then $\eta$ is in the ${\mathcal K}_{\epsilon}$-orbit of $\zeta$ and thus in the ${\mathcal K}_{\epsilon}$-orbit
of $\xi$.
If $[\zeta, \eta]_{{\mathcal W}}$ contains transition points, let $x_1$ be the first
transition point. Then $x_1$ is a secondary entry point in $E^{{\epsilon}}_1$ with $r(x_1)> r_{\epsilon}$ and
Corollary~\ref{cor-positivemain} implies that its ${\mathcal K}_{\epsilon}$-orbit contains the facing point
$y_1\in S^{{\epsilon}}_1$. The ${\mathcal K}_{\epsilon}$-orbit of $y_1$ continues to the first intersection with
${\bf R}_{0}$ which is the point $\eta$. Hence $\eta$ is in the ${\mathcal K}_{\epsilon}$-orbit of $\xi$. \endproof
We next deduce an important consequence of Proposition~\ref{prop-realized}. Observe that for every $\xi \in W({\varphi}_k)$ we have $z(\xi) < 0$ and thus the flows $\Psi_t$ and $\Phi_t^{\epsilon}$ intersect the set $W({\varphi}_k) \subset {\bf R}_{0}$ transversally. Let $\ell_{\xi} \geq 1$ be the integer defined in Proposition~\ref{prop-realized} so that ${\varphi}_k(\xi) = \widehat{\Phi^\e}^{\ell_{\xi}}(\xi)$. Then both maps are continuous at $\xi$ and there is an open neighborhood $U_{\xi} \subset V_0$ for which the identity ${\varphi}_k(\xi') = \widehat{\Phi^\e}^{\ell_{\xi}}(\xi')$ holds for all $\xi' \in U_{\xi}$. The set $W({\varphi}_k)$ is compact, so there exists a finite collection of such open sets which cover $W({\varphi}_k)$. Hence, there exists $N_k$ so that for all $\xi \in W({\varphi}_k)$ there exists $1 \leq \ell_{\xi} \leq N_k$ for which ${\varphi}_k(\xi) = \widehat{\Phi^\e}^{\ell_{\xi}}(\xi)$. As a consequence, we have:
\begin{cor}\label{cor-realized} Assume that Hypothesis~\ref{hyp-offset} holds for the insertion map $\sigma_1^{\epsilon}$. Then there exists $T_k > 0$ such that for each $\xi \in W({\varphi}_k)$ there exists $0 < t_{\xi} \leq T_k$ such that ${\varphi}_k(\xi) = \Phi_{t_{\xi}}^{\epsilon}(\xi)$. \end{cor}
\subsection{Topological entropy}\label{subsec-entropy}
We define the entropy of a flow ${\varphi}_t$ on a compact metric space $(X, d_X)$ using a variation of the Bowen formulation of topological entropy \cite{Bowen1971,Walters1982}. The definition we adopt is symmetric in the role of the time variable $t$. For $T > 0$, define a metric on $X$ by \begin{equation} d_X^T(x,y) = \max \ \left\{ d_X({\varphi}_t(x),{\varphi}_t(y)) \mid -T \leq t \leq T \right\} \ , \quad {\rm for} ~ x,y \in X \ . \end{equation}
Two points $x,y\in X$ are said to be \emph{$({\varphi}_t , T, \delta)$-separated} if $d_X^T(x,y)>\delta$. A set $E \subset X$ is \emph{$({\varphi}_t , T, \delta)$-separated} if all pairs of distinct points in $E$ are $({\varphi}_t , T, \delta)$-separated. Let $s({\varphi}_t , T, \delta)$ be the maximal cardinality of a $({\varphi}_t , T, \delta)$-separated set in $X$. Then the topological entropy is defined by \begin{equation}\label{eq-separated} h_{top}({\varphi}_t)= \frac{1}{2} \cdot \lim_{\delta\to 0} \left\{ \limsup_{T\to\infty}\frac{1}{T}\log(s({\varphi}_t , T, \delta)) \right\} . \end{equation} It is a standard fact that for a compact space $X$, the entropy $h_{top}({\varphi}_t)$ is independent of the choice of the metric $d_X$ on $X$.
Given a ${\varphi}_t$-invariant subset $K \subset X$, we can define the restricted topological entropy $h_{top}({\varphi}_t, K)$ by the same formula \eqref{eq-separated}, where we now require that the $({\varphi}_t , T, \delta)$-separated sets in the definition must be subsets of $K$. It follows immediately that we have the estimate \begin{equation} \label{eq-Kseparated} h_{top}({\varphi}_t) \geq h_{top}({\varphi}_t, K) \ . \end{equation}
Let ${\epsilon} > 0$ be given, and assume that $\Phi_t^{\epsilon}$ is a Kuperberg flow as constructed above which satisfies the \emph{generic hypotheses} of Section~\ref{sec-generic}, the \emph{geometric hypotheses} of Section~\ref{sec-horseshoe}, and the Hypothesis~\ref{hyp-offset} above. Then we have:
\begin{thm}\label{thm-entropypositive} The topological entropy $h_{top}(\Phi_t^{\epsilon}) > 0$. \end{thm} \proof Let $V_0$ and $\ell({\epsilon})>0$ be defined as in Theorem~\ref{thm-horseshoe}. Choose $k \geq \ell({\epsilon})$, and define ${\varphi}_k={\widehat \Psi}^k\circ\phi_1^+ \in \widehat{\cG}_{\epsilon}^*$. Then let $W({\varphi}_k) \subset {\bf R}_{0}$ be the ${\varphi}_k$-invariant Cantor set defined by \eqref{eq-cantorset}. By Proposition~\ref{prop-realized}, the set $W({\varphi}_k) $ is invariant under the return map $\widehat{\Phi^\e}$ for the flow $\Phi_t^{\epsilon}$. Let ${\widehat W}({\varphi}_k)$ denote the flow saturation of $W({\varphi}_k)$ which is then a compact invariant set for $\Phi_t^{\epsilon}$. By \eqref{eq-Kseparated} it will suffice to show that $h_{top}(\Phi_t^{\epsilon}, {\widehat W}({\varphi}_k)) > 0$.
Note that $W({\varphi}_k) \subset {\widehat W}({\varphi}_k)$ is a transverse section for the flow $\Phi_t^{\epsilon}$ restricted to ${\widehat W}({\varphi}_k)$, and by Corollary~\ref{cor-realized} the flow has bounded return times to the section $W({\varphi}_k)$.
\begin{lemma}\label{lem-returnmap} The map ${\varphi}_k \colon W({\varphi}_k) \to W({\varphi}_k)$ is the first return map for the flow $\Phi_t^{\epsilon}$ restricted to ${\widehat W}({\varphi}_k)$. \end{lemma} \proof Note that $\widehat{\Phi^\e}$ is the first return map of the flow $\Phi_t^{\epsilon}$ to the section ${\bf R}_{0}$. Given $\xi \in W({\varphi}_k)$, Proposition~\ref{prop-realized} states that there is some least $\ell_{\xi}$ such that ${\varphi}_k(\xi) = \widehat{\Phi^\e}^{\ell_{\xi}}(\xi) \in W({\varphi}_k)$. Thus, it suffices to show the ${\mathcal K}_{\epsilon}$-orbit segment $[\xi,{\varphi}_k(\xi)]_{{\mathcal K}_{\epsilon}}$ is disjoint from ${\widehat W}({\varphi}_k)\subset U_k$ except at the endpoints. In fact, we prove that for any $\xi\in U_k$ such that ${\varphi}_k(\xi)\in U_k$, the ${\mathcal K}_{\epsilon}$-orbit segment $[\xi,{\varphi}_k(\xi)]_{{\mathcal K}_{\epsilon}}$ does not intersect $U_k$.
Since $\xi \in W({\varphi}_k)\subset U_k\subset U_{\phi_1^+}$, we have that $\widehat{\Phi^\e}(\xi)=\phi_1^+(\xi)=\eta$, and thus $r(\eta) \leq r_{\epsilon}$, since $U_k\subset V_0$ and the points in $V_0$ have $\widetilde{r}$-coordinate less than or equal to $r_{\epsilon}$. Consider the ${\mathcal K}_{\epsilon}$-orbit segment $[\eta,{\widehat \Psi}(\eta)]_{{\mathcal K}_{\epsilon}}$. If $[\eta,{\widehat \Psi}(\eta)]_{{\mathcal K}_{\epsilon}}$ does not contain transition points, then it does not intersect ${\bf R}_{0}$ in its interior. If not, let $x_1$ be the first transition point, which is a secondary entry point in $E^{{\epsilon}}_1$. Since $z(\eta)<z({\widehat \Psi}^{k-1}(\eta))$, by the arguments in the proof of Proposition~\ref{prop-realized} we have $$ z(\eta)<z({\widehat \Psi}^{k-1}(\eta))<z(v_1^{\epsilon}), $$ thus $x_1$ is a secondary entry point outside the region $U'\subset E^{{\epsilon}}_1$. In particular, $r(x_1)>r_{\epsilon}$. Let $y_1\in S^{{\epsilon}}_1$ be the secondary exit point such that $x_1\equiv y_1$; then by construction $y_1$ is the last transition point in $[\eta,{\widehat \Psi}(\eta)]_{{\mathcal K}_{\epsilon}}$. By Proposition~\ref{prop-positivemain} and Hypothesis~\ref{hyp-monotone}, all the ${\mathcal W}$-arcs in the ${\mathcal K}_{\epsilon}$-orbit segment $[\eta,{\widehat \Psi}(\eta)]_{{\mathcal K}_{\epsilon}}$ have radius greater than $r_{\epsilon}$. Hence any point in the intersection of the interior of the orbit segment $[\eta,{\widehat \Psi}(\eta)]_{{\mathcal K}_{\epsilon}}$ with ${\bf R}_{0}$ has radius greater than $r_{\epsilon}$, and thus does not belong to $U_k$; in particular, it does not belong to ${\widehat W}({\varphi}_k)$.
Repeating the argument, consider the ${\mathcal K}_{\epsilon}$-orbit segment $[{\widehat \Psi}^\ell(\eta),{\widehat \Psi}^{\ell+1}(\eta)]_{{\mathcal K}_{\epsilon}}$ for any $1\leq \ell \leq k-1$. If $[{\widehat \Psi}^{\ell}(\eta),{\widehat \Psi}^{\ell+1}(\eta)]_{{\mathcal K}_{\epsilon}}$ does not contain transition points, then it does not intersect ${\bf R}_{0}$ in its interior. If not, let $x_1$ be the first transition point, which is a secondary entry point in $E^{{\epsilon}}_1$. Since $z({\widehat \Psi}^{\ell}(\eta))\leq z({\widehat \Psi}^{k-1}(\eta))$, we have $$ z({\widehat \Psi}^{\ell}(\eta))\leq z({\widehat \Psi}^{k-1}(\eta))<z(v_1^{\epsilon}), $$ thus $x_1$ is a secondary entry point outside the region $U'\subset E^{{\epsilon}}_1$. In particular, $r(x_1)>r_{\epsilon}$. Let $y_1\in S^{{\epsilon}}_1$ be the secondary exit point such that $x_1\equiv y_1$; then by construction $y_1$ is the last transition point in $[{\widehat \Psi}^{\ell}(\eta),{\widehat \Psi}^{\ell+1}(\eta)]_{{\mathcal K}_{\epsilon}}$. Then all the ${\mathcal W}$-arcs in the ${\mathcal K}_{\epsilon}$-orbit segment $[{\widehat \Psi}^{\ell}(\eta),{\widehat \Psi}^{\ell+1}(\eta)]_{{\mathcal K}_{\epsilon}}$ have radius greater than $r_{\epsilon}$. Hence any point in the intersection of the interior of the orbit segment $[{\widehat \Psi}^{\ell}(\eta),{\widehat \Psi}^{\ell+1}(\eta)]_{{\mathcal K}_{\epsilon}}$ with ${\bf R}_{0}$ has radius greater than $r_{\epsilon}$, and thus does not belong to $U_k$; in particular, it does not belong to ${\widehat W}({\varphi}_k)$. \endproof
We claim that the restricted map ${\varphi}_k \colon W({\varphi}_k) \to W({\varphi}_k) $ has positive entropy, and it then follows by standard techniques that $h_{top}(\Phi_t^{\epsilon}, {\widehat W}({\varphi}_k)) > 0$.
In the following, we use Proposition~\ref{prop-iteration0} and the labeling system in Section~\ref{subsec-labelhorseshoemap}. In particular, let
$W_0, W_1 \subset {\bf R}_{0}$ be the disjoint sets introduced in Lemma~\ref{lem-iteration1}. Let $\delta>0$ be such that the $d_{{\bf R}_{0}}$-distance between the sets $W_0$ and $W_1$ is greater than $\delta$.
We obtain a lower bound estimate on the maximum number $s({\varphi}_k , n, \delta)$ of points in an $({\varphi}_k , n, \delta)$-separated subset of $W({\varphi}_k)$ as a function of $n$.
For a sequence $I\in \{0,1\}^n$, recall that $W_I = W_{i_1,i_2,\ldots,i_n}$ is a closed and open cylinder set in the Cantor set $W({\varphi}_k)$.
For any pair of distinct sequences $I,J\in \{0,1\}^n$, choose points
\begin{equation} \xi\in W_I \cap W({\varphi}_k) \quad , \quad \eta \in W_J \cap W({\varphi}_k) \ . \end{equation} Let $1 \leq m\leq n$ be the largest index such that $i_m\neq j_m$.
Then \begin{eqnarray*}
{\varphi}_k^{-(n-m)}(\xi) & \in & W_{i_1,i_2,\ldots, i_m} \, \subset\, W_{i_m}\\
{\varphi}_k^{-(n-m)}(\eta) & \in & W_{j_1,j_2,\ldots, j_m} \, \subset\, W_{j_m}.
\end{eqnarray*}
Fix $n \geq 1$ and let $E_n \subset U_k \subset {\bf R}_{0}$ be a collection of $2^n$ points obtained by choosing one point from each of the sets $W_I \cap W({\varphi}_k) = W_{i_1,i_2,\ldots,i_n} \cap W({\varphi}_k)$ where $I \in \{0,1\}^n$. By the above, for distinct $I, J \in \{0,1\}^n$ the corresponding points satisfy $d_{{\bf R}_{0}}\left({\varphi}_k^{-(n-m)}(\xi), {\varphi}_k^{-(n-m)}(\eta)\right) > \delta$, since $i_m \neq j_m$ places the images in the disjoint sets $W_{i_m}$ and $W_{j_m}$. Then $E_n$ is a $({\varphi}_k, n, \delta)$-separated set, so $s({\varphi}_k , n , \delta) \geq 2^n$, and thus $h_{top}({\varphi}_k, W({\varphi}_k) ) \geq \ln(2)> 0$. \endproof
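The combinatorial core of this counting argument — one point per length-$n$ cylinder yields a separated set of cardinality $2^n$, hence entropy at least $\ln 2$ — can be checked directly for the full one-sided shift on two symbols, which models the itinerary coding of $W({\varphi}_k)$. The following Python sketch is purely illustrative and is not part of the construction; the metric used is the standard shift metric $d(x,y) = 2^{-i}$, where $i$ is the first index of disagreement.

```python
from itertools import product

def shift_metric(x, y):
    # d(x, y) = 2^(-i), where i is the first index of disagreement.
    for i, (a, b) in enumerate(zip(x, y)):
        if a != b:
            return 2.0 ** (-i)
    return 0.0

def d_n(x, y, n):
    # Bowen metric: max of d(shift^t x, shift^t y) over 0 <= t < n.
    return max(shift_metric(x[t:], y[t:]) for t in range(n))

# One point per length-n cylinder: each {0,1}-pattern repeated periodically.
n = 6
E = [p * 4 for p in product((0, 1), repeat=n)]

# Distinct patterns disagree at some t < n; shifting by t puts the
# disagreement at index 0, so d_n >= 1 > 1/2 and E is (n, 1/2)-separated.
pairs_ok = all(d_n(x, y, n) > 0.5
               for i, x in enumerate(E) for y in E[i + 1:])
print(len(E), pairs_ok)   # 64 True
```

Since $s(n, 1/2) \geq 2^n$ for every $n$, the entropy lower bound $\frac{1}{n}\log s(n, 1/2) \geq \log 2$ follows, mirroring the proof above.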
We point out another consequence of Theorem~\ref{thm-horseshoe} and Proposition~\ref{prop-realized}.
\begin{cor}\label{cor-periodic}
The restriction of the flow $\Phi_t^{\epsilon}$ to ${\widehat W}({\varphi}_k)$ has a countably dense set of periodic orbits.
\end{cor}
\proof
The restriction of the map ${\varphi}_k$ to ${\widehat W}({\varphi}_k)$ has horseshoe dynamics by Theorem~\ref{thm-horseshoe}, and hence has a dense set of periodic orbits. Each of these orbits for ${\varphi}_k$ is also a periodic orbit for the return map $\widehat{\Phi^\e}$ by Proposition~\ref{prop-realized}.
\endproof
Note that the conclusions of Theorem~\ref{thm-horseshoe} apply to the map ${\varphi}_k$ for any choice of $k \geq \ell({\epsilon})$, and for each such $k$ the set $W({\varphi}_k) \subset U_k$ is as illustrated in Figure~\ref{fig:positive1}. See also Remark~\ref{rmk-curves}. In particular, the set
$W({\varphi}_k)$ is contained in the region between the curves ${\gamma}_0(k), \kappa_0(k) \subset {\bf R}_{0}$.
As remarked in Section~\ref{subsec-horseshoemap}, these curves are asymptotic to the trace ${\mathcal R} \cap {\bf R}_{0}$ of the Reeb cylinder as $k$ increases, so the Cantor sets $W({\varphi}_k)$ also limit to subsets of ${\mathcal R}$.
Now assume that the flow $\Phi_t^{\epsilon}$ also satisfies the offset Hypothesis~\ref{hyp-offset},
and let ${\mathcal P}_{\epsilon}$ denote the union of all periodic orbits of $\Phi_t^{\epsilon}$, and let $\overline{\cP}_{{\epsilon}}$ denote the closure of ${\mathcal P}_{{\epsilon}}$ in ${\mathbb K}_{\epsilon}$. It then follows from Corollary~\ref{cor-periodic} that the intersection $\overline{\cP}_{\epsilon} \cap{\mathcal R}$ is non-empty. It would be interesting to describe the closure $\overline{\cP}_{{\epsilon}}$, or at least its trace $\overline{\cP}_{\epsilon} \cap{\mathcal R}$ on ${\mathcal R}$, as an analog of the \emph{bad set} for compact flows discussed in \cite{EMS1977,EpsteinVogt1978}, though that seems a rather difficult task.
\subsection{Smooth admissible deformations}\label{subsec-admissible}
Suppose that we are given a standard Kuperberg flow $\Phi_t=\Phi_t^0$ on ${\mathbb K}$ constructed starting with a Wilson flow
$\Psi_t$ on the Wilson Plug ${\mathbb W}$ which satisfies Hypothesis~\ref{hyp-genericW}, and that the construction of the flow $\Phi_t$ on ${\mathbb K}$ satisfies the conditions (K1) to (K8) in Section~\ref{subsec-kuperberg}, and the \emph{Strong Radius Inequality} in Hypotheses~\ref{hyp-SRI} for ${\epsilon} = 0$.
We construct a family of flows $\Phi_t^{\epsilon}$ for $0 < {\epsilon} < {\epsilon}_0$ sufficiently small, which give a smooth deformation of $\Phi_t$ such that for each ${\epsilon} > 0$, the flow $\Phi_t^{\epsilon}$ satisfies the hypotheses of Theorem~\ref{thm-entropypositive}.
The construction of the flows $\Phi_t^{\epsilon}$ uses a two-step process. In the first step, we introduce a small vertical offset to the insertion map $\sigma_1^{\epsilon} \colon D_1 \to {\mathbb W}$. In the second step, we modify the insertion by increasing the radius at the vertex of the parabola, as in Figure~\ref{fig:modifiedradius}(C).
Recall that $(2, \overline{\theta_1}, -1) \in {\mathcal R}$ is the special point on the Reeb cylinder defined in Hypotheses~\ref{hyp-SRI}, for the case ${\epsilon}=0$. Let $0< {\epsilon}' < {\epsilon}_0$ and choose a point $v_1({\epsilon}') \in {\mathcal R}$ which satisfies $z(v_1({\epsilon}')) = -1 + {\epsilon}'$ and $\theta(v_1({\epsilon}')) = \overline{\theta_1}$. That is, $v_1({\epsilon}')$ is a perturbation of the special point $(2, \overline{\theta_1}, -1) $ in the vertical direction along ${\mathcal R}$.
Let $\widetilde{\sigma_1^{{\epsilon}'}} \colon D_1 \to {\mathbb W}$ denote the vertical translate of $\sigma_1$ so that the vertex of the image curve
$\displaystyle \widetilde{\sigma_1^{{\epsilon}'}}\left( \{r=2\} \cap L_1^-\right)$ is the point $v_1({\epsilon}')$.
Next, slide the map $\displaystyle \widetilde{\sigma_1^{{\epsilon}'}}$ along the parabolic curve $\displaystyle \widetilde{\sigma_1^{{\epsilon}'}}\left( \{r=2\} \cap L_1^-\right)$ so that the vertex now lies on the cylinder $\{r=2+{\epsilon}\}$, where ${\epsilon}$ is sufficiently small so that the offset Hypothesis~\ref{hyp-offset} is satisfied for the given Wilson flow $\Psi_t$ on ${\mathbb W}$. This yields the embedding $\displaystyle \sigma_1^{{\epsilon}} \colon D_1 \to {\mathbb W}$, for which $v_1({\epsilon}')$ lies on the ${\mathcal W}$-flow orbit of the lower intersection point defined in \eqref{eq-boundarypoints}. Intuitively, the value of ${\epsilon}$ is proportional to $({\epsilon}')^2$, since the curve $\displaystyle \widetilde{\sigma_1^{{\epsilon}'}}\left( \{r=2\} \cap L_1^-\right)$ illustrated in Figure~\ref{fig:modifiedradius}(C) is quadratic. This yields our family of insertion maps $\sigma_1^{\epsilon}$.
The smooth family of second insertion maps $\displaystyle \sigma_2^{\epsilon} \colon D_2 \to {\mathbb W}$ can then be defined similarly. However, we now point out that in the proof of Theorem~\ref{thm-entropypositive}, and its preparatory results, the role of the second insertion only arises in one manner. We require that the entry/exit condition for orbits of ${\mathcal K}_{\epsilon}$ entering a face of the second insertion be satisfied for points $\xi \in E^{{\epsilon}}_2$ when $r(\xi) > r_{\epsilon}$. In particular, this condition will be satisfied if the insertion $\sigma_2^{\epsilon} = \sigma_2$ is the given map for the construction of the flow $\Phi_t^0$, which satisfies the original radius inequality, and thus as noted previously, has corresponding value of $r_{\epsilon} = 2$. Thus, in order to obtain the claim of Theorem~\ref{thm-entropypositive}, it is not even necessary to modify the upper insertion.
Thus, we obtain a smooth family of deformations of $\Phi_t$ where each map $\Phi_t^{\epsilon}$ satisfies the hypotheses of Theorem~\ref{thm-entropypositive}, and hence has positive entropy with infinitely many periodic orbits, as asserted in Theorem~\ref{thm-main2}.
\end{document} |
\begin{document}
\title[Counting Hyperbolic Components in the Main Molecule]{Counting Hyperbolic Components \\ in the Main Molecule}
\author{Schinella D'Souza}
\address{Department of Mathematics, University of Michigan, Ann Arbor, Michigan 48109-1043} \email{dsouzas@umich.edu}
\date{}
\begin{abstract}
We count the number of hyperbolic components of period $n$ that lie on the main molecule of the Mandelbrot set. We give a formula for the number of these hyperbolic components of period $n$ in terms of the divisors of $n$, and in the prime power case an explicit formula is derived. \end{abstract}
\maketitle
\section{\large Introduction}
\blfootnote{\emph{2020 Mathematics Subject Classification.} Primary 37F20; Secondary 37F10.}
Consider the map $f_c(z)=z^2+c$, where $c$ is the parameter variable and $z$ is the dynamic variable, both taking on complex values. Following \cite{M1}, we denote by $K(f_c)$ the filled Julia set, defined by taking the union of bounded orbits for $f_c$. Let the Mandelbrot set, $M$, be the compact subset of all parameter values $c$ such that $K(f_c)$ is connected. Recall that each hyperbolic component of $M$ has an associated period, as shown in Figure 1. The main cardioid is the hyperbolic component that consists of the parameter values such that $f_c$ has an attracting fixed point in $\mathbb{C}$ \cite{Z}.
\begin{definition}
The \emph{main molecule} is the union of all hyperbolic components attached to the main cardioid through a chain of finitely many components. Let $M(n)$ denote the number of hyperbolic components of period $n$ on the main molecule. \end{definition}
\begin{figure}
\caption{Some periods of hyperbolic components of $M$. The hyperbolic components of period 6 on the main molecule are labeled in black.}
\label{fig:slh}
\end{figure}
The closure of the main molecule is the locus of parameters for which the core entropy is zero. In this note, we are interested in computing $M(n)$ for various $n$, and we shall give formulas for $M(n)$. Apart from the main molecule, Lutzky has given formulas for the number of hyperbolic components of period $n$ \cite{L}, Kiwi and Rees have given formulas for certain hyperbolic components in the space of quadratic rational maps \cite{KR}, and Milnor and Poirier have studied hyperbolic components for polynomials of general degree \cite{MP}. In our case, the formulas for $M(n)$ turn out to be related to the Euler phi function and other combinatorial functions. For example, Figure 1 illustrates that $M(6) = 6$.
\section{\large Preliminary Results}
From \cite{BS}, there is a simple way to count the hyperbolic components attached directly to the main cardioid.
\begin{lemma} For any $n \in \mathbb{N}$, there are $\varphi(n)$ hyperbolic components of period $n$ attached to the main cardioid, where $\varphi$ is the Euler phi function. \end{lemma}
\begin{proof}
Let $H$ be a hyperbolic component of period $n$ attached to the main cardioid. If $n = 1$, then the main cardioid itself is the only such hyperbolic component, so there is $\varphi(1) = 1$ component of period 1 attached to the main cardioid. Now suppose $n > 1$. Then, the root point of $H$ is a parabolic point of ray period $n$ \cite{M1}. For $c \neq 1/4$, let $f_c(z) = z^2 + c$ be a quadratic polynomial with a parabolic fixed point at $\alpha \in \mathbb{C}$ with multiplier $\lambda$, a primitive $n$th root of unity. This multiplier must have the form $\lambda = e^{2\pi i \theta}$, where $\theta \in \mathbb{Q}/\mathbb{Z}$ is the rotation number. Each hyperbolic component has an associated rotation number. Because the number of primitive $n$th roots of unity is given by $\varphi(n)$, there are $\varphi(n)$ possible rotation numbers for $H$, hence $\varphi(n)$ hyperbolic components of period $n$.
\end{proof}
\begin{corollary} Let $p, n \in \mathbb{N}$. For any hyperbolic component $H$ of period $p$, there are $\varphi(n)$ hyperbolic components of period $pn$ attached to $H$. \end{corollary}
\begin{proof}
Douady and Hubbard \cite{DH} introduced a tuning map $i_H:M \rightarrow M$ whose image is a baby Mandelbrot set (that is, a homeomorphic copy of $M$) \cite{T}. From \cite{Z}, the idea of the tuning map is to take $c \in M$ and for each bounded component of the Fatou set corresponding to $f_c$, replace each of these bounded components with a homeomorphic copy of the filled Julia set corresponding to some parameter in $H$. This yields a set homeomorphic to the filled Julia set of $f_{i_H(c)}$. The tuning map $i_H$ also maps the main cardioid onto $H$ and maps hyperbolic components of period $n$ onto hyperbolic components of period $pn$. By Lemma 2.1, there must then be $\varphi(n)$ such hyperbolic components.
\end{proof}
\begin{theorem} Let $n \in \mathbb{N}$ and write $n=d_1\cdots d_k$, where $d_1,\dots,d_k$ are divisors of $n$ with $d_i > 1$ for every $i \in \{1,\dots,k\}$. Then, summing over all such ordered factorizations of $n$, we have
\[ M(n) = \sum_{\substack{n=d_1\cdots d_k \\ d_i>1}} \varphi(d_1)\cdots\varphi(d_k). \]
\end{theorem}
\begin{example} Before presenting the proof, let us compute $M(12)$, the number of hyperbolic components of period 12 attached to the main cardioid through a chain of finitely many components. We need to take into account all possible ordered factorizations of 12 into divisors greater than 1. For instance, $6 \cdot 2$ and $2 \cdot 6$ represent two different such factorizations. With this in mind, we have \begin{align*} M(12) & = \varphi(12) + \varphi(6)\varphi(2) + \varphi(2)\varphi(6) + \varphi(4)\varphi(3) + \varphi(3)\varphi(4) + \varphi(3)\varphi(2)\varphi(2) + \\& \hspace{0.5cm}\varphi(2)\varphi(3)\varphi(2) + \varphi(2)\varphi(2)\varphi(3) \\
& = 4 + (2 \cdot 1) + (1 \cdot 2) + (2 \cdot 2) + (2 \cdot 2) + (2 \cdot 1 \cdot 1) + (1 \cdot 2 \cdot 1) + (1 \cdot 1 \cdot 2) \\
& = 22. \end{align*} Figure 2 illustrates the period 12 hyperbolic components attached to the main cardioid through the period 2 and 3 hyperbolic components. In particular, (A) corresponds to the terms $\varphi(2)\varphi(6)$, $\varphi(2)\varphi(3)\varphi(2)$, and $\varphi(2)\varphi(2)\varphi(3)$. (B) along with the other period 3 hyperbolic component corresponds to the terms $\varphi(3)\varphi(4)$ and $\varphi(3)\varphi(2)\varphi(2)$. \begin{figure}
\caption{The period 2 hyperbolic component}
\label{fig:sub1}
\caption{A period 3 hyperbolic component}
\label{fig:sub2}
\caption{Period 2 and 3 hyperbolic components attached to the main cardioid are shown along with the period 12 hyperbolic components attached to the main cardioid through a chain of finitely many components.}
\label{fig:test}
\end{figure}
\end{example}
With the intuition from this example, we now present the proof of Theorem 2.3.
\begin{proof} We proceed by induction on the number of divisors of $n$ greater than 1. For $n=1$, $M(1)=1$. Suppose $n \in \mathbb{N}$ has only one divisor $d_1>1$; then $d_1 = n$ and, by Lemma 2.1, $$M(n) = \varphi(n) = \sum_{\substack{n=d_1 \\ d_1>1}} \varphi(d_1),$$ so the base case holds. Now suppose that the formula in the statement of the theorem holds for every integer with at most $k$ divisors greater than 1, where $k \in \mathbb{N}$. We show that it also holds when there are $k+1$ such divisors. Let $n \in \mathbb{N}$ have $k+1$ divisors greater than 1, and denote by $d_1, \dots, d_k$ those with $d_i < n$. Then, using Lemma 2.1 we can count the number of hyperbolic components of period $n$ attached to the main cardioid, and using Corollary 2.2 we can count the number of hyperbolic components of period $n$ attached to the components already attached to the main cardioid. Thus, we have $$ M(n) = \varphi(n) + \varphi(d_1)M\Big(\frac{n}{d_1}\Big) + \dots + \varphi(d_{k})M\Big(\frac{n}{d_{k}}\Big). $$ Applying the inductive hypothesis to each $M(\frac{n}{d_i})$ for $i \in \{1,\dots,k\}$ (note that $\frac{n}{d_i}$ has fewer divisors greater than 1 than $n$ does), we may write the equation above as a sum over all ordered factorizations of $n$ into divisors greater than 1. This is equivalent to: $$ M(n) = \sum_{\substack{n=d_1\cdots d_{j} \\ d_i>1}} \varphi(d_1)\cdots \varphi(d_{j}). $$ \end{proof}
With this formulation of $M(n)$, we can define $M(n)$ in a recursive way depending on the Euler $\varphi$ function and the divisors of $n$.
\begin{corollary} For any $n \in \mathbb{N}$, \[
M(n) = \sum_{\substack{d | n \\ d>1}} \varphi(d)M\left(\frac{n}{d}\right). \] \end{corollary}
\begin{proof} This follows from the inductive step of the proof of Theorem 2.3. \end{proof}
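The recursion in Corollary 2.4 is easy to evaluate by machine. The following minimal Python sketch (the helper names \texttt{phi} and \texttt{M} are ours, not from the paper) computes $M(n)$ with memoization and reproduces the values $M(6)=6$ and $M(12)=22$ discussed above.

```python
from functools import lru_cache

def phi(n):
    # Euler phi function, by trial division over the prime factors of n.
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

@lru_cache(maxsize=None)
def M(n):
    # Number of period-n hyperbolic components on the main molecule,
    # via Corollary 2.4: M(n) = sum over divisors d > 1 of phi(d) * M(n/d).
    if n == 1:
        return 1
    return sum(phi(d) * M(n // d) for d in range(2, n + 1) if n % d == 0)

print([M(n) for n in (1, 2, 3, 6, 12)])   # [1, 1, 2, 6, 22]
```

The memoized recursion terminates because each divisor $d > 1$ strictly decreases the argument $n/d$.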
\section{\large Prime Powers}
In two different ways, we will find an explicit formula for positive integers that are prime powers. First, we begin with an example for small prime powers.
\begin{example} Let $n, p \in \mathbb{N}$ with $p$ prime. \begin{enumerate}[(a)] \item Suppose $n=p=p \cdot 1$. Then, $M(p)=\varphi(p)=p-1$, as all components of period $p$ are attached directly to the main cardioid. \\ \item Now let $n=p^2=p^2 \cdot 1 = p \cdot p$. Then, there are $\varphi(p^2)$ components of period $p^2$ attached directly to the main cardioid. There are more components to count here because of the $\varphi(p)$ components of period $p$ attached to the main cardioid: by Corollary 2.2, attached to each of these are $\varphi(p)$ components of period $p^2$. There are $\varphi(p) \cdot \varphi(p)$ such components. In total, $$ M(p^2) = \varphi(p^2) + (\varphi(p))^2=p(p-1)+(p-1)^2=(p-1)(2p-1). $$ \item Finally, suppose $n=p^3=p^3 \cdot 1 = p^2 \cdot p = p \cdot p^2 = p \cdot p \cdot p$. Applying similar reasoning as before and computing, our total is now $$ M(p^3) = \varphi(p^3) + \varphi(p^2)\varphi(p)+\varphi(p)\varphi(p^2)+(\varphi(p))^3 = (p-1)(2p-1)^2. $$ \end{enumerate} \end{example}
This example sheds light on the general case of when $n$ is an arbitrary prime power, leading to the following result.
\begin{theorem} For any prime $p$ and any $k \in \mathbb{N}$, we have $M(p^k)=(p-1)(2p-1)^{k-1}$. \end{theorem}
\begin{proof} Let $p \in \mathbb{N}$ be a prime. We argue by induction on $k$, the case $k=1$ being Example 3.1(a). Applying Corollary 2.4 for $n=p^k$, together with the inductive hypothesis $M(p^{k-h})=(p-1)(2p-1)^{k-h-1}$ for $1 \leq h < k$, we obtain: \begin{equation*} \begin{split} M(p^k) & = \sum_{1 \leq h < k} \varphi(p^h)M(p^{k-h}) + \varphi(p^k) \\
& = \sum_{1\leq h < k} (p^{h-1}(p-1)^2(2p-1)^{k-h-1})+p^{k-1}(p-1) \\
& = \frac{(2p-1)^{k-1}(p-1)^2}{p}\Big(\sum_{1\leq h < k}(\frac{p}{2p-1})^h \Big)+p^{k-1}(p-1) \\
& = \frac{(2p-1)^{k-1}(p-1)^2}{p} \Bigg( \frac{1-(\frac{p}{2p-1})^k}{1-\frac{p}{2p-1}}-1\Bigg) +p^{k-1}(p-1) \\
& = (p-1)(2p-1)^{k-1}. \end{split} \end{equation*} \end{proof}
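As a sanity check, the closed form of Theorem 3.2 can be compared against the recursion of Corollary 2.4. The self-contained Python sketch below (helper names \texttt{phi} and \texttt{M} are ours) does so for small primes and exponents.

```python
from functools import lru_cache

def phi(n):
    # Euler phi function by trial division.
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

@lru_cache(maxsize=None)
def M(n):
    # Recursion of Corollary 2.4.
    if n == 1:
        return 1
    return sum(phi(d) * M(n // d) for d in range(2, n + 1) if n % d == 0)

# Theorem 3.2: M(p^k) = (p - 1) * (2p - 1)^(k - 1).
cases = [(p, k) for p in (2, 3, 5, 7) for k in (1, 2, 3, 4)]
ok = all(M(p ** k) == (p - 1) * (2 * p - 1) ** (k - 1) for p, k in cases)
print(ok)   # True
```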
\section{\large Products of Distinct Primes}
Consider now the case where the positive integer $n$ is of the form $n=p_1...p_m$, where $p_1,..., p_m$ are distinct primes. Before considering the general case, let us again look at a simple example.
\begin{example} Let $n=p_1p_2$, where $p_1$ and $p_2$ are distinct primes. As before, we need to consider the number of ways to write this product, namely $p_1p_2 \cdot 1$, $p_1 \cdot p_2$, and $p_2 \cdot p_1$. Recall that to determine the number of hyperbolic components with period $p_1p_2$, we need to count the components of period $p_1p_2$ on the main cardioid, the components of period $p_2$ on the components of period $p_1$ on the main cardioid, and the components of period $p_1$ on the components of period $p_2$ on the main cardioid. Therefore, we must have $$ M(p_1p_2)=\varphi(p_1p_2)+\varphi(p_1)\varphi(p_2)+\varphi(p_2)\varphi(p_1)=3\varphi(p_1 p_2)=3(p_1-1)(p_2-1). $$ \end{example}
For the general case of $n=p_1 \dots p_m$, we first need to determine how many ways we can write this product of primes in a similar manner to the example above. This can be done by a recursive process. Instead of thinking about how to write these products of primes, we can consider the equivalent problem of determining the number of ordered partitions of $\{1,\dots,m\}$ into non-empty blocks. Let this number be represented by $N(m)$. Define $N(0)=1$. It is clear that $N(1) = 1$. The ordered partitions of $\{1,2\}$ are $(\{1\},\{2\})$, $(\{2\},\{1\})$, and $(\{1,2\})$, so $N(2) = 3$. One can check by hand that in fact $N(3)=13$ and $N(4)=75$. The numbers $N(m)$ are known as the ordered Bell numbers or Fubini numbers, and from \cite{OAG} they satisfy $$N(m) \sim \frac{m!}{2(\log(2))^{m+1}}.$$ The following is a well-known recursion for the ordered Bell numbers.
\begin{lemma} For any $m \in \mathbb{N}$, we have $$N(m)=\displaystyle\sum_{k=1}^{m}{{m}\choose{k}}N(m-k).$$ \end{lemma}
\begin{proof} Let $1 \leq k \leq m$. Begin by choosing the $k$ numbers from $\{1,\dots,m\}$ that form the first block of the ordered partition. The number of ordered partitions of the remaining $m-k$ numbers is $N(m-k)$. Because there are ${{m}\choose{k}}$ ways of choosing $k$ numbers from $\{1,\dots,m\}$, there are ${{m}\choose{k}}N(m-k)$ ordered partitions whose first block has exactly $k$ numbers. Summing over $k$ between 1 and $m$ counts every ordered partition exactly once, so $$N(m)=\displaystyle\sum_{k=1}^{m}{{m}\choose{k}}N(m-k).$$ \end{proof}
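The recursion above is likewise immediate to implement; this short Python sketch (the name \texttt{N} is ours, matching the paper's notation) reproduces the small values quoted in the text.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def N(m):
    # Ordered Bell (Fubini) numbers: ordered partitions of {1,...,m}
    # into non-empty blocks; choose the first block, then order the rest.
    if m == 0:
        return 1
    return sum(comb(m, k) * N(m - k) for k in range(1, m + 1))

print([N(m) for m in range(6)])   # [1, 1, 3, 13, 75, 541]
```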
In this case where $n$ is a product of distinct primes, the following theorem shows that the number of hyperbolic components of period $n$ on the main molecule is closely related to the ordered Bell numbers.
\begin{theorem} Let $n = p_1p_2 \cdots p_m$, a product of distinct primes. Then, we have $$M(n)=\displaystyle{M(p_1 \dots p_m)=N(m)(p_1-1) \dots (p_m-1)} = N(m) \varphi(n).$$ \end{theorem}
\begin{proof} By Theorem 2.3, if we write $n=d_1\cdots d_k$, where $d_1,\dots,d_k$ are divisors of $n$ with $d_i > 1$ for every $i \in \{1,\dots,k\}$, then we have $$ M(p_1 \dots p_m) = \sum_{\substack{n=d_1\cdots d_k \\ d_i>1}} \varphi(d_1)\cdots\varphi(d_k). $$ For $r_1 \in \{1, \dots, m \}$, we have that $d_1 = p_{i_1} \dots p_{i_{r_1}}$ where $p_{i_1}, \dots ,p_{i_{r_1}}$ are $r_1$ of the primes $p_1, \dots, p_m$. Therefore, $$\varphi(d_1) = \varphi(p_{i_1} \dots p_{i_{r_1}}) = \varphi(p_{i_1}) \dots \varphi(p_{i_{r_1}}).$$ For $r_2 \in \{1,\dots,m-r_1\}$, $d_2 = p_{j_1} \dots p_{j_{r_2}}$ where $p_{j_1}, \dots, p_{j_{r_2}}$ are $r_2$ of the remaining $m-r_1$ primes. Similarly, $$\varphi(d_2) = \varphi(p_{j_1} \dots p_{j_{r_2}}) = \varphi(p_{j_1}) \dots \varphi(p_{j_{r_2}}).$$ Continuing in this way, at the $k$-th step we will have exhausted all of the $m$ primes. We may then write $n = d_1 \dots d_k$ and note that $$\varphi(d_1) \dots \varphi(d_k) = \varphi(p_1) \dots \varphi(p_m).$$ We are summing over all ordered factorizations of $n$ into divisors greater than 1. Each term in the sum equals $\varphi(p_1) \dots \varphi(p_m)$, and there are $N(m)$ such factorizations, since each corresponds to an ordered partition of the index set $\{1,\dots,m\}$. Therefore, $$\displaystyle{M(p_1 \dots p_m)=N(m)\varphi(p_1) \dots \varphi(p_m) = N(m)(p_1-1) \dots (p_m-1)}=N(m)\varphi(n).$$ \end{proof}
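Theorem 4.3 can be sanity-checked numerically by combining the recursion for $M$ from Corollary 2.4 with the ordered Bell numbers. The self-contained Python sketch below (helper names \texttt{phi}, \texttt{M}, \texttt{N} are ours) verifies the identity $M(n) = N(m)\varphi(n)$ for several squarefree $n$.

```python
from functools import lru_cache
from math import comb, prod

def phi(n):
    # Euler phi function by trial division.
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

@lru_cache(maxsize=None)
def M(n):
    # Recursion of Corollary 2.4.
    if n == 1:
        return 1
    return sum(phi(d) * M(n // d) for d in range(2, n + 1) if n % d == 0)

@lru_cache(maxsize=None)
def N(m):
    # Ordered Bell numbers.
    if m == 0:
        return 1
    return sum(comb(m, k) * N(m - k) for k in range(1, m + 1))

squarefree = [(2, 3), (2, 5), (3, 5), (2, 3, 5), (2, 3, 7), (2, 3, 5, 7)]
for primes in squarefree:
    n, m = prod(primes), len(primes)
    assert M(n) == N(m) * phi(n)
print("M(n) = N(m) * phi(n) verified for", len(squarefree), "squarefree n")
```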
Now let $p_m$ denote the $m$th prime number. As before, let $n=p_1 \cdots p_m$. We then have \begin{align*}
\frac{M(n)}{N(m)\cdot n} &= \frac{N(m)(p_1-1) \cdots (p_m-1)}{N(m) \cdot p_1 \cdots p_m} \\
&= \left(1-\frac{1}{p_1}\right) \cdots \left(1-\frac{1}{p_m}\right) \\
&< 1. \end{align*} As $n \to \infty$ we must have $m \to \infty$, and it is well-known that the product above tends to zero as $m \to \infty$. Therefore, $M(n) = o(N(m)\, n)$ as $n \to \infty$.
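The decay of this product can be illustrated numerically (by Mertens' theorem it behaves like $e^{-\gamma}/\log p_m$); the helper names `first_primes` and `euler_product` in the following sketch are our own:

```python
def first_primes(m):
    """Return the first m primes by trial division (fine for small m)."""
    primes, n = [], 2
    while len(primes) < m:
        if all(n % p != 0 for p in primes):
            primes.append(n)
        n += 1
    return primes

def euler_product(m):
    """Compute (1 - 1/p_1) * ... * (1 - 1/p_m) over the first m primes."""
    result = 1.0
    for p in first_primes(m):
        result *= 1 - 1 / p
    return result

# partial products decrease monotonically towards 0
print([round(euler_product(m), 4) for m in (1, 5, 10, 50)])
```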
\section{\large Acknowledgements}
I would like to thank Giulio Tiozzo for his supervision and helpful comments.
\end{document}
\begin{document}
\author{Avraham Aizenbud}
\address{Avraham Aizenbud, Faculty of Mathematics and Computer Science, Weizmann Institute of Science, POB 26, Rehovot 76100, Israel } \email{aizenr@gmail.com} \urladdr{http://www.wisdom.weizmann.ac.il/~aizenr/} \author{Dmitry Gourevitch} \address{Dmitry Gourevitch, Faculty of Mathematics and Computer Science, Weizmann Institute of Science, POB 26, Rehovot 76100, Israel } \email{dimagur@weizmann.ac.il} \urladdr{http://www.wisdom.weizmann.ac.il/~dimagur}
\keywords{Vanishing of distributions, spherical spaces, multiplicity, Shalika, Bessel function} \subjclass[2010]{20G05, 22E45, 46F99}
\date{\today} \title{Vanishing of certain equivariant distributions on spherical spaces} \maketitle
\begin{abstract} \DimaA{ We prove vanishing of $\fz$-eigen distributions on a split real reductive group which change according to a non-degenerate character under the left action of the unipotent radical of the Borel subgroup, and are equivariant under the right action of a spherical subgroup.}
This is a generalization of a result by Shalika, that concerned the group case. Shalika's result was crucial in the proof of his multiplicity one theorem. We view our result as a step in the study of multiplicities of quasi-regular representations on spherical varieties.
As an application we prove non-vanishing of \DimaA{spherical} Bessel functions. \end{abstract} \section{Introduction}\label{sec:intro}
\subsection{Main results}
In this paper we prove the following generalization of Shalika's result \cite[\S 2]{Shal}. \begin{introtheorem}\label{thm:main} \DimaA{Let $G$ be a split real reductive group and $H$ be its spherical subgroup. Let $U$ be the unipotent radical of a Borel subgroup $B$ of $G$. Let $\psi$ be a non-degenerate character of $U$ and $\chi$ be a character of $H$. Let $Z$ be the complement to the union of open $B\times H$-double cosets in $G$. Let $\fz$ be the center of the universal enveloping algebra of the Lie algebra $\g$ of $G$.
Then there are no non-zero $\fz$-eigen $(U\times H,\psi\times \chi)$-equivariant distributions supported on $Z.$} \end{introtheorem}
This result in the group case (\cite[\S 2]{Shal}) was crucial in the proof of Shalika's multiplicity one theorem.
Our proof begins by applying the technique used by Shalika. However, this technique alone was not enough in this generality, and we had to complement it by using the integrability of the singular support, as in \cite{AGAMOT}.
Theorem \ref{thm:main} provides a new tool for the study of the multiplicities of the irreducible quotients of the quasi-regular representation of $G$ on Schwartz functions on $G/H$, see \S\S \ref{subsec:qr} below for more details.
\subsection{Non-vanishing of spherical Bessel functions} Another application of Theorem \ref{thm:main} is to the study of spherical Bessel distributions and functions. \begin{defn} Let $G$ be a split real reductive group, and $H \subset G$ be a spherical subgroup. Let $(\pi,V)$ be a (smooth) irreducible admissible representation of $G$. Let $\phi$ be an \DimaA{$(H,\chi)$-equivariant} continuous functional on $V$ and $v$ be a $(U,\psi)$-equivariant continuous functional on the contragredient representation $\tilde V$. Define the \emph{spherical Bessel distribution} by $$\xi_{v,\phi}(f):=\langle v, \pi^*(f) \phi \rangle. $$
Define the \emph{spherical Bessel function} to be the restriction $j_{v,\phi}:=\xi_{v,\phi}|_{\DimaA{G} - Z}$. \end{defn} It is well-known that $j_{v,\phi}$ is a smooth function.
Theorem \ref{thm:main} easily implies the following corollary. \begin{introcor}\label{cor:SpherChar} Suppose that \DimaA{$v$ and $\phi$ are non-zero.} Then $j_{v,\phi} \neq 0$. \end{introcor}
\DimaA{ \subsection{Non-archimedean analogs}$\,$\\ Over non-archimedean fields, the universal enveloping algebra does not act on the distributions. However, the Bernstein center $\operatorname{End}_{G \times G}(\mathcal{S}(G))$ does act. In \cite{AGS_Z} we study this action in detail. In \cite{AGK} we prove, using \cite[Theorem A]{AGS_Z}, analogs of Theorem \ref{thm:main} and Corollary \ref{cor:SpherChar} for non-archimedean fields of characteristic zero. These analogs are somewhat weaker for general spherical pairs, but are of the same strength for the group case and for Galois symmetric pairs. The group case of the non-archimedean counterpart of Corollary \ref{cor:SpherChar} was proven before in \cite[Appendix B]{LM}.
} \subsection{Relation with multiplicities in regular representations of symmetric spaces}\label{subsec:qr}
Let $(G,H)$ be a symmetric pair of real reductive groups. Suppose that $G$ is quasi-split and let $B\subset G$ be a Borel subgroup. Let $k$ be the number of open $B$-orbits on $G/H$.
Theorem \ref{thm:main} can be used in order to study the following conjecture.
\begin{introconj} Let $(\pi,V)$ be a (smooth) irreducible admissible representation of $G$. Then the dimension of the space \DimaA{$(V^*)^{H}$ of $H$-invariant} continuous functionals on $V$ is at most $k$. In particular, any complex reductive symmetric pair is a Gelfand pair. \end{introconj}
We suggest dividing this conjecture into two cases: \begin{itemize} \item $\pi$ is non-degenerate, i.e. $\pi$ has a non-zero continuous $(U,\psi)$-equivariant functional for some non-degenerate character $\psi$ of $U$; \item $\pi$ is degenerate. \end{itemize}
In the first case, the last conjecture follows from the following one.
\begin{introconj}
Let $U$ be the unipotent radical of $B$ and let $\psi$ be its non-degenerate character. Let $\fz$ be the center of the universal enveloping algebra of the Lie algebra $\g$ of $G$. Let $\lambda$ be a character of $\fz$.
Then the dimension of the space of $(\fz,\lambda)$-eigen $(U,\psi)$-equivariant distributions on $G/H$ does not exceed $k$. \end{introconj}
We believe that Theorem \ref{thm:main} can be useful in approaching this conjecture, since it allows one to reduce the study of distributions to the union of open $B$-orbits.
\section{Preliminaries}
\subsection{Conventions}
\begin{itemize} \item By an algebraic manifold we mean a smooth real algebraic variety. \item We will use capital Latin letters to denote Lie groups and the corresponding Gothic letters to denote their Lie algebras. \item Let a Lie group $G$ act on a smooth manifold $M$. For a vector $v\in \fg$ and a point $x \in M$ we will denote by $vx\in T_xM$ the image of $v$ under the differential of the action map $g \mapsto gx$. Similarly, we will use the notation $\fh x$, for any subspace $\fh \subset \fg$. \item We denote by $G_x$ the stabilizer of $x$ in $G$ and by $\fg_x$ its Lie algebra.
\end{itemize}
\subsection{Tangential and transversal differential operators}\label{subsec:trans}
In this subsection we briefly review the method of \cite[\S 2]{Shal}. For a more detailed description see \cite[\S\S 2.1]{JSZ}.
\begin{defn} Let $M$ be a smooth manifold and $N$ be a smooth submanifold. \begin{itemize} \item A vector field $v$ on $M$ is called \emph{tangential} to $N$ if for every point $p\in N, \, v_p\in T_pN$ and \emph{transversal} to $N$ if for every point $p\in N, \, v_p\notin T_pN$.
\item A differential operator $D$ is called \emph{tangential} to $N$ if every point $p\in N$ has an open neighborhood $U_x \subset M$ such that $D|_{U_x}$ is a finite sum of differential operators of the form $\phi V_1\cdots V_r$ where $\phi$ is a smooth function on $U_x, \, r\geq 0$, and $V_i$ are vector fields on $U_x$ tangential to $U_x \cap N$. \end{itemize} \end{defn}
\begin{lem}[cf. the proof of {\cite[Proposition 2.10]{Shal}}]\label{lem:TransTan} Let $M$ be a smooth manifold and $N$ be a smooth submanifold. Let $D$ be a differential operator on $M$ tangential to $N$ and $V$ be a vector field on $M$ transversal to $N$. Let $\eta$ be a distribution on $M$ supported in $N$ such that $D\eta=V\eta$. Then $\eta=0$. \end{lem}
\subsection{Singular support}\label{subsec:SS}
Let $M$ be an algebraic manifold and $\eta$ be a distribution on $M$. The \emph{singular support} of $\eta$ is defined to be the singular support of the D-module generated by $\eta$ and denoted $SS(\eta) \subset T^*M$.
We briefly review the properties of the singular support that are most important for this paper. For a more detailed overview we refer the reader to \cite[\S\S 2.3 and Appendix B]{AGAMOT}.
\begin{notn} For a point $x\in M$ \begin{itemize} \item we denote by $SS_x(\eta)$ the fiber over $x$ of the natural projection $SS(\eta)\to M$, \item for a submanifold $N \subset M$ we denote by $CN_N^M\subset T^*M$ the conormal bundle to $N$ in $M$, and by $CN_{N,x}^M$ the conormal space at $x$ to $N$ in $M$. \end{itemize} \end{notn}
\begin{lem}[{See e.g. \cite[Fact 2.3.9 and Appendix B]{AGAMOT} }] \label{Ginv} Let an algebraic group $G$ act on $M$. Suppose that $\eta$ is $G$-equivariant. Then
$$SS(\eta) \subset \{(x,\phi) \in T^*M \, | \, \forall \alpha \in \g, \, \phi(\alpha(x)) =0\}=\bigcup_{x\in M}CN_{Gx}^M.$$ \end{lem}
\begin{lem} \label{lem:nilp} Let $G$ be a real reductive group, $\cN \subset \fg^*$ be the nilpotent cone and $\fz$ be the center of the universal enveloping algebra $\cU(\g)$. Let $\xi$ be a $\fz$-eigen distribution on $G$. Identify $T^*G$ with $G\times \fg^*$ using the right action. Then $SS(\xi)\subset G \times \cN$. \end{lem} This lemma is well-known but we will prove it here for the benefit of the reader. \begin{proof}
Consider the standard filtrations on $\cU(\fg)$ and on the ring of differential operators $D(G)$. Consider $\fg$ as the space of left-invariant vector fields on $G$. Then the natural map $\RamiA{i:}\cU(\fg)\to D(G)$ is a morphism of filtered algebras. \RamiA{We have a commutative diagram
$$\xymatrix{\parbox{30pt}{$Gr\cU(\fg)$}\ar@{->}^{Gr(i)}[r]\ar@{<-}_{\pi_U}[d] &
\parbox{20pt}{$Gr D(G)$}\ar@{<-}_{\pi_D}[d]\\
\parbox{20pt}{$S(\fg)$ }\ar@{->}^{\bar i}[r] &
\parbox{40pt}{$\cO(T^*G)$,}}
$$
where $\bar i$ and ${\pi_U}$ are the algebra homomorphisms which extend the natural embeddings $\g \to Gr\cU(\fg)$ and $\g \to \cO(T^*G)$, and $\pi_D$ is the algebra homomorphism which extends the natural embedding of vector fields on $G$ into $D(G)$. By the PBW theorem the vertical maps are isomorphisms. This implies that $Gr(i)$ is an embedding, and thus the filtration on $\cU(\g)$ is the one induced from $D(G)$ using the embedding $i$. Therefore we have the following commutative diagram
$$ \xymatrix{ \cU(\fg)\ar@{->}^{i}[r]\ar@{->}^{\sigma_{U}}[d] & D(G)\ar@{->}^{\sigma_{D}}[d]\\
\parbox{30pt}{$Gr\cU(\fg)$}\ar@{->}^{Gr(i)}[r]\ar@{<-}_{\pi_U}[d] &
\parbox{20pt}{$Gr D(G)$}\ar@{<-}_{\pi_D}[d]\\
S(\fg)\ar@{->}^{\bar i}[r] & \cO(T^*G),} $$ where ${\sigma_{U}}$ and $\sigma_D$ are the (nonlinear) symbol maps. Note that the map ${\bar i}$ is a section of the restriction map $r:\cO(T^*G)\to \cO(T_e^*G)\cong\cO(\g^*)\cong S(\g).$ }
In order to prove the lemma it is enough to show that $SS_{e}(\xi)\subset \cN$. Note that $\cN$ is the zero set of the ideal $I\subset S(\fg)=\cO(\fg^*)$ generated by all homogeneous non-constant $\fg$-invariant polynomials. We have to show that for any homogeneous \RamiA{$p \in S(\g)^\g$ of degree $d>0,$} there exists \RamiA{non-constant} $u\in \fz$ such that \RamiA{$r(\pi_D^{-1}(\sigma_D(i(u))))=p$, or equivalently (in view of the above) that $\sigma_{U}(u)=\pi_U(p)$. Let $s:S(\g)\to\cU(\g)$ be the symmetrization map. It is easy to see that $\sigma^{d}_{U}(s(p))=\pi_U(p)$, where $\sigma^{d}_{U}$ denotes the $d$-th symbol. Since $\pi_U$ is an isomorphism this implies that $\sigma^{d}_{U}(s(p)) \neq 0$ and thus $\sigma_{U}(s(p))=\sigma^{d}_{U}(s(p))=\pi_U(p)$. This implies the assertion.
} \end{proof}
\begin{thm}[Integrability theorem, cf. \cite{Gab,GQS,KKS, Mal}]
The singular support $SS(\eta)$ is a coisotropic subvariety of $T^*M$. \end{thm}
This theorem implies the following corollary (see \cite[\S 3]{Aiz} for more details).
\begin{cor}\label{cor:WCI} Let $N\subset M$ be a closed algebraic submanifold. Suppose that $\eta$ is supported in $N$. Suppose further that for any $x \in N$, we have $CN_{N,x}^M \nsubseteq SS_x(\eta)$. Then $\eta=0$. \end{cor}
\section{Proof of the main result}
\subsection{Sketch of the proof}
\DimaA{We decompose $G$ into $B\times H$-double cosets. Each double coset} $\cO$ is further decomposed as $\cO=\cO_s \cup \cO_c$ in a certain way.
We prove the required vanishing \DimaA{coset by coset}, using Shalika's method (see \S\S\ref{subsec:trans}) for $\cO_s$ and singular support analysis (see \S\S\ref{subsec:SS}) for $\cO_c$.
\subsection{Notation and lemmas}
\begin{notation}$\,$ \begin{itemize} \item Fix a torus $T\subset B$ and let $\ft \subset \fb$ denote the corresponding Lie algebras. Let $\Phi$ denote the root system, $\Phi^+$ denote the set of positive roots and $\Delta\subset \Phi^+$ denote the set of simple roots. For $\alpha\in \Phi$ let $\g_{\alpha} \subset \g$ denote the root space corresponding to $\alpha$.
\item Let $C\in \fz$ denote the Casimir element.
\item We choose $E_{\alpha}\in \fg_{\alpha}$, for any $\alpha \in \Phi$ such that $C=\sum_{\alpha \in \Phi^+} E_{-\alpha}E_{\alpha}+D$, where $D$ is in the universal enveloping algebra of the Cartan subalgebra $\ft$.
\item Let $\cO\subset G$ be a $B\times H$-\DimaA{double coset. Consider the left action of $\fu$ on $\cO$ and define} $$\cO_c:=\left \{x \in \cO \, |\, \sum_{\alpha \in \Delta} d\psi(E_\alpha)E_{-\alpha}x \in T_x\cO \right \}$$ and $\cO_s:=\cO\setminus \cO_c$. \end{itemize} \end{notation}
We will need the following lemmas, that will be proved in subsequent subsections. \DimaA{ \begin{lemma}\label{lem:WF} Let $x\in G$. Let $\xi$ be a $\fz$-eigen $(U\times H,\psi \times \chi)$ equivariant distribution on $G$. Then $SS_x(\xi)\subset CN_{BxH,x}$. \end{lemma}
\begin{lemma}\label{lem:cOProp} Let $\cO\subset Z$ be a $B\times H$-double coset. Then $\cO_s \neq \emptyset$. \end{lemma} }
\subsection{Proof of Theorem \ref{thm:main}}
Suppose that there exists a non-zero $\fz$-eigen $(U\times H,\psi\times \chi)$-equivariant distribution $\xi$ supported on $Z$.
For any \DimaA{$B\times H$-double coset $\cO \subset G$}, stratify $\cO_{c}$ to a union of smooth locally closed varieties $\cO_c^i$.
The collection \DimaA{$$\{\cO_c^i \, | \, \cO \text{ is a }B\times H-\text{double coset}\}\cup \{\cO_s \, | \,\cO\text{ is a } B\times H-\text{double coset}\}$$} is a stratification of \DimaA{$G$}. Reorder this collection to a sequence $\{S_i\}_{i=1}^N$ of smooth locally closed subvarieties of \DimaA{$G$} s.t. $U_k:=\bigcup_{i=1}^k S_i$ is open in \DimaA{$G$} for any $1\leq k \leq N$.
Let $k$ be the maximal integer such that $\xi|_{U_{k-1}}=0$. Let $\eta:=\xi|_{U_{k}}$. We will now show that $\eta=0$, which leads to a contradiction. \begin{enumerate}[{Case} 1.] \item $S_{k}=\cO_s$ for some \DimaA{double coset} $\cO$.
Recall that we have the following decomposition of the Casimir element $$C=\sum_{\alpha \in \Phi^+} E_{-\alpha}E_{\alpha}+D$$ Since $\eta$ is $\fz$-eigen and $(U,\psi)$-equivariant, we have, for some scalar $\lambda$,
$$\lambda \eta=C\eta=\sum_{\alpha \in\Phi^+} E_{-\alpha}E_{\alpha}\eta+D\eta=\sum_{\alpha \in \Phi^+} E_{-\alpha}d\psi(E_\alpha)\eta +D\eta=\sum_{\alpha \in \Delta} E_{-\alpha}d\psi(E_\alpha)\eta +D\eta$$
Let $V:= \sum_{\alpha \in \Delta} d\psi(E_\alpha)E_{-\alpha}$ and $D':=\lambda Id-D$. We have $V\eta=D'\eta$, and it is easy to see that $D'$ is tangential to $\cO_s$,
and $V$ is transversal to $\cO_s$. Now, Lemma \ref{lem:TransTan} implies $\eta=0$ which is a contradiction.
\item $S_{k}\subset \cO_c$ for some double coset $\cO$.
By Corollary \ref{cor:WCI} it is enough to show that for any $x \in S_k$ we have \begin{equation}\label{eq:NonCoIsot} CN_{S_k,x}^{\DimaA{G}} \nsubseteq SS_x(\eta). \end{equation}
By Lemma \ref{lem:WF}, $ SS_x(\eta)\subset CN_{\cO,x}^{\DimaA{G}}$. By Lemma \ref{lem:cOProp}, $S_{k}\subsetneq \cO$, thus $CN_{S_{k},x}^{\DimaA{G}} \supsetneq CN_{\cO,x}^{\DimaA{G}}$ which implies \eqref{eq:NonCoIsot}.
\end{enumerate}
$\Box$
\DimaA{ \subsection{Proof of Lemma \ref{lem:WF}}\label{subsec:PfLemWF}
\begin{proof} Let $\fh$ denote the Lie algebra of $H$ and $ad(x)\fh$ denote its conjugation by $x$. Identify $T_x^*G$ with $\fg^*$ using multiplication by $x^{-1}$ on the right. Then $$CN_{BxH,x}^G= (\ft+\mathfrak{u}+ad(x)\fh)^\bot.$$ Since $\xi$ is $\mathfrak{u}\times \fh$-equivariant, Lemma \ref{Ginv} implies that $SS_x(\xi)\subset (\mathfrak{u}+ad(x)\fh)^\bot$. Since $\xi$ is also $\fz$-eigen, Lemma \ref{lem:nilp} implies that $SS_x(\xi)\subset \cN $, where $\cN \subset \fg^*$ is the nilpotent cone. Now we have $$SS_x(\xi)\subset(\mathfrak{u}+ad(x)\fh)^\bot \cap \cN =(\ft+\mathfrak{u}+ad(x)\fh)^\bot=CN_{BxH,x}^G.$$ \end{proof}
\subsection{Proof of Lemma \ref{lem:cOProp}}\label{subsec:cOProp} First we need the following lemmas and notation.
\begin{lemma}\label{lem:dense} Let $K\subset K_{i} \subset G$ for $i=1,\dots,n$ be algebraic subgroups. Suppose that the $K_i$ generate $G$. Let $Y$ be a transitive $G$-space. Let $y \in Y$. Assume that $Ky$ is Zariski dense in $K_i y$ for each $i$. Then $Ky$ is Zariski dense in $Y$. \end{lemma} \begin{proof} By induction we may assume that $n=2$. Let $$O_l:=\underbrace{K_1K_2\cdots K_1K_2 }_{l \text{ times}}y.$$ It is enough to prove that for any $l$ the orbit $Ky$ is dense in $O_l$. Let us prove it by induction on $l$. Suppose that we have already proven that $Ky$ is dense in $O_{l-1}$. Then $$\overline{Ky}=\overline{K_2y}=\overline{K_2Ky}=\overline{K_2O_{l-1}}.$$ Thus $Ky$ is dense in $K_2O_{l-1}$. Similarly, $K_2O_{l-1}$ is dense in $K_1K_2O_{l-1}=O_l$. \end{proof}
\begin{notn}$\,$ \begin{itemize} \item Let $Y$ denote the symmetric space $G/H$, and $Z'$ denote the image of $Z$ in $Y$. \item For a simple root $\alpha\in \Delta$, denote by $P_{\alpha}\subset G$ the parabolic subgroup whose Lie algebra is $\fg_{-\alpha}\oplus \fb$. \end{itemize} \end{notn}
\begin{lemma}\label{lem:out} Let $x \in Z'$. Then there exists a simple root $\alpha \in \Delta$ such that $\g_{-\alpha}x \nsubseteq \fb x$. \end{lemma}
\begin{proof} Assume the contrary. Then for any $\alpha \in \Delta, \,T_x P_{\alpha}x=T_xBx$. Thus $Bx$ is Zariski dense in $P_{\alpha}x$. By Lemma \ref{lem:dense} this implies that $Bx$ is dense in $Y$, which contradicts the condition $x \in Z'$. \end{proof}
\begin{proof}[Proof of Lemma \ref{lem:cOProp}] Note that $\cO_c$ is invariant with respect to the right action of $H$. Let $\cO'$ denote the image of $\cO$ in $Y$, and let $\cO_c'$ denote the image of $\cO_c$ in $Y$. Choose $x\in \cO'$ and let $a:B \to \cO'$ denote the action map. It is enough to show that $a^{-1}(\cO_c')\neq B$.
\begin{multline*}
a^{-1}(\cO_c')=\{b \in B \, | \, \sum_{\alpha\in\Delta} d\psi(E_{\alpha})E_{-\alpha}\in \fg_{bx} + \fb\}=
\{b \in B \, | \, \sum_{\alpha\in\Delta} d\psi(E_{\alpha})ad(b^{-1})E_{-\alpha} \in \fg_{x} + \fb\}=\\
\{tu \in B \, | \, \sum_{\alpha\in\Delta} d\psi(E_{\alpha})ad(t^{-1})E_{-\alpha} \in \fg_{x} + \fb\}=
\{tu \in B \, | \, \sum_{\alpha\in\Delta} d\psi(E_{\alpha})\alpha(t)E_{-\alpha} \in \fg_{x} + \fb\}. \end{multline*}
By Lemma \ref{lem:out} we can choose $\alpha \in \Delta$ such that $\g_{-\alpha}x \nsubseteq \fb x$. For any $\varepsilon>0$ there exists $t \in T$ s.t. $\alpha(t)=1$ and $|\beta(t)|<\varepsilon$ for all $\beta \in \Delta$ with $\beta \neq \alpha$. It is easy to see that for $\varepsilon$ small enough, $t \notin a^{-1}(\cO_c')$. \end{proof} }
\end{document}
\begin{document}
\title{Equivalent definitions for (degree one) Cameron-Liebler classes of generators in finite classical polar spaces}
\begin{abstract}
In this article, we study \emph{degree one Cameron-Liebler} sets of generators in all finite classical polar spaces, which are a particular type of Cameron-Liebler set of generators in these polar spaces, \cite{CLpolar}. These degree one Cameron-Liebler sets are defined similarly to the Boolean degree one functions, \cite{Ferdinand.}. We summarize the equivalent definitions for these sets and give a classification result for the degree one Cameron-Liebler sets in the polar spaces $W(5,q)$ and $Q(6,q)$.
\end{abstract}
\textbf{Keywords}: Cameron-Liebler set, finite classical polar space, Boolean degree one function. \par \textbf{MSC 2010 codes}: 51A50, 05B25, 05E30, 51E14, 51E30. \section{Introduction}
The investigation of Cameron-Liebler sets of generators in polar spaces is inspired by the research on Cameron-Liebler sets in finite projective spaces. This research started with Cameron-Liebler sets of lines in $\PG(3,q)$, defined by P. Cameron and R. Liebler in \cite{begin}. A set $\mathcal{L}$ of lines in $\PG(3,q)$ is a Cameron-Liebler set of lines if and only if the number of lines in $\mathcal{L}$ disjoint from a given line $l$ only depends on whether $l \in \mathcal{L}$ or not. We also find Cameron-Liebler sets in the theory of tight sets for graphs: Cameron-Liebler sets are the tight sets of type $I$ \cite{bart}.
After many results about these Cameron-Liebler line sets in $\PG(3,q)$, the Cameron-Liebler set concept was generalized to many other contexts: Cameron-Liebler line sets in $\PG(n,q)$ \cite{phdDrudge}, Cameron-Liebler sets of $k$-spaces in $\PG(2k+1,q)$ \cite{CLkclas}, Cameron-Liebler sets of $k$-spaces in $\PG(n,q)$ \cite{CLksetn}, Cameron-Liebler classes in finite sets \cite{CLset,eenextra,tweeextra} and Cameron-Liebler sets of generators in finite classical polar spaces \cite{CLpolar}. The central problem for Cameron-Liebler sets is to determine for which parameters $x$ a Cameron-Liebler set exists, and to find examples with these parameters \cite{phdDrudge,feng,CL20,CL21,Klaus,CL26}.
In this article, we will investigate Cameron-Liebler sets in finite classical polar spaces. The finite classical polar spaces are the hyperbolic quadrics $Q^+(2d-1,q)$, the parabolic quadrics $Q(2d,q)$, the elliptic quadrics $Q^-(2d+1,q)$, the Hermitian polar spaces $H(2d-1,q^2)$ and $H(2d,q^2)$, and the symplectic polar spaces $W(2d-1,q)$, with $q$ a prime power. Here we investigate the sets of generators defined by the following definition, where $A$ is the incidence matrix of points and generators, and we call these sets \emph{degree one Cameron-Liebler sets}.
\begin{definition}\label{defspecialCL} A degree one Cameron-Liebler set of generators in a finite classical polar space $\mathcal{P}$ is a set of generators in $\mathcal{P}$, with characteristic vector $\chi$ such that $\chi \in \im(A^T)$. \end{definition}
This definition corresponds with the definition of Boolean degree one functions for generators in polar spaces, in \cite{Ferdinand.} by Y. Filmus and F. Ihringer. In their article, they define Boolean degree one functions, or Cameron-Liebler sets in projective and polar spaces by the fact that the corresponding characteristic vector lies in $V_0\perp V_1$, which are eigenspaces of the related association scheme (see Section \ref{section2}).
In \cite{CLpolar}, M. De Boeck, M. Rodgers, L. Storme and A. \v{S}vob introduced Cameron-Liebler sets of generators in the finite classical polar spaces. In that article, Cameron-Liebler sets of generators in polar spaces are defined by the \emph{disjointness-definition}, and the authors give several equivalent definitions for these Cameron-Liebler sets.
\begin{definition}[{\cite{CLpolar}}]\label{defCL} Let $\mathcal{P}$ be a finite classical polar space with parameter $e$ and rank $d$. A set $\mathcal{L}$ of generators in $\mathcal{P}$ is a Cameron-Liebler set of generators in $\mathcal{P}$ if and only if for every generator $\pi$ in $\mathcal{P}$, the number of elements of $\mathcal{L}$, disjoint from $\pi$ equals $(x-\chi(\pi))q^{\binom{d-1}{2}+e(d-1)}$. \end{definition}
\begin{table}[h]\begin{center}
\begin{tabular}{ | c | c| c| }
\hline
Type $I$ & Type $II$ & Type $III$ \\ \hline
$Q^-(2d+1,q)$ & $Q^+(2d-1,q)$, $d$ even& $Q(4n+2,q)$ \\
$Q(2d,q)$, $d$ even & & $W(4n+1,q)$ \\
$Q^+(2d-1,q)$, $d$ odd & & \\
$W(2d-1,q)$, $d$ even & & \\
$H(2d-1,q)$, $q$ square & & \\
$H(2d,q)$, $q$ square & & \\ \hline
\end{tabular}
\caption{Three types of polar spaces}\label{tabeltype} \end{center}\end{table}
In this article, we consider three different types of polar spaces, see Table \ref{tabeltype}. Types $I$ and $II$ correspond to types $I$ and $II$, respectively, defined in \cite{CLpolar}, while type $III$ in this paper corresponds to the union of types $III$ and $IV$ in \cite{CLpolar}, as we handle the symplectic polar spaces $W(4n+1,q)$, for both $q$ odd and $q$ even, in the same way. Definition \ref{defCL} and Definition \ref{defspecialCL} are equivalent for the polar spaces of type $I$ by \cite[Theorem 3.7, Theorem 3.15]{CLpolar}. For the polar spaces of type $II$ we can consider the (degree one) Cameron-Liebler sets of one class of generators; we see that Cameron-Liebler sets and degree one Cameron-Liebler sets coincide when we only consider one class (see \cite[Theorem 3.16]{CLpolar}). For the polar spaces of type $III$, this equivalence no longer applies: for these polar spaces, any degree one Cameron-Liebler set is also a regular Cameron-Liebler set, but not vice versa.
Cameron-Liebler sets were introduced by a group-theoretical argument: a set $\mathcal{L}$ of lines is a Cameron-Liebler set of lines in $\PG(3,q)$ if and only if PGL$(3,q)$ has the same number of orbits on the lines of $\mathcal{L}$ and on the points of $\PG(3,q)$.
If the incidence matrix $A$ of points and generators of a polar space $\mathcal{P}$ has trivial kernel, then we also find a group-theoretical definition for degree one Cameron-Liebler sets of generators in $\mathcal{P}$. This theorem follows from {\cite[Lemma 3.3.11]{phdfred}}.
\begin{theorem}
Let $X$ be the set of points in a classical polar space $\mathcal{P}$, let $M$ be the set of generators in $\mathcal{P}$ and let $A$ be the point-generator incidence matrix of $\mathcal{P}$. Consider an automorphism group $G$ acting on the sets $X$ and $M$ with orbits $O_1, \dots, O_n$ and $O'_1, \dots, O'_m$, respectively. If $A$ has trivial kernel, then each $O'_i$ is a degree one Cameron-Liebler set in $\mathcal{P}$ if and only if $n=m$.
\end{theorem}
In Section $2$ we give some preliminaries about the classical polar spaces and we discuss several properties of the eigenvalues of the association scheme for generators of finite classical polar spaces. In Section $3$, we give an overview of the equivalent definitions and several properties of degree one Cameron-Liebler sets in polar spaces. In Section $4$ we give an equivalent definition for Cameron-Liebler sets in the hyperbolic quadrics $Q^+(2d-1,q)$, $d$ even and in Section $5$ we end with some classification results for degree one Cameron-Liebler sets, especially in the polar spaces $W(5,q)$ and $Q(6,q)$.
\section{Preliminaries}\label{section2} For an extensive and detailed introduction about distance-regular graphs, polar spaces and association schemes for generators of finite classical polar spaces, we refer to \cite{CLpolar}. For more general information about association schemes of distance regular graphs, we refer to \cite{bose, brouwer}. We only repeat the necessary definitions and information. Note that in this article, we will work in a projective context and all mentioned dimensions are projective dimensions. \subsection{Finite classical polar spaces}\label{setcionfcps}
We start with the definition of finite classical polar spaces. \begin{definition} Finite classical polar spaces are incidence geometries consisting of subspaces that are totally isotropic with respect to a non-degenerate quadratic or non-degenerate reflexive sesquilinear form on a vector space $\mathbb{F}_q^{n+1}$. \end{definition} In this article all polar spaces we will handle are the finite classical polar spaces, so we will call them the polar spaces. We also give the definition of the rank and the parameter $e$ of a polar space. \begin{definition} A generator of a polar space is a subspace of maximal dimension and the rank $d$ of a polar space is the projective dimension of a generator plus $1$. The parameter of a polar space $\mathcal{P}$ over $\mathbb{F}_q$ is defined as the number $e$ such that the number of generators through a $(d-2)$-space of $\mathcal{P}$ equals $q^e+1$. \end{definition} In Table \ref{tabele} we give the parameter $e$ of the polar spaces.
\begin{table}[h]\begin{center}
\begin{tabular}{ | c | c| }
\hline
Polar space & $e$ \\ \hline \hline
$Q^+(2d-1,q)$ & $0$ \\ \hline
$H(2d-1,q)$ & $1/2$ \\ \hline
$W(2d-1,q)$ & $1$ \\ \hline
$Q(2d,q)$ & $1$ \\ \hline
$H(2d,q)$ & $3/2$ \\ \hline
$Q^-(2d+1,q)$ & $2$ \\ \hline
\end{tabular}
\caption{The parameter of the polar spaces}\label{tabele} \end{center}\end{table}
To ease the notations, we will work with the \emph{Gaussian binomial coefficient} $\begin{bmatrix}a\\b\end{bmatrix}_q$ for positive integers $a,b$ and prime power $q\geq 2$: \begin{align*} \begin{bmatrix}a\\b\end{bmatrix}_q=\prod_{i=1}^b \frac{q^{a-b+i}-1}{q^i-1} = \frac{(q^a-1)\dots (q^{a-b+1}-1)}{(q^b-1)\dots (q-1)}. \end{align*} We will write $\qbin ab$ if the field size $q$ is clear from the context. The number $\qbin ab_q$ equals the number of $(b-1)$-spaces in $\PG(a-1,q)$, and the equality $\qbin ab = \qbin{a}{a-b}$ follows immediately from duality.\\
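The Gaussian binomial is easy to evaluate in exact integer arithmetic; the following sketch is our own illustration (the final division is exact, since the coefficient is always an integer), together with the duality check $\qbin ab = \qbin{a}{a-b}$:

```python
def qbin(a, b, q):
    """Gaussian binomial [a choose b]_q: number of (b-1)-spaces of PG(a-1, q)."""
    if b < 0 or b > a:
        return 0
    num, den = 1, 1
    for i in range(1, b + 1):
        num *= q ** (a - b + i) - 1
        den *= q ** i - 1
    return num // den  # the quotient is exact

# [4 choose 2]_2 = 35, the number of lines of PG(3, 2)
print(qbin(4, 2, 2))  # 35
# duality: [a choose b]_q = [a choose a-b]_q
assert all(qbin(a, b, 3) == qbin(a, a - b, 3)
           for a in range(8) for b in range(a + 1))
```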
We end this section by defining several substructures in polar spaces. A \emph{spread} in a polar space $\mathcal{P}$ is a set $S$ of generators in $\mathcal{P}$ such that every point of $\mathcal{P}$ is contained in precisely one element of $S$. A \emph{point-pencil} in a polar space $\mathcal{P}$ with vertex $P\in \mathcal{P}$ is the set of all generators in $\mathcal{P}$ through the point $P$. \\
\subsection{Eigenvalues of the association scheme for generators in polar spaces} First of all, remark that in this article vectors are regarded as column vectors and we define $\textbf{\textit{j}}_n$ to be the all one vector of length $n$. We write $\textbf{\textit{j}}$ for $\textbf{\textit{j}}_n$ if the length is clear from the context.\\ Let $\mathcal{P}$ be a finite classical polar space of rank $d$ and let $\Omega$ be its set of generators. The relations $R_i$ on $\Omega$ are defined as follows: $(\pi,\pi') \in R_i$ if and only if $\dim(\pi \cap \pi') =d-i-1$, for generators $\pi,\pi'\in \Omega$ with $i=0, ...,d$. We define $A_i$ as the incidence matrix of the relation $R_i$. By the theory of association schemes we know that there is an orthogonal decomposition $V_0 \perp V_1 \perp \cdots \perp V_d$ of $\mathbb{R}^\Omega$ in common eigenspaces of $A_0,A_1,...,A_d$.
\begin{lemma}[{\cite[Theorem 4.3.6]{phdfred}}]\label{eigenvallem} In the association scheme of a polar space over $\mathbb{F}_q$ of rank $d$ and parameter $e$, the eigenvalue $P_{ji}$ of the relation $R_i$ corresponding to the subspace $V_j$ is given by: \begin{align*} P_{ji} = \sum\limits_{s=\max{(0,j-i)}}^{\min{(j,d-i)}} (-1)^{j+s}\begin{bmatrix} j\\s\end{bmatrix} \begin{bmatrix}d-j \\ d-i-s\end{bmatrix} q^{e(i+s-j)+\binom{j-s}{2}+\binom{i+s-j}{2}}. \end{align*} \end{lemma}
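This formula can be evaluated numerically as a sanity check; the sketch below is our own illustration and is restricted to integer $e$ (so it covers the quadrics and symplectic spaces, but not the Hermitian ones, where $e$ is a half-integer). The row $j=0$ consists of the valencies, so it sums to the total number of generators $\prod_{i=0}^{d-1}(q^{e+i}+1)$, every row $j\geq 1$ sums to $0$, and for $W(5,2)$ (rank $d=3$, $e=1$) one sees $P_{1d}=P_{dd}$:

```python
def qbin(a, b, q):
    # Gaussian binomial coefficient [a choose b]_q
    if b < 0 or b > a:
        return 0
    num, den = 1, 1
    for i in range(1, b + 1):
        num *= q ** (a - b + i) - 1
        den *= q ** i - 1
    return num // den

def P(j, i, d, q, e):
    # eigenvalue P_{ji} of relation R_i on eigenspace V_j (integer e only)
    total = 0
    for s in range(max(0, j - i), min(j, d - i) + 1):
        exp = (e * (i + s - j)
               + (j - s) * (j - s - 1) // 2
               + (i + s - j) * (i + s - j - 1) // 2)
        total += ((-1) ** (j + s) * qbin(j, s, q)
                  * qbin(d - j, d - i - s, q) * q ** exp)
    return total

d, q, e = 3, 2, 1  # W(5, 2): rank 3, parameter e = 1
generators = 1
for i in range(d):
    generators *= q ** (e + i) + 1  # (q+1)(q^2+1)(q^3+1) = 135
assert sum(P(0, i, d, q, e) for i in range(d + 1)) == generators
assert all(sum(P(j, i, d, q, e) for i in range(d + 1)) == 0
           for j in range(1, d + 1))
assert P(1, d, d, q, e) == P(d, d, d, q, e)  # as in the lemma below, case 2
print("all eigenvalue checks passed")
```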
Before we start with investigating the Cameron-Liebler sets of generators in finite classical polar spaces, we give an important lemma about the eigenvalues $P_{ji}$.\\
\begin{lemma} \label{lemma2} In the association scheme of polar spaces, the eigenvalue $P_{1i}$ of $A_i$ corresponds only with the eigenspace $V_1$ for $i\neq 0$, except in the following cases. \begin{enumerate} \item The hyperbolic quadrics $Q^+(2d-1,q)$. Here $P_{1i}=P_{d-1,i}$ for $i$ even, so $P_{1i}$ also corresponds with $V_{d-1}$, for every relation $R_i$, $i$ even.
\item The parabolic quadrics $Q(4n+2,q)$ and the symplectic polar spaces $W(4n+1,q)$. Here $P_{1d} = P_{dd}$, so $P_{1d}$ also corresponds with $V_d$ for the disjointness relation $R_d$. \end{enumerate} \end{lemma} \begin{proof} We need to prove, given a fixed $i$ and $j\neq 1$, that $P_{1i} \neq P_{ji}$ for $q$ a prime power. For $j=0$ and for all $i\neq 0$, it is easy to calculate that $P_{1i}\neq P_{0i}$, so we can suppose that $j>1$.
For $i=1$ we can directly compare the eigenvalues $P_{11}$ and $P_{j1}$. \begin{align*} P_{11} = P_{j1} &\Leftrightarrow \begin{bmatrix}d-1 \\ 1\end{bmatrix} q^e -1= \begin{bmatrix}d-j \\ 1\end{bmatrix} q^e -\begin{bmatrix}j \\ 1 \end{bmatrix}\\ & \Leftrightarrow \frac{-q+1+(q^{d-1}-1)q^e}{q-1}=\frac{-q^j+1+(q^{d-j}-1)q^e}{q-1}\\ & \Leftrightarrow (q^{d-j+e-1}+1)(q^{j-1}-1)=0 \end{align*} Since $j>1$ the last equation gives a contradiction for any $q$.
For $i\geq 2$ we introduce $\phi_i(j) = \max\{k \mid q^k \mid P_{ji} \}$, the exponent of
$q$ in $P_{ji}$. If $P_{ji}=0$, we put $\phi_i(j)=\infty$. We will show that $\phi_i(j)$ is different from $\phi_i(1)$ for most values of $i$ and $j$. For $j=1$, we find that \begin{align*} P_{1i}=-\begin{bmatrix} d-1 \\ d-i\end{bmatrix} q^{\binom{i-1}{2}+e(i-1)}+\begin{bmatrix} d-1 \\ i\end{bmatrix} q^{\binom{i}{2}+ei}=q^{\binom{i-1}{2}+e(i-1)} \left(\qbin{d-1}{i}q^{i-1+e} -\qbin{d-1}{i-1} \right). \end{align*} We can see that $\phi_i(1)=\binom{i-1}{2}+e(i-1)$, since $i-1+e\geq 1$ and $\qbin ab\equiv 1 \pmod q$ for all $0\leq b\leq a$.
In Lemma \ref{eigenvallem} we see that ${\phi_i(j)}$ depends on the last factor of every term in the sum. To find $\phi_i(j)$ we first need to find $z$ such that $q^{e(i+z-j)+\binom{j-z}{2}+\binom{i+z-j}{2}}$ is a factor of every term in the sum, or equivalently, such that $f_{ji}(s)=e(i+s-j)+\binom{j-s}{2}+\binom{i+s-j}{2}$ reaches its minimum for $s=z$. So in most cases we have that $\phi_i(j)=f_{ji}(z)$, but in some cases two values of $z$ correspond with opposite terms with factor $q^{\phi_i(j)}$. We have to investigate these cases separately. \\ We can check that $z$ is the integer or integers in $\{\max\{0,j-i\}, \dots, \min\{j,d-i\}\}$ closest to $j-\frac{i}{2}-\frac{e}{2}$. Since $i\geq 2$ we have three possibilities for the value of $z$, as we always have $j-i\leq j-\frac{i}{2}-\frac{e}{2}<j$: \begin{itemize}
\item $z=0$ if $j-\frac{i}{2}-\frac{e}{2}<0$,
\item $z \in \{j-\frac{i}{2}-\frac{e}{2}, j-\frac{i}{2}-\frac{e}{2}\pm \frac{1}{2}\}$ if $0\leq j-\frac{i}{2}-\frac{e}{2}\leq d-i$,
\item $z=d-i$ if $j-\frac{i}{2}-\frac{e}{2}>d-i$. \end{itemize}
Now we handle these three cases.
\begin{itemize} \item If $j-\frac{i}{2}-\frac{e}{2} < 0$, we see that $f_{ji}$ is minimal for the integer $z=0$.
We remark that in this case there is only one value of $s$, namely $0$, for which the corresponding term is divisible by $q^{\phi_i(j)}$ but not by $q^{\phi_i(j)+1}$. This is important to exclude the case where two terms with factor $q^{\phi_i(j)}$ would be each other's opposites.
We find that $\phi_i(j)=f_{ji}(0)=\binom{i}{2}+(j-i)(j-e)$, and since $\phi_i(1)=\binom{i-1}{2}+e(i-1)$, the values $\phi_i(j)$ and $\phi_i(1)$ are equal if and only if $j=1$ or $j= i+e-1$. We only have to check the latter case, and recall that $ j-\frac{i}{2}-\frac{e}{2} < 0$. It follows that $i+e< 2$, a contradiction since we supposed $i\geq 2$.
\item If $0 \leq j-\frac{i}{2}-\frac{e}{2} \leq d-i$ we see that $f_{ji}$ is minimal for the integer $z$ closest to $j-\frac{i}{2}-\frac{e}{2}$.
\begin{table}[hp] \centering \renewcommand{\arraystretch}{2.1} \setlength\extrarowheight{-2pt}
\begin{tabular}{|>{\centering\arraybackslash}p{6mm}|p{20mm}|p{26mm}|p{45mm}|>{\centering\arraybackslash}p{20mm}|>{\centering\arraybackslash}p{6mm}|}\hline
\boldmath{$e$} & \boldmath{$i$} & \boldmath{$z$} & \boldmath{$\phi_i(j)=f_{ji}(z)$} &\boldmath{$\phi_i(1)$}& \boldmath{$S$}\\ \hline
\hline \multicolumn{6}{|c|}{$Q^+ (2d-1,q)$}\\ \hline
\multirow{2}{*}{$0$}
& even & $j-\frac{i}{2}$ & $\frac{i(i-2)}{4}$&$\frac{(i-1)(i-2)}{2}$ & $\{2\}$\\ \cline{2-6}
& odd & $j-\frac{i}{2}\pm \frac{1}{2}$ & $\begin{cases} \frac{(i-1)^2}{4} &\mbox{if } j\neq \frac{d}{2} \\
\infty & \mbox{if } j=\frac{d}{2} \end{cases}$&$\frac{(i-1)(i-2)}{2}$ & $\{3\}$\\ \hline
\hline \multicolumn{6}{|c|}{$\mathcal{H}(2d-1,q)$, with $q$ square}\\ \hline \multirow{2}{*}{$\frac{1}{2}$}
& even & $j-\frac{i}{2}$ & $\frac{i(i-1)}{4}$&$\frac{(i-1)^2}{2}$ & $\{2\}$\\ \cline{2-6}
& odd & $j-\frac{i}{2}-\frac{1}{2}$ & $\frac{i(i-1)}{4}$&$\frac{(i-1)^2}{2}$ & $\emptyset$\\ \hline
\hline \multicolumn{6}{|c|}{$Q(2d,q)$, $W(2d-1,q)$, with $d\not\equiv 0 \pod{4}$}\\ \hline \multirow{2}{*}{$1$}
& even & $j-\frac{i}{2}-\frac{1}{2}\pm \frac{1}{2}$ & $ \frac{i^2}{4}$&$\frac{i(i-1)}{2}$ & $\{2\}$\\ \cline{2-6}
& odd & $j-\frac{i}{2}-\frac{1}{2}$ & $\frac{i^2-1}{4}$&$\frac{i(i-1)}{2}$ & $\emptyset$\\ \hline
\hline \multicolumn{6}{|c|}{$Q(2d,q)$, $W(2d-1,q)$, with $d\equiv 0 \pod{4}$}\\ \hline \multirow{3}{*}{$1$} & even, $i\neq\frac{d}{2}$ & $j-\frac{i}{2}-\frac{1}{2}\pm \frac{1}{2}$ & $\frac{i^2}{4}$&$\frac{i(i-1)}{2}$ & $\{2\}$\\ \cline{2-6}
& $i=\frac{d}{2}$ & $j-\frac{i}{2}-\frac{1}{2}\pm \frac{1}{2}$ & \vrule height 5ex depth 3.5ex width 0pt $\begin{cases} \infty &\text{if $j= \frac{d}{2}+1$}\\ \frac{i^2}{4} & \text{else} \end{cases}$&$\frac{i(i-1)}{2}$ & $\{2\}$\\ \cline{2-6}
& odd & $j-\frac{i}{2}-\frac{1}{2}$ & $\frac{i^2-1}{4}$&$\frac{i(i-1)}{2}$ & $\emptyset$\\ \hline
\hline \multicolumn{6}{|c|}{$\mathcal{H}(2d,q)$, with $q$ square}\\ \hline \multirow{2}{*}{$\frac{3}{2}$} & even & $j-\frac{i}{2}-1$ & $\frac{(i-1)(i+2)}{4}$&$\frac{i^2-1}{2}$ & $\emptyset$\\ \cline{2-6}
& odd & $j-\frac{i}{2}-\frac{1}{2}$ & $\frac{(i-1)(i+2)}{4}$ &$\frac{i^2-1}{2}$ & $\emptyset$\\ \hline
\hline \multicolumn{6}{|c|}{$Q^- (2d+1,q)$, with $d\not\equiv 2 \pod{4}$}\\ \hline \multirow{2}{*}{$2$}
& even & $j-\frac{i}{2}-1$ & $\frac{i^2}{4}+\frac{i}{2}-1$&$\frac{(i-1)(i+2)}{2}$ & $\emptyset$\\ \cline{2-6}
& odd & $j-\frac{i}{2}-1\pm\frac{1}{2}$ & $\frac{(i-1)(i+3)}{4}$&$\frac{(i-1)(i+2)}{2}$ & $\emptyset$\\ \hline
\hline \multicolumn{6}{|c|}{$Q^- (2d+1,q)$, with $d\equiv 2 \pod{4}$}\\ \hline \multirow{3}{*}{$2$}
& even & $j-\frac{i}{2}-1$ & $\frac{i^2}{4}+\frac{i}{2}-1$&$\frac{(i-1)(i+2)}{2}$ & $\emptyset$\\ \cline{2-6}
& odd, $i\neq\frac{d}{2}$ & $j-\frac{i}{2}-1\pm \frac{1}{2}$ & $\frac{(i-1)(i+3)}{4}$&$\frac{(i-1)(i+2)}{2}$ & $\emptyset$\\ \cline{2-6}
& $i=\frac{d}{2}$ & $j-\frac{i}{2}-1\pm\frac{1}{2}$ & \vrule height 5ex depth 3.5ex width 0pt $\begin{cases} \infty &\text{if $j= \frac{d}{2}+2$} \\ \frac{(i-1)(i+3)}{4} & \text{else} \end{cases}$&$\frac{(i-1)(i+2)}{2}$ & $\emptyset$\\ \hline
\end{tabular} \caption{For $0\leq j-\frac{i}{2}-\frac{e}{2} \leq d-i$, with $S=\{ i \geq 2 \mid\phi_i(j)=\phi_i(1)\}$. } \label{tabel} \end{table}
In Table \ref{tabel} we list the different cases depending on $e$ and the parity of $i$. Note that we have to check, for $e=0$ and $i$ odd, for $e=1$ and $i$ even, and for $e=2$ and $i$ odd, that the two values of $z$ do not correspond with two opposite terms with factor $q^{\phi_i(j)}$. By calculating and taking into account the conditions $0\leq j-\frac{i}{2}-\frac{e}{2}\leq d-i$, we find that those cases do not correspond with two opposite terms, except in the following cases: \begin{itemize}
\item $e=0, j=\frac{d}{2}$ and $i$ odd,
\item $e=1, j=\frac{d}{2}+1, i=\frac{d}{2}$ and $i$ even,
\item $e=2,j=\frac{d}{2}+2, i=\frac{d}{2}$ and $i$ odd. \end{itemize} In these cases, $P_{ji}=0$, so $\phi_i(j)=\infty\neq \phi_i(1)$.
Remark that for every $e$, $i$ and $j>1$, $\phi_i(j)=f_{ji}(z)$ is independent of $j$, see the fourth column in Table \ref{tabel}. In the last column we give the values of $i$ for which $\phi_i(j)=\phi_i(1)$. As we supposed $i\geq 2$, we see that we have to check the eigenvalues in detail for $i=2$ if $e\in \{0,\frac{1}{2}, 1\}$ and for $i=3$ if $e=0$. \begin{itemize}
\item Case $i=2$ and $e\in\{0,\frac{1}{2},1\}$:\small {\begin{align*} & P_{12} = P_{j2} \\ & \Leftrightarrow -\begin{bmatrix} d-1 \\ 1\end{bmatrix}q^e+ \begin{bmatrix}d-1 \\ 2\end{bmatrix} q^{1+2e} =\begin{bmatrix} j\\2 \end{bmatrix}q -\begin{bmatrix} d-j\\ 1 \end{bmatrix}\begin{bmatrix}j \\ 1 \end{bmatrix}q^e+\begin{bmatrix}d-j \\ 2\end{bmatrix} q^{1+2e} \\ & \Leftrightarrow \left(\qbin{d-1}{2}-\qbin{d-j}{2} \right) q^{2e} + \qbin{d-j-1}{1}\qbin{j-1}{1} q^{e} =\begin{bmatrix} j\\2 \end{bmatrix}. \end{align*} For $e=\frac{1}{2}$ and $e=1$, we see that the left- and right-hand sides of the last equation are different modulo $q$. So we can assume $e=0$. \begin{align*} & P_{12} = P_{j2}\\
&\Leftrightarrow \frac{(q^{d-1}-1)(q^{d-2}-1)}{(q^2-1)(q-1)}-\frac{(q^{d-j}-1)(q^{d-j-1}-1)}{(q^2-1)(q-1)}+\frac{(q^{d-j-1}-1)(q^{j-1}-1)}{(q-1)(q-1)} = \frac{(q^j-1)(q^{j-1}-1)}{(q^2-1)(q-1)} \\ & \Leftrightarrow q^{2d-3}-q^{2d-2j-1}+q-q^{2j-1}=0 \\ & \Leftrightarrow q(q^{2j-2}-1)(q^{2(d-j-1)}-1)=0 \end{align*}} Since $j>1$, we see that $P_{12}=P_{j2}$ if and only if $j=d-1$. This corresponds with the first exception in the lemma with $i=2$.
\item Case $i=3$ and $e=0$. \begin{align*} &P_{13} = P_{j3} \\ &\Leftrightarrow -\begin{bmatrix} d-1 \\ 2\end{bmatrix}q+ \begin{bmatrix}d-1 \\ 3\end{bmatrix} q^{3} =-\begin{bmatrix}j \\ 3 \end{bmatrix}q^3+\begin{bmatrix} j\\2 \end{bmatrix}\begin{bmatrix} d-j\\ 1 \end{bmatrix}q -\begin{bmatrix}j \\ 1 \end{bmatrix}\begin{bmatrix} d-j\\ 2 \end{bmatrix}q+\begin{bmatrix}d-j \\ 3\end{bmatrix} q^{3} \\ &\Leftrightarrow -\begin{bmatrix} d-1 \\ 2\end{bmatrix}+ \begin{bmatrix}d-1 \\ 3\end{bmatrix} q^{2} =-\begin{bmatrix}j \\ 3 \end{bmatrix}q^2+\begin{bmatrix} j\\2 \end{bmatrix}\begin{bmatrix} d-j\\ 1 \end{bmatrix} -\begin{bmatrix}j \\ 1 \end{bmatrix}\begin{bmatrix} d-j\\ 2 \end{bmatrix}+\begin{bmatrix}d-j \\ 3\end{bmatrix} q^{2} \end{align*} Since the left- and right-hand sides of the last equation are different modulo $q$ (recall that $\qbin ab \equiv 1 \pmod q$), we see that $P_{13} \neq P_{j3}$ for $j>1$.
\end{itemize}
\item If $ j-\frac{i}{2}-\frac{e}{2} > d-i$, we see that $f_{ji}$ is minimal for the integer $z=d-i$. Remark again that there is only one value of $s$ for which the corresponding term is divisible by $q^{\phi_i(j)}$ but not by $q^{\phi_i(j)+1}$. This excludes the case where two terms with factor $q^{\phi_i(j)}$ would be each other's opposites.
We find that $\phi_i(j)=f_{ji}(d-i)=(j-e-d+1)(j-d+i-1)+\binom{i-1}{2}+e(i-1)$, and we know that $\phi_i(1)=\binom{i-1}{2}+e(i-1)$. These two values $\phi_i(j)$ and $\phi_i(1)$ are equal if and only if $j = e+d-1$ or $j= d-i+1$. \begin{itemize} \item Suppose $j=d+e-1$. As $j,d \in \mathbb{Z}$, we know that $e \in \mathbb{Z}$. If $e=2$, then $j=d+1 > d$, a contradiction. For $e=1$, we find that $P_{1i}=P_{di}$ if and only if $i=d$ and $d$ odd. This corresponds to the polar spaces $Q(4n+2,q)$ and $ W(4n+1,q)$. For $e=0$ and $j=d-1$, we find that $P_{1i}=P_{d-1,i}$ for $i$ even. This corresponds to the exception for the polar spaces $Q^+(2d-1,q)$ and $i$ even. \item Suppose $j=d-i+1$. Since $j-\frac{i}{2}-\frac{e}{2} > d-i$, we know that $i+e<2$, which gives a contradiction as we supposed $i\geq 2$.\qedhere \end{itemize} \end{itemize} \end{proof}
We continue with well-known theorems that will be useful in the following sections. The first theorem follows from \cite[Theorem 2.14]{CLpolar}, which was originally proven in \cite{delsarte}. For the second theorem we add a proof for completeness; the ideas are already present in \cite[Lemma 2]{bamberg} and \cite[Lemma 2.1.3]{phdfred}. \begin{theorem}\label{lemmaV0V1} Let $\mathcal{P}$ be a finite classical polar space of rank $d$ and parameter $e$, and let $\Omega$ be the set of all generators of $\mathcal{P}$. Consider the eigenspace decomposition $\mathbb{R}^\Omega =V_0\perp V_1 \perp \dots \perp V_d$ related to the association scheme, with the classical ordering. Let $A$ be the point-generator incidence matrix of $\mathcal{P}$. Then $\im(A^T) = V_0 \perp V_1$ and $V_0 = \langle \textbf{\textit{j}} \rangle$. \end{theorem}
\begin{theorem} \label{stellingalgemeen} Let $R_i$ be a relation of an association scheme on the set $\Omega$ with adjacency matrix $A_i$ and let $\mathcal{L} \subseteq \Omega$ be a set, with characteristic vector $\chi$, such that for any $\pi \in \Omega$, we have that \begin{align*}
|\{x\in \mathcal{L} \mid (x,\pi) \in R_i \}| = \begin{cases} \alpha_i & \text{if } \pi \in \mathcal{L}, \\ \beta_i & \text{if } \pi \notin \mathcal{L}, \end{cases} \end{align*} where $\alpha_i -\beta_i=P$ is an eigenvalue of $A_i$ for the eigenspace $V$. Then $v_i= \chi + \frac{\beta_i}{P-P_{0i}}\textbf{\textit{j}} \in V$.
\end{theorem} Remark that the eigenspace $V$ in the previous theorem can be the direct sum of several eigenspaces of the association scheme. Note that a full association scheme is not necessary for this theorem; a single regular relation suffices. \begin{proof}
We show that $v_i = \chi + \frac{\beta_i}{P - P_{0i}}\textbf{\textit{j}}$, with $P=\alpha_i-\beta_i$, is an eigenvector of the matrix $A_i$ with eigenvalue $P$:
\begin{align*}
A_i\left(\chi + \frac{\beta_i}{P - P_{0i}}\textbf{\textit{j}}\right) =& \alpha_i \chi + \beta_i (\textbf{\textit{j}}-\chi) + \frac{\beta_i}{P - P_{0i}}P_{0i} \textbf{\textit{j}} \\
=& P\left(\chi + \frac{\beta_i}{P - P_{0i}}\textbf{\textit{j}}\right).
\end{align*} So we find that $\chi + \frac{\beta_i}{P - P_{0i}}\textbf{\textit{j}} \in V$.
\end{proof}
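Theorem \ref{stellingalgemeen} can be illustrated on a toy instance (ours, not from the paper): take for the relation the adjacency matrix $J-I$ of the complete graph on $n$ vertices. Any $m$-subset $\mathcal{L}$ then satisfies the hypothesis with $\alpha=m-1$ and $\beta=m$, so $P=\alpha-\beta=-1$ (an eigenvalue of $J-I$ on $\langle \textbf{\textit{j}}\rangle^\perp$) and $P_{0}=n-1$, and the vector $\chi+\frac{\beta}{P-P_{0}}\textbf{\textit{j}}$ must be an eigenvector with eigenvalue $-1$. A short numerical check:

```python
import numpy as np

# Toy relation (ours, not from the paper): Omega = n vertices of the
# complete graph, A = J - I.  Every pi in an m-subset L has alpha = m - 1
# neighbours in L; every pi outside L has beta = m.  Hence
# P = alpha - beta = -1, an eigenvalue of A on <j>^perp, and P_0 = n - 1.
n, m = 7, 3
A = np.ones((n, n)) - np.eye(n)
chi = np.zeros(n)
chi[:m] = 1.0                              # characteristic vector of L
j = np.ones(n)

alpha, beta = m - 1, m
P, P0 = alpha - beta, n - 1

v = chi + beta / (P - P0) * j              # the vector from the theorem
assert np.allclose(A @ v, P * v)           # eigenvector with eigenvalue P
assert abs(float(v @ j)) < 1e-9            # so v lies in V = <j>^perp
```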
\section{Degree one Cameron-Liebler sets} In this section we investigate the degree one Cameron-Liebler sets and give an equivalent definition.
Recall that for polar spaces of type $I$, Cameron-Liebler sets and degree one Cameron-Liebler sets coincide.
Using Lemma \ref{lemma2} and Theorem \ref{stellingalgemeen}, we can give a new equivalent definition for these degree one Cameron-Liebler sets of generators in polar spaces. Remark that the following theorem is an extension of \cite[Lemma 4.9]{CLpolar}.
\begin{theorem} \label{stelling}
Let $\mathcal{P}$ be a finite classical polar space of rank $d$ with parameter $e$, let $\mathcal{L}$ be a set of generators of $\mathcal{P}$ and let $i$ be an integer with $1\leq i\leq d$. Denote $\frac{|\mathcal{L}|}{\prod_{k=0}^{d-2}(q^{e+k}+1)}$ by $x$. If $\mathcal{L}$ is a degree one Cameron-Liebler set of generators in $\mathcal{P}$ then the number of elements of $\mathcal{L}$ meeting a generator $\pi$ in a $(d-i-1)$-space equals
\begin{align}\label{formulelang}
\begin{cases}
\left( (x-1) \begin{bmatrix}d-1 \\i-1\end{bmatrix} +q^{i+e-1}\begin{bmatrix}d-1 \\ i\end{bmatrix} \right) q^{\binom{i-1}{2}+ (i-1)e} & \mbox{if } \pi \in \mathcal{L},\\ x \begin{bmatrix} d-1 \\ i-1\end{bmatrix} q^{\binom{i-1}{2}+(i-1)e} & \mbox{if }\pi \notin \mathcal{L}.
\end{cases}
\end{align}
If this property holds for a polar space $\mathcal{P}$ and an integer $i$ such that
\begin{itemize}
\item $i$ is odd for $\mathcal{P}=Q^+(2d-1,q)$,
\item $i\neq d$ for $\mathcal{P}=Q(2d,q)$ or $\mathcal{P}=W(2d-1,q)$ both with $d$ odd or
\item $i$ is arbitrary otherwise,
\end{itemize}
then $\mathcal{L}$ is a degree one Cameron-Liebler set with parameter $x$. \end{theorem} \begin{proof}
Consider first a degree one Cameron-Liebler set $\mathcal{L}$ of generators in the polar space $\mathcal{P}$ with characteristic vector $\chi$. As $\chi \in V_0\perp V_1$, we have $\chi = v+a\textbf{\textit{j}}$ for some $v\in V_1$ and some $a\in \mathbb{R}$. Since $|\mathcal{L}| = \langle \textbf{\textit{j}},\chi \rangle = x\prod_{k=0}^{d-2}(q^{k+e}+1)$, we find that $a=\frac{x}{q^{d+e-1}+1}$, hence $\chi = \frac{x}{q^{d+e-1}+1}\textbf{\textit{j}}+v$.
Recall that the matrix $A_i$ is the incidence matrix of the relation $R_i$, which describes whether the dimension of the intersection of two generators equals $d-i-1$ or not. This implies that the vector $A_i \chi$, on the position corresponding to a generator $\pi$, gives the number of generators in $\mathcal{L}$, meeting $\pi$ in a $(d-i-1)$-space. We have \begin{align*} A_i \chi =& A_iv+\frac{x}{q^{d+e-1}+1}A_i \textbf{\textit{j}}= P_{1i}v+\frac{x}{q^{d+e-1}+1}P_{0i}\textbf{\textit{j}} \\ =& \left( \begin{bmatrix}d-1 \\i \end{bmatrix}q^{\binom{i}{2}+ei}-\begin{bmatrix}d-1 \\ i-1\end{bmatrix}q^{\binom{i-1}{2}+e(i-1)} \right) v+ \frac{x}{q^{d+e-1}+1}\begin{bmatrix}d \\ i\end{bmatrix}q^{\binom{i}{2}+ei} \textbf{\textit{j}}\\ =& \left( \begin{bmatrix}d-1 \\i \end{bmatrix}q^{\binom{i}{2}+ei}-\begin{bmatrix}d-1 \\ i-1\end{bmatrix}q^{\binom{i-1}{2}+e(i-1)} \right) \left(\chi - \frac{x}{q^{d+e-1}+1}\textbf{\textit{j}} \right)+ \frac{x}{q^{d+e-1}+1}\begin{bmatrix}d \\ i\end{bmatrix}q^{\binom{i}{2}+ei} \textbf{\textit{j}}\\ =&\frac{xq^{\binom{i-1}{2}+e(i-1)}}{q^{d+e-1}+1}\left(\begin{bmatrix}d-1 \\i-1 \end{bmatrix}-\begin{bmatrix}d-1 \\i \end{bmatrix}q^{i+e-1}+\begin{bmatrix}d \\i \end{bmatrix}q^{i+e-1} \right)\textbf{\textit{j}} \\ & + q^{\binom{i-1}{2}+e(i-1)}\left(\begin{bmatrix}d-1 \\i \end{bmatrix}q^{i+e-1}-\begin{bmatrix}d-1 \\i-1 \end{bmatrix} \right) \chi \\ =& q^{\binom{i-1}{2}+e(i-1)} \left(x\begin{bmatrix}d-1 \\i-1 \end{bmatrix}\textbf{\textit{j}} + \left(\begin{bmatrix}d-1 \\i \end{bmatrix}q^{i+e-1} -\begin{bmatrix}d-1 \\i-1 \end{bmatrix}\right) \chi \right), \end{align*} which proves the first implication.\\
For the proof of the other implication, suppose that $\mathcal{L}$ is a set of generators in $\mathcal{P}$ with the property described in the statement of the theorem. We apply Theorem \ref{stellingalgemeen} with $\Omega$ the set of all generators in $\mathcal{P}$, $R_i$ the relation $\{(\pi, \pi')|\dim(\pi \cap\pi') = d-i-1\}$, and
\begin{align*}
\alpha_i &= \left( (x-1) \begin{bmatrix}d-1 \\i-1\end{bmatrix} +q^{i+e-1}\begin{bmatrix}d-1 \\ i\end{bmatrix} \right) q^{\binom{i-1}{2}+ (i-1)e},\\
\beta_i &= x \begin{bmatrix} d-1 \\ i-1\end{bmatrix} q^{\binom{i-1}{2}+(i-1)e}.\end{align*}
As $\alpha_i - \beta_i = P_{1i}$, we find that $v_i = \chi + \frac{\beta_i}{P_{1i} - P_{0i}}\textbf{\textit{j}} \in V_1$ for the admissible values of $i$, by Lemma \ref{lemma2}. Hence, by Definition \ref{defspecialCL}, $\mathcal{L}$ is a degree one Cameron-Liebler set in $\mathcal{P}$. \end{proof}
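The agreement between the counts \eqref{formulelang} and the eigenvalue $P_{1i}$ can also be verified numerically. The Python sketch below (ours, illustrative only; integer $e$ only, and the helper names are ours) recomputes $P_{1i}$ from the full sum in Lemma \ref{eigenvallem} and checks that $\alpha_i-\beta_i=P_{1i}$ over a range of parameters; note that the parameter $x$ cancels in the difference.

```python
from math import comb

def gb(a, b, q):
    # Gaussian binomial [a, b]_q
    if b < 0 or b > a:
        return 0
    num = den = 1
    for k in range(b):
        num *= q**(a - k) - 1
        den *= q**(k + 1) - 1
    return num // den

def P(j, i, d, q, e):
    # eigenvalue P_{ji} from the lemma (integer e only)
    return sum((-1)**(j + s) * gb(j, s, q) * gb(d - j, d - i - s, q)
               * q**(e*(i + s - j) + comb(j - s, 2) + comb(i + s - j, 2))
               for s in range(max(0, j - i), min(j, d - i) + 1))

# alpha_i - beta_i from the theorem must equal P_{1i}; x cancels.
for d in range(2, 6):
    for q in (2, 3, 4):
        for e in (0, 1, 2):
            for i in range(1, d + 1):
                w = q**(comb(i - 1, 2) + (i - 1)*e)
                for x in (1, 2, 5):
                    alpha = ((x - 1)*gb(d - 1, i - 1, q)
                             + q**(i + e - 1)*gb(d - 1, i, q)) * w
                    beta = x * gb(d - 1, i - 1, q) * w
                    assert alpha - beta == P(1, i, d, q, e)
```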
Remark that this definition is also a new equivalent definition for Cameron-Liebler sets of generators in polar spaces of type $I$, as for these polar spaces, degree one Cameron-Liebler sets and Cameron-Liebler sets coincide.
In the following lemma, we give some properties of degree one Cameron-Liebler sets in a polar space.
\begin{lemma}\label{lemmapropdegree oneCL}
Let $\mathcal{L}$ be a degree one Cameron-Liebler set of generators in a polar space $\mathcal{P}$ and let $\chi$ be the characteristic vector of $\mathcal{L}$. Denote $\frac{|\mathcal{L}|}{\prod_{k=0}^{d-2}(q^{e+k}+1)}$ again by $x$. Then $\mathcal{L}$ has the following properties: \begin{enumerate}
\item $\chi = \frac{x}{q^{d+e-1}+1}\textbf{\textit{j}} +v $ with $v \in V_1$, \item $\chi - \frac{x}{q^{d+e-1}+1}\textbf{\textit{j}}$ is an eigenvector with eigenvalue $P_{1i}$ for all adjacency matrices $A_i$ in the association scheme,
\item if $\mathcal{P}$ admits a spread, then $|\mathcal{L}\cap S|=x$ for every spread $S$ of $\mathcal{P}$. \end{enumerate} \end{lemma} \begin{proof}
The first property follows from the first part of the proof of Theorem \ref{stelling}. The second property follows from the first property since $\chi - \frac{x}{q^{d+e-1}+1}\textbf{\textit{j}}\in V_1$.
Consider now a spread $S$ in $\mathcal{P}$ with characteristic vector $\chi_S$ and let $A$ be the point-generator incidence matrix of $\mathcal{P}$. Since $\chi\in \im(A^T) = \ker(A)^\perp$ and by \cite[Lemma 3.6(i), $m=1$]{CLpolar}, which gives that $u=\chi_S-\frac{1}{\prod_{i=0}^{d-2}(q^{e+i}+1)}\textbf{\textit{j}}\in \ker(A)$, we find, by taking the inner product of $u$ and $\chi$, that \begin{align*}
|\mathcal{L}\cap S| = \langle \chi_S, \chi \rangle = \frac{1}{\prod_{i=0}^{d-2}(q^{e+i}+1)}\langle \textbf{\textit{j}},\chi \rangle = \frac{1}{\prod_{i=0}^{d-2}(q^{e+i}+1)} |\mathcal{L}|= x. \end{align*}\qedhere \end{proof}
We also give some properties of degree one Cameron-Liebler sets of generators in polar spaces that can easily be proved. \begin{lemma} \label{basislemma4} Let $\mathcal{L}$ and $\mathcal{L}'$ be two degree one Cameron-Liebler sets of generators in a polar space $\mathcal{P}$ with parameters $x$ and $x'$ respectively. Then the following statements are valid. \begin{enumerate} \item $0 \leq x,x' \leq q^{d-1+e}+1$.
\item $|\mathcal{L}|=x\prod_{k=0}^{d-2}(q^{k+e}+1)$. \item The set of all generators in the polar space $\mathcal{P}$ not in $\mathcal{L}$ is a degree one Cameron-Liebler set of generators in $\mathcal{P}$ with parameter $q^{d-1+e}+1-x$. \item If $\mathcal{L} \cap \mathcal{L}' = \emptyset$ then $\mathcal{L} \cup \mathcal{L}'$ is a degree one Cameron-Liebler set of generators in $\mathcal{P}$ with parameter $x+x'$. \item If $\mathcal{L} \subseteq \mathcal{L}'$ then $\mathcal{L}' \setminus \mathcal{L}$ is a degree one Cameron-Liebler set of generators in $\mathcal{P}$ with parameter $x'-x$. \end{enumerate} \end{lemma} \begin{lemma}[{\cite[Lemma 2.3]{Ferdinand.}}]\label{lemmaferdi} Let $\mathcal{P}$ be a polar space of rank $d$ and let $\mathcal{P'}$ be a polar space of the same rank $d$, embedded in $\mathcal{P}$. If $\mathcal{L}$ is a degree one Cameron-Liebler set in $\mathcal{P}$, then the restriction of $\mathcal{L}$ to $\mathcal{P'}$ is again a degree one Cameron-Liebler set. \end{lemma} Note that Theorem \ref{stelling} does not hold for some values of $i$, depending on the polar space $\mathcal{P}$, since for these cases we cannot apply Lemma \ref{lemma2}. We will now show that there are examples of generator sets that satisfy the property of Theorem \ref{stelling} for the excluded values of $i$, but that are not degree one Cameron-Liebler sets. These are however Cameron-Liebler sets in the sense of \cite{CLpolar}.
\begin{remark}\label{commentvb4.6} By investigating \cite[Example $4.6$]{CLpolar}, we find an example of a Cameron-Liebler set in a polar space of type $III$ with $d=3$, that is not a degree one Cameron-Liebler set: a base-plane. A \emph{base-plane} in a polar space $\mathcal{P}$ of rank $3$ with base the plane $\pi$ is the set of all planes in $\mathcal{P}$, intersecting $\pi$ in at least a line.
Let $\mathcal{P}$ be a polar space of type $III$ of rank $3$, so $\mathcal{P}=W(5,q)$ or $\mathcal{P}=Q(6,q)$. Let $\pi$ be a plane and let $\mathcal{L}$ be the base-plane with base $\pi$. This set $\mathcal{L}$ is a Cameron-Liebler set in $\mathcal{P}$, but not a degree one Cameron-Liebler set. This follows from Theorem \ref{stelling} with $i=1$: the number of generators of $\mathcal{L}$ meeting a plane $\alpha$ of $\mathcal{L}$ in a line depends on whether $\alpha$ equals $\pi$ or not. As those two numbers, for $\alpha = \pi$ and $\alpha \neq \pi$, are different, the property in Theorem \ref{stelling} does not hold, so the set $\mathcal{L}$ is not a degree one Cameron-Liebler set. By similar arguments, we can also use Theorem \ref{stelling} with $i=2$ to show that a base-plane is not a degree one Cameron-Liebler set. However, the equalities for $i=3$ in Theorem \ref{stelling} do hold.
\end{remark}
\begin{remark}\label{vb1}
A hyperbolic class is the set of all generators of one class of a hyperbolic quadric $Q^+(4n+1,q)$ embedded in a polar space $\mathcal{P}$ with $\mathcal{P}=Q(4n+2,q)$ or $\mathcal{P}=W(4n+1,q)$, $q$ even. We know that this set is a Cameron-Liebler set, see \cite[Remark 3.25]{CLpolar}, but we can prove that it is not a degree one Cameron-Liebler set by considering $\im(B^T)$, where $B$ is the incidence matrix of hyperbolic classes and generators. Every hyperbolic class corresponds to a row in the matrix $B$. If the characteristic vectors of all hyperbolic classes were in $V_0 \perp V_1$, then we would have $\im(B^T)\subseteq V_0\perp V_1$. This gives a contradiction since $\im(B^T)= V_0\perp V_1\perp V_d$ by \cite[Lemma 3.26]{CLpolar}.\\ Remark that for the polar spaces $W(4n+1,q)$, $q$ odd, we do not have this example, as there is no hyperbolic quadric $Q^+(4n+1,q)$ embedded in these symplectic polar spaces.
\end{remark}
In the previous remark we found that one class of a hyperbolic quadric $Q^+(4n+1,q)$ embedded in a $Q(4n+2,q)$ or $W(4n+1,q)$, $q$ even, is not a degree one Cameron-Liebler set. In the next example we show that an embedded hyperbolic quadric (so both classes together) is a degree one Cameron-Liebler set in the polar spaces $Q(4n+2,q)$ and $W(4n+1,q)$, $q$ even.
\begin{example}[{\cite[Example $4.4$]{CLpolar}}] \label{vb} Consider a polar space $\mathcal{P}$ as in Remark \ref{vb1} and the set of all generators of an embedded hyperbolic quadric $Q^+(4n+1,q)$, i.e.\ both classes together. By Lemma \ref{lemmaferdi} we know
that this set of generators is a degree one Cameron-Liebler set, hence also a Cameron-Liebler set.
\end{example} \begin{table}[h]\begin{center}
\begin{tabular}{ | l |c|c| }
\hline
Example & CL & degree one CL \\ \hline \hline
All generators of $\mathcal{P}$ &$\times$& $\times$ \\ \hline
Point-pencil (defined in Section \ref{setcionfcps}). & $\times$& $\times$ \\ \hline
Base-plane (defined in Example \ref{commentvb4.6}). & $\times$ & \\ \hline
Hyperbolic class (defined in Remark \ref{vb1}). & $\times$& \\ \hline
Embedded hyperbolic quadric (defined in Example \ref{vb}). &$\times$& $\times$ \\ \hline
\end{tabular}
\caption{Examples of Cameron-Liebler and degree one Cameron-Liebler sets.}\label{tabelexample} \end{center}\end{table} Remark that, by Lemma \ref{basislemma4}(3), the complements of the sets in Table \ref{tabelexample} are also Cameron-Liebler sets or degree one Cameron-Liebler sets, respectively.
\section{Polar spaces $Q^+(2d-1,q)$, $d$ even}
In the previous section we introduced degree one Cameron-Liebler sets, while in this section we handle Cameron-Liebler sets in one class of generators of the polar spaces $Q^+(2d-1,q)$, $d$ even. These Cameron-Liebler sets were introduced in \cite[Section 3]{CLpolar} and are defined in only one class of generators, in contrast to the (degree one) Cameron-Liebler sets in other polar spaces.
It is known that the generators of a hyperbolic quadric $Q^+(2d-1,q)$ can be divided into two classes such that for any two generators $\pi$ and $\pi'$ we have $\dim(\pi\cap\pi')\equiv d\pmod{2}$ if and only if $\pi$ and $\pi'$ belong to the same class. By restricting the classical association scheme of the hyperbolic quadric $Q^+(2d-1,q)$ to the even relations, we define an association scheme on one class of generators. For more information, see \cite[Remark $2.18$, Lemma $3.12$]{CLpolar}. Let $R'_i$ and $A'_i$ be $R_{2i}$ and $A_{2i}$ respectively, restricted to the rows and columns corresponding to the generators of this class. Let $V'_j$ be $ V_j \perp V_{d-j}$, also restricted to the subspace corresponding to these generators.
For the polar spaces $Q^+(2d-1,q)$, $d$ even, we thus have the relations $R_i'$, $i = 0, \dots ,\frac{d}{2}$, and the eigenspaces $V_j'$, $j = 0, \dots, \frac{d}{2}$. For this association scheme on one class of generators we give the analogue of Lemma \ref{lemma2}.
\begin{lemma} \label{lemma4} The eigenvalue $P_{1,2i}$ of $A_i' = A_{2i}$ corresponds only with the eigenspace $V_1' = V_1 \perp V_{d-1}$ for the classical polar spaces $Q^+(2d-1,q)$, $d$ even. \end{lemma} \begin{proof} This lemma follows from Lemma \ref{lemma2}, as for the hyperbolic quadrics $Q^+(2d-1,q)$ we found that $P_{1k}=P_{d-1,k}$ for $k$ even. This implies that the eigenvalue $P_{1,2i}$ corresponds with $V_1 \perp V_{d-1}$. \end{proof}
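The coincidence $P_{1k}=P_{d-1,k}$ for $k$ even, which underlies Lemma \ref{lemma4}, can be confirmed numerically for small hyperbolic quadrics (where $e=0$). A short Python check (ours, illustrative only; the helper names are ours):

```python
from math import comb

def gb(a, b, q):
    # Gaussian binomial [a, b]_q
    if b < 0 or b > a:
        return 0
    num = den = 1
    for k in range(b):
        num *= q**(a - k) - 1
        den *= q**(k + 1) - 1
    return num // den

def P(j, i, d, q, e):
    # eigenvalue P_{ji} from the lemma (integer e only)
    return sum((-1)**(j + s) * gb(j, s, q) * gb(d - j, d - i - s, q)
               * q**(e*(i + s - j) + comb(j - s, 2) + comb(i + s - j, 2))
               for s in range(max(0, j - i), min(j, d - i) + 1))

# Hyperbolic quadrics Q^+(2d-1,q) have e = 0: the eigenvalues of the even
# relations on V_1 and V_{d-1} coincide, as used in the lemma.
for d in (4, 6):
    for q in (2, 3):
        for k in range(0, d + 1, 2):
            assert P(1, k, d, q, 0) == P(d - 1, k, d, q, 0)
```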
Here again, we find a new equivalent definition.
\begin{theorem} \label{stelling3} Let $\mathcal{G}$ be a class of generators of the hyperbolic quadric $Q^+(2d-1,q)$ of even rank $d$ and let $\mathcal{L}$ be a set of generators of $\mathcal{G}$. The set $\mathcal{L}$ is a Cameron-Liebler set of generators in $\mathcal{G}$ if and only if for every generator $\pi$ in $\mathcal{G}$, the number of elements of $\mathcal{L}$ meeting $\pi$ in a $(d-2i-1)$-space equals \begin{align*} \begin{cases}
\left( (x-1) \begin{bmatrix}d-1 \\2i-1\end{bmatrix} +q^{2i-1}\begin{bmatrix}d-1 \\ 2i\end{bmatrix} \right) q^{(2i-1)(i-1)} & \mbox{if } \pi \in \mathcal{L},\\ x \begin{bmatrix} d-1 \\ 2i-1\end{bmatrix} q^{(2i-1)(i-1)} & \mbox{if }\pi \notin \mathcal{L}.
\end{cases} \end{align*} \end{theorem} \begin{proof}
Let $\mathcal{L}$ be a set of generators in $\mathcal{G}$ with the property described in the theorem, then the first implication is a direct application of Theorem \ref{stellingalgemeen} with $\Omega$ the set of all generators in $\mathcal{G}$, $R_i$ the relation $R'_i=\{(\pi, \pi')|\dim(\pi \cap\pi') = d-2i-1\}$, and \begin{align*}
\alpha_i &= \left( (x-1) \begin{bmatrix}d-1 \\2i-1\end{bmatrix} +q^{2i-1}\begin{bmatrix}d-1 \\ 2i\end{bmatrix} \right) q^{(2i-1)(i-1)},\\
\beta_i &= x \begin{bmatrix} d-1 \\ 2i-1\end{bmatrix} q^{(2i-1)(i-1)}.\end{align*} As $\alpha_i - \beta_i = P_{1,2i}$, we find that $v_i = \chi + \frac{\beta_i}{P_{1,2i} - P_{0,2i}}\textbf{\textit{j}} \in V'_1$, hence $\chi \in V'_0 \perp V'_1$ and by \cite[Lemma $3.15$]{CLpolar} we know that $\chi \in \im(A^T)$.
Now it follows from \cite[Definition 3.16(iv)]{CLpolar} that $\mathcal{L}$ is a (degree one) Cameron-Liebler set of $\mathcal{G}$. The other implication is \cite[Lemma $4.10$]{CLpolar}. \end{proof}
\section{Classification results}
We try to use the ideas from the classification results for Cameron-Liebler sets of polar spaces of type $I$ and the polar spaces $Q^+(2d-1,q)$, $d$ even, in \cite[Section 6]{CLpolar}, to find classification results for degree one Cameron-Liebler sets in polar spaces.\\ We start with some definitions and a lemma that proves that the parameter $x$ is always an integer. \begin{definition} A \emph{partial ovoid} is a set of points in a polar space such that each generator contains at most one point of this set. It is called an \emph{ovoid} if each generator contains precisely one point of the set. \end{definition} \begin{definition} An \emph{Erd\H{o}s-Ko-Rado (EKR) set} of $k$-spaces is a set of $k$-spaces that pairwise intersect non-trivially. \end{definition} \begin{lemma} If $\mathcal{L}$ is a degree one Cameron-Liebler set in a polar space $\mathcal{P}$ with parameter $x$, then $x\in \mathbb{N}$. \end{lemma} \begin{proof} For all polar spaces, except the hyperbolic quadrics $Q^+(2d-1,q)$, $d$ even, we refer to \cite[Lemma 4.8]{CLpolar}.
Suppose now that $\mathcal{L}$ is a degree one Cameron-Liebler set in $\mathcal{P}=Q^+(2d-1,q)$, $d$ even, with parameter $x$. Then $\mathcal{L}$ is also a Cameron-Liebler set in $\mathcal{P}$ with parameter $x$. If $\Omega_1$ and $\Omega_2$ are the two classes of generators in $\mathcal{P}$, then $\mathcal{L}\cap \Omega_1$ and $\mathcal{L}\cap \Omega_2$ are Cameron-Liebler sets of $\Omega_1$ and $\Omega_2$ with parameter $x$, by \cite[Theorem 3.20]{CLpolar}. Hence $x$ is the parameter of a Cameron-Liebler set in one class of generators of $Q^+(2d-1,q)$, $d$ even. This implies, by \cite[Lemma 4.8]{CLpolar}, that $x\in \mathbb{N}$. \end{proof}
Now we continue with a classification result for degree one Cameron-Liebler sets with parameter $1$ in all polar spaces. \begin{theorem} A degree one Cameron-Liebler set in a polar space $\mathcal{P}$ of rank $d$ with parameter $1$ is a point-pencil. \end{theorem} \begin{proof} For the polar spaces of type $I$, the theorem follows from \cite[Theorem 6.4]{CLpolar}, as degree one Cameron-Liebler sets and Cameron-Liebler sets coincide for these polar spaces. For the polar spaces of type $III$, the theorem also follows from \cite[Theorem 6.4]{CLpolar}, as any degree one Cameron-Liebler set is a Cameron-Liebler set and since a base-plane, a base-solid and a hyperbolic class are not degree one Cameron-Liebler sets (see Remark \ref{commentvb4.6} and Remark \ref{vb1}). Let $\mathcal{L}$ be a degree one Cameron-Liebler set with parameter $1$ in a polar space $\mathcal{P}$ and remark that $\mathcal{L}$ is an EKR set of size $\prod_{k=0}^{d-2}(q^{k+e}+1)$. For the polar spaces $Q(4n+2,q)$ and $W(4n+1,q)$, we find by \cite[Theorem 23, Theorem 40]{pepe} that $\mathcal{L}$ is a point-pencil, a hyperbolic class or, if $n=1$, a base-plane. Using Remark \ref{vb1} and Remark \ref{commentvb4.6}, we find that the last two possibilities are not degree one Cameron-Liebler sets. Hence $\mathcal{L}$ is a point-pencil.
Suppose now that $\mathcal{P}$ is the hyperbolic quadric $Q^+(4n-1,q)$ with $\Omega_1$ and $\Omega_2$ the two classes of generators. By \cite[Theorem 3.20]{CLpolar}, we know that $\mathcal{L}\cap \Omega_1$ and $\mathcal{L}\cap \Omega_2$ are Cameron-Liebler sets in $\Omega_1$ and $\Omega_2$, respectively, with parameter $1$. Using \cite[Theorem 6.4]{CLpolar}, we see that $\mathcal{L}\cap \Omega_i$, for $i=1,2$, is a point-pencil or, if $n=2$, a base-solid. A base-solid is the set of all $3$-spaces intersecting a fixed $3$-space (the base) in precisely a plane. Note that all elements of the base-solid belong to a different class of the hyperbolic quadric than the base itself.
If $n=2$, so $d=4$, and $\mathcal{L}\cap \Omega_1$ or $\mathcal{L}\cap \Omega_2$ is a base-solid with base $\pi$, then there are at least $(q+1)(q^2+1)$ elements of $\mathcal{L}$ meeting $\pi$ in a plane. This contradicts Theorem \ref{stelling}, whether $\pi\in \mathcal{L}$ or not. So we find that $\mathcal{L}\cap \Omega_1$ and $\mathcal{L}\cap \Omega_2$ are both point-pencils, with vertices $v_1$ and $v_2$ respectively. Now we show that $v_1=v_2$. Suppose $v_1\neq v_2$, and consider a generator $\alpha \in \Omega_2\setminus \mathcal{L}$ through $v_1$. Then $\alpha$ intersects $q^2+q+1$ generators of $\mathcal{L}\cap \Omega_1$ in a plane through $v_1$. This gives a contradiction with Theorem \ref{stelling}, which proves that $v_1=v_2$. Hence $\mathcal{L}$ is the point-pencil through $v_1=v_2$. \end{proof} \\
The classification result in \cite[Theorem 6.7]{CLpolar} for polar spaces of type $I$ is also valid for degree one Cameron-Liebler sets in all polar spaces. \begin{theorem} Let $\mathcal{P}$ be a finite classical polar space of rank $d$ and parameter $e$, and let $\mathcal{L}$ be a degree one Cameron-Liebler set of $\mathcal{P}$ with parameter $x$. If $x\leq q^{e-1}+1$, then either $\mathcal{L}$ is the union of $x$ point-pencils whose vertices are pairwise non-collinear, or $x=q^{e-1}+1$ and $\mathcal{L}$ is the set of generators of an embedded polar space of rank $d$ and parameter $e-1$. \end{theorem} \begin{proof} In Lemma $6.5$, Theorem $6.6$ and Theorem $6.7$ of \cite{CLpolar}, the authors use \cite[Lemma $4.9$]{CLpolar} to prove the classification result. The same proof applies here, using Theorem \ref{stelling} instead of \cite[Lemma $4.9$]{CLpolar}. \end{proof}\\
Remark that the last possibility corresponds to an embedded hyperbolic quadric $Q^+(4n+1,q)$ if $\mathcal{P}=Q(4n+2,q)$, or if $\mathcal{P}=W(4n+1,q)$ with $q$ even. If $\mathcal{P}=W(4n+1,q)$ with $q$ odd, then $\mathcal{P}$ admits no embedded polar space of the same rank and parameter $e-1=0$.\\
For the symplectic polar space $W(5,q)$ and the parabolic quadric $Q(6,q)$ we give a stronger classification result. Remark that the polar spaces $Q(6,q)$ and $W(5,q)$ are isomorphic for $q$ even: $W(5,q)$, for $q$ even, is obtained by projecting $Q(6,q)$ from the nucleus $N$ of $Q(6,q)$ onto a hyperplane not through $N$ in the ambient projective space $\PG(6,q)$. In this way, there is a one-to-one correspondence between the planes of $W(5,q)$ and the planes of $Q(6,q)$. We start with some lemmas.
\begin{lemma} \label{lemmas1s2d1d2} Let $\mathcal{L}$ be a degree one Cameron-Liebler set of generators (planes) in $W(5,q)$ or $Q(6,q)$ with parameter $x$. \begin{enumerate}
\item For every $\pi \in \mathcal{L}$, there are $s_1$ elements of $\mathcal{L}$ meeting $\pi$. \item For skew $\pi, \pi'\in \mathcal{L}$, there exist exactly $d_2$ subspaces in $\mathcal{L}$ that are skew to both $\pi$ and $\pi'$ and there exist $s_2$ subspaces in $\mathcal{L}$ that meet both $\pi$ and $\pi'$.
Here, $d_2$, $s_1$ and $s_2$ are given by: \begin{align*} d_2(q,x) &= (x-2)q^2(q-1)\\ s_1(q,x) &= x(q^2+1)(q+1)-(x-1)q^3 = q^3+x(q^2+q+1)\\ s_2(q,x) &= x(q^2+1)(q+1)-2(x-1)q^3 +d_2(q,x). \end{align*} \end{enumerate} \end{lemma} \begin{proof}Let $\mathcal{P}$ be the polar space $W(5,q)$ or $Q(6,q)$, hence $d=3$ and $e=1$. \begin{enumerate}
\item This follows directly from Theorem \ref{stelling}, for $i=d$ and $|\mathcal{L}|=x(q^2+1)(q+1)$.
\item Let $\chi_\pi$ and $\chi_{\pi'}$ be the characteristic vectors of $\{\pi\}$ and $\{\pi'\}$, respectively. Let $\mathcal{Z}$ be the set of all planes in $\mathcal{P}$ disjoint to $\pi$ and $\pi'$, and let $\chi_\mathcal{Z}$ be its characteristic vector. Furthermore, let $v_\pi$ and $v_{\pi'}$ be the incidence vectors of $\pi$ and $\pi'$, respectively, with their positions corresponding to the points of $\mathcal{P}$. Note that $A\chi_\pi = v_\pi$ and $A\chi_{\pi'} = v_{\pi'}$. \\ The number of planes through a point $P\notin \pi \cup \pi'$ and disjoint to $\pi$ and $\pi'$ is the number of lines in $P^\perp$, disjoint to the lines corresponding to $\pi$ and $\pi'$. By \cite[Corollary $19$]{kms} this number equals $q^2(q-1)$, and we find: \begin{align*} A\chi_\mathcal{Z} &=q^2(q-1)(\textbf{\textit{j}}-v_\pi-v_{\pi'}) \\ &=q^2(q-1)\left(A\frac{\textbf{\textit{j}}}{(q^2+1)(q+1)}-A\chi_\pi-A\chi_{\pi'}\right)\\ \Leftrightarrow\qquad&\chi_\mathcal{Z}-q^2(q-1)\left(\frac{\textbf{\textit{j}}}{(q^2+1)(q+1)}-\chi_\pi-\chi_{\pi'}\right) \in \ker(A). \end{align*} We know that the characteristic vector $\chi$ of $\mathcal{L}$ is included in $\ker(A)^\perp$. This implies: \begin{align*} &&\chi_\mathcal{Z} \cdot \chi &=q^2(q-1)\left(\frac{\textbf{\textit{j}}\cdot \chi}{(q^2+1)(q+1)}-\chi(\pi)-\chi(\pi')\right) \\
&\Leftrightarrow & |\mathcal{Z}\cap \mathcal{L}| &=(x-2)q^2(q-1) \end{align*} which gives the formula for $d_2(q,x)$. The formula for $s_2(q,x)$ follows from the inclusion-exclusion principle. \qedhere \end{enumerate} \end{proof} In the following lemma, corollary and theorem, we will use $s_1,s_2,d_1,d_2$ for the values $s_1(q,x),s_2(q,x),$ $d_1(q,x), d_2(q,x)$ if the field size $q$ and the parameter $x$ are clear from the context. For the definition of these values, we refer to the previous lemma.
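The inclusion-exclusion step in the proof above can be made explicit. Since a generator of $\mathcal{L}$ is disjoint from exactly $(x-1)q^3$ elements of $\mathcal{L}$ (the count used to derive $s_1$),
\begin{align*}
s_2(q,x) &= |\mathcal{L}| - |\{\tau\in\mathcal{L}\mid \tau\cap\pi=\emptyset\}| - |\{\tau\in\mathcal{L}\mid \tau\cap\pi'=\emptyset\}| + d_2(q,x)\\
&= x(q^2+1)(q+1) - 2(x-1)q^3 + d_2(q,x).
\end{align*}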
The following lemma is a generalization of Lemma $2.4$ in \cite{Klaus}. \begin{lemma}\label{lemmaklaus} If $c$ is a nonnegative integer such that \begin{align*} (c+1)s_1-\binom{c+1}{2}s_2 > x(q^2+1)(q+1)\;, \end{align*} then no degree one Cameron-Liebler set of generators in $W(5,q)$ or $Q(6,q)$ with parameter $x$ contains $c+1$ mutually skew generators. \end{lemma} \begin{proof}
Let $\mathcal{P}$ be the polar space $W(5,q)$ or $Q(6,q)$ and assume that $\mathcal{P}$ has a degree one Cameron-Liebler set $\mathcal{L}$ of generators with parameter $x$ that contains $c+1$ mutually disjoint subspaces $\pi_0,\pi_1,\dots,\pi_c$. Lemma \ref{lemmas1s2d1d2} shows that $\pi_i$ meets at least $s_1(q,x)-i\cdot s_2(q,x)$ elements of $\mathcal{L}$ that are skew to $\pi_0, \pi_1, \dots,\pi_{i-1}$. Hence, $x(q^2+1)(q+1) = |\mathcal{L}| \geq (c+1) s_1-\sum_{i=0}^c i\, s_2 = (c+1)s_1-\binom{c+1}{2}s_2$, which contradicts the assumption. \end{proof}
\begin{gevolg}\label{gevolgklaus} A degree one Cameron-Liebler set of generators in $W(5,q)$ or $Q(6,q)$ with parameter $2\leq x\leq \sqrt[3]{2q^2}-\frac{\sqrt[3]{4q}}{3}+\frac{1}{6}$ contains at most $x$ pairwise disjoint generators. \end{gevolg} \begin{proof} Let $\mathcal{L}$ be a degree one Cameron-Liebler set of generators in $W(5,q)$ or $Q(6,q)$ with parameter $x$. Using Lemma \ref{lemmaklaus} for $e=1,d=3, c=x$, we find that if $q^3-q^2x+\frac{q+1}{2}x^2-\frac{q+1}{2}x^3>0$, then $\mathcal{L}$ contains at most $x$ pairwise disjoint generators. Since $f_q(x)=q^3-q^2x-\frac{q+1}{2}x^2(x-1)$ is decreasing on $[1,+\infty[$, it is sufficient that $f_q\left(\sqrt[3]{2q^2}-\frac{\sqrt[3]{4q}}{3}+\frac{1}{6}\right)>0$, as we only consider values of $x$ in $\left[2,\sqrt[3]{2q^2}-\frac{\sqrt[3]{4q}}{3}+\frac{1}{6}\right]$. Using a computer algebra package, it can be checked that $f_q\left(\sqrt[3]{2q^2}-\frac{\sqrt[3]{4q}}{3}+\frac{1}{6}\right)>0$ for all $q\geq 2$.
\end{proof}
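The positivity of $f_q$ at the upper bound can also be probed numerically; the following sketch (an illustration, not a replacement for the computer algebra verification) evaluates $f_q$ at the bound for several values of $q$:

```python
# Numerical probe (not a proof) of the claim that
# f_q(x) = q^3 - q^2*x - (q+1)/2 * x^2 * (x-1)
# is positive at x0 = (2q^2)^(1/3) - (4q)^(1/3)/3 + 1/6 for all q >= 2.

def f(q, x):
    return q**3 - q**2 * x - (q + 1) / 2 * x**2 * (x - 1)

def x0(q):
    return (2 * q**2) ** (1 / 3) - (4 * q) ** (1 / 3) / 3 + 1 / 6

for q in [2, 3, 4, 5, 7, 8, 9, 16, 27, 64, 1024]:
    assert f(q, x0(q)) > 0, q
```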
\begin{theorem} A degree one Cameron-Liebler set $\mathcal{L}$ of generators in $W(5,q)$ or $Q(6,q)$ with parameter $2\leq x\leq \sqrt[3]{2q^2}-\frac{\sqrt[3]{4q}}{3}+\frac{1}{6}$ is the union of $\alpha$ pairwise disjoint embedded hyperbolic quadrics $Q^+(5,q)$ and $x-2\alpha$ point-pencils whose vertices are pairwise non-collinear and not contained in the $\alpha$ hyperbolic quadrics $Q^+(5,q)$. For the polar space $Q(6,q)$, or $W(5,q)$ with $q$ even, $\alpha\in \{0,...,\lfloor \frac{x}{2} \rfloor\}$; for the polar space $W(5,q)$ with $q$ odd, $\alpha=0$. \end{theorem} \begin{proof} Let $\mathcal{P}$ be the polar space $W(5,q)$ or $Q(6,q)$ and let $\mathcal{L}$ be a degree one Cameron-Liebler set in $\mathcal{P}$. Note that the generators of these polar spaces are planes. By Corollary \ref{gevolgklaus}, $\mathcal{L}$ contains $c\leq x$ pairwise disjoint planes $\pi_1,\pi_2,\dots, \pi_c$. Let $S_i$ be the set of planes in $\mathcal{L}$ intersecting $\pi_i$ and not intersecting $\pi_j$ for all $j\neq i$. By Lemma \ref{lemmas1s2d1d2} there are, for a fixed $i$, at least $s_1-(c-1)s_2\geq s_1-(x-1)s_2 =q^3-(x-2)q^2-(x^2-2x)(q+1)$ planes in $S_i$. As $S_i$ is an EKR set by Corollary \ref{gevolgklaus}, $S_i$ has to be part of a point-pencil (PP), a base-plane (BP) or one class of an embedded hyperbolic quadric $Q^+(5,q)$ (CEHQ). Remark that if $\mathcal{P}$ is $W(5,q)$ with $q$ odd, then $\mathcal{P}$ cannot contain a CEHQ, so for this polar space the only possibilities are a PP or a BP, by \cite[Lemmas $3.3.7$, $3.3.8$ and $3.3.16$]{phdmaarten}. Using Theorem \ref{stelling}, we can prove that if the set $S_i$ is part of a PP, BP or CEHQ, then $\mathcal{L}$ has to contain all planes of this PP, BP or CEHQ. We show this for the case where the set of planes forms part of a PP. So assume $S_i$ is a subset of the point-pencil with vertex $P$, and there is a plane $\gamma \notin \mathcal{L}$ through $P$. 
This would imply that $\gamma$ meets at least $q^3-(x-2)q^2-(x^2-2x)(q+1)$ planes of $\mathcal{L}$ non-trivially. This gives a contradiction with Theorem \ref{stelling} for $i=1$ and $i=2$, as $\gamma \notin \mathcal{L}$ intersects precisely $x(q^2+q+1)<q^3-(x-2)q^2-(x^2-2x)(q+1)$ planes of $\mathcal{L}$.\\ This argument also works for a BP and a CEHQ, so we can conclude that if $\mathcal{L}$ contains a set $S_i$ which is part of a PP, BP or CEHQ, then $\mathcal{L}$ has to contain the whole PP, BP or CEHQ respectively, which we will call $\mathcal{L}_i$. \\
Remark first that $\mathcal{L}$ cannot contain a BP with base $\pi$, as then $\pi \in \mathcal{L}$ would intersect $q^3+q^2+q>q^2+q+x-1$ planes of $\mathcal{L}$ in a line, which gives a contradiction with Theorem \ref{stelling}. This implies that all sets $\mathcal{L}_i$ are PP's or CEHQ's. Now we show that every two sets $\mathcal{L}_i$ and $\mathcal{L}_j$ are disjoint. Suppose first that $\mathcal{L}_i$ and $\mathcal{L}_j$ are two PP's, with vertices $P_i$ and $P_j$ respectively, that are not disjoint. Then there are at most $q+1$ planes in $\mathcal{L}_i \cap \mathcal{L}_j$; let $\alpha$ be one of them. Now we see that $\alpha$ meets at least $2(q^3+q^2+q+1)-(q+1)$ elements of $\mathcal{L}$, contradicting Theorem \ref{stelling}. If $\mathcal{L}_i$ and $\mathcal{L}_j$ are two non-disjoint CEHQ's, or a CEHQ and a PP that are non-disjoint, then we can use the same arguments as above: in both cases, there are at most $q+1$ planes in $\mathcal{L}_i \cap \mathcal{L}_j$, which implies that a plane $\alpha \in \mathcal{L}_i \cap \mathcal{L}_j$ meets at least $2(q^3+q^2+q+1)-(q+1)$ elements of $\mathcal{L}$ non-trivially, contradicting Theorem \ref{stelling}.
Now we know that $\mathcal{L}$ contains the disjoint union of $c\leq x$ sets $\mathcal{L}_i$ of planes, where every set is a PP or a CEHQ. As the number of planes in a PP or CEHQ equals $(q^2+1)(q+1)$, and the total number of planes in $\mathcal{L}$ equals $x(q^2+1)(q+1)$ (see Lemma \ref{basislemma4}(2)), we see that $\mathcal{L}$ equals the disjoint union of $x$ sets $\mathcal{L}_i$. \\ To end this proof, we show that $\mathcal{L}$ can only be composed of PP's and embedded hyperbolic quadrics: if $\mathcal{L}$ contains one class of an embedded hyperbolic quadric, then $\mathcal{L}$ also contains the other class of this hyperbolic quadric. This also follows from Theorem \ref{stelling}: suppose $\mathcal{L}$ contains only one class of an embedded hyperbolic quadric and let $\pi$ be a plane of the other class of this embedded hyperbolic quadric. Then we can show that $\pi$ is also a plane of $\mathcal{L}$: we know that $\pi$ meets $q^2+q+1$ planes of the hyperbolic quadric in a line, hence at least that many planes of $\mathcal{L}$ in a line. But if $\pi \notin \mathcal{L}$, then by Theorem \ref{stelling}, $\pi$ can only meet $x< \sqrt[3]{2q^2}$ planes of $\mathcal{L}$ in a line, a contradiction.
This implies that $\mathcal{L}$ has to be the disjoint union of point-pencils and embedded hyperbolic quadrics. Remark that two point-pencils are disjoint if the corresponding vertices are non-collinear. As there exists a partial ovoid of size $q+1$ in $\mathcal{P}$, we can find $x$ pairwise disjoint point-pencils.\\ Note that for $q$ odd and $\mathcal{P} = W(5,q)$, there are no embedded hyperbolic quadrics, so in this case $\mathcal{L}$ is the disjoint union of $x$ point-pencils. We end the proof by showing that, for $\mathcal{P} = Q(6,q)$ or $\mathcal{P} = W(5,q)$ and $q$ even, there exist disjoint embedded hyperbolic quadrics in $\mathcal{P}$. It suffices to show this only for $\mathcal{P} = Q(6,q)$, by the connection between $Q(6,q)$ and $W(5,q)$ for $q$ even.
Consider two embedded hyperbolic quadrics $Q^+(5,q)$ in $Q(6,q)$, that intersect in a parabolic quadric $Q(4,q)$. These two hyperbolic quadrics have no planes in common as the generators of $Q(4,q)$ are lines, so these two embedded hyperbolic quadrics are disjoint.
Remark that the disjoint union of point-pencils and disjoint embedded hyperbolic quadrics is a degree one Cameron-Liebler set by Lemma \ref{basislemma4}$(4)$, as a point-pencil is a degree one Cameron-Liebler set and, for $\mathcal{P}=Q(6,q)$, or $W(5,q)$ with $q$ even, an embedded hyperbolic quadric of the same rank is also a degree one Cameron-Liebler set. \end{proof}
This theorem agrees with Conjecture $5.1.3$ in \cite{Ferdinand.}, which states that every degree one Cameron-Liebler set in $W(5,q)$ is the disjoint union of non-degenerate hyperplane sections and point-pencils. \begin{remark} Recall that the disjoint union of point-pencils and disjoint embedded hyperbolic quadrics is also an example of a degree one Cameron-Liebler set of generators in the other polar spaces of type $III$ (see Lemma \ref{basislemma4} and Example \ref{vb}).
We also remark that we could not generalize this classification result to the other classical polar spaces, as not enough is known about large EKR sets in those polar spaces. For the polar spaces $Q^+(4n+1,q)$ there are some EKR results in \cite{maarten2}, but since the large examples of EKR sets there have many more elements than the largest known Cameron-Liebler sets, we cannot use these results. \end{remark}
\section*{Summary of properties for type \texorpdfstring{$III$}{III}} In Table \ref{tabeldef} we give an overview of properties where we distinguish sufficient properties, necessary properties and characteristic properties, or definitions for Cameron-Liebler sets and for degree one Cameron-Liebler sets.
Suppose in this table that $\mathcal{L}$ is a set of generators in the polar space $\mathcal{P}$ of type $III$, with characteristic vector $\chi$. Suppose also that $\pi$ is a generator in $\mathcal{P}$, not necessarily in $\mathcal{L}$.\\
\begin{table}[h]\begin{center}
\begin{tabular}{ | l l |c|c| }
\hline
Property && CL & degree one CL \\ \hline \hline
$\chi \in V_0 \perp V_1$. && $S$& $C$ \\ \hline
$\forall \pi \in \mathcal{P}$: $|\{\tau \in \mathcal{L}| \tau \cap \pi = \emptyset\}| = (x-\chi(\pi))q^{\binom d 2}$. && $C$& $N$ \\ \hline
$\chi-\frac{x}{q^d+1}\textbf{\textit{j}}$ is an eigenvector of $A_d$ with eigenvalue $-q^{\binom d 2}$. && $C$ & $N$\\ \hline
$\forall \pi \in \mathcal{P}, |\{\tau \in \mathcal{L}| \dim(\tau \cap \pi) = d-i-1 \}|=$ (\ref{formulelang}), for $0\leq i <d$ &&$S$& $C$ \\ \hline
If $\mathcal{P}$ admits a spread, then $|\mathcal{L}\cap S| = x$ for every spread $S$ of $\mathcal{P}$. && $C$ &$N$ \\ \hline
\end{tabular}
\caption{Overview of the sufficient ($S$), necessary ($N$) and characterising ($C$) properties.}\label{tabeldef} \end{center}\end{table}
\end{document}
\begin{document}
\title{Modelling of discrete extremes through extended versions of discrete generalized Pareto
distribution} \begin{abstract}
The statistical modelling of integer-valued extremes, such as large avalanche counts, has received less attention than that of their continuous counterparts in the extreme value theory (EVT) literature. One approach to moving from continuous to discrete extremes is to model threshold exceedances of integer random variables by the discrete version of the generalized Pareto distribution. Still, the optimal selection of the threshold that defines the exceedances remains a problematic issue. Moreover, within a regression framework, the many data points below the chosen threshold are either ignored or treated separately from the extremes. Considering these issues, we extend the idea of using a smooth transition between the two tails (lower and upper) to force large and small discrete extreme values to comply with EVT. For the case of zero inflation, we also develop models with an additional parameter. To incorporate covariates, we extend the generalized additive models (GAM) framework to discrete extreme responses. In the GAM forms, the parameters of our proposed models are quantified as functions of covariates. A maximum likelihood estimation procedure is implemented for estimation purposes. With the advantage of bypassing the threshold selection step, our findings indicate that the proposed models are more flexible and robust than competing models (i.e., the discrete generalized Pareto and Poisson distributions). \end{abstract}
\textbf{\textit{Keywords:}} Extreme value theory, Discrete extended generalized Pareto distribution, Zero-inflated models, Generalized additive models.
\section{Introduction}\label{sec:1}
Extreme Value Theory (EVT), originating in the pioneering work of \citet{fisher1928limiting}, offers a framework for the stochastic modelling of very high and very low frequency events (e.g., extreme temperatures, heavy rainfall intensities, heavy floods and extreme winds). Over the last three decades, \citet{coles2001introduction}, \citet{beirlant2004statistics} and \citet{hann2006extreme}, among others, have discussed extreme value models adapted to measuring uncertainty for continuous extreme events. More precisely, the distribution of exceedances (i.e., the amounts by which the data exceed a given high threshold) is often approximated by the so-called Generalized Pareto distribution (GPD), defined by its cumulative distribution function (CDF) as \begin{equation} \label{eq:1}
F(x;\sigma,\xi) =
\begin{cases}
1-\left(1+\xi x/\sigma\right)_{+}^{-1/\xi} & \xi\neq 0 \\
1- \exp{(-{x}/{\sigma})} & \xi = 0
\end{cases} \end{equation} where $(a)_{+}= \max(a,0)$, and $\sigma>0$ and $-\infty<\xi<+\infty$ represent the scale and shape parameters of the distribution, respectively. The shape parameter $\xi$ defines the tail behaviour of the GPD. If $\xi<0$, the upper tail is bounded. If $\xi=0$, we recover the exponential distribution, for which all moments are finite. If $\xi>0$, the upper tail is unbounded and higher moments eventually become infinite. These three cases are labelled ``short-tailed'', ``light-tailed'' and ``heavy-tailed'', respectively. This range of tail behaviours makes the GPD flexible for modelling excesses.
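As an illustrative sketch (not part of the original development; the function name is ours), the CDF \eqref{eq:1} and its three tail regimes can be transcribed directly:

```python
import math

def gpd_cdf(x, sigma, xi):
    """GPD cumulative distribution function, eq. (1)."""
    if x < 0:
        return 0.0
    if xi == 0:
        return 1.0 - math.exp(-x / sigma)
    return 1.0 - max(1.0 + xi * x / sigma, 0.0) ** (-1.0 / xi)

# xi < 0: bounded upper tail with right endpoint -sigma/xi
assert gpd_cdf(2.0, 1.0, -0.5) == 1.0
# xi = 0: exponential (light) tail
assert abs(gpd_cdf(1.0, 1.0, 0.0) - (1.0 - math.exp(-1.0))) < 1e-12
# xi > 0: heavy tail, survival decays polynomially, slower than exponentially
assert (1.0 - gpd_cdf(50.0, 1.0, 0.5)) > (1.0 - gpd_cdf(50.0, 1.0, 0.0))
```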
Optimal threshold selection in GPD applications remains an arguable and elusive task \citep[see, e.g.][]{dupuis1999exceedances, scarrott2012review}. Numerous studies \citep[for instance,][]{embrechts1999modelling, choulakian2001goodness, davison1990models, katz2002statistics, boutsikas2002modeling} have established how the GPD can be fitted to continuous extreme events. This is vindicated by Pickands' theorem \citep{pickands1975statistical}, which states that, for most random variables, the distribution of the exceedances converges to a GPD as the threshold increases to the right endpoint.
One of the major disadvantages of the GPD is that it only models the observations which occur above a certain high threshold. This imposes an artificial dichotomy in the data (i.e., observations are either below or above the threshold), and the question of finding the optimal threshold remains complex for practitioners.
In the continuous extreme value setting, many authors have attempted to model the entire range of the data without threshold selection. For example, \citet{frigessi2002dynamic} proposed a dynamically weighted mixture model that combines a light-tailed density and a heavy-tailed density (i.e., a GPD) through a weight function. The dynamically weighted mixture approach can be valuable in unsupervised tail estimation, especially in heavy-tailed situations and for small percentiles. Frigessi's model has many advantages, but also a drawback: the model has six parameters and inference is not a straightforward task \citep[see][for more details]{frigessi2002dynamic}.
\citet{carreau2009hybrid} proposed a semiparametric model, called the ``hybrid Pareto'' model, that stitches a Gaussian distribution to a heavy-tailed GPD. According to \citet{carreau2009hybrid}, the hybrid Pareto model
offers efficient estimates of the tail of the distributions and converges faster in terms of log-likelihood than existing GPD.
They used hybrid Pareto models in regression context for statistical modeling of rainfall-runoff.
\citet{macdonald2011flexible} combined a non-parametric kernel density estimator for the bulk of the distribution with a heavy-tailed GPD. One drawback of these approaches is that a suitable threshold still needs to be selected.
{
To keep a low number of parameters, avoid mixtures modeling and simplify inference, \citet{naveau2016modeling} proposed a general procedure to extend the GPD class.
This construction is based on the integral transform idea
to simulate GPD random draws, that is $F_{\sigma,\xi}^{-1}( U),$
where $U\sim\mathcal{U}(0,1)$ represents a uniformly distributed random variable on $(0,1)$ and $F_{\sigma,\xi}^{-1}$ denotes the inverse of the CDF \eqref{eq:1}.
This leads to the class of random variables stochastically defined as
\begin{equation}\label{eq:3}
F_{\sigma,\xi}^{-1}\{G^{-1}(U) \},
\end{equation} where $G$ is a CDF on $[0,1]$ and $U\sim\mathcal{U}(0,1)$. } The key problem is to find a class for $G$ which preserves the upper tail behavior with shape parameter $\xi$ and also controls the lower tail behavior. \citet{naveau2016modeling} defined restrictions for validity of $G$ families. For instance, the tail of $G$ denoted by $\Bar{G}=1-G$ has to satisfy \begin{eqnarray}
\lim_{u \to 0} \frac{\Bar{G}(1-u)}{u}&=&a, \mbox{ for some finite $a>0$ (upper tail behavior),}\nonumber\\
\lim_{u \to 0} \frac{G(u)}{u^\kappa}&=&c, \mbox{ for some finite $c>0$ (lower tail behavior).}\label{eq:cond}
\end{eqnarray} Four examples of parametric families $G$ were studied in \citet{naveau2016modeling}.
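A minimal sampling sketch for \eqref{eq:3}, taking $G(u)=u^\kappa$ (one of the families studied by \citet{naveau2016modeling}) so that $G^{-1}(p)=p^{1/\kappa}$; the function names are illustrative:

```python
import math
import random

def gpd_inv(u, sigma, xi):
    """Inverse of the GPD CDF in eq. (1)."""
    if xi == 0:
        return -sigma * math.log(1.0 - u)
    return (sigma / xi) * ((1.0 - u) ** (-xi) - 1.0)

def egpd_sample(n, sigma, xi, kappa, seed=0):
    """n EGPD draws via eq. (3) with G(u) = u^kappa, hence G^{-1}(p) = p^{1/kappa}."""
    rng = random.Random(seed)
    return [gpd_inv(rng.random() ** (1.0 / kappa), sigma, xi) for _ in range(n)]

xs = egpd_sample(2000, sigma=1.0, xi=0.2, kappa=2.0)
assert all(x >= 0.0 for x in xs)                 # support is [0, infinity)
med = gpd_inv(0.5 ** (1.0 / 2.0), 1.0, 0.2)      # theoretical median for kappa = 2
frac = sum(x <= med for x in xs) / len(xs)
assert 0.4 < frac < 0.6                          # empirical CDF near 1/2 at the median
```

With $\kappa=1$ the sampler reduces to plain GPD simulation by inversion.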
By construction, this approach bypasses
the elusive choice of a fixed and optimal threshold.
Inference can be performed with classical methods such as maximum likelihood and probability weighted moments \citep[see, e.g.][]{le2022improved,Furrer07}.
Semi-parametric modeling based on this class has been studied by \citet{tencaliec19} and
extensions to handle covariates have been proposed by \citet{carrer2022distributional} and \citet{decarvalho2022}. Furthermore, \citet{gamet2022} studied tail estimation based on the same construction. Still, model \eqref{eq:3} has yet to be tailored to handle discrete-valued random variables.
The subsequent development deals with the discrete extreme models. A probability mass function (PMF) can be obtained by discretizing the CDF defined by \eqref{eq:1} as \begin{equation} \label{eq:4}
P(X=k) = F(k+1;\sigma,\xi)- F(k;\sigma,\xi), \hspace{1cm} k \in \mathbb{N}. \end{equation} This is called the discrete GPD (DGPD). In the existing literature, the DGPD has been used by \citet{prieto2014modelling} to model road accidents, and \citet{ranjbar2020modelling} applied it in a regression context to model extremes of seasonal viruses and hospital congestion in Switzerland. Numerous features of discrete Pareto-type distributions were studied in \cite{krishna2009discrete,buddana2014discrete,kozubowski2015discrete}.
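The discretization \eqref{eq:4} is straightforward to implement; a sketch with illustrative names (here for $\xi\neq 0$):

```python
def gpd_cdf(x, sigma, xi):
    """GPD CDF, eq. (1), for xi != 0."""
    return 1.0 - max(1.0 + xi * x / sigma, 0.0) ** (-1.0 / xi)

def dgpd_pmf(k, sigma, xi):
    """P(X = k) = F(k+1) - F(k), eq. (4)."""
    return gpd_cdf(k + 1, sigma, xi) - gpd_cdf(k, sigma, xi)

probs = [dgpd_pmf(k, sigma=1.0, xi=0.5) for k in range(10_000)]
assert all(p >= 0.0 for p in probs)   # a valid PMF on the non-negative integers
assert 0.999 < sum(probs) < 1.0      # mass telescopes to F(10000), tending to 1
```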
Similar to the continuous GPD, the DGPD approximates well the distribution of discrete excesses over a high threshold \citep{hitz2017discrete}. Again, an appropriate threshold selection procedure is required to obtain an optimal threshold for fitting the DGPD. Moreover, a few questions arise: the DGPD models the data above the threshold, but how should one model the observations below the threshold, or the entire range of count data containing extreme observations?
One possibility is to use a threshold-spliced mixture representation for modeling the discrete observations. Again, an optimal threshold is needed for fitting a threshold-spliced mixture model; details are provided in the ``Supplementary materials''. With these arguments in mind, we want to develop a modeling framework which can be used to model the entire range of discrete extreme data without fixing a threshold value. To introduce such a framework, we take advantage of the constructions given in \citet{naveau2016modeling}. Thus, discrete extended versions of the GPD (DEGPD) are proposed here by discretizing the CDF of continuous extended generalized Pareto distributions via equation (\ref{eq:4}). Discrete extreme data may contain many zero values. For instance, insurance complaints data or avalanche data may include many zeros. This type of data is generally referred to as zero-inflated (ZI), requiring specialized statistical methods for analysis.
Therefore, we also introduce a zero-inflated version of the DEGPD, the ZIDEGPD. This model is useful for data which exhibit zero inflation while the remaining observations rise from the lower tail to the mode of the distribution and then decay exponentially towards the heavier upper tail.
Finally, we aim to take covariate effects into account. This relates to the work of \citet{ranjbar2020modelling}, who modelled discrete exceedances with covariate effects by using generalized additive model (GAM) forms of the DGPD, in the same spirit as generalized additive models for location, scale and shape \citep{rigby2005generalized}.
The paper is organized as follows. Section \ref{sec:2} presents the DEGPD and ZIDEGPD models along with a simple sampling scheme. The GAM forms of the DEGPD and ZIDEGPD are given in Section \ref{sec:3}. To assess the performance of the proposed models, Section \ref{sec:5} provides the results of an extensive simulation study. Section \ref{sec:6} discusses applications of the DEGPD and ZIDEGPD to data on the number of upheld complaints against insurance companies. A real application of the GAM-form models to avalanche data with environmental covariates is given in the same section. Finally, Section \ref{sec:7} concludes with a summary of our results and a discussion of some future research directions.
\section{Discrete extremes modeling}\label{sec:2}
\subsection{Discrete extended generalized Pareto distribution}
We start by considering the CDF $G\left\{F\left(.;\sigma,\xi\right) \right\}$ of the EGPD, where $G$ meets the conditions \eqref{eq:cond}. To model non-negative integer data, we discretize the CDF by \begin{eqnarray}\label{eq:DEGPD}
P(Y=k) &=& G\left\{F\left({k+1};\sigma,\xi\right) \right\}- G\left\{F\left({k};\sigma,\xi\right) \right\},\qquad k\in \mathbb{N}. \end{eqnarray}
The distribution defined by (\ref{eq:DEGPD}) will be called discrete extended generalized Pareto distribution (DEGPD).
The explicit formula for the CDF of the DEGPD is
\begin{equation}\label{eq:8}
P(Y\le k)= G\left\{F\left({k+1};\sigma,\xi\right) \right\}
\end{equation}
and the quantile function is derived as
\begin{equation}\label{eq:9}
q_{p}=
\left\{
\begin{array}{ll}
\left\lceil\frac{\sigma}{\xi}\left[ \left\{1- G^{-1}(p) \right\}^{-\xi}-1\right]\right\rceil-1, & \mbox{if } \xi>0\\
\left\lceil -\sigma \log \left\{1- G^{-1}(p) \right\}\right\rceil-1,& \mbox{if } \xi=0\\
\end{array}
\right.
\end{equation}
with $0<p<1$.
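As a sanity check (our own sketch, not part of the original text), the quantile formula \eqref{eq:9} can be verified against the CDF \eqref{eq:8}, taking $G(u)=u^\kappa$ so that $G^{-1}(p)=p^{1/\kappa}$:

```python
import math

def degpd_cdf(k, sigma, xi, kappa):
    """P(Y <= k) = G{F(k+1)} with G(u) = u^kappa, eq. (8), for xi > 0."""
    f = 1.0 - (1.0 + xi * (k + 1) / sigma) ** (-1.0 / xi)
    return f ** kappa

def degpd_quantile(p, sigma, xi, kappa):
    """Quantile formula, eq. (9), xi > 0 case, with G^{-1}(p) = p^{1/kappa}."""
    g_inv = p ** (1.0 / kappa)
    return math.ceil((sigma / xi) * ((1.0 - g_inv) ** (-xi) - 1.0)) - 1

# q_p should be the smallest integer k with P(Y <= k) >= p
sigma, xi, kappa = 1.0, 0.5, 2.0
for p in [0.1, 0.5, 0.9, 0.99]:
    k = degpd_quantile(p, sigma, xi, kappa)
    assert degpd_cdf(k, sigma, xi, kappa) >= p
    assert degpd_cdf(k - 1, sigma, xi, kappa) < p
```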
For $G$, we use four parametric expressions $G(\cdot,\psi)$, already proposed in \citet{naveau2016modeling}, namely \begin{enumerate}[label=\textbf{\roman*.}]
\item \label{c1}
$G(u;\psi)=u^{\kappa}$, $\psi=\kappa >0$;
\item \label{c3}
$G(u;\psi)=1-D_{\delta}\{(1-u)^{\delta}\}$, $\psi=\delta>0$ where $D_{\delta}$ is the CDF of a Beta random variable with parameters $1/\delta$ and $2$, that is:
$$D_{\delta}(u)=\frac{1+\delta}{\delta} u^{1/\delta} \Big( 1-\frac{u}{1+\delta} \Big)$$
\item \label{c4}
$G(u;\psi)=[1-D_{\delta}\{(1-u)^{\delta}\}]^{\kappa/2}$, $\psi=(\delta, \kappa)$ with $\delta>0$ and $\kappa>0$;
\item \label{c2}
$G(u;\psi)=p u^{\kappa_1}+(1-p)u^{\kappa_2}$, $\psi=(p,\kappa_{1},\kappa_{2})$ with $\kappa_2 \geq \kappa_1 >0$ and $p \in (0,1)$.
\end{enumerate}
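These four families can be sketched in code as follows (illustrative names; each should define a valid CDF on $[0,1]$):

```python
def D(u, delta):
    """CDF of a Beta(1/delta, 2) random variable, used in families (ii) and (iii)."""
    return (1.0 + delta) / delta * u ** (1.0 / delta) * (1.0 - u / (1.0 + delta))

def G1(u, kappa):               # family (i): power law
    return u ** kappa

def G2(u, delta):               # family (ii)
    return 1.0 - D((1.0 - u) ** delta, delta)

def G3(u, delta, kappa):        # family (iii)
    return G2(u, delta) ** (kappa / 2.0)

def G4(u, p, k1, k2):           # family (iv): mixture of power laws
    return p * u ** k1 + (1.0 - p) * u ** k2

# each family is a CDF on [0, 1]: endpoints 0 and 1, nondecreasing
for G in (lambda u: G1(u, 2.0), lambda u: G2(u, 3.0),
          lambda u: G3(u, 2.0, 5.0), lambda u: G4(u, 0.5, 1.0, 10.0)):
    vals = [G(i / 100.0) for i in range(101)]
    assert abs(vals[0]) < 1e-9 and abs(vals[-1] - 1.0) < 1e-9
    assert all(b >= a - 1e-12 for a, b in zip(vals, vals[1:]))
```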
The parametric family (i) leads to a PMF of the DEGPD with three parameters ($\kappa$, $\sigma$ and $\xi$): $\kappa$ controls the shape of the lower tail, $\sigma$ is a scale parameter, and $\xi$ controls the rate of upper tail decay. Figure \ref{fig:1}(a) shows the behavior of the PMF of the DEGPD with fixed scale and upper tail shape parameters (i.e., $\sigma=1$ and $\xi=0.5$) and with different lower tail behaviors ($\kappa=1, 2, 5, 10$). As in the EGPD framework of \citet{naveau2016modeling}, the DGPD is recovered when $\kappa=1$, and additional flexibility for low values is attained by varying $\kappa$. For instance, more flexibility in the lower tail can be observed, without losing the upper tail behavior, in Figure \ref{fig:1}(a) when setting $\kappa=10$.
The parametric family (iv) is a mixture of power laws: $\kappa_1$ identifies the shape of the lower tail, while $\kappa_2$ modifies the shape of the central part of the distribution, and $\sigma$ and $\xi$ are the scale and upper tail parameters, respectively. It can be observed from Figure \ref{fig:1}(d) that the DEGPD related to $G(u;\psi)=p u^{\kappa_1}+(1-p)u^{\kappa_2}$ also shows flexibility for $p=0.5$, $\sigma=1$, $\xi=0.5$, $\kappa_1=1, 2$ and different values of $\kappa_2$.
The parametric family (ii) is another interesting choice for the construction of the DEGPD. This choice is more complex than the previous two. Figure \ref{fig:1}(b) illustrates the behaviour of the PMF for different values of $\delta$. The EGPD connected with this $G$ family converges to the GPD when $\delta$ increases to infinity. Moreover, the conditions in \eqref{eq:cond} are satisfied with $\delta=2$ \citep[see][for more details]{naveau2016modeling}. In the discrete setting, the DEGPD corresponding to $G(u;\psi)=1-D_{\delta}\{(1-u)^{\delta}\}$ also becomes very close to the DGPD density when $\delta$ increases to infinity.
In general, the parameter $\delta$ describes the central part of the distribution and thus improves the modeling flexibility for that part. The parameter $\delta$ is sometimes interpreted as a ``threshold tuning parameter''. One drawback of DEGPD (ii) is that it models only the central and upper parts of the distribution; the lower tail behavior cannot be estimated directly. To address this drawback, another parametric family is used subsequently.
\begin{figure}
\caption{(a) Probability mass function corresponding to model \eqref{eq:DEGPD} of type (i) for $\sigma=1, \xi=0.5$ and lower tail shape parameter $\kappa=1,2,5,10$; (b) probability mass function corresponding to model \eqref{eq:DEGPD} combined with $G(u;\psi)= 1- D_{\delta}\{(1-u)^{\delta}\}$ for $\sigma=1, \xi=0.5$ and $\delta=\infty,5,3,1$; (c) probability mass function corresponding to model \eqref{eq:DEGPD} combined with $G(u;\psi)= [1- D_{\delta}\{(1-u)^{\delta}\}]^{\kappa/2}$ for $\sigma=1, \xi=0.5$, $\delta= 1, 2$ and $\kappa=1, 2, 5$; and (d) probability mass function corresponding to model \eqref{eq:DEGPD} combined with $G(u;\psi)= p u^{\kappa_1}+(1-p)u^{\kappa_2}$ for $\sigma=1, \xi=0.2$, $\kappa_1=1, 2$ and $\kappa_2= 1, 2, 5, 10$.}
\label{fig:1}
\end{figure}
The parametric family (iii) supports the lower tail of the distribution through the parameter $\kappa>0$. This family leads to a DEGPD with parameters $(\kappa,\delta,\sigma,\xi)$: $\kappa$, $\delta$ and $\xi$ describe the lower, central and upper parts of the distribution, respectively, and $\sigma$ is a scale parameter as usual. In particular, Figure \ref{fig:1}(c) shows the behavior of the PMF of the DEGPD linked with $G(u;\psi)=\left[1-D_{\delta}\{(1-u)^{\delta}\}\right]^{\kappa/2}$ at different settings of the parameters.
Overall, all types of DEGPD discussed above, combined with the different parametric families, are flexible for modeling discrete extremes, except DEGPD (ii): the DEGPD corresponding to parametric family (ii) has limited flexibility in the lower tail.
In addition, a large number of zeros can be found in many practical data sets. In that case, the usual statistical models with a flexible lower tail cannot be adjusted for the excessive zeros, which complicates a precise statistical analysis; an investigation of the origin of these zeros is essential. The next subsection explains the zero-inflation modeling framework.
\subsection{Zero-inflated discrete extended generalized Pareto distribution} Following \citet{lambert1992zero}, we suppose that
$Z$ is observed with an excessive number of zeros relative to those expected under the DEGPD. The zero-inflated distribution (ZIDEGPD) is then defined in a straightforward way as:
\begin{equation}\label{eq:ZIDEGPD}
P(Z=m)=\left\{
\begin{array}{ll}
\pi +(1-\pi)G\left\{F\left(1,\sigma,\xi\right) \right\} & m=0 \\
(1-\pi)\left[G\left\{
F\left(m+1,\sigma,\xi\right) \right\} -G\left\{F\left(m,\sigma,\xi\right) \right\}
\right] & m= 1,2,\ldots
\end{array}
\right.
\end{equation}
where $0\leq \pi \leq1$ and the remaining parameters are the same as for the DEGPD. It turns out that the CDF of the ZIDEGPD is
\begin{equation}\label{eq:11}
P(Z\leq m)=
\pi +(1-\pi)G\left\{F\left(m+1,\sigma,\xi\right) \right\},\qquad m\in \mathbb{N},
\end{equation}
and the quantile function is
\begin{equation}\label{eq:12}
q_{p^{*}}=\left\{
\begin{array}{lc}
\left\lceil\frac{\sigma}{\xi}\left[ \left\{1- G^{-1}(p^{*}) \right\}^{-\xi}-1\right]\right\rceil-1, & \xi>0\\
\left\lceil -\sigma \log \left\{1- G^{-1}(p^{*}) \right\}\right\rceil-1, & \xi=0\\
\end{array}
\right.
\end{equation}
with $0<p^{*}=(p-\pi)/(1-\pi)<1$. Again, the above expressions are quite simple and straightforward for the four existing G families. The flexibility of the proposed extended versions is illustrated below through the PMF with different lower tail parameters.
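For family (i), $G^{-1}(p^{*})=(p^{*})^{1/\kappa}$, so \eqref{eq:11} and \eqref{eq:12} can be checked against each other numerically. The Python sketch below is illustrative (the function names are ours):

```python
import math

def zidegpd_cdf(m, pi, sigma, xi, kappa):
    """P(Z <= m) = pi + (1 - pi) * G(F(m+1)) with G(u) = u**kappa (family (i))."""
    if xi > 0:
        F = 1.0 - (1.0 + xi * (m + 1.0) / sigma) ** (-1.0 / xi)
    else:
        F = 1.0 - math.exp(-(m + 1.0) / sigma)
    return pi + (1.0 - pi) * F ** kappa

def zidegpd_quantile(p, pi, sigma, xi, kappa):
    """Smallest m with P(Z <= m) >= p; G^{-1}(p*) = p***(1/kappa) for family (i)."""
    if p <= pi:
        return 0
    g_inv = ((p - pi) / (1.0 - pi)) ** (1.0 / kappa)
    if xi > 0:
        q = math.ceil(sigma / xi * ((1.0 - g_inv) ** (-xi) - 1.0)) - 1
    else:
        q = math.ceil(-sigma * math.log(1.0 - g_inv)) - 1
    return max(q, 0)
```

By construction, the quantile is the smallest integer $m$ at which the CDF reaches $p$, which is exactly what the ceiling expressions in \eqref{eq:12} encode.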
The parametric family (i) leads to a PMF of the ZIDEGPD given in \eqref{eq:ZIDEGPD} with four parameters $(\pi, \kappa, \sigma$ and $\xi)$: $\pi$ is the proportion of zero observations inflating the data distribution, and the interpretation of the other parameters is the same as for the DEGPD. Figure \ref{fig:2}(a) shows the behavior of the PMF of the ZIDEGPD with fixed zero-inflation, scale and upper tail shape parameters (i.e., $\pi= 0.20$, $\sigma=1$ and $\xi=0.2$) and different values of the lower tail parameter ($\kappa=1, 2, 5, 10$). The zero-inflated DGPD is recovered when $\kappa=1$, with a larger proportion of zero values. Additional flexibility for low values, together with zero inflation, is attained by varying $\kappa$: for instance, more lower tail flexibility with zero inflation can be observed in Figure \ref{fig:2}(a), without losing the upper tail behavior, when setting $\kappa=10$ or $20$. The ZI Poisson and ZIDEGPD densities act similarly for small and moderate values, but the dissimilarity may grow in the upper tail.
The ZIDEGPD corresponding to parametric family (iv) has six parameters $(\pi, p,\kappa_1, \kappa_2, \sigma$ and $\xi)$ under the restriction $\kappa_1\leq \kappa_2$. It can be noticed from Figure \ref{fig:2}(d) that the ZIDEGPD based on $G(u;\psi)=p u^{\kappa_1}+(1-p)u^{\kappa_2}$ is also flexible and produces zero inflation when using $\pi=0.2$, $ p=0.5$, $\sigma=1$, $\xi=0.2$, $\kappa_1=1, 2$ and different values of $\kappa_2$. Again, the ZI Poisson and ZIDEGPD densities show similar behaviour for small and moderate values, but they may differ in the upper tail owing to the heavy tail of the ZIDEGPD.
The ZIDEGPD proposed by using $G(u;\psi)=1-D_{\delta}\{(1-u)^{\delta}\}$ has four parameters $(\pi, \delta, \sigma$ and $\xi)$. Figure \ref{fig:2}(b) describes the behaviour of the PMF with fixed values of the parameters (i.e., $\pi=0.2$, $\sigma=1$ and $\xi=0.2$) and different values of $\delta$. The ZIDEGPD converges to the ZIDGPD as $\delta$ increases to infinity, and the amount of zeros increases with $\delta$. It behaves like the ZIP in the lower tail and in the central part when the mean of the ZIP is small. The disadvantage of this kind of ZIDEGPD is that it models only the central and upper parts of the distribution; the lower tail behavior cannot be estimated directly even though we have zero inflation.
\begin{figure}
\caption{(a) Probability mass function corresponding to the model in \eqref{eq:ZIDEGPD} with $G(u;\psi)= u^{\kappa}$ having $\pi=0.2$, $\sigma=1$, $\xi=0.2$ and lower tail shape parameter $\kappa=1, 5, 10, 20$; (b) probability mass function for \eqref{eq:ZIDEGPD} combined with $G(u;\psi)= 1- D_{\delta}\{(1-u)^{\delta}\}$ for $\pi=0.2$, $\sigma=1$, $\xi=0.2$ and $\delta=1, 3, 5, \infty$; (c) probability mass function for \eqref{eq:ZIDEGPD} combined with $G(u;\psi)= [1- D_{\delta}\{(1-u)^{\delta}\}]^{\kappa/2}$ for $\pi=0.2$, $\sigma=1$, $\xi=0.2$, $\delta= 1, 2, 5$ and $\kappa=1, 5, 10, 20$; and (d) probability mass function for \eqref{eq:ZIDEGPD} combined with $G(u;\psi)= p u^{\kappa_1}+(1-p)u^{\kappa_2}$ for $\pi=0.2$, $p=0.5$, $\sigma=1$, $\xi=0.2$, $\kappa_1=1, 2$ and $\kappa_2= 1, 2, 5, 10$. The orange lines represent the zero-inflated Poisson probability mass function with different settings of its parameter.}
\label{fig:2}
\end{figure}
The proposed ZIDEGPD based on $G(u;\psi)=\left[1-D_{\delta}\{(1-u)^{\delta}\}\right]^{\kappa/2}$ has five parameters: $\pi$ gives the proportion of zero values; $\kappa$, $\delta$ and $\xi$ represent the lower, central and upper parts of the distribution, respectively; and $\sigma$ is a scale parameter as usual. Figure \ref{fig:2}(c) shows the behavior of the PMF of this ZIDEGPD at different settings of the parameters. Zero inflation also occurs as $\kappa$ increases. Again, the density shape is similar to the ZIP in the lower and central parts of the distribution.
In general, all types of ZIDEGPD discussed above, supported by the different parametric families, are flexible in both tails when modeling zero-inflated discrete extremes. The ZIDEGPD (ii) corresponding to parametric family (ii) has limited flexibility in the lower tail, but its $\pi$ parameter captures the zero proportion directly.
\section{Generalized additive modelling}\label{sec:3}
The objective of this section is to propose regression-based discrete extreme models by letting the parameters of the discrete extreme models vary with covariates. In the continuous framework of extreme value modeling, it is very appealing to employ techniques that allow for flexible forms of dependence on covariates. \citet{davison1990models} used such models for the size and occurrence of excesses over a high threshold through the GPD. \citet{pauli2001penalized} proposed smooth models for extreme value distribution parameters based on penalized likelihood. Later on, \citet{chavez2005generalized} used the Generalized Additive Model (GAM), originally proposed by \citet{hastie1990generalized}, to flexibly estimate GPD parameters with an orthogonal reparametrization. \citet{yee2007vector} developed vector generalized additive models to model generalized extreme value distribution parameters as linear or smooth functions of covariates; these models can easily be implemented in the R package \texttt{VGAM}. More recently, \citet{youngman2019generalized} modeled threshold exceedances with GPD parameters of GAM form. Generally, GAM-form models are characterized by additive smooth representations built from splines.
In the sequel we denote by $\theta = (\theta_1, \ldots, \theta_d)^T$ the vector of parameters $(\xi,\sigma,\psi^T)^T$ or $(\xi,\sigma,\psi^T,\pi)^T$. In practice, the parameters of the distribution of $Y$ or $Z$ may depend on some covariates $\boldsymbol{x}$, i.e. $\theta(\boldsymbol{x})=(\theta_1(\boldsymbol{x}),\ldots,\theta_d(\boldsymbol{x}))^T$. This specification is an instance of a distributional regression model \citep{Stasinopoulos-et-al-2018}.
For relating the distributional parameters $(\theta_1(\boldsymbol{x}),\ldots,\theta_d(\boldsymbol{x}))$ to the covariates, we consider additive predictors of the form \begin{equation}\label{eq:predictors}
\eta_i(\boldsymbol{x})=s_{i1}(\boldsymbol{x})+\cdots+s_{iJ_i}(\boldsymbol{x}) \end{equation} where $s_{i1}(\cdot),\ldots, s_{iJ_i}(\cdot)$ are smooth functions of the covariates $\boldsymbol{x}$. The predictors are linked to the distributional parameters via known monotonic and twice differentiable link functions $h_i(\cdot)$. \begin{equation}\label{eq:link}
\theta_i(\boldsymbol{x})=h_i(\eta_i(\boldsymbol{x})), \qquad i=1,\ldots, d. \end{equation} In the case of the model with $G(u;\psi)=u^\kappa$, common link functions are $$ \xi(\boldsymbol{x})=\eta_\xi(\boldsymbol{x}),\quad \sigma(\boldsymbol{x})=\exp(\eta_\sigma(\boldsymbol{x})),\quad \kappa(\boldsymbol{x})=\exp(\eta_{\kappa}(\boldsymbol{x})),\quad \pi(\boldsymbol{x})= \frac{\exp(\eta_{\pi}(\boldsymbol{x}))}{1+\exp(\eta_{\pi}(\boldsymbol{x}))}. $$
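These link choices can be written out directly. The small Python sketch below is only an illustration of the maps $h_i$ (identity for $\xi$, log links for $\sigma$ and $\kappa$, logistic for $\pi$):

```python
import math

# link functions h_i mapping an unconstrained predictor eta to a valid parameter
links = {
    "xi":    lambda eta: eta,                                    # identity: xi in R
    "sigma": lambda eta: math.exp(eta),                          # log link: sigma > 0
    "kappa": lambda eta: math.exp(eta),                          # log link: kappa > 0
    "pi":    lambda eta: math.exp(eta) / (1.0 + math.exp(eta)),  # logistic: 0 < pi < 1
}
```

Each link is monotonic and twice differentiable, as required, and maps the real line onto the admissible range of the corresponding parameter.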
The functions $s_{ij}$ in \eqref{eq:predictors} are approximated in terms of basis function expansions \begin{equation}\label{eq:basis}
s_{ij}(\boldsymbol{x})=\sum_{k=1}^{K_{ij}} \beta_{ij,k}B_k(\boldsymbol{x}), \end{equation} where $B_k(\boldsymbol{x})$ are basis functions and $\beta_{ij,k}$ denote the corresponding basis coefficients. These bases can be of different types \citep[see][for instance]{wood2017generalized}. The basis function expansion can be written as $ s_{ij}(\boldsymbol{x})= \boldsymbol{t}_{ij}(\boldsymbol{x})^T\boldsymbol{\beta}_{ij} $, where $\boldsymbol{t}_{ij}(\boldsymbol{x})$ is a vector of transformed covariates that depends on the basis functions and $\boldsymbol{\beta}_{ij}=(\beta_{ij,1},\ldots, \beta_{ij,K_{ij}})^T$ is a parameter vector to be estimated.
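A minimal illustration of the expansion \eqref{eq:basis} follows, using a simple truncated-power cubic spline basis as a stand-in for the penalized bases of \texttt{mgcv}/\texttt{evgam} (names and knot placement are illustrative assumptions):

```python
import numpy as np

def spline_basis(x, knots):
    """Truncated-power cubic basis B_k(x): 1, x, x^2, x^3, (x - t)_+^3 per knot t."""
    x = np.asarray(x, dtype=float)
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - t, 0.0, None) ** 3 for t in knots]
    return np.column_stack(cols)

def smooth(x, beta, knots):
    """s(x) = sum_k beta_k B_k(x), i.e. the expansion in eq. (basis)."""
    return spline_basis(x, knots) @ beta

x = np.linspace(0.0, 1.0, 5)
B = spline_basis(x, knots=[0.25, 0.5, 0.75])     # 4 polynomial + 3 knot columns
beta_lin = np.zeros(B.shape[1]); beta_lin[1] = 1.0
s_lin = smooth(x, beta_lin, [0.25, 0.5, 0.75])   # recovers the linear function x
```

Here the rows of `B` play the role of $\boldsymbol{t}_{ij}(\boldsymbol{x})^T$, so $s_{ij}(\boldsymbol{x})$ is just an inner product with $\boldsymbol{\beta}_{ij}$.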
To estimate the parameters of the proposed models, the maximum likelihood estimation (MLE) method is used. Let $y_1, \ldots, y_n$ be $n$ independent observations from \eqref{eq:DEGPD} and $\boldsymbol{x}_1,\ldots,\boldsymbol{x}_n$ the related covariates. The log-likelihood function is given by \begin{equation}\label{eq:lik}
l(\boldsymbol{\beta})= \sum_{i=1}^{n} \log \left[ G(F(y_{i}+1;\sigma(\boldsymbol{x}_i), \xi(\boldsymbol{x}_i));\psi(\boldsymbol{x}_i))-
G(F(y_{i};\sigma(\boldsymbol{x}_i), \xi(\boldsymbol{x}_i));\psi(\boldsymbol{x}_i)) \right], \end{equation} where $\boldsymbol{\beta}$ collects all the unknown coefficients $\beta_{ij,k}$ of the basis expansions.
Instead, if we consider $n$ independent observations $z_1,\ldots,z_n$ from \eqref{eq:ZIDEGPD}, we get \begin{eqnarray}\label{eq:likzi}
l(\boldsymbol{\beta})&=& \sum_{i=1}^n I_0(z_i)\log \left[\pi(\boldsymbol{x}_i)+(1-\pi(\boldsymbol{x}_i))
G(F(1;\sigma(\boldsymbol{x}_i), \xi(\boldsymbol{x}_i));\psi(\boldsymbol{x}_i))\right] \nonumber \\
&&+\sum_{i=1}^{n}(1-I_0(z_i)) \log\Big\{(1-\pi(\boldsymbol{x}_i))\times \nonumber\\
&& \left[ G(F(z_{i}+1;\sigma(\boldsymbol{x}_i), \xi(\boldsymbol{x}_i));\psi(\boldsymbol{x}_i))-
G(F(z_{i};\sigma(\boldsymbol{x}_i), \xi(\boldsymbol{x}_i));\psi(\boldsymbol{x}_i)) \right]\Big\}, \end{eqnarray} where $I_0(z_i)=1$ if $z_i=0$ and $I_0(z_i)=0$ otherwise.
Setting the derivatives with respect to the unknown parameters of the DEGPD and ZIDEGPD to zero yields equations that can be solved by standard numerical techniques to obtain the maximum likelihood estimates.
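As an illustration, the log-likelihood \eqref{eq:lik} for family (i) can be evaluated directly, and samples can be drawn by inversion of the CDF. The Python sketch below (illustrative names, fixed seed) only checks that the true parameters attain a higher likelihood than a distorted guess; it is not the optimization routine used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def degpd_rvs(n, sigma, xi, kappa, rng):
    """Sample family (i) by inversion: Z = max(ceil(F^{-1}(U^{1/kappa})) - 1, 0)."""
    g_inv = rng.uniform(size=n) ** (1.0 / kappa)
    y = sigma / xi * ((1.0 - g_inv) ** (-xi) - 1.0)
    return np.maximum(np.ceil(y) - 1.0, 0.0)

def degpd_negloglik(params, z):
    """Negative log-likelihood of eq. (lik) for G(u) = u**kappa."""
    sigma, xi, kappa = params
    if sigma <= 0 or xi <= 0 or kappa <= 0:
        return np.inf
    F = lambda y: 1.0 - (1.0 + xi * y / sigma) ** (-1.0 / xi)
    pmf = F(z + 1.0) ** kappa - F(z) ** kappa
    return -np.sum(np.log(pmf))

z = degpd_rvs(2000, sigma=1.0, xi=0.2, kappa=2.0, rng=rng)
nll_true = degpd_negloglik((1.0, 0.2, 2.0), z)   # likelihood at the true parameters
nll_off  = degpd_negloglik((3.0, 0.8, 0.5), z)   # a clearly worse parameter guess
```

Passing `degpd_negloglik` to any standard numerical optimizer (e.g. Nelder--Mead) would give the MLE described above.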
To ensure regularization of the functions $s_{ij}(\boldsymbol{x})$, so-called penalty terms are added to the log-likelihood objective. Usually, the penalty for each function $s_{ij}(\boldsymbol{x})$ is a quadratic penalty $\boldsymbol{\beta}_{ij}^T \boldsymbol{G}_{ij}(\boldsymbol{\lambda}_{ij}) \boldsymbol{\beta}_{ij}$, where $\boldsymbol{G}_{ij}(\boldsymbol{\lambda}_{ij})$ is a known positive semi-definite matrix and the vector $\boldsymbol{\lambda}_{ij}$ regulates the amount of smoothing needed for the fit. A special case is $\boldsymbol{G}_{ij}(\boldsymbol{\lambda}_{ij})=\lambda_{ij}\boldsymbol{G}_{ij}$. The type and properties of the smoothing functions are therefore controlled by the vectors $\boldsymbol{t}_{ij}(\boldsymbol{x})$ and the matrices $\boldsymbol{G}_{ij}(\boldsymbol{\lambda}_{ij})$.
The penalized log-likelihood function for the latter models reads: \begin{equation}\label{eq:penlik}
l_p(\boldsymbol{\beta}) = l(\boldsymbol{\beta}) - \frac{1}{2}\sum_{i=1}^d \sum_{j=1}^{J_i} \boldsymbol{\beta}_{ij}^T \boldsymbol{G}_{ij}(\boldsymbol{\lambda}_{ij}) \boldsymbol{\beta}_{ij} \end{equation} where $l(\boldsymbol{\beta})$ is the log-likelihood function \eqref{eq:lik} or \eqref{eq:likzi}.
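The penalty in \eqref{eq:penlik} is easy to assemble. The sketch below uses a second-difference penalty matrix as an illustrative choice of $\boldsymbol{G}_{ij}$ (under which a linear coefficient vector is left unpenalized):

```python
import numpy as np

def penalized_loglik(loglik, betas, penalties):
    """l_p = l - 0.5 * sum_ij beta_ij' G_ij beta_ij  (eq. penlik)."""
    pen = sum(b @ (G @ b) for b, G in zip(betas, penalties))
    return loglik - 0.5 * pen

K = 5
D = np.diff(np.eye(K), n=2, axis=0)   # rows compute second differences of beta
G = 2.0 * (D.T @ D)                   # special case lambda_ij * G_ij with lambda = 2

beta_linear = np.arange(K, dtype=float)        # linear trend: zero second differences
beta_bump = np.zeros(K); beta_bump[2] = 1.0    # wiggly coefficient vector: penalized
lp_linear = penalized_loglik(-10.0, [beta_linear], [G])  # penalty is exactly zero
lp_bump = penalized_loglik(-10.0, [beta_bump], [G])
```

Larger $\lambda_{ij}$ shrinks the fit toward functions with small second differences, i.e. smoother $s_{ij}$.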
To fit the DEGPD and ZIDEGPD in GAM form, we have written R code that implements the distributions as ``new families'' for the \texttt{evgam} R package \citep{youngman2020evgam}. An example R script, ``Fit\_degpd\_zidegpd.R'', and the complete source code are provided on GitHub at \href{https://github.com/touqeerahmadunipd/degpd-and-zidegpd}{https://github.com/touqeerahmadunipd/degpd-and-zidegpd}.
\section{Simulation study}\label{sec:5} \subsection{Discrete extended Generalized Pareto distribution}\label{subsec: 5.1} This section presents a simulation study designed to evaluate the performance of maximum likelihood estimation (MLE). Different settings of the parameters are tried for each model. The scale and upper tail shape parameters are fixed at $\sigma=1$ and $\xi=0.2$ for all four models. A sample size of $n=1000$ with $10^4$ replications is used to calculate root mean square errors (RMSEs) for each model. The remaining parameters of the proposed models are set in the following way \begin{itemize}
\item[(i)] $G(u;\psi)=u^{\kappa}$ with lower tail parameter $\kappa$ = 1, 2, 5, 10.
\item[(ii)] $G(u;\psi)=1-D_{\delta}\{(1-u)^{\delta}\}$, with $\delta$ = 0.5, 1, 2, 5.
\item[(iii)] $G(u;\psi)=\left[1-D_{\delta}\{(1-u)^{\delta}\}\right]^{\kappa/2}$ with $\delta$ = 0.5, 1, 2, 5 and $\kappa$ = 1, 2, 5, 10.
\item[(iv)] $G(u;\psi)= p u^{\kappa_1} + (1-p)u^{\kappa_2}$ with $p=0.5$, $\kappa_1$ = 1, 2, 5, 10 and $\kappa_2$ = 2, 5, 10, 20. \end{itemize}
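A miniature version of this experiment for family (i) can be sketched as follows. For speed, this hedged illustration profiles only $\kappa$ on a grid, holding $\sigma$ and $\xi$ at their true values, with far fewer replications than the $10^4$ used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, xi, kappa = 1.0, 0.2, 2.0     # true values, as in setting (i)

def sample(n):
    """Draw n DEGPD (i) variates by inversion of the CDF."""
    g_inv = rng.uniform(size=n) ** (1.0 / kappa)
    y = sigma / xi * ((1.0 - g_inv) ** (-xi) - 1.0)
    return np.maximum(np.ceil(y) - 1.0, 0.0)

def profile_mle_kappa(z, grid):
    """Crude profile MLE of kappa, holding sigma and xi at their true values."""
    F = lambda y: 1.0 - (1.0 + xi * y / sigma) ** (-1.0 / xi)
    ll = [np.sum(np.log(F(z + 1.0) ** k - F(z) ** k)) for k in grid]
    return float(grid[int(np.argmax(ll))])

grid = np.linspace(0.5, 5.0, 91)
est = [profile_mle_kappa(sample(400), grid) for _ in range(50)]
rmse = float(np.sqrt(np.mean((np.array(est) - kappa) ** 2)))
```

The full study instead maximizes over all parameters jointly, which is why the reported RMSEs also reflect the interplay between $\kappa$, $\sigma$ and $\xi$.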
Boxplots of the MLEs are constructed to assess the models' performance for the above representative cases. Figure \ref{fig:3} shows the boxplots of the estimated parameters for all four models under the different simulation settings. Figure \ref{fig:3}(a) shows the MLEs of the DEGPD based on the parametric family $G(u;\psi)=u^{\kappa}$ with true parameter settings $\kappa=1, 2, 5, 10$, $\sigma=1$ and $\xi=0.2$. The horizontal red line in each boxplot represents the true parameter value. Figure \ref{fig:3}(a) indicates that the MLEs of DEGPD (i) are quite reasonable, with low variability.
\begin{figure}
\caption{$G(u;\psi)= u^\kappa$}
\caption{$G(u;\psi)= pu^{\kappa_1} + (1-p)u^{\kappa_2}$}
\caption{Boxplots of maximum likelihood estimates of the parameters of each model from $n=1000$ samples with $10^4$ replications at different parameter settings: $\bold{(a)}$ corresponds to model type (i), $\bold{(b)}$ to model type (ii), $\bold{(c)}$ to model type (iii), and $\bold{(d)}$ to model type (iv).}
\label{fig:3}
\end{figure}
Figure \ref{fig:3}(b) reports much more variability in the estimates of the parameter $\delta$ as its true value increases. This may happen because $\delta$ acts as a skewness parameter. Skewness parameters such as $\delta$ and $\kappa_2$ are hard to estimate by the MLE method, as observed by \citet{naveau2016modeling} for extended generalized Pareto distributions, \citet{sartori2006bias} for the skew-normal distribution and \citet{ribereau2016skew} for the skew generalized extreme value distribution.
\begin{figure}
\caption{$G(u;\psi)= u^\kappa$}
\caption{$G(u;\psi)= pu^{\kappa_1} + (1-p)u^{\kappa_2}$}
\caption{Boxplots of maximum likelihood estimates of the parameters of each ZIDEGPD model from $n=1000$ samples with $10^4$ replications at different parameter settings: $\bold{(a)}$ corresponds to model type (i), $\bold{(b)}$ to model type (ii), $\bold{(c)}$ to model type (iii), and $\bold{(d)}$ to model type (iv).}
\label{fig:4}
\end{figure} Similarly, Figure \ref{fig:3}(c) shows again that the estimates of $\delta$ deteriorate as its true value increases. Figure \ref{fig:3}(d) shows that the MLEs of all parameters are reasonable, with some variability in $\kappa_2$ and occasionally in $\kappa_1$. Again, this is possibly due to $\kappa_2$ acting as a skewness parameter \citep{naveau2016modeling}.
For further investigation, the RMSEs of the model parameters for each configuration are given in Table \ref{DEGPDRMSE}. Overall, the table shows that the maximum likelihood estimator performs well for model type (i), although the RMSE of the lower shape parameter $\kappa$ grows with its true value. Model type (ii) shows that the MLEs are sensible when the threshold tuning parameter $\delta<5$. The RMSEs for model type (iii) show that the MLEs are poor when $\delta>1$. Finally, for model type (iv), the parameters $\kappa_1$ and $\kappa_2$ entail much variability, especially when $\kappa_1>2$ and $\kappa_2>5$. \begin{table}[H]
\centering
\caption{Root mean square errors of the parameter estimates of the DEGPD obtained from $10^4$ independent data sets of size $n=1000$. The columns labelled with parameter names report the true values used in the simulation.}
\noindent
\begin{tabular}{c c c c c c c c c c }
\toprule
\multicolumn{10}{c}{ $G(u;\psi)= u^\kappa$}\\
\hline
$\kappa$ &RMSE &$\sigma$ &RMSE & $\xi$ &RMSE \\
\hline
1 & 0.24 &1 & 0.21 &0.20& 0.07\\
2 & 0.35 &1 & 0.16 &0.20& 0.05 \\
5 & 0.76 & 1 & 0.12&0.20& 0.04\\
10 &2.05 &1 & 0.13 &0.20& 0.03\\
\hline
\multicolumn{10}{c}{ $G(u;\psi)=1-D_{\delta}\{(1-u)^{\delta}\}$}\\
\hline
{$\delta$}&RMSE & {$\sigma$}&RMSE & {$\xi$} &RMSE \\
\hline
0.5 & 1.52 &1 & 0.33 &0.20& 0.06\\
1 & 1.36 &1 & 0.22 &0.20& 0.06 \\
2 & 2.13 & 1 & 0.20&0.20& 0.06\\
5 &19.64 &1 & 0.16 &0.20& 0.06\\
\hline
\multicolumn{10}{c}{$G(u;\psi)=\left[1-D_{\delta}\{(1-u)^{\delta}\}\right]^{\kappa/2}$}\\
\hline
{$\delta$}&RMSE & {$\kappa$}&RMSE & {$\sigma$}&RMSE& {$\xi$}&RMSE \\
\hline
0.5 & 0.42&1 &0.20 & 1 & 0.20 &0.20& 0.06\\
1 & 0.94 & 2& 0.39 & 1 & 0.20 &0.20& 0.05 \\
2 & 1.94 & 5&1.01 & 1 & 0.19&0.20& 0.04\\
5 &30.15 &10 & 2.33& 1 & 0.13 &0.20& 0.04\\
\hline
\multicolumn{10}{c}{$G(u;\psi)= pu^{\kappa_1} + (1-p)u^{\kappa_2}$}\\
\hline
{$p$}&RMSE & {$\kappa_1$}&RMSE & {$\kappa_2$}&RMSE& {$\sigma$}&RMSE& {$\xi$}&RMSE \\
\hline
0.5 & 0.28&1 &0.58 &2 &1.83& 1 & 0.23 &0.20& 0.06\\
0.5& 0.32 & 2& 0.98 & 5&6.43& 1 & 0.26 &0.20& 0.06\\
0.5 & 0.32 & 5&2.13 &10 &12.19&1 & 0.23&0.20& 0.04\\
0.5 &0.30 &10 & 4.62& 20&25.47&1 & 0.23 &0.20& 0.04\\
\toprule
\end{tabular}
\label{DEGPDRMSE} \end{table}
\subsection{Zero-inflated discrete extended Generalized Pareto
distribution}\label{subsec: 5.2} To evaluate the maximum likelihood estimator for the ZIDEGPD models, a simulation study has been conducted with different configurations of the parameters. The scale and upper tail shape parameters are fixed at $\sigma=1$ and $\xi=0.2$ for all four models. As for the DEGPD, a sample size of $n=1000$ with $10^4$ replications is used to calculate the RMSEs for each model. The other parameters of the proposed ZIDEGPD are chosen as \begin{itemize}
\item[(i)] $G(u;\psi)= u^\kappa$ with lower tail parameter $\kappa$ = 5, 10
\item[(ii)] $G(u;\psi)=1-D_{\delta}\{(1-u)^{\delta}\}$, with $\delta$ = 1, 5.
\item[(iii)] $G(u;\psi)=\left[1-D_{\delta}\{(1-u)^{\delta}\}\right]^{\kappa/2}$ with $\delta$ = 1, 5 and $\kappa$ =5, 10.
\item[(iv)] $G(u;\psi)= pu^{\kappa_1} + (1-p)u^{\kappa_2}$ with $p=0.5$, $\kappa_1$ = 1, 5 and $\kappa_2$ = 5, 10. \end{itemize} The zero-inflation parameter (i.e., the proportion of zeros) $\pi$ is set to $0.2$ and $0.5$ for all models.
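Sampling from the ZIDEGPD in such a configuration is straightforward; the hedged Python sketch below, for family (i), produces a structural zero with probability $\pi$ and otherwise a DEGPD draw by inversion:

```python
import numpy as np

def zidegpd_rvs(n, pi, sigma, xi, kappa, rng):
    """ZIDEGPD (i) sampling: a structural zero with probability pi,
    otherwise a DEGPD draw by inversion of the CDF."""
    mix = rng.uniform(size=n)
    g_inv = rng.uniform(size=n) ** (1.0 / kappa)
    y = np.maximum(np.ceil(sigma / xi * ((1.0 - g_inv) ** (-xi) - 1.0)) - 1.0, 0.0)
    return np.where(mix < pi, 0.0, y).astype(int)

z = zidegpd_rvs(10000, pi=0.5, sigma=1.0, xi=0.2, kappa=5.0,
                rng=np.random.default_rng(7))
zero_prop = float(np.mean(z == 0))   # roughly pi + (1 - pi) * F(1)**kappa
```

Note that the observed proportion of zeros exceeds $\pi$, since the DEGPD component itself can produce zeros.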
Figure \ref{fig:4} clearly shows that the parameters of the ZIDEGPD are estimated accurately for $G(u;\psi)= u^{\kappa}$, even when the proportion of zeros is high. As for the DEGPD, the estimates of $\delta$ for ZIDEGPD model (ii), of $\kappa$ and $\delta$ for ZIDEGPD model (iii), and of $\kappa_1$ and $\kappa_2$ for ZIDEGPD model (iv) show more variability, as already noted in the DEGPD cases.
As for the DEGPD, we check the performance of the maximum likelihood estimator for the ZIDEGPD by examining the RMSEs of the parameters. We found that the $\delta$ parameter of model types (ii) and (iii) and the $\kappa_2$ parameter of model type (iv) entail much variability when estimated by MLE; similar behavior has already been noted for the DEGPD models. In addition, we found that the ZIDEGPD based on $G(u;\psi)=u^{\kappa}$ is more reliable than the other models. The RMSEs of the ZIDEGPD parameters are reported in the ``Supplementary materials''.
\section{Real data applications}\label{sec:6} This section discusses two applications of the models proposed in Sections \ref{sec:2} and \ref{sec:3}. First we consider a dataset on automobile insurance claims; then we consider avalanche data from the Haute-Maurienne massif in the French Alps, with environmental variables as covariates.
\subsection{Discrete extended generalized Pareto distribution} We apply the proposed DEGPD models to automobile insurance claims data for companies in New York City recorded from 2009 to 2020. The data are collected by the Department of Financial Services (DFS), which ranks automobile insurance companies doing business in New York State based on the number of consumer complaints upheld against them as a percentage of their total business over two years. Complaints typically include problems such as delays in the payment of no-fault claims and nonrenewal of policies. Insurers with the fewest upheld complaints per million dollars of premiums stand at the top. The data are freely available at \href{https://data.ny.gov/Government-Finance/Automobile-Insurance-Company-Complaint-Rankings-Be/h2wd-9xfe}{https://www.ny.gov/programs/open-ny}. The frequency distribution of the data ($1942$ observations) is depicted in Figure \ref{fig:5}.
\begin{figure}
\caption{Frequency distribution of upheld complaints of automobile insurance companies in New York City (2009--2020).}
\label{fig:5}
\end{figure}
\begin{table}[H]
\centering
\caption{Estimated parameters of the discrete extended generalized Pareto distribution with all four parametric families fitted to the insurance complaints data of New York City. Standard errors are reported in parentheses. Bootstrap confidence intervals at the 95\% level are reported in square brackets.}
\begin{tabular}{ccccc}
\toprule
\multicolumn{5}{c}{$G(u;\psi)= u^\kappa$} \\
\hline
$\kappa$ &$\sigma$ &$\xi$ &\\
\hline
1.41 & 0.80 & 0.73 \\
(0.37) & (0.20) & (0.05)\\
$[1.00, 2.35]$&[0.43, 1.22]&[0.61, 0.83]\\
\hline
\multicolumn{5}{c}{$G(u;\psi)=1-D_{\delta}\{(1-u)^{\delta}\}$} \\
\hline
$\delta$ &$\sigma$ &$\xi$\\
\hline
0.006 & 0.36& 0.65 \\
(0.86) &(0.15) &(0.04)\\
$[0.00, 1.46]$&[0.33, 0.61] &[0.60, 0.80]\\
\hline
\multicolumn{5}{c}{$G(u;\psi)=\left[1-D_{\delta}\{(1-u)^{\delta}\}\right]^{\kappa/2}$} \\
\hline
$\kappa$&$\delta$ &$\sigma$ &$\xi$ &\\
\hline
1.61 &0.11& 0.49& 0.65 \\
(0.42) & (0.65) & (0.24) &(0.07)\\
$[1.05 , 2.79]$ &[0.00, 1.68]&[0.27, 1.20] & [0.55, 0.83] \\
\hline
\multicolumn{5}{c}{$G(u;\psi)= pu^{\kappa_1} + (1-p)u^{\kappa_2}$} \\
\hline
$p$& $\kappa_1$ &$\kappa_2$ &$\sigma$ &$\xi$ \\
\hline
0.11 & 0.01 & 2.08 & 0.63 & 0.73 \\
(0.12) & (0.44) & (2.59) &(0.16)&(0.05)\\
$[0.00, 0.46]$& [0.00, 1.59] &[1.44, 10.66] &[0.20, 0.84]& [0.64, 0.83]\\
\bottomrule
\end{tabular}
\label{DEGPD_RD_FIT)} \end{table}
Moreover, the DEGPD and ZIDEGPD based on parametric families (i), (ii), (iii) and (iv) are fitted to the upheld complaints count data. Results of the fitted DEGPD models, with their standard errors and bootstrap confidence intervals, are given in Table \ref{DEGPD_RD_FIT)}. The AIC and BIC of the fitted DEGPD and ZIDEGPD models, together with the Chi-square goodness-of-fit statistics and their p-values, are reported in Table {\ref{AICBIC_CHI}}. According to the AIC and BIC values, the DEGPD (i) and ZIDEGPD (i) with $G(u;\psi)= u^\kappa$ perform best for this data example. The fit of DEGPD type (ii) to the upheld complaints data is also quite reasonable, with a small estimate of the parameter $\delta$, but model type (ii) has restricted flexibility in its lower tail; this is a disadvantage of model type (ii) \citep{naveau2016modeling}. In the case of zero inflation in the data, the ZIDEGPD based on $G(u;\psi)=1-D_{\delta}\{(1-u)^{\delta}\}$ may perform better because it has an additional parameter that represents the zero proportion separately. The fits of DEGPD type (iii) with $G(u;\psi)=\left[1-D_{\delta}\{(1-u)^{\delta}\}\right]^{\kappa/2}$ and DEGPD type (iv) with $G(u;\psi)= p u^{\kappa_1} + (1-p)u^{\kappa_2}$ are also quite sensible, with lower AIC and BIC values compared to ZIDEGPD types (iii) and (iv); however, the $\kappa_2$ parameter of DEGPD type (iv) shows more variability, as also pointed out by \citet{naveau2016modeling} in the continuous framework. In addition, the Q--Q plots show that all types of DEGPD fit the upheld complaints data of New York City reasonably well. Furthermore, the p-values of the Chi-square statistics for each DEGPD and ZIDEGPD indicate that the fit of the models proposed in Section \ref{sec:2} is good for this real data example. Based on the AIC and BIC, we prefer the DEGPD of type (i) for the upheld complaints data.
\begin{figure}
\caption{$G(u;\psi)= u^\kappa$}
\caption{$G(u;\psi)= pu^{\kappa_1} + (1-p)u^{\kappa_2}$}
\caption{Quantile-quantile plots of the fitted models with $95\%$ bootstrap-based confidence intervals.}
\label{fig:6}
\end{figure}
\begin{table}[H]
\centering \caption{AIC and BIC associated with the fitted DEGPD and ZIDEGPD, along with the Chi-square goodness-of-fit test. P-values are reported in parentheses.} \begin{tabular}{ p{0.8cm} p{1.1cm} p{1.5cm} p{1.1cm}p{1.5cm} p{2cm} p{2cm}} \toprule
\multicolumn{1}{c}{}&\multicolumn{2}{c}{AIC} &\multicolumn{2}{c}{BIC} &\multicolumn{2}{c}{Chi-square} \\
\cmidrule(l){2-3}\cmidrule(l){4-5}\cmidrule(l){6-7} {Model}&{DEGPD}&{ZIDEGPD}& {DEGPD} & {ZIDEGPD}&{DEGPD} & {ZIDEGPD}\\
\hline $\text{(i)}$& 7290.93 &7291.40 &7307.65 &7313.69 &0.20 (0.99) & 0.18 (0.99) \\ \text{(ii)}& 7291.88 &7293.05 &7308.60 &7315.34 &0.19 (0.99)& 0.20 (0.99)\\ \text{(iii)}& 7293.36 &7294.56 &7315.64 &7322.42 &0.20 (0.99)& 0.20 (0.99) \\ \text{(iv)}& 7294.45 &7297.56 &7322.30 &7330.99 &0.20 (0.99)& 0.22 (0.99)\\ \bottomrule \end{tabular} \label{AICBIC_CHI} \end{table}
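As a quick consistency check on the criteria reported above, AIC and BIC follow directly from the maximized log-likelihood; for the 3-parameter DEGPD (i) and the $n=1942$ observations, the log-likelihood implied by the reported AIC reproduces the reported BIC up to rounding (Python sketch):

```python
import math

def aic_bic(loglik, n_params, n_obs):
    """AIC = 2k - 2*loglik, BIC = k*log(n) - 2*loglik."""
    return 2 * n_params - 2 * loglik, n_params * math.log(n_obs) - 2 * loglik

# log-likelihood implied by the reported AIC of the 3-parameter DEGPD (i)
loglik_i = (2 * 3 - 7290.93) / 2
aic, bic = aic_bic(loglik_i, 3, 1942)   # n = 1942 observations
```

Since $n$ is large, BIC penalizes extra parameters more heavily than AIC, which is why the richer types (iii) and (iv) fall further behind in the BIC column than in the AIC column.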
\subsection{GAM forms applications of DEGPD and ZIDEGPD to avalanches data}
In Alpine regions, snow avalanches of extreme frequency or magnitude are considered a life-threatening hazard. Avalanches are usually caused by severe storms that bring heavy snowfall coupled with snow drifting, but strong variations of environmental factors (e.g., temperature, wind, humidity and precipitation) causing snow melt and/or fluctuations of the freezing point can also be
involved \citep{evin2021extreme}. It is crucial to anticipate future avalanche activity for short-term and long-term management. Since extreme events have potentially severe consequences, their statistical characteristics must be assessed correctly. To this end, we aim to model avalanche events over a short period of time with the help of the newly proposed extreme value models. In particular, we intend to quantify how weather-related variables affect the probability of avalanche occurrence each day.
The \textit{Enquête Permanente sur les Avalanches} (EPA) has collected avalanche data from the French Alps, monitoring about 3900 paths since the early 20th century \citep[see][]{mougin1922avalanches,evin2021extreme}. Quantitative (runout elevations, deposit volumes, etc.) and qualitative (flow regime, snow quality, etc.) information is collected for each event. The data quality varies over time depending on the local observers (mostly forestry rangers). Records of natural avalanche activity are also uncertain because they tend to cover paths visible from valleys, so high-elevation activity may be underestimated.
We consider the dataset of \citet{dkengne2016limiting}, and in particular the three-day moving sum of the daily number of avalanche events recorded from February 1982 to April 2021.
Environmental covariates (see Table {\ref{COVARIATES_INFO}}) have been downloaded from \href{https://power.larc.nasa.gov/data-access-viewer/}{https://power.larc.nasa.gov/data-access-viewer/} by specifying the latitude and longitude. Then, for each covariate, the moving median over the previous three days was computed.
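The response and covariate preprocessing described above can be sketched as follows (Python with NumPy; the exact alignment conventions are our assumption, since the paper does not spell them out):

```python
import numpy as np

def moving_sum3(counts):
    """Three-day moving sum of daily avalanche counts (the response variable)."""
    return np.convolve(np.asarray(counts, dtype=float), np.ones(3), mode="valid")

def moving_median3_prev(x):
    """Moving median over the previous three days (covariate preprocessing)."""
    x = np.asarray(x, dtype=float)
    return np.array([np.median(x[i - 3:i]) for i in range(3, len(x) + 1)])

counts = [1, 2, 3, 4, 5]
ms = moving_sum3(counts)          # [6, 9, 12]
mm = moving_median3_prev(counts)  # [2, 3, 4]
```

Using the median of the previous days (rather than a window centered on the current day) keeps the covariates strictly predictive of the day being modeled.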
\begin{figure}
\caption{Correlation among covariates for avalanches data.}
\label{fig:7}
\end{figure}
\begin{table}[H]
\centering
\caption{Detailed information of covariates}
\begin{tabular}{ll}
\toprule
$\text{Name}$ &$\text{Definition}$ \\
\hline
{WS}&{maximum wind speed at 10 meters (m/s)} \\
{PREC}&{precipitation (mm/day)} \\
{MxT}&{maximum temperature at 2 meters ($^\circ$C)} \\
{MnT}&{minimum temperature at 2 meters ($^\circ$C)} \\
{RH}&{relative humidity at 2 meters (\%)} \\
\bottomrule
\end{tabular}
\label{COVARIATES_INFO} \end{table}
Figure {\ref{fig:7}} displays the correlation plot among the covariates: maximum temperature (MxT) and minimum temperature (MnT) are strongly positively correlated, while precipitation (PREC) has no significant correlation with MxT. The relative humidity (RH) has a moderate positive correlation with wind speed (WS) and PREC, and a weak negative correlation with the temperature variables. Further, wind speed and precipitation have weak correlations with the minimum and maximum temperature variables. A backward variable selection procedure based on the AIC was performed for the DEGPD model in GAM form, using the \texttt{evgam} function with our own code. A preliminary study showed that a constant model is numerically preferred for the lower shape parameters ($\kappa$ and $\kappa_1$), the threshold tuning parameters ($\delta$ and $\kappa_2$) and the upper shape parameter $\xi$.
After comparing different combinations of the covariates, we found WS, MxT, PREC and RH to be the most appropriate covariates. The models with the lowest AIC turned out to be $$ \begin{array}{lcl}
\mbox{Model type (i)}&:& \kappa = cst; \quad \sigma = s(WS)+s(MxT)+s(PREC)+s(RH); \quad \xi = cst \\
\mbox{Model type (ii)}&:& \delta = cst; \quad \sigma =s(WS)+s(MxT)+s(PREC)+s(RH); \quad \xi = cst \\
\mbox{Model type (iii)}&:& \kappa = cst;\quad\delta = cst; \quad \sigma = s(WS)+s(MxT)+s(PREC)+s(RH); \quad \xi = cst\\
\mbox{Model type (iv)}&:& p= cst; \quad\kappa_1 = cst;\quad \kappa_2 = cst; \\
&&\sigma =s(WS)+s(MxT)+s(PREC)+s(RH); \quad \xi = cst \end{array} $$ where $s(\cdot)$ indicates the smoothed predictor.
\begin{figure}
\caption{Estimated non-parametric effects of covariates in the $\sigma$ component of DEGPD model type (i).}
\label{fig:8}
\end{figure}
\begin{table}[H]
\centering
\caption{Estimated coefficients and smooth terms for GAM form DEGPD models fitted to avalanches data.}
\begin{tabular}{ccccccc}
\toprule
\multicolumn{6}{c}{Model type (i) $G(u;\psi)= u^\kappa$}\\
\hline
\multicolumn{1}{c}{Parameter (intercept)} & \multicolumn{1}{c}{Estimate} & \multicolumn{1}{c}{Std.Error} & \multicolumn{1}{c}{t value}& \multicolumn{1}{c}{P-value}& \\
\midrule
$\log (\kappa)$ & -1.83 & 0.06 & -32.38 & $<$2e-16\\
$\log(\sigma)$ & 0.2 & 0.1 & 2.06 & 0.0197\\
$\log(\xi)$ & -0.54 & 0.08 & -6.88 & 2.93e-12\\
\hline
\multicolumn{6}{c}{** Smooth terms for $\log(\sigma)$ **}\\
\hline
{$\log(\sigma)$} & {edf} & {max.df } & {Chi.sq } & {Pr($>|t|$)}& {}\\
\hline
s(WS) &1.44 & 4 & 22.01& 8.66e-06\\
s(MxT) & 3.57 & 4 &663.24 & $<$2e-16\\
s(PREC) & 1.26 & 4 & 33.94 & 2.91e-08\\
s(RH) & 5.72 & 9 & 79.14 & 9.9e-15\\
\bottomrule
\multicolumn{6}{c}{Model type (ii) $G(u;\psi)=1-D_{\delta}\{(1-u)^{\delta}\}$}\\
\hline
\multicolumn{1}{c}{Parameter (intercept)} & \multicolumn{1}{c}{Estimate} & \multicolumn{1}{c}{Std.Error} & \multicolumn{1}{c}{t value}& \multicolumn{1}{c}{P-value}& \\
\midrule
$\log (\delta)$ & 4.72& 1.28& 3.7& 0.000109\\
$\log(\sigma)$ & -2.52 & 0.07 & -34.84 & $<$2e-16\\
$\log(\xi)$ & 0.14 & 0.03 & 4.94 & 3.89e-07\\
\hline
\multicolumn{6}{c}{** Smooth terms for $\log(\sigma)$ **}\\
\hline
{$\log(\sigma)$} & {edf} & {max.df } & {Chi.sq } & {Pr($>|t|$)}& {}\\
\hline
s(WS) &1.90 & 4 & 31.12 &3.75e-07\\
s(MxT) & 3.52 & 4 & 528.98 & $<$2e-16\\
s(PREC) & 1.27 & 4 & 38.59 & 1.07e-08\\
s(RH) & 6.30 & 9 & 59.21& 8.61e-11\\
\bottomrule
\multicolumn{6}{c}{Model type (iii) $G(u;\psi)=\left[1-D_{\delta}\{(1-u)^{\delta}\}\right]^{\kappa/2}$}\\
\hline
\multicolumn{1}{c}{Parameter (intercept)} & \multicolumn{1}{c}{Estimate} & \multicolumn{1}{c}{Std.Error} & \multicolumn{1}{c}{t value}& \multicolumn{1}{c}{P-value}& \\
\midrule
$\log (\kappa)$ & -0.62 & 0.18& -3.4 & 0.000339\\
$\log (\delta)$ & 4.82 & 0.67& 7.24& 2.3e-13\\
$\log(\sigma)$ & -0.62 & 0.29 & -2.15 & 0.0158\\
$\log(\xi)$ & -0.13 & 0.09 & -1.39 & 0.0819\\
\hline
\multicolumn{6}{c}{** Smooth terms for $\log(\sigma)$ **}\\
\hline
s(WS) &1.78 & 4 & 29.29 & 4.66e-07\\
s(MxT) & 3.61 & 4 & 562.36 & $<$2e-16\\
s(PREC) & 1.31 & 4 & 49.93 & 5.2e-10\\
s(RH) & 6.07 & 9 & 47.71 &1.3e-08\\
\bottomrule
\multicolumn{6}{c}{Model type (iv) $G(u;\psi)=pu^{\kappa_1}+(1-p)u^{\kappa_2}$}\\
\hline
\multicolumn{1}{c}{Parameter (intercept)} & \multicolumn{1}{c}{Estimate} & \multicolumn{1}{c}{Std.Error} & \multicolumn{1}{c}{t value}& \multicolumn{1}{c}{P-value}& \\
\midrule
$\text{logit}(p)$ & 4.89 & 0.33 & 14.69 & $<$2e-16\\
$\log (\kappa_1)$ & -1.84 & 0.06 & -31.88 & $<$2e-16\\
$\log (\kappa_2)$ & 3.5 & 0.41 & 8.51 & $<$2e-16\\
$\log(\sigma)$ & 0.17 & 0.1 & 1.66 & 0.0484\\
$\log(\xi)$ & -0.86 & 0.13 & -6.78 & 6.07e-12\\
\hline
\multicolumn{6}{c}{** Smooth terms for $\log(\sigma)$ **}\\
\hline
s(WS) &1.67 & 4 & 19.68 &3.08e-05\\
s(MxT) & 3.57 & 4 &623.83 & $<$2e-16\\
s(PREC) & 1.05 & 4 &35.18 & 4.71e-09\\
s(RH) &5.67 & 9 &80.50 &5.31e-15\\
\bottomrule
\end{tabular}
\label{GAMFIT} \end{table}
We fitted all four DEGPD GAM form models to the response variable (i.e., avalanche counts) with the covariates selected above. The results of the fitted models are given in Table {\ref{GAMFIT}}. It can be observed from Table {\ref{GAMFIT}} that the parametric and nonparametric terms of the DEGPD GAM form models are statistically significant, except for the shape parameter in DEGPD type (iii). In addition, considerable variability in the constant parameters $\delta$ and $\kappa_2$ was observed during estimation. This is possibly due to the role of $\delta$ and $\kappa_2$ as skewness parameters of the model \citep{naveau2016modeling}.
Figure \ref{fig:8} shows the estimated functions of the DEGPD type (i) model for the regressors included as nonparametric terms, i.e., the effects of the environmental variables. Each included nonparametric term is significant and behaves similarly across all models. A broad interpretation of the results of all four models is that temperature and relative humidity seem to explain avalanche occurrence better than wind and precipitation. Possibly, fluctuations in temperature coupled with snow drifting cause more avalanches. Furthermore, we also fitted ZIDEGPD GAM form models to account for possible zero inflation in the avalanche data. A slight improvement is noted in ZIDEGPD types (i) and (iii). We found that the parameter $\pi$ in ZIDEGPD type (ii) is not significant, possibly because much of the variation is captured by the $\delta$ parameter.
The results for the ZIDEGPD models can be found in the Supplementary material.
Further, when comparing our proposed models, the GAM form DEGPD and ZIDEGPD types (i) and (iii) performed well overall for the avalanche data; the other proposed models may perform better on other real data sets.
To assess the overall adequacy of the GAM form DEGPD models, we also fitted other existing competitor distributions such as the DGPD \eqref{eq:4} and the Poisson distribution. The goodness-of-fit assessment uses the randomized residuals \citep{dunn1996randomized,chiogna2007semiparametric} defined as \begin{equation*}
r_i= \Phi^{-1} (\eta_i) \end{equation*} where $\Phi$ is the standard normal distribution function, $\eta_i= (1-u_i)F(k_i-1; \hat{\boldsymbol{\theta}}_{i} )+u_i F(k_i; \hat{\boldsymbol{\theta}}_{i} )$, $u_i$ is drawn from a uniform distribution and $F(\cdot; \hat{\boldsymbol{\theta}})$ is the parametric estimate of the CDF of the current model.
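This construction is straightforward to implement. The sketch below is a minimal stand-alone version (the function name is ours, not from \texttt{evgam}), using a Poisson model purely for illustration: when the fitted CDF matches the data-generating one, the residuals are approximately standard normal.

```python
import numpy as np
from scipy.stats import norm, poisson

def randomized_residuals(k, cdf, seed=None):
    """Randomized quantile residuals for a discrete fitted model.

    k   : observed counts
    cdf : fitted CDF, k -> F(k; theta_hat), vectorized over k
    """
    rng = np.random.default_rng(seed)
    k = np.asarray(k)
    u = rng.uniform(size=k.size)               # u_i ~ Uniform(0, 1)
    eta = (1 - u) * cdf(k - 1) + u * cdf(k)    # eta_i in (0, 1)
    return norm.ppf(eta)                       # r_i = Phi^{-1}(eta_i)

# Correctly specified model: residuals close to N(0, 1)
k = poisson.rvs(mu=3.0, size=5000, random_state=np.random.default_rng(1))
r = randomized_residuals(k, lambda x: poisson.cdf(x, mu=3.0), seed=2)
```

A normal quantile-quantile plot of `r` should then lie close to the diagonal, as in the residual plots discussed below.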
\begin{figure}
\caption{DEGPD type (i)}
\caption{DEGPD type (ii)}
\caption{DEGPD type (iii)}
\caption{DEGPD type (iv)}
\caption{DGPD}
\caption{Poisson}
\caption{Diagnostic plots of residual quantiles of the proposed DEGPD type (i) to type (iv) and competitor (DGPD and Poisson) distributions.}
\label{fig:9}
\end{figure}
The randomization yields continuous residuals even though the response variable is discrete. Aside from sampling variation in the parameter estimates, the randomized residuals should be exactly normal if the fitted model is correctly specified. Figure \ref{fig:9} shows normal quantile-quantile plots of the randomized residuals of the proposed GAM form DEGPD type (i) to type (iv) and competing models. The residual plots of the ZIDEGPD type (i) to type (iii) models are given in Figure \ref{fig:S2} (see Supplementary material). The randomized residuals derived from our proposed models show no apparent departure from normality, while those based on the DGPD and Poisson models depart from normality at the lower and upper tail, respectively.
We further check the normality of the randomized residuals using the Kolmogorov–Smirnov test. Table {\ref{KSTEST}} indicates that the newly proposed GAM form models deliver a good fit for the avalanche data. Among the different types, DEGPD type (i) and DEGPD type (iv) clearly outperform the others, as also shown in Figure {\ref{fig:9}}. Furthermore, the randomized residuals from the DGPD and Poisson distributions do not meet the normality assumption.
\begin{table}[H]
\centering \caption{Kolmogorov–Smirnov (KS) test statistics (p-values between parentheses) of the proposed DEGPD type (i) to type (iv) and competitor (DGPD and Poisson) distributions.} \begin{tabular}{cccc} \toprule
\multicolumn{1}{c}{Model}&\multicolumn{1}{c}{DEGPD} &\multicolumn{1}{c}{DGPD} &\multicolumn{1}{c}{Poisson}\\
\hline $\text{(i)}$& 0.0038 (0.9855) &0.0132 (0.0130) &0.21048 (2.2e-16)\\
\text{(ii)}& 0.0106 (0.0779) & & \\
\text{(iii)}& 0.0098 (0.1958) & & \\
\text{(iv)}& 0.0066 (0.5590) & & \\
\bottomrule \end{tabular} \label{KSTEST} \end{table}
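The normality check reported in Table \ref{KSTEST} can be reproduced generically with \texttt{scipy.stats.kstest}. The sketch below uses synthetic residuals rather than the avalanche fits: standard normal draws stand in for a well-specified model and over-dispersed draws for a misspecified one.

```python
import numpy as np
from scipy.stats import kstest, norm

rng = np.random.default_rng(0)
r_good = rng.normal(size=2000)            # residuals of a well-specified model
r_bad = rng.normal(scale=2.0, size=2000)  # over-dispersed residuals (misspecified)

# One-sample KS test of the residuals against the standard normal CDF
stat_good, p_good = kstest(r_good, norm.cdf)
stat_bad, p_bad = kstest(r_bad, norm.cdf)
```

A large p-value (as for DEGPD types (i) and (iv) in Table \ref{KSTEST}) is consistent with normal residuals; a tiny one signals misfit.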
\section{Conclusions}\label{sec:7} This study proposes different versions of the DEGPD and ZIDEGPD models and demonstrates that they can jointly model the entire range of count data without selecting a threshold. Further, GAM forms of the DEGPD and ZIDEGPD models are developed and implemented. The flexibility of these models and their many practical advantages for discrete data make them very attractive. Having few parameters, they are simple to implement and interpret, and they comply with discrete EVT for both the upper and lower tails. Inference is performed through the MLE procedure, which gives adequate results. Compared to the ZIDEGPD, the fitted DEGPD appears more straightforward and robust, and genuinely represents the zero proportion in the upheld complaints data of NYC. We observed that our proposed ZIDEGPD models are more flexible and robust for data with zero inflation, where the remaining observations rise in the lower tail up to a structural mode and decay exponentially in the heavier upper tail. As noted in the simulation study, the parameters $\delta$ and $\kappa_2$ show more variability when estimated through the MLE method; Bayesian analysis with informative priors may improve the estimates of these parameters.
In addition, we developed and implemented the GAM form methodology of our proposed models, which allows for non-identically distributed discrete extremes. This methodology was implemented in \texttt{evgam} using the authors' own R functions. The response variable of interest (the three-day moving sum of daily avalanches in the Haute-Maurienne massif of the French Alps) is statistically explained by environmental variables (e.g., temperature, wind, precipitation and humidity). The GAM form models proposed in this study allow parametric and non-parametric functional forms, which would most likely be required for larger datasets. Our models (especially DEGPD and ZIDEGPD type (i)) also show more flexibility and a better fit for the avalanche data, with environmental conditions as covariates, than the competing models (i.e., DGPD, negative binomial and Poisson). It is worth mentioning that the GAM form DEGPD models may perform even better on other real data examples. Again, the GAM form ZIDEGPD models are more flexible and adequate when the response variable exhibits zero inflation, and the remaining observations rise in the lower tail up to the mode and then decay in the heavier upper tail.
Hence, our proposed models can be applied to variables consisting of discrete count data with extreme observations. Further, the GAM form proposals are flexible enough to allow spatial modelling of discrete extremes.
\section*{Acknowledgment} The authors would like to thank Benjamin Youngman for helpful discussions about the R code, and Nicolas Eckert (INRAE) for providing and explaining the avalanche data. Philippe Naveau acknowledges the support of the French Agence Nationale de la Recherche (ANR) under reference ANR-Melody (ANR-19-CE46-0011). Part of this work was also supported by 80 PRIME CNRS-INSU, ANR-20-CE40-0025-01 (T-REX project), and the European H2020 XAIDA (Grant agreement ID: 101003469).
\section*{Code availability} The complete source code for the proposed models with simple running example is available on \\ \href{https://github.com/touqeerahmadunipd/degpd-and-zidegpd}{https://github.com/touqeerahmadunipd/degpd-and-zidegpd}
\begin{center}
\section*{
Supplementary material for\\
``Modelling of discrete extremes through extended
versions of discrete generalized Pareto distribution''} \end{center} \renewcommand{\thetable}{S.\arabic{table}} \setcounter{table}{0} \renewcommand{\thefigure}{S.\arabic{figure}} \setcounter{figure}{0}
\renewcommand{\theequation}{S.\arabic{equation}} \setcounter{equation}{0} \setcounter{section}{0} \setcounter{page}{1}
\section{The discrete Gamma spliced threshold discrete GPD model}
As seen in the existing literature, the discrete and continuous generalized Pareto distributions (GPD) approximate well data exceeding a specified threshold. In the continuous extreme value framework, many authors have modelled the bulk part with a specific distribution (for example, the Gamma, Normal, Log-normal, Weibull or Beta distribution). The arrangement where the data below and above an unknown threshold are drawn from the ``bulk'' and ``tail'' distributions, respectively, leads to a family of models called spliced threshold models. A distinctive motivation for such models is that the data above and below the threshold may genuinely be generated by two different underlying processes. For a thorough review of general spliced threshold models in the continuous domain see, for example, \cite{dey2016extreme}.
Let $F_{m}(x|\boldsymbol{\theta_{B}})$ be the CDF of the gamma distribution corresponding to the bulk model and $F_{m}(x|\boldsymbol{\theta_{T}})$ be the CDF of the GPD corresponding to the tail model, where $\boldsymbol{\theta}_B$ and $\boldsymbol{\theta}_T$ denote the parameter vectors of the bulk and tail models, respectively. The CDF of the bulk and tail model spliced at the threshold $u$ is given by \begin{equation}
H_{m}(x)=
\begin{cases}
(1-\phi)\frac{F_{m}(x|\boldsymbol{\theta_{B}})}{F_{m}(u|\boldsymbol{\theta_{B}})} \hspace{2cm} \text{for} \hspace{0.5cm} x\leq u\\
1-\phi+\phi{F_{m}(x|\boldsymbol{\theta_{T}},u)} \hspace{1cm} \text{for} \hspace{0.5cm} x > u
\end{cases} \end{equation} with corresponding probability density function \begin{equation}
h_{m}(x)=
\begin{cases}
(1-\phi)\frac{f_{m}(x|\boldsymbol{\theta_{B}})}{F_{m}(u|\boldsymbol{\theta_{B}})} \hspace{2cm} \text{for} \hspace{0.5cm} x\leq u\\
\phi{f_{m}(x|\boldsymbol{\theta_{T}},u)} \hspace{1cm} \text{for} \hspace{0.5cm} x > u
\end{cases} \end{equation}
Discrete extreme data characteristically exhibit many ties in the lower tail, so the model developed above for the continuous paradigm must be amended to a discrete form in order to account for the censored data. Let $X \sim Gamma (\alpha, \beta)$; we write the PMF of the discrete gamma distribution as \begin{equation}
P_{g}(X=k)= \frac{1}{ \Gamma (\alpha)}\left[\gamma(\alpha, \beta(k+1))- \gamma(\alpha, \beta k)\right] \end{equation} where $\alpha>0, \beta >0$, $X\in \mathbb{N}$ and $\gamma(\alpha, \beta k)$ is the lower incomplete gamma function \begin{equation}
\gamma(\alpha, \beta k)=\int_{0}^{\beta k} t^{\alpha-1} e^{-t} dt \end{equation} On the other hand, if $X\sim GPD(u, \sigma, \xi)$, the PMF of the DGPD is written by using (\ref{eq:4}) as \begin{equation}
P_{dp}(X=k)= \left(1+ \frac{\xi (k-u)}{\sigma}\right)^{-1/\xi} - \left(1+ \frac{\xi (k+1-u)}{\sigma}\right)^{-1/ \xi} \end{equation} for $u, \xi \in (-\infty, \infty)$, $\sigma \in (0, \infty)$ and $k\in \mathbb{N}$. The DGPD is supported on $k\geq u$ when $\xi \geq 0$ and on $u\leq k \leq u-\frac{\sigma}{\xi}$ when $\xi<0$. Here, we only consider $\xi>0$. Hence, the discrete gamma generalized Pareto distribution (DGGPD) spliced-at-threshold model is given by \begin{equation}
P_{d-ggp}(X=k)=
\begin{cases}
(1-\phi)\frac{P_{g}(X=k|\boldsymbol{\theta_{B}})}{F(u-d|\boldsymbol{\theta_{B}})} \hspace{2cm} \text{for} \hspace{0.5cm} k\leq u-d\\
\phi P_{dp}(X=k|\boldsymbol{\theta_{T}},u) \hspace{2cm} \text{for} \hspace{0.5cm} k\geq u
\end{cases} \end{equation}
where $P_{g}(X=k|\boldsymbol{\theta_{B}})$ is the discrete Gamma $(\alpha, \beta)$ PMF, $P_{dp}(X=k|\boldsymbol{\theta_{T}})$ is the discrete GPD PMF, $d=\min(x)$, which indicates that we model integer data, and $\phi$ is the proportion of observations over the threshold $u$. For more details about mixture models and their implications, see e.g., \cite{hu2018evmix}. \renewcommand{\thefigure}{S1} \begin{figure}\label{fig:S1}
\end{figure}
Figure \ref{fig:S1} shows the behavior of the bulk and tail parts, in terms of the PMF and CDF, of the DGGPD spliced at a threshold. Fitting this mixture model requires a suitable threshold. To overcome this deficiency, the new framework of this paper models the whole range of the data, thereby avoiding the threshold selection issue.
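The two building blocks of the spliced model, the discrete gamma PMF and the DGPD PMF above, can be coded directly. The sketch below (our own function names, assuming $\xi>0$) checks that both telescoping differences define valid PMFs.

```python
import numpy as np
from scipy.special import gammainc

def dgamma_pmf(k, alpha, beta):
    """Discrete gamma PMF: [gamma(a, b(k+1)) - gamma(a, b k)] / Gamma(a).
    scipy's gammainc is the *regularized* lower incomplete gamma, so the
    division by Gamma(alpha) is already built in."""
    k = np.asarray(k, dtype=float)
    return gammainc(alpha, beta * (k + 1)) - gammainc(alpha, beta * k)

def dgpd_pmf(k, u, sigma, xi):
    """Discrete GPD PMF for k >= u, assuming xi > 0:
    S(k) - S(k + 1), with S the GPD survival function above u."""
    k = np.asarray(k, dtype=float)
    surv = lambda x: (1 + xi * (x - u) / sigma) ** (-1 / xi)
    return surv(k) - surv(k + 1)

# Both differences telescope, so they sum to ~1 over a long enough grid
pg = dgamma_pmf(np.arange(0, 400), alpha=2.0, beta=1.5)
pp = dgpd_pmf(np.arange(5, 100000), u=5, sigma=1.0, xi=0.2)
```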
\section{Maximum likelihood procedure of DEGPD}
\textbf{(i)} The PMF of DEGPD corresponding to $G(u;\psi)=u^{\kappa}$, $\psi=\kappa>0$ is written as
\begin{equation}
P(Y=k)= \left[ \bigg\{1-\left(1+ \frac{\xi (k+1)}{\sigma}\right)^{-\frac{1}{\xi}}\bigg\} ^{\kappa}- \bigg\{1-\left(1+ \frac{\xi k}{\sigma}\right)^{-\frac{1}{\xi}}\bigg\}^{\kappa}\right] \end{equation} The log-likelihood function is defined as \begin{equation}
l(\kappa, \sigma, \xi)= \sum_{j=1}^{n} \log \left[ \bigg\{1-\left(1+ \frac{\xi (k_{j}+1)}{\sigma}\right)^{-\frac{1}{\xi}}\bigg\} ^{\kappa}- \bigg\{1-\left(1+ \frac{\xi (k_{j})}{\sigma}\right)^{-\frac{1}{\xi}}\bigg\}^{\kappa}\right] \end{equation} \textbf{(ii)} First we need to define the PMF based on \begin{equation*}
G(u;\psi)=1-D_{\delta}\{(1-u)^{\delta}\},\hspace{1cm} \psi=\delta>0 \end{equation*} where $D_{\delta}$ is the CDF of the beta distribution with parameters $1/\delta$ and 2. By definition \begin{equation*}
H(x)= G\left\{F_{\xi}\left(\frac{x}{\sigma}\right)\right\} \end{equation*} where $F_{\xi}(.)$ is the CDF of GPD. So, \begin{equation*}
H(x)= 1- D_{\delta}\left[\left\{1-F_{\xi}\left(\frac{x}{\sigma}\right)\right\}^{\delta}\right] \end{equation*} \begin{equation}
H(x)= 1- D_{\delta}\left[\left\{\Bar{F}_{\xi}\left(\frac{x}{\sigma}\right)\right\}^{\delta}\right] \end{equation} We solve, $ D_{\delta}\left[\left\{\Bar{F}_{\xi}\left(\frac{x}{\sigma}\right)\right\}^{\delta}\right]= D_{\delta}\left[z\right]$, where $z=\left\{\Bar{F}_{\xi}\left(\frac{x}{\sigma}\right)\right\}^{\delta}$\\ The CDF of beta distribution with above specific parameters (i.e.,$1/\delta$ and 2) is written as \begin{equation}
D_{\delta}(z,1/\delta,2)= \frac{B(z, 1/\delta,2)}{B(1/\delta,2)} \end{equation} After solving the Beta functions, we get \begin{equation*}
H(x)= 1 + \frac{1}{\delta}{\left[\Bar{F}_{\xi}\left(\frac{x}{\sigma}\right)\right]}^{\delta+1}-\frac{\delta+1}{\delta}{\left[\Bar{F}_{\xi}\left(\frac{x}{\sigma}\right)\right]} \end{equation*} The PMF of DEGPD corresponding to $G(u;\psi)=1-D_{\delta}\{(1-u)^{\delta}\}$ is written as
\begin{multline}
P(Y=k)= \frac{1}{\delta}{\left\{\left(1+\frac{\xi(k+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}-\frac{\delta+1}{\delta}{\left\{\left(1+\frac{\xi(k+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}\\ -\frac{1}{\delta}{\left\{\left(1+\frac{\xi k}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}+\frac{\delta+1}{\delta}{\left\{\left(1+\frac{\xi k}{\sigma}\right)^{-\frac{1}{\xi}}\right\}} \end{multline} The log likelihood function is defined as \begin{multline}
l(\delta, \sigma, \xi)=\sum_{j=1}^{n} \log \Bigg[ \frac{1}{\delta}{\left\{\left(1+\frac{\xi(k_{j}+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}-\frac{\delta+1}{\delta}{\left\{\left(1+\frac{\xi(k_{j}+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}\\ -\frac{1}{\delta}{\left\{\left(1+\frac{\xi k_{j}}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}+\frac{\delta+1}{\delta}{\left\{\left(1+\frac{\xi k_{j}}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}\Bigg] \end{multline} \\ \\ \textbf{(iii)} Similar to model \textbf{(ii)}, the CDF of EGPD corresponding to $G(u;\psi)=\left[1-D_{\delta}\{(1-u)^{\delta}\}\right]^{\kappa/2}; \psi=(\delta, \kappa)>0 $ is written as \begin{equation*}
H(x)= \left[F_{\xi}\left(\frac{x}{\sigma}\right) + \frac{1}{\delta}{\left[\Bar{F}_{\xi}\left(\frac{x}{\sigma}\right) \right]}^{\delta+1}-\frac{1}{\delta}{\left[\Bar{F}_{\xi}\left(\frac{x}{\sigma}\right) \right]}\right]^{\frac{\kappa}{2}} \end{equation*} The PMF of DEGPD corresponding to $G(u;\psi)=\left[1-D_{\delta}\{(1-u)^{\delta}\}\right]^{\kappa/2}$ can be written as \begin{multline}
P(Y=k)= \left[1-\left(1+\frac{\xi(k+1)}{\sigma}\right)^{-\frac{1}{\xi}} + \frac{1}{\delta}{\left\{\left(1+\frac{\xi(k+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}-\frac{1}{\delta}{\left\{\left(1+\frac{\xi(k+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}\right]^{\frac{\kappa}{2}}\\-\left[1-\left(1+\frac{\xi k}{\sigma}\right)^{-\frac{1}{\xi}} + \frac{1}{\delta}{\left\{\left(1+\frac{\xi k}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}-\frac{1}{\delta}{\left\{\left(1+\frac{\xi k}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}\right]^{\frac{\kappa}{2}} \end{multline} The log likelihood function is defined as \begin{multline}
l(\kappa,\delta,\sigma,\xi)=\sum_{j=1}^{n} \log \Bigg[ \Bigg[1-\left(1+\frac{\xi(k_{j}+1)}{\sigma}\right)^{-\frac{1}{\xi}} + \frac{1}{\delta}{\left\{\left(1+\frac{\xi(k_{j}+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}\\-\frac{1}{\delta}{\left\{\left(1+\frac{\xi(k_{j}+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}\Bigg]^{\frac{\kappa}{2}}-\Bigg[1-\left(1+\frac{\xi k_{j}}{\sigma}\right)^{-\frac{1}{\xi}} + \frac{1}{\delta}{\left\{\left(1+\frac{\xi k_{j}}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}\\-\frac{1}{\delta}{\left\{\left(1+\frac{\xi k_{j}}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}\Bigg]^{\frac{\kappa}{2}}\Bigg] \end{multline} \textbf{(iv)} The PMF corresponding to $G(u;\psi)=pu^{\kappa_1}+(1-p)u^{\kappa_2}$, $\psi=(p, \kappa_1, \kappa_2)>0$ with $\kappa_1\leq\kappa_2$ is written as \begin{multline}
P(Y=k)= p\left[\left\{1-\left(1+\frac{\xi (k+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_1}-\left\{1-\left(1+\frac{\xi k}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_1}\right] \\ +(1-p)\left[\left\{1-\left(1+\frac{\xi (k+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_2}-\left\{1-\left(1+\frac{\xi k}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_2}\right] \end{multline}\ The likelihood function is defined as \begin{multline}
l(p,\kappa_1,\kappa_2,\sigma,\xi)=\sum_{j=1}^{n} \log \Bigg[ p\left[\left\{1-\left(1+\frac{\xi (k_{j}+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_1}-\left\{1-\left(1+\frac{\xi k_{j}}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_1}\right] \\ +(1-p)\left[\left\{1-\left(1+\frac{\xi (k_{j}+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_2}-\left\{1-\left(1+\frac{\xi k_{j}}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_2}\right]\Bigg] \end{multline} To find the estimates of the unknown parameters of the above models, we solve the derivatives of the log-likelihood functions of models (i)--(iv) numerically.
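As a concrete illustration of this numerical step for model (i), the sketch below is a simplified stand-alone Python version (it does not use \texttt{evgam}; the log reparameterization is our choice to keep the optimization unconstrained). Data are simulated by flooring continuous EGPD draws, which reproduces the DEGPD type (i) PMF exactly.

```python
import numpy as np
from scipy.optimize import minimize

def degpd1_pmf(k, kappa, sigma, xi):
    """DEGPD type (i) PMF: H(k+1) - H(k), with H(x) = {F_xi(x/sigma)}^kappa."""
    k = np.asarray(k, dtype=float)
    H = lambda x: (1 - (1 + xi * x / sigma) ** (-1 / xi)) ** kappa
    return H(k + 1) - H(k)

def negloglik(theta, k):
    """theta = (log kappa, log sigma, log xi), so the search is unconstrained."""
    kappa, sigma, xi = np.exp(theta)
    p = degpd1_pmf(k, kappa, sigma, xi)
    return -np.sum(np.log(np.maximum(p, 1e-300)))

rng = np.random.default_rng(0)
kappa0, sigma0, xi0 = 2.0, 1.0, 0.2
u = rng.uniform(size=4000)
x = sigma0 / xi0 * ((1 - u ** (1 / kappa0)) ** (-xi0) - 1)  # continuous EGPD draw
k = np.floor(x).astype(int)                                 # floor -> DEGPD sample
fit = minimize(negloglik, x0=np.log([1.0, 1.0, 0.3]), args=(k,),
               method="Nelder-Mead")
kappa_hat, sigma_hat, xi_hat = np.exp(fit.x)
```

The estimates land near the simulation values, with the larger spread in $\kappa$ that the simulation study also reports.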
\section{Maximum likelihood procedure of ZIDEGPD}
\textbf{(i)} The PMF of ZIDEGPD based on $G(u;\psi)=u^{\kappa}$, $\psi=\kappa>0$ is written as \begin{equation}
P(Z=m)=
\begin{cases}
\pi +(1-\pi) \left[1-\left(1+\frac{\xi}{\sigma}\right)^{-\frac{1}{\xi}} \right]^{\kappa}, \hspace{5cm} m=0 \\
(1-\pi)\left[ \bigg\{1-\left(1+ \frac{\xi (m+1)}{\sigma}\right)^{-\frac{1}{\xi}}\bigg\} ^{\kappa}- \bigg\{1-\left(1+ \frac{\xi (m)}{\sigma}\right)^{-\frac{1}{\xi}}\bigg\}^{\kappa}\right], \hspace{.5cm} m>0
\end{cases} \end{equation} Thus, the likelihood function is defined as \begin{equation}
L(\theta)= \left[\pi+(1-\pi)G \left\{F_{\xi}\left(\frac{1}{\sigma}\right)\right\}\right]^{r} \prod_{{j=1}; m_{j}\neq 0}^{n} (1-\pi)\left[ G\left\{F_{\xi}\left(\frac{m_{j}+1}{\sigma}\right) \right\}- G\left\{F_{\xi}\left(\frac{m_{j}}{\sigma}\right) \right\} \right] \end{equation} The log likelihood function is \begin{multline}
l(\pi, \kappa, \sigma, \xi)= r \log \left[\pi +(1-\pi) \left\{1-\left(1+\frac{\xi}{\sigma}\right)^{-\frac{1}{\xi}} \right\}^{\kappa}\right] + (n-r)\log(1-\pi) \\ +\sum_{j=1}^{n} \log \left[ \bigg\{1-\left(1+ \frac{\xi (m_{j}+1)}{\sigma}\right)^{-\frac{1}{\xi}}\bigg\} ^{\kappa}- \bigg\{1-\left(1+ \frac{\xi (m_{j})}{\sigma}\right)^{-\frac{1}{\xi}}\bigg\}^{\kappa}\right] \end{multline}
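As a numerical sanity check on the expression above, the zero mass $\pi+(1-\pi)G\{F_{\xi}(1/\sigma)\}$ and the positive masses must sum to one. A minimal sketch with our own helper names, assuming $\xi>0$:

```python
import numpy as np

def zidegpd1_loglik(pi, kappa, sigma, xi, m):
    """Log-likelihood of ZIDEGPD type (i): a point mass pi at zero mixed with
    a DEGPD type (i) component, where H(x) = {F_xi(x/sigma)}^kappa."""
    m = np.asarray(m, dtype=float)
    H = lambda x: (1 - (1 + xi * x / sigma) ** (-1 / xi)) ** kappa
    r = np.sum(m == 0)                            # number of observed zeros
    mj = m[m > 0]                                 # positive counts only
    return (r * np.log(pi + (1 - pi) * H(1.0))    # P(0) = pi + (1 - pi) H(1)
            + mj.size * np.log(1 - pi)
            + np.sum(np.log(H(mj + 1) - H(mj))))

pi, kappa, sigma, xi = 0.3, 2.0, 1.0, 0.2
H = lambda x: (1 - (1 + xi * x / sigma) ** (-1 / xi)) ** kappa
grid = np.arange(1, 200000, dtype=float)
total = pi + (1 - pi) * H(1.0) + (1 - pi) * np.sum(H(grid + 1) - H(grid))
ll = zidegpd1_loglik(pi, kappa, sigma, xi, np.array([0, 0, 1, 2, 5]))
```

The sum over positive masses telescopes to $(1-\pi)\{H(\infty)-H(1)\}$, so `total` tends to one.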
\textbf{(ii)} The PMF of ZIDEGPD obtained via $G(u;\psi)=1-D_{\delta}\{(1-u)^{\delta}\},\hspace{1cm} \psi=\delta>0 $ \begin{equation}
P(Z=m)=
\begin{cases}
\pi +(1-\pi) \left[\frac{1}{\delta}{\left\{\left(1+\frac{\xi}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}-\frac{\delta+1}{\delta}{\left\{\left(1+\frac{\xi}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}+1\right] , \hspace{0.5cm} m=0 \\
(1-\pi)\Bigg[ \frac{1}{\delta}{\left\{\left(1+\frac{\xi(m+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}-\frac{\delta+1}{\delta}{\left\{\left(1+\frac{\xi(m+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}\\ -\frac{1}{\delta}{\left\{\left(1+\frac{\xi m}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}+\frac{\delta+1}{\delta}{\left\{\left(1+\frac{\xi m}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}\Bigg], \hspace{3cm} m>0
\end{cases} \end{equation}
The log likelihood function is defined as \begin{multline}
l(\pi, \delta, \sigma, \xi)= r \log \left[\pi +(1-\pi) \left[\frac{1}{\delta}{\left\{\left(1+\frac{\xi}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}-\frac{\delta+1}{\delta}{\left\{\left(1+\frac{\xi}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}+1\right]\right]\\ + (n-r)\log(1-\pi) +\sum_{j=1}^{n} \log \Bigg[ \frac{1}{\delta}{\left\{\left(1+\frac{\xi(m_j+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}\\-\frac{\delta+1}{\delta}{\left\{\left(1+\frac{\xi(m_j+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}} -\frac{1}{\delta}{\left\{\left(1+\frac{\xi m_j}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}+\frac{\delta+1}{\delta}{\left\{\left(1+\frac{\xi m_j}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}\Bigg] \end{multline}
\textbf{(iii)} The PMF of ZIDEGPD corresponding to $G(u;\psi)=\left[1-D_{\delta}\{(1-u)^{\delta}\}\right]^{\kappa/2}; \psi=(\delta,\kappa)>0 $ is
\begin{equation}
P(Z=m)=
\begin{cases}
\pi +(1-\pi) \left[\left[1-\left(1+\frac{\xi}{\sigma}\right)^{-\frac{1}{\xi}} + \frac{1}{\delta}{\left\{\left(1+\frac{\xi}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}-\frac{1}{\delta}{\left\{\left(1+\frac{\xi}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}\right]^{\frac{\kappa}{2}}\right] , \hspace{0.2cm} m=0 \\
(1-\pi)\Bigg[\left[1-\left(1+\frac{\xi(m+1)}{\sigma}\right)^{-\frac{1}{\xi}} + \frac{1}{\delta}{\left\{\left(1+\frac{\xi(m+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}-\frac{1}{\delta}{\left\{\left(1+\frac{\xi(m+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}\right]^{\frac{\kappa}{2}}\\-\left[1-\left(1+\frac{\xi m}{\sigma}\right)^{-\frac{1}{\xi}} + \frac{1}{\delta}{\left\{\left(1+\frac{\xi m}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}-\frac{1}{\delta}{\left\{\left(1+\frac{\xi m}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}\right]^{\frac{\kappa}{2}}\Bigg], \hspace{1cm} m>0
\end{cases} \end{equation} The log likelihood function is defined as \begin{multline}
l(\pi, \kappa,\delta,\sigma,\xi)=r\log\Bigg[\pi +(1-\pi) \left[\left[1-\left(1+\frac{\xi}{\sigma}\right)^{-\frac{1}{\xi}} + \frac{1}{\delta}{\left\{\left(1+\frac{\xi}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}-\frac{1}{\delta}{\left\{\left(1+\frac{\xi}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}\right]^{\frac{\kappa}{2}}\right]\Bigg] \\ + (n-r)\log(1-\pi) +\sum_{j=1}^{n} \log \Bigg[ \Bigg[1-\left(1+\frac{\xi(m_{j}+1)}{\sigma}\right)^{-\frac{1}{\xi}}\\ + \frac{1}{\delta}{\left\{\left(1+\frac{\xi(m_{j}+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}-\frac{1}{\delta}{\left\{\left(1+\frac{\xi(m_{j}+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}\Bigg]^{\frac{\kappa}{2}}-\\ \Bigg[1-\left(1+\frac{\xi m_{j}}{\sigma}\right)^{-\frac{1}{\xi}} + \frac{1}{\delta}{\left\{\left(1+\frac{\xi m_{j}}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}^{\delta+1}-\frac{1}{\delta}{\left\{\left(1+\frac{\xi m_{j}}{\sigma}\right)^{-\frac{1}{\xi}}\right\}}\Bigg]^{\frac{\kappa}{2}}\Bigg] \end{multline} \textbf{(iv)} The PMF of ZIDGPD corresponding to $G(u;\psi)=pu^{\kappa_1}+(1-p)u^{\kappa_2}$, $\psi=(p, \kappa_1, \kappa_2)>0$ is
\begin{equation}
P(Z=m)=
\begin{cases}
\pi +(1-\pi)\left[ p\left\{1-\left(1+\frac{\xi}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_1}+(1-p)\left[\left\{1-\left(1+\frac{\xi}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_2}\right]\right] , \hspace{0.2cm} m=0 \\
(1-\pi)\Bigg[p\left[\left\{1-\left(1+\frac{\xi (m+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_1}-\left\{1-\left(1+\frac{\xi m}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_1}\right] \\ +(1-p)\left[\left\{1-\left(1+\frac{\xi (m+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_2}-\left\{1-\left(1+\frac{\xi m}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_2}\right]\Bigg], \hspace{1cm} m>0
\end{cases} \end{equation} The likelihood function is defined as \begin{multline}
l(\pi, p,\kappa_1,\kappa_2,\sigma,\xi)=r\log\Bigg[\pi +(1-\pi)\left[ p\left\{1-\left(1+\frac{\xi}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_1}+(1-p)\left[\left\{1-\left(1+\frac{\xi}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_2}\right]\right] \Bigg] \\ + (n-r)\log(1-\pi)+\sum_{j=1}^{n} \log \Bigg[ p\left[\left\{1-\left(1+\frac{\xi (m_{j}+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_1}-\left\{1-\left(1+\frac{\xi m_{j}}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_1}\right] \\ +(1-p)\left[\left\{1-\left(1+\frac{\xi (m_{j}+1)}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_2}-\left\{1-\left(1+\frac{\xi m_{j}}{\sigma}\right)^{-\frac{1}{\xi}}\right\}^{\kappa_2}\right]\Bigg] \end{multline} As with the DEGPD, the unknown parameters of the ZIDEGPD models are obtained by solving the derivatives of the log-likelihood functions of models (i)--(iv) numerically. \\
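For model (i), this numerical step can be sketched with a generic optimizer (a stand-alone illustration under a logit/log reparameterization of our choosing; as noted in the paper, $\pi$ and the shape parameters jointly influence the zero mass, so individual estimates can be variable).

```python
import numpy as np
from scipy.optimize import minimize

def zi_negloglik(theta, m):
    """theta = (logit pi, log kappa, log sigma, log xi); unconstrained scale."""
    pi = 1 / (1 + np.exp(-theta[0]))
    kappa, sigma, xi = np.exp(theta[1:])
    H = lambda x: (1 - (1 + xi * x / sigma) ** (-1 / xi)) ** kappa
    r = np.sum(m == 0)
    mj = m[m > 0]
    ll = (r * np.log(pi + (1 - pi) * H(1.0))      # zero part
          + mj.size * np.log(1 - pi)              # (n - r) log(1 - pi)
          + np.sum(np.log(np.maximum(H(mj + 1) - H(mj), 1e-300))))
    return -ll

# Structural zeros with probability pi0, otherwise a floored EGPD draw
rng = np.random.default_rng(1)
pi0, kappa0, sigma0, xi0 = 0.3, 2.0, 1.0, 0.2
n = 4000
z = rng.uniform(size=n) < pi0
u = rng.uniform(size=n)
x = np.floor(sigma0 / xi0 * ((1 - u ** (1 / kappa0)) ** (-xi0) - 1)).astype(int)
m = np.where(z, 0, x)
fit = minimize(zi_negloglik, x0=np.array([0.0, 0.0, 0.0, -1.0]), args=(m,),
               method="Nelder-Mead")
pi_hat = 1 / (1 + np.exp(-fit.x[0]))
```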
\begin{table}[H]
\renewcommand{\thetable}{S1}
\centering
\caption{Root mean square errors of the ZIDEGPD parameter estimates obtained from $10^4$ independent data sets of size $n=1000$.}
\noindent\begin{tabular}{c c c c c c c c c c c c }
\toprule
\multicolumn{12}{c}{Model type (i) $G(u;\psi)= u^\kappa$}\\
\hline
\multicolumn{2}{c}{$\pi$} & \multicolumn{2}{c}{$\kappa$} & \multicolumn{2}{c}{$\sigma$} & \multicolumn{2}{c}{$\xi$}& \multicolumn{2}{c}{$-$}& \multicolumn{2}{c}{$-$}\\
\hline
TRUE &RMSE &TRUE &RMSE & TRUE &RMSE & TRUE &RMSE &TRUE &RMSE &TRUE &RMSE\\
\hline
0.2 & 0.04&5 & 3.00 &1& 0.28 & 0.2& 0.06\\
0.2 & 0.01 &10 & 3.19&1& 0.20 & 0.2& 0.04\\
0.5 & 0.03 & 5 &3.83 &1&0.34 & 0.2&0.07\\
0.5 &0.02 &10 & 3.97 &1&0.25 & 0.2&0.05\\
\hline
\multicolumn{12}{c}{Model type (ii) $G(u;\psi)=1-D_{\delta}\{(1-u)^{\delta}\}$}\\
\hline
\multicolumn{2}{c}{$\pi$} &\multicolumn{2}{c}{$\delta$} & \multicolumn{2}{c}{$\sigma$} & \multicolumn{2}{c}{$\xi$}& \multicolumn{2}{c}{$-$}& \multicolumn{2}{c}{$-$} \\
\hline
0.2 &0.09 & 1 &4.18 & 1 &0.22& 0.20&0.06\\
0.2 & 0.10 & 5 &3.08 & 1 &0.17& 0.20&0.06\\
0.5 & 0.06 & 1 & 4.10 & 1&0.20& 0.20& 0.07\\
0.5 & 0.09 &5 & 1.72 &1& 0.20& 0.20& 0.08\\
\hline
\multicolumn{12}{c}{Model type (iii) $G(u;\psi)=\left[1-D_{\delta}\{(1-u)^{\delta}\}\right]^{\kappa/2}$}\\
\hline
\multicolumn{2}{c}{$\pi$} &\multicolumn{2}{c}{$\kappa$} &\multicolumn{2}{c}{$\delta$} & \multicolumn{2}{c}{$\sigma$}& \multicolumn{2}{c}{$\xi$}& \multicolumn{2}{c}{$-$} \\
\hline
0.2 & 0.03&5 &6.09 &1 &1.90 & 1 & 0.29 &0.20& 0.05\\
0.2 & 0.02&10 &4.20 & 5& 4.10 & 1 & 0.27 &0.20& 0.05 \\
0.5 & 0.03&5 &4.49 & 1& 1.63& 1 & 0.38&0.20& 0.07\\
0.5 & 0.02&10 &5.41 &5 &4.80 & 1 & 0.34 &0.20& 0.07\\
\hline
\multicolumn{12}{c}{Model type (iv) $G(u;\psi)= pu^{\kappa_1} + (1-p)u^{\kappa_2}$}\\
\hline
\multicolumn{2}{c}{$\pi$}& \multicolumn{2}{c}{$p$} & \multicolumn{2}{c}{$\kappa_1$} & \multicolumn{2}{c}{$\kappa_2$}& \multicolumn{2}{c}{$\sigma$}& \multicolumn{2}{c}{$\xi$} \\
\hline
0.2 &0.14 & 0.5&0.29 &1 &4.03 &5& 4.16 & 1 &0.34& 0.20& 0.07\\
0.2 &0.07& 0.5 & 0.30& 5 &2.24& 10 & 9.40 &1& 0.25& 0.20& 0.05\\
0.5 & 0.23 & 0.5 & 0.35&1 &0.66 &5&2.14 & 1&0.37& 0.20& 0.08\\
0.5 & 0.09 &0.5 &0.33 & 5& 2.33&10&7.64 & 1 &0.27& 0.2& 0.06\\
\toprule
\end{tabular}
\label{tab:S2} \end{table}
\renewcommand{\thefigure}{S2} \begin{figure}
\caption{ZIDEGPD type (i)}
\caption{ZIDEGPD type (ii)}
\caption{ZIDEGPD type (iii)}
\caption{Diagnostic plots of residual quantiles of the proposed ZIDEGPD type (i) to ZIDEGPD type (iii) models}
\label{fig:S2}
\end{figure}
\begin{table}[H]
\renewcommand{\thetable}{S2}
\centering
\caption{Estimated coefficients and smooth terms for GAM form ZIDEGPD models fitted to the avalanche data of the Haute-Maurienne massif of the French Alps.}
\noindent\begin{tabular}{ccccccc}
\toprule
\multicolumn{6}{c}{Model type (i) $G(u;\psi)= u^\kappa$}\\
\hline
\multicolumn{6}{c}{** Parametric terms **}\\
\hline
\multicolumn{1}{c}{Parameter (intercept)} & \multicolumn{1}{c}{Estimate} & \multicolumn{1}{c}{Std.Error} & \multicolumn{1}{c}{t value}& \multicolumn{1}{c}{P-value}& \\
\midrule
$\log (\kappa)$ & -1.83 & 0.06 & -32.25 & $<$2e-16\\
$\log(\sigma)$ & 0.2 & 0.1 & 1.99 & 0.0232\\
$\log(\xi)$ & -0.53 & 0.08 & -6.84& 4.04e-12\\
$\mathrm{logit}(\pi)$ & -52.06 & 0.09 & -16.84& 6.02e-13\\
\hline
\multicolumn{6}{c}{\textbf{Smooth terms}}\\
\hline
{$\log(\sigma)$} & {edf} & {max.df } & {Chi.sq } & {Pr($>|t|$)}& {}\\
\hline
s(WS) &1.02 & 4& 23.56 &1.29e-06\\
s(MxT) & 3.99 & 4 & 652.73& $<$2e-16\\
s(PREC) & 1.00 & 4 & 36.41 & 1.6e-09\\
s(RH) & 4.19 & 9 & 70.35& 7.59e-14\\
\bottomrule
\multicolumn{6}{c}{Model type (ii) $G(u;\psi)=1-F_{\delta}\{(1-u)^{\delta}\}$}\\
\hline
\multicolumn{6}{c}{\textbf{Parametric terms}}\\
\hline
\multicolumn{1}{c}{Parameter (intercept)} & \multicolumn{1}{c}{Estimate} & \multicolumn{1}{c}{Std.Error} & \multicolumn{1}{c}{t value}& \multicolumn{1}{c}{P-value}& \\
\midrule
$\log(\delta)$ & 4.67 & 599.22 & 0.01 & 0.497\\
$\log(\sigma)$ & -0.73 & 0.08 & -8.92 & $<$2e-16\\
$\log(\xi)$ & -0.3 & 0.05 & -6.17 & 3.44e-10\\
$\mathrm{logit}(\pi)$ & 0.48 & 9.04 & 0.05 & 0.479\\
\hline
\multicolumn{6}{c}{\textbf{Smooth terms}}\\
\hline
{$\log(\sigma)$} & {edf} & {max.df} & {Chi.sq} & {Pr($>|t|$)}& {}\\
\hline
s(WS) & 1.71 & 4 & 22.34 & 1.3e-05\\
s(MxT) & 2.54 & 4 &692.76 & $<$2e-16\\
s(PREC) & 1.03 & 4 & 33.96 & 7.51e-09\\
s(RH) & 5.50 & 9 & 85.89 & 3.46e-16\\
\bottomrule
\multicolumn{6}{c}{Model type (iii) $G(u;\psi)=\left[1-F_{\delta}\{(1-u)^{\delta}\}\right]^{\kappa/2}$}\\
\hline
\multicolumn{6}{c}{\textbf{Parametric terms}}\\
\hline
\multicolumn{1}{c}{Parameter (intercept)} & \multicolumn{1}{c}{Estimate} & \multicolumn{1}{c}{Std.Error} & \multicolumn{1}{c}{t value}& \multicolumn{1}{c}{P-value}& \\
\midrule
$\log (\kappa)$ & -1.23 & 0.21 & -5.96 & 1.27e-09\\
$\log (\delta)$ & 2.1 & 0.43 & 4.86 & 5.83e-07\\
$\log(\sigma)$ & 0.52 & 0.01 & 37.92 & $<$2e-16\\
$\log(\xi)$ & -0.65 & 0.07 & -9.43 & $<$2e-16\\
$\mathrm{logit}(\pi)$ & -0.96 & 0.59 & -1.62 & 0.0528\\
\hline
\multicolumn{6}{c}{\textbf{Smooth terms}}\\
\hline
{$\log(\sigma)$} & {edf} & {max.df} & {Chi.sq} & {Pr($>|t|$)}& {}\\
\hline
s(WS) &1.00 & 4 & 610.20 & $<$2e-16\\
s(MxT) & 3.95 & 4 & 11340.79 & $<$2e-16\\
s(PREC) & 1.01 & 4 & 792.57 & $<$2e-16\\
s(RH) &7.89 & 9 & 1701.40 & $<$2e-16\\
\bottomrule
\end{tabular}
\label{tab:S2} \end{table}
\end{document}
\begin{document}
\title{Randomly perturbed switching dynamics of a dc/dc converter} \author{Chetan D. Pahlajani} \address{Discipline of Mathematics\\ Indian Institute of Technology Gandhinagar\\ Palaj, Gandhinagar 382355\\ India} \email{cdpahlajani@iitgn.ac.in} \date{\today}
\begin{abstract} In this paper, we study the effect of small Brownian noise on a switching dynamical system which models a first-order {\sc dc}/{\sc dc} buck converter. The state vector of this system comprises a continuous component whose dynamics switch, based on the {\sc on}/{\sc off} configuration of the circuit, between two ordinary differential equations ({\sc ode}), and a discrete component which keeps track of the {\sc on}/{\sc off} configurations. Assuming that the parameters and initial conditions of the unperturbed system have been tuned to yield a stable periodic orbit, we study the stochastic dynamics of this system when the forcing input in the {\sc on} state is subject to small white noise fluctuations of size $\varepsilon$, $0<\varepsilon \ll 1$. For the ensuing stochastic system whose dynamics switch at random times between a small noise stochastic differential equation ({\sc sde}) and an {\sc ode}, we prove a functional law of large numbers which states that in the limit of vanishing noise, the stochastic system converges to the underlying deterministic one on time horizons of order $\mathscr{O}(1/\varepsilon^\nu)$, $0 \le \nu < 2/3$. \end{abstract}
\maketitle
\section{Introduction}\label{S:Intro} Ordinary differential equations ({\sc ode}) and dynamical systems play a fundamental role in modelling and analysis of various phenomena arising in science and engineering. In many applications, however, the smooth evolution of the {\sc ode} dynamics is punctuated by discrete instantaneous events which give rise to switching or non-smooth behaviour. Examples include instantaneous switching between different governing {\sc ode} in a power electronic circuit \cite{BKYY,BV_PowerElectronics,dBGGV}, discontinuous change in velocity for an oscillator impacting a boundary \cite{SH83,Nor91}, etc. In such instances, the dynamical system involves functions which are not smooth, but only piecewise-smooth in their arguments. Such piecewise-smooth dynamical systems \cite{dBBCK} display a wealth of phenomena not seen in their smooth counterparts, and have hence been the subject of much current research.
Dynamical systems arising in practice are almost always subject to random disturbances, owing perhaps to fluctuating external forces, or uncertainties in the system, or unmodelled dynamics, etc. A more accurate picture can therefore be obtained by modelling such systems (at least in the continuous-time case) using {\it stochastic differential equations} ({\sc sde}); intuitively, this corresponds to adding a ``noise" term to the {\sc ode}. For cases where the perturbing noise is small, it is natural to ask whether the stochastic (perturbed) system converges to the deterministic (unperturbed) one in the limit of vanishing noise, and if yes, how the asymptotic behaviour of the fluctuations may be quantified. Such questions have played a significant role in the development of limit theorems for stochastic processes; see, for instance \cite{DZ98,EK86,FW_RPDS,PS_Multiscale}.
Although smooth dynamical systems perturbed by noise have been analysed in great depth over the past few decades, the effect of random noise on non-smooth or switching dynamical systems remains, with some exceptions (see, for instance, \cite{CL_TAC_2007,CL_SICON,HBB1,HBB2,SK14,SK_SD,SK_JNS}), relatively unexplored. One of the challenges in such an undertaking is that even in the absence of noise, the dynamics of switching systems can prove rather difficult to analyse. Part of the reason is the frequently encountered intractability of such systems to analytic computation \cite{BC_Boost, dBGGV}, even in cases when the component subsystems are linear.
Our primary interest is the study of stochastic processes which arise due to small random perturbations of (non-smooth) switching dynamical systems. These problems are of immediate relevance in the analysis of {\sc dc/dc} converters in power electronics---naturally susceptible to noise---in which time-varying circuit topology leads to mathematical models characterised by switching between different governing {\sc ode}. In the purely deterministic setting, the dynamics of these systems have been extensively studied, with much of the work focussing on {\it buck converters} \cite{BKYY,dBGGV,HDJ92,FO96}; these are circuits used to transform an input {\sc dc} voltage to a lower output {\sc dc} voltage. Perhaps the simplest of these is the first-order buck converter; this is a system which switches between two linear first-order {\sc ode}. While this circuit is pleasantly amenable to some explicit computation, it nevertheless displays rich dynamics in certain parameter regimes. Periodic orbits, bifurcations and chaos for this converter have been studied in \cite{BKYY,HDJ92}.
In the present paper, we study small random perturbations of a switching dynamical system which models a first-order buck converter. The state vector of this system comprises a continuous component (the inductor current) governed by one of two different {\sc ode}, and a discrete component which takes values $1$ or $0$ depending on whether the circuit is in the {\sc on} versus {\sc off} configurations, respectively. Assuming that the parameters and initial conditions of the unperturbed system have been tuned to yield a stable periodic orbit, we study the stochastic dynamics of this system when the forcing (input {\sc dc} voltage) is subject to small white noise fluctuations of size $\varepsilon$, with $0< \varepsilon \ll 1$. Our main result is a {\it functional law of large numbers} ({\sc flln}) which states that, as $\varepsilon \searrow 0$, the solution of the stochastically perturbed system converges to that of the underlying deterministic system, over time horizons $\mathsf{T}_\varepsilon$ of order $\mathscr{O}(1/\varepsilon^\nu)$ for any $0 \le \nu < 2/3$.
Part of the novelty of this work, in the context of the literature on switching diffusions (see, e.g., \cite{BBG_JMAA_1999,LuoMao,YZ10,YZ_book}), is that the switching in our problem is {\it not} driven by a discrete-state stochastic process (whose transitions may occur at a rate depending on the continuous component of the state); rather, the switching occurs whenever the continuous component of the state hits a threshold ({\sc on} $\to$ {\sc off}), or upon the arrival of a time-periodic signal ({\sc off} $\to$ {\sc on}). Our switching is thus {\it entirely} determined by the continuous component, together with a periodic clock signal. We also note that since the input {\sc dc} voltage in the buck converter influences the inductor current only in the {\sc on} state \cite{BKYY}, the perturbed system has alternating stochastic and deterministic evolutions: the dynamics switch {\it at random times} between an {\sc sde} driven by a small Brownian motion of size $\varepsilon$ in the {\sc on} state, and an {\sc ode} in the {\sc off} state. The import of our results is that even in the presence of small stochastic perturbations, one may expect the buck converter to function close to its desired operation for ``reasonably long" times.
The rest of the paper is organised as follows. In Section \ref{S:ProblemStatement}, we describe the switching systems (deterministic and stochastic) in some detail, and we pose our problem of interest. Next, in Section \ref{S:MainResult}, we state our main result (Theorem \ref{T:FLLN}) and outline the steps to the proof through a sequence of auxiliary lemmas and propositions. A few of these auxiliary results are proved in Section \ref{S:MainResult}, with the remainder (the slightly lengthier ones) being deferred to Section \ref{S:Proofs}.
\section{Problem Description}\label{S:ProblemStatement} In this section, we formulate our problem of interest. We start with a description, including the governing {\sc ode} and the switching mechanism, of a dynamical system modelling a first-order buck converter in Section \ref{SS:Det_sw_sys}. Random perturbations of this system, which lead to a switching {\sc sde}/{\sc ode} model, are discussed in Section \ref{SS:Stoch_sw_sys}. In Section \ref{SS:explicit_formulas}, we obtain explicit formulas for solutions to both the {\sc sde} and the {\sc ode} {\it between} switching times, and we piece these together {\it at} switching times to obtain expressions describing the overall evolution of both the perturbed (stochastic) and unperturbed (deterministic) switching systems. Finally, after showing in Section \ref{SS:stable_periodic_orbit} how problem parameters can be tuned and initial conditions chosen to ensure that the unperturbed system has a stable periodic orbit, we pose our questions of interest.
Before proceeding further, we note that we have a {\it hybrid} system. Indeed, the full state of the system is specified by a vector $z=(x,y)$ taking values in $\mathscr{Z} \triangleq \mathbb{R} \times \{0,1\}$; here, $x \in \mathbb{R}$ is the continuous component of the state---corresponding to the inductor current in the buck converter---while the discrete component $y$ takes values $1$ or $0$ depending on whether the switch is {\sc on} or {\sc off}.
\subsection{Deterministic switching system}\label{SS:Det_sw_sys} As noted above, the state of our system at time $t \in [0,\infty)$ will be specified by a vector $z(t) \triangleq (x(t),y(t))$ taking values in $\mathscr{Z} \triangleq \mathbb{R} \times \{0,1\}$. We will assume that the dynamics of $x(t)$ when $y(t)=1$ ({\sc on} configuration) are governed by the {\sc ode} \begin{equation}\label{E:ode-on} \frac{dx}{dt}= - \alpha_\textsf{on} \thinspace x + \beta, \end{equation} while the dynamics of $x(t)$ when $y(t)=0$ ({\sc off} configuration) are described by \begin{equation}\label{E:ode-off} \frac{dx}{dt}= - \alpha_\textsf{off} \thinspace x. \end{equation} Here, $\beta$, $\alpha_\textsf{on}$, $\alpha_\textsf{off}$ are fixed positive parameters with $\beta$ representing the (rescaled) input voltage of an external power source, while $\alpha_\textsf{on}$ and $\alpha_\textsf{off}$ denote the (rescaled) resistances in the {\sc on} and {\sc off} configurations, respectively.\footnote{More precisely, $\beta = V_{\mathsf{in}}/L$, $\alpha_\textsf{on}=R/L$ and $\alpha_\textsf{off}=(R+r_d)/L$, where $V_{\mathsf{in}}$ is the input voltage, $R$ denotes the load resistance, and $r_d$ is the diode resistance \cite{BKYY}.}
The switching between the {\sc on} and {\sc off} configurations is effected as follows. A reference level $x_\textsf{ref} \in \left(0,\beta/\alpha_\textsf{on} \right)$ is fixed. Suppose the system starts in the {\sc on} configuration, i.e., $y(0)=1$, with $x(0) \in (0,x_\textsf{ref})$. The current $x(t)$ increases according to \eqref{E:ode-on}, with $y(t)$ staying at 1, until $x(t)$ hits the level $x_{\sf ref}$. At this point, an {\sc on} $\to$ {\sc off} transition occurs: $y(t)$ jumps to 0 and $x(t)$ now evolves according to \eqref{E:ode-off}. This continues until the next arrival of a periodic clock signal with period 1 (which arrives at times $n \in \mathbb{N}$) triggers an {\sc off} $\to$ {\sc on} transition: $y(t)$ jumps back to 1, $x(t)$ again evolves according to \eqref{E:ode-on}, and the cycle continues. Note that if a clock pulse arrives in the {\sc on} configuration, it is ignored. Of course, if one starts in the {\sc off} configuration, $x(t)$ evolves according to \eqref{E:ode-off} until the next clock pulse, at which point the system goes {\sc on}, and the subsequent dynamics are as described above. An important assumption in our analysis is that $x(t)$ is continuous across switching times.
\subsection{Random perturbations}\label{SS:Stoch_sw_sys}
We now suppose that the forcing term $\beta$ in \eqref{E:ode-on} is subjected to small white noise perturbations of size $\varepsilon$, $0 < \varepsilon \ll 1$; for the buck converter, this corresponds to small random fluctuations in the input voltage. In this setting, the state of the system at time $t \in [0,\infty)$ is given by a stochastic process $Z^\varepsilon_t \triangleq (X^\varepsilon_t,Y^\varepsilon_t)$ taking values in $\mathscr{Z}$. The dynamics of $X^\varepsilon_t$ in the {\sc on} configuration ($Y^\varepsilon_t=1$) are now governed by the {\sc sde} \begin{equation}\label{E:sde-on} dX^\varepsilon_t = (-\alpha_\textsf{on} X^\varepsilon_t + \beta)dt + \varepsilon dW_t, \end{equation} where $W_t$ is a standard one-dimensional Brownian motion, while evolution of $X^\varepsilon_t$ in the {\sc off} state ($Y^\varepsilon_t=0$) is governed by the {\sc ode} \eqref{E:ode-off}, as before. The switching mechanism is similar to that in the unperturbed case, but with the stochastic processes $X^\varepsilon_t$, $Y^\varepsilon_t$ playing the roles of $x(t)$, $y(t)$. Note, in particular, that the times for {\sc on} $\to$ {\sc off} transitions are given by {\it passage times} of $X^\varepsilon_t$ (governed by \eqref{E:sde-on}) to the level $x_\textsf{ref}$. As before, $X^\varepsilon_t$ is assumed to be continuous across switching times.
\subsection{Explicit formulas}\label{SS:explicit_formulas}
The foregoing discussion makes clear how $z(t)=(x(t),y(t))$ and $Z^\varepsilon_t = (X^\varepsilon_t,Y^\varepsilon_t)$ are to be obtained, once an initial condition $z_0=(x_0,y_0) \in \mathscr{Z}$ has been specified: the evolutions of $x(t)$ and $X^\varepsilon_t$ are given, respectively, by {\it concatenating} solutions to \eqref{E:ode-on} and \eqref{E:ode-off}, and solutions to \eqref{E:sde-on} and \eqref{E:ode-off}, at the respective switching times, maintaining continuity. The function $y(t)$ and the sample paths of $Y^\varepsilon_t$---which are piecewise constant and take values in $\{0,1\}$---will be assumed to be right-continuous. Below, we obtain expressions for $z(t)$ and $Z^\varepsilon_t$ starting from initial condition $z_0=(x_0,1)$ where $x_0 \in (0,x_\textsf{ref})$. We note that starting with $y_0=1$ entails no real loss of generality; indeed, as will become apparent, the expressions below can be easily modified to accommodate the case when $y_0=0$.
In the sequel, we will use $1_A$ to denote the indicator function of the set $A$, and for real numbers $a,b$, we let $a \wedge b$ and $a \vee b$ denote the minimum and maximum of $a$ and $b$, respectively.
\subsubsection{Solution of deterministic switching system} As indicated above, we fix an initial condition $z_0=(x_0,1)$ with $x_0 \in (0,x_\textsf{ref})$. Let $s_0 \triangleq 0$, and set $\mathsf{x}^{0,\textsf{off}}(t) \equiv x_0$. Next, define
$\mathsf{x}^{1,\textsf{on}}(t) \triangleq 1_{[s_0,\infty)}(t) \cdot \left\{\beta/\alpha_\textsf{on} + \left(x_0 - \beta/\alpha_\textsf{on} \right) e^{-\alpha_\textsf{on} t}\right\}$ for $t \ge 0$.
Let
$t_1 \triangleq \inf\{t > 0:\mathsf{x}^{1,\textsf{on}}(t)=x_\textsf{ref}\}$
be the first time that $\mathsf{x}^{1,\textsf{on}}(t)$ reaches level $x_\textsf{ref}$ and define
$\mathsf{x}^{1,\textsf{off}}(t) \triangleq 1_{[t_1,\infty)}(t) \cdot x_\textsf{ref} \medspace e^{-\alpha_\textsf{off}(t-t_1)}.$
Let $s_1 \triangleq \inf \{t>t_1:t \in \mathbb{Z}\}$ be the time of arrival of the next clock pulse. The solution of the deterministic switching system on the interval $[s_0,s_1)$ is now given by
$\mathsf{x}^1(t) \triangleq \mathsf{x}^{1,\textsf{on}}(t) \cdot 1_{[s_0,t_1)}(t)+ \mathsf{x}^{1,\textsf{off}}(t) \cdot 1_{[t_1,s_1)}(t)$.
In general, given the solution over $[s_0,s_{n-1})$, the solution $\mathsf{x}^{n}(t)$ over $[s_{n-1},s_n)$ is obtained as follows. We let \begin{equation}\label{E:det_state_pieces} \begin{aligned} \mathsf{x}^{n,\textsf{on}}(t) &\triangleq 1_{[s_{n-1},\infty)}(t) \cdot \left\{\frac{\beta}{\alpha_\textsf{on}} + \left(\mathsf{x}^{n-1,\textsf{off}}(s_{n-1}) - \frac{\beta}{\alpha_\textsf{on}}\right) e^{-\alpha_\textsf{on} (t-s_{n-1})}\right\} \qquad \text{for $t \ge 0$,}\\ t_{n} &\triangleq \inf\{t>s_{n-1}:\mathsf{x}^{n,\textsf{on}}(t)=x_\textsf{ref}\},\\ \mathsf{x}^{n,\textsf{off}}(t) &\triangleq 1_{[t_{n},\infty)}(t) \cdot x_\textsf{ref} \medspace e^{-\alpha_\textsf{off}(t-t_{n})} \qquad \text{for $t \ge 0$,}\\ s_{n} &\triangleq \inf\{t>t_{n}: t \in \mathbb{Z}\},\\ \mathsf{x}^{n}(t) &\triangleq \mathsf{x}^{n,\textsf{on}}(t) \cdot 1_{[s_{n-1},t_n)}(t) + \mathsf{x}^{n,\textsf{off}}(t) \cdot 1_{[t_{n},s_{n})}(t). \end{aligned} \end{equation} The evolution of the deterministic switching system over $[0,\infty)$ is now given by \begin{equation}\label{E:det_state_full} z(t) =(x(t),y(t)) \qquad \text{where} \quad x(t) \triangleq \sum_{n \ge 1} \mathsf{x}^{n}(t), \quad y(t) \triangleq \sum_{n \ge 1} 1_{[s_{n-1},t_n)}(t). \end{equation}
We have thus decomposed the evolution into a sequence of {\sc on}/{\sc off} cycles with the switching times $t_n$ and $s_n$ corresponding to the $n$-th {\sc on} $\to$ {\sc off} and {\sc off} $\to$ {\sc on} transitions, respectively; next, we have solved the {\sc ode} \eqref{E:ode-on} and \eqref{E:ode-off} between switching times, and then linked the pieces together while maintaining continuity of $x(t)$ at switching times.
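Since the {\sc on} and {\sc off} solutions are explicit exponentials, the recursion \eqref{E:det_state_pieces} can be evaluated in closed form. The following Python sketch does this; the parameter values $\alpha_\textsf{on}=0.5$, $\alpha_\textsf{off}=0.55$, $\beta=1.1$, $x_\textsf{ref}=1$ are illustrative choices of ours (not taken from the paper), selected to lie in the stable regime of Section \ref{SS:stable_periodic_orbit} so that each threshold crossing occurs before the next clock pulse.

```python
import math

# Illustrative parameters (assumed values, not from the paper), chosen so
# that 0 < alpha_on < log 2 and beta, alpha_off satisfy the stability bounds
# discussed later in the paper.
ALPHA_ON, ALPHA_OFF, BETA, X_REF = 0.5, 0.55, 1.1, 1.0

def simulate_deterministic(x0, n_cycles):
    """Closed-form evaluation of the recursion: returns, for each cycle n,
    the ON->OFF switching time t_n, the OFF->ON clock time s_n, and the
    state x(s_n) at the clock instant."""
    xbar = BETA / ALPHA_ON          # ON-state equilibrium beta/alpha_on
    s_prev, x_prev = 0.0, x0
    history = []
    for _ in range(n_cycles):
        # ON phase: x(t) = xbar + (x_prev - xbar) exp(-alpha_on (t - s_prev))
        # reaches x_ref at t_n; we assume this happens before the next clock
        # pulse, which holds for the stable regime considered here.
        t_n = s_prev + math.log((xbar - x_prev) / (xbar - X_REF)) / ALPHA_ON
        s_n = math.floor(t_n) + 1   # next clock pulse strictly after t_n
        # OFF phase: pure exponential decay from x_ref until the clock.
        x_prev = X_REF * math.exp(-ALPHA_OFF * (s_n - t_n))
        history.append((t_n, s_n, x_prev))
        s_prev = s_n
    return history

hist = simulate_deterministic(x0=0.5, n_cycles=30)
```

For these parameters the successive clock-instant states $x(s_n)$ settle quickly to a value near $0.72$, foreshadowing the stable periodic orbit constructed in Section \ref{SS:stable_periodic_orbit}.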
\subsubsection{Solution of stochastic switching system} We now provide a similar detailed construction of the stochastic process $Z^\varepsilon_t=(X^\varepsilon_t,Y^\varepsilon_t)$ starting from the same initial condition $z_0=(x_0,1)$. Let $W=\{W_t, \mathscr{F}_t:0 \le t < \infty\}$ be a standard one-dimensional Brownian motion on the probability space $(\Omega,\mathscr{F},\mathbb{P})$. We introduce, for each $n \in \mathbb{N}$, the processes $\mathsf{X}^{n,\varepsilon,\textsf{on}}_t$, $\mathsf{X}^{n,\varepsilon,\textsf{off}}_t$, $\mathsf{X}^{n,\varepsilon}_t$ and random switching times $\tau_n^\varepsilon$, $\sigma_n^\varepsilon$, which are defined recursively as follows. Set $\sigma_0^\varepsilon \triangleq 0$ and define $\mathsf{X}^{0,\varepsilon,\textsf{off}}_t \equiv x_0$. Now, let
$\mathsf{X}^{1,\varepsilon,\textsf{on}}_t \triangleq 1_{[\sigma_0^\varepsilon,\infty)}(t) \cdot \left\{\beta/\alpha_\textsf{on} + \left(x_0- \beta/\alpha_\textsf{on} \right) e^{-\alpha_\textsf{on} t} + \varepsilon \int_{0}^t e^{-\alpha_\textsf{on}(t-u)} dW_u\right\}$ for $t \ge 0$,
and let $\tau_1^\varepsilon \triangleq \inf\{t >0:\mathsf{X}^{1,\varepsilon,\textsf{on}}_t=x_\textsf{ref} \}$ be the first passage time of $\mathsf{X}^{1,\varepsilon,\textsf{on}}_t$ to level $x_\textsf{ref}$. We next define
$\mathsf{X}^{1,\varepsilon,\textsf{off}}_t \triangleq 1_{[\tau_1^\varepsilon,\infty)}(t) \cdot x_\textsf{ref} \medspace e^{-\alpha_\textsf{off}(t-\tau_1^\varepsilon)}$
and let $\sigma_1^\varepsilon \triangleq \inf\{t> \tau_1^\varepsilon:t \in \mathbb{Z}\}$ be the time of arrival of the next clock pulse. We now set
$\mathsf{X}^{1,\varepsilon}_t \triangleq \mathsf{X}^{1,\varepsilon,\textsf{on}}_t \cdot 1_{[\sigma_0^\varepsilon,\tau_1^\varepsilon)}(t) + \mathsf{X}^{1,\varepsilon,\textsf{off}}_t \cdot 1_{[\tau_1^\varepsilon,\sigma_1^\varepsilon)}(t)$.
To compactly express $\mathsf{X}^{n,\varepsilon,\textsf{on}}_t$ for general $n$, let $I=\{I_t:0 \le t < \infty\}$ be the process defined by
$I_t \triangleq \int_0^t e^{\alpha_\textsf{on} u} dW_u$ for $t \ge 0$.
Note that $I$ is a continuous, square-integrable Gaussian martingale. We now define, for each $n \in \mathbb{N}$, \begin{equation}\label{E:stoch_state_pieces} \begin{aligned} \mathsf{X}^{n,\varepsilon,\textsf{on}}_t &\triangleq 1_{[\sigma_{n-1}^\varepsilon,\infty)}(t) \cdot \left\{ \frac{\beta}{\alpha_\textsf{on}} + \left(\mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon}- \frac{\beta}{\alpha_\textsf{on}}\right) e^{-\alpha_\textsf{on} (t-\sigma_{n-1}^\varepsilon)} + \varepsilon \thinspace e^{-\alpha_\textsf{on} t} \left(I_t - I_{t\wedge \sigma_{n-1}^\varepsilon} \right) \right\},\\ \tau_{n}^\varepsilon &\triangleq \inf\{t>\sigma_{n-1}^\varepsilon: \mathsf{X}^{n,\varepsilon,\textsf{on}}_t = x_\textsf{ref}\},\\ \mathsf{X}^{n,\varepsilon,\textsf{off}}_t &\triangleq 1_{[\tau_{n}^\varepsilon,\infty)}(t) \cdot x_\textsf{ref} \medspace e^{-\alpha_\textsf{off}(t-\tau_{n}^\varepsilon)},\\ \sigma_{n}^\varepsilon &\triangleq \inf\{t > \tau_{n}^\varepsilon: t \in \mathbb{Z}\},\\ \mathsf{X}^{n,\varepsilon}_t &\triangleq \mathsf{X}^{n,\varepsilon,\textsf{on}}_t \cdot 1_{[\sigma_{n-1}^\varepsilon,\tau_{n}^\varepsilon)}(t) + \mathsf{X}^{n,\varepsilon,\textsf{off}}_t \cdot 1_{[\tau_{n}^\varepsilon,\sigma_{n}^\varepsilon)}(t). \end{aligned} \end{equation}
Our stochastic process of interest is now given by \begin{equation}\label{E:stoch_state_full} Z^\varepsilon_t \triangleq (X^\varepsilon_t,Y^\varepsilon_t), \qquad \text{where} \qquad X^\varepsilon_t \triangleq \sum_{n \ge 1} \mathsf{X}^{n,\varepsilon}_t, \quad Y^\varepsilon_t \triangleq \sum_{n \ge 1} 1_{[\sigma_{n-1}^\varepsilon,\tau_n^\varepsilon)}(t). \end{equation}
Once again, the evolution comprises a sequence of {\sc on}/{\sc off} cycles, with the quantities above admitting a natural interpretation which parallels the unperturbed (deterministic) case.
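A crude way to visualise the perturbed process is an Euler--Maruyama discretisation that switches between the {\sc sde} \eqref{E:sde-on} and the {\sc ode} \eqref{E:ode-off}. The sketch below uses the same illustrative parameters as before (our assumed values, not the paper's), and detects the passage time $\tau_n^\varepsilon$ only on the time grid, a simplification of the exact first passage used in the construction above.

```python
import math, random

# Illustrative parameters (assumed, as above); EPS is the noise size.
ALPHA_ON, ALPHA_OFF, BETA, X_REF, EPS = 0.5, 0.55, 1.1, 1.0, 0.01

def simulate_switching_sde(x0, horizon, dt=1e-3, seed=1):
    """Euler--Maruyama sketch of the switching process: the ON-state SDE
    while Y = 1, the OFF-state ODE while Y = 0; ON->OFF at the first grid
    point with X >= x_ref (a grid-based stand-in for the exact passage
    time), OFF->ON at the integer clock times."""
    rng = random.Random(seed)
    x, y = x0, 1
    path = [(0.0, x, y)]
    for k in range(1, int(round(horizon / dt)) + 1):
        t = k * dt
        if y == 1:
            x += (-ALPHA_ON * x + BETA) * dt + EPS * rng.gauss(0.0, math.sqrt(dt))
            if x >= X_REF:           # passage to the threshold: ON -> OFF
                x, y = X_REF, 0
        else:
            x -= ALPHA_OFF * x * dt
            if abs(t - round(t)) < dt / 2 and round(t) >= 1:
                y = 1                # clock pulse: OFF -> ON
        path.append((t, x, y))
    return path

path = simulate_switching_sde(x0=0.7243, horizon=5)
```

For small $\varepsilon$ the simulated path completes one {\sc on}/{\sc off} cycle per clock period and stays close to the deterministic orbit, consistent with the law of large numbers proved below.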
\subsection{Stable periodic orbit}\label{SS:stable_periodic_orbit} We now describe the assumptions on the problem parameters that ensure the existence of a stable periodic solution to \eqref{E:det_state_pieces}, \eqref{E:det_state_full}. The argument proceeds by analysing the {\it stroboscopic} map \cite{BKYY} which takes the system state at one clock instant to the state at the next. Map-based techniques are used extensively in analysing the switching dynamics of power electronic circuits; see also \cite{dBGGV,HDJ92}.
\begin{assumption}\label{A:assumptions_real} Fix $x_\textsf{ref}>0$, $0< \alpha_\textsf{on} < \log 2$. Select $\beta>0$ such that \begin{equation}\label{E:beta_assumption_real} 2 x_\textsf{ref} \thinspace \alpha_{\textsf{on}} < \beta < \left( \frac{e^{\alpha_\textsf{on}}}{e^{\alpha_\textsf{on}}-1}\right) x_\textsf{ref} \thinspace \alpha_\textsf{on}. \end{equation} Let $\alpha_\textsf{off}>0$ be such that \begin{equation}\label{E:alpha_off_assumption_real} \alpha_\textsf{on} < \alpha_\textsf{off} < \left(\frac{\beta/\alpha_\textsf{on} - x_\textsf{ref}}{x_\textsf{ref}}\right) \alpha_\textsf{on}. \end{equation} \end{assumption}
We now define a map $f:[0,x_\textsf{ref}] \to [0,x_\textsf{ref}]$ which maps $x_0 \in [0,x_\textsf{ref}]$ to the solution $x(t)$ at time 1, subject to the initial condition being $(x_0,1)$, i.e., $f:x_0 \mapsto x(1;x_0,1)$. We are interested in the case when $f$ is only {\it piecewise smooth}. Put another way, if we let
$x_\textsf{border} \triangleq \beta/\alpha_\textsf{on} + \left( x_\textsf{ref} - \beta/\alpha_\textsf{on} \right) e^{\alpha_\textsf{on}}$
be the particular value of $x_0$ for which the corresponding $\mathsf{x}^{1,\textsf{on}}(t)$ satisfies $\mathsf{x}^{1,\textsf{on}}(1)=x_\textsf{ref}$, we would like $x_\textsf{border} \in (0,x_\textsf{ref})$. It is easily seen that the upper bound on $\beta$ in \eqref{E:beta_assumption_real} ensures that such is indeed the case. The map $f:x_0 \mapsto x(1;x_0,1)$ is seen to be given by \begin{equation}\label{E:psmap} f(x) \triangleq \begin{cases} \frac{\beta}{\alpha_\textsf{on}} + \left( x-\frac{\beta}{\alpha_\textsf{on}} \right) e^{-\alpha_\textsf{on}} \qquad & \text{if $0 \le x \le x_\textsf{border}$,}\\ x_\textsf{ref} \thinspace e^{-\alpha_\textsf{off}} \left( \frac{\beta/\alpha_\textsf{on} - x}{\beta/\alpha_\textsf{on} - x_\textsf{ref}} \right)^{\alpha_\textsf{off}/\alpha_\textsf{on}} \qquad & \text{if $x_\textsf{border}<x\le x_\textsf{ref}$.} \end{cases} \end{equation}
\begin{proposition}\label{P:fixed_point}
Suppose Assumption \ref{A:assumptions_real} holds. Then, the mapping $f:[0,x_\textsf{ref}] \to [0,x_\textsf{ref}]$ has a unique fixed point $x^*$ which lies in the interval $(x_\textsf{border},x_\textsf{ref})$. Further, $|f^\prime(x^*)|<1$, implying that $x^*$ is a stable fixed point of the discrete-time dynamical system $x_n \mapsto f(x_n)$. \end{proposition}
\begin{proof} Let $h(x) \triangleq f(x)-x$. Note that $h(0)=f(0)>0$, $h(x_\textsf{ref})=f(x_\textsf{ref})-x_\textsf{ref}=x_\textsf{ref}(e^{-\alpha_\textsf{off}}-1)<0$. Since $h$ is continuous, there exists $x^* \in (0,x_\textsf{ref})$ such that $h(x^*)=0$, i.e., $f(x^*)=x^*$. Since $f(x)>x$ for all $x \in [0,x_\textsf{border}]$, we must have $x^* \in (x_\textsf{border},x_\textsf{ref})$. Further, since $f(x)$ decreases on $(x_\textsf{border},x_\textsf{ref})$ as $x$ increases over this same interval, $f$ can have at most one fixed point. It is easily checked that for $x \in (x_\textsf{border},x_\textsf{ref})$, we have \begin{equation*}
|f^\prime(x)| = \frac{\alpha_\textsf{off} \thinspace f(x)}{\beta - \alpha_\textsf{on} \thinspace x} \le \frac{\alpha_\textsf{off} \thinspace x_\textsf{ref}}{\beta - \alpha_\textsf{on} \thinspace x_\textsf{ref}} < 1, \end{equation*} where the last inequality follows from the upper bound on $\alpha_\textsf{off}$ in \eqref{E:alpha_off_assumption_real}. This proves stability of $x^*$. \end{proof}
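The conclusion of Proposition \ref{P:fixed_point} is easy to check numerically for concrete parameters. The sketch below, with our illustrative values $\alpha_\textsf{on}=0.5$, $\alpha_\textsf{off}=0.55$, $\beta=1.1$, $x_\textsf{ref}=1$ (assumed, not from the paper; they satisfy Assumption \ref{A:assumptions_real}), locates the fixed point $x^*$ by bisection on $h(x)=f(x)-x$ and evaluates the derivative bound from the proof.

```python
import math

# Illustrative parameters (assumed, not from the paper) satisfying
# Assumption 1; for these values x_border is roughly 0.2215.
A_ON, A_OFF, BETA, X_REF = 0.5, 0.55, 1.1, 1.0
XBAR = BETA / A_ON
X_BORDER = XBAR + (X_REF - XBAR) * math.exp(A_ON)

def f(x):
    """Stroboscopic map: state at one clock instant to the next."""
    if x <= X_BORDER:
        return XBAR + (x - XBAR) * math.exp(-A_ON)
    return X_REF * math.exp(-A_OFF) * ((XBAR - x) / (XBAR - X_REF)) ** (A_OFF / A_ON)

# Bisection for the unique root of h(x) = f(x) - x on (x_border, x_ref):
# h > 0 at the left endpoint, h < 0 at the right one, and f decreases there.
lo, hi = X_BORDER, X_REF
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) - mid > 0 else (lo, mid)
x_star = 0.5 * (lo + hi)

# Derivative bound from the proof: |f'(x)| = alpha_off f(x) / (beta - alpha_on x).
slope = A_OFF * f(x_star) / (BETA - A_ON * x_star)
```

For these parameters $x^* \approx 0.724$ and $|f'(x^*)| \approx 0.54 < 1$, in agreement with the proposition.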
We can now pose our principal questions of interest. Suppose $z(\cdot)$, $Z^\varepsilon_\cdot$ are obtained from \eqref{E:det_state_full}, \eqref{E:stoch_state_full}, respectively, with initial conditions $z(0)=Z^\varepsilon_0=(x^*,1)$, where $x^*$ is as in Proposition \ref{P:fixed_point}. \begin{itemize} \item For any fixed $\mathsf{T} \in \mathbb{N}$, do the dynamics of $Z^\varepsilon_\cdot$ converge to those of $z(\cdot)$ in a suitable sense as $\varepsilon \searrow 0$? \item If yes, can the results be strengthened to the case when $\mathsf{T}=\mathsf{T}_\varepsilon \in \mathbb{N}$ grows to infinity, but ``not too fast", as $\varepsilon \searrow 0$? \end{itemize} In the next section, we will show that both these questions can be answered in the affirmative, provided $\mathsf{T}_\varepsilon = \mathscr{O}(1/\varepsilon^\nu)$ with $0 \le \nu < 2/3$.
\section{Main Result}\label{S:MainResult} Recall that the state space for the evolution of $z(t)$ and $Z^\varepsilon_t$ is $\mathscr{Z}=\mathbb{R} \times \{0,1\}$, which inherits the metric
$r(z_1,z_2) \triangleq \left\{|x_1-x_2|^2 + |y_1-y_2|^2 \right\}^{1/2}$ for $z_1=(x_1,y_1)$, $z_2=(x_2,y_2) \in \mathscr{Z}$,
from $\mathbb{R}^2$. If $I$ is a closed subinterval of $[0,\infty)$, we let $D(I;\mathscr{Z})$ be the space of functions $\mathsf{z}:I \to \mathscr{Z}$ which are right-continuous with left limits. This space can be equipped with the Skorokhod metric $d_I$ \cite{ConvProbMeas,EK86}, which renders it complete and separable. If $\mathsf{z} \in D(I;\mathscr{Z})$ and $J$ is a closed subinterval of $I$, then the restriction of $\mathsf{z}$ to $J$ is an element of $D(J;\mathscr{Z})$ which, for simplicity of notation, will also be denoted by $\mathsf{z}$. For our switching systems of interest, we note that the function $z(t)$ in \eqref{E:det_state_full}, and the sample paths of the process $Z^\varepsilon_t$ in \eqref{E:stoch_state_full}, belong to $D([0,\infty);\mathscr{Z})$. Our goal here is to study the convergence, as $\varepsilon \searrow 0$, of $Z^\varepsilon_\cdot$ to $z(\cdot)$ in the space $D([0,\mathsf{T}_\varepsilon];\mathscr{Z})$ for time horizons $\mathsf{T}_\varepsilon = \mathscr{O}(1/\varepsilon^\nu)$ where $0 \le \nu <2/3$.
We start by defining the Skorokhod metric $d_I$ on the space $D(I;\mathscr{Z})$, where $I=[0,\mathsf{T}]$ for some $\mathsf{T} >0$.\footnote{See \cite{ConvProbMeas,EK86} for the case $I=[0,\infty)$.} Let $\tilde\Lambda_\mathsf{T}$ be the set of all strictly increasing continuous mappings from $[0,\mathsf{T}]$ onto itself,\footnote{Thus, we have $\lambda(0)=0$ and $\lambda(\mathsf{T})=\mathsf{T}$ for all $\lambda \in \tilde{\Lambda}_\mathsf{T}$.} and let $\Lambda_\mathsf{T}$ be the set of functions $\lambda \in \tilde{\Lambda}_\mathsf{T}$ for which \begin{equation*}
\gamma_\mathsf{T}(\lambda) \triangleq \sup_{0 \le s < t \le \mathsf{T}} \left| \log \frac{\lambda(t)-\lambda(s)}{t-s} \right| < \infty. \end{equation*} For $\mathsf{z}_1$, $\mathsf{z}_2 \in D([0,\mathsf{T}];\mathscr{Z})$, we now define \begin{equation}\label{E:Sk_metric} d_{[0,\mathsf{T}]}(\mathsf{z}_1,\mathsf{z}_2) \triangleq \inf_{\lambda \in \Lambda_\mathsf{T}} \left\{ \gamma_\mathsf{T}(\lambda) \vee \sup_{0 \le t \le \mathsf{T}} r\left(\mathsf{z}_1(t),\mathsf{z}_2(\lambda(t))\right) \right\}. \end{equation} Note that if $d^u_{[0,\mathsf{T}]}(\mathsf{z}_1,\mathsf{z}_2) \triangleq \sup_{0 \le t \le \mathsf{T}} r(\mathsf{z}_1(t),\mathsf{z}_2(t))$ is the uniform metric on $D([0,\mathsf{T}];\mathscr{Z})$, then $d_{[0,\mathsf{T}]}(\mathsf{z}_1,\mathsf{z}_2) \le d^u_{[0,\mathsf{T}]}(\mathsf{z}_1,\mathsf{z}_2)$. Indeed, the latter corresponds to the specific choice $\lambda(t) \equiv t$.
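The point of working with $d_{[0,\mathsf{T}]}$ rather than the uniform metric is that two piecewise-constant paths whose jump times differ slightly are uniformly far apart but Skorokhod-close. The following sketch (our illustration, not from the paper) bounds the distance between two single-jump indicator paths using one explicit piecewise-linear time change $\lambda$, for which $\gamma_\mathsf{T}(\lambda)$ is just the largest $|\log|$ of its slopes.

```python
import math

def sk_bound_single_jump(a, b, T):
    """Upper bound on the Skorokhod distance between the indicator paths
    z1 = 1_[0,a) and z2 = 1_[0,b) on [0,T], with 0 < a, b < T: the
    piecewise-linear time change lambda taking [0,a] onto [0,b] and
    [a,T] onto [b,T] aligns the jumps exactly, so z1(t) = z2(lambda(t))
    for all t and only the gamma_T(lambda) term survives.  For a
    piecewise-linear lambda, every chord slope lies between the two
    segment slopes, so gamma_T(lambda) = max of |log slope|."""
    return max(abs(math.log(b / a)), abs(math.log((T - b) / (T - a))))

# Jumps at 0.50 versus 0.52 on [0,5]: the uniform distance is 1 whenever
# the jump times differ, but the Skorokhod bound is only |log(0.52/0.50)|.
bound = sk_bound_single_jump(0.50, 0.52, 5.0)
uniform = 1.0
```

This is exactly the mechanism used below: a random time change aligning the jumps of $Y^\varepsilon$ with those of $y$ makes $d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z)$ small even though the discrete components never agree uniformly.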
We now state our main result.
\begin{theorem}[Main Theorem]\label{T:FLLN} Fix $\mathfrak{T} \in \mathbb{N}$, $0 \le \nu<2/3$. For $\varepsilon \in (0,1)$, let $\mathsf{T}_\varepsilon \in \mathbb{N}$ be such that $\mathsf{T}_\varepsilon \le \mathfrak{T}/\varepsilon^\nu$. Suppose $z(\cdot)$, $Z^\varepsilon_\cdot$ are given by \eqref{E:det_state_full}, \eqref{E:stoch_state_full}, respectively, with initial conditions $z(0)=Z^\varepsilon_0=(x^*,1)$, where $x^*$ is as in Proposition \ref{P:fixed_point}. Then, for any $p \in [1,\infty)$, we have that $d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z) \to 0$ in $L^p$, i.e., \begin{equation}\label{E:conv-in-Lp} \lim_{\varepsilon \searrow 0} \mathbb{E} \left[\left(d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z)\right)^p \right]=0. \end{equation}
\end{theorem}
\begin{remark}\label{R:conv-in-prob} Of course, Theorem \ref{T:FLLN} implies that $d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z)$ converges to $0$ in probability, i.e., for any $\vartheta>0$, we have $\lim_{\varepsilon \searrow 0} \mathbb{P} \left\{ d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z) \ge \vartheta \right\}=0$. \end{remark}
To explain the intuition behind Theorem \ref{T:FLLN}, we note that when $\varepsilon \ll 1$, the likely behaviour of $X^\varepsilon_t$ is to closely track $x(t)$. Therefore, one expects that with high probability, we have $\tau_n^\varepsilon \approx t_n$, $\sigma^\varepsilon_n=s_n = n$ for each $1 \le n \le \mathsf{T}_\varepsilon$ (at least if $\mathsf{T}_\varepsilon$ is not too large). On this ``good'' event, a random time-deformation $\lambda$, for which $\gamma_{\mathsf{T}_\varepsilon}(\lambda)$ is small, can be used to align the jumps of $Y^\varepsilon_t$ and $y(t)$ so that $Y^\varepsilon_{\lambda(t)} \equiv y(t)$. Continuity now ensures that $X^\varepsilon_{\lambda(t)}$ is close to $x(t)$, and we get that $d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z)$ can be bounded above by a term which goes to zero as $\varepsilon \searrow 0$. It now remains to show that the probability of the complement of this event, i.e., the event where one or more of the $\tau_n^\varepsilon$ differ from $t_n$ by a significant amount, is small.
Our argument is organised as follows. First, we introduce an additional scale $\delta \searrow 0$ to quantify the proximity of $\tau^\varepsilon_n$ to $t_n$; we will later take $\delta = \varepsilon^\varsigma$ for suitable $\varsigma>0$. Now, for $\varepsilon,\delta \in (0,1)$, set $G_0^{\varepsilon,\delta} \triangleq \Omega$, and for $n \ge 1$, define \begin{equation*} \begin{aligned}
G_n^{\varepsilon,\delta} & \triangleq \{\omega \in G_{n-1}^{\varepsilon,\delta}: |\tau_n^\varepsilon(\omega)-t_n| \le \delta\}\\
B_n^{\varepsilon,\delta} & \triangleq \{\omega \in G_{n-1}^{\varepsilon,\delta}: |\tau_n^\varepsilon(\omega)-t_n| > \delta\} = G_{n-1}^{\varepsilon,\delta} \setminus G_n^{\varepsilon,\delta}. \end{aligned} \end{equation*} Note that the $G_n^{\varepsilon,\delta}$'s are decreasing, i.e., $G_0^{\varepsilon,\delta} \supset G_1^{\varepsilon,\delta} \supset \dots$ and that the $B_n^{\varepsilon,\delta}$'s are pairwise disjoint. Consequently, $ \cap_{n=1}^{\mathsf{T}_\varepsilon} G_n^{\varepsilon,\delta}=G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta} $ and $\mathbb{P} \left( \cup_{n=1}^{\mathsf{T}_\varepsilon} B_n^{\varepsilon,\delta} \right) = \sum_{n=1}^{\mathsf{T}_\varepsilon} \mathbb{P} \left( B_n^{\varepsilon,\delta} \right)$.
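Note also, for later use in the proof of Theorem \ref{T:FLLN}, that since $G_0^{\varepsilon,\delta}=\Omega$, the union of the bad events telescopes to the complement of the final good event:
\begin{equation*}
\Omega \setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta} = \bigcup_{n=1}^{\mathsf{T}_\varepsilon} \left(G_{n-1}^{\varepsilon,\delta} \setminus G_{n}^{\varepsilon,\delta}\right) = \bigcup_{n=1}^{\mathsf{T}_\varepsilon} B_n^{\varepsilon,\delta}, \qquad \text{whence} \qquad \mathbb{P}\left(\Omega \setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}\right) = \sum_{n=1}^{\mathsf{T}_\varepsilon} \mathbb{P}\left(B_n^{\varepsilon,\delta}\right).
\end{equation*}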
We now outline the principal steps in proving Theorem \ref{T:FLLN}. First, in Proposition \ref{P:pth_power_estimate}, we derive a path-wise estimate for $d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z)$ and its positive powers. This result assures us that our quantity of interest is indeed small on the event $G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}$ and of order $1$ on its complement. Then, in Proposition \ref{P:bn}, we obtain an upper bound on $\mathbb{P}\left( B_n^{\varepsilon,\delta} \right)$ in terms of the tail of the standard normal distribution. These two propositions enable us to complete the proof of Theorem \ref{T:FLLN}. Both Propositions \ref{P:pth_power_estimate} and \ref{P:bn} are proved through a series of lemmas, whose proofs are deferred to Section \ref{S:Proofs}.
We start by introducing some notation. Let $\mathsf{t}_\textsf{on} \triangleq t_n-s_{n-1}$ and $\mathsf{t}_\textsf{off} \triangleq s_n-t_n$ denote the amounts of time in each interval $[n-1,n]$ for which the deterministic system is in the {\sc on} and {\sc off} states respectively (both are independent of $n$), and let $\mathsf{t}_\mathsf{min} \triangleq \mathsf{t}_\textsf{on} \wedge \mathsf{t}_\textsf{off}$.
\begin{proposition}\label{P:pth_power_estimate} For every $p>0$, there exists a constant $C_p>0$ such that for all $\varepsilon,\delta \in (0,1)$, $\mathsf{T}_\varepsilon \in \mathbb{N}$ satisfying $0 <\delta \le \mathsf{t}_\mathsf{min}/(4\mathsf{T}_\varepsilon)$, we have \begin{equation}\label{E:pth_power_estimate}
(d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z))^p \le C_p \left\{ (\mathsf{T}_\varepsilon \delta)^p + \delta^p + 1_{\Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}} + \varepsilon^p \thinspace \mathsf{T}_\varepsilon^p \sup_{0 \le t \le \mathsf{T}_\varepsilon} |W_t|^p \right\}. \end{equation} \end{proposition}
To prove Proposition \ref{P:pth_power_estimate}, we will employ the (random) time-deformation $\lambda^\varepsilon:\Omega \to \Lambda_{\mathsf{T}_\varepsilon}$ defined by \begin{multline}\label{E:time_deformation_a} \lambda^\varepsilon_t (\omega) \triangleq \sum_{n \ge 1} 1_{[s_{n-1},t_n)}(t) \cdot \left\{ s_{n-1} + \left( \frac{\tau_n^\varepsilon(\omega)-s_{n-1}}{t_n-s_{n-1}}\right) (t-s_{n-1}) \right\}\\ + \sum_{n \ge 1} 1_{[t_n,s_n)}(t) \cdot \left\{ \tau_n^\varepsilon(\omega) + \left( \frac{s_n-\tau_n^\varepsilon(\omega)}{s_n-t_n}\right) (t-t_n) \right\} \qquad \text{if $\omega \in G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}$,} \end{multline} and \begin{equation}\label{E:time_deformation_b} \lambda^\varepsilon_t (\omega) \triangleq t \qquad \text{if $\omega \in \Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}$.} \end{equation} Note that in actuality, $\lambda^\varepsilon=\lambda^{\varepsilon,\delta}$. However, we have suppressed the $\delta$-dependence to reduce clutter and also because we will eventually take $\delta=\delta(\varepsilon)$.
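To make the construction concrete, the following standalone Python sketch (purely illustrative; the values $t^*=0.6$, $\delta=0.03$ and the perturbed switch times $\tau^\varepsilon_n$ are made up for the example) implements the deformation \eqref{E:time_deformation_a} on the good event and exhibits its defining properties: it is strictly increasing, fixes $0$ and the $s_n$, and maps each $t_n$ to $\tau^\varepsilon_n$.

```python
# Illustrative implementation of the piecewise-linear time deformation
# lambda^eps on the good event.  All numerical values are hypothetical:
# deterministic switch times t_n = n - 1 + t_star with t_star = 0.6,
# s_n = n, and made-up random times tau_n with |tau_n - t_n| <= 0.03.
def make_lambda(taus, t_star=0.6):
    T = len(taus)
    t = [n - 1 + t_star for n in range(1, T + 1)]   # ON -> OFF times t_1, ..., t_T
    s = list(range(T + 1))                          # OFF -> ON times s_0 = 0, ..., s_T = T

    def lam(u):
        for n in range(1, T + 1):
            if s[n - 1] <= u < t[n - 1]:
                # ON phase: stretch [s_{n-1}, t_n] linearly onto [s_{n-1}, tau_n]
                return s[n - 1] + (taus[n - 1] - s[n - 1]) / (t[n - 1] - s[n - 1]) * (u - s[n - 1])
            if t[n - 1] <= u < s[n]:
                # OFF phase: stretch [t_n, s_n] linearly onto [tau_n, s_n]
                return taus[n - 1] + (s[n] - taus[n - 1]) / (s[n] - t[n - 1]) * (u - t[n - 1])
        return float(u)                             # u == T: endpoint is fixed

    return lam, t, s

lam, t, s = make_lambda([0.63, 1.58, 2.61])
```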
The first step is to show that $\gamma_{\mathsf{T}_\varepsilon}(\lambda^\varepsilon)$ is small; this is accomplished in Lemma \ref{L:time_deformation_norm} below. Next, in Lemma \ref{L:full_state_close}, we estimate $\sup_{0 \le t \le \mathsf{T}_\varepsilon} | X^\varepsilon_{\lambda^\varepsilon_t}-x(t) |^p$ and $\sup_{0 \le t \le \mathsf{T}_\varepsilon} | Y^\varepsilon_{\lambda^\varepsilon_t}-y(t) |^p$ for $p>0$.
\begin{lemma}\label{L:time_deformation_norm} Let $\mathsf{T}_\varepsilon \in \mathbb{N}$. If $0 <\delta \le \mathsf{t}_\mathsf{min}/(4\mathsf{T}_\varepsilon)$, then for each $\omega \in \Omega$, we have \begin{equation}\label{E:time_deformation_norm} \gamma_{\mathsf{T}_\varepsilon}(\lambda^\varepsilon(\omega)) \le \frac{4\mathsf{T}_\varepsilon \delta}{\mathsf{t}_\mathsf{min}}. \end{equation} \end{lemma}
\begin{lemma}\label{L:full_state_close} For every $p>0$, there exists a constant $c_p>0$ such that for all $\varepsilon,\delta \in (0,1)$, we have \begin{equation}\label{E:full_state_close_ms} \begin{aligned}
\sup_{0 \le t \le \mathsf{T}_\varepsilon} | X^\varepsilon_{\lambda^\varepsilon_t}-x(t) |^p & \le c_p \left\{ \delta^p\thinspace 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}} + 1_{\Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}} + \varepsilon^p \thinspace \mathsf{T}_\varepsilon^p \sup_{0 \le t \le \mathsf{T}_\varepsilon} |W_t|^p \right\},\\
\sup_{0 \le t \le \mathsf{T}_\varepsilon} |Y^\varepsilon_{\lambda^\varepsilon_t}-y(t)|^p & \le 1_{\Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}. \end{aligned} \end{equation} \end{lemma}
Lemmas \ref{L:time_deformation_norm} and \ref{L:full_state_close} are proved in Section \ref{S:Proofs}. Assuming them for now, we prove Proposition \ref{P:pth_power_estimate}.
\begin{proof}[Proof of Proposition \ref{P:pth_power_estimate}] Let $\lambda^\varepsilon \in \Lambda_{\mathsf{T}_\varepsilon}$ be as in \eqref{E:time_deformation_a}, \eqref{E:time_deformation_b}. It is easily seen from \eqref{E:Sk_metric} that \\
$(d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z))^p \le 3^p \left\{ (\gamma_{\mathsf{T}_\varepsilon}(\lambda^\varepsilon))^p + \sup_{0 \le t \le \mathsf{T}_\varepsilon} |X^\varepsilon_{\lambda^\varepsilon_t}-x(t)|^p + \sup_{0 \le t \le \mathsf{T}_\varepsilon} |Y^\varepsilon_{\lambda^\varepsilon_t}-y(t)|^p \right\}$.
The claim \eqref{E:pth_power_estimate} now follows from Lemmas \ref{L:time_deformation_norm}, \ref{L:full_state_close}. \end{proof}
We now estimate $\mathbb{P}\left( B_n^{\varepsilon,\delta} \right)$. Let $\mathscr{T}$ denote the right tail of the standard normal distribution; i.e., \begin{equation*} \mathscr{T}(x) \triangleq \frac{1}{\sqrt{2 \pi}} \int_x^\infty e^{-t^2/2} dt \qquad \text{for $x \ge 0$.} \end{equation*} A simple integration by parts yields \begin{equation}\label{E:gaussian_tail_estimate_1} \mathscr{T}(x) \le \frac{3}{\sqrt{2\pi}} x^2 e^{-x^2/2}, \qquad \text{for $x \ge 1$.} \end{equation}
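As a quick numerical sanity check of \eqref{E:gaussian_tail_estimate_1} (an illustration only, not part of the argument), one can compare $\mathscr{T}$ with the right-hand side using the identity $\mathscr{T}(x) = \tfrac{1}{2}\operatorname{erfc}(x/\sqrt{2})$:

```python
import math

# Numerical check of T(x) <= 3/sqrt(2*pi) * x^2 * exp(-x^2/2) for x >= 1,
# where T(x) = P(Z >= x) for a standard normal random variable Z.
def normal_tail(x):
    # right tail of the standard normal via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def tail_bound(x):
    # right-hand side of the integration-by-parts estimate, valid for x >= 1
    return 3.0 / math.sqrt(2.0 * math.pi) * x * x * math.exp(-x * x / 2.0)

xs = [1.0 + 0.1 * k for k in range(100)]   # sample points in [1, 10.9]
ok = all(normal_tail(x) <= tail_bound(x) for x in xs)
```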
\begin{proposition}\label{P:bn} There exists $\delta_0 \in (0,1)$ and $K>0$ such that whenever $0<\delta<\delta_0$, $\varepsilon \in (0,1)$, and $n \ge 1$, we have \begin{equation}\label{E:prob_bn} \mathbb{P}\left( B_n^{\varepsilon,\delta} \right) \le 3 \mathscr{T} \left( K \frac{\delta}{\varepsilon} \right). \end{equation} \end{proposition}
To prove this proposition, we write the event $B_n^{\varepsilon,\delta}$ as the disjoint union $B_n^{\varepsilon,\delta,-}\cup B_n^{\varepsilon,\delta,+}$ where
$B_n^{\varepsilon,\delta,-} \triangleq \{\omega \in G_{n-1}^{\varepsilon,\delta}: \tau_n^\varepsilon(\omega) < t_n - \delta \}$ and $B_n^{\varepsilon,\delta,+} \triangleq \{\omega \in G_{n-1}^{\varepsilon,\delta}: \tau_n^\varepsilon(\omega) > t_n + \delta \}$.
The quantities $\mathbb{P}( B_n^{\varepsilon,\delta,-})$ and $\mathbb{P}( B_n^{\varepsilon,\delta,+})$ are now estimated separately in Lemmas \ref{L:bnminus} and \ref{L:bnplus} below, whose proofs are given in Section \ref{S:Proofs}.
\begin{lemma}\label{L:bnminus} There exists $K_->0$ such that for any $n \ge 1$, $\varepsilon,\delta \in (0,1)$, we have \begin{equation}\label{E:prob_bnminus} \mathbb{P}\left( B_n^{\varepsilon,\delta,-} \right) \le 2 \mathscr{T} \left( K_- \frac{\delta}{\varepsilon} \right). \end{equation} \end{lemma}
\begin{lemma}\label{L:bnplus} There exists $\delta_+ \in (0,1)$ and $K_+>0$ such that whenever $0 < \delta < \delta_+$, $\varepsilon \in (0,1)$, $n \ge 1$, we have \begin{equation}\label{E:prob_bnplus} \mathbb{P}\left( B_n^{\varepsilon,\delta,+} \right) \le \mathscr{T} \left( K_+ \frac{\delta}{\varepsilon} \right). \end{equation} \end{lemma}
With these lemmas in hand, we can prove Proposition \ref{P:bn}. \begin{proof}[Proof of Proposition \ref{P:bn}] Take $K \triangleq K_-\wedge K_+$ and $\delta_0 \triangleq \delta_+$, with $K_\pm$, $\delta_+$ as in Lemmas \ref{L:bnminus} and \ref{L:bnplus}. Since $\mathscr{T}$ is strictly decreasing, \eqref{E:prob_bnminus} and \eqref{E:prob_bnplus} together yield \eqref{E:prob_bn}. \end{proof}
Finally, we can complete the proof of the main result.
\begin{proof}[Proof of Theorem \ref{T:FLLN}]
Fix $p \in [1,\infty)$ and let $\delta \triangleq \varepsilon^\varsigma$ where $\nu<\varsigma<1$. By the Burkholder--Davis--Gundy inequalities \cite[Theorem 3.3.28]{KS91}, there exists a universal positive constant $k_{p/2}$ such that
$\mathbb{E} \left[ \sup_{0 \le t \le \mathsf{T}_\varepsilon} | W_t |^p \right] \le k_{p/2} \mathsf{T}_\varepsilon^{p/2}$. Noting that $\mathbb{E}[1_{\Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}]=\sum_{n=1}^{\mathsf{T}_\varepsilon} \mathbb{P}(B_n^{\varepsilon,\delta})$, we see from Propositions \ref{P:pth_power_estimate} and \ref{P:bn} that for $\varepsilon \in (0,1)$ small enough,\footnote{One can check that $0 < \varepsilon < \left(\frac{\mathsf{t}_\mathsf{min}}{4\mathfrak{T}}\right)^{1/(\varsigma-\nu)}\wedge \delta_{0}^{1/\varsigma}$ will suffice.} \begin{equation*} \mathbb{E}[(d_{[0,\mathsf{T}_\varepsilon]}(Z^\varepsilon,z))^p] \le C_p \left\{ \mathfrak{T}^p \varepsilon^{(\varsigma-\nu)p} + \varepsilon^{\varsigma p} + \frac{3 \mathfrak{T}}{\varepsilon^\nu} \mathscr{T} \left(K \frac{1}{\varepsilon^{1-\varsigma}}\right) + k_{p/2} \mathfrak{T}^{3p/2} \varepsilon^{(1-3\nu/2)p} \right\}. \end{equation*} Since $0 \le \nu<2/3$ and $\nu<\varsigma<1$, straightforward calculations using \eqref{E:gaussian_tail_estimate_1} yield \eqref{E:conv-in-Lp}. \end{proof}
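For the reader's convenience, we record the straightforward calculations invoked at the end of the preceding proof.
\begin{remark}
Since $\nu < \varsigma < 1$, the exponents $(\varsigma-\nu)p$ and $\varsigma p$ are strictly positive, and since $0 \le \nu < 2/3$, so is $(1-3\nu/2)p$; hence the first, second and fourth terms in the last display vanish as $\varepsilon \searrow 0$. For the third term, once $\varepsilon$ is small enough that $K\varepsilon^{\varsigma-1} \ge 1$, the estimate \eqref{E:gaussian_tail_estimate_1} gives
\begin{equation*}
\frac{3 \mathfrak{T}}{\varepsilon^\nu} \thinspace \mathscr{T} \left(\frac{K}{\varepsilon^{1-\varsigma}}\right) \le \frac{9 \mathfrak{T} K^2}{\sqrt{2\pi}} \thinspace \varepsilon^{-\nu-2(1-\varsigma)} \exp\left(-\frac{K^2}{2 \thinspace \varepsilon^{2(1-\varsigma)}}\right) \longrightarrow 0 \qquad \text{as $\varepsilon \searrow 0$,}
\end{equation*}
since $\varsigma<1$ and the exponential factor dominates any negative power of $\varepsilon$.
\end{remark}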
\section{Proofs of Lemmas}\label{S:Proofs}
\begin{proof}[Proof of Lemma \ref{L:time_deformation_norm}] For $\omega \in \Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}$, we have $\gamma_{\mathsf{T}_\varepsilon}(\lambda^\varepsilon(\omega))=0$, implying \eqref{E:time_deformation_norm}. So, fix $\omega \in G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}$. Note that the function $\lambda^\varepsilon_\cdot(\omega)$ is piecewise-linear with ``corners'' at $0=s_0<t_1<s_1<\dots<t_{\mathsf{T}_\varepsilon}<s_{\mathsf{T}_\varepsilon}=\mathsf{T}_\varepsilon$. Since $\omega \in G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}$, we have $\tau_n^\varepsilon(\omega) \in [t_n-\delta,t_n+ \delta]$ for $1 \le n \le \mathsf{T}_\varepsilon$. Recalling that $\lambda^\varepsilon_{t_n}(\omega) = \tau_n^\varepsilon(\omega)$, it is now easy to see that \begin{equation}\label{E:time_def_aux_1}
\max_{1 \le n \le \mathsf{T}_\varepsilon} \left\{ \left| \frac{\lambda^\varepsilon_{t_n}(\omega)-\lambda^\varepsilon_{s_{n-1}}(\omega)}{t_n-s_{n-1}} - 1 \right| \vee \left| \frac{\lambda^\varepsilon_{s_n}(\omega)-\lambda^\varepsilon_{t_n}(\omega)}{s_n-t_n} - 1 \right| \right\} \le \frac{\delta}{\mathsf{t}_\mathsf{min}}. \end{equation}
Now, let $0 \le s < t \le \mathsf{T}_\varepsilon$ and let $\{u_0,u_1,\dots,u_k\}$ be a sequential enumeration of all corners starting just to the left of $s$ and ending just to the right of $t$, i.e., $u_0 \le s < u_1 < \dots < u_{k-1} < t \le u_k$. By the triangle inequality, we have \begin{multline*}
|\lambda^\varepsilon_t(\omega)-\lambda^\varepsilon_s(\omega) - (t-s)| \le |t-u_{k-1}| \left| \frac{\lambda^\varepsilon_t(\omega)-\lambda^\varepsilon_{u_{k-1}}(\omega)}{t-u_{k-1}} - 1 \right| \\+ \sum_{i=2}^{k-1} |u_i-u_{i-1}| \left| \frac{\lambda^\varepsilon_{u_i}(\omega)-\lambda^\varepsilon_{u_{i-1}}(\omega)}{u_i-u_{i-1}} - 1 \right| + |u_1-s| \left| \frac{\lambda^\varepsilon_{u_1}(\omega)-\lambda^\varepsilon_s(\omega)}{u_1-s} - 1 \right|. \end{multline*}
Noting that $|t-u_{k-1}|, |u_{k-1}-u_{k-2}|,\dots,|u_1-s|$ are less than $|t-s|$, recalling the piecewise-linear nature of $\lambda^\varepsilon_t(\omega)$, and using \eqref{E:time_def_aux_1}, we get \begin{equation*}
\left| \frac{\lambda^\varepsilon_t(\omega)-\lambda^\varepsilon_s(\omega)}{t-s} - 1 \right|
\le \sum_{i=1}^{k} \left| \frac{\lambda^\varepsilon_{u_i}(\omega)-\lambda^\varepsilon_{u_{i-1}}(\omega)}{u_i-u_{i-1}} - 1 \right| \le \frac{2\mathsf{T}_\varepsilon \delta}{\mathsf{t}_\mathsf{min}}.
\end{equation*} Thus, \begin{equation*} \log \left( 1 - \frac{2\mathsf{T}_\varepsilon \delta}{\mathsf{t}_\mathsf{min}} \right) \le \log \left( \frac{\lambda^\varepsilon_t(\omega)-\lambda^\varepsilon_s(\omega)}{t-s} \right) \le \log \left( 1+ \frac{2\mathsf{T}_\varepsilon \delta}{\mathsf{t}_\mathsf{min}} \right). \end{equation*}
Using the estimate $|\log(1\pm x)| \le 2|x|$ for $|x| \le 1/2$ \cite[p. 127]{ConvProbMeas}, we get that for $0<\delta \le \mathsf{t}_\mathsf{min}/(4\mathsf{T}_\varepsilon)$, we have \begin{equation*} -\frac{4\mathsf{T}_\varepsilon \delta}{\mathsf{t}_\mathsf{min}} \le \log \left( \frac{\lambda^\varepsilon_t(\omega)-\lambda^\varepsilon_s(\omega)}{t-s} \right) \le \frac{4\mathsf{T}_\varepsilon\delta}{\mathsf{t}_\mathsf{min}} . \end{equation*} Since $s,t$ are arbitrary, \eqref{E:time_deformation_norm} follows. \end{proof}
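The elementary estimate $|\log(1\pm x)| \le 2|x|$ for $|x| \le 1/2$ used above can also be checked numerically; the following snippet is a standalone illustration only.

```python
import math

# Numerical check of |log(1 +/- x)| <= 2|x| on a grid covering (0, 1/2].
xs = [k / 1000.0 for k in range(1, 501)]   # x in (0, 0.5]
ok_plus = all(abs(math.log(1.0 + x)) <= 2.0 * x for x in xs)
ok_minus = all(abs(math.log(1.0 - x)) <= 2.0 * x for x in xs)
```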
\begin{proof}[Proof of Lemma \ref{L:full_state_close}]
We first bound $\sup_{0 \le t \le \mathsf{T}_\varepsilon} | X^\varepsilon_{\lambda^\varepsilon_t}-x(t) |^p$. Write
$X^\varepsilon_{\lambda^\varepsilon_t}-x(t) = \sum_{i=1}^3 \mathsf{L}^{i,\varepsilon}_t$
where \begin{equation*} \begin{aligned} \mathsf{L}^{1,\varepsilon}_t & \triangleq \sum_{n \ge 1} 1_{[\sigma_{n-1}^\varepsilon,\tau_n^\varepsilon)}(\lambda^\varepsilon_t) \cdot \left\{ \frac{\beta}{\alpha_\textsf{on}} + \left(\mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon}- \frac{\beta}{\alpha_\textsf{on}}\right) e^{-\alpha_\textsf{on} (\lambda^\varepsilon_t-\sigma_{n-1}^\varepsilon)} \right\}\\ & \phantom{\triangleq} - \sum_{n \ge 1} 1_{[s_{n-1},t_n)}(t) \cdot \left\{\frac{\beta}{\alpha_\textsf{on}} + \left(\mathsf{x}^{n-1,\textsf{off}}(s_{n-1}) - \frac{\beta}{\alpha_\textsf{on}}\right) e^{-\alpha_\textsf{on} (t-s_{n-1})}\right\},\\ \mathsf{L}^{2,\varepsilon}_t & \triangleq \sum_{n \ge 1} 1_{[\tau_{n}^\varepsilon,\sigma^\varepsilon_n)}(\lambda^\varepsilon_t) \cdot x_\textsf{ref} \medspace e^{-\alpha_\textsf{off}(\lambda^\varepsilon_t-\tau_{n}^\varepsilon)} - \sum_{n \ge 1} 1_{[t_{n},s_n)}(t) \cdot x_\textsf{ref} \medspace e^{-\alpha_\textsf{off}(t-t_{n})}, \\ \mathsf{L}^{3,\varepsilon}_t & \triangleq \sum_{n \ge 1} 1_{[\sigma_{n-1}^\varepsilon,\tau_n^\varepsilon)}(\lambda^\varepsilon_t) \cdot \varepsilon \thinspace e^{-\alpha_\textsf{on} \lambda^\varepsilon_t} \left(I_{\lambda^\varepsilon_t} - I_{\lambda^\varepsilon_t \wedge \sigma_{n-1}^\varepsilon} \right). \end{aligned} \end{equation*}
We start by noting that $\mathsf{L}^{1,\varepsilon}_t(\omega)$, $\mathsf{L}^{2,\varepsilon}_t(\omega)$ are bounded for all $(t,\omega) \in [0,\mathsf{T}_\varepsilon]\times\Omega$. We will now show that for $\omega \in G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}$, $|\mathsf{L}^{1,\varepsilon}_t|$, $|\mathsf{L}^{2,\varepsilon}_t|$ are in fact of order $\delta$. We note that if $\omega \in G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}$, then $\lambda^\varepsilon_t(\omega) \in [\sigma^\varepsilon_{n-1}(\omega),\tau^\varepsilon_n(\omega))$ iff $t \in [s_{n-1},t_n)$ for all $1 \le n \le \mathsf{T}_\varepsilon$. Thus, we have for each $t \in [0,\mathsf{T}_\varepsilon]$, \begin{equation*} \begin{aligned} 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}\thinspace \cdot \mathsf{L}^{1,\varepsilon}_t &= 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}} \cdot \sum_{n \ge 1} 1_{[s_{n-1},t_n)}(t) \cdot \left\{ \left(\mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon}- \frac{\beta}{\alpha_\textsf{on}}\right) e^{-\alpha_\textsf{on} (\lambda^\varepsilon_t-\sigma_{n-1}^\varepsilon)}\right.\\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \left. - \left(\mathsf{x}^{n-1,\textsf{off}}(s_{n-1}) - \frac{\beta}{\alpha_\textsf{on}}\right) e^{-\alpha_\textsf{on} (t-s_{n-1})} \right\}\\ &= 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}} \cdot \sum_{n \ge 1} 1_{[s_{n-1},t_n)}(t) \cdot \left\{ \left(\mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon} - \mathsf{x}^{n-1,\textsf{off}}(s_{n-1})\right) e^{-\alpha_\textsf{on}(\lambda^\varepsilon_t-\sigma^\varepsilon_{n-1})} \right.\\ &\left. \qquad \qquad \qquad \qquad \qquad + \left(\mathsf{x}^{n-1,\textsf{off}}(s_{n-1})-\frac{\beta}{\alpha_\textsf{on}}\right) \left(e^{-\alpha_\textsf{on}(\lambda^\varepsilon_t-\sigma^\varepsilon_{n-1})} - e^{-\alpha_\textsf{on}(t-s_{n-1})}\right) \right\}. \end{aligned} \end{equation*} It is now easy to see that \begin{equation*}
\left| 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}\thinspace \cdot \mathsf{L}^{1,\varepsilon}_t \right| \le 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}} \cdot \sum_{n \ge 1} 1_{[s_{n-1},t_n)}(t) \cdot \left\{ x_\textsf{ref} \thinspace \alpha_\textsf{off} \thinspace \delta + \left(\frac{\beta}{\alpha_\textsf{on}}-x^*\right) \alpha_\textsf{on} \delta\right\}.
\end{equation*} Hence, there exists $K_1>0$ such that for all $t \in [0,\mathsf{T}_\varepsilon]$, \begin{equation}\label{E:cont_state_close_ms_1}
|\mathsf{L}^{1,\varepsilon}_t(\omega)| \le 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}(\omega) K_1 \delta + 1_{\Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}(\omega) K_1. \end{equation} Turning to $\mathsf{L}^{2,\varepsilon}_t$, we note that \begin{equation*}
1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}\thinspace \cdot \mathsf{L}^{2,\varepsilon}_t = 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}\thinspace \cdot \sum_{n \ge 1} 1_{[t_{n},s_n)}(t)\cdot x_\textsf{ref} \medspace \left\{ e^{-\alpha_\textsf{off}(\lambda^\varepsilon_t-\tau_{n}^\varepsilon)} - e^{-\alpha_\textsf{off}(t-t_{n})}\right\},
\end{equation*} which gives \begin{equation*}
\left|1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}\thinspace \cdot \mathsf{L}^{2,\varepsilon}_t \right| \le 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}\thinspace \cdot \sum_{n \ge 1} 1_{[t_{n},s_n)}(t) \cdot x_\textsf{ref} \thinspace \alpha_\textsf{off} \thinspace |\lambda^\varepsilon_t-\tau^\varepsilon_n-t+t_n| \le 2 x_\textsf{ref} \thinspace \alpha_\textsf{off} \thinspace \delta. \end{equation*} Hence, there exists $K_2>0$ such that for all $t \in [0,\mathsf{T}_\varepsilon]$, \begin{equation}\label{E:cont_state_close_ms_2}
|\mathsf{L}^{2,\varepsilon}_t(\omega)| \le 1_{G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}(\omega) K_2 \delta + 1_{\Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}(\omega) K_2. \end{equation} Turning now to $\mathsf{L}^{3,\varepsilon}_t$, we use integration by parts to get $I_t = e^{\alpha_\textsf{on} t} W_t - \alpha_\textsf{on} \int_0^t W_s e^{\alpha_\textsf{on} s}ds$. It now follows that for any $t \in [0,\mathsf{T}_\varepsilon]$, \begin{equation*} \mathsf{L}^{3,\varepsilon}_t = \varepsilon \thinspace \sum_{n \ge 1} 1_{[\sigma_{n-1}^\varepsilon,\tau_n^\varepsilon)}(\lambda^\varepsilon_t) \cdot\left[ W_{\lambda^\varepsilon_t} - e^{-\alpha_\textsf{on} [\lambda^\varepsilon_t - \lambda^\varepsilon_t \wedge \sigma_{n-1}^\varepsilon]} W_{\lambda^\varepsilon_t \wedge \sigma_{n-1}^\varepsilon} -\alpha_\textsf{on} e^{-\alpha_\textsf{on} \lambda_t^\varepsilon} \int_{\lambda_t^\varepsilon\wedge\sigma^\varepsilon_{n-1}}^{\lambda^\varepsilon_t} W_s e^{\alpha_\textsf{on} s} ds \right], \end{equation*} whence
$|\mathsf{L}^{3,\varepsilon}_t | \le \varepsilon \left( 2 + \alpha_\textsf{on}\right) \mathsf{T}_\varepsilon \sup_{0 \le t \le \mathsf{T}_\varepsilon} |W_t|$. Recalling \eqref{E:cont_state_close_ms_1} and \eqref{E:cont_state_close_ms_2}, we see that for $p>0$, there exists $c_p>0$ such that the first line of \eqref{E:full_state_close_ms} holds.
To bound $\sup_{0 \le t \le \mathsf{T}_\varepsilon} | Y^\varepsilon_{\lambda^\varepsilon_t}-y(t) |^p$, note that for $\omega \in G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}$, we have $Y^\varepsilon_{\lambda^\varepsilon_t}=y(t)$ for all $t \in [0,\mathsf{T}_\varepsilon]$. Consequently,
$\sup_{0 \le t \le \mathsf{T}_\varepsilon} |Y^\varepsilon_{\lambda^\varepsilon_t}-y(t)|^p = 1_{\Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}} \cdot \sup_{0 \le t \le \mathsf{T}_\varepsilon} |Y^\varepsilon_{\lambda^\varepsilon_t}-y(t)|^p \le 1_{\Omega\setminus G_{\mathsf{T}_\varepsilon}^{\varepsilon,\delta}}$. \end{proof}
To state and prove Lemmas \ref{L:bnminus} and \ref{L:bnplus}, we will need some notation. For $n \in \mathbb{N}$, $\xi \in (0,x_\textsf{ref})$, we let
$a_{n}(t;\xi) \triangleq 1_{[n-1,\infty)}(t) \cdot \left\{\beta/\alpha_\textsf{on} + \left( \xi - \beta/\alpha_\textsf{on} \right) e^{-\alpha_\textsf{on}(t-(n-1))}\right\}$.
It is now easily checked that for $t \ge n-1$ and $\varkappa \in (0,1)$ small enough, \begin{equation}\label{E:an_envelope} a_n(t;x^*)-\varkappa \le a_n(t;x^*-\varkappa) \le a_n(t;x^*) \le a_n(t;x^* + \varkappa) \le a_n(t;x^*) + \varkappa. \end{equation}
We will also find it helpful to express the continuous square-integrable martingale $I_t = \int_0^t e^{\alpha_\textsf{on} u}dW_u$ as a time-changed Brownian motion. The quadratic variation process of $I$ given by
$\left\langle I \right\rangle_t = \int_0^t e^{2 \alpha_\textsf{on} u} du = (e^{2 \alpha_\textsf{on} t} - 1)/(2 \alpha_\textsf{on})$ for $t \ge 0$
is strictly increasing with $\lim_{t \to \infty} \left\langle I \right\rangle_t=\infty$. It therefore follows \cite[Theorem 3.4.6]{KS91} that the process $V = \{ V_s, \mathscr{G}_s:0 \le s < \infty\}$ defined by
$V_s \triangleq I_{T(s)}$, $\mathscr{G}_s \triangleq \mathscr{F}_{T(s)}$ where $T(s) \triangleq \inf\{t \ge 0:\left\langle I \right\rangle_t > s\}= [\log \left( 1 + 2 \alpha_\textsf{on} s \right)]/(2 \alpha_\textsf{on})$,
is a standard one-dimensional Brownian motion, and further, that
$I_t = V_{\left\langle I \right\rangle_t}$ for all $t \ge 0$.
Below, we will use the fact that if $\omega \in G_{n-1}^{\varepsilon,\delta}$ (where $n \in \mathbb{N}$), then $\sigma_{n-1}^\varepsilon=n-1$, and
$|\mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon}(\omega)-x^*| \le x_\textsf{ref} \thinspace \alpha_\textsf{off} \thinspace |\tau_{n-1}^\varepsilon(\omega)-t_{n-1}| \le x_\textsf{ref} \thinspace \alpha_\textsf{off} \thinspace \delta$.
Set \begin{equation}\label{E:mu_definition} \mu \triangleq \beta - (\alpha_\textsf{on} + \alpha_\textsf{off}) x_\textsf{ref}, \qquad \varkappa \triangleq x_\textsf{ref} \thinspace \alpha_\textsf{off} \thinspace \delta. \end{equation} Note that, on account of the upper bound on $\alpha_\textsf{off}$ in \eqref{E:alpha_off_assumption_real}, we have $\mu>0$.
\begin{proof}[Proof of Lemma \ref{L:bnminus}] We start by noting that for $n \ge 1$, \begin{equation*} \mathsf{X}^{n,\varepsilon,\textsf{on}}_t \cdot 1_{G_{n-1}^{\varepsilon,\delta}} (\omega) = 1_{G_{n-1}^{\varepsilon,\delta}} (\omega) \cdot 1_{[n-1,\infty)}(t) \cdot \left\{ a_n\left( t; \mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon} \right) + \varepsilon e^{-\alpha_\textsf{on} t} \left( I_t - I_{n-1} \right) \right\}. \end{equation*} Using the fact that for $\omega \in G_{n-1}^{\varepsilon,\delta}$, we have $\mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon}(\omega) \in [x^*-\varkappa,x^*+\varkappa]$, together with \eqref{E:an_envelope}, we get \begin{multline*} B_n^{\varepsilon,\delta,-} \subset \left\{ \omega \in G_{n-1}^{\varepsilon,\delta}: \sup_{t \in [n-1,t_n-\delta]} \left( a_n\left( t; \mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon} \right) + \varepsilon e^{-\alpha_\textsf{on} t} \left( I_t - I_{n-1} \right) \right) \ge x_\textsf{ref} \right\}\\ \subset \left\{ \omega \in G_{n-1}^{\varepsilon,\delta}: \sup_{t \in [n-1,t_n-\delta]} e^{-\alpha_\textsf{on} t} \left( I_t - I_{n-1} \right) \ge \frac{x_\textsf{ref} - a_n(t_n-\delta; x^*) - \varkappa }{\varepsilon} \right\}\\ \subset \left\{ \omega \in \Omega: \sup_{t \in [n-1,t_n-\delta]} e^{-\alpha_\textsf{on} t} \left( I_t - I_{n-1} \right) \ge \frac{\mu \delta}{\varepsilon} \right\} \end{multline*} where the latter set inclusion uses the fact that $x_\textsf{ref} - a_n(t_n-\delta;x^*) - \varkappa \ge \delta \mu$. We now easily get that \begin{equation*} \mathbb{P} \left( B_n^{\varepsilon,\delta,-} \right) \le \mathbb{P} \left\{ \sup_{t \in [n-1,t_n-\delta]} \left( V_{\left\langle I \right\rangle_t} - V_{\left\langle I \right\rangle_{n-1}} \right) \ge \frac{\mu \delta e^{\alpha_\textsf{on}(n-1)}}{\varepsilon} \right\}. \end{equation*} Letting
$u_n \triangleq \left\langle I \right\rangle_{n-1}$, $q \triangleq \left\langle I \right\rangle_t-u_n$, $v_n \triangleq \left\langle I \right\rangle_{t_n-\delta}$,
and noting that $\hat{V}_q \triangleq V_{u_n + q} - V_{u_n}$ is a Brownian motion, we get \begin{equation*} \mathbb{P} \left( B_n^{\varepsilon,\delta,-} \right) \le \mathbb{P} \left\{ \sup_{q \in [0,v_n-u_n]} \hat{V}_q \ge \frac{\mu \delta e^{\alpha_\textsf{on}(n-1)}}{\varepsilon} \right\} = 2\mathscr{T} \left( \frac{\mu \delta}{\varepsilon \sqrt{ \frac{e^{2 \alpha_\textsf{on}(t^*-\delta)} - 1} {2 \alpha_\textsf{on}} }} \right), \end{equation*} where we have explicitly computed $u_n$, $v_n$, and also used \cite[Remark 2.8.3]{KS91}. Since $e^{2\alpha_\textsf{on}(t^*-\delta)}-1 \le e^{2 \alpha_\textsf{on} t^*}$, we easily get \eqref{E:prob_bnminus} with $K_- \triangleq \sqrt{2 \alpha_\textsf{on}} e^{-\alpha_\textsf{on} t^*} \mu$. \end{proof}
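For completeness, the computation of $u_n$ and $v_n$ suppressed at the end of the preceding proof is the following. Since $t_n-(n-1)=t^*$ for every $n$,
\begin{equation*}
v_n - u_n = \left\langle I \right\rangle_{t_n-\delta} - \left\langle I \right\rangle_{n-1} = \frac{e^{2 \alpha_\textsf{on}(t_n-\delta)} - e^{2 \alpha_\textsf{on}(n-1)}}{2 \alpha_\textsf{on}} = e^{2 \alpha_\textsf{on}(n-1)} \thinspace \frac{e^{2 \alpha_\textsf{on}(t^*-\delta)} - 1}{2 \alpha_\textsf{on}},
\end{equation*}
and by the reflection principle \cite[Remark 2.8.3]{KS91}, $\mathbb{P}\{\sup_{q \in [0,v_n-u_n]} \hat{V}_q \ge a\} = 2\mathscr{T}(a/\sqrt{v_n-u_n})$. Applying this with $a = \mu \delta e^{\alpha_\textsf{on}(n-1)}/\varepsilon$, the factor $e^{\alpha_\textsf{on}(n-1)}$ cancels, which is why the resulting bound is independent of $n$.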
\begin{proof} [Proof of Lemma \ref{L:bnplus}] Using the fact that for $\omega \in G_{n-1}^{\varepsilon,\delta}$, we have $\mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon}(\omega) \in [x^*-\varkappa,x^*+\varkappa]$, together with \eqref{E:an_envelope}, we get \begin{multline*} B_n^{\varepsilon,\delta,+} = \left\{\omega \in G_{n-1}^{\varepsilon,\delta}: \sup_{t \in [n-1,t_n+\delta]} \left( a_n(t;\mathsf{X}^{n-1,\varepsilon,\textsf{off}}_{\sigma_{n-1}^\varepsilon}) + \varepsilon e^{-\alpha_\textsf{on} t} (I_t - I_{n-1}) \right) < x_\textsf{ref} \right\} \\ \subset \left\{ \omega \in G_{n-1}^{\varepsilon,\delta}: a_n(t_n+\delta;x^*) - \varkappa + \varepsilon e^{-\alpha_\textsf{on} (t_n + \delta)} \left(I_{t_n+\delta}-I_{n-1}\right) < x_\textsf{ref} \right\} \\ \subset \left\{ \omega \in \Omega: e^{-\alpha_\textsf{on} (t_n + \delta)} \left(I_{t_n+\delta}-I_{n-1}\right) < \left( \frac{x_\textsf{ref}+\varkappa-a_n(t_n+\delta;x^*)}{\varepsilon} \right) \right\}. \end{multline*} Recalling Assumption \ref{A:assumptions_real}, a bit of computation reveals that if we let \begin{equation*}\label{E:delta_zero} \delta_+ \triangleq \frac{1}{\alpha_\textsf{on}} \log \left[ \frac{2 \beta - 2 \alpha_\textsf{on} x_\textsf{ref}}{\beta - \alpha_\textsf{on} x_\textsf{ref} + \alpha_\textsf{off} x_\textsf{ref}} \right] > 0, \end{equation*} then, for $0 < \delta <\delta_+$, we have \begin{equation*} \frac{x_\textsf{ref}+\varkappa-a_n(t_n+\delta;x^*)}{\varepsilon} < -\left( \frac{\mu}{2} \right) \frac{\delta}{\varepsilon} < 0. \end{equation*} Since $e^{-\alpha_\textsf{on}(t_n+\delta)} (I_{t_n+\delta}-I_{n-1}) \sim \mathscr{N} \left(0, \frac{1-e^{-2\alpha_\textsf{on}(t^*+\delta)}}{2 \alpha_\textsf{on}} \right)$, a straightforward calculation yields that for $0<\delta<\delta_+$, \eqref{E:prob_bnplus} holds with $K_+ \triangleq \mu\sqrt{\alpha_\textsf{on}/2}$. \end{proof}
\end{document} |
\begin{document}
\title{Halfcanonical Gorenstein curves of codimension four}
\begin{abstract}
Recent work of Schenck, Stillman and Yuan \cite{schenck2020calabiyau} outlines all possible Betti tables for Artin Gorenstein algebras $A$ with $\mathrm{reg}(A) = 4 = \mathrm{codim}(A)$. We populate the second half of this list with examples of stable curves, and ask whether there are further possible constructions. The problem of deforming between curves with the same Hilbert series but different Betti tables is the subject of ongoing work, but we solve one case here: a deformation (due to Jan Stevens) between a reducible curve corresponding to Betti table type \hyperref[2.7]{2.7} in \cite{schenck2020calabiyau} and the curve obtained as the intersection of a del Pezzo surface of degree 5 and a cubic hypersurface. \end{abstract}
\section{Introduction} Gorenstein rings are a widely studied subset of Cohen--Macaulay rings, first introduced by Grothendieck in a 1961 seminar. Since Buchsbaum and Eisenbud's 1977~\cite{10.2307/2373926} structure theorem on Gorenstein rings of codimension 3, much work has been done on the case of Gorenstein rings of codimension 4, including Reid's general structure theorem~\cite{reid2015gorenstein}. Recent work on Gorenstein rings involves the study of Gorenstein Calabi--Yau 3-folds, hereafter referred to as GoCY 3-folds. Calabi--Yau 3-folds play a vital role in the study of string theory \cite{candelas1985vacuum}, \cite{candelas1991pair}. Following work of Coughlan, Golebiowski, Kapustka and Kapustka~\cite{coughlan2016arithmetically} which presented a list of nonsingular GoCY 3-folds, Schenck, Stillman and Yuan~\cite{schenck2020calabiyau} outlined all possible Betti tables for Artin Gorenstein algebras with Castelnuovo--Mumford regularity 4 and codimension 4. More recently, Kapustka, Kapustka, Ranestad, Schenck, Stillman and Yuan \cite{kapustka2021quaternary} exhibit liftings to GoCY 3-folds corresponding to types of nondegenerate quartic. Other recent work on GoCY 3-folds can be seen in \cite{brown2017polarized}, \cite{brown2019gorenstein}. The possible Betti tables in \cite{schenck2020calabiyau} are split into two groups: eight Betti tables corresponding to the 11 GoCY 3-folds outlined in~\cite{coughlan2016arithmetically}, and eight which cannot correspond to a nonsingular GoCY 3-fold. \\
Our results follow on from~\cite{schenck2020calabiyau}, and focus on those Betti tables which cannot correspond to nonsingular GoCY 3-folds. Our initial goal is to populate the list with concrete examples of stable curves in $\mathbb{P}^5$ corresponding to the given Betti tables. From the restrictions on Castelnuovo--Mumford regularity, such curves are halfcanonical, meaning $\omega_C=\mathcal{O}_C(2A)$, where $A$ is the hyperplane class. Our results are summarised in Table \ref{tab:table11}. MAGMA code for all eight types can be found at \newline $\hspace*{5mm}$ \url{https://sites.google.com/view/patience-ablett/msc-project}. \newline Note that type 2.4 is described in \cite{schenck2020calabiyau}. We then seek to answer the question of whether these curves are the only possible constructions, and whether we can construct flat deformations between curves in the same Hilbert scheme. Partial results have been achieved here: for type \hyperref[2.7]{2.7} we outline a flat deformation to a curve given by a del Pezzo surface of degree five intersecting a cubic hypersurface. \\
Our method to construct curves relies on first identifying the possible quadric generators in $I_C$. For types \hyperref[2.1]{2.1} and \hyperref[2.2]{2.2} these quadrics are necessarily of the stated form. For types \hyperref[2.5]{2.5} and \hyperref[2.6]{2.6} we can show that we have identified all possible quadrics in the case that they define a Koszul algebra, but the question remains of whether there is a possible set of non-Koszul quadratic generators. Our construction techniques use ideas from liaison theory, which began with work of Peskine and Szpiro \cite{peskine1974liaison}. In particular the following result is used: \begin{theorem}[\cite{migliore2002liaison}, page 77]\label{liaison}
Let $X_1$, $X_2 \subset \mathbb{P}^n$ be projectively Cohen--Macaulay subschemes of codimension $r$. Then if $X=X_1 \cup X_2$ is Gorenstein it follows that $X_1 \cap X_2$ is also Gorenstein, and of codimension $r+1$. \end{theorem} In the situation of the above theorem, we say that $X_1$ and $X_2$ are geo\-metrically G-linked by $X$, and that $X_1$ is residual to $X_2$ in $X$. Suppose more generally that $X_1 \cup X_2 \subset X$. Then if $(I_X:I_{X_1})=I_{X_2}$ and $(I_X:I_{X_2})=I_{X_1}$ we say that $X_1$ and $X_2$ are algebraically G-linked by $X$, and again $X_1$ is residual to $X_2$ in $X$~\cite[pages 62--64]{migliore2002liaison}. \\
Our constructions also rely on the Tom and Jerry formats as seen in \cite{brown2012fano}, \cite{brown2018tutorial} and \cite{papadakis2001gorenstein}. In this paper, for a given ideal $I$ we use $\text{Tom}_i$ to refer to a skew-symmetric matrix with $a_{kl} \in I$ for $k,l \neq i$ and other elements general. Similarly, we define $\text{Tom}_{ij}$ to have $a_{kl} \in I$ for $k,l \notin \{i,j\}$ and other elements general. On the other hand, $\text{Jer}_{ij}$ refers to a skew matrix with $a_{kl} \in I$ for $k$ or $l \in \{i,j\}$ and other elements general. \\
For simplicity we focus mostly on stable curves $C$ given by $C_1 \cup C_2$ with $C_1$, $C_2$ nonsingular and irreducible, meeting transversally in $d$ points. Note that the inclusion $i\colon C_1 \rightarrow C$ is a finite morphism. We may therefore use the following proposition:
\begin{proposition}[\cite{MR0463157}, Ch.~III, Ex.~7.2]
Let $\pi\colon Y \rightarrow X$ be a finite morphism of projective schemes, with $\dim Y=\dim X$. Then $\omega_Y=\textup{Hom}_{\mathcal{O}_X}(\pi_*\mathcal{O}_Y,\omega_X)$. \end{proposition} It follows that in the case of our inclusion, we have $\omega_{C_1}=\text{Hom}_{\mathcal{O}_C}(\mathcal{O}_{C_1},\omega_C)$. Moreover, since $C_1$ is assumed to be nonsingular, it is normal and $\omega_{C_1}=\mathcal{O}_{C_1}(K_{C_1})$. Since $\omega_C=\mathcal{O}_C(2A)$, we are considering $\mathcal{O}_C$-module homomorphisms from $\mathcal{O}_{C_1}$ to $\mathcal{O}_C(2A)$. Such a homomorphism is determined by the image of $1$. Indeed, $1$ can be mapped to any element of $\mathcal{O}_C(2A)$ annihilated by $I_{C_2}$. Therefore $\omega_{C_1}=\mathcal{O}_{C_1}(2A_1-D)$, where $A_1$ is the hyperplane class on $C_1$ and $D$ is the locus of double points. \\
This paper is part of a work in progress with Miles Reid, Jan Stevens and Stephen Coughlan. In particular we hope to publish further work on the question of deformations. Table \ref{tab:table12} outlines which curves lie in the same Hilbert scheme and which we would therefore hope to construct deformations between.
\renewcommand{\arraystretch}{1.3} \begin{table} \begin{center}
\begin{tabular}[c]{|l|l|l|} \hline
\multicolumn{3}{| c |}{Classifying curves by genus and degree}\\
\hline
\rule{0pt}{35pt}\pbox{2.5cm}{Degree\\} & \pbox{2.5cm}{Genus\\} & \pbox{2.5cm}{Corresponding \\ Betti table\\}\\
\hline
14 & 15 & CGKK 1 \\
15 & 16 & CGKK 2, SSY 2.7, SSY 2.8 \\
16 & 17 & CGKK 3, SSY 2.3, SSY 2.4, SSY 2.6 \\
17 & 18 & CGKK 4, CGKK 5, CGKK 6, SSY 2.2, SSY 2.5 \\
18 & 19 & CGKK 7, CGKK 8, SSY 2.1 \\
19 & 20 & CGKK 9, CGKK 10 \\
20 & 21 & CGKK 11 \\
\hline \end{tabular} \end{center} \caption{A summary of the Hilbert scheme in which each curve lies.} \label{tab:table12} \end{table}
\section{Examples of stable curves}\label{results} In this section we present a series of stable curves with free resolutions corresponding to the Betti tables of type 2 in~\cite{schenck2020calabiyau}. We begin by outlining two possible constructions for type \hyperref[2.6]{2.6}, which use techniques from liaison theory and Brown and Reid's Tom and Jerry format. We then outline \hyperref[2.7]{2.7} and describe the flat deformation from type \hyperref[2.7]{2.7} to CGKK 2 \cite{coughlan2016arithmetically}, which lies in the same Hilbert scheme. We finally present type \hyperref[2.3]{2.3}, since this is a somewhat different case which uses rational scrolls. Other constructions are similar to types \hyperref[2.6]{2.6} and \hyperref[2.7]{2.7} and we therefore relegate them to an appendix.
\renewcommand{\arraystretch}{1.3} \begin{table}
\begin{tabular}[c]{|c||c|c|c|c|} \hline
\multicolumn{5}{| c |}{Nodal curve models for each type}\\
\hline
\rule{0pt}{35pt}\pbox{2.5cm}{Betti table\\} & \pbox{2.5cm}{Irreducible \\ components\\} & \pbox{2.5cm}{Degrees of \\ components\\} & \pbox{2.5cm}{Genera of\\ components\\} & \pbox{2.5cm}{Number of \\double points\\}\\
\hline
Type 2.1 & $C_1 \cup C_2$ & 12, 6 & 10, 4 & 6 \\
Type 2.2 & $C_1 \cup C_2$ & 11, 6 & 9, 4 & 6\\
Type 2.3 & $C_1 \cup C_2$ & 9, 7& 7, 5& 6\\
Type 2.5 & $C_1 \cup C_2$ & 13, 4 & 12, 3 & 4\\
Type 2.6 & $C_1 \cup C_2$ & 12, 4 or 8, 8 & 11, 3 or 7, 7 & 4 \\
Type 2.7 & $C_1 \cup C_2$ & 11, 4 & 10, 3 & 4 \\
Type 2.8 & $C_1 \cup C_2 \cup C_3$ & 7, 4, 4 & 4, 3, 3 & 8\\
\hline \end{tabular} \caption{A summary of our constructions.} \label{tab:table11} \end{table}
\subsection{Type 2.6}\label{2.6} We first construct a curve in $\mathbb{P}^5_{\left<x_0\dots x_5\right>}$ corresponding to Schenck, Stillman and Yuan's type 2.6. \begin{table}[h!]
\[\begin{array}{c|ccccc}
 & 0 & 1 & 2 & 3 & 4 \\ \hline
0 & 1 & - & - & - & - \\
1 & - & 4 & 4 & 1 & - \\
2 & - & 4 & 8 & 4 & - \\
3 & - & 1 & 4 & 4 & - \\
4 & - & - & - & - & 1
\end{array} \] \caption*{Type 2.6~\cite{schenck2020calabiyau}} \label{tab:table5} \end{table} The curve $C$ has degree 16, and due to the assumptions on Castelnuovo--Mumford regularity it is halfcanonical with arithmetic genus 17. Let $J=(Q_1,Q_2,Q_3,Q_4)$ be the ideal of the quadric relations and $S=k[x_0,\dots,x_5]$. Then $R=S/J$ has a minimal free resolution with linear part corresponding to the first row of the Betti table. We can use this to rule out possible ideals $J$ with too many or too few linear syzygies. Note that we may also have syzygies of higher order, but focusing on the linear syzygies is often enough to find appropriate quadric relations. For types 2.5 and 2.6 we obtain possible quadric relations through an analysis of the case where $R$ is a Koszul algebra, detailed at the end of this section. This raises the question of whether there exist appropriate quadric relations which do not define a Koszul algebra.
It can be shown that the quadrics $\{x_0x_5,x_1x_5,x_2x_5,Q_4\}$, where $Q_4$ is in the ideal $(x_0,x_1,x_2)\backslash(x_5)$, have four linear first syzygies and one linear second syzygy. In this case our curve $C$ breaks into two pieces: $C_1 \subset \mathbb{P}^4_{\left<x_0\dots x_4\right>}$ and $C_2 \subset \mathbb{P}^2_{\left<x_3:x_4:x_5\right>}$, meeting transversally in $d$ points. For simplicity we assume these curves are nonsingular and irreducible. It follows that $C_2$ is a plane curve defined by an irreducible cubic or quartic. \\ Recall that \begin{equation}
\mathcal{O}_{C_1}(K_{C_1}) = \omega_{C_1}=\mathcal{O}_{C_1}(2A_1-D). \end{equation} It follows that $K_{C_1}=2A_1-D$, where $A_1$ is the hyperplane class in $C_1$ and $D$ is the locus of double points of $C_1 \cup C_2$. Hence, deg $K_{C_1} = 2g_1-2=2d_1-d$ and consequently \begin{equation}\label{points1}
d_1=g_1-1+\tfrac{d}{2}. \end{equation}
Similarly for $C_2$ we have \begin{equation}\label{points2}
d_2=g_2-1+\tfrac{d}{2}. \end{equation}
If $C_2$ were a nonsingular cubic then it would intersect the line $x_5=0$ in at most 3 points. However, from (\ref{points2}) we would obtain $d=6$, a contradiction. Consequently $C_2$ is a nonsingular plane quartic, of degree 4 and genus 3. It follows that the double locus of $C$ consists of 4 points, and we expect $C_1$ to be a curve of degree 12 and genus 11. \\
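To make the count explicit: substituting $d_2=4$, $g_2=3$ into (\ref{points2}), and then $d_1=16-4$ into (\ref{points1}), gives
\begin{align*}
4 &= 3-1+\tfrac{d}{2} \implies d=4, \\
12 = 16-4 &= g_1-1+\tfrac{4}{2} \implies g_1=11.
\end{align*}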
Let $\Gamma$ be the curve $C_1 \cup l_1$ in $\mathbb{P}^4$, with $l_1$ defined by $x_0=x_1=x_2=0$. Then from our earlier discussion of nodal curves we have $K_{\Gamma}|_{C_1}=K_{C_1}+D$ where $D$ is the divisor of the double locus of 4 points on $C_1$. Considering $D$ as the divisor of the 4 points on $l_1$, we also have $K_{\Gamma}|_{l_1}=K_{l_1}+D=-2H+D$ where $H$ is the hyperplane class. It follows that $\mathcal{O}_{l_1}(K_{\Gamma})=\mathcal{O}_{l_1}(-2+4)=\mathcal{O}_{l_1}(2)$, so $\Gamma$ is halfcanonical, hence Gorenstein. Thus $\Gamma$ is defined by Pfaffians~\cite{10.2307/2373926} and has degree 13. According to the Betti table we need one more quadric relation and 4 cubic relations so it follows that $\Gamma$ should be defined by the $4 \times 4$ Pfaffians of a $5 \times 5$ skew-symmetric matrix. We now describe some constraints on the matrix to ensure $l_1 \subset \Gamma$, and so that it defines four cubic Pfaffians and one quadric. \\
Consider the matrix \begin{equation}
N = \begin{pmatrix}
& a_{12} & a_{13} & a_{14} & a_{15} \\
& & a_{23} & a_{24} & a_{25} \\
& & & a_{34} & a_{35} \\
& & & & a_{45} \\
\end{pmatrix}. \end{equation} Further, let the degrees of the $a_{ij}$ be given by \begin{equation}
N = \begin{pmatrix}
& 1 & 1 & 1 & 2 \\
& & 1 & 1 & 2 \\
& & & 1 & 2 \\
& & & & 2 \\
\end{pmatrix}. \end{equation} Then the $4 \times 4$ Pfaffians are of degree $(2,3,3,3,3)$. Let $I$ be the ideal of the $4 \times 4$ Pfaffians of $N$. Then \begin{equation} \begin{split}
I = (& a_{12}a_{34}-a_{13}a_{24}+a_{14}a_{23}, \\
& a_{12}a_{35}-a_{13}a_{25}+a_{15}a_{23}, \\ & a_{12}a_{45}-a_{14}a_{25}+a_{15}a_{24}, \\ & a_{13}a_{45}-a_{14}a_{35}+a_{15}a_{34}, \\ & a_{23}a_{45}-a_{24}a_{35}+a_{25}a_{34}).
\end{split} \end{equation} It follows that for any $\{k,l\} \subset \{1,\dots,5\}$, $k\neq l$, setting $a_{ij} \in J=(x_0,x_1,x_2)$ if $i \in \{k,l\}$ or $j \in \{k,l\}$ ensures $I \subset J$. The remaining elements may be general in the coordinates of $\mathbb{P}^4$. In other words, $N$ is a variant of $\text{Jer}_{kl}$. Similarly, if we set $a_{ij} \in J$ for $i,j \neq k$, we ensure $I \subset J$, in which case $N$ is a variant of $\text{Tom}_k$.\\
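For instance, with $\{k,l\}=\{4,5\}$ (a $\text{Jer}_{45}$ variant) we place $a_{14},a_{15},a_{24},a_{25},a_{34},a_{35},a_{45}$ in $J$, and every monomial of every Pfaffian acquires such a factor, since each Pfaffian omits only one of the five indices; for example
\begin{equation*}
a_{12}a_{34}-a_{13}a_{24}+a_{14}a_{23} \in J
\end{equation*}
because $a_{34}$, $a_{24}$ and $a_{14}$ all lie in $J$.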
Defining $\Gamma$ in this way ensures it breaks into two irreducible nonsingular components, our line $l_1$ and curve $C_1$. Moreover, $l_1$ and $C_1$ intersect in 4 points which define a quartic, $q_4$. Mapping $q_4$ into $\mathbb{P}^2$ by adding arbitrary terms in $(x_5)$ defines a nonsingular quartic curve $C_2$. The union $C=C_1 \cup C_2 \subset \mathbb{P}^5$ is a codimension 4 Gorenstein curve corresponding to Betti table 2.6. A computer algebra package such as MAGMA can be used to verify that each curve is nonsingular and that $C_1$ and $C_2$ intersect transversally. We can also use MAGMA to compute the free resolution as a sanity check. \\
Now instead suppose that the four quadrics are given by $(x_0,x_1) \cap (x_2,x_3)$, which again have the correct minimal free resolution. It follows that $C$ breaks up into $C_1 \subset \mathbb{P}^3_{\left<x_2\dots x_5\right>}$ and $C_2 \subset \mathbb{P}^3_{\left<x_0:x_1:x_4:x_5\right>}$. We may define $C_1$ and $C_2$ in the following way. Consider the complete intersection $X_1=V(F_1,F_2) \subset \mathbb{P}^3_{\left<x_2\dots x_5\right>}$ given by cubics $F_1$, $F_2$ containing the line $l_1 \colon x_2=x_3=0$ in $\mathbb{P}^3$. Such cubics have the form \begin{equation}
F_1=x_2P_3+x_3P_4, \quad F_2=x_2Q_3 + x_3Q_4, \end{equation} with $P_3,Q_3,P_4,Q_4$ quadratic forms in $k[x_2,\dots,x_5]$. Then $X_1$ breaks into two irreducible components, namely the line $l_1$ and the curve $C_1$, defined by $(F_1,F_2,P_3Q_4-P_4Q_3)$. The curve $C_1$ is nonsingular with degree 8 and genus 7. Similarly we are able to define another (3,3) complete intersection $X_2=V(F_3,F_4) \subset \mathbb{P}^3_{\left<x_0:x_1:x_4:x_5\right>}$ containing the line $l_2 \colon x_0=x_1=0$ in $\mathbb{P}^3$: \begin{equation}
F_3=x_0P_1+x_1P_2, \quad F_4=x_0Q_1+x_1Q_2. \end{equation} Here $P_1,P_2,Q_1,Q_2$ are quadratic forms in $k[x_0,x_1,x_4,x_5]$. Again $X_2$ breaks into two irreducible components with $C_2$ defined by $(F_3,F_4,P_1Q_2-P_2Q_1)$, and $C_2$ is nonsingular with degree 8 and genus 7. It follows from (\ref{points1}), (\ref{points2}) that $C_1$ and $C_2$ meet in four points, which lie on the line $\mathbb{P}^1_{\left<x_4:x_5\right>}$. We outline constraints on the $P_i$, $Q_j$ so that this occurs. If \begin{equation} \begin{split}
P_3|_{x_2=x_3=0}&=P_1|_{x_0=x_1=0}, \\ P_4|_{x_2=x_3=0}&=P_2|_{x_0=x_1=0}, \\ Q_3|_{x_2=x_3=0}&=Q_1|_{x_0=x_1=0}, \\ Q_4|_{x_2=x_3=0}&=Q_2|_{x_0=x_1=0}, \end{split} \end{equation} then \begin{equation}
R(x_4,x_5)=(P_3Q_4-P_4Q_3)|_{x_2=x_3=0}=(P_1Q_2-P_2Q_1)|_{x_0=x_1=0}. \end{equation} In this situation, $C_1$ and $C_2$ meet in exactly 4 points defined by the quartic $R$ in $\mathbb{P}^1_{\left<x_4:x_5\right>}$. Their union is a Gorenstein codimension 4 curve with Betti table 2.6. \\
The candidates for the quadric generators arise from work of Mantero--Mastroeni \cite{mantero2021betti}. Assuming $R=S/J$ is Koszul, we analyse $J=(Q_1,Q_2,Q_3,Q_4)$ in the context of different heights. If $\text{ht} J=4$ then $J$ is a complete intersection of four quadrics and does not have linear syzygies. If $\text{ht} J=1$ then it is given as $zI$ where $z$ is a linear form and $I$ is a complete intersection of linear forms~\cite{mantero2021betti}. Thus for such $J$, $R$ would not correspond to the type 2.6 Betti table, since there would be too many linear syzygies. Moreover, for $\text{ht} J=3$, $R$ is a Koszul almost complete intersection and thus has at most two linear syzygies~\cite{mastroeni2018koszul}. Hence, $J$ must have height 2. Mantero--Mastroeni show that for a Koszul algebra of four quadrics with $\text{ht}J=2$ to have the required Betti table, it must have multiplicity $e(R)=2$. \begin{theorem}[Mantero--Mastroeni~\cite{mantero2021betti}]
Let $R$ be Koszul with $\textup{ht}J=2=e(R)$. Then $J$ has one of the following possible forms: \\
\textup{(I)} $(x_0,x_1) \cap (x_2,x_3)$ or $(x_0^2,x_0x_1,x_1^2,x_0x_2+x_1x_3)$\\
\textup{(II)} $(a_1x_0,a_2x_0,a_3x_0,q)$ where the $a_i$ are independent linear forms and $q \in (a_1,a_2,a_3)\backslash(x_0)$ \\
\textup{(III)} $(a_1x_0,a_2x_0,a_3x_0,q)$ where the $a_i$ are independent linear forms and $q$ is a non-zero divisor modulo $(a_1x_0,a_2x_0,a_3x_0)$. \\
\end{theorem} Case (III) does not correspond to a Betti table with four linear first syzygies and one linear second syzygy, so we are in case (I) or case (II). In case (I) the latter option is not reduced, so we are restricted to the case $J=(x_0,x_1) \cap (x_2,x_3)$.
\subsection{Type 2.7}\label{2.7} The following construction is an example of a codimension 4 Gorenstein curve with Betti table as in type 2.7. Any such curve has degree 15 and arithmetic genus 16. \\ \begin{table}[h!]
\[\begin{array}{c|ccccc}
 & 0 & 1 & 2 & 3 & 4 \\ \hline
0 & 1 & - & - & - & - \\
1 & - & 5 & 5 & 1 & - \\
2 & - & 1 & 2 & 1 & - \\
3 & - & 1 & 5 & 5 & - \\
4 & - & - & - & - & 1
\end{array} \] \caption*{Type 2.7~\cite{schenck2020calabiyau}} \label{tab:table6} \end{table}
Consider the quadrics \begin{equation}
Q_1=x_0x_5, \quad Q_2=x_1x_5, \quad Q_3=x_2x_5, \quad Q_4, \quad Q_5, \end{equation} with $Q_4,Q_5 \in (x_0,x_1,x_2)$. Then $J=(Q_1,Q_2,Q_3,Q_4,Q_5)$ has five linear syzygies as required. Thus, $C$ breaks up into two curves, namely $C_1 \subset \mathbb{P}^4_{\left<x_0\dots x_4\right>}$ and $C_2 \subset \mathbb{P}^3_{\left<x_3:x_4:x_5\right>}$. Again, $C_2$ must be defined by a nonsingular quartic with degree 4 and genus 3, and the double locus of $C$ is 4 points. We obtain from (\ref{points1}) that $C_1$ is degree 11 and genus 10. Let $l_1$ be the line $x_0=x_1=x_2=0$ in $\mathbb{P}^4$. \\
Once more $\Gamma=C_1 \cup l_1$ is halfcanonical, since $\mathcal{O}_{l_1}(K_{\Gamma})=\mathcal{O}_{l_1}(2)$ as before, and hence Gorenstein. Since $\Gamma$ has degree 12 and we need one more cubic relation, we define $\Gamma$ as the complete intersection of two quadrics, $Q_4$ and $Q_5$, and a cubic, $F$, all in $(x_0,x_1,x_2)$. It follows that $C_1$ and $l_1$ are the two irreducible components of $\Gamma$, and $C_1$ is nonsingular. The curves $C_1$ and $l_1$ meet in 4 points defining a quartic, and mapping this quartic into $\mathbb{P}^2$, adding arbitrary terms in $(x_5)$, defines a nonsingular quartic curve $C_2$. The union of these two curves is Gorenstein of codimension 4, with Betti table as prescribed. \\
Moreover, courtesy of Jan Stevens, we can construct a deformation to CGKK 2~\cite{coughlan2016arithmetically}, which lies in the same Hilbert scheme as types 2.7 and 2.8. \begin{table}[h!]
\[\begin{array}{c|ccccc}
 & 0 & 1 & 2 & 3 & 4 \\ \hline
0 & 1 & - & - & - & - \\
1 & - & 5 & 5 & - & - \\
2 & - & 1 & - & 1 & - \\
3 & - & - & 5 & 5 & - \\
4 & - & - & - & - & 1
\end{array} \] \caption*{CGKK 2~\cite{coughlan2016arithmetically}} \label{tab:table10} \end{table}
Consider the syzygy module of the five generating quadrics, \begin{equation}
\begin{split}
& Q_1=x_0x_5, \\
& Q_2=x_1x_5, \\
& Q_3=x_2x_5, \\
& Q_4=a_1x_0+a_2x_1+a_3x_2, \\
& Q_5=b_1x_0+b_2x_1+b_3x_2.
\end{split} \end{equation} The syzygy module is generated by the rows of the $6 \times 5$ matrix
\begin{equation}
M=\begin{pmatrix}
0 & x_2 & -x_1 & 0 & 0 \\
-x_2 & 0 & x_0 & 0 & 0 \\
x_1 & -x_0 & 0 & 0 & 0 \\
-b_1 & -b_2 & -b_3 & 0 & x_5 \\
a_1 & a_2 & a_3 & -x_5 & 0 \\
0 & 0 & 0 & -Q_5 & Q_4 \\
\end{pmatrix}. \end{equation} Using deformation variable $t$, we construct a new matrix \begin{equation}
M_t=\begin{pmatrix}
0 & x_2 & -x_1 & tb_1 & -ta_1 \\
-x_2 & 0 & x_0 & tb_2 & -ta_2 \\
x_1 & -x_0 & 0 & tb_3 & -ta_3 \\
-b_1 & -b_2 & -b_3 & 0 & x_5 \\
a_1 & a_2 & a_3 & -x_5 & 0 \\
0 & 0 & 0 & -Q_5 & Q_4 \\
\end{pmatrix}. \end{equation} Ignoring the bottom row of the matrix, and multiplying rows 4 and 5 by $t$ gives a skew-symmetric matrix. We may then take the $4 \times 4$ Pfaffians and cancel $t$ to obtain the five quadrics \begin{equation}
\begin{split}
& Q_1 = x_0x_5 - t(a_2b_3-a_3b_2), \\
& Q_2 = x_1x_5 - t(a_1b_3-a_3b_1), \\
& Q_3 = x_2x_5 - t(a_1b_2-a_2b_1), \\
& Q_4=a_1x_0+a_2x_1+a_3x_2, \\
& Q_5=b_1x_0+b_2x_1+b_3x_2.
\end{split} \end{equation} We also deform the cubic $F$. Suppose $F=c_1x_0+c_2x_1+c_3x_2$, with $c_1,c_2,c_3$ all of degree 2. As discussed earlier, the quartic in type $2.7$ is given by a quartic in $k[x_0,\dots,x_4]$ plus additional terms in $(x_5)$. In fact the first part is the quartic obtained as the determinant of the matrix \begin{equation}
N=\begin{pmatrix}
a_1 & a_2 & a_3 \\
b_1 & b_2 & b_3 \\
c_1 & c_2 & c_3 \\
\end{pmatrix}. \end{equation} We can write the quartic as $q=\det(N) + gx_5$ where $g$ is a degree three polynomial. We define our deformed cubic as $F_t=F+tg$. Consider the ideal $I_t=(Q_1, Q_2, Q_3, Q_4,Q_5,F_t,q)$. For $t=0$ this is clearly the defining ideal of our type 2.7 nodal curve. Otherwise, note that $tq$ lies in the ideal $J_t$ generated by the first six relations. This follows since \begin{equation} tq + c_1Q_1+c_2Q_2+c_3Q_3 -x_5F_t = 0. \end{equation} Note that $J_t$ is prime, which can be checked with computer algebra. Thus if $t$ is invertible then $I_t=J_t$, and the curve is defined by the $4 \times 4$ Pfaffians of a $5 \times 5$ skew-symmetric matrix intersected with a cubic hypersurface. This deformation corresponds to Betti table CGKK 2~\cite{coughlan2016arithmetically}.
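As a check on the deformed quadrics, deleting row and column 1 of the rescaled matrix and taking the Pfaffian of the remaining $4 \times 4$ block recovers $tQ_1$:
\begin{equation*}
x_0 \cdot tx_5-(tb_2)(-ta_3)+(-ta_2)(tb_3)=t\bigl(x_0x_5-t(a_2b_3-a_3b_2)\bigr)=tQ_1,
\end{equation*}
and similarly for the other Pfaffians, up to sign.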
\subsection{Type 2.3}\label{2.3} We now focus on a curve with degree 16 and genus 17 corresponding to Betti table type 2.3. \\ \begin{table}[h!]
\[\begin{array}{c|ccccc}
 & 0 & 1 & 2 & 3 & 4 \\ \hline
0 & 1 & - & - & - & - \\
1 & - & 4 & 3 & - & - \\
2 & - & 3 & 6 & 3 & - \\
3 & - & - & 3 & 4 & - \\
4 & - & - & - & - & 1
\end{array} \] \caption*{Type 2.3~\cite{schenck2020calabiyau}} \label{tab:table8} \end{table}
Consider the cubic scroll $\mathbb{F}=\mathbb{F}(1,2) \subset \mathbb{P}^4_{\left<x_0\dots x_4\right>}$, defined by equations \begin{equation}
\rank\begin{pmatrix}
x_0 & x_1 & x_3 \\
x_1 & x_2 & x_4 \\
\end{pmatrix} \leq 1. \end{equation}
$\mathbb{F}(1,2)$ is given by the surface scroll $\mathbb{F}_1$ embedded into $\mathbb{P}^4$ by the linear system $2A+B$, where $A$ is the fibre of $\mathbb{F}_1 \rightarrow \mathbb{P}^1$ and $B$ is the negative section~\cite{Reid1996ChaptersSurfaces}. The curve $C_2$ is given by $\mathbb{F} \cap X$ where $X$ is a general cubic hypersurface. This curve has degree 9 and genus 7. The curve $C_1 \subset \mathbb{P}^3$ is residual to a conic in a $(3,3)$ complete intersection, such that the double locus of $C$ is given by 6 points. It has degree 7 and genus 5. \\
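These numbers are consistent with (\ref{points1}) and (\ref{points2}): the scroll $\mathbb{F}$ has degree 3, so $C_2=\mathbb{F} \cap X$ has degree $3 \cdot 3=9$, and
\begin{align*}
9 &= 7-1+\tfrac{d}{2} \implies d=6, \\
7 = 16-9 &= g_1-1+\tfrac{6}{2} \implies g_1=5.
\end{align*}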
To describe the construction in terms of explicit relations, consider $x_0x_2-x_1^2$, the $2 \times 2$ minor of the matrix defining $\mathbb{F}$ obtained by deleting the third column. The curve $C_1$ lies in $\mathbb{P}^3=V(x_3,x_4)$. Define the plane conic $Q=V(x_0x_2-x_1^2,x_5) \subset \mathbb{P}^3$, and consider two general cubics $G_1, G_2$ containing $Q$. Such cubics have the form \begin{align*}
G_1 = P_1(x_0x_2-x_1^2)+Q_1x_5, \\
G_2 = P_2(x_0x_2-x_1^2)+Q_2x_5, \end{align*} with $P_1$, $P_2$ linear and $Q_1$, $Q_2$ quadrics. $C_1$ is residual to $Q$ in the $(3,3)$ complete intersection $(G_1,G_2)$. It is defined by one further cubic, given by $H=P_1Q_2-P_2Q_1$. Mapping this $H$ into $\mathbb{P}^4$, adding arbitrary terms in $(x_3,x_4)$, defines a cubic hypersurface $X$. We have $C_2 = \mathbb{F} \cap X$, and $C_1 \cup C_2$ is a Gorenstein codimension four curve in $\mathbb{P}^5$ corresponding to Betti table type 2.3.
\section{Further research} Having populated the list of Betti tables with examples of nodal curves, a number of open questions remain. Firstly, can we construct more deformations between curves in the same Hilbert scheme, as in type 2.7? Email correspondence with Jan Stevens and Stephen Coughlan answers in the affirmative for types 2.6 and 2.8. There is also the question of whether this list is exhaustive, or if there are further possible curve constructions. The existence of topologically different constructions for type 2.6 suggests this could be the case for other types. In particular for higher degree curves, the picture may be more complicated. In some cases we have only been able to definitively state the quadrics if they define a Koszul algebra, so there may be a different set of quadric relations. We have restricted our search to nodal curves, and so it is possible there are further constructions with worse singularities. We also raise the idea of constructing surfaces and singular 3-folds corresponding to the type 2 Betti tables, or alternatively finite point sets.
\section{Appendix} \subsection{Type 2.1} \label{2.1} We now consider how to construct a curve in $\mathbb{P}^5$ corresponding to Betti table type 2.1, which has degree 18 and genus 19. \begin{table}[h!]
\[\begin{array}{c|ccccc}
 & 0 & 1 & 2 & 3 & 4 \\ \hline
0 & 1 & - & - & - & - \\
1 & - & 2 & 1 & - & - \\
2 & - & 9 & 18 & 9 & - \\
3 & - & - & 1 & 2 & - \\
4 & - & - & - & - & 1
\end{array} \] \caption*{Type 2.1 \cite{schenck2020calabiyau}} \label{tab:table2} \end{table}
We see there are two quadric relations, $Q_1$ and $Q_2$, with one linear syzygy, which may be given as $L_1Q_1-L_2Q_2=0$ for some linear forms $L_1,L_2$. Since we want $Q_1 \neq \lambda Q_2$ for any scalar $\lambda$, we have $L_1 \neq \lambda L_2$ and consequently $L_1$ divides $Q_2$ and $L_2$ divides $Q_1$. It follows that $Q_1=L_1L_3$, $Q_2=L_2L_3$ for some linear form $L_3$. Thus without loss of generality we can set \begin{equation} Q_1 = x_0x_5, \quad Q_2=x_1x_5. \end{equation} It follows that $C$ is singular, and breaks up into $C_1 \subset \mathbb{P}^4_{\left<x_0\dots x_4\right>}$ of degree $d_1$ and genus $g_1$ and $C_2 \subset \mathbb{P}^3_{\left<x_2\dots x_5\right>}$ of degree $d_2$ and genus $g_2$. Supposing that $C$ is at worst a nodal curve, with $C_1$ and $C_2$ nonsingular, and supposing further that $C_2$ is an $(a,b)$ complete intersection, we have that $d_2=ab$ and $g_2=\tfrac{1}{2}ab(a+b-4)+1$. It follows from (\ref{points2}) that \begin{equation}
ab=\tfrac{1}{2}ab(a+b-4)+\tfrac{d}{2}. \end{equation} Since we have no relations of degree 4 or higher we expect $a,b$ to be at most 3. Notice also that since $C_2$ intersects $x_5=0$ in $ab$ points, $C_1$ and $C_2$ must intersect in at most $ab$ points. This excludes the cases $(1,2),(1,3),(2,2)$, and if $a=b=3$ then we obtain that $d=0$, contradicting our assumption that $C$ is a nodal curve consisting of two nonsingular curves intersecting transversally at a non-zero number of points. Thus we look at the case where $C_2$ is a $(2,3)$ complete intersection. \\
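Explicitly, solving $ab=\tfrac{1}{2}ab(a+b-4)+\tfrac{d}{2}$ in each case gives
\begin{gather*}
(1,2)\colon d=6>2, \qquad (1,3)\colon d=6>3, \qquad (2,2)\colon d=8>4, \\
(3,3)\colon d=0, \qquad (2,3)\colon d=6=ab,
\end{gather*}
so only the $(2,3)$ case is consistent with $0<d\leq ab$.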
It follows that $g_2=4$ and $d_2=6$. We expect $C_1$ to be a curve with $d_1=18-6=12$ and consequently $g_1=10$. Let $Q_3 \in (x_2,x_3,x_4,x_5)$ be the quadric in the complete intersection. Consider the plane conic $q$ in $\mathbb{P}^4$ defined by $x_0=x_1=Q_3|_{x_5=0}=0$. Then $\omega_{q}=\mathcal{O}_{q}(-1)$ by the adjunction formula \cite[page 41]{eisenbud_harris_2016}, and $K_q=-A_q$ where $A_q$ is the hyperplane class. Let $\Gamma$ be the union of $C_1$ and $q$. Then as before $K_{\Gamma}|_q=K_{q}+D$ where $D$ is the divisor of the 6 double points on $q$. Thus $\mathcal{O}_q(K_{\Gamma})=\mathcal{O}_q(-1+3)=\mathcal{O}_q(2)$, and $\Gamma$ is halfcanonical and hence Gorenstein. \\
Since $\Gamma$ is degree 14 and Gorenstein codimension 3 we expect it to be defined by the $6 \times 6$ Pfaffians of a $7 \times 7$ skew-symmetric matrix. All elements of the matrix must be linear, since there are no quartic or higher degree relations. Thus we now consider what conditions must be satisfied for the Pfaffians of the matrix to lie in the ideal $J=(x_0,x_1,Q_3|_{x_5=0})$, but not in $(x_0,x_1)$. Consider the matrix \begin{equation}
M = \begin{pmatrix}
& a_{12} & a_{13} & a_{14} & a_{15} & a_{16} & a_{17}\\
& & a_{23} & a_{24} & a_{25} & a_{26} & a_{27} \\
& & & a_{34} & a_{35} & a_{36} & a_{37} \\
& & & & a_{45} & a_{46} & a_{47} \\
& & & & & a_{56} & a_{57} \\
& & & & & & a_{67} \\
\end{pmatrix}. \end{equation}
If $Q_3$ consists of a reducible quadric in $(x_2^2,x_2x_3,x_2x_4,x_3^2,x_3x_4,x_4^2)$ plus terms in $(x_5)$, we can construct an $M$ such that the Pfaffians lie in $J$. An open problem following on from this work is whether such an $M$ can be constructed to contain a more general quadric, with $C_1$ still a nonsingular and irreducible curve. If $Q_3$ is in the required form, assume without loss of generality that $Q_3|_{x_5=0}=x_2x_3$. Then the following constraints ensure the Pfaffians lie in $J$. First suppose that $a_{ij} \in (x_0,x_1)$ for $i,j \notin \{5,6,7\}$, i.e. that $M$ is a $\text{Tom}_{567}$. We add the further constraints that, except for $a_{56}$ which is general, $a_{ij} \in (x_0,x_1,x_2)$ for $j=5$ and $a_{ij} \in (x_0,x_1,x_3)$ for $j=6$. \\
The curve $C_1$ residual to $q$ in $\Gamma$ is nonsingular and irreducible, of degree 12 and genus 10. Moreover, the intersection of $C_1$ with the conic $q$ is 6 points given as the intersection of $Q_3|_{x_5=0}$ and a cubic $H$ in $\mathbb{P}^2_{\left<x_2\dots x_4\right>}$. We can map this cubic into $\mathbb{P}^3$, adding arbitrary terms in $(x_5)$ to define a new nonsingular cubic. The $(2,3)$ complete intersection $C_2$ is given by $(Q_3,H)$. The union of these two curves defines our nodal curve with resolution given by Betti table 2.1.
\subsection{Type 2.2}\label{2.2} We now outline a construction of a nodal curve $C \subset \mathbb{P}^5$ with degree 17 and genus 18 corresponding to Betti table type 2.2. \\ \begin{table}[h!]
\[\begin{array}{c|ccccc}
 & 0 & 1 & 2 & 3 & 4 \\ \hline
0 & 1 & - & - & - & - \\
1 & - & 3 & 1 & - & - \\
2 & - & 5 & 12 & 5 & - \\
3 & - & - & 1 & 3 & - \\
4 & - & - & - & - & 1
\end{array} \] \caption*{Type 2.2~\cite{schenck2020calabiyau}} \label{tab:table7} \end{table}
The quadric relations in $\mathbb{P}^5_{\left<x_0\dots x_5\right>}$ are necessarily of the form \begin{equation}
Q_1 = x_0x_5, \quad Q_2= x_1x_5, \quad Q_3, \end{equation} with the single syzygy $x_1Q_1\equiv x_0Q_2$. Thus $C$ breaks up into $C_1 \subset \mathbb{P}^4_{\left<x_0\dots x_4\right>}$ and $C_2 \subset \mathbb{P}^3_{\left<x_2\dots x_5\right>}$. The curve $C_2$ is a $(2,3)$ complete intersection of degree 6 and genus 4. Moreover, the double locus consists of 6 points and we expect $C_1$ to be of degree 11 and genus 9. For $C_1 \cup C_2$ to be halfcanonical we require the 6 points to define a hyperplane section, and we again consider a plane conic $q$ in $\mathbb{P}^4_{\left<x_0\dots x_4\right>}$ in the plane of the double points, with $q$ defined by $x_0=x_1=Q_3=0$. Then we once more have that $\Gamma = C_1 \cup q$ is halfcanonical and hence Gorenstein. We thus define $\Gamma$ using the Pfaffians of a skew-symmetric matrix. The construction is as follows: we define a matrix \begin{equation}
M = \begin{pmatrix}
& b_{12} & b_{13} & b_{14} & b_{15} \\
& & a_{23} & a_{24} & a_{25} \\
& & & a_{34} & a_{35} \\
& & & & a_{45} \\
\end{pmatrix} \end{equation} with entries of the following degrees \begin{equation}
M = \begin{pmatrix}
& 2 & 2 & 2 & 2 \\
& & 1 & 1 & 1 \\
& & & 1 & 1 \\
& & & & 1 \\
\end{pmatrix}, \end{equation} whose $4 \times 4$ Pfaffians are four cubics and a quadric, as in type \hyperref[2.6]{2.6}. Unlike type \hyperref[2.6]{2.6}, we wish for the four cubics to vanish on the plane $x_0=x_1=0$ in $\mathbb{P}^4$, and for the quadric not to lie in $(x_0,x_1)$ -- it may be general in $\mathbb{P}^4$. This occurs if we set, for example, the $b_{ij}$ as quadrics in $(x_0,x_1)$, and the $a_{ij}$ linear and in all coordinates $x_0$ to $x_4$. Let $\Gamma$ be the curve defined by the $4 \times 4$ Pfaffians, and set $Q_3$ to be the quadric Pfaffian. Consider the curve $C_1$ residual to the conic defined by $x_0=x_1=Q_3=0$ in $\Gamma$. This is nonsingular and irreducible, and has degree 11 and genus 9 as required. It meets the conic in 6 points, which define a cubic $F_1$ such that the zero locus is given by $x_0=x_1=Q_3=F_1=0$. Mapping $Q_3$ and $F_1$ into $\mathbb{P}^3$ by adding arbitrary terms in $(x_5)$ we obtain the complete intersection $C_2$.
\subsection{Type 2.5}\label{2.5} We now construct a curve in $\mathbb{P}^5_{\left<x_0\dots x_5\right>}$ corresponding to Schenck, Stillman and Yuan's type 2.5. The curve $C$ has degree 17, and due to the assumptions on Castelnuovo--Mumford regularity it is halfcanonical with arithmetic genus 18. \\ \renewcommand{\arraystretch}{1} \begin{table}[h!]
\[\begin{array}{c|ccccc}
 & 0 & 1 & 2 & 3 & 4 \\ \hline
0 & 1 & - & - & - & - \\
1 & - & 3 & 3 & 1 & - \\
2 & - & 7 & 14 & 7 & - \\
3 & - & 1 & 3 & 3 & - \\
4 & - & - & - & - & 1
\end{array} \] \caption*{Type 2.5~\cite{schenck2020calabiyau}} \label{tab:table4} \end{table}
Let $J=(Q_1,Q_2,Q_3)$ be the ideal of the three quadrics, $S=k[x_0,\dots,x_n]$. In the case that $R=S/J$ is a Koszul algebra we may apply a theorem of Mantero--Mastroeni~\cite{mantero2021betti} to categorize the quadrics. \begin{proposition} If $R=S/J$ is a Koszul algebra then $J$ is given by \[Q_1 = x_0x_5, \quad Q_2 = x_1x_5, \quad Q_3 = x_2x_5.\] \end{proposition}
\begin{proof}
If $\text{ht}J=2$ then $R$ is an almost complete intersection and consequently has at most two linear syzygies~\cite{mastroeni2018koszul}. If $\text{ht}J=3$ then $R$ is a complete intersection and there are no linear syzygies. If $\text{ht} J=1$ then it is given as $zI$ where $z$ is a linear form and $I$ is a complete intersection of linear forms~\cite{mantero2021betti}. This option has the correct number of syzygies. Thus, without loss of generality, let $z=x_5$, $I=(x_0,x_1,x_2)$. Then $J=zI=(x_0x_5,x_1x_5,x_2x_5).$ \end{proof} It follows that any curve $C$ with such quadric relations breaks up into two curves, $C_1 \subset \mathbb{P}^4_{\left<x_0\dots x_4\right>}$ and $C_2 \subset \mathbb{P}^2_{\left<x_3:x_4:x_5\right>}$. Thus $C_2$ is defined by a single quadric, cubic or quartic in $\mathbb{P}^2$. \\
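The quadrics of the proposition indeed carry exactly the linear syzygies demanded by the second row of the Betti table (three linear first syzygies and one linear second syzygy):
\[
x_1 Q_1 - x_0 Q_2 = 0, \qquad x_2 Q_1 - x_0 Q_3 = 0, \qquad x_2 Q_2 - x_1 Q_3 = 0,
\]
together with the single Koszul relation $x_2(x_1 Q_1 - x_0 Q_2) - x_1(x_2 Q_1 - x_0 Q_3) + x_0(x_2 Q_2 - x_1 Q_3) = 0$ among them.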
Recall that a nonsingular plane quadric has genus 0, and a cubic has genus 1. Assuming $C_2$ is nonsingular, it follows that $C_2$ cannot be defined by a quadric or cubic. Note that $C_1 \cap C_2 \subset C_1 \cap H$ where $H$ is the hyperplane given by $x_5=0$. $C_2$ intersects $H$, and consequently $C_1$, in at most $\deg(C_2)=d_2$ points, that is, in at most 2 or 3 points respectively, so (\ref{points2}) does not hold in these cases. Further $C_1 \cup C_2$ is halfcanonical, so $K_{C_2} + D = 2H$ for $H$ a hyperplane section. We also have for $C_2$ a plane curve of degree $d_2$ that $K_{C_2} = (d_2-3)H$, and so $D = (5-d_2)H \leq H$ and the only solution is $d_2=4$. Since $C_2$ is defined by a nonsingular plane quartic it has degree 4 and genus 3, and by (\ref{points2}) the double locus of $C = C_1 \cup C_2$ contains 4 points. \\
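These numbers determine the first component: since $\deg C = 17$ and $p_a(C) = 18$, while $C_2$ has degree 4 and genus 3 and meets $C_1$ in 4 nodes, the arithmetic-genus formula for the nodal union gives
\[
\deg C_1 = 17 - 4 = 13, \qquad g(C_1) = 18 - 3 - 4 + 1 = 12.
\]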
Let $\Gamma$ be the curve $C_1 \cup l_1$ in $\mathbb{P}^4$, with $l_1$ defined by $x_0=x_1=x_2=0$. Again it follows that $\Gamma$ is halfcanonical, hence Gorenstein. The curve $\Gamma$ is of degree 14 since $C_1$ must have degree 13 by (\ref{points1}). By Buchsbaum--Eisenbud~\cite{10.2307/2373926}, if $\Gamma$ is Gorenstein of codimension 3 then it is defined by Pfaffians. Since $\Gamma$ has degree 14, and we need seven cubics to define $C$, $\Gamma$ is defined by the $6 \times 6$ Pfaffians of a $7 \times 7$ skew-symmetric matrix. As $\Gamma$ contains the line $l_1$ it is necessary that every Pfaffian lies in the ideal $(x_0,x_1,x_2)$.
\begin{proposition}
Let $M$ be a $7 \times 7$ skew-symmetric matrix. Let $I$ be the ideal of $6 \times 6$ Pfaffians of $M$. Then two possible formats such that $I \subset (x_0,x_1,x_2)$ are as follows:
\begin{flalign*}
(\textup{I}) \hspace{3cm} M = \begin{pmatrix}
& a_{12} & a_{13} & a_{14} & a_{15} & b_{16} & b_{17}\\
& & a_{23} & a_{24} & a_{25} & b_{26} & b_{27} \\
& & & a_{34} & a_{35} & b_{36} & b_{37} \\
& & & & a_{45} & b_{46} & b_{47} \\
& & & & & b_{56} & b_{57} \\
& & & & & & b_{67} \\
\end{pmatrix} \\ \\ \end{flalign*} \begin{flalign*}
(\textup{II}) \hspace{3cm} M = \begin{pmatrix}
& b_{12} & b_{13} & b_{14} & b_{15} & a_{16} & a_{17}\\
& & b_{23} & b_{24} & b_{25} & a_{26} & a_{27} \\
& & & b_{34} & b_{35} & a_{36} & a_{37} \\
& & & & b_{45} & a_{46} & a_{47} \\
& & & & & a_{56} & a_{57} \\
& & & & & & a_{67} \\
\end{pmatrix} \\ \end{flalign*} where the $a_{ij}$ represent linear elements in $(x_0,x_1,x_2)$ and the $b_{ij}$ represent linear elements in all the coordinates on $\mathbb{P}^4$. \end{proposition}
\begin{proof}
Consider the skew-symmetric matrix
\begin{equation}
M = \begin{pmatrix}
& a_{12} & a_{13} & a_{14} & a_{15} & a_{16} & a_{17}\\
& & a_{23} & a_{24} & a_{25} & a_{26} & a_{27} \\
& & & a_{34} & a_{35} & a_{36} & a_{37} \\
& & & & a_{45} & a_{46} & a_{47} \\
& & & & & a_{56} & a_{57} \\
& & & & & & a_{67} \\
\end{pmatrix}. \end{equation}
We can explicitly state the Pfaffians of $M$, as in \cite{ishikawa2000minor}. Consider the $2n \times 2n$ submatrix defined by deleting the $i$th row and $i$th column of $M$. We define its Pfaffian using the symmetric group $G_i=S_{2n}$ on the set $\{1,\dots,\widehat{i},\dots,2n+1\}$. Let
\[
\mathfrak{S}_i = \left\{ \sigma=(\sigma_1\dots \sigma_{2n}) \in G_i \quad \middle\vert \begin{array}{l}
\quad \sigma_{2j-1}<\sigma_{2j} \qquad 1 \leq j \leq n \\
\quad \sigma_{2j-1} < \sigma_{2j+1} \quad 1 \leq j \leq n-1
\end{array}\right\}. \]
Recall that we can define the $2n \times 2n$ Pfaffian of the submatrix as
\begin{equation}
\text{Pf}_i = \sum_{\sigma \in \mathfrak{S}_i}\text{sgn}(\sigma)a_{\sigma_1 \sigma_2}\cdots a_{\sigma_{2n-1}\sigma_{2n}}
\end{equation}
where $\text{sgn}(\sigma)=(-1)^{\ell (\sigma)}$, with $\ell (\sigma)$ the number of inversions.
It follows that if every term $a_{\sigma_1 \sigma_2}\cdots a_{\sigma_{2n-1} \sigma_{2n}}$ contains a factor lying in an ideal, then the Pfaffian lies in that ideal. We are working with $7 \times 7$ matrices, so $n=3$, and each Pfaffian is a sum of terms of the form $a_{ij}a_{kl}a_{mn}$ whose index pairs $\{i,j\},\{k,l\},\{m,n\}$ are disjoint. In format (I), at most two of the three factors can involve the indices 6 or 7, so every term contains at least one factor $a_{kl}$ with $k,l \leq 5$, which lies in $(x_0,x_1,x_2)$; hence (I) is a possible solution. In format (II), at least one of the indices 6 and 7 survives the deletion of the $i$th row and column, so every term must contain a factor with a 6 or 7 among its indices, and (II) also ensures $I \subset (x_0,x_1,x_2)$. \end{proof}
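As a sanity check on the sign conventions in the matching-sum formula above, one can evaluate it numerically and compare with the classical identity $\text{Pf}(M)^2 = \det M$. The following pure-Python sketch (the helper names are ours, not from the references) does this for a random $6 \times 6$ skew-symmetric matrix:

```python
import itertools
import random

def pfaffian(M):
    """Pfaffian of a 2n x 2n skew-symmetric matrix via the perfect-matching sum."""
    def rec(indices):
        if not indices:
            return 1.0
        i, rest = indices[0], indices[1:]
        total = 0.0
        for pos, j in enumerate(rest):
            # (-1)**pos accounts for the inversions introduced by pairing i with j.
            total += (-1) ** pos * M[i][j] * rec(rest[:pos] + rest[pos + 1:])
        return total
    return rec(tuple(range(len(M))))

def det(M):
    """Leibniz determinant; fine for small matrices."""
    n = len(M)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        sign = 1
        for a in range(n):
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = float(sign)
        for r in range(n):
            prod *= M[r][perm[r]]
        total += prod
    return total

random.seed(0)
A = [[0.0] * 6 for _ in range(6)]
for i in range(6):
    for j in range(i + 1, 6):
        A[i][j] = random.uniform(-1.0, 1.0)
        A[j][i] = -A[i][j]

check = pfaffian(A) ** 2 - det(A)   # vanishes up to rounding
```

The same recursion also makes the ideal-membership argument concrete: each summand is a product of entries over a perfect matching, so if every matching contains a factor in $(x_0,x_1,x_2)$, so does the Pfaffian.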
More generally, we see that for any $i,j \in \{1,\dots,7\}$ with $i \neq j$ we can define two possible matrices such that $I \subset (x_0,x_1,x_2)$. Case (II) is the case where $a_{kl} \in (x_0,x_1,x_2)$ for $k \in \{i,j\}$ or $l \in \{i,j\}$ and the other entries are general linear forms in the coordinates on $\mathbb{P}^4$. Case (I) is the case where $a_{kl} \in (x_0,x_1,x_2)$ for $k,l \notin \{i,j\}$ and the other entries are general. In the language of Tom and Jerry, case (II) corresponds to $\text{Jer}_{67}$ and case (I) corresponds to $\text{Tom}_{67}$.\\
Defining $\Gamma$ using a matrix of the above form gives a curve with two irreducible nonsingular components: $C_1$ of degree 13 and genus 12 as expected, and $l_1$, the line defined by $(x_0,x_1,x_2)$ in $\mathbb{P}^4$. Moreover, the intersection of $C_1$ and $l_1$ is 4 points which define a quartic $q_4$ in $(x_3,x_4)$. This quartic can be mapped into $\mathbb{P}^2$ by adding arbitrary terms in $(x_5)$ to obtain the nonsingular quartic which defines $C_2$. The union of these two curves is a Gorenstein codimension 4 variety in $\mathbb{P}^5$ corresponding to Betti table 2.5.
\subsection{Type 2.8}\label{2.8} We now consider type 2.8. \begin{table}[h!]
\[\begin{array}{c|ccccc}
 & 0 & 1 & 2 & 3 & 4 \\ \hline
0 & 1 & - & - & - & - \\
1 & - & 5 & 6 & 2 & - \\
2 & - & 2 & 4 & 2 & - \\
3 & - & 2 & 6 & 5 & - \\
4 & - & - & - & - & 1
\end{array} \] \caption*{Type 2.8~\cite{schenck2020calabiyau}} \label{tab:table3} \end{table} An example construction of a Gorenstein curve $C$ in $\mathbb{P}^5$ with such a minimal free resolution of its coordinate ring is as follows. Firstly, note that this variety is of degree 15, which follows from the Hilbert function \cite{schenck2020calabiyau}. Secondly, the constraints of regularity 4 mean our curve will again be halfcanonical, of arithmetic genus 16. This ``big ears'' construction works in the following manner. Let $x_0,\dots,x_5$ be coordinates on $\mathbb{P}^5$. The quadrics \begin{equation}
Q_1=x_0x_4, \quad Q_2=x_1x_4, \quad Q_3=x_2x_5, \quad Q_4=x_3x_5, \quad Q_5=x_4x_5 \end{equation} have six linear first syzygies and two linear second syzygies, thus satisfying the second row of the Betti table. It follows that $C=C_0 \cup C_1 \cup C_2$, with $C_0 \subset \mathbb{P}^3_{\left<x_0\dots x_3\right>}$, $C_1 \subset \mathbb{P}^2_{\left<x_0:x_1:x_5\right>}$, $C_2 \subset \mathbb{P}^2_{\left<x_2:x_3:x_4\right>}$. Each copy of $\mathbb{P}^2$ intersects the copy of $\mathbb{P}^3$ in a line, hence the term ``big ears'' to refer to the curves embedded into each $\mathbb{P}^2$. $C_0$ is a degree 7 genus 4 curve residual to a (3,3) complete intersection in $\mathbb{P}^3_{\left<x_0\dots x_3\right>}$ containing both lines, $l_1 \colon x_0=x_1=0$ and $l_2 \colon x_2=x_3=0$. Each cubic is thus in the ideal $J=(x_0x_2,x_1x_2,x_0x_3,x_1x_3)$. Let $I=(F_1,F_2)$ be the ideal defining the (3,3) complete intersection in $\mathbb{P}^3$, with \begin{equation} \begin{split}
& F_1=l_{02}x_0x_2+l_{12}x_1x_2+l_{03}x_0x_3+l_{13}x_1x_3, \\
& F_2=m_{02}x_0x_2+m_{12}x_1x_2+m_{03}x_0x_3+m_{13}x_1x_3, \end{split} \end{equation} where the $l_{ij},m_{ij}$ are linear forms in $k[x_0,\dots,x_3]$. The ideal defining the residual curve $C_0$ contains a further two quartics. These quartics may be calculated directly from the $2 \times 2$ minors of the matrix \begin{equation} M =
\begin{pmatrix}
l_{02} & l_{12} & l_{03} & l_{13} \\
m_{02} & m_{12} & m_{03} & m_{13} \\
\end{pmatrix}. \end{equation} One quartic, $H_1$, is double on the line $l_1$ and intersects $l_2$ transversally in 4 points, and vice versa for the second quartic $H_2$. Thus mapping $H_1$ into $\mathbb{P}^2_{\left<x_0:x_1:x_5\right>}$ by adding arbitrary terms in $(x_5)$ defines a nonsingular quartic curve $C_1$, and similarly mapping $H_2$ into $\mathbb{P}^2_{\left<x_2\dots x_4\right>}$ by adding arbitrary terms in $(x_4)$ defines a nonsingular quartic curve $C_2$. The union of these three curves is a codimension 4 Gorenstein curve in $\mathbb{P}^5$ whose coordinate ring has free resolution as in Betti table 2.8.
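As a consistency check, the degree and arithmetic genus of the ``big ears'' curve add up as they should: with three components of genera $4, 3, 3$ and $\delta = 8$ nodes (the two sets of 4 points in which $C_1$ and $C_2$ meet $C_0$),
\[
\deg C = 7 + 4 + 4 = 15, \qquad p_a(C) = (4 + 3 + 3) + 8 - (3 - 1) = 16 = \deg C + 1,
\]
as required for a halfcanonical curve.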
\end{document} |
\begin{document}
\title{Subspace-search variational quantum eigensolver for excited states}
\author{Ken M Nakanishi}
\email{ken-nakanishi@g.ecc.u-tokyo.ac.jp}
\affiliation{
Graduate School of Science,
The University of Tokyo,
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan.
} \author{Kosuke Mitarai}
\email{mitarai@qc.ee.es.osaka-u.ac.jp}
\affiliation{
Graduate School of Engineering Science,
Osaka University,
1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan.
}
\affiliation{
QunaSys Inc.,
High-tech Hongo Building 1F, 5-25-18 Hongo, Bunkyo, Tokyo 113-0033, Japan.
} \author{Keisuke Fujii}
\email{fujii.keisuke.2s@kyoto-u.ac.jp}
\affiliation{
Graduate School of Science,
Kyoto University,
Kitashirakawa Oiwake-cho, Sakyo-ku, Kyoto 606-8302, Japan.
}
\affiliation{
JST, PRESTO, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012, Japan.
}
\date{\today}
\begin{abstract} The variational quantum eigensolver (VQE), a variational algorithm to obtain an approximated ground state of a given Hamiltonian, is an appealing application of near-term quantum computers. The original work [A. Peruzzo et al.; \textit{Nat. Commun.}; \textbf{5}, 4213 (2014)] focused only on finding a ground state, whereas excited states can also induce interesting phenomena in molecules and materials. Calculating excited states is, in general, a more difficult task than finding ground states for classical computers. To extend the framework to excited states, we here propose an algorithm, the subspace-search variational quantum eigensolver (SSVQE). This algorithm searches a low-energy subspace by supplying orthogonal input states to the variational ansatz and relies on the unitarity of transformations to ensure the orthogonality of the output states. The $k$-th excited state is obtained as the highest energy state in the low-energy subspace. The proposed algorithm consists of only two parameter optimization procedures and does not employ any ancilla qubits. Eliminating the ancilla qubits is a significant improvement over the existing proposals for excited states, which have utilized the swap test, and makes our proposal a truly near-term quantum algorithm. We further generalize the SSVQE to obtain all excited states up to the $k$-th with only a single optimization procedure. We verify the proposed algorithms by numerical simulation. This work greatly extends the applicable domain of the VQE to excited states and related properties such as transition amplitudes, without sacrificing any of its feasibility. \end{abstract}
\maketitle
\section{Introduction}
Supported by the world-wide active research for the development of quantum devices, quantum computers equipped with almost a hundred qubits are now within reach. Those near-term quantum computers are often called noisy intermediate-scale quantum (NISQ) devices~\cite{Preskill2018}, reflecting the fact that those quantum computers are not fault-tolerant, that is, they do not have guaranteed accuracy of the computational result. However, such a NISQ device is believed not to be efficiently simulatable on classical computers if the gate fidelity is sufficiently high~\cite{Boixo2018,Bouland2018,Chen2018}. This fact encourages us to look for practical applications of such devices.
The variational quantum eigensolver (VQE)~\cite{Peruzzo2014,Bauer2016,Kandala2017} is an attractive application of near-term quantum computers in the hope that the controllable quantum devices can simulate another quantum system more efficiently than classical devices. The VQE is an algorithm for finding an approximate ground state of a given Hamiltonian $H$. For this purpose, the VQE utilizes a parameterized quantum circuit $U(\bm{\theta})$, which is also called an ansatz circuit, to generate an ansatz state $\ket{\psi(\bm{\theta})}$. The expectation value of the target Hamiltonian $\expect{H (\bm{\theta})} = \bra{\psi(\bm{\theta})}H\ket{\psi(\bm{\theta})}$ is minimized by iterative optimization of the parameters $\bm{\theta}$. The circuit with the resultant optimal parameters $\bm{\theta}^*$ which minimizes $\expect{H}$ outputs the approximate ground state.
Not only the ground state, which the original VQE aims to find, but also the excited states of molecules are responsible for many chemical reactions and physical processes. For example, the transition between a ground state and excited states is the origin of luminescence~\cite{Klessinger1995}. Intermediate states of a chemical reaction are, in general, not a ground state of a system, and therefore the properties of such excited states are important for their analysis~\cite{Lischka2018}.
In spite of the importance of the excited states, classical computation suffers from the increasing computational cost and gives relatively poor results for them~\cite{Serrano-Andres2005,Dreuw2005,Lischka2018}. This motivates us to utilize quantum computers for the task of finding excited states and analyzing their property. A long-term quantum algorithm for a chemical reaction has been investigated in Ref.~\cite{Reiher2017}. However, algorithms which we can run on NISQ devices are yet to appear.
In order to find the excited states using NISQ devices, we propose a method which utilizes the conservation of orthogonality under unitary transformations. We name the method the subspace-search VQE (SSVQE). The SSVQE takes two or more orthogonal states as inputs to a parametrized quantum circuit, and minimizes the expectation value of the energy in the space spanned by those states. This method automatically imposes the orthogonality condition on the output states, and therefore allows us to remove the swap test~\cite{Buhrman2001}, which has been employed in the previous works~\cite{Higgott2018,Endo2018} to ensure the orthogonality. In principle, the proposed algorithm can find the $k$-th excited state by running the optimization of the circuit parameters only twice. We also propose a generalized version of the SSVQE, which finds all excited states up to the $k$-th with only one optimization procedure. As a possible application of the SSVQE, a method to measure a transition amplitude between two eigenstates is described. It can evaluate material properties such as permittivity and the rate of spontaneous emission. We perform numerical simulations and show the validity of the proposed algorithms for fully connected random transverse Ising models and helium hydride. This work greatly extends the practicability of the VQE by enabling it to find the excited states efficiently, and thereby further strengthens the VQE as a candidate application of NISQ devices.
The rest of the paper is organized as follows. In \cref{sec:methods} we first propose the algorithm of the SSVQE and the extended version of it. Then, in \cref{sec:relatedworks} we briefly review the existing works addressing the same objective of finding the excited states in the framework of the VQE. An algorithm to obtain the transition amplitude is described in \cref{sec:application}. Finally, we present the simple, proof-of-principle numerical simulations in \cref{sec:numerical_simulation}.
\section{Methods}\label{sec:methods}
The VQE is a quantum-classical hybrid algorithm to find a ground state of a given Hamiltonian $H$ using NISQ devices. For this purpose, the VQE utilizes a parameterized quantum circuit $U(\bm{\theta})$, also called an ansatz circuit, to generate an ansatz state $\ket{\psi(\bm{\theta})}$. The expectation value of the target Hamiltonian $\expect{H (\bm{\theta})} = \bra{\psi(\bm{\theta})}H\ket{\psi(\bm{\theta})}$ is minimized by iterative optimization of the parameters $\bm{\theta}$.
Our objective here is to find excited states of the Hamiltonian $H$. Since the eigenstates of the Hamiltonian $H$ are mutually orthogonal, a straightforward construction of an algorithm to find the $k$-th excited state is to minimize $\expect{H(\bm\theta)}$ while imposing an orthogonality condition between the ansatz state $\ket{\psi(\bm{\theta})}$ and all of the ground/excited states up to the $(k-1)$-th. By inductively repeating this, we can find the excited states of interest. The swap test~\cite{Buhrman2001}, which can measure the inner product between the ground state and the ansatz state, has been employed to ensure the orthogonality in the previous works~\cite{Higgott2018,Endo2018}. In contrast, the SSVQE and the weighted SSVQE we propose here utilize the conservation of orthogonality under unitary transformations to satisfy the orthogonality condition. These methods automatically impose the orthogonality on the output states, and therefore remove the need for the swap test.
\subsection{Subspace-search variational quantum eigensolver} The key idea is to ensure the orthogonality at the \textit{input} of the quantum circuit, not at the output. Below we describe the algorithm to find the $k$-th excited state that works on an $n$-qubit quantum computer. We define the ground state as the $0$-th excited state. The algorithm, which we refer to as the subspace-search VQE (SSVQE), runs as follows.
\textbf{Algorithm:} \begin{enumerate}
\item Construct an ansatz circuit $U(\bm{\theta})$ and choose input states $\left\{\ket{\varphi_j}\right\}_{j=0}^k$ which are mutually orthogonal ($\braket{\varphi_i|\varphi_j}=\delta_{ij}$).
\item Minimize $\mathcal{L}_1(\bm{\theta}) = \sum_{j=0}^k \braket{\varphi_j|U^\dagger(\bm{\theta})HU(\bm{\theta})|\varphi_j}$. We denote the optimal $\bm{\theta}$ by $\bm{\theta}^*$.
\item Construct another parametrized quantum circuit $V(\bm{\phi})$ that only acts on the space spanned by $\left\{\ket{\varphi_j}\right\}_{j=0}^k$.
\item Choose an arbitrary index $s\in\{0,\cdots,k\}$, and maximize
$\mathcal{L}_2(\bm{\phi}) =
\braket{\varphi_s|V^\dagger(\bm{\phi})U^\dagger(\bm{\theta}^*)HU(\bm{\theta}^*)V(\bm{\phi})|\varphi_s}.$ \end{enumerate} We note that, in practice, the input states $\left\{\ket{\varphi_j}\right\}_{j=0}^k$ will be chosen from a set of states which are easily preparable, such as the computational basis.
Let the set of eigenstates of $H$ be $\left\{\ket{E_j}\right\}_{j=0}^{2^n -1}$ with corresponding eigenenergies $\left\{E_j\right\}_{j=0}^{2^n -1}$ where $E_i\geq E_j$ when $i\geq j$. Then, the circuit optimized in step 2 of the above algorithm is a unitary that best approximates the mapping from the space spanned by $\left\{\ket{\varphi_j}\right\}_{j=0}^k$ to the one spanned by $\left\{\ket{E_j}\right\}_{j=0}^k$. Therefore, in step 2, we can find the subspace which includes $\ket{E_k}$ as its highest energy state, using a carefully constructed ansatz $U(\bm{\theta})$. The unitary $V(\bm{\phi})$ is responsible for searching within that subspace. By maximizing $\mathcal{L}_2(\bm{\phi})$, we find the $k$-th excited state $\ket{E_k}$.
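The two optimization steps can be illustrated classically on a toy 3-level system, replacing the quantum circuit by an explicitly parameterized rotation and the optimizer by a coarse grid search. The $3 \times 3$ Hamiltonian with built-in spectrum $\{0, 1, 3\}$, the Givens-rotation ansatz and the grid search below are our own illustrative stand-ins, not part of the original algorithm; the inputs are $\ket{\varphi_0} = e_0$, $\ket{\varphi_1} = e_1$ with $k = 1$:

```python
import math

# Pure-Python 3x3 helpers.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def givens(i, j, t):
    """Rotation by angle t in the (i, j) coordinate plane of R^3."""
    G = [[float(r == c) for c in range(3)] for r in range(3)]
    G[i][i] = G[j][j] = math.cos(t)
    G[i][j], G[j][i] = -math.sin(t), math.sin(t)
    return G

# Toy Hamiltonian with spectrum {0, 1, 3} built in: H = Q diag(0, 1, 3) Q^T.
Q = matmul(givens(0, 1, 0.7), givens(1, 2, 1.1))
D = [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 3.0]]
H = matmul(matmul(Q, D), transpose(Q))

def conjugated(U):
    return matmul(matmul(transpose(U), H), U)

def L1(t1, t2):
    # Step 2 of the SSVQE: sum of <phi_j|U^T H U|phi_j> over inputs e_0, e_1.
    M = conjugated(matmul(givens(0, 1, t1), givens(1, 2, t2)))
    return M[0][0] + M[1][1]

grid = [2 * math.pi * n / 120 for n in range(120)]
cost1, t1s, t2s = min((L1(a, b), a, b) for a in grid for b in grid)

# Step 4: rotate within span{e_0, e_1} and maximize the energy of one state.
Ustar = matmul(givens(0, 1, t1s), givens(1, 2, t2s))
def L2(phi):
    return conjugated(matmul(Ustar, givens(0, 1, phi)))[0][0]
E1_estimate = max(L2(p) for p in grid)
```

Step 2 bottoms out near $E_0 + E_1 = 1$, because the optimal ansatz maps $\mathrm{span}\{e_0, e_1\}$ onto the lowest two-dimensional eigenspace, and the subsequent maximization within that subspace recovers $E_1 = 1$.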
In the case of $k \geq 2^{n-1}$, it is faster to choose $2^n-k$ orthogonal input states $\ket{\varphi_j}$ and maximize $\mathcal{L}_1(\bm{\theta})$ instead of minimizing it in step 2, and then minimize $\mathcal{L}_2(\bm{\phi})$ instead of maximizing it in the final step. \subsection{Weighted SSVQE for finding the $k$-th excited state}\label{sec:kth-WSSVQE} Here we extend the algorithm described in the previous section to one which finds the $k$-th excited state of a given Hamiltonian with only a single optimization procedure. It runs as follows.
\textbf{Algorithm:} \begin{enumerate}
\item Construct an ansatz circuit $U(\bm{\theta})$ and choose input states $\left\{\ket{\varphi_j}\right\}_{j=0}^k$ which are mutually orthogonal ($\braket{\varphi_i|\varphi_j}=\delta_{ij}$).
\item Minimize
$\mathcal{L}_{w}(\bm{\theta}) =
w \braket{\varphi_k|U^\dagger(\bm{\theta})HU(\bm{\theta})|\varphi_k} +
\sum_{j=0}^{k-1} \braket{\varphi_j|U^\dagger(\bm{\theta})HU(\bm{\theta})|\varphi_j}$
, where the weight $w$ can be any value in $(0, 1)$. \end{enumerate}
When the cost $\mathcal{L}_{w}$ reaches its global optimum, the circuit $U(\bm{\theta})$ becomes a unitary which maps $\ket{\varphi_k}$ to the $k$-th excited state $\ket{E_k}$ of the Hamiltonian and others to the subspace spanned by $\left\{\ket{E_j}\right\}_{j=0}^{j=k-1}$. Therefore, by minimizing the cost $\mathcal{L}_{w}$, we can find the $k$-th excited state by a single optimization process. Note that the overall time required for the optimization might increase, due to the more complicated landscape of the cost function.
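The mechanism behind the weighted cost can be seen in a minimal classical sketch for a single qubit; the $2 \times 2$ Hamiltonian and the grid optimizer below are illustrative assumptions, not part of the original proposal. Writing $e_j(\theta)$ for the diagonal entries of $U^\dagger(\theta) H U(\theta)$, the sum $e_0(\theta) + e_1(\theta) = \operatorname{tr} H$ is fixed, so minimizing $w\,e_1 + e_0$ for any $0 < w < 1$ simultaneously drives $e_0$ down to $E_0$ and $e_1$ up to $E_1$:

```python
import math

# Real symmetric "Hamiltonian" H = [[1, 0.5], [0.5, -1]] with spectrum +/- sqrt(1.25).
a, b, c = 1.0, 0.5, -1.0
E0, E1 = -math.sqrt(1.25), math.sqrt(1.25)

def diagonal_energies(theta):
    """Diagonal entries of U(theta)^T H U(theta) for the rotation U(theta)."""
    ct, st = math.cos(theta), math.sin(theta)
    e0 = a * ct * ct + 2 * b * ct * st + c * st * st   # <0|U^T H U|0>
    e1 = a * st * st - 2 * b * ct * st + c * ct * ct   # <1|U^T H U|1>
    return e0, e1

w = 0.5   # any weight in (0, 1) yields the same optimum
candidates = (diagonal_energies(math.pi * n / 2000) for n in range(2000))
_, e0_opt, e1_opt = min((w * e1 + e0, e0, e1) for e0, e1 in candidates)
# At the optimum, |0> is mapped to the ground state and |1> to the excited state.
```

Because the cost equals $\operatorname{tr} H + (w - 1)\,e_1$, its minimum maximizes $e_1$, which is exactly the single-optimization mechanism described above.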
\subsection{Weighted SSVQE for finding up to the $k$-th excited states}\label{sec:WSSVQE} We further generalize the above argument and propose an algorithm for finding all excited states of a given Hamiltonian up to the $k$-th with only one optimization procedure.
\textbf{Algorithm:} \begin{enumerate}
\item Construct an ansatz circuit $U(\bm{\theta})$ and choose input states $\left\{\ket{\varphi_j}\right\}_{j=0}^k$ which are mutually orthogonal ($\braket{\varphi_i|\varphi_j}=\delta_{ij}$).
\item Minimize $\mathcal{L}_{\bm{w}}(\bm{\theta}) = \sum_{j=0}^k w_j \braket{\varphi_j|U^\dagger(\bm{\theta})HU(\bm{\theta})|\varphi_j}$, where the weight vector $\bm{w}$ is chosen such that $w_i > w_j$ when $i < j$. \end{enumerate}
The weight vector introduced here has the effect of choosing which $\ket{\varphi_j}$ is converted to which excited state. It is easy to see that, when the cost $\mathcal{L}_{\bm{w}}$ reaches its global optimum, the circuit $U(\bm{\theta})$ becomes a unitary which maps $\ket{\varphi_j}$ to the $j$-th excited state $\ket{E_j}$ of the Hamiltonian for each $j \in \{0, 1, \cdots, k \}$. In this case, too, note that the overall time required for the optimization might increase, for the same reason as in the previous section.
\section{Related works}\label{sec:relatedworks}
In this section, we first overview previous works, and then point out the advantages of our methods over them.
Ref.~\cite{Santagati2018} has proposed a method which hybridizes the quantum phase estimation algorithm and the VQE. Although it is experimentally demonstrated~\cite{Santagati2018}, the method is unlikely to be implemented on a NISQ device, due to the need for the controlled time evolution.
In Ref.~\cite{Colless2018}, a method called quantum subspace expansion has been proposed. The algorithm first finds the ground state $\ket{E_0}$ by the usual VQE protocol, and then measures the matrix elements of the Hamiltonian with respect to the space spanned by $\left\{O_\alpha\ket{E_0}\right\}$, where $\{O_\alpha\}$ is a set of excitation operators. The diagonalization of the matrix, which is done classically, can determine the approximate eigenvalue spectra. For $\{O_\alpha\}$, they have used the set of one-electron excitation operators $\{a_v^\dagger a_o\}$, where $a_i^\dagger$ and $a_j$ are the fermionic creation and annihilation operators respectively, with the indices running over all possible values.
The constrained VQE proposed in Ref.~\cite{Ryabinkin2018} can also be used for finding a certain set of excited states. They proposed a way to introduce constraints, such as the number of electrons or the overall spin of the system, on the VQE. The introduction of the constraints is done by adding the penalty term to the cost function. Their method finds the lowest energy state under the constraints. Since the difference in the constraints, such as the difference in the number of electrons or the overall spins, generally changes the energy of the system, it can be utilized to find a certain set of excited states.
Ref.~\cite{Higgott2018} has recently proposed an inductive method which adds a penalty term to ensure the orthogonality of the ansatz state with respect to the low-lying states. To be more concrete, to find the $k$-th excited state, they use $\expect{H(\bm{\theta}_k)} + \sum_{i=0}^{k-1} \beta_i \left|\braket{\psi(\bm{\theta}_k)|\psi(\bm{\theta}_i^*)}\right|^2$, where $\bm{\theta}_i^*$ are the optimal parameters for the $i$-th excited state and $\beta_i$ is a hyperparameter that determines the strength of the penalty, as the target cost function to be minimized by tuning $\bm{\theta}_k$. To estimate the overlap, their method uses the swap test, which requires us to double the number of qubits and add extra gates. Their method works well when the hyperparameter $\beta_i$ is set properly, as shown in Ref.~\cite{Higgott2018}. Ref.~\cite{Endo2018_2} has enabled the optimization in this approach by the imaginary time evolution of the parameters.
The advantages of our methods, when compared to the methods above, are as follows. \begin{enumerate}
\item The energy spectrum found by the SSVQE or the weighted SSVQE is exact when $U(\bm{\theta})$ and $V(\bm{\phi})$ have the ability to represent the exact unitary which maps the $k$ input states to the $k$ eigenstates of the Hamiltonian.
\item The swap test is not employed and thus easily implementable on the NISQ devices.
\item In the SSVQE, there are no hyperparameters.
\item In the weighted SSVQE, the results are unique regardless of the values of the weights, provided that they satisfy the stated conditions.
\item Optimization runs only twice for the SSVQE and only once for the weighted SSVQE. \end{enumerate} \section{Calculation of transition matrix elements}\label{sec:application}
It is possible to measure a transition amplitude of an operator $A$, $\braket{E_i|A|E_j}$, using the result of the SSVQE. Note that $\braket{E_i|A|E_j} = \braket{\varphi_i|U^\dagger(\bm{\theta}^*) A U(\bm{\theta}^*) |\varphi_j}$ where $U(\bm{\theta}^*)$ is the optimized unitary. We can measure this by expanding it as: \begin{align}
\mathrm{Re}(\braket{\varphi_i|U^\dagger(\bm{\theta}^*) A U(\bm{\theta}^*) |\varphi_j}) &=
\braket{+^x_{ij}|U^\dagger(\bm{\theta}^*) A U(\bm{\theta}^*) |+^x_{ij}} \nonumber\\
&\quad - \frac{1}{2}\braket{\varphi_i|U^\dagger(\bm{\theta}^*) A U(\bm{\theta}^*) |\varphi_i} \nonumber\\
&\quad - \frac{1}{2}\braket{\varphi_j|U^\dagger(\bm{\theta}^*) A U(\bm{\theta}^*) |\varphi_j} \\
\mathrm{Im}(\braket{\varphi_i|U^\dagger(\bm{\theta}^*) A U(\bm{\theta}^*) |\varphi_j}) &=
\braket{+^y_{ij}|U^\dagger(\bm{\theta}^*) A U(\bm{\theta}^*) |+^y_{ij}} \nonumber\\
&\quad - \frac{1}{2}\braket{\varphi_i|U^\dagger(\bm{\theta}^*) A U(\bm{\theta}^*) |\varphi_i} \nonumber\\
&\quad - \frac{1}{2}\braket{\varphi_j|U^\dagger(\bm{\theta}^*) A U(\bm{\theta}^*) |\varphi_j} \end{align} where $\ket{+^x_{ij}} = (\ket{\varphi_i} + \ket{\varphi_j})/\sqrt{2}$ and $\ket{+^y_{ij}} = (\ket{\varphi_i} - i\ket{\varphi_j})/\sqrt{2}$; the relative phase $-i$ in $\ket{+^y_{ij}}$ is what makes the second expansion produce the imaginary part for a Hermitian observable $A$. Recall that in practice the input states are chosen from simple states such as the computational basis, and therefore we assume that the superpositions like $\ket{+^x_{ij}}$ can easily be prepared. Each term of the above equations is measured separately on the NISQ device, and the results are summed up on a classical computer.
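The real-part expansion can be verified directly on a toy two-dimensional example. The sketch below (the matrix values are arbitrary illustrative choices, with $\mathrm{Re}\,A_{ij} = 0.3$) checks it for a Hermitian $A$ restricted to the subspace spanned by the two input states:

```python
# Hermitian observable restricted to span{|phi_i>, |phi_j>}; values are arbitrary.
A = [[0.7, 0.3 - 0.4j], [0.3 + 0.4j, -1.1]]

def expval(v):
    """Expectation <v|A|v> (real because A is Hermitian)."""
    w = [A[0][0] * v[0] + A[0][1] * v[1],
         A[1][0] * v[0] + A[1][1] * v[1]]
    return (v[0].conjugate() * w[0] + v[1].conjugate() * w[1]).real

s = 2 ** -0.5
phi_i, phi_j = [1.0, 0.0], [0.0, 1.0]
plus_x = [s, s]   # (|phi_i> + |phi_j>)/sqrt(2)

re_part = expval(plus_x) - 0.5 * expval(phi_i) - 0.5 * expval(phi_j)
# re_part reproduces Re <phi_i|A|phi_j>; the imaginary part follows analogously
# from the corresponding |+y> superposition.
```

Only expectation values of $A$ in easily prepared states appear, which is what makes the scheme compatible with near-term hardware.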
\section{Numerical Simulation}\label{sec:numerical_simulation}
Here we numerically simulate our algorithms with 4-qubit Hamiltonians. \Cref{fig:gates} shows the variational ansatz used in the simulations. We chose the input states as $\left\{\ket{\varphi_j}\right\} = \left\{\ket{0000}, \ket{0001}, \ket{0010}, \ket{0011} \right\}$. The depth $D_1$ is set to $D_1 = 2$ for all of them. $D_2$ is set to $D_2=6$ for the SSVQE and the weighted SSVQE for finding the $k$-th excited states, and $D_2=8$ for the weighted SSVQE for finding all the excited states up to the $k$-th. The initial values of the parameters were randomly sampled from a uniform distribution $[0, 2\pi)$. For each simulation, the optimization was run 10 times starting from different initial values. The results shown in the following sections are the ones which achieved the lowest value of the cost function among those 10 runs. We used the BFGS method~\cite{Nocedal2006} implemented in the SciPy library \cite{SciPy} for the optimization of the parameters.
\begin{figure}\label{fig:gates}
\end{figure}
\subsection{Transverse Ising model}
First, we demonstrate our idea with a Hamiltonian of the fully connected transverse Ising model: \begin{equation}
H = \sum_{i=1}^N a_i X_i + \sum_{i=1}^N \sum_{j=1}^{i-1} J_{ij} Z_i Z_j, \end{equation} with $N = 4$. The coefficients $a_i$ and $J_{ij}$ are sampled randomly from a uniform distribution on $[0, 1)$. In this subsection, we use a single Hamiltonian, with fixed coefficients, as an example. All experiments were conducted for the case of $k = 3$.
\subsubsection{SSVQE} The SSVQE can find the $k$-th excited state with only two optimization procedures.
\Cref{fig:SS3_ising_step1} shows the first optimization process of $\bm{\theta}$ to minimize $\mathcal{L}_1(\bm{\theta})$. In \cref{fig:SS3_ising_step1}, the fidelity is defined by the overlap between the space spanned by $\left\{\ket{E_j}\right\}_{j=0}^{3}$ and the output of the quantum circuit $\left\{U(\bm{\theta})\ket{\varphi_j}\right\}_{j=0}^3$, namely, $\frac{1}{4}\sum_{i=0}^{3}\sum_{j=0}^{3}\left|\braket{E_i|U(\bm{\theta})|\varphi_j}\right|^2$. We see that, as the cost function gets close to its global minimum, the fidelity approaches unity as expected.
\Cref{fig:SS3_ising_step2} shows the second process of optimizing $\bm{\phi}$ to minimize $\mathcal{L}_2(\bm{\phi})$. Here the fidelity is defined by $\left|\braket{E_3|U(\bm{\theta^*})V(\bm{\phi})|\varphi_3}\right|^2$. One can see the subspace-search approach works well from \cref{fig:SS3_ising_step2}.
\begin{figure}
\caption{
Step 1 of the SSVQE to find the third excited state of a transverse Ising model. (black dashed line) $\mathrm{Avg.}(E_{0,1,2,3}) = \frac{1}{4}\sum_{k=0}^3 E_k$, which is the globally optimal value of $\mathcal{L}_1/4$ in this case.
(red solid lines) The evolution of $\mathcal{L}_1/4$ and the fidelity (see the main text for the definition) during the optimization process.
}
\label{fig:SS3_ising_step1}
\end{figure} \begin{figure}
\caption{
Step 2 of the SSVQE to find the third excited state of a transverse Ising model.
(red solid lines) The evolution of $\mathcal{L}_2$ and the fidelity (see the main text for the definition) during the optimization process.
}
\label{fig:SS3_ising_step2}
\end{figure}
\subsubsection{Weighted SSVQE for finding the $k$-th excited state} The method described in \cref{sec:kth-WSSVQE} can find the $k$-th excited state with only one optimization run. Here, we chose $w = 0.5$ as the weight.
\Cref{fig:E3_ising} shows the optimization process of $\bm{\theta}$ to minimize $\mathcal{L}_w(\bm{\theta})$. Here the fidelity is defined by $\left|\braket{E_3|U(\bm{\theta})|\varphi_3}\right|^2$. In this case, too, the algorithm succeeds in finding the third excited state of the Hamiltonian. However, the number of iterations until convergence is larger than the total number of iterations of the simple SSVQE, which might be attributed to a more complicated landscape of the cost function.
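The role of the weight $0 < w < 1$ can be illustrated combinatorially. Viewing the cost as an assignment of eigenvalues to input states, the minimum of $\sum_{j<k} E_{\sigma(j)} + w E_{\sigma(k)}$ selects the $k+1$ lowest eigenvalues and places the largest of them, $E_k$, in the weighted slot. A small brute-force check (our illustration, with made-up eigenvalues):

```python
# Brute-force check of the weighted cost for finding the k-th excited
# state (k = 3, w = 0.5): over all assignments of 4 out of 6 candidate
# eigenvalues to the input states, the minimizer puts the three lowest
# eigenvalues in the unweighted slots and E_3 = 2.5 in the weighted one.
from itertools import permutations

energies = (-1.3, 0.2, 0.9, 2.5, 3.1, 4.0)  # illustrative spectrum
w = 0.5

best = min(permutations(energies, 4),
           key=lambda p: sum(p[:3]) + w * p[3])
print(best)  # -> (-1.3, 0.2, 0.9, 2.5)
```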
\begin{figure}\caption{The optimization process of the weighted SSVQE for finding the third excited state of a transverse Ising model.}\label{fig:E3_ising}
\end{figure}
\subsubsection{Weighted SSVQE} The weighted SSVQE described in \cref{sec:WSSVQE} can find all excited states up to the $k$-th at once. Here, we chose $\bm{w} = (4, 3, 2, 1)$ as the weight vector. \Cref{fig:E03_ising} shows the optimization process of $\bm{\theta}$ to minimize $\mathcal{L}_{\bm{w}}(\bm{\theta})$. From \cref{fig:E03_ising}, one can see that this approach indeed finds all the desired excited states at once. The number of iterations until convergence is almost the same as in the previous section.
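The strictly decreasing weights are what enforce the ordering: by the rearrangement inequality, $\sum_j w_j E_{\sigma(j)}$ with $w_0 > w_1 > \cdots > w_k$ is minimized when the eigenvalues are assigned in ascending order, so that $\ket{\varphi_j}$ is mapped to $\ket{E_j}$ for every $j$. A quick check with our weight vector (and made-up eigenvalues):

```python
# Check of the ordering argument behind the weighted SSVQE cost
# L_w(theta): with strictly decreasing weights w = (4, 3, 2, 1), the
# assignment of eigenvalues minimizing sum_j w_j * E_sigma(j) is the
# ascending one, pairing the largest weight with the lowest energy.
from itertools import permutations

w = (4, 3, 2, 1)
energies = (-1.3, 0.2, 0.9, 2.5)  # illustrative eigenvalues E_0 < ... < E_3

best = min(permutations(energies),
           key=lambda p: sum(wi * ei for wi, ei in zip(w, p)))
print(best)  # -> (-1.3, 0.2, 0.9, 2.5), i.e. ascending order
```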
\begin{figure}\caption{The optimization process of the weighted SSVQE for finding all excited states up to the third of a transverse Ising model.}\label{fig:E03_ising}
\end{figure}
\subsection{Helium hydride} Next, we apply our idea to the molecular Hamiltonians of $\text{HeH}$, each taken at a fixed distance between the two atoms. Our ansatz (\cref{fig:gates}) does not enforce conservation of the number of electrons, and therefore the calculated excited states can have different numbers of electrons. The molecular Hamiltonians are calculated with OpenFermion and OpenFermion-Psi4~\cite{McClean2017, Parrish2017}. We used the STO-3G minimal basis set and therefore obtained 4-qubit Hamiltonians. We calculated the Hamiltonians at 24 different bond lengths and performed the VQE at each point. In the weighted SSVQE simulations, we used the same weights as in the previous section.
The result of the SSVQE is shown in \cref{fig:HeH_SSVQE}, and that of the weighted SSVQE for finding the $k$-th excited state is shown in \cref{fig:HeH_WSSVQE_only_k}. Both results agree well with the exact values of the third excited state at each bond length.
\begin{figure}
\caption{
The energy levels of the Hamiltonian of $\mathrm{HeH}$ and the calculated energy of the third excited state using the SSVQE.
}
\label{fig:HeH_SSVQE}
\end{figure}
\begin{figure}
\caption{
The energy levels of the Hamiltonian of $\mathrm{HeH}$ and the predicted energy of the third excited state using the weighted SSVQE.
}
\label{fig:HeH_WSSVQE_only_k}
\end{figure}
Next, we used the weighted SSVQE described in \cref{sec:WSSVQE} to find all excited states up to the third. The result is shown in \cref{fig:HeH_WSSVQE}. One can see that the energy eigenvalues are well approximated by the optimized output of the weighted SSVQE.
\begin{figure}
\caption{
The energy levels of the Hamiltonian of $\mathrm{HeH}$ and the predicted energy of the excited states up to the third using the weighted SSVQE.
(a)
ground state
(b)
1st excited state
(c)
2nd excited state
(d)
3rd excited state
}
\label{fig:HeH_WSSVQE_0}
\label{fig:HeH_WSSVQE_1}
\label{fig:HeH_WSSVQE_2}
\label{fig:HeH_WSSVQE_3}
\label{fig:HeH_WSSVQE}
\end{figure}
\section{Conclusion} In this work, we proposed efficient algorithms for finding excited states of a given Hamiltonian, extending the framework of the VQE. The proposed method ensures the orthogonality of the states at the input of the ansatz circuit. By minimizing a carefully designed cost function with respect to the parameters of the quantum circuit, we can map each of the orthogonal input states onto one of the energy eigenstates. Our algorithms require, in principle, only one or two optimization runs, and can find one or more arbitrary excited states. We believe that this work greatly extends the practicability of the VQE for finding excited states.
\end{document}
\begin{document}
\begin{abstract} A relatively hyperbolic group $G$ is said to be QCERF if all finitely generated relatively quasiconvex subgroups are closed in the profinite topology on $G$.
Assume that $G$ is a QCERF relatively hyperbolic group with double coset separable (e.g., virtually polycyclic) peripheral subgroups. Given any two finitely generated relatively quasiconvex subgroups $Q,R \leqslant G$ we prove the existence of finite index subgroups $Q'\leqslant_f Q$ and $R' \leqslant_f R$ such that the join $\langle Q',R'\rangle$ is again relatively quasiconvex in $G$. We then show that, under the minimal necessary hypotheses on the peripheral subgroups, products of finitely generated relatively quasiconvex subgroups are closed in the profinite topology on \(G\). From this we obtain the separability of products of finitely generated subgroups for several classes of groups, including limit groups, Kleinian groups and balanced fundamental groups of finite graphs of free groups with cyclic edge groups. \end{abstract}
\keywords{Relatively hyperbolic groups, relatively quasiconvex subgroups, virtual joins, double coset separability, product separability, limit groups, Kleinian groups} \subjclass[2020]{20F67, 20F65, 20E26, 20H10}
\maketitle
\setcounter{tocdepth}{1} \tableofcontents
\section{Introduction} Any group can be equipped with the \emph{profinite topology}, whose basic open sets are cosets of finite index subgroups. A subset of a group is said to be \emph{separable} if it is closed in the profinite topology. The trivial subgroup of a group $G$ is separable if and only if the profinite topology is Hausdorff; in this case $G$ is said to be \emph{residually finite}. If every finitely generated subgroup of $G$ is separable then $G$ is called \emph{LERF} (or \emph{subgroup separable}), and if the product of any two finitely generated subgroups is separable, $G$ is said to be \emph{double coset separable}.
In this paper we will be interested in various separability properties of relatively hyperbolic groups. The notion of a relatively hyperbolic group was originally suggested by Gromov \cite{Gromov1987} as a generalisation of word hyperbolic groups. The concept was further developed by Farb \cite{FarbRHG}, Bowditch \cite{BowditchRHG}, Dru\c{t}u-Sapir \cite{DS}, Osin \cite{OsinRHG} and Groves-Manning \cite{GrovesManning}, whose various definitions were later shown to be equivalent by Hruska \cite{HruskaRHCG}. Relative hyperbolicity is a relative property of a group \(G\) in the sense that one must specify a collection of \emph{peripheral subgroups} \(\{H_\nu \mid \nu \in \Nu\}\) with respect to which \(G\) is relatively hyperbolic (see Definition~\ref{def:rh_gp}). Typical examples of relatively hyperbolic groups include geometrically finite Kleinian groups, fundamental groups of finite volume manifolds of pinched negative curvature, and small cancellation quotients of free products. Respectively, these groups are hyperbolic relative to their maximal parabolic subgroups, their cusp subgroups and the images of the free factors (see, for example, \cite{OsinRHG}).
\subsection{Quasiconvexity of virtual joins} Since general finitely generated subgroups of word hyperbolic (relatively hyperbolic) groups can be quite wild and need not be separable, it is customary to restrict one's attention to quasiconvex (respectively, relatively quasiconvex) subgroups.
\emph{Quasiconvex subgroups} play a central role in the study of word hyperbolic groups. They are precisely the finitely generated quasi-isometrically embedded subgroups, and, hence, they are hyperbolic themselves and are generally well-behaved.
If $Q$ and $R$ are two quasiconvex subgroups of a hyperbolic group $G$ then the intersection $S=Q \cap R$ is also quasiconvex (\cite{Short}) but the join $\langle Q, R \rangle$ need not be. This can be remedied by considering a \emph{virtual join} of $Q$ and $R$, which is defined as $\langle Q',R' \rangle$, for some finite index subgroups $Q' \leqslant_f Q$ and $R' \leqslant_f R$. The existence of a quasiconvex virtual join $\langle Q',R' \rangle$ was proved by Gitik \cite{Gitik_ping-pong} under the assumption that $S=Q \cap R$ is separable in $G$. More precisely, Gitik's theorem states that there exist finite index subgroups $Q' \leqslant_f Q$ and $R' \leqslant_f R$ such that $Q'\cap R'=S$ and the virtual join $\langle Q',R' \rangle$ is quasiconvex in $G$; moreover, $\langle Q',R' \rangle$ will be naturally isomorphic to the amalgamated free product $Q'*_S R'$. This theorem was an important ingredient in the proof that double cosets of quasiconvex subgroups are separable in LERF hyperbolic groups (see \cite{Gitik-double_coset_sep,MinGFERF}).
In the setting of relatively hyperbolic groups, the natural sub-objects are the \emph{relatively quasiconvex subgroups}, which are themselves relatively hyperbolic in a way that is compatible with the ambient group \cite{HruskaRHCG}. Basic examples of relatively quasiconvex subgroups are \emph{maximal parabolic subgroups} (i.e., conjugates of the peripheral subgroups), \emph{parabolic subgroups} (subgroups of maximal parabolics) and finitely generated undistorted (i.e., quasi-isometrically embedded) subgroups \cite{HruskaRHCG}.
In \cite{HruskaRHCG} Hruska proved that the intersection of two relatively quasiconvex subgroups is again relatively quasiconvex. However, until now the existence of a relatively quasiconvex virtual join $\langle Q',R'\rangle$, for two relatively quasiconvex subgroups \(Q\) and \(R\) in a relatively hyperbolic group \(G\), such that $S=Q \cap R$ is separable in $G$, was only known in special cases:
\begin{itemize}
\item Mart\'{i}nez-Pedroza \cite{MPComb} proved it in the case when $R \leqslant P$, for some maximal parabolic subgroup $P$ of $G$, such that $Q \cap P \subseteq R$;
\item Mart\'{i}nez-Pedroza and Sisto \cite{MPS} proved it when $Q$ and $R$ have \emph{compatible parabolics} (i.e., for every maximal parabolic subgroup $P$ of $G$ either $Q \cap P \subseteq R \cap P$ or $R \cap P \subseteq Q \cap P$);
\item Yang \cite{Yang} (unpublished; see also McClellan's thesis \cite{McCl}) proved it when $R$ is a \emph{full subgroup} of $G$ (i.e., for every maximal parabolic subgroup $P$ in $G$, $R \cap P$ is either finite or has finite index in $P$). \end{itemize}
Similarly to Gitik's theorem \cite{Gitik_ping-pong}, in all three cases above the authors establish an isomorphism between the virtual join $\langle Q',R' \rangle$ and the amalgamated free product $Q'*_{S'} R'$, where $S'=Q' \cap R' \leqslant_f S$.
The extra assumptions on $Q$ and $R$ in each of the above results from \cite{MPComb,MPS,Yang,McCl} imply that $Q$ and $R$ have \emph{almost compatible parabolics} (see Definition~\ref{def:almost_compatible} below). Unfortunately this is still a significant restriction and a more general result is desirable. Moreover, in the absence of almost compatibility one cannot expect a virtual join to split as an amalgamated free product of $Q'$ and $R'$. Indeed, for example if both $Q$ and $R$ are subgroups of an abelian peripheral subgroup of $G$ then any virtual join $\langle Q',R' \rangle$ would again be abelian.
One of the goals of the present paper is to establish quasiconvexity of virtual joins without making any compatibility assumptions on $Q$ and $R$. However we need to impose stronger assumptions on the properties of the profinite topology on $G$ than just separability of $S=Q \cap R$: we will require the finitely generated relatively quasiconvex subgroups to be separable and the peripheral subgroups to be {double coset separable}.
\begin{definition}[QCERF] \label{def:QCERF}
We will say that a relatively hyperbolic group $G$ is \emph{QCERF} if every finitely generated relatively quasiconvex subgroup in $G$ is separable. \end{definition}
\begin{theorem} \label{thm:sep->qc_intro}
Let \(G\) be a finitely generated relatively hyperbolic group. Suppose that $G$ is QCERF and the peripheral subgroups of $G$ are double coset separable.
If \(Q, R \leqslant G\) are finitely generated relatively quasiconvex subgroups and $S=Q \cap R$ then there exist finite index subgroups $Q'\leqslant_f Q$ and $R' \leqslant_f R$, with \(Q' \cap R' = S\), such that the virtual join $\langle Q',R'\rangle$ is relatively quasiconvex in $G$.
More precisely, there exists $L \leqslant_f G$, with $S \subseteq L$, such that for any $L' \leqslant_f L$, satisfying $S \subseteq L'$, we can choose $Q'=Q\cap L' \leqslant _f Q$, and there exists $M \leqslant_f L'$, with $Q' \subseteq M$, such that for any $M' \leqslant_f M$, satisfying $Q' \subseteq M'$, we can choose $R'=R \cap M' \leqslant_f R$. \end{theorem}
One can observe that the choice of $R' \leqslant_f R$ in the above theorem depends on the choice of $Q' \leqslant_f Q$. In the case when the peripheral subgroups are abelian the situation is easier:
\begin{theorem} \label{thm:sep->qc_for_ab_parab}
Let \(G\) be a finitely generated group hyperbolic relative to a finite collection of abelian subgroups. Assume that $G$ is QCERF.
If \(Q, R \leqslant G\) are relatively quasiconvex subgroups and \(S = Q \cap R\) then there exists a finite index subgroup $L \leqslant_f G$, with $S \subseteq L$, such that the virtual join $\langle Q',R'\rangle$ is relatively quasiconvex in $G$, for arbitrary subgroups $Q' \leqslant_f Q \cap L$ and $R' \leqslant_f R \cap L$, satisfying $Q' \cap R'=S$. \end{theorem}
In fact, one can slightly weaken the assumptions in Theorem~\ref{thm:sep->qc_for_ab_parab} by requiring the peripheral subgroups of $G$ to be virtually abelian instead of abelian: see Corollary~\ref{cor:virt_ab_periph}.
Let us now discuss the assumptions of Theorem~\ref{thm:sep->qc_intro}. QCERF-ness of $G$ is a natural strengthening of the separability of $S$, and it is unclear how restrictive it is. Indeed, Manning and Mart\'{i}nez-Pedroza \cite{MMPSep} proved that the following two statements are equivalent:
\begin{itemize}
\item[(a)] every finitely generated group hyperbolic relative to a finite collection of LERF and slender subgroups is QCERF;
\item[(b)] all word hyperbolic groups are residually finite. \end{itemize} Recall that a group is called \emph{slender} if every subgroup of it is finitely generated. The question of whether statement (b) is true is a well-known open problem. If the answer is positive then, for example, all finitely generated groups hyperbolic relative to virtually polycyclic subgroups will be QCERF.
Large classes of relatively hyperbolic groups have already been proved to be QCERF. One of the first results in this direction is due to Wilton \cite{WiltonLimitGps}, who established QCERF-ness of limit groups. The ground-breaking work of Haglund and Wise \cite{Haglund-Wise} and Agol \cite{Agol} implies that any word hyperbolic group acting geometrically on a CAT($0$) cube complex is QCERF. One of the consequences of this result is that all finitely generated Kleinian groups are QCERF. More recently, Einstein and Groves \cite{Einstein-Groves} and Groves and Manning \cite{Grov-Man-spec} extended this theory to relatively hyperbolic groups acting (weakly) relatively geometrically on CAT($0$) cube complexes. Einstein and Ng \cite{Einstein-Ng} used it to show that full relatively quasiconvex subgroups of $C'(1/6)$-small cancellation quotients of free products of residually finite groups are separable. In the case when the free factors are LERF and slender the latter result can be combined with a theorem of Manning and Mart\'{i}nez-Pedroza \cite[Theorem~1.7]{MMPSep} to conclude that such small cancellation free products are QCERF.
By a theorem of Lennox and Wilson \cite{L-W} all virtually polycyclic groups are double coset separable, hence the assumption about peripheral subgroups in Theorem~\ref{thm:sep->qc_intro} is automatically true in many relevant cases. However whether this assumption is actually necessary is less obvious. It is required in our approach, but it would be interesting to see whether the theorem remains valid without it. As expected from the results in \cite{MPS, Yang} it is not needed if the relatively quasiconvex subgroups $Q$ and $R$ have almost compatible parabolics: see Theorem~\ref{thm:almost_compat->qc_comb} below.
\subsection{Separability of double cosets}\label{subsec:1.2} In group theory, knowing that double cosets of certain subgroups are separable is often quite useful. For example, the separability of double cosets of hyperplane subgroups was used by Haglund and Wise in \cite{Haglund-Wise} to give a criterion for virtual specialness of a compact non-positively curved cube complex. Separability of double cosets of abelian subgroups in Kleinian groups was an important ingredient in the theorem of Hamilton, Wilton and Zalesskii \cite{HWZ} that fundamental groups of compact orientable $3$-manifolds are conjugacy separable.
Double coset separability of free groups was first proved by Gitik and Rips \cite{Gitik-Rips}. Shortly after, Niblo \cite{Niblo} came up with a new criterion for separability of double cosets and applied it to show that finitely generated Fuchsian groups and fundamental groups of Seifert-fibred $3$-manifolds are double coset separable. Separability of double cosets of quasiconvex subgroups in QCERF word hyperbolic groups was proved by the first author in \cite{MinGFERF}. Mart\'{i}nez-Pedroza and Sisto \cite{MPS} generalised this to double cosets of relatively quasiconvex subgroups with compatible parabolics in QCERF relatively hyperbolic groups; Yang \cite{Yang} and McClellan \cite{McCl} treated the case when at least one of the factors is full. Our proof of Theorem~\ref{thm:sep->qc_intro} almost immediately yields the following.
\begin{corollary} \label{cor:double_cosets_sep}
Let \(G\) be a finitely generated group hyperbolic relative to a finite collection of subgroups \(\lbrace H_\nu \, | \, \nu \in \Nu \rbrace\).
Suppose that \(G\) is QCERF and \(H_\nu\) is double coset separable, for every \(\nu \in \Nu\).
Then for all finitely generated relatively quasiconvex subgroups \(Q, R \leqslant G\), the double coset $QR$ is separable in $G$. \end{corollary}
In the case when the relatively hyperbolic group $G$ admits a weakly relatively geometric action on a CAT($0$) cube complex Corollary~\ref{cor:double_cosets_sep} was proved by Groves and Manning~\cite{Grov-Man-spec}. Groves and Manning's argument uses Dehn fillings to approximate $G$ by QCERF word hyperbolic groups, thus reducing the statement to separability of double cosets in hyperbolic groups from \cite{MinGFERF}. Our approach is completely different as we always work within $G$.
In the following definition we will use a preorder $\preccurlyeq$ on the set of subsets of a group $G$, introduced by the first author in \cite{Min-Some_props_of_subsets}: \[
\text{given }U, V \subseteq G \text{ we will write } U \preccurlyeq V \text{ if there exists a finite subset } Y \subseteq G \text{ such that } U \subseteq VY. \]
If $d_X$ is the word metric on $G$, corresponding to a finite generating set $X$, and $U,V$ are subsets of $G$ then $U \preccurlyeq V$ if and only if $U$ is contained in a finite $d_X$-neighbourhood of $V$. If $U$ and $V$ are subgroups of $G$ then $U \preccurlyeq V$ is equivalent to $|U:(U \cap V)|<\infty$ (see \cite[Lemma~2.1]{Min-Some_props_of_subsets}).
\begin{definition}[Almost compatible parabolics] \label{def:almost_compatible}
Let $Q$ and $R$ be subgroups of a relatively hyperbolic group $G$.
We will say that $Q$ and $R$ have \emph{almost compatible parabolics} if for every maximal parabolic subgroup $P$ of $G$ either $Q \cap P \preccurlyeq R \cap P$ or $R \cap P \preccurlyeq Q \cap P$. \end{definition}
Clearly if $G$ is a relatively hyperbolic group and $Q,R$ are subgroups with compatible parabolics then they have almost compatible parabolics. The same is true if at least one of $Q$, $R$ is a full subgroup of $G$.
In the case when the relatively quasiconvex subgroups $Q$ and $R$ have almost compatible parabolics the assumption that the peripheral subgroups $H_\nu$ are double coset separable can be dropped from Corollary~\ref{cor:double_cosets_sep}, allowing us to recover the double coset separability results from \cite{MPS,Yang,McCl}.
\begin{corollary} \label{cor:almost_comp->sep_dc}
Suppose that $G$ is a finitely generated QCERF relatively hyperbolic group.
If $Q$ and $R$ are finitely generated relatively quasiconvex subgroups of $G$ with almost compatible parabolics then the double coset $QR$ is separable in $G$. \end{corollary}
\subsection{Separability of products of quasiconvex subgroups} The third part of this paper is dedicated to proving separability for more general products $F_1 \dots F_s$, where $s \in \NN$ is arbitrary and $F_1, \dots,F_s$ are relatively quasiconvex subgroups in a relatively hyperbolic group.
\begin{definition}[{RZ}$_s$ and product separability]
Let \(P\) be a group and let \(s \in \NN\).
We say that $P$ has property \emph{{RZ}$_s$} if for arbitrary finitely generated subgroups \(E_1, \dots, E_s \leqslant P\) the product \(E_1 \dots E_s\) is separable in \(P\).
If \(P\) has property {RZ}\(_{s}\) for all \(s \in \NN\), we say that \(P\) is \emph{product separable}. \end{definition}
Thus RZ$_1$ means that the group is LERF and RZ$_2$ is equivalent to double coset separability. The definition of {RZ}$_s$ is due to Coulbois \cite{Coulb}; he named it after Ribes and Zalesskii, who proved in \cite{RibesZal} that free groups are product separable, confirming a conjecture of Pin and Reutenauer from \cite{PinReute}. Pin and Reutenauer showed that product separability of free groups implies Rhodes’ type II conjecture from semigroup theory (see \cite{PinReute,RhodesConj} for the background).
In \cite{MinGFERF}, generalising the result of \cite{RibesZal}, the first author proved that the product of finitely many quasiconvex subgroups is separable in a QCERF word hyperbolic group. Moreover, in \cite{Coulb} Coulbois showed that, for every $s \in \NN$, free products of groups with property {RZ}\(_s\) also have property {RZ}\(_s\). Taken together, these facts motivate the following theorem.
\begin{theorem} \label{thm:RZs}
Let \(G\) be a finitely generated group hyperbolic relative to a finite collection of subgroups \(\{ H_\nu \, | \, \nu \in \Nu \}\), and let $s \in \NN$.
Suppose that \(G\) is {QCERF} and \(H_\nu\) has property RZ$_s$, for each \(\nu \in \Nu\).
If \(F_1, \dots, F_s \leqslant G\) are finitely generated relatively quasiconvex subgroups of \(G\), then the product \(F_1 \dots F_s\) is separable in \(G\). \end{theorem}
We note that separability of products of full relatively quasiconvex subgroups in a QCERF relatively hyperbolic group was proved by McClellan \cite{McCl}.
Finitely generated virtually abelian groups are product separable. Therefore, Theorem~\ref{thm:RZs} applies to finitely generated QCERF relatively hyperbolic groups with virtually abelian peripheral subgroups. Examples of such groups include limit groups, geometrically finite Kleinian groups and $C'(1/6)$-small cancellation quotients of free products of finitely generated virtually abelian groups (see \cite{Oregon-Reyes}). We discuss some applications of Theorem~\ref{thm:RZs} in Subsection~\ref{subsec:prod_sep}, and give a brief outline of the proof at the beginning of Part~\ref{part:multicosets}.
\section{Applications} \label{sec:applications} In this section we list some applications of the main results from the Introduction.
\subsection{Geometrically finite virtual joins}\label{subsec:geom_fin_joins} A \emph{Kleinian group} is a discrete subgroup of the group \(\mathrm{Isom}(\HH^3)\) of (orientation-preserving) isometries of real hyperbolic $3$-space. Recall that a Kleinian group \(G\) has an induced action by homeomorphisms on the ideal boundary \(\partial \HH^3\) of hyperbolic space, under which the smallest \(G\)-invariant compact subset, \(\Lambda G\), is called its \emph{limit set}. A subgroup \(P \leqslant G\) is called \emph{parabolic} if it has a single fixed point \(p\) in \(\partial \HH^3\) and setwise fixes some horosphere centred at \(p\). We say that \(G\) is \emph{geometrically finite} if every point of \(\Lambda G\) is either a conical limit point or a bounded parabolic point (see \cite{Bowditch_GeomFin} for definitions). Examples of geometrically finite Kleinian groups include the fundamental groups of finite volume hyperbolic 3-manifolds.
As noted in the Introduction, geometrically finite Kleinian groups are hyperbolic relative to a collection of conjugacy class representatives of their maximal parabolic subgroups (which are virtually abelian). Moreover, the geometrically finite subgroups are exactly the relatively quasiconvex subgroups of geometrically finite Kleinian groups \cite{HruskaRHCG}.
Baker and Cooper \cite{Baker_Cooper} showed, using geometric methods, that if \(G\) is a finitely generated Kleinian group and \(Q\) and \(R\) are geometrically finite subgroups of \(G\) with compatible parabolics, then there are finite index subgroups \(Q' \leqslant_f Q\) and \(R' \leqslant_f R\) such that the join \(\langle Q', R' \rangle\) is geometrically finite. In \cite{MPS} Mart\'{i}nez-Pedroza and Sisto recover this result for geometrically finite Kleinian groups as a special case of their work, using techniques closer to those in the present paper. Using Theorem~\ref{thm:sep->qc_intro}, we are able to eliminate the hypothesis of compatible parabolic subgroups in these results:
\begin{corollary}\label{cor:geom_fin_combination}
Let \(G\) be a geometrically finite Kleinian group, and suppose that \(Q, R \leqslant G\) are geometrically finite subgroups of \(G\).
Then there are finite index subgroups \(Q' \leqslant_f Q\) and \(R' \leqslant_f R\) such that \(\langle Q', R' \rangle\) is a geometrically finite subgroup of \(G\). \end{corollary}
\begin{proof}
The group \(G\) is geometrically finite, so it is finitely generated \cite[Theorem~12.4.9]{Ratcliffe} and hyperbolic relative to a finite collection of finitely generated virtually abelian subgroups \cite{BowditchRHG,HruskaRHCG}.
Agol proved that all finitely generated Kleinian groups are LERF \cite[Corollary 9.4]{Agol};
in particular, this means that they are QCERF.
Therefore \(G\) is a QCERF relatively hyperbolic group with double coset separable peripheral subgroups. By Hruska's result \cite[Corollary~1.3]{HruskaRHCG}, a subgroup of $G$ is geometrically finite if and only if it is relatively quasiconvex.
We may now apply Theorem~\ref{thm:sep->qc_intro} to obtain the desired conclusion. \end{proof}
\subsection{Product separability} \label{subsec:prod_sep} Recall that a group $G$ is product separable if the product of finitely many finitely generated subgroups is closed in the profinite topology on $G$. Until now, few examples of groups were known to be product separable: free abelian groups, free groups \cite{RibesZal}, groups of the form $F \times \mathbb{Z}$, where $F$ is free \cite{You}, and locally quasiconvex LERF hyperbolic groups \cite{MinGFERF} (e.g., surface groups). Additionally, the class of product separable groups is closed under taking subgroups, finite index supergroups and free products \cite{Coulb}. However, this class is not closed under direct products (e.g., the direct product of two non-abelian free groups is not even LERF \cite{Al-Gre}). It also does not contain some polycyclic groups: in \cite{L-W} it was proved that the integral Heisenberg group $H_3(\mathbb{Z})$, which is polycyclic (in fact, finitely generated nilpotent of class $2$), is not product separable as it does not have property RZ$_3$.
We use Theorem~\ref{thm:RZs} to establish product separability for many more groups.
\begin{theorem} \label{thm:prod_sep}
The following groups are product separable:
\begin{itemize}
\item[(i)] limit groups;
\item[(ii)] finitely generated Kleinian groups;
\item[(iii)] fundamental groups of finite graphs of free groups with cyclic edge groups, as long as they are balanced.
\end{itemize} \end{theorem}
Recall that a group \(G\) is called a \emph{limit group} if it is finitely generated and fully residually free (i.e., for every finite subset \(A \subset G\), there is a free group $F$ and a homomorphism \(\varphi \colon G \to F\) that is injective when restricted to \(A\)). Limit groups played an important role in the solutions of Tarski's problems about the first order theory of free groups by Sela \cite{Sela} and Kharlampovich-Myasnikov \cite{K-M}.
Following Wise, we say that a group $G$ is \emph{balanced} if, for any $g \in G \setminus \{1\}$, $g^m$ being conjugate to $g^n$ implies that $n=\pm m$. In \cite{Wise-balanced}, Wise proved that the fundamental group $G$ of a finite graph of free groups with cyclic edge groups is LERF if and only if it is balanced, which holds if and only if $G$ does not contain any \emph{non-Euclidean} Baumslag-Solitar subgroups $BS(m,n)=\langle a,t\mid t a^m t^{-1}=a^n \rangle$, with $m, n \in \mathbb{Z}\setminus\{0\}$ and $n \neq \pm m$.
Part (iii) of Theorem~\ref{thm:prod_sep} generalises a result of Coulbois \cite[Theorem~5.18]{Coulbois-thesis}, who proved that the amalgamated free product of two free groups along a cyclic subgroup is product separable. Theorem~\ref{thm:prod_sep}(iii) also confirms (in a strong way) a conjecture of Hsu and Wise \cite[Conjecture~15.5]{Hsu-Wise}, which states that a balanced group splitting as a finite graph of free groups with cyclic edge groups is double coset separable.
\begin{corollary} Suppose that $G$ splits as a fundamental group of a finite graph of finitely generated free groups with cyclic edge groups. If $G$ is balanced then it is virtually compact special; in other words, $G$ has a finite index subgroup which is isomorphic to the fundamental group of a compact non-positively curved special cube complex (in the sense of Haglund and Wise \cite{Haglund-Wise}). \end{corollary}
\begin{proof} Hsu and Wise \cite[Theorem~10.4]{Hsu-Wise} proved that $G$ admits a proper cocompact action on a CAT($0$) cube complex $X$. By Theorem~\ref{thm:prod_sep}, $G$ is double coset separable, hence, by a result of Haglund and Wise \cite[Theorem~9.19]{Haglund-Wise}, $G$ has a finite index subgroup $K$ such that $K \backslash X$ is a special cube complex. \end{proof}
One of the original motivations for considering product separability of groups came from semigroups and automata theory. Pin and Reutenauer \cite{PinReute} used this property to characterise the profinitely closed rational subsets of free groups.
Recall that for a monoid \(M\), the \emph{rational subsets} \(\Rat(M) \subseteq 2^M\) form the smallest collection of subsets of \(M\) satisfying the following conditions:
\begin{enumerate}
\item \(\emptyset \in \Rat(M)\) and, for each \(m \in M\), \(\{m\} \in \Rat(M)\);
\item if \(A, B \in \Rat(M)\), then \(AB \in \Rat(M)\) and \(A \cup B \in \Rat(M)\);
\item if \(A \in \Rat(M)\), then \(A^* \in \Rat(M)\), where \(A^*\) is the submonoid of \(M\) generated by \(A\). \end{enumerate} We refer the reader to \cite{PinReute} for an account of the basic theory of rational subsets.
In a group $G$ it makes sense to consider the subgroup closure instead of the $*$-closure. Thus we define the set $\Rat^0(G) \subseteq 2^G$ as the smallest collection of subsets of $G$ containing all finite subsets, closed under finite unions, products and subgroup closure. It is easy to see that $\Rat^0(G)$ consists of all subsets of the form $gF_1 \dots F_s$, where $s \in \NN_0$, $g \in G$ and $F_1,\dots,F_s$ are finitely generated subgroups of $G$ (\cite[Proposition~2.2]{PinReute}). Evidently $\Rat^0(G) \subseteq \Rat(G)$; moreover, it is not difficult to show that $\Rat^0(G) = \Rat(G)$ if and only if $G$ is torsion.
The following theorem was proved by Pin and Reutenauer \cite[Corollary~2.5]{PinReute} in the case of free groups (see also \cite[Section 12.3]{Ribes-book} for a slightly different argument); however, the proof is readily seen to remain valid in all product separable groups.
\begin{theorem}[Pin-Reutenauer] \label{thm:P-R}
If $G$ is a product separable group then $\Rat^0(G)$ is precisely the class of all separable rational subsets of $G$. \end{theorem}
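To illustrate Theorem~\ref{thm:P-R}, consider the free group \(F=F(a,b)\), which is product separable by the classical theorem of Ribes and Zalesskii. The submonoid
\[
a^{*}=\{a^n \mid n \in \NN_0\} \subseteq F
\]
is a rational subset of \(F\) by definition, but it is not separable: in any finite quotient of \(F\) the image of \(a\) has some finite order \(m\), so the image of \(a^{-1}\) coincides with that of \(a^{m-1} \in a^{*}\), even though \(a^{-1} \notin a^{*}\). Hence \(a^{*} \notin \Rat^0(F)\) by Theorem~\ref{thm:P-R}, in line with the observation that \(\Rat^0(G) = \Rat(G)\) only if \(G\) is torsion.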
\begin{corollary} \label{cor:sep_rat_subsets}
If $G$ is a group from one of the classes (i)--(iii), described in Theorem~\ref{thm:prod_sep}, then the set of separable rational subsets of $G$ coincides with $\Rat^0(G)$. \end{corollary}
\section{Plan of the paper} \subsection{The metric quasiconvexity theorem}\label{subsec:3.1} Let $G$ be a relatively hyperbolic group generated by a finite set $X$, and let $Q, R$ be relatively quasiconvex subgroups of $G$. The technical heart of this paper is Theorem~\ref{thm:metric_qc} below, which, given some relatively quasiconvex subgroups $Q' \leqslant Q$ and $R' \leqslant R$, provides sufficient metric conditions for the relative quasiconvexity of the join $\langle Q',R' \rangle$.
\begin{definition}[$\minx$]
Let \(G\) be a group with finite generating set \(X\), and let \(Y \subseteq G\). Then we denote the number \(\min \lbrace \abs{g}_X \, | \, g \in Y \rbrace\) by \(\minx(Y)\), with the usual convention that the minimum over the empty set is $+\infty$. \end{definition}
Let $S=Q \cap R$ and $A \ge 0$ be some constant. We will be interested in finding subgroups $Q'\leqslant Q$ and $R' \leqslant R$ satisfying the following properties:
\begin{itemize}
\descitem{P1} if $Q'$ and $R'$ are relatively quasiconvex in $G$ then so is the subgroup \(\langle Q', R' \rangle\);
\descitem{P2} \(\minx\Bigl(\langle Q', R' \rangle \setminus S\Bigr) \geq A\);
\descitem{P3} \(\minx \Bigl(Q \langle Q', R' \rangle R \setminus QR \Bigr) \geq A\). \end{itemize}
\begin{remark} \phantom{a} \begin{itemize}
\item Observe that quasiconvexity of $Q'$ and $R'$ is only required in property \descref{P1}.
\item Property \descref{P2} says that all ``short'' elements of $\langle Q',R'\rangle$ belong to $S$.
\item Property \descref{P3} is the key ingredient for proving that the double coset $QR$ is separable in $G$ in Corollary~\ref{cor:double_cosets_sep}. \end{itemize} \end{remark}
Let us now describe the metric conditions used to establish the above properties. Given a finite collection $\mathcal P$ of maximal parabolic subgroups of $G$, constants $B,C \ge 0$ and subgroups $Q' \leqslant Q$, $R' \leqslant R$, we will consider the following conditions:
\begin{itemize}
\descitem{C1} \(Q' \cap R' = S\);
\descitem{C2} \(\minx(Q \langle Q', R'\rangle Q \setminus Q ) \geq B\) and \(\minx(R \langle Q', R'\rangle R \setminus R ) \geq B\);
\descitem{C3} \(\minx \Bigl( (PQ' \cup PR') \setminus PS\Bigr) \geq C\), for each $P \in \mathcal{P}$. \end{itemize}
Moreover, if not all of the subgroups in $\mathcal{P}$ are abelian then we will need two more conditions (here for subgroups $H,P \leqslant G$, we use $H_P$ to denote the intersection $H \cap P \leqslant P$):
\begin{itemize}
\descitem{C4} \(Q_P \cap \langle Q_P', R_P' \rangle = Q'_P\) and \(R_P \cap \langle Q_P', R_P' \rangle = R_P'\), for every $P \in \mathcal{P}$;
\descitem{C5} \(\minx \Bigl(q \langle Q_P', R_P' \rangle R_P \setminus qQ'_P R_P\Bigr) \geq C \), for each $P \in \mathcal{P}$ and all $q \in Q_P$. \end{itemize}
\begin{remark} \label{rem:ab_periph->C4_and_C5}
If the peripheral subgroups of $G$ are abelian then condition \descref{C4} follows from \descref{C1} and condition \descref{C5} is trivially true. \end{remark}
Indeed, if $P$ is abelian, then, in the notation of \descref{C4}, $\langle Q_P',R_P' \rangle=Q_P'R_P'$, hence \[
Q'_P \subseteq Q_P \cap \langle Q_P', R_P' \rangle = Q_P \cap Q_P'R_P'= Q_P'(Q_P \cap R'_P) \subseteq Q_P'S_P = Q_P', \] where the last equality used that $S_P=S \cap P \subseteq Q_P'$ by \descref{C1}. The second equality of \descref{C4} can be proved in the same fashion.
Similarly, if $q \in Q_P$ then $q \langle Q_P', R_P' \rangle R_P=qQ_P' R_P' R_P=q Q_P' R_P$, so that \[\minx \Bigl(q \langle Q_P', R_P' \rangle R_P \setminus qQ'_P R_P\Bigr)=\minx(\emptyset)=+\infty,\] thus \descref{C5} holds.
\begin{remark} \label{rem:sep->metric}
In this paper we will be primarily interested in the existence of finite index subgroups $Q' \leqslant_f Q $ and $R' \leqslant_f R$ satisfying the above conditions. This may be easier to interpret through the lens of the profinite topology on \(G\) (see Section~\ref{sec:sep->metric}):
\begin{itemize}
\item conditions \descref{C1} and \descref{C4} can be ensured by choosing any finite index subgroup $M \leqslant_f G$ with $ S\subseteq M$, and setting $Q'=Q \cap M$, $R'=R \cap M$;
\item the existence of finite index subgroups $Q' \leqslant_f Q $ and $R' \leqslant_f R$ satisfying condition \descref{C2} can be deduced from separability of $Q$ and $R$ in $G$;
\item the existence of finite index subgroups $Q' \leqslant_f Q $ and $R' \leqslant_f R$ satisfying condition \descref{C3} can be deduced from separability of the double coset $PS$ in $G$;
\item if $Q_P' \leqslant_f Q_P $ is already chosen then $R_P' \leqslant_f R_P$, satisfying \descref{C5}, can be constructed with the help of separability of the double coset $Q_P' R_P$ in $P$.
Indeed, if $Q_P=\bigcup_{j=1}^n a_j Q_P'$, then the inequality in \descref{C5} can be re-written as \(\minx \Bigl(a_j \langle Q_P', R_P' \rangle Q'_P R_P \setminus a_jQ'_P R_P\Bigr) \geq C \), for every $j=1,\dots,n$.
Thus our approach to establishing \descref{C5} will be to choose $R' \leqslant_f R$ after $Q'\leqslant_f Q$ has already been constructed (in other words, $R'$ will depend on $Q'$).
\end{itemize} \end{remark}
\begin{theorem}[Metric quasiconvexity theorem] \label{thm:metric_qc}
Let \(G\) be a relatively hyperbolic group generated by a finite set \(X\). Suppose that \(Q, R \leqslant G\) are relatively quasiconvex subgroups and denote \(S = Q \cap R\).
There exists a finite collection $\mathcal{P}$ of maximal parabolic subgroups of $G$ such that for any \(A \geq 0\) there are constants \(B,C \geq 0\) satisfying the following.
Suppose that \(Q' \leqslant Q\), \(R' \leqslant R\) are subgroups of \(G\) satisfying conditions \descref{C1}--\descref{C5}.
Then these subgroups enjoy properties \descref{P1}--\descref{P3} above. \end{theorem}
Rough sketches of the proofs of Theorems~\ref{thm:metric_qc} and \ref{thm:sep->qc_intro} are given in the beginning of Part \ref{part:metric_qc_double_cosets} of the paper.
\subsection{Section outline} This paper is structured as follows. There are three parts: Part~\ref{part:background} contains background material and useful preliminary results (Sections~\ref{sec:prelim}--\ref{sec:RH_gps}), Part~\ref{part:metric_qc_double_cosets} is dedicated to the proof of the metric quasiconvexity theorem and the double coset separability results that follow from it (Sections~\ref{sec:path_reps}--\ref{sec:double_coset_sep}), and Part~\ref{part:multicosets} is essentially dedicated to the proof and applications of Theorem~\ref{thm:RZs} (Sections~\ref{sec:multicoset_defs}--\ref{sec:ex_prod_sep}).
Section~\ref{sec:prelim} covers generalities and Section~\ref{sec:RH_gps} covers definitions and results specific to relatively hyperbolic groups. In Section~\ref{sec:path_reps} we introduce the terminology of \emph{path representatives}, their associated \emph{types}, and make some observations about path representatives that have minimal type. Sections~\ref{sec:adj_backtracking} and \ref{sec:multitracking} are devoted to controlling certain instances of backtracking in minimal type path representatives. In Section~\ref{sec:quasigeods} we describe the ``shortcutting'' of a broken line, and establish its quasigeodesicity under some technical assumptions. Section~\ref{sec:metric_qc} contains the proof of Theorem~\ref{thm:metric_qc}. In Sections~\ref{sec:sep->metric} and \ref{sec:dc_when_one_is_parab} we show how finite index subgroups \(Q' \leqslant_f Q\) and \(R' \leqslant_f R\) satisfying conditions \descref{C1}--\descref{C5} can be obtained using separability, with the help of a new criterion for separability of double cosets in amalgamated products from Section~\ref{sec:dcs_in_amalgams}. Section~\ref{sec:sep->qc} contains proofs of Theorems~\ref{thm:sep->qc_intro} and \ref{thm:sep->qc_for_ab_parab}, while Section~\ref{sec:double_coset_sep} contains the proof of Corollary~\ref{cor:almost_comp->sep_dc}.
In Section~\ref{sec:multicoset_defs} we generalise the content of Section~\ref{sec:path_reps} to the setting of products of subgroups, as well as introducing new metric conditions \descref{C2-m} and \descref{C5-m}. Sections~\ref{sec:mcs_multitracking1} and \ref{sec:mcs_multitracking2} are product analogues to Section~\ref{sec:multitracking}; similarly, Section~\ref{sec:mcs_sep->metric} generalises Section~\ref{sec:sep->metric}. Finally, Section~\ref{sec:RZs_proof} contains the proof of Theorem~\ref{thm:RZs}, and Section~\ref{sec:ex_prod_sep} establishes new examples of product separable groups, proving Theorem~\ref{thm:prod_sep}.
\part{Background} \label{part:background} In this part we will present the definitions and basic results that will be necessary for the rest of the paper.
\section{Preliminaries} \label{sec:prelim}
\subsection{Notation} We write \(\NN\) for the set of natural numbers \(\{1, 2, 3, \dots\}\), and \(\NN_0\) for \(\NN \cup \{0\}\).
Let \(G\) be a group. If \(H\) is a finite index (respectively, finite index normal) subgroup of \(G\), then we write \(H \leqslant_f G\) (respectively, \(H \lhd_f G\)). For a subgroup $T \leqslant G$ and elements $a,b \in G$ we will write $T^a=aTa^{-1} \leqslant G$ and $b^a=aba^{-1} \in G$.
By a generating set $\mathcal{A}$ of $G$ we will mean a set $\mathcal{A}$ together with a map $\mathcal{A} \to G$ such that the image of $\mathcal{A}$ under this map generates $G$.
If \(\mathcal{A}\) is a generating set for \(G\), then we denote by \(\Gamma(G,\mathcal{A})\) the (left) Cayley graph of \(G\) with respect to \(\mathcal{A}\). The standard edge path length metric on \(\Gamma(G,\mathcal{A})\) will be denoted $d_{\mathcal{A}}(\cdot,\cdot)$. After identifying \(G\) with the vertex set of \(\Gamma(G,\mathcal{A})\), this metric induces the \emph{word metric} associated to $\mathcal{A}$: \(d_{\mathcal{A}}(g,h) = \abs{g^{-1}h}_{\mathcal{A}}\) for all $g,h \in G$, where \(\abs{g}_{\mathcal{A}}\) denotes the length of the shortest word in \(\mathcal{A}^{\pm 1}\) representing \(g\) in $G$.
Abusing the notation, we will identify the combinatorial Cayley graph $\Gamma(G,\mathcal{A})$ with its geometric realisation. The latter is a geodesic metric space and, given two points $x,y$ in this space, we will use $[x,y]$ to denote a geodesic path from $x$ to $y$ in $\Gamma(G,\mathcal{A})$. In general $\Gamma(G,\mathcal{A})$ need not be uniquely geodesic, so there will usually be a choice for $[x,y]$, which will either be specified or will be clear from the context (e.g., if $x$ and $y$ already belong to some geodesic path under discussion, then $[x,y]$ will be chosen as the subpath of that path).
If \(Y \subseteq G\) is a subset of \(G\) and \(K \geq 0\), we denote by \[
N_{\mathcal{A}}(Y,K) = \{ g \in G \, | \, d_{\mathcal{A}}(g,Y) \leq K \} \] the \(K\)-neighbourhood of \(Y\) with respect to \(d_{\mathcal{A}}\). Note that when \(\mathcal{A}\) is a finite generating set, the metric \(d_{\mathcal{A}}\) is proper. However, in this paper we will also be working with infinite generating sets: see Section~\ref{sec:RH_gps} below, where generating sets of the form $\mathcal{A}=X \cup \mathcal{H}$ are considered.
The following general fact will be used quite often. \begin{lemma} \label{lem:nbhdintersection}
Let \(G\) be a group generated by a finite set $\mathcal{A}$. If \(A, B \leqslant G\) are subgroups of \(G\) then for every \(K \geq 0\) there is a constant \(K' = K'(A,B,K) \geq 0\) such that for any $x \in G$ we have
\[ N_{\mathcal{A}}(xA,K) \cap N_{\mathcal{A}}(xB,K) \subseteq N_{\mathcal{A}}(x(A \cap B),K'). \] \end{lemma} \begin{proof} After applying the left translation by $x^{-1}$, which preserves the metric $d_{\mathcal{A}}$, we can assume that $x=1$. Now the statement follows, for example, from \cite[Proposition~9.4]{HruskaRHCG}. \end{proof}
Suppose that $\gamma$ is a combinatorial path (edge path) in $\Gamma(G,\mathcal{A})$. We will denote the initial and terminal endpoints of \(\gamma\) by \(\gamma_-\) and \(\gamma_+\) respectively. We will write \(\ell(\gamma)\) for the length (i.e., the number of edges) of \(\gamma\). We will also use $\gamma^{-1}$ to denote the inverse of $\gamma$, which is the path starting at $\gamma_+$, ending at $\gamma_-$ and traversing $\gamma$ in the reverse direction. If \(\gamma_1, \dots, \gamma_n\) are combinatorial paths with \((\gamma_i)_+ = (\gamma_{i+1})_-\), for each \(i \in \{1, \dots, n-1\}\), we will denote their concatenation by \(\gamma_1 \dots \gamma_n\).
Since $\Gamma(G,\mathcal{A})$ is a labelled graph, every combinatorial path $\gamma$ comes with a label $\Lab(\gamma)$, which is a word over the alphabet $\mathcal{A}^{\pm 1}$. We denote by \(\elem{\gamma} \in G\) the element represented by \(\Lab(\gamma)\) in $G$. Finally, we write \(\abs{\gamma}_{\mathcal{A}} = |\elem{\gamma}|_{\mathcal{A}}=d_{\mathcal{A}}(\gamma_-,\gamma_+)\).
Note that $\Lab(\gamma^{-1})$ is the formal inverse of $\Lab(\gamma)$, so that $|\gamma^{-1}|_{\mathcal{A}}=|\gamma|_{\mathcal{A}}$ and $\widetilde{\gamma^{-1}}={\elem{\gamma}}^{-1}$.
\subsection{Quasigeodesic paths} In this section we assume that $\Gamma$ is a graph equipped with the standard path length metric $d(\cdot,\cdot)$.
\begin{definition}[Quasigeodesic] \label{def:quasigeodesic}
Let $\lambda \ge 1$ and $c \ge 0$ be some numbers and let $p$ be an edge path in $\Gamma$.
Recall that $p$ is said to be \emph{$(\lambda,c)$-quasigeodesic} if for every combinatorial subpath $q$ of $p$ we have
\[
\ell(q) \le \lambda d (q_-,q_+) +c.
\] \end{definition}
\begin{lemma} \label{lem:qgeod_with_attachments_is_qeod}
Suppose that $s=rpt$ is a concatenation of three combinatorial paths $r$, $p$ and $t$ in $\Gamma$ such that $\ell(r)\le D$ and $\ell(t) \le D$, for some $D \ge 0$, and $p$ is $(\lambda,c)$-quasigeodesic, for some $\lambda \ge 1$ and $c \ge 0$.
Then the path $s$ is $(\lambda,c')$-quasigeodesic, where $c'=c+2(\lambda+1)D$. \end{lemma}
\begin{proof}
Consider an arbitrary combinatorial subpath $q$ of $s$. We need to show that
\begin{equation}
\label{eq:need_to_show_for_q}
\ell(q) \le \lambda d (q_-,q_+) +c+2(\lambda+1)D.
\end{equation}
If $q$ is contained in $r$ or in $t$ then the desired inequality follows from the assumptions that $\ell(r) \le D$ and $\ell(t) \le D$.
Therefore we can further suppose that $q_-$ is a vertex of $rp$ and $q_+$ is a vertex of $pt$. The bounds on the lengths of $r$ and $t$ imply that there is a combinatorial subpath $a$ of $p$ such that there are at most $D$ edges of $s$ between $q_-$ and $a_-$ and between $a_+$ and $q_+$. Thus
$d(q_-,a_-) \le D$, $d(q_+,a_+) \le D$ and $\ell(q) \le \ell(a)+2D$.
The assumption that $p$ is $(\lambda,c)$-quasigeodesic implies that
\begin{equation}\label{eq:ell_of_q}
\ell(q) \le \ell(a)+2D \le \lambda d(a_-,a_+)+c+2D.
\end{equation}
The triangle inequality gives
$ d(a_-,a_+)\le d(q_-,q_+)+2D$,
which, combined with \eqref{eq:ell_of_q}, gives $\ell(q) \le \lambda d(q_-,q_+)+c+2(\lambda+1)D$, so that \eqref{eq:need_to_show_for_q} holds, as required. \end{proof}
\begin{lemma} \label{lem:perturbed_quasigeodesic}
Let \(\lambda\geq 1, c \geq 0\) and $K \in \NN$.
Suppose that \(p\) is a combinatorial path in $\Gamma$ and let \(p'\) be a path obtained by replacing some edges of \(p\) with combinatorial paths of length at most \(K\).
If \(p\) is \((\lambda,c)\)-quasigeodesic then \(p'\) is \((K\lambda,2K^2\lambda+Kc+2K)\)-quasigeodesic. \end{lemma}
\begin{proof}
Let $q$ be any combinatorial subpath of $p'$ and write \(q_-=x\) and \(q_+=y\). We need to show that
\begin{equation}\label{eq:ineq_for_length_of_q}
\ell(q)\le K \lambda d(x,y) + 2K^2\lambda + Kc + 2K.
\end{equation}
If $q$ does not contain any vertices of $p$ then $\ell(q) \le K$ and \eqref{eq:ineq_for_length_of_q} holds. Otherwise, let $z$ and $w$ be the first and the last vertices of $q$ that lie on $p$ respectively, and let $r$ be the subpath of $p$ starting at $z$ and ending at $w$. The assumptions imply that \(d(x,z) \leq K\), \(d(y,w) \leq K\) and
\begin{equation}
\label{eq:bd_on_len_q}
\ell(q) \leq K \ell(r) + 2K.
\end{equation}
Using the quasigeodesicity of \(p\) and the triangle inequality, we obtain
\begin{equation*}
\ell(r) \leq \lambda d(z,w) + c \leq \lambda d(x,y) + 2K\lambda + c,
\end{equation*}
which, combined with (\ref{eq:bd_on_len_q}), gives (\ref{eq:ineq_for_length_of_q}). \end{proof}
\subsection{Hyperbolic metric spaces}
In this subsection we take \((\Gamma,d)\) to be a geodesic metric space.
\begin{definition}[Gromov product]
Let \(x, y, z \in \Gamma\) be points.
The \emph{Gromov product} of \(x\) and \(y\) with respect to \(z\) is
\[
\langle x, y \rangle_z = \frac{1}{2}\Big( d(x,z) + d(y,z) - d(x,y) \Big).
\] \end{definition}
It is easy to see that the Gromov products satisfy the following equations: \[
d(x,y)=\langle y,z \rangle_x+\langle x, z \rangle_y,~
d(y,z)=\langle x, z \rangle_y+\langle x, y \rangle_z \text{ and } d(z,x)=\langle x, y \rangle_z+\langle y, z \rangle_x. \]
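Indeed, all three identities follow by direct substitution; for instance,
\[
\langle y,z \rangle_x+\langle x, z \rangle_y = \frac{1}{2}\Big( d(y,x) + d(z,x) - d(y,z) \Big)+\frac{1}{2}\Big( d(x,y) + d(z,y) - d(x,z) \Big)=d(x,y).
\]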
The following elementary property of Gromov products is an immediate consequence of the triangle inequality.
\begin{remark} \label{rem:Gr_prod_ineq}
Suppose that $x,y,z$ are points in $\Gamma$, $u$ is a point on any geodesic segment $[x,z]$, from $x$ to $z$, and $v$ is a point on any geodesic segment $[z,y]$, from $z$ to $y$. Then \[\langle u,v \rangle_z \le \langle x,y \rangle_z.\] \end{remark}
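Indeed, since \(u\) lies on a geodesic from \(x\) to \(z\) we have \(d(u,z)=d(x,z)-d(x,u)\), and similarly \(d(v,z)=d(y,z)-d(y,v)\), whence
\[
\langle x,y \rangle_z-\langle u,v \rangle_z = \frac{1}{2}\Big( d(x,u)+d(y,v)+d(u,v)-d(x,y) \Big) \geq 0,
\]
where the final inequality is the triangle inequality applied to the points \(x,u,v,y\).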
\begin{definition}[$\delta$-thin triangle]
Let \(\Delta\) be a geodesic triangle in \(\Gamma\) with vertices \(x, y,\) and \(z\), and let \(\delta \geq 0\).
Denote by \(T_\Delta\) the (possibly degenerate) tripod with edges of length \(\langle x, y \rangle_z, \langle y, z \rangle_x\), and \(\langle z, x \rangle_y\) respectively.
There is a map from \(\{x,y,z\}\) to the extremal vertices of \(T_\Delta\), which extends uniquely to a map \(\phi \colon \Delta \to T_\Delta\), whose restriction to each side of \(\Delta\) is an isometry.
If the diameter in \(\Gamma\) of \(\phi^{-1}(\{t\})\) is at most \(\delta\), for all \(t \in T_\Delta\), then \(\Delta\) is said to be \emph{\(\delta\)-thin}. \end{definition}
\begin{definition}[Hyperbolic space]
The space \(\Gamma\) is said to be a \emph{hyperbolic metric space} if there is a constant \(\delta \geq 0\) such that every geodesic triangle in \(\Gamma\) is \(\delta\)-thin. \end{definition}
The above definition of \(\delta\)-hyperbolicity is not the most commonly used in the literature, though it is well-known to be equivalent to other definitions (see, for example, \cite[III.H.1.17]{Bridson_Haefliger}).
In the remainder of this subsection we assume that \(\Gamma\) is a \(\delta\)-hyperbolic graph, for some $\delta \ge 0$, and $d(\cdot,\cdot)$ is the standard path length metric on $\Gamma$.
\begin{definition}[Broken line] \label{def:broken_line}
A \emph{broken line} in $\Gamma$ is a path $p$ which comes with a fixed decomposition as a concatenation of combinatorial geodesic paths $p_1,\dots,p_n$ in $\Gamma$, so that $p=p_1p_2 \dots p_n$.
The paths $p_1, \dots, p_n$ will be called the \emph{segments} of the broken line $p$, and the vertices $p_-=(p_1)_-$, $(p_1)_+=(p_2)_-$, $\dots,$ $(p_{n-1})_+=(p_n)_-$ and $(p_n)_+=p_+$ will be called the \emph{nodes} of $p$. \end{definition}
The following statement is a special case of Lemma~4.2 from \cite{AshotResHom}, applied to the situation when each $p_i$ is geodesic (so, in the notation of that lemma, we can take $\overline{\lambda}=1$, $\overline{c}=0$ and $\nu=\delta$). Note that due to a slightly different definition of quasigeodesicity used in \cite{AshotResHom}, a $(\lambda,c)$-quasigeodesic in the sense of \cite{AshotResHom} is $(1/\lambda,c/\lambda)$-quasigeodesic in the sense of Definition~\ref{def:quasigeodesic} above, and vice-versa.
\begin{lemma} \label{lem:concat_of_geodesics-original}
Let $c_0, c_1$ and $c_2$ be constants such that \(c_0 \geq 14\delta\), $c_1 = 12( c_0+\delta)+1$ and $c_2=10 (\delta+c_1)$.
Suppose that $p=p_1 \dots p_n$ is a broken line in $\Gamma$, where $p_i$ is a geodesic with $(p_i)_-=x_{i-1} $, $(p_i)_+=x_i $, $i=1,\dots,n$.
If \(d(x_{i-1}, x_{i}) \geq c_1 \) for \(i = 1, \dots, n\), and \(\langle x_{i-1}, x_{i+1} \rangle_{x_i} \leq c_0\) for each \(i = 1, \dots, n-1\), then the path \(p\) is \((4, c_2)\)-quasigeodesic. \end{lemma}
We will need an extension of the above lemma which allows the first and the last geodesic segments $p_1$ and $p_n$ to be short.
\begin{lemma} \label{lem:concat}
For any constant $c_0$, satisfying $c_0 \ge 14 \delta$, let $c_1=c_1(c_0) = 12( c_0+\delta)+1$ and $c_3=c_3(c_0)=10 (\delta+2c_1)$.
Suppose that $p=p_1 \dots p_n$ is a broken line in $\Gamma$, where $p_i$ is a geodesic with $(p_i)_-=x_{i-1}$, $(p_i)_+=x_i$, $i=1,\dots,n$.
If \(d(x_{i-1}, x_{i}) \geq c_1 \) for \(i = 2, \dots, n-1\), and \(\langle x_{i-1}, x_{i+1} \rangle_{x_i} \leq c_0\) for each \(i = 1, \dots, n-1\), then the path \(p\) is \((4,c_3)\)-quasigeodesic. \end{lemma}
\begin{proof}
This follows easily by combining Lemma~\ref{lem:concat_of_geodesics-original} with Lemma~\ref{lem:qgeod_with_attachments_is_qeod}.
Indeed, there are four possibilities depending on whether or not $d(x_0,x_1) \ge c_1$ and $d(x_{n-1},x_n) \ge c_1$.
Since all of these cases are similar, let us concentrate on the situation when $d(x_0,x_1) <c_1$ and $d(x_{n-1},x_n) \ge c_1$.
Then the path $q=p_2p_3 \dots p_n$ is $(4,c_2)$-quasigeodesic by Lemma~\ref{lem:concat_of_geodesics-original}, where $c_2=10 (\delta+c_1)$.
Since $\ell(p_1)=d(x_0,x_1) < c_1$, we can apply Lemma~\ref{lem:qgeod_with_attachments_is_qeod} to deduce that the path $p=p_1\dots p_n=p_1 q$ is $(4,c_3)$-quasigeodesic, where $c_3=c_2+10c_1=10 (\delta+2c_1)$ as required. \end{proof}
\subsection{Profinite topology and separable subsets}
Let \(G\) be a group. The \emph{profinite topology} on \(G\) is the topology \(\pt(G)\) whose basis consists of left cosets of finite index subgroups of \(G\).
A subset \(Z \subseteq G\) is called \emph{separable} (in \(G\)) if it is closed in \(\pt(G)\). Evidently finite unions and arbitrary intersections of separable subsets are separable. It is easy to see that a subset \(Z \subseteq G\) is separable if and only if for every \(g \in G\setminus Z\), there is a finite group \(Q\) and a homomorphism \(\varphi \colon G \to Q\) such that \(\varphi(g) \notin \varphi(Z)\) in $Q$. A subgroup \(H \leqslant G\) is separable if and only if it is the intersection of the finite index subgroups of $G$ containing it.
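For example, in \(G=\mathbb{Z}\) every subgroup is separable: a non-trivial subgroup \(n\mathbb{Z}\) already has finite index, while the trivial subgroup is the intersection
\[
\{0\}=\bigcap_{n \in \NN} n\mathbb{Z}
\]
of finite index subgroups. Equivalently, any \(g \neq 0\) is separated from \(\{0\}\) by the quotient \(\mathbb{Z} \to \mathbb{Z}/n\mathbb{Z}\), for any \(n > |g|\).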
The following observation stems from the fact that the group operations of taking an inverse and multiplying by a fixed element are homeomorphisms with respect to the profinite topology.
\begin{remark} \label{rem:sep_props}
Let $Z$ be a separable subset of a group $G$.
Then for every $g \in G$ the subsets $Z^{-1}$, $gZ$ and $Zg$ are also separable. \end{remark}
\begin{lemma} \label{lem:induced_top}
Suppose that $A$ is a subgroup of a group $G$.
\begin{itemize}
\item[(a)] Every subset of $A$ which is closed in $\pt(G)$ is also closed in $\pt(A)$.
\item[(b)] If every finite index subgroup of $A$ is separable in $G$ then every closed subset of $\pt(A)$ is closed in $\pt(G)$.
\end{itemize} \end{lemma}
\begin{proof}
Claim (a) immediately follows from the observation that the intersection of $A$ with any basic closed subset from $\pt(G)$ is either empty or is a basic closed subset of $\pt(A)$.
If each finite index subgroup of $A$ is separable in $G$ then, in view of Remark~\ref{rem:sep_props}, every basic closed set in $\pt(A)$ is closed in the profinite topology of $G$.
Claim (b) of the lemma now follows from the fact that any closed subset of $A$ is the intersection of basic closed sets. \end{proof}
\begin{lemma} \label{lem:fi_dc}
Let $G$ be a group with subgroups $A,B$.
Suppose that $A' \leqslant_f A$, $B' \leqslant_f B$ and $A'B'$ is separable in $G$.
Then $AB$ is separable in $G$. \end{lemma}
\begin{proof}
Let $A=\bigsqcup_{i=1}^m a_iA'$ and $B=\bigsqcup_{j=1}^n B'b_j$.
Then \[AB=\bigcup_{i=1}^m \bigcup_{j=1}^n a_iA'B'b_j,\] which is separable in $G$ by Remark~\ref{rem:sep_props}. \end{proof}
The next two lemmas use the notation introduced in Subsections~\ref{subsec:1.2} and \ref{subsec:3.1}.
\begin{lemma} \label{lem:V_sep_and_U_leq_V->UV-sep}
Let $A,B$ be subgroups of a group $G$ such that $A \preccurlyeq B$. If $B$ is separable in $G$ then so are the double cosets $AB$ and $BA$. \end{lemma}
\begin{proof}
By \cite[Lemma 2.1]{Min-Some_props_of_subsets} $A \cap B$ has finite index in $A$, so $A=\bigsqcup_{i=1}^m a_i(A \cap B)$, for some $a_1,\dots,a_m \in A$.
It follows that $AB=\bigcup_{i=1}^m a_iB$, so it is separable by Remark~\ref{rem:sep_props}.
The same remark also implies that $BA=(AB)^{-1}$ is separable in $G$. \end{proof}
The main use of the profinite topology in this paper stems from the following elementary facts. \begin{lemma} \label{lem:sep->large_minx}
Let $G$ be a group generated by a finite set $X$, and let $P \leqslant G$ be a subgroup. Suppose that $Z$ is a separable subset of $P$.
\begin{enumerate}[label=(\alph*)]
\item If a finite subset $U \subseteq P$ is disjoint from $Z$ then there is a normal finite index subgroup $N \lhd_f P$ such that $U \cap ZN=\emptyset$. Thus the image of $U$ in the quotient $P/N$ will be disjoint from the image of $Z$.
\item For every constant $C \ge 0$ there is a finite index normal subgroup $N \lhd_f P$ such that \[\minx(ZN \setminus Z) \ge C.\]
\item For any finite subset $A \subseteq P$ and any $C \ge 0$ there exists $N \lhd_f P$ such that \[\minx(aZN \setminus aZ) \ge C,~\text{ for all } a \in A.\]
\end{enumerate} \end{lemma}
\begin{proof}
(a) Let $U=\{u_1,\dots,u_m\} \subseteq P$. Since $u_i \notin Z$ and $Z$ is separable in $P$, there exists $N_i \lhd_f P$ such that $u_iN_i \cap Z=\emptyset$, for each $i=1,\dots,m$. We set $N=\bigcap_{i=1}^m N_i \lhd_f P$, so that $u_iN \cap Z=\emptyset$, i.e., $u_i \notin ZN$, for all \(i = 1, \dots, m\). Therefore $U \cap ZN=\emptyset$ and (a) has been proved.
Claim (b) follows by applying claim (a) to the finite subset $U=\{g \in P\setminus Z \mid |g|_X <C \}$ of $P$.
To prove (c), suppose that $A=\{a_1,\dots,a_k\} \subseteq P$. By Remark~\ref{rem:sep_props}, $a_jZ$ is separable in $P$, for every $j=1,\dots,k$, so, according to part (b), there exists $N_j \lhd_f P$ such that
\[
\minx(a_jZN_j \setminus a_jZ) \ge C, \text{ for each }j=1,\dots,k .
\]
It is easy to see that the normal subgroup $N= \bigcap_{j=1}^k N_j \lhd_f P$ enjoys the required property. \end{proof}
The following statement is well-known; we include a proof for completeness.
\begin{lemma} \label{lem:Lemma_0}
Let \(G\) be a group with subgroups \(K \leqslant_f H \leqslant G\). If \(K\) is separable in \(G\), then there is \(L \leqslant_f G\) such that \(L \cap H = K\). \end{lemma}
\begin{proof}
Since \(K\) is of finite index in \(H\), we can write \(H = K \cup Kh_1 \cup \dots \cup Kh_m\) for some \(h_1, \dots, h_m \in H \setminus K\).
The subgroup \(K\) is separable in \(G\), meaning that it is closed in \(\pt(G)\).
Following Remark~\ref{rem:sep_props}, the union \(Kh_1 \cup \dots \cup Kh_m\) is also closed in \(\pt(G)\).
Thus the subset \((G \setminus H) \cup K = G \setminus (Kh_1 \cup \dots \cup Kh_m)\) is open in \(\pt(G)\) and contains the identity.
It follows from the definition of the profinite topology that there is a finite index normal subgroup \(N \lhd_f G\) with \(N \subseteq (G \setminus H) \cup K\).
Observe that \(Kh_i \cap N = \emptyset\), for every \(i= 1, \dots, m\), so \(N \cap H \leqslant K\).
Now set \(L = KN \leqslant_f G\). Then, by the modular law (applicable since \(K \leqslant H\)), \(L \cap H = KN \cap H = K(N \cap H) = K\), as required. \end{proof}
\section{Relatively hyperbolic groups} \label{sec:RH_gps} In this section we define relatively hyperbolic groups and collect various properties that will be used throughout the paper.
\subsection{Definition} We will define relatively hyperbolic groups following the approach of Osin (for full details, see \cite{OsinRHG}).
\begin{definition}[Relative generating set, relative presentation] \label{def:rel_gen_set}
Let \(G\) be a group, \(X \subseteq G\) a subset and \(\lbrace H_\nu \, | \, \nu \in \Nu \rbrace\) a collection of subgroups of \(G\).
The group \(G\) is said to be \emph{generated by \(X\) relative to \(\lbrace H_\nu \, | \, \nu \in \Nu \rbrace\)} if it is generated by \(X \sqcup \mathcal{H}\), where \(\mathcal{H}= \bigsqcup_{\nu \in \Nu} (H_\nu \setminus\{1\})\) (with the obvious map $X \sqcup \mathcal{H} \to G$).
If this is the case, then there is a surjection
\[
F = F(X) \ast (\ast_{\nu \in \Nu} H_\nu) \to G,
\]
where $F(X)$ denotes the free group on $X$.
Suppose that the kernel of this map is the normal closure of a subset $\mathcal{R} \subseteq F$. Then $G$ can be equipped with the \emph{relative presentation} \begin{equation} \label{eq:rel_pres} \langle X, H_\nu, \nu \in \mathcal{N} \mid \mathcal{R} \rangle. \end{equation}
If \(X\) is a finite set, then \(G\) is said to be \emph{finitely generated relative to \(\lbrace H_\nu \, | \, \nu \in \Nu \rbrace\)}. If \(\mathcal{R}\) is also finite, \(G\) is said to be \emph{finitely presented relative to \(\lbrace H_\nu \, | \, \nu \in \Nu \rbrace\)} and the presentation above is a \emph{finite relative presentation}. \end{definition}
With the above notation, we call the Cayley graph \(\ga\) the \emph{relative Cayley graph} of \(G\) with respect to \(X\) and \(\lbrace H_\nu \, | \, \nu \in \Nu \rbrace\). Note that when \(X\) is itself a generating set of \(G\), \(d_{X\cup\mathcal{H}}(g,h) \leq d_X(g,h)\), for all \(g,h \in G\).
\begin{definition}[Relative Dehn function]
Suppose that \(G\) has a finite relative presentation \eqref{eq:rel_pres} with respect to a collection of subgroups \(\lbrace H_\nu \, | \, \nu \in \Nu \rbrace\).
If \(w\) is a word in the free group \(F(X\sqcup\mathcal{H})\), representing the identity in \(G\), then it is equal in \(F\) to a product of conjugates
\[
w \stackrel{F}{=} \prod_{i=1}^n a_i r_i a_i^{-1},
\]
where \(a_i \in F\) and \(r_i \in \mathcal{R}\), for each \(i\).
The \emph{relative area} of the word \(w\) with respect to the relative presentation, \(Area^{rel}(w)\), is the least \(n\) such that \(w\) is equal in \(F\) to a product of \(n\) conjugates as above.
A \emph{relative isoperimetric function} of the above presentation is a function \(f \colon \NN \to \NN\) such that \(Area^{rel}(w) \leq f(\abs{w})\) for every freely reduced word \(w\) in \(F(X\sqcup\mathcal{H})\) representing the identity in \(G\).
If a relative isoperimetric function exists for the presentation, then the smallest such function is called the \emph{relative Dehn function} of the presentation. \end{definition}
\begin{definition} [Relatively hyperbolic group] \label{def:rh_gp}
Let \(G\) be a group and let \(\lbrace H_\nu \, | \, \nu \in \Nu \rbrace\) be a collection of subgroups of \(G\).
If \(G\) admits a finite relative presentation with respect to this collection of subgroups which has a well-defined linear relative Dehn function, then \(G\) is called \emph{hyperbolic relative to} \(\lbrace H_\nu \, | \, \nu \in \Nu \rbrace\).
When it is clear what the relevant collection of subgroups is, we refer to \(G\) simply as a \emph{relatively hyperbolic group}.
The groups \(\lbrace H_\nu \, | \, \nu \in \Nu \rbrace\) are called the \emph{peripheral subgroups} of the relatively hyperbolic group $G$, and their conjugates in $G$ are called \emph{maximal parabolic subgroups}.
Any subgroup of a maximal parabolic subgroup is said to be \emph{parabolic}. \end{definition}
\begin{lemma}[{\cite[Corollary 2.54]{OsinRHG}}] \label{lem:Cayley_graph-hyperbolic}
Suppose that $G$ is a group generated by a finite set $X$ and hyperbolic relative to a collection of subgroups \(\lbrace H_\nu \mid \nu \in \Nu \rbrace\), and let \(\mathcal{H} = \bigsqcup_{\nu \in \Nu} (H_\nu \setminus \{1\})\).
Then the Cayley graph $\ga$ is $\delta$-hyperbolic, for some $\delta \ge 0$. \end{lemma}
In the remainder of this section (namely, in Subsections \ref{subsec:quasigeod_in_rh_gps}--\ref{subsec:qc_subset_in_rh_gps}), we will assume that \(G\) is a group generated by a finite subset $X$ and hyperbolic relative to a finite collection of subgroups \(\{ H_\nu \, | \, \nu \in \Nu \}\). As usual, we will let \(\mathcal{H} = \bigsqcup_{\nu \in \Nu} (H_\nu \setminus \{1\})\).
\subsection{Geodesics and quasigeodesics in relatively hyperbolic groups}\label{subsec:quasigeod_in_rh_gps} \begin{definition}[Path components]
Let \(p\) be a combinatorial path in \(\Gamma(G,X\cup\mathcal{H})\).
A non-trivial combinatorial subpath of \(p\) whose label consists entirely of elements of \(H_\nu \setminus \{1\}\), for some \(\nu \in \Nu\), is called an \emph{\(H_\nu\)-subpath} of \(p\).
An \(H_\nu\)-subpath is called an \emph{\(H_\nu\)-component} if it is not contained in any strictly longer \(H_\nu\)-subpath.
We will call a subpath of \(p\) an \(\mathcal{H}\)-subpath (respectively, an \emph{\(\mathcal{H}\)-component}) if it is an \(H_\nu\)-subpath (respectively, an \(H_\nu\)-component), for some \(\nu \in \Nu\). \end{definition}
\begin{definition}[Connected and isolated components]
Let \(p\) and $q$ be edge paths in \(\Gamma(G,X\cup\mathcal{H})\) and suppose that \(s\) and \(t\) are \(H_\nu\)-subpaths of \(p\) and $q$ respectively, for some $\nu \in \Nu$.
We say that \(s\) and \(t\) are \emph{connected} if $s_-$ and $t_-$ belong to the same left coset of $H_\nu$ in $G$. The latter means that for all vertices $u$ of $s$ and $v$ of $t$ either $u=v$ or there is an edge \(e\) in $\ga$ with $\Lab(e) \in H_\nu \setminus\{1\}$ and \(e_- = u, e_+ = v\).
If \(s\) is an $H_\nu$-component of a path $p$ and $s$ is not connected to any other \(H_\nu\)-component of \(p\) then we say that \(s\) is \emph{isolated} in \(p\). \end{definition}
\begin{definition}[Phase vertex]
A vertex \(v\) of a combinatorial path \(p\) in \(\Gamma(G,X\cup\mathcal{H})\) is called \emph{non-phase} if it is an interior vertex of an \(\mathcal{H}\)-component of \(p\) (that is, if it lies in an \(\mathcal{H}\)-component of which it is not an endpoint).
Otherwise \(v\) is called \emph{phase}. \end{definition}
\begin{definition}[Backtracking]
If all \(\mathcal{H}\)-components of a combinatorial path \(p\) are isolated, then \(p\) is said to be \emph{without backtracking}.
Otherwise we say that \(p\) \emph{has backtracking}. \end{definition}
\begin{remark} \label{rem:comp_of_geod_is_an_edge}
If \(p\) is a geodesic edge path in \(\Gamma(G,X\cup\mathcal{H})\) then every $\mathcal{H}$-component of $p$ will consist of a single edge, labelled by an element from $\mathcal{H}$. Therefore every vertex of $p$ will be phase.
Moreover, it is easy to see that \(p\) will be without backtracking. \end{remark}
The following is a basic observation about the lengths of paths in the relative Cayley graph whose \(\mathcal{H}\)-components are uniformly short.
\begin{lemma} \label{lem:rel_geods_with_short_comps}
Let \(p\) be a path in \(\Gamma(G,X\cup\mathcal{H})\) and suppose there is a constant \(\Theta \geq 1\) such that for any \(\mathcal{H}\)-component \(h\) of \(p\), we have \(|h|_X \leq \Theta\). Then \(|p|_X \leq \Theta \ell(p)\). \end{lemma}
\begin{proof}
We can write \(p\) as a concatenation \(p = a_0 h_1 a_1 \dots a_{n-1} h_n a_n\), where \(h_1, \dots, h_n\) are the \(\mathcal{H}\)-components of \(p\) and \(a_0, \dots, a_n\) are subpaths of \(p\) all of whose edges are labelled by elements of \(X^{\pm1}\).
It follows from the triangle inequality that
\begin{equation*}
|p|_X= d_X(p_-, p_+) \leq \sum_{i = 0}^n d_X((a_i)_-, (a_i)_+) + \sum_{i = 1}^n d_X((h_i)_-, (h_i)_+).
\end{equation*}
Since each edge of \(a_i\) is labelled by an element of \(X^{\pm1}\), we have that \(d_X((a_i)_-, (a_i)_+) \leq \ell(a_i)\), for all \(i = 0, \dots, n\).
Moreover, \(d_X((h_i)_-, (h_i)_+)=|h_i|_X \leq \Theta \ell(h_i)\), for each \(i = 1, \dots, n\), by the hypothesis of the lemma, as \(\ell(h_i) \ge 1\).
Combining the above three inequalities with the fact that \(\Theta \geq 1\), we obtain
\[
|p|_X \leq \sum_{i=0}^n \ell(a_i) + \sum_{i=1}^n \Theta \ell(h_i) \leq \Theta \Big(\sum_{i=0}^n \ell(a_i) + \sum_{i=1}^n \ell(h_i)\Big) = \Theta \ell(p).
\] \end{proof}
\begin{lemma}[{\cite[Lemma 3.1]{OsinRHG}}] \label{lem:osinisoperimetric}
There is a constant \(M \geq 1\) such that if \(h_1, \dots, h_n\) are isolated \(\mathcal{H}\)-components of a cycle \(q\) in \(\Gamma(G,X\cup\mathcal{H})\), then
\[
\sum_{i=1}^n \abs{h_i}_X \leq M\ell(q).
\] \end{lemma}
\begin{lemma} \label{lem:qgds_with_long_comps}
For any \(\lambda \geq 1\), \(c \geq 0\) and \(A \geq 0\) there is a constant \(\eta = \eta(\lambda,c,A) \geq 0\) such that the following is true.
Suppose that \(p\) is a \((\lambda,c)\)-quasigeodesic path in $\ga$ possessing an isolated \(\mathcal{H}\)-component \(h\) such that \(\abs{h}_X \geq \eta\). Then \(\abs{p}_X \geq A\). \end{lemma}
\begin{proof}
Let $M \ge 1$ be the constant from Lemma~\ref{lem:osinisoperimetric}, and set
\begin{equation}
\label{eq:defn_of_zeta}
\eta = M(1 + \lambda)A + Mc.
\end{equation}
Let \(q\) be a path in \(\Gamma(G,X\cup\mathcal{H})\), labelled by a word over \(X^{\pm 1}\), with endpoints \(q_- = p_-\) and \(q_+ = p_+\), such that \(\ell(q) = \abs{p}_X\).
Consider the cycle \(r = pq^{-1}\) in \(\Gamma(G,X\cup\mathcal{H})\), formed by concatenating \(p\) and the inverse of \(q\).
By the quasigeodesicity of \(p\), \(\ell(p) \leq \lambda \abs{p}_{X\cup\mathcal{H}} + c \leq \lambda \abs{p}_X + c\).
Now \(\ell(r) = \ell(p) + \ell(q)\), therefore
\begin{equation}
\label{eq:len_of_cycle_r}
\ell(r) \leq (1+ \lambda) \abs{p}_X + c.
\end{equation}
Since \(h\) is isolated in \(p\) it must also be an isolated $\mathcal H$-component of the cycle \(r\) (because all edges of $q$ are labelled by letters from $X^{\pm 1}$). Hence $\abs{h}_X \leq M \ell(r)$
by Lemma~\ref{lem:osinisoperimetric}, so \eqref{eq:len_of_cycle_r} implies that
\begin{equation}
\label{eq:bound_on_h_in_r}
|p|_X \ge \frac{1}{1+\lambda} (\ell(r)-c) \ge \frac{1}{M(1+\lambda)}(|h|_X-Mc).
\end{equation}
Combining the above inequality with \eqref{eq:defn_of_zeta} and the assumption that \(\abs{h}_X \geq \eta\), we obtain
\[
\abs{p}_X \ge \frac{1}{M(1+\lambda)}\big(\eta - Mc\big) = \frac{M(1+\lambda)A}{M(1+\lambda)} = A,
\]
which is the desired bound. \end{proof}
\begin{proposition}[{\cite[Proposition 3.2]{OsinFilling}}] \label{prop:osinpolygon}
There is a constant \(L \ge 0\) such that if \(\Delta\) is a geodesic triangle in \(\Gamma(G,X\cup\mathcal{H})\) and some side $p$ is an isolated $\mathcal{H}$-component of $\Delta$ then $ \abs{p}_X \leq L$. \end{proposition}
\begin{lemma} \label{lem:isol_comp_in_triangles_are_short}
There is a constant $L \ge 0$ such that if $p_1$ and $p_2$ are geodesic paths in $\ga$ with $(p_1)_+=(p_2)_-$, and $s$ and $t$ are connected $H_\nu$-components of $p_1$, $p_2$ respectively, for some $\nu \in \Nu$, then $d_X(s_+,t_-) \le L$. \end{lemma}
\begin{proof}
Let $L \ge 0$ be the constant provided by Proposition~\ref{prop:osinpolygon}.
Since the component $s$ of $p_1$ is connected to the component $t$ of $p_2$, we know that $h=(s_+)^{-1}t_- \in H_\nu$.
If $h=1$ then $s_+=t_-$ and there is nothing to prove, otherwise $s_+$ and $t_-$ are endpoints of an edge $e$ labelled by $h$ in $\ga$.
Consider the geodesic triangle $\Delta$ with vertices $s_+$, $(p_1)_+$ and $t_-$, where the sides $[s_+,(p_1)_+] $ and $[(p_1)_+,t_-]$ are chosen to be subpaths of $p_1$ and $p_2$ respectively, and the side $[s_+,t_-]$ is the edge $e$.
If $v \in [s_+,(p_1)_+]$ is a vertex belonging to the left coset $s_+H_\nu$ then $\dxh(s_-,v)=1$ and $s_+ \in [s_-,v]$ in $p_1$.
Since $\dxh(s_-,s_+)=1$ and $p_1$ is geodesic, we can conclude that $v=s_+$.
Similarly, the only vertex of $[(p_1)_+,t_-]$ which belongs to the left coset $t_-H_\nu=s_+H_\nu$ is $t_-$.
It follows that the edge $e$ is an isolated $H_\nu$-component of $\Delta$. Hence $d_X(s_+,t_-) \le L$ by Proposition~\ref{prop:osinpolygon}. \end{proof}
\begin{proposition}[{\cite[Theorem 3.26]{OsinRHG}}] \label{prop:osinslimtriangles}
Let \(\Delta\) be a geodesic triangle in \(\Gamma(G,X\cup\mathcal{H})\) with vertices at vertices of $\ga$ and sides \(p, q\), and \(r\).
There is a constant \(\sigma = \sigma(G,\mathcal{H},X) \in \NN_0\) such that for any vertex \(u \in p\), there is a vertex \(v \in q \cup r\) with \(d_X(u,v) \leq \sigma\). \end{proposition}
\begin{definition}[$k$-similar paths]
Let \(p\) and \(q\) be paths in \(\Gamma(G,X\cup\mathcal{H})\), and let \(k \geq 0\).
The paths \(p\) and \(q\) are said to be \emph{\(k\)-similar} if \(d_X(p_-,q_-) \leq k\) and \(d_X(p_+,q_+) \leq k\). \end{definition}
\begin{proposition}[{\cite[Proposition 3.15, Lemma 3.21 and Theorem~3.23]{OsinRHG}}] \label{prop:osinbcp}
For any \(\lambda \geq 1\), \(c, k \geq 0\) there is a constant \(\kappa = \kappa(\lambda,c,k) \geq 0\) such that if \(p\) and \(q\) are $k$-similar \((\lambda,c)\)-quasigeodesics in \(\Gamma(G,X\cup\mathcal{H})\) and \(p\) is without backtracking, then
\begin{enumerate}
\item for every phase vertex \(u\) of \(p\), there is a phase vertex \(v\) of \(q\) with \(d_X(u,v) \leq \kappa\);
\item every \(\mathcal{H}\)-component \(s\) of \(p\), with \(\abs{s}_X \geq \kappa\), is connected to an \(\mathcal{H}\)-component of \(q\).
\end{enumerate}
Moreover, if \(q\) is also without backtracking then
\begin{enumerate}[resume]
\item if \(s\) and \(t\) are connected \(\mathcal{H}\)-components of \(p\) and \(q\) respectively, then
\[
\max \{ d_X(s_-,t_-), d_X(s_+,t_+) \} \leq \kappa .
\]
\end{enumerate} \end{proposition}
\subsection{Quasigeodesicity of paths with long components} One of the tools for proving Theorem~\ref{thm:metric_qc} will be the next result of Mart\'{i}nez-Pedroza from \cite{MPComb}.
\begin{proposition}[{\cite[Proposition 3.1]{MPComb}}] \label{prop:mpquasigeodesic-original}
There are constants \(\zeta_0 \ge 0\) and \(\lambda_0 \geq 1\) such that the following holds. If \(q=r_0s_1 \dots r_n s_{n+1}\) is a concatenation of geodesic paths \(r_0, s_1, \dots, r_{n}, s_{n+1}\) in \(\Gamma(G,X\cup\mathcal{H})\) such that
\begin{enumerate}
\item \(s_i\) is an \(\mathcal{H}\)-component of \(q\), for each \(i = 1, \dots, n+1\),
\item \(\abs{s_i}_X \geq \zeta_0\), for every $i=1,\dots,n+1$,
\item \(s_i\) is not connected to \(s_{i+1}\), for every \(i = 1, \dots, n\),
\end{enumerate}
then \(q\) is \((\lambda_0, 0)\)-quasigeodesic in \(\Gamma(G,X\cup\mathcal{H})\) without backtracking.
\end{proposition}
We will actually need a slightly more general version of Proposition~\ref{prop:mpquasigeodesic-original}, as follows.
\begin{proposition} \label{prop:mpquasigeodesic}
There exist constants \(\lambda \geq 1\) and $c \ge 0$ such that for every \(\rho \ge 0\) there is \(\zeta_1 > 0\) such that the following holds.
Suppose that \(p=a_0b_1a_1 \dots b_na_n\) is a concatenation of geodesic paths \(a_0, b_1, \dots, b_n, a_n\) in \(\Gamma(G,X\cup\mathcal{H})\) such that
\begin{enumerate}
\item \(b_i\) is an \(\mathcal{H}\)-subpath of \(p\), for each \(i = 1, \dots, n\),
\item \(\abs{b_i}_X \geq \zeta_1\), for each \(i=1,\dots,n\);
\item \(b_i\) is not connected to \(b_{i+1}\), for every $i=1,\dots,n-1$;
\item if $b_i$ is connected to an $\mathcal{H}$-component $h$ of $a_i$ or $a_{i-1}$, then
$|h|_X \le \rho$, for each $i=1,\dots,n$.
\end{enumerate}
Then \(p\) is a \((\lambda, c)\)-quasigeodesic without backtracking. \end{proposition}
\begin{proof}
The argument below employs the following trick: for each $i=1,\dots, n$, we replace the $\mathcal{H}$-component of $p$ containing $b_i$ by a single edge $s_i$, and then embed the resulting path $p'$ into a larger path $q$ to which Proposition~\ref{prop:mpquasigeodesic-original} can be applied. Since a subpath of a $(\lambda,c)$-quasigeodesic path without backtracking is again $(\lambda,c)$-quasigeodesic and without backtracking, this will complete the proof. In order to construct the path $q$ we add an extra infinite peripheral subgroup $Z$ by embedding $G$ into a larger relatively hyperbolic group $G_1$.
Let us consider the free product $G_1=G*Z$, where $Z=\langle z \rangle$ is an infinite cyclic group. Since $G$ is hyperbolic relative to the family $\{H_\nu \mid \nu \in \Nu\}$, the group $G_1$ is hyperbolic relative to the union $\{H_\nu \mid \nu \in \Nu\} \cup \{Z\}$ (this can be fairly easily deduced from the definition or from many existing combination theorems for relatively hyperbolic groups, e.g., \cite[Corollary~1.5]{Osin-comb}).
Note that $G$ embeds in $G_1$ and $G_1$ is generated by the finite set $X'=X \sqcup \{z\}$. Let $\mathcal{H}'= \mathcal{H} \sqcup (Z\setminus\{1\})$, so that the Cayley graph $\ga$ is naturally a subgraph of the Cayley graph $\Gamma(G_1,X' \cup \mathcal{H}')$. Therefore we can think of $p$ as a path in $\Gamma(G_1,X' \cup \mathcal{H}')$.
The normal form theorem for free products (\cite[Theorem~IV.1.2]{LS}) implies that the embedding of $G$ into $G_1$ is isometric with respect to both proper and relative metrics, more precisely
\begin{equation}
\label{eq:isom}
d_X(g,h)=d_{X'}(g,h) ~\text{ and }~ \dxh(g,h)=d_{X' \cup \mathcal{H}'} (g,h), \text{ for all } g,h \in G.
\end{equation}
An alternative way to see this is to use the retraction $r:G_1 \to G$, such that $r(x)=x$ for all $x \in X$ and $r(z)=1$. Then $r(X')=X \cup \{1\}$, $r(H_\nu)=H_\nu$, for all $\nu \in \Nu$, and $r(Z)=\{1\}$.
Let $\zeta_0 \ge 0$ and $\lambda_0 \ge 1$ be the constants provided by Proposition~\ref{prop:mpquasigeodesic-original} applied to the group $G_1$, its finite generating set $X'$ and its Cayley graph $\Gamma(G_1,X' \cup \mathcal{H}')$. Set $\zeta_1=\zeta_0+2\rho+1 >0$.
For each $i=1,\dots,n$, let $t_i$ denote the $H_{\nu_i}$-component of $p$ containing the edge $b_i$, where $\nu_i \in \Nu$. Note that $t_1,\dots,t_n$ are pairwise distinct by condition (3); in particular, no two of them share a common edge.
In view of Remark~\ref{rem:comp_of_geod_is_an_edge}, for every $i=1,\dots,n$ we can represent $t_i$ as a concatenation $t_i=h_{i-1}b_if_i$, where
\begin{itemize}
\item $h_{i-1}$ is either the last edge and an $H_{\nu_{i}}$-component of $a_{i-1}$ if $a_{i-1}$ ends with an $H_{\nu_{i}}$-component, or $h_{i-1}$ is the trivial path, consisting of the vertex $(a_{i-1})_+$, if $a_{i-1}$ does not end with an $H_{\nu_{i}}$-component;
\item $f_i$ is either the first edge and an $H_{\nu_i}$-component of $a_i$, if $a_{i}$ starts with an $H_{\nu_{i}}$-component, or $f_i$ is the trivial path, consisting of the vertex $(a_i)_-$, if $a_{i}$ does not start with an $H_{\nu_{i}}$-component.
\end{itemize}
Note that for each $i=1,\dots, n$ we have $|h_{i-1}|_X \le \rho$ and $|f_i|_X \le \rho$, by condition (4). By (2) and the triangle inequality we get
\begin{equation}
\label{eq:length_of_t_i}
|t_i|_X \ge |b_i|_X-2\rho \ge \zeta_0+1,~ \text{ for } i=1,\dots,n.
\end{equation}
Therefore $p$ decomposes as a concatenation \[p=r_0t_1r_1 \dots t_n r_n,\] where $r_i$ is a subpath of $a_i$, $i=0,\dots,n$, so that $a_0=r_0h_0$, $a_1=f_1r_1h_1$, $\dots$, $a_n=f_nr_n$.
By \eqref{eq:length_of_t_i} the endpoints of the $H_{\nu_i}$-component $t_i$ of $p$ must be distinct, hence there is an edge $s_i$ joining them in $\ga$, such that $\Lab(s_i) \in H_{\nu_i}\setminus\{1\}$, $i=1,\dots,n$. Now, \eqref{eq:length_of_t_i} and \eqref{eq:isom} imply that
\begin{equation*}
\label{eq:length_of_s_i}
|s_i|_{X'}=|t_i|_{X'}=|t_i|_X \ge \zeta_0,~ \text{ for } i=1,\dots,n.
\end{equation*}
Choose $k \in \NN$ so that $|z^k|_{X'} \ge \zeta_0$ and let $s_{n+1}$ be the edge in $\Gamma(G_1,X' \cup \mathcal{H}')$, starting at $p_+=(r_n)_+$ and labelled by $z^k$. Observe that $|s_{n+1}|_{X'}=|z^k|_{X'} \ge \zeta_0$.
Consider the path $q$ in $\Gamma(G_1,X' \cup \mathcal{H}')$, defined as the concatenation $q=r_0s_1 \dots r_n s_{n+1}$. By \eqref{eq:isom} the paths $r_0,\dots,r_n$ are still geodesic in $\Gamma(G_1,X' \cup \mathcal{H}')$, and $s_1,\dots,s_{n+1}$ are $\mathcal{H}'$-components of $q$, by construction. Finally, $s_i$ is not connected to $s_{i+1}$, for $i=1,\dots,n-1$, because elements of $G$ that belong to different $H_\nu$-cosets continue to do so in $G_1$, and $s_n$ is not connected to $s_{n+1}$ because $H_{\nu_n}$ and $Z$ are distinct peripheral subgroups of $G_1$. Therefore all of the assumptions of Proposition~\ref{prop:mpquasigeodesic-original} are satisfied, which allows us to conclude that the path $q$ is $(\lambda_0,0)$-quasigeodesic without backtracking in $\Gamma(G_1,X' \cup \mathcal{H}')$.
Consequently, the path $p'=r_0s_1r_1 \dots s_n r_n$ is $(\lambda_0,0)$-quasigeodesic without backtracking in $\Gamma(G_1,X' \cup \mathcal{H}')$, as a subpath of $q$. Since $p'$ only contains vertices and edges from $\ga$, we see that $p'$ is also $(\lambda_0,0)$-quasigeodesic without backtracking in $\ga$.
Now, the original path $p$ can be obtained by replacing the edges $s_1,\dots, s_n$ of $p'$ by paths $t_1,\dots,t_n$, each of which has length at most $3$. Hence, by Lemma~\ref{lem:perturbed_quasigeodesic}, $p$ is $(3\lambda_0,18\lambda_0+6)$-quasigeodesic. Since $p'$ is without backtracking and every $\mathcal{H}$-component of $p$ is connected to an $\mathcal H$-component of $p'$ (and vice-versa), by construction, the path $p$ must also be without backtracking.
Thus we have shown that the path $p$ is $(\lambda,c)$-quasigeodesic without backtracking in $\ga$, where $\lambda=3\lambda_0$ and $c=18\lambda_0+6$. \end{proof}
\subsection{Quasiconvex subsets in relatively hyperbolic groups}\label{subsec:qc_subset_in_rh_gps} In this paper we shall use the definition of a relatively quasiconvex subgroup given by Osin in \cite{OsinRHG}. For convenience we state it in the case of arbitrary subsets rather than just subgroups.
\begin{definition}[Relatively quasiconvex subset] \label{def:rel_qc}
A subset $Q \subseteq G$ is said to be \emph{relatively quasiconvex} (with respect to $\{H_\nu \mid \nu \in \Nu\}$) if there exists $\varepsilon \ge 0$ such that for every geodesic path $q$ in $\Gamma(G,X \cup \mathcal{H})$, with $q_-,q_+ \in Q$, and every vertex $v$ of $q$ we have $d_X(v,Q) \le \varepsilon$.
Any number $\varepsilon \ge 0$ as above will be called a \emph{quasiconvexity constant} of $Q$. \end{definition}
Osin proved that relative quasiconvexity of a subset is independent of the choice of a finite generating set $X$ of $G$: see \cite[Proposition 4.10]{OsinRHG} -- the proof there is stated for relatively quasiconvex subgroups but actually works more generally for relatively quasiconvex subsets.
We outline some basic properties of quasiconvex subsets and subgroups of $G$ in the next two lemmas.
\begin{lemma} \label{lem:props_of_qc_subsets}
Let $Q$ be a relatively quasiconvex subset of $G$. Then
\begin{enumerate}[label=(\alph*)]
\item the subset $gQ$ is relatively quasiconvex, for every $g \in G$;
\item if $T \subseteq G$ lies at a finite $d_X$-Hausdorff distance from $Q$ then $T$ is relatively quasiconvex.
\end{enumerate} \end{lemma}
\begin{proof}
Claim (a) follows immediately from the fact that left multiplication by $g$ induces an isometry of $G$ with respect to both the proper metric $d_X$ and the relative metric $d_{X \cup \mathcal{H}}$.
To prove claim (b), suppose that $\varepsilon \ge 0$ is a quasiconvexity constant of $Q$ and the $d_X$-Hausdorff distance between $Q$ and $T$ is less than $k\in \NN$.
Consider any geodesic path $t$ in $\Gamma(G, X\cup\mathcal{H})$ with $t_-,t_+ \in T$, and take any vertex $v$ of $t$.
Then there are $x,y \in Q$ such that $d_X(x,t_-)\le k$ and $d_X(y,t_+) \leq k$. Let $q$ be any geodesic connecting $x$ with $y$.
Then $q$ is $k$-similar to $t$, hence there is a vertex $u$ of $q$ such that $d_X(v,u) \leq \kappa$, where $\kappa = \kappa(1,0,k) \ge 0$ is the global constant given by Proposition~\ref{prop:osinbcp} applied to \(k\)-similar geodesics.
By the relative quasiconvexity of $Q$, there exists $w \in Q$ such that $d_X(u,w) \leq \varepsilon$. Moreover, \(d_X(w,T) \leq k\) by assumption.
Therefore $d_X(v,T) \leq \kappa+\varepsilon+k$, thus $T$ is relatively quasiconvex in $G$. \end{proof}
\begin{lemma} \label{lem:props_of_qc_sbgps}
Suppose that $Q \leqslant G$ is a relatively quasiconvex subgroup.
Then for all $g \in G$ and $Q' \leqslant_f Q$ the subgroups $gQg^{-1}$ and $Q'$ are relatively quasiconvex in $G$. \end{lemma}
\begin{proof}
By claim (a) of Lemma~\ref{lem:props_of_qc_subsets}, the coset $gQ$ is relatively quasiconvex and the $d_X$-Hausdorff distance between this coset and $gQg^{-1}$ is at most $|g|_X$, hence $gQg^{-1}$ is relatively quasiconvex in $G$ by claim (b) of the same lemma.
Suppose that $Q=\bigcup_{i=1}^m Q' h_i$, where $h_i \in Q$, $i=1,\dots,m$.
Then the $d_X$-Hausdorff distance between $Q$ and $Q'$ is bounded above by $\max\{|h_i|_X \mid 1\le i \le m\}$, so $Q'$ is relatively quasiconvex by Lemma~\ref{lem:props_of_qc_subsets}(b). \end{proof}
\begin{corollary} \label{cor:parab->qc}
Any parabolic subgroup of $G$ is relatively quasiconvex. \end{corollary}
\begin{proof}
Let $H=gQg^{-1}$ be a parabolic subgroup, where $g \in G$ and $Q \leqslant H_\nu$, for some $\nu \in \Nu$.
The subgroup $Q$ is relatively quasiconvex in $G$ (with quasiconvexity constant $0$), because any geodesic connecting two elements of $Q$ consists of a single edge in $\ga$.
Therefore $H$ is relatively quasiconvex by Lemma~\ref{lem:props_of_qc_sbgps}. \end{proof}
\begin{lemma} \label{lem:fg_qc_int_parab_is_fg}
Let $P$ be a maximal parabolic subgroup of $G$ and let $Q$ be a finitely generated relatively quasiconvex subgroup of $G$.
Then the subgroups $P$ and $Q \cap P$ are finitely generated. \end{lemma}
\begin{proof}
The fact that each $H_\nu$ is finitely generated, provided $G$ is finitely generated, was proved by Osin in \cite[Theorem 1.1]{OsinRHG}.
Now, Hruska \cite[Theorem 9.1]{HruskaRHCG} proved that every relatively quasiconvex subgroup $Q$ of $G$ is itself relatively hyperbolic, and that the maximal parabolic subgroups of $Q$ are precisely the infinite intersections of $Q$ with maximal parabolic subgroups of $G$.
In other words, if $P \leqslant G$ is maximal parabolic, then $Q \cap P$ is either finite or a maximal parabolic subgroup of $Q$.
Combined with Osin's result \cite[Theorem 1.1]{OsinRHG} mentioned above we can conclude that if $Q$ is finitely generated then so is $Q \cap P$, as required. \end{proof}
The following property of quasiconvex subgroups will be useful.
\begin{lemma} \label{lem:shortening}
Let $Q, R \leqslant G$ be relatively quasiconvex subgroups of $G$. For every $\zeta \ge 0$ there exists a constant \(\mu=\mu(\zeta) \geq 0\) such that the following holds.
Suppose $x \in G$, \(a \in Q \), \(b \in R\) are some elements, $[x,xa]$ and $[x,xb]$ are geodesic paths in $\ga$, and \(u \in [x,xa]\), \(v \in [x,xb]\) are vertices such that \(d_X(u,v) \leq \zeta\).
Then there is an element \(z \in x(Q \cap R)\) such that \(d_X(u,z) \leq \mu\) and \(d_X(v,z) \leq \mu\). \end{lemma}
\begin{proof}
Denote by \(\varepsilon \geq 0\) a quasiconvexity constant of the subgroups \(Q\) and \(R\).
After applying the left translation by $x^{-1}$, which is an isometry with respect to both metrics $d_X$ and $\dxh$, we can assume that $x=1$.
Let $K'=K'(Q,R,\varepsilon+\zeta)$ be the constant given by Lemma~\ref{lem:nbhdintersection}.
Since $x=1 \in Q \cap R$, $xa=a \in Q$ and $xb=b \in R$, by the relative quasiconvexity of $Q$ and $R$ we know that $u \in N_X(Q,\varepsilon)$ and $v \in N_X(R,\varepsilon)$.
By the assumption that $d_X(u,v) \le \zeta$, it follows that $u \in N_X(Q,\varepsilon+\zeta) \cap N_X(R,\varepsilon+\zeta)$, hence $u \in N_X(Q \cap R,K')$ by Lemma~\ref{lem:nbhdintersection}.
Thus there exists $z \in Q \cap R$ such that $d_X(u,z) \le K'$, and, hence, $d_X(v,z) \le K'+\zeta$ by the triangle inequality.
Therefore the statement of the lemma holds for $\mu=K'+\zeta$. \end{proof}
The next combination theorem was proved by Mart\'{i}nez-Pedroza. \begin{theorem}[{\cite[Theorem~1.1]{MPComb}}] \label{thm:M-P_comb}
Let $G$ be a relatively hyperbolic group generated by a finite set $X$.
Suppose that $Q$ is a relatively quasiconvex subgroup of $G$, $P$ is a maximal parabolic subgroup of $G$ and $D=Q \cap P$.
There is a constant $C \ge 0$ such that the following holds.
If $H \leqslant P$ is any subgroup satisfying
\begin{enumerate}
\item $H \cap Q=D$, and
\item $\minx( H \setminus D) \ge C$,
\end{enumerate}
then the subgroup $A=\langle H,Q \rangle$ is relatively quasiconvex in $G$ and is naturally isomorphic to the amalgamated free product $H*_{D} Q$.
Moreover, for every maximal parabolic subgroup $T$ of $G$, there exists $u \in A$ such that \[\text{either } A \cap T \subseteq uQu^{-1}~\text{ or }~ A \cap T \subseteq uHu^{-1}.\] \end{theorem}
\part{Quasiconvexity of virtual joins} \label{part:metric_qc_double_cosets} This part of the paper is mostly devoted to the proofs of Theorems \ref{thm:metric_qc} and \ref{thm:sep->qc_intro}. Let us start by giving brief outlines of the arguments.
Suppose \(G\) is a group generated by a finite set \(X\) and hyperbolic relative to a collection of subgroups \(\{H_\nu \, | \, \nu \in \Nu\}\). Denote \(\mathcal{H} = \bigsqcup_{\nu \in \Nu} (H_\nu \setminus \{1\})\) and take any \(A \geq 0\). Consider two finitely generated relatively quasiconvex subgroups \(Q, R \leqslant G \). Set \(S = Q \cap R\) and suppose that \(Q' \leqslant Q\) and \(R' \leqslant R\) are subgroups satisfying conditions \descref{C1}-\descref{C5} from Subsection~\ref{subsec:3.1}, with some finite collection of maximal parabolic subgroups $\mathcal{P}$ of $G$ (which is independent of $A$) and parameters \(B\) and \(C\) that are sufficiently large with respect to \(A\).
Every element \(g \in \langle Q', R' \rangle\) can be written as a product of elements of \(Q'\) and \(R'\), which gives rise (not necessarily uniquely) to a broken geodesic line in \(\Gamma(G, X \cup \mathcal{H})\) whose label represents \(g\) in $G$. We choose a path \(p\) from the collection of such broken lines, representing \(g\), that is minimal in a certain sense. The path \(p\) may fail to be uniformly quasigeodesic, as it may travel through $H_\nu$-cosets for an arbitrarily long time. We do, however, have some metric control over such instances of backtracking, using the fact that \(Q'\) and \(R'\) satisfy conditions \descref{C1}-\descref{C5} and the minimality of \(p\).
We construct a new path from \(p\), which we call the \emph{shortcutting} of \(p\), that turns out to be uniformly quasigeodesic. Informally speaking, the shortcutting of \(p\) is obtained by replacing each maximal instance of backtracking in consecutive geodesic segments of \(p\) with a single edge, then connecting these edges in sequence by geodesics. The resulting path can be seen to satisfy the hypotheses of Proposition~\ref{prop:mpquasigeodesic}. It follows that the shortcutting of \(p\) is uniformly quasigeodesic, and hence \(\langle Q', R' \rangle\) is relatively quasiconvex. Properties \descref{P2} and \descref{P3} also follow from this quasigeodesicity, giving us Theorem~\ref{thm:metric_qc}.
Now suppose that \(G\) is QCERF and its peripheral subgroups are double coset separable.
In Theorem~\ref{thm:sep->qc_comb} we use the separability assumptions on \(G\) and \(\{H_\nu \, | \, \nu \in \Nu\}\) to deduce the existence of a finite index subgroup \(M \leqslant_f G\) such that \(Q' = Q \cap M \leqslant_f Q\) and \(R' = R \cap M \leqslant_f R\) satisfy conditions \descref{C1}-\descref{C5} with constants \(B\) and \(C\) large enough to apply Theorem~\ref{thm:metric_qc} (as suggested in Remark~\ref{rem:sep->metric}). Conditions \descref{C1} and \descref{C4} are essentially automatic. Conditions \descref{C2}, \descref{C3} and \descref{C5} can be ensured for the subgroups \(Q'\) and \(R'\) by Lemma~\ref{lem:sep->large_minx}, using the QCERF condition on \(G\), the separability of the double cosets \(PS\) (where \(P\) is one of finitely many maximal parabolic subgroups), and the double coset separability of the peripheral subgroups, respectively.
The main technical difficulty is in showing that the double cosets of the form \(PS\) as above are separable in \(G\). To this end, we prove a general result about lifting separability of certain double cosets in amalgamated free products. This is then combined with a result of Mart\'{i}nez-Pedroza (Theorem~\ref{thm:M-P_comb}), allowing us to deduce Theorem~\ref{thm:sep->qc_intro} from Theorem~\ref{thm:metric_qc}.
\section{Path representatives} \label{sec:path_reps} Let us set the notation that will be used in the next few sections.
\begin{convention} \label{conv:main}
We fix a group $G$, generated by a finite set $X$, which is hyperbolic relative to a finite family of subgroups \(\lbrace H_\nu \, | \, \nu \in \Nu \rbrace\). We let $\mathcal{H}=\bigsqcup_{\nu \in \Nu} (H_\nu\setminus\{1\})$.
It follows that the Cayley graph $\ga$ is $\delta$-hyperbolic, for some $\delta \in \NN$ (see Lemma~\ref{lem:Cayley_graph-hyperbolic}).
Furthermore, we assume that \(Q, R \leqslant G\) are fixed relatively quasiconvex subgroups of \(G\), with a quasiconvexity constant \(\varepsilon \ge 0\), and denote $S=Q \cap R$. \end{convention}
In this section \(Q'\) and \(R'\) will denote some subgroups of $Q$ and $R$ respectively. We will introduce path representatives of elements in $\langle Q',R' \rangle$ and will order such representatives by their types. This will be crucial in our proof of Theorem~\ref{thm:metric_qc}.
\begin{definition}[Path representative, I] \label{def:path_reps}
Consider an arbitrary element \(g \in \langle Q', R' \rangle\).
Let \(p=p_1 \dots p_n\) be a broken line in \(\Gamma(G,X\cup\mathcal{H})\) with geodesic segments \(p_1, \dots, p_n\), such that $\elem{p}=g$ and \(\elem{p_i} \in Q' \cup R'\) for each \(i \in \{1,\dots,n\}\).
We will call \(p\) a \emph{path representative} of \(g\). \end{definition}
To choose an optimal path representative we define their types.
\begin{definition}[Type of a path representative, I] \label{def:type_of_path_rep}
Suppose that $p=p_1\dots p_n$ is a broken line in $\ga$.
For each $i=1,\dots,n$, let $T_i$ denote the set of all \(\mathcal{H}\)-components of $p_i$, and let $T= \bigcup_{i=1}^n T_i$.
We define the \emph{type} $\tau(p)$ of $p$ to be the triple
\[
\tau(p)=\Big(n,\ell(p),\sum_{t \in T} |t|_X \Big) \in {\NN_0}^3,
\]
where $\ell(p)=\sum_{i=1}^n \ell(p_i)$ is the length of $p$. \end{definition}
\begin{definition}[Minimal type]
Given \(g \in \langle Q', R' \rangle\), the set $\mathcal S$ of all path representatives of $g$ is non-empty.
Therefore the subset $\tau(\mathcal{S})=\{\tau(p) \mid p \in \mathcal{S}\} \subseteq {\NN_0}^3$, where ${\NN_0}^3$ is equipped with the lexicographic order, will have a unique minimal element.
We will say that $p=p_1 \dots p_n$ is a \emph{path representative of $g$ of minimal type} if $\tau(p)$ is the minimal element of $\tau(\mathcal S)$. \end{definition}
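For example, under the lexicographic order on \({\NN_0}^3\) we have
\[
(3,\,9,\,7) \;<\; (3,\,10,\,4),
\]
so a path representative consisting of \(3\) segments of total length \(9\) has smaller type than one consisting of \(3\) segments of total length \(10\), regardless of the total \(X\)-lengths of the \(\mathcal{H}\)-components of the two representatives: the third coordinate only breaks ties in the first two.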
\begin{remark} \label{rem:alt}
Note that if \(p_1\) and \(p_2\) are paths with \((p_1)_+ = (p_2)_-\) whose labels both represent elements of \(Q'\) (or, respectively, both \(R'\)), then the label of any geodesic \([(p_1)_-,(p_2)_+]\) also represents an element of \(Q'\) (respectively, \(R'\)).
Hence in a path representative of $g \in \langle Q',R' \rangle$ of minimal type, the labels of the consecutive segments necessarily alternate between representing elements of \(Q' \setminus (Q' \cap R')\) and \(R' \setminus (Q' \cap R')\), whenever \(g\) is not itself an element of \(Q' \cap R'\). \end{remark}
The minimality of the type of a path representative is thus a numerical condition on the total lengths of the paths $p_i$ and the total lengths of their components. In the next few sections we will study local properties induced by this global condition. The first such property is stated in the next lemma.
\begin{notation}
Let \(x,y,z \in G\).
We will write \(\langle x, y \rangle^{rel}_z=\frac12 (\dxh(x,z)+\dxh(y,z)-\dxh (x,y))\) to denote the Gromov product of \(x\) and \(y\) with respect to \(z\) in the relative metric \(d_{X\cup\mathcal{H}}\). \end{notation}
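Two standard consequences of the triangle inequality for $\dxh$, which will be used implicitly below, are that the relative Gromov product is non-negative and bounded above by the distances to the base point:
\[
0 \le \langle x, y \rangle^{rel}_z \le \min\{\dxh(x,z),\dxh(y,z)\};
\]
in particular, $\langle x, y \rangle^{rel}_z = 0$ whenever $z$ lies on a geodesic from $x$ to $y$.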
\begin{lemma}[Gromov products are bounded] \label{lem:bddinnprod}
There is a constant \( C_0 \geq 0\) such that the following holds.
Let $Q' \leqslant Q$ and $R' \leqslant R$ be subgroups satisfying condition \descref{C1}. If \(p=p_1 \dots p_n\) is a minimal type path representative of an element \(g \in \langle Q', R' \rangle\) and $f_0, \dots, f_n \in G$ are the nodes of $p$ (i.e., $f_{i-1}=(p_i)_-$, for $i=1,\dots,n$, and $f_n=(p_n)_+$) then
\(\langle f_{i-1}, f_{i+1} \rangle_{f_i}^{rel} \leq C_0\) for each \(i= 1, \dots, n-1\). \end{lemma}
\begin{proof}
Let \(\sigma \in \NN_0\) be the constant from Proposition~\ref{prop:osinslimtriangles} and let \(\mu=\mu(\sigma) \geq 0\) be given by Lemma~\ref{lem:shortening}.
Set \(C_0 = \mu + \delta + 2 \sigma +2\), and assume that \(p=p_1 \dots p_n\) is a path representative of \(g \in \langle Q', R' \rangle\) of minimal type.
Take any $i \in \{1,\dots,n-1\}$. Choose vertices \(u \in p_i \) and \(v \in p_{i+1}\) so that
\(d_{X\cup\mathcal{H}}(f_i,u) = d_{X\cup\mathcal{H}}(f_i,v) = \lfloor\langle f_{i-1}, f_{i+1} \rangle_{f_i}^{rel} \rfloor \). As \(\Gamma(G,X\cup\mathcal{H})\) is \(\delta\)-hyperbolic, we must have \(d_{X\cup\mathcal{H}}(u,v) \leq \delta\).
If \(\langle f_{i-1}, f_{i+1} \rangle_{f_i}^{rel} < C_0\) then we are done, so suppose otherwise.
Then $\dxh(u,f_i) \ge \delta+\sigma+1 \in \NN$, so
there is a vertex \(u_1\) on the subpath $[u,f_i]$ of $p_i$ such that \[ d_{X\cup\mathcal{H}}(u_1,u) = \delta + \sigma +1.\]
Applying Proposition~\ref{prop:osinslimtriangles} to the geodesic triangle \(\Delta\) with sides $[u,f_i]$, $[f_i,v]$ and $[u,v]$ (here we choose $[f_i,v]$ to be a subpath of $p_{i+1}$), we can find some vertex \(v_1 \in [u,v] \cup [f_i,v]\) with \(d_X(v_1,u_1) \leq \sigma\).
If \(v_1 \in [u,v]\), then, since \(v_1\) lies on the geodesic \([u,v]\) of length at most \(\delta\), \[d_{X\cup\mathcal{H}}(u_1,u) \le \dxh(u_1,v_1)+\dxh(v_1,u)\leq \sigma + \delta,\] which would contradict the choice of \(u_1\).
Therefore it must be that \(v_1 \in [f_i,v]\) (see Figure~\ref{fig:bdd_inn_prod}).
\begin{figure}
\caption{We obtain a different path representative for \(g\) by replacing \(p_i\) and \(p_{i+1}\) with geodesics from \(f_{i-1}\) to \(z\) to \(f_{i+1}\). }
\label{fig:bdd_inn_prod}
\end{figure}
Since the path representative $p$ has minimal type, in view of Remark~\ref{rem:alt} we must have either $\elem{p_i} \in Q'$ and $\elem{p_{i+1}} \in R'$ or $\elem{p_i} \in R'$ and $\elem{p_{i+1}} \in Q'$. Without loss of generality let us assume the former.
We can apply Lemma~\ref{lem:shortening} to find \(z \in f_i(Q \cap R)\) with \(d_X(u_1,z) \leq \mu\) and \(d_X(v_1,z) \leq \mu\).
Let $p_i'$ be a geodesic path in $\ga$ joining $f_{i-1}=(p_i)_-$ with $z$ and let $p_{i+1}'$ be a geodesic path joining $z$ with $f_{i+1}=(p_{i+1})_+$.
Observe that $f_{i-1} \in f_iQ'$ and $Q \cap R \subseteq Q'$ by \descref{C1}, whence
\[
\elem{p'_i}=f_{i-1}^{-1}z \in Q' f_i^{-1} f_i (Q \cap R)=Q'.
\]
Similarly, $\elem{p_{i+1}'} \in R'$. It follows that the path $p'=p_1 \dots p_{i-1} p_i' p_{i+1}' p_{i+2} \dots p_n$ is also a path representative of the same element $g \in \langle Q',R' \rangle$.
Since $p$ has minimal type, it must be that $\ell(p_i)+\ell(p_{i+1}) \le \ell(p_i')+\ell(p_{i+1}')$, which can be rewritten as
\begin{equation}
\label{eq:ineq_on_dist-1}
\dxh(f_{i-1},f_i)+\dxh(f_i,f_{i+1}) \le \dxh(f_{i-1},z)+\dxh(z,f_{i+1}).
\end{equation}
Since $u_1 \in p_i$, we have $\dxh(f_{i-1},f_i)=\dxh(f_{i-1},u_1)+\dxh(u_1,f_i)$.
On the other hand, \[\dxh(f_{i-1},z) \le \dxh(f_{i-1},u_1)+\dxh(u_1,z) \le \dxh(f_{i-1},u_1)+\mu,\] by the triangle inequality.
Similarly,
\[
\dxh(f_{i},f_{i+1})=\dxh(f_{i},v_1)+\dxh(v_1,f_{i+1}) \text{ and } \dxh(z,f_{i+1}) \le \dxh(v_1,f_{i+1})+\mu.
\]
Combining the above inequalities with \eqref{eq:ineq_on_dist-1}, we obtain
\begin{equation}
\label{eq:Gr_prod_bound}
\dxh(u_1,f_i)+\dxh(f_i,v_1) \le 2 \mu.
\end{equation}
Now, by construction, we have
\begin{equation}
\label{eq:d(u1,fi)}
\dxh(u_1,f_i) = \dxh(u,f_i)-\dxh(u_1,u)=\lfloor\langle f_{i-1}, f_{i+1} \rangle_{f_i}^{rel}\rfloor-(\delta+\sigma+1).
\end{equation}
On the other hand, since $\dxh(v_1,u_1) \le d_X(v_1,u_1) \le \sigma$, we obtain
\begin{equation}
\label{eq:d(fi,v1)}
\dxh(f_i,v_1) \ge \dxh(u_1,f_i)-\dxh(v_1,u_1) \ge \lfloor\langle f_{i-1}, f_{i+1} \rangle_{f_i}^{rel}\rfloor-(\delta+2\sigma+1).
\end{equation}
After combining \eqref{eq:d(u1,fi)}, \eqref{eq:d(fi,v1)} and \eqref{eq:Gr_prod_bound}, we obtain
\[
2\lfloor\langle f_{i-1}, f_{i+1} \rangle_{f_i}^{rel}\rfloor-(2\delta+3\sigma+2) \le 2 \mu.
\]
Therefore $\lfloor\langle f_{i-1}, f_{i+1} \rangle_{f_i}^{rel}\rfloor \le \mu+\delta+\frac{3\sigma}{2}+1$, and since $\langle f_{i-1}, f_{i+1} \rangle_{f_i}^{rel} < \lfloor\langle f_{i-1}, f_{i+1} \rangle_{f_i}^{rel}\rfloor+1$, we can conclude that $\langle f_{i-1}, f_{i+1} \rangle_{f_i}^{rel} \le \mu+ \delta+2 \sigma+2=C_0$, as required. \end{proof}
\section{Adjacent backtracking in path representatives of minimal type} \label{sec:adj_backtracking} In this section we continue working under Convention~\ref{conv:main}. Our goal here is to study the possible backtracking within two adjacent segments in a minimal type path representative.
\begin{lemma} \label{lem:one_comp_in_cusp_is_bounded}
For all non-negative numbers $\zeta$ and $\xi$ there exists $\tau=\tau(\zeta,\xi) \ge 0$ such that the following holds.
Suppose that $Q' \leqslant Q$ and $R' \leqslant R$ are subgroups satisfying \descref{C1}, $g \in \langle Q',R' \rangle$ and $p=p_1\dots p_n$ is a path representative of $g$ of minimal type.
If, for some \(i \in \{1,\dots,n-1\}\), $s$ and $t$ are connected $\mathcal{H}$-components of $p_i$ and $p_{i+1}$, respectively, such that $d_X(s_-,t_+) \le \zeta$ and $d_X(s_+,(p_i)_+) \le \xi$, then $|s|_X \le \tau$ and $|t|_X \le \tau$. \end{lemma}
\begin{proof}
Let $\mu=\mu(\zeta) \ge 0$ be the constant from Lemma~\ref{lem:shortening}. Since $|X|<\infty$ and $|\Nu|<\infty$ we can define the constant $k \ge 0$ as follows:
\begin{equation}
\label{eq:def_of_k}
k=\max\{K'(Q \cap R, c H_\nu c^{-1},\xi+\mu) \mid \nu \in \Nu,~c \in G,~|c|_X \le \xi\},
\end{equation}
where for each $c \in G$ and $\nu \in \Nu$ the constant $K'(Q \cap R, c H_\nu c^{-1},\xi+\mu)$ is given by Lemma~\ref{lem:nbhdintersection}.
Let $L \ge 0$ be the constant from Lemma~\ref{lem:isol_comp_in_triangles_are_short} and set $\tau=2k+2\xi+ \zeta+L \ge 0$.
Let $p=p_1\dots p_n$ be a path representative of some $g \in \langle Q',R' \rangle$ of minimal type. Suppose that $s$ and $t$ are connected $H_\nu$-components of $p_i$ and $p_{i+1}$ respectively, for some $i \in \{1,\dots,n-1\}$ and $\nu \in \Nu$, such that $d_X(s_-,t_+) \le \zeta$ and $d_X(s_+,(p_i)_+) \le \xi$.
Note that, by Lemma~\ref{lem:isol_comp_in_triangles_are_short},
\begin{equation}
\label{eq:dist_from_s+_to_t-}
d_X(s_+,t_-) \le L.
\end{equation}
Denote $x=(p_i)_+=(p_{i+1})_- \in G$, $a=x^{-1} s_+ \in G$ and $b=x^{-1}t_- \in G$: see Figure~\ref{fig:bounded_comps}.
\begin{figure}
\caption{Illustration of Lemma~\ref{lem:one_comp_in_cusp_is_bounded} }
\label{fig:bounded_comps}
\end{figure}
Note that
\begin{equation}
\label{eq:a_and_b_in_the_same_H-coset}
aH_\nu=bH_\nu,~\text{ hence } aH_\nu a^{-1}=bH_\nu b^{-1},
\end{equation}
because the $H_\nu$-components $s$ and $t$ are connected.
Using the lemma hypotheses and (\ref{eq:dist_from_s+_to_t-}) we also have
\begin{equation}
\label{eq:mod_a_and_b}
|a|_X=d_X(x,s_+) \le \xi ~\text{ and }~ |b|_X \le d_X(x,s_+)+d_X(s_+,t_-) \le \xi+L.
\end{equation}
In view of Remark~\ref{rem:alt}, without loss of generality we can assume that $\Lab(p_i)$ represents an element of $Q'$ and $\Lab(p_{i+1})$ represents an element of $R'$ in $G$ (the other case can be treated similarly).
Applying Lemma~\ref{lem:shortening}, we can find $z \in x(Q \cap R)$ such that $d_X(s_-,z) \le \mu$.
Consider the element $u=s_- a^{-1}= x a \elem{s}^{-1} a^{-1} \in x aH_\nu a^{-1}$, and observe that $d_X(s_-,u)=|a^{-1}|_X \le \xi$.
On the other hand, $d_X(s_-,x(Q \cap R)) \le d_X(s_-,z) \le \mu$, whence
\[
s_- \in N_X\Bigl(x(Q \cap R),\xi+\mu\Bigr) \cap N_X\Bigl(x aH_\nu a^{-1},\xi+\mu\Bigr).
\]
Therefore, according to Lemma~\ref{lem:nbhdintersection}, there exists $w \in x(Q \cap R \cap aH_\nu a^{-1})$ such that
\begin{equation}
\label{eq:d_X(s_-,w)}
d_X(s_-,w) \le k,
\end{equation}
where $k \ge 0$ is the constant defined in \eqref{eq:def_of_k}.
Let $\alpha$ be the subpath of $p_i$ from $s_+=xa$ to $(p_i)_+=x$. Choose the geodesic path $[wa,w]$ as the translate $wx^{-1} \alpha$.
Observe that \(s_- \in xaH_\nu\) and \(wa \in xaH_\nu a^{-1} a = xaH_\nu\) lie in the same \(H_\nu\)-coset.
Thus $\dxh(s_-,wa) \le 1$;
if \(s_- = wa\) we let $e$ be the trivial path in $\ga$ consisting of the single vertex $s_-$, and otherwise we let $e$ be the edge of $\ga$ labelled by an element of $H_\nu \setminus \{1\}$ that joins $s_-$ to $wa$.
Define the path $q$ in $\ga$ as the concatenation
\begin{equation}
\label{eq:def_of_q}
q=[(p_i)_-,s_-]\, e\, [wa,w],
\end{equation}
where $[(p_i)_-,s_-]$ is chosen as the initial segment of $p_i$.
Since $\ell(e) \le 1=\dxh(s_-,s_+)$, we can bound the length of the path $q$ from above as follows:
\begin{equation}
\begin{split}
\label{eq:length_of_q}
\ell(q) & =\dxh((p_i)_-,s_-)+\ell(e)+\dxh(wa,w) \\
& \le \dxh((p_i)_-,s_-)+\dxh(s_-,s_+)+\dxh(xa,x)=\ell(p_i).
\end{split}
\end{equation}
Now we construct a similar path from $w$ to $(p_{i+1})_+$.
Let $\beta$ be the subpath of $p_{i+1}$ from $(p_{i+1})_-=x$ to $t_-=xb$.
Choose the geodesic path $[w,wb]$ as the translate $wx^{-1} \beta$.
Recall that $t_+\in xb H_\nu$ and note that the inclusion $w \in xaH_\nu a^{-1}$, together with \eqref{eq:a_and_b_in_the_same_H-coset}, imply that $wb \in xbH_\nu$ also.
If $t_+=wb$ then let $f$ be the trivial path in $\ga$ consisting of the single vertex $t_+$, otherwise let \(f\) be the edge in \(\ga\) joining the vertices $wb$ and $t_+$ with $\Lab(f) \in H_\nu \setminus\{1\}$.
We now define the path $r$ in $\ga$ as the concatenation
\begin{equation}
\label{eq:def_of_r}
r=[w,wb]\, f \, [t_+,(p_{i+1})_+],
\end{equation}
where $[t_+,(p_{i+1})_+]$ is chosen as the ending segment of $p_{i+1}$.
Similarly to the case of $q$ we can estimate that
\begin{equation}
\label{eq:length_of_r}
\ell(r) \le \ell(p_{i+1}).
\end{equation}
Note that since $q_-=(p_i)_-=x \elem{p_i}^{-1} \in xQ'$, $q_+=w \in x(Q \cap R)$ and $Q \cap R \subseteq Q'$, we have $\elem{q} \in Q'$. Similarly, $\elem{r} \in R'$.
Let $p_i'$ be a geodesic path from $q_-=(p_i)_-$ to $q_+=w$, and let $p_{i+1}'$ be a geodesic path from $w=r_-$ to $(p_{i+1})_+=r_+$.
Since $\elem{p_i'}=\elem{q} \in Q'$ and $\elem{p_{i+1}'}=\elem{r} \in R'$, the broken line $p'=p_1 \dots p_{i-1} p_i' p_{i+1}' p_{i+2} \dots p_n$ is a path representative of the same element $g \in G$.
If at least one of the paths $q$, $r$ is not geodesic in $\ga$, then, in view of \eqref{eq:length_of_q} and \eqref{eq:length_of_r} we have
\[
\ell(p_i')+\ell(p_{i+1}') < \ell(q)+\ell(r) \le \ell(p_i)+\ell(p_{i+1}),
\]
hence $\ell(p)=\sum_{i=1}^n \ell(p_i)>\ell(p')$, contradicting the minimality of the type of $p$.
Hence both $q$ and $r$ must be geodesic in $\ga$, so we can further assume that $p_i'=q$ and $p_{i+1}'=r$.
Moreover, the inequality $\ell(p) \le \ell(p')$ must hold by the minimality of the type of $p$.
Therefore $\ell(p_i)+\ell(p_{i+1}) \le \ell(q)+\ell(r)$, which, in view of \eqref{eq:length_of_q} and \eqref{eq:length_of_r}, implies that $\ell(q)=\ell(p_i)$, $\ell(r)=\ell(p_{i+1})$ and $\ell(p)=\ell(p')$.
In particular, $e$ and $f$ are actual edges of $\ga$ (and not trivial paths).
The definition \eqref{eq:def_of_q} of $q$ implies that $\Lab(q)$ can differ from $\Lab(p_i)$ in at most one letter, which is the label of the $H_\nu$-component $e$ in $\Lab(q)$ and the label of the $H_\nu$-component $s$ in $\Lab(p_i)$.
Indeed,
\[
\Lab(p_i)=\Lab([(p_i)_-,s_-]) \Lab(s) \Lab(\alpha) \text{ and } \Lab(q)=\Lab([(p_i)_-,s_-]) \Lab(e) \Lab(\alpha),
\]
where we used the fact that $[wa,w]$ is the left translate of $\alpha$, by definition, and hence its label is the same $\Lab(\alpha)$.
Similarly, \eqref{eq:def_of_r} implies that $\Lab(r)$ can differ from $\Lab(p_{i+1})$ in at most one letter, which is the label of $f$ in $r$ and the label of $t$ in $p_{i+1}$.
The minimality of the type of $p$ therefore implies that
\begin{equation}
\label{eq:sum_of_lengths_of_s_and_t}
|s|_X+|t|_X \le |e|_X+|f|_X.
\end{equation}
Now, using the triangle inequality, \eqref{eq:d_X(s_-,w)} and \eqref{eq:mod_a_and_b} we obtain
\begin{equation}
\label{eq:mod_e}
|e|_X=d_X(s_-,wa) \le d_X(s_-,w)+d_X(w,wa) \le k+|a|_X \le k+\xi .
\end{equation}
To estimate $|f|_X$ we also use the inequality $d_X(s_-,t_+) \le \zeta$:
\begin{equation}
\label{eq:mod_f}
\begin{split}
|f|_X=d_X(t_+,wb) & \le d_X(t_+,w)+|b|_X \\
& \le d_X(t_+,s_-)+d_X(s_-,w)+\xi+L \le \zeta+ k+\xi+L .
\end{split}
\end{equation}
Combining \eqref{eq:sum_of_lengths_of_s_and_t}--\eqref{eq:mod_f}, we obtain
\[\max\{|s|_X,|t|_X\}\le |e|_X+|f|_X \le 2k + 2 \xi + \zeta + L=\tau.\]
This inequality completes the proof of the lemma. \end{proof}
The following auxiliary definition will only be used in the remainder of this section.
\begin{definition} \label{def:tau_j}
Let $C_0 \ge 0$ be the constant provided by Lemma~\ref{lem:bddinnprod}, let $L \ge 0$ be the constant given by Lemma~\ref{lem:isol_comp_in_triangles_are_short} and let $\kappa=\kappa(1,0,L) \ge 0$ be the constant from Proposition~\ref{prop:osinbcp}.
Define the sequences $(\zeta_j)_{j \in \NN}$, $(\xi_j)_{j \in \NN}$ and $(\tau_j)_{j \in \NN}$ of non-negative real numbers as follows.
Set $\zeta_1=\kappa$, $\xi_1=C_0+1$ and $\tau_1=\max\{\kappa, \tau(\zeta_1,\xi_1)\}$, where $\tau(\zeta_1,\xi_1)$ is given by Lemma~\ref{lem:one_comp_in_cusp_is_bounded}.
Now suppose that $j>1$ and the first $j-1$ members of the three sequences have already been defined. Then we set
\[
\zeta_j=\kappa,~\xi_j=C_0+1+\sum_{k=1}^{j-1} \tau_k \text{ and } \tau_j=\max\{\kappa,\tau(\zeta_j,\xi_j)\},
\]
where $\tau(\zeta_j,\xi_j)$ is given by Lemma~\ref{lem:one_comp_in_cusp_is_bounded}. \end{definition}
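For orientation, the recursion unwinds as
\[
\xi_j=C_0+1+\sum_{k=1}^{j-1}\tau_k, \qquad \tau_j=\max\{\kappa,\tau(\kappa,\xi_j)\},
\]
so, since each $\tau_k \ge \kappa \ge 0$, the thresholds satisfy $\xi_1 \le \xi_2 \le \dots$; note that all three sequences depend only on global constants, and not on the subgroups $Q'$, $R'$ or the element under consideration.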
\begin{lemma} \label{lem:shortspikes}
There exists a constant $C_1 \ge 0$ such that the following is true.
Let $Q' \leqslant Q$ and $R' \leqslant R$ be subgroups satisfying \descref{C1} and let $p=p_1\dots p_n$ be a minimal type path representative for an element $g \in \langle Q',R' \rangle$.
Suppose that, for some \(i \in \{1,\dots,n-1\}\), $q$ and $r$ are connected $\mathcal{H}$-components of $p_i$ and $p_{i+1}$ respectively.
Then \(d_X(q_+, (p_i)_+) \leq C_1\) and \(d_X((p_i)_+,r_-) \leq C_1\). \end{lemma}
\begin{proof}
Denote $x=(p_{i})_+=(p_{i+1})_- \in G$.
First, let us show that
\begin{equation}
\label{eq:dxh(q_+,(p_i)_+}
\dxh(q_+,x) \le C_0+1,
\end{equation}
where $C_0 \ge 0$ is the global constant provided by Lemma~\ref{lem:bddinnprod}.
Indeed, the latter lemma states that $\langle (p_i)_-,(p_{i+1})_+ \rangle_{x}^{rel} \le C_0$.
Since $q_+$ and $r_-$ are points on the geodesics $p_i$ and $p_{i+1}$, Remark~\ref{rem:Gr_prod_ineq} implies that
\[
\langle q_+,r_-\rangle_{x}^{rel} \le \langle (p_i)_-,(p_{i+1})_+ \rangle_{x}^{rel} \le C_0.
\]
Consequently,
\begin{align*}
C_0 \ge \langle q_+,r_-\rangle_{x}^{rel} & = \frac12 \Bigl( \dxh(x,q_+)+\dxh(x,r_-)-\dxh(q_+,r_-)\Bigr) \\
& \ge \frac12\Bigl( 2\dxh(x,q_+)-2\dxh(q_+,r_-)\Bigr) \ge \dxh(x,q_+) -1,
\end{align*}
where the last inequality used the fact that $\dxh(q_+,r_-) \le 1$, which is true because $q$ and $r$ are connected $\mathcal{H}$-components.
This establishes the inequality \eqref{eq:dxh(q_+,(p_i)_+}.
Let $\alpha$ denote the subpath of $p_i$ starting at $q_+$ and ending at $x$, and let $\beta$ denote the subpath of $p_{i+1}$ starting at $x$ and ending at $r_-$. Let $s_1,\dots,s_l$, $l \in \NN_0$, be the set of all $\mathcal{H}$-components of $\alpha$ listed in the reverse order of their occurrence, i.e., $s_1$ is the last $\mathcal{H}$-component of $\alpha$ (closest to $\alpha_+=x$) and $s_l$ is the first $\mathcal{H}$-component of $\alpha$ (closest to $\alpha_-=q_+$).
Note that, by \eqref{eq:dxh(q_+,(p_i)_+},
\begin{equation}
\label{eq:bound_on_l}
l \le \ell(\alpha) = d_{X\cup\mathcal{H}}(x,q_+) \le C_0+1.
\end{equation}
Let $L \ge 0$ be the constant given by Lemma~\ref{lem:isol_comp_in_triangles_are_short}, then
\begin{equation}
\label{eq:d_X(q_+,r_-)}
d_X(\alpha_-,\beta_+) =d_X(q_+,r_-) \le L.
\end{equation}
It follows that the geodesic paths $\alpha$ and $\beta^{-1}$ are $L$-similar in $\ga$.
Let $\kappa=\kappa(1,0,L) \ge 0$ be the constant provided by Proposition~\ref{prop:osinbcp}.
We will now prove the following.
\begin{claim}
For each $j=1,\dots,l$ we have
\begin{equation}
\label{eq:tau_j}
|s_{j}|_X \leq \tau_j,
\end{equation}
where $\tau_j \ge 0$ is given by Definition~\ref{def:tau_j}.
\end{claim}
We will establish the claim by induction on $j$. For the base of induction, $j=1$, note that if $|s_1|_X < \kappa$ then the inequality $|s_1|_X \le \tau_1$ will be true by definition of $\tau_1$.
Thus we can suppose that $|s_1|_X \ge \kappa$.
In this case, by Proposition~\ref{prop:osinbcp}, $s_1$ must be connected to some $\mathcal{H}$-component of $\beta^{-1}$. Claim (3) of the same proposition implies that there is an $\mathcal H$-component $t_1$ of $\beta$, such that $s_1$ is connected to $t_1$ and $d_X((s_1)_-,(t_1)_+) \le \kappa$.
Note that, by construction, $s_1$ and $t_1$ are also connected $\mathcal{H}$-components of $p_i$ and $p_{i+1}$ respectively.
Observe that the subpath of $\alpha$ from $(s_1)_+$ to $x$ is labelled by letters from $X^{\pm 1}$ because it has no $\mathcal H$-components.
Therefore $d_X((s_1)_+,x) \le \ell(\alpha) \le C_0+1$.
Consequently, we can apply Lemma~\ref{lem:one_comp_in_cusp_is_bounded} to deduce that $|s_1|_X\le \tau(\zeta_1,\xi_1)$, where $\zeta_1=\kappa$ and $\xi_1=C_0+1$.
Thus we have shown that $|s_1|_X\le \tau_1$, where $\tau_1=\max\{\kappa, \tau(\zeta_1,\xi_1)\}$, and the base of induction has been established.
Now, suppose that $j>1$ and inequality \eqref{eq:tau_j} has been proved for all strictly smaller values of $j$.
If $|s_j|_X< \kappa$ then we are done, because $\tau_j \ge \kappa$ by definition.
So we can assume that $|s_j|_X \ge \kappa$. As before, we can use Proposition~\ref{prop:osinbcp}, to find an $\mathcal{H}$-component $t_j$ of $\beta$ such that $s_j$ is connected to $t_j$ and $d_X((s_j)_-,(t_j)_+) \le \kappa$.
By construction, $s_1, \dots, s_{j-1}$ is the list of all $\mathcal{H}$-components of the subpath $[(s_j)_+,x]$ of $\alpha$, hence
\[
d_X((s_j)_+,x) \le \ell(\alpha)+\sum_{k=1}^{j-1} |s_k|_X \le C_0+1+\sum_{k=1}^{j-1} \tau_k,
\]
where the second inequality used \eqref{eq:bound_on_l} and the induction hypothesis.
This allows us to apply Lemma~\ref{lem:one_comp_in_cusp_is_bounded} again, and conclude that $|s_j|_X \le \tau(\zeta_j,\xi_j)$, where $\zeta_j=\kappa$ and $\xi_j=C_0+1+\sum_{k=1}^{j-1} \tau_k$.
Thus, $|s_j|_X \le \max\{\kappa,\tau(\zeta_j,\xi_j)\}=\tau_j$, as required.
Hence the claim has been proved by induction on $j$.
We are finally ready to prove the main statement of the lemma. Since $s_1,\dots,s_l$ is the list of all $\mathcal H$-components of $\alpha$, we can combine the inequalities \eqref{eq:bound_on_l} and \eqref{eq:tau_j} to obtain
\[
d_X(q_+,(p_i)_+)=|\alpha|_X \le \ell(\alpha)+\sum_{j=1}^l |s_j|_X \le C_0+1+\sum_{j=1}^l \tau_j \le C_0+1+\sum_{j=1}^{\lfloor C_0+1 \rfloor} \tau_j .
\]
On the other hand, by the triangle inequality and \eqref{eq:d_X(q_+,r_-)}, we have
\[
d_X((p_i)_+,r_-)\le L+d_X(q_+,(p_i)_+) \le L+C_0+1+\sum_{j=1}^{\lfloor C_0+1 \rfloor} \tau_j.
\]
We have shown that the constant $\displaystyle C_1=L+C_0+1+\sum_{j=1}^{\lfloor C_0+1 \rfloor} \tau_j > 0$ is an upper bound for $d_X(q_+,(p_i)_+)$ and $d_X((p_i)_+,r_-)$, thus the lemma is proved. \end{proof}
\begin{definition}[Consecutive, adjacent and multiple backtracking]
Let \(p=p_1 \dots p_n\) be a broken line in \(\Gamma(G,X\cup\mathcal{H})\).
Suppose that for some $i,j$, with $1 \le i <j \le n$, and $\nu \in \Nu$ there exist pairwise connected $H_\nu$-components $h_i,h_{i+1},\dots, h_j$ of the paths $p_i,p_{i+1}, \dots, p_j$, respectively.
Then we will say that $p$ has \emph{consecutive backtracking} along the components $h_i,\dots,h_j$ of \(p_i, \dots, p_j\).
Moreover, if \(j=i+1\), we will call it an instance of \emph{adjacent backtracking}, while if \(j>i+1\) we will use the term \emph{multiple backtracking}. \end{definition}
The next lemma shows that, among path representatives of minimal type, instances of adjacent backtracking where at least one of the components is sufficiently long with respect to the proper metric \(d_X\) must have initial and terminal vertices far apart in $d_X$.
\begin{lemma}[Adjacent backtracking is long] \label{lem:longadjbacktracking}
For any \(\zeta \geq 0\) there is \(\Theta_0 = \Theta_0(\zeta) \in \NN\) such that the following holds.
Let $Q' \leqslant Q$ and $R' \leqslant R$ be subgroups satisfying \descref{C1} and let \(p=p_1 \dots p_n\) be a minimal type path representative for an element \(g \in \langle Q', R' \rangle\). Suppose that for some $i \in \{1,\dots,n-1\}$ the paths \(p_i\) and \(p_{i+1}\) have connected \(\mathcal H\)-components \(q\) and \(r\) respectively, satisfying
\[
\max\{ |q|_X,|r|_X\} \geq \Theta_0.
\]
Then \(d_X(q_-,r_+) \geq \zeta\). \end{lemma}
\begin{proof}
For any $\zeta \ge 0$ we can define $\Theta_0=\lfloor\tau(\zeta,C_1)\rfloor+1$, where $C_1$ is the constant from Lemma~\ref{lem:shortspikes} and $\tau(\zeta,C_1)$ is provided by Lemma~\ref{lem:one_comp_in_cusp_is_bounded}.
Indeed, Lemma~\ref{lem:shortspikes} gives $d_X(q_+,(p_i)_+) \le C_1$, so if \(d_X(q_-,r_+) < \zeta\) then Lemma~\ref{lem:one_comp_in_cusp_is_bounded}, applied with $s=q$, $t=r$ and $\xi=C_1$, yields $|q|_X \le \tau(\zeta,C_1) < \Theta_0$ and $|r|_X < \Theta_0$; this is the contrapositive of the required statement. \end{proof}
\section{Multiple backtracking in path representatives of minimal type} \label{sec:multitracking} As before, we keep working under Convention~\ref{conv:main}. In this section we deal with multiple backtracking in path representatives of elements from $\langle Q',R'\rangle$. Proposition~\ref{prop:multitracking_path} below uses condition \descref{C3} to show that any instance of multiple backtracking essentially takes place inside a parabolic subgroup. In order to achieve this we first prove two auxiliary statements.
\begin{notation}
Throughout this section \(C_1 \geq 0\) will be the constant given by Lemma~\ref{lem:shortspikes} and $\mathcal{P}_1$ will denote the finite collection of parabolic subgroups of $G$ defined by
\[
\mathcal{P}_1=\lbrace t H_\nu t^{-1} \mid \nu \in \Nu, \abs{t}_X \leq C_1 \rbrace.
\]
Consider the subset $O=\{o \in PS \mid P \in \mathcal{P}_1,~ \abs{o}_X \le 2C_1\}$ of $G$. Since $|O|<\infty$, we can choose and fix a finite subset $\Omega \subseteq S$ such that every element $o \in O$ can be written as $o=fh$, where $f \in P$, for some $P \in \mathcal{P}_1$, and $h \in \Omega$. We define a constant $E$ by \begin{equation}\label{eq:def_of_E} E=\max\{\abs{h}_X \mid h \in \Omega\} \ge 0. \end{equation}
\end{notation}
\begin{lemma} \label{lem:end_sides_constr}
There exists a constant $ D\ge 0$ such that the following holds.
Let $\nu \in \Nu$ and $b \in G$ be an element with $|b|_X \le C_1$, so that $P=bH_\nu b^{-1} \in\mathcal{P}_1$, and let $p$ be a geodesic path in $\ga$ with $\elem{p} \in Q \cup R$.
Suppose that there is a vertex $v$ of $p$ and an element $u \in P$ such that $v \in Pb=bH_\nu$ and $u^{-1}p_- \in S=Q \cap R$. Then there exists a geodesic path $p'$ in $\ga$ such that
\begin{itemize}
\item $p'_-=u$ and $d_X(p'_+,v) \le D$;
\item if $\elem{p} \in Q$ then $\elem{p'} \in Q \cap P$, otherwise $\elem{p'} \in R \cap P$.
\end{itemize} \end{lemma}
\begin{proof}
Let $K=\max\{C_1, \varepsilon\} \ge 0$, where $\varepsilon$ is the quasiconvexity constant of $Q$ and $R$, and let
\begin{equation}
\label{eq:def_of_D}
D= \max\{K'(Q,P,K),K'(R,P,K) \mid P \in \mathcal{P}_1\},
\end{equation}
where $K'(Q,P,K)$ and $K'(R,P,K)$ are obtained from Lemma~\ref{lem:nbhdintersection}.
Denote $x =p_- \in G$ and assume, without loss of generality, that $\elem{p} \in Q$ (the case $\elem{p} \in R$ can be treated similarly). By the quasiconvexity of \(Q\), we have that \( d_X(v,x Q) \leq \varepsilon\). Moreover, $xQ=uQ$ as $u^{-1}x \in S \subseteq Q$.
By the assumptions, $vb^{-1} \in P$, hence $d_X(v,P) \le |b|_X \le C_1$. Since $uP=P$ we see that
\begin{equation*}
v \in N_X(uQ,\varepsilon) \cap N_{X}( uP,C_1).
\end{equation*}
Applying Lemma \ref{lem:nbhdintersection}, we find \(w \in u(Q \cap P)\) such that \(d_X( v,w) \leq D\) (see Figure~\ref{fig:multitracking1}).
\begin{figure}
\caption{Illustration of Lemma \ref{lem:end_sides_constr}.}
\label{fig:multitracking1}
\end{figure}
Let \(p'\) be any geodesic in \(\Gamma(G,X\cup\mathcal{H})\) starting at $u$ and ending at $w$. It is easy to see that $p'$ satisfies all of the required properties, so the lemma is proved. \end{proof}
The next lemma describes how condition \descref{C3} is used in this paper.
\begin{lemma} \label{lem:(c3)->vertex_constr}
Assume that subgroups $Q' \leqslant Q$ and $R' \leqslant R$ satisfy conditions \descref{C1} and \descref{C3} with constant \(C\) and family \(\mathcal{P} \) such that $C \ge 2C_1 + 1$ and $\mathcal{P}_1 \subseteq \mathcal{P}$.
Let $P=bH_\nu b^{-1} \in\mathcal{P}_1$, for some $\nu \in \Nu$ and $b \in G$, with $|b|_X \le C_1$, and let $p$ be a path in $\ga$ with $\elem{p} \in Q' \cup R'$.
Suppose that there is a vertex $v$ of $p$ and an element $u \in P$ satisfying $u^{-1}p_- \in S$, $v \in Pb$, and $d_X(v,p_+) \le C_1$. Then there exists a geodesic path $p'$ such that $(p')_-=u$, $\elem{p'} \in P$, $(p')_+^{-1}p_+ \in S$ and $d_X((p')_+,p_+) \le E$, where $E$ is the constant from \eqref{eq:def_of_E}. In particular, if $\elem{p} \in Q'$ (respectively, $\elem{p} \in R'$) then $\elem{p'} \in Q' \cap P$ (respectively, $\elem{p'} \in R' \cap P$). \end{lemma}
\begin{proof}
Denote $x=p_-$, $y=p_+$ and $z=vb^{-1} \in P$ (see Figure~\ref{fig:multitracking2}). Then $u^{-1}z \in P$ and $x^{-1}y=\elem{p} \in Q' \cup R'$.
\begin{figure}
\caption{Illustration of Lemma \ref{lem:(c3)->vertex_constr}.}
\label{fig:multitracking2}
\end{figure}
Since $u^{-1} x \in S = Q' \cap R'$, we obtain
\[
u^{-1}y=(u^{-1}x) (x^{-1}y) \in Q' \cup R',
\]
whence $z^{-1}y=(z^{-1}u )(u^{-1}y) \in P (Q' \cup R')$.
Now, observe that
\[
|z^{-1} y|_X=d_X(z,y) \le d_X(z,v)+d_X(v,y) \le |b|_X+C_1 \le 2 C_1 < C.
\]
Condition \descref{C3} now implies that $z^{-1}y \in PS $, i.e., $z^{-1}y =fh$, for some $f \in P$ and $h \in \Omega$, where $\Omega$ is the finite subset of $S$, defined above the statement of the lemma.
Let $p'$ be a geodesic path starting at $u$ and ending at $zf \in P$. Then $\elem{p'}=u^{-1}zf \in P$,
\[
(p')_+^{-1}p_+=f^{-1}z^{-1}y=h \in S ~\text{ and }~ d_X((p')_+,p_+)=\abs{h}_X \le E.
\]
The last statement of the lemma follows from \descref{C1} and the observation that
\[
\elem{p'}=u^{-1}(p')_+=u^{-1} p_- \, \elem{p} \, (p_+)^{-1}(p')_+ \in S\,\elem{p}\, S.
\] \end{proof}
\begin{proposition} \label{prop:multitracking_path}
Let $D \ge 0$ be the constant provided by Lemma~\ref{lem:end_sides_constr}, and let $E$ be given by \eqref{eq:def_of_E}. Suppose that $Q' \leqslant Q$ and $R' \leqslant R$ are subgroups satisfying \descref{C1} and \descref{C3}, with constant \(C \ge 2C_1 + 1\) and family \(\mathcal{P} \supseteq \mathcal{P}_1\).
Let \(p = p_1 \dots p_n\) be a path representative for an element \(g \in \langle Q', R' \rangle\) with minimal type.
If \(p\) has consecutive backtracking along \(\mathcal{H}\)-components \(h_i, \dots, h_j\) of the subpaths \(p_i, \dots, p_j\) respectively, then there is a subgroup \(P \in \mathcal{P}_1\) and a path \(p' = p'_i \dots p'_j\) satisfying the following properties:
\begin{itemize}
\item[(i)] \(p'_k\) is geodesic with \(\elem{p'_k} \in P\) for all \(k = i, \dots, j\);
\item[(ii)] \((p'_i)_+ = (p_i)_+\), \((p'_k)_+^{-1} (p_k)_+ \in S\) and $d_X((p'_k)_+,(p_k)_+) \le E$, for all $k=i+1,\dots,j-1$;
\item[(iii)] \(d_X(p'_-,(h_i)_-) \leq D\) and \(d_X(p'_+,(h_j)_+) \leq D\);
\item[(iv)] \(\elem{p'_i} \in Q\cap P\) if \(\elem{p_i} \in Q'\), and \(\elem{p'_i} \in R \cap P\) if \(\elem{p_i} \in R'\); similarly, \(\elem{p'_j} \in Q \cap P\) if \(\elem{p_j} \in Q'\), and \(\elem{p'_j} \in R \cap P\) if \(\elem{p_j} \in R'\);
\item[(v)] for each \(k \in \{i+1, \dots, j-1\}\), \(\Lab(p'_k)\) either represents an element of \(Q' \cap P\) or an element of \(R' \cap P\).
\end{itemize} \end{proposition}
\begin{proof}
Figure \ref{fig:multitracking_parabolic_path} below is a sketch of the path \(p'\) above the subpath \(p_i p_{i+1} \dots p_{j-1} p_j\) of \(p\).
\begin{figure}
\caption{The new path \(p'\) constructed in Proposition~\ref{prop:multitracking_path}. The dotted lines between \(p\) and \(p'\) are paths whose labels represent elements of \(S\).}
\label{fig:multitracking_parabolic_path}
\end{figure}
Note that claim (v) follows from claim (ii) and condition \descref{C1}, so we only need to establish claims (i)--(iv).
By the assumptions, there is $\nu \in \Nu$ such that, for each \(k \in \{i, \dots, j\}\), $p_k$ is a concatenation \(p_k = a_k h_k b_k\), where \(h_k\) is an \(H_\nu\)-component of \(p_k\) and \(a_k,b_k\) are subpaths of \(p_k\).
According to Lemma~\ref{lem:shortspikes}, we have
\begin{equation}\label{eq:lengths_of_a_k_and_b_l}
\abs{b_k}_X \le C_1, \text{ for } k=i,\dots,j-1.
\end{equation}
After translating everything by $(p_i)_+^{-1}$ we can assume that $(p_i)_+=1$.
From here on, we let $b=\elem{b_i}^{-1} \in G$ and \(P= b H_\nu b^{-1}\).
As noted in \eqref{eq:lengths_of_a_k_and_b_l}, \(\abs{b}_X =\abs{{b_i}}_X \leq C_1\), so \(P \in \mathcal{P}_1\).
Since the components \(h_i\) and \(h_k\) are connected, for every $k=i+1,\dots,j$, the elements $(h_i)_+=(b_i)_-=b$ and $(h_k)_+$ all belong to the same left coset $bH_\nu=Pb$, thus
\begin{equation}\label{eq:in_Pb}
(h_k)_+ \in Pb, \text{ for all } k=i+1,\dots,j.
\end{equation}
The rest of the argument will be divided into three steps.
\noindent \underline{\emph{Step 1:}} construction of the path $p_i'$.
Set $u_i=(p_i)_+=1$ and $v_i=(h_i)_-$. Then $v_i =\elem{b_i}^{-1} \elem{h_i}^{-1} \in bH_\nu=Pb$, so the path $p_i^{-1}$, its vertex $v_i$ and the element $u_i=1 \in P$ satisfy the assumptions of Lemma~\ref{lem:end_sides_constr}. Therefore there exists a geodesic path $q$ with $q_-=u_i$, $d_X(q_+,v_i) \le D$ and such that $\elem{q} \in Q \cap P$ if $\elem{p_i} \in Q$ and $\elem{q} \in R \cap P$ if $\elem{p_i} \in R$.
It is easy to check that the path $p_i'=q^{-1}$ enjoys the required properties.
\noindent \underline{\emph{Step 2:}} construction of the paths $p_k'$, for $k=i+1,\dots,j-1$.
We will define the paths $p_k'$ by induction on $k$. For $k=i+1$ we consider the path $p_{i+1}$, its vertex $v_{i+1}=(h_{i+1})_+$ and the element $u_i=1=(p_{i+1})_-$.
Since $v_{i+1} \in Pb$ by \eqref{eq:in_Pb} and $d_X(v_{i+1}, (p_{i+1})_+)=|b_{i+1}|_X \le C_1$ by \eqref{eq:lengths_of_a_k_and_b_l}, we can apply Lemma~\ref{lem:(c3)->vertex_constr} to find a geodesic path $p_{i+1}'$ starting at $u_i$ and satisfying the required conditions.
Now suppose that the required paths $p_{i+1}',\dots,p_m'$ have already been constructed for some $m \in \{i+1,\dots,j-2\}$. To construct the path $p_{m+1}'$, let $v_{m+1}$ be the vertex $(h_{m+1})_+$ of $p_{m+1}$ and set $u_m=(p_m')_+$. Then $u_m \in P$ and $u_m^{-1} (p_{m+1})_-=(p_m')_+^{-1} (p_{m})_+ \in S$ by the induction hypothesis. In view of \eqref{eq:in_Pb} and \eqref{eq:lengths_of_a_k_and_b_l}, $v_{m+1} \in Pb$ and $d_X(v_{m+1}, (p_{m+1})_+) \le C_1$, therefore we can find a geodesic path $p_{m+1}'$ with the desired properties by using Lemma~\ref{lem:(c3)->vertex_constr}.
Thus we have described an inductive procedure for constructing the paths $p_k'$, for $k=i+1,\dots,j-1$.
\noindent \underline{\emph{Step 3:}} construction of the path $p_j'$.
This step is similar to Step 1: the path $p_j'$ will start at $u_{j-1}=(p_{j-1}')_+ \in P$ and can be constructed by applying Lemma~\ref{lem:end_sides_constr} to the path $p_j$ and the elements $v_j=(h_j)_+ \in P b $, $u_{j-1} \in P$.
We have thus constructed a sequence of geodesic paths \(p'_i, \dots, p'_j\) whose concatenation \(p'\) satisfies all the properties from the proposition. \end{proof}
We will now prove the main result of this section, which states that the initial and terminal vertices of an instance of multiple backtracking in a minimal type path representative must lie far apart in the proper metric $d_X$, provided \(Q' \leqslant Q\) and \(R' \leqslant R\) satisfy \descref{C1}--\descref{C5} with sufficiently large constants.
\begin{proposition}[Multiple backtracking is long] \label{prop:long_multitracking}
For any \(\zeta \geq 0\) there is a constant \(C_2 = C_2(\zeta) \geq 0\) such that if \(Q' \leqslant Q\) and \(R' \leqslant R\) are subgroups satisfying conditions \descref{C1}--\descref{C5} with constants \(B \ge C_2\) and \(C \ge C_2\) and a family \(\mathcal{P} \supseteq \mathcal{P}_1\), then the following is true.
Let \(p = p_1 \dots p_n\) be a minimal type path representative for an element \(g \in \langle Q', R' \rangle\).
If \(p\) has multiple backtracking along \(\mathcal{H}\)-components \(h_i, \dots, h_j\) of \(p_i, \dots, p_j\), then \(d_X((h_i)_-,(h_j)_+) \geq \zeta\). \end{proposition}
\begin{proof}
Let \(\zeta \geq 0\) and define \(C_2(\zeta) = \max{\{2C_1, \zeta + 2D\}} + 1\), where \(D \geq 0\) is the constant obtained from Lemma~\ref{lem:end_sides_constr}.
In view of the assumptions we can apply Proposition~\ref{prop:multitracking_path} to find a path \(p' = p'_i \dots p'_j\) and \(P \in \mathcal{P}_1\) satisfying properties (i)--(v) from its statement.
Let \(\alpha\) be a geodesic with \(\alpha_- = (p_j')_-\) and \(\alpha_+ = (p_j)_-\), and let $\beta=p'_{i+1} \dots p'_{j-1}$.
We will denote \(x_k = \elem{p_k}\) and \(x'_k = \elem{p'_k}\), for each \(k \in \{i, \dots, j\}\), and \(z = \elem{\alpha}\).
Condition \descref{C1}, together with claim (ii) of Proposition~\ref{prop:multitracking_path}, tells us that \(z \in S = Q' \cap R'\), and claim (v) yields that
\begin{equation}
\label{eq:elem(beta)_in_Q'R'}
\elem{\beta} = x_{i+1}' \dots x_{j-1}' \in \langle Q'_P,R'_P \rangle
\end{equation}
(as before, for a subgroup $H \leqslant G$ we denote by $H_P \leqslant G$ the intersection $H \cap P$).
Now suppose, for a contradiction, that \(d_X((h_i)_-,(h_j)_+) < \zeta\).
Then
\begin{equation}
\label{eq:d_X(p'_-, p'_+)}
|p'|_X=d_X(p'_-, p'_+) < \zeta + 2D < C_2 \le \min\{B,C\},
\end{equation} by claim (iii) of Proposition~\ref{prop:multitracking_path}.
There are four cases to consider depending on whether \(\elem{p_i}\) and \(\elem{p_j}\) are elements of \(Q'\) or \(R'\).
\underline{\emph{Case 1:}} $x_i=\elem{p_i} \in Q'$ and $x_j=\elem{p_j} \in Q'$.
Then, by claim (iv) of Proposition~\ref{prop:multitracking_path}, both \(x'_i\) and \(x'_j\) are elements of \(Q_P\).
It follows that \( \elem{p'} \in Q_P \langle Q'_P, R'_P \rangle Q_P \subseteq Q \langle Q', R' \rangle Q\).
By \eqref{eq:d_X(p'_-, p'_+)} and \descref{C2}, there is \(q \in Q\) such that \(\elem{p'} = q\). Therefore
\begin{equation}
\label{eq:elem(beta)_in_Q}
\elem{\beta}={x_i'}^{-1}\, \elem{p'} \, {x_j'}^{-1}= {x_i'}^{-1} \, q \, {x_j'}^{-1} \in Q.
\end{equation}
Combining \eqref{eq:elem(beta)_in_Q} with \eqref{eq:elem(beta)_in_Q'R'} and using condition \descref{C4}, we get
\[
\elem{\beta} \in Q \cap \langle Q'_P,R'_P \rangle=Q_P \cap \langle Q'_P,R'_P \rangle=Q'_P.
\]
Let $\gamma$ be any geodesic path in $\ga$ starting at $(p_i)_-$ and ending at $(p_j)_+$.
Then $\gamma$ shares the same endpoints with the path $p_i \beta \alpha p_j$, therefore their labels represent the same element of $G$:
\[
\elem{\gamma}=x_i \, \elem{\beta} \, z \, x_j \in Q'\, Q_P'\, S\, Q'=Q'.
\]
Thus we can use $\gamma$ to obtain another path representative for \(g\), namely $p_1 \dots p_{i-1} \gamma p_{j+1} \dots p_n$,
which consists of strictly fewer geodesic subpaths than \(p=p_1 \dots p_n\).
This contradicts the minimality of the type of \(p\), which completes the analysis of Case 1.
\underline{\emph{Case 2:}} both $\elem{p_i}$ and $\elem{p_j}$ are elements of $R'$.
This case can be handled in exactly the same way as Case 1, with the roles of $Q$ and $R$ interchanged.
\underline{\emph{Case 3:}} $x_i=\elem{p_i} \in Q'$ and $x_j=\elem{p_j} \in R'$.
Then \(x'_i \in Q_P\) and \(x'_j \in R_P\) by claim (iv) of Proposition~\ref{prop:multitracking_path}.
Hence \(\elem{p'}\) is an element of \(x'_i \langle Q'_P, R'_P \rangle R_P\) with \(x'_i \in Q_P\).
In view of \eqref{eq:d_X(p'_-, p'_+)}, we can use condition \descref{C5} to deduce that \(\elem{p'} \in x'_i Q'_P R_P\).
It follows that
\[
\elem{\beta}=(x_i')^{-1} \, \elem{p'} \, (x_j')^{-1} \in Q_P'\, R_P,
\]
so there exist $q \in Q'_P$ and $r \in R_P$ such that $\elem{\beta}=qr$.
Combining this with \eqref{eq:elem(beta)_in_Q'R'} we conclude that $r=q^{-1}\elem{\beta} \in R_P \cap \langle Q_P',R_P' \rangle$, so $r \in R_P'$ by condition \descref{C4}, whence
\begin{equation}
\label{eq:elem(beta)_in_dc}
\elem{\beta}=qr \in Q'_P\, R_P'.
\end{equation}
Observe that the paths $\gamma=p_i \dots p_j$ and $p_i \beta \alpha p_j$ have the same endpoints, hence their labels represent the same element of $G$:
\[
\elem{\gamma}=x_i \elem{\beta} z x_j \in Q' \, Q'_P\, R_P' \, S \, R' \subseteq Q'\, R'.
\]
Therefore there are elements $q_1 \in Q'$ and $r_1 \in R'$ such that $\elem{\gamma}=q_1 r_1$.
Let $\gamma_1$ be a geodesic path in $\ga$ starting at $\gamma_-=(p_i)_-$ and ending at $\gamma_- q_1$ and let $\gamma_2$ be a geodesic path starting at $(\gamma_1)_+$ and ending at $(\gamma_1)_+ r_1=\gamma_+=(p_j)_+$.
Since $\elem{\gamma_1} =q_1\in Q'$ and $\elem{\gamma_2}=r_1 \in R'$ the path \(p_1 \dots p_{i-1} \gamma_1 \gamma_2 p_{j+1} \dots p_n\) is a path representative of \(g\).
Moreover, it consists of fewer than \(n\) geodesic segments because $j>i+1$ (by the definition of multiple backtracking), contradicting the minimality of the type of \(p\).
This contradiction shows that Case~3 is impossible.
\underline{\emph{Case 4:}} $x_i=\elem{p_i} \in R'$ and $x_j=\elem{p_j} \in Q'$.
Then \(x'_i \in R_P\) while \(x'_j \in Q_P\), which implies that $\elem{p'} \in R_P \langle Q'_P,R'_P \rangle x_j'$, hence $\elem{p'}^{-1} \in (x_j')^{-1}\langle Q'_P, R'_P \rangle R_P$.
By \eqref{eq:d_X(p'_-, p'_+)}, we can use \descref{C5} to conclude that \(\elem{p'}^{-1} \in (x'_j)^{-1} Q'_P R_P\), thus $\elem{p'} \in R_P Q'_P x_j'$.
The rest of the argument proceeds similarly to the previous case, leading to a contradiction with the minimality of the type of $p$. Hence Case~4 is also impossible.
Since we have arrived at a contradiction in each of the four cases, we can conclude that \(d_X((h_i)_-,(h_j)_+) \ge \zeta\), as required. \end{proof}
\section{Constructing quasigeodesics from broken lines} \label{sec:quasigeods} In this section we detail a procedure that takes as input a broken line and a natural number, and outputs another broken line together with some additional vertex data. We show that if a broken line satisfies certain metric conditions, then the new path constructed through this procedure is uniformly quasigeodesic.
We assume that $G$ is a group generated by a finite set $X$ and hyperbolic relative to a finite family of subgroups $\{H_\nu \mid \nu \in \Nu\}$. As usual we set $\mathcal{H}=\bigsqcup_{\nu \in \Nu} (H_\nu\setminus\{1\})$, and by Lemma~\ref{lem:Cayley_graph-hyperbolic} we know that the Cayley graph $\ga$ is $\delta$-hyperbolic, for some $\delta \ge 0$.
The outline of the construction is as follows: we begin with a broken line \(p = p_1 \dots p_n\) in \(\ga\). Starting from the initial vertex \(p_-\), we note in sequence (along the vertices of \(p\)) the vertices marking the start and end of maximal instances of consecutive backtracking in \(p\) involving sufficiently long \(\mathcal{H}\)-components. Once we have done this, we construct the new path by connecting (in the same sequence) the marked vertices with geodesics.
\begin{procedure}[$\Theta$-shortcutting] \label{proc:shortcutting}
Fix a natural number \(\Theta \in \NN\) and let \(p = p_1 \dots p_n\) be a broken line in \(\Gamma(G,X\cup\mathcal{H})\). Let \(v_0, \dots, v_d\) be the enumeration of all vertices of \(p\) in the order they occur along the path (possibly with repetition), so that \(v_0 = p_-\), \(v_d = p_+\) and $d=\ell(p)$.
We construct a broken line \(\Sigma(p,\Theta)\), called the \(\Theta\)-\emph{shortcutting} of \(p\), which comes with a finite set \(V(p,\Theta) \subset \{0,\dots,d\} \times \{0,\dots,d\}\) corresponding to indices of vertices of \(p\) that we shortcut along.
In the algorithm below we will refer to numbers \(s,t,N \in \{0,\dots,d\}\) and a subset \(V \subseteq \{0,\dots,d\} \times \{0,\dots,d\}\). To avoid excessive indexing these will change value throughout the procedure.
The parameters $s$ and $t$ will indicate the starting and terminal vertices of subpaths of $p$ in which all $\mathcal H$-components have lengths less than $\Theta$. The parameter $N$ will keep track of how far along the path $p$ we have proceeded. The set $V$ will collect all pairs of indices $(s,t)$ obtained during the procedure.
We initially take \(s = 0\), $N=0$ and \(V = \emptyset\).
\begin{steps}
\item
If there are no edges of \(p\) between \(v_N\) and \(v_d\) that are labelled by elements of \(\mathcal{H}\), then add the pair $(s,d)$ to the set $V$ and skip ahead to Step 4.
Otherwise, continue to Step 2.
\item
Let \(t \in \{0,\dots,d\}\) be the least natural number with \(t \geq N\) for which the edge of \(p\) with endpoints \(v_t\) and \(v_{t+1}\) is an \(\mathcal{H}\)-component $h_i$ of a geodesic segment $p_i$ of \(p\), for some \(i \in \{1, \dots, n\}\).
If $i=n$ or if $h_i$ is not connected to a component of $p_{i+1}$ then set $j=i$. Otherwise, let \(j \in \{i+1,\dots,n\}\) be the maximal integer such that \(p\) has consecutive backtracking along \(\mathcal{H}\)-components \(h_i, \dots, h_j\) of segments \(p_i, \dots, p_j\).
Redefine $N$ in \(\{1,\dots,d\}\) to be the index of the vertex $(h_j)_+$ in the above enumeration $v_0,\dots,v_d$ of the vertices of $p$. Proceed to Step 3.
\item
If \[\max\Big\{\abs{h_k}_X \, \Big| \, k = i, \dots, j\Big\} \geq \Theta,\] then add the pair \((s,t)\) to the set \(V\) and redefine $s=N$.
Otherwise leave $s$ and \(V\) unchanged.
Return to Step~1 with the new values of \(s\), $N$ and \(V\).
\item
Set \(V(p,\Theta) = V\). The above construction gives a natural ordering of $V(p,\Theta)$: \[V(p,\Theta) = \{(s_0, t_0), \dots, (s_m,t_m)\},\] where \(s_k \le t_k < s_{k+1}\), for all \(k = 0, \dots, m-1\). Note that $s_0=0$ and $t_m=d$. Proceed to Step 5.
\item
For each $k=0,\dots,m$, let $f_k$ be a geodesic segment (possibly trivial) connecting $v_{s_k}$ with $v_{t_k}$. Note that when $k <m$,
\(v_{t_k}\) and \(v_{s_{k+1}}\) are in the same left coset of $H_\nu$, for some $\nu \in \Nu$.
If \(v_{t_k} = v_{s_{k+1}}\) then let \(e_{k+1}\) be the trivial path at \(v_{t_k}\), otherwise let \(e_{k+1}\) be an edge of $\ga$ starting at \(v_{t_k}\), ending at \(v_{s_{k+1}}\) and labelled by an element of \(H_\nu \setminus\{1\}\).
We define the broken line \(\Sigma(p,\Theta)\) to be the concatenation \(f_0 e_1 f_1 e_2 \dots f_{m-1} e_m f_m\).
\end{steps} \end{procedure}
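The vertex bookkeeping in Steps 1--4 can be summarised in a short sketch. The code below is purely illustrative and makes a simplifying assumption not present in the procedure: the maximal instances of consecutive backtracking are taken as precomputed input, supplied as a hypothetical map \texttt{blocks} from the index of the first \(\mathcal{H}\)-component edge of an instance to the index of its terminal vertex together with the maximal \(X\)-length of its components.

```python
def shortcut_indices(d, blocks, theta):
    """Sketch of Steps 1-4 of the shortcutting procedure.

    d      -- number of edges of the broken line p (vertices v_0, ..., v_d)
    blocks -- hypothetical precomputed data: maps the index t of the first
              H-component edge of a maximal instance of consecutive
              backtracking to a pair (end, max_len), where 'end' is the
              index of the terminal vertex of that instance and 'max_len'
              is the largest X-length among its components
    theta  -- the threshold Theta
    Returns the ordered list V(p, theta) of pairs (s_k, t_k).
    """
    s, N, V = 0, 0, []
    while True:
        # Step 1: if no H-component edge occurs at or after v_N,
        # record the final pair (s, d) and stop.
        candidates = [t for t in blocks if t >= N]
        if not candidates:
            V.append((s, d))
            return V
        # Step 2: take the least such index t and move N to the end
        # of the corresponding instance of consecutive backtracking.
        t = min(candidates)
        N, max_len = blocks[t]
        # Step 3: if the instance contains a component of X-length at
        # least theta, record the pair (s, t) and restart the count at N.
        if max_len >= theta:
            V.append((s, t))
            s = N
```

For instance, with \(d = 10\), instances starting at edges \(2\) and \(6\), and \(\Theta = 3\), the call \texttt{shortcut\_indices(10, \{2: (4, 5), 6: (7, 1)\}, 3)} yields the pairs \((0,2)\) and \((4,10)\), respecting the ordering \(s_k \le t_k < s_{k+1}\) noted in Step 4.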
\begin{remark} \label{rem:shortcutting}
Let us collect some observations about Procedure~\ref{proc:shortcutting}.
\begin{itemize}
\item[(a)] Since \(p\) has only finitely many vertices and $N$ increases at each iteration of Step 2 above, the procedure will always terminate after finitely many steps.
\item[(b)] The newly constructed broken line \(\Sigma(p,\Theta)\) has the same endpoints as $p$, and each node of \(\Sigma(p,\Theta)\) is a vertex of $p$.
\item[(c)] By construction, for any $k \in \{0,\dots,m\}$ the subpath of $p$ between $v_{s_k}$ and $v_{t_k}$ contains no edge labelled by an element $h \in \mathcal{H}$ satisfying $\abs{h}_X \ge \Theta$.
\end{itemize} \end{remark}
Figure~\ref{fig:shortcutting_example} below sketches an example of the output of Procedure~\ref{proc:shortcutting}. \begin{figure}
\caption{An example of a shortcutting of a path \(p\) in $\ga$. The path \(p\) contains long \(\mathcal{H}\)-components, some of which are involved in instances of consecutive backtracking, as indicated by the dashed lines. The path \(\Sigma(p,\Theta)=f_0e_1f_1e_2f_2e_3f_3\) is drawn on top of \(p\).}
\label{fig:shortcutting_example}
\end{figure}
In the next definition we describe paths that will serve as input for the above procedure.
\begin{definition}[Tamable broken line] \label{def:tamable}
Let \(p = p_1 \dots p_n\) be a broken line in $\ga$, and let \(B, C, \zeta \geq 0, \Theta \in \NN\).
We say that \(p\) is \emph{\((B,C,\zeta,\Theta)\)-tamable} if all of the following conditions hold:
\begin{enumerate}[label=(\roman*)]
\item \label{cond:tam_1} \(\abs{p_i}_X \geq B\), for \(i = 2, \dots, n-1\);
\item \label{cond:tam_2} \(\langle (p_i)_-, (p_{i+1})_+ \rangle_{(p_i)_+}^{rel} \leq C\), for each \(i = 1, \dots, n-1\);
\item \label{cond:tam_3} whenever \(p\) has consecutive backtracking along \(\mathcal{H}\)-components \(h_i, \dots, h_j\), of segments \(p_i, \dots, p_j\), such that
\[
\max\Big\{ \abs{h_k}_X \, \Big| \, k = i, \dots, j \Big\} \geq \Theta,
\]
it must be that \(d_X\Bigl( (h_i)_-,(h_j)_+ \Bigr) \geq \zeta\).
\end{enumerate} \end{definition}
The remainder of this section is devoted to showing the following result about quasigeodesicity of shortcuttings for tamable paths with appropriate constants.
\begin{proposition} \label{prop:shortcutting_quasigeodesic}
Given arbitrary \(c_0 \geq 14\delta\) and \(\eta \geq 0\) there are constants \( \lambda = \lambda(c_0) \geq 1\), \(c = c(c_0) \geq 0\) and \(\zeta = \zeta(\eta,c_0) \geq 1\) such that for any natural number \(\Theta \geq \zeta\) there is \(B_0 = B_0(\Theta,c_0) \geq 0\) satisfying the following.
Let \(p = p_1 \dots p_n\) be a \((B_0,c_0,\zeta,\Theta)\)-tamable broken line in $\ga$ and let \(\Sigma(p,\Theta) = f_0 e_1 f_1 \dots f_{m-1} e_m f_m\) be the \(\Theta\)-shortcutting, obtained by applying Procedure~\ref{proc:shortcutting} to \(p\).
Then $e_k$ is non-trivial, for each $k=1,\dots,m$, and $\Sigma(p,\Theta)$ is \((\lambda,c)\)-quasigeodesic without backtracking.
Moreover, for any \(k \in \{ 1, \dots, m\}\), if we denote by \(e'_k\) the \(\mathcal{H}\)-component of \(\Sigma(p,\Theta)\) containing \(e_k\), then \(\abs{e'_k}_X \geq \eta\). \end{proposition}
The idea of the proof will be to show that under the above assumptions the broken line \(\Sigma(p,\Theta)\) satisfies the hypotheses of Proposition~\ref{prop:mpquasigeodesic}.
\begin{notation}
For the remainder of this section we fix arbitrary constants \(c_0 \geq 14\delta\) and \(\eta \geq 0\). We let \(\rho = \kappa(4,c_3,0)\), where \(c_3 = c_3(c_0) \geq 0\) is the constant from Lemma~\ref{lem:concat} and \(\kappa(4,c_3,0)\) is the constant obtained by applying Proposition~\ref{prop:osinbcp} to \((4,c_3)\)-quasigeodesics.
Let $\zeta_1>0$, $\lambda \ge 1$ and \(c \ge 0\) be the constants given by Proposition~\ref{prop:mpquasigeodesic}, applied with constant \(\rho\).
Note that the constants $\lambda$ and $c$ only depend on $c_0$ and do not depend on $\eta$.
We now define the constant \(\zeta\) by
\begin{equation}\label{eq:choice_of_eta}
\zeta = \max\Big\{ \zeta_1, \eta \Big\} + 2\rho + 1.
\end{equation}
Finally we take any natural number $\Theta \ge \zeta$ and
\begin{equation}\label{eq:choice_of_B_0}
B_0 = \max\Big\{ (12 c_0 + 12\delta + 1)\Theta, (4+c_3)\Theta+1 \Big\}.
\end{equation} \end{notation}
The proof of Proposition~\ref{prop:shortcutting_quasigeodesic} will consist of the following four lemmas. Throughout these lemmas we use the constants defined above and assume that \(p = p_1 \dots p_n\) is a \((B_0,c_0,\zeta,\Theta)\)-tamable broken line in $\ga$. As before, we write \(v_0, \dots, v_d\) for the vertices of \(p\) in the order of their appearance (possibly with repetition). We let \(\Sigma(p,\Theta) = f_0 e_1 f_1 \dots f_{m-1} e_m f_m\) be the \(\Theta\)-shortcutting and \(V(p,\Theta) = \{(s_0,t_0), \dots, (s_m,t_m)\}\) be the set obtained by applying Procedure~\ref{proc:shortcutting} to $p$.
\begin{lemma} \label{lem:e_j-long}
For each $k=1,\dots,m$, we have $|e_k|_X \ge \zeta >0$. \end{lemma}
\begin{proof}
By the construction in Procedure~\ref{proc:shortcutting}, there are pairwise connected \(\mathcal{H}\)-components \(h_1, \dots, h_j\) of consecutive segments of \(p\), such that $j \ge 1$, \((h_1)_- = (e_k)_-\), \((h_j)_+ = (e_k)_+\) and $\max\{\abs{h_l}_X \mid l=1,\dots,j\} \ge \Theta$.
If $j=1$ we see that $|e_k|_X=|h_1|_X \ge \Theta \ge \zeta$, and if $j>1$ then we know that $|e_k|_X \ge \zeta$ by property \ref{cond:tam_3} from Definition~\ref{def:tamable}. \end{proof}
\begin{lemma} \label{lem:beta_quasigeodesic}
The subpaths of \(p\) between \(v_{s_k}\) and \(v_{t_k}\), for \(k = 0, \dots, m\), are \((4,c_3)\)-quasigeodesic. \end{lemma}
\begin{proof}
We write \(c_1=c_1(c_0)=12 c_0 + 12\delta + 1\), as in Lemma~\ref{lem:concat}.
Choose any \(k \in \{0,\dots,m\}\) and denote by \(p'\) the subpath of \(p\) starting at \(v_{s_k}\) and terminating at \(v_{t_k}\).
If \(v_{s_k}\) and \(v_{t_{k}}\) are both vertices of $p_i$, for some $i \in \{1,\dots,n\}$, then \(p'\) is geodesic and we are done.
Otherwise \(p' = p'_i p_{i+1} \dots p_{j-1} p'_j\), for some \(i, j \in \{1, \dots, n\}\), with \(i < j\), where \(p'_i\) is a terminal segment of \(p_i\) and \(p'_j\) is an initial segment of \(p_j\).
By Remark~\ref{rem:shortcutting}(c), the paths \(p_{i+1}, \dots, p_{j-1}\) contain no \(\mathcal{H}\)-components \(h\) with \(|h|_X \geq \Theta\).
Since \(p\) is \((B_0,c_0,\zeta,\Theta)\)-tamable, \(\abs{p_l}_X \geq B_0\) for each \(l = i+1, \dots, j-1\) by condition~\ref{cond:tam_1}.
Thus we can combine Lemma~\ref{lem:rel_geods_with_short_comps} with \eqref{eq:choice_of_B_0} to obtain
\[
\dxh\Bigl((p_l)_-,(p_l)_+\Bigr)= \ell(p_l) \geq \frac{1}{\Theta}|p_l|_X \geq \frac{B_0}{\Theta} \geq c_1, \text{ for each } l \in \{i+1, \dots, j-1\}.
\]
Again, from the assumption that \(p\) is \((B_0,c_0,\zeta,\Theta)\)-tamable, we have that
\[
\langle (p_l)_-,(p_{l+1})_+ \rangle_{(p_l)_+}^{rel} \leq c_0, \text{ for all } l=i,\dots,j-1,
\]
using condition~\ref{cond:tam_2}.
In view of Remark~\ref{rem:Gr_prod_ineq},
\[
\langle (p'_i)_-,(p_{i+1})_+ \rangle_{(p'_i)_+}^{rel} \leq c_0~\text{ and }~\langle (p_{j-1})_-,(p'_{j})_+ \rangle_{(p_{j-1})_+}^{rel} \leq c_0.
\]
Therefore we can use Lemma~\ref{lem:concat} to conclude that \(p'\) is \((4,c_3)\)-quasigeodesic, as required. \end{proof}
\begin{lemma} \label{lem:short_ending_components}
If \(k \in \{0, \dots, m-1\}\) and \(h\) is an \(\mathcal{H}\)-component of \(f_{k}\) or \(f_{k+1}\) that is connected to \(e_{k+1}\), then \(\abs{h}_X \leq \rho\). \end{lemma}
\begin{proof}
Arguing by contradiction, suppose that \(h\) is an \(\mathcal{H}\)-component of \(f_{k}\) connected to \(e_{k+1}\) and satisfying \(\abs{h}_X > \rho\) (the other case when \(h\) is an \(\mathcal{H}\)-component of \(f_{k+1}\) is similar).
Remark~\ref{rem:comp_of_geod_is_an_edge} and geodesicity of $f_{k}$ imply that \(h\) must be the last edge of \(f_{k}\), so that \(h_+ = (f_k)_+ = v_{t_k}\).
Let \(p' = p'_i p_{i+1} \dots p_{j-1} p'_j\) be the subpath of \(p\) with \(p'_- = v_{s_k}\) and \(p'_+ = v_{t_k}\), where \(p'_i\) and \(p'_j\) are non-trivial subpaths of \(p_i\) and \(p_j\) respectively.
By Lemma~\ref{lem:beta_quasigeodesic}, \(p'\) is \((4,c_3)\)-quasigeodesic.
Since \(\abs{h}_X > \rho = \kappa(4,c_3,0)\) we may apply Proposition \ref{prop:osinbcp} to find that \(h\) is connected to an \(\mathcal{H}\)-component of \(p'\) (which may consist of multiple edges, each of which is an \(\mathcal{H}\)-component of a segment of \(p\)).
We write \(h'\) for the final edge of this \(\mathcal{H}\)-component and denote by \(u\) the edge of \(p\) with endpoints \(v_{t_k}\) and \(v_{t_k + 1}\) (see Figure~\ref{fig:short_ending_comps}).
Procedure~\ref{proc:shortcutting} and the assumption that $h$ is connected to $e_{k+1}$ imply that \(u\) is an \(\mathcal{H}\)-component of a segment of \(p\) and \(h'\) and \(u\) are connected as \(\mathcal{H}\)-subpaths of \(p\).
\begin{figure}
\caption{Illustration of Lemma~\ref{lem:short_ending_components}}
\label{fig:short_ending_comps}
\end{figure}
Suppose, first, that \(p'_j\) is a proper subpath of \(p_j\), so that \(u\) belongs to the segment \(p_j\), as shown on Figure~\ref{fig:short_ending_comps}.
Then there are the following possibilities.
\emph{\underline{Case 1}: \(h'\) is an edge of \(p_j\).}
In this case \(h'\) and \(u\) are connected distinct \(\mathcal{H}\)-subpaths of \(p_j\), which is a geodesic.
This contradicts the observation of Remark~\ref{rem:comp_of_geod_is_an_edge}, that geodesics are without backtracking and \(\mathcal{H}\)-components of geodesics are single edges.
\emph{\underline{Case 2}: \(h'\) is an \(\mathcal{H}\)-component of \(p_{j-1}\).}
Let \(t \in \{0,\dots,d\}\) be such that \(v_t = h'_-\), and note that
\begin{equation}
\label{eq:tk-1_s_sk}
s_k \leq t < t_k.
\end{equation}
By the construction from Procedure~\ref{proc:shortcutting}, there are pairwise connected \(\mathcal{H}\)-components \(h_j, \dots, h_{j+l}\), of segments \(p_j, \dots, p_{j+l}\), with \((e_{k+1})_- = (h_j)_-=v_{t_k}\) and \((e_{k+1})_+ = (h_{j+l})_+=v_{s_{k+1}}\), such that \(\max\{\abs{h_j}_X, \dots, \abs{h_{j+l}}_X\} \geq \Theta\) and \(l \in \{0,\dots,n-j\}\) is chosen to be maximal with this property.
Then the components \(h', h_j, \dots, h_{j+l}\) constitute a larger instance of consecutive backtracking, starting at $h'_-=v_t$, with \[\max \Big\{\abs{h'}_X,\abs{h_k}_X \, \Big| \, k = j, \dots, j+l\Big\} \geq \Theta.\] In view of \eqref{eq:tk-1_s_sk}, this contradicts the choice of $t_k$ and the inclusion of $(s_k,t_k)$ in the set $V(p,\Theta)$ at Steps 2 and 3 of Procedure~\ref{proc:shortcutting}.
\emph{\underline{Case 3}: \(h'\) is an \(\mathcal{H}\)-component of one of the paths \(p'_i, p_{i+1}, \dots, p_{j-2}\).}
Then the subpath $q$ of $p'$ from $h'_+$ to $p'_+=v_{t_k}$ contains all of $p_{j-1}$.
By Remark~\ref{rem:shortcutting}(c), \(p_{j-1}\) contains no \(\mathcal{H}\)-component \(h''\) satisfying \(\abs{h''}_X \geq \Theta\).
Therefore, in view of Lemma~\ref{lem:rel_geods_with_short_comps} and the assumption that \(p\) is \((B_0,c_0,\zeta,\Theta)\)-tamable, we can deduce that
\(
\Theta \ell(p_{j-1}) \geq \abs{p_{j-1}}_X \ge B_0.
\)
Combining this with the $(4,c_3)$-quasigeodesicity of \(p'\), we obtain
\begin{equation*}\label{eq:dist_of_h'}
\dxh(h'_+,p'_+) \geq \frac{1}{4} \Big( \ell(q) - c_3 \Big) \geq \frac{1}{4} \Big( \ell(p_{j-1}) - c_3 \Big) \geq \frac{B_0}{4\Theta} - \frac{c_3}{4} > 1,
\end{equation*}
where the last inequality follows from \eqref{eq:choice_of_B_0}.
On the other hand, the fact that \(h'\) and \(h\) are connected gives \(\dxh(h'_+,p'_+) = \dxh(h'_+,h_+) \leq 1\), contradicting the above.
In each case we arrive at a contradiction, so it is impossible that \(\abs{h}_X > \rho\) if \(p'_j\) is a proper subpath of \(p_j\).
If \(p'_j\) is instead the whole segment \(p_j\), we may carry out a similar analysis.
In this situation it must be that \(u\) is an \(\mathcal{H}\)-component of the segment \(p_{j+1}\).
We now have only two relevant cases to consider: \(h'\) is an \(\mathcal{H}\)-component of \(p_j\) or \(h'\) is an \(\mathcal{H}\)-component of one of the paths \(p'_i, p_{i+1}, \dots, p_{j-1}\). Both of them will lead to contradictions similarly to Cases 2 and 3 above.
Therefore it must be that \(\abs{h}_X \leq \rho\), as required. \end{proof}
\begin{lemma} \label{lem:ek_ek+1_not_connected}
For each \(k \in \{ 1, \dots, m-1\}\), the \(\mathcal{H}\)-subpaths \(e_k\) and \(e_{k+1}\) of $\Sigma(p,\Theta)$ are not connected. \end{lemma}
\begin{proof}
Suppose that $e_k$ is connected to $e_{k+1}$ for some $k\in \{1,\dots,m-1\}$.
As before, according to Procedure~\ref{proc:shortcutting}, there exist two sets of pairwise connected \(\mathcal{H}\)-components of consecutive segments of $p$, \(h_1, \dots, h_i\) and \(q_1, \dots, q_j\), such that \((h_1)_- = (e_k)_-\), \((h_i)_+ = (e_k)_+\), \((q_1)_- = (e_{k+1})_-\), \((q_j)_+ = (e_{k+1})_+\) and
\[
\max\Big\{\abs{h_1}_X, \dots,\abs{h_i}_X\Big\} \geq \Theta,~~ \max\Big\{\abs{q_1}_X, \dots,\abs{q_j}_X \Big\} \geq \Theta .
\]
Since \(e_k\) and \(e_{k+1}\) are connected, \(h_i\) and \(q_1\) will be connected $\mathcal{H}$-subpaths of $p$; in particular, they cannot be contained in the same segment of the broken line $p$ by Remark~\ref{rem:comp_of_geod_is_an_edge}.
If \(h_i\) and \(q_1\) are \(\mathcal{H}\)-components of adjacent segments of \(p\), then the components \(h_1, \dots, h_i, q_1, \dots, q_j\) constitute a longer instance of consecutive backtracking in $p$, which contradicts the construction of $e_k$ in Procedure~\ref{proc:shortcutting}.
Therefore it must be the case that the subpath \(p'\) of \(p\) between \((e_k)_+=(h_i)_+=v_{s_k}\) and \((e_{k+1})_-=(q_1)_-=v_{t_k}\) contains at least one full segment \(p_l\) (with \(1 < l < n\)).
By Remark~\ref{rem:shortcutting}(c) the path \(p'\) has no \(\mathcal{H}\)-components \(h\) satisfying \(\abs{h}_X \geq \Theta\). Therefore we can combine Lemma~\ref{lem:rel_geods_with_short_comps} with the fact that $p$ is \((B_0,c_0,\zeta,\Theta)\)-tamable to deduce that
\begin{equation}\label{eq:len_of_p'_adj}
\ell(p') \geq \ell(p_l) \geq \frac{\abs{p_l}_X}{\Theta} \geq \frac{B_0}{\Theta}.
\end{equation}
Moreover, by Lemma~\ref{lem:beta_quasigeodesic} the path \(p'\) is \((4,c_3)\)-quasigeodesic so that
\[
\ell(p') \leq 4d_{X\cup\mathcal{H}}((e_k)_+,(e_{k+1})_-) + c_3 \leq 4 + c_3,
\]
where the last inequality is true because \(e_k\) and \(e_{k+1}\) are connected.
Combined with (\ref{eq:len_of_p'_adj}), the above inequality gives
$ B_0 \leq (4 + c_3)\Theta$,
which contradicts the choice of \(B_0\) in \eqref{eq:choice_of_B_0}.
Therefore $e_k$ and $e_{k+1}$ cannot be connected, for any $k \in \{1,\dots,m-1\}$. \end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:shortcutting_quasigeodesic}]
The construction, together with Lemmas~\ref{lem:e_j-long}, \ref{lem:short_ending_components} and \ref{lem:ek_ek+1_not_connected}, shows that the \(\Theta\)-shortcutting \(\Sigma(p,\Theta) = f_0 e_1 f_1 \dots f_{m-1} e_m f_m\) satisfies the hypotheses of Proposition~\ref{prop:mpquasigeodesic} and that $e_k$ is non-trivial, for each $k=1,\dots,m$.
Therefore \(\Sigma(p,\Theta)\) is \((\lambda,c)\)-quasigeodesic without backtracking.
For the final claim of the proposition, consider any \(k \in \{ 1, \dots, m\}\) and denote by \(e'_k\) the \(H_\nu\)-component of \(\Sigma(p,\Theta)\) containing \(e_k\), for some $\nu \in \Nu$. Lemma~\ref{lem:ek_ek+1_not_connected} implies that $e'_k$ is the concatenation $h_1 e_k h_2$, where $h_1$ is either trivial or it is an $H_\nu$-component of $f_{k-1}$, and $h_2$ is either trivial or it is an $H_\nu$-component of $f_{k}$.
Combining the triangle inequality with Lemmas~\ref{lem:e_j-long}, \ref{lem:short_ending_components} and equation \eqref{eq:choice_of_eta}, we obtain
\[
\abs{e'_k}_X \geq \abs{e_k}_X - \abs{h_1}_X-\abs{h_2}_X \geq \zeta-2\rho \ge \eta,
\]
as required. \end{proof}
\section{Metric quasiconvexity theorem} \label{sec:metric_qc}
This section comprises a proof of Theorem \ref{thm:metric_qc}, and, as usual, we work under Convention~\ref{conv:main}. First we will show that if some subgroups \(Q'\leqslant Q\) and \(R'\leqslant R\) satisfy conditions \descref{C1}--\descref{C5} with appropriately large constants, then minimal type path representatives of elements of \(\langle Q', R' \rangle\) meet the conditions of Proposition~\ref{prop:shortcutting_quasigeodesic}. We will then use the quasigeodesicity of shortcuttings of these path representatives to obtain properties \descref{P1}--\descref{P3}.
\begin{lemma} \label{lem:C2_implies_old_C2}
Suppose that \(Q' \leqslant Q\) and \(R' \leqslant R\) satisfy \descref{C2} with constant \(B \geq 0\). Then
\[
\minx \Bigl( (Q' \cup R') \setminus S \Bigr) \geq B.
\] \end{lemma}
\begin{proof}
Let \(g \in (Q' \cup R') \setminus S\).
If \(g \in Q'\) then \(g \notin R\) as \(g \notin S\).
Therefore \(g \in Q' \setminus R \subseteq R \langle Q', R' \rangle R \setminus R\), whence \(\abs{g}_X \geq B\) by \descref{C2}.
Similarly, if \(g \in R'\) then \(g \in Q \langle Q', R' \rangle Q \setminus Q\), and \descref{C2} again implies that \(\abs{g}_X \geq B\). \end{proof}
\begin{notation} \label{not:main_in_part_2}
For the remainder of this section we fix the following notation:
\begin{itemize}
\item \(C_0\) is the constant provided by Lemma~\ref{lem:bddinnprod};
\item $c_0=\max\{C_0,14\delta\}$ and $c_3=c_3(c_0)$ is the constant obtained by applying Lemma~\ref{lem:concat};
\item $\lambda=\lambda(c_0)$ and $c=c(c_0)$ are the first two constants from Proposition~\ref{prop:shortcutting_quasigeodesic};
\item \(C_1 \ge 0\) is the constant from Lemma \ref{lem:shortspikes};
\item $\mathcal{P}_1$ is the finite family of parabolic subgroups of $G$ defined by
\begin{equation*}\label{eq:family_P_1}
\mathcal{P}_1 = \lbrace t H_\nu t^{-1} \, | \, \nu \in \Nu, \abs{t}_X \leq C_1 \rbrace.
\end{equation*}
\end{itemize} \end{notation}
\begin{lemma} \label{lem:pathreps_have_qgd_shortcutting}
For each \(\eta \geq 0\) there are constants \(C_3 = C_3(\eta) \geq 0\), $\zeta=\zeta(\eta) \ge 1$, \( \Theta_1 = \Theta_1(\eta) \in \NN\) and \(B_1 = B_1(\eta) \geq 0\) such that the following is true.
Suppose that \(Q'\leqslant Q\) and \(R'\leqslant R\) are subgroups satisfying conditions \descref{C1}--\descref{C5} with constants \(B \geq B_1\) and \(C \geq C_3\) and family \(\mathcal{P} \supseteq \mathcal{P}_1\). If \(p = p_1 \dots p_n\) is a minimal type path representative for an element \(g \in \langle Q', R' \rangle\) then \(p\) is \((B,c_0,\zeta,\Theta_1)\)-tamable.
Moreover, let \(\Sigma(p,\Theta_1) = f_0 e_1 f_1 \dots f_{m-1} e_m f_m\) be the \(\Theta_1\)-shortcutting of \(p\) obtained from Procedure~\ref{proc:shortcutting}, and let $e_k'$ be the \(\mathcal{H}\)-component of \(\Sigma(p,\Theta_1)\) containing \(e_k\), $k=1,\dots,m$. Then \(\Sigma(p,\Theta_1)\) is a \((\lambda,c)\)-quasigeodesic without backtracking and \(\abs{e'_k}_X \geq \eta\), for each \(k = 1, \dots, m\). \end{lemma}
\begin{proof}
We define the following constants:
\begin{itemize}
\item \(\zeta = \zeta(\eta,c_0) \geq 1\), the constant provided by Proposition~\ref{prop:shortcutting_quasigeodesic};
\item \(C_3 = C_2(\zeta) \geq 0\), where \(C_2(\zeta)\) is given by Proposition~\ref{prop:long_multitracking};
\item \(\Theta_1 = \max\{\Theta_0(\zeta), \zeta\}\), where \(\Theta_0\) is the constant of Lemma~\ref{lem:longadjbacktracking};
\item \(B_1 = \max\{ B_0(\Theta_1,c_0), C_2(\zeta) \} \geq 0\), where \(B_0\) is the remaining constant of Proposition~\ref{prop:shortcutting_quasigeodesic}.
\end{itemize}
Let \(B \geq B_1\) and \(C \geq C_3\).
Suppose that $Q'$, $R'$, $g$ and $p$ are as in the statement of the lemma.
In view of Remark~\ref{rem:alt}, \(\elem{p_i} \in (Q' \cup R') \setminus S\), for every \(i = 2, \dots, n-1\).
Therefore, by Lemma~\ref{lem:C2_implies_old_C2}, we have
\begin{equation}
\label{eq:pi_long}
\abs{p_i}_X \geq B, \text{ for each } i = 2, \dots, n-1.
\end{equation}
On the other hand, Lemma~\ref{lem:bddinnprod} tells us that
\begin{equation}
\label{eq:pathreps_bdd_in_prod}
\langle (p_i)_-, (p_{i+1})_+ \rangle_{(p_i)_+}^{rel} \leq C_0 \le c_0, \text{ for each }i = 1, \dots, n-1.
\end{equation}
Now suppose that \(p\) has consecutive backtracking along \(\mathcal{H}\)-components \(h_i, \dots, h_j\) of segments \(p_i, \dots, p_j\) satisfying
\[
\max\Big\{ \abs{h_i}_X,\dots, \abs{h_j}_X \Big\} \geq \Theta_1.
\]
If \(j = i+1\) then Lemma~\ref{lem:longadjbacktracking} and the choice of \(\Theta_1\) give that \(d_X((h_i)_-,(h_j)_+) \geq \zeta\).
Otherwise Proposition~\ref{prop:long_multitracking} gives the same inequality.
The above together with (\ref{eq:pi_long}) and (\ref{eq:pathreps_bdd_in_prod}) show that \(p\) is \((B,c_0,\zeta,\Theta_1)\)-tamable.
The remaining claims of the lemma follow from Proposition~\ref{prop:shortcutting_quasigeodesic}. \end{proof}
We can now deduce the relative quasiconvexity of \(\langle Q', R' \rangle\) by applying Lemma~\ref{lem:pathreps_have_qgd_shortcutting} with $\eta=0$.
\begin{proposition} \label{prop:metric_P1}
Let $\beta_1=B_1(0)$ and $\gamma_1=C_3(0)$ be the constants provided by Lemma~\ref{lem:pathreps_have_qgd_shortcutting} applied to the case when $\eta=0$.
Suppose that \(Q'\leqslant Q\) and \(R'\leqslant R\) are relatively quasiconvex subgroups of $G$ satisfying conditions \descref{C1}-\descref{C5} with family \(\mathcal{P} \supseteq \mathcal{P}_1\) and constants \(B \geq \beta_1\), \( C \geq \gamma_1\).
Then the subgroup \(\langle Q', R' \rangle\) is relatively quasiconvex in $G$. \end{proposition}
\begin{proof}
By assumption the subgroups \(Q'\) and \(R'\) are relatively quasiconvex, with some quasiconvexity constant $\varepsilon' \ge 0$.
For any element \(g \in \langle Q', R' \rangle\) consider a geodesic \(\tau\) in \(\Gamma(G,X\cup\mathcal{H})\) with \(\tau_- = 1\) and \(\tau_+ = g\). Let \(u\) be any vertex of \(\tau\).
Since \(g \in \langle Q', R' \rangle\), it has a path representative \(p = p_1 \dots p_n\) of minimal type, with $p_-=1$.
Let \(\Sigma(p,\Theta) = f_0 e_1 f_1 \dots f_{m-1} e_m f_m\) be the \(\Theta\)-shortcutting of \(p\) obtained from Procedure~\ref{proc:shortcutting}, where $\Theta=\Theta_1(0)$ is provided by Lemma~\ref{lem:pathreps_have_qgd_shortcutting}.
This lemma implies that $p$ is \((B,c_0,\zeta,\Theta)\)-tamable and \(\Sigma(p,\Theta)\) is a \((\lambda,c)\)-quasigeodesic without backtracking, where $\lambda \ge 1$ and $c \ge 0$ are the constants fixed in Notation~\ref{not:main_in_part_2}. Therefore, by Proposition~\ref{prop:osinbcp}, there is a phase vertex \(v\) of \(\Sigma(p,\Theta)\) with \(d_X(u,v) \leq \kappa(\lambda,c,0)\).
Since each $e_i$ is a single edge, the vertex \(v\) lies on the geodesic subpath \(f_i\) of \(\Sigma(p,\Theta)\), for some \(i \in \{0, \dots, m\}\).
The subpath of \(p\) sharing endpoints with \(f_i\) is \((4,c_3)\)-quasigeodesic by Lemma~\ref{lem:beta_quasigeodesic}.
Hence there is a vertex \(w\) of \(p\) such that \(d_X(v,w) \leq \kappa(4,c_3,0)\), by Proposition~\ref{prop:osinbcp}.
Now \(w\) is a vertex of a subpath \(p_j\) of \(p\), for some \(j \in \{ 1, \dots, n \}\).
Let \(x = (p_j)_-\), and note that \(x \in \langle Q', R' \rangle\).
Without loss of generality, suppose that \(\elem{p_j} \in Q'\) (the case when \(\elem{p_j} \in R'\) can be treated similarly).
Then by the relative quasiconvexity of \(Q'\), \(d_X(w,xQ') \leq \varepsilon'\), whence \(d_X(w,\langle Q', R' \rangle) \leq \varepsilon'\).
Therefore
\begin{align*}
d_X(u,\langle Q', R' \rangle) &\leq d_X(u,v) + d_X(v,w) + d_X(w,\langle Q', R' \rangle) \\
&\leq \kappa(\lambda,c,0) + \kappa(4,c_3,0) + \varepsilon',
\end{align*}
so that \(\langle Q', R' \rangle\) is a relatively quasiconvex subgroup of $G$, with the quasiconvexity constant $\kappa(\lambda,c,0) + \kappa(4,c_3,0) + \varepsilon'$. \end{proof}
We will next show that properties \descref{P2} and \descref{P3} will be satisfied if one chooses the constants \(B\) and \(C\) of \descref{C1}-\descref{C5} to be sufficiently large with respect to \(A\).
\begin{lemma} \label{lem:metric_P2}
For any \(A \geq 0\) there exist constants \(\beta_2 = \beta_2(A) \geq 0\) and $\gamma_2=\gamma_2(A) \ge 0$ such that if \(Q'\leqslant Q\) and \(R'\leqslant R\) satisfy conditions \descref{C1}-\descref{C5} with constants \(B \geq \beta_2\) and \(C \geq \gamma_2\) and family \(\mathcal{P} \supseteq \mathcal{P}_1\), then \[\minx \Big(\langle Q', R' \rangle \setminus S\Big) \geq A.\] \end{lemma}
\begin{proof}
Given any \(A \geq 0\) let \(\eta = \eta(\lambda,c,A)\) be the constant provided by Lemma~\ref{lem:qgds_with_long_comps}.
Using Lemma~\ref{lem:pathreps_have_qgd_shortcutting}, set
\[ \Theta = \Theta_1(\eta),
~ \gamma_2 = C_3(\eta)~ \text{ and }\beta_2 = \max\{B_1(\eta),(4A + c_3)\Theta\}.
\]
Suppose that \(Q'\) and \(R'\) satisfy conditions \descref{C1}-\descref{C5} with constants \(B \geq \beta_2\) and \(C \geq \gamma_2\), and let \(g \in \langle Q', R' \rangle\) be any element with \(\abs{g}_X < A\). Let \(p\) be a path representative of \(g\) with minimal type.
By Lemma~\ref{lem:pathreps_have_qgd_shortcutting}, $p$ is \((B,c_0,\zeta,\Theta)\)-tamable, the \(\Theta\)-shortcutting \(\Sigma(p,\Theta) = f_0 e_1 f_1 \dots f_{m-1} e_m f_m\) is \((\lambda,c)\)-quasigeodesic without backtracking, and, for each \(k = 1, \dots, m\), \(e'_k\), the \(\mathcal{H}\)-component of \(\Sigma(p,\Theta)\) containing \(e_k\), is isolated and satisfies \(\abs{e'_k}_X \geq \eta\).
If \(m \geq 1\), then, according to Lemma~\ref{lem:qgds_with_long_comps}, \(\abs{g}_X = \abs{\Sigma(p,\Theta)}_X \geq A\), contradicting our assumption.
Therefore it must be the case that \(m = 0\) and \(\Sigma(p,\Theta) = f_0\).
Since \(p_- = (f_0)_-\) and \(p_+ = (f_0)_+\), Lemma~\ref{lem:beta_quasigeodesic} tells us that \(p\) is \((4,c_3)\)-quasigeodesic.
Moreover, following Remark~\ref{rem:shortcutting}(c), we see that \(p_i\) has no $\mathcal{H}$-component $h$ with $\abs{h}_X \ge \Theta$, for each $i=1,\dots,n$.
Now, arguing by contradiction, suppose that $g \notin S$. Then $\elem{p_1} \in (Q' \cup R') \setminus S$ (by Remark~\ref{rem:alt}), so
\(\abs{p_1}_X \geq B \ge \beta_2\), by Lemma~\ref{lem:C2_implies_old_C2}. Lemma~\ref{lem:rel_geods_with_short_comps} now implies that
\[
\ell(p_1)\ge \beta_2/\Theta \ge 4A+c_3.
\]
Since $\ell(p) \ge \ell(p_1)$, the $(4,c_3)$-quasigeodesicity of \(p\) yields
\[
A > \abs{g}_{X} \ge |g|_{X\cup \mathcal{H}} = |p|_{X\cup \mathcal{H}} \geq \frac{1}{4} \left(\ell(p) - c_3\right) \geq A,
\]
which is a contradiction. Therefore $g \in S$ and the lemma is proved. \end{proof}
In order to prove that property \descref{P3} holds for the subgroups \(Q'\) and \(R'\), we need to consider path representatives of elements $g \in Q \langle Q', R' \rangle R$. These path representatives will necessarily have to be slightly different from those in Definition~\ref{def:path_reps}.
\begin{definition}[Path representative, II] \label{def:double_coset_path_reps}
Let \(g\) be an element of \(Q \langle Q', R' \rangle R\).
Suppose that \(p = q p_1 \dots p_n r\) is a broken line in \(\Gamma(G,X\cup\mathcal{H})\), satisfying the following conditions:
\begin{itemize}
\item \(\elem{p} = g\);
\item \(\elem{q} \in Q\) and \(\elem{r} \in R\);
\item \(\elem{p_i} \in Q' \cup R'\), for each \(i \in \{1, \dots, n\}\).
\end{itemize}
Then we say that \(p\) is a \emph{path representative} of \(g\) in the product \(Q \langle Q', R' \rangle R\). \end{definition}
Similarly to Definition~\ref{def:type_of_path_rep}, we can define types for such path representatives.
\begin{definition}[Type of a path representative, II] \label{def:type_of_double_coset_path_rep}
Suppose that \(p = q p_1 \dots p_n r\) is a path representative of some \(g \in Q \langle Q', R' \rangle R\), as described in Definition~\ref{def:double_coset_path_reps}.
Let \(Y\) denote the set of all \(\mathcal{H}\)-components of the segments of \(p\).
We define the \emph{type} of the path representative \(p\) to be the triple
\[\tau(p) = \Big(n, \ell(p),\sum_{y \in Y} |y|_X \Big) \in {\NN_0}^3.\] \end{definition}
\begin{remark} \label{rem:props_of_double_coset_path_reps}
Note that, by Definition~\ref{def:double_coset_path_reps}, a path representative $p=q p_1 \dots p_n r$, of an element $g \in Q \langle Q', R' \rangle R\setminus QR$, must necessarily satisfy $n>0$.
Moreover, if $p$ has minimal type (so $n$ is the smallest possible) then $\elem{p_1} \in R' \setminus S$, $\elem{p_n} \in Q' \setminus S$ and the labels of $p_1,\dots,p_{n}$ will alternate between representing elements of $R' \setminus S$ and $Q' \setminus S$.
It follows that the integer $n$ must be even, so $n \ge 2$. \end{remark}
For example, if $g \in R'Q'\setminus QR$ then a minimal type path representative of $g$ will have the form $qp_1p_2 r$, where $q$ and $r$ are trivial paths, $\elem{p_1} \in R'$ and $\elem{p_2} \in Q'$.
It is not difficult to check that the results of Sections~\ref{sec:path_reps}, \ref{sec:adj_backtracking}, and \ref{sec:multitracking} hold equally well for minimal type path representatives of the above form for elements $g \in Q \langle Q', R' \rangle R \setminus QR$, with only superficial adjustments to the proofs in those sections. It follows that Lemma~\ref{lem:pathreps_have_qgd_shortcutting} also remains valid in these settings.
\begin{lemma} \label{lem:metric_P3}
Under the assumptions of Lemma~\ref{lem:metric_P2}, we can additionally conclude that
\[
\minx \Big( Q \langle Q', R' \rangle R \setminus QR\Big) \geq A.
\] \end{lemma}
\begin{proof}
For any \(A \geq 0\) we define the constants $\eta$, $\Theta$, $\gamma_2$ and $\beta_2$ exactly as in Lemma~\ref{lem:metric_P2}.
Suppose that for some element $g \in Q \langle Q',R'\rangle R \setminus QR$ we have $\abs{g}_X<A$.
Let \(p = q p_1 \dots p_n r\) be a minimal type path representative of $g$, of the form described in Definition~\ref{def:double_coset_path_reps}.
Arguing in the same way as in Lemma~\ref{lem:metric_P2}, we can deduce that $p$ is $(4,c_3)$-quasigeodesic and for each $i=1,\dots,n$, $p_i$ has no $\mathcal{H}$-component $h$ with $\abs{h}_X \ge\Theta$.
According to Remark~\ref{rem:props_of_double_coset_path_reps}, $n \ge 2$ and \(\elem{p_1} \in R' \setminus S\). So, by Lemma~\ref{lem:C2_implies_old_C2}, \(\abs{p_1}_X \geq B \ge \beta_2\).
The same argument as in Lemma~\ref{lem:metric_P2} now yields that $\abs{g}_X \ge A$, leading to a contradiction.
Therefore it must be that $\abs{g}_X \ge A$ for any $g \in Q \langle Q',R'\rangle R \setminus QR$, and the proof is complete. \end{proof}
We are finally able to prove Theorem~\ref{thm:metric_qc}.
\begin{proof}[Proof of Theorem~\ref{thm:metric_qc}]
Choose $\mathcal{P}$ to be the finite family $\mathcal{P}_1$, defined in Notation~\ref{not:main_in_part_2}. Given any \(A \geq 0\), we apply Proposition~\ref{prop:metric_P1} and Lemma~\ref{lem:metric_P2} to define the constants \[B = \max\{\beta_1,\beta_2(A)\}~\text{ and }~ C = \max\{\gamma_1,\gamma_2(A)\}.\]
Suppose that \(Q' \leqslant Q\) and \(R' \leqslant R\) are subgroups satisfying conditions \descref{C1}-\descref{C5} with constants \(B\) and \(C\) and the finite family of parabolic subgroups \(\mathcal{P}\).
Then property \descref{P1} holds by Proposition~\ref{prop:metric_P1}, while properties \descref{P2} and \descref{P3} are satisfied by Lemmas~\ref{lem:metric_P2} and~\ref{lem:metric_P3} respectively. \end{proof}
\section{Using separability to establish the conditions of the quasiconvexity theorem} \label{sec:sep->metric}
In this section we will show how one can prove the existence of finite index subgroups $Q' \leqslant_f Q$ and $R' \leqslant_f R$, satisfying the conditions \descref{C1}--\descref{C5} from Subsection~\ref{subsec:3.1}, using certain separability assumptions. We start by finding such assumptions for establishing
\descref{C2} and \descref{C3}.
\begin{proposition} \label{prop:sep->C2-C3}
Let $G$ be a group generated by a finite subset $X$, let $Q,R \leqslant G$ and $S=Q \cap R$, and let $\mathcal P$ be a finite collection of subgroups of $G$. Suppose that $Q$ and $R$ are separable in $G$ and $PS$ is separable in $G$, for each $P \in \mathcal{P}$.
Then for any constants $B,C \ge 0$ there exists a finite index subgroup $L \leqslant_f G$, with $S \subseteq L$, such that conditions \descref{C2} and \descref{C3} are satisfied by arbitrary subgroups $Q' \leqslant Q \cap L$ and $R' \leqslant R \cap L$. \end{proposition}
\begin{proof}
Combining the separability of $Q$ and $R$ in $G$ with Lemma~\ref{lem:sep->large_minx}, we can find $E_1,E_2 \lhd_f G$ such that $\minx(Q E_1 \setminus Q) \ge B$ and $\minx(R E_2 \setminus R) \ge B$.
Set $N_0=E_1 \cap E_2 \lhd_f G$ and observe that \[QS N_0 Q=QN_0Q=QQ N_0=Q N_0 \subseteq Q E_1,\] as $Q$ is a subgroup containing $S$ and normalising $N_0$ in $G$.
Similarly, $RSN_0 R=R N_0 \subseteq RE_2$, therefore
\begin{equation}
\label{eq:1N_0}
\minx(QS N_0 Q \setminus Q) \ge B \text{ and } \minx(RS N_0 R \setminus R) \ge B.
\end{equation}
Let $\mathcal{P}=\{P_1,\dots,P_k\}$.
The assumptions imply that for every $i \in \{1,\dots,k\}$ the double coset $P_i S$ is separable in $G$, hence we can apply Lemma~\ref{lem:sep->large_minx} again to find finite index normal subgroups $N_i \lhd_f G$ satisfying
\begin{equation}
\label{eq:1N_i}
\minx(P_i S N_i \setminus P_i S) \ge C,~\text{ for each } i=1,\dots,k.
\end{equation}
Now set $L= \bigcap_{i=0}^k SN_i \leqslant_f G$, and choose arbitrary subgroups $Q' \leqslant Q \cap L$ and $R' \leqslant R \cap L$.
Then $S \subseteq L$ and $\langle Q',R' \rangle \subseteq L \subseteq SN_i$, for all $i=0,\dots,k$, by construction, hence \descref{C2} holds by \eqref{eq:1N_0} and \descref{C3} holds by \eqref{eq:1N_i}, as desired. \end{proof}
To establish condition \descref{C5} we need to be able to lift certain finite index subgroups of a maximal parabolic subgroup $P \leqslant G$ to finite index subgroups of $G$ in a controlled way. The next statement shows how a double coset separability assumption can help with this task.
\begin{lemma} \label{lem:Lemma_1}
Let \(G\) be a group, \(P, Q \leqslant G\) be subgroups of \(G\) and let \(K \leqslant_f P\) be a finite index subgroup of \(P\), with \(Q \cap P \subseteq K\).
If \(KQ\) is separable in \(G\), then there is a finite index subgroup \(M \leqslant_f G\) such that \(Q \subseteq M\) and \(M \cap P \subseteq K\). \end{lemma}
\begin{proof}
Let \(P = K \cup Kh_1 \cup \dots \cup Kh_m\), where \(h_1, \dots, h_m \in P \setminus K\).
Note that \(KQ \cap P = K(Q \cap P) = K\) (indeed, if \(kq \in P\), with \(k \in K\) and \(q \in Q\), then \(q = k^{-1}(kq) \in Q \cap P \subseteq K\)), so \(h_1, \dots, h_m \notin KQ\).
The double coset \(KQ\) is profinitely closed, so, by Lemma~\ref{lem:sep->large_minx}(a), there exists $N\lhd_f G$ such that
\[\{h_1, \dots, h_m\} \cap KQN = \emptyset.\]
Let \(M = QN \leqslant_f G\), so that the above implies $Kh_i \cap M=\emptyset$, for each $i=1,\dots,m$.
We then have $Q \subseteq M$ and $M \cap P \subseteq K$, as required. \end{proof}
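For completeness, let us verify the last two claims. We have \(Q \subseteq QN = M\) by construction. Moreover, if \(x \in M \cap P\) then, in view of the coset decomposition \(P = K \cup Kh_1 \cup \dots \cup Kh_m\) and the fact that \(Kh_i \cap M = \emptyset\), for each \(i = 1, \dots, m\), the only possibility is \(x \in K\); in other words,
\[
M \cap P \subseteq P \setminus \bigcup_{i=1}^{m} Kh_i = K.
\]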
We are now in a position to prove the main result of this section.
\begin{theorem} \label{thm:sep->qc_comb}
Assume that $G$ is a group generated by a finite set $X$, $Q,R \leqslant G$ are subgroups of \(G\), and denote $S=Q \cap R$.
Let $\mathcal{P}$ be a finite collection of subgroups of $G$ such that for every $P \in \mathcal{P}$ all of the following hold:
\begin{enumerate}[label={\normalfont (S\arabic*)}]
\item \label{cond:1} $Q$ and $R$ are separable in $G$;
\item \label{cond:2} the double coset $PS$ is separable in $G$;
\item \label{cond:3} for all $K \leqslant_f P$ and $T \leqslant_f Q$, satisfying $S \subseteq T$ and $T \cap P \subseteq K$, the double coset $K T$ is separable in $G$;
\item \label{cond:4} for all $U \leqslant_f Q \cap P$, with $S \cap P \subseteq U$, the double coset $U(R \cap P)$ is separable in $P$.
\end{enumerate}
Then, given arbitrary constants $B,C \ge 0$, there exist finite index subgroups $Q' \leqslant_f Q$ and $R' \leqslant_f R$ such that conditions \descref{C1}--\descref{C5} are all satisfied.
More precisely, there exists $L \leqslant_f G$, with $S \subseteq L$, such that for any $L' \leqslant_f L$, satisfying $S \subseteq L'$, we can choose $Q'=Q\cap L' \leqslant _f Q$ and there exists $M \leqslant_f L'$, with $Q' \subseteq M$, such that for any $M' \leqslant_f M$, satisfying $Q' \subseteq M'$, we can choose $R'=R \cap M' \leqslant_f R$. \end{theorem}
\begin{proof}
The idea is that assumption \ref{cond:1} will take care of condition \descref{C2}, \ref{cond:2} will take care of \descref{C3}, and \ref{cond:3}, \ref{cond:4} will take care of \descref{C5}.
The subgroups $Q'$ and $R'$ will satisfy $Q'=Q \cap M'$ and $R'=R \cap M'$, for some $M' \leqslant_f G$, with $S \subseteq M'$, which will immediately imply \descref{C1} and \descref{C4}.
Let $\mathcal{P}=\{P_1,\dots,P_k\}$.
Arguing as in the proof of Proposition~\ref{prop:sep->C2-C3} (using the assumptions \ref{cond:1} and \ref{cond:2}), we can find finite index normal subgroups $N_i \lhd_f G$, $i=0,\dots,k$, such that
\[
\minx(QS N_0 Q \setminus Q) \ge B,~ \minx(RS N_0 R \setminus R) \ge B~\text{ and }
\]
\[
\minx(P_i S N_i \setminus P_i S) \ge C,~\text{ for each } i=1,\dots,k.
\]
We can now define a finite index subgroup $L \leqslant_f G$ by $L=\bigcap_{i=0}^k SN_i$.
Note that $S \subseteq L$ by construction, and for each $i \in \{1,\dots,k\}$ we have
\begin{equation}
\label{eq:L}
\minx(QLQ \setminus Q) \ge B,~\minx(RLR \setminus R) \ge B~\text{ and }~\minx(P_i L \setminus P_i S) \ge C.
\end{equation}
Choose an arbitrary finite index subgroup $L' \leqslant_f L$, with $ S \subseteq L'$, and define $Q'=Q \cap L'$, so that $S \leqslant Q' \leqslant_f Q$.
To construct $R' \leqslant_f R$, consider any $i \in \{1,\dots,k\}$ and denote $Q_i=Q \cap P_i$, $R_i=R \cap P_i$ and $Q_i'=Q' \cap P_i \leqslant_f Q_i$.
Choose some elements $a_{i1}, \dots,a_{i n_i} \in Q_i$ such that $Q_i=\bigsqcup_{j=1}^{n_i} a_{ij} Q_i'$. Assumption \ref{cond:4} implies that the subset $Q_i'R_i$ is separable in $P_i$, hence, by claim (c) of Lemma~\ref{lem:sep->large_minx}, there exists $F_i \lhd_f P_i$ such that
\begin{equation}
\label{eq:F_i}
\minx \Bigl(a_{ij}Q'_i R_i F_i \setminus a_{ij}Q'_i R_i\Bigr) \ge C, ~\text{ for } j=1,\dots,n_i.
\end{equation}
Define
$K_i=Q_i'F_i \leqslant_f P_i$. Then $Q' \cap P_i=Q_i' \subseteq K_i$ and $a_{ij}Q'_i K_i R_i=a_{ij}Q'_i R_i F_i $, for each $j=1,\dots,n_i$.
Therefore, from \eqref{eq:F_i} we can deduce that
\begin{equation}
\label{eq:K_i}
\minx\Bigl(a_{ij}K_i R_i \setminus a_{ij}Q'_i R_i\Bigr) \ge C, ~\text{ for all } j=1,\dots,n_i.
\end{equation}
By assumption \ref{cond:3}, the double coset $K_iQ'$ is separable in $G$, so we can apply Lemma~\ref{lem:Lemma_1} to find $M_i \leqslant_f G$ such that $Q' \subseteq M_i$ and $M_i \cap P_i \subseteq K_i$.
We now let $\displaystyle M=\bigcap_{i=1}^k M_i \cap L'$ and observe that $Q' \leqslant M \leqslant_f L'$ and $M \cap P_i \subseteq K_i$ for each $i \in \{1,\dots,k\}$. Inequality \eqref{eq:K_i} yields
\begin{equation}
\label{eq:M}
\minx\Bigl(a_{ij}(M \cap P_i) R_i \setminus a_{ij}Q'_i R_i\Bigr) \ge C, ~\text{ for all } i=1,\dots,k \text{ and } j=1,\dots, n_i.
\end{equation}
We can now choose an arbitrary finite index subgroup $M' \leqslant_f M$, with $Q' \subseteq M'$, and define $R'=R \cap M'$. Observe that $M' \leqslant_f G$, by construction, hence $R' \leqslant_f R$.
Let us check that the subgroups $Q'$ and $R'$ obtained above satisfy conditions \descref{C1}--\descref{C5}. Indeed, by construction, $S=Q \cap R \subseteq Q'$, so $S \subseteq R \cap M'=R'$, hence
\[
S \subseteq Q' \cap R' \subseteq Q \cap R=S,
\]
thus \descref{C1} holds. We also have $Q'=Q \cap L'=Q \cap M'$, as $Q' \subseteq M' \subseteq L'$, hence
\[
Q' \subseteq Q \cap \langle Q',R'\rangle \subseteq Q \cap M' =Q',\]
thus $Q \cap \langle Q',R'\rangle = Q'$. After intersecting both sides of the latter equation with an arbitrary $P \in \mathcal{P}$, we get $Q_P \cap \langle Q',R'\rangle=Q'_P$, hence
\[
Q_P' \subseteq Q_P \cap \langle Q_P',R_P'\rangle \subseteq Q_P \cap \langle Q',R'\rangle = Q_P',
\]
thus $Q_P \cap \langle Q_P',R_P'\rangle=Q_P'$. Similarly, $R_P \cap \langle Q_P',R_P'\rangle=R_P'$, so condition \descref{C4} is satisfied.
Conditions \descref{C2} and \descref{C3} hold by \eqref{eq:L}, because $Q', R' \subseteq L$ by construction.
To prove \descref{C5}, take $P_i \in \mathcal{P}$ for any $i \in \{1,\dots,k\}$, and denote $Q_i=Q \cap P_i$, $Q_i'=Q' \cap P_i$, $R_i=R \cap P_i$ and $R_i'=R' \cap P_i$, as before.
For any $q \in Q_i$ there exists $j \in \{1,\dots,n_i\}$ such that $q \in a_{ij}Q_i'$. It follows that
\begin{equation}\label{eq:q->a_ij}
q \langle Q_i',R_i' \rangle R_i=a_{ij} \langle Q_i',R_i' \rangle R_i~\text{ and }~q Q_i' R_i=a_{ij}Q_i'R_i.
\end{equation}
Since $\langle Q_i',R_i' \rangle \leqslant M \cap P_i$, we can combine \eqref{eq:q->a_ij} with \eqref{eq:M} to deduce that
\[
\minx\Bigl(q\langle Q_i',R_i' \rangle R_i \setminus qQ'_i R_i\Bigr) \ge C,
\]
which establishes condition \descref{C5}.
Thus the proof is complete. \end{proof}
\section{Double coset separability in amalgamated free products} \label{sec:dcs_in_amalgams} In this section we develop a method for establishing the separability assumptions \ref{cond:2} and \ref{cond:3} of Theorem~\ref{thm:sep->qc_comb} using amalgamated products. The idea is that when $G$ is a relatively hyperbolic group, $P$ is a maximal parabolic subgroup and $Q$ is a relatively quasiconvex subgroup of $G$, we can apply the combination theorem of Mart\'{i}nez-Pedroza (Theorem~\ref{thm:M-P_comb}) to find a finite index subgroup $H \leqslant_f P$ such that $A=\langle H,Q \rangle \cong H*_{H \cap Q} Q$, so proving the separability of $PQ$ in $G$ can be reduced to proving the separability of $HQ$ in the amalgamated free product $A$.
The next proposition gives a new criterion for showing separability of double cosets in amalgamated free products. This criterion may be of independent interest.
\begin{proposition} \label{prop:dc_in_am}
Let $A=B*_D C$ be an amalgamated free product, where we consider $B$, $C$ and $D$ as subgroups of $A$ with $B \cap C=D$.
Suppose that $D$ is separable in $A$, and $U \subseteq B$, $V \subseteq C$ are arbitrary subsets.
If the product $UD$ (respectively, $DV$) is separable in $A$ then the product $UC$ (respectively, $BV$) is separable in $A$. \end{proposition}
\begin{proof}
We will prove the statement in the case of $UC$, as the other case is similar.
If $U= \emptyset$ then $UC=\emptyset$, so we can suppose that $U$ is non-empty.
Take any $u \in U$.
According to Remark~\ref{rem:sep_props}, without loss of generality we can replace $U$ with $u^{-1}U$ to assume that $1 \in U$.
Consider any element $g \in A \setminus UC$; since $1 \in U$, we deduce that $g \notin C$. We will construct a homomorphism from $A$ to a finite group $L$ which separates the image of $g$ from the image of $UC$.
Since $g \notin D$, it has a reduced form $g=x_1x_2 \dots x_k$, where $x_i$ belongs to one of the factors $B$, $C$, for each $i$, consecutive elements $x_i$, $x_{i+1}$ belong to different factors, and $x_i \notin D$ for all $i=1,\dots,k$ (see \cite[p. 187]{LS}).
Since $D$ is separable in $A$, by Lemma~\ref{lem:sep->large_minx}(a) there is a finite group $M$ and a homomorphism $\varphi: A \to M$ such that
\begin{equation}
\label{eq:notinD}
\varphi(x_i) \notin \varphi(D) ~\text{ in } M, \text{ for every } i=1,\dots,k.
\end{equation}
Denote by $\overbar{B}$, $\overbar{C}$ and $\overbar{D}$ the $\varphi$-images of $B$, $C$ and $D$ in $M$ respectively.
We can then consider the amalgamated free product $\overbar{A}=\overbar{B}*_{\overbar{D}} \overbar{C}$, together with the natural homomorphism $\psi:A \to \overbar{A}$, which is compatible with $\varphi$ on $B$ and $C$ (in other words, $\psi|_B=\varphi|_B$ and $\psi|_C=\varphi|_C$).
It follows that $\varphi$ factors through $\psi$, i.e., $\varphi=\overbar{\varphi} \circ \psi$, where $\overbar\varphi:\overbar{A} \to M$ is the natural homomorphism extending the embeddings of $\overbar{B}$ and $\overbar{C}$ in $M$.
Equation \eqref{eq:notinD} now implies that
\begin{equation}
\label{eq:notinDbar}
\psi(x_i) \notin \overbar{D} ~\text{ in } \overbar{A}, \text{ for every } i=1,\dots,k.
\end{equation}
Denote $\overline{x}_i=\psi(x_i) \in \overbar{A}$, $i=1,\dots,k$. In view of \eqref{eq:notinDbar}, $\psi(g)=\overline{x}_1\dots \overline{x}_k$ is a reduced form in the amalgamated free product $\overbar{A}$.
We will now consider several cases.
\underline{\emph{Case 1:}} assume that $k \ge 3$.
Then the above reduced form for $\psi(g)$ has length $k \ge 3$, so by the normal form theorem for amalgamated free products \cite[Theorem IV.2.6]{LS}, it cannot be equal to an element from $\psi(UC)=\psi(U)\overbar{C}\subseteq \overbar{B}\overbar{C}$, which would necessarily have a reduced form of length at most $2$ in $\overbar{A}$.
Therefore $\psi(g) \notin \psi(UC)$ in $\overbar A$.
Since $\overbar{B}$ and $\overbar C$ are finite groups, their amalgamated free product $\overbar A$ is residually finite (in fact, $\overbar A$ is a virtually free group -- see \cite[Proposition 2.6.11]{Serre}), so the finite subset $\psi(UC)$ is closed in the profinite topology on $\overbar A$.
Hence there is a finite group $L$ and a homomorphism $\eta:\overbar A \to L$ such that $\eta(\psi(g)) \notin \eta(\psi(UC))$ in $L$.
The composition $\eta\circ \psi:A \to L$ is the required homomorphism separating the image of $g$ from the image of $UC$, and the consideration of Case~1 is complete.
\underline{\emph{Case 2:}} suppose that $k=2$, $x_1\in C\setminus D$ and $x_2 \in B\setminus D$.
Then $\overline{x}_1 \in \overbar{C}\setminus\overbar{D}$ and $\overline{x}_2 \in \overbar{B}\setminus\overbar{D}$ by \eqref{eq:notinDbar}, so that $\psi(g)=\overline{x}_1 \overline{x}_2$ is a reduced form of length $2$ in $\overbar A$.
Again, the normal form theorem for amalgamated free products implies that $\psi(g) \notin \overbar{B}\overbar{C}$ in $\overbar{A}$, hence $\psi(g) \notin \psi(U C)$ and we can find the required finite quotient $L$ of $A$ as in Case 1.
\underline{\emph{Case 3:}} $g=bc$, where $b \in B\setminus UD$ and $c \in C$ (here we allow $c \in D$, so this case also covers the situation when $k=1$).
This is the only case where we need to use the assumption that $UD$ is separable in $A$. This assumption implies that we can find a finite group $M$ and a homomorphism $\varphi:A \to M$ satisfying
\begin{equation*}
\varphi(b) \notin \varphi(UD) ~ \text{ in } M.
\end{equation*}
As above, we can construct the amalgamated free product $\overbar{A}=\overbar{B}*_{\overbar{D}} \overbar{C}$, together with the natural homomorphism $\psi:A \to \overbar{A}$, such that $\varphi$ factors through $\psi$.
It follows that
\begin{equation}
\label{eq:notinUDbar}
\psi(b) \notin \psi(UD)= \psi(U) \overbar{D} ~\text{ in } \overbar A.
\end{equation}
Observe that $\psi(g) \notin \psi(UC)=\psi(U) \overbar{C}$ in $\overbar{A}$.
Indeed, otherwise we would have
\[
\psi(b)=\psi(g)\psi(c^{-1}) \in \psi(U)\overbar{C} \cap \overbar{B}=\psi(U)(\overbar{C} \cap \overbar{B})=\psi(U)\overbar{D},
\]
which would contradict \eqref{eq:notinUDbar} (in the first equality we used the fact that $\overbar B$ is a subgroup of $\overbar{A}$ containing the subset $\psi(U)$).
We can now argue as in Case 1 above to find a homomorphism from $A$ to a finite group $L$ separating the image of $g$ from the image of $UC$.
It is not hard to see that since $g \notin UC$ in $A$, the above three cases cover all possibilities, hence the proof is complete. \end{proof}
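For completeness, one can check that the three cases above exhaust all possibilities. Since \(1 \in U\) and \(g \notin UC\), we have \(g \notin C\) and \(g \notin UD\) (as \(UD \subseteq UC\)). Consider the length \(k\) of the reduced form \(g = x_1 \dots x_k\):
\begin{itemize}
\item if \(k \geq 3\), we are in Case 1;
\item if \(k = 2\), with \(x_1 \in C \setminus D\) and \(x_2 \in B \setminus D\), we are in Case 2;
\item if \(k = 2\), with \(x_1 \in B \setminus D\) and \(x_2 \in C \setminus D\), then \(x_1 \notin UD\) (otherwise we would have \(g \in UDC = UC\)), so we are in Case 3 with \(b = x_1\) and \(c = x_2\);
\item if \(k = 1\), then \(x_1 \in C \setminus D\) is impossible (it would give \(g \in C \subseteq UC\)), so \(x_1 \in B \setminus UD\) and we are in Case 3 with \(b = x_1\) and \(c = 1\).
\end{itemize}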
In the next two corollaries we assume that $A=B*_D C$ is the amalgamated free product of its subgroups $B,C$, with $B \cap C=D$.
\begin{corollary} \label{cor:sep_double_coset_amalg}
Suppose that $D$ is a separable subgroup in $A$. Then $B$, $C$ and $BC$ are all separable in $A$. \end{corollary}
\begin{proof}
The separability of $C$ and $B$ in $A$ follows from Proposition~\ref{prop:dc_in_am}, after choosing $U=\{1\}$ and $V=\{1\}$.
The separability of $BC$ is also a consequence of Proposition~\ref{prop:dc_in_am}, where we take $U=B$ (so that $UD=BD=B$). \end{proof}
We will not need the next corollary in this paper, but it may be of independent interest and can be used to strengthen some of the statements proved in Section~\ref{sec:dc_when_one_is_parab}.
\begin{corollary} \label{cor:sep_triple_coset_amalg}
Suppose that $U \subseteq B$, $V \subseteq C$ are subsets such that $UD$ and $DV$ are separable in $A$. Then the triple product $UDV$ is separable in $A$. \end{corollary}
\begin{proof}
If either $U$ or $V$ is empty then $UDV$ is empty and, hence, separable in $A$.
Thus we can suppose that there exist some elements $u \in U$ and $v \in V$. By Remark~\ref{rem:sep_props}, the subsets $u^{-1}UD \subseteq B$ and $DVv^{-1} \subseteq C$ are separable in $A$.
Since both of them contain $D$, we see that $D=u^{-1}UD \cap DVv^{-1}$, thus $D$ is separable in $A$.
By Proposition~\ref{prop:dc_in_am}, the products $UC$ and $BV$ are separable in $A$, so the statement follows from the observation that
\[
UC \cap BV=UDV~\text{ in } A.
\] \end{proof}
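For completeness, let us verify the observation used at the end of the proof. The inclusion \(UDV \subseteq UC \cap BV\) holds because \(DV \subseteq C\) and \(UD \subseteq B\). Conversely, if \(uc = bv\), where \(u \in U\), \(c \in C\), \(b \in B\) and \(v \in V\), then
\[
u^{-1}b = c v^{-1} \in B \cap C = D,
\]
so that \(b \in uD\) and \(bv \in uDV \subseteq UDV\). Therefore \(UC \cap BV = UDV\) in \(A\).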
In the case when $U$ and $V$ are subgroups, the above corollary shows that we can use separability of double cosets $UD$ and $DV$ to deduce separability of the triple coset $UDV$. Moreover, if both $U$ and $V$ are subgroups containing $D$, Corollary~\ref{cor:sep_triple_coset_amalg} implies that the double coset $UV=UDV$ is separable in $A$, as long as $U$ and $V$ are separable in $A$.
\section{Separability of double cosets when one factor is parabolic} \label{sec:dc_when_one_is_parab}
Throughout this section we will assume that $G$ is a group generated by a finite subset $X$ and hyperbolic relative to a collection of peripheral subgroups \(\{H_\nu \mid \nu \in \Nu\}\), with \(|\Nu|<\infty\).
Our goal in this section will be to establish separability of double cosets required by conditions \ref{cond:2} and \ref{cond:3} of Theorem~\ref{thm:sep->qc_comb}. All statements in this section will assume that finitely generated relatively quasiconvex subgroups of $G$ are separable, i.e., $G$ is QCERF (see Definition~\ref{def:QCERF}).
\begin{lemma} \label{lem:SA1->prof_top_on_qc_sbgps_is_induced}
Suppose that $G$ is QCERF. If $A$ is a finitely generated relatively quasiconvex subgroup of $G$ then every subset of $A$ which is closed in $\pt(A)$ is also closed in $\pt(G)$. \end{lemma}
\begin{proof}
By Lemma~\ref{lem:props_of_qc_sbgps} every subgroup of finite index in $A$ is finitely generated and relatively quasiconvex, hence it is separable in $G$ as $G$ is QCERF.
The claim of the lemma now follows from Lemma~\ref{lem:induced_top}(b). \end{proof}
The next statement is essentially a corollary of the combination theorem of Mart\'{i}nez-Pedroza (Theorem~\ref{thm:M-P_comb}).
\begin{proposition} \label{prop:M-P_comb_in_prof_terms}
Suppose that $G$ is QCERF. Let $P$ be a maximal parabolic subgroup of $G$, let $Q \leqslant G$ be a finitely generated relatively quasiconvex subgroup and let $D=P \cap Q$.
Then there exists a finite index subgroup $H \leqslant_f P$ such that all of the following properties hold:
\begin{itemize}
\item $ H \cap Q=D$;
\item the subgroup $A=\langle H,Q \rangle$ is relatively quasiconvex in $G$;
\item $A$ is naturally isomorphic to $H*_{D} Q$;
\item $D$ is separable in $A$;
\item every subset of $A$ which is closed in $\pt(A)$ is also closed in $\pt(G)$.
\end{itemize} \end{proposition}
\begin{proof}
Let $C \ge 0$ be the constant provided by Theorem~\ref{thm:M-P_comb}, applied to the maximal parabolic subgroup $P$ and the relatively quasiconvex subgroup $Q$.
By QCERF-ness, $Q$ is separable in $G$, so by Lemma~\ref{lem:sep->large_minx} there exists $N \lhd_f G$ such that $\minx(QN \setminus Q) \ge C$.
Therefore, after setting $H=P \cap QN \leqslant_f P$, we get $\minx(H \setminus D)=\minx(H \setminus Q) \ge C$.
Note that since $D=P \cap Q \subseteq H \subseteq P$, we have $H \cap Q=D$.
Hence we can apply Theorem~\ref{thm:M-P_comb} to conclude that $A=\langle H,Q \rangle$ is relatively quasiconvex in $G$ and is naturally isomorphic to the amalgamated free product $H*_{D} Q$.
Recall, from Lemma~\ref{lem:fg_qc_int_parab_is_fg} and Corollary~\ref{cor:parab->qc}, that $P$ is finitely generated and relatively quasiconvex in $G$, hence it is separable in $G$ by QCERF-ness.
It follows that $D=P \cap Q$ is separable in $G$, which implies that it is separable in $A$ by Lemma~\ref{lem:induced_top}.
Observe that $H$ and $Q$ are both finitely generated, hence $A$ is finitely generated and relatively quasiconvex in $G$.
Therefore Lemma~\ref{lem:SA1->prof_top_on_qc_sbgps_is_induced} yields the last assertion of the proposition, that every subset of $A$ which is closed in $\pt(A)$ is also closed in $\pt(G)$. \end{proof}
By combining Proposition~\ref{prop:M-P_comb_in_prof_terms} with Proposition~\ref{prop:dc_in_am} we obtain the first double coset separability result when one of the factors is parabolic and the other one is finitely generated and relatively quasiconvex.
\begin{proposition} \label{prop:parab_times_fg_qc-sep}
Assume that $G$ is QCERF. Let $P$ be a maximal parabolic subgroup of $G$, let $R \leqslant G$ be a finitely generated relatively quasiconvex subgroup of $G$.
Suppose that $D \leqslant P$ is a subgroup satisfying the following condition:
\begin{equation}
\label{eq:fin_ind_if_D}
\text{ for each } U \leqslant_f D \text{ the double coset } U(P \cap R) \text{ is separable in } P.
\end{equation}
Then the double coset $DR$ is separable in $G$. \end{proposition}
\begin{proof}
According to Proposition~\ref{prop:M-P_comb_in_prof_terms}, there exists $H \leqslant_f P$ such that the subgroup $A=\langle H,R \rangle$ is naturally isomorphic to the amalgamated free product $H*_{E} R$, where $E=P \cap R=H \cap R$ is separable in $A$, and every subset of $A$ which is closed in $\pt(A)$ is also closed in $\pt(G)$.
Denote $U=D \cap H \leqslant_f D$.
By assumption \eqref{eq:fin_ind_if_D}, $UE$ is separable in $P$.
Since $P$ is finitely generated and relatively quasiconvex in $G$, we can conclude that $UE$ is separable in $G$ by Lemma~\ref{lem:SA1->prof_top_on_qc_sbgps_is_induced}.
As $UE \subseteq A \leqslant G$, $UE$ will also be closed in $\pt(A)$, so we can apply Proposition~\ref{prop:dc_in_am} to deduce that the double coset $UR$ is closed in $\pt(A)$.
It follows that this double coset is separable in $G$ and, since $U \leqslant_f D$, Lemma~\ref{lem:fi_dc} implies that $DR$ is separable in $G$, as desired. \end{proof}
We can now prove that assumption \ref{cond:3} of Theorem~\ref{thm:sep->qc_comb} holds as long as the relatively hyperbolic group $G$ is QCERF.
\begin{corollary} \label{cor:fi_in_parab_times_fgqc_is_sep}
Suppose that $G$ is QCERF, $P$ is a maximal parabolic subgroup of $G$ and $Q \leqslant G$ is a finitely generated relatively quasiconvex subgroup.
Then for all finite index subgroups $K \leqslant_f P$ and $T \leqslant_f Q$ the double coset $KT$ is separable in $G$. \end{corollary}
\begin{proof}
Note that $T$ is finitely generated and relatively quasiconvex in $G$ by Lemma~\ref{lem:props_of_qc_sbgps}.
Hence, to apply Proposition~\ref{prop:parab_times_fg_qc-sep} we simply need to check that for any $U \leqslant_f K$ the double coset $U(P \cap T)$ is separable in $P$.
The latter is true because $U(P \cap T)$ is a basic closed set in $\pt(P)$, being a finite union of right cosets of the finite index subgroup $U \leqslant_f P$.
Therefore $KT$ is separable in $G$ by Proposition~\ref{prop:parab_times_fg_qc-sep}. \end{proof}
The proof of assumption \ref{cond:2} of Theorem~\ref{thm:sep->qc_comb} is slightly more involved because the intersection of two finitely generated relatively quasiconvex subgroups need not be finitely generated.
\begin{proposition} \label{prop:parab_times_S_is_sep}
Let $P$ be a maximal parabolic subgroup of $G$, let $Q,R \leqslant G$ be finitely generated relatively quasiconvex subgroups, let $S=Q \cap R$ and $D=P \cap Q$.
Suppose that $G$ is QCERF and condition \eqref{eq:fin_ind_if_D} is satisfied.
Then the double coset $PS$ is separable in $G$. \end{proposition}
\begin{proof}
Proposition~\ref{prop:parab_times_fg_qc-sep} tells us that the double coset $DR$ is separable in $G$, and \(G\) is QCERF so \(Q\) is separable in \(G\).
Now, observe that $DR \cap Q=D(R \cap Q)=DS$, because $D \leqslant Q$.
It follows that the double coset \(DS\) is separable in \(G\).
According to Proposition~\ref{prop:M-P_comb_in_prof_terms}, there exists a finite index subgroup $H \leqslant_f P$ such that $H \cap Q=D$, $A=\langle H, Q\rangle \cong H*_D Q$, $D$ is separable in $A$ and every closed subset in $\pt(A)$ is closed in $\pt(G)$.
The double coset $DS$ is separable in $A$ by Lemma~\ref{lem:induced_top}, so $HS$ is closed in $\pt(A)$ by Proposition~\ref{prop:dc_in_am}.
It follows that $HS$ is closed in $\pt(G)$, which implies that the double coset $PS$ is separable in $G$ by Lemma~\ref{lem:fi_dc}.
Thus the proof is complete. \end{proof}
\section{Quasiconvexity of a virtual join from separability properties} \label{sec:sep->qc}
In this section we will prove Theorems~\ref{thm:sep->qc_intro} and \ref{thm:sep->qc_for_ab_parab} from the Introduction. The latter follows from the following result and the observation that a finite index subgroup of a relatively quasiconvex subgroup is itself relatively quasiconvex (see Lemma~\ref{lem:props_of_qc_sbgps}).
\begin{theorem} \label{thm:sep->qc_for_ab_parab-detailed}
Let \(G\) be a group generated by a finite set \(X\) and hyperbolic relative to a finite collection of abelian subgroups. Assume that $G$ is QCERF.
If \(Q, R \leqslant G\) are relatively quasiconvex subgroups and \(S = Q \cap R\) then for every \(A \geq 0\) there exists a finite index subgroup $L \leqslant_f G$, with $S \subseteq L$, such that properties \descref{P1}--\descref{P3} from Subsection~\ref{subsec:3.1} hold for arbitrary subgroups $Q' \leqslant Q \cap L$ and $R' \leqslant R \cap L$ satisfying $Q' \cap R'=S$.
\end{theorem}
\begin{proof}
By combining the assumptions with Lemma~\ref{lem:fg_qc_int_parab_is_fg}, we know that maximal parabolic subgroups of $G$ are finitely generated abelian groups.
Since such groups are slender, all relatively quasiconvex subgroups of $G$ are finitely generated (see \cite[Corollary~9.2]{HruskaRHCG}).
Moreover, finitely generated abelian groups are LERF and hence double coset separable (because the product of two subgroups is again a subgroup).
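To spell out the parenthetical claim: if $P$ is an abelian group and $U,V \leqslant P$ are subgroups, then
\[
(u_1v_1)(u_2v_2)^{-1}=(u_1u_2^{-1})(v_1v_2^{-1}) \in UV, \quad \text{for all } u_1,u_2 \in U \text{ and } v_1,v_2 \in V,
\]
so the double coset $UV=\langle U,V \rangle$ is itself a subgroup of $P$, and its separability in $P$ is a direct instance of LERF-ness.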
Therefore the double coset $PS$ is separable in $G$ for any maximal parabolic subgroup $P \leqslant G$ by Proposition~\ref{prop:parab_times_S_is_sep}.
In view of Proposition~\ref{prop:sep->C2-C3}, for any finite collection $\mathcal P$ of maximal parabolic subgroups of $G$ and any $B, C \ge 0$ there exists $L \leqslant_f G$, with $S \subseteq L$, such that any subgroups $Q' \leqslant Q \cap L$ and $R' \leqslant R \cap L$ satisfy conditions \descref{C1}--\descref{C3}, as long as $Q'\cap R'=S$.
Remark~\ref{rem:ab_periph->C4_and_C5} tells us that these subgroups automatically satisfy conditions \descref{C4} and \descref{C5}.
Thus we can obtain the desired statement by applying Theorem~\ref{thm:metric_qc}. \end{proof}
\begin{corollary} \label{cor:virt_ab_periph}
Suppose that \(G\) is a QCERF group generated by a finite subset \(X\) and hyperbolic relative to a finite family $\{H_\nu\mid \nu \in \Nu\}$ of virtually abelian subgroups.
Let \(Q, R \leqslant G\) be relatively quasiconvex subgroups and let \(S = Q \cap R\).
Then there exists $L \leqslant_f G$ such that if $Q' \leqslant Q \cap L$ and $R' \leqslant R \cap L$ are relatively quasiconvex subgroups of $G$ satisfying $Q' \cap R'=S \cap L$ then the subgroup $\langle Q',R'\rangle$ is also relatively quasiconvex in $G$. \end{corollary}
\begin{proof}
By the assumptions, for each $\nu \in \Nu$ there exists a finite index abelian subgroup $K_\nu\leqslant_f H_\nu$.
Since $G$ is QCERF, each $K_\nu$ is separable in $G$ (it is finitely generated by Lemma~\ref{lem:fg_qc_int_parab_is_fg} and it is relatively quasiconvex by Corollary~\ref{cor:parab->qc}).
Thus, in view of Lemma~\ref{lem:Lemma_0}, for every $\nu \in \Nu$ there exists $L_\nu \leqslant_f G$ such that $L_\nu \cap H_\nu=K_\nu$.
Since $|\Nu|<\infty$, the intersection $\bigcap_{\nu \in \Nu} L_\nu $ has finite index in $G$, hence it contains a finite index normal subgroup $G_1 \lhd_f G$.
Note that for any $g \in G$ and any $\nu \in \Nu$ we have
\begin{equation}
\label{eq:G_1}
G_1 \cap gH_\nu g^{-1}= g(G_1 \cap H_\nu)g^{-1} \subseteq g(L_\nu \cap H_\nu)g^{-1} = g K_\nu g^{-1},
\end{equation}
where the first equality follows from the normality of \(G_1\), the middle inclusion follows from the fact that \(G_1 \subseteq L_\nu\), and the last equality is due to the fact that \(L_\nu \cap H_\nu = K_\nu\).
By Lemma~\ref{lem:props_of_qc_sbgps}, $G_1$ is finitely generated and relatively quasiconvex in $G$, hence, by \cite[Theorem 9.1]{HruskaRHCG} it is hyperbolic relative to representatives of $G_1$-conjugacy classes of the intersections $G_1 \cap g H_\nu g^{-1}$, $g \in G$.
Thus, in view of \eqref{eq:G_1}, all peripheral subgroups in $G_1$ are abelian.
By \cite[Corollary 9.3]{HruskaRHCG}, a subgroup of $G_1$ is relatively quasiconvex in $G_1$ (with respect to the above family of peripheral subgroups) if and only if it is relatively quasiconvex in $G$.
Therefore $G_1$ is QCERF and $Q_1=Q \cap G_1 \leqslant_f Q$, $R_1=R \cap G_1 \leqslant_f R$ are finitely generated relatively quasiconvex subgroups of $G_1$ by Lemma~\ref{lem:props_of_qc_sbgps}.
After denoting $S_1=S \cap G_1=Q_1 \cap R_1$, we can apply Theorem~\ref{thm:sep->qc_for_ab_parab} to find a finite index subgroup $L \leqslant_f G_1$ such that $S_1 \subseteq L$ (thus, $S_1=S \cap L$) and the subgroup $\langle Q',R'\rangle$ is relatively quasiconvex in $G_1$, for arbitrary $Q' \leqslant Q_1 \cap L=Q \cap L$ and $R' \leqslant R_1 \cap L=R \cap L$ satisfying $Q' \cap R'=Q_1 \cap R_1=S_1$.
We can use \cite[Corollary 9.3]{HruskaRHCG} again to deduce that $\langle Q',R'\rangle$ is relatively quasiconvex in $G$. \end{proof}
The next statement implies Theorem~\ref{thm:sep->qc_intro} from the Introduction.
\begin{theorem} \label{thm:sep->qc_intro-detailed}
Let \(G\) be a group generated by a finite set \(X\) and hyperbolic relative to a finite collection of subgroups $\{H_\nu \mid \nu \in \Nu\}$.
Suppose that $G$ is QCERF and $H_\nu$ is double coset separable, for each $\nu \in \Nu$.
If \(Q, R \leqslant G\) are finitely generated relatively quasiconvex subgroups and \(S = Q \cap R\) then for every \(A \geq 0\) there exist finite index subgroups $Q'\leqslant_f Q$ and $R' \leqslant_f R$ which satisfy properties \descref{P1}--\descref{P3}.
More precisely, there exists $L \leqslant_f G$, with $S \subseteq L$, such that for any $L' \leqslant_f L$, satisfying $S \subseteq L'$, we can choose $Q'=Q\cap L' \leqslant _f Q$ and there exists $M \leqslant_f L'$, with $Q' \subseteq M$, such that for any $M' \leqslant_f M$, satisfying $Q' \subseteq M'$, we can choose $R'=R \cap M' \leqslant_f R$.
\end{theorem}
\begin{proof}
Let $\mathcal{P}$ be the finite collection of maximal parabolic subgroups of $G$ provided by Theorem~\ref{thm:metric_qc}.
We start by checking that all the assumptions of Theorem~\ref{thm:sep->qc_comb} are satisfied for every $P \in \mathcal{P}$.
Indeed, assumption~\ref{cond:1} holds because $G$ is QCERF and assumption \ref{cond:3} is true by Corollary~\ref{cor:fi_in_parab_times_fgqc_is_sep}.
Note that the subgroups $D=Q \cap P$ and $R \cap P$ are finitely generated by Lemma~\ref{lem:fg_qc_int_parab_is_fg}, hence condition \eqref{eq:fin_ind_if_D} follows from the double coset separability of $P$, thus \ref{cond:4} is satisfied.
Finally, assumption \ref{cond:2} holds by Proposition~\ref{prop:parab_times_S_is_sep}.
The statement now follows from a combination of Theorem~\ref{thm:metric_qc} with Theorem~\ref{thm:sep->qc_comb}. \end{proof}
Recall that \(Q\) and \(R\) are said to have almost compatible parabolics if for every maximal parabolic subgroup \(P \leqslant G\), either \(Q \cap P \preccurlyeq R \cap P\) or \(R \cap P \preccurlyeq Q \cap P\). We find that in the case when $Q$ and $R$ have almost compatible parabolics, it is actually not necessary to assume that the peripheral subgroups are double coset separable:
\begin{theorem} \label{thm:almost_compat->qc_comb}
Suppose that $G$ is a finitely generated QCERF relatively hyperbolic group, $Q,R \leqslant G$ are finitely generated relatively quasiconvex subgroups with almost compatible parabolics and \(S = Q \cap R\).
Then for every \(A \geq 0\) there exist finite index subgroups $Q'\leqslant_f Q$ and $R' \leqslant_f R$ which satisfy properties \descref{P1}--\descref{P3}.
More precisely, there exists $L \leqslant_f G$, with $S \subseteq L$, such that for any $L' \leqslant_f L$, satisfying $S \subseteq L'$, we can choose $Q'=Q\cap L' \leqslant _f Q$ and there exists $M \leqslant_f L'$, with $Q' \subseteq M$, such that for any $M' \leqslant_f M$, satisfying $Q' \subseteq M'$, we can choose $R'=R \cap M' \leqslant_f R$. \end{theorem}
\begin{proof}
As before, we will be verifying the assumptions of Theorem~\ref{thm:sep->qc_comb}.
Let $P$ be an arbitrary maximal parabolic subgroup of $G$.
Assumption~\ref{cond:1} follows from the QCERF-ness of $G$ and assumption \ref{cond:3} follows from Corollary~\ref{cor:fi_in_parab_times_fgqc_is_sep}.
Let $D=Q \cap P$ and $U \leqslant_f D$. Since $Q$ and $R$ have almost compatible parabolics and $Q \cap P \preccurlyeq U$, we know that either $U \preccurlyeq R \cap P$ or $R \cap P \preccurlyeq U$.
Note that both $U$ and $R \cap P$ are finitely generated by Lemma~\ref{lem:fg_qc_int_parab_is_fg} and relatively quasiconvex by Corollary~\ref{cor:parab->qc}, so they are separable because $G$ is QCERF.
Lemma~\ref{lem:V_sep_and_U_leq_V->UV-sep} now implies that the double coset $U(R \cap P)$ is separable in $G$, thus condition \eqref{eq:fin_ind_if_D} is satisfied by Lemma~\ref{lem:induced_top}.
This shows that assumption \ref{cond:4} of Theorem~\ref{thm:sep->qc_comb} is satisfied; furthermore, assumption \ref{cond:2} holds by Proposition~\ref{prop:parab_times_S_is_sep}.
We can now deduce the theorem by combining Theorem~\ref{thm:metric_qc} with Theorem~\ref{thm:sep->qc_comb}. \end{proof}
\section{Separability of double cosets in QCERF relatively hyperbolic groups} \label{sec:double_coset_sep} In this section we prove Corollary~\ref{cor:double_cosets_sep} from the Introduction.
\begin{proof}[Proof of Corollary~\ref{cor:double_cosets_sep}]
Let $X$ be a finite generating set of $G$. Consider any $g \in G \setminus QR$, and set $A=|g|_X+1$.
By Theorem \ref{thm:sep->qc_intro-detailed} there are subgroups \(Q' \leqslant_f Q\), \(R' \leqslant_f R\) satisfying properties \descref{P1} and \descref{P3}.
The latter property, combined with the definition of $A$, implies that \(g \notin Q \langle Q', R' \rangle R\).
On the other hand, property \descref{P1} tells us that $H=\langle Q',R' \rangle$ is relatively quasiconvex in $G$. Clearly it is also finitely generated, hence it must be separable in $G$ by QCERF-ness. Observe that since \(Q'\) and \(R'\) are finite index subgroups in \(Q\) and \(R\) respectively,
\[
Q H R = \bigcup_{i=1}^n \bigcup_{j=1}^m a_i H b_j,
\]
where \(a_1, \dots, a_n\) are left coset representatives of \(Q'\) in \(Q\), and \(b_1, \dots, b_m\) are right coset representatives of \(R'\) in \(R\).
Recalling Remark~\ref{rem:sep_props}, we see that the subset $QHR$ is separable in $G$, thus it is a closed set containing \(QR\) but not containing \(g\).
Since we found such a set for an arbitrary $g \in G \setminus QR$, we can conclude that $QR$ is closed in $\pt(G)$, as required. \end{proof}
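The structure of this argument can be summarised by writing $QR$ as an intersection of closed sets: choosing, for every $g \in G \setminus QR$, finite index subgroups $Q'_g \leqslant_f Q$ and $R'_g \leqslant_f R$ as in the proof above, we obtain
\[
QR \;=\; \bigcap_{g \in G \setminus QR} Q\,\langle Q'_g, R'_g\rangle\, R,
\]
where each set on the right-hand side is closed in $\pt(G)$ and contains $QR$, while excluding the corresponding element $g$.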
Corollary~\ref{cor:almost_comp->sep_dc} from the Introduction can be proved in the same way as Corollary~\ref{cor:double_cosets_sep}, except that one needs to use Theorem~\ref{thm:almost_compat->qc_comb} instead of Theorem~\ref{thm:sep->qc_intro-detailed}.
\part{Separability of products of subgroups} \label{part:multicosets} This part of the paper is dedicated to proving Theorem~\ref{thm:RZs} from the Introduction. In order to do this we must generalise the discussion of path representatives in Sections~\ref{sec:path_reps}-\ref{sec:multitracking}, adapting the arguments there to deal with additional technicalities. Let us give a summary of the argument.
Let \(G\) be a QCERF finitely generated relatively hyperbolic group with a finite collection of peripheral subgroups \(\{H_\nu \mid \nu \in \Nu\}\). Suppose that, for each \(\nu \in \Nu\), the subgroup \(H_\nu\) has property \(RZ_s\). Let \(F_1, \dots, F_s \leqslant G\) be finitely generated relatively quasiconvex subgroups. In order to show that the product \(F_1 \dots F_s\) is separable, we proceed by induction on \(s\). The case \(s = 1\) is the QCERF condition and the case \(s = 2\) is Corollary~\ref{cor:double_cosets_sep}, so we may assume \(s > 2\). For ease of reading we now relabel the subgroups \(F_1 = Q, F_2 = R, F_3 = T_1, \dots, F_s = T_m\), where \(m = s-2 > 0\).
We approximate the product \(Q R T_1 \dots T_m\) with sets of the form \(Q \langle Q', R' \rangle R T_1 \dots T_m\), where \(Q' \leqslant_f Q\) and \(R' \leqslant_f R\) are finite index subgroups of \(Q\) and \(R\) respectively. Observe that we can write these sets as finite unions \begin{equation}\label{eq:approx_subsets}
Q \langle Q', R' \rangle R T_1 \dots T_m = \bigcup_{i,j} a_i \langle Q', R' \rangle b_j T_1 \dots T_m, \end{equation} where the elements \(a_i\) and \(b_j\) are coset representatives of \(Q'\) and \(R'\) in $Q$ and $R$ respectively. Note that the products on the right-hand side of \eqref{eq:approx_subsets} now involve only \(s-1\) subgroups. By Theorem~\ref{thm:sep->qc_intro}, the subgroups \(Q'\) and \(R'\) can be chosen so that \(\langle Q', R' \rangle\) is relatively quasiconvex, hence we can apply the induction hypothesis to show that such products are separable in $G$.
It then remains to prove that the product \(Q R T_1 \dots T_m\) is, in fact, an intersection of subsets of the form \(Q \langle Q', R' \rangle R T_1 \dots T_m\) as above. To this end, we study path representatives \(q p_1 \dots p_n r t_1 \dots t_m\) of elements of \(Q \langle Q', R' \rangle R T_1 \dots T_m\) in a similar manner to Part~\ref{part:metric_qc_double_cosets}. The main additional difficulty comes from controlling instances of multiple backtracking that involve segments in the \(t_1 \dots t_m\) part of the path. We introduce new metric conditions \descref{C2-m} and \descref{C5-m} to deal with these technicalities.
\section{Auxiliary definitions} \label{sec:multicoset_defs}
\begin{convention} \label{conv:main_multicoset}
We write \(G\) for a group generated by a finite set \(X\) and hyperbolic relative to a family of subgroups \(\{H_\nu \mid \nu \in \Nu\}\), $|\Nu|<\infty$.
Let $\mathcal{H}=\bigsqcup_{\nu \in \Nu} (H_\nu\setminus\{1\})$ and choose $\delta \in \NN$ so that the Cayley graph $\ga$ is $\delta$-hyperbolic (see Lemma~\ref{lem:Cayley_graph-hyperbolic}).
We will assume that \(Q, R, T_1, \dots , T_m \leqslant G\) are fixed relatively quasiconvex subgroups of \(G\), with quasiconvexity constant \(\varepsilon \ge 0\), where $m \in \NN_0$. Denote
\(S=Q \cap R\). \end{convention}
Throughout this section we use \(Q'\) and \(R'\) to denote subgroups of $Q$ and $R$ respectively. We will also assume that \(Q' \cap R'= Q \cap R = S\) (i.e., \(Q'\) and \(R'\) satisfy \descref{C1}).
\subsection{New metric conditions} Suppose $B, C \ge 0$ are some constants, $\mathcal{P}$ is a finite collection of maximal parabolic subgroups of $G$, and \(\mathcal{U}\) is a finite family of finitely generated relatively quasiconvex subgroups of \(G\). We will be interested in the following generalisations of conditions \descref{C2} and \descref{C5} to the multiple coset setting:
\begin{itemize}
\descitem{C2-m}
\(\minx \Bigl(R \langle Q', R' \rangle R T_1 \dots T_j \setminus R T_1 \dots T_j \Bigr) \geq B\), for each \(j=0,\dots, m\);
\descitem{C5-m}
\(\minx \Bigl(q \langle Q'_P, R'_P \rangle R_P (U_1)_P \dots (U_j)_P \setminus qQ'_P R_P (U_1)_P \dots (U_j)_P \Bigr) \geq C \), for each \(P \in \mathcal{P}\), all \(q \in Q_P\), any $j \in \{0, \dots, m\}$ and arbitrary \(U_1, \dots, U_j \in \mathcal{U}\), where $(U_i)_P=U_i \cap P \leqslant P$. \end{itemize}
\begin{remark} Let us make the following observations. \label{rem:C5m->C5}
\begin{itemize}
\item When $j=0$, the inequality from condition \descref{C2-m} reduces to $\minx(R \langle Q',R' \rangle R \setminus R) \ge B$, which is a part of \descref{C2}; on the other hand, the inequality from condition \descref{C5-m} simply becomes \descref{C5}.
In particular, for each $m \ge 0$, \descref{C5-m} implies \descref{C5}.
\item In our usage of \descref{C5-m}, the set $\mathcal U$ will consist of finitely many conjugates of $T_1,\dots,T_m$; in fact, $U_i=T_i^{a_i}$, for some $a_i \in G$, $i=1,\dots,m$.
\end{itemize} \end{remark}
\begin{remark}
Similarly to conditions \descref{C1}--\descref{C5}, the above conditions are best understood with a view towards the profinite topology.
\begin{itemize}
\item
To prove separability of products of relatively quasiconvex subgroups we argue by induction on the number of factors.
That is, we assume that the product of \(m+1\) relatively quasiconvex subgroups is separable and then deduce the separability of the product of \(m+2\) relatively quasiconvex subgroups.
The existence of finite index subgroups \(Q' \leqslant_f Q\) and \(R' \leqslant_f R\) realising condition \descref{C2-m} will be deduced from this inductive assumption.
\item
The existence of finite index subgroups \(Q' \leqslant_f Q\) and \(R' \leqslant_f R\) realising condition \descref{C5-m}, given a finite family \(\mathcal{U}\), will be deduced from the assumption that the peripheral subgroups \(\{H_\nu \, | \, \nu \in \Nu\}\) of \(G\) each satisfy the property \(\mathrm{RZ}_{m+2}\).
\end{itemize} \end{remark}
\subsection{Path representatives for products of subgroups} In this subsection we define path representatives for elements of $Q \langle Q', R' \rangle RT_1\dots T_m$ similarly to the path representatives for elements of $Q \langle Q', R' \rangle R$ from Definition~\ref{def:double_coset_path_reps} and discuss their properties.
\begin{definition}[Path representative, III] \label{def:multicoset_path_reps}
Let \(g\) be an element of \(Q \langle Q', R' \rangle R T_1 \dots T_m\).
Suppose that \(p = q p_1 \dots p_n r t_1 \dots t_m\) is a broken line in \(\Gamma(G,X\cup\mathcal{H})\) satisfying the following properties:
\begin{itemize}
\item \(\elem{p} = g\);
\item \(\elem{q} \in Q\) and \(\elem{r} \in R\);
\item \(\elem{p_i} \in Q' \cup R'\) for each \(i \in \{1, \dots, n\}\);
\item \(\elem{t_i} \in T_i\) for each \(i \in \{1, \dots, m\}\).
\end{itemize}
Then we say that \(p\) is a \emph{path representative} of \(g\) in the product \(Q \langle Q', R' \rangle R T_1 \dots T_m\). \end{definition}
The type of a path representative is defined as before (cf. Definitions~\ref{def:type_of_path_rep} and \ref{def:type_of_double_coset_path_rep}).
\begin{definition}[Type and width of a path representative, III] \label{def:type_of_multicoset_path_rep}
Let \(g \in Q \langle Q', R' \rangle R T_1 \dots T_m\) and let \(p = q p_1 \dots p_n r t_1 \dots t_m\) be a path representative of $g$ in the sense of Definition~\ref{def:multicoset_path_reps}.
Denote by \(Y\) the set of all \(\mathcal{H}\)-components of the segments of \(p\).
We define the \emph{width} of $p$ as the integer $n$ and the
\emph{type} of \(p\) as the triple
\[\tau(p) = \Big(n, \ell(p),\sum_{y \in Y} |y|_X \Big) \in {\NN_0}^3.\] \end{definition}
The following observation will be useful. \begin{remark} \label{rem:path_rep_from_product}
Suppose $g \in Q \langle Q', R' \rangle R T_1 \dots T_m$ can be written as a product
\[
g=x y_1 \dots y_n z u_1 \dots u_m,
\]
where $x \in Q$, $y_1,\dots y_n \in Q' \cup R'$, $z \in R$ and $u_i \in T_i$, for each $i=1,\dots,m$. Then $g$ has a path representative of width $n$. \end{remark}
Similarly to path representatives of elements of \(\langle Q', R' \rangle\) (in the sense defined in Section~\ref{sec:path_reps}), we will be interested in path representatives whose type is minimal (as an element of \({\NN_0}^3\) under the lexicographic ordering). Given an element \(g \in Q \langle Q', R' \rangle R T_1 \dots T_m\), such a path representative is always guaranteed to exist. Let us make the following observation (cf. Remark~\ref{rem:props_of_double_coset_path_reps}).
\begin{remark} \label{rem:multicoset_path_reps}
Suppose that \(p = q p_1 \dots p_n r t_1 \dots t_m\) is a minimal type path representative of an element \(g \in Q \langle Q', R' \rangle R T_1 \dots T_m\) such that $g \notin QR T_1 \dots T_m$.
Then $n>0$, $\elem{p_1} \in R' \setminus S$, $\elem{p_n} \in Q' \setminus S$ and the labels of $p_1,\dots,p_{n}$ alternate between representing elements of $R' \setminus S$ and $Q' \setminus S$.
In particular, the integer $n$ must be even. \end{remark}
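The alternation here can be traced back to the minimality of the type: if, say, $\elem{p_i} \in Q'$ and $\elem{p_{i+1}} \in Q'$ for some $i$, then replacing the two segments $p_i p_{i+1}$ by a single geodesic $\tilde p$ with
\[
\elem{\tilde p}=\elem{p_i}\,\elem{p_{i+1}} \in Q'
\]
would produce a path representative of $g$ of width $n-1$, contradicting the minimality of $\tau(p)$. Similarly, if $\elem{p_1} \in Q'$ (respectively, $\elem{p_n} \in R'$) then $p_1$ could be absorbed into $q$ (respectively, $p_n$ into $r$), again reducing the width.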
Note that in Definition~\ref{def:multicoset_path_reps} the geodesic paths $q$, $r$ and $t_1,\dots,t_m$ are always counted as segments of the path $p$, even if they end up being trivial paths. For example, a minimal type path representative of an element $g \in R'Q'T_1\dots T_m\setminus QRT_1 \dots T_m$ will be a broken line $p=qp_1p_2rt_1 \dots t_m$ with $m+4$ segments, where $q$ and $r$ are trivial paths.
The proofs of the main results from Sections \ref{sec:path_reps} and \ref{sec:adj_backtracking} can be easily adapted to apply to minimal type path representatives of elements $g \in Q \langle Q', R' \rangle R T_1 \dots T_m\setminus QRT_1 \dots T_m$ (in the sense of Definitions~\ref{def:multicoset_path_reps} and \ref{def:type_of_multicoset_path_rep}), with only superficial differences, so the proofs of the following generalisations of Lemmas~\ref{lem:bddinnprod}, \ref{lem:shortspikes} and \ref{lem:longadjbacktracking}, respectively, will be omitted.
\begin{lemma} \label{lem:multicoset_bdd_inn_prod}
There is a constant \(C_0 \geq 0\) such that the following holds.
Assume that $Q' \leqslant Q$ and $R' \leqslant R$ are subgroups satisfying condition \descref{C1}.
Consider any element \(g \in Q \langle Q', R' \rangle R T_1 \dots T_m\) with \(g \notin QRT_1\dots T_m\). Let \(p = q p_1 \dots p_n r t_1 \dots t_m\) be a path representative of $g$ of minimal type,
with nodes $f_0,\dots,f_{n+m+2}$ (i.e., \(f_0 = q_-\), \(f_{i} = (p_i)_-\), for each \(i \in \{1, \dots, n\}\), $f_{n+1}=r_-$, \(f_{n+1+j} = (t_j)_-\), for each \(j \in \{1, \dots, m\}\), and $f_{n+m+2}=(t_m)_+$).
Then \(\langle f_{i-1}, f_{i+1} \rangle_{f_i}^{rel} \leq C_0\), for all \(i \in \{1, \dots, n+m+1\}\). \end{lemma}
\begin{lemma} \label{lem:multicoset_bdd_cusps}
There is a constant \(C_1 \geq 0\) such that the following is true.
Let $Q' \leqslant Q$ and $R' \leqslant R$ be subgroups satisfying condition \descref{C1}.
Consider a minimal type path representative
\(p = q p_1 \dots p_n r t_1 \dots t_m\) for an element \(g \in Q \langle Q', R' \rangle R T_1 \dots T_m \setminus QRT_1\dots T_m\).
If $a$ and $b$ are adjacent segments of $p$, with $a_+=b_-$, and
\(h\) and \(k\) are connected \(\mathcal{H}\)-components of $a$ and $b$ respectively, then \(d_X(h_+,a_+) \leq C_1\) and \(d_X(a_+, k_-) \leq C_1\). \end{lemma}
\begin{lemma} \label{lem:multicoset_adj_backtracking}
For any \(\zeta \geq 0\) there is \(\Theta_0 = \Theta_0(\zeta) \in \NN\) such that the following is true.
Let $Q' \leqslant Q$ and $R' \leqslant R$ be subgroups satisfying condition \descref{C1}.
Consider a minimal type path representative
\(p = q p_1 \dots p_n r t_1 \dots t_m\) for an element \(g \in Q \langle Q', R' \rangle R T_1 \dots T_m \setminus QRT_1\dots T_m\).
Suppose that $a$ and $b$ are adjacent segments of $p$, with $a_+=b_-$, and \(h\) and \(k\) are connected \(\mathcal{H}\)-components of $a$ and $b$ respectively, such that
\[
\max\{ \abs{h}_X, \abs{k}_X \} \geq \Theta_0.
\]
Then \(d_X(h_-, k_+) \geq \zeta\). \end{lemma}
\section{Multiple backtracking in product path representatives: two special cases} \label{sec:mcs_multitracking1} Just like in Theorem~\ref{thm:metric_qc}, the main difficulty in proving Theorem~\ref{thm:RZs} consists in dealing with multiple backtracking in path representatives. In this section we will consider two of the possible cases. We will be working under Convention~\ref{conv:main_multicoset}.
Throughout the rest of the paper we fix the following notation.
\begin{notation} \label{not:C_1-P_1}
Let $C_1$ be the larger of the two constants provided by Lemmas~\ref{lem:shortspikes} and \ref{lem:multicoset_bdd_cusps}, and denote by $\mathcal{P}_1$ the finite collection of maximal parabolic subgroups of $G$ given by
\[
\mathcal{P}_1 = \{{H_\nu}^ b \, | \, \nu \in \Nu, \abs{b}_X \leq C_1\}.
\] \end{notation}
The following lemma is roughly analogous to Lemma~\ref{lem:end_sides_constr}.
\begin{lemma} \label{lem:final_path_constr}
For any \(L \geq 0\) and any relatively quasiconvex subgroup \(T \leqslant G\) there is a constant \(L' = L'(L,T) \geq 0\) such that the following is true.
Let \(P={H_\nu}^ b \in \mathcal{P}_1\), for some \(\nu \in \Nu\) and \(b \in G\), with \(\abs{b}_X \leq C_1\), and let \(t\) be a geodesic path in \(\Gamma(G,X\cup\mathcal{H})\), with \(\elem{t} \in T\).
Suppose that \(v \in Pb = bH_\nu\) is a vertex of \(t\) and \(u \in P\) is an element satisfying \(d_X(u,t_-) \leq L\). Denote \(a = u^{-1} t_- \in G\).
Then there is a geodesic path \(t'\) in $\ga$ such that
\begin{itemize}
\item \(t'_- = u\) and \(d_X(t'_+,v) \leq L'\);
\item \(\elem{t'} \in T^{a} \cap P\);
\item \({(t'_+)}^{-1}t_+ \in a T\).
\end{itemize} \end{lemma}
\begin{proof}
Let $K=\max\{C_1,\sigma+L\}$, where $\sigma \ge 0$ is a quasiconvexity constant for $T$. Denote
\begin{equation}
\label{eq:def_of_new_D}
L'= \max\{K'(P,T^a,K) \mid P \in \mathcal{P}_1,~a \in G,~\abs{a}_X \le L\},
\end{equation}
where $K'(P,T^a,K)$ is obtained from Lemma~\ref{lem:nbhdintersection}.
The hypotheses that \(v \in Pb\) and \(\abs{b}_X \leq C_1\) imply that \(d_X(v,P) \leq \abs{b}_X \leq C_1\).
As \(u \in P\), we have \(P = uP\) and so
\begin{equation}
\label{eq:v_dist_from_uP}
d_X(v,uP) \leq C_1.
\end{equation}
Set $x= t_-=ua$. Since $\elem{t} \in T$, we have $d_X(v, xT) \le \sigma$, as $T$ is $\sigma$-quasiconvex. Hence
\[
d_X(v,u T^a)=d_X(v, xT a^{-1}) \le d_X(v,xT)+\abs{a}_X \le \sigma+L.
\]
Combining the latter inequality with \eqref{eq:v_dist_from_uP} allows us to apply Lemma~\ref{lem:nbhdintersection} to find an element $z \in u(T^a \cap P)$ such that $d_X(v,z) \le L'$, where $L' \ge 0$ is the constant from \eqref{eq:def_of_new_D}.
Now take \(t'\) to be any geodesic in $\ga$ with \(t'_- = u\) and \(t'_+ = z\).
It is straightforward to verify that \(t'\) satisfies the first two of the required properties. For the last property, observe that
\[{(t'_+)}^{-1}t_+ =\left({(t'_+)}^{-1} u \right) \left(u^{-1}t_- \right) \left(t_-^{-1} t_+\right) =\elem{t'}^{-1} a \elem{t}\in T^a a T=aT.\] \end{proof}
The following notation will be fixed for the remainder of the paper.
\begin{notation} \label{not:Li_def}
Let \(D\) be the constant from Lemma~\ref{lem:end_sides_constr}, corresponding to $C_1$ and $\mathcal{P}_1$ (from Notation~\ref{not:C_1-P_1}) and subgroups $Q,R$.
We define constants $L_1, \dots, L_{m+1}$ as follows:
\[
L_1 = D + C_1 \quad \textrm{and} \quad L_{i+1} = L'(L_{i},T_i)+C_1, \text{ for each }i = 1, \dots, m,
\]
where \(L'\) is obtained from Lemma~\ref{lem:final_path_constr}.
We also define the family of subgroups
\[
\mathcal{U}_1 = \Big\{ T_i^g \, \Big| \, i \in \{1, \dots, m\},\ g \in G,\ \abs{g}_X \leq L_i \Big\},
\]
consisting of finitely many conjugates of the subgroups \(T_1, \dots, T_m\).
Note that, by Lemma~\ref{lem:props_of_qc_sbgps}, each \(U \in \mathcal{U}_1\) is a relatively quasiconvex subgroup of \(G\). \end{notation}
The next proposition describes how we deal with consecutive backtracking that involves the $t_1 \dots t_m$-part of a path representative of an element $g \in Q \langle Q', R' \rangle R T_1 \dots T_m \setminus Q R T_1 \dots T_m$; it complements Proposition~\ref{prop:multitracking_path} which takes care of backtracking within the $qp_1\dots p_n r$-part.
\begin{proposition} \label{prop:backtracking_in_t-tail}
Suppose that \(p = q p_1 \dots p_n r t_1 \dots t_m\) is a path representative of minimal type for an element \(g \in Q \langle Q', R' \rangle R T_1 \dots T_m\setminus Q R T_1 \dots T_m\), where $Q' \leqslant Q$ and $R' \leqslant R$ are some subgroups satisfying \descref{C1}.
Let \(P={H_\nu}^ b \in \mathcal{P}_1\), for some \(\nu \in \Nu\) and \(b \in G\), with \(\abs{b}_X \leq C_1\).
Suppose that $h_1,\dots,h_j$ are connected $H_\nu$-components of the segments $t_1,\dots,t_j$, respectively, with $j \in \{1,\dots,m\}$, such that $(h_1)_- \in Pb=bH_\nu$. If $u_1 \in P$ is an element satisfying $d_X(u_1,(t_1)_-) \le L_1$ then there exist elements $a_1,\dots,a_j \in G$ and a broken line $t_1' \dots t_j'$ in $\ga$ such that the following conditions hold:
\begin{enumerate}
\item[(i)] $(t_1')_-=u_1$ and $d_X((t_j')_+,(h_j)_+) \le L_{j+1}$;
\item[(ii)] $a_{i+1} \in a_i T_i$, for $i=1,\dots,j-1$;
\item[(iii)] $a_i=(t_i')_-^{-1} (t_i)_-$ and $\abs{a_i}_X \le L_i$, for each $i=1,\dots,j$;
\item[(iv)] $\elem{t'_i} \in T_i^{a_i} \cap P$, for all $i=1,\dots,j$.
\end{enumerate} \end{proposition}
\begin{proof}
We start by setting $a_1=u_1^{-1} (t_1)_-$, so that $\abs{a_1}_X = d_X(u_1,(t_1)_-) \le L_1$. Note that $(h_1)_+=(h_1)_- \elem{h_1} \in bH_\nu=Pb$.
Therefore we can apply Lemma~\ref{lem:final_path_constr} to find a geodesic path $t_1'$ in $\ga$ such that $(t_1')_-=u_1$, $d_X((t_1')_+,(h_1)_+) \le L'(L_1,T_1)$, $\elem{t_1'} \in T_1^{a_1} \cap P$ and
\begin{equation}
\label{eq:t_1'-t_1}
(t_1')_+^{-1} (t_1)_+ \in a_1T_1.
\end{equation}
It follows that properties (ii)--(iv) are satisfied for $i=1$, while property (i) holds because $L_2 \ge L'(L_1,T_1)$ by definition.
If $j=1$ then property (ii) is vacuously true.
We can now suppose that $j >1$.
Then $h_1$ is connected to the component $h_2$ of $t_2$, whence $d_X((h_1)_+,(t_1)_+) \le C_1$ by Lemma~\ref{lem:multicoset_bdd_cusps}. Set $u_2=(t_1')_+$ and $a_2=u_2^{-1} (t_1)_+$.
Note that $a_2 \in a_1T_1$ by \eqref{eq:t_1'-t_1} and
\[
\abs{a_2}_X=d_X((t_1')_+,(t_1)_+) \le d_X((t_1')_+,(h_1)_+)+d_X((h_1)_+,(t_1)_+) \le L'(L_1,T_1)+C_1=L_2.
\]
Since $(t_2)_-=(t_1)_+$, we see that $a_2=u_2^{-1} (t_2)_-$ and $d_X(u_2,(t_2)_-)=\abs{a_2}_X \le L_2$.
Now, observe that $u_2 =u_1 \elem{t_1'} \in P $ and $(h_2)_+ \in bH_\nu=Pb$, as $h_2$ is connected to $h_1$.
This allows us to use Lemma~\ref{lem:final_path_constr} to find a geodesic path $t_2'$ in $\ga$ such that $(t_2')_-=u_2=(t_1')_+$, $d_X((t_2')_+,(h_2)_+) \le L'(L_2,T_2)$, $\elem{t_2'} \in T_2^{a_2} \cap P$ and $(t_2')_+^{-1}(t_2)_+ \in a_2T_2$ (see Figure~\ref{fig:new_tail}).
\begin{figure}
\caption{The new path \(t_1'\dots t_j'\) constructed in Proposition~\ref{prop:backtracking_in_t-tail}. }
\label{fig:new_tail}
\end{figure}
If $j=2$ then we are done, otherwise we construct the remaining elements $a_3, \dots,a_j$ and the paths $t_3',\dots,t_j'$ inductively, similarly to the construction of $a_2$ and $t_2'$ above. \end{proof}
The next two propositions prove that, under certain conditions, instances of multiple backtracking are long. Essentially, they generalise Proposition~\ref{prop:long_multitracking}. The first of these shows how we can use condition \descref{C5-m} to deal with particular instances of multiple backtracking.
\begin{proposition} \label{prop:C5n_multiracking}
For each \(\zeta \geq 0\) there is a constant \(C_2 = C_2(\zeta) \geq 0\) such that if $Q' \leqslant Q$ and $R' \leqslant R$ satisfy conditions \descref{C1}, \descref{C3} and \descref{C5-m} with a constant \(C \geq C_2\) and finite families $\mathcal{P}$ and $\mathcal{U}$ satisfying \(\mathcal{P}_1 \subseteq \mathcal{P}\) and \(\mathcal{U}_1 \subseteq \mathcal{U}\), then the following is true.
Let \(p = q p_1 \dots p_n r t_1 \dots t_m\) be a path representative of minimal type for some element \(g \in Q \langle Q', R' \rangle R T_1 \dots T_m\), with \(g \notin Q R T_1 \dots T_m\).
Suppose that \(p\) has multiple backtracking along \(H_\nu\)-components \(h_1, \dots, h_k\) of its segments, for some $\nu \in \Nu$, such that
\begin{itemize}
\item \(h_1\) is an \(H_\nu\)-component of either \(q\) or \(p_i\), for some \(i \in \{ 1, \dots, n-1\}\), with \(\elem{p_i} \in Q'\);
\item \(h_k\) is an \(H_\nu\)-component of a segment \(t_j\), for some $j \in \{1,\dots,m\}$.
\end{itemize}
Then \(d_X((h_1)_-,(h_k)_+) \geq \zeta\). \end{proposition}
\begin{proof}
Take \[C_2 = \max\big(\{ 2C_1\} \cup \{ D + \zeta + L_{j} \mid j = 1, \dots, m+1 \}\big) + 1,\] where \(D\) and \(L_{j}\) are defined in Notation~\ref{not:Li_def}, and suppose that \(C \geq C_2\).
The proof employs the same strategy as Proposition~\ref{prop:long_multitracking}: we first construct a path whose endpoints are close to \((h_1)_-\) and \((h_k)_+\) and whose label represents an element of a parabolic subgroup.
We will then obtain a contradiction with the minimality of the type of \(p\), using condition \descref{C5-m}.
We will focus on the case when \(h_1\) is an \(H_\nu\)-component of \(p_i\), for some \(i \in \{1, \dots, n-1\}\) with \(\elem{p_i} \in Q'\), with the case when \(h_1\) is an \(H_\nu\)-component of \(q\) being similar. Note that since \(g \notin Q R T_1 \dots T_m\), it must be that \(n \geq 2\) by Remark~\ref{rem:multicoset_path_reps}.
After translating by \((p_i)_+^{-1}\), we may assume that \((p_i)_+ = 1\).
We write \(b = (h_1)_+\) and note that, according to Lemma~\ref{lem:multicoset_bdd_cusps},
\begin{equation}
\label{eq:mc_bdd_b}
\abs{b}_X = d_X((h_1)_+,(p_i)_+) \leq C_1.
\end{equation}
Let \(P = bH_\nu b^{-1}\in \mathcal{P}_1 \subseteq \mathcal{P}\).
Since \(h_1, \dots, h_k\) are pairwise connected, the vertices \((h_l)_+\) lie in the same left coset \(bH_\nu\), for all \(l = 1, \dots, k\), thus
\begin{equation}
\label{eq:hi+_in_Pb}
(h_l)_+ \in Pb, \text{ for all } l = 1, \dots, k.
\end{equation}
We construct a new broken line \(p' = p_i' \dots p'_n r' t'_1 \dots t'_j\) in two steps. It will be used in conjunction with condition \descref{C5-m} to obtain a path representative of \(g\) with lesser type than \(p\).
\noindent
\underline{\emph{Step 1:}}
we start by constructing geodesic paths \(p'_i,p'_{i+1}, \dots, p'_n\) and $r'$ by using condition \descref{C3} and applying Lemmas~\ref{lem:end_sides_constr} and \ref{lem:(c3)->vertex_constr}, in exactly the same way as in the proof of Proposition~\ref{prop:multitracking_path}. The newly constructed paths will have the following properties:
\begin{itemize}
\item \(\elem{p'_i} \in Q_P\), \(\elem{p'_l} \in Q'_P \cup R'_P\), for each \(l = i+1, \dots, n\), and \(\elem{r'} \in R_P\);
\item \(d_X((p'_i)_-,(h_1)_-) \leq D\) and \((p'_i)_+ = (p_i)_+=1\);
\item \((p'_l)_+ = (p'_{l+1})_-\), for \(l = i, \dots, n-1\);
\item \(r'_- = (p'_n)_+\) and \(d_X(r'_+, (h_{k-j})_+) \leq D\);
\item \((p'_l)_+^{-1}(p_l)_+ \in S\), for \(l = i+1, \dots, n\).
\end{itemize}
\noindent
\underline{\emph{Step 2:}}
we now construct geodesic paths \(t'_1, \dots, t'_{j}\) as follows. Set $u_1=(r')_+$ and observe that since $(p_{i+1}')_-=(p_i')_+=1$, we have
\[
u_1=\elem{p_{i+1}'} \dots \elem{p_{n}'} \elem{r'} \in P.
\]
By Lemma~\ref{lem:multicoset_bdd_cusps}, we have \(d_X((h_{k-j})_+,(t_1)_-) =d_X((h_{k-j})_+,r_+) \leq C_1\). Moreover, by Step 1 above, \(d_X(u_1, (h_{k-j})_+) \leq D\). Therefore
\begin{equation*}
\label{eq:dist_from_u_to_r+}
d_X(u_1,(t_1)_-) \leq C_1 + D = L_1.
\end{equation*}
Together with \eqref{eq:hi+_in_Pb} this allows us to apply Proposition~\ref{prop:backtracking_in_t-tail} to find elements $a_1,\dots,a_j \in G$ and a broken line $t_1' t_2' \dots t_j'$ in $\ga$ such that
\begin{itemize}
\item $(t_1')_-=u_1$ and $d_X((t_j')_+,(h_k)_+) \le L_{j+1}$;
\item $a_{l+1} \in a_l T_l$, for $l=1,\dots,j-1$;
\item $a_l=(t_l')_-^{-1} (t_l)_-$ and $\abs{a_l}_X \le L_l$, for each $l=1,\dots,j$;
\item $\elem{t'_l} \in T_l^{a_l} \cap P$, for all $l=1,\dots,j$.
\end{itemize}
Observe that
\begin{equation}
\label{eq:a1_in_R}
\begin{aligned}
a_1 &= (t_1')_-^{-1} (t_1)_-=u_1^{-1}r_+= ({r'_+}^{-1} r'_-) ({r'_-}^{-1} r_-) (r_-^{-1} r_+) \\
& = \elem{r'}^{-1} (p'_n)_+^{-1}(p_n)_+ \elem{r} \in R_P S R \subseteq R.
\end{aligned}
\end{equation}
We now define a new broken line $p'$ in $\ga$ by \[p'=p_i' \dots p_n' r' t_1' \dots t_j'.\] Note that \(d_X(p'_-,(h_1)_-) \leq D\), \(d_X(p'_+,(h_k)_+) \leq L_{j+1}\) and \(\elem{p'} \in \elem{p'_i} \langle Q'_P, R'_P \rangle R_P (T_1^{a_1})_P \dots (T_j^{a_j})_P\), where $\elem{p'_i} \in Q_P$. Moreover, $T_l^{a_l} \in \mathcal{U}_1 \subseteq \mathcal{U}$, for each $l=1,\dots,j$.
Now, suppose, for a contradiction, that \(d_X((h_1)_-,(h_k)_+) < \zeta\).
Then, by the triangle inequality,
\[
\abs{p'}_X \leq D + \zeta + L_{j+1} < C_2.
\]
Thus, as $C \ge C_2$, we can apply \descref{C5-m} to deduce that \(\elem{p'} \in \elem{p_i'}Q'_P R_P(T_1^{a_1})_P \dots (T_j^{a_j})_P\).
Therefore, there exist elements $z \in \elem{p_i'}Q'_P$, \(x \in R\) and \(y_l \in T_l\), $l=1,\dots,j$, such that \(\elem{p'} = z x y_1^{a_1} \dots y_j^{a_j}\).
By construction, for each \(l = 1, \dots, j-1\) there is \(b_l \in T_l\) such that \(a_{l+1} = a_l b_l\), and so
$a_l^{-1} a_{l+1} = b_l \in T_l$.
Recalling that \((p_i')_+ = (p_i)_+ = 1\), the above yields
\begin{equation}
\label{eq:end_of_p'}
\elem{p'} = z x y_1^{a_1} \dots y_j^{a_j} = z x a_1 y_1 b_1 y_2 b_2 \dots b_{j-1} y_j a_j^{-1}.
\end{equation}
Let $\alpha$ and $\beta$ be geodesic segments in $\ga$ connecting $(p_i)_-$ with $(p_i')_-$ and $(t'_j)_+$ with $(t_j)_+$ respectively. Since $(p_i)_+=(p_i')_+$, we have
\begin{equation}
\label{eq:elem_of_alpha}
\elem{\alpha}= (p_i)_-^{-1} (p_i')_-=(p_i)_-^{-1} (p_i)_+ (p_i')_+^{-1} (p_i')_-=
\elem{p_i}\, \elem{p_i'}^{-1}.
\end{equation}
On the other hand, it follows from the construction that
\begin{equation}
\label{eq:elem_of_beta}
\elem{\beta}=(t_j')_+^{-1} (t_j)_+=\elem{t_{j}'}^{-1} (t_j')_-^{-1} (t_j)_- \elem{t_j}=\elem{t_{j}'}^{-1} a_j \elem{t_j} \in T_j^{a_j} a_j T_j=a_j T_j.
\end{equation}
The broken lines $p$ and $\gamma=qp_1 \dots p_{i-1} \alpha p' \beta t_{j+1} \dots t_m$ have the same endpoints in $\ga$.
Hence, in view of \eqref{eq:elem_of_alpha} and \eqref{eq:end_of_p'}, we obtain
\begin{equation}
\label{eq:new_decomp_for_g}
\begin{aligned}
g &=\elem{p} =\elem{\gamma} = \elem{q}\, \elem{p_1} \dots \elem{p_{i-1}}\, \elem{\alpha} \,\elem{p'} \, \elem{\beta} \, \elem{t_{j+1}} \dots \elem{t_m} \\
&= \elem{q}\, \elem{p_1} \dots \elem{p_{i-1}} (\elem{p_i} \, \elem{p'_i}^{-1}) (z x a_1 y_1 b_1 y_2 b_2 \dots b_{j-1} y_j a_j^{-1}) \elem{\beta} \, \elem{t_{j+1}} \dots \elem{t_m} \\
&= \elem{q}\, \elem{p_1} \dots \elem{p_{i-1}} (\elem{p_i} \, \elem{p'_i}^{-1} z) (x a_1) (y_1 b_1) \dots (y_{j-1} b_{j-1}) (y_j a_j^{-1} \elem{\beta}) \elem{t_{j+1}} \dots \elem{t_m}.
\end{aligned}
\end{equation}
Recall that $\elem{q} \in Q$, $\elem{p_1}, \dots, \elem{p_{i-1}} \in Q' \cup R'$ and $\elem{t_l} \in T_l$, for $l=j+1,\dots,m$, by definition.
On the other hand, $\elem{p_i} \, \elem{p'_i}^{-1} z \in Q'\, \elem{p'_i}^{-1}\, \elem{p'_i} \, Q'_P=Q'$,
\(xa_1 \in R\) by \eqref{eq:a1_in_R} and \(y_l b_l \in T_l\), for each \(l = 1, \dots, j-1\), by construction.
Finally, \(y_j a_j^{-1} \, \elem{\beta} \in T_j a_j^{-1} a_jT_j= T_j\) by \eqref{eq:elem_of_beta}.
Thus, following Remark~\ref{rem:path_rep_from_product}, the product decomposition \eqref{eq:new_decomp_for_g} for \(g\) gives us a path representative of \(g\) with width \(i < n\).
This contradicts the minimality of the type of \(p\), so the proposition is proved. \end{proof}
Condition \descref{C2-m} can be used to deal with another case of multiple backtracking.
\begin{proposition} \label{prop:C2-m_multitracking}
For every \(\zeta \geq 0\) there is a constant \(B_1 = B_1(\zeta) \geq 0\) such that if \(Q' \leqslant Q\) and \(R' \leqslant R\) satisfy condition \descref{C2-m} with constant \(B \geq B_1\) then the following is true.
Let \(p = q p_1 \dots p_n r t_1 \dots t_m\) be a path representative of minimal type for some element \(g \in Q \langle Q', R' \rangle R T_1 \dots T_m\), with \(g \notin Q R T_1 \dots T_m\), and let $\nu \in \Nu$.
Suppose that \(p\) has multiple backtracking along \(H_\nu\)-components \(h_1, \dots, h_k\) of its segments such that
\begin{itemize}
\item \(h_1\) is an \(H_\nu\)-component of \(p_i\), for some $i \in \{ 1, \dots, n-1\}$, with \(\elem{p_i} \in R'\);
\item \(h_k\) is an \(H_\nu\)-component of \(t_j\) for some \(j \in \{ 1, \dots, m\}\).
\end{itemize}
Then \(d_X((h_1)_-,(h_k)_+) \geq \zeta\). \end{proposition}
\begin{proof}
Take \(B_1 = \zeta + 2 \varepsilon + 1\), where \(\varepsilon \geq 0\) is a quasiconvexity constant for the subgroups \(R\) and \(T_1, \dots, T_m\) (as in Convention~\ref{conv:main_multicoset}), and let \(B \geq B_1\).
Suppose, for a contradiction, that \(d_X((h_1)_-,(h_k)_+) < \zeta\).
Since $\elem{p_i} \in R'$, we have \(d_X((h_1)_-,(p_i)_+ \,R) \leq \varepsilon\), by the quasiconvexity of \(R\).
Therefore there is a geodesic path \(p'_i\) in $\ga$, such that \(\elem{p'_i} \in R\), \(d_X((p'_i)_-,(h_1)_-) \leq \varepsilon\) and \((p'_i)_+ = (p_i)_+\).
Similarly, using the quasiconvexity of \(T_j\), we can find a geodesic path \(t'_j\) in $\ga$, such that \(\elem{t'_j} \in T_j\), \((t'_j)_- = (t_j)_-\) and \(d_X((t'_j)_+,(h_k)_+) \leq \varepsilon\).
Let \(p'\) be the broken line \(p'_i p_{i+1} \dots p_n r t_1 \dots t_{j-1} t'_j\).
Observe that \(\elem{p'} \in R \langle Q', R' \rangle R T_1 \dots T_j\) and, by the triangle inequality, \(\abs{p'}_X \leq \zeta + 2 \varepsilon\).
Therefore we can apply condition \descref{C2-m} to \(\elem{p'}\) to find that \(\elem{p'} = x y_1 \dots y_j\), where \(x \in R\) and \(y_l \in T_l\), for each \(l = 1, \dots, j\).
The broken lines $p$ and $\gamma=qp_1 \dots p_i {p_i'}^{-1} p' {t'_j}^{-1} t_j \dots t_m$ have the same endpoints, hence
\begin{equation}
\label{eq:new_prod_for_g}
\begin{aligned}
g &=\elem{p}=\elem{\gamma} = \elem{q} \,\elem{p_1} \dots \elem{p_{i}}\, \elem{p'_i}^{-1} \, \elem{p'} \, \elem{t_j'}^{-1}\, \elem{t_j} \dots \elem{t_m}\\
&= \elem{q} \,\elem{p_1} \dots \elem{p_{i-1}} (\elem{p_{i}}\, \elem{p'_i}^{-1} \, x) y_1 \dots y_{j-1} (y_j\,\elem{t_j'}^{-1}\, \elem{t_j}) \elem{t_{j+1}} \dots \elem{t_m}.
\end{aligned}
\end{equation}
Note that $\elem{p_{i}} \, \elem{p'_i}^{-1} x \in R$ and $y_j\, \elem{t_j'}^{-1} \, \elem{t_j} \in T_j$.
In view of Remark~\ref{rem:path_rep_from_product}, the product decomposition of $g$ from \eqref{eq:new_prod_for_g} can be used to obtain a path representative $p''$ of $g$ with width $i-1<n$. Thus the type of \(p''\) is strictly less than the type of \(p\), which yields the desired contradiction. \end{proof}
\section{Multiple backtracking in product path representatives: general case} \label{sec:mcs_multitracking2} Propositions~\ref{prop:long_multitracking}, \ref{prop:C5n_multiracking} and \ref{prop:C2-m_multitracking} above show that for $g \in Q \langle Q', R' \rangle R T_1 \dots T_m \setminus Q R T_1 \dots T_m$, instances of multiple backtracking in a minimal type path representative $p=qp_1\dots p_n r t_1 \dots t_m$ that start at a component of $q, p_1, \dots$, or $p_{n-1}$ are long. We cannot draw the same conclusion in all cases, since we have no control over the elements $\elem{r}, \elem{t_1},\dots,\elem{t_m}$. Therefore in this section we use a different approach. Proposition~\ref{prop:multicoset_multitracking} below shows that in the remaining cases we can find a path representative with one of the segments from the tail section $rt_1 \dots t_m$ being short with respect to the proper metric \(d_X\). Note that the main constant $\xi_0=\xi_0(Q',\zeta)$ produced in this proposition will depend on $Q'$ (unlike the constants $C_1$, $D$, $C_2(\zeta)$, $B_1(\zeta)$, $\dots$, defined previously) but will be independent of $R'$.
As before, we work under Convention~\ref{conv:main_multicoset}. We will also keep using Notation~\ref{not:C_1-P_1} and \ref{not:Li_def}. Let us start with the following elementary observation.
\begin{lemma} \label{lem:shorten_tail}
For any \(\zeta \geq 0\) and any given subsets \(A_1, \dots, A_k \subseteq G\), $k \ge 1$, there is a constant \(\xi = \xi(\zeta, A_1, \dots, A_k) \geq 0\) such that if \(g \in A_1 \dots A_k\) and \(\abs{g}_X \leq \zeta\), then there exist \(a_1 \in A_1\),\(\dots\), \(a_k \in A_k\) such that \(g = a_1 \dots a_k\) and \(\abs{a_i}_X \leq \xi\), for all \(i \in \{1, \dots, k\}\). \end{lemma}
\begin{proof}
For each \(g \in A_1 \dots A_k\) fix some elements \(a_{1,g} \in A_1, \dots, a_{k,g} \in A_k\) such that \(g = a_{1,g} \dots a_{k,g}\).
Now we can define
\[
\xi = \max \Big\{ \abs{a_{1,g}}_X, \dots, \abs{a_{k,g}}_X \, \Big| \, g \in A_1 \dots A_k, ~\abs{g}_X \leq \zeta \Big\} < \infty.
\]
Clearly $\xi$ has the required property. \end{proof}
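To make the quantifier structure of Lemma~\ref{lem:shorten_tail} concrete, here is a small computational illustration in a finite model: the symmetric group on three letters, with word lengths \(\abs{\cdot}_X\) computed by breadth-first search, and hypothetical subsets \(A_1, A_2\). It mirrors the proof exactly: fix one factorization per product element, then let \(\xi\) be the largest factor length occurring among the short products.

```python
# Illustrative finite model for the shorten_tail lemma: G = Sym({0,1,2}),
# generated by a transposition and a 3-cycle (with its inverse); A_1, A_2 are
# hypothetical subsets chosen only for illustration.
from collections import deque

def compose(f, g):          # (f*g)(x) = f(g(x)); permutations as tuples
    return tuple(f[g[x]] for x in range(len(g)))

def word_lengths(gens, identity):
    """BFS word length |.|_X with respect to gens (closed under inverses)."""
    dist = {identity: 0}
    queue = deque([identity])
    while queue:
        g = queue.popleft()
        for a in gens:
            h = compose(g, a)
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
    return dist

identity = (0, 1, 2)
s, t, tinv = (1, 0, 2), (1, 2, 0), (2, 0, 1)   # transposition, 3-cycle, inverse
dist = word_lengths([s, t, tinv], identity)     # covers all six elements

A1 = {identity, s}                              # hypothetical subsets A_1, A_2
A2 = {identity, t, tinv}
zeta = 2

# As in the proof: fix one factorization g = a_1 a_2 for each product g, and
# let xi be the largest factor length occurring among products with |g|_X <= zeta.
chosen = {}
for a1 in A1:
    for a2 in A2:
        g = compose(a1, a2)
        if g not in chosen:
            chosen[g] = (a1, a2)

xi = 0
for g, (a1, a2) in chosen.items():
    if dist[g] <= zeta:
        xi = max(xi, dist[a1], dist[a2])

# The lemma's conclusion holds for this xi:
for g, (a1, a2) in chosen.items():
    if dist[g] <= zeta:
        assert dist[a1] <= xi and dist[a2] <= xi
print(xi)
```

Finiteness of the set of short products is what makes the maximum \(\xi\) finite; in the lemma itself this is guaranteed by properness of the metric \(d_X\) rather than finiteness of \(G\).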
\begin{definition}[Tail height] \label{def:tail_height}
Suppose that $Q' \leqslant Q$, $R' \leqslant R$ and $p= q p_1 \dots p_n r t_1 \dots t_m$ is a path representative of an element $g \in Q \langle Q', R' \rangle R T_1 \dots T_m$.
The \emph{tail height} of $p$, $th_X(p)$, is defined as
\[
th_X(p)= \min\{ \abs{r}_X, \abs{t_1}_X, \dots, \abs{t_{m-1}}_X\} .
\] \end{definition}
\begin{proposition} \label{prop:multicoset_multitracking}
For each \(\zeta \geq 0\), let $C_2=C_2(\zeta)$ be the larger of the two constants provided by Propositions~\ref{prop:long_multitracking} and \ref{prop:C5n_multiracking}, and let $B_1=B_1(\zeta)$ be given by Proposition~\ref{prop:C2-m_multitracking}. Set $B_2 = B_2(\zeta) =\max\{C_2(\zeta),B_1(\zeta)\}$.
Suppose that $Q' \leqslant Q$ is a relatively quasiconvex subgroup of $G$ containing $S=Q \cap R$. Then there exists a constant \(\xi_0 = \xi_0(Q',\zeta) \geq 0\) such that if \(R'\leqslant R\) and $Q'$, $R'$ satisfy conditions \descref{C1}-\descref{C4}, \descref{C2-m} and \descref{C5-m}, with constants \(B \geq B_2\) and \(C \geq C_2\) and collections of subgroups \(\mathcal{P} \supseteq \mathcal{P}_1\) and \(\mathcal{U} \supseteq \mathcal{U}_1\), then the following is true.
Let \(p = q p_1 \dots p_n r t_1 \dots t_m\) be a path representative of minimal type for some element \(g \in Q \langle Q', R' \rangle R T_1 \dots T_m\), with \(g \notin Q R T_1 \dots T_m\).
Suppose that \(p\) has multiple backtracking along \(\mathcal{H}\)-components \(h_1, \dots, h_k\) of its segments, with $k \ge 3$ and \(d_X((h_1)_-,(h_k)_+) \leq \zeta\).
Then $m \ge 1$ and there is a path representative \(p'\) for \(g\) (not necessarily of minimal type) such that $th_X(p') \le \xi_0$. \end{proposition}
\begin{proof}
Let \(\varepsilon' \geq 0\) be a quasiconvexity constant for \(Q'\).
Take \(\xi_0=\xi_0(Q',\zeta) \geq 0\) to be the maximum, taken over all indices $i$ and $j$ satisfying \(1 \leq i \leq j \leq m\), of the constants
\[ \xi(\zeta + \varepsilon + \varepsilon',Q', R, T_1, \dots, T_j), ~~ \xi(\zeta + 2\varepsilon,R, T_1, \dots, T_j) \text{ and } \xi(\zeta + 2\varepsilon,T_i, \dots, T_j),
\]
obtained from Lemma~\ref{lem:shorten_tail}.
Suppose that \(h_1, \dots, h_k\) are as in the statement, with $ d_X((h_1)_-,(h_k)_+) \leq \zeta$.
There are four possible cases to consider, depending on the segments of \(p\) to which the \(\mathcal{H}\)-components \(h_1\) and \(h_k\) belong.
If \(h_k\) is an \(\mathcal{H}\)-component of one of the segments \(p_2, \dots, p_n\) or \(r\), then one obtains a contradiction to the minimality of type of \(p\) by following the same argument as in Proposition~\ref{prop:long_multitracking} (recall that \descref{C5-m} implies \descref{C5} by Remark~\ref{rem:C5m->C5}).
If \(h_1\) is an \(\mathcal{H}\)-component of one of the segments \(q, p_1, \dots, p_{n-1}\) and \(h_k\) is an \(\mathcal{H}\)-component of one of the segments \(t_1, \dots, t_m\), we obtain a contradiction by applying either Proposition~\ref{prop:C5n_multiracking} or \ref{prop:C2-m_multitracking} (depending on whether \(h_1\) is a component of a segment of $p$ representing an element of \(Q\) or \(R\), respectively).
It remains to consider the possibility when \(h_1\) is an \(\mathcal{H}\)-component of one of the segments \(p_n, r, t_1, \dots, t_m\).
It follows that \(h_k\) is an \(\mathcal{H}\)-component of \(t_j\), for some \(j \in \{ 1, \dots, m\}\), in particular $m \ge 1$.
For simplicity we treat only the case when \(h_1\) is an \(\mathcal{H}\)-component of \(p_n\); the remaining cases can be dealt with similarly.
Note that \(\elem{p_n} \in Q'\) by Remark~\ref{rem:multicoset_path_reps}.
By the relative quasiconvexity of \(Q'\) and \(T_j\) there are geodesic paths \(\alpha\) and \(\beta\) in $\ga$ satisfying
\begin{gather*}
d_X(\alpha_-,(h_1)_-) \leq \varepsilon', ~ \alpha_+ = (p_n)_+ \text{ and } \elem{\alpha} \in Q',\\
\beta_- = (t_j)_-, ~d_X(\beta_+,(h_k)_+) \leq \varepsilon ~ \text{ and } \elem{\beta} \in T_j.
\end{gather*}
Let \(\gamma = \alpha r t_1 \dots t_{j-1} \beta\). Observe that \(\elem{\gamma} \in Q' R T_1 \dots T_j\) and, by the triangle inequality,
\[
\abs{\gamma}_X=d_X(\alpha_-,\beta_+) \leq \varepsilon'+\zeta + \varepsilon.
\]
Thus, applying Lemma~\ref{lem:shorten_tail}, we can find elements \(x \in Q'\), \(y \in R\), \(z_1 \in T_1\), \(\dots\), \(z_j \in T_j\) such that \(\elem{\gamma} = x y z_1 \dots z_j\) and
\begin{equation}
\label{eq:new_path_rep_bds}
\abs{y}_X \le \xi_0.
\end{equation}
Therefore
\begin{equation}
\label{eq:prod_decomp_for_p'}
\begin{aligned}
g &= \elem{p}=\elem{q}\, \elem{p_1} \dots \elem{p_n} (\elem{\alpha}^{-1}\,\elem{\alpha})\elem{r} \, \elem{t_1} \dots \elem{t_{j-1}} (\elem{\beta} \,\elem{\beta}^{-1}) \elem{t_j} \dots \elem{t_m} \\
&= \elem{q} \, \elem{p_1} \dots \elem{p_n} \,\elem{\alpha}^{-1} \, \elem{\gamma} \elem{\beta}^{-1} \, \elem{t_{j}} \dots \elem{t_m} \\
&=\elem{q} \, \elem{p_1} \dots \elem{p_{n-1}} (\elem{p_n} \, \elem{\alpha}^{-1} \, x) y z_1 \dots z_{j-1} (z_j \, \elem{\beta}^{-1} \, \elem{t_j}) \elem{t_{j+1}} \dots \elem{t_m}.
\end{aligned}
\end{equation}
Following Remark~\ref{rem:path_rep_from_product}, the product decomposition \eqref{eq:prod_decomp_for_p'} gives rise to a path representative $p'=q' p'_1 \dots p'_n r' t'_1 \dots t'_m$ for $g$, where $\elem{q'}=\elem{q} \in Q$, $\elem{p_i'}=\elem{p_i} \in Q' \cup R'$, for $i=1,\dots, n-1$, $\elem{p_n'}=\elem{p_n}\, \elem{\alpha}^{-1} \,x \in Q'$, $\elem{r'}=y \in R$, $\elem{t'_l}=z_l \in T_l$, for $l=1,\dots,j-1$, $\elem{t'_j}=z_j \, \elem{\beta}^{-1}\, \elem{t_j} \in T_j$ and $\elem{t'_s}=\elem{t_s} \in T_s$, for $s=j+1,\dots,m$.
In view of (\ref{eq:new_path_rep_bds}), we see that \(th_X(p') \le\abs{y}_X \le \xi_0\), so the proof is complete. \end{proof}
The following proposition is an analogue of Lemma~\ref{lem:pathreps_have_qgd_shortcutting}. It employs the constant \(c_0 = \max\{C_0,14\delta\}\), where \(C_0\) is provided by Lemma~\ref{lem:multicoset_bdd_inn_prod}, and the constants \(\lambda =\lambda(c_0) \geq 1\) and \(c=c(c_0) \geq 0\), given by Proposition~\ref{prop:shortcutting_quasigeodesic}.
\begin{proposition} \label{prop:mcs_shortcutting_quasigeodesic}
For any \(\eta \geq 0\) there are constants \(\zeta = \zeta(\eta) \geq 0\), \(C_3 = C_3(\eta) \geq 0\), \(\Theta_1 = \Theta_1(\eta) \in \NN\) and \(B_3 = B_3(\eta) \geq 0\) such that if \(Q' \leqslant Q\) is a relatively quasiconvex subgroup of $G$ and \(B \geq B_3\), \(C \geq C_3\) then there exists $E =E(\eta,Q',B) \ge 0$ such that the following holds.
Suppose $Q'$ and some subgroup \(R'\leqslant R \) satisfy conditions \descref{C1}-\descref{C4}, \descref{C2-m} and \descref{C5-m}, with constants \(B\) and \(C\), and families \(\mathcal{P} \supseteq \mathcal{P}_1\) and \(\mathcal{U} \supseteq \mathcal{U}_1\).
Let \(p\) be a minimal type path representative for an element \(g \in Q \langle Q', R' \rangle R T_1 \dots T_m \setminus Q R T_1 \dots T_m\).
Assume that for any path representative \(p'\) for \(g\) we have $th_X(p') \ge E$.
Then \(p\) is \((B,c_0,\zeta,\Theta_1)\)-tamable.
Let \(\Sigma(p,\Theta_1) = f_0 e_1 f_1 \dots e_l f_l\) denote the \(\Theta_1\)-short\-cut\-ting of \(p\), obtained by applying Procedure~\ref{proc:shortcutting}, and let $e_j'$ be the \(\mathcal{H}\)-component of \(\Sigma(p,\Theta_1)\) containing \(e_j\), $j=1,\dots,l$. Then \(\Sigma(p,\Theta_1)\) is a \((\lambda,c)\)-quasigeodesic without backtracking and \(\abs{e'_j}_X \geq \eta\), for each \(j = 1, \dots, l\). \end{proposition}
\begin{proof}
The proof is similar to the argument in Lemma~\ref{lem:pathreps_have_qgd_shortcutting}.
Let us define the necessary constants:
\begin{itemize}
\item \(\zeta = \zeta(\eta,c_0)\) is the constant from Proposition~\ref{prop:shortcutting_quasigeodesic};
\item \(\Theta_1 = \max\{\Theta_0(\zeta),\zeta\}\), where \(\Theta_0\) is the constant from Lemma~\ref{lem:multicoset_adj_backtracking};
\item $B_2(\zeta)$ and \(C_3 = C_2(\zeta)\) are the constants provided by Proposition~\ref{prop:multicoset_multitracking};
\item \(B_3 = \max\{B_0(\Theta_1,c_0), B_2(\zeta)\} \), where \(B_0(\Theta_1,c_0)\) is the constant from Proposition~\ref{prop:shortcutting_quasigeodesic};
\end{itemize}
and, finally, for any given \(B \geq B_3, C \geq C_3\), we set
\begin{itemize}
\item \(E = \max \{B, \xi_0(Q',\zeta) + 1\}\), where \(\xi_0(Q',\zeta)\) is the constant from Proposition~\ref{prop:multicoset_multitracking}.
\end{itemize}
Suppose that $Q'$, $R'$, $g$ and \(p = q p_1 \dots p_n r t_1 \dots t_m\) are as in the statement of the proposition. We will now show that $p$ is \((B,c_0,\zeta,\Theta_1)\)-tamable.
Since \(Q'\) and \(R'\) satisfy \descref{C2}, Lemma~\ref{lem:C2_implies_old_C2} together with Remark~\ref{rem:multicoset_path_reps} imply that \(\abs{p_i}_X \geq B\), for each \(i = 1, \dots, n\).
Moreover, by assumption, \(\abs{r}_X, \abs{t_1}_X, \dots, \abs{t_{m-1}}_X \geq E \geq B\), so condition \ref{cond:tam_1} of Definition~\ref{def:tamable} is satisfied. On the other hand, condition \ref{cond:tam_2} is satisfied by Lemma~\ref{lem:multicoset_bdd_inn_prod}.
If condition \ref{cond:tam_3} of Definition~\ref{def:tamable} is not satisfied then \(p\) must have consecutive backtracking along \(\mathcal{H}\)-components \(h_1, \dots, h_k\) of its segments, such that
\[
\max\Big\{ \abs{h_i}_X \, \Big| \, i = 1, \dots, k \Big\} \geq \Theta_1~\text{ and } d_X((h_1)_-,(h_k)_+) < \zeta.
\]
Lemma~\ref{lem:multicoset_adj_backtracking} rules out the case of adjacent backtracking (\(k = 2\)), so it must be that $k \ge 3$, i.e., \(h_1, \dots, h_k\) is an instance of multiple backtracking in \(p\).
Proposition~\ref{prop:multicoset_multitracking} now applies, giving a path representative \(p'\) for $g$ with \(th_X(p') \leq \xi_0(Q',\zeta) < E\).
This contradicts the hypothesis of the proposition, so $p$ must also satisfy condition \ref{cond:tam_3}.
Therefore \(p\) is \((B,c_0,\zeta,\Theta_1)\)-tamable, and we can apply Proposition~\ref{prop:shortcutting_quasigeodesic} to achieve the desired conclusion. \end{proof}
\section{Using separability to establish conditions \texorpdfstring{\descref{C2-m}}{(C2-m)} and \texorpdfstring{\descref{C5-m}}{(C5-m)}} \label{sec:mcs_sep->metric}
In this section we exhibit, under suitable assumptions on \(G\), the existence of finite index subgroups \(Q' \leqslant_f Q\) and \(R' \leqslant_f R\) satisfying conditions \descref{C1}-\descref{C4}, \descref{C2-m} and \descref{C5-m}.
\begin{lemma} \label{lem:sep->C2-n}
Let \(G\) be a group generated by a finite set \(X\), let \(Q, R, T_1, \dots, T_m \leqslant G\) be some subgroups, and let $S=Q \cap R$. Suppose that \(R T_1 \dots T_l\) is separable in \(G\), for each \(l = 0, \dots, m\).
Then for any \(B \geq 0\) there is a finite index subgroup \(N \leqslant_f G\), with \(S \subseteq N\), such that arbitrary subgroups \(Q' \leqslant Q \cap N\) and \(R'\leqslant R \cap N\) satisfy condition \descref{C2-m} with constant \(B\). \end{lemma}
\begin{proof}
For each \(l \in \{ 0, \dots, m\}\) the product \(R T_1 \dots T_l\) is separable, so, by Lemma~\ref{lem:sep->large_minx}(b), there is a finite index normal subgroup \(M_l \lhd_f G\) such that
\begin{equation}
\label{eq:ineq_for_C2-m}
\minx (R T_1 \dots T_l M_l \setminus R T_1 \dots T_l) \geq B, \text{ for all } l=0,\dots,m.
\end{equation}
Define the subgroup
\( M = \bigcap_{l=0}^{m} M_l \lhd_f G \),
and take \(N = SM \leqslant_f G\).
Observe that
\begin{equation}\label{eq:2nd_ineq_for_C2-m}
R N R T_1 \dots T_l=RSMRT_1 \dots T_l=RSR T_1 \dots T_lM=RT_1 \dots T_lM, \text{ for all } l=0,\dots,m.
\end{equation}
Now choose arbitrary subgroups \(Q' \leqslant Q \cap N\) and \(R'\leqslant R \cap N\), so that $\langle Q', R' \rangle \subseteq N$. Then \(R \langle Q', R' \rangle R T_1 \dots T_l \subseteq R N R T_1 \dots T_l = R T_1 \dots T_l M \subseteq R T_1 \dots T_l M_l\), by \eqref{eq:2nd_ineq_for_C2-m} and the inclusion \(M \subseteq M_l\), so \eqref{eq:ineq_for_C2-m} shows that every element of \(R \langle Q', R' \rangle R T_1 \dots T_l \setminus R T_1 \dots T_l\) has \(\abs{\cdot}_X\)-length at least \(B\), which is the desired conclusion. \end{proof}
The next statement is similar to Theorem~\ref{thm:sep->qc_comb}.
\begin{lemma} \label{lem:sep->C5-n}
Suppose that \(G\) is a group generated by a finite set \(X\) and $m \in \NN_0$. Let \(Q, R \leqslant G\) be some subgroups, and let \(\mathcal{P}, \mathcal{U}\) be finite collections of subgroups of $G$ such that
\begin{enumerate}[label={\normalfont (\arabic*)}]
\item \label{cond:C5m-1} each \(P \in \mathcal{P}\) has property RZ$_{m+2}$;
\item \label{cond:C5m-2} the subgroups \(Q \cap P\), \(R \cap P\) and \(U \cap P\) are finitely generated, for all \(P \in \mathcal{P}\) and all \(U \in \mathcal{U}\);
\item \label{cond:C5m-3} if \(P \in \mathcal{P}\), \(K \leqslant_f P\) and \(L \leqslant_f Q\) then \(KL\) is separable in \(G\).
\end{enumerate}
Then for any \(C \geq 0\) and any finite index subgroup \(Q' \leqslant_f Q\), there is a finite index subgroup \(O \leqslant_f G\), with \(Q' \subseteq O\), such that for any \(R' \leqslant R \cap O\) the subgroups \(Q'\) and \(R'\) satisfy \descref{C5-m} with constant \(C\) and collections \(\mathcal{P}\) and \(\mathcal{U}\). \end{lemma}
\begin{proof}
As usual, for subgroups \(H \leqslant G\) and \(P \in \mathcal{P}\) we denote \(H \cap P\) by \(H_P\).
Fix an enumeration \(\mathcal{P} = \{P_1, \dots, P_k\}\) and
let \(Q' \leqslant_f Q\) be a finite index subgroup of \(Q\). Given any \(i \in \{ 1, \dots, k\}\), we choose some coset representatives \(a_{i1}, \dots, a_{in_i} \in Q_{P_i}\) of \(Q'_{P_i}\), so that \(Q_{P_i} = \bigsqcup_{j=1}^{n_i} a_{ij} Q'_{P_i}\).
Let $\mathbb{U}$ be the finite set consisting of all $l$-tuples $(U_1,\dots,U_l)$, where $l \in \{0,\dots, m\}$ and $U_1,\dots, U_l \in \mathcal{U}$.
Consider any \(i \in \{ 1, \dots, k\}\) and $\underline{u}=(U_1,\dots,U_l) \in \mathbb{U}$, where $l \in \{0, \dots, m\}$.
Note that \(Q'_{P_i} \leqslant_f Q_{P_i}\) is finitely generated, for each \(i = 1, \dots, k\), since \(Q_{P_i}\) is itself finitely generated by assumption \ref{cond:C5m-2}. Since \(P_i\) has property RZ$_{m+2}$ by assumption \ref{cond:C5m-1}, and the product \(Q'_{P_i} R_{P_i} (U_1)_{P_i} \dots (U_l)_{P_i}\) involves at most \(m+2\) finitely generated subgroups of \(P_i\) (by assumption \ref{cond:C5m-2}), this subset is separable in \(P_i\).
Therefore,
by Lemma~\ref{lem:sep->large_minx}(c), for any $C \ge 0$ there is \(F_{i,\underline{u}} \lhd_f P_i\) such that
\begin{equation}
\label{eq:mcs_sep_for_Fiu}
\minx \Big( a_{ij} Q'_{P_i} F_{i,\underline{u}} R_{P_i} (U_1)_{P_i} \dots (U_l)_{P_i} \setminus a_{ij} Q'_{P_i} R_{P_i} (U_1)_{P_i} \dots (U_l)_{P_i} \Big) \geq C,
\end{equation}
for all $j=1,\dots,n_i$.
Define \(K_{i,\underline{u}} = Q'_{P_i} F_{i,\underline{u}} \leqslant_f P_i\).
Then (\ref{eq:mcs_sep_for_Fiu}) implies that for every $j=1,\dots,n_i$ we have
\begin{equation}
\label{eq:mcs_sep_for_Kiu}
\minx \Big( a_{ij} K_{i,\underline{u}} R_{P_i} (U_1)_{P_i} \dots (U_l)_{P_i} \setminus a_{ij} Q'_{P_i} R_{P_i} (U_1)_{P_i} \dots (U_l)_{P_i} \Big) \geq C.
\end{equation}
Assumption \ref{cond:C5m-3} tells us that the double coset \(K_{i,\underline{u}} Q'\) is separable in \(G\), and since \(Q' \cap P_i = Q'_{P_i} \subseteq K_{i,\underline{u}}\), we can apply Lemma~\ref{lem:Lemma_1} to find a finite index subgroup \(O_{i,\underline{u}} \leqslant_f G\) such that \(Q' \subseteq O_{i,\underline{u}}\) and \(O_{i,\underline{u}} \cap P_i \subseteq K_{i,\underline{u}}\).
We can now define a finite index subgroup $O$ of $G$ by
\[
O = \bigcap_{i=1}^k \bigcap_{\underline{u} \in \mathbb{U}} O_{i,\underline{u}} \leqslant_f G.
\]
Observe that \(Q' \subseteq O\) and \(O \cap P_i \subseteq K_{i,\underline{u}}\), for each \(i = 1, \dots, k\) and all $\underline{u} \in \mathbb{U}$. Consider any subgroup \(R' \leqslant R \cap O\).
Then \(Q_{P_i}' \cup R'_{P_i} \subseteq O \cap P_i \), so (\ref{eq:mcs_sep_for_Kiu}) yields that
\begin{equation}
\label{eq:mcs_sep_for_Q'R'}
\minx \Big( a_{ij} \langle Q'_{P_i}, R'_{P_i} \rangle R_{P_i} (U_1)_{P_i} \dots (U_l)_{P_i} \setminus a_{ij} Q'_{P_i} R_{P_i} (U_1)_{P_i} \dots (U_l)_{P_i} \Big) \geq C,
\end{equation}
for arbitrary \(i = 1, \dots, k\), \(l = 0, \dots, m\), $U_1,\dots,U_l \in \mathcal{U}$ and any $j=1,\dots,n_i$.
Given any \(i \in \{ 1, \dots, k\}\) and any \(q \in Q_{P_i}\), there is $j \in \{1,\dots,n_i\}$ such that $qQ_{P_i}'= a_{ij}Q_{P_i}'$.
It follows that $q \langle Q'_{P_i}, R'_{P_i} \rangle = a_{ij} \langle Q'_{P_i}, R'_{P_i} \rangle$, which, combined with \eqref{eq:mcs_sep_for_Q'R'}, shows that \(Q'\) and \(R'\) satisfy condition \descref{C5-m}, as required. \end{proof}
For the next result we will follow the notation of Convention~\ref{conv:main_multicoset}.
\begin{proposition} \label{prop:msc_sep->metric}
Suppose that $G$ is QCERF, the product $RT_1 \dots T_l$ is separable in $G$, for every $l=0,\dots,m$, and the peripheral subgroup \(H_\nu\) has property RZ$_{m+2}$, for each \(\nu \in \Nu\).
Let \(\mathcal{P}_1\) be a finite collection of maximal parabolic subgroups and let \(\mathcal{U}_1\) be a finite collection of finitely generated relatively quasiconvex subgroups in $G$.
Then for any \(B, C \geq 0\) there exist finite index subgroups \(Q' \leqslant_f Q\) and \(R' \leqslant_f R\) such that:
\begin{itemize}
\item $\langle Q',R' \rangle $ is relatively quasiconvex in $G$;
\item $Q'$, $R'$ satisfy conditions \descref{C1}-\descref{C4}, \descref{C2-m} and \descref{C5-m} with constants \(B\) and \(C\) and collections \(\mathcal{P}_1\) and \(\mathcal{U}_1\).
\end{itemize}
More precisely, there is \(L_1 \leqslant_f G\), with \(S \subseteq L_1\), such that for any \(L' \leqslant_f L_1\), satisfying \(S \subseteq L'\), we can take \(Q' = Q \cap L'\leqslant_f Q\), and there is \(M_1 \leqslant_f L'\), with \(Q' \subseteq M_1\), such that for any \(M' \leqslant_f M_1\), satisfying \(Q' \subseteq M'\), the subgroups \(Q'\) and \(R' = R \cap M'\leqslant_f R\) enjoy the above properties. \end{proposition}
\begin{proof}
Fix some constants $B,C \ge 0$. Let $\mathcal{P}_0$ be the finite collection of maximal parabolic subgroups of $G$ provided by Theorem~\ref{thm:metric_qc} and set $\mathcal{P}=\mathcal{P}_0 \cup \mathcal{P}_1$.
Note that maximal parabolic subgroups of $G$ are double coset separable by the assumptions, as $m+2 \ge 2$.
Therefore the argument from the proof of Theorem~\ref{thm:sep->qc_intro-detailed} shows that $G$, its subgroups $Q,R$ and $S= Q \cap R$, and the finite collection $\mathcal P$ satisfy assumptions \ref{cond:1}--\ref{cond:4} of Theorem~\ref{thm:sep->qc_comb}.
Let $L \leqslant_f G$, with $S \subseteq L$, be the finite index subgroup provided by this theorem.
By the hypothesis on \(G\), the subsets \(R T_1 \dots T_l\) are separable in \(G\), for each \(l=0,\dots, m\).
We can therefore apply Lemma~\ref{lem:sep->C2-n} to obtain a finite index subgroup \(N \leqslant_f G\) from its statement (in particular, $S \subseteq N$).
Now we define the finite index subgroup $L_1 \leqslant_f G$, from the statement of the proposition, by setting $L_1=L \cap N$.
Clearly $L_1$ contains $S$.
Take any $L' \leqslant_f L_1$, with $S \subseteq L'$, and set $Q'=Q \cap L' \leqslant_f Q$. Let $M \leqslant_f L'$ be the subgroup provided by Theorem~\ref{thm:sep->qc_comb}, with $Q' \subseteq M$.
Lemma~\ref{lem:fg_qc_int_parab_is_fg} and Corollary~\ref{cor:fi_in_parab_times_fgqc_is_sep} imply that all the assumptions of Lemma~\ref{lem:sep->C5-n} are satisfied, so let $O \leqslant_f G$ be the subgroup given by this lemma, with $Q' \subseteq O$.
We now define the finite index subgroup $M_1 \leqslant_f L'$, from the statement of the proposition, by $M_1=M \cap O$.
Evidently, $M_1$ contains $Q'$. Choose an arbitrary finite index subgroup $M' \leqslant_f M_1$, with $Q' \subseteq M'$, and set $R'=R \cap M'$.
Observe that $M' \leqslant_f G$, by construction, hence $R' \leqslant_f R$.
The combined statements of Theorem~\ref{thm:sep->qc_comb}, Lemma~\ref{lem:props_of_qc_sbgps}, Theorem~\ref{thm:metric_qc}, Lemma~\ref{lem:sep->C2-n} and Lemma~\ref{lem:sep->C5-n} now imply that the subgroups $Q' \leqslant_f Q$ and $R' \leqslant_f R$, obtained above, satisfy all of the required properties.
Thus the proposition is proved. \end{proof}
\section{Separability of quasiconvex products in QCERF relatively hyperbolic groups} \label{sec:RZs_proof}
In this section we prove Theorem~\ref{thm:RZs} from the Introduction.
\begin{remark} \label{rem:qc_prod_sep}
Let $G$ be a relatively hyperbolic group.
Suppose that $s \in \NN$ and the product of any \(s\) finitely generated relatively quasiconvex subgroups is separable in \(G\).
If \(Q_1, \dots, Q_s\) are finitely generated quasiconvex subgroups of $G$ and \(a_0, \dots, a_s \in G\) are arbitrary elements, then the subset \(a_0 Q_1 a_1 \dots Q_s a_s\) is separable in $G$. \end{remark}
Indeed, observe that the subset \[
a_0 Q_1 a_1 \dots Q_s a_s = Q_1^{a_0} Q_2^{a_0 a_1} \dots Q_s^{a_0 \dots a_{s-1}} a_0 \dots a_s \] is a translate of a product of conjugates of the subgroups \(Q_1, \dots, Q_s\). Combining Lemma~\ref{lem:props_of_qc_sbgps} with Remark~\ref{rem:sep_props} and the assumption on \(G\) yields the desired conclusion.
\begin{proof}[Proof of Theorem~\ref{thm:RZs}]
We induct on $s$. The case $s = 1$ is equivalent to the QCERF property of $G$, while the case $s = 2$ follows from Corollary~\ref{cor:double_cosets_sep}.
Thus we can assume that \(s > 2\) and the product of any $s-1$ finitely generated relatively quasiconvex subgroups is separable in $G$.
Let $F_1,\dots,F_s$ be finitely generated relatively quasiconvex subgroups of $G$.
For ease of notation we write \(m = s - 2\), \(Q = F_1, R = F_2\) and \(T_i = F_{i+2}\), for \(i \in \{1, \dots, m\}\).
Choose a finite generating set $X$ for $G$ and let $\delta \in \NN$ be a hyperbolicity constant for the Cayley graph $\ga$, where $\mathcal{H}=\bigsqcup_{\nu \in \Nu} (H_\nu\setminus\{1\})$.
Denote by $\varepsilon \ge 0$ a common quasiconvexity constant for $Q,R,T_1, \dots, T_m$.
Arguing by contradiction, suppose that the subset $QRT_1 \dots T_m=F_1 \dots F_s$ is not separable in $G$.
Then there exists \(g \in G \setminus QRT_1 \dots T_m\) such that \(g\) belongs to the profinite closure of \(QRT_1 \dots T_m\) in $G$.
Let us fix the following notation for the remainder of the proof:
\begin{itemize}
\item \(c_0 = \max\{C_0,14\delta\} \geq 0\), where \(C_0\) is the constant obtained from Lemma~\ref{lem:multicoset_bdd_inn_prod};
\item \(c_3 = c_3(c_0) \geq 0\) is the constant obtained from Lemma~\ref{lem:concat};
\item \(\lambda = \lambda(c_0) \geq 1\) and \(c = c(c_0) \geq 0\) are obtained from Proposition~\ref{prop:shortcutting_quasigeodesic}, applied with the constant \(c_0\);
\item $\mathcal{P}_1$ is the finite family of maximal parabolic subgroups of $G$ from Notation~\ref{not:C_1-P_1};
\item $\mathcal{U}_1$ is the finite collection of finitely generated relatively quasiconvex subgroups of $G$ from Notation~\ref{not:Li_def};
\item $A=\abs{g}_X + 1$ and \(\eta = \eta(\lambda,c,A) \geq 0\) is obtained from Lemma~\ref{lem:qgds_with_long_comps};
\item \(\zeta = \zeta(\eta) \geq 0\), \(\Theta_1 = \Theta_1(\eta) \geq 0\), \(C_3 = C_3(\eta) \geq 0\) and \(B_3 = B_3(\eta) \geq 0\) are the constants obtained from Proposition~\ref{prop:mcs_shortcutting_quasigeodesic};
\item $B=\max\{B_3(\eta),(4A+c_3)\Theta_1\} $ and $C=C_3(\eta)$.
\end{itemize}
Observe that, by the induction hypothesis, the product $RT_1 \dots T_l$ is separable in $G$, for every $l=0,\dots,m$.
Let \(L_1 \leqslant_f G\) be the finite index subgroup obtained from Proposition~\ref{prop:msc_sep->metric}, applied with finite families $\mathcal{P}_1$, $\mathcal{U}_1$ and constants $B$, $C$, given above.
Note that $S \subseteq L_1$, and define \(Q' = Q \cap L_1 \leqslant_f Q\).
Again, by Proposition~\ref{prop:msc_sep->metric}, there is a finite index subgroup \(M_1 \leqslant_f L_1\) such that $Q' \subseteq M_1$ and for any \(M'\leqslant_f M_1\), with \(Q' \subseteq M'\), the subgroups \(Q'\) and \(R' = R \cap M' \leqslant_f R\) satisfy the conclusion of Proposition~\ref{prop:msc_sep->metric}.
Let \(E = E(\eta,Q',B) \geq 0\) be the constant provided by Proposition~\ref{prop:mcs_shortcutting_quasigeodesic}.
Let \(\{N_j \, | \, j \in \NN\}\) be an enumeration of the finite index subgroups of \(M_1\) containing \(Q'\), and define the subgroups
\begin{equation}
\label{eq:Ri_def}
M'_i= \bigcap_{j=1}^i N_j \leqslant_f M_1~\text{ and }~ R'_i = M'_i \cap R\leqslant_f R,~i \in \NN.
\end{equation}
Note that for every $i \in \NN$, $Q' \subseteq M'_i$, so the subgroups $Q'$ and $R'_i$ satisfy the conclusion of Proposition~\ref{prop:msc_sep->metric}.
In particular, the subgroup \(\langle Q', R_i' \rangle \) is relatively quasiconvex (and finitely generated) in $G$, and \(Q'\), \(R'_i\) satisfy conditions \descref{C1}-\descref{C4}, \descref{C2-m} and \descref{C5-m} with constants \(B\), \(C\) and families \(\mathcal{P}_1\), \(\mathcal{U}_1\), defined above.
For each \(i \in \NN\), consider the subset
\[
K_i = Q \langle Q', R'_i \rangle R T_1 \dots T_m.
\]
Choose coset representatives \(x_1, \dots, x_a \in Q\) and \(y_{i,1}, \dots, y_{i,b_i} \in R\) such that \(Q=\bigcup_{j=1}^a x_jQ'\) and $R=\bigcup_{k=1}^{b_i} R'_i\, y_{i,k}$.
Then
\begin{equation*}
\label{eq:QQ'RiR}
Q \langle Q', R'_i \rangle R = \bigcup_{j = 1}^a \bigcup_{k=1}^{b_i} x_j \langle Q', R'_i \rangle y_{i,k},
\end{equation*}
hence \(K_i\) may be written as the finite union
\begin{equation*}
K_i = \bigcup_{j = 1}^a \bigcup_{k=1}^{b_i} x_j \langle Q', R'_i \rangle y_{i,k} T_1 \dots T_m.
\end{equation*}
Therefore, for every \(i \in \NN\), $K_i$ is separable in $G$ by Remark~\ref{rem:qc_prod_sep} and the induction hypothesis.
Since each \(K_i\) contains \(Q R T_1 \dots T_m\) and \(g\) is in the profinite closure of \(Q R T_1 \dots T_m\), it must be the case that \(g \in K_i\), for every \(i \in \NN\).
We will show that considering sufficiently large values of \(i\) leads to a contradiction.
For each \(i \in \NN\), let \(\mathcal{S}_i\) be the set of path representatives of \(g\) in \(K_i = Q \langle Q', R'_i \rangle R T_1 \dots T_m\) (see Definition~\ref{def:multicoset_path_reps}, where $R'$ is replaced by $R'_i$).
We will now consider two cases.
\underline{\emph{Case 1:}}
there exists $i \in \NN$ such that $\displaystyle \inf_{p' \in \mathcal{S}_i} th_X(p') \ge E$.
Choose a path representative of minimal type \(p = q p_1 \dots p_n r t_1 \dots t_m \) for \(g\) in $K_i$.
Note that $n \ge 1$ and $\elem{p_1} \in R'_i \setminus S$ because $g \notin QRT_1 \dots T_m$ (see Remark~\ref{rem:multicoset_path_reps}).
By the assumptions of Case~1 and the above construction, we can apply Proposition~\ref{prop:mcs_shortcutting_quasigeodesic} to conclude that \(p\) is \((B,c_0,\zeta,\Theta_1)\)-tamable and the shortcutting \(\Sigma(p,\Theta_1) = f_0 e_1 f_1 \dots f_{l-1} e_l f_l\), obtained from Procedure~\ref{proc:shortcutting}, is \((\lambda,c)\)-quasigeodesic without backtracking, with \(\abs{e'_k}_X \geq \eta\) for each \(k = 1, \dots, l\) (where \(e'_k\) denotes the \(\mathcal{H}\)-component of \(\Sigma(p,\Theta_1)\) containing \(e_k\)).
If \(l > 0\), then applying Lemma~\ref{lem:qgds_with_long_comps} to the path \(\Sigma(p,\Theta_1)\) gives
\[
\abs{g}_X = \abs{p}_X = \abs{\Sigma(p,\Theta_1)}_X \geq A > \abs{g}_X,
\]
by the choice of \(\eta\), which gives a contradiction.
Therefore it must be that \(l = 0\).
Then \(p\) is \((4,c_3)\)-quasigeodesic by Lemma~\ref{lem:beta_quasigeodesic} and, according to Remark~\ref{rem:shortcutting}(c), no segment of \(p\) contains an \(\mathcal{H}\)-component \(h\) with \(\abs{h}_X \geq \Theta_1\).
By the quasigeodesicity of \(p\) and the fact that \(p_1\) is a subpath of \(p\), we have
\begin{equation}
\label{eq:p_qgd}
\abs{g}_{X\cup\mathcal{H}} = \abs{p}_{X\cup\mathcal{H}} \geq \frac{1}{4}(\ell(p) - c_3) \geq \frac{1}{4}(\ell(p_1) - c_3).
\end{equation}
Applying Lemma~\ref{lem:rel_geods_with_short_comps} to the geodesic \(p_1\) in $\ga$ we obtain
\begin{equation}
\label{eq:r_len_bd}
\ell(p_1) \geq \frac{1}{\Theta_1}\abs{p_1}_X \geq \frac{B}{\Theta_1} \ge 4A+c_3,
\end{equation}
where the second inequality follows from the fact that $\elem{p_1} \in R'_i \setminus S$ and Lemma~\ref{lem:C2_implies_old_C2}.
Combining (\ref{eq:p_qgd}) and (\ref{eq:r_len_bd}), we get
\[
\abs{g}_X \ge \abs{g}_{X\cup\mathcal{H}} \geq \frac{1}{4}(4A+c_3-c_3)= A> \abs{g}_{X},
\]
which is a contradiction.
\underline{\emph{Case 2:}}
for all $i \in \NN$ we have $\displaystyle \inf_{p' \in \mathcal{S}_i} th_X(p') < E$.
Then for each \(i \in \NN\) there is a path representative \(p_i = q_i p_{1,i} \dots p_{n_i,i} r_i t_{1,i} \dots t_{m,i} \in \mathcal{S}_i\) for $g$ such that $th_X(p_i) < E$.
It must either be the case that \(\displaystyle \liminf_{i \to \infty}\abs{r_i}_X \le E\) or \(\displaystyle \liminf_{i \to \infty}\abs{t_{j,i}}_X \le E\), for some \(j \in \{ 1, \dots, m\}\).
We will consider the former case, as the latter is very similar.
Since there are only finitely many elements \(x \in G\) with \(\abs{x}_X \le E \), we may pass to a subsequence $(p_{i_k})_{k \in \NN}$ such that \(\elem{r_{i_k}} = y \in R\) is some fixed element, for all \(k \in \NN\).
It follows that
\begin{equation}
\label{eq:g_in_prod_case_1}
g = \elem{p_{i_k}} \in Q \langle Q', R'_{i_k} \rangle y T_1 \dots T_m,~ \text{ for each } k \in \NN.
\end{equation}
Now, \(g \notin Q y T_1 \dots T_m\) (as $y \in R$), and the subset \(Q y T_1 \dots T_m\) is separable in \(G\) by
the induction hypothesis and Remark~\ref{rem:qc_prod_sep}.
By Lemma~\ref{lem:sep->large_minx}(a), there is a finite index normal subgroup \(O \lhd_f G\) such that \(g \notin Q O y T_1 \dots T_m\).
The subgroup \(M_1 \cap QO\) has finite index in $M_1$ and contains $Q'$, therefore $M_1 \cap QO=N_{j_0}$, for some $j_0 \in \NN$.
Choose $k \in \NN$ such that $i_k \ge j_0$, so that $M'_{i_k} \subseteq N_{j_0} \subseteq QO$ (see \eqref{eq:Ri_def}). Then $R_{i_k}'=M'_{i_k} \cap R\subseteq QO $, hence
\begin{equation}
\label{eq:containment_for_case_2}
Q \langle Q',R'_{i_k} \rangle yT_1 \dots T_m \subseteq QOyT_1 \dots T_m.
\end{equation}
Since \(g \notin Q O y T_1 \dots T_m\), relations \eqref{eq:g_in_prod_case_1} and \eqref{eq:containment_for_case_2} contradict each other.
We have arrived at a contradiction in each of the two cases, hence the proof is complete. \end{proof}
\section{New examples of product separable groups} \label{sec:ex_prod_sep} In this section we prove Theorem~\ref{thm:prod_sep}, which will follow from the three propositions below.
\begin{proposition} \label{prop:limit_gps_are_prod_sep}
Limit groups are product separable. \end{proposition}
\begin{proof}
Dahmani \cite{Dahmani_Conv} and, independently, Alibegovi\'c \cite{Alib} proved that every limit group is hyperbolic relative to a collection of conjugacy class representatives of its maximal non-cyclic finitely generated abelian subgroups.
Moreover, Wilton \cite{WiltonLimitGps} showed that limit groups are LERF and Dahmani \cite{Dahmani_Conv} showed they are \emph{locally quasiconvex} (i.e., each of their finitely generated subgroups is relatively quasiconvex with respect to the given peripheral structure).
Therefore our Theorem~\ref{thm:RZs} yields that limit groups are product separable. \end{proof}
Finitely generated Kleinian groups are not always locally quasiconvex, and we require the following two lemmas to deal with the case when one of the factors is not relatively quasiconvex.
\begin{lemma} \label{lem:Kl-1} Let $N$ be a group and $n \ge 2$ be an integer. Suppose that $H_1,\dots,H_n$ are subgroups of $N$ such that $H_i \lhd N$, for some $i\in \{1,\dots,n\}$, and the image of the product $H_1 \dots H_{i-1}H_{i+1} \dots H_n$ is separable in $N/H_i$. Then $H_1\dots H_n$ is separable in $N$. \end{lemma}
\begin{proof} Let $\varphi:N \to N/H_i$ denote the natural epimorphism. By the assumptions, the subset $S=\varphi(H_1 \dots H_{i-1}H_{i+1} \dots H_n)$ is separable in $N/H_i$. Observe that \[H_1\dots H_n=(H_1 \dots H_{i-1}H_{i+1} \dots H_n) H_i=\varphi^{-1}(S),\] as $H_i \lhd N$, whence $H_1\dots H_n$ is closed in the profinite topology on $N$ because group homomorphisms are continuous with respect to profinite topologies. \end{proof}
\begin{lemma} \label{lem:Kl-2}
Let \(G\) be a group with finitely generated subgroups \(F_1, \dots, F_n \leqslant G\), $n \ge 2$.
Suppose that there exist a finite index subgroup \(G' \leqslant_f G\) and \(i\in \{ 1, \dots, n\}\) such that \( F_i'=F_i \cap G' \lhd G'\) and \(G'/F_i'\) has property \(RZ_{n-1}\).
Then the product \(F_1 \dots F_n\) is separable in $G$. \end{lemma}
\begin{proof} Let $N \lhd_f G$ be a finite index normal subgroup contained in $G'$, and set $H_j=F_j \cap N$, for $j=1,\dots,n$.
Since $|F_j:H_j|<\infty$, for each $j=1,\dots,n$, the product $F_1\dots F_n$ can be written as a finite union of subsets of the form $h_1H_1h_2H_2 \dots h_n H_n$, where $h_1,\dots,h_n \in G$. Observe that \begin{equation*}\label{eq:prod_of_cosets} h_1H_1h_2H_2 \dots h_n H_n= H_1^{g_1} H_2^{g_2} \dots H_n^{g_n} g_n, \end{equation*} where $g_j=h_1 \dots h_j \in G$, $j=1,\dots,n$. Thus, in view of Remark~\ref{rem:sep_props}, in order to prove the separability of $F_1\dots F_n$ in $G$ it is enough to show that the product $H_1^{g_1} H_2^{g_2} \dots H_n^{g_n}$ is separable, for arbitrary $g_1,\dots,g_n \in G$.
Given any elements $g_1,\dots,g_n \in G$, the subgroups $H_1^{g_1}, H_2^{g_2}, \dots , H_n^{g_n} \leqslant G$ are finitely generated and are contained in $N$. Moreover, since the subgroup $H_i=F_i \cap N=F_i' \cap N$ is normal in $N$ and $N \leqslant G'$ is normal in $G$, we see that $H_i^{g_i} \lhd N$ and \[ N/H_i^{g_i}=N^{g_i}/H_i^{g_i} \cong N/H_i \leqslant G'/F_i'.\]
Therefore the group $N/H_i^{g_i}$ has RZ$_{n-1}$, as a subgroup of $G'/F_i'$, so the image of the product $H_1^{g_1} \dots H_{i-1}^{g_{i-1}}H_{i+1}^{g_{i+1}} \dots H_n^{g_n}$ is separable in $N/H_i^{g_i}$. Lemma~\ref{lem:Kl-1} now implies that $H_1^{g_1} H_2^{g_2} \dots H_n^{g_n}$ is separable in $N$, hence it is also separable in $G$ by Lemma~\ref{lem:induced_top}(b). As we observed above, the latter yields the separability of $F_1 \dots F_n$ in $G$, as required. \end{proof}
\begin{proposition} \label{prop:Kleinian_gps_are_prod_sep} Finitely generated Kleinian groups are product separable. \end{proposition}
\begin{proof} Let $G$ be a finitely generated discrete subgroup of $\mathrm{Isom}(\mathbb{H}^3)$. We will first reduce the proof to the case when $G\backslash \mathbb{H}^3$ is a finite volume manifold. This idea is inspired by the argument of Manning and Mart\'{i}nez-Pedroza used in the proof of \cite[Corollary~1.5]{MMPSep}.
Using Selberg's lemma, we can find a torsion-free finite index subgroup $K \leqslant G$. Since product separability of $K$ implies that of $G$ (\cite[Lemma~11.3.5]{Ribes-book}), without loss of generality we can assume that $G$ is torsion-free. It follows that $G$ acts freely and properly discontinuously on $\mathbb{H}^3$, so that $M=G \backslash \mathbb{H}^3$ is a complete hyperbolic $3$-manifold.
If $M$ has infinite volume then, by \cite[Theorem~4.10]{Matsuzki-Taniguchi}, $G$ is isomorphic to a geometrically finite Kleinian group. Thus we can further assume that $G$ is geometrically finite, which allows us to apply a theorem of Brooks \cite[Theorem~2]{Brooks_Extensions} to find an embedding of $G$ into a torsion-free Kleinian group $G^*$ such that $G^* \backslash \mathbb{H}^3$ is a finite volume manifold. If $G^*$ is product separable, then so is any subgroup of it, hence we have made the promised reduction.
Thus we can suppose that $G =\pi_1(M)$, for a hyperbolic $3$-manifold $M$ of finite volume. The tameness conjecture, proved by Agol \cite{Agol-tame} and Calegari-Gabai \cite{Cal-Gab}, combined with a result of Canary \cite[Corollary~8.3]{Canary}, implies that any finitely generated subgroup $F \leqslant G$ is either geometrically finite or a virtual fibre subgroup. The latter means that there is a finite index subgroup $G' \leqslant_f G$ such that $F'=F \cap G' \lhd G'$ and $G'/F' \cong \mathbb{Z}$.
By \cite[Theorem~3.7]{Matsuzki-Taniguchi}, $G$ is a geometrically finite subgroup of $\mathrm{Isom}(\mathbb{H}^3)$, hence it is finitely generated and hyperbolic relative to a finite collection of finitely generated virtually abelian subgroups, each of which is product separable by \cite[Lemma~11.3.5]{Ribes-book}. Moreover, by \cite[Corollary~1.3]{HruskaRHCG}, a subgroup of $G$ is relatively quasiconvex if and only if it is geometrically finite. Finally, $G$ is LERF (and, hence, QCERF) by \cite[Corollary~9.4]{Agol}.
Let $F_1, \dots, F_n$ be finitely generated subgroups of $G$, $n \ge 2$. If $F_j$ is geometrically finite, for all $j=1,\dots,n$, then the product $F_1 \dots F_n$ is separable in $G$ by Theorem~\ref{thm:RZs}. Thus we can suppose that $F_i$ is not geometrically finite, for some $i \in \{1,\dots,n\}$. By the above discussion, in this case $F_i$ must be a virtual fibre subgroup of $G$. Since $\mathbb{Z}$ is product separable, we can apply Lemma~\ref{lem:Kl-2} to conclude that $F_1 \dots F_n$ is separable in $G$, completing the proof. \end{proof}
\begin{proposition} \label{prop:balanced_gps_are_prod_sep}
Let $G$ be the fundamental group of a finite graph of free groups with cyclic edge groups.
If $G$ is balanced then it is product separable. \end{proposition}
Limit groups and Kleinian groups are hyperbolic relative to virtually abelian subgroups. The peripheral subgroups from relatively hyperbolic structures on groups in Proposition~\ref{prop:balanced_gps_are_prod_sep} will be fundamental groups of graphs of cyclic groups, which motivates the next auxiliary lemma.
\begin{lemma} \label{lem:graph_of_cyclic_gps}
Suppose that $G$ is the fundamental group of a finite graph of infinite cyclic groups. If $G$ is balanced then it is product separable. \end{lemma}
\begin{proof}
Suppose that $G=\pi_1(G_{-},\Gamma)$, where $(G_{-},\Gamma)$ is a graph of groups, associated to a finite connected graph $\Gamma$ with vertex set $V \Gamma$ and edge set $E\Gamma$.
According to the assumptions, each vertex group $G_v$, $v \in V\Gamma$, is infinite cyclic.
As usual, we use $G_e$ to denote the edge group corresponding to an edge $e \in E\Gamma$ (see \cite[Section I.3]{Dicks-Dunw} for the definition and general theory of graphs of groups).
If $|E\Gamma|=0$ then $G$ is cyclic and, thus, product separable. Let us proceed by induction on $|E\Gamma|$.
Assume first that one of the edge groups $G_e$ is trivial.
If removing $e$ disconnects $\Gamma$ then $G$ splits as a free product $G_1*G_2$, where $G_1$, $G_2$ are the fundamental groups of finite graphs of infinite cyclic groups corresponding to the two connected components of $\Gamma\setminus\{e\}$.
Otherwise, $G \cong G_1*G_2$, where $G_1$ is the fundamental group of a finite graph of infinite cyclic groups corresponding to the graph $\Gamma\setminus\{e\}$ and $G_2$ is infinite cyclic.
Moreover, $G_1$ and $G_2$ will be balanced as subgroups of the balanced group $G$. Hence $G_1$ and $G_2$ will be product separable by induction, so $G \cong G_1*G_2$ will be product separable by Coulbois' theorem \cite[Theorem~1]{Coulb}.
Therefore we can assume that every edge group $G_e$ is infinite cyclic. This means that $G$ is a \emph{generalised Baumslag-Solitar group}.
The assumption that $G$ is balanced now translates into the assumption that $G$ is \emph{unimodular}, using Levitt's terminology from \cite{Levitt-GBS}.
We can now apply \cite[Proposition~2.6]{Levitt-GBS} to deduce that $G$ has a finite index subgroup $K$ isomorphic to the direct product $F \times \mathbb{Z}$, where $F$ is a free group.
Now, $K \cong F\times \mathbb{Z}$ is product separable by You's result \cite[Theorem~5.1]{You}, hence $G$ is product separable as a finite index supergroup of $K$ (see \cite[Lemma~11.3.5]{Ribes-book}). Thus the lemma is proved. \end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:balanced_gps_are_prod_sep}]
Suppose that $G$ splits as the fundamental group of a finite graph of free groups $(G_-,\Gamma)$ with cyclic edge groups.
Without loss of generality we can assume that each vertex group is a finitely generated free group (in particular, $G$ is finitely generated).
Indeed, otherwise $G \cong G_1*F$, where $G_1$ is the fundamental group of a finite graph of finitely generated free groups with cyclic edge groups and $F$ is free (this follows from the fact that any cyclic subgroup of a free group involves only finitely many of its free generators).
In this case we can deduce the product separability of $G$ from the product separability of $G_1$ and $F$ by \cite[Theorem~1]{Coulb} (recall that $F$ is product separable by the theorem of Ribes and Zalesskii \cite[Theorem~2.1]{RibesZal}).
Now, for each vertex group $G_v$, choose and fix a finite family of maximal infinite cyclic subgroups $\mathbb{P}_v$ such that
\begin{itemize}
\item[(a)] no two subgroups from $\mathbb{P}_v$ are conjugate in $G_v$;
\item[(b)] for every edge $e$ incident to $v$ in $\Gamma$, the image of the cyclic group $G_e$ in $G_v$ is conjugate into one of the subgroups from $\mathbb{P}_v$.
\end{itemize}
Condition (a) means that each $G_v$ is hyperbolic relative to the finite family $\mathbb{P}_v$ (for example, by \cite[Theorem~7.11]{BowditchRHG}), and condition (b) means that each edge group of the given splitting of $G$ is parabolic in the corresponding vertex groups.
Therefore we can apply the work of Bigdely and Wise \cite[Theorem~1.4]{Big-Wise} to conclude that $G$ is hyperbolic relative to a finite collection of subgroups $\mathbb{Q}$, where each $Q \in \mathbb{Q}$ acts cocompactly on a \emph{parabolic tree} (see \cite[Definition~1.3]{Big-Wise}) with vertex stabilisers conjugate to elements of $\cup_{v \in V\Gamma} \mathbb{P}_v$ and edge stabilisers conjugate to elements of $\{ G_e \mid e \in E\Gamma \}$.
The structure theorem for groups acting on trees (\cite[Section~I.4.1]{Dicks-Dunw}) implies that every $Q \in \mathbb{Q}$ is isomorphic to the fundamental group of a finite graph of infinite cyclic groups.
Since $Q$ is balanced, being a subgroup of $G$, we can apply Lemma~\ref{lem:graph_of_cyclic_gps} to conclude that each $Q \in \mathbb{Q}$ is product separable.
By Wise's result \cite[Theorem~5.1]{Wise-balanced}, $G$ is LERF, hence we can apply our Theorem~\ref{thm:RZs} to deduce that the product of a finite number of finitely generated relatively quasiconvex subgroups is separable in $G$.
To establish the product separability of $G$ it remains to show that it is locally quasiconvex. To achieve this we will again use the results of Bigdely and Wise.
More precisely, according to \cite[Theorem~2.6]{Big-Wise}, a subgroup of $G$ is relatively quasiconvex if it is \emph{tamely generated}.
Let $H \leqslant G$ be a finitely generated subgroup.
The splitting of $G$ as the fundamental group of the graph of groups $(G_-,\Gamma)$ induces a splitting of $H$ as the fundamental group of a graph of groups $(H_-,\Delta)$, where for each vertex $u \in V\Delta$ the stabiliser $H_u$ is equal to $H \cap {G_v}^g$, for some $v \in V\Gamma$ and some $g \in G$.
Moreover, the graph $\Delta$ is finite, because $H$ is finitely generated (see \cite[Proposition~I.4.13]{Dicks-Dunw}).
Note that every edge group from $(H_-,\Delta)$ is cyclic, hence each vertex group $H_u$, $u \in V\Delta$, must be finitely generated as $H$ is finitely generated (see \cite[Lemma~2.5]{Big-Wise}).
According to \cite[Definition~0.1]{Big-Wise}, $H$ is tamely generated if for every $u \in V\Delta$ the subgroup $H_u=H \cap {G_v}^g$ is relatively quasiconvex in ${G_v}^g$, equipped with the peripheral structure ${\mathbb{P}_v}^g$.
But the latter is true because ${G_v}^g$ is a finitely generated free group, so any finitely generated subgroup is undistorted, and hence it is relatively quasiconvex with respect to any peripheral structure on ${G_v}^g$, by \cite[Theorem~1.5]{HruskaRHCG}.
Thus every finitely generated subgroup $H \leqslant G$ is tamely generated, and so it is relatively quasiconvex in $G$ by \cite[Theorem~2.6]{Big-Wise}. \end{proof}
\begin{remark}
In the case when the graph of groups has two vertices and one edge (so that $G$ is a free amalgamated product of two free groups over a cyclic subgroup), Proposition~\ref{prop:balanced_gps_are_prod_sep} was originally proved by Coulbois in his thesis: see \cite[Theorem~5.18]{Coulbois-thesis}.
We can use similar methods to recover another result of Coulbois: if $G=H*_{C} F$, where $H$ is product separable, $F$ is free and $C$ is a maximal cyclic subgroup in $F$ then $G$ is product separable \cite[Theorem~5.4]{Coulbois-thesis}.
Indeed, in this case $G$ will be hyperbolic relative to $\mathbb{Q}=\{H\}$ and will be LERF by Gitik's theorem \cite[Theorem~4.1]{Gitik-LERF}.
As in the proof of Proposition~\ref{prop:balanced_gps_are_prod_sep}, the results from \cite{Big-Wise} imply that $G$ is locally quasiconvex. Therefore $G$ is product separable by Theorem~\ref{thm:RZs}. \end{remark}
\printbibliography
\end{document} |
\begin{document}
\selectlanguage{english} \title{On the generalization of the Costas property in the continuum} \author{Konstantinos Drakakis\footnote{The author holds a Diploma in Electrical and Computer Engineering from NTUA, Athens, Greece, and a Ph.D. in Applied and Computational Mathematics from Princeton University, NJ, USA. He was a scholar of the Lilian Boudouris Foundation.}, Scott Rickard \\Electronic and Electrical Engineering\\University College Dublin\\ \& \\ Claude Shannon Institute\footnote{www.shannoninstitute.ie}\\Ireland} \maketitle
\abstract{We extend the definition of the Costas property to functions in the continuum, namely on intervals of the reals or the rationals, and argue that such functions can be used in the same applications as discrete Costas arrays. We construct Costas bijections in the real continuum within the class of piecewise continuously differentiable functions, but our attempts to construct a fractal-like Costas bijection there are successful only under slight but necessary deviations from the usual arithmetic laws. Furthermore, we are able, contingent on the validity of Artin's conjecture, to set up a limiting process according to which sequences of Welch Costas arrays converge to smooth Costas bijections over the reals. The situation over the rationals is different: there, we propose an algorithm of great generality and flexibility for the construction of a Costas fractal bijection. Its success, though, relies heavily on the enumerability of the rationals, and therefore it cannot be generalized over the reals in an obvious way.}
\section{Introduction} Costas arrays \cite{C} have been an active topic of research for more than 40 years now; however, since 1984, when the two algebraic construction methods for Costas arrays (the Welch and the Golomb method \cite{G}), still the only ones available today, were published, there has been effectively no progress at all in the construction of new Costas arrays, with the obvious exception of brute force searches. Recent research on Costas arrays tends to focus on the discovery of new properties \cite{D2,D3,SVM}, hoping that they will either furnish some lead for a new construction method, or prove that such a method does not exist, and thus overcome the current virtual stalemate in the core problems of the field.
In line with this effort, it is likely that research on Costas arrays would benefit from the extension of the definition of the Costas property to the continuum, for two reasons: on the one hand, this might open the door to assistance from the entire arsenal of analysis, as was the case with the successful generalization of the factorial in terms of the Gamma function; on the other hand, the recent advances in the subject of the Instantaneous Frequency of a signal \cite{HSLWSZTL} make it possible to design signals with continuously varying frequencies instead of the piecewise constant frequencies that the usual discrete Costas arrays model. Besides, such objects certainly have an intrinsic pure mathematical merit for study.
In this work, we propose a suitable extension of the definition of the Costas property in the continuum (which we take here to mean the real and rational numbers), and we explain how the existing discrete Costas permutations can be used to generate continuum Costas permutations. Note that, in accordance with common practice in recent literature, we will be using the terms ``Costas permutation'' and ``Costas array'' interchangeably.
\section{Basics} We reproduce below the definition of a Costas function/permutation \cite{D}:
\begin{dfn}\label{bas} Let $[n]:=\{0,\ldots,n-1\},\ n\in\mathbb{N}$ and consider a bijection $f:[n]\rightarrow [n]$; $f$ is a Costas permutation iff the multiset $\{(i-j,f(i)-f(j)): 0\leq j<i< n\}$ is actually a set, namely all of its elements are distinct. \end{dfn}
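Definition \ref{bas} translates directly into a computational test; the short Python sketch below (the function name \texttt{is\_costas} is ours, not part of the formal development) collects all difference vectors and checks them for repetitions.

```python
# Check the Costas property of the definition above: a bijection f on
# {0, ..., n-1} is Costas iff all difference vectors (i - j, f(i) - f(j)),
# taken over index pairs j < i, are pairwise distinct.
def is_costas(f):
    n = len(f)
    assert sorted(f) == list(range(n)), "f must be a permutation of [n]"
    vectors = [(i - j, f[i] - f[j]) for i in range(n) for j in range(i)]
    return len(vectors) == len(set(vectors))

print(is_costas([0, 2, 1]))  # True: (1,2), (2,1), (1,-1) are distinct
print(is_costas([0, 1, 2]))  # False: the vector (1,1) repeats
```

The identity permutation fails because the difference vector $(1,1)$ occurs twice, matching the intuition that a Costas permutation must avoid any repeated ``slope'' pattern.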
These permutations are extremely useful because they give rise to binary signals with an optimal autocorrelation pattern:
\begin{dfn}\label{dac} Let $f:[n]\rightarrow [n]$, $n\in\mathbb{N}^*$, be a Costas permutation, and let $F:\mathbb{Z}^2\rightarrow [2]$, the corresponding binary signal of $f$, satisfy $F(i,f(i))=1,\ i\in[n]$, and $F=0$ everywhere else. The autocorrelation of $f$ is: \[A_F(u,v)=\sum_{i,j\in\mathbb{Z}} F(u+i,v+j)F(i,j),\ (u,v)\in\mathbb{Z}^2\] \end{dfn} The following result is just a restatement of the Costas property:
\begin{thm} Let $f:[n]\rightarrow [n]$, $n\in\mathbb{N}^*$, be a permutation, and let $F$ be its corresponding binary signal; then, $0\leq A_F(u,v)<2,\ \forall (u,v)\in\mathbb{Z}^2-\{(0,0)\}$ iff $f$ has the Costas property. \end{thm}
We have already mentioned the Welch construction method for Costas arrays. As we will refer to it several times below, we offer its definition for the sake of completeness:
\begin{thm}[Welch construction $W_1(p,g,c)$] Let $p$ be a prime, let $g$ be a primitive root of the finite field $\mathbb{F}(p)$ of $p$ elements, and let $c\in[p-1]$ be a constant; then, the function $f:[p-1]+1\rightarrow [p-1]+1$ where $\displaystyle f(i)=g^{i-1+c}\mod p,\ i\in[p-1]+1$ is a bijection with the Costas property.
\end{thm}
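As an illustration (the names \texttt{welch} and \texttt{is\_costas} are ours, and $c=0$ is used by default), the Welch construction can be reproduced and verified in a few lines:

```python
# Welch construction W1(p, g, c): f(i) = g^(i-1+c) mod p on {1, ..., p-1}.
def welch(p, g, c=0):
    return [pow(g, i - 1 + c, p) for i in range(1, p)]

def is_costas(f):
    # all difference vectors over ordered index pairs must be distinct
    n = len(f)
    vectors = [(i - j, f[i] - f[j]) for i in range(n) for j in range(i)]
    return len(vectors) == len(set(vectors))

f = welch(5, 2)        # 2 is a primitive root mod 5
print(f)               # [1, 2, 4, 3]
print(is_costas(f))    # True
```

Any prime $p$ and primitive root $g$ of $\mathbb{F}(p)$ may be substituted; the check succeeds in every case, as the theorem guarantees.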
\section{Costas bijections in the real continuum}
From now on, until Section \ref{crat}, we will be using the term ``continuum'' in the sense of ``real continuum'', unless explicitly stated otherwise.
\subsection{Definitions and simple results}\label{defs}
In our extension of Definition \ref{bas} in the continuum we will replace $[n]$ by $[0,1]$, but otherwise the definition remains the same:
\begin{dfn} Consider a bijection $f:[0,1]\rightarrow [0,1]$; $f$ \emph{is a Costas permutation} iff the multiset $\{(x-y,f(x)-f(y)): 0\leq y<x\leq 1\}$ is actually a set, namely all of its elements are distinct. \end{dfn}
\begin{rmk} The choice of the interval $[0,1]$ is by no means restrictive: it can be seen immediately that for any pair $a,b\in\mathbb{R}$, $a<b$ there exists a linear monotonic mapping $h$ mapping $[0,1]$ bijectively onto $[a,b]$, specifically $h(x)=a+x(b-a),\ 0\leq x\leq 1$, and $f$ has the Costas property on $[a,b]$ iff $h^{-1}\circ f\circ h$ has the Costas property on $[0,1]$. \end{rmk}
Yet again, we can give an alternative but equivalent definition of the Costas property in terms of autocorrelation:
\begin{dfn} Consider a bijection $f:[0,1]\rightarrow [0,1]$, and let $F:\mathbb{R}^2\rightarrow \{0,1,\infty\}$ be its corresponding quasi-binary signal (that is, binary whenever finite), so that $F(x,f(x))=1,\ x\in[0,1]$, and $F=0$ otherwise. The autocorrelation of $f$ is: \[A_f(u,v)=\int_0^1 \int_0^1 \delta(F(x+u,y+v)-F(x,y))dxdy,\ (u,v)\in\mathbb{R}^2\] \end{dfn}
\begin{rmk} Notice that this autocorrelation, just like its discrete counterpart in Definition \ref{dac}, takes integer values whenever finite, as it counts the number of zeros in the argument of the Dirac $\delta$-function. \end{rmk}
Once more, then, the following result is just a restatement of the Costas property:
\begin{thm} Consider a bijection $f:[0,1]\rightarrow [0,1]$, and let $F$ be its corresponding quasi-binary signal; then, $f$ has the Costas property iff $0\leq A_f(u,v)<2,\ \forall (u,v)\in\mathbb{R}^2-\{(0,0)\}$. \end{thm}
\subsection{Applications}
Continuum Costas bijections\footnote{We have to resort to the use of the uncommon word ``continuum'' in the role of an adjective here, instead of the perhaps intuitively more appealing ``continuous'': the term ``continuum function'' accurately describes a function defined on an interval, or at any rate on something non-finite and dense, whereas the term ``continuous function'' has an already established different meaning in mathematics.} can find applications in the same situations their discrete counterparts do \cite{C}. For example, consider a RADAR system whose operation relies on a usual Costas waveform. In practical terms, this means that the waveform it transmits is of the form: \[w(t)=A\cos\left(2\pi\left(\sum_{k=0}^{n-1} \frac{s(k)+1}{n}f \mathbf{1}_{\left[\frac{k}{n}T,\frac{k+1}{n}T\right)}(t)\right)t\right),\ s\text{ a Costas permutation of order $n$},\ t\in[0,T)\] which is a different way to express that, for $n\in\mathbb{N}^*$, $\displaystyle w(t)=A\cos\left(2\pi \frac{s(k)+1}{n}ft\right),\ t\in\left[\frac{k}{n}T,\frac{k+1}{n}T\right),\ k\in[n]$.
Alternatively, we could have used a continuum Costas permutation $s$ on $[0,1]$. Let us consider the waveform: \[w(t)=A\cos\left(2\pi f \int_0^t s(u)du+2\pi f_0 t\right),\ t\in[0,T)\] Bedrosian's theorem \cite{B, HSLWSZTL} on instantaneous frequency asserts that the instantaneous frequency of $w$ is \[ \frac{1}{2\pi}\left(2\pi f \int_0^t s(u)du+2\pi f_0 t\right)'=s(t)f+f_0,\] as long as $\hat{w}(0)=0$; this condition can be satisfied, at least approximately, through an appropriate choice of $f_0$.
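The instantaneous-frequency computation can be sanity-checked numerically. The sketch below is our own illustration under simplifying assumptions: we take $s(u)=u$ (a bijection of $[0,1]$, though certainly not Costas) and arbitrary values for $f$ and $f_0$, so that the phase $2\pi f\int_0^t s(u)du+2\pi f_0 t$ is quadratic in $t$ and a central difference recovers the instantaneous frequency $s(t)f+f_0$ to within rounding error.

```python
# Instantaneous frequency = (phase derivative) / (2*pi). For s(u) = u the
# phase is 2*pi*(f*t^2/2 + f0*t); central differences are exact on quadratics.
import math

f, f0 = 3.0, 1.0                     # arbitrary illustrative parameters
phase = lambda t: 2 * math.pi * (f * t * t / 2 + f0 * t)

def inst_freq(t, h=1e-4):
    return (phase(t + h) - phase(t - h)) / (2 * h) / (2 * math.pi)

for t in (0.25, 0.5, 0.75):
    print(round(inst_freq(t), 6), round(f * t + f0, 6))  # pairs agree
```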
\subsection{Link between continuum and discrete Costas permutations}
How do the 2 definitions compare? The expression for the discrete waveform is clearly a special case of the continuum expression, and this can be seen if we write $\displaystyle S(t)=\sum_{k=0}^{n-1} \frac{s(k)+1}{n}f \mathbf{1}_{\left[\frac{k}{n}T,\frac{k+1}{n}T\right)}(t)$, where $s$ is a Costas array of order $n$ and $S$ is a continuum permutation (but obviously not Costas). The verification of the Costas property through the autocorrelation in the discrete case is also a subprocess of the verification in the continuum case: we just need to take care that horizontal and vertical displacements of the copies of the functions in the autocorrelation formula are integral multiples of $\displaystyle \frac{T}{n}$ and $\displaystyle \frac{f}{n}$, respectively.
As $S$ is a piecewise constant function, one might be tempted to formulate a definition for (at least a class of) continuum Costas permutations in terms of Costas arrays, as limits of sequences of Costas arrays, just like measurable functions are approximated by sequences of piecewise constant functions: a Costas array $s_n$ of order $n$ can be mapped on a piecewise constant function $S_n$, just as we did above, and, letting $n\rightarrow \infty$, we can hopefully obtain a continuum Costas permutation $S$. This limit would probably be highly discontinuous, perhaps of a fractal nature, as Costas arrays are highly erratic and patternless.
The problem with the plan of action suggested above is that we seem to have no good understanding yet of sequences of Costas arrays across different orders that follow a clear pattern, so that we can successfully describe what the limit of such a sequence would look like; a notable exception is the example we give below in Section \ref{lm}. Nevertheless, the idea of seeking continuum Costas permutations among fractals seems promising in principle and worth investigating. But first, let us focus on the case of smooth functions.
\section{Construction of smooth continuum Costas permutations}\label{smperm}
The whole idea of the existence of smooth functions with the Costas property may sound outright irrational at first, and any investigation futile: after all, there can hardly be any object more irregular and discontinuous than Costas arrays. Nonetheless, the continuum is dense in itself, while finite discrete sets are not, and this makes a big difference, as we are about to see: for example, the function $f(x)=x^2$ has no chance of being a permutation on any discrete set other than $[2]\cup\{\infty\}$, while it is a permutation on both $[0,1]$ and $[1,+\infty]$, as it effectively makes some areas of the intervals ``denser'' and some ``sparser'' (consider, for instance, that the images under $f$ of all points in $[0,\sqrt{2}/2]$ get ``crammed'' in the smaller interval $[0,0.5]$). In the continuum we can create Costas permutations by causing ``elastic deformations'', by ``changing the density'' of points in an interval, whereas such techniques are inapplicable on discrete sets.
Let us begin by seeking functions with the Costas property that are reasonably smooth; for example, let us confine ourselves to special categories of almost everywhere differentiable bijections.
\begin{dfn} Let $f:[0,1]\rightarrow [0,1]$ be a bijection; \begin{itemize}
\item $f$ will be \emph{piecewise continuously differentiable} iff there exists $n\in\mathbb{N}^*\cup \{\infty\}$ and a sequence of intervals $\{I_i\}_{i=1}^n$ so that, for each $i=1,\ldots,n$, $f$ is continuously differentiable in $\displaystyle \overset{\circ}{I_i}$ ($n=\infty$ is used as a convention to denote a countable infinity of intervals);
\item if, in addition to being piecewise continuously differentiable, for each $i=1,\ldots,n$ $\displaystyle f'|\overset{\circ}{I_i}$ is strictly monotonic, $f$ will be called \emph{piecewise strictly monotonic piecewise continuously differentiable};
\item if, in addition to being piecewise continuously differentiable, $f$ satisfies the property that, for all sequences of points $\{x_i\}_{i=1}^n$ such that $\displaystyle x_i\in \overset{\circ}{I_i},\ i=1,\ldots,n$, it is true that the sequences $\{f'(x_i)\}_{i=1}^n$ are either all strictly increasing or all strictly decreasing, $f$ will be called \emph{overall strictly monotonic piecewise continuously differentiable};
\item $f$ may combine all 3 features above, in which case it will be called \emph{overall and piecewise strictly monotonic piecewise continuously differentiable}. \end{itemize} \end{dfn}
\begin{thm} Let $f:[0,1]\rightarrow [0,1]$ be an overall and piecewise strictly monotonic piecewise continuously differentiable bijection. Then, $f$ has the Costas property on $[0,1]$. \end{thm}
\begin{proof} Let us choose 4 points in $[0,1]$, say $x$, $y$, $x+d$ and $y+d$, so that $y< x$ and $d\geq 0$; these may actually be 3 points if $x=y+d$. We need to show that \[f(x)-f(x+d)=f(y)-f(y+d)\Rightarrow d=0.\] Exactly one of the 2 pairs of intervals $[y,x],[y+d,x+d]$ or $[x,x+d], [y,y+d]$ consists of intervals with disjoint interiors. Without loss of generality, assume it is the second pair; then the Newton--Leibniz Theorem implies that \[f(x+d)-f(x)=\int_{x}^{x+d}f'(u)du,\qquad f(y+d)-f(y)=\int_{y}^{y+d}f'(u)du.\] Now, since $f$ is overall and piecewise strictly monotonic piecewise continuously differentiable, it is always the case that either $f'(u)< f'(v)$ for all $u\in(x,x+d)$, $v\in(y,y+d)$, or $f'(u)> f'(v)$ for all such $u,v$; hence $f(x)-f(x+d)\neq f(y)-f(y+d)$ unless $d=0$. \end{proof}
\begin{thm} Let $f:[0,1]\rightarrow [0,1]$ be a piecewise continuously differentiable bijection; if $f'$ is not injective, $f$ does not have the Costas property. \end{thm}
\begin{proof} We distinguish the following cases: \begin{itemize}
\item $f'$ is constant on an interval, say $f'\equiv c\in\mathbb{R}$, or, equivalently, $f$ is linear on that interval: it follows that there exist 4 points $x$, $y$, $x+d$ and $y+d$ with $y< x$ and $d> 0$ so that $\displaystyle \frac{f(x+d)-f(x)}{d}=\frac{f(y+d)-f(y)}{d}=c$, hence the Costas property is violated.
\item Assume that $f'$ is never constant on an interval. Then, either there exist $i_1,i_2$ so that $|f'(I_{i_1})\cap f'(I_{i_2})|>0$, namely it fails to be overall strictly monotonic, or there exists an $i$ for which $f'|I_i$ is not monotonic. In either case, there exist 2 points $x_1,x_2\in(0,1)$, so that $x_1<x_2$ and $f'(x_1)=f'(x_2)$. We distinguish 2 subcases:
\begin{itemize}
\item Neither of the points is an inflection point, that is both points lie in regions of the domain where $f$ is either convex or concave; these regions are necessarily different, or the derivative could not possibly be equal at these points. This implies that there exist real numbers $\epsilon_1,\epsilon_2>0$ so that, if 2 parallels are drawn to the tangent at each of the points $x_1$ and $x_2$, at the side of the tangents where the function graph lies, and whose distances from the tangents are less than $\epsilon_1$ and $\epsilon_2$, respectively, they each intersect the function graph at 2 points, say $x_{11}<x_{12}$ and $x_{21}<x_{22}$. Clearly both $x_{11}-x_{12}$ and $x_{21}-x_{22}$ go to 0 as the parallels move closer to the tangents, whence $f(x_{11})-f(x_{12})$ and $f(x_{21})-f(x_{22})$ also go to 0; moreover, if $\epsilon_1$ and $\epsilon_2$ are sufficiently small, $(x_{11},x_{12})\cap (x_{21},x_{22})=\emptyset$, and each of $(x_{11},x_{12}), (x_{21},x_{22})$ falls entirely within one of the intervals $\{I_i\},\ i=1,\ldots,n$. Hence, we can choose a pair of parallels so that $\displaystyle \frac{f(x_{11})-f(x_{12})}{x_{11}-x_{12}}= \frac{f(x_{21})-f(x_{22})}{x_{21}-x_{22}}$ and $x_{11}-x_{12}=x_{21}-x_{22}$. This violates the Costas property.
\item At least one of the points is an inflection point, say $x_1$, so (without loss of generality) there is a $\delta$ so that $x\in(x_1-\delta,x_1+\delta)-\{x_1\}\Rightarrow f'(x)<f'(x_1)$ and $(x_1-\delta,x_1+\delta)$ falls within one of the intervals $\{I_i\},\ i=1,\ldots,n$, say $I_k$. As $f'$ is continuous within $I_k$, and is not constant in any interval, there exist $u_1\in(x_1-\delta,x_1)$, $u_2\in(x_1,x_1+\delta)$ so that neither is an inflection point and $f'(u_1)=f'(u_2)$. We are now back to the case above.
\end{itemize} \end{itemize} \end{proof}
Note that the derivative of a continuously differentiable bijection must keep the same sign throughout its domain, or else the bijection would have an extremum and would not be a bijection. Further, in the case of a continuously differentiable bijection, overall and piecewise strict monotonicity are identical, hence strict monotonicity implies injectivity. Therefore, in this special case, the following holds:
\begin{cor}\ \begin{itemize}
\item Let $f:[0,1]\rightarrow [0,1]$ be a bijection continuously differentiable in $(0,1)$; then, $f$ has the Costas property iff $f'$ is strictly monotonic.
\item A continuously differentiable bijection $f:[0,1]\rightarrow [0,1]$ with the Costas property must be strictly monotonic. \end{itemize} \end{cor>
\begin{rmk} The issue of the continuity of the derivative of a function is rather subtle. When a function is differentiable in an open interval, its derivative is not necessarily continuous. However, it is ``almost'' continuous, in the sense that, for any value between 2 values the derivative actually assumes at 2 points, there is a point between the 2 aforementioned points where the derivative assumes the chosen value. This property is known as Darboux continuity in the literature \cite{BC}. By working with piecewise continuously differentiable functions, we sidestep this technical point. \end{rmk}
Let us now see some examples of continuously differentiable bijections with the Costas property as well as some rules to produce new ones from known ones:
\begin{cor}\label{exmp} The following continuously differentiable bijections $f:[0,1]\rightarrow [0,1]$ have the Costas property on $[0,1]$: \begin{itemize}
\item $f(x)=x^a$, $a\in\mathbb{R}_+$, $a\neq 0,1$;
\item $\displaystyle f(x)=\frac{a^x-1}{a-1},\ a\in\mathbb{R}^*_+ -\{1\}$;
\item $\displaystyle f(x)=\sin\left(\frac{\pi}{2} x\right)$; \end{itemize} Further, if $f,g:[0,1]\rightarrow [0,1]$ are continuously differentiable bijections and have the Costas property on $[0,1]$, the following functions also do: \begin{itemize}
\item $1-f$;
\item $af+bg$, $a,b\in\mathbb{R}_+$, $a+b=1$, if $f,g$ are both strictly increasing or both strictly decreasing, and so are $f',g'$;
\item $f\circ g$, if $f',g'$ are strictly monotonic of the same type and $g$ is strictly increasing;
\item $fg$, if $f,g,f',g'$ are all strictly increasing or all strictly decreasing. \end{itemize} \end{cor}
\begin{proof} Observe that $\displaystyle \left(\frac{a^x-1}{a-1}\right)'=\ln(a)\frac{a^x}{a-1}$ is strictly increasing for $a>1$ and strictly decreasing for $a<1$, $(x^a)'=ax^{a-1}>0$ is strictly increasing when $a>1$ and strictly decreasing when $0<a<1$, and $\displaystyle \left(\sin\left(\frac{\pi}{2} x\right)\right)'=\frac{\pi}{2}\cos\left(\frac{\pi}{2} x\right)$ is strictly decreasing. Moreover, all of these functions are bijections, hence, they have the Costas property.
Further, \begin{itemize}
\item $(1-f)'=-f'$ is strictly monotonic iff $f'$ is, although of the opposite type, and $1-f$ is a bijection on $[0,1]$, so it also has the Costas property.
\item $(af+bg)'=af'+bg'$ is strictly monotonic if $f',g'$ are both strictly monotonic of the same type, and $af+bg$ is strictly monotonic too, hence a bijection, if $f,g$ are both strictly monotonic of the same type.
\item $f\circ g$ is clearly a bijection if both $f$ and $g$ are, and $(f\circ g)' = (f'\circ g)\,g'$ is strictly increasing (decreasing) if both $f',g'$ are strictly increasing (decreasing) and $g$ is strictly increasing.
\item $fg$ is strictly increasing (decreasing), hence a bijection, if $f,g$ are both strictly increasing (decreasing), while $(fg)'=fg'+f'g$ is strictly increasing (decreasing) if $f,g,f',g'$ are all strictly increasing (decreasing). \end{itemize} \end{proof}
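The mechanism behind these examples can be spot-checked numerically: for $f(x)=x^2$ and fixed $d>0$, the increment $f(x+d)-f(x)=2xd+d^2$ is strictly increasing in $x$, so no two distinct base points can produce the same increment, which is exactly the Costas condition for pairs sharing the same horizontal displacement. A minimal sketch of this check (the grid sizes are arbitrary choices of ours):

```python
# For f(x) = x^2 and fixed d > 0, the map x -> f(x+d) - f(x) = 2xd + d^2 is
# strictly increasing, so distinct base points never yield equal increments.
f = lambda x: x * x
d = 0.125
xs = [i / 100 for i in range(88)]          # keep x + d inside [0, 1]
increments = [f(x + d) - f(x) for x in xs]
print(all(a < b for a, b in zip(increments, increments[1:])))  # True
```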
We have now offered a quite extensive description of the class of piecewise continuously differentiable bijections on $[0,1]$ with the Costas property, and an exact characterization of the continuously differentiable bijections with the Costas property. What about discontinuous bijections, though? By interpreting discontinuity in the most extreme way, we are led back to the idea of fractals.
\section{Costas fractals}\label{cofrac}
In what follows, we establish a connection between discrete and continuum Costas permutations: we use discrete Costas permutations to build continuum ones through a process of multiscale rearrangement of subintervals of $[0,1]$; in other words, we build a ``Costas fractal''. At this moment, however, we are unable to prove the correctness of our construction below under the usual laws of arithmetic: we will need the equivalent of ``xor'' addition (and subtraction), namely addition without carry, in representations over an arbitrary basis.
We will need first of all the slightly stronger definition given below:
\begin{dfn}Consider a bijection $f:[n]\rightarrow [n]$; $f$ is a \emph{modulo Costas permutation} iff the multiset $\{(i-j,f(i)-f(j)\mod(n+1)): 0\leq j<i< n\}$ is actually a set, namely all of its elements are distinct. \end{dfn}
\begin{rmk} Note that both the Golomb and the Welch constructions actually lead to modulo Costas permutations \cite{D,G}. \end{rmk}
\begin{dfn}\label{no} Let the numbers $x,y\in[0,1]$ be expanded over basis $n\in\mathbb{N}^*$: $\displaystyle x=\sum_{i=1}^\infty x_i n^{-i}$, $\displaystyle y=\sum_{i=1}^\infty y_i n^{-i}$, where $\forall i\in\mathbb{N}^*, x_i,y_i\in[n]$. Then, we define the ``no carry'' addition and subtraction as: \[x\oplus y=\sum_{i=1}^\infty \frac{(x_i+y_i)\mod n}{n^i},\ x\ominus y=\sum_{i=1}^\infty \frac{(x_i-y_i)\mod n}{n^i}\] \end{dfn}
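On truncated base-$n$ expansions, the operations of Definition \ref{no} act digit by digit; the sketch below (our own helper, with digits listed most significant first) implements both $\oplus$ and $\ominus$.

```python
# "No carry" addition/subtraction: combine base-n digits modulo n, position
# by position, with no carries or borrows propagating between positions.
def no_carry(xd, yd, n, sign=+1):
    return [(a + sign * b) % n for a, b in zip(xd, yd)]

# base 10: 0.75 "plus" 0.58 = 0.23 (7+5 -> 2, 5+8 -> 3, carries dropped),
#          0.75 "minus" 0.58 = 0.27 (7-5 -> 2, 5-8 -> 7 mod 10)
print(no_carry([7, 5], [5, 8], 10))      # [2, 3]
print(no_carry([7, 5], [5, 8], 10, -1))  # [2, 7]
```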
\begin{thm}\label{fr} Let $n\in\mathbb{N}$ and let $f_i:[n]\rightarrow [n],\ i\in\mathbb{N}^*$ be a sequence of (not necessarily distinct) modulo Costas permutations. Define a function $F:[0,1]\rightarrow [0,1]$ by the following formula: \[F\left(\sum_{i=1}^\infty a_i n^{-i}\right)=\sum_{i=1}^\infty f_i(a_i)n^{-i}\] where $\forall i\in\mathbb{N}^*, a_i\in[n]$, and so that there exists no $N\in\mathbb{N}^*: a_i=n-1$ for $i\geq N$, unless $N=1$. Then, $F$ has the Costas property, when subtraction is interpreted as in Definition \ref{no}. \end{thm}
\begin{rmk} The explicit exclusion of sequences $\{a_i\}_{i=1}^\infty$ so that $\exists N\in\mathbb{N}^*: a_i=n-1$ for $i\geq N$ is necessary in order to ensure that every number in $[0,1)$ can be expressed over base $n$ in a unique way, otherwise some numbers can have 2 different expansions: a familiar example over base 10 would be that $0.5=0.5000\ldots=0.4999\ldots$. However, we still need to represent $\displaystyle 1=\sum_{i=1}^\infty \frac{n-1}{n^i}$, hence the exception for $N=1$. \end{rmk}
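The digitwise construction of Theorem \ref{fr} is straightforward to realize on truncated expansions. The sketch below (all names are ours) applies the same digit permutation at every level and checks it against the modulo Costas condition exactly as stated in the definition above, with differences taken modulo $n+1$.

```python
# Digitwise map of the theorem: F(sum a_i n^-i) = sum f_i(a_i) n^-i, here
# using the same permutation f at every level, on a truncated expansion.
def is_modulo_costas(f):
    # difference vectors with second coordinate reduced mod (n+1)
    n = len(f)
    vectors = [(i - j, (f[i] - f[j]) % (n + 1))
               for i in range(n) for j in range(i)]
    return len(vectors) == len(set(vectors))

def F_digits(digits, f):
    return [f[a] for a in digits]

f = [0, 2, 1]                      # differences mod 4 are all distinct
print(is_modulo_costas(f))         # True
print(F_digits([1, 2, 0], f))      # [2, 1, 0]: F(1/3 + 2/9) = 2/3 + 1/9
```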
\begin{proof} Select 4 points in $[0,1]$, say $x$, $y$, $x+d$ and $y+d$ so that $y< x$ and $d\geq 0$; notice that these can actually be 3 equidistant points if $y+d=x$. We need to test whether $F(x)\ominus F(x+ d)=F(y)\ominus F(y+ d)$ necessarily implies $d=0$.
Let the interval $[0,1]$ be divided into $n$ subintervals, $\displaystyle \left\{I_{1;i}=\left[\frac{i}{n}, \frac{i+1}{n}\right):i\in[n-1]\right\}\bigcup \left\{I_{1;n-1}=\left[\frac{n-1}{n}, 1\right] \right\}$, so that $\forall i\in[n], F(I_{1;i})=I_{1;f_1(i)}$. We distinguish the following cases:
\begin{enumerate}
\item \label{main} $y+d\neq x$ and the 4 chosen points all lie in different subintervals: then, we can write $\displaystyle F(x)=\frac{s_1}{n}+\epsilon_1$, $\displaystyle F(y)=\frac{s_2}{n}+\epsilon_2$, $\displaystyle F(x+d)=\frac{s_3}{n}+\epsilon_3$, and $\displaystyle F(y+d)=\frac{s_4}{n}+\epsilon_4$, with $s_i\in[n]$, $\displaystyle \epsilon_i<\frac{1}{n},\ i=1,2,3,4$. It follows that $\displaystyle F(x)\ominus F(x+d)=\frac{(s_1-s_3)\mod n}{n}+(\epsilon_1 \ominus \epsilon_3)$, and $\displaystyle F(y)\ominus F(y+d)=\frac{(s_2-s_4)\mod n}{n}+(\epsilon_2 \ominus \epsilon_4)$, where, if we assume $d>0$, $(s_1-s_3)\mod n\neq (s_2-s_4) \mod n $, by the modulo Costas property of $f_1$, while $\displaystyle |(\epsilon_1 \ominus \epsilon_3)\ominus (\epsilon_2 \ominus \epsilon_4)|<\frac{1}{n}$. Hence, $F(x)\ominus F(x+d)\neq F(y)\ominus F(y+d)$ and the proof is complete for this case.
\item $y+d= x$ and the 3 chosen points all lie in different subintervals: then we can repeat verbatim the previous argument with 3 instead of 4 points.
\item $y+d\neq x$ and one pair of the 4 chosen points lie in the same subinterval, while the remaining pair lie in different subintervals: then, without loss of generality, assume that $x$ and $x+d$ lie in the same subinterval. In terms of the previous argument, $(s_1-s_3)\mod n=0\neq (s_2-s_4)\mod n$ and the proof follows again.
\item $y+d= x$ and the 3 chosen points lie in 2 different subintervals: then, exactly 2 points lie in the same subinterval, and, without loss of generality, assume they are $y$ and $y+d=x$. In terms of the previous argument, $s_4=s_1$, $(s_1-s_3)\mod n\neq (s_2-s_1)\mod n=0$ and the proof follows again.
\item\label{bad} Either $y+d\neq x$ and the 4 chosen points lie pairwise in the same subintervals, or $y+d= x$ and the 3 chosen points all lie in the same subinterval: then, assume, without loss of generality, that $x$ and $x+d$ lie in the same subinterval, and so do $y$ and $y+d$. It follows that $(s_1-s_3)\mod n=0= (s_2-s_4)\mod n$ and the argument fails. \end{enumerate}
In the last case where the argument fails, we need to refine our subinterval division. We already saw the first level of this division. At level $k\in\mathbb{N}$, we consider the collection of intervals \begin{multline*} \left\{I_{k;i_1,\ldots,i_k}=\left[\sum_{j=1}^k\frac{i_j}{n^j}, \sum_{j=1}^{k-1}\frac{i_j}{n^j}+\frac{i_k+1}{n^k}\right):\ i_j\in[n],\ j=1,\ldots,k,\ \exists j:i_j\neq n-1\right\}\bigcup\\ \left\{I_{k;n-1,\ldots,n-1}=\left[1-\frac{1}{n^k},1\right]\right\} \end{multline*}
With respect to the newly defined levels of subintervals, there are 2 possibilities: \begin{itemize}
\item The chosen points fall in a case other than \ref{bad} for the first time in level $k$: then, it must be the case that:
\begin{multline*} \sum_{j=1}^k \frac{(f_j(x_j+d_j)-f_j(x_j))\mod n}{n^j}=\frac{(f_k(x_k+d_k)-f_k(x_k))\mod n}{n^k}\neq\\ \sum_{j=1}^k \frac{(f_j(y_j+d_j)-f_j(y_j))\mod n}{n^j}=\frac{(f_k(y_k+d_k)-f_k(y_k))\mod n }{n^k}\end{multline*}
due to the modulo Costas property of $f_k$, whence $F(x)\ominus F(x+d)\neq F(y)\ominus F(y+d)$ for $d>0$.
\item Otherwise, we need to consider the levels beyond level $k$. \end{itemize} But the length of the subintervals in level $k$ is $n^{-k}$ which decays to 0 as $k\rightarrow \infty$; therefore, any specific selection of points can remain in case \ref{bad} for a finite number of levels only. This completes the proof. \end{proof}
It is easy to see where our proof fails under ordinary arithmetic: revisiting case \ref{main}, we would need to show that, under the assumption that $s_1-s_3\neq s_2-s_4$, which holds because $f_1$ is a Costas permutation (we no longer need it to be a modulo Costas permutation), $\displaystyle \frac{s_1-s_3}{n}+(\epsilon_1 - \epsilon_3)\neq \frac{s_2-s_4}{n}+(\epsilon_2 - \epsilon_4)$ holds. Since $\displaystyle \epsilon_i<\frac{1}{n},\ i=1,2,3,4$, it follows that $\displaystyle |\epsilon_1 - \epsilon_3|,|\epsilon_2 - \epsilon_4|<\frac{1}{n}$ and $\displaystyle |(\epsilon_1- \epsilon_3)- (\epsilon_2 - \epsilon_4)|<\frac{2}{n}$, so that, if $|(s_1-s_3)-(s_2-s_4)|=1$, it may still be the case that $\displaystyle \frac{s_1-s_3}{n}+(\epsilon_1 - \epsilon_3)= \frac{s_2-s_4}{n}+(\epsilon_2 - \epsilon_4)\Leftrightarrow F(x)-F(x+d)=F(y)-F(y+d)$ when $d>0$, and the Costas property fails.
The key feature of the arithmetic proposed in Definition \ref{no} that allowed the proof of Theorem \ref{fr} to complete successfully was that if, at any level of interval subdivision, the 4 chosen points were found to lie into distinct subintervals, the defining inequality of the Costas property would be satisfied for the chosen points. There are alternative arithmetics with this property:
\begin{dfn}\label{np}
Let the numbers $x,y\in[0,1]$ be expanded over basis $n\in\mathbb{N}^*$: $\displaystyle x=\sum_{i=1}^\infty x_i n^{-i}$, $\displaystyle y=\sum_{i=1}^\infty y_i n^{-i}$, where $\forall i\in\mathbb{N}^*, x_i,y_i\in[n]$. Then, we define the ``contracted'' subtraction as: \[x\ominus y=\sum_{i=1}^\infty \frac{x_i-y_i}{n^{2i-1}}\] \end{dfn}
\begin{thm}\label{fs} Let $n\in\mathbb{N}$ and let $f_i:[n]\rightarrow [n],\ i\in\mathbb{N}^*$ be a sequence of (not necessarily distinct) Costas permutations. Define a function $F:[0,1]\rightarrow [0,1]$ by the following formula: \[F\left(\sum_{i=1}^\infty a_i n^{-i}\right)=\sum_{i=1}^\infty f_i(a_i)n^{-i}\] where $\forall i\in\mathbb{N}^*, a_i\in[n]$, and so that there exists no $N\in\mathbb{N}^*: a_i=n-1$ for $i\geq N$, unless $N=1$. Then, $F$ has the Costas property, when subtraction is interpreted as in Definition \ref{np}. \end{thm}
\begin{proof} This is a verbatim repetition of the proof of Theorem \ref{fr}. \end{proof}
Is it likely that Theorem \ref{fr} still holds true under ordinary arithmetic, despite the fact that our proof does not carry through? At this time we have no reason to believe that it does. It may still be possible to use discrete Costas permutations to generate a Costas fractal in the continuum, but the actual mechanism would most probably have to be different.
\section{A limiting process}\label{lm}
Assuming Artin's Conjecture holds true \cite{M}, which would be the case if the Generalized Riemann Hypothesis holds true, for any non-square integer $k\in\mathbb{N}^*$, $k>1$ there exists an infinite sequence of primes, say $\{p_n\}_{n\in\mathbb{N}^*}$, for which $k$ is a primitive root. We can construct then the sequence of Welch Costas permutations corresponding to the primes of the sequence and the primitive root $k$: \[f_n: [p_n-1]+1\rightarrow [p_n-1]+1,\ f_n(i)=k^{i-1}\mod p_n,\ i\in[p_n-1]+1,\ n\in\mathbb{N}^*\]
The key observation is that $\forall m\in\mathbb{N}^*, \exists N\in\mathbb{N}^*:\ \forall n\geq N,\ \forall i=1,\ldots,m,\ f_n(i)=k^{i-1}$; in particular, $N$ is the smallest integer for which $p_N>k^{m-1}$. In other words, for any fixed number of terms, all functions of the sequence, after skipping a finite number of functions, have these initial terms in common. Define then the \emph{pointwise intermediate limit} of $\{f_n\}_{n\in\mathbb{N}^*}$ to be as follows: for a fixed $i\in\mathbb{N}^*$, \[f(i)=\lim_{n\rightarrow\infty} f_n(i)=k^{i-1},\text{ since $f_n(i)=k^{i-1}$ for all $n\geq N$, where $N$ is the smallest integer such that $p_N>k^{i-1}$}\]
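The common-beginning property is easy to observe numerically. In the sketch below (our own illustration), the primes $5, 11, 13, 19$ all admit $2$ as a primitive root, and the corresponding Welch permutations with $c=0$ agree on every initial segment of indices $i$ with $k^{i-1}<p$, where no modular reduction has yet taken place.

```python
# Welch permutations f_n(i) = 2^(i-1) mod p share the prefix 1, 2, 4, 8, ...
# for as long as the powers of 2 stay below p (no reduction happens yet).
def welch(p, g):
    return {i: pow(g, i - 1, p) for i in range(1, p)}

k = 2
for p in (5, 11, 13, 19):                       # primes with primitive root 2
    f = welch(p, k)
    m = max(i for i in f if k ** (i - 1) < p)   # prefix length with k^(i-1) < p
    print(p, [f[i] for i in range(1, m + 1)])   # unreduced powers of 2
```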
Choose now a sequence $\{i_n\}_{n\in\mathbb{N}^*}$ of integers such that $\displaystyle \lim\frac{i_n-1}{p_n}=x$. We define the limit of $\{f_n\}_{n\in\mathbb{N}^*}$ evaluated on $\{i_n\}_{n\in\mathbb{N}^*}$ to be a continuum function on $[0,1]$ as follows: \[s(x)=\lim \left(f(i_n)\right)^{\frac{1}{p_n}}=k^x,\ x\in[0,1)\] We can bring the range of $s$ within $[0,1)$ as well after a linear transformation, and create: $\displaystyle S(x)=\frac{k^x-1}{k-1}$; this is the second example function in Corollary \ref{exmp}.
To sum up, in the special case of an infinite sequence of Welch Costas permutations generated by a common primitive root $k$, we were able to carry out a limiting process and construct a continuum Costas permutation, using the property that all the members of this sequence (except possibly some of the first ones) have a common beginning. The limit we obtained, however, is a smooth function and not a fractal, as one might expect given the way Welch Costas permutations look.
\section{Costas bijections in the rational continuum} \label{crat}
The idea of fractals with the Costas property in the (real) continuum was explored above in Section \ref{cofrac}, where we saw that their implementation required special considerations. We return to this issue here, but this time in the context of the rationals $Q=\mathbb{Q}\cap [0,1]$: in many ways the rationals stand midway between the integers and the reals, in the sense that they form a dense set (like the reals), but still enumerable (like the integers). We are about to see that these 2 properties allow us to make further progress in the subject.
Note that Costas permutations on the rational continuum is a genuinely new problem, and in no way a special case of the constructions in the real continuum; the reason is that the constructions of Section \ref{smperm} do not map bijectively the rationals onto the rationals. For example, $f(x)=x^2$ is not a bijection over $Q$, as $\displaystyle \nexists x\in Q:\ f(x)=\frac{1}{3}$, say.
The relevant definitions of the Costas property on rational bijections closely parallel the ones in Section \ref{defs} (regarding the real continuum) and will not be repeated here.
\subsection{An existence result}
In this section we offer an algorithm of considerable generality for the construction of bijections on $Q$ with the Costas property. Let us begin by reordering the elements of $Q$ as follows: we order firstly by the magnitude of the denominator, and secondly by the magnitude of the numerator (both in an increasing way). Explicitly, first come those rational numbers in $[0,1]$ whose denominator is 1, namely $\displaystyle 0=\frac{0}{1}$ and $\displaystyle 1=\frac{1}{1}$; then, those whose denominator is 2, namely $\displaystyle \frac{1}{2}$; then, those whose denominator is 3, namely $\displaystyle \frac{1}{3}$ and $\displaystyle \frac{2}{3}$ etc. Hence, the sequence looks like this: \[0,1,\frac{1}{2},\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5},\frac{1}{6},\frac{5}{6}\ldots \] Notice that the numerators are always taken to be relatively prime to the denominators in order to avoid duplicate entries. We denote $Q$ equipped with this particular ordering by $Q_X$, and its elements, in the order dictated by the ordering, by $x_0,x_1,x_2,\ldots$. This ordering has the advantage that each rational is preceded by a finite number of rationals only (in set theoretic terminology, it does not contain any transfinite points). Similarly, we denote by $Q_Y$ the set $Q$ equipped with any arbitrary but fixed ordering without transfinite points, and we denote its elements, in the order dictated by its ordering, by $y_0,y_1,y_2,\ldots$.
Consider now the following algorithm for the construction of a mapping $f:Q\rightarrow Q$:
\begin{alg}\ \label{algq}
\begin{description}
\item[Initialization] Choose $f(x_0)=y_0$; set $Q'_{Y}\leftarrow Q_Y-\{y_0\}$, $Q'_{X}\leftarrow Q_X-\{x_0\}$, $X\leftarrow \{x_0\}$, $Y\leftarrow \{y_0\}$, and $D\leftarrow \{\}$.
\item[Find $x$ for $y$:] Set $Q_{X,\text{av}}\leftarrow Q'_X$, $x\leftarrow \inf Q_{X,\text{av}}$, $y\leftarrow \inf Q'_{Y}$; while the collection $\{\text{sgn}(x'-x)(x'-x, f(x')-y): x'\in X\}\cup D$ contains a repeated element (i.e.\ is a proper multiset), set $Q_{X,\text{av}}\leftarrow Q_{X,\text{av}}-\{x\}$, $x\leftarrow \inf Q_{X,\text{av}}$, and repeat. Set $f(x)=y$, $D\leftarrow \{\text{sgn}(x'-x)(x'-x, f(x')-y): x'\in X\}\cup D$, $Q'_{Y}\leftarrow Q'_{Y}-\{y\}$, $Q'_{X}\leftarrow Q'_{X}-\{x\}$, $X\leftarrow X\cup \{x\}$, $Y\leftarrow Y\cup \{y\}$.
\item[Find $y$ for $x$:] Set $Q_{Y,\text{av}}\leftarrow Q'_Y$, $y\leftarrow \inf Q_{Y,\text{av}}$, $x\leftarrow \inf Q'_{X}$; while the collection $\{\text{sgn}(x'-x)(x'-x, f(x')-y): x'\in X\}\cup D$ contains a repeated element (i.e.\ is a proper multiset), set $Q_{Y,\text{av}}\leftarrow Q_{Y,\text{av}}-\{y\}$, $y\leftarrow \inf Q_{Y,\text{av}}$, and repeat. Set $f(x)=y$, $D\leftarrow \{\text{sgn}(x'-x)(x'-x, f(x')-y): x'\in X\}\cup D$, $Q'_{Y}\leftarrow Q'_{Y}-\{y\}$, $Q'_{X}\leftarrow Q'_{X}-\{x\}$, $X\leftarrow X\cup \{x\}$, $Y\leftarrow Y\cup \{y\}$. \end{description} The algorithm needs to be supplied with a step sequence before execution begins. For the purposes of the correctness proof the exact step sequence is unimportant (this is yet another degree of freedom of the algorithm), as long as the following rules are observed: \begin{itemize}
\item Initialization is run first and only once;
\item Neither Find $x$ for $y$ nor Find $y$ for $x$ is run infinitely many times in a row. \end{itemize} \end{alg}
For example, when $Q_Y=Q_X$ and the steps are run alternatingly, we get $\displaystyle f(0)=0,\ f(1)=1,\ f\left(\frac{1}{2}\right)=\frac{1}{3},\ f\left(\frac{1}{3}\right)=\frac{1}{2},\ f\left(\frac{2}{3}\right)=\frac{2}{3}$ etc.
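The alternating run above can be reproduced mechanically. Below is a minimal Python sketch of a finite prefix of Algorithm \ref{algq} with $Q_Y=Q_X$ (our illustrative implementation; all names are ours): the even-numbered steps play Find $x$ for $y$ and the odd-numbered ones Find $y$ for $x$.

```python
from fractions import Fraction
from math import gcd
from itertools import count

def rationals():
    # the Q_X ordering: by denominator, then numerator, in lowest terms
    yield Fraction(0)
    yield Fraction(1)
    for q in count(2):
        for p in range(1, q):
            if gcd(p, q) == 1:
                yield Fraction(p, q)

def costas_prefix(steps):
    """Run `steps` alternating steps of the algorithm and return f."""
    gen, enum = rationals(), []

    def unused(used):
        # yield the not-yet-used rationals, in the Q_X order
        i = 0
        while True:
            if i == len(enum):
                enum.append(next(gen))
            if enum[i] not in used:
                yield enum[i]
            i += 1

    f = {Fraction(0): Fraction(0)}          # Initialization: f(x_0) = y_0
    used_x, used_y = {Fraction(0)}, {Fraction(0)}
    D = set()                               # difference vectors accumulated so far

    def vectors(x, y):
        # sgn-normalized vectors created by the tentative choice f(x) = y
        return [(xp - x, yp - y) if xp > x else (x - xp, y - yp)
                for xp, yp in f.items()]

    def admissible(x, y):
        vs = vectors(x, y)
        return len(set(vs)) == len(vs) and not set(vs) & D

    for step in range(steps):
        if step % 2 == 0:                   # Find x for y
            y = next(unused(used_y))
            x = next(x for x in unused(used_x) if admissible(x, y))
        else:                               # Find y for x
            x = next(unused(used_x))
            y = next(y for y in unused(used_y) if admissible(x, y))
        D |= set(vectors(x, y))
        f[x] = y
        used_x.add(x)
        used_y.add(y)
    return f

f = costas_prefix(4)
# reproduces the example run: f(1)=1, f(1/2)=1/3, f(1/3)=1/2, f(2/3)=2/3
```

With `steps` increased, the same code extends the bijection arbitrarily far; the correctness argument of Theorem \ref{thmq} is what guarantees each inner search terminates.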
\begin{thm}\label{thmq} Algorithm \ref{algq} produces infinitely many bijections $f:Q\rightarrow Q$ with the Costas property. \end{thm}
\begin{proof} In order to prove the correctness of Algorithm \ref{algq} above, we need to demonstrate that a) $\forall y\in Q_Y, \exists! x\in Q_X: f(x)=y$, and b) $\forall x\in Q_X, \exists y\in Q_Y: f(x)=y$. To begin with, note that the construction algorithm above guarantees that the constructed $f$ has the Costas property and that every $y\in Q_Y$ appears in the range of $f$ at most once. We only need to show that the algorithm never gets ``stuck'', namely that the two while loops always exit. \begin{itemize}
\item For a given $x$, is it possible to assign a value to $f(x)$? In other words, if $A\subset Q'_Y$ is the set of all values $f(x)$ can take without violating the Costas property of $f$, is it true that $A\neq \emptyset$? The answer is in the affirmative, as, intuitively, we can see that the Costas property restrictions impose only a finite number of constraints on $f(x_i)$, while $Q'_Y$ is countably infinite. Rigorously, we have to check 2 conditions:
\begin{itemize}
\item Let $A_1\subset Q'_Y$ be the set of possible values for $f(x)$ for which $\text{sgn}(x'-x)(x'-x,f(x')-f(x))=\text{sgn}(x'-x'')(x'-x'',f(x')-f(x''))$ is never true for $x',x''\in X$. We show that $A_1\neq \emptyset$. In fact, consider $\displaystyle \frac{1}{p}$, where $p$ is a prime that does not appear as a factor in the denominator of any $f(x'),\ x'\in X$: choosing $\displaystyle f(x)=\frac{1}{p}$, it follows that $\displaystyle \frac{1}{p}-f(x')$ contains $p$ as a factor in its denominator, while $f(x')-f(x'')$ does not, hence they cannot be equal; therefore $\displaystyle \frac{1}{p}\in A_1\neq \emptyset$, as promised. Clearly, there are infinitely many possible choices for $p$, so $A_1$ actually contains infinitely many elements.
\item Let $A_2\subset A_1$ be the set of possible values for $f(x)$ for which $\text{sgn}(x'-x)(x'-x,f(x')-f(x))=\text{sgn}(x''-x)(x''-x,f(x'')-f(x))$ is never true for $x',x''\in X$. We show that $A_2\neq \emptyset$. In order for one of these equalities to hold, $x$ must be the midpoint of $x'$ and $x''$, while at the same time $f(x)$ be the midpoint of $f(x')$ and $f(x'')$. Choosing $\displaystyle f(x)=\frac{1}{p}$ where $p$ is as above, and writing $\displaystyle x'=\frac{u_1}{v_1},\ x''=\frac{u_2}{v_2}$, we need to investigate whether the following is possible:
\[\frac{1}{2}\left(\frac{u_1}{v_1}+\frac{u_2}{v_2}\right)=\frac{1}{p},\qquad (u_1,v_1)=(u_2,v_2)=1,\qquad p\nmid v_1,\ p\nmid v_2.\]
This implies $p(u_1 v_2+u_2 v_1)=2v_1v_2$, and therefore that $p|2v_1v_2\Rightarrow p|2\Rightarrow p=2$. Hence, $A_2$ contains all points of the form $\displaystyle \frac{1}{p}$, where $p$ is a prime dividing the denominator of no $f(x'),\ x'\in X$ (and there are infinitely many such primes), except possibly $\displaystyle \frac{1}{2}$; in any case $A_2\neq \emptyset$.
\end{itemize}
But $A_2=A$, hence $A\neq \emptyset$; therefore, $f(x)$ can assume a value without $f$ losing the Costas property.
\item For a given $y$, is it possible to find $x\in Q'_X$ so that $f(x)=y$? In other words, if $A\subset Q'_X$ is the set of all values $x$ for which $f(x)$ can be $y$ without violating the Costas property of $f$, is it true that $A\neq \emptyset$? The answer is in the affirmative as well, and the argument is an almost verbatim repetition of the argument above. Rigorously, we have to check 2 conditions:
\begin{itemize}
\item Let $A_1\subset Q'_X$ be the set of possible values for $x$ for which $\text{sgn}(x'-x)(x'-x,f(x')-y)=\text{sgn}(x'-x'')(x'-x'',f(x')-f(x''))$ is never true for $x',x''\in X$. We show that $A_1\neq \emptyset$. In fact, consider $\displaystyle \frac{1}{p}$, where $p$ is a prime that does not appear as a factor in the denominator of any $x'\in X$: choosing $\displaystyle x=\frac{1}{p}$, it follows that $\displaystyle \frac{1}{p}-x'$ contains $p$ as a factor in its denominator, while $x'-x''$ does not, hence they cannot be equal; therefore $\displaystyle \frac{1}{p}\in A_1\neq \emptyset$, as promised. Clearly, there are infinitely many possible choices for $p$, so $A_1$ actually contains infinitely many elements.
\item Let $A_2\subset A_1$ be the set of possible values for $x$ for which $\text{sgn}(x'-x)(x'-x,f(x')-y)=\text{sgn}(x''-x)(x''-x,f(x'')-y)$ is never true for $x',x''\in X$. We show that $A_2\neq \emptyset$. In order for one of these equalities to hold, $x$ must be the midpoint of $x'$ and $x''$, while at the same time $y$ be the midpoint of $f(x')$ and $f(x'')$. Choosing $\displaystyle x=\frac{1}{p}$ where $p$ is as above, and writing $\displaystyle x'=\frac{u_1}{v_1},\ x''=\frac{u_2}{v_2}$, we need to investigate whether the following is possible:
\[\frac{1}{2}\left(\frac{u_1}{v_1}+\frac{u_2}{v_2}\right)=\frac{1}{p},\qquad (u_1,v_1)=(u_2,v_2)=1,\qquad p\nmid v_1,\ p\nmid v_2.\]
This implies $p(u_1 v_2+u_2 v_1)=2v_1v_2$, and therefore that $p|2v_1v_2\Rightarrow p|2\Rightarrow p=2$. Hence, $A_2$ contains all points of the form $\displaystyle \frac{1}{p}$, where $p$ is a prime dividing the denominator of no $x'\in X$ (and there are infinitely many such primes), except possibly $\displaystyle \frac{1}{2}$; in any case $A_2\neq \emptyset$.
\end{itemize}
But $A_2=A$, hence $A\neq \emptyset$; therefore, there exists an $x$ with $f(x)=y$ such that $f$ retains the Costas property. \end{itemize} This completes the proof. \end{proof}
\begin{rmk} Intuitively, the mechanism responsible for the flexibility of the algorithm is the opportunity the countable infinity of the rationals offers for ``double deference of all difficulties for a future time'': when faced with the difficulty of assigning a value to $f$ at a given point, we always have infinitely many possibilities, out of which some will work; this in turn creates the difficulty of assigning the values we skipped to some point, but, when faced with this difficulty, we again have infinitely many points waiting for an assignment, out of which some again will work; but in choosing one we once more skip some points, and we need to choose values for them, hence the cycle restarts.
This interplay is precisely what we cannot achieve with a finite set, hence the contrast between the ease of the Costas construction over the rationals and the intractability of the classical construction of Costas arrays. \end{rmk}
\begin{rmk} The above proof makes heavy use of the enumerability of the rationals, and therefore cannot be readily extended to the reals, which lack this property. \end{rmk}
It may come as a surprise that we can extend the algorithm even further:
\begin{thm} Algorithm \ref{algq} will produce a bijection $f:Q\rightarrow Q$ with the Costas property even if one of the steps Find $x$ for $y$ or Find $y$ for $x$ is applied infinitely many times in a row. \end{thm}
\begin{proof} Let us consider the case where Find $y$ for $x$ is run infinitely many times in a row immediately after Initialization. This causes no loss of generality: the case where Find $x$ for $y$ is run infinitely many times in a row immediately after Initialization is completely dual (observe the duality in the proof of Theorem \ref{thmq}), while the more general situation where finitely many alternations between the 2 steps occur before the algorithm ``locks'' into one can be considered to fall within one of the 2 cases we just mentioned, but with a different, more extensive Initialization.
Assume then that we go through $x\in Q_X$ one after another and we try to assign values to $f(x)\in Q_Y$ while retaining the Costas property. The proof of Theorem \ref{thmq} guarantees that we will succeed for all points. What we need to worry about is whether some $y\in Q_Y$ will be left out in the process: in other words, we know that $\forall x\in Q_X,\ \exists y\in Q_Y: f(x)=y$, but we still need to know that $\forall y\in Q_Y,\ \exists! x\in Q_X:\ f(x)=y$.
Assume then that at some step of the algorithm we find that $y\in Q'_Y$ has been skipped, and is the smallest element of $Q_Y$ that has been skipped. Will the algorithm ever ``pick it up''? As before, let us denote by $A\subset Q'_X$ the set of all available $x$ for which we can set $f(x)=y$ without violating the Costas property; we need to show that $A\neq \emptyset$. Because we proceed through $Q_X$ sequentially from the beginning, at the particular step of the algorithm we find ourselves there exists $x_0\in Q_X:\ X=\{x\in Q_X: x\leq x_0\}$ (remember that $\leq$ refers to the ordering of $Q_X$, \emph{not} the usual ordering!).
Consider an $x\in Q'_X$ of the form $\displaystyle \frac{1}{p}$, $p$ prime; call it $\chi$. As in the proof of Theorem \ref{thmq}, we need to show 2 things: \begin{itemize}
\item $\text{sgn}(x'-\chi)(x'-\chi,f(x')-y)=\text{sgn}(x'-x'')(x'-x'',f(x')-f(x''))$ is never true for $x',x''<\chi$. The additional complication here is that at the current step of the algorithm we know the values of $f$ only up to $x_0$, but we endeavor to prove a property holding for all $x',x''<\chi$, i.e. one involving future values! The way to avoid the complication is to apply our favorite argument on the first coordinate only, disregarding entirely what the values of $f$ are: $x'-x''$ cannot contain $p$ as a factor in its denominator, while $\chi-x'$ does, hence they cannot be equal. It follows that $\chi$ will belong to $A$ as long as it satisfies the second condition we are now about to test, and also that $\chi$ can actually be chosen among infinitely many points.
\item $\text{sgn}(x'-\chi)(x'-\chi,f(x')-y)=\text{sgn}(x''-\chi)(x''-\chi,f(x'')-y)$ is never true for $x',x''<\chi$. In order to check this we repeat verbatim the proof of Theorem \ref{thmq}: we assume that $\chi$ is the midpoint of some $x'$ and $x''$, and then show this is impossible, unless perhaps $\displaystyle \chi=\frac{1}{2}$. It follows that $\displaystyle \frac{1}{p}$ satisfies this condition too, with the possible exception of when $p=2$. But this still leaves infinitely many points of the form $\displaystyle \frac{1}{p}$, $p$ prime, in $A$, hence in particular $A\neq \emptyset$. \end{itemize} This completes the proof. \end{proof}
\subsection{An explicit construction}
Algorithm \ref{algq} is not exactly constructive; we cannot, for example, readily compute what $\displaystyle f\left(\frac{8}{1025}\right)$ is equal to. We propose here an explicit construction of a Costas permutation on the rationals; the catch, however, is that it works only on a subset of $Q$.
\begin{dfn} We define the set of \emph{prime rationals} $Q_P$ in $[0,1]$ to be the subset of $Q$ with prime denominators; namely $\displaystyle Q_P=\left\{\frac{i}{p}:\ i\in[p-1]+1,\ p\text{ prime}\right\}$. \end{dfn}
\begin{thm} For each prime $p$, consider a Welch Costas permutation $f_p:[p-1]+1\rightarrow [p-1]+1$ constructed in $\mathbb{F}(p)$, and consider the set of points $\displaystyle S(p)=\left\{\left(\frac{i}{p},\frac{f_p(i)}{p}\right):\ i\in[p-1]+1\right\}$. The set $\displaystyle S=\bigcup_{p\text{ prime}} S(p)$ is a Costas permutation on $Q_P$. \end{thm}
\begin{proof} $S$ is clearly a permutation. We need to show that the distance vectors between all pairs of points are distinct. \begin{itemize}
\item Choose 4 points in the same $S(p)$: the Costas property of $f_p$ guarantees the 2 distance vectors they define are distinct.
\item Choose 2 points in $S(p)$ and 2 points in $S(q)$, $q\neq p$: the first distance vector has coordinates that are fractions over $p$, while the second over $q$, hence they cannot be equal.
\item Choose 2 points in $S(p)$, a point in $S(q)$, and a point in $S(r)$, where $p,q,r$ are distinct primes: the first distance vector has coordinates that are fractions over $p$, while the second over $qr$, hence they cannot be equal.
\item Choose a point in $S(p)$, a point in $S(q)$, a point in $S(r)$, and a point in $S(s)$, where $p,q,r,s$ are distinct primes: the first distance vector has coordinates that are fractions over $pq$, while the second over $rs$, hence they cannot be equal. \end{itemize} This completes the proof. \end{proof}
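For a concrete small instance, the sketch below (our illustration, not part of the paper) realizes $f_p$ by the exponential Welch construction $f_p(i)=g^i \bmod p$, with $g$ the smallest primitive root, and checks $S(5)\cup S(13)$ by comparing all sgn-normalized distance vectors directly; we restrict the check to these two primes purely to keep the run small.

```python
from fractions import Fraction
from itertools import combinations

def primitive_root(p):
    # smallest generator of the multiplicative group of F_p
    for g in range(2, p):
        x, powers = 1, set()
        for _ in range(p - 1):
            x = x * g % p
            powers.add(x)
        if len(powers) == p - 1:
            return g

def S(p):
    # the points (i/p, f_p(i)/p) for an exponential Welch permutation f_p
    g, x, pts = primitive_root(p), 1, []
    for i in range(1, p):
        x = x * g % p                      # x = g^i mod p
        pts.append((Fraction(i, p), Fraction(x, p)))
    return pts

points = S(5) + S(13)
vectors = []
for (x1, y1), (x2, y2) in combinations(points, 2):
    s = 1 if x2 > x1 else -1               # sgn-normalize each vector
    vectors.append((s * (x2 - x1), s * (y2 - y1)))
assert len(set(vectors)) == len(vectors)   # all distance vectors distinct
```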
\section{Conclusion}
In this work, we have made 4 main and original contributions to the subject of Costas arrays: \begin{itemize}
\item We defined the Costas property on a real continuum function in 2 ways, through distance vectors between points and through the autocorrelation, and we showed that the 2 definitions are equivalent. We also showed that real continuum Costas bijections can be used in the same applications as discrete Costas arrays, by designing signals with the appropriate instantaneous frequency, which has been made possible by recent advances in the field. Subsequently, we similarly studied the Costas property on rational continuum functions. Essentially, we have now translated the entire framework of Costas arrays into the continuum.
\item We showed that real continuum Costas bijections exist and we offered some examples; we characterized completely the continuously differentiable Costas bijections in terms of the monotonicity of their derivative, and we also obtained some good results for the case where the bijections are only piecewise continuously differentiable.
\item We investigated whether it is possible to construct fractal bijections with the Costas property, perhaps by employing discrete Costas arrays as building blocks. We answered that in the affirmative under nonstandard arithmetic laws (where addition and subtraction take place without carry, or where the contribution of the least significant digits of the points to their distance is deemphasized) in the real continuum; under ordinary arithmetic we have no reason to believe that the result still holds true.
\item We proposed a very general and flexible algorithm for the construction of Costas permutations over the rationals, which is not, however, entirely constructive. We were also able to formulate such a constructive algorithm, but its applicability is limited to a subset of the rationals. \end{itemize}
Overall, it came as a surprise to us that it was relatively simple to construct smooth continuum functions with the Costas property, whereas all efforts to create a fractal Costas real bijection were unsuccessful (under ordinary arithmetic). Intuitively, given the irregularity of discrete Costas arrays, we would expect the known construction methods for Costas arrays to generalize in a natural way in the real continuum, leading to a fractal; however, a direct recursion, such as our attempt in Section \ref{cofrac}, seems to be inappropriate, unless we change the arithmetic we use. It may still be possible to construct a Costas fractal bijection based on discrete Costas arrays through a different, less obvious mechanism, and we challenge the reader to discover such a mechanism.
\end{document} |
\begin{document}
\title{Graph reductions, binary rank, and pivots in gene assembly}
\begin{abstract} We describe a graph reduction operation, generalizing three graph reduction operations related to gene assembly in ciliates. The graph formalization of gene assembly considers three reduction rules, called the positive rule, double rule, and negative rule, each of which removes one or two vertices from a graph. The graph reductions we define consist precisely of all compositions of these rules. We study graph reductions in terms of the adjacency matrix of a graph over the finite field $\textbf{F}_2$, and show that they are path invariant, in the sense that the result of a sequence of graph reductions depends only on the vertices removed. The binary rank of a graph is the rank of its adjacency matrix over $\textbf{F}_2$. We show that the binary rank of a graph determines how many times the negative rule is applied in any sequence of positive, double, and negative rules reducing the graph to the empty graph, resolving two open problems posed by Harju, Li, and Petre. We also demonstrate the close relation between graph reductions and the matrix pivot operation, both of which can be studied in terms of the poset of subsets of vertices of a graph that can be removed by a graph reduction. \end{abstract}
\begin{center} \textit{Keywords:} gene assembly; graph reductions; binary rank; path invariance; pivots \end{center}
\section{Introduction}
This paper considers a graph reduction process formalizing gene assembly in stichotrichous ciliates. We briefly survey this background before describing the combinatorial formalization. The biological background is not necessary elsewhere in the paper.
\subsection{Ciliates and Gene Assembly}
Stichotrichous ciliates are ancient unicellular eukaryotes possessing two distinct types of cell nuclei, called the macronucleus and the micronucleus. The macronucleus is the somatic nucleus, while the micronucleus is a germline nucleus that is used to transmit genes to offspring during sexual reproduction. Genes in the micronucleus are located on long molecules consisting of coding blocks separated by non-coding material. These coding blocks must be assembled into their ``orthodox order'' during reproduction. In the micronucleus, however, the blocks may be shuffled, and some may be inverted. The necessary data to assemble these blocks into the orthodox order are encoded in short nucleotide sequences called pointers, located at each end of each coding block. In effect, the coding blocks may be regarded as nodes in a doubly linked list, with the pointers at the ends of each block indicating which block precedes it and which block follows it in the orthodox order. The process of reading these pointers and assembling the blocks into the orthodox order is called the gene assembly process, and it is an example of what could be considered computation in living cells. Background on stichotrichous ciliates and the gene assembly process may be found in \cite{jahn} and \cite{prescott}. A thorough treatment of the gene assembly process and its various formalizations can be found in \cite{monograph}.
The formalization we consider comes from an intramolecular model for gene assembly, which is described in \cite{ehrenfeucht3} and \cite{prescott1}. Several mathematical formalizations for the intramolecular model are described in \cite{langille}. The most straightforward formalization makes use of signed double-occurrence strings: the sequence of pointers is described by a string in which each letter occurs exactly twice, and each letter is given a sign to indicate whether or not it is inverted. Such strings are also called \textit{legal strings}. This formalization is studied further in \cite{ehrenfeucht} and \cite{harju0}. The formalization which we consider uses signed graphs, which can be obtained from legal strings as follows: the vertex set is the set of letters in the string; two vertices are connected by an edge if the corresponding letters ``interlock'' in the string (i.e. they appear in the pattern $abab$, rather than $aabb$ or $abba$); a vertex is assigned the sign $+$ if it appears both inverted and non-inverted in the string, and $-$ otherwise. Although it may appear that some information is lost in converting strings to graphs, it is demonstrated in \cite{ehrenfeucht0} that no essential information is lost, in the sense that assembly strategies in one formalization correspond to assembly strategies in the other. It is not the case, however, that all signed graphs arise from legal strings. Further discussion of the legal string and signed graph formalizations, and the relation between them, can be found in \cite{ehrenfeucht2}.
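As a small illustration of the string-to-graph translation just described, the Python sketch below (our own encoding, not from the literature: a legal string is given as a list of (letter, inverted) pairs, and all names are illustrative) computes the interlocking edges and the signs:

```python
def signed_graph(word):
    """Signed graph of a legal string: vertices are its letters; two
    letters are adjacent iff their occurrences interlock (pattern abab);
    a letter is signed '+' iff it occurs both inverted and non-inverted."""
    occ = {}
    for pos, (letter, inverted) in enumerate(word):
        occ.setdefault(letter, []).append((pos, inverted))
    sign = {a: '+' if ps[0][1] != ps[1][1] else '-' for a, ps in occ.items()}
    edges = set()
    for a in occ:
        for b in occ:
            if a < b:
                a1, a2 = occ[a][0][0], occ[a][1][0]
                b1, b2 = occ[b][0][0], occ[b][1][0]
                if a1 < b1 < a2 < b2 or b1 < a1 < b2 < a2:
                    edges.add((a, b))
    return sign, edges

# the legal string "2 3 2 -3": the two letters interlock, and 3 occurs inverted once
sign, edges = signed_graph([('2', False), ('3', False), ('2', False), ('3', True)])
# sign == {'2': '-', '3': '+'}, edges == {('2', '3')}
```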
The intramolecular model postulates that gene assembly is achieved by applying a sequence of three basic molecular operations, denoted $LD, HI$, and $DLAD$. These correspond, in the legal string and signed graph formalizations, to combinatorial operations called the \textit{negative rule}, \textit{positive rule} and \textit{double rule}. In the graph formalization, each rule shrinks the vertex set of the graph by one or two vertices, and reconfigures the edges between the remaining vertices. The gene assembly process is complete when the graph has been reduced to the empty graph. These three combinatorial operations are the basis of the graph reductions which we study in this paper.
\subsection{Graph Reductions}
We shall refer to the graph formalization of the three molecular operations as \textit{combinatorial reduction rules}, and compositions of them will be called \textit{combinatorial graph reductions}, or simply \textit{graph reductions}. A \textit{successful} graph reduction is a reduction of a graph to the empty graph.
The basic problems about the graph formalization of gene assembly concern understanding the different sequences of combinatorial reduction rules in a graph that produce a successful reduction. In particular, one wishes to understand how to measure the complexity of a given signed graph from the standpoint of combinatorial graph reduction. Several measures of complexity are proposed and analyzed in \cite{harju1}. In particular, one can ask if a given graph can be reduced to the empty graph using only some subset of the three operations; a classification is given in \cite{harju} for those graphs which can be reduced without the positive rule, and those which can be reduced without the double rule; this paper completes that classification by classifying the graphs which can be reduced without the negative rule (Section \ref{nullitySection}). An active topic recently concerns the \textit{parallel complexity} of a signed graph. The parallel complexity of a signed graph is the number of steps needed to reduce it to the empty graph, if it is permitted to perform a set of operations simultaneously if and only if they could be applied in any order with the same result. Parallel complexity is studied for various families of graphs in \cite{harju} and \cite{harju2}, and the computational problem of determining a graph's parallel complexity is considered in \cite{computation} and \cite{alhazov}. It is not known whether parallel complexity can be computed in polynomial time. Surprisingly, no nontrivial bounds are known for the parallel complexity of a graph with a given number of vertices. It is not even known whether parallel complexity is unbounded for general graphs; it is conjectured in \cite{harju} that in fact parallel complexity is bounded by a constant for all graphs.
This paper demonstrates that all of these questions can be formulated in linear algebraic terms by considering the adjacency matrix of the graph over $\textbf{F}_2$, where the sign of a vertex is encoded by regarding positive vertices as having loops. This idea has also been pursued in \cite{brijder}. We generalize a result from \cite{harju} by demonstrating that the combinatorial reduction rules on signed graphs are path-invariant, in the sense that the result of removing a given subset of vertices does not depend on the particular operations used to remove them (Theorem \ref{pathInvariance}); we also provide an algebraic criterion determining whether a given set of vertices can be removed (Proposition \ref{matrixReducibility}). We prove that the number of times the negative rule is applied in a reduction to the empty graph is determined by the rank of the adjacency matrix (Theorem \ref{ngrNullity}), thus classifying the graphs which can be reduced without this rule and resolving two problems posed in \cite{harju}.
The methods in this paper also suggest another way to encode the data of the possible reductions of a signed graph: by a poset. In particular, the set of subsets of vertices which can be removed without the negative rule (equivalently, as we demonstrate, the subsets whose induced subgraph has invertible adjacency matrix) forms a poset that we call the \textit{pivotal poset}. This poset completely determines the original graph (Theorem \ref{posetDeterminesGraph}), and naturally encodes the sequences of steps that apply in parallel, thus suggesting a new approach to the study of parallel complexity.
The notion of a graph pivot, first considered in the context of gene assembly in \cite{brijder}, is closely related to the methods of this paper. In particular, we characterize the pivot operation in terms of the pivotal poset. As an application of these ideas, we describe and solve in Section \ref{reverseReductions} what might be called the inverse problem for reductions of signed graphs: given a graph, which graphs can be reduced to it using combinatorial reduction rules?
Some of our results, in particular regarding path invariance of graph reductions, have been proved in weaker forms in \cite{brijder}, by means of the matrix pivot operation on the adjacency matrix. In effect, their work concerns what we refer to as \textit{nonsingular reduction}, which can be characterized as those graph reductions which do not use the negative rule. In Section \ref{pivots}, we show this connection, and give a simple characterization of pivots of graphs in terms of the reducibility poset. Our method generalizes some results from \cite{brijder}, and provides short proofs for others. We discuss the implications of the pivot operation for the reducibility poset. As a special case, we consider the \textit{retrograph} of a graph, which can be defined by taking the inverse of the adjacency matrix, when it exists.
We begin by describing the combinatorial reduction rules in Section \ref{combinatorial}. We generalize the reduction rules in \ref{algebraic} using linear algebra over $\textbf{F}_2$, and prove our path-invariance result. We demonstrate in Section \ref{combMinimal} that the combinatorial reduction rules are simply the minimal graph reductions, and also give our results on the number of applications of the negative rule in a successful reduction. In Section \ref{pivots} we relate graph reductions to the matrix pivot operation, and describe the relation between pivots, the pivotal poset, and the graph reduction inverse problem.
Throughout the paper, we shall use the word \textit{graph} to refer to a simple graph with loops (i.e. there is at most one edge between any two vertices, and vertices may have edges to themselves), and the \textit{adjacency matrix} of a graph will always be understood to have coefficients in $\textbf{F}_2$. When we refer to a \textit{signed graph}, we mean a simple graph without loops together with an assigned sign ($+$ or $-$) for each vertex. These two notions are equivalent in the sense that the sign $+$ may be understood to indicate that the vertex has a loop edge.
\section{Combinatorial Graph Reductions}\label{combinatorial} We begin by describing the graph reduction operations formalizing the three molecular operations. We shall refer to these reductions as \emph{combinatorial graph reductions}, in order to distinguish them from the definition of graph reductions that we give in the next section. In Section \ref{combMinimal} we shall demonstrate that these two notions of graph reduction coincide. These reductions have been considered on signed graphs until now, so we present this viewpoint first. We then describe how these rules can be equivalently formulated on simple graphs with loops (which we shall call, simply, \textit{graphs}), and demonstrate that this leads to simple formulas for the reduction rules in terms of the adjacency matrix.
\subsection{On signed graphs} The three molecular operations postulated by the intramolecular model are HI, DLAD, and LD. Each has a corresponding rule on signed graphs, defined as follows.
\begin{definition} A \emph{signed graph} $G = (V,E,\sigma)$ is a simple graph on vertices $V = \{v_1, v_2, \dots, v_n\}$, with edges $E$, such that each vertex is given a sign by $\sigma: V \rightarrow \{+,-\}$. \end{definition}
Let $N_G(v)$ denote the neighborhood of $v$ in $G$ (not including $v$ itself). By \textit{complementing} an edge $(v_1,v_2)$, where $v_1,v_2$ are vertices, we mean adding an edge between $v_1$ and $v_2$ if one is not present, and removing the edge between $v_1$ and $v_2$ if one is present.
\begin{definition} The three combinatorial reduction rules on signed graphs are as follows. \begin{itemize} \item $\emph{gpr}_v$, the \emph{graph positive rule}, applies if and only if $\sigma(v)=+$. It removes $v$ from the graph, complements all edges between pairs of vertices in $N_G(v)$, and inverts the signs of all vertices in $N_G(v)$. \item $\emph{gdr}_{v_1,v_2}$, the \emph{graph double rule}, applies if and only if $\sigma(v_1)=\sigma(v_2)=-$ and $(v_1,v_2) \in E$. It removes $v_1$ and $v_2$ from the graph, and complements all edges $(x,y)$ such that one of $x$ or $y$ lies in $N_G(v_1)$ and the other in $N_G(v_2)$, but such that not both $x$ and $y$ lie in $N_G(v_1) \cap N_G(v_2)$. Signs are unaffected. \item $\emph{gnr}_v$, the \emph{graph negative rule}, applies if and only if $\sigma(v)=-$ and $v$ is isolated (has no neighbors). It removes $v$, and does not affect the rest of the graph. \end{itemize} A \emph{combinatorial reduction strategy} is a sequence $(\gamma_1, \gamma_2, \dots, \gamma_n)$ of one or more combinatorial reduction rules. A combinatorial reduction strategy is called \emph{applicable} if for $i =1,2,\dots,n$, the rule $\gamma_i$ applies to $\gamma_{i-1} \circ \dots \circ \gamma_2 \circ \gamma_1 (G)$. The \emph{domain} of a reduction strategy is the set of vertices removed by the strategy. A reduction strategy is called \emph{successful} if it is applicable and its domain is all of $V$. The composition $\gamma_n \circ \dots \circ \gamma_2 \circ \gamma_1$ of the rules of a combinatorial reduction strategy is called a \emph{combinatorial reduction}. \end{definition}
See \cite{ehrenfeucht3}, \cite{prescott1}, \cite{ehrenfeucht0} and the monograph \cite{monograph}, for discussion of these three rules and their relation to the postulated molecular operations HI, DLAD, and LD. We also point out that if we consider graphs with only negative vertices, the double rule $\textrm{gdr}$ is identical to the \textit{rank two reduction} rule as considered in \cite{ggt}.
An example of a successful reduction strategy of a signed graph, demonstrating the three rules, is shown in Figure 1. There are other successful reduction strategies for this graph (for example, vertex $v_3$ can be removed first, using the positive rule, see Figure \ref{fig:pathInvarianceExample}). All diagrams have been created using the gene assembly simulator \cite{simulator}.
\begin{figure}
\caption{A successful combinatorial reduction strategy.}
\end{figure}
Observe that in any nonempty signed graph, at least one of the combinatorial reduction rules is applicable, and the vertex set shrinks whenever any rule is applied. Thus every signed graph has some successful combinatorial reduction strategy. We consider the set of all successful reduction strategies of an arbitrary signed graph. In Section \ref{redDefinitions} we will obtain a simple algebraic description of those vertex sets which can be removed by some combinatorial reduction strategy. The first step in this direction is to reinterpret signed graphs in a way that will allow them to be studied algebraically.
\subsection{On simple graphs with loops}
A \textit{simple graph with loops} is a graph without multiple edges, but where a vertex may have an edge to itself. There is a bijection between signed graphs and simple graphs with loops, by regarding positive vertices as vertices with loops and negative vertices as vertices without loops. We will thus use the two viewpoints interchangeably. In this paper, we shall use the word \textit{graph} to mean simple graph with loops.
The main advantage of this second viewpoint is that a simple graph with loops can be described by an adjacency matrix, where the diagonal of the matrix indicates which vertices have loops. If we regard the entries of this matrix as lying in the finite field $\textbf{F}_2$, then the three combinatorial reduction rules are easy to state in terms of the adjacency matrix. If we order the vertices of the graph so that the domain of the reduction comes first, then we may express the three operations in terms of block matrices, as follows. It is important to recall that the entries are in $\textbf{F}_2$, not $\textbf{R}$. The submatrix $Q$ is any $1 \times (n-1)$ matrix in the first line and any $2 \times (n-1)$ matrix in the second line. In the third line, $\textbf{0}$ denotes the $1 \times (n-1)$ vector of all $0$s.
\begin{eqnarray}
\label{combMatrix1}
\textrm{gpr}_v: & \smat{1}{Q}{Q^T}{R} &\mapsto R - Q^TQ\\
\label{combMatrix2}
\textrm{gdr}_{v_1,v_2}: & \mat{\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}}{Q}{Q^T}{R} &\mapsto R - Q^T\smat{0}{1}{1}{0} Q\\
\label{combMatrix3}
\textrm{gnr}_v: & \smat{0}{\textbf{0}}{\textbf{0}^T}{R} &\mapsto R
\end{eqnarray}
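These block formulas can be evaluated directly; here is a sketch in plain Python, with 0/1 lists standing for matrices over $\textbf{F}_2$ (the helper names are ours). It applies the $\textrm{gpr}$ formula to the graph with adjacency matrix $\left[ \begin{smallmatrix} 1&1&1 \\ 1&1&0 \\ 1&0&1 \end{smallmatrix} \right]$.

```python
def matmul2(A, B):
    """Matrix product over F_2."""
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def addm2(A, B):
    """Matrix sum over F_2; subtraction is the same operation."""
    return [[(a + b) % 2 for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# gpr on v1: the blocks are P = [1], Q = [1 1], R = I_2,
# and the rule yields R - Q^T Q.
Q = [[1, 1]]
R = [[1, 0], [0, 1]]
QT = [list(row) for row in zip(*Q)]
reduced = addm2(R, matmul2(QT, Q))     # R - Q^T Q over F_2
# reduced == [[0, 1], [1, 0]]: both remaining vertices lose their loops
# (signs invert) and gain an edge (complementation among neighbors of v1).
```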
We also point out here that the positive rule and double rule each reduce the rank of the adjacency matrix by precisely the number of vertices removed. This is why Godsil and Royle \cite{ggt} refer to the double rule as a rank two reduction. This fact can be seen by realizing both rules as a sequence of row reduction operations, followed by a restriction to a principal submatrix. We omit the details here since this result will also follow from Corollary \ref{rankCorollary} after discussing general graph reductions.
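The rank fact is easy to check numerically; a sketch (the Gaussian-elimination helper over $\textbf{F}_2$ is our own, not from the text):

```python
def gf2_rank(M):
    """Rank of a 0/1 matrix over F_2, by Gaussian elimination."""
    M = [row[:] for row in M]
    rank = 0
    for col in range(len(M[0]) if M else 0):
        piv = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                M[r] = [(x + y) % 2 for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

# gpr removes one vertex and drops the rank by 1:
A3 = [[1, 1, 1], [1, 1, 0], [1, 0, 1]]      # v1 positive, gpr_{v1} applies
assert gf2_rank(A3) == 3
assert gf2_rank([[0, 1], [1, 0]]) == 2       # adjacency after gpr_{v1}

# gdr removes two vertices and drops the rank by 2:
A4 = [[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]]
assert gf2_rank(A4) == 4
assert gf2_rank([[0, 1], [1, 0]]) == 2       # adjacency after gdr_{v1,v2}
```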
\section{Graph reductions in general}\label{algebraic} We give in this section a linear algebraic description of the combinatorial graph reductions described above. This description will allow us to prove path invariance for graph reductions, and also characterize the number of times the negative rule $\textrm{gnr}$ is used in a given reduction, resolving two open problems from \cite{harju}. We will also obtain formulas to compute all edge relations of a graph after reduction in terms of ranks of submatrices of the adjacency matrix. These formulas generalize the determinant formulas given in \cite{brijder}, which apply only in the absence of the negative rule. The reductions we define here are closely related to the pivot operation on matrices defined in \cite{geelen} and studied in \cite{brijder}, which we consider in Section \ref{pivots}. We observe that all of our work in this section is easily generalized to directed graphs by considering asymmetric adjacency matrices, but we consider only symmetric adjacency matrices in order to simplify notation.
\subsection{Preliminaries}
We begin with an intrinsic definition of graph reductions. In Section \ref{matrices} we will interpret this definition using matrices in block form. Suppose $G$ is a graph, on vertices $V = \{ v_1, v_2, \dots, v_n \}$, with edges $E$. We shall denote by $\mathcal{V}$ the $n$-dimensional vector space over $\textbf{F}_2$ (the finite field with two elements) with basis $V$. For any subset $W \subset V$, $\langle W \rangle$ will denote the span in $\mathcal{V}$ of the vertices in $W$ (in particular, $\langle V \rangle = \mathcal{V}$). We shall denote by $\mathcal{E}$ a symmetric bilinear form on $\mathcal{V}$ defined on basis vectors as follows.
\begin{equation} \mathcal{E}(v_i, v_j) = \begin{cases}
1 & (v_i,v_j) \in E \\
0 & (v_i,v_j) \not\in E \end{cases} \end{equation}
Observe that the bilinear form $\mathcal{E}$ is given by the adjacency matrix $A$ of the graph, in the sense that $\mathcal{E}(v_1,v_2) = v_1^T A v_2$.
This form is defined on all of $\mathcal{V}\times \mathcal{V}$ by bilinearity. Recall that we permit $G$ to have loops, and $\mathcal{E}(v_i,v_i) = 1$ if and only if vertex $v_i$ has a loop. We shall describe the results of reductions of $G$ by specifying different bilinear forms, using the following notation.
\begin{definition}
For any set of vertices $W$, and any symmetric bilinear form $\mathcal{F}$ defined on $\langle W \rangle$, we denote by $\mathcal{G}(W,\mathcal{F})$ the graph on vertices $W$ with edges $\{ (w_i, w_j):\ \mathcal{F}(w_i,w_j) = 1 \}$. \end{definition}
For example, the graph $G$ can be denoted $\mathcal{G}(V,\mathcal{E})$, and for any subset $W \subset V$, the graph $\mathcal{G}(W,\mathcal{E})$ is the induced subgraph of $G$ on vertices $W$. Observe that, in the above definition, $\mathcal{F}$ may be a form on a larger vector space than $\langle W \rangle$, as in the case of induced subgraphs, although only its restriction to $\langle W \rangle$ is relevant. Graph reductions will be defined by modifying the bilinear form $\mathcal{E}$ in a manner that ``forgets'' the removed vertices in a particular way. Before giving a precise definition, we shall informally motivate the idea behind it.
\subsection{Motivation}\label{motivation}
We begin with a very simple principle: if $W \subset V$ is a subset of vertices such that the induced subgraph of $G$ on vertices $W$ is not connected to the rest of the graph, then the graph reduction removing the vertices $W$ must simply be the induced subgraph on the remaining vertices. Our approach is to define the graph reduction for a set of vertices $W$ by first modifying the graph in such a way that the vertices $W$ become disconnected. This modification is naturally expressed using linear algebra. The bilinear form $\mathcal{E}$ allows not just elements of the set $V$, but in fact all elements of the vector space $\mathcal{V}$, to be regarded as vertices of a graph. By using a different basis for the vector space, a different graph on the same number of vertices is obtained, and graph reductions can be defined by performing a change of basis in a specific way. We first illustrate this idea with an example.
\begin{example} Consider the graph on the left in Figure \ref{changeBasisExample}. In the basis $(v_1,v_2,v_3)$, the adjacency matrix is $\left[ \begin{smallmatrix} 1&1&1 \\ 1&1&0 \\ 1&0&1 \end{smallmatrix} \right]$. Consider a different basis: $(w_1,w_2,w_3) = (v_1, v_2+v_1, v_3+v_1)$. Then in this basis, the bilinear form $\mathcal{E}$ has matrix $\left[ \begin{smallmatrix} 1&0&0 \\ 0 & 0 & 1 \\ 0&1&0 \end{smallmatrix} \right]$, which is the adjacency matrix for the graph $\mathcal{G}(\{v_1,v_2+v_1,v_3+v_1\}, \mathcal{E})$. This graph is shown on the right of Figure \ref{changeBasisExample}. \end{example}
\begin{figure}
\caption{A graph $G$, and the graph obtained by changing basis to $w_1 = v_1$, $w_2 = v_2+v_1$, $w_3=v_3+v_1$.}
\label{changeBasisExample}
\end{figure}
Observe that in this example, the basis is modified only by adding copies of $v_1$ to other basis vectors. In addition, the result is a graph in which the vertex $w_1$ is disconnected from the rest of the graph, and the rest of the graph is identical to the result of applying the positive rule to vertex $v_1$ in the original graph. This illustrates the principle behind our definition of graph reduction: if we wish to remove the vertices in a set $W \subset V$, we first disconnect the vertices in $W$ from the rest of the graph by changing the basis by adding linear combinations of vertices in $W$ to the vertices not in $W$, and then remove the vertices in $W$. In the example, suppose that we wish to remove the vertex $v_1$. Then we first change to the basis $(w_1,w_2,w_3)$. Then the graph reduction removing $w_1$ is the induced subgraph on vertices $(w_2,w_3) = (v_2+v_1,v_3+v_1)$. However, in the reduction process, we intend to ``forget'' the existence of the vertex $v_1$ altogether, and thus we regard $(w_2,w_3)$ as being identical to $(v_2,v_3)$ after this reduction.
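The change of basis in the example can be verified numerically: if $S$ is the matrix whose rows express $(w_1,w_2,w_3)$ in terms of $(v_1,v_2,v_3)$, then the matrix of $\mathcal{E}$ in the new basis is $S A S^T$ over $\textbf{F}_2$. A sketch (helper name is ours):

```python
def matmul2(A, B):
    """Matrix product over F_2."""
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

A = [[1, 1, 1], [1, 1, 0], [1, 0, 1]]    # the graph of the example
S = [[1, 0, 0], [1, 1, 0], [1, 0, 1]]    # rows: w1 = v1, w2 = v2+v1, w3 = v3+v1
ST = [list(row) for row in zip(*S)]
B = matmul2(matmul2(S, A), ST)           # matrix of E in the basis (w1, w2, w3)
assert B == [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```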
We further illustrate this idea by describing the three combinatorial reduction rules in these terms.
\begin{example}[The positive rule] Suppose that the graph $G$ on vertices $\{ v_1, \dots, v_n \}$ has adjacency matrix $\smat{1}{Q}{Q^T}{R}$ in block form. Then the bilinear form $\mathcal{E}$ becomes $\smat{1}{0}{0}{R - Q^TQ}$ in the basis $(w_1, \dots, w_n)$, where $\left[ \begin{smallmatrix} w_1 \\ \vdots \\ w_n \end{smallmatrix} \right] = \smat{1}{0}{Q^T}{I} \left[ \begin{smallmatrix} v_1 \\ \vdots \\ v_n \end{smallmatrix} \right]$. Observe that the graph corresponding to this basis is disconnected, and the induced subgraph on $\{w_2, \dots, w_n \}$ (which is congruent modulo $v_1$ to $(v_2, \dots, v_n)$) is exactly $\textrm{gpr}_{v_1}(G)$. \end{example}
\begin{example}[The double rule] Suppose that the graph $G$ on vertices $\{ v_1, \dots, v_n \}$ has adjacency matrix $\mat{ \begin{smallmatrix} 0&1\\1&0 \end{smallmatrix} }{Q}{Q^T}{R}$ in block form. Then the bilinear form $\mathcal{E}$ becomes $\mat{ \begin{smallmatrix} 0&1\\1&0 \end{smallmatrix} }{0}{0}{R- Q^T \smat{0}{1}{1}{0} Q}$ in the basis $(w_1, \dots, w_n)$, where
\begin{equation*} \left[ \begin{smallmatrix} w_1 \\ \vdots \\ w_n \end{smallmatrix} \right] = \mat{\begin{smallmatrix} 1&0\\0&1 \end{smallmatrix} }{0}{ Q^T \smat{0}{1}{1}{0} }{I} \left[ \begin{smallmatrix} v_1 \\ \vdots \\ v_n \end{smallmatrix} \right]. \end{equation*}
Observe that the graph corresponding to this basis is disconnected, and the induced subgraph on $\{w_3, \dots, w_n \}$ (which is congruent modulo the span of $v_1$ and $v_2$ to $(v_3, \dots, v_n)$) is exactly $\textrm{gdr}_{v_1,v_2}(G)$. \end{example}
\begin{example}[The negative rule] By definition, the negative rule only applies to vertex $v_1$ if it is already disconnected from the rest of the graph. Hence no change of basis is necessary; the result of reducing the vertex $v_1$ is simply removing it. \end{example}
We now make this vague notion of ``changing basis and forgetting $W$'' precise to define our notion of graph reduction.
\subsection{Definition of graph reductions}\label{redDefinitions}
Suppose that $\mathcal{W}$ is a vector subspace of $\mathcal{V}$ (for example, $\mathcal{W}$ could be the span $\langle W \rangle$ of a subset $W \subset V$). Then we wish to define the reduction of the bilinear form $\mathcal{E}$ along $\mathcal{W}$, which will be denoted $\mathcal{E}^{\mathcal{W}}$, and should correspond to ``forgetting'' the subspace $\mathcal{W}$. The easiest situation in which this can occur is if $\mathcal{E}$ can be diagonalized, in the sense that $\mathcal{V}$ can be written as a direct sum $\mathcal{V} = \mathcal{W} \oplus \mathcal{V'}$, for some other subspace $\mathcal{V'}$ (i.e. $\mathcal{V}$ is spanned by $\mathcal{W}$ and $\mathcal{V'}$, and $\mathcal{W} \cap \mathcal{V'} = \{ 0 \}$), and $\mathcal{E}(w,v) = 0$ whenever $w \in \mathcal{W}$ and $v \in \mathcal{V'}$. In this case, we define the reduction $\mathcal{E}^{\mathcal{W}}$ by projecting onto $\mathcal{V'}$ and then applying $\mathcal{E}$. The effect of this is that $\mathcal{E}^{\mathcal{W}}$ is identical to $\mathcal{E}$ on the space $\mathcal{V'}$, and is equal to $0$ when either argument comes from $\mathcal{W}$. Thus in the sense of the previous section, $\mathcal{E}^{\mathcal{W}}$ corresponds to modifying $\mathcal{E}$ so that it ``forgets'' $\mathcal{W}$. Of course, this definition will only work for certain subspaces $\mathcal{W}$, which we now define.
\begin{definition}
If $\mathcal{W}$ is any vector subspace of $\mathcal{V}$, the \emph{$\mathcal{E}$-annihilator}, denoted $\mathcal{W}^{\perp \mathcal{E}}$, is the set $\{v \in \mathcal{V} :\ \mathcal{E}(v,w) = 0\ \forall w \in \mathcal{W} \}$. \end{definition}
\begin{definition}
A vector subspace $\mathcal{W}$ of $\mathcal{V}$ is $\mathcal{E}$-\emph{reducible} if $\mathcal{W} + \mathcal{W}^{\perp \mathcal{E}} = \mathcal{V}$, i.e. if $\mathcal{W}$ and its $\mathcal{E}$-annihilator span $\mathcal{V}$. A subset $W$ of vertices in the graph $G$ is \emph{reducible in} $G$ if $\langle W \rangle$ is $\mathcal{E}$-reducible. \end{definition}
We will see in Section \ref{combMinimal} that reducible sets of vertices correspond precisely to sets of vertices that can be removed by the three combinatorial reduction rules defined in Section \ref{combinatorial}. Observe that we do not require that $\mathcal{W}$ be disjoint from its $\mathcal{E}$-annihilator. Combinatorially, a set of vertices $W$ is such that $\langle W \rangle$ is $\mathcal{E}$-reducible and disjoint from its $\mathcal{E}$-annihilator if and only if $W$ can be removed from the graph using only the positive rule and the double rule (see Section \ref{nullitySection}).
Notice that $\mathcal{V}$ is not necessarily a \textit{direct} sum of $\mathcal{W}$ and $\mathcal{W}^{\perp \mathcal{E}}$, since $\mathcal{W} \cap \mathcal{W}^{\perp \mathcal{E}}$ may be nontrivial. However, as long as $\mathcal{W}$ and $\mathcal{W}^{\perp \mathcal{E}}$ span $\mathcal{V}$, it is possible to find a subspace $\mathcal{V'} \subset \mathcal{W}^{\perp \mathcal{E}}$ such that $\mathcal{V}$ is the direct sum $\mathcal{W} \oplus \mathcal{V'}$. Projecting to any such subspace $\mathcal{V'}$ and applying $\mathcal{E}$ gives the same form $\mathcal{E}^{\mathcal{W}}$ for any choice of $\mathcal{V'}$, as the following lemma demonstrates.
\begin{lemma} Suppose that $\mathcal{W} \subset \mathcal{V}$. Then for any $v_1,v_2,v_1',v_2' \in \mathcal{W}^{\perp \mathcal{E}}$ such that $v_1 - v_1'$ and $v_2 - v_2'$ both lie in $\mathcal{W}$, $\mathcal{E}(v_1,v_2) = \mathcal{E}(v_1',v_2')$. \end{lemma} \begin{proof} By bilinearity, $\mathcal{E}(v_1,v_2) - \mathcal{E}(v_1',v_2') = \mathcal{E}(v_1 - v_1', v_2) + \mathcal{E}(v_1', v_2 - v_2')$. The first term vanishes because $v_1 - v_1' \in \mathcal{W}$ and $v_2 \in \mathcal{W}^{\perp \mathcal{E}}$; the second because $v_1' \in \mathcal{W}^{\perp \mathcal{E}}$ and $v_2 - v_2' \in \mathcal{W}$. \end{proof}
This fact shows that the following is well-defined.
\begin{definition}\label{reductionFormDefn} Suppose that $\mathcal{W} \subset \mathcal{V}$ is $\mathcal{E}$-reducible. Then the \emph{reduction of $\mathcal{E}$ along $\mathcal{W}$}, denoted $\mathcal{E}^\mathcal{W}$, is a bilinear form on $\mathcal{V}$ defined as follows. For any $v_1,v_2 \in \mathcal{V}$, and any $v_1', v_2' \in \mathcal{W}^{\perp \mathcal{E}}$ such that $v_1 - v_1'$ and $v_2 - v_2'$ both lie in $\mathcal{W}$ (such $v_1',v_2'$ exist because $\mathcal{W}$ is $\mathcal{E}$-reducible), define $\mathcal{E}^\mathcal{W}(v_1,v_2) = \mathcal{E}(v_1',v_2')$. \end{definition}
\begin{example} Refer back to the examples in Section \ref{motivation}. In each example, take $\mathcal{W}$ to be the span $\langle W \rangle$, and observe that $\mathcal{E}^\mathcal{W}$, when restricted to the vertices in the complement of $W$, is precisely the bilinear form corresponding to the graph obtained by removing the vertices $W$ with a combinatorial reduction rule. \end{example}
Finally, we are able to use this approach to define graph reductions. The above example shows that this does, indeed, generalize the combinatorial reduction rules.
\begin{definition} \label{reductionDefn} If $W$ is a reducible set of vertices in $G$, the \emph{graph reduction of $G$ along vertices $W$} is \begin{equation}
\Gamma_{W}(G)= \mathcal{G}(V \backslash W, \mathcal{E}^{\langle W \rangle}). \end{equation} \end{definition}
This abstract definition is the easiest to manipulate to prove theorems such as path-invariance (Theorem \ref{pathInvariance}), but it is also useful to understand how this definition looks when written more explicitly using matrices. We examine this now.
\subsection{Matrix description}\label{matrices}
Suppose that $W \subset V$ is a set of vertices, and assume that $V$ is ordered with vertices of $W$ coming first. Then in this basis, $A$ can be written in block form:
\begin{equation}
A = \mat{P}{Q}{Q^T}{R}. \end{equation}
Thus $P$ gives the adjacency matrix for the induced subgraph on vertices $W$, $R$ for the induced subgraph on $V \backslash W$, and $Q$ describes the edges between these sets of vertices. Any vector in $\mathcal{V}$ may be written in terms of this block form as $\col{w}{v}$, where $w \in \langle W \rangle, v \in \langle V \backslash W \rangle$. Whether $W$ is reducible is simply expressed in terms of this block form.
\begin{proposition}\label{matrixReducibility}
In the notation above, the vertices $W$ are reducible if and only if the image of $Q$ is contained in the image of $P$. Equivalently, $W$ is reducible if and only if there exists a matrix $M$ such that $Q = PM$. \end{proposition} \begin{proof}
Observe that $\col{w}{v} \in \mathcal{W}^{\perp \mathcal{E}}$ if and only if $Pw + Q v = 0$ (here by $0$ we mean the all-$0$ vector). Thus $\mathcal{W}$ and $\mathcal{W}^{\perp \mathcal{E}}$ will span $\mathcal{V}$ if and only if for every $v \in \langle V \backslash W \rangle$, there exists $w \in \langle W \rangle$ such that $\col{w}{v} \in \mathcal{W}^{{\perp \mathcal{E}}}$, which is true if and only if the image of $Q$ is contained in the image of $P$. \end{proof}
\begin{example} \label{minimalExample} Suppose that $P$ is invertible. Then the set $W$ is reducible, and the matrix $M$ mentioned in Proposition \ref{matrixReducibility} must be $P^{-1} Q$. The special cases $P = \left[ 1 \right]$ and $P = \smat{0}{1}{1}{0}$ correspond to the positive rule and the double rule. On the other hand, suppose that $W = \{v_1\}$ is a single negative vertex. Then $P = \left[ 0 \right]$, and $W$ is reducible if and only if $Q = 0$; in other words, a single negative vertex is reducible if and only if it is isolated. This matches the definition of the negative rule. \end{example}
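Proposition \ref{matrixReducibility} gives a direct computational test: $W$ is reducible exactly when appending the columns of $Q$ to $P$ does not raise the rank over $\textbf{F}_2$. A sketch (helper names are ours):

```python
def gf2_rank(M):
    """Rank of a 0/1 matrix over F_2, by Gaussian elimination."""
    M = [row[:] for row in M]
    rank = 0
    for col in range(len(M[0]) if M else 0):
        piv = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                M[r] = [(x + y) % 2 for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

def reducible(P, Q):
    """Test im(Q) contained in im(P) over F_2 via ranks of P and [P | Q]."""
    return gf2_rank([p + q for p, q in zip(P, Q)]) == gf2_rank(P)

assert reducible([[1]], [[1, 0, 1]])                  # a positive vertex (gpr)
assert not reducible([[0]], [[1, 0]])                 # a negative vertex with a neighbor
assert reducible([[0]], [[0, 0]])                     # an isolated negative vertex (gnr)
assert reducible([[0, 1], [1, 0]], [[1, 0], [0, 1]])  # gdr: P is invertible
```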
Assume now that the vertices $W$ are indeed reducible in $G$. We shall give a formula for the adjacency matrix of $\Gamma_W (G)$.
\begin{proposition}\label{matrixGamma}
Let $A = \smat{P}{Q}{Q^T}{R}$ be the adjacency matrix of $G$ in the basis $(v_1, v_2, \dots, v_n)$, let $W$ be the subset $\{ v_1,v_2, \dots, v_k \}$ of the vertices, and suppose that $W$ is reducible. Let $M$ be a matrix such that $Q = PM$ (which exists since $W$ is reducible). Then the adjacency matrix of $\Gamma_W(G)$, in the basis $(v_{k+1},v_{k+2},\dots,v_n)$, is \mbox{$R-M^TPM$}. If $P$ is invertible, this matrix can be written $R-Q^T P^{-1} Q$. \end{proposition} \begin{proof} Let $\Pi$ be the matrix $\smat{0}{-M}{0}{I}$. Then $\Pi^2 = \Pi$, i.e. $\Pi$ is a projection. Also, the kernel of $\Pi$ is precisely $\langle W \rangle$, and the image of $\Pi$ is contained in $\langle W \rangle^{\perp \mathcal{E}}$ (where $\mathcal{E}$ is the bilinear form on $\mathcal{V}$ corresponding to $A$). It follows that for any $v \in \mathcal{V}$, $\Pi v - v \in \langle W \rangle$, and $\Pi v \in \langle W \rangle^{\perp \mathcal{E}}$. Thus by Definition \ref{reductionFormDefn}, $\mathcal{E}^\mathcal{W} (v_1, v_2) = \mathcal{E}(\Pi v_1, \Pi v_2)$. Thus the matrix of $\mathcal{E}^\mathcal{W}$ is $\Pi^T A \Pi$. By a simple calculation, this is $\smat{0}{0}{0}{R - M^T P M}$ in block form. Restricting to $\langle V \backslash W \rangle$, we see by Definition \ref{reductionDefn} that the adjacency matrix of $\Gamma_W(G)$ is $R - M^TPM$, as claimed. If $P$ is invertible, then $M = P^{-1}Q$ and thus $R - M^T P M = R - Q^T P^{-1} Q$. \end{proof}
Observe that Proposition \ref{matrixGamma} implies that the expression $R - M^T P M$ does not depend on the choice of $M$. This fact can also be established directly, using the fact that $PM$ and $M^T P$ are both independent of the choice of $M$, being $Q$ and $Q^T$, respectively.
Finally, we observe that there is a familiar linear algebraic way to compute the adjacency matrix of $\Gamma_W (G)$. If row-reduction operations are performed on $A$ until the lower-left corner becomes $0$, the result is $\mat{I}{0}{-M^T}{I} \mat{P}{Q}{Q^T}{R} = \mat{P}{Q}{0}{R - M^T P M}$. In the special case where $P$ is invertible, this reveals the relation between graph reductions and pivots, which has been considered in special cases in \cite{brijder} and which we consider in general in Section \ref{pivots}.
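As a concrete check of Proposition \ref{matrixGamma}, here is a sketch of the double rule computed via $R - M^TPM$; since $P = \smat{0}{1}{1}{0}$ is its own inverse over $\textbf{F}_2$, we may take $M = PQ$ (helper names are ours):

```python
def matmul2(A, B):
    """Matrix product over F_2."""
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def addm2(A, B):
    """Matrix sum over F_2; subtraction is the same operation."""
    return [[(a + b) % 2 for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# gdr on {v1, v2} in the 4-vertex graph with edges v1~v2, v1~v3, v2~v4:
P = [[0, 1], [1, 0]]
Q = [[1, 0], [0, 1]]        # v1~v3 and v2~v4
R = [[0, 0], [0, 0]]        # no edges or loops among v3, v4
M = matmul2(P, Q)           # M = P^{-1} Q, since P^{-1} = P over F_2
MT = [list(row) for row in zip(*M)]
adj = addm2(R, matmul2(matmul2(MT, P), M))   # R - M^T P M over F_2
assert adj == [[0, 1], [1, 0]]               # the reduction creates the edge v3~v4
```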
\subsection{Rank and determinant formulas}\label{rankFormulas} The main insight of the algebraic approach to graph reductions is that questions about graph reductions reduce to questions about ranks of submatrices of the adjacency matrix, considered in the field $\textbf{F}_2$. We shall use the following terminology.
\begin{definition}
If $G$ is a graph, and $W \subset V$ is a subset of the vertices of $G$, then the \emph{rank of $W$ in $G$}, $\emph{rank}_G(W)$, is the rank of the bilinear form $\mathcal{E}$ restricted to $\langle W \rangle$. The \emph{nullity} of $W$ is $|W| - \emph{rank}_G(W)$. The set $W$ is called \emph{singular} if it has positive nullity, and \emph{non-singular} otherwise. \end{definition}
Recall that the rank of the bilinear form $\mathcal{E}$ restricted to $\langle W \rangle$ is equal to the rank of the submatrix of the adjacency matrix obtained by considering only the rows and columns corresponding to vertices in $W$.
We shall sometimes refer to the rank or nullity of $G$, by which we shall simply mean the rank or nullity of the full vertex set $V$. We observe that the rank of the graph $G$ is referred to as the \emph{binary rank} of $G$ in \cite{ggt}, in order to emphasize that the adjacency matrix is considered in $\textbf{F}_2$. We will sometimes wish to consider ranks of submatrices of the adjacency matrix, thus we use the following definition as well.
\begin{definition}
If $G$ is a graph, and $W_1,W_2 \subset V$ are two subsets of the vertices of $G$, then $\emph{rank}_G(W_1,W_2)$ denotes the rank of the submatrix of the adjacency matrix with rows $W_1$ and columns $W_2$. More intrinsically, this is the rank of the map $\langle W_1 \rangle \rightarrow \langle W_2 \rangle^\ast$ induced by $\mathcal{E}$, where $\langle W_2 \rangle^*$ is the dual vector space of $\langle W_2 \rangle$. \end{definition}
\begin{theorem}\label{ranksOfReductions}
If $G$ is a graph, $W$ is a reducible set of vertices in $G$, and $W_1,W_2 \subset V \backslash W$ are two sets of vertices in $\Gamma_W(G)$, then \begin{equation}
\emph{rank}_{\Gamma_W(G)}(W_1,W_2) = \emph{rank}_G(W \cup W_1, W \cup W_2) - \emph{rank}_G(W). \end{equation} \end{theorem} \begin{proof}
Let $\mathcal{W} = \langle W \rangle$, and let $\mathcal{V}' \subset \mathcal{W}^{\perp \mathcal{E}}$ be a complementary subspace to $\mathcal{W}$, in the sense that $\mathcal{V} = \mathcal{W} \oplus \mathcal{V}'$ (as discussed in Section \ref{redDefinitions}). Then there is a projection $\pi : \mathcal{V} \rightarrow \mathcal{V}'$ along $\mathcal{W}$ (i.e. the kernel of $\pi$ is $\mathcal{W}$ and $\pi^2 = \pi$). When $\pi$ is restricted to $\langle V \backslash W \rangle$, it is an isomorphism to $\mathcal{V}'$. By Definition \ref{reductionFormDefn}, the bilinear form $\mathcal{E}^\mathcal{W}$ is given by $\mathcal{E}^\mathcal{W}(v_1,v_2) = \mathcal{E}(\pi(v_1),\pi(v_2))$. Thus $\textrm{rank}_{\Gamma_W(G)} (W_1,W_2)$ is equal to the rank of the bilinear form $\mathcal{E}$ restricted to $\pi(\langle W_1 \rangle ) \times \pi(\langle W_2 \rangle)$. Let $B$ be the matrix of $\mathcal{E}$ restricted to $\pi(\langle W_1 \rangle ) \times \pi(\langle W_2 \rangle)$, and let $C$ be the matrix of $\mathcal{E}$ restricted to $\langle W \rangle \times \langle W \rangle$. Then because $\mathcal{V}' \subset \mathcal{W}^{\perp \mathcal{E}}$, the matrix of $\mathcal{E}$ restricted to $(\pi(\langle W_1 \rangle) + \langle W \rangle) \times (\pi(\langle W_2 \rangle) + \langle W \rangle)$ has a ``diagonal'' block form $\smat{B}{0}{0}{C}$, hence its rank is $\textrm{rank}(B) + \textrm{rank}(C)$, which is $\textrm{rank}_{\Gamma_W(G)}(W_1,W_2) + \textrm{rank}_G(W)$. Now, since $\pi$ is the projection along $\langle W \rangle$, it follows that $\pi(\langle W_1 \rangle) + \langle W \rangle = \langle W_1 \rangle + \langle W \rangle = \langle W \cup W_1 \rangle$, and similarly $\pi(\langle W_2 \rangle) + \langle W \rangle = \langle W \cup W_2 \rangle$. Therefore $\textrm{rank}_{\Gamma_W(G)}(W_1,W_2) + \textrm{rank}_G(W) = \textrm{rank}_G(W \cup W_1, W \cup W_2)$, which gives the theorem.
\end{proof}
\begin{corollary}\label{rankCorollary}
$\textrm{rank}(G) = \textrm{rank}_G(W) + \textrm{rank}(\Gamma_W(G))$. \end{corollary} \begin{proof}
Take $W_1 = W_2 = V \backslash W$ in the theorem. \end{proof}
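Both the theorem and the corollary can be checked on a small example; a self-contained sketch (helper names are ours), using the 4-vertex graph with edges $v_1 v_2$, $v_1 v_3$, $v_2 v_4$ and $W = \{v_1, v_2\}$:

```python
def gf2_rank(M):
    """Rank of a 0/1 matrix over F_2, by Gaussian elimination."""
    M = [row[:] for row in M]
    rank = 0
    for col in range(len(M[0]) if M else 0):
        piv = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                M[r] = [(x + y) % 2 for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

def sub(A, rows, cols):
    """Submatrix with the given row and column index lists."""
    return [[A[i][j] for j in cols] for i in rows]

A = [[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]]
red = [[0, 1], [1, 0]]      # Gamma_W(G) on (v3, v4), computed by the gdr formula

# Theorem: rank_{Gamma_W(G)}(W1, W2) = rank_G(W u W1, W u W2) - rank_G(W),
# here with W1 = {v3}, W2 = {v4}:
lhs = gf2_rank(sub(red, [0], [1]))
rhs = gf2_rank(sub(A, [0, 1, 2], [0, 1, 3])) - gf2_rank(sub(A, [0, 1], [0, 1]))
assert lhs == rhs == 1

# Corollary: rank(G) = rank_G(W) + rank(Gamma_W(G)).
assert gf2_rank(A) == gf2_rank(sub(A, [0, 1], [0, 1])) + gf2_rank(red)
```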
\begin{corollary}\label{rankstellvertices}
If $W$ is a reducible vertex set in $G$ and $v,w \in V \backslash W$, then the edge $(v,w)$ is present in $\Gamma_W(G)$ if and only if \begin{equation}
\textrm{rank}_G(W \cup \{v\},W \cup \{w\}) > \rkg{W}. \end{equation} In case $W$ is nonsingular, this is equivalent to saying that $\emph{det}(A_{W \cup \{v\},W \cup \{w\}}) \neq 0$. \end{corollary} \begin{proof}
The edge $(v,w)$ is present in $\Gamma_W(G)$ if and only if $\textrm{rank}_{\Gamma_W(G)}(\{v\},\{w\}) = 1$, and otherwise this rank is $0$. The result now follows immediately. \end{proof}
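For a nonsingular $W$ the criterion becomes a determinant parity check, which can be sketched directly. The example below (our own, not from the text) uses the 4-vertex graph with edges $v_1 v_2$, $v_1 v_3$, $v_2 v_4$ and $W = \{v_1, v_2\}$:

```python
A = [[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]]
rows, cols = [0, 1, 2], [0, 1, 3]            # rows W u {v3}, columns W u {v4}
B = [[A[i][j] for j in cols] for i in rows]
# 3x3 determinant by cofactor expansion along the first row
det = (B[0][0] * (B[1][1] * B[2][2] - B[1][2] * B[2][1])
     - B[0][1] * (B[1][0] * B[2][2] - B[1][2] * B[2][0])
     + B[0][2] * (B[1][0] * B[2][1] - B[1][1] * B[2][0]))
assert det % 2 == 1   # odd determinant: the edge (v3, v4) is present in Gamma_W(G)
```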
From Corollary \ref{rankstellvertices}, we see that the entire adjacency matrix of any graph reduction of $G$ can be obtained from simply knowing ranks of submatrices of the adjacency matrix. If one only considers nonsingular reductions, then it suffices to know determinants of submatrices. This leads us to consider a third abstraction for understanding reductions of simple graphs with loops: the reducibility poset.
\subsection{Path invariance and the reducibility poset}\label{pathInvarianceSection} One of the most basic facts about graph reductions that comes to light in the algebraic formulation is the following path invariance property.
\begin{theorem}\label{pathInvariance} Suppose $W_1, W_2$ are two disjoint sets of vertices in $G$. Then $W_2$ is reducible in $\Gamma_{W_1}(G)$ if and only if $W_1 \cup W_2$ is reducible in $G$. In this case, \begin{equation}
\Gamma_{W_1 \cup W_2} (G) = \Gamma_{W_2} \circ \Gamma_{W_1} (G). \end{equation} \end{theorem}
The proof we give here deduces the theorem easily from Theorem \ref{ranksOfReductions}. However, we point out that it is not difficult to prove the theorem directly from the definitions in Section \ref{redDefinitions}. Indeed, the result is intuitive if one regards graph reduction as ``forgetting'' vertices as described in Section \ref{motivation}, and it is simply necessary to properly formalize this intuition. However, the notation is cumbersome, so we have chosen to give the proof in terms of ranks of submatrices instead.
\begin{proof} Observe that if $W \subset V$ is a set of vertices of $G$, then $W$ is reducible if and only if $\textrm{rank}_G(W,V) = \rkg{W}$. This follows by writing the adjacency matrix in block form $A = \smat{P}{Q}{Q^T}{R}$ (with $P$ the adjacency matrix of the induced subgraph on $W$) and observing that the column space of $\row{P}{Q}$ is equal to the column space of $P$ if and only if the column space of $Q$ is contained in the column space of $P$, which is true if and only if $W$ is reducible in $G$.
From this, we see that assuming $W_1$ is reducible in $G$, $W_2$ is reducible in $\Gamma_{W_1}(G)$ if and only if $\textrm{rank}_{\Gamma_{W_1}(G)}(W_2,V \backslash W_1) = \textrm{rank}_{\Gamma_{W_1}(G)}(W_2)$. By Theorem \ref{ranksOfReductions}, this is true if and only if $\textrm{rank}_G (W_1 \cup W_2, V) = \textrm{rank}_G (W_1 \cup W_2)$, which is true if and only if $W_1 \cup W_2$ is reducible in $G$. This establishes the first part of the theorem.
Now suppose $W_1$ and $W_1 \cup W_2$ are reducible in $G$, and $v,w \in V \backslash (W_1 \cup W_2)$. By Corollary \ref{rankstellvertices}, $(v,w)$ is an edge in $\Gamma_{W_2} \circ \Gamma_{W_1} (G)$ if and only if $\textrm{rank}_{\Gamma_{W_1}(G)} (W_2 \cup \{v\}, W_2 \cup \{w\}) > \textrm{rank}_{\Gamma_{W_1}(G)}(W_2)$. But by Theorem \ref{ranksOfReductions}, the left side of this inequality is $\textrm{rank}_G(W_1 \cup W_2 \cup \{v\}, W_1 \cup W_2 \cup \{w\}) - \rkg{W_1}$, while the right side is $\textrm{rank}_G(W_1 \cup W_2) - \rkg{W_1}$. Thus $(v,w)$ is an edge in $\Gamma_{W_2} \circ \Gamma_{W_1} (G)$ if and only if $\textrm{rank}_{G}(W_1 \cup W_2 \cup \{v\},W_1 \cup W_2 \cup \{w\}) > \textrm{rank}_{G}(W_1 \cup W_2)$. By Corollary \ref{rankstellvertices}, this is the case if and only if $(v,w)$ is an edge in $\Gamma_{W_1 \cup W_2}(G)$. Thus $\Gamma_{W_1 \cup W_2} (G)$ and $\Gamma_{W_2} \circ \Gamma_{W_1} (G)$ have the same adjacency matrix, and thus they are the same graph. \end{proof}
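Path invariance can also be tested computationally with a full implementation of $\Gamma_W$ over $\textbf{F}_2$; the following is a sketch, with the solver and function names our own:

```python
def matmul2(A, B):
    """Matrix product over F_2."""
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def gf2_solve(P, Q):
    """Solve P X = Q over F_2; return X, or None if some column of Q
    lies outside the column space of P."""
    n = len(P)
    m = len(Q[0]) if Q else 0
    aug = [P[i][:] + Q[i][:] for i in range(n)]
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, n) if aug[i][c]), None)
        if piv is None:
            continue
        aug[r], aug[piv] = aug[piv], aug[r]
        for i in range(n):
            if i != r and aug[i][c]:
                aug[i] = [(x + y) % 2 for x, y in zip(aug[i], aug[r])]
        pivots.append(c)
        r += 1
    if any(not any(aug[i][:n]) and any(aug[i][n:]) for i in range(n)):
        return None                   # Q not in the image of P: W not reducible
    X = [[0] * m for _ in range(n)]
    for i, c in enumerate(pivots):
        X[c] = aug[i][n:]
    return X

def reduce_graph(A, W):
    """Gamma_W: adjacency matrix on V \\ W, or None if W is not reducible."""
    rest = [i for i in range(len(A)) if i not in W]
    Wl = sorted(W)
    P = [[A[i][j] for j in Wl] for i in Wl]
    Q = [[A[i][j] for j in rest] for i in Wl]
    R = [[A[i][j] for j in rest] for i in rest]
    M = gf2_solve(P, Q)
    if M is None:
        return None
    MT = [list(row) for row in zip(*M)]
    S = matmul2(matmul2(MT, P), M)    # M^T P M
    k = len(rest)
    return [[(R[i][j] + S[i][j]) % 2 for j in range(k)] for i in range(k)]

A = [[1, 1, 1], [1, 1, 0], [1, 0, 1]]
step1 = reduce_graph(A, {0})          # gpr on v1; vertices (v2, v3) remain
assert step1 == [[0, 1], [1, 0]]
# Gamma_{W2} . Gamma_{W1} agrees with Gamma_{W1 u W2}:
assert reduce_graph(step1, {0, 1}) == reduce_graph(A, {0, 1, 2}) == []
# ...and non-reducibility propagates in both directions as well:
assert reduce_graph(A, {0, 1}) is None and reduce_graph(step1, {0}) is None
```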
Theorem \ref{pathInvariance} demonstrates that the reducible sets of $G$ determine the reducible sets of the results of every graph reduction of $G$. In fact, if we consider the class of all graphs that can be obtained by a graph reduction from $G$, then the graph reductions from one such graph to another are in bijection with inclusions of reducible vertex sets in $G$. To be precise, if $W_1, W_2$ are two reducible vertex sets such that $W_1 \subset W_2$, then there is a unique graph reduction taking $\Gamma_{W_1}(G)$ to $\Gamma_{W_2}(G)$.
We also point out that we can express Theorem \ref{pathInvariance} in a category-theoretic way: it shows that we can define a category whose objects are simple graphs with loops, and whose morphisms are graph reductions. The fact that the composition of two graph reductions is a graph reduction makes compositions of morphisms in this category well-defined. The full subcategory of a given graph $G$ and the results of all graph reductions of $G$ is thus equivalent to the poset of reducible subsets of $V$ (i.e. the category whose objects are these subsets, and whose morphisms are inclusion maps). Thus the poset of reducible vertex sets is an important invariant of a graph.
\begin{definition}
The \emph{reducibility poset} $\mathcal{R}(G)$ of a graph $G$ is the collection of reducible subsets of vertices $S \subset V$. The \emph{reducibility poset at level $n$}, $\mathcal{R}_n(G)$, is the collection of reducible subsets $S \subset V$ with nullity $n$ in $G$. The subsets $\mathcal{R}_n(G) \subset \mathcal{R}(G)$ will be called the \emph{levels} of the reducibility poset. The first level $\mathcal{R}_0(G)$ will be called the \emph{pivotal poset} of $G$. \end{definition}
\begin{example} The graph from Figure \ref{changeBasisExample} has reducibility poset shown on the left of Figure \ref{fig:posets}, on page \pageref{fig:posets}. \end{example}
Observe that if $G$ is nonsingular, then $\mathcal{R}(G) = \mathcal{R}_0(G)$. Also observe that inclusions within the reducibility poset always go to levels at least as high, never to lower levels. This can be proved using the results of the previous section, but it will also follow from the fact that the nullity of a reducible vertex set is precisely the number of times the negative rule must be used in a combinatorial reduction strategy removing those vertices, which we shall establish in Theorem \ref{ngrNullity}.
The primary reason that the reducibility poset is of interest is that it is possible to reconstruct the graph from the reducibility poset along with its levels; in fact, the pivotal poset suffices.
\begin{theorem}\label{posetDeterminesGraph}
A graph $G$ is uniquely determined by $V$ and $\mathcal{R}_0(G)$. \end{theorem} \begin{proof}
Observe that for any $v \in V$, $v$ has a loop if and only if $\{v\} \in \mathcal{R}_0(G)$. Thus the diagonal of the adjacency matrix can be recovered from the pivotal poset. Now for any $v,w \in V$, $v \neq w$, if the induced subgraph on vertices $\{v,w\}$ is $\smat{a}{b}{b}{c}$, then $\{v,w\} \in \mathcal{R}_0(G)$ if and only if $ac-b^2 = 1$. Since $a$ and $c$ are diagonal entries, and $b^2 = b$ in $\textbf{F}_2$, $b$ can also be recovered from the pivotal poset. Thus the entire adjacency matrix can be obtained in this way. \end{proof}
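The reconstruction in the proof can be carried out mechanically; a sketch (the representation of $\mathcal{R}_0$ as a set of frozensets and the function name are ours):

```python
def reconstruct(n, R0):
    """Rebuild the adjacency matrix over F_2 from the singletons and
    pairs of the pivotal poset R0 (a set of frozensets of vertices)."""
    A = [[0] * n for _ in range(n)]
    for v in range(n):
        A[v][v] = 1 if frozenset([v]) in R0 else 0     # loop iff {v} in R0
    for v in range(n):
        for w in range(v + 1, n):
            a, c = A[v][v], A[w][w]
            # {v, w} in R0 iff det [[a, b], [b, c]] = ac - b is odd,
            # i.e. iff b = 1 + ac over F_2
            pair = frozenset([v, w]) in R0
            A[v][w] = A[w][v] = (1 + a * c) % 2 if pair else (a * c) % 2
    return A

# Pivotal-poset data (singletons and pairs) of the example graph
# [[1,1,1],[1,1,0],[1,0,1]]: every vertex has a loop, and only the
# pair {v2, v3} has a nonsingular 2x2 block.
R0 = {frozenset([0]), frozenset([1]), frozenset([2]), frozenset([1, 2])}
assert reconstruct(3, R0) == [[1, 1, 1], [1, 1, 0], [1, 0, 1]]
```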
\begin{definition}
A pair $(\mathcal{R}_0, V)$, where $V$ is a finite set and $\mathcal{R}_0$ is a collection of subsets of $V$, is called \emph{realizable} if there exists a graph $G$ on vertices $V$ with pivotal poset $\mathcal{R}_0$. \end{definition}
In principle, we can study all graph reductions by studying pivotal posets. However, this requires classifying those posets which can occur as pivotal posets. The pivot operation provides some intriguing results in this direction; we will take up that subject in Section \ref{pivots}.
\section{Combinatorial rules as minimal reductions}\label{combMinimal} We demonstrate in this section that the general graph reductions we defined in Section \ref{algebraic} do in fact generalize the combinatorial graph reduction rules defined in Section \ref{combinatorial}, and show that the combinatorial reduction rules can be characterized simply as minimal graph reductions. We shall then give a combinatorial interpretation to the nullity of a vertex set in a graph, which will resolve two questions posed by Harju, Li, and Petre \cite{harju} about the combinatorial reduction rules. First we show that the combinatorial reduction rules are, in fact, graph reductions as defined in Section \ref{algebraic}.
\begin{lemma}
Suppose that $W$ is the domain of a combinatorial reduction rule $\gamma$ on $G$ (i.e. $W$ is a single vertex with a loop, two adjacent vertices without loops, or an isolated vertex without a loop). Then $W$ is reducible, and $\Gamma_W$ is the same as $\gamma$, in the sense that $\gamma(G) = \Gamma_W (G)$. \end{lemma} \begin{proof}
This follows by comparing formulas \ref{combMatrix1}, \ref{combMatrix2}, and \ref{combMatrix3} to Proposition \ref{matrixGamma}. \end{proof}
We now demonstrate that the combinatorial reduction rules in fact arise naturally from the notion of graph reduction, in the sense of the following theorem.
\begin{definition}
A nonempty subset $W$ of vertices is \emph{minimally reducible} if it is reducible, and none of its nonempty proper subsets are reducible. A \emph{minimal graph reduction} is a reduction $\Gamma_W$ such that $W$ is minimally reducible. \end{definition}
\begin{theorem}
The minimal reductions of a graph $G$ are precisely the applicable combinatorial reduction rules $\textrm{gpr}$, $\textrm{gdr}$, and $\textrm{gnr}$. \end{theorem} \begin{proof}
It is easy to see that a combinatorial reduction rule is in fact minimal: the domains of $\textrm{gpr}$ and $\textrm{gnr}$ have no nonempty proper subsets, and neither nonempty proper subset of the domain of $\textrm{gdr}$ is reducible. Thus it remains to show that if $W$ is any reducible vertex set in $G$, $W$ has a reducible subset that is the domain of a combinatorial reduction rule. If $W$ contains any positive vertices (i.e. vertices with loops), then any of these vertices is a reducible set by itself. If $W$ contains only negative vertices, and there is at least one edge between vertices of $W$, then the vertices of this edge are the domain of an applicable double rule. The only remaining case is if $W$ contains only negative vertices, and there are no edges among the vertices of $W$. Then $G$ has adjacency matrix $\smat{\textbf{0}}{Q}{Q^T}{R}$ in block form (with rows and columns corresponding to $W$ coming first). Since $W$ is reducible, the row space of $Q$ is contained in the row space of $\textbf{0}$, thus $Q = 0$, and all the vertices of $W$ are isolated, thus any one of them is the domain of an applicable negative rule. Thus in all cases, there is a reducible subset of $W$ that is the domain of an applicable combinatorial reduction rule. \end{proof}
\begin{corollary}
Every graph reduction is a composition of combinatorial reduction rules. \end{corollary} \begin{proof}
This follows by induction on the size of $W$. \end{proof}
\begin{corollary}
Combinatorial reduction strategies are path invariant, in the sense that any two strategies that remove the same set of vertices result in the same graph. \end{corollary} \begin{proof}
If both strategies remove vertex set $W$, then both are equivalent to $\Gamma_W$, by Theorem \ref{pathInvariance}. \end{proof}
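The three combinatorial rules, and the path-invariance corollary, can be checked mechanically. Below is a minimal sketch (hypothetical helper names; adjacency matrices as 0/1 nested lists over $\textbf{F}_2$), with each rule written as the Schur complement of the removed block, as the matrix formulas cited in the lemma above suggest. The final assertions remove the vertex set $\{0,1,2\}$ from a small graph by two different strategies and obtain the same result.

```python
def _delete(A, drop):
    """Delete the rows and columns in `drop`."""
    keep = [i for i in range(len(A)) if i not in drop]
    return [[A[i][j] for j in keep] for i in keep]

def gpr(A, v):
    """Positive rule: v carries a loop; complement adjacencies (and loops)
    among the neighbours of v, then delete v (Schur complement of [1])."""
    assert A[v][v] == 1
    n = len(A)
    B = [[A[i][j] ^ (A[i][v] & A[v][j]) for j in range(n)] for i in range(n)]
    return _delete(B, {v})

def gdr(A, v, w):
    """Double rule: loopless adjacent v, w; Schur complement of the block
    [[0,1],[1,0]], which is its own inverse over F_2."""
    assert A[v][v] == 0 == A[w][w] and A[v][w] == 1
    n = len(A)
    B = [[A[i][j] ^ (A[i][v] & A[w][j]) ^ (A[i][w] & A[v][j])
          for j in range(n)] for i in range(n)]
    return _delete(B, {v, w})

def gnr(A, v):
    """Negative rule: isolated loopless v; simply delete it."""
    assert all(A[v][j] == 0 for j in range(len(A)))
    return _delete(A, {v})

# Two strategies removing vertices {0, 1, 2} from the same 4-vertex graph
# (indices below refer to the current, shrunken matrix after each step).
A = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 1, 1],
     [1, 0, 1, 0]]
first = gpr(gdr(A, 0, 1), 0)        # gdr on {0,1}, then gpr on (old) vertex 2
second = gpr(gpr(gpr(A, 2), 1), 0)  # gpr on 2, then on 1, then on 0
assert first == second              # path invariance
```

Both strategies leave the single loopless vertex $3$, illustrating that the order of removal is immaterial.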
\begin{example} Figure \ref{fig:pathInvarianceExample} shows an example of two different ways to factor a graph reduction into combinatorial reduction rules, and also illustrates the path-invariance property of combinatorial reduction rules. \end{example}
\begin{figure}
\caption{Two combinatorial reduction strategies removing the same vertices.}
\label{pathInvarianceExample}
\label{fig:pathInvarianceExample}
\end{figure}
Among the three combinatorial reduction operations, the negative rule is the only one which is singular (in the sense that it removes a vertex set whose induced subgraph has a singular adjacency matrix). This gives it some special properties, which we shall now study.
\subsection{Nullity and the negative rule}\label{nullitySection} Harju et al.\ \cite{harju} ask whether it is true that every combinatorial reduction strategy of a given signed graph applies the negative rule the same number of times, and also whether there is a characterization of those signed graphs that avoid the negative rule in all reductions. We are now able to answer both questions.
\begin{theorem} \label{ngrNullity}
If $W$ is a reducible vertex set in $G$, then the nullity of $W$ in $G$ is equal to the number of times the negative rule is applied in any combinatorial reduction strategy removing the vertices of $W$. \end{theorem} \begin{proof}
By Corollary \ref{rankCorollary} and the definition of nullity, a reduction $\Gamma_W$ reduces the nullity of $G$ by exactly the nullity of the vertex set $W$. If $W$ is a single negative vertex, then $W$ has nullity $1$, and if $W$ is a positive vertex or a pair of adjacent negative vertices, then $W$ has nullity $0$. The result now follows by induction on the number of reduction rules in the combinatorial reduction strategy. \end{proof} \begin{corollary}
Any combinatorial reduction strategy removing the vertices $W$ uses the same number of negative rules. \end{corollary}
Recall that all set inclusions in the reducibility poset $\mathcal{R}(G)$ correspond to reductions of graphs obtainable from $G$ by graph reduction. Theorem \ref{ngrNullity} now shows that for all such inclusions $S \subset T$, where $S \in \mathcal{R}_m(G)$ and $T \in \mathcal{R}_n(G)$, $m \leq n$ and the number of negative rules used in any combinatorial reduction strategy realizing this reduction is $n-m$.
Note that we can also give a combinatorial characterization of those signed graphs that avoid $\textrm{gnr}$. The statement that $A$ is nonsingular means that it has trivial nullspace. Since any vector in $\textbf{F}_2^n$ can be interpreted as a subset of the vertices of the graph, we could state the nonsingularity of $A$ as follows: \textit{$G$ avoids $\emph{gnr}$ if and only if there is no nonempty subset $S$ of $V$ such that each vertex $v \in V$ is adjacent to an even number of vertices in $S$}.
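This characterization is easy to test computationally. A minimal sketch (hypothetical helper names): `avoids_gnr` checks nonsingularity of the adjacency matrix over $\textbf{F}_2$, while `even_sets` enumerates, by brute force, the nonempty sets $S$ of the characterization, i.e. the nonzero null-space vectors of $A$.

```python
from itertools import combinations

def gf2_rank(rows):
    """Rank over F_2 of rows given as integer bitmasks."""
    rank, rows = 0, list(rows)
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            low = pivot & -pivot
            rows = [r ^ pivot if r & low else r for r in rows]
    return rank

def avoids_gnr(A):
    """G avoids the negative rule iff A is nonsingular over F_2."""
    n = len(A)
    return gf2_rank(sum(A[i][j] << j for j in range(n)) for i in range(n)) == n

def even_sets(A):
    """Nonempty S such that every vertex is adjacent to an even number of
    vertices of S; these are exactly the nonzero null-space vectors of A."""
    n = len(A)
    return [set(S)
            for k in range(1, n + 1)
            for S in combinations(range(n), k)
            if all(sum(A[v][s] for s in S) % 2 == 0 for v in range(n))]
```

For a loopless triangle, the whole vertex set is such an $S$ (every vertex has exactly two neighbours in it), so the triangle does not avoid $\textrm{gnr}$.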
Ehrenfeucht et al.\ \cite{ehrenfeucht} give a similar result for the string formulation of gene assembly. They characterize the strings which avoid the string negative rule in all of their reductions as those which have no cycles (see their paper for definitions). Our characterization is related in the sense that cycles in strings correspond to elements of the null space of the adjacency matrix.
If we restrict ourselves to negative graphs (graphs without positive vertices), which can be understood as ordinary undirected graphs with $0$ on the diagonal of their adjacency matrices, this theorem gives a combinatorial interpretation of the binary rank of a graph. This characterization is identical to one given by Godsil and Royle \cite{ggt}, who consider \textit{rank-two reductions} that coincide with $\textrm{gdr}$.
\section{Pivots and retrographs}\label{pivots} We now consider the relation between graph reductions as defined above and the pivot operation on simple graphs with loops. The pivot operation is a combinatorial operation on graphs which does not remove any vertices and preserves the binary rank of the graph. The pivot of a matrix is defined by Geelen \cite{geelen}, who considers pivots of matrices over both $\textbf{F}_2$ and $\textbf{R}$. The properties we study here are closely related to results found by Brijder, Harju, and Hoogeboom \cite{brijder}, who also observe the connection between pivots and nonsingular reductions of signed graphs.
In this section, we demonstrate that pivot operations are easy to understand in terms of the pivotal poset of a graph, and use this to derive the relation between pivots and graph reductions. We also consider a special case of the pivot operation, which we call the retrograph of a graph, which is defined for nonsingular graphs and has an interesting combinatorial relation to the original graph.
As has previously been observed in \cite{brijder}, pivots are well-suited to studying nonsingular reductions. It is more difficult to understand their effect on the reducibility poset and graph reductions in general; we shall illustrate this point with an example at the end of the section.
We observe that over $\textbf{F}_2$ the notation is simplified by the fact that addition and subtraction are identical; when pivots are considered for matrices over fields of characteristic other than $2$, slight modifications are needed. However, the modifications necessary to generalize our results to all fields are not difficult.
\subsection{Pivots} If a matrix $A$ has block form $\smat{P}{Q}{R}{S}$, and $X$ is the set of the first $m$ basis vectors (so that $A$ restricted to columns and rows corresponding to $X$ is $P$), then the \emph{pivot} $A \ast X$ is
\begin{equation}
A \ast X = \mat{-P^{-1}}{P^{-1}Q}{RP^{-1}}{S - RP^{-1}Q}. \end{equation}
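Over $\textbf{F}_2$ the signs in this formula vanish, and the pivot can be computed directly from the block decomposition. The following is a minimal sketch (hypothetical helper names; matrices as 0/1 nested lists indexed by vertices, with the result reassembled in the original vertex order so that successive pivots compose):

```python
def gf2_inv(P):
    """Invert a 0/1 matrix over F_2 by Gauss-Jordan elimination."""
    n = len(P)
    M = [list(row) + [int(i == j) for j in range(n)] for i, row in enumerate(P)]
    for c in range(n):
        r = next(i for i in range(c, n) if M[i][c])  # StopIteration if singular
        M[c], M[r] = M[r], M[c]
        for i in range(n):
            if i != c and M[i][c]:
                M[i] = [x ^ y for x, y in zip(M[i], M[c])]
    return [row[n:] for row in M]

def gf2_mul(A, B):
    """Matrix product over F_2."""
    return [[sum(a & b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def pivot(A, X):
    """A * X over F_2: blocks P^{-1}, P^{-1}Q, RP^{-1}, S + RP^{-1}Q."""
    n = len(A)
    X = list(X)
    Y = [j for j in range(n) if j not in X]
    sub = lambda rows, cols: [[A[i][j] for j in cols] for i in rows]
    P, Q, R, S = sub(X, X), sub(X, Y), sub(Y, X), sub(Y, Y)
    Pi = gf2_inv(P)
    PiQ, RPi = gf2_mul(Pi, Q), gf2_mul(R, Pi)
    RPiQ = gf2_mul(RPi, Q)
    top = [pi + piq for pi, piq in zip(Pi, PiQ)]
    bot = [rpi + [s ^ t for s, t in zip(srow, trow)]
           for rpi, srow, trow in zip(RPi, S, RPiQ)]
    B = top + bot            # rows/cols of B are in the order X + Y
    pos = {v: i for i, v in enumerate(X + Y)}
    return [[B[pos[i]][pos[j]] for j in range(n)] for i in range(n)]
```

On a small example one can check that pivoting twice by the same set is the identity, and that successive pivots compose by symmetric difference, as proved below.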
Some sources give a different definition, differing from ours in the signs of the first block column. The definition we use is the definition from \cite{brijder}, which has the virtue of preserving the symmetry of a matrix. Both definitions are, of course, equivalent over $\textbf{F}_2$. Notice that if $A$ is the adjacency matrix of $G$ over $\textbf{F}_2$, and $X$ a subset of vertices, then $A \ast X$ gives the adjacency matrix of a graph which contains the reduction $\Gamma_X(G)$ as a subgraph. This observation is made in \cite{brijder}, where several formulas are given expressing the entries of the matrix $A \ast X$ in terms of determinants of submatrices of $A$, and a path invariance result is derived. We shall slightly generalize these formulas to formulas for ranks of submatrices in the pivot. In order to do this, we introduce a useful notation, which makes many properties of the pivot immediately visible: the notion of a pair-class of matrices.
\begin{definition}
Two pairs of matrices $(A_1,B_1),(A_2,B_2)$ are \emph{row-equivalent} if there exists an invertible matrix $M$ such that $(MA_1, MB_1) = (A_2,B_2)$. Denote by $[A,B]$ the equivalence class of pairs of matrices row-equivalent to $(A,B)$. These equivalence classes are called \emph{pair-classes} of matrices. A pair-class $[A,B]$ is called \emph{proper} if $A$ is nonsingular. \end{definition}
We observe that, as long as the block matrix $\row{A}{B}$ is full rank, the pair-class $[A,B]$ may be identified with its row space. In this setting, we can define the pair-class as a $1$-dimensional subspace in the $n^{th}$ wedge power of a $2n$-dimensional vector space, or as an element of a Grassmannian variety. Proper pair-classes $[I,A]$ then form a distinguished affine open set of this variety, and determinants of submatrices of $A$ appear as the coordinates in the Pl\"{u}cker embedding of the Grassmannian. Details of these notions may be found in \cite{harris}, Chapter 6. This perspective is what originally motivated this approach, and may provide geometric intuition, but we shall not discuss it further because it is not needed for this paper.
Any proper class $[A,B]$ may be written equivalently as $[I,A^{-1}B]$. Indeed, $(I,A^{-1}B)$ is the unique pair in the class $[A,B]$ with $I$ in the first entry, and such a pair exists if and only if the class $[A,B]$ is proper. Thus we have a bijection between square $n \times n$ matrices and proper classes of pairs of $n \times n$ matrices: to a matrix $A$ we may associate the pair-class $[I,A]$, and to a proper pair-class $[A,B]$ we may associate the matrix $A^{-1}B$. We shall define a pivot operation on pair-classes, and show that it is identical, via this correspondence, to the pivot operation as defined for matrices.
\begin{definition}
Suppose $A = \smat{P_1}{Q_1}{R_1}{S_1}$ and $B = \smat{P_2}{Q_2}{R_2}{S_2}$ are two matrices in block form, where $P_1$ and $P_2$ are $m \times m$ blocks and $X$ is the set of the first $m$ basis vectors. The \emph{pivot} of $[A,B]$ by $X$ is \begin{equation}
[A, B] \ast X = \left[ \smat{P_2}{Q_1}{R_2}{S_1}, \smat{-P_1}{Q_2}{-R_1}{ S_2} \right] \end{equation} \end{definition}
Thus the pivot simply exchanges the first $m$ columns of the two matrices and inverts the signs of these columns in the second matrix. Of course, in $\textbf{F}_2$, the signs are irrelevant, so we may simply view the pivot as exchanging blocks of columns. The following fact is immediate from this definition. Here $X \oplus Y$ denotes the symmetric set difference, $(X \cup Y) \backslash (X \cap Y)$.
\begin{lemma}\label{pivotPathInvariance}
For a pair-class over $\textbf{F}_2$, \begin{equation}
([A,B] \ast X) \ast Y = [A,B] \ast (X \oplus Y). \end{equation} \end{lemma}
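At the level of representatives this is immediate: over $\textbf{F}_2$ the pair-class pivot merely exchanges the columns indexed by $X$ between the two matrices, and exchanging twice in the overlap $X \cap Y$ restores those columns. A minimal sketch (hypothetical helper name):

```python
def pc_pivot(A, B, X):
    """Pivot of a pair-class representative (A, B) over F_2: exchange the
    columns indexed by X between the two matrices (no signs mod 2)."""
    A2 = [list(row) for row in A]
    B2 = [list(row) for row in B]
    for j in X:
        for i in range(len(A)):
            A2[i][j], B2[i][j] = B[i][j], A[i][j]
    return A2, B2
```

Pivoting by $X = \{0,1\}$ and then by $Y = \{1,2\}$ swaps column $1$ twice, so the net effect is a pivot by $X \oplus Y = \{0,2\}$, exactly as the lemma states.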
Now suppose $A$ is a matrix with block form $\smat{P}{Q}{R}{S}$. Then $A$ corresponds to the pair-class $[I,A]$. Pivoting by the first $m$ basis vectors, we obtain:
\begin{eqnarray*}
[I,A] \ast X &=& \left[ \smat{P}{0}{R}{I}, \smat{-I}{Q}{0}{S} \right], \end{eqnarray*}
which is proper if and only if $P$ is invertible. In this case, we can rewrite it by performing row operations as follows:
\begin{eqnarray*}
[I,A] \ast X &=& \left[ \smat{I}{0}{R}{I}, \smat{-P^{-1}}{P^{-1}Q}{0}{S} \right]\\ &=& \left[ \smat{I}{0}{0}{I}, \smat{-P^{-1}}{P^{-1}Q}{R P^{-1}}{S - R P^{-1} Q} \right]\\ &=& [I, A \ast X]. \end{eqnarray*}
Thus we see that pivots between proper pair-classes correspond precisely to the matrix pivot operation defined at the beginning. The main benefit of considering pivots on pair-classes rather than matrices is the ease with which we may consider ranks of submatrices in this framework. We also observe that this correspondence, combined with Lemma \ref{pivotPathInvariance}, gives an easy proof of the fact that for matrices over $\textbf{F}_2$, $(A \ast X) \ast Y = A \ast (X \oplus Y)$. In particular, this shows path invariance for sequences of pivot operations on disjoint vertex sets. This path invariance property is also observed in \cite{brijder}, in generalizing similar theorems from \cite{arratia}, \cite{genest}, and \cite{oum}. These papers approach the problem by analyzing determinants of submatrices of pivots, a computation which will follow (in $\textbf{F}_2$) as a special case of formulas we observe below for ranks of submatrices of pivots.
Another reason we have chosen to approach pivots from the perspective of pair-classes is the ease with which we can study the pivotal poset in this context.
\begin{definition}
Suppose $[A,B]$ is a pair-class of matrices, with rows and columns indexed by $V$, and $W_1, W_2 \subset V$ are two subsets such that $|W_1| = |W_2|$. Then $N_{W_1,W_2}([A,B])$ denotes the nullity (i.e. dimension of the kernel) of the matrix formed by the columns $V \backslash W_1$ from $A$ and the columns $W_2$ from $B$. \end{definition}
\begin{definition}
The \emph{pivotal poset} of a pair-class $[A,B]$, denoted $\mathcal{R}_0([A,B])$, is the set of subsets $W \subset V$ such that $N_{W,W}([A,B]) = 0$. \end{definition}
It is not difficult to verify that $N_{W_1,W_2}([A,B])$ and $\mathcal{R}_0([A,B])$ are well-defined, in the sense that they do not depend on which representative of the equivalence class $[A,B]$ is chosen. We have chosen these definitions due to their meaning in the case of proper pair-classes.
\begin{lemma}
For a proper pair-class $[I,A]$, $N_{W_1,W_2}([I,A])$ is the nullity of the submatrix of $A$ on rows $W_1$ and columns $W_2$. If $A$ is the adjacency matrix of a graph $G$, then $\mathcal{R}_0([I,A]) = \mathcal{R}_0(G)$. A pair-class $[A,B]$ is proper if and only if $\emptyset \in \mathcal{R}_0([A,B])$. \end{lemma} \begin{proof}
The first statement follows easily by writing $I$ and $A$ in block form. The second statement follows immediately from the first and the definition of a proper pair-class. \end{proof}
The following lemma can in fact be used to uniquely characterize the pivot operation over the field $\textbf{F}_2$. The corollary following the lemma will allow us to uniquely characterize the pivot operation on graphs. In Corollary \ref{pivotPosets} and elsewhere, we write $S \oplus X$, where $S$ is a set of sets of vertices and $X$ is a set of vertices, to indicate the set $\{Y \oplus X: Y \in S \}$.
\begin{lemma}\label{pivotNullity}
If $[A,B]$ is a pair-class, with rows and columns of $A,B$ indexed by $V$, then for any sets $W_1,W_2,X \subset V$ such that $|W_1| = |W_2|$, \begin{equation}\label{pivotLemmaEq1}
N_{W_1,W_2} ([A,B] \ast X) = N_{(W_1 \cap X^c) \cup (W_2^c \cap X), (W_2 \cap X^c) \cup (W_1^c \cap X)}([A,B]), \end{equation} where the superscript $c$ indicates complement in $V$. \end{lemma} \begin{proof}
The two sides of equation \ref{pivotLemmaEq1} denote the nullities of matrices which are identical up to permutation and signing of the columns, and which thus have the same nullity. \end{proof} \begin{corollary}\label{pivotPosets}
For any pair-class $[A,B]$ and subset $X \subset V$, \begin{equation}
\mathcal{R}_0([A,B] \ast X) = \mathcal{R}_0([A,B]) \oplus X. \end{equation} \end{corollary} \begin{proof}
This follows by considering the special case $W_1 = W_2$ in Lemma \ref{pivotNullity}, which is $N_{W,W}([A,B]\ast X) = N_{W \oplus X, W \oplus X}([A,B])$. \end{proof}
We observe that, over $\textbf{F}_2$, Lemma \ref{pivotNullity} may be regarded as a generalization of the following determinant formula, discussed in \cite{brijder} (proposition 3).
\begin{equation}
\textrm{det}(A \ast X)_{Y,Y} = \pm\, \textrm{det}(A_{X \oplus Y, X \oplus Y}) / \textrm{det}(A_{X,X}). \end{equation}
In fact, our method of pair-classes can also be used to establish this result over general fields without much effort, although the definition must be modified to consider two pairs equivalent only if they differ by multiplication on the left by a matrix of determinant $1$, rather than any invertible matrix. Over $\textbf{F}_2$, this distinction is nonexistent.
We have now established the main properties of pivots of pair-classes, which allow us to characterize the pivot of a graph combinatorially.
\begin{theorem}
If $G$ is a graph with pivotal poset $\mathcal{R}_0(G)$ and $W\subset V$ is a subset of the vertices of $G$, then $\mathcal{R}_0(G) \oplus W$ is realizable if and only if $W \in \mathcal{R}_0(G)$. If $G$ has adjacency matrix $A$, then the graph realizing $\mathcal{R}_0(G) \oplus W$ has adjacency matrix $A \ast W$. \end{theorem} \begin{proof}
Suppose that $\mathcal{R}_0(G) \oplus W$ is realizable by the graph $H$ with adjacency matrix $B$. Then $\emptyset \in \mathcal{R}_0(H) = \mathcal{R}_0(G) \oplus W$, hence $\emptyset \oplus W = W \in \mathcal{R}_0(G)$. Then observe that the pair-class $[I,B]$ must have the same pivotal poset as $[I,A] \ast W$, by Corollary \ref{pivotPosets}. Since $[I,A] \ast W = [I,A \ast W]$, we must have $B = A \ast W$.
Conversely, suppose that $W \in \mathcal{R}_0(G)$. Then the pair-class $[I, A \ast W] = [I,A] \ast W$ has pivotal poset $\mathcal{R}_0(G) \oplus W$, by a similar analysis to above. Since $A \ast W$ is symmetric, it is the adjacency matrix of a graph realizing the pivotal poset $\mathcal{R}_0(G) \oplus W$, as desired. \end{proof}
\begin{definition} \label{pivotDefinition}
If $G$ is a graph, and $W \in \mathcal{R}_0(G)$, then the graph whose pivotal poset is $\mathcal{R}_0(G) \oplus W$ is called the \emph{pivot of $G$ by $W$} and is denoted $P_W(G)$. \end{definition}
Observe that, given this characterization of the pivot of a graph, the following path invariance property is immediately clear. This is essentially the pivot analogue of Theorem \ref{pathInvariance}.
\begin{theorem}
If $G$ is a graph, and $W_1,W_2 \subset V$ are two sets of vertices with $W_1 \in \mathcal{R}_0(G)$, then $W_2 \in \mathcal{R}_0(P_{W_1}(G))$ if and only if $W_1 \oplus W_2 \in \mathcal{R}_0(G)$, and $P_{W_2} \circ P_{W_1} (G) = P_{W_1 \oplus W_2}(G)$. \end{theorem} \begin{proof}
The first assertion is a consequence of Definition \ref{pivotDefinition}. The second follows since the pivotal posets of $P_{W_2} \circ P_{W_1} (G)$ and $P_{W_1 \oplus W_2}(G)$ are both $\mathcal{R}_0(G) \oplus W_1 \oplus W_2$. \end{proof}
As we remarked at the beginning of this section, there is a close relation between pivots of graphs and graph reductions. This relation is expressed in the following theorem.
\begin{theorem}\label{pivotReduction}
If $G$ is a signed graph, and $W_1,W_2 \subset V$ are two sets of vertices such that $W_1$ and $W_1 \backslash W_2$ both lie in $\mathcal{R}_0(G)$, then \begin{equation}
I_{W_2} \circ P_{W_1} (G) = P_{W_1 \cap W_2} \circ \Gamma_{W_1 \backslash W_2} \circ I_{W_1 \cup W_2} (G), \end{equation} where $I_{U}(G)$ denotes the induced subgraph on vertices $U$ of $G$. \end{theorem} \begin{proof}
First we show that the expression on the right side of the equation is well-defined. $\Gamma_{W_1 \backslash W_2}$ applies to $I_{W_1 \cup W_2}(G)$ because we have assumed that $W_1 \backslash W_2$ is nonsingular in $G$. Now $P_{W_1 \cap W_2}$ applies to the graph $\Gamma_{W_1 \backslash W_2} \circ I_{W_1 \cup W_2} (G)$ if and only if $W_1 \cap W_2$ is nonsingular in $\Gamma_{W_1 \backslash W_2} \circ I_{W_1 \cup W_2} (G)$, which is true if and only if $(W_1 \cap W_2) \cup (W_1 \backslash W_2) = W_1$ is nonsingular in $I_{W_1 \cup W_2}(G)$, which follows from our assumptions.
Now the graphs described by the two sides of this equation have the same vertex set, so by Theorem \ref{posetDeterminesGraph}, it suffices to show that they have the same pivotal poset. Now observe that \begin{eqnarray*}
&& \mathcal{R}_0(I_{W_2} \circ P_{W_1} (G))\\ &=& \{ S \subset W_2: S \oplus W_1 \in \mathcal{R}_0(I_{W_1 \cup W_2}(G)) \}\\ &=& \{ S \subset W_2: (S \oplus (W_1 \cap W_2)) \cup (W_1 \backslash W_2) \in \mathcal{R}_0(I_{W_1 \cup W_2} (G)) \}\\ &=& \{ S \subset W_2: S \oplus (W_1 \cap W_2) \in \mathcal{R}_0(\Gamma_{W_1 \backslash W_2} \circ I_{W_1 \cup W_2} (G)) \}\\ &=& \mathcal{R}_0(\Gamma_{W_1 \backslash W_2} \circ I_{W_1 \cup W_2} (G)) \oplus (W_1 \cap W_2)\\ &=& \mathcal{R}_0(P_{W_1 \cap W_2} \circ \Gamma_{W_1 \backslash W_2} \circ I_{W_1 \cup W_2} (G)). \end{eqnarray*} Thus these two graphs have the same pivotal poset, and thus are equal. \end{proof}
There are two important special cases of Theorem \ref{pivotReduction}, expressed in the following corollary.
\begin{corollary}
If $U \cup W = V$, and $U, U \backslash W \in \mathcal{R}_0(G)$, then \begin{equation}\label{pivotsOfReductions}
I_W \circ P_U (G) = P_{U \cap W} \circ \Gamma_{U \backslash W} (G). \end{equation} If $U$ and $W$ are disjoint and $U \in \mathcal{R}_0(G)$, then \begin{equation}\label{pivotsHaveReductions}
I_W \circ P_U (G) = \Gamma_U (G). \end{equation} \end{corollary}
Equation \ref{pivotsHaveReductions} simply expresses the fact (already evident from the adjacency matrix) that, at least in the case of nonsingular reductions, we can find any graph reduction as an induced subgraph of a pivot. Equation \ref{pivotsOfReductions} demonstrates that all pivots of a reduction of a graph can be obtained by simply finding pivots of the original graph and examining an induced subgraph. A combinatorially interesting special case of this is studied in the next section.
Before concluding this section, we remark that while the pivotal poset of a graph is very well behaved under pivots, the reducibility poset is not. In other words, it is easy to characterize the nonsingular combinatorial reduction strategies of the pivot of a graph, but it is harder to characterize the stages at which the negative rule $\textrm{gnr}$ will apply. For example, consider the adjacency matrices in Figure \ref{fig:posets}. Arrows indicate set inclusions, and subscripts indicate the nullity of the vertex set. The pivotal poset is simply the sub-poset of those sets with subscript $0$.
\begin{figure}
\caption{Two adjacency matrices and their reducibility posets.}
\label{fig:posets}
\end{figure}
Let $G_1$ be the graph corresponding to the matrix $A$ in Figure \ref{fig:posets}, and $G_2$ be the graph corresponding to $A \ast \{1,2\}$. Note that of course the pivotal poset of $G_2$ is obtained by taking the symmetric set difference of each pivotal set in $G_1$ with $\{1,2\}$. However, $G_2$ is nonsingular, whereas $G_1$ has nullity $1$; thus the reducibility poset of $G_1$ includes singular sets (namely $\{2,3\}$ and $\{1,2,3\}$, which are both in $\mathcal{R}_1(G_1)$). This example shows that there cannot in general be a bijection between the reducibility poset of a graph and the reducibility poset of its pivot, since these posets may have different sizes.
\subsection{The retrograph} If $G$ is a nonsingular graph (that is, its entire vertex set $V$ is pivotal, which is to say that no combinatorial reduction strategies of $G$ use the negative rule), then we can consider the pivot of $G$ by its entire vertex set. The graph obtained in this way has interesting combinatorial properties, which we shall now consider.
\begin{definition}
If $G$ is a graph with vertex set $V$, and $V \in \mathcal{R}_0(G)$, then the \emph{retrograph} $G^R$ of $G$ is the pivot by the entire vertex set, $P_V(G)$. \end{definition}
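With $X = V$ the blocks $Q$, $R$, and $S$ in the pivot formula are empty, so over $\textbf{F}_2$ the adjacency matrix of $G^R$ is simply $A^{-1}$. A minimal sketch (hypothetical helper names), together with a brute-force check on a small example that $W$ is nonsingular in $G$ exactly when $V \backslash W$ is nonsingular in $G^R$, as the theorem below asserts:

```python
def gf2_inv(P):
    """Invert a 0/1 matrix over F_2 by Gauss-Jordan elimination."""
    n = len(P)
    M = [list(row) + [int(i == j) for j in range(n)] for i, row in enumerate(P)]
    for c in range(n):
        r = next(i for i in range(c, n) if M[i][c])  # StopIteration if singular
        M[c], M[r] = M[r], M[c]
        for i in range(n):
            if i != c and M[i][c]:
                M[i] = [x ^ y for x, y in zip(M[i], M[c])]
    return [row[n:] for row in M]

def retrograph(A):
    """G^R = P_V(G): with X = V the blocks Q, R, S are empty, so the
    pivot formula reduces to A^{-1} (signs vanish over F_2)."""
    return gf2_inv(A)

def nonsingular(A, W):
    """Is the induced submatrix on W nonsingular over F_2?"""
    try:
        gf2_inv([[A[i][j] for j in W] for i in W])
        return True
    except StopIteration:
        return False
```

In particular, since $(A^{-1})^{-1} = A$, the retrograph is an involution: $(G^R)^R = G$.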
The retrograph has two useful combinatorial properties, expressed by the following theorem.
\begin{theorem}\label{retrographTheorem}
If $G$ has retrograph $G^R$, then any successful combinatorial reduction strategy of $G$ applies in reverse to $G^R$. The retrograph $G^R$ is the unique graph whose combinatorial reduction strategies are the same as those of $G$ in reverse. If $W \subset V$ is a subset of the vertices of $G$, then $W$ is reducible in $G$ if and only if $V \backslash W$ is reducible in $G^R$, and in this case $(\Gamma_W (G))^R = I_{V \backslash W} (G^R)$. \end{theorem} \begin{proof}
The statement that a successful combinatorial reduction strategy of $G$ applies in reverse to $G^R$ is equivalent to the statement that a subset of the vertices of $G$ is reducible in $G$ if and only if its complement is reducible in $G^R$. This latter statement is equivalent to $\mathcal{R}_0(G^R) = \mathcal{R}_0(G) \oplus V$, which is true by the definition of the pivot of a graph. The statement $(\Gamma_W (G))^R = I_{V \backslash W} (G^R)$ follows from equation \ref{pivotsOfReductions} by letting the sets $W_1,W_2$ in the statement of equation \ref{pivotsOfReductions} be the sets $V,V\backslash W$, respectively. \end{proof}
Observe that the combinatorial description of $G^R$ (the graph whose successful combinatorial reduction strategies are the combinatorial reduction strategies of $G$ applied in reverse) cannot be used to describe an analogous retrograph for singular graphs $G$. This is because $\mathcal{R}_0(G^R) = \mathcal{R}_0(G) \oplus V$ implies that, since $\emptyset$ is pivotal in any graph, the whole vertex set $V$ must be pivotal in $G$ in order for $G^R$ to be well-defined. Thus the retrograph, defined by this combinatorial property, exists if and only if $G$ is nonsingular.
The first part of this theorem shows the first use of the retrograph: it allows us to look ahead and immediately see how a graph reduction strategy must end, without actually computing the entire reduction strategy. The second part of the theorem shows that, in some sense, the retrograph reduces the study of reductions of a given graph to the study of subgraphs of the retrograph. More precisely, if we wish to verify some statement on the result of every reduction of $G$, and we can formulate the statement in such a way that it is easy to verify on the retrograph, then we can simply verify this latter statement on all subgraphs of the retrograph, and avoid computing any reductions.
We also observe that Theorem \ref{retrographTheorem} can be restated in terms of the adjacency matrix to obtain an interesting matrix identity. In fact, this identity holds for matrices over any field, by considering pair-classes over fields other than $\textbf{F}_2$. It may also be proved directly by algebra.
\begin{corollary}
If $A$ is an invertible $n \times n$ matrix, $V$ is the set $1,2,\dots,n$, regarded as both the set of rows and the set of columns, and $X \subset V$, then $A\langle X,X \rangle$ is invertible if and only if $A^{-1} \langle V \backslash X, V \backslash X \rangle$ is invertible. In this case, if $A$ is written in block form as $\smat{P}{Q}{R}{S}$ with $A \langle X,X \rangle = P$, then the matrix $A^{-1} \langle V \backslash X, V \backslash X \rangle$ is equal to $(S - RP^{-1}Q)^{-1}$. \end{corollary}
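The identity is easy to check numerically over $\textbf{R}$. A minimal sketch (the matrix and the block sizes are arbitrary choices for illustration; $X$ is taken to be the first two indices):

```python
import numpy as np

# A = [[P, Q], [R, S]] with X = {0, 1}, so P = A<X, X>.
A = np.array([[3., 1., 0., 1.],
              [1., 3., 1., 0.],
              [0., 1., 3., 1.],
              [1., 0., 1., 3.]])
P, Q = A[:2, :2], A[:2, 2:]
R, S = A[2:, :2], A[2:, 2:]

# The complementary block of A^{-1} equals the inverse of the
# Schur complement S - R P^{-1} Q.
schur = S - R @ np.linalg.inv(P) @ Q
assert np.allclose(np.linalg.inv(A)[2:, 2:], np.linalg.inv(schur))
```

The chosen matrix is strictly diagonally dominant, so both $A$ and $P$ (and hence the Schur complement) are invertible.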
We illustrate the retrograph concept with a simple example. Consider the graph and retrograph shown in Figure \ref{fig:retrograph}.
\begin{figure}
\caption{A graph $G$ and its retrograph $G^R$.}
\label{fig:retrograph}
\end{figure}
We show in Figure \ref{fig:retroReduction} one successful reduction strategy of this graph, and show the retrograph of the result at each stage. Observe that, in keeping with Theorem \ref{retrographTheorem}, these retrographs of reductions are simply induced subgraphs of the original retrograph.
\begin{figure}
\caption{A reduction strategy of $G$, with retrographs at each stage.}
\label{fig:retroReduction}
\end{figure}
\subsection{Reverse reductions}\label{reverseReductions} Graph pivots also provide a simple characterization of what might be called the inverse problem for graph reductions: given a graph $G$, which graphs $G'$ can be transformed into $G$ by a graph reduction?
Consider first the case where $G'$ can be transformed to $G$ by a nonsingular reduction. If $V$ is the vertex set of $G$, and $W$ is the set of remaining vertices of $G'$, we can express this by writing $\Gamma_W(G') = G$. By Theorem \ref{pivotReduction}, this is equivalent to $I_V \circ P_W (G') = G$. Denoting by $H$ the graph $P_W(G')$, and recalling that $P_W(H) = G'$, we have the following bijection (once we fix a set $W$ disjoint from $V$).
\begin{equation}\label{reverse1}
\{G':\ \Gamma_W(G') = G \} = \{P_W(H):\ I_V(H) = G\ \textrm{and}\ W \in \mathcal{R}_0(H) \} \end{equation}
On the right side of equation \ref{reverse1}, $H$ ranges over all graphs on vertices $V \cup W$.
In general, $G'$ can be reduced to $G$ if and only if there is a nonsingular reduction from $G'$ to a union of $G$ with some number of isolated negative vertices.
\begin{theorem}
If $G$ is any graph with vertices $V$, and $W$ is a set of vertices disjoint from $V$, then the set of graphs $G'$ on vertices $V \cup W$ such that $\Gamma_W (G') = G$ is precisely $\{ P_{W_1} (H)\}$, where $W_1$ ranges over all subsets of $W$ and $H$ ranges over all graphs on vertices $V \cup W$ such that $W_1 \in \mathcal{R}_0(H)$ and all vertices in $W \backslash W_1$ do not have loops and are not adjacent to any vertices in $V$. \end{theorem} \begin{proof}
Suppose $G' = P_{W_1} (H)$ satisfies the given conditions. Then clearly $W \backslash W_1$ is reducible in $\Gamma_{W_1}(G')$, since all these vertices are negative and isolated. Thus by Theorem \ref{pathInvariance}, $W$ is reducible in $G'$, and $\Gamma_W(G') = G$, since there exists a combinatorial reduction strategy from $G'$ to $\Gamma_{W_1}(G')$, and the remaining vertices $W \backslash W_1$ can be removed by the negative rule to obtain $G$, which is then equal to $\Gamma_W(G')$.
Conversely, suppose $G'$ is a graph on vertices $V \cup W$ such that $W \in \mathcal{R}(G')$ and $\Gamma_W (G') = G$. Then if $W_1$ is a maximal subset of $W$ that is nonsingular in $G'$, $\textrm{rank}_{G'}(W_1)$ must equal $\textrm{rank}_{G'}(W)$; hence reducing $W_1$ results in the disjoint union of $G$ and a collection of isolated negative vertices. Hence $G'$ has the desired form. \end{proof}
\section{Conclusion} In this paper, we have introduced new algebraic methods for the study of the graph formalization of gene assembly, demonstrating in particular the close relation between combinatorial graph reductions and linear algebra over $\textbf{F}_2$, and giving combinatorial interpretations to the binary rank of a graph and the inverse of its adjacency matrix. Our general definition of reducibility and graph reduction, together with the path invariance property, shed light on the relation between the three combinatorial reduction rules. Some of this relation had been uncovered in \cite{brijder}, although our methods successfully incorporate the negative rule into the analysis and approach the problem using different methods. We have also generalized results from \cite{brijder} on the pivot operation and its relation to graph reductions, particularly the special case which we call the retrograph. We believe in particular that our approach of considering matrix pivots by means of pair-classes of matrices may shed considerable light on the properties of pivots of graphs. Finally, the reducibility poset and pivotal poset give a new and useful way to phrase many problems about graph reductions, in particular regarding the study of parallel complexity.
The most mysterious aspect of our method in this paper is the relation between the pivotal poset $\mathcal{R}_0(G)$ and the other levels $\mathcal{R}_n(G)$ of the reducibility poset of a graph. Theorem \ref{posetDeterminesGraph} demonstrates that the pivotal poset completely determines the other levels of the reducibility poset, but the method of recovering the latter from the former is rather cumbersome. In particular, it seems difficult to understand the effect of pivot operations on the full reducibility poset. It is possible that some restricted class of pivot operations is better behaved in this regard.
A more combinatorial way of stating the difficulties described above is that very little is currently understood about when negative rules may occur in the course of a combinatorial reduction strategy. The original problem which led to this paper was the verification that the \textit{number} of times the negative rule occurs in a successful reduction is a graph invariant, but presumably much more could be said about the places in a reduction strategy that the negative rule could occur. All this essentially amounts to understanding the structure of the full reducibility poset.
Although pair-classes of matrices were extremely convenient in studying pivot operations on graphs, it seems that they are the wrong structure in which to study pivots and graph reductions. First, the fact that pivots of symmetric matrices remain symmetric appears to be somewhat coincidental; a more natural formulation of pivots might make this fact obvious. A more intrinsic way of stating this criticism is to observe that the adjacency matrix should be viewed as a symmetric bilinear form, not as a linear transformation, as was made vivid in Section \ref{algebraic}. The correct definition of the pivot operation should more explicitly respect this aspect of the adjacency matrix. If such a definition can be found, it might more naturally subsume the intrinsic definition of graph reductions made in Section \ref{algebraic}, and perhaps illuminate the difficulties mentioned in relating the pivotal poset to the full reducibility poset. The notion of the retrograph, which can easily be defined intrinsically, may be critical to this problem. The pivot operation may be regarded as interpolating between the concepts of graph reduction and the retrograph, since the retrograph is a special case, and graph reductions appear as subgraphs of pivots. Thus a more natural definition of the pivot of a graph would presumably interpolate between our definitions in Section \ref{algebraic} and some intrinsic definition of the retrograph.
We have laid some groundwork for an investigation of parallel complexity using algebraic methods. The following problems, to which these methods might be useful, remain open. These questions can also be stated in terms of the reducibility poset of a graph, or equivalently in terms of ranks of submatrices of the adjacency matrix.
\begin{problem} Let $f(n)$ denote the largest parallel complexity of a signed graph on at most $n$ vertices. Is $f(n)$ bounded by a constant? If not, what is its asymptotic behavior as $n$ approaches infinity? What if $f(n)$ instead denotes the largest parallel complexity of a graph on $n$ negative vertices? \end{problem}
It has been conjectured in \cite{harju} that the function $f(n)$ is in fact bounded by a constant. The best known upper bound is linear in $n$.
\begin{problem} Given a graph $G$ on $2n$ negative vertices, partitioned into $n$ edges $e_1, e_2, \dots, e_n$ on disjoint vertex sets, is there an efficient algorithm to determine whether the $n$ double rules $\textrm{gdr}_{e_i}$ removing these edges apply in parallel? \end{problem}
Both these questions could also be asked in terms of average behavior. Of course, neither of the following problems is currently well-posed, since a probability distribution would have to be specified in each case.
\begin{problem} What is the average parallel complexity of a signed graph on $n$ vertices? \end{problem}
\begin{problem} If $n$ disjoint edges $e_1,\dots e_n$ between negative vertices are fixed, and edges between the vertices of the $e_i$ are either added or not added at random, what is the probability that the $n$ double rules $\textrm{gdr}_{e_i}$ removing these edges apply in parallel? \end{problem}
\section{Acknowledgments} This research was done at the University of Minnesota Duluth with the financial support of the National Science Foundation (grant number DMS-0447070-001) and the National Security Agency (grant number H98230-06-1-0013). I gratefully acknowledge the advice and assistance of Nathan Kaplan and Ricky Liu throughout the project, and Raju Krishnamoorthy for reading this paper and providing comments. Geir Helleloid and Jack Huizenga also provided useful suggestions. I am grateful to the anonymous referee of an earlier draft for making me aware of the matrix pivot operation and its relevance to this topic. Finally, I would like to thank Joe Gallian for introducing me to the topic, and for support throughout the project.
\end{document} |
\begin{document}
\title{Project selection with partially verifiable information\thanks{We are grateful to Laura Doval, Federico Echenique and Amit Goyal for helpful comments and suggestions. We would also like to thank seminar audiences at Caltech and Delhi School of Economics for valuable feedback. }}
\author{Sumit Goel\thanks{California Institute of Technology; sgoel@caltech.edu; 0000-0003-3266-9035} \quad Wade Hann-Caruthers\thanks{California Institute of Technology; whanncar@gmail.com; 0000-0002-4273-6249}}
\date{}
\maketitle
\begin{abstract}
We consider a principal-agent project selection problem with asymmetric information. There are $N$ projects and the principal must select exactly one of them. Each project provides some profit to the principal and some payoff to the agent, and these profits and payoffs are the agent's private information. We consider the principal's problem of finding an optimal mechanism for two different objectives: maximizing expected profit and maximizing the probability of choosing the most profitable project. Importantly, we assume partial verifiability, so that the agent cannot report a project to be more profitable to the principal than it actually is. Under this no-overselling constraint, we characterize the set of implementable mechanisms. Using this characterization, we find that in the case of two projects, the optimal mechanism under both objectives takes the form of a simple cutoff mechanism. The simple structure of the optimal mechanism also allows us to find evidence in support of the well-known ally principle, which says that a principal delegates more authority to an agent who shares their preferences. \end{abstract} \section{Introduction}
Suppose a principal has to choose exactly one of $N$ available projects but does not know how profitable they are. There is an agent who is fully informed about these profits but also has its own preference over the projects. The principal would like to use the agent's information and choose a profitable project. Assuming the principal has commitment power and cannot use transfers to incentivize the agent, we consider the problem of finding the optimal mechanism for the principal in this project selection framework. \\
Let's first quickly consider the standard setting where the agent can lie arbitrarily. So for any principal and agent payoff vectors $(p,a) \in \Theta$, the agent can report any $(\pi, \alpha) \in \Theta$. Consider any mechanism $d$ and suppose its range is $S \subset [N]$. Under this mechanism, the agent will always report a $(\pi, \alpha)$ so that its favorite project in $S$, defined by $\arg\max_{i \in S} a_i$, is chosen. Thus, the principal can only fix a set $S \subset [N]$ and commit to choosing the agent's favorite project in $S$. This means that if the payoffs $p_i, a_i$ are i.i.d. across projects, all mechanisms lead to expected payoff $\mathbb{E}[p_i]$ for the principal, and the probability of the principal choosing the best project is $\frac{1}{N}$ for any mechanism. Thus, the principal may as well choose a constant mechanism, and commitment power buys the principal nothing. \\
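A quick simulation illustrates this benchmark. The sketch below (our illustration; it assumes payoffs drawn i.i.d. uniform on $[0,1]$ with $p_i$ independent of $a_i$) fixes a range $S$, lets the agent's favorite project in $S$ be chosen, and estimates the principal's expected payoff and the probability of selecting the most profitable project.

```python
import random

def simulate_unrestricted(n_projects, subset, trials=200_000, seed=0):
    """Benchmark with arbitrary lying: the mechanism's range is `subset`, and
    the agent steers the mechanism to argmax_{i in subset} a_i.  Payoffs are
    assumed i.i.d. U[0,1] with p_i independent of a_i (illustrative case)."""
    rng = random.Random(seed)
    profit = 0.0
    best = 0
    for _ in range(trials):
        p = [rng.random() for _ in range(n_projects)]
        a = [rng.random() for _ in range(n_projects)]
        chosen = max(subset, key=lambda i: a[i])   # agent's favorite in S
        profit += p[chosen]
        best += chosen == max(range(n_projects), key=lambda i: p[i])
    return profit / trials, best / trials
```

For any choice of $S$, the estimates come out near $\mathbb{E}[p_i] = 1/2$ and $1/N$, consistent with the claim that commitment buys the principal nothing in this benchmark.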
An important assumption in the above setting is that the agent can lie arbitrarily. But the agent's ability to manipulate may be limited if it is required to support its claims with some form of evidence. For instance, if a tech firm wants to hire a programmer and the hiring committee is biased towards candidates with better social skills, the firm can require the hiring committee to provide certificates that support the reported coding skills of the candidates. Now the hiring committee can still potentially hide certificates and understate the coding ability of an applicant, but it cannot furnish fake certificates and overstate an applicant's coding ability. Thus, by requiring some kind of supporting evidence, the principal can constrain the kinds of manipulations that the agent can make. \\
In this paper, we consider a setting where the agent cannot oversell a project to the principal. That is, the agent cannot say a project is more profitable for the principal than it actually is. So if the true state is $(p,a) \in \Theta$, the agent's message space is $M(p,a)=\{(\pi, \alpha) \in \Theta: \pi_i \leq p_i\ \text{ for all } i \} $. Since the set of manipulations that the principal has to guard against is now smaller, the class of truthful mechanisms is potentially bigger. Note that our message correspondence satisfies the nested range condition ($\theta' \in M(\theta) \implies M(\theta') \subset M(\theta)$) from \citet{green1986partially} and thus, we can without loss of generality restrict attention to truthful mechanisms.\\
Under the no-overselling constraint, we first characterize the class of truthful mechanisms, which we call table mechanisms. A table mechanism is defined by an increasing set function which determines the set of projects on the table as a function of the reported profit values $\pi$. The mechanism then chooses the agent's favorite project, according to the reported $\alpha$, from those on the table. Thus, the class of truthful mechanisms is now significantly bigger. We use this characterization to find the optimal mechanism for the principal under two different objectives.\\
First, we consider the objective of maximizing the expected profit for the principal. For the case of two projects, we show that the optimal table mechanism is a simple cutoff mechanism. In this mechanism, one project is always on the table and the other project is on the table if its reported profit meets a cutoff that depends on the bivariate distribution $F$ from which the principal and agent payoffs $(p_i,a_i)$ are drawn for each project. When the payoffs are independent, this cutoff equals the expected profit. We also discuss the well-known ally principle in our framework by considering the case where $F$ is bivariate normal. In this case, we find that the optimal cutoff is decreasing in the correlation between the principal and agent payoffs and thus, the principal lends more leeway to an agent who shares their preferences. For the case of $N>2$ projects, we obtain a relatively weak result. Assuming $F$ is uniform on $[0,1]^2$, we show that a single cutoff mechanism is optimal in the simple subclass of cutoff mechanisms. Next, we also consider the objective of maximizing the probability of choosing the best project. Again, for the case of two projects, we show that a cutoff mechanism is optimal and when the project payoffs are independent, the optimal cutoff equals the median of the principal's profit. \\
\subsection{Related literature}
Mechanism design has often been used to deal with problems of asymmetric information ( \citet{myerson1981optimal}, \citet{myerson1983efficient}). An important theme in the literature on mechanism design is characterizing the set of implementable mechanisms (\citet{gibbard1973manipulation}, \citet{satterthwaite1975strategy}, \citet{dasgupta1979implementation}, \citet{green1986partially}). In particular, \citet{green1986partially} introduce the idea of partially verifiable information in mechanism design and identify a necessary and sufficient condition, called the ``Nested Range Condition", under which the set of implementable mechanisms coincides with the set of truthfully implementable mechanisms (i.e. the revelation principle holds). Later research focuses on identifying implementability conditions in other environments with partially verifiable information; \citet{kartik2009strategic} looks at Nash implementation, \citet{deneckere2008mechanism} considers lying with finite costs, \citet{ben2012implementation} and \citet{singh2001implementation} allow for transfers, and \citet{caragiannis2012mechanism} considers probabilistic verification. Some computer scientists have looked at the trade-off between monetary transfers and partial verifiability in terms of implementing social choice functions (\citet{ferraioli2016verify}, \citet{fotakis2017combinatorial}). Our setup belongs to the environment considered by Green and Laffont and satisfies their ``Nested Range Condition". Thus, without loss of generality, we restrict attention to truthfully implementable mechanisms in our analysis.\\
Our paper contributes to the literature finding optimal or efficient mechanisms in environments with a specific form of partial verifiability (\citet{maggi1995costly}, \citet{lacker1989optimal}, \citet{moore1984global}, \citet{celik2006mechanism}). For instance, \citet{munro2014hide} argues using a model in which only some expenditure can be hidden that spouses hiding income and assets from one another is efficient. \citet{deneckere2001mechanism} explains the complicated selling practices of real-world monopolists by considering an economy where some agents have limited ability to misrepresent their preferences. This paper considers the partial verifiability constraint of no-overselling and potentially explains the use of cutoff mechanisms in settings that only admit positive evidence which can be hidden but not fabricated. \\
Our work also relates to the literature studying principal-agent project selection problems with different modeling assumptions (\citet{ben2014optimal}, \citet{mylovanov2017optimal}, \citet{armstrong2010model} and \citet{guo_project_nodate}). \citet{ben2014optimal} and \citet{mylovanov2017optimal} consider a problem where the principal has to choose one of $N$ agents who prefer being chosen and provide some private value to the principal from being chosen. In \citet{ben2014optimal}, the principal can verify his value from agent $i$ at a cost $c_i$, while \citet{mylovanov2017optimal} assumes ex-post verifiability so that the principal can penalize the winner by destroying a certain fraction of the surplus. \citet{armstrong2010model} considers a project delegation problem in which the principal can verify characteristics of the chosen project but is uncertain about the set of available projects. These papers find their respective optimal mechanisms for the principal and call them the favored agent mechanism \citep{ben2014optimal}, the shortlisting procedure \citep{mylovanov2017optimal}, and the threshold rule \citep{armstrong2010model}. While these mechanisms have some flavor of the cutoff mechanisms we obtain in this paper, there are important differences in the setup we consider here. Primarily, in their setups, the principal is empowered by ex-post verifiability of the reported values and the ability to use a prohibitively high punishment to deter the agent from telling \textit{any} lie, whereas in our setup, the agent is constrained in that he cannot oversell, but the principal does not have the power to directly deter the agent from underselling.
\\
The paper proceeds as follows. In Section 2, we present the model and definitions. Section 3 characterizes the class of truthful mechanisms. In Sections 4 and 5, we consider the two objectives of maximizing the principal's expected profit and maximizing the probability of choosing the best project. In Section 6, we give some remarks, and Section 7 concludes. The more technical proofs are in the appendix.\\
\section{Model} There are two parties: a principal and an agent. The principal has a set of available projects $[N]=\{1,2,\dots, N\}$ and must choose one of them. Each project $i$ leads to payoffs $(p_i,a_i) \in X \subset \mathbb{R}^2$ where $p_i$ denotes the profit for the principal and $a_i$ is the utility to the agent. The payoffs $(p_i,a_i)$ are i.i.d. from a bivariate distribution $F$ and this is all the information that the principal has. The agent knows the true payoffs from all the projects $(p, a) \in X^N=\Theta$.\\
We assume that the principal can commit to a mechanism $d:\Theta \to [N]$ so that if the agent reports payoffs $(\pi, \alpha)$ when the true state is $(p,a)$, the project $d(\pi, \alpha)$ is chosen leading to final payoffs $p_{d(\pi, \alpha)}, a_{d(\pi, \alpha)}$ for the principal and agent respectively. \\
As discussed earlier, if the agent can lie arbitrarily, the principal cannot do better than by choosing a project at random. So we assume a natural partial verifiability constraint of no overselling, under which the agent cannot report a project to be more profitable than it actually is. Such a constraint on the message space may be inherent in the environment or induced by the principal by requiring the agent to furnish some kind of evidence supporting its claims. Formally, the agent's state-dependent message space takes the form: \begin{align*}
M(p,a)=\{(\pi, \alpha) \in \Theta : \pi_i \leq p_i \text{ } \forall i \in [N] \} \end{align*}
Since our message space satisfies the Nested Range condition of \citet{green1986partially}, $$\theta' \in M(\theta) \implies M(\theta') \subset M(\theta)$$ we can without loss of generality restrict attention to truthful mechanisms.
\begin{definition} A mechanism $d$ is \textit{truthful} if for any $(p,a)$ and $(\pi,\alpha) \in M(p,a) $ \begin{align*}
a_{d(p, a)} \geq a_{d(\pi, \alpha)}. \end{align*} \end{definition}
We'll consider the principal's problem of finding the optimal truthful mechanism for two different objectives: \begin{itemize}
\item Maximizing the expected profit: $$\max_{d: d \text{ is truthful}} \mathbb{E}[p_{d(p,a)}]$$
\item Maximizing the probability of choosing the best project: $$\max_{d: d \text{ is truthful}} \mathbb{P}[d(p,a) \in \arg\max_i p_i]$$ \end{itemize}
First, we characterize the class of truthful mechanisms.
\section{Characterization of truthful mechanisms}
We begin by defining a special class of mechanisms which we call table mechanisms.
\begin{definition} A mechanism $d$ is a table mechanism if there exists a function $f: \Theta \rightarrow 2^{N}$ with the properties \begin{itemize} \item $f(p, a) \neq \emptyset$ \item $f(p, a)=f\left(p, a^{\prime}\right)=f(p)$ \item $p \leq p^{\prime} \Longrightarrow f(p) \subset f\left(p^{\prime}\right)$ \end{itemize} so that
$$d(p, a) \in \arg \max _{i}\left\{a_{i}: i \in f(p)\right\} .$$
\end{definition}
In words, a table mechanism is defined by an increasing set function that determines the set of projects on the table and the mechanism always chooses the agent's favorite project among those on the table. The first condition says that there is a default project which is always on the table. The second condition says that the set of projects on the table cannot depend on the agent's payoffs. The third condition is that the set of projects on the table weakly increases as the profit vector increases.
\begin{theorem} \label{thm:table} $d$ is truthful if and only if it is a table mechanism. \end{theorem}
It is fairly straightforward from the definitions to check that table mechanisms are truthful. Indeed, under a table mechanism, the agent prefers reporting higher $\pi$'s to reporting lower ones and, given any report of $\pi$'s, prefers reporting her payoffs to misreporting her payoffs (since such a misrepresentation can only lead the principal to make a choice which gives the agent a lower payoff.) The other direction is more involved.
\begin{proof}[Proof of Theorem \ref{thm:table}] Suppose $d$ is a table mechanism. Consider any profile $(p,a)$. If the agent reports some $(\pi,\alpha)$, we know that $\pi \leq p$ from the constraint $(\pi, \alpha) \in M(p,a)$. Since $f$ is increasing, it follows that $f(\pi) \subset f(p)$. Since the agent gets his preferred project among those available and reporting truthfully maximizes his set of available projects, the agent cannot gain by misreporting. Therefore, $d$ is truthful.\\
Now suppose that $d$ is a truthful mechanism. Define the function $f: \Theta \to 2^N$ so that $i \in f(p,a)$ if and only if there exists some $(p',a') \in M(p,a)$ such that $d(p',a')=i$. First, we will show that the function satisfies the three properties in the definition of table mechanism:
\begin{itemize} \item Observe that $(p,a) \in M(p,a)$ and thus $d(p,a) \in f(p,a)$, so $f(p,a) \neq \emptyset$ for any $(p,a) \in \Theta$.
\item The property that $f$ does not depend on agent payoffs follows from observing that $M(p,a)=M(p,a')$. Thus, if $i \in f(p,a)$, $i \in f(p,a')$ and vice versa. Thus, we have $f(p,a)=f(p,a')$ for any $(p,a), (p,a')$.
\item Take any $p,p'$ such that $p \leq p'$. Suppose $i \in f(p)$ which implies that there exists $(\pi, \alpha) \in M(p,a)$ with $\pi \leq p$ so that $d(\pi, \alpha)=i$. But then, $(\pi, \alpha) \in M(p',a)$ as well and so $i \in f(p')$. Thus, we get that $f(p) \subset f(p')$. \end{itemize}
Now we want to show that for any state $(p,a)$, $d(p,a) \in \arg \max _{i}\left\{a_{i}: i \in f(p)\right\}$
Suppose towards a contradiction that $d(p, a)$ is not in this set. By definition, $d(p,a) \in f(p)$. Let $j \in \arg \max _{i}\left\{a_{i}: i \in f(p)\right\}$. Then $a_j > a_{d(p,a)}$ and $j \in f(p)$. But the fact that $j \in f(p)$ implies that there exists a $(\pi, \alpha) \in M(p,a)$ such that $d(\pi,\alpha)=j$. But then, the agent can misreport at state $(p,a)$ to $(\pi, \alpha)$ and gain from this manipulation. This contradicts the fact that $d$ is truthful and so it must be that $d(p,a) \in \arg \max _{i}\left\{a_{i}: i \in f(p)\right\}$. It follows then that $d$ is a table mechanism. \end{proof}
For simplicity going forward, we will assume (without loss of generality) that $N \in f(p)$ for all $(p,a) \in \Theta$. That is, in a table mechanism, project $N$ is always on the table.\\
Before discussing the results, we define a subclass of table mechanisms that take a simple cutoff form.
\begin{definition} A mechanism $d$ is a cutoff mechanism if it is a table mechanism and for $i=1, \dots, N-1$, there exist cutoffs $c_i \in \mathbb{R}$, such that $i \in f(p)$ if and only if $p_i \geq c_i$. \end{definition}
In a cutoff mechanism, a project is on the table if the principal's profit from the project meets a threshold. That is, whether a project is on the table or not depends only on that particular project's profit value. The principal then chooses the agent's favorite project among those that meet the cutoff and the default project.
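Since a cutoff mechanism is in particular a table mechanism, Theorem \ref{thm:table} says it is truthful, and this is easy to check by brute force on a small discrete example. The sketch below (our illustration; the three payoff levels, the two projects, and the cutoff are hypothetical) enumerates every state and every no-overselling misreport and asserts that no deviation helps the agent.

```python
from itertools import product

LEVELS = [0, 1, 2]   # hypothetical discrete payoff levels
N = 2
CUTOFF = 2           # project 1 (index 0) is on the table iff p_1 >= CUTOFF

def on_table(p):
    """Cutoff table function f(p); the default project (index 1) is always on the table."""
    table = [1]
    if p[0] >= CUTOFF:
        table.append(0)
    return table

def d(p, a):
    """Table mechanism: the agent's favorite project among those on the table."""
    return max(on_table(p), key=lambda i: a[i])

def messages(p):
    """No-overselling message space M(p, a): reported profits cannot exceed
    the true ones; reported agent payoffs are unrestricted."""
    feasible_profits = [[q for q in LEVELS if q <= p[i]] for i in range(N)]
    return product(*feasible_profits, *([LEVELS] * N))

# Truthfulness check: no feasible misreport improves the agent's payoff.
for p in product(LEVELS, repeat=N):
    for a in product(LEVELS, repeat=N):
        truthful = a[d(p, a)]
        for m in messages(p):
            pi, alpha = m[:N], m[N:]
            assert a[d(pi, alpha)] <= truthful
```

The exhaustive loop mirrors the argument in the proof of Theorem \ref{thm:table}: any misreport $\pi \leq p$ only shrinks the table, so the agent cannot gain.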
\section{Maximizing expected profit}
In this section, we consider the principal's problem of finding the optimal table mechanism $d$ for maximizing expected profit $\mathbb{E}[p_{d(p,a)}]$. For the most part, we'll consider and solve the problem for the case of $N=2$ projects. We'll briefly discuss the case of $N>2$ projects towards the end of this section.
\subsection{2 projects} In the case of two projects, we have $(p_1,a_1) \sim F$ and $(p_2,a_2)\sim F$. In a table mechanism with 2 projects, we can assume without loss of generality that $2 \in f(p)$ for all $p$, and so the principal only really has to decide for which profit vectors $1 \in f(p)$.
\begin{theorem} \label{thm:n=2_optimal_is_cutoff}
For two projects with $(p_i,a_i) \sim F$, the optimal truthful mechanism is a cutoff mechanism. The optimal cutoff $c_1$ is defined by $$\mathbb{E}\left[(p_1-p_2) \mathbb{P}[a_1 \geq a_2|p_2]\vert p_1=c_1\right]=0$$ \end{theorem}
\begin{proof} Suppose $d$ is truthful with associated function $f$. From Theorem \ref{thm:table}, we know that $d$ is a table mechanism. So $2 \in f(p)$ for all $p_1,p_2$. Define $c=\sup \{p_1: 1 \notin f(p_1,p_1)\}$. Define the cutoff mechanism $d'$ so that $1 \in f'(p) \iff p_1 \geq c$. We'll show that the expected profit for the principal from $d'$ is at least as high as the expected profit from $d$. In fact, we will show that this holds conditional on $(p_1,p_2)$, and hence in expectation.
Consider the following (exhaustive and mutually exclusive) cases depending on whether $p_i \geq c$ or $p_i<c$:
\begin{itemize}
\item $p_1 \in (-\infty,c), p_2 \in (-\infty,c)$: For any such $p_1,p_2$, we know both $f(p)=f'(p)=\{2\}$ and therefore, the second project is chosen for all such profiles. Thus, the two mechanisms are identical and generate the same profit for the principal in this case.
\item $p_1 \in (-\infty,c), p_2 \in [c,\infty)$: In this case, $f'(p)=\{2\}$ and thus project 2 is chosen for sure. Note that the principal prefers project 2 over 1 in these profiles and thus, the profit from the cutoff mechanism is weakly higher for any such $p_1,p_2$.
\item $p_1 \in [c,\infty), p_2 \in (-\infty,c)$:
Now $f'(p)=\{1,2\}$ while $f(p)$ can be either $\{2\}$ or $\{1,2\}$. Observe that the principal strictly prefers project 1 over 2 in all these profiles. Thus, the cutoff mechanism again leads to weakly higher profits for such $p_1,p_2$.
\item $p_1 \in [c,\infty), p_2 \in [c,\infty)$: Here we have $f(p)=f'(p)=\{1,2\}$. Thus, the two mechanisms are identical in this set and lead to same profits for the principal.
\end{itemize}
This shows that for any truthful mechanism, there is a cutoff mechanism under which the principal's expected profit is weakly higher. Thus, the optimal truthful mechanism must be a cutoff mechanism. Now our problem is just to find the optimal cutoff $c$.
Consider the decision problem of the principal for any given $p_1,p_2,a_1,a_2$. It can either \begin{itemize}
\item not make project 1 available and get $p_2$
\item make project 1 available and get $p_1 \mathbb{I}_{a_1 \geq a_2}+p_2\mathbb{I}_{a_2 >a_1}$ \end{itemize}
The constraint imposed by truthfulness and optimality of cutoff mechanisms imply that the principal can only base this decision on the value of $p_1$. Thus, taking expectation with respect to $p_2,a_1,a_2$, we get that the two alternatives are: \begin{itemize}
\item not make project 1 available and get $\mathbb{E}[p_2|p_1]$
\item make project 1 available and get $\mathbb{E}\left[p_1 \mathbb{I}_{a_1 \geq a_2}+p_2\mathbb{I}_{a_2 >a_1}|p_1\right]$ \end{itemize}
Thus, the principal would want to make project 1 available if and only if \begin{align*}
&\mathbb{E}\left[p_1 \mathbb{I}_{a_1 \geq a_2}+p_2\mathbb{I}_{a_2 >a_1}|p_1\right] \geq \mathbb{E}[p_2|p_1]\\
\iff & \mathbb{E}\left[p_1 \mathbb{I}_{a_1 \geq a_2}|p_1\right] \geq \mathbb{E}\left[p_2 \mathbb{I}_{a_1 \geq a_2}|p_1\right] \\
\iff & \mathbb{E}\left[(p_1-p_2) \mathbb{I}_{a_1 \geq a_2}|p_1\right] \geq 0 \\
\iff & \mathbb{E}\left[(p_1-p_2) \mathbb{P}[a_1 \geq a_2|p_2]|p_1\right] \geq 0 \\ \end{align*}
At the optimal cutoff, the principal should be indifferent between the two alternatives and so the cutoff $c_1$ is defined by the solution to the equation $$\mathbb{E}\left[(p_1-p_2) \mathbb{P}[a_1 \geq a_2|p_2]|p_1=c_1\right] = 0 $$ \end{proof}
In the special case where the principal and agent payoffs are independent, we get the following corollary. \begin{corollary} Suppose $F$ is such that the principal and agent payoffs $(p_i,a_i)$ are independent. Then the optimal cutoff is given by $c_1=\mathbb{E}[p_1]$. \end{corollary}
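This corollary can be confirmed numerically. The sketch below (our illustration; it assumes the special case where $F$ is uniform on $[0,1]^2$ with $p_i$ independent of $a_i$) estimates by Monte Carlo, using a common random seed across cutoffs, the principal's expected profit from the two-project cutoff mechanism, and locates the maximizing cutoff on a grid; it lands at $c_1 = \mathbb{E}[p_1] = 1/2$, with expected profit close to $9/16$ (a value one can compute directly in this special case).

```python
import random

def expected_profit(cutoff, trials=100_000, seed=1):
    """Monte Carlo expected profit of the two-project cutoff mechanism,
    assuming (p_i, a_i) independent U[0,1] (illustrative special case)."""
    rng = random.Random(seed)   # fixed seed: common random numbers across cutoffs
    total = 0.0
    for _ in range(trials):
        p1, p2 = rng.random(), rng.random()
        a1, a2 = rng.random(), rng.random()
        # project 1 is on the table iff p1 >= cutoff; project 2 always is
        if p1 >= cutoff and a1 >= a2:
            total += p1
        else:
            total += p2
    return total / trials

best_cutoff = max([i / 20 for i in range(21)], key=expected_profit)
```

Reusing the same seed for every cutoff makes the grid comparison much less noisy than independent simulations would be.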
\subsection{Ally principle} In this subsection, we discuss the implications of our model for the well-known ally principle, which states that a principal delegates more authority to an agent with more aligned preferences. For this purpose, we assume that the principal and agent payoffs for each project are bivariate normal, $(p_i,a_i) \sim N(0,0,1,1,\rho)$, drawn i.i.d. across projects. Now the question is whether we can say something systematic about $c(\rho)$, the optimal cutoff as a function of the correlation $\rho$.
\begin{theorem} \label{cutoff1} For $N=2$ projects with $(p_i, a_i) \sim N(0,0,1,1,\rho)$, the optimal cutoff is defined by the equation $$c\Phi(tc)+t\phi(tc)=0$$ where $t=\frac{\rho}{\sqrt{2-\rho^2}}$. The optimal cutoff is decreasing in $\rho$.
\end{theorem}
\begin{appendixproof} \textit{of Theorem~\ref{cutoff1}}
We know that the optimal cutoff $c(\rho)$ is the solution to the following equation
$$\mathbb{E}\left[(p_1-p_2) \mathbb{P}[a_1 \geq a_2|p_2]|p_1=c(\rho)\right] = 0 $$
Let us simplify the above expression. First, we want to find $\mathbb{P}[a_1 \geq a_2|p_1, p_2]$. We know that if $X,Y \sim N(\mu_x,\mu_y,\sigma^2_x, \sigma^2_y,\rho)$, then the conditional distribution $$X \mid Y=y \sim N\left(\mu_{x}+\rho \frac{\sigma_{x}}{\sigma_{y}}\left(y-\mu_{y}\right), \sigma^2_{x} (1-\rho^{2})\right)$$
and so in our case, $a_i|p_i \sim N(\rho p_i, 1-\rho^2)$. Also, since the payoffs are independent across projects, we get that
$$a_1-a_2|p_1,p_2 \sim N(\rho (p_1-p_2), 2(1-\rho^2))$$
Using this, we get that $$\mathbb{P}[a_1-a_2\geq 0|p_1, p_2]=\Phi\left(\frac{\rho(p_1-p_2)}{\sqrt{2(1-\rho^2)}}\right)$$ where $\Phi$ is the standard normal cdf.
Plugging this into the equation, we get that the optimal cutoff satisfies $$\mathbb{E}\left[(p_1-p_2)\Phi\left(\frac{\rho(p_1-p_2)}{\sqrt{2(1-\rho^2)}}\right) \middle\vert p_1=c(\rho)\right] = 0 $$
We can find that the expectation is equal to $$p_1 \Phi \left(\dfrac{\rho p_1}{\sqrt{2-\rho^2}} \right)+\dfrac{\rho}{\sqrt{2-\rho^2}}\phi \left(\dfrac{\rho p_1}{\sqrt{2-\rho^2}} \right)$$ and letting $t=\dfrac{\rho}{\sqrt{2-\rho^2}}$, we have that the optimal cutoff $c$ is implicitly defined by $$c\Phi(tc)+t\phi(tc)=0 $$ where $\Phi$ and $\phi$ represent the standard normal cdf and pdf respectively. Observe that $t \in [-1,1]$ and is increasing in $\rho$.
Now letting $F(c,t)=c\Phi(tc)+t\phi(tc)$, we can use the implicit function theorem to get that \begin{align*}
c'(t)&=-\dfrac{F_t}{F_c}\\
&=-\dfrac{c^2\phi(tc)+\phi(tc)-t^2c^2\phi(tc)}{\Phi(tc)+tc\phi(tc)-t^3c\phi(tc)}\\
&=-\dfrac{\phi(tc)\left(1+c^2(1-t^2)\right)}{\Phi(tc)+tc\phi(tc)(1-t^2)}\\
&=-\dfrac{\phi(tc)\left(1+c^2(1-t^2)\right)}{\Phi(tc)\left(1-c^2(1-t^2)\right)} \end{align*}
Note that $c(0)=0$ and so $c'(0)=-2\phi(0)<0$. Also observe from the above expression that $c'(t)$ is never $0$ and so it follows from the smoothness of $c$ that $c'(t)<0$ for all $t\in (-1,1)$. Thus, we have that the optimal cutoff is decreasing in $\rho$. \end{appendixproof}
The proof proceeds by applying the condition obtained in Theorem~\ref{thm:n=2_optimal_is_cutoff} for the case of the bivariate normal distribution. Using formulas for integrals of normal cdfs from \citet{owen_table_1980}, we obtain the condition that the optimal cutoff must satisfy $c\Phi(tc)+t\phi(tc)=0$ where $t=\frac{\rho}{\sqrt{2-\rho^2}}$. Differentiating this equation implicitly gives $c'(0)<0$; the continuity of $c'$ and the fact that $c'(t)$ is never zero then imply that $c'(t)<0$ for all $t$. The formal proof is in the appendix.
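The implicit equation for the cutoff is easy to explore numerically. The following self-contained Python sketch (our annotation, not part of the paper; it uses only the standard library) solves $c\Phi(tc)+t\phi(tc)=0$ by bisection and confirms that the cutoff vanishes at $t=0$ and decreases as $t$ grows:

```python
# Solve c*Phi(t*c) + t*phi(t*c) = 0 for the optimal cutoff c(t), stdlib only.
import math

def Phi(x):  # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):  # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def F(c, t):
    return c * Phi(t * c) + t * phi(t * c)

def cutoff(t, lo=-5.0, hi=5.0):
    # F(lo, t) < 0 < F(hi, t) on this bracket for t in [0, 1); bisect for the root
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if F(mid, t) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $t \in \{0.1, 0.4, 0.8\}$ the computed cutoffs are negative and strictly decreasing in $t$, consistent with the theorem (recall that $t$ is increasing in $\rho$).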
\begin{figure}
\caption{The optimal cutoff for maximizing expected profit decreases as the correlation between principal and agent payoffs increases.}
\end{figure}
\subsection{N projects}
In this subsection, we discuss the case of $N>2$ projects. Motivated by the result for two projects, we conjecture that a cutoff mechanism remains optimal for an arbitrary number of projects; while we have not been able to prove this, we consider the problem of finding the optimal mechanism within the class of cutoff mechanisms.\\
The following theorem gives the optimal cutoff mechanism when the principal and agent payoffs are i.i.d.\ uniform.
\begin{theorem} \label{thm:optimal_cutoff_value} For $N$ projects and $F$ uniform on $[0,1]^2$, the optimal cutoff mechanism has a single cutoff $c(N)$ that is defined by the equation $$ N(1-c)(1 - c + c^N) = 1 - c^N.$$ The principal's expected utility from the corresponding optimal cutoff mechanism is given by $$\frac{1}{2} + \frac{c(N)}{2}(c(N) - c(N)^N)$$ \end{theorem}
\begin{appendixproof} \textit{of Theorem~\ref{thm:optimal_cutoff_value}}
For an arbitrary cutoff mechanism with cutoffs $c=(c_1,c_2,\ldots,c_{N-1})$, we compute the expected utility of the principal. In the expressions below, set $c_N=0$.
\begin{align*}
EU_p(c)= \sum_{i=1}^N \frac{1+c_i}{2}\, \mathbb{P}(d=i) \end{align*}
Note that $\mathbb{P}(d=i)=(1-c_i)\mathbb{P}(d=N|c_i=0)$. That is, conditional on $p_i \geq c_i$, the probability that the decision is $i$ is the same as the probability that the decision is $N$ when the cutoff $c_i=0$ and the remaining cutoffs are the same. To find the probability that the last project $N$ is chosen, we condition on its rank, which is defined in terms of the agent payoffs: the rank of project $N$ is $k$ if there are exactly $k-1$ projects with higher $a_i$'s.
\begin{align*}
\mathbb{P}(d=N)&=\sum_{k=1}^{N} \mathbb{P}(\text{rank of } N =k)\mathbb{P}(d=N|\text{ rank of } N =k)\\
&= \frac{1}{N} \sum_{k=1}^N \mathbb{P}(d=N|\text{ rank of } N =k)\\
&= \frac{1}{N} \sum_{k=1}^N \sum_{S \subset [N-1]: |S|=k-1} \frac{\prod_{i \in S} c_i}{\binom{N-1}{k-1}} \end{align*}
We first argue that cutoffs have to be interior in the optimal cutoff mechanism. Suppose $d$ is a cutoff mechanism with cutoffs $c=(c_1,c_2,\ldots,c_{N-1})$ and $c_i=1$. In this case, let $u^*$ denote the expected utility of the principal, and note that $u^* < 1$. Define a new cutoff mechanism in which all cutoffs remain the same except for $i$, which now has the cutoff $u^*$. Now, with probability $u^*$, the principal gets $u^*$ and with probability $1-u^*$, the principal gets a convex combination of $u^*$ and $\frac{1+u^*}{2}>u^*$. This means that his expected payoff under the new mechanism exceeds $u^*$. Therefore, an optimal mechanism cannot have a cutoff of $1$. Now, suppose there is an $i \in [N-1]$ which has a cutoff of $0$. Observe that the expected utility from an arbitrary project with cutoff $c$, conditional on its being chosen, is $\frac{1+c}{2} \geq \frac{1}{2}$. Now, consider increasing the cutoff to $c_i=\frac{1}{2}$ in the new mechanism while keeping every other cutoff the same. For every $p_i<\frac{1}{2}$, under the old mechanism the principal's payoff was some convex combination of $p_i$ and some $k\geq \frac{1}{2}$; under the new mechanism, it is just $k>p_i$. For $p_i \geq \frac{1}{2}$, the new mechanism is identical to the old one. Therefore, an optimal mechanism cannot have a cutoff of $0$. Thus, in the optimal cutoff mechanism, $c_i \notin \{0,1\}$ for any $i \in [N-1]$. \\
Now suppose that the mechanism $d$ is such that there exist $i,j$ with $c_i > c_j$. Let $\bar{c}=\frac{c_i+c_j}{2}$ and define $t$ so that $\bar{c}+t=c_i$ and $\bar{c}-t=c_j$. From the above calculations, we know that if we write $EU_p(c)$ in expanded form and plug in $c_i=\bar{c}+t$ and $c_j=\bar{c}-t$, we get a polynomial that is at most cubic in $t$. This is because $\mathbb{P}(d=k)$ contributes a term that is at most quadratic in $t$ for any $k \in [N]$, and in the expected utility calculation we multiply it by $\frac{1+c_k}{2}$. Note that by the symmetry of the projects, the principal gets the same expected utility if we change $t$ to $-t$. Therefore, the polynomial must be of the form $at^2+b$. Now, if $a>0$ or $a<0$, the principal gains from increasing or decreasing $t$, which is possible since we know that the solution is interior and $c_i>c_j$. Therefore, $d$ cannot be optimal in either case. When $a=0$, the principal is indifferent to increasing $t$ until one of the cutoffs reaches an extreme of $1$ or $0$, which we know cannot be optimal. Therefore, a cutoff mechanism with different cutoffs cannot be optimal.\\
The above discussion implies that the solution to the optimization problem has to be a single cutoff mechanism. Let $c$ be the single cutoff. Using the above calculations, we have that
\begin{align*}
\mathbb{P}(d=N)&= \frac{1}{N} \sum_{k=1}^N \sum_{S \subset [N-1]: |S|=k-1} \dfrac{\prod_{i \in S} c_i}{\binom{N-1}{k-1}} \\
&= \frac{1}{N} \sum_{k=1}^N c^{k-1}\\
&= \frac{1}{N}\frac{1-c^N}{1-c}\\ \end{align*}
Therefore,
\begin{align*}
EU_p(c)&=\frac{1}{2} \mathbb{P}(d=N)+\frac{1+c}{2} \mathbb{P}(d \neq N)\\
&=\frac{1}{2}+\frac{c}{2} \left(1-\frac{1-c^N}{N(1-c)}\right)\\ \end{align*}
Differentiating with respect to $c$ yields the optimal cutoff mechanism, defined by the single cutoff $c(N)$.
\begin{align*}
\dfrac{\partial EU_p(c)}{\partial c} &= \dfrac{1}{2}\left(1-\frac{1-c^N}{N(1-c)}\right)-\frac{c}{2N} \left(\frac{(1-c)(-Nc^{N-1})+(1-c^N)}{(1-c)^2}\right) \\
&= \frac{1}{2}+\dfrac{c^N}{2(1-c)}-\dfrac{1-c^N}{2N(1-c)^2}\\ \end{align*}
Setting it equal to zero gives us:
$$N(1-c)(1-c+c^N)=1-c^N$$
Plugging this into the expected utility expression gives the maximum utility of the principal in the class of cutoff mechanisms: $\frac{1}{2}+\frac{c(N)}{2}(c(N)-c(N)^N)$. \end{appendixproof}
Note that since we are maximizing a continuous function over a compact space, a solution exists. The rest of the proof proceeds in three steps. In step 1, we show that any cutoff mechanism with a cutoff equal to $0$ or $1$ cannot be optimal; that is, the solution must be interior. In step 2, we show that a cutoff mechanism with different cutoffs cannot be optimal. Finally, we maximize the principal's utility with respect to the single cutoff $c$ to get the optimal cutoff mechanism. The proof is relegated to the appendix. While it is hard to obtain an exact closed-form solution for the optimal single cutoff, we show that it has the following asymptotic property.
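The cutoff equation is straightforward to solve numerically. The following Python sketch (our annotation, not part of the paper) bisects for the interior root of $N(1-c)(1-c+c^N)=1-c^N$ and evaluates the principal's utility at the optimum:

```python
# Solve the first-order condition N(1-c)(1-c+c^N) = 1-c^N for the single cutoff.

def g(c, N):
    # first-order condition for the optimal single cutoff
    return N * (1.0 - c) * (1.0 - c + c**N) - (1.0 - c**N)

def optimal_cutoff(N, lo=1e-9, hi=1.0 - 1e-4):
    # g > 0 near 0 and g < 0 just below 1; bisect for the interior root
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(mid, N) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def principal_utility(N):
    c = optimal_cutoff(N)
    return 0.5 + 0.5 * c * (c - c**N)
```

For $N=2$ this recovers the exact values $c(2)=\tfrac12$ and expected utility $0.5625$ (both can be checked by hand in the cutoff equation), $c(N)$ increases with $N$, and $1-c(N)$ behaves like $1/\sqrt{N}$ for large $N$.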
\begin{lem} \label{lem:explicit-cutoff} The optimal single cutoff $c(N)= 1 - \frac{1}{\sqrt{N}} + o\left(\frac{1}{\sqrt{N}}\right).$ \end{lem}
\begin{appendixproof} \textit{of Lemma~\ref{lem:explicit-cutoff}}
Let \begin{align*}
\phi_N(c) = N(1-c)(1 - c + c^N) - (1 - c^N). \end{align*} Then for any $\alpha > 0$, \begin{align*}
\lim_{N \to \infty}{\phi_N \left(1 - \frac{\alpha}{\sqrt{N}}\right)} = \alpha^2 - 1, \end{align*} and so for all sufficiently large $N$, the quantity \begin{align*}
\phi_N \left(1 - \frac{\alpha}{\sqrt{N}}\right) \end{align*} is positive if and only if $\alpha > 1$ and negative if and only if $\alpha < 1$. Hence, for any $\epsilon > 0$, it follows that the unique root $c(N)$ of the equation from Theorem \ref{thm:optimal_cutoff_value} satisfies \begin{align*}
(1-\epsilon) \frac{1}{\sqrt{N}} \leq 1 - c(N) \leq (1+\epsilon) \frac{1}{\sqrt{N}}. \end{align*} \end{appendixproof}
It follows from Lemma \ref{lem:explicit-cutoff} that as $N \to \infty$, the optimal single cutoff $c(N) \to 1$ and the expected utility of the principal tends to $1$.
\section{Maximizing probability of best project}
In this section, we consider the objective of maximizing the probability that the best project is chosen, $\mathbb{P}[p_{d(p,a)} \geq p_j \text{ for all } j]$. We focus only on the case of $N=2$ projects.
\subsection{2 projects}
We consider the case of two projects and table mechanisms in which project 2 is always on the table.
\begin{theorem} \label{thm:n=2_optimal_is_cutoff_prob}
For two projects with $(p_i,a_i) \sim F$, the optimal truthful mechanism is a cutoff mechanism. The optimal cutoff $c_1$ is defined by $$\mathbb{E}\left[(\mathbb{I}_{p_1\geq p_2}-\mathbb{I}_{p_2>p_1})\mathbb{I}_{a_1\geq a_2}\vert p_1=c_1\right]=0$$ or more simply $$\mathbb{P}[p_1 \geq p_2\vert p_1=c_1,a_1 \geq a_2]=\frac{1}{2}$$
\end{theorem}
\begin{proof} The argument for why a cutoff mechanism is optimal is exactly the same as in the proof of Theorem~\ref{thm:n=2_optimal_is_cutoff}. We now derive the optimal cutoff. The principal can either \begin{itemize}
\item make project 1 available and get $\mathbb{I}_{p_1\geq p_2}\mathbb{I}_{a_1\geq a_2}+\mathbb{I}_{p_2> p_1}\mathbb{I}_{a_2> a_1}$
\item or not make it available and get $\mathbb{I}_{p_2>p_1}$ \end{itemize}
Since the principal can only make the decision based on the value of $p_1$, it will choose to make project 1 available if and only if \begin{align*}
&\mathbb{E}[\mathbb{I}_{p_1\geq p_2}\mathbb{I}_{a_1\geq a_2}+\mathbb{I}_{p_2>p_1}\mathbb{I}_{a_2> a_1}\vert p_1]\geq \mathbb{E}[\mathbb{I}_{p_2>p_1} \vert p_1]\\
\iff &\mathbb{E}\left[(\mathbb{I}_{p_1\geq p_2}-\mathbb{I}_{p_2>p_1})\mathbb{I}_{a_1\geq a_2}\vert p_1\right]\geq 0 \end{align*}
At the cutoff, the principal must be indifferent between making or not making project 1 available. This gives the desired condition. \end{proof} In the special case where the principal and agent payoffs are independent, we get the following corollary. \begin{corollary} Suppose $F$ is such that the principal and agent payoffs $(p_i,a_i)$ are independent. Then the optimal cutoff is given by $c_1=\mathrm{Med}[p_1]$, the median of $p_1$. \end{corollary}
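The corollary is easy to sanity-check by simulation. The following Python sketch (our annotation, not part of the paper) takes the illustrative special case of payoffs $p_i, a_i$ i.i.d.\ uniform on $[0,1]$, runs the cutoff mechanism on a grid of cutoffs with common random draws, and finds that the probability of selecting the better project peaks near the median $\tfrac12$:

```python
# Monte Carlo check: with independent U[0,1] payoffs, the probability of
# picking the better project is maximized by a cutoff near the median 1/2.
import random

random.seed(0)
n = 50000
draws = [(random.random(), random.random(), random.random(), random.random())
         for _ in range(n)]  # (p1, a1, p2, a2), i.i.d. uniform

def success_prob(c):
    wins = 0
    for p1, a1, p2, a2 in draws:
        choose1 = (p1 >= c) and (a1 >= a2)  # project 1 on the table and agent prefers it
        best1 = p1 >= p2                    # project 1 is the better project
        if choose1 == best1:                # the chosen project has the higher p
            wins += 1
    return wins / n

grid = [i / 20 for i in range(21)]
best_c = max(grid, key=success_prob)
```

With $5 \times 10^4$ common draws across cutoffs, the empirical maximizer lands within a couple of grid points of $0.5$, and the success probability there is close to the analytic value $\tfrac12 + \tfrac{c(1-c)}{2}\big|_{c=1/2} = 0.625$.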
\subsection{Ally principle} Assume that the principal and agent payoffs for each project are bivariate normal, $(p_i,a_i) \sim N(0,0,1,1,\rho)$, drawn i.i.d.\ across projects. The question is whether we can say something systematic about $c(\rho)$, the optimal cutoff as a function of the correlation $\rho$.
\begin{theorem} \label{cutoff2} For $N=2$ projects with $(p_i, a_i) \sim N(0,0,1,1,\rho)$, the optimal cutoff $c(\rho)$ is given by the equation $$\dfrac{\mathbb{P}[X \leq c, Y \leq tc]}{\mathbb{P}[Y \leq tc]}=\frac{1}{2}$$ where $X,Y \sim N(0,0,1,1,t)$ and $t=\dfrac{\rho}{\sqrt{2-\rho^2}}$.
\end{theorem}
\begin{appendixproof} \textit{of Theorem~\ref{cutoff2}}
Given the distributional form, we have
$$a_1-a_2|p_1 \sim N(\rho p_1, 2-\rho^2)$$ and therefore,
$$p_2 | a_1-a_2, p_1 \sim N\left(\dfrac{\rho^2p_1-\rho (a_1- a_2)}{2-\rho^2}, \dfrac{2(1-\rho^2)}{2-\rho^2}\right)$$
This gives
$$\mathbb{P}\left[p_2 \leq p_1|p_1,a_1-a_2\right]=\Phi \left(\dfrac{2p_1(1-\rho^2)+\rho(a_1-a_2)}{\sqrt{2(1-\rho^2)(2-\rho^2)}}\right)$$
Then, \begin{align*}
\mathbb{P}[p_2 \leq p_1\vert p_1,a_1 \geq a_2]&=\frac{1}{\mathbb{P}[a_1\geq a_2|p_1]}\int_0^\infty \mathbb{P}[p_2 \leq p_1\vert p_1,a_1- a_2=x]f(a_1-a_2=x|p_1)dx \\
&=\frac{1}{\Phi\left(\dfrac{\rho p_1}{\sqrt{2-\rho^2}}\right)}\int_0^\infty \Phi \left(\dfrac{2p_1(1-\rho^2)+\rho x}{\sqrt{2(1-\rho^2)(2-\rho^2)}}\right)\frac{e^{-\frac{1}{2}\left(\frac{x-\rho p_1}{\sqrt{2-\rho^2}}\right)^2}}{\sqrt{2-\rho^2}\sqrt{2\pi}}dx\\
&=\frac{1}{\Phi\left(\dfrac{\rho p_1}{\sqrt{2-\rho^2}}\right)}\int_\frac{-\rho p_1}{\sqrt{2-\rho^2}}^\infty \Phi \left(\dfrac{2p_1(1-\rho^2)+\rho(t\sqrt{2-\rho^2}+\rho p_1)}{\sqrt{2(1-\rho^2)(2-\rho^2)}}\right)\phi(t)dt\\
&=\frac{1}{\Phi\left(\dfrac{\rho p_1}{\sqrt{2-\rho^2}}\right)}\int_\frac{-\rho p_1}{\sqrt{2-\rho^2}}^\infty \Phi \left(\dfrac{p_1\sqrt{2-\rho^2}+\rho t}{\sqrt{2(1-\rho^2)}}\right)\phi(t)dt\\
&=\mathbb{E}\left[\Phi\left(\dfrac{p_1\sqrt{2-\rho^2}+\rho t}{\sqrt{2(1-\rho^2)}}\right) \bigg\vert t \geq \dfrac{-p_1\rho}{\sqrt{2-\rho^2}}\right] \end{align*}
Observe that if $f(p_1,\rho)$ is the above expectation, then we have $f(p_1,\rho)+f(-p_1,-\rho)=1$. Thus, we can conclude that $c(-\rho)=-c(\rho)$ for all $\rho \in [0,1]$. We therefore focus on $\rho>0$ and argue that the optimal cutoff must be decreasing in $\rho$.
The expectation above equals $$\dfrac{\mathbb{P}[X \leq p_1, Y \leq t p_1]}{\mathbb{P}[Y \leq t p_1]}$$ where $X,Y \sim N(0,0,1,1,t)$ and $t=\dfrac{\rho}{\sqrt{2-\rho^2}}$ and this completes the proof. \end{appendixproof}
Again, we use the definition of the optimal cutoff from Theorem~\ref{thm:n=2_optimal_is_cutoff_prob} and apply it to the bivariate normal case. Then, using the integral formulas from \citet{owen_table_1980}, we show that the optimal cutoff in this case is defined by $\dfrac{\mathbb{P}[X \leq c, Y \leq t c]}{\mathbb{P}[Y \leq t c]}=\frac{1}{2}$ where $X,Y \sim N(0,0,1,1,t)$ and $t=\dfrac{\rho}{\sqrt{2-\rho^2}}$. We have not been able to show that this result implies that the optimal cutoff is decreasing in $\rho$, but the following contour plot (made in Python) suggests that it is:
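The defining condition can also be evaluated directly by one-dimensional quadrature, since $\mathbb{P}[X \leq c, Y \leq tc] = \int_{-\infty}^{c} \Phi\!\left(\tfrac{t(c-x)}{\sqrt{1-t^2}}\right)\phi(x)\,dx$ for standard bivariate normal $(X,Y)$ with correlation $t$. The following Python sketch (our annotation, not part of the paper) bisects for the cutoff at a few values of $t$ and reproduces the decreasing pattern suggested by the contour plot:

```python
# Evaluate P[X<=c, Y<=t*c] / P[Y<=t*c] = 1/2 numerically and bisect for c.
import math

def Phi(x):  # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):  # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def joint_cdf(c, y, t, lo=-8.0, n=2000):
    # P[X <= c, Y <= y] for standard bivariate normal with correlation t,
    # by midpoint quadrature of Phi((y - t*x)/sqrt(1-t^2)) * phi(x) over x <= c
    s = math.sqrt(1.0 - t * t)
    h = (c - lo) / n
    return h * sum(Phi((y - t * (lo + (i + 0.5) * h)) / s) * phi(lo + (i + 0.5) * h)
                   for i in range(n))

def g(c, t):
    return joint_cdf(c, t * c, t) - 0.5 * Phi(t * c)

def cutoff(t, lo=-3.0, hi=1.0):
    # g(lo, t) < 0 < g(hi, t) for moderate t; bisect for the indifference point
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if g(mid, t) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At $t=0$ the cutoff is $0$, and it becomes increasingly negative as $t$ (equivalently $\rho$) grows.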
\begin{figure}
\caption{The optimal cutoff for maximizing probability of choosing better project as a function of correlation (contour plot made in Python).}
\end{figure}
\section{Remarks}
\begin{enumerate}[leftmargin=*]
\item \textbf{Comparison of optimal mechanisms for the two objectives.} We wanted to see how the optimal cutoffs for the two objectives compare when payoffs are bivariate normal. To do so, we plotted the two optimal cutoff curves together and, interestingly, found that the curves coincide. While we have not been able to formally prove that the solutions coincide, we conjecture that they do.
\begin{conjecture}
With $N=2$ projects and payoffs $(p_i,a_i)$ drawn i.i.d.\ from the bivariate normal $N(0,0,1,1,\rho)$, the optimal mechanism for maximizing the principal's expected profit and that for maximizing the probability of choosing the better project is the same cutoff mechanism, with the cutoff $c(\rho)$ defined by $c\Phi(tc)+t\phi(tc)=0$ where $t=\frac{\rho}{\sqrt{2-\rho^2}}$.
\end{conjecture}
\item \textbf{Delegation interpretation of the optimal mechanism.} The simplest implementation of the optimal cutoff mechanism has a nice delegation interpretation. The principal selects a cutoff profit and a default project and delegates the project choice to the agent, in the sense that the agent can select either the default project or a project which meets the cutoff profit, and the principal signs off on the final decision. Under this delegation, the agent chooses his favorite project among those that meet the cutoff and the default project. Note in particular that this implementation only requires the agent to report information about the chosen project. This is outcome-equivalent to the cutoff mechanism. We note that many instances of ``cutoff mechanisms'' with flavors similar to ours have appeared in the literature, but the optimality of such mechanisms has been driven by the assumption of ex-post verifiability (\citet{ben2014optimal}, \citet{mylovanov2017optimal}, \citet{armstrong2010model}). In particular, in most previous models the principal's ability to punish in the case of a misreport is tantamount to the assumption that the agent cannot lie. Here, we offer an alternative way of rationalizing such cutoff mechanisms via the no-overselling (or more generally, interim partial verifiability) constraint, which alters the agent's incentives but \textit{not} by threatening the agent in the case of a misreport. To help elucidate this point, we make the following observation. If our model were altered so that the agent had an unconstrained message space, but the principal were required to take the default project in case the agent should oversell any of the projects, then all of our results would carry over.
\end{enumerate} \section{Conclusion} We consider a principal agent project selection problem with asymmetric information. When the agent can lie arbitrarily, we find that the principal cannot gain anything from commitment power and may as well choose a project at random. In contrast, if the principal can identify or induce partial verifiability in the environment so that the agent cannot oversell any of the projects, then a simple cutoff mechanism is optimal for the case of two projects, both for maximizing expected profit and for maximizing the probability of choosing the better project. In the particular case where payoffs are bivariate normal, we find that the optimal cutoff is decreasing in the correlation between payoffs; thus, our model provides evidence in favor of the ally principle, which says that the principal grants more leeway to an agent who shares its preferences. We conjecture that our results for the case of two projects extend to settings with more than two projects as well. \\
\nocite{*}
\end{document} |
\begin{document}
\title{Integrability of solutions of the Skorokhod Embedding Problem}
\date{\today} \author{David Hobson}
\begin{abstract}
\input{abstract.txt}
\end{abstract}
\maketitle
\input{content.txt}
\end{document}
\section{Introduction} \label{sec:intro}
Let $X$ be a regular, time-homogeneous diffusion on an interval $I^X\subseteq \mathbb{R}$, with $X_0 = x \in int(I^X)$, and let $\mu$ be a probability measure on $\overline{I^X}$. Then $\tau$ is a solution of the Skorokhod embedding problem (Skorokhod~\cite{Skorokhod:65}) for $\mu$ in $X$ if $\tau$ is a stopping time and $X_\tau \sim \mu$. We call such a stopping time an embedding (of $\mu$ in $X$).
For a general Markov process Rost~\cite{Rost:71} gives necessary and sufficient conditions which determine whether a solution to the Skorokhod embedding problem (SEP) exists for a given target law. The conditions are expressed in terms of the potential. When applied to Brownian motion (where we include the case of Brownian motion on an interval subset of $\mathbb{R}$, provided the process is absorbed at finite endpoints) these conditions lead to a characterisation of the set of measures which can be embedded in Brownian motion. Then, in the case of a regular, one-dimensional, time-homogeneous diffusion with absorbing endpoints, necessary and sufficient conditions for the existence of a solution to the SEP can be derived via a change of scale. Let $s$ be the scale function of $X$; then $Y=s(X)$ is a local martingale, and in particular a time-change of Brownian motion. Further, let $I=s(I^X)$ be the state space of $Y$. Then the set of measures for which a solution of the SEP exists depends on both $I$ and the relationship between the starting value of $Y$ and the mean of the image under $s$ of the target law, see Theorem~\ref{thm:existence} below.
Apart from the existence result above, most of the literature on the SEP has concentrated on the case where $X$ is Brownian motion in one dimension. Exceptions include Rost~\cite{Rost:71} as mentioned above, Bertoin and LeJan~\cite{BertoinLeJan:92} who consider embeddings in any time-homogeneous process with a well-defined local time, Grandits and Falkner~\cite{GranditsFalkner:00} (drifting Brownian motion), Hambly {\em et al}~\cite{HamblyKerstingKyprianou:02} (Bessel process of dimension 3), and Pedersen and Peskir~\cite{PedersenPeskir:01} and Cox and Hobson~\cite{CoxHobson:04} (these last two consider embeddings in general time-homogeneous diffusions).
In the Brownian setting many solutions of the SEP have been described; see Obloj~\cite{Obloj:04} or Hobson~\cite{Hobson:11} for a survey. Given there are many solutions, it is possible to look for criteria which characterise `small' or `good' solutions. In both the Brownian case and more generally, there is a natural class of good solutions of the SEP, namely the minimal embeddings (Monroe~\cite{Monroe:72}). An embedding $\tau$ is minimal if whenever $\sigma \leq \tau$ is another embedding (of $\mu$ in $X$) then $\sigma = \tau$ almost surely.
Another criterion for a good solution might be that it is integrable, or as small as possible in the sense of expectation. In this article we are interested in the integrability or otherwise of solutions of the SEP, and in the relationship between integrability and minimality in the case where $X$ is a time-homogeneous diffusion in one dimension.
Consider the case where $X$ is Brownian motion null at zero and write $W$ for $X$. By the results of Rost~\cite{Rost:71} there exists a solution of the SEP for $\mu$ in $W$ on $\mathbb{R}$ for {\em any} measure $\mu$ on $\mathbb{R}$. If we require integrability of the embedding then the story is also well-known:
\begin{theorem} [Monroe~\cite{Monroe:72}] \label{thm:brownian} There exists an integrable solution of the SEP for $\mu$ in $W$ if and only if $\mu$ is centred and in $L^2$.
Further, in the case of centred square-integrable target measures, $\tau$ is minimal for $\mu$ if and only if $\tau$ is an embedding of $\mu$ and $\mathbb{E}[\tau] < \infty$. \end{theorem}
Our goal in this paper is to consider the case where $X$ is a regular time-homogeneous diffusion on an interval $I^X$ with absorbing endpoints. Let $x \in int(I^X)$ denote the initial value of $X$, let $m^X$ denote the speed measure, and $s^X$ the scale function. Let $\mu$ be a probability measure on $\overline{I^X}$.
Our main result is as follows: \begin{theorem} \label{thm:general} There exists an integrable solution of the SEP for $\mu$ in $X$ if and only if $E_X(x;\mu)<\infty$ where $E_X(x;\mu)$ is defined in (\ref{eqn:EXdef}) below. Further, in the case where $E_X(x;\mu)<\infty$ then $\tau$ is minimal for $\mu$ if and only if $\tau$ is an embedding and $\mathbb{E}[\tau] = E_X(x;\mu)$. \end{theorem}
In the Brownian case there is a dichotomy, and for any embedding either $\mathbb{E}[\tau] = \int x^2 \mu(dx)$ or $\mathbb{E}[\tau] = \infty$, and so if the target law is square integrable then minimality of an embedding is equivalent to integrability. This is not true in general for diffusions: we can have integrable embeddings which are not minimal. The converse is also true: both in the Brownian case and more generally we can have minimal embeddings which are not integrable. This will be the case if $E_X(x;\mu)=\infty$.
We close the introduction by considering a trio of illuminating and motivating examples.
\begin{example} \label{ex:ABM} Let $Z = (Z_t)_{t \geq 0}$ be Brownian motion on $\mathbb{R}_+$ absorbed at zero, and with $Z_0 = z>0$. Then there exists an embedding of $\mu$ if and only if $\int x \mu(dx) \leq z$. Moreover, there exists an integrable embedding of $\mu$ in $Z$ if and only if $\int x \mu(dx) = z$ and $\int x^2 \mu(dx) < \infty$, and then an embedding $\tau$ is minimal if and only if $\mathbb{E}[\tau] < \infty$, if and only if $\mathbb{E}[\tau] = \int (x-z)^2 \mu(dx)$. Note that $Z$ is a supermartingale, so the necessity of $\int x \mu(dx) \leq z$ is clear. \end{example}
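A quick heuristic for the displayed value of $\mathbb{E}[\tau]$ (our annotation, not part of the original argument): up to the absorption time $H_0$ the process $Z$ is a Brownian motion, so $Z_{t \wedge \tau}^2 - (t \wedge \tau)$ is a martingale for embeddings with $\tau \leq H_0$; assuming enough uniform integrability to pass to the limit,

```latex
\[
\mathbb{E}[\tau] \;=\; \mathbb{E}[Z_\tau^2] - z^2
\;=\; \int x^2 \,\mu(dx) - z^2
\;=\; \int (x-z)^2 \,\mu(dx),
\]
```

where the last equality uses $\int x \,\mu(dx) = z$.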
\begin{example} \label{ex:DBM} Let $V = (V_t)_{t \geq 0}$ be upward drifting Brownian motion with $V_0 = v$. In particular, suppose $V$ solves $V_t = v + a W_t + b t$ with $b>0$ and set $\beta = 2b/a^2$. Then there exists an embedding of $\mu$ if and only if $\int e^{- \beta(u-v)}\mu(du) \leq 1$. (Upward drifting Brownian motion is transient to $+\infty$ and so there will be an embedding of $\mu$ provided $\mu$ does not place too much mass at values far below $v$.) Moreover, there exists an integrable embedding of $\mu$ if and only if $\int e^{- \beta(u-v)} \mu(du) \leq 1$ and $\int u^+ \mu(du) < \infty$. If there exists an integrable embedding then an embedding $\tau$ is minimal if and only if $\mathbb{E}[\tau] = E(v;\mu)$ where \[ E(v;\mu) = \frac{1}{b} \left(\int x \mu(dx) - v \right) < \infty. \] \end{example}
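A quick heuristic for the formula for $E(v;\mu)$ (our annotation, not part of the original argument): $V_t - v - bt$ is a martingale, so assuming optional stopping can be justified,

```latex
\[
\mathbb{E}[\tau] \;=\; \frac{\mathbb{E}[V_\tau] - v}{b}
\;=\; \frac{1}{b}\left( \int x\,\mu(dx) - v \right),
\]
```

which is the expression for $E(v;\mu)$ above.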
\begin{example} \label{ex:Bes3} Let $P= (P_t)_{t \geq 0}$ be a Bessel process of dimension 3 started at $P_0=p>0$. Then there exists an embedding of $\mu$ if and only if $\int x^{-1} \mu(dx) \leq p^{-1}$. Moreover, there exists an integrable embedding of $\mu$ if and only if $\int x^{-1} \mu(dx) \leq p^{-1}$ and $\int x^2 \mu(dx) < \infty$, and then an embedding $\tau$ is minimal for $\mu$ if and only if $\tau$ is an embedding and $\mathbb{E}[\tau] = E(p;\mu)$, where \begin{equation} E(p;\mu)= \frac{1}{3} \int x^2 \mu(dx) - \frac{p^2}{3}. \end{equation} Note that a Bessel process is transient to infinity, and so for there to exist an embedding of $\mu$, $\mu$ cannot place too much mass near zero. For an integrable embedding, in addition, $\mu$ cannot place too much mass far from zero, as the process takes a long time to get there. Note also that $Y=P^{-1}$ is a diffusion in natural scale and that $Y$ is the classical Johnson--Helms example of a local martingale which is not a martingale.
The results extend to the case $p=0$. Then any $\mu$ on $\mathbb{R}^+$ can be embedded in $P$. There exists an integrable embedding if and only if $\mu$ is square integrable. \end{example}
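A quick heuristic for the formula for $E(p;\mu)$ (our annotation, not part of the original argument): by It\^o's formula, $d(P_t^2) = 2 P_t\, dB_t + 3\, dt$ for a Bessel process of dimension 3, so $P_t^2 - p^2 - 3t$ is a local martingale; assuming the passage to the limit can be justified,

```latex
\[
\mathbb{E}[\tau] \;=\; \frac{\mathbb{E}[P_\tau^2] - p^2}{3}
\;=\; \frac{1}{3}\int x^2 \,\mu(dx) - \frac{p^2}{3} \;=\; E(p;\mu).
\]
```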
\section{Preliminaries, notation and the switch to natural scale}
Let $X$ be a time-homogeneous diffusion with state space $I^X$ and started at $x$. Suppose that $X$ is regular, i.e.\ for all $x' \in int(I^X)$ and $x'' \in I^X$, $\mathbb{P}^{x'}( H_{x''} < \infty ) > 0$. Then, see Rogers and Williams~\cite{RogersWilliams:00b} or Borodin and Salminen~\cite{BorodinSalminen:02}, $X$ has a scale function $s$ and $Y = s(X)$ is a diffusion in natural scale on the interval $I= s(I^X)$. Denote the endpoints of $I$ by $\{\ell,r\}$ and suppose $y = s(x)$ lies in $(\ell,r)$. Then we have $-\infty \leq \ell < y < r \leq \infty$.
For a diffusion process $Z$ let $H^Z_z = \inf \{ s \geq 0 : Z_s = z \}$, and $H^Z_{a,b} = H^Z_a \wedge H^Z_b$. Where the process $Z$ involved is clear, the superscript may be dropped.
We have that $(Y_{t \wedge H^Y_{\ell,r}})_{t \geq 0}$ is a continuous local martingale. In particular, we can write $Y_t = W_{\Gamma_t}$ for some Brownian motion $W$ started at $y$ and a strictly increasing time-change $\Gamma$. We have already seen from Example~\ref{ex:Bes3} that $Y$ may easily be a strict local martingale.
Let $\mu$ be a law on $\overline{I^X}$ and define $\nu = \mu \circ s^{-1}$ so that for a Borel subset of $\overline{I}$, $\nu(A) = \mu ( s^{-1}(A))$. Then $\tau$ is an embedding of $\mu$ in $X$ if and only if $\tau$ is an embedding of $\nu$ in $Y$. Moreover, the integrability of $\tau$ is also unaffected by a change of scale, and thus we lose no generality in assuming that our diffusion is in natural scale. Minimality is another property which is preserved under a change of scale.
Henceforth, therefore, we assume we are given a local martingale diffusion $Y$ on $I$ with $Y_0=y \in int(I)$ and target measure $\nu$ on $\overline{I}$. Provided $\nu \in L^1$, write $\overline{\nu}$ for the mean of $\nu$, with a similar convention for other measures. We assume that if $Y$ can reach an endpoint $\ell$ or $r$ of $I$ in finite time then that endpoint is absorbing. The diffusion $Y$ in natural scale is characterised by its speed measure which we denote by $m$. Recall that if $Y$ solves the SDE $dY_t = \eta(Y_t) dB_t$ for a continuous diffusion coefficient $\eta$ then $m(dy) = dy/\eta(y)^{2}$.
\begin{theorem}[Pedersen and Peskir~\cite{PedersenPeskir:01}, Cox and Hobson~\cite{CoxHobson:04}] \label{thm:existence} \begin{enumerate} \item Suppose $I$ is a finite interval. Then $\nu$ can be embedded in $Y$ if and only if $y = \int x \nu(dx)$. \item Suppose $I=(\ell, \infty)$ or $[\ell,\infty)$ for $\ell > - \infty$. Then $\nu$ can be embedded in $Y$ if and only if $y \geq \int x \nu(dx)$. \item Suppose $I=(-\infty,r)$ or $(-\infty,r]$ for $r<\infty$. Then $\nu$ can be embedded in $Y$ if and only if $y \leq \int x \nu(dx)$. \item Suppose $I=\mathbb{R}$. Then $\nu$ can be embedded in $Y$ if and only if $\nu$ is a measure on $\mathbb{R}$. \end{enumerate} \end{theorem}
The idea behind the proof is to write $Y$ as a time-change of Brownian motion, $Y_t = W_{\Gamma_t}$. Then, since $Y$ is absorbed at the endpoints we must have that $\Gamma_t \leq H^W_{\ell,r}$ for each $t$.
In the first case of the theorem $Y$ is a bounded martingale and $\mathbb{E}[Y_\tau] = y$ for any $\tau$. In the second case $Y$ is a local martingale bounded below and hence a supermartingale for which $\mathbb{E}[Y_\tau] \leq y$. In the third case $Y$ is a submartingale.
\begin{proposition} \label{prop:minimal} Suppose that at most one endpoint of $I$ is infinite. Then any embedding of $\nu$ on $int(I)$ is minimal. \end{proposition}
\begin{proof} We prove the result in the case $I=(\ell,\infty)$ or $[\ell,\infty)$ with $\ell > -\infty$. The other cases are similar.
Since $I$ has a finite endpoint, $Y$ is transient. Further, $Y$ is a supermartingale.
Let $\tau$ be an embedding of $\nu$, where necessarily $\overline{\nu} \leq y$. Let $\sigma \leq \tau$ be another embedding. Then, from the supermartingale property, $\mathbb{E}[Y_\tau ; Y_\sigma \leq x] \leq \mathbb{E}[Y_\sigma; Y_\sigma \leq x]$ and, since $Y_\sigma$ and $Y_\tau$ are equal in law, \[ \mathbb{E}[x -Y_\tau ; Y_\sigma \leq x] \geq \mathbb{E}[x -Y_\sigma; Y_\sigma \leq x] = \mathbb{E}[x - Y_\tau ; Y_\tau \leq x] = \sup_{A} \mathbb{E}[x - Y_\tau;A], \] where the supremum is over all events $A$. Hence, modulo null sets, $(Y_\tau \leq x) = (Y_\sigma \leq x)$; since this holds for all $x$, $Y_\sigma = Y_\tau$ almost surely.
Suppose $\sigma \leq \eta \leq \tau$. Then
\[ Y_\eta \geq \mathbb{E}[Y_\tau | \mathcal F_\eta] = \mathbb{E}[Y_\sigma | \mathcal F_\eta] = Y_\sigma . \] But also $\mathbb{E}[Y_\eta - Y_\sigma] \leq 0$ since $Y$ is a supermartingale, and hence $Y_\eta = Y_\sigma$. It follows that $Y$ is constant over the interval $[\sigma,\tau]$. But $Y$ is a time change of Brownian motion $Y_t = W_{\Gamma_t}$ for some strictly increasing time-change $\Gamma$. Brownian motion has no intervals of constancy, and hence nor does $Y$. It follows that $\sigma = \tau$ and hence $\tau$ is minimal.
\end{proof}
We close this section with a discussion of the Brownian case, including a partial proof of Theorem~\ref{thm:brownian}, followed by a discussion of the local martingale diffusion case.
For $W$ a Brownian motion null at 0, $W^2_{t \wedge \tau} - (t \wedge \tau)$ is a martingale and \begin{equation} \mathbb{E}[\tau] = \liminf \mathbb{E}[t \wedge \tau] = \liminf \mathbb{E}[W^2_{t \wedge \tau}] \geq \mathbb{E}[\liminf W^2_{t \wedge \tau}] = \mathbb{E}[W_\tau^2]. \end{equation} Moreover, from Doob's $L^2$ submartingale inequality we know that $\mathbb{E}[\tau]<\infty$ if and only if $\mathbb{E}[(W_\tau^*)^2]<\infty$, and then $(W_{t \wedge \tau})_{t \geq 0}$ and $(W_{t \wedge \tau}^2)_{t \geq 0}$ are uniformly integrable. It follows that if $\mathbb{E}[\tau]<\infty$ then \[ 0 = \lim \mathbb{E}[ W_{t \wedge \tau}] = \mathbb{E}[W_\tau] = \int x \mu(dx) \] and \[ \mathbb{E}[\tau] = \lim \mathbb{E}[t \wedge \tau] = \lim \mathbb{E}[ W_{t \wedge \tau}^2 ] = \mathbb{E}[W_\tau^2] = \int x^2 \mu(dx),\] so that $\mu$ is centred and in $L^2$.
Conversely, if $\mu$ is centred and in $L^2$ then there are several classical constructions which realise an integrable embedding, including those of Skorokhod~\cite{Skorokhod:65} and Root~\cite{Root:69}. See Obloj~\cite{Obloj:04} or Hobson~\cite{Hobson:11} for a discussion.
The final statement of Theorem~\ref{thm:brownian} is deeper, and follows from Theorem 5 of Monroe~\cite{Monroe:72}. One of the main goals of this work is to extend the work of Monroe to general diffusions. Note that the arguments above yield that in the Brownian case if $\tau$ is an embedding of $\mu$ and $\mathbb{E}[\tau]<\infty$ then $\mathbb{E}[\tau] = \int x^2 \mu(dx)$, so that if $\mu$ is centred and in $L^2$ then every integrable embedding is minimal.
Consider now the case of a general diffusion $Y$ in natural scale. Suppose $Y_0=y=0$ and that $\nu$ is centred. Then to determine whether there might exist an integrable embedding we might expect to replace the condition $\int x^2 \mu(dx)<\infty$ of the Brownian case with some other integral test depending on the speed measure $m$ of $Y$ and the target measure $\nu$. Indeed we find this is the case, with $x^2$ replaced by a convex function $q$ defined in (\ref{eqn:qdef}) in the next section.
But what if $\nu$ is not centred? In the Brownian case there is no hope that the target law can be embedded in integrable time, not least because $\mathbb{E}[H^W_x] = \infty$ for each non-zero $x$, but what if $Y$ is some other diffusion?
Suppose the state space $I$ of $Y$ is unbounded above. Suppose $Y_0=y$ and $\nu \in L^1$ with $\overline{\nu} = \int x \nu(dx) < y$. (In this discussion we exclude the degenerate case where $Y$ is a point mass at $\ell$.) One candidate way to embed $\nu$ is to first wait until $H^Y_{\overline{\nu}} = \inf \{ t : Y_t = \overline{\nu} \}$ and then to embed $\nu$ in $Y$ started at $\overline{\nu}$, i.e.\ to set \begin{equation}\label{eqn:taudef}
\tau = H^Y_{\overline{\nu}} + \tau^{\overline{\nu}, \nu} \circ \Theta_{H^Y_{\overline{\nu}}} \end{equation} where $\Theta$ is the shift operator $\Theta_t ( \omega( \cdot)) = \omega(t+ \cdot)$ and $\tau^{\overline{\nu},\nu}$ is some embedding of $\nu$ in $Y$ started at $\overline{\nu}$. Note that since $I$ is unbounded above and $Y$ is a time-change of Brownian motion, it follows that $ H^Y_{\overline{\nu}}$ is finite almost surely. The embedding in (\ref{eqn:taudef}) will be integrable if {\em both} $H^Y_{\overline{\nu}}$ {\em and} $\tau^{\overline{\nu}, \nu}$ are integrable, and we can decide if it is possible to choose $\tau^{\overline{\nu}, \nu}$ integrable using the integral test of the centred case. Our results show that although embeddings of $\nu$ need not be of the form given in (\ref{eqn:taudef}), nonetheless there exist integrable embeddings if and only if both $\mathbb{E}[H^Y_{\overline{\nu}}]<\infty$ and there is an integrable embedding $\tau^{\overline{\nu},\nu}$ of $\nu$ in $Y$ started at $\overline{\nu}$. In that case every minimal embedding has the same first moment.
\section{Every minimal embedding has the same first moment} Let $Y$ be a regular diffusion in natural scale on $I \subseteq \mathbb{R}$. Suppose $Y_0 = y$. Let $m$ denote the speed measure of $Y$, define $q_u$ via \begin{equation}\label{eqn:qdef}
q_u(w) = 2 \int_u^w dv \int_u^v m(dz) = 2 \int_u^w m((u,v))dv \end{equation} and let $q = q_y$. Then $q(Y_t) - t$ is a local martingale, null at zero.
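For orientation, with the normalisation of the speed measure used here, Brownian motion has $m(dz) = dz$, and (\ref{eqn:qdef}) reduces to the familiar quantity
\[ q_y(w) = 2 \int_y^w (v - y)\, dv = (w-y)^2 , \]
so that the local martingale $q(Y_t) - t$ is simply $(W_t - y)^2 - t$, in agreement with the Brownian computation of the previous section.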
\begin{definition} \label{def:E} If $\nu \notin L^1$ set $E_Y(y;\nu)=\infty$. For $\nu \in L^1$ define \begin{equation} \label{eqn:Edefu}
E_Y(y;\nu) = \int q_y(z) \nu(dz) + |y - \overline{\nu}| \lim_{n \to \infty} \frac{ q_y( y + n \mathrm{sign}(y - \overline{\nu})) }{n} \end{equation} with the convention that $\mathrm{sign}(0)=0$. \end{definition}
In the case of a diffusion in natural scale, the main result of this paper is the following: \begin{theorem} \label{thm:martingale} There exists an integrable solution of the SEP for $\nu$ in $Y$ if and only if $E_Y(y;\nu)<\infty$. Further, in the case where $E_Y(y;\nu)<\infty$ we have that $\tau$ is minimal for $\nu$ if and only if $\tau$ is an embedding and $\mathbb{E}[\tau] = E_Y(y;\nu)$. \end{theorem}
Our goal is to prove Theorem~\ref{thm:martingale}. In this section we suppose that $\nu \in L^1$ and $-\infty \leq \ell < y < r \leq \infty$.
\subsection{The centred case with support in a sub-interval} Suppose $\nu$ is a measure with mean $y$ and support in a subset $[L,R] \subset (\ell,r)$ of $I$ where $L < y < R$.
\begin{lemma} \label{lem:minimalLR} Suppose $\tau \leq H_{L,R}$. Then $\tau$ is minimal for $\mathcal{L}(Y_\tau)$ in $Y$ and $\mathbb{E}[\tau] = \mathbb{E}[q(Y_\tau)]$. \end{lemma}
\begin{proof} The process $Y_{t \wedge \tau}$ is bounded and $\mathbb{E}[Y_\tau] = y$. Also $q$ is bounded on $[L,R]$. Hence \[ \mathbb{E}[q(Y_\tau)] = \lim_t \mathbb{E}[q(Y_{t \wedge \tau})] = \lim_{t} \mathbb{E}[t \wedge \tau] = \mathbb{E}[\tau]. \] In general, from Fatou's Lemma we know that for any embedding $\chi$ of $\nu$, \[ \mathbb{E}[\chi] = \lim_t \mathbb{E}[ \chi \wedge t] \geq \lim_t \mathbb{E}[q(Y_{t \wedge \chi})] \geq \mathbb{E}[q(Y_\chi)] = \int q(x) \nu(dx). \] In particular, if $\chi \leq \tau$ and both $\chi$ and $\tau$ are embeddings of $\nu$, then $\mathbb{E}[\chi] \leq \mathbb{E}[\tau] = \int q(x) \nu(dx) \leq \mathbb{E}[\chi]$, so that $\mathbb{E}[\chi] = \mathbb{E}[\tau]$ and $\chi = \tau$ almost surely. Hence $\tau$ is minimal. See also Proposition 4 in \cite{AnkirchnerHobsonStrack:13}.
\end{proof}
Suppose that $\sigma$ is an embedding of $\nu$. Our goal is to show that there exists an embedding $\tilde{\sigma}$ of $\nu$ such that $\tilde{\sigma} \leq \sigma \wedge H_{L,R}$. Then $\tilde{\sigma}$ is minimal and $\mathbb{E}[\tilde{\sigma}] = \int q(x) \nu(dx)$. It follows that if $\sigma$ is minimal, then $\sigma = \tilde{\sigma}$ and $\mathbb{E}[\sigma]= \int q(x) \nu(dx)$.
Following a definition of Root~\cite{Root:69}, we define a barrier to be a closed subset $B$ of $G=[0,\infty] \times [-\infty,\infty]$ such that: $(\infty, x) \in B$ for all $x \in [-\infty,\infty]$; $(t, -\infty) \in B$ and $(t,\infty) \in B$ for all $t \in [0,\infty]$; if $(0,x) \in B$ for some $x>y$ then $(0,x') \in B$ for all $x'>x$; similarly, if $(0,x) \in B$ for some $x<y$ then $(0,x') \in B$ for all $x'<x$; and finally, if $(t,x) \in B$ then $(s,x) \in B$ for all $s>t$. Let $\mathcal{B}$ be the space of all barriers and, given $L,R$ with $\ell \leq L < y < R \leq r$, let $\mathcal{B}_{L,R}$ be the set of all barriers $B$ with $(0,L)$ and $(0,R)$ in $B$, so that $(t,x) \in B$ whenever $t \geq 0$ and either $x \leq L$ or $x \geq R$.
Let $\rho$ be the standard Euclidean metric on $\mathbb{R}^2$.
We map $G$ into the bounded rectangle $F = [0,1] \times [-1,1]$ via $(t,x) \mapsto (t/(1+t),x/(1+|x|))$ and let $d$ be the induced metric on $G$ given by
\[ d((t,x),(s,y)) = \rho \left( \left( \frac{t}{1+t},\frac{x}{1+|x|} \right),\left( \frac{s}{1+s},\frac{y}{1+|y|} \right) \right) . \] Now define the metric $d_\mathcal{G}$ on the set $\mathcal{G}$ of closed subsets of $G$ by \[ d_{\mathcal{G}}(C,D) = \max \left\{ \sup_{(t,x) \in C} d((t,x),D), \sup_{(s,y) \in D} d((s,y),C) \right\}; \] then $\mathcal{G}$ is a compact separable space and the spaces $\mathcal{B}$ and $\mathcal{B}_{L,R}$ are compact. For $B \in \mathcal{B}$ define \begin{equation*}
\tau_{B} = \inf \{ t : (t, Y(t)) \in B \}. \end{equation*}
\begin{lemma} \label{lem:m4} Suppose $\nu$ has mean $y$ and support in $[L,R]$. Suppose that $\sigma$ is an embedding of $\nu$. Then there is a barrier $B \in \mathcal{B}_{L,R}$ such that $\sigma \wedge \tau_B \leq H_{L,R}$ is a minimal embedding of $\nu$ and $\mathbb{E}[\sigma \wedge \tau_B] = \int q(x) \nu(dx)$. \end{lemma}
\begin{proof}
First suppose $\nu$ puts mass on a finite subset of points in $[L,R]$. In this case it is easy to prove the result by adapting the proof in Monroe~\cite{Monroe:72} which is based on topological arguments. We choose instead to give a more probabilistic proof.
Let $\nu$ be a measure on $n+2$ points. Label the points $y_0 < y_1 < \cdots < y_n < y_{n+1}$. Let $\mathcal C = \{ b = (b_0, b_1, \ldots, b_{n+1}) \in \mathbb{R}^{n+2}_+ ; b_0 = 0 = b_{n+1} \}$. Given $b \in \mathcal C$ let $\eta_b$ be the law of $Y_{\tau(b)}$ where \[ \tau(b) = \inf \{ u > 0 : Y_u = y_k, u \geq b_k, \mbox{some $k \in \{ 0, 1, \ldots n, n+1 \}$} \} \] and note that $\eta_b$ is a probability measure on the same points as $\nu$ with mean $y$. Let \[ \mathcal C_{\leq,\nu} = \{ b \in \mathcal C : \eta_b(\{y_k\}) \leq \nu(\{y_k\}), 1 \leq k \leq n \}. \]
Suppose that $\gamma=(0,\gamma_1, \ldots, \gamma_{n},0)$ and $\lambda=(0,\lambda_1, \ldots \lambda_{n},0)$ are elements of $\mathcal C_{\leq, \nu}$, and consider $\gamma \wedge \lambda = (0,\gamma_1 \wedge \lambda_1, \ldots \gamma_{n} \wedge \lambda_{n},0)$. Set $A = \{ k : \gamma_k <\lambda_k \}$. Then for $k \in A$, $\eta_{\lambda \wedge \gamma}(\{y_k\}) \leq \eta_\gamma(\{y_k\}) \leq \nu( \{y_k\})$ and for $k \in \{1, \ldots n \} \setminus A$, $\eta_{\lambda \wedge \gamma}(\{y_k\}) \leq \eta_\lambda(\{y_k\}) \leq \nu( \{y_k\})$. Hence $\gamma \wedge \lambda \in \mathcal C_{\leq,\nu}$.
It follows that $\mathcal C_{\leq, \nu}$ has a minimal element, $\underline{b}$ say, and that $\eta_{\underline{b}}(\{y_k\}) \leq \nu( \{y_k \})$ for all $1 \leq k \leq n$. If $\eta_{\underline{b}}(\{y_j\}) < \nu( \{y_j\})$ for some $j$ then by making the element of $\underline{b}$ with label $j$ smaller we can increase the mass embedded at $y_j$, without violating the constraint $\eta_{\underline{b}}(\{y_j\}) \leq \nu( \{y_j\})$, whilst simultaneously making $\eta_{\underline{b}}(\{y_k\})$ smaller for each $k \in \{1, \ldots,n \} \setminus \{j \}$, thus contradicting the fact that $\underline{b}$ is a minimal element. Hence $\eta_{\underline{b}}(\{y_k\}) = \nu( \{y_k\})$ for all $k \in \{1, \ldots,n \}$. Since $\eta_{\underline{b}}$ and $\nu$ agree on $\{y_1, \ldots, y_n\}$ and are both probability measures with mean $y$, they are equal. Finally, let \[ B_\nu = ([0,\infty] \times [-\infty, y_0]) \cup \left( \bigcup_{1 \leq i \leq n} \{ (s,y_i) : s \geq \underline{b}_i \} \right) \cup ([0,\infty] \times [y_{n+1} , \infty]). \] Then $\tau_{B_{\nu}} \wedge \sigma \leq H_{y_0, y_{n+1}}$ and the result follows.
Now consider the general case of a measure $\nu$ on $[L,R]$ with mean $y$. Let \[ C_n = \{ k/n ; k = 0, \pm 1, \pm 2 \ldots, L < k/n < R \} \cup \{L,R \}\] and let $\sigma_n = \inf \{ t \geq \sigma :Y_t \in C_n \}$ and $\nu_n = \mathcal{L}(Y_{\sigma_n})$. Then $\sigma_n$ is a stopping time and $\nu_n$ has mean $y$ and finite support. By the study of the previous case there is a barrier $B_n$ such that $Y_{\tau_{B_n} \wedge \sigma_n}$ has law $\nu_n$ and $\tau_{B_n} \leq H_{L,R}$. We want to show that down a subsequence $(B_n)_{n \geq 1}$ converges to a barrier $B$, $\tau_{B_n}$ converges almost surely to $\tau_B \leq H_{L,R}$ and $Y_{\sigma \wedge \tau_B} \sim \nu$.
By the compactness of $\mathcal{B}_{L,R}$, $(B_n)_{n \geq 1}$ has a convergent subsequence. Let $B$ be the limit. Passing to the subsequence, we may assume that $B_n \rightarrow B$. Write $\tau_n$ as shorthand for $\tau_{B_n}$.
Fix $\epsilon > 0$ and $c > 0$. Note that $\mathbb{E}[H_{L,R}]$ is finite and choose $T > 2 \mathbb{E}[H_{L,R}] /\epsilon$; then \[ \mathbb{P}( \tau_n \wedge \tau_B > T ) \leq \frac{\mathbb{E}[\tau_n]}{T} \leq \frac{\mathbb{E}[ H_{L,R} ]}{T} < \frac{\epsilon}{2} .\] Choose $\gamma>0$ such that \[ \inf_{x \in [L,R]} \mathbb{P}^x \left[ \left( \sup_{\gamma < t < c} Y_t - x > \gamma \right) \cap \left( \inf_{\gamma < t < c} Y_t - x < - \gamma \right) \right] > 1 - \epsilon/2 \] and $n_0$ such that \[ \max \left\{ \sup_{(t,x) \in C_{n_0} } \rho((t,x), B) , \sup_{(t,x) \in C} \rho((t,x),\cup_{n \geq n_0} B_n ) \right\} < \gamma , \] where $C_{n_0} = ([0,T] \times [L,R]) \cap ( \cup_{n \geq n_0} B_n)$ and $C = ([0,T] \times [L,R]) \cap B$. Then for $n \geq n_0$, \begin{equation*}
(|\tau_B - \tau_n| > c) \subseteq (\tau_n \wedge \tau_B > T) \cup \bigcup_{(t,x) \in [0,T] \times [L,R]} \left( \{ \tau_n \wedge \tau_B = t, Y_{\tau_n \wedge \tau_B} = x \} \cap F(t,x) \right), \end{equation*} where $F(s,y)$ is the event
\[ F(s,y) =\left( \left. \sup_{s+\gamma < t < s+c} Y_t - y \leq \gamma \right| Y_s = y\right)
\cup \left( \left. \inf_{s+ \gamma < t < s+ c} Y_t - y \geq - \gamma \right| Y_s = y\right). \]
By the choice of $\gamma$, $\mathbb{P}(F(s,y)) \leq \epsilon/2$ for all $(s,y)$. Hence by the strong Markov property
\[ \mathbb{P}(|\tau_B - \tau_n| > c) < \epsilon, \] and down a further subsequence if necessary, $\tau_n \rightarrow \tau_B$ almost surely. Thus \[ \mathcal{L}(Y_{\sigma \wedge \tau_B}) = \lim_{n} \mathcal{L}(Y_{\sigma_n \wedge \tau_n}) = \lim \nu_n = \nu. \]
Also $\sigma \wedge \tau_B = \lim \sigma_n \wedge \tau_n \leq H_{L,R}$ so that $\sigma \wedge \tau_B$ is minimal and $\mathbb{E}[\sigma \wedge \tau_B] =\int q(x) \nu(dx)$. \end{proof}
For a diffusion $Y$ with state space $I$, speed measure $m$ and initial value $Y_0=y$, and for a law $\nu$ on $[L,R]$ with mean $y$, we have that
$E_Y(y;\nu) = \int q_y(x) \nu(dx)$.
Clearly $E_Y(y;\nu) < \infty$ under the present conditions on $\nu$.
\begin{corollary}
Suppose $\nu$ has mean $y$ and support in $[L,R] \subset (\ell,r)$. Then an embedding $\sigma$ of $\nu$ is minimal if and only if $\mathbb{E}[\sigma] = E_Y(y;\nu)$. \end{corollary} \begin{proof}
By the first case of Theorem~\ref{thm:existence} embeddings of $\nu$ in $Y$ exist; let $\sigma$ be one. By Lemma~\ref{lem:m4} there exists a minimal embedding $\tilde{\sigma}= \sigma \wedge \tau_B$ with $\mathbb{E}[\tilde{\sigma}] = E_Y(y;\nu)$. If $\sigma$ is minimal then, since $\tilde{\sigma} \leq \sigma$, we have $\sigma = \tilde{\sigma}$ and $\mathbb{E}[\sigma] = E_Y(y;\nu)$. Conversely, by the arguments at the end of the proof of Lemma~\ref{lem:minimalLR}, every embedding of $\nu$ has mean at least $E_Y(y;\nu)$. So if $\mathbb{E}[\sigma] = E_Y(y;\nu)$ and $\chi \leq \sigma$ is an embedding of $\nu$, then $\mathbb{E}[\chi] \geq E_Y(y;\nu) = \mathbb{E}[\sigma]$, whence $\chi = \sigma$ almost surely and $\sigma$ is minimal. \end{proof}
\subsection{The general centred case} Now suppose that $\nu$ is centred but that there is no subset $[L,R] \subset (\ell,r)$ for which $\nu([L,R])=1$. We construct a sequence of measures $(\nu_n)_{n \geq n_0}$ with supports in bounded intervals $[L_n,R_n] \subset (\ell,r)$ and such that $(\nu_n)_{n \geq n_0}$ converges to $\nu$. Hence, given $\sigma$ and $\nu_n$ there is a barrier $B_n$ with associated stopping time $\tilde{\sigma}_n = \tau_{B_n} \wedge \sigma$ such that $Y_{\tilde{\sigma}_n}$ has law $\nu_n$. For our specific choice of approximating sequence of measures we argue that the sequence of stopping times $\tau_{B_n}$ is monotonic increasing with limit $\tau_\infty$. Finally we show that $\sigma \wedge \tau_\infty$ is minimal and embeds $\nu$.
Recall that our current hypothesis is that $\nu$ is a measure on $\overline{I}$ such that $\nu \in L^1$ and $Y_0 = y = \overline{\nu}$.
For a measure $\eta \in L^1$ with mean $c$ and support in $[\ell,r]$ define the potential $U_\eta : [\ell,r] \mapsto \mathbb{R}_+$ via $U_\eta(x) = \mathbb{E}^{Z \sim \eta}[|Z-x|]$. Let $\mathcal{V}_{c}$ be the set of convex functions $f:[\ell,r] \mapsto \mathbb{R}$ satisfying $f(x) \geq |x-c|$, together with $\lim_{x \downarrow \ell} \{ f(x) - (c - x) \} = 0 = \lim_{x \uparrow r} \{ f(x) - (x-c) \}$. Then $U_\eta \in \mathcal{V}_c$ and there is a one-to-one correspondence between elements of $\mathcal{V}_c$ and probability measures on $[\ell,r]$ with mean $c$. For a pair of probability measures $\eta_i$ with support in $[\ell,r]$ we have that $\eta_1 \leq_{cx} \eta_2$ if and only if $U_{\eta_1}(x) \leq U_{\eta_2}(x)$ for all $x \in [\ell,r]$.
Given $\nu$, fix $n_0 \geq 1/U_\nu(\overline{\nu})$. For $n \geq n_0$ define $U_n : [\ell,r] \mapsto \mathbb{R}_+$ via
\[ U_n(x) = \max \{ U_\nu(x) - 1/n, |x- \overline{\nu}| \}, \] and let $\nu_n$ be the probability measure with potential $U_n$. Then there exist $\{a_n,b_n\}$ such that $[a_n,b_n] \subset (\ell,r)$, $\nu_n(A) = \nu(A)$ for all measurable subsets $A \subset (a_n,b_n)$ and $\nu_n([\ell,a_n)) = 0 = \nu_n((b_n,r])$. Then $\nu_n$ has atoms at $a_n$ and $b_n$ and mean $\overline{\nu}$. Further $(a_n)_{n\geq n_0}$ and $(b_n)_{n \geq n_0}$ are monotonic sequences and the family $(\nu_n)_{n \geq n_0}$ is increasing in convex order.
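The truncation of the potential is straightforward to compute for discrete target laws; the following sketch (ours, illustrative only; the function names are not from the text) evaluates $U_\nu$ and $U_n$ for a measure supported on finitely many points:

```python
def potential(points, probs, x):
    """U_eta(x) = E|Z - x| for the discrete measure eta = sum_i probs[i] * delta_{points[i]}."""
    return sum(p * abs(z - x) for z, p in zip(points, probs))

def truncated_potential(points, probs, x, n):
    """U_n(x) = max{U_nu(x) - 1/n, |x - mean|}, the potential used to define nu_n."""
    mean = sum(p * z for z, p in zip(points, probs))
    return max(potential(points, probs, x) - 1.0 / n, abs(x - mean))
```

For instance, for $\nu = \tfrac14 \delta_{-2} + \tfrac12 \delta_0 + \tfrac14 \delta_2$ we have $U_\nu(0) = 1$ and $U_2(0) = \tfrac12$, while away from $[a_2,b_2]$ the truncated potential coincides with $|x - \overline{\nu}|$, reflecting the fact that $\nu_2$ places no mass there.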
\begin{theorem}\label{thm:embeddingcentred} Suppose $\nu \in L^1$ and $Y_0 = y =\overline{\nu}$. Let $\sigma$ be an embedding of $\nu$. There exists a barrier $B$ such that $Y_{\tau_B \wedge \sigma}$ also has law $\nu$ and $\mathbb{E}[\tau_B \wedge \sigma] = E_Y(y;\nu)$, where $E_Y(y;\nu)=\int q(z)\nu(dz)$. \end{theorem}
\begin{proof} For each $n$, fix $\nu_n$ as above. From our study of the bounded case we know there is a barrier $B_n$ which we can assume contains $\{ (t,x), x \leq a_n \mbox{ or } x \geq b_n \}$ such that $Y_{\tau_{B_n} \wedge \sigma}$ has law $\nu_n$.
We now show that if $p>n$ then $B_p \subset B_n$.
Let $\mathcal{B}_n = \{ B \in \mathcal{B}: \{(t,x): x \leq a_n \mbox{ or } x \geq b_n \} \subseteq B, \ \mathcal{L}(Y_{\tau_B \wedge \sigma}) = \nu_n \}$. We show that if $n<p$, $B_n \in \mathcal{B}_n$ and $B_p \in \mathcal{B}_p$ then $B_n \cup B_p \in \mathcal{B}_n$. Certainly $\{(t,x): x \leq a_n \mbox{ or } x \geq b_n \} \subseteq B_n \cup B_p$. Let $A_{n,p} = \{x: \inf \{ t : (t,x) \in B_n \} \leq \inf \{ t : (t,x) \in B_p \} \}$.
Suppose $A \subset [a_n,b_n]$. If $A \subset A_{n,p}$ and $Y_{\sigma \wedge \tau_{B_n \cup B_p}} \in A$ then $Y_{\sigma \wedge \tau_{B_n}} \in A$ and hence we have \[ \nu_n(A) = \mathbb{P}(Y_{\sigma \wedge \tau_{B_n}} \in A)
\geq \mathbb{P}(Y_{\sigma \wedge \tau_{B_n \cup B_p}} \in A). \] Similarly, if $A \subset A_{n,p}^c$, \[ \nu_n(A) = \nu(A) = \nu_p(A) = \mathbb{P}(Y_{\sigma \wedge \tau_{B_p}} \in A)
\geq \mathbb{P}(Y_{\sigma \wedge \tau_{B_n \cup B_p}} \in A). \] Thus for every set $A \subset [a_n, b_n]$, $\nu_n(A) = \mathbb{P}(Y_{\sigma \wedge \tau_{B_n}} \in A) \geq \mathbb{P}(Y_{\sigma \wedge \tau_{B_n \cup B_p}} \in A)$. Hence there must be equality throughout and $B_n \cup B_p \in \mathcal{B}_n$.
Now fix a sequence $(B_n)_{n \geq 1}$ with $B_n \in \mathcal{B}_n$. Let $\tilde{B}_n$ be the closure of $\cup_{i=n}^{\infty} B_i$. We aim to show that $\tilde{B}_n \in \mathcal{B}_n$. For $k>n$ let \[ B^k_n = \cup_{i = n}^k B_i. \] By the arguments of the previous paragraphs $B^k_n \in \mathcal{B}_n$. Since the set of barriers is compact, $B^{k}_n$ converges to $\tilde{B}_n$ as $k \uparrow \infty$ and $\tau_{B^{k}_n} \downarrow \tau_{\tilde{B}_n}$ (note that $\tau_{B^k_n} \leq H_{a_n,b_n} < \infty$). Hence, since paths of $Y$ are continuous, $\nu_n = \lim_k \mathcal{L}(Y_{\sigma \wedge \tau_{B^{k}_n}}) = \mathcal{L}(Y_{\sigma \wedge \tau_{\tilde{B}_n}})$ and $\tilde{B}_n \in \mathcal{B}_n$. It follows that for $p>n$, $\tilde{B}_p \subset \tilde{B}_n$, and without loss of generality we shall assume that $B_p \subset B_n$.
Define $B_\infty = \cap B_n$ and set $\tau_\infty = \tau_{B_\infty}$. Then $\tau_{B_n} \uparrow \tau_{\infty}$. Also $\tau_{B_n} \wedge \sigma \uparrow \tau_{\infty} \wedge \sigma$ and \[ \mathcal{L}(Y_{\tau_{\infty} \wedge \sigma}) = \lim \mathcal{L}(Y_{\tau_{B_n} \wedge \sigma}) = \lim \nu_n = \nu. \]
It only remains to prove that $\mathbb{E}[\sigma \wedge \tau_\infty] = E_Y(y;\nu)$. But \begin{equation*}
\mathbb{E}[\sigma \wedge \tau_\infty] = \lim \mathbb{E}[\sigma \wedge \tau_{B_n}] = \lim E_Y(y; \nu_n) = \lim \int q(z) \nu_n(dz) = \int q(z) \nu(dz) . \end{equation*}
\end{proof}
\subsection{The uncentred case} Without loss of generality we may assume that the mean of $\nu$ satisfies $\overline{\nu} < y$. Then for there to be an embedding of $\nu$ we must have that $I$ is unbounded above.
Again we construct a sequence of measures $(\nu_n)_{n \geq n_0}$ with supports in bounded intervals $[L_n,R_n] \subset (\ell,r)$ and such that $(\nu_n)_{n \geq n_0}$ converges to $\nu$.
Recall that $\nu$ is a measure on $\overline{I}$ such that $\nu \in L^1$.
Let $F_{\nu}$ be the distribution function of $\nu$ and $F^{-1}_\nu$ the inverse. In particular, if $U \sim U[0,1]$ then $F^{-1}_\nu(U)$ has law $\nu$.
Suppose $\ell > -\infty$. Fix $n_0 > \max \{y, (y - \overline{\nu})^{-1} \}$ and for $n \geq n_0$ let $v_n = F_{\nu}(n-)$ and let $u_n$ solve $\int_{u_n}^{v_n} \max \{ F^{-1}_\nu(u) , ( \ell + 1/n ) \} du + n(u_n + 1 - v_n)=y$. Then $Z_n := F^{-1}_{\nu}(U) I_{ \{ u_n < U \leq v_n \} } + n I_{ \{ (U \leq u_n) \cup (U > v_n) \} }$ has mean $y$. Let $\nu_n$ be the law of $Z_n$. Now set $b_n = n$ and $a_n = \max \{ F^{-1}_{\nu}(u_n), (\ell + 1/n) \}$. For $A \subseteq (a_n,b_n)$ we have $\nu_n(A) = \nu(A)$ and moreover $\nu_n([\ell,a_n)) = 0 = \nu_n((n,\infty])$. The measure $\nu_n$ has an atom at $n$ of size $u_n + (1-v_n)$ (and potentially an atom at $a_n$) and mean $y$. Further $(a_n)_{n\geq n_0}$ is a decreasing sequence and the family $(\nu_n)_{n\geq n_0}$ is increasing in convex order.
If $\ell = - \infty$ then we can construct $\nu_n$ using a similar but simpler argument which does not require moving mass from the interval $(\ell,\ell+1/n)$ to $\ell + 1/n$.
Recall the definition of $E_Y(y;\nu)$ in (\ref{eqn:Edefu}). Since we are assuming that $\nu \in L^1$ and $\overline{\nu} < y$, and since $\lim_{n \to \infty} \frac{ q_y( y + n) }{n} = 2 m((y,\infty))$, this simplifies to \begin{equation}{\label{eqn:Edefu2}}
E_Y(y;\nu) = \int q_y(z) \nu(dz) + 2(y - \overline{\nu})\, m((y,\infty)). \end{equation}
\begin{theorem}\label{thm:embeddingnoncentred} Suppose $\nu \in L^1$. Let $\sigma$ be an embedding of $\nu$. There exists a barrier $B$ such that $Y_{\tau_B \wedge \sigma}$ also has law $\nu$ and $\mathbb{E}[\tau_B \wedge \sigma] = E_Y(y;\nu)$. \end{theorem}
\begin{proof} It only remains to cover the case where $Y_0 = y \neq \overline{\nu}$. We may assume $y > \overline{\nu}$.
For each $n$, fix $\nu_n$ as above. From our study of the bounded, centred case we know there is a barrier $B_n$ which we can assume contains $\{ (t,x), x \leq a_n \mbox{ or } x \geq b_n \equiv n \}$ such that $Y_{\tau_{B_n} \wedge \sigma}$ has law $\nu_n$. Moreover, exactly as in the proof of Theorem~\ref{thm:embeddingcentred}, and with similar notation, it follows that if $p>n$ then $B_p \subset B_n$, that $\tau_{B_n} \uparrow \tau_\infty$ and that $\tau_\infty \wedge \sigma$ embeds $\nu$.
Finally we show that $\mathbb{E}[\sigma \wedge \tau_\infty] = E_Y(y;\nu)$.
Observe that $q$ is convex and so $\lim_n q(n)/n$ exists in $(0,\infty]$. Further \[ y = \int x \nu_n(dx) = \int_{u_n}^{v_n} \max \{ F^{-1}_\nu(u), (\ell + 1/n) \} du + n (1 + u_n - v_n) \] and hence $\lim_n n(1 + u_n - v_n)$ exists and is equal to $y - \overline{\nu}$. Then, as before \begin{equation*}
\mathbb{E}[\sigma \wedge \tau_\infty] = \lim \mathbb{E}[\sigma \wedge \tau_{B_n}] = \lim E_Y(y;\nu_n) = \lim \int q(z) \nu_n(dz) \end{equation*} but in this case \begin{eqnarray*}
\int q(x) \nu_n (dx) &=& \int_{u_n}^{v_n} q(\max \{ F_{\nu}^{-1}(u), (\ell +1/n) \}) du + q(n) ( 1 + u_n - v_n) \\
& \rightarrow& \int_{0}^{1} q(F_{\nu}^{-1}(u)) du + \lim_n \left\{ \frac{q(n)}{n} n( 1 + u_n - v_n) \right\} \\
&=& \int q(x) \nu(dx) + (y - \overline{\nu}) \lim_n \left\{ \frac{q(n)}{n} \right\} \\
& = & E_Y(y; \nu). \end{eqnarray*}
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:martingale} in the case $\nu \in L^1$] If $E_Y(y;\nu)=\infty$ then since any embedding has $\mathbb{E}[\sigma] \geq \mathbb{E}[\sigma \wedge \tau_\infty] = E_Y(y;\nu)$ there are no integrable embeddings. Conversely, if $E_Y(y;\nu)<\infty$, then by Theorem~\ref{thm:embeddingcentred} or Theorem~\ref{thm:embeddingnoncentred} there exists an embedding $\tilde{\sigma}$ with $\mathbb{E}[\tilde{\sigma}] = E_Y(y; \nu)$.
Now suppose $E_Y(y;\nu)<\infty$ and $\sigma$ is an embedding of $\nu$.
Suppose $\sigma$ is minimal. Choose $\nu_n$ as in the discussion before Theorem~\ref{thm:embeddingcentred} or Theorem~\ref{thm:embeddingnoncentred} as appropriate. In both of these theorems it was shown that we could choose a sequence of barriers $B_n$ such that $\tau_{B_n} \wedge \sigma \rightarrow \tau_{B_\infty} \wedge \sigma$ and $\tau_{B_\infty} \wedge \sigma$ embeds $\nu$. By minimality of $\sigma$, $\tau_{B_\infty} \wedge \sigma = \sigma$. Then, since $\tau_{B_n} \wedge \sigma$ is increasing, \[ \mathbb{E}[\sigma] = \mathbb{E}[\tau_{B_\infty} \wedge \sigma] = \lim_n \mathbb{E}[ \tau_{B_n} \wedge \sigma ] = \lim_n \int q(x) \nu_n (dx) = E_Y(y; \nu). \]
Conversely, if $\sigma$ is not minimal then there is an embedding $\hat{\sigma}$ of $\nu$ with $\hat{\sigma} \leq \sigma$, $\mathbb{P}(\hat{\sigma} < \sigma)>0$ and $\hat \sigma$ integrable. Then $\mathbb{E}[\sigma] > \mathbb{E}[\hat{\sigma}] \geq E_Y(y;\nu)$. \end{proof}
\begin{example} The following example shows that unlike in the Brownian case, in general integrability alone is not sufficient for minimality.
Suppose the diffusion $Y$ solves $dY_t = (1+Y_t^2) dW_t$ subject to $Y_0=0$. Let $\nu = \frac{1}{2} \delta_{1} + \frac{1}{2} \delta_{-1}$ so that $\nu$ is uniform measure on $\{ \pm 1 \}$. Let $\hat{H}_0 = \inf \{ t > H_{-1,1} : Y_t = 0 \}$ and $\hat{H}_{-1,1} = \inf \{ t> \hat{H}_0 : |Y_t| = 1 \}$. Then $\hat{H}_{-1,1}$ embeds $\nu$ and $\mathbb{E}[\hat{H}_{-1,1}] < \infty$, but $\hat{H}_{-1,1}$ is not minimal.
\end{example}
\begin{example} This example gives another circumstance in which integrability is not sufficient to guarantee minimality.
Let $Y$ be a time-homogeneous martingale diffusion on $I=[\ell,r]$ with $-\infty < \ell < y < r < \infty$. Suppose $\ell$ and $r$ are exit boundaries and that $\mathbb{E}[H^Y_{\ell,r}] < \infty$. We take $\ell$ and $r$ to be absorbing boundaries. (A simple example is obtained by taking Brownian motion started at $y$ and absorbed at $\ell$ and $r$.) Let $\nu = (r-y)/(r-\ell) \delta_\ell + (y - \ell)/(r-\ell) \delta_r$. Then for $c>0$, $H^Y_{\ell,r} + c$ is an integrable embedding which is not minimal.
However, examples of this type are degenerate and may easily be excluded by restricting the class of embeddings to those satisfying $\sigma \leq H^Y_{\ell,r}$. \end{example}
\begin{example} Now we give an example which shows that minimality alone is not sufficient for integrability.
Let $Y$ be geometric Brownian motion so that $Y$ solves $dY_t = Y_t dW_t$. Let $Y$ have initial value $Y_0 = 1$. It is easy to see that for $a \in (0,1]$ we have \[ \mathbb{E}[H_a] = 2\int_a^\infty [(z \wedge 1) - a] \frac{dz}{z^2} = 2 \log\left( \frac{1}{a} \right). \] Let $\nu = \delta_0$. Then $\tau = \infty$ is the minimal stopping time that embeds $\nu$ in $Y$. Obviously $\tau$ is not integrable.
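The hitting-time formula for geometric Brownian motion can be verified directly from the speed measure (a routine calculation, recorded here for orientation): for $dY_t = Y_t dW_t$ in natural scale we have $m(dz) = dz/z^2$, and using the representation $\mathbb{E}^y[H_a] = 2\int_a^\infty (y \wedge z - a)\, m(dz)$ with $y = 1$,
\[ \mathbb{E}^1[H_a] = 2\int_a^1 \frac{z-a}{z^2}\, dz + 2(1-a)\int_1^\infty \frac{dz}{z^2} = 2\left[ \log z + \frac{a}{z} \right]_a^1 + 2(1-a) = 2 \log\left( \frac{1}{a} \right). \]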
More generally, let $\nu$ be any probability measure on $(0,1)$ with $\int \log y \; \nu(dy) = - \infty$, and let $Z$ be a random variable with $\mathcal{L}(Z) = \nu$. Let the filtration ${\mathbb F} = (\mathcal F_t)_{t \geq 0}$ be such that $Z$ is $\mathcal F_0$-measurable, and let $W$ be an ${\mathbb F}$-Brownian motion, null at zero and independent of $Z$, driving the geometric Brownian motion $Y$ with initial value $Y_0=1$.
Let $\tau = \inf \{ u \geq 0 : Y_u = Z \}$. Then $\tau$ is an embedding of $\nu$. Note that $\tau$ is a stopping time with respect to ${\mathbb F}$ but not with respect to the smaller filtration generated by $W$ alone. Moreover, \[ \mathbb{E}[\tau] = -2 \int \log z \; \nu(dz) = \infty. \]
Observe that $q_1(x) = \int_1^x \int_1^y \frac{2}{z^2} dz dy = 2(x-1) - 2 \log(x)$, and hence $\lim_{x \to \infty} q_1(x)/x = 2$. Therefore, for any law $\nu$ on $(0,1)$, for a minimal embedding \[ \mathbb{E}[\tau] = 2 \int (z-1) \nu(dz) - 2 \int \log z \; \nu(dz) + 2(1 - \overline{\nu}) = - 2 \int \log z \; \nu(dz). \]
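The closed form for $q_1$ can also be confirmed numerically (an illustrative check; the function names are ours). Performing the inner integral $\int_1^y 2 z^{-2}\, dz = 2(1 - 1/y)$ in closed form leaves a one-dimensional quadrature:

```python
import math

def q_gbm_numeric(x: float, steps: int = 4000) -> float:
    """Midpoint-rule evaluation of q_1(x) = int_1^x 2(1 - 1/y) dy,
    the outer integral in the definition of q_1 for geometric Brownian
    motion (speed density 1/z^2)."""
    h = (x - 1.0) / steps
    return sum(2.0 * (1.0 - 1.0 / (1.0 + (i + 0.5) * h)) * h for i in range(steps))

def q_gbm_closed(x: float) -> float:
    """Closed form from the text: q_1(x) = 2(x - 1) - 2 log x."""
    return 2.0 * (x - 1.0) - 2.0 * math.log(x)
```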
We give another example of a minimal non-integrable embedding which does not require independent randomisation in the section on the Az\'ema-Yor stopping time.
Another feature of this example is that $Y$ is a martingale, and yet it is easy to construct examples with $\overline{\nu}<y$ for which there is an integrable embedding. Hence integrability and minimality of $\tau$ are not sufficient for uniform integrability of $(Y_{t \wedge \tau})_{t \geq 0}$. \end{example}
\section{Alternative characterisations of $E$} \label{sec:alternative} In the comments before Theorem~\ref{thm:martingale} we argued that in the non-centred case a natural family of embeddings was those which first involved waiting for the process to hit $\overline{\nu}$ and then to embed $\nu$ in $Y$ started at $\overline{\nu}$. For a stopping rule $\tau$ as given in (\ref{eqn:taudef}) we have from the analysis of the centred case that \begin{equation} \label{eqn:altE1}
\mathbb{E}[\tau] = \mathbb{E}^y[H_{\overline{\nu}}] + E_Y(\overline{\nu};\nu)
\end{equation}
Now we want to show that the right hand side of (\ref{eqn:altE1}) is equivalent to the expression given in (\ref{eqn:Edefu}).
More generally, for $v \in [\overline{\nu},y]$ we could imagine waiting for the process to hit $v$ and then using a minimal embedding time to embed $\nu$ in $Y$ started at $v$. Then we find \begin{equation} \label{eqn:altE2}
\mathbb{E}[\tau] = \mathbb{E}^y[H_v] + E_{Y}(v;\nu)
\end{equation}
We want to show that the right-hand side of (\ref{eqn:altE2}) does not depend on $v$.
\begin{lemma} For $v \in [\overline{\nu},y]$,
\[ G(v) = 2\int_v^\infty ( y \wedge z - v ) m(dz) + \int q_{v}(z) \nu(dz) + (v - \overline{\nu}) \lim_{n \uparrow \infty} \frac{q_v(v+n)}{n} \]
does not depend on $v$. In particular, for all $v \in [\overline{\nu},y]$,
$E_{Y}(y;\nu) = \mathbb{E}^y[H_v] + E_{Y}(v;\nu)$. If this expression is finite for any (and then all) $v \in [\overline{\nu},y]$ we have that $\mathbb{E}[\tau] = \mathbb{E}^y[H_v] + E_Y(v;\nu)$.
\end{lemma}
\begin{proof}
For any $u,v$,
\[ q_u(z) = q_u(v) + q_v(z) + q'_u(v)(z-v) . \]
Then, with $u = {\overline{\nu}}$, $q_v(z) = q_{\overline{\nu}}(z) - q_{\overline{\nu}}(v) + q'_{\overline{\nu}}(v)(v-z)$ and \begin{eqnarray*}
G(v) &=& 2\int_v^y (z - v ) m(dz) + 2(y-v) \int_y^\infty m(dz) + \int q_{\overline{\nu}}(z) \nu(dz) - q_{\overline{\nu}}(v) \\
& & \hspace{5mm} + (v - \overline{\nu})q'_{\overline{\nu}}(v) + 2(v - \overline{\nu}) \int_v^\infty m(dz) \\
&=& 2(y- \overline{\nu}) \int_y^\infty m(dz) + \int q_{\overline{\nu}}(z) \nu(dz) + 2\int_{\overline{\nu}}^y (z - \overline{\nu}) m(dz) \end{eqnarray*} which does not depend on $v$. \end{proof}
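Two consistency checks may help orient the reader (they are immediate from the definitions). First, at $v = y$ the first term of $G$ vanishes and
\[ G(y) = \int q_y(z)\, \nu(dz) + (y - \overline{\nu}) \lim_{n \uparrow \infty} \frac{q_y(y+n)}{n} = E_Y(y;\nu) , \]
so the lemma indeed identifies $E_Y(y;\nu)$ with $\mathbb{E}^y[H_v] + E_Y(v;\nu)$. Second, in the Brownian case $m(dz) = dz$ we have $q_u(z) = (z-u)^2$ and $q'_u(v) = 2(v-u)$, and the identity used in the proof is just
\[ (z-u)^2 = (v-u)^2 + (z-v)^2 + 2(v-u)(z-v) . \]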
\section{Extensions}
\subsection{Non-integrable target laws} We have seen that if $\nu \in L^1$ then there exists an integrable embedding of $\nu$ if $\mathbb{E}^y[H_{\overline{\nu}}]$ and $\int q_{\overline{\nu}}(x) \nu(dx)$ are both finite. In this short section we argue that if $Y_0= y \in (\ell,r)$ and $\nu \notin L^1$ then there does not exist an integrable embedding of $\nu$.
Note first that $q=q_y$ is non-negative and convex, and hence $q(x) \geq \alpha |x-y| - \beta$ for some pair of finite positive constants $\alpha, \beta$; in particular $\nu \notin L^1$ implies $\int q(z)\nu(dz) = \infty$. Let $\sigma$ be any embedding of $\nu$ and let $T_n$ be a localising sequence for the local martingale $\{ q(Y_{t \wedge \sigma}) - (t \wedge \sigma) \}_{t \geq 0}$. Then, by an argument similar to that in the proof of Lemma~\ref{lem:minimalLR}, \[ \mathbb{E}[\sigma] = \lim_n \mathbb{E}[ \sigma \wedge T_n] = \liminf \mathbb{E}[q(Y_{\sigma \wedge T_n})] \geq \mathbb{E}[ \liminf q(Y_{\sigma \wedge T_n})] = \int q(z) \nu(dz) = \infty. \] Hence no embedding of $\nu$ is integrable.
\subsection{Diffusions started at entrance points} \label{ssec:notL1}
In the proofs of the main results we assumed that $Y$ started at an interior point in $(\ell,r)$. Now we consider what happens if we start at a boundary point. The motivating example is a Bessel process in dimension 3 started at zero.
After a change of scale we may assume that we are working with a diffusion in natural scale. Then, if the boundary point is finite and an entrance point, it must also be an exit point (for terminology, see Borodin and Salminen~\cite[Section II.6]{BorodinSalminen:02}).
We have assumed exit boundary points to be absorbing. It follows that an entrance point must be infinite; without loss of generality we assume that $Y$ starts at $+\infty$ and that $I = (\ell,\infty)$ where we may have $\ell = -\infty$.
So suppose that $\infty$ is an entrance-not-exit point. In particular, $\mathbb{E}^{\infty}[H_z]<\infty$ for some $z \in (\ell, \infty)$ or equivalently $\int^\infty z m(dz) < \infty$. Suppose that $Y$ starts at $\infty$. We show that the results of previous sections pass over to this case with a small modification.
We suppose that the initial $\sigma$-algebra $\mathcal F_0$ is sufficiently rich as to include an independent, uniformly distributed random variable.
\begin{theorem} \label{thm:entrance} Suppose $Y$ is a diffusion in natural scale on $I = (\ell,\infty)$ and suppose $Y_0=\infty$, where $\infty$ is an entrance point. Then there exists an integrable embedding of $\nu$ if and only if $E_Y(\infty;\nu)$ defined by \begin{equation} \label{eqn:Enatural}
E_Y(\infty;\nu):= 2 \int^\infty_\ell \nu(dx) \int_x^\infty m(dz) (z-x)
\end{equation} is finite. Furthermore, if there exists an integrable embedding, then every minimal embedding $\sigma$ has $\mathbb{E}[\sigma] =E_Y(\infty;\nu)$.
\end{theorem}
\begin{remark} \label{rem:entrance} Note that $E_Y(\infty;\nu)$ can be rewritten as \[ E_Y(\infty;\nu) = 2 \int^\infty_\ell m(dz) \int_\ell^z \nu(dx) (z-x) \]
It follows that if $\ell = - \infty$ and $\int_{-\infty}^{0} |x| \nu(dx) = \infty$ then $E_Y(\infty;\nu) = \infty$.
However, if $\nu$ has support in $[L,\infty]$ for $L>\ell$ or if $\int_{\ell} m(dz)$ and $\int^0_\ell |x| \nu(dx)$ are finite (the latter is always true if $\ell > -\infty$), then it is possible to have $\nu \notin L^1$ and still have $E_Y(\infty;\nu)<\infty$ and the existence of integrable embeddings. For example, suppose $Y$ solves $dY_t = Y_t^2 dB_t$ subject to $Y_0 = \infty$ and suppose $\nu$ is a measure on $(0,\infty)$ with $\int_0^\infty x \nu(dx) = \infty$ and $\int_0^\infty \nu(dx) / x^2 < \infty$, e.g.\ $\nu([x,\infty))= x^{-1} \wedge 1$. Then $E_Y(\infty;\nu) = \int x^{-2} \nu(dx)/3 < \infty$ but $\nu \notin L^1$.
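The constant $1/3$ in this example follows from a short computation with the speed measure (included for convenience): for $dY_t = Y_t^2 dB_t$ we have $m(dz) = dz/z^4$, and $\int_x^\infty (z-x) z^{-4}\, dz = \frac{1}{2x^2} - \frac{1}{3x^2} = \frac{1}{6x^2}$, so that
\[ E_Y(\infty;\nu) = 2 \int_0^\infty \nu(dx) \int_x^\infty (z-x)\, \frac{dz}{z^4} = \frac{1}{3} \int_0^\infty \frac{\nu(dx)}{x^2} . \]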
Suppose instead that $\nu \in L^1$. Then as in Section~\ref{sec:alternative} we can rewrite $E_Y(\infty;\nu)$ as \[ E_Y(\infty;\nu) = 2 \int_{\overline{\nu}}^\infty (y-\overline{\nu}) m(dy) + \int q_{\overline{\nu}}(y) \nu(dy). \] This last expression has a clear interpretation as the sum of $\mathbb{E}^\infty[H_{\overline{\nu}}]$ and the expected time to embed the law $\nu$ in $Y$ started at $\overline{\nu}$ using a minimal embedding. It follows that if $\nu \in L^1$ and there exists an integrable embedding of $\nu$ in $Y$ started at $\overline{\nu}$ then the stopping time `run until $Y$ hits the mean, and then use a minimal embedding to embed $\nu$ in $Y$ started from the mean' is a minimal and integrable embedding. \end{remark}
\begin{proof}[Proof of Theorem~\ref{thm:entrance}] Suppose first that $E_Y(\infty;\nu)$ is finite. By assumption $\mathcal F_0$ is sufficiently rich as to include a uniform random variable. (Note that if $\nu$ includes an atom at $\infty$ independent randomisation of this form will always be necessary to construct an embedding.) Then there exists a random variable $Z$ with law $\nu$ and setting $\sigma = \inf \{ u \geq 0; Y_u \leq Z \}$ we have $Y_\sigma \sim \nu$ and \[ \mathbb{E}[\sigma] = \int \nu(dz) \mathbb{E}^{\infty}[H^Y_z] = 2 \int \nu(dz) \int_z^\infty (y-z) m(dy) = E_Y(\infty;\nu) . \]
If $\nu \in L^1$ then we do not need independent randomisation. In this case both $\mathbb{E}^{\infty}[H_{\overline{\nu}}]$ and $\int q_{\overline{\nu}}(y) \nu(dy)$ are finite (since $E_Y(\infty;\nu)$ is). Then there exists a minimal and integrable embedding $\tau^{\overline{\nu},\nu}$ of $\nu$ in $Y$ started at $\overline{\nu}$ and \[ \tau = H_{\overline{\nu}} + \tau^{\overline{\nu},\nu} \circ \Theta_{H_{\overline{\nu}}} \] is an integrable embedding.
Now suppose there is an integrable embedding. Then there exists an integrable minimal embedding, $\sigma$ say. The remaining parts of the theorem will follow if we can show that $\mathbb{E}[\sigma] = E_Y(\infty;\nu)$.
So, suppose $\sigma$ is integrable and minimal. Since $\infty$ is an entrance boundary, there exists $N$ such that $\mathbb{E}^{\infty}[H_N]< \infty$.
For $n \geq N$ let $\tilde{\sigma}_n = \max \{ \sigma, H_{n} \}$ and let $\tilde{\nu}_n = \mathcal{L}(Y_{\tilde{\sigma}_n})$. Write $\tilde{\sigma}_n = H_{n} + \hat{\sigma}_n$ where $\hat{\sigma}_n = (\sigma - H_{n})^+$ and let $\hat{\nu}_n = \mathcal{L}(Y^{n}_{\hat{\sigma}_n})$ where here the superscript reflects the fact that $Y$ starts at $n$ (so that $\hat{\nu}_n = \tilde{\nu}_n$).
First we argue that for each $n \geq N$, $\hat{\sigma}_n$ is minimal for $\hat{\nu}_n$ in $Y$ started at $n$. Suppose $\hat{\rho}_n \leq \hat{\sigma}_n$ also embeds $\hat{\nu}_n$ in $Y$ started from $n$. If $\rho$ is defined by \[ \rho = \left\{ \begin{array}{ll} \sigma & \sigma < H_{n} \\
H_{n} + \hat{\rho}_n & \sigma \geq H_n
\end{array} \right. \] then $\rho \leq \sigma$ and $Y_\rho \sim Y_\sigma$. By minimality of $\sigma$ we conclude that $\rho = \sigma$ and hence $\hat{\rho}_n = \hat{\sigma}_n$.
Since $\hat{\sigma}_n$ is minimal (and integrable, since $\sigma$ is integrable and $\mathbb{E}^{\infty}[H_n] \leq \mathbb{E}^{\infty}[H_N] < \infty$) we have that $\mathbb{E}^n [ \hat{\sigma}_n ] = E_Y(n;\tilde{\nu}_n) = \int q_n(x) \tilde{\nu}_n(dx) + 2(n- \overline{\tilde{\nu}_n}) m((n,\infty))$. Then \begin{eqnarray} \mathbb{E}^\infty [\sigma] & = & \lim_n \mathbb{E}[ (\sigma - H_n)^+] \nonumber \\
& = & 2 \lim_n \left\{ \int_{\ell}^{\infty} \tilde{\nu}_n(dx) \int_n^x m(dz) (x-z) + (n- \overline{\tilde{\nu}_n}) m((n,\infty)) \right\} \label{eqn:entrance} \end{eqnarray}
Since $\infty$ is an entrance boundary $\int^{\infty} y m(dy) < \infty$ and hence $\lim_n n\, m((n,\infty)) = 0$. Further, since $\tilde{\nu}_n = \nu$ on $(-\infty,0)$, $\int_{-\infty}^0 |x| \tilde{\nu}_n(dx) < \infty$ if and only if $\int_{-\infty}^0 |x| \nu(dx) < \infty$. But, if $\int_{-\infty}^0 |x| \nu(dx) = \infty$, then for any embedding $\rho$ of $\nu$ in $Y$ started at $\infty$ we have \[ \mathbb{E}^\infty[\rho] > \mathbb{E}^\infty[(\rho - H_0)^+] > \int_{-\infty}^0 \nu(dx) q_0(x) = \infty \]
and hence there cannot be an integrable embedding of $\nu$. Since such an embedding exists by hypothesis, we must have $\int_{-\infty}^0 |x| \nu(dx)<\infty$. Then $\tilde{\nu}_n \in L^1$ and $\overline{\tilde{\nu}_n} \uparrow \overline{\nu} \in (-\infty,\infty]$. In particular, $(n- \overline{\tilde{\nu}_n})\, m((n,\infty)) \rightarrow 0$ as $n \rightarrow \infty$.
For the first term in (\ref{eqn:entrance}), since $\tilde{\nu}_n = \nu$ on $(\ell, n)$, \begin{eqnarray*} 2\lim_n \left\{ \int_{\ell}^{\infty} \tilde{\nu}_n(dx) \int_n^x m(dz) (x-z) \right\} & \geq & 2\lim_n \left\{ \int_{\ell}^{n} \nu(dx) \int^n_x m(dz) (z-x) \right\} \\
& = & 2\int_{\ell}^{\infty} \nu(dx) \int^\infty_x m(dz) (z-x), \end{eqnarray*} and conversely, since $\tilde{\nu}_n \leq \nu$ on $(n,\infty)$, \begin{eqnarray*} 2\lim_n \left\{ \int_{\ell}^{\infty} \tilde{\nu}_n(dx) \int_n^x m(dz) (x-z) \right\}& \leq & 2 \lim_n \left\{ \int_{\ell}^{\infty} \nu(dx) \int_n^x m(dz) (x-z) \right\} \\ & = & 2 \int_{\ell}^{\infty} \nu(dx) \int^\infty_x m(dz) (z-x) . \end{eqnarray*} Hence $\mathbb{E}^\infty[\sigma] = E_Y(\infty;\nu)$, as required.
\end{proof}
\section{Recovering results for general diffusions}
Let $X = (X_t)_{t \geq 0}$ be a time-homogeneous one-dimensional diffusion with state space $I^X$ and suppose $X$ solves $dX_t = a(X_t)dW_t + b(X_t)dt$ subject to $X_0=x$. Then provided $b/a^2$ and $1/a^2$ are locally integrable, $X$ has scale function $s=s^X$ and speed measure $m^X$ given by \[ s'(z) = \exp \left( - \int^z \frac{2 b(v)}{a(v)^2} dv \right), \hspace{10mm} m^X(dz) = \frac{dz}{a(z)^2 s'(z)} .\] Now let $Y = (Y_t)_{t \geq 0}$ be given by $Y_t = s^X(X_t)$. Then $Y$ is a diffusion in natural scale with state space $I = s^X(I^X)$ and speed measure \[ m(dy) = m^X( d s^{-1}(y)) = \frac{dy}{a(s^{-1}(y))^2 s'(s^{-1}(y))^2} ,\] so that for $[L,R] \subset I$, $m((L,R)) = m^X((s^{-1}(L), s^{-1}(R)))$.
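A small numerical illustration (ours, not the paper's) of the identity $m((L,R)) = m^X((s^{-1}(L), s^{-1}(R)))$, using the Bessel(3) process treated in the example below: there $s(x) = -1/x$, the speed density of $X = P$ is $p^2$, and the speed density of $Y = s(P)$ is $y^{-4}$.

```python
def simpson(f, a, b, n=2000):
    # composite Simpson rule on [a, b] (n even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

# Bessel(3): s(x) = -1/x maps (0, infty) onto I = (-infty, 0)
s_inv = lambda y: -1.0 / y
m_Y = lambda L, R: simpson(lambda y: y ** -4, L, R)   # speed measure of Y
m_X = lambda L, R: simpson(lambda p: p ** 2, L, R)    # speed measure of X = P

lhs = m_Y(-1.0, -0.5)                      # = int_{-1}^{-1/2} y^{-4} dy = 7/3
rhs = m_X(s_inv(-1.0), s_inv(-0.5))        # = int_1^2 p^2 dp = 7/3
assert abs(lhs - 7.0 / 3.0) < 1e-9
assert abs(lhs - rhs) < 1e-9
```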
Then $X_\tau \sim \mu$ is equivalent to $Y_\tau \sim \nu$ where $\nu(A) = \mu \circ s^{-1}(A)$.
We have that $\overline{\nu} := \int_I v \nu(dv) = \int_{I^X} s(z) \mu(dz)$ and $\int_I q_{s(x)}(v) \nu(dv) = \int_{I^X} q_{s(x)}(s(z)) \mu(dz)$. Moreover, $q_y(z) = 2 \int_y^z (z-w) m(dw) = 2 \int_{s^{-1}(y)}^{s^{-1}(z)} ( z - s(v) ) m^X(dv)$.
For definiteness suppose $s(x) \geq \overline{\nu}$, and denote by $r$ the upper limit of $I$ and by $r^X$ the upper limit of $I^X$. Then $r=\infty$ and \begin{eqnarray*} E_Y(s(x), \nu) & = & \int_I q_{s(x)}(z)\nu(dz) + 2(s(x) - \overline{\nu})m((s(x),r)) \\ & = & \int_{I^X} q_{s(x)}(s(z)) \mu(dz) + 2(s(x) - \overline{\nu})m^X((x,r^X)) \\ & = & 2 \int_{I^X} \left\{ \int_x^z (s(z)-s(v)) m^X(dv) \right\} \mu(dz) + 2(s(x) - \overline{\nu})m^X((x,r^X)) \end{eqnarray*}
In general therefore, for $x \in \mathrm{int}(I^X)$ set
$E_X(x;\mu) = \infty$ if $\int_{I^X} |s(z)| \mu(dz) = \infty$ and otherwise \begin{eqnarray} \nonumber E_X(x;\mu) & = & 2 \int_{I^X} \left\{ \int_x^z (s(z)-s(v)) m^X(dv) \right\} \mu(dz) \\
\label{eqn:EXdef} && \hspace{5mm} + 2|s(x) - \overline{\nu}| \left(m^X((x,r^X)) \mathcal I_{ \{ s(x) > \overline{\nu} \} } + m^X((l^X,x)) \mathcal I_{ \{ s(x) < \overline{\nu} \} } \right) \end{eqnarray} where $\mathcal I$ is the indicator function. As is the case for diffusions in natural scale, there is a second representation of $E_X$ in terms of the expected value of first hitting time of the weighted mean of the target law together with the expected value of an embedding in a process started at the weighted mean, namely \begin{equation} \label{eqn:EXdef2} E_X(x;\mu) = \mathbb{E}^{x}[H^X_{s^{-1}(\overline{\nu})}] + \int q_{\overline{\nu}}(s(z)) \mu(dz). \end{equation} Note that in this expression $q$ is defined for the transformed process in natural scale.
\begin{proof}[Proof of Theorem~\ref{thm:general}] $\tau$ is minimal for $\mu$ in $X$ started at $x$ if and only if $\tau$ is minimal for $\nu$ in $Y$ started at $y=s(x)$. Furthermore, $\tau$ is an integrable embedding of $\mu$ if and only if $\tau$ is an integrable embedding of $\nu$. Then $\mathbb{E}[\tau]=E_Y(s(x);\nu) = E_X(x;\mu)$, where $E_X$ is defined in either (\ref{eqn:EXdef}) or (\ref{eqn:EXdef2}). \end{proof}
\begin{example} Suppose $P$ is a Bessel process of dimension 3, started at $p>0$. Then the scale function is $s(x)=-x^{-1}$ and $I=(-\infty,0)$. The speed measure is $m^P(dp) = p^2 dp$. There exists an embedding of $\mu$ in $P$ if and only if $\overline{\nu} \geq - p^{-1}$ where $\overline{\nu} = - \int_0^\infty x^{-1} \mu(dx)$. Further, there exists an integrable embedding of $\mu$ if and only if $E_P(p;\mu)<\infty$ where \begin{eqnarray*} E_P(p;\mu) & = & \int_0^{\infty} \mu(dz) 2 \int_p^z \left( \frac{1}{v} - \frac{1}{z} \right)v^2 dv + 2 \left( \frac{1}{p} + \overline{\nu} \right) \frac{p^3}{3} \\ & = & \frac{1}{3} \int_{0}^{\infty} z^2 \mu(dz) - \frac{p^2}{3}. \end{eqnarray*} \end{example}
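As a numerical cross-check of the example (ours, with a hypothetical choice of target law), take $\mu$ uniform on $[1,3]$ and $p=1$; then $\overline{\nu} = -(\log 3)/2 \geq -1/p$, so an embedding exists, and both expressions for $E_P(p;\mu)$ should agree, giving $\tfrac13\cdot\tfrac{13}{3} - \tfrac13 = 10/9$.

```python
def simpson(f, a, b, n=400):
    # composite Simpson rule on [a, b] (n even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

p = 1.0
# hypothetical target law mu = Uniform[1, 3] (density 1/2)
nu_bar = -simpson(lambda z: 0.5 / z, 1.0, 3.0)   # = -(ln 3)/2 >= -1/p

def inner(z):
    # 2 * int_p^z (1/v - 1/z) v^2 dv
    return 2.0 * simpson(lambda v: (1.0 / v - 1.0 / z) * v ** 2, p, z)

E_P = (simpson(lambda z: 0.5 * inner(z), 1.0, 3.0)
       + 2.0 * (1.0 / p + nu_bar) * p ** 3 / 3.0)

second_moment = simpson(lambda z: 0.5 * z ** 2, 1.0, 3.0)   # int z^2 mu(dz) = 13/3
assert abs(E_P - (second_moment / 3.0 - p ** 2 / 3.0)) < 1e-9   # both equal 10/9
```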
\begin{example}
Suppose $X$ is given by $X_t=aW_t+ bt$ where $b>0$ and $W$ is standard Brownian motion, null at zero. Then $s(z) = - e^{-2bz/a^2}$ and $m^X(dz) = dz\, e^{2bz/a^2}/2b$. Set $\overline{\nu} = - \int_{\mathbb{R}} e^{-2bz/a^2} \mu(dz)$ and suppose $\overline{\nu} \in [-1,0]$; otherwise there is no embedding. Then $s^{-1}(\overline{\nu}) = - \frac{a^2}{2b} \log | \overline{\nu}|$ and $\exp( - \frac{2b}{a^2} s^{-1}(\overline{\nu})) = |\overline{\nu}|$. Hence \begin{eqnarray*} \int \mu(dz) q_{\overline{\nu}}(s(z)) & = & \int \mu(dz) 2 \int_{s^{-1}(\overline{\nu})}^z (s(z) - s(v)) m^X(dv) \\ & = & \int \mu(dz) 2 \int_{s^{-1}(\overline{\nu})}^z (e^{-2bv/a^2} - e^{-2bz/a^2}) \frac{dv}{2b} e^{2bv/a^2} \\ & = & \int \mu(dz) 2 \int_{s^{-1}(\overline{\nu})}^z (1 - e^{2b(v-z)/a^2}) \frac{dv}{2b} \\ & = & \int \mu(dz) \left\{ \frac{z}{b} - \frac{s^{-1}(\overline{\nu})}{b} - \frac{a^2}{2 b^2} + \frac{a^2}{2b^2} e^{2b(s^{-1}(\overline{\nu})-z)/a^2} \right\} \\
& = & \frac{1}{b} \int z \mu(dz) + \frac{a^2}{2b^2} \log |\overline{\nu}| - \frac{a^2}{2b^2} +\frac{a^2}{2b^2} \frac{1}{|\overline{\nu}|} \int e^{- 2bz/a^2} \mu(dz) \\
& = & \frac{1}{b} \int z \mu(dz) + \frac{a^2}{2b^2} \log |\overline{\nu}|. \end{eqnarray*}
Suppose $X_0=x$. For $w>x$ we have $\mathbb{E}^x[H^X_w] = (w-x)/b$. Then, using (\ref{eqn:EXdef2}), \[ E_X(x;\mu) = \mathbb{E}^x[H^X_{s^{-1}(\overline{\nu})}] + \int q_{\overline{\nu}}(s(z)) \mu(dz) = \frac{1}{b} \left( \int z \mu(dz) - x \right). \] Recall from Proposition~\ref{prop:minimal} that every embedding of $\mu$ is minimal. Then, for drifting Brownian motion, every embedding of $\mu$ has the same expected value. \end{example}
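The chain of identities above can be checked numerically (our sketch; the choices $\mu = \mathrm{Uniform}[0,2]$, $a=b=1$, $x=0$ are hypothetical). Assembling (\ref{eqn:EXdef2}) from quadratures should reproduce $(\int z\,\mu(dz) - x)/b = 1$:

```python
import math

def simpson(f, a, b, n=400):
    # composite Simpson rule on [a, b] (n even; works with reversed limits too)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

aa, bb, x0 = 1.0, 1.0, 0.0                     # X_t = aa*W_t + bb*t, X_0 = x0
scale = lambda z: -math.exp(-2.0 * bb * z / aa ** 2)

nu_bar = simpson(lambda z: 0.5 * scale(z), 0.0, 2.0)    # in [-1, 0], so an embedding exists
w = -(aa ** 2 / (2.0 * bb)) * math.log(abs(nu_bar))     # w = s^{-1}(nu_bar)

def q_at(z):
    # q_{nu_bar}(s(z)) = 2 * int_w^z (1 - e^{2 bb (v - z)/aa^2}) dv/(2 bb)
    return 2.0 * simpson(
        lambda v: (1.0 - math.exp(2.0 * bb * (v - z) / aa ** 2)) / (2.0 * bb), w, z)

E_X = (w - x0) / bb + simpson(lambda z: 0.5 * q_at(z), 0.0, 2.0)
mean_mu = simpson(lambda z: 0.5 * z, 0.0, 2.0)          # int z mu(dz) = 1
assert abs(E_X - (mean_mu - x0) / bb) < 1e-6
```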
\begin{remark} Drifting Brownian motion was the subject of Grandits and Falkner~\cite{GranditsFalkner:00}, and the conclusion of the previous example is contained in their Proposition 2.2. Note that in the case $X_t = x + aB_t + bt$, if $\mathbb{E}[\tau]< \infty$ then $\mathbb{E}[X_\tau] - x = b \mathbb{E}[\tau]$. Hence, for an embedding $\tau$ of $\mu$ the result $\mathbb{E}[\tau]= E_X(x;\mu) = (\int z \mu(dz) - x)/b$ is not unexpected, and can be proved directly by other means. \end{remark}
\section{Minimality and Integrability of the Az\'ema-Yor embedding} Az\'ema and Yor~\cite{AzemaYor:79a,AzemaYor:79b} (see also Rogers and Williams~\cite[Theorem VI.51.6]{RogersWilliams:00b} and Revuz and Yor~\cite[Theorem VI.5.4]{RevuzYor:99}), give an explicit construction of a solution of the SEP for Brownian motion. The original paper~\cite{AzemaYor:79a} assumes the target law is centred and square integrable, but the $L^2$ condition is replaced with a uniform integrability condition in \cite{AzemaYor:79b}, see also \cite{RevuzYor:99}. Az\'ema and Yor~\cite{AzemaYor:79a} also indicate how the results can be extended to diffusions, provided that the process is recurrent and provided that once the process has been transformed into natural scale, the mean of the target law is equal to the initial value of the diffusion.
The Az\'ema-Yor stopping time for a centred target law $\nu$ in Brownian motion $W$ null at zero is \begin{equation} \label{tauAY}
\tau^W_{AY,\nu} = \inf\{ u : W_u \leq \beta_{\nu} (J^W_u) \} \end{equation}
where $J^W$ is the maximum process $J^W_u = \sup_{s \leq u} W_s$, and $\beta_{\nu}$ is the left-continuous inverse barycentre function, i.e.\ $\beta_\nu = b_{\nu}^{-1}$ where for a centred distribution $\eta$, $b_\eta(x) = \mathbb{E}^{Z \sim \eta}[ Z | Z \geq x]$. The Az\'ema-Yor embedding has become one of the canonical solutions of the SEP because it does not involve independent randomisation and because it is possible to give an explicit form for the stopping time. Further, amongst uniformly integrable (or equivalently minimal) solutions of the SEP for Brownian motion, the Az\'ema-Yor solution has the property that it maximises the law of the stopped maximum, i.e.\ for all increasing functions $H$, $\mathbb{E}[H(J^W_\tau)]$ is maximised over minimal embeddings $\tau$ of $\nu$ in $W$ by $\tau^W_{AY,\nu}$.
In the case where $\nu \in L^1$ but $\nu$ is not centred, Pedersen and Peskir~\cite{PedersenPeskir:01} make the simple observation that we can embed $\nu$ by first running the Brownian motion until it hits $\overline{\nu}$ and then embedding $\nu$ in Brownian motion started at $\overline{\nu}$ using the classical centred Az\'ema-Yor embedding, i.e.\ they propose \[ \tau^W_{PP,\nu} = H^W_{\overline{\nu}} + \tau^W_{AY,\nu} \circ \Theta_{H^W_{\overline{\nu}}}. \] However, if the Brownian motion is null at zero, and $\overline{\nu}<0$, then the embedding $\tau^W_{PP,\nu}$ no longer maximises the law of the stopped maximum. Instead Cox and Hobson~\cite{CoxHobson:04} introduce an alternative modification of the Az\'ema-Yor stopping time which does maximise the law of the stopped maximum, and it is this embedding which we will study here. In fact the expected value of any embedding of the form $H^W_{\overline{\nu}} + \tau^{\overline{\nu},\nu} \circ \Theta_{H^W_{\overline{\nu}}}$ can be found very easily, and our aim here is to analyse an embedding which is not of this form.
Suppose $W_0 = w$ and $\nu \in L^1$. Define $D_\nu(x) = \mathbb{E}^{Z \sim \nu}[(Z-x)^+] + (w - \overline{\nu})^+$ and for $z \geq w$ set \begin{equation} \label{eqn:betadef}
\beta_\nu(z) = \arg \inf_{v<z} \left\{ \frac{D_\nu(v)}{z-v} \right\}.
\end{equation} (Here the $\arg \inf$ may not be uniquely defined, but we can make the choice of $\beta_\nu$ unique by adding a left-continuity requirement.) Then the Cox-Hobson extension of the Az\'ema-Yor embedding is to set \begin{equation} \label{eqn:tauCH}
\tau^W_{CH,\nu} = \inf\{ u : W_u \leq \beta_{\nu} (J^W_u) \} \end{equation} Note that if $\overline{\nu} \geq w$, then for $z \in [w,\overline{\nu}]$ we have $\beta_\nu(z) = - \infty$. In this case the Cox-Hobson and Pedersen-Peskir embeddings are identical. However, if $\overline{\nu}<w$ then the Cox-Hobson and Pedersen-Peskir embeddings are distinct.
To ease the exposition we assume that $\nu$ has a density $\rho$. (The general case can be recovered by approximation, or by taking careful consideration of atoms.) Then $b = \beta_{\nu}^{-1}$ solves \begin{equation} \label{eqn:defbarycentre} (b(y) - y) \nu((y,\infty)) = D_{\nu}(y), \end{equation} $b$ is differentiable and $\nu((y,\infty))b'(y) = (b(y)-y) \rho(y)$. Then, writing $\tau$ for $\tau^W_{CH,\nu}$ and $L(\nu)$ for the lower limit of the support of $\nu$ and using excursion-theoretic arguments, \begin{eqnarray*} \mathbb{P}(W_\tau \geq y) = \mathbb{P}(J^W_\tau \geq y) & = & \exp \left( - \int_w^{b(y)} \frac{dz}{z - \beta(z)} \right) \\ & = & \exp \left( - \int_{w \vee {\overline{\nu}}}^{b(y)} \frac{dz}{z - \beta(z)} \right) \\ & = & \exp \left( - \int_{L(\nu)}^{y} \frac{b'(v)}{b(v) - v} dv \right) \\ & = & \exp \left( - \int_{L(\nu)}^{y} \frac{\rho(v)}{\nu((v,\infty))} dv \right) = \nu((y,\infty)) \end{eqnarray*} and hence $\tau^W_{CH,\nu}$ is an embedding of $\nu$.
Cox and Hobson~\cite{CoxHobson:06} prove that the embedding in (\ref{eqn:tauCH}) is minimal. A by-product of the subsequent arguments in this section is a proof of minimality by different means. Note that this is only relevant in the case $I=\mathbb{R}$, since otherwise every embedding is minimal.
Let $Y$ be a regular diffusion in natural scale. Then by the Dambis-Dubins-Schwarz theorem $Y$ can be written as a time-change of Brownian motion: $Y_t = W_{[Y]_t}$ for some Brownian motion (on a filtration and probability space constructed from the original space supporting $Y$). Then if we set $Q = [Y]^{-1}$ we have $W_t = Y_{Q_t}$. Conversely, let $W$ be Brownian motion and let $(L^W_t(z))_{t\geq 0,z\in \mathbb{R}}$ be its family of local times. Given a measure $m$ on $I$ (with a strictly positive density with respect to Lebesgue measure), set $A_s = \int_I m(dz) L^W_s(z)$. Then $A$ is strictly increasing and continuous (at least until $W$ hits an endpoint of $I$) and we can define an inverse $\Gamma = A^{-1}$. Finally set $Y_t = W_{\Gamma_t}$; then $Y$ is a diffusion in natural scale with speed measure $m$.
It follows that if $\tau$ is a solution of the SEP for $\nu$ in $W$ then $Q_\tau$ is a solution of the SEP for $\nu$ in $Y$. Similarly, if $\sigma$ is a solution of the SEP in $Y$, then $\Gamma_\sigma$ is a solution of the SEP in $W$. Hence there is a 1-1 correspondence between solutions of the SEP for $\nu$ in $W$ and solutions for $\nu$ in $Y$.
Recall that we are supposing that $\nu \in L^1$. (Note that if $\nu \notin L^1$ then it is not possible to define $D_\nu(\cdot)$, and the Az\'ema-Yor solution is not defined.) Suppose also that $w>\overline{\nu}$, which is the interesting case in which the Pedersen-Peskir and Cox-Hobson embeddings are distinct. By analogy with (\ref{eqn:tauCH}) define \begin{equation} \label{eqn:tauY}
\tau^Y_{CH,\nu} = \inf\{ u : Y_u \leq \beta_{\nu} (J^Y_u) \} \end{equation} where $\beta_\nu$ is as defined in (\ref{eqn:betadef}). Then $\tau = \tau^Y_{CH,\nu}$ inherits the embedding property from $\tau^W_{CH,\nu}$ and is a solution of the SEP for $\nu$ in $Y$.
Now consider the question of minimality. It is clear that $\tau^W_{CH,\nu}$ is minimal for $\nu$ in $W$ if and only if $\tau$ is minimal for $\nu$ in $Y$. If $\overline{\nu} \neq w$ then $\tau^W_{CH,\nu}$ is not integrable, but $\tau$ may be integrable. Further, if $\tau$ is integrable for $\nu$ in $Y$ started at $w$ and if $E_Y(w;\nu)<\infty$ then $\tau$ is minimal if and only if $\mathbb{E}[\tau] = E_Y(w;\nu)$. In particular, if we choose the diffusion $Y$ so that its speed measure satisfies $m(\mathbb{R})<\infty$, then necessarily $E_Y(w;\nu)<\infty$ (recall $\nu \in L^1$). The minimality of $\tau$ for $\nu$ in $Y$ and hence the minimality of $\tau^W_{CH,\nu}$ will follow if we can show $\mathbb{E}[\tau]=E_Y(w;\nu)$.
We have, (recall $w > \overline{\nu}$), \begin{eqnarray*} \mathbb{E}[\tau] & = & \int_w^\infty dz \mathbb{P}(J^Y_\tau \geq z) \int_{\beta(z)}^z \frac{2(x - \beta(z))}{z-\beta(z)} m(dx) \\
& = & 2 \int_{\mathbb{R}} \frac{b'(y)}{(b(y)-y)} dy \mathbb{P}(Y_\tau \geq y) \int_y^{b(y)} (x-y) m(dx) \\
& = & 2 \int_{\mathbb{R}} \rho(y) dy \int_y^{b(y)} (x-y) m(dx) \\
& = & 2 \int_{-\infty}^w m(dx) \int_{-\infty}^x (x-y) \rho(y) dy + 2 \int_w^\infty m(dx) \int_{\beta(x)}^x(x-y) \rho(y) dy.
\end{eqnarray*} Here we use excursion theory and the fact that \[ \mathbb{E}^x[H^Y_{a,b}] = \frac{2}{b-a} \int_a^b (x\wedge z-a)(b- x \vee z)\, m(dz), \hspace{10mm} a<x<b \] for the first line (see also Pedersen and Peskir~\cite[Theorem 4.1]{PedersenPeskir:98}), $(J^Y_\tau \geq z)= (Y_\tau \geq \beta(z))$ for the second line, $b'(y) = \rho(y) (b(y)-y) / \nu((y,\infty))$ almost everywhere for the third, and the fact that $b(y) \geq w$ for the final line.
Observe that \[ 2\int_{-\infty}^w m(dx) \int_{-\infty}^x (x-y) \nu(dy) = 2\int_{-\infty}^w \nu(dy) \int_{y}^w (x-y) m(dx) = \int_{-\infty}^w \nu(dy) q_{w}(y) . \]
Note that it is no longer true that $b = b_{\nu} =\beta_{\nu}^{-1}$ satisfies $b(y) = \mathbb{E}^{Z \sim \nu}[Z | Z \geq y]$ but rather $b(y) = \{(w - \overline{\nu}) + \int_y^\infty z \nu(dz) \}/\nu((y,\infty))$ and then $(x - \beta(x)) \int_{\beta(x)}^\infty \nu(dz) = w - \overline{\nu} + \int_{\beta(x)}^\infty ( z - \beta(x)) \nu(dz)$. Thus \[ \int_{\beta(x)}^x(x-y) \nu(dy) = \int_{\beta(x)}^\infty (x-y) \nu(dy) + \int_x^\infty (y-x) \nu(dy) = (w - \overline{\nu}) + \int_x^\infty (y - x) \nu(dy), \] and \[ 2 \int_w^\infty m(dx) \int_{\beta(x)}^x(x-y) \nu(dy) = 2 ( w - \overline{\nu}) m((w,\infty)) + \int_{w}^\infty q_{w}(y) \nu(dy). \]
Finally then \[ \mathbb{E}[\tau] = 2 ( w - \overline{\nu}) m((w,\infty)) + \int q_{w}(y) \nu(dy) = E_{Y}(w;\nu), \] which establishes the minimality of $\tau$, and hence of $\tau^W_{CH,\nu}$.
\subsection{An example} In this example we suppose $Y$ is a non-negative, regular, local-martingale diffusion started at 1 with state space unbounded above and absorbed at zero (if $Y$ can hit zero in finite time; otherwise $Y$ is assumed to be transient towards zero). We suppose further that $\nu$ is given by $\nu((y,\infty)) = (1 + \theta y)^{-\phi}$ with $\theta,\phi>0$ and $\phi \geq 1+1/\theta$. If $\phi = 1 + 1/\theta$ then $\overline{\nu}=1$, while if $\phi>1 + 1/\theta$ then $\overline{\nu}<1$. (Note that if $\phi < 1 + 1/\theta$, then $\overline{\nu}>1$ and there is no embedding of $\nu$ in $Y$.)
Our first goal is to find the function $\beta_\nu$ in the Cox-Hobson extension of the Az\'ema-Yor embedding and the associated stopping times. In fact we find a family of solutions parameterised by $\psi \in [\overline{\nu},1]$ for which the stopping time with parameter $\psi$ corresponds to running $Y$ until it hits $\psi$ and then embedding $\nu$ in $Y$ started at $\psi$ using the Cox-Hobson embedding. In particular this stopping time can be written as \[ H^Y_\psi + \tau^\psi \circ \Theta_{H^Y_\psi} \] where \[ \tau^\psi = \inf \{ u \geq 0 ; Y^\psi_u \leq \beta_{\nu, \psi} (J^{Y^\psi}_u) \} \] and $Y^\psi$ satisfies $Y^\psi_0 = \psi$. Here, for $\psi \in [\overline{\nu},1]$, $D_{\nu, \psi}(z) = \mathbb{E}^{Z \sim \nu}[(Z-z)^+] + (\psi - \overline{\nu})$ is given by \[ D_{\nu, \psi}(z) = \psi - \frac{1}{\theta(\phi-1)} \left\{ 1 - (1+\theta z)^{-(\phi - 1)} \right\} \] and $b = \beta_{\nu,\psi}^{-1}$ given by (\ref{eqn:defbarycentre}) has expression \[
b(y) = (1+ \theta y)^\phi \left( \psi - \frac{1}{\theta(\phi-1)} \right) + \frac{\phi y}{\phi - 1} + \frac{1}{\theta(\phi-1)} . \]
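These closed forms can be checked directly (our sketch; the values $\theta=1$, $\phi=3$, $\psi=0.8$ are hypothetical, and then $\overline{\nu} = 1/(\theta(\phi-1)) = 1/2$): the displayed $D_{\nu,\psi}$ should agree with $\mathbb{E}^{Z\sim\nu}[(Z-z)^+] + (\psi-\overline{\nu})$, and $b$ should satisfy the defining relation (\ref{eqn:defbarycentre}).

```python
theta, phi, psi = 1.0, 3.0, 0.8
nu_bar = 1.0 / (theta * (phi - 1.0))     # integral of the tail nu((y,infty)) over (0,infty)

tail = lambda y: (1.0 + theta * y) ** -phi
# E[(Z - y)^+] = int_y^infty nu((z,infty)) dz, computed in closed form
stop_loss = lambda y: (1.0 + theta * y) ** (1.0 - phi) / (theta * (phi - 1.0))

D = lambda y: stop_loss(y) + (psi - nu_bar)
D_displayed = lambda y: psi - (1.0 - (1.0 + theta * y) ** -(phi - 1.0)) / (theta * (phi - 1.0))
b_displayed = lambda y: ((1.0 + theta * y) ** phi * (psi - 1.0 / (theta * (phi - 1.0)))
                         + phi * y / (phi - 1.0) + 1.0 / (theta * (phi - 1.0)))

for y in [0.0, 0.3, 1.0, 2.5, 7.0]:
    assert abs(D(y) - D_displayed(y)) < 1e-12
    # defining relation (b(y) - y) * nu((y,infty)) = D(y)
    assert abs((b_displayed(y) - y) * tail(y) - D(y)) < 1e-12
```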
Now suppose $m(dy) = y^{-2c}dy$ (with $c \in (0,\infty) \setminus \{1/2,1 \}$) so that $Y$ solves $dY = Y^{c} dW$. Then $q = q_1$ is given by $q(x) = \frac{x^{2-2c} - 1}{(1-c)(1-2c)} - \frac{2 (x-1)}{(1-2c)}$. We have \[ E_Y(1;\nu) = \int_0^\infty q(y) \nu(dy) + 2(1 - \overline{\nu}) m((1,\infty)) \] Suppose $\phi > 1 + 1/\theta$. Then $\overline{\nu}<1$ and there exists an integrable embedding of $\nu$ if and only if each of the three integrals \[ \int^{\infty} x^{-2c} dx, \hspace{15mm} \int^\infty x^{2-2c} x^{-(\phi + 1)} dx, \hspace{15mm} \int_0 x^{2-2c} dx \] is finite or equivalently $c>1/2$, $c>1 - \phi/2$ and $c<3/2$. However, since $\phi \geq 1 + 1/\theta>1$ this reduces to $1/2 < c < 3/2$.
If $\phi = 1 + 1/\theta$ then there is no requirement for $m((1,\infty))$ to be finite, the condition $c>1/2$ is not needed and there exists an integrable embedding of $\nu$ if and only if $1 - \phi/2 < c < 3/2$.
These statements are consistent with the case $c=0$ of absorbing Brownian motion. In that case $\nu$ can be embedded in integrable time if and only if $\overline{\nu}=1$ and $\nu \in L^2$, or equivalently $\phi = 1+1/\theta$ and $\phi>2$.
\subsection{An example of Pedersen and Peskir} Pedersen and Peskir~\cite{PedersenPeskir:01} give the expected time for a Bessel process to fall below a constant multiple of the value of its maximum, ie they find $\mathbb{E}[\tau^P_{AY}]$ where $\tau^P_{AY} = \inf \{ u > 0 : P_u \leq \lambda J^P_u \}$ and $\lambda < 1$. They find the answer by solving a differential equation subject to boundary conditions and a minimality principle. We can recover their result directly using our methods.
Let $P$ be a Bessel process of dimension $\alpha \neq 2$, started at 1. Then $Y=P^{2-\alpha}$ is a diffusion in natural scale which solves $dY_t = (2-\alpha) Y_t^b dW_t$ where $b = (1-\alpha)/(2-\alpha)$. Then $m(dy) = (2 -\alpha)^{-2} y^{-2b} dy$ and \[ q_1(y) = \frac{1}{(2 - \alpha)^2} \left[ \frac{y^{2(1-b)}}{(1-b)(1-2b)} + \frac{1}{1-b} - \frac{2y}{1-2b} \right]. \]
Suppose first $\alpha<2$. We find, with $J_u = J^Y_u = \sup_{s \leq u} Y_s$, \[ \tau^P_{AY} = \inf \{ u > 0 : Y^{1/(2-\alpha)}_u \leq \lambda J_u^{1/(2 - \alpha)} \} = \inf \{ u > 0 : Y_u \leq \gamma J_u \} =: \tau^\gamma \] where $\gamma = \lambda^{2-\alpha}$. Then, for $y \geq \gamma$, \[ \mathbb{P}(Y_{\tau^{\gamma}} \geq y) = \mathbb{P}(J_{\tau^\gamma} \geq y/\gamma)
= \exp \left( - \int_1^{y/\gamma} \frac{dj}{(j - \gamma j)} \right)
= (y/\gamma)^{-1/(1-\gamma)}. \]
Then, if $\nu =\mathcal{L}(Y_{\tau^\gamma})$ we have $\overline{\nu}=1$ and \[ \mathbb{E}[ \tau^\gamma ] = \int_{\gamma}^\infty q_1(y) \nu(dy) = \frac{\lambda^\alpha (2 -\alpha)}{\alpha(2 - \alpha \lambda^{\alpha - 2})} - \frac{1}{\alpha}\] provided $\alpha \lambda^{\alpha - 2} < 2$, and otherwise $\tau^\gamma$ is not integrable.
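A numerical check of this formula (ours) in the hypothetical case $\alpha=1$, $\lambda = 3/4$: then $b=0$, $m(dy)=dy$, $q_1(y) = (y-1)^2$, and $\gamma = \lambda$; the law of $Y_{\tau^\gamma}$ has density $k\gamma^k y^{-k-1}$ on $(\gamma,\infty)$ with $k = 1/(1-\gamma)$, and the closed form predicts $\mathbb{E}[\tau^\gamma] = \lambda/(2-\lambda^{-1}) - 1 = 1/8$.

```python
def simpson(f, a, b, n):
    # composite Simpson rule on [a, b] (n even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

lam = 0.75                                  # lambda; alpha = 1, so gamma = lam
gamma = lam
k = 1.0 / (1.0 - gamma)                     # P(Y_tau >= y) = (y/gamma)^{-k}
density = lambda y: k * gamma ** k * y ** (-k - 1.0)
q1 = lambda y: (y - 1.0) ** 2               # q_1 when alpha = 1 (b = 0, m(dy) = dy)

# E[tau^gamma] = int_gamma^infty q_1(y) nu(dy), truncated far out in the tail
E_tau = (simpson(lambda y: q1(y) * density(y), gamma, 20.0, 20000)
         + simpson(lambda y: q1(y) * density(y), 20.0, 5000.0, 20000))

predicted = lam / (2.0 - 1.0 / lam) - 1.0   # displayed formula at alpha = 1: = 1/8
assert abs(E_tau - predicted) < 1e-4
```

The integrability condition $\alpha \lambda^{\alpha-2} < 2$ reads $\lambda > 1/2$ here, which $\lambda = 3/4$ satisfies.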
If $\alpha > 2$ then set $Y = -P^{2-\alpha}$. Then $\tau^P_{AY} = \inf \{ u > 0 : Y_u \leq \gamma {J}_u \} =: \tau^\gamma$ where $\gamma = \lambda^{2-\alpha}>1$. Then for $y \in (-\gamma,0)$, $\mathbb{P}(Y_{\tau^\gamma} \geq y) = (|y|/\gamma)^{1/(\gamma-1)}$. Again we find that $\nu = \mathcal{L}(Y_{\tau^\gamma})$ has mean $-1$, the initial value of $Y$, and, provided $\alpha \lambda^{\alpha - 2} > 2$, \[ \mathbb{E}[\tau^\gamma] = \frac{\lambda^\alpha(\alpha - 2)}{\alpha(\alpha \lambda^{\alpha - 2} - 2)} - \frac{1}{\alpha} ,\] else $\tau^{\gamma}$ is not integrable.
Finally, if $\alpha=2$, we set $Y = \log P$ and then $\tau^P_{AY} = \inf \{u>0: Y_u \leq J_u - \gamma \} =: \tau^\gamma$ where $\gamma = - \log \lambda > 0$. Then, for $y \geq - \gamma$, $\mathbb{P}(Y_\tau \geq y) = e^{-(y/\gamma) - 1}$. Further, $dY_t = e^{-Y_t} dB_t$ and if $P_0=1$ then $Y_0=0$. Then $m(dy) = e^{2y} dy$ and $q_0(y) = \{ e^{2y} - 2y - 1 \}/2$. Hence \[ \mathbb{E}[ \tau^\gamma ] = \int_{-\gamma}^\infty q_0(y) \nu(dy) = \int_{-\gamma}^{\infty} \frac{e^{2y - (y/\gamma) - 1}}{2 \gamma} dy - \frac{1}{2} = \frac{\lambda^2}{2 + 4 \log \lambda} - \frac{1}{2} \] provided $\lambda > e^{-1/2}$, and otherwise $\tau^\gamma$ is not integrable.
\end{document} |
\begin{document}
\title{ \vspace*{6cm} Dynamical Casimir Effect in a one-dimensional uniformly contracting cavity} \author{A.M. Fedotov\thanks{E-mail: fedotov@cea.ru}, Yu.E. Lozovik$^\dag$\thanks{Email: lozovik@isan.troitsk.ru}, N.B. Narozhny\thanks{E-mail: narozhny@theor.mephi.ru}, and A.N. Petrosyan\thanks{Email: petrossian777@yandex.ru}} \address{Moscow Engineering Physics Institute, 115409 Moscow, Russia} \address{$^\dag$Institute of Spectroscopy of Russian Academy of Science, Troitsk, 142190 Moscow region, Russia}
\begin{abstract} We consider particle creation (the dynamical Casimir effect) in a uniformly contracting ideal one-dimensional cavity non-perturbatively. The exact expression for the energy spectrum of the created particles is obtained and its dependence on the parameters of the problem is discussed. Unexpectedly, the number of created particles depends non-monotonically on the duration of the cavity contraction. This is explained by quantum interference between the events of particle creation, which take place only at the moments of acceleration and deceleration of the boundary, while stable particle states exist (and thus no particles are created) during the contraction. \end{abstract}
\pacs{42.50.Ct,42.50.Dv,03.65.-w}
\maketitle
\section{Introduction}
During the last two decades the dynamical Casimir effect (DCE), the effect of photon creation in an empty nonstationary cavity \cite{Moore}, has attracted considerable attention in the literature. Although it is a firm prediction of quantum field theory, the DCE has not yet been observed experimentally. This is probably the main reason for such interest in it, and therefore the search for a realistic scheme for experimental observation of the DCE is a pressing problem of modern quantum electrodynamics.
A nonstationary cavity can be realized in two possible ways: (i) a cavity with moving boundaries, or (ii) a cavity with fixed shape but with {\it varying} boundary conditions. In case (i) a significant number of photons could be created if the velocity of the boundaries is close to the speed of light. The corresponding experimental realization would obviously be a difficult technical problem. Alternatively, it was suggested to use a vibrating cavity and accumulate the effect by tuning the frequency of small mechanical oscillations of the boundaries (which could be induced, e.g., by a sound wave \cite{Acoust}, or due to the piezoelectric effect \cite{Piezo}) in resonance with an eigenfrequency of the cavity \cite{DManko,JR,Law,Dod,KA}, see also the recent review \cite{rev}. In this realization, an extremely high accuracy of frequency tuning must be achieved.
Owing to the remarkable progress in modern laser technology and semiconductor electronics, variant (ii) may be considered more realistic \cite{Yab,Loz1,daccor}. The optical properties of a semiconductor film located on a dielectric base can be changed, e.g., by means of electron-hole plasma creation by a strong femtosecond laser pulse \cite{Loz1}, or due to injection of carriers by a powerful electric pulse \cite{FNL}. In both cases, the variation of the optical properties of the boundary can be fast enough and well controlled.
However, theoretical investigation of the first variant is simpler from a mathematical point of view, at least in the framework of a one-dimensional model. Within the first approach, exact solutions to the problem can sometimes be found for special laws of motion. Besides, one may hope that a very fast displacement of a boundary acts like the "almost instant" creation of a new boundary due to a change of the optical properties of the medium inside the cavity. Therefore we will consider a cavity with moving boundaries in this paper, and will be especially interested in the case of ultrarelativistic motion.
We will present a new exact solution of the DCE problem in a one-dimensional ideal cavity uniformly contracting during the time interval $0<t<T$ and otherwise stationary. A complete set of solutions for a classical field inside a one- or even three-dimensional cavity during uniform contraction has been known for years, see Ref.~\cite{rev}. It was used, for example, in Refs.~\cite{Bordag1,Bordag2} for calculation of the Casimir force acting between the boundaries of a relativistically squeezing or expanding cavity. In contrast, our solution for a one-dimensional quantum field in a uniformly contracting cavity seems to be unknown in the literature, and allows one to study the DCE for this particular case non-perturbatively. We have discovered a new and unexpected effect of periodic dependence of the number of created particles on a variable $\chi_f$ determined by the contraction time $T$ and the velocity of the boundary $v$. It is explained by quantum interference between the events of particle creation, which take place only at the moments of acceleration ($t=0$) and deceleration ($t=T$) of the boundary, while stable particle states exist (and thus no particles are created) during the contraction.
We review the Hamiltonian approach in Section \ref{Hamiltonian preliminaries}. Consideration of a field quantization procedure in a uniformly contracting cavity is presented in Section \ref{quasiparticles}. Section \ref{particle creation} is devoted to calculation of the energy spectrum of particles created in the cavity. The periodic dependence of all measurable quantities on $\chi_f$ is indicated, and the optimal conditions for the DCE are derived. The discussion of the results and conclusions are given in Section \ref{discussion}. We use natural units $\hbar=c=1$ throughout the paper.
\section{Hamiltonian and other preliminaries} \label{Hamiltonian preliminaries}
We will use the Hamiltonian approach, which in application to the DCE was apparently first formulated in Ref.~\cite{RT}, see also \cite{Law,rev}. In this section we begin with a derivation of the Hamiltonian for the one-dimensional problem based on the well-known properties of time-dependent canonical transformations in classical Hamiltonian mechanics (see, e.g., \cite{LLI}). Our approach clarifies the origin of the extra terms in Law's Hamiltonian and is equivalent to the method developed in Law's paper \cite{Law}, at least for quantum systems with a quadratic Hamiltonian.
Consider a massless scalar quantum field $\Phi(x,t)$ in a nonstationary one-dimensional cavity formed by two mirrors. One of the mirrors is fixed at the point $x=0$, while the position of the other depends on time $x=l(t)$. For definiteness, we impose the following boundary conditions at the mirrors: $\Phi(0,t)=\Phi(l(t),t)=0$.
The Hamiltonian for the field reads \begin{equation}\label{Hamiltonian} H=\frac12 \int\limits_0^{l(t)} \left\{\left(\frac{\partial \Phi(x,t)}{\partial t}\right)^2+\left(\frac{\partial \Phi(x,t)}{\partial x}\right)^2\right\}\, dx.\end{equation} The field $\Phi(x,t)$ and the canonical momentum $\Pi(x,t)=\partial\Phi(x,t)/\partial t$ are assumed to obey the equal time commutation relations $[\Phi(x,t),\Pi(x',t)]=i\delta(x-x')$, and hence the Heisenberg equations acquire the form \begin{equation}\label{Hamilt Eqs}\dot\Phi(x,t)=i[H,\Phi(x,t)]=\Pi(x,t),\quad \dot\Pi(x,t)=i[H,\Pi(x,t)]=\frac{\partial^2\Phi(x,t)}{\partial x^2}.\end{equation}
One can easily see that the Fourier coefficients \begin{eqnarray}\label{Q_n} Q_n(t)= \sqrt{\frac{2}{l(t)}}\int\limits_0^{l(t)}\Phi(x,t)\sin\left(\frac{\pi n x}{l(t)}\right)\,dx,\\ \label{P_n} P_n(t)= \sqrt{\frac{2}{l(t)}}\int\limits_0^{l(t)}\Pi(x,t)\sin\left(\frac{\pi n x}{l(t)}\right)\,dx,\end{eqnarray} satisfy the standard commutation relations $[Q_n,P_{n'}]=i\delta_{nn'}$ and $[Q_n,Q_{n'}]=[P_n,P_{n'}]=0$. Introducing the ``instantaneous eigenfrequencies'' $\omega_n(t)=\pi n/l(t)$, and the ``instantaneous destruction and creation'' operators $a_n$, $a_n^\dagger$ \begin{equation}\label{a_nac_n} a_n(t)=\frac{\omega_n(t) Q_n(t)+iP_n(t)}{\sqrt{2\omega_n(t)}},\quad a_n^\dagger(t)=\frac{\omega_n(t) Q_n(t)-iP_n(t)}{\sqrt{2\omega_n(t)}},\end{equation} which satisfy the canonical commutation relations $[a_n,a_{n'}^\dagger]=\delta_{nn'}$, $[a_n,a_{n'}]=[a_n^\dagger,a_{n'}^\dagger]=0$, one can write the expansions for the field and the canonical momentum in the form \begin{equation}\label{a->Phi,Pi}\begin{array}{c}\displaystyle \Phi(x,t)=\sqrt{\frac{2}{l(t)}}\sum\limits_{n}\sin\left(\frac{\pi n x}{l(t)}\right)\frac{a_{n}(t)+a_{n}^\dagger(t)}{\sqrt{2\omega_{n}(t)}},\\ \\ \displaystyle \Pi(x,t)=\sqrt{\frac{2}{l(t)}}\sum\limits_{n}\sin\left(\frac{\pi n x}{l(t)}\right)\left(-i\sqrt{\frac{\omega_{n}(t)}{2}}\right) \left\{a_{n}(t)-a_{n}^\dagger(t)\right\}.\end{array}\end{equation} Since the operators $a_n$, $ia_n^\dagger$ and $Q_n$, $P_n$ satisfy the same commutation relations, Eqs.~(\ref{a->Phi,Pi}) can be considered as a canonical transformation.
One can easily check that this transformation can be defined with a generating functional $F[a,\Phi]$ as \begin{equation}\label{canon_tranf}a_n^\dagger=-i\frac{\partial F[a,\Phi]}{\partial a_n},\quad \Pi(x,t)=-\frac{\delta F[a,\Phi]}{\delta\Phi(x,t)},\end{equation} \begin{equation}\label{F} F[a,\Phi]=-\frac{i}{2}\sum\limits_n\left\{\omega_n(t) Q_n^2[\Phi,t]+a_n^2-2\sqrt{2\omega_n(t)}Q_n[\Phi,t] a_n\right\}.\end{equation} The order ambiguity in the last term of Eq.~(\ref{F}) can give rise only to an additive indefinite c-number constant, and thus is insignificant.
For the stationary case ($l(t)={\rm const}$) the generating functional does not depend on time explicitly, so that the Hamiltonian remains unchanged under the canonical transformation (\ref{canon_tranf}),(\ref{F}). In terms of new variables it acquires the form $H=\sum_n \omega_n a_n^\dagger a_n$. If the cavity is nonstationary, and hence the generating functional explicitly depends on time, the Hamiltonian changes, compare \cite{LLI}. The operators $a_n$ satisfy the equations $\dot a_n=i[H',a_n]$, where the new Hamiltonian is $H'=H-\partial F/\partial t$.
For the case of a uniformly contracting cavity \begin{equation}\label{l(t)}l(t)=\left\{\begin{array}{ll}\displaystyle l_i, &\displaystyle t<0,\\ \displaystyle l_i-vt,&\displaystyle 0<t<T,\\ \displaystyle l_f,&\displaystyle t>T,\end{array}\right.\end{equation} where $T=(l_i-l_f)/v$ is the duration of contraction and $v>0$ is the velocity of the moving boundary, we obtain at $0<t<T$ \begin{equation}\label{Lambda} H'=\pi \Lambda/l(t),\quad \Lambda=\sum\limits_n \left\{n{a}_n^{\dagger} a_n- i\frac{v}{4\pi}(a_n^2-{a_n^\dagger}^2)\right\}
- \frac{iv}{\pi} \sum\limits_{n\ne n'} (-1)^{n+n'}\sqrt{nn'}\times\left\{ \frac{(a_n a_{n'}-{a}_{n'}^{\dagger}{a}_n^{\dagger})}{2(n+n')}+\frac{{a}_n^{\dagger} a_{n'}}{n'-n}\right\}.\end{equation} The same result follows from Law's general formula (2.21) of Ref.~\cite{Law} for the ``effective'' Hamiltonian, which was constructed to reduce the equations of motion for the operators $Q_n$, $P_n$ to the Heisenberg form. This proves that both approaches are equivalent, at least when the Hamiltonian is quadratic.
The additional terms in (\ref{Lambda}) are responsible for the annihilation and creation of pairs and for the scattering of particles from one mode to another in the contracting cavity. The corresponding ``coupling constants'' are proportional to the velocity of the moving boundary and are small in the non-relativistic case. One can see that the terms of the Hamiltonian (\ref{Lambda}) responsible for the interaction between modes are of the same order as those describing particle production in a separate mode. Therefore strong interaction between field modes is a built-in internal feature of the DCE \cite{Pomeran}.
We see from Eq.~(\ref{Lambda}) that the Hamiltonian $H'$ has a remarkable property: its time dependence is determined exclusively by the c-number factor $\pi/l(t)$. Hence, the operator $\Lambda$ does not depend on time explicitly and is an integral of motion. Therefore, being quadratic in the canonical variables $a_n$, $a_n^\dagger$, it admits diagonalization by a time-independent linear canonical transformation (Bogolubov transformation) from $a_n$, $a_n^\dagger$ to some new canonical variables $\tilde{b}_n$, $\tilde{b}_n^\dagger$. In terms of these new variables we would have \begin{equation}\label{H'}\Lambda=\sum_n\lambda_n\tilde{b}_n^ \dagger\tilde{b}_n\,,\quad H'=\sum_n(\pi\lambda_n/l(t))\tilde{b}_n^\dagger\tilde{b}_n. \end{equation}
Hereinafter it is convenient to introduce an auxiliary dimensionless time variable $\chi=\pi\int_0^t dt'/l(t')$, which varies from $-\infty$ to $+\infty$. For $0<t<T$, $\chi=-(\pi/v)\ln(1-vt/l_i)$. In terms of this variable the Heisenberg equations of motion acquire the form \begin{equation}\label{EffHamiltEqs} da_n/d\chi=i[\Lambda,a_n],\quad 0<t<T,\end{equation} so that the operator $\Lambda$ can be considered as the generator of translations in the ``time'' $\chi$. The moment $t=0$ corresponds to $\chi_i=0$, whereas $t=T$ corresponds to $\chi_f=(\pi/v)\ln(1/\rho)$, where the dimensionless parameter $\rho=l_f/l_i<1$ is the squeeze rate of the cavity. The value $\chi=\infty$ corresponds to the moment $t=l_i/v$ of the cavity collapse.
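The closed form $\chi_f=(\pi/v)\ln(1/\rho)$ follows by elementary integration; as a quick numerical cross-check, the sketch below (in natural units $c=1$; the values of $l_i$, $v$, $\rho$ are illustrative, not taken from the text) integrates $\chi=\pi\int_0^t dt'/l(t')$ directly:

```python
import numpy as np

def chi(t, l_i, v, n_steps=200_000):
    """Trapezoidal quadrature of chi = pi * int_0^t dt'/l(t') with l(t') = l_i - v*t'."""
    ts = np.linspace(0.0, t, n_steps + 1)
    f = 1.0 / (l_i - v * ts)
    h = t / n_steps
    return np.pi * h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

l_i, v, rho = 1.0, 0.5, 0.25             # illustrative cavity length, velocity, squeeze rate
T = (l_i - rho * l_i) / v                # duration of contraction, T = (l_i - l_f)/v
chi_f = (np.pi / v) * np.log(1.0 / rho)  # closed form for chi at t = T
print(abs(chi(T, l_i, v) - chi_f))       # tiny residual: quadrature agrees with closed form
```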
\section{Diagonalization of the Hamiltonian}\label{quasiparticles}
We will use the so-called Milne reference frame, see, e.g., Ref.~\cite{BD}, to diagonalize the operator $\Lambda$. The corresponding spatial ($\xi$) and time ($\tau$) Milne coordinates are defined by \begin{equation}\label{Milne_Coords} t=\frac{l_i}{v}-\tau\cosh{\xi},\quad x=\tau\sinh{\xi}.\end{equation} The point is that the cavity remains stationary in the Milne reference frame. Indeed, the spacetime region $0<x<l_i-vt$ is mapped conformally by the transformation (\ref{Milne_Coords}) onto the strip $0<\xi<(1/2)\ln{d}$, $\tau>0$, where $d=(1+v)/(1-v)$ coincides with the Doppler factor for reflection from a mirror moving with velocity $v$. This means that the world line of the right boundary is given by $\xi=(1/2)\ln{d}={\rm const}$.
The wave equation in terms of Milne coordinates reads: \begin{equation}\label{Milne_WaveEq}\frac1{\tau}\frac{\partial}{\partial\tau} \left(\tau\frac{\partial\Phi}{\partial\tau}\right)- \frac1{\tau^2}\frac{\partial^2\Phi}{\partial\xi^2}=0.\end{equation} This equation can be easily solved by separation of variables. Taking into account boundary conditions (which are stationary in the Milne reference frame) we obtain a complete set of modes \begin{equation}\label{Milne_Modes} \Psi_n(\xi,\tau)=\frac1{\sqrt{\pi n}} \left(\frac{v\tau}{l_i}\right)^{2\pi i n/\ln{d}}\sin\left(\frac{2\pi n\xi}{\ln{d}}\right),\quad n=1,2,\ldots\;,\end{equation} which are normalized by the relation $$-i\int\limits_0^{\frac12\ln{d}} d\xi\,\tau \left[\Psi_n^*\frac{\stackrel{\leftrightarrow}{\partial}} {\partial\tau}\Psi_{n'}\right]=\delta_{nn'}.$$ Note that the unusual sign in the LHS of this equation arises due to opposite directions of physical time $t$ and Milne time $\tau$. In terms of the original variables $x$, $t$ these modes acquire the form \begin{equation}\label{Milne_ModesXT} \Psi_n(x,t)=\frac{-i}{2\sqrt{\pi n}}\left\{\left(1-\frac{v(t-x)}{l_i}\right)^{\frac{2\pi i n}{\ln{d}}}-\left(1-\frac{v(t+x)}{l_i}\right)^{\frac{2\pi i n}{\ln{d}}}\right\},\end{equation} and constitute a complete set of solutions for the wave equation in a uniformly contracting cavity orthonormalized by the usual Klein-Gordon scalar product $$i\int\limits_0^{l(t)} dx\, \left[\Psi_n^*\frac{\stackrel{\leftrightarrow}{\partial}} {\partial t}\Psi_{n'}\right]=\delta_{nn'}.$$ Hence, the general solution to the Heisenberg equations (\ref{Hamilt Eqs}) can be represented in the form \begin{equation}\label{VirtModeQuant}\Phi(x,t)=\sum\limits_n \left\{b_n\Psi_n(x,t)+ b_n^\dagger\Psi_n^*(x,t)\right\},\end{equation} where operators $b_n$, $b_n^\dagger$ satisfy the standard Bose-Einstein commutation relations and {\it are independent of time} since modes (\ref{Milne_ModesXT}) satisfy the wave equation exactly. 
Specifically, this means that the Hamiltonian which governs the behavior of the variables $b_n$ and $b_n^\dagger$ is {\it equal to zero} up to a $c$-number contribution.
Let us now express the operators $b_n$ in terms of the operators $a_n$, $a_n^\dagger$. Due to the completeness and orthonormality of the set (\ref{Milne_ModesXT}), we have: \begin{equation}\label{Phi,Pi->b} b_n=i\int\limits_0^{l(t)}\left\{\Psi_n^*(x,t)\Pi(x,t)- \frac{\partial\Psi_n^*(x,t)}{\partial t}\Phi(x,t)\right\}\,dx.\end{equation} Using the expansions (\ref{a->Phi,Pi}) we reduce the RHS of Eq.~(\ref{Phi,Pi->b}) to the form $$b_n=\sum\limits_{n'}\left\{A_{nn'}(t)a_{n'}(t)+B_{nn'}(t)a_{n'}^\dagger(t)\right\},$$ where \begin{equation}\label{ABcoeffs}\left.\begin{array}{c}\displaystyle A_{nn'}(t)\\ \displaystyle B_{nn'}(t)\end{array}\right\}=\frac{-i}{\sqrt{\pi n'}}\int\limits_0^{l(t)}dx\,\sin\left(\frac{\pi n'x}{l(t)}\right)\left\{\frac{\partial\Psi_n^*(x,t)}{\partial t}\pm i\omega_{n'}\Psi_n^*(x,t)\right\}.\end{equation} Now, using the explicit expressions (\ref{Milne_ModesXT}), we can express the time derivatives in Eq.~(\ref{ABcoeffs}) in terms of derivatives with respect to $x$ and integrate by parts. Then, introducing the scaling variable $y=x/l(t)$, we finally obtain: \begin{equation}\label{AB->alpha,beta} A_{nn'}(t)=e^{2iv n\chi(t)/\ln{d}}\alpha_{nn'},\quad B_{nn'}(t)=e^{2iv n\chi(t)/\ln{d}}\beta_{nn'},\end{equation} where the coefficients \begin{equation}\label{alpha_beta}\left.\begin{array}{c}\displaystyle \alpha_{nn'}\\ \displaystyle \beta_{nn'}\end{array}\right\} =\frac12\,\sqrt{\frac{n'}n}\int\limits_{-1}^1 dy\,(1-vy)^{-2\pi i n/\ln{d}}\,e^{\mp i\pi n'y},\end{equation} are independent of the time $t$ and of the auxiliary variable $\chi$.
Thus, we have \begin{equation}\label{b_tilde b}b_n=e^{2iv n\chi(t)/\ln{d}}\tilde{b}_n(t), \end{equation} where the operators \begin{equation}\label{tilde b}\tilde{b}_n(t)=\sum\limits_{n'} \left\{\alpha_{nn'}a_{n'}(t)+\beta_{nn'}a_{n'}^\dagger(t)\right\},\end{equation} are expressed in terms of the operators $a_n$, $a_n^\dagger$ by a Bogolubov transformation with coefficients which {\it do not depend on time explicitly}$\,$. Therefore the Hamiltonian which determines the time dependence of the operators $\tilde{b}_n$ remains unchanged under the transformation (\ref{tilde b}) and is given by Eq.~(\ref{Lambda}). This result, together with Eq.~(\ref{b_tilde b}), is sufficient to ascertain the structure of the Hamiltonian $\Lambda$ in terms of the operators $\tilde{b}_n$.
Indeed, the Heisenberg equations of motion for the operators $\tilde{b}_n$ are given by $d\tilde{b}_n/d\chi=i[\Lambda,\tilde{b}_n]$, compare (\ref{EffHamiltEqs}). Taking into account the form (\ref{b_tilde b}) of the time dependence of $\tilde{b}_n$, we have for the commutator $[\Lambda,\tilde{b}_n]$: $$[\Lambda,\tilde{b}_n]=-(2v/\ln d)\, n\tilde{b}_n\,.$$ It clearly follows from (\ref{Lambda}) and (\ref{tilde b}) that $\Lambda$ is quadratic in terms of the operators $\tilde{b}_n$ as well. Hence, we unambiguously arrive at \begin{equation}\label{Lambda_diag}\Lambda=\frac{2v}{\ln{d}}\sum\limits_n n \tilde{b}_n^\dagger \tilde{b}_n\,.\end{equation}
Eqs.~(\ref{Lambda_diag}), (\ref{tilde b}) and (\ref{alpha_beta}) constitute an exact solution for the quantum field in a uniformly contracting cavity. This is one of the primary results of the present paper. Its physical meaning consists in the existence of stable, non-interacting particles in a uniformly contracting one-dimensional cavity. Since pair production and mode-to-mode scattering are absent in this representation, the number of these particles in every mode is fixed. Their energies, however, depend on time due to collisions with the moving boundary, compare Eq.~(\ref{H'}). It is worth noting that the exact energy spectrum of these particles turns out to be equidistant.
It is important that for a stationary cavity the operators $\tilde{b}_n(t)$ coincide with $a_n(t)$. Indeed, proceeding to the limit $v\rightarrow 0$ in Eq.~(\ref{alpha_beta}), one can easily verify that $\alpha_{n,n'}(0)=\delta_{nn'},\, \beta_{n,n'}(0)=0$. We may conclude that the operators $\tilde{b}_n(t),\,\tilde{b}^{\dagger}_n(t)$ defined by Eqs.~(\ref{tilde b}),(\ref{alpha_beta}) represent a natural generalization of the particle destruction and creation operators to nonstationary cavities with uniformly moving boundaries.
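This limit can also be checked numerically from the integral representation (\ref{alpha_beta}). The sketch below (quadrature grid and the small velocity $v=10^{-3}$ are illustrative choices, not part of the derivation) evaluates $\alpha_{11}$ and $\beta_{11}$ by trapezoidal quadrature:

```python
import numpy as np

def bogolubov(n, m, v, grid=40_001):
    """alpha_{nm}, beta_{nm} of Eq. (alpha_beta), by trapezoidal quadrature on y in [-1, 1]."""
    d = (1.0 + v) / (1.0 - v)                       # Doppler factor
    y = np.linspace(-1.0, 1.0, grid)
    w = np.full(grid, 2.0 / (grid - 1)); w[0] *= 0.5; w[-1] *= 0.5
    base = (1.0 - v * y) ** (-2j * np.pi * n / np.log(d))
    pref = 0.5 * np.sqrt(m / n)
    alpha = pref * np.sum(w * base * np.exp(-1j * np.pi * m * y))
    beta  = pref * np.sum(w * base * np.exp(+1j * np.pi * m * y))
    return alpha, beta

a11, b11 = bogolubov(1, 1, v=1e-3)
print(abs(a11 - 1.0), abs(b11))   # both O(v): alpha_{nn'} -> delta_{nn'}, beta_{nn'} -> 0
```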
Let us now consider the Bogolubov transformation (\ref{tilde b}) with the coefficients $\alpha_{n,n'}(v),\,\beta_{n,n'}(v)$ defined by Eq.~(\ref{alpha_beta}), with $v(t)$ now being an arbitrary function of time. Since the coefficients of the transformation (\ref{tilde b}) now depend on time, the operators $\tilde{b}_n(t)$ are governed by a new Hamiltonian $\widetilde{H}$, so that the Heisenberg equations of motion acquire the form $i\dot{\widetilde{b}}_n=[\widetilde{H},\widetilde{b}_n]$. Using Eqs.~(\ref{tilde b}) and the equations of motion for the operators $a_n$, we obtain for the time derivative of $\tilde{b}_n(t)$: $$\begin{array}{l}\displaystyle \dot{\widetilde{b}}_n= \sum\limits_{n'}\left(\dot\alpha_{nn'}a_{n'}+\dot\beta_{nn'}a_{n'}^\dagger+ \alpha_{nn'}\dot a_{n'}+\beta_{nn'}\dot a_{n'}^\dagger\right)=\\ \\ \displaystyle =\sum\limits_{n'}\left(\dot\alpha_{nn'}a_{n'}+\dot\beta_{nn'}a_{n'}^\dagger+ \alpha_{nn'} i[H',a_{n'}]+\beta_{nn'} i [H',a_{n'}^\dagger]\right)=\sum\limits_{n'}\left(\dot\alpha_{nn'}a_{n'}+\dot\beta_{nn'}a_{n'}^\dagger\right)+ i[H',\widetilde{b}_n].\end{array}$$ We already know the structure of $H'$ in terms of $\widetilde{b}_n$, see Eqs.~(\ref{Lambda}) and (\ref{Lambda_diag}).
For the difference $\widetilde{H}-H'\equiv{\cal V}$ we have: $$ i[{\cal V},\widetilde{b}_n]= \sum\limits_{n'}\left(\dot\alpha_{nn'}a_{n'}+\dot\beta_{nn'}a_{n'}^\dagger\right).$$ Using the well known unitarity relations for Bogolubov coefficients \begin{equation}\label{unitarity} \sum\limits_{n''}\left(\alpha_{nn''}\beta_{n'n''}- \alpha_{n'n''}\beta_{nn''}\right)=0,\quad \sum\limits_{n''}\left(\alpha_{nn''}\alpha_{n'n''}^*- \beta_{nn''}\beta_{n'n''}^*\right)=\delta_{nn'},\end{equation} which in our case (\ref{alpha_beta}) can be verified straightforwardly, we get the inverse transformation \begin{equation}\label{a_inv} a_n=\sum\limits_{n'}\left(\alpha_{n'n}^*\widetilde{b}_{n'}- \beta_{n'n}\widetilde{b}_{n'}^\dagger\right),\end{equation} and hence, \begin{equation}\label{com_VB} i[{\cal V},\widetilde{b}_n]= \dot v\sum\limits_{n'}\left({\cal A}_{nn'}\widetilde{b}_{n'}+{\cal B}_{nn'}\widetilde{b}_{n'}^\dagger\right),\quad \end{equation} where \begin{equation}\label{calA} {\cal A}_{nn'}=\sum\limits_{n''}\left(\frac{d\alpha_{nn''}}{dv}\alpha_{n'n''}^*- \beta_{n'n''}^*\frac{d\beta_{nn''}}{dv}\right),\quad {\cal B}_{nn'}=\sum\limits_{n''}\left(\frac{d\beta_{nn''}}{dv}\alpha_{n'n''}- \beta_{n'n''}\frac{d\alpha_{nn''}}{dv}\right). 
\end{equation} Substituting representations (\ref{alpha_beta}) for Bogolubov coefficients into Eqs.~(\ref{calA}) we can perform summations over $n''$ in (\ref{calA}) using the well known relation $$\sum\limits_{n=-\infty}^{\infty}ne^{i\pi xn}=-\frac{2i}{\pi} \sum\limits_{k=-\infty}^{\infty}\delta'(x-2k).$$ After this, all integrations become trivial and we obtain: \begin{equation}\label{calA_final}\begin{array}{c}\displaystyle \mathcal{A}_{nn'}=i\frac{(-1)^{n'-n}}{\pi} \frac{\sqrt{nn'}\gamma^{2+2i\pi(n'-n)/\ln d}}{(n'-n)(n'-n+i\ln d/2\pi)}\,,\quad n\neq n'\,,\qquad \mathcal{A}_{nn}=-\frac{2i\pi n}{\ln^2d} \bigg[\frac{\ln d}{v}+2\gamma^2(\ln\gamma)-1\bigg],\\ \\ \displaystyle\mathcal{B}_{nn'}=i\frac{(-1)^{n'+n}}{\pi} \frac{\sqrt{nn'}\gamma^{2-2i\pi(n'+n)/\ln d}}{(n'+n)(n'+n-i\ln d/2\pi)}\,, \end{array}\end{equation} where $\gamma=(1-v^2)^{-1/2}$.
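The unitarity relations (\ref{unitarity}) can indeed be verified numerically from the integral representation (\ref{alpha_beta}). The sketch below checks the diagonal case $n=n'=1$ of the second relation; the truncation $M$, grid size, and velocity are illustrative, so the check is approximate:

```python
import numpy as np

def unitarity_sum(n=1, M=400, v=0.2, grid=20_001):
    """Truncated sum_m (|alpha_{nm}|^2 - |beta_{nm}|^2), which should approach delta_{nn} = 1."""
    d = (1.0 + v) / (1.0 - v)
    y = np.linspace(-1.0, 1.0, grid)
    w = np.full(grid, 2.0 / (grid - 1)); w[0] *= 0.5; w[-1] *= 0.5
    base = (1.0 - v * y) ** (-2j * np.pi * n / np.log(d))
    total = 0.0
    for m in range(1, M + 1):
        pref = 0.5 * np.sqrt(m / n)
        alpha = pref * np.sum(w * base * np.exp(-1j * np.pi * m * y))
        beta  = pref * np.sum(w * base * np.exp(+1j * np.pi * m * y))
        total += abs(alpha) ** 2 - abs(beta) ** 2
    return total

print(unitarity_sum())   # close to 1, up to truncation and quadrature error
```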
It follows from (\ref{calA_final}) that ${\cal A}_{nn'}^*=-{\cal A}_{n'n}$, ${\cal B}_{nn'}={\cal B}_{n'n}$. Using these relations and Eq.~(\ref{com_VB}) one can easily find \begin{equation}\label{V_cal} {\cal V}=i\,\dot v(t)\sum\limits_{n,n'}\left\{{\cal A}_{nn'}(v(t))\widetilde{b}_{n}^\dagger\widetilde{b}_{n'}+\frac12\left({\cal B}_{nn'}(v(t))\widetilde{b}_{n}^\dagger\widetilde{b}_{n'}^\dagger-{\cal B}_{nn'}^*(v(t))\widetilde{b}_{n}\widetilde{b}_{n'}\right)\right\}. \end{equation} And finally \begin{equation}\label{H_tilde} \widetilde{H}=\frac{2\pi v(t)}{l(t)\ln{d}}\sum\limits_n n \widetilde{b}_n^\dagger \widetilde{b}_n+{\cal V}. \end{equation}
The representation (\ref{H_tilde}) is very important because it clearly indicates that (i) the Hamiltonian for a field in a nonstationary cavity can be diagonalized if $\dot{v}=0$, and hence we can use the notion of particles in a uniformly contracting cavity; and (ii) particle creation, as well as mode-to-mode scattering, takes place only when the boundary moves with acceleration. For our law of motion (\ref{l(t)}) these processes arise only at the initial ($t=0$) and final ($t=T$) moments of time. The fact that radiation processes are absent while one boundary of a cavity moves with a constant velocity was first noticed by Fulling and Davies in Ref.~\cite{FD}.
\section{Particle creation in a uniformly contracting cavity}\label{particle creation}
We will now compute the number of particles created in a nonstationary cavity with the right mirror moving according to Eq.~(\ref{l(t)}). We note first that the approach based on the Hamiltonian $\widetilde{H}$ is not convenient for this purpose. This is because the velocity of the mirror in our approximation changes instantaneously, and thus $\widetilde{H}$ contains ill-defined products of $\delta$- and $\theta$-functions. In contrast, the Hamiltonian $H'$ (\ref{Lambda}) is not that singular even in our approximation (it contains only $\theta$-functions), and therefore the operators $a_n(t)$ are continuous functions of time. This point plays a central role in our method of solution.
We start from the initial conditions \begin{equation}\label{in_cond}a_n(\chi)\vert_{\chi=-0}=a_n^{in}, \end{equation}
where $a_n^{in}$ are the initial destruction operators which define the initial vacuum state by $a_n^{in}|0\rangle_{in}=0$. Due to the continuity of the operators $a_n(\chi)$ ($a_n(-0)=a_n(+0)$), we have the initial data for the operators $\tilde b_n(\chi)$, \begin{equation}\label{tilde b_in}\tilde{b}_n(0)=\sum\limits_{n'} \left\{\alpha_{nn'}a_{n'}^{in}+\beta_{nn'}a_{n'}^{in\,\dagger}\right\}. \end{equation} According to Eq.~(\ref{b_tilde b}), the operator $\widetilde{b}_n$ depends on $t$ (or $\chi$) only by means of a phase rotation. Specifically, at $t=T$ we have: \begin{equation}\label{bin->bout}\tilde b_n(\chi_f)=e^{-2\pi i n \Theta}\tilde{b}_n(0),\quad \Theta=\frac{\ln(1/\rho)}{\ln{d}}\,.\end{equation} Here the quantity $\Theta$ measures the phase acquired by a particle in the principal mode ($n=1$) during the squeezing period of the cavity. As follows from Eq.~(\ref{bin->bout}), the phase factor is completely determined by the fractional part of $\Theta$. For further convenience, we will denote the fractional part of $\Theta$ by $\vartheta$, \begin{equation}\label{Q-def1} \vartheta={\rm Frac}\left\{\frac{\ln(1/\rho)}{\ln{d}}\right\},\end{equation} where the symbol ${\rm Frac}$ denotes the fractional part of a number, so that $0\le\vartheta<1$.
The operators $a_n(\chi_f-0)$, which are the solutions of Eqs.~(\ref{EffHamiltEqs}) with the initial conditions (\ref{in_cond}), can now be found from Eq.~(\ref{a_inv}). Hence, again taking into account the continuity of the operators $a_n$, we finally have for $a_n^{out}=a_n(\chi_f+0)$: \begin{equation}\label{tilde a_out}a_n^{out}=\sum\limits_{n'} \left\{\alpha_{n'n}^*\tilde{b}_{n'}(\chi_f)- \beta_{n'n}\tilde{b}_{n'}^\dagger(\chi_f)\right\}. \end{equation} By combining the three transformations (\ref{tilde b_in}), (\ref{bin->bout}) and (\ref{tilde a_out}), we obtain: \begin{equation}\label{Bogol_tranf}a_n^{out}=\sum\limits_{n'} \left\{U_{nn'}a_{n'}^{in}+V_{nn'}a_{n'}^{in\dagger}\right\}, \end{equation} where the coefficients of the Bogolubov transformation (\ref{Bogol_tranf}) are given by \begin{eqnarray}\label{U_coeff} U_{nn'}=\sum\limits_{n''}\left\{e^{-2\pi i n'' \vartheta}\alpha_{n''n}^*\alpha_{n''n'}-e^{2\pi i n'' \vartheta}\beta_{n''n'}^*\beta_{n''n}\right\},\\ \label{V_coeff} V_{nn'}=\sum\limits_{n''}\left\{e^{-2\pi i n'' \vartheta}\alpha_{n''n}^*\beta_{n''n'}-e^{2\pi i n'' \vartheta}\alpha_{n''n'}^*\beta_{n''n}\right\}.\end{eqnarray}
Consider the expression for $V_{nn'}$. Using Eq.~(\ref{alpha_beta}), we get \begin{equation}\label{V1} V_{nn'}(d,\vartheta)=\frac{\sqrt{nn'}}{4}\int\limits_{-1}^{1} dy_1\int\limits_{-1}^{1} dy_2\sum\limits_{n''\ge 1}\frac1{n''}\left(\frac{1-vy_1}{1-vy_2}\right)^{2\pi i n''/\ln{d}}\times\left\{e^{i\pi(ny_1+n'y_2)-2\pi i n'' \vartheta}-e^{i\pi(n'y_1+ny_2)+2\pi i n'' \vartheta}\right\}.\end{equation} After evaluation of the sum and double integral in (\ref{V1}) (see Appendix for details) we obtain \begin{equation}\label{V_final} V_{nn'}=(-1)^{n+n'}\frac{i(d-1)\sqrt{nn'}}{2\pi(n d^{\vartheta}+n')(n+n'd^{1-\vartheta})}\times\left\{e^{-2 \pi i n(d^{\vartheta}-1)/(d-1)}-e^{-2 \pi i n'(d^{1-\vartheta}-1)/(d-1)}\right\}.\end{equation} The coefficient $U_{nn'}$ differs from $V_{nn'}$ by the only change $n'\to -n'$ everywhere, except for the common factor $\sqrt{n'}$, so that \begin{equation}\label{U_final} U_{nn'}=(-1)^{n+n'}\frac{i(d-1)\sqrt{nn'}}{2\pi(n d^{\vartheta}-n')(n-n'd^{1-\vartheta})}\times\left\{e^{-2 \pi i n(d^{\vartheta}-1)/(d-1)}-e^{2 \pi i n'(d^{1-\vartheta}-1)/(d-1)}\right\}.\end{equation} Finally, the energy (mode) distribution of created particles is given by \begin{equation}\label{Nn} \bar{N}_n (d,\vartheta)=\,_{in}\langle
{0}|a_n^{out\,\dagger}a_n^{out}|0\rangle _{in}=\sum\limits_{n'\ge 1}|V_{nn'}|^2=\frac{(d-1)^2n}{\pi^2}\, \sum\limits_{n'=1}^{\infty}\, \frac{n'\,\sin^2\left[\frac{\pi(d^{\vartheta}-1)}{(d-1)} (n+n' d^{1-\vartheta})\right]}{(n d^{\vartheta}+n')^2\, (n+n'd^{1-\vartheta})^2}.\end{equation}
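The truncated sum in Eq.~(\ref{Nn}) is straightforward to evaluate numerically. A minimal sketch (the truncation $M$ and the velocity $v=0.9$ are illustrative), which also exhibits the exact vanishing of particle creation at $\vartheta=0$:

```python
import numpy as np

def N_bar(n, d, theta, M=100_000):
    """Mean number of particles created in mode n: truncated sum over n' in Eq. (Nn)."""
    m = np.arange(1, M + 1, dtype=float)
    arg = np.pi * (d**theta - 1.0) / (d - 1.0) * (n + m * d**(1.0 - theta))
    terms = m * np.sin(arg)**2 / ((n * d**theta + m)**2 * (n + m * d**(1.0 - theta))**2)
    return (d - 1.0)**2 * n / np.pi**2 * np.sum(terms)

d = (1.0 + 0.9) / (1.0 - 0.9)    # Doppler factor for v = 0.9
print(N_bar(1, d, 0.0))          # 0.0: no particles are created when theta = 0
print(N_bar(1, d, 0.5) > 0.0)    # True: particles are created otherwise
```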
One can see that the number of particles (\ref{Nn}) created in each mode depends on two parameters. One of them is the Doppler factor $d$, which can be expressed either in terms of the velocity of the moving boundary, $d=(1+v)/(1-v)$, or in terms of the corresponding Lorentz factor $\gamma=(1-v^2)^{-1/2}$, $d=\left(\gamma+\sqrt{\gamma^2-1}\right)^2$. The second parameter, $\vartheta$, is determined by $d$ and the dimensionless squeeze rate of the cavity $\rho=l_f/l_i$ which, in turn, can be expressed in terms of the duration of squeezing $T$ as $\rho=1-vT/l_i$. Note that the acceleration of the moving boundary, $$ W=-v\delta(t)+v\delta(t-T), $$ is determined by the same set of parameters. This is natural, since particles can only be created by an accelerating boundary.
An interesting feature of the obtained solution is that the number of created particles is a periodic function of the parameter $\log\rho$ with the period $\log d$. This can already be traced in the expansions (\ref{U_coeff}), (\ref{V_coeff}) and the definition (\ref{Q-def1}) of the parameter $\vartheta$. Surprisingly, the number of created particles vanishes completely if $\vartheta=0$, see Eq.~(\ref{Nn}). It is worth noting that $\bar{N}_n$ tends to zero also as $\vartheta\to 1$, so that the energy spectrum of created particles is a continuous function of the parameter $\rho$. \begin{figure}
\caption{\small A graphical illustration of the vanishing of the number of created particles at a) $\rho=1/d$, b) $\rho=1/d^2$. The dashed lines represent the world lines of a light ray emitted by the moving boundary at $t=0$.}
\label{PCzero}
\end{figure} According to Eq.~(\ref{Q-def1}), $\vartheta$ turns into zero, and hence the number of created particles vanishes, if $\rho=1/d^k$, where $k=0,1,2,\ldots$. The value $k=0$ corresponds to the stationary case, while the others can be explained by destructive quantum interference between the events of particle creation at the moments of acceleration ($t=0$) and deceleration ($t=T$) of the boundary. As illustrated by Fig.~\ref{PCzero}, the values of the squeeze rate $\rho=1/d^k$ correspond exactly to the cases when a light ray, emitted by the right boundary at $t=0$, returns to it at $t=T$ after $k-1$ successive reflections and therefore can be absorbed at the moment of deceleration. Analogous arguments were employed by the authors of Ref.~\cite{FD} in the discussion of the vanishing of the renormalized energy flux in a uniformly expanding cavity, though only the case $k=1$ was considered in that work. It is worth mentioning that at $\rho=1/d^k$ every level of the initial energy spectrum of the field inside the cavity is converted during the time interval $T$ into the value $\omega_n^f=\pi n/(l_i\rho)=(\pi n/l_i)d^k$, which coincides with the frequency acquired by a particle after $k-1$ successive reflections from the moving boundary, if this particle was emitted by the boundary moving with velocity $v$ at $t=0$.
The average numbers of particles created in the principal ($n=1$) and in the first excited ($n=2$) modes, as functions of $\vartheta$, are shown in Fig.~\ref{N(Q)plot} for several \begin{figure}
\caption{\small The average number of particles created in a) the principal mode ($n=1$), and b) the first excited mode ($n=2$), as a function of $\vartheta$ for different values of the moving boundary $\gamma$-factor.}
\label{N(Q)plot}
\end{figure} values of the moving boundary $\gamma$-factor. After appropriate scaling, the same plots represent the first period of the dependence of $\bar{N}_n$ on $\log(1/\rho)$.
One can see from Fig.~\ref{N(Q)plot} that at sufficiently large $\gamma$ the plots for both $\bar{N}_1$ and $\bar{N}_2$ have a plateau at not too small $\vartheta$, and strongly pronounced maxima (one for $n=1$ and two for $n=2$) at $\vartheta$ close to unity. One can also see that the heights of the plateaus grow with $\gamma$ but nevertheless remain rather small. To estimate the maximum possible height of a plateau, we note that in the limiting case $\gamma\to\infty$ ($d\to\infty$) at fixed $\rho$ the instantaneous approximation considered in Ref.~\cite{FNL} becomes applicable. Indeed, one can easily verify that Eq.~(\ref{Nn}) reduces to Eq.~(6) of Ref.~\cite{FNL} under the condition $d\gg n(1-\rho)/\rho$ \footnote{Note that this condition of applicability of the instantaneous approximation was obtained in Ref.~\cite{FNL} on the basis of merely qualitative arguments.}. For our estimate we need only the high-energy tail of the energy distribution in the instantaneous approximation (see Eq.~(10) of Ref.~\cite{FNL}), where we should put $\rho=0$, since not too small values of $\vartheta$ at $d\gg 1$ correspond to small $\rho$. Thus, for the maximum values of $\bar{N}_n$ in the plateau region we have \begin{equation}\label{plateau}\bar N_n^{(pl)}\approx\frac1{2\pi^2n}\left[\ln(2\pi n)+C-1\right],\end{equation} where $C=0.5772$ is the Euler constant. For $n=1,2$ we have $\bar N_1^{(pl)}\approx 7.17\cdot 10^{-2}$ and $\bar N_2^{(pl)}\approx 5.34\cdot 10^{-2}$, respectively. So, in the plateau region the average number of created particles is small even for ultrarelativistic velocities of the moving boundary.
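The quoted plateau values follow directly from Eq.~(\ref{plateau}); a one-line numerical check (the constant below is the Euler--Mascheroni constant $C$):

```python
import math

EULER = 0.5772156649015329   # Euler-Mascheroni constant C

def plateau(n):
    """Plateau estimate of the mean particle number, Eq. (plateau)."""
    return (math.log(2.0 * math.pi * n) + EULER - 1.0) / (2.0 * math.pi**2 * n)

print(round(plateau(1), 4), round(plateau(2), 4))   # 0.0717 0.0534
```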
To estimate the positions of the maxima and the peak values of $\bar{N}_n$, we should evaluate the sum in Eq.~(\ref{Nn}) under the conditions $d\gg 1$, $1-\vartheta\ll 1$. It is clear that the major contribution to the sum comes from $n'_{eff}\lesssim nd^{\vartheta}$. Thus we have: $$\bar N_n\approx\frac{n}{\pi^2}\, \sum\limits_{n'=1}^{\infty}\, \frac{n'\,\sin^2\left[\pi(n d^{\vartheta-1}-n' d^{-{\vartheta}})\right]}{(n+n' d^{-\vartheta})^2\, [n'+n d^{\vartheta-1}]^2}\approx \frac1{\pi^2n}\,\sin^2\left(\pi n d^{\vartheta-1}\right)\sum\limits_{n'=1}^{n d^{\vartheta}}\, \frac{n'}{(n'+n d^{\vartheta-1})^2}\,.$$ Since $nd^{\vartheta}\gg 1$, we can replace the summation over $n'$ by integration. Finally, after evaluation of the integral we obtain, in the leading logarithmic approximation: \begin{equation} \label{Nnexuni2}\displaystyle \bar N_n\approx\frac1{\pi^2n}\sin^2\left(\pi n d^{\vartheta-1}\right)\ln(n d).\end{equation}
We see from Eq.~(\ref{Nnexuni2}) that the positions of the maxima are determined by the relations \begin{equation}\label{Res0}\pi n d^{\vartheta-1}=\pi (j+1/2), \quad j=0,1,\ldots, (n-1). \end{equation} This condition is approximate, but it becomes exact in the ultrarelativistic limit. For example, at $\gamma=2,10,10^2,10^3$ we obtain for the position of the single maximum at $n=1$, respectively, $\vartheta_{max}=0.74,0.88,0.94,0.95$ from Eq.~(\ref{Res0}), and $\vartheta_{max}=0.85,0.90,0.94,0.96$ from computations with the exact formula (\ref{Nn}).
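For $n=1$, $j=0$ the condition (\ref{Res0}) gives $\vartheta_{max}=1-\ln 2/\ln d$; a short sketch reproducing the quoted estimates (the $\gamma$ values are those listed in the text):

```python
import math

def theta_max(gamma, n=1, j=0):
    """Maximum position from Eq. (Res0): n*d^(theta-1) = j + 1/2."""
    v = math.sqrt(1.0 - 1.0 / gamma**2)      # boundary velocity for a given gamma
    d = (1.0 + v) / (1.0 - v)                # Doppler factor
    return 1.0 + math.log((j + 0.5) / n) / math.log(d)

for g in (2, 10, 100, 1000):
    print(g, round(theta_max(g), 2))
```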
The condition (\ref{Res0}) can be reformulated in terms of wavelengths as \begin{equation}\label{Res} (2j+1)\lambda_n^f/2 =2l_i/d^{k+1},\end{equation} where $\lambda_n^f=2l_f/n$ is the wavelength corresponding to the $n$-th harmonic in the final state. The physical meaning of this condition can be clarified by the following qualitative consideration.
We have shown, see Eqs.~(\ref{H_tilde}),(\ref{V_cal}), that particles are created inside the cavity only at the moments when its boundary starts ($t=0$) and stops ($t=T$) moving. Physically, it is clear that in the framework of the approximation applied (instantaneous acceleration) the created particles constitute an extremely narrow wave packet (cluster), which then propagates inside the cavity without spreading, successively colliding with the boundaries, see Fig.~\ref{PCmax}. Such a picture agrees with the results of the calculation of the energy-momentum tensor performed by Fulling and Davies in Ref.~\cite{FD}.
Let us describe the clusters of particles created at the moments $t=0$ and $t=T$ by classical fields $\Phi_1(x,t)$ and $\Phi_2(x,t)$, respectively. The function $\Phi_2(x,T)$ describes the second cluster, produced by the boundary stopping at the moment $t=T$; it is located exactly at the point $x=l_f$. The first cluster at the same moment of time is located at some distance $\Delta$ to the left of it. The profile of the cluster $\Phi_1$ is exactly the same and differs from $\Phi_2$ only in sign, since the boundary accelerations at the two moments of particle creation were of opposite sign. Thus, $\Phi_1(x,T)=-\Phi_2(x-\Delta,T)$. Expanding the fields $\Phi_i$ in terms of the stationary {\it out}-modes $$\psi_{n}^{out}(x,t)=\frac{1}{\sqrt{\pi n}}\sin\left(\frac{\pi n}{l_f}x\right)e^{-i\omega_{fn}t},\quad \omega_{fn}=(\pi/l_f) n,$$ for the total field $\Phi(x,T)=\Phi_1(x,T)+\Phi_2(x,T)$ we obtain $$\Phi(x,T)=\sum_{n=1}^{\infty}\frac{2}{\sqrt{\pi n}}\Biggl( c_{n}e^{-i\omega_{fn}T}+c_{n}^*e^{i\omega_{fn}T} \Biggr)\cos\left(\frac{\pi n}{l_f}(x-\frac{\Delta}{2})\right)\sin\left(\frac{\pi n}{2l_f}\Delta\right).$$ Thus, we see that the contribution of the $n$-th mode to the field $\Phi(x,T)$ is absent if $\,n\Delta/2l_f=j,\;j=0,1,2,\ldots$ (destructive interference of the two clusters), and it is maximal if \begin{equation}\label{Delta}n\Delta/2l_f=j+1/2,\quad j=0,1,2,\ldots \end{equation} (constructive interference). In view of the restriction $\Delta\leq l_f$, we have $2j+1\leq n$. Since particle creation and mode interaction are absent in a stationary cavity, these conditions hold at any moment of time $t>T$. Now, the question is whether it is possible to match Eq.~(\ref{Delta}) with the conditions (\ref{Res}) (as well as the corresponding conditions for the minima).
Assume that the first cluster undergoes $k$ collisions with the moving boundary during the contraction period $T$. One can easily find that the last collision takes place at the point $x_k=l_i/d^k$ at the moment of time $t_k=(l_i/v)(1-1/d^k)$. During the remaining time $T-t_k$ the cluster either collides with the boundary at rest, if $T-t_k>x_k$ (Fig.~\ref{PCmax}a), or does not, if $T-t_k<x_k$ (Fig.~\ref{PCmax}b). \begin{figure}
\caption{\small World lines of the cavity boundaries (solid lines) and of the cluster of particles (dashed line) created at $t=0$, for a) $T-t_k>x_k$, and b) $T-t_k<x_k$. The number of collisions with the moving boundary is $k=1$.}
\label{PCmax}
\end{figure} The quantity $\Delta$ for these two cases is given, respectively, by $$\Delta=l_f\pm[x_k-(T-t_k)].$$ Then Eq.~(\ref{Delta}) is equivalent to the two relations \begin{equation}\label{cond} \frac{l_i}{d^k}\frac{1-v}{v}=\left\{\begin{array}{ll}\displaystyle \frac{2l_f}{n}\Bigl(n\frac{1+v} {2v}-j-\frac{1}{2}\Bigr)\,,\quad T-t_k>x_k\, \\ \\ \displaystyle\frac{2l_f}{n}\Bigl(j+\frac{1}{2} +n\frac{1-v}{v}\Bigr)\,,\quad T-t_k<x_k\, \end{array}\right., \end{equation} where $j\leq (n-1)/2$. In the ultrarelativistic limit $1-v\ll 1$ ($d=2/(1-v)$) the conditions (\ref{cond}) take the form of Eq.~(\ref{Res}) with $0\leq j\leq n-1$. Our analysis shows that, physically, the conditions (\ref{Res}) mean that the maxima of $N_n$ appear when the first cluster is located at a distance equal to an odd number of half-waves $\lambda_n^f/2$ from the point of creation of the second cluster. It is worth emphasizing that the kinematics of our problem permits the implementation of this condition only in the ultrarelativistic case. Therefore, the smaller the boundary speed $v$, the less pronounced are the maxima, and $N_n\neq0$ at the minima; compare Fig.~\ref{N(Q)plot}.
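The closed forms for $x_k$ and $t_k$ can be checked by directly iterating the bounces. The sketch below is an assumption-laden illustration, not the paper's derivation: it takes units with $c=1$, a static wall at $x=0$, a mirror at $l(t)=l_i-vt$, a cluster launched from the mirror at $t=0$, and identifies $d$ with the Doppler factor $(1+v)/(1-v)$ (consistent with the quoted $d\approx 2/(1-v)$ in the ultrarelativistic limit); the stopping time $T$ plays no role in these purely kinematic formulas.

```python
def collision_with_moving_wall(t0, l_i, v):
    """Cluster leaves x=0 at time t0 moving right at speed 1;
    the moving wall is at l(t) = l_i - v*t. Returns (t, x) of the collision."""
    t = (l_i + t0) / (1.0 + v)
    return t, t - t0

l_i, v = 1.0, 0.6
d = (1.0 + v) / (1.0 - v)          # Doppler factor, here d = 4

# cluster created at the moving wall (x = l_i) at t = 0 travels left,
# reaches the static wall x = 0 at t = l_i, then bounces back and forth
t_leave = l_i
for k in range(1, 6):
    t_k, x_k = collision_with_moving_wall(t_leave, l_i, v)
    assert abs(x_k - l_i / d**k) < 1e-9                  # x_k = l_i / d^k
    assert abs(t_k - (l_i / v) * (1.0 - d**-k)) < 1e-9   # t_k = (l_i/v)(1 - d^-k)
    t_leave = t_k + x_k            # back to x = 0 and off again
```
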
The values of $\widetilde{N}_n$ at the maxima are given by Eq.~(\ref{Nnexuni2}) only with logarithmic accuracy, and thus are not reproduced as well numerically as the positions of the maxima. However, Eq.~(\ref{Nnexuni2}) reproduces the main features of the spectra near the maxima qualitatively correctly. It is important that the height of the maxima grows as $\log{d}$, or equivalently as $\log{\gamma}$ in the ultrarelativistic limit, due to strong interactions between the modes. Clearly, the effect of DCE amplification arises from the constructive interference of the events of particle creation at $t=0$ and $t=T$. For example, if $\gamma=10^3$, then for certain values of $\rho$ (or $T$) on average more than one particle in the principal mode (or a little less than one particle in the first excited mode) can be created.
\section{Discussion}\label{discussion}
An exact solution to the problem of the dynamical Casimir effect in a one-dimensional uniformly contracting cavity was found. This became possible due to the existence of stable non-interacting particle states in such a cavity. As a consequence, we succeeded in diagonalizing the Hamiltonian of a many-mode quantum field in a cavity with uniformly moving boundaries (a generalization to the case where both boundaries move uniformly is straightforward, but rather cumbersome). The developed method opens the possibility of solving exactly many other one-dimensional DCE problems for cavities with boundaries moving along polygon-like world lines, including the interesting case of vibrating boundaries. A new representation for the field Hamiltonian in the one-dimensional problem was proposed. It was demonstrated explicitly that particle creation, as well as mode-to-mode interaction, is absent in a nonstationary cavity if its boundaries move uniformly. However, the same approach cannot be applied to the physically more realistic three-dimensional problem, although a complete set of classical solutions generalizing Eq.~(\ref{Milne_ModesXT}) is known in this case as well \cite{rev}. This is due to the breakdown of the stability of particle states and the presence of interactions between them in the three-dimensional problem.
It was shown that the number of particles created in each mode depends periodically on the logarithm of the squeezing rate of the cavity $\rho$, which is equal to the ratio of the final and initial sizes of the cavity and is related to the duration of the contraction in a rather simple way. This dependence is not monotonic. For a given velocity of the moving boundary, there exists an infinite sequence of values $\rho_k$ for which no particles are produced. On the other hand, it is possible to specify an infinite set of combinations of parameters corresponding to an optimal regime of particle production. Theoretically, at a sufficiently large Lorentz factor $\gamma$ of the moving boundary, the number of created particles can be made arbitrarily large. It is worth noting, however, that the required Lorentz factors are indeed very large. For example, a value $\gamma\sim 10^3$ is necessary for the production of a single particle. We should also mention that the number of created particles in the case of a cavity with ultrarelativistic boundaries depends on $\gamma$ rather slowly (proportionally to $\log{\gamma}$).
The non-monotonic character of the dependence of the number of created particles on $\rho$ is also present in the framework of an oversimplified ``one-mode'' model. However, correct estimates, especially near the maxima of the particle production rate, and even a correct qualitative understanding, require taking mode-to-mode interactions into account. This is true in principle for any non-perturbative feature of the DCE. Therefore, it is very important to consider exactly soluble models. It should be noted that the approximation of uniform motion of the boundaries, separated by moments of instantaneous infinite acceleration, does not work well for high-frequency modes. For example, the energy spectrum (\ref{Nn}) has an inadmissibly slowly decreasing tail ($\bar N_n\propto 1/n$), leading to an infinite total number of created particles. These problems can be resolved by smoothing the instantaneous infinite accelerations, but the corresponding problem then ceases to be exactly soluble.
\section*{Acknowledgments}
We thank M.I. Gozman, V.D. Mur, V.S. Popov and participants of the Fourth D.N. Klyshko seminar for discussion of the results and helpful remarks. This work was supported by the Russian Foundation for Basic Research (grant 06-02-17370-a), by the Ministry of Science and Education of Russian Federation (grant RNP 2.11.1972), and by the Russian Federation President grants MK-2279.2005.2 and NSh-320.2006.2.
\appendix \section{Evaluation of the integral in Eq.~(\ref{V1})} The coefficient $V_{nn'}$ (\ref{V1}) is represented by a difference of two integrals. After the change of integration variables $y_1\leftrightarrow y_2$ in the second of them, $V_{nn'}$ reduces to the form: \begin{equation}\label{V2} V_{nn'}=\frac{i\sqrt{nn'}}{2}\int\limits_{-1}^{1} dy_1\int\limits_{-1}^{1} dy_2\, e^{i\pi(ny_1+n'y_2)}{\rm Im}\,\sum\limits_{n''\ge 1}\frac{e^{2\pi i n'' Z(y_1,y_2)}}{n''},\end{equation} where \begin{equation}\label{Z(y1,y2)}Z(y_1,y_2)= \log_{d}\left(d^{-\vartheta}\frac{1-vy_1}{1-vy_2}\right).\end{equation} To calculate the sum in Eq.~(\ref{V2}) we use formula {\bf 1.441}.1 from \cite{RG}, generalized to arbitrary values of $Z$: \begin{equation}\label{sum} {\rm Im}\,\sum\limits_{n''\ge 1}\frac{e^{2\pi i n'' Z}}{n''}=\pi\left(\frac12-Z\right)+\pi\sum\limits_{k=-\infty}^{\infty} k\theta(Z-k)\theta(k+1-Z)\,. \end{equation} Obviously, the first term in the RHS of (\ref{sum}) gives zero contribution to the integral (\ref{V2}). Taking into account that $-1-\vartheta\le Z(y_1,y_2)\le 1-\vartheta$ inside the integration domain, and that $0\le \vartheta<1$ by definition, we obtain \begin{equation}\label{V3}\begin{array}{r}\displaystyle V_{nn'}=-\frac{i\pi\sqrt{nn'}}{2}\int\limits_{-1}^{1} dy_1\int\limits_{-1}^{1} dy_2\, e^{i\pi(ny_1+n'y_2)}\left\{\theta(-Z)\theta(Z+1) \right.\\\\ \left.\displaystyle +2\theta(-Z-1)\theta(Z+2)\right\}.\end{array}\end{equation} It is clear that the combination of $\theta$-functions $$\theta(Z)+\theta(-Z)\theta(Z+1)+\theta(-Z-1)\theta(Z+2)$$ is equivalent to $1$ in the domain of integration, and thus $$ \int\hspace{-0.2cm}\int\hspace{-0.1cm}d^2y\, e^{i\pi(ny_1+n'y_2)}\left\{\theta(Z)+\theta(-Z)\theta(Z+1)+ \theta(-Z-1)\theta(Z+2)\right\}=0.$$ Therefore, we have: \begin{equation}\label{V4} V_{nn'}=\frac{i\pi\sqrt{nn'}}{2}\int\hspace{-0.2cm}\int \,d^2y\, e^{i\pi(ny_1+n'y_2)}\left\{\theta(Z)-\theta(-Z-1)\right\}. 
\end{equation} The integral (\ref{V4}) can easily be reduced to the form \begin{equation}\label{I} V_{nn'}=\frac{i\pi\sqrt{nn'}}{2}\left\{\int\limits_{-1}^{a} dy_1\int\limits_{b(y_1)}^1 dy_2\,-\int\limits_{a}^1 dy_1\int\limits_{-1}^{c(y_1)} dy_2\,\right\} e^{i\pi(ny_1+n'y_2)}\,,\end{equation} where $a=1-2(d^{\vartheta}-1)/(d-1)$, $b(y_1)=(1-d^{-\vartheta})/v+d^{-\vartheta}y_1$, and $c(y_1)=(1-d^{1-\vartheta})/v+d^{1-\vartheta}y_1$. After simple calculations we obtain the expression~(\ref{V_final}) for $V_{nn'}$.
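The generalized summation formula (\ref{sum}) can be checked numerically for non-integer $Z$ against the closed form $\sum_{n\ge1}z^n/n=-\log(1-z)$; a short sketch in which both sides reduce to $\pi(1/2-\{Z\})$, with $\{Z\}$ the fractional part (illustration only, avoiding integer $Z$ where the series is not convergent):

```python
import cmath
import math

def im_series(Z):
    """Im sum_{n>=1} e^{2 pi i n Z}/n via the closed form -Im log(1 - e^{2 pi i Z})."""
    return -cmath.log(1.0 - cmath.exp(2j * math.pi * Z)).imag

def rhs(Z):
    """pi(1/2 - Z) + pi * sum_k k theta(Z - k) theta(k + 1 - Z), i.e. pi(1/2 - frac(Z))."""
    return math.pi * (0.5 - Z) + math.pi * math.floor(Z)

# agreement for non-integer Z of either sign, inside and outside (0, 1)
for Z in (-1.7, -0.3, 0.25, 0.5, 0.9, 1.4, 2.75):
    assert abs(im_series(Z) - rhs(Z)) < 1e-9
```
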
\begin{references} \bibitem{Moore}Moore G T 1970 {\it J. Math. Phys}~ {\bf 11} 2679. \bibitem{Acoust}Dodonov V V 1995 {\it Phys. Lett.}~ A {\bf 207} 126; Dodonov V V and Klimov A B 1996 {\it Phys. Rev.}~ A {\bf 53} 2664. \bibitem{Piezo}Dodonov V V and Klimov A B 1996 {\it Phys. Rev.}~ A {\bf 53} 2664. \bibitem{DManko}Dodonov V V, Man'ko V I and Man'ko O V 1991 {\it Proceedings of Lebedev Physics Institute} vol 200 (Moscow: Nauka) p 155. \bibitem{JR}Jaekel M T and Reynaud S 1992 {\it J. Phys. I (France)} {\bf 2} 149.
\bibitem{Law}Law C K 1994 {\it Phys. Rev.}~ A {\bf 49} 433, 1995 {\it Phys. Rev.}~ A {\bf 51} 2537. \bibitem{Dod}Dodonov V V 1995 {\it Phys. Lett.}~ A {\bf 207} 126. \bibitem{KA}Klimov A B, Altuzar V 1997 {\it Phys. Lett.}~ A {\bf 226} 41. \bibitem{rev}Dodonov V V 2001 {\it Advances in Chemical Physics} vol 119 (John Wiley \& Sons, Inc) p~309. \bibitem{Yab}Yablonovitch E 1989 {\it Phys. Rev. Lett.}~ {\bf 62} 1742. \bibitem{Loz1}Lozovik Yu E, Tsvetus V G and Vinogradov E A 1995 {\it Phys. Scr.}~ {\bf 52} 284. \bibitem{daccor}Dodonov A V, Dodonov E V and Dodonov V V 2003, {\it Phys. Lett.}~ A {\bf 317} 378. \bibitem{FNL}Fedotov A, Narozhny N and Lozovik Yu 2005 {\it J. Opt. B: Quantum Semiclass. Opt.} B~ {\bf 7} S64. \bibitem{Bordag1}M. Bordag, G. Petrov and D. Robaschik, Yad. Fiz. 39, 1315 (1984) [Sov. J. Nucl. Phys. 39, 828 (1984)]. \bibitem{Bordag2} M. Bordag, F.-M. Dittes and D. Robaschik, Yad. Fiz. 43, 1606 (1986) [Sov. J. Nucl. Phys. 43, 1034 (1986)]. \bibitem{RT}Razavy M, and Terning J 1985 {\it Phys. Rev.}~ D {\bf 31} 307. \bibitem{LLI}L. D. Landau and E. M. Lifshitz, {\it Mechanics} (3rd Edition, Butterworth-Heinemann, 1981).
\bibitem{Pomeran}Yu.E. Lozovik, N.B. Narozhny and A.M. Fedotov, Proceedings of the International Conference {\it I. Ya. Pomeranchuk and Physics at the Turn of Centuries}, Edited by A. Berkov, N. Narozhny and L. Okun, (World Scientific, Singapore, 2004), p.446. \bibitem{BD}N.D. Birrell and P.C.W. Davies, {\it Quantum Fields in Curved Space} (Cambridge University Press, Cambridge, 1982).
\bibitem{FD}Fulling S A and Davies P C W 1975 {\it Proc. R. Soc. Lond.}~ A {\bf 348} 393.
\bibitem{RG} I.S. Gradshteyn, and I.M. Ryzhik, {\it Tables of Integrals, Series, and Products} (Academic, New York, 1967). \end{references}
\end{document} |
\begin{document}
\ifthenelse{\boolean{amsart}}{ \title[The rate of growth of $Q(n,\lceil rn\rceil)$]{The rate of growth of the minimum clique size of graphs of given order and chromatic number} }{ \title{The rate of growth of the minimum clique size of graphs of given order and chromatic number} }
\ifthenelse{\boolean{amsart}}{ \author{Csaba Bir\'o} \address[Csaba Bir\'o]{Department of Mathematics, University of Louisville, Louisville, KY 40292} \email{csaba.biro@louisville.edu} \author{Kris Wease} \address[Kris Wease]{Department of Mathematics, University of Louisville, Louisville, KY 40292} }{ \author{Csaba Bir\'o\footnote{Department of Mathematics, University of Louisville, Louisville, KY 40292, \texttt{csaba.biro@louisville.edu}} \and Kris Wease\footnote{Department of Mathematics, University of Louisville, Louisville, KY 40292}} \date{} }
\ifthenelse{\boolean{amsart}}{}{\maketitle}
\begin{abstract} Let $Q(n,c)$ denote the minimum clique number over graphs with $n$ vertices and chromatic number $c$. We determine the rate of growth of the sequence $\{Q(n,\lceil rn \rceil)\}_{n=1}^\infty$ for any fixed $0<r\leq 1$. We also give a better upper bound for $Q(n,\lceil rn \rceil)$. \end{abstract}
\ifthenelse{\boolean{amsart}}{\maketitle}{}
\section{Introduction}\label{S:intro}
Let $\omega(G),\alpha(G),$ and $\chi(G)$ denote the clique number, independence number, and chromatic number, respectively, of a graph $G$. We will also use
$|G|$ to denote the number of vertices and $\|G\|$ to denote the number of edges of $G$. Furthermore, let \[
\omega(n,k)=\min\{\omega(G):|G|=n\text{ and }\alpha(G)\leq k\} \] be the inverse Ramsey number. Define \[
Q(n,c)=\min\{\omega(G) : |G|=n \text{ and } \chi(G)=c\}. \] The goal of this research is to determine $Q(n,c)$ as exactly as possible.
Bir\'o, F\"uredi, and Jahanbekam \cite{Bir-Fur-Jah-13} gave an exact formula for $Q(n,c)$ for the case when $c\geq (n+3)/2$ in terms of inverse Ramsey numbers. They proved the following. \begin{theorem}\label{thm:bfj} For $n\geq 2k+3$ \[ Q(n,n-k)=n-2k+q(k) \] where \[ q(k)=\min\sum_{i=1}^s(\omega(2k_i+1,2)-1) \] where the minimum is taken over positive integers $k_1,\ldots,k_s$ with $k_1+\cdots+k_s=k$, and $s\leq 3$. \end{theorem}
Liu \cite{Liu-12} determined the rate of growth of $Q(n,\lceil n/k\rceil)$ for a fixed positive integer $k$, again in terms of inverse Ramsey numbers: he proved that $Q(n,\lceil n/k\rceil)=\Theta(\omega(n,k))$ for every positive integer $k$. The natural question (also specifically posed by Liu) remained to determine the rate of growth of the sequence when $1/r$ is not an integer. In this paper we answer this question by proving the following theorem. \begin{theorem}\label{thm:main} Fix $0<r\le1$ and let $k=\lfloor1/r\rfloor$. Then there exists $0<d_r\le 1$ such that for $n$ large enough \[ d_r\omega(n,k)\le Q(n,\lceil rn\rceil)\le \omega(n,k). \] \end{theorem}
We go beyond these bounds in Section~\ref{S:upperbound}: we provide a stronger upper bound for $Q(n,\lceil rn\rceil)$. We hope that the improved bound is close to the actual value; in fact, it is plausible that it is asymptotically correct.
A related line of research was pursued in \cite{Gya-Seb-Tro-12}, in which the authors study the chromatic gap $\gap(n)=\max\{ \chi(G)-\omega(G): |V(G)|=n \}$. The obvious relationship $\gap(n)= \max \{ c - Q(n,c)\}$ makes our questions slightly more general.
\section{Proof of the main theorem} In the following proof, we generalize some of Liu's ideas to make them work for arbitrary (non-integer) positive real numbers, though in the end the proof is substantially different. Still, it is very interesting to note that the jumps in the rate of growth happen when $r$ is the reciprocal of a positive integer.
We will need the following simple lemma. \begin{lemma}\label{l:subadditive} For all $0<r\leq 1$, and $n,k$ positive integers, if $rn\geq k$, then \[ \omega(\lceil rn\rceil,k)\geq\frac{1}{\left\lceil\frac{1}{r}\right\rceil}\omega(n,k). \] \begin{proof} Observe that $\omega$ is monotone and sub-additive in its first variable. Therefore \[ \left\lceil\frac{1}{r}\right\rceil\omega(\lceil rn\rceil,k) \geq \omega\left(\left\lceil\frac{1}{r}\right\rceil\lceil rn\rceil,k\right) \geq \omega(n,k). \] \end{proof} \end{lemma}
Now we will prove that $Q(n,\lceil rn\rceil)\leq \omega(n,k)$; we do this by exhibiting a graph with $n$ vertices, chromatic number $\lceil rn\rceil$, and clique number at most $\omega(n,k)$. Let $G$ be a Ramsey graph with $|G|=n$, $\alpha(G)=k$, and $\omega(G)=\omega(n,k)$. Then \[ \chi(G)\geq\left\lceil\frac{n}{k}\right\rceil=\left\lceil\frac{n}{\lfloor 1/r\rfloor}\right\rceil\geq\left\lceil\frac{n}{1/r}\right\rceil=\lceil r n \rceil. \]
Drop edges from $G$ until we get a subgraph $G'$ with $\chi(G')=\lceil rn\rceil$. Then $|G'|=n$, and $\omega(G')\leq\omega(n,k)$.
Now we will prove the existence of the constant $d_r$.
Let $G$ be a graph with $|G|=n$ and $\chi(G)=\lceil rn\rceil$. In the first step, we will show that there exists a constant $c_r$ (that only depends on $r$) and a subgraph $H$ of $G$ such that $|H|\geq c_r n$ and $\alpha(H)\leq k$. We will construct $H$ from $G$ by removing as many disjoint independent sets of size $k+1$ as possible. In other words, let $S$ be a largest collection of disjoint independent sets of size $k+1$ in $G$, and let $H=G-S$.
From the maximality of $S$, it is clear that $\alpha(H)\leq k$. Let $t=|S|$. Since $\chi(G)\leq t+|H|$, and $|H|=n-t(k+1)$, we have \[ \lceil rn\rceil=\chi(G)\leq t+n-t(k+1)=n-tk. \] This implies $tk\leq n-\lceil rn\rceil\leq n-rn=(1-r)n$, so $t\leq (1-r)n/k$. It follows that \[
|H|\geq n-\frac{(1-r)n}{k}(k+1)=\left(\frac{k+1}{k}r-\frac{1}{k}\right)n. \] Let $c_r=\frac{k+1}{k}r-\frac{1}{k}$. Recall that $k=\lfloor 1/r\rfloor$, so $c_r$ is determined by $r$; moreover $r>1/(k+1)$, and therefore $c_r>0$.
We established the existence of a subgraph $H$ with $|H|\geq c_r n$ and $\alpha(H)\leq k$. Then for large $n$, \[ \omega(G)\geq \omega(H)\geq\omega(\lceil c_r n\rceil,k)\geq\frac{1}{\left\lceil\frac{1}{c_r}\right\rceil}\omega(n,k), \] where the last inequality follows from Lemma~\ref{l:subadditive}. Hence we may choose $d_r=1/\lceil 1/c_r\rceil$.\qed
Our constants $d_r$ provide improvements over Liu's constants in case $r$ is the reciprocal of an integer. Indeed, if $r=1/k$ for an integer $k$ and $k\to\infty$, Liu's constants converge to zero exponentially, while $c_r=1/k^2=r^2$.
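The arithmetic behind $c_r$ can be verified mechanically; a small sketch with exact rational arithmetic (illustration only, not part of the proof):

```python
from fractions import Fraction
from math import floor

def c_r(r):
    """c_r = ((k+1)/k) r - 1/k with k = floor(1/r), as in the proof."""
    k = floor(1 / r)
    return Fraction(k + 1, k) * r - Fraction(1, k)

# the algebraic identity behind the display: n - ((1-r)n/k)(k+1) = c_r * n
for r in (Fraction(1, 3), Fraction(2, 5), Fraction(3, 7)):
    k, n = floor(1 / r), 1000
    assert n - (1 - r) * n / k * (k + 1) == c_r(r) * n

# r = 1/k gives c_r = 1/k^2 = r^2, the improvement over Liu's constants
for k in range(1, 10):
    assert c_r(Fraction(1, k)) == Fraction(1, k) ** 2

# c_r > 0 whenever r > 1/(k+1), which always holds since k = floor(1/r)
for r in (Fraction(21, 100), Fraction(13, 50), Fraction(1, 2)):
    assert c_r(r) > 0
```
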
It is also very interesting to note that as $r$ approaches the reciprocal of an integer from above, $c_r\to 0$. We tend to believe that this is just an artifact of the proof, but it would be very interesting to see this question settled one way or the other.
\section{Better upper bound}\label{S:upperbound}
In the previous section we only proved a weak bound for $Q(n,\lceil rn\rceil)$, because that was all we needed to establish the rate of growth. But that bound is certainly not optimal. In the following, we show how to get better bounds in case $r$ is \emph{not} a reciprocal of an integer.
\begin{theorem}\label{thm:upperbound} Let $0<r\leq 1$ such that $1/r$ is not an integer. Let $k=\lfloor 1/r\rfloor$, and let $m=n-k\lceil rn\rceil$, $l=(k+1)\lceil rn\rceil-n$. Let $q(\beta,\alpha)=\min\sum\omega(\alpha \beta_i,\alpha)$ where the minimum is taken over sums $\sum \beta_i=\beta$ with $\beta_i>0$ integers. Then for large enough $n$, \[ Q(n,\lceil rn\rceil)\leq q(l,k)+q(m,k+1). \] \end{theorem}
Before the proof, let us comment on the requirement on $1/r$. Notice that for large $n$, we have $m>0$; in other words, for all $r$ that is not a reciprocal of an integer there exists $N$ such that $n>N$ implies $n-k\lceil rn\rceil>0$. Therefore, the quantity $q(m,k+1)$ in the statement is well-defined. If we do not set the requirement on $r$, the statement breaks down at reciprocals of integers due to some rounding problems. Note that, on the other hand, $l>0$ is always true, because the rounding in that case works in our favor.
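The identities $kl+(k+1)m=n$ and $l+m=\lceil rn\rceil$ implicit in the construction, together with the positivity of $l$ and $m$ for large $n$, can be sanity-checked numerically; a sketch for a few rational values $r=p/q$ with $1/r$ not an integer:

```python
def params(n, p, q):
    """For r = p/q (with 1/r not an integer), return k, ceil(rn), l, m as in the
    theorem: k = floor(1/r), l = (k+1)*ceil(rn) - n, m = n - k*ceil(rn)."""
    k = q // p                      # floor(1/r) = floor(q/p)
    c = -((-p * n) // q)            # ceil(p*n/q) via floor division
    return k, c, (k + 1) * c - n, n - k * c

# sanity checks: the two identities always hold, and l, m > 0 for large n
for p, q in ((2, 5), (3, 7), (2, 3)):
    for n in range(200, 230):
        k, c, l, m = params(n, p, q)
        assert k * l + (k + 1) * m == n
        assert l + m == c
        assert l > 0 and m > 0
```
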
It may seem that the requirement on $r$ takes away from the power of the theorem, but in fact if $r$ is close to the reciprocal of an integer, $m$ will get close to zero, and then the statement of the theorem is hardly an improvement on Theorem~\ref{thm:main}. In fact, it is expected that this approach would not prove any better bounds for exact reciprocals of integers.
The motivation of the theorem is that we do not believe that the jumps in the rate of growth proven in Theorem~\ref{thm:main} show the whole picture. Between these jumps, far from reciprocals of integers, the upper bound can be improved, as it is demonstrated by the theorem.
\begin{proof}[Proof of Theorem~\ref{thm:upperbound}] We will exhibit a graph on $n$ vertices (for large $n$) with chromatic number $\lceil rn\rceil$ and clique number at most $q(l,k)+q(m,k+1)$. To do this, let $l_1,\ldots,l_a$ be the numbers that minimize $q(l,k)$, and let $m_1,\ldots,m_b$ be the numbers that minimize $q(m,k+1)$. Let $L_1,\ldots,L_a$ be Ramsey graphs with
$|L_i|=kl_i$, $\alpha(L_i)\leq k$, and $\omega(L_i)=\omega(kl_i,k)$. Similarly, let $M_1,\ldots,M_b$ be Ramsey graphs with $|M_i|=(k+1)m_i$, $\alpha(M_i)\leq k+1$, and $\omega(M_i)=\omega((k+1)m_i,k+1)$. Now construct $G$ by taking the disjoint union of $L_1,\ldots,L_a,M_1,\ldots,M_b$, and add every edge between any two of these components. Then clearly, $|G|=kl+(k+1)m=n$, and \[ \chi(G) =\sum_{i=1}^a\chi(L_i)+\sum_{j=1}^b\chi(M_j)
\geq \sum_{i=1}^a\frac{|L_i|}{k}+\sum_{j=1}^b\frac{|M_j|}{k+1}=l+m=\lceil rn\rceil, \] furthermore \[ \omega(G)=\sum_{i=1}^a\omega(L_i)+\sum_{j=1}^b\omega(M_j)=q(l,k)+q(m,k+1). \] Now apply the usual trick of dropping edges until the chromatic number is down to $\lceil rn\rceil$ to get the example graphs. \end{proof}
\begin{corollary} Let $0<r\leq 1$ such that $1/r$ is not an integer, and $k,l,m$ defined as in Theorem~\ref{thm:upperbound}. Then \[ Q(n,\lceil rn\rceil)\leq\omega(kl,k)+\omega((k+1)m,k+1). \] \end{corollary} \begin{proof} By the definition of the function $q(\beta,\alpha)$ from Theorem~\ref{thm:upperbound}, we have $q(\beta,\alpha)\leq \omega(\alpha\beta,\alpha)$, and the statement follows. \end{proof} Note that $Q(n,\lceil rn\rceil)\leq \omega(kl,k)+\omega((k+1)m,k)$ is a direct consequence of Theorem~\ref{thm:main} and sub-additivity of $\omega$ in the first variable. But the corollary does provide actual improvements over Theorem~\ref{thm:main}, because the function $\omega(\cdot,\cdot)$ is monotone decreasing in the second variable.
The corollary may be weaker than Theorem~\ref{thm:upperbound}, but it has the advantage that it expresses the upper bound as the sum of only two inverse Ramsey numbers, as opposed to a minimum over sums of inverse Ramsey numbers, as the theorem does.
\section{Final notes}
The bound provided by Theorem~\ref{thm:upperbound} is almost certainly not exact, because one can probably improve on it just by choosing the sizes of the Ramsey graphs $L_i$ and $M_j$ more carefully. But perhaps the more interesting open problem is to establish an asymptotically correct formula for the sequence $Q(n,\lceil rn\rceil)$. As we mentioned above, we believe that the bounds in Section~\ref{S:upperbound} have a good chance of being asymptotically correct, but proving this would probably require a good understanding of certain restricted clique packings of graphs.
\section{Acknowledgment}
The authors would like to thank the anonymous referees for valuable suggestions, in particular on the simplification of the proof of the main theorem.
\end{document} |
\begin{document}
\author{Michel J.\,G. WEBER}
\address{IRMA, UMR 7501, 7 rue Ren\'e Descartes, 67084 Strasbourg Cedex, France.} \email{michel.weber@math.unistra.fr}
\keywords{ Arithmetical function, Erd\H os-Zaremba's function, Davenport's function. \vskip 1 pt \emph{2010 Mathematics Subject Classification}: Primary 11D57, Secondary 11A05, 11A25.}
\begin{abstract}
Erd\H os and Zaremba showed that
$ \limsup_{n\to \infty} \frac{\Phi(n)}{(\log\log n)^2}=e^\g$, $\g$ being Euler's constant, where $\Phi(n)=\sum_{d|n} \frac{\log d}{d}$.
We extend this result to the function $\Psi(n)= \sum_{d|n} \frac{(\log d )(\log\log d)}{d}$ and some other functions. We show that
$ \limsup_{n\to \infty}\, \frac{\Psi(n)}{(\log\log n)^2(\log\log\log n)}\,=\, e^\g$.
The proof
requires a new approach. As an application, we prove that for any $\eta>1$, any finite sequence of reals $\{c_k, k\in K\}$, $\sum_{k,\ell\in K} c_kc_\ell \, \frac{\gcd(k,\ell)^{2}}{k\ell} \le C(\eta) \sum_{\nu\in K} c_\nu^2(\log\log\log \nu)^\eta \Psi(\nu) $, where $C(\eta)$ depends on $\eta$ only. This
improves a recent result obtained by the author.
\end{abstract}
\maketitle
\section{\bf Introduction.} \label{s1}
\rm
Erd\H os and Zaremba showed in \cite{EZ} the following result concerning the arithmetical function $\Phi(n)=\sum_{d|n} \frac{\log d}{d}$:
\begin{equation}\label{EZ1}\limsup_{n\to \infty} \frac{\Phi(n)}{(\log\log n)^2}=e^\g, \end{equation}
where $\g $ is Euler's constant. This function appears in the study of good lattice points in numerical integration, see Zaremba \cite{Z}. The proof
is based on the identity
\begin{eqnarray}\label{formule}\Phi(n)
=\sum_{i=1}^r \sum_{\nu_i=1}^{\a_i}\frac{\log p_i^{\nu_i}}{p_i^{\nu_i}}\sum_{\d|n p_i^{-\a_i}}\frac{1}{\d} , \qq\qq ( n= p_1^{\a_1}\ldots p_r^{\a_r}), \end{eqnarray} which follows from
\begin{eqnarray}\label{base} \sum_{d|n} \frac{\log d}{d}&=& \sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_r=0}^{\a_r} \frac{1}{ p_1^{\m_1}\ldots p_{r}^{\m_{r}}}\Big(\sum_{i=1}^{r}\m_i\log p_i\Big) . \end{eqnarray}
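The identity \eqref{base} expressing the divisor sum $\Phi(n)$ as a sum over exponent tuples is easy to verify numerically; a small sketch taking the factorization of $n$ as input (illustration only):

```python
from math import log
from itertools import product

def divisors_from_factorization(fact):
    """All divisors of n = prod p^a from its factorization [(p, a), ...]."""
    divs = [1]
    for p, a in fact:
        divs = [d * p**e for d in divs for e in range(a + 1)]
    return divs

def phi_direct(fact):
    """Phi(n) = sum_{d|n} log(d)/d."""
    return sum(log(d) / d for d in divisors_from_factorization(fact))

def phi_multisum(fact):
    """Right-hand side of (base): sum over exponent tuples (mu_1, ..., mu_r)."""
    ps = [p for p, _ in fact]
    total = 0.0
    for mus in product(*[range(a + 1) for _, a in fact]):
        denom = 1
        for p, mu in zip(ps, mus):
            denom *= p**mu
        total += sum(mu * log(p) for p, mu in zip(ps, mus)) / denom
    return total

fact = [(2, 3), (3, 2), (5, 1)]      # n = 2^3 * 3^2 * 5 = 360
assert abs(phi_direct(fact) - phi_multisum(fact)) < 1e-12
```
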
Let $h(n)$ be non-decreasing on integers, $h(n)= o(\log n)$, and
consider the slightly larger function
\begin{eqnarray}\label{phi}\Phi_h(n)=\sum_{d|n} \frac{(\log d )\,h(d)}{d^{}}. \end{eqnarray}
In this case a formula similar to \eqref{base} no longer holds, the ``log-linearity'' being lost due to the extra factor $h(d)$. The study of this function requires a new approach.
We study in this work the case $h(n)= \log\log n$, that is the function
\begin{eqnarray}\label{psi}\Psi (n)=\sum_{d|n} \frac{(\log d )(\log\log d)}{d^{}}. \end{eqnarray}
We extend Erd\H os-Zaremba's result to this function, as well as to the functions
\begin{eqnarray*} \Phi_1(n)&=&\sum_{
p_1^{\m_1}\ldots p^{\m_{r}}_{r}|n}\frac{\sum_{i=1}^{r} \m_i(\log p_i)(\log\log p_i)}{p_1^{\m_1}\ldots p^{\m_{r}}_{r}}
\cr \Phi_2(n)&=&\sum_{d|n}\frac{(\log d) \log\, \O(d) }{d},
\end{eqnarray*}
where $\O(d)$ denotes as usual the total number of prime factors of $d$ counting multiplicity. These functions are linked to $\Psi$. \vskip 2 pt Throughout, $\log\log x$ (resp. $\log \log\log x$) equals $1$ if $0\le x \le e^{e}$ (resp. $0\le x \le e^{e^e}$), and equals $\log\log x$ (resp. $\log \log\log x$) in the usual sense if $x> e^{e}$ (resp. $x> e^{e^e}$).
\vskip 3pt
One verifies using standard arguments that \begin{equation} \label{phipsi} \limsup_{n\to \infty}\, \frac{\Phi_1(n)}{(\log \log n)^2(\log\log\log n)}\,\ge \,e^\g, \qq
\limsup_{n\to \infty}\, \frac{\Psi(n)}{(\log \log n)^2(\log\log\log n)}\,\ge \,e^\g,\end{equation} and in fact that
\begin{equation} \label{Phi1est} \limsup_{n\to \infty}\, \frac{\Phi_1(n)}{(\log \log n)^2(\log\log\log n)}\,= \,e^\g . \end{equation}
\vskip 2 pt
By the observation made after \eqref{base},
the corresponding extension of this result to $\Psi(n)$
is technically more delicate. It follows from \eqref{EZ1} that
\begin{equation}\label{trois} \limsup_{n\to \infty}\, \frac{\Psi(n)}{(\log \log n)^3}\, \le \, e^\g.
\end{equation} The question thus arises whether the exponent of $\log\log n$ in \eqref{trois} can be replaced by $2+\e$, with $\e>0$ small.
\vskip 2 pt We answer this question affirmatively by establishing the following precise result, which is the main result of this paper.
\begin{theorem}\label{t1}
\begin{equation*}
\limsup_{n\to \infty}\, \frac{\Psi(n)}{(\log \log n)^2(\log\log\log n)}\, =\, e^\g.\end{equation*}
\end{theorem}
An application of this result is given in Section \ref{s6}.
The upper bound is obtained, via the inequality
\begin{eqnarray}\label{convexdec} \Psi(n)\ \le \ \Phi_1(n) + \Phi_2(n),\end{eqnarray}
as a combination of an estimate of $\Phi_1(n)$ and the following estimate of $\Phi_2(n)$. Recall that Davenport's function $w(n)$ is defined by $w(n)=\sum_{p|n}\frac{\log p}{p}$.
According to Theorem 4 in \cite{D} we have,
\begin{equation}\label{wdavenport}\limsup_{n\to \infty}\frac{w(n)}{\log\log n}=1.
\end{equation}
Let also $\o(n)$ denote the number of prime divisors of $n$ counted without multiplicity. \begin{theorem}\label{t2}For all odd numbers $n$ we have \begin{eqnarray*} \Phi_2(n)\, \le \, C\, (\log\log\log \o(n))(\log \o(n))w(n), \end{eqnarray*} where $C$ is an absolute constant.\end{theorem}
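For concreteness, the functions $w(n)$ and $\o(n)$ entering Theorem \ref{t2} can be computed directly; a small sketch (naive trial division, illustration only, not used anywhere in the proofs):

```python
from math import log

def factorize(n):
    """Naive trial-division factorization: returns [(p, a), ...]."""
    fact, p = [], 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            fact.append((p, a))
        p += 1
    if n > 1:
        fact.append((n, 1))
    return fact

def w(n):
    """Davenport's function w(n) = sum_{p|n} (log p)/p."""
    return sum(log(p) / p for p, _ in factorize(n))

def small_omega(n):
    """omega(n): number of distinct prime divisors of n."""
    return len(factorize(n))

assert abs(w(12) - (log(2) / 2 + log(3) / 3)) < 1e-12
assert small_omega(360) == 3        # 360 = 2^3 * 3^2 * 5
# squarefree products of small primes make w(n) large, e.g. 30030 = 2*3*5*7*11*13,
# in line with the limsup behavior (\ref{wdavenport})
n = 30030
assert small_omega(n) == 6 and w(n) > w(n + 1)
```
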
Here and elsewhere $C$ (resp.\ $C(\eta)$) denotes a positive absolute constant (resp.\ a positive constant depending only on the parameter $\eta$). \vskip 2 pt The approach used in proving Theorem \ref{t2} can be adapted without difficulty to other arithmetical functions of similar type.
\vskip 5 pt Before continuing we mention some other existing extensions,
due to Sitaramaiah and Subbarao in \cite{SS2,SS1}. For instance, the case when $\log d$ is replaced by a non-negative additive function $g$ (i.e.\ $S(n)=\sum_{d|n} \frac{g(d)}{d}$) is studied in \cite{SS1}. In our case, note that $g(d)= (\log d)(\log\log d)$ (see \eqref{phi}), which is obviously not additive. It is proved that if $T(d)$ is one of the three arithmetical functions $\frac{\o(d)}{d}$, $\frac{\O(d)}{d}$, $\frac{\log \tau(d)}{d}$, then
\begin{equation}\label{sisu1}\limsup_{n\to \infty} \frac{\sum_{d|n}T(d)}{(\log\log n)(\log\log \log n)}=c_T\,e^\g, \end{equation} where $c_T= 1$ in the first two cases, and $c_T= \log 2$ in the third case. See also Remark 2.3. Their proof is based on the observation that $S(n)/\s_{-1}(n)$ is additive. Further, it is proved in \cite{SS2} that
for each positive integer $k$,
\begin{equation}\label{sisu2}\limsup_{n\to \infty} \frac{S_k(n)}{(\log\log n)^{k+1}} =c_k\,e^\g,\qq \qq (S_k(n)=\sum_{d|n}\frac{(\log d)^k}{d}) \end{equation} where $c_k$ is an explicit positive constant. The proof is elegant and based on the differentiation formula $S_k(n)=(-1)^kf^{(k)}(1)$, where $f(u)= \s_{-u}(n)$ and $f^{(k)}$ is the $k$-th derivative of $f$; it is specific to these sums.
\vskip 4 pt The paper is organized as follows.
Sections \ref{s4} and \ref{s5} form the main part of the paper and consist of the proof of Theorem \ref{t2}, which is long and technical and involves the construction of a binary tree (subsection \ref{subsection4.2.1}). The proof of Theorem \ref{t1} is given in Section \ref{s5}.
Section \ref{s2} contains complementary results and the proofs of \eqref{phipsi} and \eqref{Phi1est}. Section \ref{s6} concerns the aforementioned application of Theorem \ref{t1}. Additional remarks and results are given in Section \ref{s7}.
\vskip 2 pt \hskip -3 pt
\section{\bf Proof of Theorem \ref{t2}.} \label{s4}
We use a chaining argument. Throughout, we adopt the convention $0\log 0=0$.
\rm \vskip 6 pt
Let $n= p_1^{\a_1}\ldots p_r^{\a_r}$ be an odd number. We will use repeatedly the fact that
\begin{eqnarray}\label{min}\min_{i=1}^r p_i\ge 3. \end{eqnarray}
\vskip 10 pt
We note
that
\begin{eqnarray}\label{basic}
\Phi_2(n)&=&\sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_r=0}^{\a_r} \frac{1}{ p_1^{\m_1}\ldots p_{r}^{\m_{r}}}\sum_{i=1}^{r} \m_i\big(\log p_i\big)\log \Big(\sum_{i=1}^{r} \m_i \Big) \cr &=& \sum_{i=1}^r\underbrace{\sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_r=0}^{\a_r}}_{\substack{ \hbox{\small the sum relative}\\ \hbox{\small to $ \m_i$ is excluded} } } \underbrace{\frac{1}{ p_1^{\m_1}\ldots p_{r}^{\m_{r}}}}_{\substack{\hbox{\small $ p_i^{\m_i}$}\\ \hbox{\small is excluded}}}\ \Big(\sum_{\m_i=0}^{\a_i} \frac{\m_i\big(\log p_i\big)}{p_i^{\m_i}}\Big)\log \Big(\sum_{i=1}^{r} \m_i \Big) . \end{eqnarray}
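The decomposition in the first line of \eqref{basic} pairs each divisor $d=p_1^{\m_1}\cdots p_r^{\m_r}$ with its exponent tuple, with $\O(d)=\sum_i\m_i$; a numerical sketch verifying this for a sample odd $n$ (illustration only, with the convention $0\log 0=0$):

```python
from math import log
from itertools import product

def phi2_divisor_sum(fact):
    """Phi_2(n) = sum_{d|n} (log d) log(Omega(d)) / d from the factorization of n."""
    divs = [(1, 0)]                  # pairs (divisor, Omega(divisor))
    for p, a in fact:
        divs = [(d * p**e, o + e) for d, o in divs for e in range(a + 1)]
    return sum(log(d) * log(o) / d for d, o in divs if o >= 1)

def phi2_exponent_sum(fact):
    """First line of (basic): sum over exponent tuples (mu_1, ..., mu_r)."""
    ps = [p for p, _ in fact]
    total = 0.0
    for mus in product(*[range(a + 1) for _, a in fact]):
        if sum(mus) == 0:
            continue                 # the 0 log 0 = 0 convention
        denom = 1
        for p, mu in zip(ps, mus):
            denom *= p**mu
        total += sum(mu * log(p) for p, mu in zip(ps, mus)) * log(sum(mus)) / denom
    return total

fact = [(3, 2), (5, 1), (7, 1)]      # odd n = 3^2 * 5 * 7 = 315
assert abs(phi2_divisor_sum(fact) - phi2_exponent_sum(fact)) < 1e-12
```
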
As there is no order relation on the sequence $p_1, \ldots, p_r$, it suffices to study the sum
\begin{eqnarray} \label{Phi_2(r,n)} \Phi_2(r,n)&:=& \sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_{r-1}=0}^{\a_{r-1}} \frac{1}{ p_1^{\m_1}\ldots p_{r-1}^{\m_{r-1}}}\sum_{\m_r=0}^{\a_r}\frac{\m_r \log p_r}{p_r^{\m_r}}\log \Big[\sum_{i=1}^{r-1} \m_i + \m_r\Big].\end{eqnarray} The sub-sums in \eqref{Phi_2(r,n)} will be estimated by using a recursion argument.
\vskip 3 pt We first explain its principle and examine the structure of the sum $\Phi_2(r,n)$, anticipating somewhat the calculations. The last sum $\sum_{\m_r=0}^{\a_r}\frac{\m_r \log p_r}{p_r^{\m_r}}\log \big[\sum_{i=1}^{r-1} \m_i + \m_r\big]$ is of type $$\sum_{\m=0}^{\a_r}\a \m \big(\log (A+\m)\big)e^{-\a \m}, \qq \qq \hbox{$\a =\log p_r$, \quad $A=\sum_{i=1}^{r-1}\m_i$}.$$ It is easy to observe with \eqref{49} that the bound obtained in Lemma \ref{E1} will induce on the sum in $\m_{r-1}$ a logarithmic factor $\log \big[h+ \sum_{i=1}^{r-2} \m_i + \m_{r-1}\big]$, where $h$ is a positive integer, and so on. More precisely,
\begin{eqnarray*}
& & \sum_{\m=0}^{\infty}\a \m \big(\log (A+\m)\big)e^{-\a \m}\ \le \ \a \big(\log (A+1)\big)e^{-\a}+ 2\a \big(\log (A+2)\big)e^{-2\a}
\cr & &\qq + \Big\{3\a \log (A+3)+3 \log (A+3)+ \frac{1}{\a} \log (A+3) + \frac{1}{\a} + \frac{1}{\a^2(A+3)}\Big\} e^{-3\a}.\end{eqnarray*} provided $A\ge 1$, and $\a \ge 1$. Whence the bound,
\begin{eqnarray*} & & \frac{\log p_r}{p_r} \big(\log (A+1)\big)+ \frac{2\log p_r}{p^2_r}\big(\log (A+2)\big)
+ \frac{1}{p^3_r} \Big\{3\log p_r \log (A+3)+3 \log (A+3)\cr & &+ \frac{1}{\log p_r} \log (A+3) + \frac{1}{\log p_r} + \frac{1}{(\log p_r)^2(A+3)}\Big\}.\end{eqnarray*} By reporting this bound in \eqref{Phi_2(r,n)}, we get sums of type \begin{eqnarray*}
\sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_{r-1}=0}^{\a_{r-1}} \frac{\log \big[\sum_{i=1}^{r-1} \m_i + h\big]}{ p_1^{\m_1}\ldots p_{r-1}^{\m_{r-1}}} \qq h=1,2,3,\end{eqnarray*} weighted by new coefficients; this is displayed in \eqref{h1234}. By using \eqref{sumphi2}, the last sum is bounded by
\begin{eqnarray*} & & \log \big[\sum_{i=1}^{r-2} \m_i + h\big]
+ \frac{1}{p_{r-1}}\Big(\log (\sum_{i=1}^{r-2} \m_i + h+1)\cr & &+ \frac{\log\big[\sum_{i=1}^{r-2} \m_i + h+1\big]}{\log p_{r-1}} + \frac{1}{(\log p_{r-1})^2(\sum_{i=1}^{r-2} \m_i + h+1)}\Big) .\cr{} & & \end{eqnarray*} Recursing once more allows one to bound $\Phi_2(r,n)$ again. The remaining sums will thereafter all be of the same type. The factor $ \log \big[\sum_{i=1}^{r-1} \m_i + h\big]$ induces on the sum of order $(r-2)$ a factor $\log\big[\sum_{i=1}^{r-2} \m_i + h+1\big]$.
The whole matter thus consists in understanding how the new coefficients are generated, and in particular in checking that a coefficient of order $1+\e$ does not produce, by iteration, a coefficient of order $(1+\e)^r$. A recurrence inequality established in Lemma \ref{E3} will allow one to control their magnitude efficiently.
\subsection{Preparation} Some technical lemmas will be needed.
\begin{lemma}\label{phivar} {\rm (i)} Let $\p_1(x)=x\big(\log (A+x)\big) e^{-\a x}$, $\p_2(x)=\big(\log (A+x)\big) e^{-\a x}$. Then $\p_1(x)$ is non-increasing on $[3, \infty)$ if $A\ge 1$ and $\a\ge \log 2$. Further, $\p_2(x)$ is non-increasing on $[1,\infty)$, if $A\ge 1$ and $\a\ge 1$.
\vskip 3 pt \noi {\rm (ii)} Assume that $A\ge 1$ and $\a\ge \log 2$. For any integer $m\ge 1$, \begin{eqnarray} \label{phi.intest1} \a \int_m^\infty x \big(\log (A+x)\big) e^{-\a x}\hbox{\rm d}x&\le & \frac{1}{\a^2(A+m)}e^{-\a m}+ \frac{1}{\a}e^{-\a m} + \frac{1}{\a}\big(\log (A+m)\big)e^{-\a m} \cr & & + m\big(\log (A+m)\big) e^{-\a m}.\end{eqnarray}
\vskip 3 pt \noi {\rm (iii)} Assume that $A\ge 1$ and $\a\ge 1$. Then,
\begin{eqnarray}\label{intest2}
\int_1^\infty \big(\log (A+x)\big) e^{-\a x}\hbox{\rm d} x&\le &\frac{\log (A+1)}{\a} e^{-\a}+\frac{1}{\a^2(A+1)}e^{-\a}.
\end{eqnarray} \end{lemma}
\begin{proof}[\cmit Proof] \rm (i) We have
$\p_1'(x) = \big(\log (A+x)\big)e^{-\a x}+ \frac{x}{A+x}e^{-\a x} -\a x \big(\log (A+x)\big) e^{-\a x}$.
By assumption and since $\p_1'(x)\le 0 \Leftrightarrow \frac{1}{x}+ \frac{1}{(A+x)\log (A+x)}\le \a$, we get $$ \frac{1}{x}+ \frac{1}{(A+x)\log (A+x)}\le\frac{1}{3}+\frac{1}{8\log 2} \le \frac{1}{3}+\frac{1}{5}<\log 2\le \a.$$ Similarly
$ \p_2'(x)=\frac{1}{A+x}e^{-\a x}-\a \big(\log (A+x)\big) e^{-\a x}$.
As $\p_2'(x)\le 0 \Leftrightarrow (A+x)\log (A+x)\ge \frac{1}{\a}$, we also get $$(A+x)\log (A+x)\ge 2\log 2>1\ge \frac{1}{\a}.$$
\noi (ii) \rm We deduce from (i) that \begin{equation}\label{49} \a x \big(\log (A+x)\big) e^{-\a x}= \big(\log (A+x)\big)e^{-\a x}+ \frac{x}{A+x}e^{-\a x}-\big( x \big(\log (A+x)\big) e^{-\a x}\big)' .
\end{equation} By integrating,
\begin{eqnarray*}\label{phi1int} \a \int_m^\infty x \big(\log (A+x)\big) e^{-\a x}\hbox{\rm d}x&=& \int_m^\infty \big(\log (A+x)\big)e^{-\a x}\hbox{\rm d} x+ \int_m^\infty \frac{x}{A+x}e^{-\a x}\hbox{\rm d} x \cr & & \quad + m\big(\log (A+m)\big) e^{-\a m}.\end{eqnarray*} Similarly
\begin{equation*}
\a \int_m^\infty \big(\log (A+x)\big) e^{-\a x}\hbox{\rm d} x=\int_m^\infty
\frac{1}{A+x}e^{-\a x}\hbox{\rm d} x+ \big(\log (A+m)\big) e^{-\a m}.\end{equation*} By combining we get, \begin{eqnarray*}
\a \int_m^\infty x \big(\log (A+x)\big) e^{-\a x}\hbox{\rm d}x&=& \frac{1}{\a}\int_m^\infty\frac{1}{A+x}e^{-\a x}\hbox{\rm d} x+ \int_m^\infty \frac{x}{A+x}e^{-\a x}\hbox{\rm d} x \cr & & + \frac{1}{\a}\big(\log (A+m)\big)e^{-\a m} + m\big(\log (A+m)\big) e^{-\a m}. \end{eqnarray*} Therefore,\begin{eqnarray*} \a \int_m^\infty x \big(\log (A+x)\big) e^{-\a x}\hbox{\rm d}x&\le & \frac{1}{\a^2(A+m)}e^{-\a m}+ \frac{1}{\a}e^{-\a m} + \frac{1}{\a}\big(\log (A+m)\big)e^{-\a m} \cr & & + m\big(\log (A+m)\big) e^{-\a m}.\end{eqnarray*}
\noi (iii) \rm We deduce from (i) that
\begin{eqnarray*}
\int_1^N \big(\log (A+x)\big) e^{-\a x}\hbox{\rm d} x&=&\frac{1}{\a}\int_1^N\frac{1}{(A+x)}e^{-\a x}\hbox{\rm d} x\cr & & \qq -\frac{1}{\a}\Big(\big(\log (A+N)\big) e^{-\a N}-\big(\log (A+1)\big) e^{-\a}\Big).\end{eqnarray*}
As $\frac{1}{\a}\int_1^N\frac{1}{A+x}e^{-\a x}\hbox{\rm d} x\le \frac{1}{\a^2(A+1)}e^{-\a}$, letting $N$ tend to infinity gives,
\begin{eqnarray*}
\int_1^\infty \big(\log (A+x)\big) e^{-\a x}\hbox{\rm d} x&\le &\frac{\log (A+1)}{\a} e^{-\a}+\frac{1}{\a^2(A+1)}e^{-\a}.\end{eqnarray*}
\end{proof}
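As a quick numerical sanity check of \eqref{phi.intest1} (it plays no role in the proof), one can compare a truncated midpoint approximation of the left-hand integral with the right-hand side; the parameter values below are illustrative admissible choices.

```python
import math

def lhs_integral(A, a, m, T=50.0, N=20000):
    # midpoint Riemann sum approximating a * int_m^T x log(A+x) e^{-a x} dx;
    # the tail beyond T is negligible for a >= log 2
    h = (T - m) / N
    s = 0.0
    for i in range(N):
        x = m + (i + 0.5) * h
        s += x * math.log(A + x) * math.exp(-a * x)
    return a * s * h

def rhs_bound(A, a, m):
    # right-hand side of the lemma's estimate (ii)
    e = math.exp(-a * m)
    return (e / (a * a * (A + m)) + e / a
            + math.log(A + m) * e / a + m * math.log(A + m) * e)

for A in (1, 2, 5):
    for a in (math.log(2), 1.0, 2.0):
        for m in (1, 2, 3):
            assert lhs_integral(A, a, m) <= rhs_bound(A, a, m)
```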
\begin{lemma}\label{E1} Assume that $A\ge 1$, and $\a \ge 1$. Then,
\begin{eqnarray*}
& & \sum_{\m=0}^{\infty}\a \m \big(\log (A+\m)\big)e^{-\a \m}\ \le \ \a \big(\log (A+1)\big)e^{-\a}+ 2\a \big(\log (A+2)\big)e^{-2\a}
\cr & &\qq\qq + \Big\{3\a \log (A+3)+3 \log (A+3)+ \frac{1}{\a} \log (A+3) + \frac{1}{\a} + \frac{1}{\a^2(A+3)}\Big\} e^{-3\a}.\end{eqnarray*} \end{lemma} \begin{proof}[\cmit Proof] \rm As \begin{eqnarray*}\sum_{\m=0}^{\infty}\a \m \big(\log (A+\m)\big)e^{-\a \m}&=&\a \big(\log (A+1)\big)e^{-\a}+ 2\a \big(\log (A+2)\big)e^{-2\a} \cr & & + 3\a \big(\log (A+3)\big)e^{-3\a}+ \a\sum_{\m=4}^\infty \m \big(\log (A+\m)\big)e^{-\a \m}, \end{eqnarray*} by applying Lemma \ref{phivar}-(ii), we get
\begin{eqnarray*}
\a\sum_{\m=4}^\infty \m \big(\log (A+\m)\big)e^{-\a \m}&\le &\a\int_{3}^\infty x \big(\log (A+x)\big)e^{-\a x} \hbox{\rm d} x
\cr &\le & \frac{1}{\a^2(A+3)}e^{-3\a}+ \frac{1}{\a}e^{-3\a } + \frac{\log (A+3)}{\a}e^{-3\a} + 3\big(\log (A+3)\big) e^{-3\a}.\end{eqnarray*} Whence,
\begin{eqnarray*}
& & \sum_{\m=0}^{\infty}\a \m \big(\log (A+\m)\big)e^{-\a \m}\ \le \ \a \big(\log (A+1)\big)e^{-\a}+ 2\a \big(\log (A+2)\big)e^{-2\a}
\cr & & + \Big\{3\a \log (A+3)+3 \log (A+3)+ \frac{1}{\a} \log (A+3) + \frac{1}{\a} + \frac{1}{\a^2(A+3)}\Big\} e^{-3\a}.\end{eqnarray*}
\end{proof} \begin{lemma}\label{E3a} Under assumption \eqref{min}
we have \begin{eqnarray*}\sum_{\m_s=0}^{\infty} \frac{\log \big(\sum_{i=1}^{s} \m_i +h\big)}{p_s^{\m_s}}&\le & \log \big(\sum_{i=1}^{s-1} \m_i + h\big) + \frac{1}{p_{s}}\Big( 1 + \frac{1}{ \log p_{s}}\Big)\log \big(\sum_{i=1}^{s-1} \m_i + h+1\big) \cr & &\quad +\frac{1}{(1+(\sum_{i=1}^{s-1} \m_i + 2))(\log p_s)^2p_s}. \end{eqnarray*} In particular, \begin{eqnarray*}\sum_{\m_s=0}^{\infty} \frac{\log \big(\sum_{i=1}^{s} \m_i +h\big)}{p_s^{\m_s}} &\le &
\Big(1+ \frac{1}{p_{s}}\big( 1 + \frac{1}{ \log p_{s}} +\frac{1}{3(\log p_s)^2}\big) \Big)\log \big(\sum_{i=1}^{s-1} \m_i + h+2\big). \end{eqnarray*}\end{lemma}
\begin{proof}[\cmit Proof]
\rm As
\begin{eqnarray*}\sum_{\m=0}^{\infty} \big(\log (A+\m)\big) e^{-\a \m} &=& \log A + \big(\log (A+1)\big) e^{-\a }+ \sum_{\m=2}^{\infty} \big(\log (A+\m)\big) e^{-\a \m}
\cr &\le & \log A + \big(\log (A+1)\big) e^{-\a }+ \int_{1}^{\infty} \big(\log (A+x)\big) e^{-\a x}\hbox{\rm d} x\end{eqnarray*} we deduce from Lemma \ref{phivar}-(iii), \begin{equation}\label{sumphi2}\sum_{\m=0}^{\infty} \big(\log (A+\m)\big) e^{-\a \m} \ \le \ \log A + e^{-\a}\Big(\log (A+1)+ \frac{\log (A+1)}{\a} + \frac{1}{\a^2(A+1)}\Big) . \end{equation}
Consequently,
\begin{eqnarray*}\sum_{\m_s=0}^{\infty} \frac{\log \big(\sum_{i=1}^{s} \m_i + h\big)}{p_s^{\m_s}}&\le & \log \big(\sum_{i=1}^{s-1} \m_i + h\big)+ \frac{1}{p_{s}}\Big( 1 + \frac{1}{ \log p_{s}}\Big)\log \big(\sum_{i=1}^{s-1} \m_i + h+1\big)\cr & &\quad +\frac{1}{(1+(\sum_{i=1}^{s-1} \m_i + 2))(\log p_s)^2p_s}. \end{eqnarray*} Finally, \begin{eqnarray*}\sum_{\m_s=0}^{\infty} \frac{\log \big(\sum_{i=1}^{s} \m_i + h\big)}{p_s^{\m_s}}
&\le &
\Big(1+ \frac{1}{p_{s}}\big( 1 + \frac{1}{ \log p_{s}} +\frac{1}{3(\log p_s)^2}\big)\Big)\log \big(\sum_{i=1}^{s-1} \m_i + h+2\big) . \end{eqnarray*}
\end{proof}
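The first bound of Lemma \ref{E3a} can be checked numerically on small parameter values (a sanity check only, not part of the proof; below $B$ stands for $\sum_{i=1}^{s-1}\m_i$, and the infinite sum is truncated).

```python
import math

def lhs_sum(p, B, h, terms=120):
    # sum_{mu >= 0} log(B + mu + h) / p^mu, truncated
    return sum(math.log(B + mu + h) * p ** -mu for mu in range(terms))

def rhs_bound(p, B, h):
    # right-hand side of the lemma's first estimate
    lp = math.log(p)
    return (math.log(B + h)
            + (1 + 1 / lp) * math.log(B + h + 1) / p
            + 1 / ((B + 3) * lp**2 * p))

for p in (3, 5, 7):
    for B in (0, 1, 2, 5):
        for h in (1, 2, 3):
            assert lhs_sum(p, B, h) <= rhs_bound(p, B, h)
```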
\begin{corollary}\label{E2} Assume that condition \eqref{min} is satisfied.
\vskip 3 pt {\rm (i)} If $\sum_{i=1}^{r-1}\m_i\ge 1$, then \begin{align*}
\sum_{\m_r=0}^{\a_r }&\frac{\m_r\log p_{r}}{p_r^{\m_r}}\log \big(\sum_{i=1}^{r-1} \m_i + \m_r\big) \le \frac{\log p_r}{p_r}\log \big( \sum_{i=1}^{r-1} \m_i + 1\big)+ \frac{2\log p_r}{p_r^2} \log \big( \sum_{i=1}^{r-1} \m_i + 2\big) \cr &+ \frac{1}{p_r^3} \Big(3\log p_r + 3+ \frac{1}{\log p_r} \Big)\log \big( \sum_{i=1}^{r-1} \m_i + 3\big) + \frac{1}{p_r^3\log p_r}\Big( 1+ \frac{1}{(\sum_{i=1}^{r-1} \m_i + 3)\log p_r}\Big). \end{align*} Further, \begin{eqnarray*} \sum_{\m_r=0}^{\a_r }\frac{\m_r\log p_{r}}{p_r^{\m_r}}\log \big(\sum_{i=1}^{r-1} \m_i + \m_r\big)& \le & 5\ \frac{\log p_r}{p_r}\log \big( \sum_{i=1}^{r-1} \m_i + 3\big).\end{eqnarray*}
\vskip 3 pt {\rm (ii)} If $\sum_{i=1}^{r-1}\m_i=0$, then \begin{eqnarray*} \sum_{\m_r=0}^{\a_r }\frac{\m_r\log p_{r}}{p_r^{\m_r}}\log \big(\sum_{i=1}^{r-1} \m_i + \m_r\big) &\le & 18\ \frac{\log p_{r}}{p_{r}} .\end{eqnarray*}\end{corollary}
\begin{proof}[\cmit Proof.] \rm (i) The first inequality follows from Lemma \ref{E1} with the choice $\a =\log p_r$ and $A=\sum_{i=1}^{r-1}\m_i$, noting that by assumption \eqref{min},
$\a >1$. As $p_r\ge 3$, it is also immediate that \begin{align*}
& \sum_{\m_r=0}^{\a_r }\frac{\m_r\log p_{r}}{p_r^{\m_r}} \log \big(\sum_{i=1}^{r-1} \m_i + \m_r\big) \cr &\ \le \Big\{3 \frac{\log p_r}{p_r} + \frac{\log p_r }{9p_r} \Big(3 + \frac{3}{\log p_r}+ \frac{1}{(\log p_r)^2} \Big)\Big\}\log \big( \sum_{i=1}^{r-1} \m_i + 3\big) + \frac{1}{9p_r\log p_r}\Big( 1+ \frac{1}{4\log p_r}\Big) \cr &\ \le \ 5\ \frac{\log p_r}{p_r}\log \big( \sum_{i=1}^{r-1} \m_i + 3\big).\end{align*}
\noi (ii) If $\sum_{i=1}^{r-1}\m_i=0$, the sums relative to $\m_i$, $1\le i\le r-1$, do not contribute. Further, \begin{eqnarray*}
\sum_{\m_r=0}^{\a_r }\frac{\m_r\log p_{r}}{p_r^{\m_r}}\log \big(\sum_{i=1}^{r-1} \m_i + \m_r\big)&=&\sum_{\m_r=2}^{\a_r}\frac{\m_r\log p_{r}}{p_{r}^{\m_r}}\log \m_r\ =\ \sum_{\m=1}^{\a_r-1}\frac{(\m+1)\log p_{r}}{p_{r}^{\m+1}}\log (\m+1) \cr &\le &\frac{1}{p_{r}}\Big\{\sum_{\m=1}^{\infty}\frac{\m\log p_{r}}{p_{r}^{\m}}\log (\m+1)+\sum_{\m=1}^{\infty}\frac{\log p_{r}}{p_{r}^{\m}}\log (\m+1)\Big\} . \end{eqnarray*} Lemma \ref{E1} applied with $A=1$ and $\a =\log p_{r}$ gives the bound
\begin{eqnarray*}
\sum_{\m=1}^{\infty}\frac{\m\log p_{r}}{p_{r}^{\m}}\log (\m+1)
&\le &
\frac{(\log 2)\log p_{r}}{p_{r}} + \frac{2(\log 3)\log p_{r}}{p_{r}^2} + \frac{1}{p_{r}^3}\Big\{(6\log 2)(\log p_{r}) \cr & & +6 \log 2+ \frac{2\log 2}{(\log p_{r})} + \frac{1}{(\log p_{r})} + \frac{1}{4(\log p_{r})^2}\Big\} \cr &\le & 8\Big(\frac{\log p_{r}}{p_{r}}+\frac{1}{p_{r}^3} \Big) .\end{eqnarray*} Next estimate \eqref{sumphi2} applied with $A=1$ and $\a =\log p_{r}$, further gives, \begin{eqnarray*}\sum_{\m=1}^{\infty}\frac{\log p_{r}}{p_{r}^{\m}}\log (\m+1) &\le & \frac{1}{p_{r}}\Big(\log 2+ \frac{\log 2}{\log p_{r}} + \frac{1}{2(\log p_{r})^2}\Big) \ \le \ \frac{2}{p_{r}}.\end{eqnarray*} Whence, $ \sum_{\m_r=0}^{\a_r }\frac{\m_r\log p_{r}}{p_r^{\m_r}}\log \big(\sum_{i=1}^{r-1} \m_i + \m_r\big) \le 18\ \frac{\log p_{r}}{p_{r}} .$\end{proof}
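The numerical constants $5$ and $18$ of Corollary \ref{E2} can also be sanity-checked numerically (truncating the series at a large index; this is illustrative only and plays no role in the proof).

```python
import math

def S(pr, M, terms=100):
    # sum_{mu >= 0} (mu log pr / pr^mu) log(M + mu), with 0 log 0 = 0
    s = 0.0
    for mu in range(terms):
        if M + mu > 0:
            s += mu * math.log(pr) * pr ** -mu * math.log(M + mu)
    return s

for pr in (3, 5, 7, 11):
    for M in (1, 2, 5, 10):                       # case (i): M >= 1
        assert S(pr, M) <= 5 * math.log(pr) / pr * math.log(M + 3)
    assert S(pr, 0) <= 18 * math.log(pr) / pr     # case (ii): M = 0
```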
\begin{remark}\rm
As $\log \big(\sum_{i=1}^{s} \m_i + h\big)\le \log \big(\O(n)+3\big)$, one can deduce from Corollary \ref{E2}-(ii) that \begin{eqnarray*} \Phi_2(r,n)
&\le &18\ \frac{\log p_{r}}{p_{r}}\log (\O(n)+3) \prod_{i=1}^{r}\Big(\frac{1}{1-p_i^{-1}}\Big) .\end{eqnarray*} Hence, by the observation made at the beginning of Section \ref{s4}, \begin{eqnarray*} \Phi_2(n)&\le &18\ \big(\log (\O(n)+3)\big)\Big( \sum_{j=1}^r\frac{\log p_{j}}{p_{j}}\Big) \prod_{i=1}^{r}\Big(\frac{1}{1-p_i^{-1}}\Big) .\end{eqnarray*} Combining this with the bound for $\Phi_1(n)$ established in Lemma \ref{phi1maj}, and then using inequality \eqref{convexdec}, gives
\begin{align}\label{convexdec1} \Psi(n)\ \le\ \Big(\prod_{j=1 }^r \frac{1}{1-p_j^{ -1}} \Big)\bigg\{\sum_{i=1}^r \frac{(\log p_i)\big(\log\log p_i\big)}{p_i-1} + 18\ \Big( \sum_{i=1}^r\frac{\log p_{i}}{p_{i}}\Big)\log (\O(n)+3) \bigg\},\end{align} recalling that $r=\o(n)$. Whence, by invoking Proposition \ref{tEZm} and noticing that $\o(n)\le\O(n)\le \log_{2} n$, \begin{eqnarray*} \Psi(n)&\le & e^\g(1+o(1)) (\log\log n)^2\big(\log\log\log n+ 18 w(n)\big).\end{eqnarray*} \vskip 3 pt The finer estimate of $\Psi(n)$ will be derived from a more precise study of the coefficients of $\Phi_2(r,n)$. This is the object of the next sub-section. \end{remark}
\subsection{Estimates of $\boldsymbol{ \Phi_2(r,n)}$.}
\rm ${}$
We define successively
\begin{eqnarray}\label{n1}{}\begin{cases} \ \ \ \ \ \ \ \ \m\, =\, (\m_1, \ldots, \m_r), \qq (\m_1, \ldots, \m_r)\in \displaystyle{\prod_{i=1}^r\big([0,\a_i]\cap {\mathbb N}\big)},
\cr &\cr \ \ \ p_\m(s)\, =\, p_1^{-\m_1}\ldots p_s^{-\m_s}, \qquad 1\le s\le r, \cr &\cr \quad \ \ \ \Pi_s\, =\, \displaystyle{\sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_s=0}^{\a_s}p_\m(s)\, =\, \prod_{\ell = 1}^s\Big(\frac{1-p_\ell^{-\a_\ell -1}}{1-p_\ell^{-1}}\Big) }. \end{cases} \end{eqnarray}
Next, \begin{eqnarray*} \Phi_s(h)= \sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_s=0}^{\a_s}p_\m(s) \log \big(\sum_{i=1}^{s} \m_i + h\big), \qq \quad 1\le s\le r-1. \end{eqnarray*}
We also set
\begin{eqnarray}\label{n5} \begin{cases}c_1\, =\, 1, \qq c_2\, =\, \frac{2}{p_r},
\qq c_3\, =\, \frac{1}{p_r^2}\big(3 + \frac{3}{\log p_r}+ \frac{1}{(\log p_r)^2}\big),
\cr c_4\, =\, \frac{1}{p_r^3\log p_r}\big(1 + \frac{1}{3\log p_r}\big), \cr c_0\, =\, \frac{\log p_r}{ p_r}, \qq\qq\qq \ \, c\, =\, \sum_{i=1}^3 c_i, \cr b_s\, =\, \frac{1}{ p_s}\big( 1+ \frac{1}{ \log p_s}\big), \qq \ \ \b_s= \frac{1}{2 p_s (\log p_s)^2}. \qq\qq\qq\end{cases}
\end{eqnarray}
\subsubsection{\cmit Recurrence inequality.}\label{subsection4.2.1}
We deduce from the first part of Lemma \ref{E3a} that \begin{eqnarray*}\Phi_{s}(h)&=& \sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_{s-1}=0}^{\a_{s-1}}p_\m(s-1)\bigg\{\sum_{\m_s=0}^{\a_s}p_s^{-\m_s} \log \big(\sum_{i=1}^{s} \m_i + h\big)\bigg\} \cr &\le & \sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_{s-1}=0}^{\a_{s-1}}p_\m(s-1)\bigg\{\sum_{\m_s=0}^{\infty}p_s^{-\m_s} \log \big(\sum_{i=1}^{s} \m_i + h\big)\bigg\} \cr &\le &\sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_{s-1}=0}^{\a_{s-1}}p_\m(s-1)\bigg\{\log \big(\sum_{i=1}^{s-1} \m_i + h\big) + \cr & & \quad \frac{1}{p_{s}}\Big( 1 + \frac{1}{ \log p_{s}}\Big)\log \big(\sum_{i=1}^{s-1} \m_i + h+1\big)+\frac{1}{(1+(\sum_{i=1}^{s-1} \m_i + 2))(\log p_s)^2p_s}\bigg\} \cr &\le &\Phi_{s-1}(h)+ \frac{1}{p_{s}}\Big( 1 + \frac{1}{ \log p_{s}}\Big)\Phi_{s-1}(h+1)+ \frac{1}{2(\log p_s)^2p_s}\Pi_{s-1}. \end{eqnarray*}
\vskip 5 pt
Whence with the previous notation, \begin{lemma}\label{E3} Under assumption \eqref{min},
we have for $s=2,\ldots , r-1$,\begin{eqnarray*}\Phi_{s}(h)&\le &\Phi_{s-1}(h)+ b_s\Phi_{s-1}(h+1)+\b_s\Pi_{s-1}. \end{eqnarray*} \end{lemma}
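The recurrence inequality of Lemma \ref{E3} can be verified numerically on small configurations (an illustrative sanity check; the primes and exponents below are arbitrary admissible choices satisfying \eqref{min}).

```python
import math
from itertools import product

def phi(primes, alphas, s, h):
    # Phi_s(h) = sum over 0 <= mu_i <= alpha_i, i <= s, of p_mu(s) log(mu_1+...+mu_s+h)
    total = 0.0
    for mus in product(*(range(a + 1) for a in alphas[:s])):
        weight = math.prod(p ** -m for p, m in zip(primes[:s], mus))
        total += weight * math.log(sum(mus) + h)
    return total

def Pi(primes, alphas, s):
    # Pi_s = prod_{l <= s} sum_{mu=0}^{alpha_l} p_l^{-mu}
    return math.prod(sum(p ** -m for m in range(a + 1))
                     for p, a in zip(primes[:s], alphas[:s]))

primes, alphas = [3, 5, 7, 11], [2, 3, 1, 2]
for s in (2, 3):
    p = primes[s - 1]
    b_s = (1 + 1 / math.log(p)) / p
    beta_s = 1 / (2 * p * math.log(p) ** 2)
    for h in (1, 2, 3):
        lhs = phi(primes, alphas, s, h)
        rhs = (phi(primes, alphas, s - 1, h)
               + b_s * phi(primes, alphas, s - 1, h + 1)
               + beta_s * Pi(primes, alphas, s - 1))
        assert lhs <= rhs
```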
\vskip 5 pt
Now by using estimate (i) of Corollary \ref{E2} and the notation introduced, we have, under assumption \eqref{min}, if $\sum_{i=1}^{r-1}\m_i\ge 1$, \begin{align}\label{h1234}
\sum_{\m_r=0}^{\a_r }&\frac{\m_r\log p_{r}}{p_r^{\m_r}}\log \big(\sum_{i=1}^{r-1} \m_i + \m_r\big) \le \frac{\log p_r}{p_r}\log \big( \sum_{i=1}^{r-1} \m_i + 1\big)+ \frac{2\log p_r}{p_r^2} \log \big( \sum_{i=1}^{r-1} \m_i + 2\big) \cr &+ \frac{1}{p_r^3} \Big(3\log p_r + 3+ \frac{1}{\log p_r} \Big)\log \big( \sum_{i=1}^{r-1} \m_i + 3\big) + \frac{1}{p_r^3\log p_r}\Big( 1+ \frac{1}{(\sum_{i=1}^{r-1} \m_i + 3)\log p_r}\Big) \cr & \le c_0c_1
\log \big( \sum_{i=1}^{r-1} \m_i + 1\big)+ c_0c_2
\log \big( \sum_{i=1}^{r-1} \m_i + 2\big)
+c_0c_3
\log \big( \sum_{i=1}^{r-1} \m_i + 3\big) + c_4
\cr & =c_0\sum_{h=1}^3c_h \log \big( \sum_{i=1}^{r-1} \m_i + h\big)+c_4,\end{align} since $\frac{1}{p_r^3\log p_r}\big(1+ \frac{1}{(\sum_{i=1}^{r-1} \m_i + 3)\log p_r}\big)\le c_4$. \vskip 3 pt Therefore, under assumption \eqref{min}, if $\sum_{i=1}^{r-1}\m_i\ge 1$, \begin{eqnarray}\Phi_2(r,n)&\le & c_0 \underbrace{\sum_{h=1}^3 c_h \Phi_{r-1}(h)}_{(1)} + c_4 \Pi_{r-1}.\end{eqnarray} Indeed, \begin{eqnarray*}\Phi_2(r,n)&=& \sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_{r-1}=0}^{\a_{r-1}} \frac{1}{ p_1^{\m_1}\ldots p_{r-1}^{\m_{r-1}}}\sum_{\m_r=0}^{\a_r}\frac{\m_r \log p_r}{p_r^{\m_r}}\log \Big[\sum_{i=1}^{r-1} \m_i + \m_r\Big] \cr &\le & \sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_{r-1}=0}^{\a_{r-1}} \frac{1}{ p_1^{\m_1}\ldots p_{r-1}^{\m_{r-1}}}\Big\{c_0\sum_{h=1}^3c_h \log \big( \sum_{i=1}^{r-1} \m_i + h\big)+c_4\Big\} \cr &= & c_0 \underbrace{\sum_{h=1}^3 c_h \Phi_{r-1}(h)}_{(1)} + c_4 \Pi_{r-1}.\end{eqnarray*} By applying the recurrence inequality with $s=r-1$ to $\Phi_{r-1}(h)$, one gets
\begin{eqnarray*} \Phi_2(r,n) &\le & c_0 \sum_{h=1}^3 c_h \big[\underbrace{\Phi_{r-2}(h)}_{(1)}+ \underbrace{b_{r-1}\Phi_{r-2}(h+1)}_{(2)} \big]+c_0c\b_{r-1}\Pi_{r-2}+ c_4 \Pi_{r-1}.\end{eqnarray*} By applying the recurrence inequality this time to $\Phi_{r-2}(h)$, one also gets
\begin{eqnarray*}
\Phi_2(r,n) &\le & c_0 \underbrace{\sum_{h=1}^3 c_h \Phi_{r-3}(h)}_{(1)}+c_0\underbrace{\sum_{h=1}^3 c_h b_{r-2}\Phi_{r-3}(h+1)}_{(3)}+c_0c\b_{r-2}\Pi_{r-3}
\cr& &+c_0\underbrace{\sum_{h=1}^3 c_h b_{r-1}\Phi_{r-3}(h+1)}_{(2)}+c_0\underbrace{\sum_{h=1}^3 c_h b_{r-1}b_{r-2}\Phi_{r-3}(h+2)}_{(4)}+c_0cb_{r-1}\b_{r-2}\Pi_{r-3}
\cr& &+c_0c\b_{r-1}\Pi_{r-2}+ c_4 \Pi_{r-1}.\end{eqnarray*} One easily verifies (see the expressions underbraced by (1)) that the coefficient of $\Phi_{r-1}(h)$ is the same as that of $\Phi_{r-2}(h)$ and $\Phi_{r-3}(h)$. The same holds for $\Phi_{r-2}(h+1)$; see the expressions underbraced by (2). New expressions underbraced by (3),\,(4) and linked to $\Phi_{r-3}(h+1)$, $\Phi_{r-3}(h+2)$ appear. \vskip 2 pt Each new coefficient is kept until the end of the iteration process generated by the recurrence inequality of Lemma \ref{E3}. \vskip 2 pt We also verify, when applying this inequality, that we pass from a bound expressed in terms of $\Phi_{r-1}(h)$ and $\Pi_{r-1}$ {\cmit only}, to a bound expressed in terms of $\Phi_{r-2}$ (at $h$ or $h+1$) and $\Pi_{r-2}$, $\Pi_{r-1}$ {\cmit only}. \vskip 2 pt This rule is general, and one verifies that when iterating this recurrence relation, we obtain at each step a bound depending on $\Phi_{r-d}$ and the products $\Pi_{r-d}, \Pi_{r-d+1},\ldots,$ $ \Pi_{r-1}$ only. \vskip 4 pt $\underline{\hbox{\cmssqi Binary tree}}$\,: The shift from $h$ to $h+1$ generates a binary tree whose branches are, at each division (the steps corresponding to the preceding iterations), either stationary\,: $\Phi_{r-d}(h)\to \Phi_{r-d-1}(h)$, or create new coefficients\,: $\Phi_{r-d}(h)\to \Phi_{r-d-1}(h+1)$. One can represent this by the diagram below drawn from Lemma \ref{E3}. \vskip 5 pt \centerline{ ${}_\downarrow$ \hbox{\small shift\,+1, new coefficients\ }${}_\downarrow$\hskip +52 pt} \vskip -15 pt \begin{eqnarray*}\Phi_{s}(h)&\le &\Phi_{s-1}(h)+ b_s\Phi_{s-1}(h+1)+\b_s\Pi_{s-1}. \end{eqnarray*} \centerline{ ${}^\uparrow$ \hbox{\small stationarity} ${}^\uparrow$ \hskip +120 pt} \vskip -3pt \centerline{\sevenrm Figure 1.}\par
\par \vskip 6 pt
\noi
Before continuing, we recall that by \eqref{sumphi2}, \begin{eqnarray*}\sum_{\m=0}^{\a_s} \big(\log (A+\m)\big) e^{-\a \m} &\le & \log A + e^{-\a}\Big(\log (A+1)+ \frac{\log (A+1)}{\a} + \frac{1}{\a^2(A+1)}\Big) . \end{eqnarray*}
\noi Thus \begin{eqnarray*} \Phi_1(v)&=& \sum_{\m=0}^{\a_1} \frac{\log (v+\m)}{p_1^\m}
\ \le\ \sum_{\m=0}^\infty \frac{\log (v+\m)}{p_1^\m} \cr &\le & \log v + \frac{1}{p_1}\Big( \log (v+1) +\frac{\log (v+1)}{\log p_1}+ \frac{1}{v(\log p_1)^2}\Big) \qq (v\ge 1). \end{eqnarray*}
Hence, for $h\ge 2$, \begin{eqnarray*} \Phi_1(h) &\le & C\log h. \end{eqnarray*} One easily verifies that the $d$-tuples formed with the $b_i$ all have $\Phi_{r-x}(h+d)$ as a factor at the intermediate stages. The terms having $\Phi_1(h+\cdot)$ as a factor form the sum
\begin{eqnarray} \label{somme} c_0\, \sum_{d=1}^{r-1}\Big( \sum_{1\le i_1<\ldots <i_d<r} b_{i_1}\ldots b_{i_d}\Big) \Phi_1(h+d), \end{eqnarray} once the iteration process achieved, that is after having applied $(r-1)$ times the recurrence inequality of Lemma \ref{E3}.
This sum can thus be bounded from above, up to a multiplicative constant, by (recalling that $h=1,2$ or $3$)
\begin{eqnarray*} c_0\sum_{d=1}^{r-1}(\log d)\, \Big( \sum_{1\le i_1<\ldots <i_d<r} b_{i_1}\ldots b_{i_d}\Big)
. \end{eqnarray*} But, for all positive integers $a_{ 1},\ldots, a_{r}$ and $1\le d\le r$, we have, $$\Big(\sum_{i=1}^r a_i\Big)^d\ge d! \sum_{ 1\le i_1<\ldots<i_d\le r } a_{i_1}\ldots a_{i_d}. $$ Thus
\begin{eqnarray*} \sum_{d=1}^{r-1}(\log d)\, \Big( \sum_{1\le i_1<\ldots <i_d<r} b_{i_1}\ldots b_{i_d}\Big) &\le & \sum_{d=1}^{r-1}\frac{(\log d)}{d!}\, \Big( \sum_{i=1}^r b_{i}\Big)^d
. \end{eqnarray*}
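The inequality $\big(\sum_{i=1}^r a_i\big)^d\ge d!\sum_{i_1<\ldots<i_d} a_{i_1}\ldots a_{i_d}$ used above is a direct consequence of the multinomial expansion; a quick numerical check:

```python
import math
from itertools import combinations

def elem_sym(a, d):
    # d-th elementary symmetric function of the list a
    return sum(math.prod(c) for c in combinations(a, d))

for a in ([1, 2, 3], [2, 3, 5, 7, 11], [1] * 6):
    for d in range(1, len(a) + 1):
        assert sum(a) ** d >= math.factorial(d) * elem_sym(a, d)
```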
As moreover,
$$b_{i}=\frac{1}{p_{i}}\big(1 + \frac{1}{\log p_{i}}\big)\le \frac{1}{p(i)} + \frac{1}{p(i)\log p(i)},$$
one has by means of \eqref{p(i)est},
\begin{eqnarray*} \sum_{i=1}^r b_{i}&\le & \sum_{i=1}^r\big(\frac{1}{i \log i} + \frac{1}{i (\log i)^2}\big) \ \le \ \log\log r +C. \end{eqnarray*}
Thus \begin{eqnarray*} \sum_{d=1}^{r-1}(\log d)\, \Big( \sum_{1\le i_1<\ldots <i_d<r} b_{i_1}\ldots b_{i_d}\Big) &\le & C \sum_{d=1}^{r-1}\frac{(\log d)}{d!}\, (\log\log r +C)^d
. \end{eqnarray*}
On the one hand,
\begin{eqnarray*} \sum_{\log d\le 1+\e +\log\log\log r}\frac{(\log d)}{d!}\,(\log\log r +C)^d&\le & \big(1+\e +\log\log\log r\big) \sum_{d>1}\frac{(\log\log r +C)^{d}}{d!}
\cr &\le &C \big(1+\e +\log\log\log r\big) \log r.
\end{eqnarray*} On the other hand, utilizing the classical estimate $d\,!\ge C \sqrt d \,d^d\,e^{-d}$, one has
\begin{eqnarray*} \sum_{\log d> 1+\e+\log\log\log r}\frac{(\log d)}{d!}\, (\log\log r)^d&\le &\sum_{\log d> 1+\e+\log\log\log r}\frac{(\log d)}{\sqrt d}\,e^{-d(\log d-1 - \log\log\log r)}
\cr &\le & \sum_{d>1}\frac{(\log d)}{\sqrt d}\,\,e^{-\e d}<\infty.
\end{eqnarray*}
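The classical Stirling-type lower bound $d!\ge C\sqrt d\,d^d e^{-d}$ invoked above holds, for instance, with $C=2$ (Stirling's formula gives $C=\sqrt{2\pi}$); a quick numerical check:

```python
import math

# d! >= 2 * sqrt(d) * d^d * e^{-d}; Stirling gives the sharper constant sqrt(2*pi)
for d in range(1, 40):
    assert math.factorial(d) >= 2 * math.sqrt(d) * d**d * math.exp(-d)
```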
One thus deduces, concerning the sum in \eqref{somme}, that
\begin{equation} \label{somme1} c_0\sum_{d=1}^{r-1}\Big( \sum_{1\le i_1<\ldots <i_d<r} b_{i_1}\ldots b_{i_d}\Big) \Phi_1(h+d)\ \le \ C\frac{\log p_r}{p_r}\big(1+\log\log\log r\big)\log r. \end{equation}
\vskip 4 pt
\subsubsection{\cmit Coefficients related to $\Pi_s$.} \vskip 3 pt By applying the recurrence inequality (Lemma \ref{E3}), one successively generates
\begin{eqnarray*} & & c_4\Pi_{r-1}
\cr & & c_4\Pi_{r-1} + c_0c\b_{r-1}\Pi_{r-2} \cr & & c_4\Pi_{r-1} + c_0c\b_{r-1}\Pi_{r-2} + c_0c\b_{r-2}\big(1 + b_{r-1} \big)\Pi_{r-3}
\cr & &c_4\Pi_{r-1} + c_0c\b_{r-1}\Pi_{r-2} + c_0c\b_{r-2}\big(1 + b_{r-1} \big)\Pi_{r-3}
\cr & &\qq\quad \ +c_0c\b_{r-3}\big( 1+ b_{r-2} + b_{r-1} + b_{r-1} b_{r-2}\big)\Pi_{r-4}.
\end{eqnarray*}
$\underline{\hbox{\cmssqi Coefficients}}$\,:
\begin{eqnarray*} \qq\qq \Pi_{r-1}: c_4\qq\qq \qq\qq \, \Pi_{r-2}: c_0c\b_{r-1} \qq\qq\qq\qq\qq\qq\qq\qq\cr \Pi_{r-3}: c_0c\b_{r-2}(1+ b_{r-1})\qq \Pi_{r-4}: c_0c\b_{r-3}(1+ b_{r-2}+b_{r-1}+b_{r-1}b_{r-2}).\qq\qq \end{eqnarray*}
It is easy to check that the coefficients of the $\Pi_{r-x}$ are exactly those of $\Phi_{r-x+1}(\cdot)$ multiplied by the factor $c_0c\b_{r-x+1}$. The products form the sum \begin{eqnarray} \label{sumpi} c_0c \sum_{d=0}^{r-2}\b_{r-d}\Big(1+ \sum_{1\le i_1<\ldots <i_d<r} b_{r-i_1}\ldots b_{r-i_d}\Big)\Pi_{r-d-1}. \end{eqnarray} By \eqref{p(i)est}, one has
\begin{eqnarray}\label{beta}\b_j= \frac{1}{2 p_j (\log p_j)^2}\le \frac{1}{2 p(j) (\log p(j))^2}\le \frac{1}{2j (\log j)^3},\qq \hbox{if $j\ge 2$.} \end{eqnarray} Moreover, \eqref{p(i)est} and \eqref{prod} imply that \begin{eqnarray*}
\Pi_j= \prod_{\ell =1}^j \Big(\frac{1}{1-\frac{1}{p_\ell}} \Big)&\le &\prod_{\ell =1}^j \Big(\frac{1}{1-\frac{1}{p(\ell)}} \Big)\ \le \ \prod _{p\le j(\log j +\log\log j)}\Big(\frac{1}{1- \frac{1}{p}}\Big) \cr &\le & C (\log j) \, . \end{eqnarray*}
We now note that by definition of $\Pi_j$, we also have $$\Pi_j\le \max_{\ell \le 5}\prod_{p\le p(\ell)} \frac{1}{1 -\frac{1}{p}}= C_0.$$ We deduce that \begin{eqnarray}\label{Piest}\Pi_j &\le &C( \log j) ,\qq\qq \hbox{if $j\ge 2$.} \end{eqnarray} Consequently, \eqref{Piest} and \eqref{beta} imply that \begin{eqnarray}\label{betapi}\b_{j+1}\Pi_j &\le & \frac{C}{j (\log j)^2} ,\qq\qq \hbox{if $j\ge 2$.} \end{eqnarray}
This implies that the sum in \eqref{sumpi} can be bounded as follows: \begin{eqnarray} \label{estsumpi} & & c_0c \sum_{d=0}^{r-2}\b_{r-d}\Big(1+ \sum_{1\le i_1<\ldots <i_d<r} b_{r-i_1}\ldots b_{r-i_d}\Big)\Pi_{r-d-1} \cr &\le &c_0c \prod _{i=1}^{r-2}\big( 1+b_{r-i}\big)\cdot \sum_{d=0}^{r-2}\b_{r-d}\, \Pi_{r-d-1} \ =\ c_0c \prod _{j=2}^{r-1}\big( 1+b_{j}\big)\cdot \sum_{d=0}^{r-2}\b_{r-d}\, \Pi_{r-d-1} \cr &\le &c_0c\, C \prod _{j=2}^{r-1}\big( 1+b_{j}\big)\cdot \sum_{d=0}^{r-2}\frac{1}{(r-d)\big(\log (r-d)\big)^2}
\cr &\le &c_0c\, C \prod _{j=2}^{r-1}\big( 1+b_{j}\big)\cdot \sum_{\d=2}^{\infty}\frac{1}{\d (\log \d)^2}
\cr &\le & c_0c\, C \prod _{j=2}^{r-1}\big( 1+b_{j}\big). \end{eqnarray} We recall that \begin{eqnarray*} \sum_{p\le x }\frac{1}{p}\le \log\log x +C.
\end{eqnarray*}
See for instance \cite{RS}, inequality (3.20). Thus, since $1+x\le e^x$,
\begin{eqnarray}\label{1plusbi} \prod_{i=1}^{r}\big(1+b_i\big)\ \le\ e^{\sum_{i=1}^r b_i}\ \le \ C\, \log r.\end{eqnarray} Now estimate \eqref{1plusbi} implies that \begin{eqnarray}\label{estsumpi2}
c_0c \sum_{d=0}^{r-2}\b_{r-d}\Big(1+ \sum_{1\le i_1<\ldots <i_d<r} b_{r-i_1}\ldots b_{r-i_d}\Big)\Pi_{r-d-1}&\le & c_0c\, C \log r \cr &\le & C\, \frac{ \log p_r}{p_r} \log r.\end{eqnarray} We thus deduce from \eqref{somme1} and \eqref{estsumpi2} that
\begin{eqnarray}\label{estS2} \Phi_2(r,n)&\le & C\,\frac{\log p_r}{p_r}\big(1+\log\log\log r\big)\log r + C\, \frac{ \log p_r}{p_r} \log r \cr &\le & C\,\frac{\log p_r}{p_r}(\log r)(\log\log\log r) \, .\end{eqnarray}
As a result, by taking account of the observation made at the beginning of section \ref{s4}, we obtain \begin{equation}\label{estS2a} \Phi_2(n) \le C\, (\log\log\log r)(\log r)\sum_{i=1}^r\frac{ \log p_i}{p_i}\ =\ C\, (\log\log\log r)(\log r)w(n)
\, .\end{equation} \vskip 3 pt By combining \eqref{estS2} with the upper estimate $\Phi_1(n)$ established at Lemma \ref{phi1maj} and using inequality \eqref{convexdec}, we arrive at
\begin{equation}\label{convexdec1a} \Psi(n)\le \Big(\prod_{j =1 }^r \frac{1}{1-p_j^{ -1}} \Big)\sum_{i=1}^r \frac{(\log p_i)\big(\log\log p_i\big)}{p_i-1} + C\, (\log\log\log r)(\log r)w(n) ,\end{equation} recalling that $p_j\ge 3$ by assumption \eqref{min}.
\section{\bf Proof of Theorem \ref{t1}.}\label{s5}
First we prove inequality \eqref{convexdec}. We recall the convention $0\log 0=0$. Inequality \eqref{convexdec} is an immediate consequence of the following convexity lemma. \begin{lemma}\label{lconvexe} For any integers $\mu_i\ge 0$, $p_j\ge 2$, we have \begin{eqnarray*} \sum_{i=1}^{r}\big(\m_i\log p_i\big) \log \Big(\sum_{i=1}^{r} \m_i \log p_i\Big)&\le & \sum_{i=1}^{r} \m_i\big(\log p_i\big)\big(\log\log p_i\big) \cr & &\qq +\sum_{i=1}^{r} \m_i\big(\log p_i\big)\log \Big(\sum_{i=1}^{r} \m_i \Big) . \end{eqnarray*}
\end{lemma}
\begin{proof}
We may restrict to the case $\sum_{i=1}^{r} \m_i\ge 1$, since otherwise
the inequality is trivial.
Let $M=\sum_{i=1}^{r} \m_i$ and write that \begin{eqnarray*} \sum_{i=1}^{r}\m_i\big(\log p_i\big) \log \Big(\sum_{i=1}^{r} \m_i \log p_i\Big)
&= & M\bigg\{ \sum_{i=1}^{r}\frac{\m_i}{M}\big(\log p_i\big) \log \Big\{\sum_{i=1}^{r} \frac{\m_i}{M} \log p_i\Big\} \cr & & \quad +\sum_{i=1}^{r}\frac{\m_i}{M}\big(\log p_i\big)(\log M) \bigg\} . \end{eqnarray*}
By using convexity of $\psi(x)=x\log x$
on ${\mathbb R}_+$, we get
$$ \sum_{i=1}^{r}\frac{\m_i}{\sum_{i=1}^{r}\m_i}\big(\log p_i\big) \log \Big\{\sum_{i=1}^{r} \frac{\m_i}{\sum_{i=1}^{r}\m_i} \log p_i\Big\}\le \sum_{i=1}^{r}\frac{\m_i}{\sum_{i=1}^{r}\m_i}\big(\log p_i\big)\big(\log \log p_i\big).$$
Thus \begin{eqnarray*} \sum_{i=1}^{r}\big(\m_i\log p_i\big) \log \Big(\sum_{i=1}^{r} \m_i \log p_i\Big) \le \sum_{i=1}^{r} \m_i\big(\log p_i\big)\big(\log\log p_i\big)+\sum_{i=1}^{r} \m_i\big(\log p_i\big)\log \Big(\sum_{i=1}^{r} \m_i \Big) . \end{eqnarray*} \end{proof}
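The convexity inequality of Lemma \ref{lconvexe} can be checked numerically on a small grid of exponents (a sanity check only; the case $\sum_i\m_i=0$ is excluded since, by the convention $0\log 0=0$, both sides then vanish).

```python
import math
from itertools import product as cartesian

def convexity_gap(mu, primes):
    # rhs - lhs of the lemma, for sum(mu) >= 1
    M = sum(mu)
    S = sum(m * math.log(p) for m, p in zip(mu, primes))
    lhs = S * math.log(S)
    rhs = (sum(m * math.log(p) * math.log(math.log(p)) for m, p in zip(mu, primes))
           + S * math.log(M))
    return rhs - lhs

primes = (2, 3, 5, 7)
for mu in cartesian(range(3), repeat=4):
    if sum(mu) >= 1:
        assert convexity_gap(mu, primes) >= -1e-9
```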
The odd case (i.e., when condition \eqref{min} is satisfied) is obtained by combining \eqref{estS2} with Corollary \ref{ests1} and utilizing inequality \eqref{convexdec}. Since $r\le \log n$, by taking into account the estimate of $w(n)$ given in \eqref{wdavenport}, we get
\begin{eqnarray}\label{convexdec2} \Psi(n)&\le& e^\g(1+o(1)) (\log\log n)^2(\log \log\log n) + C\, (\log\log\log\log n)(\log\log n)^2
\cr &= & e^\g(1+o(1)) (\log\log n)^2(\log \log\log n).
\end{eqnarray} \vskip 5 pt Passing from the odd case to the general case is not straightforward. This step will require an extra analysis of some further properties of $\Psi(n)$.
\vskip 3 pt
We first exclude the trivial case where $n$ is a pure power of $2$, since $\Psi(2^k) \le C$ uniformly in $k$ for some finite constant $C$.
\vskip 3 pt Now if $2$ divides $n$, writing $n=2^vm$, $2\nmid m$, we have \begin{eqnarray*}
\Psi(n)&=&\sum_{d|n} \frac{(\log d )(\log\log d)}{d}\ =\ \sum_{k=0}^v\sum_{\d| m}\frac{(\log (2^k\d) )(\log\log (2^k\d))}{2^k\d}. \end{eqnarray*} As the function $x\mapsto \frac{(\log x )(\log\log x)}{x}$ decreases on $[x_0,\infty)$ for some positive real $x_0$, we can write
\begin{eqnarray*} & & \sum_{k=0}^v\frac{(\log (2^k\d) )(\log\log (2^k\d))}{2^k\d} \cr &\le &\sum_{k=0}^{k_0-1}\frac{(\log (2^k\d) )(\log\log (2^k\d))}{2^k\d} + \sum_{k=k_0+1}^v\frac{(\log (2^k\d) )(\log\log (2^k\d))}{2^k\d} \cr &\le &\sum_{k=0}^{k_0-1}\frac{(\log (2^k\d) )(\log\log (2^k\d))}{2^k\d} + \int_{2^{k_0}\d}^{\infty} \frac{(\log u )(\log\log u)}{u^2}\dd u, \end{eqnarray*} where $k_0$ depends on $x_0$ only. Moreover $$\Big(\frac{(\log u )(\log\log u)}{u}\Big)'\ge - \frac{(\log u )(\log\log u)}{u^2}.$$ Thus \begin{eqnarray*} & &\sum_{k=0}^v\frac{(\log (2^k\d) )(\log\log (2^k\d))}{2^k\d} \cr &\le &\sum_{k=0}^{k_0-1}\frac{(\log (2^k\d) )(\log\log (2^k\d))}{2^k\d} + \frac{(\log (2^{k_0}\d) )(\log\log (2^{k_0}\d))}{2^{k_0}\d}, \end{eqnarray*} whence \begin{eqnarray}\label{psik_0}\Psi(n)
&\le & \sum_{k=0}^{k_0}\sum_{\d|m} \frac{(\log (2^k\d) )(\log\log (2^k\d))}{2^k\d}. \end{eqnarray}
Let $m=p_1^{b_1}\ldots p_{\m}^{b_{\m}}$. We have by \eqref{convexdec1}
\begin{eqnarray*} \Psi(m)&\le& \Big(\prod_{j =1 }^{\m} \frac{1}{1-p_j^{ -1}} \Big)\sum_{i=1}^{\m} \frac{(\log p_i)\big(\log\log p_i\big)}{p_i-1} + C\, (\log\log\log \m)(\log \m)w(m) \cr &\le& \Big(\prod_{j =2 }^\m \frac{1}{1-p(j)^{ -1}} \Big)\sum_{i=1}^\m \frac{(\log p(i))\big(\log\log p(i)\big)}{p(i)-1} + C\, (\log\log\log \m)(\log \m)w(m) \cr &=& \frac12\, \Big(\prod_{j =1 }^\m \frac{1}{1-p(j)^{ -1}} \Big)\sum_{i=1}^\m \frac{(\log p(i))\big(\log\log p(i)\big)}{p(i)-1} + C\, (\log\log\log \m)(\log \m)w(m) \cr &\le & \frac{ e^{\g}}2\, \big( \log \m + \mathcal O(1) \big)\sum_{i=1}^\m \frac{(\log p(i))\big(\log\log p(i)\big)}{p(i)-1} + C\, (\log\log\log \m)(\log \m)w(m),\end{eqnarray*} by using Mertens' estimate \eqref{prod} and since $p(\m)\sim \m\log \m$.
Furthermore by using estimate \eqref{phi1.sumr}, and since $2^\m\le m$ we get
\begin{eqnarray}\label{psi(m)est} \Psi(m)
&\le & \frac{ e^{\g}}2\, \big( \log \m + \mathcal O(1) \big)(1+\e)(\log \m)(\log\log \m) + C\, (\log\log\log \m)(\log \m)w(m) \cr &\le & \frac{ e^{\g}}2\, \big( \log \frac{\log m}{\log2} + \mathcal O(1) \big)(1+\e)(\log \frac{\log m}{\log2})(\log\log \frac{\log m}{\log2}) \cr & & + C\, (\log\log\log \frac{\log m}{\log2})(\log \frac{\log m}{\log2})(1+o(1))\log\log m \cr &\le & \frac{ e^{\g}}2\,(1+2\e) (\log \log m)^2(\log\log\log m), \end{eqnarray} for $m$ large.
Now let $\psi(2^km)=\sum_{\d|m} \frac{(\log (2^k\d) )(\log\log (2^k\d))}{\d}$, $1\le k\le k_0$. If $n$ is not a pure power of $2$, then we may assume that its odd component $m$ tends to infinity with $n$ (otherwise $\Psi(n)$ remains bounded). Thus with \eqref{psik_0},
\begin{eqnarray}\label{estevencase} \frac{\Psi(n)}{\big(\log \log n)^2(\log\log\log n)}
&\le & \sum_{k=0}^{k_0} \frac{1}{2^k}\sum_{\d|m} \frac{\frac{(\log (2^k\d) )(\log\log (2^k\d))}{\d}}{ (\log \log m)^2(\log\log\log m)} . \end{eqnarray} But
\begin{eqnarray}\label{estevencasea}\frac{(\log (2^k\d) )(\log\log (2^k\d))}{\d}&=&\frac{(k(\log 2) )(\log\log (2^k\d))+ (\log \d)(\log\log (2^k\d)) }{\d}
\cr &\le &k_0(\log 2) \frac{\log\big(k_0(\log 2)+\log\d\big)}{\d}+ \frac{(\log \d)(\log\log (2^{k_0}\d)) }{\d}. \end{eqnarray} Now we have the inequality: $\log \log (a+x)\le \log (b\log x)$ where $b\ge (a+e)$ and $a\ge 1$, which is valid for $x\ge e$. Thus \begin{eqnarray}\label{estevencaseb}\log\big(k_0(\log 2)+\log\d\big)\le \log (k_0\log 2+ e)+\log \log \d. \end{eqnarray} Consequently
\begin{eqnarray}\label{estevencase1}
& & \sum_{k=0}^{k_0} \frac{1}{2^k}\sum_{\d|m} \frac{k_0(\log 2) \frac{\log (k_0(\log 2)+\log\d )}{\d}}{(\log \log m)^2(\log\log\log m)}
\cr &\le &\sum_{k=0}^{k_0} \frac{1}{2^k}\sum_{\d|m} \frac{k_0(\log 2)\frac{\log (k_0\log 2+ e)}{\d}}{(\log \log m)^2(\log\log\log m)}
\cr & & \quad + \sum_{k=0}^{k_0} \frac{1}{2^k}\sum_{\d|m} \frac{k_0(\log 2) \frac{\log\log\d}{\d}}{(\log \log m)^2(\log\log\log m)} \cr&\le &2k_0(\log 2)\big(\log (k_0\log 2+ e)\big) \frac{\s_{-1}(m)}{(\log \log m)^2(\log\log\log m)}
\cr & & \quad + \frac{2k_0(\log 2)}{(\log \log m)^2(\log\log\log m)}\sum_{\d|m} \frac{\log\log\d}{\d} \cr&\le &C(k_0)\Big\{\frac{1}{\log \log m(\log\log\log m)} + \frac{\s_{-1}(m)}{(\log \log m)(\log\log\log m)}\Big\} \cr&\le &\frac{C(k_0)}{\log\log\log m} \quad \to \ 0\quad \hbox{ as $m$ tends to infinity}. \end{eqnarray}
Further
\begin{eqnarray} \label{estevencase2}\sum_{k=0}^{k_0} \frac{1}{2^k}\frac{\sum_{\d|m}\frac{(\log \d)(\log\log (2^{k_0}\d))}{\d}}{(\log \log m)^2(\log\log\log m)}
& \le & \sum_{k=0}^{k_0} \frac{1}{2^k}\sum_{\d|m}\frac{(\log \d)( \log (k_0\log 2+ e)+\log \log \d) }{\d(\log \log m)^2(\log\log\log m)}
\cr &\le &\frac{\log (k_0\log 2+ e)}{(\log \log m)^2(\log\log\log m)}\,\sum_{k=0}^{k_0} \frac{1}{2^k}\sum_{\d|m}\frac{(\log \d) }{\d} \cr & &\quad+2\,\frac{\Psi(m)}{(\log \log m)^2(\log\log\log m)} \cr &\le &\frac{2\log (k_0\log 2+ e)\,\s_{-1}(m)}{(\log \log m)(\log\log\log m)} \cr & &\quad+2\,\frac{\Psi(m)}{(\log \log m)^2(\log\log\log m)} \cr &\le &\frac{C(k_0)}{\log\log\log m} +2\,\frac{ e^{\g}}2 \,(1+2\e)\,\frac{ (\log \log m)^2(\log\log\log m)}{(\log \log m)^2(\log\log\log m)} \cr &\le &\frac{C(k_0)}{\log\log\log m} + e^{\g} \,(1+2\e)\, , \end{eqnarray} for $m$ large, where we used estimate \eqref{psi(m)est}. \vskip 3pt Plugging estimates \eqref{estevencase1} and \eqref{estevencase2} into \eqref{estevencase} finally leads, in view of \eqref{estevencasea}, to \begin{eqnarray}\label{estevencasef} \frac{\Psi(n)}{(\log \log n)^2(\log\log\log n)}
&\le& \frac{C}{\log\log\log m} + e^{\g} \,(1+2\e)\, \end{eqnarray} for $m$ large, where $C$ depends on $k_0$ only. As $\e$ can be arbitrarily small, we finally obtain \begin{eqnarray}\label{evencasef} \limsup_{n\to \infty}\frac{\Psi(n)}{(\log \log n)^2(\log\log\log n)}
&\le& e^{\g}. \end{eqnarray} This establishes Theorem \ref{t1}.
\section{\bf Complementary results.}\label{s2}
In this section we prove complementary estimates for $\Phi_1$, $\Phi_2$ and $\Psi$, notably estimates \eqref{phipsi} and \eqref{Phi1est}.
\subsection{Upper estimates.}
\begin{lemma}\label{phi1maj} We have the following estimate, \begin{eqnarray*} \Phi_1(n)&\le&\Big(\prod_{j =1 }^r \frac{1}{1-p_j^{ -1}} \Big)\sum_{i=1}^r \frac{(\log p_i)\big(\log\log p_i\big)}{p_i-1}. \end{eqnarray*} \end{lemma} \begin{proof}[\cmit Proof] \rm We have $$\Phi_1(n)=\sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_r=0}^{\a_r}\ \frac{ \m_1(\log p_1)(\log\log p_1)+\ldots +\m_r(\log p_r)(\log\log p_r)}{ p_1^{\m_1}\ldots p_{r}^{\m_{r}}} $$ The $i$-th term of the numerator yields the sum $$\underbrace{\sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_r=0}^{\a_r}}_{\substack{\hbox{\small the sum relative}\\ \hbox{\small to $ \m_i$ is excluded}}} \underbrace{\frac{1}{ p_1^{\m_1}\ldots p_{r}^{\m_{r}}}}_{\substack{\hbox{\small $ p_i^{\m_i}$}\\ \hbox{\small is excluded}}}\ \Big(\sum_{\m_i=0}^{\a_i} \frac{\m_i\big(\log p_i\big)\big(\log\log p_i\big)}{p_i^{\m_i}}\Big). $$
Consequently, \begin{eqnarray}\label{Phi1formula} \Phi_1(n)
&=& \sum_{i=1}^r\underbrace{\sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_r=0}^{\a_r}}_{\substack{\hbox{\small the sum relative}\\ \hbox{\small to $ \m_i$ is excluded}}} \underbrace{\frac{1}{ p_1^{\m_1}\ldots p_{r}^{\m_{r}}}}_{\substack{\hbox{\small $ p_i^{\m_i}$}\\ \hbox{\small is excluded}}}\ \Big(\sum_{\m_i=0}^{\a_i} \frac{\m_i\big(\log p_i\big)\big(\log\log p_i\big)}{p_i^{\m_i}}\Big)\cr &=& \sum_{i=1}^r\prod_{\substack{j =1\\ j \neq i}}^r\Big(\frac{1-p_j^{-\a_j -1}}{1-p_j^{ -1}} \Big)\Big[\sum_{\m_i=0}^{\a_i} \frac{\m_i\big(\log p_i\big)\big(\log\log p_i\big)}{p_i^{\m_i}}\Big] . \end{eqnarray} Now as $$\sum_{\m=0}^{\a_i} \frac{\m}{p_i^\m}\le\sum_{j=0}^{\infty} \frac{j}{p_i^j} =\frac{1}{(p_i-1)(1-p_i^{-1})},$$
we obtain \begin{eqnarray*} \Phi_1(n)&\le &\sum_{i=1}^r\prod_{\substack{j =1\\ j \neq i}}^r\Big(\frac{1-p_j^{-\a_j -1}}{1-p_j^{ -1}}\Big)\,.\,\frac{(\log p_i)\big(\log\log p_i\big)}{(p_i-1)(1-p_i^{-1})} \cr
&\le &\Big(\prod_{j =1 }^r \frac{1}{1-p_j^{ -1}} \Big)\sum_{i=1}^r \frac{(\log p_i)\big(\log\log p_i\big)}{p_i-1}. \end{eqnarray*} \end{proof}
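As an illustrative numerical sanity check of this lemma (not part of the argument), the following Python sketch evaluates $\Phi_1(n)$ directly from its defining multiple sum for the hypothetical sample $n=3^2\cdot 5\cdot 7$, whose prime factors are odd so that $\log\log p>0$, and compares it with the right-hand side of the bound:

```python
import math
from itertools import product

# Illustrative sanity check of the lemma; n = 3^2 * 5 * 7 is an
# arbitrary sample with odd prime factors, so log log p > 0.
primes = [3, 5, 7]
alphas = [2, 1, 1]

def loglog(p):
    return math.log(math.log(p))

# Phi_1(n): sum over exponent tuples (mu_1, ..., mu_r), 0 <= mu_i <= alpha_i,
# of (sum_i mu_i log p_i loglog p_i) / (p_1^mu_1 ... p_r^mu_r).
phi1 = 0.0
for mus in product(*(range(a + 1) for a in alphas)):
    num = sum(m * math.log(p) * loglog(p) for m, p in zip(mus, primes))
    den = math.prod(p ** m for m, p in zip(mus, primes))
    phi1 += num / den

# Right-hand side of the lemma.
bound = math.prod(1 / (1 - 1 / p) for p in primes) * \
        sum(math.log(p) * loglog(p) / (p - 1) for p in primes)

assert 0 < phi1 <= bound
```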
\begin{corollary}\label{ests1} We have the following estimate, \begin{eqnarray*} \limsup_{n\to \infty}\frac{\Phi_1(n)}{(\log\log n)^2(\log \log\log n)} &\le & e^{\g}.\end{eqnarray*} \end{corollary}
\begin{proof}[\cmit Proof]\rm Let $p(j)$ denote the $j$-th consecutive prime number, and recall that (see \cite[(3.12)--(3.13)]{RS}) \begin{eqnarray}\label{p(i)est} p(i) &\ge& \max(i \log i, 2), \qq\quad\ \ i\ge 1, \cr p(i)&\le& i(\log i + \log\log i ), \qq \ \! i\ge 6. \end{eqnarray}
Let $\e>0$ and let $r_0\ge 4$ be an integer. If $r\le r_0$, then \begin{eqnarray}\label{phi1.sumr1} \sum_{i=1}^r \frac{(\log p_i)\big(\log\log p_i\big)}{p_i-1} &\le &\d\,r_0, \qq \qq \d= \sup_{p\ge 3}\frac{(\log p)\big(\log\log p\big)}{p-1}<\infty\,. \end{eqnarray} If $r>r_0$, then \begin{eqnarray*} \sum_{i=r_0+1}^r \frac{(\log p_i)\big(\log\log p_i\big)}{p_i-1} &\le & \Big(\max_{i>r_0}\frac{p(i)}{p(i)-1}\Big)\sum_{i=r_0+1}^r \frac{(\log p(i))\big(\log\log p(i)\big)}{p(i)} \cr &\le & \Big(\max_{i>r_0}\frac{p(i)}{p(i)-1}\Big)\sum_{i=r_0+1}^r \frac{(\log (i\log i))\big(\log\log (i\log i)\big)}{i\log i}. \end{eqnarray*} We choose $r_0=r_0(\e)$ so that $\log r_0 \ge 1/\e$ and the preceding expression is bounded from above by $$(1+\e)\sum_{i=r_0+1}^r \frac{\log\log i}{i}.$$ We thus have \begin{eqnarray}\label{phi1.sumr2}
\sum_{i=r_0+1}^r \frac{(\log p_i)\big(\log\log p_i\big)}{p_i-1}
&\le &(1+\e)\int_{r_0}^r\frac{\log\log t}{t}\dd t
\cr &\le & (1+\e)(\log r)(\log\log r). \end{eqnarray}
Consequently, for some $r(\e)$,
\begin{eqnarray}\label{phi1.sumr} \sum_{i= 1}^r \frac{(\log p_i)\big(\log\log p_i\big)}{p_i-1}
&\le & (1+\e)(\log r)(\log\log r), \qq r\ge r(\e). \end{eqnarray}
By using Mertens' estimate
\begin{eqnarray}\label{prod}\prod_{p\le x}\Big(\frac{1}{1-\frac{1}{p}}\Big)=e^{\g}\log x + \mathcal O(1)\qq \quad x\ge 2, \end{eqnarray}
we further have \begin{equation}\label{p(i)estappl}
\prod_{\ell =1}^r \Big(\frac{1}{1-\frac{1}{p_\ell}} \Big)\,\le\, \prod_{\ell =1}^r \Big(\frac{1}{1-\frac{1}{p(\ell)}} \Big) \le \prod _{p\le r(\log r +\log\log r)}\Big(\frac{1}{1- \frac{1}{p}}\Big) \,\le \, e^{\g} (\log r) + C\, , \end{equation} if $r\ge 6$, and so for any $r\ge 1$, modifying $C$ if necessary. As $r=\o(n)$ and $2^{\o(n)}\le n$, we consequently have, \begin{eqnarray*} \Phi_1(n)
&\le & e^{\g}(1+ C\e)^2 (\log\log n)^2(\log \log\log n),\end{eqnarray*} if $r>r_0$. If $r\le r_0$, we have \begin{eqnarray*} \Phi_1(n)&\le & \d e^{\g}(1+\e) \big((\log r_0) + C\big):=C(\e).\end{eqnarray*} Whence, \begin{eqnarray*} \Phi_1(n) &\le & e^{\g}(1+\e)^2 (\log\log n)^2(\log \log\log n)+ C(\e).\end{eqnarray*} As $\e$ can be arbitrarily small, the result follows.
\end{proof} The following lemma is nothing but the upper bound part of \eqref{EZ1}.
We omit the proof. \begin{lemma} \label{tEZm}
We have the following estimate,
\begin{eqnarray*} \sum_{d|n} \frac{\log d}{d} &\le &\prod_{p|n}\Big(\frac{1}{1-p^{-1}}\Big) \ \sum_{p|n}\frac{\log p}{p-1}. \end{eqnarray*} Moreover,
\begin{eqnarray*} \limsup_{n\to \infty}\ \frac{1}{ (\log\log n)(\log \o(n))}\sum_{d|n} \frac{\log d}{d} &\le & e^{\g}. \end{eqnarray*} \end{lemma}
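As an illustrative sanity check of the first bound of the lemma (the sample $n=120=2^3\cdot 3\cdot 5$ is an arbitrary choice, not part of the proof), one may compare both sides numerically in Python:

```python
import math

# Illustrative check of the divisor-sum bound of the lemma for n = 120.
n = 120
divs = [d for d in range(1, n + 1) if n % d == 0]
prime_divs = [p for p in divs if p > 1 and all(p % q for q in range(2, p))]

lhs = sum(math.log(d) / d for d in divs)
rhs = math.prod(1 / (1 - 1 / p) for p in prime_divs) * \
      sum(math.log(p) / (p - 1) for p in prime_divs)

assert lhs <= rhs
```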
\subsection{\bf Lower estimates.}\label{s3}
We recall that the smallest prime divisor of an integer $n$ is denoted by $P^-(n)$. \begin{lemma}\label{phi1min} Let $n=p_1^{\a_1}\ldots p_r^{\a_r}$, $r\ge 1$, $\a_i\ge1$. Then, \begin{eqnarray*} \Phi_1(n) &\ge & \Big(1-\frac{1}{P^-(n)}\Big)\prod_{j =1}^r\big(1+p_j^{-1} \big)\Big[ \sum_{i=1}^r\frac{ \big(\log p_i\big)\big(\log\log p_i\big)}{p_i}\Big].\end{eqnarray*} \end{lemma}
\begin{proof}[\cmit Proof] By \eqref{Phi1formula}, \begin{eqnarray*} \Phi_1(n) &=&\sum_{i=1}^r\prod_{\substack{j =1\\ j \neq i}}^r\Big(\frac{1-p_j^{-\a_j -1}}{1-p_j^{ -1}} \Big)\Big[\sum_{\m_i=0}^{\a_i} \frac{\m_i\big(\log p_i\big)\big(\log\log p_i\big)}{p_i^{\m_i}}\Big] \cr &\ge &\sum_{i=1}^r\prod_{\substack{j =1\\ j \neq i}}^r\Big(\frac{1-p_j^{-\a_j -1}}{1-p_j^{ -1}} \Big)\Big[ \frac{\big(\log p_i\big)\big(\log\log p_i\big)}{p_i}\Big]
\cr &\ge & \prod_{j =1}^r\big(1+p_j^{-1} \big)\Big[ \sum_{i=1}^r\frac{(1-p_i^{ -1})\big(\log p_i\big)\big(\log\log p_i\big)}{p_i}\Big].\end{eqnarray*} Thus \begin{eqnarray*} \Phi_1(n) &\ge & \Big(1-\frac{1}{P^-(n)}\Big)\prod_{j =1}^r\big(1+p_j^{-1} \big)\Big[ \sum_{i=1}^r\frac{ \big(\log p_i\big)\big(\log\log p_i\big)}{p_i}\Big].\end{eqnarray*} \end{proof} We easily deduce from Lemma \ref{phi1maj} and Lemma \ref{phi1min} the following corollary.
\begin{corollary}\label{phi1est} Let $n=p_1^{\a_1}\ldots p_r^{\a_r}$, $r\ge 1$, $\a_i\ge1$. Then, \begin{eqnarray*}\big(1-\frac{1}{P^-(n)}\big)\prod_{j =1}^r\big(1+p_j^{-1} \big) \ \le \frac{\Phi_1(n)}{\sum_{i=1}^r\frac{ (\log p_i)(\log\log p_i)}{p_i}}\ \le 2\, \prod_{j =1}^r\Big(\frac{1}{1-p_j^{ -1}} \Big).\end{eqnarray*} \end{corollary}
\begin{proposition} \label{tEZ} We have the following estimates
\begin{eqnarray*}\hbox{$\rm a)$}& & \limsup_{n\to \infty}\ \frac{1}{ (\log\log n)^2} \sum_{d|n} \frac{(\log d)}{d }\ \ge \ e^{\g}, \cr \hbox{$\rm b)$} & &\limsup_{n\to \infty}\, \frac{\Psi(n)}{(\log \log n)^2(\log\log\log n)}\,\ge \,e^\g, \cr \hbox{$\rm c)$} & &\limsup_{n\to \infty}\, \frac{\Phi_1(n)}{(\log \log n)^2(\log\log\log n)}\,\ge \,e^\g. \end{eqnarray*} \end{proposition}
\begin{proof}[\cmit Proof] \rm
Case a) is Erd\H os-Zaremba's lower bound for the function $\Phi(n)$. Since it is used in the proof of b) and c), we provide a detailed proof for the sake of completeness. \vskip 3 pt a) Let $n_j=\prod_{p<e^j}p^j$. Recall that
$p(i) \ge \max(i \log i, 2)$ if $i\ge 1$. Let $r(j)$ be the integer defined by the condition $p(r(j))< e^j< p(r(j)+1)$.
By using \eqref{formule} and following Gronwall's proof \cite{Gr},
we have,
\begin{eqnarray*}& & \sum_{d|n_j} \frac{\log d}{d} \ =\ \sum_{i=1}^{r(j)}\prod_{\substack{ \ell=1\\ \ell\neq i}}^{r(j)}\Big(\frac{1-p(\ell)^{-j-1}}{1-p(\ell)^{-1}}\Big)\Big[\sum_{\m=0}^{j}\frac{\m\log p(i)}{p(i)^\m}\Big] \cr &\ge & \frac{1}{\zeta(j+1)}\prod_{\ell=1}^{r(j)}\Big(\frac{1}{1-p(\ell)^{-1}}\Big)\sum_{i=1}^{r(j)} (1-p(i)^{-1})\frac{\log p(i)}{p(i)}\Big[1+ \frac{1}{p(i)}+\ldots +\frac{1}{p(i)^{j-1}}\Big] \cr &= & \frac{1}{\zeta(j+1)}\prod_{\ell=1}^{r(j)}\Big(\frac{1}{1-p(\ell)^{-1}}\Big)\sum_{i=1}^{r(j)} \frac{\log p(i)}{p(i)}\big(1-p(i)^{-j}\big). \end{eqnarray*}
Recall that $\vartheta(x)=\sum_{p\le x}\log p$ is Chebyshev's function and that $\vartheta(x)\ge(1-\e(x))x$, $x\ge 2$, where $\e(x)\to 0$ as $x$ tends to infinity.
Thus,
$\log n_j = j\vartheta(e^j)= je^j(1+ o(1))$, and thus
$\log\log n_j
= j(1+ o(1))$.
\vskip 2 pt On the one hand, by \eqref{prod}, \begin{equation}\label{prodnj}\prod_{\ell=1}^{r(j)}\big(1-p(\ell)^{-1}\big)= \prod_{p<e^j}\big(1-p^{-1}\big)=\frac{e^{-\g}}{j}\big(1+ \mathcal O(\frac{1}{j})\big). \end{equation}
And on the other, by Mertens' estimate
\begin{equation}\label{sumnj}\sum_{p<e^j} \frac{\log p}{p}=j+\mathcal O(1)\ge (1+o(1)) \log \log n_j . \end{equation}
Thus
\begin{eqnarray} \label{lbeta1} \sum_{d|n_j} \frac{\log d}{d} &\ge & (1+o(1))e^{\g}(\log\log n_j)^{2} \qq\qq j\to \infty\,, \end{eqnarray} since $\zeta(j+1)\to 1$ as $j\to \infty$.
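As an aside, Mertens' estimate \eqref{prod}, used repeatedly above, can be illustrated numerically; in the following Python sketch the cutoff $x=10^5$ and the $5\%$ tolerance are arbitrary choices:

```python
import math

# Illustrative check of Mertens' product estimate:
# prod_{p <= x} (1 - 1/p)^{-1} ~ e^gamma * log x.
GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def primes_upto(x):
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

x = 10 ** 5
mertens_product = math.prod(1 / (1 - 1 / p) for p in primes_upto(x))
ratio = mertens_product / (math.exp(GAMMA) * math.log(x))
assert abs(ratio - 1) < 0.05
```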
\vskip 3 pt
b)
Let $\s'_{-1}(n)= \sum_{d|n\,,\, d\ge 3} 1/d$. Let also $X$ be a discrete random variable equal to $\log d$ if $d|n$ and $d\ge 3$, with probability $1/(d\s'_{-1}(n))$. By using convexity of the function $x\log x$ on $[1,\infty)$, we get
\begin{eqnarray*} {\mathbb E \,} X\log X&=& \sum_{\substack{d|n\\ d\ge 3}} \frac{(\log d)(\log \log d)}{d\s'_{-1}(n) }\ \ge\ ({\mathbb E \,} X)\log\,({\mathbb E \,} X)
\cr &= & \Big(\sum_{\substack{d|n\\ d\ge 3}} \frac{(\log d)}{d\s'_{-1}(n) }\Big)\log \Big(\sum_{\substack{d|n\\ d\ge 3}} \frac{(\log d)}{d\s'_{-1}(n) }\Big)
\cr &\ge & \Big(\sum_{\substack{d|n\\ d\ge 1}} \frac{(\log d)}{d\s'_{-1}(n) }-C\Big)\Big(\log \Big(\sum_{\substack{d|n\\ d\ge 1}} \frac{(\log d)}{d }-C\Big)-\log \s_{-1}(n)\Big) . \end{eqnarray*}
Whence \begin{eqnarray*} \sum_{\substack{d|n\\ d\ge 3}} \frac{(\log d)(\log \log d)}{d} &\ge & \Big(\sum_{\substack{d|n\\ d\ge 1}} \frac{(\log d)}{d }-C\s_{-1}(n)\Big)\Big(\log \Big(\sum_{\substack{d|n\\ d\ge 1}} \frac{(\log d)}{d }-C\Big) \cr & & - \log \s_{-1}(n)\Big). \end{eqnarray*} Letting $n=n_j$, we deduce from \eqref{lbeta1} that
\begin{eqnarray*} \Psi(n) &\ge &\sum_{\substack{d|n\\ d\ge 3}} \frac{(\log d)(\log \log d)}{d} \ \ge \ \Big((1+o(1))e^{\g}(\log\log n_j)^{2} -C\log\log n_j\Big) \cr & & \qq \times \Big(\log \big\{(1+o(1))e^{\g}(\log\log n_j)^2-C\big\}
- \log \big(C \log\log n_j\big)\Big)
\cr & \ge &(1+o(1))e^{\g}(\log\log n_j)^2\log\log\log n_j. \end{eqnarray*} Consequently,
\begin{eqnarray*} \limsup_{n\to \infty}\frac{\Psi(n)}{(\log\log n)^2\log\log\log n}
& \ge &e^{\g}. \end{eqnarray*} \vskip 3 pt
c)
We have \begin{eqnarray*} \Phi_1(n_j) &=&\sum_{i=1}^{r(j)}\prod_{\substack{\ell=1\\ \ell\neq i}}^{r(j)}\Big(\frac{1-p(\ell)^{-j-1}}{1-p(\ell)^{-1}}\Big)\Big[\sum_{\m=0}^{j}\frac{\m (\log p(i))(\log\log p(i))}{p(i)^\m}\Big] \cr &\ge & \frac{1}{\zeta(j+1)}\prod_{\ell=1}^{r(j)}\Big(\frac{1}{1-p(\ell)^{-1}}\Big)\cr & & \quad\times\ \sum_{i=1}^{r(j)} (1-p(i)^{-1})\frac{(\log p(i))(\log\log p(i))}{p(i)}\Big[1+ \frac{1}{p(i)}+\ldots +\frac{1}{p(i)^{j-1}}\Big] \cr &\ge & \frac{1}{\zeta(j+1)}(e^{\g}j)\big(1+ \mathcal O(\frac{1}{j})\big)\sum_{i=1}^{r(j)} \frac{(\log p(i))(\log\log p(i))}{p(i)}\big(1-p(i)^{-j}\big). \end{eqnarray*} by \eqref{prodnj}. Let $0<\e <1$. By using \eqref{sumnj}, we also have for all $j$ large enough, \begin{eqnarray*}\sum_{p<e^j} \frac{(\log p)(\log\log p)}{p} &\ge & \sum_{e^{\e j}\le p<e^j} \frac{(\log p)(\log\log p)}{p} \cr &\ge & (1+o(1))\big(\log(\e j)\big)\sum_{e^{\e j}\le p<e^j} \frac{(\log p)}{p} \cr &\ge & (1+o(1))(1-\e)j\big(\log(\e j)\big)\big(1+ \mathcal O({1}/{j})\big) \cr &\ge & (1+o(1))(1-\e)(\log \log n_j)\big(\log (\e \log \log n_j)\big).\end{eqnarray*} As $\log (\e \log \log n_j) \sim \log\log \log n_j$, $j\to \infty$, we have \begin{eqnarray*} \limsup_{j\to \infty}\frac{\Phi_1(n_j)}{(\log \log n_j)^2(\log\log \log n_j)} &\ge & e^{\g}(1-\e). \end{eqnarray*} As $\e$ can be arbitrarily small, this proves (c).
\end{proof}
\rm \begin{lemma}\label{Phi_2(r,n)min} We have the following estimate \begin{eqnarray*} \Phi_2(n) &\ge & (\log 2)\,\Big(\frac{P^-(n)}{P^-(n)+1}\Big) \,\Big(\prod_{i=1}^{r}\big(1+\frac{1}{ p_i}\big)\Big)\sum_{j=1}^r\big(\frac{ \log p_j}{p_j}\big).\end{eqnarray*} \end{lemma} \begin{proof} We observe from \eqref{Phi_2(r,n)} that
\begin{eqnarray*} \Phi_2(r,n)&\ge & \sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_{r-1}=0}^{\a_{r-1}} \frac{1}{ p_1^{\m_1}\ldots p_{r-1}^{\m_{r-1}}}\frac{ \log p_r}{p_r}\log \Big[\sum_{i=1}^{r-1} \m_i + 1\Big].\end{eqnarray*} It is clear that a term of the above multiple sum is nonzero only if $\max_{i=1}^{r-1} \m_i \ge 1$, in which case $\log\, [\,\sum_{i=1}^{r-1} \m_i + 1]\ge \log 2$. We thus have \begin{eqnarray} \label{Phi2(r,n)min} \Phi_2(r,n)&\ge & (\log 2)\big(\frac{ \log p_r}{p_r}\big)\, \underbrace{ \sum_{\m_1=0}^{\a_1}\ldots \sum_{\m_{r-1}=0}^{\a_{r-1}}}_{\hbox{$\max_{i=1}^{r-1} \m_i \ge 1$}} \frac{1}{ p_1^{\m_1}\ldots p_{r-1}^{\m_{r-1}}} \cr &= & (\log 2)\big(\frac{ \log p_r}{p_r}\big)\,\prod_{i=1}^{r-1}\Big(1+\sum_{\m_i=0}^{\a_i}\frac{1}{ p_i^{\m_i}}\Big)
\cr &\ge & (\log 2)\big(\frac{ \log p_r}{p_r}\big)\,\prod_{i=1}^{r-1}\Big(1+\frac{1}{ p_i}\Big).\end{eqnarray} Consequently, \begin{eqnarray} \label{Phi2(rn)min} \Phi_2(n) &\ge & (\log 2)\,\sum_{j=1}^r\big(\frac{ \log p_j}{p_j}\big)\,\prod_{\substack{i=1\\ i\neq j}}^{r}\Big(1+\frac{1}{ p_i}\Big)
\cr &\ge & (\log 2)\,\Big(\frac{P^-(n)}{P^-(n)+1}\Big) \,\Big(\prod_{i=1}^{r}\big(1+\frac{1}{ p_i}\big)\Big)\sum_{j=1}^r\big(\frac{ \log p_j}{p_j}\big).\end{eqnarray} \end{proof}
\section{\bf An application.} \label{s6}
We deduce from Theorem \ref{t1} the following result.
\begin{theorem}\label{t3} Let $\eta>1$. There exists a constant $C(\eta)$ depending on $\eta$ only, such that for any finite set $K$ of distinct integers, and any sequence of reals $\{c_k, k\in K\}$, we have \begin{eqnarray}\label{approx}\sum_{k,\ell\in K} c_kc_\ell \frac{(k,\ell)^{2}}{k\ell}&\le & C(\eta) \sum_{\nu\in K} c_\nu^2 \,\,(\log\log\log \nu)^\eta\,\Psi (\nu) . \end{eqnarray} Further,
\begin{eqnarray} \label{approx1} \sum_{k,\ell \in K}
c_k c_\ell\frac{(k,\ell)^{2}}{k\ell}&\le& C(\eta)\sum_{\nu \in K} c_\nu^2 (\log\log \nu)^2 (\log\log\log \nu)^{1+\eta}. \end{eqnarray}
\end{theorem}
This considerably improves Theorem 2.5 in \cite{W1}, where a specific question related to G\'al's inequality was investigated; see \cite{W1} for details.
The interest of inequality \eqref{approx} is that the bound obtained depends tightly on the arithmetical structure of the support $K$ of the coefficient sequence, while remaining close to the optimal order of magnitude $(\log\log \nu)^2$. \vskip 2 pt Theorem \ref{t3} is obtained by combining Theorem \ref{t1} with a slightly more general and sharper formulation of Theorem 2.5 in \cite{W1}.
\begin{theorem}\label{t5} Let $\eta >1$. Then, for any real $s$ such that $0<s\le 1$,
for any sequence of reals $\{c_k, k\in K\}$, we have
\begin{eqnarray}\label{t1m}\sum_{k,\ell\in K} c_kc_\ell \frac{(k,\ell)^{2s}}{k^s\ell^s}&\le & C(\eta) \sum_{\nu\in K} c_\nu^2(\log\log\log \nu)^\eta
\sum_{\d|\nu} \frac{(\log \d )(\log\log \d)}{\d^{2s-1}}. \end{eqnarray}
The constant $C(\eta)$ depends on $\eta$ only. \end{theorem}
\begin{remark}\label{rems}\rm From Theorem 2.5-(i) in \cite{W1}, it follows that for every $s>1/2$, \begin{eqnarray}\label{i1}
\sum_{k,\ell\in K} c_k c_\ell\frac{(k,\ell)^{2s}}{k^s\ell^s} &\le&\zeta(2s) \inf_{0< \e\le 2s-1} \frac{1+\e}{\e } \, \sum_{\nu \in K}
c_\nu^2 \, \s_{ 1+\e-2s}(\nu) ,
\end{eqnarray}
$\s_{u}(\nu)$ being the sum of $u$-th powers of divisors of $\nu$, for any real $u$. As \begin{eqnarray*}
\sum_{\d|\nu} \frac{(\log \d )(\log\log \d)}{\d^{2s-1}}\ll \sum_{\d|\nu} \frac{1}{\d^{2s-1-\e}} =\s_{ 1+\e-2s}(\nu) ,
\end{eqnarray*} estimate \eqref{t1m} is much better than the one given in \eqref{i1}. \end{remark}
\begin{proof}[\cmit Proof of Theorem \ref{t5}] \rm
The proof is similar to that of Theorem 2.5 in \cite{W1} and is shorter. Let $\e>0$ and let $J_\e$ denote the generalized Euler function. We recall that \begin{eqnarray}\label{jordan}
J_\e(n)= \sum_{d|n} d^\e \m(\frac{n}{d}).
\end{eqnarray}
We extend the sequence $\{c_k, k\in K\}$ to all ${\mathbb N}$ by putting $c_k= 0$ if $k\notin K$.
By M\"obius' formula, we have $n^\e =\sum_{d|n} J_\e (d)$.
By using Cauchy-Schwarz's inequality, we successively obtain \begin{eqnarray} \label{HS1a}
L&:=& \sum_{k,\ell=1}^n
c_k c_\ell\frac{(k,\ell)^{2s}}{k^s\ell^s}\ =\ \sum_{k,\ell \in K} \frac{c_k c_\ell }{k^s\ell^s}\Big\{\sum_{d\in F(K)}
J_{2s} (d) {\bf 1}_{d|k} {\bf 1}_{d|\ell}\Big\} \cr \hbox{($k=ud$, $\ell=vd$)} &\le& \sum_{u,v\in F(K)} \frac{1}{u^sv^s} \Big(\sum_{d\in F(K)} \frac{J_{2s} (d)}{d^{2s}}c_{ud}c_{vd} \Big) \cr &\le & \sum_{u,v\in F(K)} \frac{1}{u^sv^s} \Big(\sum_{d\in F(K)} \frac{J_{2s} (d)}{d^{2s}}c_{ud}^2 \Big)^{1/2}\Big(\sum_{d\in F(K)} \frac{J_{2s} (d)}{d^{2s}} c_{vd}^2 \Big)^{1/2} \cr &=& \Big[\sum_{u \in F(K)} \frac{1}{u^s } \Big(\sum_{d\in F(K)} \frac{J_{2s} (d)}{d^{2s}}c_{ud}^2 \Big)^{1/2}\Big]^2
\cr &\le& \Big(\sum_{u \in F(K)} \frac{1}{u^s\psi(u) } \Big)\Big(\sum_{\nu \in K} \frac{ c_\nu^2}{ \nu^{2s} } \sum_{\substack{u \in F(K)\\ u|\nu} }
J_{2s}\big( \frac{\nu}{u }\big) u^{ s} \psi(u) \Big) , \end{eqnarray} where $\psi (u)>0$ is a non-decreasing function on ${\mathbb R}^+$. We then choose
$$\psi(u) = u^{-s} \psi_1(u)\sum_{t|u} t (\log t)(\log\log t),\qq \qq \psi_1(u)= (\log\log\log u)^\eta.$$ Hence,\begin{eqnarray*}
L &\le& \Big(\sum_{u \in F(K)} \frac{1}{\psi_1(u)\sum_{t|u} t (\log t)(\log\log t) } \Big)\Big(\sum_{\nu \in K} \frac{ c_\nu^2}{ \nu^{2s} } \sum_{\substack{u \in F(K)\\ u|\nu} }
J_{2s}\big( \frac{\nu}{u }\big) \psi_1(u)\sum_{t|u} t (\log t)(\log\log t)\Big)
\cr &\le& \Big(\sum_{u \in F(K)} \frac{1}{\psi_1(u)\sum_{t|u} t (\log t)(\log\log t) } \Big)\Big(\sum_{\nu \in K} \frac{ c_\nu^2 \psi_1(\nu)
}{ \nu^{2s} } \sum_{\substack{u \in F(K)\\ u|\nu}} J_{2s}\big( \frac{\nu}{u }\big)\sum_{t|u} t (\log t)(\log\log t) \Big) .
\end{eqnarray*}
As $\nu \in K$, we can write
\begin{eqnarray}\label{f}
\sum_{\substack{u \in F(K)\\ u|\nu }}
J_{2s}\big( \frac{\nu}{u }\big) \sum_{t|u} t (\log t)(\log\log t)&=& \sum_{u|\nu}\sum_{d|\frac {\nu}u}d^{2s}\m \Big(\frac {\nu}{ud}\Big)\sum_{t|u} t (\log t)(\log\log t)
\cr & = &\sum_{d|\nu}d^{2s}\sum_{u|\frac {\nu}d}\m \Big(\frac {\nu}{ud}\Big)\sum_{t|u} t (\log t)(\log\log t) \cr \hbox{(writing $u=tx$)}
&=& \sum_{d|\nu}d^{2s}\sum_{t|\frac {\nu}d}t (\log t)(\log\log t)\sum_{x|\frac {\nu}{dt}}\m \Big(\frac {\nu}{dtx}\Big)
\cr \hbox{(writing $\frac {\nu}{dt}=x\theta$)}&
=& \sum_{d|\nu}d^{2s}\sum_{t|\frac {\nu}d}t (\log t)(\log\log t)\sum_{\theta|\frac {\nu}{dt}}\m (\theta)
\cr&=& \sum_{d|\nu}d^{2s}(\frac {\nu}d) (\log (\frac {\nu}d))(\log\log (\frac {\nu}d)),
\end{eqnarray}
where in the last equality we used the fact that $\sum_{d|n}\m(d)$ equals $1$ or $0$ according as $n=1$ or $n>1$.
Consequently,
\begin{eqnarray*} L &\le& \Big(\sum_{u \in F(K)} \frac{1}{\psi_1(u)\sum_{t|u} t (\log t)(\log\log t) } \Big)\Big(\sum_{\nu \in K} \frac{ c_\nu^2
\psi_1(\nu) }{ \nu^{2s} } \sum_{d|\nu}d^{2s}(\frac {\nu}d) (\log (\frac {\nu}d))(\log\log (\frac {\nu}d)) \Big)
\cr &=& \Big(\sum_{u \in F(K)} \frac{1}{\psi_1(u)\sum_{t|u} t (\log t)(\log\log t)} \Big)\Big(\sum_{\nu \in K} c_\nu^2 \psi_1(\nu) \sum_{\d|\nu} \frac{1}{\d^{2s}}\,\d (\log \d)(\log\log \d) \Big) .
\end{eqnarray*} \vskip 7 pt
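The chain of equalities \eqref{f} is a pure Dirichlet-convolution identity and can be tested numerically. The following illustrative Python sketch (the sample values $s=1$ and $\nu=45=3^2\cdot 5$ are arbitrary choices; the convention $g(1)=0$ reflects $\log 1=0$, and all other divisors of $45$ are at least $3$, so $\log\log$ is well defined) compares both sides:

```python
import math

# Illustrative numerical check of the convolution identity (f), s = 1.
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def jordan(e, n):
    return sum(d ** e * mobius(n // d) for d in divisors(n))

def g(t):  # t * log t * log log t, with the convention g(1) = 0
    return 0.0 if t == 1 else t * math.log(t) * math.log(math.log(t))

nu, s = 45, 1
lhs = sum(jordan(2 * s, nu // u) * sum(g(t) for t in divisors(u))
          for u in divisors(nu))
rhs = sum(d ** (2 * s) * g(nu // d) for d in divisors(nu))
assert abs(lhs - rhs) <= 1e-9 * max(1.0, abs(rhs))
```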
From the trivial estimate $\sum_{t|u}t (\log t)(\log\log t)\ge u (\log u)(\log\log u)$, it follows that
\begin{eqnarray} \label{s} \sum_{k,\ell=1}^n
c_k c_\ell\frac{(k,\ell)^{2s}}{k^s\ell^s}
&\le& \Big(\sum_{u \ge 1 } \frac{1}{u (\log u)(\log\log u) (\log\log\log u)^\eta } \Big)
\cr & &\times \Big(\sum_{\nu \in K} c_\nu^2 (\log\log\log \nu)^\eta\sum_{\d|\nu} \frac{ (\log \d)(\log\log \d) }{\d^{2s-1}} \Big)
\cr & = & C(\eta)\ \sum_{\nu \in K} c_\nu^2 (\log\log\log \nu)^\eta\sum_{\d|\nu} \frac{ (\log \d)(\log\log \d) }{\d^{2s-1}}
.
\end{eqnarray} \end{proof}
\begin{proof}[\cmit Proof of Theorem \ref{t3}] \rm Letting $s=1$ in Theorem \ref{t5} we get \eqref{approx}; next, using Theorem \ref{t1}, we obtain \begin{eqnarray} \label{1} \sum_{k,\ell=1}^n
c_k c_\ell\frac{(k,\ell)^{2}}{k\ell}
&\le& C(\eta)\, \sum_{\nu \in K} c_\nu^2 (\log \log \nu)^2 (\log\log\log \nu)^{1+\eta} , \end{eqnarray}
which is \eqref{approx1}, and thus proves Theorem \ref{t3}.
\end{proof}
\rm \vskip 3 pt
\section{\bf Concluding Remarks.} \label{s7}
\rm
The proof of Theorem \ref{t2} can be adapted with no difficulty to similar arithmetical functions, for instance with powers of $\log\log d$, but not to the functions
$S_k(n)$, $k\ge 1$,
which specifically depend on a derivation formula, see after \eqref{sisu2}.
We remark that a simple convexity argument shows that
\begin{eqnarray}\label{Phietamin}\limsup_{n\to \infty}\ \frac{S_k(n)}{ (\log\log n)^{1+k}} &\ge&e^{\g} . \end{eqnarray}
Let indeed $X$ be a discrete random variable equal to $\log d$ if $d|n$, with probability $1/(d\s_{-1}(n))$. Then,
$${\mathbb E \,} X^k= \sum_{d|n} \frac{(\log d)^k}{d\s_{-1}(n)}\ge ({\mathbb E \,} X)^k= \big(\sum_{d|n} \frac{\log d}{d\s_{-1}(n)}\big)^k.$$ Whence,
$$ S_k(n)=\sum_{d|n} \frac{(\log d)^k}{d }\ge \s_{-1}(n)^{1-k}\big(\sum_{d|n} \frac{\log d}{d}\big)^k.$$ As $\s_{-1}(n)\le (1+o(1))e^\g\log\log n$,
by using \eqref{lbeta1} we deduce that \begin{eqnarray*} S_k(n_j)&\ge& (1+o(1))e^{(1-k)\g}(\log\log n_j)^{1-k}\big(e^\g(\log\log n_j)^2\big)^k\cr &= &(1+o(1))e^{\g}(\log\log n_j)^{1+k}. \end{eqnarray*}
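This convexity bound can be checked numerically; the following illustrative Python sketch does so for the arbitrary sample $n=360$, $k=2$:

```python
import math

# Illustrative check of the convexity bound
# S_k(n) >= sigma_{-1}(n)^{1-k} * (sum_{d|n} log d / d)^k.
n, k = 360, 2
divs = [d for d in range(1, n + 1) if n % d == 0]

S_k = sum(math.log(d) ** k / d for d in divs)
sigma_m1 = sum(1 / d for d in divs)
first_moment = sum(math.log(d) / d for d in divs)

assert S_k >= sigma_m1 ** (1 - k) * first_moment ** k
```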
Moreover for integers $n$ having sufficiently spaced prime divisors, this lower bound is optimal. More precisely, there exists a constant $C(k)$ depending on $k$ only, such that for any integer $n=\prod_{i=1}^r p_i^{\a_i}$ satisfying the condition $\sum_{i=1}^r\frac{1}{p_i-1}<2^{1-k}, $ one has \begin{eqnarray}\label{Phietaminmajex}S_k(n)\ \le \ C(k)(\log\log n)^{k} \s_{-1}(n) . \end{eqnarray}
As $\s_{-1}(n)\le C\log\log n$, it follows that $S_k(n)\ \le \ C(k)(\log\log n)^{1+k}$.
\vskip 7 pt
\vskip 3pt \hskip -2pt We conclude with some remarks concerning Davenport's function $w(n)$. First, if $p_1,\ldots,p_r$ are the $r$ first consecutive prime numbers and $n=p_1 \ldots p_r$, then $w( n)\sim\log\,\o(n)$. Next, the obvious bound $w( n)\ll\log\log\log n$ holds true when the prime divisors of $n$ are large, for instance when, for a given positive number $B$, these prime divisors, say $p_1,\ldots, p_r$, satisfy
\begin{eqnarray}\label{prop.pfinite} \sum_{j=1}^r\frac{\log p_{j}}{p_{j}} \le B \qq \hbox{ and} \qq p_1\ldots p_r\gg e^{e^B}.
\end{eqnarray}
More generally, one can establish the following result. Let $\{p_i, i\ge 1\}$ be an increasing sequence of prime numbers enjoying the following property \begin{eqnarray}\label{prop.p} p_1\ldots p_s&\le & p_{s+1}\qq\qq s=1,2,\ldots\, .
\end{eqnarray}
Numbers of the form $n=p_1\ldots p_\nu$ with $p_1\ldots p_{i-1}\le p_i$, $2\le i\le \nu$, $\nu=1,2,\ldots$ appear as extremal numbers in some divisor questions, see Erd\H os and Hall \cite{EH}.
\begin{lemma}\label{b(n)}Let $\{p_i, i\ge 1\}$ be an increasing sequence of prime numbers satisfying condition \eqref{prop.p}. There exists a constant $C$, such that if $p_1\ge C$, then for any integer $n= p_1^{\a_1}\ldots p_r^{\a_r}$ such that $\a_i\ge 1$ for each $i$, we have $w( n)\le \log\log\log n$. \end{lemma} \begin{proof}[{\cmit Proof.}] \rm We use the following inequality. Let $0<\theta<1$. There exists a number $h_\theta$ such that for any $h\ge h_\theta$ and any $H$ such that $e^{\frac{\theta}{(1-\theta)\log 2}}\le H\le h$, we have \begin{eqnarray}\label{hH} h&\le &e^h\, \log \frac{\log(H+h)}{\log H}\, .
\end{eqnarray} Indeed, note that $\log (1+x) \ge \theta x$ if $0\le x \le (1-\theta)/\theta$. Let $h_\theta$ be such that if $h\ge h_\theta$, then $h\log h \le \theta(\log 2) e^h$. Thus
\begin{eqnarray*}h& \le & e^h\,\theta\frac{\log 2}{\log h}\le e^h\, \theta\frac{\log 2}{\log H} \le e^h\,\log \Big(1+\frac{\log 2}{\log H}\Big)=e^h\,\log \Big(\frac{\log 2H}{\log H}\Big)\cr&\le& e^h\,\log \Big(\frac{\log H+h}{\log H}\Big) \, .
\end{eqnarray*}
We shall show by induction on $r$ that \begin{eqnarray}\label{lll} \sum_{i=1}^r\frac{\log p_i}{p_i}&\le & \log\log\log (p_1\ldots p_r)\, .
\end{eqnarray} This is trivially true if $r=1$, by the convention made in the Introduction and since $p_1\ge 2$. Assume that \eqref{lll} holds for $s=1, \ldots , r-1$. Then, by the induction hypothesis, \begin{eqnarray*} \sum_{i=1}^r\frac{\log p_i}{p_i}&\le & \log\log\log (p_1\ldots p_{r-1} )+ \frac{\log p_r}{p_r}\, .
\end{eqnarray*} Put $H=\sum_{i=1}^{r-1}\log p_i$, $h=\log p_r$. It suffices to show that\begin{eqnarray*} \frac{\log p_r}{p_r}\ =\ \frac{h}{e^h} &\le & \log\frac{\log \sum_{i=1}^r\log p_i}{\log \sum_{i=1}^{r-1}\log p_i}\ =\ \, \log\frac{\log H+h}{\log H},
\end{eqnarray*}
But $H\le h$, by assumption \eqref{prop.p}. Choose $C=e^{e^{\frac{\theta}{(1-\theta)\log 2}}}$. Then $H\ge \log p_1\ge e^{\frac{\theta}{(1-\theta)\log 2}}$. The desired inequality thus follows from \eqref{hH}.
Let $n= p_1^{\a_1}\ldots p_r^{\a_r}$, where $\a_i\ge 1$ for each $i$. We have $w( n)\le \log\log\log (p_1\ldots p_r)\le \log\log\log n$. \end{proof} \vskip 6 pt
\noindent {\bf Acknowledgements.}
The author is very grateful to an anonymous referee for bringing to his attention the papers of Sitaramaiah and Subbarao \cite{SS2,SS1}, and for useful remarks. The author also thanks a second referee for useful remarks.
\end{document} |
\begin{document}
\begin{center}
\noindent { \textsc{ Cobweb Posets and KoDAG Digraphs are Representing Natural Join of Relations, their di-Bigraphs and the Corresponding Adjacency Matrices.}} \\
\noindent Andrzej Krzysztof Kwa\'sniewski \\
\noindent {\erm Member of the Institute of Combinatorics and its Applications }\\ {\erm High School of Mathematics and Applied Informatics} \\
{\erm Kamienna 17, PL-15-021 Bia\l ystok, Poland }\noindent\\
\noindent {\erm e-mail: kwandr@gmail.com}\\
\end{center}
\noindent {\ebf Abstract:}
\noindent {\small The natural join of di-bigraphs (directed bipartite graphs) and of their corresponding adjacency matrices is defined and then applied to investigate the so-called cobweb posets and their $Hasse$ digraphs called $KoDAGs$. $KoDAGs$ are special \textbf{o}rderable \textbf{D}irected \textbf{A}cyclic \textbf{G}raphs which are cover relation digraphs of cobweb posets introduced by the author a few years ago. $KoDAGs$ appear to be a distinguished family of $Ferrers$ digraphs which are the natural join of a corresponding ordering chain of one-direction directed cliques called di-bicliques. These digraphs serve to represent faithfully the corresponding relations of arbitrary arity, so that all relations of arbitrary arity are their subrelations. Being complete in this chain-wise sense (compare with \textbf{K}ompletne, \textbf{K}uratowski $K_{n,m}$ bipartite graphs), their DAG denotation is accompanied by the letter \textbf{K} in front of the descriptive abbreviation oDAG.
\noindent The way to join bipartite digraphs of binary relations into multi-ary relations is the natural join operation, applied either to relations or to their digraph representatives. This natural join operation is denoted here by the $\oplus\!\!\to$ symbol, deliberately referring - in a reminiscent manner - to the direct sum $\oplus$ of adjacency matrices, as becomes the case for disjoint di-bigraphs.}
\noindent Key Words: posets, graded digraphs, Ferrers dimension, natural join
\noindent AMS Classification Numbers: 06A06, 05B20, 05C7
\noindent affiliated to The Internet Gian-Carlo Polish Seminar:
\noindent \emph{http://ii.uwb.edu.pl/akk/sem/sem\_rota.htm}
\section{Introduction to cobweb posets}
\subsection{Notation} One may identify and interpret some classes of digraphs in terms of their associated posets (see \cite{1}, Section 9, \emph{Interpretations in terms of posets}).
\begin{defn}[see \cite{1}] Let $D = (\Phi,\prec)$ be a digraph. $w,v \in \Phi$ are said to be equivalent iff there exists a directed path containing both $w$ and $v$ vertices. We then write: $v \sim w$ for such pairs and denote by $[v]$ the $\sim$ equivalence class of $v \in \Phi$. \end{defn}
\begin{defn}[see \cite{1}] The poset $P(D)$ associated to $D = (\Phi,\prec)$ is the poset $P(D)= (\Phi / \sim , \leq)$ where
$[v] \leq [w]$ iff there exists a directed path from a vertex $x \in [v]$ to a vertex $y \in [w]$. \end{defn}
\noindent \textbf{The graded digraphs case:}
\noindent \textbf{If $D = (\Phi,\prec )$ is a graded digraph then $D = (\Phi, \prec )$} is necessarily \textbf{acyclic}. Then no two elements of $D = (\Phi,\prec )$ are $\sim$ equivalent and thereby $P(D) = (\Phi / \sim , \leq)$ associated to $D = (\Phi,\prec )$ \textbf{is equivalent to}: $P(D) \equiv (\Phi , \leq)$ = transitive, reflexive closure of $D = (\Phi,\prec )$.
\noindent The cobweb posets were introduced in several papers (see \cite{2}-\cite{6} and references therein) in terms of their poset [Hasse] diagrams. Here we deliver their equivalent definition, preceded by preliminary notation and nomenclature.
\noindent \textbf{Notation : nomenclature, di-bicliques and natural join}
\noindent In order to proceed proficiently we adopt the following.
\begin{defn} A digraph $D = (\Phi,\prec\!\!\cdot)$ is transitive irreducible iff transitive $reduction(D) = D$. \end{defn}
\begin{defn} A poset $P(D) = (\Phi, \leq)$ is associated to a graded digraph $D = (\Phi,\prec )$ iff $P(D)$ is the transitive, reflexive closure of $D = (\Phi, \prec )$ . \end{defn}
\noindent \textbf{Obvious}.
\noindent $D = (\Phi,\prec\!\!\cdot)$ is transitive irreducible iff transitive $reduction(D) = D$ iff $D = (\Phi,\prec\!\!\cdot )$ is the Hasse diagram of the poset $P(D) = (\Phi, \leq)$ associated to $D$ $\equiv$ $D = (\Phi,\prec\!\!\cdot )$ is the cover relation $\prec\!\!\cdot$ digraph $\equiv$ $D = (\Phi,\prec\!\!\cdot )$ is the $P(D) = (\Phi, \leq)$ poset diagram.
\subsection{ Further on we adopt also the following nomenclature.}
We shall use, until stated otherwise, the convention $N = \{1,2,...,k,...\}$, $n \in N \cup \{\infty\}$. The Cartesian product $\Phi_1\times...\times\Phi_k$ of pairwise disjoint sets $\Phi_1, ... , \Phi_k$ is a $k$-ary relation, sometimes called the universal relation and from here on the \textbf{K}ompletna relation or \textbf{K}-relation (in Professor \textbf{K}azimierz \textbf{K}uratowski's native language this means complete). The purpose of introducing the letter $K$ is to distinguish it, in what follows [for $k = 2$], from the established content of the complete digraph notions.
\begin{conven}[identification] The binary relation $E \subseteq X\times Y$ is being here identified with its bipartite digraph representation $B = (X \cup Y, E)$. \end{conven}
\noindent \textbf{Notation} $\stackrel{\rightarrow}{K_{m,n}}\equiv B = (X \cup Y, E)$ if $|X|= m$ , $|Y| = n$. Colligate with \textbf{K}uratowski and $K_{m,n}$.
\noindent \textbf{Comment 1.}
\noindent Complete $n$-vertex \textbf{graphs}, for which all pairs of vertices are adjacent, are denoted by $K_n$. The letter $K$ had been chosen in honor of Professor \textbf{K}azimierz \textbf{K}uratowski, a distinguished pioneer in graph theory. The two corresponding widely used concepts for digraphs are complete digraphs (or complete symmetric digraphs), in which every two different vertices are joined by an arc, and complete oriented graphs, i.e. tournament graphs.
\noindent The binary $K$-relation $E = X\times Y$ equivalent to bipartite digraph $B = ( X \cup Y, E) \equiv \stackrel{\rightarrow}{K_{m,n}}$ is called from now on a \textbf{di-biclique} following \cite{6}.
\noindent \textbf{Example of} di-bicliques obtained from \textbf{bicliques} : See Fig. 1.
\noindent If you imagine arrows $\to$ pointing left to right, you would see two examples of \textbf{di-bicliques}
\begin{figure}
\caption{Examples of di-bicliques if edges are replaced by arrows of join direction }
\label{fig:1}
\end{figure}
\noindent If you imagine arrows $\leftarrow$ pointing right to left, you would see other examples of \textbf{di-bicliques}.
\begin{conven}[recall] The binary relation $E \subseteq X\times Y$ is identified with its bipartite digraph $B = ( X \cup Y, E)$ unless otherwise denoted distinctively deliberately. \end{conven}
\noindent \textbf{The natural join.}
\noindent The natural join operation is a binary operation, like the $\Theta$ \textbf{operator in computer science}, denoted here by the $\oplus\!\!\to$ symbol deliberately referring, in a quite reminiscent manner, to the direct sum $\oplus$ of adjacency Boolean matrices and, as a matter of fact and in effect, to the direct sum $\oplus$ of the corresponding biadjacency [reduced] matrices of the digraphs under natural join.
\noindent $\oplus\!\!\to$ is a natural operator for the construction of sequences. $\oplus\!\!\to$ operates on multi-ary relations according to the scheme: $(n+k)_{ary} \oplus\!\!\to (k+m)_{ary} = (n+ k +m)_{ary}$.
\noindent For example: $(1+1)_{ary} \oplus\!\!\to(1+1)_{ary} = (1+ 1 +1)_{ary}$ , binary $\oplus\!\!\to$ binary = ternary.
\noindent Accordingly, an action of $\oplus\!\!\to$ on the adjacency matrices of these multi-ary relations' digraphs is to be designed soon in what follows.
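Before turning to matrices, the arity scheme above can be illustrated on relations given as sets of tuples. The following is a minimal Python sketch under our own naming (the function \emph{natural\_join} and the sample sets are illustrative choices, not notation from the text); it matches the last $k$ coordinates of one factor against the first $k$ coordinates of the other:

```python
# Minimal sketch of the natural join on relations given as sets of tuples.
# Assumption (ours): the join matches the last k coordinates of each tuple
# of R against the first k coordinates of each tuple of S.
def natural_join(R, S, k):
    """(n+k)-ary R joined with (k+m)-ary S gives an (n+k+m)-ary relation."""
    return {r + s[k:] for r in R for s in S if r[-k:] == s[:k]}

# binary (join) binary = ternary:
R = {(1, 'a'), (2, 'a'), (2, 'b')}   # subset of Phi_0 x Phi_1
S = {('a', 'x'), ('b', 'y')}         # subset of Phi_1 x Phi_2
T = natural_join(R, S, 1)            # ternary relation on Phi_0 x Phi_1 x Phi_2
```

Here a pair joined with a pair yields triples, exactly the $(1+1)_{ary} \oplus\!\!\to (1+1)_{ary} = (1+1+1)_{ary}$ scheme.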
\noindent \textbf{Domain-Codomain $F$-sequence condition} $\mathrm{dom}(R_{k+1}) = \mathrm{ran} (R_k)$, $k=0,1,2,...$ .
\noindent Consider any natural-number-valued sequence $F = \{F_n\}_{n\geq 0}$, and then any chain of binary relations defined on pairwise disjoint finite sets whose cardinalities are appointed by the $F$-sequence values. To start, we first specify the relations' domain-codomain $F$-sequence.
\noindent \textbf{Domain-Codomain $F$-sequence $(|\Phi_n| = F_n )$}
$$
\Phi_0,\Phi_1,...,\Phi_i,...\ \ \ \Phi_k\cap\Phi_n = \emptyset \ \ \mathrm{for} \ \ k \neq n,\ \ |\Phi_n|=F_n; \ \ i,k,n=0,1,2,... $$
\noindent Let $\Phi=\bigcup_{k=0}^n\Phi_k$ be the corresponding ordered partition [anticipating: $\Phi$ is the vertex set of $D = (\Phi,\prec\!\!\cdot)$ and of its transitive, reflexive closure $(\Phi, \leq)$]. Impose the condition $\mathrm{dom} (R_{k+1}) = \mathrm{ran} (R_k)$, $k = 0,1,2,...$. What we get is a chain of binary relations.
\begin{defn} [Relation's chain] Let $\Phi=\bigcup_{k=0}^n\Phi_k$, $\Phi_k \cap \Phi_n = \emptyset$ for $k \neq n$, be the ordered partition of the set $\Phi$. Let a sequence of binary relations be given such that $$
R_0,R_1,...,R_i,...,R_{i+n},...,\ \ R_k\subseteq\Phi_k\times\Phi_{k+1},\ \ \mathrm{dom}(R_{k+1}) = \mathrm{ran}(R_k). $$
Then the sequence $\langle R_k\rangle_{k\geq 0}$ is called a natural join (binary) \textbf{relation's chain}. \end{defn}
\noindent The extension to natural join chains of relations of varying arity is straightforward.
\noindent As $\mathrm{dom}(R_{k+1}) = \mathrm{ran}(R_k)$ necessarily holds for a relations' natural join chain, any given binary relation's chain is not just a sequence; therefore we use the ``link to link'' notation, for $k, i, n = 1,2,3,...$, ready for relational database applications: $$
R_0 \oplus\!\!\to R_1 \oplus\!\!\to ... \oplus\!\!\to R_i \oplus\!\!\to ... \oplus\!\!\to R_{i+n},...\ \ \mathrm{is\ an\ } F\mathrm{-chain\ of\ binary\ relations} $$
\noindent where $\oplus\!\!\to$ denotes the natural join of relations as well as both the natural join of their bipartite digraphs and the natural join of their representative adjacency matrices (see Section 3).
\noindent A relation's $F$-chain, naturally represented by [identified with] the chain of its \textbf{bipartite digraphs}
$$
{ R_0 \oplus\!\!\to R_1 \oplus\!\!\to ... \oplus\!\!\to R_i \oplus\!\!\to ... \oplus\!\!\to R_{i+n},... \Leftrightarrow
\atop
\Leftrightarrow B_0 \oplus\!\!\to B_1 \oplus\!\!\to ... \oplus\!\!\to B_i \oplus\!\!\to ... \oplus\!\!\to B_{i+n},...
} $$
\noindent results in the \textbf{$F$-partially ordered set} $\langle\Phi,\leq\rangle$, whose Hasse digraph representation looks like a specific ``cobweb'' image [see the figures below].
\subsection{ Partial order $\leq$}
The partial order relation $\leq$ in the set of all points-vertices is determined uniquely by the above equivalent $F$- chains. Let $x,y \in \Phi=\bigcup_{k=0}^n\Phi_k$ and let $k, i = 0,1,2,...$. Then
\begin{equation}\label{eq:leq}
x\leq y \ \Leftrightarrow\ x = y \ \vee\ \left[\, x \in \Phi_i,\ y \in \Phi_{i+k},\ k>0,\ \mathrm{and}\ \ x\,(R_i\copyright...\copyright R_{i+k-1})\,y \,\right] \end{equation}
\noindent where ``$\copyright$'' stands for the [Boolean] composition of binary relations.
\noindent \textbf{Relation ($\leq$) defined equivalently }:
\noindent $x \leq y$ in $(\Phi,\leq)$ iff either $x=y$ or there exists a directed path from $x$ to $y$; $x,y \in \Phi$.
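This comparability test via Boolean composition of the chain's links can be sketched in Python (the names \emph{compose}, \emph{leq}, \emph{chain}, \emph{level\_of} and the tiny example are our own illustrative choices, not notation from the text):

```python
def compose(R, S):
    """Boolean composition of binary relations given as sets of pairs."""
    return {(x, z) for (x, y) in R for (w, z) in S if y == w}

def leq(x, y, chain, level_of):
    """x <= y iff x == y, or x is related to y through the composed links
    R_i o ... o R_{j-1}, where i and j are the levels of x and y."""
    if x == y:
        return True
    i, j = level_of[x], level_of[y]
    if i >= j:
        return False                     # same level or downward: incomparable
    R = chain[i]
    for k in range(i + 1, j):
        R = compose(R, chain[k])
    return (x, y) in R

# Tiny cobweb-like chain: Phi_0 = {0}, Phi_1 = {1, 2}, Phi_2 = {3}.
chain = [{(0, 1), (0, 2)}, {(1, 3), (2, 3)}]
level_of = {0: 0, 1: 1, 2: 1, 3: 2}
```

Vertices on the same level form an antichain: `leq(1, 2, ...)` is false, while `leq(0, 3, ...)` holds through the composed links.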
\noindent Let now $R_k = \Phi_k\times\Phi_{k+1}$, $k \in N \cup\{0\}$. For ``historical'' reasons \cite{2}-\cite{6} we shall call such a partially ordered set $\Pi = \langle\Phi,\leq\rangle$ the \textbf{cobweb poset}, as its Hasse digraph representation looks like a specific ``cobweb'' image (imagine and/or draw also its transitive and reflexive closure digraph $\langle\Phi,\leq\rangle$. Cobweb? Super-cobweb!... with fog droplet loops?).
\subsection{ Cobweb posets ($\Pi = \langle\Phi,\leq\rangle$) }
\begin{conven}[recall]
The binary relation $E \subseteq X\times Y$ is identified with its bipartite digraph $B = ( X \cup Y, E)\equiv \stackrel{\rightarrow}{K_{m,n}}$ where $|X|= m, |Y| = n$. \end{conven}
\begin{defn} [cobweb poset] Let $D = (\Phi, \prec\!\!\cdot )$ be a transitive irreducible digraph. Let $n \in N \cup \{\infty\}$. Let $D$ be a natural join $D = \oplus\!\!\to_{k=0}^n B_k$ of di-bicliques $B_k = (\Phi_k \cup \Phi_{k+1}, \Phi_k\times\Phi_{k+1})$. Then the digraph $D = (\Phi,\prec\!\!\cdot )$ is graded. The poset $\Pi (D)$ associated to this graded digraph $D = (\Phi,\prec\!\!\cdot )$ is called a cobweb poset. \end{defn}
\begin{conven} In case we want to underline that we deal with a finite cobweb poset (a subposet of an appropriate, for example infinite, $F$-cobweb poset $\Pi (D)$) we shall use a subscript and write $P_n$. \end{conven} \noindent See: \cite{2}-\cite{6}, \cite{10}, \cite{13}, \cite{18}.
\noindent\textbf{Comment 2.}
\noindent \textbf{Graded graph} is a \textbf{natural join} of bipartite graphs that form a chain of consecutive levels [i.e. graded \textbf{graphs'} antichains]
\noindent \textbf{Graded digraph} is a \textbf{natural join} of bipartite digraphs that form a chain of consecutive levels [i.e. graded \textbf{digraphs'} antichains]
\noindent \textbf{Comment 3. } ({\it Definition 6. Recapitulation in brief.})
\noindent A cobweb poset is the poset $\Pi = \langle\Phi,\leq\rangle$, where $\Phi = \bigcup_{k=0}^n \Phi_k$ and $\prec\!\!\cdot = \oplus\!\!\to_{k=0}^{n-1} \Phi_k\times\Phi_{k+1}$, $n \in N \cup \{\infty\}$. \noindent Equivalently, a cobweb poset is the poset $\Pi = \langle\Phi,\leq\rangle$, where $\Phi = \bigcup_{k=0}^n \Phi_k$ and $\prec\!\!\cdot = \oplus\!\!\to_{k=0}^{n-1} \stackrel{\rightarrow}{K_{k,k+1}}$, $n \in N \cup \{\infty\}$, \noindent where $\leq$ is the transitive, reflexive closure of $\prec\!\!\cdot$.
\noindent \textbf{Comment 4. } ({\it $F$-partial ordered set})
\noindent The cobweb poset $\Pi = \langle\Phi,\leq\rangle$ is naturally graded and sequence-$F$-denominated; thereby we sometimes call it the \textbf{$F$-partially ordered set $\langle\Phi,\leq\rangle$}.
\section{ Dimension of cobweb posets revisited }
\subsection{ oDAG \cite{7} }
\begin{observen} [cobwebs are oDAGs] In \cite{2} it was observed that cobweb posets' Hasse diagrams are members of the so-called oDAG family, i.e. cobweb posets' Hasse diagrams are orderable Directed Acyclic Graphs, which is equivalent to saying that the poset $P(D) = (\Phi, \leq)$ associated to $D = (\Phi,\prec\!\!\cdot )$ is of dimension 2. \end{observen}
\noindent \textbf{Recall:} DAGs, hence graded digraphs with minimal elements, may always be considered, up to digraph isomorphism, as natural digraphs \cite{8}, i.e. digraphs with a natural labeling (i.e. $x_i < x_j \Rightarrow i < j$).
\begin{defn}[Plotnikov - see \cite{7}, \cite{2} and then below] A digraph $D = (\Phi,\prec)$ is called an orderable digraph (oDAG) if there exists a dim 2 poset such that its Hasse diagram coincides with the digraph $D$. \end{defn}
\noindent The statement from \cite{2} may be now restated as follows:
\begin{observen} [oDAG] Cobweb $P(D) =(\Phi , \leq)$ posets' Hasse diagrams $D = (\Phi,\prec\!\!\cdot )$ are oDAGs. \end{observen}
\noindent \emph{Proof}: Obvious. Cobweb posets are posets with the minimal elements set $\Phi_0$, and their Hasse diagrams are DAGs. Cobweb posets are then dim 2 posets, as their Hasse digraphs are the intersection of a natural labeling linear order $L_1$ and its ``dual'' $L_2$, denominated correspondingly in a standard way as follows. $L_1$ = natural labeling: choose for the topological ordering $L_1$ the labeling of the minimal elements set $\Phi_0$ with labels $1,2,...$, from the left to the right (see Fig. 2), then proceed up to the next level $\Phi_1$ and continue the labeling ``$\to$'' from the left to the right [$\Phi_1$ is now treated as the set of minimal elements once $\Phi_0$ is removed], and so on. Apply the procedure of subsequent removal of minimal elements, i.e. removal of subsequent labeled levels $\Phi_k$, labeling the vertices along the levels from the left to the right.
\noindent $L_2$ = ``dual'' natural labeling: choose for the topological ordering $L_2$ the labeling of the minimal elements set $\Phi_0$ with labels $1,2,...$, from the right to the left (see Fig. 1), then proceed up to the next level $\Phi_1$ and continue the labeling ``$\leftarrow$'' from the right to the left [$\Phi_1$ is now treated as the set of minimal elements once $\Phi_0$ is removed], and so on. Apply the procedure of subsequent removal of minimal elements, i.e. removal of subsequent labeled levels $\Phi_k$, labeling now the vertices along the levels from the right to the left. q.e.d.
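The two labelings described in the proof can be sketched in a few lines of Python; the list \emph{levels} plays the role of the sequence $\Phi_0, \Phi_1, ...$, and all names here are our own illustrative choices. The final check confirms that every cover arc increases in both labelings, i.e. the cover relation lies in the intersection of the two linear orders:

```python
def labelings(levels):
    """L1: label levels bottom-up, left to right; L2: bottom-up, right to left.
    Each is returned as the list of vertices in label order 1, 2, ..."""
    L1 = [v for level in levels for v in level]
    L2 = [v for level in levels for v in reversed(level)]
    return L1, L2

levels = [['a'], ['b', 'c'], ['d', 'e', 'f']]       # Phi_0, Phi_1, Phi_2
L1, L2 = labelings(levels)
pos1 = {v: i for i, v in enumerate(L1)}
pos2 = {v: i for i, v in enumerate(L2)}
# every cover arc u -> v (u on level k, v on level k+1) increases in both orders:
covers = [(u, v) for k in range(len(levels) - 1)
          for u in levels[k] for v in levels[k + 1]]
```

Since every arc goes from a lower level to the next one, both orderings are topological, as the removal-of-minimal-elements procedure requires.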
\subsection{ A brief history of the short life of the oDAG name }
On the history of the oDAG nomenclature, with input from David Halitsky and others, one is expected to see more in \cite{15}. See also the December $2008$ subject of The Internet Gian Carlo Rota Polish Seminar (\emph{http://ii.uwb.edu.pl/akk/sem/sem\_rota.htm}). Here we present the sub-history that led the author to note that cobweb posets are oDAGs.
\noindent According to Anatoly Plotnikov, the concept and the name of oDAG were introduced by David Halitsky from Cumulative Inquiry in 2004.
\noindent \textbf{oDAG-2004} (Plotnikov)
\noindent Quote 1. \emph{"A digraph $G \in D_n$ will be called orderable (oDAG) if there exists are dim 2 poset such that its Hasse diagram coincide with the digraph $G$"}.
\noindent Quote 1 comes from \cite{9} in \cite{2}, i.e. from A.D. Plotnikov, \emph{A formal approach to the oDAG/POSET problem} (2004), \emph{$html://www.cumulativeinquiry.com/Problems/solut2.pdf$} (submitted for publication, March 2005).
\noindent The quote of the Quote 1 is to be found in \cite{9}
\noindent oDAG-2005 \cite{2}
\noindent Quote 2 \emph{"A digraph G is called the orderable digraph (oDAG) if there exists a dim 2 poset such that its Hasse diagram coincides with the digraph $G$"}. \cite{2}
\noindent oDAG-2006 \cite{7}
\noindent Quote 3 \emph{"A digraph G is called the orderable if there exists a dim 2 poset such that its Hasse diagram coincides with the digraph $G$"}. \cite{7}
\noindent For further use of oDAG nomenclature see \cite{6}, and references therein. For further references and recent results on cobweb posets see \cite{10} and \cite{11}.
\begin{defn} [KoDAG] The transitive and reflexive reduction of cobweb poset $\Pi = \langle\Phi,\leq\rangle$ i.e. posets' $\Pi$ cover relation digraph [Hasse diagram] $D = (\Phi, \prec\!\!\cdot)$ is called KoDAG. \end{defn}
\noindent See \cite{11}-\cite{14}.
\noindent \textbf{Comment 5.} Apply Comment 1.
\noindent \textbf{Why} do \textbf{we stick} to calling KoDAGs, i.e. graded digraphs with associated poset $\Pi = \langle\Phi,\leq\rangle$, \textbf{orderable} DAGs in their own right, independently of the nomenclature quoted?
\noindent Let $D = (\Phi,\prec\!\!\cdot )$ denote now any transitive irreducible DAG [for example any \textbf{graded} digraph, including a KoDAG digraph as above]. Let the poset $P(D) = (\Phi, \leq)$ be associated to $D = (\Phi,\prec\!\!\cdot)$.
\begin{defn} [Ferrers dimension] We say that the poset $P(D) = (\Phi, \leq)$ is of Ferrers dimension $k$ iff it is associated to $D = (\Phi,\prec\!\!\cdot )$ of Ferrers dimension $k$. \end{defn}
\begin{observen} [Ferrers dimension] Cobweb posets are posets of Ferrers dimension equal to one. \end{observen}
\noindent \emph{Proof.} Apply any of the many characterizations of Ferrers digraphs to see that cobweb posets' cover relation digraphs [Hasse diagrams] are Ferrers digraphs. For example, consult Section 3 and see that the biadjacency matrix does not contain either of the two $2\times 2$ permutation matrices.
\noindent \textbf{Comment 6. } Any KoDAG digraph $D = (\Phi,\prec\!\!\cdot )$ is a digraph stable under transitive and reflexive reduction, i.e. the [``irreducible''] Hasse portrait of the Ferrers relation $\prec\!\!\cdot$. The positions of 1's in the biadjacency [reduced adjacency] matrix display the support of the Ferrers relation $\prec\!\!\cdot$. $D = (\Phi,\prec\!\!\cdot)$ is then an interval order relation digraph. The digraph $(\Phi,\leq)$ of the cobweb poset $P(D) = (\Phi, \leq)$ associated to the KoDAG digraph $D = (\Phi,\prec\!\!\cdot )$ is the portrait of the Ferrers relation $\leq$. The positions of 1's in its biadjacency [reduced adjacency] matrix display the support of the Ferrers relation $\leq$. Note: for $F$-denominated cobweb posets the nomenclature identifies: biadjacency [reduced adjacency] matrix $\equiv$ zeta matrix, i.e. the incidence matrix $\zeta_F$ of the $F$-poset (see Fig. $\zeta_N$ and Fig. $\zeta_F$). Recall that this $F$-partially ordered set $\langle\Phi,\leq\rangle$ is a natural join of an $F$-chain of binary $K$-relations (complete, or universal, relations as they are sometimes called). These relations are represented by di-bicliques $\stackrel{\rightarrow}{K_{k,k+1}}$, which are on their own Ferrers dimension one digraphs. As for other, not necessarily $K$-relations' chains, we may end up with Ferrers or non-Ferrers digraphs in the corresponding di-bigraphs' chain. See below, then Section 4, and more in \cite{15}.
\section{ The natural join $\oplus\!\!\to$ operation }
We define here the adjacency matrix representation of the natural join $\oplus\!\!\to$ operation.
\subsection{ Recall }
Let $D(R) = (V(R)\cup W(R),E(R)) \equiv (V \cup W, E)$; $V\cap W = \emptyset$, $E (R) \subseteq V\times W$. Let $D(R)$ denote from here on the \emph{bipartite digraph of the binary relation} $R$ with $\mathrm{dom}(R) = V$ and $\mathrm{ran}(R)=W$. Colligate with the anticipated examples $R = R_k \subseteq \Phi_k\times\Phi_{k+1} \equiv \stackrel{\rightarrow}{K_{k,k+1}}$, $V(R)\cup W(R)= \Phi_k \cup \Phi_{k+1}$.
\subsection{ The adjacency matrices and their natural join.}
The adjacency matrix $\mathbf{A}$ of a bipartite graph with \textbf{biadjacency} (reduced adjacency \cite{16}) matrix $\mathbf{B}$ is given by
$$
\mathbf{A} = \left(
\begin{array}{cc}
0 & \mathbf{B} \\
\mathbf{B}^T & 0 \\
\end{array}
\right). $$
\begin{defn} The adjacency matrix $\mathbf{A}[D]$ of a bipartite digraph $D(R)= ( P\cup L , E \subseteq P\times L)$ with biadjacency matrix $\mathbf{B}$ is given by
$$
\mathbf{A}[D] = \left(
\begin{array}{cc}
0_{k,k} & \mathbf{B}(k\times m) \\
0_{m,k} & 0_{m,m} \\
\end{array}
\right). $$
where $k = | P |$, $m = | L |$. \end{defn}
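This block construction can be sketched directly in Python/NumPy (the function name is our own illustrative choice):

```python
import numpy as np

def adjacency_from_biadjacency(B):
    """(k+m)x(k+m) adjacency matrix of the bipartite digraph D(R) with
    k x m biadjacency matrix B; all arcs go from the first part to the second."""
    k, m = B.shape
    Z = lambda r, c: np.zeros((r, c), dtype=int)
    return np.block([[Z(k, k), B],
                     [Z(m, k), Z(m, m)]])

B = np.ones((2, 3), dtype=int)            # di-biclique K_{2,3}->
A = adjacency_from_biadjacency(B)         # 5 x 5, ones only in the upper-right block
```

Only the upper-right block is nonzero, reflecting that all arcs are directed from $P$ to $L$.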
\begin{conven}
$R \copyright S$ = composition of binary relations $R$ and $S$ $\Leftrightarrow$ $\mathbf{B}_{R\copyright S} = \mathbf{B}_R \copyright \mathbf{B}_S$, where ($|V|= k$, $|W|= m$) $\mathbf{B}_R (k \times m) \equiv \mathbf{B}_R$ is the $(k \times m)$ \textbf{biadjacency} [or, by another name, \textbf{reduced} adjacency] matrix of the bipartite relation's $R$ digraph $B(R)$, and $\copyright$, apart from relations composition, denotes also the Boolean multiplication of these rectangular biadjacency Boolean matrices $\mathbf{B}_R$, $\mathbf{B}_S$. \end{conven}
\noindent What is the form of these matrices? The answer lies in the block structure of the standard square $(n \times n)$ adjacency matrix $\mathbf{A}[D(R)]$, $n = k + m$. The standard square adjacency matrix $\mathbf{A}[D(R)]$ of the bipartite digraph $D(R)$ has the following apparently recognizable reduced block structure [$O_{k\times m}$ stands for the $(k \times m)$ zero matrix]:
$$
\mathbf{A}[D(R)] = \left[
\begin{array}{ll}
O_{k\times k} & \mathbf{A}_R(k\times m) \\
O_{m\times k} & O_{m\times m}
\end{array}
\right] $$
\noindent Let $D(S) = (W(S)\cup T(S),E(S))$; $W\cap T = \emptyset$, $E (S) \subseteq W\times T;$ ($|W|= m, |T|= s$); hence
$$
\mathbf{A}[D(S)] = \left[
\begin{array}{ll}
O_{m\times m} & \mathbf{A}_S(m\times s) \\
O_{s\times m} & O_{s\times s}
\end{array}
\right] $$
\begin{defn} [natural join condition] The ordered pair of matrices $\langle \mathbf{A_1}, \mathbf{A_2} \rangle$ is said to satisfy the natural join condition iff they have the block structure of $\mathbf{A}[D(R)]$ and $\mathbf{A}[D(S)]$ as above i.e. iff they might be identified accordingly : $\mathbf{A_1} = \mathbf{A}[D(R)]$ and $\mathbf{A_2} = \mathbf{A}[D(S)]$. \end{defn}
\noindent Correspondingly if two given digraphs $G_1$ and $G_2$ are such that their adjacency matrices $\mathbf{A_1} = \mathbf{A}[G_1]$ and $\mathbf{A_2} =\mathbf{A}[G_2]$ do satisfy the natural join condition we shall say that $G_1$ and $G_2$ satisfy the natural join condition.
For matrices satisfying the natural join condition one may define what follows.
\noindent First we define the \textbf{Boolean reduced}, or \textbf{natural join, composition} $\copyright\!\!\to$, and secondly the natural join $\oplus\!\!\to$ of adjacency matrices satisfying the natural join condition.
\begin{defn} ($\copyright\!\!\to$ composition)
$$
\mathbf{A}[D(R\copyright S)] =: \mathbf{A}[D(R)] \copyright\!\!\to \mathbf{A}[D(S)] = \left[
\begin{array}{ll}
O_{k\times k} & \mathbf{A}_{R\copyright S}(k\times s) \\
O_{s\times k} & O_{s\times s}
\end{array}
\right] $$
\noindent where $\mathbf{A}_{R\copyright S}(k\times s) = \mathbf{A}_R(k\times m) \copyright \mathbf{A}_S(m\times s)$. \end{defn}
\noindent This composition works according to the scheme: $$
[(k+m) \times (k + m )] \copyright\!\!\to [(m + s) \times (m + s)] = [(k+ s) \times (k+ s)] . $$
\noindent \textbf{Comment 7.}
\noindent The adequate projection factors out the intermediate set held in common, $\mathrm{dom}(S) = \mathrm{ran}(R)=W$, $|W|= m$.
\noindent The above Boolean reduced composition $\copyright\!\!\to$ of adjacency matrices then technically reduces to the calculation of just the Boolean product of the \textbf{reduced} rectangular adjacency matrices of the bipartite relations' digraphs.
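In NumPy this Boolean product of the reduced (biadjacency) matrices can be sketched as follows (names and the small example are our own illustrative choices):

```python
import numpy as np

def bool_compose(BR, BS):
    """Biadjacency matrix of R composed with S: Boolean product of the
    (k x m) and (m x s) biadjacency matrices, yielding a (k x s) matrix."""
    return ((BR.astype(int) @ BS.astype(int)) > 0).astype(int)

# Composing two complete (K-relation) links gives the complete k x s relation:
BR = np.ones((2, 3), dtype=int)      # biadjacency of K_{2,3}->
BS = np.ones((3, 4), dtype=int)      # biadjacency of K_{3,4}->
BC = bool_compose(BR, BS)            # biadjacency of the composed relation
```

The thresholding `> 0` is what makes the product Boolean rather than integer-counting.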
\noindent However, we are now in need of the Boolean natural join product $\oplus\!\!\to$ of adjacency matrices, already announced at the beginning of this presentation. Let us now define it.
\noindent As for the \textbf{natural join} notion we aim at the morphism correspondence: $$
R \oplus\!\!\to S \Leftrightarrow M_{R\oplus\!\!\to S} = M_R \oplus\!\!\to M_S $$
\noindent where $R \oplus\!\!\to S$ = the natural join of binary relations $R$ and $S$, while $M_{R\oplus\!\!\to S} = M_R \oplus\!\!\to M_S$ = the natural join of standard square adjacency matrices (with the customary convention $M[D(R)] \equiv M_R$ adapted). Attention: recall here that the natural join $R \oplus\!\!\to S$ of the above binary relations is a ternary relation, and one obtains $k$-ary relations when more factors undergo the $\oplus\!\!\to$ product. As a matter of fact, \textbf{$\oplus\!\!\to$ operates on multi-ary relations according to the scheme:}
$$
(n+k)_{ary} \oplus\!\!\to (k+m)_{ary} = (n+ k +m)_{ary} . $$
\noindent For example: $(1+1)_{ary} \oplus\!\!\to (1+1)_{ary} = (1+ 1 +1)_{ary}$, binary $\oplus\!\!\to$ binary = ternary.
\noindent Technically, the natural join of $k$-ary and $n$-ary relations is defined in the same way, via the $\oplus\!\!\to$ natural join product of adjacency matrices: the adjacency matrices of these relations' Hasse digraphs.
\noindent With the notation established above we finally define the natural join $\oplus\!\!\to$ of two adjacency matrices as follows:
\begin{defn} [natural join $\oplus\!\!\to$ of adjacency matrices].
$$
A[D(R \oplus\!\!\to S)] =: A[D(R)] \oplus\!\!\to A[D(S)] = $$
$$
= \left[
\begin{array}{ll}
O_{k\times k} & A_R(k\times m) \\
O_{m\times k} & O_{m\times m}
\end{array}
\right]
\oplus\!\!\to
\left[
\begin{array}{ll}
O_{m\times m} & A_S(m\times s) \\
O_{s\times m} & O_{s\times s}
\end{array}
\right] = $$ $$
=\left[
\begin{array}{lll}
O_{k\times k} & A_R(k\times m) & O_{k\times s}\\
O_{m\times k} & O_{m\times m} & A_S(m\times s) \\
O_{s\times k} & O_{s\times m} & O_{s\times s}
\end{array}
\right] $$ \end{defn}
\noindent \textbf{Comment 8}. The adequate projection used in the natural join operation leaves one copy of the ``intermediate'' submatrix $O_{m\times m}$ held in common, and consequently leaves one copy of the ``intermediate'' common $m$, according to the scheme: $$
[(k+m) \times (k + m )] \oplus\!\!\to [(m + s) \times (m + s)] = [(k+ m + s) \times (k+ m + s)] . $$
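A NumPy sketch of this natural join of two adjacency matrices satisfying the natural join condition (the function name and the small $K_{1,2}\!\!\to$, $K_{2,3}\!\!\to$ example are our own illustrative choices):

```python
import numpy as np

def natural_join_adj(A1, A2, k, m, s):
    """Natural join of A1 ((k+m)x(k+m), block A_R upper right) with
    A2 ((m+s)x(m+s), block A_S upper right). One copy of the shared
    m x m middle block is kept, as in the (k+m)+(m+s) -> (k+m+s) scheme."""
    AR = A1[:k, k:]                       # k x m block of A1
    AS = A2[:m, m:]                       # m x s block of A2
    n = k + m + s
    A = np.zeros((n, n), dtype=int)
    A[:k, k:k + m] = AR
    A[k:k + m, k + m:] = AS
    return A

k, m, s = 1, 2, 3
A1 = np.zeros((k + m, k + m), dtype=int); A1[:k, k:] = 1    # di-biclique K_{1,2}->
A2 = np.zeros((m + s, m + s), dtype=int); A2[:m, m:] = 1    # di-biclique K_{2,3}->
A = natural_join_adj(A1, A2, k, m, s)                       # 6 x 6 result
```

The result has exactly the three-by-three block shape displayed in the definition above, with $A_R$ and $A_S$ on the first superdiagonal of blocks.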
\subsection{ The biadjacency matrices of the natural join of adjacency matrices. }
Denote by $B(A)$ the biadjacency matrix of the adjacency matrix $A$.
\noindent Let $A(G)$ denote the adjacency matrix of the digraph $G$, for example of a di-biclique relation digraph. Let $A(G_k)$, $k= 0,1,2,...$, be the sequence of adjacency matrices of the sequence $G_k$, $k=0,1,2,...$, of digraphs. Let us identify $B(A)\equiv B(G)$ as a convention.
\begin{defn} [digraphs natural join] Let digraphs $G_1$ and $G_2$ satisfy the natural join condition. Let us make then the identification $A(G_1 \oplus\!\!\to G_2) \equiv A_1 \oplus\!\!\to A_2$ as definition. The digraph $G_1 \oplus\!\!\to G_2$ is called the digraphs natural join of digraphs $G_1$ and $G_2$. Note that the order is essential. \end{defn}
\noindent We observe at once what follows.
\begin{observen} $$
B (G_1 \oplus\!\!\to G_2) \equiv B (A_1 \oplus\!\!\to A_2) = B(A_1)\oplus B(A_2) \equiv B (G_1)\oplus B(G_2) $$ \end{observen}
\noindent \textbf{Comment 9.} The Observation 4 justifies the notation $\oplus\!\!\to$ for the natural join of relations digraphs and equivalently for the natural join of their adjacency matrices and equivalently for the natural join of relations that these are faithful representatives of.
\noindent As a consequence we have.
\begin{observen} $$
B\left(\oplus\!\!\to_{i=1}^n G_i\right) \equiv B [\oplus\!\!\to_{i=1}^n A(G_i)] = \oplus_{i=1}^n B[A(G_i) ] \equiv \mathrm{diag} (B_1 , B_2 , ..., B_n) = $$ $$
= \left[ \begin{array}{lllll}
B_1 \\
& B_2 \\
& & B_3 \\
& ... & ... & ...\\
& & & & B_n
\end{array} \right] $$
\noindent $n \in N \cup \{\infty\}$. \end{observen}
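The block-diagonal (direct sum) structure of this observation can be checked with a small NumPy sketch (the function name and example are our own illustrative choices):

```python
import numpy as np

def direct_sum(Bs):
    """diag(B_1, ..., B_n): the biadjacency matrix of a natural join chain
    is the direct sum of the factors' biadjacency matrices."""
    rows = sum(B.shape[0] for B in Bs)
    cols = sum(B.shape[1] for B in Bs)
    M = np.zeros((rows, cols), dtype=int)
    r = c = 0
    for B in Bs:
        h, w = B.shape
        M[r:r + h, c:c + w] = B          # place B_i on the block diagonal
        r, c = r + h, c + w
    return M

# Chain K_{1,2}-> (join) K_{2,3}->: biadjacency is diag(1x2 ones, 2x3 ones).
M = direct_sum([np.ones((1, 2), dtype=int), np.ones((2, 3), dtype=int)])
```

This is why the $\oplus\!\!\to$ symbol deliberately echoes the direct sum $\oplus$: on the biadjacency level the natural join literally is a direct sum.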
\subsection{ Applications }
Once any natural-number-valued sequence $F = \{F_n\}_{n\geq 1}$ is chosen, its KoDAG digraph is identified with the Hasse cover relation digraph. Its adjacency matrix $\mathbf{A}_F$, sometimes called the Hasse matrix, is given in a plausible form and in an impressively straightforward way. Just use the fact that the Hasse digraph displaying the cover relation $\prec\!\!\cdot$ is an $F$-chain of coined bipartite digraphs, each one coined with the subsequent one by the natural join operator $\oplus\!\!\to$ [the resemblance of $\oplus\!\!\to$ to the direct matrix sum is not naive: compare with the ``natural join'' of disjoint digraphs having no common set of marked nodes (``attributes'')].
\noindent Note: $I (s\times k)$ stands for the $(s\times k)$ matrix of ones, i.e. $[ I (s\times k) ]_{ij} = 1$; $1 \leq i \leq s$, $1\leq j \leq k$.
\noindent Let us start first with $F = \{F_n\}_{n\geq 1} = N$. See \textbf{Fig. 2}. Then its associated \textbf{$F$-partially ordered set $\langle\Phi,\leq\rangle$} has the following Hasse digraph displaying the cover relation of the $\leq$ partial order.
\noindent The Hasse matrix $\mathbf{A}_N$, i.e. the adjacency matrix of the cover relation digraph, i.e. the adjacency matrix of the Hasse diagram of the $N$-denominated cobweb poset $\langle\Phi,\leq\rangle$, is given by the upper triangular matrix $\mathbf{A}_N$ of the form:
$$
\mathbf{A}_N =
\left[ \begin{array}{llllll}
O_{1\times 1} & I(1\times 2) & O_{1\times \infty} \\
O_{2\times 1} & O_{2\times 2} & I(2\times 3) & O_{2 \times \infty} \\
O_{3\times 1} & O_{3\times 2} & O_{3\times 3} & I(3\times 4) & O_{3 \times \infty} \\
O_{4\times 1} & O_{4\times 2} & O_{4\times 3} & O_{4\times 4} & I(4\times 5) & O_{4 \times \infty} \\
... etc & ... & and & so & on ...
\end{array} \right] $$
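A finite truncation of such a Hasse matrix can be built directly from the level sizes; here is a NumPy sketch (function name ours), shown for the first four levels of $F = N$:

```python
import numpy as np

def hasse_matrix(F):
    """Adjacency (Hasse) matrix of a finite F-cobweb poset truncation:
    all-ones blocks I(F_k x F_{k+1}) sit just above the block diagonal."""
    offsets = np.concatenate(([0], np.cumsum(F)))   # level boundaries
    n = offsets[-1]
    A = np.zeros((n, n), dtype=int)
    for k in range(len(F) - 1):
        A[offsets[k]:offsets[k + 1], offsets[k + 1]:offsets[k + 2]] = 1
    return A

A_N = hasse_matrix([1, 2, 3, 4])   # F = N truncated: levels of sizes 1, 2, 3, 4
```

The matrix is strictly upper triangular, with the ones-blocks $I(F_k\times F_{k+1})$ exactly as in the displayed $\mathbf{A}_N$ above.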
\noindent One may see that the zeta function matrix for the $F = N$ choice is a geometric series in $\mathbf{A}_N$, i.e. the geometric series in the Hasse matrix $\mathbf{A}_N$ of the poset $\langle\Phi,\leq\rangle$:
$$
\zeta = (1 - \mathbf{A}_N)^{-1 \copyright} $$
\noindent Explicitly: $\zeta = (1-\mathbf{A}_N)^{-1\copyright} \equiv I_{\infty \times \infty} + \mathbf{A}_N + \mathbf{A}_N ^{\copyright 2} + ... = $
$$
= \left[ \begin{array}{lllll}
I_{1\times 1} & I(1\times \infty) \\
O_{2\times 1} & I_{2\times 2} & I(2\times\infty) \\
O_{3\times 1} & O_{3\times 2} & I_{3\times 3} & I(3\times\infty) \\
O_{4\times 1} & O_{4\times 2} & O_{4\times 3} & I_{4\times 4} & I(4\times\infty) \\
... etc & ... & and & so & on ...
\end{array}\right] $$
\noindent $\zeta = (1-\mathbf{A}_N)^{-1\copyright}$ because [let $\mathbf{A}_N = \mathbf{A}$]
\noindent $\mathbf{A}^k_{ij} =$ the number of maximal $k$-chains [$k>0$] from $x_0 \in \Phi_i$ to $x_k \in \Phi_j$, i.e. here
$$
\mathbf{A}^k_{ij} = \left\{
\begin{array}{ll}
0 & k\neq j-i \\
\frac{j!}{i!} & k=j-i
\end{array}
\right.
\mathrm{ hence }\ \
\mathbf{A}^{\copyright k}_{ij} = \left\{
\begin{array}{ll}
1 & k = j-i \\
0 & k \neq j-i
\end{array}
\right. . $$
\noindent and the supports (\emph{nonzero matrix blocks}) of $\mathbf{A}^{\copyright k}$ and $\mathbf{A}^{\copyright m}$ are disjoint for $k \neq m$. Indeed: the entry in row $i$ and column $j$ of the inverse $(I - \mathbf{A})^{-1}$ gives \emph{the number of directed paths} from vertex $x_i$ to vertex $x_j$. This can be seen from the geometric series with the adjacency matrix as its argument
$$
(I - \mathbf{A})^{-1} = I + \mathbf{A} + \mathbf{A}^2 + \mathbf{A}^3 + ... $$
\noindent taking care of the fact that the number of paths from $i$ to $j$ equals the number of paths of length $0$ plus the number of paths of length $1$ plus the number of paths of length $2$, etc.
\noindent Therefore the entry in row $i$ and column $j$ of the Boolean inverse $(I - \mathbf{A})^{-1\copyright}$ tells whether there exists a \emph{directed path} from vertex $i$ to vertex $j$ (Boolean value 1) or not (Boolean value 0), i.e. whether these vertices are comparable, i.e. whether $x_i \leq x_j$ or not.
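The Boolean geometric series can be computed directly. This NumPy sketch (names ours) builds $\zeta = (1-\mathbf{A})^{-1\copyright}$ for a small three-level cobweb truncation:

```python
import numpy as np

def zeta_matrix(A):
    """Boolean series I + A + A^2 + ...: entry (i, j) is 1 iff there is
    a directed path (possibly of length 0) from vertex i to vertex j."""
    n = A.shape[0]
    Z = np.eye(n, dtype=int)
    P = np.eye(n, dtype=int)
    for _ in range(n):                    # longer paths would repeat a vertex
        P = ((P @ A) > 0).astype(int)     # Boolean power of A
        Z |= P                            # accumulate reachability
    return Z

# Levels {0}, {1, 2}, {3, 4, 5} of an N-cobweb poset truncation:
A = np.zeros((6, 6), dtype=int)
A[0, 1:3] = 1
A[1:3, 3:] = 1
Z = zeta_matrix(A)
```

Same-level vertices come out incomparable (an antichain), while every vertex reaches all vertices of the higher levels, matching the staircase zeta matrices shown below.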
\noindent \textbf{Remark:} In the cases of the Boolean poset $2^N$ and of the ``Ferrand-Zeckendorf'' poset of finite subsets of $N$ without two consecutive elements, considered in \cite{17}, one has
$$
\zeta = exp[\mathbf{A}] = (1-\mathbf{A})^{-1\copyright} \equiv I_{\infty\times\infty} + \mathbf{A} + \mathbf{A}^{\copyright 2} + ... $$
\noindent because in those cases
$$
\mathbf{A}^k_{ij} = \left\{
\begin{array}{ll}
0 & k\neq j-i \\
k! & k = j-i
\end{array}
\right.
\mathrm{ hence }\ \
\frac{1}{k!} \mathbf{A}^k_{ij} = \mathbf{A}^{\copyright k}_{ij} = \left\{
\begin{array}{ll}
1 & k = j-i \\
0 & k \neq j-i
\end{array}
\right. . $$
\noindent How does it go in our $F$-case? Just see $\mathbf{A}_N^{\copyright 2}$ and then form $\mathbf{A}_N^{\copyright 0} \vee \mathbf{A}_N^{\copyright 1} \vee \mathbf{A}_N^{\copyright 2} \vee ...$
\noindent For example:
$$
\mathbf{A}_N^{\copyright 2} = \left[ \begin{array}{lllllll}
O_{1\times 1} & O_{1\times 2} & I(1\times 3) & O_{1\times \infty} \\
O_{2\times 1} & O_{2\times 2} & O_{2\times 3} & I(2 \times 4) & O_{2\times \infty} \\
O_{3\times 1} & O_{3\times 2} & O_{3\times 3} & O_{3\times 4} & I(3 \times 5) & O_{3\times \infty} \\
O_{4\times 1} & O_{4\times 2} & O_{4\times 3} & O_{4\times 4} & O_{4\times 5} & I(4 \times 6) & O_{4\times\infty} \\ ... etc & ... & and & so & on ...
\end{array}\right] $$
\noindent Consequently we arrive at the incidence matrix $\zeta = \mathrm{exp}[\mathbf{A}_N]$ for the natural numbers' cobweb poset displayed by Fig. 3. Note that the incidence matrix $\zeta$, which uniquely represents its corresponding cobweb poset, does exhibit (see below) a staircase structure of zeros above the diagonal, characteristic of the Hasse diagrams of \textbf{all} cobweb posets.
$$ \left[\begin{array}{ccccccccccccccccc} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \cdots\\ 0 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \cdots\\ . & . & . & . & . & . & . & . & . & . & . & . & . & . & . & . & . \cdots\\
\end{array}\right]$$
\noindent \textbf{Figure $\zeta_N$. The incidence matrix $\zeta$ for the natural numbers, i.e. the $N$-cobweb poset}
\noindent \textbf{Comment 9.}
The given $F$-denominated staircase structure of zeros above the diagonal of the zeta matrix $\zeta$ is the \textbf{unique characteristic} of the corresponding \textbf{$F$-KoDAG} Hasse digraphs.
\noindent For example, see Fig. $\zeta_F$ below (from \cite{6}).
$$ \left[\begin{array}{ccccccccccccccccc} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \cdots\\ 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \cdots\\ . & . & . & . & . & . & . & . & . & . & . & . & . & . & . & . & . \cdots\\
\end{array}\right]$$
\noindent \textbf{Figure $\zeta_F$. The incidence matrix $\zeta$ for the Fibonacci cobweb poset associated to \textbf{$F$-KoDAG} Hasse digraph }
\noindent The zeta matrix, i.e. the incidence matrix $\zeta_F$ for the Fibonacci numbers cobweb poset \textbf{[$F$-KoDAG]}, determines completely its incidence algebra and corresponds to the poset with the Hasse diagram displayed in Fig. 3.
\noindent The explicit expression for the zeta matrix $\zeta_F$ via known blocks of zeros and ones, for an arbitrary natural-numbers-valued $F$-sequence, is readily found thanks to the mnemonic efficiency of the author's upside-down notation (see the Appendix in \cite{13}). With this notation, inspired by Gauss, and with the reasoning above repeated with "$k_F$" numbers replacing the natural numbers $k$, one gets, in the spirit of Knuth \cite{18}, the clean result:
$$
\mathbf{A}_F = \left[\begin{array}{llllll}
0_{1_F\times 1_F} & I(1_F \times 2_F) & 0_{1_F \times \infty} \\
0_{2_F\times 1_F} & 0_{2_F\times 2_F} & I(2_F \times 3_F) & 0_{2_F \times \infty} \\
0_{3_F\times 1_F} & 0_{3_F\times 2_F} & 0_{3_F\times 3_F} & I(3_F \times 4_F) & 0_{3_F \times \infty} \\
0_{4_F\times 1_F} & 0_{4_F\times 2_F} & 0_{4_F\times 3_F} & 0_{4_F\times 4_F} & I(4_F \times 5_F) & 0_{4_F \times \infty} \\
... & etc & ... & and\ so\ on & ...
\end{array}\right] $$
\noindent and
$$
\zeta_F = exp_\copyright[\mathbf{A}_F] \equiv (1 - \mathbf{A}_F)^{-1\copyright} \equiv I_{\infty\times\infty} + \mathbf{A}_F + \mathbf{A}_F^{\copyright 2} + ... = $$ $$
= \left[\begin{array}{lllll}
I_{1_F\times 1_F} & I(1_F\times\infty) \\
O_{2_F\times 1_F} & I_{2_F\times 2_F} & I(2_F\times\infty) \\
O_{3_F\times 1_F} & O_{3_F\times 2_F} & I_{3_F\times 3_F} & I(3_F\times\infty) \\
O_{4_F\times 1_F} & O_{4_F\times 2_F} & O_{4_F\times 3_F} & I_{4_F\times 4_F} & I(4_F\times\infty) \\
... & etc & ... & and\ so\ on & ...
\end{array}\right] $$
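As a check on the staircase picture, the $\copyright$-geometric series above can be evaluated mechanically on a finite truncation. The following Python sketch (the function name and the truncation to the first five Fibonacci level sizes $1,1,2,3,5$ are illustrative choices, not from the source) builds the Hasse-diagram adjacency $\mathbf{A}$ from the level sizes and accumulates the Boolean powers $I + \mathbf{A} + \mathbf{A}^2 + \cdots$, which terminate because $\mathbf{A}$ is nilpotent on any finite truncation:

```python
def zeta_matrix(sizes):
    """Truncated zeta matrix of a cobweb poset with the given level sizes.

    The Hasse-diagram adjacency A joins every vertex of level k to every
    vertex of level k+1 (the all-ones blocks I(s_k x s_{k+1})); the zeta
    matrix is the Boolean geometric series I + A + A@A + ..., which stops
    because A is nilpotent on a finite truncation.
    """
    n = sum(sizes)
    level = [k for k, s in enumerate(sizes) for _ in range(s)]
    A = [[1 if level[j] == level[i] + 1 else 0 for j in range(n)]
         for i in range(n)]
    Z = [[int(i == j) for j in range(n)] for i in range(n)]   # start from I
    P = [row[:] for row in Z]                                 # current power A^m
    for _ in range(len(sizes)):
        P = [[int(any(P[i][k] and A[k][j] for k in range(n)))
              for j in range(n)] for i in range(n)]
        Z = [[Z[i][j] | P[i][j] for j in range(n)] for i in range(n)]
    return Z

# first five Fibonacci level sizes 1_F, ..., 5_F = 1, 1, 2, 3, 5
Z = zeta_matrix([1, 1, 2, 3, 5])
assert Z[0] == [1] * 12                 # the minimal vertex lies below everything
assert Z[2][3] == 0 and Z[2][4] == 1    # same level: incomparable; higher level: below
```

The rows reproduce the staircase of zeros above the diagonal seen in Figure $\zeta_F$: a vertex lies below exactly the vertices of all strictly higher levels.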
\noindent \textbf{Comment 10.} (ad "upside down notation")
\noindent Concerning Gauss and Knuth - see remarks in \cite{18} on Gaussian binomial coefficients.
\begin{observen} Let us denote by $\langle\Phi_k\to\Phi_{k+1}\rangle$ (see the author's papers quoted) the di-bicliques denominated by the subsequent levels $\Phi_k, \Phi_{k+1}$ of the graded $F$-poset $P(D) = (\Phi, \leq)$, i.e. the levels $\Phi_k , \Phi_{k+1}$ of its cover relation graded digraph $D = (\Phi,\prec\!\!\cdot)$ [Hasse diagram]. Then
$$
B\left(\oplus\!\!\to_{k=1}^n \langle\Phi_k\to\Phi_{k+1}\rangle \right) = \mathrm{diag}(I_1,I_2,...,I_n) = $$ $$
= \left[ \begin{array}{lllll}
I(1_F\times 2_F) \\
& I(2_F\times 3_F) \\
& & I(3_F\times 4_F) \\
& & ... \\
& & & & I(n_F \times (n+1)_F)
\end{array} \right] $$
\noindent where $I_k \equiv I(k_F \times (k+1)_F)$, $k = 1,...,n$, and where - recall - $I (s\times k)$ stands for the $(s\times k)$ matrix of ones, i.e. $[ I (s\times k) ]_{ij} = 1$ for $1 \leq i \leq s$, $1\leq j \leq k$, and $n \in N \cup \{\infty\}$. \end{observen}
\begin{observen} Consider the bigraphs' chain obtained from the above di-bicliques' chain by deleting arcs or not, thus making [if deleting arcs] some or all of the di-bicliques $ \langle\Phi_k\to\Phi_{k+1}\rangle$ no longer di-bicliques; denote these by $G_k$. Let $B_k = B(G_k)$ denote their biadjacency matrices correspondingly. Then for any such $F$-denominated chain [hence any chain] of bipartite digraphs $G_k$ the general formula is:
$$
B\left( \oplus\!\!\to_{i=1}^n G_i \right) \equiv B [\oplus\!\!\to_{i=1}^n A(G_i)] = \oplus_{i=1}^n B[A(G_i) ] \equiv \mathrm{diag} (B_1 , B_2 , ..., B_n) = $$ $$
= \left[ \begin{array}{lllll}
B_1 \\
& B_2 \\
& & B_3 \\
& & ... \\
& & & & B_n
\end{array} \right] $$
\noindent $n \in N \cup \{\infty\}$. \end{observen}
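The block-diagonal formula can be illustrated in a few lines of Python (the blocks chosen below are arbitrary illustrative examples, not from the source):

```python
def block_diag(blocks):
    """Assemble diag(B_1, ..., B_n): the biadjacency matrix of the
    natural join G_1 oplus-> ... oplus-> G_n of bipartite digraphs."""
    rows = sum(len(B) for B in blocks)
    cols = sum(len(B[0]) for B in blocks)
    M = [[0] * cols for _ in range(rows)]
    r = c = 0
    for B in blocks:
        for i, row in enumerate(B):
            for j, v in enumerate(row):
                M[r + i][c + j] = v
        r += len(B)          # each block occupies its own row band ...
        c += len(B[0])       # ... and its own column band
    return M

B1 = [[1, 1]]                    # I(1 x 2): a full di-biclique block
B2 = [[1, 0, 1], [0, 1, 1]]      # a bigraph with some arcs deleted
M = block_diag([B1, B2])
assert M == [[1, 1, 0, 0, 0],
             [0, 0, 1, 0, 1],
             [0, 0, 0, 1, 1]]
```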
\begin{observen}
The $F$-poset $P(G) = (\Phi, \leq)$, i.e. its cover relation graded digraph $G = (\Phi,\prec\!\!\cdot) = \oplus\!\!\to_{k=0}^m G_k$, is of Ferrers dimension one iff the process of deleting arcs from the cobweb poset Hasse diagram $D = (\Phi,\prec\!\!\cdot)$ = $\oplus\!\!\to_{k=0}^n \langle\Phi_k\to\Phi_{k+1}\rangle $ does not produce a $2\times 2$ permutation submatrix in any bigraph $G_k$'s biadjacency matrix $B_k= B (G_k)$. \end{observen}
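The criterion in the observation above is directly machine-checkable: Ferrers dimension one fails exactly when some biadjacency matrix contains a $2\times 2$ permutation submatrix. A brute-force Python sketch (illustrative only; quartic in the matrix dimensions):

```python
from itertools import combinations

def has_2x2_permutation_submatrix(B):
    """True iff some row pair i < k and column pair j < l of the 0/1
    matrix B form [[1,0],[0,1]] or [[0,1],[1,0]] -- the obstruction to
    B being a Ferrers (staircase) biadjacency matrix."""
    for i, k in combinations(range(len(B)), 2):
        for j, l in combinations(range(len(B[0])), 2):
            quad = (B[i][j], B[i][l], B[k][j], B[k][l])
            if quad in {(1, 0, 0, 1), (0, 1, 1, 0)}:
                return True
    return False

assert not has_2x2_permutation_submatrix([[1, 1, 1], [0, 1, 1]])  # staircase
assert has_2x2_permutation_submatrix([[1, 0], [0, 1]])            # obstruction
```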
\noindent \textbf{Examples} (finite subposets of cobweb posets)
\noindent Fig.4 and Fig.5 display Hasse diagram portraits of finite subposets of cobweb posets. In view of \textbf{Observation 2} these subposets are naturally Ferrers digraphs, i.e. of Ferrers dimension equal to one.
\section{Summary}
\subsection{Principal - natural identifications}
Any \textbf{KoDAG} is a \textbf{di}-bicliques chain $\Leftrightarrow$ Any \textbf{KoDAG is a natural join} of complete bipartite \textbf{graphs} [ \textbf{di}-bicliques ] = $$
( \Phi_0 \cup \Phi_1 \cup ... \cup \Phi_n \cup ..., E_0 \cup E_1\cup ... \cup E_n \cup ...) \equiv D(\bigcup_{k\geq 0}\Phi_k,\bigcup_{k\geq 0} E_k ) \equiv D (\Phi,E) $$
\noindent where $E_k = \Phi_k\times \Phi_{k+1} \equiv \stackrel{\rightarrow}{K_{k,k+1}}$ and $E = \bigcup_{k\geq 0}E_k$.
\noindent Naturally, as indicated earlier any graded posets' Hasse diagram with finite width including \textbf{KoDAGs} is of the form $$
D (\Phi , E) \equiv D( \bigcup_{k\geq 0}\Phi_k,\bigcup_{k\geq 0} E_k ) \Leftrightarrow \langle \Phi,\leq \rangle $$
\noindent where $E_k \subseteq \Phi_k\times \Phi_{k+1} \equiv \stackrel{\rightarrow}{K_{k,k+1}}$ and the definition of $\leq$ from \textbf{1.3.} is applied. \noindent In view of all of the above, the following is clear.
\begin{observen} "Many" graded digraphs with finite width, including \textbf{KoDAGs} $D = (V,\prec\!\!\cdot )$, encode bijectively their corresponding $n$-ary relation ($n \in N \cup \{\infty\}$), as seen from the following definition: $ E_k \subseteq \Phi_k \times \Phi_{k+1} \equiv \stackrel{\rightarrow}{K_{k,k+1}}$, where \\ \textcolor{red}{\textbf{($n$-ary relation)}} $E = \oplus\!\!\to_{k=0}^{n-1} E_k \subset \times_{k=0}^n \Phi_k$ \\ i.e., identified with the graded poset $\left\langle V_n, E \right\rangle$, the natural-join-obtained $(n+1)$-ary relation $E$ is a subset of the Cartesian product constituting the universal $(n+1)$-ary relation, which is identified with the cobweb poset digraph $\left\langle V_n,\prec\!\!\cdot \right\rangle$; $V_{\infty}\equiv V$. \end{observen}
\noindent Which are those "many"? The characterization is arrived at from the reverse (\emph{\`a rebours}) point of view. Any $n$-ary relation ($n \in N \cup \{\infty\}$) determines uniquely [may be identified with] its corresponding graded digraph with minimal elements set $\Phi_0$, given by the \textcolor{red}{\textbf{($n$-ary rel.)}} formula $$E = \oplus\!\!\to_{k=0}^{n-1} E_k \subset \times_{k=0}^n \Phi_k,$$ where the sequence of binary relations $E_k \subseteq \Phi_k\times \Phi_{k+1} \equiv \stackrel{\rightarrow}{K_{k,k+1}}$ is denominated by the source $n$-ary relation, as the following example shows.
\noindent \textbf{Example} (ternary = $Binary_1$ $\oplus\!\!\to$ $Binary_2$)
\noindent Let $T \subset X\times Z\times Y$ where $X =\{ x_1,x_2,x_3\}$, $Z = \{ z_1,z_2,z_3,z_4\}$, $Y = \{y_1,y_2\}$ and $$
T = \{ \langle x_1,z_1,y_1 \rangle, \langle x_1,z_2,y_1 \rangle, \langle x_1,z_4,y_2 \rangle, \langle x_2,z_3,y_2 \rangle, \langle x_3,z_3,y_2 \rangle \}. $$
\begin{figure}\label{fig:ternary}
\end{figure}
\noindent Let $X\times Z \supset E_1= \{ \langle x_1,z_1 \rangle, \langle x_1,z_2 \rangle, \langle x_1,z_4 \rangle, \langle x_2,z_3 \rangle, \langle x_3,z_3 \rangle \}$ and $Z\times Y \supset E_2 = \{ \langle z_1,y_1 \rangle, \langle z_2,y_1\rangle, \langle z_3,y_2\rangle, \langle z_4,y_2\rangle \}$. Then $T = E_1 \oplus\!\!\to E_2$.
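The decomposition is easy to verify mechanically. The Python sketch below (names illustrative) computes $E_1 \oplus\!\!\to E_2$ as the natural join on the shared middle coordinate; note that it takes $\langle z_3,y_2\rangle \in E_2$, which the listed triples of $T$ require:

```python
def natural_join(E1, E2):
    """oplus-> of two binary relations: all triples <x, z, y> with
    <x, z> in E1 and <z, y> in E2, joined on the shared middle column."""
    return {(x, z, y) for (x, z) in E1 for (w, y) in E2 if z == w}

E1 = {("x1", "z1"), ("x1", "z2"), ("x1", "z4"), ("x2", "z3"), ("x3", "z3")}
# note <z3, y2>: the listed triples of T force this pair
E2 = {("z1", "y1"), ("z2", "y1"), ("z3", "y2"), ("z4", "y2")}
T = {("x1", "z1", "y1"), ("x1", "z2", "y1"), ("x1", "z4", "y2"),
     ("x2", "z3", "y2"), ("x3", "z3", "y2")}
assert natural_join(E1, E2) == T
```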
\noindent More on that - see \cite{15} and the references to the author's recent papers therein.
\noindent \textbf{Comment 11.} As a comment to \textbf{Observation 9} and \textbf{Observation 3}, consider Fig.7, which was the source of inspiration for the birth of cobweb posets \cite{4,3,2,5,6} and here serves as the Hasse diagram $D_{Fib} \equiv (\Phi, \prec\!\!\cdot_{Fib})$ of the poset $P(D_{Fib}) = (\Phi, \leq_{Fib} )$ associated to $D_{Fib}$. Obviously, $P(D_{Fib})$ is a subposet of the Fibonacci cobweb poset $P(D)$, and $D_{Fib}$ is a subgraph of the Hasse diagram $D \equiv (\Phi,\prec\!\!\cdot )$ of the Fibonacci cobweb poset $P(D)$.
\noindent The Ferrers dimension of $D_{Fib}$ is obviously not equal one.
\noindent \textbf{Exercise.} Find the Ferrers dimension of $D_{Fib}$. What is the dimension of the poset $P(D_{Fib}) = (\Phi, \leq_{Fib})$ ? (Compare with \textbf{Observation 2}). Find the chain $E_k \subset \Phi_{k}\times\Phi_{k+1}$, $k =0,1,2,...$ of binary relations such that $D_{Fib,n} = \oplus\!\!\to_{k=0}^n E_k, n \in N \cup \{\infty\}$. Find the Ferrers dimension of $D_{Fib,n}$.
\noindent \textbf{Ad Bibliography Remark}
\noindent On the history of the oDAG nomenclature, with input from David Halitsky and others, one is expected to see more in \cite{15}. See also the December $2008$ subject of The Internet Gian Carlo Rota Polish Seminar ($http://ii.uwb.edu.pl/akk/sem/sem\_rota.htm$). Recommended readings on Ferrers digraphs of immediate use here are \cite{19}-\cite{25}. For example, see pages 61 and 85 in \cite{19}, and page 2 in \cite{20}. The J. Riguet paper \cite{21} is the source paper, including also equivalent characterizations of Ferrers digraphs, as are \cite{22,23,24}. The now classic reference on interval orders and interval graphs is \cite{25}.
\noindent \textbf{Acknowledgments} Thanks are expressed here to the Gda\'nsk University student Maciej Dziemia\'nczuk for applying his skillful TeX-nology to the present work, as well as for his general assistance and cooperation in the KoDAGs investigation.
\end{document}
\begin{document}
\title{Stabilization with finite dimensional controllers for a periodic parabolic system under perturbations in the system conductivity} \author{ Ling Lei \footnote{ Supported by a National Natural Science Foundation of China research grant (NSFC-10801108)}\\ Department of Mathematics and Statistics, Wuhan University,\\
Wuhan, 430072, P.R.China} \date{} \maketitle
{\bf Abstract.} This work studies the stabilization of a periodic parabolic system under perturbations in the system conductivity. In general, a perturbed system does not have any periodic solution. However, we will prove that the perturbed system can always be pulled back to a periodic system by imposing a control from a fixed finite dimensional subspace.
The paper continues the author's previous work in \cite{kn:[1]}.
{\bf Key words.} approximate periodic solution, stabilization through a finite dimensional control space, parabolic system, unique continuation of elliptic equations.
{\bf AMS subject classification. } 35B37, 93B99.\\ \vskip 1cm
\section{Introduction} \hspace*{0.5 cm}Let $\Omega\subset {\bf R}^N$ be a bounded domain with a $C^2$-smooth boundary $\partial\Omega$ and let $\omega\subset\Omega$ be a subdomain. Write $Q=\Omega \times (0,T)$ with $T>0$ and write $\Sigma=\partial\Omega\times (0,T) $. Consider the following parabolic equation: $$
\left\{\begin{array}{ll} \displaystyle{\frac{\partial y}{\partial t}(x,t)}+L_0 y(x,t)+e(x,t)y(x,t)=f(x,t), \;& \mbox{in }\;Q=\Omega \times (0,T),\\
y(x,t)=0, & \mbox{on }\;\Sigma=\partial \Omega \times (0,T),\\ \end{array}\right. \eqno{(1.1)} $$ where $$ L_0y(x,t) = - \sum ^N _{i,j=1} \frac{\partial }{\partial x_j} (a^{ij}(x)\frac{\partial }{\partial x_i} y(x,t)) +c(x)y(x,t) $$ is considered as the system operator. Here and in all that follows, we make the following regularity assumptions for the coefficients of $L_0$:
\noindent (I): $$\begin{array}{ll} a^{ij}(x) \in Lip(\overline{\Omega}),\;a^{ij}(x)=a^{ji}(x),\;
\mbox{and}\; \lambda ^*|\xi|^2 \leq \displaystyle{\sum_{i,j=1}^{N}} a^{ij}(x)
\xi _i \xi_j & \leq \displaystyle{\frac{1}{\lambda^*}} |\xi|^2 ,\; \mbox{for}\; \xi \in {\bf R}^N \\ \end{array} \eqno{(1.2)} $$
with $\lambda^*$ a certain positive constant;
\noindent (II): $$ \begin{array}{ll}
c(x) \in L^\infty(\Omega),\;
e(x,t) \in L^\infty (0,T;L^q(\Omega))\ \hbox{with }\ q >\max\{N,2\},\ \hbox{and}\ f(x,t)\in L^2(Q). \end{array} \eqno{(1.3)} $$ In such a system, we regard $e(x,t)$ as a perturbation in the system conductivity. Suppose in the ideal case, namely, in the case when the perturbation $e(x,t)\equiv 0$, (1.1) has a periodic solution $y_0(x,t)$: $$
\left\{\begin{array}{ll} \displaystyle{\frac{\partial y_0}{\partial t}(x,t)}+L_0 y_0(x,t)=f(x,t), \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;& \mbox{in }\;\;Q,\\
y_0(x,t)=0, & \mbox{on }\;\; \Sigma,\\ y_0(x,0)=y_0(x,T),&\mbox{in }\Omega. \end{array}\right. \eqno{(1.4)} $$ Then the presence of the error term $e(x,t)$ may well destroy the periodicity of the system. Indeed, (1.1) may no longer have any periodic solution. (See Section 3.) The problem that interests us in this paper is whether there is a constructible finite dimensional subspace ${\bf U}\subset L^2(Q)$ such that, after imposing a control $u_e\in \bf{U}$, we can restore the periodic solution $y_e$. Moreover, we would like to know whether $y_e$ is close to $y_0$ and the energy of $u_e$ is small when $e(x,t)$ is small. The main purpose of this paper is to show that we can indeed achieve this goal in the small perturbation case, even if the control is only imposed over a subregion $\omega$ of $\Omega$. The basic tool for this study is the existence and energy estimate for the approximate periodic solutions obtained in the author's previous paper \cite{kn:[1]}.
To state our results, we first recall the definition of approximate periodic solutions with respect to the elliptic operator $L_0$.
Notice that $L_0$ is a symmetric operator. Consider the eigenvalue problem of $L_0$: $$ \left\{ \begin{array}{ll} L_0 X(x)=\lambda X(x),\\
X(x)|_{\partial \Omega} =0. \end{array}\right. \eqno{(1.5)} $$ Making use of the regularity assumptions of the coefficients of $L_0$, we know (see, \cite{kn:[2]} \cite{kn:[3]}, for example) that (1.5) has a complete set of eigenvalues $\{\lambda_j\}_{j=1}^{\infty}$ with the associated eigenvectors $\{X_j(x)\}_{j=1}^{\infty}$ such that $$L_0 X_j(x)=\lambda_j X_j (x),$$ $$-\infty<\lambda_1\leq\lambda_2 \leq\cdots\leq\lambda_j\leq\cdots<\infty,\ \lim_{j\rightarrow \infty}\lambda_j=\infty,\;X_j(x)\in H^1_0(\Omega)\cap C(\overline{\Omega}).$$ Choose $\{X_j(x)\}_{j=1}^{\infty}$ such that it forms an orthonormal basis of $L^2(\Omega)$. Therefore, for any $y(x,t)\in L^2(Q)$, we have $ y(x,t)=\displaystyle\sum^{\infty}_{j=1}y_j(t)X_j(x)$, where $$y_j(t)=\langle y(x,t),X_j(x)\rangle= \displaystyle\int_\Omega y(x,t)X_j(x)dx\in L^2(0,T).$$
{\bf Definition 1.1.} {\it We say that $y(x,t)$ is a $K$-approximate periodic solution of (1.1) with respect to $L_0$ if \\ (a): $y \in C([0,T];L^2(\Omega))\cap L^2(0,T;H^1_0 (\Omega ))$ is a weak solution of (1.1);\\ (b): $ y\in {\bf S_{K}} $, where ${\bf S_{K}}$ is the space of the following functions: $${\bf S_{K}} = \{y(x,t)\in L^2(Q);\;y_j(0)=y_j(T),\;\mbox{for}\;j \ge K+1,\; y_j(t)=\displaystyle{\int_\Omega } y(x,t)X_j (x)dx \}.$$}
When $K= 0$, we will always regard $\sum ^{0} _{j=1} = 0$. Hence, a 0-approximate periodic solution of (1.1) is a regular periodic solution. In what follows, we write $\langle y(\cdot,t),y(\cdot
,t)\rangle=\displaystyle{\int_\Omega} y^2(x,t)dx=\|y(\cdot,t)\|^2$, and we write $y_t$ for the derivative of $y(x,t)$ with respect to $t$.
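In the eigenbasis the periodicity condition decouples: with $e\equiv 0$, each coefficient solves the scalar ODE $y_j'(t)+\lambda_j y_j(t)=f_j(t)$, and $y_j(0)=y_j(T)$ forces $y_j(0)=(e^{\lambda_j T}-1)^{-1}\int_0^T e^{\lambda_j s}f_j(s)\,ds$ whenever $\lambda_j\neq 0$; the same integration-factor computation reappears in Section 2. A numerical Python sketch of this one-mode picture (function names and the sample forcing term are illustrative, not from the paper):

```python
import math

def periodic_initial_value(lam, f, T, n=20000):
    """Initial value making y' + lam*y = f(t) T-periodic (lam != 0):
    y(0) = (e^{lam T} - 1)^{-1} * integral_0^T e^{lam s} f(s) ds."""
    h = T / n
    integral = sum(math.exp(lam * (k + 0.5) * h) * f((k + 0.5) * h) * h
                   for k in range(n))          # midpoint rule
    return integral / (math.exp(lam * T) - 1.0)

def solve_forward(lam, f, y0, T, n=20000):
    """Classical RK4 march of y' = f(t) - lam*y from y(0) = y0."""
    h, y, t = T / n, y0, 0.0
    g = lambda t, y: f(t) - lam * y
    for _ in range(n):
        k1 = g(t, y)
        k2 = g(t + h / 2, y + h / 2 * k1)
        k3 = g(t + h / 2, y + h / 2 * k2)
        k4 = g(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

T, lam = 1.0, 3.0
f = lambda t: math.cos(2 * math.pi * t / T)   # a sample forcing term
y0 = periodic_initial_value(lam, f, T)
assert abs(solve_forward(lam, f, y0, T) - y0) < 1e-6   # y(T) = y(0)
```

Any other initial value drifts toward this periodic orbit at rate $e^{-\lambda_j T}$ per period, which is the scalar shadow of the stabilization mechanism used below.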
Our first result of this paper can be stated as follows:
{\bf Theorem 1.1.} {\it Consider the system (1.1), where $e(x,t)$ is regarded as a perturbation in the system conductivity. Suppose that (1.1) has a periodic solution $y_0(x,t)$ in the ideal case with $e(x,t)\equiv 0$. Assume that
$\|e(x,t)\|_{L^\infty(0,T;L^q(\Omega))}=ess\displaystyle{\sup_{t\in(0,T)}}\|e(\cdot,t)\|_{L^q(\Omega)}<\varepsilon$, where $\varepsilon<1$ is a small constant which depends only on $L_0,\Omega, N, q, T$ with $q>\max\{N,2\}$. Then there are a non-negative integer $K_0$, depending only on $L_0,\Omega, N, q, T$ (but not $f$), and a unique outside force of the form $$u_e(x):=\sum_{j=1}^{K_0}u_jX_j(x)\in {\bf U}=span_{{\mathbf R}}\{X_1(x),X_2(x),\cdots,X_{K_0}(x)\},$$ where $u_j\in {\mathbf R}$, such that the following has a unique periodic solution $y$ satisfying: $$
\left\{\begin{array}{ll} \displaystyle{\frac{\partial y(x,t)}{\partial t}}+L_0y(x,t)+e(x,t)y(x,t)=f(x,t)+u_e(x), \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;& \mbox{in }\;\;Q,\\
y(x,t)=0, & \mbox{on }\;\; \Sigma,\\
\langle y(x,0),X_j(x)\rangle=\langle y_0(x,0),X_j(x)\rangle, & \mbox{for }\;j\leq K_0,\\ y(x,0)=y(x,T),&\mbox{in }\;\;\Omega. \end{array}\right. \eqno{(1.6)} $$ Moreover, we have the following energy estimate: $$\begin{array}{ll}
&\displaystyle\sup_{t\in[0,T]}\|(y-y_0)(\cdot,t)\|^2+\displaystyle{\int^T_0}\|\nabla(y-y_0)(\cdot,t)\|^2dt\\
&\leq C(system,K_0)\|e(x,t)\|^2_{L^\infty(0,T;L^q(\Omega))}(1
+|\vec{a}|^2+\displaystyle{\int_Q} f^2dxdt),\end{array}\eqno{(1.7)} $$ and $$
\|u_e\|^2_{L^2(\Omega)}\leq C(system,K_0)\|e(x,t)\|^2_{L^\infty(0,T;L^q(\Omega))}(1+|\vec{a}|^2+\displaystyle{\int_Q} f^2dxdt), \eqno{(1.8)} $$ where $\vec{a}=(a_1,a_2,\cdots,a_{K_0})=(\langle y_0(x,0),X_1(x)\rangle,\langle y_0(x,0),X_2(x)\rangle,\cdots,\langle y_0(x,0),X_{K_0}(x)\rangle)$.
Here and in what follows, $C(system,K_0)$ denotes a constant depending only on $L_0,\Omega, N,q,T$, which may be different in different contexts.}
In Section 3 of this paper, we will construct an example, showing that without outside controls, (1.1) has no periodic solutions in general. This is one of the main features in our Theorem 1.1: The control can always be taken from a certain fixed constructible {\it finite dimensional subspace} to regain the periodicity, while
the perturbation space for $e(x,t)$, which destroys the periodicity, is {\it of infinite dimension}. We also notice that our system operator $L_0$ is not assumed to be positive.
The second part of this work is to consider the same problem as studied in the first part, but with the control only imposed over a subregion $\omega\subset\Omega$ and time interval $E\subset [0,T]$, $m(E)>0$. We will similarly obtain the following:
{\bf Theorem 1.2.} {\it Suppose that the system (1.1) has a periodic solution $y_0(x,t)$ in the ideal case with $e(x,t)\equiv 0$. Then there exist a positive integer $K_0$ and a small constant $\varepsilon>0$, depending only on $L_0,\Omega, N,q,T$ $(q>\max\{N,2\})$, such that, when
$$\|e(x,t)\|_{L^\infty(0,T;L^q(\Omega))}=ess\displaystyle{\sup_{t\in(0,T)}}\|e(x,t)\|_{L^q(\Omega)}<\varepsilon,$$ the following has a unique periodic solution: $$
\left\{\begin{array}{ll} \displaystyle{\frac{\partial y(x,t)}{\partial t}}+L_0y(x,t)+e(x,t)y(x,t)=f(x,t)+\displaystyle{\sum_{j=1}^{K_0}}\chi_\omega(x)\chi_E(t)u_jX_j(x), \;\;\;\;\;\;\;\;& \mbox{in }\;\;Q,\\
y(x,t)=0, & \mbox{on }\;\; \Sigma,\\
\langle y(x,0),X_j(x)\rangle=a_j, & \mbox{for }\;j\leq K_0,\\ y\in {\bf S_{K_0}}, \end{array}\right. \eqno{(1.9)} $$ where $(a_1,a_2,\cdots,a_{K_0})=(\langle y_0(x,0),X_1(x)\rangle,\langle y_0(x,0),X_2(x)\rangle,\cdots,\langle y_0(x,0),X_{K_0}(x)\rangle)=\vec{a}$, $(u_1,u_2,\cdots,u_{K_0})=\vec{u}\in {\mathbf R}^{K_0}$. Moreover, $$
|\vec{u}|^2\leq C(system,K_0,\omega)\displaystyle\frac{\|e(x,t)\|^2_{L^\infty(0,T;L^q(\Omega))}}{(m(E))^2}(1+|\vec{a}|^2+\displaystyle{\int_Q} f^2dxdt),\eqno{(1.10)} $$ and $$\begin{array}{ll}
&\displaystyle\sup_{t\in[0,T]}\|(y-y_0)(\cdot,t)\|^2+\displaystyle{\int^T_0}\|\nabla(y-y_0)(\cdot,t)\|^2dt\\
&\leq C(system,K_0,\omega)\displaystyle\frac{\|e(x,t)\|^2_{L^\infty(0,T;L^q(\Omega))}}{(m(E))^2}(1+|\vec{a}|^2+\displaystyle{\int_Q} f^2dxdt).\end{array}\eqno{(1.11)} $$ Here, $$\chi_\omega(x),\;\chi_E(t)$$ are the characteristic functions for $\omega$ and $E$, respectively; and $C(system,K_0, \omega)$ is a constant depending only on $\omega,L_0,\Omega, N,q,T$. }
Theorem 1.1 and Theorem 1.2 give stabilization results for the periodic solutions of a linear parabolic system under a small perturbation of the system conductivity, by imposing a control from a fixed finite dimensional subspace. We do not know if results similar to Theorem 1.1 hold in the large perturbation case.
The paper is organized as follows. In Section 2, we prove Theorem 1.1. In Section 3, we give an example to show that with a small perturbation $e(x,t)$, (1.1) has no periodic solution in general. In Section 4, we give the proof of Theorem 1.2.
\section{Small perturbation}
\hspace*{0.5 cm}In this Section, we give a proof of Theorem
1.1, based on the author's previous paper \cite{kn:[1]}. For convenience of the reader, we first recall the following result of \cite{kn:[1]}, which will be used here.
{\bf Theorem 2.1}. {\it Assume (1.2) and (1.3). Let $e(x,t)\in {\mathcal{M}}(q,M)$, where, for any positive number $M$ and $q> \frac{N}{2}$,
$${\mathcal{M}}(q,M):= \{e(x,t) \in L^{\infty}(0,T;L^q (\Omega)); \mbox{ess sup}_{t\in
(0,T)} \|e(x,t)\|_{L^q (\Omega)}\le M\}.$$ Then, there exists an integer $K_0(L_0,M,\Omega, q,N,T)$ $\geq 0$, depending only on $(L_0, M,\Omega, q,N, T)$ (but not $f(x,t)$), such that for any $K\geq K_0(L_0,M,\Omega,q,N, T)$ and any initial value $\vec{a}=(a_1,a_2,\cdots,a_K)\in {\bf R^K}$, we have a unique solution to the following equation: $$
\left\{\begin{array}{ll} \displaystyle{\frac{\partial y(x,t)}{\partial t}}+L_0y(x,t)+e(x,t)y(x,t)=f(x,t), \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;& \mbox{in }\;\;Q,\\
y(x,t)=0, & \mbox{on }\;\; \Sigma,\\
\langle y(x,0),X_j(x)\rangle=a_j, & \mbox{for }\;j\leq K,\\ y\in {\bf S_{K}}. \end{array}\right. \eqno{(2.1)} $$ Moreover, for such a solution $y(x,t)$, we have the following energy estimate: $$ \begin{array}{ll}
\displaystyle{\sup_{t\in[0,T]}}\|y(\cdot,t)\|^2
+\displaystyle{\int^T_0}\|\nabla y(\cdot,t)\|^2 dt \leq C(L_0,M,\Omega,q,N,T) (|\vec{a}|^2 +\displaystyle{\int_Q}f^2dxdt). \end{array} \eqno (2.2) $$}
Now, suppose $y_0$ is a periodic solution of (1.1) with $e(x,t)=0$, namely, $$
\left\{\begin{array}{ll} \displaystyle{\frac{\partial y_0}{\partial t}(x,t)}+L_0 y_0(x,t)=f(x,t), \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;& \mbox{in }\;\;Q,\\
y_0(x,t)=0, & \mbox{on }\;\; \Sigma,\\ y_0(x,0)=y_0(x,T),&\mbox{in }\Omega. \end{array}\right. \eqno{(2.3)} $$ Let $$ a_j=(y_0)_j(0)=\langle y_0(x,0),X_j(x)\rangle,\mbox{ for } j=1,2,\cdots. $$ In all that follows, we assume that $e(x,t)\in {\mathcal{M}}(q,M)$ with $M=1$. By Theorem 2.1, there exists an integer $K_0(L_0,M,\Omega, q,N,T)$ $\geq 0$, such that for the initial value $\vec{a}=(a_1,a_2,\cdots,a_{K_0})\in {\mathbf R}^{K_0}$, we have a unique solution $y(x,t)$ satisfying the following equations: $$
\left\{\begin{array}{ll} \displaystyle{\frac{\partial y(x,t)}{\partial t}}+L_0y(x,t)+e(x,t)y(x,t)=f(x,t)+\displaystyle{\sum_{j=1}^{K_0}}u_jX_j(x), \;\;\;\;\;\;\;\;& \mbox{in }\;\;Q,\\
y(x,t)=0, & \mbox{on }\;\; \Sigma,\\
\langle y(x,0),X_j(x)\rangle=a_j, & \mbox{for }\;j\leq K_0,\\ y\in {\bf S_{K_0}}. \end{array}\right. \eqno{(2.4)} $$Here, $\vec{u}=(u_1,u_2,\cdots,u_{K_0})\in {\mathbf R}^{K_0}$.
Subtracting (2.3) from (2.4), we get the following equation: $$
\left\{\begin{array}{ll} (y-y_0)_t +L_0(y-y_0)+e(x,t)(y-y_0)=\displaystyle{\sum_{j=1}^{K_0}}u_jX_j(x)-e(x,t)y_0, \;\;\;& \mbox{in }\;\;Q,\\
(y(x,t)-y_0(x,t))=0, & \mbox{on }\;\; \Sigma,\\
(y-y_0)_j(0)=\langle y(x,0)-y_0(x,0),X_j(x)\rangle=0, & \mbox{for }\;j\leq K_0,\\ (y-y_0)\in {\bf S_{K_0}}. \end{array}\right. \eqno{(2.5)} $$ We define a map $$J:\;{\mathbf R}^{K_0}\longmapsto {\mathbf R}^{K_0}$$ by $$J(u_1,u_2,\cdots,u_{K_0})=((y-y_0)_1(T),(y-y_0)_2(T),\cdots,(y-y_0)_{K_0}(T)).$$ Write $v=y-y_0=v_0+v_u$. Here, $v_0$ and $v_u$ are the solutions of the following equations, respectively: $$
\left\{\begin{array}{ll} (v_0)_t +L_0v_0+e(x,t)v_0=-e(x,t)y_0, \;\;\;& \mbox{in }\;\;Q,\\
v_0=0, & \mbox{on }\;\; \Sigma,\\ (v_0)_{j}(0)=0, & \mbox{for }\;j\leq K_0,\\ v_0\in {\bf S_{K_0}}. \end{array}\right. \eqno{(2.6)} $$and $$ \left\{\begin{array}{ll} (v_u)_t +L_0v_u+e(x,t)v_u=\displaystyle{\sum_{j=1}^{K_0}}u_jX_j(x), \;\;\;& \mbox{in }\;\;Q,\\
v_u=0, & \mbox{on }\;\; \Sigma,\\ (v_u)_{j}(0)=0, & \mbox{for }\;j\leq K_0,\\ v_u\in {\bf S_{K_0}}. \end{array}\right. \eqno{(2.7)} $$ We are led to the question of whether there is a vector $\vec{u}=(u_1,u_2,\cdots,u_{K_0})\in {\bf R^{K_0}}$ such that $$ J(\vec{u})=((y-y_0)_1(T),(y-y_0)_2(T),\cdots,(y-y_0)_{K_0}(T))=(0,0,\cdots,0). $$ Indeed, if this is the case, then $y$ is a periodic solution with the required estimate, as we will see later.
For this purpose, we write $J_0=((v_0)_1(T),(v_0)_2(T),\cdots,(v_0)_{K_0}(T))$ and $$J^*(\vec{u})=((v_u)_1(T),(v_u)_2(T),\cdots,(v_u)_{K_0}(T)).$$Then $$J(\vec{u})=J_0+J^*(\vec{u}).$$ Now, it is easy to see that $J^*$ is linear in $(u_1,u_2,\cdots,u_{K_0})$. We next claim that $J^*$ is invertible under the small perturbation case. If not, we can find a vector $\vec{\xi}=(\xi_1,\xi_2,\cdots,\xi_{K_0})\in {\mathbf R}^{K_0}$ with
$|\vec{\xi}|=\sqrt{\xi^2_1+\xi^2_2+\cdots+\xi^2_{K_0}}=1$ such that $J^*(\vec{\xi})=0$. Hence, we have a unique solution to the following problem: $$ \left\{\begin{array}{ll} w_t +L_0w+e(x,t)w=\displaystyle{\sum_{j=1}^{K_0}}\xi_jX_j(x), \;\;\;& \mbox{in }\;\;Q,\\ w=0, & \mbox{on }\;\; \Sigma,\\ w_{j}(0)=w_{j}(T)=0, & \mbox{for }\;j\leq K_0,\\ w\in {\bf S_{K_0}}. \end{array}\right. \eqno{(2.8)} $$ First, by the energy estimate in Theorem 2.1, we have for $w(x,t)$, $$\begin{array}{ll}
\displaystyle{\sup_{t\in[0,T]}}\|w(\cdot,t)\|^2
+\displaystyle{\int^T_0}\|\nabla w(\cdot,t)\|^2 dt &\leq C(system)\cdot T\cdot |\vec{\xi}|^2\\ &\leq C(system,K_0). \end{array} \eqno (2.9)$$As mentioned before, we use $C(system, K_0)$ to denote a constant depending only on $L_0,M,\Omega,q,N,T$, which may be different in different contexts.
Write $w=\displaystyle{\sum^{\infty}_{j=1}}w_j(t)X_j(x)$ as before. Then we have $$ \displaystyle{\frac{dw_j(t)}{dt}}+\lambda_j w_j(t)+\int_\Omega e(x,t)w(x,t)X_j(x)dx =\xi_j,\;\mbox{for }j=1,2,\cdots,K_0.\eqno{(2.10)} $$ Next, by the H\"older inequality (see Claim 2.2 of \cite{kn:[1]}), we have $$\begin{array}{ll}
\displaystyle{\int_\Omega}|e(x,t)w(x,t)X_j(x)|dx&\leq C(\Omega,N,q)\|e(x,t)\|_{L^\infty(0,T;L^q(\Omega))}[\|w(\cdot,t)\|^2_{L^2(\Omega)}\\
&+\|X_j(x)\|^2_{L^2(\Omega)}+ \|\nabla w(\cdot,t)\|^2_{L^2(\Omega)}+\|\nabla X_j(x)\|^2_{L^2(\Omega)}]\\
&\leq C(\Omega,N,q)\|e(x,t)\|_{L^\infty(0,T;L^q(\Omega))}[1+\lambda^2_j\\
&+ \|w(\cdot,t)\|^2_{L^2(\Omega)}+
\|\nabla w(\cdot,t)\|^2_{L^2(\Omega)}]. \end{array} $$ By (2.9), we have $$
\displaystyle{\int^T_0\int_\Omega}|e(x,t)w(x,t)X_j(x)|dxdt\leq C(system,K_0)\|e(x,t)\|_{L^\infty(0,T;L^q(\Omega))}.\eqno{(2.11)} $$ Next, from (2.10), we get $$(e^{\lambda_j t}w_j(t))'_t+\int_\Omega e(x,t)w(x,t)X_j(x)e^{\lambda_j t}dx =e^{\lambda_j t}\xi_j,\;\mbox{for }j=1,2,\cdots,K_0.$$ Integrating the above over [0,T], we get, for $j=1,2,\cdots,K_0$, $$0+\displaystyle{\int^T_0\int_\Omega}e(x,t)w(x,t)X_j(x)e^{\lambda_j t}dxdt=\xi_j\displaystyle{\int^T_0}e^{\lambda_j t}dt.$$ Namely, $$ \xi_j=\left\{\begin{array}{ll} \displaystyle{\frac{\displaystyle{\int^T_0\int_\Omega}e(x,t)w(x,t)X_j(x)e^{\lambda_j t}dxdt}{\frac{1}{\lambda_j}(e^{\lambda_j T}-1)}},\;\;\;\;&\mbox{for }\lambda_j\neq 0,\\ \displaystyle{\frac{\displaystyle{\int^T_0\int_\Omega}e(x,t)w(x,t)X_j(x)dxdt}{T}},&\mbox{for }\lambda_j=0. \end{array}\right. $$ Hence, we get, for $j=1,2,\cdots,K_0$, $$\begin{array}{ll}
|\xi_j|&\leq C(system,K_0)\displaystyle{\int^T_0\int_\Omega}|e(x,t)w(x,t)X_j(x)|dxdt\\
&\leq C(system,K_0)\|e(x,t)\|_{L^\infty(0,T;L^q(\Omega))}. \end{array}\eqno{(2.12)} $$ We get
$$1=|\vec{\xi}|\leq C(system,K_0)\sqrt{K_0}\|e(x,t)\|_{L^\infty(0,T;L^q(\Omega))}.$$ This gives a contradiction when
$$\|e(x,t)\|_{L^\infty(0,T;L^q(\Omega))}< \displaystyle{\frac{1}{C(system,K_0)\sqrt{K_0}}}.$$ Therefore, we showed that $J^*$ is invertible when
$\|e(x,t)\|_{L^\infty(0,T;L^q(\Omega))}<\epsilon $ with a certain $\epsilon$ depending only on $L_0,\Omega, N,q, T$.
Hence, for any given $\vec{b}=(b_1,b_2,\cdots,b_{K_0})\in {\mathbf R}^{K_0}$, there exists a unique $$\vec{u}=(u_1,u_2,\cdots,u_{K_0})\in {\mathbf R}^{K_0}$$ such that $$J^*(\vec{u})=J^*(u_1,u_2,\cdots,u_{K_0})=(b_1,b_2,\cdots,b_{K_0}).$$ Back to the equation (2.7), we have $$ \displaystyle{\frac{d(v_u)_j(t)}{dt}}+\lambda_j (v_u)_j(t)+\int_\Omega e(x,t)v_u(x,t)X_j(x)dx =u_j,\;\mbox{for }j=1,2,\cdots,K_0. $$ Then $$ \displaystyle{\frac{d[e^{\lambda_j t}(v_u)_j(t)]}{dt}}+\int_\Omega e(x,t)v_u(x,t)X_j(x)e^{\lambda_j t}dx=u_j e^{\lambda_j t},\;\mbox{for }j=1,2,\cdots,K_0. $$ Integrating the above over [0,T], by the definition of $J^*$, we have $$ b_j e^{\lambda_j T}-0+\displaystyle{\int^T_0\int_\Omega} e(x,t)v_u(x,t)X_j(x)e^{\lambda_j t}dxdt=u_j \int^T_0e^{\lambda_j t}dt,\;\mbox{for }j=1,2,\cdots,K_0. $$ We then get $$ u_j=\left\{\begin{array}{ll} \displaystyle{\frac{b_j e^{\lambda_j T}+\displaystyle{\int^T_0\int_\Omega}e(x,t)v_u(x,t)X_j(x)e^{\lambda_j t}dxdt}{\frac{1}{\lambda_j}(e^{\lambda_j T}-1)}},\;\;\;\;&\mbox{for }\lambda_j\neq 0,\\ \displaystyle{\frac{\displaystyle{\int^T_0\int_\Omega}e(x,t)v_u(x,t)X_j(x)dxdt}{T}},&\mbox{for }\lambda_j=0. \end{array}\right. $$ $$\begin{array}{ll}
|u_j|^2 &\leq 2e^{2\lambda_{K_0}T}|b_j|^2 +2e^{2\lambda_{K_0}T}[\displaystyle{\int^T_0\int_\Omega}e(x,t)v_u(x,t)X_j(x)dxdt]^2\\
&\leq 2e^{2\lambda_{K_0}T}|b_j|^2 +2e^{2\lambda_{K_0}T}\cdot
\hbox{sup}_{\Omega}|X_j|^2[\displaystyle{\int^T_0}\|e(\cdot,t)\|_{L^q(\Omega)}\|v_u(\cdot,t)\|_{L^{q'}(\Omega)}dt]^2
\end{array} $$ Here $1/q+1/q'=1$. Since $\Omega$ is bounded and $q'=\frac{q}{q-1}\le 2$, by the H\"older inequality, we have
$\|v_u\|_{L^{q'}(\Omega)}\le C(\Omega, q)\|v_u\|_{L^{2}(\Omega)}.$ Hence,
$$[\displaystyle{\int^T_0}\|e(\cdot,t)\|_{L^q(\Omega)}\|v_u(\cdot,t)\|_{L^{q'}(\Omega)}dt]^2\le C(\Omega,T,q)\|e\|^2_{L^\infty(0,T;L^q(\Omega))}\|v_u\|^2_{L^{2}(Q)}.$$
By the energy estimate in Theorem 2.1, we have
$\|v_u\|^2_{L^{2}(Q)}\le C(system, K_0) |\vec{u}|^2.$ Hence, as argued before, when $\|e\|^2_{L^\infty(0,T;L^q(\Omega))}$ is small, we can solve the above to obtain the following:
$$|\vec{u}|^2 \leq C(system,K_0)|\vec{b}|^2.\eqno{(2.13)}$$
Back to (2.5), we need to find $\vec{u}=(u_1,u_2,\cdots,u_{K_0})$ such that the solution in (2.5) has the property $(y-y_0)_j(T)=0$ for $j=1,2,\cdots,K_0$. As mentioned before, $v=y-y_0$ is then a periodic solution. Thus $y=v+y_0$ is a periodic solution of (2.4) after applying the control force $\displaystyle{\sum_{j=1}^{K_0}}u_jX_j(x)$. To this end, we need only find $\vec{u}$ such that $$J(\vec{u})=0\;\mbox{or }\;J^*(\vec{u})=-J_0.$$ By the definition of $J_0$, $J_0=-\vec{b}=(-b_1,-b_2,\cdots,-b_{K_0})$ is given by $$
\left\{\begin{array}{ll} (v_0)_t +Lv_0+e(x,t)v_0=-e(x,t)y_0, \;\;\;& \mbox{in }\;\;Q,\\
v_0=0, & \mbox{on }\;\; \Sigma,\\ (v_0)_{j}(0)=0,\;(v_0)_j(T)=-b_j, & \mbox{for }\;j\leq K_0,\\ v_0\in {\bf S_{K_0}}. \end{array}\right. $$ By the energy estimate of Theorem 2.1, we have $$\begin{array}{ll}
|\vec{b}|^2&\leq \|v_0(\cdot,T)\|^2_{L^2(\Omega)}\\ &\leq C(system,K_0)\displaystyle{\int^T_0\int_\Omega (-ey_0)^2dxdt}\\
&\leq C(system,K_0)\displaystyle{\int^T_0}\{\|e(x,t)\|^2_{L^q(\Omega)}\|y_0(\cdot,t)\|^2_{L^{\frac{2q}{q-2}}(\Omega)}\}dt\\
&\leq C(system,K_0)\|e(x,t)\|^2_{L^\infty(0,T;L^q(\Omega))}\|\nabla y_0\|^2_{L^2(Q)}\\
&\leq C(system,K_0)\|e(x,t)\|^2_{L^\infty(0,T;L^q(\Omega))}(|\vec{a}|^2+\displaystyle{\int_Q} f^2dxdt), \end{array}\eqno{(2.14)} $$ where $\vec{a}=(a_1,a_2,\cdots,a_{K_0})=(\langle y_0(x,0),X_1(x)\rangle,\langle y_0(x,0),X_2(x)\rangle,\cdots,\langle y_0(x,0),X_{K_0}(x)\rangle)$.
Thus, by (2.13), we get
$$|\vec{u}|^2\leq C(system,K_0)\|e(x,t)\|^2_{L^\infty(0,T;L^q(\Omega))}
(1+|\vec{a}|^2+\displaystyle{\int_Q} f^2dxdt).\eqno{(2.15)} $$ By (2.2), (2.14) and (2.15), we obtain $$\begin{array}{ll}
&\displaystyle\sup_{t\in[0,T]}\|(y-y_0)(\cdot,t)\|^2+\displaystyle{\int^T_0}\|\nabla(y-y_0)(\cdot,t)\|^2dt\\
&\leq C(system,K_0)\|e(x,t)\|^2_{L^\infty(0,T;L^q(\Omega))}(1
+|\vec{a}|^2+\displaystyle{\int_Q} f^2dxdt).\end{array} $$ Summarizing the above, we complete the proof of Theorem 1.1. $\hbox{\vrule height1.5ex width.5em}$
\section{An example} \hspace*{0.5 cm}In this section, we present an example showing that, with a small perturbation $e(x,t)$, (1.1) has no periodic solution in general. This demonstrates the importance of an outside control to recover the periodicity, as in Theorem 1.1.
We consider the following one dimensional parabolic equation: $$ \left\{\begin{array}{ll} y_t-y_{xx}-y-e(x)y=f(x),\;\;\;\;\;\;&0\leq x\leq \pi,\;0\leq t\leq T,\\ y(0,t)=y(\pi,t)=0,&0\leq t\leq T. \end{array}\right.\eqno{(3.1)} $$ Let $L_e y=-y_{xx}-y-e(x)y$ with $e(x)\in C^0[0,\pi]$. Suppose $0$ is an eigenvalue of $L_e$ with eigenvectors $\{X_j(x)\}^m_{j=1}$. Then (3.1) has a periodic solution if and only if $$\displaystyle{\int^\pi_0}f(x)X_j(x)dx=0,\;\mbox{for}\;j=1,2,\cdots,m.$$Now, when $e(x)=0$, then $0$ is the first eigenvalue of $L_0$ with $\sin x$ as a basis of the $0$-eigenspace. Hence, (3.1) has a periodic solution if and only if $$\displaystyle{\int^\pi_0}f(x)\sin xdx=0\;\mbox{or }f(x)=\displaystyle{\sum^\infty_{j=2}}a_j\sin jx,\
\sum_{j=2}^{\infty}|a_j|^2<\infty.$$ Now suppose $e(x)\approx 0$. The first eigenvalue $\lambda_e$ of $L_e$ is given by
$$\lambda_e=\displaystyle{\min_{\varphi\in H^1_0(0,\pi),\|\varphi\|_{L^2(0,\pi)}=1}}J_e(\varphi,\varphi),$$where $$J_e(\varphi,\varphi)=\displaystyle{\int^\pi_0}(\varphi^2_x-\varphi^2-e(x)\varphi^2)dx.$$ (See \cite{kn:[3]}). Hence, $$\begin{array}{ll}
\lambda_e&\leq \displaystyle{\min_{\varphi\in H^1_0(0,\pi),\|\varphi\|_{L^2(0,\pi)}=1}}\displaystyle{\int^\pi_0}(\varphi^2_x-\varphi^2)dx+\max|e(x)|\displaystyle{\int^\pi_0}\varphi^2dx\\
&\leq 0+\max|e(x)|\\
& \leq \max|e(x)|. \end{array}\eqno{(3.2)} $$ On the other hand, $$\lambda_e=J_e(\varphi_e,\varphi_e)=\displaystyle{\int^\pi_0}(\varphi_e)^2_xdx-\displaystyle{\int^\pi_0}(1+e(x))\varphi_e^2dx,$$ where
$\varphi_e$ is the eigenvector corresponding to $\lambda_e$, normalized so that $\|\varphi_e\|_{L^2(0,\pi)}=1$.
Since $0$ is the first eigenvalue of $L_0$, we have $$\begin{array}{ll} \lambda_e&=\displaystyle{\int^\pi_0}((\varphi_e)^2_x-(\varphi_e)^2)dx-\displaystyle{\int^\pi_0}e(x)\varphi_e^2dx\\
&\geq -\max|e(x)| \end{array}\eqno{(3.3)} $$
By (3.2) and (3.3), we get $$|\lambda_e|\leq \max|e(x)|,\;\mbox{and }\lambda_e\rightarrow 0 \;\mbox{as }e(x)\rightarrow 0.$$ Next, consider the system with $e(x)+\lambda_e$ as the perturbation in the system conductivity: $$\left\{\begin{array}{ll} y_t-y_{xx}-y-(e(x)+\lambda_e)y=f(x),\;\;\;\;\;\;&0\leq x\leq \pi,\;0\leq t\leq T,\\ y(0,t)=y(\pi,t)=0,&0\leq t\leq T. \end{array}\right.\eqno{(3.4)} $$ Then when $e(x)\approx 0$, we have $(e(x)+\lambda_e)\approx 0$. However, if (3.4) still has a periodic solution, we have $$ \displaystyle{\int^\pi_0}f(x)\varphi_edx=0. $$ If this is the case for any given $f$, we then have $$ \displaystyle{\int^\pi_0}\sin jx\varphi_e dx=0,\;\mbox{for }j=2,3,\cdots. $$ This implies that $\varphi_e=C \sin x$ and thus $$ -e(x)\sin x=\lambda_e \sin x,\;\mbox{or }e(x)=-\lambda_e. $$ This is a contradiction unless $e(x)$ is constant. This shows that for any non-constant small perturbation $e(x)$, for most a priori given $f$, the periodicity of the system is lost.
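The two-sided bound $|\lambda_e|\leq \max|e(x)|$ can be checked numerically. The following sketch (our illustration; the perturbation $e(x)=0.01\sin 3x$ and the grid size are assumptions, not taken from the text) discretizes $L_e$ by finite differences with Dirichlet boundary conditions and compares its first eigenvalue with $\max|e(x)|$:

```python
import numpy as np

# Finite-difference sketch (illustrative; e(x) and the grid are assumptions):
# discretize L_e y = -y'' - y - e(x) y on (0, pi) with Dirichlet conditions
# and check |lambda_e| <= max |e(x)|, as in (3.2)-(3.3).
n = 400
h = np.pi / (n + 1)
x = np.linspace(h, np.pi - h, n)      # interior grid points
e = 0.01 * np.sin(3 * x)              # a small non-constant perturbation

# Tridiagonal approximation of -d^2/dx^2, shifted by -(1 + e(x)) on the diagonal.
L = (np.diag(2.0 / h**2 - 1.0 - e)
     + np.diag(-np.ones(n - 1) / h**2, 1)
     + np.diag(-np.ones(n - 1) / h**2, -1))

lam_e = np.linalg.eigvalsh(L)[0]      # first (smallest) eigenvalue
print(lam_e, np.max(np.abs(e)))
```

For $e\equiv 0$ the same computation returns a first eigenvalue of size $O(h^2)$, consistent with $\lambda_0=0$ for $L_0$.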
\section{Local stabilization} \hspace*{0.5 cm}In this section, we consider the same problem as studied in Section 2, but with the control only imposed over a subregion $\omega\subset\Omega$ and time interval $E\subset [0,T]$ with $m(E)>0$.
For the proof of Theorem 1.2, we need the following lemma, whose quantitative version in the Laplacian case can be found in \cite{kn:[4]} and \cite{kn:[5]}:
{\bf Lemma 4.1} {\it Let $X_{ij}(\omega)=\displaystyle\int_\omega X_i(x)X_j(x)dx$. Then the symmetric matrix $X(\omega,k)=(X_{ij}(\omega))_{1\leq i,j\leq k}$ is positive definite for any $k\geq 1$. In particular, it is invertible.}
{\it Proof of Lemma 4.1:} Let $a=(a_1,a_2,\cdots,a_k)\in {\bf R^k}$ and let $$
I(a,a)=\displaystyle\int_\omega|\sum^k_{j=1}a_jX_j(x)|^2dx. $$ Then $$I(a,a)=a\cdot X(\omega,k)\cdot a^\tau,\mbox{ where }a^\tau=\left( \begin{array}{ccc}
a_1
\\
a_2 \\ \vdots\\
a_k \end{array} \right). $$ Clearly, $I(a,a)\geq 0$. If $X(\omega,k)$ is not positive definite, then there is a vector $a'=(a'_1,a'_2,\cdots,a'_k)\neq 0$ such that $I(a',a')=0$. Without loss of generality, assume that $a'_k\not =0$. Hence,
$$\displaystyle\sum^k_{j=1}a'_jX_j(x)|_{\omega}=0.\eqno{(4.1)}$$ We thus get over $\omega$: $$X_k(x)=\displaystyle\sum_{j<k}b_jX_j(x),\;\mbox{with }b_j=-\displaystyle\frac{a'_j}{a'_k}.\eqno{(4.2)}$$ Applying $(L_0)^m$ to (4.2) over $\omega$, we have $$\lambda^m_k X_k(x)=\displaystyle\sum_{j<k}b_j\lambda^m_jX_j(x).$$ We get $$X_k(x)=\displaystyle\sum_{j<k}b_j(\frac{\lambda_j}{\lambda_k})^mX_j(x)\mbox{ over }\omega.$$ Letting $m\rightarrow\infty$, we get over $\omega$ $$ X_k(x)=\displaystyle\sum_{k'\leq j<k}b_jX_j(x),\eqno{(4.3)} $$ where $$
\left\{\begin{array}{ll} \lambda_j=\lambda_k,\;\;\;&\mbox{for }j\geq k',\\ \lambda_j<\lambda_k,&\mbox{for }j<k'. \end{array}\right.\eqno{(4.4)} $$ By (4.4), we get over $\Omega$, $$ L_0(X_k(x)-\displaystyle\sum_{k'\leq j<k}b_jX_j(x))=\lambda_kX_k(x)-\displaystyle\sum_{k'\leq j<k}b_j\lambda_jX_j(x)=\lambda_k[X_k(x)-\displaystyle\sum_{k'\leq j<k}b_jX_j(x)]. $$ By (4.3) and the unique continuation for solutions of elliptic equations, we get $$ X_k(x)-\displaystyle\sum_{k'\leq j<k}b_jX_j(x)\equiv 0\;\mbox{over }\Omega. $$ This contradicts the linear independence of the system $\{X_j\}$.\hbox{\vrule height1.5ex width.5em}
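Lemma 4.1 can be illustrated numerically in an assumed one-dimensional setting where $L_0=-d^2/dx^2$ on $(0,\pi)$ and $X_j(x)=\sin jx$ (this concrete choice of $\Omega$, $\omega$ and $k$ is ours, for illustration only): the Gram matrix $X(\omega,k)$ over a strict subinterval $\omega$ remains positive definite.

```python
import numpy as np

# Illustrative check of Lemma 4.1 in an assumed 1-d setting: X_j(x) = sin(jx)
# are the Dirichlet eigenfunctions on (0, pi), omega = (0.3, 0.9), k = 4.
def trapz(y, x):
    # trapezoidal rule along the last axis (small helper, no np.trapz needed)
    return np.sum((y[..., 1:] + y[..., :-1]) / 2 * np.diff(x), axis=-1)

k = 4
x = np.linspace(0.3, 0.9, 4001)                          # grid on omega
phi = np.array([np.sin((j + 1) * x) for j in range(k)])  # rows X_1, ..., X_k

# Gram matrix X(omega, k)_{ij} = int_omega X_i(x) X_j(x) dx.
gram = trapz(phi[:, None, :] * phi[None, :, :], x)

min_eig = np.linalg.eigvalsh(gram).min()
print(min_eig)
```

The smallest eigenvalue comes out strictly positive, so $X(\omega,k)$ is invertible, in agreement with the lemma.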
{\bf Proof of Theorem 1.2:} Similarly to the proof of Theorem 1.1, we need only find a vector $\vec{u}=(u_1,u_2,\cdots,u_{K_0})\in {\mathbf R}^{K_0}$ such that $$J^*_\omega(\vec{u})=-J_{0,\omega},$$ where $$J^*_\omega(\vec{u})=(\langle v(x,T),X_1(x)\rangle,\langle v(x,T),X_2(x)\rangle,\cdots,\langle v(x,T),X_{K_0}(x)\rangle)=(v_1(T),v_2(T),\cdots,v_{K_0}(T)) $$ with $v$ the solution of the following equation: $$ \left\{\begin{array}{ll} v_t +L_0v+e(x,t)v=\displaystyle{\sum_{j=1}^{K_0}}\chi_\omega(x)\chi_E(t)u_jX_j(x), \;\;\;& \mbox{in }\;\;Q,\\
v=0, & \mbox{on }\;\; \Sigma,\\ v_j(0)=0, & \mbox{for }\;j\leq K_0,\\ v\in {\bf S_{K_0}}. \end{array}\right. \eqno{(4.5)} $$ and $$ J_{0,\omega}=((v_0)_1(T),(v_0)_2(T),\cdots,(v_0)_{K_0}(T)) $$ with $v_0$ the solution of the following system $$
\left\{\begin{array}{ll} (v_0)_t +L_0v_0+e(x,t)v_0=-e(x,t)y_0, \;\;\;& \mbox{in }\;\;Q,\\
v_0=0, & \mbox{on }\;\; \Sigma,\\ (v_0)_{j}(0)=0, & \mbox{for }\;j\leq K_0,\\ v_0\in {\bf S_{K_0}}. \end{array}\right. \eqno{(4.6)} $$ In the same way, if $J^*_\omega$ is not invertible, then there exists a vector $\vec{\xi}=(\xi_1,\xi_2,\cdots,\xi_{K_0})$ with
$|\vec{\xi}|=1$ such that the following system has a solution: $$ \left\{\begin{array}{ll} v_t +L_0v+e(x,t)v=\displaystyle{\sum_{j=1}^{K_0}}\chi_\omega(x)\chi_E(t)\xi_jX_j(x), \;\;\;& \mbox{in }\;\;Q,\\
v=0, & \mbox{on }\;\; \Sigma,\\ v_j(0)=0=v_j(T), & \mbox{for }\;j\leq K_0,\\ v\in {\bf S_{K_0}}. \end{array}\right. \eqno{} $$ We then get $$ v_j(t)'+\lambda_jv_j(t)+\displaystyle\int_\Omega e(x,t)v(x,t)X_j(x)dx=\displaystyle\sum^{K_0}_{l=1}\xi_l\chi_E(t)X_{lj}(\omega),\;\mbox{for }j=1,2,\cdots,K_0. $$ We similarly get $$ (e^{\lambda_j t}v_j(t))'_t+e^{\lambda_j t}\displaystyle\int_\Omega e(x,t)v(x,t)X_j(x)dx=e^{\lambda_j t}\displaystyle\sum^{K_0}_{l=1}\xi_l\chi_E(t)X_{lj}(\omega),\;\mbox{for }j=1,2,\cdots,K_0. $$ $$ 0+\displaystyle\int^T_0\int_\Omega e^{\lambda_j t}e(x,t)v(x,t)X_j(x)dxdt=\displaystyle\int^T_0e^{\lambda_j t}\displaystyle\sum^{K_0}_{l=1}\xi_l\chi_E(t)X_{lj}(\omega)dt. $$ We then get $$ \left( \begin{array}{ccc} \displaystyle\int^T_0e^{\lambda_1 t}\chi_E(t)dt&\;&\;
\\
\;&\displaystyle\int^T_0e^{\lambda_2 t}\chi_E(t)dt&\; \\ \;&\ddots&\;\\ \;&\;&\displaystyle\int^T_0e^{\lambda_{K_0} t}\chi_E(t)dt \end{array} \right) X(\omega,K_0)\left( \begin{array}{ccc}
\xi_1
\\
\xi_2 \\ \vdots\\
\xi_{K_0} \end{array} \right)$$ $$ =\left( \begin{array}{ccc}
\displaystyle\int^T_0\int_\Omega e^{\lambda_1 t}e(x,t)v(x,t)X_1(x)dxdt\\ \displaystyle\int^T_0\int_\Omega e^{\lambda_2 t}e(x,t)v(x,t)X_2(x)dxdt\\ \vdots\\ \displaystyle\int^T_0\int_\Omega e^{\lambda_{K_0} t}e(x,t)v(x,t)X_{K_0}(x)dxdt \end{array} \right) $$ $$ \left( \begin{array}{ccc}
\xi_1
\\
\xi_2 \\ \vdots\\
\xi_{K_0} \end{array} \right)=X(\omega,K_0)^{-1}
\left( \begin{array}{ccc} (\displaystyle\int^T_0e^{\lambda_1 t}\chi_E(t)dt)^{-1} \displaystyle\int_Q e^{\lambda_1 t}evX_1dxdt\\ (\displaystyle\int^T_0e^{\lambda_2 t}\chi_E(t)dt)^{-1} \displaystyle\int_Q e^{\lambda_2 t}evX_2dxdt\\ \vdots\\ (\displaystyle\int^T_0e^{\lambda_{K_0} t}\chi_E(t)dt)^{-1} \displaystyle\int_Q e^{\lambda_{K_0} t}evX_{K_0}dxdt \end{array} \right)\eqno{(4.7)} $$ By Lemma 4.1, we know $X(\omega,K_0)^{-1}$ is a bounded linear operator from ${\mathbf R}^{K_0}$ to ${\mathbf R}^{K_0}$.
By the energy estimate in Theorem 2.1, we have for $v(x,t)$, $$\begin{array}{ll}
\displaystyle\sup_{t\in [0,T]}\|v(\cdot,t)\|^2+\int^T_0\|\nabla v(\cdot,t)\|^2dt&\leq C(system,K_0)\displaystyle\int_Q(\sum_{j=1}^{K_0}\chi_\omega(x)\chi_E(t)\xi_jX_j(x))^2dxdt\\
&\leq C(system,K_0)T|\vec{\xi}|^2\\ &\leq C(system,K_0). \end{array}\eqno{(4.8)}$$ By the H\"older inequality, we have $$\begin{array}{ll}
\displaystyle\int_\Omega|evX_j|dx\leq C(\Omega,N,q)\|e\|_{L^\infty(0,T;L^q(\Omega))}[1+\lambda_j^2+\|v(\cdot,t)\|^2+\|\nabla v(\cdot,t)\|^2]. \end{array}\eqno{(4.9)}$$ Together with (4.8), we thus have $$\begin{array}{ll}
\displaystyle\int^T_0\int_\Omega|evX_j|dxdt\leq C(system,K_0)\|e\|_{L^\infty(0,T;L^q(\Omega))}. \end{array}\eqno{(4.10)}$$ Back to (4.7), we have
$$\begin{array}{ll} |\vec{\xi}|^2&\leq C(system,\omega,K_0)\frac{1}{(m(E))^2}\|X(\omega,K_0)^{-1}\|^2\|e\|^2_{L^\infty(0,T;L^q(\Omega))}\\
&\leq C(system,\omega,K_0)\frac{1}{(m(E))^2}\|e\|^2_{L^\infty(0,T;L^q(\Omega))}. \end{array}$$
Hence, when $\|e\|^2_{L^\infty(0,T;L^q(\Omega))}$ is sufficiently small, we get $|\vec{\xi}|^2<1$. This gives a contradiction. Therefore, we have shown that $J^*_\omega$ is invertible under small perturbations. By the same arguments as those in the proof of Theorem 1.1, we can also show the energy estimates as stated in Theorem 1.2.
This completes the proof of Theorem 1.2. $\hbox{\vrule height1.5ex width.5em}$
\end{document} |
\begin{document}
\title{A Family of Continuous Variable Entanglement Criteria using General Entropy Functions}
\author{A.~Saboia} \email{saboia@if.ufrj.br} \affiliation{Instituto de F\'{\i}sica, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, RJ 21941-972, Brazil}
\author{F.~Toscano} \affiliation{Instituto de F\'{\i}sica, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, RJ 21941-972, Brazil}
\author{S. P. ~Walborn} \affiliation{Instituto de F\'{\i}sica, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, RJ 21941-972, Brazil}
\begin{abstract} We derive a family of entanglement criteria for continuous variable systems based on the R\'enyi entropy of complementary distributions. We show that these entanglement witnesses can be more sensitive than those based on second-order moments, as well as previous tests involving the Shannon entropy [Phys. Rev. Lett. \textbf{103}, 160505 (2009)]. We extend our results to include the case of discrete sampling, and develop another set of entanglement tests using the discrete Tsallis entropy. We provide several numerical results which show that our criteria can be used to identify entanglement in a number of experimentally relevant quantum states. \end{abstract}
\pacs{03.67.Mn, 03.65.Ud, 89.70.Cf}
\maketitle
\section{INTRODUCTION} \label{sec:introduction}
Quantum entanglement is a fundamental property of quantum systems that can be exploited for quantum computation, quantum teleportation and quantum cryptography \cite{chuang00}. As such, its detection is an essential task in an experimental setting. Many techniques exist for detecting entanglement in discrete systems (see \cite{guhne09, horodecki09} for reviews). In continuous variable systems, its identification can be more complicated, due to the large Hilbert space structure. However, there is a considerable amount of work concerning entanglement detection and characterization of Gaussian states \cite{adesso07,braunstein05}, where tests involving only the second-order moments \cite{simon00,duan00,mancini02,giovannetti03,hyllus06,serafini06,fujikawa09} are adequate. Nevertheless, there is a large interest in non-Gaussian states, since non-Gaussianity is necessary for some quantum information tasks, such as quantum computation \cite{lloyd99,bartlett02,ohliger10} and entanglement distillation \cite{dong08,hage08}. Second-order criteria are sufficient but not necessary for entanglement in non-Gaussian states. As such, there has been some work dedicated towards entanglement detection in non-Gaussian states \cite{agarwal05, shchukin05, hillery06a, hillery06b,chen07,rodo08, walborn09,miranowicz09,hillery09,sperling09b,sperling09,adesso09}. The set of criteria derived by Shchukin and Vogel (SV) \cite{shchukin05}, for instance, is very powerful and general, but may require a large number of measurements \cite{shchukin05b}. We note that the SV criteria have been applied for the experimental detection of non-Gaussian entanglement \cite{gomes09b}. \par It has been shown that classical entropy functions can be used to formulate Bell's inequalities \cite{braunstein88} and entanglement witnesses for bipartite $d\times d$ level systems \cite{giovannetti04}. 
These are examples of non-linear entanglement witnesses, which provide improvements in sensitivity at little to no extra experimental effort \cite{guhne06b,moroder08}. In Ref. \cite{walborn09}, the Shannon entropy of complementary distributions was used to derive a set of entanglement witnesses for bipartite continuous variable quantum systems. This approach is especially useful in the experimental characterization of entanglement, since it considers only a pair of joint quadrature measurements. At the same time, these entropic witnesses are more sensitive than second-order tests (i.e., those based solely on the elements of the covariance matrix) \cite{simon00, duan00, mancini02, giovannetti03, hyllus06}. In the present work, we extend this approach by deriving entanglement criteria using more general entropy functions. For example, we use the classical R\'enyi entropy, characterized by the continuous parameter $\alpha$, to derive a family of entropic entanglement witnesses which provides a more powerful tool for identification of entanglement. We note that the Wehrl entropy \cite{marchiolli08}, and also quantum versions of the Shannon \cite{barnett89} and R\'enyi entropies \cite{horodecki96b,horodecki96c}, have been used to identify quantum entanglement. In general, these criteria require complete knowledge of the density matrix or more complicated measurement schemes \cite{bovino05}. \par This paper is organized as follows. In section \ref{sec:shannon} we define our notation and briefly review the criteria of Ref. \cite{walborn09}. In section \ref{sec:improvement} we develop a family of entanglement witnesses for continuous variables using the classical R\'enyi entropy. We then extend these results to include the more realistic case of discrete sampling. Using these discretized distributions, we derive another set of inequalities using the discrete Tsallis entropy. We test the continuous variable R\'enyi criteria on several experimentally relevant states. 
Section \ref{sec:results} provides numerical results which show that the generalized R\'enyi witnesses detect entanglement in a wider variety of quantum states than second-order tests or witnesses based solely on Shannon entropy \cite{walborn09}. In section \ref{sec:conclusions} we provide concluding remarks. \section{Entanglement Criteria with Shannon Entropy} \label{sec:shannon} We first review two sets of inequalities which were developed in Ref. \cite{walborn09}. These inequalities are satisfied for all separable states, so that the violation of either one indicates that the bipartite state is entangled.
We first take into account a rotation of the usual canonical operators $\mathsf{x}$ and $\mathsf{p}$, and define a pair of general complementary operators for systems 1 and 2 as \begin{subequations} \label{eq:quads} \begin{equation} \mathbf{\mathsf{r_j}}={\rm{cos}} {\theta_j} \mathbf{\mathsf{x_j}}+{\rm{sin}} {\theta_j} \mathbf{\mathsf{p_j}} \end{equation}
\begin{equation}\mathbf{\mathsf{s_j}}={\rm{cos}} {\theta_j } \mathbf{\mathsf{p_j}}-{\rm{sin}} {\theta_j} \mathbf{\mathsf{x_j}},
\end{equation}
\end{subequations}
where $j=1,2$ refers to each subsystem of the bipartite state. The commutation relation $[\mathbf{\mathsf{x_j}},\mathbf{\mathsf{p_k}}]=i\delta_{j,k}$ for canonical operators $\mathbf{\mathsf{x_j}}$ and $\mathbf{\mathsf{p_k}}$ implies $[\mathbf{\mathsf{r_j}},\mathbf{\mathsf{s_k}}]=i\delta_{j,k}$, $j,k=1,2$. Here $x$ and $p$ are dimensionless continuous variables, such as quadratures of electromagnetic field modes or the dimensionless position and momentum of a point particle. Let us define the global operators $\mathbf{\mathsf{r_{\pm}}}$ and $\mathbf{\mathsf{s_{\pm}}}$: \begin{subequations}
\begin{equation}
\mathbf{\mathsf{r_{\pm}}}=\mathbf{\mathsf{r_1}} \pm \mathbf{\mathsf{r_2}},
\label{eqGlobalOpR}
\end{equation} and
\begin{equation}
\mathbf{\mathsf{s_{\pm}}}=\mathbf{\mathsf{s_1}} \pm \mathbf{\mathsf{s_2}}.
\label{eqGlobalOpS}
\end{equation} \end{subequations} Since $[\mathbf{\mathsf{r_j}},\mathbf{\mathsf{s_k}}]=i\delta_{j,k}$, $j,k=1,2$, it is easy to see that $[\mathbf{\mathsf{r_{\mu}}},\mathbf{\mathsf{s_{\nu}}}]=2i\delta_{\mu,\nu}$ with $\mu,\nu=\pm$.
The inequalities in Ref. \cite{walborn09} were developed initially for a separable pure state $|\psi_1\rangle \otimes |\psi_2\rangle$, corresponding to the wave function $ \Psi(r_1, r_2) = \psi_1(r_1) \psi_2(r_2)$, which can also be written as \begin{equation}
\Psi(r_+, r_-) = \frac{1}{\sqrt{2}}\psi_1\left(\frac{r_{+}+r_{-}}{2}\right) \psi_2\left(\frac{r_{+}-r_{-}}{2}\right). \label{eqWaveFunction} \end{equation} For simplicity, we denote the probability distributions associated to measurement of $r_{\pm}$ as simply $R_\pm$. They are given by \begin{equation}
R_{\pm}= \frac{1}{2} \int {\rm d}r_{\mp} R_{1}\left(\frac{r_{+}+r_{-}}{2}\right)R_{2}\left(\frac{r_{+}-r_{-}}{2}\right),
\label{eqProbDistrRpm} \end{equation} which is equivalent to the convolution \begin{equation}
R_{\pm}=R_1 * R_2^{(\pm)},
\label{eq:Rpm} \end{equation}
where $R_i(r_i)=|\psi_i(r_i)|^2$, $R_2^{(+)} \equiv R_2(r)$ and $R_2^{(-)} \equiv R_2(-r)$. The Shannon entropy for continuous variables is defined by \begin{equation}
H[R] = -\int {\rm d}r R(r)\ln R(r), \label{eqDefShannonCont} \end{equation} where $R(r)$ is the probability distribution associated to the measurement of an arbitrary continuous variable $r$. Similar expressions are obtained for the probability distribution $S$ of the complementary variable $s$.
Two inequalities were introduced in Ref. \cite{walborn09}. Their violation indicates the presence of entanglement. Using the probability distributions $R_\pm$ and $S_\pm$ defined above and applying the entropy power inequality \cite{cover}, the following criteria were obtained: \begin{eqnarray}
H[R_{\pm}]+ H[S_{\mp}] \geq \frac{1}{2}\ln \left[ \sum_{i,j} e^{(2 H[R_i]+2 H[S_j])} \right]. \label{eqStrong} \end{eqnarray} These criteria are useful only in the case of pure states. They can be extended to include mixed states as well, but numerical optimization procedures are required \cite{walborn09}. By further applying an entropic uncertainty relation for the distributions $R_j$ and $S_j$ \cite{bialynicki75}, a second set of entropic witnesses was derived: \begin{equation}
H[R_{\pm}]+ H[S_{\mp}] \geq \ln (2\pi e). \label{eqWeak} \end{equation} Although inequality \eqref{eqWeak} is weaker than inequality \eqref{eqStrong}, it has the advantage that it is also suitable for mixed states. It was shown that both of these criteria are more sensitive than second-order tests involving the same operators. \section{GENERALIZATION OF ENTROPIC CRITERIA} \label{sec:improvement} A natural attempt to improve the entropic entanglement witnesses described in Section II is to apply a more general entropy function. For this purpose, we first employ the R\'{e}nyi entropy for continuous variables, defined by \cite{renyi61,cover} \begin{equation}
H_{\alpha} [R] = \frac{1}{1-\alpha}\ln \left[\int {\rm d}r R^\alpha (r) \right ]=\frac{\alpha}{1-\alpha}\ln \|R\|_\alpha,
\label{eqDefRenyiCont}
\end{equation}where ${\|R\|}_{\alpha}$ is the $\mathcal{L}_{\alpha}$ norm of the distribution $R$ (see Ref. \cite{cover}):
\begin{equation}
\|R\|_\alpha = \left[ \int {\rm d}r R^\alpha (r)\right ]^{1/\alpha}. \label{eqDefNorm} \end{equation}
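As a concrete sanity check of definition \eqref{eqDefRenyiCont} (our own, for an assumed Gaussian distribution), a zero-mean Gaussian of variance $\sigma^2$ has the closed form $H_\alpha=\frac{1}{2}\ln(2\pi\sigma^2)+\frac{\ln\alpha}{2(\alpha-1)}$, which tends to the Shannon value $\frac{1}{2}\ln(2\pi e\sigma^2)$ as $\alpha\to 1$. A short numerical sketch:

```python
import numpy as np

# Renyi entropy of an assumed Gaussian N(0, sigma^2), computed by quadrature
# and compared with the closed form (1/2) ln(2 pi sigma^2) + ln(alpha)/(2(alpha-1)).
def trapz(y, x):
    # trapezoidal rule on a 1-d grid
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))

def renyi(R, r, alpha):
    # H_alpha[R] = (1/(1 - alpha)) ln int R^alpha dr
    return np.log(trapz(R**alpha, r)) / (1.0 - alpha)

sigma, alpha = 1.3, 2.0
r = np.linspace(-12, 12, 20001)
R = np.exp(-r**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

H_num = renyi(R, r, alpha)
H_exact = 0.5 * np.log(2 * np.pi * sigma**2) + np.log(alpha) / (2 * (alpha - 1))
print(H_num, H_exact)
```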
As in section \ref{sec:shannon}, let us first consider only pure states of the form $|\psi_1\rangle \otimes |\psi_2\rangle$. Using the probability distributions \eqref{eq:Rpm}, the R\'{e}nyi entropy for global distributions is \begin{equation}
H_{\alpha} [R_{\pm}] = \frac{\alpha}{1-\alpha}\ln \|R_1 * R_2^{(\pm)}\|_\alpha.
\label{eqRenyiGlobal} \end{equation} To derive an inequality, we employ Young's inequality, which is valid for convolutions of distributions \cite{barthe98}. For $1/\alpha = 1/{\alpha_1}+1/{\alpha_2}-1$, Young's inequality is \begin{equation}
\|R_1 * R_2^{(\pm)}\|_\alpha \leq C(\alpha_1, \alpha_2) \|R_1\|_{\alpha_1} \|R_2\|_{\alpha_2},
\label{eqYoung1} \end{equation} with $\alpha, \alpha_1, \alpha_2 \geq1$ or \begin{equation}
\|R_1 * R_2^{(\pm)}\|_\alpha \geq C(\alpha_1, \alpha_2) \|R_1\|_{\alpha_1} \|R_2\|_{\alpha_2}, \label{eqYoung2} \end{equation} for $\alpha, \alpha_1, \alpha_2 \leq1$. The coefficient $C(\alpha_1, \alpha_2)$ is given by \begin{equation} C(\alpha_1, \alpha_2) = \frac{C_{\alpha_1} C_{\alpha_2}}{C_{\alpha}}, \label{eqYoungCoeff} \end{equation} where \begin{equation}
C_t= \sqrt{\frac{t^{\frac{1}{t}}}{|t'|^{\frac{1}{t'}}}}, \end{equation} with $t^\prime \equiv t/(t-1)$. Without loss of generality, we choose variables such that $\alpha$, $\alpha_1$, $\alpha_2 \geq1$ and $0 \leq \beta$, $\beta_1$, $\beta_2 \leq1$. Then, from inequalities \eqref{eqYoung1} and \eqref{eqYoung2} we can write: \begin{subequations}
\begin{equation}
\|R_{\pm}\|_\alpha \leq C(\alpha_1, \alpha_2) \|R_1\|_{\alpha_1} \|R_2\|_{\alpha_2},
\label{eqYoung3}
\end{equation} and
\begin{equation}
\|S_{\mp}\|_\beta \geq C(\beta_1, \beta_2) \|S_1\|_{\beta_1} \|S_2\|_{\beta_2},
\label{eqYoung4}
\end{equation} \end{subequations} where we remember that \begin{subequations} \label{eqRelations}
\begin{eqnarray}
\frac{1}{\alpha} = \frac{1}{\alpha_1} + \frac{1}{\alpha_2} -1,
\label{eqRelationsAlphas}
\end{eqnarray} and
\begin{eqnarray}
\frac{1}{\beta} = \frac{1}{\beta_1} + \frac{1}{\beta_2} -1.
\label{eqRelationsBetas}
\end{eqnarray} \end{subequations} Dividing inequality \eqref{eqYoung3} by inequality \eqref{eqYoung4}, we can set up a new inequality \begin{equation}
\frac{\|R_{\pm}\|_\alpha}{\|S_{\mp}\|_\beta} \leq \frac{C(\alpha_1, \alpha_2)}{C(\beta_1, \beta_2)} \frac{\|R_1\|_{\alpha_1}}{\|S_1\|_{\beta_1}} \frac{\|R_2\|_{\alpha_2}}{\|S_2\|_{\beta_2}},
\label{eqYoungDiv} \end{equation} which is satisfied whenever the pure state is separable, since the distributions $R_{\pm}$ and $S_{\mp}$ can then be expressed in terms of convolutions of the probability distributions of the two subsystems. \par We can write the norm in terms of an entropy, such as the R\'{e}nyi entropy or Tsallis entropy. Taking the logarithm of inequality (\ref{eqYoungDiv}) and using Eq. \eqref{eqDefRenyiCont} results in an inequality in terms of R\'{e}nyi entropies:
\begin{align}
&\left(\frac{\alpha-1}{\alpha} \right) H_\alpha[R_{\pm}]+
\left( \frac{1-\beta}{\beta} \right) H_\beta[S_{\mp}] \geq \nonumber\\
&\left( \frac{\alpha_1-1}{\alpha_1} \right) H_{\alpha_1} [R_1] +
\left( \frac{1-\beta_1}{\beta_1} \right) H_{\beta_1} [S_1] +\nonumber\\
&\left( \frac{\alpha_2-1}{\alpha_2} \right) H_{\alpha_2} [R_2] +
\left( \frac{1-\beta_2}{\beta_2} \right) H_{\beta_2} [S_2] + \nonumber \\
&\ln \left[\frac{C(\beta_1, \beta_2)}{C(\alpha_1, \alpha_2)}\right]\ .
\label{eqStrongRenyi} \end{align} Inequality \eqref{eqStrongRenyi} is a generalization of criteria \eqref{eqStrong}. In order to recover \eqref{eqStrong} from \eqref{eqStrongRenyi} we first consider the case $\alpha=\beta$ and then take the limit $\alpha \rightarrow 1$. Violation of inequality \eqref{eqStrongRenyi} implies that the pure state considered is entangled. Extension of \eqref{eqStrongRenyi} to include mixed states is possible, although evaluation of the right-hand side requires minimization over all possible decompositions of the mixed state, and as such, is not very useful in an experimental setting \cite{walborn09}. \par To derive a second inequality that does not depend on the entropy functions $H_{\alpha_j}[R_j]$ and $H_{\beta_j}[S_j]$, we employ the entropic uncertainty relation for R\'{e}nyi entropy given by Ref. \cite{bialynicki06}: \begin{equation}
H_{\alpha_j}[R_j]+ H_{\beta_j}[S_j] \geq
-\frac{1}{2(1-\alpha_j)} \ln \frac{\alpha_j}{\pi}-\frac{1}{2(1-\beta_j)} \ln \frac{\beta_j}{\pi},
\label{eqBirula38} \end{equation} where it is necessary to include the restriction \cite{bialynicki06}: \begin{equation}
\frac{1}{\alpha_j}+\frac{1}{\beta_j}=2,\,\, j=1, 2.
\label{eqAlphaBeta12} \end{equation} Eq. (\ref{eqAlphaBeta12}), along with Eqs. (\ref{eqRelations}), lead to \begin{equation}
\frac{1}{\alpha}+\frac{1}{\beta}=2.
\label{eqAlphaBeta} \end{equation} Applying the uncertainty relation (\ref{eqBirula38}) to inequality (\ref{eqStrongRenyi}) and performing some algebra we obtain the inequality \begin{align}
&H_\alpha[R_{\pm}]+ H_\beta[S_{\mp}] \geq \nonumber \\ &-\frac{1}{2(1-\alpha)} \ln \frac{\alpha}{\pi}-\frac{1}{2(1-\beta)} \ln \frac{\beta}{\pi} +\nonumber \\ &
\frac{\alpha}{\alpha-1} \sum\limits_{j=1,2} \frac{\alpha_j-1}{\alpha_j}\ln \left|\frac{\alpha_j}{\alpha_j-1}\right | -\ln \left|\frac{\alpha}{\alpha-1} \right|.
\label{eqBirulaInt} \end{align} The sum of terms in the last line of Eq. \eqref{eqBirulaInt} is always non-negative. $\alpha_1$ and $\alpha_2$ are arbitrary parameters within the restrictions imposed by Eqs. \eqref{eqRelationsAlphas} and \eqref{eqAlphaBeta}, which guarantee that $1 \leq 1/\alpha_1+1/\alpha_2 \leq 2$. Within this domain we can maximize the last term on the right-hand side of inequality \eqref{eqBirulaInt}, which reaches a maximum value of $\ln 2$ when $\alpha_1=\alpha_2$. This leads directly to the inequality: \begin{align}
H_\alpha[R_{\pm}]+ H_\beta[S_{\mp}] \geq-\frac{1}{2(1-\alpha)} \ln \frac{\alpha}{2 \pi}-\frac{1}{2(1-\beta)} \ln \frac{\beta}{2 \pi}.
\label{eqBirulaWeak} \end{align} Note that our choice $\alpha \geq 1$ and $1/2 \leq \beta \leq 1$ is arbitrary, and that these restrictions can be switched with no alteration in the derivation. Inequality \eqref{eqBirulaWeak} reduces to \eqref{eqWeak} when $\alpha \longrightarrow 1$. \par We now show that inequality (\ref{eqBirulaWeak}) is also valid for mixed states. Since $[\mathbf{\mathsf{r_{\mu}}},\mathbf{\mathsf{s_{\nu}}}]=2i\delta_{\mu,\nu}$ ($\mu,\nu=\pm$), the uncertainty relation for the R\'enyi entropy of the complementary distributions $R_\pm$ and $S_\pm$ is
\begin{equation}
H_{\alpha}[R_\pm]+ H_{\beta}[S_\pm] \geq
-\frac{1}{2(1-\alpha)} \ln \frac{\alpha}{2\pi}-\frac{1}{2(1-\beta)} \ln \frac{\beta}{2\pi},
\label{eqBirulapm} \end{equation} where again $1/\alpha + 1/\beta = 2$. Bialynicki-Birula has shown that this uncertainty relation is also valid for mixed states \cite{bialynicki06}, in which case $R_\pm$ and $S_\pm$ are complementary marginal distributions obtained from the Wigner function associated to the mixed quantum state. We can now make use of an alternative way of deriving inequality (\ref{eqBirulaWeak}) by means of the positive partial transpose (PPT) criterion \cite{peres96,horodecki96,simon00}. For any continuous variable quantum state, the transpose operation is equivalent to a mirror reflection in phase space, taking $(r_j,s_j)\longrightarrow (r_j,-s_j)$ \cite{simon00}. Thus, the partial transpose of a bipartite state $\varrho_{12}$ takes the global variables $r_\pm \longrightarrow r_\pm$ and $s_\pm \longrightarrow s_\mp$, where we take the transpose of subsystem 2. The marginal probability distributions under partial transposition $T$ transform as \begin{subequations} \label{eq:pt2} \begin{align} R_{\pm}^T & =R_{\pm} \\ S^T_{\pm}& =S_{\mp}, \end{align} \end{subequations} and we have
\begin{equation}
H_{\alpha}[R^T_\pm]+ H_{\beta}[S^T_\pm] = H_{\alpha}[R_\pm]+ H_{\beta}[S_\mp].
\label{eq:pt} \end{equation} The partial transpose of a separable density operator is a positive operator, and thus it is still a physical state \cite{peres96, horodecki96, simon00}, and will satisfy the uncertainty relation \eqref{eqBirulapm}.
Substituting Eq. \eqref{eq:pt} into inequality \eqref{eqBirulapm} leads directly to inequality (\ref{eqBirulaWeak}), where we have made no assumptions about the purity of the bipartite state $\varrho_{12}$. Thus, criterion (\ref{eqBirulaWeak}) is also valid for bipartite mixed states.
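As an independent numerical check (an assumed example, not from the analysis above), consider a separable state whose global distributions $R_\pm$ and $S_\mp$ are Gaussians of unit variance (two vacuum-like modes, each local quadrature with variance $1/2$). For conjugate pairs $1/\alpha+1/\beta=2$, inequality \eqref{eqBirulaWeak} should then hold with near equality, since Gaussians saturate the R\'enyi uncertainty relation:

```python
import numpy as np

# Evaluate both sides of the Renyi witness for assumed Gaussian global
# distributions R_+ and S_- of unit variance, with 1/alpha + 1/beta = 2.
def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))

def renyi(P, q, alpha):
    # H_alpha[P] = (1/(1 - alpha)) ln int P^alpha dq
    return np.log(trapz(P**alpha, q)) / (1.0 - alpha)

q = np.linspace(-15, 15, 40001)
gauss = np.exp(-q**2 / 2) / np.sqrt(2 * np.pi)   # variance 1

alpha = 2.0
beta = alpha / (2 * alpha - 1)                   # enforces 1/alpha + 1/beta = 2

lhs = renyi(gauss, q, alpha) + renyi(gauss, q, beta)
rhs = (-np.log(alpha / (2 * np.pi)) / (2 * (1 - alpha))
       - np.log(beta / (2 * np.pi)) / (2 * (1 - beta)))
print(lhs, rhs)
```

The two sides agree to numerical precision, consistent with saturation of the bound by Gaussian states.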
\par
The above argument illustrates that the family of entropic entanglement witnesses \eqref{eqBirulaWeak} consists in fact of PPT criteria. This suggests a general method for developing new PPT criteria: apply \emph{any} quantum mechanical uncertainty relation to the distributions $R_\pm$ and $S_\pm$, and use Eqs. \eqref{eq:pt2}. We note that this was the general spirit of the procedure used by Simon to develop a criterion based on second-order moments \cite{simon00}, and has also been used in Ref. \cite{nha08}. \subsection{Relationship with second-order criteria} The second-order Mancini-Giovannetti-Vitali-Tombesi (MGVT) criterion is \cite{mancini02} \begin{equation} \Delta_{r_{\pm}}^2 \Delta_{s_{\mp}}^2 \geq 1, \label{eq:MGVT} \end{equation} where $\Delta_{q}^2$ is the variance in variable $q$. Inequality \eqref{eq:MGVT} is verified by any separable state. In Ref. \cite{walborn09}, it was shown that the MGVT criterion can be derived directly from the Shannon criterion \eqref{eqWeak} by maximizing the sum $H[R_\pm]+H[S_\mp]$. This leads to the inequalities: \begin{equation}
\ln (2 \pi e \Delta_{r_{\pm}} \Delta_{s_{\mp}}) \geq H[R_{\pm}]+ H[S_{\mp}] \geq \ln (2\pi e). \label{eq:limit} \end{equation} This upper bound is saturated for Gaussian probability distributions \cite{shannon}. Since $R_\pm$ and $S_\pm$ are arbitrary (though complementary) marginal distributions in phase space, this implies that the bound is saturated for Gaussian states. Nevertheless, within the class of non-Gaussian states, inequalities \eqref{eq:limit} show that the criteria given in \eqref{eqWeak} may detect entanglement in states that the MGVT criterion \eqref{eq:MGVT} might not. \par A natural question to ask is whether we can derive new entanglement witnesses by maximizing the sum of R\'enyi entropies $H_\alpha[R_\pm]+H_\beta[S_\mp]$ in criteria \eqref{eqBirulaWeak}. Doing so leads to an inequality also involving second-order moments, due to the fact that the R\'enyi entropy is maximized for the Student-t and Student-r distributions \cite{costa02,vignat06}, which (for zero mean) are completely characterized by the variance. More specifically, we arrive at \begin{equation} \Delta_{r_{\pm}}^2 \Delta_{s_{\mp}}^2 \geq f(\alpha,\beta), \label{eq:Toscano} \end{equation} where $f(\alpha,\beta) \leq 1$ for all allowed values of $\alpha$ and $\beta$. In the limiting case $\alpha, \beta \longrightarrow 1$, $f(\alpha,\beta) = 1$ and we recover the MGVT criterion \eqref{eq:MGVT}. Thus, inequality \eqref{eq:Toscano} is not an improvement over the already established MGVT criterion. \section{Discrete Distributions} \subsection{Discrete R\'enyi Entropy} Inequalities \eqref{eqStrongRenyi} and \eqref{eqBirulaWeak} derived in the above section were developed for continuous distributions $R_\pm$ and $S_\pm$. However, in an experimental setting one typically measures discrete distributions, due to the finite resolution of the measurement apparatus. 
Here, we show how to deal with discrete resolution and we derive an entanglement witness equivalent to \eqref{eqBirulaWeak}, but for discrete distributions. The same procedure can be adopted for a derivation of inequalities equivalent to \eqref{eqStrongRenyi}. Let us call these discrete distributions $R^\delta_\pm$ and $S^\Delta_\pm$, and suppose that their elements are \begin{subequations} \label{eq:discretedist} \begin{equation} \rho^{\delta}_{k \pm} = \int\limits_{k \delta}^{(k+1)\delta} R_{\pm}(r) dr \end{equation} and \begin{equation} \sigma^{\Delta}_{k \pm} = \int\limits_{k \Delta}^{(k+1)\Delta} S_{\pm}(s) ds, \end{equation} \end{subequations} respectively. Here we assume that $r$ measurements have resolution $\delta$ and $s$ measurements are performed with resolution $\Delta$. To apply these inequalities to discrete distributions, one can write the entropy of the continuous distribution in terms of the discrete distribution as \cite{cover} \begin{subequations} \begin{align} H_\alpha[R_\pm] & = H_\alpha[R^\delta_\pm] + \ln \delta, \\ H_\beta[S_\pm] & = H_\beta[S^\Delta_\pm] + \ln \Delta, \end{align} \end{subequations} provided that $\delta$ and $\Delta$ are sufficiently small. Here the discrete R\'enyi entropy is \begin{equation} H_\alpha[R^\delta_\pm]= \frac{1}{1-\alpha} \ln \left ( \sum\limits_k \left (\rho^\delta_{k \pm}\right )^\alpha \right) , \end{equation} and similarly for $H_\beta[S^\Delta_\pm]$. Inequality \eqref{eqBirulaWeak} can then be written in terms of the discrete distributions: \begin{align}
H_\alpha[R^\delta_{\pm}]+ H_\beta[S^\Delta_{\mp}] \geq-\frac{1}{2}\left(\frac{\ln\alpha}{1-\alpha}+\frac{\ln\beta}{1-\beta}\right)+\ln\left(\frac{2\pi}{\delta\Delta}\right).
\label{eqBirulaWeakDisc} \end{align} Nevertheless, the above inequality remains valid for arbitrary resolutions $\delta$ and $\Delta$, since it can also be derived by direct application of the uncertainty relation for discrete R\'enyi entropies developed by Bialynicki-Birula \cite{bialynicki06}.
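As an illustrative numerical check of the relation $H_\alpha[R_\pm] = H_\alpha[R^\delta_\pm] + \ln \delta$ for small $\delta$, the following Python sketch (ours, added for illustration; the Gaussian test distribution and all function names are assumptions, not part of the original analysis) bins a Gaussian density exactly via the error function and compares the shifted discrete R\'enyi entropy with the closed-form continuous value $H_\alpha = \tfrac{1}{2}\ln(2\pi\sigma^2) + \ln\alpha/[2(\alpha-1)]$.

```python
import math

def gaussian_bin_probs(sigma, delta, k_max):
    """Exact bin probabilities rho_k = P(k*delta <= X < (k+1)*delta) for X ~ N(0, sigma^2)."""
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))
    return [cdf((k + 1) * delta) - cdf(k * delta) for k in range(-k_max, k_max)]

def renyi_discrete(probs, alpha):
    """Discrete Renyi entropy H_alpha = ln(sum_k p_k^alpha) / (1 - alpha)."""
    return math.log(sum(p ** alpha for p in probs if p > 0.0)) / (1.0 - alpha)

def renyi_gaussian(sigma, alpha):
    """Closed-form Renyi entropy of the continuous Gaussian N(0, sigma^2)."""
    return 0.5 * math.log(2.0 * math.pi * sigma ** 2) + math.log(alpha) / (2.0 * (alpha - 1.0))

sigma, delta, alpha = 1.0, 0.01, 2.0
probs = gaussian_bin_probs(sigma, delta, k_max=int(10 * sigma / delta))  # bins cover +-10 sigma
discrete_plus_log = renyi_discrete(probs, alpha) + math.log(delta)
# discrete_plus_log approaches the continuous value as delta -> 0
print(discrete_plus_log, renyi_gaussian(sigma, alpha))
```

For $\delta = 0.01$ the two printed values agree to better than $10^{-3}$, consistent with the stated small-$\delta$ validity of the substitution.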
\subsection{Entanglement Criteria with Tsallis Entropy} An uncertainty relation for the Tsallis entropy \cite{tsallis} $T_\alpha$ of continuous distributions was developed in Ref. \cite{rajagopal95}. Since the R\'enyi entropy of a continuous distribution can be written as $H_\alpha = \ln[1+(1-\alpha)T_\alpha]/(1-\alpha)$, one can show that this uncertainty relation is equivalent to that of Ref. \cite{bialynicki06}, given in Eq. \eqref{eqBirula38}. Thus, entanglement witnesses based on the uncertainty relation of Ref. \cite{rajagopal95} would be equivalent to those developed in the previous section. Recently, Wilk and Wlodarczyk (WW) \cite{wilk08} derived an uncertainty relation for the discrete Tsallis entropy that is distinct from the discretization of the relation given in Ref. \cite{rajagopal95}. Furthermore, the WW relation cannot be extended to include continuous distributions. We will thus employ the WW relation to arrive at a new set of entanglement criteria based on the discrete Tsallis entropy. \par The Tsallis entropy of a discrete random variable $X$ is defined as \cite{tsallis} \begin{equation} T_\alpha[X] = \frac{1}{1-\alpha}\left (\sum_k x_k^\alpha -1 \right). \end{equation} The WW uncertainty relations for the discrete distributions $R^\delta_\pm$ and $S^\Delta_\pm$ are, in the case $\delta \Delta \leq (2\pi/\beta)(\alpha/\beta)^{1/2(\alpha-1)}$ \cite{wilk08}, \begin{equation} T_{\alpha}[R^\delta_\pm]+T_{\beta}[S^\Delta_\pm] \geq \frac{1}{1-\alpha}\left [\left(\frac{\beta}{\alpha}\right)^{1/2\alpha}\left(\frac{\beta \delta \Delta}{2 \pi} \right)^{(\alpha-1)/\alpha} -1 \right] \label{eq:WW1} \end{equation} and, in the case $\delta \Delta > (2 \pi/\beta)(\alpha/\beta)^{1/2(\alpha-1)}$, \begin{equation} T_{\alpha}[R^\delta_\pm]+T_{\beta}[S^\Delta_\pm] \geq \frac{1}{\alpha-1}\left [\left(\frac{\alpha}{\beta}\right)^{1/2\alpha}\left(\frac{\beta \delta \Delta}{2 \pi} \right)^{(1-\alpha)/\alpha} -1 \right]. 
\label{eq:WW2} \end{equation} It follows directly from Eqs. \eqref{eq:discretedist} that the partial transpose takes $(R^\delta_\pm)^{T}\longrightarrow R^\delta_\pm$ and $(S^\Delta_\pm)^{T}\longrightarrow S^\Delta_\mp$, and we have \begin{equation} T_{\alpha}[(R^\delta_\pm)^{T}]+T_{\beta}[(S^\Delta_\pm)^{T}]=T_{\alpha}[R^\delta_\pm]+T_{\beta}[S^\Delta_\mp]. \label{eq:Tpt} \end{equation} Since any separable state remains a physical state under partial transpose, the distributions $(R^\delta_\pm)^{T}$ and $(S^\Delta_\pm)^{T}$ of any separable state satisfy inequalities \eqref{eq:WW1} and \eqref{eq:WW2}. Using Eq. \eqref{eq:Tpt} in uncertainty relations \eqref{eq:WW1} and \eqref{eq:WW2} leads immediately to entanglement witnesses based on the Tsallis entropy. Violation of either inequality thus implies that the quantum state associated with the discrete marginal distributions $R_{\pm}^{\delta}$ and $S_{\pm}^{\Delta}$ is entangled. \section{Examples} \label{sec:results} Here we provide some examples that show the utility of the R\'enyi entropic criteria presented in section \ref{sec:improvement}. We focus on several examples of continuous variable states that are currently of experimental interest. We leave further numerical investigation to future work. \subsection{Hermite-Gauss state} Consider the non-Gaussian state given by \begin{eqnarray}
\eta (r_1, r_2) = \frac{(r_1+r_2)}{\sqrt{\pi \sigma_{-} \sigma_{+}^3}} e^{-(r_1+r_2)^2/4\sigma_{+}^2} e^{-(r_1-r_2)^2/4\sigma_{-}^2}, \nonumber\\
\label{eqNonGaussianState} \end{eqnarray} where the widths $\sigma_{+}$ and $\sigma_{-}$ characterize the state. State \eqref{eqNonGaussianState} is non-separable for any values of the parameters $\sigma_{+}$ and $\sigma_{-}$. This state has been produced experimentally using spontaneous parametric down-conversion and has been shown to have several interesting properties \cite{walborn03b,nogueira04a,gomes09a,gomes09b}. We note that it is equivalent to the single-photon entangled state considered in Ref. \cite{agarwal05} when $\sigma_+=\sigma_-=1$. \par Application of the witness \eqref{eqBirulaWeak}, after a lengthy but straightforward calculation, leads to: \begin{subequations} \begin{equation}
\frac{\sigma_{-}}{\sigma_{+}} <
\left[\frac{\pi^{\frac{1}{2}}}{\Gamma(\alpha+\frac{1}{2})}\left(\frac{\alpha}{2}\right)^\alpha\right]^\frac{1}{1-\alpha}, \label{eqsiginf} \end{equation}
\begin{equation}
\frac{\sigma_{-}}{\sigma_{+}} >
\left[\frac{\pi^{\frac{1}{2}}}{\Gamma(\alpha+\frac{1}{2})}\left(\frac{\alpha}{2}\right)^\alpha\right]^{-\frac{1}{1-\alpha}}, \label{eqsigsup} \end{equation} \end{subequations} where we have included both cases, $\alpha \geq 1$ and $1/2 \leq \alpha \leq 1$. Thus, only entangled states of the form \eqref{eqNonGaussianState} that violate one of these inequalities are detected by our R\'enyi entropic criteria \eqref{eqBirulaWeak}. For $\alpha=1$ the limits $\sigma_{-} / \sigma_{+} < \frac{e^{1-\gamma}}{2}$ and $\sigma_{-} / \sigma_{+} > \frac{2}{e^{1-\gamma}}$ (where $\gamma$ is Euler's constant) obtained in \cite{walborn09} are recovered. Figure \ref{comparingnongaussian} shows the limits of entanglement detection for the state \eqref{eqNonGaussianState} as a function of $\alpha$. The graph shows that the sensitivity of the R\'{e}nyi entropic inequality \eqref{eqBirulaWeak} improves as $\alpha \longrightarrow 1/2$. For example, for the non-Gaussian state $\eta(r_1,r_2)$ with $\sigma_{-} / \sigma_{+} = 1.3$, entanglement is not detected by the Shannon entropy criterion (\ref{eqWeak}), but it is detected by the more general R\'enyi entropy criterion (\ref{eqBirulaWeak}). At the same time, there is a large region ($1/\sqrt{3} < \sigma_-/\sigma_+ < \sqrt{3}$) where the second-order Simon criterion \cite{simon00} does not detect entanglement in state \eqref{eqNonGaussianState}. The Simon criterion is a necessary and sufficient condition for entanglement of bipartite Gaussian states; so, when the Simon criterion fails to detect entanglement, the covariance matrix of the state is ``separable'', or in other words, the bipartite Gaussian state with the same covariance matrix is separable. Thus, we can guarantee that any second-order entanglement criterion also fails to detect entanglement in this region. \begin{figure}
\caption{\footnotesize (color online) Entanglement detection for state \eqref{eqNonGaussianState}. The light blue shaded region is where the R\'{e}nyi entropic criterion (\ref{eqBirulaWeak}) identifies entanglement while the Simon second-order PPT criterion does not. The uppermost and lowermost areas designate the regions in which the Simon PPT and R\'enyi criteria detect entanglement in state \eqref{eqNonGaussianState}. In the center hatched region neither test detects entanglement.}
\label{comparingnongaussian}
\end{figure} \subsection{NOON States} There is considerable interest in generating entangled ``NOON'' states of the form \begin{equation} \ket{\psi}_{NOON} = \frac{1}{\sqrt{2}} (\ket{N}_1\ket{0}_2 + \ket{0}_1\ket{N}_2), \end{equation} where $\ket{n}$ is an $n$-photon Fock state. NOON states are
particularly useful for quantum metrology \cite{mitchell04}. Here we consider detection of entanglement using continuous variable quadrature measurements. For NOON states the inequality (\ref{eqBirulaWeak}) does not detect entanglement for any value of $\alpha$ (tested for $N \leq 10$). However, we have investigated their entanglement detection with the stronger R\'{e}nyi criterion \eqref{eqStrongRenyi}. The results are shown in Figure \ref{comparingNOON}. We have studied the violation of inequality (\ref{eqStrongRenyi}) as a function of the parameters $\alpha_1$, $\alpha_2$, $\beta_1$, $\beta_2$. To simplify the calculations, we have constrained $\beta_1$ and $\beta_2$ as functions of $\alpha_1$ and $\alpha_2$, according to restriction (\ref{eqAlphaBeta12}) (see Figure \ref{comparingNOON}). The best violations were found for $\alpha_1=\alpha_2=2$. In all cases, we chose quadrature operators \eqref{eq:quads} with $\theta=0$. With this choice of parameters we were able to detect entanglement up to $N=6$, which is an improvement over the Shannon criterion (\ref{eqStrong}) \cite{walborn09}. Numerical results show that entanglement in the NOON states goes undetected by any second-order criterion (tested for $N \leq 10$).
\begin{figure}
\caption{\footnotesize (color online) Entanglement detection for NOON states with $N=1$ to $6$. The surfaces represent the regions where the strong R\'{e}nyi entropic criterion \eqref{eqStrongRenyi} detects entanglement as a function of $\alpha_1$ and $\alpha_2$. The criteria were tested for $\theta_j=0$. FPR designates the ``forbidden parameter region'', as determined by Eqs. \eqref{eqRelationsAlphas} and \eqref{eqRelationsBetas}.}
\label{comparingNOON}
\end{figure} \subsection{Dephased Cat State}
\begin{figure}
\caption{\footnotesize (color online) (a) Violation of the Shannon entanglement criterion, given by the difference of the left-hand side (lhs) and right-hand side (rhs) of (\ref{eqWeak}), for the dephased cat state (\ref{eqCatState}). (b) Violation of the R\'{e}nyi entanglement criterion ($\alpha$ very close to $1/2$), given by the difference of the lhs and rhs of (\ref{eqBirulaWeak}), for the dephased cat state (\ref{eqCatState}). (c) Comparison of the Shannon and R\'enyi criteria as a function of $\nu$ and $p$: the white region is detected by both the Shannon and R\'{e}nyi entropic criteria, the blue region is detected only by the R\'{e}nyi entropic criterion ($\alpha\longrightarrow 1/2$), and the hatched area remains undetected.}
\label{cat}
\end{figure}
Entangled Schr\"odinger cat states have been produced experimentally in quadrature variables of two single-mode fields using optical parametric amplification \cite{ourjoumtsev09}. Due to experimental imperfections, these states are mixed. Here we consider mixed states given by the dephased entangled cat states, \begin{eqnarray}
\rho=N(\nu,p) \{ |\nu, \nu \rangle \langle \nu, \nu |+|-\nu, -\nu \rangle \langle -\nu,-\nu|\nonumber\\
-(1-p)(|\nu, \nu \rangle \langle -\nu, -\nu |+|-\nu, -\nu \rangle \langle \nu, \nu |) \}, \label{eqCatState} \end{eqnarray}
where $N(\nu,p)$ is a normalization constant. The parameter $p$ characterizes the dephasing \cite{chuang00}, and $\nu$ is the complex amplitude of the coherent state $|\nu\rangle$. Ref. \cite{walborn09} showed that the Shannon criterion \eqref{eqWeak} identifies entanglement for a broad range of values of the parameters $p$ and $\nu$, which we reproduce in Figure \ref{cat}(a) for comparison. The R\'{e}nyi entropic criterion (\ref{eqBirulaWeak}) with $\alpha$ very close to $1/2$ extends entanglement detection, as we can see in Figure \ref{cat}(b). Figure \ref{cat}(c) compares these two results.
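The R\'enyi--Tsallis correspondence $H_\alpha = \ln[1+(1-\alpha)T_\alpha]/(1-\alpha)$ invoked in the Tsallis-entropy subsection holds as an algebraic identity for discrete distributions as well. The short Python check below (a sketch we add for illustration, with an arbitrary test distribution; it is not part of the original analysis) verifies the identity for a few orders $\alpha$.

```python
import math

def tsallis(probs, alpha):
    """Discrete Tsallis entropy T_alpha = (sum_k p_k^alpha - 1) / (1 - alpha)."""
    return (sum(p ** alpha for p in probs) - 1.0) / (1.0 - alpha)

def renyi(probs, alpha):
    """Discrete Renyi entropy H_alpha = ln(sum_k p_k^alpha) / (1 - alpha)."""
    return math.log(sum(p ** alpha for p in probs)) / (1.0 - alpha)

probs = [0.5, 0.25, 0.125, 0.125]  # arbitrary normalized test distribution
for alpha in (0.5, 2.0, 3.0):
    lhs = renyi(probs, alpha)
    rhs = math.log(1.0 + (1.0 - alpha) * tsallis(probs, alpha)) / (1.0 - alpha)
    assert abs(lhs - rhs) < 1e-12  # the two generalized entropies determine each other
```

Because the correspondence is exact, either entropy can be computed from measured bin probabilities and converted to the other when evaluating the witnesses above.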
\section{Conclusions} \label{sec:conclusions} We have presented a family of entanglement witnesses using generalized classical entropy functions applied to the marginal probability distributions $R_{\pm}$ and $S_{\pm}$ associated with the measurement of global canonical operators $r_{\pm}$ and $s_{\pm}$ in continuous variable systems. First, we employed the R\'enyi entropy (parameterized by $\alpha$) for continuous distributions to arrive at a set of inequalities (see Eq. \eqref{eqStrongRenyi}) that are satisfied by all pure bipartite separable states. Second, also using the R\'enyi entropy of continuous distributions, we introduced a set of inequalities in Eq. \eqref{eqBirulaWeak} that are satisfied by all bipartite separable states (pure or mixed). We have demonstrated that these criteria offer greater sensitivity in detecting entanglement. We illustrated this point with several examples where the R\'enyi entropic criteria identify entanglement while the Shannon entropic criteria \cite{walborn09} and second-order criteria \cite{simon00} do not. We also showed that the entropic criteria given in Eq. \eqref{eqBirulaWeak} are in fact PPT criteria, and gave a general recipe for obtaining new PPT criteria based on the marginal probability distributions $R_{\pm}$ and $S_{\pm}$ in continuous variable systems. \par The entanglement witnesses presented here should be very convenient in an experimental setting, as they involve a relatively small number of measurements. In particular, fixing the local rotations involved in the definition of the global operators $r_{\pm}$ and $s_{\pm}$, it is necessary to determine only the probability distributions $R_{\pm}$ and $S_{\pm}$. This can be done directly via measurement of $r_{\pm}$ and $s_{\pm}$, or from measurement of the joint probability distributions $R(r_1,r_2)$ and $S(s_1,s_2)$. 
To take into account the finite precision of the measurement apparatus, we extended our R\'enyi entropy criteria \eqref{eqBirulaWeak} to discrete distributions (see Eq. \eqref{eqBirulaWeakDisc}). For this case, we also developed an entropic entanglement criterion based on the Tsallis entropy.
In addition to their practical relevance, the improvement offered by the entropic entanglement criteria is interesting from a theoretical point of view, since there is an entire family of entropic inequalities parameterized by the order of the R\'enyi entropy (a continuous quantity) that could be explored. Moreover, these results encourage the use of other types of entropy functionals and/or uncertainty relations for entanglement characterization.
\begin{acknowledgments} We would like to thank M. O. Hor-Meyll for useful discussions, and acknowledge financial support from the Brazilian funding agencies CNPq and FAPERJ. This work was performed as part of the Brazilian Instituto Nacional de Ci\^{e}ncia e Tecnologia - Informa\c{c}\~{a}o Qu\^{a}ntica (INCT-IQ). \end{acknowledgments}
\begin{thebibliography}{59} \expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi \expandafter\ifx\csname bibnamefont\endcsname\relax
\def\bibnamefont#1{#1}\fi \expandafter\ifx\csname bibfnamefont\endcsname\relax
\def\bibfnamefont#1{#1}\fi \expandafter\ifx\csname citenamefont\endcsname\relax
\def\citenamefont#1{#1}\fi \expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi \providecommand{\bibinfo}[2]{#2} \providecommand{\eprint}[2][]{\url{#2}}
\bibitem[{\citenamefont{Nielsen and Chuang}(2000)}]{chuang00} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Nielsen}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Chuang}},
\emph{\bibinfo{title}{Quantum Computation and Quantum Information}}
(\bibinfo{publisher}{Cambridge}, \bibinfo{address}{Cambridge},
\bibinfo{year}{2000}).
\bibitem[{\citenamefont{G\"uhne and T\'oth}(2009)}]{guhne09} \bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{G\"uhne}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{T\'oth}},
\bibinfo{journal}{Phys. Rep.} \textbf{\bibinfo{volume}{474}},
\bibinfo{pages}{1} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Horodecki et~al.}(2009)\citenamefont{Horodecki,
Horodecki, Horodecki, and Horodecki}}]{horodecki09} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Horodecki}},
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Horodecki}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Horodecki}},
\bibnamefont{and}
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Horodecki}},
\bibinfo{journal}{Rev. Mod. Phys.} \textbf{\bibinfo{volume}{81}},
\bibinfo{pages}{865} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Adesso and Illuminati}(2007)}]{adesso07} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Adesso}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Illuminati}},
\bibinfo{journal}{J. Phys. A: Math Theor.} \textbf{\bibinfo{volume}{40}},
\bibinfo{pages}{7821} (\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Braunstein and van Loock}(2005)}]{braunstein05} \bibinfo{author}{\bibfnamefont{S.~L.} \bibnamefont{Braunstein}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{van
Loock}}, \bibinfo{journal}{Reviews of Modern Physics}
\textbf{\bibinfo{volume}{77}}, \bibinfo{eid}{513} (\bibinfo{year}{2005}).
\bibitem[{\citenamefont{Simon}(2000)}]{simon00} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Simon}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{84}},
\bibinfo{pages}{2726} (\bibinfo{year}{2000}).
\bibitem[{\citenamefont{Duan et~al.}(2000)\citenamefont{Duan, Giedke, Cirac,
and Zoller}}]{duan00} \bibinfo{author}{\bibfnamefont{L.-M.} \bibnamefont{Duan}},
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Giedke}},
\bibinfo{author}{\bibfnamefont{J.~I.} \bibnamefont{Cirac}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Zoller}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{84}},
\bibinfo{pages}{2722} (\bibinfo{year}{2000}).
\bibitem[{\citenamefont{Mancini et~al.}(2002)\citenamefont{Mancini,
Giovannetti, Vitali, and Tombesi}}]{mancini02} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Mancini}},
\bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Giovannetti}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Vitali}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Tombesi}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{88}},
\bibinfo{pages}{120401} (\bibinfo{year}{2002}).
\bibitem[{\citenamefont{Giovannetti et~al.}(2003)\citenamefont{Giovannetti,
Mancini, Vitali, and Tombesi}}]{giovannetti03} \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Giovannetti}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Mancini}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Vitali}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Tombesi}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{67}},
\bibinfo{pages}{022320} (\bibinfo{year}{2003}).
\bibitem[{\citenamefont{Hyllus and Eisert}(2006)}]{hyllus06} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Hyllus}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Eisert}},
\bibinfo{journal}{New J. Phys.} \textbf{\bibinfo{volume}{8}},
\bibinfo{pages}{51} (\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Fujikawa}(2009)}]{fujikawa09} \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Fujikawa}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{80}},
\bibinfo{pages}{012315} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Serafini}(2006)}]{serafini06} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Serafini}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{96}},
\bibinfo{pages}{110402} (\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Lloyd and Braunstein}(1999)}]{lloyd99} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Lloyd}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.~L.} \bibnamefont{Braunstein}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{82}},
\bibinfo{pages}{1784} (\bibinfo{year}{1999}).
\bibitem[{\citenamefont{Bartlett and Sanders}(2002)}]{bartlett02} \bibinfo{author}{\bibfnamefont{S.~D.} \bibnamefont{Bartlett}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{B.~C.} \bibnamefont{Sanders}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{89}},
\bibinfo{pages}{207903} (\bibinfo{year}{2002}).
\bibitem[{\citenamefont{Ohliger et~al.}(2010)\citenamefont{Ohliger, Kieling,
and Eisert}}]{ohliger10} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Ohliger}},
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Kieling}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Eisert}},
\bibinfo{journal}{arXiv:1004.0081} (\bibinfo{year}{2010}).
\bibitem[{\citenamefont{Dong et~al.}(2008)\citenamefont{Dong, Lassen, Heersink,
Marquardt, Filip, Leuchs, and Andersen}}]{dong08} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Dong}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Lassen}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Heersink}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Marquardt}},
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Filip}},
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Leuchs}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{U.~L.} \bibnamefont{Andersen}},
\bibinfo{journal}{Nature Physics} \textbf{\bibinfo{volume}{4}},
\bibinfo{pages}{919} (\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Hage et~al.}(2008)\citenamefont{Hage, Samblowski,
DiGuglielmo, Franzen, Fiur{\'a}sek, and Schnabel}}]{hage08} \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Hage}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Samblowski}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{DiGuglielmo}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Franzen}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Fiur{\'a}sek}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Schnabel}},
\bibinfo{journal}{Nature Physics} \textbf{\bibinfo{volume}{4}},
\bibinfo{pages}{915} (\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Agarwal and Biswas}(2005)}]{agarwal05} \bibinfo{author}{\bibfnamefont{G.~S.} \bibnamefont{Agarwal}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Biswas}},
\bibinfo{journal}{New J. Phys.} \textbf{\bibinfo{volume}{7}},
\bibinfo{pages}{211} (\bibinfo{year}{2005}).
\bibitem[{\citenamefont{Shchukin and Vogel}(2005{\natexlab{a}})}]{shchukin05} \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Shchukin}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Vogel}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{95}},
\bibinfo{eid}{230502} (\bibinfo{year}{2005}{\natexlab{a}}).
\bibitem[{\citenamefont{Hillery and Zubairy}(2006{\natexlab{a}})}]{hillery06a} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Hillery}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.~S.} \bibnamefont{Zubairy}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{96}},
\bibinfo{pages}{050503} (\bibinfo{year}{2006}{\natexlab{a}}).
\bibitem[{\citenamefont{Hillery and Zubairy}(2006{\natexlab{b}})}]{hillery06b} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Hillery}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.~S.} \bibnamefont{Zubairy}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{74}},
\bibinfo{pages}{032333} (\bibinfo{year}{2006}{\natexlab{b}}).
\bibitem[{\citenamefont{Walborn et~al.}(2009)\citenamefont{Walborn, Taketani,
Salles, Toscano, and de~Matos~Filho}}]{walborn09} \bibinfo{author}{\bibfnamefont{S.~P.} \bibnamefont{Walborn}},
\bibinfo{author}{\bibfnamefont{B.~G.} \bibnamefont{Taketani}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Salles}},
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Toscano}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{R.~L.} \bibnamefont{de~Matos~Filho}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{103}},
\bibinfo{pages}{160505} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Rod\'o et~al.}(2008)\citenamefont{Rod\'o, Adesso, and
Sanpera}}]{rodo08} \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Rod\'o}},
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Adesso}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Sanpera}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{100}},
\bibinfo{eid}{110505} (\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Miranowicz et~al.}(2009)\citenamefont{Miranowicz,
Piani, Horodecki, and Horodecki}}]{miranowicz09} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Miranowicz}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Piani}},
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Horodecki}},
\bibnamefont{and}
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Horodecki}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{80}},
\bibinfo{pages}{052303} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Hillery et~al.}(2009)\citenamefont{Hillery, Dung, and
Niset}}]{hillery09} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Hillery}},
\bibinfo{author}{\bibfnamefont{H.~T.} \bibnamefont{Dung}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Niset}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{80}},
\bibinfo{pages}{052335} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Sperling and Vogel}(2009{\natexlab{a}})}]{sperling09b} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Sperling}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Vogel}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{79}},
\bibinfo{pages}{022318} (\bibinfo{year}{2009}{\natexlab{a}}).
\bibitem[{\citenamefont{Sperling and Vogel}(2009{\natexlab{b}})}]{sperling09} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Sperling}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Vogel}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{79}},
\bibinfo{pages}{052313} (\bibinfo{year}{2009}{\natexlab{b}}).
\bibitem[{\citenamefont{Adesso}(2009)}]{adesso09} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Adesso}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{79}},
\bibinfo{pages}{022315} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Chen}(2007)}]{chen07} \bibinfo{author}{\bibfnamefont{X.-y.} \bibnamefont{Chen}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{76}},
\bibinfo{pages}{022309} (\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Shchukin and Vogel}(2005{\natexlab{b}})}]{shchukin05b} \bibinfo{author}{\bibfnamefont{E.~V.} \bibnamefont{Shchukin}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Vogel}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{72}},
\bibinfo{pages}{043808} (\bibinfo{year}{2005}{\natexlab{b}}).
\bibitem[{\citenamefont{Gomes et~al.}(2009{\natexlab{a}})\citenamefont{Gomes,
Salles, Toscano, Ribeiro, and Walborn}}]{gomes09b} \bibinfo{author}{\bibfnamefont{R.~M.} \bibnamefont{Gomes}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Salles}},
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Toscano}},
\bibinfo{author}{\bibfnamefont{P.~H.~S.} \bibnamefont{Ribeiro}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.~P.}
\bibnamefont{Walborn}}, \bibinfo{journal}{Proc. Nat. Acad. Sci.}
\textbf{\bibinfo{volume}{106}}, \bibinfo{pages}{21517}
(\bibinfo{year}{2009}{\natexlab{a}}).
\bibitem[{\citenamefont{Braunstein and Caves}(1988)}]{braunstein88} \bibinfo{author}{\bibfnamefont{S.~L.} \bibnamefont{Braunstein}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{C.~M.} \bibnamefont{Caves}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{61}},
\bibinfo{pages}{662} (\bibinfo{year}{1988}).
\bibitem[{\citenamefont{Giovannetti}(2004)}]{giovannetti04} \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Giovannetti}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{70}},
\bibinfo{pages}{012102} (\bibinfo{year}{2004}).
\bibitem[{\citenamefont{Moroder et~al.}(2008)\citenamefont{Moroder, G\"uhne,
and L\"utkenhaus}}]{moroder08} \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Moroder}},
\bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{G\"uhne}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{L\"utkenhaus}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{78}},
\bibinfo{pages}{032326} (\bibinfo{year}{2008}).
\bibitem[{\citenamefont{G\"uhne and L\"utkenhaus}(2006)}]{guhne06b} \bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{G\"uhne}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{L\"utkenhaus}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{96}},
\bibinfo{pages}{170502} (\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Marchiolli and Galetti}(2008)}]{marchiolli08} \bibinfo{author}{\bibfnamefont{M.~A.} \bibnamefont{Marchiolli}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Galetti}},
\bibinfo{journal}{Physica Scripta} \textbf{\bibinfo{volume}{78}},
\bibinfo{pages}{045007} (\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Barnett and Phoenix}(1989)}]{barnett89} \bibinfo{author}{\bibfnamefont{S.~M.} \bibnamefont{Barnett}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Phoenix}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{40}},
\bibinfo{pages}{2404} (\bibinfo{year}{1989}).
\bibitem[{\citenamefont{R.~Horodecki and Horodecki}(1996)}]{horodecki96b} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Horodecki}}
\bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Horodecki}},
\bibinfo{journal}{Phys. Lett. A} \textbf{\bibinfo{volume}{210}},
\bibinfo{pages}{377} (\bibinfo{year}{1996}).
\bibitem[{\citenamefont{Horodecki and Horodecki}(1996)}]{horodecki96c} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Horodecki}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Horodecki}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{54}},
\bibinfo{pages}{1838} (\bibinfo{year}{1996}).
\bibitem[{\citenamefont{Bovino et~al.}(2005)\citenamefont{Bovino, Castagnoli,
Ekert, Horodecki, Alves, and Sergienko}}]{bovino05} \bibinfo{author}{\bibfnamefont{F.~A.} \bibnamefont{Bovino}},
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Castagnoli}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Ekert}},
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Horodecki}},
\bibinfo{author}{\bibfnamefont{C.~M.} \bibnamefont{Alves}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{A.~V.} \bibnamefont{Sergienko}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{95}},
\bibinfo{pages}{240407} (\bibinfo{year}{2005}).
\bibitem[{\citenamefont{Cover and Thomas}(2006)}]{cover} \bibinfo{author}{\bibfnamefont{T.~M.} \bibnamefont{Cover}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{J.~A.} \bibnamefont{Thomas}}, \emph{\bibinfo{title}{Elements of
Information Theory}} (\bibinfo{publisher}{John Wiley and Sons},
\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Bialynicki-Birula and Mycielski}(1975)}]{bialynicki75} \bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Bialynicki-Birula}}
\bibnamefont{and}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Mycielski}},
\bibinfo{journal}{Commun. Math. Phys.} \textbf{\bibinfo{volume}{44}},
\bibinfo{pages}{129} (\bibinfo{year}{1975}).
\bibitem[{\citenamefont{R\'enyi}(1961)}]{renyi61} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{R\'enyi}}
(\bibinfo{publisher}{Univ. Calif. Press}, \bibinfo{year}{1961}),
vol.~\bibinfo{volume}{1} of \emph{\bibinfo{series}{Proceedings of the Fourth
Berkeley Symposium on Mathematical Statistics and Probability}}, pp.
\bibinfo{pages}{547--561}.
\bibitem[{\citenamefont{Barthe}(1998)}]{barthe98} \bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Barthe}},
\bibinfo{journal}{Geom. Funct. Anal.} \textbf{\bibinfo{volume}{8}},
\bibinfo{pages}{234} (\bibinfo{year}{1998}).
\bibitem[{\citenamefont{Bialynicki-Birula}(2006)}]{bialynicki06} \bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Bialynicki-Birula}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{74}},
\bibinfo{pages}{052101} (\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Peres}(1996)}]{peres96} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Peres}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{77}},
\bibinfo{pages}{1413} (\bibinfo{year}{1996}).
\bibitem[{\citenamefont{M.~Horodecki and Horodecki}(1996)}]{horodecki96} \bibinfo{author}{\bibfnamefont{P.~H.} \bibnamefont{M.~Horodecki}}
\bibnamefont{and}
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Horodecki}},
\bibinfo{journal}{Phys. Lett. A} \textbf{\bibinfo{volume}{223}},
\bibinfo{pages}{1} (\bibinfo{year}{1996}).
\bibitem[{\citenamefont{Nha and Zubairy}(2008)}]{nha08} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Nha}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.~S.} \bibnamefont{Zubairy}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{101}},
\bibinfo{pages}{130402} (\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Shannon and Weaver}(1949)}]{shannon} \bibinfo{author}{\bibfnamefont{C.~E.} \bibnamefont{Shannon}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Weaver}},
\emph{\bibinfo{title}{The Mathematical Theory of Communication}}
(\bibinfo{publisher}{University of Illinois Press}, \bibinfo{year}{1949}).
\bibitem[{\citenamefont{Costa et~al.}(2002)\citenamefont{Costa, III, and
Vignat}}]{costa02} \bibinfo{author}{\bibfnamefont{J.~A.} \bibnamefont{Costa}},
\bibinfo{author}{\bibfnamefont{A.~O.~H.} \bibnamefont{III}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Vignat}},
\bibinfo{journal}{IEEE International Symposium on Information Theory, ISIT
2002, Lausanne} p. \bibinfo{pages}{263} (\bibinfo{year}{2002}).
\bibitem[{\citenamefont{Vignat et~al.}(2006)\citenamefont{Vignat, III, and
Costa}}]{vignat06} \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Vignat}},
\bibinfo{author}{\bibfnamefont{A.~O.~H.} \bibnamefont{III}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.~A.} \bibnamefont{Costa}},
\bibinfo{journal}{IEEE International Symposium on Information Theory, ISIT
2006, Seattle} pp. \bibinfo{pages}{1822--1826} (\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Tsallis}(1988)}]{tsallis} \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Tsallis}}, \bibinfo{journal}{J.
Stat. Phys.} \textbf{\bibinfo{volume}{52}}, \bibinfo{pages}{479}
(\bibinfo{year}{1988}).
\bibitem[{\citenamefont{Rajagopal}(1995)}]{rajagopal95} \bibinfo{author}{\bibfnamefont{A.~K.} \bibnamefont{Rajagopal}},
\bibinfo{journal}{Phys. Lett. A} \textbf{\bibinfo{volume}{205}},
\bibinfo{pages}{32} (\bibinfo{year}{1995}).
\bibitem[{\citenamefont{Wilk and W\l{}odarczyk}(2009)}]{wilk08} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Wilk}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{Z.}~\bibnamefont{W\l{}odarczyk}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{79}},
\bibinfo{pages}{062108} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Gomes et~al.}(2009{\natexlab{b}})\citenamefont{Gomes,
Salles, Toscano, Ribeiro, and Walborn}}]{gomes09a} \bibinfo{author}{\bibfnamefont{R.~M.} \bibnamefont{Gomes}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Salles}},
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Toscano}},
\bibinfo{author}{\bibfnamefont{P.~H.~S.} \bibnamefont{Ribeiro}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.~P.}
\bibnamefont{Walborn}}, \bibinfo{journal}{Phys. Rev. Lett.}
\textbf{\bibinfo{volume}{103}}, \bibinfo{pages}{033602}
(\bibinfo{year}{2009}{\natexlab{b}}).
\bibitem[{\citenamefont{Walborn et~al.}(2003)\citenamefont{Walborn,
de~Oliveira, P\'adua, and Monken}}]{walborn03b} \bibinfo{author}{\bibfnamefont{S.~P.} \bibnamefont{Walborn}},
\bibinfo{author}{\bibfnamefont{A.~N.} \bibnamefont{de~Oliveira}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{P\'adua}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{C.~H.} \bibnamefont{Monken}},
\bibinfo{journal}{Europhys. Lett} \textbf{\bibinfo{volume}{62}},
\bibinfo{pages}{161} (\bibinfo{year}{2003}).
\bibitem[{\citenamefont{Nogueira et~al.}(2004)\citenamefont{Nogueira, Walborn,
P\'adua, and Monken}}]{nogueira04a} \bibinfo{author}{\bibfnamefont{W.~A.~T.} \bibnamefont{Nogueira}},
\bibinfo{author}{\bibfnamefont{S.~P.} \bibnamefont{Walborn}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{P\'adua}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{C.~H.} \bibnamefont{Monken}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{92}},
\bibinfo{pages}{043602} (\bibinfo{year}{2004}).
\bibitem[{\citenamefont{Mitchell et~al.}(2004)\citenamefont{Mitchell, Lundeen,
and Steinberg}}]{mitchell04} \bibinfo{author}{\bibfnamefont{M.~W.} \bibnamefont{Mitchell}},
\bibinfo{author}{\bibfnamefont{J.~S.} \bibnamefont{Lundeen}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.~M.}
\bibnamefont{Steinberg}}, \bibinfo{journal}{Nature}
\textbf{\bibinfo{volume}{429}}, \bibinfo{pages}{161} (\bibinfo{year}{2004}).
\bibitem[{\citenamefont{Ourjoumtsev et~al.}(2009)\citenamefont{Ourjoumtsev,
Ferreyrol, Tualle-Brouri, and Grangier}}]{ourjoumtsev09} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Ourjoumtsev}},
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Ferreyrol}},
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Tualle-Brouri}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Grangier}},
\bibinfo{journal}{Nature Physics} \textbf{\bibinfo{volume}{5}},
\bibinfo{pages}{189} (\bibinfo{year}{2009}).
\end{thebibliography}
\end{document} |
\begin{document}
\title[Dimensional Reduction] {Dimensional Reduction and the Long-Time Behavior of Ricci Flow}
\author{John Lott} \address{Department of Mathematics\\ University of Michigan\\ Ann Arbor, MI 48109-1043\\ USA} \email{lott@umich.edu}
\thanks{This work was supported by NSF grant DMS-0604829} \date{October 25, 2008}
\begin{abstract} If $g(t)$ is a three-dimensional Ricci flow solution, with sectional curvatures that are $O(t^{-1})$ and diameter that is $O(t^{\frac12})$, then the pullback Ricci flow solution on the universal cover approaches a homogeneous expanding soliton. \end{abstract}
\maketitle
\section{Introduction} \label{section1}
After Perelman's proof of Thurston's geometrization conjecture \cite{Perelman1,Perelman2}, using Hamilton's Ricci flow \cite{Hamilton (1982)}, there are many remaining questions about three-dimensional Ricci flow.
Since the Ricci flow is a nonlinear heat equation for the Riemannian metric, the intuition is that it should smooth out the metric and thereby give rise, in the long-time limit, to the locally homogeneous pieces in the geometric decomposition. This intuition is somewhat misleading because of, for example, the presence of singularities in the Ricci flow. Nevertheless, based partly on earlier work of Hamilton \cite{Hamilton (1999)}, Perelman showed that the hyperbolic pieces do asymptotically appear in the Ricci flow. Perelman's proof for the existence of the other geometric pieces is more indirect. Perelman showed that the nonhyperbolic part of the evolving manifold satisfies certain geometric conditions, from which one can show that it is a graph manifold \cite{BBBMP (2007),Kleiner-Lott2,Morgan-Tian,Perelman2,Shioya-Yamaguchi (2005)}. By earlier work of topologists, graph manifolds have a geometric decomposition.
It is an open question whether the Ricci flow directly performs the geometric decomposition of a three-manifold, as time evolves. In particular, suppose that the geometric decomposition of the three-manifold consists of a single geometric piece. If this piece has Thurston type $S^3$ or $S^1 \times S^2$ then its Ricci flow has a finite extinction time \cite{Colding-Minicozzi (2005),Colding-Minicozzi (2007),Perelman3}. For the other Thurston types, one can ask whether the large-time behavior of the Ricci flow solution will be that of a locally homogeneous Ricci flow, no matter what the initial metric may be. Hamilton \cite[Section 11]{Hamilton (1993)}, Hamilton-Isenberg \cite{Hamilton-Isenberg (1993)} and Knopf \cite{Knopf (2000)} showed that this is true for certain manifolds of ${\mathbb R}^3$ or $\operatorname{Sol}$-type if one assumes some extra symmetries on the initial metric. We are interested in whether one can show asymptotic homogeneity for a wider class of Ricci flow solutions.
To describe the results, let $g(\cdot)$ denote a Ricci-flow-with-surgery whose initial manifold is a closed orientable $3$-manifold. Let $M_t$ denote the time-$t$ manifold. (If $t$ is a surgery time then we take $M_t$ to be the postsurgery manifold.) From Perelman's work \cite{Perelman3}, there is some time $T_0$ so that for all $t \ge T_0$, each connected component $C$ of $M_t$ is $S^3$ or an aspherical $3$-manifold. As the geometrization conjecture holds, $C$ has a decomposition into geometric pieces of type $S^3$, ${\mathbb R}^3$, $H^3$, $\operatorname{Nil}$, $\operatorname{Sol}$, $H^2 \times {\mathbb R}$ and $\widetilde{SL_2({\mathbb R})}$; see Section \ref{section2}.
It is possible that the Ricci-flow-with-surgery involves an infinite number of surgeries. In the known examples, there is a finite number of surgeries. Furthermore, in the known examples, after all of the surgeries are done then the sectional curvatures uniformly decay in magnitude as $O(t^{-1})$, i.e. one has a type-III Ricci flow solution. In order to make progress, we will consider only Ricci-flows-with-surgery in which this is the case. Hence, we will consider a smooth Ricci flow $(M, g(\cdot))$, defined for $t \in (1, \infty)$ on a closed, connected orientable $3$-manifold $M$, with sectional curvatures that are uniformly $O(t^{-1})$.
If $M$ admits a locally homogeneous metric modeled on a given one of the eight Thurston geometries then we will say that $M$ has the corresponding Thurston type. Saying that $M$ has a certain Thurston type is a topological statement, i.e. we allow ourselves to consider Riemannian metrics on $M$ that are not locally homogeneous.
In order to analyze the large-time behavior of a Ricci flow, we use blowdown limits.
\begin{definition} \label{1.1} For $s \ge 1$, put $g_s(t) \: = \: \frac{1}{s} \: g(st)$. It is also a Ricci flow solution. Let $\widetilde{g}_s(t)$ be the lift of ${g}_s(t)$ to the universal cover $\widetilde{M}$. \end{definition}
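To see that $g_s$ is indeed a Ricci flow solution, note that the Ricci tensor is invariant under constant rescalings of the metric, i.e. $\operatorname{Ric}(c \, g) = \operatorname{Ric}(g)$ for any constant $c > 0$. Hence
\begin{equation*}
\frac{\partial}{\partial t} \, g_s(t) \: = \: \frac{\partial}{\partial t} \left( \frac{1}{s} \: g(st) \right) \: = \: \dot{g}(st) \: = \: - \: 2 \operatorname{Ric}(g(st)) \: = \: - \: 2 \operatorname{Ric}(g_s(t)).
\end{equation*}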
A time interval $[a,b]$ for $g_s$ corresponds to the time interval $[sa,sb]$ for $g$. We are interested in the behavior as $s \rightarrow \infty$ of $g_s(\cdot)$ on a specified time interval $[a,b]$, since this gives information about the large-time behavior of the initial Ricci flow solution $g(\cdot)$. If there is a limiting Ricci flow solution $\lim_{s \rightarrow \infty} g_s(\cdot)$ then one says that it is a blowdown limit of $g(\cdot)$.
For notation, if the Gromov-Hausdorff limit $\lim_{t \rightarrow \infty} \left( M, \frac{g(t)}{t} \right)$ exists and equals a compact metric space $X$ then we write $\lim_{t \rightarrow \infty} \left( M, \frac{g(t)}{t} \right) \stackrel{GH}{=} X$. If we write $\lim_{s \rightarrow \infty} \left( \widetilde{M},\widetilde{m},\widetilde{g}_s(\cdot) \right) = \left( {M}_\infty, m_\infty, {g}_\infty(\cdot) \right)$ then we mean that for any sequence $\{ s_j \}_{j=1}^\infty$ tending to infinity, there is a smooth pointed limit $\lim_{j \rightarrow \infty} \left( \widetilde{M}, \widetilde{m}, \widetilde{g}_{s_j}(\cdot) \right)$ of Ricci flow solutions which equals $\left( {M}_\infty, m_\infty, {g}_\infty(\cdot) \right)$. We recall that the notion of the limit in the statement $\lim_{j \rightarrow \infty} \left( \widetilde{M}, \widetilde{m}, \widetilde{g}_{s_j}(\cdot) \right) \: = \: \left( {M}_\infty, m_\infty, {g}_\infty(\cdot) \right)$ involves $j$-dependent pointed diffeomorphisms from domains in ${M}_\infty$ to domains in $\widetilde{M}$ \cite{Hamilton (1995)}.
\begin{theorem} \label{1.2} Let $(M, g(\cdot))$ be a smooth Ricci flow solution on a connected closed orientable $3$-manifold, defined for $t \in (1, \infty)$. Suppose that \\ 1. The sectional curvatures of $(M, g(t))$ are uniformly $O(t^{-1})$ and \\ 2. $\operatorname{diam}(M, g(t)) = O(t^{\frac12})$. \\ Then $M$ is irreducible, aspherical and its geometric decomposition contains a single geometric piece. \\ 1. If $M$ has Thurston type ${\mathbb R}^3$ then $\lim_{t \rightarrow \infty} \left( M, \frac{g(t)}{t} \right) \stackrel{GH}{=} \operatorname{pt}$. The limit $\lim_{s \rightarrow \infty} \left( \widetilde{M}, \widetilde{m},\widetilde{g}_s(\cdot) \right)$ exists and equals the flat expanding soliton $({\mathbb R}^3, g_{flat})$. \\ 2. If $M$ has Thurston type $\operatorname{Nil}$ then $\lim_{t \rightarrow \infty} \left( M, \frac{g(t)}{t} \right) \stackrel{GH}{=} \operatorname{pt}$. The limit $\lim_{s \rightarrow \infty} \left( \widetilde{M}, \widetilde{m},\widetilde{g}_s(\cdot) \right)$ exists and equals the expanding soliton $\left( {\mathbb R}^3, \frac{1}{3 t^{\frac13}} (dx + \frac12 y dz - \frac12 z dy)^2 + t^{\frac13} (dy^2 + dz^2) \right)$.\\ 3. If $M$ has Thurston type $\operatorname{Sol}$ then the Gromov-Hausdorff limit $\lim_{t \rightarrow \infty} \left( M, \frac{g(t)}{t} \right)$ is a circle or an interval.
The limit $\lim_{s \rightarrow \infty} \left( \widetilde{M}, \widetilde{m},\widetilde{g}_s(\cdot) \right)$ exists and equals the expanding soliton $\left( {\mathbb R}^3, e^{-2z} dx^2 + e^{2z} dy^2 + 4t dz^2 \right)$.\\ 4. If $M$ has Thurston type $H^2 \times {\mathbb R}$ then for any sequence $\{t_j\}_{j=1}^\infty$ tending to infinity, there is a subsequence (which we relabel as $\{t_j\}_{j=1}^\infty$) so that the Gromov-Hausdorff limit $\lim_{j \rightarrow \infty} \left( M, \frac{g(t_j)}{t_j} \right)$ exists and is a metric of constant curvature $- \: \frac12$ on a closed $2$-dimensional orbifold. The limit $\lim_{s \rightarrow \infty} \left( \widetilde{M}, \widetilde{m},\widetilde{g}_s(\cdot) \right)$ exists and equals the expanding soliton $(H^2 \times {\mathbb R}, 2t g_{hyp} + g_{{\mathbb R}})$. \\ 5. If $M$ has Thurston type $H^3$ then $\lim_{t \rightarrow \infty} \left( M, \frac{g(t)}{t} \right) \stackrel{GH}{=} \left( M, 4 g_{hyp} \right)$. The limit $\lim_{s \rightarrow \infty} \left( \widetilde{M}, \widetilde{m},\widetilde{g}_s(\cdot) \right)$ exists and equals the expanding soliton $(H^3, 4t g_{hyp})$. \\ 6. If $M$ has Thurston type $\widetilde{\operatorname{SL}_2({\mathbb R})}$ then there is some sequence $\{ s_j \}_{j=1}^\infty$ tending to infinity such that $\lim_{j \rightarrow \infty} \left( M, \frac{g(s_j)}{s_j} \right)$ is a metric of constant curvature $- \: \frac12$ on a closed $2$-dimensional orbifold and $\lim_{j \rightarrow \infty} \left( \widetilde{M}, \widetilde{m},\widetilde{g}_{s_j}(\cdot) \right)$ is the expanding soliton $(H^2 \times {\mathbb R}, 2t g_{hyp} + g_{{\mathbb R}})$. \end{theorem}
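As a consistency check on case 3 (this is a standard computation, not part of the statement), one can verify directly that the metric $g(t) \: = \: e^{-2z} dx^2 \: + \: e^{2z} dy^2 \: + \: 4t \, dz^2$ on ${\mathbb R}^3$ is an eternal Ricci flow solution. A computation in the coordinate frame gives $\operatorname{Ric}(g(t)) \: = \: - \: 2 \, dz^2$ for each $t$, so
\begin{equation*}
\frac{\partial}{\partial t} \, g(t) \: = \: 4 \, dz^2 \: = \: - \: 2 \operatorname{Ric}(g(t)).
\end{equation*}
The expanding soliton structure comes from the diffeomorphisms $\phi_s(x,y,z) \: = \: (s^{\frac12} x, s^{\frac12} y, z)$, which satisfy $\frac{1}{s} \: \phi_s^* \, g(st) \: = \: g(t)$.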
\begin{corollary} \label{1.3} In cases 1-5 of Theorem \ref{1.2}, as time becomes large the Ricci flow solution becomes increasingly locally homogeneous. \end{corollary}
The corollary follows from the fact that the expanding solitons in Theorem \ref{1.2} are all homogeneous. To state the corollary in a more precise way, we recall that a Riemannian manifold $(M,g)$ is locally homogeneous if and only if any function on $M$ that can be expressed as a polynomial in the covariant derivatives of the curvature tensor $\nabla_{i_1} \nabla_{i_2} \ldots \nabla_{i_r} R_{jklm}$ and the inverse metric tensor $g^{ij}$, by contracting indices, is actually constant on $M$ \cite{Prufer-Tricerri-Vanhecke (1996)}. Corollary \ref{1.3} means that in cases 1-5 of Theorem \ref{1.2}, any function on $M$ which is a polynomial in the covariant derivatives of the curvature tensor and the inverse metric tensor of the rescaled metric $\widehat{g}(t) = \frac{g(t)}{t}$ approaches a constant value as $t \rightarrow \infty$.
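Two simple examples of such polynomial scalar invariants, obtained with zero and zero covariant derivatives respectively, are the scalar curvature and the norm squared of the curvature tensor:
\begin{equation*}
R \: = \: g^{jl} \, g^{km} \, R_{jklm}, \qquad |\operatorname{Rm}|^2 \: = \: g^{ja} \, g^{kb} \, g^{lc} \, g^{md} \, R_{jklm} \, R_{abcd}.
\end{equation*}
On a locally homogeneous manifold each such quantity, along with all of its analogs involving covariant derivatives of the curvature, is constant.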
\begin{remark} \label{1.4} The diameter condition $\operatorname{diam}(M, g(t)) = O(t^{\frac12})$ implies (under our curvature assumption) that the geometric decomposition of $M$ contains a single geometric piece; see Proposition \ref{3.5}. We expect that if the diameter condition is not satisfied then the geometric decomposition of $M$ will contain more than one geometric piece. \end{remark}
\begin{remark} \label{1.5} Any locally homogeneous Ricci flow solution $(M, g(\cdot))$ on a closed $3$-manifold $M$, which exists for $t \in (1, \infty)$, does have sectional curvatures that are uniformly $O(t^{-1})$ and $\operatorname{diam}(M, g(t)) = O(t^{\frac12})$ \cite{Isenberg-Jackson (1995),Knopf-McLeod (2001)}. Hence a Ricci flow solution on $M$ that in any reasonable sense approaches a locally homogeneous solution, as time goes to infinity, will satisfy the assumptions of Theorem \ref{1.2}. In this way, Theorem \ref{1.2} is essentially an if and only if statement. \end{remark}
\begin{remark} In Case 6 of Theorem \ref{1.2} we only show that we have the desired limit for some sequence $\{s_j \}_{j=1}^\infty$ tending to infinity, not for any such sequence. The reason is a technical point about local stability; see Remark \ref{6.5}. \end{remark}
In \cite[Theorem 1.1]{Lott (2007)} we showed that the expanding soliton solutions listed in Theorem \ref{1.2} are universal attractors within the space of homogeneous Ricci flow solutions on Thurston geometries. In proving Theorem \ref{1.2}, we show that they are global attractors within the space of Ricci flow solutions that satisfy the given curvature and diameter assumptions, after passing to the universal cover.
Theorem \ref{1.2} describes the Gromov-Hausdorff limit of the rescaled Ricci flow solution on $M$ and the smooth pointed rescaling limit of the lifted Ricci flow solution on $\widetilde{M}$. In the proof we show there is a rescaling limit which is a Ricci flow solution on an object that simultaneously encodes both the Gromov-Hausdorff limit on $M$ and the smooth limit on $\widetilde{M}$. This rescaling limit can be considered to give a canonical geometry for $M$. A similar phenomenon occurs in the work of Song and Tian concerning collapsing in the K\"ahler-Ricci flow on elliptic fibrations \cite{Song-Tian (2007)}.
There are three main tools in the proof of Theorem \ref{1.2} : a compactness theorem, a monotonicity formula and a local stability result. The compactness theorem \cite[Theorem 5.12]{Lott (2007)} is an extension of Hamilton's compactness theorem for Ricci flow solutions \cite{Hamilton (1995)}. Hamilton's theorem allows one to take a convergent subsequence of a sequence of pointed Ricci flow solutions that have uniform curvature bounds on compact time intervals and a uniform lower bound on the injectivity radius at the basepoint. The rescalings of a Ricci flow solution on a manifold $M$, as considered in Theorem \ref{1.2}, may collapse, i.e. the Gromov-Hausdorff limit $X$ may have dimension less than three. This means that there is no uniform lower bound on the injectivity radius of the rescaled solution, and so there cannot be a limiting Ricci flow solution on a $3$-manifold. Instead, the limiting Ricci flow solution lives on a more general object called an \'etale groupoid. Roughly speaking, an \'etale groupoid combines the notions of manifold and discrete group into a single object. Its relevance for us comes from the Cheeger-Fukaya-Gromov theory of bounded curvature collapse \cite{Cheeger-Fukaya-Gromov (1992)}, which implies that a Riemannian manifold which collapses with bounded sectional curvature will asymptotically acquire extra symmetries. In Section \ref{section3} we give a brief overview of how collapsing interacts with Ricci flow.
Under the assumptions of Theorem \ref{1.2}, the compactness theorem of \cite{Lott (2007)} implies that if $\{s_j\}_{j=1}^\infty$ is a sequence tending to infinity then after passing to a subsequence, $\{ \left( M, g_{s_j}(\cdot) \right) \}_{j=1}^\infty$ converges to a Ricci flow solution $\overline{g}(\cdot)$ on a three-dimensional \'etale groupoid. It remains to understand the long-time behavior of $\overline{g}(\cdot)$. In our case, the relevant \'etale groupoids arise from locally free abelian group actions. In essence, we have to understand the long-time behavior of an invariant Ricci flow solution on the total space of a (twisted) abelian principal bundle over a compact space $B$. Such a Ricci flow solution $\overline{g}(\cdot)$ becomes a coupled system of evolution equations on the lower-dimensional space $B$. This is the dimensional reduction part of the title of this paper.
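To illustrate the dimensional reduction in the simplest untwisted case (this is only a sketch using the standard warped-product formulas for the Ricci tensor; the general equations of \cite{Lott (2007)} also involve the twisting and the connection of the bundle), consider an $S^1$-invariant metric $g \: = \: h \: + \: e^{2u} \, d\theta^2$ on $B \times S^1$, where the metric $h$ on $B$ and the function $u \in C^\infty(B)$ depend on $t$. The Ricci flow preserves this ansatz and reduces to the coupled system
\begin{align*}
\frac{\partial h}{\partial t} \: &= \: - \: 2 \operatorname{Ric}_h \: + \: 2 \, \nabla^2 u \: + \: 2 \, du \otimes du, \\
\frac{\partial u}{\partial t} \: &= \: \Delta_h u \: + \: |\nabla u|_h^2
\end{align*}
of evolution equations on the lower-dimensional space $B$.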
Our main tool to analyze the long-time behavior of such a Ricci flow is a modification of the Feldman-Ilmanen-Ni expanding entropy functional ${\mathcal W}_+$ \cite{Feldman-Ilmanen-Ni (2005)}, which in turn is a variation on Perelman's ${\mathcal W}$-functional \cite{Perelman1}. More generally, in Section \ref{section4} we describe versions of the ${\mathcal F}$, ${\mathcal W}$ and ${\mathcal W}_+$ functionals that are adapted for abelian actions. Using the modified ${\mathcal W}_+$ functional, we show that any blowdown limit of $\overline{g}(\cdot)$ satisfies the harmonic-Einstein equations of \cite{Lott (2007)}. As we are in dimension three, we can solve the harmonic-Einstein equations to find the homogeneous expanding soliton solutions of Theorem \ref{1.2}.
By these techniques, we show that there is some sequence $\{s_j\}_{j=1}^\infty$ tending to infinity so that $\{ \left( M, g_{s_j}(\cdot) \right) \}_{j=1}^\infty$ converges in an appropriate sense to a locally homogeneous expanding soliton solution. In order to get convergence for all sequences $\{s_j\}_{j=1}^\infty$ tending to infinity, we use the local stability of the locally homogeneous expanding solitons, along with some further arguments. The local stability is due to Dan Knopf \cite{Knopf}. An important point is that we only need the local stability of the locally homogeneous expanding soliton within the space of Ricci flow solutions with the same abelian symmetry. Because of this, the local stability issue reduces to an elliptic-type analysis on the compact quotient space $B$ where one has compact resolvents, etc. For the $\operatorname{Nil}$ and $\operatorname{Sol}$-expanders, the local stability in a somewhat different sense was considered in \cite{Guenther-Isenberg-Knopf (2006)}.
The outline of this paper is as follows. In Section \ref{section2} we make some general remarks about Ricci flow and geometrization. In Section \ref{section3} we give an overview of some of the needed results from \cite{Lott (2007)}. In Section \ref{section4}, which may be of independent interest, we analyze Ricci flow solutions with a locally free abelian group action. In Section \ref{section5} we give the classification of the \'etale groupoids that arise. In Section \ref{section6} we prove Theorem \ref{1.2}. Further descriptions are given at the beginnings of the sections.
I thank Xiaodong Cao, Dan Knopf and Junfang Li for discussions on the topics of this paper. I am especially grateful to Dan for telling me of his local stability results \cite{Knopf}. Part of this research was performed while attending the MSRI 2006-2007 program on Geometric Evolution Equations. I thank MSRI and the UC-Berkeley Mathematics Department for their hospitality, along with the organizers of the MSRI program for inviting me.
\section{Geometrization Conjecture and Ricci Flow} \label{section2}
In this section we describe what one might expect for the long-time behavior of the Ricci flow on a compact $3$-manifold $M$, in terms of the geometric decomposition of $M$. Background information on the geometrization conjecture is in \cite{Scott (1983)}.
Let $M$ be a connected closed orientable $3$-manifold. The Kneser-Milnor theorem says that $M$ has a connected sum decomposition $M = M_1 \# M_2 \# \ldots \# M_N$ into so-called prime factors, unique up to permutation. Thurston's geometrization conjecture says that if $M$ is prime then there is a (possibly empty) minimal collection of disjoint incompressible embedded $2$-tori $\{T_i\}_{i=1}^I$ in $M$, unique up to isotopy, so that each connected component of $M - \bigcup_{i=1}^I T_i$ admits a complete locally homogeneous metric of one of the following types : \\ 1. A compact quotient of $S^3$, $S^2 \times {\mathbb R}$, ${\mathbb R}^3$, $\operatorname{Nil}$, $\operatorname{Sol}$, $H^3$, $H^2 \times {\mathbb R}$ or $\widetilde{\operatorname{SL}_2({\mathbb R})}$. \\ 2. A noncompact finite-volume quotient of $H^3$ or $H^2 \times {\mathbb R}$. \\ 3. ${\mathbb R} \times_{{\mathbb Z}_2} T^2$, where the generator of ${\mathbb Z}_2$ acts by $x \rightarrow -x$ on ${\mathbb R}$ and by the involution on $T^2$ for which $T^2/{\mathbb Z}_2$ is the Klein bottle $K$.
\begin{remark} \label{2.1} A finite-volume quotient of $S^3$, $S^2 \times {\mathbb R}$, ${\mathbb R}^3$, $\operatorname{Nil}$ or $\operatorname{Sol}$ is necessarily a compact quotient. Noncompact finite-volume quotients of $\widetilde{\operatorname{SL}_2({\mathbb R})}$ are not on the list, as they are diffeomorphic to noncompact finite-volume quotients of $H^2 \times {\mathbb R}$. \end{remark} \begin{remark} \label{2.2} If we were to cut along both 2-tori and Klein bottles then we could eliminate the ${\mathbb R} \times_{{\mathbb Z}_2} T^2$ case, which is the total space of a twisted ${\mathbb R}$-bundle over $K$. However, as we are dealing with orientable manifolds, it is more natural to only cut along 2-tori. \end{remark}
We now discuss graph manifolds. A reference is \cite[Chapter 2.4]{Matveev (2003)}. We recall that a compact orientable $3$-manifold $M$ with (possibly empty) boundary is a {\em graph manifold} if there is a collection of disjoint embedded $2$-tori $\{T_j\}_{j=1}^J$ so that if we take the metric completion of $M - \bigcup_{j=1}^J T_j$ (with respect to some Riemannian metric on $M$) then each connected component is the total space of a circle bundle over a compact surface. Clearly $\partial M$, if nonempty, is a disjoint union of $2$-tori. The result of gluing two graph manifolds along boundary components is again a graph manifold (provided that it is orientable). In addition, the connected sum of two graph manifolds is a graph manifold. In terms of the Thurston decomposition, a closed orientable prime $3$-manifold $M$ is a graph manifold if and only if it has no hyperbolic pieces.
We now summarize how Perelman proved the geometrization conjecture using Ricci flow. If $g(0)$ is an initial Riemannian metric on $M$ then Perelman showed that there is a Ricci-flow-with-surgery $(M_t, g(t))$ defined for all $t \in [0, \infty)$ (although $M_t$ may become the empty set for large $t$). A singularity in the flow is handled by letting some connected components go extinct or by performing surgery. If $t$ is a surgery time then we let $M_t$ denote the postsurgery manifold $M_t^+$. Going from a postsurgery manifold $M_t^+$ to the presurgery manifold $M_t^-$ amounts topologically to performing connected sums on some components of $M_t^+$, possibly along with a finite number of $S^1 \times S^2$'s and ${\mathbb R} P^3$'s, and restoring any factors that went extinct at time $t$. From Kneser's theorem, there is some $T_1 > 0$ so that for a singularity time $t > T_1$, $M_t^+$ differs from $M_t^-$ by the addition or subtraction of some $S^3$ factors. That is, after time $T_1$, all surgeries are topologically trivial.
Perelman showed that any connected component which goes extinct during the Ricci-flow-with-surgery is diffeomorphic to $S^1 \times S^2$, $S^1 \times_{{\mathbb Z}_2} S^2 = {\mathbb R} P^3 \# {\mathbb R} P^3$ or $S^3/\Gamma$, where $\Gamma$ is a finite subgroup of $\operatorname{SO}(4)$ that acts freely on $S^3$. He also showed that for large $t$, any connected component $C$ of $M_t$ has a $3$-dimensional submanifold $G$ with (possibly empty) boundary so that $G$ is a graph manifold, $\partial G$ consists of incompressible tori in $C$ and $C - G$ admits a complete finite-volume hyperbolic metric. Here $G$ is allowed to be $\emptyset$ or $C$. Using earlier results from $3$-manifold topology, this is enough to prove the geometrization conjecture.
It is not known whether there is a finite number of surgeries, but after some time all remaining surgeries will occur in the graph manifold part. For example, if the original manifold $M$ admits a hyperbolic metric then there is a finite number of surgeries, since for large time there is no graph manifold part. We note that one can never exclude singularities for topological reasons, as the initial metric could always contain a pinched $2$-sphere.
In \cite{Perelman3}, Perelman showed that for large $t$, any connected component of $M_t$ is aspherical or $S^3$. Thus the relevant Thurston geometries are $S^3$, ${\mathbb R}^3$, $\operatorname{Nil}$, $\operatorname{Sol}$, $H^3$, $H^2 \times {\mathbb R}$ and $\widetilde{\operatorname{SL}_2({\mathbb R})}$.
Put $\widehat{g}(t) = \frac{g(t)}{t}$. Let us assume that there is a finite number of surgeries, and consider the manifold $M$ to be a connected component of the remaining manifold after all of the surgeries are performed. Based on explicit calculations for the Ricci flow on a locally homogeneous $3$-manifold, the most optimistic possibility for the Gromov-Hausdorff behavior of the long-time Ricci flow is given in the following table. Here $X$ is the Gromov-Hausdorff limit $\lim_{t \rightarrow \infty} (M, \widehat{g}(t))$, which we assume to exist. The ``Thurston type'' denotes the possible geometric types in the Thurston decomposition of $M$, but we do not assume that the metrics in the Ricci flow are locally homogeneous. \begin{equation} \begin{array}{ccc} \underline{\mbox{X}} & & \underline{\mbox{Thurston type}} \\
& & \notag \\ \operatorname{pt}. & & {\mathbb R}^3 \text{ or } \operatorname{Nil} \notag \\ S^1 \mbox{ or } I & & \operatorname{Sol} \notag \\ \mbox{closed 2-orbifold with } K = - \: 1/2 & & H^2 \times {\mathbb R} \text{ or } \widetilde{\operatorname{SL}_2({\mathbb R})} \notag \\ \mbox{closed 3-manifold with } K = - \: 1/4 & & H^3 \notag \\ \mbox{noncompact} & & H^3, H^2 \times {\mathbb R}, {\mathbb R}^3 \end{array} \end{equation}
If $X$ is noncompact then the possible geometric pieces in the geometric decomposition of $M$ should be noncompact finite-volume quotients of $H^3$, noncompact finite-volume quotients of $H^2 \times {\mathbb R}$ and copies of ${\mathbb R} \times_{{\mathbb Z}_2} T^2$. (The final ${\mathbb R}^3$-term in the table refers to the latter possibility.) When discussing Gromov-Hausdorff limits in this case, one would have to choose a basepoint $m \in M$ and take a pointed Gromov-Hausdorff limit $(X, x) \stackrel{GH}{=} \lim_{t \rightarrow \infty} (M, m,\widehat{g}(t))$, whose value would depend on $m$. One would expect to get possible Gromov-Hausdorff limits of the form \\ 1. $H^3/\Gamma$, where $\Gamma$ is a torsion-free noncocompact lattice in $\operatorname{PSL}(2, {\mathbb C})$. \\ 2. $H^2/\Gamma$, where $\Gamma$ is a noncocompact lattice in $\operatorname{PSL}(2, {\mathbb R})$. \\ 3. ${\mathbb R}$. \\ 4. $[0, \infty)$.
\begin{example} \label{2.3} Suppose that $M = N \cup_{T^2} \overline{N}$ is the double of the truncation $N$ of a singly-cusped finite-volume hyperbolic $3$-manifold $Y$, where the metric on $N$ is perturbed to make it a product near $\partial N$. If $m$ is in $N - T^2$ then one would expect that $\lim_{t \rightarrow \infty} (M,m, \widehat{g}(t)) \stackrel{GH}{=} Y$, with a metric of constant curvature $- \: \frac14$, while if $m \in T^2$ then one would expect that $\lim_{t \rightarrow \infty} (M,m, \widehat{g}(t)) \stackrel{GH}{=} {\mathbb R}$. \end{example}
\begin{example} \label{2.4} Put $M^\prime = N \cup_{T^2} (I \times_{{\mathbb Z}_2} T^2)$, where $I \times_{{\mathbb Z}_2} T^2$ is the (orientable) total space of a twisted interval bundle over the Klein bottle $K$. Then $M^\prime$ is double covered by $N \cup_{T^2} N$, where the gluing is done by an orientation-reversing isometry of $T^2$. If $m \in M^\prime -K$ then one would expect that $\lim_{t \rightarrow \infty} (M^\prime, m, \widehat{g}(t)) \stackrel{GH}{=} Y$, while if $m \in K$ then one would expect that $\lim_{t \rightarrow \infty} (M^\prime, m, \widehat{g}(t)) \stackrel{GH}{=} {\mathbb R}/{\mathbb Z}_2 = [0, \infty)$.
This example shows why, from the point of view of Ricci flow, it is natural to include ${\mathbb R} \times_{{\mathbb Z}_2} T^2$ as part of the geometric decomposition; see Remark \ref{2.2}. (In this sense it would also be natural to include ${\mathbb R} \times T^2$ as a possible piece, but such a piece would be topologically redundant.) \end{example}
In the collapsing case, i.e. when $\dim(X) < 3$, the Gromov-Hausdorff limit $X$ contains limited information about the evolution of the $3$-dimensional geometry under the Ricci flow. For $t$ large, any component of the time-$t$ manifold is aspherical or $S^3$. Because of this, one natural way to get more information about the $3$-dimensional geometry is to look at the evolving geometry on the universal cover. A special case is when $M$ is locally homogeneous. In \cite[Section 3]{Lott (2007)} the Ricci flow was considered on a simply-connected homogeneous $3$-manifold $G/H$, where $G$ is a connected unimodular Lie group and $H$ is a compact subgroup of $G$. The Ricci flow $(G/H, g(\cdot))$ was assumed to be $G$-invariant and to exist for all positive time. In each case, it was shown that there are pointed diffeomorphisms $\{\phi_s\}_{s \in (0, \infty)}$ of $G/H$ so that the blowdown limit $g_\infty(t) = \lim_{s \rightarrow \infty} \frac{1}{s} \: \phi_s^* g(st)$ exists and is one of the expanding solitons listed in Theorem \ref{1.2}.
\begin{remark} \label{2.5} As an aside, instead of looking at the rescaled Ricci flow metric $\widehat{g}(t) \: = \: \frac{g(t)}{t}$, one could also consider the normalized Ricci flow solution, with constant volume. The normalized Ricci flow solution is useful in some settings but in our case we get more uniform results, in terms of the Thurston type, by looking at $\widehat{g}$. For example, let $N$ be a truncated singly-cusped finite-volume hyperbolic $3$-manifold, as in Example \ref{2.3}. Let $\Sigma_1$ and $\Sigma_2$ be compact connected surfaces with one boundary component and negative Euler characteristic. Put $M_1 = N \cup_{T^2} (S^1 \times \Sigma_1)$ and $M_2 = (S^1 \times \Sigma_1) \cup_{T^2} (S^1 \times \Sigma_2)$, where the gluing of $M_2$ is such that it is not just a product $S^1 \times (\Sigma_1 \cup_{S^1} \Sigma_2)$. Under the unnormalized Ricci flow, one expects that $\operatorname{vol}(M_1, g(t)) \sim \operatorname{const.} t^{3/2}$, due to the hyperbolic piece, whereas $\operatorname{vol}(M_2, g(t)) \sim \operatorname{const.} t$. Then the normalized Ricci flow on $M_1$ should collapse its $S^1 \times (\Sigma_1 - \partial \Sigma_1)$ piece, while the normalized Ricci flow on $M_2$ should have a three-dimensional pointed limit on its $S^1 \times (\Sigma_1 - \partial \Sigma_1)$ piece. In contrast, the pointed Gromov-Hausdorff limit $\lim_{t \rightarrow \infty} (M_i, m_i, \widehat{g}(t))$, with an appropriate choice of basepoint $m_i$ in the $S^1 \times \Sigma_1$ piece, should be $\Sigma_1 - \partial \Sigma_1$ with a complete finite-volume metric of constant curvature $- \: \frac12$, independent of $i \in \{1,2\}$. \end{remark}
\section{Collapsing and Ricci Flow} \label{section3}
In this section we give an overview, aimed at geometers, of the use of groupoids in collapsing theory. More details are in \cite[Section 5]{Lott (2007)} and references therein. We also show that under the hypotheses of Theorem \ref{1.2}, the manifold has a single geometric piece.
Suppose that $(M^n, g(\cdot))$ is a type-III Ricci flow solution that exists for $t \in (1, \infty)$, i.e. there is some $K > 0$ so that $\parallel \operatorname{Riem}(g(t)) \parallel_\infty \: \le \: \frac{K}{t}$ for all $t > 1$. Then the rescaled metrics $\widehat{g}(t) = \frac{g(t)}{t}$ have uniformly bounded sectional curvature. Even if the manifolds $(M, \widehat{g}(t))$ are collapsing in the Gromov-Hausdorff sense, we would still like to take a limit as $t \rightarrow \infty$, in some way, of the $n$-dimensional geometry. To do so, it is natural to apply the Cheeger-Fukaya-Gromov theory of bounded curvature collapse to the Ricci flow.
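Explicitly, the uniform bound for $\widehat{g}$ follows from the scaling behavior of curvature: multiplying a metric by a constant $c^{-1}$ multiplies sectional curvatures by $c$, so
\begin{equation*}
\parallel \operatorname{Riem}(\widehat{g}(t)) \parallel_\infty \: = \: t \: \parallel \operatorname{Riem}(g(t)) \parallel_\infty \: \le \: K
\end{equation*}
for all $t > 1$.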
A main technique in the Cheeger-Fukaya-Gromov theory is to work $O(n)$-equivariantly on the orthonormal frame bundle $FM$. This is not very convenient when dealing with Ricci flow, as the induced flow on $FM$ is complicated. For this reason, we use an older approach to collapsing with bounded sectional curvature, as described in Gromov's book \cite{Gromov (1999)}, that deals directly with the manifold $M$.
Let $M$ be a complete $n$-dimensional Riemannian manifold with sectional curvatures bounded in absolute value by a positive number $K$. Given $r \in \left(0, \frac{1}{\sqrt{K}} \right)$ and $m \in M$, we can consider the Riemannian metric $\exp_m^* g$ on $B(0, r) \subset T_mM$.
Given a sequence of pointed complete $n$-dimensional Riemannian manifolds $\{(M_i, m_i)\}_{i=1}^\infty$ with sectional curvatures bounded in absolute value by $K$, there is a convergent subsequence of the pointed geometries $B(0, r) \subset T_{m_i}M_i$, whose limit is a $C^{1,\alpha}$-metric on an $n$-dimensional $r$-ball $(B_\infty, m_\infty)$. If one has uniform bounds of the form $\parallel \nabla^k \operatorname{Riem} (M_i) \parallel_\infty \: \le \: C(k)$ then one can assume that the limit is a $C^\infty$-metric and the convergence is $C^\infty$.
Define an equivalence relation $\sim_i$ on $B \left( 0, \frac{r}{3} \right) \subset T_{m_i}M_i$ by saying that $y \sim_i z$ if $\exp_{m_i} (y) = \exp_{m_i} (z)$. Then $B \left( m_i, \frac{r}{3} \right) \subset M_i$ equals $(B \left( 0, \frac{r}{3} \right) \subset T_{m_i}M_i)/\sim_i$. The equivalence relation $\sim_i$ is the equivalence relation of a pseudogroup $\Gamma_i$ of local isometries on $B(0,r) \subset T_{m_i} M_i$, also called the fundamental pseudogroup $\pi_1(M_i, m_i; r)$. One can take a convergent subsequence of the pseudogroups, in an appropriate sense, to obtain a limit pseudogroup $\Gamma_\infty$ of local isometries of $B_\infty$, which is a local Lie group. Furthermore, a neighborhood of the identity of $\Gamma_\infty$ is isomorphic to a neighborhood of the identity of a nilpotent Lie group. In particular, after passing to the subsequences, the pointed Gromov-Hausdorff limit of $\{B \left( m_i, \frac{r}{3} \right) \subset M_i\}_{i=1}^\infty$ is $\left( B \left( m_\infty, \frac{r}{3} \right) \subset B_\infty \right)/\Gamma_\infty$.
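For a simple illustration, take $M_i = {\mathbb R}/\epsilon_i {\mathbb Z}$ with $\epsilon_i \rightarrow 0$, and any basepoints $m_i$. Then $B(0,r) \subset T_{m_i} M_i$ is the interval $(-r,r)$ with the flat metric, and the pseudogroup $\Gamma_i = \pi_1(M_i, m_i; r)$ is generated by the translation $y \mapsto y + \epsilon_i$, defined where it makes sense. After passing to a subsequence, the limit pseudogroup $\Gamma_\infty$ consists of all translations of $(-r,r)$ with small displacement, i.e. a local action of the $1$-dimensional (abelian, hence nilpotent) Lie group ${\mathbb R}$. The quotient $\left( B \left( m_\infty, \frac{r}{3} \right) \right)/\Gamma_\infty$ is a point, consistent with the fact that the circles $M_i$ converge to a point in the Gromov-Hausdorff topology.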
In this way one constructs a limiting $\frac{r}{3}$-ball. It has the drawback that it only describes the (lifted) geometry near the basepoints $m_i$. As one started with complete Riemannian manifolds, one would like to have a limiting object which in some sense is also complete. For example, suppose that $(M_i, m_i) = (M, m)$ for all $i$. The above process would produce the limiting ball $B \left( 0, \frac{r}{3} \right) \subset T_mM$, with $\Gamma_\infty = \pi_1(M, m; r)$. However, the limiting object should be all of $(M, m)$.
One way to construct a global limiting object would be to move the basepoints to other points inside of $B \left( 0, \frac{r}{3} \right) \subset T_{m_i}M_i$, construct new limiting balls, repeat the process and glue all of the ensuing balls together in a coherent way. In order to formalize such a limiting object, the notion of a ``Riemannian megafold'' was introduced in \cite{Petrunin-Tuschmann (1999)}. This essentially consists of a pseudogroup of local isometries of a Riemannian manifold. Another formalization was given in \cite{Lott (2007)}, in which the limiting object is a Riemannian groupoid. A Riemannian groupoid is an \'etale groupoid with a Riemannian metric on its space of units, for which the local diffeomorphisms coming from groupoid elements are local isometries. Riemannian groupoids have been extensively discussed in the literature on foliation theory, as they describe the transverse structure of Riemannian foliations. For details we refer to \cite[Section 5]{Lott (2007)} and references therein.
(We take this opportunity to make some corrections to \cite{Lott (2007)}. The $\infty$ on \cite[p. 629, line 42]{Lott (2007)} should read $(0, \infty)$. The $[0,1]$ on \cite[p. 658, line 15]{Lott (2007)} should read $[0,1)$.)
The upshot is that if $\{(M_i, m_i)\}_{i=1}^\infty$ is a sequence of pointed complete $n$-dimensional Riemannian manifolds, and if for every $k \in {\mathbb Z}^{\ge 0}$ and $R \in {\mathbb R}^+$ there is some $C(k,R) < \infty$ so that for all $i$ we have
$| \nabla^k \operatorname{Riem}(M_i) | \le C(k, R)$ on $B(m_i, R) \subset M_i$, then a subsequence converges smoothly to a pointed complete closed effective Hausdorff $n$-dimensional Riemannian groupoid $({\frak G}_\infty, O_{x_\infty})$ \cite[Proposition 5.9]{Lott (2007)}. This statement is essentially a reformulation of results of Cheeger, Fukaya and Gromov.
Let ${\frak G}$ be a complete closed effective Hausdorff Riemannian groupoid. It carries a certain locally constant sheaf $\underline{\frak g}$ of finite dimensional Lie algebras on its space of units ${\frak G}^{(0)}$. These Lie algebras act as germs of Killing vector fields on ${\frak G}^{(0)}$. Elements of ${\frak G}$ that are sufficiently close to the space of units ${\frak G}^{(0)}$, in the $1$-jet topology, appear in the image of the exponentials of small local sections of $\underline{\frak g}$. In our case, the Lie algebras are nilpotent and there is no point $x \in {\frak G}^{(0)}$ at which all of the corresponding Killing vector fields vanish simultaneously, unless $\underline{\frak g} = 0$. We will say that ${\frak G}$ is {\em locally free} if the isotropy groups ${\frak G}^x_x$ are finite.
\begin{remark} \label{3.1} The locally constant sheaf $\underline{\frak g}$ is analogous to a {\em pure} $\operatorname{Nil}$-structure in the sense of \cite{Cheeger-Fukaya-Gromov (1992)}; see \cite{Rong (2007)} for a recent survey. It may seem surprising that we always get pure structures on our limiting spaces, since a manifold that collapses with bounded curvature generally carries a {\em mixed} $\operatorname{Nil}$-structure if the diameter is not bounded during the collapse. The point is that we are considering a completely collapsed limit. In general, given $\epsilon, K, D > 0$, there is a number $\delta = \delta(n,\epsilon,K,D)$ so that if $\parallel \operatorname{Riem}(M) \parallel_\infty \le K$ then the fundamental pseudogroup $\pi_1(FM, p; \delta)$ (which is represented by loops at $p$ with length less than $\delta$) can be continuously transported to any point $q \in B(p, D) \subset FM$, and the result maps into $\pi_1(FM, q; \epsilon)$ \cite[Lemma 7.2]{Fukaya (1993)}. The fact that one generally cannot transport the short loops arbitrarily far, while keeping them short, is responsible for the appearance of mixed $\operatorname{Nil}$-structures. As we are considering a completely collapsed limit, we can effectively move $\underline{\frak g}_p$ to $\underline{\frak g}_q$ for an arbitrary value of $D$.
The work of Cheeger-Fukaya-Gromov describes the local structure of a Riemannian manifold with bounded sectional curvature that is highly collapsed but not completely collapsed. The technique to do this, for example in \cite{Cheeger-Gromov (1990)}, is to rescale the highly-collapsed manifold at a point $p$ in order to make the rescaled injectivity radius equal to $1$ and the sectional curvatures very small. One then argues that the local geometry around $p$ is modeled on a complete flat $n$-dimensional manifold other than ${\mathbb R}^n$, giving the local $F$-structure. When dealing with Ricci flow this rescaling is problematic, as it does not mesh well with the flow. For this reason, we only deal with completely collapsed limits. \end{remark}
The notion of smooth pointed convergence of Riemannian groupoids is given in \cite{Lott (2007)}, which extends these collapsing considerations to the Ricci flow. (Related Ricci flow limits on a single ball in a tangent space were considered in \cite{Glickenstein (2003)}.) In \cite{Lott (2007)} the Ricci flow on an \'etale groupoid was considered. This consists of a Ricci flow $g(t)$ on the space of units ${\frak G}^{(0)}$, in the usual sense, so that for each $t$ the local diffeomorphisms (arising from elements of ${\frak G}$) act by isometries. One has the following compactness theorem.
\begin{theorem} \label{3.2} \cite[Theorem 5.12]{Lott (2007)} Let $\{(M_i, p_i, g_i(\cdot))\}_{i=1}^\infty$ be a sequence of Ricci flow solutions on pointed $n$-dimensional manifolds $(M_i, p_i)$. We assume that there are numbers $-\infty \: \le A \: < \: 0$ and $0 \: < \:\Omega \: \le \: \infty$ so that \\ 1. Each Ricci flow solution $(M_i, p_i, g_i(\cdot))$ is defined on the time interval $(A, \Omega)$. \\ 2. For each $t \in (A,\Omega)$, $g_i(t)$ is a complete Riemannian metric on $M_i$. \\ 3. For each compact interval $I \subset (A, \Omega)$
there is some $K_{I} \: < \: \infty$ so that $|\operatorname{Riem}(g_i)(x, t)| \: \le \: K_{I}$ for all $x \in M_i$ and $t \in I$.
Then after passing to a subsequence, the Ricci flow solutions $g_i (\cdot)$ converge smoothly to a Ricci flow solution $g_\infty(\cdot)$ on a pointed $n$-dimensional \'etale groupoid $\left( {\frak G}_\infty, O_{x_\infty} \right)$, defined again for $t \in (A, \Omega)$. \end{theorem}
This theorem is an analog of Hamilton's compactness theorem \cite{Hamilton (1995)}, except without the assumption of a uniform positive lower bound on the injectivity radius at $p_i \in (M_i, g_i(0))$. In Hamilton's theorem one obtains a limiting Ricci flow on a manifold, which is a special type of \'etale groupoid. The proof of \cite[Theorem 5.12]{Lott (2007)} is essentially the same as the proof of Hamilton's compactness theorem, when transplanted to the groupoid setting.
\begin{remark} \label{3.3} If $\{\operatorname{diam}(M_i, g_i(0))\}_{i=1}^\infty$ is uniformly bounded above then $({\frak G}_\infty, g_\infty(0))$ has finite diameter and we do not have to talk about basepoints. \end{remark}
An immediate consequence of Theorem \ref{3.2} is the following.
\begin{corollary} \label{3.4} \cite[Corollary 5.15]{Lott (2007)} Given $K > 0$, the space of pointed Ricci flow solutions on $n$-dimensional manifolds, with $\sup_{t \in (1, \infty)} \: t \: \parallel \operatorname{Riem}(g(t)) \parallel_\infty \: \le \: K$, is relatively compact among Ricci flows on pointed $n$-dimensional \'etale groupoids, defined for $t \in (1, \infty)$. \end{corollary}
The next proposition will be used in later sections.
\begin{proposition} \label{3.5} Let $(M, g(\cdot))$ be a Ricci flow solution on a closed orientable $3$-manifold that is defined for all $t \in [0, \infty)$. Suppose that \\ 1. The sectional curvatures of $(M, g(t))$ are uniformly $O(t^{-1})$ and \\ 2. $\operatorname{diam}(M, g(t)) = O(t^{\frac12})$.
Then $M$ is irreducible, aspherical and its geometric decomposition contains a single geometric piece. \end{proposition} \begin{proof} As mentioned in Section \ref{section2}, since the Ricci flow exists for all $t \in [0, \infty)$ it follows that $M$ is aspherical. The validity of the Poincar\'e Conjecture then implies that $M$ is irreducible \cite[Theorem 2]{Milnor (1962)}.
Put $\widehat{g}(t) = \frac{g(t)}{t}$. From the evolution equation for the scalar curvature $R$ and the maximum principle applied to $R \: + \: \frac{3}{2t}$, it follows that $\operatorname{vol}(M, \widehat{g}(t))$ is nonincreasing in $t$; see, for example, \cite[(1.7)]{Feldman-Ilmanen-Ni (2005)}. Suppose that $\lim_{t \rightarrow \infty} \operatorname{vol}(M, \widehat{g}(t)) > 0$. Then $(M, \widehat{g}(t))$ is noncollapsing. Recall Definition \ref{1.1}. If $\{s_j \}_{j=1}^\infty$ is a sequence tending to infinity then Hamilton's compactness theorem \cite{Hamilton (1995)} implies that after passing to a subsequence, there is a limiting three-dimensional Ricci flow solution $(M_\infty, g_\infty(\cdot)) = \lim_{j \rightarrow \infty} \left( M, g_{s_j}(\cdot) \right)$. From the diameter assumption, $M_\infty$ is diffeomorphic to $M$. Using monotonic quantities, one can show that $(M_\infty, g_\infty(t))$ has constant sectional curvature $- \: \frac{1}{4t}$; see \cite[Section 1]{Feldman-Ilmanen-Ni (2005)} and references therein. Thus $M$ has an $H^3$-structure.
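In more detail, the monotonicity of $\operatorname{vol}(M, \widehat{g}(t))$ can be seen as follows. Under the Ricci flow the scalar curvature satisfies $\partial_t R = \triangle R + 2 \: |\operatorname{Ric}|^2 \ge \triangle R + \frac23 \: R^2$, and since $\phi(t) = - \: \frac{3}{2t}$ solves the comparison ODE $\frac{d\phi}{dt} = \frac23 \: \phi^2$, the maximum principle gives $R(g(t)) \ge - \: \frac{3}{2t}$. Hence
\begin{equation*}
\frac{d}{dt} \operatorname{vol}(M, g(t)) \: = \: - \int_M R \: dV \: \le \: \frac{3}{2t} \: \operatorname{vol}(M, g(t)),
\end{equation*}
which says exactly that $\operatorname{vol}(M, \widehat{g}(t)) = t^{- \: \frac32} \: \operatorname{vol}(M, g(t))$ is nonincreasing in $t$.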
Now suppose that $\lim_{t \rightarrow \infty} \operatorname{vol}(M, \widehat{g}(t)) = 0$. Then $(M, \widehat{g}(t))$ collapses with bounded sectional curvature and bounded diameter. There will be a sequence $t_i \rightarrow \infty$ such that the Gromov-Hausdorff limit $\lim_{i \rightarrow \infty} (M, \widehat{g}(t_i))$ exists and equals some compact metric space $X$ of dimension less than three. In what follows we use some results about bounded curvature collapsing from \cite{Rong (2007)} and references therein.
If $\dim(X) = 0$ then $M$ is an almost flat manifold and so has an ${\mathbb R}^3$ or $\operatorname{Nil}$-structure \cite{Gromov (1978)}.
If $\dim(X) = 2$ then $X$ is a closed orbifold and $M$ is the total space of an orbifold circle bundle over $X$ \cite[Proposition 11.5]{Fukaya (1990)}, from which it follows that $M$ has a geometric structure.
Finally, suppose that $\dim(X) = 1$. First, $X$ is $S^1$ or an interval. If $X = S^1$ then $M$ is the total space of a torus bundle over $S^1$ and hence carries a geometric structure. If $X$ is an interval $[0,L]$ then there is a Gromov-Hausdorff approximation $\pi \: : \: M \rightarrow X$ with $\pi^{-1}(0,L) = (0,L) \times T^2$. Now $X$ is locally the quotient of $M$ by a fixed-point free $T^2$-action \cite{Cheeger-Gromov (1986)}. If the action is locally free then $[0,L]$ is an orbifold. As the orbifold $[0,L]$ is double covered by $S^1$, the manifold $M$ is double covered by a $T^2$-bundle over $S^1$. Hence in this case, $M$ has a geometric structure \cite{Meeks-Scott (1986)}.
Suppose that the $T^2$-action is not locally free, say on $\pi^{-1}[0, \delta)$, with $\delta$ small. From the slice theorem, a neighborhood of $\pi^{-1}(0)$ is equivariantly diffeomorphic to $T^2 \times_H {\mathbb R}^N$, where $H$ is the isotropy group. As the $T^2$-action has no fixed points, $H$ must be a virtual circle group. However, since $M$ is aspherical, the map $\pi_1(T^2) \rightarrow \pi_1(M)$ must be injective \cite[Remark 0.9]{Cheeger-Rong (1995)}. This is a contradiction. Similarly, the $T^2$-action must be locally free on $\pi^{-1}(L - \delta, L]$. \end{proof}
\begin{remark} \label{3.6} In our case, one can see directly that there is a contradiction if $H$ is a virtual circle group. Suppose so. Then $\pi^{-1}([0, \delta])$ (or $\pi^{-1}([L- \delta,L])$) is diffeomorphic to $S^1 \times D^2$. If the $T^2$-action fails to be locally free on both $\pi^{-1}([0, \delta])$ and $\pi^{-1}([L- \delta,L])$ then $M$ is the union of two solid tori and so is diffeomorphic to $S^3$, $S^1 \times S^2$ or a lens space. If it fails to be locally free on exactly one of $\pi^{-1}([0, \delta])$ and $\pi^{-1}([L- \delta,L])$ then a double cover of $M$ is diffeomorphic to $S^3$, $S^1 \times S^2$ or a lens space. In either case, $M$ fails to be aspherical. \end{remark}
\begin{remark} \label{3.7} By the argument of the proof of Proposition \ref{3.5}, we can say the following about aspherical $3$-manifolds that collapse
with bounded curvature and bounded diameter. If $M$ carries an $H^3$-structure then it cannot collapse. If $M$ carries an $H^2 \times {\mathbb R}$ or $\widetilde{\operatorname{SL}_2({\mathbb R})}$-structure then it can only collapse to a two-dimensional orbifold of negative Euler characteristic. If $M$ carries a $\operatorname{Sol}$-structure then it can only collapse to $S^1$ or an interval. However, if $M$ carries an ${\mathbb R}^3$ or $\operatorname{Nil}$-structure then {\it a priori} it could collapse to a two-dimensional orbifold with vanishing Euler characteristic, a circle, an interval or a point. We will show that under the Ricci flow, with our curvature and diameter assumptions it can only collapse to a point. \end{remark}
\begin{remark} Some results about three-dimensional type-IIb Ricci flow solutions, i.e. Ricci flow solutions defined on $[0, \infty)$ with $\limsup_{t \rightarrow \infty} t \: \parallel \operatorname{Riem}(g(t)) \parallel_\infty \: = \: \infty$, were obtained in \cite{Chow-Glickenstein-Lu (2006)}. Although phrased differently, the collapsing results in \cite{Chow-Glickenstein-Lu (2006)} can be considered to be results about Ricci flow solutions with nonnegative sectional curvature on three-dimensional \'etale groupoids. \end{remark}
\section{Dimensional Reduction} \label{section4}
In this section we consider a Ricci flow $(M, \overline{g}(\cdot))$ which is invariant under local actions of a connected abelian Lie group on $M$. We first define the notion of a twisted principal bundle and write out the Ricci flow equation for an invariant metric $\overline{g}$ on the total space $M$. The Ricci flow equation becomes a coupled system of equations on the base $B$ of the twisted principal bundle. We construct modified ${\mathcal F}$, ${\mathcal W}$ and ${\mathcal W}_+$ functionals for $\overline{g}(\cdot)$ and show that they are monotonic. We use ${\mathcal W}_+$ to show that any blowdown limit of $\overline{g}(\cdot)$ satisfies the harmonic-Einstein equations of \cite{Lott (2007)}.
Related functionals were considered independently by Bernhard List \cite{List (2006)} and Jeff Streets \cite{Streets (2007)}. (I thank Gerhard Huisken and Gang Tian for these references.) In \cite{List (2006)} the modified ${\mathcal F}$-functional is considered in the special case when $N = 1$ and $A^i_\alpha = 0$. The motivation comes from the static Einstein equation. In \cite{Streets (2007)}, modified ${\mathcal F}$ and ${\mathcal W}$-functionals are considered for a certain invariant flow on the total space of a principal bundle, with the fiber geometry being fixed under the flow.
\subsection{Twisted principal bundles} \label{subsection4.1}
Let ${\mathcal G}$ be a Lie group, with Lie algebra ${\frak g}$. Let $B$ be a connected $n$-dimensional smooth manifold. Let ${\mathcal E}$ be a local system on $B$ of Lie groups isomorphic to ${\mathcal G}$. Fixing a basepoint $b_0 \in B$ and an isomorphism ${\mathcal E}_{b_0} \cong {\mathcal G}$ of the stalk over $b_0$, the local system is specified by a homomorphism $\rho \: : \: \pi_1(B,b_0) \rightarrow \operatorname{Aut}({\mathcal G})$. Equivalently, we have a ${\mathcal G}$-bundle $E = {\mathcal G} \times_\rho \widetilde{B}$ over $B$, with a flat connection, which gives the \'etale space of the locally constant sheaf ${\mathcal E}$. Put $e = {\frak g} \times_{\rho} \widetilde{B}$, a flat ${\frak g}$-vector bundle on $B$. We will write $\Lambda^{max} e = \Lambda^{max} {\frak g} \times_{\rho} \widetilde{B}$ for the corresponding flat real line bundle on $B$ of fiberwise volume forms, and
$|\Lambda^{max} e| = |\Lambda^{max} {\frak g}| \times_{\rho} \widetilde{B}$ for the flat ${\mathbb R}^{\ge 0}$-bundle of fiberwise densities.
Hereafter we assume that the density bundle $|\Lambda^{max}e|$ is a flat product bundle ${\mathbb R}^{\ge 0} \times B$. (Some of the subsequent results do not need this assumption, but for simplicity we will assume throughout that it holds.)
\begin{example} \label{4.1} If ${\mathcal G} = {\mathbb R}^N$ then $\operatorname{Aut}({\mathcal G}) = \operatorname{GL}(N, {\mathbb R})$ and $E=e$ is a flat ${\mathbb R}^N$-bundle over $B$.
The assumption on $|\Lambda^{max}e|$ means that the holonomy of $e$ lies in $\det^{-1}(\pm 1)$. If ${\mathcal G} = T^N$ then $\operatorname{Aut}({\mathcal G}) = \operatorname{GL}(N, {\mathbb Z})$, $E$ is a flat
$T^N$-bundle over $B$ and $e$ is a flat ${\mathbb R}^N$-bundle over $B$. In this case the assumption on $|\Lambda^{max}e|$ holds automatically. \end{example}
Let $\pi \: : \: M \rightarrow B$ be a fiber bundle with fiber ${\mathcal G}$. We write $E_b$ for the fiber of $E$ over $b \in B$ and $M_b$ for the fiber of $M$ over $b \in B$. Consider the fiber product $E \times_B M = \bigcup_{b \in B} E_b \times M_b$. We assume that there is a smooth map $E \times_B M \rightarrow M$ so that over a point $b \in B$, the map $E_b \times M_b \rightarrow M_b$ gives a free transitive action of ${\mathcal G} \cong E_b$ on $M_b$. The action must be consistent with the flat connection on $E$ in the sense that if $U \subset B$ is such that
$E \Big|_U \cong U \times {\mathcal G}$ is a local trivialization of the flat ${\mathcal G}$-bundle $E$ then $\pi^{-1}(U)$ has a free ${\mathcal G}$-action, and so is the total space of a principal ${\mathcal G}$-bundle over $U$. In this way, $M$ can be considered to be a twisted principal ${\mathcal G}$-bundle over $B$, with the twisting coming from the flat ${\mathcal G}$-bundle $E$. There is a natural isomorphism between the vertical tangent bundle $T^{vert}M = \operatorname{Ker}(d\pi)$ and $\pi^* e$.
An isomorphism of two twisted principal ${\mathcal G}$-bundles $\pi \: : \: M \rightarrow B$ and $\pi^\prime \: : \: M^\prime \rightarrow B^\prime$ is given by a diffeomorphism $\eta \: : \: B \rightarrow B^\prime$, an isomorphism $\hat{\phi} \: : \: E \rightarrow E^\prime$ of flat ${\mathcal G}$-bundles that covers $\eta$, and a diffeomorphism $\phi \: : \: M \rightarrow M^\prime$ that covers $\eta$ with the property that for all $m \in M$ and $x \in E_{\pi(m)}$, we have $\phi(x \cdot m) \: = \: \widehat{\phi}(x) \cdot \phi(m)$.
It makes sense to talk about a connection $A \in \Omega^1(M; \pi^* e)$ on a twisted principal ${\mathcal G}$-bundle $M$. The restriction of $A$ to $\pi^{-1}(U)$ is a ${\frak g}$-valued connection in the usual sense.
We assume that $M$ has a Riemannian metric $\overline{g}$ with a local free isometric
${\mathcal G}$-action. This means that if $E \Big|_U \cong U \times {\mathcal G}$ is a local trivialization of $E$ as above then the action of ${\mathcal G}$ on $\pi^{-1}(U)$ is isometric.
Hereafter we assume that ${\mathcal G}$ is a connected $N$-dimensional abelian Lie group.
Suppose that $U$ is also small enough so that $U$ is a coordinate chart for $B$ with local parametrization $\{x^\alpha\}_{\alpha = 1}^n \rightarrow \rho(x^\alpha) \in U$. Take a section $s \: : \: U \rightarrow \pi^{-1}(U)$. Choosing a basis $\{e_i\}_{i=1}^N$ of ${\frak g}$, we obtain coordinates on $\pi^{-1}(U)$ by $(x^\alpha, x^i) \rightarrow \exp \left( \sum_{i=1}^N x^i e_i \right) \cdot s(\rho(x^\alpha))$. In terms of these coordinates we can write \begin{equation} \label{4.2} \overline{g} \: = \: \sum_{i,j=1}^N G_{ij} \: (dx^i + A^i) (dx^j + A^j) \: + \: \sum_{\alpha, \beta = 1}^n g_{\alpha \beta} \: dx^\alpha dx^\beta. \end{equation} Here $G_{ij}$ is the local expression of a Euclidean inner product on $e$, $\sum_{\alpha, \beta = 1}^n g_{\alpha \beta} \: dx^\alpha dx^\beta$ is the local expression of a Riemannian metric $g_B$ on $B$ and $A^i = \sum_{\alpha} A^i_\alpha dx^\alpha$ are the components of $s^* A$. A change of section $s$ changes $A^i$ by an exact form. The curvatures $F^i = dA^i$ form an element of $\Omega^2(B; e)$.
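To verify the statement about a change of section, suppose that $s$ is replaced by $s^\prime$, where $s^\prime(b) = \exp \left( \sum_{i=1}^N f^i(b) e_i \right) \cdot s(b)$ for some smooth functions $f^i$ on $U$. The fiber coordinates with respect to $s^\prime$ are $x^{\prime i} = x^i - f^i$, so $dx^i = dx^{\prime i} + df^i$. As the metric (\ref{4.2}) is independent of the choice of section, we must have $dx^i + A^i = dx^{\prime i} + A^{\prime i}$, i.e. $A^{\prime i} = A^i + df^i$. In particular, $F^{\prime i} = dA^{\prime i} = dA^i = F^i$ is well defined.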
If $M$ and $M^\prime$ are two twisted principal ${\mathcal G}$-bundles then an isomorphism $\phi \: : \: M \rightarrow M^\prime$ can be written in local coordinates as \begin{equation} \label{4.3} \phi(y^\gamma, y^k) \: = \: (x^\alpha (y^\gamma), \sum_k T^i_{\: \: k} y^k \: + \: f^i(y^\gamma)), \end{equation} where the $T^i_{\: \: k}$'s are constants. It covers a diffeomorphism $\eta \: : \: B \rightarrow B^\prime$. The isomorphism $\widehat{\phi} \: : \: E \rightarrow E^\prime$ of flat ${\mathcal G}$-bundles is represented locally by the functions $T^i_{\: \: k}$.
A locally ${\mathcal G}$-invariant Ricci flow is a $1$-parameter family of such Riemannian metrics $(M, \overline{g}(\cdot))$ that satisfies the Ricci flow equation. We will consider a basepoint for such a solution to be a point $p \in B$.
Let $\{(M_i, p_i, \overline{g}_i(\cdot))\}_{i=1}^\infty$ be a sequence of locally ${\mathcal G}$-invariant Ricci flow solutions defined for $t \in (1, \infty)$. We say that $\lim_{i \rightarrow \infty} (M_i, p_i, \overline{g}_i(\cdot)) \: = \: (M_\infty, p_\infty, \overline{g}_\infty(\cdot))$ if there are \\ 1. A sequence of open subsets $\{U_j\}_{j=1}^\infty$ of $B_\infty$, containing $p_\infty$, so that any compact subset of $B_\infty$ eventually lies in all $U_j$, and \\ 2. Open subsets $V_{i,j} \subset B_i$ containing $p_i$ and isomorphisms $\phi_{i,j} \: : \: \pi_\infty^{-1}(U_j) \rightarrow \pi_i^{-1}(V_{i,j})$ sending $\pi_\infty^{-1}(p_\infty)$ to $\pi_i^{-1}(p_i)$ so that \\ 3. For all $j$, $\lim_{i \rightarrow \infty} \phi_{i,j}^* \: \overline{g}_i(\cdot) \: = \: \overline{g}_\infty(\cdot)$ smoothly on $\pi_\infty^{-1}(U_j) \times [1 + j^{-1}, 1+j]$.
If $B_\infty$ is compact then we can remove the reference to basepoints.
\subsection{Ricci flow on twisted principal bundles} \label{subsection4.2}
In what follows, we use the Einstein summation convention freely. Let $(x^\alpha, x^i)$ be local coordinates on $\pi^{-1}(U)$ as in Subsection \ref{subsection4.1}. Writing $A^i \: = \: \sum_{\alpha=1}^n A^i_\alpha \: dx^\alpha$, put $F^i_{\alpha \beta} = \partial_\alpha A^i_\beta - \partial_\beta A^i_\alpha$. We also write \begin{equation} \label{4.4} G_{ij;\alpha \beta} \: = \: G_{ij,\alpha \beta} \: - \: \Gamma^{\sigma}_{\: \: \alpha \beta} \: G_{ij, \sigma}, \end{equation} where $\{\Gamma^{\sigma}_{\: \: \alpha \beta}\}$ are the Christoffel symbols for the metric $g_{\alpha \beta}$ on $B$.
Given $b \in U$, it is convenient to choose the section $s$ so that $A^i(b) = 0$. Then the curvature tensor $\overline{R}_{IJKL}$ of $M$ is given in terms of the curvature tensor $R_{\alpha \beta \gamma \delta}$ of $B$, the $2$-forms $F^i_{\alpha \beta}$ and the metrics $G_{ij}$ by \begin{align} \label{4.5} \overline{R}_{ijkl} \: & = \: - \: \frac14 \: g^{\alpha \beta} \: G_{ik,\alpha} \: G_{jl,\beta} \: + \: \frac14 \: g^{\alpha \beta} \: G_{il,\alpha} \: G_{jk, \beta} \\ \overline{R}_{ijk\alpha} \: & = \: \frac14 \: g^{\beta \gamma} \: G_{jm} \: G_{ik,\beta} \: F^m_{\alpha \gamma} \: - \: \frac14 \: g^{\beta \gamma} \: G_{im} \: G_{jk,\beta} \: F^m_{\alpha \gamma} \notag \\ \overline{R}_{ij \alpha \beta} \: & = \: - \: \frac14 \: G^{mk} \: G_{im,\alpha} \: G_{kj,\beta} \: + \: \frac14 \: G^{mk} \: G_{im,\beta} \: G_{kj,\alpha} \: - \: \frac14 \: g^{\gamma \delta} \: G_{im} \: G_{jk} \: F^m_{\alpha \gamma} \: F^k_{\beta \delta} \: + \: \frac14 \: g^{\gamma \delta} \: G_{im} \: G_{jk} \: F^m_{\beta \gamma} \: F^k_{\alpha \delta} \notag \\ \overline{R}_{i\alpha j \beta} \: & = \: - \: \frac12 \: G_{ij;\alpha \beta} \: + \: \frac14 \: G^{kl} \: G_{ik, \beta} \: G_{jl, \alpha} \: + \: \frac14 \: g^{\gamma \delta} \: G_{ik} \: G_{jl} \: F^k_{\alpha \gamma} \: F^l_{\beta \delta} \notag \\ \overline{R}_{i \alpha \beta \gamma} \: & = \: \frac12 \: G_{ij} \: F^j_{\beta \gamma; \alpha} \: + \: \frac12 \: G_{ij,\alpha} \: F^j_{\beta \gamma} \: + \: \frac14 \: G_{ij, \beta} \: F^j_{\alpha \gamma} \: - \: \frac14 \: G_{ij, \gamma} \: F^j_{\alpha \beta} \notag \\ \overline{R}_{\alpha \beta \gamma \delta} \: & = \: {R}_{\alpha \beta \gamma \delta} \: - \: \frac12 \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} - \: \frac14 \: G_{ij} \: F^i_{\alpha \gamma} \: F^j_{\beta \delta} + \: \frac14 \: G_{ij} \: F^i_{\alpha \delta} \: F^j_{\beta \gamma}. 
\notag \end{align} The Ricci tensor is given by \begin{align} \label{4.6} \overline{R}_{ij} \: & = \: - \: \frac12 \: g^{\alpha \beta} \: G_{ij; \alpha \beta} \: - \: \frac14 \: g^{\alpha \beta} \: G^{kl} \: G_{kl, \alpha} \: G_{ij, \beta} \: + \: \frac12 \: g^{\alpha \beta} \: G^{kl} \: G_{ik, \alpha} \: G_{lj, \beta} \: + \: \frac14 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ik} \: G_{jl} \: F^k_{\alpha \beta} \: F^l_{\gamma \delta} \\ \overline{R}_{i \alpha} \: & = \: \frac12 \: g^{\gamma \delta} \: G_{ik} \: F^k_{\alpha \gamma; \delta} \: + \: \frac12 \: g^{\gamma \delta} \: G_{ik, \gamma} \: F^k_{\alpha \delta} \: + \: \frac14 \: g^{\gamma \delta} \: G_{im} \: G^{kl} \: G_{kl, \gamma} \: F^m_{\alpha \delta} \notag \\ \overline{R}_{\alpha \beta} \: & = \: R_{\alpha \beta} \: - \: \frac12 \: G^{ij} \: G_{ij; \alpha \beta} \: + \: \frac14 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \: - \: \frac12 \: g^{\gamma \delta} \: \: G_{ij} \: F^i_{\alpha \gamma} \: F^j_{\beta \delta}. \notag \end{align} The scalar curvature is \begin{equation} \label{4.7} \overline{R} \: = \: R \: - \: g^{\alpha \beta} G^{ij} \: G_{ij; \alpha \beta} \: + \: \frac34 \: g^{\alpha \beta} \: G^{ij} \: G_{jk, \alpha} \: G^{kl} \: G_{li, \beta} \: - \: \frac14 \: g^{\alpha \beta} \: G^{ij} \: G_{ij, \alpha} \: G^{kl} \: G_{kl, \beta} \: - \: \frac14 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta}. \end{equation}
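As a consistency check on (\ref{4.6}), consider the case $N = 1$ with $A^1 = 0$ and $G_{11} = e^{2u}$ for a function $u$ on $B$, so that $\overline{g}$ is locally a warped product $g_B + e^{2u} \: (dx^1)^2$. Then (\ref{4.6}) reduces to
\begin{equation*}
\overline{R}_{11} \: = \: - \: e^{2u} \left( \triangle u + |\nabla u|^2 \right), \qquad \overline{R}_{\alpha \beta} \: = \: R_{\alpha \beta} \: - \: u_{; \alpha \beta} \: - \: u_{,\alpha} \: u_{,\beta} \: = \: R_{\alpha \beta} \: - \: e^{-u} \left( e^u \right)_{; \alpha \beta},
\end{equation*}
which is the classical formula for the Ricci tensor of a warped product with one-dimensional fibers.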
Consider a $1$-parameter family of such Riemannian metrics $\overline{g}(\cdot)$ on $M$. Writing $G_{ij}(t)$, $A^i_\alpha(t)$ and $g_{\alpha \beta}(t)$ as functions of $t$, the Ricci flow equation becomes \begin{align} \label{4.8} & \frac{d}{dt} \left( G_{ij} (dx^i + A^i)(dx^j + A^j) \: + \: g_{\alpha \beta} dx^\alpha dx^\beta \right) \: = \\ & -2 \overline{R}_{ij} \: (dx^i + A^i)(dx^j + A^j) \: - \: 4 \: \overline{R}_{i \alpha} \: (dx^i + A^i) dx^\alpha \: - \: 2 \: \overline{R}_{\alpha \beta} dx^\alpha dx^\beta. \notag \end{align} Equivalently, \begin{align} \label{4.9} \frac{\partial G_{ij}}{\partial t} \: & = \: g^{\alpha \beta} \: G_{ij; \alpha \beta} \: + \: \frac12 \: g^{\alpha \beta} \: G^{kl} \: G_{kl, \alpha} \: G_{ij, \beta} \: - \: g^{\alpha \beta} \: G^{kl} \: G_{ik, \alpha} \: G_{lj, \beta} \: - \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ik} \: G_{jl} \: F^k_{\alpha \beta} \: F^l_{\gamma \delta} \\ \notag \frac{\partial A^i_{\alpha}}{\partial t} \: & = \: - \: g^{\gamma \delta} \: F^i_{\alpha \gamma; \delta} \: - \: g^{\gamma \delta} \: G^{ij} \: G_{jk, \gamma} \: F^k_{\alpha \delta} \: - \: \frac12 \: g^{\gamma \delta} \: G^{kl} \: G_{kl, \gamma} \: F^i_{\alpha \delta} \notag \\ \frac{\partial g_{\alpha \beta}}{\partial t} \: & = \: -2 R_{\alpha \beta} \: + \: G^{ij} \: G_{ij; \alpha \beta} \: - \: \frac12 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \: + \: g^{\gamma \delta} \: \: G_{ij} \: F^i_{\alpha \gamma} \: F^j_{\beta \delta}. 
\notag \end{align} Adding a Lie derivative with respect to $- \: \nabla \ln \sqrt{\det(G_{ij})}$ to the right-hand side, and adding an exact form to the right-hand side of the equation for $\frac{\partial A^i_\alpha}{\partial t}$, gives a new equivalent set of equations: \begin{align} \label{4.10} \frac{\partial G_{ij}}{\partial t} \: & = \: g^{\alpha \beta} \: G_{ij; \alpha \beta} \: - \: g^{\alpha \beta} \: G^{kl} \: G_{ik, \alpha} \: G_{lj, \beta} \: - \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ik} \: G_{jl} \: F^k_{\alpha \beta} \: F^l_{\gamma \delta} \\ \frac{\partial A^i_{\alpha}}{\partial t} \: & = \: - \: g^{\gamma \delta} \: F^i_{\alpha \gamma; \delta} \: - \: g^{\gamma \delta} \: G^{ij} \: G_{jk, \gamma} \: F^k_{\alpha \delta} \notag \\ \frac{\partial g_{\alpha \beta}}{\partial t} \: & = \: -2 R_{\alpha \beta} \: + \: \frac12 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \: + \: g^{\gamma \delta} \: \: G_{ij} \: F^i_{\alpha \gamma} \: F^j_{\beta \delta}. \notag \end{align}
The equations in (\ref{4.10}) consist of an equation of heat type for $G_{ij}$, an equation of Yang-Mills gradient flow type for $A^i_\alpha$ and an equation of Ricci flow type for $g_{\alpha \beta}$. If $B$ is closed then an extension of the DeTurck trick \cite{DeTurck (1983)} to our setting shows short-time existence and uniqueness for the system (\ref{4.10}).
\subsubsection{Modified ${\mathcal F}$-functional} \label{subsubsection4.2.1}
We now assume that $B$ is closed.
\begin{definition} \label{4.11} Given $f \in C^\infty(B)$, put \begin{align} \label{4.12} & {\mathcal F}(G_{ij}, A^i_\alpha, g_{\alpha \beta}, f) \: = \\
& \int_B \left( |\nabla f|^2 \: + \: R \: - \: \frac14 \: g^{\alpha \beta} \: G^{ij} \: G_{jk, \alpha} \: G^{kl} \: G_{li, \beta} \: - \: \frac14 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} \right) \: e^{-f} \: \operatorname{dvol}_B. \notag \end{align} \end{definition}
If $N = 0$, i.e. if $M = B$, then this is the same as Perelman's ${\mathcal F}$-functional \cite{Perelman1}. Otherwise, the expression in (\ref{4.12}) differs from Perelman's ${\mathcal F}$-functional by the subtraction of terms corresponding to a Dirichlet energy of the field $G$ and a Yang-Mills action for the connection $A$.
We now compute the variation of ${\mathcal F}$.
\begin{lemma} \label{4.13} Given a smooth $1$-parameter family $\{(G_{ij}(s), A^i_\alpha(s), g_{\alpha \beta}(s), f(s))\}_{s \in (-\epsilon, \epsilon)}$, write
$\dot{G}_{ij} \: = \: \frac{dG_{ij}}{ds} \Big|_{s=0}$,
$\dot{A}^i_\alpha \: = \: \frac{dA^i_\alpha}{ds} \Big|_{s=0}$,
$\dot{g}_{\alpha \beta} \: = \: \frac{dg_{\alpha \beta}}{ds} \Big|_{s=0}$ and
$\dot{f} \: = \: \frac{df}{ds} \Big|_{s=0}$. Then \begin{align} \label{4.14}
& \frac{d}{ds} \Big|_{s=0} {\mathcal F}(G_{ij}, A^i_\alpha, g_{\alpha \beta}, f) \: = \\ & - \: \int_B \dot{G}_{kl} \: G^{ik} \: G^{jl} \notag \\ & \left( - \: \frac12 \: g^{\alpha \beta} \: G_{ij; \alpha \beta} \: + \: \frac12 \: g^{\alpha \beta} \: G^{kl} \: G_{ik, \alpha} \: G_{lj, \beta} \: + \: \frac14 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ik} \: G_{jl} \: F^k_{\alpha \beta} \: F^l_{\gamma \delta} \: + \: \frac12 \: g^{\alpha \beta} \: G_{ij, \alpha} \: f_{,\beta} \right) e^{-f} \: \operatorname{dvol}_B \: - \notag \\ & 2 \int_B \dot{A}^j_\beta \: g^{\alpha \beta} \: G_{ij} \left( \frac12 \: g^{\gamma \delta} \: F^i_{\alpha \gamma; \delta} \: + \: \frac12 \: g^{\gamma \delta} \: G^{ij} \: G_{jk, \gamma} \: F^k_{\alpha \delta} \: - \: \frac12 \: g^{\gamma \delta} \: f_{, \gamma} \: F^i_{\alpha \delta} \right) \: e^{-f} \: \operatorname{dvol}_B \: - \notag \\ & \int_B \dot{g}^{\alpha \beta} \left( R_{\alpha \beta} \: - \: \frac14 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \: - \: \frac12 g^{\gamma \delta} \: \: G_{ij} \: F^i_{\alpha \gamma} \: F^j_{\beta \delta} \: + \: f_{;\alpha \beta} \right) \: e^{-f} \: \operatorname{dvol}_B \: + \notag \\ & \int_B \left( \frac12 g^{\alpha \beta} \dot{g}_{\alpha \beta} \: - \: \dot{f} \right) \notag \\
& \left( 2 \nabla^2 f - |\nabla f|^2 + R \: - \: \frac14 \: g^{\alpha \beta} \: G^{ij} \: G_{jk, \alpha} \: G^{kl} \: G_{li, \beta} \: - \: \frac14 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} \right) \: e^{-f} \: \operatorname{dvol}_B. \notag \end{align} \end{lemma} \begin{proof} This follows from a calculation along the lines of the corresponding calculation for Perelman's ${\mathcal F}$-functional; see \cite[Section 5]{Kleiner-Lott}. \end{proof}
As a consequence of Lemma \ref{4.13}, we can show that ${\mathcal F}$ is nondecreasing under a certain flow.
\begin{corollary} \label{4.15} Under the flow equations \begin{align} \label{4.16} \frac{\partial G_{ij}}{\partial t} \: & = \:
g^{\alpha \beta} \: G_{ij; \alpha \beta} \: - \: g^{\alpha \beta} \: G^{kl} \: G_{ik, \alpha} \: G_{lj, \beta} \: - \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ik} \: G_{jl} \: F^k_{\alpha \beta} \: F^l_{\gamma \delta} \: - \: g^{\alpha \beta} \: G_{ij, \alpha} \: f_{,\beta} \\ \frac{\partial A^i_\alpha}{\partial t} \: & = \: - \: g^{\gamma \delta} \: F^i_{\alpha \gamma; \delta} \: - \: g^{\gamma \delta} \: G^{ij} \: G_{jk, \gamma} \: F^k_{\alpha \delta} \: + \: g^{\gamma \delta} \: f_{, \gamma} \: F^i_{\alpha \delta} \notag \\ \frac{\partial g_{\alpha \beta}}{\partial t} \: & = \: - \: 2 \: R_{\alpha \beta} \: + \: \frac12 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \: + \: g^{\gamma \delta} \: \: G_{ij} \: F^i_{\alpha \gamma} \: F^j_{\beta \delta} \: - \: 2 \: f_{;\alpha \beta} \notag \\ \frac{\partial f}{\partial t} \: & = \: - \: R \: + \: \frac14 \: g^{\alpha \beta} \: G^{ij} \: G_{jk, \alpha} \: G^{kl} \: G_{li, \beta} \: + \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} \: - \: \nabla^2 f \notag \end{align} one has \begin{align} \label{4.17} & \frac{d}{dt} {\mathcal F}(G_{ij}, A^i_\alpha, g_{\alpha \beta}, f) \: = \\
& \frac12 \: \int_B \left| g^{\alpha \beta} \: G_{ij; \alpha \beta} \: - \: g^{\alpha \beta} \: G^{kl} \: G_{ik, \alpha} \: G_{lj, \beta} \: - \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ik} \: G_{jl} \: F^k_{\alpha \beta} \: F^l_{\gamma \delta} \: - \: g^{\alpha \beta} \: G_{ij, \alpha} \: f_{,\beta}
\right|^2 e^{-f} \: \operatorname{dvol}_B \: + \notag \\
& \int_B \left| g^{\gamma \delta} \: F^i_{\alpha \gamma; \delta} \: + \: g^{\gamma \delta} \: G^{ij} \: G_{jk, \gamma} \: F^k_{\alpha \delta} \: - \: g^{\gamma \delta} \:
f_{, \gamma} \: F^i_{\alpha \delta} \right|^2 \: e^{-f} \: \operatorname{dvol}_B \: + \notag \\
& 2 \int_B \left| R_{\alpha \beta} \: - \: \frac14 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \: - \: \frac12 g^{\gamma \delta} \: \: G_{ij} \: F^i_{\alpha \gamma} \:
F^j_{\beta \delta} \: + \: f_{;\alpha \beta} \right|^2 \: e^{-f} \: \operatorname{dvol}_B. \notag \end{align} \end{corollary} \begin{proof} This is an immediate consequence of Lemma \ref{4.13}. \end{proof}
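For the reader's convenience, here is the cancellation that makes Corollary \ref{4.15} immediate: under the flow (\ref{4.16}), the trace of the equation for $\frac{\partial g_{\alpha \beta}}{\partial t}$ matches the equation for $\frac{\partial f}{\partial t}$,
\begin{equation*}
\frac12 \: g^{\alpha \beta} \: \frac{\partial g_{\alpha \beta}}{\partial t} \: = \: - \: R \: + \: \frac14 \: g^{\alpha \beta} \: G^{ij} \: G_{jk, \alpha} \: G^{kl} \: G_{li, \beta} \: + \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} \: - \: \nabla^2 f \: = \: \frac{\partial f}{\partial t},
\end{equation*}
so the last integral in (\ref{4.14}) vanishes, and the remaining three integrals become the three squares in (\ref{4.17}).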
As with Perelman's ${\mathcal F}$-functional, we now perform an infinitesimal diffeomorphism to decouple the equation for $f$ and obtain the Ricci flow on $M$.
\begin{corollary} \label{4.18} Under the flow equations \begin{align} \label{4.19} \frac{\partial G_{ij}}{\partial t} \: & = \:
g^{\alpha \beta} \: G_{ij; \alpha \beta} \: - \: g^{\alpha \beta} \: G^{kl} \: G_{ik, \alpha} \: G_{lj, \beta} \: - \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ik} \: G_{jl} \: F^k_{\alpha \beta} \: F^l_{\gamma \delta} \\ \frac{\partial A^i_\alpha}{\partial t} \: & = \: - \: g^{\gamma \delta} \: F^i_{\alpha \gamma; \delta} \: - \: g^{\gamma \delta} \: G^{ij} \: G_{jk, \gamma} \: F^k_{\alpha \delta} \notag \\ \frac{\partial g_{\alpha \beta}}{\partial t} \: & = \: - \: 2 \: R_{\alpha \beta} \: + \: \frac12 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \: + \: g^{\gamma \delta} \: \: G_{ij} \: F^i_{\alpha \gamma} \: F^j_{\beta \delta} \notag \\ \frac{\partial(e^{-f})}{\partial t} \: & = \: - \: \nabla^2 \: e^{-f} \: + \: \left( R \: - \: \frac14 \: g^{\alpha \beta} \: G^{ij} \: G_{jk, \alpha} \: G^{kl} \: G_{li, \beta} \: - \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} \right) e^{-f} \notag \end{align} one has \begin{align} \label{4.20} & \frac{d}{dt} {\mathcal F}(G_{ij}, A^i_\alpha, g_{\alpha \beta}, f) \: = \\
& \frac12 \: \int_B \left| g^{\alpha \beta} \: G_{ij; \alpha \beta} \: - \: g^{\alpha \beta} \: G^{kl} \: G_{ik, \alpha} \: G_{lj, \beta} \: - \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ik} \: G_{jl} \: F^k_{\alpha \beta} \: F^l_{\gamma \delta} \: - \: g^{\alpha \beta} \: G_{ij, \alpha} \: f_{,\beta}
\right|^2 e^{-f} \: \operatorname{dvol}_B \: + \notag \\
& \int_B \left| g^{\gamma \delta} \: F^i_{\alpha \gamma; \delta} \: + \: g^{\gamma \delta} \: G^{ij} \: G_{jk, \gamma} \: F^k_{\alpha \delta} \: - \: g^{\gamma \delta} \:
f_{, \gamma} \: F^i_{\alpha \delta} \right|^2 \: e^{-f} \: \operatorname{dvol}_B \: + \notag \\
& 2 \int_B \left| R_{\alpha \beta} \: - \: \frac14 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \: - \: \frac12 g^{\gamma \delta} \: \: G_{ij} \: F^i_{\alpha \gamma} \:
F^j_{\beta \delta} \: + \: f_{;\alpha \beta} \right|^2 \: e^{-f} \: \operatorname{dvol}_B. \notag \end{align} \end{corollary} \begin{proof} This follows because the right-hand sides of (\ref{4.16}) and (\ref{4.19}) differ by a Lie derivative with respect to $\nabla f$. \end{proof}
Note that the first three equations in (\ref{4.19}) are the same as (\ref{4.10}).
We now analyze what it means for ${\mathcal F}$ to be constant along the flow (\ref{4.19}).
\begin{proposition} \label{4.21} If ${\mathcal F}(G_{ij}, A^i_\alpha, g_{\alpha \beta}, f)$ is constant in $t$ then $F^i_{\alpha \beta} = 0$, $\det(G_{ij})$ is constant and \begin{align} \label{4.22} g^{\alpha \beta} \: G_{ij; \alpha \beta} \: - \: g^{\alpha \beta} \: G^{kl} \: G_{ik, \alpha} \: G_{lj, \beta} \: & = \: 0, \\ R_{\alpha \beta} \: - \: \frac14 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} & = \: 0. \notag \end{align} \end{proposition} \begin{proof} From (\ref{4.20}), we have \begin{equation} \label{4.23} g^{\alpha \beta} \: G_{ij; \alpha \beta} \: - \: g^{\alpha \beta} \: G^{kl} \: G_{ik, \alpha} \: G_{lj, \beta} \: - \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ik} \: G_{jl} \: F^k_{\alpha \beta} \: F^l_{\gamma \delta} \: - \: g^{\alpha \beta} \: G_{ij, \alpha} \: f_{,\beta} \: = \: 0 \end{equation} and \begin{equation} \label{4.24} R_{\alpha \beta} \: - \: \frac14 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \: - \: \frac12 g^{\gamma \delta} \: \: G_{ij} \: F^i_{\alpha \gamma} \: F^j_{\beta \delta} \: + \: f_{;\alpha \beta} \: = \: 0. \end{equation} Multiplying (\ref{4.23}) by $G^{ij}$ and summing over indices gives \begin{equation} \label{4.25} \nabla^2 \ln \det(G_{ij}) \: - \: \langle \nabla f, \nabla \ln \det(G_{ij}) \rangle \: - \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} \: = \: 0. \end{equation} (Here we are using the trivialization of
$|\Lambda^{\max} e|$ to think of $\det(G_{ij})$ as a function on $B$, defined up to multiplication by a positive constant.) Equivalently, \begin{equation} \label{4.26} \nabla^\alpha \left( e^{-f} \nabla_\alpha \ln \det(G_{ij}) \right) \: - \: \frac12 \: e^{-f} g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} \: = \: 0. \end{equation} Integrating (\ref{4.26}) over $B$ gives $F^i_{\alpha \beta} = 0$. Then multiplying (\ref{4.26}) by $\ln \det(G_{ij})$ and integrating over $B$ gives $\nabla \ln \det(G_{ij}) \: = \: 0$, so $\ln \det(G_{ij})$ is spatially constant.
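The trace computation leading to (\ref{4.25}) uses the standard identities $G^{ij} \: G_{ij,\alpha} \: = \: \left( \ln \det(G_{ij}) \right)_{,\alpha}$ (Jacobi's formula) and $\partial_\beta \: G^{ij} \: = \: - \: G^{ik} \: G_{kl,\beta} \: G^{lj}$, which combine to give
\begin{equation*}
\nabla^2 \ln \det(G_{ij}) \: = \: g^{\alpha \beta} \: \nabla_\beta \left( G^{ij} \: G_{ij,\alpha} \right) \: = \: g^{\alpha \beta} \: G^{ij} \: G_{ij; \alpha \beta} \: - \: g^{\alpha \beta} \: G^{ij} \: G_{jk, \alpha} \: G^{kl} \: G_{li, \beta}.
\end{equation*}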
Given that $F^i_{\alpha \beta} = 0$, the equation for $G^{ij} \frac{\partial G_{ij}}{\partial t}$ implies \begin{equation} \label{4.27} \frac{\partial}{\partial t} \ln \det(G_{ij}) \: = \: \nabla^2 \ln \det(G_{ij}). \end{equation} Thus $\ln \det(G_{ij})$ is also temporally constant.
As $\det(G_{ij})$ is spatially constant, we have \begin{equation} \label{4.28} G^{ij} \: G_{ij;\alpha \beta} \: - \: G^{ij} \: G_{jk, \alpha} \: G^{kl} \: G_{li, \beta} \: = \: 0. \end{equation} Along with the fact that $F^i_{\alpha \beta} = 0$, it follows that \begin{align} \label{4.29} \overline{R}_{ij} \: & = \: - \: \frac12 \: g^{\alpha \beta} \: G_{ij; \alpha \beta} \: + \: \frac12 \: g^{\alpha \beta} \: G^{kl} \: G_{ik, \alpha} \: G_{lj, \beta} \\ \overline{R}_{i \alpha} \: & = \: 0 \notag \\ \overline{R}_{\alpha \beta} \: & = \: R_{\alpha \beta} \: - \: \frac14 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \notag \end{align} and \begin{equation} \label{4.30} \overline{R} \: = \: R \: - \: \frac14 \: g^{\alpha \beta} \: G^{ij} \: G_{jk, \alpha} \: G^{kl} \: G_{li, \beta}. \end{equation} From equation (\ref{4.24}), \begin{equation} \label{4.31} \int_B \overline{R} \: \operatorname{dvol}_B \: = \: 0. \end{equation}
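In more detail, tracing (\ref{4.24}) with $g^{\alpha \beta}$ and using $F^i_{\alpha \beta} = 0$ gives
\begin{equation*}
R \: - \: \frac14 \: g^{\alpha \beta} \: G^{ij} \: G_{jk, \alpha} \: G^{kl} \: G_{li, \beta} \: + \: \nabla^2 f \: = \: 0,
\end{equation*}
so $\overline{R} \: = \: - \: \nabla^2 f$ by (\ref{4.30}); integrating this over the closed manifold $B$ gives (\ref{4.31}).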
On $M$, the evolution of the scalar curvature is given by \begin{equation} \label{4.32} \frac{\partial \overline{R}}{\partial t} \: = \:
\overline{\nabla}^2 \overline{R} \: + \: 2 \: |\overline{R}_{IJ}|^2. \end{equation} In our case, and using the fact that $\det(G_{ij})$ is spatially constant, this becomes \begin{equation} \label{4.33} \frac{\partial \overline{R}}{\partial t} \: = \:
{\nabla}^2 \overline{R} \: + \: 2 \: |\overline{R}_{ij}|^2
\: + \: 2 \: |\overline{R}_{\alpha \beta}|^2. \end{equation} From (\ref{4.19}), (\ref{4.23}) and (\ref{4.24}), the flow equations are \begin{align} \label{4.34} \frac{\partial G_{ij}}{\partial t} \: & = \: g^{\alpha \beta} \: G_{ij,\alpha} \: f_{,\beta} \\ \frac{\partial g_{\alpha \beta}}{\partial t} \: & = \: 2 \: f_{;\alpha \beta}. \notag \end{align} As the right-hand side of (\ref{4.34}) is given by Lie derivatives with respect to $\nabla f$, it follows that \begin{equation} \label{4.35} \frac{\partial \overline{R}}{\partial t} \: = \: \langle \nabla f, \nabla \overline{R} \rangle. \end{equation}
Thus \begin{equation} \label{4.36}
{\nabla}^2 \overline{R} \: + \: 2 |\overline{R}_{ij}|^2 \: + \:
2 |\overline{R}_{\alpha \beta}|^2 \: = \: \langle {\nabla} {f}, {\nabla} \overline{R} \rangle, \end{equation} or \begin{equation} \label{4.37}
{\nabla}^2 \overline{R} \: + \: 2 |\overline{R}_{ij}|^2 \: + \:
2 |\overline{R}_{\alpha \beta} \: - \: \frac{1}{n} \:
\overline{R} \: g_{\alpha \beta}|^2 \: + \: \frac{2}{n} \: \overline{R}^2 \: = \: \langle {\nabla} {f}, {\nabla} \overline{R} \rangle. \end{equation} From (\ref{4.31}), either $\overline{R} = 0$ or $\overline{R}_{min} < 0$. If $\overline{R}_{min} < 0$ then we obtain a contradiction to the minimum principle, applied to (\ref{4.37}). Thus $\overline{R} = 0$. Equation (\ref{4.37}) now implies that $\overline{R}_{ij} = \overline{R}_{\alpha \beta} = 0$, which proves the proposition. \end{proof}
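Concretely, the minimum principle argument goes as follows: at a point $p$ where $\overline{R}$ attains its minimum, $\nabla \overline{R}(p) = 0$ and $\nabla^2 \overline{R}(p) \ge 0$, so evaluating (\ref{4.37}) at $p$ gives
\begin{equation*}
\frac{2}{n} \: \overline{R}^2(p) \: \le \: \langle \nabla f, \nabla \overline{R} \rangle(p) \: - \: \nabla^2 \overline{R}(p) \: \le \: 0,
\end{equation*}
which forces $\overline{R}(p) \: = \: \overline{R}_{min} \: = \: 0$, contradicting the assumption that $\overline{R}_{min} < 0$.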
From (\ref{4.19}), under the conclusion of Proposition \ref{4.21} it follows that $G_{ij}$, $A^i_\alpha$ and $g_{\alpha \beta}$ are time-independent. The Ricci flow solution $\overline{g}_\infty(\cdot)$ on $M$ is Ricci-flat. In the case $N=0$ the proof of Proposition \ref{4.21} essentially reduces to the standard proof that a steady gradient soliton on a compact manifold is Ricci-flat; see, for example, \cite[Chapter 1]{Chowetal}.
With $\det^{-1}(\pm 1) \subset \operatorname{GL}(N, {\mathbb R})$, we can write $\det^{-1}(\pm 1)/\operatorname{O}(N) = \operatorname{SL}(N, {\mathbb R})/\operatorname{SO}(N)$. From \cite[Proposition 4.17]{Lott (2007)}, the first equation in (\ref{4.22}) says that the map $b \mapsto G_{ij}(b)$ describes a (twisted) harmonic map $G \: : \: B \rightarrow \det^{-1}(\pm 1)/\operatorname{O}(N)$. The twisting refers to the fact that if the flat ${\mathbb R}^N$-bundle $e$ has holonomy representation $\rho \: : \: \pi_1(B, b_0) \rightarrow \det^{-1}(\pm 1)$ then we really have a harmonic map $\widetilde{G} \: : \: \widetilde{B} \rightarrow \det^{-1}(\pm 1)/\operatorname{O}(N)$ which satisfies $\widetilde{G}(\gamma \widetilde{b}) \: = \: \rho(\gamma) \: \widetilde{G}(\widetilde{b})$ for $\gamma \in \pi_1(B, b_0)$ and $\widetilde{b}$ in the universal cover $\widetilde{B}$. After passing to a double cover of $B$ if necessary, we can assume that $\rho$ takes values in $\operatorname{SL}(N, {\mathbb R})$. For simplicity, we will make this assumption hereafter and consider $\widetilde{G}$ to be a twisted harmonic map from $B$ to $\operatorname{SL}(N, {\mathbb R})/\operatorname{SO}(N)$. Information on such twisted harmonic maps appears in \cite{Corlette (1988)}, \cite[Section 1.2]{Jost-Zuo (1997)} and \cite{Labourie (1991)}. Given $\rho$, such a twisted harmonic map $G$ exists if and only if the Zariski closure of $\operatorname{Im}(\rho)$ is reductive in $\operatorname{SL}(N, {\mathbb R})$. Given $\rho$, if there are two such equivariant harmonic maps $\widetilde{G}_1$ and $\widetilde{G}_2$ then there is a $1$-parameter family $\{ \widetilde{G}_t \}_{t \in [1,2]}$ of such equivariant harmonic maps, all with the same quotient energy, so that for each $\widetilde{b} \in \widetilde{B}$ the map $t \mapsto \widetilde{G}_t(\widetilde{b})$ is a constant-speed geodesic arc, whose length is independent of $\widetilde{b}$.
If the second equation in (\ref{4.22}) is satisfied then $B$ clearly has nonnegative Ricci curvature.
We now look at the solutions of (\ref{4.22}).
\begin{proposition} \label{4.38} Any solution $\overline{g}$ of (\ref{4.22}) is a locally product metric on a Ricci-flat base $B$. \end{proposition} \begin{proof} From the second equation in (\ref{4.22}), $B$ has nonnegative Ricci curvature. For some $r$, the universal cover $\widetilde{B}$ is an isometric product of ${\mathbb R}^r$ and $W$, where $W$ is a simply-connected closed $(n-r)$-dimensional manifold of nonnegative Ricci curvature \cite{Cheeger-Gromoll (1971)}. As before, let $\widetilde{G} \: : \: \widetilde{B} \rightarrow \operatorname{SL}(N, {\mathbb R})/\operatorname{SO}(N)$ denote the lift of $G$ to $\widetilde{B}$.
Let $x^1, \ldots, x^r$ be Cartesian coordinates on ${\mathbb R}^r$ and let $x^{r+1}, \ldots, x^n$ be local coordinates on $W$. From the second equation of (\ref{4.22}), $\widetilde{G}_{ij, \alpha} = 0$ for $1 \le \alpha \le r$. That is, $\widetilde{G}$ is constant in the ${\mathbb R}^r$-directions. Then the first equation of (\ref{4.22}) implies that for each $y \in {\mathbb R}^r$, the restriction of $\widetilde{G}$ to $\{y\} \times W$ is a harmonic map from $W$ to $\operatorname{SL}(N, {\mathbb R})/\operatorname{SO}(N)$. It follows that for each $y \in {\mathbb R}^r$, the restriction of $\widetilde{G}$ to $\{y\} \times W$ is a point map. Thus $\widetilde{G}$ is constant. From the second equation of (\ref{4.22}), $\widetilde{B}$ is Ricci-flat. The conclusion is that $B$ is Ricci-flat and $G$ is locally constant. \end{proof}
In the next proposition we use ${\mathcal F}$ to analyze a long-time limit of a locally ${\mathcal G}$-invariant Ricci flow solution. The method of proof is along the lines of the proof of \cite[Theorem 1.3]{Feldman-Ilmanen-Ni (2005)}.
\begin{proposition} \label{4.39} Suppose that $(M, \overline{g}(\cdot))$ is a locally ${\mathcal G}$-invariant Ricci flow defined for all $t \in [0, \infty)$. Let $\{ s_i \}_{i=1}^\infty$ be a sequence of positive numbers tending to infinity. Put $\overline{g}_i(t) \: = \: \overline{g}(t+s_i)$. Suppose that $\lim_{i \rightarrow \infty} \overline{g}_i(\cdot)$ exists and equals $\overline{g}_\infty(\cdot)$ in the sense of Subsection \ref{subsection4.1}, for a locally ${\mathcal G}$-invariant Ricci flow $\overline{g}_\infty(\cdot)$ with a compact base $B_\infty$. Writing $\overline{g}_\infty(\cdot) \: \equiv \: (G_{ij,\infty}(\cdot), A^i_{\alpha,\infty}(\cdot), g_{\alpha \beta,\infty}(\cdot))$, we conclude that \\ 1. The curvatures $F^i_{\alpha \beta, \infty}$ vanish. \\ 2. $\det(G_{ij,\infty})$ is constant. \\ 3. Equations (\ref{4.22}) are satisfied for $G_{ij,\infty}(\cdot)$ and $g_{\alpha \beta,\infty}(\cdot)$. \end{proposition} \begin{proof} We first construct a positive solution of the conjugate heat equation \begin{equation} \label{4.40} \frac{\partial u}{\partial t} \: = \: - \: \nabla^2 u \: + \: \left( R \: - \: \frac14 \: g^{\alpha \beta} \: G^{ij} \: G_{jk, \alpha} \: G^{kl} \: G_{li, \beta} \: - \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} \right) u \end{equation} that exists for all $t \in [0, \infty)$. Note that if $u$ is a solution to (\ref{4.40}) then $\int_B u \: \operatorname{dvol}_B$ is constant in $t$. Let $\{t_j\}_{j=1}^\infty$ be a sequence of times going to infinity. Let $\widetilde{u}_j(\cdot)$ be a solution to (\ref{4.40}) on the interval $[0,t_j]$ with initial condition $\widetilde{u}_j(t_j) \: = \: \frac{1}{\operatorname{vol}(B, g_{\alpha \beta}(t_j))}$. For any $T > 0$, we claim that a subsequence of the $\widetilde{u}_j$'s converges smoothly on the time interval $[0, T]$.
To see this, at time $T+1$ we know that if $t_j \ge T+1$ then $\widetilde{u}_j(T+1) \ge 0$ and $\int_B \widetilde{u}_j(T+1) \: \operatorname{dvol}_B \: = \: 1$. Solving the conjugate heat equation with initial data at time $T+1$, and restricting the solution to the time interval $[0,T]$, gives a smoothing operator from the space of initial data $\{\widetilde{u} \in L^1(B) \: : \: \widetilde{u} \ge 0, \int_B \widetilde{u} \: \operatorname{dvol}_B(T+1) = 1\}$ to $C^\infty([0, T] \times B)$. Thus we have the derivative bounds needed to extract a subsequence of the $\widetilde{u}_j$'s that converges smoothly on $[0, T]$. By a diagonal argument, we can extract a subsequence of the $\widetilde{u}_j$'s that converges smoothly on compact subsets of $[0, \infty)$ to a nonzero solution $\widetilde{u}_\infty(\cdot)$ of (\ref{4.40}), defined for $t \in [0, \infty)$.
One can show, as in \cite[Proof of Proposition 7.5]{Kleiner-Lott}, that $\widetilde{u}_\infty(\cdot) > 0$. If $\widetilde{f}_\infty(t)$ is given by $\widetilde{u}_\infty(t) \: = \: e^{- \: \widetilde{f}_\infty(t)}$ then ${\mathcal F}(G_{ij}(t), A^i_\alpha(t), g_{\alpha \beta}(t), \widetilde{f}_\infty(t))$ is nondecreasing in $t$. We write ${\mathcal F}_\infty \: = \: \lim_{t \rightarrow \infty} {\mathcal F}(G_{ij}(t), A^i_\alpha(t), g_{\alpha \beta}(t), \widetilde{f}_\infty(t))$, which a priori could be infinite.
Next, put $u_i(t) \: = \: \widetilde{u}_\infty(t + s_i)$. By assumption, $\lim_{i \rightarrow \infty} \overline{g}_i(\cdot) \: = \: \overline{g}_\infty(\cdot)$ in the sense of Subsection \ref{subsection4.1}. Then by the same smoothing argument as above, there is a subsequence of $\{u_i(\cdot)\}_{i=1}^\infty$ that converges smoothly on compact subsets of $[0, \infty)$ to a solution $u_\infty(\cdot)$ of (\ref{4.40}) on $B_\infty$, where (\ref{4.40}) is now written in terms of $G_{ij,\infty}(\cdot)$, $A^i_{\alpha,\infty}(\cdot)$ and $g_{\alpha \beta,\infty}(\cdot)$. (When taking a convergent subsequence, we perform the same diffeomorphisms on the $u_i$'s as are used in forming the limit $\lim_{i \rightarrow \infty} \overline{g}_i(\cdot)$.) Define $f_\infty(t)$ by ${u}_\infty(t) \: = \: e^{- \: {f}_\infty(t)}$. Then after passing to a subsequence, \begin{align} \label{4.41} {\mathcal F}(G_{ij,\infty}(t), A^i_{\alpha,\infty}(t), g_{\alpha \beta,\infty}(t), f_\infty(t)) \: & = \: \lim_{i \rightarrow \infty} {\mathcal F}(G_{ij}(t+s_i), A^i_\alpha(t+s_i), g_{\alpha \beta}(t+s_i), \widetilde{f}_\infty(t+s_i)) \\ & = \: {\mathcal F}_\infty. \notag \end{align} This shows that ${\mathcal F}_\infty < \infty$ and that ${\mathcal F}(G_{ij,\infty}(t), A^i_{\alpha,\infty}(t), g_{\alpha \beta,\infty}(t), f_\infty(t))$ is constant in $t$. The proposition now follows from Proposition \ref{4.21}. \end{proof}
Junfang Li pointed out that the modified ${\mathcal F}$-functional has an $(n+N)$-dimensional interpretation. Namely, for $\overline{f} \in C^\infty(B)$, put \begin{equation} \label{4.42} \overline{\mathcal F}(G_{ij}, A^i_\alpha, g_{\alpha \beta}, \overline{f}) \: = \:
\int_B \left( |\nabla \overline{f}|^2 \: + \: \overline{R} \right) \: e^{- \overline{f}} \: \sqrt{\det(G_{ij})} \: \operatorname{dvol}_B. \end{equation} This is a renormalized version of Perelman's ${\mathcal F}$-functional on $M$. \begin{proposition} \label{4.43} Put $f = \overline{f} - \ln \sqrt{\det(G_{ij})}$. Then \begin{equation} \label{4.44} \overline{\mathcal F}(G_{ij}, A^i_\alpha, g_{\alpha \beta}, \overline{f}) \: = {\mathcal F}(G_{ij}, A^i_\alpha, g_{\alpha \beta},f). \end{equation} \end{proposition} \begin{proof} We have \begin{equation} \label{4.45}
\int_B |\nabla \overline{f} |^2 \: e^{- \overline{f}} \: \sqrt{\det(G_{ij})} \: \operatorname{dvol}_B \: =
\int_B \left| \nabla f \: + \: \nabla \ln \sqrt{\det(G_{ij})}
\right|^2 \: e^{- f} \: \operatorname{dvol}_B \end{equation} and \begin{align} \label{4.46}
& \int_B \left| \nabla f \: + \: \nabla \ln \sqrt{\det(G_{ij})}
\right|^2 \: e^{- f} \: \operatorname{dvol}_B \: = \\
&\int_B \left( \left| \nabla f \right|^2 \: + \: 2 \: \langle \nabla f, \nabla \ln \sqrt{\det(G_{ij})} \rangle
\: + \: \left| \nabla \ln \sqrt{\det(G_{ij})} \right|^2 \right) \: e^{- f} \: \operatorname{dvol}_B \: = \: \notag \\
&\int_B \left( \left| \nabla f \right|^2 \: + \: 2 \: \nabla^2 \ln \sqrt{\det(G_{ij})}
\: + \: \left| \nabla \ln \sqrt{\det(G_{ij})} \right|^2 \right) \: e^{- f} \: \operatorname{dvol}_B \: = \: \notag \\
&\int_B \left( \left| \nabla f \right|^2 \: + \: g^{\alpha \beta} G^{ij} \: G_{ij; \alpha \beta} \: - \: g^{\alpha \beta} \: G^{ij} \: G_{jk, \alpha} \: G^{kl} \: G_{li, \beta} \: + \: \frac14 \: g^{\alpha \beta} \: G^{ij} \: G_{ij, \alpha} \: G^{kl} \: G_{kl, \beta} \right) \: e^{- f} \: \operatorname{dvol}_B. \notag \end{align} Combining this with (\ref{4.7}) gives \begin{align} \label{4.46.5}
& \int_B \left( \left| \nabla f \: + \: \nabla \ln \sqrt{\det(G_{ij})}
\right|^2 \: + \: \overline{R} \right) \: e^{- f} \: \operatorname{dvol}_B \: = \\
& \int_B \left( |\nabla f|^2 \: + \: R \: - \: \frac14 \: g^{\alpha \beta} \: G^{ij} \: G_{jk, \alpha} \: G^{kl} \: G_{li, \beta} \: - \: \frac14 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} \right) \: e^{-f} \: \operatorname{dvol}_B, \notag \end{align} which proves the proposition. \end{proof}
\subsubsection{Modified ${\mathcal W}$-functional} \label{subsubsection4.2.2}
\begin{definition} \label{4.47} Given $f \in C^\infty(B)$ and $\tau \in {\mathbb R}^+$, put \begin{align} \label{4.48} & {\mathcal W}(G_{ij},A^i_\alpha,g_{\alpha \beta},f,\tau) \: = \\ &\int_B \left[ \tau
\left( |\nabla f|^2 \: + \: R \: - \: \frac14 \: g^{\alpha \beta} \: G^{ij} \: G_{jk, \alpha} \: G^{kl} \: G_{li, \beta} \: - \: \frac14 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} \right) \: + \: f \: - \: n \right] \notag \\ & (4\pi \tau)^{- \: \frac{n}{2}} \: e^{-f} \: \operatorname{dvol}_B. \notag \end{align} \end{definition}
If $N = 0$, i.e. if $M = B$, then this is the same as Perelman's ${\mathcal W}$-functional \cite{Perelman1}. The next proposition says how ${\mathcal W}$ varies along the Ricci flow.
\begin{proposition} \label{4.49} Under the flow equations \begin{align} \label{4.50} \frac{\partial G_{ij}}{\partial t} \: & = \:
g^{\alpha \beta} \: G_{ij; \alpha \beta} \: - \: g^{\alpha \beta} \: G^{kl} \: G_{ik, \alpha} \: G_{lj, \beta} \: - \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ik} \: G_{jl} \: F^k_{\alpha \beta} \: F^l_{\gamma \delta} \\ \frac{\partial A^i_\alpha}{\partial t} \: & = \: - \: g^{\gamma \delta} \: F^i_{\alpha \gamma; \delta} \: - \: g^{\gamma \delta} \: G^{ij} \: G_{jk, \gamma} \: F^k_{\alpha \delta} \notag \\ \frac{\partial g_{\alpha \beta}}{\partial t} \: & = \: - \: 2 \: R_{\alpha \beta} \: + \: \frac12 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \: + \: g^{\gamma \delta} \: \: G_{ij} \: F^i_{\alpha \gamma} \: F^j_{\beta \delta} \notag \\ \frac{\partial(e^{-f})}{\partial t} \: & = \: - \: \nabla^2 \: e^{-f} \: + \: \left( R \: - \: \frac14 \: g^{\alpha \beta} \: G^{ij} \: G_{jk, \alpha} \: G^{kl} \: G_{li, \beta} \: - \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} \: - \: \frac{n}{2 \tau} \right) e^{-f} \notag \\ \frac{\partial \tau}{\partial t} \: & = \: -1 \notag \end{align} one has \begin{align} \label{4.51} & \frac{d}{dt} {\mathcal W}(G_{ij}, A^i_\alpha, g_{\alpha \beta}, f, \tau) \: = \\
& \frac{\tau}{2} \: \int_B \left| g^{\alpha \beta} \: G_{ij; \alpha \beta} \: - \: g^{\alpha \beta} \: G^{kl} \: G_{ik, \alpha} \: G_{lj, \beta} \: - \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ik} \: G_{jl} \: F^k_{\alpha \beta} \: F^l_{\gamma \delta} \: - \: g^{\alpha \beta} \: G_{ij, \alpha} \: f_{,\beta}
\right|^2 \notag \\ & (4 \pi \tau)^{- \: \frac{n}{2}} \: e^{-f} \: \operatorname{dvol}_B \: + \notag \\
& \tau \int_B \left| g^{\gamma \delta} \: F^i_{\alpha \gamma; \delta} \: + \: g^{\gamma \delta} \: G^{ij} \: G_{jk, \gamma} \: F^k_{\alpha \delta} \: - \: g^{\gamma \delta} \:
f_{, \gamma} \: F^i_{\alpha \delta} \right|^2 \:
(4 \pi \tau)^{- \: \frac{n}{2}} \: e^{-f} \: \operatorname{dvol}_B \: + \notag \\
& 2 \tau \int_B \left| R_{\alpha \beta} \: - \: \frac14 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \: - \: \frac12 \: g^{\gamma \delta} \: \: G_{ij} \: F^i_{\alpha \gamma} \: F^j_{\beta \delta} \: + \: f_{;\alpha \beta} \: - \: \frac{1}{2\tau}
\: g_{\alpha \beta} \right|^2 \: (4 \pi \tau)^{- \: \frac{n}{2}} \: e^{-f} \: \operatorname{dvol}_B \: - \: \notag \\ & \frac14 \: \int_B g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} \:
(4 \pi \tau)^{- \: \frac{n}{2}} \: e^{-f} \: \operatorname{dvol}_B. \notag \end{align} \end{proposition} \begin{proof} The proof stands in relation to the proof of Corollary \ref{4.18} as the corresponding statements about Perelman's ${\mathcal W}$-functional vs. Perelman's ${\mathcal F}$-functional; see \cite[Section 12]{Kleiner-Lott}. \end{proof}
Note that the $\frac14 \: \int_B g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} \:
(4 \pi \tau)^{- \: \frac{n}{2}} \: e^{-f} \: \operatorname{dvol}_B$ term occurs on the right-hand side of (\ref{4.51}) with a negative sign. We now look at what it means for ${\mathcal W}$ to be constant in $t$, under the assumption that $F^i_{\alpha \beta}$ vanishes.
\begin{proposition} \label{4.52} Suppose that $F^i_{\alpha \beta} = 0$. If ${\mathcal W}(G_{ij}, A^i_\alpha, g_{\alpha \beta}, f, \tau)$ is constant in $t$ then $\det(G_{ij})$ is constant and \begin{align} \label{4.53} g^{\alpha \beta} \: G_{ij; \alpha \beta} \: - \: g^{\alpha \beta} \: G^{kl} \: G_{ik, \alpha} \: G_{lj, \beta} \: - \: g^{\alpha \beta} \: G_{ij, \alpha} \: f_{,\beta} & = \: 0, \\ R_{\alpha \beta} \: - \: \frac14 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \: + \: f_{;\alpha \beta} \: - \: \frac{1}{2\tau} \: g_{\alpha \beta} & = \: 0. \notag \end{align} \end{proposition} \begin{proof} The same argument as in the proof of Proposition \ref{4.21} shows that $\det(G_{ij})$ is constant. Then (\ref{4.53}) follows from (\ref{4.51}).
Unlike in Proposition \ref{4.21}, we cannot conclude that $f$ is constant, because of the existence of nontrivial compact gradient shrinking solitons. \end{proof}
\begin{remark} \label{4.54} The term $\frac14 \: \int_B g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} \:
(4 \pi \tau)^{- \: \frac{n}{2}} \: e^{-f} \: \operatorname{dvol}_B$ occurs on the right-hand side of (\ref{4.51}) with an unfavorable sign: it is nonnegative but enters with a minus sign, so it obstructs any monotonicity conclusion. This is not surprising, as can be seen by looking at the Ricci flow on a round $3$-sphere $M$, which we consider to be the total space of a circle bundle over $S^2$. We shift the time parameter so that the $3$-sphere disappears at time zero. As the $3$-sphere gives a gradient shrinking soliton, the functional ${\mathcal W}$ is constant in $t$. However, the circle bundle has nonvanishing curvature. Hence having ${\mathcal W}$ constant in $t$ cannot imply that $F^i_{\alpha \beta}$ vanishes. \end{remark}
We now look at some special cases of (\ref{4.53}).
\begin{proposition} Under the hypotheses of Proposition \ref{4.52}, if $1 \le \dim(B) \le 2$ then the only solutions of (\ref{4.53}) occur when $B$ is $S^2$ or ${\mathbb R} P^2$. \end{proposition} \begin{proof} The second equation in (\ref{4.53}) implies that \begin{equation} \int_B R \: \operatorname{dvol}_B \: - \: \frac14 \: \int_B g^{\alpha \beta} \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \: \operatorname{dvol}_B \: - \: \frac{n}{2\tau} \: \operatorname{vol}(B) \: = \: 0, \end{equation} from which the proposition follows. \end{proof}
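To spell out the final step (a routine verification, included for the reader's convenience): the displayed identity can be rewritten as
\begin{equation}
\int_B R \: \operatorname{dvol}_B \: = \: \frac14 \: \int_B g^{\alpha \beta} \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \: \operatorname{dvol}_B \: + \: \frac{n}{2\tau} \: \operatorname{vol}(B) \: > \: 0,
\end{equation}
since the first term on the right-hand side is nonnegative (as $G_{ij}$ is positive-definite) and the second is positive. If $\dim(B) = 1$ then $R = 0$, a contradiction. If $\dim(B) = 2$ then the Gauss-Bonnet theorem gives $\chi(B) > 0$, so $B$ is $S^2$ or ${\mathbb R} P^2$.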
We now use ${\mathcal W}$ to analyze a blowup limit.
\begin{proposition} \label{4.55} Suppose that $(M, \overline{g}(\cdot))$ is a locally ${\mathcal G}$-invariant Ricci flow defined for all $t \in (- T, 0)$, with $T \le \infty$. Suppose that $F^i_{\alpha \beta} = 0$. Put $\tau \: = \: - \: t$. Let $\{ s_i \}_{i=1}^\infty$ be a sequence of positive numbers tending to infinity. Put $\overline{g}_i(\tau) \: = \: s_i \: \overline{g}(s_i^{-1} \tau)$. Suppose that $\lim_{i \rightarrow \infty} \overline{g}_i(\cdot)$ exists and equals $\overline{g}_\infty(\cdot)$ in the sense of Subsection \ref{subsection4.1}, for a locally ${\mathcal G}$-invariant Ricci flow $\overline{g}_\infty(\cdot)$ with a compact base $B_\infty$, defined for $\tau \in (0, \infty)$. Writing $\overline{g}_\infty(\cdot) \: \equiv \: (G_{ij,\infty}(\cdot), g_{\alpha \beta,\infty}(\cdot))$, we conclude that \\ 1. $\det(G_{ij,\infty})$ is constant. \\ 2. Equations (\ref{4.53}) are satisfied for $G_{ij,\infty}(\cdot)$ and $g_{\alpha \beta,\infty}(\cdot)$. \end{proposition} \begin{proof} The proof is along the lines of the proof of Proposition \ref{4.39}. \end{proof}
\subsubsection{Modified ${\mathcal W}_+$-functional} \label{subsubsection4.2.3}
\begin{definition} \label{4.56} Given $f \in C^\infty(B)$ and $t \in {\mathbb R}^+$, put \begin{align} \label{4.57} & {\mathcal W}_+(G_{ij},A^i_\alpha,g_{\alpha \beta},f,t) \: = \\ &\int_B \left[ t
\left( |\nabla f|^2 \: + \: R \: - \: \frac14 \: g^{\alpha \beta} \: G^{ij} \: G_{jk, \alpha} \: G^{kl} \: G_{li, \beta} \: - \: \frac14 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} \right) \: - \: f \: + \: n \right] \notag \\ & (4\pi t)^{- \: \frac{n}{2}} \: e^{-f} \: \operatorname{dvol}_B. \notag \end{align} \end{definition}
If $N = 0$, i.e. if $M = B$, then this is the same as the Feldman-Ilmanen-Ni ${\mathcal W}_+$-functional \cite{Feldman-Ilmanen-Ni (2005)}.
In what follows, we will need a lower bound for ${\mathcal W}_+$ in terms of the scalar curvature of $M$ and the volume of $B$.
\begin{lemma} \label{4.58} If $(4 \pi t)^{- \: \frac{n}{2}} \int_B e^{-f} \: \operatorname{dvol}_B \: = \: 1$ then \begin{equation} \label{4.59} {\mathcal W}_+(G_{ij},A^i_\alpha,g_{\alpha \beta},f,t) \: \ge \: t \: \overline{R}_{min} \: + \: n \: + \: \frac{n}{2} \: \ln(4\pi) \: - \: \ln \left( t^{- \: \frac{n}{2}} \: \operatorname{vol}(B, g_{\alpha \beta}(t)) \right). \end{equation} \end{lemma} \begin{proof} From (\ref{4.46.5}), \begin{align} \label{4.60} & {\mathcal W}_+(G_{ij},A^i_\alpha,g_{\alpha \beta},f,t) \: = \\
& \int_B \left[t \left( \left| \nabla f \: + \: \nabla \ln \sqrt{\det(G_{ij})}
\right|^2 \: + \: \overline{R} \right) \: - \: f \: + \: n \right] \: (4\pi t)^{- \: \frac{n}{2}} \: e^{- f} \: \operatorname{dvol}_B \: \ge \notag \\ & t \: \overline{R}_{min} \: + \: n \: - \: (4 \pi t)^{- \: \frac{n}{2}} \int_B f \: e^{-f} \: \operatorname{dvol}_B \: \ge \: \notag \\ & t \: \overline{R}_{min} \: + \: n \: + \: \frac{n}{2} \: \ln(4\pi) \: - \: \ln \left( t^{- \: \frac{n}{2}} \: \operatorname{vol}(B, g_{\alpha \beta}(t)) \right), \notag \end{align} where we used Jensen's inequality. This proves the lemma. \end{proof}
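For the reader's convenience, here is the Jensen step written out. Put $\phi \: = \: (4 \pi t)^{- \: \frac{n}{2}} \: e^{-f}$, so that $\int_B \phi \: \operatorname{dvol}_B \: = \: 1$ and $f \: = \: - \: \ln \phi \: - \: \frac{n}{2} \: \ln (4 \pi t)$. Then
\begin{equation}
- \: \int_B f \: \phi \: \operatorname{dvol}_B \: = \: \int_B \phi \: \ln \phi \: \operatorname{dvol}_B \: + \: \frac{n}{2} \: \ln(4 \pi t) \: \ge \: - \: \ln \operatorname{vol}(B, g_{\alpha \beta}(t)) \: + \: \frac{n}{2} \: \ln(4 \pi t),
\end{equation}
where the inequality is Jensen's inequality for the convex function $x \mapsto x \ln x$, applied with respect to the normalized volume measure on $B$. Since $\frac{n}{2} \ln(4 \pi t) \: - \: \ln \operatorname{vol}(B) \: = \: \frac{n}{2} \ln(4\pi) \: - \: \ln \left( t^{- \: \frac{n}{2}} \operatorname{vol}(B) \right)$, this is exactly the last inequality in (\ref{4.60}).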
The next proposition says that if $f$ satisfies a conjugate heat equation then ${\mathcal W}_+$ is monotonic under the Ricci flow.
\begin{proposition} \label{4.61} Under the flow equations \begin{align} \label{4.62} \frac{\partial G_{ij}}{\partial t} \: & = \:
g^{\alpha \beta} \: G_{ij; \alpha \beta} \: - \: g^{\alpha \beta} \: G^{kl} \: G_{ik, \alpha} \: G_{lj, \beta} \: - \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ik} \: G_{jl} \: F^k_{\alpha \beta} \: F^l_{\gamma \delta} \\ \frac{\partial A^i_\alpha}{\partial t} \: & = \: - \: g^{\gamma \delta} \: F^i_{\alpha \gamma; \delta} \: - \: g^{\gamma \delta} \: G^{ij} \: G_{jk, \gamma} \: F^k_{\alpha \delta} \notag \\ \frac{\partial g_{\alpha \beta}}{\partial t} \: & = \: - \: 2 \: R_{\alpha \beta} \: + \: \frac12 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \: + \: g^{\gamma \delta} \: \: G_{ij} \: F^i_{\alpha \gamma} \: F^j_{\beta \delta} \notag \\ \frac{\partial(e^{-f})}{\partial t} \: & = \: - \: \nabla^2 \: e^{-f} \: + \: \left( R \: - \: \frac14 \: g^{\alpha \beta} \: G^{ij} \: G_{jk, \alpha} \: G^{kl} \: G_{li, \beta} \: - \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} \: + \: \frac{n}{2t} \right) e^{-f} \notag \end{align} one has \begin{align} \label{4.63} & \frac{d}{dt} {\mathcal W}_+(G_{ij}, A^i_\alpha, g_{\alpha \beta}, f, t) \: = \\
& \frac{t}{2} \: \int_B \left| g^{\alpha \beta} \: G_{ij; \alpha \beta} \: - \: g^{\alpha \beta} \: G^{kl} \: G_{ik, \alpha} \: G_{lj, \beta} \: - \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ik} \: G_{jl} \: F^k_{\alpha \beta} \: F^l_{\gamma \delta} \: - \: g^{\alpha \beta} \: G_{ij, \alpha} \: f_{,\beta}
\right|^2 \notag \\ & (4 \pi t)^{- \: \frac{n}{2}} \: e^{-f} \: \operatorname{dvol}_B \: + \notag \\
& t \int_B \left| g^{\gamma \delta} \: F^i_{\alpha \gamma; \delta} \: + \: g^{\gamma \delta} \: G^{ij} \: G_{jk, \gamma} \: F^k_{\alpha \delta} \: - \: g^{\gamma \delta} \:
f_{, \gamma} \: F^i_{\alpha \delta} \right|^2 \:
(4 \pi t)^{- \: \frac{n}{2}} \: e^{-f} \: \operatorname{dvol}_B \: + \notag \\
& 2 t \int_B \left| R_{\alpha \beta} \: - \: \frac14 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \: - \: \frac12 \: g^{\gamma \delta} \: \: G_{ij} \: F^i_{\alpha \gamma} \: F^j_{\beta \delta} \: + \: f_{;\alpha \beta} \: + \: \frac{1}{2t}
\: g_{\alpha \beta} \right|^2 \: (4 \pi t)^{- \: \frac{n}{2}} \: e^{-f} \: \operatorname{dvol}_B \: + \: \notag \\ & \frac14 \: \int_B g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} \: (4 \pi t)^{- \: \frac{n}{2}} \: e^{-f} \: \operatorname{dvol}_B. \notag \end{align} \end{proposition} \begin{proof} The proof is along the lines of the proof of Corollary \ref{4.18}. \end{proof}
We now look at what it means for ${\mathcal W}_+$ to be constant along the flow (\ref{4.62}).
\begin{proposition} \label{4.64} If ${\mathcal W_+}(G_{ij}, A^i_\alpha, g_{\alpha \beta}, f, t)$ is constant in $t$ then $F^i_{\alpha \beta} = 0$, $\det(G_{ij})$ is constant and \begin{align} \label{4.65} g^{\alpha \beta} \: G_{ij; \alpha \beta} \: - \: g^{\alpha \beta} \: G^{kl} \: G_{ik, \alpha} \: G_{lj, \beta} \: & = \: 0, \\ R_{\alpha \beta} \: - \: \frac14 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \: + \: \frac{1}{2t} \: g_{\alpha \beta} & = \: 0. \notag \end{align} \end{proposition} \begin{proof} From (\ref{4.63}), we see first that $F^i_{\alpha \beta} = 0$. Then we also see that \begin{equation} \label{4.66} g^{\alpha \beta} \: G_{ij; \alpha \beta} \: - \: g^{\alpha \beta} \: G^{kl} \: G_{ik, \alpha} \: G_{lj, \beta} \: - \: g^{\alpha \beta} \: G_{ij, \alpha} \: f_{,\beta} \: = \: 0 \end{equation} and \begin{equation} \label{4.67} R_{\alpha \beta} \: - \: \frac14 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \: + \: f_{;\alpha \beta} \: + \: \frac{1}{2t} \: g_{\alpha \beta} \: = \: 0. \end{equation} As in the proof of Proposition \ref{4.21}, we can show from (\ref{4.66}) that $\det(G_{ij})$ is constant. Then equations (\ref{4.29}) and (\ref{4.30}) hold. From (\ref{4.67}), we have \begin{equation} \label{4.68} \int_B \left( \overline{R} + \frac{n}{2t} \right) \: \operatorname{dvol}_B \: = \: 0. \end{equation} As in the proof of Proposition \ref{4.21}, we have \begin{equation} \label{4.69} \frac{\partial \overline{R}}{\partial t} \: = \:
{\nabla}^2 \overline{R} \: + \: 2 \: |\overline{R}_{ij}|^2
\: + \: 2 \: |\overline{R}_{\alpha \beta}|^2. \end{equation} From (\ref{4.62}), (\ref{4.66}) and (\ref{4.67}), the flow equations are \begin{align} \label{4.70} \frac{\partial G_{ij}}{\partial t} \: & = \: g^{\alpha \beta} \: G_{ij,\alpha} \: f_{,\beta} \\ \frac{\partial g_{\alpha \beta}}{\partial t} \: & = \: 2 \: f_{;\alpha \beta} \: + \: \frac{1}{t} \: g_{\alpha \beta}. \notag \end{align} It follows that \begin{equation} \label{4.71} \frac{\partial \overline{R}}{\partial t} \: = \: \langle \nabla f, \nabla \overline{R} \rangle \: - \: \frac{\overline{R}}{t}. \end{equation} Thus \begin{equation} \label{4.72}
{\nabla}^2 \overline{R} \: + \: 2 |\overline{R}_{ij}|^2 \: + \:
2 |\overline{R}_{\alpha \beta}|^2 \: + \: \frac{\overline{R}}{t} \: = \: \langle {\nabla} {f}, {\nabla} \overline{R} \rangle. \end{equation} Then \begin{equation} \label{4.73} {\nabla}^2 \left( \overline{R} + \frac{n}{2t} \right)
\: + \: 2 |\overline{R}_{ij}|^2 \: + \:
2 |\overline{R}_{\alpha \beta} + \frac{1}{2t} g_{\alpha \beta}|^2 \: - \: \frac{1}{t} \: \left( \overline{R} + \frac{n}{2t} \right) \: = \: \langle {\nabla} {f}, {\nabla} \left( \overline{R} + \frac{n}{2t} \right) \rangle. \end{equation} From (\ref{4.68}), either $\overline{R} + \frac{n}{2t} \: = \: 0$ or $\overline{R}_{min} + \frac{n}{2t} < 0$. If $\overline{R}_{min} + \frac{n}{2t} < 0$ then we obtain a contradiction to the minimum principle, applied to (\ref{4.73}). Thus $\overline{R} + \frac{n}{2t} = 0$. From (\ref{4.73}), it follows that $\overline{R}_{ij} \: = \: \overline{R}_{\alpha \beta} + \frac{1}{2t} g_{\alpha \beta} \: = \: 0$. This proves the proposition. \end{proof}
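To expand on the appeal to the minimum principle: put $h \: = \: \overline{R} \: + \: \frac{n}{2t}$ and fix a time $t$. At a point of $B$ where $h$ attains its minimum, $\nabla h \: = \: 0$ and $\nabla^2 h \: \ge \: 0$, so (\ref{4.73}) gives
\begin{equation}
\frac{1}{t} \: h_{min} \: \ge \: 2 \: |\overline{R}_{ij}|^2 \: + \: 2 \: \left| \overline{R}_{\alpha \beta} \: + \: \frac{1}{2t} \: g_{\alpha \beta} \right|^2 \: \ge \: 0
\end{equation}
at that point, which is incompatible with $\overline{R}_{min} + \frac{n}{2t} < 0$.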
\begin{lemma} \label{4.74} Under the conclusion of Proposition \ref{4.64}, $G_{ij}$ and $A^i_\alpha$ are time-independent, and $g_{\alpha \beta}$ is proportional to $t$. \end{lemma} \begin{proof} This follows from (\ref{4.62}) and (\ref{4.65}). \end{proof}
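In more detail (a short verification, included for convenience): since $F^i_{\alpha \beta} = 0$, the first equation of (\ref{4.65}) makes the right-hand side of the flow equation for $G_{ij}$ in (\ref{4.62}) vanish, and the right-hand side of the flow equation for $A^i_\alpha$ vanishes as well. For $g_{\alpha \beta}$, substituting the second equation of (\ref{4.65}) into (\ref{4.62}) gives
\begin{equation}
\frac{\partial g_{\alpha \beta}}{\partial t} \: = \: - \: 2 \left( R_{\alpha \beta} \: - \: \frac14 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta} \right) \: = \: \frac{1}{t} \: g_{\alpha \beta},
\end{equation}
so $g_{\alpha \beta}(t) \: = \: t \: g_{\alpha \beta}(1)$.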
\begin{remark} \label{4.75} Equations (\ref{4.65}) were called the harmonic-Einstein equations in \cite{Lott (2007)}, where they were used as an ansatz to construct expanding soliton solutions on the total spaces of flat vector bundles. \end{remark}
We now use ${\mathcal W}_+$ to analyze blowdown limits.
\begin{proposition} \label{4.76} Suppose that $(M, \overline{g}(\cdot))$ is a locally ${\mathcal G}$-invariant Ricci flow defined for all $t \in (0, \infty)$. Let $\{ s_i \}_{i=1}^\infty$ be a sequence of positive numbers tending to infinity. Put $\overline{g}_i(t) \: = \: s_i^{-1} \: \overline{g}(s_i t)$. Suppose that $\lim_{i \rightarrow \infty} \overline{g}_i(\cdot)$ exists and equals $\overline{g}_\infty(\cdot)$ in the sense of Subsection \ref{subsection4.1}, for a locally ${\mathcal G}$-invariant Ricci flow $\overline{g}_\infty(\cdot)$ with a compact base $B_\infty$, defined for $t \in (0, \infty)$. Writing $\overline{g}_\infty(\cdot) \: \equiv \: (G_{ij,\infty}(\cdot), A^i_{\alpha,\infty}(\cdot), g_{\alpha \beta,\infty}(\cdot))$, we conclude that \\ 1. $F^i_{\alpha \beta,\infty} \: = \: 0$. \\ 2. $\det(G_{ij,\infty})$ is constant. \\ 3. Equations (\ref{4.65}) are satisfied for $G_{ij,\infty}(\cdot)$ and $g_{\alpha \beta,\infty}(\cdot)$. \end{proposition} \begin{proof} The proof is along the lines of the proof of Proposition \ref{4.39}. \end{proof}
We now look at some special solutions of (\ref{4.65}). Recall that $\rho \: : \: \pi_1(B, b) \rightarrow \operatorname{SL}(N, {\mathbb R})$ is the holonomy representation.
\begin{proposition} \label{4.77} Under the assumptions of Proposition \ref{4.76}, if $N = 0$ then $\overline{g}_\infty(t) = t g_{Ein}$, where $g_{Ein}$ is an Einstein metric on $M = B$ with Einstein constant $- \: \frac12$. If $N = 1$ then $\overline{g}_\infty(t)$ is locally an isometric product of ${\mathbb R}$ or $S^1$ with $(B, t g_{Ein})$, where $g_{Ein}$ is an Einstein metric on $B$ with Einstein constant $- \: \frac12$. For any $N$, if $\dim(B) = 1$ then with an appropriate choice of section $s$, we can locally write $G_{ij}(b) \: = \: (e^{bX})_{ij}$ and $g_B \: = \: \frac{t}{2} \: \operatorname{Tr}(X^2) \: db^2$, where $X$ is a real diagonal $(N \times N)$-matrix with vanishing trace.
If $\dim(B) = 2$ and $N = 2$ then $B$ has negative Euler characteristic. Also, either \\ 1. $\overline{g}$ is a locally product metric and $B$ has sectional curvature $- \: \frac{1}{2t}$, or \\ 2. $\rho$ fixes no point of the boundary of $\operatorname{SL}(2, {\mathbb R})/\operatorname{SO}(2) = H^2$ and, with the right choice of orientation of $\widetilde{B}$, the map $\widetilde{G} \: : \: \widetilde{B} \rightarrow H^2$ is holomorphic.
If $\dim(B) = 2$ and $N =2$ then we can consider $\widetilde{G}$ to be a $\rho$-equivariant harmonic map $u \: : \: \widetilde{B} \rightarrow H^2$. Choosing an orientation of $\widetilde{B}$, we use a local complex coordinate $z$ on $\widetilde{B}$. There is a solution to the first equation in (\ref{4.65}) if and only if the representation $\rho \: : \: \pi_1(B) \rightarrow \operatorname{SL}(2, {\mathbb R})$ is not conjugate to a (nondiagonal) representation by upper triangular matrices \cite{Jost-Zuo (1997),Labourie (1991)}. If there is a solution to the first equation in (\ref{4.65}) then looking at the $dz^2$-component of the second equation in (\ref{4.65}) gives \begin{equation} \label{4.77.5} u_{z} \overline{u_{\overline{z}}} = 0. \end{equation}
We consider the subset of $\partial H^2$, the boundary at infinity of $H^2$, which is pointwise fixed by $\operatorname{Im}(\rho)$. It is either all of $\partial H^2$, two points in $\partial H^2$, one point in $\partial H^2$ or the empty set. If all of $\partial H^2$ is fixed by $\operatorname{Im}(\rho)$ then $\rho$ is the identity representation, $u$ descends to a harmonic function on $B$ (which must be constant) and $B$ has constant sectional curvature $- \: \frac{1}{2t}$. If $\operatorname{Im}(\rho)$ fixes exactly two points of $\partial H^2$ then $\rho$ is conjugate to a diagonal representation and $u$ maps to a nontrivial geodesic in $H^2$. We can assume that $u$ is real-valued. Then equation (\ref{4.77.5}) implies that $u$ is constant, which is a contradiction. As has been said, there is no solution to the first equation in (\ref{4.65}) if $\operatorname{Im}(\rho)$ fixes a single point of $\partial H^2$. Finally, suppose that $\operatorname{Im}(\rho)$ fixes no point of $\partial H^2$. Then $u$ is constant or $du$ has generic rank two. If $u$ is constant then $\overline{g}$ is a locally product metric. Suppose that $u$ is nonconstant. As $du$ has generic rank $2$, equation (\ref{4.77.5}) implies that $u$ is holomorphic or antiholomorphic. If $u$ is antiholomorphic then we change the orientation of $\widetilde{B}$ to make $u$ holomorphic. As $u$ is nonconstant, Liouville's theorem implies that $B$ has negative Euler characteristic. \end{proof}
\begin{remark} \label{4.78} The solutions with $\dim(B) = 1$, $G_{ij}(b) \: = \: (e^{bX})_{ij}$ and $g_B \: = \: \frac{t}{2} \: \operatorname{Tr}(X^2) \: db^2$ are generalized Sol-solutions. \end{remark}
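As a concrete instance (a minimal illustrative example, with one choice of normalization): take $N = 2$ and $X = \operatorname{diag}(1, -1)$, so that $G_{ij}(b) \: = \: \operatorname{diag}(e^b, e^{-b})$ and $g_B \: = \: t \: db^2$. One checks (\ref{4.65}) directly: for the first equation, $G'' \: = \: X^2 \: G \: = \: G' \: G^{-1} \: G'$, while for the second, $R_{\alpha \beta} = 0$ in dimension one and $\frac14 \: \operatorname{Tr} \left( (G^{-1} G')^2 \right) \: = \: \frac14 \: \operatorname{Tr}(X^2) \: = \: \frac12 \: = \: \frac{1}{2t} \: g_{bb}$. The corresponding invariant metric $\overline{g} \: = \: e^b \: (dx^1)^2 \: + \: e^{-b} \: (dx^2)^2 \: + \: t \: db^2$ is, up to normalization, the expanding soliton associated with Sol geometry.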
\begin{remark} When $\dim(B) = 2$ and $N = 2$, the equations (\ref{4.65}) arose independently in the paper \cite{Song-Tian (2007)} on K\"ahler-Ricci flow. In that paper, which is in the holomorphic setting, the map $G$ arises as the classifying map for the torus bundle of an elliptic fibration. The term $\frac14 \: G^{ij} \: G_{jk,\alpha} \: G^{kl} \: G_{li,\beta}$ of (\ref{4.65}) is called the Weil-Petersson term. The second equation of (\ref{4.65}), in the K\"ahler case, is considered to be a generalized K\"ahler-Einstein equation for the geometry of a collapsing limit. \end{remark}
\begin{remark} \label{4.79} All of the results of this section extend to the case when $B$ is an orbifold, $E$ is a flat orbifold ${\mathcal G}$-bundle over $B$, a manifold $M$ is the total space of an orbifold fiber bundle $\pi \: : \: M \rightarrow B$ and ${\mathcal G}$ acts locally freely on $M$ (via a map $E \times_B M \rightarrow M$) with orbifold quotient $B$. \end{remark}
\section{Equivalence classes of \'etale groupoids} \label{section5}
Let ${\frak G}$ be a complete effective path-connected Hausdorff \'etale groupoid that admits an invariant Riemannian metric on the space of units $G^{(0)}$. We assume that \\ 1. ${\frak G}$ equals its closure $\overline{\frak G}$. \\ 2. The local symmetry sheaf $\underline{\frak g}$ of ${\frak G}$ is a locally constant sheaf of abelian Lie algebras isomorphic to ${\mathbb R}^N$.
\begin{example} \label{5.1} Let $M$ be the total space of a twisted abelian principal ${\mathcal G}$-bundle as in Subsection \ref{subsection4.1}. We can take ${\frak G} \: = \: E \times_B M$, where the flat bundle $E$ has the \'etale topology, with ${\frak G}^{(0)} = M$. The local symmetry sheaf comes from the flat vector bundle $\pi^* e$ on $M$.
We can perform a similar construction in the setting of Remark \ref{4.79}, where $M$ is a manifold and $B$ is an orbifold. \end{example}
The results of Section \ref{section4} extend to the setting of a Ricci flow on ${\frak G}$, under the analogous curvature and diameter assumptions, provided that ${\frak G}$ is locally free. The reason is that the local structure of such an \'etale groupoid is the same as the local structure considered in Section \ref{section4} \cite[Corollary 3.2.2]{Haefliger (1985)}. We can then perform the integrals of Section \ref{section4} over the orbit space of ${\frak G}$ and derive the same consequences as in Section \ref{section4}.
It will be useful to determine the global structure of such \'etale groupoids, at least in low dimensions.
\begin{proposition} \label{5.2} Suppose that ${\frak G}$ is locally free. Then the orbit space ${\mathcal O}$ is an orbifold. There is a flat (orbifold) ${\mathbb R}^N$-bundle $e$ on ${\mathcal O}$ associated to ${\frak G}$.
If $\dim({\mathcal O}) = 1$ then ${\frak G}$ is classified by the isomorphism class of $e$.
In general, if $e$ is trivial then ${\frak G}$ is equivalent to the groupoid of a principal bundle over ${\mathcal O}$. It is classified up to groupoid equivalence by the orbits of $\operatorname{GL}(N, {\mathbb R})$ on $\operatorname{H}^2({\mathcal O}; {\mathbb R}^N)$. \end{proposition} \begin{proof} The proof is similar to the classification in \cite{Haefliger-Salem (1988)} of the transverse structure of Riemannian foliations with low-codimension leaves. (As the paper \cite{Haefliger-Salem (1988)} considers Riemannian groupoids that may not equal their closure, there is an additional step in \cite{Haefliger-Salem (1988)} which consists of analyzing the restriction of the groupoid to an orbit closure. Since we only deal with \'etale groupoids that equal their closures, we do not have to deal with this complication.)
Given $x \in {\frak G}^{(0)}$, let ${\mathcal O}_x$ be its orbit. There is an invariant neighborhood of the orbit whose groupoid structure is described by \cite[Corollary 3.2.2]{Haefliger (1985)}. In particular, the point in the orbit space ${\mathcal O}$, corresponding to ${\mathcal O}_x$, has a neighborhood $U$ that is homeomorphic to $V/{\frak G}_x^x$, where $V$ is a representation space for the isotropy group ${\frak G}_x^x$. This gives the orbifold structure on the orbit space.
The classification of such \'etale groupoids comes from the bundle theory developed in \cite[Section 2.3]{Haefliger (1985)}, which we now follow. For notation, if $G$ is a topological group then let $G_\delta$ denote $G$ with the discrete topology. Suppose first that the isotropy groups ${\frak G}_x^x$ are trivial, so the orbifold ${\mathcal O}$ is a manifold. Let $U \subset {\mathcal O}$ be a neighborhood of ${\mathcal O}_x$ as above. Let $\pi \: : \: {\frak G}^{(0)} \rightarrow {\mathcal O}$ be the quotient map. By \cite[Corollary 3.2.2]{Haefliger (1985)}, the restriction of ${\frak G}$ to $\pi^{-1}(U)$ is equivalent to the cross-product groupoid $({\mathbb R}^N \times U) \rtimes {\mathbb R}^N_\delta$, where ${\mathbb R}^N_\delta$ acts on ${\mathbb R}^N$ by translation and acts trivially on $U$. This gives the local structure of ${\frak G}$. It remains to determine the possible ways to glue these local structures together.
To follow the notation of \cite[Section 2.1]{Haefliger (1985)}, put $\Gamma = {\mathbb R}^N_\delta \subset \operatorname{Diff}({\mathbb R}^N)_\delta$. The normalizer $N^\Gamma$ of $\Gamma$ in $\operatorname{Diff}({\mathbb R}^N)$ is ${\mathbb R}^N \widetilde{\times} \operatorname{GL}(N, {\mathbb R})$ and the centralizer is $C^\Gamma = {\mathbb R}^N$. We give $N^\Gamma$ the topology ${\mathbb R}^N \widetilde{\times} \operatorname{GL}(N, {\mathbb R})_\delta$.
Following the discussion in \cite[Section 2.1]{Haefliger (1985)}, suppose that $U \subset {\mathcal O}$ is an open set. Consider the cross-product groupoid $({\mathbb R}^N \times U) \rtimes {\mathbb R}^N_\delta$. Let ${\mathcal E}(U)$ be the self-equivalences of $({\mathbb R}^N \times U) \rtimes {\mathbb R}^N_\delta$ that project onto the identity of $U$. This forms a sheaf $\underline{\mathcal E}$ on ${\mathcal O}$. We can cover ${\mathcal O}$ by open sets $U$ such that $\pi^{-1}(U)$ is equivalent to $({\mathbb R}^N \times U) \rtimes {\mathbb R}^N_\delta$. It follows that the \'etale groupoids in question are classified by the set $\operatorname{H}^1({\mathcal O}; \underline{\mathcal E})$ \cite[Proposition 2.3.2]{Haefliger (1985)}.
To compute $\operatorname{H}^1({\mathcal O}; \underline{\mathcal E})$, let $\underline{{\mathbb R}^N}$ be the sheaf on ${\mathcal O}$ for which $\underline{{\mathbb R}^N}(U)$ consists of smooth maps $U \rightarrow {\mathbb R}^N$, let ${\mathbb R}^N_\delta$ (also) denote the constant sheaf on ${\mathcal O}$ with stalk ${\mathbb R}^N_\delta$ and let $\operatorname{GL}(N, {\mathbb R})_\delta$ (also) denote the constant sheaf on ${\mathcal O}$ with stalk $\operatorname{GL}(N, {\mathbb R})_\delta$. As in \cite[(2.4.2)]{Haefliger (1985)} there is a short exact sequence of sheaves \begin{equation} \label{5.3} 0 \longrightarrow \underline{{\mathbb R}^N}/{\mathbb R}^N_\delta \longrightarrow \underline{\mathcal E} \longrightarrow \operatorname{GL}(N, {\mathbb R})_\delta \longrightarrow 0. \end{equation}
From \cite[Th\'eor\`eme 1.2]{Frenkel (1957)}, this short exact sequence of sheaves gives rise to an exact sequence of pointed sets \begin{equation} \label{5.4} \ldots \longrightarrow \operatorname{H}^0({\mathcal O}; \operatorname{GL}(N, {\mathbb R})_\delta) \longrightarrow \operatorname{H}^1({\mathcal O}; \underline{{\mathbb R}^N}/{\mathbb R}^N_\delta) \longrightarrow \operatorname{H}^1({\mathcal O}; \underline{\mathcal E}) \longrightarrow \operatorname{H}^1({\mathcal O}; \operatorname{GL}(N, {\mathbb R})_\delta). \end{equation} The set $\operatorname{H}^1({\mathcal O}; \operatorname{GL}(N, {\mathbb R})_\delta)$ is the same as the set of homomorphisms $\pi_1({\mathcal O}) \rightarrow \operatorname{GL}(N, {\mathbb R})$ modulo conjugation by elements of $\operatorname{GL}(N, {\mathbb R})$ or, equivalently, the set of equivalence classes of flat ${\mathbb R}^N$-vector bundles on ${\mathcal O}$. The image of the classifying element of ${\frak G}$, under the map $\operatorname{H}^1({\mathcal O}; \underline{\mathcal E}) \longrightarrow \operatorname{H}^1({\mathcal O}; \operatorname{GL}(N, {\mathbb R})_\delta)$, classifies the flat ${\mathbb R}^N$-vector bundle $e$ mentioned in Proposition \ref{5.2}. More explicitly, the transition functions of $e$ come from the image under $\underline{\mathcal E} \longrightarrow \operatorname{GL}(N, {\mathbb R})_\delta$ of the transition functions of ${\frak G}$.
The short exact sequence \begin{equation} \label{5.5} 0 \longrightarrow {\mathbb R}^N_\delta \longrightarrow \underline{{\mathbb R}^N} \longrightarrow \underline{{\mathbb R}^N}/{\mathbb R}^N_\delta \rightarrow 0 \end{equation} of sheaves of abelian groups gives a long exact sequence \begin{equation} \label{5.6} \ldots \longrightarrow \operatorname{H}^1({\mathcal O}; \underline{{\mathbb R}^N}) \longrightarrow \operatorname{H}^1({\mathcal O}; \underline{{\mathbb R}^N}/{\mathbb R}^N_\delta) \longrightarrow \operatorname{H}^2({\mathcal O}; {\mathbb R}^N_\delta) \longrightarrow \operatorname{H}^2({\mathcal O}; \underline{{\mathbb R}^N}) \longrightarrow \ldots \end{equation} of abelian groups. As $\underline{{\mathbb R}^N}$ is a fine sheaf, it follows from (\ref{5.6}) that $\operatorname{H}^1({\mathcal O}; \underline{{\mathbb R}^N}/{\mathbb R}^N_\delta) \cong \operatorname{H}^2({\mathcal O}; {\mathbb R}^N_\delta) \: = \: \operatorname{H}^2({\mathcal O}; {\mathbb R}^N)$.
As $\operatorname{H}^0$ consists of global sections, (\ref{5.4}) gives an exact sequence of pointed sets \begin{equation} \label{5.7} \operatorname{GL}(N, {\mathbb R}) \longrightarrow \operatorname{H}^2({\mathcal O}; {\mathbb R}^N) \longrightarrow \operatorname{H}^1({\mathcal O}; \underline{\mathcal E}) \longrightarrow \operatorname{H}^1({\mathcal O}; \operatorname{GL}(N, {\mathbb R})_\delta). \end{equation}
If $\dim({\mathcal O}) = 1$ then from (\ref{5.7}), the map $\operatorname{H}^1({\mathcal O}; \underline{\mathcal E}) \longrightarrow \operatorname{H}^1({\mathcal O}; \operatorname{GL}(N, {\mathbb R})_\delta)$ is injective. Thus ${\frak G}$ is determined up to groupoid equivalence by the isomorphism class of the flat vector bundle $e$.
If ${\mathcal O}$ has arbitrary dimension, suppose that $e$ is trivial. Consider the preimage under $\operatorname{H}^1({\mathcal O}; \underline{\mathcal E}) \longrightarrow \operatorname{H}^1({\mathcal O}; \operatorname{GL}(N, {\mathbb R})_\delta)$ of the element in $\operatorname{H}^1({\mathcal O}; \operatorname{GL}(N, {\mathbb R})_\delta)$ corresponding to the identity representation. By (\ref{5.7}), this preimage can be identified with the orbit space for the action of $\operatorname{GL}(N, {\mathbb R})$ on $\operatorname{H}^2({\mathcal O}; {\mathbb R}^N)$. Any such orbit contains an element of $\operatorname{Im}( \operatorname{H}^2({\mathcal O}; {\mathbb Z}^N) \rightarrow \operatorname{H}^2({\mathcal O}; {\mathbb R}^N))$, which implies that ${\frak G}$ is equivalent to the \'etale groupoid arising from some principal $T^N$-bundle on ${\mathcal O}$.
The preceding considerations extend to the case when the (finite) isotropy groups ${\frak G}^x_x$ are not all trivial. In that case, ${\mathcal O}$ is an orbifold and the argument extends to the orbifold setting. For example, $\operatorname{H}^*({\mathcal O}; {\mathbb R}^N)$ has to be interpreted as an orbifold cohomology group. \end{proof}
\begin{remark} \label{5.8} If one starts with an (untwisted) principal ${\mathcal G}$-bundle, with ${\mathcal G}$ abelian, then the triviality of the corresponding \'etale groupoid is determined by whether or not the de Rham cohomology classes $\{ [F^i] \}_{i=1}^N$ vanish in $\operatorname{H}^2(B; {\mathbb R}^N)$.
Suppose that the \'etale groupoid is nontrivial and $\{\overline{g}_j\}_{j=1}^\infty$ is a sequence of invariant metrics on the principal ${\mathcal G}$-bundle, so that there is a limiting invariant metric $\overline{g}_\infty$. It is possible that the curvatures $\{F^i\}_{i=1}^N$ approach zero in norm as $j \rightarrow \infty$. If this is the case then $\overline{g}_\infty$ will live on a distinct \'etale groupoid, as its curvature $\{F^i\}_{i=1}^N$ vanishes. This phenomenon occurs in the rescaled Ricci flow on the unit circle bundle of a surface of constant negative curvature.
On the other hand, if we start with a trivial \'etale groupoid and $\{\overline{g}_j\}_{j=1}^\infty$ is a noncollapsing sequence of invariant metrics on the principal ${\mathcal G}$-bundle then any limiting invariant metric $\overline{g}_\infty$ will necessarily be on the same \'etale groupoid.
The relevance of Proposition \ref{5.2} is that for \'etale groupoids which satisfy its hypotheses, we can discuss convergence of Ricci flow solutions on such \'etale groupoids in terms of convergence of invariant Ricci flow solutions on twisted principal bundles. \end{remark}
\begin{example} \label{5.9} Suppose that $M$ is the total space of a principal $S^1$-bundle over a compact oriented surface $B$. Given a subgroup ${\mathbb Z}_k \subset S^1$, let $M/{\mathbb Z}_k$ be the quotient space. It is also the total space of a principal $S^1$-bundle over $B$.
The (discrete) $S^1$-action on a principal $S^1$-bundle gives an \'etale groupoid. The map $M \rightarrow M/{\mathbb Z}_k$ gives an equivalence of \'etale groupoids, in the sense of \cite[Chapter III.${\mathcal G}$.2.4]{Bridson-Haefliger}. However, the Euler class of the circle bundle $M/{\mathbb Z}_k \rightarrow B$ is $k$ times that of the circle bundle $M \rightarrow B$. This shows that the Euler class of the circle bundle is not an invariant of the groupoid equivalence class. Instead, all that is relevant is whether or not the rational Euler class vanishes. \end{example}
\begin{example} \label{5.10} If $\dim({\mathcal O}) = 1$ then any homomorphism $\alpha \: : \: \pi_1({\mathcal O}) \rightarrow \operatorname{GL}(N, {\mathbb R})$ gives rise to an \'etale groupoid with unit space ${\frak G}^{(0)} = {\mathbb R}^N \times_\alpha \widetilde{\mathcal O}$.
If ${\mathcal O}$ is a closed orientable $2$-dimensional orbifold then $\operatorname{H}^2({\mathcal O}; {\mathbb R}^N) \cong {\mathbb R}^N$ and the action of $\operatorname{GL}(N, {\mathbb R})$ on $\operatorname{H}^2({\mathcal O}; {\mathbb R}^N)$ has two orbits, namely the zero element and the nonzero elements. Thus if $e$ is trivial then there are two equivalence classes of such groupoids with orbit space ${\mathcal O}$, one corresponding to a vanishing ``Euler class'' and one corresponding to a nonvanishing ``Euler class''. \end{example}
Suppose that $M$ is the total space of a twisted principal ${\mathbb R}^N$-bundle. Let $\overline{g}$ be an invariant metric on $M$. We recall that there are two distinct connections in this situation, the flat connection on the twisting bundle ${\mathcal E}$ and the connection $A$ on the twisted principal bundle. We will use the following lemma later.
\begin{lemma} \label{5.11} Let $\pi \: : \: M \rightarrow B$ be a twisted principal ${\mathbb R}^N$-bundle. Given $G_{ij}$ and $g_{\alpha \beta}$, let $A_1$ and $A_2$ be two flat connections on $M$. Let $\overline{g}_1$ and $\overline{g}_2$ be the corresponding invariant metrics on $M$. Then their underlying Riemannian groupoids are equivalent. \end{lemma} \begin{proof} Let $\{U_i\}$ be a covering of $B$ by open contractible sets. Let ${\mathcal U} = \{ \pi^{-1}(U_i) \}$ be the corresponding covering of $M$ and let ${\frak G}_{\mathcal U}$ be the localization of ${\frak G}$ \cite[Section 5.2]{Lott (2007)}. In our case, elements of ${\frak G}_{\mathcal U}$ are quadruples $(i,p_i,p_j,j)$ with $p_i \in \pi^{-1}(U_i)$, $p_j \in \pi^{-1}(U_j)$ and $\pi(p_i) = \pi(p_j)$. The multiplication is $(i,p_i,p_j,j) \cdot (j,p_j,p_k,k) \: = \: (i,p_i,p_k,k)$. The units ${\frak G}_{\mathcal U}^{(0)}$ are quadruples $(i,p_i,p_i, i)$ and the source and range maps are $s(i,p_i,p_j,j) = (j,p_j,p_j,j)$ and $r(i,p_i,p_j,j) = (i,p_i,p_i,i)$. Let $s^1_i \: : \: U_i \rightarrow \pi^{-1}(U_i)$ be a section for which $(s^1_i)^* A_1 = 0$. Similarly, let $s^2_i \: : \: U_i \rightarrow \pi^{-1}(U_i)$ be a section for which $(s^2_i)^* A_2 = 0$. Define a map $F \: : \: {\frak G}_{\mathcal U} \rightarrow {\frak G}_{\mathcal U}$ by $F(i,p_i,p_j,j) = (i, p_i + s^2_i(u_i) - s^1_i(u_i), p_j + s^2_j(u_j) - s^1_j(u_j), j)$, where $u_i = \pi(p_i)$, $u_j = \pi(p_j)$ and we write the action of ${\mathbb R}^N$ additively. Then $F$ is a groupoid isomorphism. On the space of units, $F(i,p_i,p_i,i) = (i, p_i + s^2_i(u_i) - s^1_i(u_i), p_i + s^2_i(u_i) - s^1_i(u_i), i)$ and so $F$ sends the section $s^1_i$ to $s^2_i$. It follows that $F$ is an isomorphism of Riemannian groupoids. \end{proof}
\section{Convergence arguments and universal covers} \label{section6}
In this section we prove Theorem \ref{1.2}. In Subsection \ref{subsection6.1} we prove convergence to a locally homogeneous Ricci flow on an \'etale groupoid. In Subsection \ref{subsection6.2} we promote this to convergence on the universal cover of $M$.
\subsection{Convergence arguments} \label{subsection6.1}
In this subsection we show that under the hypotheses of Theorem \ref{1.2}, there is a rescaling limit which is a locally homogeneous expanding soliton solution on an \'etale groupoid. To do this, if $\overline{\mathcal O}$ is the closure of the orbit of $g(\cdot)$ under the action of the parabolic rescaling semigroup ${\mathbb R}^{\ge 1}$ then we define a stratification of $\overline{\mathcal O}$ in terms of the number of local symmetries. We let $k_0$ be the maximal number of local symmetries that can occur in a rescaling limit of $g(\cdot)$. This corresponds to a maximally collapsed limit. The first step is to show that $k_0$ determines the Thurston type of $M$, and that there is a sequence of rescalings of $g(\cdot)$ which approaches the corresponding locally homogeneous expanding soliton.
In order to show that any rescaling limit $\overline{g}(\cdot)$ is a locally homogeneous expanding soliton (except possibly in the $\widetilde{\operatorname{SL}_2({\mathbb R})}$ case), we use further arguments. We show that any rescaling limit has $k_0$ local symmetries. We then use a compactness argument, along with the local stability of the space of expanders, to show that $\overline{g}(\cdot)$ is a locally homogeneous expanding soliton.
Let $g(\cdot)$ be a Ricci flow solution on a connected closed $3$-manifold $M$, defined for $t \in (1, \infty)$, with $\sup_{t \in (1, \infty)} t \parallel \operatorname{Riem}(g(t)) \parallel_\infty \: \le \: K \: < \: \infty$ and $\sup_{t \in (1, \infty)} t^{- \: \frac12} \: \operatorname{diam}(g(t)) \: \le \: D \: < \: \infty$. From Proposition \ref{3.5}, $M$ has a single geometric piece. Given $s \in [1, \infty)$, put $g_s(t) \: = \: \frac{1}{s} \: g(st)$. Then for all $s$, we have $\sup_{t \in (1, \infty)} t \parallel \operatorname{Riem}(g_s(t)) \parallel_\infty \: \le \: K$ and $\sup_{t \in (1, \infty)} t^{- \: \frac12} \: \operatorname{diam}(g_s(t)) \: \le \: D$. By Proposition \ref{3.2}, the family of Ricci flow solutions $\{g_s(\cdot)\}_{s \in [1, \infty)}$ is sequentially precompact among Ricci flow solutions on \'etale groupoids.
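The invariance of these bounds under parabolic rescaling is a routine computation, which we record for the reader's convenience: since the Ricci tensor is invariant under constant rescalings of the metric, $g_s(\cdot)$ is again a Ricci flow solution, and
\begin{align*}
t \parallel \operatorname{Riem}(g_s(t)) \parallel_\infty \: &= \: t \cdot s \parallel \operatorname{Riem}(g(st)) \parallel_\infty \: = \: (st) \parallel \operatorname{Riem}(g(st)) \parallel_\infty \: \le \: K, \\
t^{- \: \frac12} \: \operatorname{diam}(g_s(t)) \: &= \: t^{- \: \frac12} \: s^{- \: \frac12} \: \operatorname{diam}(g(st)) \: = \: (st)^{- \: \frac12} \: \operatorname{diam}(g(st)) \: \le \: D.
\end{align*}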
Let $\overline{\mathcal O}$ be the sequential closure of the forward orbit $\{g_s(\cdot)\}_{s \in [1, \infty)}$. Let $\overline{\mathcal O}_{(k)}$ be the elements of $\overline{\mathcal O}$ with a $k$-dimensional local symmetry sheaf $\underline{\frak g}$.
\begin{lemma} \label{6.1} If $\widehat{g}(\cdot) \in \overline{\mathcal O}$ then the underlying \'etale groupoid of $\widehat{g}(\cdot)$ is locally free. \end{lemma} \begin{proof} If $\widehat{g}(\cdot) \in \overline{\mathcal O}_{(0)}$ then there is nothing to show. If $\widehat{g}(\cdot) \in \overline{\mathcal O}_{(1)}$ then the lemma follows from the fact that there is no point $x \in {\frak G}^{(0)}$ where the local Killing vector fields vanish simultaneously.
Suppose that $\widehat{g}(\cdot) \in \overline{\mathcal O}_{(2)}$. Write $\widehat{g}(\cdot) = \lim_{j \rightarrow \infty} (M, g_{s_j^\prime}(\cdot))$ for some sequence $\{s_j^\prime\}_{j=1}^\infty$ tending to infinity. By \cite{Cheeger-Fukaya-Gromov (1992)}, for any $\epsilon > 0$, there is an integer $J_\epsilon < \infty$ so that if $j \ge J_\epsilon$ then there is a locally $T^2$-invariant Riemannian metric $g^\prime_j$ on $M$ which is $\epsilon$-close in the $C^1$-topology to $\frac{1}{s_j^\prime} g(s_j^\prime)$. Furthermore, one can take the sectional curvature of $g^\prime_j$ to be uniformly bounded in $\epsilon$ \cite[Theorem 2.1]{Rong (1996)}. The collapsing is along the $T^2$ fibers. Taking a sequence of values of $\epsilon$ going to zero and choosing $j \ge J_\epsilon$, after passing to a subsequence we can say that $\widehat{g}(1) = \lim_{j \rightarrow \infty} (M, g^\prime_j)$.
Let $S$ be the orbit space of the \'etale groupoid. It is a circle or an interval. If $S$ is a circle then $M$ is the total space of a $T^2$-bundle over $S^1$. (The fibers cannot be Klein bottles since $M$ is orientable.) Hence the local $T^2$-action on $(M, g^\prime_j)$ is free. Let $H \in \operatorname{SL}(2, {\mathbb Z})$ be the holonomy of the $T^2$-bundle, defined up to conjugation in $\operatorname{SL}(2, {\mathbb Z})$. Given $M$, there is a finite number of possibilities for $H$, as follows from \cite[pp. 439,469-470,481-482]{Scott (1983)}. After passing to a subsequence, we can assume that there is a single such $H$. For each $j$, the $T^2$-bundle with invariant metric $g^\prime_j$ is the total space of a twisted principal $T^2$-bundle over $S^1$, where the twisting bundle $E$ is a flat $T^2$-bundle on $S^1$ with holonomy $H$. From Proposition \ref{5.2}, for all $j$ these give rise to equivalent \'etale groupoids. Looking at how one constructs the limiting Riemannian groupoid as $j \rightarrow \infty$ \cite[Proposition 5.9]{Lott (2007)}, it follows that $\widehat{g}(\cdot)$ is defined on this same \'etale groupoid. In particular, it is locally free.
If $S$ is an interval then as in the proof of Proposition \ref{3.5}, the asphericity of $M$ implies that the local $T^2$-action on $M$ is locally free. Then $M$ is the total space of an orbifold $T^2$-bundle over the orbifold $S$. As $S$ is double covered by a circle, we can take a double cover $\widehat{M}$ of $M$ which is the total space of a $T^2$-bundle over $S^1$. Applying the preceding argument ${\mathbb Z}_2$-equivariantly to $\widehat{M}$, we conclude that the underlying \'etale groupoid of $\widehat{g}(\cdot)$ is again locally free.
Finally, suppose that $\widehat{g}(\cdot) \in \overline{\mathcal O}_{(3)}$. Write $\widehat{g}(\cdot) = \lim_{j \rightarrow \infty} (M, g_{s_j^\prime}(\cdot))$ for some sequence $\{s_j^\prime\}_{j=1}^\infty$ tending to infinity. Then the orbit space $S$ of the \'etale groupoid is a point and $\left\{ \left( M, \frac{1}{s_j^\prime} g(s_j^\prime) \right) \right\}_{j=1}^\infty$ Gromov-Hausdorff converges, with bounded sectional curvature, to a point. That is, $M$ is almost flat and so is an infranilmanifold \cite{Gromov (1978)}. There is a finite normal cover $M_0$ of $M$ which is diffeomorphic to a flat manifold or a nilmanifold. Let $g_0(\cdot)$ be the lift of $g(\cdot)$ to $M_0$ and let $\widehat{g}_0(\cdot)$ be the corresponding limiting Ricci flow on an \'etale groupoid, with $\widehat{g}(\cdot)$ as a finite quotient. By \cite{Cheeger-Fukaya-Gromov (1992)}, for any $\epsilon > 0$, there is an integer $J_\epsilon < \infty$ so that if $j \ge J_\epsilon$ then there is a left-invariant Riemannian metric $g^\prime_j$ on $M_0$, of ${\mathbb R}^3$ or $\operatorname{Nil}$-type, which is $\epsilon$-close in the $C^1$-topology to $\frac{1}{s_j^\prime} g_0(s_j^\prime)$. Furthermore, one can take the sectional curvature of $g^\prime_j$ to be uniformly bounded in $\epsilon$ \cite[Theorem 2.1]{Rong (1996)}. The collapsing is along all of $M_0$. Taking a sequence of values of $\epsilon$ going to zero and choosing $j \ge J_\epsilon$, after passing to a subsequence we can say that $\widehat{g}_0(1) = \lim_{j \rightarrow \infty} (M_0, g^\prime_j)$. Looking at how one constructs the limiting Riemannian groupoid as $j \rightarrow \infty$ \cite[Proposition 5.9]{Lott (2007)}, it follows that the underlying \'etale groupoid of $\widehat{g}_0(1)$ is a cross-product groupoid ${\mathbb R}^3 \rtimes {\mathbb R}^3_\delta$ or $\operatorname{Nil} \rtimes \operatorname{Nil}_\delta$, where $\delta$ denotes the discrete topology.
Hence the underlying \'etale groupoid of $\widehat{g}(\cdot)$ is locally free. \end{proof}
The relevance of Lemma \ref{6.1} is that it allows us to use Proposition \ref{4.76} to analyze blowdown limits of $\widehat{g}(\cdot)$.
Let $k_0$ be the largest $k$ so that $\overline{\mathcal O}_{(k)}$ is nonempty. For simplicity of terminology, we will say that a Ricci flow on an \'etale groupoid is a locally homogeneous expanding soliton if there is some homogeneous expanding soliton to which the Ricci flow on the unit space of the \'etale groupoid is locally isometric.
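To fix terminology, we recall the standard soliton equation behind this definition, stated here in the manifold setting (on a groupoid one works instead with invariant vector fields): a Ricci flow of the form $\overline{g}(t) = t \: \phi_t^* \: \overline{g}(1)$, for a one-parameter family of diffeomorphisms $\phi_t$ with $\phi_1 = \operatorname{Id}$, is an expanding soliton. Differentiating at $t = 1$ and using $\partial_t \overline{g} = -2 \operatorname{Ric}(\overline{g})$ gives
\begin{equation*}
\operatorname{Ric}(\overline{g}(1)) \: + \: \frac12 \: {\mathcal L}_X \overline{g}(1) \: + \: \frac12 \: \overline{g}(1) \: = \: 0,
\end{equation*}
where $X = \frac{d}{dt} \Big|_{t=1} \phi_t$.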
\begin{proposition} \label{6.2} If $k_0 \: = \: 0$ then $M$ admits an $H^3$-structure. If $k_0 \: = \: 1$ then $M$ admits an $H^2 \times {\mathbb R}$ or $\widetilde{\operatorname{SL}_2({\mathbb R})}$-structure. If $k_0 \: = \: 2$ then $M$ admits a $\operatorname{Sol}$-structure. If $k_0 \: = \: 3$ then $M$ admits an ${\mathbb R}^3$ or $\operatorname{Nil}$-structure.
In any case, there is a sequence $\{s_j\}_{j=1}^\infty$ tending to infinity so that $\lim_{j \rightarrow \infty} (M, g_{s_j}(\cdot))$ exists as a Ricci flow solution on an \'etale groupoid, and is a locally homogeneous expanding soliton of type \begin{itemize} \item $H^3$ if $k_0 = 0$, \item $H^2 \times {\mathbb R}$ if $k_0 = 1$, \item $\operatorname{Sol}$ if $k_0 = 2$, \item ${\mathbb R}^3$ or $\operatorname{Nil}$ if $k_0 = 3$. \end{itemize} \end{proposition} \begin{proof} Given $\widehat{g}(\cdot) \in \overline{\mathcal O}_{(k_0)}$, put $\widehat{g}_s(t) \: = \: \frac{1}{s} \: \widehat{g}(st)$. We claim that the forward orbit $\{ \widehat{g}_s(\cdot)\}_{s \in [1,\infty)}$ is relatively sequentially compact in $\overline{\mathcal O}_{(k_0)}$. To see this, suppose that there is a sequence $\{ \widehat{g}_{s_i}(\cdot)\}_{i=1}^\infty$ having a limit $\widehat{g}^\prime(\cdot)$. We can find a subsequence of $\{(M, g_s(\cdot))\}_{s \in [1, \infty)}$ that converges to $\widehat{g}^\prime(\cdot)$. Thus $\widehat{g}^\prime(\cdot) \in \overline{\mathcal O}$. However, the number of local symmetries cannot decrease in the limit. Hence $\widehat{g}^\prime(\cdot) \in \overline{\mathcal O}_{(k)}$ for some $k \ge k_0$. We must have $k = k_0$, by the definition of $k_0$, which proves the claim.
Let $\{s_i\}_{i=1}^\infty$ be a sequence tending to infinity such that $\lim_{i \rightarrow \infty} \widehat{g}_{s_i}(\cdot) \: = \: \widehat{g}_\infty(\cdot)$ for some $\widehat{g}_\infty(\cdot) \in \overline{\mathcal O}_{(k_0)}$. Let $S$ denote the underlying orbit space of $\widehat{g}_\infty(1)$. There is a sequence $\{s_j^\prime\}_{j=1}^\infty$ tending to infinity so that $\lim_{j \rightarrow \infty} \left( M, g_{s_j^\prime}(\cdot) \right) = \widehat{g}_\infty(\cdot)$. In particular, $\lim_{j \rightarrow \infty} \left( M, \frac{1}{s_j^\prime} g(s_j^\prime) \right) \stackrel{GH}{=} S$.
If $k_0 = 0$ then by Proposition \ref{4.77}, $(M, \widehat{g}_\infty(\cdot))$ is the Ricci flow on a manifold of constant negative sectional curvature.
If $k_0 = 1$ then $S$ is a closed two-dimensional orbifold. Taking a double cover if necessary, we can assume that $S$ is orientable. From Proposition \ref{5.2}, we can assume that the underlying \'etale groupoid comes from an orbifold principal $S^1$-bundle on $S$. (The triviality of $e$ comes from its identification with $\operatorname{H}^1$ of the circle fiber of the orbifold bundle $M \rightarrow S$.) By Proposition \ref{4.77}, $\widehat{g}_\infty(\cdot)$ has $(H^2 \times {\mathbb R})$-type and $S$ has a metric of constant curvature $- \: \frac{1}{2t}$. As $M$ is the total space of an orbifold circle bundle over $S$, it follows that $M$ admits an $H^2 \times {\mathbb R}$ or $\widetilde{\operatorname{SL}_2({\mathbb R})}$-structure (using \cite{Meeks-Scott (1986)} if we took a double cover).
If $k_0 = 2$ then $S$ is $S^1$ or an interval $[0,L]$. Suppose first that $S = S^1$. Then $M$ is the total space of a $T^2$-fiber bundle over $S$. Let $H \in \operatorname{SL}(2, {\mathbb Z})$ be the holonomy of the fiber bundle, defined up to conjugacy. As in the proof of Lemma \ref{6.1}, the \'etale groupoid of $\widehat{g}_\infty(\cdot)$ arises from a (twisted) principal $T^2$-bundle on $S^1$. The flat twisting bundle $E$ over $S^1$ has holonomy $H \in \operatorname{SL}(2,{\mathbb Z})$. By Proposition \ref{4.77}, $\widehat{g}_\infty(\cdot)$ has $\operatorname{Sol}$-type and $H$ is a hyperbolic element of $\operatorname{SL}(2, {\mathbb Z})$. Thus $M$ admits a $\operatorname{Sol}$-structure.
Suppose now that $S = [0,L]$. As in the proof of Lemma \ref{6.1}, $M$ is the total space of an orbifold $T^2$-bundle over the orbifold $[0,L]$. A double cover $\widehat{M}$ of $M$ fibers over $S^1$. Running the previous argument on $\widehat{M}$ with the pullback metric, we conclude that $\widehat{M}$ admits a $\operatorname{Sol}$-structure. Hence $M$ admits a $\operatorname{Sol}$-structure \cite{Meeks-Scott (1986)}.
If $k_0 = 3$ then $S$ is a point. Hence $\left\{ \left( M, \frac{1}{s_j^\prime} g(s_j^\prime) \right) \right\}_{j=1}^\infty$ Gromov-Hausdorff converges, with bounded sectional curvature, to a point. As in the proof of Lemma \ref{6.1}, $\widehat{g}_\infty(\cdot)$ is locally homogeneous and has ${\mathbb R}^3$ or $\operatorname{Nil}$ as its local symmetry group. Such a Ricci flow solution is automatically a locally homogeneous expanding soliton. \end{proof}
We have shown that there is some sequence $\{s_j\}_{j=1}^\infty$ tending to infinity so that $\lim_{j \rightarrow \infty} (M, g_{s_j}(\cdot))$ exists and is a locally homogeneous expanding soliton. We now wish to show that this is true for any sequence $\{s_j\}_{j=1}^\infty$ tending to infinity, at least if the Thurston type of $M$ is not $\widetilde{\operatorname{SL}_2({\mathbb R})}$. The first step is to show that under a compactness assumption, there is a parameter $T$ so that if we take any rescaling limit $\overline{g}(\cdot)$ then upon further rescaling of $\overline{g}(\cdot)$, the result is near a locally homogeneous expanding soliton for some rescaling parameter $s \in [1,T]$.
\begin{proposition} \label{6.3} Given $k$, let $C$ be a sequentially compact subset of $\overline{\mathcal O}_{(k)}$. Let $U$ be a neighborhood of \begin{itemize} \item The $H^3$-type locally homogeneous expanding solitons in $\overline{\mathcal O}_{(0)}$ if $k = 0$, \item The $(H^2 \times {\mathbb R})$-type locally homogeneous expanding solitons in $\overline{\mathcal O}_{(1)}$ if $k = 1$, \item The $\operatorname{Sol}$-type locally homogeneous expanding solitons in $\overline{\mathcal O}_{(2)}$ if $k = 2$, \item The ${\mathbb R}^3$-type and $\operatorname{Nil}$-type locally homogeneous expanding solitons in $\overline{\mathcal O}_{(3)}$ if $k = 3$. \end{itemize}
Then there is a $T = T(k,C,U) \in [1, \infty)$ so that for any $\overline{g}(\cdot) \in C$, if $\overline{g}_s(\cdot) \in C$ for all $s \in [1, T]$ then there is some $s \in [1, T]$ such that $\overline{g}_s(\cdot) \in U$. \end{proposition} \begin{proof} Given $k$, $C$ and $U$, suppose that the proposition is not true. Then for each $j \in {\mathbb Z}^+$, there is some $\overline{g}^{(j)}(\cdot) \in C$ so that for each $s \in [1, j]$, $\overline{g}^{(j)}_s(\cdot) \in C$ and $\overline{g}^{(j)}_s(\cdot) \notin U$. Take a convergent subsequence of the $\{\overline{g}^{(j)}(\cdot)\}_{j=1}^\infty$ with limit $\overline{g}^{(\infty)}(\cdot)$. Then for all $s \in [1, \infty)$, we have $\overline{g}_s^{(\infty)}(\cdot) \in C$ and $\overline{g}_s^{(\infty)}(\cdot) \notin U$. By sequential compactness, there is a sequence $\{t_m\}_{m=1}^\infty$ in ${\mathbb Z}^+$ tending to infinity so that $\lim_{m \rightarrow \infty} \overline{g}^{(\infty)}_{t_m}(\cdot)$ exists and equals some $\overline{g}^{(\infty)}_\infty(\cdot) \in C$. By Proposition \ref{4.77}, $\overline{g}^{(\infty)}_\infty(\cdot)$ is a locally homogeneous expanding soliton as in the statement of the present proposition. Then for large $m$, we have $\overline{g}^{(\infty)}_{t_m}(\cdot) \in U$, which is a contradiction. \end{proof}
The next step is to use local stability to say that after rescaling $\overline{g}(\cdot)$ by the parameter $T$, the result is definitely near a locally homogeneous expanding soliton solution.
\begin{proposition} \label{6.4} Suppose that $M$ does not have Thurston type $\widetilde{\operatorname{SL}_2({\mathbb R})}$. Then there are decreasing open sets $\{U_l\}_{l=1}^\infty$ of the type described in Proposition \ref{6.3}, whose intersection is the corresponding set of locally homogeneous expanding soliton solutions, so that under the hypotheses of Proposition \ref{6.3}, if $T_l = T(k,C,U_l)$ then we are ensured that $\overline{g}_{T_l}(\cdot) \in U_l$. (In the case $k = 1$ we restrict to Ricci flow solutions on an \'etale groupoid with vanishing Euler class, so $U_l$ is a neighborhood in the relative topology.) \end{proposition} \begin{proof} This follows from the local stability of the expanding solitons in $\overline{\mathcal O}_{(k)}$. That is, there is a sequence $\{U_l\}_{l=1}^\infty$ of such neighborhoods so that $\overline{g}_{s}(\cdot) \in U_l$ implies that $\overline{g}_{s^\prime}(\cdot) \in U_l$ whenever $s^\prime \ge s$. (In fact, one has exponential convergence to the set of expanding solitons.) The case $k=0$ appears in \cite{Ye (1993)}. The case $k=2$ appears in \cite{Knopf}. In the case $k = 1$, recall from Example \ref{5.10} that there are two relevant types of \'etale groupoids, one with vanishing Euler class and one with nonvanishing Euler class. The locally homogeneous expanding solitons live on \'etale groupoids with vanishing Euler class. Their local stability (modulo the center manifold), among Ricci flows on \'etale groupoids with vanishing Euler class, is shown in \cite{Knopf}. We remark that if $M$ has Thurston type $H^2 \times {\mathbb R}$ then a limit $\lim_{j \rightarrow \infty} (M, g_{s_j}(\cdot))$ can only be a Ricci flow on an \'etale groupoid with vanishing Euler class.
Note that if $k = 1$ then there may be a moduli space of locally homogeneous expanding solitons of type $H^2 \times {\mathbb R}$ in $\overline{\mathcal O}_{(1)}$, corresponding to various metrics of constant curvature $- \: \frac12$ on the orbit space. However, because of our diameter bound, the moduli space is compact. Comparing with \cite{Knopf}, it may appear that there is also a factor in the moduli space consisting of harmonic $1$-forms on the orbit space. However, by Lemma \ref{5.11}, the various harmonic $1$-forms all give equivalent geometries. \end{proof}
\begin{remark} \label{6.5} There is no locally homogeneous expanding soliton solution on a three-dimensional \'etale groupoid of the type considered in Proposition \ref{6.2} if it has an orbifold surface base with negative Euler characteristic, and a nonvanishing Euler class. A Ricci flow on such an \'etale groupoid will have a rescaling sequence that converges to an $(H^2 \times {\mathbb R})$-type expander on an \'etale groupoid with vanishing Euler class.
In order to show convergence of the Ricci flow on a $3$-manifold with Thurston type $\widetilde{\operatorname{SL}_2({\mathbb R})}$, at least by our methods, one would have to show that the expanding solitons of type $H^2 \times {\mathbb R}$ are also locally stable if one considers neighborhoods that include \'etale groupoids with nonvanishing Euler class. The difficulty is that the nearby Ricci flows live on an inequivalent groupoid and so one cannot just linearize around the $(H^2 \times {\mathbb R})$-type expanding solitons. One approach would be to instead consider Ricci flows with $\widetilde{\operatorname{SL}_2({\mathbb R})}$-symmetry on \'etale groupoids with nonzero Euler class and show that this finite-dimensional family is an attractor. \end{remark}
We now show that if $k_0 < 3$ and $M$ does not have Thurston type $\widetilde{\operatorname{SL}_2({\mathbb R})}$ then any rescaling limit $\overline{g}(\cdot)$ is a locally homogeneous expanding soliton. The method of proof is to show that we can rescale $\overline{g}(\cdot)$ backward by a factor $T$, and then apply the previous proposition.
\begin{proposition} \label{6.6} If $k_0 < 3$ and $M$ does not have Thurston type $\widetilde{\operatorname{SL}_2({\mathbb R})}$ then for any sequence $\{s_j\}_{j=1}^\infty$ tending to infinity, as $j \rightarrow \infty$, $(M, g_{s_j}(\cdot))$ approaches the set of locally homogeneous expanding solitons of the type listed in Proposition \ref{6.3}, with $k = k_0$. \end{proposition} \begin{proof} If the proposition is not true then there is a sequence $\{s_j\}_{j=1}^\infty$ tending to infinity and a neighborhood $U_l$ as in Proposition \ref{6.4} so that for all $j$, $g_{s_j}(\cdot) \notin U_l$. After passing to a further subsequence, we can assume that $\lim_{j \rightarrow \infty} g_{s_j}(\cdot) \: = \: \overline{g}(\cdot)$ for some $\overline{g}(\cdot) \in \overline{\mathcal O}$.
If $k_0 = 2$ then from Proposition \ref{6.2}, $M$ admits a $\operatorname{Sol}$-structure. As $M$ cannot collapse with bounded curvature and bounded diameter to something of dimension other than one, $\overline{\mathcal O}_{(0)} = \overline{\mathcal O}_{(1)} = \overline{\mathcal O}_{(3)} = \emptyset$. Then $C = \overline{\mathcal O}_{(2)}$ is sequentially compact and $\overline{g}(\cdot) \in \overline{\mathcal O}_{(2)}$. A similar argument applies in the other cases when $k_0 < 3$ to show that $C = \overline{\mathcal O}_{(k_0)}$ is sequentially compact and $\overline{g}(\cdot) \in \overline{\mathcal O}_{(k_0)}$.
For $s \ge 1$, let $\overline{g}^{(s^{-1})}(\cdot)$ be the limit in $\overline{\mathcal O}$ of a convergent subsequence of $\{g_{s^{-1} s_j}(\cdot)\}_{j=1}^\infty$. Then $\overline{g}(\cdot) \: = \: \overline{g}^{(s^{-1})}_s(\cdot)$. Note that $\overline{g}^{(s^{-1})}(\cdot) \in \overline{\mathcal O}_{(k_0)}$. By Proposition \ref{6.4}, there is a number $T_l \ge 1$ so that for each $s \ge 1$, $\overline{g}^{(s^{-1})}_{T_l}(\cdot) \in U_l$. Taking $s = T_l$, we conclude that $\overline{g}(\cdot) \in U_l$. This is a contradiction. \end{proof}
\begin{corollary} \label{6.7} If $k_0 =0$ or $k_0 = 2$ then $\lim_{s \rightarrow \infty} (M, g_{s}(\cdot))$ exists and is one of the locally homogeneous expanding solitons of the type listed in Proposition \ref{6.3}, with $k = k_0$. \end{corollary} \begin{proof} In these cases, given $M$, there is a unique locally homogeneous expanding soliton of the type listed in Proposition \ref{6.3}, with $k = k_0$. The relationship between the topology of $M$ and the equivalence class of the \'etale groupoid comes from the proof of Lemma \ref{6.1}. If $k_0 = 0$ then $M$ admits a hyperbolic metric and the expander is the solution $\overline{g}(t) = 4t g_{hyp}$, where $g_{hyp}$ is the metric of constant sectional curvature $-1$ on $M$. If $k_0 = 2$ then $M$ is a $\operatorname{Sol}$-manifold. Suppose first that $M$ is the total space of a $T^2$-bundle over $S^1$, with hyperbolic holonomy $H \in \operatorname{SL}(2, {\mathbb Z})$. Then by Remark \ref{4.78}, the expander can be written $\overline{g} = \frac{t}{2} \operatorname{Tr}(X^2) \: db^2 \: + \: (dy)^T e^{bX} dy$, where $b \in [0,1]$ and $e^X = H^T H$. If $M$ fibers over the orbifold $[0,1]$ then the expander is a ${\mathbb Z}_2$-quotient thereof. \end{proof}
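As a consistency check (not part of the proof above), one can verify directly that $\overline{g}(t) = 4t \: g_{hyp}$ solves the Ricci flow equation: in dimension three, constant sectional curvature $-1$ gives $\operatorname{Ric}(g_{hyp}) = -2 \: g_{hyp}$, and since the Ricci tensor is invariant under constant rescalings of the metric,
\begin{equation*}
\frac{\partial}{\partial t} \left( 4t \: g_{hyp} \right) \: = \: 4 \: g_{hyp} \: = \: -2 \: \operatorname{Ric}(g_{hyp}) \: = \: -2 \: \operatorname{Ric}(4t \: g_{hyp}).
\end{equation*}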
In the case $k_0 = 3$, we must show that any rescaling limit $\overline{g}(\cdot)$ has three local symmetries. This does not follow just from topological arguments. The method of proof is to rescale backwards and then apply the monotonicity arguments of Section \ref{section4} to a backward limit.
\begin{proposition} \label{6.8} If $k_0 = 3$ and $\overline{g}(\cdot) \in \overline{\mathcal O}$ is a limit $\lim_{j \rightarrow \infty} (M, g_{s_j}(\cdot))$, for some sequence $\{s_j\}_{j=1}^\infty$ tending to infinity, then $\overline{g}(\cdot) \in \overline{\mathcal O}_{(3)}$. \end{proposition} \begin{proof} Suppose that $\overline{g}(\cdot) \in \overline{\mathcal O}_{(k)}$ with $k < 3$. As in the proof of Proposition \ref{6.6}, for $s \ge 1$, let $\overline{g}^{(s^{-1})}(\cdot)$ be the limit in $\overline{\mathcal O}$ of a convergent subsequence of $\{g_{s^{-1} s_j}(\cdot)\}_{j=1}^\infty$. Then $\overline{g}(\cdot) \: = \: \overline{g}^{(s^{-1})}_s(\cdot)$. More precisely, for each $s \in [1, \infty)$ there is an equivalence $\phi_s$ of groupoids so that \begin{equation} \label{6.9} \overline{g}(t) \: = \: \frac{1}{s} \: \phi_s^* \: \overline{g}^{(s^{-1})}(st). \end{equation} In particular, $\overline{g}^{(s^{-1})}(\cdot) \in \overline{\mathcal O}_{(k)}$. Using (\ref{6.9}), we can extend the domain of definition of $\overline{g}(\cdot)$ to $[s^{-1}, \infty)$ for all $s \ge 1$, and hence to all $t \in (0, \infty)$. We still have the bounds $\sup_{t \in (0, \infty)} t \parallel \operatorname{Riem}(\overline{g}(t)) \parallel_\infty \: \le \: K$ and $\sup_{t \in (0, \infty)} t^{- \: \frac12} \: \operatorname{diam}(\overline{g}(t)) \: \le \: D$.
As in the proof of Proposition \ref{4.76}, we construct a solution $f(t)$ of the conjugate heat equation on the orbit space $S$: \begin{equation} \label{6.10} \frac{\partial(e^{-f})}{\partial t} \: = \: - \: \nabla^2 \: e^{-f} \: + \: \left( R \: - \: \frac14 \: g^{\alpha \beta} \: G^{ij} \: G_{jk, \alpha} \: G^{kl} \: G_{li, \beta} \: - \: \frac12 \: g^{\alpha \gamma} \: g^{\beta \delta} \: G_{ij} \: F^i_{\alpha \beta} \: F^j_{\gamma \delta} \: + \: \frac{n}{2t} \right) e^{-f}, \end{equation} where $n = \dim(S) = 3-k$, that satisfies $(4 \pi t)^{- \: \frac{n}{2}} \: \int_S e^{-f} \: \operatorname{dvol}_S \: = \: 1$ for all $t \in (0, \infty)$. Then ${\mathcal W}_+(G_{ij}(t), A^i_\alpha(t), g_{\alpha \beta}(t), f(t), t)$ is nondecreasing in $t$. From Lemma \ref{4.58}, for $t < 1$ there is a uniform positive lower bound on $t^{- \: \frac{n}{2}} \: \operatorname{vol}(S, g_{\alpha \beta}(t))$. By O'Neill's theorem, the lower sectional curvature bound on $\overline{g}(t)$ implies the same lower sectional curvature bound on $g_{\alpha \beta}(t)$. Hence the orbifolds $(S, t^{-1} g_{\alpha \beta}(t))$ are noncollapsing in the Gromov-Hausdorff sense as $t \rightarrow 0$. It follows that $\{ \overline{g}^{(s^{-1})}(\cdot) \}_{s \ge 1}$ lies in a sequentially compact subset of $\overline{\mathcal O}_{(k)}$, since if a sequence $\{ \overline{g}^{(s_r^{-1})}(\cdot) \}_{r=1}^\infty$ with $\lim_{r \rightarrow \infty} s_r = \infty$ converged to an element of $\overline{\mathcal O}_{(k^\prime)}$ with $k^\prime > k$ then the orbit spaces $\{(S, s_r \: g_{\alpha \beta}(s_r^{-1}))\}_{r=1}^\infty$ would Gromov-Hausdorff converge to something of dimension $3 - k^\prime < 3-k$, which contradicts the noncollapsing. In particular, $t^{- \: \frac{n}{2}} \: \operatorname{vol}(S, g_{\alpha \beta}(t))$ is uniformly bounded above as $t \rightarrow 0$ (as also follows from the diameter and lower curvature bounds).
Then from Lemma \ref{4.58}, ${\mathcal W}_+(G_{ij}(t), A^i_\alpha(t), g_{\alpha \beta}(t), f(t), t)$ is uniformly bounded from below as $t \rightarrow 0$. There is a sequence of times $t_j \rightarrow 0$ so that
$\lim_{j \rightarrow \infty} t_j \: \frac{d}{dt} \Big|_{t = t_j} {\mathcal W}_+(G_{ij}(t), A^i_\alpha(t), g_{\alpha \beta}(t), f(t), t) \: = \: 0$. After passing to a subsequence, we can assume that $\lim_{j \rightarrow \infty} \overline{g}^{(t_j)}(\cdot) \: = \: \overline{g}_0(\cdot)$ for some $\overline{g}_0(\cdot) \in \overline{\mathcal O}_{(k)}$, defined for $t \in (0, \infty)$. As in the proof of Proposition \ref{4.76}, for any $t \in (0, \infty)$ the measures $(4 \pi t_j t)^{- \: \frac{n}{2}} \: e^{- f(t_j t)} \: \operatorname{dvol}(S, g_{\alpha \beta}(t_j t))$ will subconverge to a smooth positive probability measure on $S$. Using (\ref{4.63}), we get that $\overline{g}_0(\cdot)$ satisfies the conclusion of Proposition \ref{4.64} at time $t=1$. It follows that $\overline{g}_0(\cdot)$ satisfies the conclusion of Proposition \ref{4.64} for all $t \ge 1$. In particular, $M$ admits a geometric structure other than an ${\mathbb R}^3$ or a $\operatorname{Nil}$-structure (see the proof of Proposition \ref{6.2}), which is a contradiction. \end{proof}
\begin{proposition} \label{6.11} If $k_0 = 3$ then for any sequence $\{s_j\}_{j=1}^\infty$ tending to infinity, $\lim_{j \rightarrow \infty} g_{s_j}(\cdot)$ exists and is a locally homogeneous expanding soliton of the ${\mathbb R}^3$ or $\operatorname{Nil}$-type. \end{proposition} \begin{proof} If the proposition is not true then there is a sequence $\{s_j\}_{j=1}^\infty$ tending to infinity such that $\lim_{j \rightarrow \infty} g_{s_j}(\cdot) \: = \: \overline{g}(\cdot)$ for some $\overline{g}(\cdot) \in \overline{\mathcal O}$, but $\overline{g}(\cdot)$ is not an expander of type ${\mathbb R}^3$ or $\operatorname{Nil}$. From Proposition \ref{6.8}, $\overline{g}(\cdot) \in \overline{\mathcal O}_{(3)}$. In particular, $\overline{g}(\cdot)$ is locally homogeneous. If $\underline{\frak g}$ is a local system of ${\mathbb R}^3$ Lie algebras then $\overline{g}(\cdot)$ must be flat. If $\underline{\frak g}$ is a local system of $\operatorname{nil}$ Lie algebras then $\overline{g}(\cdot)$ is automatically a locally homogeneous expanding soliton, with respect to some origin of time. {\it A priori}, the equation for $\overline{g}(\cdot)$ could differ from the expanding $\operatorname{Nil}$ soliton in Theorem \ref{1.2} by an additive change of the time parameter. We can rule this out by using stability arguments as before, which are simpler in this case because we are now talking about dynamics on the finite-dimensional space of locally homogeneous $\operatorname{Nil}$-solutions. First, we argue that there is some sequence $s_j \rightarrow \infty$ so that $\lim_{j \rightarrow \infty} g_{s_j}(\cdot)$ is a locally homogeneous expanding soliton modeled on the $\operatorname{Nil}$ expanding soliton of Theorem \ref{1.2}. Then we use the fact that this expanding soliton is an attractor for the ${\mathbb R}^{\ge 1}$-semigroup action on the locally homogeneous $\operatorname{Nil}$-solutions \cite[Section 3.3.3]{Lott (2007)}.
Finally, we use a backward rescaling, as in the proof of Proposition \ref{6.8}, to show that for any sequence $\{s_j\}_{j=1}^\infty$ tending to infinity, $\lim_{j \rightarrow \infty} g_{s_j}(\cdot)$ is a locally homogeneous expanding soliton modeled on the $\operatorname{Nil}$ expanding soliton of Theorem \ref{1.2}. \end{proof}
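For concreteness, we record one explicit normalization of a $\operatorname{Nil}$ expanding soliton (the normalization here is ours and may differ from that of Theorem \ref{1.2} by constants). On the Heisenberg group with left-invariant coframe $\theta^1 = dz - x \: dy$, $\theta^2 = dx$, $\theta^3 = dy$, a left-invariant metric $A \: (\theta^1)^2 + B \: (\theta^2)^2 + C \: (\theta^3)^2$ evolves under the Ricci flow by the ODEs $A^\prime = - \: \frac{A^2}{BC}$, $B^\prime = \frac{A}{C}$, $C^\prime = \frac{A}{B}$, and one checks that
\begin{equation*}
g(t) \: = \: \frac{t^{- \: \frac13}}{3} \: (\theta^1)^2 \: + \: t^{\frac13} \left( (\theta^2)^2 + (\theta^3)^2 \right)
\end{equation*}
is a solution, expanding in the directions transverse to the center and contracting along it; it is self-similar under the scaling automorphisms $(x,y,z) \mapsto (\lambda x, \lambda y, \lambda^2 z)$.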
\begin{remark} \label{6.12} Some of the results of this subsection extend to higher dimension. Suppose that $(M, g(\cdot))$ is a Ricci flow on a closed $n$-dimensional manifold that exists for $t \in (1, \infty)$, with sectional curvatures that are uniformly $O(t^{-1})$ and diameter that grows at most like $O(t^{\frac12})$. (If $n > 3$ then not all compact Ricci flows satisfy these assumptions, as seen by the static solution on a Ricci-flat $K3$ surface.) If $\{ s_j \}_{j=1}^\infty$ is any sequence tending to infinity then after passing to a subsequence, there is a limit Ricci flow $\overline{g}(\cdot)$ on an $n$-dimensional \'etale groupoid ${\frak G}$. If $M$ is aspherical then ${\frak G}$ is locally free.
If $n$ is greater than three then the first point is that the local symmetry sheaf $\underline{\frak g}$ may be a sheaf of nonabelian nilpotent Lie algebras. (This could also happen in dimension $3$, but then $\overline{g}(\cdot)$ is locally homogeneous with respect to the three-dimensional Heisenberg group.) Thus the analysis of Subsection \ref{subsection4.2} would have to be extended to the case of twisted ${\mathcal G}$-bundles where ${\mathcal G}$ is a nilpotent Lie group.
If we do assume that $\underline{\frak g}$ is abelian then Proposition \ref{4.76} says that any blowdown limit of $\overline{g}(\cdot)$ satisfies the harmonic-Einstein equations (\ref{4.65}). Proposition \ref{4.77} describes the blowdown limit of a Ricci flow solution $(M, g(\cdot))$ on an aspherical $4$-manifold, defined for $t \in (1, \infty)$, with sectional curvatures that are uniformly $O(t^{-1})$ and diameter which is $O(t^{\frac12})$, provided that $\underline{\frak g}$ is abelian. \end{remark}
\subsection{Proof of Theorem \ref{1.2}} \label{subsection6.2}
In this subsection we use the fact that $M$ is aspherical in order to extend the convergence result of Subsection \ref{subsection6.1} from a statement about a limiting Ricci flow on an \'etale groupoid to a statement about a limiting Ricci flow on $\widetilde{M}$.
By Proposition \ref{3.5}, $M$ is irreducible, aspherical and has a single geometric piece in its geometric decomposition. We assume first that $M$ does not have Thurston type $\widetilde{\operatorname{SL}_2({\mathbb R})}$.
Suppose that for a sequence $\{s_j\}_{j=1}^\infty$ tending to infinity, the limit $\lim_{j \rightarrow \infty} \left( M, g_{s_j}(\cdot) \right)$ exists and equals a Ricci flow $\overline{g}(\cdot)$ on an \'etale groupoid ${\frak G}$. If $S$ is the orbit space of $({\frak G}, \overline{g}(1))$ then $\lim_{j \rightarrow \infty} \left(M, \frac{g(s_j)}{s_j} \right) \stackrel{GH}{=} S$.
From Propositions \ref{6.6} and \ref{6.11}, $\overline{g}(\cdot)$ is a locally homogeneous expanding soliton of the type listed in Proposition \ref{6.3}. There is an orbifold fiber bundle $M \rightarrow S$. Now $S$ is a very good orbifold, i.e. $S$ is the quotient of a manifold $\widehat{S}$ by a finite group action. Taking the corresponding finite cover $\widehat{M}$ of $M$, if we are interested in what happens on the universal cover $\widetilde{M}$ then we can assume that $S$ is a closed manifold.
Suppose that $M$ is not of $\operatorname{Nil}$-type. For large $j$, we know that $\left(M, \frac{g(s_j)}{s_j} \right)$ is the total space of a $T^{k_0}$-bundle over $S$ which defines an $F$-structure, where ${k_0} = \dim(M) - \dim(S)$. As $M$ is aspherical, the map $\pi_1(T^{k_0}) \rightarrow \pi_1(M)$ is injective \cite[Remark 0.9]{Cheeger-Rong (1995)}.
Choose $\delta \in \left( 0, \min \left( \frac{\operatorname{inj}(S)}{10}, \frac{1}{10\sqrt{K}} \right) \right)$ and take a finite collection $\{x_i\}$ of points in $S$ with the property that $\{B(x_i, \delta)\}$ covers $S$. For large $j$, let $\{ p_{i,j} \}$ be points in $\left(M, \frac{g(s_j)}{s_j} \right)$ that are the image of $\{x_i\}$ under a Gromov-Hausdorff approximation. Then for such $j$, $\{B(p_{i,j}, 5\delta)\}$ covers $\left(M, \frac{g(s_j)}{s_j} \right)$. Each $B(p_{i,j}, \delta)$ is homeomorphic to $B^{3-{k_0}} \times T^{k_0}$ and its lift $\widetilde{B(p_{i,j}, \delta)}$ to $\widetilde{M}$ is homeomorphic to $B^{3-{k_0}} \times {\mathbb R}^{k_0}$.
Suppose that $\widetilde{p}_{i,j} \in \widetilde{M}$ is a preimage of $p_{i,j}$. Then the $5\delta$-ball $B(0, 5\delta) \subset T_{p_{i,j}} M$, with respect to the metric $\exp_{p_{i,j}}^* \frac{g(s_j)}{s_j}$, is isometric to $B(\widetilde{p}_{i,j}, 5\delta) \subset \left( \widetilde{M}, \frac{\widetilde{g}(s_j)}{s_j} \right)$. From the construction of the Riemannian groupoid $({\frak G}, \overline{g}(1))$ \cite[Proposition 5.9]{Lott (2007)}, $\lim_{j \rightarrow \infty} B(\widetilde{p}_{i,j}, 5\delta)$ is isometric to a $5\delta$-ball in the time-$1$ slice of the (homogeneous) expanding soliton solution on the manifold ${\mathbb R}^3$.
Let $\widetilde{m} \in \widetilde{M}$ be a basepoint. Given $R > 0$, consider $B(\widetilde{m}, R) \subset \left(\widetilde{M}, \frac{\widetilde{g}(s_j)}{s_j} \right)$. For large $j$, we can find a finite collection of points $\{ \widetilde{p}_{r} \}$ (depending on $j$) in $\left(\widetilde{M}, \frac{\widetilde{g}(s_j)}{s_j} \right)$, where each $\widetilde{p}_{r}$ projects to some element of $\{ p_{i,j} \} \subset M$, so that the cardinality of $\{ \widetilde{p}_{r} \}$ is uniformly bounded in $j$ and $B(\widetilde{m}, R) \subset \left(\widetilde{M}, \frac{\widetilde{g}(s_j)}{s_j} \right)$ is covered by $\{ B(\widetilde{p}_{r}, 5\delta) \}$. Namely, for each $i$ and $j$, take points in the strip $\widetilde{B(p_{i,j}, \delta)} \subset \left( \widetilde{M}, \frac{\widetilde{g}(s_j)}{s_j} \right)$ that lie in $B(\widetilde{m}, R)$, cover $p_{i,j}$ and form a separated net of size approximately $\delta$.
After relabeling the indices if necessary, suppose that $\widetilde{m} \in B(\widetilde{p}_{1}, 5\delta)$ with $\widetilde{p}_1 \in \widetilde{M}$ projecting to $p_{1,j} \in M$. For large $j$, fix an almost-isometry from $B(\widetilde{p}_{1}, 5\delta)$ to a $5\delta$-ball in the time-$1$ slice of the (homogeneous) expanding soliton solution on ${\mathbb R}^3$. Taking the union of the balls $B(\widetilde{p}_r, 5 \delta)$ whose centers project to $p_{1,j} \in M$, it follows that for large $j$, the metric $\frac{\widetilde{g}(s_j)}{s_j}$ on $\widetilde{B(p_{1,j}, \delta)} \cap B(\widetilde{m}, R)$ approaches the homogeneous expanding soliton metric on the strip. We do the same procedure for the other values of $i$, on the strips $\widetilde{B(p_{i,j}, \delta)} \cap B(\widetilde{m}, R) \subset \left(\widetilde{M}, \frac{\widetilde{g}(s_j)}{s_j} \right)$. Then taking the union of these strips for the various $i$, it follows that for large $j$, the metric $\frac{\widetilde{g}(s_j)}{s_j}$ on $B(\widetilde{m}, R)$ approaches an $R$-ball in the time-one slice of the homogeneous expanding soliton solution on ${\mathbb R}^3$. Finally, we can perform the argument with the time parameter added, to conclude that $\left\{ \left( \widetilde{M}, \widetilde{m}, \widetilde{g}_{s_j}(\cdot) \right) \right\}_{j=1}^\infty$ converges to the expanding soliton solution on ${\mathbb R}^3$, in the topology of pointed smooth convergence. We can perform a similar argument in the $\operatorname{Nil}$-case, where $S$ is a point.
To prove Theorem \ref{1.2}, suppose first that $M$ has Thurston type ${\mathbb R}^3$, $\operatorname{Nil}$, $\operatorname{Sol}$ or $H^3$. Suppose that the theorem is not true. Then there is a sequence $\{s_j\}_{j=1}^\infty$ tending to infinity so that for any subsequence $\{s_{j_r}\}_{r=1}^\infty$, either $\left\{ \left( M, g_{s_{j_r}}(1) \right) \right\}_{r=1}^\infty$ does not converge in the Gromov-Hausdorff topology to the limit stated in the theorem or $\left\{ \left( \widetilde{M}, \widetilde{m}, \widetilde{g}_{s_{j_r}}(\cdot) \right) \right\}_{r=1}^\infty$ does not converge in the pointed smooth topology to the homogeneous expanding soliton solution stated in the theorem. After passing to a subsequence, there is a limit $\lim_{j \rightarrow \infty} \left( M, g_{s_j}(\cdot) \right)$ as a Ricci flow on an \'etale groupoid, whose time-one orbit space will be the Gromov-Hausdorff limit $\lim_{j \rightarrow \infty} \left( M, \frac{g(s_j)}{s_j} \right)$. The limit is characterized by Corollary \ref{6.7} and Proposition \ref{6.11}. From the preceding discussion, there is a pointed smooth limit $\lim_{j \rightarrow \infty} \left( \widetilde{M}, \widetilde{m}, \widetilde{g}_{s_j}(\cdot) \right)$ as a Ricci flow on $\widetilde{M}$, which is a homogeneous expanding soliton solution on ${\mathbb R}^3$ of the corresponding type. In any case, we get a contradiction.
Suppose now that $M$ has Thurston type $H^2 \times {\mathbb R}$. We can apply the same argument. The only difference is that we can no longer say that $\lim_{s \rightarrow \infty} (M, g_s(1))$ exists in the Gromov-Hausdorff topology. All that we get from Proposition \ref{6.6} is that for any sequence $\{ t_j \}_{j=1}^\infty$ tending to infinity, there is a subsequence $\{ t_{j_r} \}_{r=1}^\infty$ for which $\lim_{r \rightarrow \infty} \left( M, \frac{g(t_{j_r})}{t_{j_r}} \right)$ exists and equals a closed $2$-dimensional orbifold with constant sectional curvature $- \: \frac12$. {\em A priori}, different subsequences could give rise to different constant-curvature orbifolds. (From our diameter bound, for a given $M$ there is a compact set of such orbifolds that can arise.) However, we claim that on the universal cover $\widetilde{M}$ we do get pointed smooth convergence $\lim_{s \rightarrow \infty} \left( \widetilde{M}, \widetilde{m}, \widetilde{g}_s(\cdot) \right)$ to the Ricci flow $(H^2 \times {\mathbb R}, 2tg_{hyp} \: + \: g_{{\mathbb R}})$. To see this, suppose that $\{s_j\}_{j=1}^\infty$ is a sequence tending to infinity so that for any subsequence $\{s_{j_r}\}_{r=1}^\infty$, $\left\{ \left( \widetilde{M}, \widetilde{g}_{s_{j_r}}(\cdot) \right) \right\}_{r=1}^\infty$ does not converge to $(H^2 \times {\mathbb R}, 2tg_{hyp} \: + \: g_{{\mathbb R}})$ in the pointed smooth topology. We know that there is a subsequence $\{s_{j_r}\}_{r=1}^\infty$ for which $\lim_{r \rightarrow \infty} \left( {M}, {g}_{s_{j_r}}(\cdot) \right)$ exists and is a Ricci flow $\overline{g}(\cdot)$ on an \'etale groupoid. From Proposition \ref{6.6}, $\overline{g}(\cdot)$ is a Ricci flow solution of type $H^2 \times {\mathbb R}$. 
From the preceding discussion, $\lim_{r \rightarrow \infty} \left( \widetilde{M}, \widetilde{m}, \widetilde{g}_{s_{j_r}}(\cdot) \right)$ exists in the pointed smooth topology and equals the expanding soliton solution $(H^2 \times {\mathbb R}, 2tg_{hyp} \: + \: g_{{\mathbb R}})$. This is a contradiction.
Finally, if $M$ has Thurston type $\widetilde{\operatorname{SL}_2({\mathbb R})}$ then the theorem follows from Proposition \ref{6.2}.
\begin{remark} \label{6.13} The assumption $\operatorname{diam}(M, g(t)) = O(t^{\frac12})$ of Theorem \ref{1.2}, together with the curvature assumption, ensures that $M$ has a single geometric piece. One can ask what happens if one removes the diameter assumption but keeps the curvature assumption. In such a case one would clearly have to consider pointed limits $\lim_{j \rightarrow \infty} \left( M, m, g_{s_j}(\cdot) \right)$ of the Ricci flow solution. After passing to a subsequence, there will be convergence to a Ricci flow solution $\overline{g}(\cdot)$ on a pointed \'etale groupoid. However, the analysis of Subsection \ref{subsection4.2} does not immediately extend to the pointed noncompact setting. For example, $\overline{g}(t)$ need not have finite volume in any reasonable sense; see Example \ref{2.3}. \end{remark}
\end{document}
\begin{document}
\baselineskip=12pt
\title{Normal forms for rational 3-tangles} \author{Bo-hyun Kwon, Jung Hoon Lee} \date{Thursday, March 9, 2023} \maketitle \begin{abstract}
In this paper, we define the \textit{normal form} of a collection of three disjoint \textit{bridge arcs} for a given rational $3$-tangle. We show that for any two normal forms of the same rational $3$-tangle, there is a sequence of \textit{normal jump moves} leading from one to the other. \end{abstract}
\section{Introduction}
The rational tangles originally introduced by John Conway (1970,~\cite{0}) were rational $2$-tangles. J. Conway completed the classification of rational tangles by assigning to each rational tangle a corresponding rational number; moreover, if two rational tangles are equivalent then their corresponding rational numbers are the same. Building on this definition, one can extend the notion to rational $n$-tangles, $n\geq 2$. Since rational tangles were defined, there have been many attempts to classify rational $3$-tangles. As one such attempt, the first author~\cite{1} gave an algorithm to compare two rational $3$-tangles; however, it does not produce a canonical representative of each rational $3$-tangle. In this paper, we define the \textit{normal form}, which is a special collection of three disjoint \textit{bridge arcs} of a given rational $3$-tangle $T$. This provides a step toward a canonical representative of each rational $3$-tangle.
\subsection{Bridge disk and bridge arc.}
\begin{figure}
\caption{ Bridge disk and bridge arc in a rational $3$-tangle}
\label{F1}
\end{figure}
Let $\tau=\tau_1\cup\tau_2\cup\tau_3$ be a rational $3$-tangle in $B$, where $\tau_1,\tau_2,\tau_3$ are the three strings of the rational $3$-tangle. A disk $D$ in $B$ is called a \textit{bridge disk} if $\operatorname{int} D\cap \tau=\emptyset$, $\partial D=\tau_i\cup\beta$ and $\tau_i\cap \beta=\partial \tau_i=\partial \beta$ for some $i\in\{1,2,3\}$, where $\beta$ is a simple arc in $\partial B$ with $\operatorname{int} \beta\cap \tau=\emptyset$. The simple arc $\beta$ in $\Sigma_{0,6}$ is called a \textit{bridge arc} if it cobounds a bridge disk with $\tau_i$ for some $i\in\{1,2,3\}$. We note that there is a collection of three disjoint bridge arcs for a rational $3$-tangle, since by the definition of rationality there are three disjoint non-parallel bridge disks in $B$. However, the collection is not unique (up to isotopy), even for the same rational $3$-tangle. Once we select two disjoint bridge arcs on $\Sigma_{0,6}$, it is easy to find a third bridge arc: by Lemma~\ref{L1} below, any simple arc connecting the two remaining punctures which is disjoint from the existing two bridge arcs is a bridge arc. So, there are infinitely many different bridge arcs for the same string of the rational $3$-tangle. We say that a simple closed curve $\gamma$ is \textit{obtained from} the bridge arc $\beta$ if the boundary of a small regular neighborhood of $\beta$ in $\Sigma_{0,6}$ is isotopic to $\gamma$.
\begin{Lem}\label{L1} Let $\beta_1$ and $\beta_2$ be two disjoint bridge arcs on $\Sigma_{0,6}$ of $(B,\tau)$, where $\tau$ is a rational $3$-tangle in $B$. Let $\beta_3$ be an arc connecting the two remaining punctures on $\Sigma_{0,6}$ which is disjoint from $\beta_1\cup\beta_2$. Then $\beta_3$ is also a bridge arc on $\Sigma_{0,6}$ of $(B,\tau)$. \end{Lem}
\begin{comment} \begin{proof} We first claim that there are two disjoint bridge disks containing the bridge arcs $\beta_1$ and $\beta_2$ respectively. We take a bridge disk bounded by the union of $\beta_1$ and a string of $\tau$. Then we cut $B$ along the bridge disk and open it along the two cut disks to obtain a rational $2$-tangle. Since $\beta_2$ is disjoint from $\beta_1$, we can take a bridge disk bounded by $\beta_2$ and a string of the obtained rational $2$-tangle. We glue the two cut disks back to recover the original rational $3$-tangle. This implies the existence of two disjoint bridge disks satisfying the condition. Moreover, from the rational $2$-tangle, if we cut the tangle along the bridge disk bounded by $\beta_2$ and a string, then we obtain a rational $1$-tangle. It is clear that any arc connecting the two endpoints of the $1$-tangle on $\Sigma_{0,2}$, without intersecting the two disks opened along the bridge disks, cobounds a bridge disk in the $3$-ball containing the rational $1$-tangle. By gluing the two pairs of bridge disks back, we obtain the lemma. \end{proof}
\end{comment}
Let $\mathcal{B}=\{\beta_1,\beta_2,\beta_3\}$ be a collection of three disjoint bridge arcs on $\Sigma_{0,6}$ for $T=(B,\tau)$, which is called an \textit{arc system}, or simply a \textit{system}, of $T$. To set up the argument, we fix a basic formation in $\Sigma_{0,6}$. Let $E_i$ be the fixed $2$-punctured disks in $\Sigma_{0,6}$ as in the diagram of Figure~\ref{c2}. Let $E=E_1\cup E_2\cup E_3$, $\partial E=\partial E_1\cup\partial E_2\cup\partial E_3$ and $P=\Sigma_{0,6}\setminus E$. \\
\begin{figure}
\caption{Basic formation}
\label{c2}
\end{figure}
\subsection{Dehn's parameterization of an arc system of a rational $3$-tangle $T$}
Let $\mathcal{B}=\{\beta_1,\beta_2,\beta_3\}$ be an arc system of $T$. By an argument similar to \textit{Dehn's parameterization} of simple closed curves (see~\cite{1}), we can parameterize $\beta=\beta_1\cup\beta_2\cup\beta_3$ as follows. Let $\omega_i$ be the subarc of $\partial E_i$ that contains all of the intersections between $\partial E_i$ and $\beta$. These subarcs are called \textit{windows}. By considering the Dehn twists supported on $\partial E_i$, we can define the standard arcs in $P$ as in the diagrams of Figure~\ref{l10}. We note that all components of $P\cap \beta$ can be realized by assigning \textit{weights} to the standard arcs. The \textit{weight} of a standard arc is the number of arcs parallel to the standard arc. \begin{figure}
\caption{Standard arcs}
\label{l10}
\end{figure} Now, we investigate the arc types in $E_i$. For this, we define the innermost $2$-punctured disk $E_i'$ so that it contains only vertical intersections between $\beta$ and $E_i'$, as in the diagrams of Figure~\ref{c4}. We may need to isotope $\beta$ so that it is in minimal general position with respect to $\partial E$ and $\partial E'$, where $\partial E'=\partial E_1'\cup\partial E_2'\cup\partial E_3'$.
We may need to fix the shape of the half arcs in $E_i'$ as in the diagrams of Figure~\ref{c4}.
\begin{figure}
\caption{Arc types in $E_i$}
\label{c4}
\end{figure} We note that the two integers $p$ and $q$ determine all the arc components of $\beta$ in $E_i$, where $p$ is the number of arc components and $q$ gives the connecting pattern between the $2p$ endpoints on $\omega_i$ and the endpoints on $\partial E_i'$. Refer to Figure~\ref{c4} for the rule determining $q$. We note that one connecting pattern determines all the connecting patterns, since all the arc components must be disjoint from each other. Combining the arguments above, we have the following theorem.
\begin{Thm}[Special case II of Dehn's theorem]\label{T2} There is a one-to-one map $\phi : \mathcal{C}\rightarrow \mathbb{Z}^6$ such that $\phi([\beta_i])=(p_1,p_2,p_3, q_1, q_2, q_3)$, i.e., it classifies isotopy classes of simple arcs (bridge arcs), where $\mathcal{C}$ is the collection of isotopy classes of simple arcs. \end{Thm}
We note that Theorem~\ref{T2} works for $\beta=\beta_1\cup\beta_2\cup\beta_3$ as well. We say that an arc system of $T$ is in \textit{standard position} if every bridge arc of the system is realized by taking the weights of the standard arcs and assigning integers for the connecting pattern in $E_i$. In other words, the bridge arcs of a system in standard position do not make a bigon with $\partial E$. We point out that if $p_i=0$ then we are not able to determine a connecting pattern, since there is no intersection between $\beta$ and $E_i$; in this case, we assign $q_i=0$ for convenience.
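For the reader's convenience, the parameterization just described can be summarized in one display (a hedged restatement of the conventions above, not a new result):

```latex
% Dehn parameters of an arc system in standard position:
% p_i = number of arc components of \beta in E_i,
% q_i = integer encoding the connecting pattern in E_i.
\[
  \phi([\beta]) \;=\; (p_1, p_2, p_3, q_1, q_2, q_3) \in \mathbb{Z}^6,
  \qquad q_i = 0 \ \text{whenever } p_i = 0 .
\]
```

In particular, two systems in standard position are isotopic exactly when their parameter tuples agree, which is how Theorem~\ref{T2} is applied later.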
We note that $\beta\cup \partial E$ is in minimal general position if $\beta$ is in standard position, since Dehn's parameterization is well defined (see \cite{1}).
\section{Normal forms of systems for a rational $3$-tangle $T$}
\subsection{Normal form and the bridge arc replacement}Let $\mathcal{B}=\{\beta_1,\beta_2,\beta_3\}$ be an arc system of a rational $3$-tangle $T$. We assume that $\beta=\beta_1\cup\beta_2\cup\beta_3$ is in standard position. We say that an arc system $\mathcal{B}$ of $T$ is a \textit{normal form} with respect to $\partial E$ if no two adjacent intersections of $\beta$ with $\omega_i$ belong to the same $\beta_j$, for any $j\in\{1,2,3\}$.
In order to show the existence of the normal form, we assume that there exist at least two successive intersections violating the definition of normal form, as in Figure~\ref{F6}. We note that there are two subarcs of $\beta_j$ such that one end of each is one of the successive intersections, the other end is one of the two endpoints of $\beta_j$, and they do not meet the other successive intersections. Let $c_1$ and $c_2$ be these two subarcs of $\beta_j$. We define the two directions $(+)$ and $(-)$ as in the first diagram of Figure~\ref{F6}. Assume that $c_1$ and $c_2$ follow the $(+)$ and $(-)$ directions respectively, as in the second diagram of Figure~\ref{F6}. Then we construct a new bridge arc from $c_1$ and $c_2$ so that it is disjoint from the two bridge arcs of the system other than $\beta_j$, as in the second diagram of Figure~\ref{F6}. We then take this arc as one of the bridge arcs of a new arc system. It is clear that the new arc is also a bridge arc by Lemma~\ref{L1}. If $c_1$ and $c_2$ follow only one of the directions, as in the third diagram, we construct a new bridge arc from $c_1$ and $c_2$ so that it is disjoint from the two bridge arcs of the system other than $\beta_j$, as in the fourth diagram of Figure~\ref{F6}. \begin{figure}
\caption{Bridge arc replacement, $\mathbf{BR}$}
\label{F6}
\end{figure}
This procedure is called the \textit{bridge arc replacement}, briefly $\mathbf{BR}$, with respect to $\partial E$. We note that the new bridge arc obtained by $\textbf{BR}$ has fewer (geometric) intersections with $\partial E$.
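This observation can be put in symbols (a hedged restatement; the exact amount of the decrease depends on the diagram):

```latex
% If \beta_j' denotes the new bridge arc produced by a BR applied
% to \beta_j, then the geometric intersection number with \partial E
% strictly decreases:
\[
  \bigl| \beta_j' \cap \partial E \bigr| \;<\; \bigl| \beta_j \cap \partial E \bigr| .
\]
% Since |\beta \cap \partial E| is a non-negative integer, any
% sequence of BRs must terminate.
```

This strict decrease is the finiteness argument behind the existence of normal forms.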
\begin{Lem}
Let $\mathcal{B}$ be an arc system of $T=(B,\tau)$. Then there exists a normal form $\mathcal{B}'$ of $T$ with respect to $\partial E$.
\end{Lem}
\begin{proof} Since the number of intersections between $\partial P$ and $\beta$ is finite and each \textbf{BR} decreases it, we obtain a normal form after a finite sequence of \textbf{BR}s.
\end{proof}
The first author \cite{2} proved the following theorem. The next corollary follows from it.
\begin{Thm}\label{T11} Let $\mathcal{B}$ be a normal form of systems for the fixed trivial rational $3$-tangle $T=(B,\epsilon)$ with respect to $\partial E$. Then $\mathcal{B}$ is unique up to isotopy. In particular, the three simple closed curves obtained from the arcs of $\mathcal{B}$ are isotopic to $\partial E_1,\partial E_2$ and $\partial E_3$, respectively. \end{Thm}
\begin{Cor}\label{T12} Let $\mathcal{B}=\{\beta_1,\beta_2,\beta_3\}$ be a system of $T=(B,\tau)$, where $\tau$ is an arbitrary rational $3$-tangle. Let $\gamma_1,\gamma_2,\gamma_3$ be the simple closed curves obtained from $\beta_1,\beta_2$ and $\beta_3$ respectively, chosen pairwise disjoint. Let $\gamma=\gamma_1\cup\gamma_2\cup\gamma_3$. Then there exists a unique normal form of systems for $T$ with respect to $\gamma$ (not with respect to $\partial E$) up to isotopy. Moreover, it is $\mathcal{B}$. \end{Cor}
\subsection{Normal form obtained from normal jump moves} For a given arc system $\mathcal{B}=\{\beta_1,\beta_2,\beta_3\}$ for $T$, we define a \textit{jump move} as follows. Take a regular neighborhood of $\beta_i$, denoted $N(\beta_i)$, in $\partial B$ which is disjoint from $\beta_j$ and $\beta_k$, where $\{i,j,k\}=\{1,2,3\}$. Now, we consider a rectangle $R$ such that the interior of $R$ is disjoint from $N(\beta_i), \beta_j$ and $\beta_k$, and two parallel sides of $R$ are subarcs of $N(\beta_i)$ and $\beta_j$ respectively. Then we can construct a new bridge arc by a modified band sum, as in the diagrams of Figure~\ref{c5}. The move producing the new bridge arc is called a \textit{jump move} (over $\beta_i$). We note that the bridge arc obtained by a bridge arc replacement can also be obtained by a jump move.
\begin{figure}\label{c5}
\end{figure}
Suppose that $\mathcal{B}$ is a normal form of systems for $T$ with respect to $\partial E$. If the new arc system $\mathcal{B}'$, obtained by replacing one of the bridge arcs by a jump move, is also a normal form, then the jump move is called a \textit{normal jump move}. Now, we discuss how to find a new normal form $\{\beta_1',\beta_2,\beta_3\}$ from the normal form $\mathcal{B}=\{\beta_1,\beta_2,\beta_3\}$ for $T$ with respect to $\partial E$ by using a normal jump move.\\
\textbf{Case 1}: $E_i$ contains one of the bridge arcs of the normal form; the following theorem gives information identifying the given rational $3$-tangle $T$.
\begin{Thm} Suppose that $\mathcal{B}=\{\beta_1,\beta_2,\beta_3\}$ and $\mathcal{B}'=\{\beta_1',\beta_2',\beta_3'\}$ are normal forms for the two rational $3$-tangles $T$ and $T'$ respectively with respect to $\partial E$. Let $(0,0,p_2,q_2,p_3, q_3)$ and $(0,0,p_2',q_2',p_3', q_3')$ be the ordered sequences of Dehn's parameters of $\mathcal{B}$ and $\mathcal{B}'$ respectively.
Then $T$ and $T'$ are isotopic if and only if $p_i=p_i'$ for $i=2,3$ and $q_2-q_3=q_2'-q_3'$. \end{Thm} \begin{figure}\label{c10}
\end{figure}
\begin{proof}
Let $\beta=\beta_1\cup\beta_2\cup\beta_3$ and $\beta'=\beta_1'\cup\beta_2'\cup\beta_3'$. Since $p_1=p_1'=0$, $E_1$ contains one of the bridge arcs of each of $\mathcal{B}$ and $\mathcal{B}'$. Let $\beta_1$ and $\beta_1'$ be these bridge arcs. We note that they are isotopic.
Both $\mathcal{B}$ and $\mathcal{B}'$ admit new normal forms obtained from a sequence of jump moves over $\beta_1$, as in the diagrams of Figure~\ref{c10}. We note that there is no change of $p_i$ and $p_i'$ for $i=2,3$ after the sequence of jump moves. The right diagram of Figure~\ref{c10} has two parts: $\beta_1$, and the two bridge arcs representing a rational $2$-tangle. First, we assume that $T$ and $T'$ are isotopic. Then the rational $2$-tangles represented by the pairs of bridge arcs are isotopic. We note that a rational $2$-tangle is determined uniquely by $p$ and $q$, where $p$ is the minimal intersection number between the two bridge arcs and $E_2$ (or $E_3$) and $q$ represents the connecting pattern of endpoints between $\omega_2$ and $\omega_3$. In fact, $q=q_2-q_3=q_2'-q_3'$. For the opposite direction, we note that if $q_2-q_3=q_2'-q_3'$ then the two corresponding rational $2$-tangles are isotopic. Therefore, $T$ and $T'$ are isotopic. This completes the proof.
\end{proof}
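The criterion established in Case 1 can be displayed compactly (a hedged restatement of the theorem above, in the notation of its statement):

```latex
% Case 1 criterion: with Dehn parameters (0,0,p_2,q_2,p_3,q_3)
% for T and (0,0,p_2',q_2',p_3',q_3') for T', the tangles agree iff
\[
  T \simeq T'
  \iff
  p_2 = p_2', \quad p_3 = p_3', \quad q_2 - q_3 = q_2' - q_3' .
\]
% Note that only the difference q_2 - q_3 is an invariant here,
% not the individual values q_2 and q_3.
```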
\textbf{Case 2}: A rational 3-tangle $T$ has a normal form $\mathcal{B}=\{\beta_1,\beta_2,\beta_3\}$ with respect to $\partial E$ satisfying the condition that $|\beta\cap \partial E_i|\geq 2$ for all $i=1,2,3$, where $\beta=\beta_1\cup\beta_2\cup\beta_3$.\\
In this case, we use normal jump moves to find an optimized normal form. We first define the \textit{standard normal jump move}; in Theorem~\ref{L5} we show that it is the only normal jump move.\\
\begin{figure}\label{c14}
\end{figure}
$\circ$ \textit{Standard normal jump move}: Let $p$ be one of the endpoints of $\beta_1$ which is in $E_i$, and let $q$ be the other endpoint of $\beta_1$. Then either $\beta_2$ or $\beta_3$ is adjacent to $p$ in $E_i$, since $\mathcal{B}$ is a normal form and $|\beta\cap\partial E_i|\geq 2$. Without loss of generality, we assume that $\beta_2$ is the bridge arc adjacent to $p$. Now, we take a small regular neighborhood $N(\beta_2)$ of $\beta_2$ so that it does not intersect $\beta_1$ or $\beta_3$, as in the first diagram of Figure~\ref{c14}. We now take a shortest path $h_1$ from $p$ to a point $s$ of $\partial N(\beta_2)$ in $E_i$, as in the first diagram of Figure~\ref{c14}. Since $\beta_2$ is the bridge arc adjacent to $p$, $h_1$ does not meet $\beta$. We note that $\beta_1$ has at least one intersection with $\partial E$, and so it has an adjacent intersection in $\partial E$ which belongs to $\beta_2$, since $\beta_2$ is the bridge arc adjacent to $p$. Let $r$ be the first such intersection when we follow $\beta_1$ from $q$. Let $t$ be an intersection between $\partial N(\beta_2)$ and $\partial E$ so that the subarc of $\partial E$ connecting $r$ and $t$ does not intersect $\beta_2\cup\beta_3$. Let $h_2$ be this subarc of $\partial E$. Let $\widetilde{h_2}$ be the path from $q$ to $t$ which is the union of the subarc of $\beta_1$ between $q$ and $r$, and $h_2$.
There are two arc components $\beta_{21}$ and $\beta_{22}$ of $\partial N(\beta_2)\setminus\{s,t\}$ from $s$ to $t$. Let $\mu_1=h_1\cup\beta_{21}\cup\widetilde{h_2}$ and $\mu_2=h_1\cup\beta_{22}\cup\widetilde{h_2}$. We note that $\mu_1$ and $\mu_2$ are disjoint from $\beta_2\cup\beta_3$. Moreover, we can isotope $\mu_1$ and $\mu_2$ so that they are disjoint except at the two endpoints and meet $\beta_1$ only at the endpoints. Let $\mu_1'$ and $\mu_2'$ be the simple arcs isotoped from $\mu_1$ and $\mu_2$ respectively. Since $\mu_1'\cup\mu_2'$ separates $\beta_2$ and $\beta_3$, $\beta_1$ must be isotopic to one of the $\mu_i'$. Without loss of generality, suppose that $\mu_1'$ is isotopic to $\beta_1$. We note that $\mu_2$ is obtained from $\mu_1$ by a jump move over $\beta_2$.\\
We isotope $\mu_2'$ so that it is in minimal general position with respect to $\partial E$. We claim that this jump move is a normal jump move; it is called the \textit{standard normal jump move} of $\beta_1$.\\
\begin{proof}[Proof of the claim] Take an intersection $a$ between $\beta_1$ and $\partial E_j$. Since $\{\beta_1,\beta_2,\beta_3\}$ is a normal form with respect to $\partial E$, there are three cases for the two intersections adjacent to $a$, as in diagrams $(a-1), (a-2)$ and $(a-3)$ of Figure~\ref{c6}. In $(a-1)$, there are two dotted red lines adjacent to $a$. We note that one of them disappears when we take $\mu_2'$, since one of them is a part of $\mu_1'$, which is isotopic to $\beta_1$. In $(a-2)$, there is no subarc of $\mu_2'$ between the green and red arcs, since the red dotted arc between them must be a part of $\mu_1'$, which is isotopic to $\beta_1$. In $(a-3)$, if the blue arc between the green arcs is a part of $\widetilde{h_2}$, then we need to consider the two dotted blue arcs, which are parts of $\mu_1'$ and $\mu_2'$ respectively. Clearly, only one of the dotted blue arcs is a part of $\mu_2'$. Now, consider the case that the green and the red arcs are adjacent, as in diagram $(b)$ of Figure~\ref{c6}. For checking normality it does not matter whether or not the dotted red arc is a part of $\mu_2'$, although it is. Finally, we conclude that $\{\mu_2',\beta_2,\beta_3\}$ is a normal form, so the move is a normal jump move.
\begin{figure}\label{c6}
\end{figure}
\end{proof}
The following theorem shows the uniqueness of the normal jump move.
\begin{Thm}\label{L5} Let $\mathcal{B}=\{\beta_1,\beta_2,\beta_3\}$ be a normal form of $T$ with respect to $\partial E$. Then there exists a unique bridge arc $\beta_1'$ replacing $\beta_1$ such that it is not isotopic to $\beta_1$ and $\{\beta_1',\beta_2,\beta_3\}$ is a normal form of $T$ with respect to $\partial E$. \end{Thm}
\begin{proof}
Let $\beta_1'=\mu_2'$. Then, we note that $\beta_1\cup\beta_1'$ separates $\beta_2$ and $\beta_3$. The first diagram of Figure~\ref{c9} is a schematic picture of the situation. Let $A$ and $B$ be the two disk regions bounded by $\beta_1\cup\beta_1'$ in $S^2$. Assume that $A$ contains $\beta_2$ and $B$ contains $\beta_3$. \begin{figure}\label{c9}
\end{figure} To obtain a contradiction, we assume that there exists $\beta_1''$ which is isotopic to neither $\beta_1$ nor $\beta_1'$ and such that $\{\beta_1'',\beta_2,\beta_3\}$ is a normal form with respect to $\partial E$. We note that $\beta_1\cup\beta_1''$ separates $\beta_2$ and $\beta_3$ as well. We first assume that $\beta_1\cup\beta_1''$ separates $S^2$ into two connected components, as in the first diagram of Figure~\ref{c9}. We consider the point components of $(\beta_2\cup\beta_3)\cap\partial E$. We first assume that there are two adjacent points of $(\beta_2\cup\beta_3)\cap\partial E$ of different colors in $\partial E_i$ for some $i$, and that $\beta_1$ passes through the subarc of $\partial E_i$ connecting the two points. Then $\beta_1''$ cannot pass through the same subarc, since a simple path starting from a point of $\beta_2$ must return to the component $A$ when it meets $\beta_1\cup\beta_1''$ twice; hence such a path cannot end at a point of $\beta_3$.\\
Secondly, we assume that there are two adjacent components of $(\beta_2\cup\beta_3)\cap\partial E$ of different colours in $\partial E_i$ for some $i$, and that $\beta_1$ does not pass through the subarc of $\partial E_i$ connecting the two components. We claim that $\beta_1''$ must pass through this subarc of $\partial E_i$. This follows from the fact that a path from a point of $\beta_2$ to a point of $\beta_3$ must meet $\beta_1\cup\beta_1''$.\\
Lastly, we assume that there are two adjacent components of $(\beta_2\cup\beta_3)\cap\partial E$ of the same colour in $\partial E_i$ for some $i$. It is clear that $\beta_1$ passes through the subarc of $\partial E_i$ connecting them, by normality. We note that $\beta_1''$ must pass through the same subarc of $\partial E_i$, again by normality.\\
From the three cases above, we note that the intersecting patterns of $\beta_1'$ and $\beta_1''$ with $\partial E$ are the same. This implies that $\beta_1'$ and $\beta_1''$ are isotopic by Theorem~\ref{T2}.\\
Now, we assume that $\beta_1\cup\beta_1''$ separates $S^2$ into more than two connected components. Let $C$ and $D$ be the connected regions containing $\beta_2$ and $\beta_3$ respectively, as in the second diagram of Figure~\ref{c9}. We note that there is no common face between $C$ and $D$; in other words, they meet at a point or are disjoint. By a similar argument to the previous case, we can check that the intersecting patterns of $\beta_1$ and $\beta_1''$ with $\partial E$ are the same. This implies that $\beta_1''$ is isotopic to $\beta_1$, a contradiction, and it completes the proof. \end{proof}
\begin{Thm}\label{L7} Let $\mathcal{B}$ and $\mathcal{B}'$ be two normal forms of $T$ with respect to $\partial E$. Then $\mathcal{B}$ can be obtained from $\mathcal{B}'$ by a sequence of normal jump moves. \end{Thm} \begin{proof} \begin{figure}\label{c15}
\end{figure}
Let $\mathcal{B}=\{\beta_1,\beta_2,\beta_3\}$ and $\mathcal{B}'=\{\beta_1',\beta_2',\beta_3'\}$ be two distinct normal forms of $T$ with respect to $\partial E$. Let $\gamma_1,\gamma_2,\gamma_3$ be simple closed curves obtained from $\beta_1,\beta_2,\beta_3$ respectively so that they are pairwise disjoint. Let $\gamma=\gamma_1\cup\gamma_2\cup\gamma_3$.
By Corollary~\ref{T12}, there exist $\gamma_i$ and $\beta_j'$ such that two of the intersections between $\gamma_i$ and $\beta_j'$ are adjacent in $\gamma_i$. Without loss of generality, we assume that $\beta_1'$ has a pair of adjacent intersections $a$ and $b$ with $\gamma_1(\subset\gamma)$. In other words, the subarc of $\gamma_1$ between $a$ and $b$ has no intersection with $\beta'=\beta_1'\cup\beta_2'\cup\beta_3'$, as in the diagram (a) of Figure~\ref{c15}. Let $S_{ab}$ be the subarc of $\beta_1'$ between $a$ and $b$. We emphasize that the diagram (a) is only one of the possible schematic diagrams. Let $\beta_1''$ be the bridge arc obtained from $\beta_1'$ by the \textbf{BR} with respect to $\gamma$ as in the diagram (a) of Figure~\ref{c15}. We isotope $\beta_1''$ slightly so that it does not intersect $\beta_1'$ except at the two endpoints; for convenience, we denote the modified simple arc by $\beta_1''$ as well. We note that $\beta_1''$ is not isotopic to $\beta_1'$ since $|\beta_1''\cap \gamma|<|\beta_1'\cap \gamma|$. Moreover, it does not intersect $\beta_2'\cup\beta_3'$. So, $\beta_1'\cup\beta_1''$ is a simple closed curve which separates $\beta_2'$ and $\beta_3'$. \\
Now, we show that $\{\beta_1'',\beta_2',\beta_3'\}$ is a normal form of $T$ with respect to $\partial E$. Then this completes the proof of this theorem since $|(\beta_1''\cup\beta_2'\cup\beta_3')\cap\gamma|<|(\beta_1'\cup\beta_2'\cup\beta_3')\cap\gamma|<\infty$. \\
We first claim that there are no adjacent intersections belonging only to $\beta_2'$ or only to $\beta_3'$ in $\partial E$ for $\{\beta_1'',\beta_2',\beta_3'\}$. If there were such adjacent intersections, then there was a single intersection of $\beta_1'$ with $\partial E$ between them, since $\{\beta_1',\beta_2',\beta_3'\}$ is a normal form of $T$ with respect to $\partial E$. Since $\beta_1'\cup\beta_1''$ separates $\beta_2'$ and $\beta_3'$, the given intersections would belong to $\beta_2'$ and $\beta_3'$ respectively, a contradiction. This proves the claim.\\
Now, we claim that there are also no adjacent intersections belonging only to $\beta_1''$ in $\partial E$ for $\{\beta_1'',\beta_2',\beta_3'\}$. To prove the claim, we assume that there are such adjacent intersections. Then there are two possible cases, as in the diagrams (b-1) and (b-2) of Figure~\ref{c15}, where the points $c$ and $d$ are the two adjacent intersections belonging to $\beta_1''$ in $\partial E$. Let $S_{cd}$ be the subarc of $\partial E_j$ which connects $c$ and $d$ and contains no further intersections with $\beta_1''\cup\beta_2'\cup\beta_3'$. Then $S_{cd}$ and the subarc of $\beta_1''$ cobound a simple closed curve $S$ which encloses one of $\beta_2'$ and $\beta_3'$. Without loss of generality, we assume that $S$ encloses $\beta_2'$. Since $\beta_1'\cup\beta_1''$ separates $\beta_2'$ and $\beta_3'$ and $S$ separates $\beta_2'$ and $\beta_3'$, the intersection patterns between $\beta_2\cup\beta_3$ and $S_{ab}$ and between $\beta_2\cup\beta_3$ and $S_{cd}$ are the same up to isotopy. If either $\beta_2$ or $\beta_3$ is isotopic to $\beta_2'$, then this violates the condition that $\mathcal{B}$ is a normal form with respect to $\partial E$, since either $(\beta_2\cup\beta_3)\cap S_{cd}$ is the empty set or there is more than one intersection of $\beta_2$ or $\beta_3$ with $S_{cd}$. So, we assume that neither $\beta_2$ nor $\beta_3$ is isotopic to $\beta_2'$. Let $A$ be the region bounded by the simple closed curve $\widehat{S}$, consisting of the subarcs of $S_{ab}$ (or the extended arc of $S_{ab}$) and $\beta_1$, which contains $\beta_2'$ as in the diagrams of Figure~\ref{c15}. We note that the interior of $A$ does not intersect $\beta_1$.
We also note that there exists a simple arc connecting the two punctures other than the four punctures of $\beta_1\cup\beta_2'$ which is disjoint from $\beta_1\cup\beta_2'$ and $A$. Then it is a bridge arc by Lemma~\ref{L1}; let $\beta_3''$ denote this bridge arc. Then $\{\beta_1,\beta_2',\beta_3''\}$ is a system of the same rational $3$-tangle $T$. By taking pairwise disjoint simple closed curves $\gamma_1,\gamma_2'$ and $\gamma_3''$ obtained from $\beta_1,\beta_2'$ and $\beta_3''$, we investigate $\gamma_2$ and $\gamma_3$. We note that $\{\beta_1,\beta_2,\beta_3\}$ is also a system of $T$. By Corollary~\ref{T12}, there exist two adjacent intersections of $\beta_2\cup\beta_3$ with $S_{ab}$ belonging to the same arc $\beta_2$ or $\beta_3$, since $\beta_1$ is disjoint from $\gamma_1\cup\gamma_2'\cup\gamma_3''$. Moreover, we recall that the intersection patterns between $\beta_2\cup\beta_3$ and $S_{ab}$ and between $\beta_2\cup\beta_3$ and $S_{cd}$ are the same. This contradicts the condition that $\mathcal{B}=\{\beta_1,\beta_2,\beta_3\}$ is a normal form with respect to $\partial E$, which completes the claim. Therefore, $\{\beta_1'',\beta_2',\beta_3'\}$ is a normal form of $T$ with respect to $\partial E$.\\
\end{proof}
\end{document}
\begin{document}
\newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{conjecture}[theorem]{Conjecture} \title{Engineering steady Knill-Laflamme-Milburn state of Rydberg atoms by dissipation}
\author{Dong-Xiao Li} \affiliation{Center for Quantum Sciences and School of Physics, Northeast Normal University, Changchun, 130024, People's Republic of China} \affiliation{Center for Advanced Optoelectronic Functional Materials Research, and Key Laboratory for UV Light-Emitting Materials and Technology of Ministry of Education, Northeast Normal University, Changchun 130024, China}
\author{Xiao-Qiang Shao\footnote{shaoxq644@nenu.edu.cn}} \affiliation{Center for Quantum Sciences and School of Physics, Northeast Normal University, Changchun, 130024, People's Republic of China} \affiliation{Center for Advanced Optoelectronic Functional Materials Research, and Key Laboratory for UV Light-Emitting Materials and Technology of Ministry of Education, Northeast Normal University, Changchun 130024, China}
\author{Jin-Hui Wu} \affiliation{Center for Quantum Sciences and School of Physics, Northeast Normal University, Changchun, 130024, People's Republic of China} \affiliation{Center for Advanced Optoelectronic Functional Materials Research, and Key Laboratory for UV Light-Emitting Materials and Technology of Ministry of Education, Northeast Normal University, Changchun 130024, China}
\author{X. X. Yi} \affiliation{Center for Quantum Sciences and School of Physics, Northeast Normal University, Changchun, 130024, People's Republic of China} \affiliation{Center for Advanced Optoelectronic Functional Materials Research, and Key Laboratory for UV Light-Emitting Materials and Technology of Ministry of Education, Northeast Normal University, Changchun 130024, China}
\author{Tai-Yu Zheng\footnote{zhengty@nenu.edu.cn}} \affiliation{Center for Quantum Sciences and School of Physics, Northeast Normal University, Changchun, 130024, People's Republic of China} \affiliation{Center for Advanced Optoelectronic Functional Materials Research, and Key Laboratory for UV Light-Emitting Materials and Technology of Ministry of Education, Northeast Normal University, Changchun 130024, China}
\date{\today}
\begin{abstract} The Knill-Laflamme-Milburn (KLM) states have been proved to be a useful resource for quantum information processing [E. Knill, R. Laflamme, and G. J. Milburn, \href{http://dx.doi.org/10.1038/35051009}{Nature 409, 46 (2001)}]. For atomic KLM states, several schemes have been put forward based on time-dependent unitary dynamics, but the dissipative generation of these states has not been reported. This work discusses the possibility of creating different forms of bipartite KLM states in a neutral-atom system, where the spontaneous emission of excited Rydberg states, combined with the Rydberg antiblockade mechanism, is actively exploited to engineer a steady KLM state from an arbitrary initial state. The numerical simulation of the master equation indicates that a fidelity above 99\% is available with current experimental parameters. \end{abstract}
\maketitle
\section{Introduction} The generation and stabilization of quantum entanglement \cite{PhysRev.47.777,pra022317ref1} is a remarkable research field in quantum information science, with various practical applications in quantum cryptography \cite{pra032323ref8}, quantum superdense coding \cite{pra032323ref9}, and quantum teleportation \cite{pra032323ref10}. As a specific class of entangled multiphoton states, the Knill-Laflamme-Milburn (KLM) states can overcome the issue that the success probability decreases with increasing complexity of the quantum computational scheme \cite{nature409}. { For example, utilizing the KLM state, scalable quantum computation can be performed with success probability $(1-1/n)$, where $n$ is the number of ancillae. This value is asymptotically close to unity for large $n$, in sharp contrast to the success probability of $25\%$ set by the impossibility of performing a complete Bell measurement \cite{prl110503ref3,prl110503}. Furthermore, based on the scheme of \cite{nature409}, Franson \textit{et al.} realized high-fidelity quantum logic operations assisted by a more general KLM state, in which the components of the quantum state are no longer an equal-weight superposition \cite{prl137901}. In their approach, the logic devices always produce an output, with an intrinsic error rate proportional to $1/n^2$.}
Recently, researchers have paid close attention to preparing and extending photonic KLM states \cite{cheng2012ref13,cheng2012ref15,cheng2012ref16,jpb195501}. Concretely, Franson \textit{et al.} made use of elementary linear-optics gates and a solid-state approach to produce arbitrary photon-number KLM states \cite{cheng2012ref13}. In addition, Lemr \textit{et al.} proposed two ways of preparing the KLM state, using spontaneous parametric down-conversion in experiment and employing a tunable controlled-phase gate in theory, respectively \cite{cheng2012ref16,jpb195501}. Nevertheless, all the schemes mentioned above are probabilistic due to the nature of the linear-optics system. { In \cite{cheng2012ref14}, Popescu proved that KLM-type quantum computation can be performed not only with photons but also with bosonic neutral atoms or bosonic ions, which makes the atomic KLM state meaningful. On the other hand, by virtue of cavity quantum electrodynamics technology, which can coherently couple atoms and photons, a deterministic KLM state for photons can be obtained through the mapping between atoms and photons, and then used for photonic KLM quantum computation.}
To date, schemes for generating KLM states of atoms or artificial atoms have been put forward with time-dependent unitary dynamics \cite{cheng2012generation,liu2014preparation}. Although these schemes are deterministic, the decoherence arising from the weak interaction between a quantum system and its surrounding environment decreases the fidelity of the target quantum state. Fortunately, there are also several schemes for preparing entangled states by dissipation \cite{prl090502ref12,prl090502ref13,prl090502ref11,pra012319ref13}, in which the dissipation is turned into a resource for certain quantum information processing tasks. For instance, Verstraete \textit{et al.} showed that, contrary to its usual role, dissipation can be used to engineer a large variety of strongly correlated steady states \cite{prl090502ref11}. Kastoryano \textit{et al.} proposed a scheme for the preparation of a maximally entangled state of two atoms in an optical cavity, in which the cavity decay was no longer undesirable but played an integral part in the dynamics \cite{pra012319ref14}. Very recently, Shao \textit{et al.} put forward a scheme for generating maximal entanglement between two Rydberg atoms through atomic spontaneous emission \cite{PhysRevA.95.062339}.
{ Rydberg atoms have been considered a good candidate for quantum information processing due to the Rydberg blockade effect \cite{pra063419ref1,pra063419ref2,pra063419ref3}.} The large size and large electric dipole moment of Rydberg atoms induce a long-range interaction. When the atoms are excited into Rydberg states within a small volume, the long-range interaction prevents multiple Rydberg excitations through the level shifts and can facilitate the formation of strongly correlated many-body systems. The Rydberg blockade mechanism provides a promising method to create quantum states, with applications ranging from quantum information processing \cite{pra063419ref6,pra063419ref5} to quantum nonlinear optics \cite{pra063419ref8,pra063419ref9}. However, Ates \textit{et al.} predicted an opposite effect, namely the Rydberg antiblockade \cite{pra012328ref18}: the two-photon detuning can compensate the energy shift of the Rydberg states, which results in the simultaneous transition of both atoms into Rydberg states and the inhibition of the Rydberg blockade. Since then, the Rydberg antiblockade has been used to deterministically implement quantum entanglement and multiqubit logic gates \cite{pra012328ref20,PhysRevA.89.012319,pra012328,pra022319,pra032336}. In particular, Carr \textit{et al.} utilized Rydberg-mediated interactions and dissipation to prepare high-fidelity entangled and antiferromagnetic states \cite{pra012328ref20}. Su \textit{et al.} proposed an alternative scheme to quickly reach the Rydberg antiblockade regime, which can be used to construct two- and multiqubit quantum logic gates \cite{pra022319}.
In this paper, we consider a dissipative scheme to prepare the bipartite KLM state via the Rydberg antiblockade mechanism. We compensate the energy shift of the Rydberg states by the two-photon detuning and the Stark shifts induced by classical optical lasers, turning the atomic spontaneous emission into a powerful resource for creating the KLM state, and eliminate the undesired states by a dispersive microwave field. Combined, these operations make the target state the unique steady state of the system. Therefore, high-fidelity entanglement can be realized without state initialization or precise control of the evolution time.
\section{The full and effective Markovian master equations of system} \begin{figure}
\caption{(a) Diagrammatic illustration of the dissipative scheme and the KLM state is $|E_1\rangle=(|00\rangle+|10\rangle+|11\rangle)/\sqrt{3}$. (b) The effective transitions of the reduced system.}
\label{model}
\end{figure}
The KLM state is defined as $|E_1\rangle=\sum_{j=0}^n|1\rangle^j|0\rangle^{n-j}/\sqrt{1+n}$ \cite{nature409}, where the notation $|a\rangle^j$ denotes $j$ copies of $|a\rangle$. For the case of $n = 2$, the bipartite KLM state reads \begin{eqnarray}
|E_1\rangle=(|00\rangle+|10\rangle+|11\rangle)/\sqrt{3}. \end{eqnarray}
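As a sanity check (our own, not part of the original derivation), the definition above can be realized numerically. The sketch below builds $|E_1\rangle$ for arbitrary $n$ in the computational basis and recovers the $n=2$ state of Eq.~(2); the helper name `klm_state` is ours.

```python
import numpy as np

def klm_state(n):
    """|E_1> = sum_{j=0}^n |1>^j |0>^(n-j) / sqrt(n+1) as a vector in the
    2^n-dimensional computational basis, with |0> = [1,0] and |1> = [0,1]."""
    zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    psi = np.zeros(2 ** n)
    for j in range(n + 1):
        term = np.array([1.0])
        for k in range(n):
            # the first j tensor factors are |1>, the remaining n-j are |0>
            term = np.kron(term, one if k < j else zero)
        psi += term
    return psi / np.sqrt(n + 1)

e1 = klm_state(2)
print(np.round(e1 * np.sqrt(3), 6))  # components of sqrt(3)|E_1>: 1, 0, 1, 1
```

The basis ordering is $|00\rangle,|01\rangle,|10\rangle,|11\rangle$, so the nonzero components sit exactly on $|00\rangle$, $|10\rangle$ and $|11\rangle$, as in Eq.~(2).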
To prepare the bipartite KLM state dissipatively, we show the atomic configuration in Fig.~\ref{model}(a), which illustrates two three-level Rydberg atoms, each consisting of two ground states $|0\rangle,|1\rangle$ and a Rydberg state $|r\rangle$. For the first atom, the ground state $|0\rangle$ is dispersively coupled to the excited state $|r\rangle$ by a classical field with Rabi frequency $\Omega_a$, while the transition $|1\rangle\leftrightarrow|r\rangle$ of the second atom is dispersively driven by another classical field with Rabi frequency $\Omega_b$. The driving fields are both detuned by $-\Delta$, and we set $\Omega_a=\Omega_b=\Omega$ for simplicity. The transitions between the ground states $|0\rangle$ and $|1\rangle$ are driven by two microwave fields of Rabi frequencies $\omega_a=-\omega_b=\omega$ and detuning $\delta$. The Rydberg state $|r\rangle$ is assumed to spontaneously decay to the ground states $|0\rangle$ and $|1\rangle$ with equal rates $\gamma/2$. The long-range interaction of the Rydberg atoms is described by $U_{rr}$ when both atoms occupy the same Rydberg state. In the interaction picture, the Hamiltonian of the system can be written as \begin{eqnarray}\label{HI}
H_I&=&\Omega e^{-i\Delta t}(|r\rangle_1\langle0|+|r\rangle_2\langle1|)+\sum_{j=1,2}\omega_j e^{i\delta t}|1\rangle_j\langle0|+{\rm H.c.}+U_{rr}|rr\rangle\langle rr|, \end{eqnarray} where the Rydberg-mediated interaction $U_{rr}$ originates from the dipole-dipole potential of the scale $C_3/r^3$ or the long-range van der Waals interaction proportional to $C_6/r^6$, with $r$ being the distance between two Rydberg atoms and $C_{3(6)}$ depending on the quantum numbers of the Rydberg state \cite{PhysRevA.77.032723,RevModPhys.82.2313}. After rotating the frame of the Hamiltonian, we obtain \begin{eqnarray}\label{HIrot}
H_I&=&\Omega(|r\rangle_1\langle0|+|r\rangle_2\langle1|)+\omega(|1\rangle_1\langle0|-|1\rangle_2\langle0|)+{\rm H.c.}\nonumber\\
&&-\Delta(|r\rangle_1\langle r|+|r\rangle_2\langle r|)+\delta(|1\rangle_1\langle1|-|0\rangle_2\langle0|)+U_{rr}|rr\rangle\langle rr|. \end{eqnarray}
The atomic spontaneous emission can be described by the Lindblad operators as $L^{1(2)}=\sqrt{\gamma/2}|0(1)\rangle_1\langle r|$, $L^{3(4)}=\sqrt{\gamma/2}|0(1)\rangle_2\langle r|$. The evolution of the system is now governed by the Markovian master equation \begin{eqnarray}\label{fullmaster} \dot\rho=-i[H_I,\rho]+\sum_{i=1}^4L^i\rho L^{i\dag}-\frac{1}{2}(L^{i\dag}L^i\rho+\rho L^{i\dag}L^i). \end{eqnarray} According to standard second-order perturbation theory, in the large-detuning regime $\Delta\gg\Omega$ with $U_{rr}\sim2\Delta$, the single-excitation process may be adiabatically eliminated and the two atoms can be excited into the Rydberg states simultaneously \cite{PhysRevA.95.062339}. The Hamiltonian can then be rewritten as \begin{eqnarray} H_{\rm eff}&=&
\frac{2\Omega^2}{\Delta}|rr\rangle\langle01|+\omega(|00\rangle-|11\rangle)(\langle10|-\langle01|)+{\rm H.c.}\nonumber\\
&&+\delta(|11\rangle\langle11|-|00\rangle\langle00|)+\frac{\Omega^2}{\Delta}(|0\rangle_1\langle0|+|1\rangle_2\langle1|)\nonumber\\
&&+(U_{rr}-2\Delta+\frac{2\Omega^2}{\Delta})|rr\rangle\langle rr|. \end{eqnarray}
The Stark shifts $\Omega^2/\Delta(|0\rangle_1\langle0|+|1\rangle_2\langle1|)$ can be canceled by introducing other ancillary levels and the term $2\Omega^2/\Delta|rr\rangle\langle rr|$ can be counteracted by setting $U_{rr}=2\Delta-2\Omega^2/\Delta$. Then we reformulate the above Hamiltonian as follows \begin{eqnarray}\label{Heff}
H_{\rm eff}&=&\Omega_{\rm eff}|rr\rangle\langle01|+\omega(|00\rangle-|11\rangle)(\langle10|-\langle01|)+{\rm H.c.}\nonumber\\
&&+\delta(|11\rangle\langle11|-|00\rangle\langle00|), \end{eqnarray} where $\Omega_{\rm eff}=2\Omega^2/\Delta$.
{ The corresponding effective Lindblad operators are \begin{eqnarray}\label{Leff}
L_{\rm eff}^{1(2)}&=&\sqrt{\frac{\gamma}{2}}|r0(1)\rangle\langle rr|,L_{\rm eff}^{3(4)}=\sqrt{\frac{\gamma}{2}}|0(1)r\rangle\langle rr|,\\
L_{\rm eff}^{5(6)}&=&\sqrt{\frac{\gamma}{2}}|0(1)0\rangle\langle r0|,L_{\rm eff}^{7(8)}=\sqrt{\frac{\gamma}{2}}|0(1)1\rangle\langle r1|,\\
L_{\rm eff}^{9(10)}&=&\sqrt{\frac{\gamma}{2}}|00(1)\rangle\langle 0r|,L_{\rm eff}^{11(12)}=\sqrt{\frac{\gamma}{2}}|10(1)\rangle\langle 1r|. \end{eqnarray} Finally we have the effective master equation as \begin{eqnarray}\label{effmaster} \dot\rho=-i[H_{\rm eff},\rho]+\sum_{k=1}^{12}L_{\rm eff}^k\rho L_{\rm eff}^{k\dag}-\frac{1}{2}(L_{\rm eff}^{k\dag}L_{\rm eff}^k\rho+\rho L_{\rm eff}^{k\dag}L_{\rm eff}^k). \end{eqnarray}
According to Eq.~(\ref{effmaster}), we plot the effective transitions of the reduced system in Fig.~\ref{model}(b) to explain how the scheme works. The system consists of four ground states $|00\rangle, |10\rangle, |01\rangle, |11\rangle$, four intermediate states $|r0\rangle,|r1\rangle,|0r\rangle,|1r\rangle$, and an excited state $|rr\rangle$. The four ground states are driven to each other by the microwave fields. (The microwave fields contribute almost nothing to the dissipative dynamics of the intermediate states, since these states are unstable, so this effect can be neglected in the effective master equation.) The ground state $|01\rangle$ is also coupled to the excited state $|rr\rangle$ with an effective Rabi frequency $\Omega_{\rm eff}$. Due to atomic spontaneous emission, the excited state $|rr\rangle$ decays to the four intermediate states, which in turn decay to the four ground states. Thus the system repeats this process of pumping and decaying. We find that when the detuning of the microwave fields satisfies $\delta=\omega$, the bipartite KLM state becomes the unique steady-state solution of Eq.~(\ref{effmaster}), because $H_{\rm eff}|E_1\rangle=0$ and $L_{\rm eff}^k|E_1\rangle=0$. Consequently, the system will finally be stabilized in the bipartite KLM state.}
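The dark-state property invoked above, $H_{\rm eff}|E_1\rangle=0$ at $\delta=\omega$, is easy to verify numerically. The sketch below (our own check, with arbitrary illustrative values for $\Omega_{\rm eff}$ and $\omega$) builds $H_{\rm eff}$ of Eq.~(\ref{Heff}) in the nine-dimensional two-atom basis and applies it to $|E_1\rangle$:

```python
import numpy as np

# Single-atom basis |0>, |1>, |r|; two-atom kets via Kronecker products.
kets = {s: np.eye(3)[i] for i, s in enumerate("01r")}
def ket(ab):               # e.g. ket("rr") = |r>|r>
    return np.kron(kets[ab[0]], kets[ab[1]])
def op(out, inn):          # |out><inn|
    return np.outer(ket(out), ket(inn))

Omega_eff, omega = 0.4, 1.0   # illustrative values, not from the paper
delta = omega                 # the resonance condition delta = omega

# H_eff = Omega_eff|rr><01| + omega(|00>-|11>)(<10|-<01|) + H.c.
#         + delta(|11><11| - |00><00|)
H = (Omega_eff * op("rr", "01")
     + omega * (op("00", "10") - op("00", "01")
                - op("11", "10") + op("11", "01")))
H = H + H.conj().T
H += delta * (op("11", "11") - op("00", "00"))

E1 = (ket("00") + ket("10") + ket("11")) / np.sqrt(3)
print(np.allclose(H @ E1, 0))  # |E_1> is a dark state of H_eff
```

The Lindblad condition $L_{\rm eff}^k|E_1\rangle=0$ holds trivially, since $|E_1\rangle$ has no support on $|01\rangle$ or on any Rydberg-containing state.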
\section{The demonstration of the validity of effective master equation and the investigation of relevant parameters} \begin{figure}
\caption{ {(a) The evolution of the negativity of the system governed by the full master equation. (b) The evolution of the purity of the system, Tr$[\rho^2(t)]$, governed by the full master equation. (c) and (d) show the populations of the states $|E_1\rangle,|E_2\rangle,|E_3\rangle$ and $|E_4\rangle$ as functions of $\Omega t$, governed by the full and effective master equations, respectively. For all figures, the initial states are chosen arbitrarily as $\rho_0=a|00\rangle\langle00|+b|11\rangle\langle11|+c|10\rangle\langle10|+d|01\rangle\langle01|$, where $a=0.3,~b=0.15,~c=0.45,$ and $d=0.1$. The relevant parameters: $\Delta=70\Omega,~\delta=0.02\Omega,$ and $\gamma=0.05\Omega$.}}
\label{zong}
\end{figure}
{ In order to fully demonstrate the validity of our proposal, we measure the evolution of the system with different quantities. In Fig.~\ref{zong}(a), we investigate the negativity of the state $\rho(t)$ obtained from the full master equation as a function of $\Omega t$. The negativity is a measure proposed by Vidal \textit{et al.} \cite{pra032314}. It bounds two relevant quantities characterizing the entanglement of mixed states, the channel capacity and the distillable entanglement. The definition of negativity is \begin{equation}
\mathcal{N}(\rho_{A,B})=\frac{||\rho_{A,B}^{T_A}||-1}{2}, \end{equation} where \begin{equation}
||\rho_{A,B}^{T_A}||={\rm Tr}\sqrt{\rho_{A,B}^{T_A\dag}\rho_{A,B}^{T_A}}, \end{equation}
and $\rho_{A,B}^{T_A}$ denotes the partial transpose of the bipartite mixed state $\rho$ with respect to subsystem $A$. To increase the credibility, the initial state is chosen arbitrarily as $\rho_0=a|00\rangle\langle00|+b|11\rangle\langle11|+c|10\rangle\langle10|+d|01\rangle\langle01|$, where $a=0.3,~b=0.15,~c=0.45,$ and $d=0.1$. The negativity of $\rho(t)$ governed by the full master equation reaches $0.3275$ at $\Omega t=5000$, which approaches $1/3$, the ideal value for the bipartite KLM state. However, we cannot yet conclude that the steady state is the pure KLM state, since other mixed states may have the same negativity.
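The quoted ideal value $1/3$ follows directly from the definition. The following sketch (our own verification, not from the paper) computes the negativity of the pure bipartite KLM state via the partial transpose:

```python
import numpy as np

# |E_1> = (|00> + |10> + |11>)/sqrt(3) in the basis |00>,|01>,|10>,|11>
e1 = np.array([1.0, 0.0, 1.0, 1.0]) / np.sqrt(3)
rho = np.outer(e1, e1)

# Partial transpose on subsystem A: reshape to (a, b, a', b') and swap a <-> a'.
rho_TA = rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)

# Trace norm = sum of singular values; N = (||rho^T_A||_1 - 1)/2.
negativity = (np.sum(np.linalg.svd(rho_TA, compute_uv=False)) - 1) / 2
print(negativity)  # close to 1/3, the ideal value quoted in the text
```

The simulated steady-state value $0.3275$ then sits just below this analytic bound.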
In Fig.~\ref{zong}(b), we further characterize the evolution of the system by the purity, defined as $\mathcal{P}(t)={\rm Tr}[\rho^2(t)]$. By definition, the purity equals unity if and only if the system is in a pure state, and is less than unity otherwise. In Fig.~\ref{zong}(b), the system begins with the same mixed state $\rho_0=a|00\rangle\langle00|+b|11\rangle\langle11|+c|10\rangle\langle10|+d|01\rangle\langle01|$, with $a=0.3,~b=0.15,~c=0.45,$ and $d=0.1$. At $\Omega t=5000$, the purity reaches $0.985$. Combined with the negativity in Fig.~\ref{zong}(a), the behavior in Fig.~\ref{zong}(b) shows directly that the system tends to a pure state and indirectly that the bipartite KLM state is produced.
\begin{figure*}\label{Delta}
\end{figure*}
In Figs.~\ref{zong}(c) and \ref{zong}(d), we plot the evolution of the populations of states $|\varphi\rangle$ governed by the full and effective master equations, respectively. The population of a state $|\varphi\rangle$ is defined as $P=\langle \varphi|\rho(t)|\varphi\rangle$. Fig.~\ref{zong}(c) convincingly demonstrates the feasibility of the present scheme: the population of the target state $|E_1\rangle$ (solid line) tends to unity, while the other orthonormal states $|E_2\rangle$ (dotted-dashed line), $|E_3\rangle$ (dashed line), and $|E_4\rangle$ (dotted line) tend to vanish, where \begin{eqnarray}
|E_2\rangle&=&\frac{1}{\sqrt{15}}(|00\rangle-3|01\rangle-2|10\rangle+|11\rangle),\\
|E_3\rangle&=&\frac{1}{\sqrt{5}}(\theta_+|00\rangle+|01\rangle-|10\rangle-\theta_-|11\rangle),\\
|E_4\rangle&=&\frac{1}{\sqrt{5}}(\theta_-|00\rangle-|01\rangle+|10\rangle-\theta_+|11\rangle), \end{eqnarray} and $\theta_\pm=(\sqrt{5}\pm1)/2$. In Fig.~\ref{zong}(d), the populations of the reduced system are in good agreement with the corresponding populations in Fig.~\ref{zong}(c), which proves the validity of the effective system and the above physical explanation of Fig.~\ref{model}(b).}
Now we investigate the influence of the detuning parameter $\Delta$ on the generation of the KLM state in Fig.~\ref{Delta}(a). The Rydberg antiblockade requires the large-detuning limit $\Delta\gg\Omega$ to adiabatically eliminate the single-excitation process. In Fig.~\ref{Delta}(a), this limiting condition is degraded as $\Delta$ decreases, which reduces the population of the target state. However, the convergence time becomes shorter with decreasing $\Delta$, since the effective coupling strength $\Omega_{\rm eff}=2\Omega^2/\Delta$ is enlarged.
Another way to evaluate the quality of the steady state is to calculate the fidelity. In Fig.~\ref{Delta}(b), we obtain the steady-state solution of the system by solving the full master equation with $\dot\rho=0$, and study the steady-state fidelity of the bipartite KLM state as a function of the atomic spontaneous emission rate and the detuning of the microwave field, where the steady-state fidelity is defined as $F={\rm Tr}[\sqrt{\sqrt{\rho_E}\rho(t)\sqrt{\rho_E}}]$. The dashed lines represent contours of the fidelity. { The figure provides strong evidence that the present scheme can realize a high-fidelity bipartite KLM state over a wide range of relevant parameters: even when $\gamma/\Omega=0.3$ and $\delta/\Omega=0.1$, the fidelity of the target state is still above $95\%$.}
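Since $\rho_E=|E_1\rangle\langle E_1|$ is a rank-one projector, $\sqrt{\rho_E}=\rho_E$ and the fidelity above reduces to $F=\sqrt{\langle E_1|\rho|E_1\rangle}$. The sketch below (our own check, using the sample mixed initial state from Fig.~\ref{zong}) confirms that this shortcut agrees with the general definition; the helper `sqrtm_psd` is ours:

```python
import numpy as np

e1 = np.array([1.0, 0.0, 1.0, 1.0]) / np.sqrt(3)  # |E_1>
rho_E = np.outer(e1, e1)

# Sample mixed state: rho_0 from Fig. 2, in the basis |00>,|01>,|10>,|11>.
rho = np.diag([0.3, 0.1, 0.45, 0.15])

def sqrtm_psd(m):
    """Matrix square root of a positive-semidefinite Hermitian matrix."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.conj().T

F_full = np.trace(sqrtm_psd(sqrtm_psd(rho_E) @ rho @ sqrtm_psd(rho_E))).real
F_pure = np.sqrt(e1 @ rho @ e1)  # shortcut, valid because the target is pure
print(np.isclose(F_full, F_pure))
```

For this particular $\rho_0$, both expressions give $F=\sqrt{0.3}\approx0.55$, which is simply the overlap of the initial state with the target before any dissipative pumping.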
\section{Experimental feasibility} \begin{figure}
\caption{(a) Scheme of the experimental setup. Two ${}^{87}$Rb atoms are driven by two laser beams, respectively. (b) Internal energy levels of the corresponding atoms.}
\label{exsetup}
\end{figure}
{ In experiment, we may employ $^{87}$Rb atoms in our proposal, as shown in Fig.~\ref{exsetup}. Each atom can be addressed individually by two classical fields. The transition from the ground state $|0\rangle=|5s_{1/2};F=1\rangle$ to the intermediate state $|5p_{3/2};F=2\rangle$ of the first atom is driven by a $\sigma_+$-polarized laser beam at $780$ nm, detuned by $-\Delta_M$. A $\sigma_+$-polarized $480$-nm laser beam achieves the further transition from the intermediate state $|5p_{3/2};F=2\rangle$ to the Rydberg state $|r\rangle=|95d_{5/2};F=3\rangle$ with detuning $\Delta_M-\Delta$. For the second atom, a $\pi$-polarized $780$-nm laser beam couples the ground state $|1\rangle=|5s_{1/2};F=2\rangle$ to the intermediate state $|5p_{3/2};F=2\rangle$ with detuning $-\Delta_M$, and a $\sigma_+$-polarized $480$-nm laser beam then couples $|5p_{3/2};F=2\rangle$ to the Rydberg state $|r\rangle=|95d_{5/2};F=3\rangle$ with detuning $\Delta_M-\Delta$. The detuning parameter $\Delta_M$ is made large enough compared with the Rabi frequencies of the driving fields that the population of the intermediate state can be safely omitted \cite{PhysRevLett.104.010502,PhysRevLett.104.010503,PhysRevA.82.030306}. The coupling between the two ground states can be completed directly by microwave fields or by an equivalent Raman transition, which is not shown in Fig.~\ref{exsetup}.}
The Rabi frequency $\Omega$ realized by the above two-photon process can be tuned continuously within $2\pi\times(0,60)~$MHz \cite{prl090402}. Thus, considering the decay rate of the Rydberg state $\gamma=2\pi\times0.03~$MHz \cite{njp043020}, we choose the parameters $\Omega=14$~MHz and $\Delta=600$~MHz, and obtain a fidelity of $99.67\%$ { (empty triangle)} at $t=30000/\Omega\approx2.14$~ms in Fig.~\ref{extime}. For another configuration of Rydberg atoms, such as in \cite{prl223002}, where the principal quantum number of the Rydberg state of the $^{87}$Rb atom is chosen as $N=20$, the decay rate can reach $2\pi\times100~$kHz. In this situation, a choice of $\Omega=20$~MHz and $\Delta=900$~MHz guarantees a steady-state fidelity of $99.70\%$ at $t=1.5$~ms, as indicated by the { empty circle} in Fig.~\ref{extime}. To sum up, we are always able to achieve a high fidelity by selecting Rabi frequencies and detuning parameters appropriate to the decay rates of the excited Rydberg states. \begin{figure}
\caption{The fidelity of the KLM state $|E_1\rangle$ as a function of $\Omega t$ with different experimental parameters and $\delta=0.02\Omega$. The initial states are all $\rho_0=|00\rangle\langle00|$. }
\label{extime}
\end{figure} {\section{Generalization: Preparation of a more general bipartite KLM state }
{ The general KLM state (with a triangle-shaped amplitude function), $|E_1'\rangle=\alpha_0|00\rangle+\alpha_1|10\rangle+\alpha_0|11\rangle,$ can be used to realize a high-fidelity approach to linear-optics quantum computing that does not rely on postselection, and one can choose suitable values of $\alpha_j$ to significantly increase the efficiency of teleportation-based quantum computing \cite{prl137901,jpb195501}. In our scheme, we can change the relation between the Rabi frequency $\omega$ and the detuning $\delta$ of the microwave fields to create a general bipartite KLM state. For instance, we now set $\delta=m\omega$ and calculate the steady-state solution of the master equation from $\dot\rho=0$. The unique steady-state solution of the system reads
\begin{eqnarray}
|E_1'\rangle=\frac{1}{\sqrt{2+m^2}}|00\rangle+\frac{m}{\sqrt{2+m^2}}|10\rangle+\frac{1}{\sqrt{2+m^2}}|11\rangle, \end{eqnarray} which is just a general bipartite KLM state. The other orthonormal ground states can be written as \begin{eqnarray}
|E_2'\rangle&=&\frac{1}{\sqrt{(2+m^2)(4+m^2)}}\Big[m|00\rangle-(2+m^2)|01\rangle-2|10\rangle+m|11\rangle\Big],\\
|E_3'\rangle&=&\frac{1}{\sqrt{4+m^2}}(\theta_+'|00\rangle+|01\rangle-|10\rangle-\theta_-'|11\rangle),\\
|E_4'\rangle&=&\frac{1}{\sqrt{4+m^2}}(\theta_-'|00\rangle-|01\rangle+|10\rangle-\theta_+'|11\rangle), \end{eqnarray} where $\theta_\pm'=(\sqrt{3+2m^2}\pm1)/2$. \begin{figure}
\caption{The fidelity of the general bipartite KLM state $|E_1'\rangle$ as a function of $\Omega t$ with different $m$. The relevant parameters are the experimental values $(\Delta,\Omega,\gamma)/2\pi=(900,20,0.1)$~MHz and $\delta=0.02\Omega$. The initial state is $\rho_0=|00\rangle\langle00|$ in all cases. }
\label{mchange}
\end{figure}
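The steady-state solution above can be cross-checked directly. The following sketch (not part of the original paper; the basis ordering $|00\rangle,|01\rangle,|10\rangle,|11\rangle$ is an assumption) verifies that $|E_1'\rangle$ is normalized and orthogonal to $|E_2'\rangle$ for several values of $m$, and that $m=1$ (i.e. $\delta=\omega$) recovers the standard KLM state:

```python
import numpy as np

# Basis ordering (|00>, |01>, |10>, |11>) is assumed here.
def E1(m):
    return np.array([1.0, 0.0, m, 1.0]) / np.sqrt(2 + m**2)

def E2(m):
    return np.array([m, -(2 + m**2), -2.0, m]) / np.sqrt((2 + m**2) * (4 + m**2))

for m in (0.5, 1.0, 2.0, 3.0):
    assert np.isclose(E1(m) @ E1(m), 1)   # |E_1'> normalized
    assert np.isclose(E2(m) @ E2(m), 1)   # |E_2'> normalized
    assert np.isclose(E1(m) @ E2(m), 0)   # mutually orthogonal

# delta = omega (m = 1) recovers the standard KLM state (|00>+|10>+|11>)/sqrt(3)
assert np.allclose(E1(1.0), np.array([1.0, 0.0, 1.0, 1.0]) / np.sqrt(3))
```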
In order to demonstrate the success of the generalization, we plot the fidelity of the general bipartite KLM state $|E_1'\rangle$ as a function of $\Omega t$ for different $m$ in Fig.~\ref{mchange}. Without loss of generality, we select one set of experimental parameters from Fig.~\ref{extime}, i.e. $(\Delta,\Omega,\gamma)/2\pi=(900,20,0.1)$~MHz and $\delta=0.02\Omega$. We find that at $\Omega t=5\times10^4$ the fidelities for $m=0.5$ (empty circle), $m=2$ (empty square) and $m=3$ (empty triangle) reach $99.64\%$, $99.88\%$ and $99.72\%$, respectively. These results confirm the validity of the generalization.}
\section{Summary} In summary, we have systematically investigated the feasibility of dissipatively generating the bipartite KLM state with two three-level Rydberg atoms. Atomic spontaneous emission acts as a powerful resource for creating the bipartite KLM state. We counteract the energy shift of the Rydberg states by the two-photon detuning, adiabatically eliminate the single-excitation process in the large-detuning limit, and pick out the desired state with dispersive microwave fields. Thus the target state is the unique steady state of the system, and it can be generated over a wide range of the relevant parameters. Finally, we generalize our scheme to prepare a more general bipartite KLM state by adjusting the relation between the Rabi frequency and the detuning of the microwave fields. The numerical simulations reveal that both the standard KLM state and the generalized KLM state possess high fidelities with present experimental parameters.
\section*{Funding} National Natural Science Foundation of China (NSFC) (11534002, 61475033, 11774047); Fundamental Research Funds for the Central Universities (2412016KJ004).
\end{document}
\begin{document}
\title{\Large Cavity-catalyzed deterministic generation of maximal entanglement between nonidentical atoms } \author{\bf Nguyen Ba An} \email{nbaan@kias.re.kr} \affiliation{School of Computational Sciences, Korea Institute for Advanced Study, 207-43 Cheongryangni 2-dong, Dongdaemun-gu, Seoul 130-722, Republic of Korea}
\begin{abstract} By exactly solving the underlying Schr\"{o}dinger equation we show that one and the same resonant cavity can be used as a catalyst to maximally entangle atoms of two nonidentical groups. The generation scheme is realistic and advantageous in the sense that it is deterministic, efficient, scalable and immune from decoherence effects. \end{abstract}
\maketitle
\noindent\textbf{1. Introduction}
Quantum entanglement, or simply entanglement, though known long ago \cite{qe}, has recently become a key ingredient for performing various tasks of quantum information processing and quantum computing in nonclassical ways \cite{r3}. Not only can two parties be entangled with each other; entanglement also exists among many parties. The two best known genuine multipartite entangled states are the GHZ-states \cite{ghz} and the W-states \cite{w}, which are inequivalent and exhibit qualitatively different behaviors (see, e.g. \cite{r7}). Since multipartite entangled states are prerequisite resources for quantum network communication and scalable quantum computers (see, e.g. \cite{r4}), their generation plays a crucial role. Many generation schemes based on diverse mechanisms (see, e.g. \cite{schemes}) have been studied in detail so far. Here we are concerned with a cavity-based scheme to generate $N$-partite maximally entangled W-states of the form \begin{equation}
\left| W_{N}\right\rangle =\frac{1}{\sqrt{N}}\left( \left|
e_{1}g_{2}...g_{N}\right\rangle +\left| g_{1}e_{2}...g_{N}\right\rangle
+...+\left| g_{1}g_{2}...e_{N}\right\rangle \right) \label{WN} \end{equation}
where $\left| g_{j}\right\rangle $ and $\left| e_{j}\right\rangle $ are the ground and the excited state of the $j^{th}$ two-level atom. In dealing with the atom-cavity interaction most theoretical schemes have assumed, for simplicity, identical atoms in the sense that each atom is coupled equally to the cavity \cite{iden1,iden2}. To achieve error-free results, synchronization is required: all the atoms must interact with the cavity simultaneously, i.e. they should be sent at the same time through different paths so that they enter the cavity at different entrance points. Since the cavity mode has a spatial profile \cite{duan}, different atoms then actually experience different couplings to the cavity. In this sense the atoms in an experiment are not invariant under atom exchange or, equivalently, they are nonidentical with respect to their coupling to the cavity. Only very recently have such practical scenarios been taken into account, for $N=2$ nonidentical atoms \cite{M1} as well as for an arbitrary number $N$ of atoms among which $M=1$ atom is nonidentical to the remaining $N-1$ identical atoms \cite{M}. This work considers more general partitions of nonidentical atoms. In section 2 we explicitly prove that neither totally nonidentical atoms nor totally identical atoms can be maximally entangled by means of interaction with a resonant cavity. The fully symmetric multi-atom entanglement of the form (\ref{WN}) can, however, be generated if the atomic ensemble possesses a partial asymmetry. Section 3 deals with a bipartition in which the atomic ensemble consists of two groups such that the coupling to the cavity is equal for atoms in the same group but unequal for atoms in different groups. The generation time is assessed in section 4. Section 5 concludes.
\vskip 0.5cm
\noindent \textbf{2. Totally nonidentical atoms}
Consider $N$ two-level atoms interacting resonantly with a single-mode cavity field. In the interaction picture and rotating-wave approximation the Hamiltonian of the atom-field system reads \begin{equation} H=\sum_{j=1}^{N}f_{j}\left( a^{+}S_{j}^{-}+S_{j}^{+}a\right) \label{H} \end{equation}
where $a$ $(a^{+})$ denotes the annihilation (creation) operator of the resonant single-mode field in the cavity, $S_{j}^{-}=\left|
g_{j}\right\rangle \left\langle e_{j}\right| ,$ $S_{j}^{+}=\left|
e_{j}\right\rangle \left\langle g_{j}\right| $ and $f_{j}$ measures the strength of the interaction between the $j^{th}$ atom and the same field mode. The atoms are assumed totally nonidentical in the sense that $
f_{j}\neq f_{j^{\prime }}$ for $j\neq j^{\prime }.$ The system dynamics governed by the Hamiltonian (\ref{H}) conserves the so-called excitation number $\mathcal{N}$ defined as the number of photons plus the number of excited atoms, i.e. $\mathcal{N}=a^{+}a+\sum_{j=1}^{N}\left|
e_{j}\right\rangle \left\langle e_{j}\right| $. We shall be interested in the subspace having $\mathcal{N}=1.$ In this subspace there are $N+1$ basic states which can be chosen as $\left| e_{1}g_{2}...g_{N}\right\rangle \left|
0\right\rangle ,$ $\left| g_{1}e_{2}...g_{N}\right\rangle \left|
0\right\rangle ,$ $...,$ $\left| g_{1}g_{2}...e_{N}\right\rangle \left|
0\right\rangle $ and $\left| g_{1}g_{2}...g_{N}\right\rangle \left|
1\right\rangle ,$ where the latter ket denotes the photon number state. At any time $t$ the combined atom-field state $\left| \Psi (t)\right\rangle $ can be represented as a linear superposition of the $N+1$ basic states as \begin{eqnarray*}
\left| \Psi (t)\right\rangle &=&C_{1}(t)\left|
e_{1}g_{2}...g_{N}\right\rangle \left| 0\right\rangle +C_{2}(t)\left|
g_{1}e_{2}...g_{N}\right\rangle \left| 0\right\rangle \\
&&+...+C_{N}(t)\left| g_{1}g_{2}...e_{N}\right\rangle \left| 0\right\rangle
+C_{N+1}\left| g_{1}g_{2}...g_{N}\right\rangle \left| 1\right\rangle . \end{eqnarray*}
The differential equations for the coefficients $C^{\prime }s(t)$ can be derived from the Schr\"{o}dinger equation $i\left| \stackrel{.}{\Psi }
(t)\right\rangle =H\left| \Psi (t)\right\rangle .$ As a result, we get
\begin{equation}
\left.
\begin{tabular}{ccc}
$i\stackrel{.}{C}_{1}(t)$ & $=$ & $f_{1}C_{N+1}(t),$ \\
$i\stackrel{.}{C}_{2}(t)$ & $=$ & $f_{2}C_{N+1}(t),$ \\
$\cdots $ & $=$ & $\ldots $ \\
$i\stackrel{.}{C}_{N}(t)$ & $=$ & $f_{N}C_{N+1}(t),$ \\
$i\stackrel{.}{C}_{N+1}(t)$ & $=$ & $\sum_{j=1}^{N}f_{j}C_{j}(t)$
\end{tabular}
\right\} . \label{e1}
\end{equation}
The general solution of Eqs. (\ref{e1}) is
\begin{eqnarray}
C_{k}(t) &=&\frac{f_{k}^{2}\cos (\Omega t)+\sum_{j=1,j\neq k}^{N}f_{j}^{2}}{\Omega ^{2}}C_{k}(0) \nonumber \\
&&+\frac{f_{k}[\cos (\Omega t)-1]}{\Omega ^{2}}\sum_{j=1,j\neq k}^{N}f_{j}C_{j}(0) \nonumber \\
&&-\frac{if_{k}\sin (\Omega t)}{\Omega }C_{N+1}(0) \label{Ck}
\end{eqnarray}
for $k=1,2,...,N$ and
\begin{equation}
C_{N+1}(t)=-\frac{i\sin (\Omega t)}{\Omega }\sum_{j=1}^{N}f_{j}C_{j}(0)+\cos (\Omega t)C_{N+1}(0). \label{CNc1}
\end{equation}
In Eqs. (\ref{Ck}) and (\ref{CNc1}) $\Omega =\sqrt{\sum_{j=1}^{N}f_{j}^{2}}.$ At time $t=t^{\prime }=\pi /\Omega ,$ Eqs. (\ref{Ck}) and (\ref{CNc1}) reduce to
\begin{equation}
C_{k}(t^{\prime })=\frac{\Omega ^{2}-2f_{k}^{2}}{\Omega ^{2}}C_{k}(0)-\frac{2f_{k}}{\Omega ^{2}}\sum_{j=1,j\neq k}^{N}f_{j}C_{j}(0), \label{Ckt'}
\end{equation}
\begin{equation}
C_{N+1}(t^{\prime })=-C_{N+1}(0). \label{CNc1t'}
\end{equation}
If the system is initially prepared in a state in which there are no photons and only one atom, say the $q^{th}$ atom with $q\in [1,N],$ is excited, then Eqs. (\ref{Ckt'}) and (\ref{CNc1t'}) simplify to
\[
C_{q}(t^{\prime })=1-\frac{2f_{q}^{2}}{\Omega ^{2}},
\]
\[
C_{p\neq q}(t^{\prime })=-\frac{2f_{p}f_{q}}{\Omega ^{2}},
\]
\[
C_{N+1}(t^{\prime })=0.
\]
Since $C_{N+1}(t^{\prime })=0$, the system at time $t=t^{\prime }$ becomes decoupled: the cavity returns to its initial vacuum state while the $N$ atoms turn out to be in the entangled state \begin{equation}
\left| \mathcal{A}(t^{\prime })\right\rangle =\left( 1-\frac{2f_{q}^{2}}{
\Omega ^{2}}\right) \left| ...e_{q}...\right\rangle -\frac{2f_{q}}{\Omega
^{2}}\sum_{p=1,p\neq q}^{N}f_{p}\left| ...e_{p}...\right\rangle \label{At} \end{equation}
where $\left| ...e_{j}...\right\rangle $ denotes a state in which only the $ j^{th}$ atom is excited while all the other $N-1$ atoms remain unexcited. From Eq. (\ref{At}) it is transparent that the maximally entangled state (
\ref{WN}) cannot be produced since the atoms are totally nonidentical: the weight coefficients of $\left| ...e_{j}...\right\rangle $ are all different from each other. It is worth noting also that even in the case of totally identical atoms, i.e. $f_{j}=f$ $\forall j,$ maximal entanglement does not arise because in this case the atoms would appear in the state \[
\left| \mathcal{A}^{\prime }(t^{\prime })\right\rangle =\left( 1-\frac{2}{N}
\right) \left| ...e_{q}...\right\rangle -\frac{2}{N}\sum_{p=1,p\neq q}^{N}\left| ...e_{p}...\right\rangle \] which is clearly entangled but not maximally.
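Since the results above follow from exact diagonalization in the single-excitation subspace, they are easy to check numerically. The following sketch (not part of the original paper; the coupling values $f_j$ are illustrative) builds the $(N+1)\times(N+1)$ Hamiltonian matrix and verifies that at $t'=\pi/\Omega$ the cavity amplitude vanishes while the atomic amplitudes match Eq. (\ref{At}):

```python
import numpy as np

# Numerical check of Eqs. (Ckt'), (CNc1t') and (At); the coupling
# values f_j below are illustrative, not taken from the paper.
f = np.array([1.0, 1.7, 2.3, 0.6])       # distinct couplings f_1..f_N
N = len(f)
Omega = np.sqrt(np.sum(f**2))

# Hamiltonian restricted to the single-excitation subspace, basis
# {|e_1 g..g>|0>, ..., |g..g e_N>|0>, |g..g>|1>}
H = np.zeros((N + 1, N + 1))
H[:N, N] = f
H[N, :N] = f

# exact propagator U(t) = exp(-iHt) via eigendecomposition of H
vals, vecs = np.linalg.eigh(H)
def U(t):
    return vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T

q = 1                                    # the q-th atom starts excited
C0 = np.zeros(N + 1)
C0[q] = 1.0
C = U(np.pi / Omega) @ C0

assert abs(C[N]) < 1e-10                 # cavity decouples at t' = pi/Omega
expected = -2 * f * f[q] / Omega**2 + (np.arange(N) == q)
assert np.allclose(C[:N], expected)      # amplitudes of Eq. (At)
```

The check passes for any choice of distinct couplings, confirming that the resulting atomic state is entangled but never maximally so.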
\vskip 0.5cm
\noindent \textbf{3. Partially nonidentical atoms}
In this section we shall show that the fully symmetric $N$-atom W-state (\ref {WN}) can be generated if the atomic ensemble possesses a partial asymmetry. We suppose that the $N$ atoms consist of two nonidentical groups. In group 1 there are $M_{1}$ $(1\leq M_{1}<N)$ identical atoms whereas the number of identical atoms in group 2 is $M_{2}=N-M_{1}.$ The asymmetry arises from the fact that each atom of group 1 interacts equally with the cavity mode with a coupling constant $f_{1}$ while each atom of group 2 also interacts equally with the cavity mode but with a different coupling constant $f_{2}\neq f_{1}. $ Under such an asymmetric situation the differential equations for the coefficients $C^{\prime }s(t)$ read
\begin{equation} \left. \begin{tabular}{ccc} $i\stackrel{.}{C}_{m}(t)$ & $=$ & $f_{1}C_{N+1}(t),$ \\ $i\stackrel{.}{C}_{n}(t)$ & $=$ & $f_{2}C_{N+1}(t),$ \\ $i\stackrel{.}{C}_{N+1}(t)$ & $=$ & $M_{1}f_{1}C_{m}(t)+M_{2}f_{2}C_{n}(t)$ \end{tabular} \right\} \label{e2} \end{equation} where $m=1,2,...,M_{1}$ and $n=M_{1}+1,M_{1}+2,...,N.$ The general solution of Eqs. (\ref{e2}) is \begin{eqnarray} C_{m}(t) &=&\frac{M_{1}f_{1}^{2}\cos (\omega t)+M_{2}f_{2}^{2}}{\omega ^{2}} C_{m}(0) \nonumber \\ &&+\frac{M_{2}f_{1}f_{2}[\cos (\omega t)-1]}{\omega ^{2}}C_{n}(0) \nonumber \\ &&-\frac{if_{1}\sin (\omega t)}{\omega }C_{N+1}(0), \label{bm} \end{eqnarray} \begin{eqnarray} C_{n}(t) &=&\frac{M_{1}f_{1}f_{2}[\cos (\omega t)-1]}{\omega ^{2}}C_{m}(0) \nonumber \\ &&+\frac{M_{1}f_{1}^{2}+M_{2}f_{2}^{2}\cos (\omega t)}{\omega ^{2}}C_{n}(0) \nonumber \\ &&-\frac{if_{2}\sin (\omega t)}{\omega }C_{N+1}(0), \label{bn} \end{eqnarray} \begin{eqnarray} C_{N+1}(t) &=&-\frac{iM_{1}f_{1}\sin (\omega t)}{\omega }C_{m}(0) \nonumber \\ &&-\frac{iM_{2}f_{2}\sin (\omega t)}{\omega }C_{n}(0) \nonumber \\ &&+\cos (\omega t)C_{N+1}(0) \label{b} \end{eqnarray} where $\omega =\sqrt{M_{1}f_{1}^{2}+M_{2}f_{2}^{2}}.$ At time $t=\theta =\pi /\omega $ Eqs. (\ref{bm}), (\ref{bn}) and (\ref{b}) reduce to \begin{equation} C_{m}(\theta )=\frac{M_{2}f_{2}^{2}-M_{1}f_{1}^{2}}{\omega ^{2}}C_{m}(0)- \frac{2M_{2}f_{1}f_{2}}{\omega ^{2}}C_{n}(0), \label{cm} \end{equation} \begin{equation} C_{n}(\theta )=-\frac{2M_{1}f_{1}f_{2}}{\omega ^{2}}C_{m}(0)+\frac{ M_{1}f_{1}^{2}-M_{2}f_{2}^{2}}{\omega ^{2}}C_{n}(0), \label{cn} \end{equation} \begin{equation} C_{N+1}(\theta)=-C_{N+1}(0). \label{c} \end{equation} If the initial system were prepared in the state \begin{eqnarray}
\left| W_{M_{1}}^{-}\right\rangle &=&-\frac{1}{\sqrt{M_{1}}}\left( \left| e_{1}g_{2}...g_{M_{1}}g_{M_{1}+1}...g_{N}\right\rangle \right. \nonumber \\
&&+\left| g_{1}e_{2}...g_{M_{1}}g_{M_{1}+1}...g_{N}\right\rangle \nonumber \\
&&\left. +...+\left| g_{1}g_{2}...e_{M_{1}}g_{M_{1}+1}...g_{N}\right\rangle \right) \label{WM} \end{eqnarray} realized by setting the initial conditions as \begin{eqnarray} C_{1}(0) &=&C_{2}(0)=...=C_{M_{1}}(0)=-\frac{1}{\sqrt{M_{1}}}, \nonumber \\ C_{M_{1}+1}(0) &=&C_{M_{1}+2}(0)=...=C_{N}(0)=C_{N+1}(0)=0, \label{CM} \end{eqnarray} then Eqs. (\ref{cm}), (\ref{cn}) and (\ref{c}) simplify to \[ C_{m}(\theta )=\frac{M_{1}f_{1}^{2}-M_{2}f_{2}^{2}}{\omega ^{2}\sqrt{M_{1}}}, \] \[ C_{n}(\theta )=\frac{2\sqrt{M_{1}}f_{1}f_{2}}{\omega ^{2}}, \] \[ C_{N+1}(\theta )=0 \] and the $N$ atoms appear in the entangled state \begin{equation}
\left| \mathcal{A}(\theta )\right\rangle =\frac{M_{1}f_{1}^{2}-M_{2}f_{2}^{2}
}{\omega ^{2}\sqrt{M_{1}}}\sum_{m=1}^{M_{1}}\left| ...e_{m}...\right\rangle +
\frac{2\sqrt{M_{1}}f_{1}f_{2}}{\omega ^{2}}\sum_{n=M_{1}+1}^{N}\left| ...e_{n}...\right\rangle . \label{aa} \end{equation} If the coupling constants are controlled so that their ratio $r=f_{1}/f_{2}$ satisfies the following constraint \begin{equation} r=1+\sqrt{\frac{N}{M_{1}}} \label{rct} \end{equation}
then the state (\ref{aa}) becomes the desired $N$-atom W-state $\left|
W_{N}\right\rangle .$ As recognized from above, the overall process involves two steps: the first step prepares the initial state (\ref{WM}) and the second step generates the final state (\ref{WN}). For $M_{1}=1$ the first step is trivial \cite{M1,M} and the overall process can be looked upon as a one-step process. Yet, generally, for $M_{1}>1$ preparation of the initial state (\ref{WM}) is nontrivial. On one hand, we could prepare that state by using another nonresonant cavity and following the probabilistic scheme described in \cite{iden2}. More conveniently and more efficiently, on the other hand, we could use one and the same resonant cavity for doing both of the steps. Namely, in the first step we send through the empty resonant cavity $M_{1}$ unexcited atoms of group 1 and one auxiliary atom in the excited state $-\left| e\right\rangle $ whose coupling constant $f$ with the cavity is chosen such that \begin{equation} f=f_{1}\sqrt{M_{1}}. \label{r} \end{equation}
Then at time $t=\tau =\pi /(\sqrt{2}f)$ the cavity turns out to be empty again, the auxiliary atom jumps down to its ground state, but the $M_{1}$ atoms of interest are transformed to the required ``initial'' state $\left| W_{M_{1}}^{-}\right\rangle ,$ Eq. (\ref{WM}). We note that this first step using the resonant cavity is measurement-free and deterministic, in evident contrast with the non-deterministic scheme of \cite{iden2}, which uses a nonresonant cavity and needs a post-selection measurement. After the first step we send back the $M_1$ atoms in state
$\left| W^-_{M_1}\right>$ together with the $M_2$ atoms, all in their ground states, to the same cavity to perform the second step.
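The full two-step protocol can be checked numerically with the exact single-excitation propagator used throughout. In this sketch (not from the paper; the sizes $M_1=3$, $M_2=2$ are illustrative), step 1 prepares the $M_1$ group-1 atoms via the auxiliary atom with coupling given by Eq. (\ref{r}), and step 2 with the ratio constraint of Eq. (\ref{rct}) drives the $N$ atoms into a state with all amplitudes of magnitude $1/\sqrt{N}$:

```python
import numpy as np

# Two-step protocol, simulated with the exact single-excitation
# propagator; the sizes M1 = 3, M2 = 2 are illustrative only.
def propagate(f, C0, t):
    n = len(f)
    H = np.zeros((n + 1, n + 1))         # last basis state: one photon
    H[:n, n] = f
    H[n, :n] = f
    vals, vecs = np.linalg.eigh(H)
    return vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T @ C0

M1, M2 = 3, 2
N = M1 + M2
f2 = 1.0
f1 = (1 + np.sqrt(N / M1)) * f2          # ratio constraint, Eq. (rct)
f_aux = np.sqrt(M1) * f1                 # auxiliary coupling, Eq. (r)

# Step 1: auxiliary atom in -|e>, M1 group-1 atoms in |g>, empty cavity
fs1 = np.concatenate(([f_aux], np.full(M1, f1)))
C0 = np.zeros(M1 + 2)
C0[0] = -1.0
C1 = propagate(fs1, C0, np.pi / (np.sqrt(2) * f_aux))
assert abs(C1[0]) < 1e-10 and abs(C1[-1]) < 1e-10  # auxiliary and cavity empty
w = C1[1:M1 + 1]                         # W_{M1} amplitudes (up to a phase)
assert np.allclose(np.abs(w), 1 / np.sqrt(M1))

# Step 2: reuse the cavity with the M1 prepared atoms plus M2 ground atoms
fs2 = np.concatenate((np.full(M1, f1), np.full(M2, f2)))
omega = np.sqrt(M1 * f1**2 + M2 * f2**2)
C0 = np.concatenate((w, np.zeros(M2 + 1)))
C2 = propagate(fs2, C0, np.pi / omega)
assert abs(C2[-1]) < 1e-10               # cavity empty again
assert np.allclose(np.abs(C2[:N]), 1 / np.sqrt(N))  # |W_N> reached
```

After both steps the cavity amplitude vanishes exactly, consistent with its role as a catalyst.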
\begin{figure}
\caption{Dimensionless total generation time $T=\widetilde{\tau }+\widetilde{\theta }$ as a function of $M_{1}$ (the number of atoms in group 1) and $M_{2}$ (the number of atoms in group 2 nonidentical with group 1).}
\end{figure}
\vskip 0.5cm
\noindent \textbf{4. Discussion}
To see how long the proposed scheme takes let us scale time in units of $\pi /f_{1}$ for convenience. That is, we introduce dimensionless time as $ \widetilde{t}=f_{1}t/\pi $ with $t$ the real time. Then, in terms of $M_{1}$ and $M_{2},$ the (dimensionless) time needed to complete the first step is \begin{equation} \widetilde{\tau }=\frac{f_{1}}{\pi }\tau =\frac{1}{\sqrt{2M_{1}}} \label{tau} \end{equation} whereas the (dimensionless) time needed to complete the second step is \begin{equation} \widetilde{\theta }=\frac{f_{1}}{\pi }\theta =\frac{\sqrt{M_{1}}+\sqrt{ M_{1}+M_{2}}}{\sqrt{2M_{1}\left( M_{1}+M_{2}+\sqrt{M_{1}(M_{1}+M_{2})} \right) }}. \label{theta} \end{equation} Figure 1 plots the dependence of the total (dimensionless) generation time $ T=\widetilde{\tau }+\widetilde{\theta }$ on $M_{1}$ and $M_{2}.$ The figure shows that $T$ decreases quickly as $M_{1}$ (the number of atoms in group 1) increases but slightly as $M_{2}$ (the number of atoms in group 2) increases.
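The behavior of $T$ described above can be read off directly from Eqs. (\ref{tau}) and (\ref{theta}); a small numerical sketch (grid values chosen for illustration):

```python
import numpy as np

# Total dimensionless generation time T = tau~ + theta~, Eqs. (tau), (theta)
def T(M1, M2):
    N = M1 + M2
    tau = 1 / np.sqrt(2 * M1)
    theta = (np.sqrt(M1) + np.sqrt(N)) / np.sqrt(
        2 * M1 * (N + np.sqrt(M1 * N)))
    return tau + theta

# fixed M2 = 1: T drops quickly as M1 grows
assert T(1, 1) > T(2, 1) > T(4, 1) > T(8, 1)
# fixed M1 = 2: T drops only slightly as M2 grows
assert T(2, 1) > T(2, 2) > T(2, 4)
# the drop with M1 dominates the drop with M2
assert T(2, 1) - T(2, 4) < T(1, 1) - T(4, 1)
```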
We note that, alternative to the choice (\ref{CM}) of the initial condition, we could also set \begin{eqnarray} C_{1}(0) &=&C_{2}(0)=...=C_{M_{1}}(0)=C_{N+1}(0)=0, \nonumber \\ C_{M_{1}+1}(0) &=&C_{M_{1}+2}(0)=...=C_{N}(0)=-1/\sqrt{M_{2}} \label{CN} \end{eqnarray}
that correspond to the initial state $\left| W_{M_{2}}^{-}\right\rangle
=-\left( \left| g_{1}...g_{M_{1}}e_{M_{1}+1}g_{M_{1}+2}...g_{N}\right\rangle
+\left| g_{1}...g_{M_{1}}g_{M_{1}+1}e_{M_{1}+2}...g_{N}\right\rangle +...
\text{ }+\right. $ $\left. \left| g_{1}...g_{M_{1}}g_{M_{1}+1}g_{M_{1}+2}...e_{N}\right\rangle \right) /\sqrt{M_{2}}.$ For the choice (\ref{CN}) we just need to interchange the sub-indices $1$ and $2$ in all the formulae (\ref{rct})--(\ref{theta}) as well as in the figure. Comparing the two alternatives suggests the following strategy to achieve a faster generation time: if $M_{1}>M_{2}$ we choose the conditions (\ref{CM}), if $M_{1}<M_{2}$ we choose the conditions (\ref{CN}), and if $M_{1}=M_{2}$ either (\ref{CM}) or (\ref{CN}) works equally well. In any case, with the right choice the proposed scheme takes a shorter time to entangle a larger number $N$ of atoms. For large enough $N$ the total entangling time may become shorter than both the atom and the photon decay times, rendering the scheme immune from decoherence effects. Since the coupling constant depends on the distance between the central axis of the cavity and the atom position \cite{duan}, the required ratios of coupling constants, Eqs. (\ref{rct}) and (\ref{r}), can be achieved experimentally by appropriately controlling the entrance points through which the atoms enter the cavity. Furthermore, at the beginning the cavity is set to the vacuum state. At time $t=\tau $ when the ``initial'' state $
\left| W_{M_{1}}^{-}\right\rangle $ (or $\left| W_{M_{2}}^{-}\right\rangle )$
is prepared the cavity is automatically reset to the vacuum state, i.e. it is ready for the next step of generating the desired target state $\left| W_{N}\right\rangle .$ At the end of the second step when the desired state
$\left| W_{N}\right\rangle $ is generated, the cavity is again reset to its vacuum state. Hence, one and the same cavity is used for both preparing the
``initial'' state $\left| W_{M_{1,2}}^{-}\right\rangle $ (step 1) and producing the target state $\left| W_{N}\right\rangle $ (step 2); after each step the cavity is automatically reset to its initial vacuum state, i.e. the cavity remains unchanged. In this sense the cavity serves as a catalyst in the proposed scheme. Finally, our scheme would work similarly for scalable maximal entanglement of cold ions trapped inside a high-Q cavity within the Lamb-Dicke approximation. In the latter case the ion-cavity couplings can be controlled by localizing the ions at suitable positions with respect to the cavity field mode profile.
\vskip 0.5cm
\noindent \textbf{5. Conclusion}
We have considered a realistic situation of atoms interacting with a cavity with different couplings. We have investigated in detail the case when one and the same cavity as a catalyst is used to generate $N$-partite W-states where $N=M_{1}+M_{2}$ with $M_{1}\geq 1$ and $M_{2}\geq 1$ the numbers of atoms of two nonidentical groups. Our result covers that of $M_{1}=1$ and $ M_{2}\geq 1$ \cite{M1,M} as a particular case. For $M_{1}>1$ our scheme proceeds with two successive steps. In the first step an auxiliary atom with a properly chosen coupling constant is required to prepare the ``initial''
state $\left| W_{M_{1}}^{-}\right\rangle$ or $\left| W_{M_{2}}^{-}\right\rangle,$
depending on whether $M_1>M_2$ or $M_2>M_1.$ In the second step the desired target state $\left| W_{N}\right\rangle $ is generated. Both steps are measurement-free, and therefore the whole two-step process is efficient and deterministic. The proposed scheme is also scalable, with a total generation time that decreases with increasing $N.$ This feature helps overcome the obstacle of decoherence when a large number of atoms are to be entangled.
\vskip 0.5cm
\textbf{Acknowledgments.} The author thanks KIAS Quantum Information Science group for support. This research was financed by a Grant (TRQCQ) from the Ministry of Science and Technology (MOST) of Korea and also by a KIAS R\&D Fund No 6G014904.
\end{document}
\begin{document}
\title{Many-body singlets by dynamic spin polarization}
\author{Wang Yao} \affiliation{Department of Physics and Center of Theoretical and Computational Physics, The University of Hong Kong, Hong Kong, China}
\begin{abstract}
We show that dynamic spin polarization by collective raising and lowering operators can drive a spin ensemble from arbitrary initial state to many-body singlets, the zero-collective-spin states with large scale entanglement. For an ensemble of $N$ arbitrary spins, both the variance of the collective spin and the number of unentangled spins can be reduced to $O(1)$ (versus the typical value of $O(N)$), and many-body singlets can be occupied with a population of $\sim 20 \%$ independent of the ensemble size. We implement this approach in a mesoscopic ensemble of nuclear spins through dynamic nuclear spin polarization by an electron. The result is of two-fold significance for spin quantum technology: (1) a resource of entanglement for nuclear spin based quantum information processing; (2) a cleaner surrounding and less quantum noise for the electron spin as the environmental spin moments are effectively annihilated.
\end{abstract}
\date{\today}
\pacs{76.70.Fz,42.50.Dv,03.67.Bg,71.70.Jp}
\maketitle
Many-body singlets (MBS) are the zero-collective-spin states of a spin ensemble with large scale quantum entanglement and zero spin uncertainties. They appear in a variety of contexts in quantum physics and in condensed matter physics, e.g. as horizon states of the quantum black hole~\cite{livine_entanglement_2005}, and as ground states of quantum antiferromagnetic models~\cite{FrustratedSpinSys}. Their special characteristics place them at the center of attention for quantum applications. \textit{First}, MBS are invariant under a simultaneous unitary rotation on all spins. This makes MBS suitable for spanning a decoherence-free subspace~\cite{bourennane_decoherence-free_2004}, for quantum communications without a shared reference frame~\cite{bartlett_classical_2003}, and for metrology of the spatial gradient or fluctuations of external fields~\cite{tth_generation_2010}. \textit{Second}, MBS are an extreme example of the squeezing of spin uncertainties~\cite{wineland_spin_1992,kitagawa_squeezed_1993,sorensen_entanglement_2001,toth_optimal_2007}. The collective spin has zero variance in all directions and thus a source of quantum noise is removed, e.g. in the context of a quantum object affected by a spin bath. \textit{Third}, MBS contain large scale quantum entanglement: every spin is entangled with the rest of the ensemble. An example of a pure MBS is the product of two-qubit singlets (Bell pairs). In the maximally mixed state of all MBS, the distillable bipartite entanglement is logarithmic in the ensemble size~\cite{livine_entanglement_2005}.
Despite the successful generation of a photonic analog of 4-qubit singlets by parametric down conversion~\cite{eibl_experimental_2003,bourennane_decoherence-free_2004}, the realization of MBS in a general spin ensemble is an outstanding goal awaiting technically feasible approaches. Theoretical study shows that spin squeezing based on quantum non-demolition measurement can reduce the total collective spin variance of an atomic ensemble by a factor of 5 in the lossless case~\cite{tth_generation_2010}. However, in such a squeezed state the weight of MBS is small and vanishes in the large-$N$ limit.
Here we introduce a conceptually new approach for the squeezing of collective spin uncertainties and the generation of large scale entanglement. The approach uses collective spin raising and lowering operations only, and is applicable to an ensemble of $N$ arbitrary spins initially in an arbitrary state. The squeezed state has notable figures of merit: in the low loss limit, MBS are occupied with an $N$-independent population of $\sim 20\%$, and both the variance of the total collective spin and the number of spins unentangled with the rest are $O(1)$ (versus the typical values of $O(N)$). We implement this approach in a mesoscopic ensemble of nuclear spins, a spin system of extensive interest either as a noise source or as a superior information storage medium in quantum technology. The implementation uses only generic features of dynamic nuclear spin polarization processes by an electron, and is applicable to various electron-nuclear spin systems. Distillation of MBS can be realized by post-selection based on measurement of the electron spin. MBS can be a valuable resource of quantum entanglement for nuclear spin quantum information processing~\cite{Kane_QC_nuclei,Dutt2007_ScienceNV,Newmann08ScienceNVentangle}. In electron spin based quantum computation schemes, preparing the peripheral nuclear spins into MBS results in a cleaner surrounding and hence improved quantum coherence of the electron spin.
We refer to the definition of spin squeezing in the generalized sense~\cite{toth_optimal_2007,tth_generation_2010}, where the degree of squeezing is quantified by $\langle \hat{\bm{J}}^{2} \rangle$, with $\hat{\bm{J}} \equiv \sum_{n=1}^N \hat{\bm{I}}_{n} $ being the total collective spin for an ensemble of $N$ particles with equal or different spins. $\langle \hat{\bm{J}}^{2} \rangle=0$ indicates perfect squeezing where the
$N$ spins are in the MBS. $\langle \hat{\bm{J}}^{2} \rangle \left(\bar{s} \right)^{-1}$ gives an upper bound on the number of spins unentangled with others where $\bar{s}$ is the average spin per particle~\cite{toth_optimal_2007,tth_generation_2010}. States of the spin ensemble can be grouped into multiplets, i.e. irreducible invariant subspaces of the total spin. A multiplet $\{ \left|J,M,\alpha_{J}^k \right\rangle, M= -J, \dots, J \}$ will be denoted in short as $\{ J, \alpha_J^k \}$, where $\alpha_J^k$ is a general index for distinguishing the set of orthogonal (2$J$+1)-dimensional multiplets. The aim is to transfer population from all multiplets to those singlets with $J=0$.
Key to our squeezing approach is to apply raising operator of the form $\hat{j}^+_A-\hat{j}^+_B$ on the spin coherent states $|J, -J, \alpha_J \rangle $. Here the ensemble is partitioned arbitrarily into two subsets $A$ and $B$ with collective spin $\hat{\bm{j}}_{A}$ and $\hat{\bm{j}}_{B}$ respectively (Fig.~1(a)). We find the key identity \begin{eqnarray}
&& \frac{\left\langle J+1,-J+1,\alpha_{J+1} \right| (\hat{j}^+_A-\hat{j}^+_B )\left|J,-J,\alpha_{J} \right\rangle }{ \left\langle J,-J,\alpha_{J} \right| (\hat{j}^+_A-\hat{j}^+_B )\left|J+1,-J-1,\alpha_{J+1} \right\rangle ^{*}} \nonumber \\ && = -\left [ \left(J+1\right)\left(2J+1\right) \right ]^{-\frac{1}{2}}. \label{eq:ratio2} \end{eqnarray}
Moreover, $\left\langle J^{\prime},M^{\prime},\alpha_{J'} \right|\hat{j}^+_A-\hat{j}^+_B\left|J,-J,\alpha_{J}\right\rangle =0$
for $|J^{\prime} - J| > 1$ or $M' \neq -J+1$. Thus, under the condition that each multiplet is initialized in the spin coherent state, application of the $\hat{j}^+_A-\hat{j}^+_B$ operator tends to transfer population from multiplets of larger dimension to multiplets of smaller dimension (Fig.~1(c)). The transfer rate of $\{J+1,\alpha_{J+1}^q\} \rightarrow \{J,\alpha_J^{k}\}$ is larger by a factor of $(J+1)(2J+1)$ than that of the backward transfer $\{J,\alpha_J^k\} \rightarrow \{J+1,\alpha_{J+1}^{q}\}$.
\begin{figure}\label{illustration}
\end{figure}
Squeezing of collective spin uncertainties can therefore be realized by dynamic spin polarization with the lowering operator $\hat{J}^{-}$ and raising operators of the form $\hat{j}^+_A-\hat{j}^+_B$. Consider the use of two such operators $\hat{j}^+_A-\hat{j}^+_B$ and $\hat{j}^+_C-\hat{j}^+_D$ where $C$ and $D$ constitute a different bipartition of the ensemble (Fig.~1(a)). The Hilbert space can be divided into independent subspaces according to the quantum numbers $\{j_{A\cap D},j_{B\cap D},j_{B\cap C},j_{A\cap C}\}$ conserved by the raising/lowering operations, and their values determine the number of (2$J$+1)-dimensional multiplets $n(J)$. If $\hat{J}^{-}$ is applied more frequently such that the system is in spin coherent states every time $\hat{j}^+_A-\hat{j}^+_B$ or $\hat{j}^+_C-\hat{j}^+_D$ is applied, we find the steady-state in each subspace: $\rho= \sum_{J} f(J) \sum_{k=1}^{n(J)} |J,-J,\alpha_J^k \rangle \langle J,-J,\alpha_J^k | $ where $f(J)= (J+1)(2J+1) f(J+1)$. Most subspaces contain at least one MBS~\cite{remark}, and $n\left(J\right) \leq n\left(0 \right)\left(2J+1\right)$. Thus, in the steady state, MBS are occupied with a population $ n(0) f(0) \geq \left[\sum_{J} g (J) \right]^{-1}=0.20 $, and the variance $ \langle \hat{\bm{J}}^{2} \rangle \leq\left[\sum_{J} g(J) \right]^{-1}\sum_{J} J\left(J+1\right) g(J)=2.44$, where $g (J) \equiv (2J+1)\left[\prod_{i=0}^{J-1}\left(i+1\right)\left(2i+1\right) \right]^{-1}$.
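The quoted bounds of $0.20$ and $2.44$ follow from summing the rapidly converging series in $g(J)$; a short numerical sketch (not part of the original analysis):

```python
# Evaluate the steady-state bounds quoted in the text from the series
# g(J) = (2J+1) / prod_{i=0}^{J-1} (i+1)(2i+1); the series converges fast.
def g(J):
    prod = 1
    for i in range(J):
        prod *= (i + 1) * (2 * i + 1)
    return (2 * J + 1) / prod

S = sum(g(J) for J in range(15))
p_singlet = 1 / S                        # lower bound on the MBS population
var_bound = sum(J * (J + 1) * g(J) for J in range(15)) / S

assert round(p_singlet, 2) == 0.20       # N-independent MBS population
assert round(var_bound, 2) == 2.44       # bound on <J^2>
```

Only the first five or six terms contribute appreciably, which is why both bounds are independent of the ensemble size.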
\begin{figure}\label{simulation}
\end{figure}
Dynamic spin polarization by the collective raising and lowering operators can be described by Lindblad terms in the master equation $\dot{\rho} = -\frac{1}{2}\sum_{m=1}^{3} (\hat{L}_{m}^{\dagger}\hat{L}_{m}\rho+\rho\hat{L}_{m}^{\dagger}\hat{L}_{m}-2\hat{L}_{m}\rho\hat{L}_{m}^{\dagger} )$ where $\hat{L}_{1}\equiv\sqrt{\Lambda_h}\hat{J}^{-}$, $\hat{L}_{2}\equiv \sqrt{\Lambda_o} (\hat{j}^+_A-\hat{j}^+_B )$, and $\hat{L}_{3}\equiv \sqrt{\Lambda_o} (\hat{j}^+_C-\hat{j}^+_D )$. The $\hat{J}^-$ and $\hat{j}^+_A - \hat{j}^+_B$ operators are applied with the rates $\Lambda_h |\langle \psi_f |\hat{J}^- | \psi_i \rangle |^2 $ and $ \Lambda_o |\langle \psi_f |\hat{j}^+_A - \hat{j}^+_B| \psi_i \rangle |^2$ respectively, and the squeezing scheme requires that the former rate shall always be larger. We note that
$\langle J, M, j_A, j_B | (\hat{j}_A^- - \hat{j}_B^-)(\hat{j}^+_A - \hat{j}^+_B) | J, M, j_A, j_B \rangle$ increases as $J$ decreases and reaches a maximal value of $\sim (j_A + j_B)^2$ for small $J$, while $\langle J, M |\hat{J}^+ \hat{J}^- | J, M \rangle \sim J^2$. Thus we find the requirement $\Lambda_h/\Lambda_o > (j_{A\cap D} + j_{B\cap D} + j_{B\cap C} + j_{A\cap C})^2 $, where the latter quantity is $\sim 4N s^2$ for an ensemble of $N$ spin-$s$ particles. Spin decoherence causes the population of MBS to decay at a rate $\sim N \gamma_n$, with $\gamma_n$ being the single-spin decoherence rate. The low loss condition is therefore defined as $\frac{1}{4N s^2}\Lambda_h > \Lambda_o \gg \gamma_n$, under which spin decoherence has a negligible effect on the squeezing efficiency~\cite{Yu_inprep}.
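As a minimal illustration of this Lindblad dynamics (our own sketch, not one of the paper's simulations), take two spin-1/2 particles with $j_A=j_B=1/2$. With $\Lambda_h \gg \Lambda_o$, the steady state should place population $f(0)=f(1)=1/2$ on the singlet (the MBS) and on $|J=1,M=-1\rangle$, consistent with the recursion $f(J)=(J+1)(2J+1)f(J+1)$:

```python
import numpy as np

# Single spin-1/2 ladder operators
sp = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma^+
sm = sp.conj().T                                # sigma^-
I2 = np.eye(2, dtype=complex)

# Collective lowering J^- = j_A^- + j_B^- and staggered raising j_A^+ - j_B^+
Jm = np.kron(sm, I2) + np.kron(I2, sm)
Od = np.kron(sp, I2) - np.kron(I2, sp)

Lh, Lo = 1.0, 1e-3                              # Lambda_h >> Lambda_o (low loss regime)
jumps = [np.sqrt(Lh) * Jm, np.sqrt(Lo) * Od]

# Lindblad superoperator acting on vec(rho) (column-stacking convention)
d = 4
S = np.zeros((d * d, d * d), dtype=complex)
for L in jumps:
    LdL = L.conj().T @ L
    S += np.kron(L.conj(), L) \
         - 0.5 * (np.kron(np.eye(d), LdL) + np.kron(LdL.conj(), np.eye(d)))

# Steady state = null vector of the superoperator, normalized to unit trace
w, v = np.linalg.eig(S)
rho = v[:, np.argmin(np.abs(w))].reshape((d, d), order="F")
rho = rho / np.trace(rho)

singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
p_mbs = (singlet.conj() @ rho @ singlet).real
print(round(p_mbs, 2))                          # close to 0.5
```

The small deviation from $1/2$ is of order $\Lambda_o/\Lambda_h$, reflecting the transient population in the intermediate states $|1,0\rangle$ and $|1,1\rangle$.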
We numerically demonstrate a squeeze control in which $\hat{j}^+_A-\hat{j}^+_B$ and $\hat{j}^+_C-\hat{j}^+_D$ are applied in alternating fashion (see Fig.~2(a)). With spin decoherence neglected under the low loss condition, the calculation can be significantly simplified for this choice of control, and a moderately large spin system can be simulated. In the interval when $\hat{J}^-$ and $\hat{j}_A^+ - \hat{j}_B^+$ (or $\hat{j}_C^+ - \hat{j}_D^+$) are applied, the relevant Hilbert space can be further divided into independent subspaces according to the quantum numbers $\{j_A, j_B \}$ (or $\{j_C, j_D\}$). If the duration $\tau$ of each interval is long enough for the steady state to be reached, we simply need to solve for the steady state in each small subspace and keep track of the basis transformation between $\{|J,M, j_A, j_B \rangle \}$ and $\{|J,M, j_C, j_D \rangle \}$ upon the switch of raising operators. Off-diagonal coherence is found to be negligible, which further simplifies the calculation. An example of the simulated squeezing dynamics is given in Fig.~2(b-c). The initial density matrix is the completely mixed one in the subspace with $\{j_{A\cap D}=7/2, j_{B\cap D}=7/2, j_{B\cap C}=7/2, j_{A\cap C}=7/2 \}$. The polarization rates are $10^{-3}\Lambda_h=\Lambda_o$, and $\tau = 2/\Lambda_o$. Fig.~2(b) shows that the population distribution $p(J,j_A,j_B)$ among the multiplets indeed approaches its steady-state value after a time of $4\tau$. At $t=5\tau$, MBS are occupied with a population of $0.21$ and $\langle \hat{\bm{J}}^{2} \rangle = 2.42$.
In an ensemble of nuclear spins, the applications of the two types of operators $\hat{J}^{-}$ and $\hat{j}_{A}^+-\hat{j}_{B}^+$ are realized in the process of dynamic nuclear spin polarization (DNSP), a major tool for the manipulation of nuclear spins~\cite{greilich_nuclei-induced_2007,reilly_suppressing_2008,xu_optically_2009,tartakovskii_nuclear_2007,danon_nuclear_2008,korenev_nuclear_2007,bracker_optical_2005,urbaszek_efficient_2007,Ono_current_oscillation,latta_confluence_2009,vink_locking_2009,laird_hyperfine-mediated_2007,rudner_electrically_2007,rashba_theory_2008}. We consider the hyperfine interaction $\hat{H}_{0}=\sum_{n}\left|\psi\left(\bm{r}_{n}\right)\right|^{2}\hat{\bm{I}}_{n}\cdot\overleftrightarrow{\bm{A}}\cdot\hat{\bm{S}}$ coupling the electron spin $\hat{\bm{S}}$ to peripheral lattice nuclear spins $\hat{\bm{I}}_{n}$. $\overleftrightarrow{\bm{A}}$ is the hyperfine coupling constant in tensor form, and the position dependence of the coupling enters through the envelope function $\psi\left(\bm{r}\right)$
of the electron only. $\hat{H}_{0}$ generally describes the hyperfine interaction of electron or hole systems in quantum dots or at shallow donors formed in group IV or III-V materials~\cite{Chamarro_holespin,fischer_spin_2008}. In most DNSP schemes, $\hat{H}_{0}$ induces the electron-nuclear flip-flop that passes electron spin polarization to the nuclei, and the energy cost is compensated by emission/absorption of phonons or photons~\cite{tartakovskii_nuclear_2007,danon_nuclear_2008,korenev_nuclear_2007,bracker_optical_2005,urbaszek_efficient_2007}. These DNSP schemes are termed the \textit{dc} type hereafter. Alternatively, DNSP can also utilize the \textit{ac} correction to the hyperfine coupling: $\hat{H}_{ac}=\sum_{n}\left(\bm{d}_{\omega}\cdot\nabla\left|\psi\left(\bm{r}_{n}\right)\right|^{2}\right)\cos(\omega t)\hat{\bm{I}}_{n}\cdot\overleftrightarrow{\bm{A}}\cdot\hat{\bm{S}}$ when an \textit{ac} electric field induces an electron displacement $\bm{d}_{\omega}\cos(\omega t)$, with the energy cost for the electron-nuclear flip-flop directly supplied by the \textit{ac} field~\cite{laird_hyperfine-mediated_2007,rudner_electrically_2007,rashba_theory_2008}. Such a DNSP process is hereafter termed the \textit{ac} type.
For nuclear spins on the periphery of an electron, MBS can be realized by combining \textit{dc} and \textit{ac} DNSP processes, which polarize nuclear spins in opposite directions with the operators $\sum_{n}\left|\psi\left(\bm{r}_{n}\right)\right|^{2}\hat{I}_{n}^{-}$ and $\sum_{n} \frac{\partial}{\partial\mu}\left|\psi\left(\bm{r}_{n}\right)\right|^{2} \hat{I}_{n}^{+}$ respectively. Here $\mu$ is the direction of the \textit{ac} electric field. The lattice sites with equal electron density $\left|\psi\left(\bm{r}\right)\right|^{2}$
are grouped into coordination shells. On each shell, $\sum_{n}\left|\psi\left(\bm{r}_{n}\right)\right|^{2}\hat{I}_{n}^{-}$
and $\sum_{n} \frac{\partial}{\partial\mu}\left|\psi\left(\bm{r}_{n}\right)\right|^{2} \hat{I}_{n}^{+}$ act as operators of the form $\hat{J}^{-}$ and $\hat{j}_{A}^{+}-\hat{j}_{B}^{+}$, respectively. Under the influence of incoherent electron spin dynamics in the DNSP process, the large shell-to-shell difference in the \textit{dc} hyperfine coupling strength causes loss of inter-shell coherence on a timescale much shorter than that of the squeezing. Thus different coordination shells can be independently squeezed towards MBS.
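The shell structure of the two coupling types can be seen from a toy envelope function (our own illustrative sketch; the Gaussian width and shell radius are arbitrary): on a coordination shell the \textit{dc} couplings $|\psi(\bm{r}_n)|^2$ are identical, while the \textit{ac} couplings $\partial_x |\psi(\bm{r}_n)|^2$ split the shell into sites with positive, negative, and vanishing coefficients:

```python
import numpy as np

# 2D Gaussian envelope |psi(r)|^2 and its gradient along x (the ac field direction)
w = 1.0
rho2 = lambda x, y: np.exp(-(x**2 + y**2) / w**2)
drho2_dx = lambda x, y: -2 * x / w**2 * rho2(x, y)

# A coordination shell: 4 sites at equal distance r from the electron center
r = 0.5
sites = [(r, 0.0), (-r, 0.0), (0.0, r), (0.0, -r)]
dc = [rho2(x, y) for x, y in sites]      # dc couplings: all equal on the shell
ac = [drho2_dx(x, y) for x, y in sites]  # ac couplings: +/- signs bipartition the shell
print(dc)
print(ac)
```

The sites with vanishing $x$-gradient acquire nonzero \textit{ac} coupling for a field along $y$ instead, which is why the two field directions realize two different bipartitions of the same shell.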
\begin{figure}
\caption{(a) Schematic of an electron in a nuclear spin bath. The first coordination shell has 4 nuclear spins (green color) and the second shell has 8 nuclear spins (blue color). (b) Upper part: \textit{dc} hyperfine coupling coefficients at the various lattice sites. Lower part: \textit{ac} hyperfine coupling coefficients with the \textit{ac} electric field in the $x$ direction. The heights of the bars give the magnitudes. (c) The \textit{ac} hyperfine coupling decomposed into two terms $\hat{H}_{ac}=\hat{H}_{ac}^{1}+\hat{H}_{ac}^{2}$. $\hat{H}_{ac}^{1}=\tilde{a}_1(\hat{j}^+_A-\hat{j}^+_B)\hat{S}^{-}e^{i\omega t}E_{x}+c.c.$, and $\hat{H}_{ac}^{2}=\tilde{a}_2(\hat{j}_{C}^{+}-\hat{j}_{D}^{+})\hat{S}^{-}e^{i\omega t}E_{x}+c.c.$. The 4 sites with positive coupling coefficients in the upper part form subset $A$ and the 4 sites with positive coupling in the lower part form subset $C$, and $B$ ($D$) is the complement of $A$ ($C$). (d) Schematic of the DNSP control where the switching between \textit{dc} and \textit{ac} DNSP is concatenated with the switching of the \textit{ac} electric field between $x$ and $y$ directions.}
\label{nuclearspin}
\end{figure}
Fig.~3(a) shows the schematic of an electron with a 2D Gaussian envelope function. The 12 lattice nuclear spins form two coordination shells according to the \textit{dc} hyperfine coupling strength (Fig.~3(b)). For the green shell with 4 lattice nuclear spins, $\hat{H}_{ac}=\tilde{a}\hat{S}^{-}(\hat{j}^+_A-\hat{j}^+_B)e^{i\omega t}E_{x}+\tilde{a}\hat{S}^{-}(\hat{j}^+_C-\hat{j}^+_D)e^{i\omega t}E_{y}+c.c.$, where $E_{x(y)}$ is the \textit{ac} electric field in the $x$ ($y$) direction. Fig.~3(d) shows the schematic of the DNSP control where the switching between \textit{dc} and \textit{ac} DNSP is concatenated with the switching of the \textit{ac} electric field between $x$ and $y$ directions. Numerical simulation of such a control has been given in Fig.~2. The blue shell represents the more general case, where nuclear spins are polarized in the \textit{ac} DNSP process by $\tilde{a}_1(\hat{j}^+_A-\hat{j}^+_B)+\tilde{a}_2(\hat{j}_{C}^{+}-\hat{j}_{D}^{+})$, a linear superposition of raising operators of the desired form (Fig.~3(c)). The identity in Eq.~(\ref{eq:ratio2}) obviously holds if $\hat{j}^+_A-\hat{j}^+_B$ is replaced by $\hat{j}_{C}^{+}-\hat{j}_{D}^{+}$, and hence it holds for their linear superposition as well. Numerical simulations confirm that operators of this more general form are equally efficient for squeezing~\cite{Yu_inprep}.
Interaction between neighboring nuclear spins causes spin diffusion and spin dephasing, which can result in loss of MBS. The dipolar interaction between neighboring lattice sites has a strength of $\sim 10$ Hz. Nuclear spin diffusion by the $\hat{I}_n^+ \hat{I}_m^-$ coupling terms is efficiently suppressed when the shell-to-shell inhomogeneity in the hyperfine coupling is large. Through the $\hat{I}_n^z \hat{I}_m^z$ term, the nuclear spins are subject to a dipolar magnetic field that depends on the configuration of their neighbors, which leads to dephasing at a rate $\gamma_n \sim 10$--$100$ Hz. To realize efficient squeezing, fast DNSP mechanisms are desired.
For an optically controllable electron spin, e.g., in a quantum dot or at an impurity in III-V semiconductors, fast \textit{dc} DNSP can be realized by the hyperfine-mediated optical excitation of spin-forbidden excitonic transitions~\cite{korenev_nuclear_2007,chekhovich_pumping_2010}. Assuming an electron Zeeman splitting $\omega_e \sim 0.2 $ GHz, an intrinsic broadening of the charged exciton $\gamma_t \sim 0.2 $ GHz, and an optical Rabi frequency $\Omega\sim 3 $ GHz for the excitonic transition, we estimate the DNSP rate $\Lambda_h=\frac{a^2 \Omega^2}{\omega_e^2 \gamma_t} \sim 10$ MHz on a coordination shell with hyperfine coupling $a = 3$ MHz. For other electron-nuclear spin systems, fast \textit{dc} DNSP may be realized through the bath-assisted electron-nuclear flip-flop in the presence of an efficient energy dissipation channel, e.g., an electron Fermi sea in nearby leads~\cite{danon_nuclear_2008}.
\textit{ac} DNSP proceeds at the rate $\Lambda_o=\frac{\tilde{a}^2}{\gamma_s}$, where $\gamma_s$ is the broadening of the electron spin resonance~\cite{rudner_electrically_2007}. The magnitude of the \textit{ac} hyperfine interaction, $\tilde{a}$, depends on the strength of the \textit{ac} electric field and the inhomogeneity of the electron envelope function. Taking the phosphorus donor in silicon as an example, the first several shells are $(A,6.0, 6)$, $(B,4.5,12)$, $(C,3.3,4)$, $(D,2.2,12)$, and $(F,1.7,12)$, where the first letter is the conventional label of the shell, the second number is the hyperfine coupling strength in units of MHz, and the third is the number of equivalent sites on the shell~\cite{hale_shallow_1969}. The distance between neighboring shells is on the order of $\sim 0.1$ nm. Thus, we estimate $\tilde{a} \sim $ MHz for a moderate displacement of the electron $d_{\omega} \sim 0.1$ nm. Assuming $\gamma_s \sim 0.1$ GHz, $\Lambda_o$ can be $\sim 10$ kHz on these shells. We therefore conclude that the low loss condition can indeed be satisfied for nuclear spins on the periphery of a strongly confined electron.
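These order-of-magnitude estimates, together with the low loss condition $\frac{1}{4Ns^2}\Lambda_h > \Lambda_o \gg \gamma_n$, can be checked with a few lines of arithmetic (a sketch using the representative numbers quoted above; the shell size and dephasing rate are taken from the text's ranges):

```python
# Representative numbers from the text, all in Hz
a, Omega, w_e, g_t = 3e6, 3e9, 0.2e9, 0.2e9  # hyperfine, Rabi, Zeeman, exciton width
Lh = a**2 * Omega**2 / (w_e**2 * g_t)        # dc DNSP rate: ~1e7 Hz (10 MHz)

a_ac, g_s = 1e6, 0.1e9                       # ac hyperfine, spin resonance broadening
Lo = a_ac**2 / g_s                           # ac DNSP rate: 1e4 Hz (10 kHz)

N, s, g_n = 12, 0.5, 100.0                   # shell size, spin-1/2 nuclei, dephasing
print(Lh, Lo, Lh / (4 * N * s**2) > Lo > g_n)
```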
The work was supported by the Research Grant Council of Hong Kong under Grant No. HKU 706309P. The author acknowledges helpful discussions with L. J. Sham, H. Y. Yu and X. D. Xu.
\begin{thebibliography}{31} \expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi \expandafter\ifx\csname bibnamefont\endcsname\relax
\def\bibnamefont#1{#1}\fi \expandafter\ifx\csname bibfnamefont\endcsname\relax
\def\bibfnamefont#1{#1}\fi \expandafter\ifx\csname citenamefont\endcsname\relax
\def\citenamefont#1{#1}\fi \expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi \providecommand{\bibinfo}[2]{#2} \providecommand{\eprint}[2][]{\url{#2}}
\bibitem[{\citenamefont{Livine and Terno}(2005)}]{livine_entanglement_2005} \bibinfo{author}{\bibfnamefont{E.~R.} \bibnamefont{Livine}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{D.~R.} \bibnamefont{Terno}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{72}},
\bibinfo{pages}{022307} (\bibinfo{year}{2005}).
\bibitem[{\citenamefont{Misguich and Lhuillier}(2004)}]{FrustratedSpinSys} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Misguich}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Lhuillier}},
\emph{\bibinfo{title}{Frustrated Spin Systems}} (\bibinfo{publisher}{World
Scientific}, \bibinfo{year}{2004}), chap.~\bibinfo{chapter}{5}, p.
\bibinfo{pages}{229}.
\bibitem[{\citenamefont{Bourennane et~al.}(2004)\citenamefont{Bourennane, Eibl,
Gaertner, Kurtsiefer, Cabello, and
Weinfurter}}]{bourennane_decoherence-free_2004} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Bourennane}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Eibl}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Gaertner}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Kurtsiefer}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Cabello}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Weinfurter}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{92}},
\bibinfo{pages}{107901} (\bibinfo{year}{2004}).
\bibitem[{\citenamefont{Bartlett et~al.}(2003)\citenamefont{Bartlett, Rudolph,
and Spekkens}}]{bartlett_classical_2003} \bibinfo{author}{\bibfnamefont{S.~D.} \bibnamefont{Bartlett}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Rudolph}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{R.~W.} \bibnamefont{Spekkens}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{91}},
\bibinfo{pages}{027901} (\bibinfo{year}{2003}).
\bibitem[{\citenamefont{Toth and Mitchell}(2010)}]{tth_generation_2010} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Toth}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.~W.} \bibnamefont{Mitchell}},
\bibinfo{journal}{New J. Phys.} \textbf{\bibinfo{volume}{12}},
\bibinfo{pages}{053007} (\bibinfo{year}{2010}).
\bibitem[{\citenamefont{Wineland et~al.}(1992)\citenamefont{Wineland,
Bollinger, Itano, Moore, and Heinzen}}]{wineland_spin_1992} \bibinfo{author}{\bibfnamefont{D.~J.} \bibnamefont{Wineland}},
\bibinfo{author}{\bibfnamefont{J.~J.} \bibnamefont{Bollinger}},
\bibinfo{author}{\bibfnamefont{W.~M.} \bibnamefont{Itano}},
\bibinfo{author}{\bibfnamefont{F.~L.} \bibnamefont{Moore}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{D.~J.} \bibnamefont{Heinzen}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{46}},
\bibinfo{pages}{R6797} (\bibinfo{year}{1992}).
\bibitem[{\citenamefont{Kitagawa and Ueda}(1993)}]{kitagawa_squeezed_1993} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Kitagawa}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Ueda}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{47}},
\bibinfo{pages}{5138} (\bibinfo{year}{1993}).
\bibitem[{\citenamefont{Sorensen and
Molmer}(2001)}]{sorensen_entanglement_2001} \bibinfo{author}{\bibfnamefont{A.~S.} \bibnamefont{Sorensen}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Molmer}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{86}},
\bibinfo{pages}{4431} (\bibinfo{year}{2001}).
\bibitem[{\citenamefont{Toth et~al.}(2007)\citenamefont{Toth, Knapp, Guhne, and
Briegel}}]{toth_optimal_2007} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Toth}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Knapp}},
\bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{Guhne}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{H.~J.} \bibnamefont{Briegel}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{99}},
\bibinfo{pages}{250405} (\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Eibl et~al.}(2003)\citenamefont{Eibl, Gaertner,
Bourennane, Kurtsiefer, Zukowski, and Weinfurter}}]{eibl_experimental_2003} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Eibl}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Gaertner}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Bourennane}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Kurtsiefer}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Zukowski}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Weinfurter}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{90}},
\bibinfo{pages}{200403} (\bibinfo{year}{2003}).
\bibitem[{\citenamefont{Kane}(1998)}]{Kane_QC_nuclei} \bibinfo{author}{\bibfnamefont{B.~E.} \bibnamefont{Kane}},
\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{393}},
\bibinfo{pages}{133} (\bibinfo{year}{1998}).
\bibitem[{\citenamefont{Gurudev~Dutt et~al.}(2007)\citenamefont{Gurudev~Dutt,
Childress, Jiang, Togan, Maze, Jelezko, Zibrov, Hemmer, and
Lukin}}]{Dutt2007_ScienceNV} \bibinfo{author}{\bibfnamefont{M.~V.} \bibnamefont{Gurudev~Dutt}},
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Childress}},
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Jiang}},
\bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Togan}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Maze}},
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Jelezko}},
\bibinfo{author}{\bibfnamefont{A.~S.} \bibnamefont{Zibrov}},
\bibinfo{author}{\bibfnamefont{P.~R.} \bibnamefont{Hemmer}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.~D.} \bibnamefont{Lukin}},
\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{316}},
\bibinfo{pages}{1312} (\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Neumann et~al.}(2008)\citenamefont{Neumann, Mizuochi,
Rempp, Hemmer, Watanabe, Yamasaki, Jacques, Gaebel, Jelezko, and
Wrachtrup}}]{Newmann08ScienceNVentangle} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Neumann}},
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Mizuochi}},
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Rempp}},
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Hemmer}},
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Watanabe}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Yamasaki}},
\bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Jacques}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Gaebel}},
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Jelezko}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Wrachtrup}},
\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{320}},
\bibinfo{pages}{1326} (\bibinfo{year}{2008}).
\bibitem{remark} The states unconnected with MBS make up $\sim 1 \%$ of the entire Hilbert space if two raising operators of the form $\hat{j}_{A}^{+}-\hat{j}_{B}^{+}$ are used, and this fraction can be reduced to $\sim 0.01 \%$ if three such operators are used in the dynamic spin polarization.
\bibitem{Yu_inprep} \bibinfo{author}{\bibfnamefont{H.~Y.} \bibnamefont{Yu}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{W.} \bibnamefont{Yao}}, unpublished.
\bibitem[{\citenamefont{Greilich et~al.}(2007)\citenamefont{Greilich, Shabaev,
Yakovlev, Efros, Yugova, Reuter, Wieck, and
Bayer}}]{greilich_nuclei-induced_2007} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Greilich}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Shabaev}},
\bibinfo{author}{\bibfnamefont{D.~R.} \bibnamefont{Yakovlev}},
\bibinfo{author}{\bibfnamefont{A.~L.} \bibnamefont{Efros}},
\bibinfo{author}{\bibfnamefont{I.~A.} \bibnamefont{Yugova}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Reuter}},
\bibinfo{author}{\bibfnamefont{A.~D.} \bibnamefont{Wieck}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Bayer}},
\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{317}},
\bibinfo{pages}{1896} (\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Reilly et~al.}(2008)\citenamefont{Reilly, Taylor,
Petta, Marcus, Hanson, and Gossard}}]{reilly_suppressing_2008} \bibinfo{author}{\bibfnamefont{D.~J.} \bibnamefont{Reilly}},
\bibinfo{author}{\bibfnamefont{J.~M.} \bibnamefont{Taylor}},
\bibinfo{author}{\bibfnamefont{J.~R.} \bibnamefont{Petta}},
\bibinfo{author}{\bibfnamefont{C.~M.} \bibnamefont{Marcus}},
\bibinfo{author}{\bibfnamefont{M.~P.} \bibnamefont{Hanson}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.~C.}
\bibnamefont{Gossard}}, \bibinfo{journal}{Science}
\textbf{\bibinfo{volume}{321}}, \bibinfo{pages}{817} (\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Xu et~al.}(2009)\citenamefont{Xu, Yao, Sun, Steel,
Bracker, Gammon, and Sham}}]{xu_optically_2009} \bibinfo{author}{\bibfnamefont{X.}~\bibnamefont{Xu}},
\bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Yao}},
\bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Sun}},
\bibinfo{author}{\bibfnamefont{D.~G.} \bibnamefont{Steel}},
\bibinfo{author}{\bibfnamefont{A.~S.} \bibnamefont{Bracker}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Gammon}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{L.~J.} \bibnamefont{Sham}},
\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{459}},
\bibinfo{pages}{1105} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Tartakovskii et~al.}(2007)\citenamefont{Tartakovskii,
Wright, Russell, Falko, Vankov, {Skiba-Szymanska}, Drouzas, Kolodka,
Skolnick, Fry et~al.}}]{tartakovskii_nuclear_2007} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Tartakovskii}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Wright}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Russell}},
\bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Falko}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Vankov}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{{Skiba-Szymanska}}},
\bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Drouzas}},
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Kolodka}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Skolnick}},
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Fry}}, \bibnamefont{et~al.},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{98}},
\bibinfo{pages}{026806}
(\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Danon and Nazarov}(2008)}]{danon_nuclear_2008} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Danon}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Nazarov}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{100}},
\bibinfo{pages}{056603}
(\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Korenev}(2007)}]{korenev_nuclear_2007} \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Korenev}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{99}},
\bibinfo{pages}{256405} (\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Bracker et~al.}(2005)\citenamefont{Bracker, Stinaff,
Gammon, Ware, Tischler, Shabaev, Efros, Park, Gershoni, Korenev
et~al.}}]{bracker_optical_2005} \bibinfo{author}{\bibfnamefont{A.~S.} \bibnamefont{Bracker}},
\bibinfo{author}{\bibfnamefont{E.~A.} \bibnamefont{Stinaff}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Gammon}},
\bibinfo{author}{\bibfnamefont{M.~E.} \bibnamefont{Ware}},
\bibinfo{author}{\bibfnamefont{J.~G.} \bibnamefont{Tischler}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Shabaev}},
\bibinfo{author}{\bibfnamefont{A.~L.} \bibnamefont{Efros}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Park}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Gershoni}},
\bibinfo{author}{\bibfnamefont{V.~L.} \bibnamefont{Korenev}},
\bibnamefont{et~al.}, \bibinfo{journal}{Phys. Rev. Lett.}
\textbf{\bibinfo{volume}{94}}, \bibinfo{pages}{047402}
(\bibinfo{year}{2005}).
\bibitem[{\citenamefont{Urbaszek et~al.}(2007)\citenamefont{Urbaszek, Braun,
Amand, Krebs, Belhadj, Lemaitre, Voisin, and
Marie}}]{urbaszek_efficient_2007} \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Urbaszek}},
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Braun}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Amand}},
\bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{Krebs}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Belhadj}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Lemaitre}},
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Voisin}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{X.}~\bibnamefont{Marie}},
\bibinfo{journal}{Phys. Rev. B} \textbf{\bibinfo{volume}{76}}, \bibinfo{pages}{201301R}
(\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Ono and Tarucha}(2004)}]{Ono_current_oscillation} \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Ono}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Tarucha}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{92}},
\bibinfo{pages}{256803} (\bibinfo{year}{2004}).
\bibitem[{\citenamefont{Latta et~al.}(2009)\citenamefont{Latta, Hogele, Zhao,
Vamivakas, Maletinsky, Kroner, Dreiser, Carusotto, Badolato, Schuh
et~al.}}]{latta_confluence_2009} \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Latta}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Hogele}},
\bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Zhao}},
\bibinfo{author}{\bibfnamefont{A.~N.} \bibnamefont{Vamivakas}},
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Maletinsky}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Kroner}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Dreiser}},
\bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Carusotto}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Badolato}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Schuh}},
\bibnamefont{et~al.}, \bibinfo{journal}{Nat. Phys.}
\textbf{\bibinfo{volume}{5}}, \bibinfo{pages}{758} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Vink et~al.}(2009)\citenamefont{Vink, Nowack, Koppens,
Danon, Nazarov, and Vandersypen}}]{vink_locking_2009} \bibinfo{author}{\bibfnamefont{I.~T.} \bibnamefont{Vink}},
\bibinfo{author}{\bibfnamefont{K.~C.} \bibnamefont{Nowack}},
\bibinfo{author}{\bibfnamefont{F.~H.~L.} \bibnamefont{Koppens}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Danon}},
\bibinfo{author}{\bibfnamefont{Y.~V.} \bibnamefont{Nazarov}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{L.~M.~K.}
\bibnamefont{Vandersypen}}, \bibinfo{journal}{Nat. Phys.}
\textbf{\bibinfo{volume}{5}}, \bibinfo{pages}{764} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Laird et~al.}(2007)\citenamefont{Laird, Barthel,
Rashba, Marcus, Hanson, and Gossard}}]{laird_hyperfine-mediated_2007} \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Laird}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Barthel}},
\bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Rashba}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Marcus}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Hanson}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Gossard}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{99}}, \bibinfo{pages}{246601}
(\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Rudner and Levitov}(2007)}]{rudner_electrically_2007} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Rudner}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Levitov}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{99}}, \bibinfo{pages}{246602}
(\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Rashba}(2008)}]{rashba_theory_2008} \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Rashba}},
\bibinfo{journal}{Phys. Rev. B} \textbf{\bibinfo{volume}{78}}, \bibinfo{pages}{195302}
(\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Eble et~al.}(2009)\citenamefont{Eble, Testelin,
Desfonds, Bernardot, Balocchi, Amand, Miard, Lemaitre, Marie, and
Chamarro}}]{Chamarro_holespin} \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Eble}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Testelin}},
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Desfonds}},
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Bernardot}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Balocchi}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Amand}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Miard}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Lemaitre}},
\bibinfo{author}{\bibfnamefont{X.}~\bibnamefont{Marie}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Chamarro}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{102}},
\bibinfo{pages}{146601} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Fischer et~al.}(2008)\citenamefont{Fischer, Coish,
Bulaev, and Loss}}]{fischer_spin_2008} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Fischer}},
\bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Coish}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Bulaev}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Loss}},
\bibinfo{journal}{Phys. Rev. B} \textbf{\bibinfo{volume}{78}}, \bibinfo{pages}{155329}
(\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Chekhovich et~al.}(2010)\citenamefont{Chekhovich,
Makhonin, Kavokin, Krysa, Skolnick, and
Tartakovskii}}]{chekhovich_pumping_2010} \bibinfo{author}{\bibfnamefont{E.~A.} \bibnamefont{Chekhovich}},
\bibinfo{author}{\bibfnamefont{M.~N.} \bibnamefont{Makhonin}},
\bibinfo{author}{\bibfnamefont{K.~V.} \bibnamefont{Kavokin}},
\bibinfo{author}{\bibfnamefont{A.~B.} \bibnamefont{Krysa}},
\bibinfo{author}{\bibfnamefont{M.~S.} \bibnamefont{Skolnick}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.~I.}
\bibnamefont{Tartakovskii}}, \bibinfo{journal}{Phys. Rev. Lett.}
\textbf{\bibinfo{volume}{104}}, \bibinfo{pages}{066804}
(\bibinfo{year}{2010}).
\bibitem[{\citenamefont{Hale and Mieher}(1969)}]{hale_shallow_1969} \bibinfo{author}{\bibfnamefont{E.~B.} \bibnamefont{Hale}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{R.~L.} \bibnamefont{Mieher}},
\bibinfo{journal}{Phys. Rev.} \textbf{\bibinfo{volume}{184}},
\bibinfo{pages}{739} (\bibinfo{year}{1969}).
\end{thebibliography}
\end{document}
\begin{document}
\title{Quantum learning robust to noise} \author{Andrew W. Cross} \email{awcross@us.ibm.com} \affiliation{IBM T. J. Watson Research Center, 1101 Kitchawan Road, Yorktown Heights, NY 10598}
\author{Graeme Smith} \email{gsbsmith@gmail.com}
\affiliation{IBM T. J. Watson Research Center, 1101 Kitchawan Road, Yorktown Heights, NY 10598} \author{John A. Smolin} \email{smolin@us.ibm.com} \affiliation{IBM T. J. Watson Research Center, 1101 Kitchawan Road, Yorktown Heights, NY 10598} \date{18 July 2014}
\begin{abstract}
Noise is often regarded as anathema to quantum computation, but in
some settings it can be an unlikely ally. We consider the problem of
learning the class of $n$-bit parity functions by making queries
to a quantum example oracle. In the absence of noise, quantum and
classical parity learning are easy and almost equally powerful, both
information-theoretically and computationally. We show that in
the presence of noise this story changes dramatically. Indeed, the
classical learning problem is believed to be intractable, while the quantum version
remains efficient. Depolarizing the qubits at the oracle's output at any constant
nonzero rate does not increase the computational (or query) complexity of
quantum learning more than logarithmically. However, the problem of learning from corresponding classical examples
is the Learning Parity with Noise (LPN) problem, for which the best known algorithms have superpolynomial complexity. This creates the possibility of observing a quantum advantage with a few hundred noisy qubits. The presence of noise is essential for creating this
quantum-classical separation. \end{abstract}
\maketitle
\section{Introduction}
A theory of quantum fault-tolerance has been erected to overcome pervasive decoherence \cite{abo98,kit97,agp06}. Without such fault-tolerant machinery, large classes of quantum algorithms can fail to give any significant improvement over classical algorithms \cite{regev08}. This may lead one to suppose that noise can only rob quantum algorithms of their supremacy or at best increase the cost of running them. Here we exhibit a problem for which noise is instead a more significant classical foe and is crucial to achieving a quantum speed-up, or rather, a classical slow-down. This challenges the received wisdom that quantum computations are inherently delicate while classical computation is more robust.
We consider the problem of learning a class of Boolean functions by making queries to a quantum example oracle \cite{bshouty99}. Such an
oracle provides a quantum state that encodes a hidden function, and the goal is to discover the function efficiently, meaning
with a number of queries and an amount of post-processing that scales polynomially in the number of input bits. In the quantum setting, we are permitted to apply coherent operations to the quantum state, whereas in the classical setting we must first measure the state in the computational basis before further computation. This model of quantum learning differs from other attempts to use quantum computers to perform machine learning tasks \cite{anguita03,pudenz13,lloyd13,rebentrost13,wiebe14}. Information-theoretically, quantum learning from queries to ideal oracles is only polynomially more powerful than classical learning \cite{servedio04,atici05}. Computationally, however, there is a class of functions that is polynomial time learnable from quantum coherent queries but not from classical queries, under the assumption that factoring Blum integers is intractable \cite{servedio04}.
In this work, we exhibit a learning problem with a superpolynomial quantum computational speed-up only in the presence of noise. The physical implementation of any oracle on bare qubits will inevitably be noisy. To fairly assess the performance of a quantum algorithm given access to such an oracle, we must compare it to a classical algorithm given access to a noisy classical oracle with similar noise characteristics. To do this, we imagine constructing a noisy classical oracle by completely dephasing the inputs and outputs of a noisy quantum oracle in the computational basis.
For the class of parity functions, we show that depolarizing the example oracle's output at any nonzero rate has a small (logarithmic) effect on the computational complexity of learning from quantum coherent examples. However, the function cannot be learned from classical examples provided by the corresponding noisy classical oracle, as this is equivalent to a problem called Learning Parity with Noise (LPN), for which the best known algorithm has superpolynomial complexity \cite{lyubash05}. Both problem settings are tractable without noise, so a quantum advantage is not merely retained; it occurs because of the noise.
The rest of the paper is organized as follows. Section~\ref{sec:def} reviews definitions relevant to quantum learning. Sections~\ref{sec:lp} and \ref{sec:lpn} consider the learning problem without and with noise, respectively. Finally, Section~\ref{sec:conc} concludes.
\section{Definitions}\label{sec:def}
We begin by reviewing relevant definitions. A \emph{membership oracle} for a Boolean function $f:\{0,1\}^n\rightarrow \{0,1\}$ is an oracle that, when queried with input $x$, outputs the result $f(x)$. It is so called because we can think of $f(x)$ as telling us whether input $x$ belongs to a set associated with the function (namely, the set of inputs that evaluate to 1). A query to a \emph{uniform random example oracle} for a Boolean function $f$ returns an ordered pair $(x,f(x))$ where $x$ is drawn uniformly at random from the set of all possible inputs of $f$. The membership oracle gives an agent freedom to choose the input, whereas the example oracle merely allows one to ``push a button'' and request an output.
The problem of learning a class of Boolean functions by querying such oracles can be generalized to a quantum coherent setting \cite{bshouty99}. A \emph{quantum membership oracle} $\textsc{Q}_f$ is a unitary transformation that acts on the computational basis states as \begin{equation}
\textsc{Q}_f: |x,b\rangle\mapsto |x,b\oplus f(x)\rangle, \end{equation} where $x\in \{0,1\}^n$ and $b\in \{0,1\}$. A \emph{uniform quantum example oracle} for $f$ outputs the quantum state \begin{equation}
|\psi_{f}\rangle\equiv\frac{1}{2^{n/2}}\sum_{x\in\{0,1\}^n} |x,f(x)\rangle. \end{equation} This oracle only gives the learner freedom to request some number of quantum states, each at unit cost. For both oracles, the \emph{query register} comprises the qubits containing $x$, and the \emph{result qubit} is the auxiliary qubit containing $f(x)$ (Fig.~\ref{fig:qex}).
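As an illustrative sketch (ours, not part of the paper), the example-oracle state $|\psi_f\rangle$ for a parity concept can be written down directly as a statevector. The helper name `example_state` and the bit-ordering convention (result qubit stored in the least significant position) are assumptions of this sketch.

```python
import numpy as np

def parity(a, x):
    # <a, x> mod 2, with integers a, x read as n-bit strings
    return bin(a & x).count("1") % 2

def example_state(a, n):
    """Statevector of |psi_f> = 2^{-n/2} sum_x |x, f_a(x)> on n+1 qubits.

    Convention (an assumption of this sketch): the query register |x>
    occupies the high-order bits, the result qubit the low-order bit."""
    psi = np.zeros(2 ** (n + 1))
    for x in range(2 ** n):
        psi[(x << 1) | parity(a, x)] = 2 ** (-n / 2)
    return psi

n, a = 3, 0b101
psi = example_state(a, n)
# psi is normalized and supported on exactly 2^n basis states
```

Each query to the oracle hands the learner one fresh copy of this state, at unit cost.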
Given any quantum oracle, we define a corresponding classical oracle by completely dephasing every interface to the quantum oracle, passing each input/output qubit through a channel ${\cal E}_Z(\rho)=(\rho+Z\rho Z)/2$, where $Z = \left( \begin{matrix}1 & 0 \\ 0 & -1 \end{matrix}\right)$. Any quantum membership oracle becomes a classical membership oracle, and any uniform quantum example oracle becomes a uniform random example oracle. This definition allows us to begin with a noisy quantum oracle and identify a corresponding noisy classical oracle with similar noise characteristics \footnote{Equivalently, one can instead completely dephase the {\em learner's} interface by moving the dephasing channels outside the quantum oracle. One can then define a classical learner as a learner who interacts with oracles through dephased interfaces, whereas a quantum learner has no such restriction. Classical and quantum learners can now be given access to the same noisy quantum oracle.}.
\begin{figure}
\caption{One can construct a uniform quantum example oracle from a quantum membership oracle for $f$. $H$ denotes a Hadamard gate. The quantum learner then performs a quantum computation to identify the function $f$. The corresponding classical example oracle is obtained from the quantum oracle by measuring its output qubits in their computational bases. The classical learner uses this output together with classical computation to learn the function.}
\label{fig:qex}
\end{figure}
In the domain of learning theory, a \emph{concept} $f$ is a Boolean function $f:\{0,1\}^n\rightarrow \{0,1\}$. A \emph{concept
class} ${\mathfrak C}=\cup_{n\geq 1} C_n$ (hereafter, \emph{class}) is a collection of concepts, and each $C_n$ contains the concepts whose domain is $\{0,1\}^n$. Given a \emph{target concept} $f\in {\mathfrak C}$, a typical goal is to construct a {\it hypothesis} function $h:\{0,1\}^n\rightarrow \{0,1\}$ that agrees with $f$ on at least a $1-\epsilon$ fraction of the inputs in $\{0,1\}^n$, i.e. \begin{equation} \mathrm{Pr}_{x}\left[h(x)=f(x)\right]\geq 1-\epsilon \end{equation} where $x$ is drawn from the uniform distribution. Such a function is called an {\it $\epsilon$-approximation of $f$}.
A class ${\mathfrak C}$ is \emph{efficiently PAC (Probably Approximately Correct \cite{valiant84}) learnable under the uniform distribution} if given a uniform example oracle for any target concept $f\in {\mathfrak C}$, there is an algorithm that \begin{enumerate} \item for any $\epsilon,\delta\in (0,1/2)$, outputs an $\epsilon$-approximation $h$ of $f$ with probability $1-\delta$, \item runs in time and uses a number of queries that is $\mathrm{poly}(n,1/\epsilon,1/\delta)$. \end{enumerate} The definition of learning is identical in the quantum setting except that the uniform example oracle is replaced by a uniform quantum example oracle, and the allowed computations may be coherent. We will also consider example oracles corrupted by noise of constant rate $\eta<1/2$ in a way that is defined later. The definition of learning is unchanged in this case, although one may require the algorithm to run in time $\mathrm{poly}(1/(1/2-\eta))$ as well.
We now restrict the discussion to the class of parity functions \begin{equation} f_a(x) = \langle a,x\rangle = \sum_{j=1}^n a_jx_j\ \mathrm{mod}\ 2 \end{equation} where $a\in \{0,1\}^n$ and $a_j$ ($x_j$) denotes the $j$th bit of $a$ ($x$). We are given access to a uniform quantum example oracle for the unknown concept $f_a(x)=\langle a,x\rangle$. If we incorrectly guess even a single bit of $a$, our hypothesis function is a $1/2$-approximation to $f_a$ and remains so for any number of incorrect bits. Therefore, we must with high probability find $a$ exactly, and this is what we require hereafter.
\section{Learning from ideal queries}\label{sec:lp}
First, consider the noiseless case where each query returns a pure quantum state. This case is tractable for both quantum and classical queries as we now review.
The classical oracle provides an example $(x,f_a(x))$ where $x$ is uniformly random over $\{0,1\}^n$. Since $f_a(x)$ is a linear function, it is clear that $n$ queries are sufficient to learn $f_a$ exactly with a constant probability of success. The probability that $n$ queries produce linearly independent examples is \begin{equation} \prod_{j=0}^{n-1}(1-2^{j-n}), \end{equation} which is greater than $1/4$ for any $n>1$. Any algorithm that detectably fails with constant probability less than $p$ and otherwise succeeds can be repeated no more than $\log_{1/p}(1/\delta)$ times to reduce the failure probability below $\delta$. The value of $a$ is obtained from the examples by Gaussian elimination.
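The classical learner sketched above reduces to linear algebra over GF(2). The following illustration (ours; `solve_gf2` is a hypothetical helper) performs the Gaussian elimination step; identity rows are included in the demo examples purely to guarantee full rank.

```python
import numpy as np

def solve_gf2(X, y):
    """Solve X a = y over GF(2) by Gaussian elimination; returns the
    unique solution, or None if the rows lack full column rank."""
    X = X.astype(np.uint8) % 2
    y = y.astype(np.uint8) % 2
    m, n = X.shape
    row = 0
    for col in range(n):
        pivot = next((r for r in range(row, m) if X[r, col]), None)
        if pivot is None:
            return None  # examples not linearly independent: a undetermined
        X[[row, pivot]] = X[[pivot, row]]
        y[[row, pivot]] = y[[pivot, row]]
        for r in range(m):
            if r != row and X[r, col]:
                X[r] ^= X[row]   # eliminate column col from the other rows
                y[r] ^= y[row]
        row += 1
    return y[:n]

# recover a hidden parity from noiseless examples (identity rows ensure
# solvability in this demo; random queries succeed w.p. > 1/4 as above)
rng = np.random.default_rng(0)
a = np.array([1, 0, 1, 1], dtype=np.uint8)
X = np.vstack([np.eye(4, dtype=np.uint8),
               rng.integers(0, 2, size=(8, 4), dtype=np.uint8)])
y = (X @ a) % 2
a_hat = solve_gf2(X, y)
```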
In the quantum setting, $f_a$ can be learned with constant probability from a single query. Given $|\psi_f\rangle$, apply Hadamard gates \begin{equation} H=\frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array}\right) \end{equation} to each of the $n+1$ output qubits. A simple calculation shows that the output state becomes \begin{equation}
\frac{1}{\sqrt{2}}\left(|0^n,0\rangle+|a,1\rangle\right). \label{eq:outputstate} \end{equation} Therefore, with probability $1/2$, measurement reveals the value of $a$ directly in the query register whenever the result qubit is one. Again the probability of success can be amplified with $O(\log(1/\delta))$ queries.
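The single-query algorithm can be checked numerically on a statevector simulation (our sketch; the convention of storing the result qubit in the least significant bit is an assumption, not fixed by the paper):

```python
import numpy as np

def parity(a, x):
    return bin(a & x).count("1") % 2

n, a = 3, 0b110
# |psi_f>, result qubit in the least significant bit (assumed ordering)
psi = np.zeros(2 ** (n + 1))
for x in range(2 ** n):
    psi[(x << 1) | parity(a, x)] = 2 ** (-n / 2)

# apply H to each of the n+1 qubits
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hfull = np.array([[1.0]])
for _ in range(n + 1):
    Hfull = np.kron(Hfull, H)
out = Hfull @ psi
# out should equal (|0^n,0> + |a,1>)/sqrt(2): amplitude 2^{-1/2} on
# basis index 0 and on basis index (a << 1) | 1, zero elsewhere
```

Measuring `out` yields the query register $a$ exactly when the result qubit is one, with probability $1/2$, as claimed.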
Note that this is very similar to the Bernstein-Vazirani algorithm \cite{BV} but adapted to use an example oracle rather than a membership oracle. The only difference is our treatment of the final qubit. In the Bernstein-Vazirani algorithm, the result qubit is input as a $|-\rangle$ state and will, with certainty in the noiseless case, end up as a $|1\rangle$. It then does not even need to be measured. For the example oracle we have considered, we do not have the luxury of choosing the input, so we simply check that the result qubit is $|1\rangle$, which collapses the output state of the other $n$ qubits to the result of the Bernstein-Vazirani oracle.
\section{Learning in the presence of noise}\label{sec:lpn}
Now we consider how the situation changes when we add noise to the output of the example oracle. We will see that learning parity from a noisy classical example oracle seems to become computationally intractable, while the same task with a noisy quantum example oracle can be solved efficiently on a quantum computer. We first consider a simple case that is easy to analyze, followed by the more realistic case of depolarizing noise.
\subsection{Classification noise}
The simplest noise model one can imagine is classification noise: flip the result qubit with probability $\eta$ by applying the Pauli $\sigma_x={0\ 1 \choose 1\ 0}$ operator. Classically, learning $f_a$ from such corrupted results is called \emph{learning parity with noise} (LPN). This problem is an average-case version \cite{lyubash05} of the NP-hard problem of decoding a linear code \cite{berlekamp78}, which is also known to be hard to approximate \cite{hastad97}. The LPN problem is believed to be computationally intractable. Potential cryptographic applications have been proposed for this problem and its generalizations \cite{regev05}, and the best known algorithms for LPN are sub-exponential (but super-polynomial) \cite{blum03,lyubash05}. Problem instances with hundreds of bits may be impractical to solve \cite{lev06}.
However, the quantum case remains easy. With noise, the output of the oracle transformed by Hadamards becomes the mixture of (\ref{eq:outputstate}) with probability $1-\eta$ and \begin{equation}
\frac{1}{\sqrt{2}}(|0^n,1\rangle+|a,0\rangle) \end{equation} with probability $\eta$. The probability that the query register contains $a$ remains $1/2$, independent of $\eta$. Thus, after $k$ queries, the probability of observing $a$ is $1-(1/2)^k$. This suggests the simple strategy of reporting either $a=$``whatever nonzero result is seen'' or $0^n$ otherwise. It fails with a probability that is exponentially small in $k$ and independent of $n$.
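A quick Monte Carlo sketch of this strategy (our own illustration; `noisy_query` models the measured, Hadamard-transformed noisy example as described above):

```python
import random

def noisy_query(a, eta, rng):
    """Measured outcome (query_register, result_bit) of one
    Hadamard-transformed example under classification noise: the query
    register equals a with probability 1/2 irrespective of eta."""
    flipped = rng.random() < eta
    if rng.random() < 0.5:
        return (a, 0 if flipped else 1)
    return (0, 1 if flipped else 0)

def learn(a, eta, k, rng):
    # report the first nonzero query-register value seen, else 0^n
    for _ in range(k):
        m, _ = noisy_query(a, eta, rng)
        if m != 0:
            return m
    return 0

rng = random.Random(1)
trials = 2000
fails = sum(learn(0b1011, 0.25, 10, rng) != 0b1011 for _ in range(trials))
# failure rate should be near (1/2)**10, independent of eta
```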
This strategy is strictly suboptimal since it ignores the information contained in the result qubit. The fact that it nevertheless works so well suggests that our noise model is rather unfair to the classical case: it degrades the result qubit, which the quantum algorithm hardly needs. Indeed, in the Bernstein-Vazirani algorithm, the result qubit is not measured at all. Still, this simple case illustrates how noise can impact the classical learner far more severely. Even in the more realistic noise model we consider next, the quantum algorithm tolerates significant amounts of noise because each quantum query reveals so much information.
\subsection{Depolarizing noise}
Now we consider the case where the output of the oracle is subject to independent depolarizing noise $D_\eta^{\otimes (n+1)}$ where $D_\eta(\rho) = (1-2\eta)\rho + 2\eta I/2$ and $\eta<1/2$ is a constant known noise rate. This noise process is an idealization of realistic independent noise and corrupts the ideal output of the oracle with probability proportional to $\eta$.
Classically, in the presence of this noise, we obtain examples $(x\oplus e_{1:n},f_a(x)\oplus e_{n+1})$ where $x$ is uniformly random over $\{0,1\}^n$ and each bit of the noise $(e_{1:n},e_{n+1})$ is $1$ with probability $\eta$. Here $e_{1:n}\in \{0,1\}^n$ and $e_{n+1}\in\{0,1\}$. Since $x$ is uniformly random, $x'=x\oplus e_{1:n}$ is uniformly random as well and \begin{equation} (x\oplus e_{1:n},f_a(x)\oplus e_{n+1})=(x',f_a(x'\oplus e_{1:n})\oplus e_{n+1}). \end{equation} The probability that $f_a(x'\oplus e_{1:n})\neq f_a(x')$ depends on the value of $a$ and is given by \begin{equation}
\zeta_a=\sum_{w=1}^n\sum_{k=1,\textrm{odd}}^w {|a|\choose k}{n-|a|\choose w-k}\eta^w(1-\eta)^{n-w} \end{equation}
where $|a|$ denotes the number of $1$'s in $a$ (also called the Hamming weight). The total probability of error on the result bit is simply $\eta':=\eta(1-\zeta_a)+\zeta_a(1-\eta)$. Therefore, learning from these examples is the LPN problem with noise rate $\eta'\in [\eta,1-\eta]$.
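As a numerical sanity check (ours, not the paper's), $\zeta_a$ can be evaluated directly from the double sum; since the parity of $|a|$ independent Bernoulli($\eta$) noise bits is odd with probability $(1-(1-2\eta)^{|a|})/2$, the sum should match that closed form.

```python
from math import comb

def zeta(n, weight, eta):
    """zeta_a for |a| = weight: probability that the input noise e
    flips the parity, per the double sum in the text."""
    return sum(comb(weight, k) * comb(n - weight, w - k)
               * eta ** w * (1 - eta) ** (n - w)
               for w in range(1, n + 1)
               for k in range(1, w + 1, 2))  # odd k only

n, weight, eta = 8, 3, 0.1
z = zeta(n, weight, eta)
eta_prime = eta * (1 - z) + z * (1 - eta)
# eta_prime is the effective LPN noise rate, lying in [eta, 1 - eta]
```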
In contrast, coherent manipulation of the noisy output state of the example oracle allows a quantum learner to learn $f_a$ in a number of queries that is logarithmic in $n$. The algorithm is as follows. Make $k=O(\log n)$ queries to the example oracle, and for each query, Hadamard transform all $n+1$ noisy output qubits and measure them to obtain an outcome. Each outcome has the form $(m,b)$ where $m\in \{0,1\}^n$ is a result string in the query register and the result bit $b$ is uniformly random. Discard the outcome if $b=0$ and otherwise retain the result string $m$. We are left with $k'$ result strings $m_1$, $m_2$, $\dots$, $m_{k'}$ on which we perform a bit-wise majority vote to obtain an estimate $\hat{a}$ of $a$.
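A minimal simulation of this majority-vote post-processing (our illustration), assuming each retained string is drawn from the mixture $(1-\eta)D_a^\eta+\eta D_{0^n}^\eta$ identified in the analysis below:

```python
import random

def sample_retained(a_bits, eta, rng):
    """One retained result string: start from a (w.p. 1-eta) or the
    all-zero string (w.p. eta), then flip each bit independently at
    rate eta."""
    base = a_bits if rng.random() < 1 - eta else [0] * len(a_bits)
    return [b ^ (rng.random() < eta) for b in base]

def majority_vote(a_bits, eta, k_prime, rng):
    strings = [sample_retained(a_bits, eta, rng) for _ in range(k_prime)]
    # bit-wise majority over the k' retained strings
    return [int(sum(col) > k_prime / 2) for col in zip(*strings)]

rng = random.Random(7)
a_bits = [1, 0, 1, 1, 0, 0, 1, 0]
a_hat = majority_vote(a_bits, eta=0.1, k_prime=201, rng=rng)
```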
We will now argue that the estimate $\hat{a}$ obtained from this protocol is equal to $a$ for appropriately chosen parameters. Given any constant $\delta>0$, the algorithm must find an estimate $\hat{a}$ such that $\mathrm{Pr}[\hat{a}\neq a]<\delta$. We make repeated use of a loose form of the Chernoff bound \begin{equation}
\mathrm{Pr}\left[|X-\eta k|<\delta\eta k\right] > 1-2e^{-\delta^2\eta k/3}\equiv 1 - B_k(\eta,\delta) \label{eq:Chernoff} \end{equation} where $X$ is the sum of $k$ independent Bernoulli random variables with $\mathrm{Pr}(1)=\eta$ and $0<\delta<1$. Clearly we can query the oracle until we retain a total of $k'$ result strings. This takes $2k'$ expected queries. By performing, say, $3k'$ queries, we are guaranteed via Eq.~(\ref{eq:Chernoff}) to retain fewer than $k'$ with probability exponentially small in $k'$. Now, let $D_q^\eta$ be the probability distribution over $\{0,1\}^n$ that corresponds to the bit string $q$ corrupted by independent bit-flip noise of rate $\eta$. The retained strings are drawn from the distribution $(1-\eta)D_a^\eta+\eta D_{0^n}^\eta$. Let $s$ be the random variable giving the unknown number of strings drawn from $D_a^\eta$. These successful queries, which we take to be $m_1,m_2,\dots,m_s$ without loss of generality, contain information about the hidden function. The expected value of $s$ is $\mu_s=(1-\eta)k'$, and its variation from this mean is controlled by \begin{align}
\mathrm{Pr}\left[|s-\mu_s|<\delta'\mu_s\right] > 1 - B_{k'}(1-\eta,\delta'). \end{align} Our algorithm votes independently on the $j$th bits of the strings for each $j=1,2,\dots, n$. Let $M_j$ be the random variable corresponding to the sum $(m_1)_j+(m_2)_j+\dots+(m_{k'})_j$ of the $j$th bits. The worst case occurs when $a_j=1$, which we assume. Define random variables $M_j^{(a)}=(m_1)_j+\dots+(m_s)_j$ and $M_j^{(0)}=M_j-M_j^{(a)}$ with means $\mu_j^{(a)}=(1-\eta)s$ and $\mu_j^{(0)}=\eta(k'-s)$, respectively. The mean of $M_j$ is $\mu_j=\mu_j^{(a)}+\mu_j^{(0)}$. Conditioned on obtaining a typical value of $s$, the probability of a successful vote on the $j$th bit is \begin{align} \gamma_j\geq & \mathrm{Pr}\left[M_j>\frac{k'}{2}\right]
\geq \mathrm{Pr}\left[|M_j-\mu_j|<\delta'\mu_j\right]\label{eq:line} \\
\geq & \mathrm{Pr}\left[|M_j^{(a)}-\mu_j^{(a)}|<\delta'\mu_j^{(a)}\right]\times \\
& \mathrm{Pr}\left[|M_j^{(0)}-\mu_j^{(0)}|<\delta'\mu_j^{(0)}\right] \\ > & 1 - 2B_{k'}(\tilde{\eta},\delta').\label{eq:line2} \end{align} We have defined $\tilde{\eta}=\eta(1-(1+\delta')(1-\eta))$ and chosen $\delta'<\eta/(1-\eta)$. The second inequality of $(\ref{eq:line})$ follows by further choosing $1-\delta'>\frac{1}{2}((1-2\eta)(1-\eta)+\eta)^{-1}$. To find $(\ref{eq:line2})$, we used $\tilde{\eta}\leq(1-\delta')(1-\eta)^2$. This gives us an upper bound \begin{align} \mathrm{Pr}\left[\hat{a}_j \neq a_j\right] = 1-\gamma_j < 2 B_{k'}(\tilde{\eta},\delta'), \end{align} on the probability of the $j$th bit being computed incorrectly, which we can use together with a union bound to find \begin{align} \mathrm{Pr}\left[\hat{a} \neq a\right] \leq \sum_j \mathrm{Pr}\left[\hat{a}_j \neq a_j\right]
< 2n B_{k'}(\tilde{\eta},\delta'). \end{align} Choosing \begin{align} k' > \frac{3}{(\delta')^2\tilde{\eta}}\log\left(\frac{4n}{\delta}\right) \end{align} ensures that $\mathrm{Pr}\left[\hat{a} \neq a\right] < \delta$. For sufficiently large $\eta$, one can verify easily that $(\delta')^2\tilde{\eta}$ is a polynomial in $(1/2-\eta)$, and therefore $k'=O(\mathrm{poly}(1/(1/2-\eta)))$.
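The loose Chernoff bound of Eq.~(\ref{eq:Chernoff}) underlying this analysis can be sanity-checked by simulation (our illustration, with arbitrarily chosen parameters):

```python
import math
import random

def chernoff_bound(k, eta, delta):
    # the loose two-sided bound B_k(eta, delta) = 2 exp(-delta^2 eta k / 3)
    return 2 * math.exp(-delta ** 2 * eta * k / 3)

rng = random.Random(3)
k, eta, delta = 500, 0.3, 0.4
trials = 4000
# empirical frequency of |X - eta*k| >= delta*eta*k over Bernoulli sums
deviations = sum(
    abs(sum(rng.random() < eta for _ in range(k)) - eta * k)
    >= delta * eta * k
    for _ in range(trials))
empirical = deviations / trials
bound = chernoff_bound(k, eta, delta)
```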
\section{Conclusion}\label{sec:conc}
We have defined the problem of quantum learning from a noisy quantum example oracle and shown that the class of parity functions can be learned in logarithmic time from corrupted quantum queries. In contrast, it appears to be intractable to learn this class in polynomial time from classical queries to the corresponding classical noisy oracle \footnote{In fact, even a quantum computer given access to this classical noisy example oracle is not known to be able to efficiently learn this class.}. If the oracle is ideal, the problem is tractable for both quantum and classical learners, so the noise plays an essential role in the exhibited behavior. For this problem at least, decoherence is an ally of quantum computation.
The example oracle for parity can be implemented in practice with $O(n)$ one- and two-qubit gates. The quantum learner then needs only single-qubit gates and measurements, or even just measurements in a nonstandard basis. This suggests that an experimental demonstration may be quite practicable. The independent depolarizing noise model we use is an idealization of realistic decoherence. Although a more detailed study of actual experimental noise would be needed, it should be possible to demonstrate quantum supremacy for learning using several hundred \emph{noisy} qubits, i.e. without the use of quantum error-correction. In the meantime, since a classical learner requires at least $n$ queries in the noiseless case while the quantum learner needs only $O(\log n)$ queries with noise, a quantum advantage for query-complexity, while small, could be shown experimentally in existing systems.
\end{document}
\begin{document}
\title{Approximating Sparsest Cut in Low-Treewidth Graphs via Combinatorial Diameter}
\author{
Parinya Chalermsook \thanks{Aalto University, Finland. {\bf email:} \texttt{parinya.chalermsook@aalto.fi}} \and
Matthias Kaul \thanks{{Technische Universität Hamburg}, {Germany}. \textbf{email:} \texttt{matthias.kaul@tuhh.de}} \and
Matthias Mnich \thanks{{Technische Universität Hamburg}, {Germany}. \textbf{email:} \texttt{matthias.mnich@tuhh.de}} \and
Joachim Spoerhase \thanks{{Aalto University}, {Finland}. \textbf{email:} \texttt{joachim.spoerhase@aalto.fi}} \and
Sumedha Uniyal \thanks{{Aalto University}, {Finland}.} \and
Daniel Vaz \thanks{{Operations Research, Technische Universität M{\"u}nchen}, {Germany}. \textbf{email:} \texttt{daniel.vaz@tum.de}} }
\maketitle
\begin{abstract}
The fundamental sparsest cut problem takes as input a graph $G$ together with edge costs and demands, and seeks a cut that minimizes the ratio between the costs and the demands across the cut.
For $n$-node graphs~$G$ of treewidth~$k$, \chlamtac, Krauthgamer, and Raghavendra (APPROX 2010) presented an algorithm that yields a factor-$2^{2^k}$ approximation in time $2^{O(k)} \cdot \operatorname{poly}(n)$.
Later, Gupta, Talwar and Witmer (STOC 2013) showed how to obtain a $2$-approximation algorithm with a blown-up run time of $n^{O(k)}$.
An intriguing open question is whether one can simultaneously achieve the best out of the aforementioned results, that is, a factor-$2$ approximation in time $2^{O(k)} \cdot \operatorname{poly}(n)$.
In this paper, we make significant progress towards this goal, via the following results:
\begin{itemize}
\item[(i)] A factor-$O(k^2)$ approximation that runs in time $2^{O(k)} \cdot \operatorname{poly}(n)$, directly improving the work of Chlamt{\'a}{\v{c}} et al.\ while keeping the run time single-exponential in $k$.
\item[(ii)] For any $\varepsilon>0$, a factor-$O(1/\varepsilon^2)$ approximation whose run time is $2^{O(k^{1+\varepsilon}/\varepsilon)} \cdot \operatorname{poly}(n)$, implying a constant-factor approximation whose run time is nearly single-exponential in $k$ and a factor-$O(\log^2 k)$ approximation in time $k^{O(k)} \cdot \operatorname{poly}(n)$.
\end{itemize}
Key to these results is a new measure of a tree decomposition that we call \emph{combinatorial diameter}, which may be of independent interest. \end{abstract}
\section{Introduction} \label{sec:intro} In the sparsest cut problem, we are given a graph together with costs and demands on the edges, and our goal is to find a cut that minimizes the ratio between the costs and demands across the cut. Sparsest cut is among the most fundamental optimization problems and has attracted interest from both computer scientists and mathematicians. Since the problem is $\mathsf{NP}$-hard~\cite{MatulaS90}, the focus has been on approximation algorithms. Over the past four decades, several breakthrough results have culminated in a factor-$\tilde O(\sqrt{\log n})$ approximation in polynomial time~\cite{arora2008euclidean,arora2009expander,leighton1999multicommodity}. On the lower bound side, the problem is $\mathsf{APX}$-hard~\cite{chuzhoy2009polynomial} and, assuming the Unique Games Conjecture, does not admit any constant-factor approximation in polynomial time~\cite{chawla2006hardness}.
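For concreteness, the objective can be sketched by exhaustive enumeration (our illustration; `sparsest_cut_brute_force` is a hypothetical helper, exponential in $n$ and useful only for tiny instances):

```python
from itertools import combinations

def sparsest_cut_brute_force(n, cost, demand):
    """Minimize cost(S, V \\ S) / demand(S, V \\ S) over all nonempty
    proper subsets S of {0, ..., n-1}.

    cost and demand map frozenset pairs {u, v} to nonnegative weights."""
    best, best_cut = float("inf"), None
    for r in range(1, n):
        for subset in combinations(range(n), r):
            S = set(subset)
            # sum weights of pairs with exactly one endpoint in S
            c = sum(w for e, w in cost.items() if len(e & S) == 1)
            d = sum(w for e, w in demand.items() if len(e & S) == 1)
            if d > 0 and c / d < best:
                best, best_cut = c / d, S
    return best, best_cut

# 4-cycle with unit edge costs and one unit demand across a diagonal:
# every cut separating 0 from 2 crosses two cycle edges, so the ratio is 2
cost = {frozenset(e): 1 for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
demand = {frozenset({0, 2}): 1}
ratio, cut = sparsest_cut_brute_force(4, cost, demand)
```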
The extensive interest in sparsest cuts stems from both applications and mathematical reasons. From the point of view of applications, the question of partitioning the universe into two parts while minimizing the ``loss'' across the interface is crucial in any divide-and-conquer approach e.g., in image segmentation. From a mathematical/geometric viewpoint, the integrality gap of convex relaxations for sparsest cuts is equivalent to the embeddability of any finite metric space (for LP relaxation) and of any negative-type metric (for SDP relaxation)\footnote{A metric $(X,d)$ is said to be \textit{negative type}, if $(X,\sqrt{d})$ embeds isometrically into a Hilbert space.} into $\ell_1$. Therefore, it is not a surprise that this problem has attracted interest from both computer science and mathematics (geometry, combinatorics, and functional analysis) communities.
The study of sparsest cuts in the low-treewidth regime was initiated in 2010 by \chlamtac, Krauthgamer, and Raghavendra~\cite{ckr10}, who devised a factor-$2^{2^k}$ approximation algorithm (CKR) that runs in time $2^{O(k)} \cdot \operatorname{poly}(n)$, with $k$ being the treewidth of the input graph. Later, Gupta, Talwar and Witmer~\cite{gtw13} showed how to obtain a factor-$2$ approximation (GTW) with a blown-up run time of $n^{O(k)}$; they further showed that there is no $(2-\varepsilon)$-approximation for any $\varepsilon > 0$ on constant-treewidth graphs, assuming the Unique Games Conjecture. It remains an intriguing open question whether one can simultaneously achieve the best run time and approximation factor. In particular, in this paper we address the following question: \begin{quote}
Does \mbox{\sf Sparsest-Cut}\xspace admit a factor-$2$ approximation that runs in time $2^{O(k)} \cdot \operatorname{poly} (n)$? \end{quote}
\paragraph{Broader perspectives.} Given the significance of sparsest cuts, a great deal of effort has been invested in understanding when sparsest cut instances are ``easy''. In trees, optimal sparsest cuts can be found in polynomial time~(see e.g.~\cite{GuptaL19}). For many other well-known graph classes, finding optimal sparsest cuts is $\mathsf{NP}$-hard, so researchers have attempted to find constant-factor approximations in polynomial time. They have succeeded, over the past two decades, for several classes of graphs, such as outerplanar, $\ell$-outerplanar, bounded-pathwidth and bounded-treewidth graphs~\cite{gtw13,gupta2004cuts,chekuri2006embedding,ckr10,lee2013pathwidth}, as well as planar graphs~\cite{cohen2021quasipolynomial}.
As mentioned earlier, sparsest cuts are not only interesting from the perspective of algorithm design, but also from the perspectives of geometry, probability theory and convex optimization. Indeed, the famous conjecture of Gupta, Newman, Rabinovich, and Sinclair~\cite{gupta2004cuts} postulates that any minor-free graph metric embeds into $\ell_1$ with a constant distortion, which would imply that all such graphs admit a constant approximation for the sparsest cut problem. The conjecture has been verified in various graph classes~\cite{lee2013pathwidth,chekuri2006embedding}, but remains open even for bounded-treewidth graph families.
To us, perhaps the most interesting aspect of the treewidth parameter~\cite{ckr10,gtw13} is its connection to the power of hierarchies of increasingly tight convex relaxations (see, for instance, the work by Laurent~\cite{laurent03}). In this setting, a straightforward (problem-independent!) LP rounding algorithm performs surprisingly well for many ``combinatorial optimization'' problems. It has been shown to achieve optimal solutions for various fundamental problems in bounded treewidth graphs~\cite{magen2009robust,bienstock2004tree,wainwright2004treewidth} and match the (tight) approximation factors achievable on trees for problems such as {group Steiner tree}~\cite{chalermsook2017beyond,chalermsook2018survivable,garg2000polylogarithmic,halperin2003polylogarithmic}. In this way, for these aforementioned problems, such a problem-oblivious LP rounding algorithm provides a natural framework to generalize an optimal algorithm on trees to nearly-optimal ones on low (perhaps super-constant) treewidth graphs. Our work can be seen as trying to develop such understanding in the context of the sparsest cut problem.
\subsection{Our Results} We present several results that may be seen as an intermediate step towards the optimal result. Our main technical results are summarized in the following theorem.
\begin{theorem} \label{thm:main-intro}
For the following functions $t$ and $\alpha$, there are algorithms that run in time $t(k)\cdot \operatorname{poly}(n)$ and achieve approximation factors $\alpha(k)$ for the sparsest cut problem:
\begin{itemize}
\item $t(k) = 2^{O(k)}$ and $\alpha(k) = O(k^2)$.
\item $t(k) = 2^{O(k^2)}$ and $\alpha(k) = O(1)$.
\item For any $\varepsilon > 0$, $t(k) = \exp\paren{O(\frac{k^{1+\varepsilon}}{\varepsilon})}$ and $\alpha(k) = O(1/\varepsilon^2)$.
\end{itemize} \end{theorem}
Our first result directly improves the approximation factor of $2^{2^k}$ by \chlamtac et al.\@\xspace, while keeping the run time single-exponential in $k$. Our second result shows that, with only a slightly larger exponent in the run time, one can achieve a constant approximation factor. Compared to Gupta et al., our result incurs a constant (in particular, $k$-independent) blowup in the approximation factor, but has a much better run time ($2^{O(k^2)}$ instead of $n^{O(k)}$); compared to \chlamtac et al.\@\xspace, our result has a much better approximation factor ($O(1)$ instead of $2^{2^k}$), while maintaining nearly the same asymptotic run time.
Finally, our third result gives us an ``approximation scheme'' whose run time exponent converges to a single exponential, while keeping the approximation factor constant. We remark that, by plugging in $\varepsilon = \Omega(1/\log k)$, we obtain a factor-$O(\log^2 k)$ approximation in time $k^{O(k)}\cdot\operatorname{poly}(n)$.
\subsection{Overview of Techniques} Now, we sketch the main ideas used in deriving our results. We assume certain familiarity with the notions of treewidth and tree decomposition.
Let $G$ be a graph with treewidth $k$ and ${\mathcal T}$ be a tree decomposition of $G$ with a collection of bags $B_t \subseteq V(G)$ for all $t \in V({\mathcal T})$. Define the width of ${\mathcal T}$ as $w({\mathcal T}) = \max_{t \in V({\mathcal T})} |B_t| -1$.
The run time of algorithms that deal with the treewidth parameter generally depends on $w({\mathcal T})$, so when designing an algorithm in low-treewidth graphs, one usually starts with a near-optimal tree decomposition in the sense that $w({\mathcal T}) = O(k)$. To give a concrete example, the CKR algorithm~\cite{ckr10} for sparsest cut runs in time $2^{O(w({\mathcal T}))} \cdot \operatorname{poly}(n)$ and gives approximation factor $2^{2^{w({\mathcal T})}}$. Observe that, with slightly higher width $w({\mathcal T}) = O(\log n + \beta(k))$, the CKR algorithm would run in time $2^{O(\beta(k))} \cdot \operatorname{poly}(n)$.
Our results are obtained via the concept of {\bf combinatorial diameter} of a tree decomposition. Informally, the combinatorial length between $u$ and $v$ in ${\mathcal T}$ measures the number of ``non-redundant bags'' that lie on the unique path in ${\mathcal T}$ connecting the bags of $u$ and $v$. We say that the combinatorial diameter $\Delta({\mathcal T})$ of ${\mathcal T}$ is at most $d$ if the combinatorial length of every pair of vertices is at most $d$. Please refer to \Cref{sec:combinatorial diameter} for formal definitions.
Our first key technical observation shows that the approximation factor of the CKR algorithm can be upper bounded in terms of the combinatorial diameter $\min \{ O(\Delta({\mathcal T})^2), 2^{2^{w({\mathcal T})}}\}$. Moreover, in the special case of $\Delta({\mathcal T}) = 1$, the CKR algorithm gives a $2$-approximation, which can be seen by using the arguments of Gupta et al.~\cite{gtw13}. Therefore, to obtain a fast algorithm with a good approximation factor, it suffices to prove the existence of a tree decomposition with simultaneously low $w({\mathcal T})$ and low $\Delta({\mathcal T})$. We remark that standard tree decomposition algorithms~\cite{bodlaender2016c} give us $w({\mathcal T}) = O(k)$ and $\Delta({\mathcal T}) = O(\log n)$, so this observation alone does not immediately lead to improved algorithmic results. However, it allows us to view the results from CKR~\cite{ckr10} and GTW~\cite{gtw13} in the same context: CKR applies the algorithm to the tree decomposition ${\mathcal T}_{CKR}$ with $\Delta({\mathcal T}_{CKR}) = O(\log n)$ and $w({\mathcal T}_{CKR}) = O(k)$, while GTW applies the same algorithm with $\Delta({\mathcal T}_{GTW}) = 1$ and $w({\mathcal T}_{GTW}) = O(k \log n)$. In other words, the same algorithm is applied to two different ways of decomposing the input graph~$G$ into a tree.
In this paper, we present several new tree decomposition algorithms that optimize the tradeoff between $w({\mathcal T})$ and $\Delta({\mathcal T})$. Our first algorithm gives a tree decomposition ${\mathcal T}_1$ with $w({\mathcal T}_1) = {O(\log n+k)}$ and $\Delta({\mathcal T}_1) = O(k)$, which leads to a factor-$O(k^2)$ approximation in time $2^{O(k)} \cdot \operatorname{poly}(n)$; this directly improves the approximation factor of CKR while maintaining the same asymptotic run time. Our second algorithm gives the tree ${\mathcal T}_2$ with $w({\mathcal T}_2) = O(\log n + k^2)$ and $\Delta({\mathcal T}_2) = 4$. This leads to an algorithm for sparsest cut with run time $2^{O(k^2)} \cdot \operatorname{poly}(n)$ and approximation factor $O(1)$. Our third algorithm is an approximation scheme which is further parameterized by $\varepsilon >0$. In particular, for any $\varepsilon>0$, we construct the tree ${\mathcal T}_{3,\varepsilon}$ such that $w({\mathcal T}_{3,\varepsilon}) = O(\log n+ k^{1+\varepsilon}/\varepsilon)$ and $\Delta({\mathcal T}_{3,\varepsilon}) = O(1/\varepsilon)$.
\subsection{Conclusion \& Open Problems} Our work is an attempt to simultaneously obtain the best run time and approximation factor for sparsest cut in the low-treewidth regime. Our research question combines the flavors of two very active research areas, namely parameterized complexity and approximation algorithms. We introduce a new measure of tree decompositions, called the combinatorial diameter, and show various constructions with different tradeoffs between $w({\mathcal T})$ and $\Delta({\mathcal T})$. We leave the question of obtaining a $2$-approximation in $2^{O(k)} \cdot \operatorname{poly}(n)$ time as the main open problem. One way to design such an algorithm is to show the existence of a tree decomposition with $w({\mathcal T}) = O(\log n +k)$ and $\Delta({\mathcal T}) = 2$. An interesting intermediate step would be to show $w({\mathcal T}) = O(\log n +f(k))$ for some function $f$ and $\Delta({\mathcal T}) = 2$, which would imply a fixed-parameter algorithm that yields a $2$-approximation.
Another interesting question is to focus on polynomial-time algorithms and optimize the approximation factor with respect to treewidth. In particular, is there an $O(\log^{O(1)} k)$ approximation in polynomial time? This question is open even for the \emph{uniform} sparsest cut problem (unit demand for every vertex pair), for which a fixed-parameter algorithm~\cite{BonsmaBPP2012} but no polynomial-time algorithm is known.
A broader direction that would perhaps complement the study along these lines is to improve our understanding of natural LP-rounding algorithms for lift-and-project convex programs in general. For instance, can we prove a similar tradeoff result for other combinatorial optimization problems in this setting? One candidate is the {\em group Steiner tree} problem, for which a factor-$O(\log^2 n)$ approximation in time $n^{O(k)}$ is known (and the algorithm there is ``the same'' algorithm as used for finding sparsest cuts). Can we get a factor-$O(\log^2 n)$ approximation in time $2^{O(k)} \cdot \operatorname{poly}(n)$?
\paragraph{Independent Work:} Independent of our work, Cohen-Addad, M\"{o}mke, and Verdugo~\cite{tobias} obtained a $2$-approximation algorithm for sparsest cut in treewidth-$k$ graphs with running time $2^{2^{O(k)}} \cdot \text{poly}(n)$. Observe that their result is incomparable with ours: they obtain a better approximation factor, whereas their running time is considerably larger. Similar to our result, they build on the techniques of \cite{ckr10,gtw13}.
\section{Preliminaries} \paragraph{Problem Definition} In the \mbox{\sf Sparsest-Cut}\xspace problem (with general demands), the input is a graph $G=(V, E_G)$ with positive edge capacities $\left \{\cp_e \right \}_{e \in E_G}$ and a demand graph $D =(V, E_D)$ (on the same set of vertices) with positive demand values $\left \{\dm_e \right \}_{e \in E_D}$. The aim is to determine \[ \Phi_{G, D} := \min_{S \subseteq V} \Phi_{G,D}(S), \quad\quad \Phi_{G,D}(S) := \frac{\sum_{e \in E_G(S, V-S)}\cp_e}{\sum_{e \in E_D(S, V-S)} \dm_e}.\] The value $\Phi_{G,D}(S)$ is called the \emph{sparsity} of the cut $S$.
\paragraph{Tree decomposition} Let $G=(V, E)$ be a graph. A tree decomposition $({\mathcal T}, \set{B_t}_{t \in V({\mathcal T})})$ of $G$ is a tree ${\mathcal T}$ together with a collection of \emph{bags} $\{B_t\}_{t \in V({\mathcal T})}$, where the bags $B_t \subseteq V(G)$ satisfy the following properties: \begin{itemize}
\item $V(G) = \bigcup_t B_t$.
\item For any edge $uv \in E(G)$, there is a bag $B_t$ containing both $u$ and $v$.
\item For each vertex $v \in V(G)$, the collection of bags that contain $v$ induces a connected subgraph of ${\mathcal T}$. \end{itemize} The treewidth of graph $G$ is defined as the minimum integer $k$ such that there exists a tree decomposition where each bag contains at most $k+1$ vertices.
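The three conditions above can be checked mechanically. The following Python sketch is purely illustrative (the encoding of $G$ and ${\mathcal T}$ as edge lists is our own choice, not notation from the paper); it verifies whether a candidate collection of bags forms a tree decomposition.

```python
from collections import deque

def is_tree_decomposition(n_vertices, graph_edges, tree_edges, bags):
    """Check the three tree-decomposition properties.

    n_vertices:  vertices of G are 0 .. n_vertices-1
    graph_edges: list of pairs (u, v), the edges of G
    tree_edges:  list of pairs (i, j), the edges of the tree T,
                 whose nodes are 0 .. len(bags)-1
    bags:        list of sets B_t of vertices of G
    """
    # Property 1: every vertex of G is covered by some bag.
    if set().union(*bags) != set(range(n_vertices)):
        return False
    # Property 2: every edge of G lies inside some bag.
    for u, v in graph_edges:
        if not any(u in B and v in B for B in bags):
            return False
    # Property 3: for each vertex, the bags containing it
    # induce a connected subgraph of T (checked by BFS).
    adj = {i: [] for i in range(len(bags))}
    for i, j in tree_edges:
        adj[i].append(j)
        adj[j].append(i)
    for v in range(n_vertices):
        nodes = {i for i, B in enumerate(bags) if v in B}
        start = next(iter(nodes))
        seen, queue = {start}, deque([start])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y in nodes and y not in seen:
                    seen.add(y)
                    queue.append(y)
        if seen != nodes:
            return False
    return True
```

For the path on three vertices with bags $\{0,1\}$ and $\{1,2\}$, the check succeeds; the decomposition has width $1$, matching the treewidth of a path.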
We generally use $r$ to denote the root of ${\mathcal T}$, and $p\colon V({\mathcal T}) \to V({\mathcal T})$ for the parent of a node with respect to root $r$. We sometimes refer to $B_{p(i)}$ as the \emph{parent bag} of $B_i$. We denote by ${\mathcal T}_{i \leftrightarrow j}$ the set of nodes on the unique path in tree ${\mathcal T}$ between nodes $i, j \in V({\mathcal T})$ (possibly $i=j$). For a set $X \subseteq V({\mathcal T})$ of bags, we use the shorthand $B(X) = \bigcup_{i \in X} B_i$ (the union of bags for nodes in $X$).
We will treat cuts in a graph as assignments of $\{0,1\}$ to each vertex, and fix some corresponding notation. \begin{definition}
Let $X$ be some finite set.
An \emph{$X$-assignment} is a map $f\colon X \to \{0,1\}$.
We denote by~$\mathcal{F}[X]$ the set of all $X$-assignments.
For some distribution $\mu$ over $\mathcal{F}[X]$ and set $Y \subseteq X$ we define~$\mu|_Y$ to be the distribution given by
\[
\Pr_{f \sim \mu|_Y}[f = f'] = \Pr_{f\sim \mu}[f|_Y = f'] \quad \forall f' \in \mathcal{F}[Y] \enspace .
\] \end{definition}
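The restriction $\mu|_Y$ amounts to summing the probabilities of all assignments that agree on $Y$. The following Python sketch makes this concrete; representing an assignment as a frozen set of (element, value) pairs is an illustrative choice of ours, not notation from the text.

```python
from collections import defaultdict

def restrict(mu, Y):
    """Compute mu|_Y: Pr[f'] = Pr_{f ~ mu}[f|_Y = f'].

    mu: dict mapping X-assignments to probabilities, where an
        assignment f: X -> {0,1} is a frozenset of (element, bit) pairs.
    Y:  a subset of X.
    """
    nu = defaultdict(float)
    for f, p in mu.items():
        f_on_Y = frozenset((x, b) for (x, b) in f if x in Y)
        nu[f_on_Y] += p
    return dict(nu)

# X = {a, b}; mu is uniform over the two assignments with f(a) = f(b).
mu = {
    frozenset({("a", 0), ("b", 0)}): 0.5,
    frozenset({("a", 1), ("b", 1)}): 0.5,
}
# mu|_{a} is then the uniform distribution over the two values of a.
```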
\section{Algorithm and Combinatorial Diameter} Our approach is based on a new relation between the algorithm of \chlamtac et al.\@\xspace~\cite{ckr10} and our novel notion of ``combinatorial diameter''. In \Cref{sec:combinatorial diameter}, we present the definition of the combinatorial diameter. The subsequent sections describe the algorithm of \chlamtac et al.\@\xspace and prove its relation to the combinatorial diameter.
\subsection{Our New Concept: Combinatorial Diameter} \label{sec:combinatorial diameter} \begin{definition}[Redundant bags]
Fix $s,t \in V({\mathcal T})$.
Let $v \in V({\mathcal T})$ be a node with exactly two neighbors $u$ and $w$ on the path ${\mathcal T}_{s \leftrightarrow t}$. When $B_v \cap B_w \subseteq B_u$, we say that $v$ is \emph{$(s,t)$-redundant}. \end{definition}
Intuitively, the bag of each node $v$ discarded in this fashion can be treated as a subset of $B_u$, since the vertices of $B_v \setminus B_u$ occur only in $B_v$ within ${\mathcal T}_{s \leftrightarrow t}$. As a consequence, we can show that they do not affect the rounding behaviour of the CKR algorithm with respect to $s$ and $t$ (hence ``redundant'').
\begin{definition}[Simplification] \label{def:simplification}
Let ${\mathcal T}$ be a tree decomposition, and $s,t \in V({\mathcal T})$. We say ${\mathcal T}_{s \leftrightarrow t}$ has \emph{combinatorial length} at most $\ell$ if it can be reduced to a path of length at most $\ell$ by repeatedly applying the following rule:
\begin{itemize}
\item[] Delete an $(s,t)$-redundant node $v$ on path ${\mathcal T}_{s \leftrightarrow t}$, and add the edge~$\set{u,w}$.
We call this operation \emph{bypassing} $v$.
\end{itemize}
We call any path $P$ generated from ${\mathcal T}_{s\leftrightarrow t}$ in this fashion a \emph{simplification} of ${\mathcal T}_{s \leftrightarrow t}$. \end{definition}
\begin{definition}[Combinatorial diameter]
The \emph{combinatorial diameter} of ${\mathcal T}$ is defined to be the minimum $\ensuremath{\delta}\xspace$ such that, for all $u,v$, the path ${\mathcal T}_{u \leftrightarrow v}$ has combinatorial length at most $\ensuremath{\delta}\xspace$. \end{definition}
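To make the bypassing rule concrete, the following Python sketch greedily deletes redundant internal nodes of a path of bags. Since the combinatorial length is the minimum over all bypassing orders, the greedy value is in general only an upper bound on it. In the small example below, the full path reduces all the way to its two endpoints, while the subpath omitting the last bag admits no bypass at all.

```python
def greedy_combinatorial_length(bags):
    """Upper bound the combinatorial length of a path of bags.

    bags: list of sets; bags[0] and bags[-1] are the endpoints (the
    nodes containing s and t) and are never deleted. Greedily bypasses
    any internal node v such that B_v intersected with one neighbor's
    bag is contained in the other neighbor's bag. The combinatorial
    length minimizes over all bypassing orders, so the value returned
    here only upper bounds it.
    """
    path = list(bags)
    changed = True
    while changed:
        changed = False
        for i in range(1, len(path) - 1):
            u, v, w = path[i - 1], path[i], path[i + 1]
            # v is (s,t)-redundant if B_v ∩ B_w ⊆ B_u (either labeling)
            if (v & w) <= u or (v & u) <= w:
                del path[i]  # bypass v; u and w become adjacent
                changed = True
                break
    return len(path) - 1

bags = [{"a", "b"}, {"a", "b", "c"}, {"a", "c", "d"}, {"a", "d", "e"},
        {"a", "e", "f"}, {"a", "f", "g"}, {"a"}]
```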
\subsection{Algorithm Description and Overview}
For completeness, we restate the essential aspects of the algorithm by \chlamtac et al.\@\xspace~\cite{ckr10}. The algorithm is initially provided a \mbox{\sf Sparsest-Cut}\xspace instance $(G,D,\cp,\dm)$ alongside a tree decomposition ${\mathcal T}$ of $G$ of width $w({\mathcal T}) = \max_{t} |B_t| - 1$. The goal is then to compute a cut in $G$ that has low sparsity.
The algorithm starts by computing, for every vertex set $L = B_i \cup \set{s,t}$, consisting of a bag $B_i$ and a pair of vertices $s,t \in V(G)$, a distribution $\mu_L$ over $L$-assignments.
This collection of distributions for all sets $L$ satisfies the requirement that any two distributions agree on their joint domains, i.e.~$\mu_L|_{L \cap L'} = \mu_{L'}|_{L\cap L'}$ for each pair of sets $L, L'$ with the structure above.
If we denote $\operatorname{\sf lpcut}\xspace(s,t) = \Pr_{f \sim \mu_{B \cup \{s,t\}}}[f(s) \neq f(t)]$ for any $s,t \in V(G)$, and an arbitrary bag $B$ of~${\mathcal T}$, we can compute the collection of distributions that minimizes \[
\dfrac{\sum_{\{s,t\} \in E_G}\cp_{\{s,t\}} \cdot \operatorname{\sf lpcut}\xspace(s,t) }{\sum_{\{s,t\} \in E_D}\dm_{\{s,t\}} \cdot \operatorname{\sf lpcut}\xspace(s,t) } \enspace . \]
Notice that $\operatorname{\sf lpcut}\xspace$ is well-defined by the consistency requirement, since the choice of $B$ does not impact the distribution over $\{s,t\}$-assignments. For ease of notation, we will refer to the implied distribution over some vertex set $X \subseteq B \cup \{s,t\}$ by $\mu_X$, where formally $\mu_X = \mu_{B \cup \{s,t\}}|_{X}$.
Such a collection of distributions can be computed in time $2^{O(w({\mathcal T}))}\operatorname{poly}(n)$, using Sherali-Adams LP hierarchies, which motivates the function name $\operatorname{\sf lpcut}\xspace$. It is then rounded to some $V(G)$-assignment $f$ using \autoref{alg:Chlamtac}. We now recall a number of useful results about the algorithm and the assignment it computes. Details about the algorithm and the attendant lemmas can be found in the work of \chlamtac et al.\@\xspace~\cite{ckr10}.
Denote by ${\mathcal A}$ the distribution over $V(G)$-assignments produced by the algorithm. \begin{algorithm}[t]
\SetAlgoLined
\KwData{$G, ({\mathcal T}, \set{B_i}_{i \in V({\mathcal T})}), \{\mu_L\}$}
Start at any bag $B_0$, sample $f|_{B_0}$ from $\mu_{B_0}$\;
We process the bags in non-decreasing order of distance from $B_0$ \;
\ForEach{Bag $B$ with a processed parent bag $B'$}{
Let $B^+ = B\cap B'$ be the subset of $B$ on which $f$ is fixed.
Let $B^- := B\setminus B^+$.
Sample $f|_{B^-}$ according to
\[
\Pr[f|_{B^-} = f'] = \Pr_{f^* \sim \mu_B}[f^*|_{B^-} = f'\; \mid \; f^*|_{B^+} =f|_{B^+} ] \quad\forall f' \in \mathcal{F}[B^-]
\]
}
\KwResult{$f$}
\caption{Algorithm \textsc{SC-Round} }
\label{alg:Chlamtac} \end{algorithm}
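In Python-like form, the rounding loop of \textsc{SC-Round} is very short. In the sketch below, \texttt{sample\_conditional} is a hypothetical stand-in for sampling from the marginal $\mu_B$ conditioned on already-fixed values; producing these marginals in practice requires the Sherali-Adams solution described above.

```python
def sc_round(order, bags, sample_conditional):
    """Round bags one by one along a connected traversal order.

    order: bag indices such that each bag after the first intersects an
        already-processed bag in B^+ (see the algorithm statement).
    bags:  list of vertex sets.
    sample_conditional(B, fixed): hypothetical sampler returning values
        for the vertices of B not in `fixed`, drawn from mu_B
        conditioned on agreeing with `fixed`.
    """
    f = {}
    for i in order:
        B = bags[i]
        fixed = {v: f[v] for v in B if v in f}   # B^+ = already rounded
        f.update(sample_conditional(B, fixed))   # sample f on B^-
    return f
```

For instance, plugging in the deterministic stand-in that assigns $0$ to every unfixed vertex yields the all-zero assignment.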
\begin{lemma}[\cite{ckr10}, Lemma 3.3]
\label{lem:bagRealisation}
For every bag $B$ the assignment $f|_B$ computed by \autoref{alg:Chlamtac} is distributed according to $\mu_B$, meaning $ \Pr_{ f \sim {\mathcal A}}[f|_B = f'] = \Pr_{f^* \sim \mu_B}[f^* = f']$ for all $f' \in \mathcal{F}[B]$. \end{lemma}
A direct consequence of this lemma is the fact that any edge $\{s,t\}$ of $G$ is cut by the algorithm with probability $\operatorname{\sf lpcut}\xspace(s,t)$. In particular, the expected capacity of the rounded cut is therefore \[ \sum_{\{s,t\} \in E_G}\cp_{\{s,t\}} \cdot \operatorname{\sf lpcut}\xspace(s,t), \] which is the value ``predicted'' by the distribution $\mu_L$. The same property does not hold for the (demand) edges of $D$ since they may not be contained in any bag of ${\mathcal T}$.
Denote by $\operatorname{\sf algcut}\xspace(s,t)$ the probability that the algorithm separates $s$ and $t$, that is, $\operatorname{\sf algcut}\xspace(s,t) = \Pr_{f \sim {\mathcal A}}[f(s) \neq f(t)]$. We would like to lower bound $\operatorname{\sf algcut}\xspace(s,t) \geq c \; \operatorname{\sf lpcut}\xspace(s,t)$ for all demand edges $\{s,t\}$ and some value $c > 0$. This would imply that the expected demand of the rounded cut is at least $c\sum_{\{s,t\} \in E_D}\dm_{\{s,t\}} \operatorname{\sf lpcut}\xspace(s,t)$, and having a good expected demand and capacity is sufficient for computing a good solution by the following observation. \begin{observation}[\cite{ckr10}, Remark 4.3] \label{obs:derandomisation}
The cut sparsity $\alpha$ predicted by distributions $\set{\mu_L}_L$ is
\[
\alpha := \dfrac{\sum_{\{s,t\} \in E_G}\cp_{\{s,t\}} \cdot \operatorname{\sf lpcut}\xspace(s,t) }{\sum_{\{s,t\} \in E_D}\dm_{\{s,t\}} \cdot \operatorname{\sf lpcut}\xspace(s,t) } \enspace .
\]
Then if $\operatorname{\sf algcut}\xspace(s,t) \geq c \cdot \operatorname{\sf lpcut}\xspace(s,t)$ for all $\{s,t\} \in E_D$ and $\operatorname{\sf algcut}\xspace(s,t) = \operatorname{\sf lpcut}\xspace(s,t)$ for $\{s,t\} \in E_G$, we have
\[
\mathbb{E}_{f \sim {\mathcal A}}\left[\sum_{\{s,t\} \in E_G}\cp_{\{s,t\}} |f(s) - f(t)| - \frac{\alpha}{c}\sum_{\{s,t\} \in E_D}\dm_{\{s,t\}} |f(s) - f(t)|\right] \leq 0 \enspace.
\]
If the expression inside the expectation is non-positive for a particular $f$, then $f$ has sparsity at most $\alpha/c$, i.e., it is a $(1/c)$-approximate solution; such a solution can be obtained either by repeated rounding or by derandomization
using the method of conditional expectations, without increasing the asymptotic run time. \end{observation}
This observation implies that the bottleneck to obtaining a good approximation factor is the extent to which our rounding algorithm can approximate the marginal of $\mu_L$ on the individual edges of $D$. Our main result relates this marginal to the combinatorial diameter of ${\mathcal T}$. It can now be stated as follows: \begin{theorem} \label{thm:algo:main}
Let $(G,D,\cp,\dm)$ be an instance of\ \,\mbox{\sf Sparsest-Cut}\xspace, and $({\mathcal T}, \set{B_i}_i)$ a tree decomposition of $G$ with width $w({\mathcal T})$ and combinatorial diameter $\Delta({\mathcal T})$.
Then {\sc SC-ROUND} satisfies $\operatorname{\sf algcut}\xspace(s,t) \geq \Omega\paren{\frac{1}{\Delta({\mathcal T})^2}} \cdot \operatorname{\sf lpcut}\xspace(s,t)$ for every $\{s,t\}\in E_D$. Therefore, we have a factor-$O(\Delta({\mathcal T})^2)$ approximation for sparsest cut with run time $2^{O(w({\mathcal T}))}\cdot \operatorname{poly}(n)$. \end{theorem}
The rest of this section is devoted to proving this theorem. \subsection{Step 1: Reduction to Short Path} In this section, we show that when the combinatorial diameter of the tree decomposition is $\ensuremath{\delta}\xspace = \Delta({\mathcal T})$, the analysis can be reduced to the case of a path decomposition of length $\ensuremath{\delta}\xspace$. We employ the following lemma to simplify our analysis of the behavior of the algorithm. \begin{lemma}[\cite{ckr10}, Lemma 3.4]
\label{lem:traverselInv}
The distribution over the assignments $f$ is invariant under any connected traversal of ${\mathcal T}$, i.e. the order in which bags are processed does not matter, as long as they have a previously processed neighbor.
The choice of the first bag $B_0$ also does not impact the distribution. \end{lemma}
Let $\set{s,t} \in E_D$ be a demand edge. If $s$ and $t$ are contained in a common bag, then $\operatorname{\sf algcut}\xspace(s,t) = \operatorname{\sf lpcut}\xspace(s,t)$ by \Cref{lem:bagRealisation} and we are done; therefore, we assume that there is no bag containing both $s$ and $t$. We want to estimate the probability that $s$ and $t$ are separated by the algorithm, that is, the probability that $f(s) \neq f(t)$.
The lemma above allows us to reduce to the case in which the algorithm first rounds a bag $B_1$ containing $s$, then rounds bags $B_2, \dots, B_{\ell-1}$ along the path to a bag $B_{\ell}$ containing $t$, and finally $B_{\ell}$. At this point the algorithm has already assigned $f(s)$ and $f(t)$, so the remaining bags of ${\mathcal T}$ can be rounded in any connected order without impacting the separation probability. Hence, it is sufficient to characterize the behavior of the rounding algorithm along paths in ${\mathcal T}$.
Let $P$ be the shortest path connecting a bag containing $s$ to a bag containing $t$; write $P = v_1 v_2 \ldots v_{\ell}$ with $s \in B_{v_1}$ and $t \in B_{v_{\ell}}$. By \Cref{lem:traverselInv} we can assume that the algorithm first processes $B_{v_1}$, and then all other bags $B_{v_2}, \dots, B_{v_\ell}$, in this order.
Observe that, except for $v_1$ and $v_{\ell}$, no other bag of $P$ contains $s$ or $t$. We repeatedly apply the reduction rule from \Cref{def:simplification} until the resulting path has length at most $\ensuremath{\delta}\xspace$. The following lemma asserts that the distribution of the algorithm is preserved under this reduction rule.
We slightly abuse notation and denote by ${\mathcal A}$ the distribution of our algorithm on path $P$ starting from $v_1$.
\begin{lemma} \label{lem:distpreservereduction}
Let $u,v,w$ be three consecutive internal bags on $P$ with $B_v \cap B_w \subseteq B_u$. Let $P'$ be a simplification of $P$ bypassing $v$ and let ${\mathcal A}'$ be the distribution obtained by running the algorithm on path $P'$, starting on $v_1$.
Then ${\mathcal A}'$ is exactly the same as ${\mathcal A}$ restricted to $B(P')$. \end{lemma} \begin{proof}
We can assume, without loss of generality, that $u,v,w$ appear on $P$ in the order of rounding; for otherwise, we apply \Cref{lem:traverselInv} twice: first, to reverse $P$, and preserve the distribution~${\mathcal A}$; then, to undo the reversing of $P'$ caused by the previous application.
We modify the path decomposition $P$ into a (tree) decomposition $\hat{{\mathcal T}}$ as follows: remove bag $v$ and add two new bags $v',v''$ where bag $v'$ is connected to $u$ and $w$ with $B_{v'} = B_u \cap B_v$ and $v''$ is connected to $v'$ with $B_{v''} = B_v$.
This remains a tree decomposition for the vertices in $B(P)$ since vertices in $B_v \setminus B_u$ only occur in the bag $B_{v''}$ (due to our assumption that $B_v \cap B_w \subseteq B_u$).
It is easy to check that running the algorithm {\sc SC-ROUND} on $\hat{{\mathcal T}}$ produces exactly the same distribution as ${\mathcal A}$.
Since $B(P') = B(P) \setminus (B_{v''} \setminus B_{v'} )$, we have that ${\mathcal A}|_{B(P')}$ is the distribution of {\sc SC-ROUND} on the path $\hat{P} = v_1 \ldots u v' w \ldots v_{\ell}$, obtained by removing $v''$ from $\hat{{\mathcal T}}$.
Now since $B_{v'} \subseteq B_u$, the rounding algorithm in fact does not do anything at bag $v'$, so it can be removed without affecting the distribution.
We obtain path $P'$ as a result, and this implies that ${\mathcal A}|_{B(P')}$ is the same distribution as ${\mathcal A}'$. \end{proof}
This result allows us to conduct the rounding analysis on simplifications of paths. It remains to show that this is beneficial, that is, that the rounding error can be bounded by the length of the path on which we round. As in the work of \chlamtac et al.\@\xspace~\cite{ckr10}, we use Markov flow graphs to analyze that error.
\subsection{Step 2: Markov Flow Graphs} \label{sec:markov} Let $P = v_1, \dots, v_\ell$ be a path with length $\ell$ and $s \in B_{v_1}$, $t\in B_{v_\ell}$. We run \autoref{alg:Chlamtac} from $v_1$ to~$v_\ell$ to compute some assignment $f$. Let ${\mathcal A}$ be the probability distribution of the resulting assignment $f$. Recall that $\operatorname{\sf algcut}\xspace(s,t)$ denotes the probability that the algorithm assigns $f(s) \neq f(t)$, and $\operatorname{\sf lpcut}\xspace(s,t)$ is the probability that $s$ and $t$ are separated according to the distributions $\set{\mu_L}_L$, i.e.~$\Pr_{f \sim \mu_{B \cup \{s,t\}}}[f(s) \neq f(t)]$. In the second step, we bound $\operatorname{\sf algcut}\xspace(s,t)$ in terms of $\operatorname{\sf lpcut}\xspace(s,t)$. This step is encapsulated in the following lemma.
\begin{lemma} \label{lem:algo:markov}
There exists a directed layered graph $H$ containing nodes $s_0,s_1,t_0,t_1 \in V(H)$ and a weight function $w_H$ on the edges, satisfying the following properties:
\begin{enumerate}
\item For $i = 0,1$, we have that $\Pr_{f \sim {\mathcal A}}[f(s) = i\; \&\; f(t) = 1-i]$ is at least an $\Omega(1/\ell^2)$-fraction of the minimum $(s_i, t_{1-i})$-cut of $H$.\label{lem:algo:markov:i}
\item For $i = 0,1$, the value of a maximum $(s_i, t_{1-i})$-flow in $H$ is at least $\Pr_{f \sim \mu}[f(s) =i\; \&\; f(t) = 1-i]$.\label{lem:algo:markov:ii}
\end{enumerate} \end{lemma}
\Cref{thm:algo:main} immediately follows from this lemma. \begin{proof}[Proof of \Cref{thm:algo:main}]
We run the algorithm of \chlamtac et al.\@\xspace to get some $V(G)$-assignment~$f$.
Consider a pair $\set{s,t} \in E_D$.
Using \Cref{lem:traverselInv} and \Cref{lem:distpreservereduction}, we can reduce the analysis to a path $P$ of length at most $\ensuremath{\delta}\xspace$, which is a simplification of a path in ${\mathcal T}$.
Now, by \Cref{lem:algo:markov} and the max-flow min-cut theorem, we get that
\begin{align*}
\operatorname{\sf algcut}\xspace(s,t)
&= \Pr_{f \sim {\mathcal A}}[f(s) = 0\; \&\; f(t) = 1] + \Pr_{f \sim {\mathcal A}}[f(s) = 1\; \&\; f(t) = 0] \\
&\geq \Omega\paren{\frac{1}{\ensuremath{\delta}\xspace^2}} \paren{\operatorname{mincut}(s_0, t_1) + \operatorname{mincut}(s_1, t_0)} \\
&= \Omega\paren{\frac{1}{\ensuremath{\delta}\xspace^2}} \paren{\operatorname{maxflow}(s_0, t_1) + \operatorname{maxflow}(s_1, t_0)} \\
&\geq \Omega\paren{\frac{1}{\ensuremath{\delta}\xspace^2}} \paren{\Pr_{f \sim \mu}[f(s) =0\; \&\; f(t) = 1] + \Pr_{f \sim \mu}[f(s) =1\; \&\; f(t) = 0]} \\
&= \Omega\paren{\frac{1}{\ensuremath{\delta}\xspace^2}} \operatorname{\sf lpcut}\xspace(s,t) \enspace .
\end{align*}
Therefore, $f$ separates each pair $\set{s,t}$ with probability that is a factor of $O(\ensuremath{\delta}\xspace^2)$ away from $\operatorname{\sf lpcut}\xspace(s,t)$.
Applying Observation \ref{obs:derandomisation} with $c = \Omega(1/\ensuremath{\delta}\xspace^2)$, we can obtain (deterministically) an assignment $f^*$ that is an $O(\ensuremath{\delta}\xspace^2)$-approximation for the \mbox{\sf Sparsest-Cut}\xspace instance. \end{proof}
The rest of this section is dedicated to proving \Cref{lem:algo:markov}. The tools needed for this proof are implicit in the work of \chlamtac et al.\@\xspace~\cite{ckr10}. We restate them for the sake of completeness and in order to adjust them to our terminology.
The section is organized as follows: first, we describe the construction of our graph $H$, and then we proceed to analyze the values of maximum flow and minimum cut. We will only analyze the flow and cut for $i= 0$, that is, $(s_0,t_1)$-flow and $(s_0,t_1)$-cut. The other case is analogous.
\paragraph{Construction of Graph $H$:} Without loss of generality, we can assume that the distributions $\set{\mu_L}_L$ are symmetric in the labels $\{0,1\}$, see \Cref{lem:symmetrization}. In particular, this gives $\Pr[f(v) = 1] =\linebreak \Pr[f(v) = 0] = 1/2$ for any vertex $v$.
The rounding can be modeled by a simple Markov process. Denote by $I_0,\dots, I_\ell$ the sets that are conditioned on in \autoref{alg:Chlamtac}, $I_i = B_{v_i} \cap B_{v_{i+1}}$ for $i \in \set{1,\ldots, \ell-1}$; we refer to these sets as \emph{conditioning sets}.
For the initial and final sets of the rounding procedure we take $I_0 = \{s\}$, $I_\ell = \{t\}$. Now we are ready to describe our graph $H$: \begin{itemize}
\item \textbf{Vertices}:
Vertices of $H$ are arranged into layers $L_0,\dots,L_\ell$ with $L_i = \mathcal{F}[I_i]$.
Observe that $|L_i| = 2^{|I_i|}$.
The vertices of $H$ represent the intermediate states the algorithm might
reach.
\item \textbf{Edges}:
For each $i$, there is a directed edge from every vertex in $L_i$ to every vertex in $L_{i+1}$.
The weight of the edge $(f_i, f_{i+1})$, for $f_i \in L_i$, $f_{i+1} \in L_{i+1}$, is equal to the probability of the joint event,
$w_H(f_i, f_{i+1}) = \Pr[f|_{I_i} = f_i \wedge f|_{I_{i+1}} = f_{i+1}]$.
We remark that the weight is 0 whenever $f_i$ and $f_{i+1}$ are
contradictory, and that the probabilities are well defined, as $I_i \cup I_{i+1} \subseteq B_{v_{i+1}}$. \end{itemize}
Observe that the weight of an edge is the probability that both of its endpoints are reached by the algorithm, and hence the probability that the algorithm transitions along that edge.
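The construction of $H$ can be sketched in a few lines of Python. Here \texttt{joint\_prob} is a hypothetical stand-in for the marginals of the distributions $\mu_L$; in the toy example below it describes perfectly correlated fair bits, so all edge weight concentrates on the two monochromatic walks.

```python
from itertools import product

def build_markov_flow_graph(conditioning_sets, joint_prob):
    """Build the layered graph H with one layer per conditioning set.

    conditioning_sets: list [I_0, ..., I_l] of tuples of vertex names,
        with I_0 = (s,) and I_l = (t,).
    joint_prob: hypothetical stand-in for the mu_L marginals; maps a
        partial assignment (dict vertex -> bit) to its probability.
    Returns {(node_i, node_{i+1}): weight}, where a node of H is a pair
    (layer index, sorted tuple of (vertex, bit) items).
    """
    def layer(i):
        I = conditioning_sets[i]
        return [dict(zip(I, bits)) for bits in product((0, 1), repeat=len(I))]

    def node(i, f):
        return (i, tuple(sorted(f.items())))

    w_H = {}
    for i in range(len(conditioning_sets) - 1):
        for f_i in layer(i):
            for f_j in layer(i + 1):
                if any(f_i[v] != f_j[v] for v in f_i if v in f_j):
                    continue  # contradictory assignments get weight 0
                joint = {**f_i, **f_j}
                w_H[(node(i, f_i), node(i + 1, f_j))] = joint_prob(joint)
    return w_H

# Toy example: s, x, t are perfectly correlated fair bits.
correlated = lambda a: 0.5 if len(set(a.values())) == 1 else 0.0
w_H = build_markov_flow_graph([("s",), ("x",), ("t",)], correlated)
```

In this toy instance, the edge weights out of each layer sum to $1$, reflecting that every walk transitions along exactly one edge per step.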
\begin{observation}
Let ${\mathcal I} = \bigcup_{i} I_i$.
The distribution ${\mathcal A}|_{{\mathcal I}}$ can be viewed as the following random walk in~$H$: Pick a random vertex in $L_0$ and start taking a random walk where each edge is taken with probability proportional to its weight.
Formally, once a node $f_i$ is reached, choose the next node $f_{i+1}$ with probability $w_H(f_i, f_{i+1})/\Pr[f|_{I_i} = f_i]$. \end{observation}
At this point, we rename ${\mathcal A} := {\mathcal A}|_{{\mathcal I}}$. Notice that the layer $L_0$ contains two vertices corresponding to the assignment $f(s) = 0$ and $f(s) = 1$, respectively. We denote them by $L_0 = \{s_0, s_1\}$. Similarly, $L_{\ell} = \{t_0, t_1\}$. Notice further that $\Pr_{f \sim {\mathcal A}}[f(s) = 0, f(t) = 1]$ is exactly the probability that the random walk starts at $s_0 \in L_0$ and ends at $t_1 \in L_{\ell}$.
\paragraph{Maximum $(s_0,t_1)$-Flow:} We are now ready to show that the value of the maximum $(s_0,t_1)$-flow is at least $\Pr_{f \sim \mu}[f(s) =0, f(t)=1]$.
We define the flow $g\colon E(H) \to \mathbb{R}_{\geq 0}$ as follows, for $i \in \set{0,\ldots, \ell-1}$, $f_i \in L_i$ and $f_{i+1} \in L_{i+1}$: \[
g(f_i, f_{i+1}) = \Pr_{f\sim \mu_{B_{v_{i+1}} \cup \{s,t\} }}[ f(s) = 0, f(t) = 1, f|_{I_i} = f_i, f|_{I_{i+1}} = f_{i+1} ] \enspace . \] We remark that $g$ is an $s_0$-$t_1$-flow, that is, it satisfies flow conservation at all vertices in $H$ except $s_0,t_1$, and the capacities of graph $H$ are respected, that is, $g(e) \leq w_H(e)$ for all $e \in E(H)$. The value of~$g$ is given by: \begin{align*}
\sum_{(s_0,f^*) \in \delta^+(s_0)}g(s_0, f^*) &= \sum_{f^* \in \mathcal{F}[I_1]}\Pr_{f \sim \mu_{B_{v_1} \cup \{s,t\} } }[f(s) = 0, f(t) = 1, f|_{I_1} = f^*]\\
&=\Pr_{f \sim \mu_{B_{v_1} \cup \{s,t\} } }[f(s) = 0, f(t) = 1] \enspace . \end{align*}
This concludes the proof of Point \ref{lem:algo:markov:ii} of \Cref{lem:algo:markov}.
\paragraph{A Potential Function:} Before we show a cut with the desired capacity, we need to introduce some notation. For $i = 0,\ldots, \ell$, let $X_i$ be a random variable indicating the vertex in $L_i$ visited by the random walk (i.e., picked by the algorithm). We denote by $\textbf{X}= X_0 X_1 \ldots X_{\ell}$ the path taken in the random walk process. We can interchangeably view the distribution ${\mathcal A}$ as either the distribution that samples an assignment $f\colon {\mathcal I} \rightarrow \{0,1\}$ or one that samples a (random walk) path $\textbf{X}$.
We define, for every layer $L_i$ and every vertex $v \in L_i$, \[
A(v):= \Pr_{\textbf{X} \sim {\mathcal A}} [X_0 = s_0 \mid X_i = v] -\frac{1}{2} \enspace . \] Intuitively, this function captures the extent to which $v$ carries information about the initial state of the Markov process. On the one hand, if $A(v)$ equals $0$, then $v$ reveals essentially nothing about $X_0$: visiting $v$ does not imply anything about the starting vertex. On the other hand, if $A(v)$ is far from $0$, then we can glean a lot of information about $X_0$ from $v$ being visited; in particular, if the probability that $s$ and $t$ are cut is low, we must have $A(t_1) \approx - 1/2$.
To track how $A$ changes from layer to layer, we use the potential function $\phi\colon \{0,\dots,\ell\} \to \mathbb{R}_{\geq 0}$, defined as: \[
\phi(i) := \operatorname{Var}_{\textbf{X} \sim {\mathcal A}}[A(X_i)] \enspace. \]
The following lemma by \chlamtac et al.\@\xspace bounds the change in potential in terms of the probability that $X_0 = s_0$ and $X_\ell = t_1$. \begin{lemma}[\cite{ckr10}, Lemma 5.2]
It holds that $\phi(0) - \phi(\ell) \leq 2\Pr[X_0 = s_0 \wedge X_\ell = t_1]$ \enspace . \end{lemma}
\paragraph{Minimum $(s_0,t_1)$-cut:} We are now ready to analyze the value of the minimum $(s_0,t_1)$-cut in $H$. It suffices to give a lower bound on $\phi(0) - \phi(\ell)$. This is possible by the following lemma, which is proved implicitly by \chlamtac et al.\@\xspace~\cite{ckr10}. \begin{lemma}[\cite{ckr10}, Lemma 5.4]
\label{lem:MarkovCut}
Let $C$ be the set of edges $(f_i, f_{i+1})$ in $E(H)$ such that $|A(f_i) - A(f_{i+1})|$ is at least some threshold $\rho > 0$. Then $\sum_{e \in C}w_H(e) \leq (\phi(0) - \phi(\ell)) \cdot 1/\rho^2$. \end{lemma}
We can apply \Cref{lem:MarkovCut} in the following fashion. Suppose $A(t_1) \geq 0$. In that case we have $\Pr[X_0 = s_0 \mid X_\ell = t_1] \geq 1/2$, so $s$ and $t$ are cut with probability at least $\frac{1}{2}\operatorname{\sf lpcut}\xspace(s,t)$. This bound already suffices, so assume $A(t_1) < 0$. Then $A(s_0) - A(t_1) > 1/2$. Since every path from $s_0$ to $t_1$ has exactly $\ell$ edges, any such path must contain an edge $(f_i, f_j)$ with $A(f_i) -A(f_j) > 1/ (2\ell)$. Cutting all such edges therefore separates $s_0$ and $t_1$. Hence, by applying \Cref{lem:MarkovCut}, the minimum $s_0$-$t_1$-cut has size at most \begin{align*}
O(\ell^2) (\phi(0) - \phi(\ell))
&\leq O(\ell^2) \Pr[X_0 = s_0 \wedge X_\ell = t_1]\\
&=O(\ell^2) \Pr[f(s) = 0 \wedge f(t) = 1] \enspace . \end{align*}
This concludes the proof of Point \ref{lem:algo:markov:i} of \Cref{lem:algo:markov}. We see that the cutting probability predicted by the distributions is realised by the rounded solution $f$, up to a factor $\Omega(1/\ell^2)$.
This gives an alternative to the analysis given by \chlamtac et al.\@\xspace, whose constant depends on the size of the layers of $H$ rather than the number of layers. While the layer sizes depend only on $k$, the dependence is exponential. The number of layers, in turn, is a priori only bounded by $O(\log n)$, which by itself would give a worse approximation guarantee. However, we will show how to modify a tree decomposition to ensure that $H$ has few layers.
\section{Combinatorially Shallow Tree Decompositions}
In this section, we show how to construct tree decompositions with low combinatorial diameter, thus achieving the approximation results stated in \Cref{thm:main-intro}. We start by restricting our consideration to decompositions that are shallow in the traditional sense. For a given graph $G$ with treewidth $k$, we consider a tree decomposition $({\mathcal T}, \set{B_i}_{i \in V({\mathcal T})})$ with diameter $\ensuremath{d}\xspace=O(\log n)$ and width $O(k)$~\cite{Bodlaender88}. Fix some root $r$ in $V({\mathcal T})$.
Our goal is now to modify ${\mathcal T}$ such that every node has a combinatorially short path to $r$. This is a necessary requirement, but perhaps surprisingly it is not sufficient. The combinatorial lengths of paths do not necessarily induce a metric on $V({\mathcal T})$\footnote{Consider bags $\{ab\}, \{abc\}, \{acd\}, \{ade\}, \{aef\}, \{afg\}, \{a\}$ occurring in that order as a path. The whole path can be reduced to just the endpoints. The subpath $\{ab\}, \{abc\}, \{acd\}, \{ade\}, \{aef\}, \{afg\}$ is irreducible. Thus the distance from $\{ab\}$ to $\{afg\}$ is larger than the sum of the distances from $\{ab\}$ to $\{a\}$ and from $\{afg\}$ to $\{a\}$.}, and therefore bounding the length to $r$ does not on its own suffice to bound the combinatorial diameter.
We will not show explicitly that the modified structures are in fact tree decompositions. The proofs are straightforward using \Cref{lem:TreeDecompMonotonicity}.
We introduce three objects, which we call \textbf{bridges}, \textbf{highways}, and \textbf{super-highways}, and show that they can be used to prove the three parts of \Cref{thm:main-intro}.
\subsection{Bridges} \begin{figure}
\caption{Illustration of a path from the root to some node $s$. The square nodes are the synchronization nodes. The bridge from $y$ to its synchronization ancestor is marked with dashes in the first image. The dotted nodes in the second image mark those nodes which can be removed when simplifying the $x$-$s$-path in ${\mathcal T}'$. }
\label{fig:bridges}
\end{figure} Fix a parameter $\lambda \in \set{1,\dots, \ensuremath{d}\xspace}$. Define $\ell\colon V({\mathcal T}) \to \mathbb{N}_0$ to be the \emph{level} of a node in ${\mathcal T}$, that is, $\ell(v)$ is the number of edges on ${\mathcal T}_{v \leftrightarrow r}$.
\begin{definition}
We call a node a \emph{synchronization node} if its level is a multiple of $\lambda$.
Define also the \emph{synchronization ancestor} $\sigma(v)$ of any node $v$ to be the first node on the path from $v$ to $r$ that is a synchronization node, excluding $v$ itself. \end{definition}
We can construct a tree decomposition $({\mathcal T}', \set{B'_i}_i)$ by taking ${\mathcal T}'={\mathcal T}$ and setting $B_v' = B({\mathcal T}_{v \leftrightarrow \sigma(v)})$, that is, the new bag is obtained by combining all the bags from $v$ up to its synchronization ancestor. This increases the width of the decomposition by a factor of at most $\lambda$. We may view this path connecting $v$ to the synchronization point as a {\bf bridge} crossing over all intermediate nodes in one step.
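The bridged bags $B'_v$ are straightforward to compute from parent pointers. The following Python sketch (our own illustrative encoding of ${\mathcal T}$ as a parent map) walks from each node up to its synchronization ancestor, taking the union of the bags along the way; each new bag is the union of $O(\lambda)$ original bags, which is the source of the width increase mentioned above.

```python
def bridge_bags(parent, bags, lam, root=0):
    """Compute B'_v = union of bags from v up to sigma(v).

    parent: dict node -> parent node (the root maps to itself).
    bags:   dict node -> set of graph vertices.
    lam:    the parameter lambda; nodes whose level is a multiple of
            lam are synchronization nodes. The root keeps its own bag.
    """
    def level(v):
        l = 0
        while v != root:
            v = parent[v]
            l += 1
        return l

    new_bags = {}
    for v in bags:
        B = set(bags[v])
        u = v
        while u != root:
            u = parent[u]
            B |= bags[u]
            if level(u) % lam == 0:  # reached sigma(v), stop merging
                break
        new_bags[v] = B
    return new_bags

# A path-shaped tree 0-1-2-3-4 rooted at 0, bag {i} at node i, lambda=2;
# the synchronization nodes sit at levels 0, 2, 4, i.e. nodes 0, 2, 4.
parent = {0: 0, 1: 0, 2: 1, 3: 2, 4: 3}
bags = {i: {i} for i in range(5)}
```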
\begin{lemma}
\label{lem:bridges:diam}
${\mathcal T}'$ has combinatorial diameter $O(\ensuremath{d}\xspace / \lambda)$. \end{lemma} \begin{proof}
Fix any two nodes $s,t \in V({\mathcal T}')$ and take $x$ to be their lowest common ancestor in ${\mathcal T}'$.
Then the combinatorial length of ${\mathcal T}'_{s \leftrightarrow t}$ is at most the sum of the combinatorial lengths of ${\mathcal T}'_{s \leftrightarrow x}$ and ${\mathcal T}'_{x \leftrightarrow t}$.
We remark that the triangle inequality holds in this case because $x$ lies on the path from $s$ to $t$.
Thus, it suffices to show that the combinatorial length of ${\mathcal T}'_{s \leftrightarrow x}$ is $O(\ensuremath{d}\xspace / \lambda)$.
The result follows analogously for ${\mathcal T}'_{x \leftrightarrow t}$.
Using the rules of \Cref{def:simplification}, we can bypass any node that is neither a synchronization node nor $s$ or $x$, since the bag of the unique child (in ${\mathcal T}'_{s \leftrightarrow x}$) of such a node is a superset of its own bag.
Therefore, the path $\{v \in {\mathcal T}'_{s \leftrightarrow x} | v = s \vee v=x \vee v\text{ is a synchronization node}\}$ is a simplification of~${\mathcal T}'_{s \leftrightarrow x}$.
Since there are at most $\ensuremath{d}\xspace/\lambda$ synchronization nodes on any upward path, the lemma follows. \end{proof}
This lemma, in conjunction with \Cref{thm:algo:main} and the fact that ${\mathcal T}'$ can be computed in polynomial time from ${\mathcal T}$, yields: \begin{corollary}
\label{cor:bridge:algo}
For every $\lambda$, there is an algorithm that computes an $O((\log n / \lambda)^2)$-approximation for \mbox{\sf Sparsest-Cut}\xspace instances where $G$ has treewidth at most $k$, in time $2^{O(\lambda k)}\operatorname{poly}(n)$.
Setting $\lambda = \log n /k$ results in an $O(k^2)$-approximation in time $2^{k}\operatorname{poly}(n)$, while setting $\lambda=\log n$ gives an $O(1)$-approximation in time $n^{O(k)}$. \end{corollary}
\subsection{Highways} \begin{figure}
\caption{The dashed nodes in the first image mark the bridge and highway from $y$ to $r$. The other images illustrate the two simplification rounds for the $x$-$s$-path, leaving a path of length $2$.}
\label{fig:highways}
\end{figure}
The idea of extending bags towards the root can be exploited further by adding the vertices in a synchronization bag to all of its descendants. We may regard this as giving each node a bridge to the next synchronization node, as well as a {\bf highway} along the synchronization nodes towards the root. This idea leads to the following construction.
Let $({\mathcal T}', \set{B'_i}_i)$ be a modified tree decomposition with ${\mathcal T}'={\mathcal T}$ as before, and \[ B_v' := B(\{ w \in {\mathcal T}_{v \leftrightarrow r} \mid w \in {\mathcal T}_{v \leftrightarrow \sigma(v)} \vee w \text{ is a synchronization node} \}) \enspace . \]
The size of these bags is at most $k (\lambda + \ensuremath{d}\xspace / \lambda)$, which for $\lambda = \ensuremath{d}\xspace /k$ gives $\ensuremath{d}\xspace+ k^2 = O(\log n + k^2)$.
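The highway construction can likewise be sketched in Python. Again all names are our own assumptions; the new bag of $v$ merges everything up to $\sigma(v)$ plus every synchronization node on the remaining path to the root.

```python
# Illustrative sketch of the highway construction; names are assumptions.

def highway_bags(parent, bags, root, lam):
    def path_to_root(v):
        yield v
        while v != root:
            v = parent[v]
            yield v

    # level of each node = number of edges to the root
    levels = {v: sum(1 for _ in path_to_root(v)) - 1 for v in bags}

    new_bags = {}
    for v in bags:
        merged, past_sigma = set(), False
        for u in path_to_root(v):
            is_sync = levels[u] % lam == 0
            if not past_sigma:
                merged |= set(bags[u])          # bridge part: v up to sigma(v)
                if is_sync and u != v:
                    past_sigma = True
            elif is_sync:
                merged |= set(bags[u])          # highway part: sync nodes only
        new_bags[v] = merged
    return new_bags
```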
Notice that the bag $B_r$ is now contained in any bag $B'_i$, so we have some hope that the combinatorial diameter of $({\mathcal T}', \set{B'_i}_i)$ is low. Indeed this is true.
\begin{lemma}
\label{lem:highways:diam}
${\mathcal T}'$ has combinatorial diameter at most $3$. \end{lemma} \begin{proof}
As before, we split any $s$-$t$-path at $x$, the lowest common ancestor of $s$ and $t$, and consider only the $s$-$x$-path.
Every non-synchronization node $v$ on ${\mathcal T}_{s\leftrightarrow x}$ has a node below it which is either a synchronization node or $s$.
The bag of that node is a superset of $B'_v$, so all non-synchronization nodes except $s$ and $x$ can be bypassed.
Call that reduced path $P$.
Consider the neighbor of $s$ in $P$, which we denote $v$, and assume that $v$ is not the neighbor of $x$ in $P$.
Then $v$ must be a synchronization node, and its next node in $P$ is $\sigma(v)$.
Now, the intersection $B'_v \cap B'_{\sigma(v)}$ is exactly the union of the bags of the synchronization nodes on ${\mathcal T}_{\sigma(v)\leftrightarrow r}$, and thus, ${B'_v \cap B'_{\sigma(v)} \subseteq B'_s}$.
This implies that $v$ can be bypassed, and by repeating this process, we can bypass every synchronization node except for the neighbor of $x$.
This gives a possible simplification of ${\mathcal T}_{s \leftrightarrow t}$ as the path $(s, \sigma_s, x, \sigma_t, t)$, where the $\sigma_s$ and $\sigma_t$ are the synchronization nodes below $x$ on the paths to $s$ and $t$, respectively.
There is a further reduction of the whole path, since $B_x'$ is precisely $B_{\sigma_s}' \cap B_{\sigma_t}'$.
This allows us to remove $x$ as well, giving a simplification of length $3$. \end{proof}
Using the fact that $\ensuremath{d}\xspace \in O(\log n)$, and setting $\lambda = \ensuremath{d}\xspace /k$ gives a fixed-parameter algorithm that yields a constant-factor approximation: \begin{corollary}
\label{cor:highways:algo}
There exists an algorithm that in time $2^{O(k^2)}\cdot\operatorname{poly}(n)$ computes a factor-$O(1)$ approximation for \mbox{\sf Sparsest-Cut}\xspace instances where $G$ has treewidth at most $k$. \end{corollary}
\subsection{Super-Highways} \begin{figure}
\caption{Illustration of an upward path with nodes of layer $-1$ as circles, nodes of layer 0 as diamonds, and nodes of layer $1$ as squares. The root is at some unspecified maximum layer. The dashed nodes in the first image mark the super-highway from $s$ to $r$. The other images illustrate the simplification rounds for the $x$-$s$-path, removing all nodes of some layer in each round, except $x$, $s$, and possibly one node close to $x$.}
\label{fig:superhighways}
\end{figure}
We can think of the previous construction as having two layers, bridges to synchronization nodes and highways along synchronization nodes to the root. The highways need to cover many synchronization nodes, leading to large bags in ${\mathcal T}'$. To improve on this we introduce a network of {\bf super-highways} of different layers, where each layer covers fewer, more spaced-out synchronization nodes on a root-leaf path. When we connect a node to the root we can then move up the tree layer by layer with increasing speed, decreasing the size of bags in ${\mathcal T}'$. This is paid for by the need for an additional node in path simplifications for moving between layers, giving a trade-off between run time and approximation guarantee.
Let $q \in \mathbb{N}$ be a parameter representing the number of layers. For a node $v \in {\mathcal T}$, we define the layer of $v$, denoted $\pi(v)$, as \[
\pi(v) := \max\{-1, \max\{ j\in\{0,\dots, q-1 \} \mid \ell(v) \equiv 0 \mod k^{j/q} \ensuremath{d}\xspace / k\} \} \enspace . \] By this definition all synchronization nodes are assigned to some non-negative layer, and all other nodes are on layer $-1$. We now get a new tree decomposition $({\mathcal T}', \set{B'_i}_i)$ by constructing bags: \[ B_v' = B(\{ w \in {\mathcal T}_{p(v) \leftrightarrow r} \mid \pi(w) = \max\{ \pi(u) \mid u \in {\mathcal T}_{p(v)\leftrightarrow w} \} \} \cup \{v\})\enspace . \] Informally, we start at some node $v$ and move towards $r$ by first taking all nodes of layer $-1$ until we hit a node of layer $0$, then taking only nodes of layer $0$ until we hit layer $1$, and so on. The nodes at higher layers are spaced further apart. Thus this process ``speeds up'' thereby generating smaller bags. To be precise, there are $q$ layers and at most $k^{1/q}$ nodes of any one layer in a bag, so ${\mathcal T}'$ has width $O(\ensuremath{d}\xspace + qk^{1+1/q})$.
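The layer assignment $\pi(v)$ can be sketched as follows. The rounding of the possibly fractional spacing $k^{j/q}\ensuremath{d}\xspace/k$ is our own assumption; the paper leaves this detail implicit.

```python
# Hedged sketch of the layer assignment pi(v); the rounding of the
# (possibly fractional) spacing k**(j/q) * d / k is our assumption.

def layer(level, k, d, q):
    """Return the layer of a node at the given level: the largest j in
    {0, ..., q-1} whose spacing divides the level, or -1 if none does."""
    best = -1
    for j in range(q):
        spacing = max(1, round(k ** (j / q) * d / k))
        if level % spacing == 0:
            best = j
    return best
```

For example, with $k=4$, $\ensuremath{d}\xspace=8$, $q=2$ the layer-$0$ spacing is $2$ and the layer-$1$ spacing is $4$, so levels $0, 4, 8, \dots$ are on layer $1$, levels $2, 6, \dots$ on layer $0$, and odd levels on layer $-1$.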
We now show that $({\mathcal T}', \set{B'_i}_i)$ has combinatorial diameter depending only on $q$.
\begin{lemma} \label{lem:superhighways:diam}
$({\mathcal T}', \set{B'_i}_i)$ has combinatorial diameter at most $2q+1$. \end{lemma} \begin{proof}
As before, we only show that any upward path from $s$ to $x$ has combinatorial length at most~$q+1$.
We need to perform a round of reductions for every layer, with the goal of leaving only $s$, $x$, as well as the first node of at least that layer below $x$.
For layer $-1$, this holds with the same argument as before.
We can now proceed by induction, fixing some layer $i$ and assuming that the $s$-$x$-path $P$ has been reduced to contain only $s$, then nodes of layers $\geq i$, followed by a sequence $(\sigma_{i-1}, \sigma_{i-2}, \ldots, \sigma_0, x)$, where each node $\sigma_j$ is in layer $j$. Here, we assume w.l.o.g.\ that $x$ is at layer $-1$.
Now consider any node $v$ of layer $i$, except the one closest to $x$.
Because its neighbors also have layer at least $i$ (or are~$s$), the intersection of their bags can be represented as the union of bags of ${\mathcal T}$ whose layer is at least $i$.
Let~$w$ be the predecessor of $v$ on the $s$-$x$-path $P$.
The set $B_w'$ is constructed from some upward path starting at $w$, containing only nodes of non-decreasing layer.
This upward path hits layer~$i$ between~$w$ and~$v$, but not layer $i+1$ since a node of layer $i+1$ would be on $P$ between $w$ and~$v$.
So then~$B_w'$ covers all nodes of layer at least $i$ that $B_v'$ covers, and therefore $v$ can be bypassed, concluding the induction.
The simplification of ${\mathcal T}_{s\leftrightarrow x}$ produced in this fashion is a path $(s, \sigma_{q-1}, \dots, \sigma_0, x)$, where $\pi(\sigma_i) = i$.
If we add the same simplification for ${\mathcal T}_{t\leftrightarrow x}$ we get a simplification for ${\mathcal T}_{s\leftrightarrow t}$ that takes the form $(s, \sigma_{q-1}, \dots, \sigma_0, x, \sigma_0',\dots,\sigma_{q-1}',t)$.
As before $x$ can be bypassed since its bag is the intersection of the bags of $\sigma_0$ and $\sigma_0'$.
Thus any $s$-$t$-path in ${\mathcal T}'$ has combinatorial length at most $2q+1$. \end{proof}
This implies the existence of the following algorithms. \begin{corollary} \label{lem:superhighways:algo}
There exists an algorithm that, for any $q\in \mathbb{N}$, computes a factor-$O(q^2)$ approximation for \mbox{\sf Sparsest-Cut}\xspace in time $2^{O(qk^{1+1/q})}\cdot\operatorname{poly}(n)$.
Taking $q = \log k$ gives a factor-$O(\log^2k)$ approximation in time $2^{O(k\log k)}\cdot\operatorname{poly}(n)$. \end{corollary}
\begin{acks} Parinya Chalermsook has been supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 759557) and by an Academy of Finland Research Fellowship, under grant number 310415. Joachim Spoerhase has been partially supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 759557). Daniel Vaz has been supported by the Alexander von Humboldt Foundation with funds from the German Federal Ministry of Education and Research (BMBF). \end{acks}
\appendix \section{Various Lemmas}
\makeatletter \newcommand{\oset}[3][0ex]{
\mathbin{\mathop{#3}\limits^{
\vbox to#1{\kern-2\ex@
\hbox{$\scriptstyle#2$}\vss}}}} \makeatother \newcommand{\mirror}[1]{\ensuremath{\oset{\leftrightarrow}{#1}}}
\def\raise-0.867ex\hbox{$\mathchar"017E$}{\raise-0.867ex\hbox{$\mathchar"017E$}} \newcommand{\dvec}{\normalfont\mathrel{\ooalign{\raise-0.867ex\hbox{$\mathchar"017E$}\cr\hidewidth\raise1ex\hbox{\rotatebox{180}{\raise-0.867ex\hbox{$\mathchar"017E$}}}\cr}}} \renewcommand{\mirror}[1]{\ensuremath{\oset[-0.2ex]{\dvec}{#1}}}
\begin{definition}
For any set $X$ and $X$-assignment $f$ we define the \emph{mirror} of $f$ to be $\mirror{f} := \sigma \circ f$, where $\sigma(0) = 1$ and $\sigma(1) = 0$. \end{definition}
Notice that the mirror of an assignment represents the same cut; it merely exchanges the labels. The approximation ratio analysis of \chlamtac et al.\@\xspace requires the distributions to be symmetric in their labeling, in particular $\Pr[f(v) = 0] = \Pr[f(v) = 1]$ for all $v$. They resolve this by demanding symmetry via the Sherali-Adams LP, which can be shown not to worsen the relaxation. Using the following lemma, we are able to prove that the rounding analysis also holds in the non-symmetric case.
\begin{lemma}
\label{lem:symmetrization}
Let $G, ({\mathcal T}, \set{B_i}_{i}), \{\mu_L\}$ be the input of \autoref{alg:Chlamtac} and $f$ the assignment computed by it.
Consider the modified decomposition $({\mathcal T}, \set{B'_i}_i)$, where a dummy vertex $e$ has been added to every bag $B'_i$.
Then for each $ \mu_L$ and $L' = L \cup \{e\}$ there exists a $\mirror{\mu}_{L'}$ with $\Pr_{ f \sim \mirror{\mu}_{L'} }[ f = f' ] = \Pr_{ f \sim \mirror{\mu}_{L'} }[ f = \mirror{f'} ]$ for all $f' \in \mathcal{F}[L']$ such that when \autoref{alg:Chlamtac} is run on $G, ({\mathcal T}, \set{B'_i}_i), \{\mirror{\mu}_{L'}\}$ the resulting assignment $f^*$ satisfies
\[
\Pr[f = f'] + \Pr[f = \mirror{f'}] = \Pr[f^*|_{V(G)} = f'] + \Pr[f^*|_{V(G)} = \mirror{f'}] \; \forall f' \in \mathcal{F}[V(G)] \enspace.
\] \end{lemma}
The content of the lemma is, at its core, not very surprising. If we do not care about the labels, we do not care whether the algorithm outputs $f$ or $\mirror{f}$. But in that case, the distributions should not need to maintain any distinction between the labels either. In fact, one could run the algorithm unmodified and then permute the labels with probability $1/2$. Clearly this does not change the distribution over cuts, and the choice of labels is now symmetric.
\begin{proof}
To make this formal we shall use the value of $f(e)$ to indicate whether or not we are permuting the labels.
Consider the following definition:
\[
\mirror{\mu}_{L\cup \{e\}} (f', e \to 0) = \frac{1}{2}\mu_L(f'), \; \mirror{\mu}_{L\cup \{e\}} (f', e \to 1) = \frac{1}{2}\mu_L(\mirror{f'}) \; \forall f' \in \mathcal{F}[L] \enspace.
\]
This definition describes a distribution with the desired symmetry property.
By \Cref{lem:traverselInv} we can model the rounding algorithm for $G, ({\mathcal T}, \set{B'_i}_{i}), \{\mirror{\mu}_{L'}\}$ as choosing first a value for $f(e)$, and then proceeding in the same order as the rounding over $G, ({\mathcal T}, \set{B_i}_{i}), \{\mu_{L}\}$.
With probability $1/2$ we get ${f(e) = 0}$. Since every bag contains $e$, $e$ is always conditioned on, so the symmetrized algorithm performs exactly as the original run would.
Meanwhile if $f(e) = 1$, the symmetrized algorithm samples some intermediate assignment $f'$ with exactly the probability that the original algorithm would have sampled $\mirror{f'}$.
This gives
\begin{align*}
&\Pr[f^*|_{V(G)} = f'] = \frac{1}{2}\Pr[f = f'] + \frac{1}{2}\Pr[\mirror{f} = f'] \; &\forall f' \in \mathcal{F}[V(G)] \\
\implies &\Pr[f^*|_{V(G)} = f'] + \Pr[f^*|_{V(G)} = \mirror{f'}] = \Pr[f = f'] + \Pr[f = \mirror{f'}] \; &\forall f' \in \mathcal{F}[V(G)].
\end{align*} \end{proof}
Notice that while we could construct the symmetrized $\mirror{\mu}$ efficiently, we do not need to. The mere existence of a symmetrized set of distributions is sufficient for our purposes. The analysis of the Markov flow graphs in \Cref{sec:markov} requires symmetry, but by the lemma above we can assume symmetry without loss of generality. The result then also holds for the non-symmetric case since the probability that an edge is cut is symmetric in the labels by \[ \Pr[f(s) \not = f(t)] = \Pr[f(s) = 1 \wedge f(t) = 0] + \Pr[f(s) = 0 \wedge f(t) = 1] \enspace . \]
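The construction of $\mirror{\mu}$ from the proof can be made concrete with a small sketch. Assignments are represented here as tuples of $0/1$ values with the dummy value for $e$ appended last; this encoding is our own choice for illustration.

```python
# Hypothetical sketch of the symmetrized distribution from the proof.
# A distribution is a dict mapping 0/1 assignment tuples to probabilities.

def symmetrize(mu):
    """Extend each assignment f by a dummy coordinate e: with e = 0 keep f,
    with e = 1 store the probability of f under its mirror image."""
    sym = {}
    for f, p in mu.items():
        mirror = tuple(1 - b for b in f)
        sym[f + (0,)] = sym.get(f + (0,), 0.0) + p / 2
        sym[mirror + (1,)] = sym.get(mirror + (1,), 0.0) + p / 2
    return sym
```

By construction every assignment and its mirror receive equal probability, which is exactly the symmetry property used in the analysis.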
\begin{lemma}
\label{lem:TreeDecompMonotonicity}
Let $({\mathcal T},\{B_i\}_{i\in V(G)})$ be a tree decomposition of a graph $G$, rooted at $r$.
Then ${({\mathcal T}, \{B_i'\}_{i\in V(G)})}$ is also a tree decomposition of $G$ if $B_i \subseteq B_i' \subseteq B_i \cup B_{p(i)}'$. \end{lemma} \begin{proof}
Fix some $s \in V(G)$. We need to show that ${\mathcal T}'_s := \{i \in V({\mathcal T}) \mid s \in B_i'\}$ is connected.
As ${\mathcal T}_s := \{i \in V({\mathcal T}) | s \in B_i\}$ is connected and ${\mathcal T}_s \subseteq {\mathcal T}'_s$, it is sufficient to show that any $i\in {\mathcal T}'_s$ is connected to ${\mathcal T}_s$ in ${\mathcal T}'_s$.
We do this by induction over the distance of $i$ to the root.
For $i = r$ we have $B'_r = B_r$, so either $i \not \in {\mathcal T}'_s$ or $i \in {\mathcal T}_s$.
Otherwise, consider some $i \in {\mathcal T}'_s$, so $s \in B_i \cup B'_{p(i)}$.
Then we either have $s \in B_i$, in which case we are done, or $s \in B_{p(i)}'$.
But this gives $p(i) \in {\mathcal T}'_s$, and $p(i)$ is closer to $r$ than $i$.
Thus we can assume that $p(i)$ is connected to ${\mathcal T}_s$, and hence~$i$ is also connected to ${\mathcal T}_s$ via $p(i)$. \end{proof}
\end{document}
\begin{document}
\thispagestyle{empty} \setcounter{page}{0} \begin{Huge} \begin{center} Computer Science Technical Report CSTR-{\tt 8} \\ \today \end{center} \end{Huge} \vfil \begin{huge} \begin{center} Azam S. Zavar Moosavi, Adrian Sandu \end{center} \end{huge}
\vfil \begin{huge} \begin{it} \begin{center} ``Approximate Exponential Algorithms to\\ Solve the Chemical Master Equation'' \end{center} \end{it} \end{huge} \vfil
\begin{large} \begin{center} Computational Science Laboratory \\ Computer Science Department \\ Virginia Polytechnic Institute and State University \\ Blacksburg, VA 24060 \\ Phone: (540)-231-2193 \\ Fax: (540)-231-6075 \\ Email: \url{sandu@cs.vt.edu} \\ Web: \url{http://csl.cs.vt.edu} \end{center} \end{large}
\vspace*{1cm}
\begin{tabular}{ccc} \includegraphics[width=2.5in]{CSL_LogoWithName.pdf} &\hspace{2.5in}& \includegraphics[width=2.5in]{VTlogo_ITF.eps} \\ {\bf\em Innovative Computational Solutions} &&\\ \end{tabular}
\begin{abstract} This paper discusses new simulation algorithms for stochastic chemical kinetics that exploit the linearity of the chemical master equation and its matrix exponential exact solution. These algorithms make use of various approximations of the matrix exponential to evolve probability densities in time. A sampling of the approximate solutions of the chemical master equation is used to derive accelerated stochastic simulation algorithms. Numerical experiments compare the new methods with the established stochastic simulation algorithm and the tau-leaping method.
\paragraph{Keywords} Stochastic chemical kinetics, chemical master equation, exact solution, stochastic simulation algorithm, tau-leap.
\end{abstract}
\section{Introduction} \label{sect:Intro}
In many biological systems the small number of participating molecules makes the chemical reactions inherently stochastic. The system state is described by probability densities of the numbers of molecules of the different species. The evolution of these probabilities in time is described by the chemical master equation (CME) \cite{Gillespie_1977}. Gillespie proposed the Stochastic Simulation Algorithm (SSA), a Monte Carlo approach that samples from the CME \cite{Gillespie_1977}. SSA became the standard method for solving well-stirred chemically reacting systems. However, SSA simulates one reaction at a time and is inefficient for most realistic problems. This motivated the quest for approximate sampling techniques that enhance efficiency.
The first approximate acceleration technique is the tau-leaping method \cite{Gillespie_2001}, which is able to simulate multiple chemical reactions in a pre-selected time step of length $\tau$. The tau-leap method is accurate if $\tau$ is small enough to satisfy the leap condition, meaning that the propensity functions remain nearly constant during a time step. The number of firing reactions in a time step is approximated by a Poisson random variable \cite{Kurz_1972_SSA}. The explicit tau-leaping method is numerically unstable for stiff systems \cite{Cao_2004_stability}. Stiff systems have well-separated ``fast'' and ``slow'' time scales, with the ``fast modes'' being stable. The implicit tau-leap method \cite{Rathinam_2003} overcomes the stability issue, but it has a damping effect on the computed variances. More accurate variations of the implicit tau-leap method have been proposed to alleviate the damping \cite{Gillespie_2003,Gillespie_2001,Sandu_2013_SSA,Cao_2005,Cao_2004,Rathinam_2005}. Simulation efficiency has also been increased via parallelization \cite{Sandu_2012_parallel}.
Direct solutions of the CME are computationally important, especially in order to estimate moments of the distributions of the chemical species \cite{Burrage_2006_multiscale}. Various approaches to solving the CME are discussed in \cite{Engblom_2006_thesis}.
Sandu has explained the explicit tau-leap method as an exact sampling procedure from an approximate solution of the CME \cite{Sandu_2013_CME}. This paper extends that study and proposes new approximations to the CME solution based on various approximations of matrix exponentials. Accelerated stochastic simulation algorithms are then built by performing exact sampling of these approximate probability densities.
The paper is organized as follows. Section \ref{sect:StochasticChem} reviews the stochastic simulation of chemical kinetics. Section \ref{sect:ApproxExponential} develops the new approximation methods. Numerical experiments illustrating the proposed schemes are carried out in Section \ref{sect:NumericalExperim}. Conclusions are drawn in Section \ref{sect:Conclusion}.
\section{Simulation of stochastic chemical kinetics} \label{sect:StochasticChem}
Consider a chemical system in a constant volume container. The system is well-stirred and in thermal equilibrium at some constant temperature. There are $N$ different chemical species $S^1,\, \ldots\,,S^N$. Let $X^i(t)$ denote the number of molecules of species $S^i$ at time $t$. The state vector $x(t)=[X^1(t),\, \ldots\,,X^N(t)]$ defines the numbers of molecules of each species present at time $t$. The chemical network consists of $M$ reaction channels $R_1,\, \ldots\,,R_M$. Each individual reaction destroys a number of molecules of reactant species, and produces a number of molecules of the products. Let $\nu_j^i$ be the change in the number of $S^i$ molecules caused by a single reaction $R_j$. The state change vector $\nu_j = [\nu_j^{1}, \ldots , \nu_j^{N}]$ describes the change in the entire state following $R_j$.
A propensity function $a_j(x)$ is associated with each reaction channel $R_j$. The probability that one $R_j$ reaction will occur in the next infinitesimal time interval $[t, t+dt)$ is $a_j(x(t))\cdot dt$. The purpose of a stochastic chemical simulation is to trace the time evolution of the system state $x(t)$ given that at the initial time $\bar{t}$ the system is in the initial state $x\left(\bar{t}\right)$.
\subsection{Chemical Master Equation} \label{sect:ChemicalMaster}
The Chemical Master Equation (CME) \cite{Gillespie_1977} contains complete information about the time evolution of the probability of the system's state:
\begin{equation}\label{eq_CME} \frac {\partial\mathcal{P}\left(x,t\right)}{\partial t}=\sum_{r=1}^M a_{r}\left(x-v_{r}\right) \mathcal{P}\left(x-v_{r},t\right)-a_0\left(x\right)\mathcal{P}\left(x,t\right)\,. \end{equation}
Let $Q^i$ be the maximum possible number of molecules of species $S^i$. The total number of all possible states of the system is:
\[ \label{eq_Q} Q=\prod_{i=1}^{N}\left(Q^i+1\right). \]
We denote by $\mathcal{I}(x)$ the state-space index of state $x=[X^1,\, \ldots\,,X^N]$
\[ \begin{array}{l}
\mathcal{I}(x) = \left(Q^{N-1}+1\right)\cdots \left(Q^1+1\right)\cdot X^N+\cdots \\
+\left(Q^2+1\right)\left(Q^1+1\right)\cdot X^3+ \left(Q^1+1\right)\cdot X^2+X^1+1 \end{array} \]
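The index $\mathcal{I}(x)$ is a mixed-radix encoding of the state vector. A small sketch (our own, with `Q[i]` holding the maximum count of species $i+1$ and states $1$-indexed as in the text):

```python
# Sketch of the state-space index I(x): a mixed-radix (little-endian)
# encoding of the molecule counts, shifted to start at 1.

def state_index(x, Q):
    idx, stride = 1, 1
    for xi, qi in zip(x, Q):
        idx += xi * stride      # contribution of this species
        stride *= qi + 1        # each species i contributes Q^i + 1 values
    return idx
```

With $Q^1=2$, $Q^2=3$ there are $Q=(2+1)(3+1)=12$ states, and the indices run from $\mathcal{I}([0,0])=1$ to $\mathcal{I}([2,3])=12$.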
One firing of reaction $R_{r}$ changes the state from $x$ to $\bar {x}=x-v_{r}$.
The corresponding change in state space index is:
\[ \label{eq_d} \begin{array}{l} \mathcal{I}(x)-\mathcal{I}\left(x-v_{r}\right)=d_{r}, \\ d_{r}=\left(Q^{N-1}+1\right)\cdots\left(Q^1+1\right)\cdot v_{r}^N+\cdots\\ \qquad +\left(Q^2+1\right)\left(Q^1+1\right)\cdot v_{r}^3+\left(Q^1+1\right)\cdot v_{r}^2+v_{r}^1. \end{array} \]
The discrete solutions of the CME \eqref{eq_CME} are vectors in the discrete state space, $\mathcal{P}\left(t\right) \in \mathbb{R}^{Q}$. Consider the diagonal matrix $A_{0}\in \mathbb{R}^{Q \times Q} $ and the Toeplitz matrices $A_{1},\cdots,A_{M}\in \mathbb{R}^{Q \times Q} $ \cite{Sandu_2013_CME}
\[ \label{eq_sumofexponentexact} ({A_{0}})_{i,j}=\left\{ \begin{array}{rl} -a_{0}\left(x_j\right) & \mbox{if $i=j$} ,\\ 0 & \mbox{if $i \not =j$} , \end{array} \right. \,, \quad ({A_{r}})_{i,j}=\left\{ \begin{array}{rl} a_{r}(x_j) & \mbox{if $i-j=d_{r}$}, \\ 0 & \mbox{if $i-j \not =d_{r}$}, \end{array} \right. \]
as well as their sum $A \in \mathbb{R}^{Q \times Q}$ with entries
\begin{equation} \label{eq_exact} A = A_{0} + A_{1} + \dots + A_{M}\,, \quad A_{i,j}=\left\{ \begin{array}{rl} -a_{0}(x_j) & \mbox{if }i=j\,, \\ a_{r}(x_j) & \mbox{if }i-j=d_{r},~ r=1,\cdots,M\,, \\ 0 & \mbox{otherwise} \,, \end{array} \right. \end{equation}
where $x_j$ denotes the unique state with state-space index $j=\mathcal{I}(x_j)$. The matrix $A$ is a square $Q \times Q$ matrix that collects the propensity values at every possible state of the reaction system; the state space consists of all count vectors of the $N$ species, where species $S^i$ takes one of $Q^{i}+1$ possible values, $i=1,\dots,N$.
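As a toy illustration of the structure of $A$ (our own example, not from the paper): consider a single species undergoing the decay reaction $S \to \emptyset$ with propensity $a(x)=c\,x$ on the states $x=0,\dots,Q^1$. Here $\nu=-1$, so $d_1=-1$ and the only nonzero off-diagonal entries sit one row above the diagonal.

```python
import numpy as np

# Toy CME matrix for a single decaying species S -> 0, a(x) = c*x.
# Column j corresponds to state x = j; each column sums to zero.

def cme_matrix(Qmax, c=1.0):
    n = Qmax + 1                      # number of states
    A = np.zeros((n, n))
    for j in range(n):
        a = c * j                     # propensity in state j
        A[j, j] = -a                  # diagonal entry: -a0(x_j)
        if j > 0:
            A[j - 1, j] = a           # one firing moves x -> x - 1 (d = -1)
    return A
```

The zero column sums express conservation of probability: $\frac{d}{dt}\sum_x \mathcal{P}(x,t)=0$.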
The CME \eqref{eq_CME} is a linear ODE on the discrete state space
\begin{equation} \label{eq_cme_mat} \mathcal{P}' = A \cdot \mathcal{P}\,, \quad \mathcal{P}(\bar{t}) = \delta_{\mathcal{I}(\bar{x})}\,, \quad t \ge \bar{t}\,, \end{equation}
where the system is initially in the known state $x(\bar{t})=\bar{x}$, and therefore the initial probability distribution vector $\mathcal{P}(\bar{t}) \in \mathbb{R}^{Q}$ is equal to one at entry $\mathcal{I}(\bar{x})$ and zero everywhere else.
The exact solution of the linear ODE \eqref{eq_cme_mat} is:
\begin{equation} \label{eq_exact2} \mathcal{P}\left(\bar{t}+T\right)=\exp\left(T\, A\right)\cdot \mathcal{P}\left(\bar{t}\right) = \exp\left(T\, \sum_{r=0}^M A_r\right)\cdot \mathcal{P}\left(\bar{t}\right)\,. \end{equation}
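The exponential solution can be checked numerically on the toy decay system above. This is our own minimal example; we use SciPy's matrix exponential, which the paper does not prescribe.

```python
import numpy as np
from scipy.linalg import expm

# Two-state decay system (c = 1, Qmax = 1): states {x=0, x=1}.
# Each column of A sums to zero, so probability is conserved.
A = np.array([[0.0, 1.0],
              [0.0, -1.0]])

P0 = np.array([0.0, 1.0])   # delta distribution: start in state x = 1
T = 0.7
P = expm(T * A) @ P0        # P(t + T) = exp(T A) P(t)
# For this system the survival probability is P(x=1, T) = exp(-T) exactly.
```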
\subsection{Approximation to Chemical Master Equation} \label{sect:ApproxChemicalMaster}
Although the CME \eqref{eq_CME} fully describes the evolution of the probabilities, it is difficult to solve in practice due to the large size of the state space. Sandu \cite{Sandu_2013_CME} considers the following approximation of the CME:
\begin{equation} \label{eq_approx_CME} \frac {\partial\mathcal{P}\left(x,t\right)}{\partial t}=\sum_{r=1}^M a_{r}\left(\bar{x}\right) \mathcal{P}\left(x-v_{r},t\right)- a_0\left(\bar{x}\right)\mathcal{P}\left(x,t\right) \end{equation}
where the arguments of all propensity functions have been changed from $x$ or $x-v_{r}$ to $\bar{x}$. In order to obtain an exponential solution to \eqref{eq_approx_CME} in probability space we consider the diagonal matrix $\bar{A_{0}}\in \mathbb{R}^{Q \times Q}$ and the Toeplitz matrices $\bar{A_{1}},\dots,\bar{A_{M}}\in \mathbb{R}^{Q \times Q}$ \cite{Sandu_2013_CME}. The matrices $\bar{A_{r}}$ are square $Q \times Q$ matrices built from the propensities evaluated at the current state $\bar{x}$ only, in contrast with the matrices $A_{r}$, whose entries contain the propensities at all possible states of the reaction system.
\begin{equation} \label{eq_sumofexponent} (\bar{A_{0}})_{i,j}=\left\{ \begin{array}{rl} -a_{0}\left(\bar{x}\right) & \mbox{if $i=j$} ,\\ 0 & \mbox{if $i \not =j$} , \end{array} \right. \,, \quad (\bar{A_{r}})_{i,j}=\left\{ \begin{array}{rl} a_{r}(\bar{x}) & \mbox{if $i-j=d_{r}$}, \\ 0 & \mbox{if $i-j \not =d_{r}$}, \end{array} \right. \end{equation}
together with their sum $\bar{A} = \bar{A_{0}} + \dots + \bar{A_{M}}$. The approximate CME \eqref{eq_approx_CME} can be written as the linear ODE
\[ \label{eq_sumofexponent2} \mathcal{P}' = \bar{A} \cdot \mathcal{P}\,, \quad \mathcal{P}(\bar{t}) = \delta_{\mathcal{I}(\bar{x})}\,, \quad t \ge \bar{t}\,, \]
and has an exact solution
\begin{equation} \label{eq_sumofexponent3} \mathcal{P}\left(\bar{t}+T\right)=\exp\left(T\, \bar{A}\right)\cdot \mathcal{P}\left(\bar{t}\right) = \exp\left(T\, \sum_{r=0}^M \bar{A}_r\right)\cdot \mathcal{P}\left(\bar{t}\right)\,. \end{equation}
\subsection{Tau-leaping method} \label{sect:TauLeap}
In the tau-leap method the number of times reaction $R_{r}$ fires is a random variable drawn from a Poisson distribution with parameter $a_{r}\left( \bar{x}\right)\tau$. Since the reactions fire independently, the probability that each reaction $R_{r}$ fires exactly $k_{r}$ times, $r=1, 2,\cdots, M$, is the product of $M$ Poisson probabilities.
\[ \mathcal{P}\left(K_{1}=k_{1},\cdots,K_{M}=k_{M}\right)= \prod_{r=1}^{M}e^{-a_{r}\left(\bar{x} \right)\tau}\cdot\frac{\left(a_{r}\left(\bar{x}\right)\tau\right)^{k_{r}}}{k_{r}!}= e^{-a_{0}\left(\bar{x} \right)\tau}\cdot \prod_{r=1}^{M}\frac{\left(a_{r}\left(\bar{x}\right)\tau\right)^{k_{r}}}{k_{r}!} \]
Then the state vector after these reactions will change as follows: \begin{equation} \label{eq_tauleap} X\left(\bar{t}+\tau\right)=\bar{x}+\sum_{r=1}^{M}K_{r}v_{r} \end{equation}
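A single tau-leap update can be sketched generically. The names (`nu` for the stoichiometry matrix, `propensities`) are our own; this is a sketch of the standard scheme, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# One tau-leap step: nu is the N x M stoichiometry matrix and
# propensities(x) returns the vector of M propensity values a_r(x).

def tau_leap_step(x, nu, propensities, tau, rng=rng):
    a = propensities(x)
    k = rng.poisson(a * tau)          # independent Poisson firing counts K_r
    return x + nu @ k                 # x(t + tau) = x + sum_r K_r * v_r
```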
The probability to go from state $\bar{x}$ at time $\bar{t}$ to state $x$ at time $\bar{t}+\tau$ is obtained by summing over all combinations of reaction firings that realize this transition:
\[ \label{eq_tauleap1} \mathcal{P}\left(x,\bar{t}+\tau\right)=e^{-a_{0}\left(\bar{x}\right)\tau}\cdot\sum_{k \in \mathcal{K}\left(x - \bar{x}\right)} ~ \prod_{r=1}^{M}\frac{\left(a_{r}\left(\bar{x}\right)\tau\right)^{k_{r}}}{k_{r}!}\,, \]
where $\mathcal{K}\left(x - \bar{x}\right)$ denotes the set of all firing count vectors $k=\left(k_{1},\dots,k_{M}\right)$ with $\sum_{r=1}^{M}k_{r}v_{r}=x-\bar{x}$.
Equation \eqref{eq_sumofexponent3} can be approximated by the product of the individual matrix exponentials:
\begin{equation} \label{eq__productofexponents} \mathcal{P}\left(\bar{t}+T\right)=\exp\left(T\bar{A_{0}}\right)\cdot \exp\left(T\bar{A_{1}} \right)\cdots \exp\left(T\bar{A_{M}}\right) \cdot \mathcal{P}\left(\bar{t}\right). \end{equation}
It has been shown in \cite{Sandu_2013_CME} that the probability given by the tau-leaping method is exactly the probability evolved by the approximate solution \eqref{eq__productofexponents}.
\section{Approximations to the exponential solution} \label{sect:ApproxExponential}
\subsection{Strang splitting} \label{sect:StrangSplitting}
In order to improve the approximation of the matrix exponential in \eqref{eq__productofexponents} we consider the symmetric Strang splitting \cite{Strang_1968}. For $T=n\tau$, Strang splitting applied to each interval of length $\tau$ leads to the approximation
\begin{equation} \label{eq_strang} \mathcal{P}\left(\bar{t}+i \tau\right)=e^{\frac{\tau}{2} \bar{A}_{M}}\cdots e^{\frac{\tau}{2} \bar{A}_{1} }\, e^{\tau \bar{A}_{0}}
\cdot e^{\frac{\tau}{2} \bar{A}_{1}} \cdots e^{\frac{\tau}{2} \bar{A}_{M}}\cdot \mathcal{P}\left(\bar{t} + (i-1)\tau\right), \quad i=1,\dots,n, \end{equation}
where the matrices $\bar{A_{r}}$ are defined in \eqref{eq_sumofexponent}.
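One symmetric Strang-split step can be sketched as follows. We use the customary form with half-steps for the outer factors and a full step for the innermost factor; the matrices and names below are toy examples of our own.

```python
import numpy as np
from scipy.linalg import expm

# One symmetric Strang-split step for A = A0 + A1 + ... + AM:
# apply e^{tau/2 A_M} ... e^{tau/2 A_1}, then e^{tau A_0},
# then e^{tau/2 A_1} ... e^{tau/2 A_M} (rightmost factor acts first on P).

def strang_step(P, mats, tau):
    A0, rest = mats[0], mats[1:]
    for B in reversed(rest):          # e^{tau/2 A_M} ... e^{tau/2 A_1}
        P = expm(0.5 * tau * B) @ P
    P = expm(tau * A0) @ P            # full step for the innermost factor
    for B in rest:                    # e^{tau/2 A_1} ... e^{tau/2 A_M}
        P = expm(0.5 * tau * B) @ P
    return P
```

When the matrices commute (e.g. diagonal matrices) the splitting is exact, which gives a convenient sanity check.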
\subsection{Column based splitting} \label{sect:ColumnBased}
In column based splitting the matrix $A$ \eqref{eq_exact} is decomposed in a sum of columns
\[ \label{eq_colbased} A=\sum_{j=1}^Q A_{j}\,, \quad A_{j}=c_{j}e_{j}^T\,. \]
Each matrix $A_{j}$ has the same $j$-th column as the matrix $A$, and is zero everywhere else.
Here $c_{j}$ is the $j$-th column of the matrix $A$ and $e_{j}$ is the canonical basis vector, which is zero everywhere except in the $j$-th component. The exponential of $\tau A_{j}$ is:
\begin{equation} \label{eq_colbased2} e^{\tau A_{j}}=\sum_{k \ge 0} \frac {\tau^k \left(A_{j}\right)^k}{k!}\,. \end{equation}
Since $e_{j}^Tc_{j}$ is equal to the $j$-th diagonal entry of the matrix $A$:
\[ \label{eq_colbased4} e_{j}^T\,c_{j}=-a_{0}\left(x_{j}\right) \]
the matrix power $A_{j}^k$ reads
\[ \label{eq_colbased3} A_{j}^k=c_{j}e_{j}^T \, c_{j}e_{j}^T\, \cdots \, c_{j}e_{j}^T = \left(-a_{0}\left(x_{j}\right)\right)^{k-1} c_{j}e_{j}^T = \left(-a_{0}\left(x_{j}\right)\right)^{k-1}A_{j}\,. \]
Consequently the matrix exponential \eqref{eq_colbased2} becomes
\[ \label{eq_colbased5} e^{\tau A_{j}}=I+ \sum_{k \geq 1} \frac {\left(-\tau a_{0}\left(x_{j}\right)\right)^{k-1}}{k!}\left(\tau A_{j}\right) = I+ S_{j}\, \tau A_{j}\,, \quad S_{j}=\sum_{k \geq 1} \frac {\left(-\tau a_{0}\left(x_{j}\right)\right)^{k-1}}{k!}\,. \]
We have
\[ \label{eq_colbased7} e^{\tau A}=e^{\tau \sum_{j=1}^Q A_{j}}\approx \prod_{j=1}^Q e^{\tau A_{j}} \approx\prod_{j=1}^Q \left(I+S_{j}\tau A_{j}\right) \]
and the approximation to the CME solution reads
\[ \mathcal{P}\left(\bar{t}+i \tau\right)\approx \prod_{j=1}^Q \left(I+S_{j}\tau A_{j}\right)\cdot P\left(\bar{t} + (i-1)\tau\right)\,. \]
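The closed form $e^{\tau A_{j}} = I + S_j \tau A_j$ (with $S_j = (1-e^{-\tau a_0(x_j)})/(\tau a_0(x_j))$, summing the series above) can be verified numerically. In this sketch the vector $c$ and the value $a_0$ are arbitrary stand-ins; the only structural requirement from the text is that the $j$-th entry of $c_j$ equals $-a_0(x_j)$.

```python
import numpy as np

# Hypothetical data: a 4-state system, column j of A.
Q, j, tau = 4, 1, 0.3
a0 = 2.5                               # propensity sum a_0(x_j), assumed value
c = np.array([0.8, -a0, 1.0, 0.7])     # j-th entry is -a0(x_j), as in the text
e = np.zeros(Q); e[j] = 1.0
Aj = np.outer(c, e)                    # A_j = c_j e_j^T, nonzero only in column j

# Closed form: e^{tau A_j} = I + S_j tau A_j, S_j = (1 - e^{-tau a0})/(tau a0).
S = (1.0 - np.exp(-tau * a0)) / (tau * a0)
closed = np.eye(Q) + S * tau * Aj

# Reference: truncated Taylor series of the exponential.
ref, term = np.eye(Q), np.eye(Q)
for k in range(1, 40):
    term = term @ (tau * Aj) / k
    ref = ref + term
```

Because $A_j$ is rank one, the closed form is exact (not an approximation), which the comparison confirms to machine precision.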
\subsection{Accelerated tau-leaping} \label{sect:AccelTauleap}
In this approximation method we build the matrices
\[ (B_{r})_{i,j}=\left\{ \begin{array}{rl} -a_{r}(x_{j}) & \mbox{if $i=j$}, \\ a_{r}(x_{j}) & \mbox{if $i-j =d_{r}$}, \\ 0 & \textnormal{otherwise} \end{array} \right. \]
where $a_{r}(x)$ are the propensity functions. The matrix $A$ in \eqref{eq_exact} can be written as
\[ A=\sum_{r=1}^M B_{r}\,. \]
The solution of the linear CME \eqref{eq_exact2} can be approximated by
\begin{equation} \label{eq_accelerated1} \mathcal{P}\left(\bar{t}+\tau\right)=e^{\tau A}\cdot \mathcal{P}\left(\bar{t}\right) \approx e^{\tau B_{1}} e^{\tau B_{2}} \cdots e^{\tau B_{M}} \cdot P\left(\bar{t}\right)\,. \end{equation}
Note that the evolution of state probability by $e^{\tau B_{j}}\cdot P\left(\bar{t}\right)$ describes the change in probability when only reaction $j$ fires in the time interval $\tau$. The corresponding evolution of the number of molecules that samples the evolved probability is
\[
x\left(\bar{t}+\tau\right)=x\left(\bar{t}\right)+V_{j}\, K\left(a_j\left(x\left(\bar{t}\right)\right) \tau\right), \]
where $K\left(a_j\left(x\left(\bar{t}\right)\right) \tau\right)$ is a random number drawn from a Poisson distribution with parameter $a_j\left(x\left(\bar{t}\right)\right) \tau$, and $V_{j}$ is the $j$-th column of the stoichiometry matrix.
The approximate solution \eqref{eq_accelerated1} accounts for the change in probability due to a sequential firing of reactions $M$, $M-1$, down to $1$. Sampling from the resulting probability density can be done by changing the system state sequentially consistent with the firing of each reaction. This results in the following accelerated tau-leaping algorithm:
\begin{equation}\label{eq_accelerated2} \begin{array}{l} \hat{X}_{M}= x\left(\bar{t}\right) \\ \textnormal{for } i=M,M-1,\cdots,1 \\ \qquad \hat{X}_{i-1}=\hat{X}_{i}+V_{i}\, K\left(a_{i}\left(\hat{X}_{i}\right)\tau\right) \\ x(\bar{t}+\tau)=\hat{X}_{0}. \end{array} \end{equation}
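The loop \eqref{eq_accelerated2} can be sketched in Python for the reversible isomerization system used in the experiments below (rates $c_1=c_2=10$). The `max(..., 0.0)` guard against negative propensities and the specific step count are our additions; this is a minimal illustration, not a production implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
c1, c2 = 10.0, 10.0
V = np.array([[-1, 1],        # columns: stoichiometry of reactions 1, 2
              [ 1, -1]])

def propensity(i, x):         # a_1 = c1*x1, a_2 = c2*x2
    return c1 * x[0] if i == 1 else c2 * x[1]

tau, steps = 0.005, 100
x = np.array([40, 40])
for _ in range(steps):
    # fire reactions M, M-1, ..., 1 sequentially within one step tau
    for i in (2, 1):
        lam = max(propensity(i, x) * tau, 0.0)   # guard against negative states
        x = x + V[:, i - 1] * rng.poisson(lam)
```

For this system each reaction exchanges one molecule between the two species, so the total count $x_1+x_2$ is conserved exactly by every sample path.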
Moreover, \eqref{eq_accelerated1} can also be written as:
\begin{eqnarray} \label{eq_accelerated3} \mathcal{P}\left(\bar{t}+\tau\right) &\approx& e^{\tau B_{1}} e^{\tau B_{2}} \cdots e^{\tau B_{M}} \cdot P\left(\bar{t}\right) \\
&\approx& \left( e^{\tau B_{1}} e^{\tau B_{2}} \cdots e^{\tau B_{\frac{M}{2}-1}} \right) \cdot \left(e^{\tau B_{\frac{M}{2}}} e^{\tau B_{\frac{M}{2}+1}} \cdots e^{\tau B_{M}} \cdot P\left(\bar{t}\right) \right). \nonumber \end{eqnarray}
Then, \eqref{eq_accelerated2} can be written as:
\begin{equation} \label{eq_accelerated4} \begin{array}{l} \hat{X}_{M}= x\left(\bar{t}\right) \\ \textnormal{for } i=M,M-1,\cdots,\frac{M}{2} \\ \qquad \hat{X}_{i-1}=\hat{X}_{i}+V_{i}\, K\left(a_{i}\left(\hat{X}_{M}\right)\tau\right) \\ \textnormal{for } i=\frac{M}{2}-1,\cdots,1 \\ \qquad \hat{X}_{i-1}=\hat{X}_{i}+V_{i}\, K\left(a_{i}\left(\hat{X}_{\frac{M}{2}-1}\right)\tau\right) \\ x(\bar{t}+\tau)=\hat{X}_{0}. \end{array} \end{equation}
\subsection{Symmetric accelerated tau-leaping} \label{sect:SymmetricTauleap}
A more accurate version of accelerated tau-leaping can be constructed by using symmetric Strang splitting \eqref{eq_strang} to approximate the matrix exponential in \eqref{eq_accelerated1}. Following the procedure used to derive \eqref{eq_accelerated2} leads to the following sampling algorithm:
\begin{equation}\label{eq_symmetric} \begin{array}{l} \hat{X}_{M}= x\left(\bar{t}\right) \\ \textnormal{for } i=M,M-1,\cdots,1 \\ \qquad \hat{X}_{i-1}=\hat{X}_{i}+V_{i}\, K\left(a_{i}\left(\hat{X}_{i}\right)\tau/2\right) \\ \textnormal{for } i=1,2,\cdots,M \\ \qquad \hat{X}_{i}=\hat{X}_{i-1}+V_{i}\, K\left(a_{i}\left(\hat{X}_{i-1}\right)\tau/2\right) \\ x(\bar{t}+\tau)=\hat{X}_{M}. \end{array} \end{equation}
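The symmetric sampling step (half-steps over the reactions in one order, then half-steps in the reverse order) can be sketched as follows, again on the isomerization system with $c_1=c_2=10$; the guard against negative propensities is our addition.

```python
import numpy as np

rng = np.random.default_rng(1)
c1, c2 = 10.0, 10.0
V = np.array([[-1, 1], [1, -1]])
a = [lambda x: c1 * x[0], lambda x: c2 * x[1]]   # propensities a_1, a_2

def symmetric_step(x, tau):
    for i in (1, 0):      # reactions M, ..., 1: half step tau/2 each
        x = x + V[:, i] * rng.poisson(max(a[i](x) * tau / 2, 0.0))
    for i in (0, 1):      # reactions 1, ..., M: half step tau/2 each
        x = x + V[:, i] * rng.poisson(max(a[i](x) * tau / 2, 0.0))
    return x

x = np.array([40, 40])
for _ in range(100):
    x = symmetric_step(x, 0.005)
```

As before, every firing exchanges one molecule between the two species, so the total $x_1+x_2$ is conserved exactly.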
\section{Numerical experiments} \label{sect:NumericalExperim}
The above approximation techniques are applied to two test systems: the reversible isomerization and the Schlogl reactions \cite{Cao_2004_stability}. The experimental results are presented in the following sections.
\subsection{Isomer reaction} \label{sect:Isomer}
The reversible isomer reaction system is \cite{Cao_2004_stability}
\begin{equation} \label{eqn:isomer} \ce{ x_1 <=>[\ce{c_1}][\ce{c_2}] x_2. } \end{equation}
The stoichiometry matrix and the propensity functions are:
\[ V= \left[\begin{array}{rr} -1 & 1 \\ 1 & -1 \end{array}\right]\,, \qquad \begin{array}{l} a_{1}(x)= c_{1}x_{1} \,, ~~\\ a_{2}(x) = c_{2}x_{2} \,. \end{array} \]
The reaction rate values are $ c_{1}=10$, $c_{2}=10$ (units), the time interval is $[0,T]$ with $T=10$ (time units), initial conditions are $x_{1}(0)=40$, $x_{2}(0)=40$ molecules, and maximum values of species are $Q^1=80$ and $Q^2=80$ molecules.
The exact exponential solution of the CME obtained from \eqref{eq_exact2} is a joint probability distribution vector for the two species at the final time. Figure \ref{fig:exact_isomer} shows that the histogram of 10,000 SSA solutions is very close to the exact exponential solution. The approximate solution using the sum of exponentials \eqref{eq_sumofexponent3} is illustrated in Figure \ref{fig:approx_isomer}. This approximation is not very accurate, since it uses only the current state of the system. The approximations based on the product of exponentials \eqref{eq__productofexponents} and on Strang splitting \eqref{eq_strang} are also not very accurate, hence their results are not reported.
\begin{figure}
\caption{Histograms of the isomer system \eqref{eqn:isomer} results at the final time T=10.}
\label{fig:exact_isomer}
\label{fig:approx_isomer}
\label{fig:isomer}
\end{figure}
The results reported in Figure \ref{fig:accelerated_isomer} indicate that for small time steps $\tau$ the accelerated tau-leap \eqref{eq_accelerated2} solution is very close to the results provided by the traditional explicit tau-leap. The symmetric accelerated tau-leap method \eqref{eq_symmetric} yields even better results, as shown in Figure \ref{fig:symmetric_isomer_1ten}. For small time steps the traditional and symmetric accelerated methods give similar results; for large time steps, however, the results of the symmetric accelerated method are considerably more stable.
\begin{figure}
\caption{Isomer system \eqref{eqn:isomer} solutions provided by the traditional tau-leap \eqref{eq_tauleap} and by accelerated tau-leap \eqref{eq_accelerated2} methods at the final time T=10 (units). A small time step of $\tau=0.01$ (units) is used. The number of samples for both methods is 10,000.}
\label{fig:accelerated_isomer}
\end{figure}
\begin{figure}
\caption{Histograms of isomer system \eqref{eqn:isomer} solutions obtained with SSA, traditional tau-leap \eqref{eq_tauleap}, and symmetric accelerated tau-leap \eqref{eq_symmetric} methods at the final time T=10. The number of samples is 10,000 for all methods.}
\label{fig:symmetric_isomer_1per}
\label{fig:symmetric_isomer_1en}
\label{fig:symmetric_isomer_1ten}
\end{figure}
\subsection{Schlogl reaction} \label{sect:Schlogl}
We next consider the Schlogl reaction system \cite{Cao_2004_stability}
\begin{equation} \label{eqn:schlogl} \begin{array}{r} \ce{ B_{1} + 2x <=>[\ce{c_1}][\ce{c_2}] 3x }\\ \ce{ B_{2} <=>[\ce{c_3}][\ce{c_4}] x } \end{array} \end{equation}
whose solution has a bi-stable distribution. Let $N_1$, $N_2$ be the numbers of molecules of species $B_1$ and $B_2$, respectively. The reaction stoichiometry matrix and the propensity functions are:
\[ \begin{array}{l} V= \begin{bmatrix} 1 & -1 & 1 & -1 \end{bmatrix} \\ \begin{array}{l} a_{1}(x)= \frac{c_{1}}{2}N_{1}x(x-1), \\ a_{2}(x) = \frac{c_{2}}{6}N_{1}x(x-1)(x-2), \\ a_{3}(x) = c_{3}N_{2}, \\ a_{4}(x) = c_{4}x. \end{array} \end{array} \]
The following parameter values (each in appropriate units) are used:
\begin{small} \[ \begin{array}{lll} c_{1}=3 \times 10^{-7}, &c_{2}=10^{-4}, &c_{3}=10^{-3}, \\ c_{4}=3.5, &N_{1}=1 \times 10^5, &N_{2}=2 \times10^5. \end{array} \] \end{small}
with the final time $T=4$ (units), the initial condition $x(0)=250$ molecules, and the maximum number of molecules $Q^1=900$.
Figure \ref{fig:exact_schlogl} illustrates the result of the exact exponential solution \eqref{eq_exact2} versus SSA. Figure \ref{fig:approx_exact_schlogl} reports the sum of exponentials \eqref{eq_sumofexponent3} result, which is not a very good approximation. The product of exponentials \eqref{eq__productofexponents} and Strang splitting \eqref{eq_strang} results are not reported here since they are poor approximations.
\begin{figure}
\caption{Histograms of Schlogl system \eqref{eqn:schlogl} results at final time T=4 (units).}
\label{fig:exact_schlogl}
\label{fig:approx_exact_schlogl}
\end{figure}
Figures \ref{fig:accelerated_schlogl} and \ref{fig:symmetric_schlogl} present the results obtained with the accelerated tau-leap and the symmetric accelerated tau-leap, respectively. For small time steps the results are very accurate. However, for large step sizes, the results quickly become less accurate. The loss of accuracy may be more pronounced for systems with more reactions. The accuracy can be improved to some extent using the strategies described in \eqref{eq_accelerated3} and \eqref{eq_accelerated4}.
\begin{figure}
\caption{Histograms of Schlogl system \eqref{eqn:schlogl} solutions with $\tau=0.0001$ (units), final time T=4 (units), and 10,000 samples.}
\label{fig:accelerated_schlogl}
\label{fig:symmetric_schlogl}
\end{figure}
\section{Conclusions} \label{sect:Conclusion}
This study proposes new numerical solvers for stochastic simulations of chemical kinetics. The proposed approach exploits the linearity of the CME and the exponential form of its exact solution. The matrix exponential appearing in the CME solution is approximated as a product of simpler matrix exponentials. This leads to an approximate (``numerical'') solution of the probability density evolved to a future time. The solution algorithms sample exactly this approximate probability density and provide extensions of the traditional tau-leap approach.
Different approximations of the matrix exponential lead to different numerical algorithms: Strang splitting, column splitting, accelerated tau-leap, and symmetric accelerated tau-leap. Current work by the authors focuses on improving the accuracy of these novel approximation techniques for stochastic chemical kinetics.
\appendix
\section{Example} \label{sec:example}
We exemplify the process of building the matrix $A$ of \eqref{eq_exact} for the isomer and Schlogl reactions.
\subsection{Isomer reaction}
Here, for simplicity, we exemplify the implementation of the system for the maximum values of species $Q^1 = 2$ and $Q^2 = 2$. According to \eqref{eq_Q}, $Q=(Q^{1}+1) \times (Q^{2}+1)=3^2=9$.
The vector $d$ according to \eqref{eq_d} is $[2,-2]$. The state matrix, which contains all possible states, is the following $3^2 \times 2$ matrix:
\[ \mathbf{x} = \begin{bmatrix} \ 0 & 1 &2 & 0 & 1 &2 & 0 & 1 &2 \\[0.3em] \ 0 & 0 &0 & 1 & 1 &1 & 2 & 2 &2 \end{bmatrix}^\top \in \mathbb{R}^{3^2 \times 2}. \]
The matrix $\mathbf{A} \in \mathbb{R}^{Q\cdot Q \times Q \cdot Q}$ As an example for a maximum number of species $Q^{1}=2$, $Q^{2}=2$ the matrix $\mathbf{A}$ is:
\begin{center} \begin{small} \begin{eqnarray} \nonumber \mathbf{A} = \begin{bmatrix} -a_{0}(\mathbf{x}_{1,:}) & 0 & a_{2}(\mathbf{x}_{3,:}) & 0 & 0 & 0 & 0 & 0 & 0 \\[0.3em] 0 & -a_{0}(\mathbf{x}_{2,:}) & 0 & \ddots & 0 & 0 & 0 & 0 & 0 \\[0.3em] a_{1}(\mathbf{x}_{1,:}) & 0 & -a_{0}(\mathbf{x}_{3,:}) & 0 & \ddots & 0 & 0 & 0 & 0 \\[0.3em] 0 & a_{1}(\mathbf{x}_{2,:}) & 0 & \ddots & 0 & \ddots & 0 & 0 & 0 \\[0.3em] 0 & 0 & a_{1}(\mathbf{x}_{3,:}) & 0 & \ddots & 0 & a_{2}(\mathbf{x}_{7,:}) & 0 & 0 \\[0.3em] 0 & 0 & 0 & \ddots & 0 & \ddots & 0 & a_{2}(\mathbf{x}_{8,:}) & 0 \\[0.3em] 0 & 0 & 0 & 0 & \ddots & 0 & -a_{0}(\mathbf{x}_{7,:}) & 0 & a_{2}(\mathbf{x}_{9,:}) \\[0.3em] 0 & 0 & 0 & 0 & 0 & \ddots & 0 & -a_{0}(\mathbf{x}_{8,:}) & 0 \\[0.3em] 0 & 0 & 0 & 0 & 0 & 0 & a_{1}(\mathbf{x}_{7,:}) & 0 & -a_{0}(\mathbf{x}_{9,:}) \end{bmatrix} \in \mathbb{R}^{9\times 9}\,. \end{eqnarray} \end{small} \end{center}
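For concreteness, this $9\times 9$ matrix can be assembled programmatically (rates $c_1=c_2=10$ as in Section \ref{sect:Isomer}); states are enumerated with $x_1$ varying fastest, matching the state matrix above, and transitions leaving the truncated grid are simply dropped. Note that only columns of states all of whose firings stay inside the grid sum to zero; truncation breaks probability conservation at the boundary.

```python
import numpy as np
from itertools import product

c1, c2 = 10.0, 10.0
states = [(x1, x2) for x2 in range(3) for x1 in range(3)]  # x1 varies fastest
index = {s: k for k, s in enumerate(states)}
stoich = [(-1, 1), (1, -1)]                    # reactions 1 and 2
prop = [lambda x: c1 * x[0], lambda x: c2 * x[1]]

A = np.zeros((9, 9))
for j, x in enumerate(states):
    for r in range(2):
        A[j, j] -= prop[r](x)                  # diagonal entry: -a0(x_j)
        t = (x[0] + stoich[r][0], x[1] + stoich[r][1])
        if t in index:                         # drop transitions off the grid
            A[index[t], j] += prop[r](x)
```

On the anti-diagonal of states reachable from $(1,1)$, namely $(2,0)$, $(1,1)$, $(0,2)$, all firings with nonzero propensity stay in the grid, so those columns of $A$ sum to zero.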
\subsection{Schlogl reaction}
Here, for simplicity, we exemplify the implementation of the system for the maximum value of the number of molecules $Q^1=5$. According to \eqref{eq_Q} the dimensions of $A$ are $\left(Q^1+1\right) \times \left(Q^1+1\right)= 6 \times 6$. The vector $d$ of \eqref{eq_d} for this system is $[1,-1,1,-1]$. All possible states for this system are contained in the state vector
\[ \mathbf{x} = [0,1,2, \cdots, 5]^\top \in \mathbb{R}^{6}. \]
The matrix $A$ for the maximum number of molecules $Q^1=5$ is the following tridiagonal matrix:
\begin{center} \begin{small} \begin{eqnarray} \nonumber \mathbf{A} = \begin{bmatrix} -a_{0}(\mathbf{x}_1) & a_{2}(\mathbf{x}_2)+a_{4}(\mathbf{x}_2) & 0 & 0 & 0 & 0 \\[0.3em] a_{1}(\mathbf{x}_1)+a_{3}(\mathbf{x}_1) & -a_{0}(\mathbf{x}_2) & \ddots & 0 & 0 & 0 \\[0.3em] 0 & a_{1}(\mathbf{x}_2) + a_{3}(\mathbf{x}_2) & \ddots & \ddots & 0 & 0 \\[0.3em] 0 & 0 & \ddots & \ddots & a_{2}(\mathbf{x}_5) + a_{4}(\mathbf{x}_5) & 0 \\[0.3em] 0 & 0 & 0 & \ddots & -a_{0}(\mathbf{x}_5) & a_{2}(\mathbf{x}_6) + a_{4}(\mathbf{x}_6) \\[0.3em] 0 & 0 & 0 & 0 & a_{1}(\mathbf{x}_5) + a_{3}(\mathbf{x}_5) & -a_{0}(\mathbf{x}_6) \end{bmatrix} \in \mathbb{R}^{6 \times 6}. \end{eqnarray} \end{small} \end{center}
\label{lastpage}
\end{document} |
\begin{document}
\begin{abstract}
By now, we have a product theorem in every finite simple group $G$ of Lie type,
with the strength of the bound depending only on the rank of $G$. Such theorems
have numerous consequences: bounds on the diameters of Cayley graphs, spectral
gaps, and so forth. For the alternating group $\Alt_n$, we have a quasipolylogarithmic
diameter bound (Helfgott-Seress 2014), but it does not rest on a product theorem.
We shall revisit the proof of the bound for $\Alt_n$, bringing it closer to the proof
for linear algebraic groups, and making some common themes clearer. As a result,
we will show how to prove a product theorem for $\Alt_n$ -- not of full strength,
as that would be impossible, but strong enough to imply the diameter bound.
\end{abstract} \maketitle \section{Introduction}
My personal route into the subject started with the following result. \begin{thm}[Product theorem \cite{Hel08}]\label{thm:asnat}
Let $G = \SL_2(\mathbb{Z}/p\mathbb{Z})$, $p$ a prime. Let $A\subset G$ generate $G$.
Then either
\[|A\cdot A\cdot A|\geq |A|^{1+\delta}\]
or
\[A^k = G,\]
where $\delta>0$ and $k\in \mathbb{Z}^+$ are absolute constants.\footnote{
It was soon determined that $k=3$ (\cite{MR2410393}, \cite{BNP}).} \end{thm}
Here $|S|$ is the number of elements of a set $S$, and $A^k$ denotes $\{a_1 \dotsc a_k : a_i\in A\}$. (We also write $A B$ for $\{a b:a\in A, b\in B\}$ and $A^{-1}$ for $\{a^{-1} : a\in A\}$.)
Theorem \ref{thm:asnat} gives us an immediate corollary on the diameter of any Cayley graph $\Gamma(G,A)$ of $G$. The {\em diameter} of a graph $\Gamma$ is the maximal distance $d(v_1,v_2)$ over all pairs of vertices $v_1$, $v_2$ of $\Gamma$; in turn, the distance $d(v_1,v_2)$ between two vertices is the length of the shortest path between them, where the length of a path is defined as its number of edges. In the particular case of a (directed) Cayley graph $\Gamma(G,A)$, the diameter equals the least $\ell$ such that every element $g\in G$ can be expressed as a product of at most $\ell$ elements of $A$.
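For small groups, the diameter in this sense can be computed directly by breadth-first search over products of generators. A minimal Python sketch (permutations as tuples, with the toy generating set $\{(0\,1),\,(0\,1\,2)\}$ of $\Sym(3)$ in 0-indexed notation):

```python
from itertools import permutations
from collections import deque

def compose(p, q):
    """Apply p, then q: (p*q)[i] = q[p[i]]."""
    return tuple(q[p[i]] for i in range(len(p)))

def cayley_diameter(n, gens):
    """Diameter of the directed Cayley graph of Sym(n) with generating set gens."""
    identity = tuple(range(n))
    dist = {identity: 0}
    queue = deque([identity])
    while queue:
        g = queue.popleft()
        for s in gens:
            h = compose(g, s)
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
    assert len(dist) == len(list(permutations(range(n))))  # gens generate Sym(n)
    return max(dist.values())

S = [(1, 0, 2), (1, 2, 0)]   # the transposition (0 1) and the 3-cycle (0 1 2)
diam = cayley_diameter(3, S)
```

Of course, such exhaustive search is only feasible for tiny groups; the whole point of the results discussed here is to bound the diameter without enumerating $G$.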
\begin{cor}\label{cor:nonat}
Let $G = \SL_2(\mathbb{Z}/p\mathbb{Z})$, $p$ a prime. Let $S\subset G$ generate $G$.
Then the diameter of the Cayley graph $\Gamma(G,S)$ is at most
\begin{equation}\label{eq:udun}(\log |G|)^C,\end{equation}
where $C$ is an absolute constant. \end{cor} \begin{proof}
Apply Theorem \ref{thm:asnat} repeatedly, to $A = S$, $A = S^3$, $A = S^9$, etc. After $j$ steps, either $S^{3^j k} = G$ or $|S^{3^j}| \geq |S|^{(1+\delta)^j} \geq 2^{(1+\delta)^j}$; the latter can hold only while $(1+\delta)^j \leq \log_2 |G|$, so every element of $G$ is a product of at most $k \cdot 3^{O(\log \log |G|)} = (\log |G|)^{O(1)}$ elements of $S$. \end{proof}
The product theorem has other applications, notably to spectral gaps and expander graphs (\cite{MR2415383}, \cite{MR2587341}, \cite{MR2892611}). It has been generalized several times, to the point where now it is known to hold for all finite simple\footnote{It is trivial to see that the theorem and corollary above still hold if $\SL_2(\mathbb{Z}/p\mathbb{Z})$ is replaced by the simple group $\PSL_2(\mathbb{Z}/p\mathbb{Z})$.} groups of Lie type and bounded rank \cite{BGT}, \cite{MR3402696}. When we say {\em bounded rank}, we mean that the constants $\delta$ and $C$ in these generalizations of Thm.~\ref{thm:asnat} and Cor.~\ref{cor:nonat} depend on the rank of the group $G$.
Babai's conjecture states that the bound (\ref{eq:udun}) holds for any finite, simple, non-abelian $G$, and any set of generators $S$ of $G$, with $C$ an {\em absolute} constant (i.e., one constant valid for all $G$). By the classification of finite simple groups (henceforth: CFSG), every finite, simple, non-abelian group $G$ is either (a) a simple group of Lie type, or (b) an alternating group $\Alt(n)$, or (c) one of a finite number of sporadic groups. Being finite in number, the sporadic groups are irrelevant for the purposes of the asymptotic bound (\ref{eq:udun}). It remains, then, to consider whether Babai's conjecture is true for $\Alt(n)$, and for simple, finite groups of Lie type whose rank goes to infinity.
Part of the problem is that, in either of these two cases, the natural generalization of Thm.~\ref{thm:asnat} is false: counterexamples due to Pyber and Spiga \cite{MR2898694}, \cite{MR2876252} show that $\delta$ has to depend on the rank of $G$, or on the index $n$ in $\Alt(n)$, at least if there are no additional conditions. Nevertheless, Babai's conjecture is still believed to be true.
Some of the ideas leading to Thm.~\ref{thm:asnat} and its generalizations were useful in the proof of the following result, even though the overall argument looked rather different. \begin{thm}[\cite{MR3152942}]\label{thm:pais}
Let $G = \Alt(n)$ or $G=\Sym(n)$. Then,
\begin{equation}\label{eq:pedestr}\diam G \leq e^{C (\log \log |G|)^4 \log \log \log |G|},\end{equation}
where $C$ is an absolute constant. \end{thm} Here we write $\diam G$ for the ``worst-case diameter'' \[\diam G = \max_{A\subset G: G = \langle A\rangle} \diam(\Gamma(G,A)), \] i.e., the same sort of quantity that we bounded in Cor.~\ref{cor:nonat}.
Theorem \ref{thm:pais} is not as strong as Babai's conjecture for $\Alt(n)$ or $\Sym(n)$, since the quantity on the right of
(\ref{eq:pedestr}) is larger than $(\log |G|)^C$. The proof of Thm.~\ref{thm:pais} did not go through the proof of an analogue of a product theorem; it used another kind of inductive process.
One of our main aims in what follows is to give a different proof of Theorem \ref{thm:pais}. Some of its elements are essentially the same as in the original proof, sometimes in improved or simplified versions. Others are more closely inspired by the tools developed for the case of groups of Lie type.
Theorem \ref{thm:pais} -- or rather a marginally weaker version thereof
(Thm.~\ref{thm:molop}), with an additional factor of $\log \log \log |G|$ in the exponent -- will follow as a direct consequence of the following product theorem, which is new. It is, naturally, weaker than a literal analogue of Thm.~\ref{thm:asnat}, since such an analogue would be false, by the counterexamples we mentioned. \begin{thm}\label{thm:jukuju} Let $A\subset \Sym(n)$ be such that $A=A^{-1}$, $e \in A$, and $\langle A\rangle$ is $3$-transitive. There are absolute constants
$C,c>0$ such that the following holds. Assume that $|A|\geq n^{C (\log n)^2}$.
Then either \begin{equation}\label{eq:uru}
|A^{n^C}|\geq |A|^{1+c \frac{\log \log |A|
}{(\log n)^2 \log \log n}}\end{equation} or \begin{equation}\label{eq:rororo}
\diam(\Gamma(\langle A\rangle,A))\leq n^C
\diam(G),\end{equation} where $G$ is a transitive group on $m\leq n$ elements such that either (a) $m\leq e^{-1/10} n$ or (b) $G\nsim \Alt(m), \Sym(m)$. \end{thm}
Our general objective will be to make the proof for $\Alt(n)$ and $\Sym(n)$ not just simpler but closer to that for groups of Lie type of bounded rank. Part of the motivation is that the next natural aim is to study in depth groups of Lie type of unbounded rank, which combine features of both kinds of groups.
{\em Overall idea.} To prove the growth of sets $A$ in a group $G$, we study the actions of the group $G$. First of all, every group acts on itself, by left and right multiplication, and by conjugation. The study of these actions is always useful; it gives us lemmas valid for every group. Then there are the actions that exist for a given kind of group.
A linear algebraic group acts by linear transformations on affine space. It then makes sense to see how the action of the group affects varieties, and what this tells us about sets of elements in the group.
In the case of the symmetric group $\Sym(\Omega)$, $|\Omega|=n$, we have no such nicely geometric action. What we do have is an action on a set $\Omega$ that, while completely unstructured, is very small compared to the group. This fact allows us to use short random walks to obtain elements whose action on $\Omega$, and that of their low powers, follows an almost uniform distribution.
It is then unsurprising that the strategies for linear algebraic groups and symmetric groups diverge: the actions that characterize the two kinds differ. Nevertheless, it is possible to unify the strategies to some extent. We shall see that the role played by {\em generic} elements -- in the sense of algebraic geometry -- in the study of growth in linear algebraic groups is roughly analogous to the role played in permutation groups by {\em random} elements -- in the sense of being produced by random walks.
{\em Further perspectives.} A ``purer'' product theorem would state that either (\ref{eq:uru}) holds or, say, $A^{n^{C \log n}} = G$. The switch to diameters in conclusion (\ref{eq:rororo}) is not just somewhat ungainly; it also slows down the recursion. If (\ref{eq:rororo}) were replaced by $A^{n^{C \log n}} = G$, we would then obtain an exponent of $3$ instead of $4$ in Theorem \ref{thm:pais}. Such a ``purer'' result is not contradicted by the existing counterexamples, and so remains a plausible goal.
Yet another worthwhile goal would be to remove the dependence on the Classification of Finite Simple Groups (CFSG). The proof here uses the structure theorem in \cite{MR599634}/\cite{MR758332}, which relies on CFSG. The proof in \cite{MR3152942} also depended on CFSG, for essentially the same reason: it used \cite[Thm.~1.4]{zbMATH00091732}, which uses \cite{MR599634}/\cite{MR758332}.
Incidentally, there is a flaw in \cite[Thm.~1.4]{zbMATH00091732} (proof and statement), as L.~Pyber pointed out to the author. We fix it in \S \ref{sec:babaiseress} (with input from Pyber); the amended statement is in Prop.~\ref{prop:finbo}. The bound in \cite{MR3152942} is not affected when we replace \cite[Thm.~1.4]{zbMATH00091732} by Prop.~\ref{prop:finbo} in the proof of \cite{MR3152942}.
{\em Notation.} We write actions on the right, i.e., if $G$ acts on $X$, and $g\in G$, $x\in X$, we write $x^g$ for the element to which $g$ sends $x$.
As is usual, we write $f(x) = O(g(x))$ to mean that there exists a constant $C>0$ (called an
{\em implied constant}) such that $|f(x)|\leq C g(x)$ for all large enough $x$. We also write $f(x)\ll g(x)$ to mean that $f(x) = O(g(x))$, and $f(x)\gg g(x)$ to mean, for $g$ taking positive values, that there is a constant $c>0$ (called, again, an implied constant) such that $f(x) \geq c g(x)$ for all large enough $x$. When we write $O^*(c)$, $c$ a non-negative real, we simply mean a quantity whose absolute value is at most $c$.
Given $h\in G$, we write $C(h)$ for the centralizer $\{g\in G: g h = h g\}$ of $h$. Given $H\leq G$, we write $C(H)$ for the centralizer $\{g\in G: g h = h g\; \forall h\in H\}$ of $H$.
As should be clear by now, and as is standard, we write $\Alt(\Omega)$ for the alternating group on a set $\Omega$, and $\Alt(m)$ for the abstract group isomorphic to $\Alt(\Omega)$ for any set $\Omega$ with $m$ elements. We define $\lbrack n\rbrack = \{1,2,\dotsc,n\}$.
{\bf Acknowledgements.} The author is supported by ERC Consolidator grant 648329 (GRANT) and by funds from his Humboldt professorship. He is deeply grateful to L\'aszl\'o Pyber for his extremely valuable suggestions and feedback, and to Henry Bradford and Vladimir Finkelshtein, for a very careful reading and many corrections.
\section{Toolbox}
\subsection{Special sets}
In the proofs of growth for groups of Lie type, some of the main tools are statements on intersections with varieties. A typical statement is of the following kind.
\begin{lem}\label{lem:oshut}
Let $G = \SL_2(K)$, $K$ a finite field. Let $A\subset G$ be a set of generators of $G$
with $A = A^{-1}$. Let $V$ be a
one-dimensional irreducible subvariety of $\SL_2$. Then, for
every $\delta>0$, either $|A^3|\geq |A|^{1+\delta}$ holds, or
the intersection of $A$ with $V$ has
\[\ll |A|^{\frac{1}{\dim \SL_2} + O(\delta)} = |A|^{1/3 + O(\delta)}\]
elements. The implied constants depend only on the degree of $V$. \end{lem}
Special statements of this kind were proved and used in \cite{Hel08} and \cite{HeSL3}, and have been central to the main strategy since then. They were fully generalized in \cite{MR3402696}. As it happens, Larsen and Pink, in the course of their work on finite subgroups of linear groups, had proven results of the same kind -- for subgroups $H$, instead of sets $A$, but for all simple linear groups $G$. Their procedure was adapted in \cite{BGT} to give essentially the same general result as in \cite{MR3402696}.
(Incidentally, the main purpose of Larsen and Pink was to prove without CFSG a series of statements that follows from CFSG. For this purpose, they developed tools that were, in some sense, both concrete and general. It was these features that let the tools be generalized later to sets, as opposed to subgroups. This is not the only time that preexistent work on doing without CFSG has proved fruitful in this context; we will see another instance when we examine random walks and permutation groups.)
There is an obvious difficulty in adapting such work to the study of permutation groups: in $\Sym(n)$, there seems to be no natural concept of a ``variety'', let alone of its degree and its dimension.
The approach we will follow here is to strip the proof of a statement such as Lemma \ref{lem:oshut} down to its barest bones, so that the main idea becomes a statement about an abstract group. We will later be able to see how to apply it to obtain a useful result on permutation groups.
The proof of Lemma \ref{lem:oshut} goes as follows. First, we show that, for generic $g_1, g_2\in \SL_2(\overline{K})$, the map $\phi:V\times V \times V\to G$ given by \[\phi(v_0,v_1,v_2) = v_0 \cdot g_1 v_1 g_1^{-1} \cdot g_2 v_2 g_2^{-1}\] is {\em almost-injective}, in the sense that the preimage of a generic point of (the closure of) the image is zero-dimensional. ``A generic point'' here means ``a point outside a subvariety of positive codimension''. Similarly, ``for $g_1$, $g_2$ generic'' means that the pairs $(g_1,g_2)$ for which the map $\phi$ is {\em not} almost-injective lie in a subvariety $W$ of positive codimension in $\SL_2\times \SL_2$. Now, because $A$ generates $G$, a general statement on {\em escape from subvarieties} shows that there exists a pair $(g_1,g_2)\in A^k \times A^k$ outside $W$, where $k$ is a constant depending only on the degree of $V$. (``Escape from subvarieties'' was an argument known before \cite{Hel08}. The statement in \cite[Prop.~3.2]{MR2129706} is over $\mathbb{C}$, but the argument of the proof there is valid over an arbitrary field; see, e.g., \cite[Prop.~4.1]{HeSL3}.)
Then we examine the image of $(A\cap V)\times (A\cap V) \times (A\cap V)$ under
$\phi$. If $\phi$ is injective, then the image has exactly the same size as the domain, namely, $|A\cap V|^3$. In general, for $\phi$ almost-injective, the image will have size $\gg |A\cap V|^3$. At the same time, the image is contained in $A^{1 + 2 k + 1 + 2k + 2k + 1 + 2k} = A^{8 k + 3}$. Hence
\[|A\cap V| \leq \left|A^{8 k + 3}\right|^{1/3}.\]
Let us prove an extremely simple general statement that expresses the main idea of the statement we have just sketched. \begin{lem} Let $G$ be a group. Let $A,B\subset G$ be finite. Then
\[|A B^{-1}| \geq \frac{|A| |B|}{\left|A A^{-1} \cap B B^{-1}\right|} .\] In particular, if $A A^{-1} \cap B B^{-1} = \{e\}$, then
\[|A B^{-1}| \geq |A| |B|.\] \end{lem} The condition $A A^{-1} \cap B B^{-1} = \{e\}$ is fulfilled if, for instance, $A\subset H_1$, $B\subset H_2$, where $H_1$, $H_2$ are subgroups of $G$ with $H_1\cap H_2 =\{e\}$. \begin{proof} Consider the map $\phi:A\times B\to A B^{-1} \subset G$ defined by \[(a,b) \mapsto a b^{-1}.\] Clearly, as with any map from $A\times B$ to $G$, \begin{equation}\label{eq:lolalola}
|\im(\phi)| \geq \frac{|A\times B|}{\max_{x\in G} |\phi^{-1}(x)|},\end{equation}
and of course $|A B^{-1}|\geq |\im(\phi)|$.
So, let us bound $\phi^{-1}(x)$. Say $\phi(a,b)=x=\phi(a',b')$. Then \begin{equation}\label{eq:doso}a^{-1} a' = b (b')^{-1}.\end{equation} In particular, given $a$, $b$ and $b (b')^{-1}$, we can reconstruct $a'$ and $b'$. Moreover, again by (\ref{eq:doso}),
$b (b')^{-1} $ lies in $A A^{-1} \cap B B^{-1}$. Letting $(a,b)$ be fixed, and letting $(a',b')$ vary among all elements of $\phi^{-1}(x)$, we see that
\[\left|\phi^{-1}(x)\right| \leq |A A^{-1} \cap B B^{-1} |.\] By (\ref{eq:lolalola}), we are done. \end{proof}
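Since the lemma uses only the group axioms, it can be checked on any finite group. A small Python sanity check in $G=\Sym(4)$, with permutations as tuples ($p[i]$ = image of $i$) and `mul(p, q)` meaning ``apply $p$, then $q$''; the subsets $A$, $B$ below are arbitrary choices:

```python
from itertools import permutations

def mul(p, q):
    return tuple(q[p[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

perms = list(permutations(range(4)))
A = set(perms[:8])
B = set(perms[5:14])

ABinv = {mul(a, inv(b)) for a in A for b in B}
AAinv = {mul(a, inv(a2)) for a in A for a2 in A}
BBinv = {mul(b, inv(b2)) for b in B for b2 in B}

lhs = len(ABinv)
rhs = len(A) * len(B) / len(AAinv & BBinv)
```

The lemma guarantees `lhs >= rhs` for every choice of the sets $A$, $B$.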
We can apply the same idea to obtain growth assuming only that an intersection of many sets is empty. \begin{lem}\label{lem:tagore} Let $G$ be a group. Let $A_0, A_1, \dots, A_k \subset G$ be finite. Then there is at least one $0\leq j\leq k-1$ such that
\[\left|A_j A_{j+1}^{-1}\right| \geq \frac{\mathbf{A}^{\frac{k+1}{k}}}{
\left|\bigcap_{j=0}^{k} A_j A_j^{-1}\right|^{1/k}},\]
where $\mathbf{A}$ is the geometric average $(\prod_{j=0}^k |A_j|)^{1/(k+1)}$. In particular, if \begin{equation}\label{eq:notew}
\bigcap_{j=0}^{k} A_j A_j^{-1} = \{e\},\end{equation} then
\[\left|A_j A_{j+1}^{-1}\right| \geq \mathbf{A}^{\frac{k+1}{k}}.\] \end{lem} We will typically apply this lemma to sets $A_j$ that are conjugates of each other, and so all of the same size $\mathbf{A}$. If $A\subset H$, $A_j = g_j A g_j^{-1}$ and $\bigcap_{j=0}^k g_j H g_j^{-1} = \{e\}$, then condition (\ref{eq:notew}) holds. \begin{proof}
Consider the map \[\phi:A_0\times A_1\times\dotsc \times A_k\to
A_0 A_1^{-1} \times A_1 A_2^{-1} \times \dotsc \times A_{k-1} A_k^{-1}\]
given by
\[(a_0,a_1,\dotsc,a_k) \mapsto \left(a_0 a_1^{-1}, a_1 a_2^{-1},\dotsc,
a_{k-1} a_k^{-1}\right).\]
Clearly,
\[\prod_{j=0}^{k-1} |A_j A_{j+1}^{-1}| = |\im(\phi)|
\geq \frac{\prod_{j=0}^k |A_j|}{\max_{x\in G} |\phi^{-1}(x)|}.\]
Say $\phi(a_0,a_1,\dotsc,a_k) = x = \phi(a_0',a_1',\dotsc,a_k')$. Then,
since $a_j a_{j+1}^{-1} = a_j' \left(a_{j+1}'\right)^{-1}$ for all $0\leq j<k$,
we see that $a_j^{-1} a_j' = a_{j+1}^{-1} a_{j+1}'$ for all $0\leq j < k$.
Thus, $(a_0',a_1',\dotsc,a_k')$ is determined by $(a_0,a_1,\dotsc,a_k)$
and the single element
\[a_0^{-1} a_0' = a_1^{-1} a_1' = \dotsc = a_k^{-1} a_k',\]
which lies in $\bigcap_{j=0}^{k} A_j A_j^{-1}$. We conclude that
\[|\phi^{-1}(x)|\leq \left|\bigcap_{j=0}^{k} A_j A_j^{-1}\right|.\] \end{proof}
We will later see how to obtain the weak orthogonality condition \begin{equation}\label{eq:radapar}
\bigcap_{j=0}^k g_j H g_j^{-1} = \{e\}\end{equation}
for some kinds of subgroups of permutation groups.
\subsection{Subgroups and quotients}
We will need a couple of basic lemmas on subgroups and quotients. As explained in \cite[\S 3.1--3.2]{MR3152942} and \cite[\S 4.1]{MR3348442}, they are all easy applications of an orbit-stabilizer principle for sets \cite[Lemma 4.1]{MR3348442}. We can also prove them by using the pigeonhole principle directly.
For $G$ a group and $H \le G$, we write $\pi_{G/H}: G \to G/H$ for the map taking each $g \in G$ to the right coset $Hg$ containing $g$. Thus, for instance, $|\pi_{G/H}(A)|$ equals the number of distinct cosets $H g$ intersecting $A$.
\begin{lem}[{\cite[Lem.\ 7.2]{HeSL3}}]\label{lem:duffy} Let $G$ be a group and $H$ a subgroup thereof. Let $A\subseteq G$ be a non-empty finite set. Then \begin{equation}\label{eq:vento}
|A A^{-1} \cap H| \geq \frac{|A|}{|\pi_{G/H}(A)|}
\geq \frac{|A|}{\lbrack G:H\rbrack}.\end{equation} \end{lem} \begin{proof}
By pigeonhole, there is a coset $H g$ of $H$
containing at least $|A|/|\pi_{G/H}(A)|$
elements of $A$. Fix $g_0\in A\cap H g$.
Then, for each $g_1\in A\cap H g$, we obtain a distinct element
$g_0 g_1^{-1} \in A A^{-1}\cap H$. \end{proof}
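Lemma \ref{lem:duffy} is equally easy to test numerically. This Python sketch (ours; the subgroup $H$ and the set $A$ are toy choices inside $\Sym(3)$) checks both inequalities of (\ref{eq:vento}).

```python
from itertools import permutations

n = 3
G = list(permutations(range(n)))  # Sym(3)

def mul(a, b):
    return tuple(b[a[i]] for i in range(n))

def inv(a):
    r = [0] * n
    for i, j in enumerate(a):
        r[j] = i
    return tuple(r)

H = {(0, 1, 2), (1, 0, 2)}                        # the subgroup {e, (0 1)}
A = {(0, 1, 2), (1, 0, 2), (1, 2, 0), (2, 1, 0)}  # an arbitrary subset of G

def coset(g):                     # the right coset H g, as a hashable set
    return frozenset(mul(h, g) for h in H)

n_cosets_hit = len({coset(g) for g in A})         # |pi_{G/H}(A)|
AAinv_in_H = {mul(a, inv(b)) for a in A for b in A} & H

assert len(AAinv_in_H) * n_cosets_hit >= len(A)
assert len(AAinv_in_H) * (len(G) // len(H)) >= len(A)   # [G:H] bound
print(len(A), n_cosets_hit, len(AAinv_in_H))
```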
\begin{lem}\label{lem:durdo} Let $G$ be a group and $H$ a subgroup thereof. Let $A\subseteq G$ be a non-empty finite set. Then, for any $k\geq 1$, \begin{equation}\label{eq:verento}
\left|A^{k + 1}\right|
\geq \frac{\left|A^k \cap H\right|}{
\left|A A^{-1} \cap H\right|} \cdot |A|.\end{equation} \end{lem} In other words, growth in a subgroup implies growth in the group. \begin{proof}
It is clear that
\[\left|A^{k + 1}\right| \geq
\left|\left(A^k \cap H\right) \cdot A\right|
\geq \left|A^k \cap H\right|\cdot \left|\pi_{G/H}(A)\right|.\]
At the same time, by Lemma \ref{lem:duffy},
\[\left|\left(A A^{-1} \cap H\right)\right| \cdot \left|\pi_{G/H}(A)\right|
\geq |A|.\] \end{proof}
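The same machinery checks (\ref{eq:verento}) on a small example; the sketch below is ours, with arbitrary toy data in $\Sym(3)$.

```python
from itertools import permutations

n = 3

def mul(a, b):
    return tuple(b[a[i]] for i in range(n))

def inv(a):
    r = [0] * n
    for i, j in enumerate(a):
        r[j] = i
    return tuple(r)

def power_set(A, k):              # A^k: all products of exactly k factors
    P = set(A)
    for _ in range(k - 1):
        P = {mul(p, a) for p in P for a in A}
    return P

H = {(0, 1, 2), (1, 0, 2)}        # the subgroup {e, (0 1)}
A = {(0, 1, 2), (1, 2, 0)}        # {e, (0 1 2)}, an arbitrary choice
k = 2

lhs = len(power_set(A, k + 1))
num = len(power_set(A, k) & H)
den = len({mul(a, inv(b)) for a in A for b in A} & H)
assert den > 0 and lhs * den >= num * len(A)
print(lhs, num, den)
```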
\begin{lem}[{\cite[Lem.\ 3.7]{MR3152942}}]\label{lem:subcos} Let $G$ be a group, let $H,K$ be subgroups of $G$ with $H\leq K$, and let $A\subseteq G$ be a non-empty finite set. Then
\[|\pi_{K/H}(A A^{-1}\cap K)| \geq \frac{|\pi_{G/H}(A)|}{|\pi_{G/K}(A)|}
\geq \frac{|\pi_{G/H}(A)|}{\lbrack G:K\rbrack}.\] \end{lem} In other words: if $A$ intersects $r \lbrack G:H\rbrack$ cosets of $H$ in $G$, then $A A^{-1}$ intersects at least $r \lbrack G:H\rbrack/\lbrack G:K\rbrack = r \lbrack K:H\rbrack$ cosets of $H$ in $K$. (As usual, all of our cosets are right cosets $H g$, $K g$, etc.) We quote the proof in \cite[Lem.\ 3.7]{MR3152942}. \begin{proof} Since $A$ intersects
$|\pi_{G/H}(A)|$ cosets of $H$ in $G$ and
$|\pi_{G/K}(A)|$ cosets of $K$ in $G$, and every coset of $K$ in $G$ is a disjoint union of cosets of $H$ in $G$, the pigeonhole principle implies that there exists a coset $K g$ of $K$ such that $A$ intersects at least
$k = |\pi_{G/H}(A)|/|\pi_{G/K}(A)|$ cosets $H a \subseteq K g$. Let $a_1,\ldots,a_k$ be elements of $A$ in distinct cosets of $H$ in $Kg$. Then $a_i a_1^{-1}\in AA^{-1}\cap K$ for each
$i=1,\ldots, k$. Note that $H a_1 a_1^{-1},\ldots,H a_k a_1^{-1}$ are $k$ distinct cosets of $H$. \end{proof}
\subsection{Graphs and random walks}
For us, a graph is a directed graph, that is, a pair $(V,E)$, where $V$ is a set and $E$ is a subset of the set of {\em ordered} pairs of elements of $V$. (We allow loops, that is, pairs $(v,v)$.) A multigraph is the same as a graph, but with $E$ a multiset, i.e., edges may have multiplicity $>1$.
Given a group $G$ and a set of generators $A\subset G$, the {\em Cayley graph} $\Gamma(G,A)$ is defined to be the pair $(G,\{(g,g a):g\in G, a\in A\})$. It is connected because $A$ is a set of generators. Given a group $G$, a set of generators $A\subset G$ and a set $X$ on which $G$ acts, the {\em Schreier graph} $\Gamma(G,A;X)$ is the pair $(X,\{(x,x^a): x\in X, a\in A\})$.
We take a {\em random walk} on a graph or multigraph $\Gamma$ by starting at a given vertex $v_0$ and deciding randomly, at each step, to which neighbor $w$ of our current location $v$ to move. (A {\em neighbor} of $v$ is a vertex $w$ such that $(v,w)\in E$.) We choose $w$ with uniform probability among the neighbors of $v$, if $\Gamma$ is a graph, or with probability proportional to the multiplicity of $w$, if $\Gamma$ is a multigraph.
In a {\em lazy random walk}, at each step, we first throw a fair coin to decide whether we are going to bother to move at all. (Of course, if we decide to move, and $(v,v)$ is an edge, we might move from $v$ to itself.) Our random walks will always be lazy, for the sake of eliminating some technicalities.
We say that the {\em $(\ell_\infty,\epsilon)$-mixing time} in a regular, symmetric (multi)graph $\Gamma=(V,E)$ is at most $t$ if, for every (lazy)
random walk of length $\geq t$, the probability that it ends at any given vertex lies between $(1-\epsilon)/|V|$ and
$(1+\epsilon)/|V|$. We will use the fact that (multi)graphs with few vertices have small mixing times.
\begin{prop}\label{prop:chudo} Let $\Gamma$ be a connected, regular and symmetric multigraph of valency $d$ and with $N$ vertices. Then the $(\ell_\infty,\epsilon)$-mixing time is at most $N^2 d \log(N/\epsilon)$. \end{prop} \begin{proof}
This is a well-known fact; see, e.g., the exposition in
\cite[\S 6]{MR3348442}. The main idea is to study the spectrum of the
{\em adjacency operator}, meaning the
operator $\mathscr{A}$ taking each function $f:V\to \mathbb{C}$ to
a function $\mathscr{A}f$ whose value at $v$ is the average of $f(w)$
over the neighbors $w$ of $v$ in the graph $\Gamma$. The connectedness
of $\Gamma$ is used to show that, for every non-constant eigenfunction
of $\mathscr{A}$, the corresponding eigenvalue $\lambda$ cannot
be too close to $1$; it is at most $1-1/(N^2 d)$. The bound on the mixing
time then follows. \end{proof}
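The convergence asserted by Prop.~\ref{prop:chudo} can be observed directly. The sketch below (illustrative only; the group and generators are a toy choice) iterates the exact distribution of a lazy walk on the Cayley graph of $\Sym(3)$ with respect to the three transpositions, and checks $\ell_\infty$-closeness to the uniform distribution.

```python
from itertools import permutations

n = 3
V = list(permutations(range(n)))           # vertices: the elements of Sym(3)

def mul(a, b):
    return tuple(b[a[i]] for i in range(n))

gens = [(1, 0, 2), (0, 2, 1), (2, 1, 0)]   # the three transpositions (symmetric set)
d = len(gens)

dist = {v: 0.0 for v in V}
dist[tuple(range(n))] = 1.0                # start at the identity
for _ in range(100):                       # lazy step: stay put with prob. 1/2
    new = {v: 0.5 * p for v, p in dist.items()}
    for v, p in dist.items():
        for g in gens:
            new[mul(v, g)] += 0.5 * p / d
    dist = new

uniform = 1.0 / len(V)
max_dev = max(abs(p - uniform) for p in dist.values())
assert max_dev < 1e-6
print(max_dev)
```

Laziness matters here: this Cayley graph is bipartite, so the non-lazy walk would not converge at all.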
In particular, Prop.~\ref{prop:chudo} holds when $\Gamma$ is any Schreier graph
$\Gamma(G,A;\Omega^{(k)})$ of the action of a permutation group $G\leqslant \Sym(\Omega)$ on the set $\Omega^{(k)}$ of $k$-tuples of distinct elements of $\Omega$. The point is that $N = \left|\Omega^{(k)}\right|\leq |\Omega|^k$ is very small compared to $\Sym(\Omega)$ (which is of course of size $|\Omega|!$) for $k$ bounded. We can make sure that $A$ is small as well, by the following simple lemma.
\begin{lem}\label{lem:dustu}
Let $A\subset \Sym(n)$. Then there is a subset $A_0\subset A\cup A^{-1}$
such that $\langle A_0\rangle = \langle A\rangle$, $|A_0|\leq 4 n$
and $A_0 = A_0^{-1}$. \end{lem} \begin{proof} Choose an element $g_1\in A$, and then an element $g_2\in A$ such that $\langle g_1\rangle \lneq \langle g_1,g_2\rangle$, and then an element $g_3\in A$ such that $\langle g_1,g_2\rangle \lneq \langle g_1,g_2,g_3\rangle$, \dots Since the longest subgroup chain in $\Sym(n)$ is of length $\leq 2n-3$ \cite{MR860123}, we must stop in $r\leq 2n-3<2n$ steps. Let $A_0 = \{g_1,g_1^{-1},\dotsc,g_r,g_r^{-1}\}$. \end{proof}
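The greedy procedure in the proof is easy to implement. The sketch below (ours; the generating set is a toy choice) reduces a redundant generating set of $\Sym(4)$, computing subgroup closures by brute force.

```python
from itertools import permutations

n = 4
e = tuple(range(n))

def mul(a, b):
    return tuple(b[a[i]] for i in range(n))

def inv(a):
    r = [0] * n
    for i, j in enumerate(a):
        r[j] = i
    return tuple(r)

def generated(S):                 # subgroup generated by S (finite group: BFS closure)
    grp, frontier = {e}, {e}
    while frontier:
        frontier = {mul(g, s) for g in frontier for s in S} - grp
        grp |= frontier
    return grp

# A redundant generating set of Sym(4) (toy choice).
A = [(1, 0, 2, 3), (0, 2, 1, 3), (0, 1, 3, 2), (1, 2, 0, 3), (3, 2, 1, 0)]

A0 = []                           # greedy: keep g only if it enlarges <A0>
for g in A:
    if g not in generated(A0 + [e]):
        A0.append(g)

A0_sym = set(A0) | {inv(g) for g in A0}
assert generated(A0_sym) == generated(A)
assert len(A0_sym) <= 4 * n
print(len(A), len(A0_sym))
```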
The point is that, while we cannot assume we can produce random, uniformly distributed elements of $G=\langle A\rangle$ as short products in $A$ (we cannot assume what we are trying to prove, namely, that the diameter is small), we can take short, random products of elements of $A$, and their action on $\Omega^{(k)}$ is like that of random, uniformly distributed elements. This observation was already used in \cite{BBS04} to prove the following. \begin{lem}[\cite{BBS04}]\label{lem:bbs}
Let $A\subset \Sym(\Omega)$, $|\Omega|=n$,
be such that $A=A^{-1}$ and $G=\langle A\rangle$
is $3$-transitive. Assume there is a $g\in A$, $g\ne e$, with
$|\supp(g)|\leq (1/3-\epsilon) n$, $\epsilon>0$. Then
\[\diam(\Gamma(G,A)) \ll_\epsilon n^8 (\log n)^c,\]
where $c$ is an absolute constant. \end{lem} \begin{proof}[Sketch of proof]
See \cite{BBS04} or the exposition in \cite[\S 6.2]{MR3348442}.
The main idea is as follows. Let $A_0$ be as in Lemma \ref{lem:dustu},
and let $h\in A_0^{m} \subset A^{m}$ be the outcome
of a random walk on $A_0$ of length $\leq m$, where
$m \geq 4 n^3 \log(n/\epsilon')$ and $\epsilon'=\epsilon/100$ (say). Then,
by Prop.~\ref{prop:chudo}, for any $x,y\in \Omega$, the probability
that $h$ takes $x$ to $y$ is almost exactly $1/n$. In particular,
for $x\in \supp(g)$, the probability that $x^h\in \supp(g)$ is almost
exactly $|\supp(g)|/n$. Hence
\[\left|\supp(g) \cap \supp\left(h g h^{-1}\right)\right|\lesssim
\frac{|\supp(g)|^2}{n} \leq \left(\frac{1}{3} - \epsilon\right)|\supp(g)|.\]
A quick calculation shows that the commutator
$\lbrack g, h g h^{-1}\rbrack = g^{-1} \left(h g h^{-1}\right)^{-1} g \left(h g h^{-1}\right)$ obeys
\[\left|\supp\left(\left\lbrack g, h g h^{-1}\right\rbrack\right)\right| \leq 3
\left|\supp(g) \cap \supp\left(h g h^{-1}\right)\right|,\]
and so, for $g' = \left\lbrack g, h g h^{-1}\right\rbrack$,
$|\supp(g')|\lesssim 3 |\supp(g)|^2/n \leq (1-3\epsilon) |\supp(g)|$.
We iterate; after $O(\log \log n)$ steps, we obtain an element $h$ of
support of size $2$ or $3$. (Additional care is taken in the process so that
our element $h$ is never trivial. It is, in fact, convenient to take
$m \geq 4 n^5 \log(n/\epsilon')$ from the beginning, so that the probability
that $h$ takes a pair $(x,x')\in \Omega^{(2)}$ to a pair
$(y,y')\in \Omega^{(2)}$ is almost exactly $1/\left|\Omega^{(2)}\right| = 1/(n(n-1))$.)
We conjugate $h$ by elements of $A_0^{n^3}$
to obtain a set $C$ consisting of all $2$-cycles or $3$-cycles. It is
clear that $\diam(\Gamma(G,C))\ll n$. \end{proof}
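The support bound for the commutator of $g$ with its conjugate $h g h^{-1}$ rests on the general fact that, for any permutations $a,b$ with $\Delta = \supp(a)\cap\supp(b)$, one has $\supp(\lbrack a,b\rbrack)\subseteq \Delta\cup\Delta^a\cup\Delta^b$. The following Python sketch (ours, not part of the source) tests this on random data, with $a$ of small support.

```python
import random

n = 20
random.seed(1)

def mul(a, b):
    return tuple(b[a[i]] for i in range(n))

def inv(a):
    r = [0] * n
    for i, j in enumerate(a):
        r[j] = i
    return tuple(r)

def supp(a):
    return {i for i in range(n) if a[i] != i}

ident = tuple(range(n))
for _ in range(200):
    # g: a random 4-cycle (small support); h: a random permutation
    pts = random.sample(range(n), 4)
    g = list(ident)
    for u, v in zip(pts, pts[1:] + pts[:1]):
        g[u] = v
    g = tuple(g)
    h = list(ident)
    random.shuffle(h)
    h = tuple(h)

    b = mul(mul(h, g), inv(h))                  # h g h^{-1}
    comm = mul(mul(inv(g), inv(b)), mul(g, b))  # [g, h g h^{-1}]
    delta = supp(g) & supp(b)
    moved = supp(comm)
    assert moved <= delta | {g[x] for x in delta} | {b[x] for x in delta}
    assert len(moved) <= 3 * len(delta)
```

In particular, when the two supports are disjoint the commutator is trivial, which is why care must be taken to keep the element non-trivial.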
We shall now use short random walks to construct elements $g_j$ such that a weak orthogonality condition in the sense of (\ref{eq:radapar}) holds for some kinds of sets $B$. By an {\em orbit} of a set $B\subset \Sym(\Omega)$ we mean a subset of $\Omega$ of the form $x^B$, $x\in \Omega$.
\begin{lem}\label{lem:shortorb}
Let $A,B\subset \Sym(\Omega)$, $|\Omega|=n$, $e\in B$. Let
$0<\rho<1$.
Assume that
$\langle A\rangle$ is $2$-transitive, and that $B$ has no
orbits of length $>\rho n$. Then there are
$g_1, g_2,\dotsc,g_k\in \left(A\cup A^{-1}\cup \{e\}\right)^m$,
$k\ll (\log n)/|\log \rho|$,
$m\ll n^6 \log n$, such that
\[\bigcap_{j=1}^k g_j B g_j^{-1} = \{e\}.\] \end{lem} If we required only that $g_j\in \langle A\rangle$ (and not $g_j\in (A\cup A^{-1}\cup \{e\})^m$), and $B$ were assumed to be a group, then this Lemma would be the ``splitting lemma'' in \cite[\S 3]{Bab82}. The fact that the proof can be adapted illustrates what we were saying: short random products act on $\Omega^{(2)}$ as random elements do. Prop.~5.2 in \cite{MR3152942} is an earlier generalization of Babai's ``splitting Lemma'', based on the same idea. \begin{proof}
Let $g_1,\dotsc,g_k$ be the outcome of $k$ independent
random walks of length $\leq m$ on $A_0$, where
$m \geq 4 n^6 \log(n/\epsilon)$, $\epsilon>0$ and
$A_0\subset A \cup A^{-1}$ is as in Lemma \ref{lem:dustu}. Then, by Prop.~\ref{prop:chudo}, for any $(x,y), (x',y')\in \Omega^{(2)}$ and any $1\leq j\leq k$, the probability that $g_j$ takes $(x,y)$ to $(x',y')$ lies between
$(1-\epsilon)/\left|\Omega^{(2)}\right|$ and
$(1+\epsilon)/\left|\Omega^{(2)}\right|$.
Since $B$ has no orbits of length $> \rho n$, there are at most $\rho \left|\Omega^{(2)}\right|$ pairs $(x',y')\in \Omega^{(2)}$ such that $x'$ and $y'$ lie in the same orbit of $B$. Hence, for any $(x,y)\in \Omega^{(2)}$ and any $1\leq j\leq k$, the probability that $x^{g_j}$ and $y^{g_j}$ lie in the same orbit of $B$ is at most $(1+\epsilon) \rho$. Since $g_1,\dotsc,g_k$ were chosen independently, it follows that the probability that $x^{g_j}$ and $y^{g_j}$ are in the same orbit for every $1\leq j\leq k$ is at most $((1+\epsilon) \rho)^k$.
Now, $x^{g_j}$ and $y^{g_j}$ are in the same orbit for every $1\leq j\leq k$ if and only if $x$ and $y$ are in the same orbit of $B' = \bigcap_{j=1}^k g_j B g_j^{-1}$. The probability that at least two distinct $x$, $y$ lie in the same orbit of $B'$ is therefore at most \[n^2 ((1+\epsilon) \rho)^k .\] We let\footnote{We can assume $\rho\leq (n-1)/n$. Thus
$\epsilon= \rho^{-1/2}-1$ implies $\epsilon\gg 1/n$, and so
$\log(n/\epsilon) \ll \log n$.}
$\epsilon= \rho^{-1/2}-1$, so that $(1+\epsilon) \rho = \rho^{1/2}$. Then, for $k > 2 (\log n^2)/|\log \rho|$, \[n^2 ((1+\epsilon) \rho)^k < 1.\] In other words, with positive probability, no two distinct $x$, $y$ lie in the same orbit of $B'$, i.e., $B'$ equals $\{e\}$. Thus, there exist $g_1,\dotsc,g_k$ such that $B' = \{e\}$. \end{proof}
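In the simplest cases a single conjugate already splits the orbits. The toy Python sketch below (ours; $B$ is the subgroup $\{e,(0\,1)\}$ of $\Sym(4)$ and $g_2$ the transposition $(1\,2)$) computes an intersection $\bigcap_j g_j B g_j^{-1}$ explicitly.

```python
n = 4
e = tuple(range(n))

def mul(a, b):
    return tuple(b[a[i]] for i in range(n))

def inv(a):
    r = [0] * n
    for i, j in enumerate(a):
        r[j] = i
    return tuple(r)

def conj_set(g, S):               # g S g^{-1}, elementwise
    return {mul(mul(g, s), inv(g)) for s in S}

B = {e, (1, 0, 2, 3)}             # the subgroup {e, (0 1)}; orbits {0,1},{2},{3}
g1, g2 = e, (0, 2, 1, 3)          # conjugating by (1 2) moves the orbit {0,1} to {0,2}

B_prime = conj_set(g1, B) & conj_set(g2, B)
assert B_prime == {e}
print(sorted(B_prime))
```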
It is a familiar procedure in combinatorics (sometimes called the {\em probabilistic method}) to prove that a lion can be found at a random place of the city with positive probability, and to conclude that there must be a lion in the city. What we have done is prove that, after a short random walk, we come across a lion with positive probability (and so there is a lion in the city).
\begin{cor}\label{cor:lalmo}
Let $A,B\subset \Sym(\Omega)$, $|\Omega|=n$. Let
$0<\rho<1$. Assume that
$\langle A\rangle$ is $2$-transitive, and that $B B^{-1}$ has no
orbits of length $>\rho n$. Then there is a
$g\in \left(A\cup A^{-1}\cup \{e\}\right)^m$,
$m\ll n^6 \log n$, such that
\[\left|B B^{-1} g B B^{-1} g^{-1}\right| \geq |B|^{1 + \frac{|\log \rho|}{\log n}}.\] \end{cor}
\begin{proof}
By Lemma \ref{lem:shortorb} applied to $B B^{-1}$ rather than $B$, there are
$g_1, g_2,\dotsc,g_k\in \left(A\cup A^{-1}\cup \{e\}\right)^m$,
$k\ll (\log n)/|\log \rho|$,
$m\ll n^6 \log n$, such that
\[\bigcap_{j=1}^k g_j B B^{-1} g_j^{-1} = \{e\}.\]
Hence, by Lemma \ref{lem:tagore}, with $A_j = g_j B B^{-1} g_j^{-1}$
(and $g_0=e$, say), there is a $0\leq j\leq k-1$ such that
\[\left|A_j A_{j+1}^{-1}\right|\geq \left|B B^{-1}\right|^{1 +\frac{1}{k}}
\geq \left|B\right|^{1 +\frac{1}{k}}.\]
Since
$\left|A_j A_{j+1}^{-1}\right| = \left|g_j B B^{-1} g_j^{-1}
g_{j+1} B B^{-1} g_{j+1}^{-1}\right|
=\left|B B^{-1} g B B^{-1} g^{-1}\right|$ for $g =g_j^{-1} g_{j+1}$, we are done. \end{proof}
\begin{cor}\label{cor:ratherbab}
Let $A\subset \Sym(\Omega)$, $|\Omega|=n$, with $A=A^{-1}$ and $e\in A$.
Let $0<\rho<1$. Assume that
$\langle A\rangle$ is $2$-transitive. Let $\Sigma\subset \Omega$ be such that
$(A^4)_{(\Sigma)}$ has no orbits of length $>\rho n$. Then either
\begin{equation}\label{eq:firstopt}
\left|\Sigma\right|\geq \frac{|\log \rho|}{3 (\log n)^2} \log |A|
\end{equation}
or
\begin{equation}\label{eq:secondopt}
\left|A^l\right|\geq |A|^{1 + \frac{|\log \rho|}{3 \log n}}
\end{equation}
for some $l\ll n^6 \log n$. \end{cor} Compare to \cite[Cor.~5.3]{MR3152942}. \begin{proof}
Let $B = (A^2)_{(\Sigma)} = (A A^{-1})_{(\Sigma)}$. Since
$B B^{-1}\subset (A^4)_{(\Sigma)}$, we see that $B B^{-1}$
has no orbits of length $>\rho n$. Apply Corollary \ref{cor:lalmo}.
We obtain that
\[\left|A^{l}\right|\geq |B|^{1 + \frac{|\log \rho|}{\log n}}\]
for $l=4 m+2$, $m\ll n^6 \log n$.
At the same time, by Lemma \ref{lem:duffy},
\[|B| = \left|A A^{-1} \cap \Sym(\Omega)_{(\Sigma)}\right|
\geq \frac{|A|}{\lbrack \Sym(\Omega):\Sym(\Omega)_{(\Sigma)}\rbrack}
\geq \frac{|A|}{n^{|\Sigma|}}.\]
Hence, either (\ref{eq:firstopt}) holds, or
\[|B|> \frac{|A|}{n^{\frac{|\log \rho|}{3 (\log n)^2} \log |A|}} =
|A|^{1 - \frac{|\log \rho|}{3 \log n}},\] and so
\[\left|A^{l}\right|\geq |A|^{\left(1 - \frac{|\log \rho|}{3 \log n}\right)
\left(1 + \frac{|\log \rho|}{\log n}\right)} = |A|^{
1 + \frac{|\log \rho|}{3 \log n} \left(2 - \frac{|\log \rho|}{\log n}\right)}.\]
We can assume $|\log \rho|\leq \log n$, i.e., $\rho\geq 1/n$, since otherwise the
hypothesis that $(A^4)_{(\Sigma)}$ has no orbits of length $>\rho n$ could not hold;
then $2 - \frac{|\log \rho|}{\log n}\geq 1$, and (\ref{eq:secondopt}) follows. \end{proof}
\subsection{Generating an element of large support}
We will need to produce an element of $\Sym(n)$ of very large support (almost all of $\{1,2,\dotsc,n\}$). It is not difficult to carry out this task using short random walks.
\begin{lem}\label{lem:chachava}
Let $g\in \Sym(n)$ have support $\geq \alpha n$, $\alpha>0$. Let
$A\subset \Sym(n)$
generate a $2$-transitive group. Assume $A=A^{-1}$, $e\in A$.
Then, provided that $n$ is larger than a constant depending only on
$\alpha$, there are $\gamma_i\in A^{n^6}$, $1\leq i\leq \ell$,
where $\ell=O((\log n)/\alpha)$,
such that the support of
\[\gamma_1 g \gamma_1^{-1} \cdot \gamma_2 g \gamma_2^{-1} \dotsb
\gamma_\ell g \gamma_\ell^{-1}
\]
has at least $n-1$ elements. \end{lem} \begin{proof}
Let $h_1, h_2 \in \Sym(n)$, $m_i = |\supp(h_i)|$. By Prop.~\ref{prop:chudo},
a random walk of length $r = \lceil 4 n^5 \log(n^2/\epsilon)\rceil$ gives
us an element $\sigma$ of $A^r$ sending any given pair of distinct elements
$x,y \in \{1,\dotsc,n\}$ to any given pair of distinct elements
$x',y'\in \{1,\dotsc,n\}$ with probability $(1+O^*(\epsilon))/n(n-1)$.
An element $x\in \{1,\dotsc,n\}$ can fail to be in the support of
$h_1 \sigma h_2 \sigma^{-1}$ only if (a) $x\notin \supp(h_1)$ and $x\notin
\supp\left(\sigma h_2 \sigma^{-1}\right)$, or (b) $x\in \supp(h_1)$ and $\sigma h_2 \sigma^{-1}$
sends $x^{h_1}$ to $x$. For $x$ random,
case (a) happens with probability at most
$(1-m_1/n)\cdot (1+\epsilon) (1-m_2/n)$.
In case (b), $\sigma$ must send $x^{h_1}$ to an element that is not
fixed by $h_2$, and, moreover, it must send $x$ to $x^{h_1 \sigma h_2}$.
Now, we know that, even given that $\sigma$ sends an element $x_0$
(in this case,
$x_0 = x^{h_1}$) to some specific element $y_0$,
it will still send any $x\ne x_0$ to any $y\ne y_0$ with almost equal
probability.
Hence,
\[\begin{aligned}
\Prob(x\notin \supp(h_1 \sigma h_2\sigma^{-1})) &\leq
(1+\epsilon) \left(\left(1-\frac{m_1}{n}\right)
\left(1 - \frac{m_2}{n}\right) + \frac{m_1}{n} \frac{m_2}{n} \frac{1}{n-1}\right)
\end{aligned}\]
We set $\epsilon=1/n$ and assume $m_1,m_2<n$. Then we have
\[\Prob(x\notin \supp(h_1 \sigma h_2\sigma^{-1})) \leq
(1+\epsilon) \left(1-\frac{m_1}{n}\right)
\left(1 - \frac{m_2}{n}\right) + \frac{1}{n}.\]
The expected value of $n-|\supp(h_1\sigma h_2 \sigma^{-1})|$ is thus
at most $(1+\epsilon) (n-m_1) (1-m_2/n) + 1$. Hence there is
a $\sigma\in A^r$ such that $n-|\supp(h_1\sigma h_2 \sigma^{-1})|$ is at
most that much.
We apply this first with $h_1 = h_2 = g$, and obtain a $\sigma_1 = \sigma$
as above; define $g_1 = g \sigma_1 g \sigma_1^{-1}$. Then we iterate:
we let $h_1 = g_1$, $h_2 = g$, and obtain a $\sigma_2 = \sigma$ such that
$g_2 = g_1 \sigma_2 g \sigma_2^{-1}$ has large support; and so forth,
with $h_1 = g_{i-1}$, $h_2 = g$ at the $i$th step.
We obtain
\[1 - \frac{|\supp(g_i)|}{n} \leq (1+\epsilon) \left(1 - \frac{|\supp(g)|}{n}\right)
\left(1-\frac{|\supp(g_{i-1})|}{n}\right) + \frac{1}{n},\]
where $g_0 = g$, and so, for $\beta = (1+\epsilon) (1 - |\supp(g)|/n)$
(which is $<1$) and
$k\geq 0$, \[1 - \frac{|\supp(g_k)|}{n} \leq \beta^k \left(1 - \frac{|\supp(g)|}{n}\right) + \frac{1}{(1-\beta) n} \leq \beta^k (1-\alpha) + \frac{1}{(1-\beta) n}. \] We let $k = \lceil (\log n)/(\log 1/\beta)\rceil$ and obtain \[|\supp(g_k)| \geq n - 1 - \frac{1}{1-\beta}.\] For $n\geq 2/\alpha$, we have $\beta\leq (1+\alpha/2) (1-\alpha) < 1 - \alpha/2$, and so $1/(1-\beta)\leq 2/\alpha$ and $k\ll (\log n)/\alpha$.
We can assume $|\supp(g_k)|<n$, as otherwise we are done. Now apply the procedure at the beginning with $h_1=h_2=g_k$. We obtain $\Prob(x\notin \supp(g_k \sigma g_k \sigma^{-1})) < 2/n$, provided that $n$ is larger than a constant depending only on $\alpha$. Hence there is a $\sigma\in A^r$ such that $|\supp(g_k \sigma g_k \sigma^{-1})|\geq n-1$. Since \[g_k \sigma g_k \sigma^{-1} = g \cdot \sigma_1 g \sigma_1^{-1} \dotsb \sigma_k g \sigma_k^{-1} \cdot \sigma g \sigma^{-1}\cdot (\sigma \sigma_1) g (\sigma \sigma_1)^{-1} \dotsb (\sigma \sigma_k) g (\sigma \sigma_k)^{-1},\] we set $\ell = 2k+2$ and are done. \end{proof}
It may be useful to compare Lemma \ref{lem:chachava} to analogous results on {\em random subproducts} in the sense of \cite{zbMATH01116363}. Such results make weaker assumptions (transitivity instead of double transitivity) and give weaker conclusions (support $\geq n/2$ instead of support $\sim n$; see \cite[Lemma 2.3.1]{zbMATH01849958}, \cite[Lemma 4.3]{MR3152942}).
\subsection{Stabilizers and stabilizer chains}
Let $A\subset \Sym(\Omega)$, $|\Omega|=n$. Given a subset $\Sigma = \{\alpha_1,\alpha_2,\dotsc,\alpha_k\} \subset \Omega$, we write $A_{(\Sigma)}$ and $A_{\Sigma}$ for the {\em pointwise} and {\em setwise} stabilizers, respectively: \[A_{(\Sigma)} = A_{(\alpha_1,\dotsc,\alpha_k)} = \{g\in A: \alpha_j^g = \alpha_j\;\;\; \forall 1\leq j\leq k\},\] \[A_\Sigma = A_{\{\alpha_1,\dotsc,\alpha_k\}} = \{g\in A: \Sigma^g = \Sigma\}.\]
A {\em stabilizer chain} is simply a chain of subsets \[A \supset A_{(\alpha_1)} \supset A_{(\alpha_1,\alpha_2)} \supset \dotsc,\] where $\alpha_1, \alpha_2, \dotsc \in \{1,2,\dotsc,n\}$. Stabilizer chains have been studied starting with Sims \cite{MR0257203} (in the case of $A$ equal to a subgroup $H$). It is useful to find long chains of stabilizers such that the orbits \[\alpha_j^{A_{(\alpha_1,\dotsc,\alpha_{j-1})}}\] are long.
Why do we want stabilizer chains with long orbits? Here is one reason. \begin{lem}\label{lem:basilic}
Let $A\subset \Sym(\Omega)$, $|\Omega|=n$. Let $\rho \in (0,1)$.
Let $\Sigma = \{\alpha_1,\alpha_2,\dotsc,\alpha_k\}\subset \Omega$
be such that, for every $1\leq j\leq k$,
\begin{equation}\label{eq:muladrink}
\left|\alpha_j^{A_{(\alpha_1,\dotsc,\alpha_{j-1})}}\right|\geq \rho n.\end{equation}
Then $A^k$ intersects at least $(\rho n)^k$ right cosets of
$\Sym(\Omega)_{(\Sigma)}$, and the restriction of the
setwise stabiliser $(A^{k} A^{-k})_\Sigma$ to $\Sigma$ is a subset of
$\Sym(\Sigma)$ with at least $\rho^k k!$ elements. \end{lem} This result was shown in the proof of \cite[Lemma 3]{Pyb93} for $A$ a subgroup, and in \cite[Lemma 3.19]{MR3152942} for general $A$. \begin{proof}
First of all, notice that $A^k$ sends $(\alpha_1,\alpha_2,\dotsc,\alpha_k)$
to at least $(\rho n)^k$ distinct $k$-tuples. This is shown as follows.
Let $1\leq j\leq k$. Let $\Delta_j$ denote the orbit
$\alpha_j^{A_{(\alpha_1,\dotsc,\alpha_{j-1})}}$.
For each $\delta\in \Delta_j$, choose an element $g_\delta \in A_{(\alpha_1,
\dotsc,\alpha_{j-1})}$ sending $\alpha_j$ to $\delta$. Let
$S_j = \{g_\delta: \delta\in \Delta_j\}$. Clearly, $|S_j|=|\Delta_j|$ and
$S_j\subset A$. Now let $(s_1,s_2,\dotsc,s_k)$,
$(s_1',s_2',\dotsc,s_k')$ be two distinct elements of $S_1\times \dotsb
\times S_k$. Then $s_k \dotsb s_2 s_1$ and $s_k'\dotsb s_2' s_1'$ send
$(\alpha_1,\alpha_2,\dotsc,\alpha_k)$ to two different $k$-tuples: if
$j$ is the least index such that $s_j \ne s_j'$, then
\[\alpha_j^{s_k s_{k-1}\dotsb s_j} = \alpha_j^{s_j} \ne
\alpha_j^{s_j'} = \alpha_j^{s_k' s_{k-1}' \dotsb s_j'},\]
and so \[\alpha_j^{s_k s_{k-1} \dotsb s_j s_{j-1} \dotsb s_1}
\ne \alpha_j^{s_k' s_{k-1}' \dotsb s_j' s_{j-1} \dotsb s_1}
= \alpha_j^{s_k' s_{k-1}' \dotsb s_j' s_{j-1}' \dotsb s_1'}.\]
Hence $(\alpha_1,\alpha_2,\dotsc,\alpha_k)$ is sent to at least
$|S_1|\dotsb |S_k| \geq (\rho n)^k$ distinct tuples by the action of
$S_k S_{k-1} \dotsb S_1 \subset A^k$.
In other words, $A^k$ intersects at least $(\rho n)^k$ cosets
$(\Sym(\Omega))_{(\Sigma)} g$ of $(\Sym(\Omega))_{(\Sigma)}$. By Lemma \ref{lem:subcos},
\[\begin{aligned}
\left|\pi_{(\Sym(\Omega))_{\Sigma}/(\Sym(\Omega))_{(\Sigma)}}(A^k A^{-k}\cap (\Sym(\Omega))_{\Sigma})\right| &\geq
\frac{\left|\pi_{\Sym(\Omega)/(\Sym(\Omega))_{(\Sigma)}}(A^k)\right|}{
\lbrack \Sym(\Omega):(\Sym(\Omega))_{\Sigma}\rbrack} \\ &\geq \frac{(\rho n)^k}{
n (n-1) \dotsc (n-k+1)/k!} \geq \rho^k k!.\end{aligned}\]
Now, two elements of $(\Sym(\Omega))_{\Sigma}$ lie in different cosets
of $(\Sym(\Omega))_{(\Sigma)}$ if and only if their restrictions to $\Sigma$ are distinct. Since $A^k A^{-k} \cap (\Sym(\Omega))_{\Sigma} = (A^k A^{-k})_{\Sigma}$, we have
shown that the restriction of $(A^{k} A^{-k})_\Sigma$ to $\Sigma$ is of size at least
$\rho^k k!$.
\end{proof}
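The tuple-counting step in the proof can be replayed on a toy example. The Python sketch below (ours; the set $A\subset\Sym(4)$ and the base points are arbitrary choices) checks that $A^k$ sends $(\alpha_1,\dotsc,\alpha_k)$ to at least $\prod_j |\Delta_j|$ distinct tuples.

```python
n = 4

def mul(a, b):
    return tuple(b[a[i]] for i in range(n))

# Toy generating set and base points alpha_1 = 0, alpha_2 = 1 (k = 2).
A = {(1, 0, 2, 3), (0, 2, 1, 3), (2, 1, 0, 3), (0, 1, 3, 2)}
alphas = [0, 1]
k = len(alphas)

# |Delta_j|: orbit of alpha_j under the elements of A fixing alpha_1..alpha_{j-1}
sizes = []
stab = set(A)
for a_j in alphas:
    sizes.append(len({g[a_j] for g in stab}))
    stab = {g for g in stab if g[a_j] == a_j}

P = set(A)                        # A^k: products of exactly k factors
for _ in range(k - 1):
    P = {mul(p, a) for p in P for a in A}

images = {tuple(g[a] for a in alphas) for g in P}
assert len(images) >= sizes[0] * sizes[1]
print(sizes, len(images))
```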
\subsection{Composition factors. Primitive groups}
Let us recall some standard definitions. A {\em composition factor} of a group $G$ is a quotient $H_{i+1}/H_i$ in a composition series of $G$, i.e., a series \[1=H_0\triangleleft H_1 \triangleleft \dotsc \triangleleft H_n =G,\] where every quotient is simple. By the Jordan-H\"older theorem, whether or not an abstract group is a composition factor of $G$ does not depend on the particular composition series of $G$ being used.
A {\em section} of a group $G$ is a quotient $H/N$, where $1\leq N\triangleleft H\leq G$. A composition factor is, by definition, a section.
A {\em block system} for a permutation group $G\leq \Sym(\Omega)$ is a partition of $\Omega$ preserved by $G$, that is, a partition of $\Omega$ into blocks (sets) $B_1,\dotsc,B_k$ such that, if $x,y\in B_i$, $g\in G$ and $x^g\in B_j$, then $y^g \in B_j$. A maximal block system is one that has blocks of size $>1$ and cannot be subdivided into a finer partition with blocks of size $>1$. (In other words, it is a system of {\em minimal} non-trivial blocks.) A minimal block system is one that is not the refinement of any block system other than itself and the trivial partition of $\Omega$ into the single set $\Omega$.
The group $G$ is {\em primitive} if it has no block systems with more than $1$ and fewer than $|\Omega|$ blocks, i.e., no block systems other than (a) the partition of $\Omega$ into the single set $\Omega$ and (b) the partition of $\Omega$ into one-element sets. It follows from the definitions that a group $G\leq \Sym(\Omega)$ acts as a primitive group on any minimal block system.
\subsection{Tools from the theory of permutation groups} The following result guarantees the existence of an element of small support in a group under rather mild conditions. It is essentially due to Wielandt. Thanks are due to L. Pyber for the reference.
\begin{lem}\label{lem:wielandt}
For any $\epsilon>0$, there are $C_1, C_2\geq 0$ such that the following
holds. Let $n\geq C_1$.
Let $G\leqslant\Sym(\Omega)$, $|\Omega|=n$,
be a group containing a section isomorphic to
$\Alt(k)$ for $k\geq C_2 \log n$. Then there is a $g\in G$, $g\ne e$,
such that $|\supp(g)|< \epsilon n$. \end{lem} This Lemma replaces \cite[Lem.~3.19]{MR3152942}, which was based on \cite[Lem.~3]{MR894827}. \begin{proof}
Suppose there is no $g\in G$, $g\ne e$, such that
$|\supp(g)|<\epsilon n$; we say $G$ has {\em minimal degree}
at least $\epsilon n$. We can assume without loss of generality that
$C_2 \log n \leq k \leq 2 C_2 \log n$, since, given that $G$ contains
a section isomorphic to $\Alt(k)$, it contains a section isomorphic to
$\Alt(k')$ for all $k'\leq k$.
Let $\omega = \min(\epsilon,0.4)$. Then, by
\cite[Thm.~5.5A]{DM}, we have $n>\binom{k}{s}$ for $s = \lfloor
\mu (k+1)\rfloor$, $\mu = (1-\omega)^{1/5}$. By Stirling's formula,
\[\binom{k}{s} = \frac{k!}{s! (k-s)!}
\gg_\omega \frac{1}{\sqrt{k}} \left(\frac{1}{\mu^\mu (1-\mu)^{(1-\mu)}}\right)^k\]
for $k$ greater than a constant depending only on $\mu$. This gives
a contradiction with $C_2 \log n\leq k\leq 2 C_2 \log n$ for $n\geq C_1$ when
$C_1$ and $C_2$ are large enough in terms of $\epsilon$.
(We use the condition $k\leq 2 C_2 \log n$ to ensure that the effect of
$1/\sqrt{k}$ is negligible.) \end{proof}
We also need a result telling us that a large subset of $\Sym(\Sigma)$ generates a large symmetric subgroup in a few steps. \begin{lem}\label{lem:amusi}
Let $H\leqslant \Sym(\Sigma)$, $|\Sigma|=k$.
Let $\rho \in (1/2,1)$. If $|H|\geq \rho^k k!$ and $k$ is larger than
a constant depending only on $\rho$, then there exists an orbit
$\Delta\subset \Sigma$ of $H$ such that $|\Delta|\geq \rho |\Sigma|$
and $H|_\Delta$ is $\Alt(\Delta)$ or $\Sym(\Delta)$. \end{lem}
\begin{proof}
By \cite[Thm.~5.2B]{DM}, which is a somewhat strengthened version
of \cite[Lem.~1.1]{MR703984}.
We simply need to check that
\[\lbrack \Sym(\Sigma):H\rbrack < \min\left(\frac{1}{2} \binom{k}{\lfloor
k/2\rfloor}, \binom{k}{m}\right)\]
for $m = \lceil \rho k\rceil$. This inequality follows from Stirling's
formula for $k$ larger than a constant depending on $\rho$. \end{proof}
\subsection{Diameter comparisons: directed and undirected graphs}
We wish to derive a (version of) Theorem \ref{thm:pais}, which is a bound on the diameter of a directed graph, from Theorem \ref{thm:jukuju}, which is a statement on sets $A$ satisfying $A = A^{-1}$. It would be natural to expect such a statement to imply only a bound on the diameter of an undirected graph. As it happens, the distinction between directed and undirected graph matters little in this context, thanks to the following result.
\begin{lem}[\cite{zbMATH05771615}, Thm.~1.4]\label{lem:soti}
Let $G$ be a finite group and $A$ a set of generators of $G$. Then
\[\diam \Gamma(G,A) \ll (\log |G|)^3\cdot
\left(\diam \Gamma\left(G,A\cup A^{-1}\right)\right)^2,\]
where the implied constant is absolute. \end{lem}
It is thus enough to prove Theorem \ref{thm:pais} (and analogous statements) for sets $S$ satisfying $S = S^{-1}$: simply replace $S$ by $S\cup S^{-1}$, and use Lemma \ref{lem:soti}.
\section{Finding few generators for a transitive group}\label{subs:ellafi}
Let us be given a set $A\subset \Sym(\Omega)$. When is it the case that there is a small subset $A'$ of $(A\cup A^{-1} \cup \{e\})^{n^C}$ (say) that generates $\langle A\rangle$, or at least a transitive subgroup of $\langle A\rangle$? Can we put further conditions on the elements of $A'$, such as, for instance, that they all be conjugates of each other?
Our motivation for considering this question is the following. We will find it necessary to do what amounts to bounding the size of the intersection of a slowly growing set $A$ with the centralizer of an element of large support. It would stand to reason that there should be stronger bounds than for the intersection of $A$ with a subgroup without long orbits: being in the centralizer is a more restrictive condition.
Given one of our main tools (Lemma \ref{lem:tagore}; see in particular the remark between its proof and its statement), we can quickly reduce this task to the following one: given an element $h$ of large support and a set of generators $A$ of $G$, find $g_1,\dotsc,g_k \in \left(A\cup A^{-1} \cup \{e\}\right)^\ell$ such that \begin{equation}\label{eq:urdu}
C(h) \cap g_1 C(h) g_1^{-1} \cap \dotsc \cap g_k C(h) g_k^{-1}\end{equation} is equal to $\{e\}$, or at least very small.
It is clearly enough for the group \begin{equation}\label{eq:purelllady}
\langle h, g_1 h g_1^{-1}, \dotsc, g_k h g_k^{-1}\rangle\end{equation} to be transitive: the centralizer $H$ of a transitive subgroup of $\Sym(n)$ is {\em semiregular} (that is, no element of $H$ other than $e$ fixes any point), and thus has $\leq n$ elements.
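The semiregularity claim is easy to confirm computationally. The sketch below (ours; a toy instance) computes the centralizer of the transitive cyclic group $\langle(0\,1\,2\,3)\rangle$ in $\Sym(4)$ and checks that it has at most $n$ elements, none of which, apart from $e$, fixes a point.

```python
from itertools import permutations

n = 4
G = list(permutations(range(n)))

def mul(a, b):
    return tuple(b[a[i]] for i in range(n))

c = (1, 2, 3, 0)                  # the 4-cycle (0 1 2 3); <c> is transitive
e = tuple(range(n))
C = [g for g in G if mul(g, c) == mul(c, g)]  # centralizer of <c> in Sym(4)

assert len(C) <= n                # at most n elements
for g in C:
    if g != e:
        assert all(g[x] != x for x in range(n))  # semiregular: no fixed points
print(len(C))
```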
Let us, then, show that there are $g_1,\dotsc, g_k\in \left(A \cup A^{-1} \cup \{e\}\right)^\ell$, $k$ and $\ell$ small, such that the group (\ref{eq:purelllady}) is transitive. We will be able to prove what we want with $k \ll \log \log n$, assuming that $\langle A\rangle$ is $4$-transitive.
(As we will later discuss, reaching the bound $k\ll \log n$ is substantially easier. An analogous, but not identical, result with $k\sim \log n$ can be found in \cite[Lemma 5.13]{zbMATH01116363}.)
Our proof will be by iteration, with the iterative step being given by the next proposition. We will be working with partitions into orbits, but will prove the proposition for general partitions. Recall that, given two partitions $P$, $Q$ of a set $\Omega$, the {\em join} $P\vee Q$ is the finest partition that is coarser (not necessarily strictly so) than both $P$ and $Q$. The {\em trivial} partition of $\Omega$ is the partition $\{\Omega\}$.
Given a partition $P$ of $\Omega$ and an element $x\in \Omega$, we define $S_P(x)$ to be the element of $P$ containing $x$. Let
$s_P(x) = \left|S_P(x)\right|$.
The {\em total variation distance} between two probability measures $\mu_1$, $\mu_2$ on a finite set $X$ is defined to be
\[\delta(\mu_1,\mu_2) = \max_{S\subset X} |\mu_1(S) - \mu_2(S)|.\]
Thus, for instance, $\mu$ is at total variation distance at most $\epsilon$
from the uniform distribution if $\mu(S) = |S|/|X| + O^*(\epsilon)$ for every $S\subset X$. Suppose this is the case. Then, given any function $f:X\to \mathbb{R}$ with $0\leq f(x)\leq T$ for every $x\in X$, we can easily estimate the expected value $\mathbb{E}_\mu f(x)$ of $f(x)$ with respect to $\mu$: clearly, \[f(x) = \int_0^T 1_{L(f,t)}(x) dt \;\;\;\;\;\text{(``layer-cake decomposition''),}\] where $L(f,t) = \{x\in X: f(x)\geq t\}$, and so \[ \mathbb{E}_\mu(f(x)) = \int_0^T \Prob_\mu(x\in L(f,t)) dt =
\int_0^T \left(\frac{|L(f,t)|}{|X|} + O^*(\epsilon)\right) dt.\] Applying the same idea to the uniform distribution, without the error term $O^*(\epsilon)$, we obtain that \begin{equation}\label{eq:richpry}
\mathbb{E}_\mu(f(x)) = \frac{1}{|X|} \sum_{x\in X} f(x) + O^*(\epsilon T).
\end{equation}
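The estimate (\ref{eq:richpry}) can be illustrated numerically; in the Python sketch below (ours; the measure $\mu$ and the function $f$ are arbitrary toy data), the total variation distance $\epsilon$ is computed by brute force over all subsets of $X$.

```python
from itertools import combinations

X = list(range(6))
T = 3.0
f = [0.0, 1.0, 2.0, 3.0, 1.5, 2.5]             # arbitrary f with 0 <= f <= T
mu = [0.20, 0.15, 0.18, 0.16, 0.14, 0.17]      # a probability measure on X

# delta(mu, uniform): maximize |mu(S) - |S|/|X|| over all subsets S
eps = 0.0
for r in range(len(X) + 1):
    for S in combinations(X, r):
        eps = max(eps, abs(sum(mu[x] for x in S) - len(S) / len(X)))

E_mu = sum(mu[x] * f[x] for x in X)
mean = sum(f) / len(X)
assert abs(E_mu - mean) <= eps * T + 1e-12     # the bound of (eq. richpry)
print(eps, E_mu, mean)
```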
\begin{lem}\label{lem:coeur}
Let $P$ be a partition of a finite set $\Omega$ with $|\Omega|=n$. Let $m\geq 2$. Denote by $\rho$ the proportion of elements $x$ of $\Omega$ such that $s_P(x)\geq m$.
Let $g\in \Sym(\Omega)$ be taken at random with a distribution such
that, for any element $\vec{v}$ of the set $\Omega^{(2)}$
of ordered pairs of distinct elements of $\Omega$,
the probability distribution of $\vec{v}^g$ is at total variation
distance $\leq \epsilon$ from the uniform distribution on $\Omega^{(2)}$.
Assume that the same is true for the probability distribution of
$\vec{v}^{g^{-1}}$ as well.
\begin{enumerate}
\item\label{it:turandot1}
With positive probability, the proportion of elements $x$ of $\Omega$
such that $s_{P\vee P^g}(x)\geq m$ is $\geq 1- (1-\rho)^2 - \epsilon$.
\item\label{it:turandot2}
Assume that $\epsilon\leq \rho/100$, $n\geq 100$ and $2\leq m\leq n/2$.
Then,
with positive probability, the proportion of elements $x$ of $\Omega$
such that $s_{P\vee P^g}(x)\geq (1+\rho/3) m$ is $\geq \rho^2/8$.
\item\label{it:pureplon} Assume that $\epsilon\leq \min(\rho/25,\rho/4m)$
and $n\geq 250$.
Then, with positive probability, $P\vee P^g$ contains at least
one set of size at least
\[\min\left(\frac{\rho}{10} n, \frac{\rho-\epsilon}{2} m^2\right).\] \end{enumerate} \end{lem} The proof is straightforward: we proceed by taking expected values. We give explicit constants simply for concreteness; they have not been optimized. Conclusion (\ref{it:turandot2}) is substantially weaker than what we could obtain by means of more complicated variance-based arguments such as those we will use in the proof of Lemma \ref{lem:coeur2}. \begin{proof}
\noindent
Let $B$ be the set of all $x\in \Omega$ such that $s_P(x)<m$.
For each $x\in B$, the probability that $x^g \in B$ is $\leq |B|/n + \epsilon
= 1 - \rho + \epsilon$. Hence, the expected value of the number of
$x\in B$ such that $x^g\in B$ is $\leq (1+\epsilon-\rho) |B| =
(1 + \epsilon- \rho) (1-\rho) n \leq ((1-\rho)^2 + \epsilon) n$.
It obviously follows that the number of such $x$
is $\leq ((1-\rho)^2 + \epsilon) n$ with positive probability. In other words, conclusion (\ref{it:turandot1}) holds.
For each $x\in \Omega$ such that $s_P(x)\geq m$, we choose a subset $Z(x)\subset S_P(x)$ of size $m$, in such a way that, for every set $S$ in $P$, every element of $S$ is contained in exactly $m$ sets $Z(x)$, $x\in S$. (For instance, we may identify each element of $P$ having $m'\geq m$ elements with the set $\mathbb{Z}/m' \mathbb{Z}$, and then let $Z(x) = \{x,x+1,\dotsc,x+m-1\} \mo m'$ for every $x\in \mathbb{Z}/m' \mathbb{Z}$.
We can easily see that every element of $\mathbb{Z}/m' \mathbb{Z}$ is then contained in exactly $m$ sets $Z(x)$.)
For every $x\in \Omega$ such that $s_P(x)<m$, we let $Z(x)=\emptyset$. We write $z(x)$ for $|Z(x)|$; we see that $z(x)$ can take only the values $m$ or $0$.
We see immediately that \begin{equation}\label{eq:kupcop}\begin{aligned}
\sum_{x,x'\in \Omega} \left|Z(x) \cap Z(x')\right| &=
\sum_{x\in \Omega} \sum_{y\in Z(x)} |\{x'\in S_P(x): y\in Z(x')\}|\\
&= \sum_{x\in \Omega} \sum_{y\in Z(x)} m = \rho m^2 n,\end{aligned}\end{equation} a fact that will be useful later.
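The covering property of the sets $Z(x)$ and the identity (\ref{eq:kupcop}) can be checked mechanically. The following Python sketch (ours, purely illustrative) does so for the cyclic construction $Z(x)=\{x,x+1,\dotsc,x+m-1\} \bmod m'$ on a toy partition.

```python
m = 3   # the threshold m of the lemma

def Z_sets(mp):
    # On a part of size mp >= m, identified with Z/mp Z, take
    # Z(x) = {x, x+1, ..., x+m-1} mod mp.
    return {x: frozenset((x + i) % mp for i in range(m)) for x in range(mp)}

# Every element of Z/mp Z lies in exactly m of the sets Z(x).
for mp in (3, 5, 8):
    Z = Z_sets(mp)
    for y in range(mp):
        assert sum(y in Z[x] for x in range(mp)) == m

# The identity sum_{x,x'} |Z(x) cap Z(x')| = rho m^2 n on a toy partition P.
parts = [5, 4, 2, 1]                       # sizes of the parts of P
n = sum(parts)
rho_n = sum(s for s in parts if s >= m)    # number of x with s_P(x) >= m
total = 0
for mp in parts:
    if mp < m:
        continue                           # Z(x) is empty on small parts
    Z = Z_sets(mp)
    total += sum(len(Z[x] & Z[xp]) for x in range(mp) for xp in range(mp))
assert total == rho_n * m * m
```

Since parts of $P$ are disjoint, only pairs $x$, $x'$ within the same part contribute, which is why the double sum may be computed part by part.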
We write $Z_g(x)$ for $Z\left(x^{g^{-1}}\right)^g$, and
$z_g(x)$ for $|Z_g(x)|$. By definition,
\[Z_g(x)\subset \left(S_P\left(x^{g^{-1}}\right)\right)^g = S_{P^g}(x).\]
Clearly,
\begin{equation}\label{eq:colosh}
s_{P\vee P^g}(x) \geq z(x) + z_g(x) - \left|Z(x)\cap Z_g(x)\right|.
\end{equation}
For any $x$,
\[\begin{aligned}
\mathbb{E}(z_g(x)) &= \mathbb{E}\left(\left|Z\left(x^{g^{-1}}\right)\right|
\right) = m\cdot \Prob\left(x^{g^{-1}}\not\in B\right)\\
&= m\cdot (\rho + O^*(\epsilon)),
\end{aligned}\]
since $B$ is the set of elements $x$ of $\Omega$ such that
$Z(x)=\emptyset$, and $|Z(x)|=m$ for all $x\in \Omega\setminus B$.
Given $x\in \Omega\setminus B$ and a $y\in Z(x)$,
we should estimate the probability that $y$ is an element
of $Z(x)\cap Z_g(x)$, where $g$ is, as always, taken at random.
Evidently, if $y=x$, then $y\notin Z_g(x)$ if $Z_g(x)$ is empty, and
$y\in Z_g(x)$ otherwise.
If $y\ne x$, we have $y\in Z_g(x)$ if and only if $g^{-1}$ sends
$(x,y)$ to an element of
\[S = \{(x',y'): x'\in \Omega\setminus B, y'\in Z(x'), y'\ne x'\}.\]
The number of elements of $S$ is $|\Omega\setminus B|\cdot (m-1)$. Hence
\[\begin{aligned}
\Prob(y\in Z_g(x)) \leq \frac{|\Omega\setminus B|\cdot (m-1)}{n (n-1)} +\epsilon \leq \frac{\rho m}{n} + \epsilon.
\end{aligned}\]
Therefore,
\[\begin{aligned}
\mathbb{E}\left( \left|Z(x)\cap Z_g(x)\right|\right) &\leq
\Prob(x\in Z_g(x)) + \mathop{\sum_{y\in Z(x)}}_{y\ne x}
\Prob(y\in Z_g(x))\\
&\leq \rho + \epsilon + \mathop{\sum_{y\in Z(x)}}_{y\ne x}
\left(\frac{\rho m}{n} + \epsilon\right) \leq
\rho \left(1 + \frac{m^2}{n}\right) + \epsilon m,\end{aligned}\]
and so
\begin{equation}\label{eq:matater}\begin{aligned}
\mathbb{E}&\left(\frac{1}{\rho n} \sum_{x\in \Omega\setminus B}
\left(z_g(x) - \left|Z(x)\cap Z_g(x)\right|\right)\right)\\
&\geq \frac{1}{|\Omega\setminus B|} \sum_{x\in \Omega\setminus B} \left(m\cdot (\rho - \epsilon)
- \rho \left( 1 + \frac{m^2}{n}\right) - \epsilon m\right)\\
&= \rho m
- \rho \left( 1 + \frac{m^2}{n}\right) - 2 \epsilon m
.\end{aligned}\end{equation}
Thus, with positive probability, \[\frac{1}{\rho n} \sum_{x\in \Omega\setminus B}
\left(z_g(x) - \left|Z(x)\cap Z_g(x)\right|\right)\geq \rho m
- \rho \left( 1 + \frac{m^2}{n}\right) - 2\epsilon m.\]
The contribution of all $x\in \Omega\setminus B$ such that
$z_g(x) - \left|Z(x)\cap Z_g(x)\right| \leq \rho m/3$ is at most
$\rho m/3$. Each one of the other $x\in \Omega\setminus B$ contributes
at most $m/\rho n$. Hence, the number of all $x\in \Omega \setminus B$ such that
$z_g(x) - |Z(x)\cap Z_g(x)|>\rho m/3$ is
\[\begin{aligned}
&\geq \frac{\left(\frac{2}{3} m -
\left(1+\frac{m^2}{n}\right)\right) \rho - 2 \epsilon m}{m/\rho n}\\
&\geq
\left(\frac{2}{3} -
\left(\frac{1}{m} + \frac{m}{n}\right)\right) \rho^2 n - 2 \epsilon \rho n
\geq \left(\frac{1}{6} - \frac{2}{n}\right) \rho^2 n
- 2 \epsilon \rho n
\geq \frac{\rho^2}{8} n,
\end{aligned}\]
where we use the assumptions $2\leq m\leq n/2$, $\epsilon\leq \rho/100$,
$n\geq 100$. By (\ref{eq:colosh}),
we obtain that conclusion (\ref{it:turandot2}) holds.
It remains to prove conclusion (\ref{it:pureplon}).
Let $x\in \Omega$. For each $y\in S_P(x)$, every element of $Z_g(y)$ lies in $S_{P^g}(y)$ and hence in $S_{P\vee P^g}(x)$.
Therefore, by inclusion-exclusion, for $U\subset S_P(x)$ arbitrary,
\begin{equation}\label{eq:disgu} \left|S_{P\vee P^g}(x)\right|\geq
\left|\bigcup_{y\in U} S_{P^g}(y)\right| \geq \sum_{y\in U} z_g(y)
- \mathop{\sum_{y,y'\in U}}_{y\ne y'}
\left|Z_g(y) \cap Z_{g}(y')\right| .\end{equation}
By our assumptions on the distribution of $g$,
\begin{equation}\label{eq:autir}\begin{aligned}
\mathbb{E}\left(\sum_{y\in U} z_g(y) \right)
&= \sum_{y\in U} \mathbb{E}\left(z_g(y)\right) =
\sum_{y\in U} \mathbb{E}\left(z\left(y^{g^{-1}}\right)\right) \\
&\geq \sum_{y\in U} (\rho - \epsilon) m
= (\rho - \epsilon) m |U| ,\end{aligned}\end{equation} and, similarly, \begin{equation}\label{eq:autir2}
\mathbb{E}\left(\sum_{y\in U} z_g(y) \right) \leq
(\rho + \epsilon) m |U|. \end{equation}
We can apply (\ref{eq:richpry}) to $X = \Omega^{(2)}$ and
$f(x,x') = |Z(x)\cap Z(x')|$, with the probability distribution on $X$ given by $(y,y')^{g^{-1}}$, where $(y,y')$ is a given element of $\Omega^{(2)}$ and $g$ is taken randomly in the sense we have been using throughout. Then, by (\ref{eq:kupcop}) and the fact that $0\leq f(x,x')\leq m$ for all $(x,x')\in X$,
\[\mathbb{E}\left(\left|Z_g(y) \cap Z_{g}(y')\right|\right) = \frac{\rho m^2 n - \rho m n}{n (n-1)} + O^*(\epsilon m).\] Hence
\begin{equation}\label{eq:rempert}\begin{aligned} \mathbb{E}\left(\mathop{\sum_{y,y'\in U}}_{y\ne y'}
\left|Z_g(y) \cap Z_{g}(y')\right|\right) &=
|U| (|U|-1) \left(\frac{\rho m^2 n - \rho m n}{n (n-1)} + O^*(\epsilon m)\right)\\ &\leq
\left(\frac{\rho m^2}{n} + \epsilon m\right) \cdot |U|^2 .\end{aligned}\end{equation}
Therefore, by (\ref{eq:disgu}), \[\mathbb{E}\left(s_{P\vee P^g}(x)\right)\geq
(\rho-\epsilon) m |U| - \left(\frac{\rho m^2}{n} + \epsilon m\right)
|U|^2. \]
In general, the maximum of an expression $a t - b t^2$, $a,b>0$, is of course attained when $t$ equals $t_0 = a/2b$; moreover, since $a (t_0 - \Delta) - b (t_0 - \Delta)^2 = a^2/4b - b \Delta^2$, we see that $a \lfloor t_0\rfloor - b (\lfloor t_0\rfloor)^2 \geq a^2/4b - b$. We let $a = (\rho-\epsilon) m$, $b = \rho m^2/n + \epsilon m$.
Suppose first that $t_0>m$. Then we simply choose any $x\in \Omega$ with $|S_P(x)|\geq m$, and choose $U\subset S_P(x)$ with
$|U|=m$. Since $t_0=a/2b>m$, we see that $a m - b m^2 = a m (1 - b m/a) > a m/2$, and so \[\mathbb{E}\left(s_{P\vee P^g}(x)\right) > \frac{a m}{2} = \frac{\rho -\epsilon}{2} m^2. \]
Now suppose that $t_0\leq m$. Then
there is an $x\in \Omega$ such that
$|S_P(x)|\geq t_0$. We choose $U\subset S_P(x)$ with
$|U| = \lfloor t_0\rfloor$, and obtain that \[ \mathbb{E}\left(s_{P\vee P^g}(x)\right) \geq \frac{a^2}{4 b} - b .\] Clearly $1/(r_1+r_2)\geq \min(1/(2 r_1),1/(2 r_2))$ for $r_1,r_2>0$. Hence \[\begin{aligned}\frac{a^2}{4 b} &= \frac{a^2/4}{\frac{\rho m^2}{n} + \epsilon m} \geq \frac{a^2}{8} \min\left(\frac{n}{\rho m^2},\frac{1}{\epsilon m}\right)\\ &\geq \frac{(1-\epsilon/\rho)^2}{8} \min\left(\rho n, \frac{\rho^2}{\epsilon} m\right)\geq \frac{(1-\epsilon/\rho)^2}{8} \min\left(\rho n, 4 \rho m^2\right),
\end{aligned}\] where we use the assumption $\epsilon\leq \rho/4 m$. Again by $\epsilon \leq \rho/4m$, we see that $t_0=a/2b\leq m$ implies that \[\begin{aligned} (\rho-\epsilon) m = a\leq 2 b m = 2 \frac{\rho m^3}{n} + 2 \epsilon m^2 \leq 2 \frac{\rho m^3}{n} + \frac{\rho m}{2},\end{aligned} \] and so $4 \rho m^2 \geq (1-2\epsilon/\rho) \rho n$. Therefore \[\begin{aligned} \frac{a^2}{4 b} - b &\geq \frac{(1-\epsilon/\rho)^2}{8} (1- 2\epsilon/\rho) \rho n - \left(\frac{\rho m^2}{n} + \epsilon m\right)\\ &\geq \frac{(1-\epsilon/\rho)^2}{8} (1- 2\epsilon/\rho) \rho n - \frac{1-2\epsilon/\rho}{4} \rho - \epsilon m.\end{aligned}\] If $m\geq \rho n/10$, then $P$ already contains a set of size $\geq \rho n/10$, and hence so does $P\vee P^g$. If $m< \rho n/10$, we obtain that \[\begin{aligned} \mathbb{E}\left(s_{P\vee P^g}(x)\right) &\geq \frac{a^2}{4 b} - b \geq \left(\frac{(1-\epsilon/\rho)^2}{8} (1-2\epsilon/\rho) - \frac{\epsilon}{10} -\frac{1 - 2\epsilon/\rho}{4 n}\right) \rho n\\ &\geq \left(\frac{(24/25)^2}{8} \cdot \frac{23}{25} - \frac{1}{250} - \frac{1}{600}\right) \rho n \geq 0.10031 \rho n > \frac{\rho n}{10} \end{aligned}\] by the assumptions $\epsilon\leq \rho/25\leq 1/25$ and $n\geq 250$. Thus, conclusion (\ref{it:pureplon}) holds. \end{proof}
Simply using Lemma \ref{lem:coeur} repeatedly, we could give a proof of Prop.~\ref{prop:verodiro} with $k$ of the order of $\log n$. Our crucial induction step, allowing us to take $k \ll \log \log n$, will be provided by the following lemma. Its proof will proceed by variance-based bounds; in other words, we will be using Chebyshev's inequality.
\begin{lem}\label{lem:coeur2}
Let $P$ be a partition of a finite set $\Omega$ with $|\Omega|=n$. Let $m\geq 2$. Denote by $\rho$ the proportion of elements $x$ of $\Omega$ such that $s_P(x)\geq m$. Let $g\in \Sym(\Omega)$ be taken at random with a distribution as in Lemma \ref{lem:coeur}, with $\epsilon\leq \min(1/1000,\rho m/n)$.
Assume that $\rho\geq 999/1000$ and $1000\leq m\leq \sqrt{n}/100$. Then,
with positive probability,
\[s_{P\vee P^g}(x)\geq \frac{m^2}{2}\] for more than $n/2$ elements $x$ of $\Omega$. \end{lem} We will reuse several estimates from the proof of part (\ref{it:pureplon}) of Lemma \ref{lem:coeur}.
We shall use the same notation as in that proof: $Z(x)$, $z(x)$, $Z_g(x)$ and $z_g(x)$ are the same as there. \begin{proof}
Let $f_g(x) = \sum_{y\in Z(x)} z_g(y)$. By (\ref{eq:autir}) and (\ref{eq:autir2}), \[(\rho - \epsilon) m \cdot z(x) \leq \mathbb{E}(f_g(x)) \leq (\rho +\epsilon) m \cdot z(x).\]
Therefore, writing
$E_F = \frac{1}{n} \sum_{x\in \Omega} F(x)$,
we see that
\begin{equation}\label{eq:ebound}
(\rho-\epsilon) \rho m^2 \leq \mathbb{E}\left(E_{f_g}\right)
\leq (\rho+\epsilon) \rho m^2,\end{equation}
where we take $g$ at random, as always. Let \begin{equation}\label{eq:mastuerzo}
R_g = \frac{1}{n} \sum_{x\in \Omega}
\mathop{\sum_{y,y'\in Z(x)}}_{y\ne y'}
\left|Z_g(y) \cap Z_{g}(y')\right|.
\end{equation}
Then, by (\ref{eq:rempert}),
\begin{equation}\label{eq:vengado}
\mathbb{E}\left(R_g\right)
\leq \rho \left(\frac{\rho m^2}{n} + \epsilon m\right) m^2
= \frac{\rho^2 m^4}{n} + \epsilon \rho m^3 \leq \frac{2 \rho^2 m^4}{n},\end{equation} where we use the assumption $\epsilon\leq \rho m/n$.
Let us now bound the expected value of
$\sum_{x\in \Omega} f_g(x)^2$. Clearly
\begin{equation}\label{eq:nolola}\begin{aligned}
f_g(x)^2 &= \sum_{y,y'\in Z(x)} z_g(y) z_g(y')
\\ &= \sum_{y\in Z(x)} z_g(y)^2 +
\mathop{\sum_{y,y'\in Z(x)}}_{y\ne y'} z_g(y) z_g(y').\end{aligned}
\end{equation}
Now, for $x$ such that $Z(x)$ is non-empty,
\begin{equation}\label{eq:lelo1}
\mathbb{E}\left(\sum_{y\in Z(x)} z_g(y)^2\right) \leq
\sum_{y\in Z(x)} (\rho+\epsilon) m^2
= (\rho+\epsilon) m^3\end{equation}
and
\begin{equation}\label{eq:lelo2}\begin{aligned}
\mathbb{E}\left(\mathop{\sum_{y,y'\in Z(x)}}_{y\ne y'} z_g(y)
z_g(y')\right)
&\leq
\mathop{\sum_{y,y'\in Z(x)}}_{y\ne y'} (\rho^2 +\epsilon) m^2 \\ &=
(\rho^2+\epsilon) m^2\cdot (m^2-m).
\end{aligned}\end{equation}
Therefore,
\begin{equation}\label{eq:raplusplus}\begin{aligned}
\mathbb{E}\left(\frac{1}{n} \sum_{x\in \Omega} f_g(x)^2\right) &\leq
(\rho + \epsilon) \rho m^3 +
(\rho^2+\epsilon) \rho (m^4 - m^3)\\
&=
(\rho-\rho^2) \rho m^3 + (\rho^2+\epsilon) \rho m^4 . \end{aligned}\end{equation}
We have just established a bound on the expectation of the variance:
for
\begin{equation}\label{eq:elpuma}
V_f = \frac{1}{n} \sum_{x\in \Omega} f(x)^2
- \left(\frac{1}{n} \sum_{x\in \Omega} f(x)\right)^2
\end{equation} we quickly see, by (\ref{eq:ebound}) and (\ref{eq:raplusplus}), that
\begin{equation}\label{eq:kostreko}\begin{aligned}
\mathbb{E}\left(V_{f_g}\right)&=
\mathbb{E}\left( \frac{1}{n} \sum_{x\in \Omega} f_g(x)^2\right)
- \mathbb{E}\left(\left(\frac{1}{n} \sum_{x\in \Omega} f_g(x)\right)^2\right)\\
&\leq (\rho-\rho^2) \rho m^3 + (\rho^2+\epsilon) \rho m^4
- \left((\rho+\epsilon) \rho m^2\right)^2
\\ &\leq (1-\rho) \rho^3 m^4 +
\epsilon_1 m^4,\end{aligned}\end{equation} where
\begin{equation}\label{eq:eps1def}
\epsilon_1 = \epsilon \rho + \frac{(1-\rho) \rho^2}{m}.
\end{equation}
We may call $V_{f_g}$ the variance of $f_g$, just as we may call $E_{f_g}$ the expectation of $f_g$.
Now we should give a bound on the variance $\mathbb{V}\left(E_{f_g}\right)$ of $E_{f_g}$. Clearly
\begin{equation}\begin{aligned}
\mathbb{E}\left(E_{f_g}^2\right) &=
\mathbb{E}\left(\left(\frac{1}{n} \sum_{x\in \Omega} f_g(x)
\right)^2\right) =
\frac{1}{n^2}\cdot \mathbb{E}\left(
\sum_{x,x'\in \Omega} f_g(x) f_g(x')\right).\end{aligned}\end{equation}
By the definition of $f_g(x)$ and the fact that, for every $y\in \Omega$ with
$Z(y)\ne \emptyset$, $y\in Z(x)$ for exactly $m$ values of $x\in \Omega$,
\begin{equation}\label{eq:hausarzt}
\begin{aligned} \sum_{x,x'\in \Omega} f_g(x) f_g(x') &= \sum_{x,x'\in \Omega} \sum_{y\in Z(x)\cap Z(x')} z_g(y)^2 + \sum_{x,x'\in \Omega} \mathop{\sum_{y\in Z(x), y'\in Z(x')}}_{y\ne y'} z_g(y) z_g\left(y'\right)\\ &= \mathop{\sum_{y\in \Omega}}_{Z(y)\ne \emptyset} m^2 z_g(y)^2 + \mathop{\mathop{\sum_{y,y'\in \Omega}}_{Z(y),Z(y')\ne \emptyset}}_{y\ne y'} m^2 z_g(y) z_g(y').
\end{aligned}
\end{equation} Much as in (\ref{eq:lelo1}) and (\ref{eq:lelo2}), \[\mathbb{E}\left(\mathop{\sum_{y\in \Omega}}_{Z(y)\ne \emptyset} m^2 z_g(y)^2 \right) \leq
\mathop{\sum_{y\in \Omega}}_{Z(y)\ne \emptyset} m^2\cdot (\rho + \epsilon) m^2 = (\rho + \epsilon) \rho m^4 n \] and \[\begin{aligned} \mathbb{E}\left(\mathop{\mathop{\sum_{y,y'\in \Omega}}_{Z(y),Z(y')\ne \emptyset}}_{y\ne y'} m^2 z_g(y) z_g(y')\right) &\leq \mathop{\mathop{\sum_{y,y'\in \Omega}}_{Z(y),Z(y')\ne \emptyset}}_{y\ne y'} m^2 \cdot (\rho^2 + \epsilon) m^2\\ &\leq (\rho^2+\epsilon) \rho^2 m^4 n^2. \end{aligned}\] Hence \[ \mathbb{E}\left(E_{f_g}^2\right)\leq \frac{(\rho+\epsilon) \rho}{n} m^4 + (\rho^2+\epsilon) \rho^2 m^4, \] and so, by (\ref{eq:ebound}),
\begin{equation}\label{eq:mestron}\mathbb{V}\left(E_{f_g}\right) \leq
\frac{(\rho +\epsilon) \rho}{n} m^4 +
((\rho^2 +\epsilon) - (\rho-\epsilon)^2) \rho^2 m^4
\leq \epsilon_2 m^4
,\end{equation} where \begin{equation}\label{eq:menest}
\epsilon_2 = (1+2\rho) \rho^2 \epsilon + \frac{(\rho + \epsilon) \rho}{n}.
\end{equation} Here, of course, $\mathbb{V}\left(E_{f_g}\right) = \mathbb{E}\left(E_{f_g}^2\right) - \mathbb{E}\left(E_{f_g}\right)^2 = \mathbb{E}\left(\left(E_{f_g}-\mathbb{E}\left(E_{f_g}\right)\right)^2\right)$.
By (\ref{eq:ebound}), (\ref{eq:vengado}), (\ref{eq:kostreko})
and (\ref{eq:mestron}) and Cauchy-Schwarz, we conclude that
\[\begin{aligned}\mathbb{E}&\left(
E_{f_g}^2 - c_1 V_{f_g} - c_2 \left(E_{f_g} - \mathbb{E}\left(E_{f_g}\right)
\right)^2
- c_3 n \cdot R_g\right)\\
&\geq \mathbb{E}\left(E_{f_g}\right)^2 - c_1 \mathbb{E}\left(V_{f_g}\right)
-c_2 \mathbb{V}\left(E_{f_g}\right) - c_3 n \cdot \mathbb{E}\left(R_g\right)
\geq K m^4
\end{aligned}\]
for any $c_1,c_2,c_3>0$, where \[K = (\rho - \epsilon)^2 \rho^2 - \left((1-\rho) \rho^3 + \epsilon_1\right) c_1 - \epsilon_2 c_2 - 2 \rho^2 c_3.\]
We will choose $c_1$, $c_2$, $c_3$ so that $K$ is positive.
Then the probability that
\begin{equation}\label{eq:durud}
V_{f_g}\leq \frac{E_{f_g}^2}{c_1},
\;\;\;\;\; E_{f_g}\geq \sqrt{c_2} \left|E_{f_g} - \mathbb{E}\left(E_{f_g}\right)\right|, \;\;\;\;\; n R_g\leq \frac{E_{f_g}^2}{c_3}
\end{equation}
will be positive. What happens when (\ref{eq:durud}) is the case?
\begin{enumerate}
\item First of all, $E_{f_g}\geq \sqrt{c_2} |E_{f_g} - \mathbb{E}(E_{f_g})|$ implies
\begin{equation}\label{eq:pinor}\frac{\sqrt{c_2}}{\sqrt{c_2}+1}
\mathbb{E}\left(E_{f_g}\right)\leq E_{f_g} \leq \frac{\sqrt{c_2}}{\sqrt{c_2}-1} \mathbb{E}\left(E_{f_g}\right).\end{equation} \item
By Chebyshev's inequality, if (\ref{eq:durud}) is the case, then
for any $\tau>0$, the number of $x\in \Omega$ such
that \[1-\tau \leq \frac{f_g(x)}{E_{f_g}} \leq 1+\tau\]
does not hold is at most $(n V_{f_g}/E_{f_g}^2)/\tau^2 \leq n/c_1 \tau^2$. \item By (\ref{eq:mastuerzo}) and the last inequality in (\ref{eq:durud}), for any $\tau'>0$,
the number of $x\in \Omega$ such that
\[ \mathop{\sum_{y,y'\in Z(x)}}_{y\ne y'}
\left|Z_g(y) \cap Z_{g}(y')\right| \leq \tau' E_{f_g}
\]
does not hold is $\leq R_g n/\tau' E_{f_g} \leq E_{f_g}^2/c_3 \tau' E_{f_g} =
E_{f_g}/c_3 \tau'$. \end{enumerate}
Hence, for $\geq (1-1/c_1 \tau^2) n - E_{f_g}/c_3 \tau'$ values of $x\in \Omega$, by (\ref{eq:disgu}) and (\ref{eq:ebound}), \begin{equation}\label{eq:lerso}\begin{aligned}
\left|S_{P\vee P^g}(x)\right|&\geq {f_g}(x) - \tau' E_{f_g} \geq (1 - \tau - \tau') E_{f_g} \geq \frac{\sqrt{c_2}}{\sqrt{c_2}+1} (1-\tau - \tau') \mathbb{E}\left(E_{f_g}\right)\\ &\geq \frac{\sqrt{c_2}}{\sqrt{c_2}+1} (1-\tau - \tau') (\rho - \epsilon) \rho m^2. \end{aligned} \end{equation} Moreover, by (\ref{eq:ebound}) and (\ref{eq:pinor}), \[\frac{E_{f_g}}{c_3 \tau'} \leq \frac{\sqrt{c_2}}{\sqrt{c_2}-1} \frac{\mathbb{E}(E_{f_g})}{c_3 \tau'} \leq \frac{\sqrt{c_2}}{\sqrt{c_2}-1} \frac{(\rho +\epsilon) \rho}{c_3 \tau'} m^2.\] Thus, by the assumption $m\leq \sqrt{n}/100$, we obtain that \[\left(1-\frac{1}{c_1 \tau^2}\right) n - \frac{E_{f_g}}{c_3 \tau'} \geq \rho' n\] for \begin{equation}\label{eq:mantecado}
\rho' = 1 -\frac{1}{c_1 \tau^2} -
\frac{\sqrt{c_2}}{\sqrt{c_2}-1} \frac{(\rho +\epsilon) \rho}{100^2 c_3 \tau'}.
\end{equation}
It is time to choose the parameters $c_1$, $c_2$, $c_3$, $\tau$ and $\tau'$. We let \begin{equation}\label{eq:progro}
c_1=\frac{1}{4\delta_1},\;\;\;\;\;
\delta_1 = \frac{(1-\rho) \rho^3 + \epsilon_1}{\rho^4},\;\;\;\;\;
c_2 = \frac{1}{4 \epsilon_2},\;\;\;\;\;
c_3 = \frac{\rho^2}{8},\end{equation}
and $\tau = 1/4$, $\tau' = 1/12$. Then $K\geq (\rho-\epsilon)^2-3/4 > 0$, by our assumptions on $\rho$ and $\epsilon$.
In fact, since we are assuming $\epsilon\leq 1/1000$, $m\geq 1000$, $\rho\geq 999/1000$ and $n\geq (100 m)^2 \geq 10^{10}$, \[\epsilon_1 \leq \epsilon + \frac{1-\rho}{m} \leq 0.001001,\] \[\epsilon_2 \leq 3 \epsilon + \frac{(1+\epsilon)}{n} \leq 0.0030001,\] \[\delta_1 \leq \rho^{-1} - 1 + \frac{\epsilon_1}{\rho^4} \leq \frac{1000}{999} - 1 + \frac{0.001001}{(999/1000)^4} \leq 0.002007\]
by (\ref{eq:eps1def}), (\ref{eq:menest}), and (\ref{eq:progro}). Hence, by (\ref{eq:lerso}), \[\begin{aligned}
s_{P\vee P^g}(x) &= \left|S_{P\vee P^g}(x)\right| \geq \frac{\sqrt{c_2}}{\sqrt{c_2}+1} (1 - \tau - \tau') (\rho - \epsilon) \rho m^2\\ &\geq \frac{1}{1 + \sqrt{4 \epsilon_2}} \left(1 - \frac{1}{4} - \frac{1}{12}\right) \frac{998}{1000} \frac{999}{1000} m^2
> \frac{m^2}{2}\end{aligned}\] for at least $\rho' n$ elements $x$ of $\Omega$. Moreover, by (\ref{eq:mantecado}), \[\rho' \geq 1 - \frac{4 \delta_1}{\tau^2} - \frac{1}{1 - \sqrt{4 \epsilon_2}} \frac{1 + \epsilon}{100^2 \cdot \frac{\rho^2}{8} \cdot \frac{1}{12}} > 0.86 > \frac{1}{2}.\] \end{proof}
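The numerical inequalities at the end of this proof can be spot-checked in a few lines. The following Python sketch (ours, purely illustrative) evaluates the constants of (\ref{eq:progro}) at one admissible boundary point of the parameter range; it is a sanity check at a single point, not a proof over the whole range.

```python
from math import sqrt

# One admissible parameter point: eps <= 1/1000, rho >= 999/1000,
# m >= 1000, n >= (100 m)^2.
eps, rho, m, n = 1 / 1000, 999 / 1000, 1000, 10**10

eps1 = eps * rho + (1 - rho) * rho**2 / m
eps2 = (1 + 2 * rho) * rho**2 * eps + (rho + eps) * rho / n
delta1 = ((1 - rho) * rho**3 + eps1) / rho**4
c1, c2, c3 = 1 / (4 * delta1), 1 / (4 * eps2), rho**2 / 8
tau, taup = 1 / 4, 1 / 12

# K must be positive for the probabilistic argument to go through.
K = ((rho - eps)**2 * rho**2
     - ((1 - rho) * rho**3 + eps1) * c1
     - eps2 * c2
     - 2 * rho**2 * c3)
assert K > 0

# The coefficient of m^2 in the final size bound exceeds 1/2.
factor = sqrt(c2) / (sqrt(c2) + 1) * (1 - tau - taup) * (rho - eps) * rho
assert factor > 1 / 2

# The final proportion rho' exceeds 1/2.
rhop = (1 - 1 / (c1 * tau**2)
        - (sqrt(c2) / (sqrt(c2) - 1)) * (rho + eps) * rho
          / (100**2 * c3 * taup))
assert rhop > 1 / 2
```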
\begin{prop}\label{prop:verodiro}
Let $P$ be a partition of a finite set $\Omega$ with $|\Omega|=n$.
Assume that at least $\rho n$ elements of $\Omega$, $\rho>0$, lie
in sets in $P$ of size $>1$.
Let $A\subset \Sym(\Omega)$ be a set of generators of a $4$-transitive subgroup of $\Sym(\Omega)$. Let $h\in \Sym(\Omega)$ have support of size
$n-c$, where $0\leq c<n$.
Then there are $g_1,\dotsc,g_k \in
\left(A \cup A^{-1}\cup \{e\}\right)^v$, $k =
O(\log \log n) + O_{\rho,c}(1)$,
$v = O(n^{10})$, such that the partition $Q_k$ defined
by \[Q_0 = P,\;\;\;\;\;\;\;\;\;\;\;\;
Q_j = Q_{j-1} \vee Q_{j-1}^{g_j h g_j^{-1}}\;\;\;\;\;\text{for
$1\leq j\leq k$}\]
is the trivial partition of $\Omega$. \end{prop} An example given by W. Sawin\footnote{On MathOverflow, in comments to
\cite{286057}.} suggests that $4$-transitivity is a necessary assumption. \begin{proof}
Let $A_0\subset A\cup A^{-1}$ be as in Lemma \ref{lem:dustu}, and let
$g\in A_0^v\subset \left(A\cup A^{-1}\right)^v$
be the outcome of a random walk of length $v$,
where $v = \lceil n^9 \log(n^4/\epsilon)\rceil$ for a given $\epsilon>0$.
Then, by Prop.~\ref{prop:chudo} applied to the Schreier graph
$\Gamma(G,A_0;\Omega^{(4)})$, given any two elements $\vec{v}_1$,
$\vec{v}_2$ of the set $\Omega^{(4)}$ of quadruples of distinct
elements of $\Omega$, the probability that $g$ takes $\vec{v}_1$
to $\vec{v}_2$ lies between $(1-\epsilon)/\left|\Omega^{(4)}\right|$
and $(1+\epsilon)/\left|\Omega^{(4)}\right|$.
A moment's thought shows that, since the support of $h$ is of size
$n-c$, the number of quadruples $(r,s,r',s')\in \Omega^{(4)}$ such
that $r^h = r'$ and $s^h = s'$ is at least $(n-c) (n-c-3)$ and at most
$(n-c) (n-c-1)$. Given any $(x,y,x',y')\in \Omega^{(4)}$,
the probability that $g h g^{-1}$ takes $(x,y)$ to $(x',y')$
equals the probability that $g$ takes $(x,y,x',y')$ to a tuple
$(r,s,r',s')$ such that $r^h = r'$ and $s^h = s'$, and so lies between
$(1-\epsilon) (n-c) (n-c-3)/|\Omega^{(4)}|$ and
$(1+\epsilon) (n-c) (n-c-1)/|\Omega^{(4)}|$.
Therefore, for any $(x,y)\in \Omega^{(2)}$
and any subset $S\subset \Omega^{(2)}$, the probability that
$(x,y)^{g h g^{-1}} \in S$ is at least
\begin{equation}\label{eq:armador}
(1-\epsilon) \frac{(n-c) (n-c-3)}{\left|\Omega^{(4)}\right|}
(|S| - 4 n)\end{equation}
(Here $|S|- 4 n$ is a lower bound for the number of pairs in $S$
not containing $x$ or $y$.)
We bound (\ref{eq:armador}) from below by
\[(1-\epsilon) \frac{(n-c) (n-c-3)}{(n-2) (n-3)}
\frac{|S|}{n (n-1)} - \frac{4 n (n-c) (n-c-3)}{n (n-1) (n-2) (n-3)}
\geq \frac{|S|}{\left|\Omega^{(2)}\right|} - \epsilon',\]
where
\begin{equation}\label{eq:epsp}\begin{aligned}
\epsilon' &= \epsilon +
\left(1 - \frac{(n-c) (n-c-3)}{(n-2) (n-3)}\right) +
\frac{4 (n-c) (n-c-3)}{(n-1) (n-2) (n-3)}
\\
&\leq \epsilon +
\frac{c}{n-3} + \frac{c}{n} + \frac{4 n}{(n-1) (n-2)}
\leq \epsilon + \frac{8 + 3 c}{n},\end{aligned}\end{equation}
by $n\geq 6$. Since we can apply the same bound to the complement
of $S$, we conclude that the distribution of $(x,y)^{g h g^{-1}}$ is at
total variation distance at most $\epsilon'$ from the uniform
distribution. We can apply the same statement to $h^{-1}$ instead of $h$.
Hence, we can apply
Lemmas \ref{lem:coeur} and \ref{lem:coeur2} with $g h g^{-1}$ instead
of $g$, and $\epsilon'$ instead of $\epsilon$, provided that
their conditions on $\epsilon'$, $\rho$, $m$ and $n$ hold.
If $n$ is bounded by a constant $C$, say, then we can proceed as follows:
at every step, we look at an element $L$
of $Q_j$ of maximal length (size), and let $g$
be as above, with $\epsilon=1/100$, say. (We could even take $v=\lceil n^5
\log(n^2/\epsilon)\rceil$, and look only at the Schreier graph
$\Gamma(G,A;\Omega)$.) Then any given element of $L$ is sent to any
given element of $\Omega\setminus L$ with positive probability, and so,
trivially, $L$ becomes larger in $Q_{j+1}= Q_{j} \vee Q_{j}^{g}$ with
positive probability, i.e., for at least one $g$. We set $g_{j+1}$
equal to that $g$. After at most $k=C$ steps, we obtain, then, that
$Q_k$ consists of a single set, equal to $\Omega$, and so we are done.
We can assume, then, that $n$ is greater than a constant $C$.
We start by applying parts (\ref{it:turandot1}) and (\ref{it:turandot2}) of Lemma \ref{lem:coeur}
a bounded number of times so that we get to a state in which
either the conditions of Lemma \ref{lem:coeur2} are fulfilled
or $m>\sqrt{n}/50$. (If the conditions are already fulfilled, then, of course, this
initial stage may be skipped.) The initial value $m_0$ of $m$ will be $2$.
We let $\rho_0=\rho$.
We let $\epsilon=\rho/4000$; since we can assume that
$n\geq 4000 (8+3 c)/\rho$,
we obtain from (\ref{eq:epsp})
that $\epsilon' \leq \rho/4000 + (8+3 c)/n
\leq \rho/2000$. In particular,
the condition $\epsilon'\leq \rho/100$
in part (\ref{it:turandot2}) of Lemma \ref{lem:coeur} is satisfied.
\begin{enumerate}
\item We begin by applying part (\ref{it:turandot1}) of Lemma \ref{lem:coeur}
repeatedly, with $m$ held constant.
At the $j$th step, Lemma \ref{lem:coeur}
guarantees us the existence of a $g_j\in A_0^v$ such that,
for $Q_j = Q_{j-1} \vee Q_{j-1}^{g_j h g_j^{-1}}$,
the proportion $\rho_{j}$ of elements $x\in \Omega$
for which $s_{Q_j}(x)\geq m$ satisfies
\[\begin{aligned}
1-\rho_j &\leq (1-\rho_{j-1})^2 + \epsilon'
\leq 1 - 2 \rho_{j-1} + \frac{\rho_{j-1}}{2000} + \rho_{j-1}^2\\
&\leq 1 - \frac{3\rho_{j-1}}{2} + \frac{\rho_{j-1}^2}{2} = (1 -\rho_j) \left(1 - \frac{\rho_j}{2}\right)\end{aligned}\]
provided that $\rho_{j-1}\leq 999/1000$.
Thus, letting $j$ be at least about
\[
\frac{\log 1000}{\left|\log\left(1-\frac{\rho_0}{2}\right)\right|}\]
(which is $O(1/\rho_0)$),
we obtain a new value $\rho_j$ of $\rho$
such that $\rho_j\geq 999/1000$,
say. \item If $m\geq 1000$, we stop. Otherwise, we apply part (\ref{it:turandot2}) of Lemma \ref{lem:coeur}. We are guaranteed the existence of an element $g_{j+1}\in A_0^v$ such that, for $Q_{j+1} = Q_j \vee Q_j^{g_{j+1} h g_{j+1}^{-1}}$, the proportion of elements $x\in \Omega$ such that $s_{Q_{j+1}}(x)\geq (1+\rho_j/3) m$ is at least $\rho_j^2/8$. We choose that $g_{j+1}$, let our new values $m_{j+1}$ and $\rho_{j+1}$ of $m$ and $\rho$ be \[m_{j+1} = \left\lceil \left(1+\frac{\rho_j}{3}\right) m_j \right\rceil \geq \left(1 + \frac{333}{1000}\right) m_j ,\;\;\;\;\; \rho_{j+1} = \frac{\rho_j^2}{8} \geq \frac{(999/1000)^2}{8} > \frac{1}{9}, \] and go back to step $1$.
\end{enumerate}
It is clear that, after $s = O(1/\rho) + O(\log \max(c,1))$ steps, we obtain a
partition $Q_s$ such that the proportion of elements
$x$ of $\Omega$ satisfying $s_{Q_s}(x)\geq 1000 \cdot \max(c,1)$
is at least $999/1000$.
Now -- and here we are at the heart of the proof of this proposition --
we go again through an iterative procedure, only we will now
be alternating a bounded number of applications of
part (\ref{it:turandot1})
of
Lemma \ref{lem:coeur} and an application of Lemma \ref{lem:coeur2}, rather
than $O(1/\rho)$ applications of part (\ref{it:turandot1}) of Lemma
\ref{lem:coeur} and
an application of part (\ref{it:turandot2}) of Lemma \ref{lem:coeur}.
Throughout the iteration, $\rho$ stays bounded from below by $1/2$.
We let $\epsilon = 1/n$, so that, by (\ref{eq:epsp}),
\begin{equation}\label{eq:karton}
\epsilon' \leq \epsilon + \frac{8 + 3 c}{n} = \frac{9 + 3 c}{n},
\end{equation}
and thus $\epsilon'\leq 1/1000$ and
$\epsilon'\leq 500 \max(c,1)/n \leq \rho m/n$ both hold.
The conditions of Lemma \ref{lem:coeur2} are thus satisfied for as long
as $m\leq \sqrt{n}/100$.
The important part this time is that
Lemma \ref{lem:coeur2} enables
us to take $m_{j+1} = \lceil m_j^2/2\rceil$, rather than
$m_{j+1} = \lceil (1+\rho/3) m_j\rceil$
as in Lemma \ref{lem:coeur}(\ref{it:turandot2}). Thanks to this fact, after
only
$O(\log \log n)$ steps, we obtain a partition $Q_{s'}$ such that the
proportion of elements $x$ of $\Omega$
satisfying $s_{Q_{s'}}(x)> \sqrt{n}/100$ is at least $999/1000$.
We are almost done.
We apply part (\ref{it:pureplon}) of Lemma \ref{lem:coeur}
with $m = \lceil \sqrt{n}/100\rceil$, $\rho=999/1000$
and $\epsilon=1/n$ (and thus $\epsilon'$ as in (\ref{eq:karton})).
We obtain a partition
$Q_{s'+1} = Q_{s'} \vee Q_{s'}^g$ containing at least one set of size
$\geq \min(\rho n/10, (\rho-\epsilon) m^2/2) > ((998/1000)/20000) n > n/20100$.
Tautologically, the proportion $\rho$ of elements of
$\Omega$ lying in that set is $>1/20100$.
We then alternate
a bounded number of applications of part (\ref{it:turandot1}) of Lemma \ref{lem:coeur}
and an application of part (\ref{it:turandot2}) of the same Lemma,
iterating a bounded number of times,
to obtain a partition $Q_{s''}$ such that at least $n/2$ elements
of $\Omega$ lie in sets of size $>n/2$.
Finally, we
apply part (\ref{it:turandot1}) (with $\epsilon=1/n$)
$O(\log \log n)$ times, and obtain a partition $Q_{s'''}$
such that the proportion of elements of $\Omega$ lying in sets
in the partition of size $>n/2$ is at least $1-2 \epsilon' \geq$
1 - (18+6 c)/n$.
Since $|\Omega|=n$, there can be at most one set in the partition
of size $>n/2$;
that is, at least $n-(18+6 c)$ elements of $\Omega$ lie in one and the same
set $S$.
We now proceed as in the case of $n$ bounded, choosing at each step a
$g\in A_0^v$ that increases the size of the set $S$
by at least $1$. After a bounded number of steps, we obtain that all elements
of $\Omega$ lie in the same set of the partition, i.e., the final partition
consists of the single set $\Omega$. \end{proof}
We come to the main result of this section. \begin{prop}\label{prop:siniestro}
Let $\Omega$ be a finite set of size $|\Omega|=n$.
Let $g_0\in \Sym(\Omega)$ have support of size $\geq \alpha n$,
$\alpha>0$.
Let $A\subset \Sym(\Omega)$ with $\langle A\rangle$ $4$-transitive.
Then there are $\gamma_i\in (A\cup A^{-1} \cup \{e\})^{n^6}$, $1\leq i\leq \ell$, where
$\ell=O((\log n)/\alpha)$, and
$g_i\in (A\cup A^{-1}\cup \{e\})^{v}$, $1\leq i\leq k$, $v=O(n^{10})$, $k = O(\log \log n)$, such that, for
\begin{equation}\label{eq:koppelia}h = \gamma_1 g_0 \gamma_1^{-1} \cdot \gamma_2 g_0 \gamma_2^{-1} \dotsb
\gamma_\ell g_0 \gamma_\ell^{-1},\end{equation}
the group
\begin{equation}\label{eq:kalermo}
\langle h, g_1 h g_1^{-1}, g_2 h g_2^{-1},\dotsc , g_k h g_k^{-1}\rangle
\end{equation}
is transitive. \end{prop} \begin{proof}
Let $\gamma_1,\dotsc,\gamma_\ell$ be as in Lemma \ref{lem:chachava} (applied
with $g_0$ instead of $g$ and $A\cup A^{-1}\cup \{e\}$ instead of $A$), so that the element $h$ defined in
(\ref{eq:koppelia}) has support of size $n-c$ with $c=0$ or $c=1$.
(If $n$ is less than a constant, we do not need to apply Lemma
\ref{lem:chachava}; we can simply let $h = g_0$, as
$c = n-|\supp(h)|$ will be bounded.)
Write $h$ as a product of disjoint cycles, and let
$P$ be the partition of $\Omega$ given by the cycles.
We can now apply Proposition \ref{prop:verodiro}
with $\rho = (n-1)/n$ and $c=0$ or $c=1$. It is clear, inductively,
that, for $0\leq j\leq k$,
$Q_j$ is finer (not necessarily strictly) than
the partition of $\Omega$ given by the orbits of
\[\langle h, g_1 h g_1^{-1}, g_2 h g_2^{-1},\dotsc , g_j h g_j^{-1}\rangle.\]
Since $Q_k$ is the trivial partition, it follows that the group
in (\ref{eq:kalermo}) has a single orbit. \end{proof}
\section{Babai-Seress revisited}\label{sec:babaiseress} The proof of the main result in \cite{zbMATH00091732} has what looks like a bookkeeping mistake, or rather two mistakes, at the very end (\cite[p. 242]{zbMATH00091732}, ``Proof of Theorem 1.4''): the right side of the last displayed equation has a factor of $\diam(\Alt(m(G)))$ where it should have a product of squares of several such factors. We will show how to fix the result and its proof.
(The fact that \cite{zbMATH00091732} could not be right at this point was first pointed out to the author by L. Pyber; he also shared ideas that can be used in addressing the gap, including one that we follow and mention below.)
The intermediate result \cite[Thm.~2.3]{zbMATH00091732} is actually correct. We will prove it again here (\S \ref{subs:imptrees}), in part for the sake of clarity, and in part so as to give an improved version (Prop.~\ref{prop:obsidian}). We will be following \cite[\S 3--4]{zbMATH00091732} quite closely.
\subsection{Imprimitivity and structure trees}\label{subs:imptrees} A {\em tree} is a graph without cycles, with one vertex labeled as the {\em root vertex}, or {\em root} for short. The vertices at {\em level} $j$ are those at distance $j$ from the root. The {\em leaves} of a tree are the vertices at maximal distance from the root. That maximal distance is called the {\em height} $h$ of the tree. A {\em child} of a vertex $v$ at {\em level} $j$, $0\leq j<h$, is a vertex at level $j+1$ connected to $v$ by an edge. A {\em descendant} of $v$ is a child of $v$, or a child of a child, etc.
The following definition has its origin in the study of algorithms on permutation groups, and in particular \cite{MR973656}. It provides a convenient way to work with permutation groups that may not be primitive.
\begin{defn}\label{def:whathaf}
Let $G\leq \Sym(\Omega)$ be a transitive permutation group. A {\em structure tree}
$T$ for $(G,\Omega)$ is defined as follows. The set of leaves is $\Omega$.
If $G$ is primitive, then it consists of a root vertex and, for each leaf, an edge
connecting the root to the leaf. If $G$ is not primitive, we choose
a maximal block system $B_1,\dotsc,B_k$, define a structure tree for $G$ as a group
acting transitively on
$B_1,\dotsc,B_k$, and then draw edges from the vertex corresponding to each $B_i$
to the elements of $B_i$ (which then become the leaves). \end{defn} It is clear that $G$ has a natural action on $T$. Given a vertex $v$ of $T$, we define the {\em stabilizer} $G_v \leq G$ to be the setwise stabilizer of the block corresponding to $v$ (or the stabilizer of the element of $\Omega$ corresponding to $v$, if $v$ is a leaf). If $w$ is a descendant of $v$, then $G_w$ is a subgroup of $G_v$. For $v$ a vertex that is not a leaf, define $K_v$ to be the intersection $\cap_w G_w$, where $w$ ranges over all children of $v$. It is clear that $K_v$ is a normal subgroup of $G_v$. It is also clear that $G_v/K_v$ acts primitively on the set of children of $v$, due to the maximality of the block systems used in Definition \ref{def:whathaf}.
It is easy to see from the definition that $G$ acts transitively on all vertices of $T$ at a given level. It follows that the {\em normal core} $N_j = \bigcap_{g\in G} g G_v g^{-1}$ of the stabilizer $G_v$ of a vertex $v$ depends only on the level of $v$. Indeed, it is the intersection $\cap_w G_w$, where $w$ ranges over all $w$ at the same level as $v$. Moreover, if the level $j$ of a vertex $v$ is less than the height of the tree (i.e., $v$ is not a leaf), then the normal core $\bigcap_{g\in G} g K_v g^{-1}$ of $K_v$ equals $N_{j+1}$.
Part of the reason for working with the groups $G_v$, rather than just with $N_j$, is that $G_v$ acts transitively on the block corresponding to $v$, whereas $N_j$ may not.
The following lemmas are quoted as ``folklore'' in \cite[\S 3]{zbMATH00091732}.
\begin{lem}\label{lem:burgunto}
Let $H$ be a subgroup of a direct product of simple groups
$M_1\times M_2\times \dotsc \times M_k$ such that the projection $\pi_i:H\to M_i$
is surjective for every $1\leq i\leq k$.
Then $H$ is isomorphic to a direct product $\prod_{i\in I} M_i$,
where $I\subset \{1,2,\dotsc,k\}$. \end{lem} \begin{proof}
If the projection $\phi:H\to M_2\times M_3\times \dotsc \times M_k$ is injective,
we apply the lemma to $\phi(H)\subset M_2\times \dotsb \times M_k$ and are done, by
induction. Suppose $\phi$ is not injective. Let $h_1,h_2\in H$ be distinct elements
such that $\phi(h_1)=\phi(h_2)$. Then $\phi(h_1^{-1} h_2) = e$, and so
$h = h_1^{-1} h_2$ lies in $H\cap M_1$. Let $g\in M_1$ be arbitrary. There is an
element $g'$ of $H$ mapped to $g$ by $\pi_1$; conjugating $h$ by it,
we obtain $g' h (g')^{-1} = g h g^{-1}$, which must then lie in $H\cap M_1$.
Since we can do as much for every $g\in M_1$, we conclude that
$H\cap M_1$ must contain the subgroup of $M_1$ generated by all elements
of the form $g h g^{-1}$. Since $M_1$ is simple, that subgroup is precisely
$M_1$, and so $H\cap M_1 = M_1$.
Let $K = H\cap (\{e\}\times M_2\times \dotsb \times M_k)$. Since
$H\cap M_1 = M_1$, we know that $H\sim M_1\times K$.
For each $2\leq i\leq k$, the image $\pi_i(K)$ is invariant under conjugation
by $\pi_i(H) = M_i$, and thus must be either $\{e\}$ or $M_i$. We eliminate all
indices $i$ for which $\pi_i(K) = \{e\}$, and apply the Lemma inductively to
$K$ as a subgroup of the direct product of the remaining $M_i$. \end{proof}
\begin{lem}\label{lem:selfcop}
Let $H_1 \triangleleft H_2 \leq G$, $H_2/H_1$ simple.
Let $N_i = \cap_{g\in G} g H_i g^{-1}$. Then $N_2/N_1$ is isomorphic to a direct
product of copies of $H_2/H_1$. \end{lem} \begin{proof}
We may assume that $N_2\ne N_1$.
By the second isomorphism theorem, $H_1\cap N_2$ is a normal
subgroup of $H_2\cap N_2 = N_2$, and $N_2/(H_1\cap N_2)$ is isomorphic to
$N_2 H_1/ H_1$, which is a normal subgroup of $H_2/H_1$. Since $H_2/H_1$ is
simple, that subgroup is either trivial or all of $H_2/H_1$. If it were trivial, then
$N_2 \leq H_1$, and so $N_2 \leq g H_1 g^{-1}$ for every
$g\in G$; it would follow immediately that $N_2 = N_1\cap N_2 = N_1$. Thus,
we may assume that $N_2/(H_1\cap N_2)$ is isomorphic to $H_2/H_1$.
The same argument applied to any conjugate $g H_1 g^{-1}$, $g\in G$,
instead of $H_1$ shows that
$N_2/(g H_1 g^{-1} \cap N_2)$ is isomorphic to $H_2/H_1$.
Now, $N_2/N_1$ is isomorphic to its image under the natural map
$N_2/N_1 \to \prod_{g\in G} N_2/(g H_1 g^{-1}\cap N_2)$, since $N_1 = \bigcap_{g\in G}
(g H_1 g^{-1}\cap N_2)$; the projection of that image to each factor is surjective. We apply Lemma \ref{lem:burgunto}, and obtain that
$N_2/N_1$ is isomorphic to a direct product of groups of the form
$N_2/(g H_1 g^{-1}\cap N_2) \sim H_2/H_1$. \end{proof}
The following is an extremely useful (and by now standard) consequence of the Classification Theorem and the O'Nan-Scott theorem. This is the one way in which the Classification Theorem is needed for the proof of our results. \begin{prop}[\cite{MR599634}, \cite{MR758332}]\label{prop:camlie}
Let $G\leq \Sym(\Omega)$ be a primitive group, where $|\Omega|=n$. Then either
\begin{enumerate}
\item\label{it:uno} $|G|\leq n^{O(\log n)}$, or
\item\label{it:duo}
there is a subgroup $N\triangleleft G$, $\lbrack G:N\rbrack\leq n$,
isomorphic to a direct product $\Alt(m)^r = \Alt(m)\times \dotsc \times \Alt(m)$, where $r\geq 1$, $m\geq 5$ and
$n = \binom{m}{k}^r$
for some $1\leq k\leq m-1$.
\end{enumerate} \end{prop} \begin{proof}
Just a few words on how the statement follows from
\cite[Main Thm.]{MR758332}.
Case (ii) there asserts that there
is a set $\Delta\subset \Omega$ with $|\Delta|<9 \log_2 n$ such that
$G_{(\Delta)}=\{e\}$; since $\lbrack G: G_{(\Delta)}\rbrack\leq n^{|\Delta|}$,
it follows immediately that conclusion
(\ref{it:uno}) holds.
Assume, then, that we are in case (i) in \cite[Main Thm.]{MR758332};
that case gives us
a subgroup $N$ as in (\ref{it:duo}) here. It also gives us that
$m\geq 2$, $\lbrack G:N\rbrack \leq 2^r r!$
and $n = \left(\binom{m}{k}\right)^r$ for some $1\leq k\leq m-1$.
Clearly, $n\geq m^r$.
If $m\geq \max(2 r,5)$, then $\lbrack G: N\rbrack \leq (2 r)^r \leq m^r\leq n$,
and so we obtain conclusion (\ref{it:duo}). If $m<\max(2 r,5)$, then,
since $r\leq \log_m n\leq \log_2 n$, we see that
$|N| = (m!/2)^r\leq m^{m r} < n^m < n^{\max(2 r,5)} \leq n^{O(\log n)}$, whereas
$\lbrack G:N\rbrack \leq 2^r r!\leq (2 r)^r \leq n^{O(\log n)}$.
Hence $|G| = n^{O(\log n)}$, that is, conclusion (\ref{it:uno}) holds. \end{proof}
The motivation for wanting subgroups $\Alt(m)$ with $m\geq 5$ is of course
that $\Alt(m)$ is then simple.
We could do without the following bound\footnote{Thanks are due to D. Holt
for the reference.}, in that using a trivial bound in case (\ref{it:duo}) of Prop.~\ref{prop:camlie} would be enough for our purposes; our intermediate results would become somewhat weaker, but our final results (Thms.~\ref{thm:jukuju} and \ref{thm:molop}) would not be affected. At the same time, there is no reason to avoid the lemma we are about to state. It does use the Classification Theorem, but only in the sense that it uses \cite[Main Thm.]{MR758332} (or rather the version with sharp constants in \cite{MR1943938}). \begin{lem}\label{lem:gprv}(\cite[Thm.~1.3]{gprv})
Let $G\leq \Sym(\Omega)$ be a primitive group, where $|\Omega|=n$.
Let $\{e\} = H_0 \triangleleft H_1 \triangleleft\dotsc \triangleleft H_\ell=G$. Then
\[\ell \leq \frac{8}{3} \frac{\log n}{\log 2} - \frac{4}{3}.\] \end{lem} The trivial bound, assuming Prop.~\ref{prop:camlie}, would be $\ell \ll (\log n)^2$.
\begin{lem}[\cite{MR860123}]\label{lem:babsub}
The length $\ell$ of any subgroup chain $\{e\} = H_0 \lneq H_1 \lneq \dotsc \lneq H_\ell
= \Sym(n)$ of $\Sym(n)$ is at most $2 n-3$. \end{lem}
The trivial bound would be $\ell \leq (\log |\Sym(n)|)/\log 2 = O( n \log n)$.
We will now prove a version of \cite[Thm.~2.3]{zbMATH00091732}. \begin{prop}\label{prop:obsidian}
Let $G\leq \Sym(\Omega)$ be transitive, $|\Omega|=n$. Then $G$ has a series of normal subgroups
$\{e\} = H_0 \triangleleft H_1 \triangleleft \dotsc \triangleleft H_\ell = G$,
$H_i\triangleleft G$, with $\{1,\dotsc,\ell\}$ being partitioned into two sets,
$A$, $B$, such that these properties hold:
\begin{enumerate}
\item\label{it:obsprod} each quotient $H_i/H_{i-1}$, $1\leq i\leq \ell$, is a direct product of at most $2 n$ copies of
a simple group $M_i$,
\item\label{it:obsii} for each $i\in A$, the group $M_i$ is an alternating group $\Alt(m_i)$ with $m_i\geq 5$,
\item\label{it:obsiii} $m = \prod_{i\in A} m_i$ satisfies $m\leq n$,
\item\label{it:obs3.5} if $G\ne \Alt(\Omega)$ and $G\ne \Sym(\Omega)$, then
$m_i\leq n/2$ for every $i\in A$,
\item\label{it:obsiv} $\prod_{i\in B} |M_i| = (n/m)^{O(\log(n/m))} \cdot m = n^{O(\log n)}$,
\item\label{it:obsv} $\ell = O\left(\log n\right)$.
\end{enumerate}
All implied constants are absolute. \end{prop} The series $\{e\} = H_0 \triangleleft H_1 \triangleleft \dotsc \triangleleft H_\ell = G$ is a refinement of the series $\{e\} = N_h \triangleleft \dotsc \triangleleft N_0 = G$ defined by the structure tree as above. \begin{proof}
We construct a structure tree as in Def.~\ref{def:whathaf}.
We choose a leaf $v$ and denote by $v_0, v_1,\dotsc, v_{h-1}, v_h = v$ all
vertices on the path from the root $v_0$ to $v$. For $0\leq i\leq h-1$,
let $G_i = G_{v_i}/K_{v_i}$.
Apply Prop.~\ref{prop:camlie} with $G_i$ instead of $G$, and with the set
of children of $v_i$ instead of $\Omega$. Write $n_i$ for the number of
children of $v_i$. Clearly, $\prod_{0\leq i\leq h-1} n_i = n$, and so
$h \leq (\log n)/\log 2 = O(\log n)$.
If conclusion (\ref{it:uno})
holds, we simply take
a composition series \[\{e\} = S_{i,0} \triangleleft S_{i,1} \triangleleft \dotsc \triangleleft S_{i,\ell_i} = G_i\]
of $G_i$. By Lemma \ref{lem:gprv},
$\ell_i \ll \log n_i$. Write $\tilde{S}_{i,j}$ for the preimage of
$S_{i,j}$ under the map $G_{v_i} \to G_i = G_{v_i}/K_{v_i}$. Let
\[H_{i,j} = \bigcap_{g\in G} g \tilde{S}_{i,j} g^{-1}.\]
By Lemma \ref{lem:selfcop}, for
$1\leq j\leq \ell_i$, $H_{i,j}/H_{i,j-1}$ is a direct product of copies
of the simple group $M_{i,j} = \tilde{S}_{i,j}/\tilde{S}_{i,j-1} = S_{i,j}/S_{i,j-1}$.
By Lemma \ref{lem:babsub}, there are at most $2 n$ such copies. Evidently,
\[\prod_{1\leq j\leq \ell_i} \left|M_{i,j}\right| = |G_i| = n_i^{O(\log n_i)}.\]
We include every one of the groups $M_{i,j}$ in the set $B$ (to be implicitly
redefined as a set of indices at the end of the proof).
If conclusion (\ref{it:duo}) of Prop.~\ref{prop:camlie} holds, we
write $r_i$ for $r$, $m_i$ for $m$ and $k_i$ for $k$, and let
\[\{e\} = S_{i,r_i} \triangleleft S_{i,r_i+1} \triangleleft \dotsc \triangleleft S_{i,\ell_i} = G_i/N\]
be a composition series of $G_i/N$. Since $|G_i/N|\leq n_i$, we see that $\ell_i = r_i + O(\log n_i) = O(\log n_i)$. Given that $N\sim \Alt(m_i)^{r_i}$, we can write \[\{e\} = A_{i,0} \triangleleft A_{i,1}\triangleleft \dotsc \triangleleft A_{i,r_i} = N,\] where $A_{i,j}/A_{i,j-1} \sim \Alt(m_i)$ for $1\leq j\leq r_i$. This time, we define $\tilde{A}_{i,j}$ to be the preimage of $A_{i,j}$ under the map $G_{v_i} \to G_i = G_{v_i}/K_{v_i}$, and $\tilde{S}_{i,j}$ to be the image of
$S_{i,j}$ under the composition $G_{v_i} \to G_i \to G_i/N$.
Let \[H_{i,j} = \begin{cases} \bigcap_{g\in G} g \tilde{A}_{i,j} g^{-1} &\text{if $0\leq j\leq r_i$,}\\
\bigcap_{g\in G} g \tilde{S}_{i,j} g^{-1}
&\text{if $r_i < j\leq \ell_i$.}\end{cases}\] We let $M_{i,j} = S_{i,j}/S_{i,j-1}$ for $r_i < j\leq \ell_i$, and $M_{i,j} = A_{i,j}/A_{i,j-1} \sim \Alt(m_i)$ for $1\leq j \leq r_i$.
We know from Prop.~\ref{prop:camlie} that $\binom{m_i}{k_i}^{r_i}\leq n_i$, where $1\leq k_i\leq m_i-1$; since $\binom{m_i}{k_i}=\binom{m_i}{m_i-k_i}$, we may assume that $k_i\leq m_i/2$. If $r_i\geq 2$, then $m_i\leq \lfloor \sqrt{n_i}\rfloor \leq n_i/2\leq n/2$; if $r_i=1$ and $k_i\geq 2$, then $m_i (m_i-1) \leq 2 n_i$ and so, since $m_i\geq 5$, $m_i\leq n_i/2\leq n/2$. If $r_i=1$ and $k_i=1$, then $m_i = n_i$. In that case, if $G$ is not primitive, then $m_i=n_i\leq n/2$, whereas, if $G$ is primitive, $m_i=n_i=n$ and so $\Alt(n)\leq G$.
For all $1\leq j\leq \ell_i$, $M_{i,j}$ is simple.
By Lemma \ref{lem:selfcop}, for $1\leq j\leq \ell_i$,
$H_{i,j}/H_{i,j-1}$ is a direct product of copies
of the simple group $M_{i,j}$;
by Lemma \ref{lem:babsub},
there are at most $2 n$ such copies.
We include $M_{i,j} \sim \Alt(m_i)$ in $A$ for
$1\leq j\leq r_i$
and $M_{i,j}$ in $B$ for $r_i<j\leq \ell_i$.
It is clear -- whether conclusion (\ref{it:uno}) or (\ref{it:duo}) holds --
that, for any $i$ less than the height $h$ of our tree,
$H_{i,0} = \cap_{g\in G} g K_{v_i} g^{-1} = N_{i+1}$, whereas
$H_{i,\ell_i} = \cap_{g\in G} g G_{v_i} g^{-1} = N_i$. Hence we can define
our subgroups $H_0,H_1,\dotsc,H_\ell$
to be
\[H_{h-1,0},H_{h-1,1},\dotsc ,
H_{h-1,\ell_{h-1}} = H_{h-2,0},\dotsc,
H_{h-2,\ell_{h-2}}, \dotsc, H_{1,\ell_1} =
H_{0,0}, \dotsc , H_{0,\ell_0}.\]
The (trivial) accounting is left to the reader.
\end{proof}
\subsection{Reduction of the diameter problem to the case of alternating groups}
First, a lemma essentially due to Schreier. The statement is as in \cite[Lemma 5.1]{zbMATH00091732}. \begin{lem}[Schreier]\label{lem:schreier}
Let $G$ be a finite group. Let $N\triangleleft G$. Then
\[\begin{aligned}
\diam G &\leq (2 \diam(G/N) + 1) \diam(N) + \diam(G/N)\\
&\leq 4 \diam(G/N) \diam(N).\end{aligned}\] \end{lem} \begin{proof}
Let us be given a set $A = \{g_1,\dotsc,g_r\}$ of generators of $G$. Write
$d_1$ for $\diam(G/N)$, $d_2$ for $\diam(N)$ and $m$ for $|G/N|$.
Then, by the definition of diameter, there are
$\sigma_1,\dotsc,\sigma_m \in (A\cup A^{-1}\cup \{e\})^{d_1}$ giving us
a full set of representatives of $G/N$. As is well-known,
\[S = \{\sigma_i g_j \sigma_k^{-1} : 1\leq i,k\leq m, 1\leq j\leq r\}\cap N\]
is a set of generators of $N$ ({\em Schreier generators}).
Hence, $N = (S \cup S^{-1} \cup \{e\})^{d_2}$, and so
\[\begin{aligned}
G = \{\sigma_1,\dotsc,\sigma_m\}\cdot N &\subset
(A\cup A^{-1}\cup \{e\})^{d_1} \cdot (S\cup S^{-1} \cup \{e\})^{d_2}\\
&\subset (A \cup A^{-1}\cup \{e\})^{d_1 + (2 d_1 + 1) d_2}.\end{aligned}\] \end{proof} \begin{cor}\label{cor:moreschreier}
Let $G$ be a finite group. Let $\{e\} = H_0 \triangleleft H_1\triangleleft H_2
\triangleleft \dotsc \triangleleft H_\ell = G$. Then
\[\diam(G)\leq 4^{\ell-1} \prod_{i=0}^{\ell-1} \diam\left(H_{i+1}/H_i\right) .\] \end{cor} \begin{proof}
Immediate from Lemma \ref{lem:schreier}. \end{proof}
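The Schreier-generator construction in the proof of Lemma \ref{lem:schreier} can be illustrated on a small example. The sketch below is ours and is not part of the argument: it takes $G=\Sym(4)$ with $N$ the stabilizer of a point (the lemma is stated for $N$ normal, but the generation claim is the classical Schreier lemma and holds for any subgroup), builds coset representatives $\sigma_i$, and checks that the elements $\sigma_i g_j \sigma_k^{-1}$ that lie in $N$ generate $N$. Permutations are Python tuples acting on the right, $a^g = g[a]$.

```python
# Illustration (ours) of Schreier generators: G = Sym(4) = <(0 1), (0 1 2 3)>,
# N = stabilizer of the point 0.  The elements sigma_i g_j sigma_k^{-1}
# lying in N generate N, which here has order 3! = 6.

def mul(g, h):                    # right action: x^(gh) = (x^g)^h
    return tuple(h[g[x]] for x in range(len(g)))

def inv(g):
    p = [0] * len(g)
    for x, y in enumerate(g):
        p[y] = x
    return tuple(p)

n = 4
e = tuple(range(n))
gens = [(1, 0, 2, 3), (1, 2, 3, 0)]            # (0 1) and (0 1 2 3)
A = gens + [inv(g) for g in gens] + [e]

# Coset representatives of G/N: one sigma with 0^sigma = i for each i,
# found by an orbit computation on the point 0.
reps, frontier = {}, [e]
while frontier:
    g = frontier.pop()
    if g[0] not in reps:
        reps[g[0]] = g
        frontier += [mul(g, a) for a in A]
sigmas = list(reps.values())

# Schreier generators: sigma_i g_j sigma_k^{-1}, intersected with N.
S = {mul(mul(si, g), inv(sk))
     for si in sigmas for g in gens for sk in sigmas}
S = {s for s in S if s[0] == 0}                 # intersect with N = Stab(0)

# Close under multiplication; since the group is finite, this is <S>.
H, grew = {e}, True
while grew:
    new = {mul(a, b) for a in H for b in S} - H
    grew = bool(new)
    H |= new
assert len(H) == 6 and all(h[0] == 0 for h in H)
```

The filter `s[0] == 0` selects exactly the classical Schreier generators, since the representative of a coset is determined by the image of the point $0$.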
\begin{lem}\label{lem:bspyb0}(\cite[Lemma 5.4]{zbMATH00091732})
Let $G = T_1\times T_2\times \dotsb \times T_n$, where the $T_i$ are non-abelian
simple groups. Let $\diam(T_i) = d_i$, $d =\max_i d_i$. Then
$\diam(G)\ll n^3 d^2$. \end{lem} We will go over the ideas of the proof of the Lemma in a moment, when we improve it in the special case of the alternating group (Lemma \ref{lem:bspyb}). L. Pyber pointed out to the author that the dependence on $d$ could and should be improved so as to be linear; the quadratic dependence of Lemma \ref{lem:bspyb0} on $d$ is one of the gaps in the proof of the main result in \cite{zbMATH00091732}.
It is actually enough to improve the dependence on $d$ in the alternating case. The tool we will use is a simple lemma, similar to \cite[Prop.~5.8]{zbMATH00091732}.
\begin{lem}\label{lem:supconj}
Let $g\in \Alt(\Omega)$, $g\ne e$, $|\Omega|\geq 4$. Then there is an
$h\in \Alt(\Omega)$ such that $\lbrack g,h \rbrack$
is either a $3$-cycle or a product of two disjoint $2$-cycles. \end{lem} Here $\lbrack g,h\rbrack$ denotes the commutator $g^{-1} h^{-1} g h$. \begin{proof}
Write $g$ as a product of disjoint cycles. If
$g$ contains two disjoint $3$-cycles $(a b c)$, $(d e f)$,
let $h$ equal $(a d c) (b e f)$. Then
$\lbrack g,h\rbrack = (a f) (b d)$.
If $g$ contains two disjoint $2$-cycles $(a b) (c d)$, let $h$
be the $3$-cycle $(a b c)$; their commutator
$\lbrack g,h\rbrack$
will be $(a c) (b d)$.
If $g$ contains a $k$-cycle $(a b c d\dotsc)$, $k\geq 4$, let
$h = (a b c)$. Then $\lbrack g,h\rbrack = (a b d)$.
Finally, if $g$ consists of a single $3$-cycle $(a b c)$, let
$d$ be an element of $\Omega$ different from $a$, $b$ and $c$, and
define $h$ to be $(b c d)$. Then $\lbrack g,h\rbrack = (a d) (b c)$. \end{proof}
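The case analysis in the proof is easy to machine-check. The following sketch is ours (not part of the argument): it represents permutations of $\{0,\dotsc,n-1\}$ as Python tuples with the right action $a^g$ and $\lbrack g,h\rbrack = g^{-1}h^{-1}gh$ as in the text, and verifies that each of the four cases yields a $3$-cycle or a product of two disjoint $2$-cycles.

```python
# Quick check (illustrative only) of the four cases of the lemma:
# [g,h] = g^{-1} h^{-1} g h is a 3-cycle or a product of two disjoint
# 2-cycles.  Permutations act on the right: a^g = g[a].

def from_cycles(n, cycles):
    """Permutation of {0,...,n-1}, given as a list of cycles."""
    p = list(range(n))
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            p[a] = b
    return tuple(p)

def mul(g, h):                    # x^(gh) = (x^g)^h
    return tuple(h[g[x]] for x in range(len(g)))

def inv(g):
    p = [0] * len(g)
    for x, y in enumerate(g):
        p[y] = x
    return tuple(p)

def comm(g, h):                   # [g,h] = g^{-1} h^{-1} g h
    return mul(mul(inv(g), inv(h)), mul(g, h))

def cycle_type(g):                # nontrivial cycle lengths, sorted
    seen, lens = set(), []
    for x in range(len(g)):
        if x not in seen:
            c, y = 0, x
            while y not in seen:
                seen.add(y); y = g[y]; c += 1
            if c > 1:
                lens.append(c)
    return sorted(lens)

n = 8
cases = [  # (g, h), one pair per case of the proof
    (from_cycles(n, [[0, 1, 2], [3, 4, 5]]), from_cycles(n, [[0, 3, 2], [1, 4, 5]])),
    (from_cycles(n, [[0, 1], [2, 3]]),       from_cycles(n, [[0, 1, 2]])),
    (from_cycles(n, [[0, 1, 2, 3, 4]]),      from_cycles(n, [[0, 1, 2]])),
    (from_cycles(n, [[0, 1, 2]]),            from_cycles(n, [[1, 2, 3]])),
]
for g, h in cases:
    assert cycle_type(comm(g, h)) in ([3], [2, 2])
```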
The following lemma is of course extremely familiar. \begin{lem}\label{lem:easypeasy}
Let $n\geq 5$. Then
every element of $\Alt(\Omega)$, $|\Omega|=n$, can be written as
(a) the product of at most $n-1$ $3$-cycles, (b) the product of at most
$(n+1)/2$ elements of the form $(a b) (c d)$. \end{lem} \begin{proof}
We prove part (a) by induction. If $g\in \Alt(\Omega)$ is not the identity,
then there is an $a\in \Omega$ such that $b = a^g$ is distinct from $a$,
and another $c\ne a, b$ that is also in the support of $g$. Then, for
$g' = g \cdot (b a c)$, we see that $a^{g'} = a$ and $\supp(g') \subset
\supp(g)$, and so $|\supp(g')|\leq |\supp(g)|-1$.
We prove part (b) in the same way: if $|\supp(g)|\geq 4$, there
are distinct $a,b,c,d$ such that $a^g = b$ and $c^g = d$; then, for
$g' = g \cdot (b a) (d c)$, $|\supp(g')|\leq |\supp(g)|-2$.
If $|\supp(g)|=3$, then $g$ is a $3$-cycle $(a b c)$, and so, for
$b', c'$ not in the support of $g$, $g$ equals the product of
$(a c) (b' c')$ and $(b c) (b' c')$. \end{proof}
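The inductive step in part (a) translates directly into an algorithm. The sketch below is ours (with our conventions: permutations as tuples acting on the right, $a^g=g[a]$); it factors an even permutation into at most $n-1$ three-cycles and checks the factorization.

```python
# Sketch (ours) of the induction in part (a): write an even permutation
# of {0,...,n-1} as a product of at most n-1 three-cycles.

def three_cycle_factors(g):
    """Return 3-cycles (a, b, c), meaning a->b->c->a, whose
    left-to-right product is g.  Requires g even."""
    cur, n, out = list(g), len(g), []
    while any(cur[x] != x for x in range(n)):
        a = next(x for x in range(n) if cur[x] != x)
        b = cur[a]
        # a third support point c exists because cur stays even
        c = next(x for x in range(n) if cur[x] != x and x not in (a, b))
        step = {b: a, a: c, c: b}            # right-multiply by (b a c)
        cur = [step.get(cur[x], cur[x]) for x in range(n)]
        out.append((a, b, c))                # record the inverse cycle (a b c)
    out.reverse()
    return out

def apply_cycles(n, tris):
    """Left-to-right product of the 3-cycles, as a permutation tuple."""
    p = list(range(n))
    for (a, b, c) in tris:
        step = {a: b, b: c, c: a}
        p = [step.get(p[x], p[x]) for x in range(n)]
    return tuple(p)

g = (1, 2, 3, 4, 0, 5)                       # a 5-cycle in Alt(6)
tris = three_cycle_factors(g)
assert len(tris) <= len(g) - 1
assert apply_cycles(len(g), tris) == g
```

Each pass multiplies on the right by $(b\,a\,c)$, shrinking the support by at least $1$, exactly as in the proof; unwinding the recursion expresses $g$ as a product of the inverse $3$-cycles.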
The following result is classical, easy and very well-known. According to \cite{MR2331612}, it was first proved in \cite{miller1899commutators}. \begin{lem}\label{lem:oldmiller}
Let $m\geq 5$. Then every element of $\Alt(m)$ is a commutator, i.e.,
expressible in
the form $\lbrack x,y\rbrack$, $x,y\in \Alt(m)$. \end{lem}
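For $m=5$, the lemma can be confirmed by brute force over all $60\times 60$ pairs; here is a quick check (ours, with permutations as tuples acting on the right):

```python
# Brute-force sanity check (ours) of the lemma for m = 5: every element
# of Alt(5) is a commutator [x,y] with x, y in Alt(5).
from itertools import permutations

def mul(g, h):                    # right action: x^(gh) = (x^g)^h
    return tuple(h[g[x]] for x in range(len(g)))

def inv(g):
    p = [0] * len(g)
    for x, y in enumerate(g):
        p[y] = x
    return tuple(p)

def is_even(g):
    """Parity via cycle lengths: sum of (length - 1) mod 2."""
    seen, parity = set(), 0
    for x in range(len(g)):
        if x not in seen:
            y, c = x, 0
            while y not in seen:
                seen.add(y); y = g[y]; c += 1
            parity ^= (c - 1) & 1
    return parity == 0

alt5 = [p for p in permutations(range(5)) if is_even(p)]
assert len(alt5) == 60
commutators = {mul(mul(inv(x), inv(y)), mul(x, y)) for x in alt5 for y in alt5}
assert commutators == set(alt5)
```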
Now we come to the proof of an improved version of Lemma \ref{lem:bspyb0} in the case of the alternating group. \begin{lem}\label{lem:bspyb}
Let $G = T_1\times T_2\times \dotsb \times T_n$, where the $T_i$ are
alternating groups $\Alt(m_i)$, $m_i\geq 5$. Let $\diam(T_i) = d_i$, $d =\max_i d_i$,
$m = \max_i m_i$. Then
$\diam(G)\ll n^3 m d$. \end{lem} L. Pyber suggests using \cite{MR1865975} to prove an analogous improvement on Lemma \ref{lem:bspyb0} for arbitrary finite simple groups $T_i$. \begin{proof}
Let $S$ be a set of generators of $G$, and let $A = S\cup S^{-1} \cup \{e\}$.
Write $\pi_i:G \to T_i$ for the projection of $G$ to $T_i$.
By the
definition of $d$, $\pi_i\left(A^d\right) = T_i$ for every $1\leq i\leq n$.
The set $A^{2 d +1}\cap \ker(\pi_i)$ must then contain a set of generators
of $\ker(\pi_i)$ (namely, Schreier generators). In particular, for any
$j\ne i$, $A^{2 d +1}\cap \ker(\pi_i)$ contains at least one element
$g_{i,j}$ such that $\pi_j(g_{i,j})\ne e$. By Lemma \ref{lem:supconj}
and $\pi_j\left(A^d\right) = T_j$, there is an $h\in A^d$ such that
$\pi_j(\lbrack g_{i,j}, h\rbrack)\in \pi_j(A^{6 d + 2})$
is either a $3$-cycle or the product of two
disjoint $2$-cycles. Hence, conjugating
$\lbrack g_{i,j},h\rbrack$ by all elements of $A^d$,
we obtain either all
$3$-cycles in $T_j$ or all products of disjoint $2$-cycles in $T_j$.
By Lemma~\ref{lem:easypeasy}, we can express every element of $T_j$
as a product of (a) at most $m_j-1$ $3$-cycles, or (b) at most
$(m_j+1)/2$ elements of the form $(a b) (c d)$.
At the same time, $\lbrack g_{i,j}, h\rbrack$ is in $\ker(\pi_i)$, and so,
obviously,
are its conjugates.
Hence, for $B_i = A^{(8 d + 2) m_j}\cap \ker(\pi_i)$, we see that
$\pi_j(B_i) = T_j$.
Now suppose that, for $S,S'\subset \{1,\dotsc,n\}\setminus \{j\}$,
there are sets $B_{S}, B_{S'}\subset A^{k}$ satisfying
$\pi_j(B_S) = \pi_j(B_{S'}) = T_j$ as well as $B_S\subset \ker(\pi_i)$ for every
$i\in S$ and $B_{S'} \subset \ker(\pi_i)$ for every $i\in S'$.
Then $B_{S\cup S'} =\{\lbrack x,y\rbrack: x\in B_S, y\in B_{S'}\}$ is
a subset of $A^{4 k}$ contained in $\ker(\pi_i)$ for every $i\in S\cup S'$.
Moreover, by Lemma \ref{lem:oldmiller}, $\pi_j(B_{S\cup S'}) = T_j$.
We apply this procedure repeatedly, first expressing
$Z_j = \{1,\dotsc,n\}\setminus \{j\}$ as the union of two disjoint sets
$S$, $S'$ of size $\lfloor (n-1)/2\rfloor$ and $\lceil (n-1)/2 \rceil$,
respectively,
and then doing a recursion, expressing at each point the set we are given
as the union of two disjoint sets of sizes differing by at most $1$,
until we reach the single-element sets $S = \{i\}$, $i\ne j$.
We obtain a subset $B_{Z_j}$ of $A^{4^{\lceil \log_2 n\rceil} k} \subset A^{4 n^2 k}$,
where $k = (8 d + 2) m_j$, such
that $B_{Z_j} \subset \ker(\pi_i)$ for every $i\ne j$, and
$\pi_j(B_{Z_j}) = T_j$. (Here we note that $4^{\lceil \log_2 n\rceil} \leq
4^{\log_2 n + 1} \leq 4 \cdot 4^{\log_2 n} = 4 n^2$.)
Multiplying the sets $B_{Z_j}$, we obtain that
$A^{4 n^3 (8 d + 2) m}$ contains all of $G$. \end{proof}
We will also need a very easy analogue for {\em abelian} simple groups. \begin{lem}\label{lem:bspybtr}
Let $G = \mathbb{Z}/p\mathbb{Z} \times \mathbb{Z}/p\mathbb{Z} \times \dotsb \times \mathbb{Z}/p\mathbb{Z}$ ($n$ times). Then
$\diam(G)\leq n \lfloor p/2\rfloor$. \end{lem} \begin{proof}
Let $S$ be a set of generators of $G$, and let $A = S\cup S^{-1} \cup \{e\}$.
We can see $G$ both as a group
and as a vector space (which we may call $V$)
over $\mathbb{Z}/p\mathbb{Z}$.
We choose a non-identity element $v$ of $A$.
Trivially, every element of the linear span
$\langle v\rangle$ of $v$ can be written in the group
as $v^j$ for some $j\in \mathbb{Z}$ with
$|j|\leq \lfloor p/2\rfloor$. We project the elements of $A$ to $A \bmod \langle v\rangle$, and thus reduce the problem to that for the space $V \bmod \langle v\rangle$ instead of
$V=G$. \end{proof} We finally come to the fixed (and improved) version of \cite[Thm.~1.4]{zbMATH00091732}.
\begin{prop}\label{prop:finbo}
Let $G\leq \Sym(\Omega)$, $|\Omega|=n$, be transitive.
Then there are
$m_1,\dotsc,m_k\geq 5$ with $\prod_{i=1}^k m_i\leq n$ such that
\[\diam(G) \leq n^{O(\log n)} \prod_{i=1}^k \diam(\Alt(m_i)).\]
Moreover,
\begin{enumerate}
\item for every $1\leq i\leq k$, $G$ has a composition
factor isomorphic to $\Alt(m_i)$,
\item if $G\ne \Alt(\Omega), \Sym(\Omega)$, then
$m_i\leq n/2$ for every $1\leq i\leq k$.
\end{enumerate} \end{prop} \begin{proof}
Apply Proposition \ref{prop:obsidian}. By Cor.~\ref{cor:moreschreier},
\[\diam(G) \leq 4^{O(\log n)} \prod_{i=1}^{\ell} \diam(H_i/H_{i-1}).\]
By Lemma \ref{lem:bspyb0}, Lemmas \ref{lem:bspyb}--\ref{lem:bspybtr}
and Prop.~\ref{prop:obsidian}(\ref{it:obsprod}),
\[\diam(H_i/H_{i-1}) \ll \begin{cases}
n^3 m_i \diam(M_i) &\text{if $i\in A$,}\\ n^3 \diam(M_i)^2 &\text{if $i\in B$.}\end{cases}\]
Hence
\[\diam(G)\leq 4^{O(\log n)} (O(n^3))^\ell
\prod_{i\in A} \left(m_i \diam(M_i)\right) \prod_{i\in B} \diam(M_i)^2.\]
Trivially, $\diam(M_i)\leq |M_i|$, and so,
by Prop.~\ref{prop:obsidian}(\ref{it:obsiv})
\[\prod_{i\in B} \diam(M_i) \leq
\prod_{i\in B} \left|M_i\right| \leq n^{O(\log n)},\]
whereas, by Prop.~\ref{prop:obsidian}(\ref{it:obsii}),
\[\prod_{i\in A} \diam(M_i) = \prod_{i\in A} \diam(\Alt(m_i)).\]
Finally, by
Prop.~\ref{prop:obsidian}(\ref{it:obsiii}) and (\ref{it:obsv}),
$\prod_{i\in A} m_i\leq n$ and $\ell \ll \log n$. \end{proof}
\section{Main argument} Let us set out to prove our main result (Theorem \ref{thm:jukuju}). Part of the general strategy will be as in \cite{MR3152942}, but much simplified.
Throughout,
$A\subset \Sym(\Omega)$, $|\Omega|=n$, with $A=A^{-1}$, $e\in A$ and $\langle A\rangle$ $3$-transitive. We assume that
$\log |A| \geq C (\log n)^3$, where $C>0$ is a constant large enough for our later uses.
\subsection{Existence of a large prefix}
For $j=1,2,\dotsc$, we choose distinct elements $\alpha_1, \alpha_2,\dotsc \in \Omega$ such that, for $j=1,2,\dotsc$, \begin{equation}\label{eq:likemula}
\left|\alpha_j^{(A^4)_{(\alpha_1,\alpha_2,\dotsc,\alpha_{j-1})}}\right|\geq \rho n,\end{equation} where we set $\rho = e^{-1/5} = 0.818\dotsc$, say. We stop when $(A^4)_{(\alpha_1,\alpha_2,\dotsc,\alpha_k)}$ has no orbits of size $\geq \rho n$. Inequality (\ref{eq:likemula}) holds for $1\leq j\leq k$.
Let $\Sigma = \{\alpha_1,\dotsc,\alpha_{k-1}\}$. By Corollary \ref{cor:ratherbab} (applied with $\Sigma \cup \{\alpha_k\}$ instead of $\Sigma$), either (\ref{eq:secondopt}) holds, and we are done, or \begin{equation}\label{eq:amianto}
k \geq \frac{\log |A|}{15 (\log n)^2}. \end{equation} We can assume henceforth that (\ref{eq:amianto}) holds.
By Lemma \ref{lem:basilic}, the restriction
$\left(A^{8 (k-1)}\right)_{\Sigma}|_\Sigma$ is a subset of $\Sym(\Sigma)$ with at least $\rho^{k-1} (k-1)!$ elements. Let $A' = \left(A^{8 (k-1)}\right)_{\Sigma}$, $H = \left\langle A'\right\rangle$. If, as we may assume, $k$ is larger than an absolute constant, then, by Lemma \ref{lem:amusi}, there exists an orbit $\Delta\subset \Omega$ of
$H|_\Sigma$,
such that $|\Delta|\geq \rho\cdot (k-1)$ and $H|_\Delta$ contains $\Alt(\Delta)$. Thus, in particular, $H$ has a section isomorphic to $\Alt(k-1)$, namely, the quotient defined by restricting either $H$ or a subgroup of $H$ of index $2$ to $\Delta$.
\subsection{The case of descent}
Applying Lemma \ref{lem:wielandt} with $\epsilon=1/8$, we see that
$H$ contains an element $g_0\ne e$ such that $|\supp(g_0)|<n/8$, assuming, as we may, that $n$ is greater than an absolute constant and that $k-1\geq C_2 \log n$, where $C_2$ is an absolute constant.
Let $O = \alpha_k^{(A^4)_{(\Sigma)}}$. We know that
(\ref{eq:likemula}) holds for $j=k$, i.e., $|O|\geq \rho n$. Denote by $O'$ the orbit of $H$ containing $O$.
Suppose first that either $|O'|\leq e^{-1/10} n$ or
$H|_{O'}\ne \Alt(O'), \Sym(O')$. Define $D$ to be the diameter of
$H|_{O'}$. If the diameter of $\Gamma(H,A')$ is no larger than $D$, then $g_0\in (A')^D \subset A^{8 k D}$.
If, on the other hand, $\diam \Gamma(H,A')> D$, then there is an element $g$ of $H$ that is in
$(A')^{D+1}$ but not in $(A')^D$. At the same time, since $D$ is the diameter of $H|_{O'}$, there is an element $h$ of $(A')^D$ whose restriction
$h|_{O'}$ equals $g|_{O'}$. Clearly, $g^{-1} h$ is non-trivial, lies in $(A')^{2 D + 1}$ and has trivial restriction to $O'$. By this last fact, the support of $g^{-1} h$ is of size $\leq (1-\rho) n < n/5$.
Therefore, in either case, there exists an element $g'$ of $(A')^{2 D+1}\subset A^{8 k (2 D +1)}$ with support of size $< n/5$. (Many thanks are due to Henry Bradford for spotting a gap at this point in a previous version of this paper, and for the alternative argument we have just given.) By Lemma \ref{lem:bbs}, \[\diam(\Gamma(\langle A\rangle,A\cup \{g'\})) \ll n^8 (\log n)^{O(1)},\] and so \[\diam(\Gamma(\langle A\rangle,A))\ll 8 k (2 D + 1) n^8 (\log n)^{O(1)} \ll n^{10} D.\] Thus we attain conclusion (\ref{eq:rororo}) in Theorem \ref{thm:jukuju}. We call this case the case of {\em descent}.
\subsection{The case of growth}
Assume henceforth that $H|_{O'}$ is either $\Alt(O')$ or $\Sym(O')$ and that $|O'|\geq e^{-1/10} n$. We can then of course assume that
$|O'|\geq 6$, and so the action of $H$ on $O'$ is $4$-transitive.
Let $B = (A^2)_{(\alpha_1,\alpha_2,\dotsc,\alpha_k)}$. Then, by (\ref{eq:likemula}), every orbit of $B B^{-1}$
is of length $< \rho n = e^{-1/5} n \leq e^{-1/10} |O'|$. We apply Cor.~\ref{cor:lalmo}
with $A'|_{O'}$ instead of $A$, $B|_{O'}$ instead of $B$, $O'$ instead of $\Omega$,
$|O'|$ instead of $n$ and $e^{-1/10}$ instead of $\rho$, and obtain that there is a $g\in (A')^m \subset A^{8 k m}$, $m\ll n^6 \log n$, such that
\[\left|B^2 g B^2 g^{-1}\right|\geq |B|^{1 + \frac{1/10}{\log n}}.\] Since $g$ is in the setwise stabilizer of $\Sigma$, we know that $B^2 g B^2 g^{-1}$ is a subset of $\left(A^{16 k m + 8}\right)_{(\Sigma)}$. Therefore, by Lemma \ref{lem:duffy},
\[\left|\left(A^{32 k m + 16}\right)_{(\alpha_1,\dotsc,\alpha_k)}\right|
\geq \frac{\left|\left(A^{16 k m + 8}\right)_{(\Sigma)}
\right|}{n} \geq \frac{1}{n} |B|^{1 + \frac{1/10}{\log n}}.\]
We have obtained what we wanted: growth in a subgroup -- namely, the subgroup $\Sym(\Omega)_{(\alpha_1,\dotsc,\alpha_k)}$ of $\Sym(\Omega)$. We apply Lemma \ref{lem:durdo} with $\Sym(\Omega)_{(\alpha_1,\dotsc,\alpha_k)}$ instead of $H$ and $32 k m + 16$ instead of $k$, and obtain that \begin{equation}\label{eq:serdukt}
\left|A^{32 k m + 17}\right|\geq \frac{|B|^{\frac{1/10}{\log n}}}{n} |A|.
\end{equation}
Only one thing remains: to ensure that $|B|$ is not negligible compared to $|A|$. By Lemma \ref{lem:durdo},
\[\left|B\right| =
\left|\left(A^2\right)_{(\alpha_1,\dotsc,\alpha_k)} \right| \geq \frac{|A|}{n^k}. \]
Thus, if $k\leq (\log_n |A|)/2$, we see that $|B|\geq \sqrt{|A|}$, and so
\[\left|A^{32 k m + 17}\right|\geq \frac{1}{n}
|A|^{1 + \frac{1}{20 \log n}} \geq |A|^{1 + \frac{1}{21 \log n}},\]
say, yielding a strong version of conclusion (\ref{eq:uru}). Assume from now on that $k> (\log_n |A|)/2$.
Since (\ref{eq:likemula}) holds for $j=k$, and since we can assume that $\rho n>1$, there is at least one non-trivial element of $(A^4)_{(\Sigma)}$. Call it $g_0$. If $|\supp(g_0)|\leq n/4$, then, by Lemma \ref{lem:bbs}, \[\diam(\Gamma(\Sym(\Omega),A))\leq 4 \diam(\Gamma(\Sym(\Omega),(A^4)_{(\Sigma)}\cup A)) \ll n^8 (\log n)^c,\] and we are done. Assume, then, that $|\supp(g_0)|>n/4$, and so
$|\supp(g_0)\cap O'|> n/4 - (1-e^{-1/10}) n \geq n/7 \geq |O'|/7$.
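Several inequalities in this section rely on explicit numerical relations between the constants $\rho = e^{-1/5}$, $e^{-1/10}$ and the fractions $n/5$, $n/7$, $n/10$; they can be checked mechanically, as in the following sketch (ours):

```python
# Mechanical check (ours) of the explicit constants used in this section.
from math import exp

rho = exp(-1 / 5)                            # = 0.818..., as set above
assert 1 - rho < 1 / 5                       # (1 - rho) n < n/5
assert 1 - exp(-1 / 10) <= 1 / 10            # n - |O'| <= (1 - e^{-1/10}) n <= n/10
assert 1 / 4 - (1 - exp(-1 / 10)) >= 1 / 7   # |supp(g_0) \cap O'| > n/7
assert rho <= exp(-1 / 10) ** 2 + 1e-12      # rho n <= e^{-1/10} |O'|
```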
Now we can finish the argument in either of two closely related ways. One way would involve combining Prop.~\ref{prop:siniestro} with Lemma \ref{lem:tagore}, as we said at the beginning of \S \ref{subs:ellafi}. However, we will find it simpler to proceed in a way closer to the procedure explained in \cite[\S 1.5]{MR3152942}. We apply Prop.~\ref{prop:siniestro} with
$O'$ instead of $\Omega$ and $(A')|_{O'}$ instead of $A$.
We obtain $\gamma_i\in (A'\cup (A')^{-1} \cup \{e\})^{n^6}$, $1\leq i\leq \ell$, where
$\ell=O(7 \log n) = O(\log n)$, and $g_1,\dotsc,g_{k'} \in (A')^v$, $v = O(n^{10})$, $k'=O(\log \log n)$, such that, for \[h = \gamma_1 g_0 \gamma_1^{-1} \cdot \gamma_2 g_0 \gamma_2^{-1} \dotsb \gamma_\ell g_0 \gamma_\ell^{-1} \in \left(A^{4\ell + 2\ell \cdot 8 (k-1) n^6}\right)_{(\Sigma)},\] the group \[\langle h, g_1 h g_1^{-1}, g_2 h g_2^{-1},\dotsc , g_{k'} h g_{k'}^{-1}\rangle\] acts transitively on $O'$. Write $h_0 = h$, $h_i = g_i h g_i^{-1}$ for $1\leq i\leq {k'}$. Since $h$ fixes $\Sigma$ pointwise and $g_i$ fixes $\Sigma$ setwise, $h_i$ fixes $\Sigma$ pointwise for every $0\leq i\leq {k'}$. Thus, $h_i \in \left(A^{4\ell + 2\ell \cdot 8 (k-1) n^6 + 2 v}\right)_{(\Sigma)}$. By the same argument, the map \[\phi:g\mapsto (g h_0 g^{-1}, g h_1 g^{-1}, \dotsc, g h_{k'} g^{-1})\] sends $(A')^2\subset (A^{16 k})_{\Sigma}$ to a subset of the Cartesian product \[\left(A^{32 k +
4\ell + 2\ell \cdot 8 (k-1) n^6 + 2 v}\right)_{(\Sigma)} \times \dotsc \times \left(A^{32 k +
4\ell + 2\ell \cdot 8 (k-1) n^6 + 2 v}\right)_{(\Sigma)}\;\;\;\;\;\;\;\; \text{($k'+1$ times).}\] Moreover, two elements $g$, $g'$ satisfy $\phi(g)=\phi(g')$ if and only if $(g^{-1} g') h_i (g^{-1} g')^{-1} = h_i$ for every $0\leq i\leq k'$, i.e., if and only if $g^{-1} g'$ lies in $C(\langle h_0, h_1, \dotsc,h_{k'}\rangle)$.
We know that $\langle h_0,h_1,\dotsc,h_{k'}\rangle$ acts transitively on $O'$. It is easy to show that an element of the centralizer of a transitive group can have a fixed point if and only if it is the identity. Thus, if two distinct $g,g'\in ((A')^2)_{\alpha_k}$ satisfy $\phi(g) = \phi(g')$, then, since $g^{-1} g'$ fixes $\alpha_k$, it must act as the identity on
$O'$. In other words, $g^{-1} g'$ is a non-identity element of support of size $\leq n - |O'|\leq \left(1 - e^{-1/10}\right) n \leq n/10$. We can now apply Lemma \ref{lem:bbs} (Babai-Beals-Seress), and obtain that \[\diam(\Gamma(\langle A\rangle, A)) \ll 8 k n^8 (\log n)^4 \ll n^{10}.\]
Assume, then, that the restriction of $\phi$ to $((A')^2)_{\alpha_k}$ is injective. Then
\[\left|\left(A^{32 k +
4\ell + 2\ell \cdot 8 (k-1) n^6 + 2 v}\right)_{(\Sigma)}\right|^{{k'}+1}\geq \left|
((A')^2)_{\alpha_k}\right|\] and so
\[\left|\left(A^N\right)_{(\Sigma)}\right|\geq
\left|((A')^2)_{\alpha_k}\right|^{\frac{1}{k'+1}} \geq \left(\frac{|A'|}{n}\right)^{\frac{1}{k'+1}}\] for $N = 32 k + 4\ell + 2 \ell \cdot 8 (k-1) n^6 + 2 v = O\left(n^{10}\right)$. Since $\Sigma = \{\alpha_1,\dotsc,\alpha_{k-1}\}$ and
$|A'|\geq \rho^{k-1} (k-1)!$, where $\rho =e^{-1/5}$, it follows that \[\begin{aligned}
\left|\left(A^{2 N}\right)_{(\alpha_1,\dotsc,\alpha_k)}\right|&\geq
\frac{\left|\left(A^N\right)_{(\Sigma)}\right|}{n}
\geq \frac{(|A'|/n)^{\frac{1}{k'+1}}}{n}\\ &\geq \frac{\left(\rho^{k-1} (k-1)!/n\right)^{\frac{1}{k'+1}}}{n} \geq \frac{(\rho^k k!)^{\frac{1}{k'+1}}}{n^2}.\end{aligned} \]
Since $B = (A^2)_{(\alpha_1,\dotsc,\alpha_k)}$, we know from Lemma \ref{lem:durdo} that
\[\left|A^{2 N+1}\right|\geq \frac{\left|\left(A^{2 N}\right)_{(\alpha_1,\dotsc,\alpha_k)}\right|}{|B|} \cdot |A|.\] Hence either \begin{equation}\label{eq:udunur}
|A^{2N+1}|\geq \frac{(\rho^k k!)^{\frac{1}{2 k'+2}}}{n} |A|
\end{equation} or
$|B|\geq (\rho^k k!)^{1/(2 k'+2)}/n$. In the latter case, by (\ref{eq:serdukt}),
\begin{equation}\label{eq:adanar}\left|A^{32 k m + 17}\right|\geq \left(\frac{(\rho^k k!)^{\frac{1}{2 k'+2}}}{n}\right)^{\frac{1}{10 \log n}} \frac{|A|}{n}\gg
(\rho^k k!)^{\frac{1}{20 (k'+1) \log n}} \frac{|A|}{n}
.\end{equation} The quantity on the right-hand side of (\ref{eq:udunur}) is clearly greater than the one on the right-hand side of (\ref{eq:adanar}), so we can focus on bounding the right-hand side of (\ref{eq:adanar}) from below.
By $\rho=e^{-1/5}$, Stirling's formula, and the assumptions that
$\log |A|\geq C (\log n)^3$ (or even just $\log |A|> C (\log n)^2$)
and $k>(\log_n |A|)/2$, \[\begin{aligned}
\rho^k k! &\gg \left(\frac{k}{e^{6/5}}\right)^k \geq \left(\frac{\log |A|}{2 e^{6/5}\log n}\right)^{\frac{\log |A|}{2 \log n}}
\geq (\log |A|)^{\frac{\log |A|}{4 \log n}} = |A|^{\frac{\log \log |A|}{4 \log n}} .\end{aligned}\]
Hence, again by $\log |A|\geq C (\log n)^3$, \[\frac{(\rho^k k!)^{\frac{1}{20 (k'+1) \log n}}}{n}
\geq \frac{|A|^{\frac{\log \log |A|}{O((\log n)^2 \log \log n)}}}{n}
\geq |A|^{\frac{\log \log |A|}{O((\log n)^2 \log \log n)}}.\]
Taking $N' = \max(2 N+1,32 k m + 17) = O\left(n^{10}\right)$, we conclude that
\[\left|A^{N'}\right| \geq |A|^{1 + \frac{\log \log |A|}{O((\log n)^2 \log \log n)}}.\] Theorem \ref{thm:jukuju} is thus proved.
\section{Iteration} We can now prove a marginally weaker version of Theorem \ref{thm:pais}. The reader will notice that the proof we are about to give works for any
$3$-transitive group $G$, not just for $G=\Alt(n)$ and $G=\Sym(n)$.
However, by \cite[Cor. to Thm.~A]{Pyb93}, every $3$-transitive and in fact every $2$-transitive group $G$ on $n$ elements that is not $\Alt(n)$ or $\Sym(n)$ has
$\exp(O((\log n)^3))$ elements. Thus, in such a case,
the result we are about to prove
would be trivial. \begin{thm}\label{thm:molop}
Let $G=\Alt(n)$ or $\Sym(n)$. Let $S$ be a set of generators of $G$. Then
\begin{equation}\label{eq:klopo}
\diam \Gamma(G,S) \leq e^{K (\log n)^4 (\log \log n)^2},\end{equation}
where $K$ is an absolute constant. \end{thm}
Since $|\Alt(n)|\geq n!/2 \gg (n/e)^n$, it follows immediately that, for $G= \Alt(n)$ and for $G=\Sym(n)$,
\[\diam \Gamma(G,S) \leq e^{O((\log \log |G|)^4 (\log \log \log |G|)^2)},\]
where the implied constant is absolute.
\begin{proof}
We can assume that $e\in S$. By Lemma \ref{lem:soti}, we can assume that
$S=S^{-1}$ as well.
For any $k\geq 1$, if $S^k=S^{k+1}$, then $S^k=S^{k'}$ for every $k'>k$, and so
$S^k = \langle S \rangle=G$.
So, if $S^k\ne G$, $\left|S^{k+1}\right|\geq \left|S^{k}\right|+1$.
Applying this statement for $k=1,2,\dotsc,m$, we see that, for any $m$,
$\left|S^m\right|\geq \min(m,|G|)$. Let $A_0 = S^m$ for
$m = \lceil \exp\left(C (\log n)^3\right)\rceil$, where $C$ is as in the statement of
Thm.~\ref{thm:jukuju}. Then, assuming $n$ is larger than a constant,
$|A_0|\geq \exp\left(C (\log n)^3\right)$. (If $n$ is not larger
than a constant, then the theorem we are trying to prove is trivial.)
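The growth fact just used can be checked by brute force on a small group. The sketch below (our own illustration, with a hypothetical generating pair for $\Sym(4)$; not part of the proof) computes the ball sizes $|S^k|$ and confirms that they increase strictly until $S^k = G$:

```python
# Brute-force check of |S^{k+1}| >= |S^k| + 1 until S^k = G, on G = Sym(4)
# with a hypothetical generating pair: a transposition and a 4-cycle.
def compose(p, q):
    # permutations as tuples of images of 0..n-1; (p*q)(i) = p(q(i))
    return tuple(p[i] for i in q)

def inverse(p):
    return tuple(sorted(range(len(p)), key=lambda i: p[i]))

e = (0, 1, 2, 3)
a = (1, 0, 2, 3)              # the transposition (0 1)
c = (1, 2, 3, 0)              # the 4-cycle (0 1 2 3)
S = {e, a, c, inverse(a), inverse(c)}   # symmetrized, contains e

ball, sizes = set(S), [len(S)]
while True:
    nxt = {compose(p, q) for p in ball for q in S}
    if nxt == ball:           # stable only once the ball is all of G
        break
    ball = nxt
    sizes.append(len(ball))

print(sizes)                  # strictly increasing; final size 24 = |Sym(4)|
```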
We apply Theorem \ref{thm:jukuju} to $A_0$ instead of $A$. If
conclusion (\ref{eq:rororo})
holds, we stop. If conclusion (\ref{eq:uru})
holds, we let $A_1 = A_0^{n^C}$ and apply Theorem \ref{thm:jukuju} to $A_1$.
We keep on iterating until
conclusion (\ref{eq:rororo}) holds, and then we stop.
We thus have $A_0, A_1,\dotsc, A_k$, $k\geq 0$,
such that $A_{i+1}=A_i^{n^C}$ for $0\leq i \leq k-1$,
\begin{equation}\label{eq:sturu}
\left| A_{i+1}\right|\geq \left|A_i\right|^{1+c \frac{\log \log |A_i|
}{(\log n)^2 \log \log n}}\end{equation}
(i.e., conclusion (\ref{eq:uru}) holds) for $0\leq i \leq k-1$,
and conclusion (\ref{eq:rororo}) holds for $A_k$.
Let us bound $k$. Write $r_i= \log |A_i|$. By (\ref{eq:sturu}),
\[r_{i+1} \geq \left(1+c \frac{\log r_i
}{(\log n)^2 \log \log n}\right) r_i.\]
We also know that $r_0\geq 2$ (or really rather more) and $r_k\leq \log |G|$.
The number of steps needed for $r_i$ to double is
\[\leq \left\lceil \frac{(\log n)^2 \log \log n}{c \log r_i}
\right\rceil \leq
\frac{2 (\log n)^2 \log \log n}{c \log r_i},\]
where we use the fact that $\lceil y\rceil\leq 2 y$ for $y\geq 1$ and
we assume, as we may, that $c\leq 1$. We conclude that
$k$ is at most $(2/c) (\log n)^2 \log \log n$ times
\[\begin{aligned}\sum_{\substack{r = 2^j \\ 2\leq r\leq \log |G|}} \frac{1}{\log r} &=
\sum_{1\leq j\leq \log_2 \log |G|} \frac{1}{j \log 2} \\ &\ll \log \log \log |G|
\ll \log \log n.\end{aligned}\]
Write this bound in the form
\[k\leq C' (\log n)^2 (\log \log n)^2.\]
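As a quick numerical illustration of this step count (our own sketch, with illustrative constants $c = 1$ and target $\log |G| \approx n \log n$, not the actual constants of the proof), one can simulate the worst-case recursion for $r_i$ and compare the number of iterations against the stated bound:

```python
import math

# Simulate r_{i+1} = (1 + c*log(r_i)/((log n)^2 loglog n)) * r_i starting
# from r_0 = 2 until r exceeds log|G| ~ n log n, and count the steps.
# The constants here (c = 1, the target) are illustrative only.
def steps(n, c=1.0):
    L = (math.log(n) ** 2) * math.log(math.log(n))
    target = n * math.log(n)
    r, k = 2.0, 0
    while r < target:
        r *= 1 + c * math.log(r) / L
        k += 1
    return k

for n in (10**3, 10**6, 10**9):
    k = steps(n)
    bound = 3 * (math.log(n) ** 2) * (math.log(math.log(n)) ** 2)
    print(n, k, int(bound))
    assert k <= bound
```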
We see that $A_k\subset A_0^{n^{C k}} = A_0^{l} = S^{l m}$ for
$l\leq \exp\left(C C' (\log n)^3 (\log \log n)^2\right)$ and, as before,
$m\leq \exp(C (\log n)^3)$.
(The author would like to thank L. Pyber profusely for pointing out that,
as we have just seen, the presence of
$\log \log |A|$ in the exponent in (\ref{eq:uru}) means we save a factor of
$(\log n)/\log \log n$ in the bound on $k$.)
By conclusion (\ref{eq:rororo}), which holds for $A_k$,
\[\diam(\Gamma(G,S))\leq l m\cdot
\diam(\Gamma(G,A_k))\leq l m
n^C \diam(G'),\]
where $G'$ is a transitive group on $n'\leq n$ elements
such that either (a) $n'\leq e^{-1/10} n$ or (b) $G'\nsim \Alt(n'), \Sym(n')$.
If $n'\leq e^{-1/10} n$ and either $G'\sim \Alt(n')$ or $G'\sim \Sym(n')$, then
\begin{equation}\label{eq:rodor1}
\diam(G')\leq \max(\diam(\Sym(n')),\diam(\Alt(n')))\leq
4 \diam(\Alt(n'))\end{equation}
by Lemma \ref{lem:schreier}.
If $G'\nsim \Alt(n'),\Sym(n')$, then we apply Prop.~\ref{prop:finbo},
and obtain that
\begin{equation}\label{eq:rodor2}
\diam(G')\leq (n')^{C'' \log n'} \prod_{i=1}^k \diam(\Alt(m_i)) \leq e^{C'' (\log n)^2} \prod_{i=1}^k \diam(\Alt(m_i)), \end{equation} where $\prod_{i=1}^k m_i \leq n'\leq n$, $m_i\leq n'/2\leq n/2$ for every $1\leq i\leq k$, and $C''$ is an absolute constant. Clearly, $l m n^C \max(4,e^{C'' (\log n')^2}) \leq e^{C''' (\log n)^3 (\log \log n)^2}$ for $C'''$ an absolute constant, provided that (say) $n\geq e^{3/2}$.
We can assume, as an inductive hypothesis,
that Theorem \ref{thm:molop} is true for
$G_1=\Alt(n_1)$, $n_1\leq e^{-1/10} n$. In other words,
\[\diam(G_1) \leq e^{K (\log n_1)^4 (\log \log n_1)^2}.\]
If (\ref{eq:rodor1}) above applies, we let $n_1 = n'$, and obtain that
\[\begin{aligned}
\diam(\Gamma(G,S)) &\leq e^{C''' (\log n)^3 (\log \log n)^2}
e^{K (\log n')^4 (\log \log n')^2}\\
&\leq e^{\left(C''' (\log n)^3 + K ((\log n)-1/10)^4\right) (\log \log n)^2}.\end{aligned}\]
For $K>(10/3.99) \cdot C'''$ (say) and $n$ larger than a constant,
\[C''' (\log n)^3 + K \left((\log n)-\frac{1}{10}\right)^4 \leq
K (\log n)^4,\]
and so Theorem \ref{thm:molop} is true for $n$.
If (\ref{eq:rodor2}) applies instead, then
\[
\diam(\Gamma(G,S)) \leq e^{C''' (\log n)^3 (\log \log n)^2}
\prod_{i=1}^k e^{K (\log m_i)^4 (\log \log m_i)^2},\]
where $\prod_{i=1}^k m_i \leq n$ and $m_i\leq n/2$ for all $1\leq i\leq k$.
If $k=1$, we proceed as above, with $1/2$ instead of $e^{-1/10}$.
If $k>1$, then, assuming, as we may, that $m_1\geq m_i$ for all
$2\leq i\leq k$,
\[\begin{aligned}
\sum_{i=1}^k (\log m_i)^4 &= (\log m_1)^4 +
\sum_{i=2}^k (\log m_i)^4 \leq (\log m_1)^4 + \left(\sum_{i=2}^k \log m_i\right)^4\\
&= (\log m_1)^4 + (\log \prod_{i=2}^k m_i)^4 \leq (\log m_1)^4 +
(\log n - \log m_1)^4\\ &\leq (\log 2)^4 + (\log (n/2))^4,\end{aligned}\]
since $2\leq m_1\leq n/2$. Hence, much as above,
\[
\diam(\Gamma(G,S)) \leq e^{C''' (\log n)^3 (\log \log n)^2}
e^{K ((\log n - \log 2)^4 + (\log 2)^4) (\log \log n)^2}.\]
For $K>C'''/(3.99 \log 2)$ and $n$ larger than a constant,
\[C''' (\log n)^3 + K (\left((\log n)-\log 2\right)^4 + (\log 2)^4) \leq
K (\log n)^4,\]
and so Theorem \ref{thm:molop} is true for $n$ in this case as well.
\end{proof}
\end{document}
\begin{document}
\title{Kempner-like harmonic series}
\begin{abstract} Inspired by a question asked on the list {\tt mathfun}, we revisit {\em Kempner-like series}, i.e., harmonic sums $\sum' 1/n$ where the integers $n$ in the summation have ``restricted'' digits. First we give a short proof that $\lim_{k \to \infty}(\sum_{s_2(n) = k} 1/n) = 2 \log 2$, where $s_2(n)$ is the sum of the binary digits of the integer $n$. Then we propose two generalizations. One generalization addresses the case where $s_2(n)$ is replaced with $s_b(n)$, the sum of the digits of $n$ in base $b$: we prove that $\lim_{k \to \infty}\sum_{s_b(n) = k} 1/n = (2 \log b)/(b-1)$. The second generalization replaces the sum of digits in base $2$ with any block-counting function in base $2$, e.g., the function $a(n)$ counting the ---possibly overlapping--- occurrences of $11$ in the base-$2$ expansion of $n$, for which we obtain $\lim_{k \to \infty}\sum_{a(n) = k} 1/n = 4 \log 2$. \end{abstract}
\section{Introduction}
A nice, now classical, 1914 result of Kempner \cite{Kempner} states that the sum of the inverses of the integers whose expansion in base $10$ contains no occurrence of a given digit ($\neq 0$) converges. This fact might seem amazing at first sight, but looking, e.g., at all integers whose decimal expansion has no $9$ in it, one sees that larger and larger ranges of integers are excluded (think of all integers between $9 \cdot 10^k$ and $10^{k+1} - 1$). After the 1914 paper of Kempner \cite{Kempner} and the 1926 paper of Irwin \cite{Irwin}, several papers were devoted to generalizations or extensions of this result, as well as to numerical computations of the corresponding series. The reader can look at, e.g., \cite{Alexander, Baillie1, Baillie2, Behforooz, Boas, Craven, Farhi, Fischer, Gordon, Klove, KS, LP, MS, Nathanson1, Nathanson2, Nathanson3, SB, SLF, Wadhwa75, Wadhwa79, WW} and the references therein.
In particular the paper of Farhi \cite{Farhi} proves the somewhat unexpected result that, if $c_{j,10}(n)$ denotes the number of occurrences of a fixed digit $j \in \{ 0, 1, \dots, 9\}$ in the base-$10$ expansion of $n$, then $$ \lim_{k \to \infty} \sum_{\substack{n \geq 1 \\ c_{j,10}(n) = k}} \frac{1}{n} = 10 \log 10. $$ Replacing base $10$ with base $2$ and letting $c_{1,2}(n)$ denote the number of $1$'s in the binary expansion of $n$, we could expect that $$ \lim_{k \to \infty} \sum_{\substack{n \geq 1 \\ c_{1,2}(n) = k}} \frac{1}{n} = 2 \log 2. $$ The series on the lefthand side is precisely the one occurring in a recent question on {\tt mathfun}, that was forwarded to one of the authors by J. Shallit. Actually, in the post $c_{1,2}(n)$ is replaced with $s_2(n)$, which is of course the same; the question was to determine the value of the limit when $k$ goes to infinity.
First we give a short proof of the following theorem which answers the {\tt mathfun} question.
\begin{theorem}\label{th:base2} The following equality holds: \begin{equation}\label{base2} \lim_{k \to \infty} \sum_{\substack{n \geq 1 \\ s_2(n) = k}} \frac{1}{n} = 2 \log 2. \end{equation} \end{theorem}
Then we investigate two natural generalizations of this result. In the first one we replace the sum of binary digits with the sum of $b$-ary digits.
\begin{theorem}\label{th:sum-digits-base-b} Let $b \geq 2$ be an integer. Let $s_b(n)$ be the sum of digits of $n$ in base $b$. Then \begin{equation}\label{base-b} \lim_{k \to \infty} \sum_{\substack{n \geq 1 \\ s_b(n) = k}} \frac{1}{n} = \frac{2 \log b}{b-1}\cdot \end{equation} \end{theorem}
In the second generalization we replace the sum of digits in base $2$, i.e., the number of $1$'s in base $2$, with $a_w(n)$ the number of occurrences of a word $w$ (a fixed string of consecutive digits) in the binary expansion of the integer $n$.
\begin{theorem}\label{th:general-base2} Let $w$ be a binary word with $r$ letters. Then \begin{equation}\label{general-base2} \lim_{k \to \infty} \sum_{\substack{n \geq 1 \\ a_w(n) = k}} \frac{1}{n} = 2^r \log 2. \end{equation} \end{theorem}
\begin{remark}
We have essentially limited the references to ``missing digits'' given before Theorem~\ref{th:base2} to harmonic series and Dirichlet series whose summation indexes ``miss'' digits or combinations of digits. Integers with missing digits in a given base are called ``ellipsephic''. They occur in several papers (e.g., \cite{Aloui, AMM, Col2009, Biggs2021, Biggs2023, CDGJLM}). Nicholas Yu indicates in \cite[Footnote, p.~6]{Hu-N}:
\begin{quote} ``This word is a translation of the French {\em ellips\'ephique}, which Mauduit coined as a port-\linebreak manteau of the Greek words \textgreek{>'elleiy\v{i}s} ({\em \'elleipsis}, ``ellipisis'') and \textgreek{yhf'io} ({\em psif{\'\i}o}, ``digit''). We \linebreak prescribe the English pronunciation [\textipa{""{\i}.l{\i}p"sEf.{\i}k}].''
\end{quote}
\noindent The word in French was proposed by Christian Mauduit. Its origin is given by Sylvain Col in \cite[p.~12]{Col}:
\begin{quote} ``[... arithmetic progressions]. C. Mauduit named these integers {\em ellips\'ephiques} in reference to the superposition of two Greek words, \textgreek{elliptikos} \ (literally {\em elliptic}) and \textgreek{ynhon} (literally {\em small pebble polished by the water}; these pebbles were notably used for voting and for carrying out computations), and it means {\em which has missing digits}. [Among the...]'' \end{quote}
\noindent Note that there were some misprints in the Greek words in the original quotations above. Namely ``\textgreek{>'elleiy\v{i}s}'', ``\textgreek{yhf'io}'', `` \textgreek{elliptikos}'', and ``\textgreek{ynhon}'' should be replaced respectively with ``\textgreek{>'elleiyis}'', ``\textgreek{y{\~{h}}fos}'', ``\textgreek{>elleiptik\'os}'', and ``\textgreek{y{\~{h}}fos}''.
\end{remark}
\section{A short proof of Theorem~\ref{th:base2}}
The fact that the series $A_k := \displaystyle\sum_{\substack{n \geq 1 \\ s_2(n) = k}} \frac{1}{n}$ converges can be easily proved by a counting argument (adapting, e.g., a proof given in \cite{Irwin}) or by noting that $s_2(n)$ is the number of $1$'s in the binary expansion of $n$ and using the proof of Lemma~1 in \cite{Allouche-Shallit1989}. Let us suppose $k \geq 2$. Splitting the sum into even and odd indices, and recalling that $s_2(2n) = s_2(n)$ and $s_2(2n+1) = s_2(n) + 1$, we obtain: $$ A_k = \sum_{\substack{n \geq 1 \\ s_2(2n) = k}} \frac{1}{2n} + \sum_{\substack{n \geq 0 \\ s_2(2n+1) = k}} \frac{1}{2n+1} = \sum_{\substack{n \geq 1 \\ s_2(n) = k}} \frac{1}{2n} + \sum_{\substack{n \geq 0 \\ s_2(n) = k-1}} \frac{1}{2n+1} = \frac{1}{2} A_k + \sum_{\substack{n \geq 0 \\ s_2(n) = k-1}} \frac{1}{2n+1} $$ which we rewrite as $$ A_k = 2 \sum_{\substack{n \geq 0 \\ s_2(n) = k-1}} \frac{1}{2n+1} = A_{k-1} + B_k $$ where $B_k := 2\displaystyle\sum_{\substack{n \geq 1 \\ s_2(n) = k-1}} \left(\frac{1}{2n+1} - \frac{1}{2n}\right)$. Thus, we have, for $k \geq 2$, $$ \begin{array}{llll} A_k &- \ &A_{k-1} &= \ B_k \\ A_{k-1} &- &A_{k-2} &= \ B_{k-1} \\ \ldots \\ A_2 &- &A_1 &= \ B_2. \end{array} $$ Hence, summing these equalities, $$ A_k - A_1 = \sum_{2 \leq j \leq k} B_j = 2\sum_{2 \leq j \leq k} \sum_{\substack{n \geq 1 \\ s_2(n) = j-1}} \left(\frac{1}{2n+1} - \frac{1}{2n}\right) $$ i.e., $$ A_k - A_1 = 2\sum_{\substack{n \geq 1 \\ s_2(n) \leq k-1}} \left(\frac{1}{2n+1} - \frac{1}{2n}\right). $$ The righthand term clearly tends to $2\displaystyle\sum_{n \geq 1} \left(\frac{1}{2n+1} - \frac{1}{2n}\right)$ when $k$ tends to infinity, thus the lefthand term has a limit and $$ \lim_{k \to \infty} A_k = A_1 - 2 \sum_{n \geq 1} \left(\frac{1}{2n} - \frac{1}{2n+1}\right) $$ Now $$ A_1 = \sum_{\substack{n \geq 1 \\ s_2(n) = 1}} \frac{1}{n} = \sum_{j \geq 0} \frac{1}{2^j} = 2. 
$$ Hence $$ \lim_{k \to \infty} A_k = 2 \left(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} \dots\right) = 2 \log 2. \ \ \ \ \ \Box $$
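The identity $A_k = A_1 - 2\sum_{s_2(n) \leq k-1} (1/(2n) - 1/(2n+1))$ obtained in the proof lends itself to a numerical check. The following sketch (our own code; the truncation point $N = 10^6$ is an arbitrary choice, and the discarded tail is of size $O(1/N)$) approximates $A_k$ for a moderately large $k$:

```python
import math

def s2(n):
    # sum of the binary digits of n
    return bin(n).count("1")

def A(k, N=10**6):
    # A_k = A_1 - 2 * sum_{n >= 1, s_2(n) <= k-1} (1/(2n) - 1/(2n+1)),
    # with A_1 = 2; the absolutely convergent sum is truncated at n < N
    return 2.0 - 2.0 * sum(1/(2*n) - 1/(2*n + 1)
                           for n in range(1, N) if s2(n) <= k - 1)

print(abs(A(25) - 2 * math.log(2)))   # small, as the theorem predicts
```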
Before proving our first generalization (Theorem~\ref{th:sum-digits-base-b}), we will prove a general result on convergence of sequences and a corollary which will be useful.
\section{A general result on convergence of sequences}
\begin{theorem}\label{non-classical} Let $P(X) = \sum_{0 \leq k \leq d} a_k X^{d-k}$ be a polynomial having all its roots in ${\mathbb C}$ of modulus $< 1$. For a sequence $(u_n)_{n \geq 0}$, define the sequence $(u^{(P)}_n)_{n \geq d}$ by $u^{(P)}_n := \sum_{0 \leq k \leq d} a_k u_{n-k}$. Then the sequence $(u_n)_{n \geq 0}$ tends to $0$ if and only if the sequence $(u^{(P)}_n)_{n \geq d}$ tends to $0$. \end{theorem}
\proof Since one direction is trivial, we only prove that if $(u^{(P)}_n)_{n \geq d}$ tends to $0$, then $(u_n)_{n \geq 0}$ tends to $0$.
First we look at the case $d=1$. Suppose that $z$ is a complex number with $|z| < 1$. We prove that if the sequence $(w_n)_{n \geq 1}$ tends to $0$, where $w_n := u_n - z u_{n-1}$, then the sequence $(u_n)_{n \geq 0}$ tends to $0$. Namely, if $(w_n)_{n \geq 1}$ tends to $0$, we have $$ \forall \varepsilon > 0, \ \exists n_0 \geq 1, \text{\ such that for \ } n \geq n_0 \text{\ one has \ }
|w_n| \leq (1-|z|) \varepsilon. $$ But, for all $p \in {\mathbb N}$, one has by an easy induction $$ u_{n_0+p} = \sum_{0 \leq k \leq p} z^k w_{n_0+p-k} + z^{p+1} u_{n_0-1}. $$ Hence, for $p$ larger than some $p_0$, one has
$|u_{n_0+p}| \leq \varepsilon + |z|^{p+1} |u_{n_0-1}| \leq 2 \varepsilon$.
Now we can address the general case where $P(X) = \sum_{0 \leq k \leq d} a_k X^{d-k} =
\prod_{1 \leq j \leq d} (X - z_j)$, with $|z_j| < 1$ for all $j$. Defining $\varphi_j((u_n)_n) := ((u_n - z_j u_{n-1})_n)$, it is easy to see that $(u^{(P)}_n)_n = (\varphi_d \circ \varphi_{d-1} \circ \dots \circ \varphi_1)((u_n)_n)$. Thus it suffices to apply $d$ times the case $d=1$ above. \endpf
\begin{corollary}\label{the-corollary} Let $b$ be an integer $> 1$. Then $$ \lim_{n \to \infty} ((b-1)u_n + (b-2)u_{n-1} + \dots + 2 u_{n-b+3} + u_{n-b+2}) = \ell \text{ \ if and only if \ } \lim_{n \to \infty} u_n = \dfrac{2\ell}{b(b-1)}\cdot $$ \end{corollary}
\proof Of course one can suppose $b > 2$. Again one direction is trivial. For the other direction, up to replacing $(u_n)_{n \geq 0}$ with $(u'_n)_{n \geq 0}$, where $u'_n := u_n - 2\ell/(b(b-1))$, we can suppose that $\ell = 0$. Now, in order to apply Theorem~\ref{non-classical} with $P(X) = (b-1)X^{b-2} + (b-2)X^{b-3} + \dots + 2X + 1$, it suffices to prove that all the (complex) roots of this polynomial have modulus $< 1$. We note that $(1-X) P(X) = 1 + X + X^2 + \dots + X^{b-2} - (b-1) X^{b-1}$. Hence if $P(z) = 0$ for some $z$ with
$|z| \geq 1$, then $(b-1) |z|^{b-1} \leq 1 + |z| + \dots + |z|^{b-2} \leq (b-1) |z|^{b-2}$, hence $|z| = 1$. Furthermore, equality in the triangle inequality implies here that $z$ is real and non-negative, hence
equal to $1$. Since $1$ is not a root of $P$, this gives the desired contradiction: hence, necessarily $|z| <1$. \endpf
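The root-location claim can also be probed numerically. The sketch below (our own code; a plain Durand-Kerner iteration, a heuristic check rather than a proof) computes the roots of $P(X) = (b-1)X^{b-2} + \dots + 2X + 1$ for small $b$ and verifies that they lie inside the unit disk:

```python
# Heuristic check that P(X) = (b-1)X^{b-2} + ... + 2X + 1 has all its
# roots inside the unit disk, via a plain Durand-Kerner iteration.
def poly_eval(cs, x):
    v = 0j
    for c in cs:               # Horner scheme, highest degree first
        v = v * x + c
    return v

def roots_dk(cs, iters=1000):
    cs = [complex(c) / cs[0] for c in cs]     # normalize to monic
    d = len(cs) - 1
    rs = [(0.4 + 0.9j) ** i for i in range(d)]
    for _ in range(iters):
        new = []
        for i, r in enumerate(rs):
            denom = 1 + 0j
            for j, s in enumerate(rs):
                if i != j:
                    denom *= r - s
            new.append(r - poly_eval(cs, r) / denom)
        rs = new
    return rs

for b in range(3, 13):
    cs = list(range(b - 1, 0, -1))            # [b-1, b-2, ..., 1]
    monic = [complex(c) / cs[0] for c in cs]
    for z in roots_dk(cs):
        assert abs(poly_eval(monic, z)) < 1e-8    # the iteration converged
        assert abs(z) < 1                         # root inside the unit disk
print("all roots inside the unit disk for b = 3..12")
```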
\section{A first generalization. Proof of Theorem~\ref{th:sum-digits-base-b}}
We begin with a lemma.
\begin{lemma}\label{harmonic} We have the following properties.
\begin{itemize}
\item[{\rm (i)}] The sum $\sum_{s_b(n) = k} \frac{1}{n} $ is finite.
\item[{\rm (ii)}] For any $j \in \{0, 1, \dots, b-1\}$, $$ \lim_{k \to \infty} \sum_{s_b(n) = k} \left(\frac{1}{bn+j} - \frac{1}{bn}\right) = 0. $$
\item[{\rm (iii)}] For $n \geq 1$, let $H_n = 1 + \frac{1}{2} + \dots + \frac{1}{n}$ be the $n$-th harmonic number. Then, for $b > 1$, $$ \lim_{k \to \infty} \sum_{1 \leq s_b(n) \leq k} \left(\sum_{0 \leq j \leq b-1}\frac{1}{bn+j} - \frac{1}{n}\right) = \log b - H_{b-1}. $$ \end{itemize} \end{lemma}
\proof To prove (i) we note that the number of non-negative integer solutions of $x_1 + x_2 + \dots + x_j = k$ is equal to $\displaystyle {j+k-1 \choose k}$, hence $$ \sum_{\substack{b^{j-1} \leq n \leq b^j \\ s_b(n) = k}} \frac{1}{n} \leq \frac{{j+k-1 \choose k}}{b^{j-1}} \sim \frac{j^k}{k! b^{j-1}}\cdot $$ The convergence of the series $\displaystyle\sum_{j \geq 1} \frac{j^k}{k! b^{j-1}}$ implies the existence of $\displaystyle \sum_{s_b(n) = k} \frac{1}{n}\cdot$
In order to prove (ii), we note that $$ \sum_{0 \leq j \leq b-1} \frac{1}{bn+j} - \frac{1}{n} = \sum_{0 \leq j \leq b-1}\left(\frac{1}{bn+j} - \frac{1}{bn}\right) \text{\ \ is a sum of non-positive terms.} $$ Hence (ii) holds if and only if $\displaystyle \lim_{k \to \infty} \sum_{s_b(n) = k} \sum_{0 \leq j \leq b-1}\left(\frac{1}{bn+j} - \frac{1}{bn}\right)=0$. But $$ \sum_{s_b(n) = k} \sum_{0 \leq j \leq b-1}\left(\frac{1}{bn+j} - \frac{1}{bn}\right) = \sum_{s_b(n) \leq k} \sum_{0 \leq j \leq b-1}\left(\frac{1}{bn+j} - \frac{1}{bn}\right) \
- \sum_{s_b(n) \leq k-1} \sum_{0 \leq j \leq b-1}\left(\frac{1}{bn+j} - \frac{1}{bn}\right). $$ Thus, it suffices to prove (iii).
Finally we prove (iii). Define $v_n := \displaystyle \sum_{0 \leq j \leq b-1} \frac{1}{bn+j} - \frac{1}{n} = \sum_{0 \leq j \leq b-1}\left(\frac{1}{bn+j} - \frac{1}{bn}\right).$ We have $$ \sum_{1 \leq n \leq N} v_n = H_{bN+b-1} - H_{b-1} - H_N $$ which tends to $\log b - H_{b-1}$ when $N$ tends to infinity. Now, $v_n \leq 0$, and, if $n \leq b^k - 1$, then $s_b(n) \leq k (b - 1)$. Hence $$ \sum_{1 \leq n \leq b^k-1} v_n \geq \sum_{1 \leq s_b(n) \leq k(b-1)} v_n \geq \ \sum_{n \geq 1} v_n = \log b - H_{b-1}. $$ Since $\displaystyle \sum_{1 \leq n \leq b^k-1} v_n$ tends to $\log b - H_{b-1}$, we obtain the desired result. \endpf
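The telescoping identity $\sum_{1 \leq n \leq N} v_n = H_{bN+b-1} - H_{b-1} - H_N$ used in (iii) can be verified exactly in rational arithmetic; a minimal sketch (our own code):

```python
from fractions import Fraction

def H(n):
    # n-th harmonic number, computed exactly
    return sum(Fraction(1, k) for k in range(1, n + 1))

# v_n = sum_{0 <= j <= b-1} 1/(bn+j) - 1/n; the proof of (iii) uses
# sum_{n=1}^N v_n = H_{bN+b-1} - H_{b-1} - H_N, which telescopes exactly.
for b in (2, 3, 5):
    for N in (1, 7, 20):
        lhs = sum(sum(Fraction(1, b*n + j) for j in range(b)) - Fraction(1, n)
                  for n in range(1, N + 1))
        assert lhs == H(b*N + b - 1) - H(b - 1) - H(N)
print("identity verified for b in {2, 3, 5}")
```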
\noindent {\it Proof of Theorem~\ref{th:sum-digits-base-b}}
Define $\displaystyle u_k := \sum_{s_b(n) = k} \frac{1}{n}$. Then, splitting the integers according to their value modulo $b$, we have $$ u_k = \sum_{s_b(bn) = k} \frac{1}{bn} + \sum_{1 \leq j \leq b-1}\sum_{\substack{n \geq 0 \\ s_b(bn+j) = k}} \frac{1}{bn+j} $$ thus, using that $s_b(bn+j) = s_b(n) + j$ for $j \in \{0,1, \dots, b-1\}$, $$ u_k = \frac{1}{b} u_k + \sum_{1 \leq j \leq b-1}\sum_{\substack{n \geq 0 \\ s_b(n) = k-j}} \frac{1}{bn+j}\cdot $$ Hence $$ \left(1-\frac1b\right)(u_1+\dots+u_k) = \sum_{j=1}^{b-1}\sum_{\substack{n \geq 0 \\ s_b(n) \leq k-j}}\frac {1}{bn+j}\cdot $$ Thus, by subtracting $\left(1-\frac1b\right)(u_1+\dots+u_{k-1})$ with the definition of the $u_j$'s, $$ \left(1-\frac{1}{b}\right) u_k =\sum_{j=1}^{b-1} \sum_{\substack{n \geq 0 \\ s_b(n) \leq k-j}} \frac{1}{bn+j} - \left(1-\frac{1}{b}\right)\sum_{1 \leq s_b(n) \leq k-1} \frac{1}{n}\cdot $$ Hence \begin{equation}\label{key} \left(1-\frac{1}{b}\right) u_k = H_{b-1} + \sum_{j=1}^{b-1} \sum_{\substack{n \geq 1 \\ s_b(n) \leq k-j}} \frac{1}{bn+j} - \left(1-\frac{1}{b}\right)\sum_{1 \leq s_b(n) \leq k-1} \frac{1}{n}\cdot \end{equation} Let us define as in Lemma~\ref{harmonic}(iii) $w_{k-1}$ by $$ w_{k-1} := \sum_{1 \leq s_b(n) \leq k-1} \left(\sum_{0 \leq j \leq b-1}\frac {1}{bn+j}-\frac{1}{n}\right) $$ We have $$ \begin{array}{lll} w_{k-1} &=& \displaystyle\sum_{1 \leq s_b(n) \leq k-1} \ \ \sum_{1 \leq j \leq b-1} \frac {1}{bn+j}
- \left(1 - \frac{1}{b}\right) \sum_{1 \leq s_b(n) \leq k-1} \frac{1}{n} \\ &=& \ \ \displaystyle\sum_{1 \leq j \leq b-1} \ \ \sum_{1 \leq s_b(n) \leq k-1} \frac {1}{bn+j}
- \left(1 - \frac{1}{b}\right) \sum_{1 \leq s_b(n) \leq k-1} \frac{1}{n} \\ &=& \ \displaystyle\sum_{1 \leq j \leq b-1} \ \ \sum_{1 \leq s_b(n) \leq k-j} \frac {1}{bn+j}
+ \displaystyle\sum_{1 \leq j \leq b-1} \ \ \sum_{k-j+1 \leq s_b(n) \leq k-1} \frac {1}{bn+j}
- \left(1 - \frac{1}{b}\right) \sum_{1 \leq s_b(n) \leq k-1} \frac{1}{n} \\ &=& \ \displaystyle\sum_{1 \leq j \leq b-1} \ \ \sum_{1 \leq s_b(n) \leq k-j} \frac {1}{bn+j}
+ \displaystyle\sum_{2 \leq j \leq b-1} \ \ \sum_{k-j+1 \leq s_b(n) \leq k-1} \frac {1}{bn+j}
- \left(1 - \frac{1}{b}\right) \sum_{1 \leq s_b(n) \leq k-1} \frac{1}{n}\cdot \\ \end{array} $$ Hence we can write, using Equation~(\ref{key}), $$ \left(1 - \frac{1}{b}\right) u_k = H_{b-1}+w_{k-1}-R_k $$ with $$ R_k = \sum_{2 \leq j \leq b-1} \ \ \sum_{k-j+1 \leq s_b(n) \leq k-1} \frac {1}{bn+j}. $$ Then, when $k \to \infty$, we can write, by using Lemma~\ref{harmonic}(ii), $$ \begin{array}{lll} R_k &=& \displaystyle\sum_{2 \leq j \leq b-1} \ \sum_{1 \leq i \leq j-1} \ \sum_{s_b(n)= k-i} \frac{1}{bn+j} \\ &=& \displaystyle\frac{1}{b} \ \sum_{2 \leq j \leq b-1}\sum_{1 \leq i \leq j-1} u_{k-i} + o(1) \\ &=& \displaystyle\frac{1}{b} \sum_{1 \leq i \leq b-2} (b-i-1) u_{k-i} + o(1). \end{array} $$ This gives $$ H_{b-1} + w_{k-1} = \left(1 - \frac{1}{b}\right) u_k + R_k = \left(1 - \frac{1}{b}\right) u_k + \frac{1}{b} \sum_{1 \leq i \leq b-2} (b-i-1) u_{k-i} + o(1) $$ but, from Lemma~\ref{harmonic}(iii), $H_{b-1} + w_{k-1} $ tends to $\log b$. Hence $$ \left(1 - \frac{1}{b}\right) u_k + \frac{1}{b} \sum_{1 \leq i \leq b-2} (b-i-1) u_{k-i} \to \log b $$ or, equivalently, $$ \sum_{0 \leq i \leq b-2} (b-i-1) u_{k-i} \to b \log b. $$ Applying Corollary~\ref{the-corollary} yields $$ u_k \to \frac{2\log b}{b-1}\cdot \ \ \ \Box $$
\section{A second generalization. Proof of Theorem~\ref{th:general-base2}}
Comparing the equalities (see, e.g., \cite{Allouche-Shallit1990} and the references therein for the second equality) $$ \lim_{k \to \infty} \sum_{s_2(n) = k} \frac{1}{n} = 2 \log 2 \ \ \text{and} \ \ \sum_{n \geq 1} \frac{s_2(n)}{n(n+1)} = 2 \log 2 $$ it is tempting to prove {\it directly} that the two left-hand quantities are equal. We did not succeed, but we found that a method used to prove the second equality can also be used to prove the first one, thus yielding a generalization to all base-$2$ pattern counting sequences.
Let $w$ be a word of $0$'s and $1$'s. We let $a_w(n)$ denote the number of occurrences of $w$ in the binary expansion of the integer $n$. As usual, if $w$ begins with $0$ and is not of the form $0^{\ell}$ ---the word consisting of $\ell$ digits equal to $0$--- we assume that the binary expansion of $n$ begins with an arbitrarily long prefix of $0$'s. And if $w = 0^{\ell}$, we use the classical binary expansion of $n$ beginning with $1$. (For example, taking the respective binary expansions of $5$ and $8$, namely $5 = (0...0)101$ and $8 = 1000$, one has $a_{01}(5) = a_{01}(0...0101) = 2$ and
$a_{000}(8) = a_{000}(1000) = 1$.) Also recall that $|w|$ is the length (i.e., the number of letters) of the word $w$. First we prove the following lemma.
\begin{lemma}\label{general-lemma} Define $a_w(n)$ as above. Let $(f(n))_{n \geq 1}$ be a sequence of positive reals such that $\sum_n f(n) < +\infty$. Let $c_k := \displaystyle\sum_{\substack{n \geq 1 \\ a_w(n) = k}} f(n)$ \ and \ $d_k := \displaystyle\sum_{\substack{n \geq 1 \\ a_w(n) = k}} \frac{1}{n}$. Then \newline \begin{itemize}
\item[\rm (i)] One has $d_k < +\infty$.
\item[\rm (ii)] The series $\displaystyle\sum_{k \geq 0} c_k$ converges.
\item[\rm (iii)] The sequence $(c_k)_{k \geq 0}$ tends to $0$ when $k$ tends to infinity.
\end{itemize} \end{lemma}
\proof The first assertion can be found, e.g., in the proof of Lemma~1 in \cite{Allouche-Shallit1989}. The third assertion is a consequence of the second one. Finally, to prove that $\sum c_k$ converges, we write (note that all terms are positive): $$ \sum_{k \geq 0} c_k = \sum_{k \geq 0}\sum_{\substack{n \geq 1 \\ a_w(n) = k}} f(n) = \sum_{n \geq 1} f(n) < +\infty. \ \ \ \ \ \Box $$
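Since the counting convention for $a_w$ is easy to misread, a direct transcription may help. The following sketch (our own helper; padding with $|w|$ leading zeros suffices to emulate the ``arbitrarily long prefix'' of $0$'s in the definition) checks the worked examples given earlier:

```python
def a_w(n, w):
    # number of (possibly overlapping) occurrences of the binary word w in
    # the binary expansion of n; if w starts with 0 and is not all zeros,
    # prepend len(w) zeros (enough to emulate an arbitrarily long 0-prefix)
    s = bin(n)[2:]
    if w.startswith("0") and set(w) != {"0"}:
        s = "0" * len(w) + s
    return sum(s[i:i + len(w)] == w for i in range(len(s) - len(w) + 1))

# the examples from the text: 5 = (0...0)101 and 8 = 1000
assert a_w(5, "01") == 2
assert a_w(8, "000") == 1
assert a_w(7, "11") == 2      # 111 contains two overlapping 11's
```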
\noindent {\it Proof of Theorem~\ref{th:general-base2}}
The idea for proving this theorem is to compare the quantity $\displaystyle\sum_{\substack{n \geq 1 \\ a_w(n) = k}} \frac{1}{n}$ with a series $\displaystyle\sum_{\substack{n \geq 1 \\ a_w(n) = k}} g_w(n)$ whose sum is known and converges to some limit $A_w$ for $k \to \infty$. If furthermore
$g_w(n) - 1/(2^{|w|}n) = {\mathcal O}_w(1/n^2)$, then Lemma~\ref{general-lemma} will imply the existence and the value of $$
\lim_{k \to \infty} \sum_{\substack{n \geq 1 \\ a_w(n) = k}} \frac{1}{n} = 2^{|w|} A_w. $$ The choice of the function $g$ will use \cite{Allouche-Shallit1989} where the authors prove that there exists a rational function $b_w$ such that for all $k \geq 0$, one has $$ \sum_{\substack{n \geq 1 \\ a_w(n) = k}} \log(b_w(n)) = - \log 2 \ \ \ \text{(independent of $k$)}. $$ The paper \cite{Allouche-Shallit1989} explains how to construct $b_w$. This construction is given by a more explicit recursive algorithm in \cite{Allouche-Hajnal-Shallit}. Taking $g_w$ defined by
$g_w = - \log(b_w)$, it will suffice to prove that $- \log(b_w(n)) - 1/(2^{|w|}n) = {\mathcal O}_w(1/n^2)$.
Using \cite{Allouche-Hajnal-Shallit}, we have that, if $w = w_1 w_2 \dots w_m$, then $\log(b_w(n))$ is given in \cite{Allouche-Hajnal-Shallit} by $$ \log(b_w(n)) = Q_w(w_1 w_2 \dots w_{m-1}, w_m, n), $$ where, for $z = z_1 \dots z_r$ and $t$ two binary words, $Q_w$ is recursively defined by: $$ Q_w(z,t,n) := \begin{cases}
\log(2^{|t|} n + \nu(t)) - \log(2^{|t|} n + \nu(t) + 1) \ \ &\text{if $r = 0$}, \\ Q_w(\varepsilon, t, n) - Q_w(\varepsilon, \overline{z_r} t, n) &\text{if $r=1$ and $z$ is a suffix of $w$}, \\ Q_w(z_2 z_3 \dots z_r, t, n) - Q_w(\overline{z_1} z_2 \dots z_{r-1}, z_r t, n) &\text{if $r \geq 2$ and $z$ is a suffix of $w$}, \\ Q_w(z_1 z_2 \dots z_{r-1}, z_r t, n) &\text{if $r \geq 1$ and $z$ is not a suffix of $w$}, \end{cases} $$ where, for $x \in\{0, 1\}$, one defines $\overline{x} := 1 - x$, where $\nu(t)$ is the value of the word $t$
when interpreted as a binary expansion, and $\varepsilon$ is the empty word (the word with no letter). Also recall that $|t|$ is the length (i.e., the number of letters) of the word $t$.
The behavior of $Q_w(z,t,n)$ when $n$ tends to infinity can be proved by induction on
$|z| \geq 0$. We claim that, for all $t$, $$
Q_w(z,t,n) = - \frac{1}{2^{|t| + |z|} n} + {\mathcal O}_{z,t}\left(\frac{1}{n^2}\right)\cdot $$
If $|z| = 0$, i.e., $z = \varepsilon$, we have $$
Q_w(z,t,n) = \log(2^{|t|}n + \nu(t)) - \log(2^{|t|}n + \nu(t) + 1)
= - \frac{1}{2^{|t|}n} + {\mathcal O}_t\left(\frac{1}{n^2}\right)
= - \frac{1}{2^{|t|+|z|}n} + {\mathcal O}_t\left(\frac{1}{n^2}\right)\cdot $$
Suppose that the property holds for $|z|=r-1$ for some $r \geq 1$. Let us prove that it holds
for $|z|=r$. Let $z = z_1 z_2 \dots z_r$.
If $|z| = r = 1$ and $z$ is a suffix of $w$, then, using the case $|z| = 0$ above, $$ \begin{array}{lll}Q_w(z,t,n) = Q_w(\varepsilon, t, n) - Q_w(\varepsilon, \overline{z_r} t, n)
&=& - \displaystyle\frac{1}{2^{|t|}n} + \frac{1}{2^{|t|+1}n} + {\mathcal O}_{z,t}\left(\frac{1}{n^2}\right)
= - \frac{1}{2^{|t|+1}n} + {\mathcal O}_t\left(\frac{1}{n^2}\right) \\
&=& - \displaystyle\frac{1}{2^{|t|+|z|}n} + {\mathcal O}_{z,t}\left(\frac{1}{n^2}\right)\cdot \\ \end{array} $$
If $r \geq 2$ and $z$ is a suffix of $w$, then, using the induction hypothesis $$ \begin{array}{lll} Q_w(z,t,n) &=& \displaystyle Q_w(z_2 z_3 \dots z_r, t, n) - Q_w(\overline{z_1} z_2 \dots z_{r-1}, z_r t, n)
= - \frac{1}{2^{r-1+|t|}n} + \frac{1}{2^{r-1+|t|+1} n} + {\mathcal O}_{z,t}\left(\frac{1}{n^2}\right)\\
&=& - \displaystyle\frac{1}{2^{r+|t|}n} + {\mathcal O}_{z,t}\left(\frac{1}{n^2}\right)
= -\frac{1}{2^{|z|+|t|}n} + {\mathcal O}_{z,t}\left(\frac{1}{n^2}\right)\cdot \end{array} $$
If $r \geq 1$ and $z$ is not a suffix of $w$, then, using the induction hypothesis $$ Q_w(z,t,n) = Q_w(z_1 z_2 \dots z_{r-1}, z_r t, n)
= - \frac{1}{2^{r-1+|t|+1}n} + {\mathcal O}_{z,t}\left(\frac{1}{n^2}\right)
= -\frac{1}{2^{|z|+|t|}n} + {\mathcal O}_{z,t}\left(\frac{1}{n^2}\right)\cdot $$
Thus, we obtain $$ g_w(n) = -\log(b_w(n)) = - Q_w(w_1 w_2 \dots w_{m-1}, w_m, n) =
\frac{1}{2^{|w|} n} + {\mathcal O}_w\left(\frac{1}{n^2}\right)\cdot \ \ \ \ \ \Box $$
\begin{example} If $w = 11$ (note that the sequence $(-1)^{a_w(n)}$ is a classical sequence, called the Golay-Shapiro or also the Rudin-Shapiro sequence), then $$ \lim_{k \to \infty} \sum_{\substack{n \geq 1 \\ a_w(n) = k}} \frac{1}{n} = 4 \log 2. $$ \end{example}
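For readers who wish to experiment, the counting function $a_w$ is easy to compute; the following Python sketch (our notation) counts the possibly overlapping occurrences of $w$ as a factor of the binary expansion of $n$:

```python
def a_w(w, n):
    # number of (possibly overlapping) occurrences of the word w
    # in the binary expansion of n
    b = bin(n)[2:]
    return sum(b[i:i + len(w)] == w for i in range(len(b) - len(w) + 1))

# For w = 11: a_w(3) = 1 (binary 11), a_w(7) = 2 (111 contains two
# overlapping occurrences of 11), and (-1)^{a_w(n)} is the
# Golay-Shapiro (Rudin-Shapiro) sequence.
```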
\begin{remark} A similar study could probably be undertaken for any integer base $b \geq 2$, combining ideas in \cite{Allouche-Shallit1989, Allouche-Hajnal-Shallit, Hu-Y}. \end{remark}
\noindent {\bf Acknowledgments} We warmly thank Jeff Shallit and Manon Stipulanti for discussions about Kempner series, and B. Morin for sharing her expertise in ancient Greek.
\end{document}
\begin{document}
\title{Quantum enhanced joint measurement of multiple non-commuting observables with SU(1,1) interferometer}
\author{Yuhong Liu$^{1}$}
\author{Jiamin Li$^{1}$} \author{Liang Cui$^{1}$} \author{Nan Huo$^{1}$} \author{Syed M Assad$^{3}$} \author{Xiaoying Li$^{1}$}
\email{xiaoyingli@tju.edu.cn} \author{Z. Y. Ou$^{1,2}$}
\email{zou@iupui.edu} \affiliation{ $^{1}$College of Precision Instrument and Opto-Electronics Engineering, Key Laboratory of Opto-Electronics Information Technology, Ministry of Education, Tianjin University, Tianjin 300072, P. R. China\\ $^{2}$Department of Physics, Indiana University-Purdue University Indianapolis, Indianapolis, IN 46202, USA\\ $^{3}$Department of Quantum Science, The Australian National University, Canberra ACT 0200, Australia }
\date{\today}
\begin{abstract}
The Heisenberg uncertainty relation in quantum mechanics sets the limit on the measurement precision of non-commuting observables, which prevents us from measuring them accurately at the same time. In some applications, however, the information is embedded in two or more non-commuting observables. On the other hand, quantum entanglement allows us to infer, through Einstein-Podolsky-Rosen correlations, two conjugate observables with a precision better than what is allowed by the Heisenberg uncertainty relation. With the help of the newly developed SU(1,1) interferometer, we implement a scheme to jointly measure information encoded in multiple non-commuting observables of an optical field, with a signal-to-noise ratio improvement of about 20$\%$ over the standard quantum limit on all measured quantities simultaneously. This scheme can be generalized to the joint measurement of information in an arbitrary number of non-commuting observables.
\end{abstract}
\pacs{42.50.Lc, 42.50.St, 42.50.Dv, 42.65.Yj}
\maketitle
Quantum properties of light were applied to precision phase measurement as early as the 1980s, beating the shot noise limit set by classical physics, i.e., the so-called standard quantum limit (SQL) \cite{cav,xiao,gran}. The basic idea is to reduce the quantum noise in the measurement with some novel quantum states of light. But because of the Heisenberg uncertainty principle for two non-commuting observables, quantum noise reduction in one observable is inevitably accompanied by a noise increase in the other. Thus, it seems impossible to beat the SQL simultaneously in the joint measurement of non-commuting observables.
On the other hand, quantum correlation via quantum entanglement provides us with a remedy to circumvent the aforementioned dilemma. Einstein, Podolsky, and Rosen (EPR) showed in a seminal paper \cite{epr} that quantum mechanics allows the existence of a state that exhibits perfect correlations not only between the positions of two remotely located particles but also between their momenta. This allows for the inference of both the position and momentum of a particle with a precision violating the Heisenberg uncertainty relation, leading to the EPR paradox. The experimental realization of the EPR entangled state and the demonstration of the EPR paradox were first achieved in an optical system based on a non-degenerate parametric amplifier \cite{ou92}. Fundamental implications aside, it was suggested \cite{brau,zh} and demonstrated experimentally \cite{xyl,sna} that these quantum nonlocal correlations of orthogonal observables can be employed in the scheme of quantum dense coding for the simultaneous measurement of small modulations on the phase and amplitude, with the quantum noise in both measurements reduced below the standard quantum limit. In this letter, we report on a different scheme for the joint measurement of non-commuting observables. The scheme is based on a recently developed SU(1,1) nonlinear interferometer \cite{yur,pl,jing11,ou12,hud14,CB15}, which operates on a fundamentally different principle from traditional linear interferometers. Making use of the advantages of this new interferometer, we achieve joint measurement of information encoded in multiple non-commuting observables, such as phase and amplitude as well as arbitrarily rotated quadrature-phase amplitudes, with a sensitivity beating the SQL simultaneously.
\begin{figure}
\caption{Schematics for joint measurement of information encoded in multiple non-commuting observables through weak modulations on the probe beam by an amplitude modulator (AM) and a phase modulator (PM). (a) Classical scheme with a beam splitter (BS). (b) An SU(1,1) interferometer with parametric amplifiers (OPA1, OPA2), which can achieve noise reduction at all quadrature angles. (c) Classical scheme with an amplifier (OPA). HD: homodyne detection.}
\label{fig:scheme}
\end{figure}
Since homodyne measurement can only measure one quadrature-phase amplitude at a time, the simplest way to simultaneously obtain information encoded in the phase and amplitude of an optical field is to split the field into two with a beam splitter, one beam for each measurement, as shown in Fig. 1(a). But the vacuum noise from the unused port leads to a 3 dB penalty in the signal-to-noise ratio (SNR). Our scheme, shown in Fig. 1(b), is an SU(1,1) interferometer (SUI) which employs optical parametric amplifiers instead of beam splitters to split the input coherent field into the signal and idler fields and to recombine them. The phase and amplitude modulations can be obtained in separate but simultaneous measurements of the quadrature amplitudes $\hat X_s$ and $\hat Y_i$ at the signal and idler outputs of the interferometer, respectively. For maximum sensitivity, the interferometer works at the dark fringe. As a result, destructive quantum interference leads to quantum noise cancelation and minimum noise for all quadrature-phase amplitudes at the two outputs \cite{ou93,kong13,Guo2016}. In the meantime, the signals of the non-commuting observables encoded on the probe beam are amplified by OPA2 for SNR improvement.
For a probe with weakly modulated phase and amplitude signals, $\delta$ (or $Y_m$) and $\epsilon$ (or $X_m$), our theoretical analysis \cite{JML} shows that the SNRs for the BS scheme in Fig. 1(a) and the SUI scheme with $g_2\gg g_1$ in Fig. 1(b) are given by \begin{eqnarray} &&SNR_{BS}(\hat X_{b_1}) =2I_{ps}\epsilon^2,~~SNR_{BS}(\hat Y_{b_2}) = 2{I_{ps}} \delta^2;\\ &&SNR_{SUI}(\hat X_{s}) =2(G_1+g_1)^2I_{ps}\epsilon^2,\label{SNRs}\\ &&SNR_{SUI}(\hat Y_{i}) = 2(G_1+g_1)^2{I_{ps}} \delta^2,\label{SNRi} \label{SNR-t} \end{eqnarray} where the subscripts $b_1,b_2$ denote the two outputs of the BS and $s,i$ denote the signal and idler fields. The gain factors $g_k, G_k$ $(k=1,2)$ of the OPAs satisfy the relation $G_k^2-g_k^2=1$, and $I_{ps}$ is the photon number, or intensity, of the probe sensing field.
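As a quick numerical illustration (our own back-of-the-envelope sketch, not part of the original analysis), dividing the displayed SUI expressions by the BS ones gives a ratio of $(G_1+g_1)^2$ for both quadratures, which can be evaluated for any first-stage gain:

```python
import math

def sui_over_bs(G1):
    # ratio SNR_SUI / SNR_BS = (G1 + g1)^2 implied by the displayed
    # formulas, with g1 = sqrt(G1^2 - 1) from the relation G1^2 - g1^2 = 1
    g1 = math.sqrt(G1 ** 2 - 1)
    return (G1 + g1) ** 2

# G1 = 1 (no first amplifier) recovers the classical ratio 1; a gain
# of G1 = 2 would ideally give (2 + sqrt(3))^2, i.e. about 13.9.
```

In practice the observed improvement is much smaller (about 20\%, as reported below) because of losses between the two amplifiers.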
In the experiment, however, the BS scheme is sensitive to detection loss while the SUI scheme is not. So, for a fair comparison, we consider the amplifier scheme in Fig. 1(c), where, similar to the SUI scheme, a parametric amplifier is used to split the information-encoded field into two for simultaneous measurement. This scheme can be shown \cite{JML} to have the SNRs \begin{eqnarray} &&SNR_{Amp}(\hat X_{s}) ={4G^2I_{ps}\epsilon^2\over G^2+g^2},~~SNR_{Amp}(\hat Y_{i}) = {4g^2{I_{ps}} \delta^2\over G^2+g^2}.\cr && \label{SNR-t2} \end{eqnarray} It is clear from Eqs.~(1) and (4) that the amplifier scheme at large gain $G\gg 1$ gives the same SNRs as the BS scheme for the joint measurement of the modulations $X_m = \epsilon$ and $Y_m = \delta$. However, the SUI scheme has SNRs improved by a factor of $G_1^2+g_1^2$ as compared to the classical schemes. This improvement in phase measurement was demonstrated in an SUI before \cite{hud14} and is due to quantum entanglement from the first OPA, which allows quantum amplification of the signal without the noise being amplified in the second OPA \cite{kong13}. Here, we show that the sensitivity in amplitude measurement can be improved simultaneously.
\begin{figure}
\caption{Joint measurement of the amplitude and phase modulations $X_m$ and $Y_m$ under different situations. (a) and (b) are the results of simultaneous measurement by HD1 and HD2 on $\hat X_s, \hat Y_i$, respectively for the schemes in Fig. 1(b) (blue) and 1c (red). (c) and (d) are results from the beam splitter scheme in Fig. 1(a). The peaks at 0.8 MHz and 1.2 MHz correspond to the AM and PM modulation signals, respectively. The measurement results are normalized to the shot noise level. }
\end{figure}
We implement the three schemes of Fig. 1 with optical parametric amplifiers based on four-wave mixing in dispersion-shifted fiber \cite{Guo2016}. The details of the experimental setup are given in the Method section. The signals of weakly modulated amplitude ($\epsilon$ or $X_m$) and phase ($\delta$ or $Y_m$) are encoded on $\hat X = \hat X(0)$ and $\hat Y=\hat X(\pi/2)$ of the probe beam by applying sinusoidal modulation signals at 0.8 and 1.2 MHz to the amplitude modulator (AM) and phase modulator (PM), respectively. This probe is a classical coherent beam for the classical schemes in Figs. 1(a) and 1(c) but is a quantum correlated beam from OPA1 for the SUI scheme in Fig. 1(b). In all cases, the beam intensity $I_{ps}$ is adjusted to be the same for a fair comparison, and the amplifier gains for the OPA in Fig. 1(c) and for OPA2 in Fig. 1(b) are also the same to ensure equal signal gain. The operation of the classical amplifier scheme in Fig. 1(c) is straightforward. For the best sensitivity, the SUI is operated at the dark fringe, where the output powers at both the signal and idler ports are at a minimum \cite{hud14}.
Simultaneous measurements of the modulation signals $X_m$ and $Y_m$ are performed for the schemes with amplifiers (Fig. 1(b) and (c)) by homodyne measurement at the signal and idler output ports, with detection efficiencies of 72\% and 62\%, respectively. The relative phase $\phi_1$ ($\phi_2$) between LOs (LOi) and the signal (idler) output beam in HD1 (HD2) is locked to $\phi_1=0$ ($\phi_2=\pi/2$). During this measurement, the intensity of the probe $I_{ps}$ is about 2 nW and the gains of OPA1 and OPA2 are 2 and 9, respectively. Figures 2(a) and 2(b) respectively present the joint measurement obtained by HD1 and HD2. The blue traces in Fig. 2 are achieved by the SUI (Fig. 1(b)) with a seed injection of 1 nW (input of OPA1), while the red traces are acquired with the classical scheme (Fig. 1(c)) by setting P1 to zero and increasing the seed injection to 2 nW to keep the same $I_{ps}$. The peaks at 0.8 and 1.2 MHz in Figs. 2(a) and 2(b) correspond to the signals of $X_m$ and $Y_m$, respectively. It can be seen that the signal powers of the blue and red traces are about the same, but the noise floors of the blue traces are lower than those of the red traces by about 20$\%$ and 22$\%$, respectively, due to destructive quantum interference between the signal and the idler fields out of OPA1~\cite{Guo2016}. This results in SNRs of 1.62$\pm0.04$ and 1.55$\pm0.04$ from the blue traces, which are better than the SNRs of 1.29$\pm0.03$ and 1.22$\pm0.03$ extracted from the red traces, for the two conjugate variables $X_m$ and $Y_m$. \begin{figure}
\caption{Joint measurement of $X_m(0)$ (at 0.8 MHz) and $X_m(\pi/4)$ (at 1.0 MHz) encoded in the non-orthogonal quadrature-phase amplitudes ${\hat X(0)}$ and ${\hat X(\pi/4)}$, measured by (a) HD1 and (b) HD2, respectively. Blue traces are the results of the SUI (Fig. 1(b)) and red traces are from a conventional OPA (Fig. 1(c)).}
\end{figure}
The direct measurement scheme in Fig. 1(a) is realized by blocking the two pumps P1 and P2 so that OPA1 and OPA2 simply function as transmission media. After splitting the probe beam with a 50/50 BS, we perform a joint measurement of $X_m$ by HD1 and $Y_m$ by HD2 at the two output ports of the BS (b1 and b2). The results are shown as the black traces in Figs. 2(c) and 2(d), respectively. The extracted SNRs of $X_m$ and $Y_m$ are 1.03$\pm0.03$ and 1.01$\pm0.03$ after correcting for the transmission efficiency of OPA2. Ideally, from Eq.(\ref{SNR-t2}), the SNRs of the classical methods in Figs. 1(a) and 1(c) should be the same at large amplifier gain. But Fig. 2 clearly shows a difference. This is because the output noise of the conventional OPA is higher than the shot noise, so its SNR is less sensitive to detection loss than that of the direct homodyne measurement at the shot noise level.
\begin{figure}
\caption{ Joint measurement of the modulations $X_m(0)$ (at 0.8 MHz), $X_m(\pi/4)$ (at 1.0 MHz), and $X_m(\pi/2)$ (at 1.2 MHz) encoded in the three non-commuting quadrature-phase amplitudes $\hat X, \hat X(\pi/4), \hat Y$ and measured by (a) HD1, (b) HD2, and (c) HD3, respectively. Blue traces are the results of the SUI and red traces are from the amplifier scheme.}
\end{figure}
For clarity of demonstration, we chose different frequencies for the phase and amplitude modulations in the experiment above. This corresponds to the case when the two modulations are uncorrelated. If the two modulations are at the same frequency and are correlated, the result is a modulation of a different quadrature-phase amplitude $X_m(\theta) = X_m\cos\theta +Y_m\sin\theta$. For its measurement, we can perform homodyne detection at $\phi_{LO}=\theta$. Since the homodyne angle is changed, one would expect a different, likely higher, noise level. However, because the working principle of the SUI is quantum destructive interference for noise cancelation, the noise is lowest at the dark fringe for all quadrature-phase amplitudes, and the SNR for $X_m(\theta)$ is the same as in Eqs.(\ref{SNRs},\ref{SNRi}) for all $\theta$ \cite{JML}. So, we can measure $X_m(\theta)$ at one output port and $X_m$ at the other simultaneously, with improved SNR for both. Fig. 3 shows the results with all experimental conditions the same as in Fig. 2, except that a modulation signal at 1.0 MHz is applied to both AM and PM equally for $X_m(\pi/4)$ and the phase of HD2 is set at $\phi_2=\pi/4$. The blue traces are again for the SUI and the red ones for the amplifier scheme. We find that the SNRs of ${X_m(0)}$ and ${X_m(\pi/4)}$ extracted from the blue traces (1.61$\pm0.04$ and 1.57$\pm0.04$) are better than those from the red traces (1.3$\pm0.03$ and 1.27$\pm0.03$), leading to improved simultaneous measurement of information encoded in the non-commuting observables $\hat X(0)$ and $\hat X(\pi/4)$ of the probe. Notice that because ${X_m(0)}$ and ${X_m(\pi/4)}$ are non-orthogonal, their respective projections appear in the figures corresponding to measurements of the other quantities.
\begin{figure*}
\caption{Joint measurement of multiple modulations $X_m(\theta)$ on arbitrary quadrature-phase amplitudes by post-detection data processing. Spectra of $i(\theta)= i_{HD1} \cos\theta + k i_{HD3}\sin\theta$ for $\theta =$ (a) 0, (b) $\pi/4$, (c) $\pi/2$, and (d) $3\pi/4$. $i_{HD1}, i_{HD3}$ are the photocurrents recorded from HD1 and HD3 with $\phi_1=0$ and $\phi_3=\pi/2$, respectively. $k$ is an adjustable parameter that can be calibrated to balance the gain difference of HD1 and HD3. The SNRs are calculated for the dash-circled peaks.}
\end{figure*}
Next, we demonstrate that the SUI can be used to perform quantum enhanced joint measurement of information encoded in more than two quadrature-phase amplitudes. This is achieved by splitting the outputs of the interferometer further into more beams for multiple simultaneous measurements, as shown in the dashed boxes in Fig. 1(b) and (c). Since the noise levels out of the amplifiers are much higher than the vacuum noise level, further splitting will not noticeably reduce the SNR. This is different from the BS scheme in Fig. 1(a). In the experiment, in addition to the weak modulation signals at 0.8 and 1.2 MHz on AM and PM to encode information on the two orthogonal observables $\hat X$ and $\hat Y$, a modulation signal at 1.0 MHz is simultaneously loaded on both AM and PM so that the information of $X_m(\pi/4)$ is encoded on the probe beam as well. $X_m(\pi/4)$ is measured by a third homodyne detection device (HD3), with an efficiency of about 80\%, in the signal output port after splitting the signal output into two with a 50/50 BS (dashed box in Fig. 1(b,c)). The results in Figs. 4(a), 4(b), and 4(c) are obtained by HD1, HD2, and HD3 with their relative phases locked at $\phi_{1}=0$, $\phi_{2}=\pi/2$, and $\phi_3=\pi/4$, respectively. The experimental conditions are the same as those in Fig. 2. Again, the blue traces are for the SUI and the red ones for the classical amplifier scheme in Fig. 1(c). Similar to Fig. 3, projections of non-orthogonal quantities appear in all the figures. In each plot of Fig. 4, the signal powers of ${X_m}$, $Y_m$, and ${X_m(\pi/4)}$ extracted from the blue and red traces, including the signals of full or projected size, are about the same, but the noise floor of the blue trace is 20$\%$ lower than that of the red one.
The best SNRs of $X_m$, $Y_m$, and $X_m(\pi/4)$ extracted from the blue traces are 1.6$\pm0.04$, 1.56$\pm0.04$, and 1.61$\pm0.04$, respectively, while those from the red traces are 1.25$\pm0.03$, 1.21$\pm0.03$, and 1.27$\pm0.03$, respectively. Therefore, we achieve joint measurement of information in three non-commuting quadrature-phase amplitudes with sensitivity beyond the standard quantum limit. Notice that, even though the total detection efficiencies for $X_m$ and $X_m(\pi/4)$ are about 50\% lower than that for $Y_m$, the SNR improvement for $X_m$ and ${X_m(\pi/4)}$ in Figs. 4(a) and 4(c) is about the same as that for $Y_m$ in Fig. 4(b). Thus, we demonstrate that the SNR is not sensitive to the detection loss introduced by the BS, as discussed earlier.
We can also approach this problem of joint measurement with the method of post-detection data processing. Since we can measure both the phase ($Y_m$) and amplitude ($X_m$) modulations simultaneously, we should be able to extract the modulation of an arbitrary quadrature-phase amplitude, $X_m(\theta) = X_m \cos\theta + Y_m\sin\theta$, by first recording $X_m$ and $Y_m$ and then processing these data to obtain $X_m(\theta)$. For this purpose, we measure and record with a fast digital oscilloscope the photocurrents simultaneously at HD1 ($i_{HD1}$) with $\phi_1=0$ and HD3 ($i_{HD3}$) with $\phi_3=\pi/2$, and calculate a new current $i(\theta)$ via $i(\theta) = i_{HD1} \cos\theta + k i_{HD3}\sin\theta$, where $k$ is an adjustable parameter obtained by balancing the different photocurrents from HD1 and HD3. In our experiment, $k=0.84$. This leads to the measurement of a modulation signal at an arbitrary angle, $X_m(\theta)$ \cite{JML}. To demonstrate this approach, we encode the AM and PM in the same way as that used in obtaining Fig. 4. Fig. 5 shows the spectrum of $i(\theta)$ at the respective angles $\theta$. Maximum signals are extracted at the corresponding $\theta$-angles. Notice that the signal is nearly zero at 1.0 MHz for $i(3\pi/4)$, which is at an angle orthogonal to the modulation signal at $\pi/4$. The noise reduction is the same for all quadrature-phase amplitude modulations. We thus achieve the simultaneous measurement of multiple arbitrary quadrature-phase amplitude modulations by post-detection data processing. This scheme for measuring arbitrary quadrature modulations is an indirect measurement, in contrast to the results in Fig. 4 for the direct detection scheme. Nevertheless, they show similar results.
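The post-detection combination is simple enough to sketch in a few lines (a hypothetical helper of our own; in the experiment $i_{HD1}$ and $i_{HD3}$ are the recorded photocurrents and the combination is formed offline):

```python
import math

def i_theta(i_hd1, i_hd3, theta, k=0.84):
    # i(theta) = i_HD1 cos(theta) + k * i_HD3 sin(theta); k calibrates
    # the gain difference between HD1 and HD3 (k = 0.84 in the experiment)
    return i_hd1 * math.cos(theta) + k * i_hd3 * math.sin(theta)

# A signal encoded at the pi/4 quadrature appears with equal calibrated
# projections on HD1 and HD3, and is nulled at the orthogonal angle 3*pi/4:
s1, s3 = 1.0, 1.0 / 0.84
null = i_theta(s1, s3, 3 * math.pi / 4)   # essentially zero
```

This mirrors the observed near-zero signal at 1.0 MHz for $i(3\pi/4)$ in Fig. 5.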
In conclusion, we have demonstrated that the sensitivity can simultaneously beat the standard quantum limit in the joint measurement of information on multiple non-commuting observables, including the phase and amplitude as well as an arbitrarily rotated quadrature-phase amplitude. Compared to the joint measurement scheme based on quantum dense coding with EPR entangled states in Refs. \cite{xyl,sna}, our scheme, which utilizes the merits of the SU(1,1) interferometer, leads to the joint measurement of information in more than two arbitrary non-commuting quadrature-phase amplitudes. In particular, it should be emphasized that our scheme can overcome the extra noise (quantum and classical) encountered during the detection process \cite{JML}, and has practical implications and significance for quantum metrology.
What we have done here does not violate the Heisenberg uncertainty relation, because the encoded information is indirectly measured through EPR-type quantum correlations, which are allowed by quantum mechanics. So, the improvement should in principle be unlimited. In our experiment, however, compared with the classical scheme, we observed an improvement of only about 20$\%$ in SNR. This is mainly due to the transmission loss introduced in coupling light out of OPA1 into OPA2 and the loss introduced by the temporal mode mismatch between the pulse-pumped OPA1 and OPA2 (see Method for details) \cite{Guo15,GuoOL16}. The former can be overcome by improving the transmission efficiency of the optical components placed between the two amplifiers, while the latter can be surmounted by properly managing the dispersion of the nonlinear media of the two OPAs.
In the scheme of post-detection data processing by the beam splitting method, our measurement is at the signal output port only, leaving the idler output port unused. We can likewise measure at the idler port and obtain the same quadrature-phase amplitude modulation signal with an SNR equal to that at the signal port, thus giving rise to another copy of the encoded information. This is somewhat similar to the scheme of quantum information tapping by quantum amplification that was discussed in Ref.\cite{Guo2016}, but here we apply it to multi-parameter measurement.
\section*{Method: detailed experimental arrangement}
\subsection{Experimental setup: }
\begin{figure}
\caption{ \textbf{Experimental setup.}
An SU(1,1) interferometer (SUI) is formed with OPA1 and OPA2 with seed injection at OPA1. AM, amplitude modulator; PM, phase modulator; P1, P2, pulsed pumps; DSF, dispersion shifted fiber; CWDM, coarse wavelength division multiplexer; BS, 50/50 beam splitter; DL, delay line; OPA2 lock, locking signal for OPA2; PLL, phase locking loops; LOs, LOi, local oscillators; PZT, Piezo-Electric ceramic Transducer; HD, homodyne detection; DAQ, data acquisition system. }
\label{fig5:experiment_setup}
\end{figure}
The experimental setup for the SU(1,1) interferometer scheme is shown in Fig.\ref{fig5:experiment_setup}. There are two fiber-based optical parametric amplifiers (OPAs) in the scheme. Each fiber-based OPA consists of a 300 m long dispersion shifted fiber (DSF) and two coarse wavelength division multiplexers (CWDM) \cite{GuoOL16}. OPA1 generates the entangled signal and idler beams in the 1550 nm band \cite{GuoOL16}. At the output of DSF1, the seed injection centered at 1570.8 nm is amplified with a gain that depends on the pump P1. The amplification of the signal beam is accompanied by a conjugate beam (called the ``idler'') at the wavelength of 1535 nm. CWDM2 is used to separate the amplified signal and idler beams. The two fiber-based OPAs are identical, except for their inputs. OPA2 is a phase sensitive amplifier, because it has both signal and idler inputs with non-zero intensity, which come from OPA1 and are coupled into DSF2 together with the pump P2 via CWDM3. The signal and idler outputs of OPA2 are separated by CWDM4. The signal beam out of OPA1 successively propagates through the amplitude modulator (AM) and phase modulator (PM) so as to encode information onto two or more quadrature-phase amplitudes, which are amplified by OPA2 and come out at both the signal and idler output ports of the SU(1,1) interferometer. The preparation of the optical light sources, including the pumps of the OPAs (P1, P2), the injected seed signal, and the local oscillator (LO) of each homodyne detection (HD) device, as well as the realization of mode matching between the two OPAs for the best performance of the interferometer, are described in a previous publication \cite{Guo2016}. When two non-commuting quadrature-phase amplitudes, $\hat X(\phi_1)$ and $\hat X(\phi_2)$, are measured, the signal and idler outputs of the SU(1,1) interferometer are detected with HD1 and HD2, respectively.
When three non-commuting quadrature-phase amplitudes, $\hat X(\phi_1)$, $\hat X(\phi_2)$ and $\hat X(\phi_3)$, are measured, the signal output is further split by inserting a 50/50 BS, whose outputs are measured by HD1 and HD3, respectively. The performance of the SU(1,1) interferometer is characterized by analyzing the photo-currents of all the HDs with a data acquisition system (DAQ).
\subsection{Phase Locking: }
The improvement in the SNR of the joint measurement occurs under two conditions: (i) the phase between the pump and the two inputs of OPA2, $\varphi=2\varphi_{p2}-\varphi_s-\varphi_i$, is locked to ensure that OPA2 is operated in the deamplification condition, where $\varphi_{p2}$ is the phase of the pump P2, and $\varphi_s$ and $\varphi_i$ are the phases of the signal and idler inputs; and (ii) the phase of the local oscillator for each HD device is properly locked. To achieve this, we first lock the phase $\phi_1$/$\phi_2$ of the LO for HD1/HD2 by passing the injected seed sequentially through an amplitude modulator (AM2) and a phase modulator (PM2). AM2 is modulated at the frequencies of 0.3125 and 1.875 MHz. PM2 is modulated at the frequencies of 0.625 and 1.875 MHz. In this case, the modulated signals of both AM2 and PM2 are transferred to the amplified signal and idler beams of OPA1 and OPA2. The relative phase $\phi_1$/$\phi_{2}$ is locked by feeding the ac output of HD1/HD2 to the digital phase locking loop PLL1/PLL2, and by loading the feedback signal of PLL1/PLL2 onto the piezo-electric transducer PZT1/PZT2~\cite{GuoOL16}. When the relative phase of HD1 is locked to $\phi_1=0$ by exploiting the sinusoidal modulation signal of PM2 at 0.625 MHz, we are able to measure the quadrature amplitude $\hat X(\phi_1)$ with $\phi_1=0$ at the signal output. Meanwhile, we can lock the relative phase of HD2 to $\phi_{2}=\pi/2$ by using the sinusoidal modulation signal of AM2 at 0.3125 MHz to measure the quadrature amplitude $\hat X(\phi_2)$ with $\phi_2=\pi/2$, and lock the relative phase of HD3 to $\phi_3=\pi/4$ $(\pi/2)$ by using the combined sinusoidal modulation signals of AM2 and PM2 at 1.875 MHz to measure the quadrature amplitude $\hat X(\phi_3)$ with $\phi_3=\pi/4$ $(\pi/2)$. The relative phase $\varphi$, which determines the operation condition of OPA2, is locked by passing P2 through PM3 with a sinusoidal modulation signal at 0.9375 MHz.
Since the modulated signal of PM3 is transferred to the signal and idler outputs of OPA2, the deamplification condition of OPA2, $\varphi=\pi$, can be obtained by feeding the ac output of HD1 at 0.9375 MHz to PLL4 and loading the feedback signal of PLL4 onto the delay line (DL) on the idler input of OPA2.
\end{document}
\begin{document}
\title{On problems in the calculus of variations \\ in increasingly elongated domains}
\author{Herv\'e Le Dret\\ Sorbonne Universit\'es, UPMC Univ Paris 06, CNRS,\\Laboratoire Jacques-Louis Lions, Bo\^\i te courrier 187,\\
75252 Paris Cedex 05, France. Email: herve.le\_dret@upmc.fr\and Amira Mokrane\\ Laboratoire d'\'equations aux d\'eriv\'ees partielles\\ non lin\'eaires et histoire des math\'ematiques, ENS, B.P. 92,\\ Vieux Kouba, 16050 Alger, Alg\'erie\\ and USTHB, Facult\'e des math\'ematiques, D\'epartement d'analyse,\\ Laboratoire d'analyse math\'ematique et num\'erique\\ des \'equations aux d\'eriv\'ees partielles, Bab Ezzouar, Alger, Alg\'erie.\\
Email: mokrane\_amira3@yahoo.fr}
\maketitle
\begin{abstract}\noindent We consider minimization problems in the calculus of variations set in a sequence of domains the size of which tends to infinity in certain directions and such that the data only depend on the coordinates in the directions that remain constant. We study the asymptotic behavior of minimizers in various situations and show that they converge in an appropriate sense toward minimizers of a related energy functional in the constant directions.\\ \textbf{MSC 2010:} 35J25, 35J35, 35J62, 35J92, 49J45, 74K99.\\ \textbf{Keywords:} Calculus of variations, domains becoming unbounded, asymptotic behavior, exponential rate of convergence. \end{abstract}
\section{Introduction}
In this article, we revisit the ``$\ell\to+\infty$'' problem in the context of the calculus of variations. This class of problems was introduced by Chipot and Rougirel in 2000, \cite{[CR1]}, see also the monograph by Chipot \cite{[C2]}, and has since given rise to many works by several authors dealing with various elliptic and parabolic problems up until recently.
A prototypical $\ell\to+\infty$ problem is the following. Let $\omega={]-}1,1[$, $\ell>0$ be a real number and $\Omega_{\ell}\subset{\mathbb R}^2$ the rectangle ${]-}\ell,\ell[\times\omega$. We denote by $x_1$ the first variable in ${]-}\ell,\ell[$ and $x_2$ the second variable in $\omega$. Any function $f\in L^2(\omega)$ of the second variable gives rise to a function of two variables, still denoted $f$, by setting $f(x_1,x_2)=f(x_2)$. We thus consider the two boundary value problems: find $u_{\ell}$, a function of $(x_1,x_2)$, such that $$\begin{cases}
-\Delta u_{\ell}=f & \mbox{in } \Omega_{\ell},\\
u_{\ell}=0 & \mbox{on } \partial\Omega_{\ell},
\end{cases} $$ and find $u_{\infty}$, a function of $x_2$, such that $$\begin{cases}
-\frac{d^2u_{\infty}}{dx_2^2}=f & \mbox{in } \omega,\\
u_{\infty}=0 & \mbox{on }\partial\omega= \{-1,1\}.
\end{cases} $$ Now the function $u_\infty$ can also be considered as a function in two variables that is independent of $x_1$. In this case, it can be shown that, for any $\ell_0>0$, one has $$u_{\ell}\rightarrow u_{\infty} \mbox{ in } H^1(\Omega_{\ell_0}) \hbox{ when }\ell\to+\infty,$$ hence the name of the problem. In other words, when the data does not depend on the elongated dimension, the solution of the above boundary value problem converges in some sense at finite distance to the solution of the corresponding boundary value problem posed in the non-elongated dimension when the elongation tends to infinity.
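This convergence is easy to observe numerically. The following self-contained finite-difference sketch (our illustration, not part of the argument) takes $f \equiv 1$, so that $u_{\infty}(x_2) = (1 - x_2^2)/2$, and compares the cross-section of $u_{\ell}$ at $x_1 = 0$ with $u_{\infty}$:

```python
import numpy as np

# Solve -Delta u = 1 on (-l, l) x (-1, 1) with u = 0 on the boundary,
# using the standard 5-point finite-difference scheme and a dense solve.
l, h = 4.0, 0.1
x = np.arange(-l, l + h / 2, h)
y = np.arange(-1.0, 1.0 + h / 2, h)
nx, ny = len(x) - 2, len(y) - 2           # numbers of interior points
idx = lambda i, j: i * ny + j
A = np.zeros((nx * ny, nx * ny))
b = np.ones(nx * ny)                      # right-hand side f = 1
for i in range(nx):
    for j in range(ny):
        m = idx(i, j)
        A[m, m] = 4.0 / h ** 2
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < nx and 0 <= jj < ny:
                A[m, idx(ii, jj)] = -1.0 / h ** 2
u = np.linalg.solve(A, b).reshape(nx, ny)
u_inf = (1.0 - y[1:-1] ** 2) / 2.0        # exact 1-d limit for f = 1
err = np.max(np.abs(u[nx // 2] - u_inf))  # cross-section at x1 = 0
```

Already for $\ell = 4$ the cross-section at $x_1 = 0$ agrees with $u_{\infty}$ to within the discretization error, consistent with the exponential rate of convergence mentioned in the abstract.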
The majority of works on the $\ell\to+\infty$ problem make use of the boundary value problem itself, \emph{i.e.}, the PDE plus boundary condition. One exception to this rule is the recent papers \cite{Chipot-Savitska,Chipot Mosjic Roy}, in which the authors consider instead a sequence of problems in the calculus of variations posed on elongated domains; see also \cite{[C2new]}. This is the approach we adopt here as well.
Our main motivation for this is that certain models, such as nonlinear hyperelasticity, are naturally posed as problems in the calculus of variations for which no Euler-Lagrange equation, \emph{i.e.}, no underlying PDE even in a weak form, is available, see \cite{Ball}. Moreover, questions surrounding the Saint Venant principle in elasticity, see \cite{Mielke,Toupin}, are typically set in elongated domains, albeit in one direction only. Consequently, it makes sense to attempt to deal with some $\ell\to+\infty$ problems by using only energy minimization properties and no Euler-Lagrange equation whatsoever.
We are however quite far from achieving the goal of treating nonlinear elasticity, since the approach that we develop below relies heavily on convexity, whereas convexity is not an appropriate hypothesis for nonlinear elasticity. We are nonetheless able to encompass a wide range of nonlinear energies, including the $p$-Lapla\-cian, with some technical restrictions on the number of elongated dimensions with respect to the exponent $p$. Our hypotheses are weaker and our results are sometimes stronger than those of \cite{Chipot Mosjic Roy}. The techniques are somewhat different too, with an emphasis here on weak convergence and weak lower semicontinuity, and reliance on such classical tools as the De Giorgi slicing method, which do not depend on convexity. As a general rule, we try to make as little use of convexity as we can at any given point.
Let us describe our results a little more precisely. We consider bounded open subsets $\Omega_\ell$ of ${\mathbb R}^n$ which are Cartesian products of the form $\ell\omega'\times\omega''$, with $\omega'\subset {\mathbb R}^r$ and $\omega''\subset{\mathbb R}^{n-r}$, with $1\le r\le n-1$. We let $x=(x',x'')$ with $x'\in {\mathbb R}^r$ being the elongated variable and $x''\in {\mathbb R}^{n-r}$ the non-elongated variable. Likewise, for a scalar-valued function $v\colon\Omega_\ell\to{\mathbb R}$, we decompose the gradient $\nabla v=(\nabla'v,\nabla''v)$ with obvious notation.
We consider an energy density $F\colon{\mathbb R}^n\to{\mathbb R}$ and a function $f$ on $\omega''$, and introduce the minimization problem of finding $u_\ell\in W^{1,p}_0(\Omega_\ell)$ such that $J_\ell(u_\ell)=\inf_{v\in W^{1,p}_0(\Omega_\ell)}J_\ell(v)$ where $$ J_\ell(v)=\int_{\Omega_\ell}\bigl(F(\nabla v(x))-f(x'')v(x)\bigr)\,dx.$$ We assume that $F$ has $p$-growth, $p$-coerciveness and is convex. In particular, there is no assumption of strict convexity or uniform strict convexity made on $F$.
We then introduce $F''\colon {\mathbb R}^{n-r}\to{\mathbb R}$ by letting $F''(\xi'')=F(0,\xi'')$, again with obvious notation. Of course, $F''$ is convex, has $p$-growth and $p$-coerciveness and the minimization problem of finding $u_\infty\in W^{1,p}_0(\omega'')$ such that $J_\infty(u_\infty)=\inf_{v\in W^{1,p}_0(\omega'')}J_\infty(v)$ where $$ J_\infty(v)=\int_{\omega''}\bigl(F''(\nabla'' v(x''))-f(x'')v(x'')\bigr)\,dx'',$$ admits solutions. It turns out that, under additional hypotheses, this problem is the ``$\ell\to+\infty$'' limit of the family of minimization problems under consideration.
These hypotheses include appropriate growth and coerciveness hypotheses on the function $G\colon {\mathbb R}^n\to{\mathbb R}$, $G(\xi)=F(\xi)-F''(\xi'')$, of the form $$ \forall \xi\in{\mathbb R}^{n},
\lambda(|\xi'|^p+k|\xi''|^{p-k}|\xi'|^k)\leq G(\xi)\leq\Lambda(|\xi'|^{p}+k|\xi''|^{p-k}|\xi'|^k), $$ for some $0<\lambda\le\Lambda$ and $0\le k<p$. Depending on the case, there is either no additional hypothesis (for $k=0$), a hypothesis of strict convexity of $F''$, or a hypothesis of uniform strict convexity of $F''$ (for $k>0$).
The results are ``$\ell\to+\infty$'' convergence in the weak sense for $k=0$ when $r<p$, sharpened to convergence in the strong sense when $F''$ is furthermore assumed to be strictly convex, and strong ``$\ell\to+\infty$'' convergence for $k>0$ when $r\leq kp/(p-k)$. In the case of the $p$-Laplacian, $p>2$, we thus obtain strong ``$\ell\to+\infty$'' convergence when $r<2p/(p-2)$, see also \cite{Xie}.
In addition, in the case $k=0$, if we assume that $F''$ is uniformly strictly convex, we obtain strong convergence at an exponential rate without any restriction on $r$. This includes the known behavior of the $2$-Laplacian in the ``$\ell\to+\infty$'' context.
We conclude the article with a few comments and perspectives on the vectorial case, in connection with nonlinear elasticity in particular.
\section{Statement of the problem}
We consider two bounded open sets: $\omega'\subset{\mathbb R}^r$, with $0\in\omega'$ and $\omega'$ starshaped with respect to $0$, and $\omega''\subset{\mathbb R}^{n-r}$, with $n>r\ge 1$. Let $\ell >0$ and set \begin{equation}\label{omgl} \omega'_\ell=\ell\omega'\text{ and }\Omega_\ell=\omega'_\ell\times\omega''\subset{\mathbb R}^n. \end{equation}
Points $x$ in $\Omega_\ell$ will be denoted by $x=(x',x'')$ with $x'=(x_1,x_2,\ldots,x_r)\in\omega'_\ell$ and $x''=(x_{r+1},\ldots,x_n)\in \omega''$. Likewise, vectors $\xi$ in ${\mathbb R}^n$ will be decomposed as $\xi=(\xi',\xi'')$, with $\xi'\in {\mathbb R}^r$ and $\xi''\in {\mathbb R}^{n-r}$.
Note that because of the starshaped assumption, we have $\Omega_\ell\subset\Omega_{\ell'}$ as soon as $\ell\le\ell'$, and we are thus dealing with a ``growing'' family of open sets. We make an additional regularity hypothesis on $\omega'$, which is as follows. Define first the gauge function of $\omega'$ as $$g(x')=\inf\{t\in{\mathbb R}_+^*; x'/t\in \omega'\}.$$
Since $\omega'$ is starshaped and bounded, this is well defined, $\omega'_\ell=\{x'; g(x')<\ell\}$, and there exists $0<R_1<R_{2}$ such that $R_1|x'|\le g(x')\le R_2|x'|$ for all $x'\in{\mathbb R}^r$.
Now we assume that $\omega'$ is such that $g$ is a Lipschitz function with Lipschitz constant $K$. By Rademacher's theorem, this implies that $g$ is almost everywhere differentiable, with $|\nabla'g(x')|\le K$ a.e. Moreover, it is known that $g$ then belongs to $W^{1,\infty}_{\rm loc}({\mathbb R}^r)$ and that its almost everywhere derivatives equal its distributional derivatives. This is true for example if $\omega'$ is convex. This regularity hypothesis is for convenience only: we use $g$ to build cut-off functions inside the domains, and not up to the boundary. It should be quite clear that our results can be rewritten in order to accommodate arbitrary open sets $\omega'$.
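Two standard examples of admissible cross-sections $\omega'$ and their gauges, added here for illustration:

```latex
% Euclidean ball \omega' = B(0,1) \subset \mathbb{R}^r:
g(x') = |x'|, \qquad \omega'_\ell = B(0,\ell), \qquad K = 1;
% cube \omega' = {]-}1,1[^r:
g(x') = \max_{1\le i\le r}|x_i| = |x'|_\infty, \qquad \omega'_\ell = {]-}\ell,\ell[^r, \qquad K = 1.
```

In both cases $g$ is convex, hence Lipschitz, and the cut-off functions built from $g$ below are the usual radial (respectively cubic) truncations.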
We are interested in a sequence of problems in the calculus of variations ${\cal P}_\ell$ of the form \begin{equation}\label{pl} J_\ell(u_\ell)=\inf_{v\in W^{1,p}_0(\Omega_\ell)}J_\ell(v), \end{equation} with $u_\ell\in W^{1,p}_0(\Omega_\ell)$ and \begin{equation}\label{funjl} J_\ell(v)=\int_{\Omega_\ell}\bigl[F(\nabla v(x))-f''(x'')v(x)\bigr]\,dx, \end{equation} where $f''\in L^{p'}(\omega'')$, $\frac1p+\frac1{p'}=1$, is a given function. Observe that the force term for this problem only depends on the ``non-elongated'' variable $x''$, so that it is reasonable to expect that $u_\ell$ behaves
as a function mostly in $x''$ in the limit $\ell\to+\infty$, in a sense made precise below. We could also consider more general semilinear force terms of the form $h(x'',v)$ satisfying appropriate growth and convexity assumptions, but we stick here with a linear term for simplicity.
We assume that the energy density $F\colon{\mathbb R}^n\to{\mathbb R}$ is convex. We let \begin{equation}\label{structure de F} \begin{array}{rcl} F''\colon {\mathbb R}^{n-r}&\to&{\mathbb R}\\ \xi''&\mapsto&F(0,\xi'') \end{array} \quad\hbox{and}\quad \begin{array}{rcl} G\colon {\mathbb R}^n&\to&{\mathbb R}\\ \xi&\mapsto&F(\xi)-F''(\xi'') \end{array} \end{equation} so that \begin{equation}\label{f1f2} F(\xi',\xi'')=F''(\xi'')+G(\xi',\xi''), \end{equation} and $F''$ is convex. These functions are assumed to satisfy the following coerciveness and growth hypotheses \begin{align} &\forall \xi\in{\mathbb R}^{n},
\lambda|\xi|^p\leq F(\xi)\leq\Lambda(|\xi|^p+1),\label{growth1}\\ &\forall \xi\in{\mathbb R}^{n},
\lambda(|\xi'|^p+k|\xi''|^{p-k}|\xi'|^k)\leq G(\xi)\leq\Lambda(|\xi'|^{p}+k|\xi''|^{p-k}|\xi'|^k),\label{growth3} \end{align}
for some $0<\lambda\le\Lambda$, $p>1$ and $0\le k< p$.\footnote{Note that $k=p$ yields the same hypothesis as $k=0$.} Here, for $\xi\in{\mathbb R}^d$, $|\xi|$ denotes the canonical Euclidean norm of $\xi$ in ${\mathbb R}^d$.
Clearly, condition \eqref{growth1} implies the similar condition \begin{equation} \forall \xi''\in{\mathbb R}^{n-r},
\lambda|\xi''|^p\leq F''(\xi'')\leq\Lambda(|\xi''|^p+1),\label{growth2} \end{equation} for $F''$.
Energy densities of the form above include that associated with the $p$-Laplacian for $p\ge 2$. Indeed, in this case, $F(\xi)=\frac1p|\xi|^p=\frac1p(|\xi'|^2+|\xi''|^2)^{p/2}$ and we can take $k=2$ for $p>2$, or $k=0$ for $p=2$. Another simple energy density that is covered by our analysis is $F(\xi)=\frac1p (|\xi'|^p+|\xi''|^p)$ or more generally energies of the form $F(\xi)=F'(\xi')+F''(\xi'')$, with appropriate hypotheses on $F'$ and $F''$. Here, assuming without loss of generality that $F'(0)=0$, we have $G(\xi',\xi'')=F'(\xi')$ and we can take $k=0$.
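For the $p$-Laplacian with $p=4$, hypothesis \eqref{growth3} can be checked by hand (a computation we add for illustration):

```latex
% p = 4:  F(\xi) = \tfrac14|\xi|^4 = \tfrac14\bigl(|\xi'|^2 + |\xi''|^2\bigr)^2.
G(\xi) = F(\xi) - F''(\xi'')
       = \tfrac14\Bigl[\bigl(|\xi'|^2 + |\xi''|^2\bigr)^2 - |\xi''|^4\Bigr]
       = \tfrac14\bigl(|\xi'|^4 + 2\,|\xi''|^2|\xi'|^2\bigr),
```

which is exactly $\lambda(|\xi'|^p+k|\xi''|^{p-k}|\xi'|^k)$ with $p=4$, $k=2$ and $\lambda=\Lambda=\tfrac14$, so \eqref{growth3} holds with equality on both sides.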
In addition to the above growth and coerciveness hypotheses, which obviously imply that problem ${\cal P}_\ell$ has at least one solution $u_\ell$, we assume that $F''$ is uniformly strictly convex for $k>0$, in the sense that there exists a constant $\beta>0$ such that for all $\xi''$, $\zeta''\in {\mathbb R}^{n-r}$ and all $\theta,\mu\in [0,1]$ with $\theta+\mu=1$, we have \begin{equation}\label{uniformite stricte}
F''(\theta\xi''+\mu\zeta'')\le \theta F''(\xi'')+\mu F''(\zeta'')-k\beta\theta\mu(\theta^{p-1}+\mu^{p-1})|\xi''-\zeta''|^p; \end{equation} see for instance \cite{[A],Evans,[J-N]}. The $p$-Laplacian for $p> 2$, $k=2$, satisfies this hypothesis (the $2$-Laplacian satisfies the alternate hypothesis \eqref{uniformite stricte k=0} that will be used later on in Section 5). Note that when $k=0$, the hypothesis becomes redundant, and there is actually no requirement of even strict convexity, let alone uniform strict convexity, of $F''$ in this case.
We now introduce our candidate limit problem ${\cal P}_\infty$ as that of finding $u_\infty\in W^{1,p}_0(\omega'')$ such that \begin{equation}\label{pinfty} J_\infty(u_\infty)=\inf_{v\in W^{1,p}_0(\omega'')}J_\infty(v), \end{equation} with \begin{equation}\label{funjinfty} J_{\infty}(v)=\int_{\omega''}\bigl[F''(\nabla''v(x''))-f''(x'')v(x'')\bigr]\,dx''. \end{equation} It is also clear that problem ${\cal P}_\infty$ has at least one solution $u_\infty$.
Here and in the sequel, we use the following notational device $$\nabla'=(\partial_1,\ldots,\partial_r),\quad\nabla''=(\partial_{r+1},\ldots,\partial_n),$$ that we apply indifferently to functions defined either on $\Omega_\ell$ or on $\omega''$. For brevity, we refer to $\nabla'$ as the ``horizontal'' part of the gradient and to $\nabla''$ as the ``vertical'' part of the gradient.
We want to study the asymptotic behavior of $u_\ell$ as $\ell\to+\infty$ and compare it with a minimizer $u_\infty$ of the $(n-r)$-dimensional vertical problem ${\cal P}_\infty$. Actually, our goal is to show that the former converges to the latter in a sense that will be made precise later on.
\section{Preliminary estimates}
We first give several estimates that we will use in the proofs of our convergence results. The first estimate follows immediately from Poincar\'e's inequality. \begin{lemma}\label{lem1} There exists a constant $c_1=c_1(\omega'')$ independent of $\ell$ such that for all
$v\in W^{1,p}(\Omega_{\ell})$ whose trace vanishes on $\omega'_\ell\times\partial\omega''$, we have \begin{equation}\label{poincare}
\|v\|_{L^p(\Omega_{\ell})}\leq c_1 \|\nabla''
v\|_{L^p(\Omega_{\ell};{\mathbb R}^{n-r})}. \end{equation} \end{lemma}
Let us now give a first, coarse estimate of $u_\ell$.
\begin{lemma}\label{lem2} There exists a constant $c_2$ independent of $\ell$, such that \begin{equation}\label{estmtul}
\int_{\Omega_{\ell}}|\nabla u_{\ell}|^p\,dx\le c_2 \ell^r. \end{equation} \end{lemma}
\noindent{\bf Proof.}\enspace{} Let us take $v=0$ as a test-function in problem \eqref{pl}. It follows that $$\int_{\Omega_{\ell}} F(\nabla u_{\ell}(x))\,dx\le\int_{\Omega_{\ell}} f''(x'')u_{\ell}(x)\,dx+A\ell^r,$$ where $A=F(0){\cal L}^r(\omega'){\cal L}^{n-r}(\omega'')$ does not depend on $\ell$ (${\cal L}^d$ denotes the $d$-dimensional Lebesgue measure). By H\"older's inequality, Lemma \ref{lem1} and the coerciveness assumption \eqref{growth1}, it follows that \begin{align*}
\int_{\Omega_{\ell}} |\nabla u_{\ell}(x)|^p\,dx &\le
\frac{1}{\lambda}\Bigl(\int_{\Omega_{\ell}}|f''(x'')|^{p'}\,dx\Bigr)^{1/p'}
\Bigl(\int_{\Omega_{\ell}}|u_{\ell}(x)|^p\,dx \Bigr)^{1/p} +\frac {A}{\lambda}\ell^r\\
&\le
\frac{B}{\lambda} \ell^{r/p'}
\|\nabla''
u_{\ell}\|_{L^p(\Omega_{\ell};{\mathbb R}^{n-r})}+\frac {A}{\lambda}\ell^r, \end{align*}
with $B=c_1\|f''\|_{L^{p'}(\omega'')}{\cal L}^r(\omega')^{1/p'}$, which does not depend on $\ell$. Consequently, we obtain an estimate of the form \begin{equation}\label{estimation grossiere}
\|\nabla u_{\ell}\|^p_{L^p(\Omega_{\ell};{\mathbb R}^{n})} \le
C \ell^{r/p'}
\|\nabla u_{\ell}\|_{L^p(\Omega_{\ell};{\mathbb R}^{n})}+D\ell^r, \end{equation}
where $C$ and $D$ are constants that do not depend on $\ell$. Let us set $X=\ell^{-r/p}\|\nabla u_{\ell}\|_{L^p}$. Estimate \eqref{estimation grossiere} now reads $$X^p\le CX+D,$$ so that there exists $c_2$ depending only on $C$ and $D$ such that $X\le c_2^{1/p}$, which completes the proof.
$\square$\par\medbreak
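The last step of the proof can be made explicit (a standard remark we spell out): if $X\ge 0$ satisfies $X^p\le CX+D$ with $C,D\ge 0$ and $p>1$, then

```latex
X^p \le CX + D
\quad\Longrightarrow\quad
X \le \max\bigl((2C)^{1/(p-1)},\,(2D)^{1/p}\bigr).
```

Indeed, either $CX\le D$, in which case $X^p\le 2D$, or $CX>D$, in which case $X^p\le 2CX$ and hence $X^{p-1}\le 2C$; one may thus take $c_2=\max\bigl((2C)^{p/(p-1)},\,2D\bigr)$.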
We now recall an elementary estimate similar to what can be found
in \cite{[G2]} for instance. \begin{lemma}\label{lem3} Let $h(t)$ be a nonnegative bounded function defined on an interval $[\tau_0,\tau_1]$, $\tau_0\geq 0$. Suppose that for $\tau_0\leq t< s\leq \tau_1$, we have $$h(t)\leq \theta h(s)+C(s-t)^{-\nu_1}+D(s-t)^{-\nu_2},$$ where $C,D,\nu_1,\nu_2,\theta$ are nonnegative constants with $0\leq \theta< 1$. Then, for all $\tau_0\le t<s\le\tau_1$, we have $$h(t)\le c(C(s-t)^{-\nu_1}+D(s-t)^{-\nu_2}),$$ where $c$ is a constant that only depends on $\nu_1$, $\nu_2$ and $\theta$. \end{lemma}
\noindent{\bf Proof.}\enspace{} If we have two sequences of nonnegative numbers $a_i$ and $b_i$ such that $a_i\le \theta a_{i+1}+b_{i+1}$, it follows by induction that $a_0\le \theta^i a_i+\sum_{j=0}^{i-1}\theta^jb_{j+1}$. We apply this remark to the sequences $a_i=h(t_i)$ and $b_{i+1}=C(t_{i+1}-t_i)^{-\nu_1}+D(t_{i+1}-t_i)^{-\nu_2}$, where $t_i=t+(1-\sigma^i)(s-t)$, with $0<\sigma<1$ to be chosen later on, defines an increasing sequence in $[\tau_0,\tau_1]$ such that $t_0=t$. This yields the estimate $$h(t)\le \theta^ih(t_i)+\frac{C}{(s-t)^{\nu_1}}(1-\sigma)^{-\nu_1}\sum_{j=0}^{i-1}\biggl(\frac\theta{\sigma^{\nu_1}}\biggr)^j+\frac{D}{(s-t)^{\nu_2}}(1-\sigma)^{-\nu_2}\sum_{j=0}^{i-1}\biggl(\frac\theta{\sigma^{\nu_2}}\biggr)^j .$$ We now choose $\sigma<1$ in such a way that $\frac\theta{\sigma^{\nu_1}}<1$ and $\frac\theta{\sigma^{\nu_2}}<1$, which is possible since $\theta<1$, and conclude by letting $i\to+\infty$, remembering that $h(t_i)$ is bounded.
$\square$\par\medbreak
Next, we estimate the horizontal part of the gradient of $u_\ell$ in $L^p(\Omega_{\ell_0})$ in terms of $\ell$, $\ell_0$, $u_\ell$ and a minimizer $u_\infty$ of the vertical problem ${\cal P}_\infty$. \begin{theorem}\label{estimation de base} There exists a constant $c_3$ independent of all the other quantities such that, for all $0<t<s\le\ell$ and all minimizers $u_\infty$ of the vertical problem, we have \begin{multline}\label{l'estimation en question usc}
\|\nabla'u_\ell\|^p_{L^p(\Omega_{t};{\mathbb R}^r)}+k\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_t;{\mathbb R}^{n-r})}\\\le \frac{\delta c_3}{(s-t)^p}\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_s\setminus\Omega_t;{\mathbb R}^{n-r})}\\+
\frac{c_3k}{(s-t)^{kp/(p-k)}}\left\{\|\nabla''u_{\ell}\|^p_{L^p(\Omega_s\setminus\Omega_t;{\mathbb R}^{n-r})}+(1-\delta)\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_s\setminus\Omega_t;{\mathbb R}^{n-r})}\right\}, \end{multline} where $\delta=1$ if $0\le k\le p/2$, $\delta=0$ otherwise. \end{theorem}
\noindent{\bf Proof.}\enspace{} We first define a family of cut-off functions as follows. For all $0<t<s\le \ell$, we set $$\rho_{s,t}(x')=\frac1{s-t}\min\{(s-g(x'))_+,s-t\}.$$ By the definition of the gauge function, we see that $\rho_{s,t}\equiv 0$ on $\omega'_\ell\setminus\omega'_s$, $\rho_{s,t}\equiv 1$ on $\omega'_t$ and $0\le \rho_{s,t}\le 1$. By our regularity assumption on $\omega'$, $\rho_{s,t}$ is Lipschitz and such that $$\nabla'\rho_{s,t}(x')=-\frac1{s-t}\nabla'g(x'){\bf 1}_{\omega'_s\setminus\omega'_t}(x'),$$ so that we can estimate \begin{equation}\label{gradient cutoff}
|\nabla'\rho_{s,t}(x')|\le\frac K{s-t}{\bf 1}_{\omega'_s\setminus\omega'_t}(x'). \end{equation}
We pick a number $0<\alpha<1$ and then set \begin{equation}\label{test1} v_1(x)=(1-\alpha\rho_{s,t}(x'))u_{\ell}(x)+\alpha\rho_{s,t}(x')u_{\infty}(x''), \end{equation} and \begin{equation}\label{test2} v_2(x)=(1-\alpha\rho_{s,t}(x'))u_{\infty}(x'')+\alpha\rho_{s,t}(x')u_{\ell}(x). \end{equation} Clearly, $v_1$ belongs to $W^{1,p}_0(\Omega_\ell)$ and is thus a suitable test-function for problem ${\cal P}_\ell$, hence \begin{equation}\label{test v1} \int_{\Omega_\ell}\bigl[F(\nabla u_\ell(x))-f''(x'')u_\ell(x)\bigr]\,dx\le\int_{\Omega_\ell}\bigl[F(\nabla v_1(x))-f''(x'')v_1(x)\bigr]\,dx. \end{equation}
Next we note that, owing to the embedding $W^{1,p}_0(\Omega_\ell)\hookrightarrow L^p(\omega'_\ell; W^{1,p}_0(\omega''))$, $v_2$ is a suitable test-function for problem ${\cal P}_\infty$ for almost all $x'$, hence \begin{multline}\label{test v2} \int_{\omega''}\bigl[F''(\nabla''u_\infty(x''))-f''(x'')u_\infty(x'')\bigr]\,dx''\\ \le\int_{\omega''}\bigl[F''(\nabla''v_2(x',x''))-f''(x'')v_2(x',x'')\bigr]\,dx''. \end{multline} Integrating estimate \eqref{test v2} over $\omega'_\ell$, we obtain \begin{multline}\label{test v2 bis} \int_{\Omega_\ell}\bigl[F''(\nabla''u_\infty(x''))-f''(x'')u_\infty(x'')\bigr]\,dx\\ \le\int_{\Omega_\ell}\bigl[F''(\nabla''v_2(x))-f''(x'')v_2(x)\bigr]\,dx. \end{multline}
We add estimates \eqref{test v1} and \eqref{test v2 bis} together and note that all the terms involving $f''$ cancel out since $v_1+v_2=u_\ell+u_\infty$. Therefore, \begin{equation}\label{addition tests} \int_{\Omega_\ell}\bigl[F(\nabla u_\ell(x))+F''(\nabla''u_\infty(x''))\bigr]\,dx \le \int_{\Omega_\ell}\bigl[F(\nabla v_1(x))+F''(\nabla''v_2(x))\bigr]\,dx. \end{equation} We observe that $v_1=u_\ell$ and $v_2=u_\infty$ on $\Omega_\ell\setminus\Omega_s$, so that estimate \eqref{addition tests} boils down to \begin{align} \int_{\Omega_s}\bigl[F(\nabla u_\ell(x))+F''(\nabla''u_\infty(x''))\bigr]\,dx &\le \int_{\Omega_s\setminus\Omega_t}\bigl[F(\nabla v_1(x))+F''(\nabla''v_2(x))\bigr]\,dx\nonumber\\ &\qquad+\int_{\Omega_t}\bigl[F(\nabla v_1(x))+F''(\nabla''v_2(x))\bigr]\,dx.\label{addition tests 2} \end{align} The left-hand side of \eqref{addition tests 2}
can be rewritten as \begin{multline}\label{addition tests gauche reecrite} \int_{\Omega_s}\bigl[F(\nabla u_\ell(x))+F''(\nabla''u_\infty(x''))\bigr]\,dx\\=\int_{\Omega_s}\bigl[G(\nabla u_\ell(x))+F''(\nabla''u_\ell(x))+F''(\nabla''u_\infty(x''))\bigr]\,dx. \end{multline}
Let $I_1$ and $I_2$ be the first and second integrals in the right-hand side of \eqref{addition tests 2}. To estimate $I_1$, we just use the convexity of $F''$, since the vertical gradients of $v_1$ and $v_2$ are convex combinations of the vertical gradients of $u_\ell$ and $u_\infty$, \begin{align} I_1&=\int_{\Omega_s\setminus\Omega_t}\bigl[G(\nabla v_1(x))+F''(\nabla'' v_1(x))+F''(\nabla''v_2(x))\bigr]\,dx\nonumber\\ &\le\int_{\Omega_s\setminus\Omega_t}\bigl[G(\nabla v_1(x))+F''(\nabla'' u_\ell(x))+F''(\nabla''u_\infty (x))\bigr]\,dx.\label{estimation I1} \end{align} To estimate $I_2$, we note that $v_1=(1-\alpha)u_\ell+\alpha u_\infty$ and $v_2=\alpha u_\ell+ (1-\alpha)u_\infty$ on $\Omega_t$, thus owing to the convexity of $F$ and the uniform convexity \eqref{uniformite stricte} of $F''$, \begin{align} I_2&\le \int_{\Omega_t}\bigl[(1-\alpha)F(\nabla u_\ell)+\alpha F(\nabla u_\infty)+(1-\alpha)F''(\nabla''u_\infty)+\alpha F''(\nabla''u_\ell)\nonumber\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad -k\gamma|\nabla''(u_\infty-u_\ell)|^p\bigr]\,dx\nonumber\\
&= \int_{\Omega_t}\bigl[(1-\alpha)G(\nabla u_\ell)+F''(\nabla''u_\ell)+F''(\nabla'' u_\infty) -k\gamma|\nabla''(u_\infty-u_\ell)|^p\bigr]\,dx,\label{estimation I2} \end{align} for some $\gamma>0$. Putting estimates \eqref{addition tests 2}, \eqref{estimation I1}, \eqref{estimation I2} and equation \eqref{addition tests gauche reecrite} together, we obtain
$$\int_{\Omega_s\setminus\Omega_t}G(\nabla u_\ell)\,dx+\int_{\Omega_t}\bigl[\alpha G(\nabla u_\ell)+k\gamma|\nabla''(u_\infty-u_\ell)|^p\bigr]\,dx \le\int_{\Omega_s\setminus\Omega_t}G(\nabla v_1(x))\,dx,$$ which, upon using the coerciveness hypothesis \eqref{growth3}, yields \begin{multline}\label{premiere estimation serieuse}
a\int_{\Omega_t}\bigl[(|\nabla'u_\ell|^p+k|\nabla''u_\ell|^{p-k}|\nabla'u_\ell|^k)+k|\nabla''(u_\infty-u_\ell)|^p\bigr]\,dx \\\le\int_{\Omega_s\setminus\Omega_t}G(\nabla v_1(x))\,dx, \end{multline} where $a>0$ is a small generic constant that only depends on the other constants involved.
We now focus on estimating the right-hand side of \eqref{premiere estimation serieuse}. We have \begin{equation}\label{gradients tests} \left\{\begin{aligned} \nabla'v_1&=(1-\alpha\rho_{s,t})\nabla'u_{\ell}+\alpha\nabla'\rho_{s,t}(u_{\infty}-u_{\ell}),\\ \nabla''v_1&=(1-\alpha\rho_{s,t})\nabla''u_{\ell}+\alpha\rho_{s,t}\nabla''u_{\infty}. \end{aligned}\right. \end{equation}
Based on \eqref{gradients tests} and the definition of $\rho_{s,t}$, we have the following estimates for any exponent $q$: \begin{equation}\label{gradients tests estimes} \left\{\begin{aligned}
|\nabla'v_1|^q&\le2^{q-1}|\nabla'u_{\ell}|^q+2^{q-1}\frac{K^q}{(s-t)^q}|u_{\infty}-u_{\ell}|^q,\\
|\nabla''v_1|^{q}&\le2^{q-1}|\nabla''u_{\ell}|^{q}+2^{q-1}|\nabla''(u_{\infty}-u_{\ell})|^{q}. \end{aligned}\right. \end{equation} We will use exponents $q=p$ and $q=k$ for the first line and $q=p-k$ for the second line. Due to the growth hypothesis \eqref{growth3}, we have \begin{multline}\label{estimation de Gv1}
G(\nabla v_1)\le A\biggl(|\nabla'u_{\ell}|^p+\frac{1}{(s-t)^p}|u_{\infty}-u_{\ell}|^p\\+k\bigl(|\nabla''u_{\ell}|^{p-k}+|\nabla''(u_{\infty}-u_{\ell})|^{p-k}\bigr)\Bigl(|\nabla'u_{\ell}|^k+\frac{1}{(s-t)^k}|u_{\infty}-u_{\ell}|^k\Bigr)\biggr), \end{multline} where $A$ is a large generic constant that only depends on the other constants involved. For $k\ge 1$, three of the four product terms that appear need to be estimated. For this purpose, we will use Young's inequality in the following form $$a^kb^{p-k}\le \frac{k}{p}a^p+\frac{p-k}{p}b^{p}$$ for $a,b\ge 0$ (recall that $p> k$). We thus obtain \begin{multline}\label{estimation de Gv1 2}
G(\nabla v_1)\le A\biggl(|\nabla'u_{\ell}|^p+\frac{1}{(s-t)^p}|u_{\infty}-u_{\ell}|^p\\
+k\Bigl(|\nabla''u_{\ell}|^{p-k}|\nabla'u_{\ell}|^k+|u_{\infty}-u_{\ell}|^p+\frac{1}{(s-t)^{kp/(p-k)}}|\nabla''u_{\ell}|^{p}
+|\nabla''(u_{\infty}-u_{\ell})|^{p}\Bigr)\biggr), \end{multline} where $A$ is another generic constant. We integrate this inequality over $\Omega_s\setminus\Omega_t$ and use Poincar\'e's inequality in the vertical variables to obtain \begin{multline}\label{estimation du membre de droite}
\int_{\Omega_s\setminus\Omega_t}G(\nabla v_1)\,dx\le A\int_{\Omega_s\setminus\Omega_t}\bigl(|\nabla'u_{\ell}|^p+k|\nabla''u_{\ell}|^{p-k}|\nabla'u_{\ell}|^k+k|\nabla''(u_{\infty}-u_{\ell})|^{p}\bigr)\,dx\\
+\frac{A}{(s-t)^p}\|\nabla''(u_{\infty}-u_{\ell})\|^p_{L^p(\Omega_s\setminus\Omega_t)}
+\frac{Ak}{(s-t)^{kp/(p-k)}}\|\nabla''u_{\ell}\|^p_{L^p(\Omega_s\setminus\Omega_t)}, \end{multline} with $A$ yet another generic constant.
We now consider two different cases. First, for $0\le k\le p/2$, let us set
$$h(t)=\int_{\Omega_{t}}\bigl(|\nabla' u_{\ell}|^p+k|\nabla'' u_{\ell}|^{p-k}|\nabla' u_{\ell}|^k+k|\nabla''(u_{\infty}-u_{\ell})|^{p}\bigr)\,dx.$$ Inequalities \eqref{premiere estimation serieuse} and \eqref{estimation du membre de droite} may be rewritten as \begin{multline}\label{inegalite de g} h(t)\le \theta h(s)
+\frac{1}{(s-t)^{p}}\|\nabla''(u_{\infty}-u_{\ell})\|^p_{L^p(\Omega_s\setminus\Omega_t)}\\
+\frac{k}{(s-t)^{kp/(p-k)}}\|\nabla''u_{\ell}\|^p_{L^p(\Omega_s\setminus\Omega_t)} , \end{multline}
with $\theta=\frac A{A+a}\in {]}0,1[$. Let $t\le t_1<s_1\le s$. We invoke Lemma \ref{lem3}, with $\nu_1=p$, $C=\|\nabla''(u_{\infty}-u_{\ell})\|^p_{L^p(\Omega_s\setminus\Omega_t)}$, $\nu_2=kp/(p-k)$, $D= k\|\nabla''u_{\ell}\|^p_{L^p(\Omega_s\setminus\Omega_t)}$, to conclude that \begin{equation}\label{inegalite de g corrigee} h(t_1)\le c(C(s_1-t_1)^{-\nu_1}+D(s_1-t_1)^{-\nu_2}). \end{equation}
The result follows in this case by letting $t_1\to t$ and $s_1\to s$ since the constant $c$ only depends on $\nu_1$, $\nu_2$ and $\theta$, and $h$ is continuous (recall that $\delta=1$).
Now the second case is when $p/2<k<p$. Estimate \eqref{estimation du membre de droite} still holds true, but we now use Young's inequality once more in the form $$a^wb^{p-w}\le \frac{w}{p}a^p+\frac{p-w}{p}b^{p}$$ with $w=\frac{p(2k-p)}k$ to deduce that $$\frac1{(s-t)^p}=1^w\biggl(\frac1{(s-t)^{p/(p-w)}}\biggr)^{p-w}\le \frac{p-k}k+\frac{2k-p}k\frac1{(s-t)^{kp/(p-k)}},$$ so that we can actually write \begin{equation}\label{inegalite de g bis} h(t)\le \theta h(s)
+\frac{k}{(s-t)^{kp/(p-k)}}\bigl(\|\nabla''(u_{\infty}-u_{\ell})\|^p_{L^p(\Omega_s\setminus\Omega_t)}
+\|\nabla''u_{\ell}\|^p_{L^p(\Omega_s\setminus\Omega_t)}\bigr) , \end{equation} with the same function $h$, but with another value for $\theta$, which we do not write here. We conclude as before with Lemma \ref{lem3} and the first constant $C=0$ for instance.
$\square$\par\medbreak
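The weighted Young inequality used twice in the proof above is elementary; for completeness, it follows from the weighted arithmetic--geometric mean inequality: for $a,b\ge 0$ and $0<k<p$,

```latex
a^k b^{p-k}
= \bigl(a^p\bigr)^{k/p}\bigl(b^p\bigr)^{(p-k)/p}
\le \frac{k}{p}\,a^p + \frac{p-k}{p}\,b^p,
```

by $x^\theta y^{1-\theta}\le \theta x+(1-\theta)y$ with $x=a^p$, $y=b^p$ and $\theta=k/p$, a consequence of the concavity of the logarithm.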
The following is an immediate consequence of the previous estimate. \begin{corollary}\label{Caccio global} We have, for all $\ell> \ell_0$,
\begin{multline}\label{Caccio usc}
\|\nabla'u_\ell\|^p_{L^p(\Omega_{\ell_0};{\mathbb R}^r)}+k\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_{\ell_0};{\mathbb R}^{n-r})}\\\le \frac{\delta c_3}{(\ell-\ell_0)^p}\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_\ell;{\mathbb R}^{n-r})}\\+\frac{c_3k}{(\ell-\ell_0)^{kp/(p-k)}}\left\{\|\nabla''u_{\ell}\|^p_{L^p(\Omega_\ell;{\mathbb R}^{n-r})}+(1-\delta)\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_\ell;{\mathbb R}^{n-r})}\right\}, \end{multline} where $\delta=1$ if $0\le k\le p/2$, $\delta=0$ otherwise. \end{corollary}
\noindent{\bf Proof.}\enspace{} Indeed, we take $s=\ell$, $t=\ell_0$ and notice that $\Omega_\ell\setminus\Omega_{\ell_0}\subset\Omega_\ell$.
$\square$\par\medbreak
Let us remark that when $k=0$, no strict convexity assumption is made on $F''$; \emph{i.e.}, $F''$ may well fail to be strictly convex. In this case, the previous result boils down to $$
\|\nabla'u_\ell\|^p_{L^p(\Omega_{\ell_0};{\mathbb R}^r)}\le \frac{c_3}{(\ell-\ell_0)^p}\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_\ell;{\mathbb R}^{n-r})}. $$ However, when $k>0$, we make crucial use of the uniform strict convexity to derive the estimate.
Let us close this section with an estimate similar to that obtained in Lemma~\ref{lem2}. Recall that $u_\ell$ is a minimizer on $\Omega_\ell$, whereas the following estimate is on $\Omega_{\ell_0}$. See \cite{Chipot-Savitska} for a very similar argument.
\begin{lemma}\label{estimation sur lzero} There exist constants $\bar\ell$ and $c_4$, independent of $\ell$, such that for all $\bar \ell\le\ell_0\le \ell$, \begin{equation}\label{estmtul zero}
\int_{\Omega_{\ell_0}}|\nabla u_{\ell}|^p\,dx\le c_4 \ell_0^r. \end{equation} \end{lemma}
\noindent{\bf Proof.}\enspace{} Let $1\le t\le \ell-1$ and set $\rho_t=\rho_{t+1,t}$. We take $v_{t,\ell}=(1-\rho_t)u_{\ell}$ as a test-function in problem~\eqref{pl}. This test-function is equal to $u_\ell$ ``far away'' and is $0$ in $\Omega_t$. We obtain \begin{align*} \int_{\Omega_{\ell}} F(\nabla u_{\ell})\,dx&\le \int_{\Omega_{\ell}}\bigl( F(\nabla v_{t,\ell})-f''(v_{t,\ell}-u_\ell)\bigr)\,dx\\ &=\int_{\Omega_t}F(0)\,dx+\int_{\Omega_t}f''u_\ell\,dx+ \int_{\Omega_{t+1}\setminus\Omega_t}\bigl[ F(\nabla v_{t,\ell})+f''\rho_tu_\ell\bigr]\,dx\\ &\qquad\qquad\qquad +\int_{\Omega_{\ell}\setminus\Omega_{t+1}} F(\nabla u_{\ell})\,dx. \end{align*} Therefore, we see that $$ \int_{\Omega_{t+1}} F(\nabla u_{\ell})\,dx\le At^r+\int_{\Omega_{t+1}}\nu_tf''u_\ell\,dx+\int_{\Omega_{t+1}\setminus\Omega_t} F(\nabla v_{t,\ell})\,dx, $$ with $A=F(0){\cal L}^r(\omega'){\cal L}^{n-r}(\omega'')$ and $\nu_t=\mathbf{1}_{\Omega_t}+\rho_t\mathbf{1}_{\Omega_{t+1}\setminus\Omega_t}$.
By the coerciveness and growth hypotheses \eqref{growth1}, we infer that $$
\lambda\int_{\Omega_{t+1}} |\nabla u_{\ell}|^p\,dx\le Bt^r+\int_{\Omega_{t+1}}|f''u_\ell|\,dx+\Lambda\int_{\Omega_{t+1}\setminus\Omega_t} |\nabla v_{t,\ell}|^p\,dx, $$ for some constant $B$, since $0\le\nu_t\le 1$ and the Lebesgue measure of $\Omega_{t+1}\setminus\Omega_t$ is of the order of $t^{r-1}$.
In $\Omega_{t+1}\setminus\Omega_t$, we have
$$|\nabla v_{t,\ell}|^p=|(1-\rho_t)\nabla u_{\ell}-u_\ell\nabla\rho_t|^p\le 2^{p-1}\bigl(|\nabla u_\ell|^p+K^p|u_\ell|^p\bigr).$$ Clearly, estimate \eqref{poincare} is also valid on $\Omega_{t+1}\setminus\Omega_t$, thus, $$
\int_{\Omega_{t+1}\setminus\Omega_t}|u_\ell|^p\,dx
\le c_1^p\int_{\Omega_{t+1}\setminus\Omega_t}|\nabla''u_\ell|^p\,dx\le c_1^p\int_{\Omega_{t+1}\setminus\Omega_t}|\nabla u_\ell|^p\,dx, $$ so that
$$\int_{\Omega_{t+1}\setminus\Omega_t} |\nabla v_{t,\ell}|^p\,dx\le 2^{p-1}(1+c_1^pK^p)\int_{\Omega_{t+1}\setminus\Omega_t} |\nabla u_{\ell}|^p\,dx.$$ Furthermore, \begin{align*}
\int_{\Omega_{t+1}}|f''u_\ell|\,dx&\le\frac{\varepsilon}{p}\int_{\Omega_{t+1}}|u_\ell|^p\,dx +\frac{(t+1)^r}{\varepsilon^{p'/p}p'}{\cal L}^r(\omega')\|f''\|_{L^{p'}(\omega'')}^{p'}\\
&\le\frac{\varepsilon c_1^p}{p}\int_{\Omega_{t+1}}|\nabla u_\ell|^p\,dx +\frac{C}{\varepsilon^{p'/p}}t^r, \end{align*} with $\varepsilon>0$ to be chosen afterwards.
Let us set
$$h(t)=\int_{\Omega_t}|\nabla u_\ell|^p\,dx.$$ Putting all the above estimates together, it follows that \begin{equation} \lambda' h(t+1)\le E\bigl(h(t+1)-h(t)\bigr)+Dt^r,\label{une estimation de plus} \end{equation} with $\lambda'=\lambda-\frac{\varepsilon c_1^p}{p}$, $D=B+\frac{C}{\varepsilon^{p'/p}}$ and $E=2^{p-1}\Lambda(1+c_1^pK^p)$. We now pick $\varepsilon$ in such a way that $\lambda'>0$. Inequality \eqref{une estimation de plus} may be rewritten as \begin{equation}\label{encore une recurrence a venir} h(t)\le \theta h(t+1)+Ht^r, \end{equation} where $\theta=1-\frac{\lambda'}{E}\in {]}0,1[$ and $H=\frac D{E}$ depend neither on $t$ nor on $\ell$. Iterating inequality \eqref{encore une recurrence a venir}, we see that for $n=\lfloor\ell-t\rfloor$, we have \begin{equation}\label{apres recurrence} h(t)\le \theta^nh(t+n)+H\sum_{m=0}^{n-1}(t+m)^r\theta^m. \end{equation}
Let us now set $t=\ell_0$. We have $h(\ell_0+\lfloor\ell-\ell_0\rfloor)\le h(\ell)\le c_2\ell^r$ by Lemma \ref{lem2}. Hence $$\theta^{\lfloor\ell-\ell_0\rfloor}h(t+\lfloor\ell-\ell_0\rfloor)\le c_2 \theta^{\lfloor\ell-\ell_0\rfloor}\ell^r\le c_2 \theta^{\ell-\ell_0-1}\ell^r.$$ Now, for $\ell_0\ge-\frac{r}{\ln\theta}$, the function in the right-hand side is decreasing, hence maximum for $\ell=\ell_0$. Therefore, $$\theta^{\lfloor\ell-\ell_0\rfloor}h(t+\lfloor\ell-\ell_0\rfloor)\le \frac{c_2}{\theta} \ell_0^r,$$ for $\ell\ge\ell_0\ge-\frac{r}{\ln\theta}$. Moreover, for $\ell_0\ge 1$, \begin{multline*} \sum_{m=0}^{\lfloor\ell-\ell_0\rfloor-1}(\ell_0+m)^r\theta^m=\ell_0^r\sum_{m=0}^{\lfloor\ell-\ell_0\rfloor-1}\Bigl(1+\frac m{\ell_0}\Bigr)^r\theta^m\\ \le \ell_0^r\sum_{m=0}^{\lfloor\ell-\ell_0\rfloor-1}(1+m)^r\theta^m\le \frac{\sum_{m=1}^{+\infty}m^r\theta^m}{\theta}\ell_0^r, \end{multline*} which completes the proof with $\bar \ell=\max\bigl(1,-\frac{r}{\ln\theta}\bigr)$.
$\square$\par\medbreak
We now turn to the convergence results. As a consequence of Lemma \ref{estimation sur lzero}, we have, without any restriction on $r$ with respect to $p$ and $k$,
\begin{theorem}\label{sous suite} There exists a subsequence $\ell\to+\infty$ and a function $u^*\in W^{1,p}_{\rm loc}(\Omega_\infty)$ such that, for all $\ell_0$, \begin{equation}\label{conv faible}
u_{\ell|\Omega_{\ell_0}}\rightharpoonup u^*_{|\Omega_{\ell_0}}\text{ weakly in }W^{1,p}(\Omega_{\ell_0}). \end{equation} Moreover, $u^*=0$ on $\partial\Omega_\infty$. \end{theorem}
Note that the weak convergence above implies that $u_{\ell}\rightharpoonup u^*$ weakly in $W^{1,p}_{\rm loc}(\Omega_\infty)$. We will sometimes omit the restriction notation in the sequel when unnecessary.
\noindent{\bf Proof.}\enspace{} By estimates \eqref{poincare} and \eqref{estmtul zero}, for all $n\in{\mathbb N}^*$, $u_\ell$ is bounded in $W^{1,p}(\Omega_n)$. Using the diagonal procedure, we thus construct a sequence $\ell_n$ such that for all $m$, $u_{\ell_n|\Omega_m}\rightharpoonup u^*_m$ weakly in $W^{1,p}(\Omega_m)$, with $u^*_m=0$ on $\omega'_m\times\partial\omega''$. Now, since $\Omega_m\subset\Omega_{m'}$ as soon as $m\le m'$, it follows that $u^*_m=u^*_{m'|\Omega_m}$, so that we have constructed a single limit function $u^*$ in the desired class. Furthermore, for all $\ell_0$, if we choose an integer $m\ge \ell_0$, we see that convergence~\eqref{conv faible} holds true.
$\square$\par\medbreak
In the sequel, we will always consider a weakly convergent subsequence $u_\ell$ in the sense of Theorem \ref{sous suite}.
\section{Identification of the limit when $\ell\to+\infty$} In this section, we do not make any further use of assumption \eqref{uniformite stricte} of uniform strict convexity of $F''$, other than the fact that we used it to establish Theorem \ref{estimation de base}.\footnote{Keep in mind that this hypothesis is void for $k=0$ anyway.} The results will only hold for values of $r$ small enough depending on $p$. We let $\Omega_\infty={\mathbb R}^r\times\omega''$.
Let us first show that the asymptotic behavior of $u_\ell$ is independent of the elongated dimension if $r$ is small enough.
\begin{theorem}\label{limite allongee} Assume that $r<p$ if $k=0$, or that $r<kp/(p-k)$ if $0<k<p$. Then we have $\nabla'u^*=0$ and $u^*$ may be identified with a function in the $x''$ variable only, still denoted $u^*$, which belongs to $W^{1,p}_0(\omega'')$. \end{theorem}
\noindent{\bf Proof.}\enspace{} By estimates \eqref{estmtul} and \eqref{Caccio usc} and the triangle inequality, it follows that \begin{equation}\label{gradient' tend vers zero}
\|\nabla'u_\ell\|^p_{L^p(\Omega_{\ell_0};{\mathbb R}^r)}\le C\biggl(\frac{\delta}{(\ell-\ell_0)^p}+\frac{k}{(\ell-\ell_0)^{kp/(p-k)}}\biggr)\ell^{r}\to 0 \end{equation} when $\ell\to+\infty$ with $\ell_0$ fixed. Indeed, when $0<k\le p/2$, we actually have $\frac{kp}{p-k}\le p$, and since $\ell-\ell_0\ge1$ for $\ell$ large, the first term on the right-hand side of estimate \eqref{gradient' tend vers zero} is then bounded from above by the second term.
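The elementary exponent comparison used here, $\frac{kp}{p-k}\le p\iff 2k\le p$ (clear the denominator $p-k>0$), can be spot-checked numerically with sample values (illustrative only):

```python
# Comparison of the two decay exponents in the estimate above:
# kp/(p-k) <= p holds exactly when k <= p/2.
def elongation_exponent(p, k):
    return k * p / (p - k)

for p in (2.0, 3.0, 4.5):
    assert elongation_exponent(p, 0.25 * p) <= p   # k < p/2
    assert elongation_exponent(p, 0.50 * p) == p   # k = p/2: equality
    assert elongation_exponent(p, 0.75 * p) > p    # k > p/2: reversed
```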
Now $\nabla'u_\ell\rightharpoonup\nabla'u^*$ weakly in $L^{p}_{\rm loc}(\Omega_\infty)$, hence we see that $\nabla'u^*=0$, which concludes the proof of the Theorem.
$\square$\par\medbreak
To get a feel for what Theorem \ref{limite allongee} says, let us look at a few examples. For the Laplacian, we have $p=2$ and we can take $k=0$, which restricts this result to $r=1$ (see Section 5 for a more general result with additional hypotheses that applies in this case). For the $p$-Laplacian, $p>2$, we can take $k=2$ and the result is restricted to $r<2p/(p-2)$. This restriction for the $p$-Laplacian can already be found in \cite{Xie}. Note that $r=1$ and $r=2$ are allowed for any value of $p$. This is not optimal in this particular case, since it is known that $\ell\to+\infty$ convergence holds without restriction on the dimension with respect to $p$, see \cite{Chipot-Xie}.
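For the $p$-Laplacian example, the admissible range $r<2p/(p-2)$ can be tabulated directly. The following sketch (illustrative values of $p$ only) confirms the remark that $r=1$ and $r=2$ always qualify, since $2p/(p-2)>2$ for every $p>2$:

```python
# Admissible elongated dimension for the p-Laplacian (k = 2): r < 2p/(p-2).
def r_threshold(p):
    assert p > 2
    return 2 * p / (p - 2)

for p in (2.1, 3.0, 4.0, 10.0, 1000.0):
    assert r_threshold(p) > 2        # so r = 1 and r = 2 are always allowed
assert r_threshold(4.0) == 4.0       # e.g. for p = 4, any r in {1, 2, 3} works
```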
Let us now identify the limit function. We first need another estimate.
\begin{lemma}\label{estimation sur une tranche} There exists a constant $c_5$ such that for all $t\le s$, \begin{equation}\label{estime tranche}
\limsup_{\ell\to+\infty}\int_{\Omega_s\setminus\Omega_t}|\nabla u_{\ell}|^p\,dx\le c_5 (s^r-t^r). \end{equation} \end{lemma}
\noindent{\bf Proof.}\enspace{} We may assume that $t>0$, since the case $t=0$ is already covered by Lemma \ref{estimation sur lzero}. We use here De Giorgi's classical slicing trick. Let $n$ be an integer large enough so that $0\le t-\frac1n<s+\frac1n\le\ell$. For each integer $m$, $1\le m\le n$, we consider the cut-off function $$\chi_{m,n}(x')=\rho_{s+\frac{m}{n^2},s+\frac{m-1}{n^2}}(x')\Bigl(1-\rho_{t-\frac{m-1}{n^2},t-\frac{m}{n^2}}(x')\Bigr).$$
This cut-off function takes its values in $[0,1]$, it is $0$ whenever $g(x')\ge s+\frac{m}{n^2}$ or $g(x')\le t-\frac{m}{n^2}$, it is $1$ for $t-\frac{m-1}{n^2}\le g(x')\le s+\frac{m-1}{n^2}$, and $|\nabla\chi_{m,n}|\le Kn^2$. Let us call $S_{m,n}$ the slice where $0<\chi_{m,n}(x')<1$. We observe that $$\bigcup_{m=1}^n \overline{S_{m,n}}=\overline{\Omega_{s+\frac1n}\setminus\Omega_s}\bigcup \overline{\Omega_{t}\setminus\Omega_{t-\frac1n}} \subset \Omega_{s+1},$$ and that $S_{m,n}\cap S_{m',n}=\emptyset$ when $m\neq m'$.
Let us consider the test-function $v_{\ell,m,n}=(1-\chi_{m,n})u_\ell+\chi_{m,n}u^*$. The minimization problem yields the estimate \begin{align*} \int_{\Omega_\ell}F(\nabla u_\ell)\,dx&\le\int_{\Omega_\ell}F(\nabla v_{\ell,m,n})\,dx-\int_{\Omega_\ell}f''\chi_{m,n}(u^*-u_\ell)\,dx\\ &=\int_{\Omega_\ell}F(\nabla v_{\ell,m,n})\,dx-\int_{\Omega_{s+1}}f''\chi_{m,n}(u^*-u_\ell)\,dx. \end{align*} Taking into account the specific form of the cut-off function, this implies that \begin{align} \nonumber\int_{\Omega_s\setminus\Omega_t}F(\nabla u_\ell)\,dx&\le \int_{\Omega_{s+\frac{m}{n^2}}\setminus\Omega_{t-\frac{m}{n^2}}}F(\nabla u_\ell)\,dx\\ \nonumber&\le\int_{\Omega_{s+\frac{m}{n^2}}\setminus\Omega_{t-\frac{m}{n^2}}}F(\nabla v_{\ell,m,n})\,dx-\int_{\Omega_{s+1}}f''\chi_{m,n}(u^*-u_\ell)\,dx\\ \nonumber&\le\int_{S_{m,n}}F(\nabla v_{\ell,m,n})\,dx+\int_{\Omega_{s+\frac{m-1}{n^2}}\setminus\Omega_{t-\frac{m-1}{n^2}}}F(\nabla u^*)\,dx\\ &\qquad\qquad-\int_{\Omega_{s+1}}f''\chi_{m,n}(u^*-u_\ell)\,dx.\label{slicing1} \end{align} Let us estimate each term in the right-hand side separately. First of all, we have \begin{equation}\label{slicing2}
\Bigl|\int_{\Omega_{s+1}}f''\chi_{m,n}(u^*-u_\ell)\,dx\Bigr|\le A^{1/p'}(s+1)^{r/p'}\|f''\|_{L^{p'}(\omega'')}\|u^*-u_\ell\|_{L^p(\Omega_{s+1})}, \end{equation} with $A=\mathcal{L}^r(\omega')$. Secondly, we see that \begin{equation}\label{slicing3}
\biggl|\int_{\Omega_{s+\frac{m-1}{n^2}}\setminus\Omega_{t-\frac{m-1}{n^2}}}F(\nabla u^*)\,dx\biggr|\le A\Bigl(\Bigl(s+\frac1n\Bigr)^r-
\Bigl(t-\frac1n\Bigr)^r\Bigr)\|F''(\nabla'' u^*)\|_{L^1(\omega'')}. \end{equation}
We now come to the slicing argument stricto sensu. By the growth estimate \eqref{growth1}, we have \begin{multline}\label{slicing4}
\int_{S_{m,n}}F(\nabla v_{\ell,m,n})\,dx\le2^{p-1}\Lambda\Bigl(\int_{S_{m,n}}\bigl(|\nabla u_\ell|^p+|\nabla u^*|^p+1\bigr)\,dx\\+K^pn^{2p}
\int_{S_{m,n}}|u^*- u_\ell|^p\,dx\Bigr). \end{multline} The only term that causes a difficulty is the last term coming from $\nabla\chi_{m,n}$. We now plug estimates \eqref{slicing2}, \eqref{slicing3} and \eqref{slicing4} into the right-hand side of estimate \eqref{slicing1}, sum for $m=1$ to $n$ and divide the result by $n$. Observing that the sum of integrals over the slices $S_{m,n}$ gives rise to integrals over the union of all slices, which is included in $\Omega_{s+1}$, this yields \begin{align}
\int_{\Omega_s\setminus\Omega_t}F(\nabla u_\ell)\,dx&\le A^{1/p'}(s+1)^{r/p'}\|f''\|_{L^{p'}(\omega'')}\|u^*-u_\ell\|_{L^p(\Omega_{s+1})}\\ &\qquad +A\Bigl(\Bigl(s+\frac1n\Bigr)^r-
\Bigl(t-\frac1n\Bigr)^r\Bigr)\|F''(\nabla'' u^*)\|_{L^1(\omega'')}\\
&+\frac{2^p\Lambda c_4}n(s+1)^r+2^{p-1}\Lambda K^p n^{2p-1}\|u^*-u_\ell\|^p_{L^p(\Omega_{s+1})}. \end{align}
We first let $\ell\to+\infty$. Due to the Rellich-Kondra\v sov theorem, $\|u^*-u_\ell\|_{L^p(\Omega_{s+1})}\to 0$ and it follows from the coerciveness estimate that \begin{multline*}
\limsup_{\ell\to+\infty}\int_{\Omega_s\setminus\Omega_t}|\nabla u_{\ell}|^p\,dx\le \frac A\lambda\Bigl(\Bigl(s+\frac1n\Bigr)^r-
\Bigl(t-\frac1n\Bigr)^r\Bigr)\|F''(\nabla'' u^*)\|_{L^1(\omega'')}\\+\frac{2^p\Lambda c_4}{n\lambda}(s+1)^r. \end{multline*}
We finally let $n\to+\infty$ to obtain the result with $c_5=\frac A\lambda\|F''(\nabla'' u^*)\|_{L^1(\omega'')}$.
$\square$\par\medbreak
We now are in a position to prove the main result of this section.
\begin{theorem}\label{resultat principal faible} The function $u^*$ is a minimizer of problem ${\cal P}_\infty$. \end{theorem}
\noindent{\bf Proof.}\enspace{} Let $z\in W^{1,p}_0(\omega'')$ be arbitrary. We use the test function $v_\ell=(1-\rho_{t})u_\ell+\rho_{t}z$, with $\rho_t=\rho_{t+1,t}$, so that $v_\ell=u_\ell$ on $\Omega_\ell\setminus\Omega_{t+1}$ and $v_\ell=z$ on $\Omega_t$. We thus have \begin{equation}\label{on va finir par y arriver} \int_{\Omega_{t+1}}[F(\nabla u_\ell)-f''u_\ell]\,dx\le \int_{\Omega_{t+1}\setminus\Omega_t}[F(\nabla v_\ell)-f''v_\ell]\,dx +\int_{\Omega_{t}}[F(\nabla z)-f''z]\,dx. \end{equation}
It follows from Lemma \ref{estimation sur une tranche} that
$$\limsup_{\ell\to+\infty}\Bigl|\int_{\Omega_{t+1}\setminus\Omega_t}[F(\nabla v_\ell)-f''v_\ell]\,dx\Bigr|\le C(t+1)^{r-1}$$ for some constant $C$ independent of $\ell$ and $t$. The functional on the left-hand side of estimate \eqref{on va finir par y arriver} is weakly lower semicontinuous, hence, letting $\ell\to+\infty$, we obtain \begin{align*} (t+1)^r\mathcal{L}^r(\omega') \int_{\omega''}[F(\nabla u^*)-f''u^*]\,dx''&\le C(t+1)^{r-1}\\ &\qquad+t^r\mathcal{L}^r(\omega') \int_{\omega''}[F(\nabla z)-f''z]\,dx'' \end{align*} and the result follows from letting $t\to+\infty$, since $F(\nabla u^*)=F''(\nabla'' u^*)$ and $F(\nabla z)=F''(\nabla'' z)$.
$\square$\par\medbreak
We now apply a classical trick to obtain strong convergence when $F''$ is strictly convex. Of course, when $k>0$, this is already the case by assumption \eqref{uniformite stricte}. Strict convexity is only a new assumption if $k=0$. In this case, the solution $u_\infty$ of the limit problem is unique and this uniqueness implies the weak convergence of the whole family $u_\ell$.
\begin{theorem}\label{resultat principal fort} Assume that $F''$ is strictly convex. Then $u^*=u_\infty$ and $u_{\ell}\to u_\infty$ strongly in $W^{1,p}(\Omega_{\ell_0})$ for all $\ell_0$. \end{theorem}
We recall the following two lemmas that can be found \emph{e.g.} in \cite{Ball-Marsden}.
\begin{lemma}\label{BM1} Let $F\colon {\mathbb R}^M\to {\mathbb R}$ be strictly convex. Let $\mu\in{]}0,1[$ and $a_j,a\in {\mathbb R}^M$ such that $$ \mu F(a_j)+(1-\mu)F(a)-F(\mu a_j+(1-\mu) a)\to 0\text{ as }j\to+\infty. $$ Then $a_j\to a$. \end{lemma}
The second lemma is a slight variation on Fatou's lemma. \begin{lemma}\label{BM2} Let $F_j,F,H_j,H\in L^1(\Omega)$ with $F_j\ge H_j\ge 0$ for all $j$, $F_j\to F$ and $H_j\to H$ a.e., and $\int_\Omega F_j\,dx\to\int_\Omega F\,dx$. Then $$ \int_\Omega H_j\,dx\to\int_\Omega H\,dx. $$ \end{lemma}
\noindent{\bf Proof of Theorem \ref{resultat principal fort}.} We already know that $\nabla' u_\ell\to 0=\nabla' u^*$ strongly in $L^p(\Omega_t)$ by estimate \eqref{gradient' tend vers zero}. We thus just have to prove the strong convergence of $\nabla'' u_\ell$.
We use a similar slicing as before, with the test-functions $\rho_{t+\frac m{n^2},t+\frac {m-1}{n^2}}u_\ell$ for $n$ large enough, $1\le m\le n$. Skipping the details, this slicing implies that $$\limsup_{\ell\to+\infty}\int_{\Omega_t}F(\nabla u_\ell)\,dx\le \int_{\Omega_t}F(\nabla u^*)\,dx.$$
On the other hand, for almost all $x'$, the function $u_{x',\ell}\colon x''\mapsto u_\ell(x',x'')$ is an admissible test-function for the limit problem, so that $$\int_{\omega''}[F''(\nabla''u^*)-f''u^*]\,dx''\le \int_{\omega''}[F''(\nabla''u_{x',\ell})-f''u_{x',\ell}]\,dx''.$$ We integrate this inequality with respect to $x'\in t\omega'$ and obtain $$\int_{\Omega_t}[F''(\nabla''u^*)-f''u^*]\,dx\le \int_{\Omega_t}[F''(\nabla''u_\ell)-f''u_\ell]\,dx.$$ We now let $\ell\to+\infty$, which yields $$\int_{\Omega_t}F''(\nabla''u^*)\,dx\le \liminf_{\ell\to+\infty}\int_{\Omega_t}F''(\nabla''u_\ell)\,dx.$$ By hypothesis \eqref{growth3}, $G\ge 0$, which implies that $F''(\xi'')\le F(\xi',\xi'')$ for any $\xi'$. It follows that \begin{equation}\label{func-conv}
\int_{\Omega_t}F''(\nabla''u_\ell)\,dx\to \int_{\Omega_t}F''(\nabla''u^*)\,dx \end{equation} when $\ell\to+\infty$, since $F''(\nabla''u^*)=F(\nabla u^*)$.
Let us pick $\mu\in{]}0,1[$ and set $$g_\ell=\mu F''(\nabla'' u_\ell)+(1-\mu)F''(\nabla'' u^*)-F''(\mu \nabla'' u_\ell+(1-\mu)\nabla'' u^*). $$ By weak lower semicontinuity, it is clear that $$\liminf_{\ell\to+\infty}\int_{\Omega_t}F''(\mu \nabla'' u_\ell+(1-\mu)\nabla'' u^*)\,dx \ge\int_{\Omega_t}F''(\nabla'' u^*)\,dx.$$ Therefore $$0\le \limsup_{\ell\to+\infty}\int_{\Omega_t}g_\ell\,dx\le \int_{\Omega_t}F''(\nabla'' u^*)\,dx-\int_{\Omega_t}F''(\nabla'' u^*)\,dx=0,$$ so that $g_\ell\to 0$ a.e.\ (up to a subsequence). We then apply Lemma \ref{BM1} to deduce that $\nabla''u_\ell\to\nabla''u^*$ a.e.\ up to that same subsequence.
We now let
$$H_\ell=|\nabla''u_\ell-\nabla''u^*|^p\le 2^{p-1}(F''(\nabla'' u_\ell)+|\nabla''u^*|^p)=F_\ell,$$ and invoke Lemma \ref{BM2} and \eqref{func-conv} to obtain the result for $\ell_0=t$. To conclude for all $\ell_0$, we use the diagonal process.
$\square$\par\medbreak
\section{Convergence rates} In the previous section, we obtained convergence results without taking advantage of the term involving $k$ in the left-hand side of estimate \eqref{Caccio usc}. This makes them valid in particular for $k=0$ without strict or uniform strict convexity. It should however be clear that for $k>0$, the term in question can be used to obtain a much shorter convergence proof with convergence rate, which we do not detail here. More precisely,
\begin{theorem}\label{limite allongee usc} Under the previous hypotheses with $0<k<p$ and $r<kp/(p-k)$, we have
$$ \|u_{\ell}-u_{\infty}\|^p_{W^{1,p}(\Omega_{\ell_0})}\leq C\ell^{r-\frac{kp}{p-k}}.$$ \end{theorem}
The proof is a direct consequence of Corollary \ref{Caccio global} and Lemma \ref{lem2}.
In any case, the estimates do not seem to allow a convergence proof without any restriction on $r$ with respect to $p$ in all generality, whereas it is known in some cases, for instance in the case of the Laplacian, that convergence holds true for all values of $r$.
In order to partially overcome these shortcomings, we assume now that $k=0$ and that $F''$ is uniformly strictly convex in the sense that \begin{equation}\label{uniformite stricte k=0}
F''(\theta\xi''+\mu\zeta'')\le \theta F''(\xi'')+\mu F''(\zeta'')-\beta\theta\mu(\theta^{p-1}+\mu^{p-1})|\xi''-\zeta''|^p, \end{equation} for some $\beta>0$. Note that this is equivalent to allowing $k=p$ in hypotheses \eqref{growth3} and \eqref{uniformite stricte}. In some sense, $\frac{kp}{p-k}$ is then infinite and it is to be expected that there should be no restriction on the allowed dimensions $r$, together with faster-than-polynomial convergence. This is what we now proceed to show.
Under assumption \eqref{uniformite stricte k=0}, it is fairly clear that we still have an estimate similar to that of Theorem \ref{estimation de base}, namely, \begin{equation}\label{Caccio ter}
\|\nabla'u_\ell\|^p_{L^p(\Omega_{t})}+\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_t)}\le \frac{C}{(s-t)^p}\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_s\setminus\Omega_t)}. \end{equation}
Let us thus prove that not only does convergence hold without restrictions on the elongated dimension $r$, but that it also occurs at an exponential rate. The extra control makes things actually much easier.
\begin{theorem}\label{th4}Under hypotheses \eqref{growth1}-\eqref{growth3} with $k=0$ and \eqref{uniformite stricte k=0}, for all $r$ and all $\ell_0$, there exist constants $C$ and $\alpha>0$ independent of $\ell$ such that we have
$$\|\nabla(u_{\ell}- u_{\infty})\|_{L^p(\Omega_{\ell_0})}\leq C e^{-\alpha\ell}.$$ \end{theorem}
\noindent{\bf Proof.}\enspace{} We take $s=t+1$ in estimate \eqref{Caccio ter}, which yields \begin{align*}
\|\nabla'u_\ell\|^p_{L^p(\Omega_{t})}+\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_t)}&\le C\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_{t+1}\setminus\Omega_{t})}\\ &\leq C\|\nabla'u_\ell\|^p_{L^p(\Omega_{t+1}\setminus\Omega_{t})}\\ &\qquad\qquad{}+
C\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_{t+1}\setminus\Omega_{t})}. \end{align*} Setting
$$g(t)=\|\nabla'u_\ell\|_{L^p(\Omega_t)}^p+\|\nabla''(u_\ell-u_\infty)\|_{L^p(\Omega_t)}^p,$$ we have just shown that $$ g(t)\le C(g(t+1)-g(t)), $$ or in other words \begin{equation}\label{la, ca change encore-new5} g(t)\le \theta g(t+1) \end{equation} with $\theta=\frac{C}{1+C}\in{}]0,1[$.
We iterate inequality \eqref{la, ca change encore-new5} using the sequence $t_n=n+\ell_0$, $n=0,\ldots,\lfloor \ell-\ell_0\rfloor$. Obviously $$g(\ell_0)=g(t_0)\leq \theta^ng(t_n)$$ for all such $n$, and in particular for the last one, $$g(\ell_0)\leq\theta^{\lfloor \ell-\ell_0\rfloor}g(t_{\lfloor \ell-\ell_0\rfloor})\le \theta^{\ell-\ell_0-1}g(\ell)\leq C\theta^{-\ell_0-1}e^{\ell\ln \theta}\ell^r,$$ with $\ln\theta<0$. Now, for all $r$, we can pick $\alpha>0$ such that $\ln\theta<-p\alpha<0$ and $e^{\ell\ln \theta}\ell^r\le e^{-p\alpha \ell}$ for $\ell$ large enough, which completes the proof since $\nabla' u_\infty=0$.
$\square$\par\medbreak
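The last step of the proof, absorbing the polynomial factor $\ell^r$ into the exponential, can be illustrated numerically with sample values of $\theta$, $p$, $r$; any $\alpha\in(0,-\ln\theta/p)$ works (an illustration only):

```python
import math

# With ln(theta) < -p*alpha < 0, the quantity theta^l * l^r * e^{p*alpha*l}
# tends to 0, so theta^l * l^r <= e^{-p*alpha*l} for l large enough.
theta, p, r = 0.7, 3.0, 5.0
alpha = -math.log(theta) / (2 * p)      # midpoint choice in (0, -ln(theta)/p)
assert math.log(theta) < -p * alpha < 0

def ratio(l):
    return theta ** l * l ** r * math.exp(p * alpha * l)

assert ratio(500) < 1e-12               # the bound holds with room to spare
assert ratio(1000) < ratio(500)         # and keeps improving as l grows
```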
Theorem \ref{th4} applies to energies of the form $F(\xi)=F'(\xi')+F''(\xi'')$, for instance. We recover in particular the known result for the case of the 2-Laplacian. {See also the monograph \cite{[C2new]} for exponential estimates in this context.}
\section{Extension to the vectorial case}
We have written everything so far in the context of a scalar problem, \emph{i.e.}, the functions $u_\ell$ are scalar-valued. All previous developments only made use of the minimization problem, under various convexity assumptions. Now clearly, absolutely nothing is changed if we consider instead vector-valued problems in the calculus of variations, with functions $u_\ell$ taking their values in some ${\mathbb R}^N$, if the energies are supposed to satisfy the same growth, coercivity and convexity assumptions as before, and the same convergence results hold true.
Unfortunately, in the vectorial case of the calculus of variations, the relevant condition that guarantees lower semicontinuity of the energy functional is not convexity, but a much weaker condition such as quasiconvexity or, for energies that can take the value $+\infty$ as is the case in nonlinear elasticity, polyconvexity, see \cite{[D]}. Indeed, convexity is not suitable in nonlinear elasticity for well-known modeling reasons. This explains why we have striven to use as little convexity as possible (in some sense) at any given point in the sequence of arguments. This comment should however be mitigated by the fact that some instances of our uses of convexity will also work with rank-1-convexity, which is a reasonable assumption in the vectorial case. There are also notions of strict uniform quasiconvexity that may apply, see \cite{Evans}.
The fact that the Euler-Lagrange equation is not available in nonlinear elasticity is also an incentive to try and only use the minimization problem. Now, it is at this point unclear to us how to attack the elongation problem in such nonconvex vectorial cases, since we still heavily rely on (strict uniform) convexity at crucial points of the proofs. Moreover, the Dirichlet boundary condition considered here is not necessarily the most interesting one in the context of nonlinear elasticity, in particular if we have the Saint Venant principle in mind.
Even the potential limit problem is not so clear. In another dimension reduction context, when considering a body whose thickness goes to zero, and with different boundary conditions, it can be seen that quasiconvexity is not conserved through an ``algebraic'' formula of the kind found here, and that a relaxation step is necessary, see for instance \cite{[L-R]}. Physically, this is due to the possibility of crumpling such a thin body. A similar phenomenon may quite possibly happen here, but maybe not in the same fashion.
To the best of our knowledge, the nonconvex vectorial case remains open.
\end{document}
\begin{document}
\title{
The Langevin Noise Approach for Lossy Media and the Lossless Limit}
\author{George W. Hanson} \email[]{george@uwm.edu} \address{Department of Electrical Engineering, University of Wisconsin-Milwaukee, 3200 N. Cramer St., Milwaukee, Wisconsin 53211, USA}
\author{Frieder Lindel} \address{Physikalisches Institut, Albert-Ludwigs-Universit\"{a}t Freiburg, Hermann-Herder-Stra\ss e 3, 79104 Freiburg, Germany}
\author{Stefan Yoshi Buhmann} \email[]{stefan.buhmann@physik.uni-freiburg.de} \address{Physikalisches Institut, Albert-Ludwigs-Universit\"{a}t Freiburg, Hermann-Herder-Stra\ss e 3, 79104 Freiburg, Germany}
\date{\today }
\begin{abstract} The Langevin noise approach for quantization of macroscopic electromagnetics for three-dimensional, inhomogeneous environments is compared with normal mode quantization. Recent works on the applicability of the method are discussed, and several examples are provided showing that for closed systems the Langevin noise approach reduces to the usual cavity mode expansion method when loss is eliminated.
\end{abstract}
\maketitle
\section{Introduction}
Methods for the study of the quantum properties of light, and the interaction of quantized light with atoms and other multilevel systems, were initially developed for vacuum. The observation of Purcell in 1946 that the spontaneous emission rate of an atom was dependent on the atom's environment \cite{Purcell} was a motivating factor for the study of how cavity materials affect quantized light. The incorporation of simplified models of materials (lossless, dispersionless dielectrics, perfect metals) is accommodated in quantum models in a fairly straightforward manner \cite{DM}. However, the Kramers-Kronig relations \cite{Jac} require that absorption is always accompanied by dispersion, and vice versa. Whereas in classical electromagnetics, dispersion and absorption are easily accounted for, in macroscopic quantum models this is not the case, since a naive implementation of absorption causes the commutators to vanish at long times, violating the Heisenberg uncertainty principle.
Motivated by the fluctuation--dissipation theorem, macroscopic quantum electrodynamics (QED) \cite{HB}-\cite{B2}---inspired by its nature as the quantum version of classical macroscopic electrodynamics---is a phenomenological, dipolar, fully quantum, macroscopic theory developed to accommodate lossy, dispersive materials and open environments. It has been widely applied to a variety of problems since it is expressed in terms of the Green function, and allows for very general media, including anisotropic, nonreciprocal, and nonlocal materials \cite{Buh}, \cite{RSW}-\cite{Force2}. For inhomogeneous, complex-shaped regions, the Green function can be computed numerically \cite{Cole}. In Ref.~\cite{Phil}, the phenomenological assumptions are derived from a canonical formulation; this approach was later extended to moving media \cite{Horsley12}. The equivalence of the approach with an alternative based on auxiliary fields \cite{0710} was demonstrated explicitly \cite{0711}. A critical assessment is provided in \cite{AD} (see also \cite{AD1}-\cite{AD2}), where a comparison with a generalized Huttner-Barnett approach \cite{HB} (canonical quantization of a bath of oscillators, based on \cite{Hop}) is discussed. Dissipation and dielectric models are also discussed in a wide range of other works, see e.g. \cite{Chew}.
In \cite{AD}, the practical equivalence of the Langevin noise approach (LNA) and Huttner-Barnett descriptions is shown. More precisely, it is shown that in an open system, the material oscillator degrees of freedom included in the standard Langevin noise approach must be augmented by quantized photonic degrees of freedom associated with fluctuating fields coming from infinity and scattered by the inhomogeneities of the medium. If space is considered to consist of a uniform background having some small absorption, the free fields coming from infinity are absorbed and the standard Langevin noise approach applies. However, it is often of interest to model finite regions of space having nonabsorbing materials. In \cite{AD} a scheme is developed considering a finite region of space (which may be vacuum), surrounded by a weakly absorbing/dispersive medium $\varepsilon _{\text{inf}}$ that extends to infinity, and fluctuating polarization currents in $\varepsilon_{\text{inf}}$\ generate the missing free fields, in which case the Huttner-Barnett and Langevin noise approaches are shown to be equivalent.
Nevertheless, questions about the validity of the Langevin noise approach remain \cite{DLGJ} -\cite{DGJ}, particularly, concerning various limiting procedures such as assuming the material region of interest shrinks to zero, or the limit of a lossless material is taken. In this work, we compare the Langevin noise approach with the standard cavity normal mode approach, which we refer to as normal-mode QED (NMQED) in the following, which is valid for media characterized by Hermitian permittivity tensors (lossless, and therefore, nondispersive). Although it is known that the Langevin noise approach recovers various quantities correctly, such as the atomic spontaneous decay rate, here we show for several explicit examples that the Langevin noise approach results in exactly the same formulation (final equations) as the normal-mode QED, although the former allows for much more general materials than the latter. Several possible geometries may be envisioned: 1) finite-size, PEC-wall cavities (i.e., closed systems) containing lossless inhomogeneous media, 2) same as (1) but for lossy, dispersive media, 3) large-cavity limit cavities containing lossless inhomogeneous media, 4) same as (3) but for lossy inhomogeneous media, 5) open systems, which admit loss even when the materials themselves are lossless. Cases (3) and (4) are actually subsets of (1) and (2); in the former, plane-wave eigenfunctions are used, whereas in the latter, more general cavity eigenfunctions are used. For (1), normal-mode QED is standard, often with homogeneous media (e.g., vacuum). The Langevin noise approach does not apply to Case (1) directly, but can be applied to Case (2), the lossy version. Here, we show that the Langevin noise approach recovers exactly the normal-mode QED equations for several problems considered in the lossless limit (i.e., as Case (2) reduces to Case (1)). 
For Case (3), the normal-mode QED is often used for homogeneous environments (utilizing discrete plane-wave mode functions to represent the actual mode continuum). Again, the Langevin noise approach can not be applied directly to Case (3), although it applies to Case (4) and again recovers the normal-mode QED result in the lossless limit. In fact, the resulting equations from the Langevin noise approach, e.g., the density operator or population evolution, are easily converted to the normal-mode QED (and, sometimes, vice-versa) using a simple Green function relation. For the study of non-absorbing materials, we point out the need to retain dissipation in the Langevin noise approach model until the final steps of the calculation, at which point the lossless limit can be taken. Similarly, if, say, the medium inhomogeneities vanish (e.g., the structure of interest, such as a metal resonator, shrinks to zero size), that limit must be taken at the end of the development.
Open systems, case (5), cannot be modeled using cavity normal modes, but they can be modeled using the Langevin noise approach (in the references cited above, it is inherently a system-bath approach). For open systems, a quasinormal mode quantization (also based on a Langevin noise model) is a useful and natural approach for arbitrarily lossy open system modes \cite{SH1}, and implements a formulation akin to the standard modal approach, but for open lossy systems. An advantage of quasinormal modes beyond the Langevin noise approach is the ability to explore nonlinear quantum optics at the system level, where it is no longer valid to treat the medium as a bath, e.g. \cite{SH2}--\cite{SH3}.
\section{Basic Relations}
We first consider an environment/reservoir such as a three-dimensional cavity $\Omega\subseteq\mathbb{R}^{3}$ with closed surface $\Sigma$, having a uniform background material characterized by $\varepsilon_{\text{bulk}}$ and containing a region $\Omega_{1}\subseteq\Omega$ inhomogeneously-filled with material characterized by relative permittivity tensor $\mathbf{\varepsilon }_{1}\left( \mathbf{r},\omega\right) $ (this could be, e.g., a plasmonic material). The permittivity for all $\mathbf{r}\in\Omega$ is $\mathbf{\varepsilon}\left( \mathbf{r},\omega\right) $. We will assume the magnetic permeability is the unity tensor, although including a permeability response does not change the presented conclusions. As the notation indicates, we can allow $\Omega_{1}=\Omega$, and $\Omega$ can be finite (e.g., a closed system with surface $\Sigma$ perfectly conducting), or in the large-cavity limit. The geometry is depicted in Fig. 1, including a two-level system located somewhere within $\Omega$. We compare two formulations. \begin{figure}
\caption{ Two-level system in the vicinity of an inhomogeneous region $\Omega_{1}\subseteq\Omega\subseteq\mathbb{R}^{3}$
}
\label{fig1}
\end{figure}
\subsection{Normal-Mode QED Approach}
Normal-mode QED is the usual textbook \cite{QOB1}-\cite{QOB4} and research \cite{RA1}-\cite{RA2} approach for i) closed empty cavities, where $\mathbf{\varepsilon}\left( \mathbf{r},\omega\right) =\mathbf{I}$, with $\mathbf{I}$ the identity operator, ii) closed cavities filled with lossless, dispersionless media, where $\mathbf{\varepsilon}\left( \mathbf{r}\right) $ is a real-valued, Hermitian tensor, and iii) closed cavities homogeneously-filled with lossy media. For the first two cases, classical mode functions $\mathbf{E}_{\mathbf{k}}\left( \mathbf{r}\right) =\mathbf{E} _{\mathbf{k}}\left( \mathbf{r},\omega_{\mathbf{k}}\right) $ can be defined that satisfy \cite{VanB}, \cite{Wubs2} \begin{equation} \nabla\times\nabla\times\mathbf{E}_{\mathbf{k}}\left( \mathbf{r} ,\omega_{\mathbf{k}}\right) =\frac{\omega_{\mathbf{k}}^{2}}{c^{2} }\mathbf{\varepsilon}\left( \mathbf{r}\right) \cdot\mathbf{E}_{\mathbf{k} }\left( \mathbf{r},\omega_{\mathbf{k}}\right) ,\label{efe} \end{equation} subject to boundary conditions on the cavity walls, $\left. \widehat {\mathbf{n}}\left( \mathbf{r}\right) \times\mathbf{E}_{\mathbf{k}}\left( \mathbf{r},\omega_{\mathbf{k}}\right) \right\vert _{\mathbf{r}_{\text{wall}} }=\mathbf{0}$, $\widehat{\mathbf{n}}$ being the unit normal vector to the wall, with eigenfunction orthogonality \cite{VanB} \begin{equation} \int\mathbf{E}_{\mathbf{k}}^{\ast}\left( \mathbf{r},\omega_{\mathbf{k} }\right) \cdot\mathbf{\varepsilon}\left( \mathbf{r}\right) \cdot \mathbf{E}_{\mathbf{k}^{\prime}}\left( \mathbf{r},\omega_{\mathbf{k}^{\prime }}\right) d^{3}\mathbf{r}=\delta_{\mathbf{kk}^{\prime}}. 
\end{equation} Under the restriction of a Hermitian permittivity tensor, and working in the Hilbert space $\mathbf{L}^{2}$ of square-integrable vector functions, the operator $L_{E}:\mathbf{L}^{2}\left( \Omega\right) ^{3}\rightarrow \mathbf{L}^{2}\left( \Omega\right) ^{3}$, $L_{E}\mathbf{x}\equiv \mathbf{\nabla\,\times\,\nabla}\times\mathbf{x}-\frac{\omega_{\mathbf{k}}^{2} }{c^{2}}\mathbf{\varepsilon}(\mathbf{r})\cdot\mathbf{x}$, with boundary condition $B\left( \mathbf{x}\right) =\left. \mathbf{n\times x}\right\vert _{\Sigma}\mathbf{=0}$ or $B\left( \mathbf{x}\right) =\left. \mathbf{n\times \nabla\times x}\right\vert _{\Sigma}\mathbf{=0}$ is self-adjoint (SA) and negative-definite, and the modes form an orthonormal, complete set in the Hilbert space of square-integrable functions \cite{VanB}, \begin{equation} \mathbf{I}\delta\left( \mathbf{r}-\mathbf{r}^{\prime}\right) =
{\displaystyle\sum\nolimits_{\mathbf{k}}}
\mathbf{E}_{\mathbf{k}}\left( \mathbf{r},\omega_{\mathbf{k}}\right) \mathbf{E}_{\mathbf{k}}^{\ast}\left( \mathbf{r}^{\prime},\omega_{\mathbf{k} }\right) \cdot\mathbf{\varepsilon}\left( \mathbf{r}^{\prime}\right) . \end{equation}
The electric field operator in the Schr\"{o}dinger picture is \begin{equation} \widehat{\mathbf{E}}\left( \mathbf{r}\right) ^{\text{NMQED}}=
{\displaystyle\sum\nolimits_{\mathbf{k}}}
\widehat{\mathbf{E}}_{\mathbf{k}}\left( \mathbf{r}\right) +\text{H.c.} ,\label{EFO1} \end{equation} where \begin{equation} \widehat{\mathbf{E}}_{\mathbf{k}}\left( \mathbf{r}\right) =i\sqrt {\frac{\hslash\omega_{\mathbf{k}}}{2\varepsilon_{0}}}\widehat{a}_{\mathbf{k} }\mathbf{E}_{\mathbf{k}}\left( \mathbf{r}\right) \label{EFO2} \end{equation} and where $\widehat{a}_{\mathbf{k}},\widehat{a}_{\mathbf{k}}^{\dag}$ are annihilation and creation operators that satisfy \begin{equation} \left[ \widehat{a}_{\mathbf{k}},\widehat{a}_{\mathbf{k}^{\prime}}\right] =\left[ \widehat{a}_{\mathbf{k}}^{\dagger},\widehat{a}_{\mathbf{k}^{\prime} }^{\dagger}\right] =0,\ \left[ \widehat{a}_{\mathbf{k}},\widehat {a}_{\mathbf{k}^{\prime}}^{\dagger}\right] =\delta_{\mathbf{kk}^{\prime} }.\label{sc} \end{equation} In the Heisenberg picture, (\ref{sc}) become equal-time commutators.
The free-field Hamiltonian is (dropping the zero-point energy) \begin{equation} \widehat{H}^{\text{NMQED}}=
{\displaystyle\sum\nolimits_{\mathbf{k}s}}
\hslash\omega_{\mathbf{k}}\widehat{a}_{\mathbf{k}s}^{\dagger}\widehat {a}_{\mathbf{k}s},\label{CMH} \end{equation} and eigenfunctions of the Hamiltonian are the multimode number (Fock) states \begin{equation} \left\vert n_{1}\right\rangle \left\vert n_{2}\right\rangle \left\vert n_{3}...\right\rangle \equiv\left\vert n_{1},n_{2},n_{3}...\right\rangle =\left\vert \left\{ n_{j}\right\} \right\rangle , \end{equation} which can be obtained from the ground state as \begin{equation} \left\vert n_{1}\right\rangle \left\vert n_{2}\right\rangle ...\left\vert n_{i}\right\rangle ...=\frac{\left( \widehat{a}_{1}^{\dagger}\right) ^{n_{1}}}{\sqrt{n_{1}!}}...\frac{\left( \widehat{a}_{i}^{\dagger}\right) ^{n_{i}}}{\sqrt{n_{i}!}}...\left\vert 0\right\rangle . \end{equation}
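As an illustration, the number-state construction above can be verified numerically with truncated matrix representations of $\widehat{a}$ and $\widehat{a}^{\dagger}$ (a minimal sketch; the truncation dimension and mode number are arbitrary choices):

```python
import math
import numpy as np

def ladder_ops(dim):
    """Truncated matrix representations of the annihilation/creation operators."""
    a = np.diag(np.sqrt(np.arange(1.0, dim)), k=1)   # a|n> = sqrt(n)|n-1>
    return a, a.conj().T

dim = 10
a, adag = ladder_ops(dim)

# [a, a^dag] = 1 holds on the subspace unaffected by the truncation.
comm = a @ adag - adag @ a
assert np.allclose(comm[:dim - 1, :dim - 1], np.eye(dim - 1))

def fock(n):
    """|n> = (a^dag)^n / sqrt(n!) |0>, built by repeated application of a^dag."""
    v = np.zeros(dim)
    v[0] = 1.0                                       # ground state |0>
    for _ in range(n):
        v = adag @ v
    return v / math.sqrt(math.factorial(n))

n = 3
assert np.isclose(np.linalg.norm(fock(n)), 1.0)               # normalized
assert np.allclose(adag @ fock(n), np.sqrt(n + 1) * fock(n + 1))  # sqrt(n+1)|n+1>
```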
For the special case of an optically large vacuum cavity, the cavity mode functions become \begin{equation} \mathbf{E}_{\mathbf{ks}}\left( \mathbf{r}\right) \rightarrow\frac {\mathbf{e}_{\mathbf{k}s}}{\sqrt{V}}e^{i\mathbf{k}\cdot\mathbf{r}},\label{PWEF} \end{equation} which satisfy periodic boundary conditions ($\Omega$ is assumed to be the union of boxes of volume $V$), where $s$ indicates spin (polarization), with $\mathbf{e}_{\mathbf{k}s}$ being an orthonormal set of polarization vectors such that $\mathbf{e}_{\mathbf{k}s}\cdot\mathbf{e}_{\mathbf{k}^{\prime }s^{\prime}}=\delta_{\mathbf{kk}^{\prime}}\delta_{ss^{\prime}}$, satisfying the transversality condition $\mathbf{k}\cdot\mathbf{e}_{\mathbf{k}s}=0$. The polarization vectors form a right-handed coordinate system, $\mathbf{e} _{\mathbf{k}1}\times\mathbf{e}_{\mathbf{k}2}=\mathbf{k/\left\vert \mathbf{k}\right\vert }$. In (\ref{PWEF}), $V$ is a quantization volume such that \begin{equation} \int_{V}\mathbf{E}_{\mathbf{ks}}^{\ast}\left( \mathbf{r}\right) \cdot\mathbf{E}_{\mathbf{k}^{\prime}s^{\prime}}\left( \mathbf{r}\right) d^{3}\mathbf{r}=\delta_{\mathbf{kk}^{\prime}}\delta_{ss^{\prime}}. \end{equation} Note, however, that this is not an open system (truly infinite space), which inherently allows dissipation (photons going to infinity and never coming back). Mathematically, the difference between a large cavity and a true open system is that for the latter, modes must satisfy the Sommerfeld radiation condition, which renders the operator $L_{E}$ non-self-adjoint; the Sommerfeld radiation condition is an outgoing-wave condition, while the adjoint condition is an incoming-wave condition.
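The box orthonormality of the plane-wave modes (\ref{PWEF}) can be checked numerically; the following sketch uses a one-dimensional periodic slice of the quantization box (the period, grid size, and mode indices are arbitrary choices):

```python
import numpy as np

L = 2 * np.pi          # period of the quantization "box" (1D slice for brevity)
N = 2048               # grid points
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N

def mode(n):
    """Box-normalized plane-wave mode e^{ikx}/sqrt(L), k = 2*pi*n/L."""
    k = 2 * np.pi * n / L
    return np.exp(1j * k * x) / np.sqrt(L)

# Orthonormality: the integral of E_n^* E_m over one period equals delta_nm.
for n in range(-2, 3):
    for m in range(-2, 3):
        ip = np.sum(mode(n).conj() * mode(m)) * dx
        expected = 1.0 if n == m else 0.0
        assert abs(ip - expected) < 1e-10
```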
Finally, for case (iii), a cavity homogeneously-filled with lossy media, rather than $L_{E}$, the operator $L\mathbf{x}\equiv\mathbf{\nabla \,\times\,\nabla}\times\mathbf{x}$ can be defined such that eigenfunctions of $L$ satisfy the boundary condition $B\left( \mathbf{x}\right) =\mathbf{0}$, and the resulting operator is SA. The cavity must be homogeneously-filled; material inhomogeneities in piecewise constant media would necessitate boundary conditions $B$ such that $B\neq B^{\ast}$, rendering the problem non-self-adjoint.
In the usual normal-mode QED, the photonic Green function is not explicitly needed, although it implicitly arises in, e.g., atom-atom coupling terms. However, to make connection with the Langevin noise approach, it is important to connect the mode functions $\mathbf{E}_{\mathbf{k}}\left( \mathbf{r},\omega_{\mathbf{k}}\right) $ with the Green tensor, which is defined by \begin{equation} \nabla\times\nabla\times\mathbf{G}\left( \mathbf{r},\mathbf{r}^{\prime },\omega\right) -\frac{\omega^{2}}{c^{2}}\mathbf{\varepsilon }\left( \mathbf{r}\right) \cdot\mathbf{G}\left( \mathbf{r},\mathbf{r} ^{\prime},\omega\right) =\mathbf{I}\delta\left( \mathbf{r}-\mathbf{r} ^{\prime}\right) \end{equation} and satisfies $\mathbf{G(r},\mathbf{r}^{\prime}\mathbf{)}^{\top} =\mathbf{G(r}^{\prime},\mathbf{r)}$. The Green tensor can be expanded as \begin{equation} \mathbf{G}\left( \mathbf{r},\mathbf{r}^{\prime},\omega\right) =
{\displaystyle\sum\nolimits_{\mathbf{k}}}
c^{2}\frac{\mathbf{E}_{\mathbf{k}}\left( \mathbf{r},\omega_{\mathbf{k} }\right) \mathbf{E}_{\mathbf{k}}^{\ast}\left( \mathbf{r}^{\prime} ,\omega_{\mathbf{k}}\right) }{\omega_{\mathbf{k}}^{2}-\omega^{2}}.\label{GTE} \end{equation} The expression (\ref{GTE}) formally encompasses the case of transverse modes, forming a transverse Green function, or could include longitudinal modes as well. It should be emphasized that (\ref{GTE}) is only valid for closed cavities and the three cases discussed, although the Green tensor concept itself extends to dispersive and lossy inhomogeneous media. For certain spatial positions, a quasinormal mode expansion of the Green function is also possible \cite{SH1}--\cite{SH2}.
An important expression relating the Green function and modal summation is obtained by integrating (\ref{GTE}) with respect to frequency and using the Sokhotski--Plemelj (SP) identity \begin{equation} \lim_{\varepsilon\rightarrow0^{+}}\frac{1}{x\pm i\varepsilon}=\text{PV}\left( \frac{1}{x}\right) \mp i\pi\delta\left( x\right) , \end{equation} leading to \begin{equation} \frac{1}{\pi}\int_{0}^{\infty}d\omega\frac{\omega^{2}}{c^{2}}\operatorname{Im} \mathbf{G}\left( \mathbf{r},\mathbf{r}^{\prime},\omega\right) =
{\displaystyle\sum\nolimits_{\mathbf{k}}}
\frac{\omega_{\mathbf{k}}}{2}\mathbf{E}_{\mathbf{k}}\left( \mathbf{r} ,\omega_{\mathbf{k}}\right) \mathbf{E}_{\mathbf{k}}^{\ast}\left( \mathbf{r}^{\prime},\omega_{\mathbf{k}}\right) .\label{p1} \end{equation} This is the key relationship that allows converting between the Langevin noise approach and normal-mode QED, and will be needed in the following. Since the case $\mathbf{r}=\mathbf{r}^{\prime}$ is often needed in field-atom interactions, it is worth noting that in the event of material loss at point $\mathbf{r}$ ($\operatorname{Im}\left( \mathbf{\varepsilon}\left( \mathbf{r},\omega\right) \right) >0$), $\operatorname{Im}\mathbf{G}\left( \mathbf{r},\mathbf{r},\omega\right) \rightarrow\infty$, which is not seen with the transverse Green function/transverse mode expansion.
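The SP identity underlying (\ref{p1}) can be illustrated numerically. For the test function $f(x)=e^{-x^{2}}$, the PV term vanishes by symmetry, so $\int f(x)/(x\pm i\varepsilon)\,dx\rightarrow\mp i\pi f(0)$ as $\varepsilon\rightarrow0^{+}$ (a sketch; the grid and the value of $\varepsilon$ are arbitrary):

```python
import numpy as np

# Sokhotski-Plemelj: 1/(x ± i*eps) -> PV(1/x) ∓ i*pi*delta(x) as eps -> 0+.
# For f(x) = exp(-x^2), the PV term vanishes by oddness of f(x)/x, leaving
# ∓ i*pi*f(0) = ∓ i*pi.
x = np.linspace(-10.0, 10.0, 200001)    # symmetric grid, dx << eps
dx = x[1] - x[0]
f = np.exp(-x**2)
eps = 1e-2                               # small but finite "loss" parameter

for sign, expected in [(+1.0, -1j * np.pi), (-1.0, +1j * np.pi)]:
    integral = np.sum(f / (x + sign * 1j * eps)) * dx
    # finite-eps correction is O(eps), so a loose tolerance is used
    assert abs(integral - expected) < 0.1
```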
\subsection{Langevin Noise Approach}
The Langevin noise approach is developed in detail in \cite{Welsch0}-\cite{B2}, and here we merely use the main results as needed. We now allow a dispersive absorbing (complex-valued) permittivity, with causality requiring $\mathbf{\varepsilon }\left( \mathbf{r,-}\omega\right) =\mathbf{\varepsilon}^{\ast}\left( \mathbf{r,}\omega^{\ast}\right) $. For the Green function, $\mathbf{G}^{\ast }\mathbf{(r},\mathbf{r}^{\prime},\omega\mathbf{)}=\mathbf{G(r},\mathbf{r} ^{\prime},-\omega^{\ast}\mathbf{)}$, and we impose the condition \begin{equation} \mathbf{G(r},\mathbf{r}^{\prime},\omega\mathbf{)}\mathbf{\rightarrow0}\text{ for }\left\vert \mathbf{r-r}^{\prime}\right\vert \rightarrow\infty \end{equation} associated with some material absorption. This is an often-overlooked requirement, which is discussed further in Section \ref{Comments}.
The electric field operator in the Schr\"{o}dinger picture is \begin{equation} \widehat{\mathbf{E}}\left( \mathbf{r}\right) ^{\text{LNA}}=\int_{0}^{\infty }d\omega_{\lambda}\ \widehat{\mathbf{E}}\left( \mathbf{r},\omega_{\lambda }\right) +\text{H.c.},\label{OLNA} \end{equation} where $\omega_{\lambda}$ is a continuum modal frequency (not a Fourier transform frequency), with \begin{align} \widehat{\mathbf{E}}\left( \mathbf{r},\omega_{\lambda}\right) =i\sqrt {\frac{\hslash}{\pi\varepsilon_{0}}} & \frac{\omega_{\lambda}^{2}}{c^{2}}\int d^{3}\mathbf{r}^{\prime}\mathbf{G}\left( \mathbf{r},\mathbf{r}^{\prime },\omega_{\lambda}\right) \label{LNAO}\\ & \cdot\sqrt{\operatorname{Im}\left( \mathbf{\varepsilon}\left( \mathbf{r}^{\prime},\omega_{\lambda}\right) \right) }\cdot\widehat {\mathbf{f}}\left( \mathbf{r}^{\prime},\omega_{\lambda}\right) ,\nonumber \end{align} where $\widehat{\mathbf{f}},\widehat{\mathbf{f}}^{\dag}$ are canonically conjugate field variables, which are continuum bosonic operator-valued vectors of the combined matter-field system that satisfy \begin{align} \left[ \widehat{f}_{k}\left( \mathbf{r},\omega\right) ,\widehat {f}_{k^{\prime}}^{\dag}\left( \mathbf{r}^{\prime},\omega^{\prime}\right) \right] & =\delta_{kk^{\prime}}\delta\left( \omega-\omega^{\prime}\right) \delta\left( \mathbf{r}-\mathbf{r}^{\prime}\right) ,\\ \left[ \widehat{f}_{k}\left( \mathbf{r},\omega\right) ,\widehat {f}_{k^{\prime}}\left( \mathbf{r}^{\prime},\omega^{\prime}\right) \right] & =\left[ \widehat{f}_{k}^{\dag}\left( \mathbf{r},\omega\right) ,\widehat{f}_{k^{\prime}}^{\dag}\left( \mathbf{r}^{\prime},\omega^{\prime }\right) \right] =0. \end{align} Comparing the two approaches, $\int_{0}^{\infty}d\omega_{\lambda}\ \widehat{\mathbf{f}}\left( \mathbf{r},\omega_{\lambda}\right) $ is seen to be the continuous analog of $\sum_{\mathbf{k},s}\mathbf{E}_{\mathbf{k}}\left( \mathbf{r}\right) \widehat{a}_{\mathbf{k}s}$.
More complicated environments, including nonlocal and nonreciprocal media, have also been considered \cite{Buh}, \cite{RSW}-\cite{Force2}. The conclusions described below hold for generally lossy, inhomogeneous, nonreciprocal media.
The free field-matter Hamiltonian is \begin{equation} \widehat{H}^{\text{LNA}}=\int_{0}^{\infty}d\omega\int d\mathbf{r\ } \hslash\omega\widehat{\mathbf{f}}^{\dag}\left( \mathbf{r},\omega\right) \cdot\widehat{\mathbf{f}}\left( \mathbf{r},\omega\right) , \end{equation} which is analogous to (\ref{CMH}). Energy eigenstates of the free Hamiltonian are compositions of $\left\vert n_{i}\left( \mathbf{r},\omega_{\lambda }\right) \right\rangle $ (analogous to $\left\vert \left\{ n\right\} \right\rangle $ in the cavity-mode case), which indicates that the $\lambda^{\text{th}}$ field mode of the nonuniform continuum is populated with $n$ quanta, with vector-valued field component in the $i^{\text{th}}$ direction. As a trivial example, the one-quantum states are obtained from the ground state as \begin{equation} \left\vert 1_{i}\left( \mathbf{r},\omega_{\lambda}\right) \right\rangle =\hat{f}_{i}^{\dagger}(\mathbf{r},\omega_{\lambda})\left\vert \left\{ 0\right\} \right\rangle . \end{equation}
An important relation in developing Langevin noise approach formulations is the ``magic formula" \cite{Welsch1} \begin{align} \frac{\omega^{2}}{c^{2}}\int d^{3}\mathrm{r}^{\prime} & \operatorname{Im} \left( \varepsilon\left( \mathbf{r}^{\prime},\omega\right) \right) \boldsymbol{\mathrm{G}}(\mathbf{r},\mathbf{r}^{\prime},\omega)\cdot \boldsymbol{\mathrm{G}}^{\text{\dag}}(\mathbf{r}_{0},\mathbf{r}^{\prime },\omega)\label{magic1}\\ & =\operatorname{Im}\boldsymbol{\mathrm{G}}(\mathbf{r},\mathbf{r}_{0} ,\omega),\nonumber \end{align} generalized for tensor permittivity as \cite{Buh}, \cite{Hanson} \begin{align} \frac{\omega^{2}}{c^{2}}\int d^{3}\mathrm{r}^{\prime\prime} \boldsymbol{\mathrm{G}}(\mathbf{r},\mathbf{r}^{\prime\prime},\omega) & \cdot\boldsymbol{\mathrm{T}}(\mathbf{r}^{\prime\prime},\omega)\cdot \boldsymbol{\mathrm{T}}^{\dagger}(\mathbf{r}^{\prime\prime},\omega )\cdot\boldsymbol{\mathrm{G}}^{\dag}(\mathbf{r}^{\prime},\mathbf{r} ^{\prime\prime},\omega)\nonumber\\ & =\left( \boldsymbol{\mathrm{G}}(\mathbf{r},\mathbf{r}^{\prime} ,\omega)-\boldsymbol{\mathrm{G}}^{\dagger}(\mathbf{r}^{\prime},\mathbf{r} ,\omega)\right) /2i,\label{magic2} \end{align} where $\boldsymbol{\mathrm{T}}(\mathbf{r},\omega)=\sqrt{\mathrm{Im} \boldsymbol{\varepsilon}(\mathbf{r},\omega)}$ (and valid for nonreciprocal media using $\boldsymbol{\mathrm{T}}(\mathbf{r},\omega)\cdot \boldsymbol{\mathrm{T}}^{\dagger}(\mathbf{r},\omega)=\frac{1}{2i}\left[ \boldsymbol{\varepsilon}(\mathbf{r},\omega)-\boldsymbol{\varepsilon}^{\dagger }(\mathbf{r},\omega)\right] $). The above integrals generally don't need to be evaluated explicitly, but are used in the derivation of system equations; their use removes $\operatorname{Im}\left( \varepsilon\left( \mathbf{r} ^{\prime},\omega\right) \right) $ from the resulting equations, allowing the lossless limit to be subsequently taken.
Furthermore, the correlation relation can be shown to be \cite{Buh} \[
\left\langle 0|\mathbf{E}(\mathbf{r},\omega)\mathbf{E}^{\dagger}
(\mathbf{r},\omega^{\prime})|0\right\rangle =\frac{\hbar k_{0}^{2} }{2\varepsilon_{0}^{2}}N\operatorname{Im}(\mathbf{G}(\mathbf{r},\mathbf{r} ,\omega))\delta\left( \omega-\omega^{\prime}\right) , \] where $N(\omega,T)=2/\left( \mathrm{exp}(\hbar\omega/k_{B}T)-1\right) $ for negative frequencies and $N(\omega,T)=1+2/\left( \mathrm{exp}(\hbar \omega/k_{B}T)-1\right) $ for positive frequencies, where $k_{B}$ is Boltzmann's constant.
Conversion to the time-domain is achieved by changing to the Heisenberg picture, where operators $\widehat{A}$ transform as $\widehat{A}_{H}\left( t\right) =e^{i\widehat{H}_{\mathrm{Sch}}t/\hslash}\widehat{A}_{\mathrm{Sch} }e^{-i\widehat{H}_{\mathrm{Sch}}t/\hslash}$, leading to \begin{align} & \widehat{\mathbf{E}}\left( \mathbf{r},t\right) =\int_{0}^{\infty} d\omega_{\lambda}i\sqrt{\frac{\hslash}{\pi\varepsilon_{0}}}\frac {\omega_{\lambda}^{2}}{c^{2}}\\ & \times\int\mathbf{G}\left( \mathbf{r},\mathbf{r}^{\prime},\omega_{\lambda }\right) \cdot\sqrt{\operatorname{Im}\left( \mathbf{\varepsilon}\left( \mathbf{r}^{\prime},\omega_{\lambda}\right) \right) }\cdot\widehat {\mathbf{f}}\left( \mathbf{r}^{\prime},\omega_{\lambda},t\right) d^{3}\mathbf{r}^{\prime}+\text{H.c.}\nonumber \end{align}
In summary, to compare the two methods, normal-mode QED is the standard method ubiquitous in quantum optics. It is a natural and convenient method to study cavity-QED (e.g., Jaynes--Cummings models), nonclassical light, and many-quanta correlations. It puts the system background (e.g., cavity) on the same footing as the system (e.g., an atom), both being modes/harmonic oscillators. The Langevin noise approach is a system-bath approach that focuses attention on the system (e.g., the atom) while rigorously accounting for the system environment, the latter being relegated to the status of a bath. Although normal-mode QED can be complemented by system-bath decay operators which approximately account for the non-Hermitian (outgoing and incoming) nature of the cavity modes in real systems, the commutation rules assumed are formally only valid for $Q\rightarrow\infty$, a restriction not needed for the Langevin noise approach. In the Langevin noise approach, there is often some confusion about the integration limits and the limit $\operatorname{Im}\left( \varepsilon\left( \mathbf{r} ,\omega\right) \right) \rightarrow0$, discussed further in the following.
\section{Example I: Excited Atom Introduced into a Structured Reservoir -- Non-Markovian Weisskopf-Wigner Analysis\label{IEA}}
As a first example, in this section we consider introducing an excited-state atom at $\mathbf{r}=\mathbf{r}_{0}$, $t=0$ into a structured reservoir \cite{Force2}, comparing the normal-mode QED and Langevin noise approaches in the context of 3D quantization in the limit $\operatorname{Im}\left( \mathbf{\varepsilon }\left( \mathbf{r},\omega\right) \right) \rightarrow0$.
The Hamiltonian operator is \begin{equation} H=H^{\text{NMQED/LNA}}+\hbar\omega_{0}\hat{\sigma}_{+}\hat{\sigma}_{-} -\hat{\mathbf{p}}\cdot\widehat{\boldsymbol{\mathrm{E}}}(\mathbf{r}_{0}), \end{equation} where $\hat{\sigma}_{\pm}$ are the canonically conjugate two-level atomic operators ($\widehat{\sigma}_{+}=\left\vert e\right\rangle \left\langle g\right\vert ,\ \widehat{\sigma}_{-}=\left\vert g\right\rangle \left\langle e\right\vert =\widehat{\sigma}_{+}^{\dagger}$, with $\left\vert e\right\rangle $ and $\left\vert g\right\rangle $ being the excited and ground atomic states, respectively), and $\hat{\mathbf{p}}=\left( \hat{\sigma}_{+}+\hat{\sigma} _{-}\right) \mathbf{\gamma}$ is the dipole operator, where $\mathbf{\gamma}$ is the dipole matrix element, assumed real-valued. The first term in each case is the free Hamiltonian for the field modes (field-matter modes for the Langevin noise approach), the second term is the free Hamiltonian for the dipole, and the last term is the interaction term.
The equation of motion is \begin{equation} \frac{d}{dt}\left\vert \psi\right\rangle =-\frac{i}{\hslash}H\left\vert \psi\right\rangle ,\label{EOM} \end{equation} and in each case the atom-field product states are \begin{align} \left\vert \psi\left( t\right) \right\rangle ^{\text{NMQED}} & =c_{e}\left( t\right) \left\vert e,0\right\rangle +\sum_{\lambda}c_{\lambda}\left( t\right) \left\vert g,1_{\lambda}\right\rangle \label{WFCNMA}\\ \left\vert \psi\left( t\right) \right\rangle ^{\text{LNA}} & =c_{e}\left( t\right) \left\vert e,0\right\rangle \label{WFLNA}\\ & +\int d^{3}\mathbf{r}\int_{0}^{\infty}d\omega_{\lambda}c_{gi}\left( \mathbf{r},\omega_{\lambda},t\right) \left\vert g,1_{i}\left( \mathbf{r} ,\omega_{\lambda}\right) \right\rangle ,\nonumber \end{align} where $\left\vert e,0\right\rangle \equiv\left\vert e\right\rangle \otimes\left\vert \left\{ 0\right\} \right\rangle $ and\ $\left\vert g,1_{i}\left( \mathbf{r},\omega_{\lambda}\right) \right\rangle \equiv\left\vert g\right\rangle \otimes\left\vert \left\{ 1_{i}\left( \mathbf{r},\omega_{\lambda}\right) \right\} \right\rangle $. The interaction Hamiltonian $\hat{\mathbf{p}}\cdot\widehat{\boldsymbol{\mathrm{E}}} (\mathbf{r}_{0})\sim\left( \hat{\sigma}_{+}+\hat{\sigma}_{-}\right) \left( \hat{\boldsymbol{\mathrm{f}}}+\hat{\boldsymbol{\mathrm{f}}}^{\dagger}\right) $ acting on the initial state $\left\vert e,0\right\rangle $ leads to an infinite-dimensional Hilbert space of the set of states $A=\left\{ \left\vert e,0\right\rangle ,\left\vert g,1\right\rangle ,\left\vert e,2\right\rangle ,\left\vert g,3\right\rangle ,\left\vert e,4\right\rangle ,...\right\} $, where the $n>1$ photons could be in the same or different field modes. Here, we truncate the space to consist of \{$\left\vert e,0\right\rangle ,\left\vert g,1\right\rangle $\}, which is equivalent to a rotating wave approximation even when using the full interaction Hamiltonian.
For the normal-mode QED, plugging $\left\vert \psi\left( t\right) \right\rangle ^{\text{NMQED}}$ into the equation of motion and defining \begin{equation} g_{\mathbf{k}}=\mathbf{\gamma}\cdot i\sqrt{\frac{\hslash\omega_{\mathbf{k}} }{2\varepsilon_{0}}}\mathbf{E}_{\mathbf{k}}\left( \mathbf{r}_{0}\right) ,\label{g} \end{equation} multiplying by $\left\langle e,0\right\vert $ and $\left\langle g,1_{\lambda ^{\prime}}\right\vert $, and discarding higher-order terms like $\widehat {a}_{\mathbf{k}s}^{\dag}\left( 0\right) \left\vert g,1_{\lambda }\right\rangle \sim\left\vert g,2_{\lambda}\right\rangle $, leads to \cite{QOB2} \begin{align} \frac{d}{dt}c_{e} & =-ic_{e}\omega_{0}+\frac{i}{\hslash}\sum_{\lambda }g_{\lambda}c_{\lambda}\\ \frac{d}{dt}c_{\lambda} & =\frac{i}{\hslash}c_{e}g_{\lambda}^{\ast} -i\omega_{\lambda}c_{\lambda}. \end{align} Defining slowly-varying amplitudes $c_{es}\left( t\right) =c_{e}\left( t\right) e^{i\omega_{0}t}$ and $c_{\lambda s}\left( t\right) =c_{\lambda }\left( t\right) e^{i\omega_{\lambda}t}$, where $\omega_{0}$ is the energy level transition frequency, we have \begin{equation} c_{\lambda s}\left( t\right) =\frac{i}{\hslash}g_{\lambda}^{\ast}\int _{0}^{t}dt^{\prime}c_{es}\left( t^{\prime}\right) e^{i\left( \omega _{\lambda}-\omega_{0}\right) t^{\prime}} \end{equation} and so the population is obtained by solving the Volterra integral equation of the second kind \begin{equation} \frac{dc_{es}\left( t\right) }{dt}=\int_{0}^{t}D\left( t,t^{\prime}\right) c_{es}\left( t^{\prime}\right) dt^{\prime},\label{VIE} \end{equation} with the kernel \begin{equation} D^{\text{NMQED}}\left( t,t^{\prime}\right) =-\frac{1}{\hslash^{2}} \sum_{\lambda}\left\vert g_{\lambda}\right\vert ^{2}e^{-i\left( \omega_{\lambda}-\omega_{0}\right) \left( t-t^{\prime}\right) }. \end{equation} The Volterra integral equation has been widely utilized in quantum optics, see, e.g., \cite{BF1} -\cite{WW1}, and can accommodate non-Markovian processes. 
The procedure for numerically solving the Volterra integral equation is shown in \cite{Force2}, \cite{NR}. The initial-value condition $c_{es}\left( 0\right) =1$ is assumed, representing an initially excited atom.
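A minimal trapezoidal predictor-corrector sketch of such a Volterra solver (a generic scheme, not the specific one of \cite{Force2}, \cite{NR}) can be validated on the hypothetical constant kernel $D(t,t^{\prime})=-k^{2}$, for which (\ref{VIE}) reduces to $c^{\prime\prime}=-k^{2}c$ with exact solution $c(t)=\cos kt$:

```python
import numpy as np

def solve_volterra(D, T, steps, c0=1.0):
    """Trapezoidal solver for  dc/dt = int_0^t D(t,t') c(t') dt',  c(0) = c0."""
    dt = T / steps
    t = np.linspace(0.0, T, steps + 1)
    c = np.zeros(steps + 1, dtype=complex)
    c[0] = c0
    dcdt = np.zeros_like(c)                  # dc/dt(0) = 0 (empty memory integral)
    for n in range(steps):
        c[n + 1] = c[n] + dt * dcdt[n]       # predictor
        for _ in range(2):                   # corrector iterations
            w = np.full(n + 2, dt)
            w[0] = w[-1] = dt / 2            # trapezoid quadrature weights
            dcdt[n + 1] = np.sum(w * D(t[n + 1], t[:n + 2]) * c[:n + 2])
            c[n + 1] = c[n] + dt * (dcdt[n] + dcdt[n + 1]) / 2
    return t, c

# Constant kernel D = -k^2 gives c'' = -k^2 c, i.e. c(t) = cos(k t) exactly.
k = 1.0
t, c = solve_volterra(lambda t1, t2: -k**2 * np.ones_like(t2), 5.0, 2000)
assert np.max(np.abs(c.real - np.cos(k * t))) < 1e-3
```

The same `solve_volterra` routine accepts any memory kernel `D(t, t')`, e.g. the non-Markovian kernels $D^{\text{NMQED}}$ or $D^{\text{LNA}}$ evaluated on a frequency grid.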
Repeating the same procedure for the Langevin noise approach (details are in \cite{Force2}) leads to (\ref{VIE}) where \cite{Trung}, \cite{QED} \begin{align} D^{\text{LNA}}\left( t,t^{\prime}\right) = & \label{DNLA}\\ -\frac{1}{\hslash\pi\varepsilon_{0}}\int_{0}^{\infty} & d\omega_{\lambda }\ \frac{\omega_{\lambda}^{2}}{c^{2}}\mathbf{\gamma}\cdot\operatorname{Im} \mathbf{G}(\mathbf{r}_{0},\mathbf{r}_{0},\omega_{\lambda})\cdot\mathbf{\gamma }\nonumber\\ & \times e^{-i\left( \omega_{\lambda}-\omega_{0}\right) \left( t-t^{\prime}\right) }\doteqdot D^{\text{NMQED}}\left( t,t^{\prime}\right) ,\nonumber \end{align} using (\ref{magic1}), (\ref{g}) and (\ref{p1}), where $\doteqdot$ indicates equality in the lossless limit of the Langevin noise approach formulation (that is, when (\ref{p1}) holds). The term $\sqrt{\operatorname{Im}\left( \mathbf{\varepsilon}\left( \mathbf{r}^{\prime},\omega_{\lambda}\right) \right) }$ does not appear in the expression for $D^{\text{LNA}}$. Since the Langevin noise approach can accommodate generally lossy, dispersive media, it exactly recovers normal-mode QED as a special case. There is no need to explicitly take the limit as $\operatorname{Im}\left\{ \mathbf{\varepsilon }\left( \mathbf{r},\omega\right) \right\} \rightarrow\mathbf{0}$; one merely computes the Green function assuming lossless media. This is discussed further in Section \ref{Comments}. The Langevin noise approach also applies to open systems, where the Green function accounts for the infinite space. The vacuum limit is obtained merely by using the vacuum Green function.
To recover the familiar Markov result, we set $c_{es}\left( t^{\prime}\right) =c_{es}\left( t\right) $ and use the SP identity $\int_{0}^{\infty}e^{\pm i\left( \omega-\omega_{0}\right) \tau}d\tau=\pi\delta\left( \omega-\omega_{0}\right) \pm i\,\text{PV}\left( \frac{1}{\omega-\omega_{0}}\right) $, so that (\ref{VIE}) can be solved as \begin{equation} c_{es}\left( t\right) =c_{es}\left( 0\right) e^{-\frac{\Gamma}{2}t}e^{i\delta t},\label{e3} \end{equation} and the probability of excited state occupation is $P\left( t\right) =\left\vert c_{es}\left( t\right) \right\vert ^{2}=\left\vert c_{es}\left( 0\right) \right\vert ^{2}e^{-\Gamma t}$. In (\ref{e3}), \begin{align} \Gamma & =\frac{2}{\hslash\varepsilon_{0}}\frac{\omega_{0}^{2}}{c^{2}}\mathbf{\gamma}\cdot\operatorname{Im}\mathbf{G}(\mathbf{r}_{0},\mathbf{r}_{0},\omega_{0})\cdot\mathbf{\gamma},\\ \delta & =\frac{1}{\hslash\pi\varepsilon_{0}}\text{PV}\int_{0}^{\infty }d\omega_{\lambda}\ \frac{\omega_{\lambda}^{2}}{c^{2}}\frac{\mathbf{\gamma }\cdot\operatorname{Im}\mathbf{G}(\mathbf{r}_{0},\mathbf{r}_{0},\omega _{\lambda})\cdot\mathbf{\gamma}}{\left( \omega_{\lambda}-\omega_{0}\right) }, \end{align} where $\Gamma$ is the usual decay rate \cite{Nov}, and for vacuum, $\Gamma^{\text{vac}}=\gamma^{2}\omega_{0}^{3}/\pi\varepsilon_{0}\hslash c^{3} $. Note that here we start with the Green function and obtain the normal-mode result, whereas in \cite{Wubs2} they start with the normal modes and obtain the Green function (albeit for the lossy case).
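For orientation, the scale of the Markovian result can be checked with the vacuum expression quoted above, $\Gamma^{\text{vac}}=\gamma^{2}\omega_{0}^{3}/\pi\varepsilon_{0}\hslash c^{3}$; the dipole moment and wavelength below are illustrative values, not taken from the text:

```python
import numpy as np

# Physical constants (SI)
hbar, eps0, c = 1.0546e-34, 8.854e-12, 2.998e8

# Illustrative (hypothetical) transition: 1 Debye dipole moment at 500 nm.
gamma_dip = 3.336e-30                  # dipole matrix element |gamma| [C*m]
omega0 = 2 * np.pi * c / 500e-9        # transition frequency [rad/s]

# Vacuum decay rate as quoted in the text:
# Gamma^vac = gamma^2 * omega0^3 / (pi * eps0 * hbar * c^3)
Gamma = gamma_dip**2 * omega0**3 / (np.pi * eps0 * hbar * c**3)
assert 1e6 < Gamma < 1e8               # ~10^7 1/s, typical optical-transition scale

# Markovian population decay P(t) = |c_es(t)|^2 = e^{-Gamma t}
t = 1.0 / Gamma                        # one lifetime
P = np.exp(-Gamma * t)
assert abs(P - np.exp(-1.0)) < 1e-12   # population down to 1/e after one lifetime
```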
\section{Example II: Driven Atom in a Structured Reservoir -- Density Operator Analysis\label{IEA2}}
As a second example, we consider an atom in a structured reservoir under the action of an external pump. The derivation follows the familiar route \cite{OQS}, and, for the Langevin noise approach, details are available in \cite{Hanson}. The resulting Schr\"{o}dinger-picture master equation (ME) is, under the Born and Markov approximations, \begin{align} & \frac{d}{dt}\rho\left( t\right) =-\frac{i}{\hslash}\left[ \widehat {H}_{\text{S}},\rho\left( t\right) \right] -\int_{0}^{t}d\tau\left( J_{ph}^{n+1}\left( \tau\right) \widehat{\sigma}_{+}\widehat{\sigma} _{-}\left( -\tau\right) \rho\left( t\right) \right. \nonumber\\ & -J_{ph}^{n+1}\left( \tau\right) \widehat{\sigma}_{-}\left( -\tau\right) \rho\left( t\right) \widehat{\sigma}_{+}-J_{ph}^{n+1}\left( -\tau\right) \widehat{\sigma}_{-}\rho\left( t\right) \widehat{\sigma}_{+}\left( -\tau\right) \nonumber\\ & +J_{ph}^{n+1}\left( -\tau\right) \rho\left( t\right) \widehat{\sigma }_{+}\left( -\tau\right) \widehat{\sigma}_{-}-J_{ph}^{n}\left( -\tau\right) \widehat{\sigma}_{-}\widehat{\sigma}_{+}\left( -\tau\right) \rho\left( t\right) \nonumber\\ & +J_{ph}^{n}\left( -\tau\right) \widehat{\sigma}_{+}\left( -\tau\right) \rho\left( t\right) \widehat{\sigma}_{-}+J_{ph}^{n}\left( \tau\right) \widehat{\sigma}_{+}\rho\left( t\right) \widehat{\sigma}_{-}\left( -\tau\right) \nonumber\\ & \left. -J_{ph}^{n}\left( \tau\right) \rho\left( t\right) \widehat{\sigma}_{-}\left( -\tau\right) \widehat{\sigma}_{+}\right) ,\label{MFE} \end{align} where $H_{\text{S}}=\hslash\left( \omega_{d}-\omega_{L}\right) \widehat{\sigma}_{+}\widehat{\sigma}_{-}+\frac{\hslash\Omega}{2}\left( \widehat{\sigma}_{+}+\widehat{\sigma}_{-}\right) $. For the normal-mode QED, \begin{align} J_{ph}^{n+1}\left( \tau\right) & =
{\displaystyle\sum\nolimits_{\mathbf{k}}}
J_{\mathbf{k}}\left( \overline{n}\left( \omega_{\mathbf{k}}\right) +1\right) e^{-i\left( \omega_{\mathbf{k}}-\omega_{L}\right) \tau}\\ J_{ph}^{n}\left( \tau\right) & =
{\displaystyle\sum\nolimits_{\mathbf{k}}}
J_{\mathbf{k}}\overline{n}\left( \omega_{\mathbf{k}}\right) e^{-i\left( \omega_{\mathbf{k}}-\omega_{L}\right) \tau}\\ J_{\mathbf{k}} & =\frac{\omega_{\mathbf{k}}}{2\hslash\varepsilon_{0} }\mathbf{\gamma}\cdot\mathbf{E}_{\mathbf{k}}\left( \mathbf{r}\right) \mathbf{E}_{\mathbf{k}}^{\ast}\left( \mathbf{r}\right) \cdot\mathbf{\gamma}, \end{align} and for the Langevin noise approach, \begin{align} J_{ph}^{n+1}\left( \tau\right) & =\int_{0}^{\infty}d\omega J_{ph}\left( \omega\right) \left( \overline{n}\left( \omega\right) +1\right) e^{-i\left( \omega-\omega_{L}\right) \tau}\\ J_{ph}^{n}\left( \tau\right) & =\int_{0}^{\infty}d\omega J_{ph}\left( \omega\right) \overline{n}\left( \omega\right) e^{-i\left( \omega -\omega_{L}\right) \tau}\\ J_{ph}\left( \omega\right) & =\frac{\omega^{2}}{c^{2}}\frac{\mathbf{\gamma }\cdot\operatorname{Im}\left( \mathbf{G}\left( \mathbf{r},\mathbf{r} ,\omega\right) \right) \cdot\mathbf{\gamma}}{\pi\hslash\varepsilon_{0}} \end{align} where $\overline{n}$ is the average number of thermal photons, $\overline{n}=\left( e^{\frac{\hslash\omega}{k_{B}T}}-1\right) ^{-1}$.
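The thermal occupation $\overline{n}$ entering the spectral functions above is negligible at optical frequencies and large at microwave frequencies at room temperature; a quick numerical check (the transition frequencies are illustrative choices):

```python
import numpy as np

hbar, kB = 1.0546e-34, 1.381e-23   # SI constants

def n_bar(omega, T):
    """Average thermal photon number (Bose-Einstein occupation)."""
    return 1.0 / (np.exp(hbar * omega / (kB * T)) - 1.0)

T = 300.0                                  # room temperature [K]
omega_opt = 2 * np.pi * 3e8 / 500e-9       # optical transition (500 nm)
omega_mw = 2 * np.pi * 5e9                 # microwave transition (5 GHz)

assert n_bar(omega_opt, T) < 1e-30         # thermal photons negligible (optical)
assert 1000 < n_bar(omega_mw, T) < 1500    # large thermal occupation (microwave)
```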
Using (\ref{p1}), it is easy to show that \begin{equation}
{\displaystyle\sum\nolimits_{\mathbf{k}}}
J_{\mathbf{k}}e^{-i\omega_{\mathbf{k}}\tau}=\int_{0}^{\infty}d\omega J_{ph}\left( \omega\right) e^{-i\omega\tau}, \end{equation} and thus \begin{equation} \frac{d\rho\left( t\right) ^{\text{LNA}}}{dt}\doteqdot\frac{d\rho\left( t\right) ^{\text{NMQED}}}{dt}, \end{equation} and the system evolution is the same for both approaches.
As a special case, if we set $\overline{n}=0$ and turn off the pump, $H_{\text{S}}=\hslash\omega_{d}\widehat{\sigma}_{+}\widehat{\sigma}_{-}$, in which case $\widehat{\sigma}_{\mp}\left( -\tau\right) =\widehat{\sigma} _{\mp}e^{\pm i\omega_{d}\tau}$, we obtain the familiar ME for a single atom interacting with its environment, \begin{align} \frac{d}{dt}\rho & =-i\left( \omega_{d}-\Delta_{d}\right) \left[ \widehat{\sigma}_{+}\widehat{\sigma}_{-},\rho\left( t\right) \right] \\ & +\frac{\gamma\left( \omega_{d}\right) }{2}\left( 2\widehat{\sigma} _{-}\rho\left( t\right) \widehat{\sigma}_{+}-\widehat{\sigma}_{+} \widehat{\sigma}_{-}\rho\left( t\right) -\rho\left( t\right) \widehat{\sigma}_{+}\widehat{\sigma}_{-}\right) , \end{align} where we used the SP identity and where $\gamma\left( \omega_{d}\right) =2\pi J_{ph}\left( \omega_{d}\right) $, $\Delta_{d}=\text{PV}\int_{0}^{\infty}d\omega\, J_{ph}\left( \omega\right) /\left( \omega-\omega_{d}\right) $. The ME for a multi-atom system, allowing for, e.g., the study of entanglement, is also the same for the normal-mode QED and Langevin noise approaches.
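The familiar single-atom ME above can be integrated directly; the following RK4 sketch (with arbitrary parameters) reproduces the expected trace preservation and exponential decay of the excited-state population, $P_{e}(t)=e^{-\gamma t}$:

```python
import numpy as np

# Two-level operators in the {|g>, |e>} basis
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_-  (|g><e|)
sp = sm.conj().T                                 # sigma_+
num = sp @ sm                                    # sigma_+ sigma_-  (|e><e|)

def lindblad_rhs(rho, omega, gamma):
    """Right-hand side of the single-atom Lindblad master equation."""
    H = omega * num
    return (-1j * (H @ rho - rho @ H)
            + gamma / 2 * (2 * sm @ rho @ sp - num @ rho - rho @ num))

# RK4 integration starting from the excited state (arbitrary parameters)
omega, gamma, dt, steps = 5.0, 1.0, 1e-3, 2000
rho = np.array([[0, 0], [0, 1]], dtype=complex)  # rho(0) = |e><e|
for _ in range(steps):
    k1 = lindblad_rhs(rho, omega, gamma)
    k2 = lindblad_rhs(rho + dt / 2 * k1, omega, gamma)
    k3 = lindblad_rhs(rho + dt / 2 * k2, omega, gamma)
    k4 = lindblad_rhs(rho + dt * k3, omega, gamma)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

t = steps * dt
assert abs(np.trace(rho).real - 1.0) < 1e-8               # trace preserved
assert abs(rho[1, 1].real - np.exp(-gamma * t)) < 1e-6    # P_e(t) = e^{-gamma t}
```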
\section{Comments on the Connection Between Normal-Mode QED and Langevin Noise Approaches, and Validity of the Langevin Noise Approach\label{Comments}}
Normal-mode QED is well-founded mathematically, based on canonical quantization and completeness of the eigenfunctions of self-adjoint operators \footnote{From a strict mathematical perspective, self-adjointness is actually not enough, but SA together with having a compact inverse does guarantee the validity of eigenfunction expansions \cite{OT}. This condition is satisfied by typical differential equations arising in electromagnetics and quantum mechanical problems, which have compact integral inverse operators.}. Much of quantum optics is based on electric field operators of the form (\ref{EFO1})-(\ref{EFO2}) using plane-wave eigenfunctions (\ref{PWEF}) (including microscopic models). As more complicated environments have been considered, the eigenfunctions based on (\ref{efe}) have been used. However, all of the aforementioned eigenfunctions only form complete sets in limited settings (closed cavities, usually lossless, dispersionless materials), where material parameters are represented by Hermitian (self-adjoint) tensors. Note that completeness is important, not only for (\ref{p1}), but also for the validity of the operators (\ref{EFO1})-(\ref{EFO2}), which are also eigenfunction expansions.
Two comments are important: 1) Some level of loss must be maintained in the system when using the operator (\ref{OLNA})-(\ref{LNAO}); it is impermissible to let $\operatorname{Im}\left( \mathbf{\varepsilon}\left( \mathbf{r} ,\omega_{\lambda}\right) \right) \rightarrow0$ until after that term drops out from the formulation, typically after using (\ref{magic1}) or (\ref{magic2}). One cannot take this limit in the operator (\ref{OLNA})-(\ref{LNAO}). 2) If in Fig. 1 $\varepsilon_{\text{bulk}}$ is lossless, then it is also impermissible to let the size of the region of interest shrink to zero to implement the vacuum limit (i.e., $\Omega_{1}\rightarrow0$ in Fig. 1) until after using (\ref{magic1}) or (\ref{magic2}), after which the Green function is merely the vacuum Green function for the cavity or open space (if $\varepsilon_{\text{bulk}}$ is lossy, then one can allow the limit $\Omega _{1}\rightarrow0$ at the onset). In the presented examples, using (\ref{p1}), the Langevin noise approach reduces to the normal-mode QED result for closed cavities; alternatively, using (\ref{p1}), the normal-mode QED result can be generalized to involve the Green function, allowing cavities with lossy, dispersive materials, and even open geometries, to be considered. However, this is not a general result (i.e., it does not universally hold).
In a practical sense, lossless materials don't exist, aside from vacuum. Therefore, it is not unreasonable to consider space to be filled with a background medium having perhaps $\operatorname{Re}\left( \varepsilon\right) \simeq1$ and $\operatorname{Im}\left( \varepsilon\right) >0$, into which the actual structure of interest is placed, as depicted in Fig. 1. The Green function accounts for the entire permittivity $\mathbf{\varepsilon}\left( \mathbf{r},\omega\right) $, including the background, and after $\operatorname{Im}\left( \mathbf{\varepsilon}\left( \mathbf{r} ,\omega\right) \right) $ is removed from the formulation using (\ref{magic1})-(\ref{magic2}) and only the Green function remains, one can consider lossless materials.
\subsection{Lossless Limit of the ``Magic Formula'' (\ref{magic1})}
The connection between normal-mode QED and the Langevin noise approach is established by virtue of the conversion formula~(\ref{p1}), showing that normal-mode QED is a special case of the Langevin noise approach in the lossless limit. However, the explicit presence of the factor $\sqrt{\operatorname{Im}\left( \mathbf{\varepsilon}\left( \mathbf{r},\omega\right) \right) }$ in the field expansion~(\ref{LNAO}) indicates that this limit has to be understood strictly as a mathematical limiting procedure in which $\sqrt{\operatorname{Im}\left( \mathbf{\varepsilon}\left( \mathbf{r},\omega\right) \right) }\to 0$ while remaining positive. In fact, the presence of $\sqrt{\operatorname{Im}\left(\mathbf{\varepsilon}\left(\mathbf{r},\omega\right) \right) }$ in the field expansion is an artifact of normalising the bosonic canonically conjugate field variables and is avoided if one instead works with the noise polarisation.
In either case, after evaluating operator dynamics or taking quantum expectation values, one typically arrives at the left hand side of the integral relation~(\ref{magic1}). The right hand side of this formula is obviously finite in the above-defined lossless limit $\operatorname{Im}\epsilon(\mathbf{r},\omega)\to 0+$. At first glance, the left hand side seems to vanish in this limit due to the factor $\operatorname{Im}\left( \mathbf{\varepsilon}\left(\mathbf{r}',\omega\right) \right)$. However, this conclusion is premature: a careful evaluation of the spatial integral reveals a factor cancelling $\operatorname{Im}\left( \mathbf{\varepsilon}\left(\mathbf{r}',\omega\right) \right)$, so that the limit may be taken and yields the same result as the right hand side of the equation.
To illustrate this, consider the case of a bulk medium with permittivity $\varepsilon\left(\omega\right)=\varepsilon_\mathrm{R}\left(\omega\right)+\mathrm{i}\delta$ with $\varepsilon_\mathrm{R}$ real. The respective Green tensor is given by
\begin{multline} \label{B10} \tens{G}^{(0)}(\vec{r},\vec{r}',\omega) =-\frac{1}{3k^2}\,\bm{\delta}(\vec{\rho})
-\frac{\mathrm{e}^{\mathrm{i} k\rho}}{4\pi k^2\rho^3}
\bigl\{\bigl[1-\mathrm{i} k\rho-(k\rho)^2\bigl]\tens{I}\\ -\bigl[3-3\mathrm{i} k\rho-(k\rho)^2\bigr] \vec{e}_\rho\tprod\vec{e}_\rho\bigr\} \end{multline}
with \mbox{$\vec{\rho}=\vec{r}-\vec{r}'$}; \mbox{$\rho=|\vec{\rho}|$}; \mbox{$\vec{e}_\rho=\vec{\rho}/\rho$}, and $k=\sqrt{\varepsilon\left(\omega\right)}\omega/c$ such that $\operatorname{Im}k>0$. In the limit $\vec{r}_0\to\vec{r}$, $\vec{r}_0\neq\vec{r}$, we hence have
\begin{equation} \tens{G}^{(0)}(\vec{r},\vec{r}',\omega)\!\cdot\!\tens{G}^{(0)\dagger}(\vec{r}_0,\vec{r}',\omega) \propto\mathrm{e}^{-2\delta\rho}. \end{equation}
To leading order in $\delta$, this implies
\begin{equation} \int\mathrm{d}^3 \mathrm{r'}\,\tens{G}^{(0)}(\vec{r},\vec{r}',\omega)\!\cdot\!\tens{G}^{(0)\dagger}(\vec{r}_0,\vec{r}',\omega)=\operatorname{O}(1/\delta), \end{equation}
so that
\begin{equation} \int\mathrm{d}^3 \mathrm{r'}\,\operatorname{Im}\varepsilon\left(\omega\right)\tens{G}^{(0)}(\vec{r},\vec{r}',\omega)\!\cdot\!\tens{G}^{(0)\dagger}(\vec{r}_0,\vec{r}',\omega) \end{equation}
remains finite in the limit $\operatorname{Im}\varepsilon\left(\omega\right)\equiv\delta\to0+$.
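The competition between the vanishing prefactor and the diverging integral can be checked numerically in a scalar caricature: the far-field part of the integrand decays like $\mathrm{e}^{-2\kappa\rho}$ with $\kappa\propto\delta$, so the radial integral grows like $1/(2\kappa)$ and the product with $\delta$ stays finite. A minimal sketch, assuming Python and the toy normalisation $\kappa=\delta$ (constants are absorbed):

```python
import math

def radial_integral(kappa, rho_max=10_000.0, n=200_000):
    """Trapezoidal approximation of int_0^rho_max exp(-2*kappa*rho) d rho,
    a scalar caricature of the radial |G|^2 integral."""
    h = rho_max / n
    s = 0.5 * (1.0 + math.exp(-2.0 * kappa * rho_max))
    for i in range(1, n):
        s += math.exp(-2.0 * kappa * i * h)
    return s * h

# The integral is O(1/delta), so delta * integral stays finite as delta -> 0+
for delta in (1e-1, 1e-2, 1e-3):
    print(delta, delta * radial_integral(delta))  # each product is close to 1/2
```

The printed products all cluster near the same finite value, mirroring how the factor $\operatorname{Im}\varepsilon$ is cancelled by the $O(1/\delta)$ growth of the spatial integral.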
In App.~\ref{AppA}, we explicitly demonstrate the validity of the integral relation~(\ref{magic1}) in the lossless limit for the more general case of arbitrary~$\vec{r},\vec{r}_0$.
An alternative way to establish contact with the nonabsorbing case was suggested in Ref.~\cite{AD}. There, the region of interest is surrounded by a strictly lossless region with $\operatorname{Im}\varepsilon\left(\omega\right)=0$ at infinity (or sufficiently far away). It was shown that under such conditions the integral relation (\ref{magic1}) acquires an additional term
\begin{align} \frac{\omega^{2}}{c^{2}}\int_{\Omega}d^{3}\mathbf{r}^{\prime} & \operatorname{Im}\left( \varepsilon\left( \mathbf{r}^{\prime},\omega\right) \right) \boldsymbol{\mathrm{G}}(\mathbf{r},\mathbf{r}^{\prime},\omega )\cdot\boldsymbol{\mathrm{G}}^{\text{\dag}}(\mathbf{r}_{0},\mathbf{r}^{\prime },\omega)\label{magic3}\\ & +
{\displaystyle\oint\nolimits_{\Sigma}}
d^{2}\mathbf{r}^{\prime}\mathbf{F}\left( \mathbf{r}^{\prime},\mathbf{r} ,\mathbf{r}_{0}\right) =\operatorname{Im}\boldsymbol{\mathrm{G}} (\mathbf{r},\mathbf{r}_{0},\omega),\nonumber \end{align} where \begin{align}
{\displaystyle\oint\nolimits_{\Sigma}}
d^{2}\mathbf{r}^{\prime}\mathbf{F}\left( \mathbf{r}^{\prime},\mathbf{r} ,\mathbf{r}_{0}\right) & =\frac{\omega}{c}\sqrt{\varepsilon_{\text{bulk}}}
{\displaystyle\oint\nolimits_{\Sigma}}
d^{2}\mathbf{r}^{\prime}\mathbf{G}^{\text{T}}\left( \mathbf{r}^{\prime },\mathbf{r}\right) \label{magic4}\\ & \cdot\mathbf{R}\times\mathbf{R}\times\mathbf{G}^{\ast}\left( \mathbf{r}^{\prime},\mathbf{r}_{0}\right) \text{,}\nonumber \end{align} and $\Sigma$ is the bounding surface far from the system in question. For an absorbing (possibly vanishingly low-loss) background medium $\varepsilon_{\text{bulk}}$, the Green tensor vanishes on $\Sigma$,
and the surface contribution vanishes accordingly. This is commensurate with the requirement $\mathbf{G}(\mathbf{r},\mathbf{r}^{\prime},\omega)\rightarrow 0$ for $\left\vert \mathbf{r}-\mathbf{r}^{\prime}\right\vert \rightarrow\infty$. Thus, one must retain material absorption of the background environment if (\ref{magic1}) or (\ref{magic2}) is to be used,
to ensure that no boundary contribution arises. Physically, one could argue that the assumption of a background environment without at least some small amount of absorption is generally a fiction anyway, aside perhaps from evacuated superconducting chambers. Alternatively, Ref.~\cite{AD} shows that, for a lossless interior region, replacing the missing free incident field with polarization currents at infinity and including the boundary term brings the Langevin noise approach into accordance with the Huttner--Barnett result, so that one
recovers the usual Langevin noise approach.
\section{Conclusions}
The Langevin noise approach for quantization of macroscopic electromagnetics for three-dimensional, inhomogeneous environments has been compared with the usual normal mode quantization in quantum optics. The conditions of validity of the normal mode expansion were discussed, and it was shown using several examples that the Langevin noise approach reduces exactly to the normal mode expansion formulation in the lossless limit. Conditions on applying the Langevin noise approach to finite structures were also discussed.
\section*{Acknowledgments}
The author gratefully acknowledges discussions with Stephen Hughes.
This work was supported by the German Research Foundation (DFG, Grant BU 1803/3-1).
\appendix
\section{Lossless limit for the bulk case} \label{AppA}
In this appendix we explicitly show that the ``magic formula'' in Eq.~\eqref{magic1} holds also in the limit of lossless media for the case of a single bulk dielectric material described by $\epsilon(\vec{r},\omega) = \epsilon(\omega)$. This means we show that \begin{multline} \label{magicApp} \lim_{\mathrm{Im}[\epsilon(\omega)]\to 0+}\hspace{-0.1cm}\frac{\omega^{2}}{c^{2}}\hspace{-0.2cm}\int \hspace{-0.1cm}d^{3}\mathrm{r}^{\prime} \operatorname{Im} \left( \varepsilon\left( \mathbf{r}^{\prime},\omega\right) \right) \\ \times \boldsymbol{\mathrm{G}}^{(0)}(\mathbf{r},\mathbf{r}^{\prime},\omega)\cdot \boldsymbol{\mathrm{G}}^{(0)\ast}(\mathbf{r}^{\prime },\mathbf{r}_{0},\omega)\\
=\,\,\,\,\,\,\, \mathclap{\lim_{\mathrm{Im}[\epsilon(\omega)]\to 0+}} \,\,\,\,\,\,\operatorname{Im}\boldsymbol{\mathrm{G}}^{(0)}(\mathbf{r},\mathbf{r}_{0} ,\omega). \end{multline} Here, $\boldsymbol{\mathrm{G}}^{(0)}$ is the bulk Green tensor; note that, compared to Eq.~\eqref{magic1}, we have already used that the Green tensor of a bulk isotropic dielectric obeys Onsager reciprocity, i.e., $G_{ij}^{(0)}(\vec{r},\vec{r}^\prime) = G_{ji}^{(0)}(\vec{r}^\prime,\vec{r})$. We will show that Eq.~\eqref{magicApp} holds by using the bulk Green tensor $\tens{G}^{(0)}$ in its $(2+1)$-dimensional decomposition \cite{B1} \begin{multline} \label{eq:G0} \tens{G}^{(0)}(\vec{r},\vec{r}_0, \omega) = - \frac{1}{ k^2} \delta^3(\vec{r} - \vec{r}_0) \vec{e}_z \vec{e}_z \\
+ \frac{i}{8\pi^2} \int \mathrm{d}^2 k_\parallel \frac{\mathrm{e}^{i\vec{k}_\parallel \cdot (\vec{r}-\vec{r}_0)}}{k^\perp}
\sum_{\sigma=s,p} \Big[ \vec{e}_{\sigma+}\vec{e}_{\sigma+} \mathrm{e}^{i k^\perp(z-z_0)} \theta(z - z_0) \\ +\vec{e}_{\sigma-}\vec{e}_{\sigma-} \mathrm{e}^{-i k^\perp(z-z_0)} \theta(z_0 - z)\Big]. \end{multline} Here, $k^\perp = \sqrt{k^2-k_\parallel^2}$, $k= \sqrt{\epsilon(\omega)}\omega/c$, and we have defined the polarisation vectors \begin{align} \label{eq:Pol1} \vec{e}_{s \pm} = \frac{1}{k_\parallel}\left( \begin{array}{c} k_y \\ -k_x \\ 0 \end{array} \right), \quad \vec{e}_{p \pm} = \frac{1}{k} \left( \begin{array}{c} k^\perp k_x/k_\parallel \\ k^\perp k_y/k_\parallel \\ k_\parallel \end{array}\right). \end{align} Inserting the first term of the Green's tensor in Eq.~\eqref{eq:G0} into the left hand side of Eq.~\eqref{magicApp} one obtains \begin{multline}
\lim_{\mathrm{Im}[\epsilon(\omega)]\to 0+}\frac{1}{|k|^4} \mathrm{Im}[\epsilon(\omega)] \frac{\omega^2}{c^2} \int \mathrm{d}^3 r^\prime \, \delta^3(\vec{r}-\vec{r}^\prime) \delta^3(\vec{r}^\prime-\vec{r}_0) \\
= \lim_{\mathrm{Im}[\epsilon(\omega)]\to 0+} \frac{1}{|k|^4} \mathrm{Im}[\epsilon(\omega)] \frac{\omega^2}{c^2} \delta^3(\vec{r}-\vec{r}_0) = 0 . \end{multline} For the terms of the left hand side of Eq.~\eqref{magicApp} consisting of the product of a first and a second term of the bulk Green's tensor in Eq.~\eqref{eq:G0} one finds \begin{multline} \label{eq.4}
\lim_{\mathrm{Im}[\epsilon(\omega)]\to 0+} \mathrm{Im}[\epsilon(\omega)] \frac{\omega^2}{c^2} \frac{\mathrm{i}}{8\pi^2|k|^2} \int \mathrm{d}^2 k_\parallel \mathrm{e}^{\mathrm{i} \vec{k}_\parallel(\vec{r}-\vec{r}_0)} k_\parallel \\
\times \bigg\{ \theta(z-z_0) \left[ \frac{\vec{e}_z \vec{e}_{p-}^\ast \mathrm{e}^{-\mathrm{i} k^{\perp \ast}(z-z_0)} }{k k^{\perp \ast}} - \vec{e}_{p+} \vec{e}_z \frac{\mathrm{e}^{\mathrm{i} k^{\perp }(z-z_0)} }{k^\ast k^{\perp}} \right] \\ + \theta(z_0-z) \left[ \frac{ \vec{e}_z \vec{e}_{p+}^\ast \mathrm{e}^{\mathrm{i} k^{\perp \ast}(z-z_0)}}{k k^{\perp \ast}} - \vec{e}_{p-} \vec{e}_z \frac{\mathrm{e}^{-\mathrm{i} k^{\perp}(z-z_0)} }{k^\ast k^{\perp}} \right]\bigg\}. \end{multline} This term again vanishes in the limit of $\mathrm{Im}[\epsilon(\omega)] \to 0$. \\ Hence, we are left with the terms stemming from the second term of the Green's tensor in Eq.~\eqref{eq:G0} only. Inserting the second and third rows of Eq.~\eqref{eq:G0} into the left hand side of Eq.~\eqref{magicApp} one finds \begin{widetext} \begin{multline} \label{eq.1}
\lim_{\mathrm{Im}[\epsilon(\omega)]\to 0+} \mathrm{Im}[\epsilon(\omega)] \frac{\omega^2}{c^2} \frac{1}{16\pi^2} \int_{-\infty}^\infty \hspace{-0.4cm}\mathrm{d} z^\prime \int \!\! \mathrm{d}^2k_\parallel \frac{\mathrm{e}^{\mathrm{i} \vec{k}_\parallel \cdot (\vec{r}-\vec{r}_0)} }{|k^\perp|^2 } \sum_{\sigma } \left[ \vec{e}_{\sigma+}\vec{e}_{\sigma+} \mathrm{e}^{i k^\perp(z-z^\prime)} \theta(z - z^\prime) +\vec{e}_{\sigma-}\vec{e}_{\sigma-} \mathrm{e}^{-i k^\perp(z-z^\prime)} \theta(z^\prime - z)\right] \\ \cdot \left[ \vec{e}_{\sigma-}^\ast\vec{e}_{\sigma-}^\ast \mathrm{e}^{-i k^{\perp \ast}(z^\prime-z_0)} \theta( z^\prime- z_0) +\vec{e}_{\sigma+}^{\ast }\vec{e}_{\sigma+}^{\ast } \mathrm{e}^{i k^{\perp\ast}(z^\prime-z_0)} \theta(z_0 -z^\prime)\right]. \end{multline} Here, we carried out the $\vec{r}^\prime_{\parallel} $ integral, leading to a factor $\delta^2(\vec{k}_\parallel + \vec{k}_\parallel^\prime)$, which in turn has been used to perform the $k^\prime_\parallel$ integral. Finally, we also used \begin{align} \label{rel1} \vec{e}_{\sigma\pm}(-k_\parallel)\vec{e}_{\sigma\pm}(-k_\parallel) & = \vec{e}_{\sigma\mp}(k_\parallel)\vec{e}_{\sigma\mp}(k_\parallel) , \\ \vec{e}_{\sigma\pm}\cdot\vec{e}_{\sigma^\prime\pm}^{\ast } & = \delta_{\sigma \sigma^\prime} ,\\ \vec{e}_{\sigma\pm}\cdot\vec{e}_{\sigma^\prime\mp}^{\ast } & \propto \delta_{\sigma \sigma^\prime}. \end{align} The remaining $z'$ integral can be carried out straightforwardly, and some lengthy algebra shows that Eq.~\eqref{eq.1} can be further reduced to \begin{multline} \label{eq.2}
\lim_{\mathrm{Im}[\epsilon(\omega)]\to 0+} \mathrm{Im}[\epsilon(\omega)] \frac{\omega^2}{c^2} \frac{1}{16\pi^2} \int \mathrm{d}^2k_\parallel \frac{\mathrm{e}^{\mathrm{i} \vec{k}_\parallel \cdot (\vec{r}-\vec{r}_0)} }{|k^\perp|^2 } \\
\times \sum_{\sigma } \left\{ \frac{1}{2\mathrm{Im}[k^\perp]}\left[ \vec{e}_{\sigma+}\vec{e}_{\sigma+}^\ast \mathrm{e}^{i k^\perp(z-z_0)} \theta(z - z_0) +\vec{e}_{\sigma-}\vec{e}_{\sigma-}^\ast \mathrm{e}^{-i k^\perp(z-z_0)} \theta(z_0 - z) \right. \right. \\ \left. \left. + \vec{e}_{\sigma-}\vec{e}_{\sigma-}^\ast \mathrm{e}^{-i k^{\perp\ast}(z-z_0)} \theta(z - z_0) +\vec{e}_{\sigma+}\vec{e}_{\sigma+}^\ast \mathrm{e}^{+i k^{\perp\ast}(z-z_0)} \theta(z_0 - z) \right] \right. \\ - \left. \frac{\mathrm{i} \vec{e}_{\sigma-}\cdot \vec{e}_{\sigma+}^\ast}{2 \mathrm{Re}[k^\perp]} \left[ \vec{e}_{\sigma+}\vec{e}_{\sigma-}^\ast \mathrm{e}^{i k^{\perp }(z-z_0)} \theta(z - z_0) +\vec{e}_{\sigma-} \vec{e}_{\sigma+}^{\ast } \mathrm{e}^{i k^{\perp}(z_0-z)} \theta(z_0 - z) \right. \right. \\ - \left. \left. \vec{e}_{\sigma+}\vec{e}_{\sigma-}^\ast \mathrm{e}^{-i k^{\perp \ast}(z-z_0)} \theta(z - z_0) -\vec{e}_{\sigma-} \vec{e}_{\sigma+}^{\ast } \mathrm{e}^{-i k^{\perp\ast}(z_0-z)} \theta(z_0 - z) \right] \right\}. \end{multline} To derive Eq.~\eqref{eq.2} we also used $\vec{e}_{\sigma-}\cdot \vec{e}_{\sigma+}^\ast = \vec{e}_{\sigma+}\cdot \vec{e}_{\sigma-}^\ast $. Next, we rewrite \begin{align}
\frac{1}{\mathrm{Im}[k^\perp]} & = \frac{|k^\perp|^2}{\mathrm{Im}[k^\perp]\mathrm{Re}[k^\perp]}\mathrm{Re}\bigg[\frac{1}{k^\perp}\bigg] = \frac{2|k^\perp|^2 c^2}{\mathrm{Im}[\epsilon(\omega)]\omega^2}\mathrm{Re}\bigg[\frac{1}{k^\perp}\bigg] , \\
\frac{1}{\mathrm{Re}[k^\perp]} & =- \frac{|k^\perp|^2}{\mathrm{Re}[k^\perp]\mathrm{Im}[k^\perp]}\mathrm{Im}\bigg[\frac{1}{k^\perp}\bigg] = - \frac{2|k^\perp|^2 c^2}{\mathrm{Im}[\epsilon(\omega)]\omega^2}\mathrm{Im}\bigg[\frac{1}{k^\perp}\bigg], \end{align} in order to find that Eq.~\eqref{eq.2} is equivalent to \begin{multline} \label{eq.3}
\lim_{ \epsilon(\omega) = 1+ \mathrm{i} \delta \to 1} \frac{1}{8\pi^2} \int \mathrm{d}^2k_\parallel \mathrm{e}^{\mathrm{i} \vec{k}_\parallel \cdot (\vec{r}-\vec{r}_0)}\\
\times \sum_{\sigma } \left\{ \mathrm{Re} \left[ \frac{1}{k^\perp}\right] \frac{1}{2} \left[ \vec{e}_{\sigma+}\vec{e}_{\sigma+}^\ast \mathrm{e}^{i k^\perp(z-z_0)} \theta(z - z_0) +\vec{e}_{\sigma-}\vec{e}_{\sigma-}^\ast \mathrm{e}^{-i k^\perp(z-z_0)} \theta(z_0 - z) \right. \right. \\ \left. \left. + \vec{e}_{\sigma-}\vec{e}_{\sigma-}^\ast \mathrm{e}^{-i k^{\perp\ast}(z-z_0)} \theta(z - z_0) +\vec{e}_{\sigma+}\vec{e}_{\sigma+}^\ast \mathrm{e}^{+i k^{\perp\ast}(z-z_0)} \theta(z_0 - z) \right] \right. \\ - \left. \vec{e}_{\sigma-}\cdot \vec{e}_{\sigma+}^\ast \mathrm{Im}\left[ \frac{1}{k^\perp}\right] \frac{1}{2\mathrm{i}} \left[ \vec{e}_{\sigma+}\vec{e}_{\sigma-}^\ast \mathrm{e}^{i k^{\perp }(z-z_0)} \theta(z - z_0) +\vec{e}_{\sigma-} \vec{e}_{\sigma+}^{\ast } \mathrm{e}^{i k^{\perp}(z_0-z)} \theta(z_0 - z) \right. \right. \\ - \left. \left. \vec{e}_{\sigma+}\vec{e}_{\sigma-}^\ast \mathrm{e}^{-i k^{\perp \ast}(z-z_0)} \theta(z - z_0) -\vec{e}_{\sigma-} \vec{e}_{\sigma+}^{\ast } \mathrm{e}^{-i k^{\perp\ast}(z_0-z)} \theta(z_0 - z) \right] \right\}. \end{multline} This was the crucial step, since the factor $\mathrm{Im}[\epsilon(\omega)]$ has been cancelled, meaning that we are now ready to take the limit $\mathrm{Im}[\epsilon(\omega)]\to 0 $. In this limit we find that $k\in \mathbb{R}$, which in turn implies that $k^\perp = \sqrt{k^2-k_\parallel^2 }$ is either real or purely imaginary, depending on whether $k_\parallel <k $ or $k_\parallel > k$, respectively. We therefore find that in the second and third rows of Eq.~\eqref{eq.3} we can use \begin{align} \vec{e}_{\sigma\pm }^\ast = \vec{e}_{\sigma\pm } \quad \mathrm{if}\,\,\, k^\perp, k \in \mathbb{R}; \end{align} whereas in the third and fourth rows of Eq.~\eqref{eq.3} we have \begin{align} \vec{e}_{\sigma\pm }^\ast = \vec{e}_{\sigma\mp } \quad \mathrm{if}\,\,\, k^\perp \in \mathrm{i}\mathbb{R}, k \in \mathbb{R}. 
\end{align} Since $\vec{e}_{\sigma\pm } \cdot \vec{e}_{\sigma^\prime \pm } = \delta_{\sigma \sigma^\prime}$, and using Eq.~\eqref{rel1} again we finally find that Eq.~\eqref{eq.3} can be further reduced to \begin{multline} \label{eq.Toshow}
\frac{1}{8\pi^2} \int \mathrm{d}^2 k_\parallel \mathrm{e}^{i\vec{k}_\parallel \cdot (\vec{r}-\vec{r}_0)}
\sum_{\sigma=s,p}\\
\times \left\{ \mathrm{Re}\left[\frac{1}{k^\perp} \right]\frac{1}{2} \Big[ \vec{e}_{\sigma+}\vec{e}_{\sigma+} \mathrm{e}^{i k^\perp(z-z_0)} \theta(z - z_0) +\vec{e}_{\sigma-}\vec{e}_{\sigma-} \mathrm{e}^{-i k^\perp(z-z_0)} \theta(z_0 - z) + \mathrm{c.c.}(k_\parallel \to -k_\parallel )\Big] \right. \\ \left. - \mathrm{Im}\left[\frac{1}{k^\perp} \right]\frac{1}{2i} \Big[ \vec{e}_{\sigma+}\vec{e}_{\sigma+} \mathrm{e}^{i k^\perp(z-z_0)} \theta(z - z_0) +\vec{e}_{\sigma-}\vec{e}_{\sigma-} \mathrm{e}^{-i k^\perp(z-z_0)} \theta(z_0 - z) - \mathrm{c.c.}(k_\parallel \to -k_\parallel )\Big] \right\}. \end{multline} Here, $ \mathrm{c.c.}(k_\parallel \to -k_\parallel )$ denotes adding the complex conjugate of the preceding term which has also been subject to the replacement $k_\parallel \to -k_\parallel $. Equation~\eqref{eq.Toshow} is equivalent to $ \lim_{\mathrm{Im}[\epsilon(\omega)]\to 0+} \mathrm{Im} \tens{G}^{(0)}(\vec{r},\vec{r}^\prime, \omega )$ [cf.~Eq.~\eqref{eq:G0}] as desired.
\end{widetext}
\end{document} |
\begin{document}
\title{Essential Trigonometry Without Geometry}
\author{ John Gresham\\ Bryant Wyatt\\ Jesse Crawford\\ Tarleton State University} \date{\today}
\begin{titlepage} \maketitle \thispagestyle{empty} \setcounter{page}{0} \end{titlepage}
\begin{abstract}
The development of the trigonometric functions in introductory texts usually follows geometric constructions using right triangles or the unit circle. While these methods are satisfactory at the elementary level, advanced mathematics demands a more rigorous approach. Our purpose here is to revisit elementary trigonometry from an entirely analytic perspective. We will give a comprehensive treatment of the sine and cosine functions and will show how to derive the familiar theorems of trigonometry without reference to geometric definitions or constructions. \end{abstract}
\subsection*{Introduction} As we approach trigonometry from an analytic perspective, our understanding deepens and old theorems become new again. For this study, we will assume a familiarity with calculus, differential equations, and real analysis.
\subsection*{Definitions and Basic Properties} We begin by considering the solution of the second-order homogeneous linear differential equation \[ f\,^{\prime \prime }\left( x\right) +f\left( x\right) =0\text{ with } f\left( 0\right) =0\text{ and }f\,^{\prime }\left( 0\right) =1. \] By the Existence and Uniqueness Theorem we know that a unique solution exists [Nagle, Saff, and Snider, p. 171]. If this solution has a power series representation around the ordinary point $x=0$, it must have the form \[ f\left( x\right) =\sum_{n=0}^{\infty }c_{n}x^{n} \] Note that $f\left( 0\right) =c_{0}=0$ and $f\,^{\prime }\left( 0\right) =c_{1}=1$. We also have \begin{align*} f\,^{\prime \prime }\left( x\right) &=\sum_{n=2}^{\infty }\left( n\right) \left( n-1\right) c_{n}x^{n-2} \\ &=\sum_{n=0}^{\infty }\left( n+2\right) \left( n+1\right) c_{n+2}x^{n} \end{align*} Then \[ \sum_{n=0}^{\infty }\left( n+2\right) \left( n+1\right) c_{n+2}x^{n}+\sum_{n=0}^{\infty }c_{n}x^{n}=\sum_{n=0}^{\infty }\left( \left( n+2\right) \left( n+1\right) c_{n+2}+c_{n}\right) x^{n}=0 \] Since this power series is $0$ for all $x$, we get the general recursion relation \[ \left( n+2\right) \left( n+1\right) c_{n+2}+c_{n}=0 \] so that \[ c_{n+2}=-\frac{c_{n}}{\left( n+2\right) \left( n+1\right) }. \] Because $c_{0}=0$, we have for all even indices $2n$ \[ c_{2n}=0 \] Let us now examine the coefficients with odd indices $2n+1$. \begin{align*} c_{1} &=1\text{ \ \ initial condition} \\ c_{3} &=-\frac{1}{3\cdot 2}=-\frac{1}{3!} \\ c_{5} &=-\genfrac{}{}{1pt}{0}{-\frac{1}{3!}}{5\cdot 4}=\frac{1}{5!} \\ c_{7} &=-\genfrac{}{}{1pt}{0}{\frac{1}{5!}}{7\cdot 6}=-\frac{1}{7!} \end{align*} and in general, \[ c_{2n+1}=\left( -1\right) ^{n}\frac{1}{\left( 2n+1\right) !} \] The power series about $x=0$ must have the form \[ \sum_{n=0}^{\infty }\left( -1\right) ^{n}\dfrac{x^{2n+1}}{\left( 2n+1\right) !} \] Using the Ratio Test, it is easy to show that this series converges for all real $x$.
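The recursion $c_{n+2}=-c_{n}/\left( (n+2)(n+1)\right)$ with $c_{0}=0$ and $c_{1}=1$ can be iterated directly. The following sketch (assuming Python) builds the coefficients this way and checks the resulting partial sums against the library sine:

```python
import math

def series_solution(x, N=40):
    """Evaluate the power-series solution of f'' + f = 0, f(0)=0, f'(0)=1,
    using the recursion c_{n+2} = -c_n / ((n+2)(n+1))."""
    c = [0.0] * (N + 1)
    c[1] = 1.0  # initial conditions give c_0 = 0, c_1 = 1
    for n in range(N - 1):
        c[n + 2] = -c[n] / ((n + 2) * (n + 1))
    return sum(cn * x**n for n, cn in enumerate(c))

for x in (0.5, 1.0, 2.0, -3.0):
    assert abs(series_solution(x) - math.sin(x)) < 1e-12
```

Only the odd-index coefficients survive, exactly as the derivation shows, and forty coefficients already reproduce the sine to machine precision on moderate arguments.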
The function represented by this power series is the unique solution of the differential equation
\[ f\,^{\prime \prime }\left( x\right) +f\left( x\right) =0\text{ with } f\left( 0\right) =0\text{ and }f\,^{\prime }\left( 0\right) =1. \]
We call this function the \textbf{sine} function, denoted $\sin x$ or $\sin \left( x\right) $.
\begin{defn}[Sine Function] \[ \sin x=\sum_{n=0}^{\infty }\left( -1\right) ^{n}\dfrac{x^{2n+1}}{\left( 2n+1\right) !} \] \end{defn}
We define the \textbf{cosine} to be the derivative of the sine function.
\begin{defn}[Cosine Function] \[ \cos x=\dfrac{d}{dx}\sum_{n=0}^{\infty }\left( -1\right) ^{n}\dfrac{x^{2n+1} }{\left( 2n+1\right) !}=\sum_{n=0}^{\infty }\left( -1\right) ^{n}\dfrac{ x^{2n}}{\left( 2n\right) !} \]
\end{defn}
The following are elementary consequences of the definitions.
\begin{enumerate} \item $\sin 0=0$
\item $\cos 0=1$
\item The function $\sin x$ is odd because all exponents in its power series are odd.
\item The function $\cos x$ is even because all exponents in its power series are even.
\item The functions $\sin x$ and $\cos x$ are both continuous since they are differentiable.
\item The derivatives of $\sin x$ are cyclic with order four.
\end{enumerate}
\[
\begin{tabular}{|c|c|c|c|c|} \hline $f\left( x\right) $ & $f\,^{\prime }\left( x\right) $ & $f\,^{\prime \prime }\left( x\right) $ & $f\,^{\prime \prime \prime }\left( x\right) $ & $ f\,^{\prime \prime \prime \prime }\left( x\right) $ \\ \hline $\sin x$ & $\cos x$ & $-\sin x$ & $-\cos x$ & $\sin x$ \\ \hline \end{tabular} \]
\subsection*{Key Theorems}
This section presents the Pythagorean and Sine Sum identities, which, along with the smallest positive critical value of $\sin x$, enable the development of several important identities and analytic results in elementary trigonometry.
First, we prove the Pythagorean Identity.
\begin{thm}[Pythagorean Identity]
For all $x$, \[ \sin ^{2}x+\cos ^{2}x=1 \] \end{thm} \begin{proof}[Proof:\nopunct] Consider the derivative of the left side.
\begin{align*} \dfrac{d}{dx}\left( \sin ^{2}x+\cos ^{2}x\right) &=2\sin x\cos x+2\cos x\left( -\sin x\right)\\&=0 \end{align*}
Since the derivative is $0$, $\sin ^{2}x+\cos ^{2}x$ is a constant.
Because $\sin 0=0$ and $\cos 0=1$, this constant must be $1$. \end{proof}
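The identity can be spot-checked directly from the two series definitions; a short sketch, assuming Python and truncating each series at 30 terms:

```python
import math

def sin_series(x, N=30):
    """Partial sum of the defining series for sin x."""
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1) for n in range(N))

def cos_series(x, N=30):
    """Partial sum of the defining series for cos x."""
    return sum((-1)**n * x**(2*n) / math.factorial(2*n) for n in range(N))

# sin^2 x + cos^2 x = 1 at several sample points
for x in (-2.0, -0.3, 0.0, 1.1, 3.0):
    assert abs(sin_series(x)**2 + cos_series(x)**2 - 1.0) < 1e-10
```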
Next, we consider the identity for the sine of the sum of $x$ and $y$. The proof in most elementary trigonometry texts involves a geometric construction with triangles or the unit circle. In our geometry-free approach, we will use only power series.
\begin{thm}[Sine Sum Identity]For all $x$, $y$, \[ \sin \left( x+y\right) =\sin x\cos y+\cos x\sin y \] \end{thm}
\begin{proof}[Proof:\nopunct]
Consider the series expansion \[ \sin \left( x+y\right) =\sum_{n=0}^{\infty }\left( -1\right) ^{n}\dfrac{ \left( x+y\right) ^{2n+1}}{\left( 2n+1\right) !} \] Now examine the general $n^{th}$ term $a_{n}$ of this series using the Binomial Theorem: \begin{align*} a_{n} &=\left( -1\right) ^{n}\dfrac{\left( x+y\right) ^{2n+1}}{\left( 2n+1\right) !}\\ &=\dfrac{\left( -1\right) ^{n}}{\left( 2n+1\right) !}\left( x+y\right) ^{2n+1} \\ &=\dfrac{\left( -1\right) ^{n}}{\left( 2n+1\right) !}\sum_{i=0}^{2n+1} \dbinom{2n+1}{i}x^{2n+1-i}y^{i} \\ &=\dfrac{\left( -1\right) ^{n}}{\left( 2n+1\right) !}\sum_{i=0}^{2n+1} \dfrac{\left( 2n+1\right) !}{i!\left( 2n+1-i\right) !}x^{2n+1-i}y^{i} \\ &=\left( -1\right) ^{n}\sum_{i=0}^{2n+1}\dfrac{1}{i!\left( 2n+1-i\right) !} x^{2n+1-i}y^{i} \end{align*}
This last sum has $2n+2$ terms. We will rewrite it as two sums, each having $n+1$ terms.
\begin{align*} \left( -1\right) ^{n}\sum_{i=0}^{2n+1}\dfrac{1}{i!\left( 2n+1-i\right) !} x^{2n+1-i}y^{i} &=\left( -1\right) ^{n}\underbrace{\sum_{i=0}^{n}\dfrac{ x^{2i+1}y^{2n-2i}}{\left( 2i+1\right) !\left( 2n-2i\right) !}}+\left( -1\right) ^{n}\underbrace{\sum_{i=0}^{n}\dfrac{x^{2n-2i}y^{2i+1}}{\left( 2n-2i\right) !\left( 2i+1\right) !}} \\ &\text{\ \ \ \ \ \ \ \ increasing odd powers of }x\text{ \ \ \ \ \ \ \ decreasing\ even powers of }x \\ &=\dfrac{\left( -1\right) ^{n}}{\left( 2n+1\right) !}\sum_{i=0}^{n}\dfrac{ \left( 2n+1\right) !x^{2i+1}y^{2n-2i}}{\left( 2i+1\right) !\left( 2n-2i\right) !}+\dfrac{\left( -1\right) ^{n}}{\left( 2n+1\right) !} \sum_{i=0}^{n}\dfrac{\left( 2n+1\right) !x^{2n-2i}y^{2i+1}}{\left( 2n-2i\right) !\left( 2i+1\right) !} \\ &=\underbrace{\dfrac{\left( -1\right) ^{n}}{\left( 2n+1\right) !} \sum_{i=0}^{n}\dbinom{2n+1}{2i+1}x^{2i+1}y^{2n-2i}}+\underbrace{\dfrac{ \left( -1\right) ^{n}}{\left( 2n+1\right) !}\sum_{i=0}^{n}\dbinom{2n+1}{2i+1} x^{2n-2i}y^{2i+1}} \\ &\text{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [1] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [2]} \end{align*}
This last line represents the $n^{th}$ term of the expansion of $ \sin \left( x+y\right) $. We now turn our attention to the right side \[ \sin x\cos y+\cos x\sin y \] and consider the series expansion of the term $\sin x\cos y$.
Since the series for $\sin x$ and for $\cos x$ both converge absolutely, we can write $\sin x\cos y$ as the Cauchy product of the two series \[ \sin x\cos y=\sum_{n=0}^{\infty }c_{n} \] where \[ c_{n}=\sum_{i=0}^{n}a_{i}b_{n-i\,}\text{, \ }n=0,1,2,3,... \] and the $a_{i}$, $b_{n-i}$ terms come from the series for $\sin x$ and $\cos x$, respectively [Rudin, p. 63ff]. Let us examine the general term $c_{n}$ of this Cauchy product. \begin{align*} c_{n} &=\sum_{i=0}^{n}\left( -1\right) ^{i}\dfrac{x^{2i+1}}{\left( 2i+1\right) !}\cdot \left( -1\right) ^{n-i}\dfrac{y^{2n-2i}}{\left( 2n-2i\right) !} \\ &=\sum_{i=0}^{n}\left( -1\right) ^{n}\dfrac{x^{2i+1}y^{2n-2i}}{\left( 2i+1\right) !\left( 2n-2i\right) !} \\ &=\dfrac{\left( -1\right) ^{n}}{\left( 2n+1\right) !}\sum_{i=0}^{n}\dfrac{ \left( 2n+1\right) !}{\left( 2i+1\right) !\left( 2n-2i\right) !} x^{2i+1}y^{2n-2i} \\ &=\dfrac{\left( -1\right) ^{n}}{\left( 2n+1\right) !}\sum_{i=0}^{n}\dbinom{ 2n+1}{2n-2i}x^{2i+1}y^{2n-2i} \end{align*} Then the term $c_{n}$ is the odd powers of $x$ in part [1] of the general binomial expansion above.
By switching $x$ with $y$ in the previous equation, we get the general term $d_{n}$ for the Cauchy product of the series for $\sin y$ and $\cos x$. \begin{align*} d_{n} &=\dfrac{\left( -1\right) ^{n}}{\left( 2n+1\right) !} \sum_{i=0}^{n}\dbinom{2n+1}{2n-2i}y^{2i+1}x^{2n-2i} \\ &=\dfrac{\left( -1\right) ^{n}}{\left( 2n+1\right) !}\sum_{i=0}^{n}\dbinom{ 2n+1}{2n-2i}x^{2n-2i}y^{2i+1} \\ &=\dfrac{\left( -1\right) ^{n}}{\left( 2n+1\right) !}\sum_{i=0}^{n}\dbinom{ 2n+1}{2i+1}x^{2n-2i}y^{2i+1} \end{align*} This matches the even powers of $x$ in part [2] of the general binomial expansion.
Therefore
\[a_{n}=c_{n}+d_{n}\] and \[ \sin \left( x+y\right) =\sin x\cos y+\cos x\sin y. \] \end{proof}
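The heart of the proof is the term-by-term identity $a_{n}=c_{n}+d_{n}$, where $d_{n}$ is obtained from $c_{n}$ by swapping $x$ and $y$. This can be verified numerically term by term (a sketch, assuming Python):

```python
import math

def a(n, x, y):
    """n-th series term of sin(x + y)."""
    return (-1)**n * (x + y)**(2*n + 1) / math.factorial(2*n + 1)

def c(n, x, y):
    """n-th Cauchy-product term of sin(x)cos(y); note d_n = c(n, y, x)."""
    return sum((-1)**n * x**(2*i + 1) * y**(2*n - 2*i)
               / (math.factorial(2*i + 1) * math.factorial(2*n - 2*i))
               for i in range(n + 1))

# a_n = c_n + d_n for the first several terms at a sample point
x, y = 0.7, -1.3
for n in range(10):
    assert abs(a(n, x, y) - (c(n, x, y) + c(n, y, x))) < 1e-12
```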
We now turn our attention to a special value, the smallest positive critical value of $\sin x$, a number we will call $Q$.
\begin{thm}[Critical Value]
There exists a smallest positive critical value of $\sin x$, that is, a smallest positive zero of $\cos x$. \end{thm}
\begin{proof}
We have already seen that $\cos 0=1$. Now observe that \[ \cos 2=1-\frac{2^{2}}{2!}+\frac{2^{4}}{4!}-\frac{2^{6}}{6!}+\cdots \] We now write \begin{align*} \cos 2 &=\left( 1-\frac{2^{2}}{2!}+\frac{2^{4}}{4!}-\frac{2^{6}}{6!}\right) +R_{3} \\ &=\left( -\frac{19}{45}\right) +R_{3} \\ &\leq -\frac{19}{45}+\left\vert R_{3}\right\vert \end{align*} The Remainder Theorem for alternating series tells us that \begin{align*} \left\vert R_{3}\right\vert &\leq a_{4}=\frac{2^{8}}{8!}\text{ \ and so} \\ \cos 2 &\leq -\frac{19}{45}+\frac{2}{315}=-\frac{131}{315} \end{align*}
Since $\cos 0>0$ and $\cos 2<0$, by the Intermediate Value Theorem, there is at least one real number $c\in \left( 0,2\right) $ with $\cos c=0$. The nonempty set $\left\{ x|\cos x=0\right\} $ is the inverse image of the closed point set $\left\{ 0\right\}$ under the continuous function $\cos x$. Therefore the set $\left\{ x|\cos x=0\right\} $ is closed. It follows that the set \[
\left\{ x|\cos x=0\right\} \cap \left[ 0,2\right] \] is nonempty, closed, and bounded, and is therefore compact [Willard, p. 120]. It must contain its least element, which we shall call, temporarily, $Q$.
\end{proof}
\textbf{Definition of $Q$} \[
Q=\min \left( \left\{ x|\cos x=0\right\} \cap \left[ 0,2\right] \right) \]
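Since $\cos 0>0$ and $\cos 2<0$, the number $Q$ can be located numerically by bisection on $[0,2]$ using only the series definition of cosine. A sketch, assuming Python (bisection converges to a sign change; numerically it agrees with the least zero):

```python
import math

def cos_series(x, N=40):
    """Partial sum of the defining series for cos x."""
    return sum((-1)**n * x**(2*n) / math.factorial(2*n) for n in range(N))

# Bisection on [0, 2]: cos 0 = 1 > 0 and cos 2 < 0
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if cos_series(mid) > 0.0:
        lo = mid
    else:
        hi = mid
Q = 0.5 * (lo + hi)

assert abs(cos_series(Q)) < 1e-12
print(Q)  # approximately 1.5707963..., the familiar value pi/2
```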
\subsection*{Consequences of the Key Theorems}
The Pythagorean Identity leads directly to the following corollary.
\begin{cor}For all $x$, \[ \left\vert \sin x\right\vert \leq 1\text{ \ and \ }\left\vert \cos x\right\vert \leq 1. \] \end{cor}
\begin{proof}[Proof:\nopunct]
If $\left\vert \sin x\right\vert > 1$, then $\cos^{2} x < 0$ and $\cos x$ is not a real number. Similarly, if $\left\vert \cos x\right\vert > 1$, then $\sin x$ is not a real number. In this study, we are restricting our work to real numbers.
\end{proof}
The next two corollaries follow from the Pythagorean Identity and the special properties of $Q$.
\pagebreak
\begin{cor} $\sin Q=1$ and $\sin x$ has an absolute maximum value of $1$ at $ x=Q$. \end{cor}
\begin{proof}[Proof:\nopunct] Since $Q$ is the smallest positive zero of $\cos x$, and $\cos x$ is even and continuous with $\cos 0=1$, we have $\cos x>0$ for $x\in ( -Q,Q)$. Therefore $\sin x$ is strictly increasing on $(-Q,Q)$. Since $0<Q$, we have $0=\sin 0<\sin Q$ by continuity. From the Pythagorean Identity we know that \[ \sin ^{2}Q+\cos ^{2}Q=1 \] Since $\cos Q=0$, it must be the case that $\sin Q=\pm 1$; because $\sin Q>0$, we conclude $\sin Q=1$. We have already observed that \[ \left\vert \sin x\right\vert \leq 1 \] and therefore $1$ is an absolute maximum of $\sin x$. \end{proof}
\begin{cor} The range of $\sin x$ is $\left[ -1,1\right] $.
\end{cor}
\begin{proof}[Proof:\nopunct] Because $\sin x$ is an odd function, $\sin \left( -Q\right) =-\sin Q=-1$, which is an absolute minimum. The range $\left[ -1,1 \right] $ follows from the continuity of $\sin x$ and the Intermediate Value Theorem.\end{proof} Later we will see that the range of cosine is also $[-1,1]$.
Our next two corollaries follow from the Sine Sum Theorem.
\begin{cor} $\sin \left( x-y\right) =\sin x\cos y-\cos x\sin y$ \end{cor}
\begin{proof}[Proof:\nopunct]
Because $\sin x$ is an odd function and $\cos x$ is even, we have the following:
\begin{align*} \sin \left( x-y\right) &=\sin \left( x+\left( -y\right) \right) \\ &=\sin x\cos \left( -y\right) +\cos x\sin \left( -y\right) \\ &=\sin x\cos y-\cos x\sin y \end{align*} \end{proof}
\begin{cor}$ \sin 2x=2\sin x\cos x$ \end{cor}
\begin{proof}[Proof:\nopunct]
\begin{align*} \sin 2x &=\sin \left( x+x\right) \\ &=\sin x\cos x+\cos x\sin x \\ &=2\sin x\cos x \end{align*}
\end{proof}
We now consider the cofunction rules that follow from the Sine Sum Identity and the properties of $Q$. We will use these later to show that the sine and cosine functions are periodic.
\begin{cor}[Cofunction Rule] $\sin \left( Q-x\right) =\cos x$ \end{cor}
\begin{proof}[Proof:\nopunct] \begin{align*} \sin \left( Q-x\right) &=\sin Q\cos x-\cos Q\sin x \\ &=1\cdot \cos x-0\cdot \sin x \\ &=\cos x \end{align*} \end{proof}
\begin{cor}[Cofunction Rule]$\cos \left( Q-x\right) =\sin x$ \end{cor}
\begin{proof}[Proof:\nopunct] \begin{align*} \cos \left( Q-x\right) &=\sin \left( Q-\left( Q-x\right) \right) \\ &=\sin x \end{align*} \end{proof}
In the following corollaries we complete the sum, difference, and double angle rules.
\begin{cor} \ $\cos \left( x+y\right) =\cos x\cos y-\sin x\sin y$ \end{cor}
\begin{proof}[Proof:\nopunct] \begin{align*} \cos \left( x+y\right) &=\sin \left( Q-\left( x+y\right) \right) \\ &=\sin \left( \left( Q-x\right) -y\right) \\ &=\sin \left( Q-x\right) \cos y-\cos \left( Q-x\right) \sin y \\ &=\cos x\cos y-\sin x\sin y \end{align*} \end{proof}
The following corollaries now follow.
\begin{cor} \ $\cos \left( x-y\right) =\cos x\cos y+\sin x\sin y$ \end{cor}
\begin{proof}[Proof:\nopunct] \begin{align*} \cos \left( x-y\right) &=\cos(x+(-y)) \\ &=\cos x \cos(-y)- \sin x \sin(-y) \\ &=\cos x \cos y + \sin x \sin y \end{align*} \end{proof}
\begin{cor} \ $\cos 2x =2\cos^{2} x-1$ \end{cor}
\begin{proof}[Proof:\nopunct]
\begin{align*} \cos 2x &=\cos \left( x+x\right) \\ &=\cos x\cos x-\sin x\sin x \\ &=\cos^{2}x-\sin^{2}x\\ &=\cos^{2}x-(1-\cos^{2}x)\\ &=2\cos^{2} x-1 \end{align*}
\end{proof}
We have seen that the three key theorems have led to the familiar difference formulas as well as double angle formulas. From these follow the other identities such as half-angle and product-to-sum rules. In particular, we will later need the identity \[ \cos ^{2}x=\frac{1}{2}+\frac{1}{2}\cos 2x \]
\subsection*{Periodicity}
We will need the sine and cosine function values of $4Q$ to show periodicity. Here is a sequence of steps to arrive at this point.
\begin{enumerate} \item $\sin 2Q=2\sin Q\cos Q=2(1)(0)=0$
\item $\cos 2Q=\sin \left( Q-2Q\right) =\sin \left( -Q\right) =-\sin Q=-1$.\\ From this it follows that the range of $\cos x$ is $[-1,1].$
\item $\sin 3Q=\sin \left( Q+2Q\right) =\sin Q\cos 2Q+\cos Q\sin 2Q=-1$
\item $\cos 3Q=\sin \left( Q-3Q\right) =\sin \left( -2Q\right) =-\sin 2Q=0$
\item $\sin 4Q=2\sin 2Q\cos 2Q=0$
\item $\cos 4Q=\sin \left( Q-4Q\right) =\sin \left( -3Q\right)=-\sin\left(3Q\right)=-(-1) =1
$ \end{enumerate}
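These six values can be checked numerically. The following snippet is illustrative only; it uses the library trigonometric functions together with the identification $Q=\pi/2$ established later in this study.

```python
# Illustrative check of the six values above, using the library trig
# functions and the identification Q = pi/2 established later in the study.
import math

Q = math.pi / 2
expected = {2: (0.0, -1.0), 3: (-1.0, 0.0), 4: (0.0, 1.0)}  # k: (sin kQ, cos kQ)
for k, (s, c) in expected.items():
    assert abs(math.sin(k * Q) - s) < 1e-12
    assert abs(math.cos(k * Q) - c) < 1e-12
print("values at 2Q, 3Q, 4Q confirmed")
```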
We now have the machinery needed to prove the periodicity of $\sin x$ and $ \cos x$.
\begin{defn} A function $f\left( x\right) $ is \textit{periodic} if there is a positive number $p$ such that \[ f\left( x+p\right) =f\left( x\right) \] for all $x$. If there is a \textsl{smallest} positive number $p$ for which this holds, then $p$ is called the \textit{period of }$f$. \end{defn}
\begin{thm}[Periodicity of Sine] The sine function is periodic and its period is $4Q$. \end{thm}
\begin{proof}[Proof:\nopunct] We first show that sine is periodic. \begin{align*} \sin \left( x+4Q\right) &=\sin x\cos 4Q+\cos x\sin 4Q \\ &=\sin x\left( 1\right) +\cos x\left( 0\right) \\ &=\sin x \end{align*}
This shows that $\sin x$ is periodic, but does not show that the period is $4Q$. To show that $4Q$ is the period, assume, to the contrary, that there exists a number $R$ such that $0<4R<4Q$ and for all $x$, \[ \sin \left( x+4R\right) =\sin x \] Observe that $0<R<Q$. For $x\in \left( 0,Q\right) $ we have $\cos x>0$ because $\cos 0=1$ and $Q$ is the smallest value with $\cos Q=0$. We also have $ \sin x>0$ since $\sin 0=0$ and $\sin $ is increasing on $\left( 0,Q\right)$. Now examine $\sin Q$: \begin{align*} \sin Q &=\sin \left( Q+4R\right) \\ &=\sin Q\cos 4R+\cos Q\sin 4R \\ &=\cos 4R \\ &=\cos 2\left( 2R\right) \\ &=2\cos ^{2}\left( 2R\right) -1 \end{align*}
Because $\sin Q=1$, \begin{align*} 1 &=2\cos ^{2}\left( 2R\right) -1 \\ 1 &=\cos ^{2}\left( 2R\right) \\ \cos 2R &=1\text{ or }\cos 2R=-1 \end{align*} We now have two cases:\\ \pagebreak
\ Case I: \ $\cos 2R=1.$
\ \ \ \ \ \ \ \ \ \ \ \ \ \ Then by the double angle identity, \begin{align*} 2\cos ^{2}R-1 &=1 \\ \cos ^{2}R &=1 \end{align*}
If $\cos^{2} R=1$, then by the Pythagorean Identity, $\sin R=0$, a contradiction to the fact that $\sin R>0$.
\ Case II: \ $\cos 2R=-1$.
\qquad Then \begin{align*} 2\cos ^{2}R-1 &=-1 \\ \cos R &=0 \end{align*} This last statement contradicts the choice of $Q$ as the smallest positive number in $\left[ 0,2\right] $ with $\cos Q=0$. Therefore such a number $R$ does not exist, and the period of $\sin $ is $4Q$. \end{proof}
\begin{cor}[Periodicity of Cosine]The cosine function is periodic with period $4Q$. \end{cor}
\begin{proof}[Proof:\nopunct] We can write $\cos x$ as \[\cos x=-\sin(x-Q)\] Because horizontal translations and reflections across the $x$-axis do not change the period of a function, $\cos x$ is periodic with period $4Q$. \end{proof}
\subsection*{Connection to Geometry}
With this result we now show the connection between the analytic and geometric approaches to trigonometry.
\begin{thm}[Connection with $\pi$] \[ \int_{0}^{1}\sqrt{1-x^{2}}\, dx=\frac{Q}{2} \] \end{thm}
\begin{proof}[Proof:\nopunct] Use the substitution \[ x=\sin \theta \] with the values
\begin{tabular}{c|c}
$x$ & $\theta $ \\ \hline $0$ & $0$ \\ $1$ & $Q$ \\ \end{tabular} \ so that the integral becomes
\begin{align*} \int_{0}^{Q}\sqrt{1-\sin ^{2}\theta }\cos \theta \,d\theta &=\int_{0}^{Q}\cos ^{2}\theta \,d\theta \\ &=\int_{0}^{Q}\frac{1}{2}\left( 1+\cos 2\theta \right) \,d\theta \\ &=\frac{1}{2}\left[ \theta +\frac{1}{2}\sin 2\theta \right] _{0}^{Q} \\ &=\frac{1}{2}\left[ \left( Q+\frac{1}{2}\sin 2Q\right) -\left( 0+\frac{1}{2} \sin \left( 2\cdot 0\right) \right) \right] \\ &=\frac{1}{2}Q \end{align*} \end{proof}
The integral $\int_{0}^{1}\sqrt{1-x^{2}}\,dx$ represents the quarter-circle area enclosed by the unit circle, the nonnegative $x$-axis, and the nonnegative $y$-axis, and so its value is $\pi/4$. Combining this with the theorem, $\pi/4=Q/2$, and we are led to the conclusion that \[ Q=\frac{\pi}{2}. \]
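As an informal numerical confirmation of this identification (not part of the proof), a simple midpoint-rule quadrature recovers the quarter-circle area $\pi/4$:

```python
# Illustrative midpoint-rule quadrature of the quarter-circle area
# integral of sqrt(1 - x^2) over [0, 1].
import math

def quarter_disk_area(n=200_000):
    h = 1.0 / n
    return h * sum(math.sqrt(1 - ((i + 0.5) * h) ** 2) for i in range(n))

area = quarter_disk_area()
print(area)                                  # approximately 0.7853981... = pi/4
assert abs(area - math.pi / 4) < 1e-6
assert abs(2 * area - math.pi / 2) < 1e-6    # the theorem gives Q = 2 * (Q/2)
```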
Using what we have previously developed about multiples of $Q$, we have a table restating the values for sine and cosine in terms of $\pi $ instead of $Q$.
\begin{center}
\begin{tabular}{c|c|c|r|r|c}
$x$ & $0$ & $\pi /2$ & $\pi $ & $3\pi /2$ & $2\pi $ \\ \hline $\sin x$ & $0$ & $1$ & $0$ & $-1$ & $0$ \\ $\cos x$ & $1$ & $0$ & $-1$ & $0$ & $1$ \\ \end{tabular}
\end{center}
From this follows the usual information about the graphs of the sine and cosine: intervals for positive/negative values, intervals for increasing/decreasing, local (and absolute) maximums/minimums.
Without geometry, we can find the values of sine and cosine of $\dfrac{\pi }{ 4}$, $\dfrac{\pi }{6}$, $\dfrac{\pi }{3}$ using only the sum and difference identities. We include the development of these values in Appendix A. In Appendix B we present the mathematics that connects the sine and cosine functions, defined here as power series, to the trig functions defined using the unit circle.
\subsection*{Pythagorean Identity Revisited}
We conclude this study with the observation that the \textbf{converse} of the Pythagorean Identity also holds.
\begin{thm} If $f:\mathbb{R} \to \mathbb{R}$ is analytic, $f\,^{\prime }\left( 0\right) =1$, $f\left( 0\right) =0$, and $f$ satisfies the Pythagorean Identity \[ \left( f\left( x\right) \right) ^{2}+\left( f\,^{\prime }\left( x\right) \right) ^{2}=1 \] for all $x$, then $f\left( x\right) \equiv \sin x$. \end{thm}
\begin{proof}[Proof:\nopunct] Differentiation of both sides gives \[ 2\left( f\left( x\right) \right) f\,^{\prime }\left( x\right) +2f\,^{\prime }\left( x\right) f\,^{\prime \prime }\left( x\right) =0 \] so that
\[ 2f\,^{\prime }\left( x\right) \left( f\left( x\right) +f\,^{\prime \prime }\left( x\right) \right) =0 \]
Since $f'(0)=1$, and $f$ is analytic, $f'$ is positive on some open interval containing $0$. Therefore, \textbf{on this interval},
\[ f\left( x\right) +f\,^{\prime \prime }\left( x\right) =0, \]
and, by the uniqueness of solutions of the initial value problem $f''+f=0$, $f(0)=0$, $f'(0)=1$, we have $f(x)=\sin x$ on this interval. Moreover, two analytic functions that agree on an open interval agree on all of $\mathbb{R}$, so $f(x)=\sin x$ for all $x$.
\end{proof}
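The converse theorem can be illustrated numerically: integrating the initial value problem $f''+f=0$, $f(0)=0$, $f'(0)=1$ with a classical fourth-order Runge--Kutta step, the quantity $f^{2}+(f')^{2}$ stays at $1$ and $f$ tracks $\sin x$. The sketch below is illustrative only.

```python
# Illustrative numerical companion to the converse theorem: integrate
# f'' + f = 0, f(0) = 0, f'(0) = 1 as the system f' = g, g' = -f with RK4,
# then check the Pythagorean identity and agreement with sin x.
import math

def integrate(x_end=6.0, h=1e-3):
    f, g, x = 0.0, 1.0, 0.0            # f(0) = 0, f'(0) = 1
    for _ in range(round(x_end / h)):
        k1f, k1g = g, -f
        k2f, k2g = g + h / 2 * k1g, -(f + h / 2 * k1f)
        k3f, k3g = g + h / 2 * k2g, -(f + h / 2 * k2f)
        k4f, k4g = g + h * k3g, -(f + h * k3f)
        f += h / 6 * (k1f + 2 * k2f + 2 * k3f + k4f)
        g += h / 6 * (k1g + 2 * k2g + 2 * k3g + k4g)
        x += h
    return x, f, g

x, f, g = integrate()
assert abs(f * f + g * g - 1) < 1e-9   # Pythagorean identity holds numerically
assert abs(f - math.sin(x)) < 1e-9     # f agrees with sin
```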
\subsection*{Summary and Conclusions}
We have developed the theorems and identities of basic trigonometry using the definition of the sine function as the solution, expressed as a power series, of a certain second order linear homogeneous differential equation. The key theorems in this study are the Pythagorean Identity, the Sine Sum Identity, and the special value $Q$, which turned out to be $\pi/2$. From these the other identities follow. The interested reader is referred to Landau, chapter 16, in which the sine and cosine functions are developed from a power series definition. In a brief note, Appendix III in Hardy uses the definition of the inverse tangent function as an integral to lead to the definitions of sine, cosine, and their sum laws.
In a future study we plan to consider a generalization of the sine and cosine functions, and show that versions of the Key Theorems still hold in these settings.
\pagebreak
\subsection*{REFERENCES}
\noindent Hardy, G. H. \ 1921. \textit{A Course of Pure Mathematics, Third Edition}. Mineola, New York: Dover\\ \indent Publications, Inc., 2018. An unabridged republication of the work originally published by\\ \indent the Cambridge University Press, Cambridge.\\
\noindent Landau, Edmund. 1965. \textit{Differential and Integral Calculus}. Providence, Rhode Island: AMS\\ \indent Chelsea Publishing. Reprinted by the American Mathematical Society, 2010, from a translation\\ \indent of Edmund Landau's 1934 \textit{Einf\"{u}hrung in die Differentialrechnung und Integralrechnung}.\\
\noindent Nagle, R. Kent, Edward B. Saff, and Arthur David Snider. 2008. \textit{Fundamentals of Differential Equations}.\\ \indent Boston: Pearson Addison Wesley.\\
\noindent Rudin, Walter. 1964. \textit{Principles of Mathematical Analysis}. New York: McGraw-Hill Book Company.\\
\noindent Willard, Stephen. 1970. \textit{General Topology}. Reading, Massachusetts: Addison-Wesley Publishing Company.
\pagebreak
\subsection*{Appendix A: Trig Functions of Special Angles}
First we consider $\dfrac{\pi }{4}$.
\[ \sin \left( \dfrac{\pi }{4}\right) =\cos \left( \dfrac{\pi }{2}-\dfrac{\pi }{ 4}\right) =\cos \left( \dfrac{\pi }{4}\right) \]
Since \[ 1=\sin ^{2}\left( \dfrac{\pi }{4}\right) +\cos ^{2}\left( \dfrac{\pi }{4} \right) =2\sin ^{2}\left( \dfrac{\pi }{4}\right) \] we obtain \[ \sin \left( \dfrac{\pi }{4}\right) =\dfrac{\sqrt{2}}{2}=\cos \left( \dfrac{ \pi }{4}\right) \]
To find values for $\sin \dfrac{\pi }{6}$, we need the triple-angle identity \[ \sin \left( 3\theta \right) =3\sin \theta -4\sin ^{3}\theta \] This follows from expanding $\sin(2\theta+\theta)$. We can now write \begin{align*} 1 &=\sin \dfrac{\pi }{2} \\ &=\sin \left( 3\cdot \dfrac{\pi }{6}\right) \\ &=3\sin \dfrac{\pi }{6}-4\sin ^{3}\dfrac{\pi }{6} \end{align*} We solve this cubic equation in $\sin \dfrac{\pi }{6}$ to obtain a double solution $\dfrac{1}{2}$ and single solution $-1$. Because $\sin \dfrac{\pi }{6}>0$, \begin{align*} \sin \dfrac{\pi }{6} &=\dfrac{1}{2}\text{, and} \\ \cos \dfrac{\pi }{6} &=\dfrac{\sqrt{3}}{2} \end{align*}
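As a sanity check on this computation, the snippet below (illustrative only) verifies numerically that $\sin(\pi/6)=1/2$ satisfies the triple-angle cubic and that the cubic factors as $(s+1)(2s-1)^{2}$.

```python
# Illustrative check that s = sin(pi/6) = 1/2 solves 3s - 4s^3 = 1, and that
# 4s^3 - 3s + 1 factors as (s + 1)(2s - 1)^2, giving the double root 1/2.
import math

s = math.sin(math.pi / 6)
assert abs(3 * s - 4 * s ** 3 - 1) < 1e-12   # s solves the triple-angle cubic
assert abs(s - 0.5) < 1e-12                  # the positive (double) root

for t in (-2.0, -1.0, 0.0, 0.5, 1.0, 3.0):   # compare the two forms pointwise
    assert abs((4 * t ** 3 - 3 * t + 1) - (t + 1) * (2 * t - 1) ** 2) < 1e-9
print("cubic and factorization confirmed")
```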
Here is a summary table:
\begin{center}
\begin{tabular}{c|c|c|c|c|c}
$x$ & $0$ & $\pi /6$ & $\pi /4$ & $\pi /3$ & $\pi /2$ \\ \hline
$\sin x$ & $0$ & $1/2$ & $\sqrt{2}/2$ & $\sqrt{3}/2$ & $1$ \\ $\cos x$ & $1$ & $\sqrt{3}/2$ & $\sqrt{2}/2$ & $1/2$ & $0$ \\ \end{tabular}
\end{center}
\subsection*{Appendix B: Connection to Unit Circle Trigonometry}
\begin{lem}
If $f$ is continuous on $\left[ a,b\right] $ and strictly increasing on $\left( a,b\right) $, then $f$ is strictly increasing on $ \left[ a,b\right] $.
\end{lem}
\begin{proof}[Proof:\nopunct] We first show that for any $x$ in the interval $\left( a,b\right) $, we must have $f\left( a\right) <f\left( x\right) $. Assume, to the contrary, that there exists $c$, $\ a<c<b$, such that $f\left( a\right) \geq f\left( c\right) $.
Case I: \ $f\left( a\right) >f\left( c\right) $. \ \ Let \[ \varepsilon =\frac{f\left( a\right) -f\left( c\right) }{2} \] Then by right-hand continuity of $f$ at $a$, there exists $\delta $, $ 0<\delta <c-a$, such that if $a<x<a+\delta $ then \[ f\left( a\right) -\varepsilon <f\left( x\right) <f\left( a\right) +\varepsilon \] Then \begin{align*} f\left( x\right) &>f\left( a\right) -\frac{f\left( a\right) -f\left( c\right) }{2} \\ &=\frac{f\left( a\right) +f\left( c\right) }{2} \\ &>\frac{f\left( c\right) +f\left( c\right) }{2} \\ &=f\left( c\right) \end{align*} Since $x$, $c$ are both in $\left( a,b\right) $ \ and $x<a+\delta <a+\left( c-a\right) =c$, we must have $f\left( x\right) <f\left( c\right) $, a contradiction. Therefore it cannot be the case that $f\left( a\right) >f\left( c\right) $ for some $c\in \left( a,b\right) $.
Case II: \ $f\left( a\right) =f\left( c\right) $. Then consider $\frac{a+c}{2}\in \left( a,c\right) \subset \left( a,b\right) $. Since $f$ is strictly increasing on $\left( a,b\right) $, it follows that \[ f\left( \frac{a+c}{2}\right) <f\left( c\right) =f\left( a\right) \] and we have the situation of Case I.
A similar argument shows that if $a<x<b$, then $f\left( x\right) <f\left( b\right) $.
\end{proof}
\begin{thm} The function $\sin x$ is strictly increasing on $\left[ -\frac{\pi }{2},\frac{\pi }{2}\right] $. \end{thm}
\begin{proof}[Proof:\nopunct] By our development and definition of $\frac{\pi }{2}$ as least in $\left[ 0,2\right] $ with $\cos \left( \frac{\pi }{2}\right) =0$ , $\cos x$ is positive on $\left[ 0,\frac{\pi }{2}\right) .$ Since it is an even function, it is positive on $\left( -\frac{\pi }{2},\frac{\pi }{2} \right) $ and therefore $\sin x$ is increasing on $\left( -\frac{\pi }{2}, \frac{\pi }{2}\right) $. The function $\sin x$ is differentiable and therefore continuous at all $x$, and by the lemma must be increasing on $ \left[ -\frac{\pi }{2},\frac{\pi }{2}\right] $.
\end{proof}
We see, then, that the sine function restricted to $\left[ -\frac{\pi }{2}, \frac{\pi }{2}\right] $ is a bijection onto $\left[ -1,1\right] $.
\begin{defn}[Inverse Sine Function]\ The \textit{inverse sine function of $x$}, written here as $\arcsin x$, \[ \arcsin :\left[ -1,1\right] \rightarrow \left[ -\frac{\pi }{2},\frac{\pi }{2}\right], \] is the inverse of the sine function restricted to the domain $\left[ -\frac{\pi }{2}, \frac{\pi }{2}\right] $.
\end{defn}
We now consider the derivative of $\arcsin x$, $-1<x<1$.
\begin{thm} If $-1<x<1$ then $\frac{d}{dx}\arcsin x=\frac{1}{\sqrt{ 1-x^{2}}}$. \end{thm}
\begin{proof}[Proof:\nopunct] If $y=\arcsin x$, $-1<x<1$, then $-\frac{\pi }{2}<y< \frac{\pi }{2}$ and \[ \sin y=x \] Using implicit differentiation, \[ \cos y\frac{dy}{dx}=1 \] and \[ \frac{dy}{dx}=\frac{1}{\cos y} \] What is $\cos y=\cos (\arcsin x)$? By the Pythagorean Identity, \begin{align*} \cos ^{2}y+\sin ^{2}y &=1 \\ \cos ^{2}y+x^{2} &=1 \end{align*} and so $\cos y=\sqrt{1-x^{2}}$ or $\cos y=-\sqrt{1-x^{2}}$. Since $-\frac{ \pi }{2}<y<\frac{\pi }{2}$, we have $\cos y>0$. Thus $\cos y=\sqrt{1-x^{2}}$ and \[ \frac{dy}{dx}=\frac{1}{\sqrt{1-x^{2}}} \] \end{proof}
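A finite-difference check of this derivative formula (illustrative only, using the library $\arcsin$):

```python
# Illustrative finite-difference check of d/dx arcsin x = 1 / sqrt(1 - x^2)
# at a few sample points in (-1, 1).
import math

def deriv_gap(x, h=1e-6):
    numeric = (math.asin(x + h) - math.asin(x - h)) / (2 * h)  # central difference
    analytic = 1 / math.sqrt(1 - x * x)
    return abs(numeric - analytic)

for x in (-0.9, -0.5, 0.0, 0.3, 0.9):
    assert deriv_gap(x) < 1e-6
print("derivative formula confirmed on sample points")
```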
\begin{cor}
If $-1<x<1$, then \[ \arcsin x=\int_{0}^{x}\frac{1}{\sqrt{1-t^{2}}}dt \] \end{cor}
\begin{proof}[Proof:\nopunct]
Both sides vanish at $x=0$ and, by the preceding theorem together with the Fundamental Theorem of Calculus, both have derivative $\frac{1}{\sqrt{1-x^{2}}}$ on $(-1,1)$.
\end{proof}
Does this equation hold for $x=1$ and for $x=-1$?
\begin{thm}
\[ \int_{0}^{1}\frac{1}{\sqrt{1-t^{2}}}dt=\frac{\pi }{2} \] \end{thm}
\begin{proof}[Proof:\nopunct] Let $g\left( x\right) =\displaystyle\int_{0}^{x}\dfrac{dt}{\sqrt{ 1-t^{2}}}$, $0\leq x\leq 1$.
What is the value of $g\left( 1\right) $, an improper integral? First, we know that $g\left( x\right) =\arcsin x$ for $x\in \lbrack 0,1)$ and that \[ \sin \dfrac{\pi }{4}=\cos \left( \dfrac{\pi }{2}-\dfrac{\pi }{4}\right) =\cos \left( \dfrac{\pi }{4}\right) \] Since $\sin ^{2}x+\cos ^{2}x=1$, we have \[ \sin \dfrac{\pi }{4}=\cos \dfrac{\pi }{4}=\frac{\sqrt{2}}{2} \] Therefore \[ g\left( \tfrac{\sqrt{2}}{2}\right) =\dfrac{\pi }{4} \] We are now ready to find $g\left( 1\right) $. \begin{align*} g\left( 1\right) &=\lim_{b\rightarrow 1^{-}}\int_{0}^{b}\frac{dt}{\sqrt{ 1-t^{2}}} \\ &=\int_{0}^{\sqrt{2}/2}\frac{dt}{\sqrt{1-t^{2}}}+\lim_{b\rightarrow 1^{-}}\int_{\sqrt{2}/2}^{b}\frac{dt}{\sqrt{1-t^{2}}} \\ &=g\left( \tfrac{\sqrt{2}}{2}\right) +\lim_{b\rightarrow 1^{-}}\int_{\sqrt{ 2}/2}^{b}\frac{dt}{\sqrt{1-t^{2}}} \\ &=\dfrac{\pi }{4}+\lim_{b\rightarrow 1^{-}}\int_{\sqrt{2}/2}^{b}\frac{dt}{ \sqrt{1-t^{2}}} \end{align*} With the substitution $u=\sqrt{1-t^{2}}$ we obtain for this last integral \begin{align*} \lim_{b\rightarrow 1^{-}}\int_{\sqrt{2}/2}^{b}\frac{dt}{\sqrt{1-t^{2}}} &=\lim_{b\rightarrow 1^{-}}\int_{\sqrt{2}/2}^{\sqrt{1-b^{2}}}\dfrac{1}{u} \cdot \dfrac{-u}{\sqrt{1-u^{2}}}du \\ &=-\lim_{b\rightarrow 1^{-}}\int_{\sqrt{2}/2}^{\sqrt{1-b^{2}}}\dfrac{1}{ \sqrt{1-u^{2}}}du \\ &=-\int_{\sqrt{2}/2}^{0}\dfrac{1}{\sqrt{1-u^{2}}}du \\ &=\int_{0}^{\sqrt{2}/2}\frac{du}{\sqrt{1-u^{2}}} \\ &=g\left( \tfrac{\sqrt{2}}{2}\right) \\ &=\frac{\pi }{4} \end{align*} so that \[ g\left( 1\right) =g\left( \tfrac{\sqrt{2}}{2}\right) +g\left( \tfrac{\sqrt{2} }{2}\right) =\dfrac{\pi }{2} \]
\end{proof}
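The splitting trick in this proof, evaluating $g(1)$ as $2\,g(\sqrt{2}/2)$, has a direct numerical analogue: the integrand is smooth on $[0,\sqrt{2}/2]$, so ordinary quadrature suffices there. The Python sketch below is illustrative only.

```python
# Illustrative numerical analogue of the proof's splitting g(1) = 2 g(sqrt2/2):
# the integrand 1/sqrt(1 - t^2) is smooth on [0, sqrt(2)/2], so a midpoint
# rule works there; doubling gives the improper integral's value pi/2.
import math

def g(x, n=100_000):
    h = x / n
    return h * sum(1 / math.sqrt(1 - ((i + 0.5) * h) ** 2) for i in range(n))

half = g(math.sqrt(2) / 2)
assert abs(half - math.pi / 4) < 1e-8        # g(sqrt2/2) = pi/4
assert abs(2 * half - math.pi / 2) < 1e-8    # g(1) = 2 g(sqrt2/2) = pi/2
```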
\begin{cor} $\displaystyle\int_{0}^{-1}\frac{1}{\sqrt{1-t^{2}}}dt=-\frac{\pi }{2}$ \end{cor}
\begin{proof}[Proof:\nopunct] Since $\frac{1}{\sqrt{1-t^{2}}}$ is an even function, for $0\leq a<1$, we have \[ \int_{-a}^{0}\frac{1}{\sqrt{1-t^{2}}}dt=\int_{0}^{a}\frac{1}{\sqrt{1-t^{2}}} dt \]
and \begin{align*} \int_{0}^{-1}\frac{1}{\sqrt{1-t^{2}}}dt &=-\int_{-1}^{0}\frac{1}{\sqrt{ 1-t^{2}}}dt \\ &=-\lim_{a\rightarrow 1^{-}}\int_{-a}^{0}\frac{1}{\sqrt{1-t^{2}}}dt \\ &=-\lim_{a\rightarrow 1^{-}}\int_{0}^{a}\frac{1}{\sqrt{1-t^{2}}}dt \\ &=-\frac{\pi }{2} \end{align*} \end{proof}
We now complete the connection to unit circle trigonometry.
\begin{thm}If $-1\leq a\leq b\leq 1$ then the arc length of the graph of $y=\sqrt{1-x^{2}}$ from $x=a$ to $x=b$ is \[\arcsin b-\arcsin a.\] \end{thm} \begin{proof}[Proof:\nopunct] First, note that \[ \frac{d}{dx}\sqrt{1-x^{2}}=-\frac{x}{\sqrt{1-x^{2}}} \] Using the arc length formula and the previous result, \begin{align*} \int\nolimits_{a}^{b}\sqrt{1+\left( \frac{dy}{dx}\right) ^{2}}\enspace dx &=\int_{a}^{b}\sqrt{1+\left( -\frac{x}{\sqrt{1-x^{2}}}\right) ^{2}}\enspace dx \\ &=\int_{a}^{b}\sqrt{1+\frac{x^{2}}{1-x^{2}}}\enspace dx \\ &=\int_{a}^{b}\sqrt{\frac{1}{1-x^{2}}}\enspace dx \\ &=\int_{a}^{b}\frac{1}{\sqrt{1-x^{2}}}\enspace dx \\ &=\arcsin b-\arcsin a \end{align*}
\end{proof}
In the particular case that $b=1$, we have that the arc length $s$ along the upper unit circle from $x=a$ to $x=1$ is \[ s=\frac{\pi }{2}-\arcsin a \] Then \begin{align*} \cos s &=\cos \left( \frac{\pi }{2}-\arcsin a\right) \\ &=\sin \left( \arcsin a\right) \\ &=a \end{align*} and \begin{align*} \sin s &=\sin \left( \frac{\pi }{2}-\arcsin a\right) \\ &=\cos \left( \arcsin a\right) \\ &=\sqrt{1-a^{2}} \end{align*} This shows that a point $P\left( a,\sqrt{1-a^{2}}\right) $ on the upper unit circle with $-1\leq a\leq 1$ has coordinates $P\left( \cos s,\sin s\right)$ where $s$ is the arc length along the upper unit circle from the point $P$ to $ A\left( 1,0\right) $. This arc length is the definition of the radian measure of angle $AOP$ where $O=\left( 0,0\right) $.
The connection from geometry-free trigonometry to unit circle trigonometry is complete.
\end{document} |
\begin{document}
\title[single commutators]{On single commutators in II$_1$--factors}
\author[Dykema]{Ken Dykema$^{*}$} \author[Skripka]{Anna Skripka$^\dag$} \address{K.D., Department of Mathematics, Texas A\&M University, College Station, TX 77843-3368, USA} \email{kdykema@math.tamu.edu} \address{A.S., Department of Mathematics, University of Central Florida, 4000 Central Florida Blvd., P.O.\ Box 161364, Orlando, FL 32816-1364, USA} \email{skripka@math.ucf.edu} \thanks{\footnotesize ${}^{*}$Research supported in part by NSF grant DMS--0901220. ${}^{\dag}$Research supported in part by NSF grant DMS--0900870.}
\subjclass[2000]{47B47, 47C15}
\keywords{Commutators, II$_1$--factors}
\date{February 1, 2011}
\begin{abstract} We investigate the question of whether all elements of trace zero in a II$_1$--factor are single commutators. We show that all nilpotent elements are single commutators, as are all normal elements of trace zero whose spectral distributions are discrete measures. Some other classes of examples are considered. \end{abstract}
\maketitle
\section{Introduction}
In an algebra $\Afr$, the {\em commutator} of $B,C\in\Afr$ is $[B,C]=BC-CB$, and we denote by $\Comm(\Afr)\subseteq\Afr$ the set of all commutators. A {\em trace} on $\Afr$ is by definition a linear functional that vanishes on $\Comm(\Afr)$. The algebra $M_n(k)$ of $n\times n$ matrices over a field $k$ has a unique trace, up to scalar multiplication; (we denote the trace sending the identity element to $1$ by $\tr_n$). It is known that every element of $M_n(k)$ that has trace zero is necessarily a commutator (see~\cite{S36} for the case of characteristic zero and \cite{AM57} for the case of an arbitrary characteristic). For the complex field, $k=\Cpx$, a natural generalization of the algebra $M_n(\Cpx)$ is the algebra $B(\HEu)$ of all bounded operators on a separable, possibly infinite dimensional Hilbert space $\HEu$. Thanks to the groundbreaking paper~\cite{BP65} of Brown and Pearcy, $\Comm(B(\HEu))$ is known: the commutators in $B(\HEu)$ are precisely the operators that are not of the form $\lambda I+K$ for $\lambda$ a nonzero complex number, $I$ the identity operator and $K$ a compact operator (and an analogous result holds when $\HEu$ is nonseparable).
Characterizations of $\Comm(B(X))$ for some Banach spaces $X$ are found in~\cite{A72}, \cite{A73}, \cite{D09} and~\cite{DJ10}.
The von Neumann algebra factors form a natural family of algebras including the matrix algebras $M_n(\Cpx)$ and $B(\HEu)$ for infinite dimensional Hilbert spaces $\HEu$; (these together are the type~I factors). The set $\Comm(\Mcal)$ was determined by Brown and Pearcy~\cite{BP66} for $\Mcal$ a factor of type III and by Halpern~\cite{H69} for $\Mcal$ a factor of type II$_\infty$.
The case of type II$_1$ factors remains open. A type II$_1$ factor is a von Neumann algebra $\Mcal$ whose center is trivial and that has a trace $\tau:\Mcal\to\Cpx$, which is then unique up to scalar multiplication; by convention, we always take $\tau(1)=1$. The following question seems natural, in light of what is known for matrices: \begin{ques}\label{qn:comm} Do we have \[ \Comm(\Mcal)=\ker\tau \] for any one particular II$_1$--factor $\Mcal$, or even for all II$_1$--factors? \end{ques}
Some partial results are known. Fack and de la Harpe~\cite{FH80} showed that every element of $\ker\tau$ is a sum of ten commutators (with control of the norms of the elements). The number ten was improved to two by Marcoux~\cite{M06}. Pearcy and Topping, in~\cite{PT69}, showed that in the type II$_1$ factors of Wright (which do not have separable predual), every self--adjoint element of trace zero is a commutator.
In section~\ref{sec:normal}, we employ the construction of Pearcy and Topping for the Wright factors and a result of Hadwin~\cite{Had98} to show firstly that all normal elements of trace zero in the Wright factors are commutators. We then use this same construction to derive that in any II$_1$--factor, every normal element with trace zero and purely atomic distribution is a single commutator. In section~\ref{sec:nilpotent}, we show that all nilpotent operators in II$_1$--factors are commutators. Finally, in section~\ref{sec:ques}, we provide classes of examples of elements of II$_1$--factors that are not normal and not nilpotent but are single commutators, and we ask some specific questions suggested by our examples and results.
\noindent{\bf Acknowledgement.} The authors thank Heydar Radjavi for stimulating discussions about commutators, and Gabriel Tucci for help with his operators.
\section{Some normal operators} \label{sec:normal}
The following lemma (but with a constant of $2$) was described in Concluding Remark (1) of~\cite{PT69}, attributed to unpublished work of John Dyer. That the desired ordering of eigenvalues can be made with bounding constant~$4$ follows from work of Steinitz~\cite{St13}, the value $2$ follows from~\cite{GS80} and the better constant in the version below (which is not actually needed in our application of it) is due to work of Banaszczyk~\cite{Ba87}, \cite{Ba90}. \begin{lemma}\label{lem:Anormal}
Let $A\in M_n(\Cpx)$ be a normal element with $\tr_n(A)=0$. Then there are $B,C\in M_n(\Cpx)$ with $A=[B,C]$ and $\|B\|\,\|C\|\le\frac{\sqrt5}2\|A\|$. \end{lemma} \begin{proof} After conjugating with a unitary, we may without loss of generality assume $A=\diag(\lambda_1,\ldots,\lambda_n)$ and we may choose the diagonal elements to appear in any prescribed order. We have $A=[B,C]$ where \begin{equation}\label{eq:B} B=\left( \begin{matrix} 0&1&0&\cdots&0 \\ 0&0&1 \\ \vdots & &\ddots&\ddots \\ 0& & \cdots & 0 &1 \\ 0& 0 & \cdots & &0 \end{matrix}\right) \end{equation} and $C=B^*D$, where \begin{equation}\label{eq:D} D=\diag(\lambda_1,\,\lambda_1+\lambda_2,\,\ldots,\lambda_1+\cdots+\lambda_{n-1},0). \end{equation} By work of Banaszczyk~\cite{Ba87}, \cite{Ba90}, any list $\lambda_1,\ldots,\lambda_n$ of complex numbers whose sum is zero can be reordered so that for all $k\in\{1,\ldots,n-1\}$ we have \begin{equation}\label{eq:lambdasum}
\left|\sum_{j=1}^k\lambda_j\right|\le\frac{\sqrt5}2\max_{1\le j\le n}|\lambda_j|. \end{equation}
This ensures $\|B\|\le1$ and $\|C\|\le\frac{\sqrt5}2\|A\|$. \end{proof}
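For a concrete real diagonal instance, the construction in the lemma can be verified numerically. The sketch below, in plain Python with illustrative names and no linear-algebra library, builds the shift $B$ and $C=B^{*}D$ for a traceless diagonal and checks that $[B,C]$ recovers the diagonal; in the real case $B^{*}$ is just the transpose.

```python
# Illustrative finite-dimensional instance of the lemma's construction: for a
# traceless real diagonal A = diag(l_1, ..., l_n), take B the upper shift and
# C = B^T D with D = diag(l_1, l_1 + l_2, ..., l_1 + ... + l_{n-1}, 0);
# then [B, C] = A.

def commutator_check(lams):
    n = len(lams)
    assert abs(sum(lams)) < 1e-12                       # trace zero
    d = [sum(lams[:k]) for k in range(1, n)] + [0.0]    # diagonal of D
    B = [[1.0 if j == i + 1 else 0.0 for j in range(n)] for i in range(n)]
    Bt = [[B[j][i] for j in range(n)] for i in range(n)]
    C = [[Bt[i][j] * d[j] for j in range(n)] for i in range(n)]   # C = B^T D

    def mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    BC, CB = mul(B, C), mul(C, B)
    for i in range(n):                                  # [B, C] == diag(lams)?
        for j in range(n):
            target = lams[i] if i == j else 0.0
            assert abs(BC[i][j] - CB[i][j] - target) < 1e-12
    return True

assert commutator_check([3.0, -1.0, -1.0, -1.0])
assert commutator_check([2.5, -0.5, 1.0, -4.0, 1.0])
```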
The II$_1$--factors of Wright~\cite{W54} are the quotients of the von Neumann algebra of all bounded sequences in $\prod_{n=1}^\infty M_n(\Cpx)$ by the ideal $I_\omega$, consisting of all sequences $(a_n)_{n=1}^\infty\in\prod_{n=1}^\infty M_n(\Cpx)$ such that $\lim_{n\to\omega}\tr_n(a_n^*a_n)=0$, where $\omega$ is a nontrivial ultrafilter on the natural numbers. The trace of the element of $\Mcal$ associated to a bounded sequence $(b_n)_{n=1}^\infty\in\prod_{n=1}^\infty M_n(\Cpx)$ is $\lim_{n\to\omega}\tr_n(b_n)$. (See~\cite{McD70} or~\cite{J72} for ultrapowers of finite von Neumann algebras.) The following result in the case of self--adjoint operators is due to Pearcy and Topping~\cite{PT69}.
\begin{thm}\label{thm:PT} If $\Mcal$ is a Wright factor and if $T\in\Mcal$ is normal with $\tau(T)=0$, then $T\in\Comm(\Mcal)$. \end{thm} \begin{proof}
Let $T\in\Mcal$ be normal and let $X$ and $Y$ be the real and imaginary parts of $T$, respectively. Let $(S_n)_{n=1}^\infty\in\prod_{n=1}^\infty M_n(\Cpx)$ be a representative of $T$, with $\|S_n\|\le\|T\|$ for all $n$. Let $X_n$ and $Y_n$ be the real and imaginary parts of $S_n$. Then the mixed $*$--moments of the pair $(X_n,Y_n)$ converge as $n\to\omega$ to the mixed $*$--moments of $(X,Y)$. By standard methods, we can construct some commuting, self--adjoint, traceless $n\times n$ matrices $H_n$ and $K_n$ such that $H_n$ converges in moments to $X$ and $K_n$ converges in moments to $Y$, as $n\to\infty$. Now using a result of Hadwin (Theorem 2.1 of~\cite{Had98}), we find $n\times n$ unitaries $U_n$ such that \[
\lim_{n\to\omega}\|U_nX_nU_n^*-H_n\|_2=0 \qquad
\lim_{n\to\omega}\|U_nY_nU_n^*-K_n\|_2=0, \]
where $\|Z\|_2=\tr_n(Z^*Z)^{1/2}$ is the Euclidean norm resulting from the normalized trace on $M_n(\Cpx)$. This shows that $T$ has representative $(T_n)_{n=1}^\infty$, where $T_n=U_n^*(H_n+iK_n)U_n$ is normal and, of course, traceless.
By Lemma~\ref{lem:Anormal}, for each $n$ there are $B_n,C_n\in M_n(\Cpx)$ with $\|B_n\|=1$ and
$\|C_n\|\le\frac{\sqrt5}2\|T\|$ such that $T_n=[B_n,C_n]$. Let $B,C\in\Mcal$ be the images (in the quotient $\prod_{n=1}^\infty M_n(\Cpx)/I_\omega$) of $(B_n)_{n=1}^\infty$ and $(C_n)_{n=1}^\infty$, respectively. Then $T=[B,C]$. \end{proof}
The {\em distribution} of a normal element $T$ in a II$_1$--factor is the compactly supported Borel probability measure on the complex plane obtained by composing the trace with the projection--valued spectral measure of $T$.
\begin{thm}\label{thm:normalhyp} If $R$ is the hyperfinite II$_1$--factor and if $\mu$ is a compactly supported Borel probability measure on the complex plane such that $\int z\,\mu(dz)=0$, then there is a normal element $T\in\Comm(R)$ whose distribution is $\mu$. \end{thm} \begin{proof} We will consider a particular instance of the construction from the proof of Theorem~\ref{thm:PT}. Let $\Mcal$ be a factor of Wright, with tracial state $\tau$. Let $L$ be the maximum modulus of elements of the support of $\mu$. We may choose complex numbers $(\lambda^{(n)}_j)_{j=1}^n$ for $n\ge1$ such that the measures $\frac1n\sum_{j=1}^n\delta_{\lambda_j^{(n)}}$ converge in weak$^*$--topology to $\mu$ and all have support contained inside the disk of radius $L$ centered at the origin and such that $\sum_{j=1}^n\lambda^{(n)}_j=0$ for each $n$. Let $T_n=\diag(\lambda^{(n)}_1,\ldots,\lambda^{(n)}_n)\in M_n(\Cpx)$ and let $T\in\Mcal$ be the element associated to the sequence $(T_n)_{n=1}^\infty$. Then the distribution of $T$ is $\mu$. By~\cite{Ba87}, \cite{Ba90}, we can order these $\lambda^{(n)}_1,\ldots,\lambda^{(n)}_n$ so that
$\big|\sum_{j=1}^k\lambda^{(n)}_j\big|\le\frac{\sqrt5}2\|T\|$ for all $1\le k\le n$. Then, as in the proof of Lemma~\ref{lem:Anormal}, we have $T_n=[B_n,B_n^*D_n]$ where $B_n$ and $D_n$ are the $n\times n$ matrices $B$ and $D$ of~\eqref{eq:B} and~\eqref{eq:D}, respectively. If $B,D\in\Mcal$ are the images in the quotient of the sequences $(B_n)_{n=1}^\infty$ and $(D_n)_{n=1}^\infty$, respectively, then $T=[B,B^*D]$. However, note that $B\in\Mcal$ is a unitary element such that $\tau(B^k)=0$ for all $k>0$. Moreover, the set $\{B^kDB^{-k}\mid k\in\Ints\}$ generates a commutative von Neumann subalgebra $\Ac$ of $\Mcal$ and every element of $\Ac$ is the image (under the quotient mapping) of a sequence $(A_n)_{n=1}^\infty$ where each $A_n\in M_n(\Cpx)$ is a diagonal matrix. Thus, the unitary $B$ acts by conjugation on $\Ac$, and, moreover, we have $\tau(AB^k)=0$ for all $A\in\Ac$ and all $k>0$. Therefore the von Neumann subalgebra generated by $\Ac\cup\{B\}$ is a case of the group--measure-space construction, $\Ac\rtimes\Ints$, and is a hyperfinite von Neumann algebra by \cite{Co76} and can, thus, be embedded into the hyperfinite II$_1$--factor $R$. \end{proof}
The above proof actually shows the following. \begin{cor} Given any compactly supported Borel probability measure $\mu$ on the complex plane with $\int z\,\mu(dz)=0$, there is $f\in L^\infty([0,1])$ and a probability-measure-preserving transformation $\alpha$ of $[0,1]$ such that the distribution of $f-\alpha(f)$ equals $\mu$ and the supremum norm of $f$ is no more than $\frac{\sqrt5}2$ times the maximum modulus of the support of $\mu$. \end{cor}
\begin{thm}\label{thm:atomic} If $\Mcal$ is any II$_1$--factor and $T\in\Mcal$ is a normal element whose distribution is purely atomic and with trace $\tau(T)=0$, then $T\in\Comm(\Mcal)$. \end{thm} \begin{proof} $\Mcal$ contains a (unital) subfactor $R$ isomorphic to the hyperfinite II$_1$--factor. By Theorem~\ref{thm:normalhyp}, there is an element $\Tt\in\Comm(R)$ whose distribution equals the distribution of $T$. Since this distribution is purely atomic, there is a unitary $U\in\Mcal$ such that $U\Tt U^*=T$. Thus, $T\in\Comm(\Mcal)$. \end{proof}
\section{Nilpotent operators} \label{sec:nilpotent}
The von Neumann algebra $\Mcal$ is embedded in $B(\HEu)$ as a strong--operator--topology closed, self--adjoint subalgebra. If $T\in\Mcal$, we denote the self--adjoint projection onto $\ker(T)$ by $\kerproj(T)$ and the self--adjoint projection onto the closure of the range of $T$ by $\ranproj(T)$. Both of these belong to $\Mcal$, and we have \[\tau(\kerproj(T))+\tau(\ranproj(T))=1.\]
The following decomposition follows from the usual sort of analysis of subspaces that one does also in the finite dimensional setting. \begin{lemma}\label{lem:UT} Let $\Mcal$ be a II$_1$--factor and let $T\in\Mcal$ be nilpotent, $T^n=0$. Then there are integers $n\ge k_1>k_2>\ldots>k_m\ge1$ and for each $j\in\{1,\ldots,m\}$ there are equivalent projections $f^{(j)}_1,\ldots,f^{(j)}_{k_j}$ in $\Mcal$ such that \begin{enumerate}[(i)] \item $f^{(j)}:=f^{(j)}_1+\cdots+f^{(j)}_{k_j}$ commutes with $T$, \item $f^{(1)}+\cdots+f^{(m)}=1$, \item the $k_j\times k_j$ matrix of $f^{(j)}T$ with respect to these projections $f^{(j)}_1,\ldots,f^{(j)}_{k_j}$ is strictly upper triangular. \end{enumerate} \end{lemma} In other words, the lemma says that $T$ lies in a unital $*$--subalgebra of $\Mcal$ that is isomorphic to $M_{k_1}(\Afr_1)\oplus\cdots\oplus M_{k_m}(\Afr_m)$ for certain compressions $\Afr_j$ of $\Mcal$ by projections, and the direct summand component of $T$ in each $M_{k_j}(\Afr_j)$ is a strictly upper triangular matrix. \begin{proof} The proof is by induction on $n$. The case $n=1$ is clear, because then $T=0$. Assume $n\ge2$. We consider the usual system $p_1,p_2,\ldots,p_n$ of pairwise orthogonal projections with respect to which $T$ is upper triangular: \begin{align*} p_1&=\kerproj(T), \\ p_j&=\kerproj(T^j)-\kerproj(T^{j-1}),\quad(2\le j\le n). \end{align*} Then we have \begin{gather} \tau(\ranproj(Tp_j))=\tau(p_j),\qquad(2\le j\le n), \label{eq:Tpj} \\ \ranproj(Tp_j)\le\kerproj(T^{j-1})=p_1+p_2+\cdots+p_{j-1},\qquad(2\le j\le n), \label{eq:Tpjle} \\ \ranproj(Tp_j)\wedge(p_1+ p_2+\cdots+p_{j-2})=0,\qquad(3\le j\le n). \label{eq:rpw} \end{gather} Indeed, for~\eqref{eq:Tpj}, it will suffice to show $\kerproj(Tp_j)=1-p_j$. For this, note that if $p_j\xi=\xi$ and $T\xi=0$, then $\xi\in\ker T\subseteq\ker T^{j-1}$. Since $p_j\perp\kerproj(T^{j-1})$, this gives $\xi=0$. The relation~\eqref{eq:Tpjle} is clear. 
For~\eqref{eq:rpw}, if $q:=\ranproj(Tp_j)\wedge\kerproj(T^{j-2})\ne0$, then by standard techniques (see, e.g., Lemma~2.2.1 of~\cite{CD09}), we would have a nonzero projection $r\le p_j$ such that $q=\ranproj(Tr)\le\kerproj(T^{j-2})$. However, this would imply $r\le\kerproj(T^{j-1})$, which contradicts $p_j\perp\kerproj(T^{j-1})$.
Let \begin{align*} q_n&=p_n\,, \\ q_{n-j}&=\ranproj(T^jq_n),\qquad(1\le j\le n-1). \end{align*} Then we have \begin{gather} q_k=\ranproj(Tq_{k+1})\le p_1+\cdots+p_k,\qquad(1\le k\le n-1), \label{eq:Tqk} \\ q_k\wedge(p_1+\cdots+p_{k-1})=0,\qquad(2\le k\le n). \label{eq:qk} \end{gather} Now~\eqref{eq:Tpj} and~\eqref{eq:Tqk} together imply $\tau(q_k)=\tau(q_{k+1})$, and from~\eqref{eq:qk} we have $\tau(q_1\vee\cdots\vee q_k)=k\tau(q_1)$. Thus, we have pairwise equivalent and orthogonal projections $f_1,\ldots,f_n$ defined by \begin{align*} f_n&=q_n\,, \\ f_k&=(q_k\vee\cdots\vee q_n)-(q_{k+1}\vee\cdots\vee q_n),\qquad(1\le k\le n-1). \end{align*} Then $T$ commutes with $f:=f_1+\cdots+f_n$ and $Tf$ is strictly upper triangular when written as an $n\times n$ matrix with respect to $f_1,\ldots,f_n$. Moreover, we have $(T(1-f))^{n-1}=T^{n-1}(1-f)=0$ and the induction hypothesis applies to $T(1-f)$. \end{proof}
\begin{prop} Let $\Mcal$ be a II$_1$--factor. Then $\Comm(\Mcal)$ contains all nilpotent elements of $\Mcal$. \end{prop} \begin{proof} By Lemma~\ref{lem:UT}, we only need to observe that a strictly upper triangular matrix in $M_n(\Afr)$ is a single commutator, for any algebra $\Afr$. But this is easy: if \[ A=\left( \begin{matrix} 0&a_{1,2}&a_{1,3}&\cdots&a_{1,n} \\ 0&0 &a_{2,3}&\cdots&a_{2,n} \\ \vdots & &\ddots&\ddots&\vdots \\
& & & &a_{n-1,n} \\ 0 & & \cdots & &0 \end{matrix}\right), \] then $A=BC-CB$, where $B$ is the matrix in~\eqref{eq:B}, \begin{equation} C=\left( \begin{matrix} 0&0&\cdots&0 \\ 0&c_{2,2}&\cdots&c_{2,n} \\ \vdots& &\ddots&\vdots \\ 0&\cdots&0 &c_{n,n} \end{matrix}\right), \end{equation} and where the $c_{i,j}$ are chosen so that \begin{align*} a_{1,j}&=c_{2,j}\,,\qquad (2\le j\le n), \\ a_{p,j}&=c_{p+1,j}-c_{p,j-1}\,,\qquad(2\le p<j\le n). \end{align*} \end{proof}
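Since \eqref{eq:B} is not reproduced in this section, the following numerical check assumes $B$ is the $n\times n$ nilpotent upper-shift matrix (ones on the superdiagonal); under that assumption, the recursion for the $c_{i,j}$ can be verified to yield $A=BC-CB$ for a random strictly upper triangular $A$:

```python
import numpy as np

def solve_C(A):
    """Build C (upper triangular, zero first row and first column) from a
    strictly upper triangular A via the recursion
        c_{2,j}   = a_{1,j},                  (2 <= j <= n)
        c_{p+1,j} = a_{p,j} + c_{p,j-1},      (2 <= p < j <= n)."""
    n = A.shape[0]
    C = np.zeros_like(A)
    # 0-based indexing: C[i, j] stands for c_{i+1, j+1} in the paper.
    for j in range(1, n):                  # c_{2,j} = a_{1,j}
        C[1, j] = A[0, j]
    for p in range(2, n):                  # fill rows c_{p+1, .}
        for j in range(p, n):
            C[p, j] = A[p - 1, j] + C[p - 1, j - 1]
    return C

n = 6
rng = np.random.default_rng(1)
A = np.triu(rng.standard_normal((n, n)), k=1)   # strictly upper triangular
B = np.diag(np.ones(n - 1), k=1)                # assumed form of eq. (B)
C = solve_C(A)
assert np.allclose(A, B @ C - C @ B)
```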
\section{Examples and questions} \label{sec:ques}
\begin{example} A particular case of Theorem~\ref{thm:atomic} is that if $p$ is a projection (with irrational trace) in any II$_1$--factor $\Mcal$, then $p-\tau(p)1\in\Comm(\Mcal)$. We note that a projection with rational trace is contained in some unital matrix subalgebra $M_n(\Cpx)\subseteq\Mcal$; therefore, the case of a projection with rational trace is an immediate application of Shoda's result. \end{example}
\begin{ques} In light of Theorem~\ref{thm:atomic}, it is natural to ask: does $\Comm(\Mcal)$ contain all normal elements of $\Mcal$ whose trace is zero? (Note that each such element is the limit in norm of a sequence of elements of the sort considered in Theorem~\ref{thm:atomic}.) It is of particular interest to focus on normal elements that generate maximal abelian self--adjoint subalgebras (masas) in $\Mcal$. Does it make a difference whether the masa is singular or semi-regular? (See~\cite{SS08}.) \end{ques}
A particular case: \begin{ques} If $a$ and $b$ freely generate the group $\Fb_2$, let $\lambda_a$ and $\lambda_b$ be the corresponding unitaries generating the group von Neumann algebra $L(\Fb_2)$. Do we have $\lambda_a\in\Comm(L(\Fb_2))$? \end{ques}
Our next examples come from ergodic theory. \begin{example}\label{ex:ergodic} Let $\alpha$ be an ergodic, probability-measure-preserving transformation of a standard Borel probability space $X$ that is not weakly mixing. Consider the hyperfinite II$_1$--factor $R$ realized as the crossed product $R=L^\infty(X)\rtimes_\alphat\Ints$ where $\alphat$ is the automorphism of $L^\infty(X)$ arising from $\alpha$ by $\alphat(f)=f\circ\alpha$. For $f\in L^\infty(X)$, we let $\pi(f)$ denote the corresponding element of $R$, and we write $U\in R$ for the implementing unitary, so that $U\pi(f)U^*=\pi(\alphat(f))$. By a standard result in ergodic theory (see, for example, Theorem 2.6.1 of~\cite{P83}), there is an eigenfunction, i.e.,
$h\in L^\infty(X)\backslash\{0\}$ so that $\alphat(h)=\zeta h$ for some $\zeta\ne1$; moreover, all eigenfunctions $h$ of an ergodic transformation must have $|h|$ constant. If $g\in L^\infty(X)$, then \begin{align*} [U\pi(g),\pi(h)]=U\pi\big(g\big(h-\alphat^{-1}(h)\big)\big). \end{align*} Since $h-\alphat^{-1}(h)=(1-\zeta^{-1})h$ is invertible (as $\zeta\ne1$ and $|h|$ is a nonzero constant), choosing $g=f\big(h-\alphat^{-1}(h)\big)^{-1}$ gives $U\pi(f)=[U\pi(g),\pi(h)]\in\Comm(R)$ for all $f\in L^\infty(X)$. \end{example}
\begin{ques} If $\alpha$ is a weakly mixing transformation of $X$ (for example, a Bernoulli shift), then, with the notation of Example~\ref{ex:ergodic}, do we have $U\pi(f)\in\Comm(R)$ for all $f\in L^\infty(X)$? \end{ques}
\begin{example}
Assume that $\alphat$ from Example \ref{ex:ergodic} has infinitely many distinct eigenvalues. This is the case for every compact ergodic action $\alpha$ (for example, an irrational rotation of the circle or the odometer action), but can also hold for a non-compact action (for example, a skew rotation of the torus). For every finite set $F\subset\Ints\setminus\{0\}$, there is an eigenvalue $\zeta$ such that $\zeta^k\neq 1$, for any $k\in F$. Let $h$ be an eigenfunction of $\alphat$ corresponding to this eigenvalue $\zeta$; clearly, $|h|$ is a constant. Then, for $g_k\in L^\infty(X)$, \[ \left[\sum_{k \in F}U^k\pi(g_k),\pi(h)\right]=\sum_{k\in F} \left[U^k\pi(g_k),\pi(h)\right] =\sum_{k\in F}U^k\pi\big(g_k\big(h-\alphat^{-k}(h)\big)\big). \] Thus, for any $f_k\in L^\infty(X)$, by choosing $g_k=f_k \big(h-\alphat^{-k}(h)\big)^{-1}$, we obtain \[ \sum_{k\in F} U^k\pi(f_k)\in\Comm(R). \] \end{example}
\begin{ques} It is natural to ask Question~\ref{qn:comm} in the particular case of quasinilpotent elements $T$ of $\Mcal$: must they lie in $\Comm(\Mcal)$? From Proposition~4 of~\cite{MW79}, it follows that every quasinilpotent operator $T$ in a II$_1$--factor has trace zero. (Alternatively, use L.\ Brown's analogue~\cite{B86} of Lidskii's theorem in II$_1$--factors and the fact that the Brown measure of $T$ must be concentrated at $0$). \end{ques}
\begin{ques} Consider the quasinilpotent DT--operator $T$ (see~\cite{DH04}), which is a generator of the free group factor $L(\Fb_2)$. Do we have $T\in\Comm(L(\Fb_2))$? \end{ques}
\begin{example}\label{ex:Tucci} Consider G.\ Tucci's quasinilpotent operator \[ A=\sum_{n=1}^\infty a_n V_n\in R, \] from~\cite{T08}, where $a=(a_n)_{n=1}^\infty\in\ell^1_+$, the set of summable sequences of nonnegative numbers. Here $R=\overline{\bigotimes_1^\infty M_2(\Cpx)}$ is the hyperfinite II$_1$--factor and \begin{equation}\label{def:Vn} V_n=I^{\otimes n-1}\otimes\left(\begin{smallmatrix}0&1\\0&0\end{smallmatrix}\right)\otimes I\otimes I\otimes\cdots. \end{equation} Tucci showed in Remark~3.7 (p.\ 2978) of~\cite{T08} that $A$ is a single commutator whenever $a=(b_nc_n)_{n=1}^\infty$ for some $b=(b_n)_{n=1}^\infty\in\ell^1$ and $c=(c_n)_{n=1}^\infty\in\ell^1$, by writing $A=[B,C]$, where \begin{align} B&=\sum_{n=1}^\infty b_nV_nV_n^*, \label{eq:Bop} \\ C&=\sum_{n=1}^\infty c_nV_n\,. \label{eq:C} \end{align} Note that, for $a\in\ell^1_+$, there exist $b$ and $c$ in $\ell^1$ such that $a=(b_nc_n)_{n=1}^\infty$ if and only if $\sum_{n=1}^\infty a_n^{1/2}<\infty$, i.e., if and only if $a\in\ell^{1/2}_+$. \end{example}
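On a finite truncation, Tucci's identity $A=[B,C]$ can be checked directly in $M_{2^N}(\Cpx)$: each $V_n$ acts on only the $n$-th tensor factor, $V_nV_n^*V_n=V_n$, and $V_n^2=0$, so $[V_nV_n^*,V_m]=\delta_{n,m}V_n$. The following sketch keeps only the first $N$ tensor factors and uses arbitrary illustrative values for $b_n$ and $c_n$:

```python
import numpy as np
from functools import reduce

E = np.array([[0., 1.], [0., 0.]])   # the 2x2 matrix unit appearing in V_n
I2 = np.eye(2)

def V(n, N):
    """V_n restricted to the first N tensor factors: I^(n-1) (x) E (x) I (x) ..."""
    factors = [I2] * N
    factors[n - 1] = E
    return reduce(np.kron, factors)

N = 3
b = [1.0, 0.5, 0.25]                 # illustrative values only
c = [2.0, 1.0, 0.5]
a = [bn * cn for bn, cn in zip(b, c)]

B = sum(bn * V(n, N) @ V(n, N).T for n, bn in enumerate(b, 1))
C = sum(cn * V(n, N) for n, cn in enumerate(c, 1))
A = sum(an * V(n, N) for n, an in enumerate(a, 1))
assert np.allclose(A, B @ C - C @ B)   # A = [B, C] on the truncation
```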
The rest of the paper is concerned with some further results and remarks about Tucci's operators.
We might try to extend the formula $A=[B,C]$ for $B$ and $C$ as in~\eqref{eq:Bop} and~\eqref{eq:C}, respectively, to other sequences $a\in\ell^1_+$, i.e.\ for $b$ and $c$ not necessarily in $\ell^1$, and where the convergence in~\eqref{eq:Bop} and~\eqref{eq:C} might be in some weaker topology.
We first turn our attention to~\eqref{eq:C}. Denoting the usual embedding $R\hookrightarrow L^2(R,\tau)$ by $X\mapsto\Xh$, from \eqref{def:Vn} we see that the vectors $\Vh_n$ are orthogonal and all have $L^2(R,\tau)$-norm equal to $1/\sqrt2$; therefore, the series~\eqref{eq:C} converges in $L^2(R,\tau)$ as soon as $c\in\ell^2$, and we have \begin{equation}\label{eq:Ch} \Ch=\sum_{n=1}^\infty c_n\Vh_n. \end{equation} As we show below, a bounded operator $C\in R$ such that $\Ch$ is given by~\eqref{eq:Ch} exists only when $c\in\ell^1$.
\begin{prop}\label{prop:cl1} Let $c\in\ell^2$. Suppose there is a bounded operator $C\in R$ such that $\Ch$ is given by~\eqref{eq:Ch}. Then $c\in\ell^1$. \end{prop} \begin{proof} For any sequence $(\zeta_n)_{n=1}^\infty$ of complex numbers of modulus $1$, there is an automorphism of $R$ sending $V_n$ to $\zeta_nV_n$ for all $n$. Thus, without loss of generality we may assume $c_n\ge0$ for all $n$.
Letting $E_n:R\to M_2(\Cpx)^{\otimes n}\otimes I\otimes I\otimes\cdots\cong M_{2^n}(\Cpx)$ be the conditional expectation onto the tensor product of the first $n$ copies of the $2\times 2$ matrices (see Example~\ref{ex:Tucci}), we must have $C_n:=E_n(C)=\sum_{k=1}^n c_kV_k\in M_{2^n}(\Cpx)$. Let $x=2^{-n/2}(1,1,\ldots,1)^t$ be the normalization of the column vector of length $2^n$ with all entries equal to $1$. Taking the usual inner product in $\Cpx^{2^n}$, we see $\langle V_kx,x\rangle=1/2$ for all $k\in\{1,\ldots,n\}$. Thus, \[
\frac12\sum_{k=1}^nc_k=\big|\,\langle C_n x,x\rangle\,\big|\le\|C_n\|\le\|C\|. \] This shows $c\in\ell^1$. \end{proof}
Let us now investigate the series~\eqref{eq:Bop} for some sequence $b=(b_n)_{n=1}^\infty$ of complex numbers. We claim that this series gives rise (in a weak sense explained below) to a bounded operator if and only if $b\in\ell^1$. Indeed, for $K$ a finite subset of $\Nats$, we have \[
\left\|\sum_{n\in K}b_nV_nV_n^*\right\|_{L^2(R,\tau)}^2
=\;\frac14\sum_{n\in K}|b_n|^2+\frac14\left|\sum_{n\in K}b_n\right|^2. \] Now suppose $K_1\subseteq K_2\subseteq\cdots$ are finite sets whose union is all of $\Nats$. Then $\sum_{n\in K_p}b_nV_nV_n^*$ converges in $L^2(R,\tau)$ as $p\to\infty$ if and only if $b\in\ell^2$ and $y:=\lim_{p\to\infty}\sum_{n\in K_p}b_n$ exists. Then the limit in $L^2(R,\tau)$ is \begin{equation}\label{eq:Bh} \Bh=\sum_{n=1}^\infty b_n\left(V_nV_n^*-\frac12\right)^{\widehat{\;}}+\frac y2 \oneh. \end{equation} If there is a bounded operator $B$ such that $\Bh$ is given by~\eqref{eq:Bh}, then for every finite $F\subseteq\Nats$, the conditional expectation $E_F(B)$ of $B$ onto the (finite dimensional) subalgebra of $R$ generated by $\{V_nV_n^*\mid n\in F\}$ will be $\sum_{n\in F}b_n(V_nV_n^*-\frac12)+\frac y2$. Taking the projection $P=\prod_{n\in F}V_nV_n^*$, we have $E_F(B)P=\frac12(y+\sum_{n\in F}b_n)P$, so \[
\left|\frac12\left(y+\sum_{n\in F}b_n\right)\right|\le\|E_F(B)\|\le\|B\|. \] As $F$ was arbitrary, this implies $b\in\ell^1$.
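The $L^2$-norm identity used above can be verified on a finite truncation, using the normalized trace $\tau=2^{-N}\mathrm{Tr}$ on $M_{2^N}(\Cpx)$ (a sketch with arbitrary illustrative coefficients):

```python
import numpy as np
from functools import reduce

E = np.array([[0., 1.], [0., 0.]])
I2 = np.eye(2)

def V(n, N):
    """V_n restricted to the first N tensor factors."""
    factors = [I2] * N
    factors[n - 1] = E
    return reduce(np.kron, factors)

N = 3
b = np.array([1.0 + 2.0j, -0.5, 0.3j])          # illustrative values only
X = sum(bn * V(n, N) @ V(n, N).conj().T for n, bn in enumerate(b, 1))

# tau(X* X) should equal (1/4) sum |b_n|^2 + (1/4) |sum b_n|^2,
# since tau(V_n V_n^*) = 1/2 and tau(V_n V_n^* V_m V_m^*) = 1/4 for n != m.
lhs = np.trace(X.conj().T @ X).real / 2**N
rhs = 0.25 * np.sum(np.abs(b)**2) + 0.25 * np.abs(np.sum(b))**2
assert np.isclose(lhs, rhs)
```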
Suppose $b_nc_n=\frac1{n^r}$ and $b=(b_n)_1^\infty\in\ell^1$. Letting $(b^*_n)_1^\infty$ denote the nonincreasing rearrangement of $(|b_n|)_1^\infty$, we have $b^*_n=o(\frac1n)$ and standard arguments show $c^*_n\ge\frac{K}{n^{r-1}}$ for some constant $K$. Thus, by Proposition~\ref{prop:cl1}, Tucci's formula for writing $A=[B,C]$ does not work if $a_n=\frac1{n^r}$ for $1<r\le2$, while of course for $r>2$ it works just fine.
\begin{ques}\label{qn:Tuccir} Fix $1<r\leq 2$, and let \[ A=\sum_{n=1}^\infty \frac1{n^r}V_n\in R \] be Tucci's quasinilpotent operator in the hyperfinite II$_1$--factor. Do we have $A\in\Comm(R)$? \end{ques}
\begin{bibdiv} \begin{biblist}
\bib{AM57}{article}{
author={Albert, A. A.},
author={Muckenhoupt, B.},
title={On matrices of trace zero},
journal={Michigan Math. J.},
volume={3},
year={1957},
pages={1--3} }
\bib{A72}{article}{
author={Apostol, Constantin},
title={Commutators on $\ell^p$ spaces},
journal={Rev. Roumaine Math. Pures Appl.},
volume={17},
year={1972},
pages={1513--1534} }
\bib{A73}{article}{
author={Apostol, Constantin},
title={Commutators on $c_0$ and $\ell^\infty$ spaces},
journal={Rev. Roumaine Math. Pures Appl.},
volume={18},
year={1973},
pages={1025--1032} }
\bib{Ba87}{article}{
author={Banaszczyk, Wojciech},
title={The Steinitz constant of the plane},
journal={J. reine angew. Math.},
volume={373},
year={1987},
pages={218--220} }
\bib{Ba90}{article}{
author={Banaszczyk, Wojciech},
title={A note on the Steinitz constant of the Euclidean plane},
journal={C. R. Math. Rep. Acad. Sci. Canada},
volume={12},
year={1990},
pages={97--102} }
\bib{BP65}{article}{
author={Brown, Arlen},
author={Pearcy, Carl},
title={Structure of commutators of operators},
journal={Ann. of Math. (2)},
volume={82},
year={1965},
pages={112--127} }
\bib{BP66}{article}{
author={Brown, Arlen},
author={Pearcy, Carl},
title={Commutators in factors of type III},
journal={Canad. J. Math.},
volume={18},
year={1966},
pages={1152--1160} }
\bib{B86}{article}{
author={Brown, Lawrence G.},
title={Lidskii's theorem in the type II case},
conference={
title={Geometric methods in operator algebras},
address={Kyoto},
date={1983}
},
book={
series={Pitman Res. Notes Math. Ser.},
volume={123},
publisher={Longman Sci. Tech.},
address={Harlow},
date={1986}
},
pages={1--35} }
\bib{CD09}{article}{
author={Collins, Beno\^it},
author={Dykema, Ken},
title={On a reduction procedure for Horn inequalities in finite von Neumann algebras},
journal={Oper. Matrices},
volume={3},
year={2009},
pages={1--40} }
\bib{Co76}{article}{
author={Connes, Alain},
title={Classification of injective factors},
journal={Ann. Math.},
volume={104},
pages={73--115},
year={1976} }
\bib{D09}{article}{
author={Dosev, Detelin},
title={Commutators on $\ell_1$},
journal={J. Funct. Anal.},
volume={256},
year={2009},
pages={3490--3509} }
\bib{DJ10}{article}{
author={Dosev, Detelin},
author={Johnson, William B.},
title={Commutators on $\ell_\infty$},
journal={Bull. London Math. Soc.},
volume={42},
year={2010},
pages={155--169} }
\bib{DH04}{article}{
author={Dykema, Ken},
author={Haagerup, Uffe},
title={Invariant subspaces of the quasinilpotent DT--operator},
journal={J. Funct. Anal.},
volume={209},
year={2004},
pages={332--366} }
\bib{FH80}{article}{
author={Fack, Thierry},
author={de la Harpe, Pierre},
title={Sommes de commutateurs dans les alg\`ebres de von Neumann finies continues},
journal={Ann. Inst. Fourier (Grenoble)},
volume={30},
year={1980},
pages={49--73} }
\bib{GS80}{article}{
author={Grinberg, V.S.},
author={Sewast'janow, S.V.},
title={Regarding the value of Steinitz's constant},
journal={Funktsional. Anal. i Prilozhen},
volume={14},
year={1980},
pages={56--57},
translation={
journal={Functional Anal. and Appl.},
volume={14},
year={1980},
pages={125--126}
} }
\bib{Had98}{article}{
author={Hadwin, Don},
title={Free entropy and approximate equivalence in von Neumann algebras},
conference={
title={Operator algebras and operator theory},
address={Shanghai},
date={1997}
},
book={
series={Contemp. Math.},
volume={228},
publisher={Amer. Math. Soc.},
address={Providence, RI},
year={1998}
},
pages={111--131} }
\bib{H69}{article}{
author={Halpern, Herbert},
title={Commutators in properly infinite von Neumann algebras},
journal={Trans. Amer. Math. Soc.},
volume={139},
year={1969},
pages={55--73} }
\bib{J72}{article}{
author={Janssen, Gerhard},
title={Restricted ultraproducts of finite von Neumann algebras},
conference={
title={Contributions to non-standard analysis},
address={Oberwolfach},
date={1970},
},
book={
series={Studies in Logic and Found. Math.},
volume={69},
publisher={North--Holland},
address={Amsterdam},
date={1972}
},
pages={101--114} }
\bib{M06}{article}{
author={Marcoux, Laurent},
title={Sums of small numbers of commutators},
journal={J. Operator Theory},
volume={56},
year={2006},
pages={111--142} }
\bib{McD70}{article}{
author={McDuff, Dusa},
title={Central sequences and the hyperfinite factor},
journal={Proc. London Math. Soc. (3)},
volume={21},
year={1970},
pages={443--461} }
\bib{MW79}{article}{
author={Murphy, Gerard J.},
author={West, T. T.},
title={Spectral radius formulae},
journal={Proc. Edinburgh Math. Soc.},
volume={22},
year={1979},
pages={271--275} }
\bib{PT69}{article}{
author={Pearcy, Carl},
author={Topping, David},
journal={J. Funct. Anal.},
title={Commutators and certain II$_1$--factors},
volume={3},
year={1969},
pages={69--78} }
\bib{P83}{book}{
author={Petersen, Karl},
title={Ergodic theory},
publisher={Cambridge Univ. Press.},
series={Cambridge studies in advanced mathematics},
volume={2},
year={1983} }
\bib{S36}{article}{
author={Shoda, Kenjiro},
title={Einige S\"atze \"uber Matrizen},
journal={Japanese J. Math.},
volume={13},
year={1936},
pages={361--365} }
\bib{SS08}{book}{
author={Sinclair, Allan M.},
author={Smith, Roger R.},
title={Finite von Neumann algebras and masas},
series={London Mathematical Society Lecture Note Series},
volume={351},
publisher={Cambridge University Press},
address={Cambridge},
year={2008} }
\bib{St13}{article}{
author={Steinitz, Ernst},
title={Bedingt konvergente Reihen und konvexe Systeme},
journal={J. reine angew. Math.},
volume={143},
year={1913},
pages={128--175} }
\bib{T08}{article}{
author={Tucci, Gabriel},
title={Some quasinilpotent generators of the hyperfinite II$_1$ factor},
journal={J. Funct. Anal.},
volume={254},
year={2008},
pages={2969--2994} }
\bib{W54}{article}{
author={Wright, Fred},
title={A reduction for algebras of finite type},
journal={Ann. of Math. (2)},
volume={60},
year={1954},
pages={560--570} }
\end{biblist} \end{bibdiv}
\end{document}
\begin{document}
\title[M-Function Asymptotics and Borg-type Theorems]{ Weyl-Titchmarsh $M$-Function Asymptotics, Local Uniqueness Results, Trace Formulas, and Borg-type Theorems for Dirac Operators} \author[Clark and Gesztesy]{Steve Clark and Fritz Gesztesy} \address{Department of Mathematics and Statistics, University of Missouri-Rolla, Rolla, MO 65409, USA} \email{sclark@umr.edu} \urladdr{http://www.umr.edu/\~{ }clark} \address{Department of Mathematics, University of Missouri, Columbia, MO 65211, USA} \email{fritz@math.missouri.edu\newline \indent{\it URL:} http://www.math.missouri.edu/people/fgesztesy.html} \dedicatory{Dedicated to F.~V.~Atkinson, one of the pioneers of this subject.}
\thanks{Supported in part by NSF grant INT-9810322.} \subjclass{Primary 34B20, 34E05, 34L40; Secondary 34A55.} \keywords{Weyl-Titchmarsh matrices, high-energy expansions, uniqueness results, trace formulas, Borg theorems, Dirac operators.}
\begin{abstract} We explicitly determine the high-energy asymptotics for Weyl-Titchmarsh matrices associated with general
Dirac-type operators on half-lines and on ${\mathbb{R}}$. We also prove new local uniqueness results for Dirac-type operators in terms of exponentially small differences of Weyl-Titchmarsh matrices. As concrete applications of the asymptotic high-energy expansion we derive a trace formula for Dirac operators and use it to prove a Borg-type theorem. \end{abstract}
\maketitle
\section{Introduction}\label{s1}
While the high-energy asymptotics, $|z|\to\infty$, of scalar-valued Weyl-Titchmarsh functions, $m_+(z,x_0)$, associated with general half-line Dirac-type differential expressions of the form \begin{equation} J\frac{d}{dx}-B(x), \quad J=\begin{pmatrix} 0 &-1\\1 &0 \end{pmatrix} \end{equation} and $B$ a self-adjoint $2\times 2$ matrix with real-valued coefficients, $B^{(n)}\in L^1([x_0,c])^{2\times 2}$ for some $n\in{\mathbb{N}}_0(={\mathbb{N}}\cup\{0\})$ and all $c>x_0$, received some attention over the past two decades as can be inferred, for instance, from \cite{EHS83}, \cite{Ha85}, \cite{HKS89a}, \cite{HKS89b}, \cite{Mi91} (and the literature therein), it may perhaps come as a surprise that the corresponding matrix extension of this problem, considering general matrix-valued differential expressions of the type \begin{equation} J\frac{d}{dx}-B(x), \quad J=\begin{pmatrix} 0 &-I_m\\I_m &0\end{pmatrix} \end{equation} with $I_m$ the identity matrix in ${\mathbb{C}}^m$, $m\in{\mathbb{N}}$, and $B$ a self-adjoint $2m\times 2m$ matrix satisfying $B^{(n)}\in L^1([x_0,c])^{2m\times 2m}$ for some $n\in{\mathbb{N}}_0$ and all $c>x_0$, apparently, received no attention at all. (It should be noted that this observation discounts papers in the special scattering theoretic case concerned with short-range coefficients
$B^{(n)}\in L^1([x_0,\infty); (1+|x|)dx)^{2m\times 2m}$, where
iterations of Volterra-type integral equations yield the asymptotic high-energy expansion of $M_+(z,x_0)$ as $|z|\to\infty$ to any order, cf.~Lemma~\ref{l4.1}.) This is not because of a lack of interest in this type of problem (we will discuss its relevance below), but simply since it is a nontrivial one, which, in many of its aspects, must be regarded as more difficult than the corresponding matrix-valued Schr\"odinger operator case, which in turn, was only very recently settled in \cite{CG99}. The results proven in this paper show that in leading order (and independently of the self-adjoint boundary condition chosen at $x_0$), \begin{equation}\label{1.1} M_+(z,x_0) \underset{\substack{\abs{z}\to\infty\\ z\in C_\varepsilon}}{=} iI_{m} +o(1), \end{equation} where $C_{\varepsilon}$ denotes the open sector in the open upper complex half-plane ${\mathbb{C}}_+$ with vertex at zero, symmetry axis along the positive imaginary axis, and opening angle $\varepsilon$, with $0<\varepsilon <\pi/2$. We are interested in proving the asymptotic expansion \eqref{1.1} and especially in its higher-order analogs in powers of $1/z$, under optimal smoothness hypotheses on $B$. Such results are then also derived for the $2m\times 2m$ analog $M(z,x)$ of $M_+(z,x)$ associated with Dirac-type operators on ${\mathbb{R}}$.
Our principal motivation in studying this problem stems from our general interest in operator-valued Herglotz functions (cf.~\cite{Ca76}, \cite{GKMT98}, \cite{GKM00}, \cite{GM99}, \cite{GM99a}, \cite{GMN98}, \cite{GMT98}, \cite{GT97}, \cite{Sh71}) and their possible applications in the areas of inverse spectral theory and completely integrable systems. More precisely, using higher-order asymptotic expansions of $M_+(z,x)$, one can prove trace formulas for $B(x)$ and certain higher-order differential polynomials in $B(x)$ (similar in spirit to an approach pioneered in \cite{GS96} (see also \cite{GH97}, \cite{GHSZ95}) in connection with Schr\"odinger operators). These trace formulas, in turn, can then be used to prove various results in inverse spectral theory for matrix-valued Dirac-type operators $D=J\frac{d}{dx}-B$ in $L^2({\mathbb{R}})^{2m}$. For instance, using one of the principal results of this paper, Theorem~\ref{t4.6}, and its straightforward application to the asymptotic high-energy expansion of the diagonal Green's matrix $G(z,x,x)=(D-z)^{-1}(x,x)$ of $D$, the following matrix-valued analog of a classical uniqueness result of Borg \cite{Bo46} for one-dimensional Schr\"odinger operators will be proven in the context of Dirac-type operators in Section~\ref{s6}.
\begin{theorem} \label{t1.1} Suppose that $B$ is of the normal form $B(x)=\left(\begin{smallmatrix} B_{1,1}(x) & B_{1,2}(x)\\B_{1,2}(x) & -B_{1,1}(x)\end{smallmatrix}\right)$, with $B_{1,1}(x)$ and $B_{1,2}(x)$ self-adjoint for a.e.~$x\in{\mathbb{R}}$, and assume that $D$ is reflectionless $($e.g., $B$ is periodic and $D$ has uniform spectral multiplicity $2m$$)$. In addition, suppose that $D$ has spectrum equal to ${\mathbb{R}}$. Then, \begin{equation} B(x)=0 \text{ for a.e.~$x\in{\mathbb{R}}$}. \label{1.2} \end{equation} \end{theorem}
For related results see, for instance, \cite{Am93}, \cite{AG96}, \cite{CJ87}, \cite{Ge91}, \cite{GSS91}, \cite{GJ84}, \cite{GG93}. Incidentally, the higher-order differential polynomials in $B(x)$ just alluded to represent the Ablowitz-Kaup-Newell-Segur (AKNS) or Zakharov-Shabat (ZS) invariants (i.e., densities associated with the AKNS-ZS conservation laws) and hence provide a link to infinite-dimensional completely integrable systems (cf., e.g., \cite{AK90}, \cite{Ch96}, \cite{Di91}, \cite{Du77}, \cite{Du83}, \cite{Ma78}, \cite{Ma88}, \cite{Sa88}, \cite{Sa99}, \cite{Sa94a}, \cite{Sa99a}, and the references therein), especially, hierarchies of matrix-valued (i.e., nonabelian) nonlinear Schr\"odinger equations.
Although various aspects of inverse spectral theory for scalar Schr\"odinger, Jacobi, and Dirac-type operators, and more generally, for $2\times 2$ Hamiltonian systems, are well-understood by now (cf.~the extensive list of references provided in \cite{GKM00}), the corresponding theory for such operators and Hamiltonian systems with $m\times m$ matrix-valued coefficients, $m\in{\mathbb{N}}$, is still in its infancy. A particular inverse spectral theory aspect we have in mind is that of determining isospectral sets (manifolds) of such systems. It may, perhaps, come as a surprise that determining the isospectral set of Hamiltonian systems with matrix-valued periodic coefficients is a completely open problem. It appears to be no exaggeration to claim that absolutely nothing seems to be known about the corresponding isospectral sets of periodic Dirac-type operators in the case $m\geq 2$.\ (More or less the same ignorance applies to Schr\"odinger, Jacobi, and more generally, to periodic $2m\times 2m$ Hamiltonian systems with $m\geq 2$.) Theorem~\ref{t1.1} can be viewed as a first (and very modest) step toward the construction of isospectral manifolds of certain classes of matrix-valued potential coefficients $B$ for Dirac-type operators.
However, asymptotic high-energy expansions for Weyl-Titchmarsh matrices on half-lines and on ${\mathbb{R}}$, their applications to trace formulas for $B(x)$, and the derivation of Borg-type theorems for Dirac operators are not the only topics under consideration in this paper. We also provide a comprehensive and new treatment of local uniqueness theorems for $B$
in terms of exponentially close Weyl-Titchmarsh matrices. More precisely, in Section \ref{s5} we will prove the following result ($\|\cdot\|_{{\mathbb{C}}^{m\times m}}$ denotes a matrix norm on ${\mathbb{C}}^{m\times m}$).
\begin{theorem} \label{t1.2} Fix $x_0\in{\mathbb{R}}$ and suppose that $B_j\in L^1([x_0,x_0+R])^{2m\times 2m}$ for all $R>0$, possesses the normal form given in Theorem~\ref{t1.1} a.e.~on $(x_0,\infty)$, $j=1,2$. Denote by $M_{j,+}(z,x)$, $x\geq x_0$, the unique Weyl-Titchmarsh matrices corresponding to the half-line Dirac-type operators in $L^2([x_0,\infty))^{2m}$ associated with $B_j$, $j=1,2$ $($fixing some self-adjoint boundary condition at $x_0$$)$. Then, \begin{equation} \text{if for some $a>0$, }\, B_1(x)=B_2(x) \, \text{ for a.e. $x\in (x_0,x_0+a)$,} \label{1.3} \end{equation} one obtains \begin{equation}
\|M_{1,+}(z,x_0)-M_{2,+}(z,x_0)\|_{{\mathbb{C}}^{m\times m}}\underset{\substack{\abs{z} \to\infty\\ z\in \rho_{+}}}{=} O\big(e^{-2\text{\rm Im}(z)a}\big) \label{1.4} \end{equation} along any ray $\rho_+\subset{\mathbb{C}}_+$ with $0<\arg(z)<\pi$ $($and for all self-adjoint boundary conditions at $x_0$$)$. Conversely, if $m>1$, assume in addition that $B_j\in L^\infty([x_0,x_0+a])^{2m\times 2m}$, $j=1,2$. Moreover, suppose that for all $\varepsilon >0$, \begin{equation}
\|M_{1,+}(z,x_0)-M_{2,+}(z,x_0)\|_{{\mathbb{C}}^{m\times m}}\underset{\substack{|z|\to\infty\\z\in \rho_{+,\ell}}}{=} O\big(e^{-2\text{\rm Im}(z)(a-\varepsilon)}\big), \quad \ell=1,2, \label{1.5} \end{equation} along a ray $\rho_{+,1}\subset{\mathbb{C}}_+$ with $0<\arg(z)<\pi/2$ and along a ray $\rho_{+,2}\subset{\mathbb{C}}_+$ with $\pi/2<\arg(z)<\pi$. Then \begin{equation} B_1(x)=B_2(x) \text{ for a.e. } x\in [x_0,x_0+a]. \label{1.6} \end{equation} \end{theorem}
\noindent We also prove the analog of Theorem~\ref{t1.2} for the $2m\times 2m$ Weyl-Titchmarsh matrices $M_j(z,x)$ associated with Dirac-type operators on ${\mathbb{R}}$ corresponding to $B_j$, $j=1,2$.
In the context of scalar Schr\"odinger operators, the analog of Theorem~\ref{t1.2} was first proved by Simon \cite{Si98}. An alternative proof, applicable to matrix-valued Schr\"odinger operators, was presented in \cite{GS99} (cf.~also \cite{GKM00}). More recently, yet another proof was found by Bennewitz \cite{Be00} (following some ideas in \cite{Bo52}). In fact, our proof of Theorem~\ref{t1.2} is based on that of Bennewitz \cite{Be00} with additional modifications necessary to accommodate Dirac-type operators. These results extend the classical (global) uniqueness results due to Borg \cite{Bo52} and Marchenko \cite{Ma50}, \cite{Ma52} which state that half-line $m$-functions uniquely determine the corresponding potential coefficient. The Dirac-type results such as Theorem~\ref{t1.2} appear to be new, even in the special case $m=1$. Previous results in the Dirac case focused on global uniqueness questions only. We refer to Gasymov and Levitan \cite{GL66} in the case $m=1$ and to Lesch and Malamud \cite{LM00} in the matrix case $m\in{\mathbb{N}}$.
Next, we briefly sketch the content of each section. Section~\ref{s2} provides the necessary background results on Dirac-type operators and recalls the basic notions of Weyl-Titchmarsh theory for Hamiltonian systems on a half-line as well as on ${\mathbb{R}}$, as developed in detail by Hinton and Shaw in a series of papers \cite{HS81}--\cite{HS86} (see also \cite{At64}, \cite{Jo87}, \cite{HSH93}, \cite{HSH97}, \cite{JNO00}, \cite{KR74}, \cite{KS88}, \cite{Kr89a}, \cite{Kr89b}, \cite{LM00a}, \cite{Or76}, \cite{Sa94a}). In fact, most of these references deal with more general singular Hamiltonian systems and hence we specialize some of this material to the Dirac-type operator case at hand. While our treatment of Weyl-Titchmarsh theory in Section~\ref{s2} is somewhat detailed, the results presented appear to be of vital importance for our asymptotic expansions in Sections~\ref{s3} and \ref{s4}. At any rate, we intended to present this material as concisely as possible.
Section~\ref{s3} is devoted to a proof of the leading-order term in the asymptotic high-energy expansion \eqref{1.1} of $M_+(z,x)$ for the Dirac case. We follow the strategy developed in the context of matrix-valued Schr\"odinger operators in our joint paper \cite{CG99} by appealing to the theory of Riccati equations. By doing so, we follow the lead of Atkinson who highlighted the importance of Riccati equations, in this regard, first in \cite{At81}, subsequently in \cite{At82}, \cite{At88a} and ultimately in the unpublished manuscript \cite{At88} in which he obtains the leading-order term in the asymptotic high-energy expansion of $M_+(z,x)$ for the matrix-valued Schr\"odinger case.
Theorems \ref{t3.6} and \ref{t3.7} contain two characterizations of the {\it Weyl disk} (cf.~Definition~\ref{dWD}). These characterizations provide an answer in Remark~\ref{r3.3} to a point raised in \cite{CG99} concerning the nature of the Weyl disk. {}From these characterizations of the Weyl disk, we obtain a realization of $M_+(z,x)$ as a differentiable function of $x$ which satisfies a certain Riccati equation globally and whose imaginary part is strictly positive. We observe, in Remark~\ref{r3.6a}, that the totality of Weyl disks, $D_+(z,x)$ (cf.~Definition~\ref{dLWD}), represents the phase space for these solutions. Thus, the asymptotic expansion we seek represents the asymptotic high-energy behavior of certain solutions of a given Riccati equation.
Section~\ref{s4} develops a systematic higher-order high-energy asymptotic expansion of $M_+(z,x)$ as
$|z|\to\infty$, combining the leading-order asymptotic result in Section~\ref{s3} with matrix-valued extensions of some methods based again on an associated Riccati equation. More precisely, following a technique in \cite{GS98} in the scalar Schr\"odinger operator context, we show how to derive the general high-energy asymptotic expansion of
$M_+(z,x)$ as $|z|\to\infty$ by combining Atkinson's leading-order term in \eqref{1.1} and the corresponding asymptotic expansion of $M_+(z,x)$ in the special case where $B$ has compact support. Section~\ref{s5} then contains our new local uniqueness results for $B(x)$ in terms of exponentially small differences of Weyl-Titchmarsh matrices as indicated in Theorem~\ref{t1.2}. Finally, in Section~\ref{s6} we derive a new trace formula for Dirac-type operators $D$ in $L^2({\mathbb{R}})^{2m}$, using appropriate Herglotz representation results for the diagonal Green's matrix $G(z,x,x)$ discussed in Section~\ref{s2}. Moreover, we derive the Borg-type Theorem~\ref{t1.1} for Dirac operators and close with an application to the case of periodic potential coefficients $B$.
\section{Weyl-Titchmarsh Matrices for Hamiltonian Systems} \label{s2}
We now turn to the Weyl-Titchmarsh theory for Hamiltonian systems as developed by Hinton and Shaw in a series of papers devoted to the spectral theory of (singular) Hamiltonian systems \cite{HS81}--\cite{HS86} (see also \cite{HSH93}, \cite{HSH97}, \cite{Kr89a}, \cite{Kr89b}, \cite{Sa92}, \cite{Sa94a}, \cite{Sa99}, \cite{Sa99a}). Throughout this paper all matrices will be considered over the field of complex numbers ${\mathbb{C}}$. The basic assumptions throughout are described in the following three hypotheses.
\begin{hypothesis} \label{h2.1} Fix $m\in{\mathbb{N}}$ and define the $2m\times 2m$ matrix \begin{subequations}\label{2.1} \begin{equation}\label{2.1a} J=\begin{pmatrix}0& -I_m \\ I_m & 0 \end{pmatrix}, \end{equation} where $I_m$ denotes the identity matrix in ${\mathbb{C}}^{m\times m}$. Suppose \begin{equation} A_{j,k}, B_{j,k} \in L_{\text{\rm{loc}}}^1({\mathbb{R}})^{m\times m}, \quad j,k = 1,2 \end{equation} and assume \begin{align} A(x)&=\begin{pmatrix}A_{1,1}(x)&A_{1,2}(x) \\A_{2,1}(x) & A_{2,2}(x) \end{pmatrix}\ge 0, \label{2.1c}\\ \quad B(x)&=\begin{pmatrix}B_{1,1}(x)&B_{1,2}(x) \\B_{2,1}(x) & B_{2,2}(x) \end{pmatrix}=B(x)^*,\label{2.1d} \end{align} \end{subequations} for a.e.~$x\in {\mathbb{R}}$. \end{hypothesis}
\noindent $L_{\text{\rm{loc}}}^1({\mathbb{R}})$ denotes the set of locally integrable functions on ${\mathbb{R}}$. With $M\in{\mathbb{C}}^{m\times m}$, let $M^t$ denote the transpose, let $M^*$ denote the adjoint or conjugate transpose of the matrix $M$ and let $M\ge 0$ and $M\le 0$ denote nonnegative and nonpositive matrices $M$ (i.e., positive and negative semidefinite matrices). Moreover, let $\text{\rm Im}(M)=(M-M^*)/(2i)$ and $\text{\rm Re}(M)=(M+M^*)/2$ denote, respectively, the imaginary and real parts of the matrix $M$.
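For readers experimenting numerically with the objects of this section, the matrix real and imaginary parts just defined are easily implemented. The following sketch is our own illustration (the function names \texttt{re\_part}, \texttt{im\_part} are ours, not from the text); it checks that $\text{\rm Re}(M)$ and $\text{\rm Im}(M)$ are self-adjoint and that $M=\text{\rm Re}(M)+i\,\text{\rm Im}(M)$:

```python
import numpy as np

def im_part(M):
    """Matrix imaginary part: Im(M) = (M - M*)/(2i)."""
    return (M - M.conj().T) / 2j

def re_part(M):
    """Matrix real part: Re(M) = (M + M*)/2."""
    return (M + M.conj().T) / 2

M = np.array([[1 + 2j, 3j], [0.5, 4 - 1j]])
# Both parts are self-adjoint, and M decomposes as Re(M) + i Im(M).
assert np.allclose(im_part(M), im_part(M).conj().T)
assert np.allclose(re_part(M), re_part(M).conj().T)
assert np.allclose(re_part(M) + 1j * im_part(M), M)
```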
Given Hypothesis~\ref{h2.1}, our Hamiltonian system is given by \begin{subequations}\label{HS} \begin{equation}\label{HSa} J \varPsi'(z,x)=(zA(x)+B(x))\varPsi(z,x), \quad z\in{\mathbb{C}} \end{equation} for a.e. $x\in {\mathbb{R}}$, where $z$ plays the role of the spectral parameter, and where \begin{equation}\label{HSb} \varPsi(z,x) = \begin{pmatrix}\psi_1(z,x)\\ \psi_2(z,x) \end{pmatrix}, \quad \psi_j(z,\cdot)\in AC_{\text{\rm{loc}}}({\mathbb{R}})^{m\times r}, \,\, j=1,2. \end{equation} \end{subequations} $AC_{\text{\rm{loc}}}({\mathbb{R}})$ denotes the set of locally absolutely continuous functions on ${\mathbb{R}}$. The parameter $r$ in \eqref{HSb} is context dependent and satisfies $1\leq r \leq m$.
For our discussions of the Weyl-Titchmarsh theory for the Hamiltonian system \eqref{HS}, we introduce the definiteness assumption found in Atkinson~\cite{At64}.
\begin{hypothesis}\label{h2.2} For all nontrivial solutions $\varPsi$ of \eqref{HSa} with $r=1$ in \eqref{HSb}, we assume that \begin{equation}\label{2.3} \int_{a}^b dx \, \varPsi(z,x)^*A(x)\varPsi(z,x) > 0\; , \end{equation} for every interval $(a,b)\subset {\mathbb{R}}$, $a<b$. \end{hypothesis}
A principal example of such a system, and the subject of the present paper, is the Dirac-type system obtained when \begin{equation}\label{DS} A(x) = I_{2m}; \end{equation} another example, the subject of \cite{CG99}, is the matrix-valued Schr\"odinger system obtained when \begin{equation}\label{SS} A(x)= \begin{pmatrix}I_m& 0 \\ 0 & 0 \end{pmatrix}, \qquad B(x) = \begin{pmatrix}-Q(x)& 0 \\ 0 & I_m \end{pmatrix}. \end{equation} When \eqref{SS} holds, we note that \eqref{HSa} is equivalent to \begin{align} -\psi_1^{\prime\prime}(z,x)+Q(x)\psi_1(z,x)& =z\psi_1(z,x), \label{2.6} \\ \psi_2(z,x)&=\psi_1^\prime(z,x) \label{2.7} \end{align} for a.e.~$x\in{\mathbb{R}}$. Hypothesis \ref{h2.2} clearly holds in both examples.
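The reduction of \eqref{HSa} under the choice \eqref{SS} to \eqref{2.6}, \eqref{2.7} can also be confirmed symbolically. The following sketch is our own illustration (for $m=1$, with symbol names of our choosing): it substitutes $\psi_2=\psi_1^\prime$ and checks that the residual of \eqref{HSa} reduces to the Schr\"odinger equation:

```python
import sympy as sp

x, z = sp.symbols('x z')
psi1 = sp.Function('psi1')(x)
Q = sp.Function('Q')(x)

# Schroedinger choice (SS) with m = 1, together with psi2 = psi1' (cf. (2.7)).
psi2 = sp.diff(psi1, x)
Psi = sp.Matrix([psi1, psi2])
J = sp.Matrix([[0, -1], [1, 0]])
A = sp.Matrix([[1, 0], [0, 0]])
B = sp.Matrix([[-Q, 0], [0, 1]])

residual = J * sp.diff(Psi, x) - (z * A + B) * Psi
# The second component vanishes identically; the first is, up to sign,
# the Schroedinger equation -psi1'' + Q psi1 = z psi1 of (2.6).
assert residual[1] == 0
assert sp.simplify(residual[0] + sp.diff(psi1, x, 2) + (z - Q) * psi1) == 0
```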
Next, we introduce a set of matrices that will serve as boundary data for separated boundary conditions.
\begin{hypothesis}\label{h2.3} Let $\gamma = (\gamma_1\; \gamma_2)$ with $\gamma_j \in {\mathbb{C}}^{m\times m}$, $j=1,2$. We assume that $\gamma$ satisfies the following conditions, \begin{subequations}\label{BD} \begin{equation}\label{BDa} \text{\rm{rank}} (\gamma) = m, \end{equation} and that either \begin{equation}\label{BDc} \text{\rm Im} (\gamma_{2}\gamma_{1}^*) \le 0, \quad \text{or} \quad \text{\rm Im} (\gamma_{2}\gamma_{1}^*) \ge 0, \end{equation} where $(2i)^{-1}\, \gamma J\gamma^*=\text{\rm Im} (\gamma_{2}\gamma_{1}^*)$. Given the rank condition in \eqref{BDa}, we assume, without loss of generality in what follows, the normalization \begin{equation}\label{BDd} \gamma\gamma^* = I_m. \end{equation} \end{subequations} \end{hypothesis}
\begin{remark} \label{r2.4} With $\alpha\in{\mathbb{C}}^{m\times 2m}$, the conditions \begin{equation} \alpha\alpha^*=I_m, \quad \alpha J\alpha^*=0 \label{2.8e} \end{equation} imply that $\alpha$ satisfies Hypothesis~\ref{h2.3}, and they explicitly read \begin{equation} \alpha_1\alpha_1^* +\alpha_2\alpha_2^*=I_m, \quad \alpha_2\alpha_1^* -\alpha_1\alpha_2^*=0. \label{2.8f} \end{equation} In fact, one also has \begin{equation} \alpha_1^*\alpha_1 +\alpha_2^*\alpha_2=I_m, \quad \alpha_2^*\alpha_1 -\alpha_1^*\alpha_2=0, \label{2.8g} \end{equation} as is clear from \begin{equation} \begin{pmatrix} \alpha_1 & \alpha_2\\ -\alpha_2 & \alpha_1 \end{pmatrix} \begin{pmatrix} \alpha_1^* & -\alpha_2^*\\ \alpha_2^* & \alpha_1^* \end{pmatrix}=I_{2m}=\begin{pmatrix} \alpha_1^* & -\alpha_2^*\\ \alpha_2^* & \alpha_1^* \end{pmatrix}\begin{pmatrix} \alpha_1 & \alpha_2\\ -\alpha_2 & \alpha_1 \end{pmatrix}, \label{2.8h} \end{equation} since any left inverse matrix is also a right inverse, and vice versa. Moreover, from \eqref{2.8g} we obtain \begin{equation}\label{2.8i} \alpha^*\alpha J + J\alpha^*\alpha = J. \end{equation} \end{remark}
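A concrete family satisfying \eqref{2.8e} is $\alpha=(\cos(t)I_m\;\; \sin(t)I_m)$, $t\in{\mathbb{R}}$. The following numerical sketch is our own illustration (the size $m$ and angle $t$ are arbitrary choices); it verifies \eqref{2.8e} and the derived identity \eqref{2.8i}:

```python
import numpy as np

m, t = 2, 0.7  # arbitrary block size and angle
I = np.eye(m)
alpha1, alpha2 = np.cos(t) * I, np.sin(t) * I
alpha = np.hstack([alpha1, alpha2])  # an m x 2m matrix
J = np.block([[np.zeros((m, m)), -I], [I, np.zeros((m, m))]])

# The defining conditions (2.8e): alpha alpha* = I_m, alpha J alpha* = 0 ...
assert np.allclose(alpha @ alpha.conj().T, I)
assert np.allclose(alpha @ J @ alpha.conj().T, np.zeros((m, m)))
# ... and the consequence (2.8i): alpha* alpha J + J alpha* alpha = J.
AA = alpha.conj().T @ alpha  # a 2m x 2m matrix
assert np.allclose(AA @ J + J @ AA, J)
```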
With $\alpha\in{\mathbb{C}}^{m\times 2m}$ satisfying \eqref{2.8e}, let $\Psi(z,x,x_0,\alpha)$ be a normalized fundamental system of solutions of \eqref{HS} at some $x_0\in{\mathbb{R}}$. That is, $\Psi(z,x,x_0,\alpha)$ satisfies \eqref{HS} for a.e.\ $x\in{\mathbb{R}}$, and \begin{subequations}\label{FS} \begin{equation}\label{FSa} \Psi(z,x_0,x_0,\alpha)=(\alpha^* \; J\alpha^*)= \begin{pmatrix} \alpha_1^* & -\alpha_2^* \\ \alpha_2^* & \alpha_1^* \end{pmatrix}. \end{equation} We partition $\Psi(z,x,x_0,\alpha)$ as follows, \begin{align} \Psi(z,x,x_0,\alpha)&=(\Theta(z,x,x_0,\alpha)\; \Phi(z,x,x_0,\alpha))\label{FSb}\\ &=\begin{pmatrix}\theta_1(z,x,x_0,\alpha) & \phi_1(z,x,x_0,\alpha)\\ \theta_2(z,x,x_0,\alpha)& \phi_2(z,x,x_0,\alpha) \end{pmatrix},\label{FSc} \end{align} \end{subequations} where $\theta_j(z,x,x_0,\alpha)$ and $\phi_j(z,x,x_0,\alpha)$ for $j=1,2$ are $m\times m$ matrices, entire with respect to $z\in{\mathbb{C}}$, and normalized according to \eqref{FSa}.~One can now prove the following result.
\begin{lemma}\label{l2.4} Let $ \Theta(z,x,x_0,\alpha)$ and $\Phi(z,x,x_0,\alpha)$ be defined in \eqref{FSb} with $\alpha$ and $\beta$ satisfying Hypothesis~\ref{h2.3} and with $\text{\rm Im}(\alpha_2\alpha_1^*)=0$. Then, for $c\ne x_0$, $\beta\Phi(z,c,x_0,\alpha)$ is singular if and only if $z$ is an eigenvalue for the regular boundary value problem given by \eqref{HSa} on $[x_0,c]$ if $c>x_0$ and on $[c,x_0]$ if $c<x_0$ together with the separated boundary conditions \begin{equation}\label{BC} \alpha\varPsi(z,x_0)=0, \quad \beta\varPsi(z,c)=0, \end{equation} where $\varPsi (z,x)=(\psi_1(z,x)^t\; \psi_2(z,x)^t)^t$ with $\psi_j(z,\cdot)\in AC([x_0,c])$ if $c>x_0$ and $\psi_j(z,\cdot)\in AC([c,x_0])$ if $c<x_0$, $j=1,2$. \end{lemma}
\noindent Note that the regular boundary value problem described in Lemma~\ref{l2.4} is self-adjoint when $\text{\rm Im}(\beta_2\beta_1^*)=0$.
In light of Lemma~\ref{l2.4}, it is possible to introduce, under appropriate conditions, the $m\times m$ matrix-valued function, $M(z,c,x_0,\alpha,\beta)$, as follows.
\begin{definition}\label{dMF} Let $ \Theta(z,x,x_0,\alpha)$, and $\Phi(z,x,x_0,\alpha)$ be defined in \eqref{FSb} with $\alpha$ and $\beta$ satisfying Hypothesis~\ref{h2.3} and with $\text{\rm Im}(\alpha_2\alpha_1^*)=0$. For $c\ne x_0$, and $\beta\Phi(z,c,x_0,\alpha)$ nonsingular let \begin{equation}\label{MF} M(z,c,x_0,\alpha,\beta) = -[\beta\Phi(z,c,x_0,\alpha)]^{-1}[\beta\Theta(z,c,x_0,\alpha)]. \end{equation} $M(z,c,x_0,\alpha,\beta)$ is said to be the {\it Weyl-Titchmarsh $M$-function} for the regular boundary value problem described in Lemma~\ref{l2.4}. \end{definition}
The Weyl-Titchmarsh $M$-function is an $m\times m$ matrix-valued function with meromorphic entries whose poles correspond to eigenvalues for the regular boundary value problem given by \eqref{HSa} and \eqref{BC}. Moreover, if $M\in {\mathbb{C}}^{m\times m}$, and one defines \begin{equation}\label{2.14} U(z,x,x_0,\alpha)= \begin{pmatrix} u_1(z,x,x_0,\alpha)\\u_2(z,x,x_0,\alpha) \end{pmatrix}= \Psi(z,x,x_0,\alpha)\begin{pmatrix}I_m\\M\end{pmatrix}, \end{equation} with $u_j(z,x,x_0,\alpha)\in {\mathbb{C}}^{m\times m}$, $j=1,2$, then $U(z,x,x_0,\alpha)$ will satisfy the boundary condition at $x=c$ in \eqref{BC} whenever $M=M(z,c,x_0,\alpha,\beta)$. Intimately connected with the matrices introduced in Definition~\ref{dMF} is the set of $m\times m$ complex matrices known as the Weyl disk. Several characterizations of this set have appeared in the literature (see, e.g., \cite{At64}, \cite{At88a}, \cite{At88}, \cite{HSH93}, \cite{HS81}, \cite{Kr89a}, \cite{Or76}). We now mention two, and will introduce two others in Section~\ref{s3}, which we use in the derivation of the asymptotic expansions that are the subject of Sections \ref{s3} and \ref{s4}.
To describe this set, we first introduce the matrix-valued function $E_c(M)$: With $c\ne x_0$, $z\in{\mathbb{C}}\backslash{\mathbb{R}}$, and with $U(z,c,x_0,\alpha)$ defined by \eqref{2.14} in terms of a matrix $M\in{\mathbb{C}}^{m\times m}$, let \begin{equation}\label{2.380} E_c(M) = \sigma(x_0,c,z)U(z,c,x_0,\alpha)^*(iJ)U(z,c,x_0,\alpha), \end{equation} where \begin{equation}
\sigma(s,t,z)=\frac{(s-t)\text{\rm Im} (z)}{|(s-t)\text{\rm Im} (z)|},\quad \sigma(s,t)=\sigma(s,t,i),\quad \sigma(z)= \sigma(1,0,z), \end{equation} with $s\ne t$, and $s,t\in{\mathbb{R}}$.
\begin{definition}\label{dWD} Let the following be fixed: Real numbers $x_0$ and $c\ne x_0$, an $m\times 2m$ matrix $\alpha$ satisfying \eqref{2.8e}, and $z\in{\mathbb{C}}\backslash{\mathbb{R}}$. ${\mathcal D}(z,c,x_0,\alpha)$ will denote the collection of all $M\in {\mathbb{C}}^{m\times m}$ for which $E_c(M)\le 0$, where $E_c(M)$ is defined in \eqref{2.380}. ${\mathcal D}(z,c,x_0,\alpha)$ is said to be a {\it Weyl disk}. The set of $M\in {\mathbb{C}}^{m\times m}$ for which $E_c(M) = 0$ is said to be a {\it Weyl circle} (even when $m>1$). \end{definition}
This definition leads to a presentation that is a generalization of the description first given by Weyl~\cite{We10}; a presentation which is geometric in nature, involves the contractive matrices $V\in{\mathbb{C}}^{m\times m}$, such that $VV^*\le I_m$, and provides the justification for the geometric terms of circle and disk (cf., e.g., \cite{HS81}, \cite{HSH93}, \cite{Kr89a}, \cite{Or76}).
The disk has also been characterized in terms of matrices which satisfy Hypothesis~\ref{h2.3} and which serve as boundary data for the regular boundary value problem described in Lemma~\ref{l2.4} (cf., e.g., \cite{At88a}, \cite{At88}). More precisely, one could have used the following alternative definition. \\
\vspace*{-2mm}
\noindent {\bf Definition 2.7A.} ${\mathcal D}(z,c,x_0,\alpha)$ denotes the collection of all $M\in {\mathbb{C}}^{m\times m}$ obtained by the construction given in \eqref{MF} where $c\ne x_0$, $z\in{\mathbb{C}}\backslash{\mathbb{R}}$, where $\alpha$ and $\beta$ are the $m\times 2m$ matrices defined in Hypothesis~\ref{h2.3} for which $\sigma(c,x_0,z)\text{\rm Im}(\beta_2\beta_1^*)\ge 0$, and $\text{\rm Im}{(\alpha_2\alpha_1^*)}=0$. \\
\vspace*{-2mm} \noindent However, in this paper we take Definition~\ref{dWD} as our point of departure.
We note that the Weyl circle corresponds to the regular boundary value problems in Lemma~\ref{l2.4} with separated, self-adjoint boundary conditions. For convenience of the reader, and to achieve a reasonable level of completeness, we reproduce the corresponding short proof below.
\begin{lemma}[\cite{HS84}, \cite{HSH93}, \cite{Kr89a}]\label{l2.11} Let $M\in{\mathbb{C}}^{m\times m}$, $c\ne x_0$, and $z\in{\mathbb{C}}\backslash{\mathbb{R}}$. Then, $E_c(M)=0$ if and only if there is a $\beta\in {\mathbb{C}}^{m\times 2m}$ satisfying \eqref{2.8e} such that \begin{equation}\label{2.24} 0=\beta U(z,c,x_0,\alpha), \end{equation} where $U(z,c)=U(z,c,x_0,\alpha)$ is defined in \eqref{2.14} in terms of $M$. With $\beta$ so defined, \begin{equation}\label{2.25} M=-[\beta\Phi(z,c,x_0,\alpha)]^{-1}[\beta\Theta(z,c,x_0,\alpha)], \end{equation} that is, $M=M(z,c,x_0,\alpha,\beta)$. Moreover, $\beta$ may be chosen to satisfy \eqref{BDd}, and hence Hypothesis~\ref{h2.3}. \end{lemma}
\begin{proof} Let $z\in{\mathbb{C}}\backslash{\mathbb{R}}$, and suppose for a given $M\in{\mathbb{C}}^{m\times m}$ that there is a $\beta\in{\mathbb{C}}^{m\times 2m}$ which satisfies \eqref{2.8e} and such that \eqref{2.24} is satisfied. Given that $\beta J \beta^* = 2i\text{\rm Im} (\beta_2\beta_1^*)=0$, and given that $\text{\rm{rank}} (\beta)=\text{\rm{rank}}(J\beta^*)=m$, there is a nonsingular $C\in{\mathbb{C}}^{m\times m}$ such that $U(z,c) = J\beta^*C$. Hence, $E_c(M)=i\sigma(c,x_0,z)C^*\beta J\beta^*C=0$.
Upon showing that $\beta\Phi(z,c)=\beta\Phi(z,c,x_0,\alpha)$ is nonsingular, \eqref{2.25} will then follow from \eqref{2.24}. If $\beta\Phi(z,c)$ is singular, then there are nonzero vectors $v, w \in {\mathbb{C}}^{m}$ such that $\beta\Phi(z,c)v=0$, and such that $\Phi(z,c)v = J\beta^*w$. Let $\varPsi_j=\varPsi_j(z,x)$, $j=1,2$ denote solutions of \eqref{HSa} with $z=z_j$, $j=1,2$. Then, \begin{equation}\label{2.19} (\varPsi_1^*J\varPsi_2)'=(z_2 - \bar{z}_1)\varPsi_1^*A\varPsi_2. \end{equation} Using \eqref{2.19}, and recalling that $\Phi(z,x)$ is defined in \eqref{FS}, we obtain \begin{subequations}\label{2.230} \begin{align} 2i\text{\rm Im}(z)\int_{x_0}^c dx\, v^*\Phi(z,x)^* A(x) \Phi(z,x)v&=v^*\Phi(z,c)^* J \Phi(z,c)v \label{2.230a} \\ &=w^*\beta J \beta^*w =0. \end{align} \end{subequations} Thus, by Hypothesis~\ref{h2.2}, $\text{\rm Im}(z)=0$. This contradicts the assumption that $z\in{\mathbb{C}}\backslash{\mathbb{R}}$.
Conversely, if $E_c(M)=0$ for a given $M\in {\mathbb{C}}^{m\times m}$, then for $z\in{\mathbb{C}}\backslash{\mathbb{R}}$ let $\beta =(I_m\; M^*)\Psi(z,c,x_0,\alpha)^*J =
U(z,c,x_0,\alpha)^* J$. One observes that \eqref{2.24} is satisfied and that $\text{\rm{rank}} (\beta) =m$. Moreover, $0=\sigma(x_0,c,z)E_c(M)/2=\text{\rm Im}(\beta_2\beta_1^*)$. If for this choice of $\beta$, \eqref{BDd} is not yet satisfied, one introduces $\Delta = (\beta\beta^*)^{-1/2}\beta$ and observes that $0=\Delta U(z,c,x_0,\alpha)$, $\text{\rm Im}(\Delta_2\Delta_1^*)= (\beta\beta^*)^{-1/2} \text{\rm Im}(\beta_2\beta_1^*) (\beta\beta^*)^{-1/2}$, and that $\Delta$ satisfies all requirements of \eqref{2.8e}. \end{proof}
Next, we recall a fundamental property associated with matrices in ${\mathcal D}(z,c,x_0,\alpha)$.
\begin{lemma} \label{l2.8} If $M\in{\mathcal D}(z,c,x_0,\alpha)$, then \begin{equation}\label{Hgz} \sigma(c,x_0,z) \text{\rm Im} (M)>0. \end{equation} Moreover, whenever $\beta\in{\mathbb{C}}^{m\times 2m}$ satisfies \eqref{2.8e}, \begin{equation}\label{2.270} M(\bar z,c,x_0,\alpha,\beta)= M(z,c,x_0,\alpha,\beta)^*. \end{equation} \end{lemma}
\begin{proof} Let $\varPsi_j=\varPsi_j(z,x)$, $j=1,2$ denote solutions of \eqref{HSa} with $z=z_j$, $j=1,2$. Then $(\varPsi_1^*J\varPsi_2)'=(z_2 - \bar{z}_1)\varPsi_1^*A\varPsi_2$ as in \eqref{2.19}. This implies \begin{align} 2i\text{\rm Im}(z)\int_{x_0}^c dx\, U(z,x)^* A(x) U(z,x) &=
U(z,x)^* J U(z,x)\big |_{x_0}^c \nonumber \\ &=2i\text{\rm Im}(M) + U(z,c)^* J U(z,c), \label{2.280} \end{align} with $U(z,x)=U(z,x,x_0,\alpha)$ defined in \eqref{2.14}. Moreover, by the definition of $E_c(M)$ given in \eqref{2.380}, one obtains \begin{align}\label{2.290} &2\sigma(c,x_0,z)\text{\rm Im} (M)\\
&= -E_c(M) + 2\sigma(c,x_0)|\text{\rm Im}(z)|\int_{x_0}^c ds\, U(z,s)^*A(s)U(z,s).\nonumber \end{align} By Hypothesis~\ref{h2.2} and Definition~\ref{dWD}, one infers that $\sigma(c,x_0,z) \text{\rm Im} (M)>0$. To prove \eqref{2.270}, let $\Psi(z,x)=\Psi(z,x,x_0,\alpha)$, where $\Psi$ is defined in \eqref{FS}. Then, by \eqref{2.19}, \begin{equation}\label{2.310} \Psi(\bar{z},x)^*J\Psi(z,x)=J, \end{equation} which implies $J\Psi(z,x)(\Psi(\bar{z},x)J)^*=I_{2m}$ and hence \begin{equation}\label{2.330} \Psi(z,x)J\Psi(\bar{z},x)^*=J. \end{equation} Thus one concludes \begin{equation} \beta\Phi(z,c)(\beta\Theta(\bar{z},c))^*- \beta\Theta(z,c)(\beta\Phi(\bar{z},c))^*= \beta J\beta^*=0, \end{equation} from which \eqref{2.270} follows immediately by Lemma~\ref{l2.11}. \end{proof}
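Identity \eqref{2.310} (and its consequence \eqref{2.330}) can be checked numerically in the simplest Dirac-type case $m=1$, $A=I_2$, $B=0$, where the fundamental system normalized at $x_0=0$ with $\alpha=(1\;0)$ is $\Psi(z,x)=\exp(-zJx)$, a complex rotation matrix. The sketch below is our own illustration and not part of the proof:

```python
import numpy as np

def Psi(z, x):
    """Normalized fundamental system for m=1, A=I_2, B=0 in (HSa):
    J Psi' = z Psi with Psi(z,0) = I_2 (x0 = 0, alpha = (1 0)),
    so Psi(z,x) = exp(-z J x), a rotation matrix with complex angle."""
    c, s = np.cos(z * x), np.sin(z * x)
    return np.array([[c, s], [-s, c]])

J = np.array([[0.0, -1.0], [1.0, 0.0]])
z, x = 0.3 + 1.1j, 1.7  # an arbitrary nonreal z and evaluation point

# (2.310):  Psi(zbar,x)^* J Psi(z,x) = J
assert np.allclose(Psi(np.conj(z), x).conj().T @ J @ Psi(z, x), J)
# (2.330):  Psi(z,x) J Psi(zbar,x)^* = J
assert np.allclose(Psi(z, x) @ J @ Psi(np.conj(z), x).conj().T, J)
```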
For $c>x_0$, the function $M(z,c,x_0,\alpha,\beta)$, defined by \eqref{MF}, and satisfying \eqref{Hgz}, is said to be a matrix-valued {\it Herglotz} function of rank $m$. Hence, for $\text{\rm Im}(\beta_2\beta_1^*)=0$, poles of $M(z,c,x_0,\alpha,\beta)$, $c>x_0$, are at most of first order, are real, and have nonpositive residues. Such functions admit a representation of the form \begin{align}\label{NP} M(z,c,x_0,\alpha,\beta)=& \; C_1(c,x_0,\alpha,\beta) + zC_2(c,x_0,\alpha,\beta) \nonumber\\ &+\int_{-\infty}^\infty d\Omega(\lambda,c,x_0,\alpha,\beta)\,\left( \frac{1}{\lambda-z} -\frac{\lambda}{1+\lambda^2} \right), \quad c>x_0, \end{align} where $C_2(c,x_0,\alpha,\beta)\ge 0$ and $C_1(c,x_0,\alpha,\beta)$ are self-adjoint $m\times m$ matrices, and where $\Omega(\lambda,c,x_0,\alpha,\beta)$ is a nondecreasing $m\times m$ matrix-valued function such that \begin{subequations}\label{Mrep} \begin{align}
&\int_{-\infty}^{\infty}\|
d\Omega(\lambda,c,x_0,\alpha,\beta)\|_{{\mathbb{C}}^{m\times m}}\, (1+\lambda^2)^{-1} < \infty, \label{NPa} \\ &\Omega((\lambda, \mu],c,x_0,\alpha,\beta)= \lim_{\delta\downarrow 0} \lim_{\epsilon \downarrow 0}\frac{1}{\pi}\int_{\lambda + \delta}^{\mu + \delta }d\nu\, \sigma(c,x_0)\text{\rm Im}\left( M(\nu +i\epsilon,c,x_0,\alpha,\beta)\right). \label{NPb} \end{align} \end{subequations} In general, for self-adjoint boundary value problems, $\Omega(\lambda,c,x_0,\alpha,\beta)$ is piecewise constant with jump discontinuities precisely at the eigenvalues of the boundary value problem; moreover, in the matrix-valued Schr\"odinger and Dirac-type cases, $C_2=0$ in \eqref{NP} (and later in \eqref{2.42} and \eqref{2.64}). Analogous statements apply to $-M(z,c,x_0,\alpha,\beta)$ if $c<x_0$. For such problems, we note in the subsequent lemma that for fixed $\beta$, varying the boundary data $\alpha$ produces Weyl-Titchmarsh matrices $M(z,c,x_0,\alpha,\beta)$ related to each other via linear fractional transformations (see also \cite{GMT98}, \cite{GT97} for a general approach to such linear fractional transformations).
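The Stieltjes inversion formula \eqref{NPb} can be illustrated with the scalar ($m=1$) Herglotz function $M(z)=-1/z$, whose representing measure is a unit point mass at $\lambda=0$. The sketch below is our own illustration (the choices of $\epsilon$ and the integration grid are arbitrary, and this toy $M$ is not one of the paper's Weyl-Titchmarsh matrices); it recovers $\Omega((-1,1])\approx 1$:

```python
import numpy as np

def M(z):
    """Scalar Herglotz function M(z) = -1/z; its representation measure
    in the analog of (NP) is a unit point mass at lambda = 0."""
    return -1.0 / z

eps = 1e-3
nu = np.linspace(-1.0, 1.0, 200001)
h = nu[1] - nu[0]
# Inversion formula (NPb) with sigma(c,x0) = 1:
#   Omega((-1,1]) = lim_{eps -> 0} (1/pi) * int_{-1}^{1} Im M(nu + i*eps) d nu
omega = np.sum(np.imag(M(nu + 1j * eps))) * h / np.pi
assert abs(omega - 1.0) < 5e-3  # recovers the unit mass at lambda = 0
```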
\begin{lemma}\label{l2.9} Suppose $\alpha, \beta, \gamma\in{\mathbb{C}}^{m\times 2m}$ satisfy \eqref{2.8e}. Let $M_{\alpha}=M(z,c,x_0,\alpha,\beta)$, and $M_{\gamma}=M(z,c,x_0,\gamma,\beta)$. Then, \begin{equation} M_{\alpha}= [-\alpha J \gamma^* + \alpha\gamma^* M_{\gamma}] [\alpha\gamma^* + \alpha J\gamma^*M_{\gamma}]^{-1}. \label{2.360} \end{equation} \end{lemma}
\begin{proof} Let $U_{\alpha}(z,x)=U(z,x,x_0,\alpha)$ and $U_{\gamma}(z,x)=U(z,x,x_0,\gamma)$ be defined in \eqref{2.14} with $M=M_{\alpha}$ and $M=M_{\gamma}$ respectively. Then, \begin{equation} 0=\beta U_{\alpha}(z,c)=\gamma U_{\gamma}(z,c). \end{equation} By the rank condition \eqref{BDa}, \begin{equation} U_{\alpha}(z,c)= J\beta^*C_{\alpha}\, ,\qquad U_{\gamma}(z,c)= J\beta^*C_{\gamma} \end{equation} for nonsingular $C_{\alpha}, \; C_{\gamma}\in {\mathbb{C}}^{m\times m}$. Thus, by \eqref{FSa}, and by the uniqueness of solution of \eqref{HSa}, there is a nonsingular $C\in {\mathbb{C}}^{m\times m}$ for which \begin{equation}\label{2.410} \begin{pmatrix} \alpha^*\hspace{-5pt}&J\alpha^*\end{pmatrix} \begin{pmatrix} I_m \\ M_{\alpha}\end{pmatrix}=U_{\alpha}(z,x_0)=U_{\gamma}(z,x_0)C= \begin{pmatrix} \gamma^*\hspace{-5pt}&J\gamma^*\end{pmatrix} \begin{pmatrix} I_m \\ M_{\gamma}\end{pmatrix}C. \end{equation} By \eqref{2.8i}, \begin{equation} \begin{pmatrix} \alpha^*\hspace{-5pt}&J\alpha^*\end{pmatrix}^{-1}= \begin{pmatrix} \alpha \\ -\alpha J\end{pmatrix}; \end{equation} and hence, by \eqref{2.410} we see that \begin{subequations} \begin{align} I_m&=(\alpha \gamma^* + \alpha J \gamma^* M_{\gamma} )C\\ M_{\alpha}&= (-\alpha J\gamma^* + \alpha \gamma^* M_{\gamma} )C, \end{align} \end{subequations} from which \eqref{2.360} immediately follows. \end{proof}
\begin{remark} {}From the proof of the previous lemma one infers, in general, that \begin{equation} U_{\gamma}(z,x) = U_{\alpha}(z,x)(\alpha \gamma^* + \alpha J \gamma^* M_{\gamma} ). \end{equation} Moreover, if $\alpha_0 =(I_m\; 0)$ and $\gamma_0=(0\ I_m)$ one observes, in particular, \begin{equation} M(z,c,x_0,\alpha_0,\beta)=-M(z,c,x_0,\gamma_0,\beta)^{-1}. \end{equation} \end{remark}
We further note that the sets ${\mathcal D}(z,c,x_0,\alpha)$ are closed and convex (cf., e.g., \cite{HS84}, \cite{HSH93}, \cite{Kr89a}, \cite{Or76}). Moreover, by \eqref{2.290} and Hypothesis~\ref{h2.2}, one concludes that $E_c(M)$ is strictly increasing in $c$. This fact together with Lemma~\ref{l2.11} implies that, as a function of $c$, the sets ${\mathcal D}(z,c,x_0,\alpha)$ are strictly nesting in the sense that \begin{equation}\label{2.28} {\mathcal D}(z,c_2,x_0,\alpha)\subset {\mathcal D}(z,c_1,x_0,\alpha) \quad \text{for}\quad x_0<c_1< c_2\quad \text{or} \quad c_2< c_1<x_0. \end{equation} Hence, the intersection of this nested sequence, as $c\to \pm \infty$, is nonempty, closed and convex. We say that this intersection is a limiting set for the nested sequence.
\begin{definition}\label{dLWD} Let ${\mathcal D}_\pm(z,x_0,\alpha)$ denote the closed, convex set in the space of $m\times m$ matrices which is the limit, as $c\to \pm\infty$, of the nested collection of sets ${\mathcal D}(z,c,x_0,\alpha)$ given in Definition~\ref{dWD}. ${\mathcal D}_\pm(z,x_0,\alpha)$ is said to be a limiting {\em disk}. Elements of ${\mathcal D}_\pm(z,x_0,\alpha)$ are denoted by $M_\pm(z,x_0,\alpha)\in {\mathbb{C}}^{m\times m}$. \end{definition}
In light of the containment described in \eqref{2.28}, for $c\ne x_0$ and $z\in {\mathbb{C}}\backslash{\mathbb{R}}$, \begin{equation}\label{2.32} {\mathcal D}_\pm(z,x_0,\alpha)\subset {\mathcal D}(z,c,x_0,\alpha), \end{equation} with emphasis on strict containment of the disks in \eqref{2.32}. Moreover, by \eqref{2.290}, \begin{equation}\label{2.320} M\in {\mathcal D}_\pm(z,x_0,\alpha) \text{ precisely when }E_c(M)<0 \text{ for all } c \in(x_0, \pm\infty). \end{equation} The following lemma appears to have gone unnoted in the literature.
\begin{lemma}\label{l2.12} Let $M\in{\mathbb{C}}^{m\times m}$, $c\ne x_0$, and $z\in{\mathbb{C}}\backslash{\mathbb{R}}$. Then, $E_c(M)<0$ if and only if there is a $\beta\in {\mathbb{C}}^{m\times 2m}$ satisfying the condition \begin{equation}\label{2.27a} \sigma(c,x_0,z) \text{\rm Im}(\beta_2\beta_1^*)>0, \end{equation} and such that \eqref{2.24} holds with $u_j(z,c)= u_j(z,c,x_0,\alpha)$, $j=1,2$, defined in \eqref{2.14} in terms of $M$. With $\beta$ so defined, \eqref{2.25} holds; that is, $M=M(z,c,x_0,\alpha,\beta)$. Moreover, $\beta$ may be chosen to satisfy \eqref{BDd}, and hence Hypothesis~\ref{h2.3}. \end{lemma}
\begin{proof} Let $z\in {\mathbb{C}}\backslash{\mathbb{R}}$, and for a given $M\in {\mathbb{C}}^{m\times m}$ suppose that there is a $\beta\in{\mathbb{C}}^{m\times 2m}$ satisfying \eqref{2.27a} such that \eqref{2.24} holds. The matrices $\beta_j$, $j=1,2$, are invertible by \eqref{2.27a}, and by \eqref{2.24} it follows that \begin{equation}\label{2.26} U(z,c)=\begin{pmatrix} -\beta_1^{-1}\beta_2\\I_m \end{pmatrix} u_2(z,c). \end{equation} By \eqref{2.380} and \eqref{2.26}, one then concludes that \begin{equation}\label{2.27} E_c(M) = -2\sigma(c,x_0,z) u_2(z,c)^*\beta_1^{-1}
\text{\rm Im} (\beta_2\beta_1^*) (\beta_1^*)^{-1} u_2(z,c), \end{equation} and hence that $E_c(M)<0$ whenever \eqref{2.27a} holds.
Upon showing that $\beta\Phi(z,c)$ is nonsingular, \eqref{2.25} will follow from \eqref{2.24}. If $\beta\Phi(z,c)$ is singular, then there is a nonzero vector $v\in {\mathbb{C}}^{m}$ such that $\beta\Phi(z,c)v=0$. By the nonsingularity of $\beta_j$, $j=1,2$, $\phi_1(z,c)v = -\beta_1^{-1}\beta_2\phi_2(z,c)v$, and as a result, \eqref{2.230a} yields \begin{align}
&2\sigma(c,x_0)|\text{\rm Im}(z)|\int_{x_0}^c dx \,v^*\Phi(z,x)^*A(x)\Phi(z,x)v \nonumber\\ &= -2\sigma(c,x_0,z)v^*\phi_2(z,c)^*\beta_1^{-1}\text{\rm Im}(\beta_2\beta_1^*) (\beta_1^*)^{-1}\phi_2(z,c)v, \end{align} and hence, a contradiction given \eqref{2.27a} (cf.~\eqref{2.3}).
Conversely, if $E_c(M)<0$ for a given $M\in{\mathbb{C}}^{m\times m}$, then for $z\in{\mathbb{C}}\backslash{\mathbb{R}}$, $u_j(z,c)$, $j=1,2$, defined by \eqref{2.14}, are nonsingular. Indeed, if either $u_1(z,c)$ or $u_2(z,c)$ are singular, then there is a $v\in {\mathbb{C}}^m$, $v\ne 0$, such that $v^*E_c(M)v=0$, a contradiction. Next, let $\beta_1 =I_m$ and let $\beta_2=-u_1(z,c)u_2(z,c)^{-1}$. Then, for these $\beta_j$, $j=1,2$, \eqref{2.24} holds. Equation~\eqref{2.27} now implies that $\sigma(c,x_0,z)\text{\rm Im}(\beta_2\beta_1^*)> 0$ for $c\ne x_0$ and $z\in{\mathbb{C}}\backslash{\mathbb{R}}$. For this choice, $\beta$ does not satisfy \eqref{BDd}. However, one can normalize $\beta$ as described in the proof of Lemma~\ref{l2.11}. \end{proof} Hence by Lemma~\ref{l2.12} and \eqref{2.320}, we see that if $M\in{\mathcal D}_\pm(z,x_0,\alpha)$, then for some $\beta\in{\mathbb{C}}^{m\times 2m}$ satisfying \eqref{2.27a} \begin{equation}\label{2.33} M_\pm(z,x_0,\alpha)=M(z,c,x_0,\alpha,\beta). \end{equation}
\begin{remark}\label{r2.14} To the reader of \cite{CG99}, our study of the high-energy asymptotics of the Weyl-Titchmarsh $M$-function for matrix-Schr\"odinger operators, we offer this cautionary note: In \cite{CG99}, $D(z,c,x_0,\alpha)$ represents the set of matrices characterized by Lemmas~\ref{l2.11} and \ref{l2.12}. However, the homeomorphism that exists between the contractive matrices $V\in{\mathbb{C}}^{m\times m},\ VV^*\le I_m$, and the Weyl disk, $D(z,c,x_0,\alpha)$ (cf.~\cite{HS84}, \cite{HSH93}, \cite{Kr89a}, \cite{Or76}) shows that those $M\in{\mathbb{C}}^{m\times m}$ characterized in Lemma~\ref{l2.11} correspond to the set of unitary matrices while those characterized in Lemma~\ref{l2.12} correspond to the contractive matrices for which $VV^*<I_m$. Hence, Lemma~\ref{l2.11} characterizes part of the boundary while Lemma~\ref{l2.12} characterizes the interior of the Weyl disk as it is defined in Definition~\ref{dWD}. As a result, the closure of the set consisting of those $M\in{\mathbb{C}}^{m\times m}$ characterized by these two lemmas (i.e., those $M$ which correspond to $VV^*<I_m$, or to $VV^*=I_m$) is the Weyl disk. Thus, for deriving high-energy asymptotics for $M_\pm(z,x_0,\alpha)$, it is sufficient to consider the subset of the Weyl disk consisting of those matrices, $M\in{\mathbb{C}}^{m\times m}$, characterized in Lemma~\ref{l2.11} and Lemma~\ref{l2.12}. This was the approach taken in \cite{CG99}. \end{remark}
When ${\mathcal D}_\pm(z,x_0,\alpha)$ consists of a single matrix, the system \eqref{HSa} is said to be in the {\it limit point} (l.p.) case at $\pm\infty$. If ${\mathcal D}_\pm(z,x_0,\alpha)$ has nonempty interior, then \eqref{HSa} is said to be in the {\it limit circle} (l.c.) case at $\pm\infty$. Indeed, for the case $m=1$, the limit point case corresponds to ${\mathcal D}_\pm(z,x_0,\alpha)$ being a point in ${\mathbb{C}}$, whereas the limit circle case corresponds to ${\mathcal D}_\pm(z,x_0,\alpha)$ being a disk in ${\mathbb{C}}$.
These apparent geometric properties for the disk correspond to analytic properties for the solutions of the Hamiltonian system \eqref{HSa}. To recall this correspondence, we introduce the following spaces in which we assume that $ -\infty\le a< b \le \infty$, \begin{subequations}\label{2.29} \begin{align}
L_A^2((a,b))&=\bigg\{\phi:(a,b)\to{\mathbb{C}}^{2m} \bigg| \int_a^b dx\, (\phi(x),A\phi(x))_{{\mathbb{C}}^{2m}}<\infty \bigg\}, \label{2.29a} \\ N(z,\infty)&=\{\phi\in L_A^2((c,\infty)) \mid J\phi^\prime =(zA+B)\phi \text{ a.e. on $(c,\infty)$} \}, \label{2.29b} \\ N(z,-\infty)&=\{\phi\in L_A^2((-\infty,c)) \mid J\phi^\prime=(zA+B)\phi \text{ a.e. on $(-\infty,c)$} \}, \label{2.29c} \end{align} \end{subequations} for some $c\in{\mathbb{R}}$ and $z\in{\mathbb{C}}$. (Here $(\phi,\psi)_{{\mathbb{C}}^n}=\sum_{j=1}^n \overline\phi_j\psi_j$ denotes the standard scalar product in ${\mathbb{C}}^n$, abbreviating $\chi\in{\mathbb{C}}^n$ by $\chi=(\chi_1,\dots,\chi_n)^t$.) Both dimensions of the spaces in \eqref{2.29b} and \eqref{2.29c}, $\dim_{\mathbb{C}}(N(z,\infty))$ and $\dim_{\mathbb{C}}(N(z,-\infty))$, are constant for $z\in{\mathbb{C}}_\pm=\{\zeta\in{\mathbb{C}} \mid \pm\text{\rm Im}(\zeta)> 0 \}$ (see, e.g., \cite{At64}, \cite{KR74}). One then observes that the Hamiltonian system \eqref{HSa} is in the limit point case at $\pm\infty$ whenever \begin{equation}\label{2.30} \dim_{\mathbb{C}}(N(z,\pm\infty))=m \text{ for all $z\in{\mathbb{C}}\backslash{\mathbb{R}}$} \end{equation} and in the limit circle case at $\pm\infty$ whenever \begin{equation}\label{2.31} \dim_{\mathbb{C}}(N(z,\pm\infty))=2m \text{ for all $z\in{\mathbb{C}}$.} \end{equation}
Next we show that the Dirac-type systems considered in this paper are always in the limit point case at $\pm\infty$. Results of this type, under varying sets of assumptions on $B(x)$, are well-known to experts in the field. For instance, in the case $m=1$ and with $B_{1,2}(x)=B_{2,1}(x)$ this fact can be found in \cite{We71}. For $B\in C({\mathbb{R}})^{2m\times 2m}$ and a more general constant matrix $A$, this result is proven in \cite{LM00} (their proof, however, extends to the current $B\in L^1_{\text{\rm{loc}}} ({\mathbb{R}})$ case). More generally, multi-dimensional Dirac operators with $L^2_{\text{\rm{loc}}} ({\mathbb{R}}^n)$-type coefficients (and additional conditions) can be found in \cite{LO82}. A short proof in the case $m=1$ has recently been sent to us by Don Hinton \cite{Hi99}. For convenience of the reader we present its elementary generalization to $m\in{\mathbb{N}}$ below (see also \cite{Cl94} for a sketch of such a proof). After completion of this paper we became aware of a recent preprint by Lesch and Malamud \cite{LM00a} which provides a thorough study of self-adjointness questions for more general Hamiltonian systems than those studied in this paper.
\begin{lemma} \label{l2.15} The limit point case holds for Dirac-type systems {\rm (}i.e., for $A=I_{2m}$ in \eqref{HSa}{\rm )} at $\pm \infty$. \end{lemma}
\begin{proof} Let $\{y_\ell(z,x)\}_{\ell=1,\dots,k}$ and $\{w_j(z,x)\}_{j=1,\dots,k^\prime}$ denote bases for $N(z,\pm\infty)$ and $N(\overline z,\pm\infty)$, respectively. By Theorem~9.11.1 of Atkinson \cite{At64}, one has $k,k^\prime\geq m$ for $z\in{\mathbb{C}}\backslash{\mathbb{R}}$. We now assume that $k>m$.
One observes that $\{y_1(z,c),\dots,y_k(z,c)\}$ and $\{w_1(\overline z,c),\dots,w_{k^\prime}(\overline z,c)\}$ are linearly independent sets in ${\mathbb{C}}^{2m}$, with $k+k^\prime\geq 2m+1$. Since $J$ is nonsingular, the vectors $w_r(\overline z,c)$ cannot all be orthogonal to the $k$-dimensional subspace $J\,\text{\rm{span}}\{y_1(z,c),\dots,y_k(z,c)\}$, whose orthogonal complement has dimension $2m-k<k^\prime$. Consequently, there is some $s\in\{1,\dots,k\}$ and some $r\in\{1,\dots,k^\prime\}$ such that \begin{equation} w_r(\overline z,c)^*Jy_s(z,c)\neq 0. \label{2.32a} \end{equation} By Lagrange's identity, \begin{equation} w_r(\overline z,x)^*Jy_s(z,x)=w_r(\overline z,c)^*Jy_s(z,c) \label{2.33a} \end{equation} is constant with respect to $x$. On the other hand, an application of the Cauchy--Schwarz inequality shows that the left-hand side of \eqref{2.33a} is in $L^1 ((c,\pm\infty))$. By \eqref{2.32a} one obtains a contradiction to the assumption $k>m$ and hence concludes that \begin{equation} \dim_{\mathbb{C}} (N(z,\pm\infty))=m. \label{2.34} \end{equation} The analogous argument then also yields \begin{equation} \dim_{\mathbb{C}} (N(\overline z,\pm\infty))=m \label{2.35} \end{equation} and hence the limit point property of Dirac-type systems with $A(x)=I_{2m}$ in \eqref{HSa}. \end{proof}
Returning to the general case \eqref{HSa}, in either the limit point or limit circle cases, $M_\pm(z,x_0,\alpha)\in \partial {\mathcal D}_{\pm}(z,x_0,\alpha)$ is said to be a {\em half-line Weyl-Titchmarsh matrix}. Each such matrix is associated with the construction of a self-adjoint operator acting on $L_A^2([x_0,\pm \infty))\cap \text{\rm{AC}}([x_0,\pm\infty))^{2m}$ for the Hamiltonian system \eqref{HSa}. However, for those intermediate cases where $m<\dim_{\mathbb{C}}(N(z,\pm\infty))<2m$, Hinton and Schneider have noted that not every element of $\partial{\mathcal D}_{\pm}(z,x_0,\alpha)$ is a half-line Weyl-Titchmarsh matrix, and have characterized those elements of the boundary that are (cf.~\cite{HSH93}, \cite{HSH97}).
For convenience of the reader we summarize some of the principal results on half-line Weyl-Titchmarsh matrices next.
\begin{theorem} [\cite{AD56}, \cite{Ca76}, \cite{GT97}, \cite{HS81}, \cite{HS82}, \cite{HS86}, \cite{KS88}] \label{t2.3} Suppose Hypotheses \ref{h2.1} and \ref{h2.2}. Let $z\in{\mathbb{C}}\backslash{\mathbb{R}}$, $x_0\in{\mathbb{R}}$, and denote by $\alpha, \gamma\in{\mathbb{C}}^{m\times 2m}$ matrices satisfying \eqref{2.8e}. Then, \\ $(i)$ $\pm M_{\pm}(z,x_0,\alpha)$ is an $m\times m$
matrix-valued Herglotz function of maximal rank. In particular, \begin{gather} \text{\rm Im}(\pm M_{\pm}(z,x_0,\alpha)) > 0, \quad z\in{\mathbb{C}}_+, \\ M_{\pm}(\overline z,x_0,\alpha)=M_{\pm}(z,x_0,\alpha)^*, \label{2.38} \\ \text{\rm{rank}} (M_{\pm}(z,x_0,\alpha))=m, \\ \lim_{\varepsilon\downarrow 0} M_{\pm}(\lambda+ i\varepsilon,x_0,\alpha) \text{ exists for a.e.\ $\lambda\in{\mathbb{R}}$},\\ \begin{split}\label{2.41} M_\pm(z,x_0,\alpha) &= [-\alpha J \gamma^* + \alpha\gamma^* M_\pm(z,x_0,\gamma)]\times \\ &\quad \times[ \alpha\gamma^* + \alpha J \gamma^*M_\pm(z,x_0,\gamma)]^{-1}. \end{split} \end{gather} Local singularities of $\pm M_{\pm}(z,x_0,\alpha)$ and $\mp M_{\pm}(z,x_0,\alpha)^{-1}$ are necessarily real and at most of first order in the sense that \begin{align} &\mp \lim_{\epsilon\downarrow0} \left(i\epsilon\, M_{\pm}(\lambda+i\epsilon,x_0,\alpha)\right) \geq 0, \quad \lambda\in{\mathbb{R}}, \label{2.24b} \\ & \pm \lim_{\epsilon\downarrow0} \left(\frac{i\epsilon}{M_{\pm}(\lambda+i\epsilon,x_0,\alpha)}\right) \geq 0, \quad \lambda\in{\mathbb{R}}. \label{2.24c} \end{align} $(ii)$ $\pm M_{\pm}(z,x_0,\alpha)$ admit the representations \begin{align} &\pm M_{\pm}(z,x_0,\alpha)=F_\pm(x_0,\alpha)+\int_{\mathbb{R}} d\Omega_\pm(\lambda,x_0,\alpha) \, \big((\lambda-z)^{-1}-\lambda(1+\lambda^2)^{-1}\big) \label{2.42} \\ &=\exp\bigg(C_\pm(x_0,\alpha)+\int_{\mathbb{R}} d\lambda \, \Xi_{\pm} (\lambda,x_0,\alpha) \big((\lambda-z)^{-1}-\lambda(1+\lambda^2)^{-1}\big) \bigg), \label{2.43} \end{align} where \begin{align} F_\pm(x_0,\alpha)&=F_\pm(x_0,\alpha)^*, \quad \int_{\mathbb{R}}
\|d\Omega_\pm(\lambda,x_0,\alpha)\|_{{\mathbb{C}}^{m\times m}} \, (1+\lambda^2)^{-1}<\infty,
\\ C_\pm(x_0,\alpha)&=C_\pm(x_0,\alpha)^*, \quad 0\le\Xi_\pm(\,\cdot\,,x_0,\alpha) \le I_m \, \text{ a.e.} \end{align} Moreover, \begin{align} \Omega_\pm((\lambda,\mu],x_0,\alpha)& =\lim_{\delta\downarrow 0}\lim_{\varepsilon\downarrow 0}\f1\pi \int_{\lambda+\delta}^{\mu+\delta} d\nu \, \text{\rm Im}(\pm M_\pm(\nu+i\varepsilon,x_0,\alpha)), \\ \Xi_\pm(\lambda,x_0,\alpha)&=\lim_{\varepsilon\downarrow 0} \pi^{-1}\text{\rm Im}(\text{\rm ln}(\pm M_\pm(\lambda+i\varepsilon,x_0,\alpha))) \text{ for a.e.\ $\lambda\in{\mathbb{R}}$}. \end{align} $(iii)$ Define the $2m\times m$ matrices \begin{align} U_\pm(z,x,x_0,\alpha)&=\begin{pmatrix}u_{\pm,1}(z,x,x_0,\alpha) \\ u_{\pm,2}(z,x,x_0,\alpha) \end{pmatrix} =\Psi(z,x,x_0,\alpha)\begin{pmatrix} I_m \\ M_\pm(z,x_0,\alpha) \end{pmatrix} \nonumber \\ &=\begin{pmatrix}\theta_1(z,x,x_0,\alpha) & \phi_1(z,x,x_0,\alpha)\\ \theta_2(z,x,x_0,\alpha) & \phi_2(z,x,x_0,\alpha)\end{pmatrix} \begin{pmatrix} I_m \\ M_\pm(z,x_0,\alpha) \end{pmatrix}, \label{2.52} \end{align} with $\theta_j(z,x,x_0,\alpha)$, and $\phi_j(z,x,x_0,\alpha)$, $j=1,2$, defined by \eqref{FSc}. Then, \begin{equation} \text{\rm Im}(M_\pm(z,x_0,\alpha))=\text{\rm Im}(z) \int_{x_0}^{\pm\infty}ds\, U_\pm(z,s,x_0,\alpha)^* A(s) U_\pm(z,s,x_0,\alpha). \end{equation} \end{theorem}
In the Dirac-type context, where $A=I_{2m}$, the $m$ columns of $U_\pm (z,\cdot,x_0,\alpha)$ span $N(z,\pm\infty)$.
Up to this point, we have focused exclusively on Hamiltonian systems and neglected the notion of a linear operator associated with \eqref{HS}. We did so on purpose, as the formalism presented thus far is widely applicable and goes beyond the prime candidates such as Schr\"odinger and Dirac-type systems. However, in the remainder of this section and for the bulk of the material from Section~\ref{s3} on, we will focus on the Dirac-type case. Thus, in addition to Hypotheses~\ref{h2.1}--\ref{h2.3}, which are assumed throughout this paper, we introduce the following hypothesis tailored to this setting.
\begin{hypothesis}\label{h2.4} Assume Hypotheses~\ref{h2.1} and \ref{h2.3} as well as the Dirac-type assumption \eqref{DS}. \end{hypothesis}
Assuming the Dirac-type Hypothesis~\ref{h2.4}, we now describe the associated Dirac-type operator $D$ on ${\mathbb{R}}$ by first introducing the Green's matrix associated with \eqref{HS} and \eqref{DS}. Define the $2m\times 2m$ matrix $G$ by \begin{align} G(z,x,x^\prime)=U_\mp(z,x,x_0,\alpha_0)[M_-(z,x_0,\alpha_0) & -M_+(z,x_0,\alpha_0)]^{-1} U_\pm(\overline z,x^\prime,x_0,\alpha_0)^*, \nonumber \\ & \alpha_0=(I_m\; 0), \quad x\lessgtr x^\prime,\, z\in{\mathbb{C}}\backslash{\mathbb{R}} \label{2.56} \end{align} Next, let $\phi\in L^2({\mathbb{R}})^{2m}$ and consider \begin{equation} J\psi^\prime(z,x)=(zI_{2m}+B(x))\psi(z,x)+ \phi(x), \quad z\in{\mathbb{C}}\backslash{\mathbb{R}} \label{2.58} \end{equation} for a.e.\ $x\in{\mathbb{R}}$. Then, as inferred from \cite{HS81}, \cite{HS83}, \eqref{2.58} has a unique solution $\psi(z,\,\cdot\,)\in L^2({\mathbb{R}})^{2m}\cap\text{\rm{AC}}_{\text{\rm{loc}}}({\mathbb{R}})^{2m}$ given by \begin{equation} \psi(z,x)=\int_{\mathbb{R}} dx^\prime\, G(z,x,x^\prime) \phi(x^\prime). \label{2.59} \end{equation} The Dirac-type operator $D$ in $L^2({\mathbb{R}})^{2m}$ associated with the Hamiltonian system \eqref{HS} and \eqref{DS} is then defined by \begin{equation} ((D-z)^{-1}\psi)(x)= \int_{\mathbb{R}} dx^\prime\, G(z,x,x^\prime)\psi(x^\prime), \quad \psi\in L^2({\mathbb{R}})^{2m}, \; z\in{\mathbb{C}}\backslash{\mathbb{R}}. \label{2.60} \end{equation} Explicitly, one obtains \begin{align} D&=J \frac{d}{dx}-B, \label{2.61} \\ \text{\rm{dom}}(D)&=\{\phi\in L^2({\mathbb{R}})^{2m}\mid \phi \in\text{\rm{AC}}_{\text{\rm{loc}}}({\mathbb{R}})^{2m}; \,(J\phi^\prime-B\phi)\in L^2({\mathbb{R}})^{2m} \}, \nonumber \end{align} taking into account the limit point property of Dirac-type systems as described in Lemma~\ref{l2.15}. Thus, $D$ is a self-adjoint operator in $L^2({\mathbb{R}})^{2m}$.
As described in \cite{HS81}--\cite{HS86}, the $2m\times 2m$ Weyl-Titchmarsh matrix $M(z,x_0,\alpha_0)$ associated with $D$ is then defined by \begin{align} M(z,x_0,\alpha_0) &=\big(M_{j,j^\prime}(z,x_0,\alpha_0)\big)_{j,j^\prime=1,2} \nonumber \\ &=[G(z,x_0,x_0+0)+G(z,x_0,x_0-0)]/2, \quad z\in{\mathbb{C}}\backslash{\mathbb{R}}. \label{2.62} \end{align} Actually, one can replace $\alpha_0=(I_m\; 0)$ by an arbitrary matrix $\alpha=[\alpha_1\ \alpha_2]\in{\mathbb{C}}^{m\times 2m}$ satisfying \eqref{2.8e} and hence introduces \begin{subequations}\label{2.620} \begin{align} M(z,x_0,\alpha) &=\big(M_{j,j^\prime}(z,x_0,\alpha)\big)_{j,j^\prime=1,2}, \quad z\in{\mathbb{C}}\backslash{\mathbb{R}}, \label{2.62A} \\ M_{1,1}(z,x_0,\alpha)&=[M_-(z,x_0,\alpha)-M_+(z,x_0,\alpha)]^{-1}, \label{2.62B} \\ M_{1,2}(z,x_0,\alpha)&=2^{-1} [M_-(z,x_0,\alpha)-M_+(z,x_0,\alpha)]^{-1} [M_-(z,x_0,\alpha)+M_+(z,x_0,\alpha)], \nonumber \\ M_{2,1}(z,x_0,\alpha)&=2^{-1} [M_-(z,x_0,\alpha)+M_+(z,x_0,\alpha)] [M_-(z,x_0,\alpha)-M_+(z,x_0,\alpha)]^{-1},\nonumber \\ M_{2,2}(z,x_0,\alpha)&=M_\pm(z,x_0,\alpha) [M_-(z,x_0,\alpha)-M_+(z,x_0,\alpha)]^{-1}M_\mp(z,x_0,\alpha). \nonumber \end{align} \end{subequations} \ The basic results on $M(z,x_0,\alpha)$ then read as follows.
\begin{theorem} [\cite{GT97}, \cite{HS81}, \cite{HS82}, \cite{HS86}, \cite{KS88}] \label{thm2.19} Assume Hypothesis~\ref{h2.4} and suppose \, that $z\in{\mathbb{C}} \backslash {\mathbb{R}}$, $x_0\in{\mathbb{R}}$, and that $\alpha\in{\mathbb{C}}^{m\times 2m}$ satisfies \eqref{2.8e}. Then, \\ $(i)$ $M(z,x_0,\alpha)$ is a matrix-valued Herglotz function of rank $2m$ with representation \begin{align} &M(z,x_0,\alpha)=F(x_0,\alpha)+\int_{\mathbb{R}} d\Omega(\lambda,x_0,\alpha)\, \big((\lambda-z)^{-1}-\lambda(1+\lambda^2)^{-1}\big), \label{2.64} \\ &=\exp\bigg(C(x_0,\alpha)+\int_{\mathbb{R}} d\lambda \, \Upsilon (\lambda,x_0,\alpha) \big((\lambda-z)^{-1}-\lambda(1+\lambda^2)^{-1}\big) \bigg), \label{2.65} \end{align} where \begin{align} F(x_0,\alpha)&=F(x_0,\alpha)^*, \quad \int_{\mathbb{R}} \Vert d\Omega(\lambda,x_0,\alpha) \Vert_{{\mathbb{C}}^{2m\times 2m}} \,(1+\lambda^2)^{-1}<\infty, \label{2.66} \\ C(x_0,\alpha)&=C(x_0,\alpha)^*, \quad 0\le\Upsilon(\,\cdot\,,x_0,\alpha) \le I_{2m} \, \text{ a.e.} \label{2.67} \end{align} Moreover, \begin{align} \Omega((\lambda,\mu],x_0,\alpha)&=\lim_{\delta\downarrow 0}\lim_{\varepsilon\downarrow 0}\f1\pi \int_{\lambda+\delta}^{\mu+\delta} d\nu \, \text{\rm Im}(M(\nu+i\varepsilon,x_0,\alpha)), \label{2.68} \\ \Upsilon(\lambda,x_0,\alpha)&=\lim_{\varepsilon\downarrow 0} \pi^{-1}\text{\rm Im}(\text{\rm ln}(M(\lambda+i\varepsilon,x_0,\alpha))) \text{ for a.e.\ $\lambda\in{\mathbb{R}}$}. \label{2.69} \end{align} $(ii)$ $z\in{\mathbb{C}}\backslash\text{\rm{spec}}(D)$ if and only if $M(z,x_0,\alpha)$ is holomorphic near $z$. \end{theorem}
Here $\text{\rm{spec}} (T)$ abbreviates the spectrum of a linear operator $T$.
Next, we explicitly discuss the elementary Dirac-type example where $A=I_{2m}$ and $B=0$.
\begin{example}\label{e2.20} Suppose $A=I_{2m}$, $B=0$ and let $\alpha\in{\mathbb{C}}^{m\times 2m}$ satisfy \eqref{2.8e}. Then, \begin{align} \Theta(z,x,x_0,\alpha)&=\begin{pmatrix}\theta_{1}(z,x,x_0,\alpha) \\ \theta_{2}(z,x,x_0,\alpha) \end{pmatrix}=\begin{pmatrix} \alpha_1^*\cos(z(x-x_0))+\alpha_2^*\sin(z(x-x_0)) \\ \alpha_2^*\cos(z(x-x_0))-\alpha_1^*\sin(z(x-x_0)) \end{pmatrix}, \nonumber \\ & \hspace*{7.5cm} \quad z\in{\mathbb{C}}, \label{2.80} \\ \Phi(z,x,x_0,\alpha)&=\begin{pmatrix}\phi_{1}(z,x,x_0,\alpha) \\ \phi_{2}(z,x,x_0,\alpha) \end{pmatrix}=\begin{pmatrix} -\alpha_2^*\cos(z(x-x_0))+\alpha_1^*\sin(z(x-x_0)) \\ \alpha_1^*\cos(z(x-x_0))+\alpha_2^*\sin(z(x-x_0)) \end{pmatrix}, \nonumber \\ & \hspace*{7.5cm} \quad z\in{\mathbb{C}}, \label{2.81} \\ U_\pm (z,x,x_0,\alpha)&=\begin{pmatrix} u_{\pm,1}(z,x,x_0,\alpha) \\ u_{\pm,2}(z,x,x_0,\alpha) \end{pmatrix} =\begin{pmatrix} \alpha_1^* \mp i\alpha_2^* \\ \pm i(\alpha_1^* \mp i\alpha_2^*) \end{pmatrix}\exp(\pm iz(x-x_0)), \nonumber \\ & \hspace*{7cm} \quad z\in{\mathbb{C}}_+, \label{2.82} \\ M_\pm (z,x,\alpha)&=\pm iI_m, \quad z\in{\mathbb{C}}_+. \label{2.83} \end{align} \end{example}
Compared to the case of Schr\"odinger operators, it is remarkable that $M_\pm(z,x,\alpha)$ in \eqref{2.83} is, in fact, independent of $\alpha$. Put differently, in Dirac-type situations, $M_\pm(z,x,\alpha)$ may contain no information on the boundary condition indexed by $\alpha\in{\mathbb{C}}^{m\times 2m}$.
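As an illustrative aside (not part of the original analysis), the constancy $M_\pm(z,x,\alpha)=\pm iI_m$ of Example~\ref{e2.20} is easily confirmed numerically in the free scalar case $m=1$, $B=0$: for $z\in{\mathbb{C}}_+$ the Weyl solution $u_+(z,x)=(1,i)^t e^{iz(x-x_0)}$ is square-integrable at $+\infty$, and $u_2u_1^{-1}\equiv i$ independently of $x$ (the sample value of $z$ below is arbitrary).

```python
# Numerical sketch for the free scalar Dirac system (m = 1, B = 0):
# the Weyl solution of Example 2.20 is u_+(z,x) = (1, i)^t exp(iz(x - x_0)),
# so M_+ = u_2 u_1^{-1} = +i for every x; z in C_+ is a sample choice.
import numpy as np

z, x0 = 2.0 + 1.0j, 0.0
x = np.linspace(x0, x0 + 40.0, 2001)
u1 = np.exp(1j * z * (x - x0))          # first component of u_+
u2 = 1j * u1                            # second component of u_+

# |u_+|^2 decays like exp(-2 Im(z)(x - x_0)), hence u_+ lies in L^2((x_0, oo))
assert abs(u1[-1]) < 1e-15
# M_+ = u_2 u_1^{-1} = +i at every grid point, independent of x
assert np.allclose(u2 / u1, 1j)
```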
In Sections~\ref{s4} and \ref{s5} we will also refer to half-line Dirac operators $D_+(\alpha)$ in $L^2([x_0,\infty))^{2m}$ associated with a self-adjoint boundary condition at $x_0$ indexed by $\alpha\in{\mathbb{C}}^{m\times 2m}$ satisfying \eqref{2.8e}, and hence briefly introduce \begin{align} D_+(\alpha)&=J \frac{d}{dx}-B, \label{2.84} \\ \text{\rm{dom}}(D_+(\alpha))&=\{\phi\in L^2([x_0,\infty))^{2m} \mid \phi \in\text{\rm{AC}}([x_0,R])^{2m} \text{ for all $R>0$}; \nonumber \\ & \hspace*{2.4cm} \alpha\phi(x_0)=0; \, (J\phi^\prime-B\phi)\in L^2([x_0,\infty))^{2m} \}, \nonumber \end{align} taking into account the limit point property of Dirac-type systems at $+\infty$ as described in Lemma~\ref{l2.15}. Thus, $D_+(\alpha)$ is a self-adjoint operator in $L^2([x_0,\infty))^{2m}$. In complete analogy one introduces $D_-(\alpha)$ in $L^2((-\infty, x_0])^{2m}$.
Next, assuming that $\alpha\in{\mathbb{C}}^{m\times 2m}$ satisfies \eqref{2.8e}, we recall a few formulas connected with Lagrange's identity that are needed in the proof of Theorem~\ref{t4.10}. Then, explicitly, \eqref{2.310} and \eqref{2.330} read \begin{align} \theta_2(\bar z,x,x_0,\alpha)^*\theta_1(z,x,x_0,\alpha)- \theta_1(\bar z,x,x_0,\alpha)^*\theta_2(z,x,x_0,\alpha)&=0, \label{2.72} \\ \phi_2(\bar z,x,x_0,\alpha)^*\phi_1(z,x,x_0,\alpha)- \phi_1(\bar z,x,x_0,\alpha)^*\phi_2(z,x,x_0,\alpha)&=0, \label{2.73} \\ \phi_2(\bar z,x,x_0,\alpha)^*\theta_1(z,x,x_0,\alpha)- \phi_1(\bar z,x,x_0,\alpha)^*\theta_2(z,x,x_0,\alpha)&=I_m, \label{2.74} \\ \theta_1(\bar z,x,x_0,\alpha)^*\phi_2(z,x,x_0,\alpha)- \theta_2(\bar z,x,x_0,\alpha)^*\phi_1(z,x,x_0,\alpha)&=I_m, \label{2.75} \end{align} and \begin{align} \phi_1(z,x,x_0,\alpha)\theta_1(\bar z,x,x_0,\alpha)^*- \theta_1(z,x,x_0,\alpha)\phi_1(\bar z,x,x_0,\alpha)^*&=0, \label{2.92} \\ \phi_2(z,x,x_0,\alpha)\theta_2(\bar z,x,x_0,\alpha)^*- \theta_2(z,x,x_0,\alpha)\phi_2(\bar z,x,x_0,\alpha)^*&=0, \label{2.93} \\ \phi_2(z,x,x_0,\alpha)\theta_1(\bar z,x,x_0,\alpha)^*- \theta_2(z,x,x_0,\alpha)\phi_1(\bar z,x,x_0,\alpha)^*&=I_m, \label{2.94} \\ \theta_1(z,x,x_0,\alpha)\phi_2(\bar z,x,x_0,\alpha)^*- \phi_1(z,x,x_0,\alpha)\theta_2(\bar z,x,x_0,\alpha)^*&=I_m. \label{2.95} \end{align} Finally, we note the connection between the functions $\Phi$ defined in \eqref{FSb} for different boundary data $\alpha, \gamma\in{\mathbb{C}}^{m\times 2m}$ satisfying \eqref{2.8e}, namely, \begin{equation}\label{2.96} \Phi(z,x,x_0,\gamma)=\Phi(z,x,x_0,\alpha)\alpha\gamma^* + \Theta(z,x,x_0,\alpha)\alpha J \gamma^*. \end{equation} This connection formula follows from the uniqueness of solutions of \eqref{HS} and the identity given in \eqref{2.8i}. It is needed in the proof of Theorem~\ref{t4.10}.
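As a quick consistency check (a sketch only, not needed for the proofs), the identities \eqref{2.74} and \eqref{2.75} can be verified symbolically in the free scalar case $B=0$, $m=1$, $\alpha=\alpha_0=(1\;0)$, where by Example~\ref{e2.20} one has $\theta_1=\cos(z(x-x_0))$, $\theta_2=-\sin(z(x-x_0))$, $\phi_1=\sin(z(x-x_0))$, $\phi_2=\cos(z(x-x_0))$; for real $x$ one has $\overline{f(\bar z(x-x_0))}=f(z(x-x_0))$ for $f=\cos,\sin$, so the conjugations drop out and the identities reduce to $\cos^2+\sin^2=1$.

```python
# Symbolic sketch: scalar (m = 1), free (B = 0) instances of (2.72)-(2.75)
# with alpha_0 = (1 0), i.e., theta = (cos, -sin)^t and phi = (sin, cos)^t
# evaluated at w = z (x - x_0).
import sympy as sp

z, w = sp.symbols('z w')
theta1, theta2 = sp.cos(z * w), -sp.sin(z * w)
phi1, phi2 = sp.sin(z * w), sp.cos(z * w)

# (2.74): phi_2^* theta_1 - phi_1^* theta_2 = 1,
# (2.75): theta_1^* phi_2 - theta_2^* phi_1 = 1
assert sp.simplify(phi2 * theta1 - phi1 * theta2 - 1) == 0
assert sp.simplify(theta1 * phi2 - theta2 * phi1 - 1) == 0
# (2.72), (2.73) are trivial in the commutative scalar case
assert sp.simplify(theta2 * theta1 - theta1 * theta2) == 0
```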
\section{The Leading Order Term in the Asymptotic \\ Expansion of $M_\pm (z,x,\alpha)$} \label{s3}
Assuming Hypothesis~\ref{h2.4}, the principal result proven in this section will be the following leading-order asymptotic result for half-line Weyl-Titchmarsh matrices $M_\pm(z,x_0,\alpha_0)$ associated with the Dirac-type operator \eqref{2.61}, \begin{equation}\label{3.1} M_\pm(z,x_0,\alpha_0) \underset{\substack{\abs{z}\to\infty\\ z\in C_\varepsilon}}{=} \pm iI_{m} +o(1). \end{equation} Here $\alpha_0 =(I_m\; 0)\in{\mathbb{C}}^{m\times 2m}$, and $C_{\varepsilon} \subset {\mathbb{C}}_+$ denotes the open sector with vertex at zero, symmetry axis along the positive imaginary axis, and opening angle $\varepsilon$, with $0<\varepsilon <\pi/2$.
This particular topic originates with the order result of Hille~\cite{Hil63} and the asymptotic formulas of Everitt~\cite{Ev72} and of Everitt and Halvorsen~\cite{EH78}. By appealing to the theory of Riccati equations, Atkinson in \cite{At81}, \cite{At82}, and \cite{At88a} obtains results like those of Hille, Everitt, and Halvorsen, both for the Schr\"odinger case as well as for the scalar-Dirac ($m=1$) case. Through a deeper understanding of the role played by Riccati theory, Atkinson obtains the first order asymptotic expansion of $M_+(z,x,\alpha_0)$ for the matrix-valued Schr\"odinger case in an unpublished manuscript \cite {At88}. Our strategy of proof for \eqref{3.1} is patterned after Atkinson's approach which also appears in our recent work on the full asymptotic expansion for $M_+(z,x,\alpha_0)$ in the matrix-valued Schr\"odinger case \cite{CG99}.
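Before proceeding, we illustrate \eqref{3.1} with a numerical sketch (not part of the proof): for $m=1$ and a constant sample matrix $B$ (an illustrative choice only, not tied to any canonical example), the exponent and kernel vector of the decaying solution of $Ju^\prime=(zI_2+B)u$ are available in closed form, and the resulting quotient $M_+=v_2/v_1$ approaches $+i$ along the positive imaginary axis.

```python
# Numerical sketch of the leading-order asymptotics (3.1) for a constant
# sample coefficient matrix B (m = 1): the decaying exponential solution
# e^{lambda x} v of J u' = (z + B) u yields M_+ = v_2/v_1 -> +i as
# |z| -> infinity in C_+.
import numpy as np

b00, b01, b11 = 0.4, 0.2, -0.3               # B real symmetric, b10 = b01
errs = []
for R in (10.0, 1e2, 1e3, 1e4):
    z = 1j * R                               # along the positive imaginary axis
    # exponents lambda solve lambda^2 + (z + b00)(z + b11) - b01^2 = 0
    lam = -np.sqrt(b01**2 - (z + b00) * (z + b11) + 0j)
    lam = lam if lam.real < 0 else -lam      # select the L^2((x_0, oo)) branch
    v1, v2 = b01 + lam, -(z + b00)           # kernel vector of (z + B - lam J)
    errs.append(abs(v2 / v1 - 1j))
# the deviation from +i shrinks as |z| grows
assert errs[-1] < 1e-3 and errs[-1] < errs[0]
```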
We begin our discussion by noting two additional characterizations for the Weyl disk, ${\mathcal D}(z,c,x_0,\alpha)$, for the general Hamiltonian system \eqref{HSa}.
\begin{lemma}\label{l3.3} Assume Hypotheses~\ref{h2.1} and \ref{h2.2}. Let $z\in{\mathbb{C}}\backslash{\mathbb{R}}$, $c\ne x_0$, and define $U(z,x,x_0,\alpha)$, in terms of $M\in{\mathbb{C}}^{m\times m}$ by \eqref{2.14}. Then $M\in {\mathcal D}(z,c,x_0,\alpha)$ if and only if \begin{equation}\label{3.3} \sigma(c,x_0,z)\text{\rm Im} (u_1(z,x,x_0,\alpha)^*u_2(z,x,x_0,\alpha)) > 0, \quad x \in[x_0,c), \end{equation} or equivalently, if and only if \begin{equation}\label{3.4} \sigma(c,x_0,z)\text{\rm Im} (u_2(z,x,x_0,\alpha)u_1(z,x,x_0,\alpha)^{-1}) > 0, \quad x \in[x_0,c). \end{equation} Moreover, $M\in {\mathcal D}_{\pm}(z,x_0,\alpha)$ if and only if \eqref{3.3} and \eqref{3.4} hold for $c=\pm\infty$. \end{lemma}
\begin{proof} Let $U(z,x)=U(z,x,x_0,\alpha)$, and let $u_j(z,x)=u_j(z,x,x_0,\alpha)$, $j=1,2$ with $x\in[x_0,c)$. By \eqref{2.280}, \begin{align}\label{3.6}
&2\sigma(c,x_0)|\text{\rm Im}(z)|\int_x^c ds\, U(z,s)^*A(s)U(z,s)\nonumber \\
&=\sigma(x_0,c,z)U(z,s)^*(iJ)U(z,s)\Big |_x^c. \end{align} By \eqref{2.380}, this yields \begin{align}\label{3.5} &2\sigma(c,x_0,z)\text{\rm Im} (u_1(z,x)^*u_2(z,x))\nonumber\\
&= -E_c(M) + 2\sigma(c,x_0)|\text{\rm Im}(z)|\int_x^c ds\, U(z,s)^*A(s)U(z,s). \end{align} The integral expression in \eqref{3.5} is strictly positive by Hypothesis~\ref{h2.2}. This yields the equivalence of $-E_c(M)\ge 0$, and hence of $M\in {\mathcal D}(z,c,x_0,\alpha)$, with the condition given in \eqref{3.3}. The equivalence of \eqref{3.3} and \eqref{3.4} follows from the observation that \begin{equation} \text{\rm Im} (u_2(z,x)u_1(z,x)^{-1}) = (u_1(z,x)^{-1})^*\text{\rm Im} (u_1(z,x)^*u_2(z,x))u_1(z,x)^{-1}. \end{equation} The analogous characterization of ${\mathcal D}_{\pm}(z,x_0,\alpha)$ now follows from Definition~\ref{dLWD}. \end{proof}
In Lemma~\ref{l3.3}, $u_j(z,c)$, $j=1,2$, are well-defined and $E_c(M)=0$ precisely when $\sigma(c,x_0,z)\text{\rm Im} (u_1(z,c)^*u_2(z,c))= 0$. A similar statement might not hold for \eqref{3.4} since $u_1(z,c,x_0,\alpha)$ might be singular. In part, the latter point motivates the next characterization of the disk.
\begin{lemma}\label{l3.4} Assume Hypotheses~\ref{h2.1} and \ref{h2.2}. Let $z\in{\mathbb{C}}\backslash{\mathbb{R}}$, $c\ne x_0$, and define $u_j(z,x)=u_j(z,x,x_0,\alpha)$, $j=1,2$, by \eqref{2.14}. Then $M\in {\mathcal D}(z,c,x_0,\alpha)$ if and only if \begin{equation}\label{3.8} u_1(z,x) -i\sigma(c,x_0,z)u_2(z,x) \end{equation} is nonsingular for $x\in[x_0,c]$ and \begin{equation}\label{3.9} \begin{split} \vartheta(z,x)=\vartheta(z,x,x_0,\alpha)&= [u_1(z,x) +i\sigma(c,x_0,z)u_2(z,x)]\times\\ & \quad \times [u_1(z,x) -i\sigma(c,x_0,z)u_2(z,x)]^{-1} \end{split} \end{equation} satisfies \begin{equation}\label{3.10}
I_m-\vartheta(z,x)^*\vartheta(z,x)>0,\quad x\in[x_0,c), \end{equation} with nonnegativity holding at $x=c$. Moreover, $M\in {\mathcal D}_{\pm}(z,x_0,\alpha)$ if and only if \eqref{3.9} is well-defined on $[x_0,\pm\infty)$ and \eqref{3.10} holds for $c=\pm\infty$. \end{lemma}
\begin{proof} Let $M\in{\mathcal D}(z,c,x_0,\alpha)$ and suppose that $u_1(z,\xi)v=i\sigma(c,x_0,z)u_2(z,\xi)v$ for $\xi\in[x_0,c]$ and $v\in{\mathbb{C}}^m$, $v\ne 0$. Then, \begin{equation} v^*\sigma(c,x_0,z)\text{\rm Im} (u_1(z,\xi)^*u_2(z,\xi))v = -v^*u_1(z,\xi)^*u_1(z,\xi)v. \end{equation} By \eqref{3.3}, an immediate contradiction results if $\xi\ne c$. However, if $\xi=c$, then either $v^*E_c(M)v>0$ or $u_j(z,c)v=0$, $j=1,2$. In either case, a contradiction results since $E_c(M)\leq 0$ by Definition~\ref{dWD} and $U=(u_1^t,u_2^t)^t$ satisfies the first-order system \eqref{HSa}. Hence, $\vartheta(z,x)$ is well-defined on $[x_0,c]$. For $x\in [x_0,c)$ and $M\in {\mathcal D}(z,c,x_0,\alpha)$, \eqref{3.3} implies that \begin{equation}\label{3.12} 2i\sigma(c,x_0,z)(u_1(z,x)^*u_2(z,x) - u_2(z,x)^*u_1(z,x) )< 0. \end{equation} This is equivalent to \begin{equation}\label{3.13} \begin{split} &[u_1(z,x)^*-i\sigma(c,x_0,z)u_2(z,x)^*] [u_1(z,x)+i\sigma(c,x_0,z)u_2(z,x)] \\ &<[u_1(z,x)^*+i\sigma(c,x_0,z)u_2(z,x)^*][u_1(z,x) -i\sigma(c,x_0,z)u_2(z,x)] \end{split} \end{equation}
on $[x_0,c)$. Given the nonsingularity of $u_1(z,x)-i\sigma(c,x_0,z)u_2(z,x)$ on $[x_0,c]$, \eqref{3.13} implies \eqref{3.10}, with nonnegativity holding at $x=c$. \\ Next, let $M\in{\mathbb{C}}^{m\times m}$, and suppose that $\vartheta(z,x)$, defined by \eqref{3.9}, is well-defined on $[x_0,c]$, and satisfies \eqref{3.10}. Then, on $[x_0,c)$, \eqref{3.13} and consequently \eqref{3.12} follow, which implies that \eqref{3.3} holds, and hence that $M\in{\mathcal D}(z,c,x_0,\alpha)$. The analogous characterization of ${\mathcal D}_{\pm}(z,x_0,\alpha)$ follows from Definition~\ref{dLWD}. \end{proof}
By Lemma~\ref{l3.3} one notes, for $z\in {\mathbb{C}}\backslash{\mathbb{R}}$, that $M\in {\mathcal D}(z,c,x_0,\alpha)$ if and only if \begin{equation}\label{3.14} V(z,x,x_0,\alpha)= u_2(z,x,x_0,\alpha)u_1(z,x,x_0,\alpha)^{-1}, \quad x\in [x_0,c), \end{equation} is well-defined while satisfying \begin{equation}\label{3.15} \sigma(c,x_0,z)\text{\rm Im} (V(z,x,x_0,\alpha)) > 0, \quad x\in [x_0,c). \end{equation} In terms of $V(z,x,x_0,\alpha)$ and by \eqref{3.9}, one notes that \begin{equation}\label{3.16} \begin{split} \vartheta(z,x,x_0,\alpha) &= [I_m + i\sigma(c,x_0,z)V(z,x,x_0,\alpha)]\times\\ & \quad \times[I_m - i\sigma(c,x_0,z)V(z,x,x_0,\alpha)]^{-1},\quad x\in[x_0,c), \end{split} \end{equation} is a Cayley-type transformation of $V(z,x,x_0,\alpha)$. In the scalar context, this transformation corresponds to a conformal mapping of the complex upper half-plane to the unit disk. Moreover, defined as it is, $V(z,x,x_0,\alpha)$ satisfies a Riccati differential equation that is associated with the Hamiltonian system \eqref{HSa} while $\vartheta(z,x,x_0,\alpha)$ satisfies a Riccati equation obtained by the Cayley-type transformation \eqref{3.16} applied to the differential equation satisfied by $V(z,x,x_0,\alpha)$.
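The mapping property behind \eqref{3.16} can be illustrated numerically (a sketch under the assumption $\sigma(c,x_0,z)=+1$; the sample matrix below is random): if $\text{\rm Im}(V)>0$, then $\vartheta=(I_m+iV)(I_m-iV)^{-1}$ satisfies $I_m-\vartheta^*\vartheta=4(I_m-iV)^{-*}\,\text{\rm Im}(V)\,(I_m-iV)^{-1}>0$, the matrix analog of the conformal map of ${\mathbb{C}}_+$ onto the unit disk.

```python
# Numerical sketch of the Cayley-type transform (3.16) with sigma = +1:
# a matrix V with positive definite imaginary part is mapped to a strict
# contraction theta, i.e., I - theta^* theta > 0.
import numpy as np

rng = np.random.default_rng(0)
m = 3
H = rng.standard_normal((m, m))
ReV = (H + H.T) / 2                        # arbitrary Hermitian real part
P = H @ H.T + m * np.eye(m)                # positive definite imaginary part
V = ReV + 1j * P
I = np.eye(m)

theta = (I + 1j * V) @ np.linalg.inv(I - 1j * V)
gap = I - theta.conj().T @ theta           # = 4 (I-iV)^{-*} Im(V) (I-iV)^{-1}
assert (np.linalg.eigvalsh(gap) > 0).all()
```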
For the Dirac-type case of \eqref{HSa}, a simple calculation shows that $V(z,x,x_0,\alpha_0)$ satisfies a particular initial value problem for a Riccati differential equation.
\begin{lemma} \label{l3.5} Assume Hypotheses~\ref{h2.1}, \ref{h2.2}, and \ref{h2.4}. Let $\alpha_0=(I_m\; 0)\in{\mathbb{C}}^{m\times 2m}$, let $u_j(z,x)=u_j(z,x,x_0,\alpha_0)$, $j=1,2$, be defined by \eqref{2.14}, and suppose that $V(z,x,x_0,\alpha_0)$ is well-defined by \eqref{3.14}. Then, $V(z,\cdot)=V(z,\cdot,x_0,\alpha_0)$ satisfies, \begin{subequations}\label{3.17} \begin{align} &V'(z,x) +zV(z,x)^2 + V(z,x) B_{2,2}(x)V(z,x) + B_{1,2}(x)V(z,x) + V(z,x)B_{2,1}(x) \nonumber \\\ & + B_{1,1}(x) +zI_m =0, \label{3.17a}\\ &V (z,x_0)=M, \label{3.17b} \end{align} \end{subequations} where $B_{j,k}\in {\mathbb{C}}^{m\times m}$, $j,k= 1,2$, are defined in \eqref{2.1d}. \end{lemma}
Hence, by Lemma~\ref{l3.3}, the associated relations \eqref{3.14} and \eqref{3.15}, and the uniqueness of solutions for \eqref{3.17}, we obtain the following result for the Dirac-type case.
\begin{theorem}\label{t3.6} Assume Hypotheses~\ref{h2.1}, \ref{h2.2}, and \ref{h2.4}, and let $\alpha_0=(I_m\; 0)\in{\mathbb{C}}^{m\times 2m}$. Then, $M\in {\mathcal D}(z,c,x_0,\alpha_0)$ if and only if the initial value problem given by \eqref{3.17} has a solution, $V(z,\cdot)$, well-defined and satisfying \begin{equation}\label{3.18}
\sigma(c,x_0,z)\text{\rm Im} (V(z,x)) > 0,\quad x\in[x_0,c). \end{equation} Moreover, $M\in {\mathcal D}_{\pm}(z,x_0,\alpha_0)$ if and only if \eqref{3.18} holds for $c=\pm\infty$. \end{theorem}
\begin{remark} \label{r3.6a} An important consequence of Theorem~\ref{t3.6} and the uniqueness of solutions for \eqref{3.17} is that solution trajectories for \eqref{3.17}, which satisfy \eqref{3.18}, consist of elements of Weyl disks; that is, \begin{equation}\label{3.19} V(z,x,x_0,\alpha_0)\in {\mathcal D}(z,c,x,\alpha_0), \quad x\in [x_0,c). \end{equation} Given the characterization of ${\mathcal D}(z,c,x_0,\alpha_0)$ in Definition~2.7A, for each $x\in [x_0,c)$ there is a $\beta\in{\mathbb{C}}^{m\times 2m}$ with $\sigma(c,x_0,z)\text{\rm Im} (\beta_2\beta_1^*)\ge 0$, such that \begin{equation} V(z,x,x_0,\alpha_0)=M(z,c,x,\alpha_0,\beta). \end{equation} It is in this sense that we let $M(z,c,x,\alpha_0)$ denote our solution of the initial value problem \eqref{3.17} that satisfies \eqref{3.18}. Analogously, \begin{equation} V(z,x,x_0,\alpha_0)\in {\mathcal D}_{\pm}(z,x,\alpha_0),\quad x\in [x_0,\pm\infty), \end{equation} for trajectories of \eqref{3.17} that satisfy \eqref{3.18} for $c=\pm\infty$. Hence, in this sense, we let $M_{\pm}(z,x,\alpha_0)$ denote those solutions of \eqref{3.17} that satisfy \eqref{3.18} for $c=\pm\infty$. However, by Lemma~\ref{l2.15}, our Dirac system is in the limit point case at $\pm\infty$. Each ${\mathcal D}_{\pm}(z,x,\alpha_0)$ consists of a unique matrix, and thus $M_{\pm}(z,x,\alpha_0)$ describes {\em unique} trajectories for \eqref{3.17a}. This contrasts with the matrix-valued Schr\"odinger case considered in \cite{CG99}, where there are as many trajectories, each denoted by either $M_{+}(z,x,\alpha_0)$ or $M_{-}(z,x,\alpha_0)$, as there are matrices in a given initial disk ${\mathcal D}_{\pm}(z,x_0,\alpha_0)$. \end{remark}
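For orientation, the relation in Lemma~\ref{l3.5} between the linear system and the Riccati equation \eqref{3.17} can be checked numerically in the scalar case $m=1$ (a sketch only; the coefficients $B_{j,k}$ below are illustrative samples, subject merely to $B=B^*$): integrating $Ju^\prime=(zI_2+B)u$ and \eqref{3.17a} from the common initial value $V(z,x_0)=M$, the trajectory $u_2u_1^{-1}$ reproduces $V$.

```python
# Numerical sketch: for m = 1, V = u_2 u_1^{-1} satisfies the Riccati
# equation (3.17a) whenever (u_1, u_2)^t solves J u' = (z I_2 + B) u with
# J = [[0, -1], [1, 0]].  Both systems are integrated with a common RK4
# stepper for sample (illustrative) coefficients B, and compared.
import numpy as np

z = 1.0 + 0.5j
B = lambda x: np.array([[np.sin(x), 0.3], [0.3, np.cos(x)]])  # B = B^*

def f_lin(x, u):                      # J u' = (z I_2 + B(x)) u, solved for u'
    b = B(x)
    u1, u2 = u
    return np.array([b[1, 0] * u1 + (z + b[1, 1]) * u2,
                     -(z + b[0, 0]) * u1 - b[0, 1] * u2])

def f_ric(x, V):                      # (3.17a) specialized to m = 1
    b = B(x)
    return -(z + b[0, 0]) - (b[0, 1] + b[1, 0]) * V - (z + b[1, 1]) * V**2

def rk4(f, x, y, h, n):
    for _ in range(n):
        k1 = f(x, y); k2 = f(x + h/2, y + h/2*k1)
        k3 = f(x + h/2, y + h/2*k2); k4 = f(x + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); x += h
    return y

M = 0.2 + 0.8j                        # common initial value V(z, x_0) = M
u = rk4(f_lin, 0.0, np.array([1.0 + 0j, M]), 1e-3, 500)
V = rk4(f_ric, 0.0, M, 1e-3, 500)
assert abs(u[1] / u[0] - V) < 1e-8    # the two trajectories agree
```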
Now for the Dirac-type case \eqref{DS} with $\alpha_0=(I_m\; 0)\in {\mathbb{C}}^{m\times 2m}$, with $\vartheta(z,x)=\vartheta(z,x,x_0,\alpha_0)$ defined in \eqref{3.9} and \eqref{3.16}, and with $x\in[x_0,c)$, one concludes that \begin{equation}\label{3.22} \vartheta(z,x)[u_1(z,x)-i\sigma(c,x_0,z)u_2(z,x)]=u_1(z,x)+ i\sigma(c,x_0,z)u_2(z,x), \end{equation} and hence that
\begin{subequations}\label{3.23} \begin{align} I_m + \vartheta(z,x)&= 2u_1(z,x)[u_1(z,x)-i\sigma(c,x_0,z)u_2(z,x)]^{-1},\\ I_m - \vartheta(z,x)&= -2i\sigma(c,x_0,z)u_2(z,x)[u_1(z,x)-i \sigma(c,x_0,z)u_2(z,x)]^{-1}. \end{align} \end{subequations}
Differentiating \eqref{3.22} one obtains
\begin{equation} \begin{split} \vartheta'(u_1-i\sigma u_2)&=(I_m -\vartheta) (zu_2 + B_{2,1}u_1 + B_{2,2}u_2) \\ & \quad +i\sigma (I_m +\vartheta)(-zu_1 - B_{1,1}u_1 -B_{1,2}u_2). \end{split} \end{equation}
By \eqref{3.23} one concludes that $\vartheta(z,\cdot,x_0,\alpha_0)$ satisfies the initial value problem given by
\begin{subequations}\label{3.25} \begin{align} \vartheta'(z,x)&= \frac{1}{2} \begin{pmatrix}I_m + \vartheta(z,x)^t\\ I_m - \vartheta(z,x)^t\end{pmatrix}^t\times \nonumber \\ & \quad \times \begin{pmatrix} -i\sigma(c,x_0,z)(zI_m +B_{1,1}(x)) & B_{1,2}(x)\\B_{2,1}(x)& i\sigma(c,x_0,z)(zI_m +B_{2,2}(x)) \end{pmatrix}\times\notag \\ & \quad \times\begin{pmatrix}I_m + \vartheta(z,x)\\ I_m - \vartheta(z,x)\end{pmatrix}, \label{3.25a} \\[5pt] \vartheta(z,x_0)&=(I_m +i\sigma(c,x_0,z) M) (I_m -i\sigma(c,x_0,z) M)^{-1},\label{3.25b} \end{align} \end{subequations} where $B_{j,k}\in {\mathbb{C}}^{m\times m}$, $j,k= 1,2$, satisfy Hypothesis~\ref{h2.1}.
By Lemma~\ref{l3.4} and the uniqueness of solutions for \eqref{3.25}, one obtains the following result in the Dirac-type case \eqref{DS}.
\begin{theorem}\label{t3.7} Assume Hypothesis~\ref{h2.4}. Then $M\in {\mathcal D}(z,c,x_0,\alpha_0)$ if and only if the initial value problem given by \eqref{3.25} has a solution, $\vartheta(z,\cdot)$ which is well-defined on $[x_0,c]$ and satisfies \begin{equation}\label{3.26} I_m-\vartheta(z,x)^*\vartheta(z,x)> 0,\quad x\in[x_0,c). \end{equation} Moreover, $M\in {\mathcal D}_{\pm}(z,x_0,\alpha_0)$ if and only if \eqref{3.26} holds for $c=\pm\infty$. \end{theorem}
Given the positivity present in \eqref{3.26}, we note the exact correspondence which exists, by \eqref{3.16}, between solutions of \eqref{3.17} that satisfy \eqref{3.18} and those solutions of \eqref{3.25} that satisfy \eqref{3.26}. In particular, given Remark~\ref{r3.6a}, we rewrite \eqref{3.16} as \begin{equation}\label{3.27} \begin{split} \vartheta(z,x,x_0,\alpha_0) &= [I_m + i\sigma(c,x_0,z)M(z,c,x,\alpha_0)]\times\\
& \quad \times[I_m - i\sigma(c,x_0,z)M(z,c,x,\alpha_0)]^{-1}, \quad x\in[x_0,c). \end{split} \end{equation} Moreover, our Dirac system is in the limit point case at $\pm\infty$. Consequently, there are unique solutions of \eqref{3.25}, $\vartheta_{\pm}(z,\cdot,x_0,\alpha_0)$, $z\in{\mathbb{C}}\backslash{\mathbb{R}}$, which satisfy \eqref{3.26} for $c=\pm\infty$, and which correspond to the unique solutions of \eqref{3.17}, $M_{\pm}(z,x,\alpha_0)$, which satisfy \eqref{3.18} for $c=\pm\infty$; specifically, \begin{equation}\label{3.28} \vartheta_{\pm}(z,x,x_0,\alpha_0)= [I_m \pm i\sigma(z)M_{\pm}(z,x,\alpha_0)][I_m \mp i\sigma(z)M_{\pm}(z,x,\alpha_0)]^{-1}. \end{equation}
These relationships form the basis for the analysis to follow. The asymptotic result \eqref{3.1} is obtained by analyzing the corresponding asymptotic behavior of all solutions $\vartheta(z,\cdot,x_0,\alpha_0)$ of \eqref{3.25}, which include among them the particular solutions $\vartheta_{\pm}(z,\cdot,x_0,\alpha_0)$. Thus, asymptotic behavior is deduced for all corresponding solutions $M(z,c,\cdot,\alpha_0)$ of \eqref{3.17}, which include among them the solutions $M_{\pm}(z,\cdot,\alpha_0)$. The advantage of this approach comes from the compactification inherent in the Cayley-type transformation \eqref{3.27}, and the resulting boundedness of the solutions as a consequence of \eqref{3.26}.
We pause for a moment to address, in the following remark, a point raised by us in \cite{CG99} for the matrix-valued Schr\"odinger case described in \eqref{SS}.
\begin{remark}\label{r3.3} With $u_j(z,x)=u_j(z,x,x_0,\alpha)$, $j=1,2$, defined in \eqref{2.14} for the general Hamiltonian system \eqref{HSa}, an analog to Lemma~\ref{l3.4} for the characterization of ${\mathcal D}(z,c,x_0,\alpha)$ is obtained by replacing the expression in \eqref{3.8} with \begin{equation}
u_1(z,x) -i|z|^{-1/2}\sigma(c,x_0,z)u_2(z,x), \end{equation} and by replacing the definition for $\vartheta(z,x)=\vartheta(z,x,x_0,\alpha)$ given in \eqref{3.9} with \begin{equation} \begin{split}
\vartheta(z,x) =& (u_1(z,x) +i|z|^{-1/2}\sigma(c,x_0,z)u_2(z,x))\times\\
& \times (u_1(z,x) -i|z|^{-1/2}\sigma(c,x_0,z)u_2(z,x))^{-1}. \end{split} \end{equation} Specific to the matrix-valued Schr\"odinger case, we obtain analogs of Lemma~\ref{l3.5}, Theorem~\ref{t3.6}, and Theorem~\ref{t3.7} by replacing equation \eqref{3.17a} with \begin{equation} V'(z,x) + V(z,x)^2 - Q(x) +zI_m = 0 \end{equation} and by replacing the equations in \eqref{3.25} with \begin{subequations}\label{3.280} \begin{align} \vartheta'(z,x)&= \sigma(c,x_0,z)\frac{1}{2} \begin{pmatrix}I_m +\vartheta(z,x)^t\\ I_m - \vartheta(z,x)^t \end{pmatrix}^t
\begin{pmatrix} -i|z|^{-1/2}(zI_m - Q(x)) & 0 \\
0 & i|z|^{-1/2}I_m \end{pmatrix}\times\notag \\ & \quad \times\begin{pmatrix}I_m + \vartheta(z,x)\\ I_m - \vartheta(z,x)\end{pmatrix},\label{3.280a} \\[5pt]
\vartheta(z,x_0)&=(I_m + i|z|^{-1/2}\sigma(c,x_0,z)M)
(I_m - i|z|^{-1/2}\sigma(c,x_0,z)M)^{-1}.\label{3.280b} \end{align} \end{subequations} ${\mathcal D}^{{\mathcal R}}(z,c,x_0,\alpha_0)$ was defined in \cite{CG99} to be the set of those $M\in {\mathbb{C}}^{m\times m}$ for which the initial value problem given by \eqref{3.280} has a solution, $\vartheta(z,x)$, which is well-defined on $[x_0,c]$ and satisfies \eqref{3.26}. In \cite{CG99} we showed that ${\mathcal D}(z,c,x_0,\alpha_0)\subseteq{\mathcal D}^{{\mathcal R}}(z,c,x_0,\alpha_0)$. This was sufficient for the subsequent analysis in \cite{CG99}. However, as the analog of Theorem~\ref{t3.7} now shows, one actually has equality of the two disks in \cite{CG99}, that is, \begin{equation} {\mathcal D}(z,c,x_0,\alpha_0)={\mathcal D}^{{\mathcal R}}(z,c,x_0,\alpha_0). \end{equation} \end{remark}
To obtain a proof of \eqref{3.1} for the Dirac-type case, we adapt an approach due to Atkinson \cite{At88} for proving a result analogous to \eqref{3.1} for the matrix-valued Schr\"odinger case (cf., e.g., \cite[Theorem 3.1]{CG99}). In light of Remark~\ref{r3.2}, we begin by restricting our attention to $z\in{\mathbb{C}}_+$ and, as in the previous discussion, take $\alpha_0=(I_m\; 0)\in {\mathbb{C}}^{m\times 2m}$.
First we introduce two systems related to \eqref{3.25} by means of a change of variables. Let \begin{equation}\label{3.29} \varphi(z,t)=\vartheta(z,x),\qquad
t= (x-x_0)|z|,\qquad x \in [ x_0 , c ). \end{equation} With this change, \eqref{3.25} becomes \begin{subequations}\label{3.30} \begin{align}\label{3.30a}
\varphi'(z,t)&= \frac{1}{2} |z|^{-1}\begin{pmatrix}I_m + \varphi(z,t)^t\\ I_m - \varphi(z,t)^t\end{pmatrix}^t \begin{pmatrix}\mp i(zI_m +\widetilde B_{1,1}(t)) & \widetilde B_{1,2}(t)\\[1mm] \widetilde B_{2,1}(t)& \pm i(zI_m +\widetilde B_{2,2}(t)) \end{pmatrix}\times \notag \\ & \quad \times\begin{pmatrix}I_m + \varphi(z,t)\\ I_m - \varphi(z,t)\end{pmatrix}. \end{align} \noindent With $M=M(z,c,x_0,\alpha_0)\in {\mathcal D}(z,c,x_0,\alpha_0)$
\eqref{3.25b} becomes \begin{equation}\label{3.30b} \varphi(z,0)=(iI_m \mp M(z,c,x_0,\alpha_0)) (iI_m \pm M(z,c,x_0,\alpha_0))^{-1}, \end{equation} \noindent and \eqref{3.26} becomes \begin{equation}\label{3.30c} \varphi(z,t)^*\varphi(z,t)< I_m
\qquad t\in[ 0, (c-x_0)|z|), \end{equation} where in \eqref{3.30a}, \begin{equation}\label{3.30d}
\widetilde B_{j,k}(t)= B_{j,k}(x_0 +t|z|^{-1}), \quad j,k=1,2. \end{equation} \end{subequations} In the complete system \eqref{3.30}, one now has a set of conditions equivalent to system \eqref{3.25} and \eqref{3.26}.
We recall that $C_\varepsilon \subset {\mathbb{C}}_+$ represents the open sector with vertex at zero, symmetry axis along the positive imaginary axis, and opening angle $\varepsilon$, with $0<\varepsilon <\pi/2$. Next, consider a sequence, $z_n \in
{\mathbb{C}}_{\varepsilon}$, $n\in{\mathbb{N}} $, such that $|z_n| \to \infty$ as $n\to \infty$ and such that \begin{equation}\label{3.31} 0< \varepsilon < \delta_n = \arg{(z_n)} < \pi - \varepsilon. \end{equation} By choosing an appropriate subsequence, we may assume that \begin{equation}\label{3.32} \delta_n \to \delta \in [\varepsilon, \pi - \varepsilon]. \end{equation} Let $\varphi (z_n ,t)$ denote a corresponding sequence of functions that satisfy \eqref{3.30a} and \eqref{3.30c}, with initial data, $\varphi (z_n ,0)$, defined by \eqref{3.30b} for a sequence of points $M(z_n,c,x_0,\alpha_0)$, where each $M(z_n,c,x_0,\alpha_0)$ is chosen to be an element of the disk ${\mathcal D}(z_n,c,x_0,\alpha_0)$. Note that as $z_n\to \infty$, the intervals described in \eqref{3.30c} eventually cover all compact subintervals of $ {\mathbb{R}}_+$. Given the uniform boundedness of $\varphi_n(t)=\varphi (z_n ,t)$ described in \eqref{3.30c}, we assume, upon passing to an appropriate subsequence still denoted by $\varphi_n (0)$, that \begin{equation}\label{3.33} \varphi_n(0) = \varphi (z_n,0) \rightarrow \varphi_\pm (\delta), \ \text{for} \ \pm (c-x_0) > 0 \ \text{ as } n\rightarrow \infty , \end{equation} and as a consequence, that \begin{equation}\label{3.34} {\varphi_\pm(\delta)}^* \varphi_\pm(\delta) \le I_m. \end{equation}
With $\varphi_\pm(\delta)$ defined in \eqref{3.33} as
$|z_n|\to\infty$, we consider limiting systems associated with \eqref{3.30}: \begin{subequations}\label{3.35} \begin{align} \eta_\pm '(t)&= \frac{1}{2} \begin{pmatrix} I_m+ \eta_\pm (t)^t \\ I_m-\eta_\pm (t)^t \end{pmatrix} ^t \begin{pmatrix} \mp ie^{i\delta}I_m & 0\\ 0 & \pm ie^{i\delta}I_m \end{pmatrix}\begin{pmatrix} I_m+\eta_\pm (t)\\ I_m- \eta_\pm (t) \end{pmatrix}, \quad \pm t\ge 0, \label{3.35a} \\ \eta_\pm (0)&= \varphi_\pm(\delta). \label{3.35b} \end{align} \end{subequations}
\begin{theorem}\label{t3.8} Assume Hypothesis~\ref{h2.4}. Then the solution $\eta_\pm$ of \eqref{3.35} satisfies \begin{equation}\label{3.36} \eta_\pm (t)^* \eta_\pm(t) \le I_m,\quad t\in [0, \pm\infty). \end{equation} Moreover, the solutions $\varphi_n =\varphi (z_n,\cdot)$ of \eqref{3.30} converge to $\eta_\pm$ uniformly on $[0,\pm T]$ for every $T>0$, as $n\to \infty $. \end{theorem}
\begin{proof} In this proof, we consider only the case corresponding to $t\ge 0$, that is, $\eta_+(0)=\varphi_+(\delta)$ in \eqref{3.35b}. The other case follows in a similar manner. For this reason, we let $\eta( t)= \eta_+( t)$ in the remaining discussion. We also let $T\in {\mathbb{R}}_+$ be the greatest value such that \eqref{3.36} holds for $t\in [0, T] $ and show that \eqref{3.36} must hold for some $[0, T'] $ with $ T' > T $, thus proving $T=\infty.$\\ The solution of \eqref{3.35}, $\eta$, presumed to be defined on $[0, T]$, can be continued onto some $[0, T']$ with $T' > T$; $\eta$ then satisfies \begin{equation}\label{3.37} \eta (t)^* \eta (t) \le \kappa^2 I_m \end{equation} for $0\le t \le T'$ and for some $\kappa\ge 1$. \\ For brevity, let $\varphi'_n (t)= G_n(\varphi_n,t) $ denote \eqref{3.30a} with $z=z_n$, and let $\eta'(t)= H(\eta,t) $ denote \eqref{3.35a} in the following. Integrating \eqref{3.35a} and \eqref{3.30a}, one obtains \begin{align}\label{3.38} \varphi_n (t) -\eta (t) &= \varphi_n(0) -\varphi_+(\delta) + \int_0^t ds \{ G_n(\eta,s) - H(\eta,s)\} \nonumber \\ &\quad + \int_0^t ds \{ G_n(\varphi_n,s) - G_n(\eta,s)\}. \end{align} We note that \begin{align}\label{3.39} G_n(\eta ,s) - H(\eta ,s) &= \frac{1}{2}i(e^{i\delta} - e^{i\delta_n}) (I_m +\eta(s) )^2 -\frac{1}{2}i(e^{i\delta} - e^{i\delta_n}) (I_m -\eta(s) )^2 +\nonumber\\ & \quad + \sum_{j,k =1}^2 F_{j,k}(z_n,s), \end{align} where \begin{subequations}\label{3.40} \begin{align}
F_{1,1}(z_n,s)&=-\frac{1}{2}i|z_n|^{-1} (I_m+\eta(s))\widetilde B_{1,1}(s)(I_m+\eta(s)),\\
F_{2,2}(z_n,s)&=\frac{1}{2}i|z_n|^{-1} (I_m-\eta(s))\widetilde B_{2,2}(s)(I_m-\eta(s)),\\
F_{1,2}(z_n,s)&=\frac{1}{2}i|z_n|^{-1} (I_m+\eta(s))\widetilde B_{1,2}(s)(I_m-\eta(s)),\\
F_{2,1}(z_n,s)&=\frac{1}{2}i|z_n|^{-1} (I_m-\eta(s))\widetilde B_{2,1}(s)(I_m+\eta(s)). \end{align} \end{subequations} Thus, for $t\in [0,T']$, \eqref{3.37} implies that as $n\to\infty$ \begin{equation}
|e^{i\delta}-e^{i\delta_n}|\int_0^t
\|I_m \pm \eta (s) \|_{{\mathbb{C}}^{m\times m}}^2 ds= o (1), \end{equation} and together with \eqref{3.29} and \eqref{3.30d} that \begin{equation}\label{3.42}
\int_0^t \| F_{j,k}(s) \|_{{\mathbb{C}}^{m\times m}}ds =
O\bigg( \int_{x_0}^{x_0 + t|z_n|^{-1}} \| \widetilde B_{j,k}(s) \|_{{\mathbb{C}}^{m\times m}} ds\bigg) =o(1). \end{equation}
(Here $\|\cdot\|_{{\mathbb{C}}^{m\times m}}$ denotes a norm on ${\mathbb{C}}^{m\times m}$.) Hence, by \eqref{3.39}--\eqref{3.42}, one infers that for $t\in [0,T']$ and as $n\to\infty$, \begin{equation}\label{3.43} \int_0^t \{ G_n(\eta,s) -H(\eta,s)\} ds = o(1). \end{equation} Next, one notes that \begin{equation}\label{3.44} G_n(\varphi_n,s) - G_n(\eta,s) = 2ie^{i\delta_n} (\eta(s) - \varphi_n(s)) + \sum_{j,k =1}^2 K_{j,k}(z_n,s), \end{equation} where \begin{subequations} \begin{align}
K_{1,1}(z_n,s) &= \frac{-i}{2}|z_n|^{-1} \{ (I_m +\varphi_n) B_{1,1}(s) (\varphi_n -\eta) + (\varphi_n -\eta)B_{1,1}(s) (I_m +\eta) \},\\ K_{2,2}(z_n,s) &=
\frac{i}{2}|z_n|^{-1}\{ (I_m -\varphi_n) B_{2,2}(s) (\eta -\varphi_n) + (\eta -\varphi_n)B_{2,2}(s) (I_m -\eta) \},\\
K_{1,2}(z_n,s) &= \frac{1}{2}|z_n|^{-1}\{(I_m +\varphi_n)B_{1,2}(s)(\eta -\varphi_n) + (\varphi_n -\eta)B_{1,2}(s)(I_m -\eta) \},\\ K_{2,1}(z_n,s) &=
\frac{1}{2}|z_n|^{-1}\{(I_m -\varphi_n) B_{2,1}(s) (\varphi_n -\eta) + (\eta -\varphi_n)B_{2,1}(s) (I_m +\eta) \}. \end{align} \end{subequations} By \eqref{3.34} and \eqref{3.37}, for $s\in [0,T']$, \begin{equation}\label{3.46}
\| I_m \pm \varphi_n(s) \|_{{\mathbb{C}}^{m\times m}}\le 2,
\qquad \| I_m \pm \eta(s) \|_{{\mathbb{C}}^{m\times m}}\le \kappa +1, \end{equation} and hence by \eqref{3.44}--\eqref{3.46}, \begin{align}\label{3.47}
&\| G_n(\varphi_n,s) - G_n(\eta,s) \|_{{\mathbb{C}}^{m\times m}} \nonumber \\
&\le \|
\eta (s)- \varphi_n (s)\|_{{\mathbb{C}}^{m\times m}} \bigg\{ 2 +
\frac{|z_n|^{-1}}{2}(3+\kappa)\sum_{j,k=1}^2\| \widetilde B_{j,k}(s) \|_{{\mathbb{C}}^{m\times m}} \bigg\}. \end{align} Of course, by \eqref{3.33} as $n\to \infty$, \begin{equation}\label{3.48} \varphi_n(0) - \varphi_+(\delta) = o(1). \end{equation} Thus, by \eqref{3.42}, \eqref{3.47}, and \eqref{3.48}, one concludes for $t\in [0,T']$ and as $n\to \infty$, that \begin{align}\label{3.49}
&\|\varphi_n(t)-\eta(t) \|_{{\mathbb{C}}^{m\times m}} \le o(1) \nonumber \\
&+ \int_0^t \|\varphi_n(s)-\eta(s) \|_{{\mathbb{C}}^{m\times m}} \bigg\{ 2 +
\frac{|z_n|^{-1}}{2}(3+\kappa)\sum_{j,k=1}^2\| \widetilde B_{j,k}(s) \|_{{\mathbb{C}}^{m\times m}} \bigg\}ds. \end{align}
Gronwall's inequality applied to \eqref{3.49}, together with a consideration of the effect of the variable change \eqref{3.29}, as illustrated in \eqref{3.42}, yields \begin{equation}\label{3.50} \varphi_n(t)-\eta(t)\to 0 \,\text{ as } \, n\to \infty \end{equation} uniformly for $t\in [0,T']$. Thus, by \eqref{3.30c}, $\eta$ satisfies \eqref{3.36} for all $t\in [0,T']$, contradicting the maximality of $T$. \end{proof}
We next determine which solutions of \eqref{3.35} satisfy \eqref{3.36}.
\begin{lemma} Assume Hypothesis~\ref{h2.4}. If $\eta_\pm$ is a solution of \eqref{3.35a} which satisfies \eqref{3.36}, then \begin{equation}\label{3.51} 0=\eta_\pm (t) ,\quad t\in [0, \pm\infty). \end{equation} \end{lemma}
\begin{proof} We note that \eqref{3.35a} is equivalent to \eqref{3.30a} with $\widetilde B=0$. By the variable change \eqref{3.29}, \eqref{3.35a} is also equivalent to \eqref{3.25a} with $B=0$. Next, we recall the connection between the Riccati-type equations \eqref{3.25a} and \eqref{3.17a} by means of the Cayley transformation \eqref{3.27}. Solution matrices of \eqref{3.35a} which satisfy \eqref{3.36} at $t=0$ thus correspond to solution matrices, $V(z,\cdot)$, of \eqref{3.17a} for which $\text{\rm Im} (V(z,x_0))\ge 0$. Moreover, solutions of \eqref{3.17a} for which $\text{\rm Im} (V(z,x_0))\ge 0$ are obtainable from solutions of \eqref{HSa}, with $B=0$, by means of \eqref{2.14} with $\text{\rm Im} (M)\ge 0$. Thus, by utilizing this connection between explicit exponential solutions of \eqref{HSa} with $B=0$ and solutions of the Riccati-type equation \eqref{3.17a}, and by performing on the resulting solution of \eqref{3.17a} the conformal mapping \eqref{3.27} followed by the variable transformation \eqref{3.29}, one obtains the following solution for \eqref{3.30a}, \begin{equation}\label{3.52} \varphi (z,t)= (iI_m \mp M)(iI_m \pm M)^{-1} \exp(\mp 2ite^{i\delta}), \end{equation} for $\pm t\ge 0$, $\text{\rm Im} (\pm M)\ge 0$, and $z\in {\mathbb{C}}_+$. By hypothesis, $0<\delta<\pi$. Thus the exponential term in \eqref{3.52} will result in \begin{equation}
\| \varphi (z,t) \|_{{\mathbb{C}}^{m\times m}} \to \infty \ \text{ as } \ t\to \pm\infty \end{equation} unless \begin{equation} M = \pm i I_m, \end{equation} thus implying \eqref{3.51}. \end{proof}
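The divergence of the exponential factor in \eqref{3.52} is elementary: for $0<\delta<\pi$ one has $|\exp(-2ite^{i\delta})|=e^{2t\sin\delta}\to\infty$ as $t\to+\infty$. A two-line numerical confirmation (the value $\delta=\pi/3$ is an arbitrary illustrative choice):

```python
import numpy as np

delta = np.pi / 3                      # any 0 < delta < pi
t = np.linspace(0.0, 5.0, 6)
mod = np.abs(np.exp(-2j * t * np.exp(1j * delta)))
# modulus equals exp(2 t sin(delta)), hence grows without bound in t
print(np.allclose(mod, np.exp(2 * t * np.sin(delta))))  # True
```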
\noindent One then obtains the following result.
\begin{corollary}\label{c3.10} With $\varphi_\pm(\delta)$ defined in \eqref{3.33}, $\eta_\pm(0)=\varphi_\pm(\delta)=0$. \end{corollary}
For $M(z_n,c,x_0,\alpha_0)\in{\mathcal D}(z_n,c,x_0,\alpha_0)$, it follows by \eqref{3.30b}, \eqref{3.33}, and Corollary~\ref{c3.10} that \begin{equation}\label{3.55} [iI_m \mp M(z_n,c,x_0,\alpha_0)][iI_m \pm M(z_n,c,x_0,\alpha_0)]^{-1} = o(1), \qquad\pm (c-x_0)>0, \end{equation} as $n\to \infty$. Hence one infers, for elements of ${\mathcal D}(z_n,c,x_0,\alpha_0)$, that \begin{equation}\label{3.56} M(z_n,c,x_0,\alpha_0)= \pm iI_m + o(1), \qquad \pm (c-x_0)>0, \end{equation}
as $|z|\to \infty$ in $C_\varepsilon$. This proves \eqref{3.1}. Actually, \eqref{3.56} is a statement for all elements of ${\mathcal D}(z,c,x_0,\alpha_0)$ including the particular element $M_{\pm}(z,x_0,\alpha_0)$, for $\pm (c-x_0)>0$.
In \eqref{3.1} an asymptotic expansion is given that is uniform with respect to $\arg(z)$ for $|z| \to \infty$ in $C_\varepsilon$. We now vary the reference point, $x_0$, and observe that the asymptotic expansion in \eqref{3.1} is also uniform with respect to $x_0$ whenever $x_0$ is confined to a compact subset of ${\mathbb{R}}$.
\begin{theorem} \label{t3.12} Assume Hypothesis~\ref{h2.4}. Let $\alpha_0 =(I_m\; 0)\in{\mathbb{C}}^{m\times 2m}$, and denote by $C_\varepsilon\subset {\mathbb{C}}_+$ the open sector with vertex at zero, symmetry axis along the positive imaginary axis, and opening angle $\varepsilon$, with $0<\varepsilon<\pi/2$. Let $M_\pm(z,x_0,\alpha_0)$ be the unique elements of the limit disks ${\mathcal D}_\pm (z,x_0,\alpha_0)$ for the Dirac system given by \eqref{HS} and \eqref{DS}. Then, \begin{equation}\label{3.57}
M_\pm(z,x,\alpha_0) \underset{\substack{|z| \to\infty\\z\in C_\varepsilon}}{=} \pm iI_m +o(1) \end{equation}
uniformly with respect to $\arg(z)$, for $|z| \to \infty$ in $C_\varepsilon$, and uniformly with respect to $x$, as long as $x$ varies in compact subsets of $[x_0,\pm\infty)$. \end{theorem}
\begin{proof} We note that the system \eqref{3.35} is independent of the reference point $x_0$. Next, we recall that $\delta$, defined in \eqref{3.32}, is determined by an a priori choice of the sequence $z_n$, subject only to $z_n$ being in $C_\varepsilon$ (cf.~\eqref{3.31}). Moreover, we note that $\varphi_\pm(\delta)$, defined as a limit in \eqref{3.33}, described explicitly in Corollary~\ref{c3.10}, and which gives solutions of \eqref{3.35} satisfying \eqref{3.36} for $t\in [0, \pm\infty)$, is also independent of the reference point $x_0$. Thus, had we chosen a different point of reference, $x_0'\ne x_0$, at the start, the asymptotic analysis begun in Theorem~\ref{t3.8} and continued through \eqref{3.55} would remain the same after the variable change in \eqref{3.29}, except for the integral expression in \eqref{3.42}, in which $x_0$ would be replaced by $x_0'$. However, given the local integrability assumption on $B$ in Hypothesis~\ref{h2.1}, one concludes that this integral expression is uniformly continuous with respect to $x_0$ whenever $x_0$ is confined to a compact subset of ${\mathbb{R}}$. Thus \eqref{3.42}, and consequently \eqref{3.50}, are uniform with respect to $t$ and with respect to $x_0$ whenever both are confined to compact subsets of ${\mathbb{R}}$. Consequently, \eqref{3.55}
holds for all elements of ${\mathcal D}(z,c,x_0,\alpha_0)$; moreover, this asymptotic expansion is uniform with respect to $\arg (z)$ for $|z|\to \infty$ in $C_\varepsilon$, and uniform with respect to $x_0$ when $x_0$ is confined to compact subsets of ${\mathbb{R}}$. \end{proof}
\begin{remark}\label{r3.2} (i) In the special case $m=1$, the leading-order asymptotics \eqref{3.57} was published by Everitt, Hinton, and Shaw \cite{EHS83} in 1983. For asymptotic estimates of Weyl solutions in the case $m=1$ we refer to \cite{Mi91}. \\ (ii) A comparison of \eqref{3.57} with \eqref{2.41} then proves that the leading-order asymptotic behavior \eqref{3.57} is in fact independent of the boundary condition at $x_0$ indexed by $\alpha$, that is, \begin{equation}\label{3.58} M_\pm(z,x_0,\alpha) \underset{\substack{\abs{z}\to\infty\\ z\in C_\varepsilon}}{=} \pm iI_{m} +o(1) \end{equation} for any $\alpha$ satisfying the conditions stated in \eqref{2.8e}. In the scalar case $m=1$ this fact had been noticed in \cite{EHS83}. This boundary condition independence of the leading-order asymptotic behavior of $M_\pm(z,x_0,\alpha)$ is in sharp contrast to the case of matrix-valued Schr\"odinger operators (see, e.g., \cite{CG99}). Moreover, regarding the conclusion of Theorem~\ref{t3.12}, no generality is lost by assuming that $C_\varepsilon \subset {\mathbb{C}}_+$ because of \eqref{2.38}. \end{remark}
\section{Higher Order Terms in the Asymptotic Expansion of $M_\pm(z,x,\alpha)$} \label{s4}
In this section we prove one of the principal results of this paper, the asymptotic high-energy expansion of $M_+(z,x,\alpha_0)$ to arbitrarily high order in sectors of the type $C_\varepsilon\subset{\mathbb{C}}_+$ as defined in Theorem~\ref{t3.12}.
Throughout this section we choose $z\in{\mathbb{C}}_+$. We also recall the following notion: $x\in [a,b)$ (resp., $x\in (a,b]$) is called a right (resp., left) Lebesgue point of an element $q\in L^1 ((a,b))$, $a<b$, if $\int_0^\varepsilon dx^\prime \,
|q(x+x^\prime)-q(x)|=o (\varepsilon)$ (resp., $\int_0^\varepsilon dx^\prime \, |q(x-x^\prime)-q(x)|=o (\varepsilon)$) as $\varepsilon\downarrow 0$. Similarly, $x\in (a,b)$ is called a Lebesgue point of $q\in L^1 ((a,b))$ if $\int_{-\varepsilon}^\varepsilon dx^\prime
\, |q(x+x^\prime)-q(x)|=o (\varepsilon)$ as $\varepsilon\downarrow 0$. The set of all such points is then denoted the right (resp., left) Lebesgue set of $q$ on $[a,b]$ in the former case and simply the Lebesgue set of $q$ on $[a,b]$ in the latter case. The analogous notions are applied to $2m\times 2m$ matrices $B\in L^1 ((a,b))^{2m\times 2m}$ by simultaneously considering all $4m^2$ entries of $B$. The right (resp., left) Lebesgue set of $B$ on $[a,b]$ is then simply the intersection of the right (resp., left) Lebesgue sets of $B_{j,k}$ for all $1\leq j,k\leq 2m$, and similarly for the Lebesgue set of $B$, etc.
Finally, we need one more ingredient, recently proven by Rybkin \cite[Lemma~3]{Ry99} using appropriate maximal functions. Let $q\in L^1 ((x_0,\infty))$, $\text{\rm{supp}}(q)\subseteq [x_0,x_0+R]$ for some $R>0$, and suppose $x\in [x_0,x_0+R]$ is a right Lebesgue point of $q$. Then \begin{equation} \int_x^{x_0+R} dx^\prime \, q(x^\prime)\exp(2iz(x^\prime -x))
\underset{\substack{\abs{z}\to\infty\\ z\in C_\varepsilon}}{=}-\frac{q(x)}{2iz} + o\big(|z|^{-1}\big). \label{4.-2} \end{equation} An alternative proof of \eqref{4.-2} follows from \cite[Theorem~I.13]{Ti86}, which implies \begin{equation} \lim_{\substack{\abs{z}\to\infty\\ z\in C_\varepsilon}}z^{-1} \int_x^{x_0+R} dx^\prime \,
|q(x^\prime) -q(x)|\exp(2iz(x^\prime -x)) = 0 \label{4.-1} \end{equation} for any right Lebesgue point $x$ of $q$.
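The leading term in \eqref{4.-2} can be checked numerically in a simple instance. The sketch below uses the hypothetical choices $q=\cos$, $x=0$, $x_0+R=1$, and $z=iy$ on the positive imaginary axis (which lies in every sector $C_\varepsilon$); the rescaled remainder after subtracting $-q(x)/(2iz)=1/(2y)$ tends to zero, consistent with the $o\big(|z|^{-1}\big)$ bound.

```python
import numpy as np

q, x = np.cos, 0.0   # hypothetical smooth choice of q and right Lebesgue point x

def I(y):
    # composite trapezoidal approximation of
    #   \int_0^1 q(s) exp(2i(iy)s) ds = \int_0^1 q(s) exp(-2ys) ds
    s = np.linspace(0.0, 1.0, 200_001)
    w = q(s) * np.exp(-2.0 * y * s)
    h = s[1] - s[0]
    return h * (w.sum() - 0.5 * (w[0] + w[-1]))

for y in (50.0, 200.0):
    leading = 1.0 / (2.0 * y)            # equals -q(0)/(2iz) for z = iy
    print(abs(I(y) - leading) * y)       # o(1): shrinks as y grows
```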
We start with the simpler case where $B$ has compact support contained in some interval $[x_0,y_0]$. Below in \eqref{4.0} and in analogous formulas in this section, $\|\cdot\|_{{\mathbb{C}}^{\ell\times \ell}}$ denotes a norm in ${\mathbb{C}}^{\ell\times \ell}$.
\begin{lemma}\label{l4.1} Fix $x_0, y_0\in{\mathbb{R}}$ with $y_0>x_0$ and let $x\geq x_0$. Suppose $A=I_{2m}$, $B\in L^1([x_0,x_0+R])^{2m\times 2m}$ for all $R>0$, $B=B^*$ a.e.~on $(x_0,\infty)$. In addition, assume that $B$ has compact support contained in $[x_0,y_0]$, that $B^{(N-1)}\in L^1([x_0,y_0])^{2m\times 2m}$ for some $N\in{\mathbb{N}}$, that $x$ is a right Lebesgue point of $B^{(N-1)}$, and that \begin{align}
&\underset{y\in[x_0,y_0]}{\text{\rm{ess\,sup}}} \, \bigg\|\int_y^{y_0} dx'\,B^{(N-1)}(x')\exp(2iz(x'-y))
+\frac{1}{2iz}B^{(N-1)}(y)\bigg\|_{{\mathbb{C}}^{2m\times 2m}} \nonumber \\ & \underset{\substack{\abs{z}
\to\infty\\ z\in C_\varepsilon}}{=}o\big(|z|^{-1}\big). \label{4.0} \end{align} If $N=1$, suppose in addition $B_{k,k'}B_{\ell,\ell'}\in L^1([x_0,y_0])^{m\times m}$ for all $k,k',\ell,\ell'\in\{1,2\}$. Let $\alpha_0=(I_m\; 0)\in{\mathbb{C}}^{m\times 2m}$ and denote by $M_+(z,x,\alpha_0)$, $x\geq x_0$, the unique Weyl-Titchmarsh matrix associated with the half-line Dirac-type operator $D_+(\alpha_0)$ in \eqref{2.84}. Then, as $\abs{z}\to\infty$ in $C_\varepsilon$, $M_+(z,x,\alpha_0)$ has an asymptotic expansion of the form \begin{equation} M_+(z,x,\alpha_0)\underset{\substack{\abs{z}\to\infty\\ z\in C_\varepsilon}}{=}
i I_m +\sum_{k=1}^N m_{+,k}(x,\alpha_0)z^{-k}+
o\big(|z|^{-N}\big), \quad N\in{\mathbb{N}}. \label{4.1} \end{equation}
The expansion \eqref{4.1} is uniform with respect to $\arg\,(z)$ for $|z| \to \infty$ in $C_\varepsilon$ and uniform in $x$ as long as $x$ varies in compact subintervals of $[x_0,\infty)$ intersected with the right Lebesgue set of $B^{(N-1)}$. The expansion coefficients $m_{+,k}(x,\alpha_0)$ can be recursively computed from \begin{align} m_{+,1}(x,\alpha_0)&=-\frac{1}{2} \big( B_{1,2}(x)+B_{2,1}(x)\big) +\frac{i}{2} \big( B_{1,1}(x)-B_{2,2}(x)\big), \nonumber \\ m_{+,k+1}(x,\alpha_0)&=\frac{i}2\bigg(m_{+,k}^\prime(x,\alpha_0)+ \sum_{\ell=1}^{k} m_{+,\ell}(x,\alpha_0) m_{+,k+1-\ell}(x,\alpha_0) \nonumber \\ & \qquad \quad +\sum_{\ell=0}^{k} m_{+,\ell}(x,\alpha_0) B_{2,2}(x) m_{+,k-\ell}(x,\alpha_0) \label{4.2} \\ & \qquad \quad + B_{1,2}(x) m_{+,k}(x,\alpha_0) + m_{+,k}(x,\alpha_0) B_{2,1}(x)\bigg), \nonumber \\ & \hspace*{5.7cm} 1 \leq k\leq N-1. \nonumber \end{align} \end{lemma}
\begin{proof} In the following let $z\in{\mathbb{C}}_+$, and $x\geq x_0$. The existence of an expansion of the type \eqref{4.1} is shown as follows. First one considers a matrix Volterra integral equation of the type \begin{equation} \widetilde U_+(z,x,\alpha_0)=\begin{pmatrix}I_m\\ iI_m\end{pmatrix}\exp(iz(x-x_0)) +\int_x^\infty dx'\, K(z,x,x') J B(x')\widetilde U_+(z,x',\alpha_0), \label{4.3} \end{equation} where \begin{equation} \widetilde U_+(z,x,\alpha_0)=\begin{pmatrix} \widetilde u_{+,1}(z,x,\alpha_0)\\ \widetilde u_{+,2}(z,x,\alpha_0)\end{pmatrix}\in L^2([x_0,\infty))^{2m\times m}, \label{4.3A} \end{equation} and $K$ abbreviates the $2m\times 2m$ Volterra Green's kernel \begin{equation} K(z,x,x')=\begin{pmatrix} \cos(z(x-x'))I_m &\sin(z(x-x'))I_m \\ -\sin(z(x-x'))I_m &\cos(z(x-x'))I_m \end{pmatrix}. \label{4.3B} \end{equation} Clearly, $\widetilde U_+(z,\cdot,\alpha_0)$ solves the Dirac-type system \eqref{HS} and \eqref{DS}. In addition, it satisfies $\widetilde U_+(z,\cdot,\alpha_0)\in L^2 ([x_0,\infty))^{2m\times 2m}$. Thus, up to normalization, $\widetilde U_+(z,\cdot,\alpha_0)$ represents the Weyl solution associated with $B$ on the half-line $[x_0,\infty)$. Next, introducing \begin{equation} \widetilde V_+(z,x,\alpha_0)=\begin{pmatrix} \widetilde v_{+,1}(z,x,\alpha_0)\\ \widetilde v_{+,2}(z,x,\alpha_0) \end{pmatrix} =\widetilde U_+(z,x,\alpha_0)\exp(-iz(x-x_0)), \label{4.3a} \end{equation} one rewrites \eqref{4.3} in the form \begin{equation} \widetilde V_+(z,x,\alpha_0)=\begin{pmatrix}I_m\\ iI_m\end{pmatrix} +\int_x^{y_0} dx'\, \widetilde K(z,x,x') J B(x')\widetilde V_+(z,x',\alpha_0), \label{4.3b} \end{equation} where \begin{equation} \widetilde K(z,x,x')=\frac{1}{2}\begin{pmatrix} (1+\exp(2iz(x'-x)))I_m &-i(1-\exp(2iz(x'-x)))I_m \\[1mm] i(1-\exp(2iz(x'-x)))I_m &(1+\exp(2iz(x'-x)))I_m \end{pmatrix}. 
\label{4.3C} \end{equation} Thus, one infers, \begin{equation} M_+(z,x,\alpha_0)=\widetilde u_{+,2}(z,x,\alpha_0)\widetilde u_{+,1}(z,x,\alpha_0)^{-1} =\widetilde v_{+,2}(z,x,\alpha_0)\widetilde v_{+,1}(z,x,\alpha_0)^{-1}. \label{4.4} \end{equation} Introducing \begin{equation} R=\begin{pmatrix} C_1 & -iC_2 \\ iC_1 & C_2 \end{pmatrix}, \quad S=\begin{pmatrix} D_1 & iD_2 \\ -iD_1 & D_2 \end{pmatrix}, \label{4.4a} \end{equation} where \begin{align} C_1&=-B_{1,2}^* -iB_{1,1}, \quad C_2= B_{1,2}-iB_{2,2}, \label{4.4b} \\ D_1&=-B_{1,2}^*+iB_{1,1}, \quad D_2= B_{1,2}+iB_{2,2}, \label{4.4c} \end{align} \eqref{4.3b} results in \begin{align} \widetilde V_+(z,x,\alpha_0)&=\begin{pmatrix}I_m\\ iI_m\end{pmatrix} +\int_x^{y_0} dx'\, \big(R(x')+S(x')\exp(2iz(x'-x)) \big) \widetilde V_+(z,x',\alpha_0) \label{4.4d} \\ &=\bigg(I_{2m}+\sum_{k=1}^\infty 2^{-k}\int_x^{y_0} dx_1 \, \big(R(x_1)+S(x_1)e^{2iz(x_1-x)} \big) \times \nonumber \\ & \hspace*{2.5cm} \times \int_{x_1}^{y_0} dx_2 \, \big(R(x_2)+S(x_2)e^{2iz(x_2-x_1)} \big)\dots \label{4.4e} \\ & \hspace*{2.3cm} \dots \int_{x_{k-1}}^{y_0} dx_k \, \big(R(x_k)+S(x_k)e^{2iz(x_k-x_{k-1})} \big) \bigg)\begin{pmatrix}I_m\\ iI_m\end{pmatrix}. \nonumber \end{align} This yields \begin{equation}
\|\widetilde v_{+,j}(z,x,\alpha_0) \|_{{\mathbb{C}}^{m\times m}} \leq C_j, \quad z\in{\mathbb{C}}_+, \; x\geq x_0, \; j=1,2 \label{4.4ca} \end{equation}
for some $C_j>0$, $j=1,2$, depending on $\|B\|_1$. Integrating by parts in \eqref{4.4e}, repeatedly applying \eqref{4.-2} and \eqref{4.0} to $q(x)=(S(x))_{j,k}$ for all $1\leq j,k\leq 2m$ then results in the existence of an asymptotic expansion for $\widetilde V_+(z,x,\alpha_0)$ of the type \begin{equation} \widetilde V_+(z,x,\alpha_0)=\begin{pmatrix} \widetilde v_{+,1}(z,x,\alpha_0)\\
\widetilde v_{+,2}(z,x,\alpha_0) \end{pmatrix}=\sum_{k=0}^{N} \widetilde V_{+,k}(x,\alpha_0)\, z^{-k} +o\big(|z|^{-N}\big). \label{4.3f} \end{equation} Inserting the expansions for $\widetilde v_{+,2}(z,x,\alpha_0)$ and $\widetilde v_{+,1}(z,x,\alpha_0)^{-1}$ into \eqref{4.4} (using a geometric series expansion for $\widetilde v_{+,1}(z,x,\alpha_0)^{-1}$) then yields the existence of an expansion of the type \eqref{4.1} for $M_+(z,x,\alpha_0)$. The actual expansion coefficients and the associated recursion relation \eqref{4.2} then follow upon inserting expansion \eqref{4.1} into the Riccati-type equation \eqref{3.17a}. The stated uniformity assertions concerning the asymptotic expansion \eqref{4.1} then follow from iterating the system of Volterra integral equations \eqref{4.3b}. \end{proof}
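In the scalar case $m=1$ with constant coefficients, the recursion \eqref{4.2} (with the convention $m_{+,0}=iI_m$ read off from \eqref{4.1}) can be tested against an explicitly computable Weyl-Titchmarsh function. The sketch below assumes that, for constant scalar $B$, $M_+(z)$ is the root near $+i$ of the stationary quadratic $(z+B_{2,2})M^2+(B_{1,2}+B_{2,1})M+(z+B_{1,1})=0$; this form of the Riccati equation is our inference from \eqref{4.14} and \eqref{4.2}, not a formula quoted from the text, and the numerical values of $B$ are arbitrary.

```python
import numpy as np

# Hypothetical constant test data; b12 = b21 keeps B symmetric.
b11, b12, b21, b22 = 0.3, 0.1, 0.1, -0.2

# First two expansion coefficients from (4.2), with m_{+,0} = i and all
# derivative terms vanishing since B is constant:
m1 = -0.5 * (b12 + b21) + 0.5j * (b11 - b22)
m2 = 0.5j * (m1 * m1 + 2j * b22 * m1 + b12 * m1 + m1 * b21)

def M_exact(z):
    # root of the (inferred) stationary Riccati quadratic closest to +i
    r = np.roots([z + b22, b12 + b21, z + b11])
    return r[np.argmin(np.abs(r - 1j))]

def remainder(z):
    return abs(M_exact(z) - (1j + m1 / z + m2 / z ** 2))

# the remainder should decay like |z|^{-3}; doubling |z| divides it by ~8
print(remainder(100j) / remainder(200j))
```

Expanding the quadratic root in powers of $z^{-1}$ reproduces exactly the values of $m_{+,1}$ and $m_{+,2}$ produced by \eqref{4.2}, which is why the remainder is $O\big(|z|^{-3}\big)$.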
\begin{remark} \label{r4.2} The analogous solution $\widetilde U_-(z,\cdot,\alpha_0)$ of the Dirac-type operator \eqref{2.61} on the interval $(-\infty,x_0]$ satisfies \begin{align} \widetilde U_-(z,x,\alpha_0)&=\begin{pmatrix}I_m\\ -iI_m\end{pmatrix}\exp(-iz(x-x_0)) \nonumber \\ & \quad -\int_{-\infty}^x dx'\, K(z,x,x') J \widetilde B(x')\widetilde U_-(z,x',\alpha_0), \label{4.14A} \end{align} with integral kernel $K$ given by \eqref{4.3B}. (Again $\widetilde U_-$ coincides with the Weyl solution $U_-$ up to normalization.) A closer look at the system of Volterra integral equations \eqref{4.3}, \eqref{4.4d}, \eqref{4.4e}, and similarly in connection with \eqref{4.14A}, then reveals that $\widetilde U_\pm(z,\cdot,\alpha_0)$ have the asymptotic behavior \begin{equation} \widetilde U_\pm (z,x,\alpha_0)\underset{\substack{\abs{z}\to\infty\\ z\in C_\varepsilon}}{=} \left(\sum_{k=0}^N \begin{pmatrix} \widetilde v_{\pm,k,1}(x,\alpha_0) \\ \widetilde v_{\pm,k,2}(x,\alpha_0)
\end{pmatrix}z^{-k} +o\big(|z|^{-N}\big)\right)\exp(\pm iz(x-x_0)), \label{4.14B} \end{equation} with leading asymptotics determined as follows. \begin{align} \begin{split} \label{4.14C} \widetilde v_{\pm,0,1}(x,\alpha_0)&=I_m+\widetilde w_{\pm,0,1}(x,\alpha_0), \\ \widetilde v_{\pm,0,2}(x,\alpha_0)&=\pm i\big(I_m+\widetilde w_{\pm,0,1}(x,\alpha_0)\big), \end{split} \end{align} where $\widetilde w_{\pm,0,1}(x,\alpha_0)$ satisfies \begin{equation} \widetilde w_{\pm,0,1}'(x,\alpha_0)=\frac{1}{2}\big[\widetilde B_{2,1}(x) -\widetilde B_{1,2}(x) \pm i\widetilde B_{2,2}(x) \pm i\widetilde B_{1,1}(x)\big]\big(I_m+\widetilde w_{\pm,0,1}(x,\alpha_0)\big), \label{4.14D} \end{equation} and \begin{equation} \lim_{x\to\pm\infty} \widetilde w_{\pm,0,1}(x,\alpha_0)=0 \label{4.14F} \end{equation} (in fact, $\widetilde v_{\pm,0,1}(\cdot,\alpha_0)=I_m$, $\widetilde v_{\pm,0,2}(\cdot,\alpha_0)=\pm iI_m$, and $\widetilde v_{\pm,k,j}(\cdot,\alpha_0)=0$, $j=1,2$, $1\leq k\leq N$ outside the support of $\widetilde B$). In particular, \begin{equation} \widetilde w_{\pm,0,1}(x,\alpha_0)=0 \label{4.14G} \end{equation} and hence \begin{equation} \widetilde U_\pm (z,x,\alpha_0)\underset{\substack{\abs{z}\to\infty\\ z\in C_\varepsilon}}{=} \left(\begin{pmatrix} I_m \\ \pm iI_m \end{pmatrix} +o(1)\right)\exp(\pm iz(x-x_0)), \label{4.14H} \end{equation} if and only if $\widetilde B$ is in the normal form \begin{equation} \widetilde B(x)=\begin{pmatrix} \widetilde B_{1,1}(x) & \widetilde B_{1,2}(x)\\ \widetilde B_{1,2}(x) &-\widetilde B_{1,1}(x)\end{pmatrix}, \quad \widetilde B_{1,1}^*(x)=\widetilde B_{1,1}(x), \; \widetilde B_{1,2}^*(x)=\widetilde B_{1,2}(x) \text{ a.e.} \label{4.14I} \end{equation} \end{remark}
For more details we refer to Lemma~\ref{l4.9}.
Next we recall an elementary result on finite-dimensional evolution equations essentially taken from \cite{MPS90} (cf.~also \cite[Lemma~4.2]{CG99}).
\begin{lemma} \mbox{\rm (\cite{MPS90}.)} \label{l4.2} Let $\Gamma_j\in L^1_{\text{\rm{loc}}}({\mathbb{R}})^{m\times m}$, $j=1,2$. Then any $m\times m$ matrix-valued solution $X$ of \begin{equation} X'(x)=\Gamma_1(x)X(x)+X(x)\Gamma_2(x) \text{ for ~a.e. } x\in{\mathbb{R}}, \label{4.5} \end{equation} is of the type \begin{equation} X(x)=Y(x)CZ(x), \label{4.6} \end{equation} where $C$ is a constant $m\times m$ matrix and $Y$ is a fundamental system of solutions of \begin{equation} \Psi'(x)=\Gamma_1(x)\Psi(x) \label{4.7} \end{equation} and $Z$ is a fundamental system of solutions of \begin{equation} \Phi'(x)=\Phi(x)\Gamma_2(x). \label{4.8} \end{equation} \end{lemma}
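For constant $\Gamma_1,\Gamma_2$ the factorization \eqref{4.6} reads $X(x)=e^{x\Gamma_1}\,C\,e^{x\Gamma_2}$, and the lemma can be verified directly. A numerical sketch with random test matrices follows; the helper \texttt{expm} implements the matrix exponential by eigendecomposition, which suffices for the generic diagonalizable case considered here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
G1, G2, C = (rng.standard_normal((n, n)) for _ in range(3))

def expm(A):
    # matrix exponential via eigendecomposition (generic diagonalizable A)
    w, V = np.linalg.eig(A)
    return (V * np.exp(w)) @ np.linalg.inv(V)

def X(x):
    # solution of X' = G1 X + X G2 in the form (4.6) with constant Gamma_j
    return expm(x * G1) @ C @ expm(x * G2)

# verify the differential equation at x = 0.4 by a central difference
x, h = 0.4, 1e-6
lhs = (X(x + h) - X(x - h)) / (2 * h)
rhs = G1 @ X(x) + X(x) @ G2
print(np.allclose(lhs, rhs, atol=1e-6))  # True
```

The product rule gives $\frac{d}{dx}\big(e^{x\Gamma_1}Ce^{x\Gamma_2}\big)=\Gamma_1 X(x)+X(x)\Gamma_2$, which is what the central difference confirms.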
The next result provides the proper extension of Lemma~4.3 in \cite{CG99} in the context of matrix-valued Schr\"odinger operators (which in turn extended Proposition~2.1 in the scalar context in \cite{GS98} to the matrix-valued case) to the Dirac-type case under consideration.
\begin{lemma} \label{l4.3} Fix $x_0,y_0\in{\mathbb{R}}$ with $y_0>x_0$. Suppose $A_j=I_{2m}$, $B_j\in L^1([x_0,x_0+R])^{2m\times 2m}$ for all $R>0$, $B_j=B_j^*$ a.e.~on $[x_0,\infty)$, $j=1,2$, and $B_1=B_2$ a.e.~on $[x_0,y_0]$. Let $\alpha_0=(I_m\; 0)\in{\mathbb{C}}^{m\times 2m}$ and denote by $M_{j,+}(z,x,\alpha_0)$, $x\geq x_0$, the unique Weyl-Titchmarsh matrix corresponding to the half-line Dirac operators $D_{+,j}(\alpha_0)$, $j=1,2$, in \eqref{2.84}. Then, \begin{align} &[M_{1,+}'(z,x,\alpha_0)-M_{2,+}'(z,x,\alpha_0)] \nonumber \\ &=-(z/2)[M_{1,+}(z,x,\alpha_0)+M_{2,+}(z,x,\alpha_0)] [M_{1,+}(z,x,\alpha_0)-M_{2,+}(z,x,\alpha_0)] \nonumber \\ & \quad -(z/2)[M_{1,+}(z,x,\alpha_0)-M_{2,+}(z,x,\alpha_0)] [M_{1,+}(z,x,\alpha_0) +M_{2,+}(z,x,\alpha_0)] \nonumber \\ & \quad -[M_{1,+}(z,x,\alpha_0)+M_{2,+}(z,x,\alpha_0)]B_{2,2}(x) [M_{1,+}(z,x,\alpha_0)-M_{2,+}(z,x,\alpha_0)]/2 \nonumber \\ & \quad -[M_{1,+}(z,x,\alpha_0)-M_{2,+}(z,x,\alpha_0)]B_{2,2}(x) [M_{1,+}(z,x,\alpha_0)+M_{2,+}(z,x,\alpha_0)]/2 \nonumber \\ & \quad -B_{1,2}(x)[M_{1,+}(z,x,\alpha_0)-M_{2,+}(z,x,\alpha_0)] \nonumber \\ & \quad -[M_{1,+}(z,x,\alpha_0)-M_{2,+}(z,x,\alpha_0)]B_{2,1}(x) \; \text{ for~a.e. $x\in [x_0,y_0]$,} \label{4.14} \end{align} where we denoted $B_1=B_2=\left(\begin{smallmatrix}B_{1,1} &B_{1,2}\\ B_{2,1} &B_{2,2} \end{smallmatrix}\right)$ a.e.~on $(x_0,y_0)$. \end{lemma}
\begin{proof} This is obvious from \eqref{3.17a}. \end{proof}
\begin{lemma} \label{l4.4} Fix $x_0,y_0\in{\mathbb{R}}$ with $y_0>x_0$. Suppose $A_j=I_{2m}$, $B_j\in L^1([x_0,x_0+R])^{2m\times 2m}$ for all $R>0$, and $B_j=B_j^*$ a.e.~on $[x_0,\infty)$, $j=1,2$. Let $\alpha_0=(I_m\; 0)\in{\mathbb{C}}^{m\times 2m}$ and denote by $M_{j,+}(z,x,\alpha_0)$, $x\geq x_0$, the unique Weyl-Titchmarsh matrix corresponding to the half-line Dirac operators $D_{+,j}(\alpha_0)$, $j=1,2$, in \eqref{2.84}. Define \begin{align} \Gamma_1(z,x)&=-(z/2)[M_{1,+}(z,x,\alpha_0) +M_{2,+}(z,x,\alpha_0)] \nonumber \\ & \quad -(1/2)[M_{1,+}(z,x,\alpha_0) +M_{2,+}(z,x,\alpha_0)]B_{2,2}(x)-B_{1,2}(x), \label{4.15} \\ \Gamma_2(z,x)&=-(z/2)[M_{1,+}(z,x,\alpha_0) +M_{2,+}(z,x,\alpha_0)] \nonumber \\ & \quad -(1/2)B_{2,2}(x)[M_{1,+}(z,x,\alpha_0) +M_{2,+}(z,x,\alpha_0)]-B_{2,1}(x), \label{4.15a} \end{align} for a.e.~$x\in [x_0,y_0]$. In addition, assume $Y_+(z,\cdot)$ and $Z_+(z,\cdot)$ to be fundamental matrix solutions of \begin{equation} \Psi'(z,x)=\Gamma_1(z,x)\Psi(z,x) \text{ and } \Phi'(z,x)=\Phi(z,x)\Gamma_2(z,x) \label{4.16} \end{equation} on $[x_0,y_0]$, respectively, with \begin{equation} Y_+(z,y_0)=I_m, \quad Z_+(z,y_0)=I_m. \label{4.17} \end{equation}
Then, as $|z|\to\infty$, $z\in C_\varepsilon$, \begin{equation}
\|Y_+(z,x_0)\|_{{\mathbb{C}}^{m\times m}}, \|Z_+(z,x_0)\|_{{\mathbb{C}}^{m\times m}} \leq \exp(-\text{\rm Im}(z)(y_0-x_0)(1+o (1))). \label{4.18} \end{equation} \end{lemma}
\begin{proof} Define $\widetilde\Gamma_j(z,x)$, $j=1,2$, by \begin{equation} \widetilde\Gamma_j (z,x)=\Gamma_j (z,x)+izI_m, \quad j=1,2, \label{4.19} \end{equation} then \begin{equation}
\int_{x_0}^{y_0} dx\, \|\widetilde\Gamma_j (z,x)\|_{{\mathbb{C}}^{m\times m}} \underset{\substack{\abs{z}\to\infty\\ z\in C_\varepsilon}}{=} o\big(|z|\big), \quad j=1,2, \label{4.20} \end{equation} due to the uniform nature of the asymptotic expansion \eqref{3.57} for $x$ varying in compact intervals. Next, introduce \begin{equation} E_+(z,x,y_0)=I_m\exp(iz(y_0-x)), \quad x\leq y_0, \label{4.21} \end{equation} then \begin{align} Y_+(z,x)&=E_+(z,x,y_0)-\int_x^{y_0}dx'\,E_+(z,x,x') \widetilde\Gamma_1(z,x')Y_+(z,x'), \label{4.22} \\ Z_+(z,x)&=E_+(z,x,y_0)-\int_x^{y_0}dx'\,Z_+(z,x') \widetilde\Gamma_2(z,x')E_+(z,x,x'). \label{4.23} \end{align} Using \begin{equation}
\|E_+(z,x_0,y_0)\|_{{\mathbb{C}}^{m\times m}}\leq\exp(-\text{\rm Im}(z)(y_0-x_0)), \label{4.24} \end{equation} a standard Volterra-type iteration argument in \eqref{4.22}, \eqref{4.23} then yields \begin{align}
\|Y_+(z,x_0)\|_{{\mathbb{C}}^{m\times m}}& \leq \exp\left(-\text{\rm Im}(z)(y_0-x_0)+\int_{x_0}^{y_0}dx
\,\|\widetilde\Gamma_1(z,x)\|\right), \label{4.25} \\
\|Z_+(z,x_0)\|_{{\mathbb{C}}^{m\times m}}& \leq \exp\left(-\text{\rm Im}(z)(y_0-x_0)+\int_{x_0}^{y_0}dx
\,\|\widetilde\Gamma_2(z,x)\|\right), \label{4.25a} \end{align} and hence \eqref{4.18}. \end{proof}
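For completeness, we sketch the Gronwall estimate behind the Volterra-type iteration just invoked (stated for $Y_+$; the argument for $Z_+$ is analogous). Setting $h(x)=e^{\text{\rm Im}(z)(y_0-x)}\|Y_+(z,x)\|_{{\mathbb{C}}^{m\times m}}$ and using $\|E_+(z,x,x')\|_{{\mathbb{C}}^{m\times m}}=e^{-\text{\rm Im}(z)(x'-x)}$ for $x\leq x'$, \eqref{4.22} yields \begin{equation*} h(x)\leq 1+\int_x^{y_0}dx'\,\|\widetilde\Gamma_1(z,x')\|_{{\mathbb{C}}^{m\times m}}\,h(x'), \quad x\in [x_0,y_0], \end{equation*} and hence, by Gronwall's inequality, $h(x_0)\leq\exp\big(\int_{x_0}^{y_0}dx\,\|\widetilde\Gamma_1(z,x)\|_{{\mathbb{C}}^{m\times m}}\big)$, which is \eqref{4.25}; combined with \eqref{4.20} this yields \eqref{4.18}.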
\begin{theorem} \label{t4.5} Fix $x_0,y_0\in{\mathbb{R}}$ with $y_0>x_0$. Suppose $A_j=I_{2m}$, $B_j\in L^1([x_0,x_0+R])^{2m\times 2m}$ for all $R>0$, $B_j=B_j^*$ a.e.~on $[x_0,\infty)$, $j=1,2$, and $B_1=B_2$ a.e.~on $[x_0,y_0]$. Let $\alpha_0=(I_m\; 0)\in{\mathbb{C}}^{m\times 2m}$ and denote by $M_{j,+}(z,x,\alpha_0)$, $x\geq x_0$, the unique Weyl-Titchmarsh matrix corresponding to the half-line Dirac operators $D_{+,j}(\alpha_0)$, $j=1,2$, in \eqref{2.84}. Then, as $\abs{z}\to\infty$ in $C_\varepsilon$, \begin{equation}
\|M_{1,+}(z,x_0,\alpha_0)-M_{2,+}(z,x_0,\alpha_0)\|_{{\mathbb{C}}^{m\times m}} \leq C\exp(-2\text{\rm Im}(z)(y_0-x_0)(1+o (1))) \label{4.26} \end{equation} for some constant $C>0$. \end{theorem}
\begin{proof} Define for $z\in{\mathbb{C}}\backslash{\mathbb{R}}$, $x\in [x_0,y_0]$, \begin{equation} X_+(z,x)=M_{1,+}(z,x,\alpha_0)-M_{2,+}(z,x,\alpha_0), \label{4.26a} \end{equation} and for $z\in{\mathbb{C}}\backslash{\mathbb{R}}$ and a.e.~$x\in [x_0,y_0]$, \begin{align} \Gamma_1(z,x)&=-(z/2)[M_{1,+}(z,x,\alpha_0)+M_{2,+}(z,x,\alpha_0)] \nonumber \\ & \quad -(1/2)[M_{1,+}(z,x,\alpha_0)+M_{2,+}(z,x,\alpha_0)]B_{2,2}(x) -B_{1,2}(x), \label{4.26b} \\ \Gamma_2(z,x)&=-(z/2)[M_{1,+}(z,x,\alpha_0)+M_{2,+}(z,x,\alpha_0)] \nonumber \\ & \quad -(1/2)B_{2,2}(x)[M_{1,+}(z,x,\alpha_0)+M_{2,+}(z,x,\alpha_0)] -B_{2,1}(x). \label{4.26c} \end{align} By Lemma~\ref{l4.3}, \begin{equation} X_+'=\Gamma_1X_++X_+\Gamma_2 \end{equation} and hence by Lemma~\ref{l4.2}, \begin{equation} X_+(z,x)=Y_+(z,x)X_+(z,y_0)Z_+(z,x), \label{4.30} \end{equation} where $Y_+(z,x)$ and $Z_+(z,x)$ are fundamental solution matrices of \begin{equation} \Psi'(z,x)=\Gamma_1(z,x)\Psi(z,x) \text{ and } \Phi'(z,x)=\Phi(z,x)\Gamma_2(z,x), \end{equation} respectively, with \begin{equation} Y_+(z,y_0)=I_m, \quad Z_+(z,y_0)=I_m. \end{equation} By Lemma~\ref{l4.4}, \begin{equation}
\|Y_+(z,x_0)\|_{{\mathbb{C}}^{m\times m}}, \|Z_+(z,x_0)\|_{{\mathbb{C}}^{m\times m}} \leq \exp(-\text{\rm Im}(z)(y_0-x_0)(1+o(1))) \label{4.33} \end{equation}
as $|z|\to\infty$, $z\in C_\varepsilon$. Thus, as
$|z|\to\infty$, $z\in C_\varepsilon$, \begin{align}
\|X_+(z,x_0)\|_{{\mathbb{C}}^{m\times m}}&\leq
\|X_+(z,y_0)\|_{{\mathbb{C}}^{m\times m}}\,\|Y_+(z,x_0)\|_{{\mathbb{C}}^{m\times m}}\,\|Z_+(z,x_0)\|_{{\mathbb{C}}^{m\times m}} \nonumber \\ &\leq C\exp(-2\text{\rm Im}(z)(y_0-x_0)(1+o(1))) \label{4.34} \end{align} for some constant $C>0$ by \eqref{3.57}, \eqref{4.30}, and \eqref{4.33}. \end{proof}
Given these preparations we can now drop the compact support assumption on $B$ in Lemma~\ref{l4.1} and hence arrive at one of the principal results of this paper.
\begin{theorem} \label{t4.6} Fix $x_0,y_0\in{\mathbb{R}}$ with $y_0>x_0$ and suppose $A=I_{2m}$, $B\in L^1([x_0,x_0+R])^{2m\times 2m}$ for all $R>0$, and $B=B^*$ a.e.~on $(x_0,\infty)$. In addition, assume that for some $N\in{\mathbb{N}}$, $B^{(N-1)}\in L^1([x_0,c])^{2m\times 2m}$ for all $c>x_0$, that $x_0$ is a right Lebesgue point of $B^{(N-1)}$, and that \begin{align}
&\underset{y\in [x_0,y_0]}{\text{\rm{ess\,sup}}} \, \bigg\|\int_y^{y_0} dx'\,B^{(N-1)}(x')\exp(2iz(x'-y))
+\frac{1}{2iz}B^{(N-1)}(y)\bigg\|_{{\mathbb{C}}^{2m\times 2m}} \nonumber \\ &\underset{\substack{\abs{z}
\to\infty\\ z\in C_\varepsilon}}{=}o\big(|z|^{-1}\big). \label{4.34a} \end{align} If $N=1$, suppose in addition $B_{k,k'}B_{\ell,\ell'}\in L^1([x_0,y_0])^{m\times m}$ for all $k,k',\ell,\ell'\in\{1,2\}$. Let $\alpha_0=(I_m\; 0)\in{\mathbb{C}}^{m\times 2m}$ and denote by $M_+(z,x_0,\alpha_0)$ the unique element of the limit disk ${\mathcal D}_+ (z,x_0,\alpha_0)$ for the half-line Dirac operator $D_+(\alpha_0)$ in \eqref{2.84}. Then, as $\abs{z}\to\infty$ in $C_\varepsilon$, $M_+(z,x_0,\alpha_0)$ has an asymptotic expansion of the form \begin{equation}
M_+(z,x_0,\alpha_0)\underset{\substack{\abs{z}\to\infty\\ z\in C_\varepsilon}}{=} i I_m +\sum_{k=1}^N m_{+,k}(x_0,\alpha_0)z^{-k}+ o\big(|z|^{-N}\big), \quad N\in{\mathbb{N}}. \label{4.35} \end{equation} The expansion \eqref{4.35} is uniform with respect to $\arg\,(z)$
for $|z|\to \infty$ in $C_\varepsilon$. The expansion coefficients $m_{+,k}(x_0,\alpha_0)$ can be recursively computed from \eqref{4.2}. \end{theorem}
\begin{proof} Define \begin{equation} \widetilde B(x)=\begin{cases} B(x) &\text{ for } x\in [x_0,y_0], \; x_0<y_0 \\ 0 &\text{ otherwise} \label{4.36} \end{cases} \end{equation} and apply Theorem~\ref{t4.5} with $B_1=B$, $B_2=\widetilde B$. Then (in obvious notation) \begin{equation}
\|M_+(z,x_0,\alpha_0)-\widetilde M_+(z,x_0,\alpha_0)\|_{{\mathbb{C}}^{m\times m}}\leq C \exp(-2\text{\rm Im}(z)(y_0-x_0)(1+o(1))) \label{4.36a} \end{equation}
as $|z|\to\infty$, $z\in C_\varepsilon$, and hence the asymptotic expansion \eqref{4.1} for $\widetilde M_+(z,x_0,\alpha_0)$ in Lemma~\ref{l4.1} coincides with that of $M_+(z,x_0,\alpha_0)$. \end{proof}
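A short calculation shows why the exponentially small bound \eqref{4.36a} cannot affect the expansion coefficients in \eqref{4.35}: recalling that $\arg(z)$ is bounded away from $0$ and $\pi$ for $z\in C_\varepsilon$, one has $\text{\rm Im}(z)\geq c_\varepsilon|z|$ for some $c_\varepsilon>0$, and hence \begin{equation*} \exp(-2\text{\rm Im}(z)(y_0-x_0)(1+o(1)))\leq \exp(-2c_\varepsilon(y_0-x_0)|z|(1+o(1))) \underset{\substack{\abs{z}\to\infty\\ z\in C_\varepsilon}}{=} o\big(|z|^{-k}\big) \, \text{ for all } k\in{\mathbb{N}}, \end{equation*} so $M_+(z,x_0,\alpha_0)$ and $\widetilde M_+(z,x_0,\alpha_0)$ share the same asymptotic expansion to all orders.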
In analogy to Theorem~\ref{t3.12}, the asymptotic expansion \eqref{4.35} extends to one for $M_+(z,x,\alpha_0)$ valid uniformly with respect to $x$ as long as $x$ varies in compact subintervals of $[x_0,\infty)$ intersected with the right Lebesgue set of $B^{(N-1)}$.
\begin{theorem} \label{t4.7} Fix $x_0\in{\mathbb{R}}$ and let $x\geq x_0$. Suppose $A=I_{2m}$, $B\in L^1([x_0,x_0+R])^{2m\times 2m}$ for all $R>0$, and $B=B^*$ a.e.~on $(x_0,\infty)$. In addition, assume that for some $N\in{\mathbb{N}}$, $B^{(N-1)}\in L^1([x_0,c))^{2m\times 2m}$ for all $c>x_0$, that $x$ is a right Lebesgue point of $B^{(N-1)}$, and that for all $R>0$, \begin{align} &\underset{y\in [x_0,x_0+R]}{\text{\rm{ess\,sup}}} \,
\bigg\|\int_y^{x_0+R} dx'\,B^{(N-1)}(x')\exp(2iz(x'-y))
+\frac{1}{2iz}B^{(N-1)}(y)\bigg\|_{{\mathbb{C}}^{2m\times 2m}} \nonumber \\ & \underset{\substack{\abs{z}
\to\infty\\ z\in C_\varepsilon}}{=}o\big(|z|^{-1}\big). \label{4.36b} \end{align} If $N=1$, suppose in addition $B_{k,k'}B_{\ell,\ell'}\in L^1([x_0,x_0+R])^{m\times m}$ for all $R>0$ and all $k,k',\ell,\ell'\in\{1,2\}$. Let $\alpha_0=(I_m\; 0)\in{\mathbb{C}}^{m\times 2m}$ and denote by $M_+(z,x,\alpha_0)$, $x\geq x_0$, the unique element of the limit disk ${\mathcal D}_+(z,x,\alpha_0)$ for the half-line Dirac operator $D_+(\alpha_0)$ in \eqref{2.84}. Then, as $\abs{z}\to\infty$ in $C_\varepsilon$, $M_+(z,x,\alpha_0)$ has an asymptotic expansion of the form \begin{equation}
M_+(z,x,\alpha_0)\underset{\substack{\abs{z}\to\infty\\ z\in C_\varepsilon}}{=} i I_m +\sum_{k=1}^N m_{+,k}(x,\alpha_0)z^{-k}+ o\big(|z|^{-N}\big), \quad N\in{\mathbb{N}}. \label{4.37} \end{equation} The expansion \eqref{4.37} is uniform with respect to $\arg\,(z)$ for
$|z|\to \infty$ in $C_\varepsilon$ and uniform in $x$ as long as $x$ varies in compact subsets of ${\mathbb{R}}$ intersected with the right Lebesgue set of $B^{(N-1)}$. The expansion coefficients $m_{+,k}(x,\alpha_0)$ can be recursively computed from \eqref{4.2}. \end{theorem}
\begin{proof} To see that uniformity holds for this expansion, first recall the role of Theorem~\ref{t3.12} in providing uniformity in the asymptotic expression
\eqref{4.20} which then leads to \eqref{4.18} holding uniformly with respect to $x_0$ varying within compact subsets of ${\mathbb{R}}$ and with respect to $\arg\,(z)$ for $|z|\to \infty$ in $C_\varepsilon$. This in turn leads to a similar uniformity holding for \eqref{4.26} which is the key to \eqref{4.35} holding with respect to $x_0$ varying within compact subsets
of ${\mathbb{R}}$ and with respect to $\arg\,(z)$ for $|z|\to \infty$ in $C_\varepsilon$. \end{proof}
\begin{remark} \label{r4.9} For simplicity, we focused thus far on the expansion of
$M_+(z,x_0,\alpha_0)$ as $|z|\to\infty$. Of course, Theorem~\ref{t4.7} holds also for $M_-(z,x_0,\alpha_0)$, with the hypotheses concerning right Lebesgue points replaced by the corresponding ones for left Lebesgue points, etc. For convenience we state the corresponding expansion and the associated nonlinear recursion formula covering both cases. \begin{equation}
M_\pm (z,x,\alpha_0)\underset{\substack{\abs{z}\to\infty\\ z\in C_\varepsilon}}{=} \sum_{k=0}^N m_{\pm,k}(x,\alpha_0)z^{-k}+ o\big(|z|^{-N}\big), \quad N\in{\mathbb{N}}. \label{4.100} \end{equation} \begin{align} m_{\pm,0}(x,\alpha_0)&=\pm iI_m, \nonumber \\ m_{\pm,1}(x,\alpha_0)&=-\frac{1}{2} \big( B_{1,2}(x)+B_{2,1}(x)\big) \pm \frac{i}{2} \big( B_{1,1}(x)-B_{2,2}(x)\big), \nonumber \\ m_{\pm,k+1}(x,\alpha_0)&=\pm\frac{i}2\bigg(m_{\pm,k}^\prime(x,\alpha_0)+ \sum_{\ell=1}^{k}m_{\pm,\ell}(x,\alpha_0) m_{\pm,k+1-\ell}(x,\alpha_0) \nonumber \\ & \qquad \quad +\sum_{\ell=0}^{k}m_{\pm,\ell}(x,\alpha_0)B_{2,2}(x) m_{\pm,k-\ell}(x,\alpha_0) \label{4.101} \\ & \qquad \quad +B_{1,2}(x)m_{\pm,k}(x,\alpha_0) +m_{\pm,k}(x,\alpha_0)B_{2,1}(x)\bigg), \nonumber \\ & \hspace*{5.8cm} 1\leq k\leq N-1. \nonumber \end{align} \end{remark}
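For instance, inserting $m_{\pm,0}(x,\alpha_0)=\pm iI_m$ into the step $k=1$ of the recursion \eqref{4.101} yields \begin{align*} m_{\pm,2}(x,\alpha_0)=\pm\frac{i}{2}\Big(&m_{\pm,1}'(x,\alpha_0)+m_{\pm,1}(x,\alpha_0)^2 \pm i\big[B_{2,2}(x)m_{\pm,1}(x,\alpha_0)+m_{\pm,1}(x,\alpha_0)B_{2,2}(x)\big] \\ &+B_{1,2}(x)m_{\pm,1}(x,\alpha_0)+m_{\pm,1}(x,\alpha_0)B_{2,1}(x)\Big), \end{align*} with $m_{\pm,1}(x,\alpha_0)$ as given in \eqref{4.101}.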
Combining Theorem~\ref{t4.7} and \eqref{2.620} then yields the analogous asymptotic expansion for $M(z,x,\alpha_0)$.
\begin{theorem} \label{t4.10a} Assume Hypothesis~\ref{h2.1} with $A=I_{2m}$, and let $\alpha_0=(I_m\; 0)\in{\mathbb{C}}^{m\times 2m}$. Fix $x_0\in{\mathbb{R}}$ and let $x\in{\mathbb{R}}$. Suppose that for some $N\in{\mathbb{N}}$, $B^{(N-1)}\in L^1_{\text{\rm{loc}}}({\mathbb{R}})^{2m\times 2m}$, that $x$ is a right and a left Lebesgue point of $B^{(N-1)}$, and that for all $R>0$, \begin{align} & \quad \underset{y\in [x_0,x_0+R]}{\text{\rm{ess\,sup}}} \,
\bigg\|\int_y^{x_0+R} dx'\,B^{(N-1)}(x')\exp(2iz(x'-y))
+\frac{1}{2iz}B^{(N-1)}(y)\bigg\|_{{\mathbb{C}}^{2m\times 2m}} \nonumber \\ &+ \underset{y\in [x_0-R,x_0]}{\text{\rm{ess\,sup}}} \,
\bigg\|\int_{x_0-R}^{y} dx'\,B^{(N-1)}(x')\exp(2iz(x'-y))
-\frac{1}{2iz}B^{(N-1)}(y)\bigg\|_{{\mathbb{C}}^{2m\times 2m}} \nonumber \\ & \underset{\substack{\abs{z}
\to\infty\\ z\in C_\varepsilon}}{=}o\big(|z|^{-1}\big). \label{4.102} \end{align} If $N=1$, assume in addition $B_{k,k'}B_{\ell,\ell'}\in L^1_{\text{\rm{loc}}}({\mathbb{R}})^{m\times m}$ for all $k,k',\ell,\ell'\in\{1,2\}$. Let $M(z,x,\alpha_0)$ be defined as in \eqref{2.62} $($see also \eqref{2.620}$)$. Then, as $\abs{z}\to\infty$ in $C_\varepsilon$, $M(z,x,\alpha_0)$ has an asymptotic expansion of the form \begin{equation}
M(z,x,\alpha_0)\underset{\substack{\abs{z}\to\infty\\ z\in C_\varepsilon}}{=} (i/2) I_{2m} +\sum_{k=1}^N M_{k}(x,\alpha_0)z^{-k}+ o\big(|z|^{-N}\big), \quad N\in{\mathbb{N}}, \label{4.103} \end{equation} where \begin{align} M_1(x,\alpha_0)&=-\frac{i}8 \begin{pmatrix}B_{1,1}(x+0)-B_{2,2}(x+0) & B_{1,2}(x+0)+B_{2,1}(x+0) \\
B_{1,2}(x+0)+B_{2,1}(x+0)& B_{2,2}(x+0)-B_{1,1}(x+0)\end{pmatrix} \nonumber \\ & \quad -\frac{i}8 \begin{pmatrix}B_{1,1}(x-0)-B_{2,2}(x-0) & B_{1,2}(x-0)+B_{2,1}(x-0) \\
B_{1,2}(x-0)+B_{2,1}(x-0)& B_{2,2}(x-0)-B_{1,1}(x-0)\end{pmatrix}, \text{ etc.} \label{4.104} \end{align} The expansion \eqref{4.103} is uniform with respect to $\arg\,(z)$ for
$|z|\to \infty$ in $C_\varepsilon$ and uniform in $x$ as long as $x$ varies in compact subsets of ${\mathbb{R}}$ intersected with the right and left Lebesgue set of $B^{(N-1)}$. \\ \noindent If one merely assumes Hypothesis~\ref{h2.1} with $A=I_{2m}$, $\alpha_0=(I_m\; 0)$, and $B\in L^1_{\text{\rm{loc}}}({\mathbb{R}})^{2m\times 2m}$, then \begin{equation} M(z,x,\alpha_0)\underset{\substack{\abs{z}\to\infty\\ z\in C_\varepsilon}}{=} (i/2)I_{2m}+o(1). \label{4.105} \end{equation} Again the asymptotic expansion \eqref{4.105} is uniform with respect to
$\arg\,(z)$ for $|z|\to \infty$ in $C_\varepsilon$ and uniform in $x\in{\mathbb{R}}$ as long as $x$ varies in compact intervals. \end{theorem}
\noindent The higher-order coefficients in \eqref{4.103} can be derived upon inserting \eqref{4.100} into \eqref{3.17a}, taking into account \eqref{2.620}.
Theorems~\ref{t4.6} and \ref{t4.7} (with $N\in{\mathbb{N}}$) are new even in the scalar case $m=1$ with respect to the regularity assumptions on $B$. For previous results in the case $m=1$ under stronger hypotheses on $B$ we refer to \cite{EHS83}, \cite{Ha85}, \cite{HKS89a}, \cite{HKS89b}, \cite{Mi91}. In particular, \cite{Ha85}, \cite{HKS89a}, and \cite{HKS89b} derived alternative high-energy expansions for the Weyl-Titchmarsh $m$-function in the case $m=1$.
Throughout this section we fixed $\alpha$ to be $\alpha_0=(I_m\; 0)$. The case of general $\alpha\in{\mathbb{C}}^{m\times 2m}$ satisfying \eqref{2.8e} then follows from \eqref{2.41}.
\section{A Local Uniqueness Result} \label{s5}
In this section we assume that $B$ is in the normal form given in Theorem~\ref{t1.1}, \begin{equation}\label{4.115} B(x)=\begin{pmatrix} B_{1,1}(x) & B_{1,2}(x)\\B_{1,2}(x) & -B_{1,1}(x) \end{pmatrix}, \end{equation} with $B_{1,1}$ and $B_{1,2}$ self-adjoint a.e. We prove fundamental new local uniqueness results for $B$ in terms of exponentially small differences of Weyl-Titchmarsh matrices $M_+(z,x,\alpha)$ and $M(z,x,\alpha)$. These results, in turn, yield new global ramifications. We start with an auxiliary result concerning asymptotic expansions.
\begin{lemma} \label{l4.9} Suppose $\alpha=(\alpha_1 \; \alpha_2)\in{\mathbb{C}}^{m\times 2m}$ satisfies \eqref{2.8e}, fix $x_0,y_0\in{\mathbb{R}}$ with $y_0>x_0$, and let $x\geq x_0$. Assume $A=I_{2m}$, $B\in L^1([x_0,\infty))^{2m\times 2m}$, $\text{\rm{supp}}(B)\subseteq [x_0,y_0]$, with $B$ in the normal form given in \eqref{4.115} a.e.~on $(x_0,y_0)$. Then, the following asymptotic expansions hold for $\Theta(z,x,x_0,\alpha)$, $\Phi(z,x,x_0,\alpha)$, and $U_{+}(z,x,x_0,\alpha)$ associated with \eqref{HSa}, \begin{align} \Theta(z,x,x_0,\alpha)&\underset{\substack{\abs{z} \to\infty\\ z\in {\mathbb{C}}_+}}{=}\frac{1}{2}\begin{pmatrix}\alpha_1^*+i\alpha_2^*\\ -i(\alpha_1^*+i\alpha_2^*)\end{pmatrix}\exp(-iz(x-x_0))\big(1+o(1)\big), \quad x>x_0, \label{4.48} \\ \Phi(z,x,x_0,\alpha)&\underset{\substack{\abs{z} \to\infty\\ z\in {\mathbb{C}}_+}}{=}\frac{i}{2}\begin{pmatrix}-\alpha_2^*+i\alpha_1^*\\ -i(-\alpha_2^*+i\alpha_1^*)\end{pmatrix}\exp(-iz(x-x_0))\big(1+o(1)\big), \quad x>x_0, \label{4.54} \\ U_{+}(z,x,x_0,\alpha)&\underset{\substack{\abs{z} \to\infty\\ z\in {\mathbb{C}}_+}}{=}\begin{pmatrix}\alpha_1^*-i\alpha_2^*\\ i(\alpha_1^*-i\alpha_2^*)\end{pmatrix}\exp(iz(x-x_0))\big(1+o(1)\big), \quad x\geq x_0. \label{4.60} \end{align} Next, we introduce the abbreviation \begin{equation} C=-B_{1,2}-iB_{1,1}, \quad C^*=-B_{1,2}+iB_{1,1}, \label{4.38f} \end{equation} and suppose in addition that \begin{equation}
\underset{y\in [x_0,y_0]}{\text{\rm{ess\,sup}}} \, \bigg\|\int_y^{y_0}
dx'\,B(x')\exp(2iz(x'-y)) +\frac{1}{2iz}B(y)\bigg\|_{{\mathbb{C}}^{2m\times 2m}} \underset{\substack{\abs{z}
\to\infty\\ z\in\rho_+}}{=}o\big(|z|^{-1}\big), \label{4.39a} \end{equation} along a ray $\rho_+\subset{\mathbb{C}}_+$, and that \begin{equation} B_{1,1}^2,\ B_{1,2}^2,\ B_{1,1}B_{1,2},\ B_{1,2}B_{1,1} \in L^1([x_0,y_0])^{m\times m}. \label{4.39f} \end{equation} Then, \begin{align} &\Theta(z,x,x_0,\alpha)\underset{\substack{\abs{z} \to\infty\\ z\in \rho_+}}{=}\bigg(\frac{1}{2}\begin{pmatrix}\alpha_1^*+i\alpha_2^*\\ -i(\alpha_1^*+i\alpha_2^*) \end{pmatrix}-\frac{i}{4z}\begin{pmatrix}C(x_0)^*(\alpha_1^*-i\alpha_2^*)\\ -iC(x_0)^*(\alpha_1^*-i\alpha_2^*)\end{pmatrix} \nonumber \\ & -\frac{i}{4z}\begin{pmatrix}C(x)(\alpha_1^*+i\alpha_2^*)\\ iC(x)(\alpha_1^*+i\alpha_2^*)\end{pmatrix} \nonumber \\ & +\frac{i}{4z}\int_{x_0}^x dx' \, \begin{pmatrix}C(x')^*C(x')(\alpha_1^*+i\alpha_2^*)\\ -iC(x')^*C(x')(\alpha_1^*+i\alpha_2^*)\end{pmatrix}\bigg)
e^{-iz(x-x_0)}\big(1+o\big(|z|^{-1}\big)\big), \nonumber \\ & \hspace*{9.5cm} x>x_0, \label{4.49} \\ &\Phi(z,x,x_0,\alpha)\underset{\substack{\abs{z} \to\infty\\ z\in \rho_+}}{=}\bigg(\frac{i}{2}\begin{pmatrix} -\alpha_2^*+i\alpha_1^*\\ -i(-\alpha_2^*+i\alpha_1^*)\end{pmatrix}-\frac{1}{4z} \begin{pmatrix}C(x_0)^*(-\alpha_2^*-i\alpha_1^*)\\ -iC(x_0)^*(-\alpha_2^*-i\alpha_1^*)\end{pmatrix} \nonumber \\ &+\frac{1}{4z}\begin{pmatrix}C(x)(-\alpha_2^*+i\alpha_1^*)\\ iC(x)(-\alpha_2^*+i\alpha_1^*)\end{pmatrix} \nonumber \\ & -\frac{1}{4z}\int_{x_0}^x dx' \, \begin{pmatrix}C(x')^*C(x')(-\alpha_2^*+i\alpha_1^*)\\ -iC(x')^*C(x')(-\alpha_2^*+i\alpha_1^*)\end{pmatrix}\bigg)
e^{-iz(x-x_0)}\big(1+o\big(|z|^{-1}\big)\big), \nonumber \\ & \hspace*{9.8cm} x>x_0, \label{4.55} \end{align} whenever $x_0$ is a right Lebesgue point of $B$ and $x$ is a left Lebesgue point of $B$, and \begin{align} &U_{+}(z,x,x_0,\alpha)\underset{\substack{\abs{z} \to\infty\\ z\in \rho_+}}{=}\bigg(\begin{pmatrix} \alpha_1^*-i\alpha_2^*\\ i(\alpha_1^*-i\alpha_2^*)\end{pmatrix}+\frac{i}{2z} \begin{pmatrix}(C(x)^*-C(x_0)^*)(\alpha_1^*-i\alpha_2^*)\\ -i(C(x)^*+C(x_0)^*)(\alpha_1^*-i\alpha_2^*)\end{pmatrix} \nonumber \\ & -\frac{i}{2z}\int_{x_0}^{x} dx' \, \begin{pmatrix}C(x')C(x')^*(\alpha_1^*-i\alpha_2^*)\\ iC(x')C(x')^*(\alpha_1^*-i\alpha_2^*) \end{pmatrix}\bigg)
e^{iz(x-x_0)}\big(1+o\big(|z|^{-1}\big)\big), \quad x\geq x_0, \label{4.61} \end{align} whenever $x$ is a right Lebesgue point of $B$. \end{lemma}
\begin{proof} Since $x_0$ and $\alpha$ are fixed throughout this proof, we will temporarily suppress these variables whenever possible to simplify notations. Introducing \begin{equation} \widehat \Theta(z,x)= 2\Theta(z,x)\exp(iz(x-x_0)), \label{4.41} \end{equation} the Volterra integral equation for $\Theta$ (cf. \eqref{4.3B}), \begin{align} \Theta(z,x)&=\begin{pmatrix}\alpha_1^*\cos(z(x-x_0)) +\alpha_2^*\sin(z(x-x_0))\\ \alpha_2^*\cos(z(x-x_0))-\alpha_1^*\sin(z(x-x_0))\end{pmatrix} \nonumber \\ & \quad -\int_{x_0}^x dx'\, K(z,x,x') J B(x')\Theta(z,x'), \label{4.42} \end{align} can be rewritten in terms of that of $\widehat \Theta$ in the form \begin{align} \widehat\Theta(z,x)&=\begin{pmatrix}\alpha_1^*+i\alpha_2^*\\ -i(\alpha_1^*+i\alpha_2^*)\end{pmatrix} +\begin{pmatrix}\alpha_1^*-i\alpha_2^*\\ i(\alpha_1^*-i\alpha_2^*)\end{pmatrix}\exp(2iz(x-x_0)) \nonumber \\ & \quad -\frac{1}{2}\int_{x_0}^x dx' \big(R(x')\exp(2iz(x-x'))+S(x')\big)\widehat\Theta(z,x'), \label{4.43} \end{align} where we abbreviated \begin{equation} R=\begin{pmatrix} C&iC\\ iC &-C \end{pmatrix}, \quad S=\begin{pmatrix} C^* & -iC^*\\ -iC^* & -C^*\end{pmatrix}. 
\label{4.44} \end{equation} Using the elementary algebraic facts \begin{equation} R\begin{pmatrix}a\\ ia\end{pmatrix}=0, \quad R\begin{pmatrix}b\\ -ib\end{pmatrix} =2\begin{pmatrix} Cb\\ iCb\end{pmatrix}, \quad S\begin{pmatrix} a\\ ia\end{pmatrix} =2\begin{pmatrix} C^* a\\ -iC^* a\end{pmatrix}, \quad S\begin{pmatrix}b\\ -ib\end{pmatrix}=0 \label{4.46} \end{equation} for any $a,b \in {\mathbb{C}}^{m\times m}$, iterating \eqref{4.43} yields \begin{align} &\widehat\Theta(z,x)=\begin{pmatrix}\alpha_1^*+i\alpha_2^*\\ -i(\alpha_1^*+i\alpha_2^*)\end{pmatrix}+ \begin{pmatrix}\alpha_1^*-i\alpha_2^*\\ i(\alpha_1^*-i\alpha_2^*)\end{pmatrix}e^{2iz(x-x_0)} \nonumber \\ & \quad +\sum_{m=1}^\infty (-2)^{-m}\int_{x_0}^x d\xi_1 \, \big(R(\xi_1)e^{2iz(x-\xi_1)}+S(\xi_1)\big)\times \nonumber \\ & \quad \times \int_{x_0}^{\xi_1} d\xi_2 \, \big(R(\xi_2)e^{2iz(\xi_1-\xi_2)}+S(\xi_2)\big)\dots \label{4.47} \\ & \quad \;\, \dots \int_{x_0}^{\xi_{m-2}} d\xi_{m-1} \, \big(R(\xi_{m-1})e^{2iz(\xi_{m-2}-\xi_{m-1})}+S(\xi_{m-1})\big) \times \nonumber \\ & \quad \times \int_{x_0}^{\xi_{m-1}} d\xi_{m} \, \bigg(R(\xi_{m})\begin{pmatrix}\alpha_1^*+i\alpha_2^*\\ -i(\alpha_1^*+i\alpha_2^*)\end{pmatrix}e^{2iz(\xi_{m-1}-\xi_{m})} \nonumber \\ & \hspace*{2.8cm} +S(\xi_{m}) \begin{pmatrix}\alpha_1^*-i\alpha_2^*\\ i(\alpha_1^*-i\alpha_2^*)\end{pmatrix}e^{2iz(\xi_m-x_0)}\bigg). \nonumber \end{align} Applying the Riemann-Lebesgue lemma to \eqref{4.47} then proves \eqref{4.48} assuming only $B\in L^1([x_0,\infty))^{2m\times 2m}$. Assuming also \eqref{4.39a} and \eqref{4.39f} one can compute the next term in the asymptotic expansion \eqref{4.48} and then obtains \eqref{4.49} using \eqref{4.47} and the finite-interval variant of \eqref{4.-2}, whenever $x_0$ is a right Lebesgue point of $B$ and $x$ is a left Lebesgue point of $B$. \\ Exactly the same arguments apply to $\Phi$. 
Introducing \begin{equation} \widehat \Phi(z,x)= 2\Phi(z,x)\exp(iz(x-x_0)), \label{4.50} \end{equation} the Volterra integral equation for $\Phi$, \begin{align} \Phi(z,x)&=\begin{pmatrix}-\alpha_2^*\cos(z(x-x_0)) +\alpha_1^*\sin(z(x-x_0))\\ \alpha_1^*\cos(z(x-x_0))+\alpha_2^*\sin(z(x-x_0))\end{pmatrix} \nonumber \\ & \quad -\int_{x_0}^x dx'\, K(z,x,x') J B(x')\Phi(z,x'), \label{4.51} \end{align} can be rewritten in terms of that of $\widehat \Phi$ in the form \begin{align} \widehat\Phi(z,x)&=i\begin{pmatrix}-\alpha_2^*+i\alpha_1^*\\ -i(-\alpha_2^*+i\alpha_1^*)\end{pmatrix} -i\begin{pmatrix}-\alpha_2^*-i\alpha_1^*\\ i(-\alpha_2^*-i\alpha_1^*)\end{pmatrix}\exp(2iz(x-x_0)) \nonumber \\ & \quad -\frac{1}{2}\int_{x_0}^x dx'\, \big(R(x')\exp(2iz(x-x'))+S(x')\big)\widehat\Phi(z,x'). \label{4.52} \end{align} Iterating \eqref{4.52}, taking into account \eqref{4.46}, yields \begin{align} &\widehat\Phi(z,x)=i\begin{pmatrix}-\alpha_2^*+i\alpha_1^*\\ -i(-\alpha_2^*+i\alpha_1^*)\end{pmatrix} -i\begin{pmatrix}-\alpha_2^*-i\alpha_1^*\\ i(-\alpha_2^*-i\alpha_1^*)\end{pmatrix}e^{2iz(x-x_0)} \nonumber \\ & \quad +\sum_{m=1}^\infty (-2)^{-m}\int_{x_0}^x d\xi_1 \, \big(R(\xi_1)e^{2iz(x-\xi_1)}+S(\xi_1)\big)\times \nonumber \\ & \quad \times \int_{x_0}^{\xi_1} d\xi_2 \, \big(R(\xi_2)e^{2iz(\xi_1-\xi_2)}+S(\xi_2)\big)\dots \label{4.53} \\ & \quad \;\, \dots \int_{x_0}^{\xi_{m-2}} d\xi_{m-1} \, \big(R(\xi_{m-1})e^{2iz(\xi_{m-2}-\xi_{m-1})}+S(\xi_{m-1})\big) \times \nonumber \\ & \quad \times \int_{x_0}^{\xi_{m-1}} d\xi_{m} \, \bigg(iR(\xi_{m})\begin{pmatrix}-\alpha_2^*+i\alpha_1^*\\ -i(-\alpha_2^*+i\alpha_1^*)\end{pmatrix} e^{2iz(\xi_{m-1}-\xi_{m})} \nonumber \\ & \hspace*{2.8cm} -iS(\xi_{m}) \begin{pmatrix}-\alpha_2^*-i\alpha_1^*\\ i(-\alpha_2^*-i\alpha_1^*)\end{pmatrix}e^{2iz(\xi_m-x_0)}\bigg). \nonumber \end{align} Applying the Riemann-Lebesgue lemma to \eqref{4.53} then proves \eqref{4.54} assuming only $B\in L^1([x_0,\infty))^{2m\times 2m}$. 
Assuming also \eqref{4.39a} and \eqref{4.39f} one can compute the next term in the asymptotic expansion \eqref{4.54} and then obtains \eqref{4.55} using \eqref{4.53} and the finite-interval variant of \eqref{4.-2}, whenever $x_0$ is a right Lebesgue point of $B$ and $x$ is a left Lebesgue point of $B$. \\ Finally, we turn to $U_{+}(z,x)$. Introducing \begin{equation} {\widetilde V}_{+}(z,x)= \widetilde U_{+}(z,x)\exp(-iz(x-x_0)), \label{4.56} \end{equation} the Volterra integral equation for $\widetilde U_{+}$, \begin{equation} \widetilde U_{+}(z,x)=\begin{pmatrix} \alpha_1^*-i\alpha_2^*\\ i(\alpha_1^*-i\alpha_2^*)\end{pmatrix}\exp(iz(x-x_0)) +\int_x^\infty dx'\, K(z,x,x') J B(x')\widetilde U_{+}(z,x'), \label{4.57} \end{equation} can be rewritten in terms of that of ${\widetilde V}_{+}$ in the form \begin{equation} {\widetilde V}_{+}(z,x)=\begin{pmatrix} \alpha_1^*-i\alpha_2^*\\ i(\alpha_1^*-i\alpha_2^*)\end{pmatrix}
+\frac{1}{2}\int_{x}^{y_0} dx'\, \big(R(x')+S(x')\exp(2iz(x'-x))\big){\widetilde V}_{+}(z,x'). \label{4.58} \end{equation} Iterating \eqref{4.58}, taking into account \eqref{4.46}, yields \begin{align} {\widetilde V}_{+}(z,x)&=\begin{pmatrix} \alpha_1^*-i\alpha_2^*\\ i(\alpha_1^*-i\alpha_2^*)\end{pmatrix}+\sum_{k=1}^\infty 2^{-2k}\int_{x}^{y_0} d\xi_1 \, R(\xi_1) \int_{\xi_1}^{y_0} d\xi_2 \, S(\xi_2)e^{2iz(\xi_2-\xi_1)}\times \nonumber \\ & \quad \;\, \times \int_{\xi_2}^{y_0} d\xi_3 \, R(\xi_3) \dots \int_{\xi_{2k-2}}^{y_0} d\xi_{2k-1} \, R(\xi_{2k-1}) \times \nonumber \\ & \quad \times \int_{\xi_{2k-1}}^{y_0} d\xi_{2k} \, S(\xi_{2k})\begin{pmatrix} \alpha_1^*-i\alpha_2^*\\ i(\alpha_1^*-i\alpha_2^*)\end{pmatrix}e^{2iz(\xi_{2k}-\xi_{2k-1})} \nonumber \\ & \quad +\sum_{\ell=0}^\infty 2^{-(2\ell+1)} \int_x^{y_0} d\xi_1 \, S(\xi_1)e^{2iz(\xi_1-x)} \int_{\xi_1}^{y_0} d\xi_2 \, R(\xi_2)\times \nonumber \\ & \quad \;\, \times \int_{\xi_2}^{y_0} d\xi_3 \, S(\xi_3)e^{2iz(\xi_3-\xi_2)} \dots \int_{\xi_{2\ell-1}}^{y_0} d\xi_{2\ell} \, R(\xi_{2\ell}) \times \nonumber \\ & \quad \times \int_{\xi_{2\ell}}^{y_0} d\xi_{2\ell+1} \, S(\xi_{2\ell+1})\begin{pmatrix} \alpha_1^*-i\alpha_2^*\\ i(\alpha_1^*-i\alpha_2^*)\end{pmatrix}e^{2iz(\xi_{2\ell+1}-\xi_{2\ell})}. \label{4.59} \end{align} Next, we take into account the different normalizations of $U_+$ and $\widetilde U_+$. Using $U_+(z,x_0)=[I_m \; M_+(z,x_0)^t]^t$ (cf.~\eqref{2.52} and $\Psi(z,x_0,x_0,\alpha_0)=I_{2m}$), one readily verifies the relationship \begin{equation} u_{+,1}(z,x)={\widetilde u}_{+,1}(z,x){\widetilde u}_{+,1}(z,x_0)^{-1}, \quad u_{+,2}(z,x)={\widetilde u}_{+,2}(z,x){\widetilde u}_{+,1}(z,x_0)^{-1}. \label{4.59a} \end{equation} Applying the Riemann-Lebesgue lemma to \eqref{4.59} then proves \eqref{4.60} (in agreement with \eqref{4.14H}), assuming only $B\in L^1([x_0,\infty))^{2m\times 2m}$. 
Assuming also \eqref{4.39a} and \eqref{4.39f} one can compute the next term in the asymptotic expansion \eqref{4.60} and then obtains \eqref{4.61} using \eqref{4.59} and \eqref{4.-2}, whenever $x$ is a right Lebesgue point of $B$. \end{proof}
In the special case $m=1$ (and for $\alpha=(1\; 0)$), the expansion \eqref{4.61} was stated in \cite{Gr92}.
Next, we note an elementary result concerning the boundary data independence of exponentially close Weyl-Titchmarsh matrices.
\begin{lemma} \label{l4.9a} Fix $x_0\in{\mathbb{R}}$ and suppose $A_j=I_{2m}$, $B_j=B_j^*\in L^1([x_0,x_0+R])^{2m\times 2m}$ for all $R>0$. Denote by $M_{+,j}(z,x,\alpha)$, $x\geq x_0$, the unique Weyl-Titchmarsh matrices corresponding to the half-line Dirac-type operators $D_{+,j}(\alpha)$, $j=1,2$, in \eqref{2.84}. Fix $a>0$ and an $\hat\alpha\in{\mathbb{C}}^{m\times 2m}$ satisfying \eqref{2.8e}, and assume that for all $\varepsilon >0$, \begin{equation}
\|M_{+,1}(z,x_0,\hat\alpha)-M_{+,2}(z,x_0,\hat\alpha)\|_{{\mathbb{C}}^{m\times m}}\underset{\substack{|z|\to\infty\\z\in \rho_{+}}}{=} O\big(e^{-2\text{\rm Im}(z)(a-\varepsilon)}\big) \label{4.24a} \end{equation} along some ray $\rho_{+}\subset{\mathbb{C}}_+$. Then, for all $\alpha\in{\mathbb{C}}^{m\times 2m}$ satisfying \eqref{2.8e} and for all $\varepsilon >0$, \begin{equation}
\|M_{+,1}(z,x_0,\alpha)-M_{+,2}(z,x_0,\alpha)\|_{{\mathbb{C}}^{m\times m}}\underset{\substack{|z|\to\infty\\z\in \rho_{+}}}{=} O\big(e^{-2\text{\rm Im}(z)(a-\varepsilon)}\big) \label{4.25aa} \end{equation} along the ray $\rho_{+}$. \end{lemma}
\begin{proof} Using \eqref{2.38} and \eqref{2.41} one estimates \begin{align}
&\|M_{+,1}(z,x_0,\alpha)-M_{+,2}(z,x_0,\alpha)\|_{{\mathbb{C}}^{m\times m}} \nonumber \\
&=\|M_{+,1}(\bar z,x_0,\alpha)^* -M_{+,2}(z,x_0,\alpha)\|_{{\mathbb{C}}^{m\times m}} \nonumber \\
&\leq \|[\hat\alpha\alpha^* -M_{+,1}(\bar z,x_0,\hat\alpha)^*\hat\alpha
J\alpha^*]^{-1}\|_{{\mathbb{C}}^{m\times m}} \times \nonumber \\ & \quad \times
\|M_{+,1}(z,x_0,\hat\alpha)-M_{+,2}(z,x_0,\hat\alpha)\|_{{\mathbb{C}}^{m\times m}}\times \nonumber \\ & \quad \times \|[\alpha\hat\alpha^* +\alpha J\hat\alpha^*
M_{+,2}(z,x_0,\hat\alpha)]^{-1}\|_{{\mathbb{C}}^{m\times m}}, \label{4.26aa} \end{align} since by \eqref{2.8i} \begin{equation} \hat\alpha J\alpha^*\alpha\hat\alpha^*+\hat\alpha\alpha^*\alpha J\hat\alpha^*=0, \quad \hat\alpha\alpha^*\alpha\hat\alpha^*- \hat\alpha J\alpha^*\alpha J\hat\alpha^*=I_m. \label{4.27a} \end{equation} Moreover, since \begin{equation} [\hat\alpha\alpha^*-i\hat\alpha J\alpha^*][\hat\alpha\alpha^* -i\hat\alpha J\alpha^*]^*=I_m, \label{5.29a} \end{equation} by \eqref{4.27a}, one infers \eqref{4.25aa} from
\eqref{4.26aa} and $M_{+,j}(z,x_0,\alpha)=iI_m +o(1)$ as $|z|\to\infty$, $z\in{\mathbb{C}}_+$, $j=1,2$ (cf.~\eqref{3.1}). \end{proof}
Our principal new local uniqueness result for Dirac-type operators in terms of Weyl-Titchmarsh matrices then reads as follows.
\begin{theorem} \label{t4.10} Fix $x_0\in{\mathbb{R}}$ and suppose $A_j=I_{2m}$, $B_j\in L^1([x_0,x_0+R])^{2m\times 2m}$ for all $R>0$. Suppose also that $B_j$ is in the normal form given in \eqref{4.115} a.e.~on $(x_0,\infty)$, $j=1,2$. Denote by $M_{j,+}(z,x,\alpha)$, $x\geq x_0$, the unique Weyl-Titchmarsh matrices corresponding to the half-line Dirac-type operators $D_{+,j}(\alpha)$, $j=1,2$, in \eqref{2.84}. Then, \begin{equation} \text{if for some $a>0$, }\, B_1(x)=B_2(x) \, \text{ for a.e. $x\in (x_0,x_0+a)$,} \label{4.38AA} \end{equation} one obtains \begin{equation}
\|M_{1,+}(z,x_0,\alpha)-M_{2,+}(z,x_0,\alpha)\|_{{\mathbb{C}}^{m\times m}}\underset{\substack{\abs{z} \to\infty\\ z\in \rho_{+}}}{=} O\big(e^{-2\text{\rm Im}(z)a}\big) \label{4.38} \end{equation} along any ray $\rho_+\subset{\mathbb{C}}_+$ with $0<\arg(z)<\pi$ and for all $\alpha\in{\mathbb{C}}^{m\times 2m}$ satisfying \eqref{2.8e}. Conversely, fix an $\hat\alpha\in{\mathbb{C}}^{m\times 2m}$ satisfying \eqref{2.8e} and if $m>1$, assume in addition that $B_j\in L^\infty([x_0,x_0+a])^{2m\times 2m}$, $j=1,2$. Moreover, suppose that for all $\varepsilon >0$, \begin{equation}
\|M_{1,+}(z,x_0,\hat\alpha)-M_{2,+}(z,x_0,\hat\alpha)\|_{{\mathbb{C}}^{m\times m}}\underset{\substack{|z|\to\infty\\z\in \rho_{+,\ell}}}{=} O\big(e^{-2\text{\rm Im}(z)(a-\varepsilon)}\big), \quad \ell=1,2, \label{4.39} \end{equation} along a ray $\rho_{+,1}\subset{\mathbb{C}}_+$ with $0<\arg(z)<\pi/2$ and along a ray $\rho_{+,2}\subset{\mathbb{C}}_+$ with $\pi/2<\arg(z)<\pi$. Then \begin{equation} B_1(x)=B_2(x) \text{ for a.e. } x\in [x_0,x_0+a]. \label{4.40} \end{equation} \end{theorem}
\begin{proof} Since \eqref{4.38} follows from Theorem~\ref{t4.5} and Lemma~\ref{l4.9a}, it suffices to focus on the proof of \eqref{4.40}. Moreover, applying Theorem~\ref{t4.5}, we may without loss of generality assume for the rest of the proof that \begin{equation} \text{\rm{supp}}(B_j)\subseteq [x_0,x_0+a], \quad j=1,2. \label{4.40a} \end{equation} In the following, we will adapt the principal ingredients of a recent proof of the local Borg-Marchenko uniqueness theorem for scalar Schr\"odinger operators (i.e., for $m=1$) by Bennewitz \cite{Be00}, to the current Dirac-type situation. First we recall that by Lemma~\ref{l4.9a}, \eqref{4.39} holds along the rays $\rho_{+,j}$, $j=1,2$ for all $\alpha=(\alpha_1\; \alpha_2)\in{\mathbb{C}}^{m\times 2m}$ satisfying \eqref{2.8e}. To simplify notations in the following we will again suppress $x_0$ and $\alpha$ whenever possible and hence abbreviate, $\Theta(z,x,x_0,\alpha)$, $\Phi(z,x,x_0,\alpha)$, and $U_{j,+}(z,x,x_0,\alpha)$ by $\Theta(z,x)$, $\Phi(z,x)$, and $U_{j,+}(z,x)$, respectively. Next, denoting in obvious notation by \begin{align} &\Theta_j(z,x)=\begin{pmatrix}\theta_{j,1}(z,x)\\ \theta_{j,2}(z,x)\end{pmatrix}, \quad \Phi_j(z,x)=\begin{pmatrix}\phi_{j,1}(z,x)\\ \phi_{j,2}(z,x)\end{pmatrix}, \quad U_{j,+}(z,x)=\begin{pmatrix} u_{j,+,1}(z,x)\\ u_{j,+,2}(z,x)\end{pmatrix}, \nonumber \\ & \hspace*{8.5cm} j=1,2, \; x\geq x_0, \label{4.62} \end{align} the solutions associated with $B_j$, $j=1,2$, which are defined in \eqref{FSb} and \eqref{2.14}, we introduce \begin{equation} g_{j,k}(z,x)=\phi_{j,k}(z,x) u_{j,+,k}(\bar z,x)^*, \quad j,k \in\{1,2\}, \; x\geq x_0. 
\label{4.63} \end{equation} Using the asymptotic expansions \eqref{4.48}--\eqref{4.60} for $\Theta_j(z,x)$, $\Phi_j(z,x)$, and $U_{j,+}(z,x)$, and the analogous ones for $\Theta_j(\bar z,x)^*$, $\Phi_j(\bar z,x)^*$, and $U_{j,+}(\bar z,x)^*$, one verifies for each fixed $x>x_0$, \begin{equation} g_{j,k}(z,x) \underset{\substack{\abs{z} \to\infty\\ z\in {\mathbb{C}}_+}}{=}(i/2)I_m + o(1), \quad j,k \in\{1,2\}, \label{4.65} \end{equation} assuming $B_j\in L^1([x_0,x_0+R])^{2m\times 2m}$ for all $R>0$, $j=1,2$, only. Next, using the fact that for each fixed $x>x_0$, \begin{align} \phi_{1,k}(z,x)^{-1}\phi_{2,k}(z,x)&\underset{\substack{\abs{z} \to\infty\\ z\in {\mathbb{C}}_+}}{=}I_m + o(1), \quad k=1,2, \label{4.65a} \\ (u_{1,+,k}(\bar z,x)^*)^{-1} u_{2,+,k}(\bar z,x)^*&\underset{\substack{\abs{z} \to\infty\\ z\in {\mathbb{C}}_+}}{=}I_m + o(1), \quad k=1,2, \label{4.66} \end{align} by \eqref{4.54}, \eqref{4.60}, one concludes \begin{align} &\phi_{1,k}(z,x) u_{2,+,j}(\bar z,x)^*-u_{1,+,k}(z,x) \phi_{2,k}(\bar z,x)^* \nonumber \\ &=\phi_{1,k}(z,x) \theta_{2,k}(\bar z,x)^*-\theta_{1,k}(z,x) \phi_{2,k}(\bar z,x)^* \nonumber \\ & \quad +\phi_{1,k}(z,x) \big(M_{2,+}(z)-M_{1,+}(z) \big) \phi_{2,k}(\bar z,x)^* \underset{\substack{\abs{z}\to\infty\\ z\in {\mathbb{C}}_+}}{=}o(1), \label{4.67} \end{align} using \eqref{4.65}, \eqref{4.66}, and $M_{2,+}(\bar z)^*=M_{2,+}(z)$. Combining hypothesis \eqref{4.39} and \eqref{4.54}, one infers \begin{equation}
\big\|\phi_{1,k}(z,x) \big(M_{2,+}(z)-M_{1,+}(z) \big)
\phi_{2,k}(\bar z,x)^*\big\| \underset{\substack{\abs{z}\to\infty\\ z\in \rho_{+,\ell}}}{=}o(1), \ \ x\in (x_0,x_0+a) \label{4.68} \end{equation} along the rays $\rho_{+,\ell}$, $\ell=1,2$. Thus, \eqref{4.67} implies \begin{equation}
\big\|\phi_{1,k}(z,x) \theta_{2,k}(\bar z,x)^*-\theta_{1,k}(z,x)
\phi_{2,k}(\bar z,x)^*\big\| \underset{\substack{\abs{z}\to\infty\\ z\in \rho_{+,\ell}}}{=}o(1), \ \ x\in (x_0,x_0+a) \label{4.69} \end{equation} along the rays $\rho_{+,\ell}$, $\ell=1,2$. The analogous estimate \eqref{4.69} holds along the complex conjugate rays $\bar \rho_{+,\ell}$, $\ell=1,2$, in the lower complex half-plane ${\mathbb{C}}_-$. To simplify notation, we denote the open sector generated by $\rho_{+,1}$ and its complex conjugate $\bar \rho_{+,1}$ by ${\mathcal S}_1$, and the open sector generated by $\rho_{+,2}$ and its complex conjugate $\bar \rho_{+,2}$ by ${\mathcal S}_2$; the remaining sector in ${\mathbb{C}}_+$ is denoted by ${\mathcal S}_3$, and its complex conjugate sector in ${\mathbb{C}}_-$ by ${\mathcal S}_4$. Thus, one obtains a partition of ${\mathbb{C}}$ into \begin{equation} {\mathbb{C}}=\bigcup_{\ell=1}^4 \overline {{\mathcal S}_\ell}, \label{4.70} \end{equation} where each sector ${\mathcal S}_\ell$, $1\leq \ell\leq 4$, has opening angle strictly less than $\pi$. Since (each matrix element of) the expression under the norm in \eqref{4.69} is entire and of order less than or equal to one, one can apply the Phragm\'en-Lindel\"of principle (cf., e.g., \cite[No.~322, p.~166--167, 379]{PS72}) to each sector ${\mathcal S}_\ell$, $1\leq \ell\leq 4$, and obtains that each matrix element under the norm in \eqref{4.69} is uniformly bounded in each sector and hence on all of ${\mathbb{C}}$. By Liouville's theorem, these matrix elements are all equal to certain constants, and by the right-hand side of \eqref{4.69} these constants all vanish. 
Thus, we proved \begin{equation} \phi_{1,k}(z,x) \theta_{2,k}(\bar z,x)^*=\theta_{1,k}(z,x) \phi_{2,k}(\bar z,x)^* \, \text{ for all $x\in (x_0,x_0+a)$} \label{4.71} \end{equation} and hence \begin{equation} \phi_{1,k}(z,x)^{-1}\theta_{1,k}(z,x)=\theta_{2,k}(\bar z,x)^* (\phi_{2,k}(\bar z,x)^*)^{-1} \, \text{ for all $x\in (x_0,x_0+a)$.} \label{4.72} \end{equation} Differentiating $\phi_{j,k}(z,x)^{-1}\theta_{j,k}(z,x)$, $j,k=1,2$, with respect to $x$ yields \begin{align} &\big(\phi_{j,1}(z,x)^{-1}\theta_{j,1}(z,x)\big)' \nonumber \\ &=\phi_{j,1}(z,x)^{-1}((B_j)_{1,1}(x)-z)(\phi_{j,2}(z,x) \phi_{j,1}(z,x)^{-1} \theta_{j,1}(z,x)-\theta_{j,2}(z,x)), \label{4.73} \\ &\big(\phi_{j,2}(z,x)^{-1}\theta_{j,2}(z,x)\big)' \nonumber \\ &=\phi_{j,2}(z,x)^{-1}((B_j)_{1,1}(x)+z)(\phi_{j,1}(z,x) \phi_{j,2}(z,x)^{-1}\theta_{j,2}(z,x)-\theta_{j,1}(z,x)). \label{4.74} \end{align} Multiplying \eqref{4.73} by $\phi_{j,1}(\bar z,x)^*(\phi_{j,1}(\bar z,x)^*)^{-1}$ and using \eqref{2.92}, \eqref{2.94}, and similarly, multiplying \eqref{4.74} by $\phi_{j,2}(\bar z,x)^*(\phi_{j,2}(\bar z,x)^*)^{-1}$ and using \eqref{2.93}, \eqref{2.95} then yields \begin{align} \big(\phi_{j,1}(z,x)^{-1}\theta_{j,1}(z,x)\big)' &= \phi_{j,1}(z,x)^{-1}((B_j)_{1,1}(x)-z)(\phi_{j,1}(\bar z,x)^*)^{-1}, \label{4.75} \\ \big(\phi_{j,2}(z,x)^{-1}\theta_{j,2}(z,x)\big)' &= \phi_{j,2}(z,x)^{-1}((B_j)_{1,1}(x)+z)(\phi_{j,2}(\bar z,x)^*)^{-1}. 
\label{4.76} \end{align} In exactly the same way one derives \begin{align} &\big(\theta_{j,1}(\bar z,x)^*(\phi_{j,1}(\bar z,x)^*)^{-1}\big)' \nonumber \\ &= (\theta_{j,1}(\bar z,x)^*(\phi_{j,1}(\bar z,x)^*)^{-1}\phi_{j,2}(\bar z,x)^*-\theta_{j,2}(\bar z,x)^*)((B_j)_{1,1}(x)-z)(\phi_{j,1}(\bar z,x)^*)^{-1} \nonumber \\ &= \phi_{j,1}(z,x)^{-1}((B_j)_{1,1}(x)-z)(\phi_{j,1}(\bar z,x)^*)^{-1}, \label{4.77} \\ &\big(\theta_{j,2}(\bar z,x)^*(\phi_{j,2}(\bar z,x)^*)^{-1}\big)' \nonumber \\ &=(\theta_{j,2}(\bar z,x)^*(\phi_{j,2}(\bar z,x)^*)^{-1}\phi_{j,1}(\bar z,x)^*-\theta_{j,1}(\bar z,x)^*)((B_j)_{1,1}(x)+z)(\phi_{j,2}(\bar z,x)^*)^{-1} \nonumber \\ &=\phi_{j,2}(z,x)^{-1}((B_j)_{1,1}(x)+z)(\phi_{j,2}(\bar z,x)^*)^{-1}, \label{4.78} \end{align} using \eqref{2.72}--\eqref{2.75}. Thus, \eqref{4.72} implies \begin{align} \phi_{1,1}(\bar z,x)^*((B_1)_{1,1}(x)-z)^{-1}\phi_{1,1}(z,x)&= \phi_{2,1}(\bar z,x)^*((B_2)_{1,1}(x)-z)^{-1}\phi_{2,1}(z,x), \label{4.79} \\ \phi_{1,2}(\bar z,x)^*((B_1)_{1,1}(x)+z)^{-1}\phi_{1,2}(z,x)&= \phi_{2,2}(\bar z,x)^*((B_2)_{1,1}(x)+z)^{-1}\phi_{2,2}(z,x), \label{4.80} \\ \theta_{1,1}(\bar z,x)^*((B_1)_{1,1}(x)-z)^{-1}\theta_{1,1}(z,x)&= \theta_{2,1}(\bar z,x)^*((B_2)_{1,1}(x)-z)^{-1}\theta_{2,1}(z,x), \label{4.81} \\ \theta_{1,2}(\bar z,x)^*((B_1)_{1,1}(x)+z)^{-1}\theta_{1,2}(z,x)&= \theta_{2,2}(\bar z,x)^*((B_2)_{1,1}(x)+z)^{-1}\theta_{2,2}(z,x) \label{4.82} \end{align} for a.e.~$x\in (x_0,x_0+a)$. Thus far we only used $B_j\in L^1([x_0,x_0+R])^{2m\times 2m}$ for all $R>0$, $j=1,2$, and \eqref{4.40a}.
In the special case $m=1$, each of the equations \eqref{4.79}--\eqref{4.82} allows for the completion of the proof of \eqref{4.40}. Indeed, using the fact that \begin{equation} \overline{ \phi_{j,k}(\bar z,x)}=\phi_{j,k}(z,x), \quad \overline{ \theta_{j,k}(\bar z,x)}=\theta_{j,k}(z,x), \quad j,k\in\{1,2\}, \label{4.83} \end{equation} and taking for instance \eqref{4.79}, one infers for a.e.~$x\in (x_0,x_0+a)$, that \begin{equation} \frac{\phi_{1,1}(z,x)^2}{\phi_{2,1}(z,x)^2}=\frac{(B_1)_{1,1}(x)-z} {(B_2)_{1,1}(x)-z}. \label{4.84} \end{equation} Since all zeros (and poles) of the left-hand side of \eqref{4.84} have even multiplicity, while all zeros (and poles) of the right-hand side of \eqref{4.84} are simple, one concludes, assuming only that $B_j\in L^1([x_0,x_0+R])^{2\times 2}$ for all $R>0$, $j=1,2$, that \begin{equation} \label{4.84a} (B_1)_{1,1}(x)=(B_2)_{1,1}(x) \, \text{ for a.e.~$x\in (x_0,x_0+a)$.} \end{equation} Thus, for the case $m=1$, we see by \eqref{4.79}, \eqref{4.80}, \eqref{4.83}, and \eqref{4.84a}, for a.e.~$x\in (x_0,x_0+a)$, that \begin{equation}\label{4.84b} \phi_{1,k}^2(z,x) = \phi_{2,k}^2(z,x), \ \ k=1,2. \end{equation} Now, \eqref{2.75}, \eqref{4.72}, and \eqref{4.83} show, for a.e.~$x\in (x_0,x_0+a)$, that \begin{align}\label{4.84c} \phi_{1,1}(z,x)\phi_{1,2}(z,x)&= \frac{\phi_{1,1}(z,x)}{\theta_{1,1}(z,x)}- \frac{\phi_{1,2}(z,x)}{\theta_{1,2}(z,x)} \nonumber \\ &=\frac{\phi_{2,1}(z,x)}{\theta_{2,1}(z,x)}- \frac{\phi_{2,2}(z,x)}{\theta_{2,2}(z,x)}=\phi_{2,1}(z,x)\phi_{2,2}(z,x). \end{align} By \eqref{HSa} we see that \begin{equation}\label{4.84d} (\phi_{j,1}^2(z,x))' = 2(z-(B_j(x))_{1,1} )\phi_{j,1}(z,x)\phi_{j,2}(z,x)
+ (B_j(x))_{1,2} \phi_{j,1}^2(z,x), \quad j=1,2. \end{equation} Thus, by \eqref{4.84a}, \eqref{4.84b}, and \eqref{4.84c}, \begin{equation}\label{4.84e} (B_1(x))_{1,2} = (B_2(x))_{1,2} \,\text{ for a.e.~$x\in (x_0,x_0+a)$.} \end{equation} Together, \eqref{4.84a} and \eqref{4.84e} imply \eqref{4.40} in the special case $m=1$.
Unfortunately, the case $m>1$ appears to be quite a bit more involved. To deal with this case we first note that taking determinants in \eqref{4.79} yields \begin{equation} \frac{\det(\phi_{1,1}(\bar z,x,x_0,\alpha)^*)\det(\phi_{1,1}(z,x,x_0,\alpha))} {\det(\phi_{2,1}(\bar z,x,x_0,\alpha)^*)\det(\phi_{2,1}(z,x,x_0,\alpha))}= \frac{\det((B_1)_{1,1}(x)-zI_m)}{\det((B_2)_{1,1}(x)-zI_m)} \label{4.85} \end{equation} for a.e.~$x\in (x_0,x_0+a)$. Next, we intend to prove that \begin{equation} \det((B_1)_{1,1}(x)-zI_m)=\det((B_2)_{1,1}(x)-zI_m) \, \text{ for a.e.~$x\in (x_0,x_0+a)$.} \label{4.86} \end{equation} Given the fact that $(B_j)_{1,1}(x)$, $j=1,2$, is self-adjoint, showing \eqref{4.86} is equivalent to showing that $(B_1)_{1,1}(x)$ and $(B_2)_{1,1}(x)$ are unitarily equivalent for a.e.~$x\in (x_0,x_0+a)$. Arguing by contradiction, we assume that at least one pair of eigenvalues of $(B_1)_{1,1}(x)$ and $(B_2)_{1,1}(x)$ differs. Thus, fixing $x_1\in (x_0,x_0+a)$, let $\lambda(x_1)$ be an eigenvalue of $(B_1)_{1,1}(x_1)$ but not of $(B_2)_{1,1}(x_1)$. Then \eqref{4.85} implies, for all $\alpha\in {\mathbb{C}}^{m\times 2m}$ satisfying \eqref{2.8e}, that \begin{equation} \det(\phi_{1,1}(\lambda(x_1),x_1,x_0,\alpha))=0. \label{4.87} \end{equation} Next, for $\lambda\in{\mathbb{R}}$ and $x>x_0$ define \begin{align}\label{4.88} N(\lambda,x,\alpha)=&\theta_{1,2}(\lambda,x,x_0,\alpha)\theta_{1,2} (\lambda,x,x_0,\alpha)^* \nonumber \\ &+\phi_{1,2}(\lambda,x,x_0,\alpha)\phi_{1,2}(\lambda,x,x_0,\alpha)^*. \end{align} Then, $N(\lambda,x,\alpha)$ is strictly positive definite, \begin{equation} N(\lambda,x,\alpha)>0. 
\label{4.89} \end{equation} Indeed, suppose $Nf=0$ for some $f\in{\mathbb{C}}^m$; then \begin{equation} \theta_{1,2}(\lambda)\theta_{1,2}(\lambda)^*f+ \phi_{1,2}(\lambda)\phi_{1,2}(\lambda)^*f=0 \label{4.90} \end{equation} implies \begin{equation} \theta_{1,2}(\lambda)\theta_{1,2}(\lambda)^*f=0, \; \phi_{1,2}(\lambda)\phi_{1,2}(\lambda)^*f=0 \label{4.91} \end{equation} and hence \begin{equation} \theta_{1,2}(\lambda)^*f=0, \; \phi_{1,2}(\lambda)^*f=0. \label{4.92} \end{equation} Thus, \begin{equation} f=(\theta_{1,1}(\lambda)\phi_{1,2}(\lambda)^*- \phi_{1,1}(\lambda)\theta_{1,2}(\lambda)^*)f=0 \label{4.93} \end{equation} by \eqref{2.95}, proving \eqref{4.89}. Introducing $\alpha_0=(I_m\; 0)\in{\mathbb{C}}^{m\times 2m}$ and $\gamma=(\gamma_1 \; \gamma_2)\in{\mathbb{C}}^{m\times 2m}$ defined by \begin{align} \gamma_1 &=[\theta_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0) \theta_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0)^* \label{4.94} \\ & \quad +\phi_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0) \phi_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0)^*]^{-1/2} \theta_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0), \nonumber \\ \gamma_2 &=[\theta_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0) \theta_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0)^* \label{4.95} \\ & \quad +\phi_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0) \phi_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0)^*]^{-1/2} \phi_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0), \nonumber \end{align} one verifies $\gamma\gamma^*=I_m$ (by \eqref{4.94} and \eqref{4.95}) and $\gamma J\gamma^*=0$ (by \eqref{2.93}). Thus, $\gamma$ satisfies \eqref{2.8e}. 
Next, since \begin{equation} \phi_{1,1}(\lambda(x_1),x_1,x_0,\gamma)= \phi_{1,1}(\lambda(x_1),x_1,x_0,\alpha_0)\gamma_1^*- \theta_{1,1}(\lambda(x_1),x_1,x_0,\alpha_0)\gamma_2^* \label{4.96} \end{equation} as a special case of \eqref{2.96}, one derives \begin{align} \phi_{1,1}(\lambda(x_1),x_1,x_0,\gamma)&= [\phi_{1,1}(\lambda(x_1),x_1,x_0,\alpha_0) \theta_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0)^* \nonumber \\ & \quad -\theta_{1,1}(\lambda(x_1),x_1,x_0,\alpha_0) \phi_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0)^*]\times \nonumber \\ & \quad \times [\theta_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0) \theta_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0)^* \nonumber \\ & \quad +\phi_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0) \phi_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0)^*]^{-1/2} \nonumber \\ & =-[\theta_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0) \theta_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0)^* \label{4.97} \\ & \quad +\phi_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0) \phi_{1,2}(\lambda(x_1),x_1,x_0,\alpha_0)^*]^{-1/2}<0, \nonumber \end{align} using \eqref{2.95}. This contradicts \eqref{4.87} and thus proves \eqref{4.86}. Hence, for $\lambda\in {\mathbb{R}}$ and for a.e.~$x\in (x_0,x_0+a)$, \begin{equation} \abs{\det(\phi_{1,1}(\lambda,x,x_0,\alpha))}= \abs{\det(\phi_{2,1}(\lambda,x,x_0,\alpha))}, \label{4.98} \end{equation} by \eqref{4.85}. Equation \eqref{4.98} implies that for a.e.~$x_1\in(x_0,x_0+a)$, the two families of Dirac operators $D_{j,+}(\alpha,\alpha_0)$, $j=1,2$, in $L^2([x_0,x_1])^{2m}$, defined by \begin{align} D_{j,+}(\alpha,\alpha_0)&=J \frac{d}{dx}-B_j, \label{4.99} \\ \text{\rm{dom}}(D_{j,+}(\alpha,\alpha_0))&=\{\phi\in L^2([x_0,x_1])^{2m} \mid \phi \in\text{\rm{AC}}([x_0,x_1])^{2m}; \nonumber \\ & \qquad \alpha\phi(x_0)=0, \, \alpha_0\phi(x_1)=0; \, (J\phi^\prime-B_j\phi)\in L^2([x_0,x_1])^{2m} \}, \nonumber \end{align} with $\alpha_0 = (I_m\;0)$, have identical spectra for all boundary data $\alpha\in{\mathbb{C}}^{m\times 2m}$ satisfying \eqref{2.8e}. 
Hence, assuming $B_j\in L^\infty([x_0,x_0+a])^{2m\times 2m}$, $j=1,2$, one can apply Theorem~2.3 of Malamud \cite{Ma99} and obtains \eqref{4.40}. \end{proof}
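In the scalar case $m=1$ the key positivity step in the contradiction argument above becomes completely explicit. The following sketch (Python, with illustrative numerical values that are not taken from the text) checks the two facts used there: the normalization $\gamma\gamma^*=I_m$ built into \eqref{4.94}, \eqref{4.95}, and the strict negativity $\phi_{1,1}(\lambda(x_1),x_1,x_0,\gamma)=-N^{-1/2}<0$ obtained in \eqref{4.97}.

```python
import math

# Illustrative scalar (m = 1) data: theta = (theta1, theta2), phi = (phi1, phi2)
# are chosen to satisfy the Wronskian normalization theta1*phi2 - phi1*theta2 = 1,
# the scalar analogue of the relation (2.95) used in the proof.
theta1, theta2 = 2.0, 3.0
phi1, phi2 = 1.0, 2.0
assert abs(theta1 * phi2 - phi1 * theta2 - 1.0) < 1e-12

# N = theta2^2 + phi2^2 > 0, the scalar form of N(lambda, x, alpha_0) in (4.88).
N = theta2**2 + phi2**2

# gamma = N^{-1/2} (theta2, phi2) as in (4.94), (4.95); then gamma gamma^* = 1 ...
gamma1, gamma2 = theta2 / math.sqrt(N), phi2 / math.sqrt(N)
assert abs(gamma1**2 + gamma2**2 - 1.0) < 1e-12

# ... and phi_{1,1}(gamma) = phi1*gamma1 - theta1*gamma2 = -N^{-1/2} < 0,
# the scalar form of (4.96), (4.97), contradicting det(phi_{1,1}) = 0 in (4.87).
phi11_gamma = phi1 * gamma1 - theta1 * gamma2
assert abs(phi11_gamma + 1.0 / math.sqrt(N)) < 1e-12
assert phi11_gamma < 0
```

The same cancellation drives the matrix case; only the passage from \eqref{4.98} to Malamud's Borg-type theorem requires the additional boundedness hypothesis.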
We should note that Malamud's Theorem~2.3 in \cite{Ma99} only requires the equality of $m^2+1$ spectra (associated with linearly independent boundary data indexed by $\alpha\in{\mathbb{C}}^{m\times 2m}$) in order to conclude \eqref{4.40}.
There is no particular significance to the rays $\rho_{+,\ell}$, $\ell=1,2$, in Theorem~\ref{t4.10}. Any non-self-intersecting Jordan arc that tends to infinity in the sectors $\varepsilon\leq\arg(z)\leq (\pi/2)-\varepsilon$ and $(\pi/2)+\varepsilon\leq\arg(z)\leq \pi-\varepsilon$ for some $0<\varepsilon<\pi/4$ will do.
\begin{remark} \label{r4.10} We were not able to prove \eqref{4.40} directly from \eqref{4.79}--\eqref{4.82}, without resorting to the arguments involving \eqref{4.98} and \eqref{4.99}. Concluding the proof by means of the Borg-type Theorem~2.3 of Malamud \cite{Ma99} (cf.~also Theorem~4 in \cite{Ma99a}) requires the introduction of the extra hypothesis $B_j\in L^\infty([x_0,x_0+a])^{2m\times 2m}$, $j=1,2$, in the matrix context $m>1$, since the construction of transformation operators for Dirac-type systems, to date, uses such an additional hypothesis on $B$. This extra hypothesis is clearly superfluous in the case $m=1$. Obviously, one conjectures that this extra hypothesis on $B_j$ should also be redundant in Theorem~\ref{t4.10}, but this appears to require nontrivial future efforts. In this context it might be interesting to note that the higher-order expansions \eqref{4.49}--\eqref{4.61} do not determine $B$ uniquely. An explicit analysis shows that while they do determine $B_{1,2}$, they only determine $B_{1,1}^2$, not $B_{1,1}$ itself. Thus that approach does not aid in proving \eqref{4.40} (besides, it would require the additional hypotheses \eqref{4.39a} on $B$). \end{remark}
The corresponding local uniqueness result in terms of $M(z,x_0,\alpha)$ then reads as follows.
\begin{theorem} \label{t4.11} Fix $x_0\in{\mathbb{R}}$ and suppose $A_j=I_{2m}$, $B_j\in L^1_{\text{\rm{loc}}}({\mathbb{R}})^{2m\times 2m}$, and $B_j=B_j^*$ a.e.~on ${\mathbb{R}}$, $j=1,2$. Suppose also that $B_j$ is in the normal form given in \eqref{4.115} a.e.~on $(x_0,\infty)$, $j=1,2$. Denote by $M_{j}(z,x_0,\alpha)$ the unique Weyl-Titchmarsh matrices \eqref{2.62} corresponding to the Dirac-type operators $D_{j}$, $j=1,2$, in \eqref{2.61}. Then, \begin{equation} \text{if for some $a>0$, }\, B_1(x)=B_2(x) \, \text{ for a.e. $x\in (x_0-a,x_0+a)$,} \label{4.110} \end{equation} one obtains \begin{equation}
\|M_{1}(z,x_0,\alpha)-M_{2}(z,x_0,\alpha)\|_{{\mathbb{C}}^{2m\times 2m}}\underset{\substack{\abs{z} \to\infty\\ z\in \rho_{+}}}{=} O\big(e^{-2\text{\rm Im}(z)a}\big) \label{4.111} \end{equation} along any ray $\rho_+\subset{\mathbb{C}}_+$ with $0<\arg(z)<\pi$ and for all $\alpha\in{\mathbb{C}}^{m\times 2m}$ satisfying \eqref{2.8e}. Conversely, fix a $\hat\alpha\in{\mathbb{C}}^{m\times 2m}$ satisfying \eqref{2.8e} and if $m>1$, assume in addition that $B_j\in L^\infty([x_0-a,x_0+a])^{2m\times 2m}$, $j=1,2$. Moreover, suppose that for all $\varepsilon >0$, \begin{equation}
\|M_{1}(z,x_0,\hat\alpha_1)-M_{2}(z,x_0,\hat\alpha_1)\|_{{\mathbb{C}}^{2m\times 2m}}\underset{\substack{|z|\to\infty\\z\in \rho_{+,\ell}}}{=} O\big(e^{-2\text{\rm Im}(z)(a-\varepsilon)}\big), \quad \ell=1,2, \label{4.112} \end{equation} along a ray $\rho_{+,1}\subset{\mathbb{C}}_+$ with $0<\arg(z)<\pi/2$ and along a ray $\rho_{+,2}\subset{\mathbb{C}}_+$ with $\pi/2<\arg(z)<\pi$. Then \begin{equation} B_1(x)=B_2(x) \text{ for a.e. } x\in [x_0-a,x_0+a]. \label{4.113} \end{equation} \end{theorem}
\begin{proof} \eqref{4.111} is proved by combining \eqref{2.620}, \eqref{4.38AA}, and \eqref{4.38}; \eqref{4.113} then follows by combining \eqref{2.620}, \eqref{4.39}, and \eqref{4.40}, taking into account the asymptotic expansions \begin{equation}
M_\pm (z,x_0)\underset{|z|\to\infty}{=}\pm iI_m + o(1) \label{4.114} \end{equation} along any ray with $\varepsilon<\arg(z)<\pi-\varepsilon$ in the case of Dirac-type operators (cf.~\eqref{3.1}). \end{proof}
\begin{remark} \label{r4.12} Theorem~\ref{t4.10} and Theorem~\ref{t4.11} yield new global uniqueness theorems for half-line and full-line Dirac-type operators, extending the classical Borg-Marchenko-type results. Indeed, if \eqref{4.39} (resp., \eqref{4.112}) holds for all $a>0$, then \eqref{4.40} (resp. \eqref{4.113}) holds for a.e.~$x\in[x_0,\infty)$ (resp., for a.e.~$x\in{\mathbb{R}}$). \end{remark}
In the case of scalar Schr\"odinger operators, the analog of Theorem~\ref{t4.10} is due to Simon \cite{Si98}. An alternative proof, applicable to matrix-valued Schr\"odinger operators, was presented in \cite{GS99} (cf.~also \cite{GKM00}). More recently, yet another proof was found by Bennewitz \cite{Be00} (following some ideas in \cite{Bo52}). These results extend the classical (global) uniqueness results due to Borg \cite{Bo52} and Marchenko \cite{Ma50}, \cite{Ma52} (cf.~also \cite{Be00a}), which state that half-line $m$-functions uniquely determine the corresponding potential coefficient. The Dirac-type results presented in this section (especially, all local considerations) appear to be new, even in the special case $m=1$. Previous results in the Dirac case focused on global uniqueness questions only. We refer to Gasymov and Levitan \cite{GL66} in the case $m=1$ and to Lesch and Malamud \cite{LM00} in the matrix case $m\in{\mathbb{N}}$. Most recently, Alexander Sakhnovich kindly informed us that his integral representation of the Weyl-Titchmarsh matrix in \cite{Sa88a} can be used to derive asymptotic expansions for the Weyl-Titchmarsh matrix and its associated matrix-valued spectral function, and also yields a result analogous to Theorem~\ref{t4.10}\,(i) for a certain class of canonical systems. Moreover, in the case of skew-adjoint Dirac-type systems, similar results are discussed in \cite{Sa90} and applied to the nonlinear Schr\"odinger equation on a half-axis.
Although not directly used in this paper, it should be pointed out that inverse monodromy problems for canonical systems received a lot of attention (some of it very recently). The interested reader is referred to \cite{AD97}, \cite{AD00}, \cite{AD00a}, \cite{Ma95}, \cite{Ma99}, \cite{Ma99a}, \cite{Sa94}, \cite{Sa99a} and the extensive literature cited therein. Moreover, inverse spectral theory associated with canonical systems is discussed in \cite{MST01}, \cite{Sa90}, \cite{Sa97b}, \cite{Sa01}, \cite{Sa94}, \cite{Sa94a}, \cite{Sa99}, \cite{Sa99a} (see also the extensive literature cited in \cite{GKM00}).
\section{Trace Formulas and Borg-Type Theorems}\label{s6}
In our final section we derive a trace formula for $B$ and then discuss its application to Borg-type uniqueness theorems for Dirac-type operators.
\begin{theorem} \label{t5.1} Assume Hypothesis~\ref{h2.1} with $A=I_{2m}$, and let $\alpha_0=(I_m\; 0)\in{\mathbb{C}}^{m\times 2m}$. Fix $x_0\in{\mathbb{R}}$ and suppose that for all $R>0$, \begin{align} & \quad \underset{y\in [x_0,x_0+R]}{\text{\rm{ess\,sup}}} \,
\bigg\|\int_y^{x_0+R} dx'\,B(x')\exp(2iz(x'-y))
+\frac{1}{2iz}B(y)\bigg\|_{{\mathbb{C}}^{2m\times 2m}} \nonumber \\ & + \underset{y\in [x_0-R,x_0]}{\text{\rm{ess\,sup}}} \,
\bigg\|\int_{x_0-R}^{y} dx'\,B(x')\exp(2iz(x'-y))
-\frac{1}{2iz}B(y)\bigg\|_{{\mathbb{C}}^{2m\times 2m}} \nonumber \\ & \underset{\substack{\abs{z}
\to\infty\\ z\in \rho_+}}{=}o\big(|z|^{-1}\big) \label{5.0} \end{align} along a ray $\rho_+\subset{\mathbb{C}}_+$. In addition, assume $B_{k,k'}B_{\ell,\ell'}\in L^1_{\text{\rm{loc}}}({\mathbb{R}})^{m\times m}$ for all $k,k',\ell,\ell'\in\{1,2\}$. Then, with $\Upsilon(\lambda,x,\alpha_0)$ defined in \eqref{2.69}, \begin{align} & \begin{pmatrix}B_{1,1}(x)-B_{2,2}(x) & B_{1,2}(x)+B_{2,1}(x) \\
B_{1,2}(x)+B_{2,1}(x)& B_{2,2}(x)-B_{1,1}(x)\end{pmatrix} \nonumber \\
&=\lim_{\substack{|z|\to \infty\\z\in\rho_+}} 2 \int_{\mathbb{R}} d\lambda \, z^2(\lambda-z)^{-2}\Upsilon(\lambda,x,\alpha_0) \, \text{ for a.e. $x\in{\mathbb{R}}$}. \label{5.1} \end{align} \end{theorem}
\begin{proof} By \eqref{2.64}, \begin{equation} \frac{d}{dz}\text{\rm ln}(M(z,x,\alpha_0)) =\int_{\mathbb{R}} d\lambda\, (\lambda-z)^{-2}\Upsilon(\lambda,x,\alpha_0). \label{5.2} \end{equation} Next, suppose that $x\in{\mathbb{R}}$ is a left and right Lebesgue point of $B$. By \eqref{4.103}, \eqref{4.104} one obtains \begin{align} &\frac{d}{dz}\text{\rm ln}(M(z,x,\alpha_0)) \nonumber \\
&\underset{\substack{|z|\to \infty\\z\in\rho_+}}{=}\frac{1}{4} \begin{pmatrix}B_{1,1}(x+0)-B_{2,2}(x+0) & B_{1,2}(x+0)+B_{2,1}(x+0) \\ B_{1,2}(x+0)+B_{2,1}(x+0)& B_{2,2}(x+0)-B_{1,1}(x+0)\end{pmatrix}z^{-2} \label{5.3} \\ & \quad \;\,\, +\frac{1}{4} \begin{pmatrix}B_{1,1}(x-0)-B_{2,2}(x-0) & B_{1,2}(x-0)+B_{2,1}(x-0) \\ B_{1,2}(x-0)+B_{2,1}(x-0)& B_{2,2}(x-0)-B_{1,1}(x-0)\end{pmatrix}z^{-2} +o\big(z^{-2}\big) \nonumber \end{align} and hence \begin{align} & \quad \, \frac{1}{2}\begin{pmatrix}B_{1,1}(x+0)-B_{2,2}(x+0) & B_{1,2}(x+0)+B_{2,1}(x+0) \\
B_{1,2}(x+0)+B_{2,1}(x+0)& B_{2,2}(x+0)-B_{1,1}(x+0)\end{pmatrix} \nonumber \\ &+\frac{1}{2}\begin{pmatrix}B_{1,1}(x-0)-B_{2,2}(x-0) & B_{1,2}(x-0)+B_{2,1}(x-0) \\
B_{1,2}(x-0)+B_{2,1}(x-0)& B_{2,2}(x-0)-B_{1,1}(x-0)\end{pmatrix} \nonumber \\
&=\lim_{\substack{|z|\to \infty\\z\in\rho_+}} 2 \int_{\mathbb{R}} d\lambda \, z^2(\lambda-z)^{-2}\Upsilon(\lambda,x,\alpha_0). \label{5.2a} \end{align} Since a.e. $x\in{\mathbb{R}}$ is a Lebesgue point of $B$, one concludes \eqref{5.1}. \end{proof}
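The step from the exponential Herglotz representation of $\text{\rm ln}(M(z,x,\alpha_0))$ to \eqref{5.2} rests on differentiating the Herglotz kernel under the integral sign. The kernel identity itself is elementary and can be confirmed symbolically; the following check (Python/SymPy) is a sanity check only, not part of the proof.

```python
import sympy as sp

lam, z = sp.symbols('lambda z')

# Herglotz kernel appearing in the exponential representation of ln(M(z, x, alpha_0));
# the second term is z-independent and only normalizes the integral at infinity.
kernel = 1 / (lam - z) - lam / (1 + lam**2)

# Differentiating in z produces the kernel (lambda - z)^{-2} of eq. (5.2).
assert sp.simplify(sp.diff(kernel, z) - 1 / (lam - z)**2) == 0
```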
In the case $m=1$, a trace formula for Dirac-type operators, using Krein spectral shift functions and exponential representations of Herglotz functions, was discussed in \cite{Ti95}. This circle of ideas was first introduced in connection with trace formulas for Schr\"odinger operators in \cite{GS96} (see also \cite{GHS95}, \cite{GHSZ95}, \cite{Ry99}, \cite{Ry99a} in the scalar case $m=1$). The corresponding trace formulas for matrix-valued Schr\"odinger operators were introduced in \cite{GH97} (see also \cite{CGHL00}).
Analogous trace formulas can be derived for all higher-order coefficients $M_k(x,\alpha_0)$ in \eqref{4.103} (see, e.g., \cite{GHSZ95} in connection with scalar Schr\"odinger operators).
A comparison of the trace formula (3.20) in \cite{CGHL00} for Schr\"{o}dinger operators with its Dirac-type counterpart \eqref{5.1} reveals characteristic differences. While in the Schr\"{o}dinger case the trace formula directly involves the potential coefficient $Q(x)$, $M_1(x,\alpha_0)$ differs markedly from a constant multiple of $B(x)$, and consequently, the Dirac-type trace formula \eqref{5.1} does not directly involve $B(x)$ but certain linear combinations of $B_{j,k}(x)$. This is related to the fact that $M(z,x_0,\alpha_0)$ (or equivalently, $\Upsilon(\lambda,x_0,\alpha_0)$), in general, does not uniquely determine $B$ a.e. In fact, there exists a typical ambiguity concerning the coefficients of $D$ related to unitary gauge-transformations of $D$. In the case $m=1$ this ambiguity is well-known and discussed, e.g., in \cite{GL66}, \cite[Sect.~I.10]{LS75}, \cite[Ch.~7]{LS91}. These gauge transformations leave the spectrum of $D$ invariant and suggest that we focus our attention on certain normal forms of $D$ in connection with inverse spectral problems for Dirac-type operators.
\begin{lemma} \label{l5.2} Assume Hypothesis~\ref{h2.4}. Then $D=J\frac{d}{dx}-B$ is unitarily equivalent to $\widetilde D$, where $\widetilde D$ in $L^2({\mathbb{R}})^{2m}$ is of the normal form \begin{equation} \widetilde D=J\frac{d}{dx}-\widetilde B= \begin{pmatrix} -\widetilde B_{1,1} & -I_m \frac{d}{dx}- \widetilde B_{1,2} \\[1mm]
I_m \frac{d}{dx}-\widetilde B_{1,2}& \widetilde B_{1,1} \end{pmatrix}. \label{5.4} \end{equation} Here $\widetilde B=\widetilde B^*$ a.e. and \begin{align} \widetilde B_{1,1}&=-(1/2)\text{\rm Im}\big(U_{1,1}^{-1}[(B_{1,2}+B_{2,1}) -i(B_{1,1}-B_{2,2})] U_{2,2}\big)= \widetilde B_{1,1}^*, \label{5.5} \\ \widetilde B_{1,2}&=(1/2)\text{\rm Re}\big(U_{1,1}^{-1}[(B_{1,2}+B_{2,1}) -i(B_{1,1}-B_{2,2})] U_{2,2}\big)=\widetilde B_{1,2}^*, \label{5.6} \end{align} with $U_{j,j}\in{\mathbb{C}}^{m\times m}$, $j=1,2$, satisfying the first-order system of ordinary differential equations \begin{align} iU_{j,j}^\prime(x)&=-(1/2)\big((-1)^j(B_{1,1}(x) +B_{2,2}(x))+i(B_{1,2}(x)-B_{2,1}(x)) \big)U_{j,j}(x), \nonumber \\ &\hspace*{6cm} \text{for a.e. $x\in{\mathbb{R}}$}, \quad j=1,2. \label{5.7} \end{align} \end{lemma}
\begin{proof} We start with the unitary transformation $V$ in $L^2({\mathbb{R}})^{2m}$ defined by \begin{equation} V=\f1{\sqrt{2}}\begin{pmatrix} i I_m & I_m \\ I_m & i I_m \end{pmatrix}, \quad V^{-1}=\f1{\sqrt{2}}\begin{pmatrix} -i I_m & I_m \\ I_m & -i I_m \end{pmatrix}, \label{5.8} \end{equation} which maps $D$ to $D_1$, where \begin{align} D_1&=V^{-1}DV= i \begin{pmatrix} I_m\frac{d}{dx} & 0 \\ 0 & -I_m\frac{d}{dx} \end{pmatrix} \nonumber \\ & \quad -\f12 \begin{pmatrix}B_{1,1}+B_{2,2}-i(B_{1,2}-B_{2,1}) & B_{1,2}+B_{2,1}-i(B_{1,1}-B_{2,2}) \\ B_{1,2}+B_{2,1}+i(B_{1,1}-B_{2,2}) & B_{1,1}+B_{2,2}+i(B_{1,2}-B_{2,1})\end{pmatrix}. \label{5.9} \end{align} Next, we introduce the unitary operator $U$ in $L^2({\mathbb{R}})^{2m}$ defined by \begin{equation} U=\begin{pmatrix}U_{1,1} & 0 \\ 0 & U_{2,2}\end{pmatrix}, \label{5.10} \end{equation} where the unitary $m\times m$ matrices $U_{j,j}\in{\mathbb{C}}^{m\times m}$ are solutions of the first-order system \eqref{5.7}. Since by hypothesis $B_{j,k}\in L^1_{\text{\rm{loc}}}({\mathbb{R}})^{m\times m}$, $1\leq j,k \leq 2$, the solutions of equation \eqref{5.7} are well-defined and $U_{j,j}\in\text{\rm{AC}}_{\text{\rm{loc}}}({\mathbb{R}})^{m\times m}$, $j=1,2$. One computes \begin{equation} \widehat D=U^{-1} D_1 U= \begin{pmatrix}i I_m \frac{d}{dx} & -\widehat B_{1,2} \\[1mm] -\widehat B_{1,2}^* & -i I_m \frac{d}{dx}\end{pmatrix}, \label{5.11} \end{equation} where $\widehat B_{1,2}\in L^1_{\text{\rm{loc}}}({\mathbb{R}})^{m\times m}$ and \begin{equation} \widehat B_{1,2}(x)=(1/2) U^{-1}_{1,1}(x)\big(B_{1,2}(x)+B_{2,1}(x) -i(B_{1,1}(x)-B_{2,2}(x))\big)U_{2,2}(x). \label{5.12} \end{equation} Finally, defining $\widetilde D=V\widehat D V^{-1}$, one arrives at \eqref{5.4}--\eqref{5.6}. \end{proof}
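The first conjugation step can be verified numerically. The following sketch (Python/NumPy; the block size $m=2$ is an arbitrary illustrative choice) confirms that $V$ in \eqref{5.8} is unitary and that it diagonalizes the leading first-order term, $V^{-1}JV=i\,\mathrm{diag}(I_m,-I_m)$, which is the differential part of $D_1$ in \eqref{5.9}.

```python
import numpy as np

m = 2  # block size; the check works for any m
I = np.eye(m)
Z = np.zeros((m, m))

# V as in (5.8), and J from D = J d/dx - B (m x m blocks).
V = np.block([[1j * I, I], [I, 1j * I]]) / np.sqrt(2)
J = np.block([[Z, -I], [I, Z]])

# V is unitary ...
assert np.allclose(V @ V.conj().T, np.eye(2 * m))

# ... and conjugation diagonalizes the leading symbol:
# V^{-1} J V = i diag(I_m, -I_m), the first-order part of D_1 in (5.9).
D1_symbol = np.linalg.inv(V) @ J @ V
assert np.allclose(D1_symbol, 1j * np.block([[I, Z], [Z, -I]]))
```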
Thus, unitary invariants of $D$ (such as the spectrum, $\text{\rm{spec}} (D)$, of $D$ and its multiplicity) cannot determine $B$ in general but at best a potential matrix of the type (normal form) $\widetilde B$ in \eqref{5.4}. A further restriction on the solvability of inverse spectral problems for Dirac-type operators is mentioned in the following result.
\begin{lemma} \label{l5.3} Assume Hypothesis~\ref{h2.4} and let $\omega=\omega^*\in{\mathbb{C}}^{m\times m}$ be a constant self-adjoint $m\times m$ matrix. Then $D=J\frac{d}{dx}-B$ is unitarily equivalent to $\widetilde D_\omega$ in $L^2({\mathbb{R}})^{2m}$, where \begin{equation} \widetilde D_\omega=J\frac{d}{dx}-\widetilde B_\omega=\begin{pmatrix} -\widetilde B_{\omega,1,1} & -I_m\frac{d}{dx}-\widetilde B_{\omega,1,2} \\[1mm] I_m\frac{d}{dx}-\widetilde B_{\omega,1,2} & \widetilde B_{\omega,1,1}\end{pmatrix}, \label{5.13} \end{equation} with \begin{align} \widetilde B_{\omega,1,1}&=-(1/2)\text{\rm Im}\big(e^{i\omega}U_{1,1}^{-1}[(B_{1,2} +B_{2,1})-i(B_{1,1}-B_{2,2})] U_{2,2}e^{i\omega}\big)=\widetilde B_{\omega,1,1}^*, \nonumber \\ \widetilde B_{\omega,1,2}&=(1/2)\text{\rm Re}\big(e^{i\omega}U_{1,1}^{-1}[(B_{1,2} +B_{2,1})-i(B_{1,1}-B_{2,2})] U_{2,2}e^{i\omega}\big)=\widetilde B_{\omega,1,2}^*, \label{5.14} \end{align} and with $U_{j,j}$, $j=1,2$, satisfying the first-order system \eqref{5.7}. \end{lemma}
\begin{proof} Define \begin{equation} U_\omega=\begin{pmatrix}e^{i\omega}& 0 \\ 0 & e^{-i\omega} \end{pmatrix}. \label{5.15} \end{equation} Using the notation employed in the proof of Lemma~\ref{l5.2} one verifies that \begin{equation} \widetilde D_\omega=V U_\omega (VU)^{-1}DVU(VU_\omega)^{-1}. \label{5.16} \end{equation} \end{proof}
In particular, choosing $\omega=(\pi/2)I_m$ effects the sign change $\widetilde B\to -\widetilde B$, with $\widetilde B$ given by \eqref{5.5}, \eqref{5.6}.
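This sign change is immediate from \eqref{5.14}: for $\omega=(\pi/2)I_m$ one has $e^{i\omega}=iI_m$, so the bracketed combination acquires a factor $i^2=-1$. The following numerical sketch (Python; the matrix $X$ below stands in for the combination $U_{1,1}^{-1}[(B_{1,2}+B_{2,1})-i(B_{1,1}-B_{2,2})]U_{2,2}$ in \eqref{5.5}, \eqref{5.6} and is randomly generated, an illustrative assumption) confirms the flip $\widetilde B\to-\widetilde B$.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 2

# X plays the role of the bracketed combination in (5.5), (5.6);
# any complex m x m matrix will do for this check.
X = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))

# Matrix "real" and "imaginary" parts: Re(X) = (X + X^*)/2, Im(X) = (X - X^*)/(2i).
B11 = -0.5 * (X - X.conj().T) / (2j)  # B-tilde_{1,1} = -(1/2) Im(X)
B12 = 0.5 * (X + X.conj().T) / 2      # B-tilde_{1,2} =  (1/2) Re(X)

# omega = (pi/2) I_m gives e^{i omega} = i I_m, hence e^{i omega} X e^{i omega} = -X ...
E = 1j * np.eye(m)
X_omega = E @ X @ E
assert np.allclose(X_omega, -X)

# ... so both blocks of B-tilde change sign, as noted in the text.
B11_omega = -0.5 * (X_omega - X_omega.conj().T) / (2j)
B12_omega = 0.5 * (X_omega + X_omega.conj().T) / 2
assert np.allclose(B11_omega, -B11)
assert np.allclose(B12_omega, -B12)
```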
For detailed discussions of various normal forms for Dirac-type operators we refer to \cite{GL66}, \cite{HJKS91}, \cite[Ch.~9]{LS75}, \cite[Ch.~7]{LS91} in the case $m=1$ and to \cite{Ga68}, \cite{LM00}, \cite{Ma99}, \cite[p.~193--195]{Ma86}, \cite{Ma65} in the general matrix-valued case. Perhaps it should be noted that if $D$ is in its normal form $\widetilde D$ as in \eqref{5.4}, ${\widetilde D}^2$ turns into a $2m\times 2m$ matrix-valued Schr\"odinger operator under appropriate regularity assumptions on $\widetilde B$. Details on this fact and the relation between the $M$-matrices of $\widetilde D$ and ${\widetilde D}^2$ can be found in Section~3 of \cite{GKM00}.
Next, we turn to Borg-type theorems, one of the principal topics of this paper. In 1946 Borg \cite{Bo46} proved, among a variety of other inverse spectral theorems, the following result.
\begin{theorem}[\cite{Bo46}] \label{t5.4} Assume $q\in L^2_{\text{\rm{loc}}} ({\mathbb{R}})$ to be real-valued and periodic and let \begin{equation} h=-d^2/dx^2+q \label{5.16a} \end{equation} be the associated self-adjoint Schr\"odinger operator in $L^2({\mathbb{R}})$. Moreover, suppose that $\text{\rm{spec}} (h)=[e_0,\infty)$ for some $e_0\in{\mathbb{R}}$. Then \begin{equation} q(x)=e_0 \, \text{ for a.e. $x\in{\mathbb{R}}$}. \label{5.17} \end{equation} \end{theorem}
The analog of Theorem~\ref{t5.4} for Dirac-type operators (in the case $m=1$) was proven by Giachetti and Johnson \cite{GJ84} in 1984 (see also \cite{Ge89}, \cite{Ge91}, \cite{GSS91} in the special case where $p$ is constant, and \cite{GG93} in the case where $p,q\in L^2 ({\mathbb{R}})$ are real-valued and periodic).
\begin{theorem}[\cite{GJ84}] \label{t5.4a} Assume $p,q\in L^\infty ({\mathbb{R}})$ to be real-valued and periodic and let \begin{equation} d=\begin{pmatrix}-p &-\frac{d}{dx}-q\\ \frac{d}{dx}-q & p\end{pmatrix} \label{5.17a} \end{equation} be the associated self-adjoint Dirac-type operator in $L^2({\mathbb{R}})^2$. Moreover, suppose that $\text{\rm{spec}} (d)={\mathbb{R}}$. Then \begin{equation} p(x)=q(x)=0 \, \text{ for a.e. $x\in{\mathbb{R}}$}. \label{5.17b} \end{equation} \end{theorem}
Traditionally, uniqueness results such as Theorems~\ref{t5.4} and \ref{t5.4a} are called Borg-type theorems. (However, this terminology is not universally adopted and hence a bit unfortunate. Indeed, inverse spectral results on finite intervals, recovering the potential coefficient(s) from several spectra, were also pioneered by Borg in his celebrated paper \cite{Bo46}, and hence are also coined Borg-type theorems in the literature; see, e.g., \cite{Ma94}, \cite{Ma99}, \cite{Ma99a}.)
A quick and natural proof of Theorem~\ref{t5.4}, based on a trace formula for $q$, was presented in \cite{CGHL00}. This strategy of proof was then applied to the case of matrix-valued Schr\"odinger operators and the corresponding matrix-valued analog of Theorem~\ref{t5.4} was also proved in \cite{CGHL00} along these lines. A closer examination of the proof of Theorem~\ref{t5.4} shows that periodicity of $q$ is not the crucial element in the proof of the uniqueness result \eqref{5.17}. The key ingredient (besides $\text{\rm{spec}} (h)=[e_0,\infty)$) is clearly the fact that for all $x\in{\mathbb{R}}$, \begin{equation} \xi(\lambda,x)=1/2 \, \text{ for a.e. } \lambda\in\text{\rm{spec}}_{\text{\rm{ess}}}(h) \label{5.18} \end{equation} ($\text{\rm{spec}}_{\text{\rm{ess}}}(\,\cdot\,)$ the essential spectrum), where $\xi(\cdot,x)$ is defined by \begin{equation} \xi(\lambda,x)=\lim_{\varepsilon\to 0}\pi^{-1}\text{\rm Im}(\text{\rm ln}(g(\lambda+i\varepsilon,x))) \, \text{ for a.e. $\lambda\in{\mathbb{R}}$}, \label{5.19} \end{equation} and $g(z,x)$ denotes Green's function (i.e., the integral kernel of the resolvent) of $h$ on the diagonal, \begin{equation} g(z,x)=(h-z)^{-1}(x,x). \label{5.20} \end{equation} Completely analogous considerations apply to the Dirac-type case.
Real-valued periodic potentials are known to satisfy \eqref{5.18} but so are certain classes of real-valued quasi-periodic and almost-periodic potentials $q$ (see, e.g., \cite{CJ87}, \cite{Cr89}, \cite{DS83}, \cite{Jo82}, \cite{JM82}, \cite{Ko84}, \cite{Ko87a}, \cite{KK88}, \cite{KS88}, \cite{SY95}). In particular, the class of real-valued algebro-geometric finite-gap potentials $q$ (a subclass of the set of real-valued quasi-periodic potentials) is a prime example satisfying \eqref{5.18} without necessarily being periodic. Traditionally, potentials $q$ satisfying \eqref{5.18} are called \textit{reflectionless} (see \cite{Cr89}, \cite{DS83}, \cite{KK88}, \cite{SY95}). Again the analogous notions apply to the Dirac-type case (cf., e.g., \cite{CJ87}, \cite{GJ84}, \cite{Jo87}).
Taking this circle of ideas as the point of departure for our derivation of Borg-type results for Dirac-type operators, we now use the reflectionless situation described in \eqref{5.18}, actually, its proper analog for Dirac-type systems, as the model for the subsequent definition.
\begin{definition}\label{d5.5} Assume Hypothesis~\ref{h2.1} with $A=I_{2m}$, and let $\alpha_0=(I_m\; 0)\in{\mathbb{C}}^{m\times 2m}$. Then $B$ is called {\it reflectionless} if for all $x\in{\mathbb{R}}$, \begin{equation} \Upsilon(\lambda,x,\alpha_0)= (1/2) I_{2m} \, \text{ for a.e.\ $\lambda\in\text{\rm{spec}}_{\text{\rm{ess}}}(D)$}. \label{5.21} \end{equation} \end{definition}
Since hardly any confusion can arise, we will also call the Dirac-type operator $D$ reflectionless if \eqref{5.21} is satisfied.
Given Definition~\ref{d5.5}, we turn to a Borg-type uniqueness theorem and formulate the analog of Theorem~\ref{t5.4} for (reflectionless) Dirac-type operators.
\begin{theorem}\label{t5.6} Assume Hypothesis \ref{h2.1} with $A=I_{2m}$, and let $\alpha_0=(I_m\; 0)\in{\mathbb{C}}^{m\times 2m}$. If for all $x\in{\mathbb{R}}$, $\Upsilon(\lambda,x,\alpha_0)=C$ is a constant $2m\times 2m$ matrix for a.e.\ $\lambda\in{\mathbb{R}}$, in particular, if $B$ is reflectionless and $\text{\rm{spec}}(D)={\mathbb{R}}$, then \begin{equation} B_{1,1}(x)=B_{2,2}(x), \quad B_{1,2}(x)=-B_{2,1}(x) \, \text{ for a.e. $x\in{\mathbb{R}}$}. \label{5.22} \end{equation} In particular, if $D$ is assumed to be in its normal form \eqref{5.4}, that is, of the type $\widetilde D = J\frac{d}{dx} - \widetilde B$, then \begin{equation} \widetilde B (x)=0 \text{ for a.e. $x\in{\mathbb{R}}$}. \label{5.23} \end{equation} \end{theorem}
\begin{proof} The fact that $\int_{\mathbb{R}} d\lambda\, (\lambda-z)^{-2}=0$ for all $z\in{\mathbb{C}}\backslash{\mathbb{R}}$, that a.e.~$x\in{\mathbb{R}}$ is a Lebesgue point of $B$, and the trace formula \eqref{5.1}, imply \eqref{5.22}. Together with Lemma \ref{l5.2} this yields \eqref{5.23}. \end{proof}
The analog of Theorem~\ref{t5.6} for matrix-valued Schr\"odinger operators was recently proved in \cite{CGHL00}.
In the remainder of the section we will show that the case of periodic $B$ is covered by Theorem~\ref{t5.6} under appropriate uniform multiplicity assumptions on $\text{\rm{spec}}(D)$. In order to handle Floquet theoretic aspects of periodic Dirac-type operators $D$, we adopt the following assumptions until the end of this section.
\begin{hypothesis} \label{h5.7} In addition to Hypothesis~\ref{h2.1} assume $A=I_{2m}$ and suppose that $B$ is periodic, that is, there is an $\omega>0$ such that $B(x+\omega)=B(x)$ for a.e.~$x\in{\mathbb{R}}$. \end{hypothesis}
The following result has been proven in \cite[Theorem~4.6]{CGHL00}.
\begin{theorem} [\cite{CGHL00}, Theorem~4.6] \label{t5.8} Assume Hypothesis~\ref{h5.7} and let $\alpha_0=(I_m\; 0)\in{\mathbb{C}}^{m\times 2m}$. If $D$ has uniform spectral multiplicity $2m$, then for all $x\in{\mathbb{R}}$ and all $\lambda\in\text{\rm{spec}}(D)^o$, \begin{equation} M_+(\lambda+i0,x,\alpha_0)=M_-(\lambda+i0,x,\alpha_0)^* =M_-(\lambda-i0,x,\alpha_0). \label{5.24} \end{equation} In particular, $M_-(z,x,\alpha_0)$ is the analytic continuation of $M_+(z,x,\alpha_0)$ {\rm (}and vice versa{\rm )} through $\text{\rm{spec}}(D)^o$. \end{theorem}
\noindent Here $A^o$ denotes the open interior of a set $A\subseteq{\mathbb{R}}$.
Strictly speaking, Theorem~4.6 in \cite{CGHL00} was proved for matrix-valued Schr\"odinger operators. But the proof extends line by line to the corresponding Dirac-type situation and was predominantly formulated in terms of Hamiltonian systems notation (rather than Schr\"odinger operator specifics) in order to be applicable to the present context. In particular, the spectrum, $\text{\rm{spec}}(H)$, of the Schr\"odinger operator $H$ should be replaced by that of $D$, the point spectrum, $\text{\rm{spec}}_p(H^D_{x_0})$, of the Dirichlet Schr\"odinger operator
$H^D_{x_0}$ with a Dirichlet boundary condition at the point $x_0$ should simply be replaced by the set $\{\lambda\in{\mathbb{R}}\,|\, \det(\phi_1(\lambda,x_0+\omega,x_0,\alpha_0))=0\}$, etc.
\begin{theorem} \label{t5.9} Suppose Hypothesis~\ref{h5.7} and let $\alpha_0=(I_m\; 0) \in{\mathbb{C}}^{m\times 2m}$. If $D$ has uniform spectral multiplicity $2m$, then $D$ is reflectionless and for all $x\in{\mathbb{R}}$ and all $\lambda\in\text{\rm{spec}}(D)^o$, \begin{equation} \Upsilon(\lambda,x,\alpha_0)= (1/2) I_{2m}. \label{5.30} \end{equation} \end{theorem}
\begin{proof} This is clear from \eqref{2.620} and \eqref{5.24}, which imply \begin{equation} M(\lambda+i0,x,\alpha_0)=-M(\lambda+i0,x,\alpha_0)^*. \label{5.27} \end{equation} \end{proof}
Theorems~\ref{t5.8} and \ref{t5.9} extend to more general situations (not necessarily periodic ones) as is clear from the corresponding results in \cite{CJ87}, \cite{GJ84}, \cite{GKT96}, \cite{Ko84}, \cite{Ko87a}, \cite{KK88}, \cite{SY95} in the scalar case $m=1$ (replacing the phrase ``for all $\lambda\in\text{\rm{spec}}(D)^o$'' by ``for~a.e.~$\lambda\in\text{\rm{spec}}(D)^o$'', etc.). For the corresponding matrix-valued Schr\"odinger operator case we refer to \cite{KS88}.
\begin{corollary} \label{c5.10} Assume Hypothesis~\ref{h5.7}. If $D$ has uniform spectral multiplicity $2m$ and $\text{\rm{spec}}(D)={\mathbb{R}}$, then \begin{equation} B_{1,1}(x)=B_{2,2}(x), \quad B_{1,2}(x)=-B_{2,1}(x) \, \text{ for a.e. $x\in{\mathbb{R}}$}. \label{5.31} \end{equation} In particular, if $D$ is assumed to be in its normal form $\widetilde D = J\frac{d}{dx} - \widetilde B$, with $\widetilde B$ given by \eqref{5.4}, then \begin{equation} \widetilde B (x)=0 \, \text{ for a.e. $x\in{\mathbb{R}}$}. \label{5.32} \end{equation} \end{corollary}
\begin{remark} \label{r5.11} The assumption of uniform (maximal) spectral multiplicity $2m$ in Corollary~\ref{c5.10} is an essential one. Otherwise, one can easily construct nonconstant potentials $B$ such that the associated operator $D$ has overlapping band spectra and hence spectrum the whole real line. Also self-adjointness of $B$ is crucial for Corollary~\ref{c5.10} to hold (cf.~the corresponding discussion in Remark~4.2 of \cite{CGHL00} in the context of Schr\"odinger operators). \end{remark}
The analog of Corollary~\ref{c5.10} for periodic matrix-valued Schr\"odinger operators was first proved by Despr\'es \cite{De95} and recently rederived using such a trace formula approach in \cite{CGHL00}.
We note that all results presented in this paper also apply to matrix-valued finite-difference Hamiltonian systems. We refer the reader to \cite{CGR01} in this direction.
Finally, Borg-type uniqueness theorems for Hamiltonian systems are just a beginning. There is a natural extension of Borg's Theorem~\ref{t5.4} to self-adjoint periodic Schr\"{o}dinger, respectively, Dirac-type operators with one gap, respectively, two gaps in their spectrum. In the case of (scalar) Schr\"odinger operators, such an extension is due to Hochstadt \cite{Ho65} and the resulting potential $q$ becomes twice the elliptic Weierstrass function. In the case of Dirac-type operators (with $m=1$ and vanishing diagonal coefficients in $B$) such an extension involving elliptic functions can be found in \cite{Ge89}, \cite{Ge91}, \cite{GSS91} (see also \cite{GW98}). Extensions to matrix-valued versions (i.e., for $m\geq 2$) are currently under active investigation.
\vspace*{3mm} \noindent {\bf Acknowledgements.} We would like to thank Suzanne Collier, Helge Holden, Konstantin Makarov, Fedor Rofe-Beketov, Alexei Rybkin, Lev Sakhnovich, and Barry Simon for helpful discussions and many hints regarding the literature, and especially, Don Hinton, Boris Levitan, Mark Malamud, and Alexander Sakhnovich for repeated correspondence on various parts of the material in this paper. \\ S.~C. would like to thank the Mathematics Department of the University of Missouri-Columbia for the great hospitality extended to him during his 2000/2001 sabbatical when this work was completed.
\end{document}
\begin{document}
\title{$\Pi^0_1$-ordinal analysis beyond first-order arithmetic} \author{Joost J. Joosten}
\maketitle
\begin{abstract} In this paper we give an overview of an essential part of a $\Pi^0_1$ ordinal analysis of Peano Arithmetic (\ensuremath{{\mathrm{PA}}}\xspace) as presented by Beklemishev (\cite{Beklemishev:2004}). This analysis is mainly performed within the polymodal provability logic ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$.
We reflect on ways of extending this analysis beyond \ensuremath{{\mathrm{PA}}}\xspace. A main difficulty in this is to find proper generalizations of the so-called Reduction Property. The Reduction Property relates reflection principles to reflection rules.
In this paper we prove a result that simplifies the reflection rules. Moreover, we see that for an ordinal analysis the full Reduction Property is not needed. This latter observation is also seen to open up ways for applications of ordinal analysis relative to some strong base theory. \end{abstract}
\section{Introduction}
Primitive Recursive Arithmetic (\ensuremath{{\mathrm{PRA}}}\xspace) is a rather weak formal theory about the natural numbers which is held by various philosophers, logicians and mathematicians to be a good candidate for capturing the concept of finitism (see e.g., \cite{Tait81}). The concept of finitism tries to capture those mathematical truths and that part of mathematical reasoning which is true beyond doubt and which does not use strong assumptions on infinite mathematical entities.
Gentzen showed in his seminal paper from 1936 (\cite{Gentzen:1936}) that \ensuremath{{\mathrm{PRA}}}\xspace together with some clearly non-finitist notion of transfinite induction for easy formulas along a rather small ordinal could prove the consistency of Peano Arithmetic (\ensuremath{{\mathrm{PA}}}\xspace).
This result can be seen as a partial realization of Hilbert's programme, in which finitist theories are to prove the consistency of strong mathematical theories. Of course, since G\"odel's incompleteness results we know that this programme is not viable in full, but Gentzen's consistency proof seems to clearly isolate the non-finitist part needed for such a consistency proof.
Since Gentzen's consistency proof, the scientific community has tried to calibrate the proof-strength of various theories other than \ensuremath{{\mathrm{PA}}}\xspace. The amount of transfinite induction needed in these consistency proofs is referred to as the proof-theoretic ordinal of a theory. There are various ways of defining and computing these ordinals and for most natural theories these methods all yield the same ordinals.
Among these methods, the most recent one was introduced by Beklemishev \cite{Beklemishev:2004} and is based on modal provability logics. The corresponding ordinals are referred to as the $\Pi_1^0$ ordinals. In this paper we shall sketch the method for computing these $\Pi^0_1$ ordinals. So far, the method has only been applied successfully to theories such as \ensuremath{{\mathrm{PA}}}\xspace and its kin. In this paper we reflect on ways of extending this analysis beyond \ensuremath{{\mathrm{PA}}}\xspace.
A main difficulty in this is to find proper generalizations of the so-called Reduction Property. The Reduction Property relates reflection principles to reflection rules. In this paper we prove a result that simplifies the reflection rules. Moreover, we see that for an ordinal analysis the full Reduction Property is not needed. This latter observation is also seen to open up ways for applications of ordinal analysis relative to some strong base theory.
Before we can start looking into the ordinal analysis, we must first introduce some basic knowledge concerning arithmetic and provability logics.
\section{Prerequisites}
All results in this section are given without proofs. For further background the reader is referred to standard textbooks like \cite{Boo93} or \cite{HP}.
\subsection{Arithmetic} By the language of arithmetic we understand in this paper the language based on the symbols $\{ 0, S, +, \times, \exp, \leq, = \}$ where $\exp$ denotes the function $x\mapsto 2^x$.
Formulas of arithmetic are stratified in complexity classes as usual. Thus, $\Delta_0^0$ formulas are first-order formulas where all quantifiers refer to numbers and are bounded by some term $t$, as in $\forall \, x{\leq}t$, where of course $x$ does not occur in $t$.
We define $\Sigma^0_0 := \Pi^0_0 := \Delta_0^0$. If $\varphi(\vec x, \vec{y}) \in \Sigma^0_n$, then $\forall \vec x \ \varphi(\vec x, \vec{y}) \in \Pi^0_{n+1}$ and likewise, if $\varphi(\vec x, \vec{y}) \in \Pi^0_n$, then $\exists \vec x \ \varphi(\vec x, \vec{y}) \in \Sigma^0_{n+1}$.
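The counting of alternating quantifier blocks in this definition can be mirrored in a small, purely illustrative script (our own sketch; the encoding of a prenex quantifier prefix as a string over {'A', 'E'} is an assumption, not from the paper):

```python
def prenex_class(prefix):
    """Classify a prenex quantifier prefix in the arithmetical hierarchy.

    `prefix` is a string over {'A', 'E'} listing the quantifiers from
    the outside in ('A' = universal, 'E' = existential); the matrix is
    assumed to be Delta^0_0.  Returns a pair such as ('Sigma', 2).
    """
    # Collapse adjacent quantifiers of the same kind into one block,
    # since a block of like quantifiers counts only once.
    blocks = []
    for q in prefix:
        if not blocks or blocks[-1] != q:
            blocks.append(q)
    n = len(blocks)  # number of alternating blocks
    if n == 0:
        return ('Delta', 0)
    return ('Sigma', n) if blocks[0] == 'E' else ('Pi', n)
```

For instance, the prefix `'AEE'` (one universal block followed by one existential block) is classified as $\Pi^0_2$.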
Similarly, one defines the hierarchies $\Pi^n_m$, where now the number of $n$th-order quantifiers is counted; in this paper, however, we shall need at most second-order quantifiers.
By \ensuremath{{\rm{EA}}}\xspace we denote the arithmetic theory of \emph{Elementary Arithmetic}. This theory is formulated in the language of arithmetic. Apart from the defining axioms for the symbols in the language, \ensuremath{{\rm{EA}}}\xspace has an induction axiom $I_\varphi$ for each $\Delta_0^0$ formula $\varphi(\vec x, y)$ (that may contain $\exp$): \[ I_\varphi(\vec x) : \ \ \varphi(\vec x, 0) \wedge \forall\, y\ (\varphi(\vec x, y) \to \varphi(\vec x, y+1)) \ \to \ \forall y \varphi(\vec x, y). \]
By $\ensuremath{{\rm{EA}}}\xspace^+$ we denote $\ensuremath{{\rm{EA}}}\xspace$ plus the axiom stating that superexponentiation --the function that maps $x$ to the $x$-fold iteration of $\exp$-- is a total function.
By $\isig{n}$ we denote the theory that is like \ensuremath{{\rm{EA}}}\xspace except that it has induction axioms $I_\varphi$ for all formulas $\varphi (\vec x) \in \Sigma_n^0$. The theory \ensuremath{{\mathrm{PA}}}\xspace is the union of all the $\isig{n}$ in that it has induction axioms for all arithmetical formulas.
\subsection{Transfinite induction} Greek letters will often denote ordinals and as usual we denote by $\varepsilon_0$ the supremum of $\{\omega, \omega^\omega, \omega^{\omega^\omega}, \ldots \}$. Apart from considering induction along the natural numbers we shall consider induction along transfinite orderings too. If $\langle\Gamma,\prec\rangle$ is a natural arithmetical representation in \ensuremath{{\rm{EA}}}\xspace of some ordinal we denote by ${\sf TI}[X, \Gamma]$ the collection of transfinite induction axioms for all formulas in $X$: \[ \forall y \ \big( \forall \, y'{\prec} y\ \varphi(\vec x, y') \to \varphi(\vec x, y)\big) \ \to \ \forall y\ \varphi(\vec x, y) \ \ \ \ \ \mbox{ with $ \varphi(\vec x, y) \in X$.} \]
\subsection{Formalized metamathematics}
Throughout this paper we shall use representations in arithmetic of various metamathematical notions. In particular we fix some G\"odel numbering to represent formulas and other syntactical objects in arithmetic.
Moreover, we assume that we can represent r.e. theories in a suitable way so that we can speak of ``the formula $\varphi$ is provable in the theory $T$'', whose formalization we shall denote by $\Box_T\varphi$. Dually, we shall use the notion of ``the formula $\varphi$ is consistent with the theory $T$'', which is denoted by ${\sf Con}_T(\varphi)$ or $\Diamond_T \varphi$.
If we write $\Box_T\varphi(\dot x)$, we mean a formula whose free variable is $x$ and which, provably, is for every $x$ equivalent to $\Box_T\varphi(\overline x)$. Here $\overline x$ denotes the numeral of $x$, that is, \[ \overline{x} = \overbrace{S\ldots S}^{x \mbox{ times}}0. \]
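As a toy illustration of the numeral notation (our own sketch, not part of the paper), numerals can be generated as term strings:

```python
def numeral(x):
    """Return the numeral of x as a term string: S applied x times to 0."""
    return 'S' * x + '0'

# For instance, numeral(3) is the term 'SSS0'.
```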
\section{Provability logics}
The logics ${\ensuremath{\mathsf{GLP}}}\xspace_\Lambda$ provide provability logics for a series of provability predicates/modalities $[\alpha]$ of increasing strength.
\begin{definition} Let $\Lambda$ be an ordinal. By ${\ensuremath{\mathsf{GLP}}}\xspace_\Lambda$ we denote the poly-modal propositional logic that has for each $\alpha < \Lambda$ a modality $[\alpha]$ (which syntactically binds as strongly as the negation symbol). The axioms of ${\ensuremath{\mathsf{GLP}}}\xspace_\Lambda$ are all propositional logical tautologies in this signature together with instantiations of the following schemes: \[ \begin{array}{ll} [\alpha ] (A \to B ) \to ([\alpha ]A \to [\alpha] B)& \forall \alpha < \Lambda; \\ {[} \alpha ] ([\alpha ]A \to A ) \to [\alpha ]A & \forall \alpha < \Lambda; \\ {[}\alpha ] A \to [\beta]A & \forall \alpha \leq \beta < \Lambda; \\ \langle \alpha \rangle A \to [\beta] \langle \alpha \rangle A & \forall \alpha < \beta < \Lambda. \\ \end{array} \] As always, we have that $\langle \alpha\rangle A := \neg [\alpha ] \neg A$. The rules are Modus Ponens and a Necessitation rule $\frac{A}{[\alpha]A}$ for each modality $[\alpha]$ with $\alpha < \Lambda$. \end{definition}
By ${\ensuremath{\mathsf{GLP}}}\xspace$ we shall denote the class-size logic which is the ``union'' of ${\ensuremath{\mathsf{GLP}}}\xspace_\Lambda$ over all $\Lambda \in {\sf On}$. The closed fragment of ${\ensuremath{\mathsf{GLP}}}\xspace_\Lambda$ is the set of its theorems that do not contain any propositional variables and is denoted by ${\ensuremath{\mathsf{GLP}}}\xspace_\Lambda^0$. It turns out that ${\ensuremath{\mathsf{GLP}}}\xspace_\Lambda^0$ is already a very rich structure that is strong enough to perform major parts of our ordinal analysis. Some privileged inhabitants of ${\ensuremath{\mathsf{GLP}}}\xspace_\Lambda^0$ are the so-called \emph{worms}. They are just iterated consistency statements in ${\ensuremath{\mathsf{GLP}}}\xspace_\Lambda^0$.
\begin{definition}[$S^\Lambda$] $\top \in S^\Lambda$, and if both $A \in S^\Lambda$ and $\beta < \Lambda$, then $\langle \beta \rangle A \in S^\Lambda$. \end{definition} We can define an order $<_0$ on $S^\Lambda$ by $A <_0 B \ :\Leftrightarrow \ {\ensuremath{\mathsf{GLP}}}\xspace_\Lambda^0 \vdash B\to \langle 0 \rangle A$. It is known (\cite{Beklemishev:2005, BeklemshevFernandezJoosten2011}) that this ordering makes $S^\Lambda$ into a well-order.
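Concretely, a worm $\langle \alpha_1\rangle\langle\alpha_2\rangle\cdots\langle\alpha_k\rangle\top$ is determined by its list of modality indices. The following sketch (our own illustration, not from the paper; the ASCII rendering of $\langle\alpha\rangle$ and $\top$ is an ad-hoc assumption) encodes worms over $S^\omega$ this way:

```python
def worm_str(w):
    """Render a worm, given as the list of its modality indices read
    from the outside in, as a formula string.  The empty list stands
    for the formula T (verum)."""
    return ''.join('<%d>' % a for a in w) + 'T'

# The worm <2><0><1>T is represented by the list [2, 0, 1].
```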
\subsection{The Reduction Property}
Japaridze (\cite{Japaridze:1988}) has shown ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ to be arithmetically sound and complete if we interpret $[n]$ as ``provable by $n$ applications of the $\omega$-rule''. Ignatiev then showed in \cite{Ignatiev:1993} that this completeness result actually holds for a wide range of arithmetical readings of $[n]$. In particular, we still have completeness when reading $[n]$ as a natural formalization of ``provable in \ensuremath{{\rm{EA}}}\xspace together with all true $\Pi^0_n$ sentences''.
For the remainder of the section, let $[n]$ refer to this latter reading. The advantage of this reading is that certain worms can be easily linked to reflection principles and fragments of arithmetic:
\begin{lemma}\label{theorem:ConsistencyEquivalentToReflectionEquivalentToInduction} $\ensuremath{{\rm{EA}}}\xspace + \langle n+2 \rangle \top \ \equiv \ \ensuremath{{\rm{EA}}}\xspace + {\sf RFN}_{\Sigma_{n+2}}(\ensuremath{{\rm{EA}}}\xspace) \ \equiv \ \isig{n+1}$. \end{lemma}
\begin{proof} We shall refrain from distinguishing a modal formula from its arithmetical interpretation if the context allows us to. Thus, in this statement, $\langle n+2 \rangle \top$ clearly refers to the formalized statement that $\ensuremath{{\rm{EA}}}\xspace$ together with all true $\Pi_{n+2}^0$-formulas is consistent.
By ${\sf RFN}_{\Sigma_{n+1}}(\ensuremath{{\rm{EA}}}\xspace)$ we denote the set of axioms $\{ [0]_\ensuremath{{\rm{EA}}}\xspace \sigma(\dot x) \to \sigma(x) \mid \sigma \in \Sigma_{n+1} \}$.
The $\ensuremath{{\rm{EA}}}\xspace + \langle n+2 \rangle \top \equiv \ensuremath{{\rm{EA}}}\xspace + {\sf RFN}_{\Sigma_{n+2}}(\ensuremath{{\rm{EA}}}\xspace)$ equivalence is actually rather easy and can be found in \cite{BeklemishevSurvey:2005}. The remaining equivalence $\ensuremath{{\rm{EA}}}\xspace + {\sf RFN}_{\Sigma_{n+2}}(\ensuremath{{\rm{EA}}}\xspace) \equiv \isig{n+1}$ is a classical result by Leivant \cite{Leivant:1983}. \end{proof}
We can write ${\sf RFN}_{\Sigma_n}(\ensuremath{{\rm{EA}}}\xspace)$ also as $\pi(x) \to \Diamond_\ensuremath{{\rm{EA}}}\xspace \pi (\dot x)$ for $\pi(x) \in \Pi_n^0$. This in turn can be studied as a rule rather than an implication: $\frac{\pi(x)}{\Diamond_\ensuremath{{\rm{EA}}}\xspace \pi (\dot x)}$. In this rule we can vary both the complexity class to which $\pi(x)$ belongs and the notion of provability used (here just $\Diamond_\ensuremath{{\rm{EA}}}\xspace$, which is $\langle 0 \rangle_\ensuremath{{\rm{EA}}}\xspace$), giving rise to a whole range of different rules. In \cite{Beklemishev:2004, BeklemishevSurvey:2005} these rules are introduced and studied.
\begin{definition} The Reflection rule $\Pi_m^0{-}{\sf RR}^n(\mathcal{U})$ is defined as $\frac{\pi(x)}{\langle n \rangle_{\mathcal{U}}\pi(\dot x)}$ where $\pi \in \Pi_m^0$. \end{definition}
The following theorem is called the Reduction Property. A proof of it can be found in either one of \cite{Beklemishev:2004, BeklemishevSurvey:2005}.
\begin{theorem}\label{theorem:ReductionProperty} The theory $\ensuremath{{\rm{EA}}}\xspace + {\sf RFN}_{\Sigma_{n+1}}(\ensuremath{{\rm{EA}}}\xspace)$ is $\Pi_{n+1}^0$-conservative over $\ensuremath{{\rm{EA}}}\xspace + \Pi^0_{n+1}{-}{\sf RR}^n(\ensuremath{{\rm{EA}}}\xspace)$. \end{theorem}
The Reduction Property can be stated and proved under more general conditions, but for the current purpose this presentation suffices. At first glance it might seem a mere technicality, but it implies various classical results like Parsons' result that $\isig{1}$ is $\Pi^0_2$-conservative over \ensuremath{{\mathrm{PRA}}}\xspace. Moreover, as we shall see, it is one of the main ingredients in our ordinal analysis.
\subsection{Simplifying the Reflection Rule}
In this subsection we shall see that we can simplify the family of reflection rules considerably. We prefer to work in a general setting here. Thus, let $[n]_{\mathcal{U}}$ be any series of provability predicates over a theory $\mathcal{U}$ that is sound for {\ensuremath{\mathsf{GLP}}}\xspace. Moreover, we assume that for each $n\in \omega$ the formalized deduction theorem holds: \[ \mathcal{U} \vdash [n]_{\mathcal{U} + \varphi} \psi \ \ \Longleftrightarrow \ \ \mathcal{U} \vdash [n]_{\mathcal{U} }( \varphi\to \psi). \]
The Reflection Rule as studied in the {\ensuremath{\mathsf{GLP}}}\xspace project currently has two parameters $n$ and $m$:
\[ \Pi_m^0{\sf - RR}^n(\mathcal{U}+\varphi) \ := \ \frac{\psi}{\langle n\rangle(\varphi \wedge \psi)} \ \ \ \mbox{\ for $\psi \in \Pi_m$.} \] In view of the easy lemmas below we shall see that we can drop the parameter $m$: over $\mathcal{U}$, for $m>n$ all the versions turn out to be equivalent, and for $m\leq n$ the rule is just equivalent to the axiom $\langle n \rangle \varphi$. Thus, we propose to simply speak of ${\sf RR}^n(\mathcal{U}+\varphi)$: \[ {\sf RR}^n(\mathcal{U}+\varphi) \ \ := \ \ \frac{\psi}{\langle n \rangle (\psi \wedge \varphi)} \] without any restriction on the complexity of $\psi$. In the remainder of this subsection, we shall assume that $\langle n \rangle \varphi$ is of complexity $\Pi_{n+1}^0$. If this is not the case, the arguments go through unchanged after replacing each occurrence of $\Pi^0_{n+1}$ by $\widetilde {\Pi^0_{n+1}}$, where $\widetilde {\Pi^0_{n+1}}$ represents some natural complexity class to which $\langle n \rangle \varphi$ belongs.
\begin{definition}\label{definition:FundamentalSequence} Let $Q^0_n(\varphi) = \langle n \rangle_{\mathcal{U}} \varphi$ and $Q^{k+1}_n(\varphi) = \langle n \rangle_{\mathcal{U}} (\varphi \wedge Q^k_n(\varphi))$. \end{definition}
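The recursion in Definition~\ref{definition:FundamentalSequence} unfolds into nested consistency statements. The following sketch (our own illustration, not from the paper; the ASCII rendering of $\langle n\rangle$ and $\wedge$ is an ad-hoc assumption) builds $Q^k_n(\varphi)$ as a formula string:

```python
def Q(k, n, phi):
    """Build Q^k_n(phi) as a string, following the recursion
    Q^0_n(phi) = <n>phi  and  Q^{k+1}_n(phi) = <n>(phi & Q^k_n(phi))."""
    f = '<%d>%s' % (n, phi)      # Q^0_n(phi)
    for _ in range(k):           # apply the recursion step k times
        f = '<%d>(%s & %s)' % (n, phi, f)
    return f

# Q(1, 2, 'T') unfolds to '<2>(T & <2>T)'.
```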
\begin{lemma}\label{theorem:ReflectionRuleSimplification} Let $l,m,n \in \omega$ with $l>n$ and $m>n$. We have that \[ \mathcal{U} + \Pi_l-{\sf RR}^n(\mathcal{U} + \varphi) \ \equiv \ \mathcal{U} + \Pi_m-{\sf RR}^n(\mathcal{U} + \varphi) \ \equiv \ \mathcal{U} + \{ Q^k_n (\varphi) \mid k\in \omega \}. \] \end{lemma}
\begin{proof} As the complexity of $Q_n^k(\varphi)$ is $\Pi_{n+1}$ for any $k$ and $\varphi$, it is easy to see by an induction on $k$ that for any $k,m,n\in \omega$ we have \[ \Pi_m-{\sf RR}^n(\mathcal{U} + \varphi) \ \vdash Q^k_n (\varphi) \] so that $\mathcal{U} + \Pi_m-{\sf RR}^n(\mathcal{U} + \varphi) \ \supseteq \ \mathcal{U} + \{ Q^k_n (\varphi) \mid k\in \omega \}$.
For the reverse inclusion we do induction on the number of applications of the rule $\Pi_m-{\sf RR}^n(\mathcal{U} + \varphi)$. So, suppose that for some $\chi \in \Pi_m$ we have that $\mathcal{U} + \Pi_m-{\sf RR}^n(\mathcal{U} + \varphi) \vdash \chi$. By the IH we have $\mathcal{U} \vdash Q^k_n(\varphi) \to \chi$ for some natural number $k$. But then by necessitation we have $\mathcal{U} \vdash [n]( Q^k_n(\varphi) \to \chi)$ whence $\mathcal{U} + Q^{k+1}_n(\varphi) \vdash \langle n \rangle_{\mathcal{U}}( \varphi \wedge \chi)$ as was to be shown.
\end{proof}
\begin{lemma} Let $m,n \in \omega$ with $n\geq m$. We have that \[ \mathcal{U} + \Pi_m-{\sf RR}^n(\mathcal{U} + \varphi) \ \equiv \ \mathcal{U} + \langle n \rangle \varphi. \] \end{lemma}
\begin{proof} Clearly, by one application of the $\Pi_m-{\sf RR}^n(\mathcal{U} + \varphi)$ rule we obtain $\frac{\top}{\langle n\rangle \varphi}$. Thus \[ \mathcal{U} + \langle n \rangle \varphi \subseteq \mathcal{U} + \Pi_m-{\sf RR}^n(\mathcal{U} + \varphi). \] To prove the converse implication we show that $\mathcal{U} + \langle n \rangle \varphi$ is closed under the rule. Thus, reason in $\mathcal{U} + \langle n \rangle \varphi$ and suppose we have proved $\psi$ with $\psi \in \Pi_m$. As $\psi \in \Sigma_{n+1}$ we have that $\psi \to [n] \psi$. We combine this with $\langle n\rangle \varphi$ to obtain the required $\langle n\rangle (\psi \wedge \varphi)$. \end{proof}
We note that a similar argument applies to ${\ensuremath{\mathsf{GLP}}}\xspace_\Lambda$ once we have fixed suitable formulas $Q^k_\alpha(\varphi)$ there and have specified complexity classes for formulas of the form $\langle \alpha \rangle \psi$.
\subsection{The Reduction Property revisited}
In more generality, we can define for {\ensuremath{\mathsf{GLP}}}\xspace formulas --not just worms-- an ordering over {\ensuremath{\mathsf{GLP}}}\xspace: \[ \varphi <_\alpha \psi \ :\Leftrightarrow \ {\ensuremath{\mathsf{GLP}}}\xspace \vdash \psi \to \langle \alpha \rangle \varphi. \] With respect to these orderings, consistency statements behave very well and admit some sort of fundamental sequence. For any formula $\varphi$ we define $Q^k_\alpha (\varphi)$ for $k\in \omega$ by $Q^0_\alpha (\varphi) := \langle \alpha \rangle \varphi$ and $Q^{k+1}_\alpha (\varphi) := \langle \alpha \rangle ( \varphi \wedge Q^k_\alpha (\varphi))$. With these formulas at hand we can state part of the fundamental sequence result to the effect that the family $\{ Q^k_n(\varphi)\}_{k\in \omega}$ constitutes a fundamental sequence for $\langle n+1 \rangle \varphi$.
\begin{lemma}\label{theorem:ReductionPropertyInclusion} For each $k \in \omega$ we have that ${\ensuremath{\mathsf{GLP}}}\xspace \vdash \langle \alpha +1 \rangle \varphi \to Q^k_\alpha(\varphi)$ whence also ${\ensuremath{\mathsf{GLP}}}\xspace \vdash \langle \alpha +1 \rangle \varphi \to \langle \alpha \rangle Q^k_\alpha(\varphi)$. \end{lemma} A proof of this lemma is not hard and can be found, e.g., in \cite{BeklemishevSurvey:2005}. The other half of the fundamental sequence result is in virtue of the above just recasting the Reduction Property in terms of {\ensuremath{\mathsf{GLP}}}\xspace.
\begin{theorem}\label{theorem:ReductionPropertyInGLP} $\ensuremath{{\rm{EA}}}\xspace + \langle n+1 \rangle \varphi$ is $\Pi_{n+1}$-conservative over $\ensuremath{{\rm{EA}}}\xspace + \{ Q^k_n(\varphi) \mid k\in \omega\}$. \end{theorem}
\begin{proof} By Lemma \ref{theorem:ReductionPropertyInclusion} we see that $\ensuremath{{\rm{EA}}}\xspace + \{ Q^k_n(\varphi) \mid k\in \omega\} \ \subseteq \ \ensuremath{{\rm{EA}}}\xspace + \langle n+1 \rangle \varphi$. The $\Pi_{n+1}$-conservativity follows directly from the Reduction Property --Theorem \ref{theorem:ReductionProperty}-- and Lemma \ref{theorem:ReflectionRuleSimplification} above. \end{proof}
The main ingredient of the proof of the Reduction Property is a cut-elimination argument. Thus, as was noted in previous papers, the theorem above --Theorem \ref{theorem:ReductionPropertyInGLP}-- is formalizable as soon as the superexponential function is provably total and in particular in $\ensuremath{{\rm{EA}}}\xspace^+$. From this fact we get a powerful result concerning provable equi-consistency (see e.g. \cite{BeklemishevSurvey:2005}):
\begin{theorem}\label{theorem:equiconsistencyReductionProperty} For $m\leq n$ we have that $\ensuremath{{\rm{EA}}}\xspace^+ \vdash \langle m\rangle \langle n+1 \rangle \varphi \ \leftrightarrow \ \forall k\ \langle m\rangle Q^k_n (\varphi)$. \end{theorem}
\begin{proof} We reason in $\ensuremath{{\rm{EA}}}\xspace^+$ and prove the equivalence by contraposition. Lemma \ref{theorem:ReductionPropertyInclusion} is actually already provable in \ensuremath{{\rm{EA}}}\xspace so that we see \[ \exists k\ [m]_{Q^k_n(\varphi)}\bot \to [m]_{\langle n+1 \rangle \varphi} \bot . \] For the other direction we invoke the Reduction Property as stated in Theorem \ref{theorem:ReductionPropertyInGLP}.
So, still reasoning in $\ensuremath{{\rm{EA}}}\xspace^+$, we suppose that $[m]_{\langle n+1 \rangle \varphi} \bot$. Let $\pi$ be the conjunction of $\Pi_m^0$ sentences that are used in the $\ensuremath{{\rm{EA}}}\xspace + \langle n+1\rangle \varphi$ proof of $\bot$. Thus, we get that $[0]_{\langle n+1\rangle \varphi} \neg \pi$. As $\neg \pi \in \Pi_{n+1}^0$ we get by the formalized reduction property that $[0]_{Q^k_n(\varphi)}\neg \pi$ for some (possibly non-standard) number $k$. The latter implies $[m]_{Q^k_n(\varphi)}\bot$ and we are done. \end{proof}
\section{A $\Pi_1^0$-ordinal analysis for \ensuremath{{\mathrm{PA}}}\xspace}
The following theorem with proof can be found in full detail in \cite{BeklemishevSurvey:2005}. We present here the main part of the proof but refer for certain claims made here to \cite{BeklemishevSurvey:2005}.
\begin{theorem}\label{theorem:PaIsConsistent} $\ensuremath{{\rm{EA}}}\xspace^+ + {\sf TI}[\Pi^0_1, \varepsilon_0] \vdash {\sf Con}(\ensuremath{{\mathrm{PA}}}\xspace)$ \end{theorem}
\begin{proof} It is well-known that the equivalence between reflection, induction and consistency as stated in Lemma \ref{theorem:ConsistencyEquivalentToReflectionEquivalentToInduction} can actually be formalized in $\ensuremath{{\rm{EA}}}\xspace^+$. Thus, we reason in $\ensuremath{{\rm{EA}}}\xspace^+$ and observe that we have $\ensuremath{{\mathrm{PA}}}\xspace \subseteq \ensuremath{{\rm{EA}}}\xspace + \{ \langle 1 \rangle \top, \langle 2\rangle \top, \langle 3 \rangle \top, \langle 4 \rangle \top, \ldots \}$. Consequently, ${\sf Con}(\ensuremath{{\rm{EA}}}\xspace + \{ \langle 1 \rangle \top, \langle 2\rangle \top, \langle 3 \rangle \top, \langle 4 \rangle \top, \ldots \} )\to {\sf Con} (\ensuremath{{\mathrm{PA}}}\xspace)$ and we shall complete our proof by showing ${\sf Con}(\ensuremath{{\rm{EA}}}\xspace + \{ \langle 1 \rangle \top, \langle 2\rangle \top, \langle 3 \rangle \top, \langle 4 \rangle \top, \ldots \})$. For this, it suffices to show \begin{equation}\label{equation:ConsistencyOfConsistencies} \forall n\ \langle 0 \rangle \langle n \rangle \top. \end{equation}
We shall prove this by transfinite induction. It is known that $\langle S^\omega,<_0 \rangle$ is provably in \ensuremath{{\rm{EA}}}\xspace isomorphic to $\langle \varepsilon_0 , <\rangle$. Thus it suffices to perform a transfinite induction over the structure $\langle S^\omega, <_0 \rangle$. Clearly \begin{equation}\label{equation:ConsistencyOfWorms} \forall \, A{\in}S^\omega\ \langle 0\rangle A \end{equation} implies \eqref{equation:ConsistencyOfConsistencies}, so we shall prove \eqref{equation:ConsistencyOfWorms} by transfinite induction over $\langle S^\omega, <_0 \rangle$. We set out to prove $\forall \, A{\in } S^\omega\ (\forall \, A'{<_0}A \ \langle 0 \rangle A' \to \langle 0 \rangle A )$ from which \eqref{equation:ConsistencyOfWorms} follows, and distinguish three cases: \begin{enumerate} \item $A = \top$ in which case we have $\langle 0 \rangle \top$ as $\ensuremath{{\rm{EA}}}\xspace^+$ proves the consistency of \ensuremath{{\rm{EA}}}\xspace.
\item $A$ is of the form $\langle 0 \rangle B$ for some worm $B$.
It is well-known that $\ensuremath{{\rm{EA}}}\xspace^+ \vdash {\sf RFN}_{\Sigma_1^0}(\ensuremath{{\rm{EA}}}\xspace)$. So in particular, as $[0] B$ is a $\Sigma_1^0$-sentence, we get $[0] [0] B \to [0] B$. Thus also $\langle 0 \rangle B \to \langle 0 \rangle \langle 0\rangle B$. However, as $B<_0 A$ we have by the induction hypothesis that $\langle 0 \rangle B$ and we are done.
\item $A$ is of the form $\langle n+1 \rangle B$ for some worm $B$ and natural number $n$.
So, we need to prove $\langle 0\rangle \langle n+1 \rangle B$. By Theorem \ref{theorem:equiconsistencyReductionProperty} we get that \[ \langle 0\rangle \langle n+1 \rangle B \leftrightarrow \forall k \ \langle 0 \rangle Q^k_n(B). \] However, as for each $k\in \omega$ we have by Lemma \ref{theorem:ReductionPropertyInclusion} that $Q^k_n(B) <_0 \langle n+1\rangle B$ we are done by the induction hypothesis. \end{enumerate} \end{proof}
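To see the descent in the last case in action, take the smallest non-trivial instance $A = \langle 1 \rangle \top$ (so $B = \top$ and $n=0$); by Theorem \ref{theorem:equiconsistencyReductionProperty}:

```latex
\[
\langle 0 \rangle \langle 1 \rangle \top
  \ \leftrightarrow\ \forall k\ \langle 0 \rangle Q^k_0(\top),
\qquad
Q^0_0(\top) = \langle 0 \rangle \top,\quad
Q^1_0(\top) = \langle 0 \rangle (\top \wedge \langle 0 \rangle \top)
  \equiv \langle 0 \rangle \langle 0 \rangle \top,\ \ \ldots
\]
```

Each $Q^k_0(\top)$ is (equivalent to) a worm $<_0$-below $\langle 1 \rangle \top$, so the induction hypothesis applies to it.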
On the basis of Theorem \ref{theorem:PaIsConsistent} one could decide to call $\varepsilon_0$ the proof-theoretical ordinal of \ensuremath{{\mathrm{PA}}}\xspace. Like many other ordinal analyses, the current analysis is sensitive to the choice of ordinal notation system: by plugging in pathological notation systems one can obtain substantially weaker or stronger proof-theoretical ordinals for \ensuremath{{\mathrm{PA}}}\xspace. However, we feel confident in judging for ourselves which notation systems are natural enough to use and which are not.
We shall now briefly explain why this particular ordinal is called the $\Pi_1^0$ ordinal of \ensuremath{{\mathrm{PA}}}\xspace. If we define Turing progressions $\ensuremath{{\rm{EA}}}\xspace^\alpha$ of \ensuremath{{\rm{EA}}}\xspace by transfinite recursion in the standard way as $\ensuremath{{\rm{EA}}}\xspace^0 := \ensuremath{{\rm{EA}}}\xspace$, and $\ensuremath{{\rm{EA}}}\xspace^\alpha := \bigcup_{\beta<\alpha}(\ensuremath{{\rm{EA}}}\xspace^\beta + {\sf Con}(\ensuremath{{\rm{EA}}}\xspace^\beta))$, we can define a $\Pi^0_1$ proof-theoretical ordinal based on these $\ensuremath{{\rm{EA}}}\xspace^\alpha$. For a target theory $T$ we define $|T|_{\Pi^0_1}$ --the $\Pi^0_1$ proof-theoretical ordinal of $T$-- to be the smallest $\alpha$ for which $\ensuremath{{\rm{EA}}}\xspace^\alpha$ comprises all the $\Pi^0_1$ consequences of $T$.
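Unfolding the recursion, the first stages of the Turing progression read as follows (a routine unfolding of the definition):

```latex
\[
\ensuremath{{\rm{EA}}}\xspace^1 = \ensuremath{{\rm{EA}}}\xspace + {\sf Con}(\ensuremath{{\rm{EA}}}\xspace),
\qquad
\ensuremath{{\rm{EA}}}\xspace^2 = \ensuremath{{\rm{EA}}}\xspace^1 + {\sf Con}(\ensuremath{{\rm{EA}}}\xspace^1),
\qquad
\ensuremath{{\rm{EA}}}\xspace^\omega = \bigcup_{k<\omega} \ensuremath{{\rm{EA}}}\xspace^k ,
\]
```

so at successor stages one adds the consistency of the preceding stage, and at limit stages one takes unions.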
For natural theories $T$ and natural ordinal notation systems, this ordinal will coincide with the ordinal obtained by the analysis presented in Theorem \ref{theorem:PaIsConsistent}. Moreover, for $T=\ensuremath{{\mathrm{PA}}}\xspace$ and various sub-systems $T$ of \ensuremath{{\mathrm{PA}}}\xspace, it is known that $|T|_{\Pi^0_1}$ coincides with the ordinals produced by the other known ordinal analyses, such as $|T|_{\Pi^1_1}$ and $|T|_{\Pi^0_2}$.
We mention these other proof-theoretical ordinals here without further detail and just to provide some context. In this same spirit it is worth mentioning that $|T|_{\Pi^0_1}$ is more fine-grained than any of the others. For example, $|\ensuremath{{\mathrm{PA}}}\xspace + {\sf Con}(\ensuremath{{\mathrm{PA}}}\xspace)|_{\Pi^1_1} = |\ensuremath{{\mathrm{PA}}}\xspace + {\sf Con}(\ensuremath{{\mathrm{PA}}}\xspace)|_{\Pi^0_2} = |\ensuremath{{\mathrm{PA}}}\xspace|_{\Pi^0_2}= \varepsilon_0$ whereas $|\ensuremath{{\mathrm{PA}}}\xspace + {\sf Con}(\ensuremath{{\mathrm{PA}}}\xspace)|_{\Pi_1^0} = \varepsilon_0 \cdot 2$.
\section{Ingredients for going beyond \ensuremath{{\mathrm{PA}}}\xspace}
The $\Pi^0_1$ paradigm is attractive in that it provides a more fine-grained analysis than all other ordinal analyses around. In a sense, it provides the finest analysis possible, as different true theories will at least differ on $\Pi^0_1$ sentences. A critique of the paradigm is that the analysis has so far only been performed for rather weak mathematical theories: \ensuremath{{\mathrm{PA}}}\xspace and its kin.
If we wish to address theories stronger than \ensuremath{{\mathrm{PA}}}\xspace, there are two paths one can take. In the next subsection we discuss the path where the base theory is strengthened. In the subsection after that we discuss the approach where we strengthen ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ to ${\ensuremath{\mathsf{GLP}}}\xspace_\Lambda$ with $\Lambda > \omega$.
\subsection{Relative $\Pi^0_1$ ordinal analysis} We can choose to stay within ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ and strengthen our base theory \theory{X}. So, if we wish to analyze some target theory \theory{U} with the $\Pi_1^0$ paradigm relative to \theory{X}, the question translates to how often one should iterate the Turing progression based on \theory{X} to comprise all the $\Pi^0_1$ consequences of \theory{U}. In the next section we shall analyze this in further detail.
\subsection{Beyond ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$} Another way to extend the applicability of the paradigm is to use modal provability logics that go beyond ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$. Currently, most efforts to take the paradigm further are along these lines. There are two main aspects involved here. The first is to extend the modal theory of {\ensuremath{\mathsf{GLP}}}\xspace beyond ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ and the other is to find suitable (hyper)arithmetical interpretations of the modalities $[\alpha]$ involved.
\subsubsection{The modal theory} By now, the modal theory of ${\ensuremath{\mathsf{GLP}}}\xspace_\Lambda$ is rather well studied and understood. A first and seminal step in this direction was taken by Beklemishev in \cite{Beklemishev:2005}. In particular, the paper focussed on the closed fragment ${\ensuremath{\mathsf{GLP}}}\xspace^0$ of {\ensuremath{\mathsf{GLP}}}\xspace and studied the worms therein. It was shown that the orderings $<_0$ are well behaved also in the class-size ${\ensuremath{\mathsf{GLP}}}\xspace^0$ and define a well-order provided that $<_0$ is irreflexive.
The irreflexivity of $<_0$ has been shown both in \cite{BeklemshevFernandezJoosten2011} and \cite{FernandezJoostenModels2012}. In particular \cite{FernandezJoostenModels2012} provides a class-size universal model for ${\ensuremath{\mathsf{GLP}}}\xspace^0$. The ordering $<_0$ and natural and important generalizations are now well studied and understood as presented in \cite{Beklemishev:2005, FernandezJoostenWellOrders2012, FernandezJoostenWellFoundedPartialOrders2012}.
Although various important and interesting questions remain open in the modal theory of the logics ${\ensuremath{\mathsf{GLP}}}\xspace_\Lambda$, it seems that all the modal theory needed to move the $\Pi^0_1$ ordinal analysis beyond \ensuremath{{\mathrm{PA}}}\xspace is in place.
\subsubsection{Hyperarithmetic interpretations and the Reduction Property}
Currently the aim of the {\ensuremath{\mathsf{GLP}}}\xspace project is to provide an ordinal analysis of predicative analysis, whose classical proof-theoretical ordinal is the Feferman-Sch\"utte ordinal $\Gamma_0$. Various natural candidate provability notions have been shown to be sound and complete for ${\ensuremath{\mathsf{GLP}}}\xspace_{\Gamma_0}$. However, so far, for none of these interpretations has a natural generalization of the Reduction Property been established.
In the final section of this paper we shall briefly mention some of these generalized provability notions. In the next section we shall see how the need for a full Reduction Property can be circumvented.
\section{Reduction Property, equi-consistency, and relative ordinal analysis}
In this section we shall see how we can minimize the ingredients needed for a consistency proof as presented in Theorem \ref{theorem:PaIsConsistent}. In particular we shall not need the full Reduction Property but rather some weak version of it in terms of equi-consistency.
We shall see that the following steps suffice. Below, let \theory{U} denote the target theory of which we wish to perform an ordinal analysis.
\begin{enumerate} \item We fix some base theory \theory{X} over which most of our arguments will be performed;
\item\label{item:suitableConsistencyStatements} We find some notions of consistency over \theory{X} of increasing strength \[ \{ \langle 0 \rangle_{\theory{X}} \varphi, \langle 1 \rangle_{\theory{X}} \varphi, \langle 2 \rangle_{\theory{X}} \varphi, \langle 3 \rangle_{\theory{X}} \varphi, \ldots \}, \] so that the following properties are obtained (we shall drop subscripts \theory{X}): \begin{enumerate} \item\label{item:provableDeductionTheorem} The notion $\langle n \rangle_{\theory{T}}$ is monotone both in $n$ and in $\theory{T}$, and for all natural numbers $n$, theories \theory{T}, and formulas $\varphi, \psi$ we have, provably in some weak theory but certainly in \theory{X}, \[ \langle n \rangle_{\theory{T} + \varphi} \psi \ \leftrightarrow \ \langle n \rangle_{\theory{T} } (\psi \wedge \varphi); \]
\item\label{item:GLPisSound} The logic {\ensuremath{\mathsf{GLP}}}\xspace is sound for the corresponding dual provability operators $[n]_{\theory{X}}$;
\item\label{item:UincludedInConsistencies} We have that (provably in some weak theory but certainly in \theory{X}) \[ \theory{U} \subseteq \theory{X} + \{ \langle 0 \rangle_{\theory{X}} \top, \langle 1 \rangle_{\theory{X}} \top, \langle 2 \rangle_{\theory{X}} \top, \langle 3 \rangle_{\theory{X}} \top, \ldots \}; \]
\item\label{item:equiconsistencyReductionProperty} The theory $\theory{X} + \langle n+1 \rangle \top$ is equi-consistent with the theory $\theory{X} + \{ Q^k_n(\top) \mid k \in \omega \}$ where $Q^0_n(\varphi)= \langle n \rangle \varphi$ and $Q^{k+1}_n (\varphi) = \langle n \rangle (\varphi \wedge Q^k_n(\varphi))$. This equi-consistency should be provable in some weak extension $\theory{X}^+$ of \theory{X}.
\end{enumerate}
\end{enumerate}
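As a sanity check (our reading of the scheme, not an additional claim), the analysis of Theorem \ref{theorem:PaIsConsistent} is recovered as the following instance, with the usual $n$-consistency notions playing the role of the $\langle n \rangle_{\theory{X}}$:

```latex
\[
\theory{X} = \ensuremath{{\rm{EA}}}\xspace,
\qquad
\theory{X}^+ = \ensuremath{{\rm{EA}}}\xspace^+,
\qquad
\theory{U} = \ensuremath{{\mathrm{PA}}}\xspace,
\qquad
\langle n \rangle_{\theory{X}}\, \varphi \ = \
\text{``}\ensuremath{{\rm{EA}}}\xspace + \varphi \text{ is $n$-consistent''}.
\]
```

In this instance, item \ref{item:equiconsistencyReductionProperty} is exactly Theorem \ref{theorem:equiconsistencyReductionProperty}.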
We shall now see that these ingredients suffice to perform a consistency proof of \theory{U} relative to \theory{X} formalized in $\theory{X}^+$.
\begin{theorem}\label{theorem:strippedConsistencyProof} Suppose we have fixed \theory{X} and consistency notions as above. Then \[ \theory{X}^+ + {\sf TI}(\widetilde{\Pi^0_1}, \varepsilon_0) \vdash {\sf Con}(\theory{U}), \] where $\widetilde{\Pi^0_1}$ is some complexity class that corresponds to the consistency notion $\langle 0 \rangle_\theory{X}$. \end{theorem}
\begin{proof} The proof is similar to that of Theorem \ref{theorem:PaIsConsistent}. We reason in $\theory{X}^+$. By \ref{item:UincludedInConsistencies} above, we have that $\theory{U} \subseteq \theory{X} + \{ \langle 0 \rangle_{\theory{X}} \top, \langle 1 \rangle_{\theory{X}} \top, \langle 2 \rangle_{\theory{X}} \top, \langle 3 \rangle_{\theory{X}} \top, \ldots \}$ whence also \[ \langle 0 \rangle_{\{ \langle 0 \rangle_{\theory{X}} \top, \langle 1 \rangle_{\theory{X}} \top, \langle 2 \rangle_{\theory{X}} \top, \langle 3 \rangle_{\theory{X}} \top, \ldots \}}\top \to {\sf Con}(\theory{U}). \] Clearly, by \ref{item:provableDeductionTheorem} we have that \[ \langle 0 \rangle_{\{ \langle 0 \rangle_{\theory{X}} \top, \langle 1 \rangle_{\theory{X}} \top, \langle 2 \rangle_{\theory{X}} \top, \langle 3 \rangle_{\theory{X}} \top, \ldots \}}\top \ \ \leftrightarrow \ \ \forall \, n \ \langle 0 \rangle \langle n\rangle \top. \] Still reasoning in $\theory{X}^+$, we conclude by transfinite induction, showing that $ \forall \, n \ \langle 0 \rangle_\theory{X} \langle n\rangle_\theory{X} \top$. Clearly it suffices to show that for all worms $A$ in ${\ensuremath{\mathsf{GLP}}}\xspace_\omega$ we have that $\langle 0\rangle_\theory{X} A$. Thus, we set out to prove \begin{equation}\label{equation:progressive} \forall A\ [\forall \, B {<_0} A\ \langle 0\rangle_\theory{X} B \to \langle 0\rangle_\theory{X} A]. \end{equation} We choose $\theory{X}^+$ strong enough so that it at least contains ${\sf RFN}_{\widetilde{\Sigma_1^0}}(\theory{X})$ in order to have \begin{enumerate} \item $\theory{X}^+ \vdash \langle 0\rangle_\theory{X} \top$ and,
\item $\theory{X}^+ \vdash \langle 0\rangle_\theory{X} \varphi \ \to \ \langle 0\rangle_\theory{X} \langle 0\rangle_\theory{X} \varphi$. \end{enumerate} These two observations account for a proof of \eqref{equation:progressive} for the empty worm and worms of the form $\langle 0 \rangle A'$. For worms of the form $\langle n+1 \rangle A'$ we see by \ref{item:equiconsistencyReductionProperty} that \[ \langle 0 \rangle_\theory{X} \langle n+1 \rangle_\theory{X} A' \ \leftrightarrow \ \forall k\ \langle 0 \rangle_\theory{X} Q^k_n(A'). \] But, as by \ref{item:GLPisSound} {\ensuremath{\mathsf{GLP}}}\xspace is sound for our modalities, we get that all the $Q^k_n(A')$ are $<_0$-below $\langle n+1 \rangle A'$ and we obtain the right-hand side from the induction hypothesis. \end{proof} For the sake of presentation we have chosen \theory{X} and \theory{U} such that in some sense $\frac{\ensuremath{{\rm{EA}}}\xspace}{\ensuremath{{\mathrm{PA}}}\xspace} = \frac{\theory{X}}{\theory{U}}$, in which case we are justified in saying that the $\Pi^0_1$ proof-theoretic ordinal of \theory{U} \emph{relative to} \theory{X} is $\varepsilon_0$.
It is clear that Theorem \ref{theorem:strippedConsistencyProof} above can be extended to larger orderings once we have extended our notion of fundamental sequence as in Definition \ref{definition:FundamentalSequence} also for modalities with limit ordinals. This is unproblematic in principle but may slightly depend on the choice of fundamental sequences of the ordinals inside the modalities. The important observation is that the use of the full Reduction Property can be avoided.
\section{Going beyond \ensuremath{{\mathrm{PA}}}\xspace: recent developments}
In this final section we just wish to briefly report on ongoing work to find arithmetical interpretations for ${\ensuremath{\mathsf{GLP}}}\xspace_\Lambda$ with $\Lambda>\omega$.
\subsection{Truth-predicates and reflection}
Lev Beklemishev and Evgeniy Dashkov --both at Moscow State University-- have been studying interpretations of ${\ensuremath{\mathsf{GLP}}}\xspace_{\omega\cdot 2 }$ where an additional truth-predicate for arithmetical formulas is added to the language of arithmetic. Within this framework they can express reflection for arithmetical formulas and slightly beyond.
The work is still unpublished, but they have presented some results where they go up to $[\omega + \omega]$ while preserving the full Reduction Property, at the price of giving up the nice modal logic {\ensuremath{\mathsf{GLP}}}\xspace. Rather, they switch to a positive fragment of {\ensuremath{\mathsf{GLP}}}\xspace to account for the fact that certain reflection principles (at limit stages) are not finitely axiomatizable.
\subsection{Omega rule interpretations}
Andr\'es Cord\'on Franco, David Fern\'andez Duque, and F\'elix Lara Mart\'{\i}n from the University of Sevilla, in collaboration with the author, are studying an interpretation where, within an infinitary proof calculus, $[\alpha]$ is read as ``provable with $\alpha$ nested applications of the omega rule''. Soundness w.r.t.\ this interpretation has been proved and completeness seems feasible too. However, not much is known about the extent to which the Reduction Property holds for this interpretation.
\subsection{Levy's reflection results}
Joan Bagaria from ICREA and the University of Barcelona suggested in discussions with the author the following set-theoretical reading of our modalities $[n]$ for $n\in \omega$. Let \theory{X} be the theory $\ensuremath{{\mathrm{ZFC}}}\xspace - \{ {\sf Repl}, {\sf Inf} \}$. It is established in a paper of Levy (\cite{Levy:1961}) that \[ \ensuremath{{\mathrm{ZFC}}}\xspace \equiv \theory{X} + \ensuremath{{\sf RFN}}\xspace(\theory{X}). \] Here, \ensuremath{{\sf RFN}}\xspace refers to the following notion of reflection: for each (externally quantified) natural number $n$, we denote by ${\sf RFN}_{\Sigma_n}(\theory{X})$ the following principle \[ \forall \, \varphi{\in}\Sigma_n\, \forall a\, \exists\, \alpha {\in}{\sf On} \ [ V_\alpha \models \varphi (a) \ \Leftrightarrow \ \ \models_n \varphi (a) ]. \] Here, $\models_n$ refers to partial truth predicates that are known to exist for \ensuremath{{\mathrm{ZFC}}}\xspace and subtheories. At first sight it seems that replacement is needed to define the entities $V_\alpha$. However, in the absence of replacement one can work with the Scott-rank instead and define $V_\alpha : = \{ x\mid {\sf rank}(x)\leq \alpha \}$, where ${\sf rank}(x)\leq \alpha$ \emph{is} definable in \theory{X} making use of the transitive closure. We now define classes that collect the ordinals $\alpha$ for which the partial universes $V_\alpha$ are $\Sigma_n$ elementary substructures of $V$: \[ C^{(n)} := \{ \alpha \mid V_\alpha \prec_{\Sigma_n} V\}. \] It is a theorem of Levy that the classes $C^{(n)}$ are $\Pi_n$ definable in \theory{X}. Next, we define \[ \langle n \rangle_\theory{T} \varphi \ :\Leftrightarrow \ \exists\, \alpha {\in} C^{(n)} \ [V_\alpha \models \theory{T} \wedge V_\alpha \models \varphi]. \] It seems that all of the conditions 2(a)--(c) are satisfied for this notion of provability. In particular we have that $\langle n \rangle \varphi \to [m] \langle n\rangle \varphi$ for $m>n$, since $\langle n \rangle_\theory{T}$ is definable in a $\Sigma_{n+1}$-fashion.
However, since $\ensuremath{{\mathrm{ZFC}}}\xspace \vdash {\sf TI}({\Pi^0_1}, \varepsilon_0)$, condition $(d)$ cannot hold for this interpretation: otherwise Theorem \ref{theorem:strippedConsistencyProof} would yield a proof of ${\sf Con}(\ensuremath{{\mathrm{ZFC}}}\xspace)$ formalizable within \ensuremath{{\mathrm{ZFC}}}\xspace itself, contradicting G\"odel's second incompleteness theorem. We must conclude that $(d)$ does not hold and that the two theories are not equi-consistent.
\subsection{On a (relative) proof-theoretical ordinal of \ensuremath{{\mathrm{ZFC}}}\xspace}
We conclude with a simple observation on a proof-theoretical ordinal of \ensuremath{{\mathrm{ZFC}}}\xspace. It is generally believed that an ordinal analysis of \ensuremath{{\mathrm{ZFC}}}\xspace is currently far out of reach. With the methods presented here one might hope that at least an ordinal analysis of \ensuremath{{\mathrm{ZFC}}}\xspace relative to some strong base theory might be possible.
However, if such an analysis were to be given, it is most likely to be formalizable within \ensuremath{{\mathrm{ZFC}}}\xspace itself. As \ensuremath{{\mathrm{ZFC}}}\xspace proves transfinite induction over any well-ordering, this implies that the order type involved in such an ordinal analysis of \ensuremath{{\mathrm{ZFC}}}\xspace must be represented inside \ensuremath{{\mathrm{ZFC}}}\xspace in such a way that \ensuremath{{\mathrm{ZFC}}}\xspace does not prove it is indeed a well-order.
\end{document}
\begin{document}
\author{Dmitri V. Alekseevsky and Liana David}
\title{A note about invariant SKT structures and generalized K\"ahler structures on flag manifolds}
\maketitle
{\bf Abstract:} We prove that any invariant strong K\"ahler structure with torsion (SKT structure) on a flag manifold $M = G/K$ of a semisimple compact Lie group $G$ is K\"ahler. As an application we describe invariant generalized K\"ahler structures on $M$.\\
{\it 2010 Mathematics Subject Classification:} 53D18.
\section{Introduction}
A Hermitian manifold $(M,g,J)$ admits a unique connection
$\nabla^B$ (called the Bismut connection) which preserves the
metric $g$ and the complex structure $J$ and has a skew-symmetric torsion tensor $c:= g(\cdot, T^{B}(\cdot,\cdot))$, where
$T^{B}$ is the torsion of $\nabla^B$. The 3-form $c$ can be expressed in
terms of the K\"ahler form $\omega = g\circ J$ by
$$ c = Jd\omega := d\omega(J\cdot, J \cdot,J \cdot).$$ The manifold $(M,g,J)$ is called strong K\"ahler with torsion (SKT) if the torsion 3-form $c$ is closed, or, equivalently, $\partial \bar \partial \omega =0$. SKT manifolds are a natural generalization of K\"ahler manifolds, and many results from K\"ahler geometry carry over to SKT geometry, see e.g. \cite{enrietti-fino,fernandes-fino,fino-tomassini, fino-tomassini1}.\\
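The equivalence between $dc=0$ and $\partial\bar\partial\omega=0$ follows from a short type computation; here we assume the standard convention that $J$ acts on a $(p,q)$-form by the factor $i^{p-q}$, consistent with the definition of $Jd\omega$ above:

```latex
\[
c = J d\omega = J(\partial \omega + \bar\partial \omega)
  = i\,\partial \omega - i\,\bar\partial \omega
  = i(\partial - \bar\partial)\omega ,
\]
\[
dc = i(\partial + \bar\partial)(\partial - \bar\partial)\omega
   = i(\bar\partial\partial - \partial\bar\partial)\omega
   = -2i\,\partial\bar\partial\omega ,
\]
```

using $\partial^2 = \bar\partial^2 = 0$ and $\bar\partial\partial = -\partial\bar\partial$; hence $dc = 0$ if and only if $\partial\bar\partial\omega = 0$.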
SKT geometry is also closely related to generalized K\"ahler geometry, which was recently introduced by N. Hitchin \cite{hitchin} and appeared before in physics as the geometry of the target space of $N=(2,2)$ supersymmetric nonlinear sigma models, see e.g. \cite{GHR,LRvUZ}.\\
A generalized K\"ahler structure on a manifold $M$ is a pair $(\mathcal{J}_1, \mathcal{J}_2)$ of commuting generalized complex structures such that the symmetric bilinear form
$ -(\mathcal{J}_1 \circ \mathcal{J}_2 \cdot, \cdot)$ is positive definite, where $(\cdot , \cdot )$ is the standard scalar product of neutral signature of the generalized tangent bundle $\mathbb{T}M= TM\oplus T^{*}M.$ (For the definition and basic facts about generalized complex structures, see e.g. \cite{thesis}). It was shown by M. Gualtieri \cite{thesis,Gual} that a generalized K\"ahler structure on a manifold can be described in classical terms as a bi-Hermitian structure $(g,J_+,J_-,b)$ in the sense of \cite{GHR}, i.e. a pair $(g, J_+)$, $(g, J_-)$ of SKT structures with common metric $g$ and a 2-form $b$ (called in the physical literature the $b$-field) such that \begin{equation}\label{cond1} db = J_+d\omega_{+} = - J_{-}d\omega_{-}, \end{equation} where $\omega_{\pm} = g \circ J_{\pm}$ are K\"{a}hler forms.
Let $G$ be a semisimple compact Lie group and $M= G/K$ a flag manifold, i.e. an adjoint orbit of $G$. In this note we describe invariant SKT structures and invariant generalized K\"ahler structures on $M$, as follows.
\begin{thm}\label{pmain}
Any invariant SKT structure $(g,J)$ on a flag manifold $M= G/ K$ is
K\"ahler, i.e. the K\"ahler form $\omega = g\circ J$ is closed. \end{thm}
The description of invariant K\"ahler structures on flag manifolds is well known, see e.g. \cite{dmitri,dmitri'} and Section \ref{hermitian} below.
\begin{cor}\label{mainh} Let $(g,J_{+}, J_{-},b)$ be an invariant bi-Hermitian structure in the sense of \cite{GHR} on a flag manifold $M = G/ K$ (which defines a generalized K\"{a}hler structure $(\mathcal{J}_1, \mathcal{J}_2)$ via Gualtieri's correspondence). Then $g$ is an invariant K\"ahler metric, $J_+,J_-$ are two parallel invariant complex structures and
$b$ is any closed invariant 2-form. If the group $G$ is simple, then $J_{+}= J_{-}$ or $J_{+}= - J_{-}.$ \end{cor}
The note is organized as follows. In Section \ref{hermitian} we fix our conventions and we recall the basic facts on the geometry of flag manifolds and, in particular, the description of invariant Hermitian and K\"{a}hler structures \cite{dmitri,dmitri'}. With these preliminaries, Theorem \ref{pmain} and Corollary \ref{mainh} will be proved in Section \ref{genkahler}.\\
{\bf Acknowledgements.} D.V.A. thanks the University of Hamburg for hospitality and financial support. L.D. acknowledges financial support from a grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, project number PN-II-ID-PCE-2011-3-0362.
\section{Preliminary material}\label{hermitian}
{\bf Basic facts about flag manifolds.}
A flag manifold of a semisimple compact Lie group $G$ is an
adjoint orbit $M = \mathrm{Ad}_G (h_0) \simeq G/K$ of an element
$h_0 $ of the Lie algebra of $G$. We denote by
$\mathfrak{g}$, $\mathfrak{k}$ the complex Lie algebras associated with
the groups $G$, $K$ respectively, and we fix
a Cartan subalgebra $\mathfrak{h}$ of $\mathfrak{k}$. We denote by
$R, R_0$ the root systems of $\mathfrak{g}$, $\mathfrak{k}$
with respect to $\mathfrak{h}$ and we set $R':= R \setminus R_0$.
We write the root space decomposition of $\mbox{\goth g}$ as $$ \mathfrak{g} = \mathfrak{k}+ \mathfrak{m}=( \mathfrak{h} + \sum_{\alpha \in R_0}\mathfrak{g}_\alpha) + \sum_{\alpha \in R'}\mathfrak{g}_\alpha $$
and we identify the vector space $\mathfrak{m}= \sum_{\alpha \in R'}\mathfrak{g}_\alpha $ with the complexification of the tangent space $T_{h_0}M $.
Let $E_{\alpha}\in \mbox{\goth g}_{\alpha}$ be root vectors of a Weyl basis. Thus, $$ \langle E_{\alpha}, E_{-\alpha}\rangle =1,\quad\forall\alpha \in R $$ (where $\langle X, Y\rangle :=\mathrm{tr}\left(\mathrm{ad}_{X}\circ \mathrm{ad}_{Y}\right)$ denotes the Killing form of $\mbox{\goth g}$) and \begin{equation}\label{weyl} N_{-\alpha ,-\beta} = - N_{\alpha \beta},\quad\forall \alpha ,\beta \in R \end{equation} where $N_{\alpha\beta}$ are the structure constants defined by \begin{equation}\label{nab} [E_{\alpha}, E_{\beta}]= N_{\alpha\beta} E_{\alpha +\beta},\quad\forall \alpha ,\beta \in R. \end{equation}
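For later reference we note the standard identities satisfied by the structure constants of a Weyl basis in this normalization (these are classical facts, stated here without proof):

```latex
\[
N_{\alpha\beta} = -N_{\beta\alpha},
\qquad
N_{\alpha\beta} = N_{\beta\gamma} = N_{\gamma\alpha}
\quad \text{whenever}\quad \alpha + \beta + \gamma = 0 .
\]
```

The second (cyclic) identity is what makes the coefficient of $N_{\alpha\beta}$ the only structure constant appearing in computations of forms evaluated on triples $E_\alpha, E_\beta, E_{-(\alpha+\beta)}$.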
The Lie algebra of $G$ is the fixed point set $\mathfrak{g}^\tau$
of the compact anti-involution $\tau$, which preserves the Cartan subalgebra ${\mbox{\goth h}}$ and sends $E_{\alpha}$ to $- E_{-\alpha}$, for any $\alpha\in R.$ It is given by
$$ \mathfrak{g}^\tau = i{\mbox{\goth h}}_{\mathbb{R}} + \sum_{\alpha \in R}\mathrm{span}
\{ E_{\alpha}-E_{-\alpha}, i(E_{\alpha}+ E_{-\alpha})\}$$ where ${\mbox{\goth h}}_{\mathbb{R}} = \mathrm{span}\{ H_\alpha:=[E_{\alpha}, E_{-\alpha}],\alpha \in R\}$ is a real form of $\mathfrak{h}$. Note that $$ \beta(H_\alpha) = \beta([E_\alpha, E_{-\alpha}]) = \langle\beta, \alpha\rangle $$ where $\langle \cdot , \cdot \rangle$ denotes also the scalar product on $\mathfrak{h}^*$ induced by the Killing form.
\subsection{Invariant complex structures on $M = G/K$}
We fix a system $\Pi_0$ of simple roots of $R_0$ and we extend it
to a system $\Pi = \Pi_0 \cup \Pi'$ of simple roots of $R$. We
denote by $R_0^+, R^+ = R_0^+ \cup R'_+$ the corresponding
systems of positive roots. A decomposition
$$ \mathfrak{m} = \mathfrak{m}^+ + \mathfrak{m}^- =
\sum_{\alpha \in R'_+}\mathfrak{g}_\alpha +\sum_{\alpha \in R'_+}\mathfrak{g}_{-\alpha} $$ defines an $\mathrm{Ad}_K$-invariant complex structure $J$ on $T_{h_0}M = \mathfrak{m}^\tau $, such that $$
J|_{\mathfrak{m}^{\pm}} = \pm i \mathrm{Id}. $$ We extend it to an invariant complex structure on $M$, also denoted by $J$. We will refer to $R'_{+}$ and $\Pi'$ as the set of positive roots, respectively the set of simple roots, of $J$. It is known that any invariant complex structure on $M$ can be obtained by this construction \cite{dmitri,wang}.
\subsection{$T$-roots and isotropy decomposition}
Let $\mathfrak{z} = i \mathfrak{t} \subset \mathfrak{h} $ be the center of the stability subalgebra $\mathfrak{k}^\tau$. The restrictions of the roots from $R' \subset \mathfrak{h}^*$ to the subspace $\mathfrak{t}$ are called $T$-roots. Denote by
$$ \kappa : R' \to R_T, \, \alpha \mapsto \alpha|_{\mathfrak{t}}$$ the natural projection onto the set $R_T$ of $T$-roots. Note that
$\alpha|_{\mathfrak{t}}=0$ for any $\alpha\in R_{0}.$ Any $T$-root $\xi$ defines an $\mathrm{Ad}_K$-invariant subspace $$ \mathfrak{m}_\xi := \sum_{\alpha \in R', \kappa(\alpha) =\xi} \mathfrak{g}_\alpha $$ of the complexified tangent space $\mathfrak{m}$ and $$ \mathfrak{m} = \sum_{\xi \in R_T} \mathfrak{m}_\xi $$ is a direct sum decomposition into non-equivalent irreducible $\mathrm{Ad}_K$-submodules.
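For illustration (an example of ours, not taken from the text), consider the partial flag manifold $\mathbb{C}P^2$, with $\alpha_1, \alpha_2$ the simple roots of $\mathfrak{g} = \mathfrak{sl}(3,\mathbb{C})$:

```latex
\[
M = \mathbb{C}P^2 = SU(3)/S(U(2)\times U(1)):
\qquad
R_0 = \{\pm\alpha_1\}, \qquad \alpha_1|_{\mathfrak{t}} = 0 ,
\]
\[
\kappa(\alpha_2) = \kappa(\alpha_1 + \alpha_2) =: \xi,
\qquad
\mathfrak{m}_\xi = \mathfrak{g}_{\alpha_2} + \mathfrak{g}_{\alpha_1+\alpha_2},
\qquad
R_T = \{\pm\xi\} .
\]
```

Here the two root spaces $\mathfrak{g}_{\alpha_2}$ and $\mathfrak{g}_{\alpha_1+\alpha_2}$ merge into a single irreducible $\mathrm{Ad}_K$-module $\mathfrak{m}_\xi$ of complex dimension two.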
\subsection{Invariant metrics and Hermitian structures}
We denote by $\omega_{\alpha}\in \mbox{\goth g}^{*}$ the $1$-forms dual to $E_\alpha, \,\alpha \in R $, i.e. \begin{equation}\label{label-g} \omega_\alpha(E_\beta) = \delta_{\alpha \beta},\quad\omega_{\alpha}\vert_{{\mbox{\goth h}}}=0. \end{equation} Any invariant Riemannian metric on $M$ is defined by an $\mathrm{Ad}_K$-invariant Euclidean metric $g$ on $\mathfrak{m}^\tau$, whose complex linear extension has the form
\begin{equation}\label{adaus} g = - \frac{1}{2}\sum_{\alpha \in R'} g_\alpha \omega_{\alpha}\vee \omega_{-\alpha} \end{equation} where $\omega_{\alpha}\vee \omega_{-\alpha}= \omega_{\alpha}\otimes\omega_{-\alpha}+ \omega_{-\alpha} \otimes\omega_{\alpha}$ is the symmetric product and $g_\xi$, $\xi \in R_T$, is a system of positive constants associated to the $T$-roots, $g_{\xi}= g_{-\xi}$ for any $\xi \in R_{T}$ and $g_\alpha := g_{\kappa(\alpha)}$. Note that the restriction of $g$ to $\mathfrak{m}_\xi$ is proportional to the restriction of the Killing form, with coefficient of proportionality $-g_\xi$.\\ Any such metric $g$ is Hermitian with respect to any invariant complex structure $J$ and the corresponding K\"ahler form is given by \begin{equation}\label{omega} \omega = - i \sum_{\alpha \in R'_+} g_\alpha \omega_{\alpha}\wedge \omega_{-\alpha} \end{equation} where $R'_{+}$ is the set of positive roots of $J$ and in our conventions $\omega_{\alpha}\wedge\omega_{-\alpha} = \omega_{\alpha}\otimes \omega_{-\alpha} -\omega_{-\alpha}\otimes\omega_{\alpha}.$
\subsection{Invariant K\"ahler structures} Any invariant symplectic form $\omega$ on $M$ compatible with an invariant complex structure $J$ as above (i.e. such that $g:= - \omega\circ J$ is positive definite) is associated to a 1-form
$ \sigma \in \mathfrak{t}^* $ such that $\langle\sigma, \alpha_i \rangle >0$ for any $\alpha_i \in \Pi'$ (the set of simple roots of $J$). As a form on $\mathfrak{m}$, it is given by $$ \omega= \omega_{\sigma}:= -i \sum_{\alpha \in R'_+} \langle \sigma, \alpha\rangle \omega_{\alpha} \wedge \omega_{-\alpha} .$$
The associated K\"ahler metric $g$ has the coefficients $g_{\alpha}=g_{\kappa(\alpha)} = \langle\sigma, \alpha \rangle$, which, obviously, satisfy the following linearity property:
\begin{equation} \label{linearitycondition}
g_{\alpha + \beta} = g_{\alpha} + g_{\beta},\quad \forall \alpha, \beta, \alpha + \beta \in R'_+. \end{equation} In particular, if $\Pi' = \{ \alpha_1, \cdots , \alpha_m \}$ and $$R' \ni \alpha \equiv k_1 \alpha_1 + \cdots + k_m \alpha_m \, (\mathrm{mod}\ R_0) $$ then $$ g_\alpha = k_1 g_{\alpha_1} + \cdots + k_m g_{\alpha_m}. $$
To summarize, we get:
\begin{prop}\label{linearityProp} \cite{dmitri} An invariant Hermitian structure $(g,J)$ on $M$ is K\"ahler if and only if the coefficients $g_{\alpha}$ associated to $g$ by (\ref{adaus}) satisfy the linearity property: if $\alpha, \beta, \alpha + \beta \in R'_+$, then $g_{\alpha + \beta} = g_\alpha + g_{\beta}$. Here $R'_{+}$ is the set of positive roots of $J$. \end{prop}
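For example (an illustration of ours, not from the text), on the full flag manifold of $SU(3)$, where $R_0 = \emptyset$ and $\alpha_1, \alpha_2$ are the simple roots:

```latex
\[
M = SU(3)/T:
\qquad
\Pi' = \{\alpha_1, \alpha_2\},
\qquad
R'_+ = \{\alpha_1,\ \alpha_2,\ \alpha_1 + \alpha_2\},
\]
\[
g_{\alpha_1} = \langle \sigma, \alpha_1 \rangle, \qquad
g_{\alpha_2} = \langle \sigma, \alpha_2 \rangle, \qquad
g_{\alpha_1+\alpha_2} = g_{\alpha_1} + g_{\alpha_2},
\]
```

so the invariant K\"ahler metrics compatible with a fixed invariant complex structure form a two-parameter family, parametrized by $g_{\alpha_1}, g_{\alpha_2} > 0$.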
\subsection{The formula for the exterior derivative}
An invariant $k$-form on $M=G/K$ can be considered as an $\mathrm{Ad}_K$-invariant $k$-form $\omega$ on the Lie algebra $\mathfrak{g}$ such that $i_{\mathfrak{k}}(\omega )=0$. We recall the standard Koszul formula for the exterior differential $d\omega$:
\begin{equation}\label{exteriord} ( d\omega )(X_{0}, \cdots , X_{k})=\sum_{i<j}(-1)^{i+j}\omega ( [X_{i}, X_{j}],X_{0},\cdots , \widehat{X_{i}}, \cdots , \widehat{X_{j}}, \cdots , X_{k}), \end{equation} for any $X_{i}\in \mathfrak{m} \subset \mathfrak{g}$. In (\ref{exteriord}) the hat means that the term is omitted.
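For instance, for a $2$-form — the case needed below for $d\omega$ — formula (\ref{exteriord}) specialises to:

```latex
% Koszul formula for a 2-form omega, evaluated on three vectors;
% the three bracket terms carry the signs (-1)^{i+j} for
% (i,j) = (0,1), (0,2), (1,2):
\[
  (d\omega)(X_0, X_1, X_2)
  = -\,\omega([X_0, X_1], X_2)
    + \omega([X_0, X_2], X_1)
    - \omega([X_1, X_2], X_0),
  \qquad X_i \in \mathfrak{m}.
\]
```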
\section{Proof of our main results}\label{genkahler}
We now prove Theorem \ref{pmain} and Corollary \ref{mainh}. We preserve the notations from the previous sections. Let $(g,J)$ be an invariant Hermitian structure on a flag manifold $M=G/K$. Let $g_{\alpha}= g_{\kappa(\alpha )}$ be the positive numbers associated to $g$, and let $R'_{+}$, $\Pi'$ be the sets of positive (respectively, simple) roots of $J$, as before. Let $\omega = g \circ J$ be the K\"ahler form. To prove the theorem, we have to check that if the form $J d\omega$ is closed, then $g_{\alpha}$ satisfy the linearity property (\ref{linearitycondition}). We define the sign $\epsilon_\alpha$ of a root $\alpha \in R'= R'_+ \cup(-R'_+)$ by $\epsilon_{\alpha} = \pm 1 $ if $\alpha \in \pm R'_{+}$. Note that $\epsilon_\alpha$ depends only on $\kappa(\alpha)$. Now we calculate $d\omega$ and $J d\omega $ on basic vectors, as follows:
\begin{lem}
\begin{enumerate}
\item[i)]
\begin{equation} \label{domega}
d \omega (E_{\alpha}, E_\beta, E_\gamma) =0 \,\, {\mathrm{ if}} \,\,
\alpha + \beta + \gamma \neq 0
\end{equation} and \begin{equation} \label{domega'} d \omega (E_{\alpha}, E_\beta, E_{-(\alpha + \beta)}) = -i N_{\alpha \beta} (\epsilon_\alpha g_\alpha + \epsilon_{\beta}g_\beta -\epsilon_{\alpha + \beta}g_{\alpha + \beta} ). \end{equation} \item[ii)]
\begin{equation}\label{add-d} (J d \omega )(E_{\alpha}, E_\beta, E_{-(\alpha + \beta)})= N_{\alpha \beta} (\epsilon_\beta \epsilon_{\alpha + \beta} g_\alpha + \epsilon_{\alpha}\epsilon_{\alpha + \beta}g_\beta - \epsilon_{\alpha} \epsilon_{\beta}g_{\alpha + \beta} ). \end{equation}
\end{enumerate} \end{lem}
\begin{proof} Relation (\ref{domega}) follows from (\ref{omega}) and (\ref{exteriord}).
Relation (\ref{domega'}) follows from (\ref{omega}), (\ref{exteriord}) and the following property of $N_{\alpha\beta}$ (see Chapter 5 of \cite{helgason}): \\ if $\alpha , \beta ,\gamma\in R$ are such that $\alpha +\beta +\gamma =0$, then \begin{equation}\label{suma} N_{\alpha\beta} = N_{\beta\gamma} = N_{\gamma\alpha}. \end{equation} Relation (\ref{add-d}) follows from (\ref{domega'}) and $JE_{\alpha}= i\epsilon_{\alpha}E_{\alpha}$ for any $\alpha\in R'.$
\end{proof}
\begin{lem} Suppose that $(g, J)$ is a SKT structure, i.e. $d\left( J d\omega\right) =0$. Then \begin{equation}\label{kahler1} N_{\alpha\beta}^{2}\left( g_{\alpha +\beta}-g_{\alpha} - g_{\beta}\right)+ \epsilon_{\alpha -\beta}N_{\alpha,-\beta}^{2} \left( \epsilon_{\alpha -\beta} g_{\alpha-\beta} - g_{\alpha} + g_{\beta}\right) =0 \end{equation} for any $\alpha, \beta \in R_+'$, where we assume that $\epsilon_{\alpha -\beta} =0$ if $\alpha - \beta \notin R'$. \end{lem}
\begin{proof} By a direct computation, we find \begin{align*} -\frac{1}{2}d\left( Jd\omega \right) (E_{\alpha}, E_{\beta}, E_{-\alpha}, E_{-\beta})&= N_{\alpha\beta}^{2}\left( g_{\alpha +\beta}- g_{\alpha}
- g_{\beta}\right)\\ &+\epsilon_{\alpha -\beta}N_{\alpha,-\beta}^{2} \left( \epsilon_{\alpha -\beta} g_{\alpha-\beta} -g_{\alpha} + g_{\beta}\right). \end{align*} This relation implies our claim. \end{proof}
For any root $$ R^{\prime}_{+}\ni\alpha \equiv k_1 \alpha_1 + \cdots + k_{m} \alpha_m
\, ( \mathrm{mod}R^{+}_{0}), \,\, \alpha_i \in \Pi', $$
we define the length of $\alpha$ as $\ell(\alpha) = \sum_{i=1}^{m} k_i$.
Note that $\ell (\alpha )$ depends only on the projection $\kappa(\alpha)$ of $\alpha$ onto $\mathfrak{t}^{*}.$\\
{\bf Proof of Theorem \ref{pmain}.} By Proposition \ref{linearityProp} we have to check that \begin{equation}\label{condk} g_{\alpha +\beta} = g_{\alpha} +g_{\beta}, \end{equation} for any $\alpha ,\beta \in R'_{+}$ such that $\alpha +\beta\in R'_+$. We use induction on the length of $\gamma = \alpha + \beta \in R'_+$. Suppose first that $\gamma = \alpha + \beta \in R'_{+}$ has length two. Then $\alpha - \beta \notin R'$, hence $\epsilon_{\alpha -\beta}=0$. Identity (\ref{kahler1}) implies (\ref{condk}).\\ Suppose now that (\ref{condk}) holds for all $\gamma = \alpha+\beta\in R'_{+}$ with $\ell(\gamma )\leq k$. Let $\gamma\in R'_{+}$ with $\ell(\gamma) = k+1$ and suppose that $\gamma = \alpha + \beta$, where $\alpha, \beta \in R'_+$. We have to show that \begin{equation}\label{condk0} g_{\gamma } = g_{\alpha} +g_{\beta}. \end{equation} If $\alpha -\beta \notin R'$, our previous argument shows that (\ref{condk0}) holds. Suppose now that $\alpha -\beta\in R'$. Without loss of generality, we may assume that $\alpha - \beta \in R'_{+}.$ Then $\alpha= (\alpha - \beta) + \beta$ is a decomposition of the root $\alpha $ into a sum of two roots from $R'_{+}.$ Since $\alpha$ has length $\leq k$, our inductive assumption implies that $g_\alpha = g_{\alpha - \beta} + g_\beta$. Thus the second term of the identity (\ref{kahler1}) vanishes and we obtain (\ref{condk0}). This concludes the proof of Theorem \ref{pmain}.\\
{\bf Proof of Corollary \ref{mainh}.} Let $(g, J_{+}, J_{-}, b)$ be a $G$-invariant bi-Hermitian structure in the sense of \cite{GHR} on a flag manifold $M=G/K$. Then, by Theorem \ref{pmain}, $(g,J_{\pm})$ are two K\"ahler structures and hence the $b$-field $b$ is closed. The complex structures $J_{\pm}$ are parallel with respect to the Levi-Civita connection. If the group $G$ is simple, the K\"ahler metric $g$ is irreducible. The endomorphism $A = J_{+}\circ J_{-}$ is symmetric with respect to $g$ and parallel. An easy argument which uses the irreducibility of $g$ shows that $J_{+}= J_{-}$ or $J_{+}= - J_{-}.$ This concludes the proof of Corollary \ref{mainh}.
DMITRI V. ALEKSEEVSKY: Edinburgh University, King's Buildings, JCMB, Mayfield Road, Edinburgh, EH9 3JZ,UK, D.Aleksee@ed.ac.uk\\
LIANA DAVID: Institute of Mathematics "Simion Stoilow" of the Romanian Academy; Calea Grivitei nr. 21, Sector 1, Bucharest, Romania; liana.david@imar.ro
\end{document}
\begin{document}
\pagestyle{plain}
\title{An Introduction to Torsion Subcomplex Reduction} \subjclass[2010]{MSC 11F75: Cohomology of arithmetic groups}
\date{\today}
\author{Alexander D. Rahm}
\address{Laboratoire de math\'ematiques GAATI, Universit\'e de la Polyn\'esie Fran\c{c}aise, BP 6570 -- 98702 Faaa, French Polynesia} \urladdr{http://gaati.org/rahm/} \email{Alexander.Rahm@upf.pf}
\maketitle
\begin{abstract}
This survey paper introduces a technique called Torsion Subcomplex Reduction (TSR) for computing torsion in the cohomology of discrete groups acting on suitable cell complexes. TSR enables one to skip machine computations on cell complexes and to access the reduced torsion subcomplexes directly, which yields results on the cohomology of matrix groups in terms of formulas.
TSR has already yielded general formulas for the cohomology of the tetrahedral Coxeter groups as well as, at odd torsion, of SL$_2$ groups over arbitrary number rings. The latter formulas allow one to refine the Quillen conjecture. Furthermore, progress has been made in adapting TSR to Bredon homology computations, in particular for the Bianchi groups, yielding their equivariant $K$-homology and, by the Baum--Connes assembly map, the $K$-theory of their reduced $C^*$-algebras. As a side application, TSR has provided dimension formulas for the Chen--Ruan orbifold cohomology of the complexified Bianchi orbifolds, and has been used to prove Ruan's crepant resolution conjecture for all complexified Bianchi orbifolds.
\end{abstract}
\section{Introduction} This survey paper is based on the habilitation thesis of the author, restricting to the expository parts, which are updated here, and referring to previously published papers for the proofs. The goal is to introduce a technique for computing Farrell--Tate cohomology of arithmetic groups, presented in Section~\ref{techniques}. This technique can also be applied in the computation of other invariants, as described in Section~\ref{Results}, where further results are stated.
\subsection{Background}\label{background}
Our objects of study are discrete groups~$\Gamma$ such that~$\Gamma$ admits a torsion-free subgroup of finite index. By a theorem of Serre~\cite{SerreGroupesDiscrets}, all the torsion-free subgroups of finite index in~$\Gamma$ have the same cohomological dimension; this dimension is called the virtual cohomological dimension (abbreviated vcd) of~$\Gamma$. Above the vcd, the (co)homology of a discrete group is determined by its system of finite subgroups. We are going to discuss this phenomenon in terms of Farrell--Tate cohomology. The Farrell--Tate cohomology $\widehat{\operatorname{H}}^q$ is identical to group cohomology $\Homol^q$ in all degrees $q$ above the vcd, and extends in lower degrees to a cohomology theory of the system of finite subgroups. Details are elaborated in Brown's book~\cite{Brown}*{chapter X}. For instance, for the Coxeter groups, whose virtual cohomological dimension vanishes, Farrell--Tate cohomology is identical to all of their group cohomology. In Section~\ref{conjugacy reduction}, we will introduce a method for explicitly determining the Farrell--Tate cohomology: by reducing torsion subcomplexes.
Let us note that for the same arithmetic groups, cohomology outside of our setting attracts much stronger contemporary interest, and therefore there has been extensive work on it. To mention just a few fairly recent publications about group cohomology in low cohomological degrees, from which more references can be found: on SL$_N({\mathbb{Z}})$ with rising rank $N$ and modulo small torsion~\cites{sikiri2019voronoi}, on infinite towers of congruence subgroups \cites{AGMY,BergeronSengunVenkatesh}, and on arbitrary groups using general-purpose algorithms \cites{Ellis}.
\subsection{Overview of the results}
This paper introduces the technique of \emph{torsion subcomplex reduction}. It is a technique for the study of discrete groups $\Gamma$, giving easier access to the cohomology of the latter at a fixed prime~$\ell$ and above the virtual cohomological dimension, by extracting the relevant portion of the equivariant spectral sequence and then simplifying it. Instead of having to work with a full cellular complex $X$ with a nice $\Gamma$-action, the technique takes as input only a subcomplex of $X$, often of lower dimension, and reduces it to a small number of cells.
The author first developed torsion subcomplex reduction for a specific class of arithmetic groups, the Bianchi groups, for which the method yielded all of the homology above the virtual cohomological dimension~\cite{Rahm:homological_torsion}. Some elements of this technique had already been used by Soul\'e for a modular group~\cite{Soule}, and were used by Mislin and Henn as a set of ad hoc tricks. After rediscovering these ad hoc tricks, the author put them into a general framework~\cite{Rahm:formulas}. The advantage of using this framework
is that it becomes possible to find general formulas for the dimensions of the Farrell--Tate cohomology, for instance for the entire family of the Bianchi groups.
It is convenient to give some examples of where the technique of torsion subcomplex reduction has already produced good results: \begin{itemize}
\item The Bianchi groups and their congruence subgroups (cf. Section \ref{The Bianchi groups});
\item The Coxeter groups (cf. Section \ref{The Coxeter groups});
\item The SL$_2$ groups over arbitrary number rings (cf. Section \ref{formulas for the Farrell--Tate cohomology});
\item PSL$_4({\mathbb{Z}})$ and the PGL\texorpdfstring{$_3$}{(3)} groups over rings of quadratic integers (cf. Section \ref{GL3});
\item The technique has also been adapted to groups with non-trivial centre (cf. Section \ref{non-trivial-centre}). \end{itemize} This has led to the following applications: \begin{itemize}
\item Refining the Quillen conjecture (cf. Section \ref{QC}),
\item Computing equivariant \textit{K}-homology (cf. Section \ref{Bredon state}),
\item Understanding Chen--Ruan orbifold cohomology (cf. Section \ref{orbifold state}). \end{itemize}
\section{The technique of Torsion Subcomplex Reduction} \label{techniques}
\subsection{Farrell--Tate cohomology and Steinberg homology}\label{Farrell--Tate cohomology and Steinberg homology} Let $\Gamma$ be a virtual duality group: this means that $\Gamma$ admits a finite index subgroup $\Gamma'$ such that ${\mathbb{Z}}$ admits a finite projective resolution over ${\mathbb{Z}}[\Gamma']$, and there is an integer $n$ such that $\Homol^i(\Gamma; \thinspace {\mathbb{Z}}[\Gamma])= 0$ for $i\neq n$ and $\Homol^n(\Gamma; \thinspace {\mathbb{Z}}[\Gamma])$ is ${\mathbb{Z}}$-torsion-free. Then $\Gamma$ is of finite virtual cohomological dimension vcd$(\Gamma) = n < \infty$ with $n$ the aforementioned integer (where we make the smallest choice $n=0$ if $n$ is not unique). Then the ``dualizing module'' is $D := \Homol^n(\Gamma;\thinspace {\mathbb{Z}}[\Gamma])$, and the \textit{Steinberg homology} of $\Gamma$ (with coefficients $M$) is $\Steinberg_i(\Gamma; \thinspace M) := \Homol_i(\Gamma; \thinspace D\otimes M)$. Recall~\cite{Brown79}*{\S 11.8} that there is an exact sequence tying together group cohomology $\Homol^\bullet$, Steinberg homology $\Steinberg_\bullet$ and Farrell--Tate cohomology $\widehat{\operatorname{H}}^\bullet$ of $\Gamma$: \begin{center} \begin{tikzpicture}[descr/.style={fill=white,inner sep=1.5pt}]
\matrix (m) [
matrix of math nodes,
row sep=1em,
column sep=1.4em,
text height=1.99ex, text depth=0.75ex
]
{ & & & & \Homol^0 & \hdots & \Homol^{n-1} & \Homol^{n} & \Homol^{n+1} & \Homol^{n+2} & \hdots \\
& & & & & & & & \parallel & \parallel & \\
\hdots & \widehat{\operatorname{H}}^{-3} & \widehat{\operatorname{H}}^{-2} & \widehat{\operatorname{H}}^{-1}& \widehat{\operatorname{H}}^{0} & \hdots &\widehat{\operatorname{H}}^{n-1}&\widehat{\operatorname{H}}^{n}&\widehat{\operatorname{H}}^{n+1}&\widehat{\operatorname{H}}^{n+2}& \hdots \\
& \parallel & \parallel & & & & & & & & \\
\hdots &\Steinberg_{n+2}&\Steinberg_{n+1}&\Steinberg_{n}&\Steinberg_{n-1}& \hdots &\Steinberg_{0}\\
};
\path[overlay,->, font=\scriptsize,>=latex]
(m-5-4) edge[out=-355,in=-155] (m-1-5)
(m-3-4) edge[thick, right hook->] (m-5-4)
(m-1-5) edge (m-3-5)
(m-3-5) edge (m-5-5)
(m-1-7) edge (m-3-7)
(m-3-7) edge (m-5-7)
(m-5-5) edge[out=-355,in=-155] (m-1-6)
(m-5-6) edge[out=-355,in=-155] (m-1-7)
(m-5-7) edge[out=-355,in=-155] (m-1-8)
(m-1-8) edge[thick, draw,->>] (m-3-8) ; \end{tikzpicture} \end{center} Therefore, Brown describes the Farrell--Tate cohomology of $\Gamma$ as consisting of the cohomology functors $\Homol^i$ for $i>n$, the Steinberg homology functors $\Steinberg_i$ for $i>n$, modified $\Homol^n$ and $\Steinberg_n$ functors, and $n$ additional functors $\widehat{\operatorname{H}}^0$, $\hdots$, $\widehat{\operatorname{H}}^{n-1}$ which are some sort of mixture of the functors $\Homol^i$ and $\Steinberg_i$ for $i \leq n$.
\subsection{Reduction of torsion subcomplexes in the classical setting} \label{conjugacy reduction} Let $\ell$ be a prime number. We require any discrete group $\Gamma$ under our study to be provided with what we will call a \textit{polytopal $\Gamma$-cell complex}, that is, a finite-dimensional simplicial complex $X$ with cellular $\Gamma$-action such that each cell stabiliser fixes its cell point-wise. In practice, we relax the simplicial condition to a polytopal one, merging finitely many simplices to a suitable polytope. We could obtain the simplicial complex back as a triangulation. We further require that the fixed point set~$X^G$ be acyclic for every non-trivial finite $\ell$-subgroup $G$ of~$\Gamma$.
Then, the $\Gamma$-equivariant Farrell--Tate cohomology $\widehat{\operatorname{H}}^*_\Gamma(X; \thinspace M)$ of~$X$, for any trivial $\Gamma$-module $M$ of coefficients, gives us the $\ell$-primary part $\widehat{\operatorname{H}}^*(\Gamma; \thinspace M)_{(\ell)}$ of the Farrell--Tate cohomology of~$\Gamma$, as follows. \begin{proposition}[Brown \cite{Brown}] \label{Brown's proposition} For a $\Gamma$-action on $X$ as specified above, the canonical map $$ \widehat{\operatorname{H}}^*(\Gamma; \thinspace M)_{(\ell)} \to \widehat{\operatorname{H}}^*_\Gamma(X; \thinspace M)_{(\ell)} $$ is an isomorphism. \end{proposition}
The classical choice \cite{Brown} is to take for $X$
the geometric realization of the partially ordered set of non-trivial finite subgroups (respectively, non-trivial elementary Abelian $\ell$-subgroups) of~$\Gamma$,
the latter acting by conjugation. The stabilisers are then the normalizers, which in many discrete groups are infinite. In addition, determining a group presentation for the normalizers often poses great computational challenges. When we want to compute the module $\widehat{\operatorname{H}}^*_\Gamma(X; \thinspace M)_{(\ell)}$ subject to Proposition~\ref{Brown's proposition}, we must at least know the ($\ell$-primary part of the) Farrell--Tate cohomology of these normalizers. The Bianchi groups are an instance where different isomorphism types can occur for this cohomology
at different conjugacy classes of elementary Abelian $\ell$-subgroups, both for $\ell=2$ and $\ell=3$. As the only non-trivial elementary Abelian $3$-subgroups in the Bianchi groups are of rank $1$, the orbit space $_\Gamma \backslash X$ consists only of one point for each conjugacy class of type ${\mathbb{Z}}/3$ and a corollary~\cite{Brown} from Proposition~\ref{Brown's proposition} decomposes the
$3$-primary part of the Farrell--Tate cohomology of the Bianchi groups into the direct product over their normalizers. However, due to the different possible homological types of the normalizers (in fact, two of them occur),
the final result remains unclear and subject to tedious case-by-case computations of the normalizers.
In contrast, in the cell complex we are going to construct (specified in Definition~\ref{reduced torsion subcomplex definition} below),
the connected components of the orbit space are for the $3$-torsion in the Bianchi groups not simple points,
but have either the shape $\edgegraph$ or $\circlegraph$. This dichotomy already contains the information about the occurring normalizer.
The starting point for our construction is the following definition.
\begin{df}
Let $\ell$ be a prime number. The \emph{$\ell$-torsion subcomplex} of a polytopal $\Gamma$-cell complex~$X$
consists of all the cells of $X$ whose stabilisers in~$\Gamma$ contain elements of order $\ell$. \end{df}
We are from now on going to require the cell complex $X$ to admit only finite stabilisers in~$\Gamma$, and we require the action of $\Gamma$ on the coefficient module $M$ to be trivial. Then obviously only cells from the \emph{$\ell$-torsion subcomplex} contribute to $\widehat{\operatorname{H}}^*_\Gamma(X; \thinspace M)_{(\ell)}$.
\begin{corollary}[Corollary to Proposition~\ref{Brown's proposition}] \label{Brownian}
There is an isomorphism between the $\ell$-primary parts of the Farrell--Tate cohomology of~$\Gamma$ and the
$\Gamma$-equivariant Farrell--Tate cohomology of the $\ell$-torsion subcomplex. \end{corollary}
We are going to reduce the \emph{$\ell$-torsion subcomplex} to one which still carries the $\Gamma$-equivariant Farrell--Tate cohomology of~$X$, but which can also have considerably fewer orbits of cells. This can be easier to handle in practice, and, for certain classes of groups, leads us to an explicit structural description of the Farrell--Tate cohomology of~$\Gamma$. The pivotal property of this reduced $\ell$-torsion subcomplex will be given in Theorem~\ref{pivotal}. Our reduction process uses the following conditions, which are imposed on a triple $(\sigma, \tau_1, \tau_2)$ of cells in the $\ell$-torsion subcomplex, where $\sigma$ is a cell of dimension $n-1$, lying in the boundary of precisely the two $n$-cells $\tau_1$ and~$\tau_2$,
the latter cells representing two different orbits.
\begin{ConditionA} \label{cell condition} The triple $(\sigma, \tau_1, \tau_2)$ is said to satisfy Condition A if no higher-dimensional cells of the $\ell$-torsion subcomplex touch $\sigma$, if the interior of $\tau_1$ and the interior of $\tau_2$ do not contain two points which are on the same orbit, and if the $n$-cell stabilisers admit an isomorphism $\Gamma_{\tau_1} \cong \Gamma_{\tau_2}$. \end{ConditionA}
Where this condition is fulfilled in the $\ell$-torsion subcomplex,
we merge the cells $\tau_1$ and $\tau_2$ along~$\sigma$ and do so for their entire orbits,
if and only if they meet the following additional Condition~B. By \emph{mod $\ell$ cohomology}, we refer to group cohomology with ${\mathbb{Z}}/\ell$-coefficients under the trivial action.
\begin{isomorphismCondition} With the notation above Condition $A$, the inclusion $ \Gamma_{\tau_1} \subset \Gamma_\sigma$ induces an isomorphism on mod $\ell$ cohomology. \end{isomorphismCondition}
\begin{lemma}[\cite{Rahm:formulas}] \label{A}
Let $\widetilde{X_{(\ell)}}$ be the $\Gamma$-complex obtained by orbit-wise merging two $n$-cells of the
$\ell$-torsion subcomplex $X_{(\ell)}$ which satisfy Conditions~$A$ and~$B$. Then, $$\widehat{\operatorname{H}}^*_\Gamma(\widetilde{X_{(\ell)}}; \thinspace M)_{(\ell)} \cong \widehat{\operatorname{H}}^*_\Gamma(X_{(\ell)}; \thinspace M)_{(\ell)}.$$ \end{lemma}
By a ``terminal $(n-1)$-cell'', we will denote an $(n-1)$-cell $\sigma$ such that \begin{itemize}
\item modulo~$\Gamma$, there is precisely one adjacent $n$-cell $\tau$; \item $\tau$ has no further cells on the $\Gamma$-orbit of $\sigma$ in its boundary; \item no higher-dimensional cells are adjacent to $\sigma$. \end{itemize} By ``cutting off'' the $n$-cell $\tau$,
we will mean that we remove $\tau$ together with~$\sigma$ from our cell complex.
\begin{df} \label{reduced torsion subcomplex definition}
A \emph{reduced $\ell$-torsion subcomplex} associated to a polytopal $\Gamma$-cell complex~$X$
is a cell complex obtained by recursively merging orbit-wise all the pairs of cells satisfying
conditions~$A$ and~$B$,
and cutting off $n$-cells that admit a terminal $(n-1)$-cell when condition~$B$ is satisfied. \end{df}
A priori, this process yields a unique reduced $\ell$-torsion subcomplex only up to suitable isomorphisms, so we do not speak of ``the'' reduced $\ell$-torsion subcomplex. The following theorem makes sure that the $\Gamma$-equivariant mod $\ell$ Farrell--Tate cohomology is not affected by this issue.
\begin{theorem}[\cite{Rahm:formulas}] \label{pivotal}
There is an isomorphism between the $\ell$-primary part of the Farrell--Tate cohomology of~$\Gamma$ and the
$\Gamma$-equivariant Farrell--Tate cohomology of a reduced $\ell$-torsion subcomplex obtained from $X$ as specified above. \end{theorem}
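To fix ideas, the cutting-off step of the reduction can be illustrated on a purely combinatorial toy model (this sketch and all of its names, including the predicate \texttt{same\_mod\_l\_cohomology}, are our own illustrative inventions, not part of the cited implementation): cells form a graph whose vertices and edges carry stabiliser labels, and Condition~B is modelled, very crudely, by equality of labels.

```python
from dataclasses import dataclass

@dataclass
class Graph:
    vertices: dict   # vertex name -> stabiliser label
    edges: list      # triples (v, w, stabiliser label)

def same_mod_l_cohomology(stab_tau, stab_sigma):
    # Hypothetical stand-in for Condition B: in this toy model, the
    # inclusion of stabilisers is declared to induce an isomorphism on
    # mod-l cohomology exactly when the two labels agree.  (In the
    # paper, this is checked via the criteria of Condition B'.)
    return stab_tau == stab_sigma

def cut_terminal_edges(g):
    # "Cutting off" n-cells at terminal (n-1)-cells, for n = 1:
    # repeatedly remove an edge tau together with a degree-one endpoint
    # sigma whenever the toy Condition B holds for stab(tau) in stab(sigma).
    changed = True
    while changed:
        changed = False
        degree = {v: 0 for v in g.vertices}
        for v, w, _ in g.edges:
            degree[v] += 1
            degree[w] += 1
        for edge in list(g.edges):
            v, w, stab_tau = edge
            for sigma in (v, w):
                if degree[sigma] == 1 and same_mod_l_cohomology(
                        stab_tau, g.vertices[sigma]):
                    g.edges.remove(edge)
                    del g.vertices[sigma]
                    changed = True
                    break
            if changed:
                break
    return g
```

On a chain whose terminal vertex carries the same label as its adjacent edge, that edge is cut off; where the labels differ, the cells survive, mimicking the failure of Condition~B.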
In order to have a practical criterion for checking Condition~$B$, we make use of the following stronger condition.
Here, we write ${\rm N}_{\Gamma_\sigma}$ for taking the normalizer in ${\Gamma_\sigma}$ and ${\rm Sylow}_\ell$ for picking an arbitrary Sylow $\ell$-subgroup.
This is well defined because all Sylow $\ell$-subgroups are conjugate. We use Zassenhaus's notion of $\ell$-\emph{normality}: a finite group is $\ell$-normal
if the center of one of its Sylow $\ell$-subgroups is the center of every Sylow $\ell$-subgroup in which it is contained.
\begin{ConditionBprime} With the notation of Condition $A$, the group $\Gamma_\sigma$ admits a (possibly trivial) normal subgroup $T_\sigma$ with trivial mod~$\ell$ cohomology and with quotient group $G_\sigma$; and the group $\Gamma_{\tau_1}$ admits a (possibly trivial) normal subgroup $T_\tau$ with trivial mod~$\ell$ cohomology and with quotient group $G_\tau$ making the sequences \begin{center}
$ 1 \to T_\sigma \to \Gamma_\sigma \to G_\sigma \to 1$ and $ 1 \to T_\tau \to \Gamma_{\tau_1} \to G_\tau \to 1$ \end{center} exact and satisfying one of the following. \begin{enumerate}
\item Either $G_\tau \cong G_\sigma$, or
\item $G_\sigma$ is $\ell$-normal and $G_\tau \cong {\rm N}_{G_\sigma}({\rm center}({\rm Sylow}_\ell(G_\sigma)))$, or
\item both $G_\sigma$ and $G_\tau$ are $\ell$-normal and there is a (possibly trivial) group $T$
with trivial mod~$\ell$ cohomology making the sequence $$1 \to T \to {\rm N}_{G_\sigma}({\rm center}({\rm Sylow}_\ell(G_\sigma))) \to {\rm N}_{G_\tau}({\rm center}({\rm Sylow}_\ell(G_\tau))) \to 1$$ exact. \end{enumerate} \end{ConditionBprime}
\begin{lemma}[\cite{Rahm:formulas}] \label{Implying the isomorphism condition} Condition B' implies Condition B. \end{lemma}
\begin{remark}
The computer implementation \cite{BuiRahm:scpInHAP} checks Conditions~$B' (1)$ and $B' (2)$ for each pair of cell stabilisers, using a presentation of the latter in terms of matrices, permutation cycles or generators and relators. In the examples below, however,
we avoid this case-by-case computation by a general determination of the isomorphism types of pairs of cell stabilisers
for which the group inclusion induces an isomorphism on mod $\ell$ cohomology. The latter method is the procedure of preference, because it allows us to deduce statements that hold for the entire class of groups in question. \end{remark}
\subsubsection{Example: A \texorpdfstring{$2$}{2}-torsion subcomplex for SL\texorpdfstring{$_3(\mathbb{Z})$}{(3,\textbf{Z})}}
The $2$-torsion subcomplex of the cell complex described by Soul\'e~\cite{Soule},
obtained from the action of SL$_3(\mathbb{Z})$ on its symmetric space,
has the following homeomorphic image. \begin{center} \scalebox{0.9} { \begin{pspicture}(-1.3,-7.44125)(11.894688,7.46125) \pstriangle[linewidth=0.04,dimen=outer](5.8803124,-6.42125)(10.46,7.72) \psline[linewidth=0.04](11.050312,-6.36125)(5.9103127,-3.52125)(5.8903127,1.25875)(0.6903125,1.27875)(0.6903125,-6.40125)(5.9303126,-3.52125)(5.9503126,-3.54125) \psline[linewidth=0.04](11.090313,-6.42125)(11.110312,1.27875)(5.8903127,1.25875)(5.9103127,5.65875)(0.6703125,1.27875)(0.6903125,1.25875) \psline[linewidth=0.04](5.9303126,5.65875)(11.110312,1.27875)(11.130313,1.25875) \usefont{T1}{ptm}{m}{it} \rput(5.9759374,6.26875){stab(M) $\cong {\mathcal{S}}_4$} \usefont{T1}{ptm}{m}{it} \rput(-0.575,1.44875){stab(Q) $\cong {\mathcal{D}}_6$} \usefont{T1}{ptm}{m}{it} \rput(7.217656,1.60875){stab(O) $\cong {\mathcal{S}}_4$} \usefont{T1}{ptm}{m}{it} \uput[0](11.0,1.62875){stab(N) $\cong {\mathcal{D}}_4$} \usefont{T1}{ptm}{m}{it} \uput[0](6.0,-3.4){stab(P) $\cong {\mathcal{S}}_4$} \usefont{T1}{ptm}{m}{it} \rput(0.14765625,-6.21125){N'} \usefont{T1}{ptm}{m}{it} \rput(11.657657,-6.19125){M'} \uput[90](8.5,3.5){${\mathcal{D}}_2$} \uput[0](5.9,3.5){${\mathcal{D}}_3$} \uput[0](5.9,-2.0){${\mathcal{D}}_3$} \uput[0](5.4,-5.0){${\mathcal{D}}_2$} \uput[180](3.3,3.5){${\mathbb{Z}}/2$} \rput(0.14765625,-2.21125){${\mathbb{Z}}/2$} \uput[0](2.6,-2.0){${\mathbb{Z}}/2$} \uput[0](2.6,-4.7){${\mathcal{D}}_4$} \uput[0](8.2,-2.0){${\mathbb{Z}}/2$} \uput[0](8.2,-4.7){${\mathcal{D}}_4$} \uput[0](2.6,1.6){${\mathcal{D}}_2$} \uput[270](8.6,1.2){${\mathbb{Z}}/2$} \psline[linewidth=0.04](8.99,3.5)(8.5,3.5)(8.5,3.0) \psline[linewidth=0.04](9.2,3.3)(8.7,3.3)(8.7,2.8) \psline[linewidth=0.04](10.7,-1.7)(11.1,-2.1)(11.5,-1.7) \psline[linewidth=0.04](10.7,-1.9)(11.1,-2.3)(11.5,-1.9) \psline[linewidth=0.04](5.49875,-6.0)(5.81875,-6.4)(5.55875,-6.8) \psline[linewidth=0.04](5.77875,-6.0)(6.03875,-6.4)(5.79875,-6.8) \end{pspicture} } \end{center} Here, the three edges $NM$, $NM'$ and $N'M'$ have to be identified as 
indicated by the arrows. All of the seven triangles belong with their interior to the $2$-torsion subcomplex, each with stabiliser ${\mathbb{Z}}/2$, except for the one which is marked to have stabiliser ${\mathcal{D}}_2$. Using the methods described in Section~\ref{conjugacy reduction}, we reduce this subcomplex to
\begin{center}
\scalebox{1} { \begin{pspicture}(-1.9,-0.9)(8.5,0.3)
\psdots(-0.0,0.0)
\psline(-0.0,0.0)(2.0,0.0)
\uput{0.1}[90](-0.0,0.0){ ${\mathcal{S}}_4$}
\uput{0.4}[270](-0.1,0.2){ $O$}
\psdots(2,0.0)
\uput{0.1}[90](1.0,0.0){ ${\mathcal{D}}_2$}
\uput{0.1}[90](2.0,0.0){ ${\mathcal{D}}_6$}
\uput{0.4}[270](2.0,0.2){ $Q$}
\psline(2,0.0)(8,0.0)
\uput{0.1}[90](3.0,0.0){ ${\mathbb{Z}}/2$}
\uput{0.1}[90](4.0,0.0){ ${\mathcal{S}}_4$}
\uput{0.4}[270](4.0,0.2){$M$}
\psdots(4,0.0)
\uput{0.1}[90](5.0,0.0){ ${\mathcal{D}}_4$}
\uput{0.1}[90](6.0,0.0){ ${\mathcal{S}}_4$}
\uput{0.2}[270](6.0,0.0){$P$}
\psdots(6,0.0)
\uput{0.1}[90](7.0,0.0){ ${\mathcal{D}}_4$}
\uput{0.1}[90](8.0,0.0){ ${\mathcal{D}}_4$}
\uput{0.4}[270](8.0,0.2){$N'$}
\psdots(8,0.0) \end{pspicture} } \end{center} and then to \begin{center}
\scalebox{1} { \begin{pspicture}(-1.9,-0.2)(8.0,0.3)
\uput{0.1}[90](2.0,0.0){ ${\mathcal{S}}_4$}
\psdots(2,0.0)
\psline(2,0.0)(6,0.0)
\uput{0.1}[90](3.0,0.0){ ${\mathbb{Z}}/2$}
\uput{0.1}[90](4.0,0.0){ ${\mathcal{S}}_4$}
\psdots(4,0.0)
\uput{0.1}[90](5.0,0.0){ ${\mathcal{D}}_4$}
\uput{0.1}[90](6.0,0.0){ ${\mathcal{S}}_4$}
\psdots(6,0.0) \end{pspicture} } \end{center} which is the geometric realization of Soul\'e's diagram of cell stabilisers.
This yields the mod $2$ Farrell--Tate cohomology as specified in~\cite{Soule}.
\subsubsection{Example: Farrell--Tate cohomology of the Bianchi modular groups} Consider the $\mathrm{SL}_2$ matrix groups over the ring $\mathcal{O}_{-m}$
of integers in the imaginary quadratic number field ${\mathbb{Q}}(\sqrt{-m})$,
with $m$ a square-free positive integer. These groups, as well as their central quotients $\mathrm{PSL}_2\left(\mathcal{O}_{-m}\right)$, are known as \textit{Bianchi (modular) groups}. We recall the following information from~\cites{Rahm:formulas} on the $\ell$-torsion subcomplex of $\mathrm{PSL}_2\left(\mathcal{O}_{-m}\right)$. Let~$\Gamma$ be a finite index subgroup in $\text{PSL}_2(\mathcal{O}_{-m})$. Then any element of~$\Gamma$ fixing a point inside hyperbolic $3$-space~$\mathcal{H}$ acts as a rotation of finite order. By Felix Klein's work, we know conversely that any torsion element~$\alpha$ is elliptic and hence fixes some geodesic line. We call this line \emph{the rotation axis of~$\alpha$}. Every torsion element stabilises a line conjugate to one passing through the Bianchi fundamental polyhedron. We obtain the \textit{refined cellular complex} from the action of~$\Gamma$ on~$\mathcal{H}$
as described in~\cite{Rahm:homological_torsion}, namely we subdivide~$\mathcal{H}$ until the stabiliser in~$\Gamma$ of any cell $\sigma$ fixes $\sigma$ point-wise. We achieve this by computing Bianchi's fundamental polyhedron for the action of~$\Gamma$,
taking as a preliminary set of 2-cells its facets lying on the Euclidean hemispheres
and vertical planes of the upper-half space model for $\mathcal{H}$,
and then subdividing along the rotation axes of the elements of~$\Gamma$.
It is well-known~\cite{SchwermerVogtmann} that if $\gamma$ is an element of finite order $n$ in a Bianchi group, then $n$ must be 1, 2, 3, 4 or 6,
because $\gamma$ has eigenvalues $\rho$ and $\overline{\rho}$,
with $\rho$ a primitive $n$-th root of unity, and the trace of~$\gamma$ is $\rho + \overline{\rho} \in \mathcal{O}_{-m} \cap {\mathbb{R}} = {\mathbb{Z}}$.
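The finiteness of this list of orders can be double-checked numerically (a small sketch of our own, not taken from the cited reference): the trace $2\cos(2\pi/n)$ of a rotation of order $n$ is an integer only for $n \in \{1, 2, 3, 4, 6\}$, the classical crystallographic restriction.

```python
import math

def integral_trace_orders(bound):
    # Scan the orders n up to the given bound and keep those for which
    # rho + conj(rho) = 2*cos(2*pi/n), with rho = exp(2*pi*i/n) a
    # primitive n-th root of unity, is (numerically) an integer.
    orders = []
    for n in range(1, bound + 1):
        trace = 2 * math.cos(2 * math.pi / n)
        if abs(trace - round(trace)) < 1e-9:
            orders.append(n)
    return orders
```

The scan returns exactly the orders $1, 2, 3, 4, 6$, with traces $2, -2, -1, 0, 1$ respectively.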
When $\ell$ is one of the two occurring prime numbers $2$ and~$3$, the orbit space of this subcomplex is a graph,
because the cells of dimension greater \mbox{than 1} are trivially stabilised in the refined cellular complex. We can see that this graph is finite either from the finiteness of the Bianchi fundamental polyhedron, or from studying conjugacy classes of finite subgroups as in~\cite{Kraemer:Diplom}.
As in \cite{RahmFuchs}, we make use of a $2$-dimensional deformation retract $X$ of the refined cellular complex, equivariant with respect to a Bianchi group \mbox{$\Gamma$}. This retract has a cell structure in which each cell stabiliser fixes its cell pointwise. Since $X$ is a deformation retract of $\mathcal{H}$ and hence acyclic, $$\Cohomol^*_\Gamma(X) \cong \Cohomol^*_\Gamma(\mathcal{H}) \cong \Cohomol^*(\Gamma).$$ \begin{table} \begin{center} $
\begin{array}{|c|c|c|c|c|c|} \hline \text{Subgroup type} & {\mathbb{Z}}/2 & {\mathbb{Z}}/3 & {\mathcal{D}}_2 &{\mathcal{D}}_3& {\mathcal{A}}_4 \\ \hline \text{Number of conjugacy classes} & \lambda_{4} & \lambda_6 & \mu_2 & \mu_3 & \mu_T \\ \hline \end{array} $ \end{center} \caption{The non-trivial finite subgroups of $\mathrm{PSL}_2\left(\mathcal{O}_{-m}\right)$ were classified by Klein~\cite{Klein:binaereFormenMathAnn9}. Here, ${\mathbb{Z}}/n$ is the cyclic group of order $n$, the dihedral groups are ${\mathcal{D}}_2$ with four elements and ${\mathcal{D}}_3$ with six elements, and the tetrahedral group is isomorphic to the alternating group ${\mathcal{A}}_4$ on four letters. Formulas for the numbers of conjugacy classes counted by the Greek symbols are given by Kr\"amer~\cite{Kraemer:Diplom}.} \label{table:covering} \end{table} In Theorem~\ref{Grunewald-Poincare series formulas} below, we give a formula expressing precisely how the Farrell--Tate cohomology
of a Bianchi group with units $\{\pm 1\}$
(i.e., just excluding the Gaussian and the Eisenstein integers as imaginary quadratic rings,
see Section~\ref{Bredon state})
depends on the numbers of conjugacy classes of non-trivial finite subgroups of the five occurring types specified in Table~\ref{table:covering}. The main step in the proof is to read off the Farrell--Tate cohomology from the quotients of the reduced torsion subcomplexes.
Kr\"amer's formulas \cite{Kraemer:Diplom} express the numbers of conjugacy classes of the five types of non-trivial finite subgroups given in Table~\ref{table:covering}. We are going to use the symbols of that table also for the numbers of conjugacy classes in $\Gamma$,
where $\Gamma$ is a finite index subgroup in a Bianchi group. Recall that for $\ell = 2$ and $\ell = 3$, we can express the dimensions of the homology of $\Gamma$ with coefficients in the field ${\mathbb{F}_\ell}$ with $\ell$ elements
in degrees above the virtual cohomological dimension of the Bianchi groups -- which is $2$ -- by the Poincar\'e series $$P^\ell_\Gamma(t) := \sum\limits_{q \thinspace > \thinspace 2}^{\infty} \dim_{\mathbb{F}_\ell} \Homol_q \left(\Gamma;\thinspace {\mathbb{F}_\ell} \right)\thinspace t^q,$$ which was suggested by Grunewald. Further, let $P_{\circlegraph} (t) := \frac{-2t^3}{t-1}$, which equals the Poincar\'e series $P^2_\Gamma(t)$ of those groups $\Gamma$ for which the quotient of the reduced $2$-torsion subcomplex is a circle. Denote by \begin{itemize} \item $P_{{\mathcal{D}}_2}^*(t) := \frac{-t^3(3t -5)}{2(t-1)^2}$, the Poincar\'e series over $$\dim_{\mathbb{F}_2} \Homol_q \left({\mathcal{D}}_2;\thinspace {\mathbb{F}_2} \right) -\frac{3}{2}\dim_{\mathbb{F}_2} \Homol_q \left({\mathbb{Z}}/2;\thinspace {\mathbb{F}_2} \right)$$ \item and by $P_{{\mathcal{A}}_4}^*(t) := \frac{-t^3(t^3 - 2t^2 + 2t - 3)}{2(t-1)^2 (t^2 + t + 1 ) }$, the Poincar\'e series over $$\dim_{\mathbb{F}_2} \Homol_q \left({\mathcal{A}}_4;\thinspace {\mathbb{F}_2} \right) -\frac{1}{2}\dim_{\mathbb{F}_2} \Homol_q \left({\mathbb{Z}}/2;\thinspace {\mathbb{F}_2} \right).$$ \end{itemize}
In 3-torsion, let $P_{\edgegraph} (t) := \frac{-t^3(t^2 - t + 2)}{(t-1)(t^2+1)}$, which equals the Poincar\'e series $P^3_\Gamma(t)$ of those Bianchi groups for which the quotient of the reduced $3$-torsion subcomplex is a single edge without identifications.
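The closed forms of the $2$-torsion series above can be checked against their definitions. The following minimal Python sketch (not part of the original computation) uses the standard facts that $\Homol_*({\mathcal{D}}_2; {\mathbb{F}}_2)$ has dimension $q+1$ in degree $q$ (polynomial ring on two degree-one generators) and that $\Homol_q({\mathbb{Z}}/2; {\mathbb{F}}_2)$ is one-dimensional in every degree; it compares exact partial sums of the defining series against the rational functions:

```python
from fractions import Fraction

# Standard dimensions over F_2:
# H_*(D_2; F_2) = F_2[x, y], so dim H_q = q + 1;
# H_*(Z/2; F_2) has dimension 1 in every degree.
def dim_D2(q):
    return q + 1

def dim_Z2(q):
    return 1

def partial_series(coeff, t, terms=600):
    # Exact partial sum of sum_{q > 2} coeff(q) * t^q.
    return sum(coeff(q) * t**q for q in range(3, terms))

def P_circle(t):          # P_circle(t) = -2 t^3 / (t - 1)
    return -2 * t**3 / (t - 1)

def P_D2_star(t):         # P*_{D_2}(t) = -t^3 (3t - 5) / (2 (t - 1)^2)
    return -t**3 * (3*t - 5) / (2 * (t - 1)**2)

t = Fraction(1, 2)
eps = Fraction(1, 10**100)
# Circle component: the coefficient is 2 in every degree q > 2.
assert abs(partial_series(lambda q: Fraction(2), t) - P_circle(t)) < eps
# D_2 component: coefficient dim H_q(D_2) - (3/2) dim H_q(Z/2) = q + 1 - 3/2.
assert abs(partial_series(lambda q: dim_D2(q) - Fraction(3, 2) * dim_Z2(q), t)
           - P_D2_star(t)) < eps
```

Evaluating at a rational point with exact arithmetic makes the tail error of the truncated series provably negligible, so the assertions genuinely test the closed forms.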
\vbox{ \begin{theorem}[\cite{Rahm:formulas}] \label{Grunewald-Poincare series formulas} For any finite index subgroup $\Gamma$ in a Bianchi group with units $\{\pm 1\}$, the group homology in degrees above its virtual cohomological dimension is given by the Poincar\'e series $$P^2_\Gamma(t) = \left(\lambda_4 -\frac{3\mu_2 -2\mu_T}{2}\right)P_{\circlegraph} (t) +(\mu_2 -\mu_T)P_{{\mathcal{D}}_2}^*(t) +\mu_T P_{{\mathcal{A}}_4}^*(t)$$ and $$P^3_\Gamma(t) = \left(\lambda_6 -\frac{\mu_3}{2}\right)P_{\circlegraph} (t) + \frac{\mu_3}{2}P_{\edgegraph}(t).$$ \end{theorem} }
More general results are stated in Section~\ref{formulas for the Farrell--Tate cohomology} below.
\subsubsection{Example: Farrell--Tate cohomology of Coxeter (tetrahedral) groups} Recall that a Coxeter group is a group admitting a presentation
$$\langle g_1, g_2, \ldots, g_n \medspace | \medspace (g_i g_j)^{m_{i,j}} = 1 \rangle,$$ where $m_{i,i} = 1$; for $i \neq j$ we have $m_{i,j} \geq 2$;
and $m_{i,j} = \infty$ is permitted, meaning that $(g_i g_j)$ is not of finite order. As the Coxeter groups admit a contractible classifying space for proper actions \cite{Davis},
their Farrell--Tate cohomology yields all of their group cohomology. So in this section, we make use of this fact to determine the latter. For facts about Coxeter groups, and especially for the Davis complex, we refer to \cite{Davis}. Recall that the simplest example of a Coxeter group, the dihedral group $\mathcal{D}_n$, is an extension $$ 1 \to {\mathbb{Z}}/n \to \mathcal{D}_n \to {\mathbb{Z}}/2 \to 1.$$ So we can make use of the original application~\cite{Wall} of Wall's lemma to obtain its mod $\ell$ homology for prime numbers
$\ell >2$, $$ \Homol_q(\mathcal{D}_n; \thinspace {\mathbb{Z}}/\ell) \cong \begin{cases}
{\mathbb{Z}}/\ell, & q = 0, \\
{\mathbb{Z}}/{\rm gcd}(n,\ell), & q \equiv 3 \medspace {\rm or} \medspace 4 \mod 4, \\
0, & {\rm otherwise}. \end{cases} $$ \begin{theorem}[\cite{Rahm:formulas}] \label{small rank Coxeter groups}
Let $\ell > 2$ be a prime number.
Let $\Gamma$ be a Coxeter group admitting a Coxeter system with at most four generators,
and relator orders not divisible by~$\ell^2$. Let $Z_{(\ell)}$ be the $\ell$--torsion sub-complex of the Davis complex of~$\Gamma$. If $Z_{(\ell)}$ is at most one-dimensional and its orbit space contains no loop or bifurcation, then the mod~$\ell$ homology of~$\Gamma$ is isomorphic to $\left(\Homol_q(\mathcal{D}_\ell; \thinspace {\mathbb{Z}}/\ell)\right)^m$, with $m$ the number of connected components of the orbit space of~$Z_{(\ell)}$. \end{theorem} The conditions of this theorem are for instance fulfilled by the Coxeter tetrahedral groups;
the exponent $m$ has been specified for each of them in the tables in~\cite{Rahm:formulas}. In the easier case of Coxeter triangle groups, we can sharpen the statement as follows.
The non-spherical and hence infinite \emph{Coxeter triangle groups} are given by the presentation $$
\langle\, a, b, c \;|\; a^2 = b^2 = c^2 = (ab)^p = (bc)^q =
(c a)^r = 1 \,\rangle\, , $$ where $2 \leq p,q,r \in {\mathbb{N}}$ and $\frac{1}{p} + \frac{1}{q} + \frac{1}{r} \le 1$.
\begin{proposition}[\cite{Rahm:formulas}] For any prime number $\ell>2$, the {\rm mod} $\ell$ homology of a Coxeter triangle group is given as the direct sum over the {\rm mod} $\ell$ homology of the dihedral groups ${\mathcal D}_p$, ${\mathcal D}_q$ and ${\mathcal D}_r$. \end{proposition}
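For instance, for the hyperbolic $(2,3,7)$ triangle group ($p = 2$, $q = 3$, $r = 7$) and $\ell = 7$, the summands for ${\mathcal D}_2$ and ${\mathcal D}_3$ contribute nothing in positive degrees since $\gcd(2,7) = \gcd(3,7) = 1$, so the displayed formula for $\Homol_q(\mathcal{D}_n; \thinspace {\mathbb{Z}}/\ell)$ yields, in positive degrees,
$$ \Homol_q\left(\Gamma_{(2,3,7)}; \thinspace {\mathbb{Z}}/7\right) \cong \begin{cases}
{\mathbb{Z}}/7, & q \equiv 3 \medspace {\rm or} \medspace 4 \mod 4, \\
0, & {\rm otherwise}. \end{cases} $$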
\subsection{The non-central torsion subcomplex} \label{The non-central torsion subcomplex}
In the case of a trivial kernel of the action on the polytopal $\Gamma$-cell complex, torsion subcomplex reduction allows one to establish general formulas for the Farrell--Tate cohomology of~$\Gamma$ \cite{Rahm:formulas}. In contrast, for instance the action of $\mathrm{SL}_2\left(\mathcal{O}_{-m}\right)$ on hyperbolic $3$-space has the $2$-torsion group $\{\pm 1\}$ in the kernel; since every cell stabiliser contains $2$-torsion, the $2$-torsion subcomplex then does not ease our calculation in any way. We can remedy this situation by considering the following object, on whose cells we impose a supplementary property.
\begin{df} \label{non-central torsion subcomplex}
The \emph{non-central $\ell$-torsion subcomplex} of a polytopal $\Gamma$-cell complex $X$
consists of all the cells of $X$
whose stabilisers in~$\Gamma$ contain elements of order $\ell$ that are not in the center of~$\Gamma$. \end{df}
We note that this definition yields a correspondence between, on one side, the \textit{non-central} $\ell$-torsion subcomplex for a group action with kernel the center of the group,
and on the other side, the $\ell$-torsion subcomplex for its central quotient group. In~\cite{BerkoveRahm}, this correspondence has been used in order to identify the \textit{non-central} $\ell$-torsion subcomplex for the action of $\mathrm{SL}_2\left(\mathcal{O}_{-m}\right)$
on hyperbolic $3$-space as the $\ell$-torsion subcomplex of $\mathrm{PSL}_2\left(\mathcal{O}_{-m}\right)$.
However, incorporating the non-central condition for $\mathrm{SL}_2\left(\mathcal{O}_{-m}\right)$
introduces significant technical obstacles,
which were addressed in that paper, establishing
the following theorem for any finite index subgroup $\Gamma$ in $\mathrm{SL}_2\left(\mathcal{O}_{-m}\right)$. Denote by $X$ a $\Gamma$-equivariant retract of SL$_2({\mathbb{C}})/$SU$_2$, by $X_s$ the $2$-torsion subcomplex with respect to P$\Gamma$ (the ``non-central'' $2$-torsion subcomplex for $\Gamma$), and by $X_s^\prime$ the part of it with higher $2$-rank. Further, let $v$ denote the number of conjugacy classes of subgroups of higher $2$-rank, and define ${\rm sign}(v) := $\scriptsize$\begin{cases}
0, & v = 0,\\
1,& v> 0.
\end{cases}$\normalsize
\\ For $q \in \{1, 2\}$, denote the dimension $\dim_{{\mathbb{F}}_2}\Cohomol^q(_\Gamma \backslash X ; \thinspace {\mathbb{F}}_2)$ by $\beta^q$.
\vbox{ \begin{thm}[\cite{BerkoveRahm}] \label{E2 page}
The $E_2$ page of the equivariant spectral sequence with ${\mathbb{F}}_2$-coefficients
associated to the action of $\Gamma$ on $X$ is concentrated in the columns $n \in \{0, 1, 2\}$ and has the following form.
\[
\begin{array}{l | cccl} q = 4k+3 & E_2^{0,3}(X_s) & E_2^{1,3}(X_s) \oplus ({\mathbb F}_2)^{a_1} & ({\mathbb F}_2)^{a_2} \\ q = 4k+2 & \Cohomol^2_\Gamma(X_s^\prime) \oplus ({\mathbb F}_2)^{1 -{\rm sign}(v)} & ({\mathbb F}_2)^{a_3}& \Cohomol^2(_\Gamma \backslash X) \\ q = 4k+1 & E_2^{0,1}(X_s) & E_2^{1,1}(X_s) \oplus ({\mathbb F}_2)^{a_1} & ({\mathbb F}_2)^{a_2} \\ q = 4k & {\mathbb{F}}_2 & \Cohomol^1(_\Gamma \backslash X) & \Cohomol^2(_\Gamma \backslash X) \\ \hline k \in \mathbb{N} \cup \{0\} & n = 0 & n = 1 & n = 2 \end{array} \] where \[\begin{array}{ll} a_1 & = \chi(_\Gamma \backslash X_s) -1 +\beta^1(_\Gamma \backslash X) +c \\ a_2 & = \beta^{2} (_\Gamma \backslash X) +c \\ a_3 & = \beta^{1} (_\Gamma \backslash X) +v -{\rm sign}(v). \end{array} \] \end{thm} }
In order to derive the example stated in Section~\ref{non-trivial-centre} below, we combine the latter theorem with the following determination (carried out in~\cite{BerkoveRahm}) of the $d_2$-differentials on the four possible (cf. Table~\ref{table:subcomplexes}) connected component types $\circlegraph$, $\edgegraph$, $\graphFive$ and $\graphTwo$ of the reduced non-central $2$-torsion subcomplex for the full SL$_2$ groups over the imaginary quadratic number rings. \begin{lemma}[\cite{BerkoveRahm}] \label{d_2 lemma} The $d_2$ differential in the equivariant spectral sequence associated to the action of $\mathrm{SL}_2(\mathcal{O}_{-m})$ on hyperbolic space is trivial on components of the non-central $2$-torsion subcomplex quotient \begin{itemize}
\item of type $\circlegraph$ in dimensions $q \equiv 1 \bmod 4$ if and only if it is trivial on these components in dimensions $q \equiv 3 \bmod 4$. \item of type $\edgegraph$. \item of types $\graphTwo$ and $\graphFive$ in dimensions $q \equiv 3 \bmod 4$. \end{itemize} \end{lemma}
\begin{table}
\begin{center}
\caption{Connected component types of reduced torsion subcomplex quotients for the PSL$_2$ Bianchi groups.
The exhaustiveness of this table has been established using theorems of Kr\"amer \cite{BerkoveRahm}.} \label{table:subcomplexes}
\label{one}
\footnotesize
\begin{tabular}{|c|c|c|c|c|}
\hline & & &&\\ \begin{tabular}{c}$2$--torsion\\subcomplex \\components\end{tabular} & \begin{tabular}{c}counted \\by \end{tabular} & & \begin{tabular}{c}$3$--torsion\\subcomplex \\components\end{tabular} & \begin{tabular}{c}counted \\by \end{tabular} \\ \hline & & &&\\
$\circlegraph \thinspace {\mathbb{Z}}/2$ & $o_2 = \lambda_4 -\lambda_4^* $ & & $\circlegraph \thinspace {\mathbb{Z}}/3$ & $o_3 = \lambda_6 -\lambda_6^* $\\
& & & &\\ ${\mathcal{A}}_4 \edgegraph {\mathcal{A}}_4$ & $\iota_2$ & & ${\mathcal{D}}_3 \edgegraph {\mathcal{D}}_3$ & $\iota_3 = \lambda_6^* $\\
& & &&\\ ${\mathcal{D}}_2 \graphFive \thinspace {\mathcal{D}}_2$ & $\theta$&&&\\
& & &&\\ ${\mathcal{D}}_2 \graphTwo {\mathcal{A}}_4$ & $\rho$ &&&\\ \hline \end{tabular} \normalsize \end{center} \end{table}
\section{Applications of the technique and their results} \label{Results} This section states some of the results obtained with the technique described in Section~\ref{techniques}.
\subsection{The Bianchi groups and their congruence subgroups} \label{The Bianchi groups} In the case of the PSL$_2$ groups over rings of imaginary quadratic integers (known as the Bianchi groups), the torsion subcomplex reduction technique has permitted the author to find a description of the cohomology ring of these groups in terms of elementary number-theoretic quantities~\cite{Rahm:formulas}. The key step has been to extract, using torsion subcomplex reduction, the essential information about the geometric models, and then to detach the cohomological information completely from the model.
Torsion subcomplex reduction combined with an analysis of the equivariant spectral sequence by Ethan Berkove, Grant Lakeland and the author provides new tools for the calculation of the torsion in the cohomology of congruence subgroups in the Bianchi groups~\cite{BLR}.
\subsection{The Coxeter groups} \label{The Coxeter groups} Let us recall that the Coxeter groups are generated by reflections, and their homology consists solely of torsion. Thus, torsion subcomplex reduction allows one to obtain all homology groups for all of the tetrahedral Coxeter groups at all odd prime numbers, in terms of a general formula~\cite{Rahm:formulas}.
\subsection{The SL\texorpdfstring{$_2$}{(2)} groups over arbitrary number rings}\label{formulas for the Farrell--Tate cohomology} Matthias Wendt and the author established a complete description of the Farrell--Tate cohomology with odd torsion coefficients for all groups $\operatorname{SL}_2(\mathcal{O}_{K,S})$, where $\mathcal{O}_{K,S}$ is the ring of $S$-integers in an arbitrary number field $K$ at an arbitrary non-empty finite set $S$ of places of $K$ containing the infinite places~\cite{RahmWendt}, based on an explicit description of conjugacy classes of finite cyclic subgroups and their normalizers in $\operatorname{SL}_2(\mathcal{O}_{K,S})$.
The statement uses the following notation. Let $\ell$ be an odd prime number different from the characteristic of $K$. In the situation where, for $\zeta_\ell$ some primitive $\ell$-th root of unity, $\zeta_\ell+\zeta_\ell^{-1}\in K$, we will abuse notation and write $\mathcal{O}_{K,S}[\zeta_\ell]$ to mean the ring $\mathcal{O}_{K,S}[T]/(T^2-(\zeta_\ell+\zeta_\ell^{-1})T+1)$. Moreover, we denote the norm maps for class groups and units by $$ \operatorname{Nm}_0:
\widetilde{\operatorname{K}_0}(\mathcal{O}_{K,S}[\zeta_\ell])\to
\widetilde{\operatorname{K}_0}(\mathcal{O}_{K,S}) \qquad\textrm{ and }\qquad \operatorname{Nm}_1:\mathcal{O}_{K,S}[\zeta_\ell]^\times\to
\mathcal{O}_{K,S}^\times. $$ Denote by $M_{(\ell)}$ the $\ell$-primary part of a module $M$; by $N_G(\Gamma)$ the normalizer of $\Gamma$ in $G$; and by $\widehat{\operatorname{H}}^\bullet$ Farrell--Tate cohomology (cf. Section~\ref{background}).
\begin{theorem} [\cite{RahmWendt}] \label{thm:gl2nf} ${}$ \begin{enumerate} \item
$\widehat{\operatorname{H}}^\bullet(\operatorname{SL}_2(\mathcal{O}_{K,S}),\mathbb{F}_\ell)\neq
0$ if and only if \\
$\zeta_\ell+\zeta_\ell^{-1}\in K$ and the Steinitz
class $\det_{\mathcal{O}_{K,S}}(\mathcal{O}_{K,S}[\zeta_\ell])$ is
contained in the image of the norm map $\operatorname{Nm}_0$. \item Assume the condition in (1) is satisfied.
The set $\mathcal{C}_\ell$ of conjugacy classes of order $\ell$
elements in $\operatorname{SL}_2(\mathcal{O}_{K,S})$
sits in an extension $$ 1\to \operatorname{coker}\operatorname{Nm}_1\to \mathcal{C}_\ell\to \ker\operatorname{Nm}_0\to 0. $$ The set $\mathcal{K}_\ell$ of conjugacy classes of order $\ell$ subgroups of $\operatorname{SL}_2(\mathcal{O}_{K,S})$ can be identified with the quotient $\mathcal{K}_\ell=\mathcal{C}_\ell/\operatorname{Gal}(K(\zeta_\ell)/K)$. There is a direct sum decomposition $$ \widehat{\operatorname{H}}^\bullet(\operatorname{SL}_2(\mathcal{O}_{K,S}),\mathbb{F}_\ell)\cong \bigoplus_{[\Gamma]\in\mathcal{K}_\ell} \widehat{\operatorname{H}}^\bullet(N_{\operatorname{SL}_2(\mathcal{O}_{K,S})}(\Gamma), \mathbb{F}_\ell) $$ which is compatible with the ring structure, i.e., the Farrell--Tate cohomology ring of $\operatorname{SL}_2(\mathcal{O}_{K,S})$ is a direct sum of the sub-rings for the normalizers $N_{\operatorname{SL}_2(\mathcal{O}_{K,S})}(\Gamma)$.
\item If the class of $\Gamma$ is not $\operatorname{Gal}(K(\zeta_\ell)/K)$-invariant, then $$N_{\operatorname{SL}_2(\mathcal{O}_{K,S})}(\Gamma)\cong \ker\operatorname{Nm}_1.$$ There is a degree $2$ cohomology class $a_2$ and a ring isomorphism $$ \widehat{\operatorname{H}}^\bullet(\ker\operatorname{Nm}_1,\mathbb{Z})_{(\ell)}\cong \mathbb{F}_\ell[a_2,a_2^{-1}]\otimes_{\mathbb{F}_\ell}\bigwedge \left(\ker\operatorname{Nm}_1\right). $$ In particular, this is a free module over the subring $\mathbb{F}_\ell[a_2^2,a_2^{-2}]$. \item If the class of $\Gamma$ is $\operatorname{Gal}(K(\zeta_\ell)/K)$-invariant, then there is an extension $$ 0\to \ker\operatorname{Nm}_1\to N_{\operatorname{SL}_2(\mathcal{O}_{K,S})}(\Gamma)\to \mathbb{Z}/2\to 1. $$ There is a ring isomorphism $$ \widehat{\operatorname{H}}^\bullet(N_{\operatorname{SL}_2(\mathcal{O}_{K,S})}(\Gamma),\mathbb{Z})_{(\ell)}\cong \left(\mathbb{F}_\ell[a_2,a_2^{-1}]\otimes_{\mathbb{F}_\ell}\bigwedge \left(\ker\operatorname{Nm}_1\right)\right)^{\mathbb{Z}/2}, $$ with the $\mathbb{Z}/2$-action given by multiplication with $-1$ on $a_2$ and $\ker\operatorname{Nm}_1$. In particular, this is a free module over the subring $$\mathbb{F}_\ell[a_2^2,a_2^{-2}]\cong \widehat{\operatorname{H}}^\bullet(D_{2\ell},\mathbb{Z})_{(\ell)}.$$ \item The restriction map induced from the inclusion
$\operatorname{SL}_2(\mathcal{O}_{K,S})\to
\operatorname{SL}_2(\mathbb{C})$ maps the second Chern class
$c_2$ to the sum of the elements $a_2^2$ in all the components. \end{enumerate} \end{theorem} Wendt has extended this investigation to the cases of $\operatorname{SL}_2$ over the ring of functions on a smooth affine curve over an algebraically closed field~\cite{sl2parabolic}.
\subsection{Farrell--Tate cohomology of higher rank arithmetic groups} \label{GL3} Pertinent progress was also made on the Farrell--Tate cohomology of GL$_3$ over rings of quadratic integers. For this purpose, the conjugacy classification of cyclic subgroups was reduced to the classification of modules of group rings over suitable rings of integers which are principal ideal domains, generalizing an old result of Reiner. As an example of the number-theoretic input required for the Farrell--Tate cohomology computations, Bui, Wendt and the author describe the homological torsion in PGL$_3$ over principal ideal rings of quadratic integers, accompanied by machine computations in the imaginary quadratic case~\cite{BuiRahmWendt:GL3om}.
For machine calculations of Farrell--Tate or Bredon (co)homology, one needs cell complexes where cell stabilizers fix their cells pointwise. Bui, Wendt and the author provided two algorithms computing an efficient subdivision of a complex to achieve this rigidity property~\cite{BuiRahmWendt:Farrell-Tate}. Applying these algorithms to available cell complexes for {PSL}$_4({\mathbb{Z}})$, they computed the Farrell--Tate cohomology for small primes as well as the Bredon homology for the classifying spaces of proper actions with coefficients in the complex representation ring.
\subsection{Adaptation of the technique to groups with non-trivial centre} \label{non-trivial-centre} Berkove and the author~\cite{BerkoveRahm} extended the technique of torsion subcomplex reduction, which originally was designed for groups with trivial centre (e.g., PSL$_2$), to groups with non-trivial centre (e.g., SL$_2$). This way, they determined the $2$-torsion in the cohomology of the SL$_2$ groups over imaginary quadratic number rings~$\mathcal{O}_{-m}$ in ${\mathbb{Q}}(\sqrt{-m})$, based on their action on hyperbolic 3-space~$\mathcal{H}$. For instance, they obtain the following result in the case where the quotient of the $2$--torsion subcomplex has the shape $\edgegraph$, which is equivalent to the following three conditions (cf.~\cite{Rahm:formulas}): $m \equiv 3 \bmod 8$, the field ${\mathbb{Q}}(\sqrt{-m})$ has precisely one finite ramification place over ${\mathbb{Q}}$, and the ideal class number of the totally real number field ${\mathbb{Q}}(\sqrt{m})$ is $1$. Under these assumptions, the cohomology ring has the following dimensions: $$ \dim_{{\mathbb{F}}_2}\Cohomol^{q}(\mathrm{SL}_2\left(\mathcal{O}_{-m}\right); \thinspace {\mathbb{F}}_2) = \begin{cases}
\beta^{1} +\beta^{2} , & q = 4k+5, \\
\beta^1 +\beta^{2} +2, & q = 4k+4, \\
\beta^{1} + \beta^{2}+3, & q = 4k+3, \\
\beta^1+\beta^{2} +1, & q = 4k+2, \\
\beta^{1}, & q = 1, \\ \end{cases} $$ where $\beta^q := \dim_{{\mathbb{F}}_2}\Cohomol^q(_{\mathrm{SL}_2\left(\mathcal{O}_{-m}\right)} \backslash \mathcal{H} ; \thinspace {\mathbb{F}}_2)$. Let $ \beta_1 := \dim_{{\mathbb{Q}}}\Cohomol_1(_{\mathrm{SL}_2\left(\mathcal{O}_{-m}\right)} \backslash \mathcal{H} ; \thinspace {\mathbb{Q}}).$ For all absolute values of the discriminant less than $296$, numerical calculations yield $\beta^2 +1 = \beta^1= \beta_1.$ In this range, the numbers $m$ subject to the above dimension formula and $\beta_1$ are given as follows
(the Betti numbers are computed in a previous paper of the author~\cite{Rahm:higher_torsion}).
$$\begin{array}{l|ccccccccccccccccccccccccccc} m & 11 & 19 & 43 & 59 & 67 & 83 & 107 & 131 & 139 & 163 & 179 & 211 & 227 & 251 & 283\\
\hline \beta_1 & 1 & 1 & 2 & 4 & 3 & 5 & 6 & 8 & 7 & 7 & 10 & 10 & 12 & 14 & 13\\ \end{array}$$
This result is a consequence of Theorem~\ref{E2 page}, combined with Lemma~\ref{d_2 lemma} above.
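As an illustration, the dimension formula above can be evaluated mechanically for the tabulated values of $m$. The following hypothetical helper script (not part of the cited computations) uses the numerically observed relation $\beta^2 + 1 = \beta^1 = \beta_1$, valid in the stated range of discriminants:

```python
# beta_1 values from the table above (valid for |discriminant| < 296).
beta_1 = {11: 1, 19: 1, 43: 2, 59: 4, 67: 3, 83: 5, 107: 6, 131: 8,
          139: 7, 163: 7, 179: 10, 211: 10, 227: 12, 251: 14, 283: 13}

def sl2_mod2_dim(m, q):
    """dim_{F_2} H^q(SL_2(O_{-m}); F_2) for q >= 1, per the case distinction above."""
    b1 = beta_1[m]       # beta^1 = beta_1  (numerically observed relation)
    b2 = b1 - 1          # beta^2 = beta^1 - 1
    if q == 1:
        return b1
    r = q % 4
    if r == 2:           # q = 4k + 2
        return b1 + b2 + 1
    if r == 3:           # q = 4k + 3
        return b1 + b2 + 3
    if r == 0:           # q = 4k + 4
        return b1 + b2 + 2
    return b1 + b2       # q = 4k + 5

# e.g. for m = 11 (beta_1 = 1): dimensions 1, 2, 4, 3, 1, 2 in degrees q = 1, ..., 6
```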
\subsection{Investigation of the refined Quillen conjecture} \label{QC} The Quillen conjecture on the cohomology of arithmetic groups has spurred a great deal of mathematics (see the pertinent monograph \cite{Knudson:book}). Using Farrell--Tate cohomology computations, Wendt and the author established further positive cases for the Quillen conjecture for $\operatorname{SL}_2$. In detail, the original conjecture of 1971~\cite{Quillen} is as follows for GL$_n$. \begin{conjecture}[Quillen] \label{Quillen-conjecture} Let $\ell$ be a prime number. Let $K$ be a number field with $\zeta_\ell\in K$, and $S$ a finite set of places containing the infinite places and the places over $\ell$. Then the natural inclusion $\mathcal{O}_{K,S}\hookrightarrow \mathbb{C}$ makes $\operatorname{H}^\bullet(\operatorname{GL}_n(\mathcal{O}_{K,S}),\mathbb{F}_\ell)$ a free module over the cohomology ring $\operatorname{H}^\bullet_{\operatorname{cts}}(\operatorname{GL}_n(\mathbb{C}),\mathbb{F}_\ell)$. \end{conjecture} While there are counterexamples to the original version of the conjecture, it holds true in many other cases. From the first counterexamples through the present, the conjecture has kept researchers interested in determining its range of validity~\cite{Anton-mod5}.
Positive cases in which the conjecture has been established are $n=\ell=2$ by Mitchell \cite{Mitchell}, $n=3$, $\ell=2$ by Henn \cite{Henn}, and $n=2$, $\ell=3$ by Anton \cite{anton}.
On the other hand, cases where the Quillen conjecture is known to be false can all be traced to a remark by Henn, Lannes and Schwartz~\cite{henn:lannes:schwartz}*{remark on p. 51}, which shows that Quillen's conjecture for $\operatorname{GL}_n(\mathbb{Z}[1/2])$ implies that the restriction map $$ \operatorname{H}^\bullet(\operatorname{GL}_n(\mathbb{Z}[1/2]),\mathbb{F}_2)\to \operatorname{H}^\bullet(\operatorname{T}_n(\mathbb{Z}[1/2]),\mathbb{F}_2) $$ from $\operatorname{GL}_n(\mathbb{Z}[1/2])$ to the subgroup $\operatorname{T}_n(\mathbb{Z}[1/2])$ of diagonal matrices is injective. Non-injectivity of the restriction map has been shown by Dwyer \cite{dwyer} for $n\geq 32$ and $\ell=2$. Dwyer's bound was subsequently improved by Henn and Lannes to $n\geq 14$. At the prime $\ell=3$, Anton~\cite{anton} proved non-injectivity for $n\geq 27$.
Wendt's and the author's contribution is that we can determine precisely the module structure above the virtual cohomological dimension; this has allowed us to relate the Quillen conjecture for $\operatorname{SL}_2$ to statements about Steinberg homology (recall Section~\ref{Farrell--Tate cohomology and Steinberg homology}).
This, together with the results of~\cite{sl2parabolic}, has allowed us to find a refined version of the Quillen conjecture, which keeps track of all the types of known counter-examples to the original Quillen conjecture:
\begin{conjecture}[Refined Quillen conjecture \cite{qcnote}] \label{refined Quillen-conjecture} Let $K$ be a number field. Fix a prime $\ell$ such that $\zeta_\ell\in K$, and an integer $n<\ell$. Assume that $S$ is a set of places containing the infinite places and the places lying over $\ell$. If each cohomology class of $\operatorname{GL}_n(\mathcal{O}_{K,S})$ is detected on some finite subgroup, then $\operatorname{H}^\bullet(\operatorname{GL}_n(\mathcal{O}_{K,S}),\mathbb{F}_\ell)$ is a free module over the image of the restriction map $$\operatorname{H}^\bullet_{\operatorname{cts}}(\operatorname{GL}_n(\mathbb{C}),\mathbb{F}_\ell)\to \operatorname{H}^\bullet(\operatorname{GL}_n(\mathcal{O}_{K,S}),\mathbb{F}_\ell).$$ \end{conjecture}
We can make the following use of the description of the Farrell--Tate cohomology of SL$_2$ over rings of $S$-integers.
\begin{corollary}[Corollary to Theorem \ref{thm:gl2nf}] Let $K$ be a number field, let $S$ be a finite set of places containing the infinite ones, and let $\ell$ be an odd prime. \begin{enumerate} \item The original Quillen conjecture holds for group cohomology
$\operatorname{H}^\bullet(\operatorname{SL}_2(\mathcal{O}_{K,S}),\mathbb{F}_\ell)$ above
the virtual cohomological dimension. \item The refined Quillen conjecture holds for Farrell--Tate cohomology
$\widehat{\operatorname{H}}^\bullet(\operatorname{SL}_2(\mathcal{O}_{K,S}),\mathbb{F}_\ell)$. \end{enumerate} \end{corollary}
\subsection*{Verification of the Quillen conjecture in the rank 2 imaginary quadratic case} Bui and the author confirm a conjecture of Quillen in the case of the mod $2$ cohomology of arithmetic groups ${\rm SL}_2({\mathcal{O}}_{\Q(\sqrt{-m}\thinspace )}[\frac{1}{2}])$, where ${\mathcal{O}}_{\Q(\sqrt{-m}\thinspace )}$ is an imaginary quadratic ring of integers. To make explicit the free module structure on the cohomology ring conjectured by Quillen, they computed the mod $2$ cohomology of $\arithGrp$ via the amalgamated decomposition of the latter group~\cite{BuiRahm:Verification}.
\subsection{Application to equivariant \textit{K}-homology} \label{Bredon state} For the Bianchi groups, the torsion subcomplex reduction technique was adapted from group homology to Bredon homology $\Homol^\mathfrak{Fin}_n(\Gamma; \thinspace R_{\mathbb{C}})$ with coefficients in the complex representation rings, and with respect to the family of finite subgroups~\cite{Rahm:equivariant}. This has led the author to the following formulas for this Bredon homology, and by the Atiyah--Hirzebruch spectral sequence, to the formulas below for the equivariant $K$-homology of the Bianchi groups acting on their classifying space for proper actions.
We let a Bianchi group $\Gamma$ act on a $2$-dimensional retract $X$ of hyperbolic $3$-space. Denote by $\Gamma_\sigma$ the stabiliser of a cell $\sigma$, and by $R_{\mathbb{C}}(G)$ the complex representation ring of a group $G$. As $X$ is a model for the classifying space for proper $\Gamma$-actions, the homology of the Bredon chain complex of our $\Gamma$-cell complex $X$ is identical to the Bredon homology $\Homol^\mathfrak{Fin}_p(\Gamma; \thinspace R_{\mathbb{C}})$ of~$\Gamma$~\cite{Sanchez-Garcia}. This Bredon chain complex can be stated as $$\xymatrix{ 0 \to \bigoplus\limits_{\sigma \in \thinspace_\Gamma \backslash X^{(2)}} R_{\mathbb{C}} (\Gamma_\sigma) \ar[r]^{\Psi_2 } & \bigoplus\limits_{\sigma \in \thinspace_\Gamma \backslash X^{(1)}} R_{\mathbb{C}} (\Gamma_\sigma) \ar[r]^{\Psi_1} & \bigoplus\limits_{\sigma \in \thinspace_\Gamma \backslash X^{(0)}} R_{\mathbb{C}} (\Gamma_\sigma) \to 0, } $$ where the blocks of the differential matrices $\Psi_1$ and $\Psi_2$ are obtained by inducing homomorphisms on the involved complex representation rings from the cell stabiliser inclusions. Note that in general, the Bredon chain complex continues in higher dimensions, and its truncation at dimension $2$ results from the dimension of $X$.
\begin{theorem} \label{splitting} Let $\Gamma$ be a Bianchi group or any one of its subgroups.
Then the Bredon homology $\Homol^\mathfrak{Fin}_n(\Gamma; \thinspace R_{\mathbb{C}})$ splits as a direct sum of \begin{enumerate} \item[(1)] the orbit space homology $\Homol_n(\underbar{\rm B}\Gamma; \thinspace {\mathbb{Z}})$,
\item[(2)] a submodule $\Homol_n(\Psi_\bullet^{(2)})$ determined by the reduced $2$-torsion subcomplex of $(\underline{\rm E}\Gamma, \Gamma)$ and
\item[(3)] a submodule $\Homol_n(\Psi_\bullet^{(3)})$ determined by the reduced $3$-torsion subcomplex of $(\underline{\rm E}\Gamma, \Gamma)$. \end{enumerate} \end{theorem} These submodules are given as follows.
Except for the Gaussian and Eisenstein integers, which can easily be treated ad hoc~\cite{Rahm:noteAuxCRAS}, all the rings of integers of imaginary quadratic number fields admit $\{\pm 1\}$ as their only units. In the latter case, we call $\mathrm{PSL}_2(\mathcal{O}_{-m})$ a \textit{Bianchi group with units} $\{\pm 1\}$. \begin{theorem} \label{2}
The $2$-torsion part of the Bredon complex of a {Bianchi group $\Gamma$ with units} $\{\pm 1\}$ has homology
\begin{center}
$\Homol_n(\Psi_\bullet^{(2)}) \cong \begin{cases}
{\mathbb{Z}}^{z_2}\oplus ({\mathbb{Z}}/2)^\frac{d_2}{2},& n = 0,\\
{\mathbb{Z}}^{o_2},& n = 1,\\
0,&\text{\rm otherwise},
\end{cases}
$
\end{center} where $z_2$ counts the number of conjugacy classes of subgroups of type ${\mathbb{Z}}/2$ in $\Gamma$, $o_2$ counts the conjugacy classes of subgroups of type ${\mathbb{Z}}/2$ in $\Gamma$ which are not contained in any $2$-dihedral subgroup, and $d_2$ counts the number of conjugacy classes of $2$-dihedral subgroups, whether or not they are contained in a tetrahedral subgroup of $\Gamma$. \end{theorem}
\vbox{ \begin{theorem} \label{3}
The $3$-torsion part of the Bredon complex of a {Bianchi group $\Gamma$ with units} $\{\pm 1\}$ has homology
\begin{center}
$\Homol_n(\Psi_\bullet^{(3)}) \cong \begin{cases}
{\mathbb{Z}}^{2o_3+\iota_3},& n = 0 \medspace \text{\rm or }1,\\
0,&\text{\rm otherwise},
\end{cases}
$
\end{center} where, amongst the subgroups of type ${\mathbb{Z}}/3$ in $\Gamma$, $o_3$ counts the conjugacy classes of those not contained in any $3$-dihedral subgroup, and $\iota_3$ counts the conjugacy classes of those contained in some $3$-dihedral subgroup of $\Gamma$. \end{theorem} }
There are formulas for $o_2, z_2, d_2, o_3$ and $\iota_3$ in terms of elementary number-theoretic quantities~\cite{Kraemer:Diplom}, which are readily computable by machine~\cite{Rahm:formulas}*{appendix}. See Table~\ref{table:subcomplexes} for how they relate to the types of connected components of torsion subcomplexes.
In the following corollary, we deduce formulas for the equivariant $K$-homology of the Bianchi groups. Note for this purpose that for a Bianchi group $\Gamma$, there is a model for \underline{E}$\Gamma$ of dimension 2, so $\Homol_2(\underline{\rm B}\Gamma ; \thinspace {\mathbb{Z}}) \cong {\mathbb{Z}}^{\beta_2}$ is torsion-free. Note also that the naive Euler characteristic of the Bianchi groups vanishes (again excluding the two special cases of Gaussian and Eisenstein integers), that is, for $\beta_i = \dim \Homol_i(\underline{\rm B}\Gamma ; \thinspace {\mathbb{Q}})$ we have $\beta_0 -\beta_1 +\beta_2 = 0$ and $\beta_0 = 1$.
Whenever we have a classifying space for proper $G$-actions of dimension at most $2$, the Atiyah--Hirzebruch spectral sequence from its Bredon homology to the equivariant \mbox{$K$-homology} $K^G_j(\underbar{\rm E}G)$ of a group $G$ degenerates on the $E^2$-page and directly yields the following theorem, which can be found in the book by Mislin and Valette. Note that by Bott periodicity, only two indices $j = 0$ and $j = 1$ are relevant. \begin{thm}[\cite{MislinValette}] \label{Bredon_to_K-homology}
Let $G$ be an arbitrary group such that $\dim \underbar{\rm E}G \leq 2$. Then there is a natural short exact sequence $$0 \to \Homol^\mathfrak{Fin}_0(G; R_{\mathbb{C}}) \to K^G_0(\underbar{\rm E}G) \to \Homol^\mathfrak{Fin}_2(G; R_{\mathbb{C}}) \to 0 $$ and a natural isomorphism $\Homol^\mathfrak{Fin}_1(G; R_{\mathbb{C}}) \cong K^G_1(\underbar{\rm E}G)$. \end{thm}
\begin{corollary}[Corollary to theorems \ref{splitting}, \ref{2}, \ref{3} and \ref{Bredon_to_K-homology}]
For any {Bianchi group $\Gamma$ with units} $\{\pm 1\}$, the short exact sequence linking Bredon homology and equivariant
$K$-homology splits, yielding $$K^\Gamma_0(\underbar{\rm E}\Gamma) \cong {\mathbb{Z}} \oplus {\mathbb{Z}}^{\beta_2} \oplus {\mathbb{Z}}^{z_2} \oplus ({\mathbb{Z}}/2)^\frac{d_2}{2} \oplus {\mathbb{Z}}^{2o_3+\iota_3}.$$ Furthermore, $K^\Gamma_1(\underbar{\rm E}\Gamma) \cong \Homol_1(\underline{\rm B}\Gamma; \thinspace {\mathbb{Z}}) \oplus {\mathbb{Z}}^{o_2} \oplus {\mathbb{Z}}^{2o_3+\iota_3}$. \end{corollary}
In order to adapt torsion subcomplex reduction to Bredon homology and prove Theorem~\ref{splitting}, we need to perform a ``representation ring splitting''.
\textit{Representation ring splitting}. \label{Representation ring splitting} Felix Klein's classification~\cite{Klein:binaereFormenMathAnn9} of the finite subgroups in $\mathrm{PSL}_2(\mathcal{O})$
is recalled in Table~\ref{table:covering}. We further use the existence of geometric models for the Bianchi groups in which all edge stabilisers are finite cyclic and all cells of dimension $2$ and higher are trivially stabilised. Therefore, the system of finite subgroups of the Bianchi groups admits inclusions only emanating from cyclic groups. This makes the Bianchi groups and their subgroups subject to the splitting of Bredon homology stated in Theorem~\ref{splitting}.
The proof of Theorem~\ref{splitting} is based on the above particularities of the Bianchi groups, and applies the following splitting lemma for the involved representation rings
to a Bredon complex for~$(\underline{\rm E}\Gamma, \Gamma)$.
\vbox{\begin{lemma}[\cite{Rahm:equivariant}] \label{representation ring splitting} Consider a group $\Gamma$ such that every one of its finite subgroups is either cyclic of order at most~$3$, or of one of the types ${\mathcal{D}}_2, {\mathcal{D}}_3$ or~${\mathcal{A}}_4$. Then there exist bases of the complex representation rings of the finite subgroups of~$\Gamma$,
such that simultaneously every morphism of representation rings
induced by inclusion of cyclic groups into finite subgroups of~$\Gamma$,
splits as a matrix into the following diagonal blocks. \begin{enumerate}
\item A block of rank $1$ induced by the trivial and regular representations,
\item a block induced by the $2$--torsion subgroups,
\item and a block induced by the $3$--torsion subgroups. \end{enumerate} \end{lemma} }
As this splitting holds simultaneously for every morphism of representation rings, we have such a splitting for every morphism of formal sums of representation rings, and hence for the differential maps of the Bredon complex for any Bianchi group and any of its subgroups.
The bases mentioned in the above lemma are obtained by elementary base transformations from the canonical basis of the complex representation ring of a finite group to a basis whose matrix form has \begin{itemize}
\item its first row concentrated in its first entry, for a finite cyclic group (edge stabiliser). The base transformation is carried out by summing over all representations to replace the trivial representation by the regular representation.
\item its first column concentrated in its first entry, for a finite non-cyclic group (vertex stabiliser). The base transformation is carried out by subtracting the trivial representation from each representation except the trivial representation itself.
\end{itemize} The details are provided in \cite{Rahm:equivariant}.
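For instance (our own illustration, not reproduced from \cite{Rahm:equivariant}): for an edge stabiliser of type ${\mathbb{Z}}/3$ with irreducible representations $1, \rho, \rho^2$, the first transformation replaces the trivial representation by the regular representation $1+\rho+\rho^2$, so the new basis $(1+\rho+\rho^2,\, \rho,\, \rho^2)$ is expressed in the canonical basis by

```latex
% base change on R_{\mathbb{C}}(\mathbb{Z}/3); columns are the new basis
% vectors written in the canonical basis (1, \rho, \rho^2)
T =
\begin{pmatrix}
1 & 0 & 0 \\
1 & 1 & 0 \\
1 & 0 & 1
\end{pmatrix},
\qquad
\det T = 1,
```

so $T$ is invertible over ${\mathbb{Z}}$, and the transformation is indeed an elementary base change of the representation ring.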
In this setting, the technique has inspired work beyond the realm of arithmetic groups, leading to formulas for the integral Bredon homology and equivariant $K$-homology of all compact 3-dimensional hyperbolic reflection groups~\cite{LORS}, via a novel criterion for torsion-freeness of equivariant $K$-homology in a more general framework.
\subsection{Chen--Ruan orbifold cohomology of the complexified Bianchi orbifolds} \label{orbifold state}
The action of the Bianchi groups $\Gamma$ on real hyperbolic $3$-space $\mathcal{H} = \mathrm{SL}_2({\mathbb{C}})/\mathrm{SU}_2$ induces an action on a complexification $\mathcal{H}_{\mathbb{C}}$ of the latter (of real dimension $6$). For the orbifolds $[\mathcal{H}_{\mathbb{C}} /_\Gamma]$ given by this action, we can compute the Chen--Ruan orbifold cohomology as follows.
Let~$\Gamma$ be a discrete group acting \emph{properly}, i.e. with finite stabilizers, by diffeomorphisms on a manifold~$X$. For any element $g \in \Gamma$, denote by $C_\Gamma(g)$ the centralizer of $g$ in~$\Gamma$. Denote by $X^g$ the subset of $X$ consisting of the fixed points of $g$.
\begin{df} Let $T \subset \Gamma$ be a set of representatives of the conjugacy classes of elements of finite order in~$\Gamma$. Then we set $$ \Homol^*_{orb}([X / _\Gamma]) := \bigoplus_{g \in T} \Homol^* \left( X^g / C_\Gamma(g); \thinspace {\mathbb{Q}} \right),$$ where $\Homol^* \left( X^g / C_\Gamma(g); \thinspace {\mathbb{Q}} \right)$ is the ordinary cohomology of the quotient space $X^g / C_\Gamma(g)$. \end{df} It can be checked that this definition gives the vector space structure of the orbifold cohomology defined by Chen and Ruan~\cite{ChenRuan}, if we forget the grading of the latter. We can verify this fact using arguments analogous to those used by Fantechi and G\"ottsche \cite{FantechiGoettsche} in the case of a finite group~$\Gamma$ acting on~$X$. The additional argument needed when considering some element $g$ in~$\Gamma$ of infinite order is the following. As the action of~$\Gamma$ on $X$ is proper, $g$ does not admit any fixed point in $X$. Thus, $ \Homol^* \left( X^g / C_\Gamma(g); \thinspace {\mathbb{Q}} \right) = \Homol^* \left( \emptyset; \thinspace {\mathbb{Q}} \right) = 0. $
\begin{theorem}[\cite{PerroniRahm}] \label{introduced result} Let $\Gamma$ be a finite index subgroup in a Bianchi group (except over the Gaussian or Eisensteinian integers). Denote by $\lambda_{2n}$ the number of conjugacy classes of cyclic subgroups of order ${n}$ in $\Gamma$. Denote by $\lambda_{2n}^*$ the cardinality of the subset of conjugacy classes which are contained in a dihedral subgroup of order $2n$ in~$\Gamma$. Then, \small $$ \Homol^d_{orb}\left([\mathcal{H}_{\mathbb{C}} /_\Gamma] \right) \cong \Homol^d\left(\mathcal{H}/_{\Gamma}; \thinspace {\mathbb{Q}} \right) \oplus \begin{cases} {\mathbb{Q}}^{\lambda_4 +2\lambda_6 -\lambda_6^*}, & d=2, \\
{\mathbb{Q}}^{\lambda_4-\lambda_4^* +2\lambda_6 -\lambda_6^*}, & d=3, \\ 0, & \mathrm{otherwise}. \end{cases}$$ \normalsize \end{theorem}
The (co)homology of the quotient space $\mathcal{H} / _\Gamma$
has been computed numerically for a large range of Bianchi groups \cite{Vogtmann}, \cite{Scheutzow}, \cite{Rahm:higher_torsion};
and bounds for its Betti numbers are given in \cite{Kraemer:Thesis}. Kr\"amer \cite{Kraemer:Diplom} has determined number-theoretic formulas
for the numbers $\lambda_{2n}$ and $\lambda_{2n}^*$ of conjugacy classes of finite subgroups in the Bianchi groups.
Building on this, Perroni and the author established the following result~\cite{PerroniRahm}. \begin{theorem}\label{mainthm} ${}$\\ Let $\mathcal{H}_{\mathbb{C}} /_\Gamma$ be the coarse moduli space of $[\mathcal{H}_{\mathbb{C}} /_\Gamma]$; \\ and let $Y \ensuremath{\rightarrow} \mathcal{H}_{\mathbb{C}} /_\Gamma$ be a crepant resolution of $\mathcal{H}_{\mathbb{C}} /_\Gamma$.\\ Then there is an isomorphism as graded $\ensuremath{\mathbb{C}}$-algebras between the Chen--Ruan cohomology ring of $[\mathcal{H}_{\mathbb{C}} /_\Gamma]$ and the singular cohomology ring of $Y$: $$ \left( H_{\rm CR}^*([\mathcal{H}_{\mathbb{C}} / _\Gamma]) , \cup_{\rm CR} \right) \cong \left( H^*(Y) , \cup \right) \, . $$ \end{theorem} The Chen--Ruan orbifold cohomology is conjectured by Ruan to match the quantum-corrected classical cohomology ring of a crepant resolution for the orbifold. Perroni and the author proved furthermore that the Gromov--Witten invariants involved in the definition of the quantum-corrected cohomology ring of $Y\ensuremath{\rightarrow} \mathcal{H}_{\mathbb{C}} /_\Gamma$ vanish. Hence, they deduced the following. \begin{corollary}\label{maincor} Ruan's crepant resolution conjecture holds true for the complexified Bianchi orbifolds $[\mathcal{H}_{\mathbb{C}} / _\Gamma]$. \end{corollary}
The vector space structure of the Chen--Ruan orbifold cohomology of Bianchi orbifolds is described by the following two theorems.
\begin{theorem}[\cite{Rahm:equivariant}] \label{3-torsion quotients} For any element $\gamma$ of order $3$ in a finite index subgroup~$\Gamma$ in a Bianchi group with units~$\{\pm 1\}$,
the quotient space $\mathcal{H}^\gamma /_{C_\Gamma(\gamma)}$ of the rotation axis modulo the centralizer of $\gamma$
is homeomorphic to a circle. \end{theorem}
\begin{theorem}[\cite{Rahm:equivariant}] \label{2-torsion quotients} Let $\gamma$ be an element of order $2$ in a Bianchi group~$\Gamma$ with units~$\{\pm 1\}$. Then, the homeomorphism type of the quotient space $\mathcal{H}^\gamma /_{C_\Gamma(\gamma)}$ is \end{theorem} \begin{itemize} \item[$\edgegraph$] \textit{an edge without identifications, if $\langle \gamma \rangle$ is contained in a subgroup of type ${\mathcal{D}}_2$ inside~$\Gamma$, and} \item[$\circlegraph$] \textit{a circle, otherwise.} \end{itemize}
Denote by $\lambda_{2\ell}$ the number of conjugacy classes of subgroups of type ${\mathbb{Z}}/\ell$ in a finite index subgroup {$\Gamma$} in a Bianchi group with units $\{\pm 1 \}$. Denote by $\lambda_{2\ell}^*$ the number of conjugacy classes of subgroups of type ${\mathbb{Z}}/\ell$ which are contained in a subgroup of type $\mathcal{D}_{n}$ in~$\Gamma$. By \cite{Rahm:equivariant}, there are \mbox{$2\lambda_6 -\lambda_6^*$} conjugacy classes of elements of order $3$. As a result of Theorems~\ref{3-torsion quotients} and~\ref{2-torsion quotients},
the vector space structure of the orbifold cohomology of $[\mathcal{H}_{\mathbb{R}} / _\Gamma]$ is given as
$ \Homol^\bullet_{orb}([\mathcal{H}_{\mathbb{R}} / _\Gamma]) \cong \Homol^{\bullet} \left( \mathcal{H}_{\mathbb{R}} / _\Gamma; \thinspace {\mathbb{Q}} \right) \bigoplus\nolimits^{\lambda_4^*} \Homol^{\bullet} \left( \edgegraph; \thinspace {\mathbb{Q}} \right) \bigoplus\nolimits^{(\lambda_4 -\lambda_4^*)} \Homol^{\bullet} \left( \circlegraph; \thinspace {\mathbb{Q}} \right) \bigoplus\nolimits^{(2\lambda_6 -\lambda_6^*)} \Homol^{\bullet} \left(\circlegraph; \thinspace {\mathbb{Q}} \right). $ \normalsize \\ As recalled above, the (co)homology of the quotient space $\mathcal{H}_{\mathbb{R}} / _\Gamma$ has been computed numerically for a large range of Bianchi groups \cite{Vogtmann}, \cite{Scheutzow}, \cite{Rahm:higher_torsion}, and Kr\"amer \cite{Kraemer:Diplom} has determined number-theoretic formulas for the numbers $\lambda_{2\ell}$ and $\lambda_{2\ell}^*$ of conjugacy classes of finite subgroups in the full Bianchi groups. Kr\"amer's formulas were evaluated for hundreds of thousands of Bianchi groups \cite{Rahm:formulas}, and these values match the ones from the orbifold structure computations with \cite{Rahm:BianchiGP} in the cases where the latter are available.
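Since $\Homol^{\bullet}(\edgegraph; \thinspace {\mathbb{Q}})$ is ${\mathbb{Q}}$ concentrated in degree $0$, while $\Homol^{\bullet}(\circlegraph; \thinspace {\mathbb{Q}})$ is ${\mathbb{Q}}$ in degrees $0$ and $1$, the above vector space structure expands to the dimension count

```latex
\dim_{\mathbb{Q}} \Homol^d_{orb}([\mathcal{H}_{\mathbb{R}} / _\Gamma])
 = \dim_{\mathbb{Q}} \Homol^d\left(\mathcal{H}_{\mathbb{R}} / _\Gamma; \thinspace {\mathbb{Q}}\right)
 + \begin{cases}
     \lambda_4 + 2\lambda_6 - \lambda_6^*, & d = 0,\\
     \lambda_4 - \lambda_4^* + 2\lambda_6 - \lambda_6^*, & d = 1,\\
     0, & d \geq 2,
   \end{cases}
```

which, after applying the degree shifting numbers, matches the pattern of Theorem~\ref{introduced result} in degrees $2$ and $3$.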
When we pass to the complexified orbifold $[\mathcal{H}_{\mathbb{C}} / _\Gamma]$, the real line that is the rotation axis in~$\mathcal{H}_{\mathbb{R}}$ of an element of finite order becomes a complex line. However, the centralizer still acts in the same way, by reflections and translations. So the interval $\edgegraph$ as a quotient of the real line yields a strip
$\edgegraph \times {\mathbb{R}}$ as a quotient of the complex line. And the circle $\circlegraph$ as a quotient of the real line yields a cylinder
$\circlegraph \times {\mathbb{R}}$ as a quotient of the complex line. Therefore, using the degree shifting numbers computed in~\cite{Rahm:equivariant}, we obtain the result of Theorem~\ref{introduced result}.
\subsection*{Acknowledgement.} The author was supported by the MELODIA project, grant number ANR-20-CE40-0013, during the revision of this paper.
\begin{bibdiv} \begin{biblist} \bib{anton}{article}{
author={Anton, Marian F.},
title={On a conjecture of Quillen at the prime $3$},
journal={J. Pure Appl. Algebra},
volume={144},
date={1999},
number={1},
pages={1--20},
issn={0022-4049},
review={\MR{1723188 (2000m:19003)}},
doi={10.1016/S0022-4049(98)00050-4}, } \bib{Anton-mod5}{article}{
author={Anton, Marian F.},
title={Homological symbols and the Quillen conjecture},
journal={J. Pure Appl. Algebra},
volume={213},
date={2009},
number={4},
pages={440--453},
issn={0022-4049},
review={\MR{2483829 (2010f:20042)}},
doi={10.1016/j.jpaa.2008.07.011}, } \bib{AGMY}{article}{
author={Ash, A.},
author={Gunnells, P. E.},
author={McConnell, M.},
author={Yasaki, D.},
title={On the growth of torsion in the cohomology of arithmetic groups},
journal={J. Inst. Math. Jussieu},
volume={19},
date={2020},
number={2},
pages={537--569},
issn={1474-7480},
review={\MR{4079152}},
doi={10.1017/s1474748018000117}, } \bib{BergeronSengunVenkatesh}{article}{
author={Bergeron, Nicolas},
author={\c{S}eng\"{u}n, Mehmet Haluk},
author={Venkatesh, Akshay},
title={Torsion homology growth and cycle complexity of arithmetic manifolds},
journal={Duke Math. J.},
volume={165},
date={2016},
number={9},
pages={1629--1693},
issn={0012-7094},
review={\MR{3513571}},
doi={10.1215/00127094-3450429}, } \bib{BerkoveRahm}{article}{
author = {Berkove, Ethan},
author = {Rahm, Alexander~D.} ,
title={The mod 2 cohomology rings of ${\rm SL}_2$ of the imaginary
quadratic integers},
note={With an appendix by Aurel Page},
journal={J. Pure Appl. Algebra},
volume={220},
date={2016},
number={3},
pages={944--975},
issn={0022-4049},
review={\MR{3414403}},
doi={10.1016/j.jpaa.2015.08.002}, } \bib{BLR}{article}{
author = {Berkove, Ethan},
author = {Lakeland, Grant} ,
author = {Rahm, Alexander~D.} ,
title = {The mod $2$ cohomology rings of congruence subgroups in the Bianchi groups},
journal ={J. Algebr. Comb.},
year = {2019},
note = {\url{https://doi.org/10.1007/s10801-019-00912-8}}, }
\bib{Brown}{book}{
author={Brown, Kenneth S.},
title={Cohomology of groups},
series={Graduate Texts in Mathematics},
volume={87},
note={Corrected reprint of the 1982 original},
publisher={Springer-Verlag},
place={New York},
date={1994},
pages={x+306},
isbn={0-387-90688-6},
review={\MR{1324339 (96a:20072)}}, } \bib{Brown79}{article}{
author={Brown, Kenneth S.},
title={Groups of virtually finite dimension},
conference={
title={Homological group theory},
address={Proc. Sympos., Durham},
date={1977},
},
book={
series={London Math. Soc. Lecture Note Ser.},
volume={36},
publisher={Cambridge Univ. Press, Cambridge-New York},
},
date={1979},
pages={27--70},
review={\MR{564419}}, } \bib{BuiRahm:Verification}{article}{ author = {Bui Anh Tuan}, author = {Rahm, Alexander D.}, title = {Verification of the Quillen conjecture in the rank 2 imaginary quadratic case}, journal={Homology Homotopy Appl.}, volume={22}, date={2020}, number={2}, pages={265--278}, doi={10.4310/HHA.2020.v22.n2.a17}, } \bib{BuiRahm:scpInHAP}{book}{
author={Bui Anh Tuan},
author = {Rahm, Alexander~D.} ,
title = {Torsion Subcomplexes package in HAP},
address = {a GAP subpackage, \url{http://hamilton.nuigalway.ie/Hap/doc/chap26.html} }, } \bib{BuiRahmWendt:GL3om}{article}{
TITLE = {{On Farrell--Tate cohomology of GL(3) over rings of quadratic integers}},
AUTHOR = {Bui Anh Tuan},
author = {Rahm, Alexander~D.},
author = {Wendt, Matthias},
NOTE = {Preprint, \url{https://hal.archives-ouvertes.fr/hal-02435963}},
YEAR = {2020}, } \bib{ChenRuan}{article}{
author={Chen, Weimin},
author={Ruan, Yongbin},
title={A new cohomology theory of orbifold},
journal={Comm. Math. Phys.},
volume={248},
date={2004},
number={1},
pages={1--31},
issn={0010-3616},
review={\MR{2104605 (2005j:57036)}},
review={Zbl 1063.53091},
} \bib{Davis}{book}{
author={Davis, Michael W.},
title={The geometry and topology of Coxeter groups},
series={London Mathematical Society Monographs Series},
volume={32},
publisher={Princeton University Press},
place={Princeton, NJ},
date={2008},
pages={xvi+584},
isbn={978-0-691-13138-2},
isbn={0-691-13138-4},
review={\MR{2360474 (2008k:20091)}}, } \bib{sikiri2019voronoi}{misc}{
title={Voronoi complexes in higher dimensions, cohomology of $GL_N(Z)$ for $N\geq 8$ and the triviality of $K_8(Z)$},
author={Dutour Sikiri\'{c}, Mathieu},
author={Elbaz-Vincent, Philippe},
author={Kupers, Alexander},
author={Martinet, Jacques},
date={2019},
note={arXiv:1910.11598 [math.KT]}, } \bib{dwyer}{article}{
author={Dwyer, William G.},
title={Exotic cohomology for ${\rm GL}_n({\bf Z}[1/2])$},
journal={Proc. Amer. Math. Soc.},
volume={126},
date={1998},
number={7},
pages={2159--2167},
issn={0002-9939},
review={\MR{1443381 (2000a:57092)}},
doi={10.1090/S0002-9939-98-04279-8}, } \bib{Ellis}{book}{
author={Ellis, Graham},
title={An invitation to computational homotopy},
publisher={Oxford University Press, Oxford},
date={2019},
pages={xx+525},
isbn={978-0-19-883298-0},
isbn={978-0-19-883297-3},
review={\MR{3971587}},
doi={10.1093/oso/9780198832973.001.0001}, } \bib{FantechiGoettsche}{article}{
author={Fantechi, Barbara},
author={G{\"o}ttsche, Lothar},
title={Orbifold cohomology for global quotients},
journal={Duke Math. J.},
volume={117},
date={2003},
number={2},
pages={197--227},
issn={0012-7094},
review={\MR{1971293 (2004h:14062)}},
review={Zbl 1086.14046}, }
\bib{Henn}{article}{
author={Henn, Hans-Werner},
title={The cohomology of ${\rm SL}(3,{\bf Z}[1/2])$},
journal={$K$-Theory},
volume={16},
date={1999},
number={4},
pages={299--359},
issn={0920-3036},
review={\MR{1683179 (2000g:20087)}}, }
\bib{henn:lannes:schwartz}{article}{
author={Henn, Hans-Werner},
author={Lannes, Jean},
author={Schwartz, Lionel},
title={Localizations of unstable $A$-modules and equivariant mod $p$
cohomology},
journal={Math. Ann.},
volume={301},
date={1995},
number={1},
pages={23--68},
issn={0025-5831},
review={\MR{1312569 (95k:55036)}},
doi={10.1007/BF01446619}, } \bib{Klein:binaereFormenMathAnn9}{article}{
author={Klein, Felix},
title={Ueber bin\"are {F}ormen mit linearen {T}ransformationen in sich selbst},
date={1875},
ISSN={0025-5831},
journal={Math. Ann.},
volume={9},
number={2},
pages={183\ndash 208},
url={http://dx.doi.org/10.1007/BF01443373},
review={\MR{1509857}}, } \bib{Knudson:book}{book}{
author={Knudson, Kevin P.},
title={Homology of linear groups},
series={Progress in Mathematics},
volume={193},
publisher={Birkh\"auser Verlag, Basel},
date={2001},
pages={xii+192},
isbn={3-7643-6415-7},
review={\MR{1807154 (2001j:20070)}},
doi={10.1007/978-3-0348-8338-2}, }
\bib{Kraemer:Diplom}{book}{
author={Kr\"amer, Norbert},
title={Die Konjugationsklassenanzahlen der endlichen Untergruppen in der Norm-Eins-Gruppe von Maxi\-malordnungen in Quaternionenalgebren},
date={Diplomarbeit, Mathematisches Institut, Universit\"at Bonn, 1980.
\url{http://tel.archives-ouvertes.fr/tel-00628809/}},
language={German}, } \bib{Kraemer:Thesis}{thesis}{
author = {Kr\"amer, Norbert},
school = {Math.-Naturwiss. Fakult\"{a}t der Rheinischen Friedrich-Wilhelms-Universit\"{a}t Bonn; Bonn. Math. Schr.},
title = {Beitr\"{a}ge zur {A}rithmetik imagin\"{a}rquadratischer {Z}ahlk\"{o}rper},
year = {1984}, } \bib{LORS}{article}{
author={Lafont, Jean-Fran\c{c}ois},
author={Ortiz, Ivonne J.},
author={Rahm, Alexander D.},
author={S\'{a}nchez-Garc\'{\i}a, Rub\'{e}n J.},
title={Equivariant $K$-homology for hyperbolic reflection groups},
journal={Q. J. Math.},
volume={69},
date={2018},
number={4},
pages={1475--1505},
issn={0033-5606},
review={\MR{3908707}},
doi={10.1093/qmath/hay030}, }
\bib{MislinValette}{collection}{
author={Mislin, Guido},
author={Valette, Alain},
title={Proper group actions and the Baum-Connes conjecture},
series={Advanced Courses in Mathematics. CRM Barcelona},
publisher={Birkh\"auser Verlag},
place={Basel},
date={2003},
pages={viii+131},
isbn={3-7643-0408-1},
review={\MR{2027168 (2005d:19007)}},
review={Zbl 1028.46001}, } \bib{Mitchell}{article}{
author={Mitchell, Stephen A.},
title={On the plus construction for $B{\rm GL}\,{\bf Z}[\frac12]$ at the
prime $2$},
journal={Math. Z.},
volume={209},
date={1992},
number={2},
pages={205--222},
issn={0025-5874},
review={\MR{1147814 (93b:55021)}},
doi={10.1007/BF02570830}, }
\bib{PerroniRahm}{article}{
author={Perroni, Fabio},
author={Rahm, Alexander D.},
title={On Ruan's cohomological crepant resolution conjecture for the
complexified Bianchi orbifolds},
journal={Algebr. Geom. Topol.},
volume={19},
date={2019},
number={6},
pages={2715--2762},
issn={1472-2747},
review={\MR{4023327}},
doi={10.2140/agt.2019.19.2715}, } \bib{Quillen}{article}{
AUTHOR = {Quillen, Daniel},
TITLE = {The spectrum of an equivariant cohomology ring. {I}, {II}},
JOURNAL = {Ann. of Math. (2)},
VOLUME = {94},
YEAR = {1971},
PAGES = {549--572; ibid. (2) 94 (1971), 573--602},
ISSN = {0003-486X}, }
\bib{Rahm:noteAuxCRAS}{article}{
author={Rahm, Alexander D.},
title={Homology and $K$-theory of the \mbox{Bianchi} groups (Homologie et $K$-th\'eorie des groupes de \mbox{Bianchi})},
date={2011},
journal={Comptes Rendus Math\'ematique de l' Acad\'emie des Sciences - Paris},
volume={349},
number ={11-12},
pages={615\ndash 619}, }
\bib{Rahm:BianchiGP}{book}{
author = {Rahm, Alexander~D.} ,
title = {Bianchi.gp},
address = { Open source program (GNU general public
license), validated by the CNRS: \url{http://www.projet-plume.org/fiche/bianchigp} subject to the Certificat de Comp\'etences en Calcul Intensif (C3I)
and part of the GP scripts library of Pari/GP Development Center, 2010}, }
\bib{Rahm:formulas}{article}{
author={Rahm, Alexander D.},
title={Accessing the cohomology of discrete groups above their virtual cohomological dimension},
journal={J. Algebra},
volume={404},
date={2014},
pages={152--175},
issn={0021-8693},
review={\MR{3177890}}, }
\bib{Rahm:homological_torsion}{article}{
author={Rahm, Alexander~D.},
title={The homological torsion of $\rm{PSL}_2$ of the imaginary
quadratic integers},
journal={Trans. Amer. Math. Soc.},
volume={365},
date={2013},
number={3},
pages={1603--1635},
review={\MR{3003276}}, } \bib{Rahm:equivariant}{article}{ author = {Rahm, Alexander D.} ,
title = {On the equivariant $K$-homology of PSL$_2$ of the imaginary quadratic integers}, journal={Annales de l'Institut Fourier}, volume={66}, number={4}, year={2016}, pages={1667--1689,
\url{http://dx.doi.org/10.5802/aif.3047} }, }
\bib{Rahm:higher_torsion}{article}{
author={Rahm, Alexander~D.},
title={Higher torsion in the Abelianization of the full Bianchi groups},
journal={LMS J. Comput. Math.},
volume={16},
date={2013},
pages={344--365},
issn={1461-1570},
review={\MR{3109616}}, }
\bib{BuiRahmWendt:Farrell-Tate}{article}{
author={Bui, Anh Tuan},
author={Rahm, Alexander D.},
author={Wendt, Matthias},
title={The Farrell--Tate and Bredon homology for ${\rm PSL}_4(\mathbb{Z})$ via cell subdivisions},
journal={J. Pure Appl. Algebra},
volume={223},
date={2019},
number={7},
pages={2872--2888},
issn={0022-4049},
review={\MR{3912952}},
doi={10.1016/j.jpaa.2018.10.002}, } \bib{RahmFuchs}{article}{
Author = {Alexander D. {Rahm} and Mathias {Fuchs}},
Title = {{The integral homology of $\mathrm{PSL}_2$ of imaginary quadratic integers with non-trivial class group}},
Journal = {{J. Pure Appl. Algebra}},
ISSN = {0022-4049},
Volume = {215},
Number = {6},
Pages = {1443--1472},
Year = {2011},
Publisher = {Elsevier Science B.V. (North-Holland), Amsterdam},
DOI = {10.1016/j.jpaa.2010.09.005},
review = { Zbl 1268.11072} } \bib{RahmWendt}{article}{
author={Rahm, Alexander D.},
author={Wendt, Matthias},
title={On Farrell-Tate cohomology of $\rm SL_2$ over $S$-integers},
Journal={{J. Algebra}},
volume={512},
date={2018},
pages={427--464},
issn={0021-8693},
review={\MR{3841530}},
doi={10.1016/j.jalgebra.2018.06.031}, }
\bib{qcnote}{article}{
author={Rahm, Alexander D.},
author={Wendt, Matthias},
title={A refinement of a conjecture of Quillen},
journal={{Comptes Rendus Math\'ematique} de l'Acad\'emie des Sciences}, volume = {353}, number = {9}, pages = {779--784}, year = {2015}, issn = {1631-073X}, doi = {http://dx.doi.org/10.1016/j.crma.2015.03.022}, }
\bib{Sanchez-Garcia}{article}{
author={S{\'a}nchez-Garc{\'{\i}}a, Rub{\'e}n},
title={Bredon homology and equivariant $K$-homology of ${\rm SL}(3,{{\mathbb{Z}}})$},
journal={J. Pure Appl. Algebra},
volume={212},
date={2008},
number={5},
pages={1046--1059},
issn={0022-4049},
review={\MR{2387584 (2009b:19007)}}, } \bib{Scheutzow}{article}{
author={Scheutzow, Alexander},
title={Computing rational cohomology and Hecke eigenvalues for Bianchi
groups},
journal={J. Number Theory},
volume={40},
date={1992},
number={3},
pages={317--328},
issn={0022-314X},
review={\MR{1154042 (93b:11068)}},
doi={10.1016/0022-314X(92)90004-9}, } \bib{SchwermerVogtmann}{article}{
author={Schwermer, Joachim},
author={Vogtmann, Karen},
title={The integral homology of ${\rm SL}_{2}$ and ${\rm PSL}_{2}$ of
Euclidean imaginary quadratic integers},
journal={Comment. Math. Helv.},
volume={58},
date={1983},
number={4},
pages={573--598},
issn={0010-2571},
review={\MR{728453 (86d:11046)}},
doi={10.1007/BF02564653}, } \bib{SerreGroupesDiscrets}{article}{
author={Serre, Jean-Pierre},
title={Cohomologie des groupes discrets},
language={French},
conference={
title={Prospects in mathematics},
address={Proc. Sympos., Princeton Univ., Princeton, N.J.},
date={1970},
},
book={
publisher={Princeton Univ. Press, Princeton, N.J.},
},
date={1971},
pages={77--169. Ann. of Math. Studies, No. 70},
review={\MR{0385006}}, } \bib{Soule}{article}{
author={Soul{\'e}, Christophe},
title={The cohomology of ${\rm SL}_{3}({\bf Z})$},
journal={Topology},
volume={17},
date={1978},
number={1},
pages={1--22},
issn={0040-9383}, } \bib{Vogtmann}{article}{
author={Vogtmann, Karen},
title={Rational homology of Bianchi groups},
journal={Math. Ann.},
volume={272},
date={1985},
number={3},
pages={399--419},
ISSN={0025-5831},
review={\MR{799670 (87a:22025)}},
review={Zbl 0545.20031 } } \bib{Wall}{article}{
author={Wall, C. Terence~C.},
title={Resolutions for extensions of groups},
date={1961},
journal={Proc. Cambridge Philos. Soc.},
volume={57},
pages={251\ndash 255},
review={\MR{0178046 (31 \#2304)}}, } \bib{sl2parabolic}{article}{
author={Wendt, Matthias~},
title={Homology of {SL}$_2$ over function fields I: parabolic subcomplexes},
journal={J. Reine Angew. Math.},
volume={739},
date={2018},
pages={159--205},
issn={0075-4102},
review={\MR{3808260}},
doi={10.1515/crelle-2015-0047}, }
\end{biblist} \end{bibdiv}
\end{document} |
\begin{document}
\title{Desingularization of bounded-rank matrix sets} \begin{abstract} Conventional methods for solving optimization problems on sets of low-rank matrices, which appear in a great number of applications, tend to ignore their underlying structure of an algebraic variety and the existence of singular points. This leads to the appearance of inverses of singular values in the algorithms, and since these singular values can be close to $0$, it causes numerical problems. We tackle this problem by utilizing ideas from algebraic geometry and show how to desingularize these sets. Our main result is an algorithm which uses only bounded functions of the singular values and hence does not suffer from the issue described above. \end{abstract} \begin{keywords}
low-rank matrices, algebraic geometry, Riemannian optimization, matrix completion \end{keywords}
\begin{AMS}
65F30 \end{AMS} \section{Introduction} Although low-rank matrices appear in many applications, the structure of the corresponding (real algebraic) matrix variety is not fully utilized in computations, and its theoretical investigation is complicated by the existence of singular points \cite{lakshmibai2015grassmannian} on such a variety, which correspond to matrices of smaller rank. We tackle this problem by utilizing a modified Room-Kempf desingularization \cite{naldi2015exact} of determinantal varieties, a construction that is classical in algebraic geometry but has never been applied in the context of optimization over matrix varieties.
Briefly, it can be summarized as follows. The idea of the Room-Kempf procedure is to consider the set of tuples of matrices $(A,Y)$ satisfying the equations $AY=0$ and $BY=0$ for some fixed matrix $B$. These equations imply that the rank of $A$ is bounded, and moreover the set of such tuples is a smooth manifold (for reasonable matrices $B$). However, conditions of the form $BY=0$ can be numerically unstable, so we modify the construction by imposing the condition $Y^{\top} Y = I$ instead. The precise definition of the manifold we work with is given in terms of Grassmannians, and we then pass to the equations given above. We also show that the dimension of this manifold is the same as that of the original matrix variety. Our main contributions are:
\begin{itemize}
\item We propose and analyze a modified Room-Kempf desingularization technique for the variety of matrices of shape $n \times m$ with rank bounded by $r$ (\cref{notsing:secdes}).
\item We prove smoothness and obtain bounds on the curvature of the desingularized variety in \cref{notsing:secdes} and \cref{sec:ts-desing}. The latter is performed by estimating singular values of the operator of the orthogonal projection onto the tangent space of the desingularized variety.
\item We find an effective low-dimensional parametrization of the tangent space (\cref{notsing:tspar}). Even though the desingularized variety is a subset of a space of much bigger dimension, this allows us to construct a robust second-order method with $O((n+m)r)$ complexity.
\item We implement an effective realization of a reduced Hessian method for optimization over the desingularized variety (\cref{notsing:secnewton}). We start from the method of Lagrange multipliers, for which we derive the Newton iteration of the corresponding optimization problem. The Newton system takes saddle-point form, which we solve using the null space method described in \cite{benzi2005numerical}. In \cref{sec:trick} we show how to reduce the total complexity of the algorithm to $O((n+m)r)$ per iteration.
\item We also briefly discuss a few technical details of the implementation of the algorithm (\cref{notsing:techstuff}).
\item We present the results of numerical experiments and comparisons with some other methods (\cref{notsing:numerical}). \end{itemize}
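The null space method mentioned in the contributions above can be sketched in a few lines (a generic dense-algebra illustration in the spirit of \cite{benzi2005numerical}, with hypothetical helper names; the actual implementation of \cref{notsing:secnewton} exploits low-rank structure to reach $O((n+m)r)$ cost per iteration):

```python
import numpy as np

def null_space_solve(H, J, g, c):
    """Solve the saddle-point (KKT) system

        [ H  J^T ] [ dx ]   [ -g ]
        [ J   0  ] [ lm ] = [ -c ]

    by the null space method: dx is split into a particular solution
    of J dx = -c plus a correction lying in the kernel of J."""
    m = J.shape[0]
    # particular solution of the constraint block J dx = -c
    dx_p = np.linalg.lstsq(J, -c, rcond=None)[0]
    # orthonormal basis Z of ker(J) from the SVD (J is assumed full row rank)
    Z = np.linalg.svd(J)[2][m:].T
    # reduced Hessian system on the null space: (Z^T H Z) dz = -Z^T (g + H dx_p)
    dz = np.linalg.solve(Z.T @ H @ Z, -Z.T @ (g + H @ dx_p))
    dx = dx_p + Z @ dz
    # multipliers from the first block row: J^T lm = -g - H dx
    lm = np.linalg.lstsq(J.T, -g - H @ dx, rcond=None)[0]
    return dx, lm

# tiny sanity check: one linear constraint in R^3
H = np.diag([1.0, 2.0, 3.0])
J = np.array([[1.0, 1.0, 1.0]])
g = np.array([1.0, 0.0, -1.0])
c = np.array([0.5])
dx, lm = null_space_solve(H, J, g, c)
assert np.allclose(J @ dx, -c)                  # constraint block holds
assert np.allclose(H @ dx + J.T @ lm, -g)       # stationarity block holds
```

Here the reduced system $Z^{\top} H Z$ has the dimension of the kernel of the constraint Jacobian, which is what makes the reduced Hessian approach attractive.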
The manifolds that we work with in this paper will always be $C^{\infty}$ and in fact smooth algebraic varieties. \subsection{Idea of desingularization} \label{notsing:intro}
Before we define the desingularization of bounded-rank matrix sets, we introduce its basic idea. The low-rank matrix case will be described in the next section. Let $V$ be a variety (not necessarily smooth) and $f$ be a function $$f : V \to \mathbb{R}, $$
which is smooth in an open neighborhood of $V$ (which is assumed to be embedded in $\mathbb{R}^{k}$). To solve $$ f(x) \to \min, \quad x \in V,$$ we often use methods involving the tangent bundle of $V$. However, due to the existence of singular points where the tangent space is not well-defined, it is hard to prove correctness and convergence of those methods. To avoid this problem we construct a smooth variety $\widehat{V}$ and a surjective smooth map $\pi$, $$\pi: \widehat{V} \to V.$$ Let $\widehat{f}$ be the pullback of $f$ via the map $\pi$, i.e. $$ \widehat{f} : \widehat{V} \to \mathbb{R},$$ $$ \widehat{f} = f \circ \pi.$$ It is obvious that $$ \min_{x \in V} f(x) = \min_{y \in \widehat{V}} \widehat{f}(y),$$ so we have reduced our non-smooth minimization problem to a smooth one. Typically $\widehat{V}$ is a variety in a space of bigger dimension and is constructed to be of the same dimension as the smooth part of $V$. For some geometric intuition, consider the following example (see \cref{fig:bu}). Let $V$ be the cubic curve given by the equation $$y^2 = x^2 (x+1),$$ parametrized as $$(x(t),y(t)) = (t^2-1,\,t(t^2-1)).$$ It is easy to see that $(0,0)$ is a singular point of $V$. Its desingularization is given by $$\widehat{V} = \lbrace (x(t),y(t),z(t)) = (t^2-1,\,t(t^2-1),\,t) : t \in \mathbb{R} \rbrace \subset \mathbb{R}^3,$$ which is clearly smooth. The projection is then just $$ \pi(x(t),y(t),z(t)) = (x(t),y(t)).$$
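This toy example is easy to verify numerically. The following sketch (with our own helper names) checks that the parametrization satisfies the cubic equation, that both preimages $t = \pm 1$ of the node project to the origin, that the lift into $\mathbb{R}^3$ is an immersion, and that the gradient of the defining polynomial of $V$ vanishes at $(0,0)$:

```python
import numpy as np

def curve(t):
    # parametrization of the singular cubic y^2 = x^2 (x + 1)
    return t**2 - 1, t * (t**2 - 1)

def lift(t):
    # corresponding point on the desingularization V-hat in R^3
    x, y = curve(t)
    return x, y, t

def grad_g(x, y):
    # gradient of the defining polynomial g(x, y) = y^2 - x^3 - x^2
    return np.array([-3 * x**2 - 2 * x, 2 * y])

ts = np.linspace(-2.0, 2.0, 401)
x, y = curve(ts)
assert np.allclose(y**2, x**2 * (x + 1))            # curve equation holds

# both preimages of the node project to the singular point (0, 0)
assert curve(1.0) == (0.0, 0.0) and curve(-1.0) == (0.0, 0.0)

# the lift is an immersion: its velocity (2t, 3t^2 - 1, 1) never vanishes,
# because its last component is identically 1
velocity = np.stack([2 * ts, 3 * ts**2 - 1, np.ones_like(ts)])
assert np.linalg.norm(velocity, axis=0).min() >= 1.0

assert np.allclose(grad_g(0.0, 0.0), 0.0)           # (0, 0) is singular on V
assert not np.allclose(grad_g(*curve(2.0)), 0.0)    # generic points are smooth
```

The third coordinate records the parameter $t$, so the projection $\pi$ simply forgets it; over the node, the fiber of $\pi$ consists of the two points $t = \pm 1$.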
\begin{figure}
\caption{Desingularization of the cubic.}
\label{fig:bu}
\end{figure}
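As a sanity check (not part of the original derivation), the cubic example can be verified numerically; the minimal NumPy sketch below confirms that the parametrization satisfies the implicit equation, that the gradient of the defining polynomial vanishes at the node $(0,0)$, and that the lifted curve has nowhere-vanishing velocity.

```python
import numpy as np

# Minimal sketch (not from the paper): check the cubic example numerically.
# V: y^2 = x^2 (x + 1), parametrized by (x, y) = (t^2 - 1, t (t^2 - 1)).
t = np.linspace(-2.0, 2.0, 401)
x, y = t**2 - 1, t * (t**2 - 1)

# The parametrization satisfies the implicit equation of V.
assert np.allclose(y**2, x**2 * (x + 1))

# For F(x, y) = y^2 - x^3 - x^2 the gradient (-3x^2 - 2x, 2y) vanishes at
# the node (0, 0), so the tangent space of V is not defined there.
grad_at_origin = np.array([-3 * 0.0**2 - 2 * 0.0, 2 * 0.0])
assert np.allclose(grad_at_origin, 0.0)

# The lift t -> (t^2 - 1, t^3 - t, t) has velocity (2t, 3t^2 - 1, 1), whose
# last component never vanishes: the desingularized curve is smooth.
velocity = np.stack([2 * t, 3 * t**2 - 1, np.ones_like(t)])
assert np.linalg.norm(velocity, axis=0).min() >= 1.0
```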
\section{Desingularization of low-rank matrix varieties via kernel} \subsection{$2\times 2$ matrices} \label{notsing:2x2sec} Let $V$ be the variety of $2 \times 2$ matrices of rank $\leq 1$. We have \begin{equation}\label{notsing:2x2sing} V = \lbrace (x_{11},x_{21},x_{12},x_{22}) \in \mathbb{R}^4 : x_{11}x_{22}-x_{12}x_{21} = 0 \rbrace , \end{equation} so it is indeed an algebraic variety. In order to analyze its smoothness and compute the tangent space we recall the following result.
Let $h_i$, $i \in \lbrace 1, \hdots, k \rbrace$, be smooth functions $$ h_i : \mathbb{R}^l \to \mathbb{R},$$ with $k \leq l$. Define the set $M$ as $$ M = \lbrace x \in \mathbb{R}^{l} : h_1(x)=0,\; h_2(x)=0,\; \ldots,\; h_k(x)=0 \rbrace.$$ Then for a point $p \in M$ we construct the matrix $N(p)$, $$N(p) = \begin{bmatrix} \nabla h_1(p) \\ \nabla h_2(p) \\ \vdots \\ \nabla h_k(p) \end{bmatrix}, $$ where $\nabla h_i(p)$ is understood as the row vector $$\nabla h_i(p) = \left( \frac{\partial h_i}{\partial x_1}, \hdots, \frac{\partial h_i}{\partial x_l} \right).$$ A point $p$ is called nonsingular if $N(p)$ has maximal row rank. In this case, by the implicit function theorem, $M$ is locally a manifold (see \cite[Theorem 5.22]{lee-introman-2001}) and the tangent space at $p$ is given by $$T_p M = \lbrace v \in \mathbb{R}^l : N(p)v = 0 \rbrace .$$ Applying this to $V$ defined in \cref{notsing:2x2sing} we obtain $$N(x_{11}, x_{21}, x_{12}, x_{22}) = \begin{bmatrix} x_{22} & -x_{12} & -x_{21} & x_{11} \end{bmatrix}, $$ so $(0,0,0,0)$ is a singular point of $V$.
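The rank criterion above is easy to check for \cref{notsing:2x2sing}; the following minimal NumPy sketch (the helper `N` is ours, not from the text) evaluates $N(p)$ at a rank-one point and at the origin.

```python
import numpy as np

# Sketch: N(p) for the determinantal variety V = {x11 x22 - x12 x21 = 0},
# with coordinates ordered as p = (x11, x21, x12, x22).
def N(p):
    x11, x21, x12, x22 = p
    return np.array([[x22, -x12, -x21, x11]])

# A nonzero rank-one matrix, e.g. [[1, 2], [2, 4]], is a nonsingular point:
p_smooth = np.array([1.0, 2.0, 2.0, 4.0])  # det = 1*4 - 2*2 = 0, so p lies in V
assert np.linalg.matrix_rank(N(p_smooth)) == 1  # maximal row rank

# At the origin (the zero matrix) N vanishes, so the point is singular.
p_sing = np.zeros(4)
assert np.linalg.matrix_rank(N(p_sing)) == 0
```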
We desingularize it by considering $\widehat{V}$, defined as the set of pairs $(A,Y) \in \mathbb{R}^{2 \times 2} \times \mathbb{R}^2$ with coordinates $$A = \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix}, $$ and $$ Y = \begin{bmatrix} y_1\\ y_2 \end{bmatrix}, $$ satisfying $$AY = 0,$$ and $$Y^{\top} Y = 1.$$ Such a choice of equations for $Y$ is based on the Room--Kempf procedure described in \cite{naldi2015exact}, which suggests the equations $$AY=0, \quad BY=0,$$ with some fixed matrix $B$. Since the latter equation is numerically unstable, we use an orthogonality condition instead, which preserves the manifold property while making the computations more robust.
More explicitly we have $$\widehat{V} = \lbrace p: (x_{11}y_1+x_{12}y_2=0,\; x_{21}y_1+x_{22}y_2=0,\; y_1^2+y_2^2=1)\rbrace, $$ $$ p = (x_{11}, x_{21}, x_{12}, x_{22}, y_1, y_2) \in \mathbb{R}^6. $$ We find that the normal space at $p$ is spanned by the rows of the following matrix $N(p)$:
\begin{equation}\label{notsing:n2nullspace}
N(p) = \begin{bmatrix} y_1 & 0 & y_2 & 0 &x_{11} & x_{12}\\
0 & y_1 & 0 & y_2 & x_{21} & x_{22}\\
0 & 0 & 0 & 0 & 2y_1&2y_2 \\
\end{bmatrix}.
\end{equation}
Since $y_1^2+y_2^2=1$, the matrix in \cref{notsing:n2nullspace} clearly has rank $3$ at any point of $\widehat{V}$, which proves that $\widehat{V}$ is smooth. The projection $\pi$ is simply $$\pi : (x_{11}, x_{21}, x_{12}, x_{22}, y_1, y_2) \to (x_{11}, x_{21}, x_{12}, x_{22}),$$ whose image is the entire $V$. However, we would also like to estimate how close the tangent spaces are at nearby points. Recall that by the definition of the Grassmannian metric the distance between subspaces $C$ and $D$ is given by
$$d_{Gr}(C,D) := \|P_{C} - P_{D}\|_F,$$ where $P_{C}$ and $P_{D}$ are the orthogonal projectors onto the corresponding subspaces. Since {$P_{C^{\perp}} = I - P_C$, the distance between any two subspaces is equal to the distance between their orthogonal complements. \\ It is well known that the projection onto the subspace spanned by the rows of a matrix $M$ is given by $M^{\dagger}M$, where $M^{\dagger}$ is the pseudoinverse, which for matrices of full row rank is given by $$M^{\dagger} =M^{\top} (M M^{\top})^{-1}.$$ Hence, for two different points $p$ and $p'$ on the desingularized manifold we obtain
$$ \|P_{N(p)} - P_{N(p')}\|_F = \| N(p)^{\dagger}N(p) - N(p')^{\dagger} N(p') \|_F .$$
We will use the following classical result to estimate $\|P_{N(p)} - P_{N(p')}\|_F$ (we use it in the form appearing in \cite[Lemma~3.4]{dutta2017problem}, which is based on the $\sin \theta$ theorem of \cite{davis1970rotation}): \begin{equation} \label{notsing:projectorbound}
\| N(p)^{\dagger}N(p) - N(p')^{\dagger} N(p') \|_F \leq 2\max \lbrace \| N(p)^{\dagger} \|_2,
\| N(p')^{\dagger} \|_2 \rbrace \| N(p) - N(p') \|_F. \end{equation} In order to estimate the smoothness we need to estimate how $P_{N(p)}$ changes under small changes of $p$, for which it suffices to bound its gradient. Thus, we have to uniformly bound $\Vert N^{\dagger}\Vert_2$ from above, which is equivalent to bounding the minimal singular value of $N$, denoted $\sigma_{\min}(N)$, from below. By taking the defining equations of the desingularized manifold into account, we find that
\begin{equation}\label{notsing:n2sing}
N(p) N(p)^{\top} = \begin{bmatrix} 1+x_{11}^2+x_{12}^2 & x_{11} x_{21}+x_{12} x_{22}& 0 \\
x_{11} x_{21}+x_{12} x_{22} & 1+x_{21}^2+x_{22}^2 & 0 \\
0 & 0 & 4
\end{bmatrix}.
\end{equation}
Hence $\sigma_{\min}^2 (N(p)) \geq 1$ and $\| N(p)^{\dagger} \|_2 \leq 1 $. From the definition of $N(p)$ it follows that for $p=(A,Y)$ and $ p'=(A', Y')$ $$\| N(p) - N(p') \|_F \leq \sqrt{6}\| Y - Y' \|_F + \| A - A' \|_F,$$ and from \cref{notsing:projectorbound} we obtain
$$d_{Gr}(T_p \widehat{V}, T_{p'} \widehat{V}) \leq 2 \sqrt{6} (\|A-A'\|_F + \|Y-Y'\|_F).$$ We will derive and prove similar estimates for the general case in the next section. \subsection{General construction and estimation of curvature} \label{notsing:secdes} \begin{remark}\label{remark} We will often use the vectorization of matrices, which is the linear operator $$\mathrm{vec} : \mathbb{R}^{m \times n} \to \mathbb{R}^{mn \times 1}$$ that stacks the columns of a matrix into a single column vector. To further simplify notation, corresponding uppercase and lowercase letters denote a matrix and its vectorization, e.g. $p =\mathrm{vec}({P})$. We will also define the transposition operator $T_{m,n}$: $$T_{m,n} : \mathrm{vec}(X) \to \mathrm{vec}(X^{\top}),$$ for $X \in \mathbb{R}^{m \times n}$. \end{remark} Consider the variety $\mathcal{M}_{\le r}$ of $n \times m$ matrices of rank at most $r$, $$\mathcal{M}_{\le r} = \{ A \in \mathbb{R}^{n \times m}: \mathrm{rank}(A) \leq r \}.$$
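The conventions of \cref{remark} can be illustrated with a short NumPy sketch (the helpers `vec` and `T` are ours); it also checks the identity $\mathrm{vec}(AXB) = (B^{\top} \otimes A)\,\mathrm{vec}(X)$, the Kronecker-product property used repeatedly in the derivations that follow.

```python
import numpy as np

# Sketch of the notation: vec stacks columns; T(m, n) is the permutation
# matrix with T(m, n) @ vec(X) = vec(X^T) for X in R^{m x n}.
def vec(X):
    return X.reshape(-1, order="F")  # column-major (column-stacking) order

def T(m, n):
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            P[j + i * n, i + j * m] = 1.0  # sends entry X[i, j] to (X^T)[j, i]
    return P

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 5))
assert np.allclose(T(3, 5) @ vec(X), vec(X.T))

# The standard identity vec(A X B) = (B^T kron A) vec(X).
A, B = rng.standard_normal((4, 3)), rng.standard_normal((5, 2))
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))
```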
We recall the following classical result \cite[Theorem 10.3.3]{lakshmibai2015grassmannian}. \begin{lemma} $A \in \mathcal{M}_{\le r}$ is a singular point if and only if $A$ has rank smaller than $r$. \end{lemma} By definition, the dimension of a variety $X$ is equal to the dimension of the manifold $X \setminus X_{sing}$, where $X_{sing}$ is the set of all singular points of $X$ \cite{griffiths2014principles}. In the case of $\mathcal{M}_{\le r}$ we find that $$\dim \mathcal{M}_{\le r} = \dim \mathcal{M}_{=r},$$ where $$\mathcal{M}_{=r} =\{ A \in \mathbb{R}^{n \times m}: \mathrm{rank}(A) = r \}$$ is known to be a manifold of dimension $(n+m)r - r^2$ (see e.g. \cite[Proposition 2.1]{vandereycken2013low}).
Now we return to the main topic of the paper. \\ Let $Gr(m-r,m)$ be the Grassmann manifold: $$Gr(m-r,m) = \mathbb{R}_{*}^{m,m-r} / GL_{m-r},$$ where $\mathbb{R}_{*}^{m,m-r}$ is the noncompact Stiefel manifold $$\mathbb{R}_{*}^{m,m-r} = \lbrace Y \in \mathbb{R}^{m \times (m-r)}: Y \text{ full rank} \rbrace, $$ and $GL_{m-r}$ is the group of invertible $(m-r) \times (m-r)$ matrices. \\ It is known \cite{lee-introman-2001} that $$\dim Gr(m-r,m) = r(m-r).$$
We propose the following desingularization $\mathcal{\widehat{M}}_r$ of $\mathcal{M}_{\le r}$: \begin{equation} \label{notsing:grassm} \mathcal{\widehat{M}}_r = \lbrace (A,Y) \in \mathbb{R}^{n \times m} \times Gr(m-r,m) : AY=0 \rbrace , \end{equation} and prove the following theorem. \begin{theorem} $\mathcal{\widehat{M}}_r$ as defined by \cref{notsing:grassm} is a smooth manifold of dimension $(n+m)r - r^2$. \end{theorem} \begin{proof} Let $U_{\alpha}$ be a local chart of $Gr(m-r,m)$. To prove the theorem it suffices to show that $\mathcal{\widehat{M}}_r \cap( \mathbb{R}^{n \times m} \times U_{\alpha})$ is a smooth manifold for all $\alpha$. Without loss of generality let us assume that the coordinates of $Y \in Gr(m-r,m) \cap U_\alpha$ are given by $$ Y = \begin{bmatrix} I_{m-r} \\ Y_{\alpha} \\ \end{bmatrix}, $$ where $$ Y_{\alpha} = \begin{bmatrix} \alpha_{1,1} & \alpha_{1,2} & \hdots &\alpha_{1,m-r} \\ \alpha_{2,1} & \alpha_{2,2} & \hdots &\alpha_{2,m-r} \\ \hdots & \hdots & \hdots & \hdots \\ \alpha_{r,1} & \alpha_{r,2} & \hdots &\alpha_{r,m-r} \\ \end{bmatrix}. $$ In this chart equation \cref{notsing:grassm} reads \begin{equation} \label{eq:local-grassm} A \begin{bmatrix} I_{m-r} \\ Y_{\alpha} \\ \end{bmatrix} = 0. \end{equation} Splitting $A$ as $$A = \begin{bmatrix} A_1 & A_2 \end{bmatrix}, $$ where $$A_1 \in \mathbb{R}^{n \times (m-r)}, \quad A_2 \in \mathbb{R}^{n \times r},$$ and using properties of the Kronecker product $\otimes$, we obtain that the Jacobian matrix of \cref{eq:local-grassm} is equal to $$ \begin{bmatrix} I_{n(m-r)} & Y_{\alpha}^{\top} \otimes I_n & I_{m-r} \otimes A_2 \\ \end{bmatrix}, $$ which is clearly of full rank, since it contains an identity block. To conclude the proof we note that $$\dim \mathcal{\widehat{M}}_r = \underbrace{nm + (m-r)r}_{\text{number of variables}} -\underbrace{n(m-r)}_{\text{number of equations}} = (n+m)r - r^2,$$ as desired. 
\end{proof} The use of $\mathcal{\widehat{M}}_r$ is justified by the following simple lemma. \begin{lemma} The following statements hold: \begin{itemize} \item If $(A, Y) \in \mathcal{\widehat{M}}_r$ then $A \in \mathcal{M}_{\le r}$, \item If $A \in \mathcal{M}_{\le r}$ then there exists $Y$ such that $(A,Y) \in \mathcal{\widehat{M}}_r$. \end{itemize} \end{lemma} \begin{proof} Both statements follow immediately from the equation $$AY = 0,$$ which, since $Y$ has full rank $m-r$, holds if and only if the nullspace of $A$ has dimension at least $m-r$. \end{proof}
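The second statement of the lemma is constructive: a suitable $Y$ can be read off from the SVD of $A$. The following minimal NumPy sketch (ours, with an illustrative random test matrix) builds $Y$ from the trailing right singular vectors.

```python
import numpy as np

# Sketch of the lemma: any A of rank <= r admits Y of full rank m - r with
# A Y = 0; the last m - r right singular vectors of A give such a Y.
rng = np.random.default_rng(1)
n, m, r = 6, 5, 2
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))  # rank r

_, _, Vt = np.linalg.svd(A)
Y = Vt[r:].T                       # m x (m - r), orthonormal columns
assert Y.shape == (m, m - r)
assert np.allclose(A @ Y, 0.0, atol=1e-10)   # (A, Y) satisfies A Y = 0
assert np.allclose(Y.T @ Y, np.eye(m - r))   # Y has full rank m - r
```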
We would like to construct a Newton method on the manifold $\mathcal{\widehat{M}}_r$. In order to work with quotient manifolds such as $Gr(m-r,m)$, the conventional approach is to use the total space of the quotient. The tangent space is then handled via the concept of a \emph{horizontal space} (sometimes referred to as a \emph{gauge condition}), which is isomorphic to the tangent space of the quotient manifold. This approach is explained in great detail in \cite{absil2009optimization}. Although we will not go into the details of these concepts, we will apply them to $\mathcal{\widehat{M}}_r$ in the next section. \subsection{Tangent space of $\mathcal{\widehat{M}}_r$} \label{sec:ts-desing} For our analysis, it is more convenient to employ the following representation of the Grassmannian: \begin{equation} \label{eq:grassm-stief} Gr(m-r,m) = St(m-r,m)/O_{m-r}, \end{equation} where $St(m-r,m)$ is the orthogonal Stiefel manifold $$St(m-r,m) = \lbrace Y \in \mathbb{R}^{m,m-r}_{*}: Y^{\top} Y = I_{m-r} \rbrace,$$ and $O_{m-r}$ is the orthogonal group. }
Let $\pi$ be the quotient map of \cref{eq:grassm-stief} and let $\text{id} \times \pi$ be the obvious map $$\mathbb{R}^{n \times m} \times St(m-r,m) \to \mathbb{R}^{n \times m} \times Gr(m-r,m).$$ It is easy to see that $\mathcal{\widehat{M}}_r^{\text{tot}} \coloneqq (\text{id} \times \pi)^{-1} (\mathcal{\widehat{M}}_r)$ is the following manifold: \begin{equation} \label{eq:defining} \mathcal{\widehat{M}}_r^{\text{tot}} = \lbrace (A,Y) \in \mathbb{R}^{n \times m} \times \mathbb{R}^{m, m-r}_{*} : A Y = 0,\quad Y^{\top} Y = I_{m-r} \rbrace. \end{equation}
Let us now compute the horizontal distribution on $\mathcal{\widehat{M}}_r^{\text{tot}}$. As described in \cite[Example 3.6.4]{absil2009optimization}, in the case of the projection $$\pi: \mathbb{R}_{*}^{m, m-r} \to Gr(m-r,m),$$ $$ Y \to \text{span} (Y),$$ the horizontal space at $Y$ is defined as the following subspace of $T_Y \mathbb{R}_{*}^{m, m-r}$: \begin{equation}\label{eq:gauge} \lbrace \delta Y \in T_Y \mathbb{R}_{*}^{m,m-r} : (\delta Y)^{\top} Y = 0 \rbrace . \end{equation} It immediately follows that in the case of \cref{eq:defining} the horizontal space at $(A,Y)$ is equal to $$\mathcal{H}(A,Y) = T_{(A,Y)}(\mathcal{\widehat{M}}_r^{\text{tot}}) \cap \mathcal{H}_{\text{Gr}}(A,Y),$$ where $\mathcal{H}_{\text{Gr}}(A,Y)$ is defined, analogously to \cref{eq:gauge}, as \begin{equation}\label{eq:true-gauge} \mathcal{H}_{\text{Gr}}(A,Y) = \lbrace (\delta A, \delta Y) \in T_{(A,Y)}(\mathbb{R}^{n \times m}\times \mathbb{R}_{*}^{m, m-r}) : (\delta Y)^{\top} Y = 0 \rbrace. \end{equation} Note that the dimension of $\mathcal{H}$ is equal to the dimension of $\mathcal{\widehat{M}}_r$, since it is, by construction, isomorphic to $T\mathcal{\widehat{M}}_r$. We now proceed to one of the main results of the paper. \begin{theorem} The orthogonal projection onto $\mathcal{H}(A,Y)$ is Lipschitz continuous with respect to $(A,Y)$, and its Lipschitz constant is no greater than $2 (\sqrt{n} + \sqrt{m-r})$ in the Frobenius norm. \end{theorem} \begin{proof} In order to prove the theorem, we first need to find the equations of $\mathcal{H}(A,Y)$. Recall the defining equations \cref{eq:defining} of $\mathcal{\widehat{M}}_r^{\text{tot}}$ and that for a given $p=(A, Y)$ the tangent space is the nullspace of the gradient of the constraints. By taking into account the gauge condition \cref{eq:true-gauge} we find that
$$\mathcal{H}(A,Y) = \{v: N(p) v = 0 \},$$
where the matrix $N(p)$
has the following block structure:
\begin{equation}\label{notsing:nullspace2}
N(p) = \begin{bmatrix} Y^{\top} \otimes I_n & I_{m-r} \otimes A \\
0 & I_{m-r} \otimes Y^{\top}
\end{bmatrix}.
\end{equation}
For simplicity of notation we will omit $p$ in $N(p)$.
The projection of a given vector $z$ onto the horizontal space is given by the following formula
\begin{equation}\label{notsing:tangent2}
v = (I - N^{\top} (N N^{\top})^{-1} N) z = P_{N} z,
\end{equation}
where
$$P_{N} = (I - N^{\dagger} N ),$$
is the orthogonal projector onto the nullspace of $N$ (the orthogonal complement of its row space). Using exactly the same idea as in the previous section we estimate
$\sigma_{\min}(N)$ from below.
Consider the Gram matrix
\begin{equation*}
\begin{split}
Z = N N^{\top} = \begin{bmatrix} Y^{\top} \otimes I_{n} & I_{m-r} \otimes A \\
0 & I_{m-r} \otimes Y^{\top}
\end{bmatrix}
\begin{bmatrix}
Y \otimes I_n & 0 \\
I_{m-r} \otimes A^{\top}& I_{m-r} \otimes Y
\end{bmatrix} = \\
=\begin{bmatrix} Y^{\top} Y \otimes I_n + I_{m-r} \otimes A A^{\top} & I_{m-r} \otimes AY \\
I_{m-r} \otimes Y^{\top} A^{\top} & I_{m-r} \otimes Y^{\top} Y
\end{bmatrix}.
\end{split}
\end{equation*}
Now we recall that \cref{eq:defining} holds at each point of the manifold $\mathcal{\widehat{M}}_r^{\text{tot}}$, therefore
\begin{equation}\label{notsing:zmatrix}
Z = \begin{bmatrix} I + I_{m-r} \otimes AA^{\top} & 0 \\
0 & I \end{bmatrix}.
\end{equation}
It is obvious that $\sigma_{\min}(Z) \geq 1$ since it has the form $I + DD^{\top}$.
Finally, $\sigma^2_{\min}(N) = \sigma_{\min}(Z) \geq 1$, therefore
\begin{equation}\label{notsing:sigmabound}
\sigma_{\min}(N) = \sigma_{\min}(N^{\top}) \geq 1, \quad \Vert (N^{\top})^{\dagger} \Vert_2 \leq 1.
\end{equation}
Putting \cref{notsing:sigmabound} into \cref{notsing:projectorbound} we get
$$
\Vert P_N - P_{N'} \Vert_F \leq 2 \Vert N - N' \Vert_F,
$$
with $N = N(A,Y), N' = N(A', Y')$.
Finally, we need to estimate how $N$ changes under the change of $A$ and $Y$.
We have
$$
N - N' = \begin{bmatrix} (Y - Y') \otimes I_n & I_{m-r} \otimes (A - A') \\ 0 & I_{m-r} \otimes (Y - Y')^{\top}
\end{bmatrix},
$$
therefore
$$
\Vert N - N' \Vert_F \leq (\sqrt{n}+\sqrt{m-r}) \Vert Y - Y' \Vert_F + \sqrt{m-r} \Vert A - A' \Vert_F.
$$ Thus \begin{dmath}
d_{Gr}(\mathcal{H}(A',Y'), \mathcal{H}(A,Y)) = \|P_N - P_{N'}\|_F \leq 2 \|N-N'\|_F \leq 2 (\sqrt{n}+\sqrt{m-r}) (\Vert Y - Y' \Vert_F + \Vert A - A' \Vert_F). \end{dmath} \end{proof}
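The bounds in the proof are easy to test numerically. The sketch below (our own check, for illustrative sizes $n=7$, $m=6$, $r=2$) assembles $N(p)$ from \cref{notsing:nullspace2} with Kronecker products and verifies $\sigma_{\min}(N) \geq 1$ together with the projector estimate \cref{notsing:projectorbound}.

```python
import numpy as np

# Sketch: build N(p) at points of the total space (A Y = 0, Y^T Y = I) and
# check sigma_min(N) >= 1 and the 2 ||N - N'||_F projector-distance bound.
rng = np.random.default_rng(2)
n, m, r = 7, 6, 2
k = m - r

def random_point():
    A = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))  # rank r
    _, _, Vt = np.linalg.svd(A)
    return A, Vt[r:].T             # Y: orthonormal kernel basis, A Y = 0

def N(A, Y):
    top = np.hstack([np.kron(Y.T, np.eye(n)), np.kron(np.eye(k), A)])
    bot = np.hstack([np.zeros((k * k, n * m)), np.kron(np.eye(k), Y.T)])
    return np.vstack([top, bot])

def proj_rows(M):                  # orthogonal projector onto the row space
    return np.linalg.pinv(M) @ M

Na = N(*random_point())
Nb = N(*random_point())
assert np.linalg.svd(Na, compute_uv=False).min() >= 1 - 1e-8
dist = np.linalg.norm(proj_rows(Na) - proj_rows(Nb))   # Frobenius norm
assert dist <= 2 * np.linalg.norm(Na - Nb) + 1e-8
```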
For small $r$, $$(m+n)r - r^2 \ll nm,$$ so to fully utilize the properties of $\mathcal{\widehat{M}}_r$ in computations we first have to find an explicit basis of the horizontal space. This will be done in the next section.
\subsection{Parametrization of the tangent space} \label{notsing:tspar} To work with low-rank matrices it is very convenient to represent them using the truncated singular value decomposition (SVD). Namely, for $A \in \mathcal{M}_{\le r}$ we have $$A = U S V^{\top},$$ with $U$ and $V$ having $r$ orthonormal columns and $S$ being a diagonal matrix. Using this notation we find that the following result holds: \begin{theorem}\label{notsing:theoremQ} An orthonormal basis of the kernel of $N$ from \cref{notsing:nullspace2} is given by the columns of the following matrix $Q$: $$Q = \begin{bmatrix} V \otimes I_n & - Y \otimes (U S_1) \\ 0 & I_{m-r} \otimes (V S_2)\end{bmatrix},$$ where
$S_1$ and $S_2$ are diagonal matrices defined as
$$ S_1 =S(S^2 + I_r)^{-\frac{1}{2}}, \quad S_2 = (S^2 + I_r)^{-\frac{1}{2}}. $$
\end{theorem} \begin{proof} It suffices to verify that $Q^{\top} Q = I$ and $NQ = 0$, which is done by direct multiplication. The number of columns of $Q$ is $nr + (m-r) r$, which is exactly the dimension of $\mathcal{H}(A,Y)$. \end{proof} We will now use the smoothness of $\mathcal{\widehat{M}}_r$ to develop an optimization algorithm over $\mathcal{M}_{\le r}$. The idea of using the kernel of a matrix in optimization problems has appeared before \cite{markovsky2011low, markovsky2013structured}. The algorithm constructed there is a variable-projection-like method with $O(m^3)$ complexity per iteration, where $m$ is the number of columns of the matrix. We explain this approach in more detail in \cref{sec:related}. \section{Newton method} \label{notsing:secnewton} \subsection{Basic Newton method} \label{notsing:basicnewton} Consider the optimization problem $$F(A) \rightarrow \min, \quad \mbox{s.t. } A \in \mathcal{M}_{\le r},$$ where $F$ is twice differentiable. Using the idea described in \cref{notsing:intro}, this problem is equivalent to $$\widehat{F}(A, Y) \rightarrow \min, \quad \mbox{s.t. } (A, Y) \in \mathcal{\widehat{M}}_r,$$ with $$\widehat{F}(A,Y) = F(A).$$ Following the approach described in e.g. \cite[Section 4.9]{absil2009optimization} we solve this problem by lifting it to the total space $\mathcal{\widehat{M}}_r^{\text{tot}}$ defined by \cref{eq:defining}, with the additional condition that the search direction lies in the horizontal bundle $\mathcal{H}$, that is $$\widetilde{F}(A,Y) \rightarrow \min, \quad \mbox{s.t. } (A, Y) \in \mathcal{\widehat{M}}_r^{\text{tot}}, $$ $$\widetilde{F}(A,Y) = F(A), $$ \begin{equation}\label{eq:small_gauge} (\delta A, \delta Y) \in \mathcal{H}(A,Y). \end{equation}
To solve it we rewrite it using the method of Lagrange multipliers, with the additional constraint \cref{eq:small_gauge}. Taking into account the defining equations \cref{eq:defining} of $\mathcal{\widehat{M}}_r^{\text{tot}}$, the Lagrangian for the constrained optimization problem reads $$\mathcal{L}(A, Y, \Lambda, M) = F(A) + \langle AY, \Lambda \rangle + \frac{1}{2} \langle M, Y^{\top} Y - I \rangle,$$ where $\Lambda \in \mathbb{R}^{n \times (m-r)}$ and $M \in \mathbb{R}^{(m-r) \times (m-r)},$ $M^{\top}=M$, are the Lagrange multipliers. We now find the first-order optimality conditions. \subsection{First order optimality conditions} By differentiating $\mathcal{L}$ we find the following equations $$ \nabla F + \Lambda Y^{\top} = 0, \quad Y M + A^{\top} \Lambda = 0, \quad AY = 0, \quad Y^{\top} Y = I. $$ Multiplying the second equation by $Y^{\top}$ from the left and using the equations $AY=0$ and $Y^{\top} Y = I$, we find that $M=0$. Thus, the first-order optimality conditions reduce to \begin{equation}\label{notsing:firstorder} \nabla F + \Lambda Y^{\top} = 0, \quad A^{\top} \Lambda = 0, \quad AY = 0, \quad Y^{\top} Y = I. 
\end{equation} \subsection{Newton method and the reduced Hessian system} Now we can write down the Newton method for the system \cref{notsing:firstorder}, which can be written in the saddle point form \begin{equation}\label{notsing:newton1} \begin{bmatrix} \widehat{G} & N^{\top} \\ N & 0 \end{bmatrix} \begin{bmatrix} \delta z \\ \delta \lambda \end{bmatrix} = \begin{bmatrix} f\\ 0 \end{bmatrix} , \end{equation} with $$f = -\mathrm{vec}(\nabla F + \Lambda Y^{\top}),$$ where we assumed that the initial point satisfies the constraints ($AY = 0, Y^{\top} Y = I_{m-r}$), the vectors $\delta z$ and $\delta \lambda $ are $$ \delta z = \begin{bmatrix} \mathrm{vec}(\delta A) \\ \mathrm{vec}(\delta Y) \end{bmatrix}, \quad \delta \lambda = \begin{bmatrix} \mathrm{vec}(\delta \Lambda) \\ \mathrm{vec}(\delta M) \end{bmatrix}, $$ and the matrix $\widehat{G}$ in turn has a saddle-point structure: $$ \widehat{G} = \begin{bmatrix}H & C \\ C^{\top} & 0 \end{bmatrix}, $$ where $H = \nabla^2 F$ is the ordinary Hessian, and $C$ comes from differentiating the term $\Lambda Y^{\top}$ with respect to $Y$ and will be derived later in the text. The constraints on the search direction $\delta z$ are written as $$ N \delta z = 0, $$ with $$ N = \begin{bmatrix} Y^{\top} \otimes I_n & I_{m-r} \otimes A \\ 0 & I_{m-r} \otimes Y^{\top} \end{bmatrix}, $$ which means that $\delta z$ lies in $\mathcal{H}(A,Y)$ as desired. In what follows our approach is similar to the null space methods described in \cite[Section 6]{benzi2005numerical}. Using the parametrization via the matrix $Q$ defined in \cref{notsing:theoremQ} we write $\delta z=Q \delta w$. \\ The first block row of system \cref{notsing:newton1} reads $$\widehat{G} Q \delta w + N^{\top} \delta \lambda = f.$$ Multiplying by $Q^{\top}$ we can eliminate $\delta \lambda$, which leads to the reduced Hessian equation \begin{equation} \label{reducedHess} Q^{\top} \widehat{G} Q \delta w = Q^{\top} f. 
\end{equation} Note that $Q^{\top} \widehat{G} Q$ is a small $((n+m)r-r^2) \times ((n+m)r-r^2)$ matrix, as claimed. We would now like to simplify equation \cref{reducedHess}. Using the transposition operator defined in \cref{remark} we find that the matrix $C$ can be written as $$C = (I_m \otimes \Lambda) T_{m,m-r}.$$ An important property of the matrix $C$ is that if $Q_{12} = -Y \otimes (US_1)$ is the $(1, 2)$ block of the matrix $Q$, then $$Q_{12}^{\top} C = 0 $$ whenever $$A^{\top} \Lambda = 0,$$ which is again verified by direct multiplication using the properties of the Kronecker product. The direct evaluation of the product $$\widehat{G}^{loc} = Q^{\top} \widehat{G} Q$$ (together with the property above) gives \begin{equation}\label{notsing:Gdef} \widehat{G}^{loc} = \begin{bmatrix} Q^{\top}_{11} H Q_{11} & Q^{\top}_{11} H Q_{12} + Q^{\top}_{11} C Q_{22} \\ Q^{\top}_{12} H Q_{11}+ Q^{\top}_{22} C^{\top} Q_{11} & Q^{\top}_{12} H Q_{12} \end{bmatrix}, \end{equation} and the system we need to solve has the form \begin{equation}\label{notsing:mainnewton} \begin{bmatrix} Q^{\top}_{11} H Q_{11} & Q^{\top}_{11} H Q_{12} + Q^{\top}_{11} C Q_{22} \\ Q^{\top}_{12} H Q_{11}+ Q^{\top}_{22} C^{\top} Q_{11} & Q^{\top}_{12} H Q_{12} \end{bmatrix}\begin{bmatrix} \delta u \\ \delta p \end{bmatrix} = \begin{bmatrix} Q^{\top}_{11} f \\ Q^{\top}_{12} f \end{bmatrix}, \end{equation} with $$ \delta U \in \mathbb{R}^{n \times r}, \quad \delta P \in \mathbb{R}^{r \times (m - r)}. $$ We also need to estimate $\Lambda$. Recall that to get $Q_{12}^{\top} C = 0$ we have to require that $A^{\top} \Lambda = 0$ exactly, thus $$\Lambda = Z \Phi, $$ where $Z$ is an orthonormal basis of the left nullspace of $A$, and $\Phi$ is defined from the minimization problem $$\Vert \nabla F + Z \Phi Y^{\top} \Vert \rightarrow \min,$$ i.e. 
$$\Phi = -Z^{\top} \nabla F Y,$$ and $$\Lambda = -ZZ^{\top} \nabla F Y.$$ Note that $f$ is then just the standard projection of $\nabla F$ onto the tangent space: $$f = -\mathrm{vec} (\nabla F - ZZ^{\top} \nabla F YY^{\top}) = -\mathrm{vec}(\nabla F - (I - UU^{\top}) \nabla F (I - VV^{\top})),$$ which is always the vectorization of a matrix of rank not larger than $2r$. Moreover, \begin{equation}\label{notsing:g1def} g_1 = Q^{\top}_{11} f = (V^{\top} \otimes I) f = \end{equation} $$ - \mathrm{vec}((\nabla F - (I - UU^{\top}) \nabla F (I - VV^{\top})) V) = \mathrm{vec}(-\nabla F V), $$ and the second component \begin{equation}\label{notsing:g2def} g_2 = Q^{\top}_{12} f = -(Y^{\top} \otimes (US_1)^{\top}) f = \end{equation} $$ \mathrm{vec}((U S_1)^{\top}(\nabla F - (I - UU^{\top}) \nabla F (I - VV^{\top}))Y) = \mathrm{vec}( S^{\top}_1 U^{\top}\nabla F Y).$$ The solution is recovered from $\delta u$, $\delta p$ as $$\delta a = (V \otimes I_n) \delta u - (Y \otimes (US_1)) \delta p,$$ or in matrix form, $$\delta A = \delta U V^{\top} - U S_1 \delta P Y^{\top},$$ and the error in $A$ (which we are interested in) is given by $$\Vert \delta A \Vert_F^2 = \Vert \delta U \Vert_F^2 + \Vert S_1 \delta P \Vert_F^2.$$ We can further simplify the off-diagonal block. Consider $$\widehat{C} = Q^{\top}_{11} C Q_{22} = (V^{\top} \otimes I) (I \otimes \Lambda) T (I \otimes V) (I \otimes S_2).$$ Then multiplication of this matrix by a vector takes the form $$ \mathrm{mat}(\widehat{C} \mathrm{vec}(\Phi)) = \Lambda (V S_2 \Phi)^{\top} V = \Lambda \Phi^{\top} S^{\top}_2 V^{\top} V = \Lambda \Phi^{\top} S^{\top}_2,$$ thus $$ \widehat{C} = (S_2 \otimes \Lambda) T_{r, m-r}. 
$$ \subsection{Retraction}\label{sec:retraction} Note that since we assumed that the initial points satisfy the constraints \begin{equation} \label{notsing:deseqs} AY = 0,\quad Y^{\top} Y = I_{m-r}, \end{equation} after each step of the Newton algorithm we have to perform a retraction back to the manifold $\mathcal{\widehat{M}}_r^{tot}$. One possible retraction is the following. Define a map $$R: \mathcal{\widehat{M}}_r^{tot} \oplus \mathcal{H} \to \mathcal{\widehat{M}}_r^{tot},$$ $$R((A,Y),(\delta A, \delta Y)) = (R_1(A, \delta A), R_2(Y, \delta Y)),$$ $$R_2(Y,\delta Y) = \textbf{qf}(Y + \delta Y) = Y_1,$$ $$R_1(A,\delta A) = (A + \delta A)(I - Y_1 Y_1^{\top}),$$ where $\textbf{qf}(\xi)$ denotes the Q factor of the QR-decomposition of $\xi$; this is a standard second-order retraction on the Stiefel manifold \cite[Example 4.1.3]{absil2009optimization}.
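A minimal NumPy sketch of this retraction (the helper `retract` is ours; here we apply the projection to the updated matrix $A + \delta A$, and the Q factor is taken from a thin QR decomposition, which agrees with $\mathbf{qf}$ up to column signs):

```python
import numpy as np

# Sketch: retract a perturbed pair back to the total space, so that the
# constraints A1 Y1 = 0 and Y1^T Y1 = I hold again after a Newton step.
def retract(A, dA, Y, dY):
    Y1, _ = np.linalg.qr(Y + dY)                       # qf(Y + dY), up to signs
    A1 = (A + dA) @ (np.eye(Y.shape[0]) - Y1 @ Y1.T)   # kill components along Y1
    return A1, Y1

rng = np.random.default_rng(3)
n, m, r = 6, 5, 2
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
_, _, Vt = np.linalg.svd(A)
Y = Vt[r:].T                       # point on the total space: A Y = 0
dA = 1e-2 * rng.standard_normal((n, m))
dY = 1e-2 * rng.standard_normal((m, m - r))

A1, Y1 = retract(A, dA, Y, dY)
assert np.allclose(Y1.T @ Y1, np.eye(m - r))   # back on the Stiefel manifold
assert np.allclose(A1 @ Y1, 0.0, atol=1e-12)   # constraint A1 Y1 = 0 restored
```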
In the fast version of the Newton method, which will be derived later, we will also use the standard SVD-based retraction, which acts on the matrix $A+\delta A$ simply by truncating its SVD to rank $r$. It is known that, given the SVD of the matrix $A$, for certain small corrections $\delta A$ the SVD of $A+\delta A$ can be recomputed at low computational cost, as described in \cite[\S 3]{vandereycken2013low}. This is also known to be a second-order retraction \cite{absil2012projection}. We denote this operation by $R_{\text{SVD}}(A, \delta A)$. \subsection{Basic Algorithm} The basic Newton method on the manifold $\mathcal{\widehat{M}}_r$ is summarized in the following algorithm. \\ \begin{algorithm}[H]
\caption{Newton method} \label{notsing:alg1}
\begin{algorithmic}[1]
\STATE{Initial conditions $A_0,Y_0$, functional $F(A)$, and tolerance $\varepsilon$}
\\
\STATE{Result: minimum of $F$ on $\mathcal{M}_{\le r}$}
\WHILE{$\Vert \delta U^i \Vert^2_F + \Vert (S_1)^i \delta P^i \Vert^2_F>\varepsilon$}
\STATE{
$U^i,S^i,V^i$ = svd($A^i$)}
\\
\STATE{Solve
$\widehat{G}^{loc}
\begin{bmatrix}
\delta u^i \\
\delta p^i
\end{bmatrix}
=
\begin{bmatrix}
g_1 \\
g_2
\end{bmatrix}
$ where $\widehat{G}^{loc}, g_1, g_2 $ are defined by formulas \cref{notsing:Gdef,notsing:g1def,notsing:g2def}}
\\
\STATE{
$\delta A^i = \delta U^i V^{i \top} - U^i (S_1)^i \delta P^i Y^{i \top}$
\\
$\delta Y^i = V^i (S_2)^i \delta P^i$}
\\
\STATE{
$A^{i+1},Y^{i+1} = R((A^i,\delta A^i),(Y^i,\delta Y^i))$}
\\
\STATE{
$i = i +1$ }
\ENDWHILE \RETURN{$A^i$} \end{algorithmic} \end{algorithm} This algorithm, however, shows that our approach is rather inefficient in terms of memory and complexity: storing $Y$ and multiplying by it costs $O(m^2)$ operations instead of the desired $O((n+m)r)$. We resolve this issue in the next section. Analysis of the convergence and of the behavior of the algorithm near points $(A,Y)$ corresponding to matrices of strictly smaller rank is performed in \cref{sec:behavior}. \subsection{Semi-implicit parametrization of the tangent space} \label{sec:trick} Let us introduce a new variable $$\delta \Phi^{\top} = Y \delta P^{\top}, $$ $$\delta \Phi \in \mathbb{R}^{r \times m}.$$ This results in an implicit constraint on $\delta \Phi$: $$ \delta \Phi V= 0.$$ In order to make an arbitrary $\delta \Phi$ satisfy it, we first multiply it by the projection operator $I - VV^{\top}$, $$ \delta \Phi' = \delta \Phi (I-VV^{\top}),$$ or in matrix form $$\begin{bmatrix} \delta u \\ \delta \phi' \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & (I-VV^{\top}) \otimes I \end{bmatrix} \begin{bmatrix} \delta u \\ \delta \phi \end{bmatrix}. $$ Notice also that $$\delta P = \delta \Phi Y,$$ and again using the properties of the Kronecker product we obtain $$ \begin{bmatrix} \delta u \\ \delta p \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & Y^{\top}\otimes I \end{bmatrix} \begin{bmatrix} \delta u \\ \delta \phi \end{bmatrix}. $$ Denote $$ \Pi = \begin{bmatrix} I & 0 \\ 0 & (I-VV^{\top}) \otimes I \end{bmatrix}, $$ $$ W = \begin{bmatrix} I & 0 \\ 0 & Y^{\top}\otimes I \end{bmatrix}. $$ The equations for the Newton method in the new variables take the following form: \begin{equation} \label{notsing:advnewton} \Pi^{\top}W^{\top} \widehat{G}^{loc} W\Pi \begin{bmatrix}
\delta u \\
\delta \phi \end{bmatrix} = \Pi^{\top} W^{\top} \begin{bmatrix}
g_1 \\
g_2 \end{bmatrix}, \end{equation} where $g_1,g_2, \widehat{G}^{loc}$ are as in \cref{notsing:g1def,notsing:g2def,notsing:Gdef}, and the linear system in \cref{notsing:advnewton} is of size $(n+m)r$. \\ \subsection{Iterative method} For large $n$ and $m$, forming the full matrix in \cref{notsing:advnewton} is computationally expensive, so we switch to iterative methods. To implement the matvec operation we first need to simplify $$ \begin{bmatrix} l_1 \\ l_2 \end{bmatrix}=\Pi^{\top} W^{\top} \widehat{G}^{loc} W \Pi \begin{bmatrix} \delta u \\ \delta \phi \end{bmatrix}. $$ Direct computation shows that $$ \begin{bmatrix} l_1 \\ l_2 \end{bmatrix} = \Pi^{\top} W^{\top} \left[ \begin{array}{c} (V^{\top} \otimes I) H (V \otimes I) \delta u - \\ (V^{\top} \otimes I)H(I \otimes U ) \mathrm{vec}(S_1 \delta \Phi (I - VV^{\top}))-\\ \mathrm{vec}((I-UU^{\top})\nabla F (I-V V^{\top}) \delta \Phi^{\top} S_2) \\ \\ -(Y^{\top} \otimes I) (I \otimes S_1) (I \otimes U^{\top}) H (V \otimes I) \delta u +\\ (Y^{\top} \otimes I) \mathrm{vec}(S_2 (\delta U)^{\top} (-(I-U U^{\top}) \nabla F)) + \\ (Y^{\top} \otimes I)(I \otimes S_1) (I\otimes U^{\top} ) H (I \otimes U) \mathrm{vec} (S_1 \delta \Phi (I-VV^{\top})) \end{array} \right], $$ and the right hand side has the following form: $$\begin{bmatrix} g_1^{\prime} \\ g_2^{\prime} \end{bmatrix} = \Pi^{\top} W^{\top} \begin{bmatrix} -\mathrm{vec} \nabla F V \\ (Y^{\top} \otimes I) \mathrm{vec} (S_1 U^{\top} \nabla F) \end{bmatrix}. 
$$ Since both $\Pi$ and $W$ act only on the second block, it is easy to derive the final formulas: \begin{dmath} \label{eq:mv1} l_1 = (V^{\top} \otimes I) H (V \otimes I) \delta u - (V^{\top} \otimes I)H(I \otimes U ) \mathrm{vec}(S_1 \delta \Phi (I - VV^{\top}))-\mathrm{vec}((I-UU^{\top})\nabla F (I-V V^{\top}) \delta \Phi^{\top} S_2), \end{dmath} \begin{dmath} \label{eq:mv2} l_2 = -(I - VV^{\top} \otimes I) (I \otimes S_1) (I \otimes U^{\top}) H (V \otimes I) \delta u + \mathrm{vec}(S_2 (\delta U)^{\top} (-(I-U U^{\top}) \nabla F (I - V V^{\top}))) +(I-VV^{\top}\otimes I)(I \otimes S_1) (I\otimes U^{\top} ) H (I \otimes U) \mathrm{vec} (S_1 \delta \Phi(I-VV^{\top})), \end{dmath} \begin{equation}\label{eq:g1_mv} g_1^{\prime} = -\mathrm{vec} \nabla F V, \end{equation} \begin{equation}\label{eq:g2_mv} g_2^{\prime} = \mathrm{vec} (S_1 U^{\top} \nabla F (I-VV^{\top})). \end{equation} Note that in the new variables we obtain $$\delta A = \delta U V^{\top} - U S_1 \delta \Phi,$$ and $$A+\delta A = U(SV^{\top} - S_1 \delta \Phi) + \delta U V^{\top}.$$ Using this representation of $A+\delta A$ we can recompute its SVD without forming the full matrix, as described in \cref{sec:retraction}. This allows us to store not the matrix $A$ itself but only the factors $U$, $S$ and $V$ of its SVD. We obtain the following algorithm \begin{algorithm}[H]
\caption{Fast Newton method} \label{notsing:alg2}
\begin{algorithmic}[1]
\STATE{Initial conditions $U_0,S_0,V_0$, functional $F(A)$, and tolerance $\varepsilon$}
\\
\STATE{Result: minimum of $F$ on $\mathcal{M}_{\le r}$}
\WHILE{$\Vert \delta U^i \Vert_F^2 + \Vert (S_1)^i \delta \Phi^i \Vert_F^2>\varepsilon$}
\STATE{
Solve the linear system with the matvec defined by \cref{eq:mv1,eq:mv2} and the right-hand side defined by \cref{eq:g1_mv,eq:g2_mv} using GMRES, obtaining $\delta u^i, \delta \phi^i$.}
\\
\STATE{
$\delta A^i = \delta U^i V^{i \top} - U^i (S_1)^i \delta \Phi^i$}
\\
\STATE{
$U^{i+1},S^{i+1},V^{i+1} = R_{\text{SVD}}(A^i,\delta A^i)$}
\\
\STATE{
$i = i +1$ }
\ENDWHILE
\RETURN{$U^i,S^i,V^i$}
\end{algorithmic} \end{algorithm} \pagebreak \section{Technical aspects of the implementation}\label{notsing:techstuff} \subsection{Computation of the matvec and complexity} To compute the matvec efficiently for a given functional $F$ one has to be able to evaluate the following first-order expressions: \begin{equation} \label{notsing:first-order} \nabla F V, \ (\nabla F)^{\top} U, \ \nabla F \delta X, \ \delta X \nabla F, \end{equation} and the following second-order expressions: \begin{equation} \label{notsing:second-order} (V^{\top} \otimes I) H (V \otimes I) \delta x, \ (V^{\top} \otimes I) H (I \otimes U) \delta x \end{equation} $$(I\otimes U^{\top}) H (V \otimes I) \delta x, \ (I \otimes U^{\top}) H (I \otimes U) \delta x.$$ The computational complexity of \cref{notsing:alg2} depends heavily on whether we can evaluate \cref{notsing:first-order,notsing:second-order} effectively; note, however, that any similar algorithm requires these quantities as well. Let us now consider two examples. \subsection{Matrix completion} \label{notsing:mc} Given some matrix $B$ and a set of indices $\Gamma$, define $$F(x) = \frac{1}{2}\sum_{(i,j) \in \Gamma} (x_{i,j} - B_{i,j})^2 \to \min, \quad x \in \mathcal{M}_{\le r}.$$ Then $$\nabla F_{ij} = x_{ij} - B_{ij}, \quad (i,j) \in \Gamma,$$ $$\nabla F_{ij} = 0, \quad (i,j) \notin \Gamma.$$ Hence $H$ is in this case a diagonal matrix with ones and zeroes on the diagonal, the exact positions of which are determined by $\Gamma$. Assuming that the cardinality of $\Gamma$ is small, the matrix products from \cref{notsing:first-order} can be performed efficiently by sparse matrix multiplication. Note that multiplication by $H$ in \cref{notsing:second-order} acts as a mask, turning the first factor into a sparse matrix and allowing for efficient multiplication by the second factor. \subsection{Approximation of a sparse matrix} \label{notsing:ma} Consider the approximation functional
$$F(x) = \frac{1}{2} \| x - B \|_F^2 \to \min, \quad x \in \mathcal{M}_{\le r},$$ where $B$ is a sparse matrix. Then $$\nabla F = x-B,$$ and the expressions in \cref{notsing:second-order} can be greatly simplified by noticing that $H$ in this case is the identity matrix, while the sparsity of $B$ is used to evaluate \cref{notsing:first-order}. \section{Numerical results}\label{notsing:numerical} \subsection{Convergence analysis} \Cref{notsing:alg2} was implemented in Python using the \texttt{numpy} and \texttt{scipy} libraries. We tested it on the functional described in \cref{notsing:ma} with $B$ the matrix constructed from the MovieLens 100K Dataset \cite{harper2016movielens}, so $n=1000$, $m=1700$ and $B$ has $100000$ non-zero elements. Since the pure Newton method is only locally convergent, for a first test we chose a small random perturbation (of the form $0.1 \mathcal{N} (0,1)$) of the solution obtained via SVD as the initial condition. We obtained the following results for various $r$ (see \cref{fig:ma}). \begin{figure}
\caption{Close initial guess. }
\caption{Convergence histograms.}
\caption{Sparse matrix approximation: test of local convergence.}
\label{fig:ma}
\label{fig:hist}
\end{figure} This shows the quadratic convergence of our method.
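For the functional of \cref{notsing:ma}, the first-order products in \cref{notsing:first-order} never require forming $x = USV^{\top}$ explicitly: for orthonormal $V$ one has $\nabla F\, V = (USV^{\top}-B)V = US - BV$, and similarly $(\nabla F)^{\top}U = VS^{\top} - B^{\top}U$. A minimal \texttt{numpy}/\texttt{scipy} sketch of this simplification (our own illustration, not the code used in the experiments):

```python
import numpy as np
from scipy import sparse

def grad_times_V(U, S, Vt, B):
    # (x - B) V with x = U S V^T and V orthonormal reduces to U S - B V
    return U @ S - B @ Vt.T

def gradT_times_U(U, S, Vt, B):
    # (x - B)^T U reduces to V S^T - B^T U
    return Vt.T @ S.T - B.T @ U

# small illustrative instance
rng = np.random.default_rng(0)
n, m, r = 12, 9, 3
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((m, r)))
S = np.diag(rng.random(r))
B = sparse.random(n, m, density=0.2, random_state=0, format="csr")

X = U @ S @ V.T                      # dense reference, never needed above
assert np.allclose(grad_times_V(U, S, V.T, B), (X - B.toarray()) @ V)
assert np.allclose(gradT_times_U(U, S, V.T, B), (X - B.toarray()).T @ U)
```

Only sparse-times-dense products with $B$ appear, so the cost is linear in the number of non-zeros of $B$.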
Now we fix the rank and test whether the method converges to the exact answer for a perturbation of the form $\alpha \mathcal{N}(0,1)$ for various $\alpha$, plotting the number of iterations needed for convergence vs $\alpha \in [0.1,2.5]$ (see \cref{fig:hist}). We see that for a sufficiently distant initial condition the method does not converge to the desired answer. To fix this we introduce a simple version of the trust-region algorithms described in \cite{yuan2000review} (to produce the initial condition we perform a few steps of the power method). Results are summarized in \cref{diffr-tr}. \begin{figure}
\caption{Sparse matrix approximation: trust-region method.}
\label{r20-2}
\label{r25-2}
\label{r30-2}
\label{r35-2}
\label{diffr-tr}
\end{figure} We also test our algorithm for the matrix completion problem. As an initial data we choose first $15000$ entries in the database described above. Using the same trust-region algorithm we obtained the following results (see \cref{fig:mc}).
As a final test we show quadratic convergence even in the case when the exact solution has rank smaller than the $r$ for which the method is constructed. To do this we take the first $k$ elements of the dataset for various $k$, find the rank $r_0$ of the matrix constructed from these elements, and run the trust-region Newton method for $r=r_0+10$. The results are presented in \cref{fig:mc-final}. \begin{figure}
\caption{Matrix completion tests.}
\label{fig:mc}
\label{fig:mc-final}
\end{figure} \subsection{Behavior of the algorithm in cases of rank deficiency and underestimation} \label{sec:behavior} In the Newton equation of \cref{notsing:alg1}, one has to solve a linear system with $$\widehat{G}^{loc}= \begin{bmatrix} Q^{\top}_{11} H Q_{11} & Q^{\top}_{11} H Q_{12} + Q^{\top}_{11} C Q_{22} \\
Q^{\top}_{12} H Q_{11} + Q^{\top}_{22} C Q_{11}& Q^{\top}_{12} H Q_{12} \end{bmatrix}, $$ with $H$ the Hessian of the objective function $F : \mathbb{R}^{n \times m} \to \mathbb{R}$, which we can assume to be positive definite. Suppose that a matrix of rank $<r$ is the global minimum of $F$. Then $S_1$ is singular and $\Lambda = 0$, which in turn imply that $Q_{12} = -Y \otimes (U S_1)$ is singular and $C = 0$. Hence, the matrix $\widehat{G}^{loc}$ is singular. The reason for this behavior is easy to understand. The function $\widehat{F}$ defined on $\mathcal{\widehat{M}}_r$ no longer has a unique critical point: the set of critical points is now in fact a submanifold of $\mathcal{\widehat{M}}_r$, and any vector tangent to this submanifold is a solution of the Newton system. The behavior of the Newton method for such functions is studied in, e.g., \cite{decker1980newton}. While we plan to analyze this case and prove quadratic convergence in future work, for now we note that Krylov iterative methods handle singular systems if the initial condition is chosen to be the zero vector, and quadratic convergence has been observed in all our numerical experiments.
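This observation about Krylov methods is easy to check in isolation; a small \texttt{scipy} sketch on an illustrative singular but consistent system (not the actual Newton system):

```python
import numpy as np
from scipy.sparse.linalg import gmres

# singular matrix, but b lies in the range of A, so the system is consistent
A = np.diag([3.0, 1.0, 0.0])
b = np.array([3.0, 2.0, 0.0])

# zero initial guess keeps all GMRES iterates in the Krylov space of b,
# which here is contained in range(A)
x, info = gmres(A, b, x0=np.zeros(3))

assert info == 0
assert np.linalg.norm(A @ x - b) < 1e-8
```

With a zero initial guess the iterates never leave the range of $A$, so GMRES returns a solution of the singular system rather than diverging along the null space.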
We will now compare our method (desN) with the reduced Riemannian Newton method (rRN), also known as the constrained Gauss--Newton method \cite{kressner2016preconditioned}, and the CG method on the fixed-rank matrix manifold for the approximation problem. The former is obtained by neglecting the curvature term involving $S^{-1}$ in the Hessian (see \cite[Proposition 2.3]{vandereycken2013low}) and for the latter we use the \texttt{Pymanopt} \cite{JMLR:v17:16-177} implementation. We choose $n=m=30$, $r=10$, and for the first test we compare the behavior of these algorithms in the case of the exact solution being of rank $r_0 < r$ with $r_0 = 5$. In the second test, we study the converse situation when the rank is underestimated --- the exact solution has rank $r_0 > r$ with $r_0=15$. As before, for the reference solution we choose a truncated SVD of the approximated matrix. The results are summarized in \cref{fig:comp-good,fig:comp-bad}. Note that the case of rank underestimation was also studied in \cref{fig:mc-final}. We observe that the proposed algorithm maintains quadratic convergence in both cases. Even though the reduced Riemannian Newton method is quadratic in the case of rank deficiency, it becomes linear in the case of rank underestimation. This phenomenon is well known and explained, e.g., in \cite[Section 5.3]{kressner2016preconditioned}; it is related to the fact that when the exact minimum is on the variety, this approximate model becomes an exact second-order model. CG is linear in both cases. \begin{figure}
\caption{Comparison of the convergence behaviour of various optimization algorithms.}
\label{fig:compare-conv}
\label{fig:comp-good}
\label{fig:comp-bad}
\end{figure} \subsection{Comparison with the regularized Newton method}
In this subsection we compare the behavior of our method with that of the full Riemannian Newton method on the low-rank matrix variety. To avoid problems with zero or very small singular values we choose some small parameter $\varepsilon$, and in the summands involving $S^{-1}$ in the formulas for the Hessian matrix \cite[Proposition 2.3]{vandereycken2013low} we use the regularized singular values $$\sigma_i^{\varepsilon} = \max \lbrace \sigma_i, \varepsilon \rbrace,$$ thus obtaining the regularized Newton method (regN). As a test problem we choose a matrix completion problem where the exact answer is known (given sufficiently many elements of the matrix) and of small rank. To construct such a matrix $A$ we take the uniform grid of size $N=40$ in the square $[-1, 1]^2$ and sample values of the function $$f(x, y) = e^{-x^2-y^2}$$ on this grid. It is easy to check that this matrix has rank exactly $1$. We choose $r_0 = 5$ and compare the relative error with respect to the exact solution $A$, the value of the functional as defined in \cref{notsing:mc}, and the second singular value $\sigma_2$. \begin{figure}
\caption{Matrix completion tests in the case of strong rank deficiency.}
\label{illcompl-relerr}
\label{illcompl-sig2}
\label{illcompl-func}
\label{fig:ill-compl}
\end{figure} Results are given in \cref{fig:ill-compl}. We see that even though in all cases the value of the functional goes to $0$, the regularized Newton method fails to recover that $\sigma_2$ of the exact answer is in fact $0$, and its behavior depends on the value of $\varepsilon$.
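The test matrix and the regularization $\sigma_i^{\varepsilon}=\max\{\sigma_i,\varepsilon\}$ can be reproduced in a few lines (a sketch of the construction described above; the value of $\varepsilon$ is illustrative):

```python
import numpy as np

# samples of f(x,y) = exp(-x^2 - y^2) on a uniform N x N grid factor
# as an outer product: exp(-x^2 - y^2) = exp(-x^2) * exp(-y^2)
N = 40
t = np.linspace(-1.0, 1.0, N)
A = np.outer(np.exp(-t**2), np.exp(-t**2))
assert np.linalg.matrix_rank(A) == 1     # rank exactly 1

# regularized singular values sigma_i^eps = max(sigma_i, eps)
eps = 1e-8
sigma = np.linalg.svd(A, compute_uv=False)
sigma_eps = np.maximum(sigma, eps)
assert sigma_eps[1] == eps               # the tiny sigma_2 is floored at eps
assert sigma_eps[0] == sigma[0]          # the dominant value is unchanged
```

The outer-product factorization makes the rank-$1$ claim immediate, and the flooring of $\sigma_2$ at $\varepsilon$ is exactly why regN cannot drive the second singular value to $0$.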
\section{Related work} \label{sec:related} A partly similar approach using the so-called \emph{parametrization via kernel} is described in \cite{markovsky2011low, markovsky2013structured}. However, the optimization algorithm proposed there is not formulated as an optimization problem on a manifold of tuples $(A,Y)$; it is based on two separate optimization procedures (with respect to $A$ and to $Y$, where the latter belongs to the orthogonal Stiefel manifold), thus separating the variables. As stated in \cite{markovsky2013structured}, it has $O(m^3)$ complexity per iteration in general. An overview of Riemannian optimization is presented in \cite{qi2016numerical}. An example of the traditional approach to bounded-rank matrix sets using Stiefel manifolds is given in \cite{koch2007dynamical}, where explicit formulas for projection onto the tangent space are presented. An application of Riemannian optimization to low-rank matrix completion where $\mathcal{M}_{\le r}$ is considered as a subvariety of the set of all matrices is given in \cite{vandereycken2013low}. The case of $F$ being non-smooth but only Lipschitz is studied in \cite{hosseini2016riemannian}. Theoretical properties of matrix completion, such as when exact recovery of the matrix is possible, are studied in \cite{candes2010power}. Standard references for introductory algebraic geometry are \cite{hartshorne2013algebraic} and \cite{shafarevich1977basic}. For more computational aspects see \cite{grayson2002macaulay}.
\end{document} |
\begin{document}
\title{Geometrically formal homogeneous metrics of positive curvature}
\author{Manuel Amann} \address{Karlsruher Institut f\"ur Technologie\\ 76133 Karlsruhe, Germany} \email{manuel.amann@kit.edu} \author{Wolfgang Ziller} \address{University of Pennsylvania: Philadelphia, PA 19104, USA} \email{wziller@math.upenn.edu} \thanks{The first author was supported by IMPA and a research grant of the German Research Foundation DFG. The second author was supported by CAPES-Brazil, IMPA, the National Science Foundation and the Max Planck Institute in Bonn.}
\begin{abstract} A Riemannian manifold is called geometrically formal if the wedge product of harmonic forms is again harmonic, which implies in the compact case that the manifold is topologically formal in the sense of rational homotopy theory. A manifold admitting a Riemannian metric of positive sectional curvature is conjectured to be topologically formal. Nonetheless, we show that among the homogeneous Riemannian metrics of positive sectional curvature a geometrically formal metric is either symmetric, or a metric on a rational homology sphere. \end{abstract}
\maketitle
Compact manifolds of positive sectional curvature form an intriguing field of study. On the one hand, there are few known examples, and on the other hand the two main conjectures in the subject, the two Hopf conjectures, are still wide open.
The most basic examples of positive curvature are the rank one symmetric spaces $\mathbb{S}^n$, $\mathbb{C\mkern1mu P}^n$, $\mathbb{H\mkern1mu P}^n$ and $\mathrm{Ca}\mathbb{\mkern1mu P}^2$. Homogeneous spaces of positive curvature have been classified \cite{Be,BB}: there are the homogeneous flag manifolds due to Wallach, $W^6=\ensuremath{\operatorname{SU}} (3)/T^2$, $W^{12}=\ensuremath{\operatorname{Sp}} (3)/\ensuremath{\operatorname{Sp}} (1)^3$ and $W^{24}=\mathsf{F}_4/\ensuremath{\operatorname{Spin}} (8)$, the Berger spaces $B^7=\ensuremath{\operatorname{SO}} (5)/\ensuremath{\operatorname{SO}} (3)$ and $B^{13}=\ensuremath{\operatorname{SU}} (5)/\ensuremath{\operatorname{Sp}} (2)\cdot \ensuremath{\operatorname{S}} ^1$, and the Aloff--Wallach spaces $W^7_{p,q}=\ensuremath{\operatorname{SU}} (3)/\ensuremath{\operatorname{diag}} (z^p,z^q, \bar z^{p+q})$ with \linebreak[4]$\gcd(p,q)=1$, $p\geq q> 0$. See e.g. \cite{Zi2} for a detailed discussion. Furthermore, we have the biquotient examples due to Eschenburg \cite{E1,E2} and Bazaikin \cite{Baz} and the more recent cohomogeneity one example in \cite{De,GVZ}.
All the known examples have the following remarkable properties: They are rationally elliptic spaces, i.e.~their rational homotopy groups $\pi_i(M)\otimes {\mathbb{Q}} $ vanish from a certain degree $i$ on, and the even dimensional ones have positive Euler characteristic. For general simply-connected positively (or more generally non-negatively) curved manifolds, the Bott-Grove-Halperin conjecture claims rational ellipticity, whilst the Hopf conjecture asserts that their Euler characteristic is positive in even dimensions.
A (simply-connected) topological space is called (topologically) \emph{formal} if its rational homotopy type is a formal consequence of its rational cohomology algebra, or, equivalently in the case of a manifold, if its real cohomology algebra is weakly equivalent to its de Rham algebra. It is a classical result of rational homotopy theory that rationally elliptic spaces with positive Euler characteristic are formal, see e.g. \cite{FHT}. In fact, one easily sees that all known examples of positive curvature are formal, in even as well as in odd dimensions. It is thus natural to conjecture that positively curved manifolds are formal in general.
We mention here that the situation is different in non-negative curvature. Homogeneous spaces $G/H$ naturally admit non-negative curvature and are rationally elliptic. If $\ensuremath{\operatorname{rk}} H= \ensuremath{\operatorname{rk}} G$ they have positive Euler characteristic and are hence formal. On the other hand, in \cite{Ama12b, Kot11} one finds many examples of non-formal homogeneous spaces.
Other classical examples of formal spaces are compact symmetric spaces and compact K\"ahler manifolds. In the case of symmetric spaces this simply follows from the fact that harmonic forms are parallel. Thus in \cite{Kot01} the notion of geometric formality was introduced: A Riemannian metric is \emph{geometrically formal} if wedge products of harmonic forms are again harmonic. On a compact manifold the Hodge decomposition implies that a manifold admitting a geometrically formal metric
is also topologically formal. See \cite{Bae12} and \cite{Kot12} for some recent results on geometrically formal metrics in dimension 3 and 4, and \cite{Kot01,Kot03,Kot09,Kot11,OP,GN} for obstructions to geometric formality.
There are very few known examples of compact geometrically formal manifolds. In fact, to our knowledge they all belong to the following classes (see \cite{Kot01,Kot12,Kot11,Bae12}) \begin{itemize} \item a Riemannian metric all of whose harmonic forms are parallel, \item a homogeneous metric on a manifold whose rational cohomology is isomorphic to the cohomology of $\mathbb{S}^p\times\mathbb{S}^q$ with either $p$ and $q$ both odd, or $p$ even and $q$ odd with $p>q$, \item Riemannian products of the above and finite quotients by a group of isometries. \end{itemize}
In the homogeneous case geometric formality is an obvious consequence of homogeneity, since harmonic forms must be invariant under the identity component of the isometry group. Homogeneous spaces which have the rational cohomology of the product of spheres are classified in \cite{Kr}, and in \cite{Kot11} it was shown that many of them are not homotopy equivalent to symmetric spaces. There are other metrics where all harmonic forms are parallel, besides the compact symmetric spaces. For example, any metric on a rational homology sphere or a K\"ahler metric on a rational $\mathbb{C\mkern1mu P}^n$, e.g. the twistor space of the quaternionic symmetric space $\ensuremath{\operatorname{G}} _2/\ensuremath{\operatorname{SO}} (4)$. If one allows the manifold not to be simply connected, there are many such examples, e.g. fake $\mathbb{C\mkern1mu P}^2$ and $\mathbb{C\mkern1mu P}^4$, see \cite{GN}, which are compact quotients of complex hyperbolic space. Although these spaces may be called topologically formal, this property usually does not have the strong consequences known from rational homotopy theory unless the space is nilpotent. For quotients of products, as for example $(M\times {\mathbb{R}} ^n)/\Gamma$ with $M$ geometrically formal, one simply observes that geometric formality is a local property.
It is the main result of this article that geometric formality is also rare in positive curvature:
\begin{main*}\label{theoA} A homogeneous geometrically formal metric of positive curvature is either symmetric or a metric on a rational homology sphere. \end{main*}
In \cite{Kot11}, Theorem 25, it was shown that a metric on a non-trivial $\mathbb{S}^2$ bundle over $\mathbb{C\mkern1mu P}^2$ cannot be geometrically formal. This includes the 6 dimensional flag manifold $W^6$, as well as the inhomogeneous Eschenburg biquotient. We will show that any homogeneous metric on the other two flag manifolds $W^{12}$ and $W^{24}$ cannot be geometrically formal. Of course, every metric on a sphere is geometrically formal, and every homogeneous metric on $\mathbb{C\mkern1mu P}^{2n}$, $\mathbb{H\mkern1mu P}^n$ and $\mathrm{Ca}\mathbb{\mkern1mu P}^2$ is symmetric. The Berger space $B^7$ is geometrically formal as well, since it is a rational homology sphere. This leaves the Berger space $B^{13}$, the Aloff--Wallach spaces, and the homogeneous metrics on $\mathbb{C\mkern1mu P}^{2n+1}$. For the Aloff--Wallach spaces, it was shown in \cite{Kot11} that the normal homogeneous metric is not geometrically formal, but this metric does not have positive curvature.
The recent example of positive curvature in \cite{De,GVZ} is a rational homology sphere and hence geometrically formal.
It would be interesting to know if the only other known examples of positive curvature, i.e. the 7 dimensional Eschenburg spaces and 13 dimensional Bazaikin spaces, can admit geometrically formal metrics. They have the same cohomology as $W_{p,q}$ and $B^{13}$, but our methods do not apply in this case since the isometry group is too small.
It would also be interesting to have some other examples of homogeneous spaces where some of the homogeneous metrics are geometrically formal. Although the methods in this paper can be used to check this, an example seems to be difficult to find. Any relationship in the cohomology ring puts strong restrictions on a geometrically formal metric.
To prove the theorem we use the elementary fact that the de Rham cohomology is isomorphic to the finite dimensional algebra of invariant forms, and hence closed and harmonic forms can be computed explicitly. The Berger space $B^{13}$ has the rational cohomology of $\mathbb{C\mkern1mu P}^2\times \mathbb{S}^9$ and the Aloff--Wallach space $W_{p,q}$ that of $\mathbb{S}^2\times \mathbb{S}^5$. Hence there is a unique harmonic 2-form $\eta$, and geometric formality implies that $\eta^3$ resp. $\eta^2$ must be 0 as a form. It turns out that even among the closed invariant forms there are none whose power is 0. For $W^{12}$ and $W^{24}$ there are relations in the cohomology ring that contradict geometric formality. In the case of $\mathbb{C\mkern1mu P}^{2n+1}$, the situation is more interesting. Here the condition is that $\eta^k$ must be harmonic for all $k$. But already the harmonic 4-form changes with the metric and is the square of the harmonic 2-form only if the metric is symmetric. We point out that this metric is also almost K\"ahler, hence gives examples of such metrics which are not geometrically formal.
In Section 1 we explain some background about homogeneous spaces and their cohomology. In Section 2 we deal with $B^{13} $ and in Section 3 with the Aloff--Wallach spaces. In Section 4 we discuss $W^{12}$ and $W^{24}$, and in Section 5 $\mathbb{C\mkern1mu P}^{2n+1}$.
\section{Preliminaries}\label{prelim} We first discuss the methods we will use to prove our main theorem.
\noindent Let $M=G/H$ be a homogeneous space with $H$ the stabilizer group at a base point $p_0\in M$. Using a fixed biinvariant metric $Q$ on the Lie algebra ${\mathfrak{g}} $, we define an orthogonal splitting $$ {\mathfrak{g}} ={\mathfrak{h}} +{\mathfrak{m}} \ \ \text{with identification} \ \ {\mathfrak{m}} \simeq T_{p_0}M $$ induced by the action fields $X^*$ via $X\in{\mathfrak{m}} \to X^*(p_0)$. The action of $H$ on $T_{p_0}M$ then becomes the adjoint action ${\ensuremath{\operatorname{Ad}} _H}$ on ${\mathfrak{m}} $. Choose an $\ensuremath{\operatorname{Ad}} _H$ invariant and $Q$ orthogonal decomposition $$ {\mathfrak{m}} ={\mathfrak{m}} _0\oplus {\mathfrak{m}} _1\oplus\ldots \oplus{\mathfrak{m}} _k $$
such that ${\ensuremath{\operatorname{Ad}} _H}_{|{\mathfrak{m}} _0}=\Id$ and ${\ensuremath{\operatorname{Ad}} _H}_{|{\mathfrak{m}} _i}$ is irreducible. A metric of the form $$g={g_0}_{|{\mathfrak{m}} _0}+\lambda_1Q_{|{\mathfrak{m}} _1}
+\lambda_2Q_{|{\mathfrak{m}} _2}+\ldots+\lambda_kQ_{|{\mathfrak{m}} _k}$$ with $g_0$ an inner product on ${\mathfrak{m}} _0$ and $\lambda_i$ positive constants, is then a $G$ invariant metric on $M$. If the $\ensuremath{\operatorname{Ad}} _H$ representations ${\mathfrak{m}} _i$, $i=1,\ldots,k$, are all inequivalent, every $G$ invariant metric has this form. If ${\mathfrak{m}} _i\simeq{\mathfrak{m}} _j$ the inner products between ${\mathfrak{m}} _i$ and ${\mathfrak{m}} _j$ can be described by $1,2$ or $4$ arbitrary constants, depending on whether the representations are real, complex, or quaternionic.
We will use the elementary fact that the de Rham cohomology is isomorphic to the cohomology of $G$ invariant forms. By homogeneity this in turn is isomorphic to the cohomology $$ H^*_{DR}(M)\simeq \left((\Lambda^* {\mathfrak{m}} )^H,d\right) $$ of the complex of forms on ${\mathfrak{m}} $ invariant under the isotropy action. The differential of a $k$-form $\omega\in (\Lambda^k {\mathfrak{m}} )^H$ is again $H$ invariant and can be computed via the following formula: \begin{equation}\label{dw}
d\omega(u_1,\ldots,u_{k+1})=\sum_{i<j}(-1)^{i+j} \omega([u_i,u_j]_{{\mathfrak{m}} },u_1,\ldots, \hat{u}_i,\ldots, \hat{u}_j,\ldots, u_{k+1}) \end{equation} for $u_i\in{\mathfrak{m}} $, where $[u_i,u_j]_{{\mathfrak{m}} }$ denotes the projection of $[u_i,u_j]$ into ${\mathfrak{m}} $. On $\Lambda^* {\mathfrak{m}} $ we use the inner product that makes $e_{i_1}\wedge e_{i_2}\wedge \ldots \wedge e_{i_r}$, $i_1< i_2 <\ldots<i_r$ into an orthonormal basis of $\Lambda^r {\mathfrak{m}} $ for any orthonormal basis $e_i$ of ${\mathfrak{m}} $. We denote the codifferential by $\delta$. Since $\langle \omega, d\eta\rangle=\langle\delta \omega , \eta\rangle$, a $G$ invariant form $\omega\in (\Lambda^r {\mathfrak{m}} )^H$ is harmonic if and only if $$ d\omega=0 \ \ \text{ and}\ \ \langle \omega,d\eta\rangle=0 \ \ \text{for all}\ \ \eta\in (\Lambda^{r-1} {\mathfrak{m}} )^H. $$ This reduces the computation of the de Rham cohomology and the harmonic forms to a finite dimensional, purely Lie algebraic computation. The equations are in fact linear in the coefficients of $\omega$ in some basis, and quadratic in the coefficients of the metric.
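Formula \eqref{dw} is straightforward to implement once the brackets are given explicitly; the following Python sketch (our own illustration, not part of the argument) evaluates $d\omega$ on a tuple of vectors, and the check uses $\mathfrak{so}(3)$ with the cross product as the bracket:

```python
import itertools
import numpy as np

def d_omega(omega, bracket_m, vectors):
    # (d omega)(u_1, ..., u_{k+1}) via formula (dw); with 0-based indices
    # the sign (-1)^(i+j) has the same parity as in the 1-based convention
    total = 0.0
    for i, j in itertools.combinations(range(len(vectors)), 2):
        rest = [v for l, v in enumerate(vectors) if l not in (i, j)]
        total += (-1) ** (i + j) * omega(bracket_m(vectors[i], vectors[j]), *rest)
    return total

# check on so(3): [e1, e2] = e3 (cross product), omega the 1-form dual to e3;
# then d omega(e1, e2) = -omega([e1, e2]) = -1
e1, e2 = np.eye(3)[0], np.eye(3)[1]
omega = lambda v, *rest: v[2]
assert d_omega(omega, np.cross, [e1, e2]) == -1.0
```

For an invariant 1-form this reproduces the classical identity $d\omega(X,Y)=-\omega([X,Y])$.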
In order to simplify the computation of the differentials $d\omega$ we observe the following. Let $e_i$ be a basis of ${\mathfrak{g}} $ where each basis vector lies either in ${\mathfrak{h}} $ or ${\mathfrak{g}} $ and denote, by abuse of notation, the dual basis of 1-forms again by $e_i$. Although the 1-forms $e_i$ are in general not $\ensuremath{\operatorname{Ad}} (H)$ invariant, and hence do not represent forms on $G/H$, we can nevertheless formally use \eqref{dw} to compute $de_i$. By using the product rule we can then compute $d\omega$ for any $r$-form $\omega=\sum e_{i_1}\wedge e_{i_2}\wedge\ldots\wedge e_{i_r}$, in particular the $\ensuremath{\operatorname{Ad}} (H)$ invariant forms. To see that the formula in \eqref{dw} satisfies the product rule, observe that we could replace $[u_i,u_j]_{{\mathfrak{m}} }$ by $[u_i,u_j]$ since the ${\mathfrak{h}} $ component will evaluate to 0. But then it becomes the usual formula for the Lie algebra cohomology of ${\mathfrak{g}} $ and hence satisfies a product rule. Notice though that in this generality $d^2\omega$ does not have to be 0, unless $\omega$ is $H$ invariant. This is due to the fact that the proof that it vanishes, in the case of the Lie algebra cohomology, uses the Jacobi identity which does not hold if we take the ${\mathfrak{m}} $ component of all Lie brackets.
\section{The Berger space $B^{13}$}\label{sec01}
For the 13 dimensional Berger space $B^{13}=\ensuremath{\operatorname{SU}} (5)/\ensuremath{\operatorname{Sp}} (2)\cdot\ensuremath{\operatorname{S}} ^1$, the embedding $\ensuremath{\operatorname{Sp}} (2)\cdot\ensuremath{\operatorname{S}} ^1\subset\ensuremath{\operatorname{SU}} (5)$ is given by
$\ensuremath{\operatorname{diag}} (zA,\bar{z}^4)$ for $A\in \ensuremath{\operatorname{Sp}} (2)\subset\ensuremath{\operatorname{SU}} (4)$ and $z\in\ensuremath{\operatorname{S}} ^1$. The manifold $B^{13}$ has the same de Rham cohomology as $\mathbb{C\mkern1mu P}^2\times \mathbb{S}^9$. One can see this for example by using the two homogeneous fibrations $$\ensuremath{\operatorname{S}} ^1\to \ensuremath{\operatorname{SU}} (5)/\ensuremath{\operatorname{Sp}} (2)\to B^{13} \ \text{and}\ \ensuremath{\operatorname{SU}} (4)/\ensuremath{\operatorname{Sp}} (2)\to \ensuremath{\operatorname{SU}} (5)/\ensuremath{\operatorname{Sp}} (2) \to \ensuremath{\operatorname{SU}} (5)/\ensuremath{\operatorname{SU}} (4)$$
and the fact that $\ensuremath{\operatorname{SU}} (5)/\ensuremath{\operatorname{SU}} (4)=\mathbb{S}^9$, $\ensuremath{\operatorname{SU}} (4)/\ensuremath{\operatorname{Sp}} (2)=\ensuremath{\operatorname{SO}} (6)/\ensuremath{\operatorname{SO}} (5)=\mathbb{S}^5$ and that $B^{13}$ is simply connected. Thus there exists one harmonic 2-form $\eta$. Geometric formality requires $\eta^2$ to be harmonic, and $\eta^3=0$ on the level of forms. We actually do not need to explicitly compute the harmonic forms since we will show that there are no closed invariant 2-forms $\omega$ with $\omega^3=0$.
To compute the invariant forms, we first make the following observations.
$\ensuremath{\operatorname{Sp}} (n)$ acts on ${\mathbb{H}} ^n$ via matrix multiplication and $\ensuremath{\operatorname{Sp}} (n)\cdot \ensuremath{\operatorname{Sp}} (1)$ via $(A,q)v=Avq^{-1}$ for $A\in\ensuremath{\operatorname{Sp}} (n), q\in\ensuremath{\operatorname{Sp}} (1)$ and $v\in {\mathbb{H}} ^n$. It is well known that the algebra $\Lambda^*({\mathbb{H}} ^n)^{\ensuremath{\operatorname{Sp}} (n)}$ of invariant forms is generated by the 3 symplectic forms, corresponding to the K\"ahler forms $\omega_I, \omega_J,\omega_K$ associated to the 3 complex structures coming from right multiplication with $I,J,K\in \ensuremath{\operatorname{Sp}} (1)$. Right multiplication with $\ensuremath{\operatorname{Sp}} (1)$ acts on $\spam{\{I,J,K\} }\simeq{\mathbb{R}} ^3$ via matrix multiplication by $\ensuremath{\operatorname{SO}} (3)$ under the two fold cover $\ensuremath{\operatorname{Sp}} (1)\to \ensuremath{\operatorname{SO}} (3)$. Thus if $\ensuremath{\operatorname{S}} ^1\subset\ensuremath{\operatorname{Sp}} (1)$ is given by $e^{it}$, the algebra \begin{equation}\label{invariant}
\Lambda^*({\mathbb{H}} ^n)^{\ensuremath{\operatorname{Sp}} (n)\cdot\ensuremath{\operatorname{S}} ^1} \text{ is spanned by } \omega_I \text{ and its powers}.
\end{equation}
From the inclusions $\ensuremath{\operatorname{Sp}} (2)\cdot\ensuremath{\operatorname{S}} ^1\subset \ensuremath{\operatorname{SU}} (4)\cdot\ensuremath{\operatorname{S}} ^1=\ensuremath{\operatorname{U}} (4)\subset\ensuremath{\operatorname{SU}} (5)$ it easily follows that the decomposition of ${\mathfrak{m}} $ into irreducibles under the action of $H=\ensuremath{\operatorname{Sp}} (2)\cdot \ensuremath{\operatorname{S}} ^1$ is given by ${\mathfrak{m}} =V\oplus W$ with $\dim V=5$ and $\dim W=8$. On $V$ the factor $\ensuremath{\operatorname{S}} ^1$ acts trivially and $\ensuremath{\operatorname{Sp}} (2)$ via matrix multiplication by $\ensuremath{\operatorname{SO}} (5)$ under the two fold cover $\ensuremath{\operatorname{Sp}} (2)\to \ensuremath{\operatorname{SO}} (5)$. On $W$ it acts via $(A,z)v=Avz^{-1}$ with $(A,z)\in \ensuremath{\operatorname{Sp}} (2)\times\ensuremath{\operatorname{S}} ^1$. It follows that $\Lambda^*(V)^H$ is spanned by a 0-form and a 5-form, the volume form $v$, and $\Lambda^*(W)^H$ by $\omega_I$ and its powers by \eqref{invariant}. On the other hand
$\Lambda^k(V)\otimes\Lambda^l(W)$ with $k,l>0$ contains no invariant forms since the $\ensuremath{\operatorname{S}} ^1$ factor clearly acts non-trivially. Thus $(\Lambda{\mathfrak{m}} )^H$ is spanned by $v$ and $\omega_I$ as an algebra. Since there is only one invariant 2-form, $\omega_I$ must be harmonic, and similarly $\omega_I^2$ as well. In order to obtain the cohomology ring of $B^{13}$, we need $dv\ne0$, but the only possibility, up to a multiple, is $dv=\omega_I^3$. This implies that $\omega_I^3\ne 0$ and hence no invariant metric can be geometrically formal.
\section{The Aloff--Wallach spaces $W_{p,q}$}
Let $H=\ensuremath{\operatorname{S}} ^1_{k}=\ensuremath{\operatorname{diag}} (e^{ik_1t}, e^{ik_2t},e^{ik_3t})\subset G=\ensuremath{\operatorname{SU}} (3)$ where $k_i$ are fixed integers with $\sum k_i=0$. The quotient $G/H=\ensuremath{\operatorname{SU}} (3)/\ensuremath{\operatorname{S}} ^1_{k}$ was studied by Aloff--Wallach \cite{AW} who showed that it admits a homogeneous metric with positive sectional curvature if none of the $k_i$ is 0. We will show that in fact none of the homogeneous metrics, even in this special case, can be geometrically formal. This was shown to be the case for the metric induced by the biinvariant metric on $\ensuremath{\operatorname{SU}} (3)$, but this metric does not have positive curvature.
It is well known that the rational cohomology ring of $\ensuremath{\operatorname{SU}} (3)/\ensuremath{\operatorname{S}} ^1_{k}$ is that of $\mathbb{S}^2\times\mathbb{S}^5$, but these spaces can be distinguished by a torsion group in $H^4$. Thus there exists one harmonic 2-form $\eta$. To be geometrically formal, we need $\eta^2=0$ on the level of forms. As in the previous case, we will again show that there are no closed 2-forms with square 0, although the computation in this case is much more involved.
We choose the following basis for the Lie algebra of $\ensuremath{\operatorname{SU}} (3)$. To describe it, let $E_{ij}$ be the matrix which has a 1 in row $i$ and column $j$, and 0 otherwise. Set \begin{align*} E_1&= E_{12}- E_{21}, & E_2&=-E_{13}+E_{31}, & E_3&=E_{23}-E_{32}\\ F_1&=iE_{12}+i E_{21}, & F_2&=i E_{13}+i E_{31}, & F_3&=i E_{23}+i E_{32}\\ H_1&=iE_{11}-i E_{22}, & H_2&=-i E_{11}+i E_{33}, & H_3&=i E_{22}-i E_{33}. \end{align*}
We also choose the biinvariant metric on ${\mathfrak{su}} (3)$ given by $\langle A,B\rangle =-\frac 12 \ensuremath{\operatorname{tr}} (AB)$, in which $E_i,F_i$ are orthonormal. Furthermore, the $H_i$ have unit length, are orthogonal to $E_i,F_i$, and $\langle H_i,H_j\rangle=-\frac 12$ for $i\ne j$. For the Lie brackets we have: \begin{align*} [H_i,E_i]&=2 F_i, & [H_i,F_i]&=-2 E_i, & [H_i,E_j]&=-F_j , & [H_i,F_j]&=E_j\\ [E_i,E_j]&=E_k, & [F_i,F_j]&=-E_k, & [E_i,F_j]&=-F_k , & [E_i,F_i]&=2 H_i \end{align*} where $i,j,k$ is a cyclic permutation of $1,2,3$. For the decomposition of ${\mathfrak{g}} $ we choose $${\mathfrak{g}} ={\mathfrak{h}} + V_0 + V_1 +V_2+V_3$$ where $$ V_0= \spam(\varepsilon),\, V_i= \spam(E_i,F_i) \ \text{for}\ i=1,\ldots,3. $$ Here $\varepsilon$ needs to be $Q$-orthogonal to ${\mathfrak{h}} $ and of unit length, i.e. $$ \varepsilon=\sum r_i H_i \ \text{with}\ \ (r_1-r_2)k_1+(r_3-r_1)k_2+(r_2-r_3)k_3=0 \ \text{and}\ \sum r_i^2 -\sum r_ir_j=1. $$ The subspaces $V_i$ are invariant under the isotropy action of $H$. On $V_0$ it acts trivially, and on $V_i$ its differential is given by: \begin{align*} \ensuremath{\operatorname{ad}} (\ensuremath{\operatorname{diag}} (i\theta_1,i\theta_2,i\theta_3))E_1&=(\theta_1-\theta_2)F_1, & \quad \ensuremath{\operatorname{ad}} (\ensuremath{\operatorname{diag}} (i\theta_1,i\theta_2,i\theta_3))F_1&=-(\theta_1-\theta_2)E_1\\ \ensuremath{\operatorname{ad}} (\ensuremath{\operatorname{diag}} (i\theta_1,i\theta_2,i\theta_3))E_2&=(\theta_3-\theta_1)F_2, & \quad \ensuremath{\operatorname{ad}} (\ensuremath{\operatorname{diag}} (i\theta_1,i\theta_2,i\theta_3))F_2&=-(\theta_3-\theta_1)E_2\\ \ensuremath{\operatorname{ad}} (\ensuremath{\operatorname{diag}} (i\theta_1,i\theta_2,i\theta_3))E_3&=(\theta_2-\theta_3)F_3, & \quad \ensuremath{\operatorname{ad}} (\ensuremath{\operatorname{diag}} (i\theta_1,i\theta_2,i\theta_3))F_3&=-(\theta_2-\theta_3)E_3 \end{align*} where $\theta_i=k_i\cdot t$.
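Since sign errors in such bracket tables are easy to make, the relations above can be verified numerically. The following snippet (our sanity check, not part of the paper) confirms the table for the cyclic permutation $(i,j,k)=(1,2,3)$, using the matrix definitions of $E_i,F_i,H_i$:

```python
import numpy as np

def E(i, j):
    """Elementary 3x3 matrix with a 1 in row i, column j."""
    m = np.zeros((3, 3), dtype=complex)
    m[i - 1, j - 1] = 1
    return m

E1, E2, E3 = E(1,2) - E(2,1), -E(1,3) + E(3,1), E(2,3) - E(3,2)
F1, F2, F3 = 1j*(E(1,2) + E(2,1)), 1j*(E(1,3) + E(3,1)), 1j*(E(2,3) + E(3,2))
H1, H2, H3 = 1j*(E(1,1) - E(2,2)), 1j*(E(3,3) - E(1,1)), 1j*(E(2,2) - E(3,3))

br = lambda a, b: a @ b - b @ a   # matrix commutator

# check the bracket table for (i, j, k) = (1, 2, 3)
assert np.allclose(br(H1, E1), 2*F1)    # [H_i, E_i] =  2F_i
assert np.allclose(br(H1, F1), -2*E1)   # [H_i, F_i] = -2E_i
assert np.allclose(br(H1, E2), -F2)     # [H_i, E_j] = -F_j
assert np.allclose(br(H1, F2), E2)      # [H_i, F_j] =  E_j
assert np.allclose(br(E1, E2), E3)      # [E_i, E_j] =  E_k
assert np.allclose(br(F1, F2), -E3)     # [F_i, F_j] = -E_k
assert np.allclose(br(E1, F2), -F3)     # [E_i, F_j] = -F_k
assert np.allclose(br(E1, F1), 2*H1)    # [E_i, F_i] =  2H_i
```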
For the differential forms we use the basis of 1-forms dual to the basis $\varepsilon,E_i,F_i$ and by abuse of notation use the same letters. Using \eqref{dw} and the above Lie brackets one easily obtains the following exterior derivatives of 1-forms:
\renewcommand{\arraystretch}{1.4} \stepcounter{equation}
\begin{table}[ht] \begin{center}
\begin{tabular}{c|c} $w$ & $\dif w$ \\ \hline $E_1$ & $\hspace{10pt} E_2\wedge E_3-F_2 \wedge F_3+s_1F_1\wedge \varepsilon $ \\ $E_2$ & $\hspace{10pt} E_3\wedge E_1-F_3\wedge F_1+s_2F_2\wedge \varepsilon$ \\ $E_3$ & $\hspace{10pt} E_1\wedge E_2-F_1\wedge F_2+s_3F_3\wedge \varepsilon$ \\ $F_1$ & $-E_2\wedge F_3-F_2\wedge E_3-s_1E_1\wedge \varepsilon$ \\ $F_2$ & $-E_3\wedge F_1-F_3\wedge E_1-s_2E_2\wedge \varepsilon$ \\ $F_3$ & $-E_1\wedge F_2-F_1\wedge E_2-s_3E_3\wedge \varepsilon$\\ $\varepsilon$ & $s_1 E_1\wedge F_1+s_2 E_2\wedge F_2+s_3 E_3\wedge F_3$ \end{tabular} \end{center}
\caption{Differentials of one-forms}\label{1forms} \end{table} where $$ s_i=2r_i-r_j-r_k \ \text{with}\ i,j,k \ \text{distinct}. $$ In this computation we need to use the fact that $$ (H_1)_{\mathfrak{m}} =Q(H_1,\varepsilon)\varepsilon=Q\big(H_1,\sum r_iH_i\big)\varepsilon=\big(r_1-\tfrac12 (r_2+r_3)\big)\varepsilon=\tfrac12 s_1\varepsilon $$ and hence $(2H_i)_{\mathfrak{m}} =s_i\varepsilon $.
As explained above, these one-forms are not all well defined on $G/H$ but are useful for computing the exterior derivative of 2-forms via the product formula for forms.
The discussion now depends on the values of the three integers $k_i$, and we distinguish three cases.
\subsection{All three $k_i$ distinct and nonzero.}
Assume that $H=\ensuremath{\operatorname{S}} ^1_{k}=\ensuremath{\operatorname{diag}} (e^{ik_1t}, e^{ik_2t},e^{ik_3t})$ with all $k_i$ distinct and nonzero. The differences $k_i-k_j$ are then nonzero and pairwise distinct in absolute value (if two of them had equal absolute value, two of the $k_i$ would coincide or one would vanish, since $\sum k_i=0$), so the actions of $H$ on the $V_i$ are all non-trivial and pairwise inequivalent. Hence an invariant metric depends on 4 parameters. The only invariant 1-form is $\varepsilon$, and the only invariant 2-forms are the volume forms of the $V_i$, i.e. $\omega_i=E_i\wedge F_i$. For $\omega=\sum a_i\omega_i$ we have $$ \omega^2=2\big(a_1a_2\,\omega_1\wedge\omega_2+a_1a_3\,\omega_1\wedge\omega_3+a_2a_3\,\omega_2\wedge\omega_3\big), $$ and since the three 4-forms $\omega_i\wedge\omega_j$ are linearly independent, $\omega^2=0$ forces at most one $a_i$ to be nonzero. By the Remark below a closed form satisfies $\sum a_i=0$, so a nonzero closed 2-form can never have square 0, and hence no invariant metric can be geometrically formal. \begin{rem*} One easily sees that the form $\omega=\sum a_i\omega_i$ is closed iff $\sum a_i=0$ and harmonic if in addition $\sum a_is_it_i^2=0$, where $t_i$ is the length of $E_i$ and $F_i$. \end{rem*}
\subsection{One of the $k_i$ vanishes.}
Here we can assume, since cyclic permutations of the $k_i$ and changing the sign of all 3 do not change the homogeneous space, that $(k_1,k_2,k_3)=(0,-1,1)$. These are in fact precisely those Aloff--Wallach spaces which do not admit an invariant metric with positive curvature. Nevertheless we will now show that even here there are no geometrically formal metrics. The action of $\ensuremath{\operatorname{Ad}} _H$ on $V_i$ is a rotation of speed 1 on $V_1$ and $V_2$, and of speed 2 on $V_3$. Thus the space of invariant metrics is 6-dimensional. $\varepsilon$ is still the only invariant 1-form, but now we have 5 invariant 2-forms: $$ \omega_i=E_i\wedge F_i,\ i=1,2, 3, \ \text{and}\ \omega_4=E_1\wedge E_2+F_1\wedge F_2,\ \omega_5=F_1\wedge E_2-E_1\wedge F_2. $$ For $\varepsilon$ we choose $$ \varepsilon=(H_1+2H_2)/\sqrt{3}=\ensuremath{\operatorname{diag}} (-i,-i,2i)/\sqrt{3} \ \text{and hence}\ (s_1,s_2,s_3)=(0,3,-3). $$ From Table \ref{1forms} we easily obtain the exterior derivatives of the invariant 2-forms:
\stepcounter{equation} \begin{table}[ht] \begin{center}
\begin{tabular}{c|c} $w$ & $\dif w$ \\ \hline $\omega_1$ & $E_1\wedge E_2 \wedge F_3-E_1\wedge E_3 \wedge F_2+E_2\wedge E_3\wedge F_1-F_1\wedge F_2\wedge F_3$ \\ $\omega_2$ & $E_1\wedge E_2\wedge F_3-E_1\wedge E_3\wedge F_2+E_2\wedge E_3\wedge F_1-F_1\wedge F_2\wedge F_3 $ \\ $\omega_3$ & $E_1\wedge E_2\wedge F_3-E_1\wedge E_3\wedge F_2+E_2\wedge E_3\wedge F_1-F_1\wedge F_2\wedge F_3$ \\ $\omega_4$ & $ 3\,\omega_5\wedge \varepsilon$ \\ $\omega_5$ & $ 3\,\omega_4\wedge \varepsilon$ \end{tabular} \end{center}
\caption{Differentials of 2-forms for $(k_1,k_2,k_3)=(0,-1,1)$ } \end{table}
Thus the 2-form $\omega=\sum a_i\omega_i$ is closed if and only if $\sum a_i=0$ and $a_4=a_5=0$, and as in the previous case it follows that $\omega^2=0$ implies $a_i=0$ for all $i$.
\subsection{Two of the $k_i$ are equal.}
Up to permutations, we can assume that $(k_1,k_2,k_3)=(-2,1,1)$. Thus $\ensuremath{\operatorname{Ad}} _H$ acts with speed 3 on $V_1$ and $V_2$, but with opposite orientation, and trivially on $V_3$ and $V_0$. The metric is thus arbitrary on $V_0\oplus V_3$. Since the actions on $V_1$ and $V_2$ are also equivalent, an invariant metric depends on 10 parameters. Now the invariant 1-forms are $\varepsilon,\ E_3$ and $F_3$, and the invariant 2-forms are: $$ \omega_i=E_i\wedge F_i,\ i=1,2, 3, \ \omega_4=E_1\wedge E_2-F_1\wedge F_2,\ \omega_5=F_1\wedge E_2+E_1\wedge F_2,\ \omega_6=E_3\wedge \varepsilon,\ \omega_7=F_3\wedge \varepsilon. $$ For $\varepsilon$ we choose $$ \varepsilon=H_3 \ \text{and hence}\ (s_1,s_2,s_3)=(-1,-1,2). $$
The differentials for the invariant 2-forms are:
\stepcounter{equation} \begin{table}[ht] \begin{center}
\begin{tabular}{c|c} $w$ & $\dif w$ \\ \hline $\omega_1$ & $E_1\wedge E_2 \wedge F_3-E_1\wedge E_3 \wedge F_2+E_2\wedge E_3\wedge F_1-F_1\wedge F_2\wedge F_3$ \\ $\omega_2$ & $E_1\wedge E_2\wedge F_3-E_1\wedge E_3\wedge F_2+E_2\wedge E_3\wedge F_1-F_1\wedge F_2\wedge F_3 $ \\ $\omega_3$ & $E_1\wedge E_2\wedge F_3-E_1\wedge E_3\wedge F_2+E_2\wedge E_3\wedge F_1-F_1\wedge F_2\wedge F_3$ \\ $\omega_4$ & $ -2( E_1 \wedge F_1 +E_2 \wedge F_2) \wedge F_3+2\omega_5 \wedge \varepsilon$ \\ $\omega_5$ & $-2( E_1 \wedge F_1 +E_2 \wedge F_2) \wedge E_3-2\omega_4 \wedge \varepsilon $ \\ $\omega_6$ & $( E_1 \wedge F_1 +E_2 \wedge F_2) \wedge E_3+\omega_4 \wedge \varepsilon$\\ $\omega_7$ & $( E_1 \wedge F_1 +E_2 \wedge F_2) \wedge F_3-\omega_5 \wedge \varepsilon$ \end{tabular} \end{center}
\caption{Differentials of 2-forms for $(k_1,k_2,k_3)=(-2,1,1)$ } \end{table} \noindent Thus a 2-form $\omega=\sum a_i\omega_i$ is closed if and only if $$ a_1+a_2+a_3=0,\ -2a_4+a_7=0,\ -2a_5+a_6=0, $$
in other words, $a_1+a_2+a_3=0, a_7=2a_4$ and $a_6=2a_5$. This leaves us with a $4$-dimensional space of closed forms in degree $2$. One easily sees that the square of such a closed form is 0 iff all $a_i$ vanish.
This finishes the proof for the Aloff--Wallach spaces.
\section{The flag manifolds $W^{12}$ and $W^{24}$}\label{flag}
The cohomology ring of the three flag manifolds $W^{6}$, $W^{12}$ and $W^{24}$ is well known, and can be computed by using Borel's method for the cohomology ring of a homogeneous space $G/H$, see e.g. \cite{Bo1,Bo2}. In our case this is particularly simple since $\ensuremath{\operatorname{rk}} H=\ensuremath{\operatorname{rk}} G$ and since we can restrict ourselves to real coefficients.
The result is that it is generated by 3 elements $a_1,a_2,a_3\in H^{2k}(M,R)$, where $k=1,2,4$ for the 3 different flag manifolds. The relations come from the Weyl group invariant polynomials, i.e., the elementary symmetric polynomials in the $a_i$ vanish. If we choose the generators $x=a_1+a_2$ and $y=a_1-a_2$, the cohomology ring is: $$ H^*(M,R)=\{x,y\mid x^3=0, y^2=-3x^2\} $$ with basis $x,y$ in dimension $2k$, as well as $x^2,xy$ in dimension $4k$, and the fundamental class $y^3$ in dimension $6k$. The two relations $ x^3=0$ and $ y^2=-3x^2$ put strong restrictions on a geometrically formal metric. For $W^6$, the method in \cite{Kot11} used the fact that $y$ must be a symplectic form, whereas $x$ has a kernel, contradicting $ y^2=-3x^2$. This proof does not seem to work when $k>1$. Instead, we restrict ourselves to homogeneous metrics and use the algebra of invariant forms.
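The two relations can be double-checked symbolically: writing $e_2,e_3$ for the second and third elementary symmetric polynomials in the $a_i$ (and eliminating $a_3=-a_1-a_2$ using the first), one has $y^2+3x^2=-4e_2$ and $x^3=-x\,e_2-e_3$, so both $x^3$ and $y^2+3x^2$ lie in the ideal generated by the symmetric polynomials. The following computer-algebra snippet (our verification, not part of the paper) confirms these identities:

```python
import sympy as sp

a1, a2 = sp.symbols('a1 a2')
a3 = -a1 - a2                            # first symmetric polynomial e1 = 0
e2 = sp.expand(a1*a2 + a2*a3 + a3*a1)    # second elementary symmetric polynomial
e3 = sp.expand(a1*a2*a3)                 # third elementary symmetric polynomial

x = a1 + a2
y = a1 - a2

# y^2 + 3x^2 lies in the ideal (e2, e3): it equals -4*e2
assert sp.expand(y**2 + 3*x**2 + 4*e2) == 0
# x^3 lies in the ideal as well: x^3 = -x*e2 - e3
assert sp.expand(x**3 + x*e2 + e3) == 0
```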
For all three flag manifolds $G/H$ we have the splitting $$ {\mathfrak{m}} =V_1\oplus V_2\oplus V_3 $$ into $\ensuremath{\operatorname{Ad}} _H$ irreducibles, with $\dim V_i=2k$. Using representation theory, one easily sees that there are no invariant forms in degree $<2k$. In degree $2k$ we clearly have the $\ensuremath{\operatorname{Ad}} _H$ invariant volume forms $\omega_i$ of the modules $V_i$. Some differential must be nonzero since $b_{2k}=2$. For $\ensuremath{\operatorname{SU}} (3)/\ensuremath{\operatorname{T}} ^2$ and $\ensuremath{\operatorname{Sp}} (3)/\ensuremath{\operatorname{Sp}} (1)^3$ we also have inner automorphisms (e.g.\ $\ensuremath{\operatorname{Ad}} (E_{12}-E_{21})$ for $\ensuremath{\operatorname{SU}} (3)/\ensuremath{\operatorname{T}} ^2$) which interchange the 3 modules $V_i$. For $F_4/\ensuremath{\operatorname{Spin}} (8)$ we have the triality automorphism of $\ensuremath{\operatorname{Spin}} (8)$. This outer automorphism of $\ensuremath{\operatorname{Spin}} (8)$ also extends to an inner automorphism of $F_4$, see e.g. \cite{WZ}, Theorem 3.2,
and takes $V_1$ to $V_2$, $V_2$ to $V_3$, and $V_3$ to $V_1$. Thus there exist diffeomorphisms of $G/H$ which interchange the volume forms $\omega_i$, which implies that $d\omega_i\ne 0$ for all $i$. By rescaling $\omega_i$ if necessary we can assume that $\omega=\sum a_i\omega_i$ is closed iff $a_1+a_2+a_3=0$.
From the description of the forms $\omega_i$ it is also clear that $\omega_i^2=0$, that $\omega_i\wedge \omega_j$, $i<j$ are linearly independent, and that $vol=\omega_1\wedge \omega_2\wedge \omega_3$ is a volume form. Thus $\omega^3=6a_1a_2a_3\, vol$ is nonzero iff all three $a_i$ are nonzero. Hence $x$ must be one of 3 forms, depending on which $a_i$ vanishes. Assume, say, that $a_3=0$ and hence $x=\omega_1-\omega_2$ up to a multiple. Then $y=\sum a_i\omega_i$ for some nonzero $a_i$ with $\sum a_i=0$. The $2k$-dimensional classes $x$ and $y$ are the only closed invariant forms and are hence harmonic. But then the relation $y^2=-3x^2$ in cohomology must also hold on the level of $4k$-forms. Since $$ x^2=-2\omega_1\wedge \omega_2 \ \text{and}\ y^2=2a_1a_2\omega_1\wedge \omega_2 +2a_1a_3\omega_1\wedge \omega_3+2a_2a_3\omega_2\wedge \omega_3, $$
it follows that $a_1a_3=a_2a_3=0$, which implies that $y$ is a multiple of $x$. But this is impossible, since $x$ and $y$ are linearly independent in cohomology. This finishes the proof for the three Wallach flag manifolds.
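The multiplicative identity $\omega^3=6a_1a_2a_3\, vol$ used above can be double-checked symbolically: the $\omega_i$ have even degree, hence commute, and satisfy $\omega_i^2=0$, so they can be modeled by commuting variables squaring to zero. The following snippet (our verification, not part of the paper) expands the cube and discards every monomial of degree $\ge 2$ in some $\omega_i$:

```python
import sympy as sp

a1, a2, a3, w1, w2, w3 = sp.symbols('a1 a2 a3 w1 w2 w3')

# even-degree forms commute, so omega^3 expands like an ordinary cube;
# the relations w_i^2 = 0 kill every monomial of degree >= 2 in some w_i
cube = sp.expand((a1*w1 + a2*w2 + a3*w3)**3)
reduced = sum(t for t in cube.as_ordered_terms()
              if all(sp.degree(t, w) < 2 for w in (w1, w2, w3)))

# what survives is exactly 6*a1*a2*a3 * w1*w2*w3, i.e. omega^3 = 6 a1 a2 a3 vol
assert sp.expand(reduced - 6*a1*a2*a3*w1*w2*w3) == 0
```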
\section{The complex projective space $\mathbb{C\mkern1mu P}^{2n+1}$}\label{CPn}
For $\mathbb{C\mkern1mu P}^{2n+1}=\ensuremath{\operatorname{SU}} (2n+2)/S(\ensuremath{\operatorname{U}} (2n+1)\ensuremath{\operatorname{U}} (1))$ it is well known that the set of homogeneous metrics can be described as follows, see e.g. \cite{Zi1}. First, observe that $\ensuremath{\operatorname{Sp}} (n+1)\subset \ensuremath{\operatorname{SU}} (2n+2)$ acts transitively on $\mathbb{C\mkern1mu P}^{2n+1}$ with stabilizer $\ensuremath{\operatorname{Sp}} (n)\cdot\ensuremath{\operatorname{S}} ^1$.
From the inclusions $\ensuremath{\operatorname{Sp}} (n)\cdot\ensuremath{\operatorname{S}} ^1\subset \ensuremath{\operatorname{Sp}} (n)\cdot\ensuremath{\operatorname{Sp}} (1)\subset\ensuremath{\operatorname{Sp}} (n+1)$ we obtain the twistor fibration $\mathbb{S}^2\to \mathbb{C\mkern1mu P}^{2n+1}\to \mathbb{H\mkern1mu P}^{n}$, and every homogeneous metric is obtained from the metric induced by a biinvariant metric on $\ensuremath{\operatorname{Sp}} (n+1)$ by scaling it with a factor $t$ on the fibers of this Riemannian submersion.
Of course, on $\mathbb{C\mkern1mu P}^{2n+1}$ we have, up to scale, only one harmonic 2-form $\alpha_H$, and a metric is geometrically formal if $\alpha_H^k$ is again harmonic for $k=1,2,\ldots, 2n+1$. We will show that already for $\alpha_H^2$ this is the case only for the symmetric metric.
We need to express the invariant forms explicitly in some basis.
We choose the embedding of $\ensuremath{\operatorname{Sp}} (n)$ in $\ensuremath{\operatorname{Sp}} (n+1)$ as the upper block embedding, i.e. the stabilizer of the last basis vector in its action on ${\mathbb{H}} ^{n+1}$. We first describe the basis of its orthogonal complement. Recall that $E_{i,j}$ is the matrix which has a 1 in row $i$ and column $j$, and 0 otherwise. Set $$
e_1=iE_{n+1,n+1},\ \ e_2=jE_{n+1,n+1},\ \ e_3=kE_{n+1,n+1}, \ \ Y_\alpha=E_{\alpha,n+1}-E_{n+1,\alpha}$$
$$
Y_{\alpha,1}=i E_{\alpha,n+1}+iE_{n+1,\alpha} ,\ \ Y_{\alpha,2}=j E_{\alpha,n+1}+jE_{n+1,\alpha} ,\ \ Y_{\alpha,3}=k E_{\alpha,n+1}+kE_{n+1,\alpha}$$ where $\alpha$ goes from $1$ to $n$.
Let $H=\ensuremath{\operatorname{Sp}} (n)\cdot\ensuremath{\operatorname{S}} ^1\subset\ensuremath{\operatorname{Sp}} (n+1)$ where $\ensuremath{\operatorname{S}} ^1=e^{it}\subset \ensuremath{\operatorname{Sp}} (1)$. Then the orthogonal complement of ${\mathfrak{h}} $ in ${\mathfrak{g}} $ splits as $$ {\mathfrak{m}} = V\oplus W={\mathbb{C}} \oplus{\mathbb{H}} ^n \ \text{with}\ V=\spam(e_2,e_3) \ \text{and} \ W=\spam( Y_\alpha,Y_{\alpha,1},Y_{\alpha,2},Y_{\alpha,3}), \ \alpha=1,\ldots n $$ and $\ensuremath{\operatorname{diag}} (A,e^{it})\in H$ acts on ${\mathfrak{m}} $ as $(z,v) \to (e^{2it}z,Ave^{-it})$. $H$ acts irreducibly on $V$ and $W$ and hence the metric depends on 2 parameters. We denote by $\langle \ , \ \rangle_t$ the metric on ${\mathfrak{m}} $ where $e_i$ have length $t$ and the basis vectors in $W$ have length 1. Extended to a homogeneous metric on $G/H$, the symmetric metric then corresponds to $t=1$.
For the ${\mathfrak{m}} $ component of the Lie brackets of vectors in ${\mathfrak{m}} $ we have: $$ [e_2,e_3]_{\mathfrak{m}} =0\ ,\ [e_i,Y_\alpha]_{\mathfrak{m}} =-Y_{\alpha,i}\ ,\ [e_i,Y_{\alpha,i}]_{\mathfrak{m}} =Y_{\alpha}\ ,\ [e_i,Y_{\alpha,j}]_{\mathfrak{m}} =Y_{\alpha,k}\ ,\ [Y_{\alpha},Y_{\alpha,i}]_{\mathfrak{m}} =-2e_i$$ $$
[Y_{\alpha,i},Y_{\alpha,j}]_{\mathfrak{m}} =2e_k\ ,\ [Y_{\alpha},Y_{\beta}]_{\mathfrak{m}} = [Y_{\alpha},Y_{\beta,i}]_{\mathfrak{m}} =[Y_{\alpha,i},Y_{\beta,i}]_{\mathfrak{m}} =[Y_{\alpha,i},Y_{\beta,j}]_{\mathfrak{m}} =0$$ where $i,j,k$ is a cyclic permutation of $1,2,3$ and $\alpha,\beta$ are distinct.
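These quaternionic bracket relations can also be checked numerically in the smallest case $n=1$, writing quaternionic $2\times 2$ matrices as $4\times 4$ complex matrices via the standard representation of the quaternion units (our sanity check, not part of the paper). Only brackets with no ${\mathfrak h}$-component are asserted, since the code computes full brackets rather than their ${\mathfrak m}$-components:

```python
import numpy as np

# 2x2 complex matrices representing the quaternion units 1, i, j, k
one = np.eye(2, dtype=complex)
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]], dtype=complex)
qk = qi @ qj

Z = np.zeros((2, 2), dtype=complex)

# n = 1, alpha = 1: quaternionic 2x2 matrices written as 4x4 complex blocks
e1 = np.block([[Z, Z], [Z, qi]])   # spans the S^1 direction (lies in h)
e2 = np.block([[Z, Z], [Z, qj]])
e3 = np.block([[Z, Z], [Z, qk]])
Y  = np.block([[Z, one], [-one, Z]])
Y1 = np.block([[Z, qi], [qi, Z]])
Y2 = np.block([[Z, qj], [qj, Z]])
Y3 = np.block([[Z, qk], [qk, Z]])

br = lambda a, b: a @ b - b @ a

# these particular brackets have no h-component, so they hold on the nose
assert np.allclose(br(e2, Y), -Y2)    # [e_i, Y_a]     = -Y_{a,i}
assert np.allclose(br(e2, Y2), Y)     # [e_i, Y_{a,i}] =  Y_a
assert np.allclose(br(e2, Y3), Y1)    # [e_i, Y_{a,j}] =  Y_{a,k} (cyclic)
assert np.allclose(br(e1, Y), -Y1)
```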
As in the previous case, we first compute the differentials of 1-forms:
\renewcommand{\arraystretch}{1.4} \stepcounter{equation}
\begin{table}[ht] \begin{center}
\begin{tabular}{c|c} $w$ & $\dif w$ \\ \hline $Y_\alpha$ & $\hspace{10pt} e_2\wedge Y_{\alpha,2}+ e_3\wedge Y_{\alpha,3} $ \\ $Y_{\alpha,1}$ & $\hspace{10pt} e_2\wedge Y_{\alpha,3}- e_3\wedge Y_{\alpha,2}$ \\ $Y_{\alpha,2}$ & $\hspace{0pt} -e_2\wedge Y_{\alpha}\ \ + e_3\wedge Y_{\alpha,1}$ \\ $Y_{\alpha,3}$ & $\hspace{-7pt} -e_2\wedge Y_{\alpha,1}- e_3\wedge Y_{\alpha}$ \\ $e_2$ & $\hspace{10pt}\sum_\alpha ( -2Y_{\alpha}\wedge Y_{\alpha,2}+ 2Y_{\alpha,3}\wedge Y_{\alpha,1})$ \\ $e_3$ & $\hspace{10pt} \sum_\alpha ( -2Y_{\alpha}\wedge Y_{\alpha,3}+ 2Y_{\alpha,1}\wedge Y_{\alpha,2})$. \end{tabular} \end{center}
\caption{Differentials of one-forms on $\mathbb{C\mkern1mu P}^{2n+1}$}\label{1formsCPn} \end{table}
We now determine the $H$ invariant forms. Clearly, in $\Lambda^*(V)$ we only have the volume element $v=e_2\wedge e_3\in\Lambda^2(V)$. Recall that the algebra $\Lambda^*({\mathbb{H}} ^n)^{\ensuremath{\operatorname{Sp}} (n)}$ of invariant forms is generated by the 3 symplectic forms, corresponding to the K\"ahler forms $\omega_I, \omega_J,\omega_K$ associated to the 3 complex structures coming from right multiplication with $I,J,K\in \ensuremath{\operatorname{Sp}} (1)$ on $W={\mathbb{H}} ^n$. Thus $\Lambda^*(W)^{\ensuremath{\operatorname{Sp}} (n)}$ is generated by:
{ \fontsize{11}{15} \selectfont $$ \omega_I=\sum_\alpha ( Y_{\alpha}\wedge Y_{\alpha,1}- Y_{\alpha,2}\wedge Y_{\alpha,3})\ , \omega_J=\sum_\alpha ( Y_{\alpha}\wedge Y_{\alpha,2}- Y_{\alpha,3}\wedge Y_{\alpha,1}), \ \omega_K=\sum_\alpha ( Y_{\alpha}\wedge Y_{\alpha,3}- Y_{\alpha,1}\wedge Y_{\alpha,2}) $$ }
and hence $$ \Lambda^*({\mathfrak{m}} )^{\ensuremath{\operatorname{Sp}} (n)} \ \text{is generated by }\ e_2,\ e_3,\ \omega_I ,\ \omega_J,\ \omega_K. $$ In this algebra we can identify the $H$ invariant forms by determining the action of the circle in $H$ and diagonalizing it via complexification. On $e_2, e_3$ the circle $e^{it}\in \ensuremath{\operatorname{S}} ^1\subset H$ acts via a rotation $R(2t)$ since it is given by conjugation. On the two-plane spanned by $Y_{\alpha},\ Y_{\alpha,1}$ one easily checks that it acts via a rotation $R(-t)$ and on the two-plane spanned by $Y_{\alpha,2},\ Y_{\alpha,3}$ as a rotation $R(t)$. Hence it acts trivially on $\omega_I$, and on the two-plane spanned by $\omega_J, \omega_K$ it acts as $R(2t)$. This action is diagonal in the basis $e_2+ie_3,e_2-ie_3, \omega_I, \omega_J+i\omega_K, \omega_J-i\omega_K$ and acts via $\theta^2 + (\theta^*)^2 + \Id+ \theta^2 + (\theta^*)^2$. Thus we obtain invariant forms, besides $\omega_I$, by taking real and imaginary parts of $(e_2+ie_3)\wedge(e_2-ie_3)$ and $( \omega_J+i\omega_K)\wedge (\omega_J-i\omega_K)$ as well as $(e_2+ie_3)\wedge(\omega_J-i\omega_K)$. This gives us the following basis for the invariant forms in low degrees: $$ \Lambda^2({\mathfrak{m}} )^H=\spam(v,\ \omega_I), \ \text{where }\ v=e_2\wedge e_3 $$ and $$ \Lambda^3({\mathfrak{m}} )^H=\spam(\beta_1,\beta_2) \ \text{where }\ \beta_1=e_2\wedge \omega_J+e_3\wedge \omega_K \ \text{and }\ \beta_2=e_2\wedge \omega_K-e_3\wedge \omega_J $$ and the invariant 4-forms $$ \Lambda^4({\mathfrak{m}} )^H=\spam(\omega_I^2,\ v\wedge \omega_I,\ \omega_J^2+ \omega_K^2). $$
Notice that in the above language $de_2=- 2\omega_J $ and $de_3=- 2\omega_K $, and that we have the relations $v\wedge v=v\wedge\beta_1=v\wedge\beta_2=0$. Using Table \ref{1formsCPn} and the product formula, one easily sees that: $$ dv=2\beta_2,\ \ d\omega_I=2\beta_2,\ \ d\omega_J=2e_3\wedge\omega_I,\ \ d\omega_K=-2e_2\wedge\omega_I, \ \ d\beta_1=-2(\omega_J^2+ \omega_K^2 ) -4 v\wedge\omega_I. $$ Thus, up to scale, $\alpha_H=v-\omega_I$ is the only closed invariant 2-form, and it is hence harmonic.
We now claim that $\alpha_H^2$ can only be harmonic for the symmetric metric. For this, we compute the differentials of the 4-forms: $$ d\omega_I^2=4\omega_I\wedge\beta_2,\ d(v\wedge \omega_I)=2\omega_I\wedge\beta_2,\ d(\omega_J^2+ \omega_K^2)=-4\omega_I\wedge\beta_2. $$ Thus we have 2 closed 4-forms: $$ \alpha_H^2=\omega_I^2-2v\wedge \omega_I,\ \ \text{and }\ \omega_J^2+ \omega_K^2+2 v\wedge \omega_I, $$ and we need to determine which linear combination is harmonic. For this it needs to be orthogonal to the derivatives of the invariant 3-forms, i.e. to $d\beta_1$, since $\beta_2=\tfrac12 d\omega_I$ and hence $d\beta_2=0$. Thus the 4-form is harmonic iff $$ \langle a(\omega_I^2-2v\wedge \omega_I ) + b ( \omega_J^2+ \omega_K^2+2 v\wedge \omega_I), (\omega_J^2+ \omega_K^2 ) +2 v\wedge\omega_I \rangle_t=0. $$ Set $L=\langle\omega_I,\omega_I\rangle=\langle\omega_J,\omega_J\rangle=\langle\omega_K,\omega_K\rangle$. A direct computation in the orthonormal basis of $W$ gives $\langle \omega_I^2,\omega_J^2+\omega_K^2\rangle=4L$ and $\langle \omega_J^2+\omega_K^2,\omega_J^2+\omega_K^2\rangle=4L^2$, while the squares of the symplectic forms are orthogonal to $v\wedge \omega_I$ and $\langle v\wedge\omega_I,v\wedge\omega_I\rangle= \langle v, v\rangle\cdot \langle \omega_I,\omega_I\rangle= t^2L$. Thus we need $$ 4aL(1-t^2)+4bL(L+t^2)=0. $$ But this implies that the only value of $t$ where $\alpha_H^2$, i.e. the combination with $b=0$, is harmonic is $t=1$, i.e. the symmetric metric.
This finishes the proof of our main Theorem.
We note that in the terminology from \cite{Na} we proved that a homogeneous metric on $\mathbb{C\mkern1mu P}^{2n+1}$ which is 2-formal, i.e., for which products of harmonic 2-forms are again harmonic, is already symmetric.
We remark further that the metrics with positive sectional curvature are described as follows. For $B^{13}$ and $ \mathbb{C\mkern1mu P}^{2n+1}$ we consider the fibrations $\mathbb{S}^2\to \mathbb{C\mkern1mu P}^{2n+1}\to \mathbb{H\mkern1mu P}^{n}$ and $\mathbb{R\mkern1mu P}^5\to B^{13}\to \mathbb{C\mkern1mu P}^4$ and scale the fibers with $t$. The metric then has positive curvature iff $0<t<\frac43$. For the more complicated description of the homogeneous positively curved metrics on $W_{p,q}$ see \cite{Pu}, for the ones on the flag manifolds see \cite{Va}, and for the ones on spheres \cite{VZ}.
\end{document} |
\begin{document}
\title[Real structures on KLT Calabi-Yau pairs]{Finiteness of real structures on\\ KLT Calabi-Yau regular smooth pairs of dimension 2} \author{Mohamed Benzerga} \address{Universit\'e d'Angers, \textsc{Larema}, UMR CNRS 6093, 2, boulevard Lavoisier, 49045 Angers cedex 01, France} \email{mohamed.benzerga@univ-angers.fr} \noindent \subjclass{14J26, 14J50, 14P05, 20F67} \keywords{Rational surfaces, real structures, real forms, KLT Calabi-Yau pairs, CAT(0) metric spaces.}
\begin{abstract} In this article, we prove that a smooth projective complex surface $X$ which is regular (i.e. such that $h^1(X,\mathcal O_X)=0$) and which carries a $\mathbb R$-divisor $\Delta$ such that $(X,\Delta)$ is a KLT Calabi-Yau pair has finitely many real forms up to isomorphism. For this purpose, we construct a complete CAT(0) metric space on which $\mathrm{Aut\,} X$ acts properly discontinuously and cocompactly by isometries, using Totaro's Cone Theorem. Then we give an example of a smooth rational surface with finitely many real forms whose automorphism group is nevertheless so large that \cite{mon-papier} does not predict this finiteness. \end{abstract} \renewcommand{\subjclassname} {\textup{2010} Mathematics Subject Classification}
\maketitle
\section*{Introduction}
\indent A \textit{real form} of a complex projective variety $X$ is a scheme over $\mathbb R$ whose complexification is $\mathbb C$-isomorphic to $X$. A \textit{real structure} on $X$ is an antiregular (or antiholomorphic) involution $\sigma : X\to X$ (cf. \cite[Chap. 2]{le-bouquin}). Two real structures $\sigma$ and $\sigma'$ are \textit{equivalent} if there is a $\mathbb C$-automorphism $\varphi$ of $X$ such that $\sigma'=\varphi\sigma\varphi^{-1}$.\\ \indent By Weil descent of the base field (cf. \cite[III.\S 1.3]{serre-cohgal-english}), there is a bijective correspondence between the set of $\mathbb R$-isomorphism classes of real forms of $X$ and the set of equivalence classes of real structures on $X$. Moreover, if $\sigma$ is a real structure on $X$, this set is parametrized by the \textit{first Galois cohomology set} $H^1(G,\mathrm{Aut\,}_{\mathbb C}X)$, where $G=\langle\sigma\rangle$ acts on $\mathrm{Aut\,}_{\mathbb C}X$ by conjugation.\\ \indent The results of this paper are motivated by the study of the finiteness problem for real forms of rational surfaces. We already addressed this question in our previous article \cite{mon-papier}, whose main result, combined with [\textit{loc. cit.} \S 3.2], reads as follows: \begin{thm-intro}\label{first-thm} Let $X$ be a smooth complex rational surface and let $\mathrm{Aut\,}^*X$ be the image of the natural morphism $\mathrm{Aut\,} X\to \mathrm{O}(\mathrm{Pic\,} X)$.\\ If $\mathrm{Aut\,}^*X$ does not contain a non-abelian free group $\mathbb Z*\mathbb Z$ then $X$ has finitely many real forms up to $\mathbb R$-isomorphism.\end{thm-intro} However, this result does not completely solve the problem since there are rational surfaces whose automorphism group does contain a non-abelian free group (cf. Example \ref{example-12-points}). In fact, it is not known how large $\mathrm{Aut\,}^*X$ can be for a rational surface $X$. 
For example, to the best of our knowledge, the problem of the finite generation of the group $\mathrm{Aut\,}^*X$ is open (but Lesieutre constructed in \cite{lesieutre} a six-dimensional variety $X$ such that $\mathrm{Aut\,}^*X$ is not finitely generated, and he showed that $X$ is an example of a smooth projective variety having infinitely many non-isomorphic real forms\footnote{To the best of our knowledge, it is the first known example.}).\\ \indent The aim of this article is to prove the following result: \begin{thm-intro}\label{main-thm} Let $X$ be a smooth projective complex surface which is regular (i.e. $q(X) := h^1(X,\mathcal O_X) = 0$).\\ If there is a $\mathbb R$-divisor $\Delta$ on $X$ such that $(X,\Delta)$ is a KLT Calabi-Yau pair\footnote{See Definition \ref{def-klt-cy}}, then $X$ has finitely many real forms up to $\mathbb R$-isomorphism.\end{thm-intro} The proof of this result uses tools of a different kind from those of \cite{mon-papier}, mainly geometric actions on complete CAT(0) metric spaces: roughly speaking, a metric space is CAT(0) if it has ``nonpositive curvature''. We will give precise definitions in section \ref{section-cat0}. Then, we recall the definition of KLT Calabi-Yau pairs and we prove the finiteness of real forms for them using Totaro's Cone Theorem \ref{cone-thm}. We give an example of a rational surface for which the finiteness of real forms cannot be deduced from Theorem \ref{first-thm} but follows from Theorem \ref{thm-finiteness-klt}. Finally, we present an example of a rational surface for which the finiteness problem remains open and which can be equipped with a $\mathbb Q$-divisor $\Delta$ such that $(X,\Delta)$ is log-canonical Calabi-Yau.
\section{Preliminaries: Geometric actions on CAT(0) spaces} \label{section-cat0}
We begin this section with a brief explanation of the link between the finiteness of real forms and geometric (i.e. proper and cocompact) actions on CAT(0) spaces, which are a generalization of manifolds with nonpositive curvature (see \cite[I.1.3, II.1.1]{bridson-haefliger}): this will be our main tool in order to turn our finiteness problem into a problem of hyperbolic geometry.
\begin{definition} \label{def-cat0} \begin{itemize}
\item A \textbf{geodesic} between two points $a$ and $b$ in a metric space $(X,d)$ is a map $\gamma : [0,\ell]\to X$ such that $\gamma(0)=a$, $\gamma(\ell)=b$ and $\forall t,t'\in[0,\ell],\;d(\gamma(t),\gamma(t'))=|t-t'|$ (in particular, $\gamma$ is continuous and $\ell = d(a,b)$). A \textbf{geodesic triangle} $\Delta$ in $X$ consists of three points $x,y,z\in X$ and three geodesic segments $[x, y], [y, z], [z, x]$. \item A metric space $(X,d)$ is \textbf{geodesic} if every two points in $X$ are joined by a geodesic (not necessarily unique). \item A geodesic metric space $(X,d)$ is said to be a $\mathbf{\operatorname{\textbf{CAT}}(0)}$ \textbf{space} if for every geodesic triangle $\Delta$ in $X$, there exists a comparison triangle $\Delta'$ in $\mathbb R^2$ endowed with the Euclidean metric, with sides of the same length as the sides of $\Delta$, such that distances between points on $\Delta$ are less than or equal to the distances between corresponding points on $\Delta'$. \index{CAT(0) space}
\item \emph{\cite[I.8.2]{bridson-haefliger}\footnote{As explained in \textit{op.cit.}, I.8.3, if every closed ball of $X$ is compact, then this definition is equivalent to the standard definition where the open balls are replaced by the compact subsets of $X$: for us, this is always the case.}} Let $\Gamma$ be a group acting by isometries on a metric space $X$. The action is said to be \textbf{proper} (or \textbf{properly discontinuous}) if\footnote{Denoting by $B(x,r) = \{ y\in X | d(x,y) <r\}$ the open ball of center $x\in X$ and radius $r\geq 0$.}
$$\forall x\in X,\; \exists r > 0 ,\;\{\gamma\in\Gamma|\gamma .B(x, r) \cap B(x, r) \neq \varnothing\} \text{ is finite}.$$ \end{itemize} \end{definition} \begin{center} \includegraphics[scale=0.35]{Triangles-de-comparaison-CAT_0_-1.pdf} \end{center}
\begin{theorem} {\bf (\cite[II.2.8]{bridson-haefliger})} \label{conj-classes-cat0}\\ If a group $\Gamma$ acts geometrically (i.e. properly discontinuously and cocompactly by isometries) on a complete CAT(0) space, then $\Gamma$ contains only finitely many conjugacy classes of finite subgroups.\qed \end{theorem}
\section{Finiteness theorem}
Firstly, let us introduce the surfaces we deal with (cf. \cite{totaro}, \cite[8.2,8.4]{totaro2} for a slightly more general definition): \begin{definition} {\bf (KLT Calabi-Yau pair)} \label{def-klt-cy} \index{KLT Calabi-Yau pair}\\ Let $X$ be a \textit{smooth} projective complex variety and $\Delta$ be a $\mathbb R$-divisor on $X$.\\ $(X,\Delta)$ is \textbf{a KLT (\textit{resp.} log-canonical) Calabi-Yau pair} if there exists a resolution $\pi : (\widetilde{X},\widetilde{\Delta})\to (X,\Delta)$ satisfying the following conditions: \begin{itemize} \item $K_{\widetilde{X}}+\widetilde{\Delta} = \pi^*(K_X+\Delta)$; \item $\widetilde{\Delta}$ has simple normal crossings and its coefficients are $< 1$ (KLT condition), \textit{resp.} $\leq 1$ (log-canonical condition); \item $\Delta$ is an effective $\mathbb R$-divisor such that $K_X+\Delta$ is numerically trivial (Calabi-Yau condition). \end{itemize} \end{definition}
\begin{example} \label{example-klt} Let us present here some examples of KLT Calabi-Yau pairs. The reader may also look at Examples \ref{example-12-points} and \ref{exemple-dolgachev}. \begin{itemize} \item Of course, there are irrational surfaces $X$ having a $\mathbb R$-divisor $\Delta$ such that $(X,\Delta)$ is KLT Calabi-Yau (simply think of $X$ being Calabi-Yau smooth and $\Delta=0$). There are less trivial examples, like some $\mathbb P^1$-bundles over elliptic curves (cf. \cite[1.4]{alexeev-mori}). \item If $X$ is a \textit{Halphen surface} of index $m\geq 2$ (for the definition, cf. \cite[\S 2]{cantat-dolgachev}) and $F$ a reduced fibre of the elliptic fibration on $X$ with simple normal crossings, then $\left(X,\dfrac{1}{m}F \right)$ is a KLT Calabi-Yau pair: for, the definition of a Halphen surface shows that $F\sim -mK_X$ and $\dfrac{1}{m}<1$. If $X$ is of index 1, then $\left(X,\dfrac{1}{2}(F+F')\right)$ is a KLT Calabi-Yau pair for two distinct reduced smooth fibers $F$ and $F'$ of $X\to\mathbb P^1$. \item Similarly, if $X$ is a \textit{Coble surface} and if the special fibre $F$ (see \cite[Prop. 3.1]{cantat-dolgachev}) is reduced and has simple normal crossings, then $\left(X,\dfrac{1}{2}F \right)$ is a KLT Calabi-Yau pair. \end{itemize} \end{example}
For our purposes, we need the following finiteness theorem: \begin{theorem} {\bf (Cone theorem for KLT Calabi-Yau pairs - \cite[8.7]{totaro2})}\label{cone-thm}\index{KLT Calabi-Yau! Cone theorem for}\index{Cone theorem}\\ Let $(X,\Delta)$ be a KLT Calabi-Yau pair.\\ If $X$ is a surface, then the action of $\mathrm{Aut\,} X$ on the nef cone has a rational polyhedral fundamental domain (i.e. it is the closed convex cone spanned by a \emph{finite} set of Cartier divisors in $\mathrm{Pic\,} X\otimes_{\mathbb Z}\mathbb R$).\qed \end{theorem}
The aim of this article is to prove the following result:
\begin{theorem} \label{thm-finiteness-klt}
Let $X$ be a smooth projective complex surface which is regular (i.e. $q(X) := h^1(X,\mathcal O_X) = 0$).\\ If there is a $\mathbb R$-divisor $\Delta$ on $X$ such that $(X,\Delta)$ is a KLT Calabi-Yau pair, then $X$ has finitely many real forms up to $\mathbb R$-isomorphism.\end{theorem}
Thus, Example \ref{example-klt} shows that our previous result about the finiteness of real forms for Cremona special surfaces (cf. \cite[3.2]{mon-papier}) is a special case of this result, at least when the fibre $F$ mentioned in Example \ref{example-klt} above is reduced with simple normal crossings.
\\
\noindent \textbf{Strategy of the proof. }--- Let $\sigma$ be a real structure on $X$ and let $\mathrm{Aut\,}^{\#}X$ (resp. $\mathrm{Aut\,}^*X$) be the kernel (resp. the image) of the natural morphism $p : \mathrm{Aut\,} X\to \mathrm{O}(\mathrm{Pic\,} X)$. If $G=\langle\sigma\rangle$ acts on $\mathrm{Aut\,} X$ by conjugation (i.e. $\forall\varphi\in\mathrm{Aut\,} X,\;\sigma.\varphi := \sigma\varphi\sigma^{-1}$), then the exact sequence $$1\longrightarrow \mathrm{Aut\,}^{\#}X \longrightarrow \mathrm{Aut\,} X\longrightarrow \mathrm{Aut\,}^*X \longrightarrow 1$$ is $G$-equivariant and induces an exact sequence in Galois cohomology. By \cite[I.\S 5.5, Cor. 3]{serre-cohgal-english}, it suffices to prove that $H^1(G,\mathrm{Aut\,}^*X)$ is finite and that $\forall b\in Z^1(G,\mathrm{Aut\,} X),\;H^1(G,(\mathrm{Aut\,}^{\#}X)_b)$ is finite. But this last condition holds for every smooth irreducible projective complex variety by \cite[III.\S 4.3]{serre-cohgal-english} and \cite[1.2]{mon-papier} (see also the proof of Theorem 2.5 in \textit{loc. cit.}). Thus, we need only show the finiteness of $H^1(G,\mathrm{Aut\,}^*X)$.\\ \indent Now, for the special case of KLT Calabi-Yau pairs, the idea is the following: using Totaro's Cone Theorem, we will construct a complete CAT(0) space on which $\mathrm{Aut\,}^*X\rtimes \langle\sigma^*\rangle$ acts geometrically (where $\sigma^*\in\mathrm{O}(\mathrm{Pic\,} X)$ is the isometry induced by $\sigma$). Then, we will be able to conclude that $H^1(G,\mathrm{Aut\,}^*X)$ is finite using Theorem \ref{conj-classes-cat0} together with the following result (which can be proved easily from the definitions, as in the proof of \cite[Th. 2.4]{mon-papier}):
\begin{lemma}\label{finiteness-conj-classes} Let $G=\langle \sigma\rangle \simeq \mathbb Z/2\mathbb Z$, $A$ be a $G$-group and $A\rtimes G$ the semidirect product defined by the action of $G$ on $A$.\\ If $A\rtimes G$ has a finite number of conjugacy classes of elements of order 2 (in particular, if it has finitely many conjugacy classes of finite subgroups), then $H^1(G,A)$ is finite. \end{lemma}
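To make the cocycle formalism behind Lemma \ref{finiteness-conj-classes} concrete, one can enumerate $H^1(G,A)$ by brute force in a small abelian toy case, say $A=\mathbb Z/4\mathbb Z$ with $\sigma$ acting by inversion (the Python sketch below is an illustration only and plays no role in the proof):

```python
# H^1(<sigma>, Z/4Z) with sigma acting by inversion, computed from the
# cocycle description: Z^1 = { a : a + sigma(a) = 0 }, a ~ b + a - sigma(b).
n = 4
sigma = lambda a: (-a) % n

cocycles = [a for a in range(n) if (a + sigma(a)) % n == 0]

classes = []  # cohomology classes = orbits of the coboundary relation
for a in cocycles:
    orbit = {(b + a - sigma(b)) % n for b in range(n)}
    if orbit not in classes:
        classes.append(orbit)

print(len(cocycles), len(classes))  # 4 2: every element is a cocycle, H^1 has 2 classes
```

Here $H^1$ has the two classes $\{0,2\}$ and $\{1,3\}$, matching the classical computation $H^1(\mathbb Z/2\mathbb Z,\mathbb Z/4\mathbb Z)\simeq\mathbb Z/2\mathbb Z$ for the inversion action.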
\begin{center} --- $***$ --- \end{center}
Before beginning the proof of this theorem, let us give some definitions to clarify the terms we use: \begin{definition} \label{def-poly-funda-domain} Let $X$ be either $\mathcal H^n$ or $\mathbb R^n$ (\footnote{In what follows, when writing $\mathbb R^n$, it is understood as $\mathbb R^n$ equipped with its Euclidean metric (which is denoted by $E^n$ in \cite{ratcliffe}).}). \begin{itemize} \item A subset $C$ of $X$ is \textbf{convex} if $\forall x,y\in C$, the geodesic segment joining $x$ and $y$ is contained in $C$. \item A \textbf{side} of $C$ is a maximal nonempty convex subset of the relative boundary $\partial C$ (cf. \cite[p. 195, 198]{ratcliffe}). \item A \textbf{(convex) polyhedron} of $X$ is a nonempty closed convex subset of $X$ whose collection of sides is locally finite. In what follows, we will always say ``polyhedron'' instead of ``convex polyhedron''. \item Let $X$ be a subset of either $\mathcal H^n$ or $\mathbb R^n$. A \textbf{fundamental polyhedron (or polyhedral fundamental domain)} for the action of a discrete group $\Gamma$ of isometries of $X$ is a polyhedron $P$ whose interior $\mathring{P}$ is such that the elements of $\{g(\mathring{P}),\;g\in\Gamma\}$ are pairwise disjoint and $\displaystyle X=\bigcup_{g\in\Gamma} g(P)$. Moreover, $P$ is a \textbf{locally finite fundamental polyhedron} if the set $\{g(P),\;g\in\Gamma\}$ is locally finite, i.e. if for every compact $K\subseteq X$, there are only finitely many elements of $\{g(P),\;g\in\Gamma\}$ which intersect $K$. \end{itemize} \end{definition}
\begin{proof}[Proof of Theorem \ref{thm-finiteness-klt}]\label{beginning-proof-klt} We begin by explaining how we can turn our problem into a problem of hyperbolic geometry. The Hodge index Theorem shows that the signature of the intersection form on $\mathrm{NS}(X) = \mathrm{Pic\,} X$ is $(1,n)$, where $\mathrm{rk}\,\mathrm{Pic\,} X = n+1$. Note that this is the only place where we use the fact that $h^1(X,\mathcal O_X)=0$. In fact, one could try to remove this hypothesis by replacing $\mathrm{Aut\,}^{\#}X$ and $\mathrm{Aut\,}^*X$ with the analogous groups corresponding to the action of $\mathrm{Aut\,} X$ on $\mathrm{NS}(X)$ instead of $\mathrm{Pic\,} X$; however, we do not have a general cohomological finiteness result for the kernel of the action of $\mathrm{Aut\,} X$ on $\mathrm{NS}(X)$, whereas we gave such a result for the kernel of the action of $\mathrm{Aut\,} X$ on $\mathrm{Pic\,} X$ in the paragraph ``Strategy of the proof'' above.\\
\indent Thus, we obtain the \textit{hyperboloid model} of the hyperbolic space $\mathcal H^n := \{v \in \mathrm{Pic\,} X\otimes_{\mathbb Z}\mathbb R\,|\;v^2=1,\;v.H>0\}$ equipped with the distance $d :(u,v) \mapsto \mathrm{argcosh}(u.v)$ (where $u.v$ is the intersection product of $u$ and $v$, and $H$ is an ample divisor class on $X$).\\
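For readers who wish to experiment with this model, here is a small numerical sketch (Python, illustration only; the signature-$(1,2)$ Lorentzian form below is a stand-in for the intersection form): it checks that $d(u,v)=\mathrm{argcosh}(u.v)$ vanishes on the diagonal, is symmetric, and satisfies the triangle inequality on sample points of the hyperboloid $v^2=1$.

```python
import math

# Lorentzian form of signature (1, 2), a stand-in for the intersection form.
def dot(u, v):
    return u[0]*v[0] - u[1]*v[1] - u[2]*v[2]

def lift(x, y):
    # lift (x, y) to the hyperboloid sheet {v.v = 1, v0 > 0}
    return (math.sqrt(1 + x*x + y*y), x, y)

def dist(u, v):
    return math.acosh(max(1.0, dot(u, v)))  # clamp guards against rounding below 1

u, v, w = lift(0.0, 0.0), lift(0.3, -0.4), lift(-0.5, 0.2)
assert all(abs(dot(p, p) - 1) < 1e-9 for p in (u, v, w))  # points lie on v.v = 1
assert dist(u, u) == 0.0
assert abs(dist(u, v) - dist(v, u)) < 1e-12              # symmetry
assert dist(u, w) <= dist(u, v) + dist(v, w) + 1e-12     # triangle inequality
```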
\indent The radial projection $\pi : \mathrm{Pic\,} X\otimes_{\mathbb Z}\mathbb R \to \mathrm{Pic\,} X\otimes_{\mathbb Z}\mathbb R$ from the origin onto the hyperplane $\{v \in \mathrm{Pic\,} X\otimes_{\mathbb Z}\mathbb R\,|\;v.E_0=1\}\simeq \mathbb R^n$ maps the hyperboloid $\mathcal H^n$ onto the open unit ball $D^n$ of this hyperplane: when endowed with the appropriate metric, this is the \textit{Klein (projective) model} of $\mathcal H^n$ and $\pi$ restricts to an isometry $\mathcal H^n\to D^n$. The geodesic lines of this model are straight line segments, so that the convex subsets of $D^n$ (for the hyperbolic metric) are exactly its convex subsets for the Euclidean metric of $D^n\subseteq \mathbb R^n$. Note that $\pi$ maps the isotropic half-cone $\{v \in \mathrm{Pic\,} X\otimes_{\mathbb Z}\mathbb R\,|\;v^2=0,\;v.E_0>0\}$ onto the boundary $\partial D^n$ (which can be seen as the set of lines of this isotropic half-cone). We also want to mention the \textit{Poincaré ball model} $B^n$ of $\mathcal H^n$, which is obtained from the hyperboloid model by means of a stereographic projection from the south pole of the unit sphere of $\mathrm{Pic\,} X\otimes_{\mathbb Z}\mathbb R$ onto the hyperplane $\{v \in \mathrm{Pic\,} X\otimes_{\mathbb Z}\mathbb R\,|\;v.E_0=0\}$.\\ \indent Finally, if we denote by $\text{Nef}\,X$ the nef effective cone of $X$ and if $N := \pi( \text{Nef}\,X\cap \mathcal H^n) \simeq (\text{Nef}\,X\cap \mathcal H^n)/\mathbb R^*$, then we easily see that $N$ is a closed convex subset of $D^n$ and that, by Totaro's Cone Theorem \ref{cone-thm}, $\mathrm{Aut\,} X$ (or $\mathrm{Aut\,}^*X$) acts on it with a \textit{finitely sided} polyhedral fundamental domain (namely, the projection onto $D^n$ of a polyhedral fundamental domain of the action on $\text{Nef}\,X$).
Note that, as we said in the statement of Theorem \ref{cone-thm}, there is a fundamental domain $\mathcal P$ of the action of $\mathrm{Aut\,} X$ on $\mathrm{Nef}\,X$ which is the closed convex cone generated by finitely many points of $\mathrm{Pic\,} X\otimes_{\mathbb Z}\mathbb R$; hence, $\overline{P}:=\pi(\mathcal P)\subseteq \overline{D^n}$ is the convex hull of finitely many points, and classical results about convex polyhedra of $\mathbb R^n$ show that such a convex set is the intersection of finitely many half-spaces and has finitely many sides (which are defined by the bounding hyperplanes of $\overline{P}$). Hence, this is also true for $P := \overline{P}\cap D^n$, the fundamental polyhedron of the action of $\mathrm{Aut\,} X$ on $N$.\\ \indent In order to use Lemma \ref{finiteness-conj-classes} and Theorem \ref{conj-classes-cat0}, we have to prove that $\mathrm{Aut\,}^*X\rtimes \langle\sigma^*\rangle$ acts properly and cocompactly by isometries on a complete CAT(0) metric space. In fact, we are reduced to proving Lemma \ref{ratcliffe-modified}, which is the adaptation to our case of \cite[12.4.5, 1$\Rightarrow$ 2]{ratcliffe} (where we replaced a fundamental domain of the action on $\mathcal H^n$ by a fundamental domain on a closed convex subset, which is our $N$), and Lemma \ref{cat(0)-truncated-horoballs}. \textit{The proof ends on page \pageref{end-proof-klt}.} \end{proof}
\begin{lemma} \label{proper-action-discrete} Any discrete subgroup $\Gamma$ of $\mathrm{Isom\,}(\mathcal H^n)$ acts properly discontinuously on $\mathcal H^n$. \end{lemma}
\begin{proof} Note that the action of $\mathrm{Isom\,}(\mathcal H^n) \simeq \mathrm{O}^+(1,n)$ on $\mathcal H^n$ is transitive and that the stabilizer of a point $x\in\mathcal H^n$ (in the hyperboloid model) is the orthogonal group $\mathrm{O}(x^{\perp}) \simeq \mathrm{O}_n(\mathbb R)$. Thus, this action induces a bijection $\mathcal H^n \simeq \mathrm{O}^+(1,n)/\mathrm{O}_n(\mathbb R)$. Since $\Gamma$ is discrete in the locally compact group $\mathrm{O}^+(1,n)$ and since $\mathrm{O}_n(\mathbb R)$ is compact, the result follows from \cite[3.1.1]{wolf}. \end{proof}
Before stating our Lemmas \ref{ratcliffe-modified} and \ref{cat(0)-truncated-horoballs}, let us give some other definitions (cf. \cite{ratcliffe}): \begin{definition} Let $\Gamma$ be a discrete subgroup of $\mathrm{Isom\,}(\mathcal H^n)$. \begin{itemize} \item A point $a\in\partial \mathcal H^n$ is a \textbf{limit point} of $\Gamma$ if there is a point $x$ of $\mathcal H^n$ and a sequence $(g_i)$ of elements of $\Gamma$ such that $(g_i(x))$ converges to $a$. \item In the Poincaré ball model $B^n$, a \textbf{horoball} based at a point $a\in\partial B^n$ is a Euclidean ball contained in $\overline{B^n}$ which is tangent to $\partial B^n$ at the point $a$. \item Assume $\Gamma$ contains a parabolic element (cf. \cite[\S 4.7]{ratcliffe}) having $a\in\partial \mathcal H^n$ as its fixed point.\\ A \textbf{horocusp region} is an open horoball $B$ based at a point $a\in\partial \mathcal H^n$ such that $$\forall g\in\Gamma\setminus \text{Stab}_{\Gamma}(a),\;g(B)\cap B = \varnothing.$$ \end{itemize} \end{definition} The following Lemma develops and makes more precise an idea of Totaro in \cite[\S 7]{totaro2}: \begin{lemma}\label{ratcliffe-modified} Let $\Gamma$ be a discrete subgroup of $\mathrm{Isom\,}(\mathcal H^n)$, $L(\Gamma)$ the set of limit points of $\Gamma$ in $\overline{\mathcal H^n}$, $C(\Gamma)$ the convex hull of $L(\Gamma)$ in $\overline{\mathcal H^n}$ and $N$ a $\Gamma$-invariant closed convex subset of $\mathcal H^n$.\\ If the action of $\Gamma$ on $N$ has a finitely sided polyhedral fundamental domain $P$, then there exists a finite union (maybe empty) $V_0$ of horocusp regions with disjoint closures such that $(P\cap C(\Gamma))\setminus V_0$ is compact.
\end{lemma} \begin{proof}[Sketch of proof] The proof of this lemma is an adaptation of the proof of \cite[12.4.5, 1$\Rightarrow$ 2]{ratcliffe}: we replaced the fundamental domain of the action on $\mathcal H^n$ by a fundamental domain of the action on a closed convex subset, which is our $N$, and we replaced the geometrical finiteness hypothesis for $\Gamma$ (which is more general than the existence of a finitely sided polyhedral fundamental domain for $\Gamma$ on $\mathcal H^n$) by the hypothesis of the existence of a finitely sided polyhedral fundamental domain for the action of $\Gamma$ on $N$. Thus, we have to check all the proofs of the results used by \cite{ratcliffe} in the proof of \cite[12.4.5, 1$\Rightarrow$ 2]{ratcliffe} in order to replace $\mathcal H^n$ by a closed convex subset $N$. The details of these verifications are in the \hyperref[appendix-section]{Appendix}. Here, we sum up the main points: \begin{itemize}[label=$\bullet$,font=\small]
\item if $P_0$ is a fundamental polyhedron of the action of $\Gamma$ on $\mathcal H^n$, then $P=P_0\cap N$ is a fundamental polyhedron of the action of $\Gamma$ on $N$; \item by (6.6.10, 8.5.7), like $\mathcal H^n$, a closed convex subset $N$ of $\mathcal H^n$ is a proper, geodesically connected and geodesically complete metric space\footnote{A metric space is \textbf{proper} or \textbf{finitely compact} if every bounded closed subset of it is compact.}: indeed, $N$ is proper as a closed subspace of the proper metric space $\mathcal H^n$, $N$ is geodesically complete, as it is complete, and it is geodesically connected, since it is convex. From this fact, we can deduce that the action of $\Gamma$ on $N$ has a (locally finite) exact convex fundamental polyhedron, e.g. a Dirichlet polyhedron, by (5.3.5, 6.6.13) and (6.7.4 (2)), since the group is discrete (and hence acts properly discontinuously, cf. Lemma \ref{proper-action-discrete}) and since there is a point $a\in N$ whose stabilizer $\Gamma_a$ is trivial (by (6.6.12)).
\item for the other points, it is simply a matter of replacing $\mathcal H^n$ by $N$ and checking that everything remains true (sometimes by using the convexity and/or closedness of $N$ in $\mathcal H^n$). \end{itemize}
\end{proof}
\begin{lemma} \label{cat(0)-truncated-horoballs} Let $C$ be a closed convex subset of $\mathcal H^n$, $\Gamma$ a discrete subgroup of $\mathrm{Isom\,}(\mathcal H^n)$ stabilizing $C$, $V_0$ a finite family of open horoballs with disjoint closures and $\displaystyle V_1 := \bigcup_{\gamma\in\Gamma} \gamma(V_0)$.\\ There is a family of open horoballs with disjoint closures, obtained by shrinking the horoballs of $V_1$, whose union $U$ is such that $C\setminus U$ is a complete CAT(0) space. \end{lemma}
\begin{proof} By \cite[II.11.27]{bridson-haefliger}, for every family $U$ of \textit{disjoint}\footnote{This is the key point which explains why we had to prove Lemma \ref{ratcliffe-modified}.} open horoballs, $\mathcal H^n\setminus U$ is a complete CAT(0) space for the induced length metric (this distance is defined between two points as the infimum of the lengths of rectifiable curves of $\mathcal H^n\setminus U$ between those two points; it is different from the metric induced by the hyperbolic metric on $\mathcal H^n\setminus U$). Thus, $C\setminus U$ is complete as a closed subset of the complete space $\mathcal H^n\setminus U$. \\ It remains to study the geodesic connectedness (in the terminology of \cite[\S 1.4]{ratcliffe}) or convexity (in the terminology of \cite[I.1.3]{bridson-haefliger}) of $C\setminus V_1$ in $\mathcal H^n\setminus V_1$ to conclude (using \cite[II.1.15.(1)]{bridson-haefliger}) that $C\setminus V_1$ is CAT(0) for the metric induced by the distance of $\mathcal H^n\setminus V_1$ (which is itself the length metric induced by the metric of $\mathcal H^n$). So let $x,y\in C\setminus V_1$: if the geodesic $\gamma$ of $\mathcal H^n$ joining $x$ and $y$ is contained in $\mathcal H^n\setminus V_1$, then it is also contained in $C\setminus V_1$ since $C$ is convex. Otherwise, $\mathrm{Im\,}\gamma$ passes through at least one horoball, and \cite[II.11.33, II.11.34]{bridson-haefliger} shows that a geodesic $\delta$ of $\mathcal H^n\setminus V_1$ joining $x$ and $y$ is obtained by concatenation of the hyperbolic geodesics which are tangent to the bounding horospheres of the horoballs crossed by $\gamma$ on the one hand, and geodesics of these horo\textit{spheres} on the other hand. \textit{A priori}, it may happen that $\mathrm{Im\,}\delta$ is not contained in $C$.
But we can shrink $V_0$ so that the antipodal point of the base point of each horosphere of $V_0$ belongs to $C$ (\footnote{More simply, so that the horospheres do not ``get out'' of $C$.}); a finite number of such operations produces this effect on all the horoballs of $V_1$.
Hence, the geodesics of the horospheres are contained in $C$, and the hyperbolic geodesics are contained in $C$ because they join two points of $C$. Thus, if we denote by $U$ the result of this shrinking of $V_1$, we have shown that $C\setminus U$ is geodesically connected in $\mathcal H^n\setminus U$, and this shows that $C\setminus U$ is a complete CAT(0) space. \end{proof}
\begin{proof}[End of the proof of Theorem \ref{thm-finiteness-klt}] \label{end-proof-klt} \indent We apply Lemma \ref{ratcliffe-modified} with $\Gamma = \mathrm{Aut\,}^*X$, $N = \pi( \text{Nef}\,(X)\cap \mathcal H^n) \simeq \text{Nef}\,(X)/\mathbb R^*$ and $P$ being a fundamental polyhedron of the action of $\Gamma$ on $N$: this gives us a finite family $V_0$ of open horoballs \textit{with disjoint closures} and a convex subset $P_C = P\cap C(\Gamma)$ of $P$ such that $P_C\setminus V_0$ is \textit{compact}. Thus, $\Gamma$ acts properly (by Lemma \ref{proper-action-discrete}) and cocompactly on $C\setminus V_1$, where $C := N\cap C(\Gamma)$ and $\displaystyle V_1 := \bigcup_{\gamma\in\Gamma} \gamma(V_0)$ (note that $C(\Gamma)$ is a $\Gamma$-invariant closed convex subset of $\mathcal H^n$).\\ \indent Now, by Lemma \ref{cat(0)-truncated-horoballs}, we can replace $V_1$ by another family $U$ of open horoballs \textit{with disjoint closures} such that $C\setminus U$ is a complete CAT(0) space. The compactness of the fundamental domain is preserved by this shrinking because $P\setminus (U\cap P)$ is a bounded closed subset of $\mathcal H^n$, so it is compact because $\mathcal H^n$ is a proper metric space. Moreover, one can verify that the proof of \cite[12.4.5, 1$\Rightarrow$ 2]{ratcliffe} allows one to shrink the horoballs without trouble.\\ \indent \textit{Finally,} we can conclude that $\mathrm{Aut\,}^*X$ acts properly and cocompactly on the complete CAT(0) space $C\setminus U$. This is not enough: in order to apply Lemma \ref{finiteness-conj-classes} and Theorem \ref{conj-classes-cat0}, we must obtain the same result for $\mathrm{Aut\,}^*X\rtimes \langle\sigma^*\rangle$, where $\sigma$ is a real structure on $X$. By Lemma \ref{proper-action-discrete}, the discrete isometry group $\mathrm{Aut\,}^*X\rtimes \langle\sigma^*\rangle$ acts properly on $\mathcal H^n$ (hence also on $\mathcal H^n\cap (C\setminus U)$).
Since $\mathrm{Aut\,}^*X\rtimes \langle\sigma^*\rangle$ admits a fundamental domain which is a closed subset of the compact fundamental domain of $\mathrm{Aut\,}^*X$, we see that $\mathrm{Aut\,}^*X\rtimes \langle\sigma^*\rangle$ also acts cocompactly: this concludes the proof. \end{proof}
\begin{rmk} In fact, we could give a much shorter proof of Theorem \ref{thm-finiteness-klt} if $N$ were smooth and complete.\\
\indent First, note that $N$ is a pinched Hadamard manifold\footnote{A \textbf{pinched Hadamard manifold} is a complete simply connected Riemannian manifold all of whose sectional curvatures lie between two negative constants.} as a convex subset of $\mathcal H^n$: in particular, note that it is simply connected because of its convexity (which can be seen in the Klein model, where convexity is the same as Euclidean convexity and hence implies simple connectedness).\\
\indent Now, if we denote by $P$ a fundamental domain of the action of $\mathrm{Aut\,} X$ on $N\subseteq D^n$, we remark that $\overline{P}$ is a fundamental domain of the action of $\mathrm{Aut\,} X$ on $\overline{N}\subseteq\overline{D^n}$ and that it is a convex polyhedron of the Klein model $\overline{D^n}$ of $\overline{\mathcal H^n}$ and also of the Euclidean space $\mathbb R^n$ (since convexity in the Klein model is the same as Euclidean convexity). Indeed, by Definition \ref{def-poly-funda-domain}, we need to check that $\overline{P}$ has a locally finite collection of sides. But this is true since Totaro's Cone Theorem \ref{cone-thm} shows that it is finitely sided (as we have seen at the beginning of the proof of Theorem \ref{thm-finiteness-klt}, see page \pageref{beginning-proof-klt}). Thus, by \cite[6.4.8]{ratcliffe}, $\overline{P}$ has finite volume. Therefore, $P$ also has finite volume. Since there is a fundamental domain $P'$ of $\mathrm{Aut\,} X\rtimes \langle\sigma\rangle$ which is contained in $P$, we see that $P'$ also has finite volume. Thus, the quotient of $N$ by $\Gamma:= \mathrm{Aut\,} X\rtimes \langle \sigma\rangle$ is a finite volume quotient of the pinched Hadamard manifold $N$ and \cite[5.4.2, F5, 6.1, 5.5.2]{bowditch95} shows that $\Gamma$ has finitely many conjugacy classes of finite subgroups: thus $X$ has a finite number of real forms by Lemma \ref{finiteness-conj-classes}.\\
\end{rmk}
\section{Two examples}
\begin{example} \label{example-12-points} Here we study the example given by Totaro in \cite{totaro}: it is a blow-up $X$ of $\mathbb P^2$ at 12 points and we show that $\mathrm{Aut\,} X$ contains a subgroup isomorphic to $\mathbb Z*\mathbb Z$. Since there exists a $\mathbb R$-divisor $\Delta$ such that $(X,\Delta)$ is a KLT Calabi-Yau pair, $X$ has finitely many non-isomorphic real forms, and this finiteness cannot be deduced from Theorem \ref{first-thm}.\\ \indent Let $\zeta=e^{2i\pi/3}$. We denote by $X$ the blow-up of $\mathbb P^2$ at the 12 points of the set $\mathcal P = \{[1:\zeta^i:\zeta^j],\;(i,j)\in[\![ 0;2]\!]^2\}\cup\{[1:0:0],\;[0:1:0],\;[0:0:1]\}$. Let $C_1,\dots,C_9$ be the lines of $\mathbb P^2$ of equations $(y=x),\;(y=\zeta x),\;(y=\zeta^2 x),\;(z=x),\;(z=\zeta x),\;(z=\zeta^2 x),\;(z=y),\;(z=\zeta y),\;(z=\zeta^2 y)$. One easily verifies that each line $C_i$ passes through exactly 4 of the points of $\mathcal P$ and that each blown-up point is the intersection point of exactly 3 of the $C_i$: this is called the \textit{dual Hesse configuration}.\footnote{The Hesse configuration itself is not interesting for our purposes: since it contains exactly 9 points (and 12 lines), the surface obtained by blowing up these points has finitely many non-equivalent real structures by Theorem \ref{first-thm}.}\\ \indent Note that $\displaystyle \left(X,\frac{1}{3}\sum_{i=1}^9 \widehat{C_i}\right)$ is a KLT Calabi-Yau pair because: \begin{itemize} \item $X$ is smooth; \item $\displaystyle \frac{1}{3}\sum_{i=1}^9 \widehat{C_i}$ has simple normal crossings and its coefficients are $< 1$; \item $\displaystyle -K_X=\frac{1}{3}\sum_{i=1}^9 \widehat{C_i}$. \end{itemize} \indent Now, let us give some results about $\mathrm{Aut\,} X$.
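The incidence counts of the dual Hesse configuration just described can be verified mechanically; the following Python sketch (a floating-point numerical check, not part of the argument) confirms that each of the 9 lines contains exactly 4 of the 12 points, and that each point lies on exactly 3 of the lines.

```python
import cmath
from itertools import product

# The 12 blown-up points of P^2 in homogeneous coordinates.
zeta = cmath.exp(2j * cmath.pi / 3)
points = [(1, zeta**i, zeta**j) for i, j in product(range(3), repeat=2)]
points += [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

# The 9 lines as linear forms a*x + b*y + c*z = 0:
#   y = zeta^k x,  z = zeta^k x,  z = zeta^k y   (k = 0, 1, 2)
lines = ([(zeta**k, -1, 0) for k in range(3)]
         + [(zeta**k, 0, -1) for k in range(3)]
         + [(0, zeta**k, -1) for k in range(3)])

def on_line(L, P):
    return abs(sum(a * x for a, x in zip(L, P))) < 1e-9

per_line = [sum(on_line(L, P) for P in points) for L in lines]
per_point = [sum(on_line(L, P) for L in lines) for P in points]
assert per_line == [4] * 9 and per_point == [3] * 12
```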
Firstly, one can show that if a line $D$ passes through one of the points $[1:0:0]$, $[0:1:0]$, $[0:0:1]$ and one of the $[1:\zeta^i:\zeta^j]$, then $D$ is one of the $C_i$: for example, if $D$ passes through $[1:0:0]$ and one of the $[1:\zeta^i:\zeta^j]$, then, in the affine chart $(x\neq 0)$ of $\mathbb P^2$, we have $D=(z=\zeta^{j-i}y)$.\\ \indent We claim that \fbox{$\mathrm{Aut\,}^{\#} X=\{\Id\}$}: since all the blown-up points belong to $\mathbb P^2$, it suffices to check that there does not exist any line passing through at least 11 of the 12 points of $\mathcal P$ (in fact, $\mathrm{Aut\,}^{\#}X$ is non-trivial if and only if $\mathcal P$ does not contain 4 points in general position, i.e. if and only if all the points of $\mathcal P$ are collinear except maybe one of them). But if $D$ were such a line, then it would necessarily pass through one of the points $[1:0:0]$, $[0:1:0]$, $[0:0:1]$ and one of the $[1:\zeta^i:\zeta^j]$: thus, $D$ would be one of the $C_i$. Since none of the $C_i$ passes through 11 of the 12 blown-up points, we see that $D$ does not exist: this proves the claim.\\ \indent By the end of the example in \cite[\S 2]{totaro}, we have (denoting $E:=\mathbb C/\mathbb Z[\zeta]$): \begin{center} \fbox{$\displaystyle \mathrm{Aut\,} X \simeq \mathrm{Aut\,}^* X = (\mathbb Z/3\mathbb Z)^2\rtimes \frac{GL_2(\mathbb Z[\zeta])}{\mathbb Z/3\mathbb Z} = \mathrm{Aut\,} ((E\times E)/(\mathbb Z/3\mathbb Z))$} \end{center} \indent Finally, we want to show that $\mathrm{Aut\,} X$ contains a subgroup isomorphic to $\mathbb Z*\mathbb Z$: \begin{itemize}[label=$\bullet$,font=\small] \item it is well-known that $SL_2(\mathbb Z)$ contains finite-index subgroups isomorphic to $\mathbb Z*\mathbb Z$ (for example, $\displaystyle S:=\left\langle\begin{bmatrix} 1 & 2\\ & 1 \end{bmatrix},\begin{bmatrix} 1 & \\2 & 1 \end{bmatrix}\right \rangle$ has index 12, cf.
\cite[II.25]{de-la-harpe}); \item $GL_2(\mathbb Z[\zeta])$ acts on $E\times E = \mathbb C^2/(\mathbb Z[\zeta]^2)$ by matrix product and $\displaystyle \frac{GL_2(\mathbb Z[\zeta])}{\mathbb Z/3\mathbb Z}$ is the quotient by the subgroup generated by $\zeta.I_2$. Clearly, two distinct elements of $SL_2(\mathbb Z)$ (or even $GL_2(\mathbb Z)$) are never equal modulo $\langle \zeta.I_2\rangle$, so $SL_2(\mathbb Z)$ injects into $\displaystyle \frac{GL_2(\mathbb Z[\zeta])}{\mathbb Z/3\mathbb Z}$: this concludes the proof. \end{itemize} \end{example}
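As a quick sanity check of the last step (not a proof of freeness, which is Sanov's theorem), one can verify in Python that the two generators of $S$ satisfy no short relation: no nonempty reduced word of length at most 6 in the generators and their inverses equals the identity matrix.

```python
# A = [[1,2],[0,1]], B = [[1,0],[2,1]]: enumerate all reduced words of
# length <= 6 in A, B, A^-1, B^-1 and check that none equals the identity.
def mul(X, Y):
    return ((X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]),
            (X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]))

I = ((1, 0), (0, 1))
gens = {
    'a': ((1, 2), (0, 1)), 'A': ((1, -2), (0, 1)),   # A and A^-1
    'b': ((1, 0), (2, 1)), 'B': ((1, 0), (-2, 1)),   # B and B^-1
}
inverse = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

words = [('', I)]
trivial = 0
for _ in range(6):
    # extend each word by any letter that does not cancel the last one
    words = [(w + g, mul(M, gens[g]))
             for (w, M) in words for g in gens if not (w and inverse[w[-1]] == g)]
    trivial += sum(1 for (w, M) in words if M == I)

print(trivial)  # 0: no reduced word of length <= 6 is trivial
```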
\begin{example} {\bf (\cite[6.10]{dolgachev-zhang})} \label{exemple-dolgachev} As promised, we now describe the example of a rational surface for which the finiteness problem for real forms remains open. \\ Let $L_1,\dots,L_5$ be five lines in general linear position in $\mathbb P^2$. For $i,j\in[\![ 1;4]\!]$, we denote by $p_{ij}$ the intersection point of $L_i$ and $L_j$. Let us fix a cubic $C_3$ passing through the points $p_{ij}$ and intersecting $L_5$ at three distinct points $q_1$, $q_2$, $q_3$. Finally, let $a$ be another point of $L_5$.\\
We consider the blow-up $X$ of $\mathbb P^2$ at the 10 points $p_{ij}$, $q_k$ and $a$: it is a nodal Coble surface since $|-K_X|=\varnothing$ and $|-2K_X| = \{C_6:= R_1+R_2+R_3+R_4+2R_5\}$, where the $R_i$'s are the strict transforms of the $L_i$'s in $X$ (note that $X$ is nodal since $R_1$,\dots,$R_4$ are $(-2)$-curves).\\ In \cite{dolgachev-zhang}, it is claimed that $\mathrm{Aut\,} X$ has infinitely many orbits on the set of $(-1)$-curves of $X$ but, in a private communication, Dolgachev explained to me that there is a gap in the proof of this fact (more precisely, the elements of the group $G$ constructed in \textit{op. cit.} cannot be lifted to the double covering $S(A)$ of $X$ ramified along $R_1+\dots+R_4$). Note that if it were true, this would show that $X$ does not admit a divisor $\Delta$ such that $(X,\Delta)$ is a KLT Calabi-Yau pair. Indeed, if such a divisor existed, then the Cone Theorem would imply that $\mathrm{Aut\,} X$ has finitely many orbits on the extremal rays of the nef cone of $X$, and this would also be true for its dual cone, which is the cone of curves of $X$ (cf. \cite[4.1]{looijenga}): this is absurd because $(-1)$-curves form an $\mathrm{Aut\,} X$-invariant subset of the set of extremal rays of $\overline{NE}(X)$.\\ However, note that $\left(X,\dfrac{1}{2}C_6\right)$ is a log-canonical Calabi-Yau pair since $\dfrac{1}{2}C_6 = \dfrac{1}{2}(R_1+R_2+R_3+R_4)+R_5$ clearly has simple normal crossings, has coefficients $\leq 1$ and satisfies the condition $K_X+\dfrac{1}{2}C_6 \equiv 0$.\\
\end{example}
\section{Appendix} \label{appendix-section} In this appendix, we provide a detailed proof of Lemma \ref{ratcliffe-modified}, i.e. a detailed inspection of all the proofs of the results used by \cite{ratcliffe} in the proof of \cite[12.4.5]{ratcliffe} in order to replace $\mathcal H^n$ by a closed convex subset $N$. We recall here the statement of Lemma \ref{ratcliffe-modified}:
\\ \textbf{Lemma \ref{ratcliffe-modified}.} \textit{Let $\Gamma$ be a discrete subgroup of $\mathrm{Isom\,}(\mathcal H^n)$, $L(\Gamma)$ the set of limit points of $\Gamma$ in $\overline{\mathcal H^n}$, $C(\Gamma)$ the convex hull of $L(\Gamma)$ in $\overline{\mathcal H^n}$ and $N$ a $\Gamma$-invariant closed convex subset of $\mathcal H^n$.\\ If the action of $\Gamma$ on $N$ has a finitely sided polyhedral fundamental domain $P$, then there exists a finite union (maybe empty) $V_0$ of horocusp regions with disjoint closures such that $(P\cap C(\Gamma))\setminus V_0$ is compact.}
In what follows, all numbers like (12.4.2) refer to \cite{ratcliffe}. Moreover, when some notations are undefined, they are understood to be the same as in \cite{ratcliffe}, \emph{mutatis mutandis}. Finally, when some results cited in the diagram on page \pageref{diagramme-horoboules} are not cited in the text below, these are general results which apply to our case either without any change, or changing only $\mathcal H^n$ into $N$. \\
Some remarks are widely used below, so we gather them here: \begin{itemize} \item if $P_0$ is a fundamental polyhedron of the action of $\Gamma$ on $\mathcal H^n$, then $P=P_0\cap N$ is a fundamental polyhedron of the action of $\Gamma$ on $N$; \item by (6.6.10, 8.5.7), like $\mathcal H^n$, a closed convex subset $N$ of $\mathcal H^n$ is a proper, geodesically connected and geodesically complete metric space\footnote{A metric space is \textbf{proper} or \textbf{finitely compact} if every bounded closed subset of it is compact.}: indeed, $N$ is proper as a closed subspace of the proper metric space $\mathcal H^n$, $N$ is geodesically complete, as it is complete, and it is geodesically connected, since it is convex. From this fact, we can deduce that the action of $\Gamma$ on $N$ has a (locally finite) exact convex fundamental polyhedron, e.g. a Dirichlet polyhedron, by (5.3.5, 6.6.13) and (6.7.4 (2)), since the group is discrete (and hence acts properly discontinuously, cf. Lemma \ref{proper-action-discrete}) and since there is a point $a\in N$ whose stabilizer $\Gamma_a$ is trivial (by (6.6.12)); \item if $\Gamma$ is a discrete group of isometries of $\mathcal H^n$ (seen as the Poincaré half-space), then the stabilizer $\Gamma_{\infty}$ of the point at infinity induces a discrete subgroup of $\mathrm{Isom\,}(\mathbb R^{n-1}) = \mathrm{Isom\,}(\partial\mathcal H^n\setminus\{\infty\})$. By (5.4.6), there is a $\Gamma_{\infty}$-invariant affine subspace $Q$ of $\mathbb R^{n-1}$ of dimension $m\leq n-1$ and $\Gamma_{\infty}$ is a finite extension of $\mathbb Z^m$. By (7.5.2), $\Gamma_{\infty}$ is a crystallographic isometry group of $\mathbb R^m\simeq Q$, i.e. $Q/\Gamma_{\infty}$ is compact; \item for the other points, it is simply a matter of replacing $\mathcal H^n$ by $N$ and checking that everything remains true (sometimes by using the convexity and/or closedness of $N$ in $\mathcal H^n$). \end{itemize}
\textbf{(12.3.7)}: $\Gamma$ is a discrete subgroup of $\mathrm{Isom\,}(\mathcal H^n)$ so we can define "limit point", "bounded parabolic point"... with regard to its action on the whole space $\mathcal H^n$. Note that $a$ is a limit point if and only if $\exists (g_i)\in\Gamma^{\mathbb N}, \forall x\in\mathcal H^n, g_i(x)\xrightarrow[i\to +\infty]{} a$: in particular, if $x\in N$, then $\forall i, g_i(x)\in N$. The rest of the proof can be followed, except that we can check that the geodesic ray $R_i$ is contained in $N$.\\
\textbf{(12.4.3)} Firstly, note that (12.4.2) is not necessary for our purposes since $P$ is finitely sided. We can make the same reasoning with $N$ instead of $\mathcal H^n$: if $P$ is a fundamental polyhedron of $\Gamma$ acting on $N$, then $\{g(P)|\;g\in\Gamma\}$ is an exact tessellation of $N$ and $\{\nu g(P)|\;g\in\Gamma\} = \mathcal T$ is an exact tessellation of $\nu(N)\subseteq \mathbb R^{n-1}$. But $\displaystyle \bigcup_{g\in\Gamma} g(P) = N$, so $U\subseteq \nu(N)$. Since $U$ is an open and closed subset of $\mathbb R^{n-1}$ and $U\subseteq \nu(N)$, we see that $U$ is open and closed in the non-empty connected space $\nu(N)$, so that $U=\nu(N)$.\\
\textbf{(12.4.4)} The beginning of the proof remains valid: it shows that if $x\in\overline{P}\cap L(\Gamma)$, where $\overline{P}$ is the closure of $P$ in $\mathcal H^n$, then the stabilizer $\Gamma_x$ is infinite and elementary of parabolic type (cf. \cite[\S 5.5]{ratcliffe}). Of course, $\mathcal T$ is an exact tessellation of $\nu(N)$ instead of $\mathbb R^{n-1}$. If $c\in N$ is a cusp point of $\Gamma$, then $U(Q,r)\cap N\neq \varnothing$ because $U(Q,r)$ is a neighborhood of $c$: thus it suffices to replace $U(Q,r)$ by $N\cap U(Q,r)$ at the end of the proof to conclude.\\
\textbf{(12.4.5)} Firstly, we note that (12.4 Cor. 3) is a direct corollary of (12.3.7), (12.4.1) and (12.4.4) and that $\overline{P}$ is the closure of $P$ in $\mathcal H^n$. It suffices to replace: \begin{itemize} \setlength\itemsep{.25em} \item "$\Gamma$ is geometrically finite" by "the fundamental polyhedron of $\Gamma$ on $N$ is finitely-sided" (see (\S 12.4, Example 1));
\item in view of the statement of our Lemma \ref{ratcliffe-modified}, all the statements made in the proof of (12.4.5) concerning $\pi$, $V$, $M$ are not needed for our purposes, and all we need is $\displaystyle V_0 := \bigcup_{i=1}^m B_i$ \end{itemize} and we note that $K$ is a closed subset of $B^n$ included in the closed subset $N$ (since $P\subseteq N$), hence $K$ is a closed subset of $N$.
\setlength{\fboxsep}{1.5mm} \begin{landscape} \begin{center} \begin{figure}
\caption{We framed with bold lines the "initial" results, i.e. those whose proof does not require anything other than standard definitions and results (in topology, group theory, etc.), or those whose statement can be adapted to our case without examining their proof and the results it uses.}
\label{diagramme-horoboules}
\end{figure} \end{center} \end{landscape}
\noindent \textbf{Acknowledgements.} The author is grateful to Frédéric Mangolte for asking him this question, and also for his advice and help. He also wishes to thank Julie Déserti, Igor Dolgachev, Viatcheslav Kharlamov, Stéphane Lamy and Burt Totaro for useful comments, discussions or emails.
\end{document}
\begin{document}
\title{Poisson Reduction}
\author{Chiara Esposito}
\address{Department of Mathematics, Universitat Autonoma de Barcelona, 08193 Bellaterra. Spain} \maketitle
\begin{abstract} In this paper we develop a theory of reduction for classical systems with Poisson Lie group symmetries, using the notion of momentum map introduced by Lu. The local description of Poisson manifolds and Poisson Lie groups and the properties of Lu's momentum map allow us to define a Poisson reduced space. \end{abstract}
\section{Introduction}\label{intro} In this paper we prove a generalization of the Marsden-Weinstein reduction to the general case of an arbitrary Poisson Lie group action on a Poisson manifold. Reduction procedures are known in many different settings. In particular, a reduction theory is known in the case of Poisson Lie groups acting on symplectic manifolds \cite{Lu3} and in the case of Lie groups acting on Poisson manifolds \cite{RO}, \cite{MR}. An important generalization to the Dirac setting has been studied in \cite{BC}.
The theory of symplectic reduction plays a key role in classical mechanics. The phase space of a system of $n$ particles is described by a symplectic or, more generally, a Poisson manifold. Given a symmetry group of dimension $k$ acting on a mechanical system, the dimension of the phase space can be reduced by $2k$; Marsden-Weinstein reduction formalizes this feature. Let us briefly recall the notion of a Hamiltonian action in this setting. Given a Poisson manifold $M$ there are natural Hamiltonian vector fields $\{f, \cdot \}$ on $M$. Let $G$ be a Lie group acting on $M$ by $\Phi$; the action is Hamiltonian if the vector fields defined by the infinitesimal generators of $\Phi$ are Hamiltonian. More precisely, let $G$ be a Lie group acting on a Poisson manifold $(M, \pi)$. The action $\Phi:G\times M \to M$ is canonical if it preserves the Poisson structure $\pi$. Suppose that there exists a linear map $H: \mathfrak{g} \to C^{\infty}(M)$ such that the infinitesimal generator $\Phi_{X}$, for $X\in \mathfrak{g}$, of the canonical action is induced by $H$ via $$ \Phi_{X}= \{H_{X}, \cdot\}. $$ A canonical action induced by $H$ is said to be Hamiltonian if $H$ is a Lie algebra homomorphism. We can then define a map $\boldsymbol{\mu}: M \to \mathfrak{g}^*$, called the momentum map, by $H_{X}(m)=\langle \boldsymbol{\mu}(m), X\rangle$ for $m\in M$; it is equivariant precisely when the corresponding $H$ is a Lie algebra homomorphism. Given a Hamiltonian action, under certain assumptions, the reduced space is defined as $M//G:=\boldsymbol{\mu}^{-1}(u)/G_{u}$ and it has been proved that it is a Poisson manifold \cite{MsWe}.
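As a simple illustration of these classical notions (our example, not taken from the cited references), consider the group $G=(\mathbb R,+)$ acting on $M=\mathbb R^2$ by translations in $q$, with the sign convention $\{f,g\}=\partial_q f\,\partial_p g-\partial_p f\,\partial_q g$, so that $\{q,p\}=1$:

```latex
% Translations Phi_s(q,p) = (q+s, p); for X = 1 in g ~ R the
% infinitesimal generator is d/dq, which is Hamiltonian:
\begin{align*}
  \Phi_X &= \frac{\partial}{\partial q} = \{-p,\,\cdot\,\}, &
  H_X(q,p) &= -p, &
  \boldsymbol{\mu}(q,p) &= -p \in \mathfrak g^*\simeq\mathbb R.
\end{align*}
```

Since $\mathfrak g$ is abelian, $H$ is trivially a Lie algebra homomorphism, so the action is Hamiltonian and $\boldsymbol{\mu}$ (linear momentum, up to the chosen sign convention) is equivariant.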
In this paper we are interested in analyzing the case in which one has an extra structure on the Lie group, a Poisson structure making it a Poisson Lie group. Poisson Lie groups are very interesting objects in mathematical physics. They may be regarded as classical limits of quantum groups \cite{Dr1} and they have been studied as carrier spaces of dynamical systems \cite{LMS}. It is believed that actions of Poisson Lie groups on Poisson manifolds should be used to understand the ``hidden symmetries'' of certain integrable systems \cite{STS}. Moreover, the study of classical systems with Poisson Lie group symmetries may give information about the corresponding quantum group invariant system (an attempt can be found in \cite{me}, \cite{me1}).
The purpose of this paper is to prove that, given a Poisson manifold acted on by a Poisson Lie group, under certain conditions we can reduce this phase space to another Poisson manifold.
The paper is organized as follows. In Section \ref{sec_pg} we recall some basic elements of Poisson geometry: Poisson manifolds and their local description, Lie bialgebras and Poisson Lie groups. A nice review of these results can be found in \cite{V} and \cite{YK}. Section \ref{sec_mm} is devoted to Poisson actions and the associated momentum maps; we also discuss dressing actions and their properties. In Section \ref{sec: pr} we present the main result of this paper, the Poisson reduction, and we discuss an example.
\noindent{\bf Acknowledgments:} I would like to thank my advisor Ryszard Nest and Eva Miranda for many interesting discussions about Poisson reduction and its possible developments. I also wish to thank George M. Napolitano for his help and his useful suggestions and Rui L. Fernandes for his comments regarding Dirac reduction theory.
\section{Poisson manifolds, Poisson Lie groups and Lie bialgebras}\label{sec_pg}
In this section we introduce the notion of Poisson manifolds and their local description, and we give some background about Poisson Lie groups and Lie bialgebras which will be used in the paper. For more details on this subject, see \cite{Lu3}, \cite{Dr1}, \cite{YK}, \cite{V}, \cite{We1}.
\subsection{Poisson manifolds and symplectic foliation}\label{sec_1.1}
A Poisson structure on a smooth manifold $M$ is a Lie bracket $\{\cdot, \cdot\}$ on the space $C^{\infty}(M)$ of smooth functions on $M$ which satisfies the Leibniz rule. This bracket is called a Poisson bracket and a manifold $M$ equipped with such a bracket is called a Poisson manifold. A bivector field $\pi$ on $M$ such that the bracket $$ \{ f, g\}:= \langle \pi, df\wedge dg\rangle $$ is a Poisson bracket is called a Poisson tensor or Poisson bivector field. A Poisson tensor can be regarded as a bundle map $\pi^{\sharp}: T^*M\to TM$ defined by $$
\langle \alpha, \pi^{\sharp}(\beta)\rangle = \pi(\alpha,\beta) $$ for all 1-forms $\alpha$ and $\beta$.
\begin{definition} A mapping $\phi: (M_1,\pi_1)\rightarrow (M_2,\pi_2)$ between two Poisson manifolds is called a Poisson mapping if for all $f,g\in C^{\infty}(M_2)$ one has \begin{equation} \lbrace f\circ\phi, g\circ\phi\rbrace_1=\lbrace f,g\rbrace_2\circ \phi. \end{equation} \end{definition} The structure of a Poisson manifold is described by the splitting theorem of Alan Weinstein \cite{We1}, which shows that locally a Poisson manifold is a direct product of a symplectic manifold with another Poisson manifold whose Poisson tensor vanishes at a point.
\begin{theorem}[Weinstein]\label{thm: split} On a Poisson manifold $(M,\pi)$, any point $m\in M$ has a coordinate neighborhood with coordinates $(q_1,\dots,q_k,p_1,\dots,p_k,\allowbreak y_1,\dots,y_l)$ centered at $m$, such that \begin{equation}\label{eq: splitp} \pi=\sum_i \frac{\partial}{\partial q_i}\wedge\frac{\partial}{\partial p_i}+\frac{1}{2}\sum_{i,j}\phi_{ij}(y) \frac{\partial}{\partial y_i}\wedge\frac{\partial}{\partial y_j}\qquad \phi_{ij}(0)=0. \end{equation} The rank of $\pi$ at $m$ is $2k$. Since $\phi$ depends only on the $y_i$s, this theorem gives a decomposition of the neighborhood of $m$ as a product of two Poisson manifolds: one with rank $2k$, and the other with rank 0 at $m$. \end{theorem}
The term
\begin{equation} \frac{1}{2}\sum_{i,j}\phi_{ij}(y)\frac{\partial}{\partial y_i}\wedge \frac{\partial}{\partial y_j} \end{equation} is called the transverse Poisson structure, and the equations $y_{i}=0$ determine the symplectic leaf through $m$.
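A standard illustration of the splitting theorem and the transverse structure (not taken from the text) is the Lie--Poisson structure on $\mathbb R^3\simeq\mathfrak{so}(3)^*$:

```latex
% Lie-Poisson bracket {x_i, x_j} = epsilon_{ijk} x_k, i.e. the bivector
\begin{equation*}
\pi \;=\; x_3\,\partial_{x_1}\wedge\partial_{x_2}
      \;+\; x_1\,\partial_{x_2}\wedge\partial_{x_3}
      \;+\; x_2\,\partial_{x_3}\wedge\partial_{x_1}.
\end{equation*}
```

Here the symplectic leaves are the spheres $x_1^2+x_2^2+x_3^2=r^2$ with $r>0$, where $\pi$ has rank $2$, together with the origin, where the rank is $0$; at a point $m\neq 0$ the splitting has $k=1$, a single transverse coordinate $y$ (the radius), and the transverse Poisson structure vanishes identically, since every bivector field in one variable is zero.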
\subsection{Lie bialgebras and Poisson Lie groups} \label{sec:lie bialgebras}
\begin{definition} A Poisson Lie group $(G,\pi_G)$ is a Lie group equipped with a multiplicative Poisson structure $\pi_G$, i.e. such that the multiplication map $G\times G \to G$ is a Poisson map. \end{definition}
Let $G$ be a Lie group with Lie algebra $\mathfrak{g}$. The linearization $\delta:= d_{e}\pi_{G}: \mathfrak{g}\to \mathfrak{g}\wedge \mathfrak{g}$ of $\pi_{G}$ at $e$ defines a Lie algebra structure on the dual $\mathfrak{g}^*$ of $\mathfrak{g}$ and, for this reason, it is called the cobracket. The pair $(\mathfrak{g},\mathfrak{g}^*)$ is called a Lie bialgebra. The relation between Poisson Lie groups and Lie bialgebras was established by Drinfeld \cite{Dr1}:
\begin{theorem}\label{thm: dr} If $(G,\pi_G)$ is a Poisson Lie group, then the linearization of $\pi_G$ at $e$ defines a Lie algebra structure on $\mathfrak{g}^*$ such that $(\mathfrak{g},\mathfrak{g}^*)$ form a Lie bialgebra over $\mathfrak{g}$, called the tangent Lie bialgebra to $(G,\pi_G)$. Conversely, if $G$ is connected and simply connected, then every Lie bialgebra $(\mathfrak{g},\mathfrak{g}^*)$ over $\mathfrak{g}$ defines a unique multiplicative Poisson structure $\pi_G$ on $G$ such that $(\mathfrak{g},\mathfrak{g}^*)$ is the tangent Lie bialgebra to the Poisson Lie group $(G,\pi_G)$. \end{theorem}
From this theorem it follows that there is a unique connected and simply connected Poisson Lie group $(G^*,\pi_{G^*})$, called the dual of $(G,\pi_G)$, associated to the Lie bialgebra $(\mathfrak{g}^*,\delta)$. If $G$ is connected and simply connected, then the dual of $G^*$ is $G$.
\begin{example}[$\mathfrak{g}=ax+b$]\label{ex: 1}
Consider the Lie algebra $\mathfrak{g}$ spanned by $X$ and $Y$ with commutator \begin{equation} [X,Y]=Y \end{equation} and cobracket given by \begin{equation} \delta(X)=0 \quad \delta(Y)= X\wedge Y. \end{equation} The Lie bracket on $\mathfrak{g}^*$ is given by $$ [X^*,Y^*]=Y^*. $$ A matrix representation of $\mathfrak{g}$ inside $\mathfrak{gl}(2,\mathbb{R})$ is given by $$ X = \left(\begin{matrix} 1 & 0 \\ 0 & 0 \end{matrix}\right) \quad Y = \left(\begin{matrix} 0 & 1 \\ 0 & 0 \end{matrix}\right) $$ and $$ X^* = \left(\begin{matrix} 0 & 0 \\ 0 & 1 \end{matrix}\right) \quad Y^* = \left(\begin{matrix} 0 & 0 \\ 1 & 0 \end{matrix}\right) $$ with the metric $\gamma(a,b)= \operatorname{tr}(aJbJ)$ and $J=\left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right)$.
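As a quick consistency check (ours, not in the original), the two commutation relations can be verified directly on these matrices:

```latex
\begin{align*}
[X,Y] &= XY - YX
 = \begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}
 - \begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix}
 = Y,\\
[X^{*},Y^{*}] &= X^{*}Y^{*} - Y^{*}X^{*}
 = \begin{pmatrix}0 & 0\\ 1 & 0\end{pmatrix}
 - \begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix}
 = Y^{*},
\end{align*}
```

so the representation realizes the brackets of both $\mathfrak g$ and $\mathfrak g^*$ simultaneously inside $\mathfrak{gl}(2,\mathbb R)$.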
The corresponding Poisson Lie group $G$ and its dual $G^*$ are the following subgroups of $GL(2,\mathbb{R})$, consisting of matrices with positive determinant: \begin{equation} G=\left\lbrace \left(\begin{matrix} 1 & 0 \\ \xi & \eta \end{matrix}\right)\; :\eta>0\right\rbrace \qquad G^*=\left\lbrace \left(\begin{matrix} s & t \\ 0 & 1 \end{matrix}\right)\; :s>0\right\rbrace \end{equation}
\end{example}
\section{Poisson actions and Momentum maps}\label{sec_mm}
In this section we first introduce the concept of a Poisson action of a Poisson Lie group on a Poisson manifold, which generalizes the canonical action of a Lie group on a symplectic manifold. We define momentum maps associated to such actions and finally we consider the particular case of a Poisson Lie group $G$ acting on its dual $G^*$ by dressing transformations. This allows us to study the symplectic leaves of $G^*$, which are exactly the orbits of the dressing action. These topics can be found e.g. in \cite{Lu3}, \cite{Lu1} and \cite{STS}.
From now on we assume that $G$ is connected and simply connected. \begin{definition}
The action $\Phi:G\times M\rightarrow M$ of a Poisson Lie group $(G,\pi_G)$ on a Poisson manifold $(M,\pi)$ is called a Poisson action if $\Phi$ is a Poisson map, where $G\times M$ is endowed with the product Poisson structure $\pi_G\oplus\pi$. \end{definition} This definition generalizes the notion of canonical action; indeed, if $G$ carries the trivial Poisson structure $\pi_G=0$, the action $\Phi$ is Poisson if and only if it preserves $\pi$, i.e. if it is canonical. In general, the structure $\pi$ is not invariant with respect to the action $\Phi$. The easiest examples of Poisson actions are given by the left and right actions of $G$ on itself.
For an action $\Phi: G\times M \to M$ we use $\Phi: \mathfrak{g} \to \mathrm{Vect}\, M: X \mapsto \Phi_{X}$ to denote the Lie algebra anti-homomorphism which defines the infinitesimal generators of this action. The proof of the following theorem can be found in \cite{LuWe1}. \begin{theorem} The action $\Phi: G\times M\rightarrow M$ is a Poisson action if and only if \begin{equation}\label{eq: pa} L_{\Phi_{X}}(\pi)=(\Phi\wedge\Phi)\delta(X) \end{equation}
for any $X\in\mathfrak{g}$, where $L$ denotes the Lie derivative and $\delta$ is the derivative of $\pi_{G}$ at $e$. \end{theorem}
Let $\Phi:G\times M \to M$ be a Poisson action of $(G, \pi_{G})$ on $(M,\pi)$. Let $G^*$ be the dual Poisson Lie group of $G$ and let $\Phi_{X}$ be the vector field on $M$ which generates the action $\Phi$. In this formalism the definition of momentum map reads (Lu, \cite{Lu3}, \cite{Lu1}):
\begin{definition}\label{def: mm} A momentum map for the Poisson action $\Phi:G\times M\rightarrow M$ is a map $\boldsymbol{\mu}: M\rightarrow G^*$ such that \begin{equation}\label{eq: mmp} \Phi_{X}=\pi^{\sharp}(\boldsymbol{\mu}^*(\theta_{X})) \end{equation} where $\theta_{X}$ is the left invariant 1-form on $G^*$ defined by the element $X\in\mathfrak{g}=(T_eG^*)^*$ and $\boldsymbol{\mu}^*$ is the cotangent lift $T^* G^*\rightarrow T^*M$. \end{definition} In other words, the momentum map generates the vector field $\Phi_{X}$ via the construction $$ X\in \mathfrak{g} \to \theta_{X}\in T^*G^* \to \alpha_{X}= \boldsymbol{\mu}^*(\theta_{X})\in T^*M \to \pi^{\sharp}(\alpha_{X})\in TM. $$ It is important to remark that Noether's theorem still holds in this general context. \begin{theorem} Let $\Phi:G\times M \to M$ be a Poisson action with momentum map $\boldsymbol{\mu}: M\rightarrow G^*$. If $H\in C^{\infty}(M)$ is $G$-invariant, then $\boldsymbol{\mu}$ is an integral of the Hamiltonian vector field associated to $H$. \end{theorem}
It is important to point out that in this setting the vector field $\Phi_{X}$ is not Hamiltonian, unless the Poisson structure on $G$ is trivial. In this case $G^*=\mathfrak{g}^*$, the differential 1-form $\theta_{X}$ is the constant 1-form $X$ on $\mathfrak{g}^*$, and \begin{equation} \boldsymbol{\mu}^*(\theta_{X})=d(H_{X}),\quad\text{where}\quad H_{X}(m)=\langle\boldsymbol{\mu}(m),X \rangle. \end{equation} This implies that the momentum map is the canonical one and \begin{equation} \Phi_{X}=\pi^{\sharp}(dH_{X})=\{H_{X}, \cdot\}. \end{equation} In other words, $\Phi_{X}$ is the Hamiltonian vector field with Hamiltonian $H_{X}\in C^{\infty}(M)$. We observe that, when $\pi_G$ is not trivial, $\theta_{X}$ is a Maurer-Cartan form, hence $\boldsymbol{\mu}^*(\theta_{X})$ cannot be written as the differential of a Hamiltonian function. In the following we give an example of the infinitesimal generator in this general case.
\subsection{Dressing Transformations}\label{sec: dressing}
One of the most important examples of Poisson actions is the dressing action of $G$ on $G^*$. The name ``dressing'' comes from the theory of integrable systems and was introduced in this context in \cite{STS}. Interesting examples can be found in \cite{AM}. We remark that, given a Poisson Lie group $(G, \pi_{G})$, the left (right) invariant 1-forms on $G^*$ form a Lie algebra with respect to the bracket: $$ [\alpha,\beta] = L_{\pi^{\sharp}(\alpha)}\beta-L_{\pi^{\sharp}(\beta)}\alpha - d(\pi(\alpha, \beta)). $$
For $X\in\mathfrak{g}$, let $\theta_{X}$ be the left invariant 1-form on $G^*$ with value $X$ at $e$. Let us define the vector field on $G^*$
\begin{equation}\label{eq: idr}
l(X)=\pi_{G^*}^{\sharp}(\theta_{X}). \end{equation} The map $l: \mathfrak{g}\to \mathrm{Vect}\, G^*: X\mapsto l(X)$ is a Lie algebra anti-homomorphism. We call $l$ the left infinitesimal dressing action of $\mathfrak{g}$ on $G^*$; its linearization at $e$ is the coadjoint action of $\mathfrak{g}$ on $\mathfrak{g}^*$. Similarly we can define the right infinitesimal dressing action.
Let $l(X)$ (resp. $r(X)$) be a left (resp. right) dressing vector field on $G^*$. If all the dressing vector fields are complete, we can integrate the $\mathfrak{g}$-action into an action of $G$ on $G^*$, called the dressing action; its elements are called dressing transformations. \begin{definition} A multiplicative Poisson tensor $\pi_G$ on $G$ is complete if each left (equivalently, right) dressing vector field is complete on $G$. \end{definition}
It follows from the definition of the dressing action (a proof can be found in \cite{STS}) that the orbits of the right or left dressing action of $G^*$ (resp. $G$) are the symplectic leaves of $G$ (resp. $G^*$).
It can be proved (see \cite{Lu3}) that if $\pi_{G}$ is complete, both left and right dressing actions are Poisson actions with momentum map given by the identity.
Assume that $G$ is a complete Poisson Lie group. We denote the left (resp. right) dressing action of $G$ on its dual $G^*$ by $g\mapsto l_g$ (resp. $g\mapsto r_g$).
\begin{definition} A momentum map $\boldsymbol{\mu}:M\rightarrow G^*$ for a left (resp. right) Poisson action $\Phi$ is called $G$-equivariant if it is equivariant with respect to the left (resp. right) dressing action of $G$ on $G^*$, that is, $\boldsymbol{\mu}\circ \Phi_g=l_g\circ \boldsymbol{\mu}$ (resp. $\boldsymbol{\mu}\circ \Phi_g=r_g\circ \boldsymbol{\mu}$). \end{definition} It is important to remark that a momentum map is $G$-equivariant if and only if it is a Poisson map, i.e. $\boldsymbol{\mu}_*\pi=\pi_{G^*}$. \begin{definition}
An action $\Phi: G\times M \to M$ of a Poisson Lie group $(G, \pi_{G})$ on a Poisson manifold $(M, \pi)$ is said to be Hamiltonian if it is a Poisson action generated by an equivariant momentum map. \end{definition}
\section{Poisson Reduction}\label{sec: pr} \label{sec: poisson reduction}
In this section we present the main result of this paper. We show that, given a Hamiltonian action $\Phi$, as defined above, we can define a reduced manifold in terms of the momentum map and prove that it is a Poisson manifold. The approach used is a generalization of the orbit reduction \cite{Ml} in symplectic geometry. Recall that, under certain conditions, the orbit space of $\Phi$ is a smooth manifold and carries a Poisson structure. First, we give an alternative proof of this claim. Then, we consider a generic orbit $\mathcal{O}_{u}$ of the dressing action of $G$ on $G^*$, for $u\in G^*$, and we prove that the set $\boldsymbol{\mu}^{-1}(\mathcal{O}_{u})/G$ is a regular quotient manifold with Poisson structure induced by the Poisson structure on $M$. Similarly to the symplectic case, this reduced space is isomorphic to the space $\boldsymbol{\mu}^{-1}(u)/G_{u}$, which will be regarded as the Poisson reduced space.
\subsection{Poisson structure on $M/G$}
Consider a Hamiltonian action of a connected and simply connected Poisson Lie group $(G,\pi_{G})$ on a Poisson manifold $(M,\pi)$. It is known that, if the action is proper and free, the orbit space $M/G$ is a smooth manifold and carries a Poisson structure such that the natural projection $M \to M/G$ is a Poisson map (a proof of this result can be found in \cite{STS}). In this section we give an alternative proof of this result, by introducing an explicit formula for the infinitesimal generator of the Hamiltonian action in terms of local coordinates.
As discussed in the previous section, a Hamiltonian action is a Poisson action induced by an equivariant momentum map $\boldsymbol{\mu}: M \to G^*$ via formula (\ref{eq: mmp}). In other words, the map $$ \alpha: \mathfrak{g} \to \Omega^1(M): X \mapsto \alpha_{X}=\boldsymbol{\mu}^*(\theta_{X}) $$ is a Lie algebra homomorphism such that $$ \Phi_{X}=\pi^{\sharp}(\alpha_{X}). $$ The dual map of $\alpha$ defines a $\mathfrak{g}^*$-valued 1-form on $M$, still denoted by $\alpha$, satisfying the Maurer-Cartan equation (as proved in \cite{Lu3}) $$ d\alpha+\frac{1}{2}[\alpha,\alpha]_{\mathfrak{g}^*}=0. $$ In particular, $$ \{\alpha_{X}: X\in\mathfrak{g}\} $$ defines a foliation $\mathcal{F}$ on $M$.
\begin{lemma}\label{thm: m/g} The space of $G$-invariant functions on $M$ is closed under Poisson bracket. Hence $\pi$ defines a Poisson structure on $M/G$. \end{lemma}
\begin{proof} Let $H_{i}$, $i=1, \dots, n$, be local coordinates on $M$ such that $$ \mathcal{F}= Ker\{dH_{1}, \dots, dH_{n}\}. $$ Then \begin{equation}\label{eq: al} \alpha_{X}=\sum_{i}c_{i}(X)dH_i \end{equation} and \begin{equation}\label{eq: xis} \Phi_{X}[f]=\pi^{\sharp}(\alpha_{X})[f]=\sum_{i}c_{i}(X)\lbrace H_i,f\rbrace_{M}. \end{equation} This implies that a function $f\in C^{\infty}(M)$ is $G$-invariant ($\Phi_{X}[f]=0$ for all $X$) if and only if $\lbrace H_i,f\rbrace=0$ for any $i$. If $f,g$ are $G$-invariant functions on $M$, we have $\lbrace H_i,f\rbrace=\lbrace H_i,g\rbrace=0$ for any $i$. Then, using the Jacobi identity we get
$\lbrace H_i,\lbrace f,g\rbrace\rbrace=0$. Since $G$ is connected, the result follows. \end{proof}
\subsection{Poisson reduced space}
Assume that $G$ is connected, simply connected and complete. In order to define a reduced space and to prove that it is a Poisson manifold, we consider a generic orbit $\mathcal{O}_u$ of the dressing action of $G$ on $G^*$ passing through $u\in G^*$. First, we prove the following:
\begin{lemma} Let $\Phi:G\times M \to M$ be a free and Hamiltonian action of a compact Poisson Lie group $(G,\pi_{G})$ on a Poisson manifold $(M,\pi)$. Then: \begin{itemize}
\item[(i)] $\mathcal{O}_u$ is closed and the Poisson structure $\pi_{G^*}$ does not depend on the transversal coordinates on $\mathcal{O}_u$.
\item[(ii)] $\boldsymbol{\mu}^{-1}(\mathcal{O}_u)/G$ is a smooth manifold. \end{itemize} \end{lemma}
\begin{proof}
\begin{itemize}
\item[(i)] If $G$ is compact, any $G$-action is automatically proper. This implies that, given $u\in G^*$, the generic orbit $\mathcal{O}_u$ of the dressing action is closed. From Section \ref{sec: dressing} we know that $\mathcal{O}_u$ is the symplectic leaf through $u$. Using the local description of Poisson manifolds introduced in Theorem \ref{thm: split}, it is evident that $\pi_{G^*}$ restricted to $\mathcal{O}_u$ does not depend on the transversal coordinates $y_{i}$.
\item[(ii)] If the action $\Phi$ is free, the momentum map $\boldsymbol{\mu}:M \to G^*$ is a submersion onto some open subset of $G^*$. This implies that $\boldsymbol{\mu}^{-1}(u)$ is a closed submanifold of $M$. As $\boldsymbol{\mu}$ is equivariant, we have $G\cdot \boldsymbol{\mu}^{-1}(u)= \boldsymbol{\mu}^{-1}(\mathcal{O}_u)$, which is $G$-invariant; it is also closed, since $\mathcal{O}_u$ is closed by (i) and $\boldsymbol{\mu}$ is continuous. Free and proper actions of $G$ on $M$ restrict to free and proper $G$-actions on $G$-invariant submanifolds. In particular, the action of $G$ on $\boldsymbol{\mu}^{-1}(\mathcal{O}_u)$ is proper and free, so we can conclude that the orbit space $\boldsymbol{\mu}^{-1}(\mathcal{O}_u)/G$ is a smooth manifold. \end{itemize}
\end{proof}
We aim to prove that the manifold $N/G:=\boldsymbol{\mu}^{-1}(\mathcal{O}_u)/G$ carries a Poisson structure. In the previous Lemma we proved that $\pi_{G^{*}}$ restricted to $\mathcal{O}_u$ does not depend on the transversal coordinates $y_i$; if $x_{i}$ are local coordinates along $N=\boldsymbol{\mu}^{-1}(\mathcal{O}_u)$ and the $H_{i}$ are the pullbacks of the transversal coordinates $y_{i}$, defined by \begin{equation} H_{i}:= y_{i}\circ \boldsymbol{\mu}, \end{equation} we can deduce that the Poisson structure $\pi$ on $M$ involves derivatives in the $H_{i}$ only in the combination $$ \partial_{x_i}\wedge\partial_{H_i}. $$ This holds because the differential $d\boldsymbol{\mu}$ induces a bijection between $TM\vert_N/TN$ and $TG^{*}/T\mathcal{O}_u$. Moreover, since $\{y_{i},y_{j}\}$ vanishes on the orbit $\mathcal{O}_u$, the bracket $\{H_{i},H_{j}\}$ vanishes on the preimage $N$, and the $dH_{i}$ are in the span of $\{\alpha_{X}:X\in\mathfrak{g}\}$.
Now we introduce the ideal $\mathcal{I}$ of functions vanishing on $N$, which is locally generated by the $H_i$, and prove some of its properties. \begin{lemma} Let $\mathcal{I}=\{f\in C^{\infty}(M): f\vert_{N}=0\}$. \begin{itemize}
\item[(i)] $\mathcal{I}$ is defined in an open $G$-invariant neighborhood $U$ of $N$.
\item [(ii)] $\mathcal{I}$ is closed under Poisson bracket. \end{itemize} \end{lemma}
\begin{proof} \begin{itemize}
\item[(i)] The coordinates $H_i$ are only locally defined, but we can show that $\mathcal{I}$ is globally defined. Choosing a different neighborhood on the orbit in $G^{*}$, we get transversal coordinates $y_i^{\prime}$ whose pullbacks to $M$ are $H_i^{\prime}=y_i^{\prime}\circ \boldsymbol{\mu}$. The coordinates $H_i^{\prime}$ are defined in a different open neighborhood $V$ of $N$, but the ideal generated by the $H_i$ coincides with the ideal $\mathcal{I}^{\prime}$ generated by the $H_i^{\prime}$ on the intersection of $U$ and $V$; hence $\mathcal{I}$ is globally defined.
\item [(ii)] Since $\boldsymbol{\mu}$ is a Poisson map we have: $$
\{ H_i,H_j\}_{M}=\{ y_i\circ \boldsymbol{\mu},y_j\circ \boldsymbol{\mu}\}_{M}=\{ y_i,y_j\}_{G^*}\circ \boldsymbol{\mu}. $$ Hence the ideal $\mathcal{I}$ is closed under Poisson brackets. \end{itemize} \end{proof}
Motivated by this Lemma we use the following identification $$ C^{\infty}(N/G)\simeq(C^{\infty}(U)/\mathcal{I})^G. $$
\begin{lemma}\label{lem: id2}
Suppose that $N/G$ is an embedded submanifold of the smooth manifold $M/G$, then \begin{equation}
(C^{\infty}(U)/\mathcal{I})^G \simeq(C^{\infty}(U)^G + \mathcal{I})/\mathcal{I} \end{equation} \end{lemma}
\begin{proof}
Let $f$ be a smooth function on $U$ satisfying $[f]\in (C^{\infty}(U)/\mathcal{I})^G$. As the equivalence class $[f]$ is $G$-invariant, we have \begin{equation} f(G\cdot m)=f(m)+i(m), \end{equation} where $i\in \mathcal{I}$ and $G\cdot m$ denotes a generic orbit of the Hamiltonian action of $G$ on $M$. It is clear that $f\vert_N$ is $G$-invariant and hence defines a smooth function $\bar{f}\in C^{\infty}(N/G)$. Since $N/G$ is a $k$-dimensional embedded submanifold of the $n$-dimensional smooth manifold $M/G$, the inclusion map $\iota: N/G\rightarrow M/G$ has the local coordinate representation: \begin{equation}
(x_1,\dots,x_k)\mapsto (x_1,\dots,x_k,c_{k+1},\dots,c_n) \end{equation} where the $c_i$ are constants. Hence we can extend $\bar{f}$ to a smooth function $\phi$ on $M/G$ by setting $\phi(x_1,\dots,x_n):=\bar{f}(x_1,\dots,x_k)$. The pullback $\tilde{f}$ of $\phi$ by $\text{pr}:M\rightarrow M/G$ is $G$-invariant and satisfies \begin{equation}
(\tilde{f}-f)\vert_N=0, \end{equation} hence $\tilde{f}-f\in \mathcal{I}$. \end{proof}
Using these results we can prove the following:
\begin{theorem}\label{thm: pred} Let $\Phi:G\times M\rightarrow M$ be a free Hamiltonian action of a compact Poisson Lie group $(G,\pi_G)$ on a Poisson manifold $(M,\pi)$ with momentum map $\boldsymbol{\mu}:M\rightarrow G^*$. The orbit space $N/G$ has a Poisson structure induced by $\pi$. \end{theorem}
\begin{proof} First we prove that the Poisson bracket of $M$ induces a well defined Poisson bracket on $(C^{\infty}(U)^G+\mathcal{I})/\mathcal{I}$. In fact, for any $f\in C^{\infty}(U)^{G}$ and $i,j\in \mathcal{I}$, the Poisson bracket $\{ f+i,j\}$ still belongs to the ideal $\mathcal{I}$. Since the ideal $\mathcal{I}$ is closed under Poisson brackets, $\{ i,j\}$ belongs to $\mathcal{I}$. The function $j$, by the definition of the ideal $\mathcal{I}$, can be written as a linear combination of the $H_i$, so, modulo $\mathcal{I}$, $\{ f,j\}=\sum_i a_i\{ f,H_i\}$. Since $f$ is $G$-invariant, $\{ f,H_i\}=0$ by the proof of Lemma \ref{thm: m/g}, hence $\{ f+i,j\}\in \mathcal{I}$ as stated. Finally, using the isomorphism proved in Lemma \ref{lem: id2} and the identification $ C^{\infty}(N/G)\simeq(C^{\infty}(U)/\mathcal{I})^G$, the claim is proved. \end{proof}
Finally, we observe that there is a natural isomorphism \begin{equation}
\boldsymbol{\mu}^{-1}(u)/G_{u}\simeq \boldsymbol{\mu}^{-1}(\mathcal{O}_u)/G. \end{equation} We refer to $\boldsymbol{\mu}^{-1}(u)/G_{u}$ as the Poisson reduced space.
\section{An example} \label{sec: ex} In this section we discuss a concrete example of Poisson reduction. Consider the Lie bialgebra $\mathfrak{g}=ax+b$ discussed in Example \ref{ex: 1}. The Poisson tensor on the dual Poisson Lie group $G^*$ is given, in the coordinates $(s,t)$ introduced in the matrix representation, by \begin{equation} \pi_{G^*}=st\partial_s\wedge\partial_t. \end{equation} It is clear that $(s,t)$ are global coordinates on $G^*$. First, we need to study the orbits of the dressing action. Recall that the dressing orbits $\mathcal{O}_u$ through a point $u\in G^*$ are the same as the symplectic leaves, hence they are separated by the line $t=0$. The symplectic foliation of the manifold $G^*$ in this case is given by two open orbits, determined by the conditions $t>0$ and $t<0$ respectively, and by the zero-dimensional orbits $(s,0)$ with $s\in\mathbb{R}^+$ on the line $t=0$.
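The leaf structure can also be read off from the Hamiltonian vector fields of the coordinate functions (a short check of ours): since $\{s,t\}=st$,

```latex
\begin{equation*}
X_s = \{s,\,\cdot\,\} = st\,\partial_t,
\qquad
X_t = \{t,\,\cdot\,\} = -st\,\partial_s.
\end{equation*}
```

These fields span a two-dimensional distribution wherever $t\neq 0$ (the two open leaves) and vanish identically on $\{t=0\}$, so each point $(s,0)$ is a zero-dimensional leaf.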
Consider a Hamiltonian action $\Phi:G\times M \to M$ of $G$ on a generic Poisson manifold $M$ induced by the equivariant momentum map $\boldsymbol{\mu}:M\rightarrow G^*$. Its pullback \begin{equation} \boldsymbol{\mu}^*: C^{\infty}(G^*)\longrightarrow C^{\infty}(M) \end{equation} maps the coordinates $s$ and $t$ on $G^*$ to $$ x(u)=s(\boldsymbol{\mu}(u)) \qquad y(u)=t(\boldsymbol{\mu}(u)). $$ It is important to underline that we have no information on the dimension of $M$, so $x$ and $y$ are just two of the possible coordinates. Nevertheless, since $\boldsymbol{\mu}$ is a Poisson map, we have \begin{equation} \lbrace x,y\rbrace=xy \end{equation} on $M$. The infinitesimal generators of the action $\Phi$ can be written in terms of these coordinates $(x,y)$ as \begin{equation} \Phi(X)=x\{y,\cdot \}\quad \Phi(Y)=x\{x^{-1},\cdot \}. \end{equation}
In the following, we discuss the Poisson reduction case by case, by considering the different dressing orbits studied above.
\paragraph{Case 1: $(t>0)$} Consider the dressing orbit $\mathcal{O}_u$ determined by the condition $t>0$. Since $s$ and $t$ are both positive, we can put \begin{equation}
x=e^p,\quad y=e^q. \end{equation} Since $\lbrace x,y\rbrace=xy$ we have \begin{equation}
\lbrace p,q\rbrace=1. \end{equation} For this reason the preimage of the dressing orbit can be split as $N=\mathbb{R}^2\times M_1$ and $C^{\infty}(N)$ is given explicitly by the set of functions generated by $y^{-1}$. The infinitesimal generators are given by \begin{equation} \Phi(X)=e^p\{e^q,\cdot \}\qquad \Phi(Y)=e^p\{e^{-p},\cdot \} \end{equation} which is the action of $G$ on the plane. Hence the Poisson reduction in this case is given by \begin{equation}
(C^{\infty}(M)[y^{-1}])^G. \end{equation}
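The logarithmic change of variables used above, taking the bracket $\{x,y\}=xy$ to the canonical bracket $\{p,q\}=1$, can be checked numerically. The sketch below is illustrative only and not part of the original argument; the helper name `bracket` is ours. It evaluates $\{f,g\}=st\,(f_s g_t-f_t g_s)$ with central finite differences:

```python
import math

def bracket(f, g, s, t, h=1e-6):
    """Poisson bracket {f, g} = s*t*(f_s*g_t - f_t*g_s) at the point (s, t),
    with the partial derivatives approximated by central differences."""
    fs = (f(s + h, t) - f(s - h, t)) / (2 * h)
    ft = (f(s, t + h) - f(s, t - h)) / (2 * h)
    gs = (g(s + h, t) - g(s - h, t)) / (2 * h)
    gt = (g(s, t + h) - g(s, t - h)) / (2 * h)
    return s * t * (fs * gt - ft * gs)

s0, t0 = 2.0, 3.0
# {s, t} = s*t: the bracket of the coordinate functions themselves
assert abs(bracket(lambda s, t: s, lambda s, t: t, s0, t0) - s0 * t0) < 1e-4
# {p, q} = {log s, log t} = 1: the bracket is canonical in (p, q)
assert abs(bracket(lambda s, t: math.log(s), lambda s, t: math.log(t), s0, t0) - 1) < 1e-4
```

The same check passes at any point with $s,t>0$, reflecting that the computation $\{\log s,\log t\}=st\cdot\frac{1}{s}\cdot\frac{1}{t}=1$ is pointwise.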
\paragraph{Case 2: $(t<0)$} This case is similar, the only difference being that $y=-e^q$.
\paragraph{Case 3: $(t=0)$} The orbit $\mathcal{O}_u$ is a fixed point on the line $t=0$; we choose the point $s=1$. Consider the ideal $\mathcal{I}=\langle x-1,y\rangle$ of functions vanishing on $N$. It is easy to check that it is $G$-invariant, hence the Poisson reduction in this case is simply given by \begin{equation}
(C^{\infty}(M)/\mathcal{I})^G. \end{equation}
\subsection{Questions and future directions}
The theory of Poisson reduction can be developed further, as it has been obtained here under the assumption that the orbit space $M/G$ is a smooth manifold. The result could be proved under weaker hypotheses, for instance by requiring only that $M/G$ is an orbifold.
As stated in the introduction, the idea of momentum map and Poisson reduction can also be used for the study of symmetries in quantum mechanics. In particular, the approach of deformation quantization would provide a relation between classical and quantum symmetries. A notion of quantum momentum map has been defined in \cite{me,me1}, and it can be used to define the quantization of the Poisson reduction.
At the classical level, Poisson reduction could be generalized to actions of Dirac Lie groups \cite{MJ} on Dirac manifolds \cite{Co}. Finally, a possible development of this theory is its integration to symplectic groupoids by means of the theories on the integrability of Poisson brackets \cite{CF} and Poisson Lie group actions \cite{FP}.
\end{document}
\begin{document}
\maketitle \begin{abstract} We prove Gebhard and Sagan's $(e)$-positivity of the line graphs of tadpoles in noncommuting variables. This implies the $e$-positivity of these line graphs. We then extend this $(e)$-positivity result to that of certain cycle-chord graphs, and derive the bivariate generating function of all cycle-chord graphs. \end{abstract}
\section{Introduction}
\citet{Sta95} introduced the {\it chromatic symmetric function} of a simple graph $G$ as \[ X_G =\sum_{\kappa}\prod_{v\in V(G)}x_{\kappa(v)}, \] where $\mathbf{x}=(x_1, x_2, \ldots)$ is a countable set of indeterminates and the sum is over all proper colorings~$\kappa$ of $G$. The chromatic symmetric function is a generalization of \citeauthor{Bir12}'s chromatic polynomial $\chi_G(k)$ since \[ X_G(1^k,0,0,\dots)=\chi_G(k), \] see \citet{Bir12}.
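The specialization $X_G(1^k,0,0,\dots)=\chi_G(k)$ can be illustrated by brute force: setting $x_1=\dots=x_k=1$ turns the sum over proper colorings into a count of the colorings that use only the first $k$ colors. A minimal sketch, not part of the paper's argument (the helper name `proper_colorings` is ours), checked against $\chi_{C_3}(k)=k(k-1)(k-2)$:

```python
from itertools import product

def proper_colorings(edges, n, k):
    """All proper colorings of an n-vertex graph with colors 0..k-1."""
    return [c for c in product(range(k), repeat=n)
            if all(c[u] != c[v] for u, v in edges)]

# X_G(1^k, 0, 0, ...) counts the proper colorings using only the first
# k colors, i.e. it equals the chromatic polynomial chi_G(k).
triangle = [(0, 1), (1, 2), (0, 2)]
for k in range(1, 6):
    assert len(proper_colorings(triangle, 3, k)) == k * (k - 1) * (k - 2)
```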
Chromatic symmetric functions are particular symmetric functions. It is the fundamental theorem of symmetric functions that $\{e_\lambda\}$ is a basis of the algebra $\Lambda(x_1,x_2,\ldots)$ of symmetric functions, where \[ e_\lambda =e_{\lambda_1}e_{\lambda_2}\dotsm \quad\text{and}\quad e_n =\sum_{1\le i_1<\dots<i_n}x_{i_1}\dotsm x_{i_n}. \] The algebra $\Lambda(x_1,x_2,\ldots)$ also has some other bases, such as the Schur basis $\{s_\lambda\}$. For any basis $\{b_\lambda\}$, a symmetric function is \emph{$b$-positive} if its expansion in the basis~$\{b_{\lambda}\}$ has only nonnegative coefficients, see \citet{Mac95B} and \citet[Chapter 7]{Sta99B}. A graph is said to be $b$-positive if its chromatic symmetric function is $b$-positive. A class of graphs is said to be $b$-positive if every graph in the class is $b$-positive.
This work is originally motivated by Stanley and Stembridge's 3+1 conjecture, see~\citet{SS93}.
\begin{conjecture}[\citeauthor{SS93}]\label{conj:ep:cf-inc} Any claw-free incomparability graph is $e$-positive. \end{conjecture}
Only a few methods are known to show the $e$-positivity of a graph, while there are many results on the $e$-positivity of particular graph classes. \citet{Wol97D} provided a powerful criterion that every connected $e$-positive graph has a connected partition of every type. Graph classes that have been shown to be $e$-positive include complete graphs, paths, cycles, generalized bull graphs, $K$-chains, lollipop graphs, triangular ladders, and Ferrers graphs; see \cite{Cv16, Sta95, CH18, CH19, FHM19, Dah19, Dv18, GS01, Ev04, LY21}. Graphs that are proved not to be $e$-positive include generalized nets, saltire graphs $\mathrm{SA}_{n,n}$, augmented saltire graphs $\mathrm{AS}_{n,n}$ and $\mathrm{AS}_{n,n+1}$, and triangular tower graphs $\mathrm{TT}_{n,n,n}$; see \cite{DFv20, FHM19}. \citet{DFv20} gave an infinite number of families of non-$e$-positive graphs that are not contractible to the claw; one such family is additionally claw-free, thus establishing that $e$-positivity does not in general depend on the existence of an induced claw or of a contraction to a claw.
Note that $e$-positive graphs are Schur positive, since every element $e_\lambda$ is a linear combination of elements $s_\mu$ over Kostka numbers, see~\citet[Exercise 2.12]{MR15B}. \citet{Gas96P} obtained the Schur positivity of the graphs in \cref{conj:ep:cf-inc} by smart bijections, see also~\citet[Theorem 6.3]{SW16}. \begin{theorem}[\citeauthor{Gas99}]\label{thm:Schur:cf-inc} Any claw-free incomparability graph is Schur positive. \end{theorem}
In view of \cref{thm:Schur:cf-inc}, \citet[Conjecture 1.4]{Sta98} proposed the following concise conjecture, and attributed it to \citeauthor{Gas99}.
\begin{conjecture}[\citeauthor{Sta98}]\label{conj:Schur:cf} Any claw-free graph is Schur positive. \end{conjecture}
\citet[Propositions 5.3 and 5.4]{Sta95} demonstrated the $e$-positivity of paths and cycles, which can be considered the most basic graphs in some sense. \citet{GS01} introduced a certain $(e)$-positivity of chromatic symmetric functions of graphs in noncommuting variables, which is stronger than the $e$-positivity in the ordinary sense. They established the $(e)$-positivity of paths, cycles, and the so-called $K$-chains. These results can be used to show the $(e)$-positivity of tadpole graphs, see \citet{LLWY21}.
In the same spirit, we show the $(e)$-positivity of the line graphs of tadpoles, which are slightly less basic graphs. By computing the generating function of the chromatic symmetric functions of these line graphs, one may see that this $e$-positivity result implies that of tadpoles, see \cref{coro:e:tadpole}. On the other hand, since $e$-positive graphs are all Schur positive, we obtain the Schur positivity of the line graphs of tadpoles. Recall that \citet{CS05} discovered a characterization of claw-free graphs in which line graphs form one of the six building blocks. Therefore, the result above confirms a particular case of \cref{conj:Schur:cf}.
This paper is organized as follows. In \cref{sec:preliminary}, we give an overview of the necessary notions and notation, and of the known results that will be of use in the sequel. \Cref{sec:Line:Tpml} is devoted to the $(e)$-positivity of the line graphs of tadpoles. In \cref{sec:Y:CCm3}, we derive the bivariate generating function of cycle-chord graphs $\mathrm{CC}_{a,b}$, which are a slight extension of the line graphs. We also obtain the $(e)$-positivity of~$\mathrm{CC}_{m,3}$.
\section{Preliminaries} \label{sec:preliminary}
This section collects basic properties of chromatic symmetric functions given by \citet{Sta95}, and those concerning the $(e)$-positivity due to \citet{GS01}. All of them will be of use in the sequel.
\begin{proposition}[\citeauthor{Sta95}]\label{prop:csf:disjoint} For any graphs $G$ and $H$, \[ X_{G\sqcup H}=X_G X_H, \] where $G\sqcup H$ is the disjoint union of $G$ and~$H$. \end{proposition}
One way of computing a chromatic symmetric function is by using the power symmetric functions~$p_\lambda$,
which are defined for integer partitions $\lambda=\lambda_1\lambda_2\dotsm$ by \[ p_\lambda=p_{\lambda_1}p_{\lambda_2}\dotsm \quad\text{and}\quad p_n=\sum_{i\ge 1}x_i^n. \]
\begin{proposition}[\citeauthor{Sta95}]\label{prop:csf} For any graph $G$, \[ X_G =\sum_{E'\subseteq E}(-1)^{\abs{E'}}p_{\lambda(E')}, \] where $\lambda(E')$ is the partition consisting of the component orders of the spanning subgraph~$(V,E')$. \end{proposition}
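The subset expansion in the proposition above lends itself to direct computation. The sketch below (an illustration with hypothetical helper names, not code from the paper) stores $X_G$ as a dictionary over partitions $\lambda(E')$ and then specializes $p_n\mapsto k$ (that is, $x_1=\dots=x_k=1$), under which $X_G$ becomes $\chi_G(k)$:

```python
from itertools import combinations, product
from collections import Counter

def components(n, edges):
    """The partition lambda(E'): sorted component orders of ([n], edges)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    sizes = Counter(find(v) for v in range(n))
    return tuple(sorted(sizes.values(), reverse=True))

def csf_p_basis(n, edges):
    """X_G = sum over E' of (-1)^{|E'|} p_{lambda(E')}, stored as a
    Counter keyed by the partition lambda(E')."""
    X = Counter()
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            X[components(n, sub)] += (-1) ** r
    return X

def chi(X, k):
    """Specialize p_n -> k (so p_lambda -> k^{len(lambda)}); X_G -> chi_G(k)."""
    return sum(c * k ** len(lam) for lam, c in X.items())

# Check against a brute-force coloring count on the 4-cycle:
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
X = csf_p_basis(4, C4)
for k in range(1, 6):
    brute = sum(1 for c in product(range(k), repeat=4)
                if all(c[u] != c[v] for u, v in C4))
    assert chi(X, k) == brute == (k - 1) ** 4 + (k - 1)
```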
The generating function of the power symmetric functions will also be needed (cf.~\cite{Mac95B, MR15B}): \begin{equation}\label{gf:p} \sum_{j\ge 0}p_j(-z)^j=\frac{F(z)}{E(z)}, \end{equation} where \[ E(z)=\sum_{n\ge0} e_n z^n \quad\text{and}\quad F(z)=E(z)-zE'(z). \]
The generating functions of paths and cycles are known, see \cite[Propositions 5.3 and 5.4]{Sta95}.
\begin{proposition}[\citeauthor{Sta95}]\label{prop:gf:path:cycle} Denote by $P_n$ the $n$-vertex path and by $C_n$ the $n$-vertex cycle. Then \[ \sum_{n\geq 0}X_{P_n}z^n =\frac{E(z)}{F(z)} \quad\text{and}\quad \sum_{n\geq 2}X_{C_n}z^n =\frac{z^2E''(z)}{F(z)}. \] As a consequence, paths and cycles are $e$-positive. \end{proposition}
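These generating functions can be tested without symbolic algebra by specializing $e_n\mapsto\binom{k}{n}$ (again $x_1=\dots=x_k=1$), so that $E(z)=(1+z)^k$ and the coefficients of $E(z)/F(z)$ and $z^2E''(z)/F(z)$ must equal $\chi_{P_n}(k)=k(k-1)^{n-1}$ and $\chi_{C_n}(k)=(k-1)^n+(-1)^n(k-1)$ respectively. A sketch using truncated integer power series (the helper `series_div` is ours):

```python
from math import comb

def series_div(num, den, N):
    """First N+1 coefficients of num(z)/den(z), assuming den[0] == 1."""
    c = []
    for n in range(N + 1):
        a = num[n] if n < len(num) else 0
        a -= sum(den[j] * c[n - j] for j in range(1, min(n, len(den) - 1) + 1))
        c.append(a)
    return c

k, N = 4, 6
# Specialize e_n -> C(k, n), so that X_G becomes chi_G(k):
E   = [comb(k, n) for n in range(N + 3)]                          # E(z)
Ep  = [(n + 1) * E[n + 1] for n in range(N + 2)]                  # E'(z)
F   = [E[n] - (Ep[n - 1] if n >= 1 else 0) for n in range(N + 2)] # F = E - zE'
Epp = [(n + 1) * Ep[n + 1] for n in range(N + 1)]                 # E''(z)

paths  = series_div(E, F, N)
cycles = series_div([0, 0] + Epp, F, N)                           # z^2 E''(z)/F(z)

for n in range(1, N + 1):
    assert paths[n] == k * (k - 1) ** (n - 1)                     # chi_{P_n}(k)
for n in range(2, N + 1):
    assert cycles[n] == (k - 1) ** n + (-1) ** n * (k - 1)        # chi_{C_n}(k)
```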
A beautiful \emph{triple-deletion} property can be used to reduce the computation of a chromatic symmetric function, see \citet[Theorem 3.1, Corollaries 3.2 and 3.3]{OS14}.
\begin{theorem}[\citeauthor{OS14}]\label{thm:rec:3del} Let $G$ be a graph. Suppose that $G$ contains three vertices $u$, $v$, and $w$ such that no two of them are connected by an edge. Write $e_1=uv$, $e_2=vw$, and $e_3=wu$. For any set $S\subseteq \{1,2,3\}$, denote by $G_S$ the graph on $V(G)$ with edge set $E(G)\cup\{e_j\colon j\in S\}$. Then \[ X_{G_{12}}=X_{G_1}+X_{G_{23}}-X_{G_3} \quad\text{and}\quad X_{G_{123}}=X_{G_{12}}+X_{G_{23}}-X_{G_2}. \] \end{theorem}
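Both triple-deletion identities can be tested on full monomial expansions truncated to $k$ variables. The sketch below (an illustration with a hypothetical base graph; not the paper's code) compares the monomials of both sides as multisets, moving negated terms across the equality to keep all counts nonnegative:

```python
from itertools import product
from collections import Counter

def X(edges, n, k):
    """Monomial expansion of X_G in x_1..x_k, as a Counter keyed by the
    sorted tuple of colors used (one entry per vertex)."""
    cnt = Counter()
    for c in product(range(k), repeat=n):
        if all(c[u] != c[v] for u, v in edges):
            cnt[tuple(sorted(c))] += 1
    return cnt

# Base graph on 5 vertices; vertices 0, 1, 2 are pairwise nonadjacent.
base = [(3, 4), (0, 3), (2, 4)]          # arbitrary test graph (hypothetical)
e1, e2, e3 = (0, 1), (1, 2), (2, 0)

def GS(S):
    """G_S: the base graph plus the chosen edges among e1, e2, e3."""
    return base + [e for e, j in zip((e1, e2, e3), (1, 2, 3)) if j in S]

n, k = 5, 3
# X_{G_{12}} = X_{G_1} + X_{G_{23}} - X_{G_3}
assert X(GS({1, 2}), n, k) + X(GS({3}), n, k) == X(GS({1}), n, k) + X(GS({2, 3}), n, k)
# X_{G_{123}} = X_{G_{12}} + X_{G_{23}} - X_{G_2}
assert X(GS({1, 2, 3}), n, k) + X(GS({2}), n, k) == X(GS({1, 2}), n, k) + X(GS({2, 3}), n, k)
```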
\citet{GS01} made a systematic study of the algebra of chromatic symmetric functions in noncommuting variables. Let $G$ be a graph with vertices labeled by $v_{1}, \dots, v_{d}$. They defined the \emph{chromatic symmetric function in noncommuting variables} of $G$ to be \[ Y_{G}=\sum_{\kappa} x_{\kappa(v_{1})}\dotsm x_{\kappa(v_{d})}, \] where the sum runs also over all proper colorings $\kappa$ of $G$. Note that $Y_{G}$ depends not only on the coloring~$\kappa$, but also on the vertex labeling of $G$. Let $\Pi_d$ denote the lattice of partitions of the set \[ [d]=\{1,\dots, d\} \] ordered by refinement. Given $\pi \in \Pi_d$, the \emph{elementary symmetric function $e_{\pi}$ in noncommuting variables} is defined by \[ e_{\pi}=\sum_{(i_1, \dots, i_d)} x_{i_1} \dotsm x_{i_d}, \] where the sum runs over all sequences $(i_1,\dots, i_d)$ of positive integers such that $i_j \neq i_k$ if $j$ and $k$ are in the same block of $\pi$. The \emph{type} $\lambda(\pi)$ of $\pi$ is the integer partition of $d$ whose parts are the block sizes of $\pi$. It is direct to verify that $e_{\pi}$ becomes $\lambda(\pi)! e_{\lambda(\pi)}$ if we allow the variables $x_i$ to commute, where the symbol $\lambda!$ stands for the product of factorials of all parts of a partition $\lambda$.
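The claim that $e_\pi$ becomes $\lambda(\pi)!\,e_{\lambda(\pi)}$ when the variables commute can be verified numerically by evaluating both sides at sample values. A sketch (names and sample values are ours) for $d=3$ and $\pi=\{\{1,2\},\{3\}\}$, where $\lambda(\pi)!=2!\cdot 1!=2$:

```python
from itertools import product, combinations

x = [2, 3, 5, 7]                      # arbitrary sample values for x_1..x_4
d = 3
pi_blocks = [{0, 1}, {2}]             # the set partition {{1,2},{3}} of [3]

def same_block(j, k):
    return any(j in B and k in B for B in pi_blocks)

# e_pi in noncommuting variables, evaluated at the sample values: the sum
# runs over sequences (i_1,...,i_d) with i_j != i_k when j, k share a block.
e_pi = sum(
    x[i[0]] * x[i[1]] * x[i[2]]
    for i in product(range(len(x)), repeat=d)
    if all(i[j] != i[k] for j, k in combinations(range(d), 2) if same_block(j, k))
)

# Commutative image: lambda(pi)! * e_{lambda(pi)} = 2 * e_2 * e_1.
e2 = sum(x[a] * x[b] for a, b in combinations(range(len(x)), 2))
e1 = sum(x)
assert e_pi == 2 * e2 * e1
```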
Fix an element $i\in[d]$. \citeauthor{GS01} introduced the following congruence relations: \begin{itemize} \item Two partitions $\pi, \sigma\in \Pi_d$ are said to be \emph{congruent modulo $i$}, denoted $\pi \equiv_{i} \sigma$, if \[ \lambda(\pi)=\lambda(\sigma) \quad\text{and}\quad b_{\pi, i}=b_{\sigma, i}, \] where $b_{\pi, i}$ is the size of the block of $\pi$ that contains $i$. Denote by $(\pi)_i$ the congruence class of $\Pi_d$ modulo $i$ that contains $\pi$. \item For any elementary symmetric functions $e_\pi$ and $e_\sigma$, we write $e_\pi \equiv_i e_\sigma$ if and only if $\pi \equiv_{i} \sigma$. Denote by $e_{(\pi)_i}$ the congruence class modulo $i$ that contains $e_{\pi}$. \item For any graph $G$ on $d$ vertices, we write \begin{equation}\label{equ:Y_G} Y_G\equiv_{i} \sum_{(\pi)_i\subseteq \Pi_d} c_{(\pi)_i} e_{(\pi)_i}, \end{equation} where \[ c_{(\pi)_i}=\sum_{\sigma \in(\pi)_i} c_{\sigma} \quad\text{if}\quad Y_{G}=\sum_{\sigma \in \Pi_{d}} c_{\sigma} e_{\sigma}. \] \end{itemize} They say that $G$ is \emph{$(e)$-positive modulo $i$} if all coefficients $c_{(\pi)_i}$ are nonnegative. When $i$ equals the order $d$ of the graph $G$, the term ``modulo $i$'' and the letter $i$ in the notation \[ b_{\pi, i},\qquad \equiv_i, \qquad\text{and}\qquad (\pi)_i \] are all ignored. For instance, $G$ is said to be \emph{$(e)$-positive} if it is so modulo $d$, and \cref{equ:Y_G} reduces to \begin{equation}\label{eq:equiv:Y} Y_G\equiv\sum_{(\pi)\subseteq\Pi_d}c_{(\pi)}e_{(\pi)}. \end{equation} Unlike the $e$-coefficients in $X_G$, the $e$-coefficients $c_{\pi}$ in~$Y_G$ need not be integral.
\cref{prop:(e)toe} builds a bridge between the $(e)$-positivity and the $e$-positivity of the same graph, which is proved and used in~\cite{GS01}. We state it as a proposition for clarity.
\begin{proposition}[\citeauthor{GS01}]\label{prop:(e)toe} Any $(e)$-positive graph is $e$-positive. \end{proposition} \begin{proof} Suppose that \cref{eq:equiv:Y} holds and $c_{(\pi)}\ge 0$ for all congruence classes $(\pi)$. Then \[ X_G=\sum_{(\pi)\subseteq \Pi_d}c_{(\pi)}\lambda(\pi)!\, e_{\lambda(\pi)} \] is $e$-positive. \end{proof}
\citeauthor{GS01} defined the induction operation $\ind$ on monomials $x_{i_{1}} \dotsm x_{i_{d-1}}$, by \[ \left(x_{i_{1}} \dotsm x_{i_{d-1}}\right) \ind=x_{i_{1}} \dotsm x_{i_{d-2}} x_{i_{d-1}}^{2} \] and extended it linearly. They discovered the following deletion-contraction property to reduce the computation of the chromatic symmetric functions $Y_G$, see \cite[Proposition 3.5]{GS01}. \begin{proposition}[{\citeauthor{GS01}}]\label{prop:DC} For the edge $e=v_{d-1}v_d$ in a graph $G$ whose vertices are labeled by $v_1,\dots,v_d$, \[ Y_G=Y_{G \backslash e}-Y_{G / e}\ind, \] where the contraction of $e$ is labeled by $v_{d-1}$. \end{proposition}
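The deletion-contraction property above can be tested with finitely many colors: truncated to $k$ colors, $Y_G$ becomes a multiset of color sequences $(\kappa(v_1),\dots,\kappa(v_d))$, and the induction duplicates the last letter of each sequence. A sketch on the triangle with $e=v_2v_3$ (helper names are ours; vertices are 0-indexed):

```python
from itertools import product
from collections import Counter

def Y(edges, d, k):
    """Y_G truncated to k colors: Counter over the color sequences
    (kappa(v_1), ..., kappa(v_d)) of proper colorings."""
    return Counter(c for c in product(range(k), repeat=d)
                   if all(c[u] != c[v] for u, v in edges))

def induct(Ycnt):
    """Induction: x_{i_1}...x_{i_{d-1}} -> x_{i_1}...x_{i_{d-2}} x_{i_{d-1}}^2."""
    out = Counter()
    for c, m in Ycnt.items():
        out[c + (c[-1],)] += m
    return out

# G = triangle on v_1, v_2, v_3 with e = v_2 v_3; contraction labeled v_2.
k = 3
G          = [(0, 1), (1, 2), (0, 2)]
G_minus_e  = [(0, 1), (0, 2)]
G_contract = [(0, 1)]

# Y_G = Y_{G \ e} - Y_{G / e} (induced), rearranged to avoid negative counts:
assert Y(G, 3, k) + induct(Y(G_contract, 2, k)) == Y(G_minus_e, 3, k)
```

The identity holds coloring by coloring: the colorings of $G\backslash e$ with $\kappa(v_{d-1})=\kappa(v_d)$ are exactly the induced monomials of $G/e$.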
In practice, we always first select an edge $e$ in \cref{prop:DC} according to some reduction strategy, and then we need to relabel the ends of $e$ so that their labels become the largest and second largest labels. The effect of such relabelings is demonstrated by \cref{lem:relabel}, see \cite[Lemma 6.6]{GS01}.
\begin{lemma}[{\citeauthor{GS01}}]\label{lem:relabel} For any relabeling $\gamma(G)$ of vertices of $G$ that fixes the element $d$, we have $Y_{\gamma(G)} \equiv Y_G$. \end{lemma}
For any block $B$ that is disjoint from the set $[d]$, they use the symbol $\pi/B$ to denote the partition that is formed by adding the block $B$ to $\pi$, and the symbol $\pi+(d+1)$ to denote the partition of~$[d+1]$ that is formed by inserting the element $d+1$ into the block of $\pi$ that contains $d$. \Cref{prop:ind:e} exhibits the effect of the induction operation acting on $e_\pi$, see \cite[Corollary 6.1]{GS01}.
\begin{proposition}[{\citeauthor{GS01}}]\label{prop:ind:e} For any partition $\pi\in \Pi_d$, \[ e_{\pi}\ind \,\equiv \frac{1}{b_{\pi}} \brk2{e_{(\pi/d+1)}-e_{(\pi+(d+1))}}. \] \end{proposition} Since the induction respects the congruence relation of partitions, namely, \[ e_\pi\equiv e_\sigma \implies e_\pi\ind \equiv e_\sigma\ind, \] it is extendable to congruence classes. Precisely speaking, they defined \[ e_{(\pi)}\ind\equiv \sum_{(\sigma)\subseteq \Pi_{d+1}}c_{(\sigma)}e_{(\sigma)} \quad\text{if}\quad e_{\pi}\ind=\sum_{\sigma\in \Pi_{d+1}}c_{\sigma}e_{\sigma}. \] By using the induction operation, \citet[Propositions 6.4 and 6.7]{GS01} obtained \cref{prop:path:cycle}, which implies the $(e)$-positivity of paths $P_d$ and cycles $C_d$.
\begin{proposition}[{\citeauthor{GS01}}]\label{prop:path:cycle} If \cref{eq:equiv:Y} holds for $G=P_d$, then \begin{align*} Y_{P_{d+1}} \equiv \sum_{(\pi)\subseteq \Pi_d} \frac{c_{(\pi)}}{b_{\pi}} \brk2{(b_{\pi}-1)e_{(\pi/d+1)}+ e_{(\pi+(d+1))}}\quad\text{and}\quad Y_{C_{d+1}} \equiv \sum_{(\pi)\subseteq \Pi_d}c_{(\pi)}e_{(\pi+(d+1))}. \end{align*} \end{proposition}
We also need two more results that allow us to construct new $(e)$-positive graphs from old ones. \cref{prop:Y:DisjointUnion:G:Km} is a straightforward corollary of \cite[Lemma 6.3]{GS01}.
\begin{proposition}\label{prop:Y:DisjointUnion:G:Km} Let $G$ be a graph on $d$ vertices. Let $K_m$ be the complete graph of order $m$, whose vertices are labeled by $v_{d+1},\dots, v_{d+m}$. If \cref{eq:equiv:Y} holds, then the disjoint union of $G$ and $K_m$ satisfies \[ Y_{G\,\uplus K_m} \equiv \sum_{(\pi)\subseteq \Pi_d} c_{(\pi)}e_{(\pi/\{d+1,\,\dots,\,d+m\})}. \] \end{proposition}
\cref{thm:(e)-pos:G+K_n} is a restatement of \cite[Theorem 7.6]{GS01}.
\begin{theorem}[{\citeauthor{GS01}}]\label{thm:(e)-pos:G+K_n} Given $n,d\ge 1$, let $G=(V_1,E_1)$ be a graph with vertex set $V_1=\{v_1, \dots, v_d\}$, and let $G+K_n=(V_2,E_2)$ be the graph with \begin{align*} V_2&=V_1\cup \{v_{d+1},\, \dots,\, v_{d+n-1}\},\\ E_2&=E_1\cup \{v_iv_j\colon i,j\in [d,\, d+n-1]\}. \end{align*} If $Y_G$ is $(e)$-positive, then so is $Y_{G+K_n}$. \end{theorem}
\section{The $(e)$-positivity of the line graphs of tadpoles} \label{sec:Line:Tpml}
The \emph{tadpole} graph $\mathrm{Tp}_{m,l}$ is obtained by identifying a vertex of the cycle $C_m$ with an end of the path $P_l$, see \cref{fig:tadpole}. Tadpoles are special squid graphs, which were investigated by \citet{MMW08}. We first give the bivariate generating functions for the chromatic symmetric functions of tadpoles and their line graphs. Recall that the functions $E(z)$ and $F(z)$ are defined in \cref{sec:preliminary}. \begin{figure}
\caption{The tadpole graph $\mathrm{Tp}_{m,l}$.}
\label{fig:tadpole}
\end{figure}
\begin{theorem}\label{thm:gf:tadpole} The chromatic symmetric functions of tadpole graphs $\mathrm{Tp}_{m,l}$ can be computed by \begin{equation}\label{rec:TPml} X_{\mathrm{Tp}_{m,l}}=(m-1)X_{P_{m+l}}-\sum_{i=2}^{m-1}X_{P_{m+l-i}}X_{C_i}, \end{equation} and their bivariate generating function is \[ \sum_{m\ge2}\sum_{l\ge0}X_{\mathrm{Tp}_{m,l}}x^my^l =\frac{x^2}{(x-y)^2} \brk[s]3{\frac{x(x-y)E''(x)E(y)}{F(x)F(y)} -y\brk3{\frac{E'(x)}{F(x)}-\frac{E'(y)}{F(y)}}}. \] \end{theorem} \begin{proof} Let $m\ge 2$ and $l\ge0$. By \cref{prop:csf:disjoint,thm:rec:3del}, we obtain \[ X_{\mathrm{Tp}_{m,l}} =\begin{cases} X_{P_{m+l}}+X_{\mathrm{Tp}_{m-1,\,l+1}}-X_{C_{m-1}}X_{P_{l+1}},&\text{if $m\ge 4$},\\ 2X_{P_{l+3}}-X_{C_2}X_{P_{l+1}},&\text{if $m=3$},\\ X_{P_{l+2}},&\text{if $m=2$}. \end{cases} \] If $m\ge 4$, then we can apply the formula above iteratively, which gives \[ X_{\mathrm{Tp}_{m,l}} =(m-3)X_{P_{m+l}}+X_{\mathrm{Tp}_{3,l}}-\sum_{i=3}^{m-1}X_{C_i}X_{P_{m+l-i}}, \] which simplifies to \cref{rec:TPml}. It is routine to verify \cref{rec:TPml} for $m\le 3$. Using standard techniques of generating functions, one may translate \cref{rec:TPml} into the desired generating function. \end{proof}
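As a quick sanity check (not part of the proof), one can specialize \cref{rec:TPml} at $x_1=\dots=x_k=1$, where every $X_G$ becomes the chromatic polynomial $\chi_G(k)$, and compare against a brute-force count of proper colorings. The sketch below follows the convention that $\mathrm{Tp}_{m,l}$ has $m+l$ vertices; the helper names are ours:

```python
from itertools import product

def chi(edges, n, k):
    return sum(1 for c in product(range(k), repeat=n)
               if all(c[u] != c[v] for u, v in edges))

def tadpole(m, l):
    """Tp_{m,l}: cycle 0..m-1 with a path of l extra vertices hung at vertex 0
    (so the graph has m + l vertices)."""
    cycle = [(i, (i + 1) % m) for i in range(m)]
    path = ([(0, m)] + [(m + i, m + i + 1) for i in range(l - 1)]) if l >= 1 else []
    return cycle + path

chiP = lambda n, k: k * (k - 1) ** (n - 1) if n else 1          # chi_{P_n}(k)
chiC = lambda n, k: (k - 1) ** n + (-1) ** n * (k - 1)          # chi_{C_n}(k)

for m, l, k in [(3, 1, 3), (3, 2, 4), (4, 1, 3), (4, 2, 4)]:
    lhs = chi(tadpole(m, l), m + l, k)
    rhs = (m - 1) * chiP(m + l, k) - sum(chiP(m + l - i, k) * chiC(i, k)
                                         for i in range(2, m))
    assert lhs == rhs
```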
Let $u$ and $v$ be two adjacent vertices on the cycle $C_m$, and let $w$ be an end of the path $P_l$. The line graph $L(\mathrm{Tp}_{m,l})$ of the tadpole $\mathrm{Tp}_{m,l}$ can be obtained by adding the edges $uw$ and $vw$ to the disjoint union of $C_m$ and $P_l$, see~\cref{fig:Ltadpole}. \begin{figure}
\caption{The line graph of a tadpole.}
\label{fig:Ltadpole}
\end{figure}
\begin{theorem}\label{thm:ep:tadpole:Ltadpole} The chromatic symmetric functions of the line graphs $L(\mathrm{Tp}_{m,l})$ can be computed by \begin{equation}\label{fml:csf:Ltadpole} X_{L(\mathrm{Tp}_{m,l})} =X_{P_l} X_{C_m} +2\sum_{k\ge 1} X_{P_{l-k}} X_{C_{m+k}} -2lX_{P_{m+l}}, \qquad\text{for $m\ge 2$ and $l\ge 0$,} \end{equation} or alternatively by \begin{equation}\label{rec:Ltadpole:tadpole} X_{L(\mathrm{Tp}_{m,l})}=2X_{\mathrm{Tp}_{m,l}}-X_{C_m}X_{P_l},\qquad\text{for $m\ge 2$ and $l\ge 0$}. \end{equation} Their bivariate generating function is \[ \sum_{m\ge 2}\sum_{l\ge0}X_{L(\mathrm{Tp}_{m,l})} x^m y^l =\frac{x^2}{(x-y)^2} \brk[s]3{\frac{(x^2-y^2)E''(x)E(y)}{F(x)F(y)} -2y\brk3{\frac{E'(x)}{F(x)}-\frac{E'(y)}{F(y)}}}. \] \end{theorem} \begin{proof} By \cref{prop:csf:disjoint,thm:rec:3del}, we obtain \cref{rec:Ltadpole:tadpole}. Together with \cref{prop:gf:path:cycle,thm:gf:tadpole}, one may deduce \cref{fml:csf:Ltadpole} and the generating function by routine calculation. \end{proof}
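\Cref{rec:Ltadpole:tadpole} can likewise be sanity-checked at the chromatic-polynomial level, using the description of $L(\mathrm{Tp}_{m,l})$ as $C_m\sqcup P_l$ plus the edges $uw$ and $vw$. A brute-force sketch (helper names are ours, not from the paper):

```python
from itertools import product

def chi(edges, n, k):
    return sum(1 for c in product(range(k), repeat=n)
               if all(c[u] != c[v] for u, v in edges))

def tadpole(m, l):
    """Tp_{m,l}: cycle 0..m-1 plus a path of l extra vertices at vertex 0."""
    cycle = [(i, (i + 1) % m) for i in range(m)]
    path = ([(0, m)] + [(m + i, m + i + 1) for i in range(l - 1)]) if l >= 1 else []
    return cycle + path

def line_tadpole(m, l):
    """L(Tp_{m,l}): C_m on 0..m-1, the l-vertex path on m..m+l-1, plus
    the edges u w and v w with u = 0, v = 1 (adjacent on C_m), w = m."""
    cycle = [(i, (i + 1) % m) for i in range(m)]
    path = [(m + i, m + i + 1) for i in range(l - 1)]
    return cycle + path + [(0, m), (1, m)]

chiP = lambda n, k: k * (k - 1) ** (n - 1) if n else 1
chiC = lambda n, k: (k - 1) ** n + (-1) ** n * (k - 1)

# X_{L(Tp_{m,l})} = 2 X_{Tp_{m,l}} - X_{C_m} X_{P_l}, specialized at 1^k:
for m, l, k in [(3, 1, 3), (3, 2, 3), (4, 2, 3)]:
    lhs = chi(line_tadpole(m, l), m + l, k)
    rhs = 2 * chi(tadpole(m, l), m + l, k) - chiC(m, k) * chiP(l, k)
    assert lhs == rhs
```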
We did not succeed in proving the $e$-positivity of the line graphs $L(\mathrm{Tp}_{m,l})$ by analyzing their generating function. Yet \cref{prop:(e)toe} is powerful enough to complete the job.
\begin{theorem} Let $m\ge 3$ and $l\ge 1$. Let $C_m$ be the cycle $v_1\dotsm v_mv_1$, and $P_l$ the path $v_{m+1}\dotsm v_{m+l}$. Then the line graph $L(\mathrm{Tp}_{m,l})$ that is labeled by adding the edges $v_1v_{m+1}$ and $v_mv_{m+1}$ in the disjoint union $C_m\uplus P_l$ is $(e)$-positive. \end{theorem}
\begin{proof} We first show it for $l=1$. Let $G=L(\mathrm{Tp}_{m,1})$. By \cref{prop:DC}, \begin{equation}\label{pf:Y:L:Tp:m:1} Y_G = Y_{\mathrm{Tp}_{m,1}}-Y_{C_m}\ind, \end{equation} see \cref{fig:tadpole} with $l=1$ for the labeling of the graph $\mathrm{Tp}_{m,1}$. By \cref{lem:relabel}, we can relabel the vertices of $\mathrm{Tp}_{m,1}$ so that the resulting graph $\mathrm{Tp}_{m,1}'$ is obtained by adding the edge $v_1v_m$ onto the path $v_1\dotsm v_{m+1}$. Applying \cref{prop:DC} to $\mathrm{Tp}_{m,1}'$ and by \cref{pf:Y:L:Tp:m:1}, we can infer that \begin{equation}\label{pf:Y:LTp:m1} Y_G \equiv Y_{\mathrm{Tp}_{m,1}'}-Y_{C_m}\ind \equiv Y_{C_m\,\uplus K_1}-2Y_{C_m}\ind. \end{equation} Suppose that \begin{equation}\label{eq:Y:P:m-1} Y_{P_{m-1}} \equiv \sum_{(\rho)\subseteq \Pi_{m-1}}c_{(\rho)}e_{(\rho)}. \end{equation} By \cref{prop:path:cycle}, \[ c_{(\rho)}\ge 0 \quad\text{and}\quad Y_{C_m}\equiv \sum_{(\rho)\subseteq \Pi_{m-1}}c_{(\rho)}e_{(\rho+m)}. \] By \cref{prop:Y:DisjointUnion:G:Km}, \begin{equation}\label{pf:Cm+K1} Y_{C_m\,\uplus K_1} \equiv \sum_{(\rho)\subseteq \Pi_{m-1}}c_{(\rho)}e_{(\rho+m/m+1)}. \end{equation} By \cref{prop:ind:e}, \begin{align} Y_{C_m}\ind \equiv \sum_{(\rho)\subseteq \Pi_{m-1}}c_{(\rho)}e_{(\rho+m)}\ind \equiv \sum_{(\rho)\subseteq \Pi_{m-1}} \frac{c_{(\rho)}}{b_{\rho}+1} \brk2{e_{(\rho+m/ m+1)}-e_{(\rho+m+(m+1))}}.\label{pf:Cm:ind} \end{align} Therefore, substituting \cref{pf:Cm+K1,pf:Cm:ind} into \cref{pf:Y:LTp:m1}, we obtain \[ Y_G \equiv \sum_{(\rho)\subseteq \Pi_{m-1}} \frac{c_{(\rho)}}{b_{\rho}+1} \brk2{(b_{\rho}-1)e_{(\rho +m/ m+1)}+2e_{(\rho +m+(m+1))}}. \] Since $b_{\rho}\ge 1$ for all $\rho\in \Pi_{m-1}$, we obtain that $Y_G$ is $(e)$-positive.
Applying \cref{thm:(e)-pos:G+K_n} recursively yields that $Y_{L(\mathrm{Tp}_{m,l})}$ is $(e)$-positive. \end{proof}
\begin{corollary}\label{coro:LTp} The line graphs $L(\mathrm{Tp}_{m,l})$ are $e$-positive for any integers $m\ge 3$ and $l\ge 1$. \end{corollary} \begin{proof} Immediate by \cref{prop:(e)toe}. \end{proof}
\begin{corollary}\label{coro:e:tadpole} The tadpoles $\mathrm{Tp}_{m,l}$ are $e$-positive for any integers $m\ge 3$ and $l\ge 1$. \end{corollary} \begin{proof} Immediate by \cref{rec:Ltadpole:tadpole,coro:LTp}. \end{proof}
In fact, tadpoles labeled as in \cref{fig:tadpole} are $(e)$-positive, which is a corollary of \cref{prop:path:cycle,thm:(e)-pos:G+K_n}, see also~\cite{LLWY21}.
\section{The $(e)$-positivity of the cycle-chord graphs $\mathrm{CC}_{m,3}$} \label{sec:Y:CCm3}
Regarding the line graphs $L(\mathrm{Tp}_{m,1})$ as obtained by identifying an edge of the cycle $C_m$ with an edge of the triangle $C_3$, we extend this $(e)$-positive graph class a bit by considering the graphs that are obtained by identifying an edge of $C_m$ with an edge of the rectangle $C_4$. More generally, one may consider the \emph{cycle-chord graph} $\mathrm{CC}_{a,b}$ that is obtained by identifying an edge of $C_{a+1}$ with an edge of~$C_{b+1}$, where $a,b\ge1$. In particular, the graph $\mathrm{CC}_{m,2}$ is $L(\mathrm{Tp}_{m,1})$. \begin{figure}
\caption{The cycle-chord graph $\mathrm{CC}_{a,b}$.}
\label{fig:CCml}
\end{figure}
In this section, we first present the bivariate generating function for the chromatic symmetric functions of the cycle-chord graphs $\mathrm{CC}_{a,b}$, and then show the $(e)$-positivity of $\mathrm{CC}_{m,3}$.
\begin{theorem}\label{thm:gf:cycle-chord} The generating function of cycle-chord graphs $\mathrm{CC}_{a,b}$ is \[ \sum_{a,b\ge1}X_{\mathrm{CC}_{a,b}}x^ay^b =\frac{xy}{(x-y)^2}\brk[s]3{ \frac{x^2E''(x)E(y)+y^2E''(y)E(x)}{F(x)F(y)} -\frac{2xy}{x-y}\brk3{ \frac{E'(x)}{F(x)}-\frac{E'(y)}{F(y)}}}. \] \end{theorem} \begin{proof} Let $\mathrm{CC}_{a,b}=(V,E)$, where $V=\{v_0,v_1, \dotsc, v_{a+b-1}\}$ and $E=E_a\cup E_b\cup \{v_0v_a\}$ with \[ E_a=\{v_0v_1,\,v_1v_2,\,\dotsc, v_{a-1}v_a\} \quad\text{and}\quad E_b=\{v_av_{a+1},\,v_{a+1}v_{a+2},\,\dotsc,\,v_{a+b-1}v_0\}. \] Since removing the edge $v_0v_a$ from $\mathrm{CC}_{a,b}$ results in the cycle $C_{a+b}$, by \cref{prop:csf}, \[ \sum_{v_0v_a\not\in S\subseteq E}(-1)^{\abs{S}}p_{\lambda(S)} =X_{C_{a+b}}. \] When $v_0v_a\in S$, let $M$ be the component of the graph $(V,S)$ that contains the edge $v_0v_a$. Let $a'$ be the number of edges in $M\cap E_a$ and $b'$ the number of edges in $M\cap E_b$. Then $0\le a'\le a$ and $0\le b'\le b$. According to whether $a'=a$ and $b'=b$, we obtain \begin{align} X_{\mathrm{CC}_{a,b}} &=X_{C_{a+b}} +\sum_{a'=0}^{a-1}\sum_{b'=0}^{b-1} (-1)^{a'+b'+1}(a'+1)(b'+1)p_{a'+b'+2} X_{P_{a-a'-1}}X_{P_{b-b'-1}} +(-1)^{a+b+1}p_{a+b}\notag\\ &\quad +\sum_{a'=0}^{a-1}(-1)^{a'+b+1}(a'+1)p_{a'+b+1}X_{P_{a-a'-1}} +\sum_{b'=0}^{b-1}(-1)^{a+b'+1}(b'+1)p_{a+b'+1}X_{P_{b-b'-1}} \notag\\ &=X_{C_{a+b}} +\sum_{i=1}^{a}\sum_{j=1}^{b} (-1)^{i+j+1}\cdotp i\cdotp j\cdotp p_{i+j}X_{P_{a-i}}X_{P_{b-j}} +(-1)^{a+b+1}p_{a+b}\notag\\ &\quad +\sum_{i=1}^{a}(-1)^{b+i}\cdotp i\cdotp p_{b+i}X_{P_{a-i}} +\sum_{j=1}^{b}(-1)^{a+j}\cdotp j\cdotp p_{a+j}X_{P_{b-j}}. \label{rec:CC:a:b} \end{align} Considering the generating function \[ H(x,y)=\sum_{a,b\ge1}X_{\mathrm{CC}_{a,b}}x^ay^b, \] we compute each term on the right side of \cref{rec:CC:a:b} by multiplying $x^ay^b$ and summing over $a,b\ge1$. For convenience, we will use the substitutions \[ s=-x,\quad t=-y, \quad\text{and}\quad Q_z=\frac{F(z)}{E(z)}. 
\] For the first and third term in \cref{rec:CC:a:b}, by \cref{prop:gf:path:cycle,gf:p}, \begin{align} \sum_{a,b\ge1}X_{C_{a+b}}x^a y^b &=\frac{xy}{x-y}\sum_{h\ge2}X_{C_h}(x^{h-1}-y^{h-1}) =\frac{xy}{x-y}\brk3{\frac{xE''(x)}{F(x)}-\frac{yE''(y)}{F(y)}}, \label{gf:C:a+b}\\ \sum_{a,b\ge1}(-1)^{a+b+1}p_{a+b}x^ay^b &=-\sum_{a,b\ge1}p_{a+b}s^a t^b =\frac{xy}{x-y}\sum_{h\ge2}p_h(s^{h-1}-t^{h-1})\notag\\ &=\frac{xy}{x-y} \brk3{\frac{Q_x-1-e_1 s}{s}-\frac{Q_y-1-e_1 t}{t}} =\frac{xQ_y-yQ_x}{x-y}-1.\notag \end{align} For the fourth term in \cref{rec:CC:a:b}, \begin{align*} &\sum_{a,b\ge1}\sum_{i=1}^{a} (-1)^{b+i}\cdotp i\cdotp p_{b+i}X_{P_{a-i}}x^a y^b =\sum_{b,i\ge1}(-1)^{i}\cdotp i\cdotp p_{b+i}\cdotp t^b \sum_{a\ge i}X_{P_{a-i}}x^a\\ =&\frac{1}{Q_x}\sum_{b,i\ge1} i\cdotp p_{b+i}\cdotp s^i t^b =\frac{1}{Q_x}\sum_{h\ge2} p_h\cdotp\brk2{st^{h-1}+2s^2t^{h-2}+\dotsb+(h-1)s^{h-1}t}\\ =&\frac{xy}{(x-y)^2Q_x} \sum_{h\ge2} p_h\cdotp\brk2{hs^h-hs^{h-1}t-s^h+t^h} =\frac{xy\brk[s]1{(x-y)Q_x'-(Q_x-Q_y)}}{(x-y)^2Q_x}. \end{align*} Exchanging $x$ and $y$ and exchanging $a$ and $b$ in the formula above, we obtain \[ \sum_{a,b\ge1}\sum_{j=1}^{b} (-1)^{a+j}\cdotp j\cdotp p_{a+j}X_{P_{b-j}}x^a y^b =\frac{xy\brk[s]1{(y-x)Q_y'+(Q_x-Q_y)}}{(x-y)^2Q_y}. \] For the second term in \cref{rec:CC:a:b}, we compute \begin{align*} &\sum_{a,b\ge1} \sum_{i=1}^{a}\sum_{j=1}^{b} (-1)^{i+j+1}\cdotp i\cdotp j\cdotp p_{i+j}X_{P_{a-i}}X_{P_{b-j}} x^a y^b\\ =&-\sum_{i,j\ge1}i\cdotp j\cdotp p_{i+j} \sum_{a\ge i}X_{P_{a-i}}s^a \sum_{b\ge j}X_{P_{b-j}}t^b\\ =&-\frac{1}{Q_x Q_y} \sum_{i,j\ge1}i\cdotp j\cdotp p_{i+j}\cdotp s^i\cdotp t^j =-\frac{1}{Q_x Q_y}\sum_{h\ge2}p_h \sum_{i=1}^{h-1} i\cdotp (h-i)\cdotp s^i t^{h-i}\\ =&-\frac{st}{(s-t)^3Q_x Q_y}\sum_{h\ge2}p_h \brk[s]2{(h-1)\brk1{s^{h+1}-t^{h+1}}-(h+1)st\brk1{s^{h-1}-t^{h-1}}}\\ =&\frac{xy}{(x-y)^3Q_x Q_y} \brk[s]2{(x+y)(Q_x-Q_y)-(x-y)(xQ_x'+yQ_y')}. 
\end{align*} Adding up the five results above, we obtain \begin{align*} H(x,y) &=\frac{xy}{x-y}\brk3{\frac{xE''(x)}{F(x)}-\frac{yE''(y)}{F(y)}} +\frac{xQ_y-yQ_x}{x-y}-1\\ &\qquad+\frac{xy\brk[s]1{(x-y)Q_x'-(Q_x-Q_y)}}{(x-y)^2Q_x} +\frac{xy\brk[s]1{(y-x)Q_y'+(Q_x-Q_y)}}{(x-y)^2Q_y}\\ &\qquad+\frac{xy}{(x-y)^3Q_x Q_y} \brk[s]2{(x+y)(Q_x-Q_y)-(x-y)(xQ_x'+yQ_y')}, \end{align*} which simplifies to the desired formula. \end{proof}
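The expansion \cref{rec:CC:a:b} can be sanity-checked numerically: specializing $p_\lambda\mapsto k^{\ell(\lambda)}$ (so every $p_{i+j}\mapsto k$ and every $X_G\mapsto\chi_G(k)$) must reproduce a brute-force count of proper colorings of $\mathrm{CC}_{a,b}$. A sketch with hypothetical helpers `cycle_chord` and `formula`, not part of the proof:

```python
from itertools import product

def chi(edges, n, k):
    return sum(1 for c in product(range(k), repeat=n)
               if all(c[u] != c[v] for u, v in edges))

def cycle_chord(a, b):
    """CC_{a,b}: the cycle v_0 ... v_{a+b-1} plus the chord v_0 v_a."""
    n = a + b
    return [(i, (i + 1) % n) for i in range(n)] + [(0, a)], n

chiP = lambda n, k: k * (k - 1) ** (n - 1) if n else 1
chiC = lambda n, k: (k - 1) ** n + (-1) ** n * (k - 1)

def formula(a, b, k):
    """The expansion of X_{CC_{a,b}}, specialized at p_lambda -> k^{len(lambda)}."""
    total = chiC(a + b, k) + (-1) ** (a + b + 1) * k
    total += sum((-1) ** (i + j + 1) * i * j * k * chiP(a - i, k) * chiP(b - j, k)
                 for i in range(1, a + 1) for j in range(1, b + 1))
    total += sum((-1) ** (b + i) * i * k * chiP(a - i, k) for i in range(1, a + 1))
    total += sum((-1) ** (a + j) * j * k * chiP(b - j, k) for j in range(1, b + 1))
    return total

for a, b, k in [(2, 2, 3), (2, 2, 4), (3, 2, 3)]:
    edges, n = cycle_chord(a, b)
    assert chi(edges, n, k) == formula(a, b, k)
```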
In the procedure of showing the $(e)$-positivity of $Y_{\mathrm{CC}_{m,3}}$, we will use \cref{prop:DC,lem:relabel} frequently in which we need to relabel the largest vertex. We write $(d, i)$ to denote the transposition of the elements $d$ and $i$. For any partition $\pi\in\Pi_d$ and any graph $G$ on $d$ vertices, we write \begin{itemize} \item $\pi\circ (d, i)$ to denote the partition of $[d]$ that interchanges the elements $d$ and $i$ in $\pi$, \item $G\circ (d, i)$ to denote the relabeling of $G$ that interchanges the vertex labels $v_d$ and $v_{i}$, and \item $Y_G\circ (d, i)$ to denote the chromatic symmetric function $Y_{G\circ (d, i)}$. \end{itemize} It follows that \[ Y_G=\sum_{\pi \in \Pi_d} c_{\pi}e_{\pi}\quad \Longrightarrow\quad Y_G\circ (d, i)=\sum_{\pi \in \Pi_d} c_{\pi}e_{\pi\circ (d, i)}. \] Another basic result of the transposition is \cref{prop:ind:relabel:d}.
\begin{proposition}\label{prop:ind:relabel:d} For any partition $\pi \in \Pi_d$ and any element $i\in [d]$, \[ e_{\pi\circ (d, i)}\ind \, \equiv \frac{1}{b_{\pi, i}}\brk2{e_{(\pi/d+1)}-e_{(\pi+_{i}(d+1))}}\circ (d, i), \] where $\pi+_i(d+1)$ is the partition of $[d+1]$ formed by inserting the element $d+1$ into the block of $\pi$ that contains $i$. \end{proposition} \begin{proof} By \cref{prop:ind:e}, we have \begin{align*} e_{\pi\circ (d, i)}\ind &\equiv \frac{1}{b_{\pi\circ (d, i)}}\brk2{e_{(\pi\circ (d, i)/d+1)}-e_{(\pi\circ (d, i)+(d+1))}}, \end{align*} which simplifies to the desired congruence relation. \end{proof}
Note that \cref{prop:ind:relabel:d} reduces to \cref{prop:ind:e} when $i=d$.
Here comes the main result of \cref{sec:Y:CCm3}. \begin{theorem}\label{thm:Y:CCm3} For any integer $m\ge 3$, the cycle-chord graph $\mathrm{CC}_{m,3}$ that is labeled by adding the edge~$v_2v_{m+1}$ to the cycle $v_1\dotsm v_{m+2}v_1$ is $(e)$-positive. \end{theorem} \begin{proof} By \cref{prop:DC}, \begin{align*} Y_{\mathrm{CC}_{m,3}}= Y_{\mathrm{Tp}_{m,2}}-Y_{L(\mathrm{Tp}_{m,1})}\ind, \end{align*} see the left part of \cref{fig:CCm3} for the vertex labeling of $\mathrm{CC}_{m,3}$. \begin{figure}\label{fig:CCm3}
\end{figure}
We relabel the vertices of the tadpole $\mathrm{Tp}_{m,2}$ so that it is obtained by adding the edge $v_1v_m$ to the path $v_1\dotsm v_{m+2}$. Write the resulting graph as $\mathrm{Tp}_{m,2}'$. We also relabel the vertices of the line graph $L(\mathrm{Tp}_{m,1})$ so that it is obtained by adding the edge $v_{m-1}v_{m+1}$ to the cycle $v_1\dotsm v_{m+1}v_1$. Write the resulting graph as~$L(\mathrm{Tp}_{m,1})'$, see the right part of \cref{fig:CCm3}. By \cref{lem:relabel} and by applying \cref{prop:DC} twice more, we can deduce that \begin{align} Y_{\mathrm{CC}_{m,3}} &\equiv_{m+2} Y_{\mathrm{Tp}_{m,2}'}-Y_{L(\mathrm{Tp}_{m,1})'}\ind\notag\\ &\equiv_{m+2} \brk1{Y_{\mathrm{Tp}_{m,1}\,\uplus K_1}-Y_{\mathrm{Tp}_{m,1}}\ind}-Y_{L(\mathrm{Tp}_{m,1})'}\ind\notag\\ &\equiv_{m+2} Y_{\mathrm{Tp}_{m,1}\,\uplus K_1}-(Y_{C_m\,\uplus K_1}-Y_{C_m}\ind)\ind-Y_{L(\mathrm{Tp}_{m,1})'}\ind\notag\\ &\equiv_{m+2} Y_{\mathrm{Tp}_{m,1}\,\uplus K_1}+Y_{C_m}\ind \ind-Y_{C_m\,\uplus K_1}\ind-Y_{L(\mathrm{Tp}_{m,1})'}\ind.\label{pf:Y:CCm2} \end{align} We proceed by computing the four terms in \cref{pf:Y:CCm2} separately. In order to compute the last term, we pause here and give a lemma for computing $Y_{L(\mathrm{Tp}_{m,1})'}$.
\begin{lemma}\label{lem:lineTp} If \cref{eq:Y:P:m-1} holds, then \begin{equation}\label{eq:Y:LTpm1} Y_{L(\mathrm{Tp}_{m,1})'} \equiv \sum_{(\rho)\subseteq \Pi_{m-1}}c_{(\rho)} \brk3{ \frac{2}{b_{\rho}+1}e_{(\rho+m+(m+1))}- \frac{b_{\rho}-1}{b_{\rho}(b_{\rho}+1)}e_{(\rho+m/m+1)} +\frac{b_{\rho}-1}{b_{\rho}} e_{(\rho')}}, \end{equation} where $\rho'\in\Pi_{m+1}$ is the partition $\rho\backslash \{m-1\}+m+(m+1)/m-1$. \end{lemma}
\begin{proof}[Proof of \cref{lem:lineTp}] By using \cref{prop:DC}, we obtain \[ Y_{C_m}=Y_{P_m}-Y_{C_{m-1}}\ind. \] It follows that \begin{align}\label{pf:YCm-1} Y_{C_{m-1}}\ind\ind=Y_{P_m}\ind-Y_{C_m}\ind. \end{align} Let $i\in [m-1]$; its value will be chosen carefully to facilitate the computation in the sequel. By using \cref{prop:DC} thrice and using \cref{pf:YCm-1}, we can deduce \begin{align} Y_{L(\mathrm{Tp}_{m,1})'} &\equiv Y_{\mathrm{Tp}_{m,1}\circ (m, m-1)}-Y_{C_m}\ind\notag\\ &\equiv Y_{P_{m+1}}-Y_{\mathrm{Tp}_{m-1,1}}\ind-Y_{C_m}\ind\notag\\ &\equiv Y_{P_{m+1}}-\brk1{Y_{(C_{m-1}\,\uplus K_1)\circ (m, i)}-Y_{C_{m-1}}\ind} \ind-Y_{C_m}\ind\notag\\ &\equiv Y_{P_{m+1}}-Y_{C_{m-1}\,\uplus K_1}\circ (m, i)\ind+Y_{P_m}\ind-2Y_{C_m}\ind, \label{pf:Y:LTpm1} \end{align} where the graph $C_{m-1}\,\uplus K_1$ is labeled by $C_{m-1}=v_1\dotsm v_{m-1}v_1$ and $V(K_1)=\{v_m\}$. We will calculate the four terms in \cref{pf:Y:LTpm1} independently. Suppose that \begin{align*} Y_{P_{m-2}} &\equiv
\sum_{(\tau)\subseteq \Pi_{m-2}}h_{(\tau)}e_{(\tau)},\\ Y_{P_{m-1}} &\equiv \sum_{(\rho)\subseteq \Pi_{m-1}}c_{(\rho)}e_{(\rho)}\quad \text{and}\\ Y_{C_{m-1}} &\equiv \sum_{(\rho)\subseteq \Pi_{m-1}}c'_{(\rho)}e_{(\rho)}. \end{align*} Then by \cref{prop:path:cycle}, \begin{align*} c_{(\rho)}&= \begin{cases} \displaystyle\frac{h_{(\tau)}}{b_{\tau}}(b_{\tau}-1),&\text{if $\rho=\tau/(m-1)$},\\[10pt] \displaystyle\frac{h_{(\tau)}}{b_{\rho}-1},&\mbox{if $\rho=\tau+(m-1)$},\\[9pt] 0,&\mbox{otherwise}, \end{cases}\\ c'_{(\rho)}&= \begin{cases} h_{(\tau)},&\text{if $\rho=\tau+(m-1)$},\\[9pt] 0,&\text{otherwise}. \end{cases} \end{align*} It follows that $c'_{(\rho)}=h_{(\tau)}=(b_{\rho}-1)c_{(\rho)}$, if $\rho=\tau+(m-1)$.
By using \cref{prop:Y:DisjointUnion:G:Km,prop:ind:relabel:d}, we obtain \begin{align*} Y_{C_{m-1}\,\uplus K_1}\circ (m, i)\ind \equiv &\sum_{(\rho)\subseteq \Pi_{m-1}}c'_{(\rho)}e_{(\rho/ m)}\circ (m, i)\ind\\ \equiv &\sum_{(\rho)\subseteq \Pi_{m-1}}\frac{c'_{(\rho)}}{b_{\rho/m,\,i}}\brk2{e_{(\rho/ m/m+1))}-e_{\rho/m+_{i}(m+1)}} \circ (m, i)\\ \equiv &\sum_{(\rho)\subseteq \Pi_{m-1}} \frac{(b_{\rho}-1)c_{(\rho)}}{b_{\rho,i}}\brk2{e_{(\rho /m/m+1)}-e_{\rho+_{i}(m+1)/m}}\circ (m, i). \end{align*} Taking $i=m-1$, we obtain \begin{align} Y_{C_{m-1}\,\uplus K_1}\circ (m, i)\ind \equiv &\sum_{(\rho)\subseteq \Pi_{m-1}} \frac{(b_{\rho}-1)c_{(\rho)}}{b_{\rho}} \brk2{e_{(\rho /m/m+1)}-e_{(\rho')}}.\label{pf:Y:Cm-1K1:ind} \end{align} On the other hand, by \cref{prop:path:cycle}, we obtain \begin{align}\label{pf2:Pm} Y_{P_m} &\equiv \sum_{(\rho)\subseteq \Pi_{m-1}} \frac{c_{(\rho)}}{b_{\rho}} \brk2{(b_{\rho}-1)e_{(\rho/m)}+e_{(\rho+m)}}\quad\text{and}\\ Y_{C_m}\label{pf2:Cm} &\equiv \sum_{(\rho)\subseteq \Pi_{m-1}} c_{(\rho)}e_{(\rho+m)}. 
\end{align} By \cref{prop:path:cycle}, we obtain \begin{align} Y_{P_{m+1}} \equiv &\sum_{(\rho)\subseteq \Pi_{m-1}} \frac{(b_{\rho}-1)c_{(\rho)}}{b_{\rho}b_{\rho/m}} \brk3{(b_{\rho/m}-1)e_{(\rho/m/m+1)}+e_{(\rho/m+(m+1))}}\notag\\ &\qquad+\sum_{(\rho)\subseteq \Pi_{m-1}} \frac{c_{(\rho)}}{b_{\rho}b_{\rho+m}} \brk3{(b_{\rho+m}-1)e_{(\rho+m/m+1)}+e_{(\rho+m+(m+1))}}\notag\\ \equiv &\sum_{(\rho)\subseteq \Pi_{m-1}}c_{(\rho)} \brk3{\frac{b_{\rho}-1}{b_{\rho}}e_{(\rho/\{m,m+1\})}+ \frac{1}{b_{\rho}+1}e_{(\rho+m/m+1)}+ \frac{1}{b_{\rho}(b_{\rho}+1)}e_{(\rho+m+(m+1))}}.\label{pf:Y:Pm+1} \end{align} Applying the induction to \cref{pf2:Pm,pf2:Cm} respectively, and by \cref{prop:ind:e}, we obtain \begin{align} Y_{P_m}\ind \equiv &\sum_{(\rho)\subseteq \Pi_{m-1}} \frac{c_{(\rho)}}{b_{\rho}} \brk2{(b_{\rho}-1)e_{(\rho/m)}\ind + e_{(\rho+m)}\ind}\notag\\ \equiv &\sum_{(\rho)\subseteq \Pi_{m-1}} \frac{c_{(\rho)}}{b_{\rho}} (b_{\rho}-1) \brk2{e_{(\rho/m/m+1)}-e_{(\rho/\{m,m+1\})}}\notag\\ &\qquad+\sum_{(\rho)\subseteq \Pi_{m-1}} \frac{c_{(\rho)}}{b_{\rho}(b_{\rho}+1)} \brk2{e_{(\rho+m/m+1)}-e_{(\rho+m+(m+1))}}, \quad \text{and}\label{pf:Y:Pm:ind}\\ Y_{C_m}\ind \equiv &\sum_{(\rho)\subseteq \Pi_{m-1}} c_{(\rho)}e_{(\rho+m)}\ind \equiv \sum_{(\rho)\subseteq \Pi_{m-1}}\frac{c_{(\rho)}}{b_{\rho}+1} \brk2{e_{(\rho+m/m+1)}-e_{(\rho+m+(m+1))}}.\label{pf:Y:Cm:ind} \end{align} Therefore, substituting \cref{pf:Y:Cm-1K1:ind,pf:Y:Pm+1,pf:Y:Pm:ind,pf:Y:Cm:ind} into \cref{pf:Y:LTpm1} with $i=m-1$, we obtain \begin{align*} Y_{L(\mathrm{Tp}_{m,1})'} \equiv &Y_{P_{m+1}}-Y_{C_{m-1}\,\uplus K_1}\circ (m, m-1)\ind+Y_{P_m}\ind-2Y_{C_m}\ind\\ \equiv &\sum_{(\rho)\subseteq \Pi_{m-1}}c_{(\rho)} \brk2{ \frac{b_{\rho}-1}{b_{\rho}}e_{(\rho/m/m+1)}+ \frac{2}{b_{\rho}+1}e_{(\rho+m+(m+1))}- \frac{b_{\rho}-1}{b_{\rho}(b_{\rho}+1)}e_{(\rho+m/m+1)} }\\ &-\sum_{(\rho)\subseteq \Pi_{m-1}} \frac{(b_{\rho}-1)c_{(\rho)}}{b_{\rho}} \brk2{e_{(\rho /m/m+1)}-e_{(\rho')}}, \end{align*} which simplifies to 
\cref{eq:Y:LTpm1}. \end{proof}
Now we are in a position to complete the proof of \cref{thm:Y:CCm3}. Applying the induction to \cref{eq:Y:LTpm1}, and by \cref{prop:ind:e}, we can compute \begin{align*} Y_{L(\mathrm{Tp}_{m,1})'}\ind \equiv &\sum_{(\rho)\subseteq \Pi_{m-1}}c_{(\rho)} \brk3{ \frac{2}{b_{\rho}+1}e_{(\rho+m+(m+1))}\ind- \frac{b_{\rho}-1}{b_{\rho}(b_{\rho}+1)}e_{(\rho+m/m+1)}\ind +\frac{b_{\rho}-1}{b_{\rho}}e_{(\rho')}\ind} \\ \equiv &\sum_{(\rho)\subseteq \Pi_{m-1}}c_{(\rho)} \biggl( \frac{2}{(b_{\rho}+1)(b_{\rho}+2)} \brk2{e_{(\rho+m+(m+1)/m+2)} -e_{(\rho+m+(m+1)+(m+2))}}\\ &\qquad\qquad-\frac{b_{\rho}-1}{b_{\rho}(b_{\rho}+1)} \brk2{e_{(\rho+m/m+1/m+2)} -e_{(\rho+m/(m+1)+(m+2))}}\\ &\qquad\qquad+\frac{b_{\rho}-1}{b_{\rho}(b_{\rho}+1)}\brk2{e_{(\rho'/m+2)} -e_{(\rho'+(m+2))}}\biggr)\\ \equiv &\sum_{(\rho)\subseteq \Pi_{m-1}}c_{(\rho)} \biggl( \frac{2}{(b_{\rho}+1)(b_{\rho}+2)}\brk2{e_{(\rho+m+(m+1)/m+2)}-e_{(\rho+m+(m+1)+(m+2))}}\\ &\qquad\qquad+\frac{b_{\rho}-1}{b_{\rho}(b_{\rho}+1)}\brk2{e_{(\rho+m/(m+1)+(m+2))}-e_{(\rho'+(m+2))}}\biggr). \end{align*}
Suppose that \cref{eq:Y:P:m-1} holds. By \cref{prop:Y:DisjointUnion:G:Km,pf2:Cm}, we obtain \begin{equation}\label{Y:CmK1} Y_{C_m\,\uplus K_1} \equiv \sum_{(\rho)\subseteq \Pi_{m-1}} c_{(\rho)}e_{(\rho+m/m+1)}. \end{equation} Applying \cref{prop:DC} to the graph $\mathrm{Tp}_{m,1}$, and by using \cref{Y:CmK1,pf:Y:Cm:ind}, we can deduce that \begin{align*} Y_{\mathrm{Tp}_{m,1}} &= Y_{C_m\,\uplus K_1}-Y_{C_m}\ind\\ &\equiv \sum_{(\rho)\subseteq \Pi_{m-1}} c_{(\rho)}e_{(\rho+m/m+1)} -\sum_{(\rho)\subseteq \Pi_{m-1}}\frac{c_{(\rho)}}{b_{\rho}+1} \brk2{e_{(\rho+m/m+1)}-e_{(\rho+m+(m+1))}}\\ &\equiv \sum_{(\rho)\subseteq \Pi_{m-1}} \frac{c_{(\rho)}}{b_{\rho}+1} \brk2{b_{\rho}e_{(\rho+m/m+1)}+ e_{(\rho+m+(m+1))}}. \end{align*} It follows by \cref{prop:Y:DisjointUnion:G:Km} that \begin{align*} Y_{\mathrm{Tp}_{m,1}\,\uplus K_1} \equiv \sum_{(\rho)\subseteq \Pi_{m-1}} \frac{c_{(\rho)}}{b_{\rho}+1} \brk2{b_{\rho}e_{(\rho+m/m+1/m+2)}+ e_{(\rho+m+(m+1)/m+2)}}. \end{align*} Applying the induction to \cref{Y:CmK1,pf:Y:Cm:ind} respectively, and by \cref{prop:ind:e}, we can infer that \begin{align*} Y_{C_m}\ind\ind \equiv &\sum_{(\rho)\subseteq \Pi_{m-1}} \frac{c_{(\rho)}}{b_{\rho}+1} \brk2{e_{(\rho+m/m+1)}\ind-e_{(\rho+m+(m+1))}\ind}\\ \equiv &\sum_{(\rho)\subseteq \Pi_{m-1}} \frac{c_{(\rho)}}{b_{\rho}+1} \brk2{e_{(\rho+m/m+1/m+2)}-e_{(\rho+m/(m+1)+(m+2))}}\\ &-\sum_{(\rho)\subseteq \Pi_{m-1}} \frac{c_{(\rho)}}{(b_{\rho}+1)(b_{\rho}+2)} \brk2{e_{(\rho+m+(m+1)/m+2)}-e_{(\rho+m+(m+1)+(m+2))}}\quad\text{and} \\ Y_{C_m\,\uplus K_1}\ind \equiv &\sum_{(\rho)\subseteq \Pi_{m-1}} c_{(\rho)}e_{(\rho+m/m+1)}\ind\\ \equiv &\sum_{(\rho)\subseteq \Pi_{m-1}} c_{(\rho)} \brk2{e_{(\rho+m/m+1/m+2)}-e_{(\rho+m/(m+1)+(m+2))}}. \end{align*} Now we can continue calculating $Y_{\mathrm{CC}_{m,3}}$ by \cref{pf:Y:CCm2} as follows.
\begin{align*} Y_{\mathrm{CC}_{m,3}} &\equiv Y_{\mathrm{Tp}_{m,1}\,\uplus K_1}+Y_{C_m}\ind \ind-Y_{C_m\,\uplus K_1}\ind-Y_{L(\mathrm{Tp}_{m,1})'}\ind\\ &\equiv\sum_{(\rho)\subseteq \Pi_{m-1}} \frac{c_{(\rho)}}{b_{\rho}+1} \brk2{b_{\rho}e_{(\rho+m/m+1/m+2)}+ e_{(\rho+m+(m+1)/m+2)}}\\ &\qquad+\sum_{(\rho)\subseteq \Pi_{m-1}} \frac{c_{(\rho)}}{b_{\rho}+1} \brk2{e_{(\rho+m/m+1/m+2)}-e_{(\rho+m/(m+1)+(m+2))}}\\ &\qquad-\sum_{(\rho)\subseteq \Pi_{m-1}} \frac{c_{(\rho)}}{(b_{\rho}+1)(b_{\rho}+2)} \brk2{e_{(\rho+m+(m+1)/m+2)}-e_{(\rho+m+(m+1)+(m+2))}}\\ &\qquad-\sum_{(\rho)\subseteq \Pi_{m-1}} c_{(\rho)} \brk2{e_{(\rho+m/m+1/m+2)}-e_{(\rho+m/(m+1)+(m+2))}}\\ &\qquad-\sum_{(\rho)\subseteq \Pi_{m-1}}c_{(\rho)} \biggl( \frac{2}{(b_{\rho}+1)(b_{\rho}+2)}\brk2{e_{(\rho+m+(m+1)/m+2)}-e_{(\rho+m+(m+1)+(m+2))}}\\ &\qquad\qquad\qquad+\frac{b_{\rho}-1}{b_{\rho}(b_{\rho}+1)}\brk2{e_{(\rho+m/(m+1)+(m+2))}-e_{(\rho'+(m+2))}}\biggr)\\ &\equiv \sum_{(\rho)\subseteq \Pi_{m-1}} \frac{c_{(\rho)}}{b_{\rho}+1} \biggl( \frac{b_{\rho}-1}{b_{\rho}+2} e_{(\rho+m+(m+1)/m+2)}+ \frac{b_{\rho}^2-b_{\rho}+1}{b_{\rho}} e_{(\rho+m/(m+1)+(m+2))}\\ &\qquad\qquad\qquad+ \frac{3}{b_{\rho}+2} e_{(\rho+m+(m+1)+(m+2))} +\frac{b_{\rho}-1}{b_{\rho}} e_{(\rho'+(m+2))}\biggr). \end{align*} Since $b_{\rho}\ge 1$ for all $\rho\in \Pi_{m-1}$, we obtain that $Y_{\mathrm{CC}_{m,3}}$ is $(e)$-positive. \end{proof}
\begin{corollary} The cycle-chord graphs $\mathrm{CC}_{m,3}$ are $e$-positive for any integer $m\ge 3$. \end{corollary} \begin{proof} Immediate by \cref{prop:(e)toe,thm:Y:CCm3}. \end{proof}
\end{document}
\begin{document}
\title{Quantum which-way information and fringe visibility when the detector is entangled with an ancilla} \author{J. Prabhu Tej} \affiliation{Department of Physics, Bangalore University, Bangalore-560 056, India} \author{A. R. Usha Devi}\email{arutth@rediffmail.com} \affiliation{Department of Physics, Bangalore University, Bangalore-560 056, India} \affiliation{Inspire Institute Inc., Alexandria, Virginia, 22303, USA.} \author{H. S. Karthik} \affiliation{Raman Research Institute, Bangalore 560 080, India} \author{Sudha} \affiliation{Department of Physics, Kuvempu University, Shankaraghatta, Shimoga-577 451, India.} \affiliation{Inspire Institute Inc., Alexandria, Virginia, 22303, USA.} \author{A. K. Rajagopal} \affiliation{Inspire Institute Inc., Alexandria, Virginia, 22303, USA.} \affiliation{Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211 019, India.} \affiliation{Institute of Mathematical Sciences, C.I.T Campus, Tharamani, Chennai 600 113, India} \date{\today} \begin{abstract} Quantum mechanical wave-particle duality is quantified in terms of a trade-off relation between the fringe visibility and the which-way distinguishability in an interference experiment. This relation was recently generalized by Banaszek et al. (Nature Communications {\bf 4}, 2594 (2013)) to the case where the particle is equipped with an internal degree of freedom such as spin. Here, we extend the visibility-distinguishability trade-off relation to quantum interference of a particle possessing an internal degree of freedom, when the {\em which-way} detector state is entangled with an ancillary system. We introduce an {\em extended which-way distinguishability} ${\cal D}_E$ and the associated {\em extended fringe visibility} ${\cal V}_E$ satisfying the inequality ${\cal D}^2_E+{\cal V}^2_E\leq 1$ in this scenario.
We illustrate, with the help of three specific examples, that while the which-way information inferred solely from the detector state (without the ancilla) vanishes, the {\em extended distinguishability} retrievable via measurements on the detector-ancilla entangled state is non-zero. Furthermore, in all three examples, the {\em extended visibility} and the {\em generalized visibility} (introduced by Banaszek et al., Nature Communications {\bf 4}, 2594 (2013)) coincide. \end{abstract} \pacs{03.65.Ta, 03.65.Ud} \maketitle
\section{Introduction}
Visibility of fringes in a single quantum particle interference experiment sets limits on the which-way information~\cite{WKW, Sc, GY, En}, thus demonstrating wave-particle duality. Very recently, Banaszek et al.~\cite{Ban} analyzed the trade-off between interference visibility and which-path distinguishability for a quantum particle possessing an internal structure (such as spin or polarization). In this setting, an interaction of the internal spin state with the detector system is shown to offer non-trivial identifications. The internal structure can play a manipulative role in controlling the information about which path in the interferometer is taken by the particle. The trade-off between the amount of which-way information encoded in the detector system and the fringe visibility is captured in terms of a generalized complementarity relation~\cite{Ban}, obtained by extending the notion of fringe visibility to account for the internal spin states as well as their interaction with the detector.
To place things in order, we outline the basic scenario~\cite{En}: a single quantum particle (quanton) $Q$ travels through a two-path interferometer (a double slit or a Mach-Zehnder interferometer), with the two paths being equiprobable. Let us denote the initial state of the quanton and the detector system by $\rho^{\rm (in)}_{QD}=\rho^{\rm (in)}_Q\otimes \rho^{\rm (in)}_D$. When the quanton takes either path $0$ or $1$ of the interferometer arms, the detector state correspondingly gets transformed into \begin{eqnarray} \rho^{(i)}_D&=U^{(i)}_D \rho^{\rm (in)}_{D} U^{(i)\dag}_D,\ \ i=0,1, \end{eqnarray} where $U^{(i)}_D$ denote unitary transformations on the detector states corresponding to the paths of the quanton. (The interaction is constrained so that the quanton paths cannot get transferred into one another. The final detector state after the interaction is then given by $\rho^{\rm (fin)}_D=\frac{1}{2}\, \rho^{(0)}_D+ \frac{1}{2}\,\rho^{(1)}_D$.)
Which-way information is quantified in terms of {\em distinguishability} $0\leq {\cal D}\leq 1$, which is the trace-distance between the detector states $\rho^{(0)}_D$ and $\rho^{(1)}_D$: \begin{eqnarray} {\cal D}&=&\frac{1}{2}\, \vert\vert \rho^{(0)}_D-\rho^{(1)}_D\vert\vert. \end{eqnarray} (Here, $\vert\vert\, A\, \vert\vert={\rm Tr}\,[\sqrt{A^\dag\,A}]$ denotes the trace-norm of $A$).
It may be noted that the distinguishability is the maximum of the difference of the probabilities of correct and incorrect decisions about the paths~\cite{Helstrom}. The paths of the quanton cannot be distinguished when ${\cal D}=0$ (i.e., when $\rho^{(0)}_D\equiv \rho^{(1)}_D$), whereas they are perfectly distinguishable when ${\cal D}=1$ (i.e., when $\rho^{(0)}_D$ and $\rho^{(1)}_D$ are orthogonal). The fringe visibility ${\cal V}$ is given by~\cite{En} \begin{equation} {\cal V}=\left\vert {\rm Tr}[U^{(0)}_D\, \rho^{\rm (in)}_{D}\, U^{(1)\dag}_D] \right\vert. \end{equation} Visibility $0\leq {\cal V}\leq 1$ characterizes the ability of the quanton, distributed between the two paths, to interfere after getting recombined at the exit of the interferometer. Wave-particle duality in the quantum interference experiment is expressed in terms of the trade-off relation~\cite{En} between visibility and distinguishability: \begin{equation} {\cal D}^2+{\cal V}^2\leq 1. \end{equation}
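This trade-off relation is easy to spot-check numerically. The following minimal sketch (an illustration added here, not part of the cited analysis; it assumes NumPy is available) draws a mixed initial detector state and two Haar-random path unitaries, and verifies ${\cal D}^2+{\cal V}^2\leq 1$:

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_unitary(n):
    # QR decomposition of a complex Gaussian matrix gives a Haar-random unitary
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def trace_distance(a, b):
    # (1/2) * trace norm of the Hermitian difference
    return 0.5 * np.abs(np.linalg.eigvalsh(a - b)).sum()

# mixed initial detector state: rotated diagonal ensemble
w = haar_unitary(2)
rho_in = w @ np.diag([0.8, 0.2]) @ w.conj().T

u0, u1 = haar_unitary(2), haar_unitary(2)
rho0 = u0 @ rho_in @ u0.conj().T      # detector state for path 0
rho1 = u1 @ rho_in @ u1.conj().T      # detector state for path 1

D = trace_distance(rho0, rho1)                 # distinguishability
V = abs(np.trace(u0 @ rho_in @ u1.conj().T))   # fringe visibility

assert D**2 + V**2 <= 1 + 1e-12
```

For a pure initial detector state the bound is saturated, since the trace distance between two pure states equals $\sqrt{1-\vert\langle\psi_0\vert\psi_1\rangle\vert^2}$ while the visibility equals $\vert\langle\psi_0\vert\psi_1\rangle\vert$.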
In a more general set up, explored recently~\cite{Ban}, the quanton is equipped with an internal degree of freedom such as spin (characterized by a $d_S$ level quantum system), and it is recognized that the which-way information ${\cal D}$ depends intricately on the initial preparation of the internal spin state, in addition to the specific details of its interaction with the detector. Banaszek et al.~\cite{Ban} demonstrated a stringent bound on distinguishability in terms of a {\em generalized fringe visibility}, which depends on the initial preparation of the spin state as well as on the nature of its interaction with the detector system. The generalized trade-off inequality reads as~\cite{Ban} \begin{equation} \label{dvg} {\cal D}^2+{\cal V}_G^2\leq 1, \end{equation} where the distinguishability ${\cal D}=\frac{1}{2}\, \vert\vert \rho^{(0)}_D- \rho^{(1)}_D\vert\vert$ captures the leak-out of which-way information to the detector (here, $\rho^{(i)}_D=2\, _Q\langle i\vert {\rm Tr}_S[U_{QSD}\, \rho^{\rm (in)}_{QS}\otimes \rho^{\rm (in)}_{D}\, U^\dag_{QSD}]\vert i\rangle_Q,\ \ i=0,1$ are the detector states corresponding to the quanton paths); the unitary interaction $U_{QSD}$ is constrained to be of the form $U_{QSD}=\sum_{i=0,1}\, \vert i\rangle_Q\langle i\vert \otimes U^{(i)}_{SD}$, so that the which-way interaction does not shift the quanton between interferometer arms.
The generalized fringe visibility ${\cal V}_G$ in (\ref{dvg}) is given by~\cite{Ban} \begin{equation} \label{vg} {\cal V}_G= d_S\, \vert\vert (\mathbbm{1}\otimes \Lambda_{01})[(I_S\otimes \sqrt{\rho^{\rm (in)}_{S0}})\vert\Phi_{+}\rangle\langle \Phi_{+}\vert\, (I_S\otimes \sqrt{\rho^{\rm (in)}_{S1}})]\vert\vert, \end{equation} where the unitary interaction of the detector with the spin subsystem corresponds to the action of a quantum channel~\cite{NC} $\Lambda$ on the internal spin state, as explained below. Let $\rho^{\rm (in)}_{QSD}=\rho^{\rm (in)}_{QS}\otimes \rho^{\rm (in)}_D$ denote the initial quanton path-spin (denoted by $QS$) and detector (denoted by $D$) states. Unitary interaction between the detector and the internal spin results in the final state $\rho^{\rm (fin)}_{QSD}=U_{QSD}\, \rho^{\rm (in)}_{QSD}\, U^\dag_{QSD}$. This unitary interaction on the initial state $\rho^{\rm (in)}_{QSD}$ may be viewed as the action of a quantum superoperator $\Lambda$ on the input state $\rho^{\rm (in)}_{QS}$, i.e., $\Lambda(\rho^{\rm (in)}_{QS})={\rm Tr}_D[\rho^{\rm (fin)}_{QSD}]$. Further, as the quanton does not get switched between the interferometer arms $0$ and $1$ as a result of the interaction, the channel $\Lambda$ must be of the form \begin{equation} \Lambda(\vert i\rangle_Q\langle j\vert \otimes \sigma_S)=\vert i\rangle_Q\langle j\vert \otimes \Lambda_{ij}(\sigma_S),\ \ \ \ i,j=0,1, \end{equation} where $\sigma_S$ corresponds to any operator in the spin space.
Further, in (\ref{vg}), the states $\rho^{\rm (in)}_{S0}, \ \rho^{\rm (in)}_{S1}$ are the initial spin states along the paths $0$, $1$ and are given by $\rho^{\rm (in)}_{S0}=2\, _Q\langle 0\vert \rho^{\rm (in)}_{QS}\vert 0\rangle_Q, \ \rho^{\rm (in)}_{S1}=2\, _Q\langle 1\vert \rho^{\rm (in)}_{QS}\vert 1\rangle_Q$; $I_S$ denotes identity operator in the spin space and $\mathbbm{1}$ denotes the identity channel; the state $\vert \Phi_{+}\rangle=\frac{1}{\sqrt{d_S}}\,\displaystyle\sum_{\alpha=0}^{d_S-1} \vert \alpha\rangle_S \, \vert \alpha\rangle_{S'}$ is a maximally entangled state of two replicas of the spin system.
Banaszek et al.~\cite{Ban} analyze specific examples to demonstrate the intricate role played by the internal spin state preparation corresponding to specific interaction channels $\Lambda_{01}$. Specifically, when there is no interaction with the detector, the channel reduces to $\Lambda_{01}=\mathbbm{1}$ and, after simplification of (\ref{vg}), one obtains ${\cal V}_G=\sqrt{{\rm Tr}[\rho^{\rm (in)}_{S0}]\, {\rm Tr}[\rho^{\rm (in)}_{S1}]}=1$, irrespective of the preparation of the initial spin state (i.e., when the detector gains no information about the path, the visibility ${\cal V}_G$ is 1 and the distinguishability ${\cal D}$ is 0). When the interaction channel is given by $\Lambda_{01}(\sigma_S)={\rm Tr}[\sigma_S]\, \Sigma_S$, with $\Sigma_S$ being a fixed unit-trace Hermitian operator, the generalized visibility reduces to the fidelity~\cite{U, J} between the spin states $\rho^{\rm (in)}_{S0},\ \rho^{\rm (in)}_{S1}$, i.e., ${\cal V}_G={\rm Tr}[\sqrt{\sqrt{\rho^{\rm (in)}_{S0}}\, \rho^{\rm (in)}_{S1}\, \sqrt{\rho^{\rm (in)}_{S0}}}]=F(\rho^{\rm (in)}_{S0}, \rho^{\rm (in)}_{S1})$. Thus, the which-way information can be blocked by preparing identical spin states for both the paths, i.e., $\rho^{\rm (in)}_{S0}=\rho^{\rm (in)}_{S1}$, so that the generalized visibility takes its maximum value 1. In yet another interesting example of the interaction channel, defined through $\Lambda_{01}(\sigma_S)=\sigma^T_S/d_S$ (where $\sigma^T_S$ is the transpose of the spin operator $\sigma_S$), the generalized visibility (\ref{vg}) gets simplified to ${\cal V}_G = \vert\vert \sqrt{\rho^{\rm (in)}_{S0}}\vert\vert\, \vert\vert \sqrt{\rho^{\rm (in)}_{S1}}\vert\vert / d_S$. The spin states in both the paths, prepared initially in a completely mixed state $\rho^{\rm (in)}_{S0}= \rho^{\rm (in)}_{S1}=I_S/d_S$, would lead to the generalized fringe visibility ${\cal V}_G=1$ and hence the which-path information to the detector can be blocked.
In the present work, we explore the trade-off between the fringe visibility and the which-way information retrievable when the detector is entangled with an ancilla. Basically, the which-way information corresponds, in particular, to the discrimination of the detector states $\rho^{(0)}_D$ and $\rho^{(1)}_D$, when the quanton chooses path 0 or path 1. One may view the states $\rho^{(0)}_D$ and $\rho^{(1)}_D$ as the outputs of completely positive, trace preserving quantum channels~\cite{note} $\Phi_0$, $\Phi_1$, acting on the input state $\rho^{\rm (in)}_D$. The problem of gaining which-way information via distinguishing the two detector states $\rho^{(0)}_D$ and $\rho^{(1)}_D$ can then be linked with that of discriminating the two quantum channels $\Phi_0$, $\Phi_1$. A useful measure of distinguishability of two quantum channels is given by their trace distance, \begin{equation} \label{chd} \frac{1}{2}\vert\vert \Phi_0-\Phi_1\vert\vert=\max_{\rho} \,\frac{1}{2}\, \vert\vert \Phi_0(\rho)-\Phi_1(\rho)\vert\vert, \end{equation}
where the maximum is taken over all pure input states $\rho$~\cite{War, Sacchi1}. However, an optimal approach to maximize the observable difference between the two channels is to prepare the input state entangled with another auxiliary system, apply one of the channels (chosen randomly) to the input state (with the ancillary subsystem being an idler), and then measure the resulting output bipartite state to identify which of the channels was applied. It has been established that channel inputs which are entangled with an ancilla can offer remarkable improvements in distinguishing some pairs of channels~\cite{War, Sacchi1, kit, chi, Dar, Acin, Sacchi2, SL, Tan, ARU, Piani}. In fact, there are examples of channels~\cite{War} that can be distinguished {\em perfectly} when they are applied to one part of a maximally entangled state, while they are {\em indistinguishable} if the auxiliary system is not employed.
\section{Which-way information using entangled detector-ancilla state}
Our purpose here is to investigate the enhancement of which-way information when the detector is entangled with an ancilla $D'$ of the same dimension as the detector system~\cite{note2}. We define the {\em extended distinguishability} achievable with an entangled detector-ancilla initial state by \begin{eqnarray} {\cal D}_E&=&\frac{1}{2} \vert\vert (\Phi_0\otimes \mathbbm{1})(\rho^{\rm (in)}_{DD'})- (\Phi_1\otimes \mathbbm{1})(\rho^{\rm (in)}_{DD'})\vert\vert \nonumber \\ \label{dg}
&=& \frac{1}{2}\, \vert\vert \rho^{(0)}_{DD'}-\rho^{(1)}_{DD'}\vert\vert \end{eqnarray} where \begin{eqnarray*} &&\rho^{(i)}_{DD'}= (\Phi_i\otimes \mathbbm{1})(\rho^{\rm (in)}_{DD'})\nonumber \\ &&\ \ \ \ = 2\, _Q\langle i\vert {\rm Tr}_S\,[U_{QSD}\otimes I_{D'}\, \rho_{QS}\otimes \rho_{DD'}\, U^\dag_{QSD}\otimes I_{D'}]\vert i\rangle_Q, \end{eqnarray*} denote the final detector-ancilla states corresponding to quanton paths $i=0,1$.
Using the inequality~\cite{Fuchs} $D(\varrho, \tau)\leq \sqrt{1-F^2(\varrho,\, \tau)},$ relating the trace distance $D(\varrho,\, \tau)=\frac{1}{2}\, \vert\vert \varrho-\tau\vert\vert$ and the fidelity $F(\varrho,\, \tau)={\rm Tr}[\sqrt{\sqrt{\varrho}\,\, \tau\,\sqrt{\varrho}}]$ between the two density operators $\varrho$ and $\tau$, we obtain the following relation for the extended distinguishability: \begin{equation} \label{df} {\cal D}_E\leq \sqrt{1-F^2(\rho^{(0)}_{DD'},\rho^{(1)}_{DD'})}. \end{equation} Expressing the extended distinguishability as ${\cal D}_E\leq \sqrt{1-{\cal V}^2_E}$, we define the corresponding {\em extended fringe visibility} by \begin{eqnarray} \label{fv} {\cal V}_E&=&F(\rho^{(0)}_{DD'},\rho^{(1)}_{DD'}).
\end{eqnarray}
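The inequality of Ref.~\cite{Fuchs} underlying this definition can likewise be checked numerically. The sketch below (illustrative only; the Wishart construction of random mixed states is our choice for the check, not part of the paper) verifies $D(\varrho,\tau)\leq\sqrt{1-F^2(\varrho,\tau)}$ on random density matrices:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_density_matrix(n):
    # Wishart construction: G G^dagger, normalized to unit trace
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    m = g @ g.conj().T
    return m / np.trace(m).real

def psd_sqrt(rho):
    # square root of a positive semidefinite Hermitian matrix
    w, v = np.linalg.eigh(rho)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def trace_distance(a, b):
    return 0.5 * np.abs(np.linalg.eigvalsh(a - b)).sum()

def fidelity(a, b):
    # F = || sqrt(a) sqrt(b) ||_1 = sum of singular values
    return np.linalg.svd(psd_sqrt(a) @ psd_sqrt(b), compute_uv=False).sum()

for _ in range(100):
    rho, tau = random_density_matrix(4), random_density_matrix(4)
    D = trace_distance(rho, tau)
    F = fidelity(rho, tau)
    assert D <= np.sqrt(1 - F**2) + 1e-9
```

The bound is tight for pure states, where $D=\sqrt{1-F^2}$ exactly.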
We now proceed to analyze three specific examples of interaction to illustrate that the extended which-way distinguishability ${\cal D}_E$ can assume non-zero values even when the distinguishability ${\cal D}$, inferred by measuring only the detector states, vanishes. We also find that the extended visibility and the generalized visibility~\cite{Ban} coincide in these examples.
\section{Examples}
Let us consider a quanton -- with two dimensional internal spin states $\vert 0\rangle_S, \vert 1\rangle_S$ -- traveling through a Mach-Zehnder interferometer. We consider a pure entangled detector-ancilla input state \begin{equation} \label{ee'}
\vert \Psi\rangle_{DD'}= \frac{1}{\sqrt{2}}\, (\vert 0\rangle_D\, \vert 0\rangle_{D'}+\vert 1\rangle_D\,\vert 1\rangle_{D'}). \end{equation} Here, $\{\vert k\rangle_D,\ k=0,1\}$ and $\{\vert l\rangle_{D'},\ l=0,1\}$ denote orthogonal states of the detector $D$ and the ancilla $D'$. Let $\vert 0\rangle_Q$, $\vert 1\rangle_Q$ denote the state of the quanton in path $0$ and $1$ respectively. Internal spin states in paths $0$, $1$ are denoted by $\vert \psi_0\rangle_S=a_{00}\, \vert 0\rangle_S+ a_{01}\, \vert 1\rangle_S$ and $\vert \psi_1\rangle_S=a_{10}\, \vert 0\rangle_S+ a_{11}\, \vert 1\rangle_S$ (the coefficients $a_{\alpha\, \alpha'}$ obey the normalization condition $\displaystyle\sum_{\alpha '=0,1}\, \vert a_{\alpha\, \alpha'}\vert^2=1, \ \alpha=0,1$).
We consider the unitary interaction on the quanton spin with the detector $D$ along the paths $0$, $1$ to be of the form, \begin{eqnarray} U^{(0)}_{SD}= \left( \begin{array}{llll} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{array} \right)\ \ U^{(1)}_{SD}= \left( \begin{array}{llll} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right) \end{eqnarray} when expressed in the basis $\{\vert 0\rangle_S\, \vert 0\rangle_D,\, \vert 0\rangle_S\, \vert 1\rangle_D,$ $\vert 1\rangle_S\, \vert 0\rangle_D,$
$\vert 1\rangle_S\, \vert 1\rangle_D\}$.
Under this unitary interaction with the detector the initial quanton path-spin state $\vert \zeta\rangle_{QS}=\frac{1}{\sqrt{2}}\, (\vert 0\rangle_Q\, \vert \psi_0\rangle_S+\vert 1\rangle_Q\, \vert \psi_1\rangle_S)$ and the detector-ancilla state $\vert \Psi\rangle_{DD'}$ (see (\ref{ee'})) get transformed to $\vert \zeta\rangle_{QS}\, \vert \Psi\rangle_{DD'} \rightarrow \vert\varphi\rangle_{QSDD'}$, which is given explicitly by, \begin{eqnarray} &&\vert \varphi\rangle_{QSDD'}=\frac{1}{\sqrt{2}}\, \left[(a_{00} \vert 0\rangle_Q\, \vert 0\rangle_S\, + a_{11}\, \vert 1\rangle_Q\, \vert 1\rangle_S)\, \vert \Psi\rangle_{DD'} \right. \nonumber \\ &&\left. \ \ \ \ \ + (a_{01}\, \vert 0\rangle_Q\, \vert 1\rangle_S\, +a_{10} \vert 1\rangle_Q\, \vert 0\rangle_S) \, \vert \Psi_\perp\rangle_{DD'} \right] \end{eqnarray} where $\vert\Psi_\perp\rangle_{DD'}=\frac{1}{\sqrt{2}}\, (\vert 0\rangle_D\, \vert 1\rangle_{D'}+\vert 1\rangle_D\,\vert\, 0\rangle_{D'})$. The quanton path-spin final density operator $\rho^{\rm (fin)}_{QS}~=~{\rm Tr}_{DD'} [\vert \varphi\rangle_{QSDD'}\langle \varphi\vert]$ is then found to be,
\begin{eqnarray} \label{finqs} &&\rho^{\rm (fin)}_{QS} = \frac{1}{2}\left[\vert 0\rangle_Q\langle 0\vert \otimes ( \vert a_{00}\vert^2\, \vert 0\rangle_S\langle 0\vert+ \vert a_{01}\vert^2\, \vert 1\rangle_S\langle 1\vert) \right.\nonumber \\ &&\ \ \ \ +\vert 1\rangle_Q\langle 1\vert \otimes (\vert a_{10}\vert^2\, \vert 0\rangle_S\langle 0\vert+ \vert a_{11}\vert^2\, \vert 1\rangle_S\langle 1\vert) \nonumber \\
&& \ \ \ \ + \vert 0\rangle_Q\langle 1\vert \otimes ( a_{00}\, a^{*}_{11}\, \vert 0\rangle_S\langle 1\vert+ a_{01}\, a^*_{10}\, \, \, \vert 1\rangle_S\langle 0\vert) \nonumber \\ &&\left. \ \ \ \ +\vert 1\rangle_Q\langle 0\vert \otimes ( a^*_{00}\, a_{11}\, \vert 1\rangle_S\langle 0\vert+ a^*_{01}\,a_{10}\, \, \vert 0\rangle_S\langle 1\vert) \right]. \end{eqnarray}
The states $\rho^{(i)}_{DD'},\ i=0,1$ of the detector-ancilla system after the interaction are obtained by $\rho^{(i)}_{DD'}=2\, _Q\langle i\vert {\rm Tr}_{S}[\vert \varphi\rangle_{QSDD'}\langle\varphi\vert]\, \vert i\rangle_Q,$ and are explicitly given (in the basis $\{\vert 0\rangle_D\, \vert 0\rangle_{D'},\, \vert 0\rangle_D\, \vert 1\rangle_{D'}, \vert 1\rangle_D\, \vert 0\rangle_{D'}, \vert 1\rangle_D\, \vert 1\rangle_{D'}\}$) by \begin{eqnarray*} \rho^{(0)}_{DD'}&=& \frac{1}{2}\, \left( \begin{array}{llll} \vert a_{00}\vert^2 & 0 & 0 & \vert a_{00}\vert^2\\ 0 & \vert a_{01}\vert^2 & \vert a_{01}\vert^2 & 0 \\ 0 & \vert a_{01}\vert^2 & \vert a_{01}\vert^2 & 0 \\ \vert a_{00}\vert^2 & 0 & 0 & \vert a_{00}\vert^2 \end{array} \right), \\
\rho^{(1)}_{DD'}&=& \frac{1}{2}\, \left( \begin{array}{llll} \vert a_{11}\vert^2 & 0 & 0 & \vert a_{11}\vert^2\\ 0 & \vert a_{10}\vert^2 & \vert a_{10}\vert^2 & 0 \\ 0 & \vert a_{10}\vert^2 & \vert a_{10}\vert^2 & 0 \\ \vert a_{11}\vert^2 & 0 & 0 & \vert a_{11}\vert^2 \end{array} \right). \end{eqnarray*}
The which-way information retrievable from the {\em extended distinguishability} (see (\ref{dg})) is given by \begin{equation} \label{1de} {\cal D}_E=\bigl\vert\, \vert a_{00}\vert^2-\vert a_{11}\vert^2 \bigr\vert, \end{equation} which vanishes when $\vert a_{00}\vert^2 =\vert a_{11}\vert^2$.
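As a numerical cross-check (an added illustration; the amplitudes $a_{00}=0.8$, $a_{11}=0.5$ are arbitrary sample values), one can rebuild $\rho^{(0)}_{DD'}$ and $\rho^{(1)}_{DD'}$ as mixtures of the Bell-like states $\vert\Psi\rangle_{DD'}$ and $\vert\Psi_\perp\rangle_{DD'}$, equivalent to the matrices displayed above, and confirm the value of ${\cal D}_E$:

```python
import numpy as np

def dm(psi):
    # density matrix of a pure state vector
    return np.outer(psi, psi.conj())

# assumed sample amplitudes (real, for simplicity); a01, a10 fixed by normalization
a00, a11 = 0.8, 0.5
a01, a10 = np.sqrt(1 - a00**2), np.sqrt(1 - a11**2)

psi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
psi_perp = np.array([0, 1, 1, 0]) / np.sqrt(2)   # (|01> + |10>)/sqrt(2)

# detector-ancilla states for the two paths
rho0 = a00**2 * dm(psi_plus) + a01**2 * dm(psi_perp)
rho1 = a11**2 * dm(psi_plus) + a10**2 * dm(psi_perp)

D_E = 0.5 * np.abs(np.linalg.eigvalsh(rho0 - rho1)).sum()

# rho0 and rho1 commute, so the fidelity reduces to sum_i sqrt(p_i * q_i)
V_E = a00 * a11 + a01 * a10

assert np.isclose(D_E, abs(a00**2 - a11**2))
assert D_E**2 + V_E**2 <= 1 + 1e-12
```

Because the two states are simultaneously diagonal in this basis, the fidelity evaluates to $\vert a_{00}\vert\vert a_{11}\vert+\vert a_{01}\vert\vert a_{10}\vert$, in agreement with the closed form given below in the text.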
\begin{widetext}
\begin{figure}
\caption{Contour plots of {\em extended distinguishability} ${\cal D}_E$, {\em extended visibility} ${\cal V}_E$ (see (\ref{1de}) and (\ref{1ve})) and ${\cal D}^2_E+{\cal V}^2_E$, as functions of $|a_{00}|,\ |a_{11}|$, the parameters of the initial spin preparation. It is clearly seen that the which-way distinguishability ${\cal D}_E$ and fringe visibility ${\cal V}_E$ obey the duality relation ${\cal D}^2_E+ {\cal V}^2_E\leq 1$.}
\end{figure} \end{widetext} The extended visibility ${\cal V}_E$ (see (\ref{fv})) simplifies to \begin{equation} \label{1ve} {\cal V}_E=\vert a_{00}\vert\, \vert a_{11}\vert\, +\, \sqrt{1- \vert a_{00}\vert^2}\, \sqrt{1- \vert a_{11}\vert^2}. \end{equation}
In Fig.~1 we have plotted the {\em extended which-way distinguishability} ${\cal D}_E$, the {\em extended fringe visibility} ${\cal V}_E$ and ${\cal D}^2_E+{\cal V}^2_E$ as functions of the initial spin state parameters $\vert a_{00}\vert,\ \vert a_{11}\vert$. It is clearly seen that the extended distinguishability and visibility are complementary to each other and obey the trade-off relation ${\cal D}^2_E+{\cal V}^2_E\leq 1$.
In the absence of the ancilla $D'$, we find that the detector states, $\rho^{(0)}_{D}={\rm Tr}_{D'}\, [\rho^{(0)}_{DD'}]=\frac{1}{2}\left(\begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right)$ and $\rho^{(1)}_{D}={\rm Tr}_{D'}\, [\rho^{(1)}_{DD'}]=\frac{1}{2}\left(\begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right)$, are perfectly indistinguishable, leading to which-way distinguishability ${\cal D}=0$ when the ancilla is not taken into consideration.
We also evaluate the generalized fringe visibility introduced in Ref.~\cite{Ban} (see (\ref{vg})) in this example. From the final state density operator $\rho^{\rm (fin)}_{QS}$ (see (\ref{finqs})) of the quanton path-spin system, we identify that the action of the quantum channel $\Lambda_{01}$ on spin states is to kill the diagonal elements of the spin operator while leaving the off-diagonal elements unchanged. More explicitly, \begin{equation} \Lambda_{01}(\vert \alpha\rangle_S\langle \alpha '\vert)=\left\{ \begin{array}{ll} 0, & {\rm if}\ \alpha=\alpha ', \\ \vert \alpha\rangle_S\langle \alpha '\vert, & {\rm if}\ \alpha \neq \alpha ', \end{array}\right. \end{equation} where $\alpha, \alpha '=0,1$. We simplify (\ref{vg}) to obtain the generalized fringe visibility \begin{equation} {\cal V}_G=\vert a_{00}\vert\, \vert a_{11}\vert\, +\, \sqrt{1- \vert a_{00}\vert^2}\, \sqrt{1- \vert a_{11}\vert^2}, \end{equation} which matches exactly with the extended visibility ${\cal V}_E$ of (\ref{1ve}).
It may be noted that even though the which-way distinguishability ${\cal D}$ (obtained when the ancilla degree of freedom is ignored) is zero, the generalized visibility ${\cal V}_G$ does not take its maximum value 1 in this example. Variation of ${\cal V}_G$ does indeed reveal a leakage of which-way information. However, the trade-off relation turns out to be that between the visibility and the which-way information captured by the extended distinguishability ${\cal D}_E$ -- and not the one assimilated through ${\cal D}$.
Next, we consider the initial quanton path-spin system to be prepared in the state $\vert\zeta_{QS}\rangle^{\rm (in)}= \frac{1}{\sqrt{2}}\,[\vert 0\rangle_Q +\vert 1\rangle_Q]\otimes \vert 0\rangle_S$, while the detector-ancilla system is initially prepared in the maximally entangled state $\vert \Psi\rangle_{DD'}$ given by (\ref{ee'}). The unitary interaction between the quanton spin and the detector is chosen to be \begin{eqnarray} U^{(0)}_{SD}= \left( \begin{array}{llll} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{array} \right)\ \ U^{(1)}_{SD}= \left( \begin{array}{llll} 0 & 0 & 0 & 1 \\ 0 & 0 & e^{-i\phi} & 0 \\ 0 & e^{i\phi} & 0 & 0 \\ 1 & 0 & 0 & 0 \end{array} \right), \end{eqnarray} expressed in the spin-detector basis $\{\vert 0\rangle_S\, \vert 0\rangle_D,\, \vert 0\rangle_S\, \vert 1\rangle_D,$ $\vert 1\rangle_S\, \vert 0\rangle_D,$ $\vert 1\rangle_S\, \vert 1\rangle_D\}$.
After the interaction, the detector-ancilla states associated with the paths $0$, $1$ of the quanton are given by, \begin{equation} \rho^{(0)}_{DD'}=\frac{1}{2}\left(\begin{array}{llll} 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0
\end{array}\right),\, \rho^{(1)}_{DD'}=\frac{1}{2}\left(\begin{array}{llll} 0 & 0 & 0 & 0 \\ 0 & 1 & e^{-i\phi} & 0 \\ 0 & e^{i\phi} & 1 & 0 \\ 0 & 0 & 0 & 0
\end{array}\right). \end{equation} The which-way information extracted via the extended distinguishability is ${\cal D}_E=\vert \sin(\phi/2)\vert$, while the extended fringe visibility is identified to be exactly complementary~\cite{note3}, i.e., ${\cal V}_E=\vert \cos(\phi/2)\vert$. It is easy to see that the detector states $\rho^{(i)}_{D}={\rm Tr}_{D'}\,[\rho^{(i)}_{DD'}]=I_D/2,\ \ i=0,1$ are indistinguishable.
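This value of ${\cal D}_E$ can be cross-checked numerically: the extended distinguishability is half the trace norm of the difference of the two detector-ancilla states. A minimal sketch (using NumPy; the matrices are the $\rho^{(0)}_{DD'}$ and $\rho^{(1)}_{DD'}$ displayed above, and this check is illustrative, not part of the derivation):

```python
import numpy as np

def trace_distance(rho0, rho1):
    """Half the trace norm of (rho0 - rho1), computed from the
    eigenvalues of the Hermitian difference matrix."""
    return 0.5 * np.abs(np.linalg.eigvalsh(rho0 - rho1)).sum()

phi = 0.7
rho0 = 0.5 * np.array([[0, 0, 0, 0],
                       [0, 1, 1, 0],
                       [0, 1, 1, 0],
                       [0, 0, 0, 0]], dtype=complex)
rho1 = 0.5 * np.array([[0, 0, 0, 0],
                       [0, 1, np.exp(-1j * phi), 0],
                       [0, np.exp(1j * phi), 1, 0],
                       [0, 0, 0, 0]], dtype=complex)

D_E = trace_distance(rho0, rho1)
print(abs(D_E - abs(np.sin(phi / 2))))  # ~0: matches D_E = |sin(phi/2)|
```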
In order to evaluate the generalized visibility ${\cal V}_G$, we first identify the action of the spin channel $\Lambda_{01}$: \begin{widetext} \begin{eqnarray*} \Lambda_{01}(\vert 0\rangle_S\langle 0\vert)&=&\left(\frac{1+e^{i\phi}}{2}\right)\, \vert 1\rangle_S\langle 1\vert,\ \ \Lambda_{01}(\vert 1\rangle_S\langle 1\vert)=\left(\frac{1+e^{-i\phi}}{2}\right)\, \vert 0\rangle_S\langle 0\vert, \\ \Lambda_{01}(\vert 0\rangle_S\langle 1\vert)&=&\left(\frac{1+e^{-i\phi}}{2}\right)\, \vert 1\rangle_S\langle 0\vert, \ \ \Lambda_{01}(\vert 1\rangle_S\langle 0\vert)=\left(\frac{1+e^{i\phi}}{2}\right)\, \vert 0\rangle_S\langle 1\vert. \end{eqnarray*} \end{widetext} We thus find that the generalized visibility (\ref{fv}) reduces to ${\cal V}_G=\vert \cos(\phi/2)\vert$, which is equal to the extended visibility ${\cal V}_E$ in this example too.
Even here, the variation of ${\cal V}_G$ would indicate a leakage of which-path information that is, however, not retrievable from the detector alone, thus bringing out the significance of detection using the extended detector-ancilla system.
In the third example, we consider the unitary interaction between the quanton and the detector to be \begin{eqnarray} U_{QSD}&=&\sum_{i=0,1}\, \vert i\rangle_Q\langle i\vert \otimes U^{(i)}_{SD},\ U^{(i)}_{SD}=e^{-i\frac{\theta_i}{2}\, \sigma_z\otimes \sigma_x} \end{eqnarray} where $\sigma_z\otimes \sigma_x =\left(\vert 0\rangle_S\langle 0\vert -\vert 1\rangle_S\langle 1\vert\right)\otimes \left(\vert 0\rangle_D\langle 1\vert + \vert 1\rangle_D\langle 0\vert\right)$.
With an initial quanton path-spin state $\vert\, \zeta\rangle_{QS}=\left(\frac{\vert 0\rangle_Q+\vert 1\rangle_Q}{\sqrt{2}}\right)\, \left(\frac{\vert 0\rangle_S+\vert 1\rangle_S}{\sqrt{2}}\right)$ and the maximally entangled detector-ancilla state (\ref{ee'}), we find that \begin{equation} \rho^{(i)}_{DD'}=\frac{1}{2}\left(\begin{array}{llll} \cos^2\left(\frac{\theta_i}{2}\right) & 0 & 0 & \cos^2\left(\frac{\theta_i}{2}\right) \\ 0 & \sin^2\left(\frac{\theta_i}{2}\right) & \sin^2\left(\frac{\theta_i}{2}\right) & 0 \\ 0 & \sin^2\left(\frac{\theta_i}{2}\right) & \sin^2\left(\frac{\theta_i}{2}\right) & 0 \\ \cos^2\left(\frac{\theta_i}{2}\right) & 0 & 0 & \cos^2\left(\frac{\theta_i}{2}\right)
\end{array}\right) \end{equation} are the detector-ancilla final states, corresponding to the quanton paths $i=0,1$. Discrimination of these detector-ancilla states results in ${\cal D}_E=\frac{1}{2}\, \vert\vert \rho^{(0)}_{DD'}-\rho^{(1)}_{DD'} \vert\vert=\vert \cos^2\left(\frac{\theta_0}{2}\right)- \cos^2\left(\frac{\theta_1}{2}\right)\vert $. The extended fringe visibility is found to be ${\cal V}_E=\left\vert \cos\left(\frac{\theta_0}{2}\right)\, \cos\left(\frac{\theta_1}{2}\right)\right\vert + \left\vert \sin\left(\frac{\theta_0}{2}\right)\, \sin\left(\frac{\theta_1}{2}\right)\right\vert$. In this example too, the detector states $\rho^{(i)}_{D}={\rm Tr}_{D'}[\rho^{(i)}_{DD'}]=I_D/2$ after the interaction are indistinguishable and they are incapable of retrieving the which-way information.
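As in the previous example, the stated value of ${\cal D}_E$ can be verified numerically from the detector-ancilla states displayed above (a minimal NumPy sketch, again purely as a cross-check):

```python
import numpy as np

def rho_DD(theta):
    """Detector-ancilla state for one path, as displayed above."""
    c, s = np.cos(theta / 2) ** 2, np.sin(theta / 2) ** 2
    return 0.5 * np.array([[c, 0, 0, c],
                           [0, s, s, 0],
                           [0, s, s, 0],
                           [c, 0, 0, c]])

theta0, theta1 = 0.9, 2.1
diff = rho_DD(theta0) - rho_DD(theta1)
D_E = 0.5 * np.abs(np.linalg.eigvalsh(diff)).sum()   # half the trace norm
expected = abs(np.cos(theta0 / 2) ** 2 - np.cos(theta1 / 2) ** 2)
print(abs(D_E - expected))  # ~0
```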
We find that the generalized visibility reduces to the extended visibility in this example also. In order to see this, we first identify the operation of the spin interaction channel $\Lambda_{01}$: \begin{eqnarray*} \Lambda_{01}(\vert 0\rangle_S\langle 0\vert)&=&\cos\left(\frac{\theta_0-\theta_1}{2}\right)\, \vert 0\rangle_S\langle 0\vert, \\ \Lambda_{01}(\vert 0\rangle_S\langle 1\vert)&=&\cos\left(\frac{\theta_0+\theta_1}{2}\right)\, \vert 0\rangle_S\langle 1\vert, \\ \Lambda_{01}(\vert 1\rangle_S\langle 0\vert)&=&\cos\left(\frac{\theta_0+\theta_1}{2}\right)\, \vert 1\rangle_S\langle 0\vert, \\ \Lambda_{01}(\vert 1\rangle_S\langle 1\vert)&=&\cos\left(\frac{\theta_0-\theta_1}{2}\right)\, \vert 1\rangle_S\langle 1\vert. \end{eqnarray*}
After simplification of (\ref{vg}), we obtain the generalized visibility ${\cal V}_G=\left\vert \cos\left(\frac{\theta_0}{2}\right)\, \cos\left(\frac{\theta_1}{2}\right)\right\vert + \left\vert \sin\left(\frac{\theta_0}{2}\right)\, \sin\left(\frac{\theta_1}{2}\right)\right\vert$, which agrees perfectly with the extended visibility.
It is pertinent to point out here that in both the first and the third examples the trade-off between the which-way information and the visibilities turns out to be identical (as is evident from the parametrization $\vert a_{00}\vert =\left\vert \cos\left(\frac{\theta_0}{2}\right)\right\vert$ and $\vert a_{11}\vert =\left\vert \cos\left(\frac{\theta_1}{2}\right)\right\vert$). While in the first example the trade-off is realized for different initial spin preparations (by varying the initial spin parameters $\vert a_{00}\vert$ and $\vert a_{11}\vert$), in the third example the analogous trade-off is brought out by varying the parameters of the interaction channel.
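The claimed identity of the two trade-offs under this parametrization reduces to the elementary relation $\sqrt{1-\cos^2(\theta/2)}=\vert\sin(\theta/2)\vert$; a quick numerical sanity check of the two expressions for the visibility:

```python
import numpy as np

theta0, theta1 = 1.1, 2.4
a00, a11 = abs(np.cos(theta0 / 2)), abs(np.cos(theta1 / 2))
# first example: V_E written via the initial spin parameters |a_00|, |a_11|
v_first = a00 * a11 + np.sqrt(1 - a00**2) * np.sqrt(1 - a11**2)
# third example: V_E written via the interaction parameters theta_0, theta_1
v_third = (abs(np.cos(theta0 / 2) * np.cos(theta1 / 2))
           + abs(np.sin(theta0 / 2) * np.sin(theta1 / 2)))
print(abs(v_first - v_third))  # ~0: the two expressions coincide
```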
\section {Conclusions}
In a recent work on the interference of a quanton in a two-path interferometer, Banaszek et al.~\cite{Ban} showed that control over the leakage of which-way information can be achieved by an appropriate initial preparation of the internal spin state of the quanton. In this work, we extended this analysis and showed that, with the help of an entangled detector-ancilla system, the amount of which-way information can be enhanced beyond that discerned from the detector alone. Our analysis generalizes the trade-off relation between the which-way distinguishability and the fringe visibility to the case where the detector is equipped with an ancillary degree of freedom. We considered three different examples of interaction between the quanton and the detector to demonstrate that the extended which-way distinguishability ${\cal D}_E$ can assume non-zero values even when the distinguishability ${\cal D}$ inferred from the detector alone vanishes. At the same time, we find that the extended visibility ${\cal V}_E$ and the generalized visibility~\cite{Ban} ${\cal V}_G$ agree identically with each other in these examples. The illustrative examples analyzed here reveal that there are instances where the detector fails to gain any information on the which-way distinguishability, yet the corresponding generalized visibility does not attain its maximum value 1. Indeed, it is striking that the generalized visibility $0\leq {\cal V}_G\leq 1$ varies even when the which-way information transferred to the detector is completely erased. These examples bring forth the need for our extended analysis to explore how a leakage of which-way information is captured by the entangled detector-ancilla system, but not by the detector state alone. However, the agreement of the extended visibility with the generalized visibility in these specific examples appears to be coincidental.
It would be of interest to investigate how these two fringe visibilities are related to each other in general. Furthermore, we leave open the following question for further exploration: {\em ``Is it possible to represent the extended fringe visibility ${\cal V}_E$ in terms of quantities that could be controlled at the stage of preparation of the quanton internal spin state, so that one can prevent the which-way information from leaking out to the combined detector-ancilla system?''}
\begin{center}
{\bf ACKNOWLEDGEMENTS} \end{center} We thank Konrad Banaszek for insightful discussions. J.~P. acknowledges support from UGC-BSR, Government of India.
\end{document}
\begin{document}
\title{Postnikov--Stanley Linial arrangement conjecture} \author{Shigetaro Tamura \thanks{Department of Mathematics, Faculty of Science, Hokkaido University, Kita 10, Nishi 8, Kita-ku, Sapporo 060-0810, JAPAN. E-mail: tamurashigetaro@gmail.com}} \maketitle \begin{abstract} The characteristic polynomial is an important invariant in the theory of hyperplane arrangements. For the Linial arrangement of any irreducible root system, Postnikov and Stanley conjectured that all roots of the characteristic polynomial have the same real part. In relation to this conjecture, Yoshinaga obtained an explicit relationship between the characteristic quasi-polynomial and the Ehrhart quasi-polynomial for the fundamental alcove. In this paper, we evaluate Yoshinaga's explicit formula by decomposing the Ehrhart quasi-polynomial into several quasi-polynomials and by introducing a modified shift operator, and obtain new formulas for the characteristic quasi-polynomial of the Linial arrangement. In particular, when the parameter of the Linial arrangement is relatively prime to the period of the Ehrhart quasi-polynomial, we prove the Postnikov--Stanley Linial arrangement conjecture. This generalizes some of the results for the root systems of classical types that have been proved by Postnikov--Stanley and Athanasiadis. For the remaining cases, we verify the conjecture for the exceptional root systems using a computational approach.
\noindent {\textbf{Keywords:} Hyperplane arrangement, Linial arrangement, Characteristic quasi-polynomial, Quasi-polynomial, Ehrhart quasi-polynomial, Eulerian polynomial.} \end{abstract} \tableofcontents \section{Introduction} Let $\mathcal{A}$ be a hyperplane arrangement, that is, a finite collection of affine hyperplanes in a vector space $V$. One of the most important invariants of $\mathcal{A}$ is the characteristic polynomial $\chi(\mathcal{A},t)$. Let $\Phi$ be an irreducible root system with Coxeter number $h$. Let $a,b\in \mathbb{Z}$ be integers with $a\leqq b$. Let us denote by $\mathcal{A}_{\Phi}^{[a,b]}$ the truncated affine Weyl arrangement. In particular, $\mathcal{A}_{\Phi}^{[1,n]}$ is called the Linial arrangement. Postnikov and Stanley \cite{Postnikov-Stanley} conjectured that every root $z \in \mathbb{C}$ of the equation $\chi(\mathcal{A}_{\Phi}^{[1,n]},t)=0$ satisfies $\operatorname{Re} z=\frac{n h}{2}$ (see \S \ref{section:Conjecture} for details).\par Postnikov and Stanley proved this conjecture for $\Phi=A_{\ell}$ \cite{Postnikov-Stanley}. Subsequently, Athanasiadis gave proofs for $\Phi=A_{\ell}$, $B_{\ell}$, $C_{\ell}$, and $D_{\ell}$ using a combinatorial method \cite{Athanasiadis}. Yoshinaga approached the conjecture through the characteristic quasi-polynomial $\chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)$, which was introduced by Kamiya et al.~\cite{Kamiya-Takemura-Terao_0}. The characteristic quasi-polynomial $\chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)$ has the important property that when $t$ is relatively prime to the period of $\chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)$, the formula $\chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)=\chi(\mathcal{A}_{\Phi}^{[1,n]},t)$ holds \cite[Theorem 2.1]{Athanasiadis}. Yoshinaga proved the following formula \cite{Yoshinaga_1} (see Theorem \ref{characteristic_quasi_poly}):
\begin{equation}\label{intro_ch} \chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)=\mathrm{R}_{\Phi}(\mathrm{S}^{n+1})\mathrm{L}_{\Phi}(t), \end{equation} where $\mathrm{S}$ is the shift operator for the variable $t$ (see \S \ref{Shift congruences}), $\mathrm{L}_{\Phi}(t)$ is the Ehrhart quasi-polynomial for the closed fundamental alcove of type $\Phi$ (see \S \ref{section:Eh_quasi}), and $\mathrm{R}_{\Phi}(t)$ is the generalized Eulerian polynomial of type $\Phi$, which was introduced by Lam and Postnikov \cite{Lam-Postnikov} (see \S \ref{section:generalized Eulerian}). By using this formula, Yoshinaga verified several cases of the conjecture (see \S \ref{section:Conjecture}).
\subsection{Main results} Let $\rho$ be the period of the characteristic quasi-polynomial $\chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)$. Let $m$ be an integer with $n+1=m\cdot \mathrm{gcd}(n+1,\rho)$. Let $c_0,\cdots, c_{\ell}$ be integers that are coefficients of each simple root when the highest root is expressed as a linear combination of simple roots in an irreducible root system $\Phi$ of rank $\ell$ (see \S \ref{root system}). By calculating the right-hand side of (\ref{intro_ch}), we prove the formula \begin{equation}\label{intro_} \chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)=(\prod_{j=0}^{\ell}\frac{1}{m}[m]_{\mathrm{S}^{c_j\cdot \mathrm{gcd}(n+1,\rho)}}) \chi _{quasi}(\mathcal{A}_{\Phi}^{[1,\mathrm{gcd}(n+1,\rho)-1]},t), \end{equation} where $\cyc{m}{t}=\frac{1-t^{m}}{1-t}=1+t+\cdots+t^{m-1}$ (see Theorem \ref{corollary_1}). Furthermore, the characteristic quasi-polynomial $\chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)$ has the period $\mathrm{gcd}(n+1,\rho)$. In particular, when the parameter $n+1$ is relatively prime to the period $\rho$ of the Ehrhart quasi-polynomial $\mathrm{L}_{\Phi}(t)$, that is, $\mathrm{gcd}(n+1,\rho)=1$, we have \begin{equation}\label{intro_gcd} \chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)=(\prod_{j=0}^{\ell}\frac{1}{n+1}[n+1]_{\mathrm{S}^{c_j}})t^{\ell} \end{equation} from (\ref{intro_}) (see Theorem \ref{gcd_prime}). In this case, from (\ref{intro_gcd}) and the technique used by Postnikov and Stanley in \cite{Postnikov-Stanley} (see Lemma \ref{Postnikov-Stanley's lemma}), we see that the conjecture holds. 
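As an illustration of (\ref{intro_gcd}), consider type $A_{\ell}$, assuming the standard facts that all the coefficients $c_j$ equal $1$ there and that the Ehrhart quasi-polynomial of the fundamental alcove is an honest polynomial, so that $\mathrm{gcd}(n+1,\rho)=1$ holds for every $n$. Formula (\ref{intro_gcd}) then reads $\chi(\mathcal{A}_{A_{\ell}}^{[1,n]},t)=(n+1)^{-(\ell+1)}\sum_{k_0,\cdots,k_{\ell}=0}^{n}(t-k_0-\cdots-k_{\ell})^{\ell}$, and one can check numerically that all roots share the real part $\frac{nh}{2}$ with $h=\ell+1$ (a small sketch, not part of the proof):

```python
import itertools
import numpy as np
from numpy.polynomial import polynomial as P

def linial_char_poly_A(l, n):
    """Coefficients (constant term first) of chi(A_{A_l}^{[1,n]}, t),
    computed from (intro_gcd) with all c_j = 1."""
    coeffs = np.zeros(l + 1)
    for ks in itertools.product(range(n + 1), repeat=l + 1):
        coeffs += P.polypow([-float(sum(ks)), 1.0], l)  # (t - sum k_j)^l
    return coeffs / (n + 1) ** (l + 1)

l, n = 2, 2                                   # A_2, Linial parameter n = 2
roots = np.roots(linial_char_poly_A(l, n)[::-1])
print(sorted(r.real for r in roots))          # real parts all ~ n*h/2 = 3.0
```

For $\ell=2$, $n=2$ the sketch produces $\chi(t)=t^2-6t+11$, whose roots $3\pm\sqrt{2}\,i$ indeed lie on the line $\operatorname{Re} z=3$.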
In addition, we prove the formula for the characteristic polynomial \begin{equation}\label{intro_rad} \chi(\mathcal{A}_{\Phi}^{[1,\mathrm{gcd}(n+1,\rho)-1]},t)=(\prod_{j=0}^{\ell}\frac{1}{\eta}[\eta]_{\mathrm{S}^{c_j\cdot \mathrm{gcd}(n+1,\mathrm{rad}(\rho))}}) \chi(\mathcal{A}_{\Phi}^{[1,\mathrm{gcd}(n+1,\mathrm{rad}(\rho))-1]},t), \end{equation} where $\eta=\frac{\mathrm{gcd}(n+1,\rho)}{\mathrm{gcd}(n+1,\mathrm{rad}(\rho))}$ (see Theorem \ref{Ch_rad}). From (\ref{intro_}) and (\ref{intro_rad}), if all roots of the characteristic polynomial $\chi(\mathcal{A}_{\Phi}^{[1,\mathrm{gcd}(n+1,\mathrm{rad}(\rho))-1]},t)$ have the same real part $\frac{(\mathrm{gcd}(n+1,\mathrm{rad}(\rho))-1)h}{2}$, then the same method as for (\ref{intro_gcd}) can be used to show that $\chi(\mathcal{A}_{\Phi}^{[1,n]},t)$ satisfies the conjecture. We check the conjecture for $\Phi \in \{E_6,E_7,E_8,F_4\}$ by computing the real parts of all roots of $\chi(\mathcal{A}_{\Phi}^{[1,\mathrm{gcd}(n+1,\rho)-1]},t)$ using a computational approach.
\subsection{Outline of the proof} To prove the conjecture, we transform the right-hand side of (\ref{intro_ch}) into a suitable form. One of the difficulties in this transformation is that the shift operator $\mathrm{S}$ acts on a quasi-polynomial, not a polynomial. To overcome this difficulty, we introduce the operator $\overline{\mathrm{S}}$, which acts on a constituent of a quasi-polynomial (Definition \ref{shift_bar}). Additionally, we define a quasi-polynomial $\tilde{f}^{i}(t)$ from a quasi-polynomial $f(t)$ (Definition \ref{quasi_av}). The quasi-polynomial $\tilde{f}^{i}(t)$ is an average of the constituents of the quasi-polynomial $f(t)$, and its minimal period is a divisor of the integer $i$. Using a generalization of Lemma 2.2 in \cite{Athanasiadis} (Lemma \ref{Athanasiadis's lemma_2}), for a quasi-polynomial $f(t)$ of degree $\ell$ and period $\rho$, we obtain the formula \begin{equation}\label{intro_shift_bar} [c]_{\mathrm{S}^m}^{\ell+1}g(\mathrm{S}^m)f(t)=[c]_{\overline{\mathrm{S}}^m}^{\ell+1}g(\overline{\mathrm{S}}^m)\tilde{f}^{\mathrm{gcd}(m,\rho)}(t), \end{equation} where $g(\mathrm{S})$ denotes the operator obtained by substituting the shift operator $\mathrm{S}$ into a polynomial $g(t)$ (Proposition \ref{averaging}).
The Ehrhart quasi-polynomial $\mathrm{L}_{\Phi}(t)$ decomposes into several quasi-polynomials whose degrees and periods are at most the degree and period of $\mathrm{L}_{\Phi}(t)$: \begin{equation}\label{intro_Eh} \mathrm{L}_{\Phi}(t)=\sum_{k\in \{\hat{c}_0,\cdots, \hat{c}_{\hat{\ell}}\}}\Ehf{k}{(\ell_{k})}(t), \end{equation} where $\hat{c}_0,\cdots,\hat{c}_{\hat{\ell}}$ are all the different integers in $c_0,\cdots, c_{\ell}$, $\ell_{\hat{c}_k}+1$ is the number of multiples of $\hat{c}_k$ in $c_0,\cdots, c_{\ell}$ (see \S \ref{section:Eh_quasi}), and $\Ehf{k}{(\ell_{k})}(t)$ is a quasi-polynomial of degree $\ell_{k}$ with period $k$ (Proposition \ref{Eh_deco}). This decomposition matches well with the following decomposition of generalized Eulerian polynomials, which was proved in \cite{Lam-Postnikov}: \begin{equation}\label{intro_Eu} \mathrm{R}_{\Phi}(t)=\cyc{c_0}{t}\cyc{c_1}{t}\cdots\cyc{c_{\ell}}{t}\mathrm{A}_{\ell}(t), \end{equation} where $\mathrm{A}_{\ell}(t)$ is the Eulerian polynomial (Theorem \ref{Lam-Postnikov}). The right-hand side of (\ref{intro_Eu}) is divisible by $\cyc{\hat{c}_k}{t}^{\ell_{\hat{c}_k}+1}$. Hence, we can apply (\ref{intro_shift_bar}) to each $\Ehf{\hat{c}_k}{(\ell_{\hat{c}_k})}(t)$ of (\ref{intro_Eh}). From the above argument, we have the formula \begin{equation}\label{intro_R} \mathrm{R}_{\Phi}(\mathrm{S}^{n+1})\mathrm{L}_{\Phi}(t)=\mathrm{R}_{\Phi}(\overline{\mathrm{S}}^{n+1})\tilde{\mathrm{L}}^{\mathrm{gcd}(n+1,\rho)}_{\Phi}(t), \end{equation} (see Theorem \ref{main theorem}). We can think of the operator $\mathrm{R}_{\Phi}(\overline{\mathrm{S}}^{n+1})$ as acting on a polynomial, or more precisely, on a constituent of the quasi-polynomial $\tilde{\mathrm{L}}^{\mathrm{gcd}(n+1,\rho)}_{\Phi}(t)$. Thus, we can easily calculate the right-hand side of (\ref{intro_R}) and prove (\ref{intro_}).\par The remainder of this paper is organized as follows. 
Section \ref{section:Pre} contains some preliminaries required to prove the main results. First, we prove a generalization of Athanasiadis' Lemma \cite{Athanasiadis} in \S \ref{Shift congruences}. In \S \ref{section:Pre_quasi}, we introduce the operator $\overline{\mathrm{S}}$ and a quasi-polynomial $\tilde{f}^{i}(t)$, and prove (\ref{intro_shift_bar}). In \S \ref{sec:Pre_deco}, we prove a decomposition of a quasi-polynomial using a generating function. In \S \ref{root system}, \S \ref{section:Eh_quasi}, and \S \ref{section:generalized Eulerian}, we prepare several concepts required to explain Yoshinaga's results \cite{Yoshinaga_1} for the characteristic quasi-polynomial $\chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)$, which is explained together with the Postnikov--Stanley Linial arrangement conjecture in \S \ref{section:Conjecture}. The explanations in \S \ref{section:shift operator}, \S \ref{root system}, \S \ref{section:Eh_quasi}, \S \ref{section:generalized Eulerian}, and \S \ref{section:Conjecture} are based on \cite{Yoshinaga_1}. We prove (\ref{intro_}), (\ref{intro_gcd}), and (\ref{intro_rad}) in \S \ref{section:main_formula}. We present a table of the characteristic polynomial $\chi(\mathcal{A}_{\Phi}^{[1,n]},t)$ and the real parts of its roots for $\Phi \in \{E_6,E_7,E_8,F_4\}$ in \S \ref{section:main_check}.
\section{Preliminaries}\label{section:Pre} \subsection{Shift operator and congruence}\label{Shift congruences} \subsubsection{Shift operator}\label{section:shift operator} Let $f:\mathbb{Z}\rightarrow\mathbb{C}$ be a partial function, that is, a function defined on a subset of $\mathbb{Z}$. Define the action of the shift operator by \begin{equation} \mathrm{S} f(t)=f(t-1). \end{equation} More generally, for a polynomial $g(\mathrm{S})=\sum_{k}a_k\mathrm{S}^k$ in $\mathrm{S}$, the action is defined by \begin{equation} g(\mathrm{S})f(t)=\sum_{k}a_kf(t-k). \end{equation} \begin{proposition}\label{cong}(\cite{Yoshinaga_1}, Proposition 2.8) Let $g(\mathrm{S}) \in \mathbb{C}[\mathrm{S}]$ and $f(t) \in \mathbb{C}[t]$. Suppose $\deg f=\ell$. Then, $g(\mathrm{S})f(t)=0$ if and only if $(1-\mathrm{S})^{\ell+1}$ divides $g(\mathrm{S})$. \end{proposition} \begin{remark} Note that, because $(1-\mathrm{S})f(t)=f(t)-f(t-1)$ is the difference operator, $\deg(1-\mathrm{S})f=\deg f-1$. Hence, inductively, $(1-\mathrm{S})^{\deg f +1}f(t)=0$. Proposition \ref{cong} implies that if polynomials $g_1(\mathrm{S})$ and $g_2(\mathrm{S})$ satisfy the congruence \begin{equation}\label{congruence} g_1(t) \equiv g_2(t) \bmod (1-t)^{\ell+1}, \end{equation} then for any polynomial $f(t)$ of degree less than or equal to $\ell$, \begin{equation}\label{shift_equation} g_1(\mathrm{S})f(t)=g_2(\mathrm{S})f(t), \end{equation} since $(1-\mathrm{S})^{\ell+1}f(t)=0$. Conversely, when $g_1(\mathrm{S})f(t)= g_2(\mathrm{S})f(t)$ for a polynomial $f(t)$ of degree $\ell$, (\ref{congruence}) holds. \end{remark}
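The shift-operator calculus above is easy to experiment with; the following small Python sketch (purely illustrative) encodes $g(\mathrm{S})=\sum_k a_k \mathrm{S}^k$ as a dictionary of coefficients and confirms that $(1-\mathrm{S})^{\ell+1}$ annihilates a polynomial of degree $\ell$:

```python
def shift_apply(g, f):
    """Apply g(S) = sum_k a_k S^k to f: (g(S)f)(t) = sum_k a_k f(t - k).
    g is a dict {k: a_k}; f is a callable."""
    return lambda t: sum(a * f(t - k) for k, a in g.items())

f = lambda t: 3 * t**2 - t + 5          # polynomial of degree l = 2
g = {0: 1, 1: -3, 2: 3, 3: -1}          # (1 - S)^3 = (1 - S)^{l + 1}
print([shift_apply(g, f)(t) for t in range(5)])  # [0, 0, 0, 0, 0]
```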
\subsubsection{Congruence} Lemmas \ref{Athanasiadis's lemma_3/2} and \ref{Athanasiadis's lemma_2} are generalizations of Lemma 2.2 in \cite{Athanasiadis}. The proofs of these lemmas are very similar to the proof given by Athanasiadis \cite{Athanasiadis}. Let $\cyc{c}{t}:=\frac{1-t^c}{1-t}=1+t+\cdots+t^{c-1}$, where $c$ is a non-negative integer.
\begin{lemma}\label{Athanasiadis's lemma_3/2} If $g(t)=\sum_{k}a_kt^{k}$ is a polynomial and $n$ is a positive integer, then $g(t)$ can be divided by $\cyc{n}{t}^{\ell+1}$ if and only if the following formulas hold for any integer $r \in \{0,1,\cdots,\ell\}$. \begin{equation} \sum_{k \equiv 0 \bmod n}a_k k^r= \sum_{k \equiv 1 \bmod n}a_k k^r= \cdots=\sum_{k \equiv n-1 \bmod n}a_k k^r. \end{equation} \end{lemma} \begin{proof} Let $\omega:=\mathrm{e}^{\frac{2 \pi \sqrt{-1}}{n} }$. First, suppose that $g(t)=\cyc{n}{t}^{\ell+1}h(t)$, where $h(t)$ is a polynomial. \[ \sum_{k}a_k k^r t^k=\biggl(t\frac{d}{dt} \biggr)^r\cyc{n}{t}^{\ell+1}h(t). \] Since $r \leqq \ell$ and using Leibniz's rule, \[ \sum_{k}a_k k^r \omega^k=0. \] From $\omega^n=1$, \[ (\sum_{k \equiv 0 \bmod n}a_k k^r) +(\sum_{k \equiv 1 \bmod n}a_k k^r \omega) +\cdots +(\sum_{k \equiv n-1 \bmod n}a_k k^r \omega^{n-1})=0. \] Let $s^{(r)}_{i}:=\sum_{k \equiv i \bmod n}a_k k^r$. Then, \[s^{(r)}_0+ s^{(r)}_1\omega+\cdots+ s^{(r)}_{n-1}\omega^{n-1}=0.\] Since $\omega^2,\cdots,\omega^{n-1}$ are also $n$-th roots of unity, we obtain the formulas \begin{equation}\label{n-th root} \begin{array}{llll} s^{(r)}_0+ s^{(r)}_1\omega &+\cdots+s^{(r)}_{n-1}\omega^{n-1}&=0,\\ s^{(r)}_0+ s^{(r)}_1\omega^2 &+\cdots+ s^{(r)}_{n-1}\omega^{2(n-1)}&=0,\\ \quad \vdots\\ s^{(r)}_0+ s^{(r)}_1\omega^{n-1} &+\cdots+ s^{(r)}_{n-1}\omega^{(n-1)^2}&=0. \end{array} \end{equation} Let $s:=(s^{(r)}_0, s^{(r)}_1,\cdots, s^{(r)}_{n-1})^T$. Let us define an $(n-1) \times n$ matrix $W$ as\[ W := \left( \begin{array}{llll} 1 & \omega & \ldots & \omega^{n-1} \\ 1 & \omega^{2} & \ldots & \omega^{2(n-1)} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & \omega^{n-1} & \ldots & \omega^{(n-1)^2} \end{array}\right). \] We rewrite (\ref{n-th root}) as $Ws=0$. Since $\omega$ is primitive, we have that $\operatorname{dim}(\operatorname{ker} W)=1$ from Vandermonde's determinant. 
By $(1,\cdots,1) \in \operatorname{ker} W$, we obtain the formula $s_0^{(r)}=\cdots= s_{n-1}^{(r)}$. Conversely, suppose that $s_0^{(r)}=\cdots= s_{n-1}^{(r)}$ for any $r\in\{0,1,\cdots,\ell\}$. Then, for any $r\in\{0,1,\cdots,\ell\}$ and $m\in\{1,\cdots,n-1\}$, \begin{equation}
\biggl(t\frac{d}{dt} \biggr)^rg(t)\Bigl|_{t=\omega^{m}}=\sum_{k}a_k k^{r}(\omega^{m})^k=s^{(r)}_{0}\sum_{j=0}^{n-1}\omega^{mj}=0. \end{equation} Hence each $\omega^{m}$ ($m=1,\cdots,n-1$) is a root of $g(t)$ of multiplicity at least $\ell+1$, and therefore the polynomial $\cyc{n}{t}^{\ell+1}$ divides $g(t)$. \end{proof} We prove the following lemma using Lemma \ref{Athanasiadis's lemma_3/2}. \begin{lemma} \label{Athanasiadis's lemma_2} If $g(t)=\sum_{k}a_kt^{k}$ is a polynomial and $n$ is a positive integer, then $g(t)$ is divisible by $\cyc{n}{t}^{\ell+1}$ if and only if the following formulas hold. \begin{equation}\label{cyclotomic congruence} \frac{1}{n}g(t) \equiv \sum_{k\equiv 0 \bmod n}a_kt^k \equiv \cdots \equiv \sum_{k\equiv n-1 \bmod n}a_kt^k \ \bmod (1-t)^{\ell+1}. \end{equation} \end{lemma} \begin{proof} First, suppose that $g(t)=\cyc{n}{t}^{\ell+1}h(t)$, where $h(t)$ is a polynomial. We prove the formula using the shift operator action on $f(t)= t^{\ell}$. For any $j \in \{0,1,\cdots,n-1\}$, \[ \begin{split} \sum_{k\equiv j \bmod n}a_k\mathrm{S}^k t^{\ell} &= \sum_{k\equiv j \bmod n}a_k(t-k)^{\ell}\\ &= \sum_{k\equiv j \bmod n}a_k \sum_{r=0}^{\ell} \binom{\ell}{r} (-1)^{r}k^{r}t^{\ell-r}\\ &= \sum_{r=0}^{\ell}\binom{\ell}{r}(-1)^{r} (\sum_{k\equiv j \bmod n}a_kk^{r})t^{\ell-r}. \end{split} \] By Lemma \ref{Athanasiadis's lemma_3/2}, \[ \sum_{k \equiv 0 \bmod n}a_k \mathrm{S}^k t^{\ell} = \sum_{k \equiv 1 \bmod n}a_k \mathrm{S}^k t^{\ell} = \cdots=\sum_{k \equiv n-1 \bmod n}a_k \mathrm{S}^k t^{\ell}. \] Thus, for any $j \in \{0,1,\cdots,n-1\}$, \[ \frac{1}{n}g(\mathrm{S})t^{\ell} = \sum_{k\equiv j \bmod n}a_k\mathrm{S}^{k}t^{\ell}. \] By Proposition \ref{cong}, \[ \frac{1}{n}g(t) \equiv \sum_{k\equiv j \bmod n}a_kt^k \ \bmod (1-t)^{\ell+1}. \] The converse is proved by following the above proof in reverse. 
\end{proof} \subsection{Quasi-polynomial}\label{section:Pre_quasi} A function $f:\mathbb{Z}\rightarrow \mathbb{C}$ is called a quasi-polynomial if there exist a positive integer $n$ and polynomials $f_1(t),\cdots,f_{n}(t) \in \mathbb{C}[t]$ such that \begin{equation} f(t) = \left\{ \begin{array}{ll} f_1(t), & t\equiv1 \bmod n,\\ f_2(t), & t\equiv2 \bmod n,\\ \quad \vdots\\ f_{n-1}(t), & t\equiv n-1 \bmod n,\\ f_{n}(t), & t\equiv 0 \bmod n. \end{array}\right. \end{equation} Such an $n$ is called a period of the quasi-polynomial $f(t)$. The smallest period of $f(t)$ is called the minimal period. The polynomials $f_1(t), \cdots, f_{n}(t)$ are called the constituents of $f(t)$. We define $\deg f:=\underset{1 \leqq i \leqq n}{\max}\deg f_i$ as the degree of a quasi-polynomial $f(t)$. Moreover, if $f_r(t)=f_{\mathrm{gcd}(r,n)}(t)$ for any $r \in \{1,\cdots,n\}$, then we say that the quasi-polynomial $f(t)$ has the gcd-property. \begin{remark} We can express a quasi-polynomial as \[ f(t)=p^{\ell}(t)t^{\ell}+p^{\ell-1}(t)t^{\ell-1}+\cdots +p^{0}(t), \] where $p^{\ell}(t),\cdots, p^{0}(t)$ are periodic functions. The minimal period of a quasi-polynomial $f(t)$ is the least common multiple of the minimal periods of the periodic functions $p^{\ell}(t),\cdots, p^{0}(t)$. \end{remark} \begin{definition}\label{quasi_av} Let $f(t)$ be a quasi-polynomial with minimal period $n$. Let $s$ be a positive integer.\\ \[ \begin{split} f(t) = \left\{ \begin{array}{ll} f_1(t), & t\equiv1 \bmod sn,\\ f_2(t), & t\equiv2 \bmod sn,\\ \quad \vdots\\ f_{sn-1}(t), & t\equiv sn-1 \bmod sn,\\ f_{sn}(t), & t\equiv 0 \bmod sn. \end{array}\right. \end{split} \] We define the action of the symmetric group $\mathfrak{S}_{sn}$ on a quasi-polynomial as follows. 
\[ \begin{split} f^{\sigma}(t) := \left\{ \begin{array}{ll} f_{\sigma^{-1}(1)}(t), & t\equiv1 \bmod sn,\\ f_{\sigma^{-1}(2)}(t), & t\equiv2 \bmod sn,\\ \quad \vdots\\ f_{\sigma^{-1}(sn-1)}(t), & t\equiv sn-1 \bmod sn,\\ f_{\sigma^{-1}(sn)}(t), & t\equiv 0 \bmod sn, \end{array}\right. \end{split} \] where $\sigma \in \mathfrak{S}_{sn}$. Let $\sigma_{sn}$ be the cyclic permutation $(1,2,\cdots,sn)\in \mathfrak{S}_{sn}$. For any positive integer $s$, we have $f^{\sigma_{sn}}(t)=f^{\sigma_{n}}(t)$. In other words, the action of the cyclic permutation $\sigma_{sn}=(1,\cdots,sn)$ on $f(t)$ does not depend on $s$. From now on, we denote a cyclic permutation $(1,\cdots,n)$ by $\sigma$, where $n$ takes the minimal period of a quasi-polynomial on which $\sigma$ acts in each case. Let $k$ be an integer. Define the following quasi-polynomial for $k$: \begin{equation} \tilde{f}^k(t):=\frac{f(t)+f^{\sigma^{k}}(t)+ f^{\sigma^{2k}}(t)+ \cdots +f^{\sigma^{(n-1)k}}(t)}{n}. \end{equation} \end{definition} \begin{remark}\label{remark_tilde} \begin{enumerate}[(1)] \item Let $n$ be a period of $f(t)$. Let $k$ be a divisor of $n$ and $m:=\frac{n}{k}$. The quasi-polynomial $\tilde{f}^{k}(t)$ has the period $k$. \[ \begin{split} \tilde{f}^{k}(t)=\left\{ \begin{array}{ll} \frac{f_1(t)+f_{k+1}(t)+f_{2k+1}(t)+\cdots+f_{(m-1)k+1}(t)}{m}, & t \equiv 1 \bmod k,\\ \frac{f_2(t)+f_{k+2}(t)+f_{2 k+2}(t)+\cdots+f_{(m-1)k+2}(t)}{m}, & t \equiv 2 \bmod k,\\ \quad \vdots\\ \frac{f_{k-1}(t)+f_{2k-1}(t)+f_{3k-1}(t)+\cdots+f_{mk-1}(t)}{m}, & t \equiv k-1 \bmod k,\\ \frac{f_{k}(t)+f_{2k}(t)+f_{3k}(t)+\cdots+f_{mk}(t)}{m}, & t \equiv 0 \bmod k.\\\end{array}\right. \end{split} \] \item\label{remark_tilde_2} When a quasi-polynomial $f(t)$ has a period $n$, we have that $\tilde{f}^k(t)=\tilde{f}^{k+n}(t)$. \end{enumerate} \end{remark} \begin{lemma}\label{sigma_linear} Let $f(t)$, $g(t)$, and $h(t)$ be quasi-polynomials such that $f(t)=g(t)+h(t)$ holds. 
Then, $f^{\sigma}(t)=g^{\sigma}(t)+h^{\sigma}(t)$, that is, the action of the cyclic permutation $\sigma$ is linear. \end{lemma} \begin{proof} Let $n$ be the minimal period of $f(t)$. Let $sn$ be the least common multiple of the minimal periods of $g(t)$ and $h(t)$. Let $g_j(t)$ and $h_j(t)$ be constituents of $g(t)$ and $h(t)$ for $t\equiv j \bmod sn$. Let $\sigma_{sn}:=(1,\cdots,sn)$. Note that we use the notation $\sigma$ as the cyclic permutation for the minimal period of a quasi-polynomial on which $\sigma$ acts, and we have \[ \begin{split} f^{\sigma}(t)=f^{\sigma_{sn}}(t)&=\left\{ \begin{array}{ll} g_{\sigma_{sn}^{-1}(1)}(t)+h_{\sigma_{sn}^{-1}(1)}(t), & t\equiv1 \bmod sn,\\ g_{\sigma_{sn}^{-1}(2)}(t)+h_{\sigma_{sn}^{-1}(2)}(t), & t\equiv2 \bmod sn,\\ \quad \vdots\\ g_{\sigma_{sn}^{-1}(sn-1)}(t)+h_{\sigma_{sn}^{-1}(sn-1)}(t), & t\equiv sn-1 \bmod sn,\\ g_{\sigma_{sn}^{-1}(sn)}(t)+h_{\sigma_{sn}^{-1}(sn)}(t), & t\equiv 0 \bmod sn \end{array}\right.\\ &=g^{\sigma_{sn}}(t)+h^{\sigma_{sn}}(t)\\ &=g^{\sigma}(t)+h^{\sigma}(t). \end{split} \] \end{proof}
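The constructions of Definition \ref{quasi_av} can be made concrete by storing a quasi-polynomial as the list of its constituents. The following sketch (illustrative only; the constituents here are constants, and exact rational arithmetic is used for the averaging) computes $\tilde{f}^{k}(t)$ and observes that $\tilde{f}^{4}$ for a quasi-polynomial of minimal period $6$ acquires period $\mathrm{gcd}(4,6)=2$, as proved in general in Proposition \ref{tilde-gcd} below:

```python
from fractions import Fraction

# A quasi-polynomial is stored as its constituents f_1, ..., f_n,
# each a coefficient list (constant term first); index j holds f_{j+1}.
def sigma_pow(constituents, power):
    """Action of sigma^power: the constituent at residue j becomes
    the original constituent at residue j - power (mod n)."""
    n = len(constituents)
    return [constituents[(j - power) % n] for j in range(n)]

def tilde(constituents, k):
    """tilde{f}^k = (1/n) * (f + f^{sigma^k} + ... + f^{sigma^{(n-1)k}})."""
    n = len(constituents)
    out = [[Fraction(0)] * len(constituents[0]) for _ in range(n)]
    for i in range(n):
        shifted = sigma_pow(constituents, i * k)
        for j in range(n):
            out[j] = [a + b / n for a, b in zip(out[j], shifted[j])]
    return out

# f has minimal period 6; tilde{f}^4 must have period gcd(4, 6) = 2.
f = [[Fraction(c)] for c in (1, 5, 2, 4, 3, 6)]
tf = tilde(f, 4)
print(all(tf[j] == tf[(j + 2) % 6] for j in range(6)))  # True
```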
\begin{lemma}\label{tilde_linear} Let $f(t)$, $g(t)$, and $h(t)$ be quasi-polynomials such that $f(t)=g(t)+h(t)$ holds. Let $k$ be an integer. Then, $\tilde{f}^{k}(t)=\tilde{g}^{k}(t)+\tilde{h}^{k}(t)$. \end{lemma} \begin{proof} Let $n_0$, $n_1$, and $n_2$ be the minimal periods of $f(t)$, $g(t)$, and $h(t)$, respectively. Note that $n_1n_2$ is a multiple of $n_0$. Then, by Lemma \ref{sigma_linear} and Remark \ref{remark_tilde} (\ref{remark_tilde_2}), \[ \begin{split} \tilde{f}^{k}(t)&=\frac{f(t)+f^{\sigma^{k}}(t)+\cdots+f^{\sigma^{(n_0-1)k}}(t)}{n_0}\\ &=\frac{f(t)+f^{\sigma^{k}}(t)+\cdots+f^{\sigma^{(n_1n_2-1)k}}(t)}{n_1n_2}\\ &=\frac{\bigl(g(t)+h(t)\bigr)+\bigl(g^{\sigma^{k}}(t)+ h^{\sigma^{k}}(t)\bigr)+\cdots+ \bigl(g^{\sigma^{(n_1n_2-1)k}}(t)+ h^{\sigma^{(n_1n_2-1)k}}(t)\bigr)}{n_1n_2}\\ &=\frac{g(t)+g^{\sigma^{k}}(t)+\cdots+ g^{\sigma^{(n_1n_2-1)k}}(t)}{n_1n_2}+ \frac{h(t)+h^{\sigma^{k}}(t)+\cdots+ h^{\sigma^{(n_1n_2-1)k}}(t)}{n_1n_2}\\ &=\frac{n_2(g(t)+g^{\sigma^{k}}(t)+\cdots+ g^{\sigma^{(n_1-1)k}}(t))}{n_1n_2}+ \frac{n_1(h(t)+h^{\sigma^{k}}(t)+\cdots+ h^{\sigma^{(n_2-1)k}}(t))}{n_1n_2}\\ &=\tilde{g}^{k}(t)+\tilde{h}^{k}(t). \end{split} \] \end{proof} \begin{proposition}\label{tilde-gcd} Let $f(t)$ be a quasi-polynomial with period $n$. Let $k$ be an integer. Then, \begin{equation} \tilde{f}^{k}(t)=\tilde{f}^{\mathrm{gcd}(k,n)}(t). \end{equation} In particular, the quasi-polynomial $\tilde{f}^{k}(t)$ has the period $\mathrm{gcd}(k,n)$. \end{proposition} \begin{proof} Let $[b]:=b+n\mathbb{Z} \in \mathbb{Z}/n \mathbb{Z}$. We will prove that \begin{equation}\label{modn} \{[k],[2k],\cdots,[(n-1)k] \} = \{[\mathrm{gcd}(k,n)],[2\mathrm{gcd}(k,n)],\cdots,[(n-1)\mathrm{gcd}(k,n)]\}. \end{equation} If (\ref{modn}) holds, then from the relation $f^{\sigma^i}(t)=f^{\sigma^{i+n}}(t)$, we have that \[ f(t)+f^{\sigma^{k}}(t)+\cdots+f^{\sigma^{(n-1)k}}(t)=f(t)+f^{\sigma^{\mathrm{gcd}(k,n)}}(t)+\cdots +f^{\sigma^{(n-1)\mathrm{gcd}(k,n)}}(t). 
\] First, if there exists an integer $m\in \{1,\cdots,n-1\}$ such that $[m \frac{k}{\mathrm{gcd}(k,n)}]=[0]$ holds, then $[m \mathrm{gcd}(k,n)]=[0]$. Indeed, if we write $m\frac{k}{\mathrm{gcd}(k,n)}=qn$, where $q \in \mathbb{Z}$, then the following formula holds. \[ \begin{split} m \mathrm{gcd}(k,n)&=qn \frac{\mathrm{gcd}(k,n)}{k} \mathrm{gcd}(k,n)\\ &= n \frac{\mathrm{gcd}(qk\mathrm{gcd}(k,n),qn\mathrm{gcd}(k,n))}{k}\\ &= n \frac{\mathrm{gcd}(qk\mathrm{gcd}(k,n),mk)}{k}\\ &= n \mathrm{gcd}(q \mathrm{gcd}(k,n),m). \end{split} \] In other words, if $[m \frac{k}{\mathrm{gcd}(k,n)}]=[0]$, then \[ [mk]=[m \frac{k}{\mathrm{gcd}(k,n)} \mathrm{gcd}(k,n)]=[0] \in \{[\mathrm{gcd}(k,n)],[2\mathrm{gcd}(k,n)],\cdots,[(n-1)\mathrm{gcd}(k,n)]\}. \] Next, we suppose that an integer $m \in \{1,\cdots,n-1\}$ satisfies $[m\frac{k}{\mathrm{gcd}(k,n)}]\neq[0]$. Then, there exists $m_k \in \{1,\cdots,n-1\}$ with $[m_k]=[m\frac{k}{\mathrm{gcd}(k,n)}]$. Hence, \[ [mk]=[m\frac{k}{\mathrm{gcd}(k,n)}\mathrm{gcd}(k,n)]=[m_k\mathrm{gcd}(k,n)] \in \{[\mathrm{gcd}(k,n)],[2\mathrm{gcd}(k,n)],\cdots,[(n-1)\mathrm{gcd}(k,n)]\}. \] Thus, $\{[k],[2k],\cdots,[(n-1)k] \} \subset \{[\mathrm{gcd}(k,n)],[2\mathrm{gcd}(k,n)],\cdots,[(n-1)\mathrm{gcd}(k,n)]\}$. Because the map \[ \mapel{\phi_{k_{n}}}{\{[\mathrm{gcd}(k,n)],[2\mathrm{gcd}(k,n)],\cdots,[(n-1)\mathrm{gcd}(k,n)]\}}{\{[k],[2k],\cdots,[(n-1)k]\}}{[x]}{[k_n x]} \] with $k_n:=\frac{k}{\mathrm{gcd}(k,n)}$ is bijective, we have that $\{[k],[2k],\cdots,[(n-1)k] \} = \{[\mathrm{gcd}(k,n)],[2\mathrm{gcd}(k,n)],\cdots,[(n-1)\mathrm{gcd}(k,n)]\}$. \end{proof}
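Both the residue-set identity (\ref{modn}) and Proposition \ref{tilde-gcd} are finite statements, so they can be checked by machine. The following Python sketch (illustrative only, not part of the paper) models a quasi-polynomial as a list of constituents indexed by $t \bmod n$, an implementation convenience, and verifies both claims for a concrete period-6 example.

```python
from fractions import Fraction
from math import gcd

def qp(cons):
    """Evaluate a quasi-polynomial from its constituents: cons[r] applies when t % n == r."""
    n = len(cons)
    return lambda t: cons[t % n](t)

def sigma_power(cons, i):
    """Constituents of f^{sigma^i}: the constituent list rotated i steps."""
    n = len(cons)
    return [cons[(r - i) % n] for r in range(n)]

def tilde(cons, k):
    """tilde{f}^k(t) = (1/n) * sum_{i=0}^{n-1} f^{sigma^{ik}}(t)."""
    n = len(cons)
    return lambda t: sum(qp(sigma_power(cons, i * k))(t) for i in range(n)) / n

cons = [lambda t, a=a: Fraction(a) * t + a * a for a in range(6)]   # period 6, distinct constituents
n = len(cons)

for k in (2, 3, 4, 9, 10):
    d = gcd(k, n)
    # the residue-set identity (modn), the key step of the proof
    assert {j * k % n for j in range(1, n)} == {j * d % n for j in range(1, n)}
    # Proposition tilde-gcd: tilde{f}^k = tilde{f}^{gcd(k, n)}
    assert all(tilde(cons, k)(t) == tilde(cons, d)(t) for t in range(36))
print("Proposition tilde-gcd verified for a period-6 example")
```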
We now prepare a lemma on greatest common divisors that will be used later. \begin{lemma}\label{invgcd} Let $n$ and $d$ be integers. Then, for any integer $\mu_0 \in \mathbb{Z}$, \begin{equation} \mathrm{gcd}(d+\mu_0 \mathrm{rad}(n) \mathrm{gcd}(d,n),n)=\mathrm{gcd}(d,n), \end{equation}
where $\mathrm{rad}(n):=\underset{p:prime,p | n}{\prod}p$ is the radical of $n$. \end{lemma} \begin{proof} Note that $\mathrm{gcd}(\frac{d}{\mathrm{gcd}(d,n)}, \frac{n}{\mathrm{gcd}(d,n)})=\frac{\mathrm{gcd}(d,n)}{\mathrm{gcd}(d,n)}=1$. Hence, for any integer $\mu_0$, we have that $\mathrm{gcd}(\frac{d}{\mathrm{gcd}(d,n)}+\mu_0 \mathrm{rad}(n), \frac{n}{\mathrm{gcd}(d,n)})=1$, since every prime dividing $\frac{n}{\mathrm{gcd}(d,n)}$ also divides $\mathrm{rad}(n)$. Therefore, \[ \begin{split} &\mathrm{gcd}(d+\mu_0 \mathrm{rad}(n) \mathrm{gcd}(d,n),n)\\ &= \mathrm{gcd}(d,n)\mathrm{gcd}(\frac{d}{\mathrm{gcd}(d,n)}+\mu_0 \mathrm{rad}(n), \frac{n}{\mathrm{gcd}(d,n)})\\ &=\mathrm{gcd}(d,n). \end{split} \] \end{proof} \begin{proposition}\label{constituent_inv} Let $f(t)$ be a quasi-polynomial of period $n$ with the gcd-property. Let $k$ be a positive integer. Let $\tilde{f}^{\mathrm{gcd}(k,n)}_j(t)$ be the constituent of the quasi-polynomial $\tilde{f}^{\mathrm{gcd}(k,n)}(t)$ for $t \equiv j \bmod \mathrm{gcd}(k,n)$. If $\mathrm{gcd}(j,n)=1$, then \begin{equation} \tilde{f}^{\mathrm{gcd}(k,n)}_j(t)=\tilde{f}^{\mathrm{gcd}(k,\mathrm{rad}(n))}_j(t). \end{equation} \end{proposition} \begin{proof} We will prove that \begin{equation}\label{set_gcd} \{\mathrm{gcd}(j+\mu \mathrm{gcd}(k,n),n)\}_{\mu=0}^{n-1}=\{\mathrm{gcd}(j+\mu \mathrm{gcd}(k,\mathrm{rad}(n)),n)\}_{\mu=0}^{n-1}. \end{equation} If (\ref{set_gcd}) holds, then from the gcd-property of $f(t)$, \[ \begin{split} &f_j(t)+f_{j+\mathrm{gcd}(k,n)}(t)+\cdots+f_{j+(n-1) \mathrm{gcd}(k,n)}(t)\\ &= f_j(t)+f_{j+\mathrm{gcd}(k,\mathrm{rad}(n))}(t)+\cdots+f_{j+(n-1) \mathrm{gcd}(k,\mathrm{rad}(n))}(t). \end{split} \] Let $[b]:=b+n\mathbb{Z} \in \mathbb{Z}/n\mathbb{Z}$. Let $c:=\frac{\mathrm{gcd}(k,n)}{\mathrm{gcd}(k,\mathrm{rad}(n))}$. For any integer $\mu \in \{0,1,\cdots,n-1\}$, there exists $\mu' \in \{0,1,\cdots,n-1\}$ such that $[\mu']=[\mu c]$. Hence, $\{\mathrm{gcd}(j+\mu \mathrm{gcd}(k,n),n)\}_{\mu=0}^{n-1} \subset \{\mathrm{gcd}(j+\mu \mathrm{gcd}(k,\mathrm{rad}(n)),n)\}_{\mu=0}^{n-1}$. Next, we set $d:=\mathrm{gcd}(k,n)$. 
We write $n=r_1^{s_1}r_2^{s_2}\cdots r_m^{s_m}$ and $d= r_1^{q_1 i_1}r_2^{q_2 i_2}\cdots r_m^{q_m i_m}$, where $r_1,r_2,\cdots, r_m$ are primes, $s_1,\cdots,s_m,q_1,\cdots,q_m$ are positive integers, and $i_1,\cdots,i_m \in \{0,1\}$, and then we define $\check{d}_n:=r_1^{s_1(1-i_1)} r_2^{s_2(1-i_2)}\cdots r_m^{s_m(1-i_m)}$. Note that $\mathrm{gcd}(d, \check{d}_n)=1$ and any divisor of $n$ that is relatively prime to $d$ divides $\check{d}_n$. We have $\mathrm{gcd}(\frac{d}{\mathrm{gcd}(k,\mathrm{rad}(n))},\frac{\mathrm{rad}(n) \check{d}_n}{\mathrm{gcd}(k,\mathrm{rad}(n))})=1$ since $\mathrm{gcd}(d,\frac{\mathrm{rad}(n)}{\mathrm{gcd}(k,\mathrm{rad}(n))})=1$ and $\mathrm{gcd}(d,\check{d}_n)=1$. Hence, for any integer $\mu \in \{0,1,\cdots,n-1\}$, there exist integers $\mu_1,\mu_2 \in \mathbb{Z}$ such that $\mu=\mu_1\frac{d}{\mathrm{gcd}(k,\mathrm{rad}(n))}+\mu_2\frac{\mathrm{rad}(n) \check{d}_n}{\mathrm{gcd}(k,\mathrm{rad}(n))}$. We transform the formula \begin{eqnarray}\label{aaa} j+\mu \mathrm{gcd}(k,\mathrm{rad}(n))&=&j+\Bigl(\mu_1\frac{d}{\mathrm{gcd}(k,\mathrm{rad}(n))}+\mu_2\frac{\mathrm{rad}(n) \check{d}_n}{\mathrm{gcd}(k,\mathrm{rad}(n))} \Bigr) \mathrm{gcd}(k,\mathrm{rad}(n)) \nonumber \\ &=&j+\mu_1 d+\mu_2 \mathrm{rad}(n) \check{d}_n. \end{eqnarray}
The integer $\mathrm{gcd}(j+\mu_1 d,n)$ is relatively prime to $d$ since $\mathrm{gcd}(j+\mu_1 d,d)=\mathrm{gcd}(j,d)=1$ (recall that $\mathrm{gcd}(j,n)=1$ and $d$ divides $n$). Since any divisor of $n$ that is relatively prime to $d$ divides $\check{d}_n$, $\mathrm{gcd}(j+\mu_1 d,n)$ divides $\check{d}_n$. Let $\mu_3:=\frac{\check{d}_n}{\mathrm{gcd}(j+\mu_1 d,n)} \in \mathbb{Z}$. From (\ref{aaa}), we obtain \begin{equation}\label{bbb} j+\mu \mathrm{gcd}(k,\mathrm{rad}(n))=j+\mu_1 d+\mu_2 \mu_3\mathrm{rad}(n)\mathrm{gcd}(j+\mu_1 d,n). \end{equation} Hence, using Lemma \ref{invgcd} for the right-hand side of (\ref{bbb}), we have the formula $\mathrm{gcd}(j+\mu \mathrm{gcd}(k,\mathrm{rad}(n)),n)=\mathrm{gcd}(j+\mu_1 d,n)=\mathrm{gcd}(j+\mu_1 \mathrm{gcd}(k,n),n)$. Furthermore, since there exists an integer $\mu'_1 \in \{0,1,\cdots,n-1\}$ such that $[\mu_1]=[\mu'_1]$, we have that $\{\mathrm{gcd}(j+\mu \mathrm{gcd}(k,\mathrm{rad}(n)),n)\}_{\mu=0}^{n-1} \subset \{\mathrm{gcd}(j+\mu \mathrm{gcd}(k,n),n)\}_{\mu=0}^{n-1}$. Combining the two inclusions yields (\ref{set_gcd}). \end{proof}
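Lemma \ref{invgcd} and the set identity (\ref{set_gcd}) are finite gcd statements, so they can be tested exhaustively on small parameters. The following Python sketch (illustrative only; `rad` computes the radical by trial division) does so.

```python
from math import gcd

def rad(n):
    """Radical of n: the product of the distinct primes dividing n (trial division)."""
    r, p, m = 1, 2, n
    while p * p <= m:
        if m % p == 0:
            r *= p
            while m % p == 0:
                m //= p
        p += 1
    return r * (m if m > 1 else 1)

# Lemma invgcd: gcd(d + mu0 * rad(n) * gcd(d, n), n) = gcd(d, n)
for n in range(2, 40):
    for d in range(1, 2 * n):
        g = gcd(d, n)
        assert all(gcd(d + mu0 * rad(n) * g, n) == g for mu0 in range(6))

# Equation (set_gcd): for gcd(j, n) = 1, the two gcd sets over mu = 0..n-1 coincide
for n in (4, 8, 9, 12, 18, 24):
    for k in range(1, 30):
        for j in range(1, n):
            if gcd(j, n) == 1:
                lhs = {gcd(j + mu * gcd(k, n), n) for mu in range(n)}
                rhs = {gcd(j + mu * gcd(k, rad(n)), n) for mu in range(n)}
                assert lhs == rhs
print("Lemma invgcd and (set_gcd) verified on small parameters")
```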
\begin{definition}\label{shift_bar} Let $f(t)$ be a quasi-polynomial with period $n$ as follows.\\ \[ \begin{split} f(t) &= \left\{ \begin{array}{ll} f_1(t), & t\equiv1 \bmod n,\\ f_2(t), & t\equiv2 \bmod n,\\ \quad \vdots\\ f_{n-1}(t), & t\equiv n-1 \bmod n,\\ f_{n}(t), & t\equiv 0 \bmod n. \end{array}\right. \end{split} \] We define the operator $\overline{\mathrm{S}}$ as follows. \[ \begin{split} (\overline{\mathrm{S}}f)(t) := \left\{ \begin{array}{ll} f_1(t-1), & t\equiv1 \bmod n,\\ f_2(t-1), & t\equiv2 \bmod n,\\ \quad \vdots\\ f_{n-1}(t-1), & t\equiv n-1 \bmod n,\\ f_{n}(t-1), & t\equiv 0 \bmod n. \end{array}\right. \end{split} \] \end{definition} \begin{remark} The operators $\mathrm{S}$ and $\overline{\mathrm{S}}$ have the relation \[ \begin{split} (\mathrm{S} f)(t) &= \left\{ \begin{array}{ll} f_{n}(t-1), & t\equiv1 \bmod n,\\ f_1(t-1), & t\equiv2 \bmod n,\\ \quad \vdots\\ f_{n-2}(t-1), & t\equiv n-1 \bmod n,\\ f_{n-1}(t-1), & t\equiv 0 \bmod n, \end{array}\right.\\ \\ &= \left\{ \begin{array}{ll} f_{\sigma^{-1}(1)}(t-1),&t\equiv1 \bmod n,\\ f_ {\sigma^{-1}(2)}(t-1),&t\equiv2 \bmod n,\\ \quad \vdots\\ f_ {\sigma^{-1}(n-1)}(t-1),&t\equiv n-1 \bmod n,\\ f_ {\sigma^{-1}(n)}(t-1),&t\equiv 0 \bmod n, \end{array}\right.\\ \\ &= (\overline{\mathrm{S}}f^{\sigma})(t). \end{split} \] \end{remark} \begin{lemma}\label{S_bar_linear} \begin{enumerate}[(1)] \item Let $f(t)$ and $g(t)$ be quasi-polynomials, which may have different minimal periods. Then, $\overline{\mathrm{S}}(f(t)+g(t))=\overline{\mathrm{S}}f(t)+\overline{\mathrm{S}}g(t)$, that is, the operator $\overline{\mathrm{S}}$ is linear. \item For any quasi-polynomial $h(t)$, $(\overline{\mathrm{S}}-1)^{\deg h+1}h(t)=0$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(1)] \item Let $m$ and $n$ be the minimal periods of $f(t)$ and $g(t)$, respectively. Let $k$ be an integer. Let $f_k(t)$ and $g_k(t)$ be the constituents of $f(t)$ and $g(t)$ for $t \equiv k \bmod \mathrm{lcm}(m,n)$. 
If $t \equiv k \bmod \mathrm{lcm}(m,n)$, then $\overline{\mathrm{S}}(f(t)+g(t))=f_k(t-1)+g_k(t-1)= \overline{\mathrm{S}}f(t)+\overline{\mathrm{S}}g(t)$. \item By the definition of the operator $\overline{\mathrm{S}}$, the inequality $\deg ((\overline{\mathrm{S}}-1) h)<\deg h$ holds. Hence, inductively, $(\overline{\mathrm{S}}-1)^{\deg h+1} h(t)=0$. \end{enumerate} \end{proof} \begin{lemma}\label{lemma_lemma} Let $f(t)$ be a quasi-polynomial with period $n$. Let $j$ and $m$ be integers. Let $c$ be a multiple of $\frac{n}{\mathrm{gcd}(m,n)}$. Let $\sum_{k \equiv j \bmod c}a_k x^{mk}$ be a polynomial. Then, \begin{equation} (\sum_{k \equiv j \bmod c}a_k\mathrm{S}^{mk}f)(t)=(\sum_{k \equiv j \bmod c}a_k\overline{\mathrm{S}} ^{mk} f^{\sigma^{m j}})(t). \end{equation} \end{lemma} \begin{proof} First, note that $(\mathrm{S}^{mj+mc} f)(t)=(\overline{\mathrm{S}}^{mj+mc} f^{\sigma^{mj +mc}})(t) =(\overline{\mathrm{S}}^{mj+mc}f^{\sigma^{mj}})(t)$ because $mc$ is a multiple of $n$. \[ \begin{split} (\sum_{k \equiv j \bmod c}a_k\mathrm{S}^{mk}f)(t)&=(a_j\overline{\mathrm{S}}^{mj} f^{\sigma^{m j}})(t)+(a_{j+c}\overline{\mathrm{S}}^{mj+m c}f^{\sigma^{m j}})(t)+\cdots \\ &= (\sum_{k \equiv j \bmod c}a_k\overline{\mathrm{S}}^{mk} f^{\sigma^{m j}})(t). \end{split} \] \end{proof} The following proposition concerns an average of a quasi-polynomial using the cyclotomic shift operator. \begin{proposition}\label{averaging} Let $f(t)$ be a quasi-polynomial of degree $\ell$ with period $n$. Let $g(t)$ be a polynomial. Let $m$ be an integer and $c$ be a multiple of $\frac{n}{\mathrm{gcd}(m,n)}$. Then, \begin{equation} [c]_{\mathrm{S}^m}^{\ell+1}g(\mathrm{S}^m)f(t)=[c]_{\overline{\mathrm{S}}^m}^{\ell+1}g(\overline{\mathrm{S}}^m)\tilde{f}^{\mathrm{gcd}(m,n)}(t). \end{equation} \end{proposition} \begin{proof} Let $[c]_{{\mathrm{S}}^m}^{\ell+1}g({\mathrm{S}}^m)=:\sum_{k}a_k\mathrm{S}^{mk}$. 
We calculate $[c]_{{\mathrm{S}}^m}^{\ell+1}g({\mathrm{S}}^m)f(t)$ using Lemma \ref{Athanasiadis's lemma_2}, Proposition \ref{tilde-gcd}, Lemma \ref{S_bar_linear}, and Lemma \ref{lemma_lemma}.\\ \[ \begin{split} ([c]_{\mathrm{S}^{m}}^{\ell+1}g(\mathrm{S}^m)f)(t)&=(\sum_{k}a_{k}\mathrm{S}^{mk}f)(t)\\ &=(\sum_{k \equiv 0 \bmod c}a_k \mathrm{S} ^{mk} f)(t) +\cdots+ (\sum_{k \equiv c-1 \bmod c}a_k\mathrm{S}^{mk}f)(t)\\ &=(\sum_{k \equiv 0 \bmod c}a_k \overline{\mathrm{S}}^{mk} f)(t) +\cdots+ (\sum_{k \equiv c-1 \bmod c}a_k \overline{\mathrm{S}}^{mk} f^{\sigma^{m(c-1)}})(t)\\ &=(\frac{1}{c}\sum_{k}a_k\overline{\mathrm{S}}^{mk} f)(t)+\cdots+(\frac{1}{c}\sum_{k}a_k\overline{\mathrm{S}}^{mk} f^{\sigma^{m (c-1)}})(t)\\ &=(\frac{1}{c} \sum_{k}a_k\overline{\mathrm{S}}^{mk})(f(t)+ f^{\sigma^{m}}(t)+\cdots+f^{\sigma^{m (c-1)}}(t))\\ &=(\sum_{k}a_k\overline{\mathrm{S}}^{mk}) \Biggl (\frac{f(t)+ f^{\sigma^{ m}}(t)+\cdots+f^{\sigma^{m (c-1)}}(t)}{c} \Biggr)\\ &=([c]_{\overline{\mathrm{S}}^m}^{\ell+1}g(\overline{\mathrm{S}}^m)\tilde{f}^{m})(t)\\ &=([c]_{\overline{\mathrm{S}}^m}^{\ell+1}g(\overline{\mathrm{S}}^m)\tilde{f}^{\mathrm{gcd}(m,n)})(t). \end{split} \] \end{proof}
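The interplay of $\mathrm{S}$, $\overline{\mathrm{S}}$, and $\sigma$ used above can be made concrete in the constituent representation. The Python sketch below (illustrative only; residues are indexed by $t \bmod n$) checks the relation $(\mathrm{S}f)(t)=(\overline{\mathrm{S}}f^{\sigma})(t)$ from the Remark after Definition \ref{shift_bar}, and Lemma \ref{S_bar_linear} (2) for the polynomial $h(t)=t^{2}$.

```python
from fractions import Fraction

def qp(cons):
    """Evaluate a quasi-polynomial given by constituents: cons[r] applies when t % n == r."""
    n = len(cons)
    return lambda t: cons[t % n](t)

def sigma(cons):
    """f^sigma: rotate the constituent list by one step."""
    n = len(cons)
    return [cons[(r - 1) % n] for r in range(n)]

def S(cons):
    """Shift operator (S f)(t) = f(t - 1), written constituentwise."""
    n = len(cons)
    return [(lambda g: lambda t: g(t - 1))(cons[(r - 1) % n]) for r in range(n)]

def Sbar(cons):
    """(S-bar f): shift the argument in each constituent, residues unchanged."""
    return [(lambda g: lambda t: g(t - 1))(g) for g in cons]

cons = [lambda t, a=a: Fraction(a) * t * t + a for a in range(5)]   # period 5

# Remark: (S f)(t) = (S-bar f^sigma)(t)
assert all(qp(S(cons))(t) == qp(Sbar(sigma(cons)))(t) for t in range(25))

# Lemma S_bar_linear (2): (S-bar - 1)^{deg h + 1} h = 0, here for h(t) = t^2
h = [lambda t: Fraction(t) ** 2]            # a period-1 quasi-polynomial
for _ in range(3):                          # deg h + 1 = 3 applications of (S-bar - 1)
    h = [(lambda g: lambda t: g(t - 1) - g(t))(g) for g in h]
assert all(qp(h)(t) == 0 for t in range(10))
print("relations between S, S-bar and sigma verified")
```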
\subsection{Decomposition of a quasi-polynomial}\label{sec:Pre_deco} First, we summarize the relation between (quasi-)polynomials and rational functions. \begin{lemma}\label{gene_poly}(\cite[Corollary 4.3.1]{Beck-Robinson, Stanley-EC1}) If \[ \sum_{n=0}^{\infty}f(n)x^n=\frac{g(x)}{(1-x)^{\ell+1}}, \] then $f(t)$ is a polynomial of degree $\ell$ if and only if $g(x)$ is a polynomial of degree at most $\ell$ that is not divisible by $(1-x)$. \end{lemma} \begin{lemma}\label{gene_quasi_p}(\cite[Proposition 4.4.1]{Beck-Robinson, Stanley-EC1}) If \[ \sum_{n=0}^{\infty}f(n)x^n=\frac{g(x)}{h(x)}, \] then $f(t)$ is a quasi-polynomial of degree $\ell$ with period $p$ if and only if $g(x)$ and $h(x)$ are polynomials such that $\deg g<\deg h$, all roots of $h(x)$ are $p$-th roots of unity of multiplicity at most $\ell+1$, and there is a root of multiplicity equal to $\ell+1$ (assuming that $\frac{g(x)}{h(x)}$ is in lowest terms). \end{lemma} The following classical lemma is the partial fraction decomposition. \begin{lemma}\label{pfd} Let $g(x)$ and $h(x)$ be polynomials with $\deg g < \deg h$. Let $h_1(x),\cdots,h_n(x)$ be pairwise relatively prime polynomials with $h(x)=h_1(x)h_2(x)\cdots h_n(x)$. Then, there exist polynomials $g_1(x),\cdots,g_n(x)$ such that $\deg g_i < \deg h_i$ for any $i\in\{1,\cdots,n\}$ and \begin{equation} \frac{g(x)}{h(x)}=\frac{g_1(x)}{h_1(x)}+\cdots + \frac{g_n(x)}{h_n(x)}. \end{equation} \end{lemma} In general, a quasi-polynomial has the following decomposition into several quasi-polynomials. \begin{proposition}\label{gene_quasi_deco} Let $g(x)$ and $h(x)$ be relatively prime polynomials such that $\deg g<\deg h$ and all roots of $h(x)$ are $p$-th roots of unity. Let $\divisors{p}$ be the set of divisors of $p$. Let $\ell_{i}+1$ be the number of primitive $i$-th roots of unity in the roots of $h(x)$. 
If \[ \sum_{n=0}^{\infty}f(n)x^n=\frac{g(x)}{h(x)}, \] then there exist quasi-polynomials $f_{i}^{(\ell_i)}(t)$, $(i \in \divisors{p})$ of degree $\ell_i$ and period $i$ that satisfy \begin{equation}\label{claim} f(t)=\sum_{i \in \divisors{p}}f_{i}^{(\ell_i)}(t). \end{equation} \end{proposition} \begin{proof} Let $\{h_i(x)\}_{i \in \divisors{p}}$ be polynomials such that all roots of $h_i(x)$ are primitive $i$-th roots of unity in the roots of $h(x)$ and $h(x)=\prod_{i \in \divisors{p}}h_i(x)$. By Lemma \ref{pfd}, there exist polynomials $\{g_i(x)\}_{i\in \divisors{p}}$ such that $\deg g_i< \deg h_i=\ell_{i}+1$ for any $i \in \divisors{p}$ and \[ \sum_{n=0}^{\infty}f(n)x^n=\sum_{i\in \divisors{p}}\frac{g_i(x)}{h_{i}(x)}. \] By Lemma \ref{gene_quasi_p}, for any $i\in \divisors{p}$, there exists the quasi-polynomial $f^{(\ell_i)}_{i}(t)$ of degree $\ell_i$ with period $i$ such that $\sum_{n=0}^{\infty}f^{(\ell_i)}_{i}(n)x^n=\frac{g_i(x)}{h_{i}(x)}$. Hence, \begin{equation}\label{above} \sum_{n=0}^{\infty}f(n)x^n=\sum_{i \in \divisors{p}}\sum_{n=0}^{\infty}f^{(\ell_i)}_{i}(n)x^n. \end{equation} By comparing each term of (\ref{above}), we obtain the formula stated in (\ref{claim}). \end{proof} \begin{comment} The following Lemma is the relation between a generating function and the shift operator, and we omit the proof of it. Note that we use the notations $x$ and $t$ to express the variable. The notation $x$ implies the variable of the generating function. \begin{lemma}\label{zeros} Let $k$ be a positive integer. Let $f(t)$ be a quasi-polynomial such that $f(-1)=f(-2)=\cdots=f(-\ell)=0$. If the degree of a polynomial $g(x)$ is less than equal to $\ell$, then \[ g(x)\sum_{n=0}^{\infty}f(n)x^n=\sum_{n=0}^{\infty}(g(\Shp{t})f)(n)x^n. \] \end{lemma} The following Lemma \ref{gene_poly} and Lemma \ref{gene_quasi_p} are Exercises in \cite{Beck-Robinson}. We give the proof to these Lemmas. 
\begin{lemma}\label{gene_poly}(\cite[Exercise 3.13]{Beck-Robinson}) If \[ \sum_{n=0}^{\infty}f(n)x^n=\frac{g(x)}{(1-x)^{\ell+1}}. \] then $f(t)$ is a polynomial of degree $\ell$ if and only if $g(x)$ is a polynomial of degree at most $\ell$ and cannot be divided by $(1-x)$. \end{lemma} \begin{proof} First suppose that $g(x)$ is a polynomial of degree at most $\ell$ and $g(1)\neq 0$. Let $\mathrm{L}_{A_{\ell}}(t):=\binom{t+\ell}{\ell}=\frac{(t+1)(t+2)\cdots(t+\ell)}{\ell !}$. The polynomial $\mathrm{L}_{A_{\ell}}(t)$ satisfies $\mathrm{L}_{A_{\ell}}(-1)=\cdots=\mathrm{L}_{A_{\ell}}(-\ell)=0$ and \[ \sum_{n=0}^{\infty}\mathrm{L}_{A_{\ell}}(n)x^n=\frac{1}{(1-x)^{\ell+1}}. \] By Lemma \ref{zeros}, \[ \sum_{n=0}^{\infty}(g(\Shp{t})\mathrm{L}_{A_{\ell}})(n)x^n=\frac{g(x)}{(1-x)^{\ell+1}}. \] The degree of the polynomial $f(t):=g(\mathrm{S})\mathrm{L}_{A_{\ell}}(t)$ is $\ell$ since the polynomial $g(x)$ cannot be divided by $(1-x)$ by Proposition \ref{Shift congruences}. Conversely, suppose $f(t):=a_{\ell}t^{\ell}+a_{\ell-1}t^{\ell-1}+\cdots+a_0$ and $a_{\ell}\neq0$. By formula (\ref{Eulerian_series}), \[ \sum_{n=0}^{\infty}f(n)x^n=\frac{a_{\ell}\mathrm{A}_{\ell}(x)+a_{\ell-1}(1-x)\mathrm{A}_{\ell-1}(x)+\cdots+a_0(1-x)^{\ell}}{(1-x)^{\ell+1}}. \] The polynomial $g(x):=a_{\ell}\mathrm{A}_{\ell}(x)+a_{\ell-1}(1-x)\mathrm{A}_{\ell-1}(x)+\cdots+a_0(1-x)^{\ell}$ has a degree $\ell$ and $g(1)=a_{\ell}\mathrm{A}_{\ell}(1)\neq 0$. \end{proof} \begin{remark} Lemma \ref{gene_poly} implies that for any polynomials $f(t)$, there exists a unique polynomial $g(x)$ of degree less than equal to $\ell$ such that \begin{equation} f(t)=g(\mathrm{S})\binom{t+\ell}{\ell}, \end{equation} that is, the polynomials $\binom{t+\ell}{\ell}$, $\binom{t+\ell-1}{\ell}$, $\binom{t+\ell-2}{\ell}$, $\cdots$, $\binom{t}{\ell}$ is a basis of the vector space of polynomials of degree less than equal to $\ell$ (\cite[Exercise 3.14]{Beck-Robinson}). 
For a polynomial $f(t)=a_{\ell}t^{\ell}+ a_{\ell-1}t^{\ell-1}+\cdots+a_{0}$, we have $g(x)=a_{\ell}\mathrm{A}_{\ell}(x)+a_{\ell-1}(1-x)\mathrm{A}_{\ell-1}(x)+\cdots+a_0(1-x)^{\ell}$. \end{remark} \begin{lemma}\label{gene_quasi_p}(\cite[Exercise 3.24]{Beck-Robinson}) If \[ \sum_{n=0}^{\infty}f(n)x^n=\frac{g(x)}{(1-x^{p})^{\ell+1}}, \] then $f(t)$ is a quasi-polynomial of degree $\ell$ with period $p$ if and only if $g(x)$ is a polynomial of degree at most $p(\ell+1)-1$ and cannot be divided by $(1-x^{p})$. (Polynomials $g(x)$ and $(1-x^{p})^{\ell+1}$ may have a common factor.) \end{lemma} \begin{proof} First suppose $f(t)$ is a quasi-polynomial of degree $\ell$ with period $p$. \[ f(t) = \left\{ \begin{array}{ll} f_0(t), & t\equiv0 \bmod p,\\ f_1(t), & t\equiv1 \bmod p,\\ \quad \vdots\\ f_{p-1}(t), & t\equiv p-1 \bmod p.\\ \end{array}\right. \] Let $q_{k}(t):=f_{k}(tp+k)$ for $k \in \{0,\cdots,p-1\}$. By Lemma \ref{gene_poly}, for any $k\in \{0,\cdots,p-1\}$, there exists a polynomial $h_k(x)$ of degree less than equal to $\deg q_k$ such that $h_k(x)$ cannot be divided by $(1-x)$ and \[ \begin{split} \sum_{n=0}^{\infty}f_k(np+k)x^{np+k}&=(\sum_{n=0}^{\infty}q_k(n)x^{np})x^k\\ &=\frac{x^kh_k(x^p)}{(1-x^p)^{\deg q_k+1}}. \end{split} \] Thus, \begin{equation}\label{above} \sum_{n=0}^{\infty}f(n)x^{n}=\sum_{k=0}^{p-1}\frac{x^kh_k(x^{p})}{(1-x^{p})^{\deg q_k+1}} =\frac{\sum_{k=0}^{p-1} x^kh_k(x^{p})(1-x^{p})^{\ell-\deg q_k}}{(1-x^{p})^{\ell+1}}. \end{equation} We define $g(x):=\sum_{k=0}^{p-1} x^k h_k(x^{p})(1-x^{p})^{\ell-\deg q_k}$. Since $k+p(\deg h_k+\ell-\deg q_k)<p(\ell+1)$ for any $k \in \{0,\cdots,p-1\}$, we get the inequality $\deg g \leqq p(\ell+1)-1$. Let $\omega$ be a primitive $p$-th root of unity. Without loss of generality, we set $\deg q_0=\deg q_1=\ell$ and $\deg q_k<\ell$ for $k\in \{2,\cdots,p-1\}$ since $\deg f=\ell$. 
Substituting $\omega$ to $g(t)$, we have $g(\omega)=h_0(1)+\omega h_1(1)\neq0$ since $\omega$ is primitive root of unity and $(1-x)$ doesn't divide $h_0(x)$ and $h_1(x)$. Hence, the polynomial $(1-x^{p})$ doesn't divide $g(x)$. Conversely, suppose that $g(x)$ is a polynomial of degree at most $p (\ell+1)-1$ and cannot be divided by $(1-x^{p})$. Let a polynomial $g(x)=:\sum_{k}a_kx^k$ and $g_i(x^{p}):=x^{-i}(\sum_{k \equiv i \bmod p}a_kx^k)$ for $i \in \{0,\cdots,p-1\}$. we have $g(x)=\sum_{i=0}^{p-1}x^ig_i(x^{p})$. By Lemma \ref{gene_poly}, for $i \in \{0,\cdots,p-1\}$, \[ \sum_{n=0}^{\infty}(g_i(\Shp{t})\mathrm{L}_{A_{\ell}})(n)x^{np}=\frac{g_i(x^{p})}{(1-x^{p})^{\ell+1}}, \] where $\mathrm{L}_{A_{\ell}}(t):=\binom{t+\ell}{\ell}=\frac{(t+1)\cdots(t+\ell)}{\ell !}$. Thus, \[ \begin{split} \sum_{n=0}^{\infty}f(n)x^{n}&=\frac{g(x)}{(1-x^{p})^{\ell+1}}\\ &=\sum_{i=0}^{p-1}\frac{x^{i}g_i(x^{p})}{(1-x^{p})^{\ell+1}}\\ &=\sum_{i=0}^{p-1}\sum_{m=0}^{\infty}(g_i(\Shp{t})\mathrm{L}_{A_{\ell}})(m)x^{mp+i}. \end{split} \] We have $f(mp+i)=(g_i(\Shp{t})\mathrm{L}_{A_{\ell}})(m)$ for any $i \in \{0,\cdots,p-1\}$. We define \[ f(t) := \left\{ \begin{array}{ll} (g_1(\Shp{\frac{t}{p}})\mathrm{L}_{A_{\ell}})(\frac{t-1}{p}), & t\equiv1 \bmod p,\\ (g_2(\Shp{\frac{t}{p}})\mathrm{L}_{A_{\ell}})(\frac{t-2}{p}), & t\equiv2 \bmod p,\\ \quad \vdots\\ (g_{p-1}(\Shp{\frac{t}{p}})\mathrm{L}_{A_{\ell}})(\frac{t-(p-1)}{p}), & t\equiv p-1 \bmod p,\\ (g_{0}(\Shp{\frac{t}{p}})\mathrm{L}_{A_{\ell}})(\frac{t}{p}), & t\equiv 0 \bmod p. \end{array} \] If $g_0(1),\cdots,g_{p-1}(1)$ satisfy $g_0(1)=\cdots=g_{p-1}(1)=0$, then any $p$-th root of unity is a root of $g(t)$, that is, the polynomial $(1-t^{p})$ divides $g(t)$. Hence, there exists an integer $i \in \{0,\cdots,p-1\}$ such that $g_i(1)\neq0$. By Proposition \ref{Shift congruences}, the polynomial $(g_i(\mathrm{S})\mathrm{L}_{A_{\ell}})(t)$ has a degree $\ell$. Therefore, $\deg f=\ell$. 
\end{proof} We see the following from the proof of Lemma \ref{gene_quasi_p}. \begin{corollary}\label{gene_gcd} Use notation in the proof of Lemma \ref{gene_quasi_p}. If \[ \sum_{n=0}^{\infty}f(n)x^n=\frac{g(x)}{(1-x^{p})^{\ell+1}}, \] then $f(t)$ is a quasi-polynomial of degree $\ell$ with period $p$ such that constituents of $f(t)$ for $t\equiv i \bmod p$ and $t\equiv j \bmod p$ are the same polynomial if and only if $g(x)=\sum_{i=0}^{p-1}x^{i} g_i(x^{p})$ is a polynomial of degree at most $p (\ell+1)-1$ such that $g(x)$ cannot be divided by $(1-x^{p})$ and $x^{i}g_i(x^{p})\equiv x^{j}g_{j}(x^{p}) \bmod (1-x)^{\ell+1}$ (Polynomials $g(x)$ and $(1-x^{p})^{\ell+1}$ may have a common factor). In particular, $f(t)$ is a quasi-polynomial with gcd-property if and only if $x^{i}g_i(x^{p})\equiv x^{\mathrm{gcd}(i,p)}g_{\mathrm{gcd}(i,p)}(x^{p}) \bmod (1-x)^{\ell+1}$ for any $i\in \{0,\cdots,p-1\}$. \end{corollary}
\begin{remark} Lemma \ref{Athanasiadis's lemma_2} in the case of limiting the degree of $g(t)$ can also be proved using Corollary \ref{gene_gcd} and Lemma \ref{gene_poly}, where $g(t)$ is the notation in Lemma \ref{Athanasiadis's lemma_2}. Indeed, if a polynomial $g(x)=\cyc{p}{x}^{\ell+1}h(x)=\sum_{k}a_kx^k$ with $\deg h\leqq\ell$, that is, $\deg g\leqq p(\ell+1)-1$, then we have the formula \[ \sum_{n=0}^{\infty}f(n)x^n=\frac{g(x)}{(1-x^{p})^{\ell+1}}=\frac{h(x)}{(1-x)^{\ell+1}}. \] by Lemma \ref{gene_poly}, the quasi-polynomial $f(t)$ becomes a polynomial. Hence, we get the formulas $\sum_{k\equiv 0 \bmod p}a_kx^k \equiv \sum_{k\equiv 1 \bmod p}a_kx^k \equiv \cdots \equiv \sum_{k\equiv p-1 \bmod p}a_kx^k \bmod (1-x)^{\ell+1}$ by Corollary \ref{gene_gcd}. Conversely, if a polynomial $g(x)=\sum_{k} a_kx^k$ satisfies $\sum_{k\equiv 0 \bmod p}a_kx^k \equiv \sum_{k\equiv 1 \bmod p}a_kx^k \equiv \cdots \equiv \sum_{k\equiv p-1 \bmod p}a_kx^k \bmod (1-x)^{\ell+1}$, the polynomial $g(x)$ can be divided by $\cyc{n}{x}^{\ell+1}$ because a quasi-polynomial $f(t)$ becomes a polynomial by Corollary \ref{gene_gcd}. \end{remark}
The following Lemma is the classical fact and what is called partial fraction. We omit the proof of the following. \begin{lemma}\label{pfd} Let $g(x)$ and $h(x)$ be polynomials with $\deg g < \deg h$. Let $h_1(x),\cdots,h_n(x)$ be polynomials with $h(x)=h_1(x)h_2(x)\cdots h_n(x)$ and relatively prime to each other. Then there exist polynomials $g_1(x),\cdots,g_n(x)$ such that $\deg g_i < \deg h_i$ for any $i\in\{1,\cdots,n\}$ and \begin{equation} \frac{g(x)}{h(x)}=\frac{g_1(x)}{h_1(x)}+\cdots + \frac{g_n(x)}{h_n(x)}. \end{equation} \end{lemma} In general, a quasi-polynomial has the following decomposition into several quasi-polynomials. \begin{proposition}\label{gene_quasi_deco} Let $g(x)$ and $h(x)$ are relatively prime polynomials such that $\deg g<\deg h$ and all roots of $h(x)$ are $p$-th roots of unity. Let $X$ be the set of divisors of $p$. Let $\ell_{i}+1$ be the number of primitive $i$-th roots of unity in the roots of $h(x)$. If \[ \sum_{n=0}^{\infty}f(n)x^n=\frac{g(x)}{h(x)}, \] then there exist quasi-polynomials $\{f_{i}^{(\ell_i)}(t)\}_{i \in X}$ such that $f_{i}^{(\ell_i)}(t)$ has a degree $\ell_i$, a period $i$ and the quasi-polynomials satisfies \begin{equation}\label{claim} f(t)=\sum_{i \in X}f_{i}^{(\ell_i)}(t). \end{equation} \end{proposition} \begin{proof} Let $\{h_i(x)\}_{i \in X}$ be polynomials such that all the roots of $h_i(x)$ are all the primitive $i$-th roots of unity in the roots of $h(x)$ and $h(x)=\prod_{i \in X}h_i(x)$. By Lemma \ref{pfd}, there exist polynomials $\{g_i(x)\}_{i\in X}$ such that $\deg g_i< \deg h_i=\ell_{i}+1$ for any $i \in X$ and \[ \sum_{n=0}^{\infty}f(n)x^n=\sum_{i\in X}\frac{g_i(x)}{h_{i}(x)}. \] Let $q_i(x):=\frac{(1-x^{i})^{\ell_i+1}}{h_i(x)}$. A function $q_i(x)$ is a polynomial since all the roots of $h_i(x)$ are $i$-th roots of unity and $\deg h_i=\ell_{i}+1\leqq i(\ell_{i}+1)$. we have \[ \frac{g_i(x)}{h_i(x)}=\frac{g_i(x)q_i(x)}{(1-x^{i})^{\ell_i+1}}. 
\] Since $\deg (g_i(x)q_i(x))=\deg g_i+(i-1)(\ell_i+1)<i(\ell_i+1)=\deg ((1-x^{i})^{\ell_i+1})$, there exists the quasi-polynomial $f^{(\ell_i)}_{i}(t)$ of degree $\ell_i$ with period $i$ by Lemma \ref{gene_quasi_p}. Hence, \begin{equation}\label{above} \sum_{n=0}^{\infty}f(n)x^n=\sum_{i \in X}\sum_{n=0}^{\infty}f^{(\ell_i)}_{i}(n)x^n. \end{equation} By camparing each term of the formula (\ref{above}), we get the formula (\ref{claim}). \end{proof} \end{comment} \begin{comment} \begin{lemma} Let $f_0(t),f_1(t),\cdots,f_{n}(t)$ be quasi-polynomials of degree $\ell_{f_0}, \ell_{f_1},\cdots,\ell_{f_n}$ with period $m_{f_0},m_{f_1},\cdots,m_{f_n}$, respectively. If the quasi-polynomial $\sum{i=0}^{n}f_i(t)$ has the gcd-property, then there exists quasi-polynomials $\acute{f}(t)$ and $\acute{g}(t)$ which has the gcd-property such that $f(t)+g(t)=\acute{f}(t)+\acute{g}(t)$. \end{lemma} If a quasi-polynomial has the gcd-property, then there exists the decomposition by quasi-polynomials with gcd-property as follows. \begin{proposition}\label{gene_quasi_deco_gcd} Let $g(x)$ and $h(x)$ are relatively prime polynomials such that $\deg g<\deg h$ and all roots of $h(x)$ are $p$-th roots of unity. Let $X$ be a set of divisors of $p$. Let $\ell_{i}+1$ be the number of premitive $i$-th roots of unity in the roots of $h(x)$. If \[ \sum_{n=0}^{\infty}f(n)x^n=\frac{g(x)}{h(x)}, \] and a quasi-polynomial $f(t)$ has the gcd-property, then there exist quasi-polynomials $\{f_{i}^{(\ell_i)}(t)\}_{i \in X}$ such that $f_{i}^{(\ell_i)}(t)$ has a degree $\ell_i$, a period $i$, the gcd-property and the quasi-polynomials satisfies \begin{equation}\label{claim} f(t)=\sum_{i \in X}f_{i}^{(\ell_i)}(t). \end{equation} \end{proposition} \end{comment}
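A minimal worked instance of Lemma \ref{gene_quasi_p} and Proposition \ref{gene_quasi_deco} is $h(x)=(1-x)(1-x^{2})$, so $p=2$: the partial fraction decomposition $\frac{1}{(1-x)(1-x^{2})}=\frac{1/2}{(1-x)^{2}}+\frac{1/4}{1-x}+\frac{1/4}{1+x}$ gives $f(t)=\bigl(\frac{t}{2}+\frac{3}{4}\bigr)+\frac{(-1)^{t}}{4}$, a period-1 part of degree $1$ plus a period-2 part of degree $0$. The Python sketch below (illustrative only, not part of the paper) checks this against the series coefficients.

```python
from fractions import Fraction

def series_inv_prod(factors, N):
    """Coefficients 0..N-1 of 1 / prod_i (1 - x^{c_i}), by repeated division."""
    a = [Fraction(1)] + [Fraction(0)] * (N - 1)
    for c in factors:
        for n in range(c, N):   # divide by (1 - x^c): a_n <- a_n + a_{n-c}
            a[n] += a[n - c]
    return a

N = 40
f = series_inv_prod([1, 2], N)   # 1 / ((1 - x)(1 - x^2)), so p = 2

# decomposition: f = f_1 + f_2 with f_1(t) = t/2 + 3/4 (degree 1, period 1)
# and f_2(t) = (-1)^t / 4 (degree 0, period 2); also f(n) = floor(n/2) + 1
for n in range(N):
    f1 = Fraction(n, 2) + Fraction(3, 4)
    f2 = Fraction((-1) ** n, 4)
    assert f[n] == f1 + f2 == n // 2 + 1
print("decomposition verified; first coefficients:", [int(v) for v in f[:6]])
```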
\subsection{Root system}\label{root system} We now introduce some concepts that help to explain the results for the characteristic polynomial of the Linial arrangement given by Yoshinaga \cite{Yoshinaga_1}. Let $V=\mathbb{R}^{\ell}$ be the Euclidean space with inner product $(\cdot,\cdot)$. Let $\Phi \subset V$ be an irreducible root system with Coxeter number $h$. Fix a positive system $\Phi^{+}\subset \Phi$ and the set of simple roots $\Delta=\{\alpha_1,\cdots,\alpha_{\ell}\} \subset \Phi^{+}$. The highest root, denoted by $\tilde{\alpha} \in \Phi^{+}$, can be expressed as the linear combination $\tilde{\alpha}=\sum_{i=1}^{\ell}c_i \alpha_i$ $(c_i \in \mathbb{Z}_{>0})$. We also set $\alpha_0:=-\tilde{\alpha}$ and $c_0:=1$. Then, we have the linear relation \begin{equation} c_0\alpha_0+ c_1\alpha_1+\cdots+ c_{\ell}\alpha_{\ell}=0. \end{equation} The integers $c_0,\cdots, c_{\ell}$ have the following relation with the Coxeter number $h$: \begin{proposition}\label{coxc_coxn_relation}(\cite{Humphreys}) \begin{equation} c_0+c_1+\cdots+c_{\ell}=h. \end{equation} \end{proposition}
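Proposition \ref{coxc_coxn_relation} can be checked directly against the tables of highest-root coefficients; the marks and Coxeter numbers used below are the standard classification data (an assumption of this sketch, not computed here).

```python
# Highest-root coefficients c_1..c_l (marks) and Coxeter numbers h for several
# irreducible types; these values are standard classification data (an
# assumption of this sketch), with c_0 = 1 in every case.
data = {
    "A3": ([1, 1, 1], 4),
    "B3": ([1, 2, 2], 6),
    "C3": ([2, 2, 1], 6),
    "D4": ([1, 2, 1, 1], 6),
    "G2": ([3, 2], 6),
    "F4": ([2, 3, 4, 2], 12),
    "E6": ([1, 2, 3, 2, 1, 2], 12),
    "E7": ([2, 3, 4, 3, 2, 1, 2], 18),
    "E8": ([2, 3, 4, 5, 6, 4, 2, 3], 30),
}
for name, (marks, h) in data.items():
    assert 1 + sum(marks) == h, name   # c_0 + c_1 + ... + c_l = h
print("c_0 + c_1 + ... + c_l = h holds for all listed types")
```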
\subsection{Ehrhart quasi-polynomial for the fundamental alcove}\label{section:Eh_quasi} The coweight lattice $Z(\Phi)$ and the coroot lattice $Q(\Phi)$ are defined as follows. \[ \begin{split} Z(\Phi)&:=\Set{x\in V}{(\alpha_i,x) \in \mathbb{Z} \text{ for all } \alpha_i\in \Delta},\\ Q(\Phi)&:=\sum_{\alpha \in \Phi}\mathbb{Z}\cdot\frac{2\alpha}{(\alpha,\alpha)}. \end{split} \]
The index $f:=\#(Z(\Phi)/Q(\Phi))$ is called the index of connection. Let $\varpi_1,\cdots,\varpi_{\ell} \in Z(\Phi)$ be the dual basis of the simple roots $\alpha_1,\cdots,\alpha_{\ell}$, that is, $(\alpha_i,\varpi_j)=\delta_{ij}$. Then, $Z(\Phi)$ is a free abelian group generated by $\varpi_1,\cdots,\varpi_{\ell}$. We also have $c_i=(\varpi_i,\tilde{\alpha})$. A connected component of $V\setminus\bigcup_{\alpha\in \Phi^{+},\,k\in\mathbb{Z}}H_{\alpha,k}$ is called an alcove. Let us define the fundamental alcove $F_{\Phi}$ of type $\Phi$ as \[ \begin{split} F_{\Phi}:=\left\{
x\in V\ \middle| \begin{array}{ll} &(\alpha_i,x)>0,\ (1\leqq i\leqq\ell)\\ &(\tilde{\alpha},x)<1 \end{array} \right\}. \end{split} \] The closure $\overline{F_{\Phi}}=\Set{x \in V}{(\alpha_i,x)\geqq0\ (1\leqq i\leqq\ell),\ (\tilde{\alpha},x)\leqq 1 }$ is the convex hull of $0,\frac{\varpi_1}{c_1},\cdots, \frac{\varpi_{\ell}}{c_{\ell}}\in V$. The closed fundamental alcove $\overline{F_{\Phi}}$ is a simplex. For a positive integer $q \in \mathbb{Z}_{>0}$, we define the function $\map{\mathrm{L}_{\Phi}}{\mathbb{Z}_{>0}}{\mathbb{Z}}$ as \begin{equation} \mathrm{L}_{\Phi}(q):=\#(q F_{\Phi}\cap Z(\Phi)). \end{equation} The function $\mathrm{L}_{\Phi}$ extends to a function $\map{\mathrm{L}_{\Phi}}{\mathbb{Z}}{\mathbb{Z}}$ because $\mathrm{L}_{\Phi}(q)$ is a quasi-polynomial \cite{Beck-Robinson}. The quasi-polynomial $\mathrm{L}_{\Phi}(t)$ is called the Ehrhart quasi-polynomial for the fundamental alcove of type $\Phi$. Let $\rho$ be the minimal period of the quasi-polynomial $\mathrm{L}_{\Phi}(t)$. The quasi-polynomial $\mathrm{L}_{\Phi}(t)$ was computed for every irreducible root system $\Phi$ by Suter \cite{Suter}. In particular, for type $A_{\ell}$, the Ehrhart quasi-polynomial is $\mathrm{L}_{A_{\ell}}(t)=\binom{t+\ell}{\ell}=\frac{(t+1)\cdots(t+\ell)}{\ell !}$. The Ehrhart quasi-polynomial $\mathrm{L}_{\Phi}(t)$ satisfies the following duality. \begin{theorem}(Suter \cite{Suter}) Let $\Phi$ be an irreducible root system of rank $\ell$. If $q \in \mathbb{Z}$, then \begin{equation} \mathrm{L}_{\Phi}(-q)=(-1)^{\ell}\mathrm{L}_{\Phi}(q-h). \end{equation} \end{theorem} The following statements are true for the Ehrhart quasi-polynomial $\mathrm{L}_{\Phi}(t)$. \begin{theorem}\label{gene_Eh}(Suter \cite{Suter}) \begin{enumerate}[(1)] \item The Ehrhart quasi-polynomial $\mathrm{L}_{\Phi}(t)$ has the gcd-property. \item The degree of $\mathrm{L}_{\Phi}(t)$ is the rank of $\Phi$. \item The minimal period $\rho$ is as given in Table \ref{fig:table_Ehpara}. 
\item $\mathrm{L}_{\Phi}(-1)=\mathrm{L}_{\Phi}(-2)=\cdots= \mathrm{L}_{\Phi}(-(h-1))=0$. \item The generating function of $\mathrm{L}_{\Phi}(t)$ is \begin{equation} \sum_{n=0}^{\infty}\mathrm{L}_{\Phi}(n)x^{n}=\frac{1}{(1-x^{c_0})\cdots (1-x^{c_\ell})}. \end{equation} \end{enumerate} \end{theorem} There is a relation between the Ehrhart quasi-polynomials of the various types. \begin{proposition}\label{root Ehrhart} Let $\Phi$ be an irreducible root system of rank $\ell$. The following formula holds. \begin{equation}\label{cyclo_Ehrhart} \mathrm{L}_{A_{\ell}}(t)=\C{\mathrm{S}}\mathrm{L}_{\Phi}(t). \end{equation} \end{proposition} \begin{proof} The Ehrhart quasi-polynomial for the fundamental alcove of any irreducible root system $\Phi$ satisfies $\mathrm{L}_{\Phi}(-1)= \mathrm{L}_{\Phi}(-2)=\cdots=\mathrm{L}_{\Phi}(-(h-1))=0$. Hence, by Proposition \ref{coxc_coxn_relation}, \begin{equation}\label{Eh_coxc_shift} [c_0]_x \cdots[c_{\ell}]_x \sum_{n=0}^{\infty}\mathrm{L}_{\Phi}(n)x^n=\sum_{n=0}^{\infty}([c_0]_{\mathrm{S}}\cdots[c_{\ell}]_{\mathrm{S}}\mathrm{L}_{\Phi})(n)x^n. \end{equation} On the left-hand side of (\ref{Eh_coxc_shift}), by Theorem \ref{gene_Eh}, we can write \[ \begin{split} [c_0]_x \cdots[c_{\ell}]_x \sum_{n=0}^{\infty}\mathrm{L}_{\Phi}(n)x^n&=[c_0]_x \cdots[c_{\ell}]_x \frac{1}{(1-x^{c_0})\cdots(1-x^{c_{\ell}})}\\ &=\frac{1}{(1-x)^{\ell+1}}\\ &=\sum_{n=0}^{\infty}\mathrm{L}_{A_{\ell}}(n)x^n. \end{split} \] Therefore, by comparing each term of (\ref{Eh_coxc_shift}), we obtain the formula given in (\ref{cyclo_Ehrhart}). \end{proof} \begin{remark} Note that $(1-\mathrm{S})\mathrm{L}_{A_{\ell}}(t)= \mathrm{L}_{A_{\ell-1}}(t)$. We obtain relations between the Ehrhart quasi-polynomials of root systems of different ranks from (\ref{cyclo_Ehrhart}) and Proposition \ref{Shift congruences}. The following are some examples. \[ (1-\mathrm{S}^2)\mathrm{L}_{C_{\ell}}(t)=\mathrm{L}_{C_{\ell-1}}(t). \] \[ (1-\mathrm{S}^2)\mathrm{L}_{D_{\ell}}(t) =\mathrm{L}_{D_{\ell-1}}(t). 
\] \[ [3]_{\mathrm{S}}[4]_{\mathrm{S}}(1-\mathrm{S})\mathrm{L}_{E_{7}}(t) =\mathrm{L}_{E_{6}}(t). \] \[ [2]_{\mathrm{S}^2}[5]_{\mathrm{S}}[6]_{\mathrm{S}}(1-\mathrm{S})\mathrm{L}_{E_{8}}(t)=\mathrm{L}_{E_{7}}(t). \] \[ [2]_{\mathrm{S}}[4]_{\mathrm{S}}(1-\mathrm{S})^2\mathrm{L}_{F_{4}}(t) =\mathrm{L}_{G_{2}}(t). \] \[ (1-\mathrm{S})^2\mathrm{L}_{E_{6}}(t) =(1+\mathrm{S}^2)\mathrm{L}_{F_{4}}(t). \] \end{remark} Let $\hat{c}_0,\cdots,\hat{c}_{\hat{\ell}}$ be all the different integers in $c_0,\cdots, c_{\ell}$ and $\ell_{\hat{c}_k}+1$ be the number of multiples of $\hat{c}_k$ in $c_0,\cdots, c_{\ell}$. Theorem \ref{gene_Eh} and Proposition \ref{gene_quasi_deco} lead to the following decomposition of the Ehrhart quasi-polynomial $\mathrm{L}_{\Phi}(t)$. \begin{proposition}\label{Eh_deco} For any irreducible root system $\Phi$, there exist quasi-polynomials $\{\Ehf{k}{(\ell_k)}\}_{k \in \{\hat{c}_0,\cdots, \hat{c}_{\hat{\ell}}\}}$ such that \begin{equation}\label{Eh_sum} \mathrm{L}_{\Phi}(t)=\sum_{k\in \{\hat{c}_0,\cdots, \hat{c}_{\hat{\ell}}\}}\Ehf{k}{(\ell_{k})}(t), \end{equation} where $\Ehf{k}{(\ell_{k})}$ has period $k$ and degree $\ell_{k}$. \end{proposition} \begin{comment} \begin{proof} Let $\divcoxc{i}:=\frac{\rho}{c_{i}}$ for $i \in \{0,\cdots,\ell\}$. By Theorem \ref{gene_Eh} (Suter \cite{Suter}), \[ \begin{split} \sum_{n=0}^{\infty}\mathrm{L}_{\Phi}(n)t^n&=\frac{1}{(1-t^{c_{0}})\cdots(1-t^{c_{\ell}})},\\ &=\frac{\cyc{\divcoxc{0}}{t^{c_{0}}}\cdots \cyc{\divcoxc{\ell}}{t^{c_{\ell}}}}{(1-t^{\rho})^{\ell+1}}. \end{split} \] We obtain formula (\ref{Eh_sum}) by Lemma \ref{gene_quasi_deco}. \end{proof} \end{comment} \begin{remark} We can express Proposition \ref{Eh_deco} in a different way from formula (\ref{Eh_sum}). First, note that a quasi-polynomial $f(t)$ of degree $\ell$ can be expressed with the periodic functions $p^{0}(t),\cdots,p^{\ell}(t)$ as follows. \[ f(t)=p^{\ell}(t)t^{\ell}+\cdots+p^1(t)t+p^0(t). 
\] Let $p_i^{j}(t)$ be a periodic function with period $i$. In the case of $E_6$, Proposition \ref{Eh_deco} can be expressed as follows. \[ \begin{split} \Ehf{1}{(6)}(t)&=p_1^6(t)t^6+ p_1^5(t)t^5+ p_1^4(t)t^4+ p_1^3(t)t^3+ p_1^2(t)t^2+ p_1^1(t)t+ p_1^0(t).\\ \Ehf{2}{(2)}(t)&=p_2^2(t)t^2+ p_2^1(t)t+ p_2^{0}(t).\\ \Ehf{3}{(0)}(t)&=p_3^{0}(t). \end{split} \] \[ \begin{split} \mathrm{L}_{E_6}(t)&=\Ehf{1}{(6)}(t)+\Ehf{2}{(2)}(t)+\Ehf{3}{(0)}(t)\\ &=p_{1}^{6}(t)t^6+ p_1^{5}(t)t^5+p_1^{4}(t)t^4+p_1^{3}(t)t^3+\Bigl(p_1^{2}(t)+p_2^{2}(t)\Bigr)t^2\\ &\quad +\Bigl(p_1^{1}(t)+p_2^{1}(t)\Bigr)t+\Bigl(p_1^{0}(t)+p_2^{0}(t)+p_3^{0}(t)\Bigr). \end{split} \] From this expression, we can see that the parts of degree $6,5,4$, and $3$ have period $1$, the parts of degree $2$ and $1$ have period $2$, and the part of degree $0$ has period $6$, since the period of a sum of periodic functions is the least common multiple of the periods of the summands. Note that the quasi-polynomials $\{\Ehf{k}{(\ell_k)}\}_{k \in \{\hat{c}_0,\cdots,\hat{c}_{\hat{\ell}}\}}$ are not unique, because each periodic coefficient of the quasi-polynomial $\mathrm{L}_{\Phi}(t)$ only needs to be expressed as some sum of periodic functions, and such a decomposition can be chosen in more than one way. \end{remark}
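As a numerical sanity check (not used in the arguments), the type-$A$ formula and Suter's duality can be verified by direct lattice-point counting; here the closed fundamental alcove of $A_{\ell}$ is identified with the standard simplex, which is an assumption about the normalization of the coordinates and of the lattice $Z(\Phi)$:

```python
from itertools import product
from math import comb, factorial, prod

def closed_alcove_count_A(ell, q):
    """Lattice points of the q-dilated closed fundamental alcove of A_ell,
    identified with {x in Z_{>=0}^ell : x_1 + ... + x_ell <= q}."""
    return sum(1 for x in product(range(q + 1), repeat=ell) if sum(x) <= q)

ell = 3                                   # type A_3, Coxeter number h = ell + 1
h = ell + 1
# L_{A_ell}(t) = binom(t+ell, ell) = (t+1)...(t+ell)/ell!, valid at any integer t
L = lambda t: prod(t + i for i in range(1, ell + 1)) // factorial(ell)

for q in range(7):
    assert closed_alcove_count_A(ell, q) == comb(q + ell, ell) == L(q)

# Suter's duality L(-q) = (-1)^ell L(q-h), and the vanishing L(-1) = ... = L(-(h-1)) = 0
for q in range(-10, 11):
    assert L(-q) == (-1)**ell * L(q - h)
assert all(L(-j) == 0 for j in range(1, h))
```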
\begin{table}[htbp] \centering \caption{Table of root systems.} {\footnotesize
\begin{tabular}{c|l|l|l|c|c|c|c} $\Phi$&$c_0, \cdots, c_\ell$&$\hat{c}_0, \cdots, \hat{c}_{\hat{\ell}}$&$\ell_{\hat{c}_0}, \cdots, \ell_{\hat{c}_{\hat{\ell}}} $&$\hat{\ell}$&$\rho$&$\mathrm{rad}(\rho)$&$h$\\ \hline\hline $A_\ell$&$1,1,1,\dots,1$&$1$&$\ell$&$1$&$1$&$1$&$\ell+1$\\ $B_\ell, C_\ell$&$1,1,2,2,\dots,2$&$1,2$&$\ell,\ell-2$&$2$&$2$&$2$&$2\ell$\\ $D_\ell$&$1,1,1,1,2,\dots,2$&$1,2$&$\ell,\ell-4$&$2$&$2$&$2$&$2\ell-2$\\ $E_6$&$1,1,1,2,2,2,3$&$1,2,3$&$6,2,0$&$3$&$6$&$6$&$12$\\ $E_7$&$1,1,2,2,2,3,3,4$&$1,2,3,4$&$7,3,1,0$&$4$&$12$&$6$&$18$\\ $E_8$&$1,2,2,3,3,4,4,5,6$&$1,2,3,4,5,6$&$8,4,2,1,0,0$&$6$&$60$&$30$&$30$\\ $F_4$&$1,2,2,3,4$&$1,2,3,4$&$4,2,0,0$&$4$&$12$&$6$&$12$\\ $G_2$&$1,2,3$&$1,2,3$&$2,0,0$&$3$&$6$&$6$&$6$ \end{tabular} } \label{fig:table_Ehpara} \end{table}
\subsection{Eulerian polynomial}\label{section:generalized Eulerian} We summarize some facts about the Eulerian polynomial and the generalized Eulerian polynomial with reference to \cite{Yoshinaga_1}. \begin{definition}[Eulerian polynomial] For a permutation $\tau \in \mathfrak{S}_{n} $, define \[ a(\tau):=\#\Set{i\in \{1,\cdots,n-1\}}{\tau(i)<\tau(i+1)}. \] Then, \begin{equation} \mathrm{A}(n,k):=\#\Set{\tau \in \mathfrak{S}_{n}}{a(\tau)=k-1} \end{equation} $(1\leqq k\leqq n)$ is called the Eulerian number and the generating polynomial \begin{equation} \mathrm{A}_{n}(t):=\sum_{k=1}^{n}\mathrm{A}(n,k)t^k=\sum_{\tau \in \mathfrak{S}_{n}}t^{1+a(\tau)} \end{equation} is called the Eulerian polynomial. Define $\mathrm{A}_0(t)=1$. \end{definition} The Eulerian polynomial $\mathrm{A}_{\ell}(t)$ satisfies the duality $\mathrm{A}_{\ell}(t)=t^{\ell+1}\mathrm{A}_{\ell}(\frac{1}{t})$. The following theorem is the so-called Worpitzky identity. \begin{theorem}(Worpitzky \cite{Worpitzky}) Note that $\mathrm{L}_{A_{\ell}}(t)=\binom{t+\ell}{\ell}=\frac{(t+1)\cdots(t+\ell)}{\ell!}$. Then, \begin{equation}\label{Worpitzky identity} t^{\ell}=\mathrm{A}_{\ell}(\mathrm{S})\mathrm{L}_{A_{\ell}}(t). \end{equation} \end{theorem} The Eulerian polynomial also satisfies the following congruence. \begin{theorem}\label{Eulerian}(\cite{Iijima-Sasaki-Takahashi-Yoshinaga}, \cite{Yoshinaga_1}) Let $\ell \geqq 1$, $n \geqq 2$. Then, \begin{equation} \mathrm{A}_{\ell}(t^n) \equiv \frac{1}{n^{\ell+1}}[n]_{t}^{\ell+1}\mathrm{A}_{\ell}(t)\ \bmod \ (1-t)^{\ell+1}. \end{equation} \end{theorem} Lam and Postnikov introduced the following generalization of Eulerian polynomials \cite{Lam-Postnikov}. \begin{definition}[Generalized Eulerian polynomial] Let $W$ be the Weyl group of an irreducible root system $\Phi$. For $\omega\in W$, the integer $\mathrm{asc}(\omega) \in \mathbb{Z}$ is defined by \[ \mathrm{asc}(\omega):=\sum_{\underset{\omega(\alpha_i)>0}{0\leqq i\leqq \ell}}c_i. 
\] Then, \begin{equation} \mathrm{R}_{\Phi}(t):=\frac{1}{f}\sum_{\omega \in W}t^{\mathrm{asc}(\omega)} \end{equation} is called the generalized Eulerian polynomial of type $\Phi$. \end{definition} The generalized Eulerian polynomial $\mathrm{R}_{\Phi}(t)$ can be expressed in terms of the cyclotomic type polynomial $[c]_t$ and the Eulerian polynomial $\mathrm{A}_{\ell}(t)$. \begin{theorem}\label{Lam-Postnikov}(Lam--Postnikov \cite{Lam-Postnikov}, Theorem 10.1) Let $\Phi$ be an irreducible root system of rank $\ell$. Then, \begin{equation} \mathrm{R}_{\Phi}(t)=[c_0]_{t}[c_1]_{t} \cdots [c_{\ell}]_{t}\mathrm{A}_ {\ell}(t). \end{equation} \end{theorem} Some basic properties of the generalized Eulerian polynomial $\mathrm{R}_{\Phi}(t)$ follow from Theorem \ref{Lam-Postnikov} (Lam--Postnikov \cite{Lam-Postnikov}). \begin{proposition} \begin{enumerate}[(1)] \item $\deg \mathrm{R}_{\Phi}=h-1$. \item $t^{h}\mathrm{R}_{\Phi}(\frac{1}{t})=\mathrm{R}_{\Phi}(t)$. \item $\mathrm{R}_{A_{\ell}}(t)=\mathrm{A}_{\ell}(t)$. \end{enumerate} \end{proposition} We can obtain the following formula from Theorems \ref{Eulerian} and \ref{Lam-Postnikov}. \begin{proposition}\label{g_Eulerian_cong} Let $\Phi$ be an irreducible root system of rank $\ell$. Let $n$ be a positive integer. Then, \begin{equation} \mathrm{R}_{\Phi}(t^{n})\equiv (\prod_{i=0}^{\ell}\frac{1}{n}\cyc{n}{t^{c_i}})\mathrm{R}_{\Phi}(t) \bmod (1-t)^{\ell+1}. \end{equation} \end{proposition} \begin{proof} Using Theorems \ref{Eulerian} and \ref{Lam-Postnikov}, we calculate the following. 
\[ \begin{split} \mathrm{R}_{\Phi}(t^{n})&=[c_0]_{t^{n}}[c_1]_{t^{n}} \cdots [c_{\ell}]_{t^{n}}\mathrm{A}_{\ell}(t^{n})\\ &\equiv [c_0]_{t^{n}}[c_1]_{t^{n}} \cdots [c_{\ell}]_{t^{n}}(\frac{1}{n^{\ell+1}}[n]_{t}^{\ell+1}\mathrm{A}_{\ell}(t)) \bmod (1-t)^{\ell+1}\\ &\equiv \frac{1}{n^{\ell+1}}[c_0 n]_{t}[c_1 n]_{t} \cdots [c_{\ell} n]_{t}\mathrm{A}_{\ell}(t) \bmod (1-t)^{\ell+1}\\ &\equiv \frac{1}{n^{\ell+1}}[n]_{t^{c_0}} [n]_{t^{c_1} }\cdots [n]_{t^{c_{\ell}}}[c_0]_{t}[c_1]_{t} \cdots [c_{\ell}]_{t}\mathrm{A}_{\ell}(t) \bmod (1-t)^{\ell+1}\\ &\equiv \frac{1}{n^{\ell+1}}[n]_{t^{c_0}} [n]_{t^{c_1} }\cdots [n]_{t^{c_{\ell}}} \mathrm{R}_{\Phi}(t) \bmod (1-t)^{\ell+1}.\\ \end{split} \] \end{proof}
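The Eulerian numbers and the congruence of Theorem \ref{Eulerian} can be checked by brute force for small parameters; the following sketch uses sympy for the polynomial division, and the helper names `eulerian_poly` and `bracket` are ad hoc:

```python
from itertools import permutations
import sympy as sp

t = sp.symbols('t')

def eulerian_poly(n):
    # A_n(t) = sum over tau in S_n of t^(1 + #ascents(tau)); A_0(t) = 1
    if n == 0:
        return sp.Integer(1)
    return sp.expand(sum(t**(1 + sum(1 for i in range(n - 1) if p[i] < p[i + 1]))
                         for p in permutations(range(n))))

def bracket(c):
    # cyclotomic-type polynomial [c]_t = 1 + t + ... + t^(c-1)
    return sum(t**i for i in range(c))

# sanity: A_3(t) = t + 4 t^2 + t^3 and A_n(1) = n!
assert eulerian_poly(3) == sp.expand(t + 4*t**2 + t**3)
assert eulerian_poly(4).subs(t, 1) == 24

# Theorem (Eulerian congruence): for ell >= 1, n >= 2,
# A_ell(t^n) - (1/n^(ell+1)) [n]_t^(ell+1) A_ell(t) is divisible by (1-t)^(ell+1)
ell, n = 3, 2
diff = sp.expand(eulerian_poly(ell).subs(t, t**n)
                 - sp.Rational(1, n**(ell + 1)) * bracket(n)**(ell + 1) * eulerian_poly(ell))
_, rem = sp.div(diff, sp.expand((1 - t)**(ell + 1)), t)
assert sp.expand(rem) == 0
```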
\subsection{Postnikov--Stanley Linial arrangement conjecture}\label{section:Conjecture} Let $V$ be a vector space with the inner product $(\cdot,\cdot)$. For any integer $k\in \mathbb{Z}$ and $\alpha \in V$, the affine hyperplane $H_{\alpha,k}$ is defined by \begin{equation} H_{\alpha,k}:=\Set{x \in V}{(\alpha,x)=k}.\end{equation} Let $a,b\in \mathbb{Z}$ be integers with $a\leqq b$. Define the hyperplane arrangement $\mathcal{A}_{\Phi}^{[a,b]}$ as follows. \begin{equation} \mathcal{A}_{\Phi}^{[a,b]}:=\Set{H_{\alpha,k}}{\alpha \in \Phi^{+}, k \in \mathbb{Z}, a\leqq k\leqq b}. \end{equation} Note that we define $\mathcal{A}_{\Phi}^{[1,0]}$ to be the empty arrangement. The hyperplane arrangement $\mathcal{A}_{\Phi}^{[a,b]}$ is called the truncated affine Weyl arrangement. In particular, $\mathcal{A}_{\Phi}^{[1,n]}$ is called the Linial arrangement. Let us denote by $\chi(\mathcal{A}_{\Phi}^{[a,b]},t)$ the characteristic polynomial of $\mathcal{A}_{\Phi}^{[a,b]}$. Postnikov and Stanley conjectured the following for $\chi(\mathcal{A}_{\Phi}^{[a,b]},t)$. \begin{conjecture}\label{Postnikov-Stanley_2}(Postnikov--Stanley \cite{Postnikov-Stanley}, Conjecture 9.14) Let $a,b \in \mathbb{Z}$ with $a \leqq 1 \leqq b$. Suppose that $1 \leqq a+b$. Then, every root $z \in \mathbb{C}$ of the equation $\chi(\mathcal{A}_{\Phi}^{[a,b]},t)=0$ satisfies $\operatorname{Re} z=\frac{(b-a+1)h}{2}$. \end{conjecture} If Conjecture \ref{Postnikov-Stanley_2} is true for the Linial arrangements $\mathcal{A}_{\Phi}^{[1,n]}$, then it is true in general, by the following theorem. \begin{theorem}(Yoshinaga \cite{Yoshinaga_1}) Let $n \geqq 0$ and $k\geqq 0$. The characteristic polynomial of the Linial arrangement $\mathcal{A}_{\Phi}^{[1,n]}$ satisfies \begin{equation}\label{Ch_para_shift_2} \chi(\mathcal{A}_{\Phi}^{[1,n]},t)=\chi(\mathcal{A}_{\Phi}^{[1-k,n+k]},t+kh). 
\end{equation} \end{theorem} For classical root systems, the formula in (\ref{Ch_para_shift_2}) was proved by Athanasiadis \cite{Athanasiadis_0, Athanasiadis}.\par Conjecture \ref{Postnikov-Stanley_2} was proved by Postnikov and Stanley for $\Phi=A_{\ell}$ \cite{Postnikov-Stanley}, and by Athanasiadis for $\Phi= A_{\ell},B_{\ell},C_{\ell},D_{\ell}$ \cite{Athanasiadis}. Yoshinaga \cite{Yoshinaga_1} verified Conjecture \ref{Postnikov-Stanley_2} for $E_6,E_7,E_8,F_4$ when the parameter $n>0$ of the Linial arrangement $\mathcal{A}_{\Phi}^{[1,n]}$ satisfies \begin{equation}\label{the parameter} n \equiv -1 \left\{ \begin{array}{ll} \bmod \quad 6, &\Phi=E_6, E_7, F_4\\ \bmod \quad 30, &\Phi=E_8. \end{array}\right. \end{equation} He also verified Conjecture \ref{Postnikov-Stanley_2} for the exceptional root systems when the parameter $n$ is a sufficiently large integer \cite{Yoshinaga_2}. The case $\Phi=G_2$ is easy.\par In proving the conjecture for the cases in (\ref{the parameter}), Yoshinaga worked from the perspective of the characteristic quasi-polynomial \cite{Yoshinaga_1}, a notion introduced by Kamiya et al.~\cite{Kamiya-Takemura-Terao_0}. One of the most important properties of the characteristic quasi-polynomial is that it coincides with the characteristic polynomial at integers relatively prime to its period as a quasi-polynomial. Let us denote by $\chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)$ the characteristic quasi-polynomial of $\mathcal{A}_{\Phi}^{[1,n]}$. Yoshinaga proved the following explicit formula for $\chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)$. \begin{theorem}\label{characteristic_quasi_poly}(Yoshinaga \cite{Yoshinaga_1}) Let $n \geqq 0$. The characteristic quasi-polynomial of the Linial arrangement $\mathcal{A}_{\Phi}^{[1,n]}$ is \begin{equation}\label{Ch_Yoshinaga} \chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)=\mathrm{R}_{\Phi}(\mathrm{S}^{n+1})\mathrm{L}_{\Phi}(t). 
\end{equation} \end{theorem} From (\ref{Ch_Yoshinaga}), we see that $\chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t) $ has the same period as $\mathrm{L}_{\Phi}(t)$, namely, the period $\rho$. Note that $\chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)= \chi(\mathcal{A}_{\Phi}^{[1,n]},t)$ when $t \equiv 1 \bmod \rho$. We will calculate the left-hand side of (\ref{Ch_Yoshinaga}) in the following section.\par When $n=0$, that is, $\mathcal{A}_{\Phi}^{[1,0]}=\emptyset$, Theorem \ref{characteristic_quasi_poly} leads to the following generalization of the Worpitzky identity (\ref{Worpitzky identity}) \cite{Yoshinaga_1,Yoshinaga_2}. \begin{theorem}\label{g-Eulerian}(Yoshinaga \cite{Yoshinaga_1}) \begin{equation} t^{\ell}=\mathrm{R}_{\Phi}(\mathrm{S})\mathrm{L}_{\Phi}(t). \end{equation} \end{theorem}
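Theorem \ref{g-Eulerian} can be spot-checked numerically for $\Phi=C_2$, where $c_0,c_1,c_2=1,1,2$, so by Theorem \ref{Lam-Postnikov} $\mathrm{R}_{C_2}(t)=[2]_t\mathrm{A}_2(t)=t+2t^2+t^3$, and by the generating function of Theorem \ref{gene_Eh} $\mathrm{L}_{C_2}(n)=\#\{(a,b,c)\in\mathbb{Z}_{\geqq0}^{3} : a+b+2c=n\}$. The sketch below assumes that $\mathrm{S}$ acts as the shift $(\mathrm{S}f)(t)=f(t-1)$:

```python
def L_C2(n):
    # coefficient of x^n in 1/((1-x)^2 (1-x^2)): #{(a, b, c) >= 0 : a + b + 2c = n}
    return sum(n - 2*c + 1 for c in range(n // 2 + 1)) if n >= 0 else 0

# R_{C_2}(t) = [1]_t [1]_t [2]_t A_2(t) = (1 + t)(t + t^2) = t + 2 t^2 + t^3
R = {1: 1, 2: 2, 3: 1}                      # exponent of S -> coefficient

# generalized Worpitzky identity  t^ell = R_Phi(S) L_Phi(t)  with (S f)(t) = f(t - 1)
for n in range(3, 30):
    assert sum(coef * L_C2(n - k) for k, coef in R.items()) == n**2
```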
\section{Main results} \subsection{Postnikov--Stanley Linial arrangement conjecture when the parameter $n+1$ is relatively prime to the period}\label{section:main_formula} \begin{theorem}\label{main theorem} Let $n\geqq0$. \begin{equation} \chi _{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)=\mathrm{R}_{\Phi}(\overline {\mathrm{S}}^{n+1})\avEh{\Phi}{\mathrm{gcd}(n+1,\rho)}(t). \end{equation} In particular, $\chi _{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)$ has the period $\mathrm{gcd}(n+1,\rho)$. \end{theorem} \begin{proof} Let $\Phi$ be an irreducible root system of rank $\ell$. We can define the polynomial \[ g_k(t^{n+1}):= \frac{\C{t^{n+1}} \mathrm{A}_{\ell}(t^{n+1})}{[\hat{c}_k]_{t^{n+1}}^{\ell_{\hat{c}_k}+1}} \] for any $k\in\{0,\cdots,\hat{\ell}\}$ because $[\hat{c}_k]_{t^{n+1}}^{\ell_{\hat{c}_k}+1}$ divides $\C{t^{n+1}}$. By Proposition \ref{Eh_deco}, \begin{equation} \mathrm{L}_{\Phi}(t)=\sum_{k\in \{\hat{c}_0,\cdots, \hat{c}_{\hat{\ell}}\}}\Ehf{k}{(\ell_{k})}(t). \end{equation} Note that $\Ehf{\hat{c}_k}{(\ell_{\hat{c}_k})}(t)$ is a quasi-polynomial of degree $\ell_{\hat{c}_k}$ with period $\hat{c}_k$. Because $\rho$ is a multiple of $\hat{c}_k$, by Proposition \ref{tilde-gcd}, we obtain $\widetilde{\Ehf{\hat{c}_k} {(\ell_{\hat{c}_k})}}^{\mathrm{gcd}(n+1,\rho)} (t)=\widetilde{\Ehf{\hat{c}_k} {(\ell_{\hat{c}_k})}}^{\mathrm{gcd}(n+1,\hat{c}_k)}(t)$. By Proposition \ref{averaging}, for any $k\in\{0,\cdots,\hat{\ell}\}$, \[ [\hat{c}_k]_{\mathrm{S}^{n+1}}^{\ell_{\hat{c}_k}+1}g_k(\mathrm{S}^{n+1}) \Ehf{\hat{c}_k}{(\ell_{\hat{c}_k})}(t) = [\hat{c}_k]_{\overline{\mathrm{S}}^{n+1}}^{\ell_{\hat{c}_k}+1} g_k(\overline{\mathrm{S}}^{n+1}) \widetilde{\Ehf{\hat{c}_k} {(\ell_{\hat{c}_k})}}^{\mathrm{gcd}(n+1,\rho)}(t). 
\] Therefore, by Lemma \ref{tilde_linear} and Theorem \ref{Lam-Postnikov}, \[ \begin{split} \chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)&=\mathrm{R}_{\Phi}(\mathrm{S}^{n+1})\mathrm{L}_{\Phi}(t)\\ &=\mathrm{R}_{\Phi}(\mathrm{S}^{n+1})(\sum_{k\in \{\hat{c}_0,\cdots,\hat{c}_{\hat{\ell}}\}}\Ehf{k}{(\ell_{k})}(t))\\ &=(\C{\mathrm{S}^{n+1}}) \mathrm{A}_{\ell}(\mathrm{S}^{n+1}) (\sum_{k\in \{\hat{c}_0,\cdots,\hat{c}_{\hat{\ell}}\}}\Ehf{k}{(\ell_{k})}(t))\\ &=[\hat{c}_0]_{\mathrm{S}^{n+1}}^{\ell_{\hat{c}_0}+1}g_0(\mathrm{S}^{n+1})\Ehf{\hat{c}_0}{(\ell_{\hat{c}_0})}(t)+\cdots+ [\hat{c}_{\hat{\ell}}]_{\mathrm{S}^{n+1}}^{\ell_{\hat{c}_{\hat{\ell}}}+1}g_{\hat{\ell}}(\mathrm{S}^{n+1})\Ehf{\hat{c}_{\hat{\ell}}}{(\ell_{\hat{c}_{\hat{\ell}}})}(t)\\ &=[\hat{c}_0]_{\overline{\mathrm{S}}^{n+1}}^{\ell_{\hat{c}_0}+1}g_0(\overline{\mathrm{S}}^{n+1})\widetilde{\Ehf{\hat{c}_0}{(\ell_{\hat{c}_0})}}^{\mathrm{gcd}(n+1,\rho)}(t)+\cdots+ [\hat{c}_{\hat{\ell}}]_{\overline{\mathrm{S}}^{n+1}}^{\ell_{\hat{c}_{\hat{\ell}}}+1}g_{\hat{\ell}}(\overline{\mathrm{S}}^{n+1}) \widetilde{\Ehf{\hat{c}_{\hat{\ell}}}{(\ell_{\hat{c}_{\hat{\ell}}})}}^{\mathrm{gcd}(n+1,\rho)}(t)\\ &=(\C{\overline{\mathrm{S}}^{n+1}})\mathrm{A}_{\ell}(\overline{\mathrm{S}}^{n+1})(\sum_{k\in \{\hat{c}_0,\cdots,\hat{c}_{\hat{\ell}}\}}\widetilde{\Ehf{k}{(\ell_{k})}}^{\mathrm{gcd}(n+1,\rho)}(t))\\ &=\mathrm{R}_{\Phi}(\overline{\mathrm{S}}^{n+1})(\sum_{k\in \{\hat{c}_0,\cdots,\hat{c}_{\hat{\ell}}\}}\widetilde{\Ehf{k}{(\ell_{k})}}^{\mathrm{gcd}(n+1,\rho)}(t))\\ &=\mathrm{R}_{\Phi}(\overline{\mathrm{S}}^{n+1})\avEh{\Phi}{\mathrm{gcd}(n+1,\rho)}(t). \end{split} \] The characteristic quasi-polynomial $\chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)$ has the period $\mathrm{gcd}(n+1,\rho)$ because $\avEh{\Phi}{\mathrm{gcd}(n+1,\rho)}$ has the period $\mathrm{gcd}(n+1,\rho)$. \end{proof} Note that $\avEh{\Phi}{1}(t)$ is a polynomial. The following comes immediately from Theorems \ref{g-Eulerian} and \ref{main theorem}. 
\begin{corollary}\label{gene_worpitzky_pol} \begin{equation} t^{\ell}=\mathrm{R}_{\Phi}(\mathrm{S})\tilde{\mathrm{L}}^1_{\Phi}(t). \end{equation} \end{corollary} \begin{remark} Proposition \ref{root Ehrhart} can also be proved using Corollary \ref{gene_worpitzky_pol}. First, note that if a function $f(t)$ is a polynomial, then $(\mathrm{S} f)(t)=(\overline{\mathrm{S}}f)(t)$. By Corollary \ref{gene_worpitzky_pol} and Theorem \ref{Lam-Postnikov}, \[ \begin{split} \mathrm{A}_{\ell}(\overline{\mathrm{S}})\mathrm{L}_{A_{\ell}}(t)&=\mathrm{R}_{\Phi}(\overline{\mathrm{S}})\avEh{\Phi}{1}(t)\\ &=\C{\overline{\mathrm{S}}}\mathrm{A}_{\ell}(\overline{\mathrm{S}})\avEh{\Phi}{1}(t)\\ &=\mathrm{A}_{\ell}(\overline{\mathrm{S}})\C{\overline{\mathrm{S}}}\avEh{\Phi}{1}(t)\\ &=\mathrm{A}_{\ell}(\overline{\mathrm{S}})\C{\mathrm{S}}\mathrm{L}_{\Phi}(t). \end{split} \] Thus, \[ \mathrm{A}_{\ell}(\mathrm{S})(\mathrm{L}_{A_{\ell}}(t)-\C{\mathrm{S}}\mathrm{L}_{\Phi}(t))=0. \] If $(\mathrm{L}_{A_{\ell}}(t)-\C{\mathrm{S}}\mathrm{L}_{\Phi}(t))\neq 0$, then Proposition \ref{Shift congruences} implies that $(1-\mathrm{S})$ divides $\mathrm{A}_{\ell}(\mathrm{S})$, but $(1-\mathrm{S})$ does not divide $\mathrm{A}_{\ell}(\mathrm{S})$. Hence, \[ \mathrm{L}_{A_{\ell}}(t)-\C{\mathrm{S}}\mathrm{L}_{\Phi}(t)=0. \] \end{remark}
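Proposition \ref{root Ehrhart} also admits a numerical spot-check for $\Phi=C_2$, where $\C{\mathrm{S}}=[1]_{\mathrm{S}}[1]_{\mathrm{S}}[2]_{\mathrm{S}}$ reduces to $1+\mathrm{S}$; the sketch again assumes that $\mathrm{S}$ is the shift $f(t)\mapsto f(t-1)$:

```python
from math import comb

def L_C2(n):
    # Ehrhart count for the C_2 alcove: #{(a, b, c) >= 0 : a + b + 2c = n}
    return sum(n - 2*c + 1 for c in range(n // 2 + 1))

# Proposition: L_{A_2}(t) = [1]_S [1]_S [2]_S L_{C_2}(t) = L_{C_2}(t) + L_{C_2}(t - 1)
for n in range(1, 30):
    assert comb(n + 2, 2) == L_C2(n) + L_C2(n - 1)
```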
\begin{theorem}\label{corollary_1} Let $m:=\frac{n+1}{\mathrm{gcd}(n+1,\rho)}$. Then, \begin{equation} \chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)=(\prod_{j=0}^{\ell}\frac{1}{m}[m]_{\mathrm{S}^{c_j\cdot \mathrm{gcd}(n+1,\rho)}}) \chi _{quasi}(\mathcal{A}_{\Phi}^{[1,\mathrm{gcd}(n+1,\rho)-1]},t). \end{equation} \end{theorem} \begin{proof} By Theorem \ref{main theorem}, Lemma \ref{S_bar_linear}, and Proposition \ref{g_Eulerian_cong}, \[ \begin{split} \chi _{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t) &=\mathrm{R}_{\Phi}(\overline{\mathrm{S}}^{n+1})\avEh{\Phi}{\mathrm{gcd}(n+1,\rho)}(t)\\ &= \frac{1}{m^{\ell+1}} ([m]_{\overline{\mathrm{S}}^{c_0 \mathrm{gcd}(n+1,\rho)}}\cdots [m]_{\overline{\mathrm{S}}^{c_{\ell} \mathrm{gcd}(n+1,\rho)}})\mathrm{R}_{\Phi}(\overline{\mathrm{S}}^{\mathrm{gcd}(n+1,\rho)})\avEh{\Phi}{\mathrm{gcd}(n+1,\rho)}(t)\\ &=\frac{1}{m^{\ell+1}} ([m]_{\mathrm{S}^{c_0 \mathrm{gcd}(n+1,\rho)}}\cdots [m]_{\mathrm{S}^{c_{\ell} \mathrm{gcd}(n+1,\rho)}}) \chi _{quasi}(\mathcal{A}_{\Phi}^{[1, \mathrm{gcd}(n+1,\rho)-1]},t).\\ \end{split} \] \end{proof} We prove Conjecture \ref{Postnikov-Stanley_2} using the following lemma, as used in \cite{Athanasiadis}, \cite{Postnikov-Stanley}, and \cite{Yoshinaga_1}. \begin{lemma}\label{Postnikov-Stanley's lemma}(Postnikov--Stanley \cite{Postnikov-Stanley}, Lemma 9.13)
Let $f(t) \in \mathbb{C}[t]$. Suppose that all the roots of the equation $f(t)=0$ have real parts that are equal to $a$. Let $g(\mathrm{S}) \in \mathbb{C}[\mathrm{S}]$ be a polynomial such that every root of the equation $g(z)=0$ satisfies $|z|=1$. Then, all roots of the equation $g(\mathrm{S})f(t)=0$ have real parts that are equal to $a+\frac{\mathrm{deg} g}{2}$. \end{lemma}
\begin{theorem}\label{gcd_prime} Let $n$ be an integer with $\mathrm{gcd}(n+1,\rho)=1$. Then, \begin{equation} \chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)=(\prod_{j=0}^{\ell}\frac{1}{n+1}[n+1]_{\mathrm{S}^{c_j}})t^{\ell}. \end{equation} In particular, the characteristic quasi-polynomial becomes a polynomial and any root $z$ of the equation $\chi(\mathcal{A}_{\Phi}^{[1,n]},t)=0$ satisfies $\operatorname{Re} z=\frac{n h}{2}$. \end{theorem} \begin{proof} We calculate the characteristic polynomial using Theorems \ref{corollary_1} and \ref{g-Eulerian}. \[ \begin{split} \chi_{quasi}(\mathcal{A}_{\Phi}^{[1,n]},t)&=(\prod_{j=0}^{\ell}\frac{1}{n+1}[n+1]_{\mathrm{S}^{c_j}}) \chi _{quasi}(\mathcal{A}_{\Phi}^{[1,0]},t)\\ &=(\frac{1}{n+1})^{\ell+1} ([n+1]_{\mathrm{S}^{c_0}}[n+1]_{\mathrm{S}^{c_1}} \cdots [n+1]_{\mathrm{S}^{c_{\ell}}})\mathrm{R}_{\Phi}(\mathrm{S})\mathrm{L}_{\Phi}(t)\\ &=(\frac{1}{n+1})^{\ell+1} [n+1]_{\mathrm{S}^{c_0}}[n+1]_{\mathrm{S}^{c_1}} \cdots [n+1]_{\mathrm{S}^{c_{\ell}}}t^{\ell}. \end{split} \] By Lemma \ref{Postnikov-Stanley's lemma}, the real part of any root of the equation $\chi(\mathcal{A}_{\Phi}^{[1,n]},t)=0$ is $\frac{n(c_0+c_1+\cdots+c_{\ell})}{2}=\frac{nh}{2}$. \end{proof}
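For $\Phi=A_2$ and $n=1$ (so that $\mathrm{gcd}(n+1,\rho)=1$), the formula of Theorem \ref{gcd_prime} gives $\chi(\mathcal{A}_{A_2}^{[1,1]},t)=\frac{1}{8}(1+\mathrm{S})^{3}t^{2}=t^{2}-3t+3$, whose roots $\frac{3\pm i\sqrt{3}}{2}$ have real part $\frac{nh}{2}=\frac{3}{2}$. A numerical sketch, again assuming $\mathrm{S}$ is the shift $f(t)\mapsto f(t-1)$:

```python
import numpy as np
from math import comb

ell, n, h = 2, 1, 3            # type A_2, Linial arrangement A^{[1,1]}, h = ell + 1

def chi(t):
    # (1/(n+1))^(ell+1) [n+1]_S^(ell+1) t^ell = (1/8)(1 + S)^3 t^2, (S f)(t) = f(t-1)
    return sum(comb(ell + 1, k) * (t - k)**ell for k in range(ell + 2)) / (n + 1)**(ell + 1)

# recover the quadratic from three values and inspect its roots
coeffs = np.polyfit([0.0, 1.0, 2.0], [chi(x) for x in (0.0, 1.0, 2.0)], ell)
assert np.allclose(coeffs, [1.0, -3.0, 3.0])          # chi = t^2 - 3 t + 3
assert np.allclose(np.roots(coeffs).real, n * h / 2)  # Re z = n h / 2 = 3/2
```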
\begin{remark} Theorem \ref{gcd_prime} is a generalization of the expression of the characteristic polynomial of $\mathcal{A}_{A_{\ell}}^{[1,n]}$ given by Postnikov and Stanley \cite{Postnikov-Stanley} and the expression of $\mathcal{A}_{B_{\ell}}^{[1,n]}$, $\mathcal{A}_{C_{\ell}}^{[1,n]}$, and $\mathcal{A}_{D_{\ell}}^{[1,n]}$ for even values of $n$ given by Athanasiadis \cite{Athanasiadis}. \end{remark} \begin{example}[case $E_6$] Let $n$ be a positive integer. Let $m:=\frac{n+1}{\mathrm{gcd}(n+1,\rho)}$. \[ \begin{split} \text{If }\mathrm{gcd}(n+1,6)=1,\text{ then} \\ \chi_{quasi}(\mathcal{A}_{E_6}^{[1,n]},t)&=(\frac{1}{m}[m]_{\mathrm{S}})^{3} (\frac{1}{m}[m]_{\mathrm{S}^2})^{3} (\frac{1}{m}[m]_{\mathrm{S}^3})t^{6}.\\ \text{If }\mathrm{gcd}(n+1,6)=2,\text{ then} \\ \chi_{quasi}(\mathcal{A}_{E_6}^{[1,n]},t)&=(\frac{1}{m}[m]_{\mathrm{S}^{2}})^{3} (\frac{1}{m}[m]_{\mathrm{S}^4})^{3} (\frac{1}{m}[m]_{\mathrm{S}^6})\chi_{quasi}(\mathcal{A}_{E_6}^{[1,1]},t).\\ \text{If }\mathrm{gcd}(n+1,6)=3,\text{ then} \\ \chi_{quasi}(\mathcal{A}_{E_6}^{[1,n]},t)&=(\frac{1}{m}[m]_{\mathrm{S}^3})^{3} (\frac{1}{m}[m]_{\mathrm{S}^6})^{3} (\frac{1}{m}[m]_{\mathrm{S}^9})\chi_{quasi}(\mathcal{A}_{E_6}^{[1,2]},t).\\ \text{If }\mathrm{gcd}(n+1,6)=6,\text{ then} \\ \chi_{quasi}(\mathcal{A}_{E_6}^{[1,n]},t)&=(\frac{1}{m}[m]_{\mathrm{S}^6})^{3} (\frac{1}{m}[m]_{\mathrm{S}^{12}})^{3} (\frac{1}{m}[m]_{\mathrm{S}^{18}})\chi_{quasi}(\mathcal{A}_{E_6}^{[1,5]},t). \end{split} \] \end{example}
\begin{theorem}\label{Ch_rad} Let $\eta:=\frac{\mathrm{gcd}(n+1,\rho)}{\mathrm{gcd}(n+1,\mathrm{rad}(\rho))}$. \begin{equation} \chi(\mathcal{A}_{\Phi}^{[1,\mathrm{gcd}(n+1,\rho)-1]},t)=(\prod_{j=0}^{\ell}\frac{1}{\eta}[\eta]_{\mathrm{S}^{c_j\cdot \mathrm{gcd}(n+1,\mathrm{rad}(\rho))}}) \chi(\mathcal{A}_{\Phi}^{[1,\mathrm{gcd}(n+1,\mathrm{rad}(\rho))-1]},t). \end{equation} \end{theorem} \begin{proof} We restrict to integers $t\equiv 1 \bmod \rho$. Then, we have that $\avEh{\Phi}{\mathrm{gcd}(n+1,\rho)}(t)= \avEh{\Phi}{\mathrm{gcd}(n+1,\mathrm{rad}(\rho))}(t)$ by Proposition \ref{constituent_inv} and Theorem \ref{gene_Eh}. Hence, by Lemma \ref{S_bar_linear}, Proposition \ref{g_Eulerian_cong}, and Theorem \ref{main theorem}, \[ \begin{split} \chi(\mathcal{A}_{\Phi}^{[1,\mathrm{gcd}(n+1,\rho)-1]},t)&=\mathrm{R}_{\Phi}(\overline {\mathrm{S}}^{\mathrm{gcd}(n+1,\rho)})\avEh{\Phi}{\mathrm{gcd}(n+1,\rho)}(t)\\ &=\mathrm{R}_{\Phi}(\overline {\mathrm{S}}^{\eta \mathrm{gcd}(n+1,\mathrm{rad}(\rho))})\avEh{\Phi}{\mathrm{gcd}(n+1,\mathrm{rad}(\rho))}(t)\\ &=(\prod_{j=0}^{\ell}\frac{1}{\eta}[\eta]_{\overline{\mathrm{S}}^{c_j\cdot \mathrm{gcd}(n+1,\mathrm{rad}(\rho))}})\mathrm{R}_{\Phi}(\overline{\mathrm{S}}^{\mathrm{gcd}(n+1,\mathrm{rad}(\rho))})\avEh{\Phi}{\mathrm{gcd}(n+1,\mathrm{rad}(\rho))}(t)\\ &=(\prod_{j=0}^{\ell}\frac{1}{\eta}[\eta]_{\mathrm{S}^{c_j\cdot \mathrm{gcd}(n+1,\mathrm{rad}(\rho))}}) \chi(\mathcal{A}_{\Phi}^{[1,\mathrm{gcd}(n+1,\mathrm{rad}(\rho))-1]},t). \end{split} \] \end{proof}
We now provide examples of Theorem \ref{Ch_rad} for $E_8$. Using the notation of Theorem \ref{Ch_rad}, in the case of $E_8$, $\eta$ can only take a value of $1$ or $2$. If $\eta=1$, then Theorem \ref{Ch_rad} is trivial. The following formulas are examples of Theorem \ref{Ch_rad} for $\eta=2$. \begin{example}[Case $E_8$] \begin{equation} \chi(\mathcal{A}_{E_8}^{[1,4-1]},t)=(\prod_{i=0}^{\ell}\frac{1}{2}[2]_{\mathrm{S}^{2c_{i}}})\chi(\mathcal{A}_{E_8}^{[1,2-1]},t). \end{equation} \begin{equation} \chi(\mathcal{A}_{E_8}^{[1,12-1]},t)=(\prod_{i=0}^{\ell}\frac{1}{2}[2]_{\mathrm{S}^{6c_{i}}})\chi(\mathcal{A}_{E_8}^{[1,6-1]},t). \end{equation} \begin{equation} \chi(\mathcal{A}_{E_8}^{[1,20-1]},t)=(\prod_{i=0}^{\ell}\frac{1}{2}[2]_{\mathrm{S}^{10c_{i}}})\chi(\mathcal{A}_{E_8}^{[1,10-1]},t). \end{equation} \begin{equation} \chi(\mathcal{A}_{E_8}^{[1,60-1]},t)=(\prod_{i=0}^{\ell}\frac{1}{2}[2]_{\mathrm{S}^{30c_{i}}})\chi(\mathcal{A}_{E_8}^{[1,30-1]},t). \end{equation} \end{example}
\subsection{Verification of the Postnikov--Stanley Linial arrangement conjecture}\label{section:main_check} We verify Conjecture \ref{Postnikov-Stanley_2} for $\Phi=E_6,E_7,E_8$, or $F_4$. We use the notation of Theorems \ref{corollary_1} and \ref{Ch_rad}. Recall that, according to these theorems, the following formula holds. \begin{equation} \chi(\mathcal{A}_{\Phi}^{[1,n]},t)=(\prod_{j=0}^{\ell}\frac{1}{m}[m]_{\mathrm{S}^{c_j\cdot \mathrm{gcd}(n+1,\rho)}}) (\prod_{j=0}^{\ell}\frac{1}{\eta}[\eta]_{\mathrm{S}^{c_j\cdot \mathrm{gcd}(n+1,\mathrm{rad}(\rho))}}) \chi(\mathcal{A}_{\Phi}^{[1,\mathrm{gcd}(n+1,\mathrm{rad}(\rho))-1]},t). \end{equation}
If the real part of every root of the equation \[ \chi(\mathcal{A}_{\Phi}^{[1,\mathrm{gcd}(n+1,\mathrm{rad}(\rho))-1]},t)=0 \] is $\frac{(\mathrm{gcd}(n+1,\mathrm{rad}(\rho))-1)h}{2}$ for $\Phi \in \{E_6,E_7,E_8,F_4\}$, then Lemma \ref{Postnikov-Stanley's lemma} implies that Conjecture \ref{Postnikov-Stanley_2} is true. We have computed the characteristic polynomials for every parameter such that $n+1$ is a divisor of $\mathrm{rad}(\rho)$ other than $1$, and have determined the real parts of their roots numerically. We use Theorem \ref{main theorem} and the calculations of the Ehrhart quasi-polynomials given by Suter \cite{Suter} to compute the characteristic polynomials. The case $\mathrm{gcd}(n+1,\mathrm{rad}(\rho))=\mathrm{rad}(\rho)$ has already been verified in \cite{Yoshinaga_1}. We present the characteristic polynomials for $\Phi \in \{E_6,E_7,E_8,F_4\} $ in the following tables.
\begin{landscape} \begin{table}[htbp] \centering \caption{Characteristic polynomials for $E_6$ ($\mathrm{rad}(\rho)=6$).} {\normalsize
\begin{tabular}{|l|c|c|} \hline $\chi(\mathcal{A}_{E_6}^{[1,n]},t)$ &$n$&real part\\ \hline\hline $t^6-36t^5+630t^4-6480t^3+40185t^2-140076t+211992$&$2-1$&$6$\\ \hline $t^6-72t^5+2400t^4-46080t^3+528600t^2-3396672t+9474200$&$3-1$&$12$\\ \hline $t^6-180t^5+14550t^4-666000t^3+18019065t^2-271143900t+1762474040$&$6-1$&$30$\\ \hline \end{tabular} } \label{fig:E_6} \end{table}
\begin{table}[htbp] \centering \caption{Characteristic polynomials for $F_4$ ($\mathrm{rad}(\rho)=6$).} {\normalsize
\begin{tabular}{|l|c|c|} \hline $\chi(\mathcal{A}_{F_4}^{[1,n]},t)$ &$n$&real part\\ \hline\hline $t^4-24t^3+258t^2-1368t+2917$&$2-1$&$6$\\ \hline $t^4-48t^3+1000t^2-10176t+41572$&$3-1$&$12$\\ \hline $t^4-120t^3+5986t^2-143160t+1361989$&$6-1$&$30$\\ \hline \end{tabular} } \label{fig:F_4} \end{table}
\begin{table}[htbp] \centering \caption{Characteristic polynomials for $E_7$ ($\mathrm{rad}(\rho)=6$).} {\normalsize
\begin{tabular}{|l|c|c|} \hline $\chi(\mathcal{A}_{E_7}^{[1,n]},t)$ &$n$&real part\\ \hline\hline $t^7-63t^6+1953t^5-36855t^4+446355t^3-3417309t^2+15154251t-29798253$&$2-1$&$9$\\ \hline $t^7-126t^6+7476t^5-264600t^4+5948040t^3-84088368t^2+687202712t-2490427440$&$3-1$&$18$\\ \hline $t^7-315t^6+45465t^5-3850875t^4+204937635t^3 -6808068225t^2+130052291075t-1097517119625$&$6-1$&$45$\\ \hline \end{tabular} } \label{fig:E_7} \end{table}
\begin{table}[htbp] \centering \caption{Characteristic polynomials for $E_8$ ($\mathrm{rad}(\rho)=30$).} {\scriptsize
\begin{tabular}{|l|c|c|} \hline $\chi(\mathcal{A}_{E_8}^{[1,n]},t)$ &$n$&real part\\ \hline\hline $t^8-120t^7+7140t^6-264600t^5+6540030t^4-108901800t^3+1181603220t^2-7583286600t+21918282249$&$2-1$&$15$\\ \hline $t^8-240t^7+27440t^6-1915200t^5+88161360t^4-2716963200t^3+54385106720t^2-643164643200t+3426392186728$&$3-1$&$30$\\ \hline $t^8-480t^7+107520t^6-14515200t^5+1281219408t^4-75249457920t^3+2857900896480t^2-63918602553600t+642465923287416$&$5-1$&$60$\\ \hline $t^8-600t^7+167300t^6-28035000t^5+3065453790t^4-222698637000t^3+10449830016500t^2-288505461225000t+3577184806486057$&$6-1$&$75$\\ \hline $t^8-1080t^7+538020t^6-160234200t^5+31018986558t^4-3977954041320t^3+328758988903380t^2-15957853314798600t+347373804233610441$&$10-1$&$135$\\ \hline $t^8-1680t^7+1297520t^6-597643200t^5+178602069408t^4-35307879102720t^3+4493170619530880t^2-335521093135065600t+11227745283721390816$&$15-1$&$210$\\ \hline $t^8-3480t^7+5550020t^6-5266510200t^5+3236633286558t^4-1314003597910920t^3+343011765319289780t^2-52494228716611434600t+3597446896074261934441$&$30-1$&$435$\\ \hline \end{tabular} } \label{fig:E_8} \end{table} \end{landscape}
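The root locations claimed in the tables can be re-checked numerically; for instance, for a few of the rows above (a sketch using numpy root-finding):

```python
import numpy as np

# (coefficients in decreasing degree, claimed common real part of all roots)
rows = [
    ([1, -36, 630, -6480, 40185, -140076, 211992], 6),   # E_6, n = 2 - 1
    ([1, -24, 258, -1368, 2917], 6),                     # F_4, n = 2 - 1
    ([1, -48, 1000, -10176, 41572], 12),                 # F_4, n = 3 - 1
]
for coeffs, re_part in rows:
    assert np.allclose(np.roots(coeffs).real, re_part, atol=1e-6)
```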
\noindent {\bf Acknowledgements.} I am very grateful to Masahiko Yoshinaga for his various comments on how to improve this paper, for many discussions on its content, and for his suggestions for addressing the Postnikov--Stanley Linial arrangement conjecture. I thank Stuart Jenkinson, PhD, from Edanz Group (https://en-author-services.edanzgroup.com/ac) for editing a draft of this manuscript. The author also thanks the Department of Mathematics, Hokkaido University and JSPS KAKENHI JP18H01115 (PI: M. Yoshinaga) for financial support.
\end{document}
\begin{document}
\title[Multivariate generating functions]{Multivariate generating functions built of Chebyshev polynomials and some of their applications and generalizations.} \author{Pawe\l \ J. Szab\l owski} \address{Emeritus in Department of Mathematics and Information Sciences,\\ Warsaw University of Technology\\ ul Koszykowa 75, 00-662 Warsaw, Poland} \email{pawel.szablowski@gmail.com} \thanks{The author is grateful to the unknown referee for his detailed and in-depth remarks and suggestions.} \date{January, 2018} \subjclass[2000]{Primary 42C10, 33C47, Secondary 26B35 40B05} \keywords{multivariate generating functions, Kibble-Slepian formula, Chebyshev polynomials, q-Hermite polynomials, inversion of Poisson-Mehler formula}
\begin{abstract} We sum multivariate generating functions composed of products of Chebyshev polynomials of the first and the second kind. That is, we find closed forms of expressions of the type $\sum_{j\geq0}\rho^{j}\prod_{m=1}^{k}T_{j+t_{m}} (x_{m})\prod_{m=k+1}^{n+k}U_{j+t_{m}}(x_{m}),$ for different integers $t_{m},$ $m=1,...,n+k.$ We also find a Kibble--Slepian formula of $n$ variables with Hermite polynomials replaced by Chebyshev polynomials of the first or the second kind. In all the considered cases, the obtained closed forms are rational functions with positive denominators. We show how to apply the obtained results to integrate some rational functions or to sum some related series of Chebyshev polynomials. We hope that the obtained formulae will be useful in the so-called free probability. We also expect the obtained results to inspire further research and generalizations. In particular, we expect that, following the methods presented in this paper, one will be able to obtain similar formulae for the so-called $q$-Hermite polynomials, since the Chebyshev polynomials of the second kind considered here are the $q$-Hermite polynomials for $q=0$. We have applied these methods in the one- and two-dimensional cases and were able to obtain nontrivial identities concerning $q$-Hermite polynomials.
\end{abstract} \maketitle
\section{Introduction}
In this work we obtain closed forms of the following expressions:
Case I. The multivariate generating functions: \begin{equation}
\chi_{k,n}^{(t_{1},...,t_{k+n})}(x_{1},...,x_{n+k}|\rho)=\sum_{j\geq0}\rho ^{j}\prod_{m=1}^{k}T_{j+t_{m}}(x_{m})\prod_{m=k+1}^{n+k}U_{j+t_{m}}(x_{m}), \label{_ktnu} \end{equation} where $\left\vert t_{m}\right\vert ,k,n\in\{0,1,...\},$ $k+n\geq1,$ $\left\vert \rho\right\vert <1,$ $\left\vert x_{m}\right\vert \leq1$ and $T_{j},U_{j}$ denote the $j$-th Chebyshev polynomials of the first and the second kind, respectively.
Case II. The so-called Kibble--Slepian formula for Chebyshev polynomials, i.e. closed forms of the expressions: \begin{align}
f_{T}(\mathbf{x}|K_{n}) & =\sum_{S}(\prod_{1\leq i<j\leq n}\left( \rho _{ij}\right) ^{s_{ij}})\prod_{m=1}^{n}T_{\sigma_{m}}(x_{m}),\label{_t}\\
f_{U}(\mathbf{x}|K_{n}) & =\sum_{S}(\prod_{1\leq i<j\leq n}\left( \rho _{ij}\right) ^{s_{ij}})\prod_{m=1}^{n}U_{\sigma_{m}}(x_{m}), \label{_u} \end{align} where $\mathbf{x}\allowbreak=\allowbreak(x_{1},...,x_{n})$, $K_{n}$ denotes the symmetric, non-singular $n\times n$ matrix with ones on its diagonal and with $\rho_{ij}$ as its non-diagonal $ij$-th entry, $\sum_{S}$ denotes summation over all $n(n-1)/2$ non-diagonal entries of a symmetric $n\times n$ matrix $S_{n}$ with zeros on the main diagonal and nonnegative integer entries $s_{ij}$, and $\sigma_{m}$ is the sum of the entries $s_{ij}$ along the $m$-th row of the matrix $S_{n}.$
We will show that in Case I all functions $\chi_{k,n}$ are rational with common denominator $w_{n+k}(x_{1},...,x_{k+n}|\rho)$, which is a symmetric polynomial of degree $2^{n+k-1}$ in $x_{1},...,x_{n+k}$ and of degree $2^{n+k}$ in $\rho$, defined recursively by (\ref{rek}).
In Case II both functions $f_{T}(\mathbf{x}|K_{n})$ and $f_{U}(\mathbf{x}
|K_{n})$ are rational with the same denominator \begin{equation}
V_{n}(\mathbf{x|}K_{n})=\prod_{j=1}^{n-1}\prod_{k=j+1}^{n}w_{2}(x_{k}
,x_{j}|\rho_{kj}), \label{mianKib} \end{equation} where $w_{2}$ is defined by (\ref{w2}), below.
The fact that these functions are rational is not very surprising, given that Chebyshev polynomials can be expressed via trigonometric functions and that, by the Euler formulae, the series (\ref{_ktnu}), (\ref{_t}) and (\ref{_u}) are sums of geometric series. However, obtaining the exact forms of the denominators, and especially of the numerators, is nontrivial.
Both statements will be proved in the sequel: the first in Section \ref{one} and the second in Section \ref{kibb}.
Chebyshev polynomials of the second kind (which are orthogonal with respect to the semicircle distribution) play a role in the rapidly developing "free probability" similar to the one the Hermite polynomials (which are orthogonal with respect to the normal distribution) play in classical probability. This is because the central role in free probability is played by the semicircle distribution, while in classical probability it is played by the normal distribution. Hence the results presented below are of significance for free probability theory.
The results of the paper may also have other applications; for example, they can help in the following:
\begin{enumerate} \item To simplify calculations of some of the multiple integrals of the form \[
\underset{k~fold}{\int...\int}\frac{v_{m}(x_{1},...,x_{n}|\mathbf{p})}{\Omega_{n}(x_{1},...,x_{n}|\mathbf{p})}\prod_{j=1}^{k}(1-x_{j}^{2})^{m_{j}/2}dx_{1}...dx_{k}, \] where $v_{m}$ denotes some polynomial in the variables $x_{1},...,x_{n}$, the numbers $m_{j}\allowbreak\in\{-1,1\}$, and $\mathbf{p}$ denotes a set of parameters; this set might be different in Cases I and II. $\Omega_{n}$ is equal to $w_{n}$ in Case I (see the iterative formula (\ref{rek})) or to $V_{n}$ in Case II (see formula (\ref{mianKib})). This is based on the observation that the closed forms in Cases I and II are rational functions with denominators of the form $\Omega_{n}$, while the numerators are, depending on the case and on the numbers $t_{m},$ $m\allowbreak=\allowbreak1,\ldots,n,$ polynomials of degree at most $\sum_{m=1}^{n}(t_{m}+1).$ For example, for $n\allowbreak=\allowbreak2$ see Proposition \ref{2wym}. Hence, one could imagine expanding $\frac{v_{m}(x_{1},...,x_{n}|\mathbf{p})}{\Omega_{n}(x_{1},...,x_{n}|\mathbf{p})}$ into linear combinations of the series of the forms (\ref{_ktnu}), (\ref{_t}) or (\ref{_u}), depending on whether Case I or Case II is considered. Now notice that, having absolute uniform convergence of the appropriate series ($\left\vert \rho\right\vert ,$ $\left\vert \varrho_{ij}\right\vert <1$ and
$\left\vert T_{i}(x)\right\vert \leq1,$ $|U_{i}(x)|\leq i+1,$ $\left\vert x\right\vert \leq1,i\geq0$), one can integrate each summand separately, which is very easy.
Below we present a few examples illustrating this idea. In the first three of these examples we will use the fact that, following Proposition \ref{2wym}, iii), the numerators of the functions $\chi_{0,2}^{0,0}(x,y|\rho)$ and $\chi_{0,2}^{2,0}(x,y|\rho)$ are equal, respectively, to \[ 1-\rho^{2}\text{ and }4x^{2}-4xy-1+\rho^{2}. \] Thus for $\left\vert x\right\vert ,\left\vert y\right\vert \leq1$ and $\left\vert \rho\right\vert <1$ we get \begin{equation} \int_{-1}^{1}\frac{2(1-\rho^{2})\sqrt{1-y^{2}}dy}{\pi((1-\rho^{2})^{2} -4xy\rho(1+\rho^{2})+4\rho^{2}(x^{2}+y^{2}))}=1, \label{E0} \end{equation} \begin{equation} \int_{-1}^{1}\frac{2(4x^{2}-4xy-1+\rho^{2})\sqrt{1-y^{2}}dy}{\pi((1-\rho ^{2})^{2}-4xy\rho(1+\rho^{2})+4\rho^{2}(x^{2}+y^{2}))}=4x^{2}-1, \label{E1} \end{equation} since $U_{2}(x)\allowbreak=\allowbreak4x^{2}-1.$ In the next example we use (\ref{00}) to sum \begin{equation} \sum_{j\geq0}\rho^{2j}U_{2j}(x)=\chi_{0,2}^{0,0}(x,0|i\rho)=\frac{1+\rho^{2} }{(1+\rho^{2})^{2}-4\rho^{2}x^{2}} \label{oddU} \end{equation} and then (\ref{IU}) and the form of $\chi_{0,2}^{2,0}(x,y|\rho)$ to get the following result:
\begin{equation} \int_{-1}^{1}\frac{(4x^{2}-4xy-1+\rho^{2})dy}{\pi\sqrt{1-y^{2}}((1-\rho ^{2})^{2}-4xy\rho(1+\rho^{2})+4\rho^{2}(x^{2}+y^{2}))}=\frac{4x^{2}-1-\rho ^{2}}{(1+\rho^{2})^{2}-4x^{2}\rho^{2}}. \label{E2} \end{equation} In the example below, we used the fact that, following Proposition \ref{2wym}, iv), the numerator of the function $\chi_{1,1}^{1,0}(y,x,\rho)$ is equal to $(y(1+\rho^{2})-2\rho x).$ Hence taking into account (\ref{IT}) and the fact that $U_{1}(x)\allowbreak=\allowbreak2x$ we get:
\begin{equation} \int_{-1}^{1}\frac{2(y(1+\rho^{2})-2\rho x)\sqrt{1-y^{2}}dy}{\pi((1-\rho ^{2})^{2}-4xy\rho(1+\rho^{2})+4\rho^{2}(x^{2}+y^{2}))}=-\rho x. \label{E3} \end{equation}
The following two examples exploit Corollary \ref{3wym}, ii) and either (\ref{oU})
\begin{equation} \frac{2}{\pi}\int_{-1}^{1}\frac{(1+\rho^{2})^{3}+16\rho^{3}xyz-4\rho
^{2}(1+\rho^{2})(x^{2}+y^{2}+z^{2})}{w_{3}(x,y,z|\rho)}\sqrt{1-z^{2}}dz=1, \label{E4} \end{equation} or (\ref{IU}) and then, of course, one of the formulae given in Proposition \ref{2wym} to sum the obtained infinite series:
\begin{gather} \frac{1}{\pi}\int_{-1}^{1}\frac{(1+\rho^{2})^{3}+16\rho^{3}xyz-4\rho
^{2}(1+\rho^{2})(x^{2}+y^{2}+z^{2})}{\sqrt{1-z^{2}}w_{3}(x,y,z|\rho )}dz\label{E5}\\ =\frac{(1-\rho^{2})^{3}+4\rho^{2}(1-\rho^{2})(x^{2}+y^{2})}{(1-\rho^{2} )^{4}+16\rho^{4}(x^{4}+y^{4})+8\rho^{2}(1-\rho^{2})^{2}(x^{2}+y^{2} )-16\rho^{2}(1+\rho^{4})x^{2}y^{2}},\nonumber \end{gather}
where $w_{3}(x,y,z|\rho)$ is given by (\ref{w3}).
\item To derive several expansions of the type (\ref{_u}) and (\ref{_t}) for special choices of the parameters $x_{j}$. To illustrate this idea we give the following examples: \begin{equation} \sum_{j=0}^{\infty}(j+1)\rho^{j}U_{j}(x)U_{j}(y)=\frac{(1+\rho^{2})(1-\rho ^{2})^{2}-4\rho^{2}(1+\rho^{2})(x^{2}+y^{2})+16\rho^{3}xy}{((1-\rho^{2} )^{2}-4xy\rho(1+\rho^{2})+4\rho^{2}(x^{2}+y^{2}))^{2}}, \label{EE1} \end{equation} \begin{gather} \sum_{j\geq0}t^{j}T_{2j+1}(x)T_{2j+1}(y)=\label{EE2}\\ \frac{(1-t)xy(1+6t+t^{2}-4t(x^{2}+y^{2}))}{(1-t)^{4}+8t(1-t)^{2}(x^{2} +y^{2})-16t(1+t^{2})x^{2}y^{2}+16t^{2}(x^{4}+y^{4})}.\nonumber \end{gather} To get these identities we used the formulae given in (\ref{00}), (\ref{T11}) and (\ref{U11}), as well as Corollary \ref{3wym}.
\item To obtain families of multivariate distributions in $\mathbb{R}^{n}$ with compact support of the form: \[
f_{n}(x_{1},...,x_{n})=\frac{p_{m}(x_{1},...,x_{n}|\mathbf{p})}{\Omega _{n}(x_{1},...,x_{n}|\mathbf{p})}\prod_{j=1}^{n}(1-x_{j}^{2})^{m_{j}/2}, \] where the polynomial $p_{m}$ can depend on many parameters and can have any degree, but must be positive on $\mathbf{S\allowbreak=\allowbreak}[-1,1]^{n}$ and such that $f_{n}$ integrates to $1$ on $\mathbf{S},$ with indices $m_{j}\in\{-1,1\}.$ \end{enumerate}
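Several of the integral evaluations listed in item 1 can be checked numerically. The following Python sketch is our verification aid, not part of the paper's argument; the helper names are ours. It checks (\ref{E0}) by substituting $y=\cos\theta$ and applying a midpoint rule on $[0,\pi]$:

```python
import math

def w2(x, y, r):
    # w_2(x, y | rho) from the paper:
    # (1 - rho^2)^2 - 4 x y rho (1 + rho^2) + 4 rho^2 (x^2 + y^2)
    return (1 - r * r) ** 2 - 4 * x * y * r * (1 + r * r) + 4 * r * r * (x * x + y * y)

def integral_E0(x, r, N=20000):
    # int_{-1}^{1} 2 (1 - r^2) sqrt(1 - y^2) / (pi * w2(x, y, r)) dy,
    # computed via y = cos(theta), dy -> sin(theta) d(theta), midpoint rule
    h = math.pi / N
    total = 0.0
    for k in range(N):
        th = (k + 0.5) * h
        y = math.cos(th)
        total += math.sin(th) ** 2 * 2 * (1 - r * r) / (math.pi * w2(x, y, r))
    return total * h

# (E0) asserts the integral equals 1 for |x| <= 1, |rho| < 1
assert abs(integral_E0(0.3, 0.5) - 1.0) < 1e-6
assert abs(integral_E0(-0.9, 0.2) - 1.0) < 1e-6
```

The check succeeds because $\chi_{0,2}(x,y|\rho)=(1-\rho^{2})/w_{2}$, and integrating the series term by term against the weight $\frac{2}{\pi}\sqrt{1-y^{2}}$ kills every summand except the constant one.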
There is one more reason for which the results are important. Namely, the Chebyshev polynomials of the second kind are, as stated above, identical with the so-called $q$-Hermite polynomials for $q\allowbreak=\allowbreak0$. Thus the results of the paper can serve as an inspiration for obtaining similar results for the $q$-Hermite polynomials. All these ideas are explained and made more precise in the sequence of observations, remarks, hypotheses and conjectures presented in Section \ref{gen}.
An interesting, nontrivial example of an application of the method presented in Theorem \ref{main} applied to the well-known cases and leading to the non-obvious identities like the ones shown by (\ref{id2wym}), (\ref{sumk}) and (\ref{00k}) is presented in Subsection \ref{Ident}.
The paper is organized as follows. In the next section we present some elementary observations, recall the basic properties of Chebyshev polynomials, and prove some important auxiliary results. The main results of the paper are presented in the two successive Sections \ref{one} and \ref{kibb}, containing respectively the closed forms of the one-parameter multivariate generating functions and the closed form of the analogue of the Kibble--Slepian formula. The next Section \ref{gen} presents generalizations, observations, conjectures and examples. Finally, the last Section \ref{dow} contains the longer proofs.
\section{Auxiliary results and elementary observations\label{pom}}
Let us recall (following \cite{Mason2003}), the definitions of the Chebyshev polynomials: \begin{equation} U_{n}(\cos(\alpha))\allowbreak=\allowbreak\sin((n+1)\alpha)/\sin(\alpha)\text{ and }T_{n}(\cos(\alpha))\allowbreak=\allowbreak\cos(n\alpha) \label{Czebysz} \end{equation} and the orthogonality relations they satisfy: \begin{align} \int_{-1}^{1}T_{i}(x)T_{j}(x)\frac{1}{\pi\sqrt{1-x^{2}}}dx & =\left\{ \begin{array} [c]{ccc} 0 & if & i\neq j\\ 1/2 & if & i=j\neq0\\ 1 & if & i=j=0 \end{array} \right. ,\label{oT}\\ \int_{-1}^{1}U_{i}(x)U_{j}(x)\frac{2}{\pi}\sqrt{1-x^{2}}dx\allowbreak & =\allowbreak\left\{ \begin{array} [c]{ccc} 0 & if & i\neq j\\ 1 & if & i=j \end{array} \right. . \label{oU} \end{align}
We have also some simple properties of Chebyshev polynomials that were useful in obtaining examples (\ref{E1}-\ref{E5}) and (\ref{EE1},\ref{EE2}): \begin{equation} T_{j}(0)\allowbreak=\allowbreak U_{j}(0)=\left\{ \begin{array} [c]{ccc} 0 & if & j\text{ is odd}\\ (-1)^{j/2} & if & j\text{ is even} \end{array} \right. , \label{00} \end{equation}
\begin{gather} T_{j}(1)=1,\quad T_{j}(-1)=(-1)^{j},\label{T11}\\ U_{j}(\pm1)=(\pm1)^{j}(j+1), \label{U11} \end{gather} for $j\geq0,$
\begin{equation} \int_{-1}^{1}T_{j}(x)\frac{2\sqrt{1-x^{2}}}{\pi}dx=\left\{ \begin{array} [c]{ccc} 1 & if & j=0\\ -1/2 & if & j=2\\ 0 & if & j\notin\{0,2\} \end{array} \right. , \label{IT} \end{equation} and \begin{equation} \int_{-1}^{1}U_{j}(x)\frac{1}{\pi\sqrt{1-x^{2}}}dx=\left\{ \begin{array} [c]{ccc} 0 & if & j\text{ is odd}\\ 1 & if & j\text{ is even} \end{array} \right. . \label{IU} \end{equation}
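The orthogonality and integral relations above can be verified numerically with Gauss--Chebyshev quadrature, which is exact on polynomials for the weight $1/(\pi\sqrt{1-x^{2}})$. The following Python sketch is our verification aid, not part of the paper; the helper names are ours. $T_{n}$ and $U_{n}$ are evaluated via their common three-term recurrence:

```python
import math

def T(n, x):
    # Chebyshev T_n via T_{n+1} = 2x T_n - T_{n-1}, T_0 = 1, T_1 = x
    t0, t1 = 1.0, x
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2.0 * x * t1 - t0
    return t1

def U(n, x):
    # Chebyshev U_n via the same recurrence, seeded with U_0 = 1, U_1 = 2x
    u0, u1 = 1.0, 2.0 * x
    if n == 0:
        return u0
    for _ in range(n - 1):
        u0, u1 = u1, 2.0 * x * u1 - u0
    return u1

def gc_mean(f, N=64):
    # Gauss-Chebyshev rule for int_{-1}^{1} f(x) / (pi sqrt(1 - x^2)) dx:
    # (1/N) sum_i f(cos((2i - 1) pi / (2N))); exact for deg f < 2N
    return sum(f(math.cos((2 * i - 1) * math.pi / (2 * N))) for i in range(1, N + 1)) / N

# (oT): orthogonality of the T's
assert abs(gc_mean(lambda x: T(3, x) * T(5, x))) < 1e-12
assert abs(gc_mean(lambda x: T(4, x) * T(4, x)) - 0.5) < 1e-12
assert abs(gc_mean(lambda x: T(0, x) * T(0, x)) - 1.0) < 1e-12
# (IU): the integral of U_j against the same weight is 1 for even j, 0 for odd j
assert abs(gc_mean(lambda x: U(6, x)) - 1.0) < 1e-12
assert abs(gc_mean(lambda x: U(7, x))) < 1e-12
```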
In the sequel, if all integer parameters $t_{1},...,t_{n+k}$ are equal to zero, they will be dropped from the notation for $\chi$. Notice also that the functions $\chi$ are known for $n+k\allowbreak=\allowbreak1$ and for $n+k\allowbreak=\allowbreak2$ with $t_{1}\allowbreak=\allowbreak t_{2}\allowbreak=\allowbreak0$. By (\ref{_ktnu}) we have: \begin{gather}
\chi_{0,1}(x|\rho)=\frac{1}{w_{1}(x|\rho)};\chi_{1,0}(x|\rho)=\frac{1-\rho x}{w_{1}(x|\rho)},\label{_1}\\
\chi_{0,2}(x,y|\rho)\allowbreak=\allowbreak\sum_{n\geq0}\rho^{n}U_{n}
(x)U_{n}(y)=\frac{1-\rho^{2}}{w_{2}(x,y|\rho)},\label{_2}\\
\chi_{2,0}(x,y|\rho)=\sum_{n\geq0}\rho^{n}T_{n}(x)T_{n}(y)=\frac{1-\rho
^{2}+2\rho^{2}\left( x^{2}+y^{2}\right) -\left( \rho^{2}+3\right) \rho xy}{w_{2}(x,y|\rho)},\label{_3}\\
\chi_{1,1}(x,y|\rho)=\sum_{n\geq0}\rho^{n}U_{n}(x)T_{n}(y)=\frac{1-\rho
^{2}-2\rho xy+2\rho^{2}y^{2}\allowbreak}{w_{2}(x,y|\rho)}, \label{_4} \end{gather} where: \begin{align}
w_{1}(x|\rho) & =1-2\rho x+\rho^{2},\label{w1}\\
w_{2}(x,y|\rho) & =(1-\rho^{2})^{2}-4xy\rho(1+\rho^{2})+4\rho^{2} (x^{2}+y^{2}). \label{w2} \end{align}
Notice also that both $\chi_{2,0}$ and $\chi_{0,2}$ are positive on $[-1,1]\times\lbrack-1,1].$ The formulae in (\ref{_1}) are well known, e.g. from the theory of the Poisson kernel. The formula in (\ref{_2}) is the famous Poisson--Mehler formula for the $q$-Hermite polynomials with $q=0.$ Both can be found in \cite{Mason2003}. The formulae in (\ref{_3}) and (\ref{_4}) have been obtained recently in \cite{Szab-Cheb}.
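The closed forms (\ref{_1})--(\ref{_4}) can be confirmed numerically by comparing truncated series with the stated rational functions. The Python sketch below is ours, for verification only; the helper names are not part of the paper:

```python
import math

def T(n, x):
    # Chebyshev T_n via the three-term recurrence
    t0, t1 = 1.0, x
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2.0 * x * t1 - t0
    return t1

def U(n, x):
    # Chebyshev U_n via the same recurrence, U_0 = 1, U_1 = 2x
    u0, u1 = 1.0, 2.0 * x
    if n == 0:
        return u0
    for _ in range(n - 1):
        u0, u1 = u1, 2.0 * x * u1 - u0
    return u1

def w1(x, r):
    return 1.0 - 2.0 * r * x + r * r

def w2(x, y, r):
    return (1 - r * r) ** 2 - 4 * x * y * r * (1 + r * r) + 4 * r * r * (x * x + y * y)

x, y, r, N = 0.4, -0.7, 0.35, 200
# (_1): sum rho^n U_n(x) = 1/w1 and sum rho^n T_n(x) = (1 - rho x)/w1
assert abs(sum(r**n * U(n, x) for n in range(N)) - 1.0 / w1(x, r)) < 1e-10
assert abs(sum(r**n * T(n, x) for n in range(N)) - (1 - r * x) / w1(x, r)) < 1e-10
# (_2): the Poisson-Mehler formula at q = 0
assert abs(sum(r**n * U(n, x) * U(n, y) for n in range(N)) - (1 - r * r) / w2(x, y, r)) < 1e-10
# (_3)
num3 = 1 - r * r + 2 * r * r * (x * x + y * y) - (r * r + 3) * r * x * y
assert abs(sum(r**n * T(n, x) * T(n, y) for n in range(N)) - num3 / w2(x, y, r)) < 1e-10
# (_4)
num4 = 1 - r * r - 2 * r * x * y + 2 * r * r * y * y
assert abs(sum(r**n * U(n, x) * T(n, y) for n in range(N)) - num4 / w2(x, y, r)) < 1e-10
```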
To calculate the functions $\chi_{k,n}^{(t_{1},...,t_{k+n})}$ we need the following auxiliary results. They are very simple, based on the elementary properties of the trigonometric functions. We present them for the sake of the completeness of the paper. We have:
\begin{proposition} \begin{equation}
w_{1}(\cos(\alpha+\beta)|\rho)w_{1}(\cos(\alpha-\beta)|\rho)\allowbreak
=w_{2}(\cos(\alpha),\cos(\beta)|\rho). \label{pro2} \end{equation}
\end{proposition}
\begin{proof} We have \begin{gather*} (1-2\rho\cos(\alpha+\beta)+\rho^{2})(1-2\rho\cos(\alpha-\beta)+\rho^{2})\allowbreak=\allowbreak\\ (1+\rho^{2})^{2}-2\rho(1+\rho^{2})(\cos(\alpha+\beta)+\cos(\alpha-\beta))+4\rho^{2}\cos(\alpha+\beta)\cos(\alpha-\beta). \end{gather*} Now recall that $\cos(\alpha+\beta)+\cos(\alpha-\beta)\allowbreak=\allowbreak2\cos(\alpha)\cos(\beta)$ and $\cos(\alpha+\beta)\cos(\alpha-\beta)\allowbreak=\allowbreak\cos^{2}\alpha\allowbreak+\allowbreak\cos^{2}\beta\allowbreak-\allowbreak1.$ \end{proof}
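The product identity (\ref{pro2}) is easy to confirm numerically. The Python sketch below is our verification aid (helper names ours):

```python
import math

def w1(x, r):
    # w_1(x | rho) = 1 - 2 rho x + rho^2
    return 1.0 - 2.0 * r * x + r * r

def w2(x, y, r):
    # w_2(x, y | rho) as defined in (w2)
    return (1 - r * r) ** 2 - 4 * x * y * r * (1 + r * r) + 4 * r * r * (x * x + y * y)

# (pro2): w1(cos(a+b)) * w1(cos(a-b)) = w2(cos a, cos b) for any angles
for a, b, r in [(0.3, 1.1, 0.6), (2.0, -0.4, -0.8), (0.0, 0.5, 0.25)]:
    lhs = w1(math.cos(a + b), r) * w1(math.cos(a - b), r)
    rhs = w2(math.cos(a), math.cos(b), r)
    assert abs(lhs - rhs) < 1e-12
```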
\begin{proposition} \label{ilocz} \begin{gather} \prod_{j=1}^{k}\cos(\alpha_{j})\allowbreak=\allowbreak\frac{1}{2^{k}} \sum_{i_{1}\in\{-1,1\}}...\sum_{i_{k}\in\{-1,1\}}\cos(\sum_{l=1}^{k} i_{l}\alpha_{l}),\label{cos}\\ \prod_{j=1}^{n}\sin(\alpha_{j})\allowbreak\prod_{j=n+1}^{n+k}\cos(\alpha _{j})=\allowbreak\nonumber\\ \left\{ \begin{array} [c]{ccc} \begin{array} [c]{c} (-1)^{(n+1)/2}\frac{1}{2^{n+k}}\sum_{i_{1}\in\{-1,1\}}...\sum_{i_{n+k} \in\{-1,1\}}\\ (-1)^{\sum_{l=1}^{n}(i_{l}+1)/2}\sin(\sum_{l=1}^{n+k}i_{l}\alpha_{l}) \end{array} & if & n\text{ is odd}\\% \begin{array} [c]{c} (-1)^{n/2}\frac{1}{2^{n+k}}\sum_{i_{1}\in\{-1,1\}}...\sum_{i_{n+k}\in \{-1,1\}}\\ (-1)^{\sum_{l=1}^{n}(i_{l}+1)/2}\cos(\sum_{l=1}^{n+k}i_{l}\alpha_{l}) \end{array} & if & n\text{ is even} \end{array} \right. . \label{sin} \end{gather}
\end{proposition}
\begin{proof} See section \ref{dow}. \end{proof}
\begin{lemma} \label{aux1}Let us take $n\in\mathbb{N},$ $\left\vert \rho_{i}\right\vert <1,$ $\alpha_{i}\in\mathbb{R},$ $i\allowbreak\in S_{n}\allowbreak=\allowbreak \{1,...,n\}.$ Let $M_{i,n}$ denote a subset of the set $S_{n}$ containing $i$ elements. Let us denote by $\sum_{M_{i,n}\subseteq S_{n}}$ summation over all $M_{i,n}$ contained in $S_{n}.$ We have: \begin{gather} \sum_{k_{1}\geq0}...\sum_{k_{n}\geq0}(\prod_{i=1}^{n}\rho_{i}^{k_{i}} )\cos(\beta+\sum_{i=1}^{n}k_{i}\alpha_{i})\allowbreak=\label{ksc}\\ \allowbreak\frac{\sum_{j=0}^{n}(-1)^{j}\sum_{M_{j,n}\subseteq S_{n}} (\prod_{k\in M_{j,n}}\rho_{k})\cos(\beta-\sum_{k\in M_{j,n}}\alpha_{k})} {\prod_{i=1}^{n}(1+\rho_{i}^{2}-2\rho_{i}\cos(\alpha_{i}))},\nonumber\\ \sum_{k_{1}\geq0}...\sum_{k_{n}\geq0}(\prod_{i=1}^{n}\rho_{i}^{k_{i}} )\sin(\beta+\sum_{i=1}^{n}k_{i}\alpha_{i})=\label{kss}\\ \frac{\sum_{j=0}^{n}(-1)^{j}\sum_{M_{j,n}\subseteq S_{n}}(\prod_{k\in M_{j,n} }\rho_{k})\sin(\beta-\sum_{k\in M_{j,n}}\alpha_{k})}{\prod_{i=1}^{n} (1+\rho_{i}^{2}-2\rho_{i}\cos(\alpha_{i}))}.\nonumber \end{gather}
\end{lemma}
\begin{proof} See section \ref{dow}. \end{proof}
We will also need the following almost trivial special cases of formulae (\ref{ksc}) and (\ref{kss}). We will formulate them as a corollary.
\begin{corollary} \label{suma}For all $\left\vert \rho\right\vert <1$ we have \begin{align} \sum_{n\geq0}\rho^{n}\sin(n\alpha+\beta) & =(\sin(\beta)-\rho\sin (\beta-\alpha))/(1-2\rho\cos(\alpha)+\rho^{2}),\label{s_si}\\ \sum_{n\geq0}\rho^{n}\cos(n\alpha+\beta) & =(\cos(\beta)-\rho\cos (\beta-\alpha))/(1-2\rho\cos(\alpha)+\rho^{2}). \label{s_g_c} \end{align}
\end{corollary}
\begin{proof} Set $n\allowbreak=\allowbreak1$ and $\alpha\allowbreak=\allowbreak\alpha_{1}$ in (\ref{kss}) and (\ref{ksc}). \end{proof}
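The sums (\ref{s_si}) and (\ref{s_g_c}) can be checked against truncated series. A Python sketch (ours, for verification only):

```python
import math

def partial_sin(r, a, b, N=200):
    # truncation of sum_{n>=0} r^n sin(n a + b)
    return sum(r**n * math.sin(n * a + b) for n in range(N))

def partial_cos(r, a, b, N=200):
    # truncation of sum_{n>=0} r^n cos(n a + b)
    return sum(r**n * math.cos(n * a + b) for n in range(N))

r, a, b = 0.6, 0.9, 0.3
den = 1 - 2 * r * math.cos(a) + r * r
# (s_si) and (s_g_c)
assert abs(partial_sin(r, a, b) - (math.sin(b) - r * math.sin(b - a)) / den) < 1e-10
assert abs(partial_cos(r, a, b) - (math.cos(b) - r * math.cos(b - a)) / den) < 1e-10
```

Both identities are the imaginary and real parts of the geometric series $\sum_{n\geq0}\rho^{n}e^{i(n\alpha+\beta)}=e^{i\beta}/(1-\rho e^{i\alpha})$.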
\section{One parameter sums. Multivariate generating functions of Chebyshev polynomials\label{one}}
The theorem below is obtained by very elementary methods. Given the definition of the function $\chi_{k,n}^{(t_{1},...,t_{n+k})}(x_{1},...,x_{n+k}|\rho)$ in (\ref{_ktnu}), it is obvious that it must be a rational function. Many properties of the denominators of these functions can also be more or less deduced from the definition. However, the exact forms of the numerators are not trivial. For the sake of completeness, we present all these trivial and nontrivial observations in one theorem.
\begin{theorem} \label{main}For all integers $n,k\geq0,$ $\left\vert x_{s}\right\vert <1,t_{s}\in\mathbb{Z},$ $s\allowbreak=\allowbreak1,...,n+k,$ we have: \begin{equation}
\chi_{k,n}^{(t_{1},...,t_{n+k})}(x_{1},...,x_{n+k}|\rho)=\allowbreak
\frac{l_{k,n}^{(t_{1},...,t_{n+k})}(x_{1},...,x_{n+k}|\rho)}{w_{n+k}
(x_{1},...,x_{n+k}|\rho)}, \label{formula} \end{equation}
where $w_{m}(x_{1},...,x_{m}|\rho)$ is a symmetric polynomial of degree $2^{m-1}$ in $x_{1},...,x_{m}$ and of degree $2^{m}$ in $\rho$, defined by the following recurrence: \begin{gather}
w_{m+1}(x_{1},...,x_{m-1},\cos(\alpha),\cos(\beta)|\rho)=\label{rek}\\
w_{m}(x_{1},...,x_{m-1},\cos(\alpha+\beta)|\rho)w_{m}(x_{1},...,x_{m-1}
,\cos(\alpha-\beta)|\rho),\nonumber \end{gather}
$m\geq1$, with $w_{1}(x|\rho)$ given by (\ref{w1}).
$l_{k,n}^{(t_{1},\ldots,t_{n+k})}(x_{1},...,x_{n+k}|\rho)$ is another polynomial given by the relationship: \begin{gather}
l_{k,n}^{(t_{1},...,t_{n+k})}(x_{1},...,x_{n+k}|\rho)=\label{diff}\\ \sum_{j=0}^{2^{n+k}-1}\rho^{j}\sum_{m=0}^{j}\frac{1}{m!}\left. \frac{d^{m}
}{d\rho^{m}}w_{k+n}(x_{1},...,x_{k+n}|\rho)\right\vert _{\rho=0}\nonumber\\ \times\prod_{s=1}^{k}T_{(j-m)+t_{s}}(x_{s})\prod_{s=1+k}^{n+k}U_{(j-m)+t_{s}}(x_{s}).\nonumber \end{gather}
\end{theorem}
\begin{proof} See section \ref{dow}. \end{proof}
\begin{corollary} Theorem \ref{main} yields, as a by-product, the following important set of identities involving Chebyshev polynomials of the first and the second kind. Namely, we have: $\forall n,k\geq0:n+k\geq1,\forall t_{1},\ldots,t_{n+k}\geq0,\forall j\geq2^{n+k},\forall(x_{1},\ldots,x_{k+n})\in(-1,1)^{n+k}$ \begin{equation} \sum_{m=0}^{j}\frac{1}{m!}\left. \frac{d^{m}}{d\rho^{m}}w_{k+n}
(x_{1},...,x_{k+n}|\rho)\right\vert _{\rho=0}\times\prod_{s=1}^{k} T_{(j-m)+t_{s}}(x_{s})\prod_{s=1+k}^{n+k}U_{(j-m)+t_{s}}(x_{s})=0. \label{identities} \end{equation}
In particular, for $n+k\allowbreak=\allowbreak1$ we have \[ U_{k}(x)-2xU_{k+1}(x)+U_{k+2}(x)=0, \] which is nothing else than the well-known three-term recurrence satisfied by the Chebyshev polynomials. However, for, say, $k=0$ and $n\allowbreak=\allowbreak2$ we get for all $s,m\geq0$ \begin{gather*} U_{s}(y)U_{m}(x)-4xyU_{s+1}(y)U_{m+1}(x)+2(2x^{2}+2y^{2}-1)U_{s+2}(y)U_{m+2}(x)\\ -4xyU_{s+3}(y)U_{m+3}(x)+U_{s+4}(y)U_{m+4}(x)=0, \end{gather*} which is, to our knowledge, new. \end{corollary}
\begin{proof}
Since $l_{k,n}^{(t_{1},...,t_{n+k})}(x_{1},...,x_{n+k}|\rho)$ is a polynomial of degree $2^{k+n}-1$ in $\rho$, all the coefficients of the powers of $\rho$ higher than $2^{k+n}-1$ in the expansion of the right-hand side must be equal to zero.
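Since the Taylor coefficients of $w_{2}(x,y|\rho)$ in $\rho$ at $\rho=0$ are $1,$ $-4xy,$ $2(2x^{2}+2y^{2}-1),$ $-4xy,$ $1$, the case $k=0,$ $n=2$ of (\ref{identities}) is a five-term relation between products of $U$'s. As a numerical sanity check (ours, not part of the proof), the following Python sketch verifies it for several shifts:

```python
def U(n, x):
    # Chebyshev U_n via U_{n+1} = 2x U_n - U_{n-1}, U_0 = 1, U_1 = 2x
    u0, u1 = 1.0, 2.0 * x
    if n == 0:
        return u0
    for _ in range(n - 1):
        u0, u1 = u1, 2.0 * x * u1 - u0
    return u1

def five_term(s, m, x, y):
    # Taylor coefficients of w2(x, y | rho) at rho = 0:
    # 1, -4xy, 2(2x^2 + 2y^2 - 1), -4xy, 1
    c = [1.0, -4 * x * y, 2 * (2 * x * x + 2 * y * y - 1), -4 * x * y, 1.0]
    # the case k = 0, n = 2 of (identities): this combination must vanish
    return sum(c[j] * U(s + 4 - j, y) * U(m + 4 - j, x) for j in range(5))

for s in range(4):
    for m in range(4):
        assert abs(five_term(s, m, 0.3, -0.8)) < 1e-9
```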
\begin{corollary} For $n\geq2,$ after swapping $x_{1}$ and $x_{n},$ taking $\beta\allowbreak=\allowbreak0$ and $\cos(\alpha)\allowbreak=\allowbreak x_{n}$ in (\ref{rek}), we get: \[
w_{n}(1,x_{2},...,x_{n}|\rho)=(w_{n-1}(x_{2},...,x_{n}|\rho))^{2}. \] In particular \[
w_{3}(x_{1},\cos(\alpha_{2}),\cos(\alpha_{3})|\rho)=w_{2}(x_{1},\cos
(\alpha_{3}+\alpha_{2})|\rho)w_{2}(x_{1},\cos(\alpha_{3}-\alpha_{2})|\rho), \] which, after replacing $\cos(\alpha_{2})$ by $x_{2}$ and $\cos(\alpha_{3})$ by $x_{3}$ and with the help of Mathematica, yields: \begin{gather}
w_{3}(x_{1},x_{2},x_{3}|\rho)=16\rho^{4}(x_{1}^{4}+x_{2}^{4}+x_{3}^{4} )-8\rho^{2}(1+\rho^{2})^{2}(x_{1}^{2}+x_{2}^{2}+x_{3}^{2})\label{w3}\\ +16\rho^{2}(1+\rho^{4})(x_{1}^{2}x_{2}^{2}+x_{1}^{2}x_{3}^{2}+x_{2}^{2} x_{3}^{2})+64\rho^{4}x_{1}^{2}x_{2}^{2}x_{3}^{2}-32\rho^{3}(1+\rho^{2} )x_{1}x_{2}x_{3}(x_{1}^{2}+x_{2}^{2}+x_{3}^{2})\nonumber\\ -8\rho(1+\rho^{2})(1+\rho^{4}-6\rho^{2})x_{1}x_{2}x_{3}+(1+\rho^{2} )^{4}.\nonumber \end{gather}
\end{corollary}
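The explicit form (\ref{w3}) can be checked against the recurrence (\ref{rek}) numerically. In the Python sketch below (ours, for verification only), `w3_rec` applies the recurrence with $x_{1}$ passed through, while `w3_explicit` encodes (\ref{w3}):

```python
import math

def w2(x, y, r):
    # w_2(x, y | rho) as defined in (w2)
    return (1 - r * r) ** 2 - 4 * x * y * r * (1 + r * r) + 4 * r * r * (x * x + y * y)

def w3_rec(a1, a2, a3, r):
    # recurrence (rek): w3(x1, cos a2, cos a3 | rho)
    #   = w2(x1, cos(a2 + a3) | rho) * w2(x1, cos(a2 - a3) | rho)
    x1 = math.cos(a1)
    return w2(x1, math.cos(a2 + a3), r) * w2(x1, math.cos(a2 - a3), r)

def w3_explicit(x, y, z, r):
    # the explicit polynomial (w3)
    return (16 * r**4 * (x**4 + y**4 + z**4)
            - 8 * r**2 * (1 + r**2) ** 2 * (x**2 + y**2 + z**2)
            + 16 * r**2 * (1 + r**4) * (x**2 * y**2 + x**2 * z**2 + y**2 * z**2)
            + 64 * r**4 * x**2 * y**2 * z**2
            - 32 * r**3 * (1 + r**2) * x * y * z * (x**2 + y**2 + z**2)
            - 8 * r * (1 + r**2) * (1 + r**4 - 6 * r**2) * x * y * z
            + (1 + r**2) ** 4)

a1, a2, a3, r = 0.7, 1.2, -0.4, 0.55
x, y, z = math.cos(a1), math.cos(a2), math.cos(a3)
assert abs(w3_rec(a1, a2, a3, r) - w3_explicit(x, y, z, r)) < 1e-9
```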
\begin{remark} Notice that from Theorem \ref{main} we deduce that for all integers $t_{1},...,t_{k+n}$ the ratio \[
\frac{\chi_{k,n}^{(t_{1},...,t_{k+n})}(x_{1},...,x_{n+k}|\rho)}{\chi _{k,n}^{(0,...,0)}(x_{1},...,x_{n+k}|\rho)} \] is a rational function of arguments $x_{1},...,x_{n+k},\rho.$
Such an observation was first made by Carlitz in \cite{Carlitz72} (formula 1.4) for $k+n\allowbreak=\allowbreak2,$ nonnegative integers $t_{1}$ and $t_{2}$, and the so-called Rogers--Szeg\"{o} polynomials in two variables $x_{1}$ and $x_{2}$. Later it was generalized by Szab\l owski in \cite{Szab5} to the so-called $q$-Hermite polynomials, also in two variables. Now, since for $q\allowbreak=\allowbreak0$ the $q$-Hermite polynomials are equal to Chebyshev polynomials of the second kind, one can state that so far the above-mentioned observation was known for $k\allowbreak=\allowbreak0$ and $n\allowbreak=\allowbreak2.$ Hence we deal with a far-reaching generalization, both in the number of variables and in allowing Chebyshev polynomials of the first kind. \end{remark}
\begin{corollary} \label{gest}For $\left\vert x_{i}\right\vert \leq1$ and $\left\vert \rho\right\vert <1,$ $n\geq1:$ \begin{gather*}
\chi_{n,0}(x_{1},...,x_{n}|\rho)\geq0,\\ \underset{j\text{ fold}}{\int_{-1}^{1}...\int_{-1}^{1}(}\prod_{s=1}^{n}
\frac{1}{\pi\sqrt{1-x_{s}^{2}}})\chi_{n,0}(x_{1},...,x_{n}|\rho)dx_{1} ...dx_{j}\allowbreak=\allowbreak\prod_{s=j+1}^{n}\frac{1}{\pi\sqrt{1-x_{s} ^{2}}}, \end{gather*} for $j=1,...,n.$ \end{corollary}
\begin{proof} For the first assertion recall that based on Theorem \ref{main} we have \begin{gather*}
\chi_{n,0}(\cos(\alpha_{1}),...,\cos(\alpha_{n})|\rho)\allowbreak=\allowbreak\sum_{k\geq0}\rho^{k}\prod_{j=1}^{n}T_{k}(\cos(\alpha_{j}))\allowbreak=\\ \allowbreak\frac{1}{2^{n}}\sum_{i_{1}\in\{-1,1\}}...\sum_{i_{n}\in\{-1,1\}}\frac{(1-\rho\cos(\sum_{k=1}^{n}i_{k}\alpha_{k}))}{(1-2\rho\cos(\sum_{k=1}^{n}i_{k}\alpha_{k})+\rho^{2})}, \end{gather*} which is nonnegative for all $\alpha_{i}\in\mathbb{R}$, $i\allowbreak=\allowbreak1,...,n$ and $\left\vert \rho\right\vert <1.$ \newline The remaining part follows directly from the definition (\ref{_ktnu}) of $\chi_{n,0}$ and the properties of the polynomials $T_{i}$.
Let us now finish the case $n\allowbreak=\allowbreak2.$ That is, let us calculate $\chi_{2,0}^{n,m}(x,y|\rho)$ and $\chi_{1,1}^{n,m}(x,y|\rho)$. The case of $\chi_{0,2}^{n,m}(x,y|\rho)$ has been solved e.g. in \cite{SzablAW} (Lemma 3, with $q\allowbreak=\allowbreak0$).
\begin{proposition} \label{2wym}i) \begin{align*}
\chi_{1,0}^{m,0}(x|\rho) & =\sum_{i=0}^{\infty}\rho^{i}T_{i+m}
(x)\allowbreak=\allowbreak\frac{T_{m}(x)-\rho T_{m-1}(x)}{w_{1}(x|\rho)},\\
\chi_{0,1}^{0,m}(x|\rho) & =\sum_{i=0}^{\infty}\rho^{i}U_{i+m}
(x)=\frac{U_{m}(x)-\rho U_{m-1}(x)}{w_{1}(x|\rho)}, \end{align*}
ii) \begin{gather*}
\chi_{2,0}^{n,m}(x,y|\rho)\allowbreak=\allowbreak\sum_{k\geq0}\rho^{k} T_{k+n}(x)T_{k+m}(y)\allowbreak=\allowbreak\\
(T_{n}(x)T_{m}(y)(w_{2}(x,y|\rho)-\rho^{4})\\ +\rho T_{n+1}(x)T_{m+1}(y)(1-2\rho^{2}+4\rho^{2}(x^{2}+y^{2})-4\rho xy)\\ +\rho^{2}T_{n+2}(x)T_{m+2}(y)(1-4\rho xy)+\rho^{3}T_{n+3}(x)T_{m+3}
(y))/w_{2}(x,y|\rho), \end{gather*}
iii) \begin{gather*}
\chi_{0,2}^{n,m}(x,y|\rho)\allowbreak=\sum_{j\geq0}\rho^{j}U_{j+n} (x)U_{j+m}(y)\allowbreak=\\
(U_{n}(x)U_{m}(y)(w_{2}(x,y|\rho)-\rho^{4})\\ +\rho U_{n+1}(x)U_{m+1}(y)(1-2\rho^{2}+4\rho^{2}(x^{2}+y^{2})-4\rho xy)\\ +\rho^{2}U_{n+2}(x)U_{m+2}(y)(1-4\rho xy)+\rho^{3}U_{n+3}(x)U_{m+3}
(y))/w_{2}(x,y|\rho), \end{gather*}
iv) \begin{gather*}
\chi_{1,1}^{n,m}(x,y|\rho)\allowbreak=\allowbreak\sum_{j\geq0}\rho^{j} U_{m+j}(x)T_{n+j}(y)\allowbreak=\\
(T_{n}(y)U_{m}(x)(w_{2}(x,y|\rho)-\rho^{4})\\ +\rho T_{n+1}(y)U_{m+1}(x)(1-2\rho^{2}+4\rho^{2}(x^{2}+y^{2})-4\rho xy)\\ +\rho^{2}T_{n+2}(y)U_{m+2}(x)(1-4\rho xy)+\rho^{3}T_{n+3}(y)U_{m+3}(x))/w_{2}(x,y|\rho). \end{gather*} \end{proposition}
\begin{proof} We apply formula (\ref{diff}). For i) we take $n+k=1$ and notice that the values of the derivatives of $w_{1}$ with respect to $\rho$ at $\rho=0$ are $1,$ $-2x,$ $2.$
To get ii) we notice that the successive derivatives of $w_{2}$ with respect to $\rho$ at $\rho=0$ are $1,$ $-4xy,$ $8x^{2}+8y^{2}-4,$ $-24xy$. Having this and applying (\ref{diff}) directly, we get a formula expanded in powers of $\rho$; Mathematica then brings it to the form above.
For iii) and iv) we argue similarly, obtaining expansions in powers of $\rho$, and then use Mathematica to bring them to a friendlier form. \end{proof}
As a corollary we get the formulae presented in (\ref{_2}) and (\ref{_3}) by setting $n\allowbreak=\allowbreak m\allowbreak=\allowbreak0$ and remembering that $T_{-i}(x)\allowbreak=\allowbreak T_{i}(x)$ and $U_{-i}(x)=-U_{i-2}(x)$ for $i\allowbreak=\allowbreak0,1,2$.
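The shifted one-variable sums in Proposition \ref{2wym}, i) admit a quick numerical check by comparing truncated series with the closed forms. The Python sketch below is ours, for verification only:

```python
def T(n, x):
    # Chebyshev T_n via the three-term recurrence
    t0, t1 = 1.0, x
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2.0 * x * t1 - t0
    return t1

def U(n, x):
    # Chebyshev U_n, U_0 = 1, U_1 = 2x
    u0, u1 = 1.0, 2.0 * x
    if n == 0:
        return u0
    for _ in range(n - 1):
        u0, u1 = u1, 2.0 * x * u1 - u0
    return u1

def w1(x, r):
    return 1.0 - 2.0 * r * x + r * r

x, r = 0.45, 0.6
for m in (1, 2, 5):
    # sum rho^i T_{i+m}(x) = (T_m(x) - rho T_{m-1}(x)) / w1
    sT = sum(r**i * T(i + m, x) for i in range(300))
    assert abs(sT - (T(m, x) - r * T(m - 1, x)) / w1(x, r)) < 1e-10
    # sum rho^i U_{i+m}(x) = (U_m(x) - rho U_{m-1}(x)) / w1
    sU = sum(r**i * U(i + m, x) for i in range(300))
    assert abs(sU - (U(m, x) - r * U(m - 1, x)) / w1(x, r)) < 1e-10
```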
\begin{corollary} \label{3wym}$\forall x,y,z\in\lbrack-1,1],\left\vert \rho\right\vert <1:$
i) \begin{gather*}
\chi_{3,0}(x,y,z|\rho)=\sum_{i\geq0}\rho^{i}T_{i}(x)T_{i}(y)T_{i}(z)\allowbreak=\allowbreak((1+\rho^{2})^{3}\allowbreak+\allowbreak8\rho^{4}\left( x^{4}+y^{4}+z^{4}\right) \allowbreak+\allowbreak32\rho^{4}x^{2}y^{2}z^{2}\allowbreak\\ -\allowbreak2\left( \rho^{2}+1\right) \left( \rho^{2}+3\right) \rho^{2}\left( x^{2}+y^{2}+z^{2}\right) \allowbreak+\allowbreak4\left( \rho^{4}+3\right) \rho^{2}\left( x^{2}y^{2}+x^{2}z^{2}+y^{2}z^{2}\right) \allowbreak\\ -\allowbreak4\left( 3\rho^{2}+5\right) \rho^{3}xyz\left( x^{2}+y^{2}+z^{2}\right) \allowbreak-\allowbreak\left( \rho^{6}-15\rho
^{4}-25\rho^{2}+7\right) \rho xyz)\allowbreak/w_{3}(x,y,z|\rho), \end{gather*}
ii) \begin{gather*}
\chi_{0,3}(x,y,z|\rho)=\sum_{i\geq0}\rho^{i}U_{i}(x)U_{i}(y)U_{i}(z)=\\ ((1+\rho^{2})^{3}+16\rho^{3}xyz-4\rho^{2}(1+\rho^{2})(x^{2}+y^{2}
+z^{2}))/w_{3}(x,y,z|\rho), \end{gather*}
iii) \begin{gather*}
\chi_{1,2}(x,y,z|\rho)=\sum_{i\geq0}\rho^{i}T_{i}(x)U_{i}(y)U_{i}(z)=\\ (\left( \rho^{2}+1\right) ^{3}\allowbreak+\allowbreak8\rho^{4} x^{4}\allowbreak-\allowbreak16\rho^{3}x^{3}yz\allowbreak-\allowbreak2\left( \rho^{2}+1\right) \left( \rho^{2}+3\right) \rho^{2}x^{2}\allowbreak\\ \allowbreak+\allowbreak8\rho^{2}x^{2}\left( y^{2}+z^{2}\right) -\allowbreak4\rho\left( 5-(\rho^{2}+2)^{2}\right) xyz\allowbreak -\allowbreak4\left( \rho^{2}+1\right) \rho^{2}(y^{2}+z^{2})\allowbreak
)/w_{3}(x,y,z|\rho), \end{gather*}
iv) \begin{gather*}
\chi_{2,1}(x,y,z|\rho)=\sum_{i\geq0}\rho^{i}T_{i}(x)T_{i}(y)U_{i}(z)=\\ (\left( \rho^{2}+1\right) ^{3}\allowbreak+\allowbreak8\rho^{4}\left( x^{4}+y^{4}\right) \allowbreak-\allowbreak2\left( \rho^{2}+1\right) \left( \rho^{2}+3\right) \rho^{2}\left( x^{2}+y^{2}\right) \allowbreak\\ +\allowbreak4\left( \rho^{4}+3\right) \rho^{2}x^{2}y^{2}\allowbreak+\allowbreak16\rho^{4}x^{2}y^{2}z^{2}\allowbreak+\allowbreak8\rho^{2}z^{2}\left( x^{2}+y^{2}\right) \allowbreak-\allowbreak8\left( \rho^{2}+2\right) \rho^{3}xyz\left( x^{2}+y^{2}\right) \allowbreak\\ -\allowbreak8\rho^{3}xyz^{3}\allowbreak-\allowbreak2\left( -5\rho^{4}-10\rho^{2}+3\right) \rho xyz\allowbreak-\allowbreak4\left( \rho
^{2}+1\right) \rho^{2}z^{2})/w_{3}(x,y,z|\rho), \end{gather*}
where $w_{3}(x,y,z|\rho)\allowbreak$ is given by (\ref{w3}). \end{corollary}
\begin{proof} Again we apply formula (\ref{diff}). We take $k\allowbreak=\allowbreak3,$ $n\allowbreak=\allowbreak0$ for i), $k\allowbreak=\allowbreak0,$ $n\allowbreak=\allowbreak3$ for ii), $k\allowbreak=\allowbreak1,$ $n\allowbreak=\allowbreak2$ for iii) and $k\allowbreak=\allowbreak2,$ $n\allowbreak=\allowbreak1$ for iv). Now we have to remember that the successive derivatives of $w_{3}$ with respect to $\rho$ taken at $\rho\allowbreak=\allowbreak0$ are respectively $1,$ $-8xyz,$ $8(1\allowbreak-\allowbreak(x^{2}+y^{2}+z^{2})\allowbreak+\allowbreak4(x^{2}y^{2}\allowbreak+\allowbreak x^{2}z^{2}\allowbreak+\allowbreak y^{2}z^{2})),$ $48xyz(5\allowbreak-\allowbreak4(x^{2}+y^{2}+z^{2})),$ $48(3\allowbreak-\allowbreak8(x^{2}+y^{2}+z^{2})\allowbreak+\allowbreak8(x^{4}+y^{4}+z^{4})\allowbreak+\allowbreak32x^{2}y^{2}z^{2}),$ $960xyz(5\allowbreak-\allowbreak4(x^{2}+y^{2}+z^{2})),$ $2880(1\allowbreak-\allowbreak(x^{2}+y^{2}+z^{2})\allowbreak+\allowbreak4(x^{2}y^{2}\allowbreak+\allowbreak x^{2}z^{2}\allowbreak+\allowbreak y^{2}z^{2})),$ $-40320xyz.$ Then we get the formulae by applying (\ref{diff}) directly. The expressions are long and not very legible. We applied Mathematica to get the forms presented in i), ii), iii) and iv). \end{proof}
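The closed form in Corollary \ref{3wym}, ii) can be checked against its defining series. A Python sketch (ours, for verification only; helper names are not part of the paper):

```python
def U(n, x):
    # Chebyshev U_n via U_{n+1} = 2x U_n - U_{n-1}, U_0 = 1, U_1 = 2x
    u0, u1 = 1.0, 2.0 * x
    if n == 0:
        return u0
    for _ in range(n - 1):
        u0, u1 = u1, 2.0 * x * u1 - u0
    return u1

def w3(x, y, z, r):
    # the explicit polynomial (w3)
    return (16 * r**4 * (x**4 + y**4 + z**4)
            - 8 * r**2 * (1 + r**2) ** 2 * (x**2 + y**2 + z**2)
            + 16 * r**2 * (1 + r**4) * (x**2 * y**2 + x**2 * z**2 + y**2 * z**2)
            + 64 * r**4 * x**2 * y**2 * z**2
            - 32 * r**3 * (1 + r**2) * x * y * z * (x**2 + y**2 + z**2)
            - 8 * r * (1 + r**2) * (1 + r**4 - 6 * r**2) * x * y * z
            + (1 + r**2) ** 4)

x, y, z, r = 0.3, -0.5, 0.8, 0.4
# Corollary, ii): sum rho^i U_i(x) U_i(y) U_i(z)
series = sum(r**i * U(i, x) * U(i, y) * U(i, z) for i in range(400))
closed = ((1 + r**2) ** 3 + 16 * r**3 * x * y * z
          - 4 * r**2 * (1 + r**2) * (x**2 + y**2 + z**2)) / w3(x, y, z, r)
assert abs(series - closed) < 1e-9
```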
\section{Kibble--Slepian formula and related sums for Chebyshev polynomials \label{kibb}}
Let $f_{n}(x_{1},...,x_{n}|K_{n})$ denote the density of the normal distribution with zero expectations and non-singular covariance matrix $K_{n}$ such that $\operatorname*{var}(X_{i})=\allowbreak1$ for $i\allowbreak=\allowbreak1,...,n,$ i.e. having ones on the diagonal. Let $\rho_{ij}$ denote the $ij$-th entry of the matrix $K_{n}.$ Consequently, the one-dimensional marginals $f_{1}$ are given by: \[ f_{1}(x)\allowbreak=\allowbreak\exp(-x^{2}/2)/\sqrt{2\pi}. \] Let us also denote by $S_{n}$ a symmetric $n\times n$ matrix with zeros on the diagonal and nonnegative integers as off-diagonal entries. Let us denote the $ij$-th entry of the matrix $S_{n}$ by $s_{ij}.$ Recall that Kibble in the 40s and Slepian in the 70s presented the following formula: \begin{equation}
\frac{f_{n}(x_{1},...,x_{n}|K_{n})}{\prod_{m=1}^{n}f_{1}(x_{m})}=\sum _{S}(\prod_{1\leq i<j\leq n}\frac{\left( \rho_{ij}\right) ^{s_{ij}}} {s_{ij}!}\prod_{m=1}^{n}H_{\sigma_{m}}(x_{m})), \label{K-S} \end{equation} where $H_{i}(x)$ denotes the $i$-th (so-called probabilistic) Hermite polynomial, i.e. the polynomials forming an orthogonal base of the space of functions square integrable with respect to the weight $f_{1}(x)$, $\sigma_{m}\allowbreak=\allowbreak\sum_{j=1}^{m-1}s_{jm}\allowbreak+\allowbreak\sum_{j=1+m}^{n}s_{mj},$ and $\sum_{S}$ denotes, as before, summation over all $n(n-1)/2$ non-diagonal entries of the matrix $S_{n}.$ For more details on the Kibble--Slepian formula see e.g. the recent paper by Ismail \cite{Ismal2016}. A partially successful attempt to sum it was made by Szab\l owski in \cite{SzablKib}, where for $n\allowbreak=\allowbreak3$ the author replaced the polynomials $H_{n}$ by the so-called $q$-Hermite polynomials $H_{n}(x|q)$ and $s_{ij}!$ by $[s_{ij}]_{q}!$, where $[n]_{q}\allowbreak=\allowbreak(1-q^{n})/(1-q)$ for $\left\vert q\right\vert <1,$ $[n]_{1}\allowbreak=\allowbreak n$ and $[n]_{q}!\allowbreak=\allowbreak\prod_{i=1}^{n}[i]_{q}$ with $[0]_{q}!\allowbreak=\allowbreak1.$ Taking into account that $H_{n}(x|0)\allowbreak=\allowbreak U_{n}(x/2)$ and $[n]_{0}!\allowbreak=\allowbreak1$, we see that (\ref{K-S}) has already been generalized and summed for other polynomials. The intention of summing in \cite{SzablKib} was to find a generalization of the normal distribution that has compact support. The attempt was only partially successful: a relatively closed form of the sum was obtained, but the sum was not positive for all suitable values of the parameters $\rho_{ij}$ and all values of $\left\vert q\right\vert <1.$
In the present paper, we are going to present closed forms of the sum (\ref{K-S}) with the polynomials $H_{n}$ replaced by Chebyshev polynomials of either the first or the second kind and with $s_{ij}!$ replaced by $1.$ This last replacement is justified by the fact that $\left[ s_{ij}\right]_{q}!\allowbreak=\allowbreak1$ if $q\allowbreak=\allowbreak0.$ For more details, see publications on the so-called $q$-series and also the brief introduction at the beginning of Section \ref{gen}, below.
In other words, we are going to find closed forms for the sums (\ref{_t}) and (\ref{_u}), where $\mathbf{x}$ and $K_{n},$ used below, mean, as before, $\mathbf{x}\allowbreak=\allowbreak\mathbf{(}x_{1},...,x_{n}),$ while $K_{n}$ denotes the symmetric $n\times n$ matrix with ones on its diagonal and $\rho_{ij}$ as its $(i,j)$-th entry. We will assume that all the $\rho$'s lie in the interval $(-1,1)$ and, additionally, that the matrix $K_{n}$ is positive definite.
We have the following result:
\begin{theorem} \label{kibble}Let us denote $\mathcal{K}_{n}\allowbreak=\allowbreak\left\{ (i,j):1\leq i<j\leq n\right\} $, $\beta_{n,m}\allowbreak=\beta_{n,m} (i_{n},i_{m})\allowbreak=\allowbreak i_{n}\alpha_{n}+i_{m}\alpha_{m} $. For $S\subseteq\mathcal{K}_{n}$ let $\rho_{S}\allowbreak=\allowbreak \prod_{(n,m)\in S}\rho_{nm}$, $b_{S}\allowbreak=\allowbreak\sum_{(n,m)\in S}\beta_{n,m}$, $B_{1,\ldots,n}\allowbreak=\allowbreak B(i_{1},\ldots ,i_{n})\allowbreak=\allowbreak\sum_{j=1}^{n}i_{j}\alpha_{j} $.
We have i) \begin{gather*}
f_{T}(\cos(\alpha_{1}),...,\cos(\alpha_{n})|K_{n})\allowbreak=\\ \allowbreak\frac{1}{2^{n}}\sum_{i_{1}\in\{-1,1\}}...\sum_{i_{n}\in \{-1,1\}}\frac{\sum_{k=0}^{n}(-1)^{k}\sum_{S_{k}\subseteq\mathcal{K}_{n} }^{\prime}\rho_{S_{k}}\cos(b_{S_{k}})}{\prod_{j=1}^{n}\prod_{m=j+1} ^{n}(1-2\rho_{jm}\cos(\beta_{j,m}(i_{j},i_{m}))+\rho_{jm}^{2})}, \end{gather*} ii) If $n$ is even then \begin{gather*}
f_{U}(\cos(\alpha_{1}),...,\cos(\alpha_{n})|K_{n})=\\ (-1)^{n/2}\frac{1}{2^{n}\prod_{j=1}^{n}\sin(\alpha_{j})}\sum_{i_{1} \in\{-1,1\}}...\sum_{i_{n}\in\{-1,1\}}(-1)^{\sum_{l=1}^{n}(i_{l}+1)/2}\\ \frac{\sum_{k=0}^{n}(-1)^{k}\sum_{S_{k}\subseteq\mathcal{K}_{n}}^{\prime} \rho_{S_{k}}\cos(B_{1,\ldots,n}-b_{S_{k}})}{\prod_{j=1}^{n}\prod_{m=j+1} ^{n}(1-2\rho_{jm}\cos(\beta_{j,m}(i_{j},i_{m}))+\rho_{jm}^{2})}, \end{gather*} while if $n$ is odd then \begin{gather*}
f_{U}(\cos(\alpha_{1}),...,\cos(\alpha_{n})|K_{n})=\\ (-1)^{n/2}\frac{1}{2^{n}\prod_{j=1}^{n}\sin(\alpha_{j})}\sum_{i_{1} \in\{-1,1\}}...\sum_{i_{n}\in\{-1,1\}}(-1)^{\sum_{l=1}^{n}(i_{l}+1)/2}\\ \frac{\sum_{k=0}^{n-1}(-1)^{k}\sum_{S_{k}\subseteq\mathcal{K}_{n}}^{\prime }\rho_{S_{k}}\sin(B_{1,\ldots,n}-b_{S_{k}})}{\prod_{j=1}^{n}\prod_{m=j+1} ^{n}(1-2\rho_{jm}\cos(\beta_{j,m}(i_{j},i_{m}))+\rho_{jm}^{2})} \end{gather*} where $S_{k}$ denotes any subset of $\mathcal{K}_{n}$ that contains $k$ elements and $\sum_{S_{k}\subseteq\mathcal{K}_{n}}^{\prime}$ means summation over all such $S_{k}$. \end{theorem}
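To illustrate the statement, for $n=2$ the sum (\ref{_t}) reduces (in our reading, with the single parameter $s_{12}$) to $\sum_{s\geq0}\rho_{12}^{s}T_{s}(x_{1})T_{s}(x_{2}),$ and assertion i) can then be checked numerically. The following Python sketch is an illustration only, not part of the proof:

```python
import math

def f_T2_series(a1, a2, rho, terms=400):
    # our reading of (\ref{_t}) for n = 2:
    # sum_{s>=0} rho^s T_s(cos a1) T_s(cos a2), with T_s(cos t) = cos(s t)
    return sum(rho**s * math.cos(s * a1) * math.cos(s * a2)
               for s in range(terms))

def f_T2_closed(a1, a2, rho):
    # Theorem, assertion i), specialised to n = 2; the only pair is (1, 2)
    total = 0.0
    for i1 in (-1, 1):
        for i2 in (-1, 1):
            b = i1 * a1 + i2 * a2  # beta_{1,2}(i1, i2)
            total += (1 - rho * math.cos(b)) / (1 - 2 * rho * math.cos(b) + rho**2)
    return total / 4
```

Both evaluations agree to machine precision for $\left\vert \rho_{12}\right\vert <1.$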
\begin{proof} Let us consider (\ref{_t}) first. Keeping in mind assertions of Proposition
\ref{ilocz} we see that $f_{T}(\cos(\alpha_{1}),...,\cos(\alpha_{n})|K_{n})$ is a sum of $2^{n}$ summands corresponding to the different arrangements of the values of the variables $i_{k}\in\{-1,1\},$ $k\allowbreak=\allowbreak1,...,n.$ Each summand is a cosine taken at $\sum_{j=1}^{n}i_{j}s_{j}\alpha_{j}.$ Recalling the definition of the numbers $s_{j},$ we see that in such a sum each $s_{mj},$ $1\leq m<j\leq n,$ appears twice: once as $s_{mj}\alpha_{m}i_{m}$ and once as $s_{mj}\alpha_{j}i_{j}.$ In other words, we have $\sum_{j=1}^{n}i_{j} s_{j}\alpha_{j}\allowbreak=\allowbreak\sum_{m=1}^{n-1}\sum_{j=m+1}^{n} s_{mj}(\alpha_{m}i_{m}+\alpha_{j}i_{j}).$ Having this in mind, we can apply the summation formula (\ref{ksc}) with $\beta\allowbreak=\allowbreak0$ and sum each cosine for a particular system of values of the set $\{i_{j}:j\allowbreak=\allowbreak1,...,n\}.$ It then remains to sum over all such systems of values.
As far as the other assertions are concerned, we use the definition of the Chebyshev polynomials of the second kind and the formulae presented in Proposition \ref{ilocz}. In this case we have $\sum_{j=1}^{n}i_{j}(s_{j}+1)\alpha_{j}\allowbreak =\allowbreak\sum_{j=1}^{n}i_{j}\alpha_{j}\allowbreak+\allowbreak\sum _{m=1}^{n-1}\sum_{j=m+1}^{n}s_{mj}(\alpha_{m}i_{m}+\alpha_{j}i_{j}).$ As a result, we deal with a signed sum of either sines or cosines, depending on whether $n(n-1)/2$ (the number of different $s_{mj},$ $1\leq m<j\leq n$) is odd or even. Now we again refer to either (\ref{kss}) or (\ref{ksc}), depending on the parity of $n(n-1)/2,$ this time with $\beta\allowbreak=\allowbreak \sum_{j=1}^{n}i_{j}\alpha_{j}.$ \end{proof}
\begin{corollary}
Both functions $f_{T}(\mathbf{x}|K_{n})$ and $f_{U}(\mathbf{x}|K_{n})$ are rational functions of all their arguments. Moreover, they have the same denominator, given by the following formula: \[
V_{n}(\mathbf{x|}K_{n})=\prod_{j=1}^{n-1}\prod_{k=j+1}^{n}w_{2}(x_{j}
,x_{k}|\rho_{jk}), \] where $w_{2}$ is given by formula (\ref{w2}). \end{corollary}
\begin{proof}
First of all, notice that, by the formulae given in Theorem \ref{kibble}, the functions $f_{T}(\mathbf{x}|K_{n})$ and $f_{U}(\mathbf{x}|K_{n})$ are rational functions of $x_{1}\allowbreak=\allowbreak\cos(\alpha_{1}),...,x_{n} \allowbreak=\allowbreak\cos(\alpha_{n}).$ Moreover, it is easy to notice that all the formulae have the same denominators. To find these denominators, notice that the factors in each denominator referring to $(i_{j},i_{m})$ and $(-i_{j},-i_{m})$ are the same, since $\beta_{j,m}(-i_{j},-i_{m})=-\beta_{j,m}(i_{j},i_{m})$ and cosine is an even function. Further, we can group the factors $(1-2\rho_{jm}\cos(\beta_{j,m}(i_{j},i_{m}))+\rho_{jm}^{2})$ and $(1-2\rho_{jm}\cos(\beta_{j,m}(i_{j},-i_{m}))+\rho_{jm}^{2})$ and apply (\ref{pro2}): \begin{gather*} (1-2\rho_{jm}\cos(\beta_{j,m}(i_{j},i_{m}))+\rho_{jm}^{2})(1-2\rho_{jm} \cos(\beta_{j,m}(i_{j},-i_{m}))+\rho_{jm}^{2})\\
=w_{2}(\cos(\alpha_{j}),\cos(\alpha_{m})|\rho_{jm}). \end{gather*} since $\beta_{n,m}(i_{n},i_{m})\allowbreak=\allowbreak i_{n}\alpha_{n} +i_{m}\alpha_{m}.$ \end{proof}
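The grouping of factors used in this proof can be checked numerically. In the sketch below the explicit form of $w_{2}$ is inferred from the pairing of the factors (the paper's own definition is formula (\ref{w2}), given in an earlier section), so it should be read as our assumption:

```python
import math

def w2(x, y, r):
    # explicit form inferred from the pairing below; cf. (\ref{w2})
    return (1 - r**2)**2 + 4 * r**2 * (x**2 + y**2) - 4 * r * (1 + r**2) * x * y

def paired_factors(th, ph, r):
    # (1 - 2 r cos(th+ph) + r^2) (1 - 2 r cos(th-ph) + r^2)
    return ((1 - 2 * r * math.cos(th + ph) + r**2)
            * (1 - 2 * r * math.cos(th - ph) + r**2))
```

The product of the paired factors indeed depends on $\theta$ and $\varphi$ only through $\cos\theta$ and $\cos\varphi$.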
\begin{corollary} Let us denote $\beta_{kj}=i_{k}\alpha_{k}+i_{j}\alpha_{j},$ $k\allowbreak =\allowbreak1,2,$ $j=2,3,$ $k<j,$ $p\allowbreak=\allowbreak\rho_{12}\rho _{13}\rho_{23},$ $B_{1,2,3}\allowbreak=\sum_{j=1}^{3}i_{j}\alpha_{j},$ \begin{gather*} c(i_{1},i_{2},i_{3},\alpha_{1},\alpha_{2},\alpha_{3},\rho_{12},\rho_{13} ,\rho_{23})\allowbreak=\allowbreak(1-\sum_{1\leq k<j\leq3}\rho_{kj}\cos (\beta_{kj})\allowbreak+\allowbreak\\ p\sum_{1\leq k<j\leq3}\rho_{kj}^{-1}\cos(2B_{1,2,3}\allowbreak-\allowbreak \beta_{kj})\allowbreak-\allowbreak p\cos(2B_{1,2,3}))/\prod_{1\leq k<j\leq 3}(1-2\rho_{kj}\cos(\beta_{kj})+\rho_{kj}^{2}),
\end{gather*} \begin{gather*} s(i_{1},i_{2},i_{3},\alpha_{1},\alpha_{2},\alpha_{3},\rho_{12},\rho_{13} ,\rho_{23})\allowbreak=(\sin(B_{1,2,3})(1+p)\\ -(\rho_{12}\sin(i_{3}\alpha_{3})+\rho_{13}\sin(i_{2}\alpha_{2})+\rho_{23} \sin\left( i_{1}\alpha_{1}\right) )\allowbreak\\ -\allowbreak(\rho_{12}\rho_{13}\sin(i_{1}\alpha_{1})+\rho_{12}\rho_{23} \sin(i_{2}\alpha_{2})+\rho_{13}\rho_{23}\sin(i_{3}\alpha_{3})))\\ /\allowbreak\prod_{1\leq k<j\leq3}(1-2\rho_{kj}\cos(\beta_{kj})+\rho_{kj}^{2}). \end{gather*} Then:
i) $f_{T}(\cos(\alpha_{1}),\cos(\alpha_{2}),\cos(\alpha_{3}),\rho_{12} ,\rho_{13},\rho_{23})=$\newline$\frac{1}{4}\sum_{i_{2}\in\{-1,1\}}\sum _{i_{3}\in\{-1,1\}}c(1,i_{2},i_{3},\alpha_{1},\alpha_{2},\alpha_{3},\rho _{12},\rho_{13},\rho_{23}),$
ii) $f_{U}(\cos(\alpha_{1}),\cos(\alpha_{2}),\cos(\alpha_{3}),\rho_{12} ,\rho_{13},\rho_{23})\allowbreak=\allowbreak$\newline$\frac{1}{8}\sum _{i_{1}\in\{-1,1\}}\sum_{i_{2}\in\{-1,1\}}\sum_{i_{3}\in\{-1,1\}} (-1)^{\sum_{k=1}^{3}(i_{k}+1)/2}s(i_{1},i_{2},i_{3},\alpha_{1},\alpha _{2},\alpha_{3},\rho_{12},\rho_{13},\rho_{23})$
Here $p/\rho_{kj}$ in the case $\rho_{kj}\allowbreak=\allowbreak0$ is understood as the limit as $\rho_{kj}\allowbreak\rightarrow\allowbreak0.$
iii) $f_{U}(x,y,z,\rho_{12},\rho_{13},\rho_{23})\allowbreak=\allowbreak (4\rho_{12}\rho_{13}(\rho_{23}-\rho_{12}\rho_{13})(1-\rho_{23}^{2} )x^{2}\allowbreak+\allowbreak4\rho_{12}\rho_{23}(\rho_{13}-\rho_{12}\rho _{23})(1-\rho_{13}^{2})y^{2}\allowbreak+\allowbreak4\rho_{13}\rho_{23} (\rho_{12}-\rho_{13}\rho_{23})(1-\rho_{12}^{2})z^{2}\allowbreak-\allowbreak 4(\rho_{13}-\rho_{12}\rho_{23})(\rho_{23}-\rho_{12}\rho_{13})(1+\rho_{12} \rho_{13}\rho_{23})xy\allowbreak-\allowbreak4(\rho_{12}-\rho_{13}\rho _{23})(\rho_{23}-\rho_{12}\rho_{13})(1+\rho_{12}\rho_{13}\rho_{23} )xz\allowbreak-\allowbreak4(\rho_{13}-\rho_{12}\rho_{23})(\rho_{12}-\rho _{23}\rho_{13})(1+\rho_{12}\rho_{13}\rho_{23})yz\allowbreak+\allowbreak (1-\rho_{12}^{2})(1-\rho_{13}^{2})(1-\rho_{23}^{2})(1-\rho_{12}\rho_{13}
\rho_{23}))$\newline$/(w_{2}(x,y|\rho_{12})w_{2}(x,z|\rho_{13})w_{2}
(y,z|\rho_{23}))$ \end{corollary}
\begin{proof} First of all, notice that $\sum_{k=1}^{2}\sum_{j=k+1}^{3}\beta_{kj} \allowbreak=\allowbreak2B_{1,2,3},$ hence in particular $B_{1,2,3} \allowbreak-\allowbreak\sum_{k=1}^{2}\sum_{j=k+1}^{3}\beta_{kj}\allowbreak =\allowbreak-B_{1,2,3}.$ Then formula i) is clear, based on (\ref{ksc}) with $\beta\allowbreak=\allowbreak B_{1,2,3}$. To get ii), notice that $B_{1,2,3}\allowbreak-\allowbreak\beta_{12}\allowbreak=\allowbreak i_{3} \alpha_{3}$ and $B_{1,2,3}\allowbreak-\allowbreak\beta_{12}\allowbreak -\allowbreak\beta_{13}\allowbreak=\allowbreak-i_{1}\alpha_{1},$ and similarly for the other pairs $(1,3)$ and $(2,3)$. Recall also that $B_{1,2,3} \allowbreak-\allowbreak\sum_{k=1}^{2}\sum_{j=k+1}^{3}\beta_{kj}\allowbreak =\allowbreak-B_{1,2,3}.$ Based on (\ref{kss}), ii) is now also clear.
iii) was obtained with the help of Mathematica. \end{proof}
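For $n=3$ the theorem can also be confronted with the defining triple series; here we read (\ref{_t}) as the Kibble--Slepian-type sum with $s_{ij}!$ replaced by $1$ (our assumption about the notation, since (\ref{_t}) is stated in an earlier section). A numerical sketch, not part of the proof:

```python
import math
import itertools

PAIRS = ((0, 1), (0, 2), (1, 2))

def f_T3_series(a, R, N=40):
    # sum over s12, s13, s23 >= 0 of rho12^s12 rho13^s13 rho23^s23
    # * T_{s12+s13}(cos a1) T_{s12+s23}(cos a2) T_{s13+s23}(cos a3)
    tot = 0.0
    for s12 in range(N):
        for s13 in range(N):
            for s23 in range(N):
                tot += (R[0, 1]**s12 * R[0, 2]**s13 * R[1, 2]**s23
                        * math.cos((s12 + s13) * a[0])
                        * math.cos((s12 + s23) * a[1])
                        * math.cos((s13 + s23) * a[2]))
    return tot

def f_T3_closed(a, R):
    # Theorem \ref{kibble}, assertion i), with n = 3
    tot = 0.0
    for i in itertools.product((-1, 1), repeat=3):
        beta = {p: i[p[0]] * a[p[0]] + i[p[1]] * a[p[1]] for p in PAIRS}
        num = 0.0
        for k in range(4):
            for S in itertools.combinations(PAIRS, k):
                num += ((-1)**k * math.prod(R[p] for p in S)
                        * math.cos(sum(beta[p] for p in S)))
        den = math.prod(1 - 2 * R[p] * math.cos(beta[p]) + R[p]**2 for p in PAIRS)
        tot += num / den
    return tot / 8
```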
\begin{remark} With the help of Mathematica one can show, for example, that the numerator of
$f_{T}(x,y,z|K_{3})$ is a polynomial of degree $6$ consisting of $265$ monomials. Numerical simulations suggest that it is nonnegative on
$(-1,1)^{3}.$ Unfortunately, $f_{U}(x,y,z|K_{3})$ is not nonnegative there, since we have for example \newline$f_{U}(-.9,-.95,.94|\left[ \begin{array} [c]{ccc} 1 & .6 & .8\\ .6 & 1 & .9\\ .8 & .9 & 1 \end{array} \right] )\allowbreak=\allowbreak-0.0912121.$ Notice that this happens even though the matrix $\left[ \begin{array} [c]{ccc} 1 & .6 & .8\\ .6 & 1 & .9\\ .8 & .9 & 1 \end{array} \right] $ is positive definite. This observation is in accordance with the general negative result presented in Theorem 1 of \cite{SzablKib}. Recall that \cite{SzablKib} concerns a generalization of $f_{U}$ to all parameters $q\in(-1,1),$ taking into account that the $q$-Hermite polynomials
$H_{n}(x|q)$ can be identified for $q\allowbreak=\allowbreak0$ with the polynomials $U_{n}(x/2).$ The example presented in \cite{SzablKib} concerns the case (adapted to $q\allowbreak=\allowbreak0$) when, say, $\rho _{12}\allowbreak=\allowbreak0.$ Hence we see that there are many $6$-tuples $x,y,z,\rho_{12},\rho_{13},\rho_{23}$ leading to negative values of $f_{U}.$ \end{remark}
\section{Remarks on generalization\label{gen}}
In this section we are going, first, to present a $q$-generalization of the Chebyshev polynomials of the first kind and, second, to present some remarks and observations that might help to obtain formulae similar to the ones presented in Theorem \ref{main}, with the Chebyshev polynomials replaced by the so-called $q$-Hermite polynomials $\left\{ h_{n}\right\} $ and related polynomials. Here $q$ is a certain (in general real) number such that $\left\vert q\right\vert <1.$ Since in the previous sections we considered, so to say, the case $q=0,$ we will assume in this section that $q\neq0.$
To proceed further we need to recall certain notions used in the theory of $q$-series: $\left[ 0\right] _{q}\allowbreak=\allowbreak0;$ $\left[ n\right] _{q}\allowbreak=\allowbreak1+q+\ldots+q^{n-1},$ $\left[ n\right] _{q}!\allowbreak=\allowbreak\prod_{j=1}^{n}\left[ j\right] _{q},$ with $\left[ 0\right] _{q}!\allowbreak=1,$ \[
\genfrac{[}{]}{0pt}{}{n}{k}
_{q}\allowbreak=\allowbreak\left\{ \begin{array} [c]{ccc} \frac{\left[ n\right] _{q}!}{\left[ n-k\right] _{q}!\left[ k\right] _{q}!} & , & 0\leq k\leq n\\ 0 & , & otherwise \end{array} \right. . \] $\binom{n}{k}$ will denote the ordinary, well-known binomial coefficient. \newline It is useful to introduce the so-called $q$-Pochhammer symbol: for $n\geq1,$ \[
\left( a|q\right) _{n}=\prod_{j=0}^{n-1}\left( 1-aq^{j}\right) ,~~\left(
a_{1},a_{2},\ldots,a_{k}|q\right) _{n}\allowbreak=\allowbreak\prod_{j=1}
^{k}\left( a_{j}|q\right) _{n}. \]
with $\left( a|q\right) _{0}\allowbreak=\allowbreak1$. Note that $n$ can be equal to $\infty;$ the $q$-Pochhammer symbol is then well defined provided $\left\vert q\right\vert <1.$
Often $\left( a|q\right) _{n},$ as well as $\left( a_{1},a_{2}
,\ldots,a_{k}|q\right) _{n},$ will be abbreviated to $\left( a\right) _{n}$ and $\left( a_{1},a_{2},\ldots,a_{k}\right) _{n},$ if this does not cause misunderstanding.
It is easy to notice that $\left( q\right) _{n}=\left( 1-q\right) ^{n}\left[ n\right] _{q}!$ and that \[
\genfrac{[}{]}{0pt}{}{n}{k}
_{q}\allowbreak=\allowbreak\left\{ \begin{array} [c]{ccc} \frac{\left( q\right) _{n}}{\left( q\right) _{n-k}\left( q\right) _{k}} & , & n\geq k\geq0\\ 0 & , & otherwise \end{array} \right. . \] The above formula is an example in which directly setting $q\allowbreak=\allowbreak1$ is meaningless; however, the passage to the limit $q\longrightarrow1^{-}$ makes sense.
Notice that in particular $\left[ n\right] _{1}\allowbreak=\allowbreak n,\left[ n\right] _{1}!\allowbreak=\allowbreak n!,$ $
\genfrac{[}{]}{0pt}{}{n}{k}
_{1}\allowbreak=\allowbreak\binom{n}{k},$ $(a)_{1}\allowbreak=\allowbreak1-a,$ $\left( a;1\right) _{n}\allowbreak=\allowbreak\left( 1-a\right) ^{n}$ and $\left[ n\right] _{0}\allowbreak=\allowbreak\left\{ \begin{array} [c]{ccc} 1 & if & n\geq1\\ 0 & if & n=0 \end{array} \right. ,$ $\left[ n\right] _{0}!\allowbreak=\allowbreak1,$ $
\genfrac{[}{]}{0pt}{}{n}{k}
_{0}\allowbreak=\allowbreak1,$ $\left( a;0\right) _{n}\allowbreak =\allowbreak\left\{ \begin{array} [c]{ccc} 1 & if & n=0\\ 1-a & if & n\geq1 \end{array} \right. .$
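These notions translate directly into code; the following small Python helpers (ours, for illustration) also check the identity $(q)_{n}=(1-q)^{n}\left[ n\right] _{q}!$ noted above:

```python
def qint(n, q):
    # [n]_q = 1 + q + ... + q^(n-1)
    return sum(q**j for j in range(n))

def qfact(n, q):
    # [n]_q! = prod_{j=1}^n [j]_q, with [0]_q! = 1
    out = 1.0
    for j in range(1, n + 1):
        out *= qint(j, q)
    return out

def qpoch(a, q, n):
    # (a|q)_n = prod_{j=0}^{n-1} (1 - a q^j), with (a|q)_0 = 1
    out = 1.0
    for j in range(n):
        out *= 1 - a * q**j
    return out

def qbinom(n, k, q):
    # Gaussian binomial coefficient via (q)_n / ((q)_{n-k} (q)_k)
    if not 0 <= k <= n:
        return 0.0
    return qpoch(q, q, n) / (qpoch(q, q, n - k) * qpoch(q, q, k))
```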
As before, $i$ will denote the imaginary unit, unless clearly stated otherwise. In the sequel we will also need the so-called $q$-Hermite polynomials. There exists a very large literature on the properties as well as the applications of these polynomials. Let us recall only that the three-term recurrence satisfied by these polynomials is the following: \[
h_{n+1}(x|q)=2xh_{n}(x|q)-(1-q^{n})h_{n-1}(x|q), \]
with $h_{-1}(x|q)\allowbreak=\allowbreak0,$ $h_{0}(x|q)\allowbreak =\allowbreak1.$ It is well known that the density which makes these polynomials orthogonal is the following: \[
f_{h}\left( x|q\right) =\frac{2\left( q\right) _{\infty}\sqrt{1-x^{2}}
}{\pi}\prod_{k=1}^{\infty}l\left( x|q^{k}\right) , \]
where $l\left( x|a\right) =(1+a)^{2}-4x^{2}a.$ Moreover, the generating function of these polynomials is equal to: \begin{equation} \sum_{j=0}^{\infty}\frac{t^{j}}{\left( q\right) _{j}}h_{j}\left(
x|q\right) =\frac{1}{\prod_{k=0}^{\infty}v\left( x|tq^{k}\right) }, \label{ch1} \end{equation}
where $v\left( x|a\right) =1-2ax+a^{2}.$
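The three-term recurrence and the generating function (\ref{ch1}) can be cross-checked numerically by truncating both the series and the infinite product; the following sketch is an illustration only:

```python
import math

def h_polys(x, q, N):
    # h_0, ..., h_N via h_{n+1} = 2x h_n - (1 - q^n) h_{n-1}
    h = [1.0, 2.0 * x]
    for n in range(1, N):
        h.append(2 * x * h[n] - (1 - q**n) * h[n - 1])
    return h

def qpoch(a, q, n):
    out = 1.0
    for j in range(n):
        out *= 1 - a * q**j
    return out

def genfun_lhs(x, q, t, M=60):
    # truncated sum_{j>=0} t^j / (q)_j h_j(x|q)
    h = h_polys(x, q, M)
    return sum(t**j / qpoch(q, q, j) * h[j] for j in range(M + 1))

def genfun_rhs(x, q, t, K=200):
    # 1 / prod_{k>=0} v(x | t q^k), with v(x|a) = 1 - 2 a x + a^2
    out = 1.0
    for k in range(K):
        a = t * q**k
        out /= 1 - 2 * a * x + a * a
    return out
```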
\begin{remark} For the sake of completeness of the paper, let us recall that $h_{n}
(x|0)\allowbreak=\allowbreak U_{n}(x),$ for $n\geq-1.$ \end{remark}
\subsection{Conjectures, remarks and interesting identities\label{Ident}}
Theorem \ref{main} suggests a new method of summing characteristic functions. One can formulate it in the following way.
\emph{Suppose that we can guess that a certain multivariate characteristic function, say for example } \begin{equation}
\chi_{n}^{(l_{1},\ldots l_{n})}(x_{1},\ldots,x_{n}|\rho,q)=\sum_{j\geq0}
\frac{\rho^{j}}{(q)_{j}}\prod_{k=1}^{n}h_{j+l_{k}}(x_{k}|q), \label{gen_ch} \end{equation} \emph{where the numbers }$l_{1},\ldots,l_{n}$ \emph{are integers and
}$\left\vert \rho\right\vert ,\left\vert q\right\vert <1$, \emph{is a ratio of two functions. Moreover, suppose that we can guess the form of the denominator }$W_{n}(x_{1},\ldots,x_{n}|\rho,q)$ \emph{of this ratio. Then the numerator can be obtained by a formula similar to (\ref{diff}), i.e. by: } \[ \sum_{j=0}^{\infty}\rho^{j}\sum_{k=0}^{j}\frac{1}{k!}\left. \frac{d^{k}
}{d\rho^{k}}W_{n}(x_{1},...,x_{n}|\rho,q)\right\vert _{\rho=0}\frac
{1}{(q)_{j-k}}\prod_{s=1}^{n}h_{j-k+l_{s}}(x_{s}|q). \]
\begin{remark} There are classes of characteristic functions that have common denominators, like for example the bivariate ones described in \cite{Szab5}, Proposition 7 (iv), or, more generally, the bivariate functions of a form similar to (\ref{gen_ch}) that were considered by Carlitz in \cite{Carlitz72}. The point is that all these functions are at most bivariate. There are no results concerning more variables. Thus we pose the following conjecture. \end{remark}
\begin{conjecture}
The functions $\chi_{n}^{(l_{1},\ldots l_{n})}(x_{1},\ldots,x_{n}|\rho,q),$ for all $n,l_{1},\ldots,l_{n},$ are ratios of functions with the common denominator of the form \[
W_{n}(x_{1},\ldots,x_{n}|\rho,q)=\prod_{i=0}^{\infty}w_{n}(x_{1}
,...,x_{n}|\rho q^{i}), \]
where functions $w_{n}(x_{1},\ldots,x_{n}|\rho)$ are given by the iterative relationship (\ref{rek}). \end{conjecture}
\subsubsection{One-dimensional case}
Now we will present a one-dimensional example in order to show that even in this simplest case we obtain interesting identities. In this example we will, so to say, derive formula (\ref{ch1}) once more. First of all, notice that $(1-ae^{i\varphi})(1-ae^{-i\varphi})\allowbreak=\allowbreak1+a^{2}-2ax\overset{df}{=}v(x|a),$ where $x\allowbreak=\allowbreak\cos\varphi$. Moreover, we have: \[
W_{1}(x|\rho,q)=\prod_{j=0}^{\infty}v(x|\rho q^{j})\allowbreak=\allowbreak (\rho e^{i\varphi})_{\infty}(\rho e^{-i\varphi})_{\infty}. \]
Let us define the functions $d_{n}(x|q)$ implicitly by the relationship $\frac
{n!}{(q)_{n}}d_{n}(x|q)\allowbreak=\allowbreak\left. \frac{d^{n}}{d\rho^{n}
}W_{1}(x|\rho,q)\right\vert _{\rho=0}.$ Notice that the $d_{n}(x|q)$ are the coefficients of the expansion of $W_{1}(x|\rho,q)$ in the following series: \begin{equation}
W_{1}(x|\rho,q)=\sum_{n\geq0}\frac{\rho^{n}}{(q)_{n}}d_{n}(x|q). \label{expW} \end{equation}
For the sake of symmetry, let us also denote by $f_{n}(x|q)$ the coefficients of the expansion of $1/W_{1}(x|\rho,q)$ in the following series: \[
1/W_{1}(x|\rho,q)\allowbreak=\allowbreak\sum_{n\geq0}\frac{\rho^{n}}{(q)_{n}
}f_{n}(x|q). \]
\begin{remark} Let us recall the polynomials $\left\{ b_{n}\right\} $ defined in \cite{bms} and later analyzed in \cite{Szab-rev}, (2.43). These polynomials satisfy the following three-term recurrence: \[
b_{n+1}(x|q)=-2q^{n}xb_{n}(x|q)+q^{n-1}(1-q^{n})b_{n-1}(x|q), \]
with $b_{-1}(x|q)\allowbreak=\allowbreak0,$ $b_{0}(x|q)\allowbreak =\allowbreak1.$ Moreover, as follows from \cite{SzablAW}, (3.18), after some trivial transformations the polynomials $\left\{ b_{n}\right\} $ satisfy the following identity: \begin{equation} \sum_{j=0}^{n}
\genfrac{[}{]}{0pt}{}{n}{j}
_{q}b_{n-j}(x|q)h_{j+k}(x|q)=\left\{ \begin{array} [c]{ccc} 0 & if & k<n\\
(-1)^{n}q^{\binom{n}{2}}\frac{(q)_{k}}{(q)_{k-n}}h_{k-n}(x|q) & if & k\geq n \end{array} \right. . \label{idb} \end{equation}
Recall also that the two families of polynomials $\left\{ h_{n}\right\} $ and $\left\{ b_{n}\right\} $ are related to one another by \[
b_{n}(x|q)=(-1)^{n}q^{\binom{n}{2}}h_{n}(x|q^{-1}), \] for $q\neq0,$ while for $q\allowbreak=\allowbreak0$ we have $b_{-1}
(x|0)\allowbreak=\allowbreak0,$ $b_{0}(x|0)\allowbreak=\allowbreak1,$
$b_{1}(x|0)\allowbreak=\allowbreak-2x,$ $b_{2}(x|0)\allowbreak=\allowbreak1$ and $b_{n}(x|0)\allowbreak=\allowbreak0$ for $n\geq3.$
In the sequel, when considering the case $q\allowbreak=\allowbreak0$ we will understand the function in question as its limit as $q\rightarrow0$. \end{remark}
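Identity (\ref{idb}) and the initial conditions above (we read them as $b_{-1}(x|q)=0,$ $b_{0}(x|q)=1$) are easy to probe numerically; a sketch:

```python
def h_polys(x, q, N):
    # h_{n+1} = 2x h_n - (1 - q^n) h_{n-1}, h_0 = 1, h_1 = 2x
    h = [1.0, 2.0 * x]
    for n in range(1, N):
        h.append(2 * x * h[n] - (1 - q**n) * h[n - 1])
    return h

def b_polys(x, q, N):
    # b_{n+1} = -2 q^n x b_n + q^(n-1) (1 - q^n) b_{n-1}, b_0 = 1, b_1 = -2x
    b = [1.0, -2.0 * x]
    for n in range(1, N):
        b.append(-2 * q**n * x * b[n] + q**(n - 1) * (1 - q**n) * b[n - 1])
    return b

def qpoch(a, q, n):
    out = 1.0
    for j in range(n):
        out *= 1 - a * q**j
    return out

def qbinom(n, k, q):
    return qpoch(q, q, n) / (qpoch(q, q, n - k) * qpoch(q, q, k))

def lhs_idb(n, k, x, q):
    # left-hand side of (\ref{idb}): sum_{j=0}^{n} [n,j]_q b_{n-j} h_{j+k}
    h, b = h_polys(x, q, n + k + 1), b_polys(x, q, n + 1)
    return sum(qbinom(n, j, q) * b[n - j] * h[j + k] for j in range(n + 1))
```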
One can notice that we have \[
\frac{n!}{(q)_{n}}f_{n}(x|q)\allowbreak=\allowbreak\left. \frac{d^{n}}
{d\rho^{n}}W_{1}^{-1}(x|\rho,q)\right\vert _{\rho=0}. \] We have the following lemma.
\begin{lemma} \label{1-dim}For $\left\vert x\right\vert \leq1,\left\vert q\right\vert <1,$ we have \begin{align}
d_{n}(x|q)\allowbreak & =\allowbreak b_{n}(x|q),\label{1dimb}\\
f_{n}(x|q) & =h_{n}(x|q). \label{1dimh} \end{align}
\end{lemma}
\begin{proof} To prove (\ref{1dimb}), let us recall formula (1.7) of \cite{bms}: \[
W_{1}(x|\rho,q)=\sum_{j\geq0}\frac{\rho^{j}}{(q)_{j}}b_{j}(x|q). \] To get (\ref{1dimh}), we recall (\ref{ch1}). A separate proof is needed for the case $q\allowbreak=\allowbreak0.$ Then $W_{1}(x|\rho,0)\allowbreak
=\allowbreak v(x|\rho)\allowbreak=\allowbreak1-2x\rho+\rho^{2},$ which, compared with our definition of the polynomials $b_{n}$ for $q\allowbreak =\allowbreak0,$ shows that (\ref{1dimb}) is true in this case also. \end{proof}
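The expansion of $W_{1}$ used in the first step can be verified numerically, truncating both the product defining $W_{1}$ and the series in the $b_{j}$; a sketch:

```python
def b_polys(x, q, N):
    # b_{n+1} = -2 q^n x b_n + q^(n-1) (1 - q^n) b_{n-1}, b_0 = 1, b_1 = -2x
    b = [1.0, -2.0 * x]
    for n in range(1, N):
        b.append(-2 * q**n * x * b[n] + q**(n - 1) * (1 - q**n) * b[n - 1])
    return b

def qpoch(a, q, n):
    out = 1.0
    for j in range(n):
        out *= 1 - a * q**j
    return out

def W1_product(x, rho, q, K=200):
    # prod_{j>=0} (1 - 2 rho q^j x + rho^2 q^{2j}), truncated
    out = 1.0
    for j in range(K):
        r = rho * q**j
        out *= 1 - 2 * r * x + r * r
    return out

def W1_series(x, rho, q, M=40):
    # formula (1.7) of [bms]: sum_{j>=0} rho^j / (q)_j b_j(x|q)
    b = b_polys(x, q, M)
    return sum(rho**j / qpoch(q, q, j) * b[j] for j in range(M + 1))
```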
Following formula (\ref{diff}), adapted to the present situation, we now see that, for $\left\vert q\right\vert ,\left\vert \rho\right\vert <1 $ and $\left\vert x\right\vert \leq1$: \begin{align*}
\chi_{1}^{t}(x|\rho,q)\allowbreak & =\allowbreak\sum_{j=0}^{\infty}\frac
{\rho^{j}}{(q)_{j}}h_{t+j}(x|q)\allowbreak=\allowbreak\frac{1}{W_{1}
(x|\rho,q)}\\ & \times\sum_{j=0}^{\infty}\rho^{j}\sum_{m=0}^{j}\frac{1}{(j-m)!}
\frac{(j-m)!}{(q)_{j-m}(q)_{m}}b_{j-m}(x|q)h_{m+t}(x|q)\\
& =\allowbreak\frac{1}{W_{1}(x|\rho,q)}\sum_{j=0}^{\infty}\frac{\rho^{j} }{(q)_{j}}\sum_{m=0}^{j}
\genfrac{[}{]}{0pt}{}{j}{m}
_{q}b_{j-m}(x|q)h_{m+t}(x|q)\\
& =\frac{1}{W_{1}(x|\rho,q)}\sum_{j=0}^{t}
\genfrac{[}{]}{0pt}{}{t}{j}
_{q}(-\rho)^{j}q^{\binom{j}{2}}h_{t-j}(x|q). \end{align*} In particular, for $t\allowbreak=\allowbreak0,$ we recover formula (\ref{ch1}). This can be regarded as yet another proof of this formula, since we started from (\ref{expW}).
\subsubsection{Two-dimensional case}
Again, as before, let us denote\newline$\frac{n!}{(q)_{n}}d_{n}^{(2)}
(x,y|q)\allowbreak=\allowbreak\left. \frac{d^{n}}{d\rho^{n}}W_{2}
(x,y|\rho,q)\right\vert _{\rho=0},$ $\frac{n!}{(q)_{n}}f_{n}^{(2)}
(x,y|q)\allowbreak=\allowbreak\left. \frac{d^{n}}{d\rho^{n}}W_{2}
^{-1}(x,y|\rho,q)\right\vert _{\rho=0}$, where $W_{2}(x,y|\rho,q)\allowbreak
=\allowbreak\prod_{j=0}^{\infty}w_{2}(x,y|\rho q^{j}),$ with $w_{2}(x,y|a)$ defined by (\ref{w2}).
\begin{lemma} \label{2-dim}For $\theta,\varphi\in\lbrack0,2\pi),\left\vert q\right\vert <1,$ we have \begin{align}
d_{n}^{(2)}(\cos\theta,\cos\varphi|q)\allowbreak & =\allowbreak\sum_{m=0}^{n}
\genfrac{[}{]}{0pt}{}{n}{m}
_{q}b_{m}(\cos(\theta+\varphi)|q)b_{n-m}(\cos(\theta-\varphi)|q),\label{d2b}\\
f_{n}^{(2)}(\cos\theta,\cos\varphi|q) & =\sum_{m=0}^{n}
\genfrac{[}{]}{0pt}{}{n}{m}
_{q}h_{m}(\cos(\theta+\varphi)|q)h_{n-m}(\cos(\theta-\varphi)|q). \label{d2h} \end{align}
\end{lemma}
\begin{proof}
First of all, notice that $w_{2}(\cos\theta,\cos\varphi|\rho)$ can be decomposed as \begin{equation}
w_{2}(\cos\theta,\cos\varphi|\rho)\allowbreak=\allowbreak w_{1}(\cos
(\theta+\varphi)|\rho)w_{1}(\cos(\theta-\varphi)|\rho), \label{w22} \end{equation} hence, taking into account the Leibniz rule, we get: \begin{gather*}
d_{n}^{(2)}(x,y|q)=\frac{(q)_{n}}{n!}\left. \frac{d^{n}}{d\rho^{n}}
(W_{1}(\cos(\theta+\varphi)|\rho,q)W_{1}(\cos(\theta-\varphi)|\rho ,q))\right\vert _{\rho=0}\\ =\frac{(q)_{n}}{n!}\sum_{m=0}^{n}\binom{n}{m}\left. \frac{d^{m}}{d\rho^{m}
}W_{1}(\cos(\theta+\varphi)|\rho,q)\right\vert _{\rho=0}\left. \frac
{d^{n-m}}{d\rho^{n-m}}W_{1}(\cos(\theta-\varphi)|\rho,q)\right\vert _{\rho=0}\\ =\frac{(q)_{n}}{n!}\sum_{m=0}^{n}\binom{n}{m}\frac{m!}{(q)_{m}}b_{m}
(\cos(\theta+\varphi)|q)\frac{(n-m)!}{(q)_{n-m}}b_{n-m}(\cos(\theta
-\varphi)|q). \end{gather*} To get (\ref{d2h}), we argue in a similar way using Lemma \ref{1-dim} on the way. \end{proof}
\begin{theorem} \label{wazny}We have, for $x,y\in\mathbb{R}$, $\left\vert q\right\vert <1$ and all $n\geq0$: \begin{gather}
d_{n}^{(2)}(x,y|q)=\label{con1}\\ (-1)^{n}\sum_{j=0}^{\left\lfloor n/2\right\rfloor }(-1)^{j}q^{-\binom{n-2j}
{2}-j+\binom{j}{2}}\frac{(q)_{n}}{(q)_{j}(q)_{n-2j}}b_{n-2j}(x|q)b_{n-2j}
(y|q),\nonumber\\
f_{n}^{(2)}(x,y|q)\allowbreak=\allowbreak\sum_{j=0}^{\left\lfloor n/2\right\rfloor }\frac{(q)_{n}}{(q)_{j}(q)_{n-2j}}h_{n-2j}(x|q)h_{n-2j}(y|q). \label{con2} \end{gather}
\end{theorem}
\begin{proof} The proof is deferred to Section \ref{dow}. \end{proof}
\begin{remark} Notice that, in accordance with our agreement that the case $q\allowbreak =\allowbreak0$ is understood as the limit as $q\rightarrow0,$ we have
$d_{0}^{(2)}(x,y|0)\allowbreak=\allowbreak1,$ $d_{1}^{(2)}(x,y|0)\allowbreak
=\allowbreak-4xy$, $d_{2}^{(2)}(x,y|0)\allowbreak=\allowbreak4(x^{2}
+y^{2})-2,$ $d_{3}^{(2)}(x,y|0)\allowbreak=\allowbreak-4xy,$ $d_{4}
^{(2)}(x,y|0)\allowbreak=\allowbreak1$ and $d_{n}^{(2)}(x,y|0)\allowbreak =\allowbreak0$ for all $n\geq5$. \end{remark}
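Since both (\ref{d2h}) and (\ref{con2}) express $f_{n}^{(2)},$ they can be compared numerically at $x=\cos\theta,$ $y=\cos\varphi;$ the following sketch is an illustration, not a proof:

```python
import math

def h_polys(x, q, N):
    h = [1.0, 2.0 * x]
    for n in range(1, N):
        h.append(2 * x * h[n] - (1 - q**n) * h[n - 1])
    return h

def qpoch(a, q, n):
    out = 1.0
    for j in range(n):
        out *= 1 - a * q**j
    return out

def qbinom(n, k, q):
    return qpoch(q, q, n) / (qpoch(q, q, n - k) * qpoch(q, q, k))

def f2_lemma(n, th, ph, q):
    # (\ref{d2h}): sum_m [n,m]_q h_m(cos(th+ph)) h_{n-m}(cos(th-ph))
    hp = h_polys(math.cos(th + ph), q, n + 1)
    hm = h_polys(math.cos(th - ph), q, n + 1)
    return sum(qbinom(n, m, q) * hp[m] * hm[n - m] for m in range(n + 1))

def f2_theorem(n, th, ph, q):
    # (\ref{con2}): sum_j (q)_n / ((q)_j (q)_{n-2j}) h_{n-2j}(x) h_{n-2j}(y)
    hx = h_polys(math.cos(th), q, n + 1)
    hy = h_polys(math.cos(ph), q, n + 1)
    return sum(qpoch(q, q, n) / (qpoch(q, q, j) * qpoch(q, q, n - 2 * j))
               * hx[n - 2 * j] * hy[n - 2 * j] for j in range(n // 2 + 1))
```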
As a corollary we get the following interesting nontrivial identity involving polynomials $\left\{ b_{n}\right\} $ and $\left\{ h_{n}\right\} .$
\begin{corollary} For all complex $x,y,q,$ $k\geq0$ and $t,s\in\mathbb{N\cup}\{0\},$ we get \begin{equation} \sum_{m=0}^{k}
\genfrac{[}{]}{0pt}{}{k}{m}
_{q}d_{m}^{(2)}(x,y|q)h_{k-m+t}(x|q)h_{k-m+s}(y|q)=P_{t,s}^{(k)}(x,y|q) \label{sumk} \end{equation}
where $P_{t,s}^{(k)}(x,y|q)$ is a polynomial of order $t+s$ in $x$ and $y.$
In particular, we have \begin{equation} \sum_{m=0}^{k}
\genfrac{[}{]}{0pt}{}{k}{m}
_{q}d_{m}^{(2)}(x,y|q)h_{k-m}(x|q)h_{k-m}(y|q)\allowbreak=\left\{ \begin{array} [c]{ccc} 0 & if & k\text{ is odd}\\ (-1)^{l}q^{\binom{l}{2}}(q^{l+1})_{l} & if & k=2l \end{array} \right. . \label{00k} \end{equation}
\end{corollary}
\begin{proof} Knowing that \[
\sum_{j=0}^{\infty}\frac{\rho^{j}}{(q)_{j}}h_{j+t}(x|q)h_{j+s}(y|q)\allowbreak
=\allowbreak\frac{(\rho^{2})_{\infty}V_{t,s}(x,y|\rho,q)}{W_{2}(x,y|\rho,q)}, \]
for $t,s\in\mathbb{N\cup}\{0\}$, which is a modification of the formula given in assertion i) of Lemma 3 in \cite{SzablAW}, where $V_{t,s}(x,y|\rho,q)$
denotes a certain polynomial of degree $t+s$ in $x$ and $y$, using our expansion of $W_{2}(x,y|\rho,q)$ and then applying Cauchy multiplication of series, we get the identity \begin{equation} \sum_{j=0}^{\infty}\frac{\rho^{j}}{(q)_{j}}\sum_{m=0}^{j}
\genfrac{[}{]}{0pt}{}{j}{m}
_{q}d_{m}^{(2)}(x,y|q)h_{j-m+t}(x|q)h_{j-m+s}(y|q)\allowbreak=\allowbreak V_{t,s}(x,y|\rho,q)(\rho^{2})_{\infty}, \label{id2wym} \end{equation} true for all $\left\vert x\right\vert ,\left\vert y\right\vert \leq1,$ $\left\vert \rho\right\vert ,\left\vert q\right\vert <1.$ Now, knowing the form of the polynomial $V_{t,s}$ given in \cite{SzablAW}, \cite{SzabP-M} or
\cite{Szab-rev}, we deduce that the expansion of $V_{t,s}$ in powers of $\rho$ is a power series in $\rho$ whose coefficients are polynomials in $x$ and $y$ of degree at most $t+s,$ since a linear combination of polynomials of degree at most $t+s$ is again a polynomial of degree at most $t+s.$ The same is true of the product $V_{t,s}(x,y|\rho,q)(\rho ^{2})_{\infty},$ since the coefficients of the expansion of $(\rho^{2})_{\infty}$ are constants. Now, comparing the coefficients of the powers of $\rho$ on the two sides of (\ref{id2wym}) proves the first part of the statement.
Now, knowing that $V_{0,0}\allowbreak=\allowbreak1$, expanding $\left( \rho^{2}\right) _{\infty}$ in a standard way and finally comparing the coefficients of equal powers of $\rho,$ we arrive at (\ref{00k}). \end{proof}
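Identity (\ref{00k}) can be probed numerically, computing $d_{m}^{(2)}$ from (\ref{con1}); the sketch below checks the cases $k=1$ (an odd $k,$ giving $0$) and $k=2$:

```python
import math

def h_polys(x, q, N):
    h = [1.0, 2.0 * x]
    for n in range(1, N):
        h.append(2 * x * h[n] - (1 - q**n) * h[n - 1])
    return h

def b_polys(x, q, N):
    b = [1.0, -2.0 * x]
    for n in range(1, N):
        b.append(-2 * q**n * x * b[n] + q**(n - 1) * (1 - q**n) * b[n - 1])
    return b

def qpoch(a, q, n):
    out = 1.0
    for j in range(n):
        out *= 1 - a * q**j
    return out

def qbinom(n, k, q):
    return qpoch(q, q, n) / (qpoch(q, q, n - k) * qpoch(q, q, k))

def d2(n, x, y, q):
    # d_n^{(2)} computed from (\ref{con1})
    bx, by = b_polys(x, q, n + 1), b_polys(y, q, n + 1)
    tot = 0.0
    for j in range(n // 2 + 1):
        e = -math.comb(n - 2 * j, 2) - j + math.comb(j, 2)
        tot += ((-1)**j * q**e * qpoch(q, q, n)
                / (qpoch(q, q, j) * qpoch(q, q, n - 2 * j))
                * bx[n - 2 * j] * by[n - 2 * j])
    return (-1)**n * tot

def lhs_00k(k, x, y, q):
    # left-hand side of (\ref{00k})
    hx, hy = h_polys(x, q, k + 1), h_polys(y, q, k + 1)
    return sum(qbinom(k, m, q) * d2(m, x, y, q) * hx[k - m] * hy[k - m]
               for m in range(k + 1))
```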
\section{Proofs\label{dow}}
\begin{proof} [Proof of Proposition \ref{ilocz}.]We will be using the well-known formulae for products of sines and cosines. The proof is by induction. For $n\allowbreak=\allowbreak1,$ in the case of (\ref{cos}) (i.e. $k\allowbreak=\allowbreak0$) we have $\cos(\alpha)\allowbreak =\allowbreak\frac{1}{2}(\cos(\alpha)+\cos(-\alpha)),$ while in the case of (\ref{sin}) (i.e. $k\allowbreak=\allowbreak1$) we get \begin{gather*} \sin(\alpha_{1})\cos(\alpha_{2})=\frac{-1}{4}(\sin(-\alpha_{1}-\alpha _{2})\allowbreak+\allowbreak\sin(-\alpha_{1}+\alpha_{2})-\sin(\alpha _{1}-\alpha_{2})-\sin(\alpha_{1}+\alpha_{2})) \\ =\frac{1}{2}(\sin(\alpha_{1}+\alpha_{2})+\sin(\alpha_{1}-\alpha_{2})). \end{gather*} Hence, let us assume that both formulae are true for $n\allowbreak=\allowbreak m.$
In the case of the first one, we have \begin{gather*} \prod_{j=1}^{m+1}\cos(\xi_{j})\allowbreak=\allowbreak\cos(\xi_{m+1} )\prod_{j=1}^{m}\cos(\xi_{j})\allowbreak=\allowbreak\\ \frac{1}{2^{m}}\sum_{i_{1}\in\{-1,1\}}...\sum_{i_{m}\in\{-1,1\}}\cos (\sum_{k=1}^{m}i_{k}\xi_{k})\cos(\xi_{m+1})\allowbreak\\ =\allowbreak\frac{1}{2^{m+1}}\allowbreak\times\allowbreak\sum _{i_{1}\in\{-1,1\}}...\sum_{i_{m}\in\{-1,1\}}(\cos(\sum_{k=1}^{m}i_{k}\xi _{k}+\xi_{m+1})\allowbreak+\allowbreak\cos(\sum_{k=1}^{m}i_{k}\xi_{k} -\xi_{m+1})). \end{gather*}
Along the way we used the fact that $\cos(\alpha)\cos(\beta)\allowbreak =\allowbreak(\cos(\alpha-\beta)+\cos(\alpha+\beta))/2.$ Let us also observe that the product $\prod_{j=1}^{m}\cos(\xi_{j})$ is a sum of cosines of certain linear combinations of the arguments $\xi_{j},$ $j\allowbreak =\allowbreak1,\ldots,m,$ multiplied by $2^{-(m-1)}.$
In the case of the second one, we first consider the case $k\allowbreak =\allowbreak0$. Assuming that $m$ is even, we get: \begin{gather*} \prod_{j=1}^{m+1}\sin(\xi_{j})=\allowbreak\sin(\xi_{m+1})\prod_{j=1}^{m} \sin(\xi_{j})\allowbreak=\allowbreak(-1)^{m/2}\frac{1}{2^{m}}\allowbreak \times\allowbreak\\ \sum_{i_{1}\in\{-1,1\}}...\sum_{i_{m}\in\{-1,1\}}(-1)^{\sum_{k=1}^{m} (i_{k}+1)/2}\cos(\sum_{k=1}^{m}i_{k}\xi_{k})\sin(\xi_{m+1})\allowbreak\\ =(-1)^{m/2}\frac{1}{2^{m+1}}\allowbreak\sum_{i_{1}\in\{-1,1\}}...\sum _{i_{m}\in\{-1,1\}}(-1)^{\sum_{k=1}^{m}(i_{k}+1)/2}\allowbreak\times\\ (\sin(\sum_{k=1}^{m}i_{k}\xi_{k}\allowbreak+\allowbreak\xi_{m+1} )\allowbreak-\allowbreak\sin(\sum_{k=1}^{m}i_{k}\xi_{k}\allowbreak -\allowbreak\xi_{m+1}))\allowbreak=\\ -(-1)^{m/2}\frac{1}{2^{m+1}}\sum_{i_{m+1}\in\{1\}}\sum_{i_{1}\in \{-1,1\}}...\sum_{i_{m}\in\{-1,1\}}(-1)^{\sum_{k=1}^{m+1}(i_{k}+1)/2}\sin (\sum_{k=1}^{m+1}i_{k}\xi_{k})\allowbreak\\ -(-1)^{m/2}\frac{1}{2^{m+1}}\sum_{i_{m+1}\in\{-1\}}\sum_{i_{1}\in \{-1,1\}}...\sum_{i_{m}\in\{-1,1\}}(-1)^{\sum_{k=1}^{m+1}(i_{k}+1)/2}\sin (\sum_{k=1}^{m+1}i_{k}\xi_{k}). \end{gather*} We used the fact that $\sin(\alpha)\cos(\beta )\allowbreak=\allowbreak(\sin(\alpha-\beta)+\sin(\alpha+\beta))/2.$ The case of $m$ odd is treated in a similar way.
To treat the general case, we expand both the product of sines and the product of cosines in the same manner. \end{proof}
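The two product-to-sum expansions used in this proof can be sanity-checked numerically; the versions below (for a pure cosine product and for a product of an even number of sines) are our transcription of (\ref{cos}) and (\ref{sin}):

```python
import math
import itertools

def cos_product_expansion(xs):
    # prod_j cos(x_j) = 2^{-n} sum_{i in {-1,1}^n} cos(sum_j i_j x_j)
    n = len(xs)
    tot = sum(math.cos(sum(i * x for i, x in zip(sign, xs)))
              for sign in itertools.product((-1, 1), repeat=n))
    return tot / 2**n

def sin_product_expansion_even(xs):
    # for even n: prod_j sin(x_j) =
    # (-1)^{n/2} 2^{-n} sum_i (-1)^{sum (i_j+1)/2} cos(sum_j i_j x_j)
    n = len(xs)
    tot = 0.0
    for sign in itertools.product((-1, 1), repeat=n):
        s = sum((i + 1) // 2 for i in sign)
        tot += (-1)**s * math.cos(sum(i * x for i, x in zip(sign, xs)))
    return (-1)**(n // 2) * tot / 2**n
```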
\begin{proof} [Proof of Lemma \ref{aux1}](\ref{ksc}) Using Euler's identity $\cos (\theta)\allowbreak=\allowbreak(e^{i\theta}\allowbreak+\allowbreak e^{-i\theta})/2$ we get \[ \cos(\beta+\sum_{j=1}^{n}k_{j}\alpha_{j})\allowbreak=\allowbreak\exp (i\beta+\sum_{j=1}^{n}ik_{j}\alpha_{j})/2\allowbreak+\allowbreak\exp (-i\beta-\sum_{j=1}^{n}ik_{j}\alpha_{j})/2. \] So \[ \sum_{k_{1}\geq0}...\sum_{k_{n}\geq0}(\prod_{j=1}^{n}\rho_{j}^{k_{j}} )\exp(i\beta+\sum_{j=1}^{n}ik_{j}\alpha_{j})/2\allowbreak=\allowbreak\frac {1}{2}\exp(i\beta)\prod_{j=1}^{n}\frac{1}{1-\rho_{j}\exp(i\alpha_{j})}. \] Similarly: \[ \sum_{k_{1}\geq0}...\sum_{k_{n}\geq0}(\prod_{j=1}^{n}\rho_{j}^{k_{j}} )\exp(-i\beta-\sum_{j=1}^{n}ik_{j}\alpha_{j})/2\allowbreak=\allowbreak\frac {1}{2}\exp(-i\beta)\prod_{j=1}^{n}\frac{1}{1-\rho_{j}\exp(-i\alpha_{j})}. \] Thus \begin{gather*} \sum_{k_{1}\geq0}...\sum_{k_{n}\geq0}(\prod_{j=1}^{n}\rho_{j}^{k_{j}} )\cos(\beta+\sum_{j=1}^{n}k_{j}\alpha_{j})\\ =\frac{\exp(i\beta)\prod_{j=1}^{n}(1-\rho_{j}\exp(-i\alpha_{j}))+\exp (-i\beta)\prod_{j=1}^{n}(1-\rho_{j}\exp(i\alpha_{j}))}{2\prod_{j=1}^{n} (1+\rho_{j}^{2}-2\rho_{j}\cos(\alpha_{j}))}. \end{gather*} Now, notice that \begin{gather*} \exp(-i\beta)\prod_{j=1}^{n}(1-\rho_{j}\exp(i\alpha_{j}))\allowbreak=\\ \allowbreak\sum_{j=0}^{n}(-1)^{j}\sum_{M_{j,n}\subseteq S_{n}}\prod_{k\in M_{j,n}}\rho_{k}\exp(-i\beta+i\sum_{k\in M_{j,n}}\alpha_{k}), \end{gather*} where $M_{j,n}$ ranges over all $j$-element subsets of $S_{n}\allowbreak=\allowbreak\{1,\ldots,n\}.$
To verify (\ref{kss}), we use the fact that $\sin(\theta)\allowbreak =\allowbreak(e^{i\theta}\allowbreak-\allowbreak e^{-i\theta})/(2i),$ getting \[ \sin(\beta+\sum_{j=1}^{n}k_{j}\alpha_{j})\allowbreak=\allowbreak\exp (i\beta+\sum_{j=1}^{n}ik_{j}\alpha_{j})/(2i)\allowbreak-\allowbreak \exp(-i\beta-\sum_{j=1}^{n}ik_{j}\alpha_{j})/(2i). \] So we have: \[ \sum_{k_{1}\geq0}...\sum_{k_{n}\geq0}(\prod_{j=1}^{n}\rho_{j}^{k_{j}} )\exp(i\beta+i\sum_{j=1}^{n}k_{j}\alpha_{j})/(2i)\allowbreak=\allowbreak \exp(i\beta)\frac{1}{2i}\prod_{j=1}^{n}\frac{1}{1-\rho_{j}\exp(i\alpha_{j})}. \] Similarly we get \[ \sum_{k_{1}\geq0}...\sum_{k_{n}\geq0}(\prod_{j=1}^{n}\rho_{j}^{k_{j}} )\exp(-i\beta-i\sum_{j=1}^{n}k_{j}\alpha_{j})/(2i)\allowbreak=\allowbreak \exp(-i\beta)\frac{1}{2i}\prod_{j=1}^{n}\frac{1}{1-\rho_{j}\exp(-i\alpha_{j} )}. \] So \begin{gather*} \sum_{k_{1}\geq0}...\sum_{k_{n}\geq0}(\prod_{j=1}^{n}\rho_{j}^{k_{j}} )\sin(\beta+\sum_{j=1}^{n}k_{j}\alpha_{j})\allowbreak=\allowbreak\\ \frac{1}{2i}\frac{\exp(i\beta)\prod_{j=1}^{n}(1-\rho_{j}\exp(-i\alpha _{j}))-\exp(-i\beta)\prod_{j=1}^{n}(1-\rho_{j}\exp(i\alpha_{j}))}{\prod _{j=1}^{n}(1+\rho_{j}^{2}-2\rho_{j}\cos(\alpha_{j}))}\allowbreak. \end{gather*}
\end{proof}
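The single-variable case $n=1$ of the lemma can be checked numerically. The sketch below is an illustration only (not part of the proof): it sums $\sum_{k\geq0}\rho^{k}\cos(\beta+k\alpha)$ and $\sum_{k\geq0}\rho^{k}\sin(\beta+k\alpha)$ directly and compares them with the closed forms obtained from $\sum_{k\geq0}(\rho e^{i\alpha})^{k}=1/(1-\rho e^{i\alpha})$.

```python
import math

# n = 1 case of the lemma:
#   sum_{k>=0} rho^k cos(beta + k*alpha) = (cos(beta) - rho*cos(beta - alpha)) / D
#   sum_{k>=0} rho^k sin(beta + k*alpha) = (sin(beta) - rho*sin(beta - alpha)) / D
# with D = 1 - 2*rho*cos(alpha) + rho^2 and |rho| < 1.

def cos_sum(rho, alpha, beta, terms=2000):
    return sum(rho**k * math.cos(beta + k * alpha) for k in range(terms))

def sin_sum(rho, alpha, beta, terms=2000):
    return sum(rho**k * math.sin(beta + k * alpha) for k in range(terms))

def closed_forms(rho, alpha, beta):
    d = 1 - 2 * rho * math.cos(alpha) + rho**2
    c = (math.cos(beta) - rho * math.cos(beta - alpha)) / d
    s = (math.sin(beta) - rho * math.sin(beta - alpha)) / d
    return c, s

rho, alpha, beta = 0.6, 0.9, 0.3
c, s = closed_forms(rho, alpha, beta)
assert abs(cos_sum(rho, alpha, beta) - c) < 1e-12
assert abs(sin_sum(rho, alpha, beta) - s) < 1e-12
```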
\begin{proof} [Proof of Theorem \ref{main}]The proof is based on the following observations. The first is that we convert products of Chebyshev polynomials into products of $\sin(j\alpha_{s}+(t_{s}+1)\alpha_{s})$ and $\cos(j\alpha_{s}+t_{s}\alpha _{s})$ according to (\ref{Czebysz}). Secondly, we change these products into sums of either cosines, if $n$ is even or zero, or sines, if $n$ is odd, according to the assertion of Proposition \ref{ilocz}. The arguments of these sines and cosines are linear combinations of the arguments of the sines and cosines that participated in the products. The coefficients of these linear combinations are $j\geq0$ and $i_{m}\in\left\{ -1,1\right\} ,$ $m\allowbreak=\allowbreak1,\ldots,n+k.$ Thus we can sum first with respect to $j$ and apply Corollary \ref{suma}. There the r\^{o}le of $\alpha$ is played by $\sum_{s=1}^{k+n}i_{s}\alpha_{s}$ for the chosen combination of the $i$'s, while the r\^{o}le of $\beta$ is played by the similar combination $\sum_{s=1}^{n}i_{s} (t_{s}+1)\alpha_{s}\allowbreak+\allowbreak\sum_{s=n+1}^{n+k}i_{s}t_{s} \alpha_{s}.$ The point is that the sum of such sines or cosines with respect to $j$ is a ratio of two trigonometric expressions. Moreover, all the expressions in the denominators depend only on $\sum_{s=1}^{k+n}i_{s} \alpha_{s},$ i.e. they do not depend on the indices $t_{s}$ (note that the denominators of the sums in Corollary \ref{suma} do not depend on $\beta$). For $\alpha_{s} \in\mathbb{R}$, $t_{s}\in\mathbb{Z}$, $s=1,...,n+k$, $\left\vert \rho\right\vert <1$ we have, depending on the parity of $n$, the following equations.
If $n$ is odd then, \begin{gather} \sum_{j\geq0}\rho^{j}\prod_{s=1}^{n}U_{j+t_{s}}(\cos(\alpha_{s}))\prod _{s=n+1}^{n+k}T_{j+t_{s}}(\cos(\alpha_{s}))=\label{si}\\ \frac{(-1)^{(n+1)/2}}{2^{n+k}\prod_{i=1}^{n}\sin(\alpha_{i})}\sum_{i_{1} \in\{-1,1\}}...\sum_{i_{n+k}\in\{-1,1\}}(-1)^{\sum_{k=1}^{n}(i_{k}+1)/2} \times\nonumber\\ \frac{(\sin(\sum_{s=1}^{n}i_{s}(t_{s}+1)\alpha_{s}+\sum_{s=n+1}^{n+k} i_{s}t_{s}\alpha_{s})-\rho\sin(\sum_{s=1}^{n}i_{s}t_{s}\alpha_{s}+\sum _{s=n+1}^{n+k}i_{s}(t_{s}-1)\alpha_{s}))}{(1-2\rho\cos(\sum_{s=1}^{n+k} i_{s}\alpha_{s})+\rho^{2})},\nonumber \end{gather} while, when $n$ is even or zero, we get:$\allowbreak$\newline \begin{gather} \sum_{j\geq0}\rho^{j}\prod_{s=1}^{n}U_{j+t_{s}}(\cos(\alpha_{s}))\prod _{s=n+1}^{n+k}T_{j+t_{s}}(\cos(\alpha_{s}))=\label{chi}\\ \frac{(-1)^{n/2}}{2^{n+k}\prod_{i=1}^{n}\sin(\alpha_{i})}\sum_{i_{1} \in\{-1,1\}}...\sum_{i_{n+k}\in\{-1,1\}}(-1)^{\sum_{k=1}^{n}(i_{k}+1)/2} \times\nonumber\\ \frac{\cos(\sum_{s=1}^{n}i_{s}(t_{s}+1)\alpha_{s}+\sum_{s=n+1}^{n+k}i_{s} t_{s}\alpha_{s})-\rho\cos(\sum_{s=1}^{n}i_{s}t_{s}\alpha_{s}+\sum _{s=n+1}^{n+k}i_{s}(t_{s}-1)\alpha_{s})}{(1-2\rho\cos(\sum_{s=1}^{n+k} i_{s}\alpha_{s})+\rho^{2})}.\nonumber \end{gather}
To justify these formulas, we use (\ref{Czebysz}) first; then, based on Proposition \ref{ilocz}, we convert the products into sums of sines or cosines (sines if $n$ is odd, cosines if $n$ is even) whose arguments are the following: \begin{gather*} \sum_{s=1}^{n}l_{s}((j+1)\alpha_{s}+t_{s}\alpha_{s})\allowbreak+\allowbreak \sum_{s=n+1}^{n+k}l_{s}(j\alpha_{s}+t_{s}\alpha_{s})\allowbreak\\ =\allowbreak j\sum_{s=1}^{n+k}l_{s}\alpha_{s}+\sum_{s=1}^{n}l_{s} (t_{s}+1)\alpha_{s}+\sum_{s=n+1}^{n+k}l_{s}t_{s}\alpha_{s}. \end{gather*} Then, we change the order of summation and sum over $j$ first. We identify "$\alpha$" with $\sum_{s=1}^{n+k}l_{s}\alpha_{s}$ and "$\beta$" with $\sum_{s=1}^{n}l_{s}(t_{s}+1)\alpha_{s}\allowbreak+\allowbreak\sum _{s=n+1}^{n+k}l_{s}t_{s}\alpha_{s}$ and apply formula (\ref{s_si}) or (\ref{s_g_c}), depending on the parity of $n$.
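The simplest instances of (\ref{si}) and (\ref{chi}) — the cases $(n,k)=(1,0)$ and $(n,k)=(0,1)$ with $t_{1}=0$ — reduce to the classical generating functions $\sum_{j\geq0}U_{j}(x)\rho^{j}=1/(1-2\rho x+\rho^{2})$ and $\sum_{j\geq0}T_{j}(x)\rho^{j}=(1-\rho x)/(1-2\rho x+\rho^{2})$. A quick numerical check of these two special cases (an illustration only):

```python
def gen_sums(x, rho, terms):
    """Accumulate sum_{j>=0} rho^j T_j(x) and sum_{j>=0} rho^j U_j(x)
    using the shared three-term recurrence p_{j+1} = 2x p_j - p_{j-1}."""
    t_prev, t_cur = 1.0, x        # T_0, T_1
    u_prev, u_cur = 1.0, 2 * x    # U_0, U_1
    sum_t, sum_u = t_prev, u_prev
    p = rho
    for _ in range(terms - 1):
        sum_t += p * t_cur
        sum_u += p * u_cur
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
        u_prev, u_cur = u_cur, 2 * x * u_cur - u_prev
        p *= rho
    return sum_t, sum_u

x, rho = 0.4, 0.6
sum_t, sum_u = gen_sums(x, rho, 800)
D = 1 - 2 * rho * x + rho**2
assert abs(sum_t - (1 - rho * x) / D) < 1e-10   # Chebyshev T generating function
assert abs(sum_u - 1 / D) < 1e-10               # Chebyshev U generating function
```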
Now let us analyze the polynomial $w_{n}.$ Notice that the common denominator in both (\ref{si}) and (\ref{chi}) is of the form \begin{gather}
w_{k+n}(\cos(\alpha_{1}),...,\cos(\alpha_{k+n})|\rho)=\label{wkn}\\ \prod_{i_{1}\in\{-1,1\}}...\prod_{i_{k+n}\in\{-1,1\}}(1-2\rho\cos(\sum _{s=1}^{n+k}i_{s}\alpha_{s})+\rho^{2}).\nonumber \end{gather} To get (\ref{wkn}) we argue by induction. Let us replace $n+k$ by $m$ to avoid confusion. For $m\allowbreak=\allowbreak1$ the formula is immediate, and for $m\allowbreak=\allowbreak2$ we recall (\ref{pro2}). Hence (\ref{wkn}) is true for $m\allowbreak=\allowbreak1,2.$
Let us assume that the formula is true for $m\allowbreak=\allowbreak k$ and prove it for $m\allowbreak=\allowbreak k+1.$ Taking $\alpha\allowbreak=\allowbreak\alpha_{k+1}$ and $\beta \allowbreak=\allowbreak\sum_{s=1}^{k}i_{s}\alpha_{s}$ and noting that $i_{k}^{2}\allowbreak=\allowbreak1$ we get: \begin{gather*}
w_{k+1}(\cos(\alpha_{1}),...,\cos(\alpha_{k+1})|\rho)=\\ \prod_{i_{2}\in\{-1,1\}}...\prod_{i_{k}\in\{-1,1\}}((1-2\rho\cos(\sum _{s=1}^{k-1}i_{s}\alpha_{s}+i_{k}(\alpha_{k}-i_{k}\alpha_{k+1}))+\rho^{2})\\ \times(1-2\rho\cos(\sum_{s=1}^{k-1}i_{s}\alpha_{s}+i_{k}(\alpha_{k} +i_{k}\alpha_{k+1}))+\rho^{2}))\\
=w_{k}(\cos(\alpha_{1}),...,\cos(\alpha_{k}+\alpha_{k+1})|\rho)w_{k}
(\cos(\alpha_{1}),...,\cos(\alpha_{k}-\alpha_{k+1})|\rho), \end{gather*} where the last equality holds by the induction assumption. Now it is elementary to see that the polynomials $w_{n}$ satisfy the relationship (\ref{rek}). Similarly, the remarks concerning the symmetry and the degree of the polynomials $w_{n}$ follow directly from (\ref{wkn}).
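The inductive factorization step can be verified numerically. The sketch below assumes the definition (\ref{wkn}) of $w_{m}$ (the product over all $2^{m}$ sign choices, evaluated at $x_{s}=\cos\alpha_{s}$) and checks, under that assumption, the identity $w_{k+1}(\cos\alpha_{1},\dots,\cos\alpha_{k+1}|\rho)=w_{k}(\cos\alpha_{1},\dots,\cos(\alpha_{k}+\alpha_{k+1})|\rho)\,w_{k}(\cos\alpha_{1},\dots,\cos(\alpha_{k}-\alpha_{k+1})|\rho)$:

```python
import math
from itertools import product as iproduct

def w(alphas, rho):
    """w_m evaluated at x_s = cos(alpha_s): the product over all sign choices
    (i_1, ..., i_m) in {-1, 1}^m of  1 - 2*rho*cos(sum_s i_s*alpha_s) + rho^2."""
    val = 1.0
    for signs in iproduct((-1, 1), repeat=len(alphas)):
        arg = sum(i * a for i, a in zip(signs, alphas))
        val *= 1 - 2 * rho * math.cos(arg) + rho**2
    return val

alphas = [0.3, 1.1, 0.7, 0.5]   # alpha_1, ..., alpha_{k+1} with k = 3
rho = 0.4
lhs = w(alphas, rho)
# Splitting the last angle into alpha_k + alpha_{k+1} and alpha_k - alpha_{k+1}
# reproduces exactly the same multiset of factors (cos is even).
rhs = (w(alphas[:-2] + [alphas[-2] + alphas[-1]], rho)
       * w(alphas[:-2] + [alphas[-2] - alphas[-1]], rho))
assert abs(lhs - rhs) < 1e-9
```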
Now, let us multiply both sides of (\ref{si}) and (\ref{chi}) by
$w_{n+k}(x_{1},...,x_{n+k}|\rho)$. We see that this product is equal to the right hand sides of these equalities with the obvious replacement $\cos (\alpha_{s})\rightarrow x_{s},$ $s=1,...,n+k.$ Inspecting (\ref{si}) and (\ref{chi}), we notice that these right hand sides are polynomials of degree $2(2^{n+k-1} -1)+1\allowbreak=\allowbreak2^{n+k}-1$ in $\rho.$ Thus, these polynomials can be recovered by using the well-known formula: \[ p_{n}(x)\allowbreak=\allowbreak\sum_{j=0}^{n}a_{j}x^{j}\allowbreak =\allowbreak\sum_{j=0}^{n}\frac{x^{j}}{j!}\left. \frac{d^{j}}{dx^{j}} p_{n}(x)\right\vert _{x=0}. \] This leads directly to the differentiation of the products of $w_{n+k} (x_{1},...,x_{n+k}|\rho)$ and the right hand side of (\ref{_ktnu}). Now we apply the Leibniz formula: \[ \left. \frac{d^{n}}{dx^{n}}[f(x)g(x)]\right\vert _{x=0}=\sum_{j=0}^{n} \binom{n}{j}\left. \frac{d^{j}}{dx^{j}}f(x)\right\vert _{x=0}\left. \frac{d^{n-j}}{dx^{n-j}}g(x)\right\vert _{x=0}, \] and notice that \[ \left. \frac{d^{k}}{d\rho^{k}}\sum_{j\geq0}\rho^{j}\prod_{s=1}^{n}U_{j+t_{s} }(x_{s})\prod_{s=1+n}^{n+k}T_{j+t_{s}}(x_{s})\right\vert _{\rho=0} =k!\prod_{s=1}^{n}U_{k+t_{s}}(x_{s})\prod_{s=1+n}^{n+k}T_{k+t_{s}}(x_{s}). \] Having this, we get (\ref{formula}) directly. \end{proof}
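The coefficient-extraction argument can be illustrated with truncated power series: the $n$-th derivative at $\rho=0$ of a product of two series is given by the Leibniz formula applied to their Maclaurin coefficients. A small sketch (illustration only, with arbitrary test series):

```python
import math

def poly_mul(f, g, trunc):
    """Multiply two Maclaurin coefficient lists, truncated to degree < trunc."""
    h = [0.0] * trunc
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < trunc:
                h[i + j] += a * b
    return h

def nth_deriv_at_zero(coeffs, n):
    """n-th derivative at 0 of the series with these Maclaurin coefficients."""
    return math.factorial(n) * coeffs[n]

# Two arbitrary test series f, g.
f = [1.0, -2.0, 0.5, 3.0, -1.0]
g = [2.0, 1.0, -0.5, 0.25, 4.0]
n = 4
# Direct: multiply the series, then read off the n-th derivative at 0.
direct = nth_deriv_at_zero(poly_mul(f, g, n + 1), n)
# Leibniz: sum of binomial-weighted products of lower derivatives.
leibniz = sum(math.comb(n, j)
              * nth_deriv_at_zero(f, j) * nth_deriv_at_zero(g, n - j)
              for j in range(n + 1))
assert abs(direct - leibniz) < 1e-9
```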
\begin{proof} [Proof of Theorem \ref{wazny}]The proof consists of several steps. First, we prove that for all $\theta,\varphi\in\mathbb{R}$ we have \begin{equation} \sum_{m=0}^{n}
\genfrac{[}{]}{0pt}{}{n}{m}
_{q}h_{m}(\cos(\theta+\varphi)|q)h_{n-m}(\cos(\theta-\varphi)|q)=\sum _{j=0}^{\left\lfloor n/2\right\rfloor }\frac{(q)_{n}}{(q)_{j}(q)_{n-2j}
}h_{n-2j}(\cos\theta|q)h_{n-2j}(\cos\varphi|q). \label{exKM} \end{equation} This formula follows, firstly from the fact that we have \[
\left. \frac{d^{n}}{d\rho^{n}}W_{1}^{-1}(x|\rho,q)\right\vert _{\rho=0}
=\frac{n!}{(q)_{n}}h_{n}(x|q), \] which follows directly from (\ref{ch1}). Secondly, arguing in a similar way as in the proof of Lemma \ref{2-dim}, we deduce that \begin{align*}
& \left. \frac{d^{n}}{d\rho^{n}}W_{1}^{-1}(\cos(\theta+\varphi)|\rho
,q)W_{1}^{-1}(\cos(\theta-\varphi)|\rho,q)\right\vert _{\rho=0}\\ & =\frac{n!}{(q)_{n}}\sum_{m=0}^{n}
\genfrac{[}{]}{0pt}{}{n}{m}
_{q}h_{m}(\cos(\theta+\varphi)|q)h_{n-m}(\cos(\theta-\varphi)|q). \end{align*} Thirdly, we notice that \[
\frac{1}{W_{1}(\cos(\theta+\varphi)|\rho,q)W_{1}(\cos(\theta-\varphi)|\rho
,q)}\allowbreak=\allowbreak\frac{1}{W_{2}(\cos(\theta),\cos(\varphi)|\rho ,q)}, \] which follows directly from (\ref{w22}).
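The derivative relation above says that $W_{1}^{-1}$ is the generating function $\sum_{n\geq0}h_{n}(x|q)\rho^{n}/(q)_{n}$. A numerical sketch of this (illustration only) follows; it assumes the standard continuous $q$-Hermite recurrence $h_{n+1}(x|q)=2xh_{n}(x|q)-(1-q^{n})h_{n-1}(x|q)$ and $W_{1}(x|\rho,q)=\prod_{k\geq0}(1-2x\rho q^{k}+\rho^{2}q^{2k})$, both of which are assumptions here since the paper's definitions appear elsewhere.

```python
def q_hermite(n, x, q):
    """Continuous q-Hermite h_n(x|q) via the assumed recurrence
    h_{n+1} = 2x h_n - (1 - q^n) h_{n-1}, with h_0 = 1, h_1 = 2x."""
    a, b = 1.0, 2 * x
    if n == 0:
        return a
    for m in range(1, n):
        a, b = b, 2 * x * b - (1 - q**m) * a
    return b

def q_poch(n, q):
    """(q; q)_n = prod_{m=1}^{n} (1 - q^m)."""
    val = 1.0
    for m in range(1, n + 1):
        val *= 1 - q**m
    return val

def w1_inv(x, rho, q, factors=200):
    """1 / prod_{k>=0} (1 - 2 x rho q^k + rho^2 q^{2k}), truncated product."""
    val = 1.0
    for k in range(factors):
        val /= 1 - 2 * x * rho * q**k + rho**2 * q**(2 * k)
    return val

x, rho, q = 0.3, 0.35, 0.5
series = sum(q_hermite(n, x, q) / q_poch(n, q) * rho**n for n in range(120))
assert abs(series - w1_inv(x, rho, q)) < 1e-9
```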
Now, let us calculate \[ \sum_{n\geq0}\frac{\rho^{n}}{(q)_{n}}\sum_{j=0}^{\left\lfloor n/2\right\rfloor
}\frac{(q)_{n}}{(q)_{j}(q)_{n-2j}}h_{n-2j}(\cos(\theta)|q)h_{n-2j}
(\cos(\varphi)|q). \] After changing the order of summation, we get \[ \sum_{j\geq0}\frac{\rho^{2j}}{(q)_{j}}\sum_{n\geq2j}\frac{\rho^{n-2j}
}{(q)_{n-2j}}h_{n-2j}(\cos(\theta)|q)h_{n-2j}(\cos(\varphi)|q)\allowbreak =\allowbreak\frac{1}{(\rho^{2})_{\infty}}\frac{(\rho^{2})_{\infty}}{W_{2}
(\cos(\theta),\cos(\varphi)|\rho,q)}, \] by the binomial and Poisson-Mehler summation theorems. Thus we have proved (\ref{exKM}) as well as (\ref{con2}), at least for $\left\vert q\right\vert <1.$ The formula can easily be extended to all values of $q\neq1$ since both sides are polynomials in $q$. Similarly, we can extend it to all values of $x$ and $y$ by substituting $x$ for $\cos(\theta)$ and $y$ for $\cos(\varphi).$ Now, having proven (\ref{exKM}), we recall the definition of the polynomials
$b_{n}(x|q)$ given in Lemma \ref{1-dim}, above. Recall also that \[
(\frac{1}{q}|\frac{1}{q})_{n}=(-1)^{n}q^{-\binom{n+1}{2}}(q)_{n}, \] and consequently that we have: \[
\genfrac{[}{]}{0pt}{}{n}{j}
_{1/q}=
\genfrac{[}{]}{0pt}{}{n}{j}
_{q}q^{-j(n-j)}. \] Hence, for the left hand side of (\ref{exKM}), we have after changing $q$ to $1/q$ \begin{gather*} \sum_{m=0}^{n}
\genfrac{[}{]}{0pt}{}{n}{m}
_{1/q}h_{m}(\cos(\theta+\varphi)|\frac{1}{q})h_{n-m}(\cos(\theta
-\varphi)|\frac{1}{q})\\ =\sum_{m=0}^{n}
\genfrac{[}{]}{0pt}{}{n}{m}
_{q}q^{-m(n-m)}(-1)^{m}q^{-\binom{m}{2}}\\
\times b_{m}(\cos(\theta+\varphi)|q)(-1)^{n-m}q^{-\binom{n-m}{2}}b_{n-m}
(\cos(\theta-\varphi)|q)\\ =(-1)^{n}q^{-\binom{n}{2}}\sum_{m=0}^{n}
\genfrac{[}{]}{0pt}{}{n}{m}
_{q}b_{m}(\cos(\theta+\varphi)|q)b_{n-m}(\cos(\theta-\varphi)|q). \end{gather*} Now let us consider the right hand side of (\ref{exKM}) and change $q$ to $1/q.$ We have \begin{gather*}
\sum_{j=0}^{\left\lfloor n/2\right\rfloor }\frac{(q^{-1}|q^{-1})_{n}}
{(q^{-1}|q^{-1})_{j}(q^{-1}|q^{-1})_{n-2j}}h_{n-2j}(x|q^{-1})h_{n-2j}
(y|q^{-1})\\ =\sum_{j=0}^{\left\lfloor n/2\right\rfloor }\frac{(q)_{n}(-1)^{n} q^{-\binom{n+1}{2}}}{(q)_{j}(-1)^{j}q^{-\binom{j+1}{2}}(q)_{n-2j} (-1)^{n-2j}q^{-\binom{n-2j+1}{2}}}(-1)^{n-2j}\\
\times q^{-\binom{n-2j}{2}}b_{n-2j}(x|q)(-1)^{n-2j}q^{-\binom{n-2j}{2}
}b_{n-2j}(y|q). \end{gather*} We deduce that (\ref{con1}) is true since we have $\binom{n}{2}+n\allowbreak =\allowbreak\binom{n+1}{2}.$ \end{proof}
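The identity (\ref{exKM}) can also be checked numerically for small $n$. The sketch below is an illustration only; as before, it assumes the standard continuous $q$-Hermite recurrence $h_{n+1}(x|q)=2xh_{n}(x|q)-(1-q^{n})h_{n-1}(x|q)$ for the polynomials $h_{n}$.

```python
import math

def q_hermite(n, x, q):
    """Continuous q-Hermite h_n(x|q), assumed recurrence, h_0 = 1, h_1 = 2x."""
    a, b = 1.0, 2 * x
    if n == 0:
        return a
    for m in range(1, n):
        a, b = b, 2 * x * b - (1 - q**m) * a
    return b

def q_poch(n, q):
    """(q; q)_n."""
    val = 1.0
    for m in range(1, n + 1):
        val *= 1 - q**m
    return val

def q_binom(n, m, q):
    """q-binomial coefficient [n, m]_q."""
    return q_poch(n, q) / (q_poch(m, q) * q_poch(n - m, q))

theta, phi, q = 0.7, 0.4, 0.6
for n in range(7):
    lhs = sum(q_binom(n, m, q)
              * q_hermite(m, math.cos(theta + phi), q)
              * q_hermite(n - m, math.cos(theta - phi), q)
              for m in range(n + 1))
    rhs = sum(q_poch(n, q) / (q_poch(j, q) * q_poch(n - 2 * j, q))
              * q_hermite(n - 2 * j, math.cos(theta), q)
              * q_hermite(n - 2 * j, math.cos(phi), q)
              for j in range(n // 2 + 1))
    assert abs(lhs - rhs) < 1e-9
```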
\end{document}
\begin{document}
\title{Braids with as many full twists as strands realize the braid index
} \author{Peter Feller}
\address{ETH Z\"urich, R\"amistrasse 101, 8092 Z\"urich, Switzerland} \email{peter.feller@math.ch} \author{Diana Hubbard}
\address{Brooklyn College, CUNY, 2900 Bedford Avenue, Brooklyn, NY 11210 USA} \email{diana.hubbard@brooklyn.cuny.edu}
\subjclass[2010]{57M25, 57M27} \keywords{Fractional Dehn twist coefficient, braid groups, braid index, Dehornoy order, Upsilon (knot Floer homology), concordance group homomorphism, homogenization} \begin{abstract}
We characterize the fractional Dehn twist coefficient of a braid in terms of a slope of the homogenization of the Upsilon function, where Upsilon is the function-valued concordance homomorphism defined by Ozsv\'ath, Stipsicz, and Szab\'o. We use this characterization to prove that $n$-braids with fractional Dehn twist coefficient larger than $n-1$ realize the braid index of their closure. As a consequence, we are able to prove a conjecture of Malyutin and Netsvetaev stating that $n$-times twisted braids realize the braid index of their closure. We provide examples that address the optimality of our results. The paper ends with an appendix about the homogenization of knot concordance homomorphisms. \end{abstract} \maketitle \section{Introduction}
A \emph{braid} or \emph{$n$-braid} is an element of \emph{Artin's braid group on $n$-strands} $B_n$~\cite{Artin_TheorieDerZoepfe}, which can be presented as
\[B_n=\left\langle a_1,\cdots,a_{n-1}\;\middle|\;a_ia_j=a_ja_i\text{ for }|i-j|\geq 2,a_ia_{i+1}a_i=a_{i+1}a_ia_{i+1}\right\rangle.\]
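The defining relations can be checked concretely in a linear representation. The sketch below uses the unreduced Burau matrices at a numeric value of the parameter $t$ and verifies both relations in $B_4$; this is a standard illustration of the relations, not a construction specific to this paper.

```python
def burau(i, n, t):
    """Unreduced Burau matrix of the generator a_i in B_n (1-indexed):
    the identity except for the 2x2 block [[1-t, t], [1, 0]] at rows i, i+1."""
    m = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    m[i - 1][i - 1] = 1 - t
    m[i - 1][i] = t
    m[i][i - 1] = 1.0
    m[i][i] = 0.0
    return m

def mat_mul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(len(b)))
             for c in range(len(b[0]))] for r in range(len(a))]

def close(a, b):
    return all(abs(x - y) < 1e-12
               for ra, rb in zip(a, b) for x, y in zip(ra, rb))

n, t = 4, 0.37
a1, a2, a3 = (burau(i, n, t) for i in (1, 2, 3))
# Braid relation: a_i a_{i+1} a_i = a_{i+1} a_i a_{i+1}
assert close(mat_mul(mat_mul(a1, a2), a1), mat_mul(mat_mul(a2, a1), a2))
# Far commutation: a_i a_j = a_j a_i for |i - j| >= 2
assert close(mat_mul(a1, a3), mat_mul(a3, a1))
```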
Our main result about braids connects two notions from different perspectives on braid theory. On one hand, viewing braids as mapping classes of the $n$-punctured closed disk $D_{n}$ leads to the notion of \emph{the fractional Dehn twist coefficient} ${\omega}(\beta)$ of the conjugacy class of a braid $\beta$: a rational number which roughly speaking measures how much the mapping class twists along the boundary of $D_{n}$ as one performs an isotopy to its canonical representative. On the other hand, \emph{links}---oriented and closed smooth $1$-submanifolds of $S^3$ considered up to ambient isotopy---can be studied as the closures of braids; see Figure \ref{fig:braid_closure}.
Indeed, by Alexander's theorem~\cite{Alexander_23_ALemmaOnSystemsOfKnottedCurves} all links arise as closures of braids, making the following well-defined: the \emph{braid index} of a link $L$ is the smallest positive integer $n$ such that there exists an $n$-braid with closure $L$.
It is a long-standing open problem to find an algorithm that determines the braid index of a given link;
compare to Birman and Brendle's survey~\cite[Open Problem 1]{Birman_Brendle_braidssurvey}.
With the exception of certain families (for instance, see~\cite{Murasugi_BraidIndexAlternatingLinks} and~\cite{Franks_Williams_87_BraidsAndTheJonesPolynomial}), little is known about the braid indices of knots and links. One of the most famous results about the braid index is the Morton-Franks-Williams (MFW) inequality, which gives bounds on the braid index in terms of the Jones/HOMFLY-PT polynomial (\cite{Franks_Williams_87_BraidsAndTheJonesPolynomial}, \cite{Morton_SeifertCircles}, \cite{Morton_PolynomialsFromBraids}). In \cite{Birman_Brendle_braidssurvey}, Birman and Brendle observed that this was, to their knowledge, the only ``general result" about the braid index. While the MFW inequality is sharp on all but five of the prime knots with up to ten crossings (\cite{Jones_MFW}), it is not sharp for infinitely many knots and links, and furthermore, Kawamuro showed that the defect between the MFW bound and the braid index can be arbitrarily large (see \cite{Kawamuro_braidindex}, \cite{Kawamuro_KR_MFW}, \cite{Elrifai_thesis}).
We relate the braid index and the fractional Dehn twist coefficient as follows. \begin{theorem}\label{thmintro:FDTCofNonminimalBraidIsBounded}
For any integer $n \geq 2$, every $n$-braid $\beta$ with $|{\omega}(\beta)|>n-1$ realizes the braid index of its closure. In other words, for every $n$-braid $\beta$ such that there exists an $(n-1)$-braid with isotopic link closure, we have $|{\omega}(\beta)| \leq n-1$. \end{theorem}
\begin{figure}
\caption{The closure of a braid.}\label{fig:braid_closure}
\end{figure}
We show in Section \ref{sec:Examples} (see Example~\ref{Ex:elrifai_examples}) that this result determines the braid index for infinitely many examples where the MFW inequality fails to be sharp. Furthermore, in Section~\ref{sec:Examples} we discuss examples, originally discovered by Malyutin and Netsvetaev in~\cite{MalyutinNetsvetaev_03}, of $n$-braids with fractional Dehn twist coefficient $n-2$ that do not realize the braid indices of their closures. These examples show that Theorem~\ref{thmintro:FDTCofNonminimalBraidIsBounded} is very close to optimal, with the possibility that the bound could be improved to $n-2$. The main tool for the proof of Theorem~\ref{thmintro:FDTCofNonminimalBraidIsBounded} is Theorem~\ref{thmintro:FDTCviaUpsilon} given below---a characterization of ${\omega}(\beta)$ in terms of Ozsv\'ath, Stipsicz, and Szab\'o's $\Upsilon$-invariant for knots, which is defined using the Heegaard Floer knot complex $CFK^\infty(K)$. {(\emph{Knots} are links consisting of a single connected component.)} Surprisingly, our proof of Theorem~\ref{thmintro:FDTCofNonminimalBraidIsBounded}, which is a purely 3-dimensional result, uses the concordance properties of the $\Upsilon$-invariant (in other words, its 4-dimensional aspects; see Section~\ref{sec:PropofhomoofUpsilon} and Appendix~\ref{app:homogenizationofconcordancehomos}). Before we discuss Theorem~\ref{thmintro:FDTCviaUpsilon}, we briefly recall a description of the fractional Dehn twist coefficient from Malyutin~\cite{Malyutin_Twistnumber} via a Thurston-type order on the braid group due to Dehornoy~\cite{Dehornoy_WhyAreBraidsOrderable} and we use Theorem~\ref{thmintro:FDTCofNonminimalBraidIsBounded} to resolve a conjecture by Malyutin and Netsvetaev.
A braid $\beta$ is said to be \emph{Dehornoy positive}, denoted by $\beta\succ 1$, if it can be written as a braid word that, for some integer $1\leq i<n$, contains a braid generator $a_i$ but no $a_i^{-1}$ or any generators $a_j^{\pm 1}$ for $j<i$. Dehornoy showed that this gives a well-defined, left invariant, total order $\succ$ on $B_n$ by setting $\alpha\prec\beta$ to mean $\alpha^{-1}\beta\succ1$. The \emph{Dehornoy floor} $\lfloor\beta\rfloor$ is the unique integer $m$ such that $(\Delta^2)^{m+1}\succ\beta\succeq(\Delta^2)^{m}$ where $\Delta^{2} = (a_{1}\cdots a_{n-1})^{n}$ is the full twist on $n$ strands. The fractional Dehn twist coefficient equals the homogenization of the Dehornoy floor, i.e.~for any $\beta$, ~${\omega}(\beta)=\lim_{k\to\infty}\frac{\lfloor\beta^k\rfloor}{k}$; see~\cite{Malyutin_Twistnumber}. Using this description of the fractional Dehn twist coefficient, Theorem~\ref{thmintro:FDTCofNonminimalBraidIsBounded} allows us to conclude the following:
\begin{corollary}[Compare to Conjecture 7.4 in~\cite{MalyutinNetsvetaev_03}]\label{corintro:ConjMalyutinNetsetaev} Fix an integer $n\geq2$. If an $n$-braid $\beta$ satisfies $\Delta^{2n}\preceq\beta$ or $\beta\preceq\Delta^{-2n}$, then the closure of $\beta$ does not arise as the closure of a braid on $n-1$ or fewer strands. \end{corollary}
In~\cite{MalyutinNetsvetaev_03}, Malyutin and Netsvetaev used work of Birman and Menasco in~\cite{BirmanMenasco_StabilizationI} (specifically, their Markov theorem without stabilization) to show that for every $n\geq 2$ there exists a constant $r_n$ such that, if an $n$-braid $\beta$ satisfies $\Delta^{2r_{n}}\preceq\beta$ or $\beta\preceq\Delta^{-2r_{n}}$, then the closure of $\beta$ does not arise as the closure of a braid on $n-1$ or fewer strands. Their proof is based on a counting argument which does not yield the constant $r_n$ explicitly. However, they showed that $r_{n} \geq n-1$ (\cite{MalyutinNetsvetaev_03}, Example 7.5), crucially observing that $r_{n}$ must increase with the number of strands, and they conjectured that $r_{n} = n$ works (see~\cite[Conjecture 7.4]{MalyutinNetsvetaev_03}). Our approach allows us to prove that conjecture; see Corollary~\ref{corintro:ConjMalyutinNetsetaev}. We describe the characterization of ${\omega}(\beta)$ in terms of the homogenization of $\Upsilon$ next.
In~\cite{OSS_2014}, Ozsv\'ath, Stipsicz, and Szab\'o associate to a knot $K$ a piecewise linear function $\Upsilon_K\colon[0,1]\to\mathbb{R}$. Its homogenization is an invariant of braids defined by \[\HU_\beta(t)=\lim_{k\to\infty}\frac{\Upsilon_{\widehat{\beta^k\varepsilon_{\beta^k}}}}{k},\] where $\varepsilon_{\beta^k}$ is a shortest possible (as a word in the Artin generators and their inverses) $n$-braid such that the closure of $\beta^k\varepsilon_{\beta^k}$ is a knot rather than a link. The homogenizations of many (concordance) knot invariants (including Ozsv\'ath and Szab\'o's $\tau$ invariant, which is generalized by $\Upsilon$) are completely determined by the writhe, where the \emph{writhe} ${\rm{wr}}(\beta)$ of a braid $\beta$ equals the exponent sum of its braid word; see~\cite{Brandenbursky_11}. One instance of this is $\HU$ of $n$-braids for $t\leq \frac{2}{n}$: for an $n$-braid $\beta$, we have $\HU_{\beta}(t)=-t\frac{{\rm{wr}}(\beta)}{2}$ for $t\leq\frac{2}{n}$; see~\cite{FellerKrcatovich_16_OnCobBraidIndexAndUpsilon}. Our main result on $\HU$ is that $\widetilde{\Upsilon}_\beta(t)$ is also linear on $[\frac{2}{n},\frac{2}{n-1}]$ and the change of slope at $\frac{2}{n}$ equals ${\omega}(\beta)n$: \begin{theorem}\label{thmintro:FDTCviaUpsilon}
Fix an integer $n\geq 2$. For all $n$-braids $\beta$, we have
\[\widetilde{\Upsilon}_\beta(t)=-t\frac{{\rm{wr}}(\beta)}{2}+{\omega}(\beta) n(t-\frac{2}{n}) \quad\mbox{for } \frac{2}{n}\leq t\leq \min\{\frac{2}{n-1},1\} .\] \end{theorem}
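As a sanity check (not part of the statement), evaluating the right hand side at the endpoint $t=\frac{2}{n}$ recovers the writhe formula valid on $[0,\frac{2}{n}]$:

```latex
\[
\left.\left(-t\frac{{\rm{wr}}(\beta)}{2}
  +{\omega}(\beta)\,n\Bigl(t-\frac{2}{n}\Bigr)\right)\right|_{t=2/n}
  =-\frac{{\rm{wr}}(\beta)}{n},
\]
```

so the two linear pieces of $\HU_\beta$ agree at $t=\frac{2}{n}$, and the theorem identifies the change of slope there as ${\omega}(\beta)\,n$.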
It is known that $\omega(\beta)$ can only take on values in a certain set of rational numbers (see Proposition~\ref{fdtcproperties}) which allows us to conclude in Corollary~\ref{cor:upsilonrational} that the slope change at $\frac{2}{n}$ can also only take on certain rational values. A priori, it does not seem clear that one should even expect $\HU$ to be linear on $[\frac{2}{n},\frac{2}{n-1}]$. In contrast to this, Gambaudo and Ghys studied the homogenization of the $\omega$-signature function, which, properly normalized, is also determined by the writhe on $[0,\frac{2}{n}]$: it agrees with $\HU$ on $[0,\frac{2}{n}]$; see~\cite{GambaudoGhys_BraidsSignatures} and~\cite {FellerKrcatovich_16_OnCobBraidIndexAndUpsilon}. However, in general the homogenization of the $\omega$-signature function behaves non-linearly on $[\frac{2}{n},\frac{2}{n-1}]$; for example, it is not linear on $[\frac{2}{3},1]$ for the $3$-braid $a_1^3a_2^7$ (see ~Example~4.6 in~\cite{FellerKrcatovich_16_OnCobBraidIndexAndUpsilon}).
Finally, we end with a short discussion of a specific corollary of the MFW inequality in order to compare and contrast it with Theorem \ref{thmintro:FDTCofNonminimalBraidIsBounded}. This corollary is the fact that any positive braid that can be written as the product of a single positive full twist with another positive braid realizes the braid index~\cite {Franks_Williams_87_BraidsAndTheJonesPolynomial}. (Recall that a braid is \emph{positive} if it has a word representative that only uses positive powers of the Artin generators.) The reader may ask why, in contrast, our results require the number of twists to increase with the number of strands. The reason is that the positivity condition in the MFW corollary is more restrictive than our conditions. For example, our results can apply to non-positive braids. For instance, an $n$-braid of the form $\Delta^{2n}(a_{2}a_{3}\cdots a_{n-1})^{-k}$ has fractional Dehn twist coefficient $n$ and is neither positive nor even quasipositive for large enough $k$. Furthermore, we point to the existence of a family of positive braids with an increasing number of twists (more precisely, with increasing fractional Dehn twist coefficient) that do not realize the braid index; see Example~\ref{optimalityexample}.
\subsection*{Organization} We describe the structure of the paper and the proofs of our main results. In Section~\ref{sec:FDTC}, we provide background on the fractional Dehn twist coefficient. In Section~\ref{sec:PropofhomoofUpsilon}, we recall properties of $\Upsilon$, and provide properties of its homogenization. In Section~\ref{sec:FDTCviaUpsilon}, we prove Theorem~\ref{thmintro:FDTCviaUpsilon}. We use the description of the fractional Dehn twist coefficient in terms of the Dehornoy floor, a lower bound on $\HU$ for Dehornoy positive braids in terms of the writhe (Lemma~\ref{lemma:beta>1}), the characterization of $\Upsilon$ for torus knots $T_{n,n+1}$, and linearity of $\HU$ on $[0,\frac{2}{n}]$ and $[0,\frac{2}{n-1}]$ for $n$-braids and $(n-1)$-braids, respectively. In Section~\ref{sec:UpsilonandBraidIndex}, we derive Theorem~\ref{thmintro:FDTCofNonminimalBraidIsBounded} from Theorem~\ref{thmintro:FDTCviaUpsilon} by employing the fact that the difference between $\HU$ of an $n$-braid and an $m$-braid with the same closure is bounded by $t\frac{n+m-2}{2}$ (Proposition~\ref{prop:Upsilon_alpha-Upsilon_beta}) and the generalized Jones conjecture as proven by Dynnikov and Prasolov~\cite{DynnikovPrasolov_13} (compare also with~\cite{LaFountainMenasco_14}). In Section~\ref{sec:Examples}, we provide examples that show that Theorem~\ref{thmintro:FDTCofNonminimalBraidIsBounded} is essentially optimal. In Section~\ref{sec:questions}, we collect some questions. Finally, in Appendix~\ref{app:homogenizationofconcordancehomos} we prove properties of homogenizations of concordance homomorphisms that specialize to properties of the homogenization of $\Upsilon$ provided in Section~\ref{sec:PropofhomoofUpsilon}.
\subsection*{Acknowledgments} The first author thanks Benjamin Hennion and Kristian Moi for helpful discussions and the second author thanks David Krcatovich for a helpful conversation about Upsilon. The first author thanks the Max Planck Institute for Mathematics in Bonn for their support and hospitality. The second author was supported in part by NSF RTG grant 1045119. \section{Background on the fractional Dehn twist coefficient}\label{sec:FDTC}
As we will see in more detail towards the end of this section, the fractional Dehn twist coefficient can be defined in many different ways, and in fact can be defined not only for braids but also for mapping classes of general surfaces with boundary. It first appeared in the literature in the work of Gabai and Oertel on essential laminations of $3$-manifolds (see~\cite{Gabai_EssentialLaminations3Manifolds}), though there it is referred to in very different language. The definition that is most useful to us comes from Dehornoy's order on the braid group, and is due to Malyutin in~\cite{Malyutin_Twistnumber}. The advantage of this point of view is that Dehornoy's order provides a concrete characterization of the positivity of a braid in terms of its word in the Artin generators.
Recall from the introduction that a braid $\beta$ is said to be \emph{Dehornoy positive}, denoted by $\beta\succ 1$, if it can be written as a braid word that contains a braid generator $a_i$ for some integer $1\leq i<n$ but no $a_i^{-1}$ and no $a_j^{\pm 1}$ for $j<i$. We then say that $\alpha\prec\beta$ if $\alpha^{-1}\beta\succ1$. In \cite{Dehornoy_94_Braidgroups} (see also \cite{Dehornoy_WhyAreBraidsOrderable}), Dehornoy proved that this is a well-defined left-invariant total order $\succ$ on $B_n$.
While Dehornoy was the first to establish the existence of a left-invariant total order on the braid group, many more orders are now known coming from geometric considerations. In~\cite{Fenn_Orderingbraidgroups}, the five authors give a method for constructing orders on $B_{n}$ that involves comparing the action of braids on diagrams of curves drawn on the punctured disk $D_{n}$. In~\cite{Short_OrderingsmappingclassgroupsafterThurston}, Short and Wiest describe and classify more orderings on $B_{n}$ (originally due to Thurston). These orderings come from equipping $D_{n}$ with a hyperbolic structure, and considering the action of braids on the boundary of its universal cover, viewed in $\mathbb{H}^{2}$, together with its limit points on the circle at infinity. Both of these perspectives can give rise to orders not only on $B_{n}$ but more generally on mapping class groups of surfaces with boundary (see \cite{Rourke_OrderAutomaticMappingClassGroups} and \cite{Short_OrderingsmappingclassgroupsafterThurston}).
While it is possible to prove that Dehornoy's ordering is in fact total and left-invariant using entirely combinatorial and algebraic tools, the order has natural geometric content. Indeed, it can be recovered as an order coming from both the curve diagram perspective and the Thurston perspective, and many of the properties of Dehornoy's order are more or less immediate from the geometric point of view. While the geometric perspective is in some sense more natural, working with Dehornoy's order directly makes many of our computations more straightforward.
Recall that $\Delta^{2} \in B_{n}$ is the element $(a_{1} \cdots a_{n-1})^{n}$; it corresponds to a full twist around the boundary of $D_{n}$ and commutes with every other element in $B_{n}$. Dehornoy's order on $B_{n}$ now allows us to define the following: the \emph{Dehornoy floor} $\lfloor\beta\rfloor$ is the unique integer $m$ such that $(\Delta^2)^{m+1}\succ\beta\succeq(\Delta^2)^{m}$. The intuition here is that the Dehornoy floor gives a measurement of how many positive full twists can be extracted from a braid so that the remainder is still non-negative in the order. We now can define the fractional Dehn twist coefficient of a braid $\beta \in B_{n}$, denoted $\omega(\beta)$, as follows (\cite{Malyutin_Twistnumber}): $${\omega}(\beta)=\lim_{k\to\infty}\frac{\lfloor\beta^k\rfloor}{k}$$
One can prove that this is well-defined in a self-contained way using the fact that Dehornoy's floor is a quasimorphism.
We collect in the following proposition some properties of the fractional Dehn twist coefficient that are relevant for this paper:
\begin{prop}[\cite{Malyutin_Twistnumber}, \cite{ItoKawamuro_OpenBookFoliations}]\label{fdtcproperties} For any $\alpha, \beta \in B_{n}$, we have: \begin{enumerate}[a)]
\item (Quasimorphism) $|\omega(\alpha\beta) - \omega(\alpha) - \omega(\beta) | \leq 1$ \item (Homogeneity) $\omega(\alpha^n) = n\omega(\alpha)$ \item (Behavior under full twists) $\omega(\Delta^{2}\alpha) = \omega(\alpha) + 1$ \item (Conjugacy invariant) $\omega(\alpha) = \omega(\beta\alpha\beta^{-1})$
\item $\omega(\alpha)$ is rational, and in fact $\{\omega(\alpha) | \alpha \in B_{n}\} = \{\frac{p}{q} | \, p \in \mathbb{Z}, q \in \mathbb{Z}, 1 \leq q \leq n\}$.
\end{enumerate}
\end{prop}
Note that Dehornoy's floor is not a conjugacy invariant, but the fractional Dehn twist coefficient is. Properties $(a)-(d)$ of the fractional Dehn twist coefficient can be proved directly from the definition in terms of the Dehornoy floor. In fact, Property $(d)$ is a straightforward consequence of properties $(a)-(b)$. Malyutin's proof of Property $(e)$ requires a different, but equivalent, definition of the fractional Dehn twist coefficient and involves considering cases depending on the Nielsen-Thurston classification of the braid in question.
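The homogenization procedure underlying this definition can be illustrated on a toy quasimorphism: below, $\phi(m)=\lfloor mr\rfloor$ on $(\mathbb{Z},+)$ plays the role of the Dehornoy floor (this is only a model, not the actual floor function on $B_{n}$), and its homogenization recovers $r$, just as homogenizing $\lfloor\cdot\rfloor$ recovers ${\omega}$.

```python
import math
from fractions import Fraction

r = Fraction(3, 7)   # the "rotation number" our toy quasimorphism encodes

def phi(m):
    """Toy quasimorphism on (Z, +): phi(m) = floor(m * r)."""
    return math.floor(m * r)

# Quasimorphism property: the defect |phi(a+b) - phi(a) - phi(b)| is bounded
# (here by 1, since floor(x) + floor(y) <= floor(x+y) <= floor(x) + floor(y) + 1).
defect = max(abs(phi(a + b) - phi(a) - phi(b))
             for a in range(-50, 51) for b in range(-50, 51))
assert defect <= 1

# Homogenization: phi(k*m)/k converges to m*r as k -> infinity.
m, k = 5, 10**6
assert abs(phi(k * m) / k - float(m * r)) < 1e-5
```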
Very briefly, to define the fractional Dehn twist coefficient in this alternate way, one can consider the compactification of the universal cover of $D_{n}$ embedded in $\mathbb{H}^{2}$, use the action of the lift of $\beta$ to this universal cover to define a map $\Theta: B_{n} \to \widetilde{Homeo^{+}(S^{1})}$, and define $\omega(\beta)$ to be the translation number of $\Theta(\beta)$. For a more thorough discussion, see~\cite{Malyutin_Twistnumber}, \cite{ItoKawamuro_OpenBookFoliations}, and~\cite{Plamenevskaya_RightVeering}. For yet another alternate and equivalent definition that demonstrates more clearly that the fractional Dehn twist coefficient is measuring the amount of (signed) twisting a braid realizes around $\partial D_{n}$, see~\cite{HondaKazezMatic_RightVeeringI}, \cite{KazezRoberts_FDTC}, and \cite{ItoKawamuro_OpenBookFoliations}. Both of these alternate definitions generalize easily beyond braids to elements in mapping class groups of surfaces with boundary.
The fractional Dehn twist coefficient has been extensively studied in the context of contact topology and open book decompositions (see for instance \cite{HondaKazezMatic_RightVeeringI}, \cite{HondaKazezMatic_RightVeeringII}, \cite{KazezRoberts_FDTC}, \cite{ItoKawamuro_OpenBookFoliations}, \cite{BaldwinEtnyre_admissiblesurgery}, \cite{HeddenMark_FloerFDTC}) and of course in the context of classical braids (\cite{Malyutin_Twistnumber}, \cite{MalyutinNetsvetaev_03}). Relationships have also been explored between the fractional Dehn twist coefficient and monoids in the mapping class group (\cite{Etnyre_MonoidsMappingClassGroup}, \cite{ItoKawamuro_OnAQuestion}), classical knot theory (\cite{KazezRoberts_FDTC}), and homological invariants of knots and 3-manifolds (\cite{HeddenMark_FloerFDTC}, \cite{Plamenevskaya_RightVeering}) .
\section{The homogenization of Upsilon}\label{sec:PropofhomoofUpsilon}
In this section, we discuss properties of the homogenization of Ozsv\'ath, Stipsicz, and Szab\'o's $\Upsilon$. Rather than recalling the definition of $\Upsilon$ using the $CFK^\infty(K)$ knot Floer complex, we will only recall some of its properties and work with those. This is appropriate since our results would hold for any other invariant that satisfies these properties. While no other such invariants are known as of this writing\footnote{Recently, Grigsby-Wehrli-Licata \cite{GrigsbyLicataWehrli_16} and Lewark-Lobb \cite{LewarkLobb_17_Upsilonlike} defined $\Upsilon$-type invariants using annular Khovanov cohomology and higher $sl_N$-Khovanov-Rozansky cohomologies, respectively. However, neither of these invariants fit the framework of an $\Upsilon$-type invariant as needed here.}, one might hope for such invariants to be found in the future, in a similar way as Ozsv\'ath and Szab\'o's $\tau$ invariant (which is generalized by $\Upsilon$) led to the discovery of invariants with similar properties (e.g.~the Rasmussen $s$-invariant). In addition to the original article~\cite{OSS_2014}, Livingston's note~\cite{Livingston_NotesOnUpsilon} is a good and short reference for the definition and properties of $\Upsilon$.
We delay most of the proofs of the statements in this section to the end of the paper (see Appendix~\ref{app:homogenizationofconcordancehomos}) for the following reasons: first, these proofs are somewhat long but standard arguments using language from knot concordance theory and do not constitute the core of the argument of our main results; second, these proofs are best given in the general setting of homogenization of concordance knot invariants rather than in the specific case of $\Upsilon$, so, for future reference, an independent appendix seems more appropriate.
\subsection*{Background on the concordance homomorphism $\Upsilon$} Recall that two links $K$ and $L$ are called \emph{concordant} if there exists an oriented smooth embedding of a disjoint union of annuli in $S^3\times[0,1]$ such that the oriented boundary is $K\times\{0\}\cup L^{\textrm{rev}}\times\{1\}$, where $L^{\textrm{rev}}$ denotes the result of reversing the orientation of $L$. Knots up to concordance form a group, called the \emph{concordance group}: \[\mathcal{C}=(\{\text{concordance classes of knots}\},\#),\] where $\#$ denotes the operation induced by connected sum of knots. For all knots $K$, the knot $-K$, given by taking the mirror of $K$ and reversing orientation, represents the inverse of the class of $K$ in $\mathcal{C}$.
Ozsv\'ath, Stipsicz, and Szab\'o associate to a knot $K$ (in fact, to its concordance class) a piecewise linear function $\Upsilon_K\colon[0,2]\to\mathbb{R}$, which turns out to be a strong tool in detecting free subgroups and free summands of the concordance group; see~\cite{OSS_2014}. In what follows we consider $\Upsilon_K$ as a function on $[0,1]$ by restriction, without losing any information since $\Upsilon_K(t)=\Upsilon_K(2-t)$ for all $t\in[0,2]$; see Proposition~1.2 in \cite{OSS_2014}.
We summarize all the properties of $\Upsilon$ needed in this paper in the following proposition. For this, recall that the \emph{$3$-genus} or \emph{genus} $g(K)$ of a knot $K$ is the smallest genus among smooth oriented surfaces in $S^3$ with boundary $K$. Similarly, the \emph{smooth $4$-ball genus} or \emph{slice genus} $g_4(K)$ of a knot $K$ is the smallest genus among smoothly embedded surfaces in the $4$-ball $B^4$ with boundary $K\subset S^3=\partial B^4$. For positive coprime integers $p$ and $q$, the knot given as the closure of the $p$-braid $(a_1a_2\cdots a_{p-1})^q$ is denoted $T_{p,q}$ and called the $(p,q)$-\emph{torus knot}.
\begin{prop}[\cite{OSS_2014}]\label{prop:PropOfU} Let $\rm{PL}[0,1]$ denote the group (with respect to addition in the target) of piecewise-linear, $\mathbb{R}$-valued, continuous functions on $[0,1]$. There exists a group homomorphism, the \emph{Upsilon-invariant}, \[\Upsilon\colon\mathcal{C}\to \rm{PL}[0,1]\] that satisfies the following properties: \begin{itemize} \item
\cite[Theorem~1.11]{OSS_2014}: For all knots $K$ and all $t\in[0,1]$, $\left|\Upsilon_K(t)\right|\leq tg_4(K)$.
\item
\cite[Theorem~1.13]{OSS_2014}: For all knots $K$, the slopes of $\Upsilon_K$ are bounded in absolute value by $g(K)$.
\item
\cite[Theorem~1.15 and Proposition~6.3]{OSS_2014}: For positive integers $n$ and $k$, \[\Upsilon_{T_{n,nk+1}}=-tg_4(T_{n,nk+1})=-tg(T_{n,nk+1})=-t\frac{n(n-1)k}{2}\quad\mbox{for } t\leq \frac{2}{n}\quad\mbox{and}\quad\] \[\Upsilon_{T_{n,nk+1}}=-tg_4(T_{n,nk+1})+nk(t-\frac{2}{n})\quad\mbox{for } \frac{2}{n}\leq t\leq \min\{\frac{2}{n-1},1\}.\qed\]
\end{itemize}
\end{prop}
\subsection*{The homogenization of $\Upsilon$} The Upsilon-invariant can be used to construct an invariant of (conjugacy classes of) braids, called the \emph{homogenization of $\Upsilon$}, as follows: \[\HU\colon B_n\to \rm{Cont}[0,1], \beta\mapsto \lim_{k\to\infty}\frac{\Upsilon_{\widehat{\beta^k\varepsilon_{\beta^k}}}}{k},\] where $\varepsilon_{\beta^k}$ is a shortest possible (as a word in the generators $a_i$) $n$-braid such that the closure of $\beta^k\varepsilon_{\beta^k}$ is a knot rather than a link; concretely, $\varepsilon_{\beta^k}$ can be chosen to be the product of at most $n-1$ generators $a_i$. Here $\rm{Cont}[0,1]$ denotes the real-valued continuous functions on $[0,1]$.
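For instance, take $\beta=a_1\in B_2$. For even $k$ the closure of $a_1^{k}$ is a two-component link, so we may choose $\varepsilon_{\beta^k}=a_1$; the closure of $\beta^{2k}\varepsilon_{\beta^{2k}}$ is then the torus knot $T_{2,2k+1}$. Using $\Upsilon_{T_{2,2j+1}}(t)=-tj$ for $t\in[0,1]$ (Proposition~\ref{prop:PropOfU} with $n=2$), we find \[\HU_{a_1}(t)=\lim_{k\to\infty}\frac{\Upsilon_{T_{2,2k+1}}(t)}{2k}=\lim_{k\to\infty}\frac{-tk}{2k}=-\frac{t}{2}=-t\frac{{\rm{wr}}(a_1)}{2}.\]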
In~\cite{Brandenbursky_11}, Brandenbursky studied this construction in a more general context: for any knot invariant $I$ that descends to a homomorphism $I\colon \mathcal{C}\to \mathbb{R}$ with $|I(K)|\leq t_I g_4(K)$ for all knots $K$ and some real constant $t_I$, he showed there is a well-defined (independent of the choice of shortest possible $\varepsilon_{\beta^k}$)
map \[\HI\colon B_n\to \mathbb{R}, \beta\mapsto \lim_{k\to\infty}\frac{I\left(\widehat{\beta^k\varepsilon_{\beta^k}}\right)}{k}.\]
For a fixed $t\in[0,1]$, $\Upsilon$ fits into this setting with $I=\Upsilon(t)$ and $t_{I}=t$ by Proposition~\ref{prop:PropOfU}.
We summarize properties of $\HU$ that hold for every fixed $t\in[0,1]$ and a fixed number of strands $n\geq1$; see Lemma~\ref{lemma:PropofHI} for a proof. \begin{lemma}\label{lem:PropofHU} \begin{enumerate}[I)]
\item\label{item:HUofgenerator} For all $n$-braids $\beta$ and all $1\leq i\leq n-1$, $\left|\HU_{\beta a_i}(t)-\HU_{\beta}(t)\right|\leq \frac{t}{2}$.
\item\label{item:HUhasDefectt(n-1)} For all $n$-braids $\alpha$ and $\beta$, $|\HU_{\alpha\beta}(t)-\HU_{\alpha}(t)-\HU_{\beta}(t)|\leq t(n-1)$. If $\alpha$ and $\beta$ commute, e.g.~if $\alpha$ is the $n$-stranded full twist $\Delta^2$ or a power of $\beta$, then
$\HU_{\alpha\beta}(t)=\HU_{\alpha}(t)+\HU_{\beta}(t)$.
\item \label{item:HUofUnions}If an $n$-braid $\beta$ is given as the disjoint union (see Figure \ref{fig:disjoint_union}) of braids $\beta_1$, \ldots, $\beta_l$ on $n_1$, \ldots , $n_l$ strands, respectively, then $\HU_\beta(t)=\sum_{i=1}^l \HU_{\beta_i}(t)$.\qed \end{enumerate}
\end{lemma}
\begin{figure}
\caption{The disjoint union of the $2$-braid $\beta_{1}$ and the $3$-braid $\beta_{2}$.}
\label{fig:disjoint_union}
\end{figure}
Our proof of Theorem~\ref{thmintro:FDTCofNonminimalBraidIsBounded} will use the characterization of the fractional Dehn twist coefficient in terms of $\HU$ provided in Theorem~\ref{thmintro:FDTCviaUpsilon} (and proved in Section~\ref{sec:FDTCviaUpsilon}), the generalized Jones conjecture as proven by~\cite[Theorem~9]{DynnikovPrasolov_13}, and the following bound on the difference of $\HU$ of braids that have isotopic or concordant closures. The following is Proposition~\ref{prop:HI(alpha)-HI(beta)} for $I=\Upsilon(t)$. \begin{prop}\label{prop:Upsilon_alpha-Upsilon_beta} Fix positive integers $n$ and $m$. If an $n$-braid $\beta$ and an $m$-braid $\alpha$ have isotopic links as their closure (or, more generally, concordant links as their closure), then
\[\left|\HU_\beta(t)-\HU_\alpha(t)\right|\leq t\frac{n-1+m-1}{2}\quad\mbox{for } t\in [0,1].\qed\] \end{prop}
The value of $\Upsilon$ for torus knots $T_{n,kn+1}$ and $t\leq \frac{2}{n}$ (see Proposition~\ref{prop:PropOfU}) implies the following: for an $n$-braid $\beta$, we have $\HU_\beta(t)=-t\frac{{\rm{wr}}(\beta)}{2}$ for $t\leq\frac{2}{n}$; see~\cite[Corollary~4.2]{FellerKrcatovich_16_OnCobBraidIndexAndUpsilon} or Lemma~\ref{lemma:HIwhenSBIissharp}. Combined with Lemma~\ref{lem:PropofHU}.\ref{item:HUofUnions}, we can state this as follows: \begin{lemma}\label{lemma:HUislinearforsmallt} Let $\beta$ be an $n$-braid and let $m\leq n$ be the smallest integer such that $\beta$ is the disjoint union of braids on $m$ or fewer strands. Then, \[\HU_\beta(t)=-t\frac{{\rm{wr}}(\beta)}{2}\quad\mbox{for } t\leq \min\left\{1,\frac{2}{m}\right\}.\qed\] \end{lemma}
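For instance, viewing $a_1$ as an $n$-braid for $n\geq 3$, it is the disjoint union of the $2$-braid $a_1$ and $n-2$ trivial $1$-braids, so $m=2$ and \[\HU_{a_1}(t)=-t\frac{{\rm{wr}}(a_1)}{2}=-\frac{t}{2}\quad\mbox{for } t\leq\min\left\{1,\frac{2}{2}\right\}=1,\] that is, $\HU_{a_1}$ is linear on all of $[0,1]$ rather than merely on $[0,\frac{2}{n}]$.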
Next, we discuss properties of $\HU$ as a function depending on $t$. As a consequence of the fact that the slopes of $\Upsilon$ are bounded by the $3$-genus (see~\cite[Theorem~1.13]{OSS_2014}), one finds that, for a fixed braid, $\HU$ is Lipschitz continuous: \begin{prop}\label{prop:HUisLipschitz} For all $n$-braids $\beta$, we have
\[|\HU_\beta(t)-\HU_\beta(s)|\leq |t-s|\frac{{{\ell}}(\beta)}{2},\] where ${{\ell}}(\beta)$ denotes the minimal number of generators $a_i$ and their inverses needed to write $\beta$. \end{prop} While Proposition~\ref{prop:HUisLipschitz} is not used in the rest of the paper, it brings us to ask about the regularity of $\HU$; see Question~\ref{q:HUisPL}.
\begin{proof}[Proof of Proposition~\ref{prop:HUisLipschitz}] We note that \begin{equation}\label{eq:g<=l/2}g\left(\widehat{\beta^k\varepsilon_{\beta^k}}\right)\leq \frac{{{\ell}}(\beta^k\varepsilon_{\beta^k})-(n-1)}{2}\leq \frac{k{{\ell}}(\beta)+n-1-(n-1)}{2}\end{equation}
since applying the Seifert algorithm to a standard diagram of the closure of an $n$-braid given by a braid word of length $l$ yields a Seifert surface of genus $\frac{l-n+1}{2}$.
Thus,
\begin{align*}\left|\HU_\beta(t)-\HU_\beta(s)\right|
&=\lim_{k\to\infty}\left|\frac{\Upsilon_{\widehat{\beta^k\varepsilon_{\beta^k}}}(t)-\Upsilon_{\widehat{\beta^k\varepsilon_{\beta^k}}}(s)}{k}\right|
\\&\leq\liminf_{k\to\infty}|t-s|\frac{g\left(\widehat{\beta^k\varepsilon_{\beta^k}}\right)}{k}
\\&\leq |t-s|\frac{{{\ell}}(\beta)}{2},\end{align*} where in the second and third line we use that $\Upsilon_K$ is $g(K)$-Lipschitz continuous for all knots $K$ (Proposition~\ref{prop:PropOfU}, \cite[Theorem~1.13]{OSS_2014}) and~\eqref{eq:g<=l/2}, respectively. \end{proof}
\section[FDTC as a slope of the homogenization of Upsilon]{The fractional Dehn twist coefficient as a slope of the homogenization of Upsilon}\label{sec:FDTCviaUpsilon} In this section, we study the homogenization of $\Upsilon$ for a fixed integer $n\geq2$. We describe $\widetilde{\Upsilon}_\beta(t)$ for $t\in[\frac{2}{n},\frac{2}{n-1}]$ and all $n$-braids $\beta$. It turns out that $\widetilde{\Upsilon}_\beta$ is linear on $[\frac{2}{n},\frac{2}{n-1}]$ with slope $\frac{-{\rm{wr}}(\beta)}{2}+n{\omega}(\beta)$, which is the content of Theorem~\ref{thmintro:FDTCviaUpsilon}.
Let $\preceq$ ($\prec$) denote Dehornoy's (strict) total order. \begin{lemma}\label{lemma:beta>1} Let $\beta$ be an $n$-braid. If $\beta\succeq 1$, then $\widetilde{\Upsilon}_\beta(t)\geq-t\frac{{\rm{wr}}(\beta)}{2}$ for $t\leq\frac{2}{n-1}$. \end{lemma} \begin{proof} Without loss of generality, we may and do assume that $\beta$ cannot be written as a braid word without $a_1$ or $a_1^{-1}$, since otherwise $\widetilde{\Upsilon}_\beta(t)=-t\frac{{\rm{wr}}(\beta)}{2}$ for $t\leq\frac{2}{n-1}$ by Lemma~\ref{lemma:HUislinearforsmallt}.
By the definition of $\beta\succ 1$, we have \[\beta=\alpha_0a_1\alpha_1a_1\cdots\alpha_{l-1}a_1\alpha_{l}\] for an integer $l\geq 1$, where the $\alpha_i$ are braids `only involving strands 2 to $n$'; i.e.~the $\alpha_i$ can be given by braid words that do not contain $a_1$ or $a_1^{-1}$.
Let $\beta'$ be the $n$-braid given by $\alpha_0\alpha_1\cdots\alpha_{l-1}\alpha_{l}$. Since $\beta'$ can be obtained from $\beta$ by deleting $l$ generators, $l$ times applying Lemma~\ref{lem:PropofHU}.\ref{item:HUofgenerator} yields
$\left|\widetilde{\Upsilon}_{\beta'}(t)-\widetilde{\Upsilon}_{\beta}(t)\right|\leq t\frac{l}{2}$. Therefore, \[\widetilde{\Upsilon}_{\beta}(t)\geq \widetilde{\Upsilon}_{\beta'}(t)-t\frac{l}{2}=-t\frac{{\rm{wr}}(\beta')}{2}-t\frac{l}{2}=-t\frac{{\rm{wr}}(\beta)}{2}\] for $t\leq \frac{2}{n-1}$, where the first equality uses Lemma~\ref{lemma:HUislinearforsmallt}. \end{proof}
Let $\Delta^2$ denote the $n$-braid $(a_1a_2\cdots a_{n-1})^n$, called the \emph{positive full twist}. We have \begin{equation}\label{eq:HomUpsofFullTwist} \HU_{\Delta^2\beta}=\HU_\beta+\HU_{\Delta^2} =\HU_\beta+\lim_{k\to\infty}\frac{\Upsilon_{T_{n,kn+1}}}{k}
=\HU_\beta+\Upsilon_{T_{n,n+1}},\end{equation} where the first equality holds since $\Delta^2$ is in the center of $B_n$ (see Lemma~\ref{lem:PropofHU}.\ref{item:HUhasDefectt(n-1)}), the second equality follows from choosing $\varepsilon_{(\Delta^2)^k}$ to be $a_1a_2\cdots a_{n-1}$ in the definition of $\HU$ and noting that then the closure of $(\Delta^2)^k\varepsilon_{(\Delta^2)^k}$ is the torus knot $T_{n,kn+1}$, and the third equality is immediate from $\Upsilon_{T_{n,kn+1}}=k\Upsilon_{T_{n,n+1}}$ (see \cite[Proposition~2.2]{FellerKrcatovich_16_OnCobBraidIndexAndUpsilon}). Using~\eqref{eq:HomUpsofFullTwist} we establish the following. \begin{corollary}\label{cor:beta>Delta} Let $\beta$ be an $n$-braid. If $\beta\succeq \Delta^{2m}$, then $\widetilde{\Upsilon}_\beta(t)\geq-t\frac{{\rm{wr}}(\beta)}{2}+mn(t-\frac{2}{n})$ for $t\in[\frac{2}{n},\frac{2}{n-1}]$. \end{corollary} \begin{proof} By definition, $\beta\succeq \Delta^{2m}$ means $\Delta^{-2m}\beta\succeq 1$. For $t\in[\frac{2}{n},\frac{2}{n-1}]$, we calculate \begin{align*} -t\frac{{\rm{wr}}(\Delta^{-2m})}{2}-t\frac{{\rm{wr}}(\beta)}{2}&=-t\frac{{\rm{wr}}(\Delta^{-2m}\beta)}{2}\\&\leq\HU_{\Delta^{-2m}\beta}(t) \\&=\HU_{\Delta^{-2m}}(t)+\HU_{\beta}(t) \\&=-m\Upsilon_{T_{n,n+1}}(t)+\HU_{\beta}(t) \\&=-m\left(-t\frac{n(n-1)}{2}+n(t-\frac{2}{n})\right)+\HU_{\beta}(t) \\&=-t\frac{{\rm{wr}}(\Delta^{-2m})}{2}-nm(t-\frac{2}{n})+\HU_{\beta}(t), \end{align*} where Lemma~\ref{lemma:beta>1} is used in the second line, \eqref{eq:HomUpsofFullTwist} is used in the fourth line, and the value for $\Upsilon_{T_{n,n+1}}$ as provided in~\cite[Proposition~6.3]{OSS_2014} (see Proposition~\ref{prop:PropOfU}) is used in the fifth line. This yields the desired lower bound for $\HU_{\beta}$. \end{proof}
We note that, since $\beta\preceq 1$ if and only if $\beta^{-1}\succeq 1$ and $\HU_{\beta^{-1}}=-\HU_{\beta}$, Lemma~\ref{lemma:beta>1} and Corollary~\ref{cor:beta>Delta} also hold when replacing $\succeq$ and $\geq$ by $\preceq$ and $\leq$, respectively. This allows us to conclude the following:
\begin{prop}\label{prop:HUofbeta} Let $\beta$ be an $n$-braid. Assume $\beta$ has Dehornoy floor $\lfloor\beta\rfloor=m\in\mathbb{Z}$, i.e.~$\Delta^{2m+2}~\succ\beta\succeq \Delta^{2m}$. Then, for $t\in[\frac{2}{n},\frac{2}{n-1}]$, we have \[-t\frac{{\rm{wr}}(\beta)}{2}+(m+1)n(t-\frac{2}{n})\geq\HU_\beta(t)\geq-t\frac{{\rm{wr}}(\beta)}{2}+mn(t-\frac{2}{n}) .\qed\] \end{prop}
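As a consistency check, consider $\beta=\Delta^{2m}$: here $\lfloor\Delta^{2m}\rfloor=m$, ${\rm{wr}}(\Delta^{2m})=mn(n-1)$, and, by~\eqref{eq:HomUpsofFullTwist} and Proposition~\ref{prop:PropOfU}, \[\HU_{\Delta^{2m}}(t)=m\Upsilon_{T_{n,n+1}}(t)=-t\frac{mn(n-1)}{2}+mn\left(t-\frac{2}{n}\right)=-t\frac{{\rm{wr}}(\Delta^{2m})}{2}+mn\left(t-\frac{2}{n}\right)\] for $t\in[\frac{2}{n},\frac{2}{n-1}]$, so the lower bound in Proposition~\ref{prop:HUofbeta} is attained.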
Using the characterization of the fractional Dehn twist coefficient as ${\omega}(\beta)=\lim_{k\to\infty}\frac{\lfloor\beta^k\rfloor}{k}$, Proposition~\ref{prop:HUofbeta} yields Theorem~\ref{thmintro:FDTCviaUpsilon}, which we restate as follows.
\newtheorem*{thmCharOfFDTC}{Theorem~\ref{thmintro:FDTCviaUpsilon}} \begin{thmCharOfFDTC}
Fix an integer $n\geq 2$. For all $n$-braids $\beta$, we have
\[\widetilde{\Upsilon}_\beta(t)=\left\{\begin{array}{c} -t\frac{{\rm{wr}}(\beta)}{2}\quad\mbox{for } t\leq \frac{2}{n}\\ -t\frac{{\rm{wr}}(\beta)}{2}+{\omega}(\beta) n(t-\frac{2}{n}) \quad\mbox{for } \frac{2}{n}\leq t\leq \frac{2}{n-1}
\end{array}\right..\] \end{thmCharOfFDTC} In other words, $\widetilde{\Upsilon}_\beta(t)$ is linear on $[0,\frac{2}{n}]$ and $[\frac{2}{n},\frac{2}{n-1}]$ with change of slope equal to ${\omega}(\beta)n$ at $\frac{2}{n}$.
\begin{proof}[Proof of Theorem~\ref{thmintro:FDTCviaUpsilon}] By Lemma~\ref{lemma:HUislinearforsmallt}, $\HU_\beta(t)$ is linear on $[0,\frac{2}{n}]$ with slope $-\frac{{\rm{wr}}(\beta)}{2}$. Thus, we only discuss the case $t\in [\frac{2}{n},\frac{2}{n-1}]$. For any integer $k>0$, we have \begin{equation}\label{eq:HUofbeta^k} \frac{-t\frac{{\rm{wr}}(\beta^k)}{2}+(\lfloor\beta^k\rfloor+1)n(t-\frac{2}{n})}{k}\geq\frac{\HU_{\beta^k}(t)}{k} \geq\frac{-t\frac{{\rm{wr}}(\beta^k)}{2}+\lfloor\beta^k\rfloor n(t-\frac{2}{n})}{k},\end{equation} by Proposition~\ref{prop:HUofbeta}.
Note that the writhe of a braid is homogeneous and, by construction, $\HU$ is homogeneous, i.e.~${\rm{wr}}(\beta^k)=k{\rm{wr}}(\beta)$ and $\HU_{\beta^k}=k\HU_{\beta}$, respectively, for all integers $k$ and all braids $\beta$. With this we rewrite~\eqref{eq:HUofbeta^k} as \[-t\frac{{\rm{wr}}(\beta)}{2}+\frac{(\lfloor\beta^k\rfloor)n(t-\frac{2}{n})}{k}+\frac{n(t-\frac{2}{n})}{k}\geq\HU_{\beta}(t)\geq-t\frac{{\rm{wr}}(\beta)}{2}+\frac{\lfloor\beta^k\rfloor n(t-\frac{2}{n})}{k},\] from which the result follows by taking the limit $k\to\infty$. \end{proof}
Theorem \ref{thmintro:FDTCviaUpsilon} combined with property (e) of Proposition \ref{fdtcproperties} immediately yields:
\begin{corollary}\label{cor:upsilonrational} For every braid group $B_{n}$, the set of all possible changes in slope of $\HU(t)$ at $\frac{2}{n}$ is precisely $\left\{\frac{np}{q} \;\middle|\; p, q \in \mathbb{Z},\, 1 \leq q \leq n\right\}$. Each of these values is realized by some braid in $B_{n}$.\qed \end{corollary} For instance, the $4$-braid $A = a_{1}a_{2}a_{3}a_{3}$ has fractional Dehn twist coefficient $\frac{1}{3}$ (since one can first see using braid relations that $A^{3} = \Delta^{2}$, and then apply (b) and (c) from Proposition \ref{fdtcproperties}), and so $\HU_{A}(t)$ has slope change $\frac{4}{3}$ at $\frac{1}{2}$. The $5$-braid $B = a_{1}a_{2}a_{3}a_{4}a_{1}a_{2}$ also has fractional Dehn twist coefficient $\frac{1}{3}$, and so $\HU_{B}(t)$ has slope change $\frac{5}{3}$ at $\frac{2}{5}$. Here we calculate the fractional Dehn twist coefficient of $B$ by first observing that $\omega(a_{1}a_{2}a_{3}a_{4}) \leq \omega(B) \leq \omega((a_{1}a_{2}a_{3}a_{4})^2)$ (an application of Lemma 5.2 in~\cite{Malyutin_Twistnumber}), which implies that $\frac{1}{5} \leq \omega(B) \leq \frac{2}{5}$ as $(a_{1}a_{2}a_{3}a_{4})^5 = \Delta^{2}$. Then combining the fact that $B$ is a pseudo-Anosov braid~\cite{Ham_Song_PseudoAnosov5Braid} with Malyutin's restrictions in~\cite{Malyutin_Twistnumber} on which values of the fractional Dehn twist coefficient are realized by pseudo-Anosov braids yields the calculation. Notice that Corollary~\ref{cor:upsilonrational} is in contrast to the situation for $\Upsilon$, which only has integral slopes (and hence only integral changes in slope).
\section{Homogenization of Upsilon and braid index}\label{sec:UpsilonandBraidIndex} Based on the characterization of ${\omega}$ in terms of the slope of $\HU$ (Theorem~\ref{thmintro:FDTCviaUpsilon}), we derive Theorem~\ref{thmintro:FDTCofNonminimalBraidIsBounded} about the braid index and ${\omega}$. A key element of our proof is the generalized Jones Conjecture as proven by Dynnikov and Prasolov~\cite[Theorem~9]{DynnikovPrasolov_13}, which we quote here for reference:
\newtheorem*{GeneralizedJonesConjecture}{Theorem \cite[Theorem~9]{DynnikovPrasolov_13}}
\begin{GeneralizedJonesConjecture} Suppose braids $\beta_{1} \in B_{m}$ and $\beta_{2} \in B_{n}$ represent the same class of oriented links and $\beta_{1}$ has the smallest possible number of strands in that class. Then
$$ | {\rm{wr}}(\beta_{2}) - {\rm{wr}}(\beta_{1}) | \leq n-m.$$ \end{GeneralizedJonesConjecture}
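For instance, equality holds under positive Markov stabilization: if the closure of $\beta_{1}\in B_{m}$ is stabilized to the $n$-braid \[\beta_{2}=\beta_{1}a_{m}a_{m+1}\cdots a_{n-1}\in B_{n},\] then $\beta_{2}$ has the same closure up to isotopy and ${\rm{wr}}(\beta_{2})-{\rm{wr}}(\beta_{1})=n-m$, so the bound above is sharp.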
We now recall Theorem~\ref{thmintro:FDTCofNonminimalBraidIsBounded} before proving it. \newtheorem*{thmFDTCandBraidIndex}{Theorem~\ref{thmintro:FDTCofNonminimalBraidIsBounded}} \begin{thmFDTCandBraidIndex}
Fix an integer $n\geq 2$. For any $n$-braid $\beta$ such that there exists an $(n-1)$-braid with isotopic closure, we have $|{\omega}(\beta)|\leq n-1$.
\end{thmFDTCandBraidIndex}
\begin{proof} Let $\alpha$ be an $m$-braid such that $\alpha$ and $\beta$ have the same closure, where $m\leq n-1$ is the braid index of the closure of $\beta$.
By the generalized Jones Conjecture as proven by Dynnikov and Prasolov~\cite[Theorem~9]{DynnikovPrasolov_13}, we have $|{\rm{wr}}(\beta)-{\rm{wr}}(\alpha)|\leq n-m$.
We use Proposition~\ref{prop:Upsilon_alpha-Upsilon_beta} with $t=\frac{2}{n-1}$ to find
\begin{align*}\left|\left(-{\rm{wr}}(\beta)+2n{\omega}(\beta)(1-\frac{n-1}{n})\right)+{\rm{wr}}(\alpha)\right|
&=\left|2t^{-1}\left(\HU_\beta(t)-\HU_\alpha(t)\right)\right| \\&\leq n+m-2,\end{align*} where the equality and the inequality are given by Theorem~\ref{thmintro:FDTCviaUpsilon} and Proposition~\ref{prop:Upsilon_alpha-Upsilon_beta}, respectively. Therefore, we have
\[2|{\omega}(\beta)|\leq |{\rm{wr}}(\alpha)-{\rm{wr}}(\beta)|+n+m-2\leq 2n-2,\] as claimed. \end{proof} \begin{Remark} In terms of $\HU$, Theorem~\ref{thmintro:FDTCofNonminimalBraidIsBounded} states that any $n$-braid for which the absolute value of the slope change of $\HU$ at $\frac{2}{n}$ is strictly larger than $n(n-1)$ realizes the braid index of its closure. \end{Remark} \begin{corollary}(Compare to Conjecture 7.4 of~\cite{MalyutinNetsvetaev_03}) Fix an integer $n\geq2$. If an $n$-braid $\beta$ satisfies $\Delta^{2n}\preceq\beta$ or $\beta\preceq\Delta^{-2n}$, then the closure of $\beta$ does not arise as the closure of a braid on $n-1$ or fewer strands. \end{corollary} \begin{proof} If $\Delta^{2n}\preceq\beta$, then $\Delta^{2nk}\preceq\beta^k$ for all positive integers $k$, and, thus, ${\omega}(\beta)\geq n>n-1$. Similarly, if $\beta\preceq\Delta^{-2n}$, then $\beta^k\preceq\Delta^{-2nk}\prec\Delta^{(-2n)k+2}$ for all positive integers $k$, and, thus, ${\omega}(\beta)\leq -n<-(n-1)$. Consequently, the corollary follows from Theorem~\ref{thmintro:FDTCofNonminimalBraidIsBounded}. \end{proof}
\section{Examples and Optimality}\label{sec:Examples}
The following example shows that Theorem \ref{thmintro:FDTCofNonminimalBraidIsBounded} is (very close to) optimal.
\begin{Example}\label{optimalityexample}
For positive integers $n,m\geq 2$, let $\beta_{n,m}$ be the $n$-braid $(\delta\delta^\Delta)^{m-1}\delta$, where \[\delta=a_1a_2\cdots a_{n-1}\quad\mbox{and}\quad\delta^\Delta=a_{n-1}a_{n-2}\cdots a_1.\] We calculate below that ${\omega}(\beta_{n,m})=m-1$. (Note that this should intuitively be clear, since in $\beta_{n,m}$ the first strand is wrapping $m-1$ times around the rest.)
It was observed in~\cite{MalyutinNetsvetaev_03} that the closures of $\beta_{n,m}$ and $\beta_{m,n}$ are isotopic (briefly, their observation was that these are the same link with respect to different braid axes: see Figure 2 in~\cite{MalyutinNetsvetaev_03}); thus, when $n>m$, the braid $\beta_{n,m}$ does not realize the braid index of its closure. In particular, $\beta_{n,n-1}$ is an $n$-braid with fractional Dehn twist coefficient $n-2$ that does not realize the braid index of its closure. On the other hand, Theorem~\ref{thmintro:FDTCofNonminimalBraidIsBounded} implies that, if $n<m$, then $\beta_{n,m}$ does realize the braid index of its closure. This leaves open the question of whether $\beta_{n,n}$ realizes the braid index of its closure. It turns out that this is the case; see Proposition~\ref{prop:betanm} below.
To show that ${\omega}(\beta_{n,m})=m-1$, we rewrite $\beta_{n,m}$ as follows: \[\beta_{n,m}=\Delta^{2m-2}\Delta_{2,\cdots,n}^{-2m+2}\delta,\] where by $\Delta^2_{2,\cdots,n}$ we mean the $n$-braid $(a_{2}\cdots a_{n-1})^{n-1}$ (that is, it is the full twist on the last $n-1$ strands). Similarly, we denote by $\Delta^2_{1,\cdots,n-1}$ the $n$-braid $(a_{1}\cdots a_{n-2})^{n-1}$ (that is, it is the full twist on the first $n-1$ strands), and note that $\Delta^{2l}_{2,\cdots,n}\delta=\delta\Delta^{2l}_{1,\cdots,n-1}$ for all integers $l$. For all positive integers $k$, we calculate \begin{align*}(\Delta^{(2m-2)k})^{-1}\beta_{n,m}^k&=(\Delta_{2,\cdots,n}^{-2m+2}\delta)^k\succ1\quad\mbox{and}\quad\\ (\Delta^{(2m-2)k+2})^{-1}\beta_{n,m}^k&=\Delta^{-2}(\Delta_{2,\cdots,n}^{-2m+2}\delta)^k
\\&=\Delta^{-2}(\delta\Delta_{1,\cdots,n-1}^{-2m+2})^{k} \\&=\Delta^{-2}\delta(\Delta_{1,\cdots,n-1}^{-2m+2}\delta)^{k-1}\Delta_{1,\cdots,n-1}^{-2m+2} \\&\prec 1,\end{align*} where in the last line we use that $\Delta^{-2}\delta$ and $\Delta_{1,\cdots,n-1}^{-2m+2}\delta$ can be written as a braid word containing $a_1^{-1}$ but no $a_1$. Consequently, we have \[\Delta^{2(m-1)k+2}\succ\beta_{n,m}^k\succ\Delta^{2(m-1)k}\] for all positive integers $k$ and, thus, ${\omega}(\beta_{n,m})=m-1$.
\end{Example} We remind the reader that a braid is called \emph{positive} if it can be written as a word containing only the generators $a_i$ (and no $a_i^{-1}$). Similarly, a braid is called \emph{quasipositive} if it can be written as a word in conjugates of the generators $a_i$. A knot is called \emph{quasipositive} if it arises as the closure of a quasipositive braid. \begin{prop}\label{prop:betanm} If a knot $K$ is the closure of an $n$-braid of the form
\[\alpha_1\delta\beta_1\delta^\Delta\alpha_2\delta\beta_2\cdots\alpha_{n-1}\delta\beta_{n-1}\delta^\Delta\alpha_{n}\delta\beta_{n},\]
where the $\alpha_j$ and $\beta_j$ are (possibly trivial) quasipositive $n$-braids,
then $K$ has braid index $n$.
In fact, any quasipositive knot $K'$ (more generally, any knot $K'$ that is the closure of a braid on which the slice-Bennequin inequality is sharp) concordant to $K$ has braid index at least $n$. \end{prop} Here the slice-Bennequin inequality being sharp on an $n$-braid $\beta$ whose closure is a knot means $\frac{{\rm{wr}}(\beta)-(n-1)}{2}=g_4\left(\widehat{\beta}\right)$. In particular, one has $g_4\left(\widehat{\beta}\right)=\tau\left(\widehat{\beta}\right)$ by~\cite[Corollary~11]{Livingston_Comp}, where $\tau$ denotes Ozsv\'ath and Szab\'o's concordance homomorphism introduced in~\cite{OzsvathSzabo_03_KFHandthefourballgenus}. \begin{Remark} The proof of Proposition~\ref{prop:betanm} uses $\Upsilon$ rather than $\HU$. It is in spirit closer to~\cite{FellerKrcatovich_16_OnCobBraidIndexAndUpsilon} (in particular, to the proof of~\cite[Theorem~1.3]{FellerKrcatovich_16_OnCobBraidIndexAndUpsilon}), where $\Upsilon$ was used to understand cobordism distance and braid index of positive braids and $\HU$ was only discussed to make connections to the signature clearer. In contrast, the main results in this article use $\HU$, which not only makes the connection to ${\omega}$ possible, but also allows for much shorter proofs (once the formal properties of $\HU$ are established) and for the treatment of links (rather than just knots). However, using $\HU$ comes at the cost of no longer being able to treat some examples; in particular $\beta_{n,n}$, which realizes the braid index of its closure by Proposition~\ref{prop:betanm}, but since $\omega(\beta_{n,n})=n-1$ this does not follow from Theorem~\ref{thmintro:FDTCofNonminimalBraidIsBounded}. \end{Remark} \begin{proof}[Proof of Proposition~\ref{prop:betanm}] We show that the first singularity $t_0>0$ of $\Upsilon_K$ is strictly smaller than $\frac{2}{n-1}$.
This suffices since for quasipositive knots (or more generally knots that arise as the closure of braids on which the slice-Bennequin inequality is sharp) the braid index is bounded below by $\frac{2}{t_0}$; see~\cite[Lemma~3.4 and Proposition~3.7]{FellerKrcatovich_16_OnCobBraidIndexAndUpsilon}.
Let $g$ denote the smooth $4$-ball genus $g_4(K)=\tau(K)$ of $K$ and let $L$ denote the knot obtained as the closure of the $n$-braid $\beta_{n,n}=(\delta\delta^\Delta)^{n-1}\delta$. For all knots, the function $\Upsilon$ equals $-\tau t$ for small enough $t$; see~\cite[Proposition~1.6]{OSS_2014}. So, we know that $\Upsilon_K(t)=-gt$ for small $t$. We will show that $\Upsilon_K(t)>-gt$ for $\frac{2}{n-1}\geq t>\frac{2(n-1)}{(n-1)^2+1}$. From this we conclude that $t_0$ is in $(0,\frac{2(n-1)}{(n-1)^2+1}]\subset(0,\frac{2}{n-1})$.
To show that the first singularity $t_0>0$ of $\Upsilon_K$ is strictly smaller than $\frac{2}{n-1}$ we use concordance properties of $\Upsilon$ established in~\cite{OSS_2014} and the following two `short' cobordisms:
\begin{Claim}\label{Claim:g4K-L}
There exists a cobordism of genus $g_4(K)-g_4(L)=g-(n-1)^2$ between $K$ and $L$. In other words, $g_4(K\#(-L))=g-(n-1)^2$. \end{Claim}
\begin{Claim}\label{Claim:L=diffoftorusknots} There exists a cobordism of genus $n-2$ between \[L\quad\mbox{and}\quad T_{n,(n-1)n+1}\#(-T_{n-1,(n-1)(n-1)+1}).\] \end{Claim}
We postpone the proof of these claims to the end of Appendix~\ref{app:homogenizationofconcordancehomos} as they use similar ideas as the proofs there.
Fix $t\in [\frac{2}{n},\frac{2}{n-1}]$. Using the value of $\Upsilon_{T_{n,(n-1)n+1}}(t)$ and $\Upsilon_{T_{n-1,(n-1)(n-1)+1}}(t)$ provided in Proposition~\ref{prop:PropOfU}, we bound $\Upsilon_K$ from below as follows.
\begin{align*}
\Upsilon_K(t)&\geq& \Upsilon_L(t)-t(g-(n-1)^2)&\\
& \geq& \Upsilon_{T_{n,(n-1)n+1}\#(-T_{n-1,(n-1)(n-1)+1})}(t)-t(g-(n-1)^2)-t(n-2)&\\
&=&\Upsilon_{T_{n,(n-1)n+1}}(t)-\Upsilon_{T_{n-1,(n-1)(n-1)+1}}(t)-t(g-(n-1)^2)-t(n-2)&\\
&=&-t(n-1)\frac{n(n-1)}{2}+(n-1)n(t-\frac{2}{n})\\&&+t(n-1)\frac{(n-1)(n-2)}{2}-t(g-(n-1)^2)-t(n-2)&\\
&=&-t\left((n-1)\frac{n(n-1)}{2}-(n-1)\frac{(n-1)(n-2)}{2}+g-(n-1)^2\right)\\&&+(n-1)n(t-\frac{2}{n})-t(n-2)&\\
&=&-tg+(n-1)n(t-\frac{2}{n})-t(n-2)&\\
&=&-tg+t\left((n-1)n-(n-2)\right)-(n-1)n\frac{2}{n}&\\
&=&-tg+t((n-1)^2+1)-2(n-1),&
\end{align*}
where we used Claim~\ref{Claim:g4K-L} and Claim~\ref{Claim:L=diffoftorusknots} in the first and second line, respectively.
This concludes the proof since $t((n-1)^2+1)-2(n-1)>0$ for $t>\frac{2(n-1)}{(n-1)^2+1}$ and so
$\Upsilon_K(t)>-tg$.
\end{proof}
We now observe that the bound in Theorem~\ref{thmintro:FDTCofNonminimalBraidIsBounded} is larger than necessary for $3$-braids. This leads us to ask Question~\ref{q:n-2} in Section~\ref{sec:questions}.
\begin{prop}\label{prop:n-2} Any $3$-braid $\beta$ such that $|\omega(\beta)| > 1 = 3-2 = n-2$ realizes the braid index of its closure. \end{prop}
\begin{proof}
We prove the contrapositive. Consider a $3$-braid $\beta$ such that the closure of $\beta$ admits a braid representative of strand number one or two. By the classification of $3$-braids in~\cite{BirmanMenasco_StudyingLinksViaClosedBraidsIII}, $\beta$ is conjugate to either $a_{1}a_{2}, a_{1}^{-1}a_{2}^{-1}, a_{1}a_{2}^{-1}$ (if it is a representative of the unknot) or $a_{1}^{k}a_{2}$ or $a_{1}^{k}a_{2}^{-1}$ for $k \in \mathbb{Z}$ (if it is a representative of a $(2,k)$ torus knot or link). One consequence of the properties listed in Proposition~\ref{fdtcproperties} (see~\cite{Malyutin_Twistnumber}, Proposition 13.1) is that if a braid $\alpha \in B_{n}$ is represented by a word containing precisely $r$ occurrences of the generator $a_{i}$ and $s$ occurrences of the generator $a_{i}^{-1}$ for some $i \in \{1, \ldots, n-1\}$, then $$ -s \leq \omega(\alpha) \leq r.$$
Notice that each of the braid words listed above contains at most one negative power and at most one positive power of $a_{2}$, and hence $-1$ is a lower bound and $1$ is an upper bound for each of their fractional Dehn twist coefficients. The fractional Dehn twist coefficient is invariant under conjugation, and so this implies that $|\omega(\beta)| \leq 1$.
\end{proof}
We finish this section by showing that Theorem \ref{thmintro:FDTCofNonminimalBraidIsBounded} determines the braid index in infinitely many cases where the Morton-Franks-Williams inequality (\cite{Franks_Williams_87_BraidsAndTheJonesPolynomial}, \cite{Morton_SeifertCircles}, \cite{Morton_PolynomialsFromBraids}) is not sharp. Elrifai in \cite{Elrifai_thesis} (see also~\cite{Kawamuro_KR_MFW}) proved that for all knots and links of braid index three, the Morton-Franks-Williams inequality is sharp except for the families of knots and links which are closures of $$ K_{k} = (a_{1}a_{2}a_{2}a_{1})^{2k} a_{1}a_{2}^{-2k-1}$$ and $$ L_{k} = (a_{1}a_{2}a_{2}a_{1})^{2k+1} a_{1}a_{2}^{-2k+1}$$ for $k$ a positive integer.
\begin{Example}\label{Ex:elrifai_examples} The families of $3$-braids $K_{k}$ and $L_{k}$ (for $k \geq 2$ and $k \geq 1$, respectively) have fractional Dehn twist coefficients strictly larger than two (and hence have braid index three by Theorem \ref{thmintro:FDTCofNonminimalBraidIsBounded}).
To see this, we rewrite $K_{k}$ as $$K_{k} = (a_{1}a_{2}a_{2}a_{1})^{2k} a_{1}a_{2}^{-2k-1} = (\Delta^{2} a_{2}^{-2})^{2k} a_{1}a_{2}^{-2k-1} = (\Delta^{2})^{2k}a_{2}^{-4k}a_{1}a_{2}^{-2k-1}$$ and similarly $L_{k}$ as $$L_{k} = (a_{1}a_{2}a_{2}a_{1})^{2k+1} a_{1}a_{2}^{-2k+1} = (\Delta^{2})^{2k+1}a_{2}^{-4k-2}a_{1}a_{2}^{-2k+1}.$$ Again using Proposition 13.1 from \cite{Malyutin_Twistnumber}, we see that $\omega(a_{2}^{-4k}a_{1}a_{2}^{-2k-1}) = 0 = \omega(a_{2}^{-4k-2}a_{1}a_{2}^{-2k+1})$ as each has only positive powers of $a_{1}$ and only negative powers of $a_{2}$. Finally, by Property (c) from Proposition \ref{fdtcproperties}, we can conclude that $\omega(K_{k}) = 2k$ and $\omega(L_{k}) = 2k+1$. \end{Example}
\section{Questions}\label{sec:questions}
By Theorem~\ref{thmintro:FDTCviaUpsilon} and Corollary~\ref{cor:upsilonrational}, we know more about $\HU$ than that it is Lipschitz continuous (Proposition~\ref{prop:HUisLipschitz}): $\HU$ is piecewise linear with rational slopes on $[0,\frac{2}{n-1}]$. This brings us to ask:
\begin{question}\label{q:HUisPL} Is $\HU$ piecewise linear for all $n$-braids and are all the slopes rational? \end{question}
We remark that a positive answer to the following question about the $\Upsilon$-invariant of closures of braids of a fixed number of strands would imply that $\HU$ is always piecewise linear. \begin{question}Fix an integer $n\geq 2$. Given a knot $K$ that arises as the closure of an $n$-braid, denote by $S$ the subset of $[0,1]$ on which the piecewise-linear function $\Upsilon_K(t)$ is not smooth. Is $S$ contained in
\[\left\{\frac{2p}{q}\;|\;\text{$p$ and $q$ are positive integers such that }q\leq n\right\}?\] \end{question}
Finally, Proposition~\ref{prop:n-2} motivates the following:
\begin{Question}\label{q:n-2} Is it true that for any $n$-braid $\beta$ such that $|\omega(\beta)| > n-2$, $\beta$ realizes the braid index of its closure? \end{Question}
Example~\ref{optimalityexample} shows that this would be the lowest possible bound for $\omega(\beta)$, as $\beta_{n,n-1}$ is an $n$-braid that does not realize the braid index of its closure and $\omega(\beta_{n,n-1}) = n-2$. Note that just as Theorem~\ref{thmintro:FDTCofNonminimalBraidIsBounded} implied Corollary~\ref{corintro:ConjMalyutinNetsetaev}, a positive answer to Question \ref{q:n-2} would imply that if an $n$-braid $\beta$ satisfies $\Delta^{2(n-1)}\preceq\beta$ or $\beta\preceq\Delta^{-2(n-1)}$, then the closure of $\beta$ does not arise as the closure of a braid on $n-1$ or fewer strands.
\appendix \section{Homogenization of concordance homomorphisms}\label{app:homogenizationofconcordancehomos} In this section, we establish some basic properties of the homogenizations of concordance homomorphisms. These properties do not seem to have been established in the literature so far, although some have been claimed without proof in~\cite{FellerKrcatovich_16_OnCobBraidIndexAndUpsilon}, and so we provide proofs for completeness. Our proofs are close in spirit to the constructions of Baader~\cite{Baader_07_AsymptRasmussenInv} and Brandenbursky~\cite{Brandenbursky_11}. More concretely, the proofs of Lemma~\ref{lemma:PropofHI} and Proposition~\ref{prop:HI(alpha)-HI(beta)} are based on the following fundamental observation: given two $n$-braids $\alpha$ and $\beta$, the closure of $\alpha\beta$ and the connected sum of the closures of $\alpha$ and $\beta$ are related by a connected cobordism of Euler characteristic $n-1$. The proof of Lemma~\ref{lemma:HIwhenSBIissharp} is a variation of Rudolph's proof for the slice-Bennequin inequality given in~\cite[Lemma~4]{rudolph_QPasobstruction}.
In the entire section, $I$ denotes a knot invariant that
descends to a homomorphism $I\colon \mathcal{C}\to \mathbb{R}$ with $|I(K)|\leq t_I g_4(K)$ for all knots $K$ and some real constant~$t_I$. Here $\mathcal{C}$ denotes the \emph{concordance group}---knots up to concordance with group operation given by connected sum (denoted by $\#$); in particular, for every knot $K$, the knot given as the mirror image of $K$ with reversed orientation (denoted by $-K$) represents the class of the inverse of the class of $K$.
Fix a positive integer $n$ and, for every $n$-braid $\beta$, choose an $n$-braid $\varepsilon_{\beta}$ of bounded (independent of $\beta$) length such that the closure of $\beta\varepsilon_{\beta}$ is a knot. In fact, $\varepsilon_{\beta}$ can be chosen to be of length equal to the number of components of the closure of $\beta$; in particular, of length at most $n-1$. Brandenbursky~\cite{Brandenbursky_11} showed that there is a well-defined (independent of the choices for $\varepsilon_{\beta}$) map \[\HI\colon B_n\to \mathbb{R}, \beta\mapsto \lim_{k\to\infty}\frac{I\left(\widehat{\beta^k\varepsilon_{\beta^k}}\right)}{k},\]
called the \emph{homogenization} of $I$. In fact, $\HI$ is a homogeneous quasimorphism. Here a \emph{quasimorphism} on a group $G$ is any map $\phi\colon G\to \mathbb{R}$ such that \[\sup_{(a,b)\in G\times G}|\phi(ab)-\phi(a)-\phi(b)|<\infty\] and a quasimorphism $\phi\colon G\to \mathbb{R}$ is called \emph{homogeneous} if $\phi(a^k)=k\phi(a)$ for all integers $k$ and all $a\in G$. Homogeneity of $\HI$ is immediate from the construction. All homogeneous quasimorphisms are constant on conjugacy classes. We summarize properties specific to $\HI$ that get used in the main part of this text.
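As a toy illustration of these definitions (on the abelian group $(\mathbb{Z},+)$ rather than a braid group; the function $\varphi$ below is an assumption of the example, not anything used in the paper), the following Python sketch exhibits a quasimorphism of defect at most $1$ whose homogenization is the homomorphism $m\mapsto m\sqrt{2}$:

```python
import math

SQRT2 = math.sqrt(2.0)

def phi(m):
    # a quasimorphism on (Z, +): phi(m) = floor(m * sqrt(2))
    return math.floor(m * SQRT2)

# the defect sup |phi(a+b) - phi(a) - phi(b)| is bounded (here by 1)
defect = max(abs(phi(a + b) - phi(a) - phi(b))
             for a in range(-50, 51) for b in range(-50, 51))
assert defect <= 1

# the homogenization lim_k phi(k*m)/k exists and equals m * sqrt(2),
# which is a homomorphism Z -> R (hence a homogeneous quasimorphism)
def homogenization(m, k=10**6):
    return phi(k * m) / k

for m in [-3, -1, 1, 2, 5]:
    assert abs(homogenization(m) - m * SQRT2) < 1e-5
```

On an abelian group every homogeneous quasimorphism is a homomorphism, which is why the homogenization here is simply linear; on braid groups this fails, which is what makes $\HI$ interesting.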
\begin{lemma}\label{lemma:PropofHI} \begin{enumerate}[I)]
\item\label{item:HI-I} If a knot $K$ is the closure of an $n$-braid $\beta$, then $|\HI(\beta)-I(K)|\leq t_I\frac{n-1}{2}$.
\item\label{item:HI(betaa_i)}
For all $n$-braids $\beta$, $\left|\HI(\beta a_i^{\pm1})-\HI(\beta)\right|\leq \frac{t_I}{2}$.
\item\label{item:HIhasDefectt_I(n-1)} For all $n$-braids $\alpha$ and $\beta$, $\left|\HI({\alpha\beta})-\HI(\alpha)-\HI(\beta)\right|\leq t_I(n-1)$. If $\alpha$ and $\beta$ commute (for example, if $\alpha$ is the $n$-stranded full twist $\Delta^2$ or a power of $\beta$), then
$\HI({\alpha\beta})=\HI(\alpha)+\HI(\beta)$.
\item \label{item:HIofUnions}Fix positive integers $n$, $n_1$, $\cdots$, $n_l$ such that $n=\sum_{i=1}^l n_i$. If an $n$-braid $\beta$ is given as the disjoint union of braids $\beta_1$, $\cdots$, $\beta_l$ on $n_1$, $\cdots$, $n_l$ strands, respectively, then $\HI(\beta)=\sum_{i=1}^l \HI({\beta_i})$. \end{enumerate} \end{lemma} \begin{Remark}\label{Rmk:Brandenburskysproof} Brandenbursky proved that $\HI$ is a homogeneous quasimorphism by showing that it arises as the homogenization of the quasimorphism given by $\beta\mapsto I\left(\widehat{\beta\varepsilon_{\beta}}\right)$, which is a quasimorphism of defect at most $3t_In$.\footnote{For this, we recall that the homogenization of a quasimorphism is well-defined, which can for example be seen by adding or subtracting the defect, turning the quasimorphism into a subadditive or superadditive function, and then applying Fekete's Lemma, which states the following: given a subadditive (superadditive) real-valued sequence $(a_k)_{k=1}^{\infty}$, the limit $\lim_{k\to\infty}\frac{a_k}{k}$ exists in $[-\infty,\infty)$ (respectively $(-\infty,\infty]$)~\cite{Fekete_23}.} A priori, this only allows one to conclude that $\HI$ is a homogeneous quasimorphism of defect at most $6t_In$ rather than $t_I(n-1)$. We do not know of an example of an $I$ that shows that the bound $t_I(n-1)$ is realized. However, we do provide examples that show that the other inequalities in Lemma~\ref{lemma:PropofHI} cannot be improved; see Example~\ref{Ex:optofPropofHI}.
\end{Remark} For the proofs we will build cobordisms between closures of braids and then apply the fact that if there is a cobordism of genus $g$ between two knots $K_1$ and $K_2$, then
\begin{equation}\label{eq:I(K)-I(L)<tg}|I(K_1)-I(K_2)|\leq t_I g,\end{equation} since $I(K_1)-I(K_2)=I(K_1\# -K_2)$ and $g_4(K_1\# -K_2)\leq g$. Here a cobordism $C$ between two links $L_0$ and $L_1$ is a smooth oriented surface in $S^3\times [0,1]$ such that $\partial C=L_0\times\{0\} \cup L_1\times\{1\}$.
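Fekete's Lemma, invoked in the footnote of Remark~\ref{Rmk:Brandenburskysproof}, is easy to check numerically on a toy subadditive sequence (again purely illustrative; the sequence below is our own choice):

```python
import math

def a(k):
    # a subadditive sequence: a_{k+l} <= a_k + a_l (here a_k = ceil(k*sqrt(2)))
    return math.ceil(k * math.sqrt(2.0))

# check subadditivity on a sample
for k in range(1, 40):
    for l in range(1, 40):
        assert a(k + l) <= a(k) + a(l)

# Fekete: a_k / k converges, and the limit equals inf_k a_k / k (here sqrt(2))
limit = a(10**6) / 10**6
assert abs(limit - math.sqrt(2.0)) < 1e-5
assert 0 <= min(a(k) / k for k in range(1, 10**4)) - math.sqrt(2.0) < 1e-3
```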
\begin{proof}[Proof of Lemma~\ref{lemma:PropofHI}] \ref{item:HI-I}): For every fixed $k$, we claim that there exists a cobordism of genus $\frac{(n-1)(k-1)+{{\ell}}}{2}$ between $kK$ and $\widehat{\beta^k\varepsilon_{\beta^k}}$, where $kK$ denotes the $k$-fold connected sum $K\# \cdots \# K$ and $\varepsilon_{\beta^k}$ is an $n$-braid of length ${{\ell}}$ at most $n-1$ such that the closure of $\beta^k\varepsilon_{\beta^k}$ is a knot. In fact, there exists a cobordism $C$ between $kK$ and $\widehat{\beta^k\varepsilon_{\beta^k}}$ given by $(k-1)(n-1)+{{\ell}}$ band moves; see Figure~\ref{fig:cob}.
\begin{figure}\label{fig:cob}
\end{figure}
In particular, the cobordism $C$ has Euler characteristic $-(k-1)(n-1)-{{\ell}}$, is connected, and has two boundary components; thus, its genus is $\frac{(k-1)(n-1)+{{\ell}}}{2}$.
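Genus computations of this kind recur throughout the section; each instantiates $\chi = 2-2g-b$ for a connected orientable surface with $b$ boundary circles. A small illustrative helper (the function name is ours, not the paper's):

```python
def genus(euler_char, boundary_components):
    # chi = 2 - 2g - b for a connected orientable surface with b boundary circles
    two_g = 2 - boundary_components - euler_char
    assert two_g >= 0 and two_g % 2 == 0
    return two_g // 2

# sanity checks on familiar surfaces
assert genus(1, 1) == 0    # disk
assert genus(0, 2) == 0    # annulus
assert genus(-1, 1) == 1   # once-punctured torus
assert genus(-1, 3) == 0   # pair of pants

# the cobordism C above: chi = -(k-1)(n-1)-l, two boundary knots
n, k, l = 4, 5, 2          # sample values with (k-1)(n-1)+l even
assert genus(-((k - 1) * (n - 1) + l), 2) == ((k - 1) * (n - 1) + l) // 2
```

(The parity of $(k-1)(n-1)+{{\ell}}$ is automatically even in the proof, since both ends of the cobordism are knots.)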
We calculate
\begin{align*}\left|I\left(\widehat{\beta^k\varepsilon_{\beta^k}}\right)-kI(K)\right|
&=\left|I\left(\widehat{\beta^k\varepsilon_{\beta^k}}\right)-I(kK)\right| \overset{\text{\eqref{eq:I(K)-I(L)<tg}}}{\leq} t_I\frac{(n-1)(k-1)+{{\ell}}}{2}.\end{align*} Dividing by $k$ and taking the limit $k\to\infty$ yields
\[\left|\widetilde{I}(\beta)-I(K)\right|=\left|\lim_{k\to\infty}\frac{I\left(\widehat{\beta^k\varepsilon_{\beta^k}}\right)}{k}-I(K)\right|
=\left|\lim_{k\to\infty}\frac{I\left(\widehat{\beta^k\varepsilon_{\beta^k}}\right)-kI(K)}{k}\right|\leq \frac{t_I(n-1)}{2}.\]
\ref{item:HI(betaa_i)}): For every $k$, suppose $\varepsilon_{\beta^k}$ and $\varepsilon_{(\beta a_i^{\pm1})^k}$ are braids of length ${{\ell}}\leq n-1$ and ${{\ell}}'\leq n-1$, respectively, such that the closures of
$\beta^k\varepsilon_{\beta^k}$ and $(\beta a_i^{\pm1})^k\varepsilon_{(\beta a_i^{\pm1})^k}$ are knots. If braids $\alpha$ and $\alpha'$ differ by adding or removing a generator $a_i$ or $a_i^{-1}$, then their closures are related by a cobordism of Euler characteristic $-1$. Indeed, as discussed in Figure~\ref{fig:cob}, adding or removing a crossing can be realized by a cobordism consisting of one 1-handle. So, since the braid $\beta^k\varepsilon_{\beta^k}$ can be turned into the braid $(\beta a_i^{\pm1})^k\varepsilon_{(\beta a_i^{\pm1})^k}$ by removing or adding a total of ${{\ell}}+k+{{\ell}}'$ generators, there exists a cobordism of Euler characteristic $-{{\ell}}-{{\ell}}'-k$, i.e.~genus $\frac{{{\ell}}+{{\ell}}'+k}{2}$, between the knots given as the closure of $\beta^k\varepsilon_{\beta^k}$ and $(\beta a_i^{\pm1})^k\varepsilon_{(\beta a_i^{\pm1})^k}$. Consequently, we have \[\left|I\left(\widehat{(\beta a_i^{\pm1})^k\varepsilon_{(\beta a_i^{\pm1})^k}}\right)-I\left(\widehat{\beta^k\varepsilon_{\beta^k}}\right)\right| \overset{\eqref{eq:I(K)-I(L)<tg}}{\leq} t_I\frac{{{\ell}}+{{\ell}}'+k}{2}.\]
Dividing by $k$ and taking the limit $k\to\infty$ yields $\left|\HI(\beta a_i^{\pm1})-\HI(\beta)\right|\leq \frac{t_I}{2}$.
\ref{item:HIhasDefectt_I(n-1)}): Fix a positive integer $k$, and let $\varepsilon_{(\alpha\beta)^k}$, $\varepsilon_{\alpha^k}$, and $\varepsilon_{\beta^k}$ denote $n$-braids of length ${{\ell}},{{\ell}}_\alpha,{{\ell}}_\beta\leq n-1$, respectively, such that closures of $(\alpha\beta)^k\varepsilon_{(\alpha\beta)^k}$, $\alpha^k\varepsilon_{\alpha^k}$, and $\beta^k\varepsilon_{\beta^k}$ are knots.
We first observe that there exists a cobordism of Euler characteristic $-(n-1)k-{{\ell}}-{{\ell}}_\alpha$ between the knot $\widehat{(\alpha\beta)^k\varepsilon_{(\alpha\beta)^k}}$ and the link $\widehat{\alpha^k\varepsilon_{\alpha^k}}\# k\widehat{\beta}$, where $k\widehat{\beta}$ denotes the connected sum of $k$ copies of $\widehat{\beta}$ with the summing operation happening along the component of $\widehat{\beta}$ that contains the strand of $\beta$ that ends left-most on the top of $\beta$; see Figure \ref{fig:connectsum}. \begin{figure}
\caption{An illustration of $2\widehat{\beta}$ for $n=4$.}
\label{fig:connectsum}
\end{figure} Figure~\ref{fig:cob2} shows how such a cobordism is given by band moves. \begin{figure}\label{fig:cob2}
\end{figure}
By the same argument as in the proof of~\ref{item:HI-I} (see Figure~\ref{fig:cob}), there exists a cobordism of Euler characteristic $-(n-1)(k-1)-{{\ell}}_\beta$ between $k\widehat{\beta}$ and $\widehat{\beta^k\varepsilon_{\beta^k}}$.
These two cobordisms may be concatenated to yield a cobordism of genus \[\frac{(n-1)(2k-1)+{{\ell}}+{{\ell}}_\alpha+{{\ell}}_\beta}{2}\] between the knots $\widehat{(\alpha\beta)^k\varepsilon_{(\alpha\beta)^k}}$ and $\widehat{\alpha^k\varepsilon_{\alpha^k}}\#\widehat{\beta^k\varepsilon_{\beta^k}}$. Therefore, we have
\[\left|I\left(\widehat{(\alpha\beta)^k\varepsilon_{(\alpha\beta)^k}}\right)- I\left(\widehat{\alpha^k\varepsilon_{\alpha^k}}\#\widehat{\beta^k\varepsilon_{\beta^k}}\right)\right|\overset{\eqref{eq:I(K)-I(L)<tg}}{\leq} t_I\frac{(n-1)k+{{\ell}}+{{\ell}}_\alpha+(n-1)k+{{\ell}}_\beta}{2}.\]
Dividing by $k$ and taking the limit $k\to \infty$ implies $\left|\HI({\alpha\beta})-\HI(\alpha)-\HI(\beta)\right|\leq t_I(n-1)$.
\
In the case when $\alpha$ and $\beta$ commute, we use the fact that
\[\lim_{k\to\infty}\frac{|\HI(\alpha^k\beta^k)-\HI(\alpha^k)-\HI(\beta^k)|}{k}=0\] to conclude \[\HI(\alpha\beta)=\lim_{k\to\infty}\frac{\HI(\alpha^k\beta^k)}{k}=\lim_{k\to\infty}\frac{\HI(\alpha^k)}{k}+\lim_{k\to\infty}\frac{\HI(\beta^k)}{k} =\HI(\alpha)+\HI(\beta),\] where $(\alpha\beta)^k=(\alpha)^k(\beta)^k$ was used in the first equality.
\
\ref{item:HIofUnions}):
For any $n_i$-braids $\beta_i$, let $\beta$ denote their disjoint union. In order to give a braid word for $\beta$, we must shift each $\beta_{i}$ by the appropriate number of strands. Indeed, if we let $\beta'_i$ denote the $n$-braid obtained from a braid word for $\beta_i$ by replacing $a_k^{\pm1}$ by $a_{k+\sum_{j<i}n_j}^{\pm1}$, then $\beta=\beta'_1\beta'_2\cdots\beta'_l$. We note that $\beta'_i$ and $\beta'_j$ commute for all $i,j\leq l$. Therefore, $\HI(\beta)=\sum_{i=1}^l \HI({\beta'_i})$ by~\ref{item:HIhasDefectt_I(n-1)}. Thus, we are left with showing $\HI({\beta'_i})=\HI({\beta_i})$ for all $i\leq l$. We approach this by observing that, while $\beta'_i$ and $\beta_i$ have different numbers of strands, since the braided portions of $\beta'_i$ and $\beta_i$ are identical we may choose convenient $\varepsilon$'s to concatenate with ${\beta'_i}^k$ and ${\beta_i}^k$ in the computation of $\HI$ in order to make the closures be isotopic knots.
Fix a positive integer $k$ and let $\varepsilon_{\beta_i^k}$ be an $n_i$-braid of length ${{\ell}}_i$ such that the closure of $\beta_i^k\varepsilon_{\beta_i^k}$ is a knot. Let $\varepsilon'_{\beta_i^k}$ be the $n$-braid obtained from a braid word for $\varepsilon_{\beta_i^k}$ by replacing each $a_m^{\pm1}$ by $a_{m+\sum_{j<i}n_j}^{\pm1}$ and set \[\varepsilon_{{\beta'}_i^k}=a_1a_2\cdots a_{\sum_{j<i}n_j}\varepsilon'_{\beta_i^k}.\] We note that ${\beta'}_i^k\varepsilon_{{\beta'}_i^k}$ and ${\beta_i}^k\varepsilon_{{\beta_i}^k}$ have isotopic closures and so we conclude \[\HI(\beta_i)=\lim_{k\to\infty}\frac{I\left(\widehat{\beta_i^k\varepsilon_{\beta_i^k}}\right)}{k} =\lim_{k\to\infty}\frac{I\left(\widehat{{\beta'}_i^k\varepsilon_{{\beta'}_i^k}}\right)}{k}=\HI(\beta'_i).\] \end{proof}
Two famous examples for $I$, Ozsv\'ath and Szab\'o's $\tau$ invariant and Rasmussen's $s$ invariant, turn out to have very simple homogenizations. The following implies this and can be seen as a version of~\cite[Theorem~3.5]{Brandenbursky_11} that depends on the braid index. For coprime positive integers $p$ and $q$, we denote by $T_{p,q}$ the torus knot given as the closure of the $p$-braid $(a_1a_2\cdots a_{p-1})^q$.
\begin{lemma}\label{lemma:HIwhenSBIissharp} Fix an integer $n\geq 2$. If $I(T_{n,nk+1})=t_Ig(T_{n,nk+1})=\frac{(n-1)nk}{2}$ for all positive integers $k$, then \[\HI(\beta)=t_I\frac{{\rm{wr}}(\beta)}{2}\text{ for all $n$-braids }\beta.\] \end{lemma} \begin{proof} The equality $I(T_{n,nk+1})=t_Ig_4(T_{n,nk+1})$, for all positive integers $k$, implies that the slice-Bennequin inequality holds for all $n$-braids; that is, if the knot $K$ is the closure of an $n$-braid $\alpha$, then \begin{equation}\label{eq:sB}t_I\frac{{\rm{wr}}(\alpha)-(n-1)}{2}\leq I(K)\leq t_I\frac{{\rm{wr}}(\alpha)+n-1}{2}.\end{equation} For completeness, we provide a proof of~\eqref{eq:sB} following~\cite[Lemma~4]{rudolph_QPasobstruction}; compare also with the proof of~\cite[Corollary~11]{Livingston_Comp}. We only establish the first inequality of~\eqref{eq:sB} as the second one follows by applying the first to $-\alpha$. Removing all $a_i^{-1}$ in a braid word for $\alpha$, then adding generators $a_i$ allows us to turn $\alpha$ into $(a_1a_2\cdots a_{n-1})^{nk+1}$ for some positive integer $k$, which we additionally choose such that $(n-1)(nk+1)-{\rm{wr}}(\alpha)>0$. Since adding or removing a generator yields a cobordism consisting of a single $1$-handle between the corresponding closures, this implies that there exists a cobordism of Euler characteristic $-(n-1)(nk+1)+{\rm{wr}}(\alpha)$ between $K$ and $T_{n,nk+1}$. Thus, we find \begin{align*}t_I\frac{(n-1)nk}{2}-I(K)&=I(T_{n,nk+1})-I(K) \\&\overset{\text{\eqref{eq:I(K)-I(L)<tg}}}{\leq} t_I\frac{(n-1)(nk+1)-{\rm{wr}}(\alpha)}{2}\\&=t_I\frac{(n-1)nk}{2}-t_I\frac{{\rm{wr}}(\alpha)-(n-1)}{2},\end{align*} as wanted, where the assumption on the value of $I$ on torus knots was used in the first line.
The statement of the lemma follows from~\eqref{eq:sB} by setting $\alpha=\beta^k\varepsilon_{\beta^k}$ and $K=\widehat{\beta^k\varepsilon_{\beta^k}}$, dividing by $k$, and taking the limit $k\to\infty$. \end{proof}
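The numerology of this proof can be spot-checked: the Seifert genus (which equals the smooth $4$-genus) of a torus knot is $g(T_{p,q})=\frac{(p-1)(q-1)}{2}$, and for $T_{n,nk+1}$ the slice-Bennequin lower bound is attained exactly. A short illustrative Python check, normalizing $t_I=1$:

```python
def torus_knot_genus(p, q):
    # Seifert genus = smooth 4-genus of the torus knot T_{p,q}, gcd(p,q) = 1
    return (p - 1) * (q - 1) // 2

for n in range(2, 8):
    for k in range(1, 8):
        q = n * k + 1                    # gcd(n, nk+1) = 1 automatically
        wr = (n - 1) * q                 # writhe of (a_1 a_2 ... a_{n-1})^{nk+1}
        g = torus_knot_genus(n, q)
        assert g == (n - 1) * n * k // 2
        # with t_I = 1, the slice-Bennequin lower bound is an equality here:
        assert (wr - (n - 1)) // 2 == g
```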
The ideas of the proof of Lemma~\ref{lemma:PropofHI} can be used to establish the following. \begin{prop}\label{prop:HI(alpha)-HI(beta)} Fix positive integers $n$ and $m$. If an $n$-braid $\beta$ and an $m$-braid $\alpha$ have isotopic links as their closure (or, more generally, concordant links as their closure), then
\[\left|\HI(\beta)-\HI(\alpha)\right|\leq t_I\frac{n-1+m-1}{2}.\] \end{prop} We use Proposition~\ref{prop:HI(alpha)-HI(beta)} crucially in the proof of Theorem~\ref{thmintro:FDTCofNonminimalBraidIsBounded}. In Example~\ref{Ex:optofPropofHI}, we comment on the optimality of Proposition~\ref{prop:HI(alpha)-HI(beta)}.
\begin{proof}[Proof of Proposition~\ref{prop:HI(alpha)-HI(beta)}] We will first prove the statement in the case that the closure of one (and thus both) of $\alpha$ and $\beta$ are knots.
Fix a positive integer $k$ and let $\varepsilon_{\beta^k}$ and $\varepsilon_{\alpha^k}$ be braids given by braid words of length ${{\ell}}\leq n-1$ and ${{\ell}}'\leq m-1$, respectively, such that $\beta^k\varepsilon_{\beta^k}$ and $\alpha^k\varepsilon_{\alpha^k}$ are braids with closures that are knots. We claim that there exists a cobordism of genus $\frac{(n-1+m-1)(k-1)+{{\ell}}+{{\ell}}'}{2}$ between the closures of $\beta^k\varepsilon_{\beta^k}$ and $\alpha^k\varepsilon_{\alpha^k}$. To see this, let $k\widehat{\beta}$ denote the knot obtained as the connected sum of $k$ copies of the knot $\widehat{\beta}$. By the argument given in the proof of~\ref{item:HI-I} of Lemma~\ref{lemma:PropofHI}, there exists a cobordism $C$ from the closure of $\beta^k\varepsilon_{\beta^k}$ to $k\widehat{\beta}$ given by $(k-1)(n-1)+{{\ell}}$ band moves; see Figure~\ref{fig:cob}. Similarly, there is a cobordism $D$ from $k\widehat{\alpha}$---the connected sum of $k$ copies of the closure of $\alpha$---to the closure of $\alpha^k\varepsilon_{\alpha^k}$ given by $(k-1)(m-1)+{{\ell}}'$ band moves. Note that the knots $k\widehat{\beta}$ and $k\widehat{\alpha}$ are concordant, say by a concordance $A$, since the knots $\widehat{\beta}$ and $\widehat{\alpha}$ are concordant by assumption. The concatenation of the cobordisms $C$, $A$, and $D$ yields a cobordism between the closure of $\beta^k\varepsilon_{\beta^k}$ and $\alpha^k\varepsilon_{\alpha^k}$ with genus $\frac{(k-1)(n-1)+{{\ell}}+(k-1)(m-1)+{{\ell}}'}{2}\leq k\frac{n-1+m-1}{2}$.
The statement follows from the existence of the above cobordism by the following calculation:
\begin{align*}\left|\HI(\beta)-\HI(\alpha)\right|
&=\left|\lim_{k\to\infty}\frac{I\left({\widehat{\beta^k\varepsilon_{\beta^k}}}\right)}{k}
-\lim_{k\to\infty}\frac{I\left({\widehat{\alpha^k\varepsilon_{\alpha^k}}}\right)}{k}\right|\\
&=\left|\lim_{k\to\infty}\frac{I\left(\widehat{\beta^k\varepsilon_{\beta^k}}\right)
-I\left(\widehat{\alpha^k\varepsilon_{\alpha^k}}\right)}{k}\right|\\ &\overset{\eqref{eq:I(K)-I(L)<tg}}{\leq}\lim_{k\to\infty}\frac{t_Ik\frac{n-1+m-1}{2}}{k}=t_I\frac{n-1+m-1}{2}. \end{align*}
\
It remains to discuss the case when the closure of $\beta$ (and thus $\alpha$) is a link with several components. The argument remains the same as above, but it only works for a particular choice of links $k\widehat{\beta}$ and $k\widehat{\alpha}$, which we elaborate below.
Let $C$ be a concordance (a union of annuli smoothly embedded in $S^3\times[0,1]$) between $\widehat{\beta}$ and $\widehat{\alpha}$. The concordance $C$ induces a bijection between the connected components of the links $\widehat{\beta}$ and $\widehat{\alpha}$: connected components of the links are related if they are contained in the same subannulus of $C$. We pick $i\leq m$ such that under this bijection the connected component of $\widehat{\beta}$ that contains the strand that ends left-most on the top of $\beta$ gets mapped to the connected component of $\widehat{\alpha}$ that contains the strand of $\alpha$ that ends $i$th on the top of $\alpha$. For example, let $\beta$ be the $3$-braid $a_2^3$ and $\alpha$ be the $3$-braid $a_1^3$. The closure of each of these is the disjoint union of an unknot and a $T_{2,3}$ (i.e.~a trefoil). A concordance has to relate the two unknots, so in this case $i=3$. We may conjugate $\alpha$ by a braid $\gamma$ such that the strand of $\gamma\alpha\gamma^{-1}$ that ends left-most on the top contains the strand of $\alpha$ that ends $i$th on the top of $\alpha$. For the above $3$-braid example, where $\beta=a_2^3$ and $\alpha=a_1^3$, we could choose $\gamma=a_1a_2$.
We now prove the statement for the braids $\beta$ and $\alpha$, where, without loss of generality (by the previous paragraph and the fact that conjugated braids have the same closure), we may and do assume the following: the bijection induced by the concordance $C$ relates the connected component of $\widehat{\beta}$ that contains the strand that ends left-most on the top of $\beta$ to the connected component of $\widehat{\alpha}$ that contains the strand that ends left-most on the top of $\alpha$.
As in the proof of Lemma~\ref{lemma:PropofHI}.\ref{item:HIhasDefectt_I(n-1)}, we choose $k\widehat{\beta}$ to be the connected sum of $k$ copies of the link $\widehat{\beta}$, where the connected sum is done along the connected component of $\widehat{\beta}$ that contains the strand that ends left-most at the top of $\beta$; see~Figure~\ref{fig:connectsum}. Similarly, we set $k\widehat{\alpha}$ to be the connected sum of $k$ copies of the link $\widehat{\alpha}$, where the connected sum is done along the connected component of $\widehat{\alpha}$ that contains the strand that ends left-most on top of $\alpha$. The assumption made in the last paragraph guarantees that the links $k\widehat{\beta}$ are concordant to $k\widehat{\alpha}$: indeed, take $A$ to be the concordance (a cobordism that is a union of $k$ annuli) that is given by the concordance $C$ on the $k$ summands of $k\widehat{\beta}$ and $k\widehat{\alpha}$. With this set-up, we conclude the proof as in the case where $\widehat{\beta}$ and $\widehat{\alpha}$ are knots. \end{proof}
\subsection*{Optimality of the inequalities in Lemma~\ref{lemma:PropofHI} and Proposition~\ref{prop:HI(alpha)-HI(beta)}} We provide examples that show that, in general, the inequalities in Lemma~\ref{lemma:PropofHI}.\ref{item:HI-I}$\&$\ref{item:HI(betaa_i)} and Proposition~\ref{prop:HI(alpha)-HI(beta)} cannot be improved. \begin{Example}\label{Ex:optofPropofHI} Let $I$ be a concordance homomorphism such that $t_I=1$ and $I$ satisfies the assumption of Lemma~\ref{lemma:HIwhenSBIissharp} for all $n\geq 2$; for example, take $I$ to be Ozsv\'ath and Szab\'o's $\tau$ invariant.
Fix a positive integer $n\geq 2$. The $n$-braid $\beta=a_1a_2\cdots a_{n-1}$ has $\HI(\beta)=\frac{{\rm{wr}}(\beta)}{2}=\frac{n-1}{2}$, while its closure $K$ has $I(K)=0$ since it is the unknot. Thus, we have equality
\[|\HI(\beta)-I(K)|=\frac{n-1}{2}= t_I\frac{n-1}{2}\] in Lemma~\ref{lemma:PropofHI}.\ref{item:HI-I}. Taking $\beta$ to be the trivial $n$-braid yields equality in Lemma~\ref{lemma:PropofHI}.\ref{item:HI(betaa_i)}.
For positive integers $n$ and $m$, we set \[\beta=a_1a_2\cdots a_{n-1}\quad\mbox{and}\quad\alpha=(a_1a_2\cdots a_{m-1})^{-1}.\]
We have \[\HI(\beta)=\frac{{\rm{wr}}(\beta)}{2}=\frac{n-1}{2}\quad\mbox{and}\quad \HI(\alpha)=\frac{{\rm{wr}}(\alpha)}{2}=\frac{-m+1}{2},\] which yields that $\beta$ and $\alpha$ are braids with isotopic closure such that the inequality in Proposition~\ref{prop:HI(alpha)-HI(beta)} is an equality:
\[\left|\HI(\beta)-\HI(\alpha)\right|=\frac{n-1+m-1}{2}=t_I\frac{n-1+m-1}{2}.\] \end{Example}
\subsection*{Proofs of Claim~\ref{Claim:g4K-L} and Claim~\ref{Claim:L=diffoftorusknots}}
We conclude this paper by providing the proofs for Claim~\ref{Claim:g4K-L} and Claim~\ref{Claim:L=diffoftorusknots}.
\begin{proof}[Proof of Claim~\ref{Claim:g4K-L}]
The knot $K$ is the closure of
\[\beta=\alpha_1\delta\beta_1\delta^\Delta\alpha_2\delta\beta_2\cdots\alpha_{n-1}\delta\beta_{n-1}\delta^\Delta\alpha_{n}\delta\beta_{n},\]
where the $\alpha_j$ and $\beta_j$ are (possibly trivial) quasipositive $n$-braids.
We note that the $n$-braid $\beta$ can be changed into the $n$-braid $\beta_{n,n}=(\delta\delta^\Delta)^{n-1}\delta$ by removing
\[{\rm{wr}}(\beta)-{\rm{wr}}(\beta_{n,n})=\sum_{j=1}^{n} {\rm{wr}}(\alpha_j)+\sum_{j=1}^{n} {\rm{wr}}(\beta_j)\]
positive generators $a_i$ in a braid word for $\beta$. For this, we recall that the $\alpha_j$ and $\beta_j$ are given by braid words that are products of the form $\omega a_i\omega^{-1}$ and so removing the middle $a_i$ in each such conjugate yields a braid word for $\beta_{n,n}$.
Therefore (as in the proof of Lemma~\ref{lemma:PropofHI}), there is a cobordism between $K$ and $L=\widehat{\beta_{n,n}}$ given by
\[{\rm{wr}}(\beta)-{\rm{wr}}(\beta_{n,n})=2g+(n-1)-((2n-1)(n-1))=2(g-(n-1)^2)\] many $1$-handles. In particular, this cobordism is connected and of genus $\frac{2(g-(n-1)^2)}{2}$, as wanted in Claim~\ref{Claim:g4K-L}. \end{proof}
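The arithmetic in the last display can be confirmed on sample values (an illustrative check; `band_count` is our ad hoc name for ${\rm{wr}}(\beta)-{\rm{wr}}(\beta_{n,n})$, using ${\rm{wr}}(\beta)=2g+(n-1)$ as in the proof):

```python
def band_count(n, g):
    # wr(beta) - wr(beta_{n,n}), with wr(beta) = 2g + (n-1)
    return 2 * g + (n - 1) - (2 * n - 1) * (n - 1)

for n in range(2, 10):
    # wr(beta_{n,n}) for beta_{n,n} = (delta delta^Delta)^{n-1} delta,
    # where delta and delta^Delta each have n-1 letters:
    assert (n - 1) * 2 * (n - 1) + (n - 1) == (2 * n - 1) * (n - 1)
    for g in range((n - 1) ** 2, (n - 1) ** 2 + 10):
        # the simplification 2g + (n-1) - (2n-1)(n-1) = 2(g - (n-1)^2)
        assert band_count(n, g) == 2 * (g - (n - 1) ** 2)
```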
\begin{proof}[Proof of Claim~\ref{Claim:L=diffoftorusknots}] We first observe that \[\beta_{n,n}=(\Delta^2)^{n-1}(a_2\cdots a_{n-1})^{-(n-1)(n-1)}(a_1a_2\cdots a_{n-1}).\] Thus, $\beta_{n,n}$ is conjugate to the $n$-braid \begin{align*}
\beta'_{{n,n}}&=(a_1a_2\cdots a_{n-1})(\Delta^2)^{n-1}(a_2\cdots a_{n-1})^{-(n-1)(n-1)}\\&=(a_1a_2\cdots a_{n-1})^{(n-1)n+1}(a_2\cdots a_{n-1})^{-(n-1)(n-1)};\end{align*} in particular, $L=\widehat{\beta_{{n,n}}}=\widehat{\beta'_{{n,n}}}$. By adding $n-2$ generators we can turn $\beta'_{{n,n}}$ into $\beta''_{{n,n}}=(a_1a_2\cdots a_{n-1})^{(n-1)n+1}(a_2\cdots a_{n-1})^{-(n-1)(n-1)-1}.$ Consequently, there exists a cobordism $C$ between $L$ and $\widehat{\beta''_{{n,n}}}$ of Euler characteristic $-n+2$ given by $n-2$ many $1$-handles. Also, $n-2$ many band moves turn the closure of $\beta''_{{n,n}}$ into the connected sum of \[T_{n,(n-1)n+1}=\widehat{(a_1a_2\cdots a_{n-1})^{(n-1)n+1}}\quad\mbox{and}\quad -T_{n-1,(n-1)(n-1)+1}.\] This can be seen by an argument similar to the proof of Lemma~\ref{lemma:PropofHI}.\ref{item:HIhasDefectt_I(n-1)}; compare to Figure~\ref{fig:cob2}. This gives rise to a cobordism $D$ of Euler characteristic $-n+2$ between \[\widehat{\beta''_{{n,n}}}\quad\mbox{and}\quad T_{n,(n-1)n+1}\#(-T_{n-1,(n-1)(n-1)+1}).\] Concatenating $C$ and $D$ yields a cobordism of genus $n-2$ between $L$ and \newline $T_{n,(n-1)n+1}\#(-T_{n-1,(n-1)(n-1)+1})$, as wanted. \end{proof}
\end{document}
\begin{document}
\begin{abstract} We show that the pipe dream complex associated to the permutation $1\text{ } n \text{ }n-1\text{ } \cdots \text{ }2$ can be geometrically realized as a triangulation of the vertex figure of a root polytope. Leading up to this result we show that the Grothendieck polynomial specializes to the $h$-polynomial of the corresponding pipe dream complex, which in certain cases equals the $h$-polynomial of canonical triangulations of root (and flow) polytopes, which in turn equals a specialization of the reduced form of a monomial in the subdivision algebra of root (and flow) polytopes. Thus, we connect Grothendieck polynomials to reduced forms in subdivision algebras and root (and flow) polytopes. We also show that root polytopes can be seen as projections of flow polytopes, explaining that these families of polytopes possess the same subdivision algebra. \end{abstract}
\title{Pipe dream complexes and triangulations of root polytopes belong together}
\tableofcontents
\section{Introduction} \label{sec:intro} In this paper we journey from Grothendieck polynomials to geometric realizations of pipe dream complexes via root polytopes. On this journey we meet reduced forms of monomials in the subdivision algebra of root and flow polytopes, and root and flow polytopes themselves. While the connection between Grothendieck polynomials and pipe dream complexes is a well known one, the other objects in the above list are not universally thought of as tied to Grothendieck polynomials and pipe dream complexes. As this work will illustrate, they might indeed belong together.
Grothendieck polynomials represent K-theory classes on the flag manifold; they generalize Schubert polynomials, which in turn generalize Schur polynomials. We show that Grothendieck polynomials specialize to $h$-polynomials of pipe dream complexes. Since pipe dream complexes are known to be homeomorphic to balls (except in a trivial case), we get that their $h$-polynomials, and thus the shifted specialized Grothendieck polynomials, have nonnegative coefficients. This property was first observed by Kirillov \cite{k2}, who indicated that he had an algebraic proof in mind.
In \cite{k2} Kirillov also observed that a certain specialization of the shifted Grothendieck polynomial equals a specialization of a particular reduced form in the subdivision algebra of root and flow polytopes. His observation was based on numerical evidence. We explain this equality in terms of the geometry of the underlying pipe dream complex and root (and flow) polytopes. Indeed, we show that the mentioned pipe dream complex can be realized as the canonical triangulation of the vertex figure of the root polytope. No wonder then the specialized Grothendieck polynomial and reduced form are equal: they are the $h$-polynomial of the pipe dream complex and the $h$-polynomial of the canonical triangulation of the vertex figure of the root polytope, respectively. That the reduced form can be seen as the $h$-polynomial of the canonical triangulation of the flow polytope, and thus of the canonical triangulation of the vertex figure of the root polytope, was proved in \cite{h-poly1}. The paper \cite{h-poly2} also contains closely related results.
The outline of the paper is as follows. In Section \ref{sec:groth} we show that the shifted $\beta$-Grothendieck polynomial corresponding to the permutation $w$ is the $h$-polynomial of the pipe dream complex of $w$, denoted by $PD(w)$. In Section \ref{sec:red} we define the subdivision algebra and reduced forms and recall related results. We also allude to the connection of Grothendieck polynomials and reduced forms. In Section \ref{sec:r-f} we explain why the subdivision algebras of root and flow polytopes are the same, by showing that root polytopes are projections of flow polytopes. Finally, in Section \ref{sec:pipe} we tie all the above together, by showing that the pipe dream complex $PD({1\text{ } n \text{ }n-1\text{ } \cdots \text{ }2})$ can be realized as a canonical triangulation of a vertex figure of a root polytope. $PD({1\text{ } n \text{ }n-1\text{ } \cdots \text{ }2})$ has been realized previously via the classical associahedron \cite{assoc, cesar, subwordcluster}.
\section{Grothendieck polynomials} \label{sec:groth}
In this section we define Grothendieck polynomials and explain that they specialize to $h$-polynomials of certain simplicial complexes called pipe dream complexes. Since the pipe dream complex is homeomorphic to a ball, its $h$-polynomial has nonnegative coefficients. Therefore, we immediately obtain nonnegativity properties of Grothendieck polynomials, which were observed by Kirillov in \cite{k2}.
There are several ways to express Grothendieck polynomials, and we will present the expression in terms of pipe dreams here. A permutation $w$ in the symmetric group $S_n$ can be represented by a triangular table filled with \includegraphics[scale=.5]{cross.pdf}'s (crosses)
and \includegraphics[scale=.5]{elbow.pdf}'s (elbows)
such that (1) the pipes intertwine according to $w$ and (2) two pipes cross at most once. Such representations of $w$ are called {\bf reduced pipe dreams}, see Figure \ref{pipe}. Pipe dreams are also known as RC-graphs, and the reduced pipe dreams of a permutation were shown to be connected by ladder and chute moves by Bergeron and Billey in \cite{rc}. To each reduced pipe dream we can associate the {\bf weight} $wt_{x,y}(P):=\prod_{(i,j) \in {\rm cross}(P)} (x_i-y_j)$, with ${\rm cross}(P)$ being the set of positions where $P$ has a cross. Note that throughout the literature the definition of the weight varies; however, all results can be phrased using any one convention.
\begin{figure}
\caption{All reduced pipe dreams for $w=1432$ (that is, pipe dreams with exactly $3$ crosses). The weights $wt_{x,y}(P)$ when $\bf{y=0}$ are written below the reduced pipe dreams. }
\label{pipe}
\end{figure}
A \textbf{nonreduced pipe dream} for $w \in S_n$ is a triangular table filled with crosses
and elbows so that (1) the pipes intertwine according to $w$, whereby if two pipes have already crossed previously then we simply ignore the extra crossings, and (2) there are two pipes that cross at least twice. All nonreduced pipe dreams for $1432$ with a total of $4$ crosses can be seen in Figure \ref{nonred-pipe}, and the unique pipe dream for $1432$ with a total of $5$ crosses can be seen in Figure \ref{nonred-one}. We associate a weight $wt_{x,y}(P)$ to each pipe dream as above.
\begin{figure}
\caption{All pipe dreams with exactly $4$ crosses for $w=1432$. The weights $wt_{x,y}(P)$ when $\bf{y=0}$ are written below the pipe dreams.}
\label{nonred-pipe}
\end{figure}
\begin{figure}
\caption{The unique pipe dream with exactly $5$ crosses for $w=1432$. The weight $wt_{x,y}(P)$ when $\bf{y=0}$ is written below the pipe dream.}
\label{nonred-one}
\end{figure}
The set ${{\rm Pipes}(w)}$, which is the set of all pipe dreams of $w$ (both reduced and nonreduced), naturally labels the interior simplices of the {pipe dream complex} $PD(w)$ associated to a permutation $w\in S_n$; see Figure \ref{fig:1432} for $PD({1432})$. The pipe dream complex $PD(w)$ is a special case of a subword complex and can be defined as follows. A word of size $m$ is an ordered sequence $Q=(\sigma_1, \ldots, \sigma_m)$ of elements from the set of simple reflections $\{s_1, \ldots, s_{n-1}\}$ of $S_n$. An ordered subsequence $R$ of $Q$ is called a subword of $Q$. A subword $R$ of $Q$ represents $w \in S_n$ if the ordered product of the simple reflections in $R$ is a reduced decomposition of $w$. A word $R$ contains $w \in S_n$ if some subsequence of $R$ represents $w$. The \textbf{pipe dream complex} $PD(w)$ is the simplicial complex whose faces are the subwords $(s_{n-1}, s_{n-2}, \ldots, s_1,s_{n-1}, s_{n-2}, \ldots, s_2, s_{n-1}, s_{n-2}, \ldots, s_3, \ldots, s_{n-1}, s_{n-2}, s_{n-1})\backslash R$ for which the complement $R$ contains $w$. The word $Q=(s_{n-1}, s_{n-2}, \ldots, s_1,s_{n-1}, s_{n-2}, \ldots, s_2, s_{n-1}, s_{n-2}, \ldots, s_3, \ldots, $ $s_{n-1}, s_{n-2}, s_{n-1})$ is called the triangular word.
\begin{figure}
\caption{The pipe dream complex $PD({1432})$. Figure used with permission from \cite{allen}.}
\label{fig:1432}
\end{figure}
The following theorem provides a combinatorial way of thinking about double Grothendieck polynomials.
\begin{theorem} \label{allen} \cite{subword, fom-kir} The \textbf{double Grothendieck polynomial $\mathfrak{G}_w({\bf x, y})$} for $w \in S_n$, where ${\bf x}=(x_1, \ldots, x_{n-1})$ and ${\bf y}=(y_1, \ldots, y_{n-1})$, can be written as
\begin{equation} \mathfrak{G}_w({\bf x, y})=\sum_{P\in {\rm Pipes}(w)}(-1)^{codim_{PD(w)}F(P)} wt_{x,y}(P), \label{eq:groth} \end{equation}
\noindent where ${\rm Pipes}(w)$ is the set of all pipe dreams of $w$ (both reduced and nonreduced), $F(P)$ is the interior face in $PD(w)$ labeled by the pipe dream $P$, $codim_{PD(w)} F(P)$ denotes the codimension of $F(P)$ in $PD(w)$ and $wt_{x,y}(P)=\prod_{(i,j) \in {\rm cross}(P)} (x_i-y_j)$, with ${\rm cross}(P)$ being the set of positions where $P$ has a cross. \end{theorem}
In the spirit of Theorem \ref{allen}, we use the following definition for the {\bf double $\beta$-Grothendieck polynomial}:
\begin{equation} \mathfrak{G}^{\beta}_w({\bf x, y})=\sum_{P\in {\rm Pipes}(w)}\beta^{codim_{PD(w)}F(P)} wt_{x,y}(P). \label{eq:bgroth} \end{equation}
Note that if we declare $\beta$ to have degree $-1$, while all other variables have degree $1$, then the powers of $\beta$ simply make the polynomial $ \mathfrak{G}_w^{\beta}({\bf x, y})$ homogeneous. We chose this definition of $\beta$-Grothendieck polynomials since it is notationally the most convenient for our purposes.
Next we state a special case of \eqref{eq:bgroth}, since it will play a special role in this section.
\begin{lemma} \label{qt} Denoting $\mathfrak{G}_w^{\beta}({\bf x, y})$ by $\mathfrak{G}_w^{\beta}({q, t})$ when we set all components of ${\bf x}$ to $q$ and all components of ${\bf y}$ to $t$, we have \begin{equation} \mathfrak{G}_w^{\beta}({q, t})=(q-t)^{l(w)}\sum_{P\in {\rm Pipes}(w)} [\beta(q-t)]^{codim_{PD(w)}F(P)}, \label{eq:g} \end{equation} where $l(w)$ is the length of the permutation $w$, $F(P)$ is the interior face in $PD(w)$ labeled by the pipe dream $P$ and $codim_{PD(w)} F(P)$ denotes the codimension of $F(P)$ in $PD(w)$. \end{lemma}
\proof By \eqref{eq:bgroth} we have that
\begin{equation} \mathfrak{G}_w^{\beta}({q, t})=\sum_{P\in {\rm Pipes}(w)}\beta^{codim_{PD(w)}F(P)} (q-t)^{|{\rm cross}(P)|}. \end{equation} Since the number of crosses in a pipe dream $P$ is $l(w)+codim_{PD(w)}F(P)$, equation \eqref{eq:g} follows. \qed
The next lemma follows from the well-known relation between $f$- and $h$-polynomials. We note that we take $h({\mathcal{C}}, x)=\sum_{i=0}^d h_{i}x^{i}$ to be the $h$-polynomial of a $(d-1)$-dimensional simplicial complex ${\mathcal{C}}$. For a direct proof of the lemma see \cite{h-poly1}.
\begin{lemma} \cite{Stcom} \label{pure} Let ${\mathcal{C}}$ be a $(d-1)$-dimensional simplicial complex homeomorphic to a ball and $f_i^\circ$ be the number of interior faces of ${\mathcal{C}}$ of dimension $i$. Then \begin{equation} \label{h} h({\mathcal{C}}, \beta+1)=\sum_{i=0}^{d-1} f_i^\circ \beta^{d-1-i} \end{equation} \end{lemma}
Using that the interior simplices of $PD(w)$ are in bijection with pipe dreams of $w$ we obtain the following corollary of Lemma \ref{pure}.
\begin{corollary} \label{cor}Given $w \in S_n$ we have
\begin{equation} \label{h1} h(PD(w), \beta+1)=\sum_{P\in {\rm Pipes}(w)} \beta^{codim_{PD(w)} F(P)}, \end{equation} where $F(P)$ is the interior face in $PD(w)$ labeled by the pipe dream $P$ and $codim_{PD(w)} F(P)$ denotes the codimension of $F(P)$ in $PD(w)$. \end{corollary}
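As a concrete illustration of Corollary \ref{cor}, one can enumerate ${\rm Pipes}(w)$ for $w=1432$ by brute force. The following Python sketch is an illustration we add here, not code from the literature: it identifies a pipe dream with its set of cross positions in the triangular word $Q=(s_3,s_2,s_1,s_3,s_2,s_3)$ and tests whether the Demazure (0-Hecke) product of the chosen letters, taken in reading order, equals $w$; the codimension of the corresponding interior face is the number of crosses minus $l(w)$, as in the proof of Lemma \ref{qt}.

```python
from itertools import combinations
from collections import Counter

def demazure(word, n):
    """Demazure (0-Hecke) product of simple reflections acting on 1..n:
    w * s_a = w s_a if that increases length, and w otherwise."""
    w = list(range(1, n + 1))
    for a in word:
        if w[a - 1] < w[a]:
            w[a - 1], w[a] = w[a], w[a - 1]
    return tuple(w)

n = 4
triangular_word = [3, 2, 1, 3, 2, 3]   # Q = (s_3, s_2, s_1, s_3, s_2, s_3)
target = (1, 4, 3, 2)                  # w = 1432, with l(w) = 3

# A pipe dream is a choice of cross positions whose Demazure product is w;
# the codimension of its interior face in PD(w) is (#crosses) - l(w).
tally = Counter()
for k in range(len(triangular_word) + 1):
    for crosses in combinations(range(len(triangular_word)), k):
        if demazure([triangular_word[p] for p in crosses], n) == target:
            tally[k - 3] += 1   # exponent of beta in Corollary above

print(dict(tally))   # {0: 5, 1: 5, 2: 1}
```

The tally gives $h(PD(1432),\beta+1)=5+5\beta+\beta^2$, that is, $h(PD(1432),\beta)=1+3\beta+\beta^2$, whose coefficients are Narayana numbers, in agreement with the pipe dream counts in Figures \ref{pipe}, \ref{nonred-pipe} and \ref{nonred-one}.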
Finally, we obtain the following as a corollary of the above.
\begin{theorem} \label{thm:g-h} We have
\begin{equation} \mathfrak{G}_w^{\beta-1}({q, q-1})= h(PD(w), \beta), \label{eq:groth-h} \end{equation} where $h(PD(w), \beta)$ is the $h$-polynomial of $PD(w)$. In particular we have that $ \mathfrak{G}_w^{\beta-1}({q, q-1})\in \mathbb{Z}_{\geq 0}[\beta]$. \end{theorem}
\proof Corollary \ref{cor} yields $h(PD(w), \beta+1)=\sum_{P\in {\rm Pipes}(w)} \beta^{codim_{PD(w)} F(P)}$. Together with Lemma \ref{qt} applied when $t=q-1$ we get \eqref{eq:groth-h} in Theorem \ref{thm:g-h}. The nonnegativity of the coefficients of $ \mathfrak{G}_w^{\beta-1}({q, q-1})$ then follows from the nonnegativity of the $h$-polynomial of a simplicial complex which is homeomorphic to a ball. Recall that $PD(w)$ is known to be homeomorphic to a ball, except in the trivial case when it is a $(-1)$-sphere, a case that can be checked separately. \qed
The nonnegativity of the coefficients of $\mathfrak{G}_w^{\beta-1}({1, 0})$ was observed by Kirillov \cite{k2}. Equation \eqref{eq:groth-h} makes clear why this is the case: because it is the $h$-polynomial of a simplicial complex which is homeomorphic to a ball, implying that its coefficients are nonnegative \cite{Stcom}.
\section{Reduced forms in the subdivision algebra} \label{sec:red} In this section we point to a connection between reduced forms in the so-called subdivision algebra and Grothendieck polynomials. In Section \ref{sec:pipe} we provide a geometric realization of the pipe dream complex $PD({1 n (n-1)\ldots 2})$ via a triangulation of a root (or flow) polytope, which implies this connection. In Section \ref{sec:r-f} we explain the connection between root and flow polytopes via the subdivision algebra, also explaining the algebra's name.
The \textbf{subdivision algebra} ${\mathcal{S}(\beta)}$ is a commutative algebra generated by the variables $x_{ij}$, $1\leq i<j\leq n$, over $\mathbb{Q}[\beta]$, subject to the relations $x_{ij} x_{jk}=x_{ik}(x_{ij}+x_{jk}+\beta)$, for $1\leq i<j<k\leq n$. This algebra is called the subdivision algebra, because its relations can be seen geometrically as subdividing flow and root polytopes. This is explained in detail in Section \ref{sec:r-f}. The subdivision algebra has been used extensively for subdividing root and flow polytopes in \cite{prod, mm, h-poly1, h-poly2, root1, root2}.
A \textbf{reduced form} of a monomial in the algebra ${\mathcal{S}(\beta)}$ is a polynomial obtained by successively substituting $x_{ik}(x_{ij}+x_{jk}+\beta)$ in place of an occurrence of $x_{ij} x_{jk}$ for some $i<j<k$ until no further reduction is possible. Note that reduced forms are not necessarily unique.
A possible sequence of reductions in the algebra ${\mathcal{S}(\beta)}$ yielding a reduced form of $x_{12}x_{23}x_{34}$ is given by
\begin{eqnarray} \label{ex1}
x_{12} \mbox {\boldmath$ x_{23}x_{34}$} & \rightarrow & \mbox{\boldmath$x_{12}$}x_{24}\mbox{\boldmath$x_{23}$}+\mbox{\boldmath$x_{12}$}x_{34}\mbox {\boldmath$x_{24}$}+\beta \mbox {\boldmath$x_{12}x_{24}$} \nonumber \\ & \rightarrow& \mbox {\boldmath$ x_{24}$} x_{13}\mbox {\boldmath$x_{12}$}+x_{24}x_{23}x_{13}+ \beta x_{24}x_{13}+x_{34}x_{14}x_{12}+x_{34}x_{24}x_{14} \nonumber \\ & &+\beta x_{34}x_{14}+\beta x_{14}x_{12}+\beta x_{24}x_{14}+\beta^2 x_{14} \nonumber \\ & \rightarrow &x_{13}x_{14}x_{12}+x_{13}x_{24}x_{14}+\beta x_{13}x_{14}+x_{24}x_{23}x_{13}+\beta x_{24}x_{13}\nonumber \\ & & +x_{34}x_{14}x_{12}+x_{34}x_{24}x_{14}+\beta x_{34}x_{14}+\beta x_{14}x_{12}+\beta x_{24}x_{14}\nonumber \\ & &+\beta^2 x_{14} \end{eqnarray}
\noindent where the pair of variables on which the reductions are performed is in boldface. The reductions are performed on each monomial separately.
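The reduction procedure is easy to carry out by computer. The following Python sketch is our own added illustration (the function name is ours): it represents a monomial as a list of edges $(i,j)$ with $i<j$, repeatedly applies the reduction $x_{ij}x_{jk}\to x_{ik}x_{ij}+x_{ik}x_{jk}+\beta x_{ik}$, and records the specialization $x_{ij}=1$. By the uniqueness of the specialized reduced form, the answer does not depend on the order in which reductions are performed.

```python
from collections import Counter

def reduced_form_specialized(edges):
    """Compute Q_G(beta): apply x_ij x_jk -> x_ik x_ij + x_ik x_jk + beta x_ik
    until no monomial contains a pair (i,j),(j,k) with i < j < k, then set
    all x_ij = 1.  Returns a Counter mapping beta-exponents to coefficients."""
    out = Counter()

    def reduce(mono, bpow):
        # look for a reducible pair a = (i,j), b = (j,k)
        for a in mono:
            for b in mono:
                if a[1] == b[0]:
                    rest = list(mono)
                    rest.remove(a)
                    rest.remove(b)
                    ik = (a[0], b[1])
                    reduce(rest + [ik, a], bpow)      # x_ik x_ij term
                    reduce(rest + [ik, b], bpow)      # x_ik x_jk term
                    reduce(rest + [ik], bpow + 1)     # beta x_ik term
                    return
        out[bpow] += 1                                # fully reduced monomial

    reduce(list(edges), 0)
    return out

print(dict(reduced_form_specialized([(1, 2), (2, 3)])))          # {0: 2, 1: 1}
print(dict(reduced_form_specialized([(1, 2), (2, 3), (3, 4)])))  # {0: 5, 1: 5, 2: 1}
```

For path graphs this returns $Q_{P_3}(\beta)=2+\beta$ and $Q_{P_4}(\beta)=5+5\beta+\beta^2$, the latter agreeing with the specialization $x_{ij}=1$ of the reduced form computed in \eqref{ex1}.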
Given a graph $G$, denote by $Q_G(\beta)$ the reduced form of the monomial $\prod_{(i,j) \in E(G)}x_{ij}$ specialized at $x_{ij}=1$ for all $1\leq i<j\leq n$. The polynomial $Q_G(\beta)$ is unique, though the reduced form in the variables $x_{ij}$ is not \cite{root1}. In recent work \cite{h-poly1} the author connected $Q_G(\beta)$ to the $h$-polynomials of triangulations of flow polytopes of $\tilde{G}=(V(G)\cup \{s,t\}, E(G)\cup \{(s,i), (i,t) \mid i \in V(G)\})$. Flow polytopes are defined in Section \ref{subsec:fp}; in the next theorem we treat their triangulations, denoted by ${\mathcal{C}}$, as simplicial complexes. Since such a triangulation ${\mathcal{C}}$ is a simplicial complex homeomorphic to a ball, its $h$-polynomial has nonnegative coefficients \cite{Stcom}.
\begin{theorem} \cite{h-poly1} \label{h2} For any graph $G$ we have \begin{equation} \label{Q} Q_G(\beta)=h(\mathcal{C}, \beta+1),\end{equation} where $\mathcal{C}$ is any unimodular triangulation of the flow polytope ${\mathcal{F}}_{\tilde{G}}(1,0,\ldots, 0,-1)$ and $h(\mathcal{C}, x)$ is its $h$-polynomial. In particular, the reduced form $Q_G(\beta-1)$ is a polynomial in $\beta$ with nonnegative coefficients. \end{theorem}
For brevity, we use the notation $\mathfrak{G}_w(\beta)$ for $\mathfrak{G}_w^{\beta}({1, 0})$, the double $\beta$-Grothendieck polynomial evaluated when all $x$'s are set to $1$ and all $y$'s are set to $0$. In this notation Theorem \ref{thm:g-h} specialized at $q=1$ states that $\mathfrak{G}_w(\beta)=h(PD(w), \beta+1)$.
Note the similarity of the statements of Theorems \ref{h2} and \ref{thm:g-h} as a certain polynomial equaling the $h$-polynomial of a simplicial complex. Paired with Kirillov's observation in \cite[Proposition 3.1]{k2} that \begin{equation} \label{equal} Q_{P_n}(\beta)= \mathfrak{G}_{\pi}(\beta),\end{equation} for
the permutation $\pi=1\text{ } n \text{ }n-1\text{ } \cdots \text{ }2$ and the path graph $P_n=([n], \{(i, i+1)\mid i \in [n-1]\})$, we obtain that $h(PD(\pi), \beta)=h({\mathcal{C}}, \beta)$, where $\mathcal{C}$ is any unimodular triangulation of the flow polytope ${\mathcal{F}}_{\tilde{P_n}}$. This raises a natural question: can $PD({\pi})$ be realized geometrically as a triangulation $\mathcal{C}$ of the flow polytope ${\mathcal{F}}_{\tilde{P_n}}$? The answer is almost yes, as we explain in the next sections.
\section{On the relation of root and flow polytopes} \label{sec:r-f} This section explains the geometric reasons for root and flow polytopes to have the same subdivision algebras and in turn to possess dissections with identical descriptions via reduced forms \cite{prod, mm, h-poly1, h-poly2, root1, root2}. The simplest reason for the above would be if root and flow polytopes were equivalent. While this is not the case, the truth does not lie far from it, as we will see.
\subsection{Root polytopes.} In the terminology of \cite{p1}, a root polytope of type $A_{n}$ is the convex hull of the origin and some of the points $e_{ij}^-:=e_i-e_j$ for $1\leq i<j \leq n+1$, where $e_i$ denotes the $i^{th}$ coordinate vector in $\mathbb{R}^{n+1}$. A very special root polytope is the full root polytope $$\mathcal{P}(A_{n}^+)=\textrm{ConvHull}(0, e_{ij}^- \mid 1\leq i<j \leq n+1),$$ where $e_{ij}^-=e_i-e_j$. In this paper we restrict ourselves to a class of root polytopes including $\mathcal{P}(A_{n}^+)$, which have subdivision algebras \cite{root1}.
Let $G$ be an acyclic graph on the vertex set $[n+1]$. Define $$\mathcal{V}_G=\{e_{ij}^- \mid (i, j) \in E(G), i<j\}, \mbox{ a set of vectors associated to $G$;}$$
$$\mathcal{C}(G)=\langle \mathcal{V}_G \rangle :=\{\sum_{ e_{ij}^- \in \mathcal{V}_G}c_{ij} e_{ij}^- \mid c_{ij}\geq 0\}, \mbox{ the cone associated to $G$; and } $$
$$\overline{\mathcal{V}}_G=\Phi^+ \cap \mathcal{C}(G), \mbox{ all the positive roots of type $A_n$ contained in $\mathcal{C}(G)$}, $$
where $\Phi^+=\{e_{ij}^- \mid1\leq i<j \leq n+1\}$ is the set of positive roots of type $A_n$.
The root polytope $\mathcal{P}(G)$ associated to the acyclic graph $G$ is
\begin{equation} \label{eq11} \mathcal{P}(G)=\textrm{ConvHull}(0, e_{ij}^- \mid e_{ij}^- \in \overline{\mathcal{V}}_G).\end{equation} The root polytope $\mathcal{P}(G)$ associated to graph $G$ can also be defined as \begin{equation} \label{eq21} \mathcal{P}(G)=\mathcal{P}(A_n^+) \cap \mathcal{C}(G).\end{equation}
Note that $\mathcal{P}(A_{n}^+)=\mathcal{P}(P_{n+1})$ for the path graph $P_{n+1}$ on the vertex set $[n+1]$.
We can view reduced forms in the subdivision algebra in terms of graphs, as hinted at in the previous section.
The {\bf reduction rule for graphs:} Given a graph $G_0$ on the vertex set $[n+1]$ and $(i, j), (j, k) \in E(G_0)$ for some $i<j<k$, let $G_1, G_2, G_3$ be graphs on the vertex set $[n+1]$ with edge sets
\begin{eqnarray} \label{graphs} E(G_1)&=&E(G_0)\backslash \{(j, k)\} \cup \{(i, k)\}, \nonumber \\ E(G_2)&=&E(G_0)\backslash \{(i, j)\} \cup \{(i, k)\},\nonumber \\ E(G_3)&=&E(G_0)\backslash \{(i, j), (j, k)\} \cup \{(i, k)\}. \end{eqnarray}
We say that $G_0$ \textbf{reduces} to $G_1, G_2, G_3$ under the reduction rules defined by equations (\ref{graphs}).
The reason for the name subdivision algebra is the following key lemma appearing in \cite{root1}:
\begin{lemma} \cite{root1} \label{reduction_lemma} \textbf{(Reduction Lemma for Root Polytopes)} Given an acyclic graph $G_0$ with $d$ edges, let $(i, j), (j, k) \in E(G_0)$ for some $i<j<k$ and $G_1, G_2, G_3$ as described by equations (\ref{graphs}). Then $$\mathcal{P}(G_0)=\mathcal{P}(G_1) \cup \mathcal{P}(G_2)$$ where all polytopes $\mathcal{P}(G_0), \mathcal{P}(G_1), \mathcal{P}(G_2)$ are $d$-dimensional and $$\mathcal{P}(G_3)=\mathcal{P}(G_1) \cap \mathcal{P}(G_2) \mbox{ is $(d-1)$-dimensional. } $$ \end{lemma}
What the Reduction Lemma really says is that performing a reduction on an acyclic graph $G_0$ is the same as dissecting the $d$-dimensional polytope $\mathcal{P}(G_0)$ into two $d$-dimensional polytopes $\mathcal{P}(G_1)$ and $ \mathcal{P}(G_2)$, whose vertex sets are subsets of the vertex set of $\mathcal{P}(G_0)$, whose interiors are disjoint, whose union is $\mathcal{P}(G_0)$, and whose intersection is a facet of both. It is clear then that the reduced form can be seen as a dissection of the root polytope into simplices.
\subsection{Flow polytopes.} \label{subsec:fp} Now we define flow polytopes and explain the analogue of the Reduction Lemma for them.
Let $G$ be a loopless graph on the vertex set $[n+1]$, and let ${\rm in }(e)$ denote the smallest (initial) vertex of edge $e$ and ${\rm fin}(e)$ the biggest (final) vertex of edge $e$. Think of fluid flowing on the edges of $G$ from the smaller to the bigger vertices, so that the total fluid volume entering vertex $1$ is one and leaving vertex $n+1$ is one, and there is conservation of fluid at the intermediate vertices. Formally, a \textbf{flow} $f$ of size one on $G$ is a function $f: E \rightarrow \mathbb{R}_{\geq 0}$ from the edge set $E$ of $G$ to the set of nonnegative real numbers such that
$$1=\sum_{e \in E, {\rm in }(e)=1}f(e)= \sum_{e \in E, {\rm fin}(e)=n+1}f(e),$$
and for $2\leq i\leq n$
$$\sum_{e \in E, {\rm fin}(e)=i}f(e)= \sum_{e \in E, {\rm in }(e)=i}f(e).$$
The \textbf{flow polytope} ${\mathcal{F}}_G$ associated to the graph $G$ is the set of all flows $f: E \rightarrow \mathbb{R}_{\geq 0}$ of size one.
In this paper we restrict our attention to flow polytopes of certain augmented graphs $\tilde{G}=(V(G)\cup \{s,t\}, E(G)\cup \{(s,i), (i,t) \mid i \in V(G)\})$:
\begin{lemma} \cite{prod, mm} \label{red} \textbf{(Reduction Lemma for Flow Polytopes)} Given a graph $G_0$ on the vertex set $[n+1]$ and $(i, j), (j, k) \in E(G_0)$, for some $i<j<k$, let $G_1, G_2, G_3$ be as in equations (\ref{graphs}). Then
$${\mathcal{F}}_{{\tilde{G}_0}}={\mathcal{F}}_{{\tilde{G}_1}} \bigcup {\mathcal{F}}_{{\tilde{G}_2}},$$ where all polytopes ${\mathcal{F}}_{{\tilde{G}_0}},{\mathcal{F}}_{{\tilde{G}_1}}, {\mathcal{F}}_{{\tilde{G}_2}},$ are of the same dimension and $${\mathcal{F}}_{{\tilde{G}_3}}={\mathcal{F}}_{{\tilde{G}_1}} \cap{\mathcal{F}}_{{\tilde{G}_2}} \mbox{ is one dimension less. } $$ \end{lemma}
\subsection{Are root polytopes and flow polytopes the same?} Given an acyclic graph $G$ Lemmas \ref{reduction_lemma} and \ref{red} imply that we can dissect ${\mathcal{P}}(G)$ and ${\mathcal{F}}_{{\tilde{G}}}$ with identical procedures. Are then ${\mathcal{P}}(G)$ and ${\mathcal{F}}_{{\tilde{G}}}$ equivalent for acyclic graphs $G$?
Note that the dimension of ${\mathcal{P}}(G)$ is $|E(G)|$, while the dimension of ${\mathcal{F}}_{{\tilde{G}}}$ is $|E(G)|+|V(G)|-1$, so the polytopes cannot be identical. However, we show that ${\mathcal{F}}_{{\tilde{G}}}$ can be projected onto an $|E(G)|$-dimensional polytope ${\mathcal{S}}(G)$ that is equivalent to ${\mathcal{P}}(G)$. When we dissect ${\mathcal{P}}(G)$ and ${\mathcal{F}}_{{\tilde{G}}}$ in identical ways with the subdivision algebra, we obtain corresponding (identifiable) induced dissections on ${\mathcal{S}}(G)$ and ${\mathcal{P}}(G)$.
Recall the well-known characterization of the vertices of flow polytopes.
\begin{lemma} \label{vertices}\cite[Section 13.1a]{sch} The vertices of ${\mathcal{F}}_G$ are the unit flows on increasing paths going from the smallest to the largest vertex of $G$.
\end{lemma}
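Lemma \ref{vertices} makes it easy to list the vertices of small flow polytopes. The following Python sketch is an added illustration (the names are ours); to keep the ordering of Lemma \ref{vertices}, the source $s$ and sink $t$ of $\tilde{G}$ are encoded as $0$ and $n+1$, so that $s<1<\cdots<n<t$. We take $G=P_2$, the single edge $(1,2)$.

```python
def increasing_paths(edges, start, end):
    """All paths from start to end in a graph whose edges all go from a
    smaller to a larger vertex (so every path is automatically increasing)."""
    paths = []
    def walk(v, path):
        if v == end:
            paths.append(tuple(path))
            return
        for (a, b) in edges:
            if a == v:
                walk(b, path + [(a, b)])
    walk(start, [])
    return paths

# G = P_2; the augmentation G~ adds s and t with edges (s,i) and (i,t).
s, t = 0, 3                       # encode s as 0 and t as 3, so s < 1 < 2 < t
edges = [(s, 1), (s, 2), (1, t), (2, t), (1, 2)]
verts = increasing_paths(edges, s, t)
print(len(verts))   # 3
```

The three paths $s1t$, $s12t$ and $s2t$ give three vertices, so ${\mathcal{F}}_{\tilde{P_2}}$ is a triangle, consistent with its dimension $|E(\tilde{P_2})|-|V(\tilde{P_2})|+1=5-4+1=2$.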
The polytope ${\mathcal{F}}_{{\tilde{G}}}$ naturally lives in the space $\mathbb{R}^{|E(\tilde{G})|}$, with the coordinates corresponding to the edges of $\tilde{G}$. Denoting by $e_{(i,j)}$ the unit coordinate corresponding to the edge $(i,j) \in E(\tilde{G})$, we see that the vectors $e_{(i,j)}$, $(i,j) \in E(G)$, $e_{(s,i)},e_{(i,t)}$, for $i \in [n]$, are an orthonormal basis of $\mathbb{R}^{|E(\tilde{G})|}$. Projecting onto the subspace $W$ of $\mathbb{R}^{|E(\tilde{G})|}$ spanned by $e_{(i,j)}$, $(i,j) \in E(G)$, let the polytope ${\mathcal{S}}(G)$ be the image of ${\mathcal{F}}_{{\tilde{G}}}$. Denote the mentioned projection by $p$. The vertices of ${\mathcal{S}}(G)$ are $0$ and vertices of the form $e_{(i_1, i_2)}+e_{(i_2, i_3)}+\cdots+e_{(i_k, i_{k+1})}$, where $i_1<\cdots<i_{k+1}$, ${(i_1, i_2)}, {(i_2, i_3)},\ldots, {(i_k, i_{k+1})} \in E(G)$.
Define the map $f:W\rightarrow \mathbb{R}^n$ as follows: $f(e_{(i,j)})=e_i-e_j$, for $(i,j) \in E(G)$, and extend linearly. It follows by definition that the image of ${\mathcal{S}}(G)$ under $f$ is ${\mathcal{P}}(G)$. Since for an acyclic graph $G$ the vectors $e_i-e_j$, $(i,j) \in E(G)$, are linearly independent, we get that $f$ is an affine map which is a bijection onto ${\mathcal{P}}(G)$ when restricted to ${\mathcal{S}}(G)$. Thus, ${\mathcal{S}}(G)$ and ${\mathcal{P}}(G)$ are affinely (and thus combinatorially) equivalent polytopes.
Let $G_0$ be an acyclic graph, and let $G_1, G_2, G_3$ be as specified in \eqref{graphs}. Then a check shows that $f(p({\mathcal{F}}_{{\widetilde{G_i}}}))={\mathcal{P}}(G_i)$, for $i \in [3]$, and thus any dissection of ${\mathcal{F}}_{{\widetilde{G_0}}}$ that we obtain by repeated reductions as in Lemma \ref{red} yields, under the map $f \circ p$, a dissection of ${\mathcal{P}}(G_0)$ obtained by the same sequence of reductions as interpreted in Lemma \ref{reduction_lemma}.
The above considerations prove the following theorem, which relates root and flow polytopes. The maps $p$ and $f$ are as defined above.
\begin{theorem} \label{root-flow} The root polytope ${\mathcal{P}}(G)$ is equivalent to ${\mathcal{S}}(G)$, which is a projection of ${\mathcal{F}}_{{\tilde{G}}}$. Indeed, ${\mathcal{P}}(G)=f(p({\mathcal{F}}_{{\tilde{G}}}))$. Moreover, when the reductions \eqref{graphs} are performed on $G$ yielding dissections $\mathcal{D}_1$ and $\mathcal{D}_2$ of ${\mathcal{P}}(G)$ and ${\mathcal{F}}_{{\tilde{G}}}$, respectively, then $\mathcal{D}_1$ is the image of $\mathcal{D}_2$ under $f\circ p$. \end{theorem}
It is in the sense of Theorem \ref{root-flow} that root polytopes and flow polytopes of acyclic graphs are the same. Since root polytopes are lower dimensional by definition and in this paper we are only concerned with acyclic graphs, namely, the path graph, we will use root polytopes in the rest of the paper.
\section{Geometric realization of pipe dream complexes via root polytopes} \label{sec:pipe} The main theorem of this section is that the canonical triangulation of the vertex figure ${\mathcal{V}}(P_n)$ of ${\mathcal{P}}(P_n)$ at $0$ is a geometric realization of the pipe dream complex $PD({1\text{ } n \text{ }n-1\text{ } \cdots \text{ }2})$. The vertex figure of a polytope $P$ at vertex $v$ is the intersection of a hyperplane $\mathcal{H}$ with $P$, such that vertex $v$ is on one side of $\mathcal{H}$ and all the other vertices of $P$ are on the other side of $\mathcal{H}$. See \cite[p.54]{ziegler} for further details. We now explain the canonical triangulation of ${\mathcal{P}}(P_n)$; there is an analogous triangulation for all root (and flow) polytopes \cite{root1, h-poly2}, but since we are only concerned with ${\mathcal{P}}(P_n)$ in this section, we restrict our attention to this case. $PD({1\text{ } n \text{ }n-1\text{ } \cdots \text{ }2})$ has previously been realized via the classical associahedron \cite{assoc, cesar, subwordcluster}.
Recall that a graph $G$ on the vertex set $[n]$ is said to be {\bf noncrossing} if there are no vertices $i <j<k<l$ such that $(i, k)$ and $(j, l)$ are edges in $G$. A graph $G$ on the vertex set $[n]$ is said to be {\bf alternating} if there are no vertices $ i <j<k $ such that $(i, j)$ and $(j, k)$ are edges in $G$.
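These two conditions are straightforward to test by computer. The following Python sketch is an added illustration (the names are ours): it lists the noncrossing alternating spanning trees of $K_n$ for small $n$, checking connectivity and then the two forbidden patterns directly.

```python
from itertools import combinations

def is_connected(vertices, edges):
    """Naive union-find connectivity check for an undirected graph."""
    comp = {v: v for v in vertices}
    def find(v):
        while comp[v] != v:
            v = comp[v]
        return v
    for (a, b) in edges:
        comp[find(a)] = find(b)
    return len({find(v) for v in vertices}) == 1

def noncrossing_alternating_spanning_trees(n):
    trees = []
    all_edges = [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)]
    for tree in combinations(all_edges, n - 1):   # n-1 edges: tree iff connected
        if not is_connected(range(1, n + 1), tree):
            continue
        # alternating: no i < j < k with both (i,j) and (j,k) present
        if any(a[1] == b[0] for a in tree for b in tree):
            continue
        # noncrossing: no i < j < k < l with both (i,k) and (j,l) present
        if any(a[0] < b[0] < a[1] < b[1] for a in tree for b in tree):
            continue
        trees.append(tree)
    return trees

print(len(noncrossing_alternating_spanning_trees(4)))   # 5
```

For $n=4$ it finds $5$ trees, the Catalan number $C_3$, in line with the Catalan counts cited later in this section.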
\begin{theorem}\cite{GGP, root1} \label{path} Let $T_1, \ldots, T_k$ be all the noncrossing alternating spanning trees of $K_n$. Then ${\mathcal{P}}(T_1), \ldots, {\mathcal{P}}(T_k)$ are top dimensional simplices in a triangulation of ${\mathcal{P}}(P_n)$. Moreover, $${\mathcal{P}}(T_{i_1})\cap \cdots \cap {\mathcal{P}}(T_{i_l})={\mathcal{P}}(T_{i_1}\cap \cdots \cap T_{i_l}),$$ where $i_1,\ldots,i_l \in [k]$, and $T_{i_1}\cap \cdots \cap T_{i_l}=([n], \{(i,j)\mid (i,j) \in E(T_{i_1})\cap \cdots \cap E(T_{i_l})\})$.
\end{theorem}
The triangulation described in Theorem \ref{path} is called the \textbf{canonical triangulation} of ${\mathcal{P}}(P_n)$. Since all top dimensional simplices in it contain $0$, we see that ${\mathcal{V}}(P_n)$ has a triangulation indexed by the same noncrossing alternating spanning trees:
\begin{theorem} \label{v-path} Let $T_1, \ldots, T_k$ be all the noncrossing alternating spanning trees of $K_n$. Then ${\mathcal{P}}(T_1)\cap {\mathcal{V}}(P_n), \ldots, {\mathcal{P}}(T_k)\cap {\mathcal{V}}(P_n)$ are top dimensional simplices in a triangulation of ${\mathcal{V}}(P_n)$. Moreover, $${\mathcal{P}}(T_{i_1})\cap \cdots \cap {\mathcal{P}}(T_{i_l})\cap {\mathcal{V}}(P_n)={\mathcal{P}}(T_{i_1}\cap \cdots \cap T_{i_l})\cap {\mathcal{V}}(P_n),$$ where $i_1,\ldots,i_l \in [k]$, and $T_{i_1}\cap \cdots \cap T_{i_l}=([n], \{(i,j)\mid (i,j) \in E(T_{i_1})\cap \cdots \cap E(T_{i_l})\})$.
\end{theorem}
We call the triangulation described in Theorem \ref{v-path} the \textbf{canonical triangulation} of ${\mathcal{V}}(P_n)$. The following is the main theorem of this section.
\begin{theorem} \label{thm:gr} The canonical triangulation of ${\mathcal{V}}(P_n)$ is a geometric realization of the pipe dream complex $PD({1\text{ } n \text{ }n-1\text{ } \cdots \text{ }2})$.
\end{theorem}
\begin{figure}
\caption{The interior simplices of $PD({1432})$ with the pipe dreams that label them. The graphs obtained via the bijection $G$ can be seen in red juxtaposed on top of the pipe dreams (rotated by $45^\circ$). Note that the top dimensional simplices are indeed labeled by the noncrossing alternating spanning trees of $K_n$.}
\label{bij}
\end{figure}
Before proceeding to prove Theorem \ref{thm:gr} we note that a proof of it could be obtained using the previous realization of $PD({1\text{ } n \text{ }n-1\text{ } \cdots \text{ }2})$ via the associahedron. However, instead we will give a proof using root polytopes.
\noindent \textit{Proof of Theorem \ref{thm:gr}.} First note that the dimensions of both ${\mathcal{V}}(P_n)$ and $PD({1\text{ } n \text{ }n-1\text{ } \cdots \text{ }2})$ are $n-2$. Recall that
the top dimensional simplices of $PD({1\text{ } n \text{ }n-1\text{ } \cdots \text{ }2})$ are indexed by reduced pipe dreams of ${1\text{ } n \text{ }n-1\text{ } \cdots \text{ }2}$, whereas their intersections are indexed by nonreduced pipe dreams, in the following way. If we identify a pipe dream $P$ with its set of crosses, then the simplex at the intersection of the simplices labeled by pipe dreams $P_{i_1}, \ldots, P_{i_l}$ is labeled by the pipe dream $P_{i_1}\cup \cdots \cup P_{i_l}$, where in the latter we simply let the crosses be all the crosses in $P_{i_1}, \ldots, P_{i_l}$. See Figure \ref{fig:1432}.
We first show that the top dimensional simplices in $PD({1\text{ } n \text{ }n-1\text{ } \cdots \text{ }2})$ are in bijection with the top dimensional simplices in the canonical triangulation of ${\mathcal{V}}(P_n)$. Such a bijective map $G$ is easy to define. Given a reduced pipe dream $P$, let $$G(P)=([n], \{(i,j) \mid \mbox{there is an elbow in box } (n-j+1, i) \mbox{ in } P\}).$$ See Figure \ref{bij} for an example.
The map $G$ is clearly one-to-one. On the other hand we know that both reduced pipe dreams of ${1\text{ } n \text{ }n-1\text{ } \cdots \text{ }2}$ and noncrossing alternating spanning trees of $K_n$ are counted by the Catalan numbers \cite{woo, GGP}, thereby immediately yielding that $G$ is a bijection between the two sets.
Given that the simplex at the intersection of the simplices labeled by pipe dreams $P_{i_1}, \ldots, P_{i_l}$ is labeled by the pipe dream $P_{i_1}\cup \cdots \cup P_{i_l}$ (as explained above), and $${\mathcal{P}}(T_{i_1})\cap \cdots \cap {\mathcal{P}}(T_{i_l})\cap {\mathcal{V}}(P_n)={\mathcal{P}}(T_{i_1}\cap \cdots \cap T_{i_l})\cap {\mathcal{V}}(P_n),$$ where $i_1,\ldots,i_l \in [k]$, and $T_{i_1}\cap \cdots \cap T_{i_l}=([n], \{(i,j)\mid (i,j) \in E(T_{i_1})\cap \cdots \cap E(T_{i_l})\})$ (as in Theorem \ref{v-path}), we have that the bijection $G$ extends to the lower dimensional interior simplices of $PD({1\text{ } n \text{ }n-1\text{ } \cdots \text{ }2})$ and
${\mathcal{V}}(P_n)$. See Figure \ref{bij}. Moreover, the same map also extends to the boundary simplices in the canonical triangulation of ${\mathcal{V}}(P_n)$ and those in $PD({1\text{ } n \text{ }n-1\text{ } \cdots \text{ }2})$. Therefore, we can conclude that the canonical triangulation of ${\mathcal{V}}(P_n)$ is a geometric realization of $PD({1\text{ } n \text{ }n-1\text{ } \cdots \text{ }2})$.
\qed
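The bijection $G$ from the proof can be checked by computer for $w=1432$. The following Python sketch is our own added illustration; it assumes the reading conventions of Section \ref{sec:groth} (boxes of the staircase read row by row, right to left, with box $(i,j)$ carrying $s_{i+j-1}$), enumerates the reduced pipe dreams of $1432$ as cross sets whose Demazure product is $1432$, and maps the elbows of each through $G$.

```python
from itertools import combinations

n = 4
# Boxes (i, j) of the staircase with i + j <= n, listed in the reading
# order of the triangular word; box (i, j) carries the letter s_{i+j-1}.
boxes = [(1, 3), (1, 2), (1, 1), (2, 2), (2, 1), (3, 1)]

def demazure(word):
    """Demazure product of the letters of word acting on 1..n."""
    w = list(range(1, n + 1))
    for a in word:
        if w[a - 1] < w[a]:
            w[a - 1], w[a] = w[a], w[a - 1]
    return tuple(w)

target = (1, 4, 3, 2)     # w = 1 n n-1 ... 2 for n = 4, of length 3

trees = set()
for crosses in combinations(boxes, 3):               # reduced pipe dreams
    if demazure([i + j - 1 for (i, j) in crosses]) != target:
        continue
    elbows = [b for b in boxes if b not in crosses]
    # G sends an elbow in box (n-j+1, i) to the edge (i, j):
    trees.add(frozenset((c, n - r + 1) for (r, c) in elbows))

for T in sorted(map(sorted, trees)):
    print(T)
```

It outputs five trees, exactly the noncrossing alternating spanning trees of $K_4$, each containing the edge $(1,4)$, in agreement with Figure \ref{bij}.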
We remark that the $h$-vector of the canonical triangulation of ${\mathcal{P}}(P_n)$, and so also of $PD({1\text{ } n \text{ }n-1\text{ } \cdots \text{ }2})$, consists of the Narayana numbers; see \cite[Exercise 6.31b]{ec2}.
\end{document}
\begin{document}
\title{Rohlin actions of finite groups on the Razak-Jacelon algebra} \author{Norio Nawata} \address{Department of Educational Collaboration, Osaka Kyoiku University, 4-698-1 Asahigaoka, Kashiwara, Osaka, 582-8582, Japan} \email{nawata@cc.osaka-kyoiku.ac.jp} \keywords{Stably projectionless C$^*$-algebra; Rohlin property; Kirchberg-Phillips type theorem} \subjclass[2010]{Primary 46L55, Secondary 46L35; 46L40} \thanks{This work was supported by JSPS KAKENHI Grant Number 16K17614}
\begin{abstract} Let $A$ be a simple separable nuclear C$^*$-algebra with a unique tracial state and no unbounded traces, and let $\alpha$ be a strongly outer action of a finite group $G$ on $A$. In this paper, we show that $\alpha\otimes \mathrm{id}$ on $A\otimes\mathcal{W}$ has the Rohlin property, where $\mathcal{W}$ is the Razak-Jacelon algebra. Combining this result with the recent classification results and our previous result, we see that such actions are unique up to conjugacy. \end{abstract} \maketitle
\section{Introduction}
Let $\mathcal{O}_2$ be the Cuntz algebra generated by $2$ isometries. It is known that $\mathcal{O}_2$ is a simple separable unital nuclear purely infinite C$^*$-algebra, and is $KK$-equivalent to $\{0\}$. Kirchberg and Phillips showed in \cite{KP} that a simple separable unital nuclear C$^*$-algebra $B$ is isomorphic to $\mathcal{O}_2$ if and only if $B$ admits an asymptotically central inclusion of $\mathcal{O}_2$. In particular, if $A$ is a simple separable unital nuclear C$^*$-algebra, then $A\otimes\mathcal{O}_2$ is isomorphic to $\mathcal{O}_2$. It is known that $\mathcal{O}_2$ plays an important role in the classification of nuclear C$^*$-algebras (see, for example, \cite{G2} and \cite{Ror1}).
Let $\mathcal{W}$ be the Razak-Jacelon algebra studied in \cite{J}, which is a certain simple separable nuclear stably projectionless C$^*$-algebra having trivial $K$-groups, a unique tracial state and no unbounded traces. Note that $\mathcal{W}$ is $KK$-equivalent to $\{0\}$ and $\mathcal{O}_2$. Hence we may regard $\mathcal{W}$ as a stably finite analogue of $\mathcal{O}_2$. Combining Elliott, Gong, Lin and Niu's result \cite{EGLN} and Castillejos and Evington's result \cite{CE} (see also \cite{CETWW}), we see that if $A$ is a simple separable nuclear C$^*$-algebra with a unique tracial state and no unbounded traces, then $A\otimes\mathcal{W}$ is isomorphic to $\mathcal{W}$. We refer the reader to \cite{EGLN0}, \cite{EGLN} (see also \cite{EN} and \cite{GL}) and \cite{GL2} for recent progress in the classification of stably projectionless C$^*$-algebras.
In the theory of operator algebras, the classification of group actions is one of the most fundamental problems and has a long history. There exists a complete classification of actions of countable amenable groups on approximately finite dimensional (AFD) factors. Although there have been some successes in the classification of group actions on ``classifiable'' C$^*$-algebras, the classification of (outer) actions of countable amenable groups on such algebras is far from complete because of $K$-theoretical obstructions. We refer the reader to \cite{I} and the references given there for details and results on the classification of group actions on operator algebras. We shall review only some results that are directly related to this paper.
Connes \cite{C3} classified finite cyclic group actions on the AFD factor $\mathcal{R}_0$ of type II$_1$ up to conjugacy. More generally, Jones \cite{Jones} classified finite group actions on $\mathcal{R}_0$. In particular, outer actions of a finite group on $\mathcal{R}_0$ are unique up to conjugacy.
In \cite{I1}, Izumi introduced the Rohlin property of finite group actions on unital C$^*$-algebras and showed an equivariant version of the Kirchberg-Phillips type theorem for finite group actions on $\mathcal{O}_2$. Indeed, he characterized Rohlin actions on $\mathcal{O}_2$ by using the fixed point subalgebra of the central sequence C$^*$-algebra of $\mathcal{O}_2$ and showed that if $\alpha$ is an outer action of a finite group $G$ on a simple separable unital nuclear C$^*$-algebra $A$, then $\alpha\otimes\mathrm{id}$ on $A\otimes\mathcal{O}_2$ has the Rohlin property. In particular, such actions are unique up to conjugacy. Note that Izumi also showed that there exist uncountably many mutually non-conjugate outer actions of $\mathbb{Z}_2$ on $\mathcal{O}_2$. Also, Goldstein and Izumi obtained an equivariant Kirchberg-Phillips type result for finite group actions on $\mathcal{O}_{\infty}$ in \cite{GI}.
Remarkably, Szab\'o generalized Izumi's result to countable amenable group actions in \cite{Sza4}. He showed that countable amenable group outer actions on $\mathcal{O}_2$ that equivariantly absorb the trivial action on $\mathcal{O}_2$ are unique up to strong cocycle conjugacy. Note that Szab\'o considered more general settings and obtained results for strongly self-absorbing C$^*$-dynamical systems. See \cite{Sza3}, \cite{Sza1}, \cite{Sza2}, \cite{Sza4}, \cite{Sza5} and \cite{Sza6}.
In this paper, we shall consider an equivariant Kirchberg-Phillips type result for finite group actions on $\mathcal{W}$. Indeed, we shall show that if $\alpha$ is a strongly outer action of a finite group $G$ on a simple separable nuclear C$^*$-algebra $A$ with a unique tracial state and no unbounded traces, then $\alpha\otimes\mathrm{id}$ on $A\otimes\mathcal{W}$ has the Rohlin property (Theorem \ref{thm:main}). Since the author showed that Rohlin actions of a finite group on $\mathcal{W}$ are unique up to conjugacy in \cite{Na4}, we see that such actions are unique up to conjugacy by Elliott, Gong, Lin and Niu's result and Castillejos and Evington's result. Indeed, we obtain the following theorem.
\begin{mainthm} (Corollary \ref{cor:main}) \ \\ Let $A$ and $B$ be simple separable nuclear C$^*$-algebras with a unique tracial state and no unbounded traces, and let $\alpha$ and $\beta$ be strongly outer actions of a finite group $G$ on $A$ and $B$, respectively. Then $\alpha\otimes\mathrm{id}$ on $A\otimes\mathcal{W}$ is conjugate to $\beta\otimes\mathrm{id}$ on $B\otimes\mathcal{W}$. \end{mainthm}
Our main result (Theorem \ref{thm:main}) is shown by using a cohomology vanishing type result (Lemma \ref{lem:cohomology}). The proof of Lemma \ref{lem:cohomology} is based on Connes' $2\times 2$ matrix trick in \cite[Corollary 2.6]{C3}. In order to apply this trick, we need a comparison theory for projections in the fixed point subalgebra $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$ of the central sequence C$^*$-algebra of $A\otimes\mathcal{W}$. We obtain this as a corollary of a classification up to unitary equivalence of certain normal elements in $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$. This classification is based on arguments in \cite{Na3}, where the author classified certain unitary elements and projections in $F(\mathcal{W})$ up to unitary equivalence.
This paper is organized as follows. In Section \ref{sec:Pre}, we collect notations, definitions and some results. In Section \ref{sec:target} and Section \ref{sec:stable-uniqueness}, we show a variant of \cite[Corollary 3.8]{Na3}, which is the main technical tool in this paper. In particular, we introduce a (non-separable) C$^*$-algebra $\mathcal{B}^{\gamma}$, and show that $\mathcal{B}^{\gamma}$ has strict comparison (Proposition \ref{pro:main-section3}) in Section \ref{sec:target}. Note that $\mathcal{B}^{\gamma}$ is a target algebra of a (natural) homomorphism from $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}\otimes\mathcal{W}$. The proof of Proposition \ref{pro:main-section3} is essentially based on arguments in \cite{MS}, \cite{MS2} and \cite{MS3}. In particular, it is important to consider property (SI) and the weak Rohlin property. These concepts were introduced by Sato in his pioneering work \cite{Sa0} and \cite{Sa} (see also \cite{Kis1}). We refer the reader to \cite{Sa1} for recent progress on arguments of this type. Section \ref{sec:stable-uniqueness} is essentially based on arguments in \cite{EN} (see \cite[Section 3]{Na3}). In Section \ref{sec:normal}, we classify certain normal elements in $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$ up to unitary equivalence (Theorem \ref{thm:classification-normal}), and show a comparison theorem for certain projections in $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$ (Corollary \ref{cor:comparison}). In Section \ref{sec:main}, we prove the main result of this paper.
\section{Preliminaries}\label{sec:Pre}
In this section we shall collect notations, definitions and some results.
For a C$^*$-algebra $A$, let $A_{+}$ denote the set of positive elements in $A$ and $A_{+,1}$ the set of positive contractions in $A$. For $x,y\in A$, let $[x,y]$ be the commutator $xy-yx$. We denote by $K(H)$ and $M_{n^\infty}$ for $n\in\mathbb{N}$ the C$^*$-algebra of compact operators on a Hilbert space $H$ and the uniformly hyperfinite (UHF) algebra of type $n^{\infty}$, respectively.
\subsection{Approximate units and actions}
If $A$ is a separable C$^*$-algebra, then there exists a positive element $s\in A$ such that $sA$ is dense in $A$. Such a positive element $s$ is said to be {\it strictly positive} in $A$. For any $n\in\mathbb{N}$, define $f_n:[0,1]\to \mathbb{R}$ by $$ f_n(t):=\left\{\begin{array}{cl} 0 & t\in [0,\frac{1}{n+1}] \\ n(n+1)t-n & t\in (\frac{1}{n+1},\frac{1}{n}] \\ 1 & t\in (\frac{1}{n}, 1] \end{array} \right.. $$
If $s$ is a strictly positive element in $A$ and $\|s\|=1$, then $\{f_n(s)\}_{n\in\mathbb{N}}$ is an approximate unit for $A$ with $f_{n+1}(s)f_{n}(s)=f_{n}(s)$. Let $A^{\sim}$ denote the unitization of $A$, with the convention that $A^{\sim}=A$ if $A$ is unital. Let $M(A)$ be the \textit{multiplier algebra} of $A$, which is the largest unital C$^*$-algebra that contains $A$ as an essential ideal. If $\alpha$ is an automorphism of $A$, then $\alpha$ extends uniquely to an automorphism of $M(A)$. We denote it by the same symbol $\alpha$ for simplicity.
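The relation $f_{n+1}(s)f_{n}(s)=f_{n}(s)$ above can be verified pointwise: $f_n$ vanishes on $[0,\frac{1}{n+1}]$, while $f_{n+1}$ is identically $1$ on $(\frac{1}{n+1},1]$, so that $f_{n+1}f_n=f_n$ as functions on $[0,1]$. Hence, by the continuous functional calculus,
$$
f_{n+1}(s)f_{n}(s)=(f_{n+1}f_{n})(s)=f_{n}(s).
$$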
We denote by $\mathrm{Aut}(A)$ the automorphism group of $A$. An automorphism $\alpha$ of $A$ is said to be \textit{inner} if there exists a unitary element $u$ in $M(A)$ such that $\alpha (x)=\mathrm{Ad}(u)(x)=uxu^*$ for any $x\in A$. For a subset $F$ of $A$ and $\varepsilon >0$, we say a completely positive (c.p.) map $\varphi :A\to B$ is \textit{$(F,\varepsilon)$-multiplicative} if $$
\| \varphi (xy) - \varphi (x)\varphi(y) \| < \varepsilon $$ for any $x,y\in F$.
An \textit{action} $\alpha$ of a discrete group $G$ on $A$ is a homomorphism from $G$ to $\mathrm{Aut}(A)$. We say that $\alpha$ is \textit{outer} if $\alpha_g$ is not inner for any $g\in G\setminus \{\iota \}$ where $\iota$ is the identity of $G$. An $\alpha$-\textit{cocycle} is a map $w$ from $G$ to the unitary group of $M(A)$ such that $w(gh)=w(g)\alpha_{g}(w(h))$ for any $g,h\in G$. We say that an $\alpha$-cocycle $w$ is a \textit{coboundary} if there exists a unitary element $v$ in $M(A)$ such that $w(g)=v\alpha_g(v^*)$ for any $g\in G$. For two $G$-actions $\alpha$ on $A$ and $\beta$ on $B$, we say that $\alpha$ and $\beta$ are \textit{conjugate} if there exists an isomorphism $\theta$ from $A$ onto $B$ such that $\theta \circ \alpha_g=\beta_{g}\circ \theta$ for any $g\in G$. We denote by $A^{\alpha}$ the fixed point algebra.
Every tracial state $\tau$ on $A$ extends uniquely to a tracial state on $M(A)$. We denote it by the same symbol $\tau$ for simplicity. Let $(\pi_{\tau}, H_{\tau})$ be the Gelfand-Naimark-Segal (GNS) representation of $A$ associated with $\tau$. Then $\tau$ extends uniquely to a normal tracial state $\tilde{\tau}$ on $\pi_{\tau} (A)^{''}$. If $\alpha$ is an automorphism of $A$ such that $\tau \circ \alpha =\tau$, then $\alpha$ extends uniquely to an automorphism $\tilde{\alpha}$ of $\pi_{\tau} (A)^{''}$. Moreover if $\alpha$ is an action of $G$ on $A$ such that $\tau \circ \alpha_g =\tau$ for any $g\in G$, then $\alpha$ extends uniquely to a von Neumann algebraic action $\tilde{\alpha}$ on $\pi_{\tau}(A)^{''}$. We say that an action $\alpha$ of $G$ on a C$^*$-algebra $A$ with a unique tracial state $\tau$ is \textit{strongly outer} if $\tilde{\alpha}_g$ is not inner in $\pi_{\tau}(A)^{''}$ for any $g\in G\setminus \{\iota\}$.
\subsection{Kirchberg's central sequence C$^*$-algebras} We shall recall Kirchberg's central sequence C$^*$-algebras in \cite{Kir2} (see also \cite[Section 5]{Na2} and \cite[Section 2.2]{Na3}). Fix a free ultrafilter $\omega$ on $\mathbb{N}$. For a C$^*$-algebra $A$, put $$ c_{\omega}(A):=\{\{x_n\}_{n\in\mathbb{N}}\in \ell^{\infty}(\mathbb{N}, A)\;
|\; \lim_{n \to \omega}\| x_n\| =0 \}, \; A^{\omega}:=\ell^{\infty}(\mathbb{N}, A)/c_{\omega}(A). $$ For a sequence $\{x_n\}_{n\in\mathbb{N}}$ in $\ell^{\infty}(\mathbb{N}, A)$, we denote by $(x_n)_n$ its image in $A^{\omega}$. Let $B$ be a C$^*$-subalgebra of $A$. We identify $A$ and $B$ with the C$^*$-subalgebras of $A^\omega$ consisting of equivalence classes of constant sequences. Set $$
A_{\omega}:=A^{\omega}\cap A^{\prime},\; \mathrm{Ann}(B,A^{\omega}):=\{(x_n)_n\in A^{\omega}\cap B^{\prime}\; |\; (x_n)_nb =0 \;\mathrm{for}\;\mathrm{any}\; b\in B \}. $$ Then $\mathrm{Ann}(B,A^{\omega})$ is a closed ideal of $A^{\omega}\cap B^{\prime}$. Define a \textit{central sequence C$^*$-algebra} $F(A)$ of $A$ by $$ F(A):=A_{\omega}/\mathrm{Ann}(A,A^{\omega}). $$ If $\{h_n\}_{n\in\mathbb{N}}$ is a countable approximate unit for $A$, then $[(h_n)_n]$ is a unit in $F(A)$. It can be easily checked that $F(A)$ is isomorphic to $M(A)^\omega\cap A^{\prime}/\mathrm{Ann}(A,M(A)^{\omega})$ and ${A^{\sim}}_{\omega}/ \mathrm{Ann}(A,(A^{\sim})^{\omega})$. If $\alpha$ is an automorphism of $A$, $\alpha$ induces natural automorphisms of $A^{\omega}$, $A_{\omega}$ and $F(A)$. We denote them by the same symbol $\alpha$ for simplicity. For a tracial state $\tau$ on $A$, define $\tau_{\omega}([(x_n)_n]):=\lim_{n\to\omega}\tau (x_n)$. Then $\tau_{\omega}$ is a well defined tracial state on $F(A)$ by \cite[Proposition 2.1]{Na3}.
\subsection{Razak-Jacelon algebra}
Let $\mathcal{W}$ be the Razak-Jacelon algebra studied in \cite{J}, which is a simple separable nuclear C$^*$-algebra with a unique tracial state and no unbounded traces, and is $KK$-equivalent to $\{0\}$. The Razak-Jacelon algebra $\mathcal{W}$ is constructed as an inductive limit C$^*$-algebra of Razak's building blocks in \cite{Raz}. Let $S_1$ and $S_2$ be the generators of the Cuntz algebra $\mathcal{O}_2$. For $\lambda_1,\lambda_2\in\mathbb{R}$, define a flow $\gamma$ on $\mathcal{O}_2$ by $\gamma_t (S_j)=e^{it\lambda_{j}}S_j$. Kishimoto and Kumjian showed in \cite{KK1} and \cite{KK2} that if $\lambda_{1}$ and $\lambda_{2}$ are both non-zero, of the same sign, and generate $\mathbb{R}$ as a closed subgroup, then $\mathcal{O}_2\rtimes_{\gamma}\mathbb{R}$ is a simple stably projectionless C$^*$-algebra with a unique (up to scalar multiple) trace. Robert showed in \cite{Rob} that $\mathcal{W}\otimes K(\ell^2(\mathbb{N}))$ is isomorphic to $\mathcal{O}_2\rtimes_{\gamma}\mathbb{R}$ for some $\lambda_1$ and $\lambda_2$. (See also \cite{Dean}.) Razak's classification theorem \cite{Raz} implies that $\mathcal{W}$ is UHF-stable, and hence $\mathcal{W}$ is $\mathcal{Z}$-stable.
\subsection{Corollaries of Matui and Sato's results}
We shall collect some corollaries of Matui and Sato's results in \cite{MS1} and \cite{MS2}. Although they assume that C$^*$-algebras are unital, their arguments for the following results work for non-unital C$^*$-algebras by suitable modifications (see \cite{Na2} and \cite{Na3}).
First, we recall the definition of the weak Rohlin property. See \cite[Definition 2.7]{MS1} and \cite[Definition 2.5]{MS2}. Note that Matui and Sato define the weak Rohlin property in more general settings.
\begin{Def} Let $A$ be a simple C$^*$-algebra with a unique tracial state $\tau$, and let $\alpha$ be an action of a finite group $G$ on $A$. We say that $\alpha$ has the \textit{weak Rohlin property} if there exists a positive contraction $f$ in $F(A)$ such that $$
\alpha_g(f)\alpha_h(f)=0, \quad \tau_{\omega}(f)= \frac{1}{|G|} $$ for any $g,h\in G$ with $g\neq h$. \end{Def}
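We note two immediate consequences of the definition. Since $\tau$ is the unique tracial state on $A$, we have $\tau\circ\alpha_g=\tau$, and hence $\tau_{\omega}\circ\alpha_g=\tau_{\omega}$ for any $g\in G$. Moreover, since the elements $\{\alpha_g(f)\}_{g\in G}$ are pairwise orthogonal positive contractions, $p:=\sum_{g\in G}\alpha_g(f)$ is an $\alpha$-invariant positive contraction in $F(A)$ with
$$
\tau_{\omega}(p)=\sum_{g\in G}\tau_{\omega}(\alpha_g(f))=|G|\cdot\frac{1}{|G|}=1.
$$
In this sense the translates of $f$ form a partition of unity ``in trace''.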
Essentially the same proof as \cite[Theorem 3.4]{MS1} shows the following theorem. See also the proof of \cite[Lemma 6.2]{Na3} and \cite[Theorem 3.6]{MS2}.
\begin{thm}\label{thm:weak-rohlin} Let $A$ be a simple separable nuclear C$^*$-algebra with a unique tracial state and no unbounded traces, and let $\alpha$ be an action of a finite group $G$ on $A$. Then $\alpha$ has the weak Rohlin property if and only if $\alpha$ is strongly outer. \end{thm}
Essentially the same proofs as \cite[Lemma 4.7]{MS2} and \cite[Proposition 4.8]{MS2} show the following proposition. See also \cite[Proposition 3.3]{MS3} and \cite[Theorem 4.1]{BBSTWW}. Note that if $A$ is a simple separable nuclear C$^*$-algebra with a unique tracial state and no unbounded traces, then $A\otimes\mathcal{W}$ has property (SI) since $\mathcal{W}$ is $\mathcal{Z}$-stable (see \cite{Ror}, \cite{MS} and \cite{Na2}).
\begin{pro}\label{thm:Matui-Sato} Let $A$ be a simple separable nuclear C$^*$-algebra with a unique tracial state and no unbounded traces, and let $\alpha$ be a strongly outer action of a finite group $G$ on $A$. Then: \ \\ (i) $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$ has a unique tracial state $\tau_{\omega}$. \ \\ (ii) If $a$ and $b$ are positive elements in $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$ satisfying $d_{\tau_{\omega}}(a)< d_{\tau_{\omega}} (b)$, then there exists an element $r\in F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$ such that $r^*br=a$. \end{pro}
\subsection{Rohlin property and properties of $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$}
We shall recall some results in \cite{Na4} (see also \cite{GS}) and \cite{Na3}.
\begin{Def} (cf. \cite[Definition 3.1]{I1} and \cite[Definition 3.1]{Na4}). An action $\alpha$ of a finite group $G$ on a separable C$^*$-algebra $A$ is said to have the \textit{Rohlin property} if there exists a partition of unity $\{p_{g}\}_{g\in G}\subset F(A)$ consisting of projections satisfying $$ \alpha_{g} (p_{h}) =p_{gh}, $$ for any $g,h\in G$. \end{Def}
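For example, when $G=\mathbb{Z}_2=\{\iota, g\}$, the Rohlin property amounts to the existence of a single projection $p:=p_{\iota}$ in $F(A)$ such that
$$
\alpha_{g}(p)=1-p,
$$
since $p_{\iota}+p_{g}=1$ and $\alpha_{g}(p_{\iota})=p_{g}$.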
For any finite group $G$, there exists an action of $G$ on $\mathcal{W}$ with the Rohlin property by \cite[Example 3.2]{Na4}. The following theorem is \cite[Corollary 3.7]{Na4}.
\begin{thm}\label{thm:classification} Let $\alpha$ and $\beta$ be actions of a finite group $G$ on $\mathcal{W}$ with the Rohlin property. Then $\alpha$ and $\beta$ are conjugate. \end{thm}
Note that there exists a strongly outer action $\alpha$ of $\mathbb{Z}_2$ on $\mathcal{W}$ such that $\alpha$ does not have the Rohlin property (see \cite[Example 5.6]{Na4}).
Since we can regard $F(\mathcal{W})$ as a unital C$^*$-subalgebra of $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$, we obtain the following proposition by \cite[Proposition 4.2]{Na3} and Proposition \ref{thm:Matui-Sato}.
\begin{pro}\label{pro:key-pro} Let $\tau_{\omega}$ be the unique tracial state on $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$. \ \\ (i) For any $N\in\mathbb{N}$, there exists a unital homomorphism from $M_N(\mathbb{C})$ to $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$. \ \\ (ii) For any $\theta\in [0,1]$, there exists a non-zero projection $p$ in $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$ such that $\tau_{\omega}(p)=\theta$. \ \\ (iii) Let $h$ be a positive element in $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$ such that $d_{\tau_{\omega}}(h)>0$. For any $\theta \in [0, d_{\tau_{\omega}}(h))$, there exists a non-zero projection $p$ in $\overline{hF(A\otimes\mathcal{W})^{\alpha\otimes \mathrm{id}}h}$ such that $\tau_{\omega}(p)=\theta$. \end{pro}
Using the proposition above instead of \cite[Proposition 4.2]{Na3}, the same arguments as in \cite[Section 4]{Na3} show the following proposition.
\begin{pro}\label{pro:MvN-u} (cf. \cite[Proposition 4.8]{Na3}). Let $p$ and $q$ be projections in $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$ such that $\tau_{\omega} (p)<1$ where $\tau_{\omega}$ is the unique tracial state on $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$. Then $p$ and $q$ are Murray-von Neumann equivalent if and only if $p$ and $q$ are unitarily equivalent. \end{pro}
\section{Target algebra}\label{sec:target}
In the rest of this paper, we assume that $A$ is a simple separable nuclear C$^*$-algebra with a unique tracial state $\tau_A$ and no unbounded traces, and $\alpha$ is a strongly outer action of a finite group $G$ on $A$. Define an action $\gamma$ on $A\otimes\mathcal{W}$ by $\gamma:= \alpha\otimes\mathrm{id}$. Let $\tau_{\mathcal{W}}$ denote the unique tracial state on $\mathcal{W}$, and let $\tau:=\tau_A\otimes\tau_{\mathcal{W}}$ on $A\otimes\mathcal{W}$. For any $a\in A$ and $b\in \mathcal{W}$, we regard $a\otimes 1_{\mathcal{W}^{\sim}}$ and $1_{A^{\sim}}\otimes b$ as elements in $M(A\otimes\mathcal{W})$. Put $$
\mathcal{A}:= \{(x_n)_n\in (A\otimes\mathcal{W})^{\omega}\; |\; ([x_n ,a\otimes 1_{\mathcal{W}^{\sim}}])_n=0 \text{ for any }a\in A\} $$ and $$
\mathcal{I}:= \{(x_n)_n\in\mathcal{A} \; |\; (x_n (a\otimes 1_{\mathcal{W}^{\sim}}))_n=0 \text{ for any }a\in A \}. $$ Then $\mathcal{I}$ is a closed ideal of $\mathcal{A}$, and we define $\mathcal{B}:= \mathcal{A}/\mathcal{I}$. Note that for any $[(x_n)_n]\in \mathcal{B}$, $$
\| [(x_n)_n] \| = \sup_{a\in A_{+,1}} \lim_{n\to \omega} \|x_n(a\otimes 1_{\mathcal{W}^{\sim}}) \|. $$
Indeed, let $\| [(x_n)_n] \|^{\prime}:=\sup_{a\in A_{+,1}}\lim_{n\to \omega} \|x_n
(a\otimes 1_{\mathcal{W}^{\sim}}) \|$ for any $[(x_n)_n]\in \mathcal{B}$.
Then it can be easily checked that $\|\cdot\|^{\prime}$ is a well defined C$^*$-norm on
$\mathcal{B}$. By the uniqueness of the C$^*$-norm, $\| [(x_n)_n] \|= \| [(x_n)_n] \|^{\prime}$ for any $[(x_n)_n] \in\mathcal{B}$. The action $\gamma$ on $A\otimes\mathcal{W}$ induces a natural action on $\mathcal{B}$. We denote it by the same symbol $\gamma$ for simplicity. In this section we shall consider properties of the fixed point algebra $\mathcal{B}^{\gamma}$.
Consider the GNS representation $(\pi_{\tau},H_{\tau})$ of $A\otimes\mathcal{W}$ associated with $\tau$. Note that $\pi_{\tau}$ extends to a representation $\overline{\pi}_{\tau}$ of $M(A\otimes\mathcal{W})$ on $H_{\tau}$ and $\overline{\pi}_{\tau}(M(A\otimes\mathcal{W}))\subset \pi_{\tau}(A\otimes\mathcal{W})''$ (see, for example, \cite[3.12]{Ped2}). Put $$ M:= \ell^{\infty}(\mathbb{N}, \pi_{\tau}(A\otimes\mathcal{W})'')/\{\{x_n\}_{n\in\mathbb{N}}
\; |\; \lim_{n\to\omega}\tilde{\tau} (x_n^*x_n)=0\}, $$ and define a homomorphism $\Pi$ from $A$ to $M$ by $\Pi (a):= (\overline{\pi}_{\tau}(a\otimes 1_{\mathcal{W}^{\sim}}))_n$. Note that $M$ is a von Neumann algebraic ultrapower of $\pi_{\tau}(A\otimes\mathcal{W})''$. Since $\tau=\tau_A\otimes\tau_\mathcal{W}$, $\pi_{\tau}(A\otimes\mathcal{W})''$ is isomorphic to $\pi_{\tau_A}(A)''\bar{\otimes}\pi_{\tau_\mathcal{W}}(\mathcal{W})''$. Moreover, $\pi_{\tau}(A\otimes\mathcal{W})''$, $\pi_{\tau_A}(A)''$ and $\pi_{\tau_\mathcal{W}}(\mathcal{W})''$ are isomorphic to the AFD II$_1$ factor $\mathcal{R}_0$. Set $$ \mathcal{M}:= M\cap \Pi (A)'. $$ It is easy to see that $\mathcal{M}$ is isomorphic to $(\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)^{\omega}\cap (\mathcal{R}_0\bar{\otimes}\mathbb{C})'$ where $(\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)^{\omega}$ is the von Neumann algebraic ultrapower of $\mathcal{R}_0\bar{\otimes}\mathcal{R}_0$.
\begin{pro}\label{pro:factor} With notation as above, $\mathcal{M}$ is a factor of type II$_1$. \end{pro} \begin{proof} Let $\{N_n\}_{n=1}^\infty$ be an increasing sequence of finite-dimensional subfactors such that $\mathcal{R}_{0}=(\bigcup_{n=1}^\infty N_n)''$. Since $(\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)\cap (N_n\bar{\otimes}\mathbb{C})' =(\mathcal{R}_0\cap N_n')\bar{\otimes}\mathcal{R}_0$ is a factor of type II$_1$, the same proof as in \cite[Theorem XIV.4.18]{Tak} shows this proposition. Indeed, let $(a_n)_n$ be an element in $\mathcal{M}\setminus \mathbb{C}1_{\mathcal{M}}$. In the same way as in \cite[Theorem XIV.4.18]{Tak}, we may assume that
$\tilde{\tau} (a_n)=0$ for any $n\in\mathbb{N}$ and $\lim_{n\to\infty} \| a_n\|_2>0$
where $\| a_n\|_2= \tilde{\tau}(a_n^*a_n)^{1/2}$. Using the conditional expectation $\mathcal{E}_n$ from $\mathcal{R}_0\bar{\otimes}\mathcal{R}_0$ onto $(\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)\cap (N_n\bar{\otimes}\mathbb{C})'$,
we can choose $\{k_n\; |\; n\in\mathbb{N}\}\in \omega$ and $b_{k_n}\in (\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)\cap (N_n\bar{\otimes}\mathbb{C})'$
such that $\tilde{\tau}(b_{k_{n}})=0$ and $\| b_{k_{n}}-a_{k_{n}}\|_2\leq 1/2^{n}$ for any $n\in\mathbb{N}$. Since $\tilde{\tau}$ is the unique tracial state on $(\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)\cap (N_n\bar{\otimes}\mathbb{C})'$, $0=\tilde{\tau}(b_{k_n})\in \overline{\mathrm{co}}
\{ ub_{k_n}u^*\; |\; u\in (\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)\cap (N_n\bar{\otimes}\mathbb{C})', \; \mathrm{unitary}\}$. Therefore there exists a unitary element
$u_n\in (\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)\cap (N_n\bar{\otimes}\mathbb{C})'$ such that
$\| b_{k_{n}}- u_n b_{k_n}u_n^*\|_2\geq \| b_{k_n}\|_2/2$ for any $n\in\mathbb{N}$.
Then we have $(u_n)_n\in \mathcal{M}$ and $\limsup_{n\in\mathbb{N}}\|a_{k_{n}}-u_na_{k_{n}}u_n^*\|_2>0$. This shows that $(a_n)_n$ cannot be in the center of $\mathcal{M}$.
\end{proof}
The action $\tilde{\gamma}=\tilde{\alpha}\otimes \mathrm{id}$ on $\pi_{\tau}(A\otimes\mathcal{W})''\cong \pi_{\tau_A}(A)''\bar{\otimes}\pi_{\tau_\mathcal{W}}(\mathcal{W})''$ induces an action on $\mathcal{M}$. We denote it by the same symbol $\tilde{\gamma}$ for simplicity. The following lemma is essentially based on \cite[Proposition 2.1.2]{C2}.
\begin{lem}\label{lem:central-trivial} The action $\tilde{\gamma}$ on $\mathcal{M}$ is outer. \end{lem} \begin{proof} It is enough to show that for any element $(u_n)_n$ in $(\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)^{\omega}\cap(\mathcal{R}_0\bar{\otimes}\mathbb{C})'$, there exists an element $(x_n)_n$ in $(\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)^{\omega}\cap(\mathcal{R}_0\bar{\otimes}\mathbb{C})'$ such that $(\tilde{\gamma}(x_n))_n\neq (x_n)_n$ and $[(x_n)_n, (u_n)_n]=0$.
Let $(u_n)_n$ be an element in $(\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)^{\omega}\cap (\mathcal{R}_0\bar{\otimes}\mathbb{C})'$. By \cite[Theorem XIV.4.16]{Tak}, there exists an element $(a_n)_n$ in $\mathcal{R}_0^{\omega}\cap \mathcal{R}_0'$ such that $(\tilde{\alpha}(a_n))_n\neq (a_n)_n$ because $\tilde{\alpha}$ is outer and $\mathcal{R}_0$ is the AFD II$_1$ factor. Put $(x_n)_n:= (a_n\otimes 1_{\mathcal{R}_0})_n$ in $(\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)^{\omega}$. Then $(\tilde{\gamma}(x_n))_n\neq (x_n)_n$ and $[(x_n)_n ,y]=0$ for any $y\in \mathcal{R}_0 \bar{\otimes}\mathcal{R}_0$. Taking a suitable subsequence of $(x_n)_n$, we obtain the conclusion. \end{proof}
By Proposition \ref{pro:factor} and Lemma \ref{lem:central-trivial}, we obtain the following proposition.
\begin{pro}\label{pro:fixed-factor} The fixed point algebra $\mathcal{M}^{\tilde{\gamma}}$ is a factor of type II$_1$. \end{pro}
Define a homomorphism $\Phi$ from $M(A\otimes\mathcal{W})^{\omega}$ to $M$ by $\Phi ((x_n)_n):= (\overline{\pi}_{\tau}(x_n))_n$.
By Kaplansky's density theorem, we see that $\Phi |_{(A\otimes\mathcal{W})^{\omega}}$ is surjective. It is easy to see that $\Phi$ maps $\mathcal{A}$ into $\mathcal{M}$. The following proposition is essentially based on \cite[Theorem 3.3]{KR} and \cite[Theorem 3.1]{MS3}.
\begin{pro}
The restriction $\Phi|_{\mathcal{A}} : \mathcal{A}\to \mathcal{M}$ is surjective. \end{pro} \begin{proof}
Let $x$ be a contraction in $\mathcal{M}$. Since $\Phi |_{(A\otimes\mathcal{W})^{\omega}}$ is surjective, there exists a contraction $(x_n)_n$ in $(A\otimes\mathcal{W})^{\omega}$ such that $\Phi ((x_n)_n)=x$. Let $D$ be a C$^*$-subalgebra of $M(A\otimes\mathcal{W})^{\omega}$ generated by
$(x_n)_n$ and $\{ a\otimes1_{\mathcal{W}^{\sim}} \; |\; a\in A\} $, and
put $I:=\mathrm{ker}\; \Phi|_{D}$. Then the rest of the proof is the same as the proof of \cite[Theorem 3.1]{MS3}. Indeed, let $\{e_{k}\}_{k\in\mathbb{N}}$ be an approximate unit for $I$ which is quasicentral for $D$. (Note that $D$ is separable.) Since $[(x_n)_n, a\otimes1_{\mathcal{W}^{\sim}}]\in I$, we have \begin{align*} 0
& =\lim_{k\to\infty} \| (1-e_{k})[(x_n)_n, a\otimes1_{\mathcal{W}^{\sim}}] (1-e_{k})\| \\
& =\lim_{k\to\infty} \|[(1-e_{k})(x_n)_n(1-e_{k}), a\otimes 1_{\mathcal{W}^{\sim}}]\| \end{align*} for any $a\in A$. Then we obtain the conclusion by usual arguments (see the proof of \cite[Theorem 3.3]{KR} and \cite[Theorem 3.1]{MS3}). \end{proof}
Let $\{h_n\}_{n\in\mathbb{N}}$ be an approximate unit for $A$. Since $\lim_{n\to \infty} \tau (h_n\otimes 1_{\mathcal{W}^{\sim}})=1$, an argument similar to that in the proof of \cite[Proposition 2.1]{Na3} shows
$\mathcal{I}\subset \mathrm{ker}\; \Phi|_{\mathcal{A}}$.
Therefore $\Phi|_{\mathcal{A}}$ induces a surjective homomorphism $\varrho$ from $\mathcal{B}$ to $\mathcal{M}$. Since $\gamma$ is an action of a finite group, it is easy to show the following proposition.
\begin{pro}\label{pro:surjective}
The restriction $\varrho|_{\mathcal{B}^{\gamma}}: \mathcal{B}^{\gamma} \to \mathcal{M}^{\tilde{\gamma}}$ is surjective. \end{pro}
The following lemma is essentially based on \cite[Lemma 3.2]{MS3}. It may be interpreted as saying that the homomorphism $a\mapsto a\otimes 1_{\mathcal{W}^{\sim}}$ from $A$ to $M(A\otimes\mathcal{W})^{\omega}$ has ``property (SI) with respect to $A\otimes\mathcal{W}$''.
\begin{lem}\label{lem:A-SI} Let $(x_n)_n$ and $(y_n)_n$ be positive contractions in $\mathcal{A}$ such that $$ \lim_{n\to\omega} \tau (x_n)=0 \quad \text{and} \quad \inf_{m\in\mathbb{N}}\lim_{n\to \omega} \tau (y_n^m)>0. $$ Then there exists an element $(s_n)_n$ in $\mathcal{A}$ such that $(s_n^*s_n)_n=(x_n)_n$ and $(y_ns_n)_n=(s_n)_n$. \end{lem} \begin{proof} Arguments similar to those in the proofs of \cite[Lemma 3.2]{MS3} and \cite[Theorem 1.1]{MS}, with the modifications in \cite[Section 5]{Na2}, show this lemma. Indeed, let $\varphi$ be a pure state on $A$. We can uniquely extend $\varphi$ to a pure state $\tilde{\varphi}$ on $A^{\sim}$. Since we may assume that $A$ is a (separable simple) non-type I C$^*$-algebra, $K(H_{\tilde{\varphi}})\cap \pi_{\tilde{\varphi}}(A^{\sim})=\{0\}$. Therefore \cite[Proposition 5.9]{KR} implies that the identity map on $A^{\sim}$ can be approximated in the pointwise norm topology by a completely positive map $\psi$ of the form $$ \psi (a)= \sum_{i,j=1}\tilde{\varphi}(d_i^*ad_j)c_i^*c_j, \quad a\in A^{\sim}, $$ where $c_i,d_i\in A^{\sim}$. Note that $\sum_{i,j=1}\tilde{\varphi}(d_i^*ad_j)(c_i^*c_j\otimes 1_{\mathcal{W}^{\sim}})x_n$ is an element in $A\otimes\mathcal{W}$. Since $A\otimes\mathcal{W}$ has strict comparison, an argument similar to that in the proof of \cite[Lemma 3.2]{MS3} (we need to use \cite[Lemma 5.7]{Na2}) shows that there exists a sequence $(s_n)_n$ of elements of $A\otimes\mathcal{W}$ such that $(f_ns_n)_n=(s_n)_n$ and $(s_n^*(a\otimes 1_{\mathcal{W}^{\sim}})s_n)_n= ((a\otimes 1_{\mathcal{W}^{\sim}})e_n)_n$ in $(A\otimes\mathcal{W})^{\omega}$ for any $a\in A^{\sim}$. Therefore we obtain the conclusion (see \cite[Remark 5.5]{Na2}). \end{proof}
For any $[(x_n)_n]\in\mathcal{B}$, let $\tau_{\mathcal{B}}([(x_n)_n]):=\lim_{n\to\omega}\tau (x_n)$. By an argument similar to that in the proof of \cite[Proposition 2.1]{Na3}, $\tau_{\mathcal{B}}$ is a well-defined tracial state on $\mathcal{B}$. The following proposition is essentially based on \cite[Proposition 4.5]{MS2}. See also the proof of \cite[Theorem 4.7]{MS1}.
\begin{pro}\label{pro:target-si} Let $x$ and $y$ be positive contractions in $\mathcal{B}^{\gamma}$ such that $$ \tau_{\mathcal{B}}(x)=0 \quad \text{and} \quad \inf_{m\in\mathbb{N}}\tau_{\mathcal{B}}(y^m)>0. $$ Then there exists an element $s$ in $\mathcal{B}^{\gamma}$ such that $s^*s=x$ and $ys=s$. \end{pro} \begin{proof} Let $(x_n)_n$ and $(y_n)_n$ be positive contractions in $\mathcal{A}$ such that $x=[(x_n)_n]$ and $y=[(y_n)_n]$. Then we have $$ (\gamma_g(x_n)-x_n)_n(a\otimes 1_{\mathcal{W}^{\sim}})=0 \quad \text{and} \quad (\gamma_g(y_n)-y_n)_n(a\otimes 1_{\mathcal{W}^{\sim}})=0 $$ for any $a\in A$ and $g\in G$. Since $\alpha$ is strongly outer, Theorem \ref{thm:weak-rohlin} implies that there exists a positive contraction $(f_n)_n$ in $A_{\omega}$ such that $$
(\alpha_g(f_n)\alpha_h(f_n))_na=0\quad \text{and} \quad \lim_{n\to\omega}\tau_A(f_n)=\frac{1}{|G|} $$ for any $a\in A$ and $g,h\in G$ with $g\neq h$. Let $\{k_n\}_{n=1}^\infty$ be an approximate unit for $\mathcal{W}$. Then we have $(f_n\otimes k_n)_n\in \mathcal{A}$, $$ \lim_{n\to\omega}\gamma_g(f_n\otimes k_n)\gamma_h(f_n\otimes k_n)(a\otimes 1_{\mathcal{W}^{\sim}})
=0 \quad \text{and} \quad \lim_{n\to\omega}\tau (f_n\otimes k_n)=\frac{1}{|G|} $$ for any $a\in A$ and $g,h\in G$ with $g\neq h$. Using \cite[Lemma 5.6]{Na2} instead of \cite[Lemma 4.6]{MS}, a similar argument as in the proof of \cite[Proposition 4.5]{MS2} shows that there exists a positive contraction $(\tilde{y}_n)_n$ in $\mathcal{A}$ such that $$ (\tilde{y}_n)_n\leq (y_n)_n, \quad \inf_{m\in\mathbb{N}}\lim_{n\to \omega} \tau (\tilde{y}_n^m)>0 \quad \text{and} \quad \lim_{n\to\omega}\gamma_g(\tilde{y}_n)\gamma_h(\tilde{y}_n) (a\otimes 1_{\mathcal{W}^{\sim}})=0 $$ for any $a\in A$ and $g,h\in G$ with $g\neq h$. By Lemma \ref{lem:A-SI}, there exists an element $(r_n)_n$ in $\mathcal{A}$ such that $(r_n^*r_n)_n=(x_n)_n$ and $(\tilde{y}_nr_n)_n=(r_n)_n$. Since $(y_n)_n$ is a positive contraction and $(\tilde{y}_n)_n\leq (y_n)_n$, we have $(y_nr_n)_n=(r_n)_n$. Put $$
(s_n)_n:= \frac{1}{|G|}\sum_{g\in G} \gamma_g ((r_n)_n)\in\mathcal{A} . $$ Then we have $$ (\gamma_g(s_n)-s_n)_n=0, \quad (s_n^*s_n-x_n)_n(a\otimes 1_{\mathcal{W}^{\sim}})=0 \quad \text{and} \quad (y_ns_n-s_n)_n(a\otimes 1_{\mathcal{W}^{\sim}})=0 $$ for any $a\in A$ and $g\in G$. Therefore, putting $s:= [(s_n)_n]\in \mathcal{B}^{\gamma}$, we obtain the conclusion. \end{proof}
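We remark that the $\gamma$-invariance of $(s_n)_n$ in the proof above is immediate from re-indexing the sum: for any $g\in G$,
$$
\gamma_g((s_n)_n)=\frac{1}{|G|}\sum_{h\in G}\gamma_{gh}((r_n)_n)
=\frac{1}{|G|}\sum_{h\in G}\gamma_{h}((r_n)_n)=(s_n)_n.
$$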
The following proposition is essentially based on \cite[Proposition 4.8]{MS2} and \cite[Proposition 3.3]{MS3}.
\begin{pro}\label{pro:main-section3} (i) $\tau_{\mathcal{B}}$ is the unique tracial state on $\mathcal{B}^{\gamma}$. \ \\ (ii) $\mathcal{B}^{\gamma}$ has strict comparison. \end{pro} \begin{proof} (i) By Proposition \ref{pro:fixed-factor} and Proposition \ref{pro:surjective}, it suffices to show that
if $[(x_n)_n]$ is a positive contraction in $\mathrm{ker}\;\varrho|_{\mathcal{B}^{\gamma}}$, then $T([(x_n)_n])=0$ for any tracial state $T$ on $\mathcal{B}^{\gamma}$.
Note that $[(x_n)_n]^{1/2}\in\mathrm{ker}\;\varrho|_{\mathcal{B}^{\gamma}}$, and hence $\tau_{\mathcal{B}}([(x_n)_n])=0$. Let $\{e_n\}_{n=1}^\infty$ be an approximate unit for $A\otimes\mathcal{W}$. Then it is easy to see that for any $m\in\mathbb{N}$, $\tau_{\mathcal{B}}(([(e_n)_n]-[(x_n)_n])^m)=1$. By Proposition \ref{pro:target-si}, there exists an element $s_1\in\mathcal{B}^{\gamma}$ such that $s_1^*s_1=[(x_n)_n]$ and $([(e_n)_n]-[(x_n)_n])s_1=s_1$. Hence we have $s_1s_1^*\leq [(e_n)_n]-[(x_n)_n]$. Since $[(x_n)_n]+s_1s_1^*$ is a positive contraction and $\tau_{\mathcal{B}}([(x_n)_n]+s_1s_1^*)=0$, the same argument as above shows that there exists an element $s_2\in\mathcal{B}^{\gamma}$ such that $s_2^*s_2=[(x_n)_n]$ and $([(e_n)_n]-[(x_n)_n]-s_1s_1^*)s_2=s_2$. Repeating this process, for any $N\in\mathbb{N}$, we obtain elements $s_1,s_2,...,s_N$ in $\mathcal{B}^{\gamma}$ such that $$ s_i^*s_i=[(x_n)_n] \quad \text{and} \quad [(x_n)_n]+\sum_{i=1}^{N}s_is_i^* \leq [(e_n)_n]. $$ Since $T$ is a tracial state and $[(e_n)_n]$ is a contraction, $(N+1)T([(x_n)_n])\leq 1$. Therefore $T([(x_n)_n])=0$.
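In more detail, the last estimate follows from the trace property, since $T(s_is_i^*)=T(s_i^*s_i)=T([(x_n)_n])$ for every $i$, $T$ is positive and $T([(e_n)_n])\leq \| [(e_n)_n]\| \leq 1$: $$ (N+1)\,T([(x_n)_n])=T\left([(x_n)_n]+\sum_{i=1}^{N}s_is_i^*\right)\leq T([(e_n)_n])\leq 1, $$ and $N\in\mathbb{N}$ was arbitrary.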
(ii) Since $\mathcal{W}\otimes M_n(\mathbb{C})$ is isomorphic to $\mathcal{W}$, it can be easily checked that $\mathcal{B}^{\gamma}\otimes M_n(\mathbb{C})$ is isomorphic to $\mathcal{B}^{\gamma}$. Hence it is enough to show that if $a$ and $b$ are positive elements in $\mathcal{B}^{\gamma}$ with $d_{\tau_{\mathcal{B}}}(a)<d_{\tau_{\mathcal{B}}}(b)$, then there exists an element $r$ in $\mathcal{B}^{\gamma}$ such that $r^*br=a$. Using Proposition \ref{pro:fixed-factor}, Proposition \ref{pro:surjective} and Proposition \ref{pro:target-si} instead of \cite[Lemma 4.2]{MS2}, \cite[Theorem 4.3]{MS2} and \cite[Proposition 4.5]{MS2}, the same argument as in the proof of \cite[Proposition 4.8]{MS2} shows this. We recall a sketch of the proof for the reader's convenience. We may assume that $a$ and $b$ are contractions. Let $\tilde{\tau}_{\omega}$ denote the unique tracial state on $\mathcal{M}^{\tilde{\gamma}}$. Since $\tau_{\mathcal{B}}=\tilde{\tau}_{\omega}\circ \varrho$, we have $$ d_{\tau_{\mathcal{B}}}(a)= \tilde{\tau}_{\omega} (1_{(0, \infty)}(\varrho(a))) \quad \text{and} \quad d_{\tau_{\mathcal{B}}}(b)= \tilde{\tau}_{\omega} (1_{(0, \infty)}(\varrho(b))) $$ where $1_{(0,\infty)}$ is the characteristic function of $(0,\infty)$. Note that $1_{(0, \infty)}(\varrho(a))$ and $1_{(0, \infty)}(\varrho(b))$ are projections in $\mathcal{M}^{\tilde{\gamma}}$ because $\mathcal{M}^{\tilde{\gamma}}$ is a von Neumann algebra. Since $\mathcal{M}^{\tilde{\gamma}}$ is a factor of type II$_1$ and
$\varrho|_{\mathcal{B}^{\gamma}}$ is surjective, the same argument as in the proof of \cite[Proposition 4.8]{MS2} shows that there exist positive contractions $y_1$ and $y_2$ in $\mathcal{B}^{\gamma}$ and a projection $p$ in $\mathcal{M}^{\tilde{\gamma}}$ such that $$ y_1y_2=0,\quad \varrho(y_1)=p,\quad \varrho (y_2)=1-p, \quad y_1b=by_1, \quad y_2b=by_2 $$ and there exists $\varepsilon >0$ such that $$ \tilde{\tau}_{\omega}(1_{(\varepsilon, \infty)}(\varrho(b)p))>0 \quad \text{and} \quad \tilde{\tau}_{\omega}(1_{(0, \infty)}(\varrho(a)))< \tilde{\tau}_{\omega}(1_{(\varepsilon, \infty)}(\varrho(b)(1-p))). $$ Furthermore, there exists a unitary element $v$ in $\mathcal{M}^{\tilde{\gamma}}$ such that $$ 1_{(0, \infty)}(\varrho(a)) \leq v1_{(\varepsilon, \infty)}(\varrho(b)(1-p))v^* $$ because $\mathcal{M}^{\tilde{\gamma}}$ is a factor of type II$_1$.
Since $\varrho|_{\mathcal{B}^{\gamma}}$ is surjective, there exists an element $w$ in $\mathcal{B}^{\gamma}$ such that $\varrho (w)=v$. Define continuous functions $g$ and $h$ on $ [0, \infty )$ by $g(t)=\min\{1/\varepsilon, 1/t \}$ and $h(t)=tg(t)$. Note that $g(b)$ is an element in $(\mathcal{B}^{\gamma})^{\sim}$ and $$ h(t):= \left\{\begin{array}{cl} t/\varepsilon & \text{if } t\in [0,\varepsilon] \\ 1 & \text{if } t\in (\varepsilon, \infty ) \end{array} \right.. $$ Put $r_1:= y_2^{1/2}g(b)^{1/2}w^*a^{1/2}\in\mathcal{B}^{\gamma}$, then we have $r_1^*br_1\leq a$ and $\varrho (r_1^*br_1)=\varrho(a)$. Therefore $a-r_1^*br_1$ is a positive contraction in $\mathrm{ker}\; \varrho$, and hence we have $$ \tau_{\mathcal{B}} (a-r_1^*br_1)=0. $$ Since $ \tau_{\mathcal{B}}((h(b)y_1)^m)= \tilde{\tau}_{\omega} (\varrho(h(b))^mp) \geq \tilde{\tau}_{\omega}(1_{(\varepsilon, \infty)}(\varrho(b)p))) $ for any $m\in\mathbb{N}$, we have $$ \inf_{m\in\mathbb{N}}\tau_{\mathcal{B}}((h(b)y_1)^m)>0. $$ Therefore Proposition \ref{pro:target-si} implies that there exists an element $s$ in $\mathcal{B}^{\gamma}$ such that $$ s^*s= a-r_1^*br_1 \quad \text{and} \quad h(b)y_1s=s. $$ Put $r_2:=y_1^{1/2}g(b)^{1/2}s\in \mathcal{B}^{\gamma}$, then we have $r_1^*br_2=0$ and $r_2^*br_2=a-r_1^*br_1$. Consequently, put $r=r_1+r_2$, then we have $r^*br=a$. \end{proof}
\section{Stable uniqueness theorem}\label{sec:stable-uniqueness}
In this section we shall show a variant of \cite[Corollary 3.8]{Na3}, which is based on the results in \cite{EN} (see also \cite{EGLN}), \cite{EllK} (see also \cite{G}), \cite{DE1} and \cite{DE2}.
First, we shall define a homomorphism $\rho$ from $F(A\otimes \mathcal{W})^{\gamma}$ to $\mathcal{B}^{\gamma}$. Let $\{k_n\}_{n=1}^\infty$ be an approximate unit for $\mathcal{W}$ with $k_{n+1}k_{n}=k_{n}$, and
let $\mathcal{W}_0:=\{k_nbk_n \; |\; n\in\mathbb{N}, b\in\mathcal{W}\}$. Then $\mathcal{W}_0$ is a dense self-adjoint subalgebra of $\mathcal{W}$. For any $(x_n)_n\in (A\otimes\mathcal{W})_{\omega}$, $a\in A$, $b\in\mathcal{W}$ and $N\in\mathbb{N}$, we have \begin{align*} ((1_{A^{\sim}}\otimes k_N)x_n(1_{A^{\sim}}\otimes b)(a\otimes 1_{\mathcal{W}^{\sim}}))_n & = ((1_{A^{\sim}}\otimes k_Nk_{N+1})x_n(a\otimes b))_n \\ & = ((a\otimes k_{N}k_{N+1}b)x_n)_n \\ & = ((1_{A^{\sim}}\otimes k_N)x_n (a\otimes k_{N+1}b))_n \\ & = ((a\otimes k_Nk_{N+1})x_n (1_{A^{\sim}}\otimes b))_n \\ & = ((a\otimes 1_{\mathcal{W}^{\sim}})(1_{A^{\sim}}\otimes k_N)x_n(1_{A^{\sim}}\otimes b))_n. \end{align*} Hence $((1_{A^{\sim}}\otimes k_N)x_n(1_{A^{\sim}}\otimes b))_n\in \mathcal{A}$. For any $[(x_n)_n]\in F(A\otimes\mathcal{W})^{\gamma}$ and $k_Nbk_N\in\mathcal{W}_0$, define $$ \rho ([(x_n)_n]\otimes k_{N}bk_{N}):= [((1_{A^{\sim}}\otimes k_N)x_n(1_{A^{\sim}}\otimes bk_N))_n]\in \mathcal{B}. $$ We shall show this is well defined. Let $[(x_n)_n]=[(y_n)_n]\in F(A\otimes\mathcal{W})^{\gamma}$ and $k_{N}bk_{N}=k_{N^{\prime}}b^{\prime}k_{N^{\prime}}\in \mathcal{W}_0$. For any $a\in A$, we have \begin{align*} & (((1_{A^{\sim}}\otimes k_N)x_n(1_{A^{\sim}}\otimes bk_N)-(1_{A^{\sim} }\otimes k_{N^{\prime}})y_n(1_{A^{\sim}}\otimes b^{\prime}k_{N^{\prime}})) (a\otimes 1_{\mathcal{W}^{\sim}}))_n \\ & = ((1_{A^{\sim}}\otimes k_N)x_n(a\otimes bk_N)-(1_{A^{\sim} }\otimes k_{N^{\prime}})y_n(a\otimes b^{\prime}k_{N^{\prime}}))_n \\ & = ((a\otimes k_{N}bk_{N})x_n-(a\otimes k_{N^{\prime}}b^{\prime}k_{N^{\prime}})y_n)_n=
((a\otimes k_{N}bk_{N})(x_n-y_n))_n=0. \end{align*} Therefore $[((1_{A^{\sim}}\otimes k_N)x_n(1_{A^{\sim}}\otimes bk_N))_n]=
[((1_{A^{\sim}}\otimes k_{N^{\prime}})y_n(1_{A^{\sim}}\otimes b^{\prime}k_{N^{\prime}}))_n]$. By a similar argument, it can be easily checked that $\rho$ is a homomorphism from the algebraic tensor product $F(A\otimes\mathcal{W})^{\gamma}\odot\mathcal{W}_0$ to $\mathcal{B}^{\gamma}$. Since we have \begin{align*}
\| \rho ([(x_n)_n]\otimes k_{N}bk_{N})\|
& = \sup_{a\in A_{+,1}} \lim_{n\to \omega} \|(1_{A^{\sim}}\otimes k_{N})x_n
(1_{A^{\sim}}\otimes bk_{N})(a\otimes 1_{\mathcal{W}^{\sim}}) \| \\
& = \sup_{a\in A_{+,1}} \lim_{n\to \omega} \|(a\otimes k_{N}bk_{N})x_n\| \\
& \leq \sup_{a\in A_{+,1}} \lim_{n\to \omega} \|a\| \| k_{N}bk_{N}\| \| x_n\| \\
& = \lim_{n\to\omega} \| x_n\| \cdot \| k_{N}bk_{N}\|, \end{align*} $\rho$ can be extended to a homomorphism from the algebraic tensor product $F(A\otimes\mathcal{W})^{\gamma}\odot\mathcal{W}$ to $\mathcal{B}^{\gamma}$. Consequently, $\rho$ can be extended to a homomorphism from $F(A\otimes\mathcal{W})^{\gamma}\otimes\mathcal{W}$ to $\mathcal{B}^{\gamma}$ because $\mathcal{W}$ is nuclear. By the construction of $\rho$, it is easy to show the following proposition.
\begin{pro}\label{pro:homrho} Let $(z_n)_n$ be an element in $\mathcal{A}$ such that $[(z_n)_n]=\rho ([(x_n)_n]\otimes b)$ for some $[(x_n)_n]\in F(A\otimes\mathcal{W})^{\gamma}$ and $b\in \mathcal{W}$. Then $$ (z_n(a\otimes 1_{\mathcal{W}^{\sim}}))_n= (x_n(a\otimes b))_n $$ for any $a\in A$. \end{pro}
\begin{rem} Note that there exists an element $(x_n)_n$ in $(A\otimes\mathcal{W})_{\omega}$ such that $(x_n)_n\notin \mathcal{A}$. Indeed, if $a$ is not an element in the center of $A$, $(a\otimes (k_n^2-k_n))_n$ is such an element. But we do not know whether there exist $(x_n)_n\in (A\otimes \mathcal{W})_{\omega}$ and $b\in \mathcal{W}$ such that $(x_n(1_{A^{\sim}}\otimes b))_n\notin \mathcal{A}$. \end{rem}
The following lemma is an analogue of \cite[Lemma 3.6]{Na3}. \begin{lem}\label{lem:Lemma3.6} If $x$ is a positive element in $F(A\otimes\mathcal{W})$, then $$ \tau_{\mathcal{B}}(\rho (x\otimes b))= \tau_{\omega}(x) \tau_\mathcal{W} (b) $$ for any $b\in\mathcal{W}$. \end{lem} \begin{proof} Let $(z_n)_n$ be an element in $\mathcal{A}$ such that $[(z_n)_n]=\rho (x\otimes b)$, and let $\{h_n\}_{n=1}^\infty$ be an approximate unit for $A$. Note that $\tau_{\mathcal{B}}(\rho (x\otimes b))=\lim_{n\to\omega}\tau (z_n)$. Since $\lim_{n\to \infty} \tau (h_n\otimes 1_{\mathcal{W}})=1$, an argument similar to that in the proof of \cite[Proposition 5.3]{Na2} shows $$ \lim_{n\to\omega}\tau (z_n)=\lim_{m\to \infty} \lim_{n\to \omega} \tau (z_n(h_m\otimes 1_{\mathcal{W}^{\sim}})). $$ By Proposition \ref{pro:homrho} and \cite[Lemma 3.6]{Na3}, $$ \lim_{n\to \omega} \tau (z_n(h_m\otimes 1_{\mathcal{W}^{\sim}}))= \tau_{\omega}(x) \tau (h_m\otimes b)=\tau_{\omega}(x)\tau_A (h_m)\tau_\mathcal{W}(b) $$ for any $m\in\mathbb{N}$. Therefore $\tau_{\mathcal{B}}(\rho (x\otimes b))= \tau_{\omega}(x) \tau_\mathcal{W} (b)$ since $\lim_{m\to \infty} \tau_A (h_m)=1$. \end{proof}
For a projection $p$ in $F(A\otimes\mathcal{W})^{\gamma}$, let $$ \mathcal{B}^{\gamma}_p:=\overline{\rho (p\otimes s) \mathcal{B}^{\gamma} \rho (p\otimes s)} $$ where $s$ is a strictly positive element in $\mathcal{W}$. Note that $\mathcal{B}^{\gamma}_p$ is a hereditary subalgebra of $\mathcal{B}^{\gamma}$. Define a homomorphism $\sigma_p$ from $\mathcal{W}$ to $\mathcal{B}^{\gamma}_p$ by $$ \sigma_p (b) = \rho (p\otimes b) $$ for any $b\in \mathcal{W}$.
Since the target algebra $\mathcal{B}^{\gamma}$ has strict comparison by Proposition \ref{pro:main-section3}, the same proof as \cite[Proposition 3.7]{Na3} shows the following proposition by using Lemma \ref{lem:Lemma3.6} instead of \cite[Lemma 3.6]{Na3}. See \cite[Definition 3.2]{Na3} for the definition of $(L,N)$-fullness.
\begin{pro}\label{pro:full-inclusion} There exist maps $L: \mathcal{W}_{+,1}\setminus \{0\}\times (0,1)\to \mathbb{N}$ and $N: \mathcal{W}_{+,1}\setminus \{0\}\times (0,1) \to (0,\infty)$ such that the following holds. If $p$ is a projection in $F(A\otimes\mathcal{W})^{\gamma}$ such that $\tau_{\omega} (p)>0$, then $\sigma_p$ is $(L,N)$-full. \end{pro}
The following corollary is an immediate consequence of \cite[Proposition 3.3]{Na3} and the proposition above. For finite sets $\mathcal{F}_1$ and $\mathcal{F}_2$,
let $\mathcal{F}_1\odot \mathcal{F}_2:=\{a\otimes b \; |\; a\in \mathcal{F}_1, b\in \mathcal{F}_2\}$.
\begin{cor}\label{cor:stable-uniqueness} Let $\Omega$ be a compact metrizable space. For any finite subsets $F_1\subset C(\Omega)$ and $F_2\subset \mathcal{W}$ and any $\varepsilon>0$, there exist finite subsets $\mathcal{F}_1\subset C(\Omega)$, $\mathcal{F}_2\subset \mathcal{W}$, $m\in\mathbb{N}$ and $\delta >0$ such that the following holds. Let $p$ be a projection in $F(A\otimes\mathcal{W})^{\gamma}$ such that $\tau_{\omega} (p)>0$. For any contractive $(\mathcal{F}_1\odot \mathcal{F}_2, \delta)$-multiplicative maps $\varphi, \psi : C(\Omega)\otimes \mathcal{W}\to \mathcal{B}^{\gamma}_p$, there exist a unitary element $u$ in $M_{m^2+1}(\mathcal{B}^{\gamma}_p)^{\sim}$ and $z_1,z_2,...,z_m\in\Omega$ such that \begin{align*}
\| u & (\varphi(f\otimes b) \oplus \overbrace{\bigoplus_{k=1}^m f(z_k)\rho (p\otimes b)\oplus \cdots \oplus\bigoplus_{k=1}^m f(z_k)\rho (p\otimes b) }^m) u^* \\
& - \psi(f\otimes b)\oplus \overbrace{\bigoplus_{k=1}^m f(z_k)\rho (p\otimes b) \oplus \cdots \oplus \bigoplus_{k=1}^m f(z_k)\rho (p\otimes b)}^m\| < \varepsilon \end{align*} for any $f\in F_1$ and $b\in F_2$. \end{cor}
\section{Classification of normal elements in $F(A\otimes\mathcal{W})^{\gamma}$} \label{sec:normal}
In this section we shall classify certain normal elements in $F(A\otimes\mathcal{W})^{\gamma}$ up to unitary equivalence. Furthermore, we shall consider the comparison theory for certain projections in $F(A\otimes\mathcal{W})^{\gamma}$. We assume that $\Omega$ is a compact metrizable space in this section.
Using Proposition \ref{pro:key-pro} and Proposition \ref{pro:MvN-u} instead of \cite[Proposition 4.1]{Na3}, \cite[Proposition 4.2]{Na3} and \cite[Proposition 4.8]{Na3}, we obtain the following lemma by the same proof as \cite[Lemma 5.1]{Na3}. See also \cite[Lemma 4.1]{M2} and \cite[Lemma 4.2]{M2}.
\begin{lem}\label{lem:m4.2} Let $F$ be a finite subset of $C(\Omega)$ and $\varepsilon >0$. Suppose that $\varphi$ and $\psi$ are unital homomorphisms from $C(\Omega)$ to $F(A\otimes\mathcal{W})^{\gamma}$ such that $ \tau_{\omega} \circ \varphi = \tau_{\omega} \circ \psi . $ Then there exist a projection $p\in F(A\otimes\mathcal{W})^{\gamma}$, $(F,\varepsilon)$-multiplicative unital c.p. maps $\varphi^{\prime}$ and $\psi^{\prime}$ from $C(\Omega)$ to $pF(A\otimes\mathcal{W})^{\gamma}p$, a unital homomorphism $\sigma$ from $C(\Omega)$ to $(1-p)F(A\otimes\mathcal{W})^{\gamma}(1-p)$ with finite-dimensional range and a unitary element $u\in F(A\otimes\mathcal{W})^{\gamma}$ such that $$ 0 <\tau_{\omega} (p) < \varepsilon, \;
\| \varphi (f)- (\varphi^{\prime}(f)+ \sigma (f))\| <\varepsilon, \;
\| \psi (f)- u(\psi^{\prime}(f)+ \sigma (f))u^*\| <\varepsilon $$ for any $f\in F$. \end{lem}
The following theorem is a variant of \cite[Theorem 5.2]{Na3}. See also \cite[Theorem 4.5]{M2}.
\begin{thm}\label{thm:unitary-equivalence-ed} Let $F_1$ be a finite subset of $C(\Omega)$, $F_2$ a finite subset of $A$ and $F_3$ a finite subset of $\mathcal{W}$, and let $\varepsilon >0$. Then there exist mutually orthogonal positive elements $h_1,h_2,...,h_{l}$ in $C(\Omega)$ of norm one such that the following holds. For any $\nu >0$, there exist finite subsets $\mathcal{G}_1\subset C(\Omega)$, $\mathcal{G}_2\subset A\otimes \mathcal{W}$ and $\delta >0$ such that the following holds. If $\varphi$ and $\psi$ are unital c.p. maps from $C(\Omega)$ to $M(A\otimes\mathcal{W})$ such that $$ \tau (\varphi (h_i)) \geq \nu, \; \forall i\in\{1,2,...,l\}, $$ $$
\| [\varphi (f), x] \| < \delta , \; \| [\psi(f),x ] \| < \delta, \; \forall f\in \mathcal{G}_1, x\in \mathcal{G}_2, $$ $$
\| (\varphi (f_1f_2)- \varphi (f_1)\varphi (f_2))x\| < \delta, \;
\| (\psi (f_1f_2)- \psi (f_1)\psi (f_2))x\| < \delta, \; \forall f_1,f_2\in \mathcal{G}_1, x\in \mathcal{G}_2, $$ $$
\| (\gamma_g (\varphi (f))-\varphi (f))x \| < \delta, \;
\| (\gamma_g (\psi (f))-\psi (f))x \| < \delta, \; \forall g\in G, f\in \mathcal{G}_1, x\in \mathcal{G}_2, $$ $$
| \tau (\varphi (f)) -\tau (\psi (f)) | < \delta, \; \forall f\in \mathcal{G}_1, $$ then there exists a contraction $u$ in $(A\otimes\mathcal{W})^{\sim}$ such that $$
\| (a\otimes b) (u^*u -1)\| < \varepsilon, \;
\| (a\otimes b)(uu^* -1) \| < \varepsilon, \;
\| (a\otimes b)(\gamma_g(u)-u) \| < \varepsilon, $$ $$
\| u\varphi (f)(a\otimes b)u^* - \psi(f)(a\otimes b) \| < \varepsilon $$ for any $f\in F_1$, $a\in F_2$, $b\in F_3$ and $g\in G$. \end{thm}
\begin{proof}
We may assume that every element in $F_2$ and $F_3$ is positive and of norm one. Take positive elements $h_1,h_2,...,h_l$ in $C(\Omega)$ in the same way as in the proof of \cite[Theorem 5.2]{Na3}. We will show that $h_1,h_2,...,h_l$ have the desired property. Suppose, to the contrary, that $h_1,h_2,...,h_l$ do not have the desired property. Then there exists a positive number $\nu$ satisfying the following: For any $n\in\mathbb{N}$, there exist unital c.p. maps $\varphi_n, \psi_n : C(\Omega)\to M(A\otimes\mathcal{W})$ such that $$ \tau (\varphi_n (h_i)) \geq \nu, \: \forall i \in\{1,2,...,l\}, $$ $$
\| [\varphi_n(f_1),x]\| \to 0, \; \| [\psi_n(f_1),x]\| \to 0, \; \| (\varphi_n(f_1f_2)- \varphi_n(f_1)
\varphi_n(f_2))x\| \to 0, $$ $$
\| (\psi_n(f_1f_2)- \psi_n (f_1)\psi_n(f_2))x\| \to 0, \; \| (\gamma_g (\varphi_n (f_1))-\varphi_n (f_1))x
\|\to 0, $$ $$
\| (\gamma_g (\psi_n (f_1))-\psi_n (f_1))x \|\to 0, \; |\tau (\varphi_n(f_1))-\tau (\psi_n(f_1))| \to 0 $$ as $n\to\infty$ for any $f_1,f_2\in C(\Omega)$, $x\in A\otimes\mathcal{W}$ and $g\in G$ and $$ \max_{f\in F_1, a\in F_2, b\in F_3}
\| u\varphi_n (f)(a\otimes b)u^* - \psi_n (f)(a\otimes b)\| \geq \varepsilon $$ for any contraction $u$ in $(A\otimes\mathcal{W})^{\sim}$ satisfying $$
\| (a\otimes b)(\gamma_g(u)-u) \| < \varepsilon, \;
\|(a\otimes b) (u^*u -1) \| < \varepsilon, \;
\|(a\otimes b) (uu^* -1) \| < \varepsilon $$ for any $a\in F_2$, $b\in F_3$ and $g\in G$.
Define homomorphisms $\varphi$ and $\psi$ from $C(\Omega)$ to $F(A\otimes\mathcal{W})^{\gamma}$ by $\varphi (f) := [(\varphi_n(f))_n]$ and $\psi (f):= [(\psi_n(f))_n]$ for any $f\in C(\Omega)$. Then we have $$ \tau_{\omega} \circ \varphi= \tau_{\omega} \circ \psi \quad \text{and} \quad \tau_{\omega}(\varphi (h_i))\geq \nu $$ for any $i=1,2,...,l$.
We obtain finite subsets $\mathcal{F}_1\subset C(\Omega)$, $\mathcal{F}_2\subset \mathcal{W}$, $m\in\mathbb{N}$ and $\delta >0$ by applying Corollary \ref{cor:stable-uniqueness} to $F_1$ and $F_3$ and $\varepsilon /7$. Put $$ F_1^{\prime}:= F_1\cup \mathcal{F}_1 \cup \{h_1, h_2,...,h_l \} \quad \text{and} \quad \varepsilon^{\prime}:=
\min \left\{\frac{\varepsilon}{7}, \frac{\delta}{\max\{\| b\| \; |\; b\in \mathcal{F}_2\}}, \frac{\nu}{m^2+2} \right\}. $$
Applying Lemma \ref{lem:m4.2} to $F_1^{\prime}$, $\varepsilon^{\prime}$, $\varphi$ and $\psi$, there exist a projection $p\in F(A\otimes\mathcal{W})^{\gamma}$, $(F_1^{\prime},\varepsilon^{\prime})$-multiplicative unital c.p. maps $\varphi^{\prime}$ and $\psi^{\prime}$ from $C(\Omega)$ to $pF(A\otimes\mathcal{W})^{\gamma}p$, a unital homomorphism $\sigma$ from $C(\Omega)$ to $(1-p)F(A\otimes\mathcal{W})^{\gamma}(1-p)$ with finite-dimensional range and a unitary element $w\in F(A\otimes\mathcal{W})^{\gamma}$ such that $$ 0 <\tau_{\omega} (p) < \varepsilon^{\prime}, \;
\| \varphi (f)- (\varphi^{\prime}(f)+ \sigma (f))\| <\varepsilon^{\prime}, \;
\| \psi (f)- w(\psi^{\prime}(f)+ \sigma (f))w^*\| <\varepsilon^{\prime} $$ for any $f\in F_1^{\prime}$. The Choi-Effros lifting theorem implies that there exist sequences of contractive c.p. maps $\varphi_n^{\prime}$, $\psi_n^{\prime}$ and $\sigma_n$ from $C(\Omega )$ to $A\otimes\mathcal{W}$ such that $\varphi^{\prime}(f)=[(\varphi_n^{\prime}(f))_n]$, $\psi^{\prime}(f)=[(\psi_n^{\prime}(f))_n]$ and $\sigma (f)=[(\sigma_n (f))_n]$ for any $f\in C(\Omega )$. By \cite[Proposition 4.9]{Na3}, there exists a unitary element $(w_n)_n$ in $(A\otimes\mathcal{W})^{\sim}_{\omega}$ such that $w=[(w_n)_n]$. Note that we have $(x\gamma_g(w_n))_n=(xw_n)_n$ for any $g\in G$ and $x\in A\otimes\mathcal{W}$, $$
\lim_{n\to\omega}\| \varphi_n (f)(a\otimes b)- (\varphi_n^{\prime}(f)+ \sigma_n (f))(a\otimes b)\| <\frac{\varepsilon}{7} \eqno{(1)} $$ and $$
\lim_{n\to\omega}\| \psi_n (f)(a\otimes b)- w_n(\psi_n^{\prime}(f)+ \sigma_n (f))(a\otimes b)w_n^*\| <\frac{\varepsilon}{7} \eqno{(2)} $$ for any $f\in F_1^{\prime}$, $a\in F_2$ and $b\in F_3$.
Define c.p. maps $\Phi^{\prime}$ and $\Psi^{\prime}$ from $C(\Omega)\otimes \mathcal{W}$ to $\mathcal{B}_p^{\gamma}$ by $$ \Phi^{\prime}:= \rho \circ (\varphi^{\prime}\otimes \mathrm{id}_{\mathcal{W}})\quad \text{and} \quad \Psi^{\prime}:= \rho \circ (\psi^{\prime}\otimes \mathrm{id}_{\mathcal{W}}). $$ Then $\Phi^{\prime}$ and $\Psi^{\prime}$ are contractive $(\mathcal{F}_1\odot \mathcal{F}_2, \delta)$-multiplicative maps. Hence Corollary \ref{cor:stable-uniqueness} implies that there exist a unitary element $U$ in $M_{m^2+1}(\mathcal{B}^{\gamma}_p)^{\sim}$ and $z_1,z_2,...,z_m\in\Omega$ such that \begin{align*}
\| U & (\Phi^{\prime}(f\otimes b) \oplus \overbrace{\bigoplus_{k=1}^m f(z_k)\rho (p\otimes b)\oplus \cdots \oplus\bigoplus_{k=1}^m f(z_k)\rho (p\otimes b) }^m) U^* \\
& - \Psi^{\prime}(f\otimes b)\oplus \overbrace{\bigoplus_{k=1}^m f(z_k)\rho (p\otimes b) \oplus \cdots \oplus \bigoplus_{k=1}^m f(z_k)\rho (p\otimes b)}^m\| < \frac{\varepsilon}{7} \end{align*} for any $f\in F_1$ and $b\in F_3$.
Using Proposition \ref{thm:Matui-Sato} instead of \cite[Proposition 4.1]{Na3}, the same argument as in the proof of \cite[Theorem 5.2]{Na3} shows that there exist mutually orthogonal projections $\{p_{j,k} \}_{j,k=1}^m$ in $(1-p)F(A\otimes\mathcal{W})^{\gamma}(1-p)$ and a homomorphism $\sigma^{\prime\prime}: C(\Omega)\to (1-p-q)F(A\otimes\mathcal{W})^{\gamma}(1-p-q)$ where $q=\sum_{j,k=1}^mp_{j,k}$ such that $$
\| \sigma (f) - \left(\sum_{j=1}^m\sum_{k=1}^mf(z_k)p_{j,k} + \sigma^{\prime\prime} (f) \right) \| < \frac{2\varepsilon}{7} $$ for any $f\in F_1$, and such that $p_{j,k}$ is Murray-von Neumann equivalent to $p$ for any $j,k=1,2,...,m$. Define a homomorphism $\hat{\sigma}$ from $C(\Omega)$ to $F(A\otimes\mathcal{W})^{\gamma}$ by $$ \hat{\sigma}(f):= \sum_{j=1}^m\sum_{k=1}^mf(z_k)p_{j,k}+ \sigma^{\prime\prime} (f) $$ for any $f\in C(\Omega)$. By the Choi-Effros lifting theorem, there exists a sequence of contractive c.p. maps $\hat{\sigma}_n$ from $C(\Omega)$ to $A\otimes\mathcal{W}$ such that $\hat{\sigma}(f)=[(\hat{\sigma}_n (f))_n]$. Note that we have $$
\lim_{n\to \omega} \|\sigma_n(f)(a\otimes b)- \hat{\sigma}_n(f)(a\otimes b) \|< \frac{2\varepsilon}{7}
\eqno{(3)} $$ for any $f\in F_1$, $a\in F_2$ and $b\in F_3$.
Since we can regard $\Phi^{\prime}(f\otimes b)+ \sum_{j=1}^m\sum_{k=1}^mf(z_k)\rho (p_{j,k}\otimes b)\in \mathcal{B}_{p+q}^{\gamma}$ as an element in $M_{m^2+1}(\mathcal{B}^{\gamma}_p)$, the same argument as in the proof of \cite[Theorem 5.2]{Na3} shows that there exists a unitary element $V$ in $(\mathcal{B}^{\gamma})^{\sim}$ such that $$
\| V(\Phi^{\prime} (f\otimes b) + \rho (\hat{\sigma}(f)\otimes b))V^*- (\Psi^{\prime}(f\otimes b)
+\rho (\hat{\sigma}(f)\otimes b)) \|< \frac{\varepsilon}{7} $$
for any $f\in F_1$ and $b\in F_3$. Let $(v_n)_n$ be a contraction in $\mathcal{A}^{\sim}$ such that $V=[(v_n)_n]$. Then we have $((a\otimes 1_{\mathcal{W}^{\sim}})v_n^*v_n)_n =((a\otimes 1_{\mathcal{W}^{\sim}})v_nv_n^*)_n=a\otimes 1_{\mathcal{W}^{\sim}}$ and $((a\otimes 1_{\mathcal{W}^{\sim}})\gamma_g(v_n))_n=((a\otimes 1_{\mathcal{W}^{\sim}})v_n)_n$ for any $g\in G$ and $a\in A$. Furthermore, we see that $$
\lim_{n\to\omega} \|v_n(\varphi^{\prime}_n(f)+\hat{\sigma}_n(f))(a\otimes b) v_n^*
-(\psi^{\prime}_n(f)+\hat{\sigma}_n(f))(a\otimes b)\| < \frac{\varepsilon}{7} \eqno{(4)} $$ for any $f\in F_1$, $a\in F_2$ and $b\in F_3$ by Proposition \ref{pro:homrho}.
Put $(u_n)_n:=(w_nv_n)_n\in ((A\otimes\mathcal{W})^{\sim})^{\omega}$. Then we have $$ ((a\otimes b)u_n^*u_n)_n= ((a\otimes b)v_n^*v_n)_n= a\otimes b $$ and $$ ((a\otimes b)u_nu_n^*)_n=(w_n(a\otimes b)v_nv_n^*w_n)_n=(w_n(a\otimes b)w_n^*)_n=a\otimes b $$ for any $a\in A$ and $b\in\mathcal{W}$. Also, we have \begin{align*} ((a\otimes b)\gamma_g(u_n))_n & =((a\otimes b)\gamma_g(w_n)\gamma_g(v_n))_n=((a\otimes b)w_n\gamma_g(v_n))_n \\ & =(w_n(a\otimes b)\gamma_g(v_n))_n=(w_n(a\otimes b)v_n)_n=((a\otimes b)u_n)_n \end{align*} for any $g\in G$, $a\in A$ and $b\in\mathcal{W}$. By (1), (2), (3) and (4), we see that $$
\lim_{n\to\omega} \| u_n\varphi_n(f)(a\otimes b)u_n^* - \psi_n (f)(a\otimes b) \| < \varepsilon $$ for any $f\in F_1$, $a\in F_2$ and $b\in F_3$. Therefore, taking a sufficiently large $n$, we obtain a contradiction. Consequently, the proof is complete. \end{proof}
The following theorem is the main result in this section.
\begin{thm}\label{thm:classification-normal} Let $N_1$ and $N_2$ be normal elements in $F(A\otimes\mathcal{W})^{\gamma}$ such that $\mathrm{Sp} (N_1)=\mathrm{Sp} (N_2)$ and $\tau_{\omega} (f(N_1)) >0$ for any $f\in C(\mathrm{Sp}(N_1))_{+}\setminus \{0\}$. Then there exists a unitary element $u$ in $F(A\otimes\mathcal{W})^{\gamma}$ such that $uN_1u^* =N_2$ if and only if $ \tau_{\omega} (f(N_1))= \tau_{\omega} (f(N_2)) $ for any $f\in C(\mathrm{Sp}(N_1))$. \end{thm} \begin{proof} This theorem can be proved by an argument similar to that in the proof of \cite[Theorem 5.3]{Na3}. We give a proof for the reader's convenience.
Since the only if part is clear, we will show the if part. Define unital homomorphisms $\varphi$ and $\psi$ from $C(\mathrm{Sp}(N_1))$ to $F(A\otimes\mathcal{W})^{\gamma}$ by $\varphi (f):= f(N_1)$ and $\psi (f):=f(N_2)$, respectively. By the Choi-Effros lifting theorem, we see that there exist sequences of unital c.p. maps $\varphi_n$ and $\psi_n$ from $C(\mathrm{Sp}(N_1))$ to $(A\otimes\mathcal{W})^{\sim}$ such that $f(N_1)=[(\varphi_n(f))_n]$ and $f(N_2)=[(\psi_n(f))_n]$ for any $f\in C(\mathrm{Sp}(N_1))$. Then we have $$
|\tau (\varphi_n(f_1))-\tau_{\omega}(f_1(N_1))| \to 0, \; \| [\varphi_n(f_1),x]\| \to 0, \;
\| [\psi_n(f_1),x]\| \to 0, $$ $$
\| (\varphi_n(f_1f_2)- \varphi_n(f_1)\varphi_n(f_2))x\| \to 0,\;
\| (\psi_n(f_1f_2)- \psi_n (f_1)\psi_n(f_2))x\| \to 0, $$ $$
\| (\gamma_g (\varphi_n (f_1))-\varphi_n (f_1))x \|\to 0, \;
\| (\gamma_g (\psi_n (f_1))-\psi_n (f_1))x \|\to 0, $$ $$
|\tau (\varphi_n(f_1))-\tau (\psi_n(f_1))| \to 0 $$ as $n\to\omega$ for any $f_1,f_2\in C(\mathrm{Sp}(N_1))$, $x\in A\otimes\mathcal{W}$ and $g\in G$.
We denote by $\iota$ the identity function on $\mathrm{Sp}(N_1)$, that is, $\iota (z)=z$ for any $z\in\mathrm{Sp}(N_1)$. Let $F_1:=\{1, \iota \}\subset C(\mathrm{Sp}(N_1))$, and let $\{F_{2,k}\}_{k\in\mathbb{N}}$ and $\{F_{3,k}\}_{k\in\mathbb{N}}$ be increasing sequences of finite subsets in $A$ and $\mathcal{W}$ such that $A=\overline{\bigcup_{k\in\mathbb{N}} F_{2,k}}$ and $\mathcal{W}=\overline{\bigcup_{k\in\mathbb{N}} F_{3,k}}$, respectively. For any $k\in\mathbb{N}$, we obtain mutually orthogonal positive elements $h_{1,k},h_{2,k},...,h_{l(k),k}$ in $C(\mathrm{Sp}(N_1))$ of norm one by applying Theorem \ref{thm:unitary-equivalence-ed} to $F_1$, $F_{2,k}$, $F_{3,k}$ and $1/k$. Put $$ \nu_k := \frac{1}{2} \min\{\tau_{\omega} (h_{1,k}(N_1)),\tau_{\omega} (h_{2,k}(N_1)),..., \tau_{\omega} (h_{l(k),k}(N_1)) \} >0. $$ Applying Theorem \ref{thm:unitary-equivalence-ed} to $\nu_k$, we obtain finite subsets $\mathcal{G}_{1,k}\subset C(\mathrm{Sp}(N_1))$, $\mathcal{G}_{2,k} \subset A\otimes\mathcal{W}$ and $\delta_k>0$. We may assume that $\{\mathcal{G}_{1,k}\}_{k\in\mathbb{N}}$ and $\{\mathcal{G}_{2,k}\}_{k\in\mathbb{N}}$ are increasing sequences and $\delta_k>\delta_{k+1}$ for any $k\in\mathbb{N}$. We can find a sequence $\{X_k\}_{k=1}^\infty$ of elements in $\omega$ such that $X_{k+1}\subset X_{k}$ and for any $n\in X_{k}$, $$
|\tau (\varphi_n(h_{i,k})) - \tau_{\omega}(h_{i,k}(N_1)) | < \nu_k , \;
\| [\varphi_n(f_1),x]\| < \delta_k, \; \| [\psi_n(f_1),x]\| < \delta_k, $$ $$
\| (\varphi_n (f_1f_2)- \varphi_n (f_1)\varphi_n (f_2))x\| < \delta_k, \;
\| (\psi_n (f_1f_2)- \psi_n (f_1)\psi_n (f_2))x\| < \delta_k, $$ $$
\| (\gamma_g (\varphi_n (f_1))-\varphi_n (f_1))x \| < \delta_k, \;
\| (\gamma_g (\psi_n (f_1))-\psi_n (f_1))x \| < \delta_k, $$ $$
| \tau (\varphi_n (f_1)) -\tau (\psi_n (f_1)) | < \delta_k $$ for any $i\in\{1,2,...,l(k)\}$, $f_1,f_2\in\mathcal{G}_{1,k}$, $x\in\mathcal{G}_{2,k}$ and $g\in G$. Since we have $$ \tau (\varphi_n(h_{i,k})) > \tau_{\omega}(h_{i,k}(N_1))-\nu_k \geq 2\nu_k - \nu_k = \nu_k $$ for any $i\in\{1,2,...,l(k)\}$, Theorem \ref{thm:unitary-equivalence-ed} implies that for any $n\in X_{k}$, there exists a contraction $u_{k,n}$ in $(A\otimes\mathcal{W})^{\sim}$ such that $$
\| (a\otimes b) (u_{k,n}^*u_{k,n}-1)\| < \frac{1}{k}, \; \| (a\otimes b)(u_{k,n}u_{k,n}^*-1)\| < \frac{1}{k}, $$ $$
\|(a\otimes b)(\gamma_g(u_{k,n})-u_{k,n})\| <\frac{1}{k}, \;
\| u_{k,n}\varphi_n (f)(a\otimes b)u_{k,n}^* - \psi_n(f)(a\otimes b) \| <\frac{1}{k} $$ for any $f\in F_1$, $a\in F_{2,k}$, $b\in F_{3,k}$ and $g\in G$. Since $F_1=\{1, \iota \}$, we have $$
\| [u_{k,n}, a\otimes b]\| \leq \| u_{k,n} (a\otimes b)(1-u_{k,n}^*u_{k,n})\|
+ \| (u_{k,n} (a\otimes b)u_{k,n}^*- a\otimes b)u_{k,n} \|< \frac{2}{k} $$ and $$
\| u_{k,n}\varphi_n (\iota) (a\otimes b)u_{k,n}^*- \psi_n(\iota)(a\otimes b)\| < \frac{1}{k} $$ for any $n\in X_{k}$, $a\in F_{2,k}$ and $b\in F_{3,k}$. Put $$ u_{n} := \left\{\begin{array}{cl} 1 & \text{if } n\notin X_1 \\ u_{k,n} & \text{if } n\in X_k\setminus X_{k+1}\quad (k\in\mathbb{N}) \end{array} \right.. $$ Then $$
\| (a\otimes b)(u_n^*u_n-1)\|\to 0, \;
\| (a\otimes b)(u_nu_n^*-1)\|\to 0, \; \| (a\otimes b)(\gamma_g (u_n)-u_n)\|\to 0, $$ $$
\| [u_n, a\otimes b]\|\to 0,\; \|(u_n\varphi_n(\iota)u_n^*-\psi_n(\iota))
(a\otimes b)\|\to 0 $$ as $n\to \omega$ for any $a\in A$, $b\in\mathcal{W}$ and $g\in G$. Therefore $[(u_n)_n]$ is a unitary element in $F(A\otimes\mathcal{W})^{\gamma}$ and $[(u_n)_n]N_1[(u_n)_n]^*=N_2$. \end{proof}
Applying the theorem above to projections, we obtain the following corollary. Note that if $p$ is a projection, then $C(\mathrm{Sp}(p))$ can be identified with
$\{\lambda_1 p+ \lambda_2(1-p)\; |\; \lambda_1,\lambda_2\in\mathbb{C}\}$. Hence it is clear that $\tau_{\omega}(f(p))>0$ for any $f\in C(\mathrm{Sp}(p))_{+}\setminus \{0\}$ if and only if $0<\tau_{\omega}(p)<1$. Also, for projections $p$ and $q$, we have $\tau_{\omega}(f(p))=\tau_{\omega}(f(q))$ for any $f\in C(\mathrm{Sp}(p))$ if and only if $\tau_{\omega}(p)=\tau_{\omega}(q)$.
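Explicitly, for a projection $p$ with $\mathrm{Sp}(p)=\{0,1\}$ and $f\in C(\mathrm{Sp}(p))$ we have $f(p)=f(1)p+f(0)(1-p)$, and hence $$ \tau_{\omega}(f(p))=f(1)\tau_{\omega}(p)+f(0)(1-\tau_{\omega}(p)), $$ which is determined by $\tau_{\omega}(p)$ alone and is positive for every $f\in C(\mathrm{Sp}(p))_{+}\setminus \{0\}$ precisely when $0<\tau_{\omega}(p)<1$.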
\begin{cor}\label{thm:comparison} Let $p$ and $q$ be projections in $F(A\otimes \mathcal{W})^{\gamma}$ such that $0< \tau_{\omega} (p) <1$. Then $p$ and $q$ are unitarily equivalent if and only if $ \tau_{\omega} (p)= \tau_{\omega} (q) $. \end{cor}
The following corollary is important in the next section.
\begin{cor}\label{cor:comparison} Let $p$ and $q$ be projections in $F(A\otimes \mathcal{W})^{\gamma}$ such that $0< \tau_{\omega} (p) \leq 1$. Then $p$ and $q$ are Murray-von Neumann equivalent if and only if $ \tau_{\omega} (p)= \tau_{\omega} (q) $. \end{cor} \begin{proof} By Corollary \ref{thm:comparison}, it suffices to show that if $p$ is a projection in $F(A\otimes \mathcal{W})^{\gamma}$ such that $\tau_{\omega}(p)=1$, then $p$ is Murray-von Neumann equivalent to $1$. Proposition \ref{pro:key-pro} implies that there exists a projection $r$ in $F(A\otimes\mathcal{W})^{\gamma}$ such that $r\leq p$ and $\tau_{\omega}(r)=1/2$. By Corollary \ref{thm:comparison}, $p-r$ is unitarily equivalent to $1-r$. Therefore $p=(p-r)+r$ is Murray-von Neumann equivalent to $(1-r)+r=1$. \end{proof}
\section{Rohlin type theorem}\label{sec:main} In this section we shall show that $\gamma$ has the Rohlin property.
For a $\gamma$-cocycle $w$ in $F(A\otimes\mathcal{W})$, define an action $\gamma^{w}$ on $F(A\otimes\mathcal{W})\otimes M_{2}(\mathbb{C})$ by $$ \gamma^{w}_g := \mathrm{Ad}\left(\left(\begin{array}{cc}
1 & 0 \\
0 & w(g)
\end{array} \right)\right) \circ (\gamma_g\otimes \mathrm{id}) $$ for any $g\in G$. Since $\gamma$ has the weak Rohlin property, we obtain the following lemma by arguments similar to those in \cite[Proposition 4.8]{MS2} and \cite[Proposition 3.3]{MS3} (see also the arguments in Section \ref{sec:target}). We leave the proof to the reader.
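Note that $\gamma^{w}$ is indeed an action of $G$: since $\gamma_g(1)=1$ and $w$ satisfies the cocycle identity $w(gh)=w(g)\gamma_g(w(h))$, we have $$ \gamma^{w}_g\circ \gamma^{w}_h = \mathrm{Ad}\left(\left(\begin{array}{cc}
1 & 0 \\
0 & w(g)\gamma_g(w(h))
\end{array} \right)\right) \circ (\gamma_{gh}\otimes \mathrm{id}) = \gamma^{w}_{gh} $$ for any $g,h\in G$.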
\begin{lem}\label{lem:fixed-strict-comparison} Let $a$ and $b$ be positive elements in $(F(A\otimes\mathcal{W})\otimes M_{2}(\mathbb{C}))^{\gamma^{w}}$ such that $d_{\tau_{\omega}\otimes\mathrm{Tr}_2}(a) <d_{\tau_{\omega}\otimes\mathrm{Tr}_2}(b)$ where $\mathrm{Tr}_2$ is the (unnormalized) usual trace on $M_2(\mathbb{C})$. Then there exists an element $r$ in $(F(A\otimes\mathcal{W})\otimes M_{2}(\mathbb{C}))^{\gamma^{w}}$ such that $r^*br=a$. \end{lem}
The proof of the following lemma is based on Connes' $2\times 2$ matrix trick in \cite[Corollary 2.6]{C3}.
\begin{lem}\label{lem:cohomology} Every $\gamma$-cocycle $w$ in $F(A\otimes\mathcal{W})$ is a coboundary. \end{lem} \begin{proof} Let $\varepsilon >0$. By Proposition \ref{pro:key-pro}, there exists a projection $p_{\varepsilon}$ in $F(A\otimes\mathcal{W})^{\gamma}$ such that $\tau_{\omega}(p_{\varepsilon})=1-\varepsilon$. Taking a suitable subsequence of a representative of $p_{\varepsilon}$, we may assume that $w(g)p_{\varepsilon}=p_{\varepsilon}w(g)$ for any $g\in G$. Lemma \ref{lem:fixed-strict-comparison} implies that there exists an element $R_{\varepsilon}$ in $(F(A\otimes\mathcal{W})\otimes M_{2}(\mathbb{C}))^{\gamma^{w}}$ such that $$ R_{\varepsilon}^*\left(\begin{array}{cc}
1 & 0 \\
0 & 0
\end{array} \right) R_{\varepsilon} = \left(\begin{array}{cc}
0 & 0 \\
0 & p_{\varepsilon}
\end{array} \right) . $$ A diagonal argument shows that there exist a projection $p$ in $F(A\otimes\mathcal{W})^{\gamma}$ and an element $R$ in $(F(A\otimes\mathcal{W})\otimes M_{2}(\mathbb{C}))^{\gamma^{w}}$ such that $\tau_{\omega}(p)=1$ and $$ R^*\left(\begin{array}{cc}
1 & 0 \\
0 & 0
\end{array} \right) R = \left(\begin{array}{cc}
0 & 0 \\
0 & p
\end{array} \right) . $$ By Corollary \ref{cor:comparison}, there exists an element $s$ in $F(A\otimes\mathcal{W})^{\gamma}$ such that $s^*s=1$ and $ss^*=p$. Taking suitable subsequences of representatives of $s$, $p$ and $R$, we may assume that $w(g)s=sw(g)$ for any $g\in G$ and $$ \left(\begin{array}{cc}
0 & 0 \\
0 & s^*
\end{array} \right) R^*\left(\begin{array}{cc}
1 & 0 \\
0 & 0
\end{array} \right) R \left(\begin{array}{cc}
0 & 0 \\
0 & s
\end{array} \right) = \left(\begin{array}{cc}
0 & 0 \\
0 & 1
\end{array} \right) . $$ It is easy to see that there exists a projection $q$ in $F(A\otimes\mathcal{W})^{\gamma}$ such that $\tau_{\omega}(q)=1$ and $$ \left(\begin{array}{cc}
q & 0 \\
0 & 0
\end{array} \right) = \left(\begin{array}{cc}
1 & 0 \\
0 & 0
\end{array} \right) R \left(\begin{array}{cc}
0 & 0 \\
0 & p
\end{array} \right) R^* \left(\begin{array}{cc}
1 & 0 \\
0 & 0
\end{array} \right) . $$ By Corollary \ref{cor:comparison}, there exists an element $t$ in $F(A\otimes\mathcal{W})^{\gamma}$ such that $t^*t=1$ and $tt^*=q$. Put $$ V:= \left(\begin{array}{cc}
0 & 0 \\
0 & s^*
\end{array} \right) R^* \left(\begin{array}{cc}
t & 0 \\
0 & 0
\end{array} \right) . $$ Then we have $V\in (F(A\otimes\mathcal{W})\otimes M_{2}(\mathbb{C}))^{\gamma^{w}}$, $$ V^*V=\left(\begin{array}{cc}
1 & 0 \\
0 & 0
\end{array} \right) \quad \text{and} \quad VV^*=\left(\begin{array}{cc}
0 & 0 \\
0 & 1
\end{array} \right) . $$ It is easy to see that there exists a unitary element $v$ in $F(A\otimes\mathcal{W})$ such that $$ V= \left(\begin{array}{cc}
0 & 0 \\
v & 0
\end{array} \right) . $$ Since $V\in (F(A\otimes\mathcal{W})\otimes M_{2}(\mathbb{C}))^{\gamma^{w}}$, $w(g)\gamma_g(v)=v$ for any $g\in G$. Consequently, $w$ is a coboundary. \end{proof}
\begin{rem} The lemma above shows that the first cohomology of $\gamma$ vanishes. This property is one of the important properties for the Bratteli-Elliott-Evans-Kishimoto intertwining argument (see, for example, \cite{EK} and \cite{Kis1}) in the classification of Rohlin actions. \end{rem}
The following theorem is the main result in this paper.
\begin{thm}\label{thm:main} Let $A$ be a simple separable nuclear C$^*$-algebra with a unique tracial state and no unbounded traces, and let $\alpha$ be a strongly outer action of a finite group $G$ on $A$. Then $\gamma=\alpha\otimes\mathrm{id}$ on $A\otimes\mathcal{W}$ has the Rohlin property. \end{thm} \begin{proof}
We identify $B(\ell^2(G))$ with $M_{|G|}(\mathbb{C})$. Also, we can identify $F(A\otimes\mathcal{W})^{\gamma}$ with $F(A\otimes \mathcal{W}\otimes
\bigotimes_{n\in\mathbb{N}} M_{|G|}(\mathbb{C}))^{\gamma\otimes\mathrm{id}}$ because $\mathcal{W}$ is UHF stable. Let $\lambda$ be the left regular representation of $G$ on $\ell^2(G)$. Define a map $w$ from $G$ to $F(A\otimes\mathcal{W})^{\gamma}$ by $$ w (g) := [(h_n\otimes k_n \otimes \overbrace{1\otimes \cdots \otimes 1}^n \otimes \lambda(g) \otimes 1\otimes \cdots )_n ] $$ where $\{h_n\}_{n=1}^\infty$ and $\{k_n\}_{n=1}^\infty$ are approximate units for $A$ and $\mathcal{W}$, respectively. Then $w$ is a homomorphism, and hence $w$ is a $\gamma$-cocycle in $F(A\otimes\mathcal{W})$. By Lemma \ref{lem:cohomology}, there exists a unitary element $v$ in $F(A\otimes\mathcal{W})$ such that $w(g)=v\gamma_g(v^*)$ for any $g\in G$. For any $g\in G$, let $e_{g}$ be the orthogonal projection onto $\mathbb{C}\delta_g$ where
$\{\delta_h\; |\; h\in G\}$ is the canonical basis of $\ell^2(G)$, and put $$ p_g:= v^*[(h_n\otimes k_n \otimes \overbrace{1\otimes \cdots \otimes 1}^n \otimes e_g \otimes 1\otimes \cdots )_n ]v. $$ Then $\{p_{g}\}_{g\in G}$ is a partition of unity in $F(A\otimes\mathcal{W})$ consisting of projections satisfying $$ \gamma_{g} (p_{h}) =p_{gh} $$ for any $g,h\in G$. Consequently, $\gamma$ has the Rohlin property. \end{proof}
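For the reader's convenience, we sketch the verification of the covariance relation, a step left implicit in the proof above. Write $E_h:=[(h_n\otimes k_n \otimes \overbrace{1\otimes \cdots \otimes 1}^n \otimes e_h \otimes 1\otimes \cdots )_n]$, so that $p_h=v^*E_hv$. The identity $w(g)=v\gamma_g(v^*)$ gives $\gamma_g(v^*)=v^*w(g)$, while $\lambda(g)e_h\lambda(g)^*=e_{gh}$ implies $w(g)E_hw(g)^*=E_{gh}$; moreover $\gamma_g$ (identified with $\gamma_g\otimes\mathrm{id}$) fixes the class $E_h$. Hence $$ \gamma_{g}(p_{h})=\gamma_g(v^*)E_h\gamma_g(v)=v^*w(g)E_hw(g)^*v=v^*E_{gh}v=p_{gh}. $$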
Combining the theorem above and the classification results in \cite{CE} and \cite{EGLN}, we obtain the following corollary.
\begin{cor}\label{cor:main} Let $A$ and $B$ be simple separable nuclear C$^*$-algebras with a unique tracial state and no unbounded traces, and let $\alpha$ and $\beta$ be strongly outer actions of a finite group $G$ on $A$ and $B$, respectively. Then $\alpha\otimes\mathrm{id}$ on $A\otimes\mathcal{W}$ is conjugate to $\beta\otimes\mathrm{id}$ on $B\otimes\mathcal{W}$. \end{cor} \begin{proof} By \cite[Theorem 6.1]{CE}, $A\otimes\mathcal{Z}$ and $B\otimes\mathcal{Z}$ have finite nuclear dimension. Hence \cite[Corollary 6.7]{EGLN} implies that $A\otimes\mathcal{W}$ and $B\otimes\mathcal{W}$ are isomorphic to $\mathcal{W}$. Therefore we obtain the conclusion by Theorem \ref{thm:classification} and Theorem \ref{thm:main}. \end{proof}
\begin{rem} (1) If $\alpha^{\prime}$ is not a strongly outer action of a non-trivial finite group $G$ on $A$, then $(A\otimes\mathcal{W})\rtimes_{\alpha^{\prime}\otimes\mathrm{id}} G$ has at least two extremal tracial states. Hence $\alpha^{\prime}\otimes\mathrm{id}$ is not (cocycle) conjugate to the action in the corollary above. \ \\ (2) There exist uncountably many non-conjugate strongly outer actions of $\mathbb{Z}_2$ on $\mathcal{W}$ by \cite[Example 5.6]{Na4} and \cite[Remark 5.7]{Na4}. \ \\ (3) To generalize the corollary above to amenable group actions, it seems important to characterize $\mathcal{W}$ in terms of the central sequence C$^*$-algebra $F(\mathcal{W})$. \ \\ (4) If $\alpha$ is a strongly outer action of a finite group $G$ on a simple separable nuclear C$^*$-algebra $A$ with a unique tracial state and no unbounded traces, then $A\rtimes_{\alpha} G$ is a simple separable nuclear C$^*$-algebra with a unique tracial state and no unbounded traces. Hence $(A\otimes\mathcal{W})\rtimes_{\alpha\otimes\mathrm{id}}G \cong (A\rtimes_{\alpha} G)\otimes\mathcal{W}$ is isomorphic to $\mathcal{W}$. \ \\ (5) We need the unique trace property of $A$ for $A\otimes\mathcal{W}\cong \mathcal{W}$. Moreover, we used this property in many arguments in Section \ref{sec:target}. However, it seems possible to generalize some results in this paper to more general $KK$-contractible C$^*$-algebras with suitable modifications. \end{rem}
\end{document}
\begin{document}
\author{Luca Minotti \thanks{Universit\`a di Pavia. email: \textsf{luca.minotti01@universitadipavia.it}. } }
\title{Visco-Energetic solutions to one-dimensional rate-independent problems}
\begin{abstract} Visco-Energetic solutions of rate-independent systems (recently introduced in \cite{MinSav16}) are obtained by solving a modified time Incremental Minimization Scheme, where at each step the dissipation is reinforced by a viscous correction $\delta$, typically a quadratic perturbation of the dissipation distance. Like Energetic and Balanced Viscosity solutions, they provide a variational characterization of rate-independent evolutions, with an accurate description of their jump behaviour. \par In the present paper we study Visco-Energetic solutions in the one-dimensional case and we obtain a full characterization for a broad class of energy functionals. In particular, we prove that they exhibit a sort of intermediate behaviour between Energetic and Balanced Viscosity solutions, which can be finely tuned according to the choice of the viscous correction $\delta$. \end{abstract}
\tableofcontents
\section{Introduction} Rate-independent problems occur in several contexts. We refer the reader to the recent monograph \cite{Mielke-Roubicek15} for a survey of rate-independent modeling and analysis in a wide variety of applications. The analytical theory of rate-independent evolutions encounters some mathematical challenges, which are apparent even in the simplest example, the \emph{doubly nonlinear} differential inclusion \begin{equation} \label{DN} \partial \Psi(u'(t))+{\mathrm D} {\ensuremath{\mathcal E}}(t,u(t))\ni 0\quad \text{in $X^*$}\quad\text{ for a.a. $t\in(a,b)$}. \tag{DN} \end{equation} Here $X^*$ is the dual of a finite-dimensional linear space, ${\mathrm D}{\ensuremath{\mathcal E}}$ is the (space) differential of a time-dependent energy functional ${\ensuremath{\mathcal E}}\in {\mathrm C}^1([a,b]\times X;\mathbb{R})$ and $\Psi: X\rightarrow [0,+\infty)$ is a convex and nondegenerate dissipation potential, hereafter supposed \emph{positively homogeneous of degree 1}. \par It is well known that if the energy ${\ensuremath{\mathcal E}}(t,\cdot)$ is not strictly convex, one cannot expect the existence of an absolutely continuous solution to \eqref{DN}, so that the natural space for candidate solutions $u$ is $\mathrm{BV}([a,b];X)$. This fact has motivated the development of various weak formulations of \eqref{DN}, which should also take into account the behaviour of $u$ at jump points.
\paragraph{Energetic solutions.} The first is the notion of \emph{Energetic solutions}, \cite{Mielke-Theil-Levitas02,Mielke-Theil04,Mainik-Mielke05}. For the simplified rate-independent evolution \eqref{DN}, Energetic solutions are curves $u:[a,b]\to X$ with bounded variation that are characterized by two variational conditions, called \emph{stability} (S$_\Psi$) and \emph{energy balance} (E$_\Psi$): \begin{equation} \label{en-stability} {\ensuremath{\mathcal E}}(t,u(t))\le {\ensuremath{\mathcal E}}(t,z)+\Psi(z-u(t))\qquad\text{for every $z\in X$}, \tag{${\mathrm S}_\Psi$} \end{equation} \begin{equation} \label{en-energy-balance} {\ensuremath{\mathcal E}}(t,u(t))+\varpsi(u;[a,t])={\ensuremath{\mathcal E}}(a,u(a))+\int_a^t\partial_t{\ensuremath{\mathcal E}}(s,u(s))\,{\mathrm d} s, \tag{${\mathrm E}_\Psi$} \end{equation} where $\varpsi$ is the pointwise total variation with respect to $\Psi$ (see \eqref{eq:total-variation} in section \ref{sec:2} for the precise definition). \par One of the strongest features of the energetic approach is the possibility of constructing Energetic solutions by solving the \emph{time Incremental Minimization Scheme} \begin{equation} \label{IMS} \min_{U\in X} {\ensuremath{\mathcal E}}(t_\tau^n,U)+\Psi\big(U-U_\tau^{n-1}\big). \tag{$\mathrm{IM}_\Psi$} \end{equation} If ${\ensuremath{\mathcal E}}$ has compact sublevels, then for every ordered partition $\tau=\{t^0_\tau=a,t^1_\tau,\cdots,t^{N-1}_\tau,t^N_\tau=b\}$ of the interval $[a,b]$ with variable time step $\tau^n:=t^n_\tau-t^{n-1}_\tau$ and for every initial choice $U^0_\tau= u(a)$ we can construct by induction an approximate sequence $(U^n_\tau)_{n=0}^N$ solving \eqref{IMS}. If $\overline U_\tau$ denotes the left-continuous piecewise constant interpolant of $(U^n_\tau)_n$, then the family of discrete solutions $\overline U_\tau$ has limit curves with respect to pointwise convergence as the maximum of the step sizes
$|\tau|=\max \tau^n$ vanishes, and every limit curve $u$ is an energetic solution.
\par Consider for instance the 1-dimensional example when the energy has the form \begin{equation} \label{1d-energy} {\ensuremath{\mathcal E}}(t,u):=W(u)-\ell(t)u\quad\text{for a double-well potential such as $W(u)=(u^2-1)^2$}. \end{equation}
When the loading $\ell \in {\mathrm C}^1([a,b])$ is strictly increasing, $\Psi(v):=\alpha |v|$ with $\alpha>0$, and $u(a)$ is chosen carefully, it is possible to prove, \cite{Rossi-Savare13}, that an Energetic solution $u$ is an increasing selection of the differential inclusion \begin{equation} \label{eq:23} \alpha+\partial W^{**}(u(t))\ni\ell(t)\qquad\text{for every $t\in [a,b]$}, \end{equation} where $W^{**}$ is the convex envelope $W^{**}(u)=((u^2-1)_+)^2$.
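For this particular $W$ the formula for the convex envelope can be verified directly: since $W\ge 0$ and $W(\pm 1)=0$, every convex minorant of $W$ vanishes on $[-1,1]$, while for $|u|\ge 1$ $$ W'(u)=4u(u^2-1),\qquad W''(u)=12u^2-4\ge 8>0, $$ so $W$ is already convex there. Gluing the two pieces (with matching slopes $W'(\pm 1)=0$) produces the ${\mathrm C}^1$ convex function $((u^2-1)_+)^2$, which is therefore the envelope $W^{**}$.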
\begin{figure}
\caption{Energetic solution for a double-well energy $W$ with an
increasing load $\ell$.}
\label{fig:1}
\end{figure} In this context, the solution $u$ has a jump when the so-called \emph{Maxwell rule} is satisfied: \begin{equation} \int_{\ul(t)}^{\ur(t)}\Big( W'(r)-\ell(t)+\alpha_+ \Big)\,{\mathrm d} r=0. \end{equation} The latter evolution mode prescribes that for all $t\in [a,b]$, the value $u(t)$ only attains \emph{absolute minima} of the function $u\mapsto W(u)-(\ell(t)-\alpha_+)u$. This corresponds to a convexification of $W$ and causes the system to jump ``early''.
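In the symmetric example \eqref{1d-energy} the Maxwell rule can be checked by hand. Assuming that the jump connects the two wells, $\ul(t)=-1$ and $\ur(t)=1$, and using $\int_{-1}^{1}W'(r)\,{\mathrm d} r=W(1)-W(-1)=0$, the rule reduces to $$ \int_{-1}^{1}\Big( W'(r)-\ell(t)+\alpha_+ \Big)\,{\mathrm d} r = 2\big(\alpha_+-\ell(t)\big)=0, $$ i.e.\ the jump occurs exactly when $\ell(t)=\alpha_+$, the value at which $u\mapsto W(u)-(\ell(t)-\alpha_+)u$ attains its two absolute minima $u=\pm 1$.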
\paragraph{Balanced Viscosity (BV) solutions.} The global stability condition \eqref{en-stability} may lead the system to change instantaneously in a very drastic way, jumping into far-apart energetic configurations. In order to obtain a formulation where local effects are more relevant (see \cite{DalMaso-Toader02, NegOrt07QSCP,Efendiev-Mielke06}), a natural idea is to consider rate-independent evolution as the limit of systems with smaller and smaller viscosity, namely to study the approximation of \eqref{DN} \begin{equation} \label{eq:DNepsilon} \partial\Psi_\varepsilon(u'(t))+{\mathrm D}{\ensuremath{\mathcal E}}(t,u(t))\ni 0 \quad\text{in $X^*$},\qquad \Psi_\varepsilon(v):=\Psi(v)+\frac{\varepsilon}{2}\Psi^2(v), \tag{$\mathrm{DN}_\varepsilon$} \end{equation} which corresponds to introducing a quadratic (or even more general) perturbation in the time Incremental Minimization Scheme: \begin{equation} \label{eq:173} \min_{U\in X} {\ensuremath{\mathcal E}}(t_\tau^n,U)+\Psi\big(U-U_\tau^{n-1}\big)+\frac{\varepsilon^n}{2\tau^n}\Psi^2\big(U-U_\tau^{n-1}\big). \tag{$\mathrm{IM}_{\Psi,\varepsilon}$} \end{equation}
The choice $\varepsilon^n=\varepsilon^n(\tau)\downarrow 0$ with $ \frac{\varepsilon^n(\tau)}{|\tau|}\uparrow+\infty$ leads to the notion of \emph{Balanced Viscosity} solutions \cite{Rossi-Mielke-Savare08,Mielke-Rossi-Savare09,Mielke-Rossi-Savare12,Mielke-Rossi-Savare13}. Under suitable smoothness and lower semicontinuity assumptions, it is possible to prove that all the limit curves satisfy a \emph{local stability} condition and a modified energy balance, involving an augmented total variation that encodes a more refined description of the jump behaviour of $u$: roughly speaking, a jump between $\ul(t)$ and $\ur(t)$ occurs only when these values can be connected by a rescaled solution $\vartheta$ of \eqref{eq:DNepsilon}, where the energy is frozen at the jump time $t$ \begin{equation}
\label{eq:173bis}
\partial\Psi(\vartheta'(s))+\vartheta'(s)+{\mathrm D}{\ensuremath{\mathcal E}}(t,\vartheta(s))\ni0. \end{equation}
In the one-dimensional example \eqref{1d-energy}, with the loading $\ell$ strictly increasing and under suitable choices of the initial datum, it is possible to prove, \cite{Rossi-Savare13}, that $u$ is a BV solution if and only if it is nondecreasing and \begin{equation} \label{eq:24} \alpha+W'(u(t))=\ell(t)\qquad \text{for all $t\in[a,b]\setminus \Ju$}. \end{equation} \begin{figure}
\caption{BV solution for a double-well energy $W$ with an
increasing load $\ell$. The blue line denotes the
path described by the optimal transition $\vartheta$ solving \eqref{eq:173bis}.}
\label{fig:2}
\end{figure}
The evolution mode \eqref{eq:24} follows the so-called \emph{Delay rule}, related to hysteresis behaviour. The system also accepts \emph{relative minima} of $u\mapsto W(u)-(\ell(t)-\alpha)u$, and thus the function $t\mapsto u(t)$ tends to jump ``as late as possible''.
\paragraph{Visco-Energetic solutions and main results of the paper.} Recently, in \cite{MinSav16}, the new notion of Visco-Energetic (VE) solutions has been proposed. This is a sort of intermediate situation between energetic and balanced viscosity, since these solutions are obtained by studying the time Incremental Minimization Scheme \eqref{eq:173} when one keeps constant the ratio $\mu:=\varepsilon^n/\tau^n$. In this way the dissipation $\Psi$ is corrected by an extra viscous penalization term, for example of the form \begin{equation} \label{eq:169} \delta(u,v):=\frac{\mu}2\Psi^2(v-u)\qquad \text{for every $u,v\in X,\quad\mu\ge0$,} \end{equation} which induces a stronger localization of the minimizers, according to the size of the parameter $\mu$. The new modified time Incremental Minimization Scheme is therefore \begin{equation}
\label{eq:167mu}
\min_{U\in X} {\ensuremath{\mathcal E}}(t_\tau^n,U)+\Psi\big(U-U_\tau^{n-1}\big)+\delta(U,U_\tau^{n-1}) \tag{$\mathrm{IM}_{\Psi,\mu}$}. \end{equation} \par As in the energetic and BV cases, a variational characterization of the functions obtained as a limit of the solution of \eqref{eq:167mu} is possible, still involving a suitable stability condition and an energetic balance. Concerning stability, we have a natural generalization of \eqref{en-stability}: \begin{equation} \label{eq:168}
{\ensuremath{\mathcal E}}(t,u(t))\le {\ensuremath{\mathcal E}}(t,v)+\Psi(v-u(t))+\delta(u(t),v) \quad\text{for every }v\in X,\
t\in [a,b]\setminus \Ju.
\tag{S$_{\sf D}$} \end{equation} The right replacement of the energy balance condition is harder to formulate. A heuristic idea, which one can figure out by the direct analysis of \eqref{1d-energy}, is that jump transitions between $\ul(t)$ and $\ur(t)$ should be described by discrete trajectories $\vartheta:Z\to X$ defined in a subset $Z\subset \mathbb{Z}$ such that each value $\vartheta(n)$ is a minimizer of the incremental problem \eqref{eq:167mu}, with datum $\vartheta(n-1)$ and with the energy ``frozen'' at time $t$. In the simplest cases $Z=\mathbb{Z}$, the left and right jump values
are the limits of $\vartheta(n)$ as $n\to\pm\infty$, but more complicated situations can occur, when $Z$ is a proper subset of $\mathbb{Z}$ or when one has to deal with concatenations of (even countably many) discrete transitions and sliding parts parametrized by a continuous variable, where the stability condition \eqref{eq:168} holds. \par In order to capture all of these possibilities, VE transitions are parametrized by continuous maps $\vartheta:E\to X$ defined in an \emph{arbitrary compact subset} of $\mathbb{R}$. We refer to section \ref{subsec:Visco-Energetic-Euclidean} for the precise description of the new dissipation cost and the corresponding total variation. \par In the present paper we study Visco-Energetic solutions in the one-dimensional setting and we obtain a full characterization for the same broad class of energy functionals of \cite{Rossi-Savare13}. Compared with Energetic and BV solutions, the main difficulty here comes from the description of solutions at jumps: as we have mentioned, transitions are now defined in an arbitrary compact subset of $\mathbb{R}$, so that a wide range of possibilities can occur. For instance, the energetic case is a very particular situation, where (e.g. for an increasing jump) the transitions have the form \begin{equation} \vartheta: \{0;1\}\rightarrow\mathbb{R}\qquad\text{such that $\vartheta(0)=\ul(t),\quad\vartheta(1)=\ur(t)$}, \end{equation} defined on a compact set consisting of just two points. \par However, thanks to an accurate analysis of the VE dissipation cost, we are able to describe all these possibilities. Coming back to the standard example \eqref{1d-energy}, with the viscous correction $\delta$ of the form \eqref{eq:169}, the behaviour of VE solutions strongly depends on the parameter $\mu$. More precisely, the following situations can occur: \begin{itemize} \item The viscous correction term is ``strong'', for example $\mu \ge -\min W''$. 
In this case VE solutions exhibit a behaviour comparable to BV solutions: both satisfy the same local stability condition and equation \eqref{eq:24} holds, so that they follow a \emph{delay rule}. \item No viscous corrections are added to the system, which corresponds to $\mu=0$. In this case VE solutions coincide with Energetic solutions, equation \eqref{eq:23} holds and they satisfy the \emph{Maxwell rule}. \item A ``weak'' viscous correction is added to the system, which corresponds to a small $\mu>0$. We have a sort of intermediate situation between the two previous cases: a jump can occur even before reaching a local extremum of $W'$. In particular, an increasing jump can occur when the \emph{modified Maxwell rule} is satisfied: \begin{equation} \label{eq:201} \int_{\ul(t)}^{u_+}\Big( W'(r)-\ell(t)+\alpha+\mu (r-\ul(t))\Big)\,{\mathrm d} r=0,\qquad \text{for some $u_+>\ul(t)$}. \end{equation} In this case $\ur(t)$ may differ from $u_+$: see Figure \ref{fig:3} for more details. \end{itemize}
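The modified Maxwell rule \eqref{eq:201} admits a geometric reading that matches Figure \ref{fig:3}: introducing the line $L(r):=\ell(t)-\alpha-\mu\,(r-\ul(t))$ of slope $-\mu$ through the point $(\ul(t),\ell(t)-\alpha)$, condition \eqref{eq:201} rewrites as the equal-area condition $$ \int_{\ul(t)}^{u_+}\big(W'(r)-L(r)\big)\,{\mathrm d} r=0, $$ so the regions enclosed between the graph of $W'$ and the line $L$ must balance; for $\mu=0$ one recovers the classical Maxwell rule.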
\begin{figure}
\caption{Visco-Energetic solutions for a double-well energy $W$ with an
increasing load $\ell$. When $\mu>-\min W''$ (first picture) the solution jumps when it reaches the maximum of $W'$ and the transition is the ``double chain'' obtained by solving the Incremental Minimization Scheme with frozen time $t$. When $\mu$ is small (second picture) the optimal transition $\vartheta$ makes a first jump connecting $\ul(t)$ with $u_+$ according to the
modified Maxwell rule \eqref{eq:201}: $\ul(t)$ and $u_+$ correspond to
the intersections of $W'$ with the red line, whose slope is $-\mu$. }
\label{fig:3}
\end{figure}
\paragraph{Plan of the paper.} In the paper we will analyse VE solutions to one-dimensional rate-independent evolutions driven by general (nonconvex) potentials and we will assume that the viscous correction $\delta$ satisfies only the natural assumptions of the visco-energetic theory, including in particular the quadratic case \eqref{eq:169}.
In the preliminary section \ref{sec:2}, we recall the main definitions of Visco-Energetic solutions, their dissipation cost and the corresponding total variation, along with some useful properties and characterizations coming from the general theory; all the assumptions of the one-dimensional setting are collected in section \ref{subsec:one-dimensional-setting}. \par In section \ref{sec:3}, after a brief discussion of the stability conditions, we give a characterization of Visco-Energetic solutions with a general (i.e. non monotone) external loading. This characterization involves the \emph{one-sided global slopes} with a $\delta$ correction, which are defined in section \ref{subsec:stability}. \par In section \ref{sec:4} we analyse the case of a monotone loading $\ell$. We exhibit a more explicit characterization of Visco-Energetic solutions, in terms of the monotone envelopes of the one-sided global slopes. This characterization, in a suitable sense, generalizes \eqref{eq:23} and \eqref{eq:24}.
\section{Preliminaries} \label{sec:2} Throughout this section, $[a,b]\subseteq\mathbb{R}$ and \[
(X,\|\cdot\|_X)\quad\text{ will be a finite dimensional normed vector space.} \] We first recall the key elements of the rate-independent system $(X,{\ensuremath{\mathcal E}},\Psi)$ along with the main definitions of Visco-Energetic solutions, their dissipation cost and some useful properties coming from the general theory, \cite{MinSav16}.
\subsection{Rate-independent setting and BV functions} \label{subsec:setting} Hereafter we consider a rate-independent system $(X,{\ensuremath{\mathcal E}},\Psi)$, where the dissipation potential \[ \Psi: X \rightarrow [0,+\infty)\text{ is 1-positively homogeneous, convex, with $\Psi(v)>0$ if $v\neq 0$,} \] and ${\ensuremath{\mathcal E}}$ is a smooth, time-dependent energy functional, which we take of the form \begin{equation} {\ensuremath{\mathcal E}}(t,u):=W(u)-\langle\ell(t),u\rangle \end{equation} for some $W\in {\mathrm C}^1(X)$ bounded from below by a constant $-\lambda>-\infty$ and $\ell\in {\mathrm C}^1\left([a,b];X^*\right)$. We shall also use the notation ${\ensuremath{\mathcal P}}(t,u):=\partial_t{\ensuremath{\mathcal E}}(t,u)=-\langle\ell'(t),u\rangle$ for the partial time derivative of ${\ensuremath{\mathcal E}}$, and we set \begin{equation} \label{eq:K*} K^*:=\partial\Psi(0)=\{w\in X^*:\Psi_*(w)\le 1\}\subset X^*,\quad\text{where $\Psi_*(w):=\sup_{\Psi(v)\le 1}\langle w,v\rangle$}. \end{equation} \par The rate-independent system associated with the energy functional ${\ensuremath{\mathcal E}}$ and the dissipation potential $\Psi$ can be formally described by the \emph{rate-independent doubly nonlinear} differential inclusion \begin{equation} \partial \Psi(u'(t))+{\mathrm D} {\ensuremath{\mathcal E}}(t,u(t))\ni 0\quad \text{in $X^*$}\quad\text{ for a.a. $t\in(a,b)$}. \tag{DN} \end{equation} \par It is well known that for nonconvex energies, solutions to \eqref{DN} may exhibit discontinuities in time. Therefore, we shall consider functions of bounded variation pointwise defined in every $t\in[a,b]$, such that the pointwise total variation $\varpsi(u;[a,b])$ is finite, where \begin{equation} \label{eq:total-variation} \varpsi(u;[a,b]):=\sup\left\{\sum_{m=1}^M\Psi(u(t_m)-u(t_{m-1})):a=t_0<t_1<\dots<t_M=b\right\}. 
\end{equation} Notice that a function $u\in \mathrm{BV}([a,b];X)$ admits left and right limits at every $t\in [a,b]$: \begin{equation} \ul(t):=\lim_{s\uparrow t} u(s),\quad \ur(t):=\lim_{s\downarrow t} u(s),\quad\text{with $\ul(a):=u(a)$ and $\ur(b):=u(b)$} \end{equation} and its pointwise jump set $\Ju$ is the at most countable set defined by \begin{equation} \Ju:=\{t\in [a,b]: \ul(t)\neq u(t)\text{ or }u(t)\neq \ur(t)\}\supset\text{ess-$\Ju$}:=\{t\in (a,b):\ul(t)\neq\ur(t)\}. \end{equation}
\par We denote by $u'$ the distributional derivative of $u$ (extended by $u(a)$ on $(-\infty,a)$ and by $u(b)$ on $(b,+\infty)$): it is a Radon vector measure with finite total variation $|u'|$ supported in $[a,b]$. It is well known, \cite{Ambrosio-Fusco-Pallara00}, that $u'$ can be decomposed into the sum of its diffuse part $u'_{\mathrm{co}}$ and its jump part $u'_{\mathrm J}$: \[ u'=u'_{\mathrm{co}}+u'_{\mathrm J},\quad u'_{\mathrm J}:=u'\llcorner \text{ess-$\Ju$},\quad\text{so that $u'_{\mathrm{co}}(\{t\})=0$ for every $t\in[a,b]$.} \]
\subsection{Visco-Energetic (VE) solutions in the finite-dimensional case} \label{subsec:Visco-Energetic-Euclidean} We recall the notion of Visco-Energetic solutions for the rate-independent system $(X,{\ensuremath{\mathcal E}},\Psi)$ introduced in section \ref{subsec:setting}. The first ingredient we need is a \emph{viscous correction}, namely a continuous map $\delta:X\times X\rightarrow [0,+\infty)$, and its associated augmented dissipation \begin{equation} {\sf D}(u,v):=\Psi(v-u)+\delta(u,v)\qquad \text{for every $u,v\in X$ }. \end{equation} As in the energetic framework, \cite{Mielke-Theil-Levitas02,Mielke-Theil04,Mainik-Mielke05}, Visco-Energetic solutions to the rate-independent system $(X,{\ensuremath{\mathcal E}},\Psi)$ are curves $u:[a,b]\rightarrow X$ with bounded variation that are characterized by a \emph{stability condition} and an \emph{energetic balance}. \par Concerning stability, we have a similar inequality, but we have to replace $\Psi$ with the augmented dissipation ${\sf D}$. More precisely, we will require that for every $t\notin\Ju$ \begin{equation} \label{stability} {\ensuremath{\mathcal E}}(t,u(t))\le {\ensuremath{\mathcal E}}(t,v)+{\sf D}(u(t),v)\quad\text{for every $v\in X$}, \tag{${\mathrm S}_{\sf D}$} \end{equation} which is naturally associated with the ${\sf D}$-stable set $\mathscr{S}_\sfD$. \begin{definition}[{\sf D}-stable set] The ${\sf D}$-stable set is the subset of $[a,b]\times X$ \begin{equation} \label{eq:SSD} \mathscr{S}_\sfD:=\left\{(t,u): {\ensuremath{\mathcal E}}(t,u)\le {\ensuremath{\mathcal E}}(t,v)+{\sf D}(u,v)\quad\text{for every $v\in X$}\right\}. \end{equation} Its section at time $t$ will be denoted by $\mathscr{S}_\sfD(t)$. \end{definition} \par As intuition suggests, not every viscous correction $\delta$ will be admissible for our purpose. 
A full description of Visco-Energetic solutions and admissible viscous corrections is discussed in \cite{MinSav16}, where the general metric-topological setting is considered. For the sake of simplicity, in this section we will assume that $\delta$ satisfies the following condition \begin{equation} \lim_{v\rightarrow u}\frac{\delta(u,v)}{\Psi(v-u)}=0 \quad\text{for every $u \in \mathscr{S}_\sfD(t)$,\quad $t\in [a,b].$} \end{equation} \par The \emph{energetic balance} is harder to formulate than stability, and we first need to introduce the key concepts of transition cost and augmented total variation associated with the dissipation ${\sf D}$. \par Hereafter, for every subset $E\subset\mathbb{R}$ we set $E^-:=\inf E$, $E^+:=\sup E$; whenever $E$ is compact, we will denote by ${\mathfrak H}(E)$ the (at most) countable collection of the connected components of the open set $[E^-,E^+]\setminus E$. We also denote by ${\mathfrak P}_f(E)$ the collection of all finite subsets of $E$. \par Concerning the transition cost, the main point is to consider transitions parametrized by continuous maps $\vartheta:E\to X$ defined in arbitrary compact subsets of $\mathbb{R}$ such that $\vartheta(E^-)=\ul(t)$ and $\vartheta(E^+)=\ur(t)$. More precisely, the first ingredient will be a \emph{residual stability function}: \begin{definition}[Residual stability function] \label{def:res-stability}
For every $t\in[a,b]$ and $u\in
X$
the residual stability function is defined by
\begin{align}
\label{eq:108}
\Res(t,u):&=\sup_{v\in X}
\{\mathcal{E}(t,u)-\mathcal{E}(t,v)-{\sf D}(u,v)\} \\
\label{eq:109} &=\mathcal{E}(t,u)-\inf_{v\in X}\{\mathcal{E}(t,v)+{\sf D}(u,v)\}. \end{align} \end{definition} $\Res$ provides a measure of the failure of the stability condition \eqref{stability}, since for every $u\in X$, $t\in[a,b]$ we get \begin{equation}
\label{eq:62}
{\ensuremath{\mathcal E}}(t,u)\le {\ensuremath{\mathcal E}}(t,v)+{\sf D}(u,v)+\Res(t,u) \end{equation} and \begin{equation} \Res(t,u)=0\quad \Longleftrightarrow \quad \relax u\in \mathscr{S}_\sfD(t).\label{eq:63} \end{equation}
The transition cost is the sum of three contributions, according to the following definition. \begin{definition}[Transition cost]
\label{def:transition-cost}
Let $E\subset \mathbb{R}$ compact and $\vartheta\in {\mathrm C}(E;X)$. For every
$t\in[a,b]$ we define the \emph{transition cost function} $\Cf(t,\vartheta,E)$ by \begin{equation}
\label{eq:33}
\Cf(t,\vartheta,E):=\varpsi(\vartheta,E)+\Cd(\vartheta,E)+\sum_{s\in E\setminus \{E^+ \}}\Res(t,\vartheta(s)) \end{equation} where the first term is the usual total variation \eqref{eq:total-variation}, the second one is \[ \Cd(\vartheta,E):=\sum_{I\in \mathfrak H(E)}\delta(\vartheta(I^- ),\vartheta(I^+ )), \] and the third term is \[ \sum_{s\in
E\setminus\{E^+ \}}\Res(t,\vartheta(s)):=\sup\left\{\sum_{s\in
P}\Res(t,\vartheta(s)): P\in \mathfrak P_f(E\setminus \{E^+ \})\right\}, \] with the sum defined as $0$ if $E\setminus \{E^+ \}=\emptyset$. \end{definition}
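As an elementary illustration of Definition \ref{def:transition-cost}, take the two-point set $E=\{0,1\}$ with $\vartheta(0)=u_0$ and $\vartheta(1)=u_1$. Then $\varpsi(\vartheta,E)=\Psi(u_1-u_0)$, the single gap $(0,1)\in{\mathfrak H}(E)$ contributes $\Cd(\vartheta,E)=\delta(u_0,u_1)$, and $E\setminus\{E^+\}=\{0\}$, so that $$ \Cf(t,\vartheta,E)={\sf D}(u_0,u_1)+\Res(t,u_0), $$ the cost of a single step of the scheme \eqref{eq:167mu} with the time frozen at $t$.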
We adopt the convention $\Cf(t,\vartheta,\emptyset):=0$. It is not difficult to check that the transition cost $\Cf(t,\vartheta,E)$ is additive with respect to $E$: \begin{equation}
\label{eq:195}
\Cf(t,\vartheta,E\cap[a,c])=
\Cf(t,\vartheta,E\cap[a,b])+
\Cf(t,\vartheta,E\cap[b,c])\quad \text{for every }a<b<c. \end{equation} It has been proved, \cite[Theorem 6.3]{MinSav16}, that for every $t\in[a,b]$ and for every $\vartheta\in {\mathrm C}(E;X)$ \begin{equation}\label{eq:crinqualitytheta}
{\ensuremath{\mathcal E}}(t,\vartheta(E^+ ))+\Cf(t,\vartheta,E)\geq {\ensuremath{\mathcal E}}(t,\vartheta(E^- )). \end{equation} The dissipation cost $\Fd(t,u_0,u_1)$ induced by the function $\Cf$ is defined by minimizing $\Cf(t,\vartheta,E)$ among all the transitions $\vartheta$ connecting $u_0$ to $u_1$: \begin{definition}[Jump dissipation cost and augmented total variation] \label{dissipationcost} Let $t\in[a,b]$ be fixed and let us consider $u_0, u_1\in X$. We set \begin{equation} \label{eq:dissipationcost} \Fd(t,u_0,u_1):=\inf\left\{\Cf(t, \vartheta,E): E\Subset \mathbb{R},\ \vartheta\in {\mathrm C}(E; X),\ \vartheta(E^- )=u_0,\ \vartheta(E^+ )=u_1\right\}, \end{equation} with the incremental dissipation cost $\Delta_{\sf c}(t, u_0,u_1):=\Fd(t,u_0,u_1)-\Psi(u_1-u_0)$. We also define \begin{multline}
\Jmp{\Delta_{\sf c}}(u,[a,b]):=\Delta_{\sf c}
(a,u(a),\ur(a)) + \Delta_{\sf c}
(b,\ul(b),u(b))
\\
+\sum_{t \in \Ju\cap (a,b)}
\Big(\Delta_{\sf c}
(t,\ul(t),u(t))+\Delta_{\sf c}(t,u(t),\ur(t))\Big), \end{multline} and the corresponding augmented total variation $\mVar{\Psi,{\sf c}}$ is then \begin{equation} \label{eq:varP}
\mVar{\Psi,{\sf c}}(u,[a,b]):=\varpsi(u,[a,b])+
\Jmp{\Delta_{\sf c}}(u,[a,b]).
\end{equation} \end{definition} The infimum in \eqref{eq:dissipationcost} is attained whenever there is at least one admissible transition $\vartheta$ with finite cost. In this case, we say that $\vartheta$ is an \emph{optimal transition}. \begin{definition}[Optimal transitions] Let $t\in[a,b]$ and $u_-$,
$u_+\in X$. We say that a curve $\vartheta\in {\mathrm C}(E;X)$, $E$ being a compact subset of $\mathbb{R}$, is an optimal transition between $u_-$ and $u_+$ if \begin{equation}
u_-=\vartheta(E^- ),\quad
u_+=\vartheta(E^+ ),\quad
\Fd(t,u_-,u_+)=\Cf(t,\vartheta,E). \end{equation} We say that $\vartheta$ is \emph{tight} if $\vartheta(I^-)\neq \vartheta(I^+)$ for every $I\in \mathfrak H(E)$, and that $\vartheta$ is a \begin{align}
\text{pure jump transition, if }&E\setminus \{E^-,E^+\}\text{ is discrete,}\\
\text{sliding transition, if } &\Res(t,\vartheta(r))=0\quad \text{for every $r\in E$}, \\
\text{viscous transition, if } &\Res(t,\vartheta(r))>0\quad \text{for every $r\in E\setminus \{E^{\pm}\}$} \label{eq:viscoustransition}. \end{align} \end{definition} Notice that if $\vartheta$ is a transition with finite cost $\Cf(t,\vartheta,E)<\infty$, then the set \begin{equation} E_{\Res}:=\{r\in E \setminus \{E^+\}: \Res(t, \vartheta(r))>0\}\quad\text{is discrete, i.e.~all its points are isolated}. \end{equation} \par With these notions at our disposal, we can now give the precise definition of Visco-Energetic solutions to the rate-independent system $(X,{\ensuremath{\mathcal E}},\Psi,\delta)$. \begin{definition}[Visco-Energetic (VE) solutions] We say that a curve $u\in \mathrm{BV} ([a,b];X)$
is a \emph{Visco-Energetic (VE) solution} of the rate-independent system $(X,\mathcal{E},\Psi,\delta)$ if it satisfies the stability condition \begin{equation}\label{eq:stability}
u(t)\in \mathscr{S}_\sfD(t)\quad\text{for every }t\in [a,b]\setminus \Ju,
\tag{S$_{\sf D}$} \end{equation} and the energetic balance \begin{equation} \label{energybalance} \mathcal{E}(t,u(t))+\mVar{\Psi,{\sf c}}(u,[a,t])=\mathcal{E}(a,u(a))+\int_a^t{\ensuremath{\mathcal P}}(s,u(s))\,{\mathrm d} s \tag{$\mathrm{E_{\Psi,{\sf c}}}$} \end{equation} for every $t\in[a,b]$. \end{definition} \par
The existence of Visco-Energetic solutions in a much more general metric-topological setting is proved in \cite{MinSav16}. Solutions are obtained as limits of the piecewise constant interpolants of discrete solutions $U^n_\tau$, constructed by recursively solving the modified time \emph{Incremental Minimization Scheme} \begin{equation}
\label{eq:167}
\min_{U\in X} {\ensuremath{\mathcal E}}(t^n_\tau,U)+{\sf D}(U^{n-1}_\tau,U).
\tag{IM$_{\sf D}$} \end{equation} starting from an initial datum $U_\tau^0\approx u_0$.
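In the one-dimensional setting of the next sections, the scheme \eqref{eq:167} is straightforward to implement. The following minimal numerical sketch is ours; the potential $W$, the loading $\ell$, the parameters $\mu$, $\tau$ and the grid are illustrative assumptions, not data taken from the paper.

```python
import numpy as np

# Minimal sketch (ours) of the Incremental Minimization Scheme (IM_D).
# All modelling choices are illustrative assumptions:
#   W(u) = (u^2 - 1)^2 / 4        (double-well energy density)
#   ell(t) = 2 t                  (monotone external loading)
#   Psi(v) = |v|                  (alpha_+ = alpha_- = 1)
#   delta(u, v) = (mu/2)(v - u)^2 (quadratic viscous correction)
def W(u):
    return 0.25 * (u**2 - 1.0) ** 2

def ell(t):
    return 2.0 * t

mu = 0.5
grid = np.linspace(-2.0, 2.0, 4001)   # state-space discretization
tau = 0.01                            # time step
U = -1.0                              # initial datum in the left well
traj = [U]
for n in range(1, 101):
    t = n * tau
    # minimize  E(t, v) + D(U, v) = W(v) - ell(t) v + Psi(v - U) + delta(U, v)
    values = W(grid) - ell(t) * grid + np.abs(grid - U) + 0.5 * mu * (grid - U) ** 2
    U = float(grid[np.argmin(values)])
    traj.append(U)
```

With these (assumed) choices the discrete solution stays in the left well while the loading is below the activation threshold $\alpha_+=1$, and then moves monotonically towards the right well through a viscous transition.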
\subsection{Some useful properties of VE solutions} \label{subsec:VE-Euclidean-properties} In this section we collect a list of useful properties of Visco-Energetic solutions and we prove an equivalent characterization in the finite-dimensional setting, involving a doubly nonlinear evolution equation. For more details about these results and their proofs we refer to \cite{MinSav16,Minotti16T}. \par To simplify the notation, we first introduce the \emph{Minimal set}, which describes the connection of two points through one step of Minimizing Movements. \begin{definition}[Moreau-Yosida regularization and Minimal set] Suppose that ${\ensuremath{\mathcal E}}$ satisfies \eqref{eq:E-assumption} and \eqref{eq:W'-assumption}. The ${\sf D}$-Moreau-Yosida regularization ${\ensuremath{\mathcal Y}}:[a,b]\times \mathbb{R}\to \mathbb{R}$ of ${\ensuremath{\mathcal E}}$
is defined by
\begin{equation}
\label{eq:111}
{\ensuremath{\mathcal Y}}(t,u):=\min_{v\in\mathbb{R}}{\ensuremath{\mathcal E}}(t,v)+{\sf D}(u,v).
\end{equation}
For every $t\in [a,b]$ and $u\in \mathbb{R}$ the minimal set is
\begin{equation}
\label{eq:107}
{\mathrm M}(t,u):=\mathop{\rm argmin}\limits_{\mathbb{R}} {\ensuremath{\mathcal E}}(t,\cdot)+{\sf D}(u,\cdot)=
\Big\{v\in \mathbb{R}:{\ensuremath{\mathcal E}}(t,v)+{\sf D}(u,v)={\ensuremath{\mathcal Y}}(t,u)\Big\}.
\end{equation} \end{definition} Notice that, by \eqref{eq:E-assumption} and \eqref{eq:W'-assumption}, ${\mathrm M}(t,u)\neq\emptyset$ for every $t,u$. It is also clear that ${\ensuremath{\mathcal R}}(t,u)={\ensuremath{\mathcal E}}(t,u)-{\ensuremath{\mathcal Y}}(t,u)$ and that \[ u\in \mathscr{S}_\sfD(t)\Longrightarrow u\in {\mathrm M}(t,u). \] \par As we have mentioned in the Introduction, when $t\in\Ju$ and $\vartheta:E\rightarrow\mathbb{R}$ is an optimal transition between $\ul(t)$ and $\ur(t)$, $\vartheta$ ``keeps trace'' of the whole construction via \eqref{eq:167}. For instance, when $\vartheta(E)$ is discrete, every point is obtained from the previous one by a step of Minimizing Movements, with the energy frozen at time $t$. The next result, \cite[Theorem 3.16]{MinSav16}, formalizes this property and characterizes Visco-Energetic optimal transitions. Whenever a set $E\subset\mathbb{R}$ is given, we will use the notation \begin{equation} \label{eq:44} r_E^-:=\sup\big((E\cap (-\infty,r))\cup \{E^-\}\big),\qquad r_E^+:=\inf\big((E\cap(r,+\infty))\cup\{E^+\}\big). \end{equation} \begin{theorem} \label{prop:2}
A curve $\vartheta\in {\mathrm C}(E,\mathbb{R})$ with $\vartheta(E)\ni
u(t)$ is an optimal transition between $\ul(t)$ and $\ur(t)$ satisfying \begin{equation} \label{eq:180} {\ensuremath{\mathcal E}}(t,\ul(t))-{\ensuremath{\mathcal E}}(t,\ur(t))=\Cf(t,\vartheta,E) \end{equation}
if and only if it satisfies
\begin{equation}
\label{eq:120}
\mVar\Psi(\vartheta,E\cap[r_0,r_1])\le
{\ensuremath{\mathcal E}}(t,\vartheta(r_0))-{\ensuremath{\mathcal E}}(t,\vartheta(r_1))\quad
\text{for every }r_0,r_1\in E,\ r_0\le r_1,
\end{equation}
and
\begin{equation}
\label{eq:121}
\vartheta(r)\in {\mathrm M}(t,\vartheta(r^-_E))\quad \text{for every
}r\in E\setminus \{E^-\}.
\end{equation} \end{theorem} In some situations, the identity \eqref{eq:180} can be proved thanks to the following elementary lemma, whose proof is analogous to that of \cite[Lemma 6.1]{MinSav16}. \begin{lemma}
\label{le:elementary}
Let $E\subset \mathbb{R}$ be a compact set with $E^- <E^+ $, let $L(E)$ be the
set of
limit
points of $E$.
We consider a lower semicontinuous, left-continuous function $f:E\to \mathbb{R}$
and a strictly increasing function $g\in {\mathrm C}(E)$
satisfying the
following two conditions:\\
i) for every $I\in \mathfrak H(E)$
\begin{equation}
\label{eq:65}
\frac{f(I^+ )-f(I^- )}
{g(I^+ )-g(I^- )}\ge 1;
\end{equation}
ii) for every $t\in L(E)$ which is an accumulation point of
$L(E)\cap (-\infty,t)$ we have
\begin{equation}
\label{eq:22}
\limsup_{s\uparrow t,\ s\in L(E)} \frac{f(t)-f(s)}
{g(t)-g(s)}\ge 1.
\end{equation}
Then the map $s\mapsto f(s)-g(s)$ is nondecreasing on $E$; in particular
\begin{equation}
\label{eq:52}
f(E^+ )-f(E^- )\ge g(E^+ )-g(E^- ).
\end{equation} \end{lemma}
\par
The following proposition, a consequence of \eqref{eq:crinqualitytheta}, is useful to prove the existence of VE solutions, since it provides sufficient criteria. \begin{proposition}[Sufficient criteria for VE solutions]
\label{prop:leqinequality} Let
$u\in \mathrm{BV}([a,b];X)$ be a curve satisfying the
stability condition \eqref{stability}.
Then $u$ is a {\rm VE}\ solution of the rate-independent system $(X,\mathcal{E},\Psi,\delta)$ if and only if it satisfies one of the following equivalent characterizations: \begin{enumerate}[i)] \item $u$ satisfies the $(\Psi,{\sf c})$-energy-dissipation inequality \begin{equation} \label{leqinequality}
\mathcal{E}(b,u(b))+\mVar{\Psi,{\sf c}}(u,[a,b])\leq\mathcal{E}(a,u(a))+\int_a^b{\ensuremath{\mathcal P}}(s,u(s)){\mathrm d}
s.
\end{equation} \item $u$ satisfies the ${\sf d}$-energy-dissipation inequality
\begin{equation}
{\ensuremath{\mathcal E}}(t,u(t))+\varpsi(u,[s,t])\leq {\ensuremath{\mathcal E}}(s,u(s))+\int_s^t{\ensuremath{\mathcal P}}(r,u(r)){\mathrm d}
r\quad\text{for all $s\le t\in[a,b]$}\label{eq:110} \end{equation}
and the following jump conditions at each point $t\in \Ju$ \begin{align}\label{Jve}
\tag{$\mathrm{J_{VE}}$} \begin{split} \mathcal{E}(t,u(t- ))-\mathcal{E}(t,u(t))=\Fd(t,u(t- ),u(t)), \\ \mathcal{E}(t,u(t))-\mathcal{E}(t,u(t+ ))=\Fd(t,u(t),u(t+ )), \\ \mathcal{E}(t,u(t- ))- \mathcal{E}(t,u(t+ ))=\Fd(t,u(t- ),u(t+ )). \end{split} \end{align} \end{enumerate} \end{proposition} \par Another simple property concerns the behaviour of Visco-Energetic solutions with respect to restrictions and concatenation. The proof is trivial. \begin{proposition}[Restriction and concatenation principle] \label{lem:1} The following properties hold: \begin{enumerate} \item The restriction of a Visco-Energetic solution in $[a,b]$ to an interval $[\alpha,\beta]\subseteq [a,b]$ is a Visco-Energetic solution in $[\alpha,\beta]$; \item If $a=t_0<t_1<\dots<t_{m-1}<t_m=b$ is a subdivision of $[a,b]$ and $u:[a,b]\rightarrow\mathbb{R}$ is a Visco-Energetic solution on each of the intervals $[t_{j-1},t_j]$, then $u$ is a Visco-Energetic solution in $[a,b]$. \end{enumerate} \end{proposition} \par In our finite-dimensional setting it is possible to give another sufficient criterion for Visco-Energetic solutions, more precisely a characterization through the stability condition \eqref{stability}, a doubly nonlinear differential inclusion, and the jump condition \eqref{Jve}. This result will be the starting point for our discussion in the one-dimensional case. \begin{theorem}[Characterization of VE solutions]
\label{prop:differential-characterization}
A curve $u\in \mathrm{BV}([a,b];X)$ is a Visco-Energetic solution of the rate-independent system $(X,\mathcal{E},\Psi,\delta)$ if and only if it satisfies the stability condition \eqref{stability}, the doubly nonlinear differential inclusion
\begin{equation} \label{eq:DN0}
\partial\Psi\left(\frac{{\mathrm d} u'_{\mathrm{co}}}{{\mathrm d}\mu}(t)\right)+{\mathrm D} W(u(t))\ni\ell(t)\quad\text{for $\mu$-a.e. $t\in(a,b)$,\quad $\mu:=\mathscr{L}^1+|u'_{\mathrm{co}}|$} \tag{$\mathrm{DN}_0$}
\end{equation}
and the jump conditions \eqref{Jve} at every $t\in\Ju$: \begin{align}
\tag{$\mathrm{J_{VE}}$} \begin{split} \mathcal{E}(t,u(t- ))-\mathcal{E}(t,u(t))=\Fd(t,u(t- ),u(t)), \\ \mathcal{E}(t,u(t))-\mathcal{E}(t,u(t+ ))=\Fd(t,u(t),u(t+ )), \\ \mathcal{E}(t,u(t- ))- \mathcal{E}(t,u(t+ ))=\Fd(t,u(t- ),u(t+ )). \end{split} \end{align} \end{theorem}
\begin{proof} From the definition of the viscous dissipation cost $\cost(t,\ul(t),\ur(t))$ (Definition \ref{def:transition-cost}), it is immediate to check that \[ \varpsi\big(u,[a,b]\big)\le \mVar{\Psi,{\sf c}}\big(u,[a,b]\big), \] so that Visco-Energetic solutions are, in particular, \emph{local solutions} in the sense of \cite{Mielke-Rossi-Savare12}. The differential characterization is therefore an immediate consequence of Proposition \ref{prop:leqinequality} and \cite[Proposition 2.7]{Mielke-Rossi-Savare12}. \end{proof}
\subsection{The one-dimensional setting} \label{subsec:one-dimensional-setting} From now on we consider the particular case $X=\mathbb{R}$, which we also identify with $X^*$. We will denote by $v^+$, $v^-$ the positive and the negative part of $v\in\mathbb{R}$. \paragraph{Dissipation.} A dissipation potential is a function of the form \begin{equation} \Psi(v):=\alpha_+v^++\alpha_-v^-,\quad v\in\mathbb{R},\quad\text{for some $\alpha_\pm>0$.} \end{equation} Hence, we have \[ \partial\Psi(v)=\begin{cases} \alpha_+ \quad &\text{if $v>0$,} \\ [-\alpha_-,\alpha_+] \quad&\text{if $v=0$,} \\ -\alpha_- \quad&\text{if $v<0$} \end{cases}\qquad\text{for all $v\in\mathbb{R}$}, \] and \begin{equation} \label{eq:40} K^*=[-\alpha_-,\alpha_+],\quad\Psi_*(w)=\frac{1}{\alpha_+}w^+ +\frac{1}{\alpha_-}w^-\quad\text{for all $w\in\mathbb{R}$}. \end{equation}
\paragraph{Energy functional.} The energy is given by a function ${\ensuremath{\mathcal E}}:[a,b]\times\mathbb{R}\rightarrow\mathbb{R}$ of the form \begin{equation} \label{eq:E-assumption} {\ensuremath{\mathcal E}}(t,u):=W(u)-\ell(t)u \end{equation} with $\ell\in {\mathrm C}^1([a,b])$ and $W:\mathbb{R}\rightarrow\mathbb{R}$ such that \begin{equation} \label{eq:W'-assumption} W\in {\mathrm C}^1(\mathbb{R}),\quad\lim_{x\rightarrow-\infty}W'(x)=-\infty,\quad\lim_{x\rightarrow+\infty}W'(x)=+\infty. \end{equation}
\paragraph{Viscous correction.} The admissible one-dimensional viscous correction is a continuous map $\delta:\mathbb{R}\times\mathbb{R}\rightarrow [0,+\infty)$ which satisfies \begin{equation}\label{eq:delta} \tag{$\delta1$}
\lim_{v\rightarrow u}\frac{\delta(u,v)}{|v-u|}=0\qquad\text{for every $u\in \mathscr{S}_\sfD(t)$},\quad t\in [a,b], \end{equation} and the reverse triangle inequality \begin{equation} \label{eq:delta2} \tag{$\delta2$} \delta(u_0,u_1)> \delta(u_0,v)+\delta(v,u_1)\qquad \text{for every $u_0< v< u_1$}. \end{equation} We still use the notation ${\sf D}(u,v):=\Psi(v-u)+\delta(u,v)$ for the augmented dissipation. \par
\begin{remark}[Admissible viscous corrections]\upshape Assumption \eqref{eq:delta} is necessary for the general theory of Visco-Energetic solutions; \eqref{eq:delta2} will be crucial for our one-dimensional characterization (see section \ref{subsec:main-theorem}). However, these assumptions are quite natural: they are satisfied, for example, if we choose $\delta$ of the form \[ \delta(u,v)=f(\Psi(u-v))\qquad\text{with $f$ positive, strictly convex, and such that $\lim_{r\rightarrow 0}\frac{f(r)}{r}=0$}. \] For instance, the standard choice $\delta(u,v)=\frac{\mu}{2}(v-u)^2$, for some positive parameter $\mu$, is admissible. This particular case will be analysed in some examples in sections \ref{sec:3} and \ref{sec:4}. \end{remark}
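For the quadratic choice the two assumptions can also be verified directly; the following short computation (ours) sketches this check.

```latex
% Sketch (ours): admissibility of \delta(u,v)=\tfrac{\mu}{2}(v-u)^2, \mu>0.
% (\delta1): the quotient vanishes linearly at the diagonal,
\begin{equation*}
  \frac{\delta(u,v)}{|v-u|}=\frac{\mu}{2}\,|v-u|\;\longrightarrow\;0
  \qquad\text{as } v\to u,
\end{equation*}
% for every u, hence in particular for u in the stable sets.
% (\delta2): for u_0<v<u_1, setting a:=v-u_0>0 and b:=u_1-v>0,
\begin{equation*}
  \delta(u_0,u_1)=\frac{\mu}{2}(a+b)^2
  =\frac{\mu}{2}a^2+\frac{\mu}{2}b^2+\mu\,ab
  >\delta(u_0,v)+\delta(v,u_1),
\end{equation*}
% since ab>0, so the reverse triangle inequality holds strictly.
```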
\section{Visco-Energetic solutions of rate-independent systems in \texorpdfstring{$\mathbb{R}$}{R}} \label{sec:3} As we have underlined in the Introduction, Visco-Energetic solutions of the rate-independent system $(\mathbb{R},{\ensuremath{\mathcal E}},\Psi,\delta)$ are intermediate between Energetic solutions, which correspond to the choice $\delta\equiv 0$, and Balanced Viscosity solutions, which correspond to a $\tau$-dependent choice $\delta=\delta_\tau$ in \eqref{eq:167} of the form \[
\delta_\tau(u,v):=\mu(|\tau|)\delta(u,v),\qquad \mu:(0,+\infty)\rightarrow(0,+\infty), \quad\lim_{r\rightarrow 0}\mu(r)=+\infty. \] Guided by the characterizations of these two cases, given in \cite{Rossi-Savare13} in a similar one-dimensional setting and recalled in the Introduction, we obtain a full characterization in the Visco-Energetic case. In particular, the main results of \cite{Rossi-Savare13} can be recovered for suitable choices of $\delta$.
\subsection{One-sided global slopes with a \texorpdfstring{$\delta$}{d} correction} \label{subsec:stability} One-sided global slopes are used in \cite{Rossi-Savare13} to give a one-dimensional characterization of Energetic solutions of the rate-independent system $(\mathbb{R},{\ensuremath{\mathcal E}},\Psi)$. We recall their definitions: \begin{equation}\label{eq:one-sided-slopes} \mathit W_{\mathsf{i}\mathsf{r}}'(u):= \inf_{z>u} \frac{W(z)-W(u)}{z-u},\qquad \mathit W_{\mathsf{s}\mathsf{l}}'(u):= \sup_{z<u} \frac{W(z)-W(u)}{z-u}, \end{equation} where the subscripts $\mathsf{ir}$ and $\mathsf{sl}$ stand for \emph{inf-right} and \emph{sup-left} respectively. \par In this section we introduce a generalization of $W'_{{\sf i}{\sf r}}$ and $W'_{{\sf s}{\sf l}}$, and we prove some of their important properties. These slopes allow us to give an equivalent, one-dimensional, characterization of the ${\sf D}$-Stability \eqref{eq:stability}. \begin{definition} For every $u\in \mathbb{R}$ we define the one-sided global slopes with a $\delta$ correction \begin{gather} \Wird\delta(u):=\inf_{z>u}\left\{\frac{1}{z-u}\Big(W(z)-W(u)+\delta(u,z)\Big)\right\}, \label{eq:Wir} \\ \Wsld\delta(u):=\sup_{z<u}\left\{\frac{1}{z-u}\Big(W(z)-W(u)+\delta(u,z)\Big)\right\}. \label{eq:Wsl} \end{gather} \end{definition} For simplicity, we will still use the notations $\mathit W_{\mathsf{i}\mathsf{r}}'$ and $\mathit W_{\mathsf{s}\mathsf{l}}'$ instead of $\Wird0$ and $\Wsld0$ when $\delta\equiv 0$. From \eqref{eq:delta} it follows that the modified global slopes satisfy \begin{equation} \label{eq:4} \Wird\delta(u)\le W'(u)\le \Wsld\delta(u), \quad \text{for every $u \in \mathbb{R}$} \end{equation} and it is not difficult to check that they are continuous.
Indeed, it is sufficient to introduce the continuous function $V: \mathbb{R}\times \mathbb{R}\rightarrow \mathbb{R}$ \[ V(u,z):=\begin{cases} W'(u) &\quad \text{if $z=u$}, \\ \frac{1}{z-u}\Big(W(z)-W(u)+\delta(u,z)\Big)&\quad \text {if $z\neq u$}, \end{cases} \] and observe, e.g. for $\Wird\delta$, that \[ \Wird\delta (u) = \min \{V(u,z): z\ge u \} \] and for $u$ in a bounded set the minimum is attained in a compact set thanks to \eqref{eq:W'-assumption}. \par If $\delta$ is big enough, in a suitable sense, equalities hold in \eqref{eq:4}. An important result is stated in the following proposition. \begin{proposition} \label{prop:1} Suppose that $W$ satisfies the $\delta$-convexity assumption \begin{equation} \label{eq:41} W(v)\le (1-t)W(u)+tW(w)+t(1-t)\delta(u,w),\qquad v=(1-t)u+tw,\quad t\in[0,1]. \end{equation} Then the one-sided slopes coincides with the usual derivative: \[ \Wird{\delta}(u)=W'(u)=\Wsld{\delta}(u)\qquad \text{for every $u\in\mathbb{R}$}. \] \end{proposition}
\begin{proof} We prove the first equality since the second one is analogous. Let us take $v,w\in\mathbb{R}$ with $u<v\le w$ and $t\in (0,1]$ such that $v=(1-t)u+tw$. Then \begin{multline*} \frac{W(v)-W(u)}{v-u}\le \frac{(1-t)W(u)+tW(w)+ t(1-t)\delta(u,w)-W(u)}{t(w-u)} = \\ \frac{W(w)-W(u)+\delta(u,w)}{w-u} -t\frac{\delta(u,w)}{w-u}\le \frac{W(w)-W(u)+\delta(u,w)}{w-u}. \end{multline*} Passing to the limit as $v\downarrow u$ we get \[ W'(u)\le \frac{W(w)-W(u)+\delta(u,w)}{w-u}\quad \text{for every $w>u$.} \] Now it is enough to take the infimum over $w>u$. \end{proof}
\begin{remark} \upshape An interesting consequence of Proposition \ref{prop:1} is that if $W$ satisfies the usual $\lambda$-convexity assumption \begin{equation} \label{eq:lambda-convex} W(v)\le (1-t)W(u)+tW(w)-\lambda t(1-t)(w-u)^2,\qquad v=(1-t)u+tw,\quad t\in[0,1] \end{equation} for some $\lambda\in \mathbb{R}$, then for every $\mu\ge \max\{-\lambda, 0\}$ we can choose $\delta(u,w):=\mu(w-u)^2$ and \eqref{eq:41} holds. In particular, if $W$ is convex, for every admissible viscous correction $\delta$ the one-sided global slopes coincide with the usual derivative. \end{remark}
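As a concrete illustration of the previous remark (our example; the potential below is an assumption chosen only for illustration), consider a double-well potential:

```latex
% Example (ours): double-well potential with quadratic correction.
\begin{equation*}
  W(u)=\tfrac14\,(u^2-1)^2,\qquad W''(u)=3u^2-1\ \ge\ -1,
\end{equation*}
% so that u \mapsto W(u)+\tfrac12 u^2 is convex and W satisfies the
% \lambda-convexity \eqref{eq:lambda-convex} with \lambda=-\tfrac12.
% Choosing \delta(u,w):=\mu(w-u)^2 with \mu\ge\tfrac12, the
% \delta-convexity \eqref{eq:41} holds and Proposition \ref{prop:1} gives
\begin{equation*}
  \Wird\delta(u)=W'(u)=\Wsld\delta(u)=u^3-u
  \qquad\text{for every } u\in\mathbb{R}.
\end{equation*}
```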
\par If $\Wird\delta (u)<W'(u)$ at a point $u\in\mathbb{R}$, then from \eqref{eq:W'-assumption} there exists $z>u$ which attains the infimum in \eqref{eq:Wir}. The same happens if $\Wsld\delta(u)>W'(u)$. Moreover, from the continuity of $W$ and of the global slopes, there exists a neighborhood of $u$ in which the strict inequality holds. In this neighborhood $\Wird\delta$, or $\Wsld\delta$, is decreasing.
\begin{proposition} \label{prop: Wir.prop} Let $I \subseteq \mathbb{R}$ be an open interval such that \[ \Wird\delta (v) < W'(v) \quad\text{(resp. $\Wsld\delta(v)>W'(v)$)} \qquad \text{ for every $v\in I$}. \] Then $\Wird\delta$ (resp. $\Wsld\delta$) is decreasing on $I$. \end{proposition}
\begin{proof} Let $v_1\in I$ and let $z>v_1$ be an element that attains the infimum in \eqref{eq:Wir}. Then for every $v_2\in (v_1,z)$ we have the inequality \[ \Wird\delta(v_2)-\Wird\delta(v_1) \le \frac{W(z)-W(v_2)+\delta(v_2,z)}{z-v_2}- \frac{W(z)-W(v_1)+\delta(v_1,z)}{z-v_1}. \] From \eqref{eq:delta2}, $\delta(v_1,z)\ge\delta(v_2,z)$, so that \[ \frac{1}{v_2-v_1}\left[\frac{\delta(v_2,z)}{z-v_2}-\frac{\delta(v_1,z)}{z-v_1}\right]\le \frac{\delta(v_2,z)}{(z-v_2)(z-v_1)}. \] Combining this with the simple identity \[ \frac{W(z)-W(v_1)}{z-v_1}= \frac{W(z)-W(v_2)}{z-v_2}\left(1-\frac{v_2-v_1}{z-v_1}\right)+\frac{W(v_2)-W(v_1)}{v_2-v_1}\frac{v_2-v_1}{z-v_1}, \] after a simple computation we obtain \[ \frac{\Wird\delta (v_2)-\Wird\delta (v_1)}{v_2-v_1} \le \frac{1}{z-v_1}\left[\frac{W(z)-W(v_2)+\delta(v_2,z)}{z-v_2}- \frac{W(v_2)-W(v_1)}{v_2-v_1}\right]. \] Passing to the limsup for $v_2\downarrow v_1$ we get \[ \limsup_{v_2\downarrow v_1} \frac{\Wird\delta (v_2)-\Wird\delta(v_1)}{v_2-v_1}\le \frac{1}{z-v_1}\left(\Wird\delta (v_1) - W'(v_1)\right) < 0. \] The claim follows from a classical result concerning Dini derivatives; see \cite{Gal57}. \end{proof}
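The behaviour described by Proposition \ref{prop: Wir.prop} can also be observed numerically. The following sketch is ours; the double-well potential and the value of $\mu$ are illustrative assumptions.

```python
import numpy as np

# Sketch (ours): the corrected one-sided slope of eq. (Wir) for an
# illustrative nonconvex potential and a quadratic viscous correction.
mu = 0.1                                 # small viscous parameter (assumption)

def W(u):
    return 0.25 * u**4 - 0.5 * u**2      # double well, W'(u) = u^3 - u

def Wp(u):
    return u**3 - u

def Wird(u, zmax=3.0, n=6000):
    # discretize z in (u, zmax]; the limit z -> u contributes W'(u)
    z = np.linspace(u + 1e-6, zmax, n)
    vals = (W(z) - W(u) + 0.5 * mu * (z - u) ** 2) / (z - u)
    return min(Wp(u), float(vals.min()))

us = np.linspace(-1.5, 1.5, 301)
slopes = np.array([Wird(u) for u in us])
```

In the convex regions the infimum is attained in the limit $z\downarrow u$ and the corrected slope coincides with $W'$; around the concave region it is attained at some $z>u$, where the corrected slope lies strictly below $W'$ and is decreasing, as the proposition predicts.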
\paragraph{Characterizations of ${\sf D}$-Stability.} Taking \eqref{eq:Wir} and \eqref{eq:Wsl} into account, we can formulate a characterization of the global ${\sf D}$-stability \eqref{stability}. Since the energy is of the form ${\ensuremath{\mathcal E}}(t,u)=W(u)-\ell(t)u$, \eqref{stability} is equivalent to \[ W(u(t))-W(v)-\ell(t)(u(t)-v)\le \Psi(v-u(t))+\delta(u(t),v)\quad\text{for every $t\in[a,b]\setminus \Ju$, $v\in \mathbb{R}$}. \] Dividing by $v-u(t)$ and taking the infimum over $v>u(t)$, or the supremum over $v<u(t)$, for every $t\in[a,b]\setminus \Ju$ we get the system of inequalities \begin{equation} \label{eq:1d-stability} -\alpha_- \le \ell(t)-\Wsld\delta(u(t))\le \ell(t)-W'(u(t))\le \ell(t)-\Wird\delta(u(t))\le\alpha_+, \tag{${\mathrm S}_{{\sf D}, \mathbb{R}}$} \end{equation} which is the one-dimensional version of the global ${\sf D}$-stability. The continuity property of the $\delta$-corrected one-sided slopes also yields for every $t\in (a,b)$ \begin{gather} -\alpha_- \le \ell(t)-\Wsld\delta(\ur(t))\le \ell(t)-W'(\ur(t))\le \ell(t)-\Wird\delta(\ur(t))<\alpha_+, \label{eq:1d-stabilityright} \\ -\alpha_- \le \ell(t)-\Wsld\delta(\ul(t))\le \ell(t)-W'(\ul(t))\le \ell(t)-\Wird\delta(\ul(t))<\alpha_+. \label{eq:1d-stabilityleft} \end{gather}
\begin{remark}\upshape The stability region $\mathscr{S}_\sfD$ becomes bigger as $\delta$ increases. If we denote by \begin{equation} \mathscr{S}_\infty:=\{(t,u)\in [a,b]\times \mathbb{R}: {\mathrm D}_u{\ensuremath{\mathcal E}}(t,u)\in K^*\}, \end{equation} where $K^*$ is defined in \eqref{eq:K*}, the set of points which satisfy the \emph{local stability} condition typical of BV solutions \cite{Mielke-Rossi-Savare12,Mielke-Rossi-Savare13}, it is immediate to check that \[ \mathscr{S}\kern-3pt_\sfd\subseteq\mathscr{S}_\sfD \subseteq \mathscr{S}_\infty\qquad\text{for every admissible viscous correction}. \] The first inclusion is an equality if $\delta\equiv 0$. If the energy satisfies the $\delta$-convexity property \eqref{eq:41}, or, equivalently, if $\delta$ is chosen big enough, from Proposition \ref{prop:1} we get $\mathscr{S}_\sfD=\mathscr{S}_\infty$.
\end{remark}
\subsection{Visco-Energetic Maxwell rule} After the brief discussion about stability in section \ref{subsec:stability}, we now focus on jumps. In this section we show a relation between the minimal sets \eqref{eq:107} and the one-sided global slopes $\Wird\delta$ and $\Wsld\delta$, along with some geometrical interpretations of the results. \begin{proposition} \label{prop:3} Let $t\in[a,b]$ and $u\in\mathbb{R}$. Suppose that $z\in {\mathrm M}(t,u)$. Then \begin{gather} \Wird\delta(v)\le\frac{W(z)-W(v)}{z-v}+\frac{\delta(v,z)}{z-v}< \ell(t)-\alpha_+ \qquad \text{if $u< v<z$}, \label{eq:8} \\ \Wsld\delta(v)\ge \frac{W(z)-W(v)}{z-v}+\frac{\delta(v,z)}{z-v}> \ell(t)+\alpha_- \qquad \text{if $u> v>z$}. \label{eq:9} \end{gather} Moreover, if $u\in \mathscr{S}_\sfD(t)$ the following identities hold: \begin{gather} \Wird\delta(u)=\frac{W(z)-W(u)}{z-u}+\frac{\delta(u,z)}{z-u}=\ell(t)-\alpha_+ \qquad \text{if $z>u$},\label{eq:10} \\ \Wsld\delta(u)=\frac{W(z)-W(u)}{z-u}+\frac{\delta(u,z)}{z-u}= \ell(t)+\alpha_- \qquad \text{if $z<u$}. \label{eq:11} \end{gather}
\end{proposition}
\begin{proof} Let us consider the case $z>u$. From the minimality of $z$, for every $v\in (u,z)$ we get \[ W(z)-W(v)-\ell(t)(z-v)\le -\alpha_+(z-v)+\delta(u,v)-\delta(u,z). \] Taking \eqref{eq:delta2} into account and dividing by $z-v$ we get \[ \frac{W(z)-W(v)}{z-v}-\ell(t)< -\alpha_+ -\frac{\delta(v,z)}{z-v}, \] which proves \eqref{eq:8}. If $u\in \mathscr{S}_\sfD(t)$, we can combine the one-dimensional ${\sf D}$-stability condition \eqref{eq:1d-stability} with \eqref{eq:8}, where we pass to the limit for $v\downarrow u$, and we get \[ \Wird\delta(u)\le\frac{W(z)-W(u)}{z-u}+\frac{\delta(u,z)}{z-u}\le\ell(t)-\alpha_+\le \Wird\delta(u), \] so that all the previous inequalities are identities and \eqref{eq:10} is proved. The case $z<u$ can be proved in a similar way. \end{proof} \begin{remark} \label{rem:1} \upshape Notice that the strict inequality in \eqref{eq:delta2} implies \begin{align} \text{if $z\in {\mathrm M}(t,u)$, $z>u$, then}\qquad \Wird\delta(v)<\ell(t)-\alpha_+\quad \forall v\in (u,z), \label{eq:16}\\ \text{if $z\in {\mathrm M}(t,u)$, $z<u$, then}\qquad \Wsld\delta(v)>\ell(t)+\alpha_-\quad \forall v\in (z,u). \label{eq:17} \end{align} In particular $v\not\in \mathscr{S}_\sfD(t)$, since \eqref{eq:16} and \eqref{eq:17} contradict the global stability \eqref{eq:1d-stability}. These inequalities will be one of the key ingredients in the proof of the characterization Theorem \ref{thm:main-theorem}. \end{remark}
\paragraph{${\sf D}$-Maxwell rule.} Equalities \eqref{eq:10} and \eqref{eq:11} admit a nice geometrical interpretation. Suppose that $u$ is a Visco-Energetic solution, $t\in\Ju$, and that there exists $z\in {\mathrm M}(t,\ul(t))$ with $z>\ul(t)$. According to \eqref{eq:1d-stabilityleft}, $\ul(t)$ is stable, so that we can choose $u=\ul(t)$ in \eqref{eq:10} and we get \begin{equation} \label{eq:12} W(z)=W(\ul(t))+(\ell(t)-\alpha_+)(z-\ul(t))-\delta(\ul(t),z). \end{equation} This identity is a generalization of the so-called \emph{Maxwell rule}: in the energetic case, combining global stability and the energy balance, we easily get $z=\ur(t)$, so that \eqref{eq:12} assumes the classical formulation \begin{equation} \int_{\ul(t)}^{\ur(t)}\Big( W'(r)-\ell(t)+\alpha_+ \Big)\,{\mathrm d} r=0. \end{equation} \par Considering for simplicity the choice $\delta(u,v):=\frac{\mu}{2}(v-u)^2$, for some parameter $\mu>0$, when $W'(\ul(t))=\ell(t)-\alpha_+$ \eqref{eq:12} can be rewritten in the form \[ W(z)=W(\ul(t))+W'(\ul(t))(z-\ul(t))-\frac{\mu}{2}(z-\ul(t))^2. \] This means that a jump can occur only when the signed area between the graph of $W'$ and the straight line through $(\ul(t),W'(\ul(t)))$ with slope $-\mu$ vanishes. If $\mu$ is big enough, then the area is always positive and ${\mathrm M}(t,u)=\{u\}$. In this case the description of the jump transition is more complicated (see sections \ref{subsec:main-theorem} and \ref{sec:4} for more details).
\subsection{Main characterization Theorem} \label{subsec:main-theorem} In this section we exhibit an explicit characterization of Visco-Energetic solutions for a general (i.e.\ non-monotone) external loading $\ell$. This result is the equivalent of \cite[Theorem 3.1]{Rossi-Savare13} and \cite[Theorem 5.1]{Rossi-Savare13} for Energetic and BV solutions. \begin{theorem}[1d-characterization of VE solutions] \label{thm:main-theorem} Let $u\in \mathrm{BV}([a,b];\mathbb{R})$ be a Visco-Energetic solution of the rate-independent system $(\mathbb{R},{\ensuremath{\mathcal E}},\Psi,\delta)$. Then the following properties hold: \begin{enumerate}[a)] \item $u$ satisfies the 1d-stability condition \eqref{eq:1d-stability} for every $t\in[a,b]\setminus \Ju$ (and therefore \eqref{eq:1d-stabilityright} and \eqref{eq:1d-stabilityleft} as well); \item $u$ satisfies the following precise formulation of the doubly nonlinear differential inclusion: \begin{gather} W'(\ur(t))=\Wird\delta(\ur(t))=\ell(t)-\alpha_+ \quad \text{ for every $t\in \mathrm{supp}\left((u')^+\right)\cap [a,b)$}, \label{eq:equation1} \\ W'(\ur(t))=\Wsld\delta(\ur(t))=\ell(t)+\alpha_- \quad \text{ for every $t\in \mathrm{supp}\left((u')^-\right)\cap [a,b)$}; \label{eq:equation2} \end{gather} \item at each point $t\in \Ju$, $u$ fulfils the jump conditions \begin{equation} \label{eq:jump1} \min(\ul(t),\ur(t))\le u(t) \le \max(\ul(t),\ur(t)) \end{equation} and \begin{equation} \label{eq:jump2} \Wird\delta(v)\le \ell(t)-\alpha_+\quad \text{if $\ul(t)< \ur(t)$},\qquad \Wsld\delta(v)\ge \ell(t)+\alpha_-\quad \text{if $\ul(t)>\ur(t)$}, \end{equation} for every $v$ such that $\min\left(\ul(t),\ur(t)\right)\le v\le \max\left(\ul(t),\ur(t)\right)$.
\end{enumerate} \par \noindent Conversely, let $u\in\mathrm{BV}([a,b],\mathbb{R})$ be a curve satisfying \eqref{eq:equation1}, \eqref{eq:equation2}, \eqref{eq:jump1}, \eqref{eq:jump2}, along with the following modified version of a): \begin{enumerate}[a)] \item[a')] $u$ satisfies the 1d-stability condition \eqref{eq:1d-stability} for every $t\in (a,b)$. \end{enumerate} Then $u$ is a Visco-Energetic solution of the rate-independent system $(\mathbb{R},{\ensuremath{\mathcal E}},\Psi,\delta)$. \end{theorem} \par Since any jump point belongs either to the support of $\left(u'\right)^+$ or of $\left(u'\right)^-$, combining \eqref{eq:equation1} and \eqref{eq:equation2} with \eqref{eq:jump1} and \eqref{eq:jump2} we also get at every $t\in \Ju\cap (a,b)$ \begin{gather} \Wird\delta(\ul)=\Wird\delta(\ur)=W'(\ur)=\ell(t)-\alpha_+\qquad \text{ if $\ul<\ur$}, \label{eq:6}\\ \Wsld\delta(\ul)=\Wsld\delta(\ur)=W'(\ur)=\ell(t)+\alpha_-\qquad \text{ if $\ul>\ur$}, \label{eq:7} \end{gather} and these identities still hold at $t=a$ or $t=b$ if $u(a)\in \mathscr{S}_\sfD(a)$ or $u(b)\in \mathscr{S}_\sfD(b)$.
\begin{remark} \upshape For a full characterization of Visco-Energetic solutions we need \eqref{eq:1d-stability} to hold also when $t\in \Ju$. This condition is required just to recover the first and the second equalities in \eqref{Jve} from the third. However, it is quite natural: if $u$ is a Visco-Energetic solution, we can consider the left-continuous function \[ \tilde{u}\in \mathrm{BV}([a,b],\mathbb{R})\quad\text{such that $\tilde{u}(t):=\ul(t)$ for every $t\in[a,b]$}. \] Then $\tilde{u}$ is still a Visco-Energetic solution and $\tilde{u}(t)$ is stable for every $t\in(a,b]$. \end{remark}
\begin{proof}[Proof of Theorem \ref{thm:main-theorem}] We split the argument into several steps. \par
\underline{Claim 1}. \emph{${\sf D}$-stability \eqref{stability} is equivalent to \eqref{eq:1d-stability}}.\newline It is a consequence of the choice ${\ensuremath{\mathcal E}}(t,u)=W(u)-\ell(t)u$; see the discussion in section \ref{subsec:stability}. \par \underline{Claim 2}. \emph{\eqref{Jve} implies the jump conditions \eqref{eq:jump1} and \eqref{eq:jump2}}.\newline From the general properties of the viscous dissipation cost, there exists an optimal transition $\vartheta\in {\mathrm C}(E;\mathbb{R})$ connecting $\ul(t)$ and $\ur(t)$, namely \[ \vartheta(E^-)=\ul(t),\quad\vartheta(E^+)=\ur(t),\quad \Fd(t,\ul(t),\ur(t))=\Cf(t,\vartheta,E). \] Since \eqref{Jve} holds, we can apply Theorem \ref{prop:2}. Let us start from the case $\ul(t)<\ur(t)$ and let $v\in [\ul(t),\ur(t)]$. Since $\vartheta(E)$ is compact, if $v\notin \vartheta(E)$ there exists an open interval $I\subset [\ul(t),\ur(t)]\setminus \vartheta(E)$ such that $v\in I$. From \eqref{eq:121} \[ \vartheta(I^+)\in{\mathrm M} (t,\vartheta(I^-)), \] so that, by Proposition \ref{prop:3}, we get $\Wird\delta(v)\le \ell(t)-\alpha_+$. By continuity, the inequality still holds if $v\in \vartheta(E)$ is isolated in $\vartheta(E)\cap[v,+\infty)$. Otherwise, $v\in {\mathrm L}\big(\vartheta(E)\cap [v,+\infty)\big)$, where ${\mathrm L}$ denotes the set of the limit points. From \eqref{eq:120} we have \[ {\ensuremath{\mathcal E}}(t,v)\ge {\ensuremath{\mathcal E}}(t,v_1)+\alpha_+(v_1-v)\quad \text{for every $v_1\in \vartheta(E),\quad v_1> v $}, \] which yields \[ \Wird\delta(v)\le \frac{W(v_1)-W(v)}{v_1-v}+\frac{\delta(v,v_1)}{v_1-v}\le \ell(t)-\alpha_+ +\frac{\delta(v,v_1)}{v_1-v}. \] Passing to the limit for $v_1\downarrow v$, \eqref{eq:jump2} holds in $[\ul(t),\ur(t))$. By continuity, it still holds at $v=\ur(t)$. The case $\ul(t)>\ur(t)$ can be proved in a similar way.
\par The property \eqref{eq:jump1} easily follows by combining the identities in \eqref{Jve}, which give \[ \cost(t,\ul(t),\ur(t))=\cost(t,\ul(t),u(t))+\cost(t,u(t),\ur(t)), \] together with the additivity of the cost \eqref{eq:195}. \par
\underline{Claim 3}. \emph{The jump conditions \eqref{eq:jump1}, \eqref{eq:jump2} and a') imply \eqref{Jve}}.\newline Let us start again with $\ul(t)<\ur(t)$. We still want to apply Theorem \ref{prop:2}: we need to find an admissible transition $\vartheta\in {\mathrm C}(E;\mathbb{R})$ which satisfies \eqref{eq:120} and \eqref{eq:121}. To define such a transition, let us consider \[ S:=\{v\in [\ul(t),\ur(t)]: \Wird\delta(v)=\ell(t)-\alpha_+\text{ and } \Wsld\delta(v)\le \ell(t)+\alpha_- \}. \] Since the set $S$ is compact, there exists a sequence of disjoint open intervals $I_k$ such that $[S^-,S^+]\setminus S=\bigcup_{k=0}^\infty I_k$. Let us fix for a moment one of these $I_k$. Taking into account assumption \eqref{eq:W'-assumption}, only two possibilities can occur. \par \begin{enumerate}[a)] \item[-] \emph{Case 1: ``The initial jump''}. The infimum in $\Wird\delta(I_k^-)$ is attained at a point $z>I_k^-$.\newline From $\Wird\delta(I_k^-)=\ell(t)-\alpha_+$ and \eqref{eq:10} we recover the energy balance \begin{equation} {\ensuremath{\mathcal E}}(t,z)+{\sf D}(I_k^-,z)={\ensuremath{\mathcal E}}(t,I_k^-). \end{equation} Arguing as in Proposition \ref{prop:3}, $\Wird\delta(v)<\ell(t)-\alpha_+$ for every $v\in(I_k^-,z)$, so that $z\in \overline{I_k}$. We can thus define by induction the sequence $(u_n^k)$ such that \[ u^k_0:=z,\qquad u_{n+1}^k=u_n^k\quad\text{if $u_n^k=I_k^+$}, \qquad u^k_{n+1}\in {\mathrm M}(t,u_n^k) \quad \text{otherwise}. \] Notice that from Proposition \ref{prop:3} and Remark \ref{rem:1}, by induction we easily get $u_n^k\in \overline{I_k}$ for every $n\in \mathbb{N}$. Moreover, \begin{equation} \Psi(u_{n+1}^k-u_n^k)\le {\ensuremath{\mathcal E}}(t,u_n^k)-{\ensuremath{\mathcal E}}(t,u_{n+1}^k), \end{equation} so that $(u_n^k)$ is a Cauchy sequence and hence converges to some $\bar{u}^k\in \overline{I_k}$.
From the general properties of the residual stability function we have \begin{equation} \label{eq:19} \Res(t,u_n^k)={\ensuremath{\mathcal E}}(t,u_n^k)-{\ensuremath{\mathcal E}}(t,u_{n+1}^k)-{\sf D}(u_n^k,u_{n+1}^k). \end{equation} By passing to the limit in \eqref{eq:19} we get \[ \Res(t,\bar{u}^k)=0,\quad \text{so that $\bar{u}^k\in S$}, \] which means $\bar{u}^k\in\{I_k^-,I_k^+\}$. In addition, $\bar{u}^k\neq I_k^-$ since ${\ensuremath{\mathcal E}}(t,u_{n+1}^k)<{\ensuremath{\mathcal E}}(t,u_n^k)$ whenever $u_{n+1}^k\neq u_n^k$, which implies ${\ensuremath{\mathcal E}}(t,\bar{u}^k)<{\ensuremath{\mathcal E}}(t,I_k^-)$. We conclude that $\bar{u}^k=I_k^+$, and we set $E_k:=\bigcup_{n=0}^\infty \{u_n^k\}$.
\item[-] \emph{Case 2: ``The (double) chain''}. $W'(I_k^-)< \frac{W(z)-W(I_k^-)+\delta(I_k^-,z)}{z-I_k^-}$ for every $z>I_k^-$.\newline In this case $\Wird\delta(I_k^-)=W'(I_k^-)=\ell(t)-\alpha_+$. The energy ${\ensuremath{\mathcal E}}(t,u)=W(u)-(W'(I_k^-)+\alpha_+)u$ has negative derivative at $u=I_k^-$, so that it is decreasing in a neighborhood of $I_k^-$. Let us choose $\varepsilon>0$ such that ${\ensuremath{\mathcal E}}(t,I_k^-+\varepsilon)<{\ensuremath{\mathcal E}}(t,I_k^-)$. We can thus define by induction the following sequence $(u^k_{n,\varepsilon})$: \[ u_{0,\varepsilon}^k:=I_k^-+\varepsilon,\qquad u_{n+1,\varepsilon}^k=u_{n,\varepsilon}^k\quad\text{if $u_{n,\varepsilon}^k=I_k^+$}, \qquad u_{n+1,\varepsilon}^k\in{\mathrm M}(t,u_{n,\varepsilon}^k)\quad\text{otherwise}. \] As in the previous case, this sequence is well defined and converges to $I_k^+$. In order to pass to the limit as $\varepsilon\downarrow 0$, we apply a compactness argument: we consider the family of sets \[ E_{k,\varepsilon}:=\bigcup_{n=0}^\infty\{u_{n,\varepsilon}^k\}\cup \{I_k^+\}. \] The sets $E_{k,\varepsilon}$ are compact and satisfy $E_{k,\varepsilon}\subseteq \overline{I_k}$. We can apply Kuratowski's theorem (see e.g.~\cite{Kuratowski55}): there exists a compact subset $E_k\subseteq \overline{I_k}$ such that, up to a subsequence, $E_{k,\varepsilon}\rightarrow E_k$ in the Hausdorff metric. It is easy to check (see \cite[Lemma 3.11]{MinSav16}) that $E_k^-=I_k^-$, $E_k^+=I_k^+$ and \begin{equation} \label{eq:28} z\in {\mathrm M}(t,z_{E_k}^-)\qquad\text{for every $z\in E_k$}, \end{equation} where $z_{E_k}^-$ is defined in \eqref{eq:44}. \end{enumerate} In conclusion, we repeat this construction for every open interval $I_k$ and we consider $E:=\bigcup_{k=0}^{\infty} E_k\cup S$. Notice that $E^-=\ul(t)$, $E^+=\ur(t)$ and $E$ is a compact subset of $\mathbb{R}$.
Indeed, $E$ is bounded and if $(x_n)$ is a sequence in $E$ accumulating at some point $\bar{x}$, by construction $x_n$ eventually lies in one of the sets $E_k$ or in $S$, which are compact. \par We can thus consider the curve \[ \vartheta:E\rightarrow \mathbb{R} \quad\text{such that $\vartheta(z)=z$ for every $z\in E$}: \] it is an admissible transition connecting $\ul(t)$ and $\ur(t)$, with $\vartheta(E)\ni u(t)$ thanks to $a')$. It remains only to prove that $\vartheta$ satisfies \eqref{eq:120} and \eqref{eq:121}. \par Concerning \eqref{eq:120}, for every $I\subset {\mathfrak H}(E)$, by construction $\vartheta(I^+)\in{\mathrm M}(t,\vartheta(I^-))$, so that \begin{equation} \label{eq:45} \varpsi(\vartheta, E\cap [I^-,I^+])=\Psi(\vartheta(I^+)-\vartheta(I^-))\le {\ensuremath{\mathcal E}}(t,\vartheta(I^-))-{\ensuremath{\mathcal E}}(t,\vartheta(I^+)). \end{equation} When $s\in {\mathrm L}\big(\vartheta(E)\cap (-\infty,s]\big)$ we get $\vartheta(s)\in S$, so that $\Wsld\delta(\vartheta(s))\le \ell(t)+\alpha_-$. In particular, since $\vartheta$ is increasing, \[ \frac{W(\vartheta(r))-W(\vartheta(s))+\delta(\vartheta(s),\vartheta(r))}{\vartheta(r)-\vartheta(s)}\le \ell(t)+\alpha_-\qquad\text{for every $r<s$}. \] After a straightforward computation, using \eqref{eq:delta} and passing to the limit we get \begin{equation} \label{eq:46} \limsup_{r\uparrow s}\frac{{\ensuremath{\mathcal E}}(t,\vartheta(r))-{\ensuremath{\mathcal E}}(t,\vartheta(s))}{\varpsi\big(\vartheta,E\cap [r,s]\big)}\ge 1. \end{equation} We can thus recover \eqref{eq:120} from \eqref{eq:45} and \eqref{eq:46} by using Lemma \ref{le:elementary}, where we set $f(s):=-{\ensuremath{\mathcal E}}(t,\vartheta(s))$ and $g(s):=\varpsi(\vartheta,E\cap [E^-,s])$. \par Finally, \eqref{eq:121} holds by construction if $r$ is isolated in $E\cap (-\infty,r]$. Otherwise, $r_E^-=r$ and it is still satisfied.
In conclusion, by Theorem \ref{prop:2}, $\vartheta$ is an optimal transition satisfying the third identity of \eqref{Jve}. Considering the restrictions of $\vartheta$ to $E\cap [\ul(t),u(t)]$ and $E\cap [u(t),\ur(t)]$, we also get the first two identities of \eqref{Jve}.
\underline{Claim 4}. \emph{$b)$ is equivalent to the doubly nonlinear equation \eqref{eq:DN0}.} \newline We notice that \eqref{eq:DN0} yields \begin{equation} \label{eq:47} W'(u(t))= \ell(t)-\alpha_+\qquad \text{for $\left(u'_{\rm{co}}\right)^+$-a.e. $t\in (a,b)$}, \end{equation} so that \eqref{eq:equation1} holds by continuity and by \eqref{eq:1d-stabilityright} in $\mathrm{supp} \left(u'\right)^+\setminus \Ju$. On the other hand, for every $t\in \Ju\cap \,\mathrm{supp} \left(u'\right)^+$ we have $\ul(t)<\ur(t)$. From \eqref{eq:1d-stabilityright} and \eqref{eq:jump2}, $\ell(t)-\alpha_+=\mathit W_{\mathsf{i}\mathsf{r}}'(\ur(t))$, and then, combining Proposition \ref{prop: Wir.prop} and \eqref{eq:jump2} again, we get \[ W'(\ur(t))=\mathit W_{\mathsf{i}\mathsf{r}}'(\ur(t))=\ell(t)-\alpha_+, \] which proves \eqref{eq:equation1}. The identities in \eqref{eq:equation2} follow by the same argument. \par The converse implication is immediate: since $\mu$ is diffuse, $\ul(t)=\ur(t)=u(t)$ for $\mu$-a.e. $t\in (a,b)$, and \eqref{eq:DN0} follows by combining \eqref{eq:equation1}, \eqref{eq:equation2} and \eqref{eq:1d-stability}. \end{proof}
The previous general result has a simple consequence: a Visco-Energetic solution is locally constant in a neighborhood of a point where the stability condition \eqref{eq:1d-stability} holds with a strict inequality. \begin{corollary} \label{cor:main-theorem-corollary} Let $u\in \rm{BV}([a,b];\mathbb{R})$ be a Visco-Energetic solution of the rate-independent system $(\mathbb{R},{\ensuremath{\mathcal E}},\Psi,\delta)$. Then $u$ is locally constant in the open set \[ {\ensuremath{\mathcal I}}:=\left\{t\in [a,b]: -\alpha_-<\ell(t)-\Wsld\delta(u(t))\le \ell(t)-\Wird\delta(u(t))<\alpha_+\right\}. \] \end{corollary}
\begin{proof} By \eqref{eq:jump2} any $t\in {\ensuremath{\mathcal I}}$ is a continuity point for $u$; the continuity properties of $\Wird\delta(\cdot)$ and $\Wsld\delta(\cdot)$ then show that a neighborhood of $t$ is also contained in ${\ensuremath{\mathcal I}}$, so that ${\ensuremath{\mathcal I}}$ is open and disjoint from $\Ju$. Relations \eqref{eq:equation1} and \eqref{eq:equation2} then yield that \[ u'=0 \qquad \text{in the sense of distributions in ${\ensuremath{\mathcal I}}$}, \] so that $u$ is locally constant. \end{proof}
\paragraph{Example.} We conclude this section with the classic example of the double-well potential energy $W(u)=\frac{1}{4}(u^2-1)^2$. \begin{figure}
\caption{Visco-Energetic solution for a double-well potential energy with an oscillating external loading and a quadratic viscous correction $\delta(u,v)$, tuned by a parameter $\mu>-\min W''$.}
\label{fig:6}
\end{figure}
This energy clearly satisfies \eqref{eq:W'-assumption}. Notice also that $W'(u)=u^3-u$ and $\min W''=-1$. Therefore, if we choose $\delta(u,v):=(v-u)^2$, according to Proposition \ref{prop:1}, $\Wird\delta=\Wsld\delta=W'$ and we expect a behaviour similar to that of BV solutions, with the optimal transition at every jump point similar in form to a ``double chain''.
If the loading oscillates, for example $\ell(t)=\sin(t)$ with $\alpha_\pm=\frac{1}{2}$, and we choose the initial datum so that $W'(u(a))=\ell(a)-\alpha_+$, the result is a loop typical of hysteresis phenomena: the solution $u$ is locally constant whenever $\ell$ changes direction.
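\par To make this loop quantitative, note that with these choices (and assuming, as above, the identification $\Wird\delta=\Wsld\delta=W'$; the explicit thresholds below are an elementary computation offered only as an illustration), equation \eqref{eq:equation1} reduces on $\mathrm{supp}\left(u'\right)^+$ to \[ u(t)^3-u(t)=\sin(t)-\tfrac{1}{2}, \] to be solved along a branch where $W''(u)=3u^2-1\ge 0$. The solution climbs the left stable branch until $\ell(t)-\alpha_+$ reaches the local maximum $W'(-1/\sqrt{3})=\frac{2}{3\sqrt{3}}$, at which point it jumps to $u=2/\sqrt{3}$, the largest root of $u^3-u=\frac{2}{3\sqrt{3}}$. Symmetrically, during the unloading phase \eqref{eq:equation2} prescribes $u(t)^3-u(t)=\sin(t)+\tfrac{1}{2}$, with a downward jump from $u=1/\sqrt{3}$ to $u=-2/\sqrt{3}$ when $\ell(t)+\alpha_-$ reaches the local minimum $W'(1/\sqrt{3})=-\frac{2}{3\sqrt{3}}$.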
\section{Visco-Energetic solutions with monotone loadings} \label{sec:4} Visco-Energetic solutions of rate-independent systems in $\mathbb{R}$, driven by monotone loadings, involve the notion of the \textit{upper and lower monotone} (i.e. nondecreasing) \textit{envelopes} of the graphs of $\Wird\delta$ and $\Wsld\delta$. \par In this section we first focus on a few properties of these maps and their inverses, and then we exhibit the explicit formulae characterizing Visco-Energetic solutions when $\ell$ is increasing or decreasing.
\subsection{Monotone envelopes of one-sided global slopes} \label{subsec:monotone-envelopes} \begin{definition}[Upper monotone envelope of $\Wird\delta$] For every $\bar{u}$ in $\mathbb{R}$, we define the maximal monotone map $\textit{\textbf{m}}^{\bar{u}}_\delta(\cdot):\mathbb{R}\rightarrow\mathbb{R}$ \begin{equation} \begin{split} \textit{\textbf{m}}^{\bar{u}}_\delta(u):=\max_{\bar{u}\le v\le u}\Wird\delta(v)&\quad\text{if $u>\bar{u}$},\qquad \textit{\textbf{m}}^{\bar{u}}_\delta(\bar{u}):=(-\infty,\Wird\delta(\bar{u})], \\ &\textit{\textbf{m}}^{\bar{u}}_\delta(u)=\emptyset\quad \text{if $u<\bar{u}$}. \end{split} \end{equation} \end{definition} We call $\textit{\textbf{m}}^{\bar{u}}_\delta(\cdot)$ the \textit{upper monotone envelope of $\Wird\delta$} in the interval $(\bar{u},+\infty)$. The \emph{contact set} is defined by \[ C^{\bar{u}}:=\{\bar{u}\}\cup\{u>\bar{u}:\Wird\delta(u)=\textit{\textbf{m}}^{\bar{u}}_\delta(u)\}. \] Thanks to \eqref{eq:W'-assumption}, it is easy to check that \begin{equation} \lim_{v\rightarrow -\infty}\Wird\delta(v)=-\infty,\qquad \lim_{v\rightarrow +\infty}\Wird\delta(v)=+\infty, \end{equation} so that the map $\textit{\textbf{m}}^{\bar{u}}_\delta(\cdot)$ is monotone and surjective; it is also single-valued on $(\bar{u},+\infty)$ (where we identify the set $\textit{\textbf{m}}^{\bar{u}}_\delta(u)$ with its unique element with a slight abuse of notation). We can thus consider the inverse graph $\textit{\textbf{p}}^{\bar{u}}_\delta(\cdot):\mathbb{R}\rightarrow[\bar{u},+\infty)$ of $\textit{\textbf{m}}^{\bar{u}}_\delta(\cdot)$: it is defined by \[ u\in\textit{\textbf{p}}^{\bar{u}}_\delta(\ell)\quad\Leftrightarrow\quad\ell\in\textit{\textbf{m}}^{\bar{u}}_\delta(u)\qquad\text{for $u$, $\ell\in\mathbb{R}$}. 
\] Clearly, $\textit{\textbf{p}}^{\bar{u}}_\delta(\cdot)$ is a maximal monotone graph in $\mathbb{R}$ and it is uniquely characterized by a left-continuous monotone function $p_{{\sf l},\delta}^{\bar{u}}(\cdot)$ and a right-continuous monotone function $p_{{\sf r},\delta}^{\bar{u}}(\cdot)$ such that \[ \textit{\textbf{p}}^{\bar{u}}_\delta(\ell)=[p_{{\sf l},\delta}^{\bar{u}}(\ell),p_{{\sf r},\delta}^{\bar{u}}(\ell)],\quad \text{i.e.}\quad \ell\in\textit{\textbf{m}}^{\bar{u}}_\delta(u)\quad\Leftrightarrow\quad p_{{\sf l},\delta}^{\bar{u}}(\ell)\le u \le p_{{\sf r},\delta}^{\bar{u}}(\ell). \] We also consider a further selection in the graph of $\textit{\textbf{p}}^{\bar{u}}_\delta(\cdot)$: \[ \textit{\textbf{p}}^{\bar{u}}_{\sfc,\delta}(\ell):=\{u\in \textit{\textbf{p}}^{\bar{u}}_\delta(\ell):\Wird\delta(u)=\ell\}=\{u\in C^{\bar{u}}:\textit{\textbf{m}}^{\bar{u}}_\delta(u)\ni\ell\}=\textit{\textbf{p}}^{\bar{u}}_\delta(\ell)\cap C^{\bar{u}}. \] By introducing the set \[ A_\delta^{\bar{u}}:=\{f:(\bar{u},+\infty)\rightarrow\mathbb{R}:\text{ $f$ is nondecreasing and fulfills $f\ge \Wird\delta$}\}, \] we have \begin{equation} \label{eq:13} \textit{\textbf{m}}^{\bar{u}}_\delta(\cdot)\restr{(\bar{u},+\infty)}\in A_\delta^{\bar{u}},\quad \Wird\delta(u)\le \textit{\textbf{m}}^{\bar{u}}_\delta(u)\le f(u) \text{ for all $f\in A_\delta^{\bar{u}}, u\in(\bar{u},+\infty)$}, \end{equation} so that $\textit{\textbf{m}}^{\bar{u}}_\delta$ is the minimal nondecreasing map above the graph of $\Wird\delta$ in $(\bar{u},+\infty)$. It immediately follows from \eqref{eq:13} that \[ \textit{\textbf{m}}^{\bar{u}}_\delta(u)=\inf\{f(u): f\in A_\delta^{\bar{u}}\}\qquad \text{for all $u>\bar{u}$}. \] The following result collects some simple properties of $p_{{\sf l},\delta}^{\bar{u}}(\cdot)$ and $p_{{\sf r},\delta}^{\bar{u}}(\cdot)$. \begin{proposition} Assume \eqref{eq:W'-assumption}. 
Then for every $\ell\ge \Wird\delta(\bar{u})$ there holds \begin{equation} \label{eq:14} \Wird\delta(u)\le \ell\qquad \text{if $u\in [\bar{u},p_{{\sf r},\delta}^{\bar{u}}(\ell)]$}. \end{equation} Moreover, for every $\ell\in\mathbb{R}$ we have \begin{equation} \label{eq:15} p_{{\sf l},\delta}^{\bar{u}}(\ell)=\min \{u\ge \bar{u}:\Wird\delta(u)\ge\ell\},\qquad p_{{\sf r},\delta}^{\bar{u}}(\ell)=\inf\{u\ge \bar{u}: \Wird\delta(u)>\ell\}. \end{equation} \end{proposition}
\begin{proof} Property \eqref{eq:14} is an immediate consequence of the inequality $\Wird\delta\le \textit{\textbf{m}}^{\bar{u}}_\delta(\cdot)$ in $[\bar{u},+\infty)$. \newline To prove the first of \eqref{eq:15} it is sufficient to notice that \[ \Wird\delta(u)\le \textit{\textbf{m}}^{\bar{u}}_\delta(u)<\ell \quad \text{if }\bar{u}\le u< p_{{\sf l},\delta}^{\bar{u}}(\ell), \] and $\textit{\textbf{m}}^{\bar{u}}_\delta(u)=\ell$ if $u=p_{{\sf l},\delta}^{\bar{u}}(\ell)$. For the second of \eqref{eq:15}, we observe that, when $u>p_{{\sf r},\delta}^{\bar{u}}(\ell)$, we have $\textit{\textbf{m}}^{\bar{u}}_\delta(u)>\ell$, and we know that there exists $v\in [p_{{\sf r},\delta}^{\bar{u}}(\ell),u]$ such that $\Wird\delta(v)>\ell$. Since $u$ is arbitrary we get \[ p_{{\sf r},\delta}^{\bar{u}}(\ell)\ge \inf\{u\ge \bar{u}: \Wird\delta(u)>\ell\}. \] The converse inequality follows from \eqref{eq:14}. \end{proof}
\par In a completely similar way we can introduce the \textit{maximal monotone map} below the graph of $\Wsld\delta$ on the interval $(-\infty,\bar{u}]$. \begin{definition}[Lower monotone envelope of $\Wsld\delta$] For every $\bar{u}$ in $\mathbb{R}$, we define the maximal monotone map $\textit{\textbf{n}}^{\bar{u}}_\delta(\cdot):\mathbb{R}\rightarrow\mathbb{R}$ \begin{equation} \begin{split} \textit{\textbf{n}}^{\bar{u}}_\delta(u):=\inf_{u\le v\le \bar{u}}\Wsld\delta(v)&\quad\text{if $u<\bar{u}$},\qquad \textit{\textbf{n}}^{\bar{u}}_\delta(\bar{u}):=[\Wsld\delta(\bar{u}),+\infty), \\ &\textit{\textbf{n}}^{\bar{u}}_\delta(u)=\emptyset\quad \text{if $u>\bar{u}$}. \end{split} \end{equation} \end{definition} This map satisfies \[ \textit{\textbf{n}}^{\bar{u}}_\delta(u)=\sup \{f(u):f\in B_\delta^{\bar{u}}\}\quad\text{for $u<\bar{u}$}, \] where \[ B_\delta^{\bar{u}}:=\{f:(-\infty,\bar{u})\rightarrow\mathbb{R}: \text{ $f$ is nondecreasing and fulfills $f\le \Wsld\delta$} \}. \] As before, the inverse graph $\textit{\textbf{q}}^{\bar{u}}_\delta(\cdot):=(\textit{\textbf{n}}^{\bar{u}}_\delta(\cdot))^{-1}:\mathbb{R}\rightarrow (-\infty,\bar{u}]$ can be represented as $\textit{\textbf{q}}^{\bar{u}}_\delta(\ell)=[q_{{\sf l},\delta}^{\bar{u}}(\ell),q_{{\sf r},\delta}^{\bar{u}}(\ell)]$, where \[ q_{{\sf l},\delta}^{\bar{u}}(\ell)=\sup \{u\le \bar{u}: \Wsld\delta(u)<\ell\},\qquad q_{{\sf r},\delta}^{\bar{u}}(\ell)=\max \{u\le \bar{u}: \Wsld\delta(u)\le\ell\}, \] and we set \[ \textit{\textbf{q}}^{\bar{u}}_{\sfc,\delta}(\ell):=\{u\in \textit{\textbf{q}}^{\bar{u}}_\delta(\ell):\Wsld\delta(u)=\ell\}. \]
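\par As a concrete illustration (a direct computation, stated here only as an example), consider the double-well density $W(u)=\frac{1}{4}(u^2-1)^2$ of the previous section, with $\delta$ chosen so that $\Wird\delta=W'$ (cf.\ Proposition \ref{prop:1}). For any $\bar{u}\le -1/\sqrt{3}$, since $W'(u)=u^3-u$ increases up to its local maximum $W'(-1/\sqrt{3})=\frac{2}{3\sqrt{3}}$ and $2/\sqrt{3}$ is the largest root of $u^3-u=\frac{2}{3\sqrt{3}}$, the upper monotone envelope is \[ \textit{\textbf{m}}^{\bar{u}}_\delta(u)= \begin{cases} u^3-u & \text{if $\bar{u}<u\le -1/\sqrt{3}$ or $u\ge 2/\sqrt{3}$},\\ \dfrac{2}{3\sqrt{3}} & \text{if $-1/\sqrt{3}\le u\le 2/\sqrt{3}$}, \end{cases} \] with contact set $C^{\bar{u}}=\{\bar{u}\}\cup(\bar{u},-1/\sqrt{3}\,]\cup[2/\sqrt{3},+\infty)$. The inverse graph is single-valued except at $\ell=\frac{2}{3\sqrt{3}}$, where $\textit{\textbf{p}}^{\bar{u}}_\delta(\ell)=[-1/\sqrt{3},2/\sqrt{3}]$ while $\textit{\textbf{p}}^{\bar{u}}_{\sfc,\delta}(\ell)=\{-1/\sqrt{3},2/\sqrt{3}\}$: the selection $\textit{\textbf{p}}^{\bar{u}}_{\sfc,\delta}$ retains only the endpoints of the unstable plateau crossed by a jump.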
\subsection{Monotone loadings and Visco-Energetic solutions} We apply the notions introduced in the previous section to characterize Visco-Energetic solutions when $\ell$ is monotone. First of all, we provide an explicit formula yielding Visco-Energetic solutions for an increasing loading $\ell$. The cases of a decreasing and of a piecewise monotone loading can be treated in a similar way.
\begin{theorem} \label{thm:monotonicity-theorem1} Let $\bar{u}\in \mathbb{R}$, $\ell\in {\mathrm C}^1([a,b])$ be a nondecreasing loading such that \begin{equation} \label{eq:monotone-loading-assumption1} \ell(a)\ge \Wsld\delta(\bar{u})-\alpha_-. \end{equation} Any nondecreasing map $u:[a,b]\rightarrow\mathbb{R}$, with $u(a)=\bar{u}$, such that for every $t\in(a,b]$ \begin{equation}\label{eq:monotone-loading-assumption2} \Wsld\delta(u(t))-\alpha_-\le W'(u(t)),\qquad u(t)\in \textit{\textbf{p}}^{\bar{u}}_{\sfc,\delta}(\ell(t)-\alpha_+) \end{equation} is a Visco-Energetic solution of the rate-independent system $(\mathbb{R},{\ensuremath{\mathcal E}},\Psi,\delta)$. In particular, \eqref{eq:monotone-loading-assumption2} yields \begin{equation}\label{eq:monotone-loading-characterization1} u(t)\in [p_{{\sf l},\delta}^{\bar{u}}(\ell(t)-\alpha_+), p_{{\sf r},\delta}^{\bar{u}}(\ell(t)-\alpha_+)]\quad\text{for every $t\in (a,b]$}. \end{equation} \end{theorem}
\begin{proof} We apply Theorem \ref{thm:main-theorem}. Concerning the global stability condition, notice that \eqref{eq:monotone-loading-assumption2} yields \begin{equation} \label{eq:31} \Wird\delta(u(t))=W'(u(t))\qquad\text{for every $t\in(a,b]$}. \end{equation} Indeed, if $\Wird\delta(u(t))\neq W'(u(t))$, then by Proposition \ref{prop: Wir.prop} $\Wird\delta$ is decreasing in a neighborhood of $u(t)$, which contradicts the second of \eqref{eq:monotone-loading-assumption2}. Therefore, the first of \eqref{eq:monotone-loading-assumption2}, combined with \eqref{eq:31}, gives \eqref{eq:1d-stability} for every $t\in(a,b]$. \par To check the equation \eqref{eq:equation1}, we set \[ \gamma:=\inf \{t>a: u(t)>u(a)\}. \] If $\Wird\delta(u(a))<\ell(a)-\alpha_+$, then from \eqref{eq:monotone-loading-assumption2} $a\in \Ju$ and $\Wird\delta(\ur(a))=\ell(a)-\alpha_+$. Otherwise, $u(a)$ satisfies the stability condition and $u$ is clearly a constant Visco-Energetic solution on $[a,\gamma]$. Thus, it is not restrictive to assume that $\gamma=a$ by Proposition \ref{lem:1}. In this case $\ur(t)>u(a)$ for every $t>a$ and by continuity \eqref{eq:monotone-loading-assumption2} yields \begin{equation} \label{eq:32} \Wird\delta(\ul(t))=\Wird\delta(\ur(t))=\ell(t)-\alpha_+\qquad \text{for every $t\in (a,b)$}, \end{equation} and the second identity still holds at $t=a$. Thus, from \eqref{eq:31}, \eqref{eq:32} and the continuity of $W'$ we finally get \eqref{eq:equation1}.
\par To check the jump conditions, let us first notice that, combining \eqref{eq:32} and \eqref{eq:monotone-loading-assumption2}, \[ \Wird\delta(v)\le \ell(t)-\alpha_+\qquad\text{ for every $t\in (a,b]$, $\bar{u}\le v\le \ur(t)$}. \] Then, from \eqref{eq:15} and the monotonicity of $u$ we get \begin{equation} \label{eq:35} p_{{\sf l},\delta}^{\bar{u}}(\ell(t)-\alpha_+)\le \ul(t)\le u(t)\le \ur(t)\le p_{{\sf r},\delta}^{\bar{u}}(\ell(t)-\alpha_+)\quad \text{for every $t\in(a,b]$}, \end{equation} which yields \eqref{eq:jump1} and \eqref{eq:jump2} by \eqref{eq:14}. Moreover, the inequalities in \eqref{eq:35} also show that \eqref{eq:monotone-loading-assumption2} implies \eqref{eq:monotone-loading-characterization1}. \end{proof}
\par If the loading is decreasing, a similar result still holds. It can be proved with an adaptation of the proof of Theorem \ref{thm:monotonicity-theorem1}. \begin{theorem} \label{thm:45}
Let $\bar{u}\in \mathbb{R}$ and $\ell\in {\mathrm C}^1([a,b])$ be a nonincreasing loading such that \begin{equation} \ell(a)\le \Wird\delta(\bar{u})+\alpha_+. \end{equation} Any nonincreasing map $u:[a,b]\rightarrow\mathbb{R}$, with $u(a)=\bar{u}$, such that \begin{equation} \label{eq:34} \Wird\delta(u(t))+\alpha_+\ge W'(u(t)),\qquad u(t)\in \textit{\textbf{q}}^{\bar{u}}_{\sfc,\delta}(\ell(t)+\alpha_-)\text{ for every $t\in (a,b]$} \end{equation} is a Visco-Energetic solution of the rate-independent system $(\mathbb{R},{\ensuremath{\mathcal E}},\Psi,\delta)$. In particular, \eqref{eq:34} yields \begin{equation} u(t)\in [q_{{\sf l},\delta}^{\bar{u}}(\ell(t)+\alpha_-), q_{{\sf r},\delta}^{\bar{u}}(\ell(t)+\alpha_-)]\quad\text{for every $t\in [a,b]$}. \end{equation} \end{theorem}
\begin{remark} The first condition of \eqref{eq:monotone-loading-assumption2} (resp. the first of \eqref{eq:34}) holds if the energy density $W$ satisfies the $\delta$-convexity assumption \eqref{eq:41}. In this case \[ \Wird\delta(u)=W'(u)=\Wsld\delta(u)\qquad\text{for every $u\in\mathbb{R}$}. \] In particular, it is satisfied if $W$ is $\lambda$-convex, see \eqref{eq:lambda-convex}, and we choose a quadratic $\delta$, tuned by a parameter $\mu\ge\min\{-\lambda,0\}$. \end{remark}
\par The next result shows that, under a slightly stronger condition on the initial data, any Visco-Energetic solution driven by an increasing loading admits a representation similar to \eqref{eq:monotone-loading-assumption2}: the second inclusion holds for every $t\not\in \Ju$. \begin{theorem} [Nondecreasing loading] \label{thm:monotonicity-main-theorem} Let $\ell\in {\mathrm C}^1([a,b])$ be a nondecreasing loading and let $u\in \mathrm{BV}([a,b],\mathbb{R})$ be a Visco-Energetic solution of the rate-independent system $(\mathbb{R},{\ensuremath{\mathcal E}},\Psi,\delta)$ satisfying \begin{gather} \ell(a)\ge \Wsld\delta(u(a))-\alpha_-, \tag{IC1} \\ W'<W'(u(a)) \text{ in a left neighborhood of $u(a)$}\quad \text{if $W'(u(a))=\ell(a)+\alpha_-$}; \tag{IC2} \label{eq:monotonicity-theorem-assumption1} \end{gather} and, for every $z<u(a)$, \begin{equation} \frac{W(z)-W(u(a))+\delta(u(a),z)}{z-u(a)}<\Wsld\delta(u(a)) \quad \text{if $\Wsld\delta(u(a))=\ell(a)+\alpha_-$}.\tag{IC3} \label{eq:monotonicity-theorem-assumpion2} \end{equation} Then, similarly to Theorem \ref{thm:monotonicity-theorem1}, $u$ satisfies \begin{equation} \label{eq:monotonicity-theorem-thesis1} u\text{ is nondecreasing},\qquad u(t)\in \textit{\textbf{p}}^{u(a)}_{\sfc,\delta}(\ell(t)-\alpha_+)\quad\text{for every $t\in [a,b]\setminus \Ju$} \end{equation} and therefore \begin{equation} \label{eq:monotonicity-theorem-thesis2} u(t)\in [p_{{\sf l},\delta}^{u(a)}(\ell(t)-\alpha_+), p_{{\sf r},\delta}^{u(a)}(\ell(t)-\alpha_+)]\quad\text{for every $t\in [a,b]$}. \end{equation} \end{theorem} This result is a generalization to the visco-energetic framework of \cite[Theorem 6.3]{Rossi-Savare13}.
One of the technical points is to avoid the extra assumption \begin{equation} \label{eq:monotonicity-extra-assumption} \mathit W_{\mathsf{s}\mathsf{l}}'(u)-\alpha_- \le \ell(t)< \mathit W_{\mathsf{i}\mathsf{r}}'(u)+\alpha_+\quad \text{for every $u \in\mathbb{R}$}, \end{equation} which is not immediately satisfied if $\alpha_\pm$ are very small. In our context, if $\delta$ is too small, \eqref{eq:monotonicity-extra-assumption} is still not satisfied even if we replace the one-sided slopes with their $\delta$-corrected versions. \par The next technical lemma helps to resolve this issue. Compared with the analogous result in the energetic setting, we need a more refined analysis of the behaviour of $u$ at jumps. \begin{lemma} \label{lem:monotonicity-proof-lemma} Under the same assumptions as Theorem \ref{thm:monotonicity-main-theorem}, let $a<\sigma'<\sigma\le b$ be such that \begin{equation} \ell(t)-W'(\ur(t))>-\alpha_-=\ell(\sigma)-W'(\ur(\sigma))\quad\text{for every $t\in[\sigma',\sigma)$}. \end{equation} Then $\sigma\notin \Ju$. \end{lemma}
\begin{proof} We argue by contradiction and assume that $\sigma\in \Ju$. In view of \eqref{eq:equation1} necessarily \begin{equation} \label{eq:monotone-loading-ineq2} \ul(\sigma)>\ur(\sigma), \end{equation} and \eqref{eq:equation2} shows that $\ur$ is nondecreasing in $[\sigma',\sigma)$. Moreover, combining \eqref{eq:1d-stabilityleft} and \eqref{eq:jump2}, \[ \ell(\sigma)+\alpha_-=\Wsld\delta(\ul(\sigma))>W'(\ul(\sigma)), \] so that by Proposition \ref{prop: Wir.prop} there exist a point $\tilde{u}<\ul(\sigma)$ attaining the supremum in the definition of $\Wsld\delta(\ul(\sigma))$ and a neighborhood of $\ul(\sigma)$ in which $\Wsld\delta(u)$ is decreasing. \par
We want to prove that $\ul(\sigma)=u(a)$. We consider the set \[ {\ensuremath{\mathcal P}}:=\{\rho\in [a,\sigma): \ur(t)\equiv \ul(\sigma),\mbox{ }\ell(t)=\ell(\sigma)\quad \text{for all $t\in[\rho,\sigma)$}\}, \] and we prove that $[a,\sigma)={\ensuremath{\mathcal P}}$.
\underline{Claim 1}. ${\ensuremath{\mathcal P}}\neq \emptyset$ and ${\ensuremath{\mathcal P}}$ is closed in $[a,\sigma)$. \newline We need to show that $\ell$ and $u$ are constant in a left neighborhood of $\sigma$. We already know that they are nondecreasing in $[\sigma',\sigma)$. To show that they are also nonincreasing we argue by contradiction: assume that there exists a sequence $t_n<\sigma$ converging to $\sigma$ such that \[ u_n:=\ur(t_n)\uparrow \ul(\sigma), \mbox{ }\ell_n:=\ell(t_n)\uparrow\ell(\sigma) \quad\text{and $u_n+\ell_n<\ul(\sigma)+\ell(\sigma)$}. \] Then, for $n$ large enough, $u_n>\tilde{u}$. If $u_n<\ul(\sigma)$ the global stability \eqref{eq:1d-stabilityright} and the jump condition \eqref{eq:17} yield \[ \ell_n+\alpha_-\ge \Wsld\delta(u_n)>\ell(\sigma)+\alpha_-\ge \ell_n+\alpha_-, \] which is absurd. Similarly, if $\ell_n<\ell(\sigma)$, \[ \ell_n+\alpha_-\ge \Wsld\delta(u_n)\ge\ell(\sigma)+\alpha_-> \ell_n+\alpha_-. \] This proves that ${\ensuremath{\mathcal P}}$ contains a left neighborhood of $\sigma$ and hence is non-empty. Moreover, ${\ensuremath{\mathcal P}}$ is clearly closed in $[a,\sigma)$. \par \underline{Claim 2}. Suppose that $\rho\in {\ensuremath{\mathcal P}}$. Then $\rho\notin \Ju$. \newline By contradiction suppose that $\rho\in \Ju$. From Proposition \ref{prop: Wir.prop}, $\mathit W_{\mathsf{s}\mathsf{l}}'$ is decreasing in an open set containing $\ul(\sigma)=\ur(\rho)$. From \eqref{eq:jump2} the only possibility is that $\ul(\rho)<\ur(\rho)$. \newline Suppose that $\ul(\rho)\le \ur(\sigma)$. Then we consider an optimal transition $\vartheta\in {\mathrm C}(E,\mathbb{R})$ connecting $\ul(\rho)$ and $\ur(\rho)$.
Clearly, $\ur(\sigma)\notin E$ since ${\ensuremath{\mathcal E}}(\rho,\ur(\sigma))={\ensuremath{\mathcal E}}(\sigma,\ur(\sigma))$ but from $\rm{(J_{VE})}$ the energy is decreasing during a transition: \[ {\ensuremath{\mathcal E}}(\sigma,\ur(\sigma))<{\ensuremath{\mathcal E}}(\sigma,\ul(\sigma))={\ensuremath{\mathcal E}}(\rho,\ur(\rho))<{\ensuremath{\mathcal E}}(\rho,\ur(\sigma)). \] Then $\ur(\sigma)$ must be in a hole of $E$. Combining Theorem \ref{prop:2} and Proposition \ref{prop:3}, \[ \Wird\delta(\ur(\sigma))<\ell(\sigma)-\alpha_+, \] which contradicts the global stability \eqref{eq:1d-stabilityright}. In a similar way we can discuss the case $\ul(\rho)\ge \ur(\sigma)$. \par \underline{Claim 3}. If $a<\rho\in {\ensuremath{\mathcal P}}$, there exists $\varepsilon>0$ such that $\ell(t)\equiv \ell(\rho)\equiv \ell(\sigma)$ and $u(t)=u(\rho)=\ul(\sigma)$ for every $t\in [\rho-\varepsilon,\rho]$. \newline Thanks to Claim 2, $\rho\notin \Ju$. Then we can argue as in Claim 1, starting from $\ur(\rho)=\ul(\sigma)$ and $\ell(\rho)=\ell(\sigma)$. \par \underline{Conclusion}. ${\ensuremath{\mathcal P}}$ is also open in $[a,\sigma)$ since for every $\rho\in {\ensuremath{\mathcal P}}\cap (a,\sigma)$ it contains a left neighborhood of $\rho$ (${\ensuremath{\mathcal P}}$ obviously contains also a right neighborhood of $\rho$). Since ${\ensuremath{\mathcal P}}$ is both open and closed, ${\ensuremath{\mathcal P}}=[a,\sigma)$. \par Another application of Claim 2, combined with \eqref{eq:monotonicity-theorem-assumpion2}, which prevents the case $u(a)>\ur(a)$, yields that $a\notin \Ju$, so that $\ul(\sigma)=u(a)$ and $\ell(\sigma)=\ell(a)$. Finally, another application of \eqref{eq:monotonicity-theorem-assumpion2} yields a contradiction with \eqref{eq:monotone-loading-ineq2}. \end{proof}
With these notions at our disposal, the proof of Theorem \ref{thm:monotonicity-main-theorem} is a simple adaptation of \cite[Theorem 6.3]{Rossi-Savare13}. For completeness, we report the steps in detail. \begin{proof}[Proof of Theorem \ref{thm:monotonicity-main-theorem}] We split the argument again into several steps. \par \underline{Claim 1}. There exists $\gamma\in [a,b]$ such that $\ell(t)-W'(\ur(t))>-\alpha_-$ for all $t\in (\gamma,b]$ and $u(t)\equiv u(a),\,\ell(t)\equiv\ell(a)$ in $[a,\gamma]$. \newline Let us consider the set \[ \Sigma:=\{t\in [a,b]: W'(\ur(t))=\ell(t)+\alpha_-\} \] and observe that $t_n\in\Sigma,\quad t_n\downarrow t\quad \Rightarrow\quad t\in \Sigma$. If $a\in \Sigma$, we denote by $\Sigma_a$ the connected component of $\Sigma$ containing $a$ and we set $\gamma:=\sup \Sigma_a$. If $\gamma>a$, then \[ W'(\ur(t))=\ell(t)+\alpha_-\quad\text{for every $t\in[a,\gamma]$}, \] so that by \eqref{eq:equation1} $u$ is nonincreasing in $[a,\gamma]$. Assumption \eqref{eq:monotonicity-theorem-assumpion2} implies that $a\notin \Ju$ and $\ur(a)=u(a)$. Since also $\ell$ is nondecreasing we conclude by \eqref{eq:monotonicity-theorem-assumption1} and \eqref{eq:equation2} that $u(t)\equiv u(a)$ and $\ell(t)\equiv\ell(a)$ in $[a,\gamma]$; moreover, by the same argument, $\gamma\notin \Ju$, so that $\gamma\in \Sigma$. When $a\notin \Sigma$ we simply set $\gamma:=a$ and $\Sigma_a=\emptyset$. \par The claim then follows if we show that $\Sigma\setminus\Sigma_a$ is empty. This is trivial if $\gamma=b$. If $\gamma<b$ we suppose $\Sigma\setminus \Sigma_a \neq \emptyset$ and we argue by contradiction. We can find points $\gamma_2>\gamma_1>\gamma$ such that $\gamma_1\notin \Sigma$ and $\gamma_2\in \Sigma$. We can consider $\sigma:=\min(\Sigma\cap [\gamma_1,b])>\gamma_1>\gamma$.
Lemma \ref{lem:monotonicity-proof-lemma} with $\sigma':=\gamma_1$ yields that $\sigma\notin \Ju$, so that we can find $\varepsilon>0$ such that \begin{equation} \label{eq:monotone-loading-ineq3} -\alpha_-<\ell(t)-W'(\ur(t))<\alpha_+\qquad\text{for every $t\in(\sigma-\varepsilon,\sigma)$}. \end{equation} Point $b)$ of Theorem \ref{thm:main-theorem} implies that $\ur(t)=u(t)\equiv \ul(\sigma)=\ur(\sigma)$ is constant in $(\sigma-\varepsilon,\sigma)$. Hence, $W'(\ur(t))\equiv W'(\ur(\sigma))=\ell(\sigma)+\alpha_-\ge \ell(t)+\alpha_-$ for every $t\in (\sigma-\varepsilon,\sigma)$, since $\ell$ is nondecreasing and $\sigma\in \Sigma$. This contradicts \eqref{eq:monotone-loading-ineq3}. \par \underline{Claim 2}. $u$ is nondecreasing in $[a,b]$. \newline Relation \eqref{eq:equation2} and Claim 1 imply that $\left( u'\right)^-\left([a,b)\right)=0$, so that $u$ is nondecreasing in $[a,b)$. If $b$ is a jump point, then by \eqref{eq:jump1} $u(b)>\ul(b)$.
\par \underline{Claim 3}. Let $B:=\{t\in [\gamma,b]:\Wird\delta(u(a))+\alpha_+=\ell(t)\}$ and let $\beta:=\min B$ (with the convention $\beta=b$ if $B$ is empty). Then $u(t)\equiv u(a)$ in $[a,\beta)$ and \begin{equation} \label{eq:monotone-loading-identity1} \Wird\delta(\ul(t))=\Wird\delta(\ur(t))=\ell(t)-\alpha_+\quad\text{for all $t\in (\beta,b)$}. \end{equation} In particular, $\ul(t)\ge p_{{\sf l},\delta}^{u(a)}(\ell(t)-\alpha_+)$ for all $t\in [a,b]$. \newline The first statement follows from the previous Claim and Corollary \ref{cor:main-theorem-corollary}. \newline To prove the second identity in \eqref{eq:monotone-loading-identity1} for $\ur(t)$, we argue by contradiction and suppose that there exists a point $s\in(\beta,b]$ such that $\Wird\delta(\ur(s))+\alpha_+>\ell(s)$. Then, in view of Corollary \ref{cor:main-theorem-corollary}, $u$ is locally constant around $s$. Since $\ell$ is nondecreasing, because of \eqref{eq:equation1} we conclude that $u(t)\equiv u(s)$ for every $t \in[\gamma,s]$, so that $s\le \beta$, a contradiction. The first identity of \eqref{eq:monotone-loading-identity1} follows by continuity and by \eqref{eq:equation1}. \newline The last statement is a consequence of \eqref{eq:15}. Notice that we can also take $t=b$, since $\Wird\delta(\ul(t))=\ell(t)-\alpha_+$ still holds at $t=b$. \par \underline{Claim 4}. For all $t\in [a,b]$ we have $\ur(t)\le p_{{\sf r},\delta}^{u(a)}(\ell(t)-\alpha_+)$. \newline If $\ur(t)=u(a)$ there is nothing to prove. Otherwise, let $t\ge\beta$ and take $z\in (u(a),\ur(t))$. Since $u$ is nondecreasing, there exists $s\in [\beta,t]$ such that $\ul(s)\le z\le \ur(s)$, so that \eqref{eq:jump2} (in the case $s\in\Ju$) or \eqref{eq:equation1} (in the case $\ul(s)=\ur(s)$) yield \[ \Wird\delta(z)\le\ell(s)-\alpha_+\le \ell(t)-\alpha_+, \] since $\ell$ is nondecreasing. Since $z<\ur(t)$ is arbitrary, the claim follows from the second inequality in \eqref{eq:15}. \par \underline{Conclusion}. 
Combining Claim 2, Claim 3 and Claim 4, we get \[ p_{{\sf l},\delta}^{u(a)}(\ell(t)-\alpha_+)\le \ul(t)\le u(t)\le\ur(t) \le p_{{\sf r},\delta}^{u(a)}(\ell(t)-\alpha_+)\quad \text{for every $t\in [a,b]$}, \] which proves relation \eqref{eq:monotonicity-theorem-thesis2}. Finally, \eqref{eq:monotonicity-theorem-thesis1} is due to \eqref{eq:monotonicity-theorem-thesis2} and \eqref{eq:monotone-loading-identity1}. \end{proof}
In a similar way, we can deduce the characterization of Visco-Energetic solutions in the case of a nonincreasing load. \begin{theorem}[Nonincreasing loading] Let $\ell\in {\mathrm C}^1([a,b])$ be a nonincreasing loading and let $u\in \mathrm{BV} ([a,b],\mathbb{R})$ be a Visco-Energetic solution of the rate-independent system $(\mathbb{R},{\ensuremath{\mathcal E}},\Psi,\delta)$ satisfying \begin{gather} \ell(a)\le \Wird\delta(u(a))+\alpha_+, \\ W'>W'(u(a)) \text{ in a right neighborhood of $u(a)$}\quad \text{if $W'(u(a))=\ell(a)-\alpha_+$}; \end{gather} and, for every $z>u(a)$, \begin{equation} \frac{W(z)-W(u(a))+\delta(u(a),z)}{z-u(a)}>\Wird\delta(u(a)) \quad \text{if $\Wird\delta(u(a))=\ell(a)-\alpha_+$}. \end{equation} Then, similarly to Theorem \ref{thm:45}, $u$ satisfies \begin{equation} u\text{ is nonincreasing},\qquad u(t)\in \textit{\textbf{q}}^{u(a)}_{\sfc,\delta}(\ell(t)+\alpha_-)\quad\text{for every $t\in [a,b]\setminus \Ju$} \end{equation} and therefore \begin{equation} u(t)\in [q_{{\sf l},\delta}^{u(a)}(\ell(t)+\alpha_-), q_{{\sf r},\delta}^{u(a)}(\ell(t)+\alpha_-)]\quad\text{for every $t\in [a,b]$}. \end{equation}
\end{theorem}
\paragraph{Example.} We conclude with a final example, involving a more complex potential $W$ (see Figure~\ref{fig:7}). When $W\in {\mathrm C}^2([a,b];\mathbb{R})$ and we choose $\delta(u,v):=\frac{\mu}{2}(v-u)^2$, with $\mu\ge-\min W''$, Visco-Energetic solutions follow the monotone envelope of $W'+\alpha_+$.
\begin{figure}
\caption{Visco-Energetic solutions of a nonconvex energy and an increasing loading. The optimal transition is a combination of sliding and viscous parts.}
\label{fig:7}
\end{figure}
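To illustrate the envelope construction numerically, the following sketch (not taken from the paper; the potential $W(u)=u^4/4-u^2/2$ and the value $\alpha_+=0.1$ are invented for illustration) computes the monotone envelope of $W'+\alpha_+$ as a running maximum over a grid:

```python
import numpy as np

# Illustrative choice (not from the paper): W(u) = u^4/4 - u^2/2, so that
# W'(u) = u^3 - u and min W'' = -1; any mu >= 1 is then admissible for
# delta(u, v) = mu/2 * (v - u)^2.
alpha_plus = 0.1
u_grid = np.linspace(-2.0, 2.0, 2001)
w_prime = u_grid**3 - u_grid

# Monotone (nondecreasing) envelope of W' + alpha_+: running maximum in u.
envelope = np.maximum.accumulate(w_prime + alpha_plus)
```

The envelope is nondecreasing and dominates $W'+\alpha_+$, flattening the graph across the nonconvex region; along an increasing load, the solution is expected to track this flattened graph rather than the original one.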
\end{document}
\begin{document}
\def \D{\mathbb{D}} \def \E{\mathbb{E}} \def \F{\mathbb{F}} \def \L{\mathbb{L}} \def \P{\mathbb{P}} \def \Q{\mathbb{Q}} \def \R{\mathbb{R}} \def \S{\mathbb{S}} \def \M{\mathbb{M}} \def \N{\mathbb{N}} \def \C{\mathbb{C}}
\def{\cal A}{{\cal A}} \def{\cal B}{{\cal B}} \def{\cal C}{{\cal C}} \def{\cal D}{{\cal D}} \def{\cal E}{{\cal E}} \def{\cal F}{{\cal F}} \def{\cal G}{{\cal G}} \def{\cal H}{{\cal H}} \def{\cal I}{{\cal I}} \def{\cal J}{{\cal J}} \def{\cal K}{{\cal K}} \def{\cal L}{{\cal L}} \def{\cal M}{{\cal M}} \def{\cal N}{{\cal N}} \def{\cal O}{{\cal O}} \def{\cal P}{{\cal P}} \def{\cal Q}{{\cal Q}} \def{\cal R}{{\cal R}} \def{\cal S}{{\cal S}} \def{\cal T}{{\cal T}} \def{\cal U}{{\cal U}} \def{\cal V}{{\cal V}} \def{\cal W}{{\cal W}} \def{\cal X}{{\cal X}} \def{\cal Y}{{\cal Y}} \def{\cal Z}{{\cal Z}}
\def{\mathbf L}{\overline L} \def\overline{\P}{\overline{\P}} \def{\bar \Q}{{\bar \Q}} \def{\bar \R}{{\bar \R}} \def{\bar{\Rc}}{{\bar{{\cal R}}}} \def\bar{D}{\bar{D}} \def\bar{\D}{\bar{\D}} \def\bar{\Dc}{\bar{{\cal D}}} \def{\mathbb C}{\overline{C}} \def\underline{C}{\underline{C}} \def\overline{O}{\overline{O}} \def\overline{\Oc}{\overline{{\cal O}}} \def\widehat{B}{\widehat{B}} \def\overline{\Lc}{\overline{{\cal L}}}
\def\tilde{\Rc}{\tilde{{\cal R}}} \def\tilde{\E}{\tilde{\E}} \def\overline{\E}{\overline{\E}}
\def\tilde{N}{\tilde{N}} \def\tilde{B}{\tilde{B}} \def\widetilde{\Kc}{\widetilde{{\cal K}}} \def\tilde{\Om}{\tilde{\Om}} \def\tilde{\Fc}{\tilde{{\cal F}}} \def\tilde{\P}{\tilde{\P}} \def\widetilde{p}{\widetilde{p}} \def\tilde{Y}{\tilde{Y}} \def\tilde{Z}{\tilde{Z}}
\def\overline{\Ec}{\overline{{\cal E}}} \def\underline{\Ec}{\underline{{\cal E}}} \def\overline{\Ac}{\overline{{\cal A}}} \def\underline{\Ac}{\underline{{\cal A}}} \def\overline{\Uc}{\overline{{\cal U}}} \def\underline{\Uc}{\underline{{\cal U}}} \def\widehat{\Lc}{\widehat{{\cal L}}}
\def\bar \tau{\bar \tau}
\def\bar{\Theta}{\bar{\Theta}} \def\overline{\psi}{\overline{\psi}} \def\underline{\psi}{\underline{\psi}}
\def{\bf M}{{\bf M}} \def \I{{\bf I}}
\def \au{\overline{\alpha}} \def \al{\underline{\alpha}} \def \sigmal{\underline{\sigma}} \def \a{\alpha}
\def \Om{\Omega} \def \om{\omega} \def \Omb{\overline{\Omega}} \def \omb{\overline{\om}} \def \omh{\hat{\om}} \def \tauh{\hat{\tau}} \def \omt{\tilde{\om}} \def \eps{\epsilon} \def \xb{\mathbf{x}} \def \xbh{\hat{\xb}} \def \0{\mathbf{0}} \def \H{\mathbb{H}} \def \Xb{\overline{X}}
\def \Lambdab{\overline{\Lambda}} \def \ab{\bar{a}}
\def \Sg{\Sigma} \def \Fcb{\overline{{\cal F}}} \def \Fbb{\overline{\mathbb{F}}}
\def \Pcb{\overline{{\cal P}}} \def \Kcb{\overline{{\cal K}}} \def \psih{\widehat{\psi}}
\def \vp{\varphi} \def \x{\times} \def \sigmah{\widehat \sigma} \def \yr{\mathrm{y}}
\def \Omh{\widehat \Om} \def \Fch{\widehat {\cal F}} \def \Fh{\widehat \F} \def \Ph{\widehat \P} \def \Xh{\widehat X} \def \Wh{\widehat W} \def \T{\mathbb{T}} \def \Z{\mathbb{Z}} \def \ph{\widehat p} \def \Vh{\widehat V} \def \muh{\widehat \mu}
\newcommand{{\rm Var}}{{\rm Var}}
\newcommand{{\rm (i)$\>\>$}}{{\rm (i)$\>\>$}}
\newcommand{{\rm (ii)$\>\>$}}{{\rm (ii)$\>\>$}} \newcommand{{\rm (iii)$\>\, \,$}}{{\rm (iii)$\>\, \,$}} \newcommand{{\rm (iv)$\>\>$}}{{\rm (iv)$\>\>$}} \newcommand{{\rm (v)$\>\>$}}{{\rm (v)$\>\>$}}
\newcommand{{\rm a)$\>\>$}}{{\rm a)$\>\>$}} \newcommand{{\rm b)$\>\>$}}{{\rm b)$\>\>$}} \newcommand{{\rm c)$\>\>$}}{{\rm c)$\>\>$}} \newcommand{{\rm d)$\>\>$}}{{\rm d)$\>\>$}}
\def\mathbf{1}{\mathbf{1}} \def{\rm x}{{\rm x}} \def{\rm v}{{\rm v}} \deft_{i}^{n}{t_{i}^{n}} \deft_{i+1}^{n}{t_{i+1}^{n}} \def\bru#1{{\color{red}{#1}}} \def{d}{{\mathfrak d}} \def{d}{{d}} \defC([0,T]){C([0,T])} \def\widetilde C([0,T]){\widetilde C([0,T])} \defD([0,T]){D([0,T])} \def[0,T]\x\DT{[0,T]\xD([0,T])} \def[0,T]\x\CT{[0,T]\xC([0,T])} \def\v#1{_{\wedge #1}}
\def\Ninfty#1{\|#1\|}
\def{\mathbb C}{{\mathbb C}}
\def\Href#1{{\rm (H\ref{#1})}}
\def\HHref#1{{\rm (H\ref{#1}$^{*}$)}}
\def{\rm w}{{\rm w}}
\def{\mathbf L}{{\mathbf L}}
\def\P-{\rm a.s.}{\P-{\rm a.s.}}
\def\bru#1{{\color{red} #1}}
\def\brub#1{{\color{purple} #1}} \def\greg#1{{\color{red}#1}} \def\red#1{{\color{red}#1}} \def\blue#1{{\color{blue}#1}}
\title{Itô-Dupire's formula for $\C^{0,1}$-functionals of càdlàg weak Dirichlet processes} \author{Bruno Bouchard\footnote{CEREMADE, Universit\'e Paris-Dauphine, PSL, CNRS. bouchard@ceremade.dauphine.fr.} , Maximilien Vallet\footnote{CEREMADE, Universit\'e Paris-Dauphine, PSL, CNRS. vallet@ceremade.dauphine.fr.}} \maketitle \newtheorem{Theorem}{Theorem}[section] \newtheorem{Lemma}[Theorem]{Lemma} \newtheorem{Proposition}[Theorem]{Proposition} \newtheorem{Remark}[Theorem]{Remark} \newtheorem{Definition}[Theorem]{Definition} \newtheorem*{Assumption}{Assumption}
\begin{abstract}
We extend to c\`adl\`ag weak Dirichlet processes the $\mathbb{C}^{0,1}$-functional It\^{o}-Dupire's formula of Bouchard, Loeper and Tan (2021). In particular, we provide sufficient conditions under which a $\mathbb{C}^{0,1}$-functional transformation of a special weak Dirichlet process remains a special weak Dirichlet process. As opposed to Bandini and Russo (2018) who considered the Markovian setting, our approach is not based on the approximation of the functional by smooth ones, which turns out not to be available in the path-dependent case. We simply use a small-jumps cutting argument. \end{abstract}
\section{Introduction} Let $X=X_0+M+A$ be a càdlàg semimartingale where $M=M^c+M^d$ is a local martingale and $A$ is adapted and of bounded variation. Let $\mu^{X}$ denote its jump measure and $\nu^{X}$ its compensator. Then, given a $C^{1,2}$ function $F:[0,T]\x\mathbb{R}^d\to\mathbb{R}$, Itô's formula ensures that $(F(t,X_t))_{t\in\left[ 0,T\right] }$ is a semimartingale with decomposition \begin{align*} F(t,X_t)=F(0,X_0)+&\int_0^t\nabla_x F(s,X_{s-}) dM_s\notag\\ &+\int_{\left] 0,t\right] \times\mathbb{R}^d}(F(s,X_{s-}+x)-F(s,X_{s-})-x\cdot \nabla_x F(s,X_{s-}))(\mu^X-\nu^{X})(ds,dx)\\ &+\Gamma^F_t \end{align*} where \begin{align*} \Gamma^F_t&=\int_0^t\partial_tF(s,X_{s}) ds+\int_0^t\nabla_x F(s,X_{s-}) dA_s+\dfrac{1}{2}\sum_{1\leq i,j\leq d}\int_0^t\nabla^2_{x^{i}x^{j}}F(s,X_{s-}) d\left[ X^i,X^j\right]^c_s. \end{align*} If we assume furthermore that $F(\cdot,X_{\cdot})$ is a local martingale, then $\Gamma^{F}\equiv 0$, and this formula only uses the first derivative in space of $F$; it should thus be valid even if $F$ is only ${C}^{0,1}$. In the Markovian setting, we know from \cite{bandini2017weak,coquet2006natural} that this is indeed true for càdlàg weak Dirichlet processes, even when $F(\cdot,X_{\cdot})$ is not a local martingale, in which case $\Gamma^{F}$ turns out to be an orthogonal process, which is even predictable if $X$ is special. In \cite{bouchard2021ac}, the authors provide an extension of this result to the path-dependent case under the condition that $X$ has continuous paths. Naturally, it uses the notion of Dupire's derivative, see \cite{Dupire2009FunctionalIC,cont2013functional}.
Such a decomposition appears to be a powerful tool, in particular for verification arguments in optimal control problems, for which obtaining $C^{1,2}$-type regularity of the value function may be difficult, if it holds at all. The situation is worse when it comes to path-dependent problems, for which classical derivatives have to be replaced by Dupire's derivatives, whose existence and regularity are difficult to establish. Versions of the above formula were already applied successfully in \cite{BT19,bouchard2021quasi,bouchard2021ac} in the context of risk hedging, under model uncertainty or in markets with price impacts. See \cite{bouchard2021approximate} for an application to BSDEs, or for a class of so-called $\pi$-approximate viscosity solutions of fully non-linear parabolic path-dependent PDEs, for which $C^{0,1}$-regularity in the sense of Dupire can be obtained.
When, as in \cite{bouchard2021ac}, $X$ has continuous paths, it is immediate to conclude that $\Gamma^{F}$ is predictable. Things are a priori more complex if $X$ has jumps. In this case, the above decomposition into a weak Dirichlet process is not unique, and the orthogonal process $\Gamma^{F}$ can contain a purely discontinuous martingale part, which makes the above decomposition useless for verification arguments. In the Markovian setting, \cite{bandini2017weak} uses an approximation argument on $F$ to show that it can actually be chosen to be predictable if $X$ is special. Namely, they construct a sequence of predictable processes $(\Gamma^{F_{n}})_{n\ge 1}$ obtained by applying It\^{o}'s formula to smooth approximations $(F_{n})_{n\ge 1}$ of $F$, and then show that $(\Gamma^{F_{n}})_{n\ge 1}$ converges to $\Gamma^{F}$. So far, this argument could not be extended to the case where $F$ is path-dependent. The main reason is that the vertical and horizontal Dupire's derivatives do not commute, which renders the construction of smooth (in the sense of Dupire) approximations a completely open problem, see e.g.~\cite{saporito2018functional}.
In this paper, we follow a different and actually simpler route. First, we observe that the decomposition can easily be deduced from \cite{bouchard2021ac} when $X$ does not have small jumps. Then, we simply approximate $X$ by removing its small jumps and pass to the limit.
The rest of the paper is organized as follows. We first recall useful results from functional Itô calculus and Itô calculus via regularization. Then, we state and prove our version of the functional Itô formula for càdlàg special weak Dirichlet processes. We conclude with a typical example of application. Some (essentially known) technical results are collected in the Appendix for completeness.
\section{Notations and definitions}
Throughout this paper, we fix a time horizon $T>0$ and let $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\in \left[ 0,T\right] },\P)$ be a stochastic basis, i.e.~a filtered probability space such that the filtration $(\mathcal{F}_t)_{t\in \left[ 0,T\right] }$ is right-continuous.
\subsection{Skorokhod space and path-dependent functionals} Let $D(\left[ 0,T\right] )$ be the set of càdlàg paths on $\left[ 0,T\right] $ taking values in $ \mathbb{R}^d$ and $\Theta :=\left[ 0,T\right]\times D(\left[ 0,T\right] )$. For $(t,{\rm x})\in\Theta $, we define the (optionally) stopped path ${\rm x}_{t\wedge}\in D(\left[ 0,T\right] )$ by ${\rm x}_{t\wedge}:={\rm x}\mathbb{1}_{ \left[ 0,t\right[ }+{\rm x}_t\mathbb{1}_{\left[ t,T\right] }$ and its predictable version ${\rm x}^{-}_{t\wedge}\in D(\left[ 0,T\right] )$ by ${\rm x}^{-}_{t\wedge}:={\rm x}\mathbb{1}_{ \left[ 0,t\right[ }+{\rm x}_{t-}\mathbb{1}_{\left[ t,T\right] }$. For $(t,{\rm x})\in\Theta $ and $y\in\mathbb{R}^d$, we also define the trajectory ${\rm x}\oplus_ty$ by ${\rm x} \mathbb{1}_{\left[ 0,t\right[ }+({\rm x}_t+y)\mathbb{1}_{ \left[ t,T\right] }$ and the trajectory ${\rm x}\boxplus_ty$ by ${\rm x} \mathbb{1}_{ \left[ 0,t\right[ }+y\mathbb{1}_{\left[ t,T\right] }$.
We define on $\Theta$ the pseudo-distance $d_\Theta((t,{\rm x}),(t',{\rm x}'))=\lvert t'-t \rvert + \lvert\!\lvert {\rm x} '_{t'\wedge}-{\rm x}_{t\wedge}\rvert\!\rvert$, where $\vert\!\lvert\cdot\rvert\!\rvert$ denotes the uniform norm on $D(\left[ 0,T\right] )$. Considering the quotient space $(\Theta,\sim)$ defined by $(t,{\rm x})\sim(t',{\rm x}')$ whenever $t=t'$ and ${\rm x}_{t\wedge }={\rm x}'_{t\wedge }$, $(\Theta,d_\Theta)$ is a complete metric space.
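For concreteness, the stopped path, the bump operation $\oplus_t$ and the pseudo-distance $d_\Theta$ can be sketched on paths sampled on a uniform grid. This is a minimal illustration with invented helper names, assuming the path is represented by its values on the grid:

```python
import numpy as np

# Paths on [0, T] sampled on a uniform grid of n points.
T, n = 1.0, 11
grid = np.linspace(0.0, T, n)

def stopped(x, t):
    """x_{t^} : freeze the path at its value at time t."""
    y = x.copy()
    y[grid >= t] = x[np.searchsorted(grid, t)]
    return y

def bump(x, t, y):
    """x (+)_t y : shift the path by y from time t onward."""
    z = x.copy()
    z[grid >= t] += y
    return z

def d_theta(t, x, tp, xp):
    """Pseudo-distance |t' - t| + sup |x'_{t'^} - x_{t^}|."""
    return abs(tp - t) + np.max(np.abs(stopped(xp, tp) - stopped(x, t)))
```

For instance, stopping the identity path $x_s=s$ at $t=0.5$ yields the path $s\mapsto s\wedge 0.5$, and $d_\Theta$ vanishes on equivalent stopped paths, matching the quotient construction above.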
We say that $F:\Theta\rightarrow\mathbb{R}$ is non-anticipative if $F(t,{\rm x})=F(t,{\rm x}_{t\wedge})$ for all $(t,{\rm x})\in\Theta $. A non-anticipative function $F:\Theta\rightarrow\mathbb{R}$ is said to be continuous if it is continuous for $(\Theta,d_\Theta)$. The set of continuous non-anticipative maps on $\Theta$ will be denoted by $\mathbb{C}(\Theta)$. We say that $F$ is locally uniformly continuous if, for all $K>0$, there exists a modulus of continuity $\delta_K(F,\cdot)$ (i.e.~a non-negative and non-decreasing function defined on $\mathbb{R}_+$ that is continuous at $0$ and vanishes at $0$) such that \begin{equation}\label{def_module_continuit} \lvert F(t,{\rm x})-F(t',{\rm x}')\rvert\leq\delta_K(F,d_\Theta((t,{\rm x}),(t',{\rm x}'))) \end{equation}
for all $(t,{\rm x}),(t',{\rm x}')\in\Theta$ with $\| {\rm x}\|\vee\| {\rm x}'\|\leq K$.\\ A functional $F:\Theta\rightarrow\mathbb{R}$ is said to be locally bounded if \begin{equation*} \sup_{t\in\left[ 0,T\right],~\lvert\!\lvert{{\rm x}}\rvert\!\rvert\leq{K}}\lvert F(t,{\rm x})\rvert <+\infty,~~\forall{K}\in\R_+~. \end{equation*}\\ We denote by $\C_{loc}^{u,b}(\Theta)$ the set of non-anticipative, locally uniformly continuous and locally bounded functionals.
We can now define the notion of differentiability for path-dependent functionals, following the one introduced by Dupire in \cite{Dupire2009FunctionalIC}: a non-anticipative function $F:\Theta\rightarrow\mathbb{R}$ is said to be vertically differentiable at $(t,{\rm x})\in\Theta $ if $y\in\R^d\mapsto F(t,{\rm x}\oplus_ty)$ is differentiable at $0$. In this case, we denote by $\nabla_{\rm x} F(t,{\rm x})$ this differential. We denote by $\mathbb{C}^{0,1}(\Theta)$ the collection of non-anticipative functions $F$ such that $\nabla_{\rm x} F$ is well-defined and continuous on $\Theta$.
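On a discretized path, Dupire's vertical derivative can be approximated by a finite difference in the bump direction. The following sketch uses invented names and the illustrative functional $F(t,{\rm x})=({\rm x}_t)^2$, for which one expects $\nabla_{\rm x} F(t,{\rm x})=2{\rm x}_t$:

```python
import numpy as np

def F(t_idx, x):
    """Illustrative non-anticipative functional: F(t, x) = (x_t)^2."""
    return x[t_idx] ** 2

def vertical_derivative(F, t_idx, x, h=1e-6):
    """Central difference of y -> F(t, x (+)_t y) at y = 0."""
    xp, xm = x.copy(), x.copy()
    xp[t_idx:] += h          # x (+)_t h : bump the path from t onward
    xm[t_idx:] -= h
    return (F(t_idx, xp) - F(t_idx, xm)) / (2 * h)

# A sample discretized path.
x = np.array([0.0, 1.0, -0.5, 2.0])
```

Note that the bump shifts the whole path from $t$ onward, but a non-anticipative $F$ evaluated at time $t$ only sees the bumped value at $t$ itself, which is why the finite difference recovers $2{\rm x}_t$ here.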
In this paper, for every path-dependent functional $F$ defined on $\Theta$ and $(t,{\rm x})\in\Theta$, we will use the notations $$ F_t({\rm x}):=F(t,{\rm x})\;\mbox{ and } \; F_{t}({\rm x}^{-}):=F_{t}({\rm x}^{-}_{t\wedge}). $$ \subsubsection{It\^o calculus via regularization and weak Dirichlet processes}
Let us recall here some definitions and facts on the It\^o calculus via regularization developed by Russo and Vallois \cite{russo1993forward, russo1995generalized, russo2007elements}.
See also Bandini and Russo \cite{bandini2017weak} for the case of c\`adl\`ag processes. In the rest of the paper, u.c.p.\ means uniform convergence in probability.
\begin{Definition} \label{def:integral}
$\mathrm{(i)}$ Let $X$ be a real valued {c\`adl\`ag} process, and $H$ be a process with paths in $L^1([0,T])$ a.s. The forward integral of $H$ w.r.t. $X$ is defined by
$$
\int_0^t H_s ~d^-X_s
~:=~
\lim_{\eps \searrow 0} \frac1\eps \int_0^t H_s \big( X_{(s+\eps) \wedge t} - X_s \big) ds,
~~t \ge 0,
$$
whenever the limit exists in the sense of u.c.p.\\
We naturally extend the definition of the forward integral to two $\R^d$-valued processes $X$ and $H$ such that $X^i$ is càdlàg and $H^i$ has paths in $L^1([0,T])$ for all $i=1,\ldots,d$ by
$$
\int_0^t H_s ~d^-X_s=\sum_{i=1}^d\int_0^t H^i_s ~d^-X^i_s ,
~~t \ge 0,
$$
whenever all those integrals exist.\\
\noindent $\mathrm{(ii)}$
Let $X$ and $Y$ be two real valued {c\`adl\`ag} processes. The quadratic {co}variation $[X,Y]$ is defined by
$$
[X,Y]_{t}
~:=~
\lim_{\eps\searrow 0} \frac1\eps \int_{0}^{t} (X_{(s+\eps)\wedge t}-X_{s})(Y_{(s+\eps)\wedge t}-Y_{s})ds,
~~t\ge 0,
$$
whenever the limit exists in the sense of u.c.p.\\
In the following, we will use the notation $\left[ X,Y\right] _{\eps,t}^{ucp}:=\frac1\eps \int_{0}^{t} (X_{(s+\eps)\wedge t}-X_{s})(Y_{(s+\eps)\wedge t}-Y_{s})ds$.\\
We naturally define the quadratic {co}variation matrix $([X,Y]^{i,j})_{1\leq i,j\leq d}$ for two $\R^d$-valued càdlàg processes $X$ and $Y$ by, for all $1\leq i,j\leq d$,
$$
[X,Y]^{i,j}_{t}=[X^i,Y^j]_{t},~~t\ge 0,
$$
whenever $[X^i,Y^j]$ is well defined for all $1\leq i,j\leq d$.
\noindent $\mathrm{(iii)}$ We say that a $\R^d$-valued {c\`adl\`ag} process $X$ has finite quadratic variation if its quadratic variation, defined by $[X]:=[X,X]$, exists and is finite a.s.
\end{Definition}
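The $\varepsilon$-regularization defining the forward integral can be checked numerically. The sketch below (invented names, not from the paper) approximates the regularized integral by a Riemann sum; for the smooth path $X_s=s$ and $H\equiv 1$ on $[0,1]$, the limit should be $X_1-X_0=1$, consistently with Remark \ref{consistence_bracket}:

```python
import numpy as np

def forward_integral(H, X, s_grid, t, eps):
    """Riemann-sum approximation of
    (1/eps) * int_0^t H_s (X_{(s+eps) ^ t} - X_s) ds on a uniform grid."""
    ds = s_grid[1] - s_grid[0]
    mask = s_grid <= t
    s = s_grid[mask]
    # X evaluated at (s + eps) ^ t, by linear interpolation on the grid.
    X_shift = np.interp(np.minimum(s + eps, t), s_grid, X)
    return np.sum(H[mask] * (X_shift - X[mask])) * ds / eps

s_grid = np.linspace(0.0, 1.0, 10001)
X = s_grid.copy()          # smooth path X_s = s
H = np.ones_like(s_grid)   # integrand H = 1
```

As $\varepsilon$ decreases, the value approaches $X_1-X_0=1$ up to an $O(\varepsilon)$ boundary effect near $t$, in line with the u.c.p.\ limit in the definition.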
\begin{Remark}\label{consistence_bracket}
When $X$ is a {(c\`adl\`ag)} semimartingale and $H$ is a c\`adl\`ag adapted process, $\int_0^t H_s ~d^-X_s$ coincides with the usual It\^o's integral $\int_0^t H_{s-} dX_s$.
When $X$ and $Y$ are two semimartingales, $[X, Y]$ coincides with the usual bracket.
\end{Remark}
\begin{Definition} \label{def:weakDirichlet}
$\mathrm{(i)}$ We say that an adapted process $A$ is orthogonal if $[A, N] = 0$ for any continuous local martingale $N$.
\noindent $\mathrm{(ii)}$ An adapted process $X$ is called a (resp.~special) weak Dirichlet process if it has a decomposition of the form
$X = X_0 + M +A$, where $M$ is a local martingale and $A$ is an (resp.~predictable) orthogonal process, such that $M_0 = A_0 = 0$.
\end{Definition}
\begin{Remark}\label{rem: weak dirichlet}
$\mathrm{(i)}$ An adapted process with finite variation is orthogonal.
Consequently, a semimartingale is in particular a weak Dirichlet process.
\noindent$\mathrm{(ii)}$ Any purely discontinuous local martingale is orthogonal by Remark \ref{consistence_bracket}.
\noindent $\mathrm{(iii)}$ An orthogonal process does not necessarily have finite variation. For example, any deterministic process (with possibly infinite variation) is orthogonal.
\noindent $\mathrm{(iv)}$
The decomposition $X= X_0 + M+A$ of a càdlàg weak Dirichlet process $X$ is not unique in general. Indeed, one can always move a purely discontinuous martingale part into the orthogonal part. However, this decomposition is unique if $X$ is special.
\end{Remark}
\section{The It\^{o}-Dupire's formula for $\mathbb{C}^{0,1}$-functionals}
In \cite{bouchard2021ac}, the authors require an assumption relating the regularity of the paths of $X$ and of the functional $F$, Assumption (A) below. When $X$ has continuous paths, it turns out to be equivalent to the decomposition \eqref{eq: Ito thm} below. In our setting, we shall apply it to an approximation of $X$ obtained by removing its small jumps, see Remark \ref{rem: applique A a Xn} below.
\begin{Assumption}[A] Let $F:\Theta\to \R$ be a non-anticipative functional and $Y$ be a càdlàg process. We say that the couple $(F,Y)$ satisfies Assumption $(A)$ if \begin{equation}\label{iff_Ito} \dfrac{1}{\epsilon}\int_0^\cdot(F_{s+\epsilon}(Y)-F_{s+\epsilon}(Y_{s\wedge}\boxplus_{s+\epsilon}Y_{s+\epsilon}))(N_{s+\epsilon}-N_s)ds\underset{\epsilon\rightarrow{0}}{\longrightarrow}0~~u.c.p. \end{equation} for every continuous martingale $N$. \end{Assumption} \begin{Remark}\label{suff_Ito} First note that the left-hand side of \eqref{iff_Ito} is always $0$ when $F$ is Markov, i.e.~$F(t,{\rm x})=F(t,{\rm x}')$ whenever ${\rm x}_{t}={\rm x}'_{t}$. Second, the above also holds if we assume that, for all ${\rm x}\in D([0,T])$, $s\in [0,T]$ and $\eps \in [0,T-s]$,
$$
\big|F_{s+\eps}({\rm x})- F_{s+\eps}({\rm x}_{s \wedge }\boxplus_{s +\eps} {\rm x}_{s+\eps}) \big|
~\le~
\int_{(s, s+\eps)} \phi \big({\rm x}, |{\rm x}_{u-} - {\rm x}_{s }|\big)db_u ({\rm x}),
$$
where $\phi: D([0,T]) \x \R_+ \longrightarrow \R$ satisfies $ \sup_{|y|\le K} \phi({\rm x},y ) <\infty$, $\lim_{y \searrow 0} \phi({\rm x},y ) = \phi({\rm x}, 0) =0$ for all ${\rm x} \in D([0,T])$ and $K > 0$,
and $b$ maps $D([0,T])$ into the space $ {\rm BV}_{+}$ of non-decreasing bounded variation processes.
This follows from the same arguments as in \cite[Proposition 2.11]{bouchard2021ac}. In particular, \eqref{iff_Ito} is satisfied if $F$ is Fr\'echet differentiable in the sense of Clark \cite{clark1970representation}, see \cite[Example 2.12]{bouchard2021ac}.
\end{Remark} We are now ready to state our decomposition result. From now on, we fix a càdlàg special weak Dirichlet process \begin{equation}\label{eq: def X} X=X_0+M+A. \end{equation} Here, $M=M^{c}+M^{d}$ where $M^{c}$ and $M^{d}$ denote its continuous and purely discontinuous martingale parts, and $A$ is a predictable orthogonal process (recall that the decomposition is unique in this case, see Remark \ref{rem: weak dirichlet}). We denote by $\mu^X$ the jump measure of $X$, and by $\nu^X$ its compensator. If $\sum_{{0\leq s \leq T}}\lvert \Delta A_s \rvert<+\infty$ a.s., then one can define the continuous part of $X$ by $$X^c=X-M^d-{\sum_{s\le \cdot}} \Delta A_s.$$
\begin{Theorem}\label{TIto} Let $X$ be as in \eqref{eq: def X} and assume that \begin{align}\label{eq: hyp jumps} [X]_{T} +\sum_{0\leq s \leq T}\lvert \Delta A_s \rvert<+\infty \;\mbox{ a.s.} \end{align} Let $F\in\mathbb{C}^{0,1}(\Theta)$ be such that $F$ and $\nabla_{{\rm x}}F$ are both in $\mathbb{C}_{loc}^{u,b}(\Theta)$ and such that $t\in [0,T]\mapsto\nabla_{\rm x} F_{t}(X^-)$ admits right limits a.s. Assume further that $(F,Z\oplus_\tau(X^c-X_\tau^c))$ satisfies Assumption $(A)$ for every càdlàg process $Z$ and stopping time $\tau$ such that $\tau\le T$ a.s.\\ Then, $(F_{t}(X))_{t\in{\left[ 0,T \right] }}$ is a special weak Dirichlet process with decomposition \begin{align}\label{eq: Ito thm} F_{t}(X)=F_{0}(X)+&\int_0^t\nabla_{\rm x} F_{s}(X^-) dM_s\notag\\ &+\int_{\left] 0,t\right] \times\mathbb{R}^d}(F_s(X^-\oplus_s x)-F_s(X^-)-x\cdot\nabla_{\rm x} F_{s}(X^-)) {(\mu^X-\nu^{X})}(ds,dx)\notag\\ &+\Gamma_t^F,~~\forall t\in\left[ 0,T\right], \end{align} where $\Gamma^F$ is an orthogonal and predictable process. \end{Theorem}
Before providing the proof of this result, let us make several comments.
\begin{Remark} All the terms in \eqref{eq: Ito thm} are well defined. In particular, \begin{align*} &\int_{\left] 0,\cdot\right] \times\mathbb{R}^d}(F_s(X^-\oplus_s x)-F_s(X^-))\mathbb{1}_{\left\lbrace \lvert{x}\rvert\leq{1}\right\rbrace }(\mu^X-\nu^X)(ds,dx)\quad\text{and}\\ &\int_{\left] 0,\cdot\right] \times\mathbb{R}^d}x\cdot\nabla_{\rm x} F_{s}(X^-)\mathbb{1}_{\left\lbrace \lvert{x}\rvert\leq{1}\right\rbrace }(\mu^X-\nu^X)(ds,dx) \end{align*} are purely discontinuous local martingales. See Lemma \ref{purely _discontinuous} below. \end{Remark}
\begin{Remark} If $X$ is {a semimartingale}, then \eqref{eq: hyp jumps} holds. \end{Remark}
\begin{Remark}\label{rem : def Zn} Let $(\epsilon_n)_{n\in\mathbb{N}}\subset (0,1)$ be a decreasing sequence of positive real numbers converging to $0$. Using \eqref{eq: hyp jumps}, we can define $Z^n:=Y^n+\sum_{s\leq\cdot}\Delta A_s\mathbb{1}_{\lvert\Delta A_s\rvert< \eps_n}$ where $Y^n:=x\mathbb{1}_{\left\lbrace \lvert x\rvert< \eps_n\right\rbrace }\ast(\mu^X-\nu^X)$ is a purely discontinuous local martingale (see \cite[Theorem ~II.2.34]{jacod2013limit}). Then, $Z^n$ is an orthogonal special semimartingale with jumps not larger than $\epsilon_n$, namely $\lvert\Delta Z^n_t\rvert<\epsilon_n$ for all $t\in \left[ 0,T\right]$ a.s., such that $X^n:=X-Z^n$ only has jumps of size at least $\epsilon_n$. Moreover, $\lvert\!\lvert Z^n\rvert\!\rvert+\left[ Z^n\right]_T {\to}0$ a.s.~as $n\to \infty$. \end{Remark}
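Pathwise, the splitting of Remark \ref{rem : def Zn} can be illustrated on a discretized pure-jump trajectory. The sketch below uses invented names and ignores the compensator $\nu^X$, so it only captures the pathwise picture $X = X^n + Z^n$:

```python
import numpy as np

def split_small_jumps(X, eps_n):
    """Split a discretized pure-jump path into X^n (jumps of size >= eps_n
    kept) and Z^n (jumps of size < eps_n collected); compensator terms
    are ignored in this pathwise sketch."""
    dX = np.diff(X, prepend=X[:1])                         # increments
    Zn = np.cumsum(np.where(np.abs(dX) < eps_n, dX, 0.0))  # small jumps
    return X - Zn, Zn

# Path with two large jumps (size 1.0) and two small ones (0.05 and -0.03).
X = np.array([0.0, 0.05, 1.05, 1.02, 2.02])
Xn, Zn = split_small_jumps(X, eps_n=0.5)
```

By construction $X^n+Z^n=X$, the increments of $X^n$ are either zero or of size at least $\epsilon_n$, and $Z^n$ carries only the small jumps, mirroring the roles of $X^n$ and $Z^n$ in the proof.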
\begin{Remark}\label{rem: applique A a Xn} For simplicity of exposition of our main result, we assumed that $(F,Z\oplus_\tau(X^c-X_\tau^c))$ satisfies Assumption $(A)$ for every càdlàg process $Z$ and stopping time $\tau$ such that $\tau\le T$ a.s. In the proof, we shall actually only use the fact that $(F,X^{n}\oplus_\tau(X^c-X_\tau^c))$ satisfies Assumption $(A)$ for every stopping time $\tau$ corresponding to a jump time of $X^{n}$, for all $n\ge 1$. \end{Remark}
\begin{proof}[Proof of Theorem \ref{TIto}] {\rm 1.} The fact that the decomposition \eqref{eq: Ito thm} holds with $\Gamma^{F}$ orthogonal, but not necessarily predictable, follows from the same arguments as in \cite{bandini2017weak,bouchard2021ac}, see Proposition \ref{ItoC01standard} in the Appendix.
We therefore just have to show that $\Gamma^F$ is predictable.
\noindent {\rm 2.} Let $(\epsilon_n,X^{n},Y^{n},Z^{n})_{n\in\mathbb{N}}$ be as in Remark \ref{rem : def Zn}. Fix $n\in\mathbb{N}$ and let $(\tau_k^n)_{k\in\mathbb{N}}$ be the sequence of stopping times corresponding to the jumps of $X$ larger or equal to $\epsilon_n$, namely $\tau_0^n=0$ and $\tau_{k+1}^n=\inf\{ s> \tau_k^n$ s.t.~$\lvert\Delta X_s \rvert\geq \epsilon_n\}$. These are the jump times of $X^{n}$. Then, $K^n:=\min\{ k\in \N$ s.t.~$\tau_k^n\wedge T=T \} $ is finite a.s.~and, for $t\in\left[ 0,T\right] $, \begin{align*} F_t(X^n)-F_0(X^n)=&\sum_{k=0}^{K^n-1}F_{\tau_{k+1}^n\wedge t}(X^n)-F_{\tau_{k}^n\wedge t}(X^n)\\ =&\sum_{k=0}^{K^n-1}\big[F_{\tau_{k+1}^n\wedge t}(X^n)-F_{\tau_{k+1}^n\wedge t}(X^{n-})-\Delta X_{\tau_{k+1}^n\wedge t}^n\nabla_{\rm x} F_{\tau_{k+1}^n\wedge t}(X^{n-})\\ &~~~~~~~~+F_{\tau_{k+1}^n\wedge t}(X^{n-})-F_{\tau_{k}^n\wedge t}(X^n)+\Delta X_{\tau_{k+1}^n\wedge t}^n\nabla_{\rm x} F_{\tau_{k+1}^n\wedge t}(X^{n-})\big]\\ =& R^{1,n}_t+R^{2,n}_t+R^{3,n}_t \end{align*} where \begin{align*} R^{1,n}_t=&\int_{\left] 0,t\right] \times\mathbb{R}^d}(F_s(X^{n-}\oplus_sx)-F_s(X^{n-})-x\nabla_{\rm x} F_{s}(X^{n-}))\mathbb{1}_{\left\lbrace \lvert{x}\rvert>{1}\right\rbrace }\mu^X(ds,dx)\\ &+\int_{\left] 0,t\right] \times\mathbb{R}^d}(F_s(X^{n-}\oplus_sx)-F_s(X^{n-}))\mathbb{1}_{\left\lbrace \epsilon_n\leq\lvert{x}\rvert\leq{1}\right\rbrace }(\mu^X-\nu^X)(ds,dx)\\ &-\int_{\left] 0,t\right] \times\mathbb{R}^d}x\nabla_{\rm x} F_{s}(X^{n-})\mathbb{1}_{\left\lbrace \epsilon_n\leq\lvert{x}\rvert\leq{1}\right\rbrace }(\mu^X-\nu^X)(ds,dx)\\ R^{2,n}_t=&\sum_{k=0}^{K^n-1}\left[ F_{\tau_{k+1}^n\wedge t}(X^{n-})-F_{\tau_{k}^n\wedge t}(X^n)+\Delta M^{n}_{\tau_{k+1}^n\wedge t}\nabla_{\rm x} F_{\tau_{k+1}^n\wedge t}(X^{n-})\right] \\ R^{3,n}_t=&\int_{\left] 0,t\right] \times\mathbb{R}^d}(F_s(X^{n-}\oplus_sx)-F_s(X^{n-})-x\nabla_{\rm x} F_{s}(X^{n-}))\mathbb{1}_{\left\lbrace \epsilon_n\leq\lvert{x}\rvert\leq{1}\right\rbrace }\nu^X(ds,dx)\\ &+\sum_{k=0}^{K^n-1}\Delta 
A^{n}_{\tau_{k+1}^n\wedge t}\nabla_{\rm x} F_{\tau_{k+1}^n}(X^{n-}), \end{align*} in which $M^{n}$ and $A^{n}$ denote respectively the martingale and the bounded variation part of $X^{n}$. By hypothesis, for all $k=0,\ldots,K^n-1$, the couple $(F,X^n\oplus_{\tau_k^n}(X^c-X^c_{\tau_k^n}))$ satisfies Assumption $(A)$. Moreover, by definition of $X^n$ and $(\tau_k^n)_{k\in\N}$, $X^n\boxplus_{\tau_{k+1}^n}X^n_{\tau_{k+1}^n-}$ is continuous on $\left[ \tau_k^n,\tau_{k+1}^n\right]$ and coincides with $X^n\oplus_{\tau_k^n}(X^c-X^c_{\tau_k^n})$ on $\left[ 0,\tau_{k+1}^n\right]$. By Proposition \ref{ItoC01standard}, we can then find an adapted orthogonal process $\Gamma^{F,n,k}$ such that \begin{equation} F_{t}(X^n\boxplus_{\tau_{k+1}^n}X^n_{\tau_{k+1}^n-})-F_{\tau_{k}^n}(X^n)=\int_{\tau_k^n}^t\nabla_{\rm x} F_s(X^{n-})dM^{c}_s+\Gamma^{F,n,k}_t-\Gamma^{F,n,k}_{\tau_{k}^n}~~~\forall t\in\left[ \tau_k^n,\tau_{k+1}^n\right] \end{equation} in which we used that $X^n$ and $X$ have the same continuous martingale part. By continuity of $F$ and the path of $X^n\boxplus_{\tau_{k+1}^n}X^n_{\tau_{k+1}^n-}$ on $\left[ \tau_{k}^n,\tau_{k+1}^n\right]$, we see from the above that $\Gamma^{F,n,k}$ is continuous on $\left[\tau_{k}^n,\tau_{k+1}^n\right]$. Then{,} \begin{align*} R^{2,n}_t&=\sum_{k=0}^{K^n-1}\int_{\tau_k^n\wedge t}^{\tau^n_{k+1}\wedge t}\nabla_{\rm x} F_s(X^{n-})dM^{c}_s+\Gamma^{F,n,k}_{\tau_{k+1}^n\wedge t}-\Gamma^{F,n,k}_{\tau_{k}^n\wedge t}+\Delta M^n_{\tau_{k+1}^n\wedge t}\nabla_{\rm x} F_{\tau_{k+1}^n\wedge t}(X^{n-})\\ &=\int_0^t \nabla_{\rm x} F_s(X^{n-})dM^n_s+\sum_{k=0}^{K^n-1}\Gamma^{F,n,k}_{\tau_{k+1}^n\wedge t}-\Gamma^{F,n,k}_{\tau_{k}^n\wedge t}. \end{align*} Let us define \begin{equation*} \Gamma^{F,n}_t=R^{3,n}_t+\sum_{k=0}^{K^n-1}\Gamma^{F,n,k}_{\tau_{k+1}^n\wedge t}-\Gamma^{F,n,k}_{\tau_{k}^n\wedge t}, \;t\le T. \end{equation*} It follows from the above that $\Gamma^{F,n}$ is predictable as a sum of predictable processes.
{\rm 3.} Let us now show that \begin{equation}\label{conv_int_sto} \int_0^\cdot \nabla_{\rm x} F_s(X^{n-})dM^n_s\to\int_0^\cdot \nabla_{\rm x} F_s(X^-)dM_s ~~\mbox{u.c.p.} \end{equation} on $[0,T]$. We have \begin{equation*} \int_0^t \nabla_{\rm x} F_s(X^{n-})dM^n_s=\int_0^t \nabla_{\rm x} F_s(X^{n-})dM_s-\int_0^t \nabla_{\rm x} F_s(X^{n-})dY^n_s \end{equation*} Since $Y^n$ is a purely discontinuous martingale such that $\lvert\!\lvert Y^n\rvert\!\rvert\to0$ a.s.~and $\nabla_{\rm x} F$ is locally bounded, we can assume, up to using a localizing sequence, that $(\nabla_{\rm x} F(X^{n-}), Y^n,\left[ Y^n\right] )_{n}$ is uniformly bounded by a constant $C$. Then, since $\lvert\!\lvert X^n-X \rvert\!\rvert\to 0$ a.s.~and $\nabla_{\rm x} F$ is continuous, we deduce from \cite[Theorem~I.4.31]{jacod2013limit} that \begin{equation*} \int_0^\cdot \nabla_{\rm x} F_s(X^{n-})dM_s\to\int_0^\cdot \nabla_{\rm x} F_s(X^-)dM_s\;\;\;\mbox{u.c.p.} \end{equation*} on $[0,T]$. Moreover, \begin{align*} \mathbb{E}\left[ \underset{t\in\left[ 0,T\right]}{\sup} \lvert\int_0^{t} \nabla_{\rm x} F_s(X^{n-})dY^n_s\rvert^2\right] &\leq 4\mathbb{E}\left[ \int_0^{T} (\nabla_{\rm x} F_s(X^{n-}))^2 d\left[ Y^n\right] _s\right]\\ &\leq 4C^{2}\mathbb{E}\left[ \left[ Y^n\right] _{T}\right] \end{align*} in which the last term tends to $0$ as $n$ goes to $+\infty$, by dominated convergence. This proves \eqref{conv_int_sto}.
Similarly, by applying Lemma \ref{Lconv} below, we deduce that $R^{1,n}$ converges u.c.p.~on $[0,T]$ to \begin{align*} t\in [0,T]\mapsto& \int_{\left] 0,t\right] \times\mathbb{R}^d}(F_s(X{^{-}}\oplus_s x)-F_s(X^-)-x\nabla_{\rm x} F_{s}(X^-))\mathbb{1}_{\left\lbrace \lvert{x}\rvert>{1}\right\rbrace }\mu^X(ds,dx)\\ &+\int_{\left] 0,t\right] \times\mathbb{R}^d}(F_s(X{^{-}}\oplus_s x)-F_s(X^-))\mathbb{1}_{\left\lbrace \lvert{x}\rvert\leq{1}\right\rbrace }(\mu^X-\nu^X)(ds,dx)\\ &-\int_{\left] 0,t\right] \times\mathbb{R}^d}x\nabla_{\rm x} F_{s}(X^-)\mathbb{1}_{\left\lbrace \lvert{x}\rvert\leq{1}\right\rbrace }(\mu^X-\nu^X)(ds,dx). \end{align*}
Finally, since $\lvert\!\lvert X^n-X \rvert\!\rvert\to 0$ a.s., we have $\lvert\!\lvert F_{\cdot}(X^n)-F_{\cdot}(X)\rvert\!\rvert\to 0$ a.s.~by local uniform continuity of $F$.
\noindent{\rm 4.} Combining steps 1.~to 3.~above, we obtain that the sequence of predictable processes $(\Gamma^{F,n})_{n\in \N}$
converges to $ \Gamma^F$ u.c.p., which implies that $ \Gamma^F$ is predictable, and concludes the proof. \end{proof}
We conclude this section with the proof of the technical lemma that was used in the proof of Theorem \ref{TIto}. We borrow the standard notations $\mathcal{A}_{loc}^+$ and $\mathcal{G}_{loc}^2(\mu^X)$ from \cite[Section I.3.a., Section II.1.d.]{jacod2013limit}.
\begin{Lemma}\label{Lconv} Let $(\epsilon_n,X^{n},Y^{n},Z^{n})_{n\in\mathbb{N}}$ be as in the proof of Theorem \ref{TIto}.\\ Define $H^n_s(x)=(F_s(X^{n-}\oplus_sx)-F_s(X^{n-})-x\nabla_{\rm x} F_{s}(X^{n-}))\mathbb{1}_{\left\lbrace \epsilon_n\leq\lvert{x}\rvert\leq 1 \right\rbrace }$ for $(s,x)\in [0,T]\x \R^{d}$. Then, $H^n_s(x)\ast(\mu^X-\nu^X)$ is a sequence of purely discontinuous local martingales that converges to\\ $t\mapsto\int_{\left] 0,t\right] \times\mathbb{R}^d}(F_s(X{^{-}}\oplus_s x)-F_s(X^-)-x\nabla_{\rm x} F_{s}(X^-))\mathbb{1}_{\left\lbrace \lvert{x}\rvert\leq{1}\right\rbrace }(\mu^X-\nu^X)(ds,dx)$ u.c.p. \end{Lemma} \begin{proof} Let us define \begin{align*} &V^{1,n}_s(x)=(F_s(X^{-}\oplus_sx)-F_s(X^{n-}\oplus_sx)+F_s(X^{n-})-F_s(X^{-})+x\nabla_{\rm x} F_{s}(X^{n-})-x\nabla_{\rm x} F_{s}(X^{-}))\mathbb{1}_{\left\lbrace \epsilon_n\leq\lvert{x}\rvert\leq 1 \right\rbrace }\\ &V^{2,n}_s(x)=(F_s(X^-\oplus_sx)-F_s(X^-)-x\nabla_{\rm x} F_{s}(X^-))\mathbb{1}_{\left\lbrace \lvert{x}\rvert<\epsilon_n\right\rbrace } \end{align*} for $(s,x)\in [0,T]\x \R^{d}$. By linearity, it suffices to show that ${I^{i,n}}:={V^{i,n}}\ast(\mu^X-\nu^X)$ converge to $0$ u.c.p., for $i=1,2$.\\
We recall that any càglàd process is locally bounded. Furthermore, since $X$ is càdlàg and has finite quadratic variation, we have $\sum_{s\in\left[ 0,T\right]}\lvert\Delta X_s\rvert^2<+\infty$ a.s.~by \cite[Lemma~2.10]{bandini2017weak}. We also recall that $(Z^n)_{n\in\N}$ is uniformly locally bounded{, see Remark \ref{rem : def Zn}}. Consider the c\`adl\`ag process $E:=(X_{t-},\sum_{s<t} \lvert\Delta X_s\rvert ^2)_{t\geq 0}$ and let $(S_m)_{m\in\N}$ be a localization sequence such that for all $m\in\N$ the processes $((Z^n,E)_{\cdot\wedge S^m}\mathbb{1}_{S^m>0})_{n\in\N}$ are uniformly bounded in $n$. It suffices to show that ${(V^{i,n}\ast(\mu^X-\nu^X))_{\cdot\wedge S}}$ converge to $0$ u.c.p. for a fixed $S=S^m$, $i=1,2$. Let $C$ be such that $\|E_{\cdot \wedge S}\|\vee\|Z^n_{\cdot \wedge S}\|\leq{C}$ for all $n$ a.s. Then, \begin{align} \mathbb{E}\left[ ({\lvert V^{2,n}\rvert ^2\ast \nu^X})_{T\wedge S}\right]&=\mathbb{E}\left[\int_{\left] 0,T\wedge S\right] \times\mathbb{R}^d}\lvert (F_s(X^-\oplus_sx)-F_s(X^-)-x\nabla_{\rm x} F_{s}(X^-))\mathbb{1}_{\left\lbrace \lvert{x}\rvert<\epsilon_n\right\rbrace }\rvert ^2\mu^X(ds,dx)\right]\nonumber\\ &=\mathbb{E}\left[\sum_{\substack{s\leq T\wedge S\\0<\lvert \Delta{X_s}\rvert<{\eps_n}}}\lvert (F_s(X)-F_s(X^-)-\Delta X_s\nabla_{\rm x} F_{s}(X^-))\rvert ^2\right]\nonumber\\ &=\mathbb{E}\left[\sum_{\substack{s\leq T\wedge S\\0<\lvert \Delta{X_s}\rvert<{\eps_n}}} \lvert\Delta{X_s}\rvert^2\;\lvert\int_0^1\{\nabla_{\rm x}{F}_s(X^-\oplus_{s}\lambda\Delta{X_s})-\nabla_{\rm x} F(X^-)\}d\lambda\rvert^2\right]\nonumber\\ &\leq\delta^2_{C +1}(\nabla_{\rm x} F,\epsilon_n)\mathbb{E}\left[\sum_{\substack{s\in\left]0,T\wedge S\right[\\0<\lvert \Delta{X_{s}}\rvert<{\eps_n}}} \lvert\Delta{X_{s}}\rvert^2\mathbb{1}_{S>0}+\lvert\Delta{X_{T\wedge S}}\rvert^2\mathbb{1}_{S>0}\mathbb{1}_{\lvert\Delta{X}_{T\wedge S}\rvert\leq{1}}\right]\nonumber\\ &\leq\delta^2_{C +1}(\nabla_{\rm x} F,\epsilon_n) (C+1)\label{eq: inega borne avec C 1} 
\end{align} where $\delta_\cdot(\nabla_{\rm x} F,\cdot)$ denotes the modulus of continuity of $\nabla_{\rm x} F$ defined in \eqref{def_module_continuit}. In the same way, \begin{align} \label{eq: inega borne avec C 2} \mathbb{E}&\left[ ({\lvert V^{1,n}\rvert ^2\ast \nu^X})_{T\wedge S}\right]\nonumber\\&=\mathbb{E}\left[\sum_{\substack{{s\leq T\wedge S}\\{0<\lvert \Delta{X_s}\rvert\le 1} }} \lvert\Delta{X_s}\rvert^2\;\lvert\int_0^1\{\nabla_{\rm x}{F}_s(X^-\oplus_{s}\lambda\Delta{X_s})-\nabla_{\rm x} F(X^{n-}\oplus_{s}\lambda\Delta{X_s})\}d\lambda+\nabla_{\rm x} F(X^{n-})-\nabla_{\rm x} F(X^{-})\rvert^2\right]\nonumber\\
&\leq 4(C+1)\mathbb{E}[\delta^2_{C +1}(\nabla_{\rm x} F,\|Z^n\|_{[0,S\wedge T]})] \end{align}
where $\|{\rm x}\|_{[0,t]}=\sup_{s\in[0,t]}\lvert{\rm x}_s\rvert$ for all $(t,{\rm x})\in\Theta$.\\ Thus, for $i=1,2$, $V^{i,n}$ belongs to $\mathcal{G}^2_{loc}(\mu^X)$ by \cite[Lemma~2.4]{bandini2018special}, and $I^{i,n}_{\cdot\wedge S}$ is a purely discontinuous square integrable martingale by \cite[Theorem~11.21]{he2019semimartingale}. Also, by \cite[3) of Theorem 11.21]{he2019semimartingale}, we have \begin{equation}\label{eq: borne [In]}
\left[ I^{i,n}\right]_{t\wedge S} =\ \int_{\left] 0,t\wedge S\right] \times\mathbb{R}^d}\lvert{V^{i,n}_s(x)}\rvert^{2} \nu^{X}(ds,dx)-\sum_{0<s\leq t\wedge S}\lvert\hat V^{i,n}_s\rvert^2 \leq \int_{\left] 0,t\wedge S\right] \times\mathbb{R}^d}\lvert{V^{i,n}_s(x)}\rvert^{2} \nu^{X}(ds,dx) \end{equation} where $\hat V^{i,n}_s=\int_{\mathbb{R}^d}V^{i,n}_s(x)\nu(\left\lbrace s\right\rbrace ,dx)$, for $i=1,2$. \\ Hence, we can apply Doob's maximal inequality to the square integrable martingale $I^{i,n}_{\cdot\wedge S}${, $i=1,2$,} and then use \eqref{eq: inega borne avec C 1}, \eqref{eq: inega borne avec C 2} and \eqref{eq: borne [In]} to obtain that, for any $\alpha>0$, \begin{align*} &\mathbb{P}(\sup_{t\in\left[ 0,T\right]}\lvert{I^{2,n}_{t\wedge S}}\rvert\geq \alpha)\leq\dfrac{1}{\alpha^2}\mathbb{E}\left[ \lvert I^{2,n}_{T\wedge S}\rvert ^2\right] \leq \dfrac{1}{\alpha^2}\mathbb{E}\left[\int_{\left] 0,T\wedge S\right] \times\mathbb{R}^d}\lvert{V^{2,n}_s(x)}\rvert^{2} \nu^{X}(ds,dx)\right] \leq\dfrac{1}{\alpha^2} \delta^2_{C+1}(\nabla_{\rm x} F,\epsilon_n) (C+1){,}\\
&\mathbb{P}(\sup_{t\in\left[ 0,T\right]}\lvert{I^{1,n}_{t\wedge S}}\rvert\geq \alpha) \leq \dfrac{1}{\alpha^2}\mathbb{E}\left[\int_{\left] 0,T\wedge S\right] \times\mathbb{R}^d}\lvert{V^{1,n}_s(x)}\rvert^{2} \nu^{X}(ds,dx)\right] \leq\dfrac{4}{\alpha^2}\mathbb{E}[\delta^2_{C +1}(\nabla_{\rm x} F,\|Z^n\|_{[0,S\wedge T]})](C+1). \end{align*} The right-hand side terms tend to $0$ as $n\to \infty$ (by {Remark \ref{rem : def Zn} and} dominated convergence for the second one), which concludes the proof. \end{proof}
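The last two bounds rest on Doob's maximal inequality for square integrable martingales, $\mathbb{E}[\sup_{t\le T}\lvert M_t\rvert^2]\le 4\,\mathbb{E}[\lvert M_T\rvert^2]$. As a purely illustrative aside, this inequality can be checked numerically on a toy martingale; the symmetric random walk and all parameters below are our own choices, unrelated to the processes $I^{i,n}$ above.

```python
import numpy as np

# Hedged numerical illustration of Doob's L^2 maximal inequality
# E[ sup_k |S_k|^2 ] <= 4 E[ |S_n|^2 ] on a symmetric random walk
# (our own toy martingale; not the martingales of the lemma).
rng = np.random.default_rng(1)
n_steps, n_paths = 200, 20_000
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
S = np.cumsum(steps, axis=1)            # martingale paths S_1, ..., S_n
lhs = np.mean(np.max(S**2, axis=1))     # Monte Carlo estimate of E[ sup_k S_k^2 ]
rhs = 4.0 * np.mean(S[:, -1] ** 2)      # 4 E[ S_n^2 ]  (= 4 n in closed form)
print(lhs, rhs)
```

On this example the simulated left-hand side is well below the Doob bound, as expected.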
\section{A toy example of application}
To illustrate our main result, we now provide a toy example of application, which we keep as simple as possible. Semilinear and fully nonlinear problems have been studied in \cite{bouchard2021ac,bouchard2021approximate} in the context of continuous-path processes and can also be extended to our setting.
We fix $d=1$. Let $W$ be a standard Brownian motion and let $N$ be a compound Poisson process with compensator $\lambda_{t}\nu_{t}(dy)dt$, for some predictable intensity process $\lambda$ and some predictable $(t,\omega)\in [0,T]\x \Omega \mapsto \nu_{t}(\omega,\cdot)$ taking values in the set of probability measures on $\R$. Given $(t,{\rm x})\in \Theta$, we define $X^{t,{\rm x}}$ by \begin{equation} X^{t,{\rm x}}_s={\rm x}_{t\wedge s}+A_{t\vee s}-A_{t}+\int_{t}^{t\vee s}\sigma_udW_u+\int_{\left] t,t\vee s\right] \times\R}\gamma_u(y)N(du,dy),\;\;s\le T, \end{equation}
where $\sigma$ is predictable and bounded, $\gamma$ is ${\cal P}\otimes {\cal B}(\R)$-measurable\footnote{We use the standard notations ${\cal P}$ (resp.~${\cal B}(\R)$) for the predictable sigma-field (resp.~the Borel sigma-field).} and bounded, and $A$ is a bounded predictable c\`adl\`ag process with bounded variation.
We then consider a bounded $C^{1+\alpha}(\R)$-map $g:\R\to \R$, for some $\alpha\in (0,1]$, with bounded derivative, and a right-continuous measure $\mu$ with bounded total variation on $\left[ 0,T\right]$ and at most a finite number of atoms $\{0\le t_{1}\le \cdots \le t_{n}\le T\}$, all of which are deterministic. We define \begin{align*} {\rm v}:~&(t,{\rm x})\in \Theta\mapsto\E[g(\int_0^TX^{t,{\rm x}}_s\mu(ds))]. \end{align*}
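As an aside, ${\rm v}$ is amenable to direct Monte Carlo evaluation. The sketch below is purely illustrative: the specialization $A\equiv 0$, $\sigma\equiv 1$, $\gamma\equiv 0$ (no jumps), $\mu(ds)=ds$ and $g=\cos$, as well as all numerical parameters, are our own choices, not assumptions of the paper. In this case $\int_0^T W_s\,ds\sim N(0,T^3/3)$, so that ${\rm v}(0,0)=e^{-T^3/6}$ in closed form, which the simulation can be compared against.

```python
import numpy as np

# Hedged Monte Carlo sketch for v(0, x) = E[ g( \int_0^T X_s mu(ds) ) ],
# specialized (our choices) to X = W a Brownian motion (A = 0, sigma = 1,
# no jumps), mu = Lebesgue on [0, T], g = cos.
rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 100, 20_000
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)                 # W at the grid points t_1, ..., t_n
integral = W.sum(axis=1) * dt             # Riemann sum for \int_0^T W_s ds
v_mc = np.cos(integral).mean()            # Monte Carlo value of v(0, 0)
v_exact = np.exp(-T**3 / 6)               # closed form: int_0^T W_s ds ~ N(0, T^3/3)
print(v_mc, v_exact)
```

The Monte Carlo value agrees with the closed form up to the statistical and time-discretization error.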
The following is nothing but a version of the celebrated Clark formula, see \cite{clark1970representation}, which we retrieve here as a consequence of Theorem \ref{TIto}.
\begin{Proposition} Let the above conditions hold and set $X:=X^{0,{\rm x}}$ for some ${\rm x} \in D([0,T])$. Then, ${\rm v}$ admits a vertical derivative
$$ \nabla_{{\rm x}}{\rm v}: (t',{\rm x}')\in \Theta\mapsto \E[\nabla g(\int_0^TX^{t',{\rm x}'}_s\mu(ds))\mu([t',T])],
$$
and there exists an orthogonal and predictable process $\Gamma$ such that $\Gamma_{0}=0$ and
\begin{align*}
g(\int_0^TX_s\mu(ds))=&{\rm v}(0,{\rm x})+\int_{0}^{T} \nabla_{{\rm x}}{\rm v}(s, X)\sigma_{s}dW_{s}\\
&+\sum_{s\le T} ({\rm v}(s,X)-{\rm v}(s,X^{-}))-\int_{0}^{T}\int_{\R} ({\rm v}(s,X^{-}\oplus_{s} y)-{\rm v}(s,X^{-}))\nu_{s}(dy) \lambda_{s} ds+\Gamma_{T}.
\end{align*}
If moreover ${\rm v}(\cdot,X)$ is a martingale, then $\Gamma\equiv 0$. This is in particular the case if $A$, $\sigma$, $\gamma$, $\lambda$ and $t\in [0,T]\mapsto \nu_{t}$ are deterministic.
\end{Proposition}
\begin{proof} 1. We first assume that $\mu$ does not have atoms; the general case is treated in step 3. First note that, for $(t,{\rm x})\in \Theta$ and $y\in \R$, $$
\left|{\rm v}(t,{\rm x}\oplus_{t}y)-{\rm v}(t,{\rm x})-\E[\nabla g(\int_0^TX^{t,{\rm x}}_s\mu(ds))\mu([t,T])]y\right|
\le C\E[\{|\mu|([t,T]) |y|\}^{1+\alpha}] $$
for some $C>0$. Since $|\mu|$ is bounded, this implies that $$ \nabla_{{\rm x}}{\rm v}(t,{\rm x})=\E[\nabla g(\int_0^TX^{t,{\rm x}}_s\mu(ds))\mu([t,T])]. $$
Clearly, ${\rm v}$ and $\nabla_{{\rm x}}{\rm v}$ are locally uniformly bounded since $g$, $\nabla g$ and $|\mu|$ are bounded.
2. Note that $({\rm v},Z)$ satisfies assumption $(A)$ for every càdlàg process $Z$ by Remark \ref{suff_Ito}. We now prove that ${\rm v}$ and $\nabla_{{\rm x}}{\rm v}$ are locally uniformly continuous. Fix $(t,{\rm x}),(t',{\rm x}')\in\Theta$ with $t'\ge t$. Then, by standard estimates based on our boundedness assumptions, \begin{align*} \E[\int_0^T\lvert X^{t,{\rm x}}_s-X^{t',{\rm x}'}_s\rvert\mu(ds)]\leq& C(\lvert\!\lvert{\rm x}_{t\wedge}-{\rm x} '_{t'\wedge}\rvert\!\rvert+\sqrt{t'-t}) \end{align*} for some $C>0$ that does not depend on $(t,{\rm x})$ and $(t',{\rm x}')$. Given the above and the fact that $g$ is Lipschitz and $C^{1+\alpha}(\R)$, this implies that \begin{align}\label{eq: estim regul v}
|{\rm v}(t',{\rm x}')-{\rm v}(t,{\rm x})|&\le C(\lvert\!\lvert{\rm x}_{t\wedge}-{\rm x} '_{t'\wedge}\rvert\!\rvert +(t'-t)^{\frac{1}2})\\
|\nabla_{{\rm x}}{\rm v}(t',{\rm x}')-\nabla_{{\rm x}}{\rm v}(t,{\rm x})|&\le C(\lvert\!\lvert{\rm x}_{t\wedge}-{\rm x} '_{t'\wedge}\rvert\!\rvert^{\alpha}+(t'-t)^{\frac{\alpha}2}+\mu([t,t']))\label{eq: estim regul nabla v} \end{align} for some $C>0$ that does not depend on $(t,{\rm x})$ and $(t',{\rm x}')$.
3. In the general case where $\mu$ has a finite number of atoms $\{0\le t_{1}\le \cdots \le t_{n}\le T\}$, the estimates \eqref{eq: estim regul v}-\eqref{eq: estim regul nabla v} show that ${\rm v}$ and $\nabla_{{\rm x}}{\rm v}$ are locally uniformly bounded and locally uniformly continuous on each closed and convex interval of $\cup_{i=0}^{n} [t_{i},t_{i+1})$, with the convention that $t_{0}=0$ and $t_{n+1}=T$. Moreover, \eqref{eq: estim regul v} also shows that ${\rm v}(\cdot,X^{-})$ admits right limits, for every $X:=X^{0,{\rm x}}$ with ${\rm x} \in D([0,T])$. Then, one can apply Theorem \ref{TIto} on intervals of the form $[t_{i},t]$ with $t_{i}\le t<t_{i+1}$ and $0\le i\le n$. Since, like $X$, ${\rm v}(\cdot,X)$ is a.s.~continuous at $t_{i+1}$, this implies that \begin{align*} {\rm v}(t\wedge t_{i+1}, X)=&{\rm v}(t_{i}, X)+\int_{t_{i}}^{t\wedge t_{i+1}} \nabla_{{\rm x}}{\rm v}(s, X)dM^{c}_{s}\\ &+\sum_{s\le t\wedge t_{i+1}} ({\rm v}(s,X)-{\rm v}(s,X^{-}))-\int_{0}^{t\wedge t_{i+1}}\int_{\R} ({\rm v}(s,X^{-}\oplus_{s} y)-{\rm v}(s,X^{-}))\nu_{s}(dy) \lambda_{s} ds\\ &+\Gamma_{t\wedge t_{i+1}}-\Gamma_{t_{i}}, \; t\in [t_{i},t_{i+1}], \end{align*} in which $M^{c}$ is the continuous martingale part of $X$ and $\Gamma$ is predictable and orthogonal.
4. In the case where $A$, $\sigma$, $\gamma$, $\lambda$ and $t\in [0,T]\mapsto \nu_{t}$ are deterministic, one easily checks that ${\rm v}(\cdot,X)$ is a martingale. If the latter holds, then $\Gamma\equiv 0$ by uniqueness of the martingale decomposition. \end{proof}
\appendix \section{Appendix}
We first state a technical result whose proof is very close to the first part of the proof of Lemma \ref{Lconv}. Again, we borrow the standard notations $\mathcal{A}_{loc}^+$ and $\mathcal{G}_{loc}^2(\mu^X)$ from \cite[Section I.3.a., Section II.1.d.]{jacod2013limit}.
\begin{Lemma}\label{purely _discontinuous} Let $F\in\C^{0,1}(\Theta)$ be such that $\nabla_{\rm x} F$ is locally bounded and let $X$ be a càdlàg process such that $\sum_{s\leq T} \lvert\Delta X_s\rvert^2<+\infty$ a.s. Then, \begin{align*} &V:=\lvert(F_s(X\oplus_s x)-F_s(X^-))\mathbb{1}_{\left\lbrace \lvert{x}\rvert\leq{1}\right\rbrace }\rvert^{2} \ast \mu^{X} \in \mathcal{A}_{loc}^+\\ &W:=\lvert{x}\nabla_{\rm x} F_{s}(X^-)\mathbb{1}_{\left\lbrace \lvert{x}\rvert\leq{1}\right\rbrace }\rvert^{2} \ast \mu^{X} \in \mathcal{A}_{loc}^+ \end{align*} In particular, $((F_s(X\oplus_s x)-F_s(X^-))\mathbb{1}_{\left\lbrace \lvert{x}\rvert\leq{1}\right\rbrace }) \ast (\mu^{X}-\nu^X)$ and $(x\nabla_{\rm x} F_{s}(X^-)\mathbb{1}_{\left\lbrace \lvert{x}\rvert\leq{1}\right\rbrace }) \ast (\mu^{X}-\nu^X)$ are well-defined purely discontinuous local martingales. \end{Lemma}
\begin{proof} The fact that both processes are increasing is trivial. We next argue as in the proof of Lemma \ref{Lconv}. Set $Y:=(X_{t-},\sum_{s<t} \lvert\Delta X_s\rvert ^2)_{t\geq 0}$ and let $(S_m)_{m\in\N}$ be a localization sequence such that $(Y_{\cdot\wedge S^m}\mathbb{1}_{S^m>0})_{m\in\N}$ is a sequence of bounded processes. We fix $S=S_m$ for some $m$ and $C$ s.t. $\lvert{Y_{t\wedge S}}\rvert\leq{C}~~\forall t\leq T $ a.s.
Then, \begin{align*} \mathbb{E}\left[ W_{t\wedge{S}}\right] &=\mathbb{E}\left[ \int_{\left] 0,t\wedge S\right] \times\mathbb{R}}\lvert{x}\nabla_{\rm x} F_{s}(X^-)\mathbb{1}_{\left\lbrace \lvert{x}\rvert\leq{1}\right\rbrace }\rvert^{2} \mu^{X}(ds,dx)\right]\\
&\leq\sup_{s\in\left[ 0,T\right],~\lvert\!\lvert{\rm x}\rvert\!\rvert\leq{C}}|\nabla_{\rm x} F_s({\rm x})|^2\;\mathbb{E}\left[\sum_{\substack{s\in\left] 0,t\wedge S\right[\\0<\lvert \Delta{X_{s}}\rvert\leq{1}}} \lvert\Delta{X_{s}}\rvert^2\mathbb{1}_{S>0}+\lvert\Delta{X_{S}}\rvert^2\mathbb{1}_{S>0}\mathbb{1}_{\lvert\Delta{X}_S\rvert\leq{1}}\right]\\
&\leq\sup_{s\in\left[ 0,T\right],~\lvert\!\lvert{\rm x}\rvert\!\rvert\leq{C}}|\nabla_{\rm x} F_s({\rm x})|^2\;(C+1) . \end{align*} The last term is finite since $\nabla_{\rm x} F$ is locally bounded. Similarly, \begin{align*} \mathbb{E}\left[ V_{t\wedge{S}}\right] &=\mathbb{E}\left[ \int_{\left] 0,t\wedge S\right] \times\mathbb{R}^d}\lvert(F_s(X\oplus_s x)-F_s(X^-))\mathbb{1}_{\left\lbrace \lvert{x}\rvert\leq{1}\right\rbrace }\rvert^{2} \mu^{X}(ds,dx)\right]\\ &=\mathbb{E}\left[\sum_{\substack{s\in\left]0,t\wedge S\right]\\0<\lvert \Delta{X_{s}}\rvert\leq{1}}} \lvert(F_{s}(X)-F_{s}(X^-)\rvert^2\right]\\ &=\mathbb{E}\left[\sum_{\substack{s\in\left]0,t\wedge S\right]\\0<\lvert \Delta{X_{s}}\rvert\leq{1}}} \lvert\Delta{X_{s}}\rvert^2\;\lvert\int_0^1\nabla_x{F}_{s}(X^-\oplus_{s}\lambda\Delta X_s)d\lambda\rvert^2\right]\\
&\leq\sup_{s\in\left[ 0,T\right],~\lvert\!\lvert{{\rm x}}\rvert\!\rvert\leq{C}}{|\nabla_{\rm x} F_s({\rm x})|}^2 \;\mathbb{E}\left[\sum_{\substack{s\in\left]0,t\wedge S\right[\\0<\lvert \Delta{X_{s}}\rvert\leq{1}}} \lvert\Delta{X_{s}}\rvert^2\mathbb{1}_{S>0}+\lvert\Delta{X_{S}}\rvert^2\mathbb{1}_{S>0}\mathbb{1}_{\lvert\Delta{X}_S\rvert\leq{1}}\right]\\
&\leq\sup_{s\in\left[ 0,T\right],~\lvert\!\lvert{{\rm x}}\rvert\!\rvert\leq{C}}|\nabla_{\rm x} F_s({\rm x})|^2\; (C+1) . \end{align*}
Thus, $V$ and $W$ belong to $\mathcal{A}^+_{loc}$.
We conclude that $((F_s(X\oplus_s x)-F_s(X^-))\mathbb{1}_{\left\lbrace \lvert{x}\rvert\leq{1}\right\rbrace }) \ast (\mu^{X}-\nu^X)$ and $(x\nabla_{\rm x} F_{s}(X^-)\mathbb{1}_{\left\lbrace \lvert{x}\rvert\leq{1}\right\rbrace }) \ast (\mu^{X}-\nu^X)$ are well-defined square integrable purely discontinuous local martingales by \cite[Theorem~11.21]{he2019semimartingale} since their integrands belong to $\mathcal{G}^2_{loc}(\mu^X)$ by \cite[Lemma~2.4]{bandini2018special}.\\ \end{proof}
The next result follows from the same arguments as in \cite{bandini2017weak,bouchard2021ac}. Unlike Theorem \ref{TIto}, it does not assert that $\Gamma^{F}$ is predictable. We provide its proof for completeness.
\begin{Proposition}\label{ItoC01standard} Let $X=X_0+M+A$ be a càdlàg weak Dirichlet process with finite quadratic variation. Let $\mu^X$ be its jump measure and $\nu^X$ its compensator.
Let $F:\Theta\to\mathbb{R}$ be $\mathbb{C}^{0,1}$, such that $F$ and $\nabla_{x}F$ are both in $\mathbb{C}_{loc}^{u,b}(\Theta)$, and such that $s\mapsto\nabla_{\rm x} F_{s}(X^-)$ admits right-limits a.s. Then, $(F_{t}(X))_{t\in{\left[ 0,T \right] }}$ is a weak Dirichlet process with decomposition \begin{align*} F_{t}(X)=F_{0}(X)+&\int_0^t\nabla_{\rm x} F_{s}(X^-) dM_s\notag\\ &+\int_{\left] 0,t\right] \times\mathbb{R}^d}(F_s(X^-\oplus_s x)-F_s(X^-)-x\nabla_{\rm x} F_{s}(X^-))\mathbb{1}_{\left\lbrace \lvert{x}\rvert>{1}\right\rbrace }\mu^X(ds,dx)\notag\\ &+\int_{\left] 0,t\right] \times\mathbb{R}^d}(F_s(X^-\oplus_s x)-F_s(X^-))\mathbb{1}_{\left\lbrace \lvert{x}\rvert\leq{1}\right\rbrace }(\mu^X-\nu^X)(ds,dx)\notag\\ &-\int_{\left] 0,t\right] \times\mathbb{R}^d}x\nabla_{\rm x} F_{s}(X^-)\mathbb{1}_{\left\lbrace \lvert{x}\rvert\leq{1}\right\rbrace }(\mu^X-\nu^X)(ds,dx)\notag\\ &+\Gamma_t^F,~~\forall t\in\left[ 0,T\right], \end{align*} where $\Gamma^F$ is an orthogonal process, if and only if $(F,X)$ satisfies Assumption $(A)$. \end{Proposition}
\begin{proof} For the rest of the proof, we denote by $\delta_\cdot(F,\cdot)$ and $\delta_\cdot(\nabla_{{\rm x}}F,\cdot)$ the moduli of continuity of $F$ and $\nabla_{{\rm x}}F$, respectively; see \eqref{def_module_continuit}.
Let $N$ be a continuous local martingale. Our aim is to show that $\left[ \Gamma^F,N\right]\equiv0$, in which \begin{align}\label{Gamma} \Gamma_t^F=F_{t}(X)-F_{0}(X)&-\int_0^t\nabla_{\rm x} F_{s}(X^-)dM_s\notag\\ &-\int_{\left] 0,t\right] \times\mathbb{R}}(F_s(X\oplus_s x)-F_s(X^-)-x\nabla_{\rm x} F_{s}(X^-))\mathbb{1}_{\left\lbrace \lvert{x}\rvert>{1}\right\rbrace }\mu^X(ds,dx)\notag\\ &-\int_{\left] 0,t\right] \times\mathbb{R}}(F_s(X\oplus_s x)-F_s(X^-))\mathbb{1}_{\left\lbrace \lvert{x}\rvert\leq{1}\right\rbrace }(\mu^X-\nu^X)(ds,dx)\notag\\ &+\int_{\left] 0,t\right] \times\mathbb{R}}x\nabla_{\rm x} F_{s}(X^-)\mathbb{1}_{\left\lbrace \lvert{x}\rvert\leq{1}\right\rbrace }(\mu^X-\nu^X)(ds,dx),~~t\leq T. \end{align} Note that $\sum_{s\leq T} \lvert\Delta X_s\rvert^2<+\infty$ a.s., since $X$ has finite quadratic variation, see \cite[Lemma~2.10.]{bandini2017weak}. Then, by Lemma \ref{purely _discontinuous} and the definition of a purely discontinuous local martingale, the two last terms of \eqref{Gamma} are orthogonal{,} hence their quadratic {co}variation with $N$ equals $0$.
On the other hand, since $X$ is a càdlàg process, it has finitely many jumps of size greater than or equal to $1$, a.s. \\ Hence $\int_{\left] 0,\cdot\right] \times\mathbb{R}^d}(F_s(X\oplus_s x)-F_s(X^-)-x\nabla_{\rm x} F_{s}(X^-))\mathbb{1}_{\left\lbrace \lvert{x}\rvert>{1}\right\rbrace }\mu^X(ds,dx)$ is a bounded variation process and, by Remark \ref{consistence_bracket}, its quadratic covariation with $N$ also equals $0$. Moreover, by Remark \ref{consistence_bracket}, \begin{equation*} \left[ \int_0^{\cdot}\nabla_{\rm x} F_{s}(X^-) dM_s,N\right] _t=\int_0^t\nabla_{\rm x} F_{s}(X^-) d\left[ M,N\right] _s. \end{equation*} Thus, by bilinearity of the quadratic covariation, we only have to show that \begin{equation*} \left[ F_\cdot(X),N\right]_t=\int_0^t\nabla_{\rm x} F_{s}(X^-) d\left[ M,N\right] _s \end{equation*} which, by continuity of $N$ and \cite[Proposition A.3]{bandini2017weak}, is equivalent to \begin{equation*} I^{\epsilon}_t:=\dfrac{1}{\epsilon}\int_0^t(F_{s+\epsilon}(X)-F_s(X))(N_{s+\epsilon}-N_s)ds\underset{\epsilon\rightarrow{0}}{\longrightarrow} \int_0^t\nabla_{\rm x} F_{s}(X^-) d\left[ M,N\right] _s ~~\mbox{u.c.p.} \end{equation*} We have \begin{equation*} I^{\epsilon}_t=I^{\epsilon,1}_t+I^{\epsilon,2}_t \end{equation*} where \begin{align*} &I^{\epsilon,1}_t=\dfrac{1}{\epsilon}\int_0^t(F_{s+\epsilon}(X_{s\wedge}\boxplus_{s+\epsilon}X_{s+\epsilon})-F_s(X))(N_{s+\epsilon}-N_s)ds\\ &I^{\epsilon,2}_t=\dfrac{1}{\epsilon}\int_0^t(F_{s+\epsilon}(X)-F_{s+\epsilon}(X_{s\wedge}\boxplus_{s+\epsilon}X_{s+\epsilon}))(N_{s+\epsilon}-N_s)ds. \end{align*} If we show that $I^{\epsilon,1}\underset{\epsilon\rightarrow{0}}{\longrightarrow}\int_0^\cdot\nabla_{\rm x} F_{s}(X^-) d\left[ M,N\right] _s$ u.c.p., then $I^{\epsilon}\underset{\epsilon\rightarrow{0}}{\longrightarrow}\int_0^\cdot\nabla_{\rm x} F_{s}(X^-) d\left[ M,N\right] _s$ u.c.p.~if and only if $I^{\epsilon,2}\underset{\epsilon\rightarrow{0}}{\longrightarrow}0$ u.c.p., which would provide the required result.
Let us decompose $I^{\epsilon,1}$ in \begin{equation*} I^{\epsilon,1}_t=I^{\epsilon,11}_t+I^{\epsilon,12}_t+I^{\epsilon,13}_t+I^{\epsilon,14}_t \end{equation*} where \begin{align*} &I^{\epsilon,11}_t=\dfrac{1}{\epsilon}\int_0^t\int_0^1(\nabla_{\rm x} F_{s+\epsilon}(X_{s\wedge}\oplus_{s+\epsilon}\lambda(X_{s+\epsilon}-X_s))-\nabla_{\rm x} F_{s+\epsilon}(X_{s\wedge}))d\lambda~(X_{s+\epsilon}-X_s)(N_{s+\epsilon}-N_s)ds\\ &I^{\epsilon,12}_t=\dfrac{1}{\epsilon}\int_0^t(\nabla_{\rm x} F_{s+\epsilon}(X_{s\wedge})-\nabla_{\rm x} F_{s}(X))(X_{s+\epsilon}-X_s)(N_{s+\epsilon}-N_s)ds\\ &I^{\epsilon,13}_t=\dfrac{1}{\epsilon}\int_0^t\nabla_{\rm x} F_{s}(X)(X_{s+\epsilon}-X_s)(N_{s+\epsilon}-N_s)ds\\ &I^{\epsilon,14}_t=\dfrac{1}{\epsilon}\int_0^t(F_{s+\epsilon}(X_{s\wedge})-F_s(X))(N_{s+\epsilon}-N_s)ds. \end{align*} Since $\nabla_{\rm x} F$ is in ${\mathbb C}_{loc}^{u,b}(\Theta)$, we have \begin{equation*} \lvert{I^{\epsilon,12}_t}\rvert\leq\delta_{\lvert\!\lvert{X}\rvert\!\rvert}(\nabla_{\rm x} F,\epsilon)\sqrt{\left[ N,N\right]^{ucp}_{\epsilon,t}\;\left[ X,X\right]^{ucp}_{\epsilon,t}}. \end{equation*} Since $N$ is a local martingale, $N$ has finite quadratic variation by Remark \ref{consistence_bracket}. Then, $\left[ N,N\right]^{ucp}_{\epsilon}\underset{\epsilon\rightarrow{0}}{\longrightarrow}\left[ N,N\right]$ and $\left[ X,X\right]^{ucp}_{\epsilon}\underset{\epsilon\rightarrow{0}}{\longrightarrow}\left[ X,X\right]$ u.c.p. Hence, the right-hand side term converges to $0$ u.c.p.~and thus $I^{\epsilon,12}\underset{\epsilon\rightarrow{0}}{\longrightarrow}0$ u.c.p.
Let us now consider $I^{\epsilon,14}$: \begin{align*} I^{\epsilon,14}_t&=\dfrac{1}{\epsilon}\int_0^t(F_{s+\epsilon}(X_{s\wedge})-F_s(X))(N_{s+\epsilon}-N_s)ds\\ &=\dfrac{1}{\epsilon}\int_0^t(F_{s+\epsilon}(X_{s\wedge})-F_s(X))\int_s^{s+\epsilon}dN_u~ds\\ &=\dfrac{1}{\epsilon}\int_0^{t+\epsilon}\int_{(u-\epsilon)\vee0}^{u}F_{s+\epsilon}(X_{s\wedge})-F_s(X)ds~dN_u \end{align*} where we used the stochastic Fubini theorem to deduce the last equality. Since $F\in{{\mathbb C}}_{loc}^{u,b}(\Theta)$, we have $\frac{1}{\epsilon}\lvert\int_{(u-\epsilon)\vee0}^{u}F_{s+\epsilon}(X_{s\wedge})-F_s(X)ds\rvert\leq\delta_{\lvert\!\lvert{X}\rvert\!\rvert}(F,\epsilon)\underset{\epsilon\rightarrow{0}}{\longrightarrow}0~~\forall{u}\in\left[ 0,T\right]$ a.s. Then, using \cite[Theorem~I.4.31]{jacod2013limit}, we conclude that $I^{\epsilon,14}\underset{\epsilon\rightarrow{0}}{\longrightarrow}0$ u.c.p. By using \cite[Proposition~A.6.]{bandini2017weak}, we have $I^{\epsilon,13}\underset{\epsilon\rightarrow{0}}{\longrightarrow}\int_0^\cdot\nabla_{\rm x} F_{s}(X^-) d\left[ M,N\right] _s$ u.c.p. Hence, it remains to show that $I^{\epsilon,11}\underset{\epsilon\rightarrow{0}}{\longrightarrow}0$ u.c.p.
Let $(\epsilon_n)_{n\in\N}$ be a sequence of real numbers tending to $0$ and let $\mathcal{N}$ be an element of $\mathcal{F}$ such that ${\mathbb P}(\mathcal{N}^c)=0$ and such that $\left[ N,N\right]^{ucp}_{\epsilon_n}\underset{n\rightarrow{+\infty}}{\longrightarrow}\left[ N,N\right]$ and $\left[ X,X\right]^{ucp}_{\epsilon_n}\underset{n\rightarrow{+\infty}}{\longrightarrow}\left[ X,X\right]$ uniformly on $\mathcal{N}.$ We fix $\omega\in\mathcal{N}$ for the rest of the proof (we omit it to alleviate the notation).\\ Fix an arbitrary $\gamma>{0}$ and let $(t_i)_{i\in\N}$ be the jump times of $X$ (depending on this fixed $\omega$). \\ By \cite[Lemma~2.10.]{bandini2017weak}, there exists $K=K(\omega)$ such that $\sum_{i=K+1}^\infty\lvert\Delta{X_{t_i}}\rvert^2\leq\gamma^2$.\\ We define $A_{\epsilon_n}=\bigcup\limits_{i=1}^{K}\left] t_i-\epsilon_n,t_i\right] $ and $B_{\epsilon_n}=\left[ 0,T\right] \backslash{A_{\epsilon_n}}$ and decompose $I^{\epsilon_n,11}$ as follows: \begin{equation*} I^{\epsilon_n,11}=I^{\epsilon_n,11A}+I^{\epsilon_n,11B} \end{equation*} where \begin{align*} I^{\epsilon_n,11A}_t&=\sum_{i=1}^K\dfrac{1}{\epsilon_n}\int_{t_i-\epsilon_n}^{t_i}\mathbb{1}_{s\in\left] 0,t\right]}\int_0^1 G_{\epsilon_{n}}(s,\lambda)d\lambda~(X_{s+\epsilon_n}-X_s)(N_{s+\epsilon_n}-N_s)ds\\ I^{\epsilon_n,11B}_t&=\dfrac{1}{\epsilon_n}\int_0^t\mathbb{1}_{s\in{B}_{\epsilon_n}}\int_0^1G_{\epsilon_{n}}(s,\lambda)d\lambda~(X_{s+\epsilon_n}-X_s)(N_{s+\epsilon_n}-N_s)ds \end{align*} in which $$ G_{\epsilon_{n}}(s,\lambda):=\nabla_{\rm x} F_{s+\epsilon_n}(X_{s\wedge}\oplus_{s+\epsilon_n}\lambda(X_{s+\epsilon_n}-X_s))-\nabla_{\rm x} F_{s+\epsilon_n}(X_{s\wedge}). $$ We have \begin{align*} \lvert{I^{\epsilon_n,11B}_t}\rvert&\leq\delta_{\lvert\!\lvert{X}\rvert\!\rvert}(\nabla_{\rm x} F,\sup_{i~{ \rm s.t. 
}~ t_{i}\le T}~\sup_{r,a\in\left[ t_i,t_{i+1}\right[,~\lvert{r-a}\rvert\leq\epsilon_n}\lvert{X_r-X_a}\rvert)\sqrt{\left[ N,N\right]^{ucp}_{\epsilon_n,t}\;\left[ X,X\right]^{ucp}_{\epsilon_n,t}}\\ &\leq\delta_{\lvert\!\lvert{X}\rvert\!\rvert}(\nabla_{\rm x} F,3\gamma)\sqrt{\left[ N,N\right]^{ucp}_{\epsilon_n,t}\;\left[ X,X\right]^{ucp}_{\epsilon_n,t}} \end{align*} for $n$ large enough (depending on $\omega$), by \cite[Lemma~2.12.]{bandini2017weak} applied successively on the intervals $\left[ t_i,t_{i+1}\right]$ to the processes $X_{t_i}\boxplus_{t_{i+1}}X_{t_{i+1}-}$ for $i=0,\ldots, K-1$ and on $\left[0,t_0\right]$ and $\left[t_K,T\right]$. Then, \begin{equation*} \limsup_{n\rightarrow\infty}\sup_{t\in\left[ 0,T\right] }\lvert{I^{\epsilon_n,11B}_t}\rvert\leq\delta_{\lvert\!\lvert{X}\rvert\!\rvert}(\nabla_{\rm x} F,3\gamma)\sqrt{\left[ N,N\right]_T\left[ X,X\right]_T}. \end{equation*} On the other hand, since $N$ is continuous and hence uniformly continuous on $\left[ 0,T\right]$, $\lvert{N_{s+\epsilon_n}-N_s}\rvert\leq\gamma$ $\forall{s}\in\left[ 0,T\right]$, for $n$ large enough. Then \begin{align*} \sup_{t\in\left[ 0,T\right] }\lvert{I^{\epsilon_n,11A}_t}\rvert&\leq\sum_{i=1}^K\dfrac{1}{\epsilon_n}\int_{t_i-\epsilon_n}^{t_i}\int_0^1\lvert G_{\epsilon_{n}}(s,\lambda)\rvert{d\lambda}~\lvert{X_{s+\epsilon_n}-X_s}\rvert \lvert{N_{s+\epsilon_n}-N_s}\rvert{ds}\\ &\leq\gamma\times{K}\times{2}\lvert\!\lvert{X}\rvert\!\rvert\times{2}\sup_{s\in\left[ 0,T\right],~\lvert\!\lvert{\rm x}\rvert\!\rvert\leq{\lvert\!\lvert{X}\rvert\!\rvert}}\lvert\nabla_{\rm x} F_s({\rm x})\rvert. 
\end{align*} Hence, \begin{equation*} \limsup_{n\rightarrow\infty}\sup_{t\in\left[ 0,T\right] }\lvert{I^{\epsilon_n ,11}_t}\rvert\leq\delta_{\lvert\!\lvert{X}\rvert\!\rvert}(\nabla_{\rm x} F,3\gamma)\sqrt{\left[ N,N\right]_T\left[ X,X\right]_T}+4\gamma{K}\lvert\!\lvert{X}\rvert\!\rvert\sup_{s\in\left[ 0,T\right],~\lvert\!\lvert{\rm x}\rvert\!\rvert\leq{\lvert\!\lvert{X}\rvert\!\rvert}}\lvert\nabla_{\rm x} F_s({\rm x})\rvert \end{equation*} which allows us to conclude that $I^{\epsilon_n,11}\underset{n\rightarrow{+\infty}}{\longrightarrow}0$ by arbitrariness of $\gamma>0$. \\Since $\mathbb{P}(\mathcal{N})=1$, we get $I^{\epsilon_n,11}\underset{n\rightarrow{+\infty}}{\longrightarrow}0$ uniformly a.s.~and thus the convergence holds u.c.p. Since this holds for every sequence $(\epsilon_n)_{n\in\N}$ converging to $0$, we get $I^{\epsilon,11}\underset{\epsilon\rightarrow{0}}{\longrightarrow}0$ u.c.p., which concludes the proof. \end{proof}
\nocite{*}
\end{document}
\begin{document}
\title[Euler matrices and their algebraic properties]{Euler matrices and their algebraic properties revisited}
\author[Y. Quintana]{Yamilet Quintana$^{(1)}$} \address{Departamento de Matem\'aticas Puras y Aplicadas, Edificio Matem\'aticas y Sistemas (MYS), Apartado Postal: 89000, Caracas 1080 A, Universidad Sim\'on Bol\'{\i}var, Venezuela} \email{yquintana@usb.ve} \thanks{$(1)\,\,\,$Partially supported by the grants Impacto Caribe (IC-002627-2015) from Universidad del Atl\'antico, Colombia, and DID-USB (S1-IC-CB-003-16) from Decanato de Investigaci\'on y Desarrollo. Universidad Sim\'on Bol\'{\i}var, Venezuela.}
\author[W. Ram\'{\i}rez]{William Ram\'{\i}rez$^{(2)}$} \address{Departamento de Ciencias B\'asicas, Universidad de la Costa - CUC, Barranquilla, Colombia.} \email{wramirez4@cuc.edu.co} \thanks{$(2)\,\,\,$Partially supported by the grant Impacto Caribe (IC-002627-2015) from Universidad del Atl\'antico, Colombia.}
\author[A. Urieles]{Alejandro Urieles$^{(2)}$} \address{Programa de Matem\'aticas, Universidad del Atl\'antico, Km 7 V\'{i}a Pto. Colombia, Barranquilla, Colombia.} \email{alejandrourieles@mail.uniatlantico.edu.co}
\subjclass[2010]{11B68, 11B83, 11B39, 05A19.} \keywords{Euler polynomials, Euler matrix, generalized Euler matrix, generalized Pascal matrix, Fibonacci matrix, Lucas matrix.}
\begin{abstract} This paper is concerned with the generalized Euler polynomial matrix $\mathrsfs{E}^{(\alpha)}(x)$ and the Euler matrix $\mathrsfs{E}$. Taking into account some properties of Euler polynomials and numbers, we deduce product formulae for $\mathrsfs{E}^{(\alpha)}(x)$ and determine the inverse matrix of $\mathrsfs{E}$. We establish some explicit expressions for the Euler polynomial matrix $\mathrsfs{E}(x)$, which involve the generalized Pascal, Fibonacci and Lucas matrices, respectively. From these formulae we obtain some new interesting identities involving Fibonacci and Lucas numbers. Also, we provide some factorizations of the Euler polynomial matrix in terms of Stirling matrices, as well as a connection between the shifted Euler matrices and Vandermonde matrices. \end{abstract}
\maketitle
\section{Introduction} \label{intro}
The classical Euler polynomials $E_{n}(x)$ and the generalized Euler polynomials $E_{n}^{(\alpha)}(x)$ of (real or complex) order $\alpha$, are usually defined as follows (see, for details, \cite{AIK2014,N,SCh2012,SM}):
\begin{equation} \label{euler1} \displaystyle\left(\frac{2}{e^{z}+1}\right)^{\alpha}e^{xz} =\displaystyle\sum\limits_{n=0}^{\infty}
E_{n}^{(\alpha)}(x)\frac{z^n}{n!}, \quad |z|<\pi, \quad 1^{\alpha}:=1, \end{equation} and \begin{equation} \label{euler2} E_{n}(x):= E_{n}^{(1)}(x), \quad n\in \mathbb{N}_{0}, \end{equation} where $ \mathbb{N}_{0}:= \mathbb{N}\cup\{0\}$ and $\mathbb{N}=\{1,2,3,\ldots\}$.
The numbers $E_{n}^{(\alpha)}:= E_{n}^{(\alpha)}(0)$ are called generalized Euler numbers of order $\alpha$, $n\in \mathbb{N}_{0}$. It is well-known that the classical Euler numbers are defined by the generating function \begin{equation} \label{euler3} \frac{2}{e^{z}+e^{-z}}=\sum_{n=0}^{\infty} \varepsilon_{n}\frac{z^{n}}{n!}. \end{equation}
The sequence $\{\varepsilon_{n}\}_{n\geq 0}$ counts the number of alternating $n$-permutations. Let us recall that a permutation $\sigma$ of a set of $n$ elements (an $n$-permutation) is said to be alternating if and only if the $n-1$ differences $\sigma(i+1)-\sigma(i)$, $i=1,2,\ldots, n-1$, have alternating signs (cf. \cite[p. 258]{C1974}). From \eqref{euler2} and \eqref{euler3} it is easy to check that the connection between the classical Euler numbers and the Euler polynomials is given by the formula \begin{equation} \label{euler4} \varepsilon_{n}=2^{n}E_{n}\left(\frac{1}{2}\right), \quad n\in\mathbb{N}_{0}. \end{equation}
So, the numbers $E_{n}:=E_{n}(0)$ are also known in the literature as Euler numbers (cf., e.g., \cite{LS2005,SCh2012}).
The first six generalized Euler polynomials are $$\begin{aligned} E_{0}^{(\alpha)}(x)=& \,1,\quad E_{1}^{(\alpha)}(x)=x-\frac{\alpha}{2},\quad E_{2}^{(\alpha)}(x)= x^{2}-\alpha x+\frac{\alpha(\alpha-1)}{4},\\ E_{3}^{(\alpha)}(x)=& \, x^{3}-\frac{3\alpha}{2}x^{2}+\frac{3\alpha(\alpha-1)}{4}x-\frac{\alpha^{2}(\alpha-3)}{8}, \\ E_{4}^{(\alpha)}(x)=&\, x^{4}-2\alpha x^{3}+\frac{3\alpha(\alpha-1)}{2}x^{2}-\frac{\alpha^{2}(\alpha-3)}{2}x +\frac{\alpha(\alpha^{3}-6\alpha^{2}+3\alpha+2)}{16},\\ E_{5}^{(\alpha)}(x)=& \, x^{5}-\frac{5\alpha}{2}x^{4}+\frac{5\alpha(\alpha-1)}{2}x^{3}-\frac{5\alpha^{2}(\alpha-3)}{4}x^{2} +\frac{5\alpha(\alpha-1)(\alpha^{2}-5\alpha-2)}{16}x \\ &-\frac{\alpha^{2}(\alpha^{3}-10\alpha^{2}+15\alpha+10)}{32}. \end{aligned}$$
Recent and interesting works dealing with these polynomials, as well as with Appell and Apostol-type polynomials, their properties, and their applications in several areas such as combinatorics, number theory, numerical analysis and partial differential equations, can be found in the current literature on this subject. For broad information on the older literature and on new research trends concerning these classes of polynomials, we refer the interested reader to \cite{C1974,HAS2016,HASA2015,HQU2015,LS2005,N,PS2013,QRU,Rio68,SBR2018,SCh2012,SKS2017,SMR2018,SOK2013,SOY2014,SP2003}.
From the generating relation \eqref{euler1}, it is fairly straightforward to deduce the addition formula: \begin{equation} \label{euler6} E_{n}^{(\alpha+\beta)}(x+y)= \sum_{k=0}^{n}\binom{n}{k}E_{k}^{(\alpha)}(x)E_{n-k}^{(\beta)}(y). \end{equation}
It also follows that \begin{equation} \label{euler11} E_{n}^{(\alpha)}(x+1)+ E_{n}^{(\alpha)}(x) =2 E_{n}^{(\alpha-1)}(x). \end{equation} Since $E_{n}^{(0)}(x)=x^{n}$, making the substitution $\beta=0$ into \eqref{euler6} and interchanging $x$ and $y$, we get \begin{equation} \label{euler7} E_{n}^{(\alpha)}(x+y)= \sum_{k=0}^{n}\binom{n}{k}E_{k}^{(\alpha)}(y)x^{n-k}. \end{equation} As an immediate consequence, we have \begin{eqnarray} \label{euler8} E_{n}(x+y)&=& \sum_{k=0}^{n}\binom{n}{k}E_{k}(y)x^{n-k},\\ \label{euler9} E_{n}(x)&=&\sum_{k=0}^{n}\binom{n}{k}E_{k}\,x^{n-k}. \end{eqnarray} Using \eqref{euler4}, \eqref{euler8} and the well-known relation $E_{n}(1-x)=(-1)^{n}E_{n}(x)$, it is possible to deduce the following connection formula between $E_{n}$ and the classical Euler numbers $\varepsilon_{n}$: \begin{equation} \label{euler5} E_{n}=\left\{ \begin{array}{l}-\frac{1}{2^{n}}\sum_{k=0}^{n}\binom{n}{k}\varepsilon_{n-k},\quad \mbox{ if } n \mbox{ is odd},\\ \\ 0, \quad \mbox{ if } n\geq 2 \mbox{ is even}. \end{array}\right. \end{equation}
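These identities are easy to test numerically. The following Python sketch (an illustration, not part of the original paper; exact rational arithmetic via the standard library) computes $E_n(x)$ from the recurrence $\sum_{k=0}^{n}\binom{n}{k}E_{k}(x)+E_{n}(x)=2x^{n}$, obtained by multiplying \eqref{euler1} (with $\alpha=1$) through by $e^{z}+1$, and then checks \eqref{euler4} against the known values $\varepsilon_{0},\ldots,\varepsilon_{6}$:

```python
from fractions import Fraction
from math import comb

def euler_polys(n):
    """Coefficient lists [c_0, ..., c_m] of E_m(x) for m = 0..n, from
    sum_{k=0}^{m} C(m,k) E_k(x) + E_m(x) = 2 x^m."""
    polys = []
    for m in range(n + 1):
        p = [Fraction(0)] * (m + 1)
        p[m] = Fraction(2)                 # right-hand side 2 x^m
        for k in range(m):                 # move the k < m terms across
            for i, c in enumerate(polys[k]):
                p[i] -= comb(m, k) * c
        polys.append([c / 2 for c in p])   # the k = m terms give 2 E_m(x)
    return polys

def ev(p, x):
    return sum(c * x ** i for i, c in enumerate(p))

P = euler_polys(6)
# classical Euler numbers via eps_n = 2^n E_n(1/2)
eps = [2 ** m * ev(P[m], Fraction(1, 2)) for m in range(7)]
# Euler numbers E_n := E_n(0)
E_at_0 = [ev(P[m], 0) for m in range(7)]
```

In particular, `eps` reproduces $1, 0, -1, 0, 5, 0, -61$ and `E_at_0` vanishes at the even indices $n\geq 2$, in agreement with \eqref{euler5}.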
Inspired by the article \cite{ZW2006}, in which the authors introduce the generalized Bernoulli matrix and establish some algebraic properties of the Bernoulli polynomial and Bernoulli matrices, in the present article we focus our attention on the algebraic and differential properties of the generalized Euler matrix. It is worthwhile to mention that the authors of \cite{ZW2006} point out that their methodology can be used to obtain similar properties in the setting of generalized Euler matrices; however, they do not actually carry out any of the corresponding proofs.
The outline of the paper is as follows. Section \ref{sec:1} has an auxiliary character and provides some background, as well as some results which will be used throughout the paper. Making use of some of the identities above, we introduce the generalized Euler matrix in Section \ref{sec:2}. Then, we study some interesting particular cases of this matrix, namely, the Euler polynomial matrix, the Euler matrix and the specialized Euler matrix. The main results of this section are Theorems \ref{teogeneuler1}, \ref{teogeneuler2}, \ref{teogeneuler3} and \ref{teogeneuler4}, which contain, respectively, the product formula for the Euler matrix, an explicit expression for the inverse of the specialized Euler matrix, the factorization of the Euler matrix via the generalized Pascal matrix of the first kind, and a useful factorization for the inverse of a particular ``horizontal sliding'' of the Euler polynomial matrix. Some consequences of these results are also shown (see, for instance, Corollaries \ref{corgeneuler1}, \ref{corgeneuler2}, \ref{corgeneuler3} and \ref{corgeneuler4}). Section \ref{sec:3} presents several factorizations of the generalized Euler matrix in terms of the Fibonacci and Lucas matrices, respectively (cf. Theorems \ref{teogeneuler5} and \ref{teogeneuler6}), together with some new identities involving Fibonacci and Lucas numbers. Finally, in Section \ref{sec:5} we provide some factorizations of the Euler polynomial matrix in terms of Stirling matrices, and we study the shifted Euler matrices and their connection with Vandermonde matrices.
\section{Background and previous results} \label{sec:1}
Throughout this paper, all matrices are in $M_{n+1}(\mathbb{R})$, the set of all $(n+1)$-square matrices over the real field. Also, for $i,j$ any nonnegative integers we adopt the following convention $$\binom{i}{j}=0, \mbox{ whenever } j>i.$$
In this section we recall the definitions of the generalized Pascal matrix, the Fibonacci matrix and the Lucas matrix, as well as some of their properties.
\begin{definition} \label{defi1} Let $x$ be any nonzero real number. The generalized Pascal matrix of first kind $P[x]$ is an $(n+1)\times(n+1)$ matrix whose entries are given by (see \cite{CV1993,Z1997}): \begin{equation} \label{pascal1} p_{i,j}(x)=\left\{\begin{array}{l} \binom{i}{j}x^{i-j}, \quad i\geq j,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation} \end{definition}
In \cite{CV1993,Z1997,Z1998} some properties of the generalized Pascal matrix of the first kind are shown, for example, its factorization by special summation matrices, its associated differential equation and its bivariate extensions. The following proposition summarizes some algebraic and differential properties of $P[x]$.
\begin{prop} Let $P[x]$ be the generalized Pascal matrix of first kind and order $n+1$. Then the following statements hold. \begin{enumerate} \item[(a)] Special value. If the convention $0^{0}=1$ is adopted, then it is possible to define \begin{equation} \label{pascal2} P[0]:= I_{n+1}={\rm diag}(1,1,\ldots,1), \end{equation} where $I_{n+1}$ denotes the identity matrix of order $n+1$. \item[(b)] $P[x]$ is an invertible matrix and its inverse is given by \begin{equation} \label{pascal3} P^{-1}[x]:=\left(P[x]\right)^{-1}= P[-x]. \end{equation} \item[(c)] \cite[Theorem 2]{CV1993} Addition theorem of the argument. For $x,y\in \mathbb{R}$ we have \begin{equation} \label{pascal4} P[x+y]= P[x]P[y]. \end{equation} \item[(d)] \cite[Theorem 5]{CV1993} Differential relation (Appell type polynomial entries). $P[x]$ satisfies the following differential equation \begin{equation} \label{pascal5}
D_{x}P[x]= \mathfrak{L} P[x]= P[x]
\mathfrak{L}, \end{equation} where $D_{x}P[x]$ is the matrix resulting from taking the derivative with respect to $x$ of each entry of $P[x]$ and the entries of the $(n+1)\times(n+1)$ matrix $\mathfrak{L}$ are given by $$\begin{aligned} {\rm l}_{i,j}=&\left\{\begin{array}{l} p_{i,j}'(0), \quad i\geq j,\\ \\ 0, \quad \mbox{otherwise}, \end{array}\right.\\ =&\left\{\begin{array}{l} j+1, \quad i=j+1,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{aligned}$$ \item[(e)] (\cite[Theorem 1]{Z1997}) The matrix $P[x]$ can be factorized as follows. \begin{equation} \label{pascal6} P[x]=G_{n}[x]G_{n-1}[x]\cdots G_{1}[x], \end{equation} where $G_{k}[x]$ is the $(n+1)\times(n+1)$ summation matrix given by $$\begin{aligned} G_{k}[x]=& \left\{\begin{array}{l} \begin{bmatrix} I_{n-k}&0\\ 0&S_{k}[x] \end{bmatrix}, \quad k=1,\ldots, n-1,\\ \\ S_{n}[x], \quad k=n, \end{array}\right. \end{aligned}$$ being $S_{k}[x]$ the $(k+1)\times(k+1)$ matrix whose entries $S_{k}(x;i,j)$ are given by $$\begin{aligned} S_{k}(x;i,j)=&\left\{\begin{array}{l} x^{i-j}, \quad j\leq i,\\ \\ 0, \quad j>i, \end{array}\right. \quad (0\leq i,j\leq k). \end{aligned}$$ \end{enumerate} \end{prop}
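Properties (b) and (c) can be verified numerically. The sketch below (illustrative only, not part of the paper; exact arithmetic with Python's `fractions`) builds $P[x]$ and checks the addition theorem \eqref{pascal4} and the inversion formula \eqref{pascal3} for a small size:

```python
from fractions import Fraction
from math import comb

def pascal(x, n):
    """(n+1)x(n+1) generalized Pascal matrix of the first kind,
    p_{ij} = C(i,j) x^{i-j} for i >= j, 0 otherwise."""
    x = Fraction(x)
    return [[comb(i, j) * x ** (i - j) if i >= j else Fraction(0)
             for j in range(n + 1)] for i in range(n + 1)]

def matmul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

n, x, y = 4, Fraction(2, 3), Fraction(-1, 5)
# pascal(0, n) is the identity matrix, matching the convention 0^0 = 1
```

Here $P[x]P[y]=P[x+y]$ and $P[x]P[-x]=P[0]=I_{n+1}$ hold exactly, entry by entry.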
Other structured matrices that we shall need in what follows are the Fibonacci and Lucas matrices. Below, we recall the definition of each of them.
\begin{definition} \label{defi2} Let $\{F_{n}\}_{n\geq 1}$ be the Fibonacci sequence, i.e., $F_{n}=F_{n-1}+F_{n-2}$ for $n\geq 2$ with initial conditions $F_{0}=0$ and $F_{1}=1$. The Fibonacci matrix $\mathrsfs{F}$ is an $(n+1)\times(n+1)$ matrix whose entries are given by \cite{LKL}: \begin{equation} \label{fibo2} f_{i,j}=\left\{\begin{array}{l} F_{i-j+1}, \quad i-j+1\geq 0,\\ \\ 0, \quad i-j+1< 0. \end{array}\right. \end{equation} \end{definition}
Let $\mathrsfs{F}^{-1}$ be the inverse of $\mathrsfs{F}$ and denote by $\tilde{f}_{i,j}$ the entries of $\mathrsfs{F}^{-1}$. In \cite{LKL} the authors obtained the following explicit expression for $\mathrsfs{F}^{-1}$.
\begin{equation} \label{fibo1}
\begin{aligned} \tilde{f}_{i,j}=&\left\{\begin{array}{l} 1, \quad i=j,\\ \\ -1, \quad i=j+1, j+2,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{aligned} \end{equation}
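As a quick numerical check of \eqref{fibo1} (illustrative, not part of the paper), one can build $\mathrsfs{F}$ and the claimed inverse and verify that both products give the identity matrix:

```python
def fibonacci_matrix(n):
    """Lower-triangular Fibonacci matrix, f_{ij} = F_{i-j+1} for i >= j."""
    F = [0, 1]
    while len(F) < n + 2:
        F.append(F[-1] + F[-2])
    return [[F[i - j + 1] if i >= j else 0 for j in range(n + 1)]
            for i in range(n + 1)]

def fibonacci_inv(n):
    """Claimed inverse: 1 on the diagonal, -1 on the first two subdiagonals."""
    return [[1 if i == j else (-1 if i - j in (1, 2) else 0)
             for j in range(n + 1)] for i in range(n + 1)]

def matmul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

n = 7
identity = [[int(i == j) for j in range(n + 1)] for i in range(n + 1)]
```

The check reduces to the recurrence $F_{m}=F_{m-1}+F_{m-2}$ applied along each subdiagonal.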
\begin{definition} \label{defi3} Let $\{L_{n}\}_{n\geq 1}$ be the Lucas sequence, i.e., $L_{n+2}=L_{n+1}+L_{n}$ for $n\geq 1$ with initial conditions $L_{1}=1$ and $L_{2}=3$. The Lucas matrix $\mathrsfs{L}$ is an $(n+1)\times(n+1)$ matrix whose entries are given by \cite{ZZ2007}:
\begin{equation} \label{lucas} l_{i,j}=\left\{\begin{array}{l} L_{i-j+1}, \quad i-j\geq 0,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation} \end{definition}
Let $\mathrsfs{L}^{-1}$ be the inverse of $\mathrsfs{L}$ and denote by $\tilde{l}_{i,j}$ the entries of $\mathrsfs{L}^{-1}$. In \cite[Theorem 2.2]{ZZ2007} the authors obtained the following explicit expression for $\mathrsfs{L}^{-1}$.
\begin{equation} \label{lucas1}
\begin{aligned} \tilde{l}_{i,j}=&\left\{\begin{array}{l} 1, \quad i=j,\\ \\ -3, \quad i=j+1, \\ \\ 5(-1)^{i-j}2^{i-j-2}, \quad i\geq j+2,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{aligned} \end{equation}
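The formula \eqref{lucas1} can be checked in the same spirit (illustrative sketch, not part of the paper):

```python
def lucas_matrix(n):
    """Lower-triangular Lucas matrix, l_{ij} = L_{i-j+1} for i >= j (L_1 = 1, L_2 = 3)."""
    L = [2, 1, 3]                      # L[0] = 2 is only a placeholder for indexing
    while len(L) < n + 2:
        L.append(L[-1] + L[-2])
    return [[L[i - j + 1] if i >= j else 0 for j in range(n + 1)]
            for i in range(n + 1)]

def lucas_inv(n):
    """Entries of the claimed inverse of the Lucas matrix."""
    def entry(i, j):
        d = i - j
        if d == 0:
            return 1
        if d == 1:
            return -3
        if d >= 2:
            return 5 * (-1) ** d * 2 ** (d - 2)
        return 0
    return [[entry(i, j) for j in range(n + 1)] for i in range(n + 1)]

def matmul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

n = 7
identity = [[int(i == j) for j in range(n + 1)] for i in range(n + 1)]
```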
For $x$ any nonzero real number, the following relation between the matrices $P[x]$ and $\mathrsfs{L}$ was stated and proved in \cite[Theorem 3.1]{ZZ2007}. \begin{equation} \label{lucas2} P[x]=\mathrsfs{L} \mathrsfs{G}[x] =\mathrsfs{H}[x]\mathrsfs{L}, \end{equation} where the entries of the $(n+1)\times(n+1)$ matrices $\mathrsfs{G}[x]$ and $\mathrsfs{H}[x]$ are given by $$\begin{aligned} g_{i,j}(x)=& x^{-j-1}\left[x^{i+1} \binom{i}{j} -3x^{i}\binom{i-1}{j}+ 5(-1)^{i+1} 2^{i-1}m_{i-1, j+1}\left(\frac{x}{2}\right)\right],\\ \\ h_{i,j}(x)=&x^{-j-1}\left[x^{i+1} \binom{i}{j} -3x^{i}\binom{i}{j+1}+ (-1)^{j+1}\frac{5x^{i+j+2}}{2^{j+3}} n_{i+1, j+3}\left(\frac{2}{x}\right)\right], \end{aligned}$$ respectively, with $$\begin{aligned} m_{i,j}(x):=& \left\{\begin{array}{l} \sum_{k=j}^{i}(-1)^{k}\binom{k}{j}x^{k}, \quad i\geq j,\\ \\ 0, \quad i<j, \end{array}\right. \end{aligned} \quad \mbox{ and } \quad \begin{aligned} n_{i,j}(x):=& \left\{\begin{array}{l} \sum_{k=j}^{i}(-1)^{k}\binom{i}{k}x^{k}, \quad i\geq j,\\ \\ 0, \quad i<j. \end{array}\right. \end{aligned}$$
\section{The generalized Euler matrix} \label{sec:2} \begin{definition} \label{def3} The generalized $(n+1)\times(n+1)$ Euler matrix $\mathrsfs{E}^{(\alpha)}(x)$ is defined by \begin{equation} \label{euler10} E^{(\alpha)}_{i,j}(x)= \left\{\begin{array}{l} \binom{i}{j} E^{(\alpha)}_{i-j}(x), \quad i\geq j,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation} The matrices $\mathrsfs{E}(x):= \mathrsfs{E}^{(1)}(x)$ and $\mathrsfs{E}:= \mathrsfs{E}(0)$ are called the Euler polynomial matrix and the Euler matrix, respectively. In the particular case $x=\frac{1}{2}$, we call $\mathbb{E}:=\mathrsfs{E}\left(\frac{1}{2}\right)$ the specialized Euler matrix. \end{definition}
It is clear that \eqref{euler11} yields the following matrix identity: \begin{equation} \label{geneuler5} \mathrsfs{E}^{(\alpha)}(x+1)+ \mathrsfs{E}^{(\alpha)}(x)= 2\mathrsfs{E}^{(\alpha-1)}(x). \end{equation}
Since $\mathrsfs{E}^{(0)}(x)=P[x]$, replacing $\alpha$ by $1$ in \eqref{geneuler5} we have \begin{equation} \label{euler20} \mathrsfs{E}(x+1)+ \mathrsfs{E}(x)= 2P[x]. \end{equation}
Then, putting $x=0$ in \eqref{euler20} and taking into account \eqref{pascal2}, we get $$\mathrsfs{E}(1)+ \mathrsfs{E}= 2I_{n+1}.$$ Analogously, putting $x=-1$ in \eqref{euler20}, $$\mathrsfs{E}+ \mathrsfs{E}(-1)= 2P[-1].$$
From \eqref{euler4} it follows that the entries of the specialized Euler matrix $\mathbb{E}$ are given by \begin{equation} \label{euler12} e_{i,j}= \left\{\begin{array}{l} \binom{i}{j} 2^{j-i}\varepsilon_{i-j}, \quad i\geq j,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation}
From \eqref{euler5} it follows that the entries of the Euler matrix $\mathrsfs{E}$ are given by \begin{equation} \label{euler18} E_{i,j}= \left\{\begin{array}{l} \binom{i}{j} E_{i-j}, \quad i>j \mbox{ and } i-j \mbox{ odd},\\ \\ 1, \quad i=j,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation}
The next result is an immediate consequence of Definition \ref{def3} and the addition formula \eqref{euler6}.
\begin{teo} \label{teogeneuler1} The generalized Euler matrix $\mathrsfs{E}^{(\alpha)}(x)$ satisfies the following product formula. \begin{equation} \label{geneuler1} \mathrsfs{E}^{(\alpha+\beta)}(x+y)= \mathrsfs{E}^{(\alpha)}(x)\,\mathrsfs{E}^{(\beta)}(y)= \mathrsfs{E}^{(\beta)}(x)\,\mathrsfs{E}^{(\alpha)}(y)= \mathrsfs{E}^{(\alpha)}(y)\, \mathrsfs{E}^{(\beta)}(x). \end{equation} \end{teo}
\begin{proof} We proceed as in the proof of \cite[Theorem 2.1]{ZW2006}, making the corresponding modifications. Let $A_{i,j}^{(\alpha,\beta)}(x,y)$ be the $(i,j)$-th entry of the matrix product $ \mathrsfs{E}^{(\alpha)}(x)\,\mathrsfs{E}^{(\beta)}(y)$, then by the addition formula \eqref{euler6} we have $$\begin{aligned} A_{i,j}^{(\alpha,\beta)}(x,y)=&\sum_{k=0}^{n}\binom{i}{k}E_{i-k}^{(\alpha)}(x)\binom{k}{j}E_{k-j}^{(\beta)}(y)\\ =&\sum_{k=j}^{i}\binom{i}{k}E_{i-k}^{(\alpha)}(x)\binom{k}{j}E_{k-j}^{(\beta)}(y)\\ =& \sum_{k=j}^{i}\binom{i}{j}\binom{i-j}{i-k}E_{i-k}^{(\alpha)}(x)E_{k-j}^{(\beta)}(y)\\ =&\binom{i}{j}\sum_{k=0}^{i-j}\binom{i-j}{k}E_{i-j-k}^{(\alpha)}(x)E_{k}^{(\beta)}(y)\\ =& \binom{i}{j}E_{i-j}^{(\alpha)}(x+y), \end{aligned}$$ which implies the first equality of \eqref{geneuler1}. The second and third equalities of \eqref{geneuler1} can be derived in a similar way. \end{proof}
\begin{coro} \label{corgeneuler1} Let $(x_{1},\ldots,x_{k})\in \mathbb{R}^{k}$ and let $\alpha_{1},\ldots,\alpha_{k}$ be real or complex parameters. Then the Euler matrices $\mathrsfs{E}^{(\alpha_{j})}(x_{j})$, $j=1,\ldots, k$, satisfy the following product formula. \begin{equation} \label{geneuler2} \mathrsfs{E}^{(\alpha_{1}+\alpha_{2}+\cdots+\alpha_{k})}(x_{1}+x_{2}+\cdots+x_{k})= \mathrsfs{E}^{(\alpha_{1})}(x_{1})\,\mathrsfs{E}^{(\alpha_{2})}(x_{2})\,\cdots\, \mathrsfs{E}^{(\alpha_{k})}(x_{k}). \end{equation} \end{coro}
\begin{proof} The application of induction on $k$ gives the desired result. \end{proof}
If we take $x=x_{1}=x_{2}=\cdots=x_{k}$ and $\alpha=\alpha_{1}=\alpha_{2}=\cdots=\alpha_{k}$, then we obtain the following simple formula for the powers of the generalized Euler matrix, and consequently, for the powers of the Euler polynomial and Euler matrices.
\begin{coro} \label{corgeneuler2} The generalized Euler matrix $\mathrsfs{E}^{(\alpha)}(x)$ satisfies the following identity. \begin{equation} \label{geneuler3} \left(\mathrsfs{E}^{(\alpha)}(x)\right)^{k}= \mathrsfs{E}^{(k\alpha)}(kx). \end{equation} In particular, \begin{equation} \label{geneuler4} \begin{aligned} \left(\mathrsfs{E}(x)\right)^{k}=& \mathrsfs{E}^{(k)}(kx),\\ \mathrsfs{E}^{k}=& \mathrsfs{E}^{(k)}. \end{aligned} \end{equation} \end{coro}
\begin{remark} Note that Theorem \ref{teogeneuler1} and Corollaries \ref{corgeneuler1} and \ref{corgeneuler2} are, respectively, the analogues of Theorem 2.1 and Corollaries 2.2 and 2.3 of \cite{ZW2006} in the setting of Euler matrices. \end{remark}
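Identity \eqref{geneuler3} can be tested independently of the product formula. The sketch below (illustrative Python, exact arithmetic; not part of the paper) computes $E^{(a)}_m(x)$ for integer $a\geq 1$ from the recurrence obtained by multiplying \eqref{euler1} through by $(e^{z}+1)^{a}=\sum_{s}\binom{a}{s}e^{sz}$, and then checks $\left(\mathrsfs{E}(x)\right)^{2}=\mathrsfs{E}^{(2)}(2x)$ for a sample rational $x$:

```python
from fractions import Fraction
from math import comb

def euler_polys(n, order=1):
    """Coefficient lists of E_m^{(order)}(x), m = 0..n, integer order >= 1, from
    sum_{k=0}^{m} C(m,k) w_{m-k} E_k^{(order)}(x) = 2^order x^m,
    where w_d = sum_s C(order,s) s^d (so w_0 = 2^order, with 0^0 = 1)."""
    a = order
    w = [sum(comb(a, s) * s ** d for s in range(a + 1)) for d in range(n + 1)]
    polys = []
    for m in range(n + 1):
        p = [Fraction(0)] * (m + 1)
        p[m] = Fraction(2 ** a)
        for k in range(m):
            for i, c in enumerate(polys[k]):
                p[i] -= comb(m, k) * w[m - k] * c
        polys.append([c / w[0] for c in p])
    return polys

def euler_matrix(x, n, order=1):
    polys = euler_polys(n, order)
    ev = lambda p: sum(c * x ** i for i, c in enumerate(p))
    return [[comb(i, j) * ev(polys[i - j]) if i >= j else Fraction(0)
             for j in range(n + 1)] for i in range(n + 1)]

def matmul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

n, x = 4, Fraction(1, 3)
lhs = matmul(euler_matrix(x, n), euler_matrix(x, n))   # (E(x))^2
rhs = euler_matrix(2 * x, n, order=2)                  # E^{(2)}(2x)
```

Since $\mathrsfs{E}^{(2)}(2x)$ is computed here from its own recurrence, the equality `lhs == rhs` is a genuine (if finite) consistency check of \eqref{geneuler3} for $k=2$.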
Let $\mathrsfs{D}$ be the $(n+1)\times(n+1)$ matrix whose entries are defined by \begin{equation} \label{euler13} d_{i,j}=\left\{\begin{array}{l} (1+(-1)^{i-j})\binom{i}{j} 2^{j-i-1}, \quad i\geq j,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation}
\begin{teo} \label{teogeneuler2} The inverse matrix of the specialized Euler matrix $\mathbb{E}$ is given by $$\mathbb{E}^{-1}=\mathrsfs{D}.$$ Furthermore, $$\left[\mathrsfs{E}^{(k)}\left(\frac{k}{2}\right)\right]^{-1}= \mathrsfs{D}^{k}.$$ \end{teo}
\begin{proof} Taking into account \eqref{euler4} and \eqref{euler12}, it is possible to deduce $$ \sum_{k=0}^{n}\frac{(1+(-1)^{k})}{2}\binom{n}{k} 2^{n-k}E_{n-k}\left(\frac{1}{2}\right)=\sum_{k=0}^{n}\frac{(1+(-1)^{k})}{2}\binom{n}{k}\varepsilon_{n-k}= \delta_{n,0},$$ where $\delta_{n,0}$ is the Kronecker delta (cf., e.g., \cite[pp. 107-109]{Rio68}). So, the $(i,j)$-th entry of the matrix product $\mathrsfs{D}\mathbb{E}$ may be written as $$ \sum_{k=j}^{i} \binom{i}{k}\frac{(1+(-1)^{i-k})}{2}2^{k-i}\binom{k}{j}E_{k-j}\left(\frac{1}{2}\right) $$ $$\begin{aligned} =&\binom{i}{j}2^{j-i}\sum_{k=j}^{i}\binom{i-j}{k-j}\frac{(1+(-1)^{i-k})}{2}2^{k-j}E_{k-j}\left(\frac{1}{2}\right)\\ =&\binom{i}{j}2^{j-i}\sum_{k=0}^{i-j}\binom{i-j}{k} \frac{(1+(-1)^{i-j-k})}{2}2^{k}E_{k}\left(\frac{1}{2}\right)\\ =&\binom{i}{j}2^{j-i}\delta_{i-j,0}, \end{aligned}$$ and consequently, $\mathrsfs{D}\mathbb{E}=I_{n+1}$. Similar arguments allow us to show that $\mathbb{E}\mathrsfs{D}=I_{n+1}$, and hence $\mathbb{E}^{-1}=\mathrsfs{D}$.
Finally, from the identity $\mathbb{E}^{-1}=\mathrsfs{D}$ and \eqref{geneuler4} we see that $$\left[\mathrsfs{E}^{(k)}\left(\frac{k}{2}\right)\right]^{-1}= \left(\mathbb{E}^{k}\right)^{-1}=\left(\mathbb{E}^{-1}\right)^{k}=\mathrsfs{D}^{k}.$$ This last chain of equalities finishes the proof. \end{proof}
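A quick numerical sanity check of Theorem \ref{teogeneuler2} (illustrative sketch, not part of the paper; it hard-codes the known values $\varepsilon_{0},\ldots,\varepsilon_{6}=1,0,-1,0,5,0,-61$):

```python
from fractions import Fraction
from math import comb

EPS = [1, 0, -1, 0, 5, 0, -61]     # classical Euler numbers eps_0..eps_6
n = 6

# specialized Euler matrix: e_{ij} = C(i,j) 2^{j-i} eps_{i-j}
E_spec = [[comb(i, j) * Fraction(EPS[i - j], 2 ** (i - j)) if i >= j else Fraction(0)
           for j in range(n + 1)] for i in range(n + 1)]
# claimed inverse: d_{ij} = (1 + (-1)^{i-j}) C(i,j) 2^{j-i-1}
D = [[(1 + (-1) ** (i - j)) * comb(i, j) * Fraction(1, 2 ** (i - j + 1))
      if i >= j else Fraction(0)
      for j in range(n + 1)] for i in range(n + 1)]

def matmul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

I = [[Fraction(int(i == j)) for j in range(n + 1)] for i in range(n + 1)]
```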
It is worthwhile to mention that the calculation of $\mathbb{E}^{-1}$ strongly depends on the use of inverse relations derived from exponential generating functions (cf. \cite[Chap. 3, Sec. 3.4]{Rio68}). This tool can be applied to determine $\mathbb{E}^{-1}$, but it does not work for determining $\mathrsfs{E}^{-1}$. This fact and \eqref{euler5} suggest that the methodology proposed in \cite{ZW2006} does not suffice to find an explicit formula for $\mathrsfs{E}^{-1}$.
The next result establishes the relation between the generalized Euler matrix and the generalized Pascal matrix of first kind.
\begin{teo} \label{teogeneuler3} The generalized Euler matrix $\mathrsfs{E}^{(\alpha)}(x)$ satisfies the following relation. \begin{equation} \label{euler141} \mathrsfs{E}^{(\alpha)}(x+y)= \mathrsfs{E}^{(\alpha)}(x)P[y]= P[x]\mathrsfs{E}^{(\alpha)}(y)= \mathrsfs{E}^{(\alpha)}(y)P[x]. \end{equation} In particular, \begin{equation} \label{euler14} \mathrsfs{E}(x+y)=P[x]\mathrsfs{E}(y)=P[y]\mathrsfs{E}(x), \end{equation} \begin{equation} \label{euler17} \mathrsfs{E}(x)=P[x]\mathrsfs{E}, \end{equation} \begin{equation} \label{euler15} \mathrsfs{E}\left(x+\frac{1}{2}\right)=P[x]\mathbb{E}, \end{equation} and \begin{equation} \label{euler19} \mathrsfs{E}=P\left[-\frac{1}{2}\right]\mathbb{E}. \end{equation} \end{teo}
\begin{proof} The substitution $\beta=0$ into \eqref{geneuler1} yields $$\mathrsfs{E}^{(\alpha)}(x+y)= \mathrsfs{E}^{(\alpha)}(x)\,\mathrsfs{E}^{(0)}(y)= \mathrsfs{E}^{(0)}(x)\,\mathrsfs{E}^{(\alpha)}(y)= \mathrsfs{E}^{(\alpha)}(y)\, \mathrsfs{E}^{(0)}(x).$$ Since $\mathrsfs{E}^{(0)}(x)=P[x]$, we obtain $$\mathrsfs{E}^{(\alpha)}(x+y)=P[x]\mathrsfs{E}^{(\alpha)}(y).$$ A similar argument allows us to show that $\mathrsfs{E}^{(\alpha)}(x+y)=\mathrsfs{E}^{(\alpha)}(x)P[y]$ and \\$\mathrsfs{E}^{(\alpha)}(x+y)= \mathrsfs{E}^{(\alpha)}(y)P[x]$.
Next, the substitution $\alpha=1$ into \eqref{euler141} yields \eqref{euler14}. From the substitutions $y=0$ and $y=\frac{1}{2}$ into \eqref{euler14}, we obtain the relations \eqref{euler17} and \eqref{euler15}, respectively.
Finally, the substitution $x=-\frac{1}{2}$ into \eqref{euler15} completes the proof. \end{proof}
\begin{remark} Note that the relation \eqref{euler14} is the analogue of \cite[Eq. (13)]{ZW2006} in the context of Euler polynomial matrices, and the counterpart of \eqref{euler17} is \cite[Eq. (14)]{ZW2006}. However, the relation \eqref{euler15} is slightly different from \cite[Eq. (14)]{ZW2006}, since it involves an Euler polynomial matrix with ``shifted argument'' and the specialized Euler matrix. More precisely, the relation \cite[Eq. (14)]{ZW2006} reads as $$\mathrsfs{B}(x)=P[x]\mathrsfs{B},$$ so it expresses the Bernoulli polynomial matrix $\mathrsfs{B}(x)$ as the product of the generalized Pascal matrix of the first kind $P[x]$ and the Bernoulli matrix $\mathrsfs{B}$. In contrast, the left-hand side of \eqref{euler15} is an Euler polynomial matrix with ``shifted argument'', and the matrix product on the right-hand side of \eqref{euler15} contains the specialized Euler matrix $\mathbb{E}$. \end{remark}
The following example illustrates Theorem \ref{teogeneuler3}.
\begin{eje} Let us consider $n=3$. It follows from Definition \ref{def3} that $$\mathbb{E}=\begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ -\frac{1}{4} & 0 & 1 & 0\\ 0 & -\frac{3}{4} & 0 & 1 \end{bmatrix} \quad \mbox{ and }\quad \mathrsfs{E}\left(x+\frac{1}{2}\right)= \begin{bmatrix} 1 & 0 & 0 & 0\\ x & 1 & 0 & 0\\ x^2-\frac{1}{4} & 2x & 1 & 0\\ x^3-\frac{3}{4}x & 3x^2-\frac{3}{4} & 3x & 1 \end{bmatrix}.$$
On the other hand, from \eqref{euler15} and a simple computation we have \begin{eqnarray*} \mathrsfs{E}\left(x+\frac{1}{2}\right)&=& \underbrace{\begin{bmatrix} 1 & 0 & 0 & 0\\ x & 1 & 0 & 0\\ x^2 & 2x & 1 & 0\\ x^3 & 3x^2 & 3x & 1 \end{bmatrix}}_{P[x]}\underbrace{\begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ -\frac{1}{4} & 0 & 1 & 0\\ 0 & -\frac{3}{4} & 0 & 1 \end{bmatrix}}_{\mathbb{E}} =\begin{bmatrix} 1 & 0 & 0 & 0\\ x & 1 & 0 & 0\\ x^2-\frac{1}{4} & 2x & 1 & 0\\ x^3-\frac{3}{4}x & 3x^2-\frac{3}{4} & 3x & 1 \end{bmatrix}. \end{eqnarray*} \end{eje}
The next theorem follows by a simple computation.
\begin{teo} \label{teogeneuler4} The inverse of the Euler polynomial matrix $\mathrsfs{E}\left(x+\frac{1}{2}\right)$ can be expressed as \begin{equation} \label{euler16} \left[\mathrsfs{E}\left(x+\frac{1}{2}\right)\right]^{-1}= \mathbb{E}^{-1}P[-x]=\mathrsfs{D} P[-x]. \end{equation} In particular, \begin{equation} \label{euler21} \mathrsfs{E}^{-1}= \mathrsfs{D} P\left[\frac{1}{2}\right]. \end{equation} \end{teo}
\begin{proof} Using \eqref{pascal3}, \eqref{euler15} and Theorem \ref{teogeneuler2} the relation \eqref{euler16} is deduced. The substitution $x=-\frac{1}{2}$ into \eqref{euler16} yields \eqref{euler21}. \end{proof}
\begin{eje} Let us consider $n=3$. From Definition \ref{def3} and a standard computation we obtain \begin{eqnarray*} \left[\mathrsfs{E}\left(x+\frac{1}{2}\right)\right]^{-1}&=&\begin{bmatrix} 1 & 0 & 0 & 0\\ x & 1 & 0 & 0\\ x^2-\frac{1}{4} & 2x & 1 & 0\\ x^3-\frac{3}{4}x & 3x^2-\frac{3}{4} & 3x & 1 \end{bmatrix}^{-1}=\begin{bmatrix} 1& 0& 0& 0\\ -x& 1& 0& 0\\ x^{2}+\frac{1}{4}& -2x& 1& 0\\ -x^{3}-\frac{3}{4}x& 3x^{2}+\frac{3}{4}& -3x& 1 \end{bmatrix}. \end{eqnarray*}
On the other hand, from \eqref{euler16} we have \begin{eqnarray*} \left[\mathrsfs{E}\left(x+\frac{1}{2}\right)\right]^{-1}&=& \underbrace{\begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ \frac{1}{4} &0 & 1 & 0\\ 0 & \frac{3}{4} & 0 & 1 \end{bmatrix}}_{\mathrsfs{D}}\underbrace{\begin{bmatrix} 1 & 0 & 0 & 0\\ -x & 1 & 0 & 0\\ x^2 & -2x & 1 & 0\\ -x^3 & 3x^2 & -3x & 1 \end{bmatrix}}_{P[-x]}= \begin{bmatrix} 1& 0& 0& 0\\ -x& 1& 0& 0\\ x^{2}+\frac{1}{4}& -2x& 1& 0\\ -x^{3}-\frac{3}{4}x& 3x^{2}+\frac{3}{4}& -3x& 1 \end{bmatrix}. \end{eqnarray*}
Hence, when $x=-\frac{1}{2}$, we get $$\mathrsfs{E}^{-1}= \underbrace{\begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ \frac{1}{4} &0 & 1 & 0\\ 0 & \frac{3}{4} & 0 & 1 \end{bmatrix}}_{\mathrsfs{D}}\underbrace{\begin{bmatrix} 1 & 0 & 0 & 0\\ \frac{1}{2} & 1 & 0 & 0\\ \frac{1}{4} & 1 & 1 & 0\\ \frac{1}{8} & \frac{3}{4}& \frac{3}{2} & 1 \end{bmatrix}}_{P\left[\frac{1}{2}\right]}= \begin{bmatrix} 1& 0& 0& 0\\ \frac{1}{2}& 1& 0& 0\\ \frac{1}{2}& 1& 1& 0\\ \frac{1}{2}& \frac{3}{2}&\frac{3}{2}& 1 \end{bmatrix}.$$ \end{eje}
At this point, the recent work \cite{IRS} deserves a separate mention, since it gives an explicit formula for the inverse of the $q$-Pascal matrix plus one, in terms of the $q$-analogue of the Euler matrix $\mathrsfs{E}$.
As a consequence of the relations \eqref{pascal6}, \eqref{lucas2}, and Theorems \ref{teogeneuler3} and \ref{teogeneuler4}, we obtain the following corollaries.
\begin{coro} \label{corgeneuler3} The Euler polynomial matrix $\mathrsfs{E}\left(x+\frac{1}{2}\right)$ and its inverse can be factorized by summation matrices as follows. $$\begin{aligned} \mathrsfs{E}\left(x+\frac{1}{2}\right)=& G_{n}[x]G_{n-1}[x]\cdots G_{1}[x] \mathbb{E},\\ \\ \left[\mathrsfs{E}\left(x+\frac{1}{2}\right)\right]^{-1}=& \mathrsfs{D} G_{n}[-x]G_{n-1}[-x]\cdots G_{1}[-x]. \end{aligned}$$ In particular, $$\begin{aligned} \mathrsfs{E}=& G_{n}\left[-\frac{1}{2}\right]G_{n-1}\left[-\frac{1}{2}\right]\cdots G_{1}\left[-\frac{1}{2}\right] \mathbb{E},\\ \\ \mathrsfs{E}^{-1}=&\mathrsfs{D} G_{n}\left[\frac{1}{2}\right]G_{n-1}\left[\frac{1}{2}\right]\cdots G_{1}\left[\frac{1}{2}\right]. \end{aligned}$$ \end{coro}
\begin{coro} \label{corgeneuler4} For $x$ any nonzero real number, the Euler polynomial matrix $\mathrsfs{E}\left(x+\frac{1}{2}\right)$ and its inverse can be factorized, respectively, in terms of the Lucas matrix $\mathrsfs{L}$ and its inverse as follows. $$\begin{aligned} \mathrsfs{E}\left(x+\frac{1}{2}\right)=& \mathrsfs{L} \mathrsfs{G}[x]\mathbb{E}= \mathrsfs{H}[x]\mathrsfs{L} \mathbb{E} ,\\ \\ \left[\mathrsfs{E}\left(x+\frac{1}{2}\right)\right]^{-1}=& \mathrsfs{D} (\mathrsfs{G}[x])^{-1} \mathrsfs{L}^{-1}= \mathrsfs{D}\mathrsfs{L}^{-1} (\mathrsfs{H}[x])^{-1} . \end{aligned}$$ In particular, $$\begin{aligned} \mathrsfs{E}=& \mathrsfs{L} \mathrsfs{G}\left[-\frac{1}{2}\right]\mathbb{E}=\mathrsfs{H}\left[-\frac{1}{2}\right]\mathrsfs{L} \mathbb{E},\\ \\ \mathrsfs{E}^{-1}=&\mathrsfs{D} \left(\mathrsfs{G}\left[-\frac{1}{2}\right]\right)^{-1} \mathrsfs{L}^{-1}= \mathrsfs{D} \mathrsfs{L}^{-1} \left(\mathrsfs{H}\left[-\frac{1}{2}\right]\right)^{-1}. \end{aligned}$$ \end{coro}
We end this section with some further identities, which can be easily deduced from the results above; we omit the details of their proofs. $$\begin{aligned}
D_{x}\mathrsfs{E}(x+y)=&\mathfrak{L} P[x]\mathrsfs{E}(y),\\
&\\ D_{x}\mathrsfs{E}(x)=&\mathfrak{L} P[x]\mathrsfs{E},\\
&\\
D_{x}\mathrsfs{E}\left(x+\frac{1}{2}\right)=&\mathfrak{L} P[x]\mathbb{E},\\
&\\
D_{x}\left[\mathrsfs{E}\left(x+\frac{1}{2}\right)\right]^{-1}=& -\mathrsfs{D}\mathfrak{L} P[-x]. \end{aligned}$$
\section{Generalized Euler polynomial matrices via Fibonacci and Lucas matrices} \label{sec:3}
For $0\leq i,j\leq n$ and $\alpha$ a real or complex number, let $\mathrsfs{M}^{(\alpha)}(x)$ be the $(n+1)\times(n+1)$ matrix whose entries are given by (cf. \cite[Eq. (18)]{ZW2006}):
\begin{equation} \label{euler28a} \tilde{m}^{(\alpha)}_{i,j}(x)= \binom{i}{j}E^{(\alpha)}_{i-j}(x)-\binom{i-1}{j}E^{(\alpha)}_{i-j-1}(x)-\binom{i-2}{j}E^{(\alpha)}_{i-j-2}(x). \end{equation} We denote $\mathrsfs{M}(x)=\mathrsfs{M}^{(1)}(x)$ and $\mathrsfs{M}=\mathrsfs{M}(0)$.
Similarly, let $\mathrsfs{N}^{(\alpha)}(x)$ be the $(n+1)\times(n+1)$ matrix whose entries are given by (cf. \cite[Eq. (32)]{ZW2006}): \begin{equation} \label{euler28b} \tilde{n}^{(\alpha)}_{i,j}(x)= \binom{i}{j}E^{(\alpha)}_{i-j}(x)-\binom{i}{j+1}E^{(\alpha)}_{i-j-1}(x)-\binom{i}{j+2}E^{(\alpha)}_{i-j-2}(x). \end{equation} We denote $\mathrsfs{N}(x)=\mathrsfs{N}^{(1)}(x)$ and $\mathrsfs{N}=\mathrsfs{N}(0)$.
From the definitions of $\mathrsfs{M}^{(\alpha)}(x)$ and $\mathrsfs{N}^{(\alpha)}(x)$, we see that $$\begin{array}{l} \tilde{m}^{(\alpha)}_{0,0}(x)= \tilde{m}^{(\alpha)}_{1,1}(x)= \tilde{n}^{(\alpha)}_{0,0}(x)= \tilde{n}^{(\alpha)}_{1,1}(x)= E^{(\alpha)}_{0}(x)=1,\\ \\ \tilde{m}^{(\alpha)}_{0,j}(x)=\tilde{n}^{(\alpha)}_{0,j}(x)=0, \quad j\geq 1,\\ \\ \tilde{m}^{(\alpha)}_{1,0}(x)= \tilde{n}^{(\alpha)}_{1,0}(x)= E^{(\alpha)}_{1}(x)- E^{(\alpha)}_{0}(x)=x-\frac{\alpha}{2}-1,\\ \\ \tilde{m}^{(\alpha)}_{1,j}(x)=\tilde{n}^{(\alpha)}_{1,j}(x)=0, \quad j\geq 2,\\ \\ \tilde{m}^{(\alpha)}_{i,0}(x)= \tilde{n}^{(\alpha)}_{i,0}(x) = E^{(\alpha)}_{i}(x)-E^{(\alpha)}_{i-1}(x)-E^{(\alpha)}_{i-2}(x), \quad i\geq 2. \end{array}$$
For $0\leq i,j\leq n$ and $\alpha$ a real or complex number, let $\mathrsfs{L}_{1}^{(\alpha)}(x)$ be the $(n+1)\times(n+1)$ matrix whose entries are given by \begin{equation} \label{euler28c} \hat{l}^{(\alpha,1)}_{i,j}(x)= \binom{i}{j}E^{(\alpha)}_{i-j}(x)-3\binom{i-1}{j}E^{(\alpha)}_{i-j-1}(x)+ 5\sum_{k=j}^{i-2}(-1)^{i-k}2^{i-k-2}\binom{k}{j}E^{(\alpha)}_{k-j}(x). \end{equation} We denote $\mathrsfs{L}_{1}(x)=\mathrsfs{L}_{1}^{(1)}(x)$ and $\mathrsfs{L}_{1}=\mathrsfs{L}_{1}(0)$.
Similarly, let $\mathrsfs{L}_{2}^{(\alpha)}(x)$ be the $(n+1)\times(n+1)$ matrix whose entries are given by \begin{equation} \label{euler28d} \hat{l}^{(\alpha,2)}_{i,j}(x)= \binom{i}{j}E^{(\alpha)}_{i-j}(x)-3\binom{i}{j+1}E^{(\alpha)}_{i-j-1}(x)+ 5\sum_{k=j+1}^{i}(-1)^{k-j}2^{k-j-2}\binom{i}{k}E^{(\alpha)}_{i-k}(x). \end{equation} We denote $\mathrsfs{L}_{2}(x)=\mathrsfs{L}_{2}^{(1)}(x)$ and $\mathrsfs{L}_{2}=\mathrsfs{L}_{2}(0)$.
From the definitions of $\mathrsfs{L}_{1}^{(\alpha)}(x)$ and $\mathrsfs{L}_{2}^{(\alpha)}(x)$, we see that $$\begin{array}{l} \hat{l}^{(\alpha,1)}_{i,i}(x)= \hat{l}^{(\alpha,2)}_{i,i}(x)=1, \quad i\geq 0,\\ \\ \hat{l}^{(\alpha,1)}_{0,j}(x)= \hat{l}^{(\alpha,2)}_{0,j}(x)=0, \quad j\geq 1,\\ \\ \hat{l}^{(\alpha,1)}_{1,0}(x)= \hat{l}^{(\alpha,2)}_{1,0}(x)= E^{(\alpha)}_{1}(x)- 3E^{(\alpha)}_{0}(x)=x-\frac{\alpha}{2}-3,\\ \\ \hat{l}^{(\alpha,1)}_{1,j}(x)= \hat{l}^{(\alpha,2)}_{1,j}(x)=0, \quad j\geq 2,\\ \\ \hat{l}^{(\alpha,1)}_{i,0}(x)= E^{(\alpha)}_{i}(x)-3E^{(\alpha)}_{i-1}(x)+ 5\sum_{k=0}^{i-2}(-1)^{i-k}2^{i-k-2}E^{(\alpha)}_{k}(x), \quad i\geq 2,\\ \\ \hat{l}^{(\alpha,2)}_{i,0}(x)= E^{(\alpha)}_{i}(x)-3iE^{(\alpha)}_{i-1}(x)+ 5\sum_{k=1}^{i}(-1)^{k}2^{k-2}\binom{i}{k} E^{(\alpha)}_{i-k}(x), \quad i\geq 2,\\ \\ \hat{l}^{(\alpha,1)}_{i,1}(x)= iE^{(\alpha)}_{i-1}(x)-3(i-1)E^{(\alpha)}_{i-2}(x)+ 5\sum_{k=1}^{i-2}(-1)^{i-k}2^{i-k-2}kE^{(\alpha)}_{k-1}(x), \quad i\geq 3. \end{array}$$
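The entry formula \eqref{euler28c} can be checked against the matrix identity it encodes. The sketch below (illustrative Python, not part of the paper; it reads the middle binomial in \eqref{euler28c} as $\binom{i-1}{j}$, which is what the product $\mathrsfs{L}^{-1}\mathrsfs{E}^{(\alpha)}(x)$ produces) verifies $\mathrsfs{L}\,\mathrsfs{L}_{1}=\mathrsfs{E}$ for $\alpha=1$, $x=0$, $n=5$:

```python
from fractions import Fraction
from math import comb

# Euler numbers E_m = E_m(0) for m = 0..5
E_NUM = [Fraction(1), Fraction(-1, 2), Fraction(0), Fraction(1, 4),
         Fraction(0), Fraction(-1, 2)]
n = 5

def euler_entry(i, j):                 # entries of the Euler matrix
    return comb(i, j) * E_NUM[i - j] if i >= j else Fraction(0)

def l1_entry(i, j):                    # hat l^{(1,1)}_{ij}(0), middle binomial C(i-1,j)
    if i < j:
        return Fraction(0)
    s = comb(i, j) * E_NUM[i - j]
    if i - j >= 1:
        s -= 3 * comb(i - 1, j) * E_NUM[i - j - 1]
    s += 5 * sum((-1) ** (i - k) * 2 ** (i - k - 2) * comb(k, j) * E_NUM[k - j]
                 for k in range(j, i - 1))
    return s

def lucas_matrix(m):
    L = [2, 1, 3]                      # L[0] = 2 is only a placeholder for indexing
    while len(L) < m + 2:
        L.append(L[-1] + L[-2])
    return [[L[i - j + 1] if i >= j else 0 for j in range(m + 1)]
            for i in range(m + 1)]

def matmul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

E_mat = [[euler_entry(i, j) for j in range(n + 1)] for i in range(n + 1)]
L1 = [[l1_entry(i, j) for j in range(n + 1)] for i in range(n + 1)]
```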
The following results show several factorizations of $\mathrsfs{E}^{(\alpha)}(x)$ in terms of Fibonacci and Lucas matrices, respectively.
\begin{teo} \label{teogeneuler5} The generalized Euler polynomial matrix $\mathrsfs{E}^{(\alpha)}(x)$ can be factorized in terms of the Fibonacci matrix $\mathrsfs{F}$ as follows. \begin{equation} \label{euler22} \mathrsfs{E}^{(\alpha)}(x)= \mathrsfs{F} \mathrsfs{M}^{(\alpha)}(x), \end{equation} or, \begin{equation} \label{euler22a} \mathrsfs{E}^{(\alpha)}(x)= \mathrsfs{N}^{(\alpha)}(x)\mathrsfs{F}. \end{equation} In particular, \begin{equation} \label{euler23a} \mathrsfs{F}\mathrsfs{M}(x)=\mathrsfs{E}(x)=\mathrsfs{N}(x)\mathrsfs{F}, \end{equation} \begin{equation} \label{euler23} \mathrsfs{F}\mathrsfs{M}=\mathrsfs{E}=\mathrsfs{N}\mathrsfs{F}, \end{equation} and \begin{equation} \label{euler24} \mathrsfs{F}\mathrsfs{M}\left(\frac{1}{2}\right)=\mathbb{E}=\mathrsfs{N}\left(\frac{1}{2}\right)\mathrsfs{F}. \end{equation} \end{teo}
\begin{proof} Since the relation \eqref{euler22} is equivalent to $\mathrsfs{F}^{-1}\mathrsfs{E}^{(\alpha)}(x)= \mathrsfs{M}^{(\alpha)}(x)$, one can follow the proof of \cite[Theorem 4.1]{ZW2006}, making the corresponding modifications, to obtain \eqref{euler22}. The relation \eqref{euler22a} can be obtained by a similar procedure. The relations \eqref{euler23a}, \eqref{euler23} and \eqref{euler24} are straightforward consequences of \eqref{euler22} and \eqref{euler22a}. \end{proof}
Also, the relations \eqref{euler22} and \eqref{euler22a} allow us to deduce the following identity:
$$\mathrsfs{M}^{(\alpha)}(x)=\mathrsfs{F}^{-1}\mathrsfs{N}^{(\alpha)}(x)\,\mathrsfs{F}.$$ As a consequence of Theorems \ref{teogeneuler4} and \ref{teogeneuler5}, we can derive simple factorizations for the inverses of the polynomial matrices $\mathrsfs{M}\left(x+\frac{1}{2}\right)$ and $\mathrsfs{N}\left(x+\frac{1}{2}\right)$:
\begin{coro} \label{teogeneuler6} The inverses of the polynomial matrices $\mathrsfs{M}\left(x+\frac{1}{2}\right)$ and $\mathrsfs{N}\left(x+\frac{1}{2}\right)$ can be factorized as follows. \begin{equation} \label{euler24a} \left[\mathrsfs{M}\left(x+\frac{1}{2}\right)\right]^{-1}=\mathrsfs{D} P[-x]\mathrsfs{F}, \end{equation} \begin{equation} \label{euler24b} \left[\mathrsfs{N}\left(x+\frac{1}{2}\right)\right]^{-1}=\mathrsfs{F} \mathrsfs{D} P[-x]. \end{equation} In particular, \begin{equation} \label{euler25}
\mathrsfs{M}^{-1}= \mathrsfs{D} P\left[\frac{1}{2}\right]\mathrsfs{F}, \quad \mbox{ and } \quad \mathrsfs{N}^{-1}= \mathrsfs{F}\mathrsfs{D} P\left[\frac{1}{2}\right], \end{equation} \begin{equation} \label{euler26} \left[\mathrsfs{M}\left(\frac{1}{2}\right)\right]^{-1}= \mathrsfs{D}\mathrsfs{F}, \quad \mbox{ and } \quad \left[ \mathrsfs{N}\left(\frac{1}{2}\right)\right]^{-1}= \mathrsfs{F}\mathrsfs{D}. \end{equation} \end{coro}
Reasoning analogous to that used in the proof of Theorem \ref{teogeneuler5} allows us to prove the results below.
\begin{teo} \label{teogeneuler8} The generalized Euler polynomial matrix $\mathrsfs{E}^{(\alpha)}(x)$ can be factorized in terms of the Lucas matrix $\mathrsfs{L}$ as follows. \begin{equation} \label{euler33} \mathrsfs{E}^{(\alpha)}(x)= \mathrsfs{L} \mathrsfs{L}_{1}^{(\alpha)}(x), \end{equation} or, \begin{equation} \label{euler34} \mathrsfs{E}^{(\alpha)}(x)= \mathrsfs{L}_{2}^{(\alpha)}(x)\mathrsfs{L}. \end{equation} In particular, \begin{equation} \label{euler34a} \mathrsfs{L} \mathrsfs{L}_{1}(x)=\mathrsfs{E}(x)=\mathrsfs{L}_{2}(x)\mathrsfs{L}, \end{equation} \begin{equation} \label{euler34b} \mathrsfs{L}\mathrsfs{L}_{1}=\mathrsfs{E}=\mathrsfs{L}_{2}\mathrsfs{L}, \end{equation} and \begin{equation} \label{euler34e} \mathrsfs{L}\mathrsfs{L}_{1}\left(\frac{1}{2}\right)=\mathbb{E}=\mathrsfs{L}_{2}\left(\frac{1}{2}\right)\mathrsfs{L}. \end{equation} \end{teo}
Also, the relations \eqref{euler33} and \eqref{euler34} allow us to deduce the following identity: $$\mathrsfs{L}_{1}^{(\alpha)}(x)=\mathrsfs{L}^{-1}\mathrsfs{L}_{2}^{(\alpha)}(x)\,\mathrsfs{L}.$$
\begin{coro} \label{teogeneuler9} The inverses of the polynomial matrices $\mathrsfs{L}_{1}\left(x+\frac{1}{2}\right)$ and $\mathrsfs{L}_{2}\left(x+\frac{1}{2}\right)$ can be factorized as follows. \begin{equation} \label{euler35} \left[\mathrsfs{L}_{1}\left(x+\frac{1}{2}\right)\right]^{-1}=\mathrsfs{D} P[-x]\mathrsfs{L}, \end{equation} \begin{equation} \label{euler35a} \left[\mathrsfs{L}_{2}\left(x+\frac{1}{2}\right)\right]^{-1}=\mathrsfs{L} \mathrsfs{D} P[-x]. \end{equation} In particular, \begin{equation} \label{euler35b}
\mathrsfs{L}_{1}^{-1}= \mathrsfs{D} P\left[\frac{1}{2}\right]\mathrsfs{L}, \quad \mbox{ and } \quad \mathrsfs{L}_{2}^{-1}= \mathrsfs{L}\mathrsfs{D} P\left[\frac{1}{2}\right], \end{equation} \begin{equation} \label{euler35c} \left[\mathrsfs{L}_{1}\left(\frac{1}{2}\right)\right]^{-1}= \mathrsfs{D}\mathrsfs{L}, \quad \mbox{ and } \quad \left[ \mathrsfs{L}_{2}\left(\frac{1}{2}\right)\right]^{-1}= \mathrsfs{L}\mathrsfs{D}. \end{equation} \end{coro}
\begin{remark} It is worthwhile to mention that if we consider $a\in \mathbb{C}$, $b\in \mathbb{C}\setminus\{0\}$ and $s=0,1$, then Theorems \ref{teogeneuler5} and \ref{teogeneuler8}, as well as their corollaries, have analogous forms for generalized Fibonacci matrices of type $s$, $\mathrsfs{F}^{(a,b,s)}$, and for generalized Fibonacci matrices $\mathrsfs{U}^{(a,b,0)}$ whose second-order recurrence sequence $U_{n}^{(a,b)}$ is subject to certain constraints. The reader may consult \cite{SNS2008} for the details of this assertion. \end{remark}
We finish this section with some new identities involving the Fibonacci numbers, the Lucas numbers and the generalized Euler polynomials and numbers.
\begin{teo} \label{teogeneuler7} For $0\leq r\leq n$ and $\alpha$ any real or complex number, we have \begin{eqnarray} \label{euler27} \binom{n}{r}E^{(\alpha)}_{n-r}(x)&=& F_{n-r+1}+\left[(r+1)x -\frac{(r+1)\alpha +2}{2}\right]F_{n-r}\\ \nonumber & & + \sum_{k=r+2}^{n}\binom{k}{r}\left\{E^{(\alpha)}_{k-r}(x)- \frac{k-r}{k}\left[E^{(\alpha)}_{k-r-1}(x)+ \frac{k-r-1}{k-1}E^{(\alpha)}_{k-r-2}(x)\right]\right\}F_{n-k+1}\\ \nonumber &=& F_{n-r+1}+\left[n\left(x-\frac{\alpha}{2}\right)-1\right]F_{n-r}\\ \nonumber & & + \sum_{k=0}^{n-2}\binom{n}{k}\left\{E^{(\alpha)}_{n-k}(x)- \frac{n-k}{k+1}\left[E^{(\alpha)}_{n-k-1}(x)+ \frac{n-k-1}{k+2}E^{(\alpha)}_{n-k-2}(x)\right]\right\}F_{k-r+1}.
\end{eqnarray}
\end{teo}
\begin{proof} We proceed as in the proof of \cite[Theorem 4.2]{ZW2006}, making the corresponding modifications. From \eqref{euler28a}, it is clear that $$\tilde{m}_{r,r}^{(\alpha)}(x)= 1, \quad \tilde{m}_{r+1,r}^{(\alpha)}(x)= (r+1)x -\frac{(r+1)\alpha +2}{2},$$ and, for $k\geq r+2$: $$\tilde{m}_{k,r}^{(\alpha)}(x)= \binom{k}{r}\left\{E^{(\alpha)}_{k-r}(x)- \frac{k-r}{k}\left[E^{(\alpha)}_{k-r-1}(x)+ \frac{k-r-1}{k-1}E^{(\alpha)}_{k-r-2}(x)\right]\right\}.$$
Next, it follows from \eqref{euler22} that \begin{eqnarray} \nonumber \binom{n}{r}E^{(\alpha)}_{n-r}(x)&=& E^{(\alpha)}_{n,r}(x)\\ \nonumber &=&\sum_{k=r}^{n}F_{n-k+1}\tilde{m}_{k,r}^{(\alpha)}(x)\\ \nonumber &=&F_{n-r+1}+ F_{n-r}\tilde{m}_{r+1,r}^{(\alpha)}(x) +\sum_{k=r+2}^{n}F_{n-k+1}\tilde{m}_{k,r}^{(\alpha)}(x)\\ \nonumber &=&F_{n-r+1}+\left[(r+1)x -\frac{(r+1)\alpha +2}{2}\right]F_{n-r}\\ \nonumber & & + \sum_{k=r+2}^{n}\binom{k}{r}\left\{E^{(\alpha)}_{k-r}(x)- \frac{k-r}{k}\left[E^{(\alpha)}_{k-r-1}(x)+ \frac{k-r-1}{k-1}E^{(\alpha)}_{k-r-2}(x)\right]\right\}F_{n-k+1}.
\end{eqnarray} This chain of equalities completes the first part of the proof. The second one is obtained in a similar way, taking into account the following identities: $$\tilde{n}_{n,n}^{(\alpha)}(x)= 1, \quad \tilde{n}_{n,n-1}^{(\alpha)}(x)= n\left(x-\frac{\alpha}{2}\right)-1,$$ and, for $0\leq k\leq n-2$: $$\tilde{n}_{n,k}^{(\alpha)}(x)= \binom{n}{k}\left\{E^{(\alpha)}_{n-k}(x)- \frac{n-k}{k+1}\left[E^{(\alpha)}_{n-k-1}(x)+ \frac{n-k-1}{k+2}E^{(\alpha)}_{n-k-2}(x)\right]\right\}.$$ \end{proof}
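When $\alpha=m$ is a positive integer, the first expansion in Theorem \ref{teogeneuler7} can be checked in exact rational arithmetic. The following Python sketch (ours, not part of the original text; the helper names are of our own choosing) generates $E^{(m)}_{n}(x)$ from the relation $(e^{z}+1)^{m}\sum_{n\geq0}E^{(m)}_{n}(x)\frac{z^{n}}{n!}=2^{m}e^{xz}$ and verifies the identity:

```python
from fractions import Fraction
from math import comb

def higher_euler(m, N, x):
    """E^{(m)}_n(x) for n = 0..N and integer order m >= 1, from the
    generating relation (e^z + 1)^m sum_n E^{(m)}_n(x) z^n/n! = 2^m e^{xz}."""
    # a[d] = d! [z^d] (e^z + 1)^m  (note that 0**0 == 1 in Python)
    a = [sum(comb(m, i) * i**d for i in range(m + 1)) for d in range(N + 1)]
    E = []
    for n in range(N + 1):
        s = sum((comb(n, k) * a[n - k] * E[k] for k in range(n)), Fraction(0))
        E.append(x**n - s / 2**m)  # since a[0] = 2^m
    return E

def fib(n):
    F = [0, 1]  # F_0 = 0, F_1 = F_2 = 1
    while len(F) <= n:
        F.append(F[-1] + F[-2])
    return F[n]

x, N = Fraction(3, 7), 7
for m in (1, 2, 3):
    E = higher_euler(m, N, x)
    for n in range(N + 1):
        for r in range(n + 1):
            lhs = comb(n, r) * E[n - r]
            rhs = fib(n - r + 1) + ((r + 1) * x - Fraction((r + 1) * m + 2, 2)) * fib(n - r)
            for k in range(r + 2, n + 1):
                brace = E[k - r] - Fraction(k - r, k) * (
                    E[k - r - 1] + Fraction(k - r - 1, k - 1) * E[k - r - 2])
                rhs += comb(k, r) * brace * fib(n - k + 1)
            assert lhs == rhs  # identity (euler27), first expansion
```

The second expansion can be tested in the same way, with the inner coefficients indexed as in the statement of the theorem.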
\begin{coro} \label{corgeneuler5} For $0\leq r\leq n$ and $\alpha$ any real number, we have \begin{multline*} \nonumber (-1)^{n}\binom{n}{r}E^{(\alpha)}_{n-r}(x)=(-1)^{r}F_{n-r+1}+(-1)^{r+1}\left[\frac{(r+1)(2x-\alpha)+2}{2}\right]F_{n-r}\\ \hspace{2.5cm}+ \sum_{k=r+2}^{n}(-1)^{k}\binom{k}{r}\left\{E^{(\alpha)}_{k-r}(x)+ \frac{k-r}{k}\left[E^{(\alpha)}_{k-r-1}(x)- \frac{k-r-1}{k-1}E^{(\alpha)}_{k-r-2}(x)\right]\right\}F_{n-k+1}\\ \\ \nonumber = (-1)^{r}F_{n-r+1}+(-1)^{r+1}\left[n\left(x-\frac{\alpha}{2}\right)-1\right]F_{n-r}\\
\hspace{2cm} + \sum_{k=0}^{n-2}(-1)^{n-k+r}\binom{n}{k}\left\{E^{(\alpha)}_{n-k}(x)+ \frac{n-k}{k+1}\left[E^{(\alpha)}_{n-k-1}(x)+ \frac{n-k-1}{k+2}E^{(\alpha)}_{n-k-2}(x)\right]\right\}F_{k-r+1}.
\end{multline*} \end{coro}
\begin{proof} Replacing $x$ by $\alpha-x$ in \eqref{euler27} and applying the formula $$E^{(\alpha)}_{n}(x)= (-1)^{n}E^{(\alpha)}_{n}(\alpha-x)$$ to the resulting identity, we obtain the first identity of Corollary \ref{corgeneuler5}. An analogous reasoning yields the second identity. \end{proof}
Arguments analogous to those used in the proofs of Theorem \ref{teogeneuler7} and Corollary \ref{corgeneuler5} allow us to prove the following results.
\begin{teo} \label{teogeneuler10} For any real or complex number $\alpha$, we have the following identities \begin{eqnarray} \nonumber \label{lucas3} E^{(\alpha)}_{n}(x)&=& L_{n+1}+\left(x-\frac{\alpha}{2}-3\right)L_{n}+ \sum_{k=2}^{n}\left(E^{(\alpha)}_{k}(x)-3E^{(\alpha)}_{k-1}(x) \right)L_{n-k+1}\\ & & + 5\sum_{k=2}^{n}\sum_{s=0}^{k-2}(-1)^{k-s}2^{k-s-2}L_{n-k+1}E^{(\alpha)}_{s}(x), \end{eqnarray} whenever $n\geq 2$. \begin{eqnarray} \label{lucas4} \nonumber nE^{(\alpha)}_{n-1}(x)&=& L_{n}+\left(2x-\alpha-3\right)L_{n-1}+ \sum_{k=3}^{n}\left(kE^{(\alpha)}_{k-1}(x)-3(k-1)E^{(\alpha)}_{k-2}(x) \right)L_{n-k+1}\\ & & + 5\sum_{k=3}^{n}\sum_{s=1}^{k-2}(-1)^{k-s}2^{k-s-2}sL_{n-k+1}E^{(\alpha)}_{s-1}(x), \end{eqnarray} whenever $n\geq3$. \end{teo}
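As with Theorem \ref{teogeneuler7}, the identities \eqref{lucas3} and \eqref{lucas4} can be tested computationally for integer orders $\alpha=m$. A minimal Python sketch (ours; the helper names are assumptions, not from the text):

```python
from fractions import Fraction
from math import comb

def higher_euler(m, N, x):
    # E^{(m)}_n(x) via (e^z+1)^m sum_n E^{(m)}_n(x) z^n/n! = 2^m e^{xz}
    a = [sum(comb(m, i) * i**d for i in range(m + 1)) for d in range(N + 1)]
    E = []
    for n in range(N + 1):
        s = sum((comb(n, k) * a[n - k] * E[k] for k in range(n)), Fraction(0))
        E.append(x**n - s / 2**m)
    return E

def lucas(n):
    L = [2, 1]  # L_0 = 2, L_1 = 1, L_2 = 3, L_3 = 4, ...
    while len(L) <= n:
        L.append(L[-1] + L[-2])
    return L[n]

x, N = Fraction(2, 5), 8
for m in (1, 2):
    E = higher_euler(m, N, x)
    for n in range(2, N + 1):  # identity (lucas3)
        rhs = lucas(n + 1) + (x - Fraction(m, 2) - 3) * lucas(n)
        rhs += sum((E[k] - 3 * E[k - 1]) * lucas(n - k + 1) for k in range(2, n + 1))
        rhs += 5 * sum((-1) ** (k - s) * 2 ** (k - s - 2) * lucas(n - k + 1) * E[s]
                       for k in range(2, n + 1) for s in range(k - 1))
        assert E[n] == rhs
    for n in range(3, N + 1):  # identity (lucas4)
        rhs = lucas(n) + (2 * x - m - 3) * lucas(n - 1)
        rhs += sum((k * E[k - 1] - 3 * (k - 1) * E[k - 2]) * lucas(n - k + 1)
                   for k in range(3, n + 1))
        rhs += 5 * sum((-1) ** (k - s) * 2 ** (k - s - 2) * s * lucas(n - k + 1) * E[s - 1]
                       for k in range(3, n + 1) for s in range(1, k - 1))
        assert n * E[n - 1] == rhs
```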
\begin{coro} The following identities hold. \begin{eqnarray} \nonumber \label{lucas5} (-1)^{n}E^{(\alpha)}_{n}(x)&=& L_{n+1}-\left(x-\frac{\alpha}{2}+3\right)L_{n}+ \sum_{k=2}^{n}(-1)^{k}\left(E^{(\alpha)}_{k}(x)+3E^{(\alpha)}_{k-1}(x) \right)L_{n-k+1}\\ & & + 5\sum_{k=2}^{n}\sum_{s=0}^{k-2}(-1)^{k-s}2^{k-s-2}L_{n-k+1}E^{(\alpha)}_{s}(x), \end{eqnarray} whenever $n\geq 2$. \begin{eqnarray} \label{lucas6} \nonumber (-1)^{n-1}nE^{(\alpha)}_{n-1}(x)&=& L_{n}+\left(\alpha-2x -3\right)L_{n-1}\\ \nonumber && + \sum_{k=3}^{n}(-1)^{k-1}\left(kE^{(\alpha)}_{k-1}(x)+3(k-1)E^{(\alpha)}_{k-2}(x) \right)L_{n-k+1}\\ & & + 5\sum_{k=3}^{n}\sum_{s=1}^{k-2}(-1)^{k-1}2^{k-s-2}sL_{n-k+1}E^{(\alpha)}_{s-1}(x), \end{eqnarray} whenever $n\geq3$. \end{coro}
By \eqref{lucas3}, \eqref{lucas4}, \eqref{lucas5} and \eqref{lucas6} we obtain the following interesting identities involving Lucas and Euler numbers.
\begin{itemize} \item For $n\geq 2$: $$ E_{n}-\left(L_{n+1}-\frac{7}{2}L_{n}\right)= \sum_{k=2}^{n}\left(E_{k}-3E_{k-1}+5\sum_{s=0}^{k-2}(-1)^{k-s}2^{k-s-2}E_{s}\right)L_{n-k+1}, $$ $$ (-1)^{n}E_{n}= L_{n+1}-\frac{5}{2}L_{n}+ \sum_{k=2}^{n}(-1)^{k}\left(E_{k}+3E_{k-1}\right)L_{n-k+1}+ 5\sum_{k=2}^{n}\sum_{s=0}^{k-2}(-1)^{k-s}2^{k-s-2}L_{n-k+1}E_{s}. $$ \item For $n\geq 3$: \begin{eqnarray*} nE_{n-1}-(L_{n}-4L_{n-1})&=& \sum_{k=3}^{n}\left(kE_{k-1}-3(k-1)E_{k-2}\right)L_{n-k+1}\\
& & + 5\sum_{k=3}^{n}\sum_{s=1}^{k-2}(-1)^{k-s}2^{k-s-2}sE_{s-1}L_{n-k+1}, \end{eqnarray*} $$(-1)^{n-1}nE_{n-1}-L_{n}+2L_{n-1} = \sum_{k=3}^{n}(-1)^{k-1}\left(kE_{k-1}+3(k-1)E_{k-2} \right)L_{n-k+1}$$ $$\qquad\qquad\hspace{4cm}\qquad + 5\sum_{k=3}^{n}\sum_{s=1}^{k-2}(-1)^{k-1}2^{k-s-2}sL_{n-k+1}E_{s-1}.$$ \end{itemize}
Other similar combinatorial identities may be obtained using the results of \cite{LKC2003}; we leave their formulation to the interested reader.
\section{Euler matrices and their relation with Stirling and Vandermonde matrices} \label{sec:5}
Let $s(n, k)$ and $S(n, k)$ be the Stirling numbers of the first and second kind, which are respectively defined by the generating functions \cite[Chapter 1, Section 1.6]{SCh2012}:
\begin{eqnarray} \label{seuler1} \sum_{k=0}^{n}s(n, k)z^{k}&=& z(z-1)\cdots(z-n+1), \\ \nonumber
(\log(1+z))^{k}&=&k!\sum_{n=k}^{\infty}s(n, k)\frac{z^{n}}{n!}, \quad |z|<1,\\ \nonumber z^{n}&=& \sum_{k=0}^{n}S(n, k)z(z-1)\cdots(z-k+1),\\ \nonumber (e^{z}-1)^{k} &=& k!\sum_{n=k}^{\infty}S(n, k)\frac{z^{n}}{n!}. \end{eqnarray}
In combinatorics, it is well known that the value $|s(n, k)|$ represents the number of permutations of $n$ elements with $k$ disjoint cycles, while the Stirling numbers of the second kind $S(n, k)$ give the number of partitions of $n$ objects into $k$ non-empty subsets. Another way to compute the latter numbers is by means of the formula (see \cite[Eq. (5.1)]{E2012} or \cite[p. 226]{Rio68}): \begin{equation*}
S(n, k)= \frac{1}{k!}\sum_{l=0}^{k}(-1)^{k-l}\binom{k}{l}l^{n}, \quad 1\leq k\leq n. \end{equation*}
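As a quick sanity check, this explicit formula can be compared against the standard recurrence $S(n,k)=kS(n-1,k)+S(n-1,k-1)$. The short Python sketch below (ours, with helper names of our own choosing) does so in exact integer arithmetic:

```python
from math import comb, factorial

def stirling2_table(N):
    # S(n,k) from the recurrence S(n,k) = k*S(n-1,k) + S(n-1,k-1)
    S = [[0] * (N + 1) for _ in range(N + 1)]
    S[0][0] = 1
    for n in range(1, N + 1):
        for k in range(1, n + 1):
            S[n][k] = k * S[n - 1][k] + S[n - 1][k - 1]
    return S

N = 10
S = stirling2_table(N)
for n in range(1, N + 1):
    for k in range(1, n + 1):
        # explicit formula: k! S(n,k) = sum_{l=0}^{k} (-1)^{k-l} C(k,l) l^n
        total = sum((-1) ** (k - l) * comb(k, l) * l**n for l in range(k + 1))
        assert total == factorial(k) * S[n][k]
```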
A recent connection between the Stirling numbers of the second kind and the Euler polynomials is given by the formula (see \cite[Theorem 3.1, Eq. (3.3)]{GQ2014}): \begin{equation} \label{seuler11} E_{n}(x)=\sum_{k=0}^{n}(-1)^{n-k}\binom{n}{k}\left[\sum_{l=1}^{n-k+1}\frac{(-1)^{l-1}(l-1)!}{2^{l-1}}S(n-k+1,l)\right]x^{k}. \end{equation}
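Formula \eqref{seuler11} can likewise be verified computationally. The sketch below (ours; it generates $E_{n}(x)$ from the recurrence $E_{n}(x)=x^{n}-\frac{1}{2}\sum_{k<n}\binom{n}{k}E_{k}(x)$, which follows from the generating function of the Euler polynomials) checks the identity in exact rational arithmetic:

```python
from fractions import Fraction
from math import comb, factorial

def euler_polys(N, x):
    # E_n(x) from (e^z+1) sum_n E_n(x) z^n/n! = 2 e^{xz}, i.e.
    # E_n(x) = x^n - (1/2) sum_{k<n} C(n,k) E_k(x)
    E = []
    for n in range(N + 1):
        s = sum((comb(n, k) * E[k] for k in range(n)), Fraction(0))
        E.append(x**n - s / 2)
    return E

def stirling2(n, k):
    # S(n,k) by the recurrence S(n,k) = k*S(n-1,k) + S(n-1,k-1)
    if n == k:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

x, N = Fraction(5, 3), 8
E = euler_polys(N, x)
for n in range(N + 1):
    rhs = sum(
        (-1) ** (n - k) * comb(n, k) * x**k
        * sum(Fraction((-1) ** (l - 1) * factorial(l - 1), 2 ** (l - 1))
              * stirling2(n - k + 1, l) for l in range(1, n - k + 2))
        for k in range(n + 1))
    assert E[n] == rhs  # identity (seuler11)
```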
Proceeding as in the proof of \cite[Theorem 3.1]{GQ2014}, one can find a similar relation to the previous one but connecting Stirling numbers of the first kind and a particular class of generalized Euler polynomials.
\begin{teo} \label{stinglemma} Let us assume that $\alpha=m\in \mathbb{N}$. Then, the connection between the Stirling numbers of the first kind and the generalized Euler polynomial $E^{(m)}_{n}(x)$ is given by the formula: \begin{equation} \label{sneuler1} E^{(m)}_{n}(x)=\frac{1}{2^{n}}\sum_{k=0}^{n}\binom{n}{k}\left[\sum_{j=0}^{n-k}s(n-k,j)(-m)^{j}\right](2x)^{k}. \end{equation} \end{teo}
\begin{proof} By Leibniz's theorem for differentiation we have \begin{eqnarray*} \frac{\partial^{r}}{\partial z^{r}}\left[\left(\frac{2}{e^{z}+1}\right)^{m}e^{xz}\right]&=& \sum_{k=0}^{r}\binom{r}{k}\left[\left(\frac{2}{e^{z}+1}\right)^{m}\right]^{(k)}\frac{\partial^{r-k}}{\partial z^{r-k}}(e^{xz})\\ &=&\sum_{k=0}^{r}\binom{r}{k}\left[\left(\frac{2}{e^{z}+1}\right)^{m}\right]^{(k)}x^{r-k}e^{xz}\\ &=&\left(\frac{2}{e^{z}+1}\right)^{m}e^{(x+1)z}\sum_{k=0}^{r}\binom{r}{k}\frac{(-m)_{k}\,x^{r-k}}{(e^{z}+1)^{k}}, \end{eqnarray*} where in the last expression $(-m)_{k}$ denotes the falling factorial with opposite argument $-m$.
Combining this with the $r$-th differentiation on both sides of the generating function in \eqref{euler1} reveals that $$\sum_{n=r}^{\infty}E_{n}^{(m)}(x)\frac{z^{n-r}}{(n-r)!}= \left(\frac{2}{e^{z}+1}\right)^{m}e^{(x+1)z}\sum_{k=0}^{r}\binom{r}{k}\frac{(-m)_{k}\,x^{r-k}}{(e^{z}+1)^{k}}.$$
Letting $z\rightarrow 0$ and employing \eqref{seuler1}, we obtain \begin{eqnarray*} E_{r}^{(m)}(x)&=&\sum_{k=0}^{r}\binom{r}{k}(-m)_{k}\,\frac{x^{r-k}}{2^{k}}=\frac{1}{2^{r}}\sum_{k=0}^{r}\binom{r}{k} (-m)_{r-k}\,(2x)^{k}\\ &=&\frac{1}{2^{r}}\sum_{k=0}^{r}\binom{r}{k}\left[\sum_{j=0}^{r-k}s(r-k, j)(-m)^{j} \right](2x)^{k}. \end{eqnarray*}
Finally, replacing $r$ by $n$ completes the proof of formula \eqref{sneuler1}. \end{proof}
\begin{definition} \label{def6} For the Stirling numbers $s(i,j)$ and $S(i,j)$ of the first kind and of the second kind respectively, define $\mathfrak{S}$ and $\mathrsfs{S}$ to be the $(n+1)\times(n+1)$ matrices by \begin{equation} \label{seuler10} \mathfrak{S}_{i,j}= \left\{\begin{array}{l} s(i,j), \quad i\geq j,\\ 0, \quad \mbox{otherwise}, \end{array}\right. \quad \mbox{ and } \quad \mathrsfs{S}_{i,j}= \left\{\begin{array}{l} S(i,j), \quad i\geq j,\\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation}
The matrices $\mathfrak{S}$ and $\mathrsfs{S}$ are called the Stirling matrices of the first kind and of the second kind, respectively (see \cite{ChK2001}). \end{definition}
In order to obtain factorizations for Euler matrices via Stirling matrices we will need the following matrices:
Let $\tilde{S}_{n}$ be the factorial Stirling matrix, i.e., the $n\times n$ matrix whose $(i,j)$-th entry is given by $\tilde{S}_{i,j,n}:=j!\,\mathrsfs{S}_{i,j}$ for $i\geq j$, and $0$ otherwise.
For $m\in\mathbb{N}$, let $\mathfrak{S}^{(m)}$ be the $(n+1)\times(n+1)$ matrix whose $(i,j)$-entries are defined by \begin{equation} \label{kind1} \mathfrak{S}^{(m)}_{i,j}=\left\{\begin{array}{l} \binom{i}{j}\sum_{k=0}^{i-j}s(i-j, k)(-m)^{k}, \quad i\geq j,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation}
Let $\tilde{C}$ and $\tilde{D}$ be the $(n+1)\times(n+1)$ matrices whose $(i,j)$-entries are defined by
\begin{equation} \label{kind2} \tilde{C}_{i,j}=\left\{\begin{array}{l} \binom{i}{j} (-1)^{i-j}\sum_{k=0}^{i-j}\left(-\frac{1}{2}\right)^{k}\tilde{S}_{i-j-k,k,i-j}, \quad i\geq j,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation}
\begin{equation} \label{kind3} \tilde{D}_{i,j}=\left\{\begin{array}{l} \binom{i}{j} (-1)^{i-j}\sum_{k=0}^{i-j}\left(-\frac{1}{2}\right)^{k}\tilde{S}_{i-j-k,k+1,i-j}, \quad i\geq j,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation}
The next theorem shows the corresponding factorizations of the generalized Euler matrix $\mathrsfs{E}^{(m)}$, $m\in\mathbb{N}$, in terms of Stirling matrices, obtained by incorporating the expressions \eqref{seuler11} and \eqref{sneuler1}.
\begin{teo} \label{teogenseuler5} For $m\in \mathbb{N}$, the generalized Euler matrix $\mathrsfs{E}^{(m)}(x)$ can be factorized as follows. \begin{equation} \label{seuler22} \mathrsfs{E}^{(m)}(x)= \mathfrak{S}^{(m)}P[x]. \end{equation} In the case of the Stirling matrix of the second kind, we have \begin{equation} \label{sneuler22b} \mathrsfs{E}(x)= (\tilde{C}+\tilde{D})P[x]. \end{equation} Furthermore, \begin{equation} \label{sneuler22r} \mathfrak{S}^{(1)}= \tilde{C}+\tilde{D}. \end{equation} \end{teo}
\begin{proof} For $m\in \mathbb{N}$ and $i\geq j$, let $A_{i,j}^{(m)}(x)$ be the $(i,j)$-th entry of the matrix product $\mathfrak{S}^{(m)}P[x]$. Then $$\begin{aligned} A_{i,j}^{(m)}(x)=&\sum_{k=j}^{i}\mathfrak{S}^{(m)}_{i,k}\,p_{k,j}(x)=\sum_{k=j}^{i}\binom{i}{k}\binom{k}{j}\left[\sum_{r=0}^{i-k}s(i-k, r)(-m)^{r}\right]x^{k-j}\\ =&\sum_{k=j}^{i}\binom{i}{j}\binom{i-j}{k-j}2^{j-i}\left[\sum_{r=0}^{i-k}s(i-k, r)(-m)^{r}\right](2x)^{k-j}\\ =&\frac{\binom{i}{j}}{2^{i-j}}\sum_{k=0}^{i-j}\binom{i-j}{k}\left[\sum_{r=0}^{i-j-k}s(i-j-k, r)(-m)^{r}\right](2x)^{k}=\binom{i}{j}E_{i-j}^{(m)}(x). \end{aligned}$$ The last equality is an immediate consequence of \eqref{sneuler1}, and \eqref{seuler22} follows from the previous chain of equalities.
In order to prove \eqref{sneuler22b}, we proceed in a similar way, taking into account \eqref{seuler11}, \eqref{kind2}, \eqref{kind3}, and making the corresponding modifications. Finally, the substitution $m=1$ into \eqref{seuler22} yields \eqref{sneuler22r}. \end{proof}
\begin{definition} \label{shift1} The $(n+1)\times(n+1)$ shifted Euler polynomial matrix $\tilde{\mathrsfs{E}}(x)$ is defined by \begin{equation} \label{shift2} \tilde{\mathrsfs{E}}_{i,j}(x)= E_{i}(j+x), \quad 0\leq i,j\leq n. \end{equation} \end{definition}
Let us consider the Vandermonde matrix: $$\mathrsfs{V}(x):= \begin{bmatrix}
1 & 1 & 1 & \cdots & 1 \\
x & 1+x & 2+x & \cdots & n+x \\
x^{2} &(1+x)^{2} & (2+x)^{2} & \cdots & (n+x)^{2} \\
\vdots & \vdots & \vdots& \ddots & \vdots \\
x^{n}& (1+x)^{n} & (2+x)^{n} & \cdots & (n+x)^{n}
\end{bmatrix}.$$
In \cite[Theorem 2.1]{ChK2002}, the following factorization for the Vandermonde matrix $\mathrsfs{V}(x)$ was stated. \begin{equation} \label{veuler3} \mathrsfs{V}(x)= ([1]\oplus \tilde{S}_{n}) \Delta_{n+1}(x) P^{T}:=([1]\oplus \tilde{S}_{n}) \Delta_{n+1}(x) (P[1])^{T}, \end{equation} where $\Delta_{n+1}(x)(P[1])^{T}$ represents the LU-factorization of a lower triangular matrix whose $(i,j)$-th entry is $\binom{x}{i-j}$, if $i\geq j$ and otherwise $0$.
The relation between the shifted Euler polynomial matrix $\tilde{\mathrsfs{E}}(x)$ and the matrices $\mathrsfs{V}(x)$ and $\tilde{S}_{n}$ is contained in the following result.
\begin{teo} \label{teogenvaneuler5} The shifted Euler polynomial matrix $\tilde{\mathrsfs{E}}(x)$ can be factorized in terms of the Vandermonde matrix $\mathrsfs{V}(x)$ and consequently, in terms of the factorial Stirling matrix $\tilde{S}_{n}$ as follows: \begin{equation} \label{veuler22} \tilde{\mathrsfs{E}}(x)= \mathrsfs{E} \mathrsfs{V}(x), \end{equation} \begin{equation} \label{veuler22c} \tilde{\mathrsfs{E}}(x)= \mathrsfs{E} ([1]\oplus \tilde{S}_{n}) \Delta_{n+1}(x) P^{T}. \end{equation} \end{teo}
\begin{proof} Let $\tilde{\mathrsfs{E}}_{i,j}(x)$ be the $(i,j)$-th entry of the shifted Euler polynomial matrix $\tilde{\mathrsfs{E}}(x)$. Then, using \eqref{euler9} we get $$\tilde{\mathrsfs{E}}_{i,j}(x)= E_{i}(j+x)=\sum_{k=0}^{i}\binom{i}{k}E_{i-k}(j+x)^{k}=\sum_{k=0}^{i}E_{i,k}\mathrsfs{V}_{k,j}(x).$$ Hence, \eqref{veuler22} follows from this chain of equalities. The relation \eqref{veuler22c} is a straightforward consequence of \eqref{veuler3}. \end{proof}
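The factorization \eqref{veuler22} can be confirmed numerically. The following Python sketch (ours; it assumes the entries $\mathrsfs{E}_{i,k}=\binom{i}{k}E_{i-k}(0)$ and $\mathrsfs{V}_{k,j}(x)=(j+x)^{k}$ used above) compares $\tilde{\mathrsfs{E}}(x)$ with the product $\mathrsfs{E}\,\mathrsfs{V}(x)$ entrywise in exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def euler_poly_at(n, x):
    # E_n evaluated at x, via E_n(x) = x^n - (1/2) sum_{k<n} C(n,k) E_k(x)
    E = []
    for i in range(n + 1):
        s = sum((comb(i, k) * E[k] for k in range(i)), Fraction(0))
        E.append(x**i - s / 2)
    return E[n]

n, x = 5, Fraction(2, 7)
# Euler matrix at 0: entries C(i,k) E_{i-k}(0); Vandermonde V(x)_{k,j} = (j+x)^k
Emat = [[comb(i, k) * euler_poly_at(i - k, Fraction(0)) if i >= k else Fraction(0)
         for k in range(n + 1)] for i in range(n + 1)]
V = [[(j + x) ** k for j in range(n + 1)] for k in range(n + 1)]
prod = [[sum(Emat[i][k] * V[k][j] for k in range(n + 1)) for j in range(n + 1)]
        for i in range(n + 1)]
# shifted Euler polynomial matrix: entries E_i(j + x)
shifted = [[euler_poly_at(i, j + x) for j in range(n + 1)] for i in range(n + 1)]
assert prod == shifted  # factorization (veuler22)
```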
\begin{remark} Note that the relations \eqref{veuler22} and \eqref{veuler22c} are the analogues of \cite[Eqs. (37), (38)]{ZW2006}, respectively, in the context of Euler polynomial matrices. \end{remark}
Finally, all matrix identities in the present paper have been expressed in terms of finite matrices. Since these identities involve lower triangular matrices, they carry over to infinite matrices. We state this property briefly as follows.
Let $\mathrsfs{E}^{(\alpha)}_{\infty}(x)$, $\mathrsfs{E}_{\infty}\left(x+\frac{1}{2}\right)$, $\mathrsfs{E}_{\infty}(x)$, $\mathrsfs{E}_{\infty}$, $\mathbb{E}_{\infty}$, $\mathrsfs{D}_{\infty}$, $\tilde{\mathrsfs{E}}_{\infty}(x)$, $P_{\infty}[x]$, $\mathrsfs{F}_{\infty}$, $\mathrsfs{F}^{-1}_{\infty}$, $\mathrsfs{L}_{\infty}$, $\mathrsfs{G}_{\infty}[x]$, $\mathrsfs{H}_{\infty}[x]$, $\mathrsfs{M}^{(\alpha)}_{\infty}(x)$, $\mathrsfs{M}_{\infty}\left(x+\frac{1}{2}\right)$, $\mathrsfs{N}^{(\alpha)}_{\infty}(x)$, $\mathrsfs{N}_{\infty}\left(x+\frac{1}{2}\right)$, $\mathrsfs{V}_{\infty}$ and $\mathfrak{S}^{(m)}_{\infty}$, be the infinite cases of the matrices $\mathrsfs{E}^{(\alpha)}(x)$, $\mathrsfs{E}\left(x+\frac{1}{2}\right)$, $\mathrsfs{E}(x)$, $\mathrsfs{E}$, $\mathbb{E}$, $\mathrsfs{D}$, $\tilde{\mathrsfs{E}}(x)$, $P[x]$, $\mathrsfs{F}$, $\mathrsfs{F}^{-1}$, $\mathrsfs{L}$, $\mathrsfs{G}[x]$, $\mathrsfs{H}[x]$, $\mathrsfs{M}^{(\alpha)}(x)$, $\mathrsfs{M}\left(x+\frac{1}{2}\right)$, $\mathrsfs{N}^{(\alpha)}(x)$, $\mathrsfs{N}\left(x+\frac{1}{2}\right)$, $\mathrsfs{V}$ and $\mathfrak{S}^{(m)}$ respectively. Then the following identities hold. \begin{eqnarray*} 2\mathrsfs{E}^{(\alpha-1)}_{\infty}(x)&=& \mathrsfs{E}^{(\alpha)}_{\infty}(x+1)+\mathrsfs{E}^{(\alpha)}_{\infty}(x),\\ \mathrsfs{E}^{(\alpha)}_{\infty}(x+y) &=& \mathrsfs{E}^{(\alpha)}_{\infty}(x)P_{\infty}[y]=P_{\infty}[x]\mathrsfs{E}^{(\alpha)}_{\infty}(y)=\mathrsfs{E}^{(\alpha)}_{\infty}(y)P_{\infty}[x], \\
\mathrsfs{E}_{\infty}(x+y) &=& P_{\infty}[x] \mathrsfs{E}_{\infty}(y)=P_{\infty}[y] \mathrsfs{E}_{\infty}(x), \\ \mathrsfs{E}_{\infty}(x) &=& P_{\infty}[x]\mathrsfs{E}_{\infty}, \\ \mathrsfs{E}_{\infty}\left(x+\frac{1}{2}\right) &=& P_{\infty}[x] \mathbb{E}_{\infty}, \\ \left[\mathrsfs{E}_{\infty}\left(x+\frac{1}{2}\right)\right]^{-1} &=& \mathrsfs{D}_{\infty}P_{\infty}[-x], \\ \mathrsfs{E}_{\infty}\left(x+\frac{1}{2}\right) &=& \mathrsfs{L}_{\infty}\mathrsfs{G}_{\infty}[x]\mathbb{E}_{\infty}=\mathrsfs{H}_{\infty}[x]\mathrsfs{L}_{\infty}\mathbb{E}_{\infty}, \\
\mathrsfs{E}^{(\alpha)}_{\infty}(x) &=& \mathrsfs{F}_{\infty}\mathrsfs{M}^{(\alpha)}_{\infty}(x)=\mathrsfs{N}^{(\alpha)}_{\infty}(x)\mathrsfs{F}_{\infty}, \\
\mathrsfs{M}^{(\alpha)}_{\infty}(x) &=& \mathrsfs{F}^{-1}_{\infty}\mathrsfs{N}^{(\alpha)}_{\infty}(x)\mathrsfs{F}_{\infty},\\
\left[\mathrsfs{M}_{\infty}\left(x+\frac{1}{2}\right)\right]^{-1} &=& \mathrsfs{D}_{\infty}P_{\infty}[-x] \mathrsfs{F}_{\infty},\\
\left[\mathrsfs{N}_{\infty}\left(x+\frac{1}{2}\right)\right]^{-1} &=& \mathrsfs{F}_{\infty} \mathrsfs{D}_{\infty}P_{\infty}[-x],\\ \mathrsfs{E}^{(m)}_{\infty}(x)&=&\mathfrak{S}^{(m)}_{\infty}P_{\infty}[x],\\ \tilde{\mathrsfs{E}}_{\infty}(x)&=& \mathrsfs{E}_{\infty} \mathrsfs{V}_{\infty}(x). \end{eqnarray*}
\end{document}
\begin{document}
\title[Shallow-water model with the Coriolis effect] {A nonlocal shallow-water model arising from the full water waves with the Coriolis effect} \author[Gui]{Guilong Gui} \address{Guilong Gui\newline School of Mathematics, Northwest University, Xi'an 710069, P. R. China} \email{glgui@amss.ac.cn} \author[Liu] {Yue Liu} \address{Yue Liu \newline Department of Mathematics, University of Texas at Arlington, Arlington, TX 76019}
\email{yliu@uta.edu} \author[Sun] {Junwei Sun} \address{Junwei Sun\newline Department of Mathematics, University of Texas at Arlington, Arlington, TX 76019} \email{junwei.sun@mavs.uta.edu}
\begin{abstract} In the present study a mathematical model of the equatorial water waves propagating mainly in one direction with the effect of Earth's rotation is derived by formal asymptotic procedures in the equatorial zone. Such a model equation is analogous to the Camassa-Holm approximation of the two-dimensional incompressible and irrotational Euler equations and has a formal bi-Hamiltonian structure. Its solution corresponding to physically relevant initial perturbations is more accurate on a much longer time scale. It is shown that the deviation of the free surface can be determined by the horizontal velocity at a certain depth in the second-order approximation. The effects of the Coriolis force caused by the Earth's rotation and of nonlocal higher nonlinearities on blow-up criteria and wave-breaking phenomena are also investigated. Our refined analysis is approached by applying the method of characteristics and conserved quantities to a Riccati-type differential inequality. \end{abstract}
\maketitle
\noindent {\sl Keywords\/}: Coriolis effect; rotation-Camassa-Holm equation; shallow water; wave breaking.
\vskip 0.2cm
\noindent {\sl AMS Subject Classification} (2010): 35Q53; 35B30; 35G25 \\
\renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0}
\section{Introduction}
It is known that many of the shallow water models as approximations to the full Euler dynamics are only valid in the weakly nonlinear regime, for instance, the classical Korteweg-de Vries (KdV) equation \cite{KdV} \[ u_t + u_x + \frac{3}{2}u u_x + \frac{1}{6} u_{xxx} = 0. \] However, the more interesting physical phenomena, such as wave breaking and waves of maximal height \cite {AmTo, To}, require a transition to full {\it nonlinearity}. The KdV equation is a simple mathematical model for gravity waves in shallow water, but it fails to model fundamental physical phenomena such as the extreme wave of Stokes \cite{St} and does not include breaking waves (i.e. the wave profile remains bounded while its slope becomes unbounded in finite time). The failure of weakly nonlinear shallow-water wave equations to model observed wave phenomena in nature is a prime motivation in the search for alternative models for nonlinear shallow-water waves \cite {Ro, Wh}. The long-wave regime is usually characterized by the presumptions of long wavelength $\lambda$ and small amplitude $a$, with the amplitude parameter $\varepsilon $ and the shallowness parameter $\mu $ given respectively by \begin{equation*} \label{parameter} \varepsilon = \frac{a}{h_0} \ll 1, \qquad \mu = \frac{h_0^2}{\lambda^2} \ll 1. \end{equation*} It is well understood that the KdV model provides a good asymptotic approximation of unidirectional solutions of the irrotational two-dimensional water waves problem in the Boussinesq regime $ \mu \ll 1$, $\varepsilon = O(\mu) $ \cite{BCL05, C85}. To describe more accurately the motion of these unidirectional waves, it was shown in \cite{CL09} that the Camassa-Holm (CH) equation \cite{CH, FF} in the CH scaling, $ \mu \ll 1$, $\varepsilon = O(\sqrt{\mu}), $ could be a valid higher-order approximation to the governing equations for full water waves on the long time scale $ O(\frac{1}{\varepsilon})$.
Like the KdV equation, the CH equation is integrable and has solitons; moreover, the CH equation models breaking waves and has peaked solitary waves \cite {CH, ce-1, Mc1}. It is also found that the Euler equations admit breaking waves \cite{BeOl} and a traveling-wave solution of greatest height with a corner at its crest \cite{To}.
The Camassa-Holm equation inspired the search for various generalizations of this equation with interesting properties and applications. Note that all nonlinear terms in the CH equation are quadratic. It is then of great interest to find integrable equations with higher-power nonlinear terms.
Analogous to the CH equation, the first main aim of the present paper is to formally derive a model equation with the Coriolis effect from the incompressible and irrotational two-dimensional shallow water flow in the equatorial region. This new model equation, called the rotation-Camassa-Holm (R-CH) equation, has cubic and even quartic nonlinearities and a formal Hamiltonian structure. More precisely, the motion of the fluid is described by the scalar equation in the form \begin{equation}\label{R-CH-1} \begin{split}
u_t-\beta\mu u_{xxt} + c u_x + 3\alpha\varepsilon uu_x - \beta_0\mu u_{xxx} &+ \omega_1 \varepsilon^2u^2u_x + \omega_2 \varepsilon^3u^3u_x \\ &= \alpha\beta\varepsilon\mu( 2u_{x}u_{xx}+uu_{xxx}), \end{split} \end{equation}
where the parameter $ \Omega $ is the constant rotational frequency due to the Coriolis effect. The other constants appearing in the equation are defined by $ c = \sqrt{1 + \Omega^2} - \Omega, \; \alpha \overset{\text{def}}{=} \frac{c^2}{1+c^2}, \, \beta_0 \overset{\text{def}}{=}\frac{c(c^4+6c^2-1)}{6(c^2+1)^2}, \, \beta \overset{\text{def}}{=}\frac{3c^4+8c^2-1}{6(c^2+1)^2}, $ $\omega_1 \overset{\text{def}}{=}\frac{-3c(c^2-1)(c^2-2)}{2(1+c^2)^3}, \, {\rm and}\, \omega_2 \overset{\text{def}}{=}\frac{(c^2-2)(c^2-1)^2(8c^2-1)}{2(1+c^2)^5} $
satisfying $c\to 1$, $\beta\to\frac{5}{12}$, $\beta_0\to\frac{1}{4}$, $\omega_1, \, \omega_2 \to 0$ and $\alpha\to\frac{1}{2}$ when $\Omega\to 0$. Denote $p_{\mu}(x)\overset{\text{def}}{=}\frac{1}{2\sqrt{\beta\mu}}e^{-\frac{|x|}{\sqrt{\beta\mu}}}$, $x\in \mathbb{R}$, then $(1-\beta\mu\partial_x^2)^{-1}f=p_\mu \ast f$ for all $f \in L^2(\mathbb{R})$ and $p_\mu \ast (u-\beta\mu u_{xx})=u$, where $\ast$ denotes convolution with respect to the spatial variable $x$. With this notation, equation \eqref{R-CH-1} can also be equivalently rewritten as the following nonlocal form: \begin{equation*}\label{nonlocal-form-1} u_t +\frac{\beta_0}{\beta}u_x+\alpha\varepsilon u u_x+ p_\mu \ast \partial_x\bigg((c-\frac{\beta_0}{\beta}) u + \alpha \varepsilon u^2 +\frac{1}{2}\alpha\beta\varepsilon\mu u_x^2+ \frac{\omega_1}{3}\varepsilon^2 u^3+ \frac{\omega_2}{4}\varepsilon^3 u^4 \bigg) = 0, \end{equation*} or what is the same, \begin{equation*}\label{nonlocal-form-2} \begin{cases} &u_t + \frac{\beta_0}{\beta}u_x+\alpha\varepsilon u u_x+ \partial_x P= 0,\\ &(1-\beta\mu\partial_x^2)P=(c-\frac{\beta_0}{\beta}) u + \alpha \varepsilon u^2 +\frac{1}{2}\alpha\beta\varepsilon\mu u_x^2+ \frac{\omega_1}{3}\varepsilon^2 u^3+ \frac{\omega_2}{4}\varepsilon^3 u^4. \end{cases} \end{equation*} The solution $ u $ of \eqref{R-CH-1} represents the horizontal velocity field at height $ z_0$, and after the re-scaling, it is required that $ 0 \leq z_0 \leq 1, $ where \begin{equation}\label{z-0-value} z_0^2 = \frac {1}{2} - \frac{2}{3} \frac{1}{(c^2 + 1)} + \frac{4}{3} \frac{1}{(c^2 + 1)^2}. \end{equation} Since it is also natural to require that the constant $ \beta > 0, $ it must be the case that \[ 0 \leq \Omega < \sqrt {\frac{1}{6} (1 + 2 \sqrt{19})} \approx 1.273, \] and \[ \frac{1}{\sqrt{2}} \leq z_0 < \sqrt{ \frac{61 - 2 \sqrt{19}} {54}} \approx 0.984. \] In particular, when $ \Omega = 0, $ the value $ z_0= \frac{1}{\sqrt{2}} $ corresponds to the case of the classical CH equation.
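The limiting values and the admissible range of $\Omega$ quoted above can be checked numerically. The short Python sketch below (ours; the helper name \texttt{rch\_constants} is an assumption) evaluates the constants directly from their definitions:

```python
from math import sqrt, isclose

def rch_constants(Omega):
    """Constants of the R-CH equation as functions of Omega (floats)."""
    c = sqrt(1 + Omega**2) - Omega
    alpha = c**2 / (1 + c**2)
    beta0 = c * (c**4 + 6 * c**2 - 1) / (6 * (c**2 + 1) ** 2)
    beta = (3 * c**4 + 8 * c**2 - 1) / (6 * (c**2 + 1) ** 2)
    omega1 = -3 * c * (c**2 - 1) * (c**2 - 2) / (2 * (1 + c**2) ** 3)
    omega2 = (c**2 - 2) * (c**2 - 1) ** 2 * (8 * c**2 - 1) / (2 * (1 + c**2) ** 5)
    z0 = sqrt(0.5 - (2 / 3) / (c**2 + 1) + (4 / 3) / (c**2 + 1) ** 2)
    return c, alpha, beta0, beta, omega1, omega2, z0

# Omega -> 0 limits quoted in the text
c, alpha, beta0, beta, omega1, omega2, z0 = rch_constants(0.0)
assert isclose(c, 1.0) and isclose(alpha, 0.5) and isclose(z0, 1 / sqrt(2))
assert isclose(beta, 5 / 12) and isclose(beta0, 0.25)
assert abs(omega1) < 1e-12 and abs(omega2) < 1e-12

# beta > 0 exactly on [0, Omega_max) with the quoted critical value
Omega_max = sqrt((1 + 2 * sqrt(19)) / 6)            # approximately 1.273
assert abs(rch_constants(Omega_max)[3]) < 1e-10     # beta vanishes at Omega_max
assert isclose(rch_constants(Omega_max)[6], sqrt((61 - 2 * sqrt(19)) / 54))
```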
The starting point of our derivation of the R-CH model in \eqref{R-CH-1} is the paper \cite{Jo1}, where the classical CH equation was derived. The R-CH equation in \eqref{R-CH-1} is established by showing that, after a double asymptotic expansion with respect to $\varepsilon$ and $\mu$, the free surface $\eta=\eta(\tau, \xi)$, in the field variables $ (\tau, \xi) $ defined in \eqref{notation-1} for the 2D Euler dynamics \eqref{Euler-1} (see Section 2), is governed by the equation \begin{equation*}\label{eta-eqns-1} \begin{split}
2(\Omega+c)\eta_{\tau} + 3c^2\eta\eta_{\xi} + \frac{c^2}{3}\mu\eta_{\xi\xi\xi} + A_1\varepsilon\eta^2\eta_{\xi} + A_2\varepsilon^2\eta^3\eta_{\xi} +A_{0}\varepsilon^3\eta^4\eta_{\xi}\\
= \varepsilon\mu\Big[A_3\eta_{\xi}\eta_{\xi\xi} + A_4\eta\eta_{\xi\xi\xi}\Big]+O(\varepsilon^4, \mu^2), \end{split} \end{equation*} where the constants $A_1 \overset{\text{def}}{=} \frac{3c^2(c^2-2)}{(c^2+1)^2}$, $ A_2 \overset{\text{def}}{=} -\frac{c^2(2-c^2)(c^6-7c^4+5c^2-5)}{(c^2+1)^4}$, $A_3 \overset{\text{def}}{=} \frac{-c^2(9c^4+16c^2-2)}{3(c^2+1)^2}$, $A_4 \overset{\text{def}}{=} \frac{-c^2(3c^4+8c^2-1)}{3(c^2+1)^2}$, $A_{0} \overset{\text{def}}{=} \frac{c^2(c^2-2)(3c^{10}+228c^8-540c^6-180c^4-13c^2+42)}{12(c^2+1)^6}$. The free surface $\eta$ is also expressed in terms of the horizontal component of the velocity $u$ at $ z = z_0 $, under the CH regime $\varepsilon=O(\sqrt{\mu})$, as \begin{equation*}\label{surface-1} \eta = \frac{1}{c} u + \gamma_1\varepsilon u^2 +\gamma_2\varepsilon^2 u^3+\gamma_3\varepsilon^3 u^4+\gamma_4 \varepsilon\mu u_{\xi\xi}+O(\varepsilon^4,\mu^2), \end{equation*} where the constants in the expression are given by $ \gamma_1=\frac{2-c^2}{2c^2(c^2+1)}$, $\gamma_2=\frac{(c^2-1)(c^2-2)(2c^2+1)}{2c^3(c^2+1)^3}$, $\gamma_3=-\frac{(c^2-1)^2(c^2-2)(21c^4+16c^2+4)}{8c^4(c^2+1)^5}$, and $\gamma_4=\frac{z_0^2}{2c}-\frac{3c^2+1}{6c(c^2+1)}=\frac{-(3c^4+6c^2-5)}{12c(c^2+1)^2}$ (here the height parameter $z_0$ is determined by \eqref{z-0-value}).
Denoting $m\overset{\text{def}}{=}(1-\beta\mu\partial_x^2)u$, one can rewrite the above equation in terms of the evolution of the momentum density $m$, namely, \begin{equation}\label{R-CH-m} \partial_t m +\alpha\varepsilon(um_x+2mu_x) +cu_{x} - \beta_0\mu u_{xxx} + \omega_1 \varepsilon^2u^2u_x + \omega_2 \varepsilon^3u^3u_x = 0. \end{equation} When the Coriolis effect vanishes ($ \Omega = 0$), the coefficients of the higher-power nonlinearities vanish: $ \omega_1 = \omega_2 = 0.$ Using the scaling transformation $u(t, x) \to \alpha \varepsilon u(\sqrt{\beta \mu}\,\,t,\sqrt{\beta \mu}\,\,x)$ and then the Galilean transformation $ u(t, x) \to u(t, x- \frac{3}{4}t) + \frac{1}{4}, $ the R-CH equation \eqref{R-CH-m} is then reduced to the classical CH equation \begin{equation*}\label{CH-1} u_t - u_{xxt} + 3 uu_x = 2 u_x u_{xx} + u u_{xxx}. \end{equation*} On the other hand, if we formally take $\beta=0$ and $\omega_2=0$ in \eqref{R-CH-m}, then we get the following integrable Gardner equation \cite{Gard68} \begin{equation*} u_t + c u_x + 3\alpha\varepsilon uu_x - \beta_0\mu u_{xxx} + \omega_1 \varepsilon^2u^2u_x = 0. \end{equation*} Note that the R-CH equation \eqref{R-CH-m} has the following three conserved quantities \[ I(u) =\int_{\mathbb{R}} u\, dx, \quad E(u)=\frac{1}{2}\int_{\mathbb{R}} u^2+\beta\mu u_x^2\,dx, \] and \[
F(u)=\frac{1}{2}\int_{\mathbb{R}} cu^2+\alpha\varepsilon u^3+\beta_0\mu u_x^2+\frac{\omega_1 \varepsilon^2}{6}u^4 + \frac{\omega_2 \varepsilon^3}{10}u^5 +\alpha\beta\varepsilon\mu uu^2_x\,dx. \] Define that \begin{equation*} \begin{split} B_1 &\overset{\text{def}}{=} \partial_x(1-\beta\mu\partial^2_x), \qquad {\rm and} \\ B_2 &\overset{\text{def}}{=} \partial_x((\alpha\varepsilon m+\frac{c}{2})\cdot)+(\alpha\varepsilon m+\frac{c}{2})\partial_x-\beta_0\mu\partial_x^3+\frac{2}{3}\omega_1\varepsilon^2\partial_x(u\partial_x^{-1}(u\partial_x\cdot)) \\ &\qquad\qquad\qquad \qquad\qquad\qquad\qquad \qquad+ \frac{5}{8}\omega_2\varepsilon^3\partial_x(u^{\frac{3}{2}} \partial_x^{-1}(u^{\frac{3}{2}}\partial_x\cdot)). \end{split} \end{equation*} A simple calculation then reveals that the R-CH equation \eqref{R-CH-1} can be written as \begin{equation*} m_t=-B_1\frac{\delta F}{\delta m} = -B_2\frac{\delta E}{\delta m}, \end{equation*} where $B_1$ and $B_2$ are two skew-symmetric differential operators.
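The skew-symmetry of $B_1$ (and similarly of $B_2$) can be checked numerically. The following sketch is our own illustration: it tests $\langle f, B_1 g\rangle = -\langle B_1 f, g\rangle$ on a periodic grid with spectral derivatives, using the arbitrary illustrative value $\beta\mu = 0.3$:

```python
import numpy as np

# Numerical check (ours) that B1 = d/dx (1 - beta*mu d^2/dx^2) is skew-symmetric,
# i.e. <f, B1 g> = -<B1 f, g>, on a periodic grid; beta*mu = 0.3 is arbitrary.
N = 256
x = np.linspace(0.0, 2*np.pi, N, endpoint=False)
k = 2*np.pi * np.fft.fftfreq(N, d=2*np.pi/N)   # integer wavenumbers

def ddx(u, order=1):
    """Spectral derivative of a periodic grid function."""
    return np.real(np.fft.ifft((1j*k)**order * np.fft.fft(u)))

beta_mu = 0.3
def B1(u):
    return ddx(u - beta_mu*ddx(u, order=2))

f = np.sin(3*x) + 0.5*np.cos(x)
g = np.exp(np.cos(x))
lhs = np.mean(f * B1(g))
rhs = -np.mean(B1(f) * g)
print(abs(lhs - rhs))  # machine-precision agreement
```

Skew-symmetry of the two operators is what makes $I$, $E$ and $F$ formal conservation laws of the bi-Hamiltonian structure.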
The class of evolution equations \eqref{R-CH-1} consists of formal models for small-amplitude, long waves on the surface of water over a flat bottom. It is our expectation that these equations approximate solutions of the full water-wave problem with the Coriolis effect for an ideal fluid, with an error of order $ O(\mu^2t) $ over a CH time scale at least of order $ O(\varepsilon^{-1}). $ A rigorous justification to this effect is available in \cite {CGL16} (see also \cite{CL09} for the case without the Coriolis effect).
It is also found that taking the Coriolis effect into account introduces higher-power nonlinear terms into the R-CH model, which has interesting implications for the fluid motion, particularly in relation to wave breaking and permanent waves. A further goal of the present paper is therefore to investigate, from this model, how the Coriolis forcing due to the Earth's rotation, together with the higher-power nonlinearities, affects wave breaking, and which conditions ensure the occurrence of wave breaking or of permanent waves.
The dynamics of the blow-up quantity along the characteristics in the R-CH equation actually involves the interaction among three parts: a local nonlinearity, a nonlocal term, and a term stemming from the weak Coriolis forcing. It is observed that the nonlocal (smoothing) effect can help maintain regularity while waves propagate and hence prevent them from blowing up, even when dispersion is weak or absent. See, for example, the Benjamin-Bona-Mahony (BBM) equation \cite{BBM}. As the local nonlinearity becomes stronger and dominates over the dispersion and nonlocal effects, singularities may occur in the sense of {\it wave-breaking}. Examples can be found in the Whitham equation \cite {ce-1, Wh} and the Camassa-Holm (CH) equation \cite{CH,CL09, FF}. It is also found that the Coriolis effect spreads waves out and makes them decay in time, delaying the onset of wave-breaking. Understanding the wave-breaking mechanism, such as when a singularity can form and what its nature is, is not only of fundamental importance from the mathematical point of view but also of great physical interest, since it would help provide a key mechanism for localizing energy in conservative systems by forming one or several small-scale spots. For instance, in fluid dynamics, the possible phenomenon of finite-time breakdown for the incompressible Euler equations signifies the onset of turbulence in high Reynolds number flows.
The R-CH equation, with its nonlocal structure, can be reformulated in a weak form of nonlinear nonlocal transport type. From transport theory, the blow-up criteria assert that singularities are caused by the focusing of characteristics, which involves information on the gradient $u_x$. The dynamics of the wave-breaking quantity along the characteristics is established through a Riccati-type differential inequality. The argument then proceeds by a refined analysis of the evolution of the solution $ u $ and its gradient $ u_x $. Recently, Brandolese and Cortez \cite{BrCo1} introduced a new type of blow-up criterion in the study of the classical CH equation, showing how the local structure of the solution affects the blow-up. Their argument relies heavily on the fact that the convolution terms are quadratic and positive definite. For the R-CH equation, by contrast, the convolution contains cubic and even quartic nonlinearities, which do not admit a lower bound in terms of the local terms. Hence the higher-power nonlinearities in the equation make it difficult to obtain a purely local condition on the initial data that generates finite-time wave-breaking. In our case, the blow-up is instead deduced from the interplay between $u$ and $u_x$. More precisely, this motivates us to carry out a refined analysis of the characteristic dynamics of $M = u - u_x + c_1 $ and $N = u + u_x + c_2$. The estimates of $M$ and $N$ can be closed in the form \begin{equation*}\label{estimates MN} M'(t)\geq - cMN + \mathcal{N}_1, \quad N'(t) \leq c MN + \mathcal{N}_2, \end{equation*} where the nonlocal terms $\mathcal{N}_i \;(i=1,2)$ can be bounded in terms of certain conservation laws. From these Riccati-type differential inequalities the monotonicity of $M$ and $N$ can be established, and hence finite-time wave-breaking follows.
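The blow-up mechanism encoded in such Riccati-type inequalities can be illustrated by a toy ODE system; this is our own sketch of the equality case with $c=1$ and the nonlocal terms dropped, not the actual PDE dynamics. Then $M+N$ is conserved, and with $M(0)=-N(0)=M_0>0$ one has $N=-M$, hence $M'=M^2$, which blows up at $t=1/M_0$:

```python
# Toy illustration (ours, not the PDE): equality case M' = -M*N, N' = M*N.
# M + N is conserved by the scheme; starting from M(0) = -N(0) = M0 > 0 gives
# N = -M, hence M' = M^2, so M blows up at the finite time t = 1/M0.
def integrate(M0, dt=1e-4, t_end=0.9):
    """Forward-Euler integration of the toy system up to t_end."""
    M, N, t = M0, -M0, 0.0
    while t < t_end - 1e-12:
        M, N = M + dt*(-M*N), N + dt*(M*N)
        t += dt
    return M, N

M, N = integrate(1.0)   # exact solution at t = 0.9 is M = 1/(1 - 0.9) = 10
print(M, N)
```

Forward Euler slightly undershoots the exact solution near the blow-up time, but the steep growth of $M$ (and symmetric decrease of $N$) is already visible well before $t=1/M_0$.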
The present contribution is organized as follows. In the next section, the R-CH model equation is formally derived from the incompressible and irrotational full water-wave equations with the Coriolis effect taken into account; it is an asymptotic model, in the CH regime, of the $f$-plane geophysical governing equations in the equatorial region. Section \ref{local} is devoted to the local well-posedness and blow-up criteria. In the last section, Section \ref{breaking}, the wave-breaking criteria are established in Theorem \ref{thm-wavebreak-crt} and the breakdown mechanisms are set out in Theorem \ref{Blow-up}.
\noindent{\bf Notation.} In the sequel, we denote by $\ast$ the convolution. For $1\leq p<\infty$, the norm in the Lebesgue space $L^p(\R)$ is
$\|f\|_{p}=\Big(\int_{\R}|f(x)|^pdx\Big)^{\frac1p}$; the space $L^{\infty}(\R)$ consists of all essentially bounded, Lebesgue measurable functions $f$, equipped with the norm $\displaystyle
\|f\|_{\infty}=\inf_{\mu(e)=0}\sup_{x\in \R\setminus e}|f(x)|$.
For a function $f$ in the classical Sobolev spaces $H^s(\R)\;(s\geq0)$ the norm is denoted by $ \|f\|_{H^s} $. We denote by $p(x) = {1\over2} e^{-|x|}$ the fundamental solution of $ 1 - \partial^2_x $ on $\R$, and define the two convolution operators $p_+, \;p_-$ as \begin{equation*}\label{convo} \begin{split} & p_+ \ast f (x) = {e^{-x} \over 2} \int^x_{-\infty} e^y f(y) dy,\\ & p_- \ast f(x) = {e^{x}\over 2} \int^\infty_{x} e^{-y} f(y) dy. \end{split} \end{equation*} Then we have the relations $ \displaystyle p = p_+ + p_-, \quad p_x = p_- - p_+. $
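That $p$ is indeed the fundamental solution of $1-\partial_x^2$ can be checked numerically: the discrete convolution $p \ast (u - u'')$ should recover $u$ for a rapidly decaying test function. The following script is our own sanity check, with a Gaussian as the test function:

```python
import numpy as np

# Sanity check (ours) that p(x) = e^{-|x|}/2 is the Green's function of
# 1 - d^2/dx^2: the discrete convolution p * (u - u'') should recover u.
x = np.linspace(-30.0, 30.0, 6001)
dx = x[1] - x[0]
u = np.exp(-x**2)                    # decaying test function
uxx = (4*x**2 - 2) * u               # its exact second derivative
p = 0.5 * np.exp(-np.abs(x))
conv = np.convolve(u - uxx, p, mode='same') * dx   # approximates p * (u - u'')
err = np.max(np.abs(conv - u)[np.abs(x) < 10])
print(err)  # small: discretization plus domain-truncation error
```

The same identity with the rescaled kernel $p_\mu$ underlies the nonlocal reformulation of \eqref{R-CH-1} in Section 1.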
\renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0}
\section{Derivation of the R-CH model} \label{derivation} The formal derivation of the Camassa-Holm model equation with the Coriolis effect in the equatorial region is the topic of the present section. Attention is given here to the so-called long-wave limit. In this setting, it is assumed that water flows are incompressible and inviscid with a constant density $\rho$ and no surface tension, and that the interface between the air and the water is a free surface. Such a motion of water flow occupying a domain $\Omega_t$ in $\mathbb{R}^3$ under the influence of the gravity $ g $ and the Coriolis force due to the Earth's rotation
can be described by the Euler equations \cite{GSR07}, {\it viz.} \begin{equation*} \label{R-Euler} \begin{cases}
&\vec{u}_t+\left( \vec{u}\cdot\nabla \right)\vec{u} + 2 \vec {\Omega} \times \vec{u} =-{1\over\rho}\nabla P +\vec{g},\quad x\in \Omega_t,\\
&\nabla\cdot \vec{u}=0, \quad x\in \Omega_t,\\
&\vec{u}|_{t=0}=\vec{u_0}, \quad x\in \Omega_0,
\end{cases} \end{equation*} where $ \vec u = (u, v, w )^T $ is the fluid velocity, $ P(t,x,y,z)$ is the pressure in the fluid, $ \vec{g} = ( 0, 0, -g )^T$ with $g \approx 9.8\,\mathrm{m/s^2}$ the constant gravitational acceleration at the Earth's surface, and $\vec \Omega = ( 0, \, \Omega_0 \cos \phi, \, \Omega_0 \sin \phi)^T$, with the rotational frequency $\Omega_0 \approx 73\cdot 10^{-6}\,\mathrm{rad/s}$ and the local latitude $ \phi$, is the angular velocity vector which is directed along the axis of rotation of the rotating reference frame. We adopt a rotating framework with the origin located at a point on the Earth's surface, with the $x$-axis chosen horizontally due east, the $y$-axis horizontally due north and the $z$-axis upward. We consider here waves at the surface of water with a flat bed, and assume that $\Omega_t=\{(x, y, z): 0<z<h_0+\eta(t, x, y)\}$, where $h_0$ is the typical depth of the water and $\eta(t, x, y)$ measures the deviation from the average level. Under the $f$-plane approximation $( \sin \phi \approx 0, \; \phi \ll 1)$, the motion of an inviscid irrotational fluid near the Equator in the region $0 < z < h_0 + \eta(t,x,y)$ with a constant density $\rho$ is described by the Euler equations \cite{Con12,GSR07} in the form \begin{equation*} \label{f-plane} \begin{cases} u_t + uu_x+ vu_y + wu_z + 2\Omega_0 w = -\frac{1}{\rho}P_x, \\ v_t + uv_x+ vv_y + wv_z = -\frac{1}{\rho}P_y, \\ w_t + uw_x + vw_y + ww_z - 2\Omega_0 u = -\frac{1}{\rho}P_z - g, \end{cases} \end{equation*} the incompressibility of the fluid, \begin{equation*}\label{incom-1} u_x + v_y + w_z = 0, \end{equation*} and the irrotational condition, \begin{equation*}\label{irrot-1} (w_y-v_z, u_z-w_x, v_x-u_y)^T = (0,0,0)^T. \end{equation*} The pressure is written as \begin{equation*} P(t, x,z) = P_a + \rho g(h_0 - z) + p(t, x, y, z), \end{equation*} where $P_a$ is the constant atmospheric pressure, and $p$ is a pressure variable measuring the deviation from the hydrostatic pressure distribution.
The dynamic condition posed on the surface $z = h_0 + \eta$ yields $P = P_a$. It then follows that \begin{equation*}\label{perssure-1} p = \rho g \eta. \end{equation*} Meanwhile, the kinematic condition on the surface is given by \begin{equation*}\label{KC-1} w = \eta_t + u\eta_x + v\eta_y, \quad \mbox{when} \quad z = h_0 + \eta(t, x, y). \end{equation*} Finally, we pose the ``no-flow'' condition at the flat bottom $z = 0$, that is, \begin{equation*}\label{bottom-1}
w|_{z=0} = 0. \end{equation*}
We consider two-dimensional flows, moving in the zonal direction along the equator and independent of the $y$-coordinate; in other words, $v \equiv 0$ throughout the flow, so the irrotationality condition simplifies to $u_z-w_x=0$. According to the magnitude of the physical quantities, we introduce dimensionless quantities as follows \begin{equation*} x \rightarrow \lambda x,\quad z \rightarrow h_0 z,\quad \eta \rightarrow a \eta,\quad t \rightarrow \frac{\lambda}{\sqrt{gh_0}}t, \end{equation*} which implies \begin{equation*} u \rightarrow \sqrt{gh_0}u,\quad w \rightarrow \sqrt{\mu gh_0} w,\quad p \rightarrow \rho g h_0 p. \end{equation*} Under the influence of the Earth's rotation, we introduce \begin{equation*}\label{rescall-1} \Omega = \sqrt{{h_0}/{g}} \, \Omega_0. \end{equation*} Furthermore, since \begin{equation*} u \rightarrow 0,\quad w \rightarrow 0,\quad p \rightarrow 0 \end{equation*} whenever $\varepsilon \rightarrow 0$, that is, $u, w$ and $p$ are proportional to the wave amplitude, we require the scaling \begin{equation*}\label{rescall-2} u \rightarrow \varepsilon u,\quad w \rightarrow \varepsilon w,\quad p \rightarrow \varepsilon p. \end{equation*} Therefore the governing equations become \begin{equation}\label{governing} \begin{cases} u_t + \varepsilon(uu_x+wu_z)+2\Omega w = - p_x &\text{in}\quad 0 < z < 1+\varepsilon\eta(t,x), \\ \mu \{w_t + \varepsilon (uw_x + ww_z)\} - 2\Omega u = -p_z &\text{in}\quad 0 < z < 1+\varepsilon\eta(t,x), \\ u_x + w_z = 0 &\text{in}\quad 0 < z < 1+\varepsilon\eta(t,x),\\ u_z - \mu w_x = 0 &\text{in}\quad 0 < z < 1+\varepsilon\eta(t,x), \\ p = \eta &\text{on}\quad z = 1 + \varepsilon\eta(t,x),\\ w = \eta_t + \varepsilon u \eta_x &\text{on}\quad z = 1 + \varepsilon\eta(t,x),\\ w = 0 & \text{on}\quad z = 0. \end{cases} \end{equation}
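To get a feeling for the size of the nondimensional rotation parameter $\Omega = \sqrt{h_0/g}\,\Omega_0$, consider an equatorial ocean depth of a few kilometres; the value $h_0 = 4000\,\mathrm{m}$ below is our own purely illustrative choice:

```python
import math

# Illustrative magnitude (ours) of the nondimensional rotation parameter
# Omega = sqrt(h0/g) * Omega_0; h0 = 4000 m is an illustrative choice only.
g = 9.8            # m/s^2, gravitational acceleration
Omega0 = 73e-6     # rad/s, Earth's rotational frequency
h0 = 4000.0        # m, illustrative equatorial ocean depth
Omega = math.sqrt(h0 / g) * Omega0
print(Omega)       # roughly 1.5e-3, so the Coriolis forcing is weak
```

The smallness of $\Omega$ for realistic depths is consistent with treating the Coriolis forcing as a weak effect in the asymptotic expansion.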
To derive the R-CH equation for shallow-water waves, we first introduce a suitable scaling and a double asymptotic expansion, so as to obtain groups of equations in powers of $\varepsilon$ and $\mu$ that are independent of each other, where $\varepsilon,\, \mu \ll 1$.
Let $c$ be the group speed of the water waves. We apply suitable far-field variables adapted to the propagation problem \cite{Jo1, Jo2}, \begin{equation} \label{notation-1} \xi = \varepsilon^{1/2}(x-ct),\quad \tau = \varepsilon^{3/2}t, \end{equation} which implies, for consistency with the equation of mass conservation, that we also transform \begin{equation*} w = \sqrt{\varepsilon} \,W. \end{equation*} Then the governing equations \eqref{governing} become \begin{equation}\label{Euler-1} \begin{cases} - c u_{\xi} + \varepsilon (u_\tau + uu_\xi + Wu_z) + 2\Omega W = - p_\xi \quad & \text{in}\quad 0 < z < 1 + \varepsilon \eta,\\ \varepsilon\mu \{- c W_\xi + \varepsilon (W_\tau + u W_\xi + WW_z)\} - 2\Omega u = - p_z \quad & \text{in}\quad 0 < z < 1 + \varepsilon \eta,\\ u_\xi + W_z = 0 \quad & \text{in}\quad 0 < z < 1 + \varepsilon \eta,\\ u_z - \varepsilon\mu W_\xi = 0 \quad & \text{in}\quad 0 < z < 1 + \varepsilon \eta,\\ p = \eta \quad & \text{on}\quad z = 1+ \varepsilon \eta,\\ W = - c \eta_\xi + \varepsilon (\eta_\tau + u \eta_\xi) \quad & \text{on}\quad z = 1+ \varepsilon \eta,\\ W = 0 \quad & \text{on} \quad z = 0. \end{cases} \end{equation} A double asymptotic expansion is introduced to seek a solution of the system \eqref{Euler-1}, \begin{equation*} q \sim \sum_{n=0}^{\infty} \sum_{m=0}^{\infty}\varepsilon^n \mu^m q_{nm} \end{equation*}
as $\varepsilon \rightarrow 0, \mu \rightarrow 0$, where $q$ stands for any of the field variables $u, \,W, \,p$ and $\eta$, and all the functions $q_{nm}$ satisfy the far-field conditions $q_{nm} \rightarrow 0$ as $|\xi|\rightarrow \infty$ for every $n, \,m=0, 1, 2, 3, \ldots$.
Substituting the asymptotic expansions of $u, \,W, \,p,\,\eta$ into \eqref{Euler-1}, we collect the coefficients of each order $O(\varepsilon^i\mu^j)$ ($i, \, j=0, 1, 2, 3, \ldots$).
From the order $O(\varepsilon^0 \mu^0)$ terms of \eqref{Euler-1} we obtain from the Taylor expansion \begin{equation}\label{taylor-1} f(z)=f(1)+\sum_{n=1}^{\infty} \frac{(z-1)^n}{n!}f^{(n)}(1) \end{equation} that \begin{equation}\label{equation-00} \begin{cases} -c u_{00,\xi} + 2\Omega W_{00} = - p_{00,\xi} &\text{in}\quad 0 < z < 1,\\ 2\Omega u_{00} = p_{00,z} &\text{in}\quad 0 < z < 1,\\ u_{00,\xi} + W_{00,z} = 0 &\text{in}\quad 0 < z < 1,\\ u_{00,z} = 0 &\text{in}\quad 0 < z < 1,\\ p_{00} = \eta_{00}, \quad W_{00} = - c \eta_{00,\xi} & \text{on} \quad z = 1,\\ W_{00} = 0 & \text{on} \quad z = 0. \end{cases} \end{equation} To solve the system \eqref{equation-00}, we first obtain from the fourth equation in \eqref{equation-00} that $u_{00}$ is independent of $z$, that is, $u_{00} = u_{00}(\tau, \xi)$.
Thanks to the third equation in \eqref{equation-00} and the boundary condition of $W$ on $z=0$, we get \begin{equation} \label{w00-1}
W_{00} =W_{00}|_{z = 0} + \int_0^z W_{00,z'} dz' = -\int_0^z u_{00,\xi}\, dz'= - z u_{00,\xi}, \end{equation} which along with the boundary condition of $W$ on $z=1$ implies $u_{00,\xi}(\tau, \xi) = c\eta_{00,\xi}(\tau, \xi)$. Therefore, we have \begin{equation}\label{w-00} u_{00}(\tau, \xi) = c\eta_{00}(\tau, \xi), \quad W_{00} = -cz\eta_{00,\xi}, \end{equation}
where use has been made of the far-field conditions $u_{00}, \, \eta_{00} \rightarrow 0$ as $|\xi| \rightarrow \infty$.
On the other hand, from the second equation in \eqref{equation-00}, it follows that \begin{equation}\label{p-00-1}
p_{00}= p_{00}|_{z = 1} + \int_1^z p_{00,z'} \,dz'=\eta_{00}+2\Omega \int_1^z u_{00} \,dz'=\eta_{00}+2\Omega (z-1) u_{00}, \end{equation} which along with $u_{00,\xi} = c\eta_{00,\xi}$ implies \begin{equation}\label{p-00-2} p_{00, \xi}=\big(\frac{1}{c}+2\Omega (z-1)\big) u_{00, \xi}. \end{equation} Combining \eqref{p-00-2} with \eqref{w00-1} and the first equation in \eqref{equation-00} gives rise to $(c^2 + 2\Omega c - 1) u_{00, \xi} = 0$, which implies \begin{equation}\label{c-1-2} c^2 + 2\Omega c - 1= 0, \end{equation} provided that $u_{00}$ is a non-trivial velocity. Therefore, considering waves moving to the right, we obtain \begin{equation}\label{c-1-3} c = \sqrt{1 + \Omega^2} - \Omega. \end{equation}
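As a quick numerical cross-check (ours), one can confirm that $c = \sqrt{1+\Omega^2}-\Omega$ is indeed the positive root of the quadratic \eqref{c-1-2} for any rotation frequency:

```python
import math

# Check (ours) that c = sqrt(1 + Omega^2) - Omega is the positive root of
# c^2 + 2*Omega*c - 1 = 0, for several rotation frequencies Omega.
def wave_speed(Om):
    return math.sqrt(1.0 + Om**2) - Om

residuals = [abs(wave_speed(Om)**2 + 2*Om*wave_speed(Om) - 1.0)
             for Om in (0.0, 0.25, 0.5, 1.0)]
print(residuals)  # all at machine-precision level
```

The other root $-\sqrt{1+\Omega^2}-\Omega$ is negative and corresponds to left-moving waves, which is why it is discarded here.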
Similarly, setting to zero the terms of orders $O(\varepsilon^0 \mu^1)$, $O(\varepsilon^2 \mu^0)$, $O(\varepsilon^1 \mu^1)$, $O(\varepsilon^3 \mu^0)$, $O(\varepsilon^4 \mu^0)$, and $O(\varepsilon^2 \mu^1)$ in \eqref{Euler-1}, respectively, we obtain \begin{equation}\label{uwp-01-1} \begin{split} &u_{01} = c\eta_{01}=c\eta_{01}(\tau, \xi),\\ &u_{20}=u_{20}(\tau, \xi)= c \eta_{20} - 2(c+c_1)\eta_{00}\eta_{10}-\frac{2c_1-3\Omega}{3(c+\Omega)}(c+c_1)\eta_{00}^3,\\ &u_{11} =u_{11}(\tau, \xi)= \left(\frac{c}{6} - \frac{2c_1}{9} -\frac{c}{2}z^2 \right) \eta_{00,\xi\xi} + c \eta_{11} - 2 (c+c_1) \eta_{00}\eta_{01},\\ &u_{30}=u_{30}(\tau, \xi)= c \eta_{30} - 2(c+c_1)(\eta_{00}\eta_{20}) -(c+c_1)(\eta_{10}^2)-\frac{2c_1-3\Omega}{\Omega+c}(c+c_1)(\eta_{00}^2\eta_{10})\\ &\qquad -\frac{(64cc_1+24c_1^2+45c^2+24\Omega^2-3)}{24(c+\Omega)^2}(c+c_1)(\eta_{00}^4), \end{split}\end{equation} and \begin{equation}\label{eta-30-eqn} \begin{split} &2(c+\Omega)\eta_{30,\tau} +3c^2(\eta_{00}\eta_{30}+\eta_{10}\eta_{20})_{\xi} -2(3c+2c_1)(c+c_1)(\eta_{00}^2\eta_{20}+\eta_{00}\eta_{10}^2)_{\xi}\\ &\quad-\frac{(64cc_1+24c_1^2+45c^2-15)}{3(c+\Omega)}(c+c_1)(\eta_{00}^3\eta_{10})_{\xi}-B_2(\eta_{00}^5)_{\xi}=0,\\ &2(\Omega + c)\eta_{11,\tau} + 3c^2(\eta_{00}\eta_{11}+\eta_{10}\eta_{01})_\xi-2(c+c_1)(3c+2c_1)(\eta_{00}^2\eta_{01})_\xi+\frac{c^2}{3}\eta_{10,\xi\xi\xi}\\ &-\left(\frac{c^2}{6}+\frac{10c c_1}{9}+\frac{2 c_1^2}{9}\right)(\eta_{00,\xi}^2)_{\xi}-\left(\frac{c^2}{3}+\frac{20 c c_1}{9}+\frac{8 c_1^2}{9}\right)(\eta_{00}\eta_{00,\xi\xi})_{\xi}=0.
\end{split}\end{equation} with \begin{equation}\label{c-0-0} c_1 \overset{\text{def}}{=} -\frac{3c^2}{4(\Omega + c)}=-\frac{3 c^3}{2 (c^2 + 1)}, \end{equation} \begin{equation*}\begin{split} B_1\overset{\text{def}}{=} &\frac{(c+c_1)^2(82cc_1+36c_1^2+45c^2-18\Omega c_1-27\Omega c-15)}{3(\Omega+c)^2}\\ &+\frac{c_1(c+c_1)(64cc_1+24c_1^2+45c^2+24\Omega^2-3)}{3(\Omega+c)^2}, \end{split}\end{equation*} and \begin{equation*}\begin{split} B_2&\overset{\text{def}}{=} \frac{1}{5}B_1-\frac{(c+c_1)^2(2c_1-3\Omega)}{3(\Omega+c)}+\frac{2c(c+c_1)(64cc_1+24c_1^2+45c^2+24\Omega^2-3)}{12(\Omega+c)^2}\\ &=\frac{c^2(2-c^2)(3c^{10}+228c^8-540c^6-180c^4-13c^2+42)}{60(c^2+1)^6}. \end{split}\end{equation*} More details can be found in Appendix A.
Set $\eta := \eta_{00} + \varepsilon \eta_{10} + \varepsilon^2 \eta_{20}+ \varepsilon^3 \eta_{30}+ \mu \eta_{01} + \varepsilon\mu\eta_{11}+O(\varepsilon^4,\mu^2)$. Multiplying the equations \eqref{A-eta-00-eqn}, \eqref{A-eta-10-eqn}, \eqref{A-eta-01-eqn}, \eqref{A-eta-20-eqn}, \eqref{A-eta-30-eqn}, and \eqref{A-eta-11-eqn} by $1$, $\varepsilon$, $\mu$, $\varepsilon^2$, $\varepsilon^3$, and $\varepsilon \mu$, respectively, and then summing the results, we obtain the equation for $\eta$ up to the order $O(\varepsilon^4,\mu^2)$: \begin{equation}\label{etaepsilon3} \begin{split} &2(\Omega + c) \eta_\tau + 3 c^2 \eta \eta_\xi + \frac{c^2}{3} \mu \eta_{\xi\xi\xi} + \varepsilon A_1 \eta^2 \eta_\xi+\varepsilon^2 A_2 \eta^3 \eta_{\xi}+A_0\varepsilon^3\eta^4 \eta_{\xi} \\ &= \varepsilon \mu\bigg(A_3 \eta_{\xi}\eta_{\xi\xi} +A_4 \eta\eta_{\xi\xi\xi}\bigg) + O(\varepsilon^4,\mu^2), \end{split} \end{equation} where $c_1 =-\frac{3 c^3}{2 (c^2 + 1)}$ is defined in \eqref{c-0-0}, $ A_1 \overset{\text{def}}{=} - 2(3c+2c_1)(c+c_1)= \frac{3c^2(c^2-2)}{(c^2+1)^2}$,
$ A_2 \overset{\text{def}}{=} -\frac{(64cc_1+24c_1^2+45c^2-15)}{3(c+\Omega)}(c+c_1)= -\frac{c^2(2-c^2)(c^6-7c^4+5c^2-5)}{(c^2+1)^4}$, $ A_3 \overset{\text{def}}{=} \frac{2c^2}{3}+\frac{40c c_1}{9}+\frac{4 c_1^2}{3}= \frac{-c^2(9c^4+16c^2-2)}{3(c^2+1)^2}$, $ A_4 \overset{\text{def}}{=} \frac{c^2}{3}+\frac{20 c c_1}{9}+\frac{8 c_1^2}{9}=\frac{-c^2(3c^4+8c^2-1)}{3(c^2+1)^2}$, $A_{0} \overset{\text{def}}{=} \frac{c^2(c^2-2)(3c^{10}+228c^8-540c^6-180c^4-13c^2+42)}{12(c^2+1)^6}$.
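The closed-form expressions above can be checked against their defining combinations of $c$ and $c_1$; the following script is our own consistency check for the stated formulas of $A_1$, $A_3$ and $A_4$:

```python
# Numerical consistency check (ours) of the closed forms of A_1, A_3, A_4
# against their defining expressions, with c_1 = -3c^3/(2(c^2+1)).
def check(c):
    c1 = -3*c**3 / (2*(c**2 + 1))
    A1_raw = -2*(3*c + 2*c1)*(c + c1)
    A1_cf  = 3*c**2*(c**2 - 2) / (c**2 + 1)**2
    A3_raw = 2*c**2/3 + 40*c*c1/9 + 4*c1**2/3
    A3_cf  = -c**2*(9*c**4 + 16*c**2 - 2) / (3*(c**2 + 1)**2)
    A4_raw = c**2/3 + 20*c*c1/9 + 8*c1**2/9
    A4_cf  = -c**2*(3*c**4 + 8*c**2 - 1) / (3*(c**2 + 1)**2)
    return max(abs(A1_raw - A1_cf), abs(A3_raw - A3_cf), abs(A4_raw - A4_cf))

errs = [check(c) for c in (0.3, 0.5, 0.9, 1.0)]
print(errs)  # all at machine-precision level
```

A symbolic simplification would establish the identities exactly; the numerical check above merely guards against typographical slips in the coefficients.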
On the other hand, notice that $u_{00} = c \eta_{00}$, $u_{10} = c\eta_{10} - (c_1 + c)\eta^2_{00}$, $u_{01} = c \eta_{01}$, $u_{11} = c \eta_{11} - 2(c_1 + c)\eta_{00}\eta_{01} + \left( \frac{c}{6}-\frac{2c_1}{9}-\frac{cz^2}{2}\right)\eta_{00,\xi\xi}$, $u_{20}= c \eta_{20} - 2(c+c_1)(\eta_{00}\eta_{10})-\frac{2c_1-3\Omega}{3(c+\Omega)}(c+c_1)(\eta_{00}^3)$,
and \begin{equation*} \begin{split} u_{30}= c \eta_{30} - 2(c+c_1)(\eta_{00}\eta_{20}) &-(c+c_1)(\eta_{10}^2)-\frac{2c_1-3\Omega}{\Omega+c}(c+c_1)(\eta_{00}^2\eta_{10})\\ &-\frac{(64cc_1+24c_1^2+45c^2+24\Omega^2-3)}{24(c+\Omega)^2}(c+c_1)(\eta_{00}^4), \end{split} \end{equation*} we obtain \begin{equation*}\label{u-equation-12} \begin{split} &\eta_{00}=\frac{1}{c}u_{00},\, \eta_{10}=\frac{1}{c}u_{10}+\gamma_1 u^2_{00}, \, \eta_{01}=\frac{1}{c}u_{01},\, \eta_{20} =\frac{1}{c}u_{20}+ 2\gamma_1 u_{00}u_{10}+\gamma_2 u_{00}^3,\\ & \eta_{30}= \frac{1}{c}u_{30} +\gamma_1 u_{10}^2+ 2 \gamma_1 u_{00}u_{20}+ 3 \gamma_2 u_{00}^2u_{10}+\gamma_3 u_{00}^4,\\ & \eta_{11} = \frac{1}{c}u_{11} + 2 \gamma_1 u_{00}u_{01} +\gamma_4 u_{00,\xi\xi}, \end{split} \end{equation*} where $\gamma_1 \overset{\text{def}}{=} \frac{c_1 + c}{c^3}$, $\gamma_2 \overset{\text{def}}{=}\frac{2(c+c_1)^2}{c^5}+\frac{(2c_1-3\Omega)(c+c_1)}{3c^4(c+\Omega)}$, $\gamma_3\overset{\text{def}}{=}\frac{5(c+c_1)^3}{c^7}+\frac{5(2c_1-3\Omega)(c+c_1)^2}{3c^6(c+\Omega)} +\frac{(64cc_1+24c_1^2+45c^2+24\Omega^2-3)}{24c^5(c+\Omega)^2}(c+c_1)$, $\gamma_4 \overset{\text{def}}{=} -\left( \frac{1}{6c}-\frac{2c_1}{9c^2}-\frac{z^2}{2c}\right)$, or, equivalently, \begin{equation}\label{gamma-defi-2} \begin{split} &\gamma_1=\frac{2-c^2}{2c^2(c^2+1)}, \quad \gamma_2 =\frac{(c^2-1)(c^2-2)(2c^2+1)}{2c^3(c^2+1)^3},\\ &\gamma_3 =-\frac{(c^2-1)^2(c^2-2)(21c^4+16c^2+4)}{8c^4(c^2+1)^5}, \quad \gamma_4 =\frac{z^2}{2c}-\frac{3c^2+1}{6c(c^2+1)}.
\end{split} \end{equation} Therefore, it follows that \begin{equation*}\label{u-equation-13a} \begin{split} \eta &=\eta_{00} + \varepsilon \eta_{10}+ \varepsilon^2 \eta_{20} + \mu \eta_{01}+ \varepsilon^3 \eta_{30} + \varepsilon\mu\eta_{11}+O(\varepsilon^4,\mu^2)\\ &=\frac{1}{c}u_{00}+\varepsilon\bigg(\frac{1}{c}u_{10}+\gamma_1 u^2_{00}\bigg)+\varepsilon^2\bigg(\frac{1}{c}u_{20}+ 2\gamma_1 u_{00}u_{10}+\gamma_2 u_{00}^3\bigg)\\ &+\mu \frac{1}{c}u_{01}+\varepsilon\mu\bigg(\frac{1}{c}u_{11} + 2 \gamma_1 u_{00}u_{01} +\gamma_4 u_{00,\xi\xi}\bigg)\\ & +\varepsilon^3\bigg( \frac{1}{c}u_{30} +\gamma_1 u_{10}^2+ 2 \gamma_1 u_{00}u_{20}+ 3 \gamma_2 u_{00}^2u_{10}+\gamma_3 u_{00}^4\bigg) +O(\varepsilon^4,\mu^2), \end{split} \end{equation*} which along with $u =u_{00} + \varepsilon u_{10}+ \varepsilon^2 u_{20} + \mu u_{01}+ \varepsilon^3 u_{30} + \varepsilon\mu u_{11}+O(\varepsilon^4,\mu^2)$ yields \begin{equation}\label{eta-u} \begin{split}
\eta = \frac{1}{c}u + \gamma_1 \varepsilon u^2 + \gamma_2\varepsilon^2u^3 + \gamma_3\varepsilon^3u^4 + \gamma_4\varepsilon\mu u_{\xi\xi} +O(\varepsilon^4,\mu^2), \end{split} \end{equation} where $\gamma_i$ ($i=1, 2, 3, 4$) are defined in \eqref{gamma-defi-2} and the parameter $z \in [0, 1]$.
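As a consistency check (ours) of the closed forms in \eqref{gamma-defi-2}, one can compare $\gamma_1$, and $\gamma_4$ evaluated at $z=z_0$ with $z_0^2$ given by \eqref{z-0-value}, against their defining expressions:

```python
# Consistency check (ours) of the closed forms of gamma_1 and of gamma_4 at
# z = z_0, against their defining expressions, with c_1 = -3c^3/(2(c^2+1)).
def gammas(c):
    c1 = -3*c**3 / (2*(c**2 + 1))
    g1_raw = (c1 + c) / c**3
    g1_cf  = (2 - c**2) / (2*c**2*(c**2 + 1))
    z0sq = 0.5 - (2/3)/(c**2 + 1) + (4/3)/(c**2 + 1)**2
    g4_raw = -(1/(6*c) - 2*c1/(9*c**2) - z0sq/(2*c))
    g4_cf  = -(3*c**4 + 6*c**2 - 5) / (12*c*(c**2 + 1)**2)
    return max(abs(g1_raw - g1_cf), abs(g4_raw - g4_cf))

errs = [gammas(c) for c in (0.3, 0.5, 1.0)]
print(errs)  # all at machine-precision level
```

At $c=1$ (no rotation) both expressions give $\gamma_4 = -\tfrac{1}{12}$, matching the value quoted after \eqref{surface-1}.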
\begin{remark}\label{rmk-eta-form} From the above derivation, we know that, in the free-surface incompressible irrotational Euler equations in the equatorial region, the relation between the free surface $\eta$ and the horizontal velocity $u$ formally obeys equation \eqref{eta-u}, with or without the Coriolis effect. It also illustrates that all the classical models, such as the classical KdV equation, the BBM equation, and the (improved) Boussinesq equation, can also be formally derived from relation \eqref{eta-u} under the KdV regime $\varepsilon=O(\mu)$ in the equatorial region. \end{remark}
In the following steps, we derive the equation for $ u $ from expression \eqref{etaepsilon3}. \\ In view of \eqref{eta-u}, we have \begin{equation}\label{etaepsilon3-2} \begin{split} 2(\Omega + c) \eta_\tau =&\frac{2(\Omega + c)}{c} u_{\tau} + \frac{2(\Omega + c)(c_1+c)}{c^3}\varepsilon (u^2)_{\tau} + 2(\Omega + c)\gamma_2\varepsilon^2(u^3)_{\tau} \\ &+ 2(\Omega + c) \gamma_{3}\varepsilon^3(u^4)_{\tau}+ 2(\Omega + c)\gamma_4\varepsilon\mu u_{\tau\xi\xi} +O(\varepsilon^4,\mu^2), \end{split} \end{equation} and \begin{equation*} \begin{split}
3c^2\eta\eta_{\xi} &\;= \frac{3c^2}{2}\bigg((\frac{1}{c}u + \frac{c_1+c}{c^3}\varepsilon u^2 + \gamma_2\varepsilon^2u^3 +
\gamma_3 \varepsilon^3u^4)^2+ \gamma_4\varepsilon\mu u_{\xi\xi} \bigg)_{\xi}+O(\varepsilon^4,\mu^2) \\
&\; = \frac{3c^2}{2}\bigg(\frac{1}{c^2}u^2 + \frac{2(c_1+c)}{c^4}\varepsilon u^3 +( \frac{(c_1+c)^2}{c^6} + \frac{2}{c}\gamma_2)\varepsilon^2u^4 +\frac{2}{c} \gamma_4 \mu\varepsilon uu_{\xi\xi} \\
&\; \qquad\quad + (\frac{2}{c}\gamma_3 + \frac{2(c_1+c)}{c^3}\gamma_2)\varepsilon^3u^5\bigg)_{\xi}+O(\varepsilon^4,\mu^2). \\ \end{split} \end{equation*} Similarly, we may get \begin{equation*}
\frac{c^2}{3} \mu \eta_{\xi\xi\xi} =\frac{c^2}{3}\mu(\frac{1}{c}u + \frac{c_1+c}{c^3}\varepsilon u^2)_{\xi\xi\xi}+O(\varepsilon^4,\mu^2), \end{equation*} \begin{equation*} \begin{split} \varepsilon \mu\bigg(A_3 \eta_{\xi}\eta_{\xi\xi} +A_4 \eta\eta_{\xi\xi\xi}\bigg) =\varepsilon \mu\bigg(\frac{A_3 }{c^2}u_{\xi}u_{\xi\xi} +\frac{A_4 }{c^2} uu_{\xi\xi\xi}\bigg)+ O(\varepsilon^4,\mu^2), \end{split} \end{equation*} \begin{equation*} \begin{split}
A_1\varepsilon\eta^2\eta_{\xi} =\frac{A_{1}}{3}\varepsilon\bigg[ \frac{1}{c^3}u^3 + \frac{3(c_1+c)}{c^5}\varepsilon u^4 + (\frac{3(c_1+c)^2}{c^7}+\frac{3}{c^2}\gamma_2)\varepsilon^2 u^5 \bigg]_{\xi}+O(\varepsilon^4,\mu^2), \end{split} \end{equation*} \begin{equation*} \begin{split}
A_2\varepsilon^2\eta^3\eta_{\xi} = \frac{A_2}{4c^4}\varepsilon^2(u^4)_{\xi} + \frac{A_2(c_1+c)}{c^6}\varepsilon^3(u^5)_{\xi}+O(\varepsilon^4,\mu^2), \end{split} \end{equation*} and \begin{equation*} \begin{split} -5B_2\varepsilon^3\eta^4\eta_{\xi} = -\frac{B_2}{c^5}\varepsilon^3(u^5)_{\xi}+O(\varepsilon^4,\mu^2).\\ \end{split} \end{equation*} Hence, we deduce from the equation \eqref{etaepsilon3} that \begin{equation}\label{uepsilon3_A7} \begin{split}
u_{\tau}&\;+\frac{2(c_1+c)}{c^2}\varepsilon uu_{\tau} + 3\gamma_2 c\varepsilon^2u^2u_{\tau} + \gamma_4 c\varepsilon\mu u_{\tau\xi\xi}+4 \gamma_3 c\varepsilon^3u^3u_{\tau}
+ \frac{3c}{2(\Omega+c)}uu_{\xi} \\ &\;+ \frac{cA_5}{2(\Omega+c)}\varepsilon^2u^3u_{\xi} + \frac{cA_6}{2(\Omega+c)}\varepsilon u^2u_{\xi} + \frac{c^2}{6(\Omega+c)}\mu u_{\xi\xi\xi} + \frac{cA_{7}}{2(\Omega+c)}\varepsilon^3 u^4u_{\xi}\\ &\;+ (\frac{cA_8}{2(\Omega+c)}u_{\xi}u_{\xi\xi} + \frac{cA_{9}}{2(\Omega+c)}uu_{\xi\xi\xi})\varepsilon\mu = O(\varepsilon^4, \varepsilon^2\mu, \mu^2), \end{split} \end{equation} where $A_5 := \frac{6(c_1+c)^2}{c^4} + 12c \gamma_2 + \frac{4A_1(c_1+c)}{c^5} + \frac{A_2}{c^4}$, $A_6 := \frac{9(c_1+c)}{c^2} + \frac{A_1}{c^3}$, $A_8 := 3c \gamma_4+ \frac{2(c_1+c)}{c} - \frac{A_3}{c^2}$, $A_{9}:= 3c \gamma_4 + \frac{2(c_1+c)}{3c} - \frac{A_4}{c^2}$, and $A_{7}:= 5\bigg[ \frac{3}{2}c^2(\frac{2 }{c}\gamma_3+\frac{2(c_1+c)}{c^3}\gamma_2) + \frac{A_1}{3}(\frac{3}{c^7}(c_1+c)^2+\frac{3}{c^2}\gamma_2) + \frac{A_2(c_1+c)}{c^6} - \frac{B_2}{c^5} \bigg]$.
Hence, we obtain \begin{equation*} \begin{split}
\varepsilon uu_{\tau}&\; = -\varepsilon u \bigg( \frac{2(c_1+c)}{c^2}\varepsilon uu_{\tau} + 3 \gamma_2 c\varepsilon^2u^2u_{\tau} + \frac{3c}{2(\Omega+c)}uu_{\xi}
+ \frac{cA_5}{2(\Omega+c)}\varepsilon^2u^3u_{\xi} \\ &\;\qquad\qquad+ \frac{cA_6}{2(\Omega+c)}\varepsilon u^2u_{\xi} + \frac{c^2}{6(\Omega+c)}\mu u_{\xi\xi\xi}\bigg) + O(\varepsilon^4, \varepsilon^2\mu, \mu^2), \end{split} \end{equation*} which implies \begin{equation*} \begin{split}
\varepsilon u\bigg(1+\frac{2(c_1+c)}{c^2}\varepsilon u &\;+3\gamma_2 c\varepsilon^2u^2\bigg)u_{\tau} = -\varepsilon u \bigg( \frac{3c}{2(\Omega+c)}uu_{\xi}
+ \frac{cA_5}{2(\Omega+c)}\varepsilon^2u^3u_{\xi} \\ &\;+ \frac{cA_6}{2(\Omega+c)}\varepsilon u^2u_{\xi} + \frac{c^2}{6(\Omega+c)}\mu u_{\xi\xi\xi}\bigg) + O(\varepsilon^4, \varepsilon^2\mu, \mu^2). \end{split} \end{equation*} It follows that \begin{equation*} \begin{split} \varepsilon uu_{\tau}&\; = -\varepsilon u \bigg[1- (\frac{2(c_1+c)}{c^2}\varepsilon u+3\gamma_2 c\varepsilon^2u^2) + (\frac{2(c_1+c)}{c^2}\varepsilon u)^2 \bigg]
\bigg[ \frac{3c}{2(\Omega+c)}uu_{\xi} \\ &\;\quad+ \frac{cA_5}{2(\Omega+c)}\varepsilon^2u^3u_{\xi} + \frac{cA_6}{2(\Omega+c)}\varepsilon u^2u_{\xi} + \frac{c^2}{6(\Omega+c)}\mu u_{\xi\xi\xi}\bigg]
+ O(\varepsilon^4, \mu^2), \end{split} \end{equation*} and then \begin{equation}\label{uuT} \begin{split} \varepsilon uu_{\tau}&\; = -\varepsilon u\bigg[ \frac{3c}{2(\Omega+c)}uu_{\xi} + \frac{c^2}{6(\Omega+c)}\mu u_{\xi\xi\xi}
+ \frac{c^2A_6-6(c_1+c)}{2c(\Omega+c)}\varepsilon u^2u_{\xi} \\ &\;\quad + \frac{c^2A_5 - 2A_6(c_1+c) + 3c^2(\frac{4(c_1+c)^2}{c^4}-3 \gamma_2 c)}{2c(\Omega+c)}\varepsilon^2u^3u_{\xi} \bigg]+ O(\varepsilon^4, \mu^2), \end{split} \end{equation} \begin{equation} \label{u3uT} \begin{split} \varepsilon^2 u^2u_{\tau} =&\; -\varepsilon^2 u^2\bigg[ \frac{3c}{2(\Omega+c)}uu_{\xi} + \frac{c^2A_6-6(c_1+c)}{2c(\Omega+c)}\varepsilon u^2u_{\xi} \bigg]
+ O(\varepsilon^4, \varepsilon^2\mu, \mu^2), \\ \varepsilon^3 u^3u_{\tau} =&\; - \frac{3c}{2(\Omega+c)}\varepsilon^3 u^4 u_{\xi} + O(\varepsilon^4, \mu^2), \quad \varepsilon\mu u_{\tau\xi\xi}=- \frac{3c}{2(\Omega+c)} \varepsilon\mu (u u_{\xi})_{\xi\xi} + O(\varepsilon^4, \mu^2). \end{split} \end{equation} Decomposing $\varepsilon\mu u_{\tau\xi\xi}$ into $\varepsilon\mu (1-\nu)u_{\tau\xi\xi} + \varepsilon\mu\nu u_{\tau\xi\xi}$ for some constant $\nu$ (to be determined later), we get from \eqref{u3uT} that \begin{equation} \label{nu} \begin{split} \varepsilon\mu u_{\tau\xi\xi}= \varepsilon\mu (1-\nu)u_{\tau\xi\xi} - \frac{3c \nu}{2(\Omega+c)} \varepsilon\mu (u u_{\xi})_{\xi\xi} + O(\varepsilon^4, \mu^2). \end{split} \end{equation} Substituting \eqref{uuT}-\eqref{nu} into \eqref{uepsilon3_A7}, we obtain that \begin{equation*} \begin{split}
u_{\tau}&\;+ c\gamma_4(1-\nu)\mu\varepsilon u_{\tau\xi\xi} + \frac{3c}{2(\Omega+c)}uu_{\xi}+\frac{c^2}{6(\Omega+c)}\mu u_{\xi\xi\xi}-\frac{9c^2 \gamma_2}{2(\Omega+c)}\varepsilon^2u^3u_{\xi}\\ &\; - \frac{3c^2\gamma_4\nu}{2(\Omega+c)}\mu\varepsilon(u u_{\xi})_{\xi\xi} + \frac{2(c_1+c)}{c^2}\varepsilon\bigg[\frac{3c}{2(\Omega+c)}u^2u_{\xi} + \frac{c^2}{6(\Omega+c)}\mu uu_{\xi\xi\xi} \\ &\; + \frac{c^2A_6-6(c_1+c)}{2c(\Omega+c)}\varepsilon u^3u_{\xi}\bigg]+ \frac{cA_5}{2(\Omega+c)}\varepsilon^2u^3u_{\xi} + \frac{cA_6}{2(\Omega+c)}\varepsilon u^2u_{\xi} \\ &\; + \mu\varepsilon(\frac{cA_8}{2(\Omega+c)}u_{\xi}u_{\xi\xi} + \frac{cA_{9}}{2(\Omega+c)}uu_{\xi\xi\xi}) + A_{10}\varepsilon^3 u^4u_{\xi} = O(\varepsilon^4, \mu^2), \end{split} \end{equation*} where \begin{equation*} \begin{split} A_{10}:=&\; \frac{cA_{7}}{2(\Omega+c)}-\frac{(c_1+c)\bigg(c^2A_5 - 2A_6(c_1+c) + 3c^2(\frac{4(c_1+c)^2}{c^4}-3\gamma_2c)\bigg)}{c^3(\Omega+c)} \\ &\;- \frac{3 \gamma_2(c^2A_6-6(c_1+c))+12c^2\gamma_3}{2(\Omega+c)}, \end{split} \end{equation*} which implies \begin{equation}\label{uepsi3abb} \begin{split} u_{\tau}&\; + \frac{3c^2}{c^2+1}uu_{\xi}+\frac{c^3}{3(c^2+1)}\mu u_{\xi\xi\xi} + c \gamma_4(1-\nu)\mu\varepsilon u_{\tau\xi\xi}
+ A_{11}\varepsilon u^2u_{\xi} \\ &\; + A_{12}\varepsilon^2u^3u_{\xi}
+ A_{10}\varepsilon^3u^4u_{\xi} + \mu\varepsilon\bigg[A_{13}uu_{\xi\xi\xi} + A_{14}u_{\xi}u_{\xi\xi}\bigg] = O(\varepsilon^4, \varepsilon^2\mu, \mu^2), \end{split} \end{equation} where $A_{11} := \frac{c^2A_6-6(c_1+c)}{2c(\Omega+c)}=\frac{-3c(c^2-1)(c^2-2)}{2(c^2+1)^3}$, $A_{12} := \frac{cA_5}{2(\Omega+c)} - \frac{9c^2 \gamma_2}{2(\Omega+c)} - \frac{2(c_1+c)}{c^2}\frac{c^2A_6-6(c_1+c)}{2c(\Omega+c)}=\frac{(c^2-1)^2(c^2-2)(8c^2-1)}{2(c^2+1)^5}$, $A_{13} := \frac{cA_{9}}{2(\Omega+c)}- \frac{3c^2 \gamma_4 \nu}{2(\Omega+c)} - \frac{c_1+c}{3(\Omega+c)}=\frac{3c^3 \gamma_4}{(c^2+1)}(1-\nu) +\frac{c^2(3c^4+8c^2-1)}{3(c^2+1)^3}$, $A_{14} := \frac{cA_8}{2(\Omega+c)} - \frac{9c^2\gamma_4\nu}{2(\Omega+c)}= \frac{3c^3}{(c^2+1)}\gamma_4(1-3\nu)+\frac{c^2(6c^4+19c^2+4)}{3(c^2+1)^3}$.
Returning to the original variables via the transformation $ x=\varepsilon^{-\frac{1}{2}}\xi+c\varepsilon^{-\frac{3}{2}}\tau,\quad t = \varepsilon^{-\frac{3}{2}}\tau$, we have \begin{equation*} \begin{split} \frac{\partial}{\partial\xi} = \varepsilon^{-\frac{1}{2}}\partial_x, \quad \frac{\partial}{\partial \tau} = \varepsilon^{-\frac{3}{2}}(c\partial_x+ \partial_t). \end{split} \end{equation*} Hence, under this transformation, equation \eqref{uepsi3abb} can be written as \begin{equation*} \begin{split} &\; u_t + c u_x + \frac{3c^2}{c^2+1}\varepsilon uu_{x} + A_{11}\varepsilon^2 u^2u_{x}+ A_{12}\varepsilon^3u^3u_{x}+ c\gamma_4(1-\nu)\mu u_{txx} \\ &\; + \Big(\frac{c^3}{3(c^2+1)} - c^2 \gamma_4(1-\nu)\Big)\mu u_{xxx} + \mu\varepsilon\bigg(A_{13}uu_{xxx} + A_{14}u_{x}u_{xx}\bigg) = O(\varepsilon^4, \mu^2). \end{split} \end{equation*}
In order to get the R-CH equation, we need \begin{equation*} \begin{split} &\frac{2c^2}{(c^2+1)}c \gamma_4(1-\nu)=2A_{13}=A_{14}, \end{split} \end{equation*} which yields \begin{equation}\label{gamma-4-1} \begin{split}
&\frac{2c^3}{(c^2+1)}\gamma_4 =\frac{-c^2(3c^4+6c^2-5)}{6(c^2+1)^3} \end{split} \end{equation} and then \begin{equation*} \begin{split} \frac{2c^2}{(c^2+1)}c \gamma_4(1-\nu)=2A_{13}=A_{14}=\frac{-c^2(3c^4+8c^2-1)}{3(c^2+1)^3}. \end{split} \end{equation*} This enables us to derive the R-CH equation in the form \begin{equation*}\label{R-CH-1-1} \begin{split}
u_t -\beta\mu u_{xxt} + c u_x + 3\alpha\varepsilon uu_x - \beta_0\mu u_{xxx} &+ \omega_1 \varepsilon^2u^2u_x + \omega_2 \varepsilon^3u^3u_x \\ &= \alpha\beta\varepsilon\mu( 2u_{x}u_{xx}+uu_{xxx}). \end{split} \end{equation*} Combining \eqref{gamma-4-1} and \eqref{gamma-defi-2}, it is found that the height parameter $z$ in $\gamma_4$ may take the value \begin{equation}\label{height-2} z_0 =\bigg(\frac {1}{2} - \frac{2}{3} \frac{1}{(c^2 + 1)} + \frac{4}{3} \frac{1}{(c^2 + 1)^2}\bigg)^{1/2}. \end{equation}
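The algebra leading to \eqref{gamma-4-1} can be cross-checked symbolically. The following SymPy sketch (our own verification, not part of the derivation; the variable names $a$, $b$ are ours) solves the matching conditions $\frac{2c^2}{c^2+1}c\gamma_4(1-\nu)=2A_{13}=A_{14}$, using the closed forms of $A_{13}$ and $A_{14}$ recorded after \eqref{uepsi3abb}, and recovers \eqref{gamma-4-1} together with the common value $-\frac{c^2(3c^4+8c^2-1)}{3(c^2+1)^3}$:

```python
import sympy as sp

c = sp.symbols('c', positive=True)
a, b = sp.symbols('a b')  # stand-ins: a = gamma_4, b = gamma_4 * nu

# closed-form pieces of A_13 and A_14 quoted after (uepsi3abb)
P = c**2*(3*c**4 + 8*c**2 - 1)/(3*(c**2 + 1)**3)
Q = c**2*(6*c**4 + 19*c**2 + 4)/(3*(c**2 + 1)**3)
G = 3*c**3/(c**2 + 1)
A13 = G*(a - b) + P        # A_13 = (3c^3/(c^2+1)) gamma_4 (1 - nu) + P
A14 = G*(a - 3*b) + Q      # A_14 = (3c^3/(c^2+1)) gamma_4 (1 - 3 nu) + Q
lhs = sp.Rational(2, 3)*G*(a - b)  # (2c^2/(c^2+1)) c gamma_4 (1 - nu)

# matching conditions lhs = 2 A_13 = A_14 are linear in (a, b)
sol = sp.solve([sp.Eq(lhs, 2*A13), sp.Eq(2*A13, A14)], [a, b])
gamma4 = sp.simplify(sol[a])

# recovers (gamma-4-1): 2c^3 gamma_4/(c^2+1) = -c^2(3c^4+6c^2-5)/(6(c^2+1)^3)
assert sp.simplify(2*c**3*gamma4/(c**2 + 1)
                   + c**2*(3*c**4 + 6*c**2 - 5)/(6*(c**2 + 1)**3)) == 0
# common value of the matching conditions: -c^2(3c^4+8c^2-1)/(3(c^2+1)^3)
assert sp.simplify(lhs.subs(sol) + P) == 0
```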
\renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0}
\section{Local well-posedness} \label{sec:local}
Our attention in this section is now turned to the local well-posedness issue for the R-CH equation. Recall the R-CH equation \eqref{R-CH-1} in terms of the evolution of $m$, namely, the equation \eqref{R-CH-m}. Applying the transformation $u_{\varepsilon, \mu}(t, x) = \alpha \varepsilon u(\sqrt{\beta \mu}\,t,\sqrt{\beta \mu}\,x)$ to \eqref{R-CH-m}, we find that $u_{\varepsilon, \mu}(t, x)$ solves \begin{equation*} \begin{split} u_t - u_{xxt} + c u_x + 3 u u_x - \frac{\beta_0}{\beta} u_{xxx} +\frac{\omega_1}{\alpha^2}u^2 u_x+ \frac{\omega_2}{\alpha^3}u^3 u_x = 2 u_{x} u_{xx} + u u_{xxx}, \end{split} \end{equation*} and its corresponding three conserved quantities (still denoted by $I(u)$, $E(u)$, and $F(u)$) are as follows \[ I(u) =\int_{\mathbb{R}} u\, dx, \quad E(u)=\frac{1}{2}\int_{\mathbb{R}} u^2+u_x^2\,dx, \] and \[
F(u)=\frac{1}{2}\int_{\mathbb{R}} c u^2+ u^3+\frac{\beta_0}{\beta}u_x^2+\frac{\omega_1}{6\alpha^2}u^4 + \frac{\omega_2}{10\alpha^3}u^5 +uu^2_x\,dx. \] We also have two further forms of the equation, \begin{equation*} \begin{cases} m_t + u m_x + 2 u_x m + c u_x - \frac{\beta_0}{\beta} u_{xxx} +\frac{\omega_1}{\alpha^2}u^2 u_x+ \frac{\omega_2}{\alpha^3}u^3 u_x = 0,\\ m = u - u_{xx}, \end{cases} \end{equation*} and \begin{equation}\label{weak-RCH} u_t + u u_x+\frac{\beta_0}{\beta}u_x + p * \partial_x\left\{\left(c-\frac{\beta_0}{\beta}\right)u + u^2+\frac{1}{2}u_x^2+\frac{\omega_1}{3\alpha^2}u^3+\frac{\omega_2}{4\alpha^3}u^4\right\}=0, \end{equation}
where $p = \frac{1}{2}e^{-|x|}$.
Now we are in a position to state the local well-posedness result for the following Cauchy problem, which may be obtained similarly to \cite{CL09, Danchin01} (up to slight modifications). \begin{equation}\label{rCH-Cauchy} \begin{cases} u_t - u_{xxt} + c u_x + 3 u u_x - \frac{\beta_0}{\beta} u_{xxx} +\frac{\omega_1}{\alpha^2}u^2 u_x+ \frac{\omega_2}{\alpha^3}u^3 u_x = 2 u_{x} u_{xx} + u u_{xxx},\\
u|_{t = 0} = u_0. \end{cases} \end{equation}
\begin{theorem}\label{local} Let $u_0 \in H^{s}(\mathbb{R})$ with $s > \frac{3}{2}$. Then there exists a time $T>0$ and a unique solution $u \in C([0, T]; H^s(\mathbb{R})) \cap C^1([0, T]; H^{s-1}(\mathbb{R}))$ to the Cauchy problem \eqref{rCH-Cauchy} with $u(0)=u_0$. Moreover, the solution $u$ depends continuously on the initial value $u_0$. In addition, the Hamiltonians $I(u)$, $E(u)$, and $F(u)$ are conserved for $t \in [0, T]$. \end{theorem}
Thanks to the scaling of the solution $u_{\varepsilon, \mu}(t, x) = \alpha \varepsilon u(\sqrt{\beta \mu}\,t,\sqrt{\beta \mu}\,x)$, the existence time for Equation \eqref{R-CH-1} is of order $\frac{T}{\varepsilon}$.
Motivated by the method in \cite{Danchin01}, the following blow-up criterion can also be derived; we omit the details of its proof. \begin{theorem}[Blow-up criterion] Let $s > \frac{3}{2}$, $u_0 \in H^s$ and $u$ be the corresponding solution to \eqref{rCH-Cauchy} as in Theorem \ref{local}. Assume $T^*_{u_0}$ is the maximal time of existence. Then \begin{equation}\label{blowup-criterion-1}
T^{\ast}_{u_0} < \infty \quad \Rightarrow \quad \int_0^{T^{\ast}_{u_0}} \|\partial_x u(\tau)\|_{L^{\infty}} d\tau = \infty. \end{equation} \end{theorem} \begin{remark}\label{rmk-blowup-1} The blow-up criterion \eqref{blowup-criterion-1} implies that the lifespan $T^{\ast}_{u_0}$ does not depend on the regularity index $s$ of the initial data $u_0$. \end{remark}
Now we return to the original R-CH \eqref{R-CH-1}, and let \begin{equation*}
\|u\|^2_{X^{s+1}_{\mu}} = \|u\|^2_{H^s} + \mu \beta \|\partial_x u\|^2_{H^s}. \end{equation*} For some $\mu_0 > 0$ and $M > 0$, we define the Camassa-Holm regime $\mathcal{P}_{\mu_0, M} := \{(\varepsilon, \mu): 0<\mu \leq \mu_0, 0<\varepsilon \leq M \sqrt{\mu}\}$. Then, we have the following corollary. \begin{cor}(\cite{CL09})
Let $u_0 \in H^{s+1}(\mathbb{R})$, $\mu_0 > 0$, $M > 0$, and $s > \frac{3}{2}$. Then there exist $T > 0$ and a unique family of solutions $\left(u_{\varepsilon,\mu}\right)_{(\varepsilon,\mu) \in \mathcal{P}_{\mu_0, M}}$ in $C\left(\left[0,\frac{T}{\varepsilon}\right];X^{s+1}_{\mu}(\mathbb{R})\right) \cap C^1\left(\left[0,\frac{T}{\varepsilon}\right];X^{s}_{\mu}(\mathbb{R})\right)$ to the Cauchy problem \begin{equation*} \begin{cases} &\partial_t u-\beta\mu \partial_t u_{xx} + c u_x + 3\alpha\varepsilon uu_x - \beta_0\mu u_{xxx} + \omega_1 \varepsilon^2u^2u_x + \omega_2 \varepsilon^3u^3u_x \\ &\qquad\qquad \qquad\qquad \qquad\qquad \qquad\qquad \qquad= \alpha\beta\varepsilon\mu( 2u_{x}u_{xx}+uu_{xxx}),\\
&u|_{t = 0} = u_0. \end{cases} \end{equation*} \end{cor}
\renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0}
\section{Wave-breaking phenomena} \label{breaking}
Using the energy estimates, we can further obtain the following wave-breaking criterion for the R-CH equation.
\begin{theorem}[Wave breaking criterion]\label{thm-wavebreak-crt} Let $u_0 \in H^s(\mathbb{R})$ with $s > \frac{3}{2}$, and $T^{\ast}_{u_0} >0$ be the maximal existence time of the solution $u$ to the system \eqref{rCH-Cauchy} with initial data $u_0$ as in Theorem \ref{local}. Then the corresponding solution blows up in finite time if and only if \begin{equation}\label{wavw-breaking-condition} \liminf_{t \uparrow T^{\ast}_{u_0}, x \in \mathbb{R}} u_x(t, x) = - \infty. \end{equation} \end{theorem} \begin{proof} Applying Theorem \ref{local}, Remark \ref{rmk-blowup-1}, and a simple density argument, we only need to show that Theorem \ref{thm-wavebreak-crt} holds for some $s \geq 3.$ Here we assume $s = 3$ to prove the above theorem.
Multiplying the first equation in \eqref{rCH-Cauchy} by $u$ and integrating by parts, we get \begin{equation}\label{4.1-1}
\frac{1}{2}\frac{d}{dt}\|u\|_{H^1}^2=0, \end{equation} and then for any $t\in (0, T^{\ast}_{u_0})$ \begin{equation}\label{4.1-1c}
\|u(t)\|_{H^1}=\|u_0\|_{H^1}. \end{equation} On the other hand, multiplying the first equation in \eqref{rCH-Cauchy} by $u_{xx}$ and integrating by parts again, we obtain \begin{equation}\label{4.4-1b} \begin{split}
\frac{1}{2}\frac{d}{dt}\|u_x\|_{H^1}^2&=-\frac{3}{2}\int_{\mathbb{R}}u_{x}(u_{x}^2+u_{xx}^2) dx-\int_{\mathbb{R}} (\frac{\omega_1}{\alpha^2}u^2u_x+\frac{\omega_2}{\alpha^3}u^3u_x)u_{xx} \,dx\\ &\leq-\frac{3}{2}\int_{\mathbb{R}}u_{x}(u_{x}^2+u_{xx}^2)
dx+\int_{\mathbb{R}} \big|\frac{\omega_1}{2\alpha^2}u^2+\frac{\omega_2}{2\alpha^3}u^3\big|(u_x^2+u_{xx}^2) \,dx. \end{split} \end{equation} Assume that $T^{\ast}_{u_0} < +\infty$ and there exists $M > 0$ such that \begin{equation}\label{4.4-1-1a} u_x(t, x) \geq -M, \quad \forall \, (t, \, x) \in [0, T^{\ast}_{u_0}) \times \mathbb{R}. \end{equation}
It then follows from \eqref{4.1-1}, \eqref{4.1-1c}, and \eqref{4.4-1b} that \begin{equation}\label{4.9-1} \begin{split} \frac{d}{dt}\int_{\mathbb{R}}(u^2+2u_{x}^2+u_{xx}^2)
dx &\leq (\frac{3}{2}M+\frac{|\omega_1|}{2\alpha^2}\|u\|_{L^{\infty}}^2+\frac{|\omega_2|}{2|\alpha|^3}\|u\|_{L^{\infty}}^3) \int_{\mathbb{R}}(u_{x}^2+u_{xx}^2)\,dx\\
& \leq C(1+M+\|u\|_{H^1}^3)\int_{\mathbb{R}}(u_{x}^2+u_{xx}^2)\,dx, \end{split} \end{equation} where we used the Sobolev embedding theorem $H^{s}(\mathbb{R}) \hookrightarrow L^{\infty}(\mathbb{R})$ (with $s>\frac{1}{2}$) in the last inequality. Applying Gronwall's inequality to \eqref{4.9-1} yields for every $t \in [0, \, T^{\ast}_{u_0})$ \begin{equation}\label{4.10-1}
\|u(t)\|_{H^{2}}^2 \leq 2\|u_0\|_{H^{2}(\mathbb{R})}^2 e^{C t(1+M+\|u_0\|_{H^{1}}^3)}
\leq 2\|u_0\|_{H^{2}(\mathbb{R})}^2 e^{C T^{\ast}_{u_0}(1+M+\|u_0\|_{H^{1}}^3)} . \end{equation} Differentiating the first equation in \eqref{rCH-Cauchy} with respect to $x$, multiplying the resulting equation by $u_{xxx}$, and then integrating by parts, we get \begin{equation*}\label{4.11-1} \begin{split} & \frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}}(u_{xx}^2+u_{xxx}^2)\, dx\\ &=-\frac{15}{2}\int_{\mathbb{R}}u_{x}u_{xx}^2 dx-\frac{5}{2}\int_{\mathbb{R}}u_{x}u_{xxx}^2 dx-\int_{\mathbb{R}} (\frac{\omega_1}{\alpha^2}u^2u_x+\frac{\omega_2}{\alpha^3}u^3u_x)_{x}u_{xxx} \,dx\\
& \leq C(1+M+\|u\|_{L^{\infty}}^3) \int_{\mathbb{R}}(u_{xx}^2+u_{xxx}^2) \,dx+C(\|u\|_{L^{\infty}}^2+\|u\|_{L^{\infty}}^4)\|u_x\|_{L^4}^4, \end{split} \end{equation*} where we have used the assumption \eqref{4.4-1-1a}. It then follows from the Sobolev embedding theorem and the interpolation inequality
$\|f\|_{L^4(\mathbb{R})} \leq C \|f\|_{L^2(\mathbb{R})}^{\frac{3}{4}} \|f_x\|_{L^{2}(\mathbb{R})}^{\frac{1}{4}} $
that \begin{equation*}\label{4.11-1a} \begin{split}
&\frac{d}{dt}\int_{\mathbb{R}}(u_{xx}^2+u_{xxx}^2)\, dx\leq C(1+M+\|u_0\|_{H^{1}}^3) \int_{\mathbb{R}}(u_{xx}^2+u_{xxx}^2) \,dx\\
&\qquad\qquad\qquad \qquad\qquad+C\|u_0\|_{H^{1}}^5(1+\|u_0\|_{H^{1}}^2)\|u_{xx}\|_{L^2}\\
&\leq C(1+M+\|u_0\|_{H^{1}}^{14}) \int_{\mathbb{R}}(u_{xx}^2+u_{xxx}^2) \,dx. \end{split} \end{equation*} Hence, applying Gronwall's inequality implies that for every $t \in [0, \, T^{\ast}_{u_0})$ \begin{equation*}\label{4.14-1} \begin{split}
\int_{\mathbb{R}}(u_{xx}^2+u_{xxx}^2)
dx \leq e^{C(1+M+\|u_0\|_{H^{1}}^{14}) T^{\ast}_{u_0}}\int_{\mathbb{R}}(u_{0xx}^2+u_{0xxx}^2) dx, \end{split} \end{equation*} which, together with \eqref{4.10-1}, yields that for every $t \in [0, \, T^{\ast}_{u_0})$, \begin{equation*}
\|u(t)\|_{H^{3}(\mathbb{R})}^2
\leq 3\|u_0\|_{H^{3}(\mathbb{R})}^2 e^{C (1+M+\|u_0\|_{H^{1}}^{14}) T^{\ast}_{u_0}}. \end{equation*} This contradicts the assumption that the maximal existence time $T^{\ast}_{u_0}<+\infty.$
Conversely, the Sobolev embedding theorem $H^{s}(\mathbb{R}) \hookrightarrow L^{\infty}(\mathbb{R})$ (with $s>\frac{1}{2}$) implies that if \eqref{wavw-breaking-condition} holds, the corresponding solution blows up in finite time, which completes the proof of Theorem \ref{thm-wavebreak-crt}. \end{proof}
Recall the R-CH equation \eqref{weak-RCH}, namely, \begin{equation*} u_t + u u_x+\frac{\beta_0}{\beta}u_x + p_x \ast \left (\left(c-\frac{\beta_0}{\beta}\right)u + u^2+\frac{1}{2}u_x^2+\frac{\omega_1}{3\alpha^2}u^3 + \frac{\omega_2}{4\alpha^3}u^4 \right )=0, \end{equation*}
where $p = \frac{1}{2}e^{-|x|}$. The wave-breaking phenomenon can now be illustrated by choosing suitable initial data.
\begin{theorem}[Wave breaking data]\label{Blow-up} Suppose $u_0 \in H^s$ with $s > 3/2$. Let $T > 0$ be the maximal time of existence of the corresponding solution $u(t, x)$ to \eqref{weak-RCH} with the initial data $u_0$. Assume there is $x_0 \in \mathbb{R}$ such that \begin{equation*}
u_{0,x}(x_0) < - \left | u_0(x_0) - \frac{1}{2} \left ( \frac{\beta_0}{\beta} - c \right ) \right | -\sqrt{2}C_0, \end{equation*} where $ C_0 > 0 $ is defined by \begin{equation} \label{Blow-up data}
C_0^2 = \frac{|\omega_1|}{2 \alpha^2} E_0^{\frac{3}{2}} + \frac{ |\omega_2|}{2 \alpha^3} E_0^2, \end{equation} and $$ E_0 = \frac{1}{2} \int_{\mathbb{R}} \left ( u_0^2 + (\partial_xu_0)^2 \right ) dx. $$ Then the solution $u(t, x)$ breaks down at a time \begin{equation*} T \leq \frac{2}{\sqrt{u_{0,x}^2(x_0) - \left (u_0(x_0) - \frac{1}{2} \left ( \frac{\beta_0}{\beta} - c \right ) \right )^2 }-\sqrt{2}C_0}. \end{equation*} \end{theorem} \begin{remark} In the case of the rotation frequency $ \Omega = 0 $ or the wave speed $ c = 1, $ the corresponding constant $ C_0 $ in \eqref{Blow-up data} must be zero, because the parameters $ \omega_1 $ and $ \omega_2 $ vanish. The wave-breaking assumption then reduces to that for the classical CH equation. \end{remark}
\begin{proof} Applying the translation $ u(t, x) \mapsto u(t, x - \frac{\beta_0}{\beta} t) $ to equation \eqref{weak-RCH} yields the equation in the form \begin{equation} \label{weak-RCH-2} u_t + u u_x + p_x \ast \left (\left(c-\frac{\beta_0}{\beta}\right)u + u^2+\frac{1}{2}u_x^2+\frac{\omega_1}{3\alpha^2}u^3 + \frac{\omega_2}{4\alpha^3}u^4 \right )=0. \end{equation} Applying $\partial_x$ to \eqref{weak-RCH-2}, we have \begin{equation}\label{u-xt} \begin{split} u_{xt} + u u_{xx} = & - \frac{1}{2}u^2_x + u^2 + \left(c-\frac{\beta_0}{\beta}\right)u+\frac{\omega_1}{3\alpha^2}u^3 + \frac{\omega_2}{4\alpha^3}u^4 \\ &- p \ast \left (\left(c-\frac{\beta_0}{\beta}\right)u + u^2+\frac{1}{2}u_x^2+\frac{\omega_1}{3\alpha^2}u^3 + \frac{\omega_2}{4\alpha^3}u^4 \right ). \end{split} \end{equation} We introduce the associated Lagrangian scales of \eqref{weak-RCH-2} as \begin{equation*} \begin{cases} \frac{\partial q}{\partial t} = u(t, q), & 0 < t < T,\\ q(0, x) = x, & x \in \mathbb{R}, \end{cases} \end{equation*} where $u \in C^1([0,T), H^{s-1})$ is the solution to equation \eqref{weak-RCH-2} with initial data $u_0 \in H^s$, $s> 3/2$. Along the trajectory of $q(t, x_0)$, \eqref{weak-RCH-2} and \eqref{u-xt} become \begin{gather*} \frac{\partial u(t,q)}{\partial t} = - p_x \ast \left ( \left(c-\frac{\beta_0}{\beta}\right)u + u^2+\frac{1}{2}u_x^2+\frac{\omega_1}{3\alpha^2}u^3 +\frac{\omega_2}{4\alpha^3}u^4 \right ),\\ \begin{split} \frac{\partial u_x(t,q)}{\partial t} = - \frac{1}{2}u^2_x + u^2 + & \left(c-\frac{\beta_0}{\beta}\right)u + \frac{\omega_1}{3\alpha^2}u^3 +\frac{\omega_2}{4\alpha^3}u^4 \\ & - p \ast \left ( \left(c-\frac{\beta_0}{\beta}\right)u + u^2+\frac{1}{2}u_x^2+\frac{\omega_1}{3\alpha^2}u^3 +\frac{\omega_2}{4\alpha^3}u^4 \right ). 
\end{split} \end{gather*} Denote now at $ (t, q(t, x_0)),$ \begin{equation*} M(t) = u(t,q) - \frac{k}{2} - u_x(t, q) \quad \text{and} \quad N(t) = u(t, q) - \frac{k}{2} + u_x(t, q), \end{equation*} where $ k = \frac{\beta_0}{\beta} - c. $ Recall the two convolution operators $p_+$, $p_-$ as \begin{equation*} \begin{split} & p_+ \ast f (x) = {e^{-x} \over 2} \int^x_{-\infty} e^y f(y) dy,\\ & p_- \ast f(x) = {e^{x}\over 2} \int^\infty_{x} e^{-y} f(y) dy \end{split} \end{equation*} and the relation \begin{equation*} p = p_+ + p_-, \qquad p_x = p_- - p_+. \end{equation*} Applying \cite[Lemma 3.1 (1)]{BrCo2} with $m = - k^2/4$ and $K = 1$ we have the following convolution estimates \begin{equation*} p_\pm \ast \left ( u^2 - ku + {1\over 2}u^2_x \right ) \geq {1\over 4} \left ( u^2 - ku - {k^2 \over 4} \right ). \end{equation*} It then follows that at $(t, q(t, x_0))$, \begin{equation*} \begin{split} \frac{\partial M}{\partial t} =&\, \frac{1}{2} u^2_x - u^2 + k u - \frac{\omega_1}{3\alpha^2}u^3 - \frac{\omega_2}{4\alpha^3}u^4 \\ &\, + 2 p_{+} \ast \left (- ku + u^2+\frac{1}{2}u_x^2+\frac{\omega_1}{3\alpha^2}u^3 + \frac{\omega_2}{4\alpha^3}u^4 \right )\\ \geq & \, \frac{1}{2} \left ( u_x^2 - \left (u - \frac{k}{2} \right )^2 \right ) - \frac{\omega_1}{3\alpha^2}u^3 - \frac{\omega_2}{4\alpha^3}u^4 + 2 p_{+} \ast \left(\frac{\omega_1}{3\alpha^2}u^3 + \frac{\omega_2}{4\alpha^3}u^4 \right ) \\ = &\, -\frac{1}{2} MN - \frac{\omega_1}{3\alpha^2}u^3 - \frac{\omega_2}{4\alpha^3}u^4 + 2 p_{+} \ast \left ( \frac{\omega_1}{3\alpha^2}u^3 + \frac{\omega_2}{4\alpha^3}u^4 \right ) \end{split} \end{equation*} \begin{equation*} \begin{split} \frac{\partial N}{\partial t} = &\, - \frac{1}{2} u^2_x + u^2 - k u + \frac{\omega_1}{3\alpha^2}u^3 + \frac{\omega_2}{4\alpha^3}u^4 \\ &\, - 2 p_{-} \ast \left ( - k u + u^2+\frac{1}{2}u_x^2+\frac{\omega_1}{3\alpha^2}u^3 +\frac{\omega_2}{4\alpha^3}u^4 \right )\\ \leq &\, -\frac{1}{2}\left ( u_x^2 - \left ( u - \frac{k}{2} \right )^2 \right ) + 
\frac{\omega_1}{3\alpha^2}u^3 +\frac{\omega_2}{4\alpha^3}u^4 - 2 p_{-} * \left ( \frac{\omega_1}{3\alpha^2}u^3 + \frac{\omega_2}{4\alpha^3}u^4 \right ) \\ =&\, \frac{1}{2}MN + \frac{\omega_1}{3\alpha^2}u^3 + \frac{\omega_2}{4\alpha^3}u^4 - 2 p_{-} \ast \left (\frac{\omega_1}{3\alpha^2}u^3 + \frac{\omega_2}{4\alpha^3}u^4 \right ) \end{split} \end{equation*} The terms with $ \omega_1 $ and $ \omega_2 $ in the right sides of the above estimates can be bounded by \begin{equation*} \begin{split}
\left| \frac{\omega_1}{3\alpha^2}u^3 \right . & \left . + \frac{\omega_2}{4\alpha^3}u^4 \mp 2p_{\pm} \ast \left (\frac{\omega_1}{3\alpha^2}u^3 + \frac{\omega_2}{4\alpha^3}u^4 \right ) \right|\\
\leq & \, \frac{|\omega_1|}{3\alpha^2} \|u\|_{L^\infty}^3 + \frac{|\omega_2|}{4\alpha^3} \|u\|_{L^\infty}^4 + \|u\|_{L^\infty} \left(\frac{|\omega_1|}{3\alpha^2} \|u\|^2_{L^2}\right) + \|u\|^2_{L^\infty} \left(\frac{|\omega_2|}{4\alpha^3} \|u\|^2_{L^2}\right) \\
\leq&\, \frac {|\omega_1| }{ 2 \alpha^2} E_0^{\frac{3}{2}} + \frac{|\omega_2|} {2 \alpha^3} E_0^2 = C^2_0 > 0, \end{split} \end{equation*} where use has been made of the fact that \begin{equation*}
\| p_\pm \|_{L^\infty} = {1\over 2}, \quad \| p_\pm \|_{L^2} = {1\over 2\sqrt{2}}. \end{equation*} In consequence, we have \begin{equation}\label{AB} \begin{cases} \frac{d M}{d t} \geq - \frac{1}{2}MN - C^2_0,\\ \frac{d N}{d t} \leq \frac{1}{2}MN + C^2_0. \end{cases} \end{equation} By the assumptions on $ u_0(x_0) $, it is easy to see that \begin{equation*} M(0) = u_0(x_0) - \frac{k}{2} - u_{0,x}(x_0) > 0, \; N(0) = u_0(x_0) - \frac{k}{2} + u_{0,x}(x_0) < 0, \; \frac{1}{2}M(0)N(0) + C^2_0 < 0. \label{AB-0} \end{equation*} The continuity of $M(t)$ and $N(t)$ then ensures that \begin{equation*} \frac{d M}{d t} > 0,\quad \frac{d N}{d t} < 0,\quad \forall t \in [0, T). \label{AB-1} \end{equation*} This in turn implies that \begin{equation*} M(t) > M(0) > 0,\quad N(t) < N(0) < 0, \quad \forall t \in [0,T). \label{AB-2} \end{equation*} Let $h(t) = \sqrt{-M(t)N(t)}$. It then follows from \eqref{AB} that \begin{equation*} \begin{split} \frac{d h}{d t} =\frac{-M'(t)N(t) -M(t) N'(t)}{2 h}\geq &\,\frac{\left(-\frac{1}{2}MN-C^2_0\right)(-N)-M\left(\frac{1}{2}MN+C^2_0\right)}{2h}\\ =&\,\frac{M-N}{2h}\left(-\frac{1}{2}MN-C^2_0\right). \end{split} \end{equation*} Using the estimate $\frac{M-N}{2h} \geq 1$ and the fact that $h+\sqrt{2}C_0 > h - \sqrt{2}C_0 > 0$, we obtain the following differential inequalities \begin{equation*} \begin{split} \frac{d h}{d t} \geq&-\frac{1}{2}MN -C^2_0 =\frac{1}{2}(h-\sqrt{2}C_0)(h+\sqrt{2}C_0) \geq \frac{1}{2}(h - \sqrt{2}C_0)^2. \end{split} \end{equation*} Solving this inequality gives \begin{equation*} t \leq \frac{2}{\sqrt{u_{0,x}(x_0)^2-( u_0(x_0) - \frac{k}{2} )^2}-\sqrt{2}C_0} < \infty. \end{equation*} This in turn implies that there exists $T < \infty$ such that $$\liminf_{t \uparrow T, x \in \mathbb{R}} \partial_x u(t, x) = - \infty,$$ which is the desired result indicated in Theorem \ref{Blow-up}. \end{proof} \begin{remark} Returning to the original scale, our assumption for the blow-up phenomenon becomes \begin{equation*}
\sqrt{\beta \mu}\,u_{0,x}(\sqrt{\beta \mu}x_0) + \left |u_0(\sqrt{\beta \mu}x_0)- \frac {1}{2\alpha \varepsilon} \left ( \frac{\beta_0}{\beta} - c \right ) \right | < -\frac{\sqrt{2}}{\alpha \varepsilon} C_1. \end{equation*} Note that when $\Omega$ increases, $\alpha$ and $\beta$ decrease. It is then observed that, under the effect of the Earth's rotation, a steeper initial profile $u_0$ is required for wave breaking to occur. On the other hand, in the original scale, we have \begin{equation*} T \leq \frac{2}{\alpha \varepsilon \left ( \sqrt{\beta \mu u_{0,x}^2(\sqrt{\beta\mu} x_0)- \left (u_0(\sqrt{\beta \mu}x_0)- \frac{1}{2\alpha \varepsilon} ( \frac{\beta_0}{\beta} - c ) \right )^2} -\frac{\sqrt{2}}{\alpha \varepsilon} C_1 \right ) } \end{equation*} where \begin{equation*}
C_1^2 = \frac{ |\omega_1|\alpha \varepsilon^3}{2}E^{\frac{3}{2}} + \frac{|\omega_2|\varepsilon^2 }{2 \alpha} E^2 \quad \mbox{with} \quad E(u_0) = \frac{1}{\alpha^2 \varepsilon^2}E_0(\alpha \varepsilon u_0(\sqrt{\beta \mu}x_0)). \end{equation*}
\end{remark}
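The Riccati-type step at the end of the proof of Theorem \ref{Blow-up} can be checked directly: in the borderline case $h' = \frac{1}{2}(h-\sqrt{2}C_0)^2$ of the differential inequality, with initial gap $g_0 = h(0)-\sqrt{2}C_0 > 0$, the solution is $h(t)=\sqrt{2}C_0 + g_0/(1-g_0 t/2)$, which blows up as $t \to 2/g_0$, consistent with the stated bound on $T$. A SymPy sketch of this verification (ours, not part of the proof):

```python
import sympy as sp

t, C0, g0, eps = sp.symbols('t C0 g0 eps', positive=True)

# candidate solution of the borderline ODE  h' = (1/2)(h - sqrt(2)*C0)^2
# with initial gap g0 = h(0) - sqrt(2)*C0 > 0
h = sp.sqrt(2)*C0 + g0/(1 - g0*t/2)

# h solves the ODE exactly
residual = sp.simplify(sp.diff(h, t) - sp.Rational(1, 2)*(h - sp.sqrt(2)*C0)**2)
assert residual == 0

# and blows up as t -> (2/g0)^-
assert sp.limit(h.subs(t, 2/g0 - eps), eps, 0, '+') == sp.oo
```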
\renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0}
\appendix \section{Derivations of the asymptotic expansions of $u, \,W, \,p,\,\eta$}
We consider the governing equations \eqref{governing} \begin{equation}\label{A-Euler-1} \begin{cases} - c u_{\xi} + \varepsilon (u_\tau + uu_\xi + Wu_z) + 2\Omega W = - p_\xi \quad & \text{in}\quad 0 < z < 1 + \varepsilon \eta,\\ \varepsilon\mu \{- c W_\xi + \varepsilon (W_\tau + u W_\xi + WW_z)\} - 2\Omega u = - p_z \quad & \text{in}\quad 0 < z < 1 + \varepsilon \eta,\\ u_\xi + W_z = 0 \quad & \text{in}\quad 0 < z < 1 + \varepsilon \eta,\\ u_z - \varepsilon\mu W_\xi = 0 \quad & \text{in}\quad 0 < z < 1 + \varepsilon \eta,\\ p = \eta \quad & \text{on}\quad z = 1+ \varepsilon \eta,\\ W = - c \eta_\xi + \varepsilon (\eta_\tau + u \eta_\xi) \quad & \text{on}\quad z = 1+ \varepsilon \eta,\\ W = 0 \quad & \text{on} \quad z = 0. \end{cases} \end{equation} A double asymptotic expansion is introduced to seek a solution of the system \eqref{A-Euler-1}, \begin{equation*} q \sim \sum_{n=0}^{\infty} \sum_{m=0}^{\infty}\varepsilon^n \mu^m q_{nm} \end{equation*}
as $\varepsilon \rightarrow 0, \mu \rightarrow 0$, where $q$ stands for each of the functions $u, \,W, \,p$ and $\eta$, and all the functions $q_{nm}$ satisfy the far-field conditions $q_{nm} \rightarrow 0$ as $|\xi|\rightarrow \infty$ for every $n, \,m=0, 1, 2, 3, ...$.
Substituting the asymptotic expansions of $u, \,W, \,p,\,\eta$ into \eqref{A-Euler-1}, we equate the coefficients at each order $O(\varepsilon^i\mu^j)$ ($i, \, j=0, 1, 2, 3, ...$).
From the order $O(\varepsilon^0 \mu^0)$ terms of \eqref{A-Euler-1} we obtain \begin{equation}\label{A-equation-00} \begin{cases} -c u_{00,\xi} + 2\Omega W_{00} = - p_{00,\xi} &\text{in}\quad 0 < z < 1,\\ 2\Omega u_{00} = p_{00,z} &\text{in}\quad 0 < z < 1,\\ u_{00,\xi} + W_{00,z} = 0 &\text{in}\quad 0 < z < 1,\\ u_{00,z} = 0 &\text{in}\quad 0 < z < 1,\\ p_{00} = \eta_{00} & \text{on} \quad z = 1, \\ W_{00} = - c \eta_{00,\xi} & \text{on} \quad z = 1,\\ W_{00} = 0 & \text{on} \quad z = 0. \end{cases} \end{equation} To solve the system \eqref{A-equation-00}, we first obtain from the fourth equation in \eqref{A-equation-00} that $u_{00}$ is independent of $z$, that is, $u_{00} = u_{00}(\tau, \xi)$.
Thanks to the third equation in \eqref{A-equation-00} and the boundary condition of $W$ on $z=0$, we get \begin{equation} \label{A-w00-1}
W_{00} =W_{00}|_{z = 0} + \int_0^z W_{00,z'} dz' = -\int_0^z u_{00,\xi}\, dz'= - z u_{00,\xi}, \end{equation} which along with the boundary condition of $W$ on $z=1$ implies \begin{equation}\label{A-u-00-1} u_{00,\xi}(\tau, \xi) = c\eta_{00,\xi}(\tau, \xi). \end{equation} Therefore, we have \begin{equation}\label{A-w-00} u_{00}(\tau, \xi) = c\eta_{00}(\tau, \xi), \quad W_{00} = -cz\eta_{00,\xi}, \end{equation}
where use has been made of the far-field conditions $u_{00}, \, \eta_{00} \rightarrow 0$ as $|\xi| \rightarrow \infty$.
On the other hand, from the second equation in \eqref{A-equation-00}, it follows that \begin{equation}\label{A-p-00-1}
p_{00}= p_{00}|_{z = 1} + \int_1^z p_{00,z'} \,dz'=\eta_{00}+2\Omega \int_1^z u_{00} \,dz'=\eta_{00}+2\Omega (z-1) u_{00}, \end{equation} which along with \eqref{A-u-00-1} implies \begin{equation}\label{A-p-00-2} p_{00, \xi}=\big(\frac{1}{c}+2\Omega (z-1)\big) u_{00, \xi}. \end{equation} Combining \eqref{A-p-00-2} with \eqref{A-w00-1} and the first equation in \eqref{A-equation-00} gives rise to \begin{equation*} (c^2 + 2\Omega c - 1) u_{00, \xi} = 0, \end{equation*} which implies \begin{equation}\label{A-c-1-2} c^2 + 2\Omega c - 1= 0, \end{equation} if we assume that $u_{00}$ is a non-trivial velocity. Therefore, considering waves moving to the right, we obtain \begin{equation}\label{A-c-1-3} c = \sqrt{1 + \Omega^2} - \Omega. \end{equation}
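The root selection in \eqref{A-c-1-3} can be verified symbolically; the following SymPy sketch (ours, not part of the derivation) confirms that $c=\sqrt{1+\Omega^2}-\Omega$ satisfies \eqref{A-c-1-2} and reduces to the classical speed $c=1$ in the absence of rotation:

```python
import sympy as sp

Omega = sp.symbols('Omega', nonnegative=True)
c = sp.sqrt(1 + Omega**2) - Omega   # the right-moving root (A-c-1-3)

assert sp.expand(c**2 + 2*Omega*c - 1) == 0   # satisfies (A-c-1-2)
assert c.subs(Omega, 0) == 1                  # no rotation: c = 1
```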
Setting the order $O(\varepsilon^1 \mu^0)$ terms of \eqref{A-Euler-1} to zero, we obtain from the second equation in \eqref{A-equation-10} and the Taylor expansion \begin{equation}\label{A-taylor-1} f(z)=f(1)+\sum_{n=1}^{\infty} \frac{(z-1)^n}{n!}f^{(n)}(1) \end{equation} that \begin{equation}\label{A-equation-10} \begin{cases}
- c u_{10,\xi} + u_{00,\tau} + u_{00} u_{00,\xi} + 2\Omega W_{10} = - p_{10,\xi} &\text{in}\quad 0 < z < 1, \\ 2\Omega u_{10} = p_{10,z} &\text{in}\quad 0 < z < 1, \\ u_{10,\xi} + W_{10,z} = 0 &\text{in}\quad 0 < z < 1, \\ u_{10,z} = 0 &\text{in}\quad 0 < z < 1, \\ p_{10} + p_{00, z}\eta_{00} = \eta_{10}&\text{on}\quad z = 1,\\ W_{10}+\eta_{00} W_{00, z}= - c \eta_{10,\xi} + \eta_{00,\tau} + u_{00}\eta_{00,\xi} &\text{on}\quad z = 1,\\ W_{10} = 0&\text{on}\quad z = 0. \end{cases} \end{equation} From the fourth equation in \eqref{A-equation-10}, we know that $u_{10}$ is independent of $z$, that is, $u_{10} = u_{10}(\tau, \xi)$. Thanks to the third equation in \eqref{A-equation-10} and the boundary conditions of $W$ on $z=0$ and $z=1$, we get \begin{equation}\label{A-w-10-1}
W_{10} = W_{10}|_{z = 0} + \int_0^z W_{10,z'} dz'= - z u_{10,\xi} \end{equation} and \begin{equation*} \begin{split}
W_{10}|_{z = 1} =- c \eta_{10,\xi} + \eta_{00,\tau} + (u_{00}\eta_{00})_{\xi} . \end{split} \end{equation*} Hence, combining this with \eqref{A-w-10-1}, we obtain \begin{equation}\label{A-u-10-1} u_{10,\xi} = c\eta_{10,\xi} - \eta_{00,\tau} - (u_{00}\eta_{00})_{\xi}, \end{equation} and then \begin{equation*} W_{10} = z(\eta_{00,\tau}+2c\eta_{00}\eta_{00,\xi}-c\eta_{10,\xi} ). \end{equation*} On the other hand, thanks to the second equation in \eqref{A-equation-10} and \eqref{A-w-00}, we deduce that \begin{equation*} \begin{split}
p_{10} &= p_{10}|_{z = 1} + \int_1^z p_{10,z'} dz' = \eta_{10}-2\Omega u_{00}\eta_{00} +2\Omega (z-1)u_{10},\end{split} \end{equation*} and then \begin{equation}\label{A-p10-1a} \begin{split} p_{10, \xi} &= \eta_{10, \xi}-2\Omega (u_{00}\eta_{00})_{\xi} +2\Omega (z-1)u_{10, \xi}.\end{split} \end{equation} Taking account of the first equation in \eqref{A-equation-10} and \eqref{A-w-00}, we must have \begin{equation*} - p_{10,\xi}=- c u_{10,\xi} + c \eta_{00,\tau} + c^2 \eta_{00} \eta_{00,\xi} - 2\Omega z u_{10, \xi}, \end{equation*} which along with \eqref{A-p10-1a} and \eqref{A-u-10-1} implies \begin{equation*} \begin{split} 0=&- (c+2\Omega) u_{10,\xi} +\eta_{10, \xi}+ c \eta_{00,\tau} + c^2 \eta_{00} \eta_{00,\xi} -2\Omega (u_{00}\eta_{00})_{\xi}\\ =&c(u_{00}\eta_{00})_{\xi} -(c^2 + 2\Omega c - 1)\eta_{10, \xi}+ 2(c+\Omega) \eta_{00,\tau} + c^2 \eta_{00} \eta_{00,\xi}. \end{split}\end{equation*} Hence, it follows from \eqref{A-w-00} and \eqref{A-c-1-2} that \begin{equation} \label{A-eta-00-eqn} 2(\Omega + c) \eta_{00,\tau} + 3c^2 \eta_{00}\eta_{00,\xi} = 0. \end{equation} Defining \begin{equation}\label{A-c-0-0} c_1 \overset{\text{def}}{=} -\frac{3c^2}{4(\Omega + c)}=-\frac{3 c^3}{2 (c^2 + 1)}, \end{equation} we may rewrite \eqref{A-eta-00-eqn} as \begin{equation}\label{A-eta-00-tau} \eta_{00,\tau} = c_1 (\eta_{00}^2)_{\xi}, \end{equation} which, together with \eqref{A-u-10-1}, implies \begin{equation}\label{A-u-10-xi} u_{10,\xi} = \big(c \eta_{10}- (c + c_1) \eta_{00}^2\big)_\xi. \end{equation}
Therefore, we get from the far field conditions $u_{10}, \, \eta_{00}, \eta_{10} \rightarrow 0$ as $|\xi| \rightarrow \infty$ that \begin{equation}\label{A-u-10-3} u_{10} = c \eta_{10}- (c + c_1) \eta_{00}^2, \end{equation} which, in view of \eqref{A-eta-00-tau}, implies that \begin{equation}\label{A-u-10-tau} u_{10, \tau} = c \eta_{10, \tau}- 4(c + c_1)c_1 \eta_{00}^2 \eta_{00, \xi}. \end{equation}
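Equation \eqref{A-u-10-tau} is just the chain rule applied to \eqref{A-u-10-3} combined with \eqref{A-eta-00-tau}; this can be verified symbolically, treating $\eta_{00}$ and $\eta_{10}$ as generic functions (a \texttt{sympy} sketch):

```python
import sympy as sp

tau, xi, c, c1 = sp.symbols('tau xi c c_1')
eta00 = sp.Function('eta00')(tau, xi)
eta10 = sp.Function('eta10')(tau, xi)

u10 = c*eta10 - (c + c1)*eta00**2                                 # (A-u-10-3)
u10_tau = sp.diff(u10, tau).subs(
    sp.Derivative(eta00, tau), 2*c1*eta00*sp.diff(eta00, xi))     # (A-eta-00-tau)
claimed = c*sp.diff(eta10, tau) - 4*(c + c1)*c1*eta00**2*sp.diff(eta00, xi)

assert sp.simplify(u10_tau - claimed) == 0                        # matches (A-u-10-tau)
```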
Similarly, setting the order $O(\varepsilon^0 \mu^1)$ terms of \eqref{A-Euler-1} to zero, we obtain from the second equation in \eqref{A-equation-00} and the Taylor expansion \eqref{A-taylor-1} that \begin{equation*} \begin{cases} - c u_{01,\xi} + 2\Omega W_{01} = - p_{01,\xi} \quad &\text{in}\quad 0 < z < 1,\\ 2\Omega u_{01} = p_{01,z} \quad &\text{in}\quad 0 < z < 1,\\ u_{01,\xi} + W_{01,z} = 0 \quad &\text{in}\quad 0 < z < 1,\\ u_{01,z} = 0 \quad & \text{in} \quad 0 < z < 1,\\ p_{01} = \eta_{01} \quad & \text{on} \quad z = 1,\\ W_{01} = - c \eta_{01,\xi} \quad & \text{on} \quad z = 1,\\ W_{01} = 0 \quad & \text{on} \quad z = 0. \end{cases} \end{equation*} Repeating the above argument, we readily get \begin{equation}\label{A-uwp-01-1} u_{01} = c\eta_{01}=c\eta_{01}(\tau, \xi), \, W_{01} = -cz\eta_{01,\xi},\, p_{01} = [2\Omega c(z-1)+1]\eta_{01}. \end{equation}
For the order $O(\varepsilon^2 \mu^0)$ terms of \eqref{A-Euler-1}, we obtain from the Taylor expansion \eqref{A-taylor-1} that \begin{equation}\label{A-equation-20} \begin{cases}
-c u_{20,\xi} + u_{10,\tau} + (u_{00}u_{10})_{\xi} + 2\Omega W_{20} = -p_{20,\xi} \quad &\text{in}\quad 0 < z < 1, \\
-2 \Omega u_{20}=-p_{20,z} \quad&\text{in}\quad 0 < z < 1,\\ u_{20,\xi} + W_{20,z} = 0 \quad&\text{in}\quad 0 < z < 1, \\ u_{20,z} = 0 \quad &\text{in}\quad 0 < z < 1, \\ p_{20} + \eta_{00}p_{10,z} + \eta_{10}p_{00,z}= \eta_{20}\quad&\text{on}\quad z = 1,\\ W_{20} + \eta_{00}W_{10,z} + \eta_{10}W_{00,z} \\ \quad\quad\quad= - c \eta_{20,\xi} + \eta_{10,\tau} + u_{00}\eta_{10,\xi} + u_{10} \eta_{00,\xi} \quad &\text{on}\quad z = 1,\\ W_{20} = 0\quad&\text{on}\quad z = 0. \end{cases} \end{equation} From the fourth equation in \eqref{A-equation-20}, we know that $u_{20}$ is independent of $z$, that is, $u_{20}=u_{20}(\tau, \xi)$, which along with the third equation in \eqref{A-equation-20} and the boundary condition of $W_{20}$ at $z=0$ implies that \begin{equation}\label{A-w-20-1} W_{20}=-z u_{20, \xi}. \end{equation} Combining \eqref{A-w-20-1} with the boundary condition of $W_{20}$ at $z=1$, we get from the equations of $W_{00, z}$ and $W_{10, z}$ that \begin{equation*} u_{20, \xi}= c \eta_{20,\xi} - \eta_{10,\tau} - (u_{00}\eta_{10} + u_{10} \eta_{00})_{\xi}, \end{equation*} that is, \begin{equation}\label{A-w-20-3} u_{20, \xi}= c \eta_{20,\xi} - \eta_{10,\tau} - 2c(\eta_{00}\eta_{10})_{\xi} +(c+c_1)(\eta_{00}^3)_{\xi}. \end{equation} While from the second equation in \eqref{A-equation-20} and the boundary condition of $p_{20}$ at $z=1$, we get \begin{equation*} \begin{split}
p_{20}=p_{20}|_{z=1}+\int_1^z p_{20, z'}\, dz'&= \eta_{20}-(\eta_{00}p_{10,z} + \eta_{10}p_{00,z} )+2\Omega \int_1^z u_{20}\, dz'\\ &= \eta_{20}-2\Omega (\eta_{00}u_{10} + \eta_{10}u_{00} )+2\Omega (z-1) u_{20}, \end{split}\end{equation*} which leads to \begin{equation}\label{A-p-20-2} \begin{split} p_{20, \xi}&= \eta_{20, \xi}-2\Omega (\eta_{00}u_{10} + \eta_{10}u_{00} )_{\xi}+2\Omega (z-1) u_{20, \xi}. \end{split}\end{equation} On the other hand, due to the first equation in \eqref{A-equation-20}, we deduce from \eqref{A-w-20-1} and \eqref{A-w-20-3} that \begin{equation}\label{A-p-20-3} -p_{20,\xi}= -c u_{20,\xi} + u_{10,\tau} + (u_{00}u_{10})_{\xi} - 2\Omega z u_{20, \xi}. \end{equation} Combining \eqref{A-p-20-2} with \eqref{A-p-20-3}, we have \begin{equation*} \eta_{20, \xi}-2\Omega (\eta_{00}u_{10} + \eta_{10}u_{00} )_{\xi}-(c+2\Omega) u_{20,\xi} + u_{10,\tau} + (u_{00}u_{10})_{\xi} =0. \end{equation*} Thanks to \eqref{A-u-00-1}, \eqref{A-u-10-3}, and \eqref{A-u-10-tau}, we obtain \begin{equation}\label{A-eta-10-eqn} \begin{split} 2(c+\Omega) \eta_{10,\tau}+3c^2(\eta_{00}\eta_{10})_{\xi} -(2c+\frac{4}{3}c_1)(c + c_1)(\eta_{00}^3)_{\xi} =0, \end{split}\end{equation} which leads to \begin{equation}\label{A-eta-10-tau}
\eta_{10, \tau}= 2c_1(\eta_{00}\eta_{10})_{\xi}+\frac{2c_1+3c}{3(c+\Omega)}(c+c_1)(\eta_{00}^3)_{\xi}. \end{equation} Therefore, we have \begin{equation*} u_{20, \xi}= c \eta_{20,\xi} - 2(c+c_1)(\eta_{00}\eta_{10})_{\xi}-\frac{2c_1-3\Omega}{3(c+\Omega)}(c+c_1)(\eta_{00}^3)_{\xi}, \end{equation*}
which along with the far field conditions $\eta_{00},\, \eta_{10}, \,\eta_{20}\rightarrow 0$ as $|\xi| \rightarrow \infty$ gives \begin{equation}\label{A-u-20} u_{20}= c \eta_{20} - 2(c+c_1)\eta_{00}\eta_{10}-\frac{2c_1-3\Omega}{3(c+\Omega)}(c+c_1)\eta_{00}^3. \end{equation} Thanks to \eqref{A-eta-00-tau} and \eqref{A-eta-10-tau}, we deduce that \begin{equation}\label{A-u-20-tau} \begin{split} u_{20, \tau}=c \eta_{20, \tau} - 4(c+c_1)c_1 (\eta_{00}^2\eta_{10})_{\xi}-\frac{8cc_1+4c_1^2+\frac{21}{4}c^2}{2(c+\Omega)}(c+c_1) (\eta_{00}^4)_{\xi}. \end{split}\end{equation}
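The coefficients in \eqref{A-eta-10-tau} follow from dividing \eqref{A-eta-10-eqn} by $2(c+\Omega)$ and using the definition \eqref{A-c-0-0} of $c_1$; a symbolic verification (a \texttt{sympy} sketch):

```python
import sympy as sp

c, Omega = sp.symbols('c Omega', positive=True)
c1 = -3*c**2/(4*(Omega + c))   # definition (A-c-0-0)

# (eta00*eta10)_xi coefficient: 3c^2/(2(c+Omega)) must equal -2*c1
assert sp.simplify(3*c**2/(2*(c + Omega)) + 2*c1) == 0

# (eta00^3)_xi coefficient: (2c + 4c1/3)(c+c1)/(2(c+Omega)) = (2c1+3c)(c+c1)/(3(c+Omega))
lhs = (2*c + sp.Rational(4, 3)*c1)*(c + c1)/(2*(c + Omega))
rhs = (2*c1 + 3*c)*(c + c1)/(3*(c + Omega))
assert sp.simplify(lhs - rhs) == 0
```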
For the order $O(\varepsilon^1 \mu^1)$ terms of \eqref{A-Euler-1}, we obtain from the Taylor expansion \eqref{A-taylor-1} that \begin{equation}\label{A-equation-11} \begin{cases}
- c u_{11,\xi} + u_{01,\tau} + u_{00}u_{01, \xi}+u_{10}u_{00, \xi}+ W_{00}u_{01, z}\\
\qquad\qquad\qquad\qquad\qquad+W_{10}u_{00, z}+ 2\Omega W_{11} = - p_{11,\xi} \quad &\text{in}\quad 0 < z < 1, \\ -cW_{00,\xi} - 2 \Omega u_{11} = - p_{11,z} \quad&\text{in}\quad 0 < z < 1,\\ u_{11,\xi} + W_{11,z} = 0 \quad&\text{in}\quad 0 < z < 1, \\ u_{11,z} - W_{00,\xi}= 0 \quad &\text{in}\quad 0 < z < 1, \\ p_{11} = \eta_{11}-(\eta_{00}p_{01, z} +\eta_{01}p_{00, z})\quad&\text{on}\quad z = 1,\\ W_{11} +W_{00, z} \eta_{01}+W_{01, z} \eta_{00} \\ \qquad\qquad = - c \eta_{11,\xi}+\eta_{01,\tau} + u_{00}\eta_{01, \xi}+ u_{01}\eta_{00, \xi} \quad &\text{on}\quad z = 1,\\ W_{11} = 0\quad&\text{on}\quad z = 0. \end{cases} \end{equation} Thanks to \eqref{A-w-00} and the fourth equation of \eqref{A-equation-11}, we have $u_{11, z} = -c z\eta_{00, \xi\xi}$, and then \begin{equation}\label{A-u-11-2} u_{11} = -\frac{c}{2}z^2 \eta_{00, \xi\xi}+ \Phi_{11}(\tau, \xi) \end{equation}
for some arbitrary smooth function $\Phi_{11}(\tau, \xi)$ independent of $z$. While from the third equation in \eqref{A-equation-11} with $W_{11}|_{z=0} = 0$, it follows that \begin{equation}\label{A-w-11-1}
W_{11} = W_{11}|_{z=0} +\int_0^z W_{11, z'}\,dz'=\frac{c}{6}z^3 \eta_{00, \xi\xi\xi}- z\partial_{\xi}\Phi_{11}(\tau, \xi), \end{equation} which, along with the equations of $W_{00, z}$ and $W_{01, z}$, and the boundary condition of $W_{11}$ on $\{z=1\}$, implies \begin{equation}\label{A-w-11-2} \begin{split} - \partial_{\xi}\Phi_{11}(\tau, \xi)= -\frac{c}{6} \eta_{00, \xi\xi\xi}+ (u_{00}\eta_{01}+\eta_{00}u_{01})_{\xi}- c \eta_{11,\xi}+\eta_{01,\tau}. \end{split} \end{equation} Hence, in view of \eqref{A-w-11-1}, \eqref{A-w-00}, \eqref{A-uwp-01-1}, and \eqref{A-u-00-1}, we obtain \begin{equation}\label{A-w-11-3} W_{11} =\frac{c}{6}z(z^2-1) \eta_{00, \xi\xi\xi}+z \bigg(- c \eta_{11,\xi}+\eta_{01,\tau} + (u_{00}\eta_{01}+\eta_{00}u_{01})_{\xi}\bigg). \end{equation} Due to \eqref{A-w-00}, \eqref{A-uwp-01-1}, \eqref{A-u-11-2}, and the boundary condition of $p_{11}$ in \eqref{A-equation-11}, we deduce from the second equation of \eqref{A-equation-11} that \begin{equation*} \begin{split}
&p_{11} =p_{11}|_{z = 1} + \int_1^z p_{11,z'}\, dz'=p_{11}|_{z = 1} + \int_1^z (cW_{00,\xi} +2 \Omega u_{11}) \, dz'\\ &=\eta_{11} - 2\Omega (u_{00}\eta_{01}+\eta_{00}u_{01}) - \bigg(\frac{c^2 }{2}(z^2-1)+\frac{\Omega c}{3}(z^3-1)\bigg)\eta_{00,\xi\xi}+2\Omega (z-1)\Phi_{11}, \end{split} \end{equation*} which implies \begin{equation}\label{A-p-11-2} \begin{split} p_{11, \xi} =\eta_{11, \xi} - 2\Omega (u_{00}\eta_{01}+\eta_{00}u_{01})_{\xi} &- \bigg(\frac{c^2}{2}(z^2-1)+\frac{\Omega c}{3}(z^3-1)\bigg)\eta_{00,\xi\xi\xi}\\ &+2\Omega (z-1)\partial_{\xi}\Phi_{11}. \end{split} \end{equation} Combining \eqref{A-p-11-2} and the first equation in \eqref{A-equation-11}, it follows from \eqref{A-w-00}, \eqref{A-uwp-01-1}, and \eqref{A-u-11-2} that \begin{equation}\label{A-p-11-3} \begin{split} &- c u_{11,\xi} + c\eta_{01,\tau} + c^2(\eta_{00}\eta_{01})_{\xi} + 2\Omega W_{11}+\eta_{11, \xi} - 4\Omega c (\eta_{00}\eta_{01})_{\xi}\\ & - \bigg(\frac{c^2}{2}(z^2-1)+\frac{\Omega c}{3}(z^3-1)\bigg)\eta_{00,\xi\xi\xi}+2\Omega (z-1)\partial_{\xi}\Phi_{11}=0. 
\end{split} \end{equation} Substituting \eqref{A-u-11-2} and \eqref{A-w-11-2} into \eqref{A-p-11-3}, we obtain \begin{equation}\label{A-eta-01-eqn} \begin{split} &2(\Omega+c) \eta_{01,\tau} + 3c^2(\eta_{00}\eta_{01})_{\xi} + \frac{c^2}{3}\eta_{00,\xi\xi\xi}=0, \end{split} \end{equation} that is, \begin{equation} \label{A-eta-01-tau} \eta_{01,\tau} = 2 c_1 (\eta_{00}\eta_{01})_\xi + \frac{2 c_1}{9} \eta_{00,\xi\xi\xi}, \end{equation} which, together with \eqref{A-w-11-2}, \eqref{A-w-11-3}, and \eqref{A-u-11-2}, leads to \begin{equation*} \begin{split} - \partial_{\xi}\Phi_{11}(\tau, \xi)=(\frac{2 c_1}{9}-\frac{c}{6}) \eta_{00, \xi\xi\xi}+ 2(c +c_1)(\eta_{00}\eta_{01})_{\xi}- c \eta_{11,\xi}, \end{split} \end{equation*} and then \begin{equation*} W_{11} =\bigg(\frac{2 c_1}{9}+\frac{c}{6}(z^2-1)\bigg)\, z\,\eta_{00, \xi\xi\xi}+ 2(c +c_1)\,z\,(\eta_{00}\eta_{01})_{\xi}- c\,z\, \eta_{11,\xi} \end{equation*} and \begin{equation}\label{A-u-11} u_{11} = \left(\frac{c}{6} - \frac{2c_1}{9} -\frac{c}{2}z^2 \right) \eta_{00,\xi\xi} + c \eta_{11} - 2 (c+c_1) \eta_{00}\eta_{01}, \end{equation}
where use has been made of the far field conditions $u_{11}, \, \eta_{00,\xi\xi},\, \eta_{00}, \,\eta_{01},\, \eta_{11}\rightarrow 0$ as $|\xi| \rightarrow \infty$.
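In the same way, \eqref{A-eta-01-tau} is \eqref{A-eta-01-eqn} divided by $2(\Omega+c)$, with both coefficients rewritten through $c_1$; a symbolic check (a \texttt{sympy} sketch):

```python
import sympy as sp

c, Omega = sp.symbols('c Omega', positive=True)
c1 = -3*c**2/(4*(Omega + c))   # (A-c-0-0)

# (eta00*eta01)_xi coefficient: -3c^2/(2(Omega+c)) equals 2*c1
assert sp.simplify(-3*c**2/(2*(Omega + c)) - 2*c1) == 0
# eta00_{xi xi xi} coefficient: -c^2/(6(Omega+c)) equals 2*c1/9
assert sp.simplify(-c**2/(6*(Omega + c)) - sp.Rational(2, 9)*c1) == 0
```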
Thanks to \eqref{A-eta-00-tau} and \eqref{A-eta-01-tau}, we obtain \begin{equation}\label{A-u-11-tau} \begin{split} u_{11, \tau} =&c \eta_{11, \tau} +\left(\frac{cc_1 }{6} - \frac{2c_1^2}{9} -\frac{cc_1 }{2}z^2 \right)(\eta_{00}^2)_{\xi\xi\xi} \\ &- 2 (c+c_1) \bigg(2 c_1 (\eta_{00}^2\eta_{01})_\xi + \frac{2 c_1}{9} \eta_{00}\eta_{00,\xi\xi\xi}\bigg). \end{split} \end{equation}
For the order $O(\varepsilon^3 \mu^0)$ terms of \eqref{A-Euler-1}, we obtain from the Taylor expansion \eqref{A-taylor-1} that \begin{equation}\label{A-equation-30} \begin{cases}
-c u_{30,\xi} + u_{20,\tau} + (u_{00}u_{20}+\frac{1}{2}u_{10}^2)_{\xi} + 2\Omega W_{30} = -p_{30,\xi} \quad &\text{in}\quad 0 < z < 1, \\
-2 \Omega u_{30}=-p_{30,z} \quad &\text{in}\quad 0 < z < 1,\\ u_{30,\xi} + W_{30,z} = 0 \quad &\text{in}\quad 0 < z < 1, \\ u_{30,z} = 0 \quad &\text{in}\quad 0 < z < 1, \\ p_{30} + \eta_{00}p_{20,z} + \eta_{10}p_{10,z} + \eta_{20}p_{00,z}= \eta_{30}\quad &\text{on}\quad z = 1,\\ W_{30} + \eta_{00}W_{20,z} + \eta_{10}W_{10,z} + \eta_{20}W_{00,z} \\ \quad\quad\quad= - c \eta_{30,\xi} + \eta_{20,\tau} + u_{00}\eta_{20,\xi} + u_{10} \eta_{10,\xi} + u_{20} \eta_{00,\xi} \quad &\text{on}\quad z = 1,\\ W_{30} = 0\quad &\text{on}\quad z = 0. \end{cases} \end{equation} From the fourth equation in \eqref{A-equation-30}, we know that $u_{30}$ is independent of $z$, that is, $u_{30}=u_{30}(\tau, \xi)$, which along with the third equation in \eqref{A-equation-30} and the boundary condition of $W_{30}$ at $z=0$ implies that $W_{30}=-z u_{30, \xi}$. Combining this with the boundary condition of $W_{30}$ at $z=1$, we have \begin{equation}\label{A-u-30-xi-1} u_{30, \xi}= c \eta_{30,\xi} - \eta_{20,\tau} - (u_{00}\eta_{20} + u_{10} \eta_{10}+ u_{20} \eta_{00})_{\xi}. \end{equation} Meanwhile, from the second equation in \eqref{A-equation-30} and the boundary condition of $p_{30}$ at $z=1$, we get \begin{equation*} \begin{split}
&p_{30}=p_{30}|_{z=1}+\int_1^z p_{30, z'}\, dz'\\ &= \eta_{30}-(\eta_{00}p_{20,z} + \eta_{10}p_{10,z} + \eta_{20}p_{00,z})+2\Omega \int_1^z u_{30}\, dz'\\ &= \eta_{30}-2\Omega (u_{00}\eta_{20} + u_{10} \eta_{10}+ u_{20} \eta_{00})+2\Omega (z-1) u_{30}, \end{split}\end{equation*} which leads to \begin{equation}\label{A-p-30-xi-1} \begin{split} p_{30, \xi}&= \eta_{30, \xi}-2\Omega (u_{00}\eta_{20} + u_{10} \eta_{10}+ u_{20} \eta_{00})_{\xi}+2\Omega (z-1) u_{30, \xi}. \end{split}\end{equation} On the other hand, from the first equation in \eqref{A-equation-30}, we have \begin{equation}\label{A-p-30-xi-2} \begin{split}
-p_{30,\xi}=-c u_{30,\xi} + u_{20,\tau} + (u_{00}u_{20}+\frac{1}{2}u_{10}^2)_{\xi} - 2\Omega z u_{30, \xi}. \end{split}\end{equation} Combining \eqref{A-p-30-xi-1} with \eqref{A-p-30-xi-2}, we get \begin{equation}\label{A-p-30-xi-3} \begin{split} 0= \eta_{30, \xi}-2\Omega (u_{00}\eta_{20} + u_{10} \eta_{10}+ u_{20} \eta_{00})_{\xi}-(c+2\Omega) u_{30, \xi}+ u_{20,\tau} + (u_{00}u_{20}+\frac{1}{2}u_{10}^2)_{\xi} . \end{split}\end{equation} Substituting \eqref{A-u-30-xi-1} and \eqref{A-u-20-tau} into \eqref{A-p-30-xi-3}, we obtain \begin{equation}\label{A-eta-20-eqn} \begin{split} 2(c+\Omega)\eta_{20,\tau} +3c^2(\eta_{00}\eta_{20})_{\xi} &+ \frac{3c^2}{2}(\eta_{10}^2)_{\xi}-2(2c_1+3c)(c+c_1)(\eta_{00}^2\eta_{10})_{\xi}\\ &-\frac{(64cc_1+24c_1^2+45c^2-15)}{12(c+\Omega)}(c+c_1)(\eta_{00}^4)_{\xi}=0, \end{split}\end{equation} that is, \begin{equation}\label{A-eta-20-tau} \begin{split} \eta_{20,\tau}= 2c_1(\eta_{00}\eta_{20})_{\xi} +c_1(\eta_{10}^2)_{\xi}&+\frac{2c_1+3c}{\Omega+c}(c+c_1)(\eta_{00}^2\eta_{10})_{\xi}\\ &+\frac{(64cc_1+24c_1^2+45c^2-15)}{24(c+\Omega)^2}(c+c_1)(\eta_{00}^4)_{\xi}. \end{split}\end{equation} Thanks to \eqref{A-u-30-xi-1} again, we have \begin{equation*} \begin{split} u_{30, \xi}=& c \eta_{30,\xi} - 2(c+c_1)(\eta_{00}\eta_{20})_{\xi} -(c+c_1)(\eta_{10}^2)_{\xi}-\frac{2c_1-3\Omega}{\Omega+c}(c+c_1)(\eta_{00}^2\eta_{10})_{\xi}\\ &-\frac{(64cc_1+24c_1^2+45c^2+24\Omega^2-3)}{24(c+\Omega)^2}(c+c_1)(\eta_{00}^4)_{\xi}, \end{split}\end{equation*} which implies \begin{equation}\label{A-u-30} \begin{split} u_{30}=& c \eta_{30} - 2(c+c_1)(\eta_{00}\eta_{20}) -(c+c_1)(\eta_{10}^2)-\frac{2c_1-3\Omega}{\Omega+c}(c+c_1)(\eta_{00}^2\eta_{10})\\ &-\frac{(64cc_1+24c_1^2+45c^2+24\Omega^2-3)}{24(c+\Omega)^2}(c+c_1)(\eta_{00}^4). 
\end{split}\end{equation} Therefore, due to \eqref{A-eta-00-tau}, \eqref{A-eta-10-tau}, and \eqref{A-eta-20-tau}, we have \begin{equation}\label{A-u-30-tau} \begin{split} u_{30, \tau}=& c \eta_{30, \tau} - \frac{2(3c^2+5cc_1+4c_1^2-3\Omega c_1)}{\Omega+c}(c+c_1)(\eta_{00}^3\eta_{10})_{\xi} \\ &-4c_1(c+c_1)(\eta_{00}\eta_{10}^2)_{\xi}-4c_1(c+c_1)(\eta_{00}^2\eta_{20})_{\xi} -B_1\eta_{00}^4\eta_{00, \xi} \end{split}\end{equation} with \begin{equation*}\begin{split} B_1\overset{\text{def}}{=} &\frac{(c+c_1)^2(82cc_1+36c_1^2+45c^2-18\Omega c_1-27\Omega c-15)}{3(\Omega+c)^2}\\ &+\frac{c_1(c+c_1)(64cc_1+24c_1^2+45c^2+24\Omega^2-3)}{3(\Omega+c)^2}. \end{split}\end{equation*} For the terms of \eqref{A-Euler-1} at order $O(\varepsilon^4 \mu^0)$, it is inferred from the Taylor expansion \eqref{A-taylor-1} that \begin{equation}\label{A-equation-40} \begin{cases}
-c u_{40,\xi} + u_{30,\tau} + (u_{00}u_{30}+u_{10}u_{20})_{\xi} + 2\Omega W_{40} = -p_{40,\xi} \quad &\text{in}\quad 0 < z < 1, \\
-2 \Omega u_{40}=-p_{40,z} \quad&\text{in}\quad 0 < z < 1,\\ u_{40,\xi} + W_{40,z} = 0 \quad&\text{in}\quad 0 < z < 1, \\ u_{40,z} = 0 \quad &\text{in}\quad 0 < z < 1, \\ p_{40} + \eta_{00}p_{30,z} + \eta_{10}p_{20,z} + \eta_{20}p_{10,z}+ \eta_{30}p_{00,z}= \eta_{40}\quad&\text{on}\quad z = 1,\\ W_{40} + \eta_{00}W_{30,z} + \eta_{10}W_{20,z} + \eta_{20}W_{10,z}+ \eta_{30}W_{00,z} \\ \quad= - c \eta_{40,\xi} + \eta_{30,\tau} + u_{00}\eta_{30,\xi} + u_{10} \eta_{20,\xi} + u_{20} \eta_{10,\xi}+ u_{30} \eta_{00,\xi} \quad &\text{on}\quad z = 1,\\ W_{40} = 0\quad&\text{on}\quad z = 0. \end{cases} \end{equation}
From the fourth equation in \eqref{A-equation-40}, we know that $u_{40}$ is independent of $z$, that is, $u_{40}=u_{40}(\tau, \xi)$, which along with the third equation in \eqref{A-equation-40} and the boundary condition of $W_{40}$ at $z=0$ implies that \begin{equation}\label{A-w-40-1} W_{40}=-z u_{40, \xi}. \end{equation} Combining \eqref{A-w-40-1} with the boundary condition of $W_{40}$ at $z=1$, we have \begin{equation}\label{A-u-40-xi-1} u_{40, \xi}= c \eta_{40,\xi} - \eta_{30,\tau} - (u_{00}\eta_{30} + u_{10} \eta_{20}+ u_{20} \eta_{10}+ u_{30} \eta_{00})_{\xi}. \end{equation} From the second equation in \eqref{A-equation-40} and the boundary condition of $p_{40}$ at $z=1$, we get \begin{equation*} \begin{split}
&p_{40}=p_{40}|_{z=1}+\int_1^z p_{40, z'}\, dz'\\ &= \eta_{40}-(\eta_{00}p_{30,z} + \eta_{10}p_{20,z} + \eta_{20}p_{10,z}+ \eta_{30}p_{00,z})+2\Omega \int_1^z u_{40}\, dz'\\ &= \eta_{40}-2\Omega (u_{00}\eta_{30} + u_{10} \eta_{20}+ u_{20} \eta_{10}+ u_{30} \eta_{00})+2\Omega (z-1) u_{40}, \end{split}\end{equation*} which implies \begin{equation}\label{A-p-40-xi-1} \begin{split} p_{40, \xi}&= \eta_{40, \xi}-2\Omega (u_{00}\eta_{30} + u_{10} \eta_{20}+ u_{20} \eta_{10}+ u_{30} \eta_{00})_{\xi}+2\Omega (z-1) u_{40, \xi}. \end{split}\end{equation} On the other hand, from the first equation in \eqref{A-equation-40}, we have \begin{equation*} \begin{split}
-p_{40,\xi} =-c u_{40,\xi} + u_{30,\tau} + (u_{00}u_{30}+u_{10}u_{20})_{\xi} + 2\Omega W_{40}, \end{split}\end{equation*} which along with \eqref{A-w-40-1} and \eqref{A-p-40-xi-1} gives rise to \begin{equation}\label{A-p-40-xi-3} \begin{split} 0=&-(c+2\Omega) u_{40,\xi} + u_{30,\tau} + (u_{00}u_{30}+u_{10}u_{20})_{\xi} \\ &+\eta_{40, \xi}-2\Omega (u_{00}\eta_{30} + u_{10} \eta_{20}+ u_{20} \eta_{10}+ u_{30} \eta_{00})_{\xi}. \end{split}\end{equation} Substituting \eqref{A-u-40-xi-1} and \eqref{A-u-30-tau} into \eqref{A-p-40-xi-3}, we obtain \begin{equation}\label{A-eta-30-eqn} \begin{split} &2(c+\Omega)\eta_{30,\tau} +3c^2(\eta_{00}\eta_{30}+\eta_{10}\eta_{20})_{\xi} -2(3c+2c_1)(c+c_1)(\eta_{00}^2\eta_{20}+\eta_{00}\eta_{10}^2)_{\xi}\\ &\quad-\frac{(64cc_1+24c_1^2+45c^2-15)}{3(c+\Omega)}(c+c_1)(\eta_{00}^3\eta_{10})_{\xi}-B_2(\eta_{00}^5)_{\xi}=0 \end{split}\end{equation} with \begin{equation*}\begin{split} B_2&\overset{\text{def}}{=} \frac{1}{5}B_1-\frac{(c+c_1)^2(2c_1-3\Omega)}{3(\Omega+c)}+\frac{2c(c+c_1)(64cc_1+24c_1^2+45c^2+24\Omega^2-3)}{12(\Omega+c)^2}\\ &=\frac{c^2(2-c^2)(3c^{10}+228c^8-540c^6-180c^4-13c^2+42)}{60(c^2+1)^6}. \end{split}\end{equation*}
For the terms in \eqref{A-Euler-1} at order $O(\varepsilon^2 \mu^1)$, we have \begin{equation}\label{A-equation-21} \begin{cases} -c u_{21,\xi} + u_{11,\tau} + (u_{00}u_{11}+u_{10}u_{01})_\xi+ W_{00}u_{11,z}+2\Omega W_{21} = - p_{21,\xi}\quad &\text{in}\quad 0 < z < 1,\\ -cW_{10,\xi} + W_{00,\tau} + u_{00}W_{00,\xi} + W_{00}W_{00,z} - 2\Omega u_{21} = - p_{21,z} \quad &\text{in}\quad 0 < z < 1,\\ u_{21,\xi}+W_{21,z} = 0\quad &\text{in}\quad 0 < z < 1,\\ u_{21,z}-W_{10,\xi} = 0\quad &\text{in}\quad 0 < z < 1,\\ p_{21} + \eta_{10}p_{01,z} + \eta_{01}p_{10,z}+\eta_{00}p_{11,z}+\eta_{11}p_{00,z}= \eta_{21}\quad &\text{on}\quad z=1,\\ W_{21} + \eta_{10}W_{01,z}+\eta_{01}W_{10,z}+\eta_{00}W_{11,z}+\eta_{11}W_{00,z} \\ = - c \eta_{21,\xi} + \eta_{11,\tau}+u_{00}\eta_{11,\xi}+u_{11}\eta_{00,\xi} +u_{10}\eta_{01,\xi}+u_{01}\eta_{10,\xi} \quad &\text{on}\quad z=1,\\ W_{21} = 0\quad &\text{on}\quad z=0. \end{cases} \end{equation} We now first derive from \eqref{A-w-10-1}, \eqref{A-u-10-xi}, and the fourth equation in \eqref{A-equation-21} that \begin{equation*} u_{21,z}=W_{10,\xi} = z\bigg(2(c+c_1)(\eta_{00,\xi}^2+\eta_{00}\eta_{00,\xi\xi})-c\eta_{10,\xi\xi}\bigg), \end{equation*} which gives \begin{equation*} u_{21}= \frac{z^2}{2}\bigg(2(c+c_1)(\eta_{00,\xi}^2+\eta_{00}\eta_{00,\xi\xi})-c\eta_{10,\xi\xi}\bigg)+ \Phi_{21}(\tau, \xi)=\frac{z^2}{2}H_1+ \Phi_{21}(\tau, \xi) \end{equation*} for some smooth function $\Phi_{21}(\tau, \xi)$ independent of $z$, where we denote \begin{equation*} H_1 \overset{\text{def}}{=}2(c+c_1)(\eta_{00,\xi}^2+\eta_{00}\eta_{00,\xi\xi})-c\eta_{10,\xi\xi}. \end{equation*}
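The expression for $u_{21,z}=W_{10,\xi}$ used above can be confirmed from $W_{10} = z(\eta_{00,\tau}+2c\eta_{00}\eta_{00,\xi}-c\eta_{10,\xi})$ after eliminating $\eta_{00,\tau}$ via \eqref{A-eta-00-tau} (a \texttt{sympy} sketch with generic $\eta_{00}$, $\eta_{10}$):

```python
import sympy as sp

tau, xi, z, c, c1 = sp.symbols('tau xi z c c_1')
eta00 = sp.Function('eta00')(tau, xi)
eta10 = sp.Function('eta10')(tau, xi)

# W10 = z*(eta00_tau + 2c eta00 eta00_xi - c eta10_xi), with eta00_tau = 2 c1 eta00 eta00_xi
W10 = z*(2*(c + c1)*eta00*sp.diff(eta00, xi) - c*sp.diff(eta10, xi))
H1 = 2*(c + c1)*(sp.diff(eta00, xi)**2 + eta00*sp.diff(eta00, xi, 2)) \
     - c*sp.diff(eta10, xi, 2)

assert sp.simplify(sp.diff(W10, xi) - z*H1) == 0   # W10_xi = z*H1, as claimed
```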
Hence, we have \begin{equation*} \begin{split} u_{21, \xi}= \frac{z^2}{2}H_{1, \xi}+ \partial_{\xi}\Phi_{21}(\tau, \xi). \end{split} \end{equation*} On the other hand, thanks to the third equation in \eqref{A-equation-21} and the boundary condition of $W_{21}$ on $\{z=0\}$, we get \begin{equation*} \begin{split}
W_{21}&=W_{21}|_{z=0}+\int_0^z W_{21, z'}\,dz'=-\int_0^z u_{21, \xi}\,dz'= -\frac{z^3}{6}H_{1, \xi}-z \partial_{\xi}\Phi_{21}(\tau, \xi), \end{split} \end{equation*} which along with the boundary condition of $W_{21}$ on $\{z=1\}$ leads to \begin{equation*} \begin{split}
-\frac{1}{6}H_{1, \xi}- \partial_{\xi}\Phi_{21}(\tau, \xi)&= - c \eta_{21,\xi} + \eta_{11,\tau}+(u_{00}\eta_{11}+u_{11}\eta_{00} +u_{10}\eta_{01}+u_{01}\eta_{10})_{\xi}|_{z=1}\\
&= - c \eta_{21,\xi} + \eta_{11,\tau}+H_{2, \xi}|_{z=1},
\end{split} \end{equation*} where we denote \begin{equation*} H_2 \overset{\text{def}}{=} u_{00}\eta_{11}+u_{11}\eta_{00} +u_{10}\eta_{01}+u_{01}\eta_{10}. \end{equation*} It then follows that \begin{equation}\label{A-phi-21-1} \begin{split}
\partial_{\xi}\Phi_{21}(\tau, \xi)= c \eta_{21,\xi} - \eta_{11,\tau}-\frac{1}{6}H_{1, \xi}-H_{2, \xi}|_{z=1}, \end{split} \end{equation} which implies \begin{equation}\label{A-u-21-xi-2} \begin{split}
u_{21, \xi}= c \eta_{21,\xi} - \eta_{11,\tau}+(\frac{z^2}{2}-\frac{1}{6})H_{1, \xi}-H_{2, \xi}|_{z=1} \end{split} \end{equation} and \begin{equation}\label{A-w-21-1} \begin{split}
W_{21}= \frac{z(1-z^2)}{6}H_{1, \xi}-c z\eta_{21,\xi} +z \eta_{11,\tau}+z (H_{2, \xi}|_{z=1}). \end{split} \end{equation} Substituting the expressions of $W_{00,\tau}$, $u_{00}$, $W_{00,\xi}$, $W_{00}$, $W_{00,z}$, and $W_{10,\xi}$ into the second equation in \eqref{A-equation-21}, we obtain \begin{equation}\label{A-p-21-1} p_{21,z} = 2\Omega u_{21}-c^2 z \eta_{10,\xi\xi}+c(c +4 c_1 )z\eta_{00,\xi}^2+c(3c+4c_1 )z\eta_{00}\eta_{00,\xi\xi}. \end{equation} While from the boundary condition of $p_{21}$ on $z=1$, we have \begin{equation*}
p_{21}|_{z=1} = \eta_{21}+c^2\eta_{00}\eta_{00, \xi\xi}-2\Omega H_{2}|_{z=1}, \end{equation*} which along with \eqref{A-p-21-1} leads to \begin{equation}\label{A-p-21-3} \begin{split}
&p_{21}=p_{21}|_{z=1}+\int_1^z p_{21,z'}\,dz' \\
&= \eta_{21}-2\Omega H_{2}|_{z=1}+ 2\Omega \int_1^z u_{21}\, dz'-\frac{c^2}{2}(z^2-1) \eta_{10,\xi\xi}\\ &\quad+\frac{c(c +4c_1 )}{2} (z^2-1)\eta_{00,\xi}^2+\bigg(c^2+\frac{c(3c+4c_1)}{2} (z^2-1)\bigg)\eta_{00}\eta_{00,\xi\xi}, \end{split} \end{equation} and then \begin{equation}\label{A-p-21-3a} \begin{split}
&p_{21, \xi}= \eta_{21, \xi}-2\Omega H_{2, \xi}|_{z=1}+ 2\Omega \int_1^z u_{21, \xi}\, dz'-\frac{c^2 }{2}(z^2-1) \eta_{10,\xi\xi\xi}\\ &\quad+\frac{c(c +4 c_1 )}{2}(z^2-1) (\eta_{00,\xi}^2)_{\xi}+\bigg(c^2+\frac{c(3c+4c_1 )}{2} (z^2-1)\bigg)(\eta_{00}\eta_{00,\xi\xi})_{\xi}\\
&= -2\Omega z H_{2, \xi}|_{z=1}+ 2\Omega (z-1) \bigg(c \eta_{21,\xi} - \eta_{11,\tau}\bigg)+\frac{z(z^2-1)}{6}H_{1, \xi}-\frac{c^2 }{2}(z^2-1) \eta_{10,\xi\xi\xi}\\ &\quad+\eta_{21, \xi}+\frac{c(c +4 c_1 )}{2} (z^2-1)(\eta_{00,\xi}^2)_{\xi}+\bigg(c^2+\frac{c(3c+4c_1 )}{2}(z^2-1) \bigg)(\eta_{00}\eta_{00,\xi\xi})_{\xi}. \end{split} \end{equation} Thanks to the first equation in \eqref{A-equation-21}, \eqref{A-w-21-1}, and \eqref{A-w-00}, we get \begin{equation}\label{A-p-21-4} \begin{split} - p_{21,\xi}=&-c u_{21,\xi} + u_{11,\tau} + (u_{00}u_{11}+u_{10}u_{01})_\xi+ c^2z^2\eta_{00, \xi}\eta_{00,\xi\xi}\\
&+\frac{\Omega }{3}z(1-z^2)H_{1, \xi}-2\Omega c z\eta_{21,\xi} +2\Omega z \eta_{11,\tau}+2\Omega z H_{2, \xi}|_{z=1}. \end{split} \end{equation} Combining \eqref{A-p-21-4} with \eqref{A-p-21-3}, we get \begin{equation}\label{A-p-21-4a} \begin{split} &0=-c u_{21,\xi} + u_{11,\tau} + (u_{00}u_{11}+u_{10}u_{01})_\xi+ \bigg(\frac{c^2}{2}z^2+\frac{c(c +4 c_1 )}{2}(z^2-1)\bigg)(\eta_{00, \xi}^2)_{\xi}\\ &+\frac{\Omega }{3}z(1-z^2)H_{1, \xi}+(1-2\Omega c)\eta_{21, \xi}+ 2\Omega \eta_{11,\tau}+\frac{z(z^2-1)}{6}H_{1, \xi}-\frac{c^2 }{2}(z^2-1) \eta_{10,\xi\xi\xi}\\ &+\bigg(c^2+\frac{c(3c+4c_1 )}{2} (z^2-1)\bigg)(\eta_{00}\eta_{00,\xi\xi})_{\xi}. \end{split} \end{equation} Notice that \begin{equation*} \begin{split} &(u_{01}u_{10}+u_{00}u_{11})_\xi \\ & = c^2(\eta_{01}\eta_{10}+\eta_{00}\eta_{11})_\xi + \left(\frac{c^2}{6}-\frac{2cc_1}{9} - \frac{c^2 z^2}{2}\right)(\eta_{00}\eta_{00,\xi\xi})_\xi- 3 c(c+c_1)(\eta_{00}^2\eta_{01})_\xi \end{split} \end{equation*} and \begin{equation*} \begin{split}
&H_{2, \xi}|_{z=1} =3c^2(\eta_{01}\eta_{10}+\eta_{00}\eta_{11})_\xi - \left(\frac{c^2}{3}+\frac{2cc_1}{9}\right)(\eta_{00}\eta_{00,\xi\xi})_\xi- 3 c(c+c_1)(\eta_{00}^2\eta_{01})_\xi. \end{split} \end{equation*} We substitute \eqref{A-u-21-xi-2} and \eqref{A-u-11-tau} into \eqref{A-p-21-4a} to get \begin{equation}\label{A-eta-11-eqn} \begin{split} &2(\Omega + c)\eta_{11,\tau} + 3c^2(\eta_{00}\eta_{11}+\eta_{10}\eta_{01})_\xi-2(c+c_1)(3c+2c_1)(\eta_{00}^2\eta_{01})_\xi+\frac{c^2}{3}\eta_{10,\xi\xi\xi}\\ &-\left(\frac{c^2}{6}+\frac{10c c_1}{9}+\frac{2 c_1^2}{9}\right)(\eta_{00,\xi}^2)_{\xi}-\left(\frac{c^2}{3}+\frac{20 c c_1}{9}+\frac{8 c_1^2}{9}\right)(\eta_{00}\eta_{00,\xi\xi})_{\xi}=0. \end{split} \end{equation}
\vskip 0.2cm
\noindent {\bf Acknowledgments.} The work of Gui is supported in part by the NSF-China under the grants 11571279, 11331005, and the Foundation FANEDD-201315. The work of Liu is supported in part by the Simons Foundation grant-499875.
\vskip 0.2cm
\end{document}
\begin{document}
\title{Estimation of disorders in the rest positions of two membranes in optomechanical systems}
\author{Claudio M. Sanavio} \affiliation{Department of Physics, University of Bologna, Via Irnerio 46, 40126 Bologna, Italy}
\author{J\'ozsef Zsolt Bern\'ad} \affiliation{Peter Gr\"unberg Institute (PGI-8), Forschungszentrum J\"ulich, D-52425 J\"ulich, Germany}
\author{Andr\'e Xuereb} \affiliation{Department of Physics, University of Malta, Msida MSD 2080, Malta}
\date{\today}
\begin{abstract} The formalism of quantum estimation theory is applied to estimate the disorders in the positions of two membranes placed in a driven optical cavity. We consider the coupled-cavities and the transmissive-regime models to obtain effective descriptions of this system for different reflectivity values of the membranes. Our models also include the high-temperature Brownian motion of the membranes, losses of the cavity fields, the input-output formalism, and balanced homodyne photodetection of the cavity output field. In this two-parameter estimation scenario, we compare the classical and quantum Fisher information matrices and evaluate the accuracies of the estimations. We show that the two models favor very different estimation strategies, and that the temperature does not have a detrimental effect on the estimation accuracies but makes it more difficult to attain the quantum optimal limit. Our analysis, based on recent experimental parameter values, also reveals that the best estimation strategies with unit-efficiency detectors are measurements of the quadratures of the output field.
\end{abstract}
\maketitle
\section{Introduction}\label{sec:I}
Parameter estimation is a crucial task at the heart of engineering and the physical sciences \cite{Kaipio}. Quantum statistical inference attempts to find appropriate quantum measurements or estimators from which the value of one or more parameters of a quantum mechanical system can be estimated \cite{Helstrom, Holevo, Wiseman}. This task may not always yield measurements implementable with current technologies, and therefore one has to consider families of quantum measurements used in recent experimental setups. These measurements generate data that is inherently random and is usually described by a probability density function depending on the true values of the parameters to be estimated. Estimators are functions of the data, and their performance is usually assessed by their mean-squared error, or by their variance when they are unbiased. Being able to place a lower bound on the mean-squared error or variance of any estimator provides us with a benchmark against which we can compare the performance of different estimation strategies. Although many lower bounds exist for classical systems \cite{KBell}, the Cram\'er-Rao lower bound is the one that has a straightforward extension to quantum systems and is by far the easiest to determine \cite{Helstrom68}. In the multi-parameter estimation case with unbiased estimators, which is our setting here, the covariance matrix of the estimates is lower bounded, in the sense of matrix inequalities, by the inverse of the quantum Fisher information matrix (QFIM). Provided that we would like to perform inference in a quantum mechanical system with a constrained set of quantum measurements, the process of estimation is divided, in our approach, into two parts. 
First, one determines the classical Fisher information matrix (CFIM) from the probability density function of the measurement data and investigates circumstances under which the CFIM is, in the trace norm, as close as possible to the QFIM, which in the sense of matrix inequalities is always greater than or equal to the CFIM \cite{Petz}. Second, in the classical postprocessing of the measurement data, the attainability of the Cram\'er-Rao lower bound is investigated \cite{vanTrees}.
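As a toy illustration of the classical half of this program (not the cavity model of this paper): for data drawn from a scalar Gaussian $\mathcal{N}(\mu,\sigma^2)$ with unknown parameters $(\mu,\sigma)$, the CFIM follows directly from the log-likelihood, and the Cram\'er-Rao bound is its inverse. A \texttt{sympy} sketch:

```python
import sympy as sp

x = sp.symbols('x', real=True)
mu = sp.symbols('mu', real=True)
sigma = sp.symbols('sigma', positive=True)

pdf = sp.exp(-(x - mu)**2/(2*sigma**2))/(sigma*sp.sqrt(2*sp.pi))
logp = sp.log(pdf)
params = (mu, sigma)

# CFIM entries: E[(d logp/d theta_i)(d logp/d theta_j)]
F = sp.Matrix(2, 2, lambda i, j: sp.integrate(
    sp.diff(logp, params[i])*sp.diff(logp, params[j])*pdf, (x, -sp.oo, sp.oo)))

# known result: F = diag(1/sigma^2, 2/sigma^2)
assert sp.simplify(F - sp.diag(1/sigma**2, 2/sigma**2)) == sp.zeros(2, 2)
crb = F.inv()   # per-sample Cramer-Rao covariance lower bound
```

The quantum counterpart replaces the score functions with symmetric logarithmic derivatives; the matrix inequality between CFIM and QFIM then quantifies how much of the quantum limit a given measurement extracts.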
In this paper, we follow the above-described methodology to estimate the disorder in the positions of mechanical membranes in an optical cavity. Multiple-membrane cavity optomechanics has attracted increasing attention from the scientific community over the last decade. In contrast with the standard optomechanical setup of a linear cavity composed of one fixed mirror and one movable end mirror, the membrane-in-the-middle (MIM) configuration places the movable membrane, a dielectric surface of finite thickness, between the two fixed mirrors composing the optical cavity. The interesting features of this setup have been investigated both theoretically \cite{Chow1986,BhattacharyaMeystre2007,BhattacharyaUysMeystre2008} and experimentally \cite{Jayich2008}. The presence of the dielectric material changes the properties of the optical mode, in particular its frequency, and therefore the positions of nodes and anti-nodes. Following these interesting results, theoretical investigations shifted to the multiple-membrane-in-the-middle (MMIM) configuration \cite{BhattacharyaMeystre2008}, where more membranes are located inside the optical cavity. The analysis of these systems revealed promising features, such as the enhancement of optomechanical coupling strengths based on constructive interference \cite{XuerebGenesDantan2012,Rabl2011,LiXuerebMalossiVitali2016}.
Optomechanical systems are well-suited for studying the nature of quantum mechanics of macroscopic objects \cite{Marshall} as well as measuring weak forces with high sensitivity and precision \cite{Kippenberg}. They lie at the heart of laser-based interferometric gravitational wave observatories \cite{Abbott}, the theoretical background of which has been known for several decades \cite{Braginsky,Caves}. In such systems, the physical quantity of interest is encoded in the displacement of the moving element, which must be estimated with the greatest possible precision. These considerations carry forward to the MMIM scenario, and in particular to systems with two moving membranes, of which experimental investigations started only recently \cite{Piergentili2018,Wei2019}. However, previously set rest positions of the membranes can be displaced due to imperfections, and hence the precision of the whole experimental setup is affected. Here, we provide a systematic estimation of disorders in the positions of the membranes based on statistical inference. We follow and extend our previous frequentist statistical inference approach \cite{SanavioBernadXuereb2020}, where the measurement data on a cavity optomechanical system is obtained by determining the output field of the cavity with the help of the input-output relations \cite{Gardiner1985} and measuring the escaping field by balanced homodyne photodetection.
This paper is organized as follows. In Sec.~\ref{sec:II} we introduce two models describing the system when the reflectivity of the membranes is either high or low. Then, we employ the Heisenberg-Langevin equations to obtain the steady-state and its fluctuations for the output field to be measured. In Sec.~\ref{sec:III} we discuss our multi-parameter estimation strategy in the context of balanced homodyne photodetection. Then, we apply our strategy to infer the disorder in the positions of the membranes. In Sec.~\ref{sec:IV} we show the results and in Sec.~\ref{sec:V} we draw our conclusions. Detailed formulas supporting the main text are collected in the four appendices.
\section{Model}\label{sec:II}
We consider an optical cavity of length $L$ formed by perfectly reflecting end mirrors and two identical vibrating dielectric membranes placed inside the cavity (see Fig.~\ref{fig:model}). Each membrane has reflectivity $r$, mass $m$, and mechanical frequency $\omega_\text{m}$. Furthermore, each is confined by a harmonic potential $m\omega_\text{m}^2\hat{q}^2_i/2$, with $\hat{q}_i$ being the position operator of the $i$th membrane.
\begin{figure}
\caption{Schematic representation of the system. Two movable dielectric membranes are placed inside a Fabry–P\'erot cavity. Further details about the scheme are in the text.}
\label{fig:model}
\end{figure}
One is able to find the electromagnetic field inside the cavity by solving the Helmholtz equation and setting the proper boundary conditions~\cite{Brooker2003}. However, the electric susceptibility inside the full cavity has to be modeled in order to incorporate both membranes~\cite{BhattacharyaMeystre2008}. In order to make the canonical quantization of such a system possible, Ref.~\cite{CheungLaw2011} has assumed a nonbirefringent membrane, i.e., the refractive index does not depend on the polarization and propagation direction of the field, and also a nondispersive one, i.e., the electric susceptibility of the membrane does not depend on the field's frequency. Based on the single-membrane approach of Ref.~\cite{CheungLaw2011}, we consider an identical second membrane. We assume that the two membranes have independent suspensions and therefore the second membrane is modeled as an additive contribution to the Hamiltonian of Ref.~\cite{CheungLaw2011}. If the harmonic potentials bind both dielectric membranes about their rest positions such that the average position $\langle\hat{q}_i\rangle$ $(i = 1, 2)$ is small compared to the wavelengths of the field, then the linear approximation of the field-membrane couplings is valid and the Hamiltonian reads \begin{eqnarray}\label{eq:Hamiltonian1} \hat{H}&=&\sum_{j}\hbar\omega_{j}\hat{a}_{j}^\dagger\hat{a}_{j}+\sum_{i=1}^2\hat{q}_i\sum_{j,k} g_{ijk}\bigg{(}\hat{a}_{j}^\dagger\hat{a}_{k}^\dagger+\hat{a}_{j}^\dagger\hat{a}_{k}+\text{h.c.}\bigg{)}\nonumber\\ &&+\sum_{i=1}^2\bigg{[}\frac{\hat{p}_i^2}{2m}+\frac{m\omega_\text{m}^2 \hat{q}^2_i}{2}\bigg{]}, \end{eqnarray} where $\hat{a}_j$ $(\hat{a}^\dagger_j)$ is the annihilation (creation) operator of the $j$th field mode with frequency $\omega_j$, which is obtained in the case when both membranes are at rest. Similarly, the coupling constants $g_{ijk}$ are \cite{CheungLaw2011}: \begin{equation}
g_{ijk}=\frac{\partial \omega_j(q)}{\partial q} \big |_{q=q^{(0)}_i} \delta_{j,k}+J_{ijk}, \label{eq:defgJ} \end{equation} where $\delta_{j,k}$ is the Kronecker delta and the $q_i^{(0)}$ $(i=1,2)$ are the rest positions of the membranes in the absence of the electromagnetic field. $J_{ijk}$ is the coupling strength of the photon emission and absorption processes, which occur between field modes $j$ and $k$ and are mediated by the $i$th membrane. We also have the property $J_{ijj}=0$ for all $i$ and $j$.
In this paper, we are interested in two different setups, distinguished by the reflectivity of the membranes. When the reflectivity is high, one can use the so-called coupled-cavities (CC) model, for which three different modes are localized in the spacings between the membranes and the end mirrors. On the other hand, when the reflectivity is low we consider the electromagnetic field mode to be delocalized over the whole cavity. We dub this model the transmissive regime (TR). Both the CC and TR models have been widely investigated in the literature \cite{BhattacharyaMeystre2008,Jayich2008,XuerebGenesDantan2013} and represent the two most pursued effective models for multiple-membrane optomechanical systems. Our goal is to show their differences and analogies during an estimation process.
\subsection{Dissipative dynamics}\label{sec:IIa}
The membranes interact with the surrounding gas atoms and are also coupled to the environment through their suspensions. Their dynamics are slow compared to the correlation times of the environments. This is the characteristic situation of quantum Brownian motion and, without loss of generality, we consider this to be the only dissipative mechanism of the membranes, although the loss of mechanical excitations is a rich phenomenon \cite{AspelmeyerKippenbergMarquardt2014}. Quantum Brownian motion in a harmonic potential $m\omega_\text{m}^2 \hat{q}^2/2$ is described by the following Heisenberg equations of motion \begin{eqnarray}
\dot{\hat{q}}&=&\frac{\hat{p}}{m}, \label{eq:QLEBrown} \\ \dot{\hat{p}}&=&-m\omega_\text{m}^2 \hat{q}-\gamma\hat{p}+\hat{\xi},\nonumber \end{eqnarray} where $\gamma$ is the strength of the friction force. The operator $\hat{\xi}$ represents the quantum Brownian noise and we consider the environment to be initially in a thermal equilibrium state with temperature $T$. In the high-temperature limit, which is valid at room temperature, the two-time correlation function of $\hat{\xi}(t)$ reads \cite{BreuerPetruccioni2002}: \begin{eqnarray} \langle\hat{\xi}(t)\hat{\xi}(t')\rangle=2 m \gamma k_B T \delta(t-t'). \nonumber \end{eqnarray}
Any mode of the field inside the optical cavity is subject to photon leakage through the mirrors and membranes, which couples the inside field with the continuum of outside field modes. The dynamics of any optical cavity mode are well described by the Heisenberg-Langevin equation~\cite{Gardiner1991}. Based on the input-output formalism, this equation governs the time evolution of the single-mode field operator $\hat{a}$ subject to decay $\kappa$ and affected by noise, which appears explicitly as the input field $\hat{a}^{\text{in}}$. The equation reads \begin{equation}\label{eq:QLEopt} \frac{d\hat{a}}{dt}=-\frac{\kappa}{2}\hat{a}+\sqrt{\kappa}\hat{a}^{\text{in}}, \end{equation} where we have omitted, for now, the full Hamiltonian evolution of the system. The input operator $\hat{a}^{\text{in}}$, associated with the vacuum fluctuations of the continuum of modes outside the cavity, is delta correlated in the vacuum state, $\bra{0}[\hat{a}^{\text{in}}(t),\hat{a}^{\dagger\text{in}}(t')]\ket{0}=\delta(t-t')$. This is because the field modes have optical frequencies and thus the average number of thermal photons at these frequencies at room temperature is approximately zero. Furthermore, we can use the same input-output theory to describe the losses induced by the manufacturing errors of the membranes. As a result, in the CC model each subcavity can experience a different decay rate $\kappa_j$ ($j\in\{1,2,3\}$). In the case of the TR model, we consider only one decay rate for the single-mode field.
\subsection{Coupled-cavities (CC)}\label{sec:IIb}
When two membranes are placed inside an optical cavity, there are three spacings, or inner cavities, between the membranes and the mirrors. We take the length of each inner cavity to be $L/3$, so that the difference between the rest positions of the membranes is $q^{(0)}_2-q^{(0)}_1=L/3$. If the reflectivity of the membranes is one, i.e., $r=1$, and they are at rest, the optical cavity consists simply of three uncoupled inner cavities with eigenfrequencies \begin{equation}\label{eq:CCFrequencies} \omega_{n}=\frac{3 n\pi c}{L}, \end{equation} where $n$ is a positive integer and $c$ is the speed of light. If the reflectivity is slightly smaller than one, the three inner cavities become coupled. Furthermore, we consider that in each inner cavity only one field mode is dominant, with frequency $\omega_c$. This condition can be achieved by driving the system with a laser such that only these selected modes are enhanced \cite{BhattacharyaMeystre2008}. Therefore, we consider a laser with frequency $\omega_\text{L}$ and intensity $\varepsilon$ driving the first inner cavity ($j=1$), which has a mirror as its left and a membrane as its right boundary. Now, if in \eqref{eq:Hamiltonian1} we neglect scattering processes between dominant and non-dominant modes and also the two-photon processes, we obtain \begin{eqnarray}\label{eq:CCHamiltonian} H_{\text{CC}}&=& \sum\limits_{j=2}^{3}\hbar\Delta_0 \hat{a}_j^\dagger\hat{a}_j+i\hbar \varepsilon(\hat{a}_1^{\dagger}-\hat{a}_1)\nonumber\\ &+&\sum\limits_{i=1}^{2}\bigg{[}\hbar g\hat{q}_i(\hat{a}^{\dagger}_{i}\hat{a}_i-\hat{a}^{\dagger}_{i+1}\hat{a}_{i+1}) \\ &+&\frac{\hat{p}^2_i}{2m}+\frac{m\omega_\text{m}^2 \hat{q}^2_i}{2}+\hbar J\hat{a}_{i+1}^\dagger\hat{a}_i+\hbar J\hat{a}_{i}^\dagger\hat{a}_{i+1}\bigg{]}, \nonumber \end{eqnarray} where $\Delta_0=\omega_c-\omega_\text{L}$, $g=g_{111}=g_{122}=g_{222}=g_{233}$ and $J=g_{112}=g_{121}=g_{223}=g_{232}$, see Eq. \eqref{eq:defgJ}.
Note that Hamiltonian~\eqref{eq:CCHamiltonian} is already expressed in a rotating-frame of all three modes of the field with frequency $\omega_{\text{L}}$ \cite{AspelmeyerKippenbergMarquardt2014}.
The $i$th and $(i+1)$th modes are located to the left and right of the $i$th ($i \in \{1,2\}$) membrane, respectively, and therefore they exert opposite light pressure on this membrane, which is reflected in the different signs of the couplings between the mechanical motion of the membrane and the two adjacent single-mode fields. The electromagnetic field either passes through or pushes the membranes, and these effects are characterized by the hopping rate $J$ and the uniform optomechanical coupling $g$, respectively. Both processes influence the motion of the membranes and thus the Hamiltonian can describe rich physics, though, due to the number of assumptions involved, it is still a ``minimal'' model. In fact, the optical hopping between the inner cavities accounts for the non-perfect reflectivity of the membranes; for example, a classical treatment of the hopping rate with the transfer-matrix method yields the relation $J=\omega_c \sqrt{2(1-r)}$~\cite{Jayich2008}.
A standard procedure consists of linearizing the dynamics by expanding the Hamiltonian around the steady state~\cite{AspelmeyerKippenbergMarquardt2014}, which is reached due to decoherence and excitation losses in the system; see Sec. \ref{sec:IIa} for further details. This procedure is defined through the transformations $\hat{a}_j\to\alpha_j+\hat{a}_j$ and $\hat{q}_i\to q_{i}+\hat{q}_i$, where $\alpha_j$ and $q_{i}$ represent the steady-state solutions for the $j$th field mode and the $i$th membrane, respectively. The transformation is applied to the dynamics of the system (see Appendix \ref{app:steadystateamplitude}) and does not affect the momentum operators $\hat{p}_i$ \cite{SanavioBernadXuereb2020}. The new operators describe the fluctuations around the steady state, and terms of higher than second order in the fluctuations are neglected in the transformed Hamiltonian. Finally, we can find the Hamiltonian that rules the dynamics of the fluctuation operators, which reads \begin{eqnarray}\label{eq:linearCCHamiltonian} H_{\text{CC,quad}}&=&\sum_{j=1}^{3}\hbar\Delta_j\hat{a}_j^\dagger\hat{a}_j+\sum_{i=1}^{2}\bigg{(}\hbar J\hat{a}_{i+1}^\dagger\hat{a}_i+\hbar J\hat{a}_{i}^\dagger\hat{a}_{i+1}\nonumber\\ &&+\hbar g\hat{q}_i(\alpha_i\hat{a}^{\dagger}_{i}-\alpha_{i+1}\hat{a}^{\dagger}_{i+1}+\bar{\alpha}_i\hat{a}_{i}-\bar{\alpha}_{i+1}\hat{a}_{i+1})\nonumber\\ &&+\frac{\hat{p}^2_i}{2m}+\frac{m\omega_\text{m}^2 \hat{q}^2_i}{2}\bigg{)}, \end{eqnarray} where the detunings $\Delta_j$ and the steady-state amplitudes $\alpha_j$ for the $j$th ($j \in \{1,2,3\}$) inner cavity can be obtained as functions of the parameters of the Hamiltonian and the loss mechanisms; $\bar{\alpha}$ denotes the complex conjugate of $\alpha$. We use this quadratic Hamiltonian to calculate the dynamics of the fluctuations and to estimate the disorder in the positions of the membranes.
\subsection{Transmissive regime (TR)}\label{sec:IIc}
We now consider a different situation, where only one mode is present in the whole cavity and interacts with each of the membranes. This can be realized in what is called the transmissive regime \cite{XuerebGenesDantan2012} of the membrane stack. In general, for any reflectivity $r$ of a system of $N_m$ membranes, one can find $N_m+1$ selected inner-cavity lengths $L/(N_m+1)$ such that the global reflectivity of the whole membrane set drops to zero. Thus, the field sees the membrane stack as a single membrane with low reflectivity, regardless of the original value of $r$. An analytical expression for the different optomechanical coupling strengths can also be obtained by using the transfer-matrix method \cite{Piergentili2018,XuerebGenesDantan2013}. This is our starting point, where we consider a single-mode field with frequency $\omega_c$ in the whole optical cavity. A laser with frequency $\omega_\text{L}$ and intensity $\varepsilon$ is also driving this mode. Now, we obtain another subcase of \eqref{eq:Hamiltonian1}, which reads \begin{eqnarray}\label{eq:HamiltonianOneMode} H_{\text{TR}}&=&\hbar\left(\Delta_0+\sum^2_{i=1} g_i \hat{q}_i\right)\hat{a}^\dagger\hat{a}\nonumber\\ &&+\sum^2_{i=1}\bigg{(}\frac{\hat{p}_i^2}{2m}+\frac{m\omega_\text{m}^2\hat{q}^2_i}{2}\bigg{)}\nonumber\\ &&+i \hbar \varepsilon(\hat{a}^{\dagger}-\hat{a}), \end{eqnarray} where $\Delta_0=\omega_c-\omega_\text{L}$, $g_1=g_{111}$, and $g_2=g_{211}$, see Eq. \eqref{eq:defgJ}. We have moved immediately to the rotating frame of the field mode with frequency $\omega_\text{L}$ and assumed that the disorder-free optomechanical coupling strengths of the two membranes are equal.
The Hamiltonian that describes the dynamics in terms of the fluctuations follows immediately. Based on the arguments of Sec. \ref{sec:IIb}, this Hamiltonian in the transmissive regime reads \begin{eqnarray}\label{eq:HamiltonianOneModeLinear} H_{\text{TR,quad}}&=&\hbar\left[\Delta\hat{a}^\dagger\hat{a}+\sum^2_{i=1} g_i \hat{q}_i(\alpha \hat{a}^\dagger+\bar{\alpha}\hat{a})\right]\nonumber\\ &&+\sum^2_{i=1}\bigg{(}\frac{\hat{p}_i^2}{2m}+\frac{m\omega_\text{m}^2 \hat{q}^2_i}{2}\bigg{)}, \end{eqnarray} where the equations yielding the detuning $\Delta$ and the steady-state amplitude $\alpha$ are found in Appendix \ref{app:BCCmodel}.
\subsection{Heisenberg-Langevin equations}\label{sec:IId}
In the following, we present a general formalism that applies to both quadratic Hamiltonians in \eqref{eq:linearCCHamiltonian} and \eqref{eq:HamiltonianOneModeLinear}. We collect the operators of both dynamics into vectors of operators \begin{eqnarray}
c^{(\text{CC})}&=&(\hat{X}_1,\hat{Y}_1, \hat{X}_2, \hat{Y}_2,\hat{X}_3,\hat{Y}_3,\hat{p}_1,\hat{p}_2,\hat{q}_1,\hat{q}_2)^T, \label{CCc}\\
c^{(\text{TR})}&=&(\hat{X}, \hat{Y}, \hat{p}_1,\hat{p}_2,\hat{q}_1,\hat{q}_2)^T, \label{TRc} \end{eqnarray} where the superscript $T$ denotes the transposition and we have introduced the quadratures $\hat X=(\hat a^\dagger + \hat a)/\sqrt{2}$ and $\hat Y=i(\hat a^\dagger - \hat a)/\sqrt{2}$. We write the corresponding Heisenberg-Langevin equation as \begin{eqnarray}\label{eq:vectorialQLE} \frac{d}{dt} c^{(m)}&=&A^{(m)}c^{(m)}+\eta^{(m)}, \quad m\in \{\text{CC},\text{TR}\}, \nonumber \\ \end{eqnarray} where $\eta^{(m)}$ is the vector of all noise operators: \begin{eqnarray}
\eta^{(\text{CC})}&=&(\sqrt{\kappa_1} \hat{X}^{\text{in}}_1,\dots,\sqrt{\kappa_3}\hat{Y}^{\text{in}}_3,\hat{\xi},\hat{\xi},0,0)^T, \nonumber \\
\eta^{(\text{TR})}&=&(\sqrt{\kappa} \hat{X}^{\text{in}},\sqrt{\kappa}\hat{Y}^{\text{in}},\hat{\xi},\hat{\xi},0,0)^T. \nonumber \end{eqnarray} The dynamical matrices $A^{(\text{CC})}$ and $A^{(\text{TR})}$ contain terms obtained from the quadratic Hamiltonians, and their explicit forms can be found in Appendices \ref{app:steadystateamplitude} and \ref{app:BCCmodel}. Finally, the formal solution of \eqref{eq:vectorialQLE} reads \begin{eqnarray}\label{eq:QLEsolution} c^{(m)}(t)&=&\exp\left[A^{(m)}t\right]c^{(m)}(0) \\ &+&\int_0^tdt'\exp\left[A^{(m)}(t-t')\right]\eta^{(m)}(t'). \nonumber \end{eqnarray}
The quadratic Hamiltonians in \eqref{eq:linearCCHamiltonian} and \eqref{eq:HamiltonianOneModeLinear} together with the loss mechanisms ensure that the state of the fluctuations is Gaussian~\cite{WeedbrookPirandola2011}. As the fluctuations around the steady state have zero means, it is immediate that this Gaussian state is fully described by the symmetric auto-correlation matrix \begin{equation} \sigma(t,t')=\frac{1}{2}\langle c(t) c(t')^T+ c(t')c(t)^T\rangle. \end{equation}
We use the solutions of the quantum Langevin equations \eqref{eq:QLEsolution} to find $\sigma(t,t)$ in the stationary limit $t \to \infty$. Provided that both systems are stable, with stability conditions derived by using the Routh-Hurwitz criterion \cite{Routh}, $\sigma=\lim_{t \to \infty} \sigma(t,t)$ fulfills the following Lyapunov equation~\cite{SanavioBernadXuereb2020}: \begin{eqnarray}\label{eq:Lyapunov} A\sigma+\sigma A^T=-D, \end{eqnarray} where \begin{equation}\label{eq:DMatrix} D=\int_0^{\infty} d\tau[M(\tau)\exp(A^T\tau)+\exp(A\tau)M(\tau)], \end{equation} and \begin{equation} M(t-t')=\frac{1}{2}\langle \eta(t)\eta(t')^T+\eta(t')\eta(t)^T\rangle \end{equation} is the noise correlation matrix. In particular, the matrix entries are \begin{eqnarray}\label{eq:MMatrix} \left[M\right]_{\hat{X}_i\hat{X}_j}(t-t')&=&\left[M \right]_{\hat{Y}_i\hat{Y}_j}(t-t')= \frac{\kappa_i}{2}\delta_{i,j}\delta(t-t')\nonumber\\ \left[M\right]_{\hat{p}_i\hat{p}_j}(t-t')&=&2m\gamma k_{\text{B}}T\delta_{i,j}\delta(t-t'). \nonumber \end{eqnarray}
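Numerically, the steady-state covariance follows from a standard Lyapunov solver. The sketch below is illustrative only: the drift matrix is a toy stand-in for $A^{(\text{CC})}$ or $A^{(\text{TR})}$, describing a single damped mode with detuning $\Delta$ and decay $\kappa$ (values chosen for the example, not taken from the models).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy stand-in for the drift matrix A: one damped mode in quadrature
# representation, with detuning Delta and decay kappa (illustrative values).
Delta, kappa = 1.0, 0.2
A = np.array([[-kappa / 2, Delta],
              [-Delta, -kappa / 2]])

# Delta-correlated vacuum input noise gives D = (kappa/2) * identity here.
D = (kappa / 2) * np.eye(2)

# solve_continuous_lyapunov solves A X + X A^T = Q, so we pass Q = -D
# to obtain the steady-state covariance of the Lyapunov equation above.
sigma = solve_continuous_lyapunov(A, -D)

# The solution must satisfy A sigma + sigma A^T + D = 0 ...
assert np.allclose(A @ sigma + sigma @ A.T + D, 0.0)

# ... and for this stable, vacuum-driven mode it is the vacuum covariance.
print(sigma)  # ≈ 0.5 * identity
```

For the full models one would replace `A` and `D` by the $10\times10$ (CC) or $6\times6$ (TR) matrices assembled from the appendices; the solver call is unchanged.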
Any experiment seeking to infer one or more parameters of this system has to perform measurements on the cavity output field. With the help of the input-output relations and considering that the output field possesses the same correlation functions as the optical input field, we have \begin{equation} \hat{a}^{\text{out}}=\sqrt{\kappa}\hat{a}-\hat{a}^{\text{in}}, \end{equation} from which we can find the output correlation matrix $\sigma^{\text{out}}$.
As the measurement is performed in a finite time interval $\tau$, only some frequencies are accessible to a detector. Hence, we can define the filter function $g_l(t)$~\cite{GenesMariTombesiVitali2008}, which accounts for the finite period of detection and reads \begin{equation} g_l(t)=\frac{\theta(t)-\theta(t-\tau)}{\sqrt{\tau}}e^{-i\Omega_l t}, \end{equation} with $\Omega_i-\Omega_j=\frac{2\pi}{\tau}n$ and $n\in\mathbb{N}$. The latter condition allows us to define $N$ independent output modes \begin{equation} a_l^{\text{out}}(t)=\int_{-\infty}^t ds g_l(t-s)\hat{a}^{\text{out}}(s), \quad l=1,\dots,N, \end{equation} which are centered at the frequency $\Omega_l$ and have bandwidth $1/\tau$. Following our previous results in~\cite{SanavioBernadXuereb2020,comment}, one can obtain the entries of the $2\times2$ correlation matrix $\sigma_l^{\text{out}}$ as \begin{eqnarray}\label{correlationmatrixoutput1} \sigma^{\text{out}}_{l,XX}&=&\frac{1}{2} \kappa \tau \text{sinc}^2\left(\frac{\Omega_l \tau}{2}\right) \left[\left(\sigma_{XX}-\sigma_{YY}\right) \cos (\Omega_l \tau) \right.\nonumber \\ &+&\left. \sigma_{XX}+2 \sigma_{XY} \sin (\Omega_l \tau)+\sigma_{YY}\right]+\frac{1}{2} \\ \label{correlationmatrixoutput2} \sigma^{\text{out}}_{l,XY}&=&\frac{1}{2} \kappa \tau \text{sinc}^2\left(\frac{\Omega_l \tau}{2}\right) \left[ \left(\sigma_{YY}-\sigma_{XX}\right) \sin (\Omega_l \tau) \right. \nonumber \\ &+& \left. 2 \sigma_{XY} \cos (\Omega_l \tau) \right]\\\label{correlationmatrixoutput3} \sigma^{\text{out}}_{l,YY}&=&\frac{1}{2} \kappa \tau \text{sinc}^2\left(\frac{\Omega_l \tau}{2}\right) \left[ \left( \sigma_{YY}- \sigma_{XX}\right) \cos (\Omega_l \tau) \right. \nonumber \\ &+& \left. \sigma_{XX}-2 \sigma_{XY} \sin (\Omega_l \tau)+ \sigma_{YY}\right]+\frac{1}{2}, \end{eqnarray} where $\sigma_{AB}=\left \langle \hat{A} \hat{B}\right \rangle $ ($A,B \in \{X,Y\}$) are the entries of the matrix $\sigma$ and $\text{sinc}(x)$ is the unnormalized sinc function $\text{sinc}(x)=\sin(x)/x$.
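The three entries above transcribe directly into a short numerical routine. The function below is our own sketch (the helper name and test values are illustrative); note that NumPy's `sinc` is normalized, $\mathrm{np.sinc}(x)=\sin(\pi x)/(\pi x)$, so its argument must be rescaled to match the unnormalized convention used here.

```python
import numpy as np

def sigma_out(sigma_xx, sigma_xy, sigma_yy, kappa, tau, omega_l):
    """Entries of the 2x2 filtered output correlation matrix, transcribing
    the closed-form expressions with the unnormalized sinc convention."""
    # unnormalized sinc(y) = sin(y)/y = np.sinc(y / pi)
    pre = 0.5 * kappa * tau * np.sinc(omega_l * tau / (2 * np.pi)) ** 2
    c, s = np.cos(omega_l * tau), np.sin(omega_l * tau)
    out_xx = pre * ((sigma_xx - sigma_yy) * c + sigma_xx
                    + 2 * sigma_xy * s + sigma_yy) + 0.5
    out_xy = pre * ((sigma_yy - sigma_xx) * s + 2 * sigma_xy * c)
    out_yy = pre * ((sigma_yy - sigma_xx) * c + sigma_xx
                    - 2 * sigma_xy * s + sigma_yy) + 0.5
    return np.array([[out_xx, out_xy], [out_xy, out_yy]])

# At Omega_l = 0 the filter passes the DC component: sinc(0) = 1 and the
# matrix reduces to kappa*tau*sigma + 1/2 (the 1/2 is the vacuum term).
print(sigma_out(0.5, 0.0, 0.5, kappa=1.0, tau=2.0, omega_l=0.0))
```

The $\Omega_l=0$ limit is a convenient consistency check: for vacuum intracavity fluctuations ($\sigma=\tfrac{1}{2}\mathbb{1}$), $\kappa=1$ and $\tau=2$, the routine returns $1.5\,\mathbb{1}$.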
In the TR model, $\sigma_{XX}$, $\sigma_{XY}$, and $\sigma_{YY}$ are obtained directly from \eqref{eq:Lyapunov}, because there is only one mode of the field. The situation in the CC model is different: the output field leaks from the last ($j=3$) inner cavity and, after solving the corresponding Lyapunov equation, $\sigma_{X_3X_3}$, $\sigma_{X_3Y_3}$, and $\sigma_{Y_3Y_3}$ have to be substituted into Eqs. \eqref{correlationmatrixoutput1}, \eqref{correlationmatrixoutput2}, and \eqref{correlationmatrixoutput3} to obtain $\sigma_l^{\text{out}}$.
Thus, the state of the output field fluctuations is given by the Gaussian Wigner function \begin{equation}\label{Wigner} W(R)=\frac{1}{2 \pi \sqrt{\det(\sigma^{\text{out}}_l)}}e^{-\frac{1}{2}R^T \left[\sigma^{\text{out}}_l\right]^{-1} R}, \end{equation} where $R=(X^{\text{out}}_l,Y^{\text{out}}_l)^T$.
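Evaluating this Gaussian Wigner function numerically is straightforward; the following short sketch (the function name is ours) checks the vacuum case, where the covariance is $\tfrac{1}{2}\mathbb{1}$ and the peak value is $1/\pi$.

```python
import numpy as np

def wigner(R, cov):
    """Gaussian zero-mean Wigner function at phase-space point R = (x, y)."""
    R = np.asarray(R, dtype=float)
    norm = 2 * np.pi * np.sqrt(np.linalg.det(cov))
    return np.exp(-0.5 * R @ np.linalg.solve(cov, R)) / norm

# Vacuum output mode: cov = identity/2, so W(0, 0) = 1/pi.
print(wigner([0.0, 0.0], 0.5 * np.eye(2)))  # ≈ 0.3183
```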
\subsection{Effects of disorder in the positions of the membranes}\label{sec:IIe}
In the following, we present effective versions of both models by assuming that the shift of the equilibrium position from $q_i^{(0)}$ to $q_i=q_i^{(0)}+ \delta q_i$ ($i\in\{1,2\}$) affects only two main parameters: the frequencies of the field modes and the optomechanical couplings. Provided that $\delta q_i \ll L$, the resonance frequency $\omega_c$ of each of the three inner cavities in the CC model changes as \cite{BhattacharyaUysMeystre2008} \begin{equation} \omega_{j}\approx \omega_c \left[1-3\left(\delta q_j-\delta q_{j-1}\right)/L\right], \nonumber \end{equation} where $j \in \{1,2,3\}$ and $\delta q_{0}=\delta q_3=0$, because the end mirrors are assumed not to change their positions. There is only one field mode in the case of the TR model, whose frequency changes according to the following function \cite{Piergentili2018} \begin{equation}
\omega_c(q_1,q_2)=\frac{n\pi c}{L}+(-1)^n \frac{c}{L} \arcsin \left[F(q_1, q_2) \right]- \frac{c}{L} \theta (q_1, q_2), \label{eq:piergentiliformula} \end{equation} where \begin{eqnarray}
F(q_1, q_2)&=&\frac{2 \sqrt{r} \cos \left[k(q_1+ q_2)\right] \sin \left[k (q_2-q_1)\right] }{\sqrt{1+r^2-2r\cos\left[2 k (q_2-q_1)\right]}}, \nonumber \\
\theta (q_1, q_2)&=& \arcsin \left[ \frac{ \sqrt{r} \sin \left[2k (q_2-q_1)\right] }{\sqrt{1+r^2-2r\cos\left[2 k (q_2-q_1)\right]}}\right], \nonumber \end{eqnarray} with $k=n \pi/L$, where $n$ is fixed such that the above formula yields the cavity mode with frequency $\omega_c$ when $\delta q_1=\delta q_2=0$. It is worth noting that the phase related to the reflection of the membranes is set here to zero \cite{Piergentili2018}.
The optomechanical coupling strength is the derivative of the optical mode frequency with respect to the position $q_i$ of the $i$th membrane, see Eq. \eqref{eq:defgJ}. In the case of the CC model, the optomechanical couplings of both membranes are changed to \cite{BhattacharyaUysMeystre2008} \begin{equation}
g_i(q_i)=\frac{n\pi c}{L^2}\frac{\sqrt{r} \sin \left(2kq_i\right)}{\sqrt{1-r \cos^2 \left(2kq_i\right) }}, \quad i\in\{1,2\}, \nonumber \end{equation} where $g_1(q_1^{(0)})=g_2(q_2^{(0)})=g$ and we have assumed that the effect of the disorder in the position of one membrane on the optomechanical coupling strength of the other membrane is negligible. Furthermore, we consider that the mode functions of each field mode in the three cavities are not changed significantly, and thus the membrane-induced coupling $J$ also remains unaffected \cite{CheungLaw2011}. Finally, in the TR model, using Eq. \eqref{eq:piergentiliformula}, the new optomechanical couplings are \begin{equation}
g_i=\frac{\partial \omega_c(q_1,q_2)}{\partial q_i} \big |_{q_i=q^{(0)}_i+\delta q_i}, \quad i\in\{1,2\},\nonumber \end{equation} and when $\delta q_1=\delta q_2=0$ we recover the disorder-free optomechanical couplings $g_1$ and $g_2$.
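This derivative can also be evaluated numerically. The sketch below (all parameter values are illustrative placeholders, not from any experiment) implements the TR mode-frequency formula of Eq. \eqref{eq:piergentiliformula} and obtains the disordered couplings by central finite differences in place of the analytic derivative.

```python
import numpy as np

# Illustrative parameters: cavity length L, mode number n, membrane
# reflectivity r, and the speed of light in scaled units (assumptions).
L, n, r, c = 1.0, 100, 0.4, 1.0
k = n * np.pi / L

def omega_c(q1, q2):
    """Single-mode frequency of the TR model as a function of the
    membrane positions (phase of membrane reflection set to zero)."""
    den = np.sqrt(1 + r**2 - 2 * r * np.cos(2 * k * (q2 - q1)))
    F = 2 * np.sqrt(r) * np.cos(k * (q1 + q2)) * np.sin(k * (q2 - q1)) / den
    theta = np.arcsin(np.sqrt(r) * np.sin(2 * k * (q2 - q1)) / den)
    return n * np.pi * c / L + (-1)**n * (c / L) * np.arcsin(F) - (c / L) * theta

def coupling(i, dq1, dq2, h=1e-7):
    """g_i = d omega_c / d q_i at the disordered positions, by central
    finite differences (a numerical stand-in for the analytic derivative)."""
    q = np.array([L / 3 + dq1, 2 * L / 3 + dq2])  # rest positions L/3, 2L/3
    e = np.zeros(2); e[i] = h
    return (omega_c(*(q + e)) - omega_c(*(q - e))) / (2 * h)

g1, g2 = coupling(0, 0.0, 0.0), coupling(1, 0.0, 0.0)
```

One can then evaluate `coupling` at nonzero `dq1`, `dq2` to tabulate how the couplings drift with disorder.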
Therefore, in both models the steady-state solutions also depend on $\delta q_1$ and $\delta q_2$, which also have to be taken into account in the dynamical matrices $A^{(\text{CC})}$ and $A^{(\text{TR})}$, see Appendices \ref{app:steadystateamplitude} and \ref{app:BCCmodel}.
\section{Estimation} \label{sec:III}
In this section, we employ an estimation strategy for the inference of the disorders $\delta q_1$ and $\delta q_2$. Our starting point is the family of Wigner functions $W(\delta q)$ with $\delta q=(\delta q_1,\delta q_2)^T$ in Eq. \eqref{Wigner}, which describes the possible states of the output field. In general, estimation aims to produce estimates of the unknown disorders from repeated measurements. These measurements are constrained by current technologies, which from the mathematical point of view means that we have access only to a subset of all possible positive-operator valued measures (POVMs). A lower bound on the variance of any unbiased estimator is given by the Cram\'er-Rao inequality for both classical and quantum systems. Best unbiased estimators are those that attain this bound. Finding the best unbiased estimator, which might not even exist, is not an easy task, especially when we also account for the reduced number of implementable measurements, which is the case in our investigation. Therefore, given a set of measurements with tunable parameters, the best unbiased estimators will be those whose covariance matrix, in a properly chosen norm, gets close to the Cram\'er-Rao lower bound.
An outline of our view on the estimation approach is the following: \begin{itemize}
\item An output field of the cavity is subject to balanced homodyne photodetection (BHD). Based on our theoretical model, these measurements provide us with a probability density function (PDF), which is functionally dependent on $\delta q$.
\item Then, we investigate the circumstances under which the classical Fisher information is closest to its upper bound or benchmark value, i.e., the quantum Fisher information. This step sets the values of the experimentally tunable parameters and thus provides the best PDF.
\item After obtaining the best PDF out of BHD, one has to perform classical postprocessing of the measurement data. As soon as the PDF is known and the measurement data are available, a standard decision-making process of finding the best classical estimator is carried out. \end{itemize}
In our two-parameter estimation scenario, the covariance matrix $C(\delta q)$ of the estimates $\delta q=(\delta q_1,\delta q_2)^T$ fulfills \cite{Petz} \begin{equation}\label{eq:inequality} C(\delta q) \geq \mathcal{F}^{-1} \geq \mathcal{H}^{-1}, \end{equation} in terms of matrix inequalities, where $\mathcal{F}$ and $\mathcal{H}$ are the classical and quantum Fisher information matrices, respectively. In this sense the difference matrix $\mathcal{F}^{-1}-\mathcal{H}^{-1}$ is always non-negative definite.
The quantum Fisher information matrix (QFIM) depends only on the family of states $\rho(\delta q)$ and its components are \begin{equation} \mathcal{H}_{ij}=\frac{1}{2}\text{Tr}\left[\hat{\rho}(\delta q)\{\hat{\mathcal{L}}_{\delta q_i},\hat{\mathcal{L}}_{\delta q_j}\}\right], \quad i,j \in \{1,2\}, \end{equation} where $\{, \}$ denotes the anticommutator and $\hat{\mathcal{L}}_{\delta q_i}$ is the symmetric logarithmic derivative (SLD) operator, \begin{equation}\label{eq:SLDrelation} \frac{\partial}{\partial\delta q_i}\hat{\rho}(\delta q)=\frac{1}{2}\left\{\hat{\rho}(\delta q), \hat{\mathcal{L}}_{\delta q_i}\right\}. \end{equation}
We have already obtained the phase space representation $W(\delta q)$ of the density matrix $\hat{\rho}(\delta q)$ and therefore similarly to our approach in Ref.~\cite{SanavioBernadXuereb2020}, we derive the QFIM from the Gaussian Wigner function in Eq. \eqref{Wigner}. We neglect the subscripts of $\sigma^{\text{out}}_l$ in the subsequent discussion because we focus on the only mode of the electromagnetic field that is subject to detection, i.e., $\sigma=\sigma^{\text{out}}_l$. Furthermore, we also write $R=(X^{\text{out}}_l,Y^{\text{out}}_l)^T=(x,y)^T$.
The Weyl transforms of the two SLD operators $\hat{\mathcal{L}}_{\delta q_i}$ are quadratic and can be written as \begin{equation}\label{eq:WeylSLD} L^i(x,y)=R^T\Phi^i R-\nu^i, \end{equation} with $\Phi^i=-\partial_{\delta q_i}(\sigma^{-1})/2$ and $\nu^i=\text{Tr}[\Phi^i\sigma]$. Consequently, we find the Weyl transform $L^{(2)}_{ij}(x,y)$ of the operator $\frac{1}{2}\{\hat{\mathcal{L}}_{\delta q_i},\hat{\mathcal{L}}_{\delta q_j}\}$; see the details in Appendix \ref{app:WeylTransform}. Now, we can calculate the elements of the QFIM by using the phase space representation as \begin{equation} \mathcal{H}_{ij}=\int dx \, dy \, L^{(2)}_{ij}(x,y)W(x,y). \end{equation} Finally, we obtain \begin{eqnarray}\label{eq:QFIcomponentsWT} \mathcal{H}_{ij}&=&3 \text{Tr}[(\Phi^i\sigma\Phi^j\sigma)]-\nu^i\nu^j\\ &&+\bigg{(}\det\sigma-\frac{1}{2}\bigg{)}(\Phi_{11}^i\Phi_{22}^j+\Phi_{11}^j\Phi_{22}^i-2\Phi_{12}^i\Phi_{12}^j)\nonumber. \end{eqnarray} It is worth mentioning that in the case of $i=j$, \eqref{eq:QFIcomponentsWT} reduces to Eq.~(37) of Ref.~\cite{SanavioBernadXuereb2020}.
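For concreteness, Eq. \eqref{eq:QFIcomponentsWT} can be evaluated numerically once $\sigma$ and its parameter derivatives are known; the sketch below transcribes the formula, using the identity $\partial_i(\sigma^{-1})=-\sigma^{-1}(\partial_i\sigma)\sigma^{-1}$ to build $\Phi^i$ (the covariance values in the example are placeholders, not derived from our models).

```python
import numpy as np

def qfim(sigma, dsigma):
    """QFIM for a zero-mean single-mode Gaussian state: sigma is the 2x2
    covariance matrix, dsigma a list of its derivatives with respect to
    the two disorders delta q_1 and delta q_2."""
    inv = np.linalg.inv(sigma)
    # Phi^i = -d(sigma^{-1})/d q_i / 2 = sigma^{-1} (d sigma_i) sigma^{-1} / 2
    Phi = [0.5 * inv @ ds @ inv for ds in dsigma]
    nu = [np.trace(P @ sigma) for P in Phi]
    det = np.linalg.det(sigma)
    H = np.empty((2, 2))
    for i in range(2):
        for j in range(2):
            H[i, j] = (3 * np.trace(Phi[i] @ sigma @ Phi[j] @ sigma)
                       - nu[i] * nu[j]
                       + (det - 0.5) * (Phi[i][0, 0] * Phi[j][1, 1]
                                        + Phi[j][0, 0] * Phi[i][1, 1]
                                        - 2 * Phi[i][0, 1] * Phi[j][0, 1]))
    return H

# Placeholder covariance and derivatives, for illustration only.
sigma = np.diag([1.2, 0.8])
dsigma = [np.diag([0.3, 0.1]), np.diag([0.1, 0.3])]
H = qfim(sigma, dsigma)
```

Since each term of the formula is symmetric under $i\leftrightarrow j$, the returned matrix is symmetric, as a QFIM must be.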
On the other hand, the CFIM $\mathcal{F}$ depends on the PDF of the measurements. The entries are \begin{equation}\label{eq:CFIMatrix} \mathcal{F}_{ij}=\int dk P(k; \delta q)\left[\partial_{\delta q_i} \ln P(k; \delta q) \right] \left[\partial_{\delta q_j}\ln P(k; \delta q)\right], \end{equation} where $i,j \in \{1,2\}$ and $P(k; \delta q)$ is the PDF parameterized by the unknown $\delta q$, which describes the probability of observing the outcome $k$. As we have already outlined, we consider BHD measurements, which were shown in Ref.~\cite{SanavioBernadXuereb2020} to be optimal for the inference of the optomechanical coupling strength in a standard moving-end-mirror setup. The Weyl transform of the BHD POVM is \begin{equation}\label{BHDPOVM} \Pi_k^\eta(x,y)= \sqrt{\frac{2\eta}{\pi (1-\eta)}}\exp\bigg{[}-\frac{2\eta(k-\frac{x\cos\theta+y\sin\theta}{\sqrt{2}})^2}{1-\eta}\bigg{]}, \end{equation}
where $k$ is an outcome of the measurement, $\eta$ is the detector efficiency and $\theta$ is the measured phase quadrature. This formula is usually obtained by considering an intense coherent local oscillator that interferes with the single mode field state to be measured at a $50/50$ beam splitter. Then, the two modes emerging from the beam splitter are measured by two photodetectors and the difference of the photon numbers $n_{12}$ is retained. These considerations yield $k=n_{12}/(2 \eta |\alpha_{LO}|)$ where $|\alpha_{LO}|^2$ is the mean photon number of the local oscillator's state.
The PDF $P(k; \delta q)$ is obtained by integrating the product of the phase space representation of BHD in Eq. \eqref{BHDPOVM} and the Wigner function, \begin{equation}\label{pdfk} P(k; \delta q)=\sqrt{\frac{r_\theta^\eta(\sigma)}{2 \pi}}\exp\left[-r_\theta^\eta(\sigma)k^2/2\right], \end{equation} where we have introduced the function \begin{eqnarray} r_\theta^\eta(\sigma)=\frac{4\eta}{1-\eta+2\eta R^T_\theta\sigma R_\theta} \nonumber \end{eqnarray} with $R_\theta=(\cos\theta,\sin\theta)^T$. Now, we employ Eq.~\eqref{eq:CFIMatrix} to find the entries of matrix $\mathcal{F}$ and get \begin{equation} \mathcal{F}_{ij}= \begin{cases} 2 \eta^2 \frac{\left(R_\theta^T\partial_{\delta q_i}\sigma R_\theta\right) \left( R_\theta^T\partial_{\delta q_j}\sigma R_\theta\right)}{\left(1-\eta+2\eta R_\theta^T\sigma R_\theta\right)^2}, & \mbox{if } i \ne j \\
2 \eta^2 \bigg{(}\frac{ R_\theta^T\partial_{\delta q_i}\sigma R_\theta}{1-\eta+2\eta R_\theta^T\sigma R_\theta}\bigg{)}^2, & \mbox{if } i=j \end{cases} \end{equation}
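The closed-form entries above can be computed in a few lines; the sketch below uses placeholder covariance derivatives for illustration. It also makes one structural feature explicit: $\mathcal{F}$ is the outer product of a single vector of directional derivatives and is therefore rank one, because a single homodyne quadrature probes only one linear combination of the two disorders.

```python
import numpy as np

def cfim(sigma, dsigma, theta, eta):
    """CFIM of balanced homodyne detection, transcribing the closed-form
    entries above; theta is the measured quadrature phase, eta the
    detector efficiency."""
    R = np.array([np.cos(theta), np.sin(theta)])
    den = 1 - eta + 2 * eta * R @ sigma @ R
    v = np.array([R @ ds @ R for ds in dsigma])  # directional derivatives
    return 2 * eta**2 * np.outer(v, v) / den**2

# Placeholder covariance and derivatives, for illustration only.
sigma = np.diag([1.2, 0.8])
dsigma = [np.diag([0.3, 0.1]), np.diag([0.1, 0.3])]
F = cfim(sigma, dsigma, theta=0.0, eta=0.9)

# Rank one: the outer product of a vector with itself has zero determinant.
assert abs(np.linalg.det(F)) < 1e-12
```

This degeneracy is why measurements at different quadrature phases (or output modes) must be combined when both $\delta q_1$ and $\delta q_2$ are to be inferred.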
To search for conditions under which the distance between the CFIM and the QFIM is as small as possible, we employ the trace norm to quantify this distance: \begin{equation}\label{distance}
d=\|\mathcal{H}-\mathcal{F}\|_1. \end{equation} This norm distance $d$ is a function of all parameters of the model and of the measurement scenario as well. In multiparameter estimation scenarios there is usually no optimal measurement reaching the equality $\mathcal{H}=\mathcal{F}$ \cite{Matsumoto}. In addition, we are dealing only with a subset of all possible POVMs, and our strategy will be to find the minimum of $d$ within the experimentally available parameter space.
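In NumPy, the trace norm is available as the nuclear norm (the sum of singular values, which for the symmetric matrix $\mathcal{H}-\mathcal{F}$ equals the sum of the absolute eigenvalues); the matrices below are placeholders for illustration.

```python
import numpy as np

# Placeholder QFIM and CFIM values, for illustration only.
H = np.array([[2.0, 0.5], [0.5, 1.0]])
F = np.array([[1.5, 0.4], [0.4, 0.6]])

# Trace norm of the difference: sum of singular values ('nuc' in NumPy).
d = np.linalg.norm(H - F, ord='nuc')

# For a symmetric matrix this equals the sum of absolute eigenvalues.
assert np.isclose(d, np.abs(np.linalg.eigvalsh(H - F)).sum())
```

Minimizing `d` over the tunable parameters (e.g. $\theta$, $\Omega_l$, laser intensity) can then be done with any standard scalar optimizer.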
Finally, we show how classical estimation works on the obtained data. Based on the PDF in Eq. \eqref{pdfk}, an experiment can obtain a finite sample ${\bf k}=\{k_1, k_2, \dots, k_N\}$. After observing ${\bf k}$, we want to estimate the values of $\delta q$. We denote this estimate in vector notation as $\delta \tilde{q}({\bf k})$, which is the estimator applied on the data space. We assume that all observations are effectively independent, because the values of the integrated photocurrents in BHD are recorded per pulse \cite{Raymer}. Then, \begin{equation}
P({\bf k}; \delta q)=\prod^N_{i=1} \sqrt{\frac{r_\theta^\eta(\sigma)}{2 \pi}}\exp\left[-r_\theta^\eta(\sigma)k^2_i/2\right]. \end{equation} It is straightforward to check that \begin{equation}
\int d{\bf k}\, P({\bf k}; \delta q)\left[\partial_{\delta q_i} \ln P({\bf k}; \delta q) \right]=0, \quad \forall \delta q_i \end{equation} with $i \in \{1,2\}$. Therefore, an unbiased estimator $\delta \tilde{q}({\bf k})$ attains the Cram\'er-Rao lower bound if and only if \cite{vanTrees} \begin{equation}\label{attaining}
\frac{\partial \ln P({\bf k}; \delta q) }{\partial \delta q}=\mathcal{I} (\delta q) \left[\delta \tilde{q}({\bf k})- \delta q \right], \end{equation} where $\mathcal{I}$ is some $2 \times 2$ matrix. The left-hand side of \eqref{attaining} reads \begin{eqnarray}
&&\frac{\partial \ln P({\bf k}; \delta q) }{\partial \delta q}=\begin{bmatrix}\frac{\partial \ln P({\bf k}; \delta q) }{\partial \delta q_1} \\
\frac{\partial \ln P({\bf k}; \delta q) }{\partial \delta q_2} \end{bmatrix} \\
&&=\begin{bmatrix} \frac{\eta R_\theta^T\partial_{\delta q_1}\sigma R_\theta}{1-\eta+2\eta R_\theta^T\sigma R_\theta}\left(\frac{4 \eta \sum^N_{i=1}k^2_i }{1-\eta+2\eta R_\theta^T\sigma R_\theta}-N \right) \\
\frac{\eta R_\theta^T\partial_{\delta q_2}\sigma R_\theta}{1-\eta+2\eta R_\theta^T\sigma R_\theta}\left(\frac{4 \eta \sum^N_{i=1}k^2_i}{1-\eta+2\eta R_\theta^T\sigma R_\theta}-N \right) \end{bmatrix}. \nonumber \end{eqnarray} We observe that this vector cannot be written in the form required by \eqref{attaining} \begin{equation}
\begin{bmatrix} \mathcal{I}_{11} (\delta q) & \mathcal{I}_{12} (\delta q) \\ \mathcal{I}_{21} (\delta q) & \mathcal{I}_{22} (\delta q)
\end{bmatrix} \begin{bmatrix}\delta \tilde{q}_1({\bf k})- \delta q_1 \\
\delta \tilde{q}_2({\bf k})- \delta q_2 \end{bmatrix}, \end{equation} and therefore, an efficient unbiased estimator does not exist. However, one can still look for minimum-variance unbiased estimators by using the concept of complete sufficient statistics and the Rao--Blackwell--Lehmann--Scheff\'e theorem \cite{Kay, Casella}. By examining the PDF, one can realize that \begin{equation}
T({\bf k})= \left[
\sum^N_{i=1} k^2_i, \quad \sum^N_{i=1} k^2_i \right]^T \end{equation} is a sufficient statistic for $\delta q_1$ and $\delta q_2$. Taking the expectation value produces \begin{equation}
\int d{\bf k} \, P({\bf k}; \delta q) \,T({\bf k})=\begin{bmatrix}
N \frac{1-\eta+2\eta R^T_\theta\sigma R_\theta} {4\eta} \\ N \frac{1-\eta+2\eta R^T_\theta\sigma R_\theta} {4\eta}
\end{bmatrix}. \end{equation} The task is to find two functions $f_1$ and $f_2$ such that \begin{equation}
\int d{\bf k} \, P({\bf k}; \delta q) f_i\left[T({\bf k})\right]=\delta q_i, \quad i\in\{1,2\}. \end{equation} However, this turns out to be difficult because $\sigma$ is the solution of the Lyapunov equation \eqref{eq:Lyapunov}, where $A$ contains $\delta q_1$ and $\delta q_2$. In fact, $\sigma$ is a function of the eigenvalues and eigenvectors of $A$, which depend on the parameters to be estimated. Since $A$ is a $6$- or $10$-dimensional matrix in our models (TR and CC, respectively), and analytical roots of general polynomial equations of degree five or higher do not exist, the two functions $f_1$ and $f_2$ cannot be determined analytically.
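The implicit dependence of $\sigma$ on the parameters can be made concrete numerically. The following minimal sketch assumes the standard continuous Lyapunov form $A\sigma+\sigma A^T+D=0$ and uses a toy $2\times 2$ stable matrix, not our $6$- or $10$-dimensional dynamical matrices:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def steady_state_sigma(A, D):
    """Solve A sigma + sigma A^T + D = 0 for the steady-state
    correlation matrix (assumes A is Hurwitz stable)."""
    return solve_continuous_lyapunov(A, -D)

# Toy dynamical matrix depending on the parameters to be estimated;
# sigma then depends on (dq1, dq2) only implicitly, through A.
def A_of(dq1, dq2):
    return np.array([[-1.0, dq1],
                     [-dq2, -2.0]])

D = np.eye(2)                       # toy diffusion matrix
sigma = steady_state_sigma(A_of(0.1, 0.2), D)
```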
We have seen so far that the above two attempts fail analytically, and the complete-sufficient-statistic approach may work only with considerable numerical effort. Next, the maximum likelihood approach can be tried, provided the likelihood function $P({\bf k}; \delta q)$ can be maximized either analytically or numerically. The likelihood equations are \begin{equation}
\left. \frac{\partial \ln P({\bf k}; \delta q) }{\partial \delta q_i}\right |_{\delta q=\delta\tilde q({\bf k})}=0, \quad i\in\{1,2\}. \end{equation} These two equations differ only by a non-zero factor, and since the right-hand side is zero, we are left with a single equation to solve, \begin{equation}
R_\theta^T\sigma \Big(\delta \tilde{q}_1({\bf k}),\delta\tilde{q}_2({\bf k}) \Big) R_\theta=\frac{2}{N} \sum^N_{i=1} k^2_i-\frac{1-\eta}{2\eta}. \end{equation} This equation has to be solved numerically for a given sample ${\bf k}$, together with the second partial derivative test based on the Hessian matrix of $P({\bf k}; \delta q)$, which ensures that we have found the maximum of the likelihood function. This approach guarantees estimates that are asymptotically efficient, i.e., as $N \to \infty$. If one cannot succeed with the maximum likelihood approach, there is still the method of moments; however, these estimators are not optimal, and extracting the estimates of $\delta q_1$ and $\delta q_2$ out of $\sigma$ can only be done numerically.
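The numerical step can be sketched in one dimension. Below, a hypothetical monotone function `sigma_rr` stands in for $R_\theta^T\sigma(\delta q)R_\theta$ (the actual two-parameter solve proceeds analogously), with unit detector efficiency and synthetic Gaussian data drawn from the PDF of the main text:

```python
import numpy as np

def bisect(f, lo, hi, tol=1e-10):
    """Simple bisection root finder; a stand-in for the numerical
    solver the text calls for (assumes f changes sign on [lo, hi])."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical one-parameter stand-in for R^T sigma(q) R, monotone on the bracket
sigma_rr = lambda q: 0.5 + q ** 2
eta = 1.0                                              # unit detector efficiency
q_true = 1.0
r = 4 * eta / (1 - eta + 2 * eta * sigma_rr(q_true))   # PDF parameter

rng = np.random.default_rng(1)
k = rng.normal(0.0, 1.0 / np.sqrt(r), 10_000)          # samples with Var(k) = 1/r

# Likelihood equation: sigma_rr(q) = (2/N) sum k_i^2 - (1 - eta)/(2 eta)
rhs = 2.0 * np.mean(k ** 2) - (1 - eta) / (2 * eta)
q_hat = bisect(lambda q: sigma_rr(q) - rhs, 0.0, 10.0)
```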
\section{Results} \label{sec:IV}
In this section, we numerically investigate the norm distance $d$ between the CFIM and the QFIM for an experimentally feasible situation. In Sec. \ref{sec:III} we discussed the estimation strategy and argued that the estimators of the disorders can be found numerically from the measurement data. Therefore, we aim to minimize $d$ over the experimentally tunable parameters, so that the postprocessing of the measurement data results in estimators with variances close to the benchmark values defined by the QFIM. We analyze both the CC and TR models presented in Sec. \ref{sec:II}.
For our numerical analysis, we take the experimental values from \cite{Piergentili2018}, where the optomechanical interaction has been studied for different input powers of the driving field. The cavity intensity decay rate $\kappa/2 \pi$ was found to be $83$ kHz. We consider the CC model to possess equal decay rates $\kappa_1=\kappa_3=\kappa$ for the first and third inner cavities. Furthermore, we assume that the middle inner cavity decay rate satisfies $\kappa_2 \ll \kappa$: the two membranes may absorb photons or scatter them out of the cavity, but this loss is negligible compared to the photon leakage at the end mirrors. In the TR model, there is only one decay rate. Photodetectors are considered to stay on for a temporal window of length $\tau=1/\kappa$. The amplitude $\varepsilon$ of the driving field is equal to $\sqrt{2\kappa P/\hbar \omega_{\text{L}}}$, where $P$ is the power of the laser. The largest optomechanical coupling strength $g/2 \pi= 0.30$ Hz was obtained for low power, i.e., $P=130$ $\mu$W, with $\gamma/2 \pi=1.64$ Hz and $\omega_{\text{m}}/2 \pi=235.81$ MHz. Both membranes have the same masses $m=0.72$ ng and reflectivities $r=0.33$, and the experiment was performed at room temperature $T=300$ K. The low reflectivity of the membranes indicates that this experiment corresponds more closely to the TR model.
In order to address the CC model as well, we need to assume that the same experiment can be carried out with membranes of much larger reflectivity. In this context, the hopping rate $J$ which couples the modes of the CC model is obtained by requiring the three inner cavity mode amplitudes to be approximately the same. A driving laser applied from the left populates the mode of the left inner cavity, and without a sufficiently large hopping rate there is a risk that the mode of the right inner cavity remains poorly populated or empty, making the detection procedure impossible. A proper choice of the hopping constant, in our case $J\approx 200$ kHz, prevents this from happening. With the formula $J=\omega_c \sqrt{2(1-r)}$ \cite{Jayich2008}, we can find the corresponding reflectivity of the membrane, yielding $r\approx 1$, which makes the CC model suitable to describe the system.
Our aim is to investigate the CFIM and the QFIM around these experimental values. The CFIM also depends on the detector efficiency $\eta$ and the phase $\theta$ of the BHD. We assume $\eta=1$, as existing detectors are already close to ideal \cite{Daiss} and the destructive effects of non-ideal detection efficiency are known \cite{SanavioBernadXuereb2020}. Taking the inverse of the CFIM and investigating the diagonal elements, which are the lower bounds of the variances of the estimators $\delta \tilde{q}_1$ and $\delta \tilde{q}_2$ in this BHD scenario, one can understand the dependence on the phase $\theta$. We have found minimum values at $\tilde{\theta}^{(\text{CC})}=0$ and $\tilde{\theta}^{(\text{TR})}=\pi/2$, both with a period of $\pi$. These values are obtained numerically and depend strongly on the experimental values considered.
Once we have optimized for the detector's phase, we need to understand which central frequency $\Omega_l$ of the filter function gives the best accuracy for the estimation of the disorders. To this end, one calculates the inverse of the QFIM and investigates both diagonal elements of $\mathcal{H}^{-1}$, which are the smallest lower bounds of the variances of the estimators $\delta \tilde{q}_1$ and $\delta \tilde{q}_2$. Fig. \ref{fig:VarvsOmega} shows that the minimum variance is obtained at $\Omega_l=0$, i.e., at the frequency of the driving laser. This result is valid for both the CC and the TR models.
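Reading the bounds off the inverse matrix can be illustrated with a small sketch (hypothetical $2\times 2$ Fisher matrix, illustrative numbers only): the multiparameter Cram\'er-Rao bounds are the diagonal entries of the inverse, and they exceed the naive single-parameter bounds $1/\mathcal{F}_{ii}$ whenever the off-diagonal coupling is non-zero.

```python
import numpy as np

# Hypothetical 2x2 Fisher information matrix (illustrative numbers only)
F = np.array([[4.0, 1.0],
              [1.0, 2.0]])

# Multiparameter Cramer-Rao bounds: the diagonal of the inverse matrix
bounds = np.diag(np.linalg.inv(F))

# Naive single-parameter bounds that ignore the off-diagonal coupling
naive = 1.0 / np.diag(F)
```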
\begin{figure}
\caption{Semi-logarithmic plot of quantum and classical lower bounds of the variance $Var(\delta q_1)$ expressed in $\text{m}^2$ as a function of $\Omega_l/\omega_m$. a) CC model. b) TR model. The experimental values are taken from \cite{Piergentili2018}.}
\label{fig:VarvsOmega}
\end{figure}
Besides this similarity, the two models do not share the same features. Whereas for the TR model the BHD appears to be an optimal measurement, as the classical and quantum lower bounds of the variance coincide, for the CC model this measurement scenario is far from saturating inequality \eqref{eq:inequality}: the diagonal components of the inverses of the classical and quantum Fisher information matrices differ by many orders of magnitude. However, it is worth noticing that, when the CC model is considered, BHD is still able to offer estimates of the disorders in the positions of the two membranes with extreme accuracy, i.e.,
$Var(\delta\tilde{q}_1)$ and $Var(\delta\tilde{q}_2)$ $\sim10^{-10}-10^{-16}\, \text{m}^2$.
Fig.~\ref{fig:dvsOmega} shows the distance $d$ in trace norm as a function of the filter frequency $\Omega_l$. For the CC model, the CFIM and the QFIM are far from each other, as $d$ is very large, suggesting that BHD is not the optimal measurement. However, we can be relieved by the fact that the variances at $\Omega_l=0$ are very small (see Fig.~\ref{fig:VarvsOmega}). This is different in the TR model, where under optimal conditions ($\eta=1,\theta=\tilde{\theta}^{(\text{TR})}$) we have found that $d$ goes to zero when $\Omega_l=0$. We notice that $d$ is very small also for other values of $\Omega_l$, but at those points the lower bound of the variance is larger (see Fig.~\ref{fig:VarvsOmega}). This results in a poor estimation of the membrane position, with an uncertainty larger than the size of the cavity itself. This condition can easily be overcome by taking a sufficiently large number $N$ of identical and independent measurements, which ultimately decreases the lower bound by a factor of $1/N$. Our analysis shows that in the TR model little information about the position of the membranes is contained in the state of the output field. Therefore, one has to tune the system parameters such that both $d$ and the lower bound of the variance get close to a minimum.
\begin{center} \begin{figure}
\caption{Semi-logarithmic plot of distance $d$ in trace norm as a function of $\Omega_l/\omega_m$. a) CC model. b) TR model. The experimental values are taken from \cite{Piergentili2018}.}
\label{fig:dvsOmega}
\end{figure} \end{center}
The true values of the disorders largely modify the value of $d$. In Fig.~\ref{fig:dvsdq1} we plot the resulting distance $d$ in trace norm for the TR model, calculated as a function of $\delta{q}_1$ while keeping $\delta{q}_2=0$. For a possible value of the disorder $\delta{q}_1\sim0.5$ $\mu$m, the distance between CFIM and QFIM is further reduced and the estimation gets closer to optimal. Analogous results are obtained when we keep $\delta{q}_1$ fixed and vary the disorder of the other membrane. We notice that $d$ has the same period as the cavity frequency expressed in Eq.~\eqref{eq:piergentiliformula}, and its minima are reached when $\omega_c$ is at a maximum. Fig. \ref{fig:HvsT} shows how the lower bounds of the variances decrease with increasing temperature. The reason for this unexpected result lies in the noise matrix $D$ of Eq. \eqref{eq:DMatrix}, from which we derive the correlation matrix $\sigma$, which has terms proportional to $T$. The off-diagonal component of the inverse matrix $\mathcal{H}^{-1}$ decreases, as the increase of temperature lowers the correlations between the two membranes.
\begin{figure}
\caption{a) Semi-logarithmic plots of distance $d$ in trace norm as a function of the disorder $\delta{q}_1$. b) The cavity frequency obtained by varying the prepared position of the first membrane with the disorder $\delta{q}_1$ and keeping $\delta{q}_2=0$. In b) the horizontal line refers to the cavity natural frequency in absence of membranes. Both figures belong to the TR model.}
\label{fig:dvsdq1}
\end{figure}
\begin{figure}
\caption{Log-log plots of the quantum lower bounds of the variances expressed in $\text{m}^2$ as a function of the temperature $T$. a) CC model. b) TR model. $H^{-1}_{12}$ gives information on the correlation of the data used for estimating the two disorders $\delta{q}_1$ and $\delta{q}_2$.}
\label{fig:HvsT}
\end{figure}
Finally, we consider only the TR model and check the results when we change the reflectivity of the two membranes. Whereas the CC model is defined only for high-reflectivity ($r\approx 1$) membranes, the TR model can be used for any value of $r$. Fig. \ref{fig:TRtoCC} shows that the lower bounds of the variance increase with the reflectivity $r$ of the membranes; we found that they vary as the inverse of $\bar{n}^2=|\alpha|^4$, the squared mean photon number in the cavity. The high reflectivity screens the radiation from passing through the membranes and lowers the rate of photons leaving the cavity, which results in an increased lower bound of the variance.
\begin{figure}
\caption{a) Log-log plot of the quantum lower bounds of the variances in $\text{m}^2$ as a function of the reflectivity $r$. b) Log-log plot of the mean photon number $\bar{n}$ in the cavity as a function of the reflectivity $r$. Both figures belong to the TR model.}
\label{fig:TRtoCC}
\end{figure}
\section{Conclusions and Discussions} \label{sec:V}
In this paper, we have investigated an optomechanical setup with a driven cavity containing two oscillating membranes. We have considered two possible theoretical models for the description of this system. The CC model focuses on a case where three coupled single modes of the electromagnetic field are present in the inner cavities defined by the two membranes and the mirrors of the cavity. In the TR model, it is assumed that a global single mode of the radiation field is present in the whole cavity. The range of applicability of these models strongly depends on the reflectivity of the membranes. Our models also account for high-temperature quantum Brownian motion of the membranes, photon losses of the cavity fields, and the input-output formalism for the description of the output field escaping the cavity. In typical cavity optomechanical experiments, the estimation of parameters like the optomechanical coupling is done by detecting the light transmitted by the cavity, which is similar to the theoretical approach presented here. Within the CC and TR models, we have considered estimations of disorders in the positions of the membranes. For these estimations, the data is obtained via BHD of the escaping field, and thus the estimators mapping the data into estimates of the disorders have accuracies related to the CFIM. The quantum optimal accuracies are obtained from the QFIM. Without addressing in detail the attainability of the CFIM-related bounds by specific unbiased estimators, we have focused, from a purely theoretical point of view, on the attainability of the QFIM by the CFIM. It is indeed true that most estimators, even during classical postprocessing of data, are unable to attain the Cram\'er-Rao bound \cite{Kay}, but there is still a well-understood decision-making process in estimator selection. In this view, our analysis serves the purpose of characterizing the chosen measurement setup for a certain estimation task, here the estimation of the disorders.
A comparison of the CFIM and the QFIM shows that the phase $\theta$ of the local oscillator in the BHD has to take specific values for an optimal estimation. In the unit detector efficiency limit we have obtained $\theta=0$, i.e., measuring the distribution of the quadrature $X$ of the output field, for the CC model, and $\theta=\pi/2$, i.e., measuring the distribution of the quadrature $P$ of the output field, for the TR model. This marked contrast could be an important help for experimental setups with different reflectivities of the membranes. With respect to the frequency of the filtering function used in the input-output relations, the two models deliver different optimal frequencies for the distance between QFIM and CFIM. However, it still seems that a good enough accuracy can be obtained when the frequency of the filter function matches the frequency of the driving laser. Interestingly, for certain values of the parameters the CFIM may saturate the QFIM while the related accuracies of the estimation are nevertheless very poor. Whenever this is the case we have indicated it, because we need not only saturation, but we also have to make sure that the related estimation precisions are good enough. This applies also to the effects of temperature, where the distance between CFIM and QFIM increases with increasing temperature, a predictable behavior of the system. However, the accuracies of the estimators improve with the increase in temperature. This means that warmer baths of the membranes result in better precisions, an effect similar to that found in Ref. \cite{Sala}; on the other hand, reaching the quantum optimal limit becomes more and more distant.
We have seen very different results depending on whether the considered model is the CC or the TR model. The two scenarios have shown lower bounds of the variances of the estimators on very different scales. Our choice for the hopping rate $J$ has led the mean photon numbers in the three cavities of the CC model to be very different from the optical amplitude of the delocalized mode of the TR model. Furthermore, the two models are characterized by different expressions for the cavity frequency and consequently for the optomechanical coupling strength. It is worth noticing that the CC model offers a better description of the physics of some optomechanical lattice systems \cite{Schmidt,SanavioPeano}, whereas the TR model is more suited when one optical mode is coupled to multiple membranes. We believe that our implementable theoretical approach may serve the aim of realizing enhanced optomechanical performance \cite{Matheny,Li,Weaver}, the main objective of current experimental efforts.
Given the models considered here and in our previous work \cite{SanavioBernadXuereb2020}, we can conclude that the probability density function of the data is always Gaussian, with a variance that depends only on the parameters to be estimated, but unfortunately in a complicated manner. In this paper we have described several approaches, which suggest numerical strategies for finding minimum-variance unbiased estimators. Therefore, a future goal may be to address this estimator selection issue analytically for this family of probability density functions.
\section{Acknowledgments}
This work is supported by the European Union's Horizon 2020 for research and innovation programme under Grant Agreement No.\ 732894 (FET Proactive HOT) and by the DFG under Germany's Excellence Strategy -- Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 -- 390534769. CMS is funded by the International Foundation of Big Data and Artificial Intelligence for Human Development within the project ``Quantum computing for data analysis''.
\appendix \begin{widetext}
\section{Steady-state amplitudes and the dynamical matrix in the CC model} \label{app:steadystateamplitude}
In the Heisenberg picture, the Hamiltonian in \eqref{eq:CCHamiltonian} together with the dissipative dynamics explained in \ref{sec:IIc} yield \begin{eqnarray} \dot{\hat{a}}_1&=&-i\Delta_0 \hat{a}_1-ig\hat{a}_1\hat{q}_1+\varepsilon-iJ\hat{a}_2-\frac{\kappa_1}{2}\hat{a}_1+\sqrt{\kappa_1}\hat{a}_{\text{in},1},\\ \dot{\hat{a}}_2&=&-i\Delta_0 \hat{a}_2-ig\hat{a}_2 (\hat{q}_2-\hat{q}_1)-iJ\left(\hat{a}_1+\hat{a}_3\right)-\frac{\kappa_2}{2}\hat{a}_2+\sqrt{\kappa_2}\hat{a}_{\text{in},2},\\ \dot{\hat{a}}_3&=&-i\Delta_0 \hat{a}_3+ig\hat{a}_3\hat{q}_2-iJ\hat{a}_2-\frac{\kappa_3}{2}\hat{a}_3+\sqrt{\kappa_3}\hat{a}_{\text{in},3},\\ \dot{\hat{p}}_1&=&-m\omega_{\text{m}}^2\hat{q}_1-\gamma\hat{p}_1-\hbar g \left(\hat{a}^\dagger_1\hat{a}_1-\hat{a}^\dagger_2\hat{a}_2\right)+\hat{\xi}, \\ \dot{\hat{p}}_2&=&-m\omega_{\text{m}}^2\hat{q}_2-\gamma\hat{p}_2-\hbar g \left(\hat{a}^\dagger_2\hat{a}_2-\hat{a}^\dagger_3\hat{a}_3\right)+\hat{\xi}, \\ \dot{\hat{q}}_1&=&\frac{\hat{p}_1}{m}, \quad \dot{\hat{q}}_2=\frac{\hat{p}_2}{m}, \end{eqnarray} and the dynamics of the hermitian conjugates of $\hat{a}_1$, $\hat{a}_2$, and $\hat{a}_3$. We introduce the following transformations $\hat{p}_i\to p_{i}+\hat{p}_i$, $\hat{q}_i\to q_{i}+\hat{q}_i$, and $\hat{a}_j\to\alpha_j+\hat{a}_j$, which can also be viewed as an application of different displacement operators to the master equation. In this case, one has to consider the dissipation of the field modes to be governed by the optical master equation~\cite{BreuerPetruccioni2002}, whereas the membranes follow the Caldeira-Leggett master equation~\cite{CaldeiraLeggett1981}. In the steady-state, we obtain the following system of equations \begin{eqnarray} 0&=&-i\Delta_0 \alpha_1-ig\alpha_1 q_1+\varepsilon-iJ\alpha_2-\frac{\kappa_1}{2}\alpha_1,\\ 0&=&-i\Delta_0 \alpha_2-ig\alpha_2 (q_2-q_1)-iJ\left(\alpha_1+\alpha_3\right)-\frac{\kappa_2}{2}\alpha_2,\\ 0&=&-i\Delta_0 \alpha_3+ig\alpha_3 q_2-iJ\alpha_2-\frac{\kappa_3}{2}\alpha_3,\\
0&=&-m\omega_{\text{m}}^2 q_1-\gamma p_1-\hbar g \left(|\alpha_1|^2-|\alpha_2|^2\right), \\
0&=&-m\omega_{\text{m}}^2 q_2-\gamma p_2-\hbar g \left(|\alpha_2|^2-|\alpha_3|^2\right), \\ 0&=&\frac{p_1}{m}, \quad 0=\frac{p_2}{m}. \end{eqnarray} It is immediate that \begin{eqnarray}
q_1&=&\frac{\hbar g}{m\omega_m^2}\left(|\alpha_2|^2-|\alpha_1|^2\right), \quad p_1=0, \\
q_2&=&\frac{\hbar g}{m\omega_m^2}\left(|\alpha_3|^2-|\alpha_2|^2\right), \quad p_2=0. \end{eqnarray} We can only find numerical solutions for the amplitudes $\alpha_1$, $\alpha_2$, and $\alpha_3$. Next, we introduce the quadratures $\hat X=(\hat a^\dagger + \hat a)/\sqrt{2}$ and $\hat Y=i(\hat a^\dagger - \hat a)/\sqrt{2}$ of the field operators. Then, we have the dynamical matrix \begin{equation}
A^{(CC)}=\begin{pmatrix}
A_{11} & A_{12} \\
A_{21} & A_{22}
\end{pmatrix}, \end{equation} acting on the vector of operators $(\hat{X}_1,\hat{Y}_1, \hat{X}_2, \hat{Y}_2,\hat{X}_3,\hat{Y}_3,\hat{p}_1,\hat{p}_2,\hat{q}_1,\hat{q}_2)^T$ with \begin{eqnarray}
A_{11}&=& \begin{pmatrix}
-\frac{\kappa_1}{2} & \Delta_1 & 0 & J & 0 & 0 \\
-\Delta_1 & -\frac{\kappa_1}{2} & -J & 0 & 0 & 0 \\
0 & J & -\frac{\kappa_2}{2} & \Delta_2 & 0 & J \\
-J & 0 & -\Delta_2 & -\frac{\kappa_2}{2} & -J & 0 \\
0 & 0 & 0 & J & -\frac{\kappa_3}{2} & \Delta_3 \\
0 & 0 & -J & 0 & -\Delta_3 & -\frac{\kappa_3}{2}
\end{pmatrix}, \\
\Delta_1&=&\Delta_0+g q_1, \quad \Delta_2=\Delta_0+g \left(q_2-q_1\right), \quad \Delta_3=\Delta_0-g q_2, \\
A_{12}&=& \begin{pmatrix}
0 & 0 & \sqrt{2}g\text{Im}[\alpha_1] & 0 \\
0 & 0 & -\sqrt{2}g\text{Re}[\alpha_1] & 0 \\
0 & 0 & -\sqrt{2}g\text{Im}[\alpha_2] & \sqrt{2}g\text{Im}[\alpha_2] \\
0 & 0 & \sqrt{2}g\text{Re}[\alpha_2] & -\sqrt{2}g\text{Re}[\alpha_2] \\
0 & 0 & 0 & -\sqrt{2}g\text{Im}[\alpha_3] \\
0 & 0 & 0 & \sqrt{2}g\text{Re}[\alpha_3]
\end{pmatrix}, \\
A_{21}&=& \begin{pmatrix}
-\sqrt{2}\hbar g\text{Re}[\alpha_1] & -\sqrt{2}\hbar g\text{Im}[\alpha_1] & \sqrt{2}\hbar g\text{Re}[\alpha_2] & \sqrt{2}\hbar g\text{Im}[\alpha_2] & 0 & 0 \\
0 & 0 & -\sqrt{2}\hbar g\text{Re}[\alpha_2] & -\sqrt{2}\hbar g\text{Im}[\alpha_2] & \sqrt{2}\hbar g\text{Re}[\alpha_3] & \sqrt{2}\hbar g\text{Im}[\alpha_3] \\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}, \\ A_{22}&=& \begin{pmatrix}
-\gamma & 0 & -m \omega_m^2 & 0 \\
0& -\gamma & 0 & -m \omega_m^2 \\
\frac{1}{m} & 0 & 0 & 0 \\
0& \frac{1}{m} & 0 & 0
\end{pmatrix}. \end{eqnarray} One has to analyze the stability of the dynamical matrix, checking that each eigenvalue of $A^{(CC)}$ has a negative real part. This condition is necessary to express the steady-state as a Gaussian state. In our numerical simulations this condition is always satisfied.
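This stability check reduces to an eigenvalue computation. A short sketch (small illustrative matrices, not the $10\times 10$ matrix $A^{(CC)}$ itself):

```python
import numpy as np

def is_hurwitz_stable(A):
    """True if every eigenvalue of A has a strictly negative real part,
    which is required for a well-defined Gaussian steady state."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# Eigenvalues of a triangular matrix are its diagonal entries
stable   = is_hurwitz_stable(np.array([[-1.0, 5.0], [0.0, -2.0]]))
unstable = is_hurwitz_stable(np.array([[ 0.1, 0.0], [0.0, -1.0]]))
```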
\section{Steady-state amplitudes and the dynamical matrix in the TR model} \label{app:BCCmodel}
The linearization of the dynamics involving Hamiltonian in \eqref{eq:HamiltonianOneMode} follows the same principles we saw for the CC model in Appendix \ref{app:steadystateamplitude}. Nevertheless, the corresponding equations are different as only one mode interacts with the mechanical oscillation of the membranes. In the Heisenberg picture, the resulting differential equations are \begin{eqnarray} \dot{\hat{a}}&=&-i\Delta_0 \hat{a}-i\hat{a}\left(g_1\hat{q}_1+g_2\hat{q}_2\right)+\varepsilon-\frac{\kappa}{2}\hat{a}+\sqrt{\kappa}\hat{a}_{\text{in}},\\ \dot{\hat{p}}_1&=&-m\omega_{\text{m}}^2\hat{q}_1-\gamma\hat{p}_1-\hbar g_1 \hat{a}^\dagger \hat{a}+\hat{\xi}, \\ \dot{\hat{p}}_2&=&-m\omega_{\text{m}}^2\hat{q}_2-\gamma\hat{p}_2-\hbar g_2 \hat{a}^\dagger \hat{a}+\hat{\xi}, \\ \dot{\hat{q}}_1&=&\frac{\hat{p}_1}{m}, \quad \dot{\hat{q}}_2=\frac{\hat{p}_2}{m}, \end{eqnarray} and the dynamics of the hermitian conjugates of $\hat{a}$. In the steady-state, after performing the transformations shown in Appendix \ref{app:steadystateamplitude} we obtain the following system of equations \begin{eqnarray} 0&=&-i\Delta_0 \alpha-i\alpha \left(g_1 q_1+g_2 q_2\right)+\varepsilon-\frac{\kappa}{2}\alpha,\\
0&=&-m\omega_{\text{m}}^2 q_1-\gamma p_1-\hbar g_1 |\alpha|^2, \\
0&=&-m\omega_{\text{m}}^2 q_2-\gamma p_2-\hbar g_2 |\alpha|^2, \\ 0&=&\frac{p_1}{m}, \quad 0=\frac{p_2}{m}. \end{eqnarray} Then, we have \begin{eqnarray}
q_1&=&\frac{\hbar g_1}{m\omega_m^2} |\alpha|^2, \quad q_2=\frac{\hbar g_2}{m\omega_m^2} |\alpha|^2, \quad p_1=p_2=0, \\
\varepsilon&=& \alpha \left(i\Delta_0 +i \hbar\frac{g^2_1+g^2_2}{m\omega_m^2}|\alpha|^2 -\frac{\kappa}{2}\right), \end{eqnarray} which can be solved analytically to obtain $q_1$, $q_2$, and $\alpha$. Then, we have the dynamical matrix \begin{equation} A^{(TR)}=\begin{pmatrix}
-\frac{\kappa}{2} & \Delta & 0 & 0 &\sqrt{2}g_1\text{Im}[\alpha] & \sqrt{2}g_2\text{Im}[\alpha] \\
-\Delta & -\frac{\kappa}{2} & 0 & 0 & -\sqrt{2}g_1\text{Re}[\alpha] & -\sqrt{2}g_2\text{Re}[\alpha] \\
-\sqrt{2}\hbar g_1\text{Re}[\alpha] & -\sqrt{2}\hbar g_1\text{Im}[\alpha] & -\gamma & 0
& -m \omega_m^2 & 0 \\
-\sqrt{2}\hbar g_2\text{Re}[\alpha] &
-\sqrt{2}\hbar g_2\text{Im}[\alpha] & 0 & -\gamma & 0 & -m
\omega_m^2 \\
0 & 0 & \frac{1}{m} & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{1}{m} & 0 & 0 \end{pmatrix} \end{equation}
acting on the vector of operators $(\hat{X},\hat{Y},\hat{p}_1,\hat{p}_2,\hat{q}_1,\hat{q}_2)^T$ with $\Delta=\Delta_0+\hbar\frac{g^2_1+g^2_2}{m\omega_m^2}|\alpha|^2$. In our numerical simulations the stability of $A^{(TR)}$ is always satisfied.
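Taking the modulus squared of the steady-state relation for $\varepsilon$ gives $|\varepsilon|^2=n\left[(\Delta_0+Cn)^2+\kappa^2/4\right]$ with $n=|\alpha|^2$ and $C=\hbar(g_1^2+g_2^2)/(m\omega_m^2)$, a cubic equation in $n$. A sketch of solving it, with placeholder parameter values in arbitrary units rather than the experimental ones:

```python
import numpy as np

# Placeholder parameters in arbitrary units (not the experimental values)
Delta0, kappa, C, eps = -1.0, 0.5, 0.2, 1.0

# |eps|^2 = n[(Delta0 + C n)^2 + kappa^2/4], n = |alpha|^2, rewritten as
# C^2 n^3 + 2 Delta0 C n^2 + (Delta0^2 + kappa^2/4) n - |eps|^2 = 0
coeffs = [C**2, 2 * Delta0 * C, Delta0**2 + kappa**2 / 4, -abs(eps)**2]
roots = np.roots(coeffs)

# Keep the physical solutions: real and positive photon numbers
n_phys = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
```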
\section{Weyl transform of the SLD} \label{app:WeylTransform}
In the main text, we have used the phase space formalism which relies on the Weyl transform \cite{Schleich}, a map from bounded operators to functions on the phase space. The Weyl transform $A(x,y)$ of an operator $\hat{A}$ is defined by \begin{equation}\label{eq:WeylTransform}
A(x,y)=\int d\xi e^{-iy\xi}\langle x+\frac{\xi}{2}|\hat{A}|x-\frac{\xi}{2}\rangle. \end{equation}
This approach is very useful for the calculation of the QFI for a Gaussian state \cite{SanavioBernadXuereb2020}, where the Weyl transform or Wigner function of a density operator $\hat{\rho}$ is a Gaussian function. The SLD operator $\hat{\mathcal{L}}_i$ satisfies the relation \eqref{eq:SLDrelation}, and for a Gaussian state its Weyl transform corresponds to the expression in Eq.~\eqref{eq:WeylSLD}, or explicitly
\begin{eqnarray} L^i(x,y) &=& \Phi^i_{11}x^2+\Phi^i_{22}y^2+2\Phi^i_{12}xy-\nu^i. \end{eqnarray}
The inverse transformation of this function yields the following operator \begin{eqnarray} \hat{\mathcal{L}}_i &=& \Phi^i_{11}\hat{x}^2+\Phi^i_{22}\hat{y}^2+\Phi^i_{12}(\hat{x}\hat{y}+\hat{y}\hat{x})-\nu^i\mathds{1}, \end{eqnarray} where one has to use the Weyl-ordering.
Now, the Weyl transform~\eqref{eq:WeylTransform} is applied to the operator $\hat{\mathcal{L}}_i\hat{\mathcal{L}}_j$, yielding
\begin{eqnarray} L^{(2)}_{ij}(x,y)&=&\Phi_{11}^i\Phi_{11}^jx^4+2(\Phi_{11}^i\Phi_{12}^j+\Phi_{11}^j\Phi_{12}^i)x^3y+(\Phi_{11}^i\Phi_{22}^j+\Phi_{11}^j\Phi_{22}^i+4\Phi_{12}^i\Phi_{12}^j)x^2y^2 \nonumber\\ &+&2(\Phi_{22}^i\Phi_{12}^j+\Phi_{22}^j\Phi_{12}^i)xy^3+\Phi_{22}^i\Phi_{22}^jy^4-(\Phi_{11}^i\nu^j+\Phi_{11}^j\nu^i)x^2-(\Phi_{22}^i\nu^j+\Phi_{22}^j\nu^i)y^2\nonumber\\ &-&\frac{1}{2}(\Phi_{11}^i\Phi_{22}^j+\Phi_{22}^i\Phi_{11}^j-2\Phi_{12}^i\Phi_{12}^j)+\nu^i\nu^j. \end{eqnarray}
The QFI matrix entries are the mean values of the above functions with $i,j \in \{1,2\}$, which are calculated by integrating them with the Gaussian Wigner function $W(x,y)$. This leads to Eq.~\eqref{eq:QFIcomponentsWT}.
\end{widetext}
\begin{filecontents}{biblio.bib}
@book{Kaipio, title={Statistical and Computational Inverse Problems}, author={Kaipio, J. and Somersalo, E.}, series={Applied Mathematical Sciences, Vol. 1}, year={2005}, publisher={Springer-Verlag, New York} }
@book{Helstrom, title={Quantum Detection and Estimation Theory}, author={Helstrom, C. W.}, year={1976}, publisher={Academic Press, New York} }
@book{Holevo, title={Probabilistic and Statistical Aspects of Quantum Theory}, author={Holevo, A.}, year={2011}, publisher={Edizioni della Normale, Pisa} }
@book{Wiseman, title={Quantum Measurement and Control}, author={Wiseman, H. M. and Milburn, G. J.}, year={2010}, publisher={Cambridge University Press, Cambridge, UK} }
@article{KBell, author={Bell, K. L. and Steinberg, Y. and Ephraim, Y. and Van Trees, H. L.}, journal={IEEE Transactions on Information Theory}, volume={43}, pages={624}, numpages={13}, year={1997}}
@article{Helstrom68, author={Helstrom, C. W.}, journal={IEEE Transactions on Information Theory}, volume={14}, pages={234}, year={1968}}
@book{Petz, title={Quantum Information Theory and Quantum Statistics}, author={Petz, D.}, series={Theoretical and Mathematical Physics}, year={2008}, publisher={Springer, Berlin} }
@book{vanTrees, title={Detection, Estimation, and Modulation Theory, Part I.}, author={Van Trees, H. L.}, year={2001}, publisher={John Wiley and Sons, New York} }
@article{Chow1986, author={ {Weng Chow}}, journal={IEEE Journal of Quantum Electronics}, title={A composite-resonator mode description of coupled lasers}, year={1986}, volume={22}, number={8}, pages={1174-1183}, doi={10.1109/JQE.1986.1073104}}
@article{BhattacharyaMeystre2007,
title = {Trapping and Cooling a Mirror to Its Quantum Mechanical Ground State},
author = {Bhattacharya, M. and Meystre, P.},
journal = {Phys. Rev. Lett.},
volume = {99},
issue = {7},
pages = {073601},
numpages = {4},
year = {2007},
month = {Aug},
publisher = {American Physical Society},
doi = {10.1103/PhysRevLett.99.073601},
url = {https://link.aps.org/doi/10.1103/PhysRevLett.99.073601} }
@article{BhattacharyaUysMeystre2008,
title = {Optomechanical trapping and cooling of partially reflective mirrors},
author = {Bhattacharya, M. and Uys, H. and Meystre, P.},
journal = {Phys. Rev. A},
volume = {77},
issue = {3},
pages = {033819},
numpages = {12},
year = {2008},
month = {Mar},
publisher = {American Physical Society},
doi = {10.1103/PhysRevA.77.033819},
url = {https://link.aps.org/doi/10.1103/PhysRevA.77.033819} }
@article{Jayich2008,
doi = {10.1088/1367-2630/10/9/095008},
url = {https://doi.org/10.1088/1367-2630/10/9/095008},
year = 2008,
month = {sep},
publisher = {{IOP} Publishing},
volume = {10},
number = {9},
pages = {095008},
author = {A M Jayich and J C Sankey and B M Zwickl and C Yang and J D Thompson and S M Girvin and A A Clerk and F Marquardt and J G E Harris},
title = {Dispersive optomechanics: a membrane inside a cavity},
journal = {New Journal of Physics},
abstract = {We present the results of theoretical and experimental studies of dispersively coupled (or ‘membrane in the middle’) optomechanical systems. We calculate the linear optical properties of a high finesse cavity containing a thin dielectric membrane. We focus on the cavity's transmission, reflection and finesse as a function of the membrane's position along the cavity axis and as a function of its optical loss. We compare these calculations with measurements and find excellent agreement in cavities with empty-cavity finesses in the range $10^4$--$10^5$. The imaginary part of the membrane's index of refraction is found to be $\sim 10^{-4}$. We calculate the laser cooling performance of this system, with a particular focus on the less-intuitive regime in which photons ‘tunnel’ through the membrane on a timescale comparable to the membrane's period of oscillation. Lastly, we present calculations of quantum non-demolition measurements of the membrane's phonon number in the low signal-to-noise regime where the phonon lifetime is comparable to the QND readout time.} }
@article{BhattacharyaMeystre2008,
title = {Multiple membrane cavity optomechanics},
author = {Bhattacharya, M. and Meystre, P.},
journal = {Phys. Rev. A},
volume = {78},
issue = {4},
pages = {041801},
numpages = {4},
year = {2008},
month = {Oct},
publisher = {American Physical Society},
doi = {10.1103/PhysRevA.78.041801},
url = {https://link.aps.org/doi/10.1103/PhysRevA.78.041801} }
@article{XuerebGenesDantan2012,
title = {Strong Coupling and Long-Range Collective Interactions in Optomechanical Arrays},
author = {Xuereb, Andr\'e and Genes, Claudiu and Dantan, Aur\'elien},
journal = {Phys. Rev. Lett.},
volume = {109},
issue = {22},
pages = {223601},
numpages = {5},
year = {2012},
month = {Nov},
publisher = {American Physical Society},
doi = {10.1103/PhysRevLett.109.223601},
url = {https://link.aps.org/doi/10.1103/PhysRevLett.109.223601} }
@article{Rabl2011,
title = {Photon Blockade Effect in Optomechanical Systems},
author = {Rabl, P.},
journal = {Phys. Rev. Lett.},
volume = {107},
issue = {6},
pages = {063601},
numpages = {5},
year = {2011},
month = {Aug},
publisher = {American Physical Society},
doi = {10.1103/PhysRevLett.107.063601},
url = {https://link.aps.org/doi/10.1103/PhysRevLett.107.063601} }
@article{LiXuerebMalossiVitali2016,
doi = {10.1088/2040-8978/18/8/084001},
url = {https://doi.org/10.1088/2040-8978/18/8/084001},
year = 2016,
month = {jun},
publisher = {{IOP} Publishing},
volume = {18},
number = {8},
pages = {084001},
author = {Jie Li and Andr{\'{e}} Xuereb and Nicola Malossi and David Vitali},
title = {Cavity mode frequencies and strong optomechanical coupling in two-membrane cavity optomechanics},
journal = {Journal of Optics},
abstract = {We study the cavity mode frequencies of a Fabry–Pérot cavity containing two vibrating dielectric membranes. We derive the equations for the mode resonances and provide approximate analytical solutions for them as a function of the membrane positions, which act as an excellent approximation when the relative and center-of-mass position of the two membranes are much smaller than the cavity length. With these analytical solutions, one finds that extremely large optomechanical coupling of the membrane relative motion can be achieved in the limit of highly reflective membranes when the two membranes are placed very close to a resonance of the inner cavity formed by them. We also study the cavity finesse of the system and verify that, under the conditions of large coupling, it is not appreciably affected by the presence of the two membranes. The achievable large values of the ratio between the optomechanical coupling and the cavity decay rate make this two-membrane system the simplest promising platform for implementing cavity optomechanics in the strong coupling regime.} }
@article{Marshall, author={Marshall, W. and Simon, C. and Penrose, R. and Bouwmeester, D.}, journal = {Phys. Rev. Lett.}, volume = {91}, pages = {130401}, year = {2003} }
@article{Kippenberg, author={Kippenberg, T. J. and Vahala, K. J.}, journal = {Science}, volume = {321}, pages = {1172}, year = {2008} }
@article{Abbott, author={Abbott, B. P. and others},
collaboration = {LIGO Scientific Collaboration and Virgo Collaboration},
journal = {Phys. Rev. Lett.},
volume = {116},
issue = {6},
pages = {061102},
numpages = {16},
year = {2016} }
@article{Braginsky, author={Braginsky, V. B. and Vorontsov, Yu. I.}, journal = {Sov. Phys. Usp.}, volume = {17}, pages = {644}, year = {1976} }
@article{Caves, author={Caves, C. M.}, journal = {Phys. Rev. D}, volume = {23}, pages = {1693}, year = {1981} }
@article{Piergentili2018,
doi = {10.1088/1367-2630/aad85f},
url = {https://doi.org/10.1088/1367-2630/aad85f},
year = 2018,
month = {aug},
publisher = {{IOP} Publishing},
volume = {20},
number = {8},
pages = {083024},
author = {Paolo Piergentili and Letizia Catalini and Mateusz Bawaj and Stefano Zippilli and Nicola Malossi and Riccardo Natali and David Vitali and Giovanni Di Giuseppe},
title = {Two-membrane cavity optomechanics},
journal = {New Journal of Physics},
abstract = {We study the optomechanical behaviour of a driven Fabry–Pérot cavity containing two vibrating dielectric membranes. We characterize the cavity mode frequency shift as a
function of the two-membrane positions, and report a ∼2.47 gain in the optomechanical coupling strength of the membrane relative motion with respect to the single membrane case. This is achieved when the two membranes are properly positioned to form an inner cavity which is resonant with the driving field.
We also show that this two-membrane system has the capability to tune the single-photon optomechanical coupling on demand, and represents a promising platform for implementing cavity optomechanics with distinct oscillators. Such a configuration has the potential to enable cavity optomechanics in the strong single-photon coupling regime, and to study synchronization in optically linked mechanical resonators.} }
@article{Wei2019,
title = {Controllable two-membrane-in-the-middle cavity optomechanical system},
author = {Wei, Xinrui and Sheng, Jiteng and Yang, Cheng and Wu, Yuelong and Wu, Haibin},
journal = {Phys. Rev. A},
volume = {99},
issue = {2},
pages = {023851},
numpages = {5},
year = {2019},
month = {Feb},
publisher = {American Physical Society},
doi = {10.1103/PhysRevA.99.023851},
url = {https://link.aps.org/doi/10.1103/PhysRevA.99.023851} }
@article{SanavioBernadXuereb2020,
title = {Fisher-information-based estimation of optomechanical coupling strengths},
author = {Sanavio, Claudio and Bern\'ad, J\'ozsef Zsolt and Xuereb, Andr\'e},
journal = {Phys. Rev. A},
volume = {102},
issue = {1},
pages = {013508},
numpages = {10},
year = {2020},
month = {Jul},
publisher = {American Physical Society},
doi = {10.1103/PhysRevA.102.013508},
url = {https://link.aps.org/doi/10.1103/PhysRevA.102.013508} }
@article{Gardiner1985,
title = {Input and output in damped quantum systems: Quantum stochastic differential equations and the master equation},
author = {Gardiner, C. W. and Collett, M. J.},
journal = {Phys. Rev. A},
volume = {31},
issue = {6},
pages = {3761--3774},
numpages = {0},
year = {1985},
month = {Jun},
publisher = {American Physical Society},
doi = {10.1103/PhysRevA.31.3761},
url = {https://link.aps.org/doi/10.1103/PhysRevA.31.3761} }
@article{Matheny,
title = {Phase Synchronization of Two Anharmonic Nanomechanical Oscillators},
author = {Matheny, Matthew H. and Grau, Matt and Villanueva, Luis G. and Karabalin, Rassul B. and Cross, M. C. and Roukes, Michael L.},
journal = {Phys. Rev. Lett.},
volume = {112},
issue = {1},
pages = {014101},
numpages = {5},
year = {2014},
month = {Jan},
publisher = {American Physical Society},
doi = {10.1103/PhysRevLett.112.014101},
url = {https://link.aps.org/doi/10.1103/PhysRevLett.112.014101} }
@article{Li,
title = {Enhanced entanglement of two different mechanical resonators via coherent feedback},
author = {Li, Jie and Li, Gang and Zippilli, Stefano and Vitali, David and Zhang, Tiancai},
journal = {Phys. Rev. A},
volume = {95},
issue = {4},
pages = {043819},
numpages = {8},
year = {2017},
month = {Apr},
publisher = {American Physical Society},
doi = {10.1103/PhysRevA.95.043819},
url = {https://link.aps.org/doi/10.1103/PhysRevA.95.043819} }
@article{Weaver,
author = {Weaver, M. J. and Buters, F. and Luna, F. and Eerkens, H. and Heeck, K. and de Man, S. and Bouwmeester, D.},
journal = {Nat Commun.},
volume = {8},
pages = {824},
year = {2017},
url = {https://doi.org/10.1038/s41467-017-00968-9} }
@book{Brooker2003,
title={Modern Classical Optics},
author={Brooker, G.},
isbn={9780198599654},
lccn={2003682111},
series={Oxford Master Series in Physics},
url={https://books.google.com.mt/books?id=Z1bY1eW4znIC},
year={2003},
publisher={OUP Oxford} }
@article{CheungLaw2011,
title = {Nonadiabatic optomechanical Hamiltonian of a moving dielectric membrane in a cavity},
author = {Cheung, H. K. and Law, C. K.},
journal = {Phys. Rev. A},
volume = {84},
issue = {2},
pages = {023812},
numpages = {10},
year = {2011},
month = {Aug},
publisher = {American Physical Society},
doi = {10.1103/PhysRevA.84.023812},
url = {https://link.aps.org/doi/10.1103/PhysRevA.84.023812} }
@article{AspelmeyerKippenbergMarquardt2014,
title = {Cavity optomechanics},
author = {Aspelmeyer, Markus and Kippenberg, Tobias J. and Marquardt, Florian},
journal = {Rev. Mod. Phys.},
volume = {86},
issue = {4},
pages = {1391--1452},
numpages = {62},
year = {2014},
month = {Dec},
publisher = {American Physical Society},
doi = {10.1103/RevModPhys.86.1391},
url = {https://link.aps.org/doi/10.1103/RevModPhys.86.1391} }
@article{XuerebGenesDantan2013,
title = {Collectively enhanced optomechanical coupling in periodic arrays of scatterers},
author = {Xuereb, Andr\'e and Genes, Claudiu and Dantan, Aur\'elien},
journal = {Phys. Rev. A},
volume = {88},
issue = {5},
pages = {053803},
numpages = {13},
year = {2013},
month = {Nov},
publisher = {American Physical Society},
doi = {10.1103/PhysRevA.88.053803},
url = {https://link.aps.org/doi/10.1103/PhysRevA.88.053803} }
@book{Gardiner1991,
title={Quantum Noise},
author={Gardiner, C.W.},
isbn={9780387536088},
lccn={lc91037383},
series={Springer series in synergetics},
url={https://books.google.com.mt/books?id=eFAbAQAAIAAJ},
year={1991},
publisher={Springer-Verlag} }
@article{BenguriaKac1981,
title = {Quantum Langevin Equation},
author = {Benguria, Rafael and Kac, Mark},
journal = {Phys. Rev. Lett.},
volume = {46},
issue = {1},
pages = {1--4},
numpages = {0},
year = {1981},
month = {Jan},
publisher = {American Physical Society},
doi = {10.1103/PhysRevLett.46.1},
url = {https://link.aps.org/doi/10.1103/PhysRevLett.46.1} }
@book{Routh, title={Applications of the Theory of Matrices}, author={Gantmacher, F. R.}, year={1959}, publisher={Wiley, New York}}
@article{WeedbrookPirandola2011,
title = {Gaussian quantum information},
author = {Weedbrook, Christian and Pirandola, Stefano and Garc\'{\i}a-Patr\'on, Ra\'ul and Cerf, Nicolas J. and Ralph, Timothy C. and Shapiro, Jeffrey H. and Lloyd, Seth},
journal = {Rev. Mod. Phys.},
volume = {84},
issue = {2},
pages = {621--669},
numpages = {0},
year = {2012},
month = {May},
publisher = {American Physical Society},
doi = {10.1103/RevModPhys.84.621},
url = {https://link.aps.org/doi/10.1103/RevModPhys.84.621} }
@book{BreuerPetruccioni2002,
title={The Theory of Open Quantum Systems},
author={Breuer, H.-P. and Petruccione, F.},
isbn={9780198520634},
lccn={2002075713},
url={https://books.google.com.mt/books?id=0Yx5VzaMYm8C},
year={2002},
publisher={Oxford University Press} }
@article{CaldeiraLeggett1981,
title = {Influence of Dissipation on Quantum Tunneling in Macroscopic Systems},
author = {Caldeira, A. O. and Leggett, A. J.},
journal = {Phys. Rev. Lett.},
volume = {46},
issue = {4},
pages = {211--214},
numpages = {0},
year = {1981},
month = {Jan},
publisher = {American Physical Society},
doi = {10.1103/PhysRevLett.46.211},
url = {https://link.aps.org/doi/10.1103/PhysRevLett.46.211} }
@article{GenesMariTombesiVitali2008,
title = {Robust entanglement of a micromechanical resonator with output optical fields},
author = {Genes, C. and Mari, A. and Tombesi, P. and Vitali, D.},
journal = {Phys. Rev. A},
volume = {78},
issue = {3},
pages = {032316},
numpages = {14},
year = {2008},
month = {Sep},
publisher = {American Physical Society},
doi = {10.1103/PhysRevA.78.032316},
url = {https://link.aps.org/doi/10.1103/PhysRevA.78.032316} }
@article{Matsumoto,
doi = {10.1088/0305-4470/35/13/307},
url = {https://doi.org/10.1088/0305-4470/35/13/307},
year = 2002,
month = {mar},
publisher = {{IOP} Publishing},
volume = {35},
number = {13},
pages = {3111--3123},
author = {K Matsumoto},
title = {A new approach to the Cram{\'{e}}r-Rao-type bound of the pure-state model},
journal = {Journal of Physics A: Mathematical and General},
abstract = {This paper sheds light on non-commutativity in quantum theory as regards theoretical estimation. In it, we calculate the quantum Cramér-Rao-type bound for many cases, by use
of a newly proposed powerful technique. We also discuss the use of collective measurement in statistical estimation.}
}
@article{Raymer,
author = {M. G. Raymer and J. Cooper and H. J. Carmichael and M. Beck and D. T. Smithey},
journal = {J. Opt. Soc. Am. B},
keywords = {Diode lasers; Homodyne detection; Light fields; Optical fields; Spontaneous emission; Ultrafast measurement},
number = {10},
pages = {1801--1812},
publisher = {OSA},
title = {Ultrafast measurement of optical-field statistics by dc-balanced homodyne detection},
volume = {12},
month = {Oct},
year = {1995},
url = {http://josab.osa.org/abstract.cfm?URI=josab-12-10-1801},
doi = {10.1364/JOSAB.12.001801},
abstract = {The technique of dc-balanced, pulsed homodyne detection for the purpose of determining optical-field statistics on short time scales is analyzed theoretically. Such measurements provide photon-number and phase distributions associated with a repetitive signal light field in a short time window. Time- and space-varying signal and local-oscillator pulses are treated, thus generalizing earlier treatments of photoelectron difference statistics in homodyne detection. Experimental issues, such as the effects of imperfect detector balancing on (time-integrated) dc detection and the consequences of background noise caused by non-mode-matched parts of the multimode signal field, are analyzed. The Wigner, or joint, distribution for the two field-quadrature amplitudes during the sampling time window can be directly determined by tomographic inversion of the measured photoelectron distributions. It is pointed out that homodyne detection provides a new method for the simultaneous measurement of temporal and spectral information. Although the theory is generally formulated, with both signal and local-oscillator fields being quantized, emphasis is placed on the limit of a strong, coherent local-oscillator field, making semiclassical interpretation possible.} }
@book{Casella, title={Statistical Inference}, author={G. Casella and R. L. Berger}, year={2002}, publisher={Duxbury, Pacific Grove, CA} }
@book{Kay, title={Fundamentals of Statistical Signal Processing: Estimation Theory}, author={S. M. Kay}, year={1993}, publisher={Prentice Hall PTR, Upper Saddle River, NJ} }
@book{Schleich, title={Quantum Optics in Phase Space}, author={W. P. Schleich}, year={2001}, publisher={Wiley-VCH, Weinheim} }
@misc{comment, note={It is important to note that we have misprints in Eqs. (25) and (27), see \cite{SanavioBernadXuereb2020}. Here, we correct them.} }
@article{Schmidt, author = {M. Schmidt and S. Kessler and V. Peano and O. Painter and F. Marquardt}, journal = {Optica}, keywords = {Coupled resonators ; Optical microelectromechanical devices; Photonic crystals ; Cold atoms; Laser beams; Magnetic fields; Numerical simulation; Optical potentials; Spatial light modulators}, number = {7}, pages = {635--641}, publisher = {OSA}, title = {Optomechanical creation of magnetic fields for photons on a lattice}, volume = {2}, month = {Jul}, year = {2015}, url = {http://www.osapublishing.org/optica/abstract.cfm?URI=optica-2-7-635}, doi = {10.1364/OPTICA.2.000635}, }
@article{SanavioPeano,
title = {Nonreciprocal topological phononics in optomechanical arrays},
author = {Sanavio, Claudio and Peano, Vittorio and Xuereb, Andr\'e},
journal = {Phys. Rev. B},
volume = {101},
issue = {8},
pages = {085108},
numpages = {6},
year = {2020},
month = {Feb},
publisher = {American Physical Society},
doi = {10.1103/PhysRevB.101.085108},
url = {https://link.aps.org/doi/10.1103/PhysRevB.101.085108} }
@article{Daiss,
title = {Single-Photon Distillation via a Photonic Parity Measurement Using Cavity QED},
author = {Daiss, Severin and Welte, Stephan and Hacker, Bastian and Li, Lin and Rempe, Gerhard},
journal = {Phys. Rev. Lett.},
volume = {122},
issue = {13},
pages = {133603},
numpages = {6},
year = {2019},
month = {Apr},
publisher = {American Physical Society},
doi = {10.1103/PhysRevLett.122.133603},
url = {https://link.aps.org/doi/10.1103/PhysRevLett.122.133603} }
@article{Sala,
title = {Quantum estimation of coupling strengths in driven-dissipative optomechanics},
author = {Sala, Kamila and Doicin, Tib and Armour, Andrew D. and Tufarelli, Tommaso},
journal = {Phys. Rev. A},
volume = {104},
issue = {3},
pages = {033508},
numpages = {11},
year = {2021},
month = {Sep},
publisher = {American Physical Society},
doi = {10.1103/PhysRevA.104.033508},
url = {https://link.aps.org/doi/10.1103/PhysRevA.104.033508} }
\end{filecontents}
\end{document}
\begin{document}
\title{Einstein solvmanifolds with a simple Einstein derivation}
\author{Y.Nikolayevsky}
\maketitle
\begin{abstract} The structure of a solvable Lie group admitting an Einstein left-invariant metric is, in a sense, completely determined by the nilradical of its Lie algebra. We give an easy-to-check necessary and sufficient condition for a nilpotent algebra to be an Einstein nilradical whose Einstein derivation has simple eigenvalues. As an application, we classify filiform Einstein nilradicals (modulo known classification results on filiform graded Lie algebras). \end{abstract}
\section{Introduction} \label{s:intro}
In this paper, we study Riemannian homogeneous spaces with an Einstein metric of negative scalar curvature. The major open conjecture here is the \emph{Alekseevski Conjecture} \cite{A1} asserting that an Einstein homogeneous Riemannian space with negative scalar curvature admits a simply transitive solvable isometry group. This is equivalent to saying that any such space is a \emph{solvmanifold}, a solvable Lie group with a left-invariant Riemannian metric satisfying the Einstein condition.
Even assuming the Alekseevski Conjecture, the classification (or even the description) of Einstein solvmanifolds is quite a complicated problem. Very recently, remarkable progress in this direction was achieved by J.Lauret \cite{L4} who proved that any Einstein solvmanifold is \emph{standard}. This means that the metric solvable Lie algebra $\g$ of such a solvmanifold has the following property: the orthogonal complement $\ag$ to the derived algebra of $\g$ is abelian. The systematic study of standard Einstein solvmanifolds (and the term ``standard") originated from the paper \cite{H}. In particular, it is shown there that any non-unimodular Einstein solvmanifold admits a rank-one reduction (any unimodular Einstein solvmanifold is flat by \cite{DM}). On the Lie algebra level, this means that if $\g$ is an Einstein metric solvable Lie algebra and $\g=\ag \oplus \n$ (orthogonal sum), with $\n$ the nilradical of $\g$, then there exists a one-dimensional subspace $\ag_1 \subset \ag$ such that $\g_1 = \ag_1 \oplus \n$, with the induced inner product, is again an Einstein metric solvable Lie algebra. What is more, $\g_1$ is essentially uniquely determined by $\n$, and all the Einstein metric solvable Lie algebras with the nilradical $\n$ can be obtained from $\g_1$ via a known procedure (by adjoining appropriate derivations).
In particular, the geometry (and the algebra) of an Einstein metric solvable Lie algebra is completely encoded in its nilradical. A nilpotent Lie algebra which can be the nilradical of an Einstein metric solvable Lie algebra is called an \emph{Einstein nilradical}.
A vector $H$ spanning $\ag_1$ and scaled in such a way that $\|H\|^2 = \Tr \ad_H$ is called the \emph{mean curvature vector}, and the restriction of the derivation $\ad_H$ to $\n$ the \emph{Einstein derivation}. As is proved in \cite{H}, the Einstein derivation is always semisimple, and, up to scaling, its eigenvalues are natural numbers. The ordered set of eigenvalues $\la_i \in \mathbb{N}$ of the (appropriately scaled) Einstein derivation, together with the multiplicities $d_i$, is called the \emph{eigenvalue type} of an Einstein solvmanifold (written as $(\la_1 < \ldots < \la_p; \, d_1, \ldots, d_p)$).
Considerable efforts were made towards the classification of Einstein nilradicals (Einstein solvmanifolds) of a given eigenvalue type. While in the trivial case of eigenvalue type $(1 ; n)$ there is only one possible Einstein nilradical (the abelian one; the corresponding solvmanifold is the hyperbolic space), even the eigenvalue type $(1 < 2 ; d_1, d_2)$ is far from being completely understood (see \cite{GK, H}).
In this paper, we study solvmanifolds with the simple eigenvalue type, $(\la_1 < \ldots < \la_n; \, 1, \ldots, 1)$, and give an easy-to-check necessary and sufficient condition to determine whether a given nilpotent Lie algebra is an Einstein nilradical with such an eigenvalue type. We have to answer two questions: \begin{enumerate}[(i)]
\item \label{it:1}
given an arbitrary nilpotent Lie algebra, how to recognize the eigenvalue type of its Einstein solvable
extension (if for some inner product such an extension exists); in particular, how to determine whether such an
extension, if it exists, has a simple eigenvalue type;
\item \label{it:2}
if, according to (\ref{it:1}), a nilpotent Lie algebra may potentially admit an Einstein metric solvable extension
with a simple eigenvalue type, whether it actually does. \end{enumerate}
The answer to (\ref{it:1}) is given by constructing the pre-Einstein derivation introduced in \cite{N2}. A semisimple derivation $\phi$ of a nilpotent Lie algebra $\n$ is called \emph{pre-Einstein} if $\Tr \, (\phi \circ \psi) = \Tr \, \psi$ for any $\psi \in \Der (\n)$. A pre-Einstein derivation always exists and is unique up to conjugation, and its eigenvalues are rational. As is shown in \cite{N2}, the condition ``$\phi > 0$ and $\ad_{\phi} \ge 0$" is necessary (but not sufficient) for $\n$ to be an Einstein nilradical (``$A > 0$" means that all the eigenvalues of the operator $A$ are real and positive). What is more, if $\n$ \emph{is} an Einstein nilradical, then its Einstein derivation is positively proportional to $\phi$ and the eigenvalue type of the corresponding Einstein metric solvable Lie algebra is just the set of the eigenvalues of $\phi$ and their multiplicities (see Section~\ref{s:pre} for details).
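The defining condition of a pre-Einstein derivation lends itself to direct computation. The following NumPy sketch (an added illustration, not part of the paper) finds a basis of $\Der(\n)$ as the null space of the linearized derivation equations and then solves $\Tr \, (\phi \circ \psi) = \Tr \, \psi$ against that basis, for the three-dimensional Heisenberg algebra with $[e_1, e_2] = e_3$. The eigenvalues come out as $2/3, 2/3, 4/3$, i.e. eigenvalue type $(1 < 2; \, 2, 1)$ after scaling; in particular, this algebra does not have a simple eigenvalue type.

```python
import numpy as np

n = 3
# Heisenberg algebra: [e1, e2] = e3, encoded by structure constants C[i,j,k].
C = np.zeros((n, n, n))
C[0, 1, 2], C[1, 0, 2] = 1.0, -1.0

# A derivation D satisfies D[ei,ej] = [D ei, ej] + [ei, D ej]; each triple
# (i, j, k) gives one linear equation in the n^2 entries of D.
eqs = []
for i in range(n):
    for j in range(n):
        for k in range(n):
            R = np.zeros((n, n))
            R[k, :] += C[i, j, :]   # coefficient of (D [ei, ej])_k
            R[:, i] -= C[:, j, k]   # coefficient of ([D ei, ej])_k
            R[:, j] -= C[i, :, k]   # coefficient of ([ei, D ej])_k
            eqs.append(R.ravel())
_, s, Vt = np.linalg.svd(np.asarray(eqs))
ders = [row.reshape(n, n) for row in Vt[np.sum(s > 1e-10):]]
print(len(ders))  # 6 = dim Der of the Heisenberg algebra

# Pre-Einstein derivation: phi = sum_m c_m psi_m with Tr(phi psi_l) = Tr psi_l.
G = np.array([[np.trace(p @ q) for q in ders] for p in ders])
t = np.array([np.trace(p) for p in ders])
c = np.linalg.lstsq(G, t, rcond=None)[0]  # consistent system; G may be singular
phi = sum(cm * p for cm, p in zip(c, ders))
print(np.sort(np.linalg.eigvals(phi).real))  # approx [2/3, 2/3, 4/3]
```

Any solution of the trace conditions here is block-triangular with diagonal $(2/3, 2/3, 4/3)$, so the eigenvalues are independent of which solution the least-squares step picks.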
Our main result is Theorem~\ref{t:one} below, answering (\ref{it:2}). Let $\n$ be a nilpotent Lie algebra of dimension $n$, with $\phi$ a pre-Einstein derivation. Suppose that all the eigenvalues of $\phi$ are simple. Let $e_i$ be the basis of eigenvectors for $\phi$ and let $[e_i, e_j] = \sum_{k=1}^n C_{ij}^k e_k$ (note that for every pair $(i, j)$, no more than one of the $C_{ij}^k$ is nonzero). In a Euclidean space $\Rn$ with the inner product $(\cdot, \cdot)$ and an orthonormal basis $f_1, \ldots, f_n$, define the finite subset $\mathbf{F}=\{\a_{ij}^k= f_i+f_j-f_k: C_{ij}^k \ne 0\}$. Let $L$ be the affine span of $\mathbf{F}$, the smallest affine subspace of $\Rn$ containing $\mathbf{F}$.
\begin{theorem}\label{t:one} Let $\n$ be a nilpotent Lie algebra whose pre-Einstein derivation has all the eigenvalues simple. The algebra $\n$ is an Einstein nilradical if and only if the projection of the origin of $\Rn$ to $L$ lies in the interior of the convex hull of $\mathbf{F}$. \end{theorem}
\begin{remark}\label{rem:payne} Denote $N = \# \mathbf{F}$ and introduce an $n \times N$ matrix $Y$ whose vector-columns are the vectors $\a_{ij}^k$ in some fixed order. Define the vector $[1]_N =(1,\ldots, 1)^t \in \mathbb{R}^N$ and an $N \times N$ matrix $U=Y^tY$. One can rephrase Theorem~\ref{t:one} as follows: \emph{a nilpotent Lie algebra $\n$ whose pre-Einstein derivation has all its eigenvalues simple is an Einstein nilradical if and only if there exists a vector $v \in \mathbb{R}^N$ all of whose coordinates are positive such that} \begin{equation}\label{eq:ut=1} Uv=[1]_N. \end{equation} By the result of \cite[Theorem 1]{P}, a metric nilpotent Lie algebra is nilsoliton, if and only if equation~\eqref{eq:ut=1} holds with respect to the basis of Ricci eigenvectors. Note that there are two fundamental differences between Theorem~\ref{t:one} and this result. First of all, our $\n$ is just a Lie algebra, no inner product is present. Secondly, and more importantly, the set $S$ of vectors $v$ satisfying \eqref{eq:ut=1} is the interior of a convex polyhedron in an affine subspace of $\RN$ (if it is nonempty). If $v=(v_{ij}^k)$ is a point from $S$, the skew-symmetric bilinear map on the linear space of $\n$ defined by $[e_i, e_j] = \pm (v_{ij}^k)^{1/2} e_k$ (where $i < j$ and $k$ is defined by the condition that $C_{ij}^k \ne 0$) may not even be a Lie bracket, and if it is, there is no apparent reason why the resulting Lie algebra has to be isomorphic to $\n$ (say, when $S$ has a positive dimension). The fact that the point $v$ can be chosen ``in the correct way" is the main content of Theorem~\ref{t:one}. It should be noted, however, that one can hardly expect to find the nilsoliton inner product explicitly, except in the cases when $\n$ is particularly nice. \end{remark}
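As a concrete illustration of the criterion \eqref{eq:ut=1} (an added sketch, not part of the original text), the following NumPy fragment runs the check for the filiform algebra $\m_0(4)$, with nonzero brackets $[e_1, e_2] = e_3$ and $[e_1, e_3] = e_4$. Here $U$ happens to be invertible, and the positive solution $v$ is consistent with $\m_0(n)$ being an Einstein nilradical.

```python
import numpy as np

# Nonzero structure constants C_{ij}^k (i < j), 1-based, for m_0(4):
# [e1, e2] = e3, [e1, e3] = e4.
brackets = [(1, 2, 3), (1, 3, 4)]
n, N = 4, len(brackets)

# Columns of Y are the vectors a_{ij}^k = f_i + f_j - f_k from the text.
Y = np.zeros((n, N))
for col, (i, j, k) in enumerate(brackets):
    Y[i - 1, col] += 1.0
    Y[j - 1, col] += 1.0
    Y[k - 1, col] -= 1.0

U = Y.T @ Y                        # U = Y^t Y; invertible for this algebra
v = np.linalg.solve(U, np.ones(N))
print(U)                           # [[3. 0.] [0. 3.]]
print(v, bool(np.all(v > 0)))      # [1/3, 1/3], True
```

In general $U$ may be singular, in which case one has to look for a positive solution of \eqref{eq:ut=1} over the whole affine set of solutions rather than invert $U$.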
The class of nilpotent algebras whose pre-Einstein derivation has all its eigenvalues simple is rather broad (see the end of Section~\ref{s:proof} for some examples). As an application of Theorem~\ref{t:one}, we give the classification of filiform Einstein nilradicals (a nilpotent Lie algebra $\n$ of dimension $n$ is called \emph{filiform} if its descending central series has the maximal possible length, $n-1$).
As any filiform algebra $\n$ is generated by two elements, its \emph{rank}, the dimension of the maximal torus of derivations (the maximal abelian subalgebra of $\Der (\n)$ consisting of semisimple elements), is at most two. Most filiform algebras of dimension $n \ge 8$ are characteristically nilpotent \cite{GH1}, that is, have rank zero. Such algebras do not admit any gradation at all, hence cannot be Einstein nilradicals. There are two series of filiform algebras of rank two: $\m_0(n)$ (given by the relations $[e_1, e_i] = e_{i+1}$, $i=2, \ldots n-1$) and $\m_1(n), \; n$ is even (the relations of $\m_0(n)$ and $[e_i,e_{n-i+1}] = (-1)^i e_n, \; i = 2, \ldots , n - 1$) \cite{V, GH1}. Both of them are Einstein nilradicals: for $\m_0(n)$, this is proved in \cite[Theorem 4.2]{L2} (see also \cite[Theorem 27]{P}), for $\m_1(n)$ in \cite[Theorem 37]{P}.
Less is known about filiform algebras of rank one. According to \cite[Th\'{e}or\`{e}me 2]{GH1}, there are two classes of such algebras admitting a positive gradation: $A_r$ and $B_r$, with $r \ge 2$. An $n$-dimensional algebra $\n$ of the class $A_r$ ($n \ge r+3$) is given by the relations of $\m_0(n)$ and $[e_i, e_j] = c_{ij} e_{i+j+r-2}$ for $i,j \ge 2, \; i+j \le n+2-r$. An $n$-dimensional algebra $\n$ of the class $B_r$ ($n \ge r+3$, $n$ is even) is given by the relations $[e_1, e_i] = e_{i+1}, \quad i=2, \ldots n-2, \quad [e_i, e_j] = c_{ij} e_{i+j+r-2} , \quad i,j \ge 2, \; i+j \le n+2-r$.
The complete classification is known only for the algebras of the class $A_2$ (\cite{M}, and independently in \cite{CJ}): the class $A_2$ consists of five infinite series and of five one-parameter families $\g_\a(n)$ in dimensions $n=7, \ldots, 11$ (to the best of the author's knowledge, no classification results for $r \ge 3$ appeared in the literature). Based on that classification, we classify the algebras of the class $B_2$ and prove the following theorem (here $\mathcal{V}(n)$ is the $n$-dimensional truncated Witt algebra, the others are the members of finite families listed in the tables in Section~\ref{s:fili}):
\begin{theorem} \label{t:fili} {\ }
\emph{1.} A filiform algebra $\n \in A_2$ of dimension $n \ge 8$ is an Einstein nilradical if and only if it is isomorphic either to $\mathcal{V}(n)$ or to one of the following algebras from Table~\ref{tablega}: $\g_\a(8), \, \a \ne -2,\quad \g_\a(9), \, \a \ne -2$, $\g_\a(10), \, \a \ne -2,-1,\frac12, \quad \g_\a(11)$.
\emph{2.} The class $B_2$ consists of six algebras $\b(6), \b(9), \b_1(10), \b_2(10), \b_{\pm}(12)$ listed in Table~\ref{tableb2n}. All of them are Einstein nilradicals. \end{theorem}
In assertion 1 of Theorem~\ref{t:fili} we consider only the case $\dim \n \ge 8$, as every nilpotent algebra of dimension six or lower is an Einstein nilradical (\cite[Theorem 3.1]{W} and \cite[Theorem 5.1]{L2}), and all seven-dimensional filiform algebras, except for $\m_{0,1}(7) \cong \g_{-2}(7), \, \g_{-1}(7)$, and $\m_2(7)$, are Einstein nilradicals by \cite[Theorem 4.1]{LW}.
The paper is organized as follows. In Section~\ref{s:pre} we give the background on Einstein solvmanifolds and on the momentum map. The proof of Theorem~\ref{t:one} is given in Section~\ref{s:proof}. In Section~\ref{s:fili}, we consider graded filiform algebras and prove Theorem~\ref{t:fili} (in Section~\ref{ss:a2n} for the algebras of the class $A_2$, and in Section~\ref{ss:b2n} for the algebras of the class $B_2$).
\section{Preliminaries} \label{s:pre}
For an inner product $\< \cdot, \cdot \>$ on a Lie algebra $\g$, define the \emph{mean curvature vector} $H$ by $\<H, X\> = \Tr \ad_X$ (clearly, $H$ is orthogonal to the derived algebra of $\g$). For $A \in \End(\g)$, let $A^*$ be its metric adjoint and $S(A) =\frac12 (A +A^*)$ be the symmetric part of $A$. Let $\Ric$ be the Ricci $(0, 2)$-tensor (a quadratic form) of $(\g, \< \cdot, \cdot \>)$, and $\ric$ be the \emph{Ricci operator}, the symmetric operator associated to $\Ric$.
The Ricci operator of $(\g, \< \cdot, \cdot \>)$ is implicitly defined by \begin{equation}\label{eq:riccidef} \Tr \Bigl( \bigl(\ric + S(\ad_H) + \frac12 B \bigr) \circ A \Bigr) = \frac14 \sum\nolimits_{i,j} \<A[E_i, E_j] - [AE_i, E_j] - [E_i, AE_j], [E_i, E_j]\>, \end{equation} for any $A \in \End(\g)$, where $\{E_i\}$ is an orthonormal basis for $\g$, and $B$ is the symmetric operator associated to the Killing form of $\g$.
If $(\n, \< \cdot, \cdot \>)$ is a nilpotent metric Lie algebra, then $H = 0$ and $B = 0$, so \eqref{eq:riccidef} gives \begin{equation}\label{eq:riccinil} \Tr (\ric_{\n} \circ A) = \frac14 \sum\nolimits_{i,j} \<A[E_i, E_j] - [AE_i, E_j] - [E_i, AE_j], [E_i, E_j]\>. \end{equation}
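As a sanity check of \eqref{eq:riccinil} (an added numerical illustration, not from the paper), one can recover the Ricci operator of the three-dimensional Heisenberg algebra, with orthonormal basis $e_1, e_2, e_3$ and $[e_1, e_2] = e_3$, entry by entry from the trace identity, using that $\Tr (\ric_\n \circ A)$ with $A = e_a e_b^t$ picks out the matrix entry $(\ric_\n)_{ba}$:

```python
import numpy as np

n = 3
# Heisenberg algebra with orthonormal basis e1, e2, e3 and [e1, e2] = e3.
C = np.zeros((n, n, n))
C[0, 1, 2], C[1, 0, 2] = 1.0, -1.0

def bracket(x, y):
    return np.einsum('i,j,ijk->k', x, y, C)

E = np.eye(n)

def rhs(A):
    # (1/4) sum_{i,j} <A[Ei,Ej] - [A Ei, Ej] - [Ei, A Ej], [Ei, Ej]>
    s = 0.0
    for i in range(n):
        for j in range(n):
            b = bracket(E[i], E[j])
            s += (A @ b - bracket(A @ E[i], E[j]) - bracket(E[i], A @ E[j])) @ b
    return s / 4.0

# Tr(ric o A) with A = e_a e_b^t equals ric[b, a].
ric = np.zeros((n, n))
for a in range(n):
    for b in range(n):
        ric[b, a] = rhs(np.outer(E[a], E[b]))
print(ric.diagonal())  # [-0.5 -0.5  0.5]
```

The result, $\ric_\n = \mathrm{diag}(-\frac12, -\frac12, \frac12)$, is the well-known Ricci operator of the Heisenberg algebra with this metric.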
An inner product on a solvable Lie algebra $\g$ is called \emph{standard}, if the orthogonal complement to the derived algebra $[\g, \g]$ is abelian. A metric solvable Lie algebra $(\g, \<\cdot,\cdot\>)$ is called \emph{standard}, if the inner product $\<\cdot,\cdot\>$ is standard.
By the result of \cite{L4}, any Einstein metric solvable Lie algebra must be standard.
As proved in \cite{AK}, any Ricci-flat metric solvable Lie algebra is flat. By the result of \cite{DM}, any Einstein metric solvable unimodular Lie algebra is also flat. In what follows, we always assume $\g$ to be nonunimodular ($H \ne 0$), with an inner product of a strictly negative scalar curvature $c \dim \g$.
Any Einstein metric solvable Lie algebra admits a rank-one reduction \cite[Theorem 4.18]{H}. This means that if
$(\g, \< \cdot, \cdot\>)$ is such an algebra, with the nilradical $\n$ and the mean curvature vector $H$, then the subalgebra $\g_1 = \mathbb{R}H \oplus \n$, with the induced inner product, is also Einstein. Moreover, the derivation $\phi=\ad_{H|\n}:\n \to \n$ is symmetric with respect to the inner product, and all its eigenvalues belong to $\a \mathbb{N}$ for some constant $\a > 0$. This implies, in particular, that the nilradical $\n$ of an Einstein metric solvable Lie algebra admits an $\mathbb{N}$-gradation defined by the eigenspaces of $\phi$. As proved in \cite[Theorem 3.7]{L1}, a necessary and sufficient condition for a metric nilpotent algebra $(\n, \< \cdot, \cdot\>)$ to be the nilradical of an Einstein metric solvable Lie algebra is \begin{equation}\label{eq:ricn}
\ric_\n = c \, \id_\n + \phi, \end{equation} where $c \dim \g < 0$ is the scalar curvature of $(\g, \< \cdot, \cdot\>)$. This equation, in fact, defines
$(\g, \< \cdot, \cdot\>)$ in the following sense: given a metric nilpotent Lie algebra whose Ricci operator satisfies \eqref{eq:ricn}, with some constant $c < 0$ and some $\phi \in \Der(\n)$, one can define $\g$ as a one-dimensional extension of $\n$ by $\phi$. For such an extension $\g = \mathbb{R}H \oplus \n, \; \ad_{H|\n} = \phi$, and the inner product defined by $\<H, \n \> = 0,\; \|H\|^2 = \Tr \phi$ (and coinciding with the existing one on $\n$) is Einstein, with the scalar curvature $c \dim \g$. A nilpotent Lie algebra $\n$ which admits an inner product $\< \cdot, \cdot\>$ and a derivation $\phi$ satisfying \eqref{eq:ricn} is called an \emph{Einstein nilradical}, the corresponding derivation $\phi$ is called an \emph{Einstein derivation}, and the inner product $\< \cdot, \cdot\>$ the \emph{nilsoliton metric}.
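As a quick illustration of \eqref{eq:ricn} (an example of ours, not part of the results below), consider the three-dimensional Heisenberg algebra $\mathfrak{h}_3$, with an orthonormal basis $e_1, e_2, e_3$ and the only nonzero bracket $[e_1, e_2] = e_3$:

```latex
% Ricci operator of h_3, computed from \eqref{eq:riccinil}:
\ric_{\mathfrak{h}_3} = \diag\bigl(-\tfrac12, -\tfrac12, \tfrac12\bigr).
% The map phi = diag(1,1,2) is a derivation of h_3 (as 1 + 1 = 2), and
\ric_{\mathfrak{h}_3} = -\tfrac32 \, \id + \diag(1,1,2),
% so \eqref{eq:ricn} holds with c = -3/2 and phi = diag(1,1,2).
```

Thus $\mathfrak{h}_3$ is an Einstein nilradical with Einstein derivation $\diag(1,1,2)$; the resulting four-dimensional extension $\mathbb{R}H \oplus \mathfrak{h}_3$ is Einstein with scalar curvature $c \dim \g = -6$.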
As proved in \cite[Theorem 3.5]{L1}, a nilpotent Lie algebra admits no more than one nilsoliton metric, up to a conjugation and scaling (and hence, an Einstein derivation, if it exists, is unique, up to a conjugation and scaling). If $\la_1 < \ldots < \la_p$ are the eigenvalues of $\phi$, with $d_1, \ldots, d_p$ the corresponding multiplicities, we call $(\la_1 < \ldots < \la_p; \; d_1, \ldots, d_p)$ the \emph{eigenvalue type} of the Einstein metric solvable Lie algebra $(\g, \< \cdot, \cdot\>)$. With some abuse of language, we will also call $(\la_1 < \ldots < \la_p; \; d_1, \ldots, d_p)$ the \emph{eigenvalue type} of $\phi$.
The main tool in the proof of Theorem~\ref{t:one} is the moment map. Let $G$ be a reductive Lie group, with $K \subset G$ a maximal compact subgroup. Let $\g= \k \oplus \p$ be the Cartan decomposition of the Lie algebra of $G$, with $\k$ the Lie algebra of $K$. Suppose $G$ acts on a linear space $V$ endowed with a $K$-invariant inner product $\<\cdot,\cdot\>$. The action of $G$ then descends to the projective space $\mathbb{P}V$.
The \emph{moment map} $\mm$ of the action of $G$ on $\mathbb{P}V$ is defined by \begin{equation}\label{eq:defmoment}
\mm: \mathbb{P}V \to \p^*, \quad \mm(x)(X)=\frac{1}{\|v\|^2} \frac{d}{dt}_{|t=0}\<\exp(tX).v, v\>,
\quad \text{for $x =[v], \, X \in \p$}. \end{equation}
The fact that the moment map can be used to study the nilsoliton metrics was first observed in \cite{L1}, where the following construction was given. Let $\mathcal{L}=(\Rn, \<\cdot, \cdot\>)$ be a linear space with an inner product, and let $V=\mathrm{Hom}(\Lambda^2 \mathcal{L},\mathcal{L})$ be the space of skew-symmetric bilinear maps from $\mathcal{L}$ to itself. Denote by $\mathcal{N} \subset V$ the (real algebraic) subset of those $\mu \in V$ which are nilpotent Lie brackets. The inner product on $V$ is defined in an obvious way: for $\mu_1, \mu_2 \in V$, $\<\mu_1,\mu_2\>=\sum_{ij}\<\mu_1(e_i,e_j),\mu_2(e_i,e_j)\>$, where $\{e_i\}$ is an orthonormal basis for $\mathcal{L}$. The group $G=\GL(n)$ acts on $V$ as follows: for $\mu \in V$ and $g \in G$, $g.\mu(X,Y) =g \mu(g^{-1}X, g^{-1}Y)$, where $X, Y \in \mathcal{L}$ (clearly, $\mathcal{N} \subset V$ is $G$-invariant). Take $\mathfrak{gl}(n)=\mathfrak{o}(n)+\p$, where $\p$ is the linear space of symmetric operators in $\mathcal{L}$, and identify $\p^*$ with $\p$ via the Killing form. Then the moment map $\mm$ of the action of $G$ on $\mathbb{P}V$ can be defined as in \eqref{eq:defmoment}, and one has the following result.
\begin{theorem}[{\cite{L1}, \cite{L3}}]\label{t:moment} Let $\mu \in \mathcal{N} \setminus 0$ and let $\Ric$ be the Ricci endomorphism of the metric nilpotent Lie algebra $(\mathcal{L}, \mu)$. Then
\emph{1.} $\mm([\mu]) = 4 \, \|\mu\|^{-2} \, \Ric$.
\emph{2.} Let $\n=(\Rn, \mu)$ be a nonabelian nilpotent Lie algebra. Choose an arbitrary inner product $\<\cdot,\cdot\>$ on $\Rn$. Then $\n$ is an Einstein nilradical if and only if the function $F: \GL(n) \to \mathbb{R}$, the squared norm of the moment map, defined by
$F(g) = \|\mm(g.[\mu])\|^2 = 16 \, \|g.\mu\|^{-4} \, \Tr \, \Ric_{g.\mu}^2$ attains its minimum. \end{theorem}
\section{Proof of Theorem~\ref{t:one}} \label{s:proof}
The proof of Theorem~\ref{t:one} is a combination of Theorem~\ref{t:moment} and the results on convexity of the image of the moment map of an orbit.
The first step in the proof is the observation that to check whether the condition of assertion 2 of Theorem~\ref{t:moment} is satisfied, one does not need the whole $\GL(n)$ orbit. Let $\n$ be a nilpotent Lie algebra of dimension $n$. A derivation $\phi$ of $\n$ is called \emph{pre-Einstein}, if it is real (all the eigenvalues of $\phi$ are real), semisimple, and \begin{equation}\label{eq:pEtrace}
\Tr (\phi \circ \psi) = \Tr \psi, \quad \text{for any $\psi \in \Der(\n)$}. \end{equation} Note that the Einstein derivation, if it exists, satisfies \eqref{eq:pEtrace}, up to a nonzero multiple, as easily follows from \eqref{eq:riccinil} and \eqref{eq:ricn}. By \cite[Proposition 1]{N2}, a pre-Einstein derivation always exists, is unique up to conjugation, and all its eigenvalues are rational. The main advantage of the pre-Einstein derivation in the study of Einstein nilradicals lies in the fact that if $\n$ is an Einstein nilradical, then the Einstein derivation is a positive multiple of $\phi$ (up to conjugation). Thus, having found a pre-Einstein derivation for a given nilpotent Lie algebra $\n$, one immediately gets a substantial portion of information on the nilsoliton inner product on $\n$, if the latter exists: for instance, in our case, when all the eigenvalues of $\phi$ are simple, we already have an orthogonal (but not orthonormal!) basis of Ricci eigenspaces of a (potentially existing) nilsoliton inner product. On the other hand, a given algebra $\n$ is not an Einstein nilradical if its pre-Einstein derivation fails to have all its eigenvalues positive, or if the endomorphism $\ad_\phi$ of $\Der(\n)$ has nonpositive eigenvalues (see the proof of Theorem 3 of \cite{N2}).
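To illustrate how \eqref{eq:pEtrace} determines $\phi$ (a worked example of ours), take the Heisenberg algebra $\mathfrak{h}_3$ with $[e_1, e_2] = e_3$. Every derivation of $\mathfrak{h}_3$ has diagonal part $\psi = \diag(a, d, a+d)$, and the nilpotent (off-diagonal) parts contribute to neither side of \eqref{eq:pEtrace}. Writing $\phi = \diag(t_1, t_2, t_1+t_2)$, condition \eqref{eq:pEtrace} becomes $t_1 a + t_2 d + (t_1+t_2)(a+d) = 2(a+d)$ for all $a, d$, that is:

```latex
2 t_1 + t_2 = 2, \qquad t_1 + 2 t_2 = 2
\qquad\Longrightarrow\qquad
\phi = \diag\bigl(\tfrac23, \tfrac23, \tfrac43\bigr).
```

All the eigenvalues are rational and positive, and the Einstein derivation $\diag(1, 1, 2)$ of $\mathfrak{h}_3$ is the positive multiple $\tfrac32 \phi$, in accordance with the above.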
Let $Z(\phi) \subset \GL(n)$ be the centralizer of $\phi$ in $\GL(n)$, and $Z_0(\phi)$ its identity component. The group $Z_0(\phi)$ is isomorphic to $\prod_i \GL^+(d_i)$, where $d_1, \ldots, d_p$ are the multiplicities of the eigenvalues of $\phi$ and $\GL^+(d) = \{M \in \GL(d) \, : \, \det M > 0\}$.
The following Lemma is essentially contained in \cite[Theorem 4.3]{L3} (see also \cite[Theorem 6.15, Lemma 6.14]{H}). Note that at this stage we do not use the assumption that all the eigenvalues of $\phi$ are simple.
\begin{lemma} \label{l:centralizer} Let $\n=(\Rn, \mu)$ be a nilpotent Lie algebra, with $\phi$ the pre-Einstein derivation. Let $Z_0(\phi) \subset \GL(n)$
be the identity component of the centralizer of $\phi$ in $\GL(n)$ and $\<\cdot,\cdot\>$ be an arbitrary inner product on $\Rn$ with respect to which $\phi$ is symmetric. The algebra $\n$ is an Einstein nilradical if and only if the function $F: Z_0(\phi) \to \mathbb{R}$ defined by $F(g) = \|\mm(g.[\mu])\|^2$ attains its minimum. \end{lemma}
\begin{proof} The claim follows from \cite[Theorem 4.3]{L3}, if we choose $G_\gamma$ to be $Z_0(\phi)$, and $\mathcal{C}$ to be the set of inner products on $\n$ with respect to which $\phi$ is symmetric. Then $\Ric^\gamma$, the projection of $\Ric$ to the Lie algebra $\z(\phi)$ of $Z_0(\phi)$, coincides with $\Ric$ by \cite[Lemma 2.2]{H}, as $\phi$ is a symmetric derivation.
The ``only if" part uses the fact that if a nilpotent Lie algebra is an Einstein nilradical, then its Einstein derivation is proportional to a pre-Einstein derivation. \end{proof}
Now suppose that all the eigenvalues $\la_i$ of $\phi$ are simple. Let $e_i$ be a basis of eigenvectors of $\phi$ and let $[e_i, e_j] = \sum_{k=1}^n C_{ij}^k e_k$. The number $C_{ij}^k$ can be nonzero only if $\la_i+\la_j = \la_k$; in particular, for every pair $(i, j)$, at most one of the $C_{ij}^k$ is nonzero. The group $Z_0(\phi)$ is abelian and is isomorphic to $\GL^+(1)^n$ acting as follows: an element $g=(e^{x_1}, \ldots, e^{x_n})$ sends $e_i$ to $e^{-x_i}e_i$. The corresponding action on $V$ is given by $C_{ij}^k \to e^{x_i+x_j-x_k}C_{ij}^k$. Fix an inner product on $\n$ such that $\<e_i, e_j\> = \delta_{ij}$.
The moment map $\mm$ takes values in the space $\z^*(\phi)$ of diagonal matrices with respect to the basis $\{e_i\}$. Identify $\z^*(\phi)$ with $\Rn$, with an inner product $(\cdot, \cdot)$ induced by the Killing form of $\mathfrak{gl}(n)$ and with the orthonormal basis $\{f_i\}$ ($f_i$ corresponds to the matrix having $1$ as its $(i,i)$-th entry and zero elsewhere).
Define $\mathbf{F}=\{\a_{ij}^k= f_i+f_j-f_k: C_{ij}^k \ne 0\} \subset \Rn$. The claim of Theorem~\ref{t:one} immediately follows from Lemma~\ref{l:centralizer} and the following lemma:
\begin{lemma}\label{l:moment} Let $\n=(\Rn, \mu)$ be a nilpotent Lie algebra whose pre-Einstein derivation $\phi$ has all its eigenvalues simple. For an arbitrary inner product on $\n$ with respect to which $\phi$ is symmetric, $$ \mm(Z_0(\phi).[\mu]) = (\Conv(\mathbf{F}))^0, $$ the interior of the convex hull of $\mathbf{F}$. \end{lemma} \begin{proof} By \cite[Proposition 4.1]{HS}, the set $\mm(Z_0(\phi).[\mu])$ is an open convex subset of an affine subspace of $\Rn$. To prove the lemma it therefore suffices to show that $\mm(Z_0(\phi).[\mu]) \subset \Conv(\mathbf{F})$ and that $\overline{\mm(Z_0(\phi).[\mu])} \supset \mathbf{F}$.
Let $X = (x_1, \ldots, x_n) \in \z(\phi) = \Rn$ and $g= \exp X = (e^{x_1}, \ldots, e^{x_n}) \in Z_0(\phi)$. Then \begin{equation}\label{eq:moment} \mm(g.[\mu])=\frac{\sum_{\a_{ij}^k \in \mathbf{F}} e^{2(\a_{ij}^k,X)} (C_{ij}^k)^2\a_{ij}^k} {\sum_{\a_{ij}^k \in \mathbf{F}} e^{2(\a_{ij}^k,X)}(C_{ij}^k)^2}. \end{equation} It follows that $\mm(Z_0(\phi).[\mu]) \subset \Conv(\mathbf{F})$, as for every $g \in Z_0(\phi), \quad \mm(g.[\mu])$ is the center of mass of the set $\mathbf{F}$, with the positive masses $e^{2(\a_{ij}^k,X)}(C_{ij}^k)^2$ placed at the vertices $\a_{ij}^k$. Moreover, $\overline{\mm(Z_0(\phi).[\mu])}$ contains all the vertices of $\mathbf{F}$. Indeed, let $\a_{ij}^k \in \mathbf{F}$ and let $X = f_i+f_j$. Then $(\a_{ij}^k, X)=2$ and $(\a_{ls}^r, X) < 2$ for any other vertex $\a_{ls}^r \in \mathbf{F}$. By \eqref{eq:moment}, $\lim_{t\to \infty} \mathbf{m}(\exp(tX). [\mu]) = \a_{ij}^k$. \end{proof}
\begin{remark} \label{rem:multiplicity} A direct generalization of Theorem~\ref{t:one} to the case when the pre-Einstein derivation $\phi$ has eigenvalues of higher multiplicities works only as a necessary condition. If $\la_1, \ldots, \la_p$ are the eigenvalues of $\phi$, with $\n_1, \ldots, \n_p$ the corresponding eigenspaces, then for every pair $(i,j), \quad [\n_i,\n_j]$ is either zero, or lies in some eigenspace $\n_k$. Defining $\mathbf{F}$ as the subset of $\mathbb{R}^p$ consisting of the vectors $\a_{ij}^k=f_i+f_j-f_k$ such that $[\n_i, \n_j] \subset \n_k,\; [\n_i, \n_j] \ne 0$ we get a necessary condition for $\n$ to be an Einstein nilradical similar to that of \cite[Lemma 1]{N1}.
The reason why the ``if" part of Theorem~\ref{t:one} fails in this case is that Lemma~\ref{l:moment} is no longer true. In general, let $G$ be a reductive Lie group acting on a linear space $V$ and let $\g= \k \oplus \p$ be the Cartan decomposition of its Lie algebra, with $\k$ the Lie algebra of a maximal compact subgroup $K \subset G$. Let $\ag \subset \p$ be a maximal subalgebra (which is automatically abelian, as $[\p,\p] \subset \k$). The image of the moment map $\mathbf{m}$ of the action of $G$ on the projective space $\mathbb{P}V$ lies in $\p$. One has two sorts of general convexity results for the image of the $G$-orbit of a point $x \in \mathbb{P}V$ under the moment map $\mathbf{m}$: the projection of $\mathbf{m}(G.x)$ to $\ag$ is convex (the Kostant Theorem), and the intersection of $\mathbf{m}(\overline{G.x})$ with the positive Weyl chamber $\ag_+ \subset \ag$ is convex \cite[Corollary 7.1]{S}.
If all the eigenvalues of $\phi$ are simple, the group $G= \prod \GL^+(1) \cong \Rn$ is abelian, so $\k =0, \g=\p=\ag$ and the Weyl group is trivial, hence $\ag_+ = \ag (= \Rn)$, which implies that the image of the orbit is convex in $\ag$. In general, however, this is not true even in very simple cases, as the following example shows (some examples of that sort in settings different from ours can be found in \cite[Chapter 8]{S}). \end{remark}
\begin{example} Consider a $(2p+1)$-dimensional two-step nilpotent Lie algebra $\n$ given by the relations $[e_1, e_i]=e_{i+p}, \; i=2, \ldots, p+1$ (note that such an algebra is an Einstein nilradical by \cite[Theorem 4.2]{L2}). A pre-Einstein derivation $\phi$ can be taken as $\phi(e_1) = \frac{2}{p+2} e_1,\; \phi(e_i) = \frac{p+1}{p+2} e_i, \, i=2,\ldots, p+1,\; \phi(e_j) = \frac{p+3}{p+2} e_j, \, j=p+2,\ldots, 2p+1$ (in fact, $\frac{2}{p+2} \phi$ is an Einstein derivation, if we choose an inner product on $\n$ in such a way that the vectors $e_i$ are orthonormal). The component of the identity of the centralizer of $\phi$ is the group $G=\GL^+(1) \times \GL^+(p) \times \GL^+(p)$. Introduce an inner product on $\mathbb{R}^{2p+1}$, the underlying linear space of $\n$, by requiring that the basis $e_i$ is orthonormal. Denote by $\mu$ the bracket defining $\n$. Then for $g = (t, g_1, g_2) \in G$ the bracket $g.\mu$ is given by $g.\mu(e_1,e_i) = t^{-1} g_2 [e_1, g_1^{-1} e_i], \; 2 \le i \le p+1$ (and $g.\mu(e_i,e_j) = 0$ for all the other pairs with $i < j$). For any $g \in G$ there exists an $h \in \GL^+(p)$ (acting on the space $\Span (e_{p+2}, \ldots, e_{2p+1})$) such that $g.\mu(e_1,e_i) = h e_{i+p}$ for $2 \le i \le p+1$. Such an $h$ is uniquely determined by $g \in G$, and the map from $G$ to $\GL^+(p)$ sending $g$ to $h$ is onto.
The moment map along the orbit of $[\mu]$ under the action of $G$ is given by $$ \mathbf{m}(g.[\mu])= \begin{bmatrix}
-2 & 0 & 0 \\
0 & -2 h^th \cdot (\Tr h^t h )^{-1}& 0 \\
0 & 0 & 2 hh^t \cdot (\Tr hh^t)^{-1} \\ \end{bmatrix}. $$ The intersection of $\mathbf{m}(G.[\mu])$ with $\ag$, the set of diagonal matrices, is the set $\{\diag(-2,-\la_1, \ldots, -\la_p,$ $\la_{\sigma(1)}, \ldots, \la_{\sigma(p)}) \, : \, \la_i > 0, \, \sum \la_i = 2, \, \sigma \in S_p\}$, where $S_p$ is the symmetric group on $p$ symbols. So $\mathbf{m}(G.[\mu]) \cap \ag$ is the union of $p!$ open simplices of dimension $p-1$ and is not convex. For instance, for $p=2$, the set $\mathbf{m}(G.[\mu]) \cap \ag$ is the union of the two diagonals of a square. It is easy to see, however, that $\mathbf{m}(G.[\mu]) \cap \ag_+$ is convex (and is a simplex minus some faces).
\end{example}
The class of nilpotent Lie algebras whose pre-Einstein derivation has all its eigenvalues simple is quite large. One example, the graded filiform algebras, will be considered in detail in the next section. However, there are many other algebras with such a property. For instance, a pre-Einstein derivation of a nilpotent algebra with codimension one abelian ideal $\ag$ (see \cite[Section 4]{L2}) has all its eigenvalues simple, provided the operator $\ad_X$ (where $X \notin \ag$) has two Jordan blocks of dimensions $p_1$ and $p_2$, with $p_1+p_2$ an odd number. Another example is a two-step nilpotent algebra given by the relations $[e_1, e_2] = e_7, \, [e_1, e_3] = e_8, \,[e_2, e_3] = [e_4, e_5] = e_9$, $[e_3, e_4] = [e_1, e_6] = e_{10}$. We give yet another example below. \begin{example} Consider a family of eight-dimensional nilpotent Lie algebras given with respect to a basis $e_i,\; i=1,\ldots, 8$, by the relations $[e_i,e_j] = c_{ij} e_{i+j+1}$, where all the $c_{ij}$'s with $i<j,\, i+j \le 7$, are nonzero. Any such algebra is isomorphic to exactly one of the algebras $\n_t(8), \; t \ne 0, 1$, defined by $[e_1, e_i] = e_{i+2}, \; i = 2, \ldots, 6, \; [e_2, e_3] = e_6,\, [e_2, e_4] = e_7,\, [e_2, e_5] = t e_8,\, [e_3, e_4] = (t-1) e_8$. The pre-Einstein derivation for each of the $\n_t(8), \; t \ne 0, 1$, is positively proportional to the derivation sending every $e_i$ to $(i+1)e_i$, hence has all its eigenvalues simple. A routine check shows that the set $S$ of positive solutions $v$ of \eqref{eq:ut=1} (see Remark~\ref{rem:payne}) is nonempty. In fact, all the $\n_t(8), \; t \ne 0, 1$, share the same $S$, which is the interior of a triangle in $\mathbb{R}^9$. By Theorem~\ref{t:one}, each of the algebras $\n_t(8), \; t \ne 0, 1$, is an Einstein nilradical. \end{example}
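Relations such as those defining $\n_t(8)$ can be checked mechanically. The following short script (our own sanity check, with the arbitrary parameter choice $t = 2$) verifies the Jacobi identity for $\n_t(8)$ directly from its structure constants:

```python
from fractions import Fraction
from itertools import combinations

t = Fraction(2)  # arbitrary choice of the parameter, t != 0, 1
n = 8

# Structure constants of n_t(8): bracket[(i, j)] = (k, coeff) encodes
# [e_i, e_j] = coeff * e_k for i < j; all unlisted brackets vanish.
bracket = {(1, i): (i + 2, Fraction(1)) for i in range(2, 7)}
bracket.update({(2, 3): (6, Fraction(1)), (2, 4): (7, Fraction(1)),
                (2, 5): (8, t), (3, 4): (8, t - 1)})

def lie(i, j):
    """[e_i, e_j] as a coefficient vector in R^8 (1-based basis indices)."""
    v = [Fraction(0)] * n
    if i != j:
        key, sign = ((i, j), 1) if i < j else ((j, i), -1)
        if key in bracket:
            k, coeff = bracket[key]
            v[k - 1] = sign * coeff
    return v

def lie_vec(v, j):
    """[v, e_j], extended bilinearly in the first argument."""
    out = [Fraction(0)] * n
    for i, ci in enumerate(v, start=1):
        if ci:
            out = [o + ci * wi for o, wi in zip(out, lie(i, j))]
    return out

# Jacobi identity on all basis triples: [[x,y],z] + [[y,z],x] + [[z,x],y] = 0.
for x, y, z in combinations(range(1, n + 1), 3):
    total = [p + q + r for p, q, r in zip(lie_vec(lie(x, y), z),
                                          lie_vec(lie(y, z), x),
                                          lie_vec(lie(z, x), y))]
    assert all(c == 0 for c in total), (x, y, z)
```

The same check run with $t$ left symbolic (or with other rational values of $t$) confirms that the Jacobi identity holds for the whole family.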
\section{Filiform $\mathbb{N}$-graded algebras and Einstein nilradicals} \label{s:fili}
In this section, we apply Theorem~\ref{t:one} to study filiform algebras. A \emph{filiform} algebra is a nilpotent algebra for which the descending central series $\n_0=\n,\; \n_{i+1} = [\n, \n_i],\, i \ge 0$, has the maximal possible length for the given dimension, namely $\n_{n-2} \ne 0$, where $n = \dim \n$.
We are primarily interested in those filiform algebras which are Einstein nilradicals. Every such algebra must admit an $\mathbb{N}$-gradation.
Any filiform algebra $\n$ is generated by two elements, so its rank is at most two. Most filiform algebras of dimension $n \ge 8$ are characteristically nilpotent \cite{GH1}, that is, have rank zero. Those algebras do not admit any gradation at all, hence cannot be Einstein nilradicals (cf. \cite[Section 9.1]{P}); in fact, they cannot even be the nilradical of any solvable Lie algebra other than themselves.
The case $\rk \, \n = 2$ is completely settled by the following two facts. First of all, by the result of \cite{V, GH1}, there are only two filiform Lie algebras of rank two, namely \begin{align}\label{eq:m0} &\m_0(n) \; &[e_1, e_i] = e_{i+1}, \quad &i=2, \ldots, n-1, \\ &\m_1(n) \; &[e_1, e_i] = e_{i+1}, \quad &i=2, \ldots, n-2, \quad [e_i,e_{n-i+1}] = (-1)^i e_n, \; i = 2, \ldots , n - 1, \label{eq:m1} \end{align} where the dimension $n$ of $\m_1(n)$ must be even. Secondly, both of these algebras are Einstein nilradicals. For $\m_0(n)$, this is proved in \cite[Theorem 4.2]{L2} (see also \cite[Theorem 27]{P}), for $\m_1(n)$ in \cite[Theorem 37]{P}. Note that the pre-Einstein derivation of both $\m_0$ and $\m_1$ has all its eigenvalues simple, so the fact that they are Einstein nilradicals can also be deduced from Theorem~\ref{t:one}.
Less is known in the case $\rk \, \n = 1$. As it follows from \cite[Th\'{e}or\`{e}me 2]{GH1} (see also \cite[Section~3.1]{GH2}), there are two series of classes of rank one filiform algebras admitting a positive gradation: \begin{itemize}
\item the $n$-dimensional algebras of the class $A_r,\; 2\le r \le n-3$ are given by the relations
\begin{equation*}
[e_1, e_i] = e_{i+1}, \quad i=2, \ldots, n-1, \quad [e_i, e_j] = c_{ij} e_{i+j+r-2}, \quad i,j \ge 2, \; i+j \le n+2-r,
\end{equation*}
with the gradation $1, r, r+1, \ldots, n+r-2$ (the corresponding derivation $\phi$ is defined by
$\phi(e_1) = e_1$, $\phi(e_i) = (i+r-2) e_i,\;i \ge 2$).
\item the $n$-dimensional algebras of the class $B_r,\; 2\le r \le n-3$, with $n$ even, are given by the relations
\begin{equation}\label{eq:Brn}
[e_1, e_i] = e_{i+1}, \quad i=2, \ldots, n-2, \quad [e_i, e_j] = c_{ij} e_{i+j+r-2}, \quad i,j \ge 2, \; i+j \le n+2-r,
\end{equation}
with the gradation $1, r, r+1, \ldots, n+r-3, n+2r-3$ (the corresponding derivation $\phi$ is defined by
$\phi(e_1) = e_1,\; \phi(e_i) = (i+r-2) e_i,\; 2 \le i \le n-1,\; \phi(e_n) = (n+2r-3) e_n$). \end{itemize} In order to get an algebra of rank precisely one, one requires that not all of the $c_{ij}$'s above are zero. The main difficulty in classifying the algebras from $A_r$ and $B_r$ lies in the fact that the numbers $c_{ij}$ must satisfy the Jacobi identity. Note that for any $n \ge 5$ and any $2\le r \le n-3$, there exists, for instance, an algebra from $A_r$ with all the $c_{ij}$'s nonzero. The complete classification is known only for the algebras of the class $A_2$ (\cite{M,CJ}, the complex case was earlier done in \cite{AG}).
Note that although the ground field in \cite{GH1, GH2} is $\mathbb{C}$, an examination of the proof shows that the classification of filiform algebras of rank one works for $\mathbb{R}$ without any changes (actually, for any field of characteristic zero; note, however, that the algebra $\b_{\pm}(12)$ from Table~\ref{tableb2n} below is not defined over $\mathbb{Q}$).
Every algebra $\n$ from $A_r$ or $B_r$ has only one semisimple derivation, which is automatically a pre-Einstein derivation (up to conjugation and scaling). As all its eigenvalues are simple (they are proportional to $(1, r, r+1, \ldots, n+r-2)$ for $A_r$ and to $(1, r, r+1, \ldots, n+r-3, n+2r-3)$ for $B_r$), the question of whether or not $\n$ is an Einstein nilradical is answered by Theorem~\ref{t:one}.
Note that the affine space $L$ in Theorem~\ref{t:one} (and hence the projection $p$ of the origin of $\Rn$ to it) is the same for all the $n$-dimensional algebras of each of the classes $A_r$ and $B_r$ (although the set $\mathbf{F}$ depends on the particular algebra). The explicit form of $p$ can be easily found, see e.g. \eqref{eq:pA2n} for $A_2$.
With the classification of algebras of the class $A_2$ in hand, we classify the algebras of the class $B_2$ (Table~\ref{tableb2n} in Section~\ref{ss:b2n}); it turns out that there are only six of them. Then we apply Theorem~\ref{t:one} to find all the Einstein nilradicals in the classes $A_2$ and $B_2$, which proves Theorem~\ref{t:fili}.
\subsection{Algebras of the class $\mathbf{A_2}$} \label{ss:a2n}
According to the classification given in \cite[Theorem 5.17]{M}, the class $A_2$ consists of five infinite series and five one-parameter families in dimensions $7 \le n \le 11$. More precisely, there are two infinite series, $\m_2(n)$ and $\mathcal{V}(n)$, defined for all $n$, two others, $\m_{0,1}(n)$ and $\m_{0,3}(n)$, defined for odd $n$, and one, $\m_{0,2}(n)$, defined for even $n$. The tables below give the commutation relations for the algebras from $A_2$ (they slightly differ from the ones in \cite{M}: first of all, we remove the algebra $\m_0$, as it is of rank two; secondly, we change the lower bounds for the dimensions, so in our tables some lower-dimensional algebras from different families could be isomorphic).
\begin{table}[h] \setlength{\extrarowheight}{2pt} \begin{center}
\begin{tabular}{|m{3cm}|m{11cm}|} \hline $\m_2(n), \, n \ge 5$ & $[e_1, e_i] = e_{i+1},$
$i = 2, \ldots , n-1$ \newline $[e_2, e_i] = e_{i+2},$
$i = 3, \ldots , n-2$ \\ \hline $\mathcal{V}(n), \, n \ge 4$ & $[e_i, e_j ] = (j-i) e_{i+j},$
$i + j \le n$ \\ \hline $\m_{0,1}(n), $ \newline $n=2m+1,\, n \ge 7$ & $[e_1, e_i] = e_{i+1},$
$i = 2, \ldots , n-1$ \newline $[e_l, e_{n-l}] = (-1)^{l+1} e_n,$
$l = 2, \ldots , m$ \\ \hline $\m_{0,2}(n),$ \newline $n=2m+2,\, n \ge 8$ & $[e_1, e_i] = e_{i+1},$
$i=2, \ldots , n-1$ \newline $[e_l, e_{n-1-l}] = (-1)^{l+1} e_{n-1},$
$l=2, \ldots , m$ \newline $[e_j , e_{n-j}] = (-1)^{j+1}(m-j+1)e_n,$
$j=2, \ldots , m$ \\ \hline $\m_{0,3}(n),$ \newline $n=2m+3,\, n \ge 9$ & $[e_1, e_i] = e_{i+1},$
$i=2, \ldots , n-1$ \newline $[e_l, e_{n-2-l}] = (-1)^{l+1}e_{n-2},$
$l=2, \ldots , m$ \newline $[e_j , e_{n-1-j}] = (-1)^{j+1}(m-j+1)e_{n-1},$
$j=2, \ldots , m$ \newline $[e_k, e_{n-k}] = (-1)^k ((k-2)m - (k-2)(k-1)/2)e_n,$
$k=3, \ldots , m+1$ \\ \hline \end{tabular} \caption{Infinite series of algebras of the class $A_2$.}\label{tablea2}
\begin{tabular}{|m{1cm}|m{12.9cm}|} \hline $\g_\a(7)$ & $[e_1, e_i] = e_{i+1}$, \quad $i = 2, \ldots , 6$ \newline $[e_2, e_3] = (2+\a) e_5,\, [e_2, e_4] = (2+\a) e_6,\, [e_2, e_5] = (1+\a) e_7,\, [e_3, e_4] = e_7$\\ \hline $\g_\a(8)$ & relations for $\g_\a(7)$ and \newline $[e_1, e_7 ] = e_8, \, [e_2, e_6 ] = \a e_8, \, [e_3, e_5 ] = e_8$\\ \hline $\g_\a(9)$ & relations for $\g_\a(8)$ and \newline $[e_1, e_8] = e_9, \, [e_2, e_7] = \frac{2 \a^2 + 3 \a - 2}{2\a+5} e_9, \, [e_3, e_6] = \frac{2 \a + 2}{2\a+5} e_9, \, [e_4, e_5] = \frac{3}{2\a+5} e_9$,
$\a \ne -\frac52$\\ \hline $\g_\a(10)$ & relations for $\g_\a(9)$ and \newline $[e_1, e_9] = e_{10},\, [e_2, e_8] = \frac{2 \a^2 + \a - 1}{2\a+5} e_{10}, \, [e_3, e_7] = \frac{2 \a - 1}{2\a+5} e_{10}, \, [e_4, e_6] = \frac{3}{2\a+5} e_{10}$,
$\a \ne -\frac52$ \\ \hline $\g_\a(11)$ & relations for $\g_\a(10)$ and \newline $[e_1, e_{10}] = e_{11},\, [e_2, e_9] = \frac{2 \a^3 + 2 \a^2 + 3}{2(\a^2+4\a+3)} e_{11}, \, [e_3, e_8] = \frac{4 \a^3 + 8 \a^2 - 8 \a - 21}{2(\a^2+4\a+3)(2\a+5)} e_{11}$, \newline $[e_4, e_7] = \frac{3(2 \a^2 + 4 \a + 5)}{2(\a^2+4\a+3)(2\a+5)} e_{11}, \, [e_5, e_6] = \frac{3(4 \a + 1)}{2(\a^2+4\a+3)(2\a+5)} e_{11}$,
$\a \ne -3, -\frac52, -1$\\ \hline \end{tabular} \caption{One-parameter families of algebras of the class $A_2$.}\label{tablega} \end{center} \end{table}
As we are interested only in determining whether a given algebra is an Einstein nilradical, by Theorem~\ref{t:one}, we need only the set $\mathbf{F}$ for each of the algebras, not the actual structural coefficients. The vector $p$, the projection of the origin of $\Rn$ to $L$, is given by \begin{equation}\label{eq:pA2n}
p_i= \frac{2}{n(n-1)}(2n+1-3i), \quad i=1, \ldots , n. \end{equation}
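Formula \eqref{eq:pA2n} can be recovered as follows (a short verification of ours). For an algebra of the class $A_2$ the gradation weights are $\la = (1, 2, \ldots, n)$, so every $\a_{ij}^k \in \mathbf{F}$ satisfies $\sum_s (\a_{ij}^k)_s = 1$ and $\sum_s s \, (\a_{ij}^k)_s = \la_i + \la_j - \la_k = 0$. Assuming $L$ is the affine subspace cut out by these two equations, its closest point to the origin has the form $p = a \mathbf{1} + b \la$, and the two constraints give:

```latex
n a + \tfrac{n(n+1)}{2}\, b = 1, \qquad
\tfrac{n(n+1)}{2}\, a + \tfrac{n(n+1)(2n+1)}{6}\, b = 0,
% whence
a = \frac{2(2n+1)}{n(n-1)}, \qquad b = -\frac{6}{n(n-1)}, \qquad
p_i = a + b\,i = \frac{2\,(2n+1-3i)}{n(n-1)}.
```

This agrees term by term with \eqref{eq:pA2n}.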
The proof proceeds on a case-by-case basis.
First of all, none of the algebras $\m_2(n), \; \m_{0,1}(n), \, n=2m+1, \; \m_{0,2}(n), \, n=2m+2, \; \m_{0,3}(n), \; n=2m+3$, is an Einstein nilradical when $n \ge 8$. The easiest way to see this is to produce a vector $a \in \Rn$ such that $(a, \a_{ij}^k) \ge 0$ for all $\a_{ij}^k \in \mathbf{F}$, but $(a, p) < 0$ (this implies that $p \notin \Conv(\mathbf{F})$). Such a vector $a$ can be taken as $a_1=n-2, \, a_2=2(n-2), \, a_i =i(n-2)-n(n+1)/2, \, 3 \le i \le n$, for $\m_2(n)$, as $(1,1-m,2-m, \ldots, m-2, m-1,-1)^t$ for $\m_{0,1}(n)$, as $a=(1, 1-m, 2-m, \ldots, m-2, m-1,1,0)^t$ for $\m_{0,2}(n), \, n=2m+2$, and as $a_1=n+2, \, a_{n-2}=-n-4, \, a_{n-1}=-2, \, a_n=n, \, a_i =i(n+2)-\frac{n(n+1)}{2}$, $3 \le i \le n-3$, for $\m_{0,3}(n)$.
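This separating-vector argument is easy to verify numerically. The following script (our own check, not part of the proof) confirms it for $\m_2(8)$, using \eqref{eq:pA2n} for $p$:

```python
from fractions import Fraction

n = 8  # the check below is for m_2(8); larger n work the same way

# F for m_2(n): alpha_{1i}^{i+1}, i = 2..n-1 (from [e1, ei] = e_{i+1}),
# and alpha_{2i}^{i+2}, i = 3..n-2 (from [e2, ei] = e_{i+2}).
F = [(1, i, i + 1) for i in range(2, n)] + [(2, i, i + 2) for i in range(3, n - 1)]

def alpha(i, j, k):
    """The vector f_i + f_j - f_k in R^n (1-based indices)."""
    v = [Fraction(0)] * n
    v[i - 1] += 1
    v[j - 1] += 1
    v[k - 1] -= 1
    return v

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# The separating vector a from the text ...
a = [Fraction(n - 2), Fraction(2 * (n - 2))] + \
    [i * (n - 2) - Fraction(n * (n + 1), 2) for i in range(3, n + 1)]

# ... and p_i = 2(2n+1-3i)/(n(n-1)), the projection of the origin onto L.
p = [Fraction(2 * (2 * n + 1 - 3 * i), n * (n - 1)) for i in range(1, n + 1)]

assert all(dot(a, alpha(*v)) >= 0 for v in F)  # a is nonnegative on F ...
assert dot(a, p) < 0                           # ... but negative on p
```

For $\m_2(8)$ one finds $(a, p) = -27/7 < 0$, while $(a, \a_{ij}^k) \in \{0, \, n(n+1)/2\}$ on $\mathbf{F}$, so $p \notin \Conv(\mathbf{F})$.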
The only remaining algebra from Table~\ref{tablea2}, the algebra $\mathcal{V}(n)$, is an Einstein nilradical for any $n \ge 3$. This follows from the fact that $p=\frac{2}{n(n-1)} (2 \sum_{1\le i < j, \, i+j \le n} \a_{ij}^{i+j}+\a_{1m}^{m+1}+\sum_{i=1}^{m-1} \a_{i,i+2}^{2i+2})$, where $m =[n/2]$.
Apart from a finite number of exceptional values of $\a$, for every $n=7, \ldots, 11$, the set $\mathbf{F}$ for the algebras $\g_\a(n)$ from Table~\ref{tablega} is the same as for the corresponding algebra $\mathcal{V}(n)$, so each of them is an Einstein nilradical. We treat the exceptional values below by either giving the coefficient vector $c=(c_{ij}^k)$ of a convex linear combination $p=\sum c_{ij}^k \a_{ij}^k$ (the vectors $\a_{ij}^k$ are always ordered lexicographically), or otherwise, by showing that some coefficient of any such linear combination is nonpositive. According to Theorem~\ref{t:one}, the corresponding algebra is an Einstein nilradical in the former case, and is not in the latter one.
The exceptional values for $\g_{\a}(8)$ are $\a= -2, -1, 0$. The algebra $\g_{-2}(8)$ is not an Einstein nilradical, as for any linear combination of the vectors from $\mathbf{F}$ representing $p, \; c_{16}^7 < 0$. Both $\g_{0}(8)$ and $\g_{-1}(8)$ are Einstein nilradicals, the coefficient vector $c$ can be taken as $\frac{1}{28} (4, 2, 3, 1, 2, 2, 3, 2, 2, 2, 5)^t$ for $\a=0$ and as $\frac{1}{28}(2, 2, 2, 4, 3, 1, 3, 3, 2, 3, 3)^t$ for $\a=-1$.
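These coefficient vectors are straightforward to verify. For example, the following script (ours) checks that for $\g_0(8)$ the stated $c$ indeed expresses $p$ as a convex combination of the vectors of $\mathbf{F}$, ordered lexicographically:

```python
from fractions import Fraction

n = 8

# Nonzero brackets of g_0(8) (alpha = 0, so [e2, e6] = 0 drops out):
# [e1, ei] = e_{i+1}, i = 2..7; [e2, e3] -> e5, [e2, e4] -> e6,
# [e2, e5] -> e7, [e3, e4] -> e7, [e3, e5] -> e8.
F = [(1, i, i + 1) for i in range(2, 8)] + \
    [(2, 3, 5), (2, 4, 6), (2, 5, 7), (3, 4, 7), (3, 5, 8)]  # lexicographic

def alpha(i, j, k):
    """The vector f_i + f_j - f_k in R^n (1-based indices)."""
    v = [Fraction(0)] * n
    v[i - 1] += 1
    v[j - 1] += 1
    v[k - 1] -= 1
    return v

# The coefficient vector for g_0(8) from the text, and p from \eqref{eq:pA2n}.
c = [Fraction(x, 28) for x in (4, 2, 3, 1, 2, 2, 3, 2, 2, 2, 5)]
p = [Fraction(2 * (2 * n + 1 - 3 * i), n * (n - 1)) for i in range(1, n + 1)]

combo = [sum(ci * alpha(*v)[s] for ci, v in zip(c, F)) for s in range(n)]

assert sum(c) == 1 and all(ci > 0 for ci in c)  # genuine convex coefficients
assert combo == p                               # p = sum of c_ij^k alpha_ij^k
```

The coefficient vectors for the other exceptional algebras can be checked the same way, after listing their nonzero brackets.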
The exceptional values for $\g_{\a}(9)$ are $\a= -2, -1, 0, \frac12$. The algebra $\g_{-2}(9)$ is not an Einstein nilradical, as for any linear combination of the vectors from $\mathbf{F}$ representing $p, \; c_{16}^7 < 0$. All three algebras $\g_{-1}(9)$, $\g_{0}(9)$, and $\g_{1/2}(9)$ are Einstein nilradicals, the coefficient vector $c$ can be taken as $\frac{1}{36} (2, 2, 1, 2, 2, 2, 5, 3, 3, 4, 1, 3, 4, 2)^t$, as $\frac{1}{72} (6, 6, 5, 3, 5, 6, 1, 5, 5, 5, 5, 5, 5, 5, 5)^t$, and as $\frac{1}{36} (3, 1, 2, 4, 1, 3, 2$, $4, 2, 2, 2, 2, 2, 4, 2)^t$, respectively.
The exceptional values for $\g_{\a}(10)$ are $\a= -2, -1, 0, \frac12$. None of the algebras $\g_{-2}(10)$, $\g_{-1}(10)$, and $\g_{1/2}(10)$ is an Einstein nilradical, as any linear combination of the vectors from $\mathbf{F}$ representing $p$ has some of the coefficients nonpositive (specifically, for $\g_{-2}(10)$, we have $c_{16}^7 + c_{19}^{10} + c_{46}^{10} = 0$, for $\g_{-1}(10)$, $c_{14}^5 + c_{17}^8 = 0$, for $\g_{1/2}(10)$, $c_{14}^5 + c_{16}^7 + c_{34}^7 = 0$). The algebra $\g_{0}(10)$ is an Einstein nilradical, the coefficient vector $c$ can be taken as $\frac{1}{45} (4, 3, 2, 1, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 5, 2, 2, 2, 3)^t$.
The exceptional values for $\g_{\a}(11)$ are $\a= -2, -1/4, 0, 1/2, \a_1, \a_2$, where $\a_1$ is the unique real root of $2 \a^3 + 2 \a^2 + 3$, and $\a_2$ is the unique real root of $4 \a^3 + 8 \a^2 - 8 \a - 21$. All these algebras are Einstein nilradicals: the coefficient vectors $c$ can be taken as
$\frac{1}{220} (22, 12, 15, 12, 1, 8, 8, 1, 1, 8, 8, 8, 22, 14, 12, 17, 15, 8$, $14, 5, 8, 1)^t$, $\frac{1}{55} (2, 3, 2, 3, 2, 2, 2, 2, 2, 2, 1, 4, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 3, 4)^t$, $\frac{1}{55} (2, 2, 2, 2, 2, 4, 1, 3, 2, 2, 1, 4, 4$, $2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2)^t$, $\frac{1}{110} (9, 10, 1, 6, 1, 4, 4, 4, 1, 7, 6, 4, 4, 4, 1, 6, 5, 8, 7, 11, 6, 1)^t$, $\frac{1}{55} (3, 3, 2, 3, 2, 2, 1$, $2, 2, 3, 3, 2, 2, 2, 2, 2, 3, 1, 2, 3, 2, 3, 2, 3)^t$, and $\frac{1}{55} (1, 4, 2, 3, 2, 1, 3, 2, 2, 2, 3, 2, 2, 2, 2, 3, 2, 3, 2, 2, 2, 3, 3, 2)^t$, \linebreak respectively.
\subsection{Algebras of the class $\mathbf{B_2}$} \label{ss:b2n}
In this section, we classify the filiform algebras of the class $B_2$ and show that all of them are Einstein nilradicals, hence proving assertion 2 of Theorem~\ref{t:fili}.
An $n$-dimensional algebra $\n \in B_2$ is defined by the relations \eqref{eq:Brn}, with $r=2$. Any such algebra admits a derivation with the eigenvalues $1,2,\ldots, n-1, n+1$ (the corresponding eigenvectors are the $e_i$'s). We require that $\rk \, \n = 1$, that is, that at least one of the $c_{ij}$'s is nonzero. This excludes the rank-two algebra $\m_1(n)$ given by \eqref{eq:m1}.
We first prove the classification part of assertion 2 of Theorem~\ref{t:fili}. Namely, we show that $B_2$ consists of the six algebras given in Table~\ref{tableb2n}. Each of those algebras has an even dimension $n = 2m$ and is a central extension of some $(n-1)$-dimensional algebra of the class $A_2$ by the cocycle $$ \omega = a_2 e_2^* \wedge e_{n-1}^* + a_3 e_3^* \wedge e_{n-2}^* + \ldots + a_m e_m^* \wedge e_{m+1}^*. $$
\begin{table}[h] \setlength{\extrarowheight}{2pt} \begin{center}
\begin{tabular}{|m{1.25cm}|m{3.2cm}|m{9.7cm}|} \hline Algebra & Extension of & {\centering Relations}\\ \hline $\b(6)$ & $\m_2(5)$ & relations for $\m_2(5)$ and $[e_i, e_{7-i}] = (-1)^i e_6,
i=2,3$\\ \hline $\b(8)$ & $\g_{-5/2}(7)$ & relations for $\g_{-5/2}(7)$ and $[e_i, e_{9-i}] = (-1)^i e_8,
i=2,3,4$\\ \hline $\b_1(10)$ & $\g_{-1}(9)$ & relations for $\g_{-1}(9)$ and $[e_i, e_{11-i}] = (-1)^i e_{10},
i=2,3,4,5$\\ \hline $\b_2(10)$ & $\g_{-3}(9)$ & relations for $\g_{-3}(9)$ and $[e_i, e_{11-i}] = (-1)^i e_{10},
i=2,3,4,5$\\ \hline $\b_{\pm}(12)$ & $\g_{\a}(11), \; \a=\frac{-4 \pm \sqrt{10}}{2}$ & relations for $\g_{\frac{-4 \pm \sqrt{10}}{2}}(11)$ and $[e_i, e_{13-i}] = (-1)^i e_{12},
i=2,\ldots,6$\\ \hline \end{tabular} \caption{Algebras of the class $B_2$.}\label{tableb2n} \end{center} \end{table}
\begin{proof}[Proof of the classification] Let $\n$ be a filiform algebra of dimension $n$ from the class $B_2$. Then $\n$ is of rank one and admits a derivation with the eigenvalues $1,2,\ldots, n-1, n+1$. Let $\{e_i\}$ be the corresponding basis of eigenvectors. Then $\z(\n)$, the center of $\n$, is $\mathbb{R} e_n$, and the quotient algebra $\n'=\n/\z(\n)$ is a filiform $(n-1)$-dimensional algebra admitting a derivation with the eigenvalues $1,2, \ldots, n-1$. The corresponding eigenvectors are the images of the vectors $e_i, \, i=1, \ldots, n-1$, under the natural projection $\pi:\n \to \n'$. With a slight abuse of notation, we will still denote them by $e_i$.
The algebra $\n'$ is either isomorphic to $\m_0$, or is one of the algebras from $A_2$. Up to scaling, we can assume that $[e_1, e_i]=e_{i+1}$, for all $i=2, \ldots, n-2$, and $[e_i, e_j] = c_{ij} e_{i+j}$ when $i+j \le n-1$, or zero otherwise.
Given $\n'$, one can construct $\n$ as a central extension of $\n'$, so that $\n = \n' \oplus \mathbb{R} e_n$ (as a linear space), with the Lie brackets given by $[\n, e_n] = 0, \; [X,Y]=[X,Y]_1 + \omega(X, Y) e_n$, for $X, Y \in \n'$, where $[X,Y]_1$ is the bracket of $X$ and $Y$ in $\n'$ and $\omega \in \Lambda^2(\n')$. The Jacobi identity is equivalent to the fact that $\omega$ is a $2$-cocycle, that is, $\sigma_{XYZ}(\omega([X,Y],Z)) = 0$ for any $X, Y, Z \in \n'$, where $\sigma$ is the sum over the cyclic permutations. The fact that $\n$ is filiform implies that $\omega \ne 0$. Clearly, proportional $\omega$'s yield isomorphic algebras.
In order for $\n$ to admit the gradation $1,2, \ldots, n-1, n+1$, the cocycle $\omega$ must be of a very special form, namely $\omega(e_i, e_j) = 0$ for all $i,j =1, \ldots, n-1$, unless $i+j = n+1$. It follows that $\omega= \sum_{i=2}^{n-1} a_i e_i^*\wedge e_{n+1-i}^*$, with $a_{n+1-i} = -a_i$. The cocycle condition with $X=e_1, \, Y=e_i, \, Z=e_{n-i}$, $2 \le i \le n-2$, implies $a_{i+1} = -a_i$. As $\omega \ne 0$, $n$ must be even, and up to scaling, we can take $\omega = \sum_{i=2}^m (-1)^i e_i^*\wedge e_{n+1-i}^*$, where $2m = n$.
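To spell out the last implication (a routine check, included for the reader's convenience): for $X=e_1$, $Y=e_i$, $Z=e_{n-i}$ we have $[e_1,e_i]=e_{i+1}$, $[e_{n-i},e_1]=-e_{n-i+1}$, and $[e_i,e_{n-i}]=0$ in $\n'$ (as $i+(n-i)=n>n-1$), so the cocycle condition reads
$$ 0=\omega(e_{i+1},e_{n-i})-\omega(e_{n-i+1},e_i)=a_{i+1}-a_{n+1-i}=a_{i+1}+a_i, $$
using $a_{n+1-i}=-a_i$. This is exactly $a_{i+1}=-a_i$.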
For a triple $X=e_i, \, Y=e_j, \, Z= e_k$, the cocycle condition is nontrivial only when $i+j+k = n+1$ and $i, j, k$ are pairwise distinct. In such a case we get \begin{equation}\label{eq:cocycle} c_{ij} (-1)^k + c_{jk} (-1)^i + c_{ki} (-1)^j = 0, \quad \text{for all $i, j, k \ge 2$, with $i+j+k= n+1$.} \end{equation} Now, if $\n' = \m_0$, the condition \eqref{eq:cocycle} is clearly satisfied, and the resulting algebra $\n$ is isomorphic to $\m_1$ given by \eqref{eq:m1}. If not, then $\n' \in A_2$, so it is one of the algebras from Table~\ref{tablea2} or from Table~\ref{tablega}.
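For completeness, here is how \eqref{eq:cocycle} arises (a direct computation under the normalization above): with $\omega = \sum_{i=2}^m (-1)^i e_i^*\wedge e_{n+1-i}^*$ one has $\omega(e_l, e_{n+1-l}) = (-1)^l$ for all $2\le l\le n-1$ (for $l>m$ this follows from antisymmetry, since $(-1)^{n+1-l}=-(-1)^l$ for even $n$). Hence, for $i+j+k=n+1$,
$$ \sigma_{XYZ}\big(\omega([X,Y],Z)\big)= c_{ij}\,\omega(e_{i+j},e_k)+c_{jk}\,\omega(e_{j+k},e_i)+c_{ki}\,\omega(e_{k+i},e_j) =-\big(c_{ij}(-1)^k+c_{jk}(-1)^i+c_{ki}(-1)^j\big), $$
since $\omega(e_{i+j},e_k)=(-1)^{i+j}=(-1)^{n+1-k}=-(-1)^k$, and similarly for the other two terms. Equating this to zero gives \eqref{eq:cocycle}.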
A direct check shows that the only odd-dimensional algebra from Table~\ref{tablea2} satisfying \eqref{eq:cocycle} is the algebra $\m_2(5) \cong \mathcal{V}(5)$. It extends to $\b(6)$.
From among the algebras in Table~\ref{tablega}, only $\g_{-5/2}(7), \; \g_{-1}(9), \; \g_{-3}(9)$ and $\g_{\a}(11),\; \a=\frac{-4 \pm \sqrt10}{2}$ satisfy \eqref{eq:cocycle}. The corresponding algebras of the class $B_2$ are given in Table~\ref{tableb2n}. \end{proof}
All the algebras of the class $B_2$ are Einstein nilradicals. It suffices to produce for each of them a vector $v$ with positive coordinates satisfying \eqref{eq:ut=1}. Such a $v$ can be taken as $\frac{1}{52} (13, 16, 12, 4, 13, 12)^t$, $\frac{1}{221}(44, 11, 33, 79, 17, 48, 17, 21, 17, 17, 78, 17)^t, \frac{1}{29}(2, 5, 4, 4, 4, 2, 4, 2, 4, 5, 4, 4, 4, 3, 4, 4, 3, 4)^t, \frac{1}{29}(3, 5, 3, 3$, \linebreak $3, 3, 5, 3, 3, 3, 3, 3, 5, 4, 1, 1, 3, 2, 6, 4)^t$, $\frac{1}{675} (33, 4, 93, 151, 105, 137, 8, 45, 20, 130, 45, 45, 45, 45, 45, 84, 45$, \linebreak $45, 112, 45, 45, 45, 45, 45, 45, 45, 45, 45, 172, 45)^t$, for the algebras $\b(6), \b(8), \b_1(10)$, $\b_2(10), \b_{\pm}(12)$, respectively.
\end{document}
\begin{document}
\title{Representation of asymptotic values for nonexpansive stochastic control systems \thanks{The work has been supported in part by the NSF of P.R.China (No. 11222110), Shandong Province (No. JQ201202), NSFC-RS (No. 11661130148), 111 Project (No. B12023).}} \author{Juan Li,\,\, Nana Zhao\footnote{Corresponding author}\\ {\small School of Mathematics and Statistics, Shandong University, Weihai, Weihai 264209, P.~R.~China.}\\ {\small{\it E-mails: juanli@sdu.edu.cn, nnz0528@163.com.}} \date{August 02, 2017}} \maketitle \begin{abstract} In ergodic stochastic problems one studies the limit of the value function $V_\lambda$ of the associated discounted cost functional with infinite time horizon as the discount factor $\lambda$ tends to zero. These problems have been well studied in the literature, and the assumptions used there guarantee that the function $\lambda V_\lambda$ converges uniformly to a constant as $\lambda\to 0$. The objective of this work is to study these problems under an assumption, namely the nonexpansivity assumption, under which the limit function is not necessarily constant. Our discussion goes beyond the stochastic control problem with infinite time horizon and also covers the case where $V_\lambda$ is given by a Hamilton-Jacobi-Bellman equation of second order which is not necessarily associated with a stochastic control problem. On the other hand, the stochastic control case generalizes considerably earlier works by considering cost functionals defined through a backward stochastic differential equation with infinite time horizon, and we give an explicit representation formula for the limit of $\lambda V_\lambda$, as $\lambda\to 0$.
\end{abstract}
\noindent \textbf{Keywords.} Stochastic nonexpansivity condition; limit value; BSDE.
\noindent \textbf{AMS Subject classification:} 60H10; 60K35
\section{{\protect \large {Introduction}}} In our paper we study the limit behaviour of the optimal value of a discounted cost functional with infinite time horizon as the discount factor $\lambda>0$ tends to zero. For this we consider a stochastic control system given by the controlled stochastic equation \begin{equation}\label{0} dX_t^{x,u}=b(X_t^{x,u},u_t)dt+\sigma(X_t^{x,u},u_t)dW_t,\, t\ge 0,\ \ X_0^{x,u}=x\in \mathbb{R}^N, \end{equation}
\noindent driven by a Brownian motion $W$ and an admissible control $u\in{\cal U}$, i.e., a control process $u$ which is adapted with respect to the filtration $\mathbf{F}=({\cal F}_t)_{t\ge 0}$ generated by $W$ and completed by all null sets. As we are interested in the limit behaviour of the controlled system as $t\rightarrow +\infty$, we have to add to the usual Lipschitz and growth conditions on the coefficients $\sigma$ and $b$ also assumptions guaranteeing that the process $X^{x,u}$ takes all its values in a compact set $\overline{\theta}\subset \mathbb{R}^N$, for all $x\in\overline{\theta}$ and all $u\in{\cal U}$. The cost functional $\overline{Y}_0^{\lambda,x,u}$ associated with the dynamics $X^{x,u}$ is defined through a backward stochastic differential equation (BSDE) on the infinite time interval $[0,+\infty)$:
\begin{equation}\label{1} \overline{Y}^{\lambda,x,u}_t=\overline{Y}^{\lambda,x,u}_T+\int_t^T(\psi(X_s^{x,u}, \overline{Z}^{\lambda,x,u}_s,u_s)-\lambda \overline{Y}^{\lambda,x,u}_s)ds-\int_t^T\overline{Z}^{\lambda,x,u}_sdW_s,\, 0\le t\le T<+\infty,\end{equation}
\noindent and we define the value function \begin{equation}\label{3} V_\lambda(x):=\inf_{u\in{\cal U}}\overline{Y}_0^{\lambda,x,u}.\end{equation}
We remark that, if $\psi(x,z,u)$ doesn't depend on $z$, we get the cost functional considered in \cite{Buckdahn 2013}:
\begin{equation}\label{2} \overline{Y}_0^{\lambda,x,u}=E\left[\int_0^\infty e^{-\lambda t}\psi(X_t^{x,u},u_t)dt\right].\end{equation}
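Indeed (a standard verification sketched here for the reader's convenience; it is not needed later): if $\psi$ does not depend on $z$, the process
$$ \overline{Y}^{\lambda,x,u}_t=E\Big[\int_t^\infty e^{-\lambda (s-t)}\psi(X_s^{x,u},u_s)ds\,\Big|\,{\cal F}_t\Big],\quad t\ge 0, $$
is bounded by $M/\lambda$ (with $M$ a bound for $|\psi|$), and an application of It\^{o}'s formula together with the martingale representation theorem shows that it satisfies (\ref{1}) with a suitable process $\overline{Z}^{\lambda,x,u}$; taking $t=0$ yields (\ref{2}).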
\noindent However, since the pioneering work by Pardoux and Peng \cite{peng1990} on BSDEs in 1990 and its extensions by Darling and Pardoux \cite{Darling 1997} and by Peng \cite{S. Peng 1991}, and in particular since the works by Peng \cite{P 1992}, \cite{Peng 1997} on BSDE methods in stochastic control, it has become usual to study stochastic control systems whose cost functionals are defined through a BSDE. As concerns BSDEs with infinite time horizon, Chen \cite{Chen 1992} was the first to study such equations on an unbounded random time interval; Hamad\`{e}ne, Lepeltier and Wu \cite{Wu 1999} studied reflected BSDEs with one reflecting barrier and with infinite time horizon. Moreover, Briand and Hu \cite{Hu 1998} and Royer \cite{Royer 2004} generalized the existence results for BSDEs with unbounded random terminal time.
In our paper we begin our studies with the above infinite time horizon BSDE (\ref{1}), for which we use techniques developed by Debussche, Hu and Tessitore \cite{Debussche 2011}, and we provide new estimates. Let us point out that in \cite{Debussche 2011} the authors studied ergodic BSDEs, first introduced by Fuhrman, Hu and Tessitore \cite{Fuhrman 2009}; there the constant $\lambda$ is a part of the solution. Our BSDE differs from theirs, since its driving coefficient $\psi$ depends also on the control process, and its study differs as well: we are not interested in the ergodic case, but study the limit behaviour of the value function $\lambda V_\lambda$ as $\lambda\rightarrow 0$ under assumptions which do not imply that the limit function is a constant.
The limit problem for deterministic and stochastic control systems has been studied by different authors. Quincampoix and Renault \cite{Quincampoix 2011} studied a deterministic control problem with infinite time horizon and investigated the limit behaviour of the discounted value function when the discount factor tends to zero. For this they used a so-called nonexpansivity condition, and they gave, in particular, examples which show that, unlike in the ergodic case, the limit value function can depend on the initial state $x$. In Buckdahn, Goreac and Quincampoix \cite{Buckdahn 2013} these studies are extended to stochastic control problems with value functions of the form (\ref{2}) (Abel mean), but also of Ces\'{a}ro type. In \cite{Marc 2015}, for the case of deterministic controls, Cannarsa and Quincampoix extend these approaches by using a measurable viability theorem of Frankowska, Plaskacz and Rzezuchowski \cite{Frankowska 1995}: they characterize $V_\lambda$ as a constrained viscosity solution of an associated Hamilton-Jacobi equation, and they study the limit problem.
The studies in our paper are heavily inspired by \cite{Buckdahn 2013} and \cite{Frankowska 1995}. The key assumption in \cite{Buckdahn 2013}, which allows one to take the limit of the classical value function $\lambda V_\lambda$ (see (\ref{3})) as $\lambda\rightarrow 0$, is the nonexpansivity condition. However, as we generalize the cost functional by defining it through an infinite time horizon BSDE, we also have to extend the nonexpansivity assumption to the more general case we investigate (see our assumption (\ref{r49}) in Section 2). This extension is nontrivial: it makes the assumption stable under the Girsanov transformations we have to work with; on the other hand, our condition coincides with that given in \cite{Buckdahn 2013} if $\psi$ is independent of $z$. Under our nonexpansivity condition we show that the family of functions $\{\lambda V_\lambda\}$ is equi-continuous and equi-bounded on $\overline{\theta}$. Hence, due to the Arzel\`{a}-Ascoli Theorem, as $\lambda \rightarrow 0$, $\{\lambda V_\lambda\}$ has an accumulation point in the space of continuous functions over $\overline{\theta}$ endowed with the supremum norm.
The main objective of our paper is to get the existence of the limit, i.e., the uniqueness of this accumulation point, and to characterize the limit function $w_0=\lim_{\lambda\rightarrow 0}\lambda {V}_\lambda$. In our approach PDE methods play a central role. We recall that the PDE approach for the study of the limit behaviour for solutions of Hamilton-Jacobi equations with coercitive Hamiltonian essentially originates from Lions, Papanicolaou and Varadhan \cite{P.-L. Lions}. This work was extended by Arisawa \cite{M. Arisawa 1998} for the deterministic control setting and by Arisawa and Lions \cite{Arisawa Lions 1998} to the stochastic control framework. For subsequent works and extensions the reader is referred to \cite{Artstein 2000}, \cite{Quincampoix 2011} for the deterministic control case, and to \cite{G. K. Basak 1997}, \cite{V. Borkar 2007}, \cite{R. Buckdahn 2005}, \cite{A. Richou 2009} and the references therein for the stochastic framework. But all these approaches were made in the ergodic case, under suitable assumptions guaranteeing that the limit value is independent of the initial data.
In our paper we too use a PDE approach. To this end we characterize $V_\lambda$ as a constrained viscosity solution of the associated Hamilton-Jacobi-Bellman (HJB) equation
\centerline{$\lambda V_\lambda(x)+H(x,DV_\lambda(x),D^2V_\lambda(x))=0,\, \, x\in\theta,$}
\centerline{$\lambda V_\lambda(x)+H(x,DV_\lambda(x),D^2V_\lambda(x))\ge 0,\, x\in\partial\theta,$}
\noindent (see Section 3). Avoiding assumptions which lead to the ergodic case, we suppose that the Hamiltonian $H$ satisfies a radial monotonicity condition
$$H(x,lp,lA)\le H(x,p,A),\, l\ge 1,\, (p,A)\in \mathbb{R}^N\times {\cal S}^N,$$
\noindent where ${\cal S}^N$ denotes the set of symmetric $N\times N$ matrices. This condition was introduced in \cite{Marc 2015}, and it guarantees the monotone and uniform convergence of $\lambda V_\lambda$, as $\lambda\rightarrow 0.$ As this convergence result for the constrained solution $V_\lambda$ of the above HJB equation is not directly related to the characterization of $V_\lambda$ as the value function of our stochastic control problem, we extend our discussion, by using Katsoulakis' comparison results \cite{Katsoulakis 1994} for constrained solutions of PDEs, to more general Hamiltonians which are not necessarily related to a stochastic control problem, but which satisfy the radial monotonicity condition. For this general case we characterize the limit $w_0=\lim_{\lambda\rightarrow 0}\lambda V_\lambda$ as the maximal viscosity subsolution of some limit HJB equation (Theorem \ref{the:3.4}). More precisely, we prove that $$w_0(x)=\mathop{\rm sup}\{w(x):\, w\in \mbox{Lip}_{M_0}(\overline{\theta}),\, w+\overline{H}(x,Dw,D^2w)\le 0\mbox{ on }\theta \mbox{ in viscosity sense}\}$$
\noindent for $x\in\overline{\theta}$, where $\overline{H}(x,p,A)=\min\left\{M_0,\mathop{\rm sup}_{l>0}H(x,lp,lA)\right\}$ (for details, see Theorem \ref{the:3.4}).
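\noindent A toy example may help to fix ideas (our illustration, not taken from the references): for $H(x,p,A)=-|p|+h(x)$ with $h$ continuous and $0\le h\le M_0$, the radial monotonicity is obvious, since $-l|p|+h(x)\le -|p|+h(x)$ for $l\ge 1$, and
$$ \mathop{\rm sup}_{l>0}H(x,lp,lA)=\mathop{\rm sup}_{l>0}\big(-l|p|+h(x)\big)=h(x),\qquad\mbox{so}\qquad \overline{H}(x,p,A)=h(x), $$
independently of $(p,A)$; the subsolution constraint then reduces to the pointwise inequality $w\le -h$ on $\theta$.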
\noindent Afterwards, coming back to the special case where $V_\lambda$ is the value function of our stochastic control problem, we characterize the limit function $w_0=\lim_{\lambda\rightarrow 0}\lambda V_\lambda$ as a viscosity solution by passing to the limit in the HJB equation associated with $V_\lambda$. For the special case $\psi(x,z,u)=\psi_1(x,u)+g(z)$ we give an explicit representation of $w_0$ (see Theorem \ref{th:4.2}) using Peng's notion of $g$-expectation $\varepsilon^g[\cdot]$ (\cite{peng}); it is a nonlinear expectation introduced through a BSDE with driving coefficient $g$. More precisely, we show that $$ w_0(x)=\inf_{t\ge 0,u\in{\cal U}}\varepsilon^g [\min_{v\in U}\psi(X_t^{x,u},0,v) ],\, x\in \overline{\theta}.$$
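\noindent As a simple consistency check (our remark): if $g\equiv 0$, so that $\psi(x,z,u)=\psi_1(x,u)$, the $g$-expectation $\varepsilon^g[\cdot]$ reduces to the classical expectation $E[\cdot]$, and the above formula becomes
$$ w_0(x)=\inf_{t\ge 0,u\in{\cal U}}E\big[\min_{v\in U}\psi_1(X_t^{x,u},v)\big],\quad x\in\overline{\theta}, $$
which is the type of representation obtained in \cite{Buckdahn 2013} for cost functionals of the form (\ref{2}).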
Our paper is organized as follows. In Section 2 we present the basic assumptions on the coefficient functions $b, \sigma, \psi$, we define the value function $V_\lambda(x)$, and we prove the existence and the uniqueness of the solution of the BSDEs on the infinite time interval $[0,\infty)$ (Proposition \ref{th:2.4}). We introduce the stochastic nonexpansivity condition and show that the nonexpansivity condition combined with standard assumptions implies the stochastic nonexpansivity condition (Proposition \ref{p:2.1}). A consequence is that the family of functions $\{\lambda V_\lambda\}_{\lambda>0}$ is equicontinuous and equibounded on $\overline{\theta}$ (Lemma \ref{lem:2.6}). In Section 3 we first define the constrained viscosity solution of general HJB equations which are not necessarily related to a stochastic control problem, and then we show in this general framework that $\lambda V_\lambda$ is monotone and converges uniformly to some limit $w_0$ as $\lambda\rightarrow 0$ (Theorem \ref{th:3.3}). Moreover, we give an explicit representation of $w_0(x)$ (Theorem \ref{the:3.4}). In Section 4 we consider the Hamiltonian $H$ related to the stochastic control problem, and we characterize $V_\lambda$ as the unique viscosity solution on $\overline{\theta}$ of the associated HJB equation (Proposition \ref{th:3.3.1} and Proposition \ref{th:3.2}). For the convenience of the reader, we give the proof of the dynamic programming principle (DPP) in the Appendix. Moreover, still in the stochastic control case, the HJB equation satisfied by $w_0(x)$ (Theorem \ref{th:4.1}) is studied and an explicit formula for $w_0(x)$ (Theorem \ref{th:4.2}) is given with the help of the $g$-expectation, a nonlinear expectation introduced by Peng in \cite{peng}.
\section{ {\protect \large Preliminaries}}
Let $\{W_t\}_{t\geq0}$ be a standard $d$-dimensional Brownian motion defined on a complete probability space $(\Omega,\mathcal{F},\mathbb{P})$. Let $\mathbb{F}=\{\mathcal{F}_t\}_{t\geq0}$ be the filtration generated by $\{W_t\}_{t\geq0}$, and augmented by all $\mathbb{P}$-null sets. We put $\mathcal{F}_\infty=\bigvee\limits_{t\geq0}\mathcal{F}_t$. For any $N\geq1$, $|x|$ denotes the Euclidean norm of $x\in\mathbb{R}^N$ and $\langle\cdot,\cdot\rangle$ denotes the Euclidean scalar product. We introduce the following spaces of stochastic processes: \begin{equation*} \begin{split}
&S_{\mathbb{F}}^2(\mathbb{R}):=\Big\{(\phi_t)_{0\leq t< \infty}\ \text{real-valued continuous}\ \mathbb {F}\text{-adapted process}: \mathbb{E}[\mathop{\rm sup}\limits_{t\in[0,\infty)}|\phi_t|^2] <\infty\Big\};\\
&\mathcal{H}_{\mathbb{F}}^2(\mathbb{R}^{d}):= \Big\{(\phi_t)_{0\leq t< \infty}\ \mathbb{R}^{d}\text{-valued}\ \mathbb{F}\text{-progressively measurable process}: \mathbb{E}[\int_0^\infty|\phi_t|^2dt] <\infty\Big\};\\
&\mathcal{H}_{\mathbb{F}}^{2,-2\lambda}(0,T;\mathbb{R}^d):=\{(\phi_t)_{0\leq t\leq T}\ \mathbb{R}^d\text{-valued}\ \mathbb{F}\text{-progressively measurable process}: \\
&\qquad\qquad\qquad\qquad\quad \mathbb{E}[\int_0^T\exp(-2\lambda t)|\phi_t|^2dt]<\infty\};\\
&L_{\mathbb{F}}^\infty(0,\infty;\mathbb{R}^d):=\{(\phi_t)_{0\leq t< \infty}\ \mathbb{R}^{d}\text{-valued}\ \mathbb {F}\text{-adapted\ essentially\ bounded\ process}\};\\
&L^2(\mathcal{F}_\infty;\mathbb{R}):=\Big\{\xi\ \text{real-valued}\ \mathcal{F}_\infty \text{-measurable random variable}:\mathbb{E}[|\xi|^2]<\infty\Big\}.
\end{split} \end{equation*}
We suppose that $(U,d)$ is a compact metric space, $U$ is our control state space, and $\mathcal{U}=L_{\mathbb{F}}^\infty(0,\infty;U)$ is the space of all admissible control processes, i.e., the set of all $U$-valued $\mathbb{F}$-adapted processes. Let us consider functions $b:\mathbb{R}^N\times U \rightarrow \mathbb{R}^N$ and $\sigma:\mathbb{R}^N\times U\rightarrow\mathbb{R}^{N\times d}$ satisfying the following standard continuity and Lipschitz conditions: \begin{equation*}\label{r1}
\left\{ \begin{array}{llll} \mbox{(Hi)}\ b,\ \sigma\ \mbox{are\ uniformly\ continuous\ on}\ \mathbb{R}^N\times U,\\ \mbox{(Hii)}\ \text{There\ exists\ a\ constant}\ c>0\ \mbox{such\ that} \\
\ \ \ |b(x,u)-b(x',u)|+|\sigma(x,u)-\sigma(x',u)| \leq c|x-x'|,\ \mbox{for\ all}\ x,\ x'\in\mathbb{R}^N,\ u\in U,\\ \tag{H1}
\ \ \ |b(x,u)|+|\sigma(x,u)|\leq c(1+|x|),\ \mbox{for\ all}\ x\in\mathbb{R}^N,\ u\in U. \end{array} \right. \end{equation*} \begin{lemma}\label{l:2.1} Under our standard assumptions (H1), for every control $u\in\mathcal{U}$, the controlled stochastic system \begin{equation}\label{r2}
\left\{ \begin{array}{ll} dX_t^{x,u}=b(X_t^{x,u},u_t)dt+\sigma(X_t^{x,u},u_t)dW_t, \ \ t\geq 0, \\ X_0^{x,u}=x\in\mathbb{R}^N, \end{array} \right. \end{equation} has a unique $\mathbb{R}^N$-valued continuous, $\mathbb{F}$-adapted solution $X^{x,u}=(X^{x,u}_t)_{t\geq0}$. Moreover, for all $T>0$, and $k\geq2$, there is a constant $C_k(T)>0$ such that \begin{equation*} \begin{split}
&\mathbb{E}[\mathop{\rm sup}\limits_{0\leq s\leq t}|X_s^{x,u}|^k]\leq C_k(T)(1+|x|^k),\\
&\mathbb{E}[\mathop{\rm sup}\limits_{0\leq s\leq t}|X_s^{x,u}-X_s^{x',u}|^k]\leq C_k(T)|x-x'|^k,\ t\in[0,T], x,\ x'\in\mathbb{R}^N, u\in\mathcal{U}. \end{split} \end{equation*} \end{lemma} The above result on SDEs is by now well known; for its proof the reader can refer to Ikeda, Watanabe \cite[pp.166-168]{Ikeda 1989} or Karatzas, Shreve \cite[pp.289-290]{Karatzas 1987}.
We suppose that there exists a non-empty open set $\theta\subset \mathbb{R}^N$ with compact closure $\overline{\theta}$ such that $\overline{\theta}$ is invariant with respect to the control system (\ref{r2}). Recall that the invariance of $\overline{\theta}$ means that, for every control process $u\in\mathcal{U}$ and every $x\in\overline{\theta}$, we have $X_t^{x,u}\in\overline{\theta}$, for all $t\geq0$, $\mathbb{P}$-a.s.
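A simple example of such an invariant set (our illustration; it is not used in the sequel): let $N=d=1$, $U=[0,1]$, $b(x,u)=-xu$ and $\sigma(x,u)=u(1-x^2)$, and consider
$$ dX_t=-X_tu_tdt+u_t(1-X_t^2)dW_t,\qquad X_0=x\in[-1,1]. $$
Here $\sigma(\pm1,u)=0$, while $b(1,u)=-u\le 0$ and $b(-1,u)=u\ge 0$, i.e., the diffusion degenerates at the endpoints and the drift points inward there; one can check (e.g., by a comparison argument) that $\overline{\theta}=[-1,1]$ is invariant.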
Given now a function $\psi:\mathbb{R}^N\times \mathbb{R}^d\times U\rightarrow \mathbb{R}$, for any $\lambda>0$, we consider the following BSDE on the infinite time interval $[0,\infty)$: \begin{equation}\label{r40}
\overline{Y}_t^{\lambda,x,u}=\overline{Y}_T^{\lambda,x,u}+\int_t^T(\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\lambda \overline{Y}_s^{\lambda,x,u})ds-\int_t^T\overline{Z}_s^{\lambda,x,u}dW_s,\ 0\leq t\leq T<\infty. \end{equation} \begin{definition} A couple of processes $(\overline{Y}^{\lambda,x,u},\overline{Z}^{\lambda,x,u})$ is called a solution of BSDE (\ref{r40}) on the infinite time interval if $(\overline{Y}^{\lambda,x,u},\overline{Z}^{\lambda,x,u})$ satisfies the equation (\ref{r40}), and $\overline{Y}^{\lambda,x,u}=(\overline{Y}^{\lambda,x,u}_t)_{t\geq0}\in\mathcal{S}_{\mathbb{F}}^2(\mathbb{R})$ is bounded by some constant $\widetilde{M}$ and $\overline{Z}^{\lambda,x,u}=(\overline{Z}^{\lambda,x,u}_t)_{t\geq0}$ is in the space $$\mathcal{H}_{loc}^2(\mathbb{R}^d)=\{(\phi_t)_{0\leq t<\infty}: \mathbb{R}^d\mbox{-valued}\ \mathbb{F}\mbox{-progressively measurable},\
\displaystyle\mathbb{E}[\int_0^T|\phi_t|^2dt]<+\infty,\ 0\leq T<\infty\}.$$ \end{definition}
We suppose that $\psi:\mathbb{R}^N\times \mathbb{R}^d\times U\rightarrow \mathbb{R}$ satisfies the following conditions: \begin{equation*}\label{r48}
\left\{ \begin{array}{llll} \mbox{(Hiii)}\ \psi\ \text{is\ continuous\ on}\ \mathbb{R}^N\times\mathbb{R}^d\times U;\\ \mbox{(Hiv)}\ \text{There\ exist\ nonnegative\ constants}\ K_x, K_z\ \mbox{and}\ M \ \text{such\ that}\\
\ \ \ |\psi(x,z,u)-\psi(x',z',u)|\leq K_x|x-x'|+K_z|z-z'|,\\
\ \ \ |\psi(x,0,u)|\leq M, \ \ (x,x',z,z',u)\in \mathbb{R}^{2N}\times\mathbb{R}^{2d}\times U.\tag{H2} \end{array} \right. \end{equation*} The following proposition will be used frequently in what follows. We adapt the proof from \cite{Debussche 2011}, and also prove new estimates. \begin{proposition}\label{th:2.4} Under the assumptions (\ref{r1}) and (\ref{r48}), BSDE (\ref{r40}) on the infinite time interval $[0,\infty)$ has a unique solution $(\overline{Y}^{\lambda,x,u},\overline{Z}^{\lambda,x,u})\in L^\infty _{\mathbb{F}}(0,\infty;\mathbb{R})\times\mathcal{H}^2_{loc}(\mathbb{R}^d)$. Moreover, we have \begin{equation*}
|\overline{Y}_t^{\lambda,x,u}|\leq\frac{M}{\lambda},\ t\geq0,\ \text{and}\ \mathbb{E}[\int_0^\infty|e^{-\lambda t}\overline{Z}_t^{\lambda,x,u}|^2dt]\leq2(\frac{M}{\lambda})^2(2+\frac{K_z^2}{\lambda}). \end{equation*} \end{proposition} \begin{proof}
\textbf{Uniqueness}. Let $x\in\mathbb{R}^N$ and $u\in \mathcal{U}$ be arbitrarily given. Suppose that $(\overline{Y}_t^{1,\lambda,x,u},\overline{Z}_t^{1,\lambda,x,u})_{t\geq0}$ and $(\overline{Y}_t^{2,\lambda,x,u},\overline{Z}_t^{2,\lambda,x,u})_{t\geq0}$ are two solutions of BSDE (\ref{r40}) such that $\overline{Y}^{1,\lambda,x,u}, \overline{Y}^{2,\lambda,x,u}$ are continuous and bounded and $\overline{Z}^{1,\lambda,x,u}, \overline{Z}^{2,\lambda,x,u}\in \mathcal{H}^2_{loc}(\mathbb{R}^d)$. Let us set $\widehat{Y}_t=\overline{Y}_t^{1,\lambda,x,u}-\overline{Y}_t^{2,\lambda,x,u}$ and $\widehat{Z}_t=\overline{Z}_t^{1,\lambda,x,u}-\overline{Z}_t^{2,\lambda,x,u}$, $t\geq0$. Then, $\widehat{Y}$ is continuous and $|\widehat{Y}|\leq\overline{M}$, for some constant $\overline{M}$. We define \begin{equation*}
\gamma_s=\left\{
\begin{array}{lll} &\frac{\psi(X_s^{x,u},\overline{Z}_s^{1,\lambda,x,u},u_s)-\psi(X_s^{x,u},\overline{Z}_s^{2,\lambda,x,u},u_s)}{|\widehat{Z}_s|^2}(\widehat{Z}_s)^*,\ \mbox{if}\ \widehat{Z}_s\neq 0;\\ & 0, \ \mbox{otherwise}, \end{array}\right. \end{equation*}
and we notice that $|\gamma_s|\leq K_z,\ s\geq0$. Let $0\leq T<\infty$ be arbitrarily fixed. We define the probability $\mathbb{P}_T^\gamma$ on $(\Omega,\mathcal{F})$ by setting \begin{equation*}
\frac{d\mathbb{P}_T^\gamma}{d\mathbb{P}}=\exp\{\int_0^T\gamma_sdW_s-\frac{1}{2}\int_0^T|\gamma_s|^2ds\}. \end{equation*} Then, from Girsanov's theorem, \begin{equation*} \begin{split}
\widehat{Y}_t=&\widehat{Y}_T+\int_t^T(\psi(X_s^{x,u},\overline{Z}_s^{1,\lambda,x,u},u_s)-\psi(X_s^{x,u},\overline{Z}_s^{2,\lambda,x,u},u_s))ds-\lambda\int_t^T\widehat{Y}_sds-\int_t^T\widehat{Z}_sdW_s\\
=& \widehat{Y}_T-\lambda\int_t^T\widehat{Y}_sds-\int_t^T\widehat{Z}_s(dW_s-\gamma_sds)\\
=&\widehat{Y}_T-\lambda\int_t^T\widehat{Y}_sds-\int_t^T\widehat{Z}_sdW^{\gamma,T}_s,\ t\in[0,T],
\end{split} \end{equation*} where $\displaystyle W^{\gamma,T}_t=W_t-\int_0^t\gamma_sds,\ t\in[0,T],$ is an $(\mathbb{F},\mathbb{P}_T^\gamma)$-Brownian motion. Applying It\^{o}'s formula to $e^{-\lambda s}\widehat{Y}_s$, we get \begin{equation*}
e^{-\lambda T}\widehat{Y}_T-e^{-\lambda t}\widehat{Y}_t=\int_t^Te^{-\lambda s}\widehat{Z}_sdW_s^{\gamma,T},\ t\in[0,T]. \end{equation*}
From standard estimates we see that $\displaystyle (\int_0^te^{-\lambda s}\widehat{Z}_sdW_s^{\gamma,T})_{t\in[0,T]}$ is an $(\mathbb{F},\mathbb{P}_T^\gamma)$-martingale. Thus, denoting by $\mathbb{E}^\gamma_T[\cdot\big|\mathcal{F}_t]$ the conditional expectation under $\mathbb{P}_T^\gamma$, it follows that \begin{equation*}
\widehat{Y}_t=\mathbb{E}^\gamma_T[\widehat{Y}_t\big|\mathcal{F}_t]=\mathbb{E}_T^\gamma[e^{-\lambda (T-t)}\widehat{Y}_T\big|\mathcal{F}_t]-\mathbb{E}_T^\gamma[\int_t^Te^{-\lambda (s-t)}\widehat{Z}_sdW_s^{\gamma,T}\big|\mathcal{F}_t]=\mathbb{E}_T^\gamma[e^{-\lambda (T-t)}\widehat{Y}_T\big|\mathcal{F}_t],\ t\in[0,T]. \end{equation*}
Recall that $|\widehat{Y}_s|\leq\overline{M},\ s\geq0$. Hence, $|\widehat{Y}_t|\leq e^{-\lambda(T-t)}\overline{M},\ 0\leq t\leq T<\infty$. Finally, letting $T$ tend to infinity, we obtain that, for any $t\geq0$, $\widehat{Y}_t=0,\ \mathbb{P}\text{-}a.s.$, i.e., $\overline{Y}_t^{1,\lambda,x,u}=\overline{Y}_t^{2,\lambda,x,u},\ \text{for\ all}\ t\geq0,\ \mathbb{P}\text{-}a.s.$
\textbf{Existence.} For arbitrarily given $x\in\mathbb{R}^N, u\in \mathcal{U}\ \mbox{and}\ n\geq1$, we define $(\overline{Y}_t^{n,\lambda,x,u},\overline{Z}_t^{n,\lambda,x,u})_{t\geq 0}\in\mathcal{S}_{\mathbb{F}}^2([0,n];\mathbb{R})\times\mathcal{H}_{\mathbb{F}}^2([0,n];\mathbb{R}^d)$ as the unique solution of the following BSDE: \begin{equation}\label{r80}
\overline{Y}_t^{n,\lambda,x,u}=\int_t^n(\psi(X_s^{x,u},\overline{Z}_s^{n,\lambda,x,u},u_s)-\lambda\overline{Y}_s^{n,\lambda,x,u})ds-\int_t^n\overline{Z}_s^{n,\lambda,x,u}dW_s,\ t\in[0,n]. \end{equation} Then, from a classical result for BSDEs we get the existence and the uniqueness of the solution $(\overline{Y}^{n,\lambda,x,u},\overline{Z}^{n,\lambda,x,u})$ under the assumption (H2). Now we will give the proof in four steps.\\ \textbf{Step 1.} $(\overline{Y}_t^{n,\lambda,x,u})_{t\in[0,n]}$ is bounded, uniformly with respect to $n$.\\ \indent Indeed, by introducing the ${\mathbb{F}}$-adapted process \begin{equation*}
\gamma_s^n=\left\{
\begin{array}{lll} &\frac{\psi(X_s^{x,u},\overline{Z}_s^{n,\lambda,x,u},u_s)-\psi(X_s^{x,u},0,u_s)}{|\overline{Z}_s^{n,\lambda,x,u}|^2}(\overline{Z}_s^{n,\lambda,x,u})^*, \ \mbox{if}\ \overline{Z}_s^{n,\lambda,x,u}\neq 0;\\ & 0, \ \mbox{otherwise},\ s\in[0,n], \end{array}\right. \end{equation*} the above BSDE takes the form \begin{equation*}
\overline{Y}_t^{n,\lambda,x,u}=\int_t^n(\psi(X_s^{x,u},0,u_s)-\lambda\overline{Y}_s^{n,\lambda,x,u})ds-\int_t^n\overline{Z}_s^{n,\lambda,x,u}(dW_s-\gamma_s^nds), \ t\in[0,n]. \end{equation*}
As $|\gamma_s^n|\leq K_z,\ s\in[0,n],\ n\geq1,$ we know from Girsanov's theorem that $\displaystyle W_t^n=W_t-\int_0^t\gamma_s^nds,\ t\in[0,n],$ is a Brownian motion under $\displaystyle d\mathbb{P}^n=\exp\{\int_0^n\gamma_s^ndW_s-\frac{1}{2}\int_0^n|\gamma_s^n|^2ds\}d\mathbb{P}$.
Consequently, applying It\^{o}'s formula to $e^{-\lambda t}\overline{Y}^{n,\lambda,x,u}_t,\ t\in[0,n]$, and taking the conditional expectation $\mathbb{E}^n[\cdot\big|\mathcal{F}_t]$ with respect to $\mathbb{P}^n$, we obtain \begin{equation*}
\overline{Y}_t^{n,\lambda,x,u}=\mathbb{E}^n[\int_t^ ne^{-\lambda(s-t)}\psi(X_s^{x,u},0,u_s)ds\big|\mathcal{F}_t],\ t\in[0,n]. \end{equation*}
Finally, as $|\psi(x',0,u')|\leq M,\ (x',u')\in\mathbb{R}^N\times U$, it follows that \begin{equation*}
|\overline{Y}_t^{n,\lambda,x,u}|\leq M\int_t^ne^{-\lambda(s-t)}ds\leq\frac{M}{\lambda},\ t\in[0,n],\ n\geq1. \end{equation*} We now turn to the second step.\\ \textbf{Step 2.} The sequence $(\overline{Y}_t^{n,\lambda,x,u})_{t\in[0,n]},\ n\geq1,$ converges uniformly on compacts, $\mathbb{P}$-a.s., as $n\rightarrow\infty$.
For $n,\ m\geq1$ with $n\geq m$, we define\\ $$ \gamma_s^{n,m}=\frac{\psi(X_s^{x,u},\overline{Z}_s^{n,\lambda,x,u},u_s)-\psi(X_s^{x,u},\overline{Z}_s^{m,\lambda,x,u},u_s)}
{|\overline{Z}_s^{n,\lambda,x,u}-\overline{Z}_s^{m,\lambda,x,u}|^2}(\overline{Z}_s^{n,\lambda,x,u}-\overline{Z}_s^{m,\lambda,x,u})^*,\ \mbox{if}\ \overline{Z}_s^{n,\lambda,x,u}\neq\overline{Z}_s^{m,\lambda,x,u};$$
\noindent and $\gamma_s^{n,m}=0$, otherwise, $s\in[0,m]$. As $|\gamma_s^{n,m}|\leq K_z,\ s\in[0,m]$, we can use Girsanov's theorem to introduce the probability measure $\displaystyle d\mathbb{P}^{n,m}=\exp\{\int_0^m\gamma_s^{n,m}dW_s-\frac{1}{2}\int_0^m|\gamma_s^{n,m}|^2ds\}d\mathbb{P},$ under which $\displaystyle W_t^{n,m}=W_t-\int_0^t\gamma_s^{n,m}ds,\ t\in[0,m],$ is an $\mathbb{F}$-Brownian motion. From (\ref{r80}) we have \begin{equation*}
\overline{Y}_t^{n,\lambda,x,u}-\overline{Y}_t^{m,\lambda,x,u}=e^{-\lambda(m-t)}\overline{Y}_m^{n,\lambda,x,u}+\int_t^me^{-\lambda (s-t)}(\overline{Z}_s^{n,\lambda,x,u}-\overline{Z}_s^{m,\lambda,x,u})dW_s^{n,m},\ t\in[0,m]. \end{equation*}
Consequently, considering that $|\overline{Y}_m^{n,\lambda,x,u}|\leq\frac{M}{\lambda}$ (Step 1), by taking the conditional expectation under $\mathbb{P}^{n,m}$, we get \begin{equation*}
|\overline{Y}_t^{n,\lambda,x,u}-\overline{Y}_t^{m,\lambda,x,u}|=|\mathbb{E}^{n,m}[e^{-\lambda(m-t)}\overline{Y}_m^{n,\lambda,x,u}\big|\mathcal{F}_t]|\leq\frac{M}{\lambda}e^{-\lambda(m-t)},\ 0\leq t\leq m\leq n, \end{equation*} i.e., for all $T>0,\ n\geq m\geq T,$ \begin{equation}\label{t1}
\mathop{\rm sup}\limits_{t\in[0,T]}|\overline{Y}_t^{n,\lambda,x,u}-\overline{Y}_t^{m,\lambda,x,u}|\leq \frac{M}{\lambda}e^{-\lambda(m-T)}\xrightarrow[ n\geq m\rightarrow\infty]{} 0,\ \mathbb{P}\mbox{-}a.s. \end{equation}
Consequently, there is a continuous adapted process $\overline{Y}^{\lambda,x,u}=(\overline{Y}_t^{\lambda,x,u})_{t\geq0}$ to which $(\overline{Y}_{t\wedge n}^{n,\lambda,x,u})_{t\geq0}$ converges uniformly on compacts, $\mathbb{P}$-a.s. Moreover, $|\overline{Y}_t^{\lambda,x,u}|\leq\frac{M}{\lambda},\ t\geq0,\ \mathbb{P}$-a.s.\\ \textbf{Step 3.} There is a process $\overline{Z}^{\lambda,x,u}=(\overline{Z}_t^{\lambda,x,u})_{t\geq0}\in\mathcal{H}^2_{loc}(\mathbb{R}^d)$ such that, for all $T>0$, \begin{equation*}
\mathbb{E}[\int_0^T|\overline{Z}_t^{\lambda,x,u}-\overline{Z}_t^{n,\lambda,x,u}|^2dt]\xrightarrow[n\rightarrow\infty]{} 0. \end{equation*}
Indeed, for $n\geq m\geq T$, we get from (\ref{r80}) and (\ref{t1}) \begin{equation*} \begin{split}
&\mathbb{E}[\int_0^T|\overline{Z}_t^{n,\lambda,x,u}-\overline{Z}_t^{m,\lambda,x,u}|^2dt]\\
\leq&2\mathbb{E}[\int_0^T|\overline{Y}_s^{n,\lambda,x,u}-\overline{Y}_s^{m,\lambda,x,u}||\psi(X_s^{x,u},\overline{Z}_s^{n,\lambda,x,u},u_s)-\psi(X_s^{x,u},\overline{Z}_s^{m,\lambda,x,u},u_s)| ds]\\
&+\mathbb{E}[|\overline{Y}_T^{n,\lambda,x,u}-\overline{Y}_T^{m,\lambda,x,u}|^2]\leq (4K_z^2T+1)\frac{M^2}{\lambda^2}e^{-2\lambda(m-T)}+\frac{1}{2}\mathbb{E}[\int_0^T|\overline{Z}_t^{n,\lambda,x,u}-\overline{Z}_t^{m,\lambda,x,u}|^2dt]. \end{split} \end{equation*} This proves that, for $n\geq m\geq T,$ \begin{equation*}
\mathbb{E}[\int_0^T|\overline{Z}_t^{n,\lambda,x,u}-\overline{Z}_t^{m,\lambda,x,u}|^2dt]\leq2(4K_z^2T+1)\frac{M^2}{\lambda^2}e^{-2\lambda(m-T)}\xrightarrow[n\geq m\rightarrow\infty]{}0. \end{equation*} This completes Step 3.\\ \textbf{Step 4.} Finally, recall that from (\ref{r80}), for $n\geq T$, \begin{equation*}
\overline{Y}_t^{n,\lambda,x,u}= \overline{Y}_T^{n,\lambda,x,u}+\int_t^T(\psi(X_s^{x,u},\overline{Z}_s^{n,\lambda,x,u},u_s)-\lambda\overline{Y}_s^{n,\lambda,x,u})ds-\int_t^T\overline{Z}_s^{n,\lambda,x,u}dW_s,\ t\in[0,T]. \end{equation*} Steps 2 and 3 allow us to pass to the limit in this BSDE, as $n\rightarrow\infty$, and we obtain that $(\overline{Y}^{\lambda,x,u},\overline{Z}^{\lambda,x,u})$ is the solution of the following BSDE: \begin{equation}\label{r81}
\overline{Y}_t^{\lambda,x,u}=\overline{Y}_T^{\lambda,x,u}+\int_t^T(\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\lambda\overline{Y}_s^{\lambda,x,u})ds-\int_t^T\overline{Z}_s^{\lambda,x,u}dW_s,\ 0\leq t\leq T<+\infty, \end{equation}
with $| \overline{Y}_t^{\lambda,x,u}|\leq\frac{M}{\lambda},\ t\geq0,$ and $\overline{Z}^{\lambda,x,u}\in\mathcal{H}^2_{loc}(\mathbb{R}^d)$.
It remains to show that \begin{equation*}
\mathbb{E}[\int_0^\infty|e^{-\lambda t}\overline{Z}_t^{\lambda,x,u}|^2dt]\leq2(\frac{M}{\lambda})^2(2+\frac{K_z^2}{\lambda}). \end{equation*}
For this, by applying It\^{o}'s formula to $|e^{-\lambda t}\overline{Y}_t^{\lambda,x,u}|^2$, it follows from (\ref{r81}) that, for all $T>0$, \begin{equation*}
\begin{split}
&\mathbb{E}[\int_0^T|e^{-\lambda t}\overline{Z}_t^{\lambda,x,u}|^2dt]\leq\mathbb{E}[|e^{-\lambda T}\overline{Y}_T^{\lambda,x,u}|^2+2\int_0^Te^{-2\lambda t}\overline{Y}_t^{\lambda,x,u}\psi(X_t^{x,u},\overline{Z}_t^{\lambda,x,u},u_t)dt]\\
\leq&\mathbb{E}[|e^{-\lambda T}\overline{Y}_T^{\lambda,x,u}|^2+2\int_0^Te^{-2\lambda t}|\overline{Y}_t^{\lambda,x,u}|(M+K_z|\overline{Z}_t^{\lambda,x,u}|)dt],
\end{split} \end{equation*} and, hence, \begin{equation*}
\frac{1}{2}\mathbb{E}[\int_0^T|e^{-\lambda t}\overline{Z}_t^{\lambda,x,u}|^2dt]\leq\mathbb{E}[|e^{-\lambda T}\overline{Y}_T^{\lambda,x,u}|^2]+2\mathbb{E}[\int_0^Te^{-2\lambda t}M|\overline{Y}_t^{\lambda,x,u}|dt]+2K_z^2\mathbb{E}[\int_0^T|e^{-\lambda t}\overline{Y}_t^{\lambda,x,u}|^2dt]. \end{equation*}
Therefore, using that $|\overline{Y}_t^{\lambda,x,u}|\leq\frac{M}{\lambda},\ t\geq0$, we obtain the stated estimate for $\displaystyle \mathbb{E}[\int_0^\infty|e^{-\lambda t}\overline{Z}_t^{\lambda,x,u}|^2dt]$. \end{proof} For any $\lambda>0$, let us define the value function \begin{equation}\label{r94}
V_\lambda(x):=\inf\limits_{u\in\mathcal{U}}\overline{Y}_0^{\lambda,x,u},\ x\in\mathbb{R}^N, \end{equation} \noindent where $\overline{Y}^{\lambda,x,u}$ is given by the BSDE (\ref{r40}).
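When the driver $\psi$ does not depend on $z$, the bounded solution of (\ref{r40}) reduces to the discounted cost $\overline{Y}_0^{\lambda,x,u}=\mathbb{E}[\int_0^\infty e^{-\lambda s}\psi(X_s^{x,u},u_s)ds]$, so that $V_\lambda$ is a classical discounted value function. The following Monte Carlo sketch (Python; the coefficients $b(x)=-x$, $\sigma\equiv0.5$, $\psi(x)=\cos x$ and all numerical parameters are illustrative choices, not taken from the text) estimates this quantity for a fixed control and checks the a priori bound $|\lambda\overline{Y}_0^{\lambda,x,u}|\leq M$, with $M=\sup|\psi|=1$:

```python
import numpy as np

def discounted_cost(x0, lam, T=20.0, dt=1e-2, n_paths=2000, seed=0):
    """Monte Carlo estimate of E[ int_0^infty e^{-lam*s} psi(X_s) ds ]
    for dX = b(X) dt + sigma dW with the illustrative choices
    b(x) = -x, sigma = 0.5, psi(x) = cos(x), so M = sup|psi| = 1.
    The horizon is truncated at T; the neglected tail is O(e^{-lam*T}/lam)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0, dtype=float)
    total = np.zeros(n_paths)
    disc = 1.0
    for _ in range(int(T / dt)):
        total += disc * np.cos(x) * dt          # left-endpoint quadrature
        x += -x * dt + 0.5 * rng.normal(0.0, np.sqrt(dt), n_paths)
        disc *= np.exp(-lam * dt)
    return total.mean()

lam = 0.5
y0 = discounted_cost(x0=1.0, lam=lam)
# a priori bound of the BSDE solution: |Y_0| <= M/lam, i.e. lam*|y0| <= 1
print(lam * abs(y0))
```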
We impose the following so-called \underline{nonexpansivity condition} for (\ref{r94}): For all $x,x'\in \mathbb{R}^N,\ u\in U,$ there exists $v\in U$ such that, for all $z\in\mathbb{R}^d$, \begin{equation*}\label{r49} \left\{ \begin{array}{llll}
\mbox{(i)}\ g(x,x',u,v):=\langle x-x',b(x,u)-b(x',v)\rangle+\frac{1}{2}|\sigma(x,u)-\sigma(x',v)|^2\\
\qquad\qquad\qquad\qquad+K_z|\sigma(x,u)-\sigma(x',v)||x-x'|\leq0;\\ \mbox{(ii)}\ \text{There\ exists\ a\ constant}\ \overline{c}_0>0\ \text{such\ that}\\
\ \ \ \ \ \widetilde{\psi}(x,x',z,u,v):=|\psi(x,z,u)-\psi(x',z,v)|-\overline{c}_0|x-x'|\leq0, \tag{H3} \end{array} \right. \end{equation*} with $K_z>0$ introduced in (\ref{r48}).
We also introduce a new \underline{stochastic nonexpansivity condition}: For all $\varepsilon>0,\ \lambda>0,\ x,\ x'\in \overline{\theta}$, and all $u\in\mathcal{U}$, there exists $v\in\mathcal{U}$ such that, for all $\gamma\in L_{\mathbb{F}}^\infty(0,\infty;\mathbb{R}^d)$ with $|\gamma_s|\leq K_z$, dsd$\mathbb{P}$-a.e., and with the notation $\displaystyle L_t^\gamma=\exp\{\int_0^t\gamma_sdW_s-\frac{1}{2}\int_0^t|\gamma_s|^2ds\}$, \begin{equation*}\label{r50} \left\{ \begin{array}{llll}
\mbox{(i)}\ \displaystyle\big(\mathbb{E}[L_t^\gamma|X_t^{x,u}-X_t^{x',v}|^2]\big)^{\frac{1}{2}}\leq|x-x'|+\varepsilon,\ t\geq0;\\
\mbox{(ii)}\ \displaystyle\lambda\int_0^\infty e^{-\lambda s}\mathbb{E}[L_s^\gamma|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v},\overline{Z}_s^{\lambda,x,u},v_s)|]ds\leq \overline{c}_0|x-x'|+\varepsilon. \tag{H4}
\end{array} \right. \end{equation*} \begin{remark} Let us recall the nonexpansivity condition in \cite{Buckdahn 2013}, established for $\psi=\psi(x,u)$ and extended by (\ref{r49}): For all $(x,x')\in\mathbb{R}^{2N}$, \begin{equation*}
\mathop{\rm sup}\limits_{u\in U}\inf\limits_{v\in U}\max\big((\langle x-x',b(x,u)-b(x',v)\rangle+\frac{1}{2}|\sigma(x,u)-\sigma(x',v)|^2),|\psi(x,u)-\psi(x',v)|-\overline{c}_0|x-x'|\big)\leq 0. \end{equation*} Observe that, if $\psi$ is independent of $z$ (that is, $\psi=\psi(x,u)$), then $K_z=0$ and (\ref{r49}) coincides with the above nonexpansivity condition in \cite{Buckdahn 2013}.\\ \indent Condition (\ref{r50}), however, is new: it reformulates the stochastic nonexpansivity condition in \cite{Buckdahn 2013} by taking into account the BSDE over the infinite time interval $[0,\infty)$. \end{remark} \begin{example} Let $d=1$ and $b(x,u)=-3x,\ \sigma(x,u)=x,\ \psi(x,z,u)=z$, for $x\in\mathbb{R}^N,\ u\in U$ and $z\in\mathbb{R}$. Then $K_z=1$, and for $\overline{c}_0=1$ we have \begin{equation*} \begin{split}
g(x,x',u,v):=&\langle x-x',b(x,u)-b(x',v)\rangle+\frac{1}{2}|\sigma(x,u)-\sigma(x',v)|^2+K_z|\sigma(x,u)-\sigma(x',v)||x-x'|\\
=&-3|x-x'|^2+\frac{1}{2}|x-x'|^2+|x-x'|^2=-\frac{3}{2}|x-x'|^2\leq0,
\end{split} \end{equation*} and \begin{equation*}
\widetilde{\psi}(x,x',z,u,v):=|\psi(x,z,u)-\psi(x',z,v)|-\overline{c}_0|x-x'|=-\overline{c}_0|x-x'|\leq0. \end{equation*}
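Both inequalities are elementary and can also be confirmed numerically; the following sanity check (Python; the dimension $N=3$ and the random sample points are arbitrary illustrative choices) verifies $g(x,x',u,v)=-\frac{3}{2}|x-x'|^2\leq0$ and $\widetilde{\psi}\leq0$:

```python
import numpy as np

rng = np.random.default_rng(1)
K_z, c0 = 1.0, 1.0            # K_z as in (r48), c0 = \bar{c}_0 as in (H3)
b = lambda x: -3.0 * x        # drift of the example
sigma = lambda x: x           # diffusion of the example

for _ in range(100):
    x, xp = rng.normal(size=3), rng.normal(size=3)   # N = 3 is arbitrary
    d = x - xp
    g = d @ (b(x) - b(xp)) + 0.5 * np.sum((sigma(x) - sigma(xp)) ** 2) \
        + K_z * np.linalg.norm(sigma(x) - sigma(xp)) * np.linalg.norm(d)
    assert np.isclose(g, -1.5 * d @ d)    # g = -(3/2)|x - x'|^2
    assert g <= 0.0
    # psi(x,z,u) = z gives tilde-psi = -c0*|x - x'| <= 0 trivially
    assert -c0 * np.linalg.norm(d) <= 0.0
```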
\end{example} \begin{proposition}\label{p:2.1} Under the assumptions (\ref{r1}) and (\ref{r48}), the nonexpansivity condition (\ref{r49}) implies the stochastic nonexpansivity condition (\ref{r50}). \end{proposition} \begin{proof} We fix arbitrarily $(x,x')\in \overline{\theta}^2$, $\lambda>0$, $T>0$, $\varepsilon>0$, and $u\in\mathcal{U}$. Without loss of generality, let us suppose that $u$ is a step process, i.e., that there exists a partition of $[0,T]$, denoted by $0=t_0<t_1<t_2<\cdots<t_M=T$, and random variables $u_i\in L^0(\mathcal{F}_{t_i}; U)$, $0\leq i\leq M-1$, such that \begin{equation*} u=\sum^{M-1}_{i=0}u_i1_{(t_i,t_{i+1}]}. \end{equation*} We refer the reader to \cite{Krylov 1999} for further details. Indeed, this choice is justified, since step processes are dense in the space of admissible controls $\mathcal{U}$ endowed with the metric $\displaystyle(\mathbb{E}[\int_0^\infty e^{-t}d(u_t,u'_t)^2dt])^{\frac{1}{2}},\ u,\ u'\in \mathcal{U}$, and the controlled state process $X^{x,u}$ as well as the solution $(\overline{Y}^{\lambda,x,u},\overline{Z}^{\lambda,x,u})$ of the BSDE (\ref{r40}) are $L^2$-continuous in $u\in \mathcal{U}$. Now we introduce the set-valued function \begin{equation*} \begin{split}
\overline{\theta}^2\times U\ni(x,x',u)\rightsquigarrow\Xi(x,x',u):=&\{v\in U:g(x,x',u,v)\leq0,\ \widetilde{\psi}(x,x',z,u,v)\leq0,\ \text{for\ all}\ z\in\mathbb{R}^d\}.
\end{split} \end{equation*} From the fact that $\Xi$ is upper semicontinuous and has nonempty compact values we know that there exists a Borel function (see Aubin and Frankowska \cite{Aubin 1990}) \begin{equation*}
\widehat{v}:\overline{\theta}^2\times U\to U,\ \text{with}\ \widehat{v}(x,x',u)\in\Xi(x,x',u),\ \text{for\ all}\ (x,x',u)\in\overline{\theta}^2\times U. \end{equation*} \textbf{Step 1.} On $[0,t_1]$, setting $\tau_0=0$, we define $$v_t^{0,0}:=\widehat{v}(X_0^{x,u},x',u_t)=\widehat{v}(x,x',u_0)(=v_0^{0,0}),$$
and \begin{equation*} \begin{split}
\tau_1:=&\inf\{t\geq0:g(X_t^{x,u},X_t^{x',v^{0,0}},u_t,v_t^{0,0})>\delta\ \mbox{or}\ \mathop{\rm sup}\limits_{z\in\mathbb{R}^d}\widetilde{\psi}(X_t^{x,u},X_t^{x',v^{0,0}},z,u_t,v_t^{0,0})>\delta\}\wedge\frac{t_1}{n},\ n\geq1,
\end{split} \end{equation*} where $\delta>0$ is arbitrarily small and will be specified later. Similar to the proof of Lemma 3 in \cite{Buckdahn 2013}, from the assumption that the compact $\overline{\theta}$ is invariant with respect to control system (\ref{r2}) and from (\ref{r1}) we get for all $t\in[0,t_1]$, \begin{equation*} \begin{split}
&g(X_t^{x,u},X_t^{x',v^{0,0}},u_t,v_t^{0,0})
=\langle X_t^{x,u}-X_t^{x',v^{0,0}},b(X_t^{x,u},u_t)-b(X_t^{x',v^{0,0}},v_t^{0,0})\rangle\\
&+\frac{1}{2}|\sigma(X_t^{x,u},u_t)-\sigma(X_t^{x',v^{0,0}},v_t^{0,0})|^2+K_z|\sigma(X_t^{x,u},u_t)-\sigma(X_t^{x',v^{0,0}},v_t^{0,0})||X_t^{x,u}-X_t^{x',v^{0,0}}|\\
\leq&\langle x-x',b(x,u_0)-b(x',v_0^{0,0})\rangle+\frac{1}{2}|\sigma(x,u_0)-\sigma(x',v_0^{0,0})|^2+K_z|\sigma(x,u_0)-\sigma(x',v_0^{0,0})||x-x'|\\
&+c(|x-X_t^{x,u}|+|x'-X_t^{x',v^{0,0}}|)\\
=&g(x,x',u_0,v_0^{0,0})+c(|x-X_t^{x,u}|+|x'-X_t^{x',v^{0,0}}|),
\end{split} \end{equation*} and \begin{equation*} \begin{split} &\widetilde{\psi}(X_t^{x,u},X_t^{x',v^{0,0}},\overline{Z}_t^{\lambda,x,u},u_t,v_t^{0,0})\\
=&|\psi(X_t^{x,u},\overline{Z}_t^{\lambda,x,u},u_t)-\psi(X_t^{x',v^{0,0}},\overline{Z}_t^{\lambda,x,u},v_t^{0,0})|-\overline{c}_0| X_t^{x,u}-X_t^{x',v^{0,0}}|\\
\leq&|\psi(x,\overline{Z}_t^{\lambda,x,u},u_0)-\psi(x',\overline{Z}_t^{\lambda,x,u},v_0^{0,0})|-\overline{c}_0|x-x'|+c(|X_t^{x,u}-x|+| X_t^{x',v^{0,0}}-x'|)\\
=&\widetilde{\psi}(x,x',\overline{Z}_t^{\lambda,x,u},u_0,v_0^{0,0})+c(|X_t^{x,u}-x|+|X_t^{x',v^{0,0}}-x'|), \end{split} \end{equation*} for some constant $c$ depending on the coefficients $\sigma,b,\psi$ and on $\overline{\theta}$.
Then, from the choice of $v^{0,0}$ we have that
\begin{equation*}
g(X_t^{x,u},X_t^{x',v^{0,0}},u_t,v_t^{0,0})\leq c(|x-X_t^{x,u}|+|x'-X_t^{x',v^{0,0}}|),
\end{equation*} and \begin{equation*}
\widetilde{\psi}(X_t^{x,u},X_t^{x',v^{0,0}},\overline{Z}_t^{\lambda,x,u},u_t,v_t^{0,0})\leq c(|X_t^{x,u}-x|+|X_t^{x',v^{0,0}}-x'|),\ t\in[0,t_1]. \end{equation*}
Thus, applying Markov's inequality and Burkholder's inequality, we have that, for all $p>1,\ n\geq1$, there is a constant $c_p>0$ such that
\begin{equation}\label{r88}
\begin{split}
&\mathbb{P}(\tau_1<\frac{t_1}{n})\leq\mathbb{P}(\mathop{\rm sup}\limits_{t\in[0,\frac{t_1}{n}]}c(|x'-X_t^{x',v^{0,0}}|+|x-X_t^{x,u}|)\geq\delta)\\
&\leq\frac{c}{\delta^{4p}}(\mathbb{E}[\mathop{\rm sup}\limits_{t\in[0,\frac{t_1}{n}]}|x'-X_t^{x',v^{0,0}}|^{4p}]+\mathbb{E}[\mathop{\rm sup}\limits_{t\in[0,\frac{t_1}{n}]}|x-X_t^{x,u}|^{4p}])\leq\frac{c^2_pt_1^{2p}}{\delta^{4p}n^{2p}}.
\end{split}
\end{equation}
Recalling the definition of $L^\gamma$, we conclude that
\begin{equation}\label{r89}
\begin{split}
\mathop{\rm sup}\limits_{|\gamma|\leq K_z}\mathbb{E}[L_{\frac{t_1}{n}}^{\gamma}1_{\{\tau_1<\frac{t_1}{n}\}}]\leq \mathop{\rm sup}\limits_{|\gamma|\leq K_z}\big(\mathbb{E}[(L_{\frac{t_1}{n}}^{\gamma})^2]\big)^\frac{1}{2}\big(\mathbb{P}(\tau_1<\frac{t_1}{n})\big)^\frac{1}{2}\leq e^{\frac{1}{2}K_z^2\frac{t_1}{n}}\frac{c_pt_1^p}{\delta^{2p}n^p}.
\end{split}
\end{equation} For $1\leq i\leq n-1$, let us define iteratively $v^{0,i}$ and $\tau_{i+1}$. Given $\tau_i$ and $v^{0,i-1}\in\mathcal{U}$ we put
\begin{equation*}
v_t^{0,i}:=v_t^{0,i-1}1_{\{t\leq\tau_i\}}+\widehat{v}(X_{\tau_i}^{x,u},X_{\tau_i}^{x',v^{0,i-1}},u_t)1_{\{t>\tau_i\}},
\end{equation*}
and
\begin{equation*}
\begin{split}
\tau_{i+1}:=&\inf\{t\geq\tau_{i}:g(X_t^{x,u},X_t^{x',v^{0,i}},u_t,v_t^{0,i})>\delta\ \mbox{or}\ \mathop{\rm sup}\limits_{z\in\mathbb{R}^d}\widetilde{\psi}(X_t^{x,u},X_t^{x',v^{0,i}},z,u_t,v_t^{0,i})>\delta\}\wedge\frac{(i+1)t_1}{n}.
\end{split}
\end{equation*}
From the strong Markov property we have, in analogy to (\ref{r88}),
\begin{equation*}
\mathbb{P}\big(\tau_{i+1}-\tau_i<\frac{t_1}{n}\,\big|\,X_{\tau_i}^{x,u}=\widehat{x},X_{\tau_i}^{x',v^{0,i}}=\widehat{x}',\tau_i=\widehat{t}\,\big)\leq\frac{c^2_pt_1^{2p}}{\delta^{4p}n^{2p}},\ (\widehat{x},\widehat{x}')\in\overline{\theta}^2,\ \widehat{t}\in[0,\frac{it_1}{n}],
\end{equation*}
and, thus,
\begin{equation*}
\mathbb{P}(\tau_{i+1}-\tau_i<\frac{t_1}{n})\leq\frac{c^2_pt_1^{2p}}{\delta^{4p}n^{2p}}.
\end{equation*} Moreover, similar to (\ref{r89}) we get
\begin{equation*}
\begin{split}
\mathop{\rm sup}\limits_{|\gamma|\leq K_z}\mathbb{E}[L_{t_1}^{\gamma}1_{\{\tau_{i+1}-\tau_i<\frac{t_1}{n}\}}]\leq \mathop{\rm sup}\limits_{|\gamma|\leq K_z}\big(\mathbb{E}[(L_{t_1}^{\gamma})^2]\big)^\frac{1}{2}\big(\mathbb{P}(\tau_{i+1}-\tau_i<\frac{t_1}{n})\big)^\frac{1}{2}\leq e^{\frac{1}{2}K_z^2t_1}\frac{c_pt_1^p}{\delta^{2p}n^p}.
\end{split}
\end{equation*} This shows that there exists a constant $\overline{c}_p>0$ such that \begin{equation}\label{112}
\mathop{\rm sup}\limits_{|\gamma|\leq K_z}\mathbb{E}[L_{t_1}^{\gamma}1_{\{\tau_n<t_1\}}]\leq \sum_{i=0}^{n-1}\mathop{\rm sup}\limits_{|\gamma|\leq K_z}\mathbb{E}[L_{t_1}^{\gamma}1_{\{\tau_{i+1}-\tau_i<\frac{t_1}{n}\}}]\leq e^{\frac{1}{2}K_z^2t_1}\frac{\overline{c}_pt_1^p}{\delta^{2p}n^{p-1}}. \end{equation}
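The Girsanov densities enter the estimates (\ref{r89}) and (\ref{112}) only through the second-moment bound $\mathbb{E}[(L_t^\gamma)^2]\leq e^{K_z^2t}$. For a constant $\gamma$ with $|\gamma|=K_z$ this bound is attained ($\mathbb{E}[(L_t^\gamma)^2]=e^{K_z^2t}$ exactly), which the following Monte Carlo sketch (Python; all numerical values are illustrative choices) confirms:

```python
import numpy as np

def second_moment(gamma, t, n=200_000, seed=2):
    """Monte Carlo estimate of E[(L_t^gamma)^2] for a constant gamma,
    where L_t^gamma = exp(gamma*W_t - 0.5*gamma^2*t)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, np.sqrt(t), n)      # W_t ~ N(0, t)
    L = np.exp(gamma * w - 0.5 * gamma ** 2 * t)
    return np.mean(L ** 2)

K_z, t1 = 1.0, 0.5                          # illustrative values
est = second_moment(gamma=K_z, t=t1)
bound = np.exp(K_z ** 2 * t1)               # e^{K_z^2 t}, exact for |gamma| = K_z
print(est, bound)
```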
Let $d\mathbb{P}^\gamma_{t_1}=L_{t_1}^\gamma d\mathbb{P}$, and recall that due to the Girsanov Theorem $\displaystyle W_t^\gamma=W_t-\int_0^t\gamma_sds,\ t\in[0,t_1],$ is an $(\mathbb{F},\mathbb{P}_{t_1}^\gamma)$-Brownian motion. Let us define $\displaystyle\mathbb{E}_{t_1}^\gamma[\cdot]=\int_\Omega(\cdot) d\mathbb{P}_{t_1}^\gamma=\mathbb{E}[L_{t_1}^\gamma(\cdot)]$. Applying It\^{o}'s formula to $|X_t^{x,u}-X_t^{x',v^{0,n}}|^2$, for all $t\leq t_1$ we have \begin{eqnarray}\label{r41} \begin{split}
\mathbb{E}_{t_1}^\gamma[|X_t^{x,u}-X_t^{x',v^{0,n}}|^2] =&|x-x'|^2 +2\mathbb{E}_{t_1}^\gamma\Big[\int_0^t\Big(\langle X_s^{x,u}-X_s^{x',v^{0,n}},b(X_s^{x,u},u_s)-b(X_s^{x',v^{0,n}},v_s^{0,n})\rangle\\
&+ \frac{1}{2}|\sigma(X_s^{x,u},u_s)-\sigma(X_s^{x',v^{0,n}},v_s^{0,n})|^2\Big) ds \Big] \\ &+2\mathbb{E}_{t_1}^\gamma\Big[\int_0^t(X_s^{x,u}-X_s^{x',v^{0,n}})(\sigma(X_s^{x,u},u_s)-\sigma(X_s^{x',v^{0,n}},v_s^{0,n}))dW_s\Big].
\end{split} \end{eqnarray}
Thus, substituting $dW_s=dW_s^\gamma+\gamma_sds$ and taking into account that $|\gamma_s|\leq K_z$, dsd$\mathbb{P}$-a.e., we obtain \begin{equation}\label{r91} \begin{split}
&\mathbb{E}_{t_1}^\gamma[|X_t^{x,u}-X_t^{x',v^{0,n}}|^2]
\leq|x-x'|^2+2\mathbb{E}_{t_1}^\gamma\Big[\int_0^t\Big(\langle X_s^{x,u}-X_s^{x',v^{0,n}},b(X_s^{x,u},u_s)-b(X_s^{x',v^{0,n}},v_s^{0,n})\rangle\\
&+\frac{1}{2}|\sigma(X_s^{x,u},u_s)-\sigma(X_s^{x',v^{0,n}},v_s^{0,n})|^2+K_z|\sigma(X_s^{x,u},u_s)-\sigma(X_s^{x',v^{0,n}},v_s^{0,n})||X_s^{x,u}-X_s^{x',v^{0,n}}|\Big) ds \Big]\\
=&|x-x'|^2+2\mathbb{E}_{t_1}^\gamma[\int_0^tg(X_s^{x,u},X_s^{x',v^{0,n}},u_s,v_s^{0,n})ds]\\
\leq& |x-x'|^2+2\mathbb{E}_{t_1}^\gamma[ct1_{\{t>\tau_n\}}+t\delta 1_{\{t\leq\tau_n\}}]\\
\leq& |x-x'|^2+\frac{ct_1^{p+1}}{\delta^{2p}n^{p-1}}e^{K_z^2t_1}+ct_1\delta,\ t\in[0,t_1].
\end{split} \end{equation} For this we have used the definition of $\tau_n$ and the boundedness of $g$ over $\overline{\theta}\times\overline{\theta}\times U\times U$. Consequently, \begin{equation}\label{r90} \begin{split}
&\lambda\int_0^{t_1}e^{-\lambda s}\mathbb{E}[L_s^\gamma|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^{0,n}},\overline{Z}_s^{\lambda,x,u},v_s^{0,n})|]ds\\
=&\lambda\int_0^{t_1}e^{-\lambda s}\mathbb{E}_{t_1}^\gamma[\widetilde{\psi}(X_s^{x,u},X_s^{x',v^{0,n}},\overline{Z}_s^{\lambda,x,u},u_s,v_s^{0,n})]ds+\lambda\int_0^{t_1}e^{-\lambda s}\mathbb{E}_{t_1}^\gamma[\overline{c}_0|X_s^{x,u}-X_s^{x',v^{0,n}}|]ds\\
\leq&\lambda\int_0^{t_1}e^{-\lambda s}\mathbb{E}_{t_1}^\gamma[\delta1_{\{\tau_n\geq s\}}+\widetilde{\psi}(X_s^{x,u},X_s^{x',v^{0,n}},\overline{Z}_s^{\lambda,x,u},u_s,v_s^{0,n})1_{\{\tau_n<s\}}]ds\\
&+\overline{c}_0\lambda\int_0^{t_1}e^{-\lambda s}(| x-x'|+\frac{ct_1^{\frac{p+1}{2}}}{\delta^pn^{\frac{p-1}{2}}}e^{\frac{1}{2}K_z^2t_1}+(ct_1\delta)^{\frac{1}{2}})ds. \end{split} \end{equation} We remark that \begin{equation*}
|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)|\leq|\psi(X_s^{x,u},0,u_s)|+K_z|\overline{Z}_s^{\lambda,x,u}|\leq M+K_z| \overline{Z}_s^{\lambda,x,u}|, \end{equation*} and as the same estimate holds true for $\psi(X_s^{x',v^{0,n}},\overline{Z}_s^{\lambda,x,u},v^{0,n}_s)$, we have \begin{equation*}
\widetilde{\psi}(X_s^{x,u},X_s^{x',v^{0,n}},\overline{Z}_s^{\lambda,x,u},u_s,v_s^{0,n})\leq2M+2K_z|\overline{Z}_s^{\lambda,x,u}|,\ s\in[0,t_1]. \end{equation*} Thus, \begin{equation*} \begin{array}{lll}
&\displaystyle\lambda\int_0^{t_1}e^{-\lambda s}\mathbb{E}_{t_1}^\gamma[\widetilde{\psi}(X_s^{x,u},X_s^{x',v^{0,n}},\overline{Z}_s^{\lambda,x,u},u_s,v_s^{0,n})1_{\{\tau_n<s\}}]ds\\
&\displaystyle\leq2M\mathbb{E}_{t_1}^\gamma[1_{\{\tau_n<t_1\}}]\lambda\int_0^{t_1}e^{-\lambda s}ds+2K_z\lambda\int_0^{t_1}e^{-\lambda s}\mathbb{E}_{t_1}^\gamma[|\overline{Z}_s^{\lambda,x,u}|1_{\{\tau_n<s\}}]ds,
\end{array} \end{equation*} where \begin{equation*}
\lambda\int_0^{t_1}e^{-\lambda s}\mathbb{E}_{t_1}^\gamma[|\overline{Z}_s^{\lambda,x,u}|1_{\{\tau_n<s\}}]ds\leq \lambda(\mathbb{E}[\int_0^{t_1}|e^{-\lambda s}\overline{Z}_s^{\lambda,x,u}|^2ds])^{\frac{1}{2}}\cdot \sqrt{t_1}\cdot(\mathbb{E}[(L_{t_1}^\gamma)^4])^{\frac{1}{4}}(\mathbb{P}\{\tau_n<t_1\})^{\frac{1}{4}}. \end{equation*} Recall that \begin{equation*}
\mathbb{E}[\int_0^{t_1}|e^{-\lambda s}\overline{Z}_s^{\lambda,x,u}|^2ds]\leq 2(\frac{M}{\lambda})^2(2+\frac{K_z^2}{\lambda}), \end{equation*} and observe that \begin{equation*}
(\mathbb{E}[(L_{t_1}^\gamma)^4])^{\frac{1}{4}}\leq e^{2K_z^2t_1}, \end{equation*} and \begin{equation*}
\mathbb{P}\{\tau_n<t_1\}\leq\sum\limits_{i=1}^n\mathbb{P}\{\tau_i-\tau_{i-1}<\frac{t_1}{n}\}\leq\frac{C_p^2t_1^{2p}}{\delta^{4p}n^{2p-1}}. \end{equation*} Hence, \begin{equation*}
\lambda\int_0^{t_1}e^{-\lambda s}\mathbb{E}_{t_1}^\gamma[|\overline{Z}_s^{\lambda,x,u}|1_{\{\tau_n<s\}}]ds\leq C_{M,\lambda}e^{2K_z^2t_1}\frac{C_p^{\frac{1}{2}}t_1^{\frac{p+1}{2}}}{\delta^{p}n^{\frac{p-1}{2}}}, \end{equation*} and supposing without loss of generality that $\delta\in(0,1),\ K_z\geq1$ and $t_1(=t_1-t_0)\leq1$, we get from (\ref{r90}), (\ref{112}) and the above estimates, \begin{equation*} \begin{split}
&\lambda\int_0^{t_1}e^{-\lambda s}\mathbb{E}[L_s^\gamma| \psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^{0,n}},\overline{Z}_s^{\lambda,x,u},v_s^{0,n})|]ds\\
\leq& \lambda\int_0^{t_1}e^{-\lambda s}ds\cdot(\overline{c}_0| x-x'|+c\delta^{\frac{1}{2}}+c_p\frac{t_1^{\frac{p}{2}}}{\delta^{p}n^{\frac{p-1}{2}}}e^{2K_z^2t_1}).
\end{split} \end{equation*} Recall that $\delta\in(0,1)$ is arbitrary. Thus, choosing $\delta>0$ sufficiently small and $n$ large enough, we have for $v^0:=v^{0,n}\in\mathcal{U}$ \begin{equation}\label{t2} \begin{array}{lll}
&{\rm (i)}\ (\mathbb{E}[L_t^\gamma|X_t^{x,u}-X_t^{x',v^{0}}|^2])^{\frac{1}{2}}\leq | x-x'|+\varepsilon\frac{t_1}{(\overline{c}_0(T+2))^M},\ t\in[0,t_1];\\
&{\rm (ii)}\ \displaystyle\lambda \int_0^{t_1}e^{-\lambda s}\mathbb{E}[L_s^\gamma|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^{0}},\overline{Z}_s^{\lambda,x,u},v_s^{0})|]ds\\
&\displaystyle\ \ \ \ \ \ \ \leq\lambda\int_0^{t_1}e^{-\lambda s}ds\cdot\overline{c}_0(|x-x'|+\frac{\varepsilon}{(\overline{c}_0(T+2))^M}), \end{array} \end{equation}
for all $\gamma\in L_{\mathbb{F}}^\infty(0,\infty;\mathbb{R}^d)$ with $|\gamma_s|\leq K_z$, dsd$\mathbb{P}$-a.e.\\ \noindent\textbf{Step 2.} We consider the interval $[0,t_2]$: Starting now from $(X_{t_1}^{x,u},X_{t_1}^{x',v^0})$ at time $t_1$, and with $u=u_{t_1}$ on $[t_1,t_2]$, we construct $v^1$. We begin with putting\\ $v_t^{1,0}:=v_t^01_{[0,t_1)}(t)+\widehat{v}(X_{t_1}^{x,u},X_{t_1}^{x',v^0},u_t)1_{[t_1,t_2]}(t)
=v_t^01_{[0,t_1)}(t)+\widehat{v}(X_{t_1}^{x,u},X_{t_1}^{x',v^0},u_{t_1})1_{[t_1,t_2]}(t), t\in[0,t_2]$. Similar to Step 1, we construct a sequence of control processes $(v^{1,n})_{n\geq0}$. Letting $n$ be large enough, there exists $v^1:=v^{1,n}$ such that, for all $\gamma\in L_{\mathbb{F}}^\infty(0,\infty;\mathbb{R}^d)$ with $|\gamma_s|\leq K_z$, dsd$\mathbb{P}$-a.e., \begin{equation}\label{t3}
(\mathbb{E}_{t_2}^\gamma[|X_t^{x,u}-X_t^{x',v^1}|^2\big|\mathcal{F}_{t_1}])^{\frac{1}{2}}=(\mathbb{E}[\frac{L_{t_2}^\gamma}{L_{t_1}^\gamma}|X_{t}^{x,u}-X_{t}^{x',v^1}|^2\big|\mathcal{F}_{t_1}])^{\frac{1}{2}}\leq|X_{t_1}^{x,u}-X_{t_1}^{x',v^0}|+\varepsilon\frac{t_2-t_1}{(\overline{c}_0(T+2))^M},
& \lambda\int_{t_1}^{t_2} e^{-\lambda s}\mathbb{E}_s^\gamma[|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^1},\overline{Z}_s^{\lambda,x,u},v_s^1)|\big|\mathcal{F}_{t_1}]ds\\
\leq&\lambda\int_{t_1}^{t_2} e^{-\lambda s}ds\cdot\overline{c}_0(| X_{t_1}^{x,u}-X_{t_1}^{x',v^0}|+\frac{\varepsilon}{(\overline{c}_0(T+2))^M}). \end{split} \end{equation*} Then, from (\ref{t2}) and (\ref{t3}), \begin{equation*} \begin{split}
&(\mathbb{E}_{t_2}^\gamma[|X_t^{x,u}-X_t^{x',v^1}|^2])^{\frac{1}{2}}=(\mathbb{E}_{t_1}^\gamma[((\mathbb{E}_{t_2}^\gamma[|X_t^{x,u}-X_t^{x',v^1}|^2\big|\mathcal{F}_{t_1}])^\frac{1}{2})^2])^\frac{1}{2}\\
& \leq (\mathbb{E}_{t_1}^\gamma[(|X_{t_1}^{x,u}-X_{t_1}^{x',v^0}|+\varepsilon\frac{t_2-t_1}{(\overline{c}_0(T+2))^M})^2])^\frac{1}{2}\\
& \leq (\mathbb{E}_{t_1}^\gamma[|X_{t_1}^{x,u}-X_{t_1}^{x',v^0}|^2])^\frac{1}{2}+\varepsilon\frac{t_2-t_1}{(\overline{c}_0(T+2))^M}\\
& \leq (|x-x'|+\varepsilon\frac{t_1}{(\overline{c}_0(T+2))^M})+\varepsilon\frac{t_2-t_1}{(\overline{c}_0(T+2))^M}. \end{split} \end{equation*} This combined once more with the result (\ref{t2}) of Step 1 yields \begin{equation*}
\mathop{\rm sup}\limits_{|\gamma|\leq K_z}(\mathbb{E}_t^\gamma[|X_t^{x,u}-X_t^{x',v^1}|^2])^{\frac{1}{2}}\leq |x-x'|+\varepsilon\frac{t_2}{(\overline{c}_0(T+2))^M},\ t\in[0,t_2]. \end{equation*} On the other hand, arguing similarly using (\ref{t2}) and (\ref{t3}), we get \begin{equation*} \begin{split}
& \lambda\int_{t_1}^{t_2} e^{-\lambda s}\mathbb{E}_s^\gamma[|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^1},\overline{Z}_s^{\lambda,x,u},v_s^1)|]ds\\
\leq&\lambda\int_{t_1}^{t_2} e^{-\lambda s}ds\cdot\overline{c}_0((\mathbb{E}_{t_1}^\gamma[| X_{t_1}^{x,u}-X_{t_1}^{x',v^1}|^2])^{\frac{1}{2}}+\frac{\varepsilon}{(\overline{c}_0(T+2))^M})\\
\leq& \lambda\int_{t_1}^{t_2} e^{-\lambda s}ds(\overline{c}_0| x-x'|+\frac{\overline{c}_0(t_1+1)\varepsilon}{(\overline{c}_0(T+2))^{M}}), \end{split} \end{equation*} which combined with the corresponding estimate (\ref{t2}) of Step 1 yields \begin{equation*} \begin{split}
& \lambda\int_0^{t_2}e^{-\lambda s}\mathbb{E}_s^\gamma[|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^1},\overline{Z}_s^{\lambda,x,u},v_s^1)|]ds\\
\leq&\lambda\int_0^{t_2}e^{-\lambda s}ds(\overline{c}_0|x-x'|+\frac{\overline{c}_0(t_1+2)\varepsilon}{(\overline{c}_0(T+2))^M})\\
\leq&\lambda\int_0^{t_2}e^{-\lambda s}ds(\overline{c}_0|x-x'|+\frac{\varepsilon}{(\overline{c}_0(T+2))^{M-1}}), \end{split} \end{equation*}
for all $\gamma\in L_{\mathbb{F}}^\infty(0,\infty;\mathbb{R}^d)$ with $|\gamma_s|\leq K_z$, dsd$\mathbb{P}$-a.e.
Similarly, we make our construction on $[t_2,t_3]$, $[t_3,t_4]$, $\cdots$, $[t_{M-1},t_M]$, to finally get a process $v^{M-1}$ defined on $[0,T]$, such that \begin{equation*} \left\{ \begin{split}
&\mbox{(i)}\ \mathop{\rm sup}\limits_{|\gamma|\leq K_z}(\mathbb{E}_t^\gamma[|X_t^{x,u}-X_t^{x',v^{M-1}}|^2])^{\frac{1}{2}}\leq |x-x'|+\varepsilon\frac{T}{(\overline{c}_0(T+2))^M}\leq|x-x'|+\frac{\varepsilon}{2},\ t\in[0,T];\\
&\mbox{(ii)}\displaystyle\ \lambda\int_0^{T}e^{-\lambda s}\mathbb{E}_s^\gamma[|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^{M-1}},\overline{Z}_s^{\lambda,x,u},v_s^{M-1})|]ds\\
&\ \ \ \ \ \ \leq\displaystyle \lambda\int_0^{T}e^{-\lambda s}ds(\overline{c}_0|x-x'|+\frac{\varepsilon}{2}), \end{split} \right. \end{equation*}
for all $\gamma\in L_{\mathbb{F}}^\infty(0,\infty;\mathbb{R}^d)$ with $|\gamma_s|\leq K_z$, dsd$\mathbb{P}$-a.e. Here we have supposed without loss of generality that $\overline{c}_0\geq1$. Now let $T=1$, $\rho=\rho_1=\frac{1}{2}$, $\widetilde{v}^1:=v^{M-1}$; we repeat the same construction on $[1,2]$, starting from $(X_1^{x,u},X_1^{x',v^{M-1}},u_1)$, but now with $\frac{\varepsilon}{4}$. This yields $v^2$ on $[1,2]$ and $\widetilde{v}^2:=\widetilde{v}^11_{[0,1]}+v^21_{[1,2]}$. Iterating in the same way, with $\frac{\varepsilon}{2^{j+1}}$ on $[j,j+1]$, $j\geq2$, we obtain a control $v\in\mathcal{U}$ such that \begin{equation*} \begin{array}{lll}
&\displaystyle {\rm (i)}\ (\mathbb{E}_t^\gamma[|X_t^{x,u}-X_t^{x',v}|^2])^{\frac{1}{2}}\leq |x-x'|+\varepsilon(\sum_{j=1}^\infty\frac{1}{2^j})=|x-x'|+\varepsilon,\ t\geq0,\\
&\displaystyle {\rm (ii)}\ \lambda\int_0^\infty e^{-\lambda s}\mathbb{E}_s^\gamma[|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v},\overline{Z}_s^{\lambda,x,u},v_s)|]ds\\
&\displaystyle \ \ \ \ \leq\lambda\int_0^\infty e^{-\lambda s}ds(\overline{c}_0|x-x'|+\varepsilon)=\overline{c}_0|x-x'|+\varepsilon, \end{array} \end{equation*}
for all $\gamma\in L_{\mathbb{F}}^\infty(0,\infty;\mathbb{R}^d)$ with $|\gamma_s|\leq K_z$, dsd$\mathbb{P}$-a.e. \end{proof} \begin{lemma}\label{lem:2.6} We suppose that (\ref{r1}), (\ref{r48}) and (\ref{r49}) hold. Then the family of functions $\{\lambda V_\lambda\}_\lambda$ is equicontinuous and equibounded on $\overline{\theta}$. Indeed, for the constants $\overline{c}_0>0$, $M>0$ defined in (\ref{r48}), it holds that, for all $\lambda>0$, and for all $ x,x'\in\overline{\theta},$ \begin{equation*} \left\{ \begin{array}{ll}
{\rm{(i)}}\ |\lambda V_\lambda(x)-\lambda V_\lambda(x')|\leq \overline{c}_0|x-x'|, \\
{\rm{(ii)}}\ |\lambda V_\lambda(x)|\leq M. \end{array} \right. \end{equation*}
\end{lemma} \begin{proof}
From Proposition \ref{th:2.4} we know that for all $t\geq0,\ \lambda>0$, $|\overline{Y}_t^{\lambda,x,u}|\leq\frac{M}{\lambda}$. Thus we have \begin{equation*}
|\lambda V_\lambda(x)|\leq\lambda\mathop{\rm sup}\limits_{u\in\mathcal{U}}|\overline{Y}_0^{\lambda,x,u}|\leq M. \end{equation*} It remains to prove (i). Let $\lambda>0,\ x,\ x'\in\overline{\theta}$. For any $\varepsilon>0$, let $u\in\mathcal{U}$ be such that \begin{equation}\label{lg1} V_\lambda(x)\geq \overline{Y}_0^{\lambda,x,u}-\frac{\varepsilon}{\lambda}. \end{equation} Then, by Proposition \ref{p:2.1}, there is $v^\varepsilon\in\mathcal{U}$ such that (\ref{r50}) holds true.
Let us define $Y_s^\varepsilon=\overline{Y}_s^{\lambda,x,u}-\overline{Y}_s^{\lambda,x',v^\varepsilon}, Z_s^\varepsilon=\overline{Z}_s^{\lambda,x,u}-\overline{Z}_s^{\lambda,x',v^\varepsilon},$ and \begin{equation*} \gamma_s^\varepsilon=\frac{\psi(X_s^{x',v^\varepsilon},\overline{Z}_s^{\lambda,x,u},v_s^\varepsilon)- \psi(X_s^{x',v^\varepsilon},\overline{Z}_s^{\lambda,x',v^\varepsilon},v_s^\varepsilon)}
{|\overline{Z}_s^{\lambda,x,u}-\overline{Z}_s^{\lambda,x',v^\varepsilon}|^2} \cdot(\overline{Z}_s^{\lambda,x,u}-\overline{Z}_s^{\lambda,x',v^\varepsilon})^*,\ \mbox{if}\ \overline{Z}_s^{\lambda,x,u}\neq\overline{Z}_s^{\lambda,x',v^\varepsilon}; \end{equation*}
\noindent otherwise, $\gamma_s^\varepsilon=0,\ s\geq0$, where $(\overline{Y}^{\lambda,x,u},\overline{Z}^{\lambda,x,u})$ and $(\overline{Y}^{\lambda,x',v^\varepsilon},\overline{Z}^{\lambda,x',v^\varepsilon})$ are the solutions of the BSDE (\ref{r40}) with the driving coefficients $\psi(X^{x,u},\cdot,u)$ and $\psi(X^{x',v^\varepsilon},\cdot,v^\varepsilon)$, respectively. We note that from (\ref{r48}) it follows that $|\gamma_s^\varepsilon|\leq K_z$. Putting \begin{equation*}
L_s^\varepsilon=\exp\{\int_0^s\gamma^\varepsilon_rdW_r-\frac{1}{2}\int_0^s|\gamma^\varepsilon_r|^2dr\},\ s\geq0, \end{equation*} we define probability measures $\mathbb{P}_s^\varepsilon$ on $(\Omega,\mathcal{F})$ by setting \begin{equation*}
\frac{d\mathbb{P}_s^\varepsilon}{d\mathbb{P}}=L_s^\varepsilon,\ s\geq0. \end{equation*} Then it follows from Girsanov's theorem that \begin{equation*}
W^\varepsilon_t=W_t-\int_0^t\gamma^\varepsilon_rdr,\ t\in[0,s], \end{equation*} is an $(\mathbb{F},\mathbb{P}_s^\varepsilon)$-Brownian motion. Then, for all $0\leq t\leq T<\infty$, \begin{equation*} \begin{split}
Y_t^\varepsilon=&Y_T^\varepsilon-\lambda\int_t^TY_s^\varepsilon ds+\int_t^T(\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^\varepsilon},\overline{Z}_s^{\lambda,x',v^\varepsilon},v_s^\varepsilon))ds
-\int_t^TZ_s^\varepsilon dW_s\\
=& Y_T^\varepsilon-\lambda\int_t^TY_s^\varepsilon ds+\int_t^T(\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^\varepsilon},\overline{Z}_s^{\lambda,x,u},v_s^\varepsilon))ds
-\int_t^TZ_s^\varepsilon dW^\varepsilon_s.
\end{split} \end{equation*}
By applying It\^{o}'s formula to $e^{-\lambda t}Y_t^\varepsilon$, and taking the conditional expectation $\mathbb{E}^\varepsilon_T[\cdot\big|\mathcal{F}_t]$ with respect to $\mathbb{P}^\varepsilon_T$, we obtain \begin{equation*} \begin{split}
Y_t^\varepsilon=&\mathbb{E}_T^\varepsilon[\int_t^Te^{-\lambda (s-t)}\big(\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^\varepsilon},\overline{Z}_s^{\lambda,x,u},v_s^\varepsilon)\big)ds\big|\mathcal{F}_t]
+\mathbb{E}_T^\varepsilon[e^{-\lambda(T-t)}Y_T^\varepsilon\big|\mathcal{F}_t]. \end{split} \end{equation*}
Let $t=0$. Since $|Y_T^\varepsilon|=|\overline{Y}_T^{\lambda,x,u}-\overline{Y}_T^{\lambda,x',v^\varepsilon}|\leq \frac{2M}{\lambda}$, it follows that \begin{equation*} \begin{split}
|Y_0^\varepsilon|\leq& \frac{2M}{\lambda}e^{-\lambda T}+\int_0^Te^{-\lambda s}\mathbb{E}[L_s^\varepsilon|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^\varepsilon},\overline{Z}_s^{\lambda,x,u},v_s^\varepsilon)|]ds\\
\leq& \frac{2M}{\lambda}e^{-\lambda T}+\frac{\overline{c}_0}{\lambda}(|x-x'|+\varepsilon),\ T\geq0. \end{split} \end{equation*} Here we have used the fact that due to the choice of $v^\varepsilon$ (\ref{r50}) is satisfied. Now letting $T$ tend to infinity we get \begin{equation}\label{lg2}
|\overline{Y}_0^{\lambda,x,u}-\overline{Y}_0^{\lambda,x',v^\varepsilon}|\leq \frac{\overline{c}_0}{\lambda}(|x-x'|+\varepsilon). \end{equation} Finally, from the arbitrariness of $u\in\mathcal{U}$ and $\varepsilon>0$ it follows that \begin{equation*}
|\lambda V_\lambda(x)-\lambda V_\lambda(x')|\leq \overline{c}_0|x-x'|. \end{equation*} \noindent Indeed, from (\ref{lg1}) and (\ref{lg2}) we have \begin{equation*}
\lambda V_\lambda(x)-\lambda V_\lambda(x')\geq\lambda(\overline{Y}_0^{\lambda,x,u}-\overline{Y}_0^{\lambda,x',v^\varepsilon})-\varepsilon\geq-\overline{c}_0|x-x'|-(\overline{c}_0+1)\varepsilon, \end{equation*}
and letting $\varepsilon\downarrow0$ yields $\lambda V_\lambda(x)-\lambda V_\lambda(x')\geq-\overline{c}_0|x-x'|$. The symmetry of the argument in $x$ and $x'$ gives the reverse inequality. This completes the proof. \end{proof}
\section{ {\protect \large Hamilton-Jacobi-Bellman equations}} Before studying, in the next section, the HJB equations associated with the stochastic control problem (\ref{r94}), we begin here with a more general discussion and consider a Hamiltonian $H:\mathbb{R}^N\times\mathbb{R}^N\times\mathcal{S}^N\to\mathbb{R}$ not necessarily related to a stochastic control problem. By $\mathcal{S}^N$ we denote the set of symmetric $N\times N$ matrices.
Let $H:\mathbb{R}^N\times\mathbb{R}^N\times\mathcal{S}^N\to\mathbb{R}$ be a uniformly continuous function satisfying the monotonicity assumption:\\ \noindent\textbf{$(A_H)$} (i) $H(x,p,A)\leq H(x,p,B),\ \text{for\ all}\ (x,p)\in\mathbb{R}^N\times\mathbb{R}^N,\ A,\ B\in\mathcal{S}^N\ \mbox{with}\ B\leq A$ (i.e., $A-B$ is positive semidefinite).\\
We consider the PDE \begin{equation}\label{r3.1} \lambda V(x)+H(x,DV(x),D^2V(x))=0,\ x\in\overline{\theta}. \end{equation} Let $V:\overline{\theta}\to\mathbb{R}$ be a bounded measurable function. We define \begin{equation*}
V^*(x)=\varlimsup\limits_{y\to x}V(y),\ V_*(x)=\varliminf\limits_{y\to x}V(y),\ x\in\overline{\theta}. \end{equation*} Then, $V^*:\overline{\theta}\to\mathbb{R}$ is upper semicontinuous (we write $V^*\in \mbox{USC}(\overline{\theta})$) and $V_*:\overline{\theta}\to\mathbb{R}$ is lower semicontinuous ($V_*\in \mbox{LSC}(\overline{\theta})$). \begin{definition}\label{def3.1} $V$ is a constrained viscosity solution of (\ref{r3.1}), if it solves \begin{equation*}
\lambda V(x)+H(x,DV(x),D^2V(x))=0,\ x\in\theta, \end{equation*} \begin{equation*}
\lambda V(x)+H(x,DV(x),D^2V(x))\geq0,\ x\in\partial\theta, \end{equation*} in viscosity sense, i.e., if\\ \rm{i)} \emph{$V$ is a viscosity subsolution of (\ref{r3.1}) on $\theta$, and}\\ \rm{ii)} \emph{$V$ is a viscosity supersolution of (\ref{r3.1}) on $\overline{\theta}$}. \end{definition} \begin{remark} Recall that\\ \rm{i)} \emph{$V$ is a viscosity subsolution of (\ref{r3.1}) on $\theta$, if for all $x\in\theta$ and all $\varphi\in C^2(\mathbb{R}^N)$ such that $V^*-\varphi$ achieves a local maximum on $\theta$ at $x$, it holds} \begin{equation*}
\lambda V^*(x)+H(x,D\varphi(x),D^2\varphi(x))\leq0; \end{equation*} \rm{ii)} \emph{$V$ is a viscosity supersolution of (\ref{r3.1}) on $\overline{\theta}$, if for all $x\in\overline{\theta}$ and all $\varphi\in C^2(\mathbb{R}^N)$ such that $V_*-\varphi$ achieves a local minimum on $\overline{\theta}$ at $x$, it holds} \begin{equation*}
\lambda V_*(x)+H(x,D\varphi(x),D^2\varphi(x))\geq0. \end{equation*} \emph{The reader may refer to Crandall, Ishii and Lions \cite{ISHII 1992}.} \end{remark} Existence and comparison results for the viscosity solution of (\ref{r3.1}) have been established (see Theorems 2.1, 2.2 and 3.1 in Katsoulakis \cite{Katsoulakis 1994}) under the additional assumptions:\\
\noindent\textbf{($A_\theta)$} There exists a bounded, uniformly continuous function $m:\overline{\theta}\rightarrow \mathbb{R}^N$ with $|m|\leq 1$ and a\\ \indent\ constant $r>0$ such that\\ \centerline{$B(x+sm(x),rs)\subset \theta , \ \text{for\ all}\ x\in \overline{\theta},\ s\in(0,r].$} \indent\ $B(x,s)\subset\mathbb{R}^N$ denotes the open ball with center at $x$ and radius $s$.\\ \noindent\textbf{($A_H)$} (ii) There is a continuity modulus $\rho: \mathbb{R}_+\to\mathbb{R}_+$, $\rho(0+)=0$, such that,\\
\indent\qquad\ $|H(x,p,A)-H(x,q,B)|\leq\rho(|p-q|+|A-B|),\ x,\ p,\ q\in\mathbb{R}^N, A,\ B\in\mathcal{S}^N,\ \mbox{and}$\\
\indent \qquad\ $|H(y,p,B)-H(x,p,A)|\leq\rho(\frac{1}{\varepsilon}|x-y|^2+|x-y|(|p|+1)),\ \mbox{for\ all}\ x,\ y,\ p\in\mathbb{R}^N,\ \varepsilon>0,$ \indent \qquad\ $A,\ B\in\mathcal{S}^N,$ such that $$-\frac{3}{\varepsilon}\begin{pmatrix} I&0\\ 0&I \end{pmatrix}\leq \begin{pmatrix} A&0\\ 0&B \end{pmatrix}\leq \frac{3}{\varepsilon}\begin{pmatrix} I&-I\\ -I&I \end{pmatrix},$$ \indent \qquad\ where $I$ denotes the unit matrix in $\mathbb{R}^{N\times N}$.\\
Under the above assumptions Katsoulakis \cite{Katsoulakis 1994} has shown the following theorems: \begin{theorem}\label{th1}(Comparison principle; Theorem 2.2 in \cite{Katsoulakis 1994}) Let $u\in USC(\overline{\theta})$ be a subsolution of (\ref{r3.1}) on $\theta$ and $v\in LSC(\overline{\theta})$ a supersolution of (\ref{r3.1}) on $\overline{\theta}$. Then $u\leq v$ on $\overline{\theta}$. \end{theorem} \begin{remark} In Theorem 2.2 in \cite{Katsoulakis 1994} the condition on $u$ is formulated in a slightly weaker way: $u\in USC(\theta)$ is nontangentially upper semicontinuous on $\partial\theta$. \end{remark} \begin{theorem}\label{th2}(Existence; Theorem 3.1 in \cite{Katsoulakis 1994}) If, in addition to the above assumptions, there is a bounded supersolution $\widetilde{v}\in LSC(\overline{\theta})$ of (\ref{r3.1}), then (\ref{r3.1}) has a constrained viscosity solution $v\in LSC(\overline{\theta})$; it is given by the smallest supersolution of (\ref{r3.1}) in $LSC(\overline{\theta})$. \end{theorem} \begin{remark} Let us suppose that $H$ satisfies the radial monotonicity assumption, which is introduced in Theorem \ref{th:3.3}. Then \begin{equation}\label{r3.2} H(x,p,A)\geq H(x,0,0),\ (x,p,A)\in\overline{\theta}\times\mathbb{R}^N\times\mathcal{S}^N\ (\mbox{see\ Lemma \ref{l:3.4}}). \end{equation} Furthermore, let $K\in\mathbb{R}$ be such that $K\geq-H(x,0,0),\ x\in\overline{\theta}$. Then one checks easily that $\widetilde{v}(x)=\frac{K}{\lambda},\ x\in\overline{\theta}$, is a viscosity supersolution of (\ref{r3.1}) on $\overline{\theta}$.\\ Indeed, if $x\in\overline{\theta}$ and $\varphi\in C^2(\mathbb{R}^N)$ is such that $\widetilde{v}-\varphi\geq\widetilde{v}(x)-\varphi(x)$ on $\overline{\theta}$, then clearly \begin{equation*}
\lambda \widetilde{v}(x)+ H(x,D\varphi(x),D^2\varphi(x))\geq 0,\ \ x\in\overline{\theta}, \end{equation*} with the help of (\ref{r3.2}). \end{remark}
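Spelled out, the verification at the end of the preceding remark is the following one-line chain, which uses only (\ref{r3.2}) and the choice of $K$ (here $\varphi$ is the test function from the definition of a supersolution):

```latex
\[
\lambda\widetilde{v}(x)+H(x,D\varphi(x),D^2\varphi(x))
\;\geq\;\lambda\cdot\frac{K}{\lambda}+H(x,0,0)
\;=\;K+H(x,0,0)\;\geq\;0,\qquad x\in\overline{\theta}.
\]
```

Note that the supersolution property of the constant function $\widetilde{v}$ does not use the minimum condition on $\widetilde{v}-\varphi$ at all; only (\ref{r3.2}) enters.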
Let us introduce the space $Lip_{M_0}(\overline{\theta})$ ($M_0>0$) as the space of all Lipschitz functions $u: \overline{\theta}\to\mathbb{R}$ with $|u(x)|\leq M_0$, $|u(x)-u(y)|\leq M_0|x-y|, x,y\in\overline{\theta},$ and let us suppose that, for $M_0>0$ large enough,\\ (H) PDE (\ref{r3.1}) has a solution $V_\lambda$ such that $\lambda V_\lambda\in Lip_{M_0}(\overline{\theta})$, for all $\lambda>0$. \begin{remark} Under the assumption of the existence of a bounded supersolution on $\overline{\theta}$, we have from Theorem \ref{th2} the existence of a viscosity solution $V_\lambda\in LSC(\overline{\theta})$. With (H) we suppose that $\lambda V_\lambda\in Lip_{M_0}(\overline{\theta})$. The uniqueness of this solution $V_\lambda$ with $\lambda V_\lambda\in Lip_{M_0}(\overline{\theta})$ is guaranteed by the comparison principle (Theorem \ref{th1}). Later, in the discussion of the case where the Hamiltonian $H$ is associated with the stochastic control problem (\ref{r94}), we will see that for such Hamiltonians PDE (\ref{r3.1}) has a unique constrained viscosity solution $V_\lambda$ with $\lambda V_\lambda\in Lip_{M_0}(\overline{\theta})$, for all $\lambda>0$. \end{remark} In what follows we work under the hypothesis (H).
Associated with our problem is the family of Hamiltonians \begin{equation*} H_\lambda(x,p,A):=\lambda H(x,\frac{1}{\lambda}p,\frac{1}{\lambda}A),\ (x,p,A)\in\overline{\theta}\times\mathbb{R}^N\times\mathcal{S}^N,\ \lambda>0, \end{equation*} where $H$ is supposed to satisfy ($A_H$). \begin{theorem}\label{th:3.3} We suppose that, in addition to \textbf{($A_\theta$)}, \textbf{($A_H$)} and (H), the Hamiltonian $H$ satisfies the radial monotonicity condition: \begin{equation}\label{r15} H(x,l p,l A)\geq H(x,p,A),\ \text{for\ all\ real}\ l\geq 1,\ (x,p,A)\in\overline{\theta}\times \mathbb{R}^N\times \mathcal{S}^N.\tag{H5} \end{equation} For all $\lambda >0$, let $V_\lambda$ be the constrained viscosity solution of PDE (\ref{r3.1}) such that $\lambda V_\lambda\in \mbox{Lip}_{M_0}(\overline{\theta})$. Then \\ \rm{(i)} \emph{$\lambda\mapsto\lambda V_\lambda(x)$ is nondecreasing, for every $x\in\overline{\theta}$;}\\ \rm{(ii)} \emph{The limit $\lim_{\lambda\rightarrow0^+}\lambda V_\lambda(x)$ exists, for every $x\in\overline{\theta}$;}\\ \rm{(iii)} \emph{The convergence in (ii) is uniform on $\overline{\theta}$.} \end{theorem} \begin{proof} First, we know that for every $\lambda >0$, $w_\lambda(x):=\lambda V_\lambda(x)$ is a constrained viscosity solution of \begin{equation}\label{r16} \lambda w_\lambda(x)+H_\lambda(x,D w_\lambda(x),D^2w_\lambda(x))=0. \end{equation} For any $\lambda,\ \mu>0$, we have $\displaystyle \frac{\lambda}{\mu}H_\mu(x,\frac{\mu}{\lambda}p,\frac{\mu}{\lambda}A)=\frac{\lambda}{\mu}(\mu H(x,\frac{\mu}{\lambda}(\frac{1}{\mu}p),\frac{\mu}{\lambda}(\frac{1}{\mu}A)))=H_\lambda(x,p,A). $\\ Using the radial monotonicity condition (\ref{r15}) we have, for any $\mu>\lambda>0$, in viscosity sense, \begin{eqnarray*} \begin{array}{lll}
&\displaystyle\lambda w_\mu(x)+H_\lambda(x,D w_\mu(x),D^2w_\mu(x)) = \mu\cdot\frac{\lambda}{\mu} w_\mu(x)+\frac{\lambda}{\mu}H_\mu(x,\frac{\mu}{\lambda}D w_\mu(x),\frac{\mu}{\lambda}D^2w_\mu(x)) \\
&\displaystyle =\frac{\lambda}{\mu}[\mu w_\mu(x)+\mu H(x,\frac{\mu}{\lambda}(\frac{1}{\mu}D w_\mu(x)),\frac{\mu}{\lambda}(\frac{1}{\mu}D^2w_\mu(x)))] \\
&\displaystyle \geq\frac{\lambda}{\mu}[\mu w_\mu(x)+\mu H(x,\frac{1}{\mu}D w_\mu(x),\frac{1}{\mu}D^2w_\mu(x))] \\
&\displaystyle = \frac{\lambda}{\mu}(\mu w_\mu(x)+H_\mu(x,Dw_\mu(x),D^2w_\mu(x)))\geq0,\ x\in\overline{\theta}.
\end{array} \end{eqnarray*} Therefore, $w_\mu\in Lip_{M_0}(\overline{\theta})$ is a viscosity supersolution to (\ref{r16}) on $\overline{\theta}$. From the comparison principle (Theorem \ref{th1}), $w_\mu\geq w_\lambda$ on $\overline{\theta}$. Statement (ii) follows from (i) and the boundedness of $\lambda V_\lambda$, $\lambda>0$, while thanks to the fact that $\lambda V_\lambda\in \mbox{Lip}_{M_0}(\overline{\theta}),\ \lambda>0$, the Arzel\`{a}-Ascoli Theorem yields (iii). \end{proof} \begin{lemma}\label{l:3.4} Let $H(x,p,A)$ be convex in $(p,A)\in\mathbb{R}^N\times\mathcal{S}^N$. Then we have the following equivalence:\\ \rm{i)}\ \emph{The radial monotonicity (\ref{r15}) holds true for $H(x,\cdot,\cdot)$;}\\ \rm{ii)}\ $H(x,l'p,l'A)\geq H(x,lp,lA),\ 0\leq l\leq l',\ (p,A)\in\mathbb{R}^N\times\mathcal{S}^N$;\\ \rm{iii)}\ $H(x,p,A)\geq H(x,0,0),\ (p,A)\in\mathbb{R}^N\times\mathcal{S}^N$. \end{lemma} \begin{proof} Indeed, i) and ii) are obviously equivalent. Moreover, ii) implies iii) (take $l'=1$ and $l=0$). Thus, it remains only to show that iii) implies ii). To this end, given any $(p,A)\in\mathbb{R}^N\times\mathcal{S}^N$, we consider the function $G(l):=H(x,lp,lA),\ l\geq0$. From the convexity of $H(x,\cdot,\cdot)$ the convexity of $G$ follows, and iii) implies that $G(l)\geq G(0),\ l\geq0$. Consequently, for $l'\geq l\geq0$ and $\displaystyle k:=\frac{l}{l'}\in[0,1]$, \begin{equation*}
H(x,lp,lA)=G(l'k)=G(l'k+(1-k)0)\leq kG(l')+(1-k)G(0)\leq kG(l')+(1-k)G(l')=H(x,l'p,l'A). \end{equation*} \end{proof} \begin{remark} We suppose that $H$ is of the form (\ref{r20}) and $(-\psi)(x,\cdot,u)=\{z\mapsto(-\psi)(x,z,u)\}$ is convex, for all $(x,u)\in\overline{\theta}\times U$. Then $H(x,p,A)$ is convex in $(p,A)$, for all $x\in\overline{\theta}$.
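This convexity claim can be seen as follows: for each fixed $u\in U$, the function under the supremum in (\ref{r20}),

```latex
\[
(p,A)\;\longmapsto\;\langle -p,b(x,u)\rangle-\frac{1}{2}tr(\sigma\sigma^*(x,u)A)-\psi(x,p\sigma(x,u),u),
\]
```

is the sum of a function linear in $(p,A)$ and the composition of the convex function $z\mapsto(-\psi)(x,z,u)$ with the linear map $p\mapsto p\sigma(x,u)$, hence convex; and a pointwise supremum of convex functions is again convex.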
Under the additional assumption of the existence of some $u_0\in U$ such that $b(x,u_0)=0,\ \sigma(x,u_0)=0$ and $\psi(x,0,u)\geq\psi(x,0,u_0)$, for all $u\in U$, we have \begin{equation*} \begin{split}
&H(x,p,A)=\mathop{\rm sup}\limits_{u\in U}\{\langle-p,b(x,u)\rangle-\frac{1}{2}tr(\sigma\sigma^*(x,u)A)-\psi(x,p\sigma(x,u),u)\}\\
&\geq\langle-p,b(x,u_0)\rangle-\frac{1}{2}tr(\sigma\sigma^*(x,u_0)A)-\psi(x,p\sigma(x,u_0),u_0)\\
&=-\psi(x,0,u_0)=\mathop{\rm sup}\limits_{u\in U}\{-\psi(x,0,u)\}=H(x,0,0),\ \ (p,A)\in\mathbb{R}^N\times\mathcal{S}^N. \end{split} \end{equation*} Then Lemma \ref{l:3.4} yields that $H$ satisfies the radial monotonicity condition.
However, for $H$ of the form (\ref{r20}) with the sole assumption that $(-\psi)(x,z,u)$ is convex in $z$, we do not, in general, have the radial monotonicity.
Indeed, for example, if, for some $\varepsilon>0$ and $x\in\overline{\theta}$, $\sigma\sigma^*(x,u)\geq\varepsilon I,\ u\in U$, then \begin{equation*} \begin{split}
& H(x,0,A)=\mathop{\rm sup}\limits_{u\in U}\{-\frac{1}{2}tr(\sigma\sigma^*(x,u)A)-\psi(x,0,u)\}\\
&\leq-\frac{1}{2}\varepsilon tr(A)+\mathop{\rm sup}\limits_{u\in U}\{-\psi(x,0,u)\}=-\frac{1}{2}\varepsilon tr(A)+H(x,0,0)\\
& < H(x,0,0), \ \mbox{for\ all}\ A\in\mathcal{S}^N \ \mbox{with}\ A\geq0\ \mbox{and}\ tr(A)>0. \end{split} \end{equation*} \end{remark} Under the assumptions of Theorem \ref{th:3.3} we let $w_0(x)=\lim\limits_{\lambda\rightarrow0^+}\lambda V_\lambda(x)$, $x\in\overline{\theta}$. Next we will characterize $w_0(x)$, under the condition of radial monotonicity of $H$, as the maximal viscosity subsolution of the PDE \begin{equation}\label{r3.4}
W(x)+\overline{H}(x,DW(x),D^2W(x))=0,\ x\in\theta, \end{equation} where $\overline{H}(x,p,A):=\min\{M_0,\mathop{\rm sup}\limits_{l>0}H(x,lp,lA)\}$. \begin{remark} As $H:\overline{\theta}\times\mathbb{R}^N\times\mathcal{S}^N\to\mathbb{R}$ is continuous and $\mathop{\rm sup}\limits_{l>0}H(x,lp,lA)=\lim\limits_{l\to\infty}\uparrow H(x,lp,lA)$, the function $\overline{H}$ is lower semicontinuous. Recall that a function $W\in USC(\overline{\theta})$ is a viscosity subsolution of (\ref{r3.4}) on $\theta$, if for all $x\in\theta$, $\varphi\in C^2(\mathbb{R}^N)$ such that $W-\varphi\leq W(x)-\varphi(x)$ on $\theta$, $$W(x)+\overline{H}(x,D\varphi(x),D^2\varphi(x))\leq0.$$ \end{remark} \begin{theorem}\label{the:3.4} We make the same assumptions as in Theorem \ref{th:3.3}. For all $\lambda>0$, let $V_\lambda$ be the unique constrained viscosity solution of the PDE \begin{equation}\label{r99} \lambda V(x)+H(x,DV(x),D^2V(x))=0,\ x\in \overline{\theta}, \end{equation} such that $\lambda V_\lambda\in \mbox{Lip}_{M_0}(\overline{\theta})$, for some $M_0> 0$ large enough and independent of $\lambda$. Then, $w_0(x):=\lim\limits_{\lambda\to0^+}\lambda V_\lambda(x)$, $x\in\overline{\theta}$, is the maximal viscosity subsolution of (\ref{r3.4}), $$w_0(x)=\mathop{\rm sup}\{w(x):w\in \mbox{Lip}_{M_0}(\overline{\theta}), w+\overline{H}(x,Dw,D^2w)\leq 0 \ \mbox{on} \ \theta\ (\mbox{in\ viscosity\ sense})\},\ x\in\overline{\theta},$$ where $\overline{H}(x,p,A):=\min \Big\{M_0, \mathop{\rm sup}\limits_{l>0}H(x,l p,l A)\Big\}$. \end{theorem} \begin{proof} We define the set $$\mathcal{S}_{H,M_0}=\{w: w\in \mbox{Lip}_{M_0}(\overline{\theta}),\ w+\overline{H}(x,Dw,D^2w)\leq 0 \ \text{on} \ \theta\ (\mbox{in\ viscosity\ sense})\},$$ and we set $\bar{w}(x)=\mathop{\rm sup}\{w(x), w\in\mathcal{S}_{H,M_0}\}.$\\ \textbf{Step 1}. We show that $w_0$ is a viscosity subsolution of (\ref{r3.4}), which implies that $w_0\in\mathcal{S}_{H,M_0}$ and, thus, $w_0\leq \bar{w}$.\\ \textbf{Step 1.1}. 
We first prove that $w_\lambda=\lambda V_\lambda(x)\in Lip_{M_0}(\overline{\theta})$ is also a constrained viscosity solution of the equation \begin{equation}\label{rl7}
w(x)+H^{M_0}(x,\frac{1}{\lambda}Dw(x),\frac{1}{\lambda}D^2w(x))=0,\ x\in\overline{\theta}, \end{equation} where $H^{M_0}(x,p,A):=\min\{M_0, H (x,p,A)\},\ \text{for\ all}\ (x,p,A)\in \overline{\theta}\times \mathbb{R}^N\times \mathcal{S}^N.$
In fact, let $x\in\theta$ and $\phi\in C^2(\mathbb{R}^N)$ be such that $(w_\lambda-\phi)(x)=\max\{(w_\lambda-\phi)(\overline{x}),\ \overline{x}\in\overline{\theta}\}$. Then, as $V_\lambda$ is a constrained viscosity solution of (\ref{r99}) and $w_\lambda=\lambda V_\lambda$, we have $$w_\lambda(x)+H(x,\frac{1}{\lambda}D\phi(x),\frac{1}{\lambda}D^2\phi(x))\leq 0.$$ Furthermore, from $w_\lambda\in \mbox{Lip}_{M_0}(\overline{\theta})$ we get $$H(x,\frac{1}{\lambda}D\phi(x),\frac{1}{\lambda}D^2\phi(x))\leq -w_\lambda(x)\leq M_0.$$ It follows that $$w_\lambda(x)+H^{M_0}(x,\frac{1}{\lambda}D\phi(x),\frac{1}{\lambda}D^2\phi(x))=w_\lambda(x)+H(x,\frac{1}{\lambda}D\phi(x),\frac{1}{\lambda}D^2\phi(x))\leq 0,$$ i.e., $w_\lambda$ is a viscosity subsolution of (\ref{rl7}) on $\theta$.
Next we show that $w_\lambda$ is also a supersolution on $\overline{\theta}$. Let $x\in\overline{\theta}$ and $\varphi\in C^2(\mathbb{R}^N)$ be such that $(w_\lambda-\varphi)(x)=\min\{(w_\lambda-\varphi)(\overline{x}), \overline{x}\in\overline{\theta}\}$. Obviously, from (\ref{r99}) and the fact that $w_\lambda\in\mbox{Lip}_{M_0}(\overline{\theta})$ we have both of the following inequalities, \begin{equation*} \left\{ \begin{array}{lll} w_\lambda(x)+H(x,\frac{1}{\lambda}D\varphi(x),\frac{1}{\lambda}D^2\varphi(x))\geq 0,\\ w_\lambda(x)+M_0\geq 0. \end{array} \right. \end{equation*} Thus, $w_\lambda(x)+H^{M_0}(x,\frac{1}{\lambda}D\varphi(x),\frac{1}{\lambda}D^2\varphi(x))\geq 0$, which implies (\ref{rl7}).\\ \textbf{Step 1.2}. Now we show $w_0\in\mathcal{S}_{H,M_0}$, i.e., $w_0\in\mbox{Lip}_{M_0}(\overline{\theta})$ and \begin{equation}\label{r18} w_0+\overline{H}(x,Dw_0,D^2w_0)\leq 0\ \mbox{in} \ \theta, \end{equation} in viscosity sense.
Indeed, let us fix $l>0$. Then (\ref{r15}) and (\ref{rl7}) yield, for any $0<\lambda\leq\frac{1}{l}$, $$w_\lambda+H^{M_0}(x,l Dw_\lambda,l D^2w_\lambda)\leq w_\lambda+H^{M_0}(x,\frac{1}{\lambda}Dw_\lambda,\frac{1}{\lambda}D^2w_\lambda)=0 \ \ \mbox{in} \ \theta,$$ in viscosity sense. Recall that $w_\lambda\in\mbox{Lip}_{M_0}(\overline{\theta}), \lambda>0$, and that $w_0$ is the uniform limit of $w_\lambda$, as $\lambda\downarrow0$. Consequently, $w_0\in Lip_{M_0}(\overline{\theta})$. Moreover, by the result that the uniform limit of subsolutions is a subsolution again, we conclude that, in viscosity sense, $$w_0+H^{M_0}(x,l Dw_0,l D^2w_0)\leq 0, \ l>0.$$ Finally, taking the supremum with respect to $l>0$ over the increasing left-hand side, it follows that (\ref{r18}) holds.
Consequently, $w_0\in\mathcal{S}_{H,M_0}$ and thus $w_0\leq \bar{w}$.\\ \textbf{Step 2}. Notice that also $\bar{w}\in\mbox{Lip}_{M_0}(\overline{\theta})$. Thus, in order to prove that $w_0\geq \bar{w}$, we need to check that \begin{equation}\label{r19}
\bar{w}+\overline{H}(x,D\bar{w},D^2\bar{w})\leq 0 \ \mbox{in} \ \theta. \end{equation}
The above property of the upper envelope of a bounded family of subsolutions is well known when $H$ is continuous and can be extended to $\overline{H}$. Let $x\in\theta$ and $\phi\in C^2(\mathbb{R}^N)$ be such that $(\bar{w}-\phi)(x)=\max\{(\bar{w}-\phi)(\overline{x}), \overline{x}\in\overline{\theta}\}$. By adding a constant to $\phi$ one can assume that $\bar{w}(x)=\phi(x)$. For $\varepsilon>0$ we put $\phi_\varepsilon(y)=\phi(y)+\varepsilon|x-y|^2$, $y\in\mathbb{R}^N$. Then, $$(\bar{w}-\phi_\varepsilon)(y)\leq -\varepsilon|y-x|^2,$$ for every $y\in\overline{\theta}$, and hence, in particular, on some closed ball $B_r(x)\subseteq\theta$. Thus, by the very definition of $\bar{w}$, there exists a sequence $\{w^n\}_n\subseteq \mathcal{S}_{H,M_0}$ such that $w^n(x)\geq \bar{w}(x)-\frac{1}{n}$ for all $n\geq 1$. Let $x_n^\varepsilon$ be a maximum point of $w^n-\phi_\varepsilon$ over $B_r(x)$. Then we have that
$$-\frac{1}{n}\leq (w^n-\phi_\varepsilon)(x)\leq (w^n-\phi_\varepsilon)(x_n^\varepsilon)\leq -\varepsilon|x_n^\varepsilon-x|^2.$$ Consequently, $x_n^\varepsilon\rightarrow x$ and $(w^n-\phi_\varepsilon)(x_n^\varepsilon)\rightarrow 0$, as $n\rightarrow\infty$. Therefore, $w^n(x_n^\varepsilon)\rightarrow \bar{w}(x)$, as $n\rightarrow\infty$. Moreover, for all $l >0$ and $n$ large enough, we have \begin{equation}\label{th3.4.5} w^n(x_n^\varepsilon)+H^{M_0}(x_n^\varepsilon,l D\phi_\varepsilon(x_n^\varepsilon),l D^2\phi_\varepsilon(x_n^\varepsilon))\leq w^n(x_n^\varepsilon)+\overline{H}(x_n^\varepsilon,D\phi_\varepsilon(x_n^\varepsilon),D^2\phi_\varepsilon(x_n^\varepsilon))\leq 0. \end{equation} On the other hand, \begin{equation*}
\overline{H}(x,p,A)=\lim\limits_{l\to\infty}\uparrow H^{M_0}(x,lp,lA),\ (x,p,A)\in \mathbb{R}^N\times\mathbb{R}^N\times\mathcal{S}^N. \end{equation*} Passing in (\ref{th3.4.5}) to the limit as $n\to \infty$ yields $\bar{w}(x)+H^{M_0}(x,l D\phi_\varepsilon(x),l D^2\phi_\varepsilon(x))\leq 0$, which in turn implies $$\bar{w}(x)+\overline{H}(x,D\phi(x),D^2\phi(x))\leq 0,$$ by taking first the limit as $\varepsilon\to 0$ and then the supremum over $l>0$, i.e., we have shown (\ref{r19}).
Now, with the same $\phi$ and the same $x\in\theta$ as above, from (\ref{r19}) it follows that, for any $\lambda>0$, $$\bar{w}(x)+H^{M_0}(x,\frac{1}{\lambda}D\phi(x),\frac{1}{\lambda}D^2\phi(x))\leq \bar{w}(x)+\overline{H}(x,D\phi(x),D^2\phi(x))\leq 0,$$ i.e., $\bar{w}$ is a (continuous) viscosity subsolution of (\ref{rl7}) in $\theta$. Since $w_\lambda$ is a continuous constrained viscosity solution of (\ref{rl7}), from Theorem \ref{th1} (comparison principle) we have that $$\bar{w}(x)\leq w_\lambda(x), \ \mbox{for\ all}\ x\in\overline{\theta}.$$ Taking the limit as $\lambda\rightarrow 0^+$ yields $w_0\geq \bar{w}$ and completes the proof.\\ \end{proof} We now give an application of the above Theorem \ref{the:3.4}, which generalizes the results in \cite{Marc 2015}.
For this, recall that the second order superjet at $x\in\overline{\theta}$ of a function $u\in USC(\overline{\theta})$ is defined by \begin{equation*}
J^{2,+}u(x)=\{(D\varphi(x),D^2\varphi(x)): \varphi\in C^2(\mathbb{R}^N), u-\varphi\leq u(x)-\varphi(x)\ \mbox{on}\ \overline{\theta}\}, \end{equation*} while, for $v\in LSC(\overline{\theta})$, \begin{equation*}
J^{2,-}v(x)=\{-(p,A)|(p,A)\in J^{2,+}(-v)(x)\} \end{equation*} defines the second order subjet.
\begin{corollary}\label{c1} Under the same assumptions in Theorem \ref{the:3.4}, we have, for all $x\in\theta$, \begin{equation*}
\{(p,A)\in J^{2,+}w_0(x): \mathop{\rm sup}\limits_{l>0}H(x,lp,lA)=+\infty\}=\emptyset. \end{equation*} \end{corollary} \begin{proof} Assume that, for some $x\in\theta$, $(p,A)\in J^{2,+}w_0(x)$ and \begin{equation}\label{b1}
\mathop{\rm sup}\limits_{l>0}H(x,lp,lA)=+\infty. \end{equation} From (\ref{b1}), $\overline{H}(x,p,A)=M_0$. Furthermore, from (\ref{r18}), in viscosity sense, \begin{equation*}
w_0(x)+\overline{H}(x,Dw_0(x),D^2w_0(x))\leq0,\ x\in\theta. \end{equation*}
Consequently, since $w_0\in Lip_{M_0}(\overline{\theta})$ we get $0\geq w_0(x)+\overline{H}(x,p,A)=w_0(x)+M_0$. Therefore, $w_0(x)\leq-M_0$. But, on the other hand $w_0\in Lip_{M_0}(\overline{\theta})$ implies $|w_0(x)|\leq M_0$. Hence, $w_0(x)=-M_0$. As for all $M\geq M_0$, $w_0\in\mathcal{S}_{H,M}$ and $w_0\in Lip_{M}(\overline{\theta})$, the same argument also gives $w_0(x)=-M$. This is a contradiction, and it follows that there cannot exist $(p,A)\in J^{2,+}w_0(x)$ with $\mathop{\rm sup}\limits_{l>0}H(x,lp,lA)=+\infty$. \end{proof} \begin{corollary}\label{c2} Under the same assumptions in Theorem \ref{the:3.4} we suppose that, for all $x\in\theta, (p,A)\in(\mathbb{R}^N\backslash\{0\})\times\mathcal{S}^N$, $\mathop{\rm sup}\limits_{l>0}H(x,lp,lA)=+\infty$. Then, $w_0$ is a constant on $\overline{\theta}$. \end{corollary} \begin{proof} Let $x_0\in \theta$ and $\varphi_\varepsilon\in C^2(\mathbb{R}^N)$, $\varepsilon>0$, be such that
i) $\varphi_\varepsilon(x)=\frac{\varepsilon}{2}|x-x_0|^2$, for $x\in\theta$ with $\mbox{dist}(x,\partial\theta)\geq\frac{1}{2}\,\mbox{dist}(x_0,\partial\theta)\ (>0)$, and $\varphi_\varepsilon(x)\geq3M_0$, for $x\in \theta^C$,
ii) $D{\varphi_\varepsilon}(x)\neq0,\ x\in\mathbb{R}^N\setminus\{x_0\},$
iii) $\varphi_\varepsilon(x)\to0\ (\varepsilon\downarrow0),$ for all $x\in\theta$.
\noindent For $\psi_\varepsilon(x)=w_0(x)-\varphi_\varepsilon(x)$, $x\in\overline{\theta}$, let $x_\varepsilon\in\overline{\theta}$ be a maximum point of $\psi_\varepsilon$. Since $\psi_\varepsilon(x')\leq w_0(x')-3M_0\leq-2M_0$ for all $x'\in\partial\theta$ (recall that $w_0\in \mbox{Lip}_{M_0}(\overline{\theta})$), while $\psi_\varepsilon(x_\varepsilon)\geq\psi_\varepsilon(x_0)=w_0(x_0)\geq-M_0$, it follows that $x_\varepsilon\in\theta$. Since $(D\varphi_\varepsilon(x_\varepsilon),D^2\varphi_\varepsilon(x_\varepsilon))\in J^{2,+}w_0(x_\varepsilon)$, we get from Corollary \ref{c1} that $\mathop{\rm sup}\limits_{l>0}H(x_\varepsilon,lD\varphi_\varepsilon(x_\varepsilon),lD^2\varphi_\varepsilon(x_\varepsilon))<+\infty$.
Hence, from the assumptions of Corollary \ref{c2}, $D\varphi_\varepsilon(x_\varepsilon)=0$, i.e., $x_\varepsilon=x_0,\ \varepsilon>0$. Consequently, $$w_0(x_0)=\psi_\varepsilon(x_0)\geq\psi_\varepsilon(x)=w_0(x)-\varphi_\varepsilon(x)\to w_0(x),\ \mbox{as}\ \varepsilon\to 0,\ \mbox{for\ all}\ x\in \theta.$$ Then, from the arbitrariness of $x_0\in\theta$ it follows that $w_0(x)=w_0(x_0)$, for all $x\in\theta$, and from the continuity of $w_0$ on $\overline{\theta}$ we, finally, have $w_0(x)=w_0(x_0), x\in\overline{\theta}$.
\end{proof}
\section{ {\protect \large Convergence problem for the optimal control}} After the more general discussion in the previous section, we now consider the Hamiltonian $H$ of the form \begin{equation}\label{r20}
H(x,p,A):=\mathop{\rm sup}\limits_{u\in U}\{{\langle -p,b(x,u)\rangle}-\frac{1}{2}tr(\sigma\sigma^*(x,u)A)-\psi(x,p\sigma(x,u),u)\}, \ (x,p,A)\in\mathbb{R}^N\times\mathbb{R}^N\times\mathcal{S}^N. \end{equation} \begin{proposition}\label{th:3.3.1} Under the assumptions (\ref{r1}), (\ref{r48}) and (\ref{r49}) the value function $V_\lambda$ defined by (\ref{r94}) is a viscosity solution of the Hamilton-Jacobi-Bellman equation \begin{equation}\label{r8} \lambda V(x)+H(x,DV(x),D^2V(x))=0,\ x\in\overline{\theta}, \end{equation} where $H(x,p,A)$ is defined by (\ref{r20}). \end{proposition} \begin{remark} Proposition \ref{th:3.3.1} shows that $V_\lambda$ defined by (\ref{r94}) is in $Lip_{\frac{M_0}{\lambda}}(\overline{\theta})$ and is a viscosity solution on $\overline{\theta}$ of (\ref{r8}), i.e., a super- but also a subsolution on $\overline{\theta}$. Thus, $V_\lambda$ is, in particular, also a constrained viscosity solution of (\ref{r8}); but, unlike a general constrained viscosity solution, $V_\lambda$ also satisfies, in viscosity sense, \begin{equation*}
\lambda V_\lambda(x)+H(x,DV_\lambda(x),D^2V_\lambda(x))\leq0,\ x\in\partial\theta, \end{equation*} i.e., for all $x\in\partial\theta$ and $\varphi\in C^2(\mathbb{R}^N)$ with $V_\lambda-\varphi\leq V_\lambda(x)-\varphi(x)$, it holds \begin{equation*}
\lambda V_\lambda(x)+H(x,D\varphi(x),D^2\varphi(x))\leq0. \end{equation*}
\end{remark}
The proof of Proposition \ref{th:3.3.1} uses Peng's BSDE method developed in \cite{Peng 1997}. To prove the proposition we need the dynamic programming principle (DPP) and the following three lemmas based on the notion of stochastic backward semigroups introduced by Peng \cite{Peng 1997}.
Given the initial value $x$ at time $t=0$ of SDE (\ref{r2}), $u(\cdot)\in\mathcal{U}$ and $\eta\in L^2(\Omega,\mathcal{F}_t,\mathbb{P})$, we define a stochastic backward semigroup: For given $\lambda>0,\ x\in\overline{\theta},\ u\in\mathcal{U},\ t\in\mathbb{R}_+$, we put \begin{equation*}
G_{s,t}^{\lambda,x,u}[\eta]:=Y_s^\eta,\ s\in[0,t],\ \eta\in L^2(\Omega,\mathcal{F}_t,\mathbb{P}), \end{equation*} where $(Y_s^\eta)_{s\in[0,t]}$ is the unique solution of the BSDE
\begin{equation*}
\left\{ \begin{array}{ll} dY_s^\eta=-(\psi(X_s^{x,u},Z_s^\eta,u_s)-\lambda Y_s^\eta)ds+Z_s^\eta dW_s,\ s\in[0,t],\\ Y_t^\eta=\eta. \end{array} \right.
\end{equation*} \begin{proposition}\label{dpp}(DPP)
Under the assumptions (\ref{r1}), (\ref{r48}) and (\ref{r49}), for all $\lambda>0,\ x\in\mathbb{R}^N$ and $t\geq0$, it holds
\begin{equation*}
V_\lambda(x)=\inf\limits_{u\in\mathcal{U}}G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})].
\end{equation*} \end{proposition} The proof of the DPP uses arguments which are by now rather standard for control problems with a finite time horizon (see, e.g., \cite{Buckdahn 2008}). Here, however, the time horizon is infinite, and for the reader's convenience we give the proof in the Appendix. Let us now give three auxiliary lemmas.
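Let us also record a property of the backward semigroup that will be used in the proof of Proposition \ref{th:3.3.1}: since the driver $(y,z)\mapsto\psi(X_s^{x,u},z,u_s)-\lambda y$ is Lipschitz, the comparison theorem for BSDEs yields the monotonicity

```latex
\[
\eta\leq\eta'\ \ \mathbb{P}\mbox{-a.s.}\quad\Longrightarrow\quad
G_{s,t}^{\lambda,x,u}[\eta]\leq G_{s,t}^{\lambda,x,u}[\eta'],\quad s\in[0,t],
\]
```

for all $\eta,\ \eta'\in L^2(\Omega,\mathcal{F}_t,\mathbb{P})$.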
For this, given a test function $\varphi\in C^3(\mathbb{R}^N)$, we define, for all $(x,y,z,u)\in\mathbb{R}^N\times\mathbb{R}\times\mathbb{R}^d\times U,$ \begin{equation*}
\Phi(x,y,z,u):=\langle D\varphi(x),b(x,u)\rangle+\frac{1}{2}tr(\sigma\sigma^*(x,u)D^2\varphi(x))+\psi(x,z+D\varphi(x)\sigma(x,u),u)-\lambda\cdot(y+\varphi(x)). \end{equation*} Let $(Y_s^{1,u},Z_s^{1,u})_{s\in[0,t]}\in\mathcal{S}_{\mathbb{F}}^2([0,t];\mathbb{R})\times\mathcal{H}^2_{\mathbb{F}}([0,t];\mathbb{R}^d)\footnotemark[1]$ \footnotetext[1]{$\mathcal{S}_{\mathbb{F}}^2([0,t];\mathbb{R}):=\{\phi=(\phi_{s})_{s\in[0,t]}:(\phi_{s\wedge t})_{s\geq0}\in\mathcal{S}_{\mathbb{F}}^2(\mathbb{R})\}$, $\mathcal{H}^2_{\mathbb{F}}([0,t];\mathbb{R}^d):=\{\varphi=(\varphi_s)_{s\in[0,t]}:(\varphi_s1_{[0,t]}(s))_{s\geq0}\in\mathcal{H}^2_{\mathbb{F}}(\mathbb{R}^d)\}.$} be the unique solution of the BSDE \begin{equation}\label{r95} \left\{ \begin{array}{ll} dY_s^{1,u}=-\Phi(X_s^{x,u},Y_s^{1,u},Z_s^{1,u},u_s)ds+Z_s^{1,u}dW_s,\ s\in[0,t],\\ Y_t^{1,u}=0. \end{array} \right. \end{equation} As $\Phi(x,\cdot,\cdot,u)$ is Lipschitz, uniformly in $(x,u)$, and $\Phi(x,0,0,u)$ is bounded on $\overline{\theta}\times U$, the existence and the uniqueness are by now standard. \begin{lemma}\label{lem1} $Y_s^{1,u}=G_{s,t}^{\lambda,x,u}[\varphi(X_t^{x,u})]-\varphi(X_s^{x,u}),\ s\in[0,t].$ \end{lemma} \begin{proof} Notice that $G_{s,t}^{\lambda,x,u}[\varphi(X_t^{x,u})]$ is defined by the solution of the following BSDE \begin{equation*}
\left\{
\begin{array}{ll} dY_s^{\varphi}=-( \psi(X_s^{x,u},Z_s^{\varphi},u_s)-\lambda Y_s^\varphi)ds+Z_s^{\varphi}dW_s,\ s\in[0,t],\\ Y_t^{\varphi}=\varphi(X_t^{x,u}), \end{array}
\right. \end{equation*} that is \begin{equation*}
G_{s,t}^{\lambda,x,u}[\varphi(X_t^{x,u})]=Y_s^{\varphi},\ s\in[0,t]. \end{equation*} We only need to prove that $Y_s^{\varphi}-\varphi(X_s^{x,u})= Y_s^{1,u}$, $s\in[0,t]$. Applying It\^{o}'s formula to $\varphi(X_s^{x,u})$, it is obvious that $d(Y_s^{\varphi}-\varphi(X_s^{x,u}))= dY_s^{1,u}$. As $Y_t^{\varphi}-\varphi(X_t^{x,u})=0=Y_t^{1,u}$, it follows that $Y_s^{\varphi}-\varphi(X_s^{x,u})= Y_s^{1,u}$, $s\in[0,t]$. \end{proof} Now we consider BSDE (\ref{r95}) in which $X_s^{x,u}$ is replaced by its initial condition $X_0^{x,u}=x$: \begin{equation}\label{r96} \left\{ \begin{array}{ll} dY_s^{2,u}=-\Phi(x,Y_s^{2,u},Z_s^{2,u},u_s)ds+Z_s^{2,u}dW_s,\ s\in[0,t],\\ Y_t^{2,u}=0. \end{array} \right. \end{equation} It has a unique solution $(Y^{2,u},Z^{2,u})\in\mathcal{S}_{\mathbb{F}}^2([0,t];\mathbb{R})\times\mathcal{H}_{\mathbb{F}}^2([0,t];\mathbb{R}^d)$. \begin{lemma}\label{lem2}
We have $|Y_0^{1,u}-Y_0^{2,u}|\leq ct^{\frac{3}{2}},\ \mbox{for all}\ t\in [0, T],\ u\in\mathcal{U}$, where $c\in\mathbb{R}_+$ is independent of $u\in\mathcal{U}$ and depends only on $T>0$. \end{lemma} \begin{proof} As $\overline{\theta}\subset\mathbb{R}^N$ is compact, $\varphi, D\varphi$ and $D^2\varphi$ are bounded and Lipschitz on $\overline{\theta}$. Combined with the boundedness and the Lipschitz property of $b(\cdot,u)$ and $\sigma(\cdot,u)$, both uniform with respect to $u\in U$, this has the consequence that $\overline{\theta}\ni x\mapsto\Phi(x,y,z,u)$ is Lipschitz, uniformly in $(y,z,u)$. Then, using standard BSDE and SDE estimates, we get \begin{equation*} \begin{array}{lll}
&\displaystyle|Y_0^{1,u}-Y_0^{2,u}|^2\leq\mathbb{E}[|Y_0^{1,u}-Y_0^{2,u}|^2+\int_0^t|Z_s^{1,u}-Z_s^{2,u}|^2ds]\\
\leq&\displaystyle c\mathbb{E}[(\int_0^t|\Phi(X_s^{x,u},Y_s^{2,u},Z_s^{2,u},u_s)-\Phi(x,Y_s^{2,u},Z_s^{2,u},u_s)|ds)^2]\\
\leq&\displaystyle ct^2\mathbb{E}[\mathop{\rm sup}\limits_{s\in[0,t]}|X_s^{x,u}-x|^2]\leq ct^3.\\ \end{array} \end{equation*} \end{proof} We now define $\overline{\Phi}(x,y,z):=\inf\limits_{u\in U}\Phi(x,y,z,u),\ (x,y,z)\in\overline{\theta}\times\mathbb{R}\times\mathbb{R}^d.$ Note that $\overline{\Phi}(x,y,z)=\overline{\Phi}(x,0,z)-\lambda y$ and that $(x,y,z)\rightarrow\overline{\Phi}(x,y,z)$ is Lipschitz. We consider the following ODE \begin{equation}\label{r97} \left\{ \begin{array}{ll}
dY_s^0=-\overline{\Phi}(x,Y_s^0,0)ds,\ s\in[0,t],\\
Y_t^0=0. \end{array} \right. \end{equation} \begin{remark} As $\overline{\Phi}(x,y,z)=\overline{\Phi}(x,0,z)-\lambda y$, the unique solution of (\ref{r97}) is given by \begin{equation*}
Y_s^0=\int_s^te^{-\lambda(r-s)}dr\cdot\overline{\Phi}(x,0,0),\ s\in[0,t]. \end{equation*} \end{remark} \begin{lemma}\label{lem3}
$Y_s^0=\mathop{\rm essinf}\limits_{u\in\mathcal{U}}Y_s^{2,u}, s\in[0,t]$, i.e., in particular, $Y_0^0=\inf\limits_{u\in\mathcal{U}}Y_0^{2,u}$. \end{lemma} \begin{proof} From the comparison theorem for BSDEs we obtain easily that $Y_s^0\leq Y_s^{2,u},\ s\in[0,t]$, for all $u\in\mathcal{U}$. On the other hand, as $U$ is compact and $\Phi(x,0,0,\cdot)$ continuous on $U$, there is $u^*\in U$ such that $\overline{\Phi}(x,0,0)=\Phi(x,0,0,u^*)$. Then, for $u=(u_s)_{s\geq0}\in\mathcal{U}$ defined by $u_s=u^*,\ s\geq0$, $(Y^0,Z^0)=(Y^0,0)$ solves the BSDE \begin{equation*}
dY_s^{2,u}=-\Phi(x,Y_s^{2,u},Z_s^{2,u},u_s)ds+Z_s^{2,u}dW_s,\ s\in[0,t],\ Y_t^{2,u}=0, \end{equation*} and from the uniqueness of its solution we get $Y_s^{2,u}=Y_s^0,\ s\in[0,t]$. The proof is complete. \end{proof} Now we are able to give the proof of Proposition \ref{th:3.3.1}. \begin{proof}\textbf{(of Proposition \ref{th:3.3.1}.)} From Lemma \ref{lem:2.6} we know that $V_\lambda\in C(\overline{\theta})$. Let $x\in {\theta}$ and $\varphi\in C^3(\mathbb{R}^N)$ be such that $0=(V_\lambda-\varphi)(x)\geq V_\lambda-\varphi$ on $\overline{\theta}$. Then, for all $u\in\mathcal{U}$ and $t>0,$ the DPP and the monotonicity of $G_{0,t}^{\lambda,x,u}[\cdot]$ yield: \begin{equation*}
\varphi(x)=V_\lambda(x)=\inf\limits_{u\in\mathcal{U}}G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]\leq\inf\limits_{u\in\mathcal{U}}G_{0,t}^{\lambda,x,u}[\varphi(X_t^{x,u})]. \end{equation*} Thus, due to Lemma \ref{lem1}, \begin{equation*}
\inf\limits_{u\in\mathcal{U}}Y_0^{1,u}= \inf\limits_{u\in\mathcal{U}}(G_{0,t}^{\lambda,x,u}[\varphi(X_t^{x,u})]-\varphi(x))\geq0, \end{equation*} and Lemma \ref{lem2}, together with Lemma \ref{lem3}, implies \begin{equation*}
Y_0^0=\inf\limits_{u\in\mathcal{U}}Y_0^{2,u}\geq-ct^{\frac{3}{2}},\ \mbox{i.e.},\ \int_0^te^{-\lambda r}dr\cdot\overline{\Phi}(x,0,0)\geq-ct^{\frac{3}{2}},\ t>0. \end{equation*} Dividing the above relation by $t$ and then letting $t\downarrow0$, we get \begin{equation*}
0\leq\overline{\Phi}(x,0,0)=\inf\limits_{u\in U}\{\langle D\varphi(x),b(x,u)\rangle+\frac{1}{2}tr(\sigma\sigma^*(x,u)D^2\varphi(x))+\psi(x,D\varphi(x)\sigma(x,u),u)\}-\lambda\varphi(x), \end{equation*} i.e., $\lambda V_\lambda(x)+H(x,D\varphi(x),D^2\varphi(x))\leq0$. This proves that $V_\lambda$ is a subsolution on $\overline{\theta}$; the proof that $V_\lambda$ is a viscosity supersolution on $\overline{\theta}$ is similar and thus omitted here.
As $V_\lambda\in Lip_{\frac{M_0}{\lambda}}(\overline{\theta})$ is a constrained viscosity solution of (\ref{r8}), its uniqueness in $C(\overline{\theta})$ follows from Theorem \ref{th1} (comparison principle). However, for the convenience of the reader we state the following comparison result, with its proof, for Hamiltonians of the form (\ref{r20}). \end{proof}
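For the reader's convenience, let us verify the explicit formula for $Y^0$ given in the Remark following (\ref{r97}). Setting \begin{equation*} Y_s^0=\int_s^te^{-\lambda(r-s)}dr\cdot\overline{\Phi}(x,0,0)=\frac{1-e^{-\lambda(t-s)}}{\lambda}\overline{\Phi}(x,0,0),\ s\in[0,t], \end{equation*} we have $Y_t^0=0$ and, since $\overline{\Phi}(x,y,0)=\overline{\Phi}(x,0,0)-\lambda y$, \begin{equation*} \frac{d}{ds}Y_s^0=-e^{-\lambda(t-s)}\overline{\Phi}(x,0,0)=-\big(\overline{\Phi}(x,0,0)-\lambda Y_s^0\big)=-\overline{\Phi}(x,Y_s^0,0),\ s\in[0,t], \end{equation*} so $Y^0$ indeed solves (\ref{r97}); uniqueness is immediate since $y\mapsto\overline{\Phi}(x,y,0)$ is Lipschitz.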
\indent We have the uniqueness of the viscosity solution from the following proposition. For this we recall that $\overline{\theta}$ is a compact subset of $\mathbb{R}^N$ which is invariant with respect to the control system (\ref{r2}). \begin{proposition}\label{th:3.2} Assume (\ref{r1}) holds. Let $H_1,\ H_2:\mathbb{R}^N\times\mathbb{R}^N\times\mathcal{S}^N\rightarrow\mathbb{R}$ be two Hamiltonians of the form (\ref{r20}) with $\psi=\psi_1$ and $\psi=\psi_2$, respectively, where $\psi_1$ and $\psi_2$ are assumed to satisfy (\ref{r48}). We suppose that $u\in USC(\overline{\theta})$ is a subsolution of \begin{equation*}
\lambda V(x)+H_1(x,DV(x),D^2V(x))=0,\ x\in\overline{\theta},
\end{equation*} and $v\in LSC(\overline{\theta})$ is a supersolution of \begin{equation*}
\lambda V(x)+H_2(x,DV(x),D^2V(x))=0,\ x\in\overline{\theta}.
\end{equation*} Then it holds \begin{equation*}
\lambda (u(x)- v(x))\leq\mathop{\rm sup}\limits_{u'\in U, y\in\bar{\theta}
\atop \scriptstyle z\in\mathbb{R}^d}\{|\psi_1(y,z,u')-\psi_2(y,z,u')|\},\ \mbox{for any}\ x\in\overline{\theta}. \end{proposition} \begin{proof}
Let $u\in USC(\overline{\theta})$ be a subsolution and $v\in LSC(\overline{\theta})$ a supersolution. For $\varepsilon>0$ arbitrarily chosen, we define $\Phi_\varepsilon(x,x'):=u(x)-v(x')-\frac{1}{2\varepsilon}|x-x'|^2$, $(x,x')\in\overline{\theta}\times\overline{\theta}$. Let $(x_\varepsilon,x'_\varepsilon)\in\overline{\theta}\times\overline{\theta}$ denote a maximum point of the USC-function $\Phi_\varepsilon $ on the compact set $\overline{\theta}\times\overline{\theta}$. We set $\varphi_\varepsilon(x,x')=\frac{1}{2\varepsilon}|x-x'|^2$. Then $u(x)-\varphi_\varepsilon(x,x'_\varepsilon)$ attains a maximum at $x=x_\varepsilon$ and $v(x')+\varphi_\varepsilon(x_\varepsilon,x')$ attains a minimum at $x'=x'_\varepsilon$.
From Theorem 3.2 in \cite{ISHII 1992} we have the existence of two matrices $A, B\in\mathcal{S}^N$ with \begin{equation*} (\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon},A)\in \overline{J}^{2,+}u(x_\varepsilon),\quad (\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon},B)\in \overline{J}^{2,-}v(x'_\varepsilon), \end{equation*} such that \begin{gather}\label{r98} \begin{pmatrix} A&0\\ 0&-B \end{pmatrix} \leq A_0+\varepsilon A_0^2,\ A_0=D^2\varphi_\varepsilon(x,x')=\frac{1}{\varepsilon} \begin{pmatrix} I&-I\\ -I&I \end{pmatrix}. \end{gather} We notice that $A_0+\varepsilon A_0^2=\frac{3}{\varepsilon} \begin{pmatrix} I&-I\\ -I&I \end{pmatrix}.$ Then, as $u\in USC(\overline{\theta})$ is a subsolution on $\overline{\theta}$ and $v\in LSC(\overline{\theta})$ a supersolution on $\overline{\theta}$, \begin{equation}\label{t5}
\lambda u(x_\varepsilon)+H_1(x_\varepsilon,\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon},A)\leq0,\ \
\lambda v(x'_\varepsilon)+H_2(x'_\varepsilon,\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon},B)\geq0. \end{equation} We set $\beta:=\mathop{\rm sup}\limits_{x\in\bar{\theta}}(u(x)-v(x))$. As $\overline{\theta}$ is compact, and $u-v$ upper semicontinuous on $\overline{\theta}$, there exists $\overline{x}\in\overline{\theta}$\ such that $u(\overline{x})-v(\overline{x})=\beta$. Then $\Phi_\varepsilon(x_\varepsilon,x'_\varepsilon)\geq u(\overline{x})-v(\overline{x})=\beta.$\\ Obviously, since \begin{equation*}
\frac{|x_\varepsilon-x'_\varepsilon|^2}{2\varepsilon}=u(x_\varepsilon)-v(x'_\varepsilon)-\Phi_\varepsilon(x_\varepsilon,x'_\varepsilon)\leq u(x_\varepsilon)-v(x'_\varepsilon)-\beta\leq c,\ \varepsilon>0, \end{equation*}
we have that $|x_\varepsilon-x'_\varepsilon|^2\leq c\varepsilon$, and hence $\lim\limits_{\varepsilon\downarrow0}|x_\varepsilon-x'_\varepsilon|=0.$
As $\overline{\theta}$ is compact, there exists a subsequence of $(x_\varepsilon,x'_\varepsilon)\in\overline{\theta}\times\overline{\theta},\ \varepsilon>0$, again denoted by $(x_\varepsilon,x'_\varepsilon)$, and some $\widehat{x}\in\overline{\theta}$ such that
$x_\varepsilon\rightarrow\widehat{x},\ x'_\varepsilon\rightarrow\widehat{x}$ as $\varepsilon\downarrow0$. Consequently, $$0\leq\varlimsup\limits_{\varepsilon\downarrow0}\frac{|x_\varepsilon-x'_\varepsilon|^2}{2\varepsilon}\leq u(\widehat{x})-v(\widehat{x})-\beta\leq0.$$ It follows that \begin{equation*} \left\{ \begin{array}{lll} u(\widehat{x})-v(\widehat{x})=\beta=\max\limits_{x\in\overline{\theta}}(u(x)-v(x)),\\
\lim\limits_{\varepsilon\downarrow0}\frac{|x_\varepsilon-x'_\varepsilon|^2}{2\varepsilon}=0. \end{array} \right. \end{equation*} From (\ref{t5}) we have \begin{equation*} \begin{split}
0\geq\lambda u(x_\varepsilon)+\mathop{\rm sup}\limits_{u\in U}\{-b(x_\varepsilon,u)\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon}-\frac{1}{2}tr(\sigma\sigma^*(x_\varepsilon,u)A)-\psi_1(x_\varepsilon,\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon}\sigma(x_\varepsilon,u),u)\},\\
0\leq\lambda v(x'_\varepsilon)+\mathop{\rm sup}\limits_{u\in U}\{-b(x'_\varepsilon,u)\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon}-\frac{1}{2}tr(\sigma\sigma^*(x'_\varepsilon,u)B)-\psi_2(x'_\varepsilon,\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon}\sigma(x'_\varepsilon,u),u)\}.
\end{split} \end{equation*} Then, combined with (\ref{r98}), we obtain by using the Lipschitz assumption on $b, \sigma, \psi_1$ and $\psi_2$, \begin{equation*} \begin{array}{lll}
&\displaystyle\lambda (u(x_\varepsilon)-v(x'_\varepsilon))
\leq\mathop{\rm sup}\limits_{u\in U}\{(b(x_\varepsilon,u)-b(x'_\varepsilon,u))\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon}+\frac{1}{2}tr\big(\sigma\sigma^*(x_\varepsilon,u)A-\sigma\sigma^*(x'_\varepsilon,u)B\big)\\
&\displaystyle\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad +(\psi_1(x_\varepsilon,\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon}\sigma(x_\varepsilon,u),u)-\psi_2(x'_\varepsilon,\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon}\sigma(x'_\varepsilon,u),u))\}\\
&\displaystyle\leq c\big(\frac{|x_\varepsilon-x'_\varepsilon|^2}{\varepsilon}+\mathop{\rm sup}\limits_{u\in U}\{|\psi_1(x_\varepsilon,\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon}\sigma(x_\varepsilon,u),u)-\psi_2(x'_\varepsilon,\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon}\sigma(x'_\varepsilon,u),u)|\}\big)\\
&\displaystyle\leq c(\frac{|x_\varepsilon-x'_\varepsilon|^2}{\varepsilon}+|x_\varepsilon-x'_\varepsilon|)+\mathop{\rm sup}\limits_{y\in\overline{\theta}, p\in\mathbb{R}^N,
\atop \scriptstyle u'\in U}\{|\psi_1(y,p\sigma(y,u'),u')-\psi_2(y,p\sigma(y,u'),u')|\}.
\end{array} \end{equation*} Finally, letting $\varepsilon\downarrow0$, this yields \begin{equation*}
\lambda\max\limits_{x\in\overline{\theta}}(u(x)-v(x))=\lambda (u(\widehat{x})-v(\widehat{x}))\leq \mathop{\rm sup}\limits_{y\in\overline{\theta}, p\in\mathbb{R}^N,
\atop \scriptstyle u'\in U}\{|\psi_1(y,p\sigma(y,u'),u')-\psi_2(y,p\sigma(y,u'),u')|\}. \end{equation*} \end{proof}
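In particular, taking $\psi_1=\psi_2$ in the above proposition, the right-hand side vanishes and we obtain \begin{equation*} \lambda(u(x)-v(x))\leq0,\ \mbox{for any}\ x\in\overline{\theta}. \end{equation*} Hence, if $V,\ V'\in C(\overline{\theta})$ are both sub- and supersolutions on $\overline{\theta}$ of the same equation, applying this estimate in both directions yields $V\leq V'$ and $V'\leq V$, i.e., the viscosity solution is unique in $C(\overline{\theta})$.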
\begin{theorem}\label{th:4.1} We suppose that the assumptions (\ref{r1}), (\ref{r48}) and (\ref{r49}) hold. Moreover, we suppose: \begin{equation*}\label{r100} \begin{array}{lll} \mbox{There\ is\ a\ concave\ increasing\ function}\ \rho:\mathbb{R}_+\to\mathbb{R}_+ \ \mbox{with}\ \rho(0+)=0\ \mbox{such\ that,\ for\ all}\ (x,z)\\ \tag{H6}
\in\mathbb{R}^N\times\mathbb{R}^d,\ u,\ u'\in U,\ \mid \psi(x,z,u)-\psi(x,z,u')\mid\leq(1+|z|)\rho(d(u,u')) \end{array} \end{equation*} (Recall that $d$\ is the metric we consider on the control state space $U$). Then, along a suitable subsequence $0<\lambda_n\downarrow 0$, there exists the uniform limit $\widetilde{w}(x)=\lim\limits_{n\to\infty}\lambda_n V_{\lambda_n}(x)$ (recall that $V_\lambda(x)$ is defined by (\ref{r94})) and it is a viscosity solution of the equation $$h(x,D\widetilde{w}(x),D^2\widetilde{w}(x))=0,\ x\in \overline{\theta},$$ in the sense of Definition \ref{def3.1}, where $h(x,p,A)=\max\limits_{u\in U}\{\langle -p,b(x,u)\rangle-\frac{1}{2}tr(\sigma\sigma^*(x,u)A)-\widetilde{\psi}(p\sigma(x,u),u)\}$. The function $\widetilde{\psi}$ is described below in the proof. \end{theorem} \begin{proof} Due to Proposition \ref{th:3.3.1}, $V_\lambda(x)$ defined by (\ref{r94}) is a viscosity solution of $$\lambda V(x)+H(x,DV(x),D^2V(x))=0\ \mbox{on}\ \overline{\theta}$$ (i.e., unlike a constrained viscosity solution, $V_\lambda$ is a viscosity super- but also a subsolution on $\overline{\theta}$), and $\lambda V_\lambda\in\mbox{Lip}_{M_0}(\overline{\theta})$, for $M_0\geq \max\{\overline{c}_0,M\}$ (see Lemma \ref{lem:2.6}), and due to Proposition \ref{th:3.2} this viscosity solution is unique. We define $w_\lambda(x):=\lambda V_\lambda(x), x\in\overline{\theta}$. Then $w_\lambda$ is the unique viscosity solution of \begin{equation}\label{226}\lambda w_\lambda(x)+H_\lambda(x,Dw_\lambda(x),D^2w_\lambda(x))=0\ \mbox{on}\ \overline{\theta},\end{equation} where $$H_\lambda(x,p,A):=\lambda H(x,\frac{1}{\lambda} p,\frac{1}{\lambda} A)=\max\limits_{u\in U}\{\langle -p,b(x,u)\rangle-\frac{1}{2}tr(\sigma\sigma^*(x,u)A)-\lambda \psi(x,\frac{1}{\lambda}p\sigma(x,u),u)\}.$$ Due to (\ref{r48}) and (\ref{r100}) we have, for all $\lambda\in(0,1],\ (x,z,u),\ (x',z',u')\in\overline{\theta}\times\mathbb{R}^d\times U$, \begin{equation*} \begin{split}
&{\rm i)}\ | \lambda\psi(x,\frac{1}{\lambda}z,u)|\leq \lambda M+K_z|z|;\\
&{\rm ii)}\ | \lambda\psi(x,\frac{1}{\lambda}z,u)-\lambda\psi(x',\frac{1}{\lambda}z',u')|\leq \lambda K_x|x-x'|+K_z|z-z'|+(\lambda+|z|)\rho(d(u,u')), \end{split} \end{equation*} i.e., combined with Lemma \ref{lem:2.6}, where we have shown that \begin{equation*}
|w_\lambda(x)|\leq M,\ x\in\overline{\theta};\ \ |w_\lambda(x)-w_\lambda(x')|\leq\overline{c}_0|x-x'|,\ x,\ x'\in\overline{\theta},\ \lambda>0, \end{equation*} we can apply the Arzel\`{a}-Ascoli Theorem to conclude that, for some sequence $\lambda_n\downarrow0$ (as $n\to\infty$), there are functions $\widetilde{w}\in C(\overline{\theta})$ and $\widetilde{\psi}: \overline{\theta}\times\mathbb{R}^d\times U\to\mathbb{R}$ such that $w_{\lambda_n}\to\widetilde{w}$ ($n\to\infty$) uniformly on $\overline{\theta}$, and $\lambda_n\psi(x,\frac{1}{\lambda_n}z,u)\to\widetilde{\psi}(x,z,u)$ ($n\to\infty$), uniformly on compacts in $\overline{\theta}\times\mathbb{R}^d\times U$. Obviously, \begin{equation*} \begin{split}
&|\widetilde{w}(x)|\leq M,\ \ |\widetilde{w}(x)-\widetilde{w}(x')|\leq\overline{c}_0|x-x'|,\ x,\ x'\in\overline{\theta},\ \ \mbox{and}\\
&|\widetilde{\psi}(x,z,u)|\leq K_z|z|,\ \ |\widetilde{\psi}(x,z,u)-\widetilde{\psi}(x',z',u')|\leq K_z|z-z'|+|z|\rho(d(u,u')), \end{split} \end{equation*} i.e., $\widetilde{\psi}(x,z,u)=\widetilde{\psi}(z,u),\ (x,z,u)\in\overline{\theta}\times\mathbb{R}^d\times U$, is independent of $x\in\overline{\theta}$.
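Indeed, the independence of $x$ follows directly from the estimate {\rm ii)} above: for all $x,\ x'\in\overline{\theta}$ and $(z,u)\in\mathbb{R}^d\times U$, \begin{equation*} |\lambda_n\psi(x,\frac{1}{\lambda_n}z,u)-\lambda_n\psi(x',\frac{1}{\lambda_n}z,u)|\leq\lambda_nK_x|x-x'|\xrightarrow[n\to\infty]{}0, \end{equation*} so the limit $\widetilde{\psi}$ cannot depend on its first argument.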
Then, putting \begin{equation*} h(x,p,A):=\max\limits_{u\in U}\{\langle-p,b(x,u)\rangle-\frac{1}{2}tr(\sigma\sigma^*(x,u)A)-\widetilde{\psi}(p\sigma(x,u),u)\}, \end{equation*} it follows that also $H_{\lambda_n}\to h\ (n\to\infty)$ uniformly on compacts. Finally, from (\ref{226}) and the stability result for viscosity solutions we see that $\widetilde{w}$ is a viscosity solution of the equation \begin{equation*}
h(x,D\widetilde{w}(x),D^2\widetilde{w}(x))=0,\ x\in\overline{\theta}. \end{equation*} \end{proof} \begin{remark} In Buckdahn, Li, Quincampoix \cite{Li} it is shown that the sequence $(w_\lambda)_{\lambda>0}$, as $\lambda\downarrow0$, can have at most only one accumulation point in the space $C(\overline{\theta})$ endowed with the supremum norm. As $\widetilde{w}$ is an accumulation point of $(w_\lambda)_{\lambda>0}$ and as due to Lemma \ref{lem:2.6} every subsequence of $w_\lambda$, $\lambda\downarrow0$, has a converging subsubsequence (Arzel\'{a}-Ascoli Theorem), it follows that $w_\lambda\to\widetilde{w} (\lambda\downarrow0)$, uniformly on $\overline{\theta}$. In particular, if we also suppose (\ref{r15}), we have $\widetilde{w}=w_0$. \end{remark} \begin{theorem}\label{th:4.2} We suppose that the assumptions (\ref{r1}), (\ref{r49}) and (\ref{r15}) hold true. Now we consider the case: $\psi(x,z,u)=\psi_1(x,u)+g(z)$, where $\psi_1:\overline{\theta}\times U\rightarrow \mathbb{R}$ is bounded (by $M$), uniformly continuous and satisfies \begin{equation*}
|\psi_1(x,u)-\psi_1(x',u)|\leq K_x|x-x'|,\ \mbox{for any}\ x,\ x'\in\overline{\theta},\ u\in U, \end{equation*} while $g:\mathbb{R}^d\rightarrow\mathbb{R}$ is supposed to be Lipschitz (with Lipschitz constant $K_z$), positively homogeneous and concave, with $g(0)=0$. For $\eta\in L^2(\mathcal{F}_t)$, we consider the following BSDE \begin{equation}\label{e1}
Y_s^\eta=\eta+\int_s^tg(Z_r^\eta)dr-\int_s^tZ_r^\eta dW_r,\ s\in[0,t], \end{equation} and define the nonlinear expectation $\varepsilon^g[\eta]:=Y_0^\eta$. Then, there exists the uniform limit ${w}_0(x)=\lim\limits_{\lambda\to0^+}\lambda V_\lambda(x)$ (recall $V_\lambda(x)$ is defined by (\ref{r94})), and \begin{equation*}
w_0(x)=\inf\limits_{t\geq0, u\in\mathcal{U}}\varepsilon^g[\min\limits_{v\in U}\psi(X_t^{x,u},0,v)],\ \ \mbox{for any}\ x\in\overline{\theta}. \end{equation*} \end{theorem} \begin{remark} {\rm (i)}\ $\varepsilon^g[\cdot]$ is called the $g$-expectation; it was first introduced by Peng, see, e.g., \cite{peng}. Its definition is independent of $t$. Indeed, if $\eta\in L^2(\mathcal{F}_s),\ s\leq t$, then, in (\ref{e1}), $Z_r^\eta=0,\ r\in[s,t]$.
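More precisely, if $\eta\in L^2(\mathcal{F}_s)$ with $s\leq t$, then, as $g(0)=0$, the pair $(Y_r^\eta,Z_r^\eta):=(\eta,0),\ r\in[s,t]$, satisfies (\ref{e1}) on $[s,t]$: \begin{equation*} \eta+\int_r^tg(0)d\tau-\int_r^t0\,dW_\tau=\eta,\ r\in[s,t]. \end{equation*} By the uniqueness of the solution of (\ref{e1}) we get $Y_s^\eta=\eta$, and hence $\varepsilon^g[\eta]=Y_0^\eta$ does not depend on the choice of $t\geq s$.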
{\rm (ii)} We recall the properties of $\varepsilon^g[\cdot]$, in particular, its concavity under the above assumptions on $g$: Let $\lambda_1, \lambda_2\in(0,1), \mbox{such\ that}\ \lambda_1+\lambda_2=1, \eta_1, \eta_2\in L^2(\mathcal{F}_t)$, $\overline{Y}_s:=(\lambda_1Y_s^{\eta_1}+\lambda_2Y_s^{\eta_2})-Y_s^{\lambda_1\eta_1+\lambda_2\eta_2}, \overline{Z}_s:=(\lambda_1Z_s^{\eta_1}+\lambda_2Z_s^{\eta_2})-Z_s^{\lambda_1\eta_1+\lambda_2\eta_2}$, $s\in[0,t]$. As the function $g$ is Lipschitz and concave, we get \begin{equation*}
\begin{split}
&(\overline{Y}_s)^+(\lambda_1g(Z_s^{\eta_1})+\lambda_2g(Z_s^{\eta_2})-g(Z_s^{\lambda_1\eta_1+\lambda_2\eta_2}))\\
\leq&(\overline{Y}_s)^+(g(\lambda_1Z_s^{\eta_1}+\lambda_2Z_s^{\eta_2})-g(Z_s^{\lambda_1\eta_1+\lambda_2\eta_2}))\\
\leq&L(\overline{Y}_s)^+|\overline{Z}_s|, \ s\in[0,t].
\end{split} \end{equation*}
Hence, $\displaystyle\mathbb{E}[((\overline{Y}_s)^+)^2]+\mathbb{E}[\int_s^t|\overline{Z}_r|^21_{\{\overline{Y}_r>0\}}dr]\leq 2L\mathbb{E}[\int_s^t(\overline{Y}_r)^+|\overline{Z}_r|dr],\ s\in[0,t],$ and a standard estimate and Gronwall's inequality give $(\overline{Y}_s)^+=0$, i.e., $\lambda_1Y_s^{\eta_1}+\lambda_2Y_s^{\eta_2}\leq Y_s^{\lambda_1\eta_1+\lambda_2\eta_2},\ s\in[0,t],\ \mathbb{P}$-a.s. Thus, for $s=0$, $\varepsilon^g[\lambda_1\eta_1+\lambda_2\eta_2]\geq \lambda_1\varepsilon^g[\eta_1]+\lambda_2\varepsilon^g[\eta_2]$.\end{remark} \begin{proof}\textbf{(of Theorem \ref{th:4.2}.)}\\ \textbf{Step 1}. From Proposition \ref{dpp} (DPP) we have $V_\lambda(x)=\inf\limits_{u\in\mathcal{U}}G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]$, where $G_{0,t}^{\lambda,x,u}[\eta]=\widetilde{Y}_{0,t}^{\lambda,x,u,\eta}$, for $\eta\in L^2(\mathcal{F}_t)$, is defined by the BSDE \begin{equation*}
\left\{ \begin{array}{ll} d\widetilde{Y}_{s,t}^{\lambda,x,u,\eta}=-(\psi(X_s^{x,u},\widetilde{Z}_{s,t}^{\lambda,x,u,\eta},u_s)-\lambda\widetilde{Y}_{s,t}^{\lambda,x,u,\eta})ds+\widetilde{Z}_{s,t}^{\lambda,x,u,\eta}dW_s,\\ \widetilde{Y}_{t,t}^{\lambda,x,u,\eta}=\eta,\ \eta:=V_\lambda(X_t^{x,u}). \end{array} \right. \end{equation*} Combined with the positive homogeneity of $g$, we obtain, for $s\in[0,t]$, \begin{equation*}
d(\lambda e^{-\lambda s}\widetilde{Y}_{s,t}^{\lambda,x,u,\eta}+\lambda\int_0^se^{-\lambda r}\psi_1(X_r^{x,u},u_r)dr)=-g(e^{-\lambda s}\lambda\widetilde{Z}_{s,t}^{\lambda,x,u,\eta})ds+e^{-\lambda s}\lambda\widetilde{Z}_{s,t}^{\lambda,x,u,\eta}dW_s. \end{equation*} On the other hand, \begin{equation*}
\lambda e^{-\lambda t}\widetilde{Y}_{t,t}^{\lambda,x,u,\eta}+\lambda\int_0^te^{-\lambda r}\psi_1(X_r^{x,u},u_r)dr=e^{-\lambda t}\lambda V_\lambda(X_t^{x,u})+\lambda\int_0^te^{-\lambda r}\psi_1(X_r^{x,u},u_r)dr. \end{equation*} Thus, for $\eta=V_\lambda(X_t^{x,u})$, \begin{equation*}
\varepsilon^g[e^{-\lambda t}\lambda V_\lambda(X_t^{x,u})+\lambda\int_0^te^{-\lambda r}\psi_1(X_r^{x,u},u_r)dr]=\lambda\widetilde{Y}_{0,t}^{\lambda,x,u,\eta}=\lambda G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]. \end{equation*} Hence, \begin{equation}\label{f1}
\lambda V_\lambda(x)=\inf\limits_{u\in\mathcal{U}}\lambda G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]= \inf\limits_{u\in\mathcal{U}}\varepsilon^g[e^{-\lambda t}\lambda V_\lambda(X_t^{x,u})+\lambda\int_0^te^{-\lambda r}\psi_1(X_r^{x,u},u_r)dr]. \end{equation} Notice that (see, e.g., \cite{peng}, or use just classical estimates for BSDE) \begin{equation}\label{f2}
\begin{split}
&|\varepsilon^g[e^{-\lambda t}\lambda V_\lambda(X_t^{x,u})+\lambda\int_0^te^{-\lambda r}\psi_1(X_r^{x,u},u_r)dr]-\varepsilon^g[w_0(X_t^{x,u})]|\\
\leq&c\parallel e^{-\lambda t}\lambda V_\lambda(X_t^{x,u})+\lambda\int_0^te^{-\lambda r}\psi_1(X_r^{x,u},u_r)dr-w_0(X_t^{x,u})\parallel_{L^2(\Omega)}\\
\leq&c((1-e^{-\lambda t})+\parallel\lambda V_\lambda-w_0\parallel_\infty+\lambda tM),\ \mbox{for any}\ \lambda,\ t\geq0,\ u\in\mathcal{U}.
\end{split} \end{equation} Thus, combining (\ref{f1}) and (\ref{f2}), we have \begin{equation*}
\lambda V_\lambda(x)= \inf\limits_{u\in\mathcal{U}}\varepsilon^g[w_0(X_t^{x,u})]+R_t^{\lambda,x},\ \text{with}\ |R_t^{\lambda,x}|\leq c((1-e^{-\lambda t})+\parallel \lambda V_\lambda-w_0\parallel_\infty+\lambda tM). \end{equation*} Then, letting $\lambda$ tend to 0 we get \begin{equation}\label{d1}
w_0(x)=\inf\limits_{u\in\mathcal{U}}\varepsilon^g[w_0(X_t^{x,u})],\ \mbox{for any}\ t\geq0,\ x\in\overline{\theta}. \end{equation}
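Observe that (\ref{d1}) is consistent with constant functions: if $w_0\equiv c$ on $\overline{\theta}$, then, since $g(0)=0$, the pair $(Y,Z):=(c,0)$ solves the BSDE (\ref{e1}) with terminal condition $\eta=c$, so that $\varepsilon^g[w_0(X_t^{x,u})]=\varepsilon^g[c]=c$, for all $u\in\mathcal{U}$ and $t\geq0$, and both sides of (\ref{d1}) equal $c$.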
\textbf{Step 2.}\ From (\ref{f1}), using the monotonicity of $\varepsilon^g$ (resulting from the BSDE comparison theorem) and recalling that $|\lambda V_\lambda(x)|\leq M, \ \mbox{for\ all}\ x\in\overline{\theta},\ \lambda\geq0$, we obtain \begin{equation*}
\lambda V_\lambda(x)\geq \inf\limits_{u\in\mathcal{U}}\varepsilon^g[-Me^{-\lambda t}+\lambda\int_0^te^{-\lambda r}\min\limits_{v\in U}\psi_1(X_r^{x,u},v)dr]. \end{equation*} Similar to (\ref{f2}) we get \begin{equation*}
\begin{split}
&\mathop{\rm sup}\limits_{u\in\mathcal{U}}|\varepsilon^g[-Me^{-\lambda t}+\lambda\int_0^te^{-\lambda r}\min\limits_{v\in U}\psi_1(X_r^{x,u},v)dr]-\varepsilon^g[\lambda\int_0^\infty e^{-\lambda r}\min\limits_{v\in U}\psi_1(X_r^{x,u},v)dr]|\\
\leq& c(Me^{-\lambda t}+\lambda\int_t^\infty e^{-\lambda r}dr\cdot M)=2cMe^{-\lambda t}\xrightarrow[t\uparrow+\infty]{}0.
\end{split} \end{equation*} Consequently, using the concavity of $\varepsilon^g[\cdot]$ (see Remark 4.3.) this yields \begin{equation*}
\begin{split}
&\lambda V_\lambda(x)\geq \inf\limits_{u\in\mathcal{U}}\varepsilon^g[\lambda\int_0^\infty e^{-\lambda r}\cdot\min\limits_{v\in U}\psi_1(X_r^{x,u},v)dr]-2cMe^{-\lambda t}\\
&\geq \inf\limits_{u\in\mathcal{U}}\lambda\int_0^\infty e^{-\lambda r}\varepsilon^g[\min\limits_{v\in U}\psi_1(X_r^{x,u},v)]dr-2cMe^{-\lambda t}\\
&\geq \inf\limits_{t\geq 0, u\in\mathcal{U}}\varepsilon^g[\min\limits_{v\in U}\psi_1(X_t^{x,u},v)]-2cMe^{-\lambda t},\ t\geq 0,\ x\in\overline{\theta}.
\end{split} \end{equation*} Taking the limit as $t\rightarrow +\infty$ we get immediately \begin{equation}\label{1000}\lambda V_\lambda(x)\geq \inf\limits_{t\geq0, u\in\mathcal{U}}\varepsilon^g[\min\limits_{v\in U}\psi_1(X_t^{x,u},v)],\ \mbox{for any}\ x\in\overline{\theta}.\end{equation}
Combining Propositions \ref{th:3.3.1} and \ref{th:3.2} (comparison result) with (\ref{r15}) allows us to follow the proof of Theorem \ref{the:3.4} without using ($A_\theta$) (($A_H$) and (H) are satisfied since the Hamiltonian $H$ is of the form (\ref{r20})), and to conclude that $\lambda V_\lambda\to w_0$ uniformly on $\overline{\theta}$ as $\lambda\downarrow0$, and that $w_0$ is the maximal viscosity subsolution on $\overline{\theta}$ of \begin{equation}\label{r4.13}
w_0(x)+\overline{H}(x,Dw_0(x),D^2w_0(x))\leq0,\ x\in\theta.
\end{equation}Hence, letting $\lambda\downarrow0$ in the inequality (\ref{1000}) above yields
$$w_0(x)\geq \inf\limits_{t\geq0, u\in\mathcal{U}}\varepsilon^g[\min\limits_{v\in U}\psi_1(X_t^{x,u},v)],\ \mbox{for any}\ x\in\overline{\theta}.$$ \textbf{Step 3.}\ Recall (\ref{r4.13}). Then, for all $x\in\theta$ and $(p,A)\in J^{2,+}w_0(x)$, thanks to (\ref{r15}) (see also Lemma \ref{l:3.4}) we have \begin{equation*}
0\geq w_0(x)+\overline{H}(x,p,A)\geq w_0(x)+\overline{H}(x,0,0)=w_0(x)+\mathop{\rm sup}\limits_{u\in U}(-\psi(x,0,u)). \end{equation*} This shows that, if $J^{2,+}w_0(x)\neq\emptyset$, then \begin{equation}\label{d2}
w_0(x)\leq\min\limits_{v\in U}\psi(x,0,v). \end{equation} Let $x\in\theta$\ and $\varepsilon>0$, and define \begin{equation*}
\psi_\varepsilon(y):=w_0(y)-\frac{1}{2\varepsilon}|y-x|^2,\ y\in\overline{\theta}. \end{equation*}
Let $y_\varepsilon\in\overline{\theta}$ be a maximum point of $\psi_\varepsilon$. As $\psi_\varepsilon(y_\varepsilon)\geq\psi_\varepsilon(x)=w_0(x)$,\ $\frac{1}{2\varepsilon}|y_\varepsilon-x|^2\leq w_0(y_\varepsilon)-w_0(x)\leq2M$, we get $y_\varepsilon\xrightarrow[\varepsilon\downarrow0]{}x$, i.e., for $\varepsilon>0$ small enough, $y_\varepsilon\in\theta$. On the other hand, we have $(p,A):=(\frac{y_\varepsilon-x}{\varepsilon}, \frac{1}{\varepsilon}I_{\mathbb{R}^N})\in J^{2,+}w_0(y_\varepsilon)$.
From (\ref{d2}) we have $w_0(y_\varepsilon)\leq\min\limits_{v\in U}\psi(y_\varepsilon,0,v)$, and taking $\varepsilon\downarrow0$ yields $w_0(x)\leq \min\limits_{v\in U}\psi(x,0,v),$ $ \mbox{for any}\ x\in\theta$, and by the continuity of both sides of the inequality in $x\in\overline{\theta}$ we have \begin{equation*}
w_0(x)\leq \min\limits_{v\in U}\psi(x,0,v),\ \mbox{for all}\ x\in\overline{\theta}. \end{equation*} Finally, it follows from (\ref{d1}) and the monotonicity of $\varepsilon^g[\cdot]$ that $$
w_0(x)\leq \inf\limits_{u\in\mathcal{U}}\varepsilon^g[\min\limits_{v\in U}\psi(X_t^{x,u},0,v)],\ t\geq0,\ x\in\overline{\theta},$$ which means $w_0(x)\leq \inf\limits_{t\geq0, u\in\mathcal{U}}\varepsilon^g[\min\limits_{v\in U}\psi(X_t^{x,u},0,v)],\ x\in\overline{\theta}.$ Combined with Step 2 we get \begin{equation*}
w_0(x)=\inf\limits_{t\geq0, u\in\mathcal{U}}\varepsilon^g[\min\limits_{v\in U}\psi(X_t^{x,u},0,v)],\ \mbox{for any}\ x\in\overline{\theta}. \end{equation*}\end{proof} \begin{remark} Let us consider the special case where $\psi$ is independent of $z$, i.e., $g(z)=0$. Then we get $w_0(x)=\inf\limits_{t\geq0, u\in\mathcal{U}}\mathbb{E}[\min\limits_{v\in U}\psi(X_t^{x,u},0,v)]$, for any $x\in\overline{\theta}$. \end{remark} Let us now come back to the general case of $\psi$. \begin{theorem}\label{th:4.3} We suppose that the assumptions (\ref{r1}), (\ref{r48}), (\ref{r49}), (\ref{r15}), (\ref{r100}) and \textbf{($A_\theta$)} hold true. Moreover, let $H(x,p,A)$ be convex in $(p,A)\in\mathbb{R}^N\times\mathcal{S}^N$, for all $x\in\overline{\theta}$. Then, we have \begin{equation*}
\begin{aligned} w_0(x)\leq &\inf\{G_{0,t}^{\widetilde{\psi},x,u}[\min\limits_{v\in U}\psi(X_t^{x,u},0,v)]\mid u\in\mathcal{U},\ t\geq0,\ \widetilde{\psi}\ \mbox{such that there exists}\ \lambda_n\downarrow0\ \mbox{with}\\ &\ \ \ \ \lambda_n\psi(x,\frac{1}{\lambda_n}z,u)\rightarrow\widetilde{\psi}(z,u)\},\ x\in\overline{\theta}. \end{aligned} \end{equation*} \end{theorem} \begin{proof} From Propositions \ref{th:3.3.1} and \ref{th:3.2} we get\\ 1) The limit $w_0(x)=\lim_{\lambda\rightarrow0^+}\lambda V_\lambda(x)$, for every $x\in\overline{\theta}$; and the convergence is uniform on $\overline{\theta}$.\\ 2) There exists $\widetilde{\psi}$\ such that $\lambda_n\psi(x,\frac{1}{\lambda_n}z,u)\to\widetilde{\psi}(z,u)$\ as $\lambda_n\downarrow0$, uniformly on compacts. Moreover, \begin{equation*}
|\lambda_n\psi(x,\frac{1}{\lambda_n}z,u)| \leq \lambda_nM+K_z|z|,\ n\geq1,\quad |\widetilde{\psi}(z,u)|\leq K_z|z|,\ z\in\mathbb{R}^d. \end{equation*} From Proposition \ref{dpp} (DPP) we have \begin{equation*}
V_\lambda(x)=\inf_{u\in\mathcal{U}}G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})],\ t>0,\ x\in \overline{\theta}. \end{equation*} We put $\widetilde{Y}_s^{\lambda,x,u}:=G_{s,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})],\ s\in[0,t].$\ Then $
V_\lambda(x)=\inf\limits_{u\in\mathcal{U}}\widetilde{Y}_0^{\lambda,x,u},$ where \begin{equation*}
\widetilde{Y}_s^{\lambda,x,u}=V_\lambda(X_t^{x,u})+\int_s^t(\psi(X_r^{x,u},\widetilde{Z}_r^{\lambda,x,u},u_r)
-\lambda\widetilde{Y}_r^{\lambda,x,u})dr-\int_s^t\widetilde{Z}_r^{\lambda,x,u}dW_r,\ s\in[0,t]. \end{equation*} By applying It\^{o}'s formula to $e^{-\lambda s}\widetilde{Y}_s^{\lambda,x,u}$ we have \begin{equation*} e^{-\lambda s}\lambda\widetilde{Y}_s^{\lambda,x,u}=e^{-\lambda t}\lambda\widetilde{Y}_t^{\lambda,x,u}+\int_s^t\lambda e^{-\lambda r}\psi(X_r^{x,u},\widetilde{Z}_r^{\lambda,x,u},u_r)dr-\int_s^t\lambda e^{-\lambda r}\widetilde{Z}_r^{\lambda,x,u}dW_r. \end{equation*} As $e^{-\lambda t}\lambda V_\lambda(X_t^{x,u})\xrightarrow[L^\infty]{} w_0(X_t^{x,u})$, as $\lambda\rightarrow 0$, uniformly with respect to $(t,x),\ u\in\mathcal{U}$, we consider the following BSDE: \begin{equation}\label{326}
Y_s^{x,u}=w_0(X_t^{x,u})+\int_s^t\widetilde{\psi}(Z_r^{x,u},u_r)dr-\int_s^tZ_r^{x,u}dW_r,\ s\in[0,t]. \end{equation*} From a standard estimate for BSDEs it follows that, for all $p\in(1,2)$, \begin{equation*} \begin{split}
&\mathbb{E}[\mathop{\rm sup}\limits_{s\in[0,t]}|e^{-\lambda s}(\lambda\widetilde{Y}_s^{\lambda,x,u})-Y_s^{x,u}|^p+(\int_0^t|e^{-\lambda s}(\lambda\widetilde{Z}_s^{\lambda,x,u})-Z_s^{x,u}|^2ds)^{\frac{p}{2}}]\\
\leq& C_p\mathbb{E}[|e^{-\lambda t}(\lambda\widetilde{Y}_t^{\lambda,x,u})-Y_t^{x,u}|^p]+C_p\mathbb{E}[(\int_0^t|e^{-\lambda r}\lambda\psi(X_r^{x,u},\widetilde{Z}_r^{\lambda,x,u},u_r)-\widetilde{\psi}(Z_r^{x,u},u_r)|dr)^p]\\
\leq& C_p\mathbb{E}[|e^{-\lambda t}(\lambda\widetilde{Y}_t^{\lambda,x,u})-Y_t^{x,u}|^p]\\
&+C_p\mathbb{E}[(\int_0^t\lambda|\psi(X_r^{x,u},\widetilde{Z}_r^{\lambda,x,u},u_r)-\psi(X_r^{x,u},\frac{1}{\lambda}{Z}_r^{x,u},u_r)|dr)^p] (=:I_1(\lambda))\\
&+C_p\mathbb{E}[(\int_0^t|e^{-\lambda r}(\lambda\psi(X_r^{x,u},\frac{1}{\lambda}{Z}_r^{x,u},u_r))-\widetilde{\psi}(Z_r^{x,u},u_r)|1_{\{|Z_r^{x,u}|\leq\alpha\}}dr)^p] (=:\rho_\alpha(\lambda))\\
&+C_p\mathbb{E}[(\int_0^t(\lambda M+2K_z|Z_r^{x,u}|)1_{\{|Z_r^{x,u}|>\alpha\}}dr)^p] (=:I_2(\lambda,\alpha)).
\end{split} \end{equation*}
Notice that $\displaystyle I_2(\lambda,\alpha)\leq C_{p,M}\lambda^p+C_{p,K_z}\mathbb{E}[\int_0^t\frac{|Z_r^{x,u}|^2}{\alpha^{2-p}}dr]$. As $w_0\in C_b(\overline{\theta})$ and $|\widetilde{\psi}(z,u)|\leq K_z|z|,\ (z,u)\in\mathbb{R}^d\times U$, it follows from the BSDE (\ref{326}) for $(Y^{x,u},Z^{x,u})$ that \begin{equation*}
\mathop{\rm sup}\limits_{(x,u)\in\overline{\theta}\times\mathcal{U}}\mathbb{E}[\int_0^t|Z_r^{x,u}|^2dr]<\infty. \end{equation*} We also remark that, again by standard BSDE estimates, there is some $K\in\mathbb{R}_+$ such that \begin{equation*}
\lambda^2\mathbb{E}[\int_0^t|\widetilde{Z}_r^{\lambda,x,u}|^2dr]\leq K,\ \mbox{for\ all}\ \lambda>0. \end{equation*} Then, \begin{equation*}
I_2(\lambda,\alpha)\leq C_{p,M}\lambda^p+C'_{p,K_z}\frac{1}{\alpha^{2-p}}. \end{equation*} For $I_1(\lambda)$ we have \begin{equation*} \begin{split}
I_1(\lambda)\leq& C'_p\mathbb{E}[(\int_0^t|\lambda\widetilde{Z}_r^{\lambda,x,u}-Z_r^{x,u}|dr)^p]\\
\leq&C''_pt^{\frac{p}{2}}\mathbb{E}[(\int_0^t|e^{-\lambda r}(\lambda\widetilde{Z}_r^{\lambda,x,u})-Z_r^{x,u}|^2dr)^{\frac{p}{2}}]+C'''_p\mathbb{E}[(\int_0^t(1-e^{-\lambda r})^2|\lambda\widetilde{Z}_r^{\lambda,x,u}|^2dr)^{\frac{p}{2}}]\\
\leq& C''_pt^{\frac{p}{2}}\mathbb{E}[(\int_0^t|e^{-\lambda r}(\lambda\widetilde{Z}_r^{\lambda,x,u})-Z_r^{x,u}|^2dr)^{\frac{p}{2}}]+C'''_p(1-e^{-\lambda t})^pK.
\end{split} \end{equation*} Hence, for $t>0$ small enough such that $C''_pt^{\frac{p}{2}}\leq\frac{1}{2}$, we get \begin{equation*} \begin{split}
&\mathbb{E}[\mathop{\rm sup}\limits_{s\in[0,t]}|e^{-\lambda s}(\lambda\widetilde{Y}_s^{\lambda,x,u})-Y_s^{x,u}|^p+\frac{1}{2}(\int_0^t|e^{-\lambda s}(\lambda\widetilde{Z}_s^{\lambda,x,u})-Z_s^{x,u}|^2ds)^{\frac{p}{2}}]\\
\leq& C\rho_\alpha(\lambda)+\frac{C}{\alpha^{2-p}},\ \mbox{for any}\ (x,u)\in\overline{\theta}\times\mathcal{U},\
\mbox{and any}\ \alpha>0. \end{split} \end{equation*} Observe that $\rho_\alpha(\lambda)\xrightarrow[\lambda=\lambda_n\downarrow0]{}0$, $\frac{C}{\alpha^{2-p}}\xrightarrow[\alpha\uparrow\infty]{}0$. Hence, \begin{equation*}
\mathbb{E}[\mathop{\rm sup}\limits_{s\in[0,t]}|e^{-\lambda s}(\lambda\widetilde{Y}_s^{\lambda,x,u})-Y_s^{x,u}|^p+\frac{1}{2}(\int_0^t|e^{-\lambda s}(\lambda\widetilde{Z}_s^{\lambda,x,u})-Z_s^{x,u}|^2ds)^{\frac{p}{2}}]\xrightarrow[\lambda=\lambda_n\downarrow0]{}0, \end{equation*}
uniformly in $(x,u)\in\overline{\theta}\times\mathcal{U}$, for $t>0$ small enough; for general $t>0$, choosing $\delta>0$ small enough and applying the above argument first on $[t-\delta,t]$, then on $[t-2\delta,t-\delta]$, etc., we get by iteration \begin{equation*}
\mathop{\rm sup}\limits_{u\in\mathcal{U}}|\lambda\widetilde{Y}_0^{\lambda,x,u}-Y_0^{x,u}|\xrightarrow[\lambda=\lambda_n\downarrow0]{}0, \end{equation*} and, consequently, \begin{equation*}
|\inf\limits_{u\in\mathcal{U}}(\lambda\widetilde{Y}_0^{\lambda,x,u})-\inf\limits_{u\in\mathcal{U}}Y_0^{x,u}|\xrightarrow[\lambda=\lambda_n\downarrow0]{}0. \end{equation*} But this means that \begin{equation*}
\inf\limits_{u\in\mathcal{U}}Y_0^{x,u}=w_0(x). \end{equation*} Notice that from BSDE (\ref{326}) we have \begin{equation*}
Y_0^{x,u}=w_0(X_t^{x,u})+\int_0^t\widetilde{\psi}(Z_s^{x,u},u_s)ds-\int_0^tZ_s^{x,u}dW_s. \end{equation*} On the other hand, defining the backward stochastic semigroup \begin{equation*}
G_{s,t}^{\widetilde{\psi},x,u}(\eta):=Y_s^{x,u,\eta} \end{equation*} through the associated BSDE \begin{equation*}
Y_s^{x,u,\eta}=\eta+\int_s^t\widetilde{\psi}(Z_r^{x,u,\eta},u_r)dr-\int_s^tZ_r^{x,u,\eta}dW_r,\ \eta\in L^2(\Omega,\mathcal{F}_t,\mathbb{P}), \end{equation*} we get \begin{equation*}
w_0(x)=\inf\limits_{u\in\mathcal{U}}G_{0,t}^{\widetilde{\psi},x,u}[w_0(X_t^{x,u})]. \end{equation*} Consequently, \begin{equation}\label{426}
\begin{aligned} w_0(x)= &\inf\{G_{0,t}^{\widetilde{\psi},x,u}[w_0(X_t^{x,u})]\mid u\in\mathcal{U},\ t\geq0,\ \widetilde{\psi}\ \mbox{such that there exists}\ \lambda_n\downarrow0\ \mbox{with}\\ &\ \ \ \ \lambda_n\psi(x,\frac{1}{\lambda_n}z,u)\rightarrow\widetilde{\psi}(z,u)\},\ x\in\overline{\theta}. \end{aligned} \end{equation} From Lemma \ref{l:3.4} we have $H(x,p,A)\geq H(x,0,0),\ \mbox{for any}\ (p,A)\in \mathbb{R}^N\times\mathcal{S}^N, x\in\overline{\theta}.$
Therefore, from Proposition \ref{th:3.3.1}, in viscosity sense \begin{equation*} \begin{split}
0\geq&\lambda V_\lambda(x)+H(x,DV_\lambda(x),D^2V_\lambda(x))\\
\geq& \lambda V_\lambda(x)+H(x,0,0)=\lambda V_\lambda(x)+\max\limits_{u\in {U}}\{-\psi(x,0,u)\}, \end{split} \end{equation*} for all $x\in\theta$ with $J^{2,+}V_\lambda(x)\neq\emptyset$. Using the same argument as in the proof of Step 2 of Theorem \ref{th:4.1}, we see that this implies that $0\geq\lambda V_\lambda(x)+\max\limits_{u\in U}\{-\psi(x,0,u)\}$, for all $x\in\overline{\theta}$. Taking the limit as $\lambda\rightarrow 0$, it follows that \begin{equation*}
w_0(x)\leq \min\limits_{u\in {U}}\psi(x,0,u). \end{equation*} Therefore, from (\ref{426}) and the comparison theorem for BSDEs we get directly \begin{equation*}
\begin{aligned} w_0(x)\leq &\inf\{G_{0,t}^{\widetilde{\psi},x,u}[\min\limits_{v\in U}\psi(X_t^{x,u},0,v)]\mid u\in\mathcal{U},\ t\geq0,\ \widetilde{\psi}\ \mbox{such that there exists}\ \lambda_n\downarrow0\ \mbox{with}\\ &\ \ \ \ \lambda_n\psi(x,\frac{1}{\lambda_n}z,u)\rightarrow\widetilde{\psi}(z,u)\},\ x\in\overline{\theta}. \end{aligned} \end{equation*} \end{proof} \section{ {\protect \large Appendix: Proof of Proposition \ref{dpp} (DPP)}} This appendix is devoted to the proof of the DPP (Proposition \ref{dpp}). For the proof we need an auxiliary result. For this we note that, as the filtration used in Section 4 is the Brownian one, we can suppose without loss of generality that $(\Omega,\mathcal{F},\mathbb{P})$ is the standard Wiener space, $\Omega=C_0(\mathbb{R}_+;\mathbb{R}^d)=\{\omega\in C(\mathbb{R}_+;\mathbb{R}^d): \omega(0)=0\}$ , endowed with Borel $\sigma$-algebra over $C_0(\mathbb{R}_+;\mathbb{R}^d)$ and the Wiener measure, with respect to which $\mathbb{F}$ is completed. The coordinate process $W_t(\omega)=\omega_t, t\geq0, \omega\in\Omega$, is a d-dimensional Brownian motion, and the filtration $\mathbb{F}$ is generated by $W$. \begin{lemma} We assume that (H1) and (H2) hold. Let $t\geq0,\ u\in\mathcal{U}_t=L_{\mathbb{F}}^\infty(t,\infty;U)$. Let $X^{t,x,u}$ be the unique continuous and $\mathbb{F}$-adapted solution of the following SDE: \begin{equation*}\label{A1}
X_s^{t,x,u}=x+\int_t^sb(X_r^{t,x,u},u_r)dr +\int_t^s\sigma(X_r^{t,x,u},u_r)dW_r,\ s\geq t,\ x\in\mathbb{R}^N, \tag{A1} \end{equation*} and let $(Y^{\lambda,t,x,u}, Z^{\lambda,t,x,u})$ be the unique solution of the following BSDE on the infinite time interval: \begin{equation*}\label{A2}
Y_s^{\lambda,t,x,u}=Y_T^{\lambda,t,x,u}+\int_s^T(\psi(X_r^{t,x,u},Z_r^{\lambda,t,x,u},u_r)-\lambda Y_r^{\lambda,t,x,u})dr-\int_s^TZ_r^{\lambda,t,x,u}dW_r,\ t\leq s\leq T<+\infty, \tag{A2} \end{equation*} where $Y^{\lambda,t,x,u}=(Y_s^{\lambda,t,x,u})_{s\geq t}$ is a bounded continuous $\mathbb{F}$-adapted process and $Z^{\lambda,t,x,u}=(Z_s^{\lambda,t,x,u})_{s\geq t}\in\mathcal{H}^2_{loc}(t,\infty;\mathbb{R}^d)$.
Let $\theta_t=\theta_t(\omega)$ be the translation operator on $\Omega$, $\theta_t(\omega)_s=\omega(s+t)-\omega(t),\ \omega\in\Omega,\ s\geq 0$. Given $u\in\mathcal{U}$ we can identify $u$ with a measurable functional of $W$. Thus, given an arbitrary element $u_0$ of $\mathcal{U}$, we can define \begin{equation*} \overline{u}_s:= \left\{ \begin{array}{lll} u_0,\ s\in[0,t),\\ u_{s-t}(\theta_t),\ s\geq t. \end{array} \right. \end{equation*} Then, $\overline{u}\in\mathcal{U}$ and \begin{equation*}\label{A3}
X_s^{x,u}(\theta_t)=X_{s+t}^{t,x,\overline{u}},\ Y_s^{\lambda,x,u}(\theta_t)=Y_{s+t}^{\lambda,t,x,\overline{u}},\ s\geq0,\ \mathbb{P}\text{-a.s.}, \tag{A3} \end{equation*} and \begin{equation*}\label{A4}
Z_s^{\lambda,x,u}(\theta_t)=Z_{s+t}^{\lambda,t,x,\overline{u}},\ dsd\mathbb{P}\text{-a.e.}, \ s\geq 0. \tag{A4} \end{equation*} \end{lemma} \begin{proof} While the existence and the uniqueness of the solution for (\ref{A1}) is standard, that of (\ref{A2}) is shown in analogy to Proposition \ref{th:2.4}.
Given $u\in\mathcal{U}$, clearly also $\overline{u}\in\mathcal{U}$, and applying the transformation $\theta_t$ to (\ref{0}) and (\ref{r40}) we see that $(X_{s-t}^{x,u}(\theta_t))_{s\geq t}$, $(Y_{s-t}^{\lambda,x,u}(\theta_t),Z_{s-t}^{\lambda,x,u}(\theta_t))_{s\geq t}$ are solutions of (\ref{A1}) and (\ref{A2}), respectively, with control process $\overline{u}$ instead of $u$. From the uniqueness of the solutions of (\ref{A1}) and (\ref{A2}) we obtain (\ref{A3}) and (\ref{A4}). \end{proof} Now we can prove Proposition \ref{dpp} (DPP). \begin{proof}\textbf{(of Proposition \ref{dpp}.)} Let us put $\overline{V}_\lambda(x):=\inf\limits_{u\in\mathcal{U}}G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]$. We have to show that $\overline{V}_\lambda(x)=V_\lambda(x)$.\\ 1) As, for all $y\in\mathbb{R}^d$, ${V}_\lambda(y)$ is deterministic, we obtain from the preceding Lemma 5.1 \begin{equation*}
V_\lambda(y)=\mathop{\rm essinf}\limits_{v\in\mathcal{U}}Y_0^{\lambda,y,v}(\theta_t)=\mathop{\rm essinf}\limits_{v\in\mathcal{U}}Y_t^{\lambda,t,y,\overline{v}},\ \mathbb{P}\text{-a.s.}, \end{equation*} with \begin{equation*} \overline{v}_s:= \left\{ \begin{array}{lll} u_0,\ s\in[0,t),\\ v_{s-t}(\theta_t),\ s\geq t. \end{array} \right. \end{equation*} Then, by a standard argument (see, e.g., \cite{Peng 1997}), \begin{equation*}
V_\lambda(X_t^{x,u})=\mathop{\rm essinf}\limits_{v\in\mathcal{U}}Y_t^{\lambda,t,X_t^{x,u},\overline{v}}=\mathop{\rm essinf}\limits_{v\in\mathcal{U}}Y_t^{\lambda,x,u\oplus\overline{v}}, \end{equation*} where \begin{equation*} (u\oplus\overline{v})_s= \left\{ \begin{array}{lll} u_s,\ s\in[0,t)\\ \overline{v}_s,\ s\geq t \end{array} \right. \in\mathcal{U}. \end{equation*} Again from an argument by now standard (see, e.g., \cite{Peng 1997}), for all $\varepsilon>0$, there exists $v\in\mathcal{U}$ such that $u=v$, dsd$\mathbb{P}$-a.e. on $[0,t]\times\Omega$ and \begin{equation*}
V_\lambda(X_t^{x,u})\geq Y_t^{\lambda,x,v}-\varepsilon,\ \mathbb{P}\text{-a.s.} \end{equation*} Then, from the monotonicity and the Lipschitz property (in $L^2$) of $G_{0,t}^{\lambda,x,u}[\cdot]$ (resulting from BSDE standard estimates, see, e.g., \cite{Peng 1997}), \begin{equation*}
\begin{split}
G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]\geq& G_{0,t}^{\lambda,x,u}[Y_t^{\lambda,x,v}-\varepsilon]
\geq G_{0,t}^{\lambda,x,u}[Y_t^{\lambda,x,v}]-C\varepsilon
=Y_0^{\lambda,x,v}-C\varepsilon\\
\geq&\inf\limits_{v\in\mathcal{U}}Y_0^{\lambda,x,v}-C\varepsilon=V_\lambda(x)-C\varepsilon,\ \mathbb{P}\text{-a.s.}
\end{split} \end{equation*} Consequently, letting $\varepsilon\downarrow0$, we see that \begin{equation*}
\overline{V}_\lambda(x)=\inf\limits_{u\in\mathcal{U}}G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]\geq V_\lambda(x). \end{equation*} 2) To prove that $V_\lambda(x)\geq\overline{V}_\lambda(x)$, we let, for any given $\varepsilon>0$, $u\in\mathcal{U}$ be such that $V_\lambda(x)\geq Y_0^{\lambda,x,u}-\varepsilon$. Then, \begin{equation*}
\begin{split}
V_\lambda(x)\geq& Y_0^{\lambda,x,u}-\varepsilon=G_{0,t}^{\lambda,x,u}[Y_t^{\lambda,x,u}]-\varepsilon\\
\geq&G_{0,t}^{\lambda,x,u}[\mathop{\rm essinf}\limits_{\overline{v}\in\mathcal{U}}Y_t^{\lambda,x,u\oplus\overline{v}}]-\varepsilon\\
=&G_{0,t}^{\lambda,x,u}[\mathop{\rm essinf}\limits_{\overline{v}\in\mathcal{U}}Y_t^{\lambda,t,X_t^{x,u},\overline{v}}]-\varepsilon,\ \mathbb{P}\text{-a.s.}
\end{split} \end{equation*}
But, $Y_t^{\lambda,t,X_t^{x,u},\overline{v}}=(Y_0^{\lambda,y,v})(\theta_t)|_{y=X_t^{x,u}}$, and thus \begin{equation*}
\mathop{\rm essinf}\limits_{\overline{v}\in\mathcal{U}}Y_t^{\lambda,t,X_t^{x,u},\overline{v}}=(\inf\limits_{\overline{v}\in\mathcal{U}}Y_0^{\lambda,y,\overline{v}})(\theta_t)|_{y=X_t^{x,u}}=V_\lambda(X_t^{x,u}). \end{equation*} Consequently, \begin{equation*}
V_\lambda(x)\geq G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]-\varepsilon,\ \text{for\ all}\ u\in\mathcal{U}, \end{equation*} from where it follows that \begin{equation*}
V_\lambda(x)\geq \inf\limits_{u\in\mathcal{U}}G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]-\varepsilon, \end{equation*} and letting $\varepsilon\downarrow0$ we get $V_\lambda(x)\geq\overline{V}_\lambda(x)$. \end{proof}
\end{document}
\begin{document}
\title{{\bf An FPT Algorithm Beating 2-Approximation for $k$-Cut}}
\author{ Anupam Gupta\thanks{Supported in part by NSF awards
CCF-1536002, CCF-1540541, and CCF-1617790. This work was done in
part when visiting the Simons Institute for the Theory of
Computing. } \and Euiwoong Lee\thanks{Supported by NSF award CCF-1115525, Samsung scholarship, and Simons award for graduate students in TCS.}
\and Jason Li\thanks{{\tt jmli@andrew.cmu.edu} }}
\date{Computer Science Department \\ Carnegie Mellon University \\ Pittsburgh, PA 15213.}
\thispagestyle{empty} \maketitle \begin{abstract}
In the $k$\textsc{-Cut}\xspace problem, we are given an edge-weighted graph $G$ and an
integer $k$, and have to remove a set of edges with minimum total
weight so that $G$ has at least $k$ connected components. Prior work
on this problem gives, for all $h \in [2,k]$, a $(2-h/k)$-approximation
algorithm for $k$-cut that runs in time $n^{O(h)}$. Hence to get a
$(2 - \varepsilon)$-approximation algorithm for some absolute constant
$\varepsilon$, the best runtime using prior techniques is
$n^{O(k\varepsilon)}$. Moreover, it was recently shown that getting a
$(2 - \varepsilon)$-approximation for general $k$ is NP-hard, assuming the
Small Set Expansion Hypothesis.
If we use the size of the cut as the parameter, an FPT algorithm to
find the exact $k$\textsc{-Cut}\xspace is known, but solving the $k$\textsc{-Cut}\xspace problem exactly
is $W[1]$-hard if we parameterize only by the natural parameter of
$k$. An immediate question is: \emph{can we approximate $k$\textsc{-Cut}\xspace better
in FPT-time, using $k$ as the parameter?}
We answer this question positively. We show that for some absolute
constant $\varepsilon > 0$, there exists a $(2 - \varepsilon)$-approximation
algorithm that runs in time $2^{O(k^6)} \cdot \widetilde{O} (n^4) $. This is the first FPT
algorithm that is parameterized only by $k$ and strictly improves the
$2$-approximation. \end{abstract}
\setcounter{page}{1}
\section{Introduction} \label{sec:introduction}
We consider the $k$\textsc{-Cut}\xspace problem: given an edge-weighted graph $G = (V,E,w)$ and an integer $k$, delete a minimum-weight set of edges so that $G$ has at least $k$ connected components. This problem is a natural generalization of the global min-cut problem, where the goal is to break the graph into $k=2$ pieces. Somewhat surprisingly, the problem has poly-time algorithms for any constant $k$: the current best result gives an $\tilde{O}(n^{2k})$-time deterministic algorithm~\cite{Thorup08}. On the approximation algorithms front, several $2$-approximation algorithms are known~\cite{SV95, NR01, RS02}. Even a trade-off result is known: for any $h \in [1,k]$, we can essentially get a $(2-\frac{h}{k})$-approximation in $n^{O(h)}$ time~\cite{XCY11}. Note that to get $(2-\varepsilon)$ for some absolute constant $\varepsilon > 0$, this algorithm takes time $n^{O(\varepsilon k)}$, which may be undesirable for large $k$. On the other hand, achieving a $(2-\varepsilon)$-approximation is NP-hard for general $k$, assuming the Small Set Expansion Hypothesis (SSEH)~\cite{Manurangsi17}.
What about a better \emph{fine-grained result} when $k$ is small? Ideally we would like a runtime of $f(k) \mathrm{poly}(n)$ so it scales better as $k$ grows --- i.e., an FPT algorithm with parameter $k$. Sadly, the problem is $W[1]$-hard with this parameterization~\cite{DEFPR03}. (As an aside, we know how to compute the optimal $k$\textsc{-Cut}\xspace in time $f(|\ensuremath{\mathsf{Opt}}\xspace|) \cdot n^{2}$~\cite{KT11, Chitnis}, where $|\ensuremath{\mathsf{Opt}}\xspace|$ denotes the cardinality of the optimal $k$\textsc{-Cut}\xspace.)
The natural question suggests itself: can we give a better approximation algorithm that is FPT in the parameter $k$?
Concretely, the question we consider in this paper is: \emph{If we
parameterize $k$\textsc{-Cut}\xspace by $k$, can we get a $(2-\varepsilon)$-approximation for
some absolute constant $\varepsilon > 0$ in FPT time---i.e., in time
$f(k) \mathrm{poly}(n)$?} (The hard instances which show $(2-\varepsilon)$-hardness assuming SSEH~\cite{Manurangsi17} have $k = \Omega(n)$, so such an FPT result is not ruled out.) We answer the question positively.
\begin{theorem}[Main Theorem]
\label{thm:kcut-main}
There is an absolute constant $\varepsilon > 0$ and a
$(2-\varepsilon)$-approximation algorithm for the $k$\textsc{-Cut}\xspace problem on general
weighted graphs that runs in time $2^{O(k^6)} \cdot \tilde{O}(n^4)$. \end{theorem}
Our current $\varepsilon$ satisfies $\varepsilon \geq 0.0003$ (see the calculations in \S\ref{sec:conclusion}). We hope that our result will serve as a proof-of-concept that we can do better than the factor of~2 in FPT$(k)$ time, and eventually lead to a deeper understanding of the trade-offs between approximation ratios and fixed-parameter tractability for the $k$\textsc{-Cut}\xspace problem. Indeed, our result combines ideas from approximation algorithms and FPT, and shows that considering both settings simultaneously can help bypass lower bounds in each individual setting, namely the $W[1]$-hardness of an exact FPT algorithm and the SSE-hardness of a polynomial-time $(2-\varepsilon)$-approximation.
To prove the theorem, we introduce two variants of $k$\textsc{-Cut}\xspace. \Laminarkcut{k} is a special case of $k$\textsc{-Cut}\xspace where both the graph and the optimal solution are promised to have special properties, and \textsc{Minimum Partial Vertex Cover}\xspace (\textsc{Partial VC}\xspace) is a variant of $k$\textsc{-Cut}\xspace where $k - 1$ components are required to be singletons, which served as the hard instance in both the exact $W[1]$-hardness and the $(2 - \varepsilon)$-approximation SSE-hardness results. Our algorithm consists of three main steps, each modular and building on the previous one: an FPT-AS for \textsc{Partial VC}\xspace, an algorithm for \Laminarkcut{k}, and a reduction from $k$\textsc{-Cut}\xspace to \Laminarkcut{k}. In the following section, we give more intuition for our three steps.
\subsection{Our Techniques} \label{sec:techniques}
For this section, fix an optimal $k$-cut ${\cal S}^* = \{ S^*_1, \dots, S^*_k\}$, such that $w(\partial{S^*_1}) \leq \dots \leq w(\partial{S^*_k})$. Let the optimal cut value be $\ensuremath{\mathsf{Opt}}\xspace := w(E(S^*_1, \dots, S^*_k)) = \sum_{i=1}^k w(\partial{S^*_i}) / 2$; here $E(A_1,\cdots, A_k)$ denotes the edges that go between different sets in this partition. The $(2 - 2/k)$-approximation iterative greedy algorithm by Saran and Vazirani~\cite{SV95} repeatedly computes the minimum cut in each connected component and takes the cheapest one to increase the number of connected components by $1$. Its generalization by Xiao et al.~\cite{XCY11} takes the minimum $h$-cut instead of the minimum $2$-cut to achieve a $(2 - h/k)$-approximation in time $n^{O(h)}$.
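As a concrete illustration of the iterative greedy scheme just described, here is a minimal Python sketch. It uses brute-force min cuts (so it only runs on tiny graphs), and all function names are ours, not the paper's:

```python
from itertools import combinations

def components(adj, nodes):
    """Connected components of the subgraph induced on `nodes`.
    `adj` is a symmetric dict-of-dicts: adj[u][v] = edge weight."""
    seen, comps = set(), []
    for s in nodes:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(v for v in adj[u] if v in nodes and v not in comp)
        seen |= comp
        comps.append(comp)
    return comps

def min_cut(adj, comp):
    """Cheapest 2-cut of one connected component, by brute force."""
    comp = sorted(comp)
    best = None
    for r in range(1, len(comp)):
        for A in combinations(comp, r):
            A = set(A)
            # each crossing edge is counted once: u inside A, v outside
            w = sum(wt for u in A for v, wt in adj[u].items() if v not in A)
            if best is None or w < best[0]:
                best = (w, A)
    return best

def greedy_k_cut(adj, k):
    """Saran-Vazirani style greedy: repeatedly delete the cheapest min cut
    among the current components until there are k components."""
    adj = {u: dict(nbrs) for u, nbrs in adj.items()}  # work on a copy
    total = 0
    while len(components(adj, set(adj))) < k:
        cand = [min_cut(adj, c) for c in components(adj, set(adj)) if len(c) > 1]
        w, A = min(cand, key=lambda t: t[0])
        for u in A:
            for v in [v for v in adj[u] if v not in A]:
                del adj[u][v]
                del adj[v][u]
        total += w
    return total
```

Replacing the inner 2-cut by a brute-force $h$-cut would give the Xiao et al.\ variant in the same skeleton.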
\subsubsection{Step I: \textsc{Minimum Partial Vertex Cover}\xspace} \label{sec:overview-pvc}
The starting point for our algorithm is the $W[1]$-hardness result of Downey et al.~\cite{DEFPR03}: the reduction from $k$-clique results in a $k$\textsc{-Cut}\xspace instance where the optimal solution consists of $k-1$ singletons separated from the rest of the graph. Can we approximate such instances well? Formally, the \textsc{Partial VC}\xspace problem asks: given an edge-weighted graph, find a set of $k-1$ vertices such that the total weight of the edges hitting these vertices is as small as possible. Extending the result of Marx~\cite{Marx07} for the maximization version, our first conceptual step is an FPT-AS for this problem, i.e., an algorithm that, given $\delta >0$, runs in time $f(k,\delta)\cdot \mathrm{poly}(n)$ and gives a $(1+\delta)$-approximation to this problem.
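For intuition, the \textsc{Partial VC}\xspace objective can be evaluated by exhaustive search on small instances. This is an illustrative brute force, not the FPT-AS of the paper, and it omits the vertex weights that appear in the later formal definition:

```python
from itertools import combinations

def partial_vc(edges, vertices, k):
    """Minimum Partial VC by exhaustive search: choose k vertices minimizing
    the total weight of the edges that hit the chosen set.
    `edges` is a list of (u, v, w) triples."""
    best = None
    for S in combinations(vertices, k):
        S = set(S)
        cost = sum(w for u, v, w in edges if u in S or v in S)
        if best is None or cost < best[0]:
            best = (cost, S)
    return best
```

On a star plus one heavy edge, the brute force correctly avoids the high-degree center and the heavy edge's endpoints.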
\subsubsection{Step II: Laminar $k$-cut} \label{sec:overview-lam}
The instances which inspire our second idea are closely related to the hard instances above. One instance on which the greedy algorithm of Saran and Vazirani gives an approximation no better than $2$ for large $k$ is this: take two cliques, one with $k$ vertices and unit edge weights, the other with $k^2$ vertices and edge weights $1/(k+1)$, so that the weighted degree of all vertices is the same. (Pick one vertex from each clique and identify them to get a connected graph.) The optimal solution is to delete all edges of the small clique, at cost $\binom{k}{2}$. But if the greedy algorithm breaks ties poorly, it will cut out $k-1$ vertices one-by-one from the larger clique, thereby getting a cut cost of $\approx k^2$, which is twice as large. Again we could use \textsc{Partial VC}\xspace to approximate this instance well. But if we replace each vertex of the above instance itself by a clique of high-weight edges, then picking out single vertices obviously does not work. Moreover, one can construct recursive and ``robust'' versions of such instances where we need to search for the ``right'' (near-)$k$-clique to break up. Indeed, these instances suggest the use of dynamic programming (DP), but what structure should we use DP on?
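The gap on this two-clique instance can be verified by direct counting. This is an illustrative computation under the poor tie-breaking described above, ignoring the single glue vertex:

```python
from math import comb

def two_clique_costs(k):
    """Cut costs on the two-clique instance: a k-clique with unit weights
    glued to a k^2-clique whose edges weigh 1/(k+1).
    opt: delete all edges of the small clique.
    bad: peel k-1 singleton vertices off the big clique; its total cost is
    the weight of all edges incident to those k-1 vertices."""
    opt = comb(k, 2)
    m = k * k  # size of the big clique
    bad = (comb(k - 1, 2) + (k - 1) * (m - (k - 1))) / (k + 1)
    return bad, opt
```

For $k = 100$ the ratio is already above $1.97$, and it tends to $2$ as $k$ grows.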
One feature of such ``hard'' instances is that the optimal $k$\textsc{-Cut}\xspace ${\cal S}^* = \{S_1^*, \ldots, S_k^*\}$ is composed of near-min-cuts in the graph. Moreover, no two of these near-min-cuts cross each other. We now define the \Laminarkcut{k} problem: find a $k$\textsc{-Cut}\xspace on an instance where none of the $(1+\varepsilon)$-min-cuts of the graph cross each other, and where each of the cut values $w(\partial{S_i^*})$ for $i = 1,\ldots, k-1$ is at most $(1+\varepsilon)$ times the min-cut. Because of this laminarity (i.e., non-crossing nature) of the near-min-cuts, we can represent the near-min-cuts of the graph using a tree $\mathcal T$, where the nodes of $G$ sit on nodes of the tree, and edges of $\mathcal T$ represent the near-min-cuts of $G$. Rooting the tree appropriately, the problem reduces to ``marking'' $k-1$ incomparable tree nodes and taking the near-min-cuts given by their parent edges, so that the fewest edges in $G$ are cut. Since all the cuts represented by $\mathcal T$ are near-min-cuts and almost of the same size, it suffices to mark $k-1$ incomparable nodes to maximize the number of edges in $G$ both of whose endpoints lie below a marked node. We call such edges \emph{saved} edges. In order to get a $(2-\varepsilon)$-approximation for \Laminarkcut{k}, it suffices to save $\approx \varepsilon k \mathsf{Mincut}$ weight of edges.
Note that if $\mathcal T$ is a star with $n$ leaves and each vertex in $G$ maps to a distinct leaf, this is precisely the \textsc{Partial VC}\xspace problem, so we do not hope to find the optimal solution (using dynamic programming, say). Moreover, extending the FPT-AS for \textsc{Partial VC}\xspace to this more general setting does not seem directly possible, so we take a different approach. We call a node an \emph{anchor} if it has some $s$ children which when marked would save $\approx \varepsilon s \mathsf{Mincut}$ weight. We take the following ``win-win'' approach: if there were $\Omega(k)$ anchors that were incomparable, we could choose a suitable subset of $k$ of their children to save $\approx \varepsilon k \mathsf{Mincut}$ weight. And if there were not, then all these anchors must lie within a subtree of $\mathcal T$ with at most $k$ leaves. We can then break this subtree into $2k$ paths and guess which paths contain anchors which are parents of the optimal solution. For each such guess we show how to use \textsc{Partial VC}\xspace to solve the problem and save a large weight of edges. Finally, how do we identify these anchors? Indeed, since all the mincuts are almost the same, finding an anchor again involves solving the \textsc{Partial VC}\xspace problem!
\subsubsection{Step III: Reducing $k$\textsc{-Cut}\xspace to \Laminarkcut{k}} \label{sec:overview-redn}
\begin{wrapfigure}{L}{0.38\textwidth}
\centering
\includegraphics[width=0.35\textwidth]{conform}
\caption{\label{fig:conform} The blue set on the right,
formed by $S_5^* \cup S_7^* \cup S_{11}^*$, conforms to the
algorithm's partition ${\cal S}$ on the left.} \end{wrapfigure}
We now reduce the general $k$\textsc{-Cut}\xspace problem to \Laminarkcut{k}. This reduction is again based on observations about the graph structure in cases where the iterative greedy algorithms do not get a $(2 - \varepsilon$)-approximation. Let ${\cal S} = \{ S_1, \dots, S_{k'} \}$ be the connected components of $G$ at some point of an iterative algorithm ($k' \leq k$). For a subset $\emptyset \neq U \subsetneq V$, we say that $U$ {\em conforms} to partition ${\cal S}$ if there exists a subset $J \subsetneq [k']$ of parts such that $U = \cup_{j \in J} S_j$. One simple but crucial observation is the following: if there exists a subset $\emptyset \neq I \subsetneq [k]$ of indices such that $\cup_{i\in I} S^*_i$ conforms to ${\cal S}$ (i.e., $\cup_{i\in I} S^*_i = \cup_{j \in J} S_j$), we can ``guess'' $J$ to partition $V$ into the two parts $\cup_{i \in I} S^*_i$ and $\cup_{i \notin I} S^*_i$. Since the edges between these two parts belong to the optimal cut and each of them is strictly smaller than $V$, we can recursively work on each part without any loss.
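The conformity test itself is elementary to state in code (a hypothetical helper, assuming the parts are given as explicit vertex sets):

```python
def conforms(U, parts):
    """True iff U is a nonempty, proper union of whole parts of the
    partition `parts` (a list of disjoint vertex sets covering V)."""
    U = set(U)
    inside = [S for S in parts if S <= U]              # parts fully inside U
    covered = set().union(*inside) if inside else set()
    all_nodes = set().union(*parts)
    # U conforms iff it is exactly the union of the parts it contains
    return bool(U) and U != all_nodes and covered == U
```

Any part that intersects $U$ only partially leaves `covered` strictly smaller than $U$, so the equality check rejects it.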
Moreover, the number of choices for $J$ is at most $2^{k'}$ and each guess produces one more connected component, so the total running time can be bounded by $f(k)$ times the running time of the rest of the algorithm, for some function $f(\cdot)$. Therefore, we can focus on the case where none of $\cup_{i \in I} S^*_i$ conforms to the algorithm's partition ${\cal S}$ at any point during the algorithm's execution.
\begin{wrapfigure}{R}{0.38\textwidth}
\centering
\includegraphics[width=0.35\textwidth]{histogram}
\caption{\label{fig:histo} The blue curve shows cut sizes
for algorithm's cuts, red curve shows $w(\partial{S^*_i})$
values. The blue area (and in fact all the area below
$w(\partial{S_1^*})$ and above the algorithm's curve) makes the first
inequality loose. The grey area (and in fact all the area above
$w(\partial{S_1^*})$ and below OPT's curve) makes the second
inequality loose.} \end{wrapfigure}
Now consider the iterative min-cut algorithm of Saran and Vazirani, and let $c_i$ be the cost of the min cut in the $i^{th}$ iteration ($1 \leq i \leq k - 1$). By our above assumption about non-conformity, none of $\cup_{i \in I} S^*_i$, and in particular the subset $S^*_1$, conforms to the current components. This implies that deleting the remaining edges in $\partial S^*_1$ is a valid cut that increases the number of connected components by at least $1$, so $c_i \leq w(\partial{S^*_1})$. Then we have the following chain of inequalities: \[ \sum_{i=1}^{k-1} c_i \leq k \cdot w(\partial{S^*_1}) \leq \sum_{i=1}^k w(\partial{S^*_i}) = 2\ensuremath{\mathsf{Opt}}\xspace. \]
If the iterative min-cut algorithm could not get a $(2 - \varepsilon)$-approximation, the two inequalities above must be essentially tight. Hence almost all our costs $c_i$ must be close to $w(\partial{S^*_1})$ and almost all $w(\partial{S^*_i})$ must be close to $w(\partial{S^*_1})$. Slightly more formally, let $\mathfrak{a} \in [k]$ be the smallest integer such that
$c_{\mathfrak{a}} \gtrsim w(\partial{S^*_1})$ ---so that the first $\mathfrak{a} - 1$ cuts are ones where we pay ``much'' less than $\partial{S^*_1}$ and make the first inequality loose. And let $\mathfrak{b} \in [k]$ be the smallest number such that
$w(\partial{S^*_{\mathfrak{b}}}) \gtrsim w(\partial{S^*_1})$ --- so that the last $k - \mathfrak{b}$ cuts in OPT are much larger than $\partial{S^*_1}$ and make the second inequality loose. Then if the iterative min-cut algorithm is no better than a $2$-approximation, we can imagine that $\mathfrak{a} = o(k)$ and $\mathfrak{b} \geq k - o(k)$. For simplicity, let us assume that $\mathfrak{a} = 1$ and $\mathfrak{b} = k$ here.
Indeed, instead of just considering min-cuts, suppose we also consider min-4-cuts, and take the one with better edges cut per number of new components. The arguments of the previous paragraph still hold, so $\mathfrak{a} = 1$ implies that the best min-cuts and best min-4-way cuts (divided by 3) are roughly at least $w(\partial{S^*_1})$ in the original $G$. Since the min-cut is also at most $w(\partial{S^*_1})$, the weight of the min-cut is roughly $w(\partial{S^*_1})$ and none of the near-min-cuts cross (else we would get a good 4-way cut). I.e., the near-min-cuts in the graph form a laminar family. Together with the fact that $\partial{S^*_1}, \dots, \partial{S^*_{k - 1}}$ are near-min-cuts (we assumed $\mathfrak{b} = k$), this is precisely an instance of \Laminarkcut{k}, which completes the proof!
\paragraph{Roadmap.} After some related work and preliminaries, we first present the details of the reduction from $k$\textsc{-Cut}\xspace to \Laminarkcut{k} in Section~\ref{sec:reduction}. Then in Section~\ref{sec:laminar} we give the algorithm for \Laminarkcut{k} assuming an algorithm for \textsc{Partial VC}\xspace. Finally we give our FPT-AS for \textsc{Partial VC}\xspace in Section~\ref{sec:partial-vc}.
\subsection{Other Related Work} \label{sec:related}
The $k$\textsc{-Cut}\xspace problem has been widely studied. Goldschmidt and Hochbaum gave an $O(n^{(1/2- o(1))k^2})$-time algorithm~\cite{GH94}; they also showed that the problem is NP-hard when $k$ is part of the input. Karger and Stein improved this to an $O(n^{(2-o(1))k})$-time randomized Monte-Carlo algorithm using the idea of random edge-contractions~\cite{KS96}. After Kamidoi et al.~\cite{KYN06} gave an $O(n^{4k + o(1)})$-time deterministic algorithm based on divide-and-conquer, Thorup gave an $\tilde{O}(n^{2k})$-time deterministic algorithm based on tree packings~\cite{Thorup08}. Small values of $k \in [2, 6]$ also have been separately studied~\cite{NI92, HO92, BG97, Karger00, NI00, NKI00, Levine00}.
On the approximation algorithms front, a $2(1-1/k)$-approximation was given by Saran and Vazirani~\cite{SV95}. Naor and Rabani~\cite{NR01}, and Ravi and Sinha~\cite{RS02} later gave $2$-approximation algorithms using tree packing and network strength respectively. Xiao et al.~\cite{XCY11} completed the work of Kapoor~\cite{Kapoor96} and Zhao et al.~\cite{ZNI01}, generalizing Saran and Vazirani to essentially give a $(2 - h/k)$-approximation in time $n^{O(h)}$. Very recently, Manurangsi~\cite{Manurangsi17} showed that for any $\varepsilon > 0$, it is NP-hard to achieve a $(2 - \varepsilon)$-approximation algorithm in time $\mathrm{poly}(n,k)$ assuming the Small Set Expansion Hypothesis.
\emph{FPT algorithms:} Kawarabayashi and Thorup give an $f(\ensuremath{\mathsf{Opt}}\xspace) \cdot n^{2}$-time algorithm~\cite{KT11} for unweighted graphs. Chitnis et al.~\cite{Chitnis} used a randomized color-coding idea to give a better runtime, and to extend the algorithm to weighted graphs. In both cases, the FPT algorithm is parameterized by the cardinality of edges in the optimal $k$\textsc{-Cut}\xspace, not by $k$. For a comprehensive treatment of FPT algorithms, see the excellent book~\cite{FPT-book}, and for a survey on approximation and FPT algorithms, see~\cite{Marx07}.
\emph{Multiway Cut:} A problem very similar to $k$\textsc{-Cut}\xspace is the \textsc{Multiway Cut} problem, where we are given $k$ terminals and want to disconnect the graph into at least $k$ pieces such that all terminals lie in distinct components. However, this problem behaves quite differently: it is NP-hard even for $k=3$ (and hence an $n^{f(k)}$ algorithm is ruled out); on the other hand several algorithms are known to approximate it to factors much smaller than~$2$ (see, e.g.,~\cite{BuchbinderSW17} and references therein). FPT algorithms parameterized by the size of $\ensuremath{\mathsf{Opt}}\xspace$ are also known; see~\cite{CaoCF14} for the best result currently known.
\section{Notation and Preliminaries} \label{sec:prelims}
For a graph $G = (V,E)$, and a subset $S \subseteq V$, we use $G[S]$ to denote the subgraph induced by the vertex set $S$. For a collection of disjoint sets $S_1, S_2, \ldots, S_t$, let $E(S_1, \ldots, S_t)$ be the set of edges with endpoints in some $S_i, S_j$ for $i \neq j$. Let $\partial S = E(S, V \setminus S)$. We say two cuts $(A, V\setminus A)$ and $(B,V\setminus B)$ \emph{cross} if none of the four sets $A \setminus B, B \setminus A, A \cap B$, and $V \setminus (A \cup B)$ is empty. $\mathsf{Mincut}$ and $\text{\sf{Min-4-cut}}$ denote the weight of the min-2-cut and the min-4-cut respectively.
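The crossing condition on cuts translates directly into code (a hypothetical helper mirroring the definition above):

```python
def cuts_cross(A, B, V):
    """Cuts (A, V \\ A) and (B, V \\ B) cross iff the four corner sets
    A \\ B, B \\ A, A & B, V \\ (A | B) are all nonempty."""
    A, B, V = set(A), set(B), set(V)
    return all([A - B, B - A, A & B, V - (A | B)])
```

Non-crossing near-min-cuts are exactly what lets them be arranged into the laminar tree $\mathcal T$ used later.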
A cut $(A, V \setminus A)$ is called a $(1 + \varepsilon)$-mincut if $w(A, V \setminus A) \leq (1 + \varepsilon) \mathsf{Mincut}$. \begin{restatable}[\textsc{Laminar $k$-Cut}$(\varepsilon_1)$]{definition}{LamDef}
\label{def:laminarcut}
The input is a graph $G = (V,E)$ with edge weights, and two parameters
$k$ and $\varepsilon_1$, satisfying two promises: (i)~no two $(1+\varepsilon_1)$-mincuts
cross each other, and (ii)~there exists a $k$-cut ${\cal S}' = \{S_1',
\ldots, S_k'\}$ in $G$ with $w(\partial(S_i')) \le (1+\varepsilon_1)\mathsf{Mincut}(G)$
for all $i \in [1,k-1]$. Find a $k$-cut of minimum total weight.
The approximation ratio is defined as the ratio of
the weight of the returned cut to the weight of the $k$\textsc{-Cut}\xspace ${\cal S}'$
(this ratio may be less than $1$). \end{restatable}
\begin{definition}[\textsc{Minimum Partial Vertex Cover}\xspace]
\label{def:pvc}
Given a graph $G = (V,E)$ with edge and vertex weights, and an integer
$k$, find a vertex set $S \subseteq V$ with $|S| = k$ nodes, minimizing the
weight of the edges hitting the set $S$ plus the weight of all
vertices in $S$. \end{definition}
\section{Reduction to $\Laminarcut{k}{\varepsilon_1}$} \label{sec:reduction}
In this section we give our reduction from $k$\textsc{-Cut}\xspace to $\Laminarcut{k}{\varepsilon_1}$, showing that if we can get a better-than-2 approximation for the latter, we can beat the factor of two for the general $k$\textsc{-Cut}\xspace problem too. We assume the reader is familiar with the overview in Section~\ref{sec:overview-redn}. Formally, the main theorem is the following.
\begin{theorem}
\label{thm:reduction1}
Suppose there exists a $(2 - \varepsilon_2)$-approximation algorithm for
$\Laminarcut{k}{\varepsilon_1}$ for some $\varepsilon_1 \in (0, 1/4)$ and
$\varepsilon_2 \in (0, 1)$ that runs in time $f(k) \cdot g(n)$. Then
there exists a $(2 - \varepsilon_3)$-approximation algorithm for $k$\textsc{-Cut}\xspace
that runs in time $2^{O(k^2 \log k)} \cdot f(k) \cdot (n^4 \log^3 n + g(n))$ for some constant
$\varepsilon_3 > 0$.
\end{theorem}
\begin{algorithm}
\caption{$\text{Main}(G = (V, E, w), k)$}
\label{alg:main}
\begin{algorithmic}[1]
\State $k' = 1$, $S_1 \gets V$
\While {$k' < k$ }
\For {$\boldsymbol{\mathrm{r}} \in [k]^{k'}$ } \label{line:start-of-check} \Comment {Further partition each $S_i$ into $r_i$ components by Laminar}
\State $|\boldsymbol{\mathrm{r}}| \gets \sum_{j=1}^{k'} r_j$; $\{ C_1, \dots,
C_{|\boldsymbol{\mathrm{r}}|} \} \gets \cup_{i \in [k']}
\text{Laminar}(\inducedG{S_i}, r_i)$.
\If{$|\boldsymbol{\mathrm{r}}| \geq k$} $C_k \gets C_k \cup \dots \cup C_{|\boldsymbol{\mathrm{r}}|}$
\Else \State $\{C_1, \dots, C_{k}\} \gets \text{Complete}(G, k, C_1, \dots, C_{|\boldsymbol{\mathrm{r}}|})$ \label{line:complete}
\EndIf
\State \text{Record}($\text{Guess}(\{C_1, \dots, C_k \})$)
\EndFor \label{line:end-of-check}
\State {} \Comment {Split some $S_i$ by a mincut or a min-4-cut}
\If{$k' > k - 3$ or $\min_{i \in [k']} \mathsf{Mincut} (\inducedG{S_{i}}) \leq
\min_{i \in [k']} \text{\sf{Min-4-cut}} (\inducedG{S_{i}}) / 3$} \label{line:start-extend}
\State $i \gets \arg\min_{i} \mathsf{Mincut} (\inducedG{S_{i}})$; $c_{k'} \gets \mathsf{Mincut} (\inducedG{S_{i}})$; $\{ T_1, T_2 \} \gets \text{Mincut}(\inducedG{S_{i}})$
\State $S_i \gets T_1$; $S_{k' + 1} \gets T_2$; $k' \gets k' + 1$
\Else
\State $i \gets \arg\min_{i} \text{\sf{Min-4-cut}} (\inducedG{S_{i}})$;
$c_{k'}, c_{k' + 1}, c_{k' + 2} \gets \text{\sf{Min-4-cut}} (\inducedG{S_{i}}) / 3$
\State $\{ T_1, \dots, T_4 \} \gets \text{Min-4-cut}(\inducedG{S_{i}})$;
$S_{i} \gets T_1$; $S_{k' + 1}, S_{k' + 2}, S_{k' + 3} \gets T_2, T_3, T_4$;
$k' \gets k' + 3$
\EndIf \label{line:end-extend}
\EndWhile
\State let ${\cal S} = \{S_1, \ldots, S_k\}$ be the final reference $k$-partition.
\State \text{Record}($\text{Guess}(G, k, {\cal S})$) \label{line:lastupdate}
\State Return the best recorded $k$-partition.
\end{algorithmic} \end{algorithm}
\begin{algorithm} \caption{$\text{Complete}(G = (V, E, w), k, {\cal C} = \{C_1, \ldots, C_\ell\})$} \label{alg:complete} \begin{algorithmic}[1] \While {$\ell < k$} \State $i \gets \arg\min_{i \in [\ell]} \mathsf{Mincut}(\inducedG{C_i})$; $T_1, T_2 \gets \text{Mincut}(\inducedG{C_i})$ \State $C_i \gets T_1$; $C_{\ell + 1} \gets T_2$; $\ell \gets \ell + 1$ \EndWhile \State Return ${\cal C} := \{C_1, \dots, C_k\}$. \end{algorithmic} \end{algorithm}
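The greedy splitting performed by Complete can be sketched in Python as follows. This illustrative version (not from the paper) substitutes a brute-force global mincut, feasible only on tiny instances, for a fast mincut routine:

```python
from itertools import combinations

def brute_mincut(vertices, w):
    """Brute-force global mincut: returns (cut weight, one side).
    Enumerates subsets avoiding a fixed vertex, so each cut is seen once."""
    vs = sorted(vertices)
    best_val, best_side = float("inf"), None
    for size in range(1, len(vs)):
        for side in combinations(vs[1:], size):
            S = set(side)
            val = sum(c for (u, v), c in w.items() if (u in S) != (v in S))
            if val < best_val:
                best_val, best_side = val, S
    return best_val, best_side

def complete(w, k, parts):
    """Sketch of 'Complete': extend an l-partition to a k-partition by
    repeatedly splitting the part whose induced subgraph has the
    cheapest mincut."""
    parts = [set(p) for p in parts]
    while len(parts) < k:
        best = None  # (mincut value, part index, one side of the cut)
        for i, P in enumerate(parts):
            if len(P) < 2:
                continue
            w_P = {e: c for e, c in w.items() if set(e) <= P}  # induced edges
            val, side = brute_mincut(P, w_P)
            if best is None or val < best[0]:
                best = (val, i, side)
        _, i, side = best
        parts[i] = parts[i] - side
        parts.append(side)
    return parts

w = {(1, 2): 10, (2, 3): 1, (3, 4): 10}   # path graph, cheap middle edge
assert sorted(map(sorted, complete(w, 2, [{1, 2, 3, 4}]))) == [[1, 2], [3, 4]]
```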
\begin{algorithm}
\caption{$\text{Guess}(G = (V, E, w), k, {\cal C} = \{C_1, \dots, C_k\})$}
\label{alg:guess}
\begin{algorithmic}[1]
\State \text{Record}($C_1, \dots, C_k$) \Comment{Returned
partition no worse than starting partition}
\For {$\emptyset \neq J \subsetneq [k]$ }
\For {$k' = 1, 2, \dots, k - 1$ }
\State $L \gets \cup_{j \in J} C_j$; $R \gets V
\setminus L$ \Comment {Divide $S_i$ into two groups,
take union of each group}
\State $D_1, \dots, D_{k'} \gets \text{Main}(\inducedG{L},
k')$ \Comment{and recurse}
\State $D_{k'+1}, \dots, D_k \gets \text{Main}(\inducedG{R}, k - k')$
\State \text{Record}($D_1, \dots, D_k$)
\EndFor
\EndFor
\State Return the best recorded $k$-partition among all these guesses.
\end{algorithmic} \end{algorithm}
The main algorithm is shown in Algorithm~\ref{alg:main} (``\text{Main}''). It maintains a ``reference'' partition ${\cal S}$, which is initially the trivial partition where all vertices are in the same part. At each point, it guesses how many pieces each part $S_i$ of this reference partition ${\cal S}$ should be split into using the ``Laminar'' procedure, and then extends this to a $k$-cut using greedy cuts if necessary (Lines~\ref{line:start-of-check}--\ref{line:end-of-check}). It then extends the reference partition by either taking the best min-cut or the best min-4-cut among all the parts (Lines~\ref{line:start-extend}--\ref{line:end-extend}).
Every time it has a $k$-partition, it guesses (using ``Guess'') if the union of some of the parts equals the union of some parts of the optimal partition, and uses that to try to get a better partition. If one of the guesses is right, we strictly increase the number of connected components by deleting edges in the optimal $k$-cut, so we can recursively solve the two smaller parts. If none of our guesses was right during the algorithm, our analysis in Section~\ref{subsec:approx_factor} shows that there exist values of $k', \boldsymbol{\mathrm{r}}$ such that ${\cal C} = \{C_1, \dots, C_k\}$ in Line~\ref{line:complete}, obtained from the reference partition ${\cal S} = \{ S_1, \dots, S_{k'} \}$ by running Laminar($G[S_{i}], r_i$) for each $i \in [k']$ and using Complete if necessary to get $k$ components, beats the $2$-approximation. Finally, a couple of words about each of the subroutines. \begin{itemize} \item Mincut$(G = (V, E, w))$ (resp.\ Min-4-cut$(G)$) returns the minimum $2$-cut (resp.\ $4$-cut) as a partition of $V$ into $2$ (resp.\ $4$) subsets. \item The subroutine ``Laminar'' returns a $(2-\varepsilon_2)$-approximation
for \Laminarcut{$k$}{$\varepsilon_1$}, using the algorithm from
Theorem~\ref{thm:laminar}. Recall the definition of the problem in Definition~\ref{def:laminarcut}.
\item The operation ``\text{Record}(${\cal P}$)'' in \text{Guess}\ and \text{Main}\ takes a
$k$-partition ${\cal P}$ and compares the weight of edges crossing this
partition to the least-weight $k$-partition recorded thus far (within
the current recursive call). If the current partition has less weight,
it updates the best partition accordingly.
\item Algorithm~\ref{alg:complete} (``\text{Complete}'') is a simple algorithm
that given an $\ell$-partition ${\cal P}$ for some $\ell \leq k$, outputs a
$k$-partition by iteratively taking the mincut in the current graph.
\item Algorithm~\ref{alg:guess} (``\text{Guess}''), when given an
$\ell$-partition ${\cal P}$ ``guesses'' if the vertices belonging to some
parts of this partition $\{ S_j \}_{j \in J}$ coincide with the union
of some $k'$ parts of the optimal partition. If so, we have made
tangible progress: it recursively finds a small $k'$-cut in the graph
induced by $\cup_{j \in J} S_j$, and a small $k-k'$ cut in the
remaining graph. It returns the best of all these guesses. \end{itemize}
\subsection{The Approximation Factor} \label{subsec:approx_factor}
\begin{lemma}[Approximation Factor]
\label{lem:apx-main}
$\text{Main}(G, k)$ achieves a $(2 - \varepsilon_3)$ approximation for some
$\varepsilon_3 > 0$ that depends on $\varepsilon_1, \varepsilon_2$ in
Theorem~\ref{thm:reduction1}. \end{lemma}
\begin{proof}
We prove the lemma by induction on $k$. The value of $\varepsilon_3$ will be
determined later. The base case $k = 1$ is trivial. Fix some value
of $k$, and a graph $G$. Let ${\cal S} = \{S_1, \dots, S_k\}$ be the
final reference partition generated by the execution of $\text{Main}(G, k)$,
and let $c_1, \dots, c_{k - 1}$ be the values associated with it. From the
definition of the $c_i$'s in Procedure~\text{Main}, $\sum_{i=1}^{k - 1} c_i =
w(E(S_1, \dots, S_k))$. The $k$-partition returned by $\text{Main}(G, k)$ is
no worse than this partition ${\cal S}$ (because of the update on
line~\ref{line:lastupdate}), and hence has cost at most $\sum_{i=1}^{k-1}
c_i = w(E(S_1, \dots, S_k))$. Let us fix an optimal $k$-cut ${\cal S}^*
= \{ S^*_1, \dots, S^*_k\}$, and let $w(\partial{S^*_1}) \leq \dots
\leq w(\partial{S^*_k})$. Let $\ensuremath{\mathsf{Opt}}\xspace := w(E(S^*_1, \dots, S^*_k)) =
\sum_{i=1}^k w(\partial{S^*_i}) / 2$.
\begin{definition}[Conformity]
\label{def:conform}
For a subset $\emptyset \neq U \subsetneq V$, we say that $U$ {\em
conforms} to partition ${\cal S}$ if there exists a subset $J
\subsetneq [k]$ of parts such that $U = \cup_{j \in J} S_j$. (See Figure~\ref{fig:conform}.)
\end{definition}
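The conformity condition can be tested mechanically: $U$ conforms to ${\cal S}$ exactly when no part of ${\cal S}$ is split by $U$, and $U$ is the union of a nonempty proper subcollection of parts. An illustrative sketch (not from the paper):

```python
def conforms(U, partition):
    """U conforms to the partition iff U is the union of a nonempty,
    proper subcollection of its parts."""
    U = set(U)
    # no part may be split: each part is inside U or disjoint from U
    if any(set(P) & U and not set(P) <= U for P in partition):
        return False
    inside = [set(P) for P in partition if set(P) <= U]
    return (0 < len(inside) < len(partition)
            and U == set().union(*inside))

parts = [{1, 2}, {3}, {4, 5}]
assert conforms({1, 2, 3}, parts)            # union of the first two parts
assert not conforms({1, 3}, parts)           # splits the part {1, 2}
assert not conforms({1, 2, 3, 4, 5}, parts)  # J must be a proper subset
```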
The following claim shows that if there exists a subset $\emptyset
\neq I \subsetneq [k]$ of indices such that $\cup_{i\in I} S^*_i$
conforms to ${\cal S}$, the induction hypothesis guarantees a $(2 -
\varepsilon_3)$-approximation.
\begin{claim}
Suppose there exists a subset $\emptyset \neq I \subsetneq [k]$ such
that $\cup_{i \in I} S_i^*$ conforms to ${\cal S}$. Then $\text{Main}(G, k)$
achieves a $(2 - \varepsilon_3)$-approximation.
\label{claim:good}
\end{claim}
\begin{proof}
Since $S^*_I := \cup_{i \in I} S^*_i$ conforms to ${\cal S}$, during
the run of $\text{Guess}(G, k, {\cal S})$ it will record the $k$-partition
$(\text{Main}(\inducedG{S^*_I}, |I|), \text{Main}(\inducedG{V \setminus S^*_I},
k - |I|) )$, and hence finally output a $k$-partition which cuts no
more edges than this starting partition. By the induction
hypothesis, $\text{Main}(\inducedG{S^*_I}, |I|)$ gives a $|I|$-cut of
$\inducedG{S^*_I}$ whose cost is at most $(2 - \varepsilon_3)$ times
$w(E(S^*_i)_{i \in I})$, and $\text{Main}(\inducedG{V \setminus S^*_I}, k
- |I|)$ outputs a $(k - |I|)$-cut of $\inducedG{V\setminus S^*_I}$
of cost at most $(2 - \varepsilon_3)$ times $w(E(S^*_i)_{i \notin
I})$. Thus, the value of the best $k$-partition returned by
$\text{Main}(G, k)$ is at most
\begin{align*}
& w(E(S^*_I, V \setminus S^*_I)) + (2 - \varepsilon_3) \left(
w(E(S^*_i)_{i \in I}) + w(E(S^*_i)_{i \notin I}) \right) \\
\leq & \ (2 - \varepsilon_3) w(E(S^*_1, \dots, S^*_k)) = (2 -
\varepsilon_3) \mathsf{Opt}. \qedhere
\end{align*}
\end{proof}
Therefore, to prove Lemma~\ref{lem:apx-main}, it suffices to assume
that no collection of parts in \ensuremath{\mathsf{Opt}}\xspace conforms to our partition at any
point in the algorithm. That is,
\begin{leftbar}
\As{1}: for every subset $\emptyset \neq I \subsetneq [k]$, $\cup_{i
\in I} S^*_i$ does not conform to ${\cal S} = \{ S_1, \dots, S_k\}$.
\end{leftbar}
Next, we study how $\ensuremath{\mathsf{Opt}}\xspace$ is related to $w(\partial{S_1^*})$. Note
that $\ensuremath{\mathsf{Opt}}\xspace \geq (k/2) \cdot w(\partial{S_1^*})$. The next claim shows
that we can strictly improve the $2$-approximation if $\ensuremath{\mathsf{Opt}}\xspace$ is even
slightly bigger than that.
\begin{claim}
For every $i = 1, \dots, k-1$, $c_i \leq w(\partial{S^*_1})$. Moreover,
if $\ensuremath{\mathsf{Opt}}\xspace \geq (k - 1)w(\partial{S_1^*}) / (2 - \varepsilon_3)$, $\text{Main}(G,
k)$ achieves a $(2 - \varepsilon_3)$-approximation.
\label{claim:notgood}
\end{claim}
\begin{proof}
Consider the beginning of an arbitrary iteration of the while loop
of $\text{Main}(G, k)$. Let $k'$ and ${\cal S}' = \{ S_1, \dots, S_{k'} \}$
be the values at that iteration. By \As{1}, set $S_1^*$ does not
conform to ${\cal S}'$ (because ${\cal S}'$ only gets subdivided as the
algorithm proceeds, and $S_1^*$ does not conform to the final partition
${\cal S}$). So there exists some $i \in [k']$ such that $S_i$
intersects both $S^*_1$ and $V \setminus S^*_1$. If we consider
$\inducedG{S_i}$ and its mincut,
\[
\mathsf{Mincut}(\inducedG{S_i}) \leq
w(E(S_i \cap S^*_1, S_i \setminus S^*_1))
\leq w(\partial{S^*_1}).
\]
Now the new $c_j$ values created in this iteration of the while loop
are at most the smallest mincut value, so we have that each $c_j
\leq w(\partial{S_1^*})$. Therefore,
\[
w(E(S_1, \dots, S_k)) = \sum_{i=1}^{k - 1} c_i \leq (k - 1)\cdot
w(\partial{S_1^*}),
\]
and $\text{Main}(G, k)$ achieves a $(2 - \varepsilon_3)$-approximation if $(k
- 1) w(\partial{S^*_1}) \leq (2 - \varepsilon_3) \ensuremath{\mathsf{Opt}}\xspace$.
\end{proof}
Consequently, it suffices to additionally assume that $\ensuremath{\mathsf{Opt}}\xspace$ is
close to $(\nicefrac{k}{2}) \, w(\partial{S^*_1})$. Formally,
\begin{leftbar}
\As{2}: $ \ensuremath{\mathsf{Opt}}\xspace < w(\partial{S^*_1}) \cdot \frac{k - 1}{2 - \varepsilon_3} $.
\end{leftbar}
Recall that $\varepsilon_1, \varepsilon_2 > 0$ are the parameters such that
there is a $(2 - \varepsilon_2)$-approximation algorithm for
$\Laminarcut{k}{\varepsilon_1}$. Let $\mathfrak{a} \in [k]$ be the smallest
integer such that $c_{\mathfrak{a}} > w(\partial{S^*_1}) (1 -
\nicefrac{\varepsilon_1}{3})$ (set $\mathfrak{a} = k$ if there is no such integer).
(See Figure~\ref{fig:histo}.)
In other words, $\mathfrak{a}$ is the value of $k'$ in the while loop of
$\text{Main}(G, k)$ when both $\min_i \mathsf{Mincut}(\inducedG{S_i})$ and $\min_i
\text{\sf{Min-4-cut}}(\inducedG{S_i}) / 3$ are bigger than $w(\partial{S^*_1}) (1 -
\nicefrac{\varepsilon_1}{3})$ for the first time. Let $\varepsilon_4 > 0$ be a constant
satisfying
\begin{equation}
\label{eq:para_1}
(2/3) \cdot \varepsilon_1 \varepsilon_4 \geq \varepsilon_3.
\end{equation}
The next claim shows that we are done if $\mathfrak{a}$ is large.
\begin{claim}
If $\mathfrak{a} \geq \varepsilon_4 k$, $\text{Main}(G, k)$ achieves a $(2 -
\varepsilon_3)$-approximation.
\label{clm:left_tail}
\end{claim}
\begin{proof}
If $\mathfrak{a} \geq \varepsilon_4 k$, we have
\begin{align*}
\sum_{i=1}^{k-1} c_i &\leq
(\mathfrak{a} - 1) (1 - \nicefrac{\varepsilon_1}{3})\cdot w(\partial{S^*_1}) +
(k - \mathfrak{a})\cdot w(\partial{S^*_1}) \\
& \leq k \cdot w(\partial{S^*_1})\cdot (1 - \nicefrac{\varepsilon_1 \varepsilon_4}{3})
\leq (2 - (\nicefrac23) \varepsilon_1 \varepsilon_4) \ensuremath{\mathsf{Opt}}\xspace \leq (2 - \varepsilon_3)
\ensuremath{\mathsf{Opt}}\xspace. \qedhere
\end{align*}
\end{proof}
Thus, we can assume that our algorithm finds very few cuts appreciably
smaller than $w(\partial{S^*_1})$.
\begin{leftbar}
\As{3}: $\mathfrak{a} < \varepsilon_4 k$.
\end{leftbar}
Let $\mathfrak{b} \in [k]$ be the smallest number such that
$w(\partial{S^*_{\mathfrak{b}}}) > w(\partial{S^*_1}) (1 +
\nicefrac{\varepsilon_1}{3})$; let it be $k$ if there is no such
number. (Again, see Figure~\ref{fig:histo}.) Observe that $\mathfrak{a}$ is
defined based on our algorithm, whereas $\mathfrak{b}$ is defined based on the
optimal solution. Let $\varepsilon_5 > 0$ be a constant satisfying:
\begin{equation}
\label{eq:para_2}
\frac{1}{2 - \varepsilon_3} \leq \frac{1 + \nf{\varepsilon_1 \varepsilon_5}{3}}{2}
~~~\Leftrightarrow~~~ (1 + \nf{\varepsilon_1 \varepsilon_5}{3})(2 -
\varepsilon_3) \geq 2.
\end{equation}
The next claim shows that $\mathfrak{b}$ should be close to $k$.
\begin{claim}
$\mathfrak{b} \geq (1 - \varepsilon_5)k$.
\end{claim}
\begin{proof}
Suppose that $\mathfrak{b} <(1 - \varepsilon_5) k$. We have
\begin{align*}
& \ \frac{ k \cdot w(\partial{S^*_1}) }{2 - \varepsilon_3} \stackrel{\As{2}}{>} \ensuremath{\mathsf{Opt}}\xspace = \frac{1}{2} \sum_{i = 1}^k w(\partial{S^*_i}) \\
\geq& \ \frac{w(\partial{S^*_1})}{2} \left( (1 - \varepsilon_5)k +
\varepsilon_5 k (1 + \varepsilon_1 / 3) \right)
=
\frac{k\cdot w( \partial{S^*_1} )}{2} \left( 1 + \nf{\varepsilon_1 \varepsilon_5}{3} \right),
\end{align*}
which contradicts~\eqref{eq:para_2}.
\end{proof}
Therefore, we can also assume that very few cuts in \ensuremath{\mathsf{Opt}}\xspace
are appreciably larger than $w( \partial{S^*_1})$.
\begin{leftbar}
\As{4}: $\mathfrak{b} \geq (1 - \varepsilon_5) k$.
\end{leftbar}
\textbf{Constructing an Instance of Laminar Cut:} In order to
construct the instance for the problem, let $S^*_{\geq \mathfrak{b}} = \cup_{i = \mathfrak{b}}^{k}
S^*_i$ be the union of these last few components from ${\cal S}^*$ which
have ``large'' boundary. Consider the iteration of the while loop
when $k' = \mathfrak{a}$ and consider $S_1, \dots, S_{\mathfrak{a}}$ in that
iteration. By its definition, $c_{\mathfrak{a}} > w(\partial{S_1^*}) (1 -
\nicefrac{\varepsilon_1}{3})$. Hence
\begin{gather}
\min_i \mathsf{Mincut}(\inducedG{S_i}) > w(\partial{S_1^*}) (1 -
\nicefrac{\varepsilon_1}{3}),
\label{eq:stop1} \\
\min_i \text{\sf{Min-4-cut}}(G[S_i]) > 3 w(\partial{S_1^*}) (1 -
\nicefrac{\varepsilon_1}{3}).
\label{eq:stop2}
\end{gather}
In particular, \eqref{eq:stop2} implies that no two near-min-cuts
cross, since two crossing near-min-cuts would result in a $4$-cut of
weight roughly at most $2 w(\partial{S_1^*})$.
However, we are not yet done, since we need to factor out the effects
of the $\mathfrak{a} - 1$ ``small'' cuts found by our algorithm. For this, we need
one further idea.
Let $\boldsymbol{\mathrm{r}} = (r_1, r_2, \ldots, r_{\mathfrak{a}}) \in [k]^{\mathfrak{a}}$ be such that
$r_i$ is the number of sets
$S^*_1, \dots, S^*_{\mathfrak{b} - 1}, S^*_{\geq \mathfrak{b}}$ that intersect with
$S_i$, and let $|\boldsymbol{\mathrm{r}}| := \sum_{i = 1}^{\mathfrak{a}} r_i$. If we consider the
bipartite graph where the left vertices are the algorithm's
components $S_1, \dots, S_{\mathfrak{a}}$, the right vertices are
$S^*_1, \dots, S^*_{\mathfrak{b} - 1}, S^*_{\geq \mathfrak{b}}$, and two sets have an
edge if they intersect, then $|\boldsymbol{\mathrm{r}}|$ is the number of edges. Since
there is no isolated vertex and the graph is connected (otherwise
there would exist $\emptyset \neq I \subsetneq [k']$ and
$\emptyset \neq J \subsetneq [k]$ with
$\cup_{i \in I}S_i = \cup_{j \in J} S^*_j$ contradicting~\As{1}),
the number of edges is $|\boldsymbol{\mathrm{r}}| \geq \mathfrak{a} + \mathfrak{b} - 1$.
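This counting argument can be checked on a toy instance. The sketch below (illustrative, not part of the proof) counts the intersecting pairs, i.e., the edges of the bipartite intersection graph:

```python
def intersection_edges(alg_parts, opt_parts):
    """|r|: number of intersecting pairs (S_i, S*_j), i.e., edges of the
    bipartite intersection graph."""
    return sum(1 for S in alg_parts for T in opt_parts if set(S) & set(T))

# toy instance: no union of algorithm parts equals a union of optimal
# parts, so the intersection graph is connected (assumption A1)
alg = [{1, 2, 3}, {4, 5, 6}]
opt = [{1, 2}, {3, 4}, {5, 6}]
r = intersection_edges(alg, opt)
assert r == 4 and r >= len(alg) + len(opt) - 1
```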
\begin{claim}
\label{clm:promises}
For each $i$ with $r_i \geq 2$,
the graph $\inducedG{S_i}$ satisfies the two promises of the problem
$\Laminarcut{r_i}{\varepsilon_1}$.
\end{claim}
\begin{proof}
Fix $i$ with $r_i \geq 2$. Let
$J := \{ j \in [\mathfrak{b} - 1] \mid S_i \cap S^*_j \neq \emptyset \}$ be
the sets $S_j^*$ among the first $\mathfrak{b} - 1$ sets in the optimal
partition that intersect $S_i$. Since $|J| \geq r_i - 1$ and $r_i \geq 2$, $|J| \geq 1$.
Note that
$(1 - \nf{\varepsilon_1}3)\cdot w(\partial{S^*_1}) <
\mathsf{Mincut}(\inducedG{S_i})$ by~\eqref{eq:stop1}.
For every $j \in J$,
\[
\mathsf{Mincut}(\inducedG{S_i}) \leq
w(E(S_i \cap S^*_j, S_i \setminus S^*_j)) \leq w(\partial{S^*_j})
\leq (1 + \varepsilon_1 / 3)\; w(\partial{S^*_1}) \leq (1 + \varepsilon_1)
\;\mathsf{Mincut}(\inducedG{S_i}).
\]
The first and second inequality hold since both parts
$S_i \cap S^*_j$ and $S_i \setminus S^*_j$ are nonempty, and hence
deleting all the edges in $\partial{S_j^*}$ would separate
$G[S_i]$.
The third inequality is by the choice of $\mathfrak{b}$, and the last
inequality uses~(\ref{eq:stop1}) and the fact that
$(1 + \nicefrac{\varepsilon_1}{3}) \leq (1 + \varepsilon_1)(1 -
\nicefrac{\varepsilon_1}{3})$ when $\varepsilon_1 < 1/4$.
This implies that in $\inducedG{S_i}$, for every $j \in J$,
$(S_i \cap S^*_j, S_i \setminus S^*_j)$ is a $(1 + \varepsilon_1)$-mincut.
Furthermore, in $\inducedG{S_i}$, no two $(1+\varepsilon_1)$-mincuts cross,
because a crossing pair would result in a $4$-cut of cost at most
\[
2(1+\varepsilon_1)\; \mathsf{Mincut}(\inducedG{S_i}) \leq 2(1+\varepsilon_1) (1 + \varepsilon_1 / 3)
\; w(\partial{S_1^*}),
\]
contradicting~\eqref{eq:stop2}. (Note that $2(1 +
\varepsilon_1) (1 + \nicefrac{\varepsilon_1}{3}) \leq 3(1 - \nicefrac{\varepsilon_1}{3})$ when
$\varepsilon_1 < 1/4$.) Hence, in $\inducedG{S_i}$, the two promises for
$\Laminarcut{r_i}{\varepsilon_1}$ are satisfied.
\end{proof}
Our algorithm $\text{Main}(G, k)$ runs $\text{Laminar}(\inducedG{S_i}, r_i)$ for
each $i \in [\mathfrak{a}]$ when it sets $k' = \mathfrak{a}$ and the vector $\boldsymbol{\mathrm{r}}$ as
defined above. As in the algorithm, let
${\cal C} = \{C_1, \dots, C_{k}\}$ be the partition obtained in
Line~\ref{line:complete}. In other words, to obtain the $k$ sets
$C_1, \dots, C_k$ from the set $V$, we take the reference partition
$S_1, \dots, S_{\mathfrak{a}}$ and further partition these sets using Laminar
to get $|\boldsymbol{\mathrm{r}}|$ parts $C_1, \dots, C_{|\boldsymbol{\mathrm{r}}|}$. If $|\boldsymbol{\mathrm{r}}| \geq k$, we
can merge the last $|\boldsymbol{\mathrm{r}}| - k + 1$ parts to get exactly $k$ parts if
we want (but we will not take any edge savings into account in this
calculation). If $|\boldsymbol{\mathrm{r}}| < k$, we get $k - |\boldsymbol{\mathrm{r}}|$ more parts using
the Complete procedure.
The total cost of this solution ${\cal C}$ is $w(E(C_1, \dots, C_k))$,
which is $\sum_{j=1}^{\mathfrak{a} - 1} c_j \leq (\mathfrak{a} - 1) w(\partial{S^*_1})$
plus the cost of $\text{Laminar}(\inducedG{S_i}, r_i)$ for all
$i \in [\mathfrak{a}]$ and the cost of $\text{Complete}$. Since
Claim~\ref{clm:promises} considers the partition of each
$\inducedG{S_i}$ obtained by cutting edges belonging to the optimal
$k$-partition,
the sum of the cost of the $r_i$-partition we compare to in
each \Laminarkcut{r_i} is exactly $\ensuremath{\mathsf{Opt}}\xspace$.
Hence the cost of the
solution given by $\text{Laminar}(\inducedG{S_i}, r_i)$ summed over
$i \in [\mathfrak{a}]$ is bounded by $(2 - \varepsilon_2) \ensuremath{\mathsf{Opt}}\xspace$, by the approximation
assumption in Theorem~\ref{thm:reduction1}.
If $\cup_{i \in I} S^*_i$ for some $\emptyset \neq I \subsetneq [k]$
conforms to ${\cal C}$, then since \text{Main}\ also records $\text{Guess}({\cal C})$,
the proof of Claim~\ref{claim:good} guarantees that $\text{Main}(G, k)$
gives a $(2 - \varepsilon_3)$ approximation using the induction hypothesis.
Otherwise, $S^*_1$ does not conform to ${\cal C}$, so the arguments used
in the proof of Claim~\ref{claim:notgood} show that the cost of
$\text{Complete}$ is at most $(k - |\boldsymbol{\mathrm{r}}|)\, w(\partial{S^*_1})$ if
$|\boldsymbol{\mathrm{r}}| \leq k$, and $0$ otherwise. Since $|\boldsymbol{\mathrm{r}}| \geq \mathfrak{a} + \mathfrak{b} - 1$,
the total cost $w(E(C_1, \dots, C_k))$ is then bounded
by
\begin{align*}
& (\mathfrak{a} - 1 ) w(\partial{S^*_1}) + (2 - \varepsilon_2) \ensuremath{\mathsf{Opt}}\xspace +
(k - \mathfrak{a} - \mathfrak{b} + 1) w(\partial{S^*_1}) \\
= & \ (2 - \varepsilon_2) \ensuremath{\mathsf{Opt}}\xspace + (k - \mathfrak{b}) w(\partial{S^*_1}) \\
\leq & \ (2 - \varepsilon_2) \ensuremath{\mathsf{Opt}}\xspace + \varepsilon_5 k \cdot
w(\partial{S^*_1}) \tag{by \As{4}}\\
\leq & \ (2 - \varepsilon_2 + 2 \varepsilon_5) \ensuremath{\mathsf{Opt}}\xspace.
\end{align*}
Therefore, if
\begin{equation}
\varepsilon_3 \leq \varepsilon_2 -2 \varepsilon_5,
\label{eq:para_4}
\end{equation}
then $\text{Main}(G, k)$ gives a $(2 - \varepsilon_3)$ approximation in every
possible case. We set $\varepsilon_3, \varepsilon_4, \varepsilon_5 > 0$ so that they
satisfy the three conditions~\eqref{eq:para_1}, \eqref{eq:para_2},
and~\eqref{eq:para_4}, namely,
\[
(2/3) \cdot \varepsilon_1 \varepsilon_4 \geq \varepsilon_3, \quad (1 +
\varepsilon_1 \varepsilon_5 / 3)(2 - \varepsilon_3) \geq 2,
\quad \varepsilon_3 \leq \varepsilon_2 - 2 \varepsilon_5.
\]
(For instance, setting $\varepsilon_4 = \varepsilon_5 = \min(\varepsilon_1,
\varepsilon_2) / 3$ and $\varepsilon_3 = \varepsilon_4^2$ works.) \end{proof}
\subsection{Running Time}
We prove that this algorithm also runs in FPT time, finishing the proof of Theorem~\ref{thm:reduction1}. \begin{lemma} Suppose that $\text{Laminar}(G, k)$ runs in time $f(k) \cdot g(n)$. Then Main$(G, k)$ runs in time $2^{O(k^2 \log k)} \cdot f(k) \cdot (g(n) + n^4 \log^3 n)$. \end{lemma} \begin{proof} Let $\mathsf{Time}(\text{P})$ denote the running time of a procedure \text{P}. Here each procedure is only parameterized by the number of sets it outputs (e.g., $\text{Main}(k), \text{Guess}(k), \text{Complete}(k), \text{Laminar}(k)$). We use the fact that the global min-cut can be computed in time $O(n^2 \log^3 n)$~\cite{KS96} and the min-$4$-cut can be computed in $O(n^4 \log^3 n)$~\cite{Levine00}. First, $\mathsf{Time}(\text{Complete}(k)) = O(kn^2 \log^3 n)$. For $\text{Guess}$ and $\text{Main}$, \[ \mathsf{Time}(\text{Guess}(k)) \leq k \cdot 2^{k + 1} \cdot ( \mathsf{Time}(\text{Main}(k - 1)) + O(n) ), \] and \begin{align*} \mathsf{Time}(\text{Main}(k)) & \leq k^k \cdot (\mathsf{Time}(\text{Laminar}(k)) +
\mathsf{Time}(\text{Guess}(k)) + \mathsf{Time}(\text{Complete}(k))) + O(k n^4 \log^3 n) \\ & \leq 2^{O(k \log k)} \cdot f(k) \cdot (g(n) + O(n^4 \log^3 n)) + 2^{O(k \log k)} \cdot \mathsf{Time}(\text{Main}(k - 1)). \end{align*} We can conclude $\mathsf{Time}(\text{Main}(k)) \leq 2^{O(k^2 \log k)} \cdot f(k) \cdot (g(n) + n^4 \log^3 n)$. \end{proof}
\section{An Algorithm for \Laminarkcut{k}} \label{sec:laminar}
Recall the definition of the \Laminarkcut{k} problem: \LamDef*
Let $\mathcal O_{\varepsilon_1}$ contain all partitions $S_1,\ldots,S_k$ of $V$ with the restriction that the boundaries of the first $k-1$ parts is small---i.e., $w(\partial{S_i}) \le (1+\varepsilon_1)\mathsf{Mincut}(G)$ for all $i \in [k-1]$. We emphasize that the weight of the last cut, i.e., $w(\partial{S_k})$, is unconstrained. In this section, we give an algorithm to find a $k$-partition (possibly not in $\mathcal O_{\varepsilon_1}$) with total weight \[ w(E(S_1,\ldots,S_k)) \le (2 - \varepsilon_2) \min\limits_{\{S_i'\}\in
\mathcal O_{\varepsilon_1}} w(E(S_1',\ldots,S_k')). \]
Formally, the main theorem of this section is the following: \begin{theorem}[Laminar Cut Algorithm]
\label{thm:laminar}
Suppose there exists a $(1+\delta)$-approximation algorithm for
$\PartialVC{k}$ for some $\delta \in (0, 1/24)$ that runs in time
$f(k) \cdot g(n)$. Then, for any $\varepsilon_1\in(0,1/6-4\delta)$, there
exists a $(2 - \varepsilon_2)$-approximation algorithm for
$\Laminarcut{k}{\varepsilon_1}$ that runs in time $2^{O(k)}f(k)(\tilde O(n^4) + g(n))$
for some constant $\varepsilon_2 > 0$. \end{theorem}
In the rest of this section we present the algorithm and the analysis. For a formal description, see the pseudocode in Appendix~\ref{sec:pseudocode-laminar}.
\subsection{Mincut Tree}
The first idea in the algorithm is to consider the structure of a laminar family of cuts. Below, we introduce the concept of a \textit{mincut tree}. The vertices of the mincut tree are called \textit{nodes}, to distinguish them from the vertices of the original graph.
\begin{definition}[Mincut Tree] A tree $\mathcal T = (V_{\mathcal T}, E_{\mathcal T}, w_{\mathcal T})$ is a \textbf{$(1+\varepsilon_1)$-mincut tree} on a graph $G=(V,E,w)$ with mapping $\phi : V \to V_{\mathcal T}$ if the following two sets are equivalent: \begin{enumerate} \item The set of all $(1+\varepsilon_1)$-mincuts of $G$. \item Cut a single edge $e \in E_{\mathcal T}$ of the tree, and let $A_e
\subset V_{\mathcal T}$ be the nodes on one side of the cut. Define $S_e :=
\phi^{-1}(A_e) = \{v \mid \phi(v) \in A_e\}$ for each $e\in E_{\mathcal T}$,
and take the set of cuts $ \{ (S_e, V \setminus S_e) : e \in E_{\mathcal T}
\}$. \end{enumerate} Moreover, for every pair of corresponding $(1+\varepsilon_1)$-mincut $(S_e, V
\setminus S_e)$ and edge $e \in E_{\mathcal T}$, we have $w_{\mathcal T}(e) = w(E(S_e,
V\setminus S_e))$. \end{definition}
We use the term \textit{mincut tree} without the $(1+\varepsilon_1)$ when the value of $\varepsilon_1$ is either implicit or irrelevant.
For the rest of this section, let \[ {\mu} := \mathsf{Mincut}(G) \] for brevity. Observe that the last condition implies that ${\mu}\le w_{\mathcal T}(e)\le(1+\varepsilon_1){\mu}$ for all $e\in E_{\mathcal T}$. The existence of a mincut tree (and an algorithm to construct it), assuming laminarity, is standard, going back at least to Edmonds and Giles~\cite{EG75}.
\begin{theorem}[Mincut Tree Existence/Construction] \label{thm:mincutTreeExistence}
If the set of $(1+\varepsilon_1)$-mincuts of a graph is laminar, then an
$O(n)$-sized $(1+\varepsilon_1)$-mincut tree always exists, and can
be found in $O(n^3)$ time. \end{theorem}
\begin{proof}
We refer the reader to~\cite[Section~2.2]{KV12}. Fix a vertex $v \in
V$, and for each $(1+\varepsilon_1)$-mincut $(S,V\setminus S)$, pick the side
that contains $v$; this family of subsets of $V$ satisfies the laminar
condition in Proposition~2.12 of that book. Corollary~2.15 proves that
this family has size $O(n)$, and the construction of $T$ in
Proposition~2.14 gives the desired mincut tree. Furthermore, we can
compute the mincut tree in $O(n^3)$ time as follows: first precompute
whether $X \subset Y$ for every two sets $X$ and $Y$ in the family,
and then compute $T$ following the construction in the proof of
Proposition~2.14. \end{proof}
\begin{definition}[Mincut Tree Terminology] Let $\mathcal T$ be a rooted mincut
tree. For $a \in V_{\mathcal T}$, define the following terms: \begin{OneLiners} \item[1.] $\ensuremath{\mathrm{children}}(a)$: the set of children of node $a$ in the rooted tree. \item[2.] $\ensuremath{\mathrm{desc}}(a)$: the set of descendants of $a$, i.e., nodes $b \in V_{\mathcal T} \setminus a$ whose path to the root includes $a$. \item[3.] $\ensuremath{\mathrm{anc}}(a)$: the set of ancestors of $a$, i.e., nodes $b \in V_{\mathcal T} \setminus a$ on the path from $a$ to the root. \item[4.] $\ensuremath{\mathrm{subtree}}(a)$: vertices in the subtree rooted at $a$, i.e., $\{a\} \cup \ensuremath{\mathrm{desc}}(a)$. \end{OneLiners} \end{definition}
For the set of partitions $\mathcal O_{\varepsilon_1}$ (as defined at the beginning of this section), we observe the following.
\begin{claim}[Representing Laminar Cuts in $\mathcal T$]
Let $\mathcal T = (V_{\mathcal T}, E_{\mathcal T}, w_{\mathcal T})$ be a $(1+\varepsilon_1)$-mincut tree of
$G=(V,E,w)$, and consider a partition $\{S_1, \ldots, S_k\} \in
\mathcal O_{\varepsilon_1}$. Then, there exists a root $r \in V_{\mathcal T}$ and nodes
$a_1, \ldots, a_{k-1} \in V_{\mathcal T} \setminus r$ such that if we root
the tree $\mathcal T$ at $r$,
\begin{enumerate}
\item For any two nodes in $\{a_1, \ldots, a_{k-1}\}$, neither is an
ancestor of the other. (We call two such nodes
\textbf{incomparable}).
\item For each $a_i$, let $A_i := \ensuremath{\mathrm{subtree}}(a_i)$, and let $A_k =
V_{\mathcal T} \setminus \bigcup_{i=1}^{k-1} A_i$ (so that $r \in A_k$). We
have the two equivalences $\{\phi^{-1}(A_i) \mid i \in [k-1]\} = \{S_1,
\ldots, S_{k-1}\}$ and $\phi^{-1}(A_k) = S_k$. In other words, the
components $A_i \subset V_{\mathcal T}$, when mapped back by $\phi^{-1}$,
correspond exactly to the sets $S_i \subset V$, with the additional
guarantee that $A_k$ and $S_k$ match. \end{enumerate} \end{claim}
\begin{proof}
Since $S_i$ is a $(1+\varepsilon_1)$-mincut for each $i \in [k-1]$, there
exists an edge $e_i \in E_{\mathcal T}$ such that the set $A_i'$ of nodes on
one side of $e_i$ satisfies $\phi^{-1}(A_i') = S_i $. The sets $A_i'$
for $i\in[k-1]$ are necessarily disjoint, and they cannot span all
nodes in $V_{\mathcal T}$, since $S_k$ is still unaccounted for. If we root
$\mathcal T$ at a node $r$ not in any $A_i'$, then each $A_i'$ is a subtree
of the rooted $\mathcal T$. Altogether, the roots of the subtrees $A_i'$
satisfy condition~(1) of the lemma, and the $A_i'$ themselves satisfy
condition~(2). \end{proof}
For a graph $G=(V,E,w)$ and mincut tree $\mathcal T=(V_{\mathcal T},E_{\mathcal T},w_{\mathcal T})$ with mapping $\phi:V\to V_{\mathcal T}$, define $E_G(A,B)$ for $A,B \subset V_{\mathcal T}$ as $E\left(\phi^{-1}(A), \phi^{-1}(B)\right)$, i.e., the total weight of edges crossing the sets corresponding to $A$ and $B$ in $V$.
\begin{observation}
Given a root $r\in V_{\mathcal T}$ and incomparable nodes $a_1, \ldots,
a_{k-1} \in V_{\mathcal T} \setminus r$, we can bound the corresponding
partition $S_1,\ldots,S_k$ as follows:
\begin{align*}
w(E(S_1, \ldots, S_k)) &= \textstyle \sum_{i=1}^{k-1}
w(\partial(S_i)) - \sum_{i<j \le k-1} w(E(S_i, S_j)) \\ &=
\textstyle \sum_{i=1}^{k-1} w_{\mathcal T}(e_i) - \sum_{i<j\le k-1}
w(E_G(\ensuremath{\mathrm{subtree}}(a_i),\ensuremath{\mathrm{subtree}}(a_j))) ,
\end{align*}
where $e_i$ is the parent edge
of $a_i$ in the rooted tree. \end{observation}
Note that ${\mu} \le w_{\mathcal T}(e) \le (1+\varepsilon_1){\mu}$ for all $e\in E_{\mathcal T}$, so to approximately minimize the above expression for a fixed root $r$, it suffices to approximately maximize \begin{align*}
\textstyle \mathsf{Saved}(a_1,\ldots,a_{k-1}) := \sum\limits_{i<j\le k-1}
w(E_G(\ensuremath{\mathrm{subtree}}(a_i),\ensuremath{\mathrm{subtree}}(a_j))), \label{eq:saved} \end{align*} which we think of as the edges \textit{saved} in the double counting of $\sum_{i=1}^{k-1}w_{\mathcal T}(e_i)$. The actual approximation factor is made precise in the proof of Theorem~\ref{thm:laminar}.
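Concretely, $\mathsf{Saved}$ can be evaluated by mapping each graph vertex to the chosen subtree containing it and summing the weights of edges whose endpoints land in different subtrees. The following is a minimal sketch under a hypothetical data layout: \texttt{children} is a child-list dictionary for $\mathcal T$, \texttt{phi} maps graph vertices to tree nodes, and \texttt{edges} maps vertex pairs to weights.

```python
def subtree_nodes(children, a):
    """All tree nodes in the subtree rooted at a (iterative DFS)."""
    stack, seen = [a], set()
    while stack:
        v = stack.pop()
        seen.add(v)
        stack.extend(children.get(v, []))
    return seen

def saved(edges, phi, children, roots):
    """Saved(a_1,...,a_{k-1}): total weight of graph edges crossing
    between phi^{-1}(subtree(a_i)) and phi^{-1}(subtree(a_j)), i < j.
    Assumes the chosen roots are pairwise incomparable."""
    block = {}  # graph vertex -> index of the subtree containing it
    for i, a in enumerate(roots):
        sub = subtree_nodes(children, a)
        for v, tv in phi.items():
            if tv in sub:
                block[v] = i
    return sum(w for (u, v), w in edges.items()
               if u in block and v in block and block[u] != block[v])
```

Vertices not covered by any chosen subtree (those mapped into $A_k$) contribute no saved edges, matching the definition above.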
To maximize the number of saved edges over all partitions in $\mathcal O_{\varepsilon_1}$, it suffices to try all possible roots $r$ and take the best partition. Therefore, for the rest of this section, we focus on maximizing $\mathsf{Saved}(a_1,\ldots,a_{k-1})$ for a fixed root $r$. Let $\ell^*(r)$ be that maximum value for root $r$, and let $\mathsf{Opt}(r) = \{a_1^*, \ldots, a_{k-1}^*\} \subset V_{\mathcal T}$ be the solution that attains it.
\subsection{Anchors}
Root the mincut tree $\mathcal T$ at $r$, and let $a_1^*,\ldots,a_{k-1}^*$ be incomparable nodes in the solution $\ensuremath{\mathsf{Opt}}\xspace(r)$. First, observe that we can assume w.l.o.g.\ that for each node $a_i^*$, its parent node is an ancestor of some $a_j^* \neq a_i^*$: if not, we can replace $a_i^*$ with its parent, which can only increase $\mathsf{Saved}(a_1^*,\ldots,a_{k-1}^*)$.
\begin{observation} Consider nodes $a_1^*,\ldots,a_s^* \in \mathsf{Opt}(r)$ which share the same parent $a \notin \mathsf{Opt}(r)$, and assume that $a$ has no other descendants. If we replace $a_1^*,\ldots,a_s^*$ in $\mathsf{Opt}(r)$ with $a$, then we lose at most $\mathsf{Saved}(a_1^*,\ldots,a_s^*)$ in our solution.\footnote{The new solution may no longer have $k-1$ nodes, but we will fix this problem in the proof of Theorem~\ref{thm:laminar}. For now, assume that we are allowed to choose any number up to $k-1$ nodes.} \end{observation}
If $\mathsf{Saved}(a_1^*,\ldots,a_s^*)$ is small compared to $(s-1){\mu}$, then we do not lose too much. This motivates the notion of anchors.
\begin{definition}[Anchors] Let $\mathcal T=(V_{\mathcal T},E_{\mathcal T},w_{\mathcal T})$ be a rooted tree. For a fixed constant $\varepsilon_3 > 0$, define an \textbf{$\varepsilon_3$-anchor} to be a node $a\in V_{\mathcal T}$ such that there exists $s \in [2,k-1]$ and $s$ children $a_1,\ldots,a_s$ such that $\mathsf{Saved}(a_1,\ldots,a_s) \ge \varepsilon_3(s-1){\mu}$. When the value of $\varepsilon_3$ is implicit, we use the term anchor, without the $\varepsilon_3$. \end{definition}
We now claim that we can transform any solution to another well-structured solution, with only a minimal loss.
\begin{lemma}[Shifting Lemma]
\label{lem:anchor}
Let $a_1,\ldots,a_{k-1}$ be a set of incomparable nodes of a
$(1+\varepsilon_1)$-mincut tree $\mathcal T$. Then, there exists a set
$b_1,\ldots,b_s$ of incomparable nodes, for $1\le s\le k-1$, such that \begin{enumerate} \item The parent of every node $b_i$ is either an $\varepsilon_3$-anchor, or is
an ancestor of some node $b_j \neq b_i$ whose parent is an anchor. \item $\mathsf{Saved}(b_1,\ldots,b_s) \ge \mathsf{Saved}(a_1,\ldots,a_{k-1}) -
\varepsilon_3(k-s){\mu}$. \end{enumerate} In particular, if $\{a_1, \ldots, a_{k-1}\} = \mathsf{Opt}(r)$, condition~(2) implies $\mathsf{Saved}(b_1,\ldots,b_s) \ge \ell^*(r) - \varepsilon_3(k-1){\mu}$. \end{lemma}
\begin{proof}
We begin with the solution $b_i=a_i$ for all $i$, and iteratively
shift non-anchors in the solution while maintaining the potential
function $\Phi:=\mathsf{Saved}(b_1,\ldots,b_s) - \mathsf{Saved}(a_1,\ldots,a_{k-1}) +
\varepsilon_3(k-s){\mu}$ nonnegative.
At the beginning, $\Phi=0$. Suppose there is a
node $b_i$ not satisfying condition (1). Choose one such $b_i$ of
maximum depth in the tree, and let $b'$ be its non-anchor parent. Then
the only descendants of $b'$ in the current solution are siblings of
$b_i$. Replace $b_i$ and its siblings in the solution, $s'$ nodes in
total, by $b'$. Since $b'$ is not an anchor, $\mathsf{Saved}(b_1,\ldots,b_s)$
drops by at most $\varepsilon_3(s'-1){\mu}$. This drop is compensated in $\Phi$
by the decrease of the solution size from $s$ to $s-(s'-1)$, which
increases the $\varepsilon_3(k-s){\mu}$ term by exactly
$\varepsilon_3(s'-1){\mu}$. \end{proof}
Hence, at a loss of $\varepsilon_3(k-1){\mu}$, it suffices to focus on a solution $\mathsf{Opt}'(r)$ which fulfills condition (1) of Lemma~\ref{lem:anchor} and has $\mathsf{Saved}$ value $\ell'(r) \geq \ell^*(r) - \varepsilon_3(k-1){\mu}$.
The rest of the algorithm splits into two cases. At a high level, if there are enough anchors in a mincut tree $\mathcal T$ that are incomparable with each other, then we can take such a set and be done. Otherwise, the set of anchors can be grouped into a small number of paths in $\mathcal T$, and we can afford to try all possible arrangements of anchors. But first we show how to find all the anchors in $\mathcal T$.
\subsection{Finding Near-Anchors}
\newcommand{\mathcal{A}}{\mathcal{A}}
\begin{lemma}[Finding (Near-)Anchors]
\label{lem:compute-anchors}
Assume access to a $(1+\delta)$-approximation algorithm for $\PartialVC k$
running in time $f(k) \cdot g(n)$. Then, there is an algorithm running
in time $O(n \cdot (n^2 + k \cdot f(k) \cdot g(n)))$ that computes a
set $\mathcal{A}$ of ``near''-anchors in $\mathcal T$, i.e., vertices $a \in
V_{\mathcal T}$ for which there exists an integer $s \in [2,k-1]$ and $s$
children $b_1, \ldots, b_s$ such that $\mathsf{Saved}(b_1,\ldots,b_s) \geq
\varepsilon_3(s-1){\mu} - \delta(1+\varepsilon_1)s{\mu}$. \end{lemma}
\begin{proof}
To determine if a node $a$ is an anchor or not, for each integer $s
\in [2, k-1]$ we wish to compute the maximum value of
$\mathsf{Saved}(b_1,\ldots,b_s)$ for $b_1,\ldots,b_s \in
\ensuremath{\mathrm{children}}(a)$. Consider the following weighted, complete graph with
vertex and edge weights: for each $b \in \ensuremath{\mathrm{children}}(a)$ create a vertex
$x_b$, and the edge $(x_{b_1}, x_{b_2})$ has weight
$\mathsf{Saved}(b_1,b_2)$. Each vertex $x_b$ also has weight $(1+\varepsilon_1){\mu}
- w(\partial x_b)$, where $w(\partial x_b)$ is the sum of the weights
of edges incident to $x_b$. Note that this graph is
$(1+\varepsilon_1){\mu}$-regular, if we include vertex weights in the
definition of vertex degree.
Observe that $w(\partial x_b)
\le w\!\left(\partial\left(\phi^{-1}(\ensuremath{\mathrm{subtree}}(b))\right)\right) \le (1+\varepsilon_1){\mu}$,
since every edge in $G$ that contributes to $\mathsf{Saved}(b,b')$ for another
child $b'$ also contributes to the cut
$\partial\left(\phi^{-1}(\ensuremath{\mathrm{subtree}}(b))\right)$, which we know is $\le
(1+\varepsilon_1){\mu}$.
Therefore, each vertex has a nonnegative weight.
Also, a partial vertex cover on this graph with vertices $x_{b_1},
\ldots, x_{b_s}$ has weight exactly $(1+\varepsilon_1)s{\mu} - \mathsf{Saved} (b_1,
\ldots, b_s)$.
Let $b_1^*,\ldots,b_s^* \in \ensuremath{\mathrm{children}}(a)$ be the solution with maximum
$\mathsf{Saved}(b_1^*,\ldots,b_s^*)$. To compute this maximum, we can build
the above graph and run the $(1+\delta)$-approximate partial vertex
cover algorithm from Theorem~\ref{thm:pvc}. The solution
$b_1,\ldots,b_s$ satisfies
\[ (1+\varepsilon_1)s{\mu} - \mathsf{Saved}(b_1,\ldots,b_s) \le
(1+\delta)\left((1+\varepsilon_1)s{\mu} - \mathsf{Saved}(b_1^*,\ldots,b_s^*)\right),
\]
so that
\begin{align*}
\mathsf{Saved}(b_1,\ldots,b_s) & \ge (1+\delta)\,\mathsf{Saved}(b_1^*,\ldots,b_s^*)
- \delta(1+\varepsilon_1)s {\mu} \\
& \ge \mathsf{Saved}(b_1^*,\ldots,b_s^*) - \delta(1+\varepsilon_1)s{\mu}.
\end{align*}
We run this subprocedure for the vertex $a$ for each integer $2 \leq s
\le \min\{|\ensuremath{\mathrm{children}}(a)|, k-1\}$, and mark vertex $a$ if there exists
an integer $s$ such that the weight of saved edges is at least
$\varepsilon_3(s-1){\mu} - \delta(1+\varepsilon_1)s{\mu}$. The set $\mathcal{A}$ of
near-anchors is exactly the set of marked vertices.
As for running time, for each node $a$, it takes $O(n^2)$ time to construct the $\textsc{Partial VC}\xspace$ graph and $O(k) \cdot f(k) \cdot g(n)$ time to solve $\PartialVC s$ for each $s \in [2,k-1]$. Repeating the above for each of the $O(n)$ nodes achieves the promised running time.
\end{proof}
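The graph built in this proof can be sketched as follows; \texttt{saved\_pair(b1, b2)} is a hypothetical callback standing in for the precomputed values $\mathsf{Saved}(b_1,b_2)$. The second helper checks the stated identity that a partial vertex cover on $s$ vertices has weight exactly $(1+\varepsilon_1)s{\mu} - \mathsf{Saved}(b_1,\ldots,b_s)$.

```python
def build_pvc_instance(children_of_a, saved_pair, mu, eps1):
    """Complete graph on the children of a: edge (b1,b2) gets weight
    Saved(b1,b2); vertex b gets weight (1+eps1)*mu - w(partial x_b),
    so every vertex has weighted 'degree' exactly (1+eps1)*mu."""
    edge_w = {}
    deg = {b: 0.0 for b in children_of_a}
    for i, b1 in enumerate(children_of_a):
        for b2 in children_of_a[i + 1:]:
            w = saved_pair(b1, b2)
            edge_w[(b1, b2)] = w
            deg[b1] += w
            deg[b2] += w
    vert_w = {b: (1 + eps1) * mu - deg[b] for b in children_of_a}
    return edge_w, vert_w

def pvc_cost(edge_w, vert_w, S):
    """Partial-VC objective: weight of edges hitting S plus vertex weights."""
    s = set(S)
    hit = sum(w for (u, v), w in edge_w.items() if u in s or v in s)
    return hit + sum(vert_w[b] for b in s)
```

Minimizing \texttt{pvc\_cost} over sets of size $s$ is therefore equivalent to maximizing $\mathsf{Saved}$ over $s$ children, which is what the approximate \textsc{Partial VC}\xspace call exploits.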
\subsection{Many Incomparable Near-Anchors}
\begin{lemma}[Many Anchors]
\label{lem:incomparable}
Suppose we have access to a $(1+\delta)$-approximation algorithm for
$\PartialVC k$ running in time $f(k) \cdot g(n)$. Suppose the set
$\mathcal{A}$ of near-anchors contains $k-1$ incomparable nodes from the
mincut tree $\mathcal T$. Then, there is an algorithm computing a solution
with $\mathsf{Saved}$ value
$\ge \frac14\varepsilon_3(k-1){\mu} - \delta(1+\varepsilon_1)(k-1){\mu}$ for any
$\delta>0$, running in time
$O(n \cdot (n^2 + k \cdot f(k) \cdot g(n)))$. \end{lemma}
\begin{proof}
First, we compute the set $\mathcal{A}$ in $O(n \cdot (n^2 + k \cdot f(k)
\cdot g(n)))$ time, according to Lemma~\ref{lem:compute-anchors}. If
$\mathcal{A}$ contains $k-1$ incomparable nodes, we can \textit{find}
them in $O(n^2)$ time by greedily choosing nodes in a topological,
bottom-first order (see lines 4--11 in
Algorithm~\ref{alg:laminarRooted}). Each of these $k-1$ marked nodes
$a_1, \ldots, a_{k-1}$ has an associated value $s_i$, indicating that
$a_i$ has some $s_i$ children whose $\mathsf{Saved}$ value is at least
$\varepsilon_3(s_i-1){\mu} - \delta(1+\varepsilon_1)s_i{\mu}$. If we consider a
subset $A \subset [k-1]$ and choose the $s_i$ children for each $a_i$
with $i\in A$, then we get a set with $\sum_{i\in A} s_i$ nodes, whose
total $\mathsf{Saved}$ value is at least \[\varepsilon_3\left( \sum_{i\in
A}(s_i-1)\right){\mu} - \delta(1+\varepsilon_1)\left( \sum_{i\in A}s_i
\right){\mu}.\] Assuming that $\sum_{i\in A}s_i \le k-1$, i.e., we
choose at most $k-1$ children, the second
$\delta(1+\varepsilon_1)\left(\sum_{i\in A}s_i\right){\mu}$ term is at most
$\delta(1+\varepsilon_1) (k-1){\mu}$. To optimize the $\varepsilon_3\left( \sum_{i\in
A}(s_i-1)\right){\mu}$ term, we reduce to the following knapsack
problem: we have $k-1$ items $i \in [k-1]$ where item $i$ has size
$s_i \in [2,k-1]$ and value $s_i-1$, and our bag size is $k-1$. A
knapsack solution of value $Z:=\sum_{i\in A}(s_i-1)$ translates to a
solution with $\mathsf{Saved}$ value $\ge \varepsilon_3{\mu} \cdot Z -
\delta(1+\varepsilon_1)(k-1){\mu}$. By Lemma~\ref{lemma:knapsack}, when $k
\ge 5$, we can compute a solution $A \subset [k-1]$ of value $\ge
(k-1)/4$ in $O(k)$ time. (If $k\le4$, we can use the exact $\tilde O(n^4)$
$k$\textsc{-Cut}\xspace algorithm from~\cite{Levine00}.) Selecting the children of each
$a_i$ with $i\in A$ gives a total $\mathsf{Saved}$ value of at least
$\frac14\varepsilon_3(k-1){\mu} - \delta(1+\varepsilon_1)(k-1){\mu}$. \end{proof}
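The knapsack step can be sketched with a standard 0/1 DP. This is not the $O(k)$ routine of Lemma~\ref{lemma:knapsack} (which guarantees value $\ge (k-1)/4$); it is an exact table shown for illustration, with items of size $s_i$ and value $s_i-1$ and capacity $k-1$.

```python
def knapsack_best(sizes, capacity):
    """Max total value sum(s_i - 1) over a subset with sum(s_i) <= capacity.
    Standard 0/1 knapsack DP; returns (value, chosen index set)."""
    best = [(0, frozenset())] * (capacity + 1)  # best[c] for capacity c
    for i, s in enumerate(sizes):
        for c in range(capacity, s - 1, -1):    # backwards: each item once
            cand = (best[c - s][0] + s - 1, best[c - s][1] | {i})
            if cand[0] > best[c][0]:
                best[c] = cand
    return best[capacity]
```

The chosen index set $A$ then determines which anchors' children are selected, exactly as in the proof.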
\subsection{Few Incomparable Near-Anchors}
\begin{figure}
\caption{ Establishing the set of branches $\mathcal B$. The circled nodes on the left are the near-anchors. The middle graph is the tree $\mathcal T'$. On the right, each non-black color is an individual branch; actually, the branches only consist of nodes, but we connect the nodes for visibility. Also, note that the root is its own branch. The red, orange, yellow, and green branches form an incomparable set.}
\label{figure:branches}
\end{figure}
If the condition in Lemma~\ref{lem:incomparable} does not hold, then there exist $\le k-2$ paths from the root in $\mathcal T$ such that every node in the near-anchor set $\mathcal{A}$ lies on one of these paths. If we view the union of these paths as a tree $\mathcal T'$ with $\le k-2$ leaves, then we can partition the nodes in tree $\mathcal T'$ into a collection $\mathcal B$ of at most $2k-3$ \textit{branches}. Each branch $B$ is a collection of vertices obtained by taking either a leaf of $\mathcal T'$ or a vertex of degree more than two, and all its immediate degree-2 ancestors; see Figure~\ref{figure:branches}. Note that it is possible that the root node is its own branch. Hence, given two branches $B_1, B_2 \in \mathcal B$, either every node from $B_1$ is an ancestor of every node from $B_2$ (or vice versa), or else every node from $B_1$ is incomparable with every node from $B_2$.
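The branch decomposition just described can be sketched as follows, assuming $\mathcal T'$ is given by hypothetical \texttt{parent}/\texttt{children} maps (with an entry, possibly empty, for every node): every leaf or node with at least two children starts a branch, and absorbs its maximal chain of one-child ancestors.

```python
def branches(parent, children):
    """Partition nodes of a rooted tree into branches: each leaf or
    node with >= 2 children, together with its maximal chain of
    ancestors that have exactly one child."""
    out = []
    for v in children:
        if len(children.get(v, [])) != 1:   # leaf or branching node
            branch = [v]
            u = parent.get(v)
            while u is not None and len(children.get(u, [])) == 1:
                branch.append(u)            # absorb degree-2 ancestors
                u = parent.get(u)
            out.append(branch)
    return out
```

If the root has exactly one child it is absorbed into its child's branch; otherwise it forms its own branch, matching the remark above.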
Let $A' \subseteq \mathcal{A}$ be the set of anchors with at least one child in $\mathsf{Opt}'(r)=\{a_1^*,\ldots,a_s^*\}$; recall that $\mathsf{Opt}'(r)$ was produced by the shifting procedure in Lemma~\ref{lem:anchor}. Let $A^* \subseteq A'$ be the \textit{minimal} anchors in $A'$, i.e., those anchors in $A'$ that are not ancestors of any other anchor in $A'$. We know that every anchor in $A^*$ falls inside our set of branches, although the algorithm does not know where. Moreover, by condition~(1) of Lemma~\ref{lem:anchor}, the parent of every $a_i^* \in \mathsf{Opt}'(r)$ either lies in $A^*$, or is an ancestor of an anchor in $A^*$.
As a warm-up, consider the case where all the anchors in $A'$ are contained within a single branch.
\begin{figure}
\caption{The single-branch case: the minimal anchor $a^*$ and the children of $a^*$ and of its ancestors within the branch $B$.}
\label{figure:single-branch}
\end{figure}
\begin{claim}[Warm-up]
\label{claim:singleBranch}
Assume there exists a $(1+\delta)$-approximation algorithm for $\PartialVC k$ running in time $f(k) \cdot g(n)$. Suppose the set of anchors $A'$ with at least one child in
$\mathsf{Opt}'(r)$ is contained within a single branch $B$. Then there is an
algorithm computing a solution with $\mathsf{Saved}$ value at least $\ell'(r)
- \delta(1+\varepsilon_1)(k-1){\mu}$, running in time $O(n \cdot (n^2 + f(k) \cdot g(n)))$. \end{claim}
\begin{proof}
If all of $A'$ lies on $B$, the minimal anchor $a^* \in A^*$ must also
be in $B$. Moreover, for every $a_i^*\in\mathsf{Opt}'(r)$, its parent is
either $a^*$ or an ancestor of $a^*$, which means that $\mathsf{Opt}'(r) \subseteq
\ensuremath{\mathrm{children}}((\{a^*\} \cup \ensuremath{\mathrm{anc}}(a^*)) \cap B)$. Since the nodes in $\ensuremath{\mathrm{children}}((\{a^*\}
\cup \ensuremath{\mathrm{anc}}(a^*)) \cap B)$ are incomparable (see Figure~\ref{figure:single-branch}), we can construct the same
graph as the one in Lemma~\ref{lem:compute-anchors} on all these nodes
in $\ensuremath{\mathrm{children}}((\{a^*\} \cup \ensuremath{\mathrm{anc}}(a^*)) \cap B)$ and run the \textsc{Partial VC}\xspace-based
algorithm to get the same $\mathsf{Saved}$ guarantees (see
Algorithm~\ref{alg:subtreePVC}).
Therefore, the algorithm guesses the location of $a^*$ inside $B$ by
trying all possible $|B|=O(n)$ nodes, and for each choice of $a^*$,
runs the $(1+\delta)$-approximate \textsc{Partial VC}\xspace-based algorithm from
Lemma~\ref{lem:compute-anchors} on the corresponding graph (see
Algorithm~\ref{alg:singleBranch}). \end{proof}
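The guessing loop in this proof can be sketched as follows, with hypothetical helpers: \texttt{anc(v)} returns the ancestors of $v$, and \texttt{run\_pvc} runs the \textsc{Partial VC}\xspace-based routine on a candidate set and returns its $\mathsf{Saved}$ value. Chain nodes themselves are excluded from the candidates so that the set is pairwise incomparable.

```python
def best_over_branch(branch, children, anc, run_pvc):
    """For each guess of the minimal anchor a* in branch B, gather the
    children of ({a*} U anc(a*)) restricted to B, excluding the chain
    nodes themselves (these candidates are pairwise incomparable),
    and keep the best PartialVC-based solution value."""
    bset = set(branch)
    best = None
    for a_star in branch:
        chain = {a_star} | {x for x in anc(a_star) if x in bset}
        cand = [c for u in chain for c in children.get(u, []) if c not in chain]
        sol = run_pvc(cand)
        if best is None or sol > best:
            best = sol
    return best
```

Since $|B| = O(n)$, this adds only a linear factor over a single run of the \textsc{Partial VC}\xspace-based routine.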
Now for the general case. Consider $\mathsf{Opt}'(r)$ and the set of all branches $\mathcal B$. Let $\mathcal B^* \subseteq \mathcal B$ be the incomparable branches that contain the minimal anchors, i.e., those in $A^*$. We classify the $\ell'(r)$ saved edges in $\mathsf{Opt}'(r)$ into two groups (see Figure~\ref{figure:single-branch}): if an edge is saved between the subtrees below $a_i^*,a_j^* \in \mathsf{Opt}'(r)$ whose parent(s) belong to the same branch in $\mathcal B^*$, then we call it an \textit{internal edge}. Otherwise, it is an \textit{external edge}: these are saved edges in $\mathsf{Opt}'(r)$ that either go between two subtrees in different branches, or between subtrees in the same branch in $\mathcal B \setminus \mathcal B^*$. One of the two groups accounts for $\ge \frac12\ell'(r)$ saved edges, and we provide two separate algorithms, one to approximate each group.
\begin{lemma}\label{lem:chains}
Assume there exists a $(1+\delta)$-approximation algorithm for $\PartialVC k$ running in time $f(k) \cdot g(n)$. Suppose that all anchors of $\mathsf{Opt}'(r)$ are contained in a set $\mathcal B$ of
$\le 2k-3$ branches. Then there is an algorithm that computes a
solution with $\mathsf{Saved}$ value $\ge \frac12\ell'(r) -
\delta(1+\varepsilon_1)(k-1){\mu}$, running in time $ 2^{O(k)} \cdot (n^2+f(k)\cdot g(n)) $. \end{lemma}
\begin{proof}
\emph{Case I: internal edges $\ge\frac12\ell'$.} For each branch
$B\in\mathcal B$ and each $s\in [k-1]$, compute a solution of $s$ nodes that
maximizes the number of internal edges \textit{within branch
$B$}, in the same manner as in
Claim~\ref{claim:singleBranch}; this takes time $O(k^2n \cdot (n^2 + f(k) \cdot g(n)))$. Finally, guess all possible $\le
2^{2k-3}$ subsets of incomparable branches; for each subset
$\mathcal B'\subseteq\mathcal B$, try all vectors $\mathbf i \in [k-1]^{\mathcal B'}$ with
$\sum_{B\in\mathcal B'} i_B \le k-1$, look up the solution using $i_B$
vertices in branch $B$, and sum up the total number of internal
edges. Actually, trying all vectors $\mathbf i \in [k-1]^{\mathcal B'}$ takes $k^{O(k)}$ time, but we can speed up this step to $\mathrm{poly}(k)$ time using dynamic programming. Since one
of the guesses $\mathcal B'$ will be $\mathcal B^*$, the best solution will save at
$\ge\frac12\ell'(r)-\delta(1+\varepsilon_1)(k-1){\mu}$ edges. The total running time for this case is $O(k^2 n \cdot (n^2 + f(k) \cdot g(n)) + 2^{2k}\cdot\mathrm{poly}(k))$.
\emph{Case II: external edges $\ge\frac12\ell'$.} Again, we guess the
set $\mathcal B^*\subset\mathcal B$ of incomparable branches containing minimal
anchors $A^*$. For a branch $B \in \mathcal B^*$, let $a_B \in B$ be the node
with $B \setminus \{a_B\} \subseteq \ensuremath{\mathrm{desc}}(a_B)$, i.e., the ``highest'' node
in $B$, which is an ancestor of every other node in $B$. For each branch, we
can replace all nodes in $\mathsf{Opt}'(r)$ that are descendants of $a_B$ with
just $a_B$; doing so can only increase the number of external edges. The new
solution
has all nodes contained in the set
\[ \ensuremath{\mathrm{children}}\bigg(\ensuremath{\mathrm{anc}}\bigg(\bigcup_{B\in\mathcal B^*}\{a_B\}\bigg)\bigg), \]
which is a set of incomparable nodes. Therefore, we can construct the
graph of Lemma~\ref{lem:compute-anchors} and use the \textsc{Partial VC}\xspace-based
algorithm with this node set instead. This gives a solution with $\ge
\frac12\ell'(r)-\delta(1+\varepsilon_1)(k-1){\mu}$ saved edges. The total running time for this case is $O(2^{2k}\cdot (n^2 + f(k) \cdot g(n)))$. \end{proof}
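The dynamic program mentioned in Case~I, which replaces the enumeration of all vectors $\mathbf i \in [k-1]^{\mathcal B'}$, can be sketched as a group-knapsack DP over branches. This is a minimal version under a hypothetical input format: \texttt{per\_branch} holds, for each branch, a table mapping a node count $s$ to the best internal-edge value achievable with $s$ nodes in that branch.

```python
def combine_branches(per_branch, k):
    """per_branch[j][s] = best internal-edge value using s nodes in
    branch j. DP over branches: best total value using at most k-1
    nodes overall (each branch contributes at most one entry)."""
    best = [0] * k                       # best[c]: max value with <= c nodes
    for table in per_branch:
        new = best[:]                    # option: skip this branch
        for c in range(k):
            for s, val in table.items():
                if s <= c and best[c - s] + val > new[c]:
                    new[c] = best[c - s] + val
        best = new
    return best[k - 1]
```

Running this once per guessed subset $\mathcal B'$ replaces the $k^{O(k)}$ enumeration with $\mathrm{poly}(k)$ work per guess, as claimed in the proof.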
\subsection{Combining Things Together}
Putting things together, we conclude with Theorem~\ref{thm:laminar}. We refer the reader to Algorithm~\ref{alg:laminar} for the pseudocode of the entire algorithm.
\begin{proof}[Proof of Theorem~\ref{thm:laminar}]
Let the original graph be $G=(V,E,w).$ We compute a $(1+\varepsilon_1)$-mincut
tree $\mathcal T=(V_{\mathcal T},E_{\mathcal T},w_{\mathcal T})$ with mapping $\phi:V\to V_{\mathcal T}$ in time $O(n^3)$, following Theorem~\ref{thm:mincutTreeExistence}.
Then, by running the two algorithms in Lemma~\ref{lem:incomparable} and
Lemma~\ref{lem:chains}, we compute a solution with $s \le k-1$
vertices with $\mathsf{Saved}$ value at least
\begin{align*}
& \max\left\{
\frac14\varepsilon_3(k-1){\mu} - \delta(1+\varepsilon_1)(k-1){\mu},\
\frac12\ell'(r) - \delta(1+\varepsilon_1)(k-1){\mu} \right\} \\
= & \max\left\{ \frac14\varepsilon_3(k-1){\mu}, \ \frac12\ell'(r) \right\} - \delta(1+\varepsilon_1)(k-1){\mu}
\end{align*}
for each root $r \in V_{\mathcal T}$ (see
Algorithm~\ref{alg:laminarRooted}). Using $\max\{p, q\} \geq
(4p+2q)/6$ and
$\ell'(r) \ge \ell^*(r) - \varepsilon_3(k-1){\mu}$
we get a solution with $\mathsf{Saved}$ value at least
\begin{align*}
& \frac16 \left( 4 \cdot \frac14\varepsilon_3(k-1){\mu} + 2 \cdot \frac12 \left[ \ell^*(r) - \varepsilon_3(k-1){\mu} \right] \right) - \delta(1+\varepsilon_1)(k-1){\mu} \\
\ge & \frac16\ell^*(r) - 2\delta(k-1){\mu},
\end{align*}
using that $\varepsilon_1 \leq 1$. In particular, the best solution
$v_1,\ldots,v_s \in V_{\mathcal T}$ over all $r$ satisfies
\[ \mathsf{Saved}(v_1,\ldots,v_s) \ge \frac16 \ell^* - 2\delta(k-1){\mu} ,\]
where we write $\ell^* := \max_{r \in V_{\mathcal T}} \ell^*(r)$.
Let $v_1,\ldots,v_s \in V_{\mathcal T}$ be our solution with
$\mathsf{Saved}(v_1,\ldots,v_s) \ge \frac16 \ell^* - 2\delta(k-1){\mu}$. Let
$S_1,\ldots,S_s \subset V$ be the corresponding subsets in $V$, i.e.,
$S_i := \phi^{-1}(\ensuremath{\mathrm{subtree}}(v_i))$. Then, add the complement set
$S_{s+1}:=V \setminus \bigcup_{i\in[s]}S_i$ to the solution, so that
the sets $S_i$ partition $V$, and
\[w(E(S_1,\ldots,S_{s+1})) \le s(1+\varepsilon_1){\mu} - \left(\frac16 \ell^* -
2\delta(k-1){\mu}\right). \] Then, extend the solution to a
$k$-partition using Algorithm~\ref{alg:complete}. We now claim that every additional
cut that Algorithm~\ref{alg:complete} makes is a $(1+\varepsilon_1)$-mincut.
To see this, observe that $S_1^*, \ldots, S_{k-1}^*$ are all $(1+\varepsilon_1)$-mincuts and one
of them, say $S_j^*$, has to intersect some $S_i$. Then, the cut $(S_i \cap S_j^*, S_i \setminus S_j^*)$
is a $(1+\varepsilon_1)$-mincut in $S_i$. We can repeat this argument as long as we have $< k$ components $S_i$.
At the end, we have a
solution $S_1', \ldots, S_k'$ satisfying
\begin{align*}
w(E(S_1', \ldots, S_k')) & \le w(E(S_1,\ldots,S_{s+1})) +
(k-1-s)(1+\varepsilon_1){\mu}
\\
& \le (k-1)(1+\varepsilon_1){\mu}- \left(\frac16 \ell^* -
2\delta(k-1){\mu}\right).
\end{align*}
Let $S^*_1,\ldots,S^*_k$ be the optimal partition in $\mathcal O_{\varepsilon_1}$
satisfying $\phi(r) \in S^*_k$, and let $\ell^*$ be the maximum of
$\mathsf{Saved}(v_1^*,\ldots,v_{k-1}^*)$ over incomparable
$v_1^*,\ldots,v_{k-1}^*$. Our solution has approximation ratio
\begin{align*}
\frac{w(E(S_1',\ldots,S_k'))}{w(E(S_1^*,\ldots,S_k^*))} & \le
\frac{(k-1)(1+\varepsilon_1){\mu} - \frac16\ell^* +
2\delta(k-1){\mu}}{(k-1){\mu} - \ell^*} \\
& = \frac{(k-1)(1+\varepsilon_1){\mu} - \frac16\ell^* }{(k-1){\mu} -
\ell^*} + \frac{ 2\delta(k-1){\mu}}{(k-1){\mu} - \ell^*} \\
& \le 2(1+\varepsilon_1) - \frac16 + 4\delta,
\end{align*}
with the worst case achieved at $\ell^*=\frac12(k-1){\mu}$, which is
the highest $\ell^*$ can be. Setting $\varepsilon_2:=1/6 - 2\varepsilon_1-4\delta$
concludes the proof.
As for running time, we run the algorithms in Lemma~\ref{lem:incomparable} and Lemma~\ref{lem:chains} sequentially, and the final running time is $2^{O(k)} \cdot f(k) \cdot (\tilde O(n^4) + g(n))$. (The $\tilde O(n^4)$ comes from the case when $k\le 4$, in which we solve the problem exactly in $\tilde O(n^4)$ time.) \end{proof}
\newcommand{\mathsf{Wdeg}}{\mathsf{Wdeg}}
\section{An FPT-AS for \textsc{Minimum Partial Vertex Cover}\xspace} \label{sec:partial-vc}
Recall the \textsc{Minimum Partial Vertex Cover}\xspace (\textsc{Partial VC}\xspace) problem: the input is a graph $G = (V,E)$ with edge and vertex weights, and an integer $k$. For a set $S$, define
$E_S$ to be the set of edges with at least one endpoint in $S$. The goal of the problem is to find a set $S$ with size $|S| = k$, minimizing the weight $w(E_S) + w(S)$, i.e., the weight of all edges hitting $S$ plus the weight of all vertices in $S$. Our main theorem is the following. \begin{theorem}[\textsc{Minimum Partial Vertex Cover}\xspace]
\label{thm:pvc}
There is a randomized algorithm for \textsc{Partial VC}\xspace on weighted graphs that, for
any $\delta \in (0,1)$, runs in $O(2^{k^6/\delta^3} (m +
k^8/\delta^3)\,n \log n)$ time and outputs a
$(1+\delta)$-approximation to \textsc{Partial VC}\xspace with probability $1 - 1/\mathrm{poly}(n)$. \end{theorem}
We first extend a result of Marx~\cite{Marx07} to give a $(1+\delta)$-approximation algorithm for the case where $G$ has edge weights being integers in $\{1, \ldots, M\}$ and no vertex weights, and then show how to reduce the general case to this special case, losing only another $(1+\delta)$-factor.
\subsection{Graphs with Bounded Weights}
\begin{lemma}
\label{lem:pvc-simple}
Let $\delta \leq 1$. There is a randomized algorithm for the \textsc{Partial VC}\xspace
problem on simple graphs with edge weights in $\{1, \ldots, M\}$ (and
no vertex weights) that runs in $O(m+Mk^4/\delta)$ time, and outputs a $(1+\delta)$-approximation with probability at least
$2^{-(Mk^2/\delta)}$. \end{lemma}
\begin{proof}
This is a simple extension of a result for the
maximization case given by Marx~\cite{Marx07}. We give two algorithms: one for the case when the
optimal value is smaller than $\tau := Mk^2/\delta$ (which returns the
correct solution, but only with probability $2^{-(Mk^2/\delta)}$),
and another for the case of the optimal value being at least $\tau$
(which deterministically returns a $(1+\delta)$-approximation in
linear time). We run both and return the better of the two solutions.
First, the case when the optimal value is at least $\tau$. Let the
\emph{weighted degree} of a node $v$, denoted $w(\partial v)$ be
defined as $\sum_{e: v \in e} w(e)$. Observe that for
any set $S$ with $|S| \leq k$,
\[ 0 \leq \sum_{v \in S} w(\partial v) - w(E_S) \leq M\cdot
\binom{k}{2}. \] Hence, if $S^*$ is the optimal solution and
$w(E_{S^*}) \geq \tau$, then picking the set of $k$ vertices with the
least weighted degrees is a $(1+\delta)$-approximation.
Now for the case when the optimal value is at most $\tau$. In this case,
the optimal set $S^*$ can have at most $\tau$ edges incident to it, since
each edge must have weight at least $1$. Consider the color-coding
scheme where we independently and uniformly color the vertices of $G$
with two colors (red and blue). With probability $2^{-(\tau+k)}$, all the
vertices in $S^*$ are colored red, and all the vertices in $N(S^*)
\setminus S^*$ are colored blue. Consider the ``red components'' in
the graph obtained by deleting the blue vertices. Then $S^*$ is the
union of one or more of these red components. To find it, define the
``size'' of a red component $C$ as the number of vertices in it, and
the ``cost'' as the total weight of edges in $G$ that are incident to
it (i.e., cost $= \sum_{e \in E: e \cap C \neq \emptyset} w(e)$.)
Now we can use dynamic programming to find a collection of red
components with total size equal to $k$ and minimum total cost: this
gives us $S^*$ (or some other solution of equal cost). Indeed, define
the ``type'' of each component to be the tuple $(s,c)$, where $s
\in [1\ldots k]$ is the size (we can drop components of size greater
than $k$) and $c \in [1\ldots \tau]$ is the cost (we can drop all
components of greater cost). Let $T(s,c)$ be the number of copies of
type $(s,c)$, capped at $k$. Assume the types are numbered $\tau_1,
\tau_2, \ldots, \tau_{k\tau}$. Now if $C(i,j)$ is the minimum cost we can
have with components of type $\leq \tau_i = (s,c)$ whose total size is
$j$, then
\[ C(i,j) = \min_{0 \leq \ell \leq T(s,c)} C(i-1, j - \ell s) + \ell
c. \] Finally, we return the collection of components achieving $C(k\tau,k)$. This can
all be done in $O(m + k^2\tau)$ time. \end{proof}
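The DP above can be sketched in a simplified form that treats each red component individually rather than grouping them by type (the type-grouping is only a speed-up; the hypothetical input is a list of precomputed (size, cost) pairs).

```python
def min_cost_selection(components, k):
    """components: list of (size, cost) pairs for red components.
    Returns the minimum total cost of a sub-collection with total
    size exactly k, or None if impossible (the DP C(i, j) from the
    text, without the type-grouping speed-up)."""
    INF = float('inf')
    C = [INF] * (k + 1)
    C[0] = 0
    for s, c in components:
        if s > k:
            continue                     # components larger than k are useless
        for j in range(k, s - 1, -1):    # backwards: each component used once
            if C[j - s] + c < C[j]:
                C[j] = C[j - s] + c
    return C[k] if C[k] < INF else None
```

With components grouped by type and copies capped at $k$, the same recurrence runs over the $k\tau$ types, giving the $O(m + k^2\tau)$ bound stated above.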
Repeating the algorithm $O(2^{\tau + k} \log n) = O(2^{Mk^2/\delta + k} \log n)$ times and outputting the best set found in these repetitions gives an algorithm that finds a $(1+\delta)$-approximation with probability $1 - 1/\mathrm{poly}(n)$.
\subsection{Solving The General Case}
We now reduce the general \textsc{Partial VC}\xspace problem, where we have no bounds on the edge weights (and we have vertex weights), to the special case from the previous section.
The idea is simple: given a graph $G = (V,E)$ with edge and vertex weights, we construct a collection of $|V|$ simple graphs $\{ H_v \}_{v
\in V}$, each defined on the vertex set $V$ plus a couple new nodes, and having $O(|V|+|E|)$ edges, with each edge-weight $w'(e)$ being an integer in $\{1, \ldots, M\}$ and $M = O(k/\delta)^2$, and with no vertex weights. We find a $(1+\delta/2)$-approximate \textsc{Partial VC}\xspace solution on each $H_v$, and then output the set $S$ which has the smallest weight (in $G$) among these. We show how to ensure that $S \subseteq V$ and that it is a $(1+\delta)$-approximation of the optimal solution in $G$.
\begin{proof}[Proof of Theorem~\ref{thm:pvc}]
Let $S^*$ be an optimal solution on $G$. Define the \emph{extended
weighted degree} of a vertex $v$, denoted by $\mathsf{Wdeg}(v)$, to be its
vertex weight plus the weight of all edges adjacent to it. I.e.,
$\mathsf{Wdeg}(v) := w(v) + w(\partial v)$.
Firstly, assume we know a vertex $v^* \in S^*$ with the largest
$\mathsf{Wdeg}(v^*)$; we just enumerate over all vertices to find this vertex.
We now proceed to construct the graph $H_{v^*}$. Let $L =
\mathsf{Wdeg}(v^*)$, and delete all vertices $u$ with $\mathsf{Wdeg}(u) > L$. Note
that (a)~any solution containing $v^*$ has total weight at least $L$,
and (b)~each remaining edge and vertex has weight $\leq L$.
Assume that $G$ is simple, since we can combine parallel edges
together by summing their weights. Create two new vertices $p, q$,
and add an edge of weight $L k^2$ between them; this ensures that
neither of these vertices is ever chosen in any near-optimal
solution.
Let $\delta' > 0$ be a parameter to be fixed later; think of $\delta'
\approx \delta$. For each edge $e = (u,v)$ in the edge set $E$ that
has weight $w(e) < L\delta'/k^2$, remove this edge and add its weight
$w(e)$ to the weight of both its endpoints $u,v$. Finally, when there
are no more edges with $w(e) < L\delta'/k^2$, for each vertex $u$ in
$V$, create a new edge $\{u,p\}$ with weight being equal to the
current vertex weight $w(u)$, and zero out the vertex weight. Let the
new edge set be denoted by $E'$. We claim that for any set $S
\subseteq V$ of size $\le k$,
\[ \left( \sum_{e \in E':\, e \cap S \neq \emptyset} w(e)\right) - \left( \sum_{e \in
E:\, e \cap S \neq \emptyset} w(e) + \sum_{v \in S} w(v) \right) \leq \delta' L. \]
Indeed, the only change comes because of edges with weight $w(e) <
L\delta'/k^2$ and with both endpoints within $S$---these edges
contributed once earlier, but replacing them by the two edges means we
now count them twice. Since there are at most $\binom{k}{2}$ such
edges, they can add at most $\delta' L$.
At this point, all edges in the original edge set $E$ have weights in
$[L \delta'/k^2, Lk^2]$; the only edges potentially having weights $<
L \delta'/k^2$ are those between vertices and the new vertex $p$. For
any such edge with weight $< L \delta'/k$, we delete the edge. This
again changes the optimal solution by at most an additive $L \delta'$
(any solution of size $k$ is incident to at most $k$ deleted edges),
and ensures all edges in the new graph have weights in $[L \delta'/k^2,
Lk^2]$. Note that since the optimal solution has value at least $L$
by our guess, these additive changes of $L \delta'$ to the optimal
solution mean a multiplicative change of only $(1+\delta')$.
Finally, discretize the edge weights by rounding each edge weight to
the closest integer multiple of $L \delta'^2/k^2$. Since each edge
weight $\geq L \delta'/k^2$, each edge weight incurs a further
multiplicative error at most $1+\delta'$. Note that $M =
k^4/\delta'^2$. Now use Lemma~\ref{lem:pvc-simple} to get a
$(1+\delta')$-approximation for \textsc{Partial VC}\xspace on this instance with high
probability. Setting $\delta' = O(\delta)$ ensures that this solution
is within a factor $(1+\delta)$ of that in $G$. \end{proof}
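The sequence of weight transformations in this reduction can be sketched as follows. This is a simplified, hypothetical rendering: it folds light edges into vertex weights, moves vertex weights onto edges to the new vertex $p$, drops light $p$-edges, and rounds to integer multiples of $L\delta'^2/k^2$; the vertex $q$ and the heavy $(p,q)$ edge are omitted.

```python
def normalize_weights(edges, vert_w, L, k, dp):
    """Transform (edges, vert_w) into integer edge weights in units of
    L*dp^2/k^2, following the reduction: fold edges lighter than
    L*dp/k^2 into both endpoints, turn vertex weights into edges to a
    fresh vertex 'p', drop p-edges lighter than L*dp/k, then round."""
    thresh_e = L * dp / k**2
    vw = dict(vert_w)
    kept = {}
    for (u, v), w in edges.items():
        if w < thresh_e:                 # fold light edges into endpoints
            vw[u] += w
            vw[v] += w
        else:
            kept[(u, v)] = w
    for u, w in vw.items():              # vertex weights become p-edges
        if w >= L * dp / k:              # lighter p-edges are deleted
            kept[(u, 'p')] = w
    unit = L * dp**2 / k**2
    return {e: max(1, round(w / unit)) for e, w in kept.items()}
```

After this transformation every weight is an integer in $\{1,\ldots,M\}$ with $M = k^4/\delta'^2$, which is the form required by Lemma~\ref{lem:pvc-simple}.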
\section{Conclusion and Open Problems} \label{sec:conclusion} Putting the sections together, we conclude with a proof of our main theorem.
\begin{proof}[Proof of Theorem~\ref{thm:kcut-main}]
Fix some $\delta \in (0,1/24)$. By Theorem~\ref{thm:pvc}, there is a
$(1+\delta)$-approximation algorithm for $\PartialVC k$ running in
time $O(2^{k^6/\delta^3} (m + k^8/\delta^3)\,n \log n) = 2^{O(k^6)}
n^4$ time. Plugging in $f(k) := 2^{O(k^6)}$ and $g(n) := n^4$ into
Theorem~\ref{thm:laminar}, we get a $(2-\varepsilon_2)$-approximation algorithm
to $\Laminarcut k{\varepsilon_1}$ in time $2^{O(k)}f(k)(n^3 + g(n)) =
2^{O(k^6)} n^4$, for a fixed $\varepsilon_1 \in (0,1/6-4\delta)$. Plugging in
$f(k):=2^{O(k^6)}$ and $g(n):=n^4$ into Theorem~\ref{thm:reduction1}
gives a $(2-\varepsilon_3)$-approximation for $k$\textsc{-Cut}\xspace in time $2^{O(k^2 \log k)} \cdot
f(k) \cdot (n^4 \log^3 n + g(n)) = 2^{O(k^6)} n^4 \log^3 n$.
Finally, for our approximation factor. Theorem~\ref{thm:laminar} sets
$\varepsilon_2:=1/6 - 2\varepsilon_1 - 4\delta$ for any small enough $\delta$. We can
take $\varepsilon_1$ and $\varepsilon_2$ to be equal, so that $\varepsilon_1 = \varepsilon_2 = 1/18 -
\nicefrac43\cdot\delta$. Finally, setting
$\varepsilon_4=\varepsilon_5=\min(\varepsilon_1,\varepsilon_2)/3$ and $\varepsilon_3:=\varepsilon_4^2$ in
Theorem~\ref{thm:reduction1} gives $\varepsilon_3 = 1/54^2 - \delta'$ for some
arbitrarily small $\delta'>0$. In other words, our approximation
factor is $2 - 1/54^2 + \delta'$, or $1.9997$ for an appropriately
small $\delta'$. \end{proof}
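For concreteness, in the limit $\delta, \delta' \to 0$ the chain of constants in the proof above evaluates to
\[
\varepsilon_1=\varepsilon_2=\tfrac1{18},\qquad
\varepsilon_4=\varepsilon_5=\tfrac{\varepsilon_1}{3}=\tfrac1{54},\qquad
\varepsilon_3=\varepsilon_4^2=\tfrac1{2916},\qquad
2-\varepsilon_3\approx 1.99966 < 1.9997\,.
\]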
Our result combines ideas from approximation algorithms and FPT algorithms and shows that considering both settings simultaneously can help bypass lower bounds in each individual setting, namely the $W[1]$-hardness of an exact FPT algorithm and the SSE-hardness of a polynomial-time $(2-\varepsilon)$-approximation. While our improvement is quantitatively modest, we hope it will prove qualitatively significant. Indeed, we hope these and other ideas will help resolve whether a $(1+\varepsilon)$-approximation algorithm exists in FPT time, and to establish matching upper and lower bounds.
\paragraph{Acknowledgments.} We thank Marek Cygan for generously giving his time to many valuable discussions.
\appendix
\section{Pseudocode for \Laminarcut{$k$}{$\varepsilon_1$}} \label{sec:pseudocode-laminar}
\begin{algorithm} \caption{SubtreePartialVC$(G,\mathcal T,A,s,\delta)$} \label{alg:subtreePVC} \begin{algorithmic}
\If {$|A| < s$} \State \Return None \EndIf
\For {$a \in A$} \State $C_a \gets V(a) \cup \displaystyle\bigcup\limits_{a' \in \ensuremath{\mathrm{desc}}(a)}V(a')$ \EndFor \Comment{\textbf{Assert}: $C_a$ are all disjoint}
\
\State $\mathcal C \gets \{ C_a : a \in A\}$
\State $H \gets \text{Contract}(G, \mathcal C)$ \Comment{For each $C_a \in \mathcal C$, contract all vertices in $C_a$ into a single vertex in $H$}
\
\For {$i \in [k-1]$}
\State $P_{i} \gets \text{PartialVC}(H,i)$ \Comment{$P_{i} \in V(H)^i$}
\State $\mathcal S_i \gets \text{Expand}(H,P_i)$ \Comment { \parbox[t]{.5\linewidth}{ Map each $v \in P_i$ to the set of vertices in $V$ which contract to $v$ in $H$, and call the result $\mathcal S_i \in \left(2^V\right)^i$ } } \EndFor \State \Return $\{\mathcal S_{i} : i \in [s]\}$
\end{algorithmic} \end{algorithm}
\begin{algorithm} \caption{SingleBranch$(G,\mathcal T,B,k,\delta)$} \label{alg:singleBranch} \begin{algorithmic} \For{$a \in B$} \State $\text{Record}(\text{SubtreePartialVC}(G, \mathcal T, \ensuremath{\mathrm{children}} \left((\{a\} \cup \ensuremath{\mathrm{anc}}(a)) \cap B \right), k-1, \delta))$ \EndFor \State Return the best recorded solution $\{v_1, \ldots, v_{k-1}\} \in V_{\mathcal T}$. \end{algorithmic} \end{algorithm}
\begin{algorithm} \caption{Laminar$(G=(V,E,w),\mathcal T,k,\varepsilon_1,\delta)$} \label{alg:laminar} \begin{algorithmic}
\State $\mathcal T=(V_{\mathcal T},E_{\mathcal T},w_{\mathcal T}) \gets \text{MincutTree}(G)$. \For{$r \in V_{\mathcal T}$} \State Root $\mathcal T$ at $r$. \State $\text{Record}(\text{LaminarRooted}(G,\mathcal T,r,k,\varepsilon_1,\delta))$ \EndFor \State Return the best recorded $k$-partition.
\end{algorithmic} \end{algorithm}
\begin{algorithm} \caption{LaminarRooted$(G=(V,E,w),\mathcal T,r,k,\delta_1,\delta)$} \label{alg:laminarRooted} \begin{algorithmic} \For {$a \in V(\mathcal T)$}
\State $\{S_{a,i} : i \in [k-1]\} \gets \text{SubtreePartialVC}(G, \mathcal T, \ensuremath{\mathrm{children}}(a), k-1, \delta)$ \Comment{$S_{a,i} \in \left(2^V\right)^i$} \EndFor
\
\State $A \gets \emptyset$ \Comment{$A \subset V(\mathcal T) \times [k]$ is the set of \textit{anchors}} \For {$a \in V(\mathcal T)$ in topological order from leaf to root}
\State $\varepsilon_3 \gets \frac{1-\delta}4-2\varepsilon_1$ \Comment {The optimal value of $\varepsilon_3$}
\State $I_a \gets \{i \in [k-1] : \text{Value}(P_{a,i}) \ge \varepsilon_3(1-\delta)(i-1){\mu} \}$
\If {$I_a \ne \emptyset$ \textbf{and} $\nexists (a',i) \in A : a' \in \ensuremath{\mathrm{desc}}(a)$} \Comment{Only take \textit{minimal} anchors}
\State $A \gets A \cup \{(a, \max I_a)\}$
\EndIf \EndFor
\
\If {$|A| \ge k-1$} \Comment{\textbf{Case (K)}: Knapsack}
\State $A' \gets \text{Knapsack}(A)$ \Comment{The Knapsack algorithm as described in Lemma~\ref{lem:incomparable}}
\State $\mathcal S \gets \displaystyle\bigcup\limits_{(a,i) \in A'} \{S_{a,i}\}$ \Comment{The partition for Case (K), to be computed. \textbf{Assert}: $|\mathcal S| \le k-1$}
\State $\text{Record}(\text{Complete}(G,k,\mathcal S))$ \Else
\State $\mathcal B \gets \text{Branches}(A)$ \Comment{$\mathcal B \subset \left( 2^{V(\mathcal T)} \right)^r$ for some $k-1 \le r \le 2k-3$}
\
\For {$B \in \mathcal B$} \Comment {\textbf{Case (B1)}: Compute branches independently}
\State $\{P_{B,i} : i \in [k-1]\} \gets \text{SingleBranch}(G, \mathcal T, B, k-1, \delta)$ \Comment{$P_{B,i} \in V^i$}
\EndFor
\State $(\mathcal B^*,\mathbf i^*) \gets \operatornamewithlimits{argmin}\limits_{ \substack {\mathcal B' \subset \mathcal B \text{ incomparable},\\ \mathbf i \in [k-1]^{\mathcal B'} : \ \sum_{B} i_B\ =\ k-1} } \ \displaystyle\sum\limits_{B \in \mathcal B'} w(E(P_{B,i_B}))$ \Comment{Computed by brute force}
\State $\mathcal S_1 \gets \bigcup_{B \in \mathcal B^*} \{P_{B, i_B}\}$ \Comment{The partition in Case (B1)}
\State $\text{Record}(\text{Complete}(G,k,\mathcal S_1))$
\
\For {$B \in \mathcal B$} \Comment{\textbf{Case (B2)}: Guess the branches with the anchors}
\State $a_B \gets (a \in B : B \setminus a \subset \ensuremath{\mathrm{desc}}(a))$ \Comment{$a_B$ is the common ancestor of branch $B$}
\EndFor
\For {$\mathcal B' \subset \mathcal B$ s.t.\ $\nexists B_1,B_2\in\mathcal B' : B_1 \subset \ensuremath{\mathrm{desc}}(B_2)$} \Comment{Subsets whose branches are incomparable}
\State $A_{\mathcal B'} \gets \ensuremath{\mathrm{children}}\left(\bigcup_{B \in \mathcal B'} \left( \{a_B\} \cup \ensuremath{\mathrm{anc}}(a_B) \right) \right)$
\State $\mathcal S_{2,\mathcal B'} \gets \text{SubtreePartialVC}(G,\mathcal T,A_{\mathcal B'},k-1,\delta)$ \Comment{The partition for $\mathcal B'$ in Case (B2)}
\State $\text{Record}(\text{Complete}(G,k,\mathcal S_{2,\mathcal B'}))$
\EndFor \EndIf
\
\State Return the best recorded $k$-partition.
\end{algorithmic} \end{algorithm}
\section{Missing Proofs} \label{sec:missing-proofs}
\begin{lemma} \label{lemma:knapsack} Consider the knapsack instance with capacity $k-1$ and items $i \in [k-1]$, where item $i$ has size $s_i \in [2,k-1]$ and value $s_i-1$. There is an algorithm achieving value $\ge (k-1)/4$ for $k \ge 5$, running in $O(k)$ time. \end{lemma}
\begin{proof} Consider the greedy
knapsack solution where we always choose the heaviest item, if still
possible. Let $A \subseteq [k-1]$ be our solution. If our total size
$\sum_{i\in A}s_i$ is at least $k - 1 - \sqrt k$, then our value is
at least $\sum_{i\in A}(s_i-1) \ge \sum_{i\in A}s_i/2 \ge (k-1-\sqrt
k)/2$. Otherwise, since we could not fit the next item of size at
least $\sqrt k$ into our solution, all of our items have size at
least $\sqrt k$. Furthermore, our total solution size is at least
$(k-1)/2$: otherwise the next item, whose size is at most that of the
heaviest (first chosen) item and hence at most our current total size,
would still fit in the remaining capacity. So $\sum_{i \in A}(s_i-1) \ge \sum_{i\in A}(1-1/\sqrt
k)s_i \ge (1-1/\sqrt k)(k-1)/2$. When $k \ge 5$, the
value is $\ge (1-1/\sqrt 5)(k-1)/2 \ge (k-1)/4$. \end{proof}
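As a quick numerical check of the two cases (not needed for the proof), take $k=25$, so the capacity is $k-1=24$ and $\sqrt k = 5$:
\[
\min\Bigl(\frac{k-1-\sqrt k}{2},\ \Bigl(1-\frac{1}{\sqrt k}\Bigr)\frac{k-1}{2}\Bigr)
= \min(9.5,\ 9.6) = 9.5 \ \ge\ \frac{k-1}{4} = 6\,.
\]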
\end{document}
\begin{document}
\title[Bounds on quasimodes for semiclassical Schr\"odinger operators] {Pointwise bounds on quasimodes of semiclassical Schr\"odinger
operators in dimension two } \thanks{HS is supported in part by NSF grant
DMS-1161283 and MZ is supported in part by NSF grant DMS-1201417.} \author{Hart F. Smith and Maciej Zworski} \address{Department of Mathematics, University of Washington, Seattle, WA 98195} \email{hart@math.washington.edu} \address{Department of Mathematics, University of California, Berkeley, CA 94720} \email{zworski@math.berkeley.edu}
\begin{abstract} We prove optimal pointwise bounds on quasimodes of semiclassical Schr\"odinger operators with arbitrary smooth real potentials in dimension two. This end-point estimate was left open in the general study of semiclassical $ L^p $ bounds conducted by Koch-Tataru-Zworski \cite{KTZ}. However, we show that the results of \cite{KTZ} imply the two dimensional end-point estimate by scaling and localization. \end{abstract}
\maketitle
\newsection{Introduction}
Let ${\rm g}_{ij}(x)$ be a positive definite Riemannian metric on ${\mathbb R}^2$ with the corresponding Laplace-Beltrami operator, \[ \Delta_{\rm g} u := \frac 1 { \sqrt {\bar {\rm g}} } \sum_{ i,j} \partial_{x_i }\left( {\rm g}^{ij} \sqrt {\bar {\rm g}} \, \partial_{x_j} u \right) , \ \ ( {\rm g}^{ij} ) := ( {\rm g}_{ij} )^{-1} , \ \ \bar {\rm g} := \det ( {\rm g}_{ij} ) , \] and let $V \in C^\infty ( {\mathbb R}^2 ) $ be real valued. We prove the following general bound which was already established (under an additional necessary condition) in higher dimensions in \cite{KTZ}, but which was open in dimension two:
\noindent {\bf Theorem.} {\em Suppose that $h\le 1$, and that $u\in H^2_c({\mathbb R}^2)$ with $\text{supp}(u) \subseteq K \Subset {\mathbb R}^2$. Suppose that $u$ satisfies \begin{equation}\label{eqn:semiclass}
\bigl\| - h^2\Delta_{\rm g} u+Vu\bigr\|_{L^2}\le h\,,\qquad \|u\|_{L^2}\le 1\,. \end{equation} Then \begin{equation}\label{est:semiclass}
\|u\|_{L^\infty}\le C\,h^{-\frac 12}\,, \end{equation} where the constant $C$ depends only on ${\rm g}$, $V$, and $K$. }
A function $ u $ satisfying \eqref{eqn:semiclass} is sometimes called a weak quasimode. It is a local object in the sense that if $ u $ is a weak quasimode then $ \psi u $, $ \psi \in C_{c}^\infty ( {\mathbb R}^2 ) $, is also one. The localization is also valid in phase space: for instance if $ \chi \in C_c^\infty ( {\mathbb R}^2 \times {\mathbb R}^2 ) $ then $ \chi^w ( x, h D) u $ is also a weak quasimode -- see \cite[Chapter 7]{DiSj} or \cite[Chapter 4]{e-z} for a review of the Weyl quantization $ \chi \mapsto \chi^w $.
If $ \liminf_{|x| \to \infty } V > 0 $, then $ - h^2 \Delta + V $ (defined on $ C_c ^\infty ( {\mathbb R}^2 ) $) is essentially self-adjoint and the spectrum of $ - h^2 \Delta + V $ is discrete in a neighbourhood of $ 0$ -- see for instance \cite[Chapter 4]{DiSj}. In this case weak quasimodes arise as {\em spectral clusters}: \begin{equation}
\label{eq:specc} w = \sum_{ | E_j | \leq C h } c_j w_j , \ \ ( - h^2 \Delta + V ) w_j = E_j w_j , \ \ \langle w_j , w_k \rangle_{L^2} = \delta_{jk} ,
\ \ \sum_{ j} |c_j|^2 \leq 1 .\end{equation} Then $ u = \chi w $, for any $ \chi \in C_c^\infty ( {\mathbb R}^2 ) $, is a weak quasimode in the sense of \eqref{eqn:semiclass}.
Since $ V(x ) \geq c > 0 $ for $ |x| \geq R $, Agmon estimates (see for instance \cite[Chapter 6]{DiSj})
and Sobolev embedding show that $ |u ( x ) | \leq e^{ -c/h } $ for
$ |x| \geq R$. Hence we get global bounds
\[ | w ( x ) | \leq C h^{-\frac12} , \ \ x \in {\mathbb R}^2 .\] It should be stressed however that a weak quasimode is a more general notion than a spectral cluster.
The result also holds when $ {\mathbb R}^2 $ is replaced by a two dimensional manifold and, as in the example above, gives global bounds on spectral clusters \eqref{eq:specc} when the manifold is compact. If $V<0$ this is also a by-product of the bound of Avakumovic-Levitan-H\"ormander on the spectral function -- see \cite{Sogge}, and for a simple proof of a semiclassical generalization see \cite[\S 3]{KTZ} or \cite[\S 7.4]{e-z}.
In higher dimensions the theorem requires an additional phase space localization assumption and is a special case of \cite[Theorem 6]{KTZ}: Suppose $ p ( x, \xi ) $ is a function on $ {\mathbb R}^n \times {\mathbb R}^n $ satisfying $ \partial_x^\alpha \partial_\xi^\beta p(x,\xi) = \mathcal O ( \langle \xi \rangle^m) $
for some $m$. Suppose also that $ K \Subset {\mathbb R}^n \times {\mathbb R}^n $, and that for $ ( x , \xi ) \in K $ \[ p ( x, \xi ) = 0 , \ \ \partial_\xi p ( x , \xi ) = 0 \ \Longrightarrow \ \partial_\xi^2 p ( x , \xi ) \ \text{ is
nondegenerate. } \] Then for $ u( h ) $ such that \begin{equation} \label{eq:uK}
u ( h ) = \chi^w ( x, h D) u + {\mathcal O}_{\mathscr S} ( h^\infty ) \, , \ \ \ \operatorname{supp} \chi \subset K \,, \end{equation} we have \begin{equation}
\label{eq:KT0} \| u ( h ) \|_\infty \leq C h^{ - \frac{n-1} 2 } \left( \| u ( h
) \|_{ L^2} + \frac 1 h \| p^w ( x , h D ) u \|_{ L^2 } \right) , \ \ n \geq 3. \end{equation} When $ n = 2 $ the bound holds with $ (\log ( 1/h ) / h )^{\frac12} $, which is optimal in general if $ \partial^2_\xi p $ is not positive definite -- see \cite[\S 3, \S 6]{KTZ} and \S 3 below for examples.
A small bonus in dimension two is the fact that the frequency localization condition \eqref{eq:uK} needed for \eqref{eq:KT0} is not necessary -- see \eqref{eq:chiD} below.
The proof of the theorem is reduced to a local result presented in Proposition \ref{theorem2}. That result follows in turn from a rescaling argument involving several cases, some of which use the most technically involved result of \cite{KTZ}: if $ p ( x, \xi ) = \sum_{ i, j} {\rm g}^{ij} \xi_i \xi_j + V ( x )$ and for $ ( x, \xi ) \in K $ \[ p ( x, \xi ) = 0, \ \ \partial_\xi p ( x, \xi ) = 0 \ \Longrightarrow \ d_x p ( x, \xi ) = d V ( x ) \neq 0 , \] then for $ u ( h) $ satisfying \eqref{eq:uK}, the bound \eqref{eq:KT0} holds for any $ n \geq 2 $ -- see \cite[Corollary 1]{KTZ} for the original results and Propositions \ref{proposition1} and \ref{proposition2} for rescaled versions used in our proof. We do not know of any simpler way to obtain \eqref{est:semiclass}.
\newsection{Proof of Theorem} By compactness of $K\supseteq\text{support}(u)$, it suffices to prove uniform $L^\infty$ bounds on $u$ over each small ball intersecting $K$, where in our case the diameter of the ball can be taken to depend only on ${\mathcal C}^N$ estimates for ${\rm g}$ and $V$ over a unit sized neighborhood of $K$, for some large $N$. Without loss of generality we consider a ball centered at the origin in ${\mathbb R}^2$. Let $$
B=\{x\in{\mathbb R}^2\,:\,|x|<1\}\,,\qquad B^*=\{x\in{\mathbb R}^2\,:\,|x|<2\}\,. $$ After a linear change of coordinates, we may assume that \begin{equation}\label{cond1} {\rm g}^{ij}(0)=\delta^{ij}\,. \end{equation} Next, by replacing $V(x)$ by $cV(cx)$ and ${\rm g}^{ij}(x)$ by ${\rm g}^{ij}(cx)$, for some constant $c\le 1$ depending on the ${\mathcal C}^2$ norm of ${\rm g}$ and $V$ over a unit neighborhood of $K$, we may assume that \begin{equation}\label{cond2}
\sup_{x\in B^*}|V(x)|+|dV(x)|\le 2\,,\quad \sup_{x\in B^*}|d^2 V(x)|+\sum_{i,j=1}^2|d{\rm g}^{ij}(x)|\le .01\,. \end{equation} This has the effect of multiplying $h$ by a constant in the equation \eqref{eqn:semiclass}, which can be absorbed into the constant $C$ in \eqref{est:semiclass}.
In general, we let \begin{equation}\label{cond3}
C_N=\sup_{x\in B^*}\sup_{|\alpha|\le N}
\Bigl(\,|\partial^\alpha V(x)|+\sum_{i,j=1}^2|\partial^\alpha{\rm g}^{ij}(x)|\,\Bigr)\,, \end{equation} and will deduce the main theorem as a corollary of the following
\begin{proposition}\label{theorem2} Suppose $h\le 1$, that ${\rm g},V$ satisfy \eqref{cond1} and \eqref{cond2}, and that $u$ satisfies \begin{equation}\label{eqn:semiclass'}
\bigl\| -h^2\Delta_{\rm g} u+Vu\bigr\|_{L^2(B^*)}\le h\,,\qquad \|u\|_{L^2(B^*)}\le 1\,. \end{equation} Then \begin{equation}\label{est:semiclass'}
\|u\|_{L^\infty(B)}\le C\,h^{-\frac 12}\,, \end{equation} where the constant $C$ depends only on $C_N$ in \eqref{cond3} for some fixed $N$. \end{proposition}
We start the proof of Proposition \ref{theorem2} by recording the following two propositions, which are consequences of \cite[Corollary 1]{KTZ}.
\begin{proposition}\label{proposition1}
Suppose that \eqref{cond1}-\eqref{cond2} hold, and that $\frac 12 \le |V(x)|\le 2$ for $|x|\le 2$. If the following holds, and $h\le 1$, $$
\bigl\| -h^2\Delta_{\rm g} u+Vu\bigr\|_{L^2(B^*)}\le h\,,\qquad \|u\|_{L^2(B^*)}\le 1\,, $$
then $\|u\|_{L^\infty(B)}\le C\,h^{-\frac 12}$, where $C$ depends only on $C_N$ in \eqref{cond3} for some fixed $N$. \end{proposition}
\begin{proposition}\label{proposition2}
Suppose that \eqref{cond1}-\eqref{cond2} hold, and that $V(0)=0$ and $|dV(0)|=1$. If the following holds, and $h\le 1$, $$
\bigl\| -h^2\Delta_{\rm g} u+Vu\bigr\|_{L^2(B^*)}\le h\,,\qquad \|u\|_{L^2(B^*)}\le 1\,, $$
then $\|u\|_{L^\infty(B)}\le C\,h^{-\frac 12}$, where $C$ depends only on $C_N$ in \eqref{cond3} for some fixed $N$.
\end{proposition}
To see that these follow from \cite[Corollary 1]{KTZ}, we first note that in Proposition \ref{proposition2} above, since $|d^2V|\le .01$, we have $.98\le |dV(x)|\le 1.02$ for
$|x|\le 2$, so since ${\rm g}$ is positive definite the conditions on ${\rm g}$ and $V$ in \cite[Corollary 1]{KTZ} are met. We remark that the condition $V(0)=0$ guarantees that the zero set of $V$ is a nearly-flat curve through the origin, although this is not strictly needed to apply the results of \cite{KTZ}. That the constants in the estimates of \cite{KTZ} depend only on the above bounds on ${\rm g}$ and $V$ follows from their proofs.
Next, the estimates above can be localized, as remarked before, so we may assume that $u$ is compactly supported in $|x|<\frac 32$, after which we may extend ${\rm g}$ and $V$ globally without affecting the application of \cite[Corollary 1]{KTZ}. Indeed, in both propositions above the assumptions imply $\|du\|_{L^2(|x|<{3/2})}\lesssim h^{-1}$, so that one may cut off $u$ by a smooth function which is supported in $|x|<\frac 32$ and equals 1 for $|x|<1$.
Finally, the condition (1.4) of \cite{KTZ} that $u-\chi(hD)u={\mathcal O}_{\mathscr S} ( h^\infty )$ for some
$\chi\in {\mathcal C}_c^\infty$ is not needed for the $L^\infty$ results of that paper to hold in dimension two. To see this, we note that since $|V|<2$ and $|{\rm g}^{ij}(x)-\delta_{ij}|\le .02$
on the ball $|x|<2$, then if $u$ is supported in $|x|<\frac 32$ and $\chi(\xi)=1$ for $|\xi|<4$, then $$
\|(hD)^2(u-\chi(hD)u)\|_{L^2} = {\mathcal O}(h)\,. $$ This follows by the semiclassical pseudodifferential calculus (see \cite[Theorem 4.29]{e-z}), since for $ \chi_0 \in C^\infty_c ( {\mathbb R}^2 ) $ with $\operatorname{supp} \chi_0 \subset B^*$,
$ \chi_0 ( x )(1 - \chi ( \xi ) ) |\xi|^2/( |\xi|^2 + V( x) ) \in S ( {\mathbb R}^2 \times {\mathbb R}^2 ) $.
Hence, writing $ \hat u ( \xi ) $ for the standard Fourier transform of $u$, \begin{equation} \label{eq:chiD}
\begin{split} \|u-\chi(hD)u\|_{L^\infty} & \le \frac 1 { (2 \pi)^2} \int_{
{\mathbb R}^2} | 1 - \chi ( h \xi ) | | \hat u ( \xi ) | \,d \xi \\ & \leq
C \int | h \xi |^2 | 1 - \chi ( h \xi ) | | \hat u ( \xi ) |
( 1 +|h\xi|^2 )^{-1} \, d \xi \\
& \leq C \|(hD)^2(u-\chi(hD)u) \|_{L^2} \left( \int_{ {\mathbb R}^2 } ( 1 + | h \xi |^2 )^{-2} \, d \xi \right)^{\frac12} \\ & \leq C h \, h^{-1} = C \,, \end{split} \end{equation} an even better estimate than required.
We supplement Propositions \ref{proposition1} and \ref{proposition2} with the following two lemmas.
\begin{lemma}\label{lemma3}
Suppose that \eqref{cond1}-\eqref{cond2} hold, and that $|V(x)|\le 99\,h$ for $|x|\le 2h^{\frac 12}$. If the following holds, and $h\le 1$, $$
\bigl\| -h^2\Delta_{\rm g} u+Vu\bigr\|_{L^2(|x|<2h^{1/2})}\le h\,,\qquad
\|u\|_{L^2(|x|<2h^{1/2})}\le 1\,, $$
then $\|u\|_{L^\infty(|x|<h^{1/2})}\le C\,h^{-\frac 12}$, where $C$ depends only on $C_N$ in \eqref{cond3} for some fixed $N$. \end{lemma} \begin{proof}
Consider the function $\tilde u(x)=h^{\frac 12} u(h^{\frac 12}x)$, and ${\tilde {\rm g}}^{ij}(x)={\rm g}^{ij}(h^{\frac 12}x)$. Then, since $\|Vu\|_{L^2(|x|<2h^{1/2})}\le 99h$, we have $$
\|\Delta_{\tilde {\rm g}}\tilde u\|_{L^2(|x|<2)}\le 100\,,\qquad \|\tilde u\|_{L^2(|x|<2)}\le 1\,. $$ Since the spatial dimension equals 2, interior Sobolev estimates yield
$\|\tilde u\|_{L^\infty(|x|<1)}\le C$, where we note that the conditions \eqref{cond1} and \eqref{cond2} hold for $\tilde{\rm g}$ since $h^{\frac 12}\le 1$. \end{proof}
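Explicitly, the scaling in the proof above works as follows. In dimension two the change of variables $x \mapsto h^{\frac12}x$ satisfies $\|f(h^{\frac12}\,\cdot)\|_{L^2}=h^{-\frac12}\|f\|_{L^2}$, so
\[
\|\tilde u\|_{L^2(|x|<2)} = \|u\|_{L^2(|x|<2h^{1/2})}\le 1\,,
\qquad
\Delta_{\tilde{\rm g}}\tilde u(x) = h^{\frac32}\,(\Delta_{\rm g}u)(h^{\frac12}x)\,,
\]
and hence
\[
\|\Delta_{\tilde{\rm g}}\tilde u\|_{L^2(|x|<2)}
= h\,\|\Delta_{\rm g}u\|_{L^2(|x|<2h^{1/2})}
\le h\cdot h^{-2}\bigl(h+99\,h\bigr)=100\,.
\]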
\begin{lemma}\label{lemma4}
Suppose that \eqref{cond1}-\eqref{cond2} hold, and that $\frac 12 c\le |V(x)|\le 2c$ for $|x|\le 2c^{\frac 12}$. If the following holds, and $h\le c\le 1$, $$
\bigl\| -h^2\Delta_{\rm g} u+Vu\bigr\|_{L^2(|x|<2c^{1/2})}\le h\,,\qquad
\|u\|_{L^2(|x|<2c^{1/2})}\le 1\,, $$
then $\|u\|_{L^\infty(|x|<c^{1/2})}\le C\,h^{-\frac 12}$, where $C$ depends only on $C_N$ in \eqref{cond3} for some fixed $N$. \end{lemma} \begin{proof} Let $\tilde u(x)=c^{\frac 12} u(c^{\frac 12}x)$, ${\tilde {\rm g}}^{ij}(x)={\rm g}^{ij}(c^{\frac 12}x)$, and $\tilde V(x)=c^{-1}V(c^{\frac 12}x)$. Note that the assumptions on $V(x)$ in the statement and in \eqref{cond2} imply that
$|dV(x)|\le c^{\frac 12}$ for $|x|<2 c^{1/2}$, so that $\tilde V$ satisfies \eqref{cond2}, and the constants $C_N$ in \eqref{cond3} can only decrease for $c\le 1$. Then with $\tilde h=c^{-1}h\le 1$, $$
\|-\tilde h^2\Delta_{\tilde{\rm g}} \tilde u+\tilde V\tilde u\|_{L^2(|x|<2)}\le \tilde h\,,\qquad \|\tilde u\|_{L^2(|x|<2)}\le 1\,. $$ By Proposition \ref{proposition1}, we have
$\|\tilde u\|_{L^\infty(|x|<1)}\le C{\tilde h}^{-\frac 12}$, giving the desired result. \end{proof}
\begin{proof}[Proof of Proposition \ref{theorem2}]
It suffices to prove that for each $|x_0|<1$ there is some $r \in (0,\frac 12]$ so that
$\|u\|_{L^\infty(|x-x_0|<r)}\le C\, h^{-\frac 12}$, with a global constant $C$. Without loss of generality we take $x_0=0$.
We will split consideration up into four cases, depending on the relative size of $|V(0)|$ and $|dV(0)|$. Since for $h$ bounded away from $0$ the result follows by elliptic estimates, we will assume $h\le\frac 14$ so that $h^{\frac 12}$ below is at most $\frac 12$.
\noindent{\bf Case 1:} $|V(0)|\le h\,,\;|dV(0)|\le 8 h^{\frac 12}$. Since $|d^2V(x)|\le .01$, then Lemma \ref{lemma3} applies to give the result with $r=h^{\frac 12}$.
\noindent{\bf Case 2:} $|V(0)|\le h\,,\;|dV(0)|\ge 8h^{\frac 12}$. Since we may add a constant of size $h$ to $V$ without affecting \eqref{eqn:semiclass'}, we may assume $V(0)=0$. By rotating we may then assume $$V(x)=\beta x_1+f_{ij}(x)x_i x_j\,,$$
where $\beta=|dV(0)|\ge 8h^{\frac 12}$. Dividing $V$ by 4 if necessary we may assume $\beta\le \frac 12$. Let $\tilde u=\beta u(\beta x)$, $\tilde{\rm g}^{ij}(x)={\rm g}^{ij}(\beta x)$, and $$ \tilde V(x)=\beta^{-2}V(\beta x)=x_1+f_{ij}(\beta x)x_i x_j\,. $$ With $\tilde h=\beta^{-2}h<1$ we have $$
\|-\tilde h^2\Delta_{\tilde{\rm g}}\tilde u +\tilde V\tilde u\|_{L^2(|x|<2)}\le \tilde h\,,
\qquad \|\tilde u\|_{L^2(|x|<2)}\le 1\,. $$
Proposition \ref{proposition2} applies, since $\tilde g$ and $\tilde V$ satisfy \eqref{cond1}-\eqref{cond2}, and the constants $C_N$ in \eqref{cond3} for $\tilde g$ and $\tilde V$ are bounded by those for ${\rm g}$ and $V$. Thus $\|\tilde u\|_{L^\infty(|x|<1)}\le C\tilde h^{-\frac 12}$, giving the desired result on $u$ with $r=|dV(0)|$.
\noindent{\bf Case 3:} $|V(0)|\ge h\,,\;|dV(0)|\le 9|V(0)|^{\frac 12}$. In this case, with $c=|V(0)|$, it follows that $\frac 12 c\le |V(x)|\le 2c$ for $|x|\le \frac 1{20} c^{\frac 12}$. We may apply Lemma \ref{lemma4} with $V$ replaced by $\frac 1{1600}V$ to get the desired result with $r=\frac 1{40}|V(0)|^{\frac 12}$.
\noindent{\bf Case 4:} $|V(0)|\ge h\,,\;|dV(0)|\ge 9 |V(0)|^{\frac 12}$. Since $|d^2V(x)|\le .01$, it follows that there is a point $x_0$ with
$|x_0|\le \frac 18 |V(0)|^{\frac 12}$
where $V(x_0)=0$. Since $|dV(x_0)|\ge 8 |V(0)|^{\frac 12}\ge 8h^{\frac 12}$, we may translate and apply Case 2 to get $L^\infty$ bounds on $u$ over a neighborhood of radius $|dV(x_0)|$ about $x_0$. This neighborhood contains the neighborhood about $0$ of radius
$r=.9998\,|dV(0)|$. \end{proof}
\newsection{A counter-example for indefinite ${\rm g}$.} In \cite[Section 5]{KTZ}, it was shown that there exist $u_h$ for which \begin{equation}\label{KTZexample}
\|-h^2(\partial_{x_1}^2-\partial_{x_2}^2)u_h+(x_1^2-x_2^2)u_h\|_{L^2}\le h\,,\qquad
\|u_h\|_{L^2}\le 1\,, \end{equation}
and for which $\|u_h\|_{L^\infty}\approx |\log h|^\frac 12 h^{-\frac 12}\,,$ showing that the assumption of definiteness of ${\rm g}$ cannot be relaxed to non-degeneracy in the main theorem. In \cite[Theorem 6]{KTZ} the positive result was established, showing that this growth of $\|u_h\|_{L^\infty}$ for indefinite, non-degenerate ${\rm g}$ in two dimensions is in fact worst case.
The example of \cite{KTZ} was produced using harmonic oscillator eigenstates. Here we present a different construction of such a $u_h$ with similar $L^\infty$ growth to help illustrate the role played by the degeneracy of ${\rm g}$. The idea is to produce a collection $u_{h,j}$ of functions satisfying \eqref{KTZexample} (or equivalent), for which
$u_{h,j}(0)=h^{-\frac 12}$, and where $j$ runs over $\approx|\log h|$ different values. The examples will have disjoint frequency support, hence are orthogonal in $L^2$. Upon summation over $j$ the $L^2$ norm then grows as $|\log h|^\frac 12$, whereas the $L^\infty$ norm grows as $|\log h|\,h^{-\frac 12}$, yielding an example with worst case growth after normalization.
We start by considering the form $\xi_1\xi_2$ with $V=0$. To assure that $\|h^2\partial_{x_1}\partial_{x_2} u_h\|_{L^2}\le h$, we will take the Fourier transform of $u_h$ to be contained in the set $|\xi_1\xi_2|\le 2h^{-1}$, as well as
$|\xi|\le 2h^{-1}$ to satisfy the frequency localization condition \cite[(1.4)]{KTZ}.
Our example is then based on the fact that one can find $\approx|\log h|$ disjoint rectangles, each of volume $h^{-1}$, within this region, as illustrated in the diagram. Each $u_{h,j}$ will be an appropriately scaled Schwartz function with Fourier transform localized to one of the rectangles.
\centerline{\includegraphics[width=6truein]{sublevel.pdf}}
We now fix $\psi,\chi\in {\mathcal C}_c^\infty({\mathbb R})$, with $0\le \psi(x)\le 2$ and $0\le \chi(x)\le 1$, with $\int\psi=\int\chi=1$, and where $$ \text{supp}(\psi)\subset [1,2]\,,\qquad \text{supp}(\chi)\subset [-1,1]\,. $$ We additionally assume $\chi(0)=1$.
Let $$ u_{h,j}(x)=h^{\frac 12}\int e^{ix_1\xi_1+ix_2\xi_2}\chi(2^jh\,\xi_1)\psi(2^{-j}\xi_2) \,d\xi_1\,d\xi_2 =h^{-\frac 12}\check{\chi}(2^{-j}h^{-1}x_1)\check{\psi}(2^jx_2) \,. $$
By the Plancherel theorem, $\|u_{h,j}\|_{L^2}\approx 1$ and
$\|h^2D_1D_2 u_{h,j}\|_{L^2}\lesssim h\,.$ Furthermore, $u_{h,j}(0)=h^{-\frac 12}$. By disjointness of the Fourier transforms, for $i\ne j$ we have $\langle u_{h,i},u_{h,j}\rangle =0$, and similarly $\langle \partial_{x_1}\partial_{x_2} u_{h,i},\partial_{x_1}\partial_{x_2} u_{h,j}\rangle =0$.
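The normalization at the origin is immediate from $\int\psi=\int\chi=1$ and the substitutions $\eta_1=2^jh\,\xi_1$, $\eta_2=2^{-j}\xi_2$:
\[
u_{h,j}(0)=h^{\frac12}\int \chi(2^jh\,\xi_1)\,d\xi_1 \int\psi(2^{-j}\xi_2)\,d\xi_2
= h^{\frac12}\cdot \frac{1}{2^jh}\cdot 2^{j} = h^{-\frac12}\,.
\]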
We then form $$
u_h(x)=|\log h|^{-\frac 12}\sum_{1\le 2^j\le h^{-1}}u_{h,j}(x)\,. $$
Since there are $\approx|\log h|$ terms in the sum, and the terms are orthogonal in $L^2$, it follows that $$
\|u_h\|_{L^2}\approx 1\,,\qquad \|h^2 \partial_{x_1}\partial_{x_2}u_h\|_{L^2({\mathbb R}^2)}\lesssim h\,,\qquad u_h(0)\approx
|\log h|^{\frac 12}h^{-\frac 12}\,. $$ Although the example is not compactly supported, it is rapidly decreasing (uniformly so for $h<1$), and one may smoothly cutoff to a bounded set without changing the estimates.
We observe that for this example it also holds that $$
\|x_1x_2 u_h\|_{L^2}\lesssim h\,. $$ Hence, $u_h$ is also a counterexample for the form $\xi_1\xi_2\pm x_1x_2$. Rotating by $\pi/4$ gives the form $\xi_1^2-\xi_2^2\pm(x_1^2-x_2^2)$, including in particular the form considered in \cite[Section 6]{KTZ}.
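To see the bound $\|x_1x_2 u_h\|_{L^2}\lesssim h$, note that the $x_1$-scale of $\check\chi(2^{-j}h^{-1}x_1)$ is $2^jh$ and the $x_2$-scale of $\check\psi(2^jx_2)$ is $2^{-j}$, so each term satisfies $\|x_1x_2u_{h,j}\|_{L^2}\lesssim (2^jh)\cdot 2^{-j}=h$. Moreover, since multiplication by $x_m$ acts as $i\partial_{\xi_m}$ on the Fourier side, the functions $x_1x_2u_{h,j}$ retain the disjoint frequency supports and hence are orthogonal, whence
\[
\|x_1x_2u_h\|_{L^2} = |\log h|^{-\frac12}\Bigl(\sum_j \|x_1x_2u_{h,j}\|_{L^2}^2\Bigr)^{\frac12} \lesssim h\,.
\]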
We also observe that $x_1^2u_h$ will be $\mathcal{O}_{L^2}(h)$ if one restricts the sum in $u_h$ to $1\le 2^j\le h^{-\frac 12}$, which still has
$\approx |\log h|$ values of $j$, and thus exhibits the same $L^\infty$ growth as $u_h$. This idea does not however work to yield a counterexample for the form $\xi_1\xi_2+x_1^2+x_2^2$.
\end{document}
\begin{document}
\title[An equivalence of scalar curvatures]{An equivalence of scalar curvatures\\on Hermitian manifolds} \author{Michael G. Dabkowski} \address{Department of Mathematics, University of Michigan, Ann Arbor, MI, 48109} \email{mgdabkow@umich.edu} \author{Michael T. Lock} \address{Department of Mathematics, University of Texas, Austin, TX, 78712} \email{mlock@math.utexas.edu} \thanks{The second author was partially supported by NSF Grant DMS-1148490} \date{May 11, 2015} \dedicatory{This article is dedicated to Konstantin Leyzerovsky on the occasion of his birthday.} \begin{abstract} For a K\"ahler metric, the Riemannian scalar curvature is equal to twice the Chern scalar curvature. The question we address here is whether this equivalence can hold for a non-K\"ahler Hermitian metric. For such metrics, if they exist, the Chern scalar curvature would have the same geometric meaning as the Riemannian scalar curvature. Recently, Liu-Yang showed that if this equivalence of scalar curvatures holds even on average over a compact Hermitian manifold, then the metric must in fact be K\"ahler. However, we prove that a certain class of non-compact complex manifolds does admit Hermitian metrics for which this equivalence holds. Subsequently, the question of to what extent the behavior of said metrics can be dictated is addressed and a classification theorem is proved. \end{abstract} \maketitle \setcounter{tocdepth}{1} \tableofcontents
\section{Introduction} This article examines a relationship between scalar curvatures on Hermitian manifolds in the non-K\"ahler setting. While there has been a plethora of work concerning the existence of certain classes of desirable K\"ahler metrics, there is markedly less understanding of what are ``choice'' non-K\"ahler Hermitian metrics. For a K\"ahler manifold, the Chern connection on the holomorphic tangent bundle can be seen to coincide with the Levi-Civita connection on the underlying Riemannian manifold; however, these can differ wildly for a non-K\"ahler Hermitian manifold. Our work here is inspired by the following broad question -- In the realm of non-K\"ahler Hermitian metrics on a complex manifold, do there exist some which induce a ``desirable relationship'' between the complex geometry and underlying real geometry? Admittedly, this question is not only broad but also incredibly vague as it is not at all clear what ``desirable relationship'' should mean. More specifically, what we wonder here is whether some aspects of the respective geometries can coincide in the non-K\"ahler setting as they would in the K\"ahler setting. One of the simplest such measures is that of scalar curvature. On a Hermitian manifold, it is natural to consider the Riemannian and Chern scalar curvatures, i.e. those of the Levi-Civita connection on the underlying Riemannian manifold and the Chern connection, respectively. If the manifold is K\"ahler, then the Riemannian scalar curvature, $S$, is equal to twice the Chern scalar curvature, $S_C$. This motivates the following definition.
\begin{definition} {\em A Hermitian metric $g$, on a complex manifold $(M,J)$, is said to be a {\em K\"ahler like scalar curvature (Klsc) metric} if $S=2S_C$.} \end{definition} From this, we ask whether a complex manifold can admit a non-K\"ahler Hermitian Klsc metric. Since every Riemannian metric on a Riemann surface is K\"ahler, we are of course asking this for complex dimension $n\geq 2$. Formulas relating the Chern and Riemannian scalar curvatures on compact Hermitian manifolds were found in \cite{Gauduchon_scalar,Liu-Yang_2}, where it is subsequently shown that if $S$ and $2S_C$ are even equal on average over a compact manifold, which is clearly implied by Klsc, then the metric must in fact be K\"ahler. Therefore, the actual question that we ask here is whether a non-compact complex manifold can admit non-K\"ahler Hermitian Klsc metrics.
The Riemannian scalar curvature has a very clear geometric meaning in that it arises in the expansion of the volume of geodesic balls. Specifically, for a Riemannian manifold $(M,g)$, the volume of the geodesic ball of radius $r$ around $p\in M$ has the asymptotic expansion \begin{align} \label{vol_expansion} vol(B(p,r))=\omega_nr^n\Big(1-\frac{S(p)}{6(n+2)}r^2+\mathcal{O}(r^4)\Big), \end{align} where $\omega_n$ denotes the volume of the unit ball in $\mathbb{R}^n$, see \cite{Besse}. Hence, when the scalar curvature is positive or negative at a point, the volume of a small geodesic ball around that point is respectively smaller or larger than it would be for a Euclidean ball of the same radius. If the metric is not K\"ahler, the Chern scalar curvature does not in general have such an obvious geometric interpretation. However, if there were to exist a non-K\"ahler Hermitian Klsc metric, then its Chern scalar curvature would have such a meaning as $S(p)$ could be replaced with $2S_C(p)$ in the expansion \eqref{vol_expansion}.
In a different vein, there has been recent interest in the so-called Chern-Yamabe problem. Given a compact Hermitian manifold, this is the problem of finding a conformal deformation to a metric with constant Chern scalar curvature. In \cite{Angella-Simone-Spotti}, it is proved that such conformal deformations exist in the case that the constant is non-positive. (It is also worthwhile to note that an almost Hermitian version of the Yamabe problem was proved in \cite{delRio_Simanca}.) One may ask when this problem overlaps with the usual Yamabe problem, which motivates the study of Klsc metrics. In the compact case, recall that a Hermitian metric is a Klsc metric if and only if it is K\"ahler \cite{Gauduchon_scalar,Liu-Yang_2}. In this paper, we study the Klsc condition in the non-compact ${\rm{U}}(n)$-invariant case, and both find and classify non-K\"ahler solutions. In turn, this raises a very interesting question regarding the singular Yamabe problem and its Chern scalar curvature analogue; see Question \ref{singular_yamabe} in Section \ref{questions} for more details.
We answer the question of existence of non-K\"ahler Hermitian Klsc metrics in the affirmative, by using conformal techniques, in Section \ref{conformal_method} below. Interestingly, this shows that the obstruction in the compact case is global as opposed to local. The idea here is to begin with a Hermitian metric and search for a non-K\"ahler Hermitian Klsc metric in its conformal class while keeping the complex structure fixed. While this idea would be interesting to pursue in general, in order to make our work here more tractable, we have restricted our attention to the situation where the initial metric is a ${\rm{U}}(n)$-invariant K\"ahler metric. This restriction not only enables us to find the desired metric in each conformal class explicitly, see Theorem \ref{thm_1} below, but also lends itself to posing, and subsequently answering, questions about the extent to which the behavior of non-K\"ahler Klsc metrics can be specified, and it allows us to give a complete description of their moduli space. These results are discussed in Section \ref{specifying}, where a detailed analysis of the regularity of these metrics is also given.
While background material and our conventions are discussed more thoroughly in Section \ref{background}, it is necessary to make a brief remark on notation here. On $\mathbb{C}^n$ we let $J_0$ denote the standard complex structure and define the radial variable $\mathbf{z}=\sum_{i=1}^{n}|z_i|^2$. We will frequently consider Hermitian metrics on annuli of the form \begin{align} \mathcal{A}_{\alpha,\beta}=\{(z_1,\cdots,z_n)\in\mathbb{C}^n:0\leq\alpha<\mathbf{z}<\beta\}. \end{align} Here $\sqrt{\alpha}$ and $\sqrt{\beta}$ are the radii of the annulus $\mathcal{A}_{\alpha,\beta}$. Often we will write only the metric, or refer to it as being non-K\"ahler Klsc, leaving implicit from context the fact that it is Hermitian with respect to $J_0$.
\subsection{The conformal method} \label{conformal_method} In this section, we give an existence result for non-K\"ahler Hermitian Klsc metrics. We begin by considering ${\rm{U}}(n)$-invariant K\"ahler metrics, i.e., those arising from some K\"ahler potential $\phi(\mathbf{z})$, which is just a function of the radial variable $\mathbf{z}$, on the annulus $\mathcal{A}_{\alpha,\beta}\subset (\mathbb{C}^n,J_0)$. Then, we seek a conformal factor, which can be understood in terms of the initial metric, that yields a non-K\"ahler Klsc metric. Here the complex structure is fixed, so the resulting metric would be Hermitian with respect to $J_0$ as well. In the following theorem, we see that such a conformal factor always exists. \begin{theorem} \label{thm_1} Let $g$ be a ${\rm{U}}(n)$-invariant K\"ahler metric, with K\"ahler potential $\phi(\mathbf{z})$, on the annulus $\mathcal{A}_{\alpha,\beta}\subset(\mathbb{C}^n,J_0)$, for $n\geq 2$. Then, the conformal metric $(v^2g, J_0)$, where \begin{align} \begin{split} \label{conformal_factor} \phantom{=}v=\Big(\int\frac{1}{\mathbf{z}^n(\phi')^{n-1}}d\mathbf{z}\Big)^{\frac{1}{2n-1}}, \end{split} \end{align} is a non-K\"ahler Hermitian Klsc metric. Furthermore, up to scale and the constant of integration, this is the unique ${\rm{U}}(n)$-invariant conformal factor with this property. \end{theorem} This is proved in Section \ref{thm_1_proof}.
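For illustration, consider the simplest instance of Theorem \ref{thm_1}: the Euclidean metric on an annulus $\mathcal{A}_{\alpha,\beta}$ with $\alpha>0$, whose K\"ahler potential is $\phi(\mathbf{z})=\mathbf{z}$. Formula \eqref{conformal_factor} then gives
\begin{align*}
v=\Big(\int\frac{d\mathbf{z}}{\mathbf{z}^{n}}\Big)^{\frac{1}{2n-1}}=\Big(c-\frac{\mathbf{z}^{1-n}}{n-1}\Big)^{\frac{1}{2n-1}},
\end{align*}
and choosing the constant of integration $c>\frac{\alpha^{1-n}}{n-1}$ ensures that $v>0$ on all of $\mathcal{A}_{\alpha,\beta}$, so that $v^2g_{\rm Eucl}$ is a non-K\"ahler Hermitian Klsc metric there.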
\begin{remark} {\em Since the initial metric is K\"ahler, the constant of integration, which is suppressed in \eqref{conformal_factor}, can always be chosen so that the conformal factor does not vanish on the given annulus $\mathcal{A}_{\alpha,\beta}$. In general, however, the constant of integration can cause the conformal factor to vanish on the sphere of radius $\gamma$ for at most one isolated value of $\mathbf{z}=\gamma\in (\alpha,\beta)$. (It can cause it to vanish at most once because $\phi$ is a K\"ahler potential on the annulus.) In this case, the annulus splits into two sub-annuli, $\mathcal{A}_{\alpha,\beta}=\mathcal{A}_{\alpha,\gamma}\amalg\mathcal{A}_{\gamma,\beta}$, and a non-K\"ahler Hermitian Klsc metric is obtained on each.} \end{remark}
\begin{remark} {\em Although a compact Hermitian manifold cannot admit a non-K\"ahler Klsc metric, recall \cite{Gauduchon_scalar,Liu-Yang_2} as mentioned above, Theorem \ref{thm_1} can still be applied in the compact setting in order to obtain non-K\"ahler Klsc metrics on an annular decomposition of the underlying manifold minus closed sets of positive codimension. Here, by the compact setting, we mean a compact manifold obtained by completing $\mathbb{C}^n\setminus\{0\}$ by adding in a point or a $\mathbb{CP}^{n-1}$ at $\{0\}$ and $\{\infty\}$ with a K\"ahler metric coming from extending a ${\rm{U}}(n)$-invariant K\"ahler metric on $\mathbb{C}^n$ over these compactifications, e.g., the Fubini-Study metric on $\mathbb{CP}^n$. See Section \ref{Fubini-Study} for an example. } \end{remark}
\subsection{Moduli space and behavior specification} \label{specifying} Here, we determine the moduli space of ${\rm{U}}(n)$-invariant non-K\"ahler Hermitian Klsc metrics on an annulus as well as address the extent to which the behavior of such metrics can be dictated. Since the standard metric on $S^{2n-1}$ decomposes into the Fubini-Study metric on $\mathbb{CP}^{n-1}$ and the metric along the Hopf fiber, which we denote by $g_{\mathbb{CP}^{n-1}}$ and $h$ respectively, any ${\rm{U}}(n)$-invariant metric on the annulus $\mathcal{A}_{\alpha,\beta}\subset\mathbb{C}^n$ can be written as \begin{align} \label{U(n)-invariant_g} g=\mathscr{E}(\mathbf{z})\cdot\Big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\Big)+\mathbf{z} \mathscr{F}(\mathbf{z})\cdot\big(g_{\mathbb{CP}^{n-1}}\big), \end{align} where $\mathscr{E}(\mathbf{z})$ and $\mathscr{F}(\mathbf{z})$ are smooth positive functions of the radial variable $\mathbf{z}$ on $\mathcal{A}_{\alpha,\beta}$. Therefore, we ask whether it is possible to specify such an $\mathscr{E}(\mathbf{z})$ or $\mathscr{F}(\mathbf{z})$ and obtain a non-K\"ahler Hermitian Klsc metric of the form \eqref{U(n)-invariant_g}. We will see, in Theorem \ref{thm_2} below, that if the specified function satisfies a certain broad admissibility condition, then fixing one of either $\mathscr{E}(\mathbf{z})$ or $\mathscr{F}(\mathbf{z})$ along with a constant determines a unique such metric. From this, we will then be able to completely describe the moduli space. For certain analytic reasons, which will be apparent in the proofs, we will choose to fix such an $\mathscr{F}(\mathbf{z})$ and recover an $\mathscr{E}(\mathbf{z})$. We begin with an admissibility requirement for the specified function, which will ensure that we do, in fact, obtain metrics in Theorem~\ref{thm_2}, Theorem \ref{thm_3} and Corollary \ref{cor_thm_3} below.
\begin{definition} {\em Let $\mathscr{F}(\mathbf{z})$ be a smooth positive function on the annulus $\mathcal{A}_{\alpha,\beta}\subset\mathbb{C}^n$, and fix a constant $\mathcal{C}$. The pair $(\mathscr{F},\mathcal{C})$ is said to be {\em admissible} on $\mathcal{A}_{\alpha,\beta}$ if \begin{align*} \mathbf{z}\mathscr{F}\cdot\Big(\frac{1}{2n-1} \int\frac{1}{\mathbf{z}^n \mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{-2} \end{align*} is strictly increasing there.} \end{definition} \begin{remark} {\em Although this admissibility condition may seem unusual, intuitively it can be thought of as a weakening of the condition that $(\mathbf{z} \mathscr{F})'>0$; see Remark~\ref{admissible_remark} for a more in-depth explanation. In turn, we will see that there is a tremendously wide range of functions which satisfy this admissibility condition. } \end{remark}
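To illustrate the definition, take $\mathscr{F}\equiv 1$ and absorb the inner constant of integration into $\mathcal{C}$, so that the quantity above becomes
\begin{align*}
\mathbf{z}\Big(\mathcal{C}-\frac{\mathbf{z}^{1-n}}{(2n-1)(n-1)}\Big)^{-2}.
\end{align*}
A direct computation shows that its derivative has the sign of $\big(\mathcal{C}-\frac{\mathbf{z}^{1-n}}{(2n-1)(n-1)}\big)^{-3}\big(\mathcal{C}-\frac{\mathbf{z}^{1-n}}{n-1}\big)$, so, for instance, the pair $(1,\mathcal{C})$ is admissible on any annulus $\mathcal{A}_{\alpha,\beta}$ with $\alpha>0$ once $\mathcal{C}>\frac{\alpha^{1-n}}{n-1}$.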
Now, we are able to answer the question of specifying the behavior of the non-K\"ahler Klsc metric posed above, as well as completely describe the moduli space.
\begin{theorem} \label{thm_2} To each admissible pair $(\mathscr{F},\mathcal{C})$ on the annulus $\mathcal{A}_{\alpha,\beta}\subset~(\mathbb{C}^n,J_0)$, for $n\geq 2$, there exists a unique associated ${\rm{U}}(n)$-invariant non-K\"ahler Hermitian Klsc metric given by \begin{equation*} g_{(\mathscr{F},\mathcal{C})}=\Bigg((\mathbf{z}\mathscr{F})'-\frac{2}{\mathbf{z}^{n-1}\mathscr{F}^{n-2}}\Big(\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{-1}\Bigg)\Big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\Big)+(\mathbf{z} \mathscr{F})\cdot g_{\mathbb{CP}^{n-1}}. \end{equation*} Moreover, the moduli space of ${\rm{U}}(n)$-invariant non-K\"ahler Hermitian Klsc metrics on $(\mathcal{A}_{\alpha,\beta},J_0)$ is given by all such metrics as $(\mathscr{F},\mathcal{C})$ ranges over all admissible pairs there. \end{theorem}
This is proved in Section \ref{pf_thm_2}. In other words, what we find here is that the moduli space of ${\rm{U}}(n)$-invariant non-K\"ahler Hermitian Klsc metrics on an annulus is equivalent to the space of admissible pairs there. Each $\mathscr{F}(\mathbf{z})\in C^{\infty}(\mathcal{A}_{\alpha,\beta})$ that forms an admissible pair with some constant in fact does so for exactly an open interval of constants (as is easily seen by examining the admissibility condition). To each such $\mathscr{F}(\mathbf{z})$, there is an associated $1$-parameter family of non-K\"ahler Klsc metrics in the moduli space, given by $g_{(\mathscr{F},\mathcal{C})}$ as $\mathcal{C}$ ranges over the interval of constants for which the pair $(\mathscr{F},\mathcal{C})$ is admissible. Therefore, the moduli space can be viewed as a collection of these $1$-parameter families of metrics over the set of functions which can belong to an admissible pair.
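As a concrete, purely illustrative instance of Theorem \ref{thm_2}, take $n=2$ and $\mathscr{F}\equiv 1$, absorbing the inner constant of integration into $\mathcal{C}$. Since $(\mathbf{z}\mathscr{F})'=1$, $\mathbf{z}^{n-1}\mathscr{F}^{n-2}=\mathbf{z}$ and $\int\mathbf{z}^{-2}d\mathbf{z}=-\mathbf{z}^{-1}$, the associated metric is
\begin{align*}
g_{(1,\mathcal{C})}=\frac{\mathcal{C}\mathbf{z}-3}{\mathcal{C}\mathbf{z}-1}\Big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\Big)+\mathbf{z}\, g_{\mathbb{CP}^{1}},
\end{align*}
which is a genuine (positive-definite) metric, for example, on any annulus on which $\mathcal{C}\mathbf{z}>3$.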
Given this classification of the ${\rm{U}}(n)$-invariant non-K\"ahler Hermitian Klsc metrics on an annulus, it is reasonable to ask about their boundary behavior. In particular, do the spaces admitting these non-K\"ahler Klsc metrics necessarily have two open ends? In the following result, we see that if the specified $\mathscr{F}(\mathbf{z})$ is smooth on all of $\mathbb{C}^n$, then we obtain a non-K\"ahler Klsc metric on $\mathbb{C}^n$ with a canonical asymptotically cone-like singularity at the origin in that, near the origin, after a change of variables it looks like a canonical model plus higher order terms (which we abbreviate by {\em h.o.t.}). After this change of variables, however, the coefficients of these higher order terms may not always extend smoothly over the origin. We describe the regularity of such a metric by saying that it is of class $C^{\ell}$, where $C^{\ell}$ is the lowest regularity that any coefficient of the higher order terms has, in the appropriate variable, over the origin.
\begin{theorem} \label{thm_3} Let $(\mathscr{F},\mathcal{C})$ be an admissible pair on the annulus $\mathcal{A}_{0,\infty}=\mathbb{C}^n\setminus\{0\}$, for $n\geq 2$, where $\mathscr{F}(\mathbf{z})$ is a smooth function on $\mathbb{C}^n$ whose first nonzero term in the Taylor expansion about the origin is of order $k$. Consider the uniquely associated ${\rm{U}}(n)$-invariant non-K\"ahler Hermitian Klsc metric $g_{(\mathscr{F},\mathcal{C})}$ of Theorem \ref{thm_2}. \begin{enumerate} \item The metric $g_{(\mathscr{F},\mathcal{C})}$ extends to $(\mathbb{C}^n,J_0)$ with a canonical asymptotically cone-like singularity at the origin in the sense that there exists a change of variables so that near the origin the metric is given by \begin{align*} g_{(\mathscr{F},\mathcal{C})}=dr^2+\Big(\frac{k+1}{2n-1}\Big)r^2\Big(g_{\mathbb{CP}^{n-1}}+(k+1)(2n-1)h\Big)+h.o.t.. \end{align*} \item Furthermore, the class of this extension over the origin is described as follows: \begin{itemize} \item When $k=0$ or $k=1$, this metric is precisely of class \begin{equation*} \begin{cases}
C^{\infty}\phantom{=}&\text{if}\phantom{=}\frac{\partial^{(k+1)(n-1)}}{\partial\mathbf{z}^{(k+1)(n-1)}}\big(\frac{\mathbf{z}^k}{\mathscr{F}}\big)^{n-1}\big|_{\mathbf{z}=0}=0\\ C^{2n-3}\phantom{=}&\text{otherwise} \end{cases}. \end{equation*} \item When $k\geq 2$, this metric is, a priori, of class $C^0$. \end{itemize} \end{enumerate} \end{theorem} This is proved in Section \ref{pf_thm_3}. \begin{remark} \label{regularity_remark}
{\em When $k\geq 2$, the metrics of Theorem \ref{thm_3} can only, a priori, be guaranteed to be of class $C^0$ over the origin, however, in general such a metric may be of a class with much higher regularity. This will depend upon the lowest fractional power of the variable $r$ that appears in a certain expansion of the coefficients of the higher order terms, and upon whether the derivative $\frac{\partial^{(k+1)(n-1)}}{\partial\mathbf{z}^{(k+1)(n-1)}}\big(\frac{\mathbf{z}^k}{\mathscr{F}}\big)^{n-1}\big|_{\mathbf{z}=0}$ vanishes. When $k\geq2$, it is actually possible for such a metric to be of class $C^{\infty}$ as well. See Section~\ref{regularity_proof}, in particular Remark \ref{k>2Cinfty}, for a thorough explanation, and Section \ref{fix_2} for an example. } \end{remark}
The singularity at the origin of the non-K\"ahler Klsc metrics, obtained in Theorem~\ref{thm_3}, depends only upon the complex dimension and the order of the first nonzero term of the Taylor expansion of $\mathscr{F}$ at the origin. In particular, notice that if $\mathscr{F}(0)>0$, so $k=0$, then the singularity is determined by the complex dimension alone. Similarly, given certain relationships between $k$ and $n$, we can obtain non-K\"ahler Klsc metrics which are asymptotic to a cone over a scaled $S^{2n-1}$, and in some cases even nonsingular non-K\"ahler Klsc metrics. Specifically, we have the following corollary. \begin{corollary} \label{cor_thm_3} Let $(\mathscr{F},\mathcal{C})$ be an admissible pair on the annulus $\mathcal{A}_{0,\infty}=\mathbb{C}^n\setminus~\{0\}$, for $n\geq 2$, where $\mathscr{F}(\mathbf{z})$ is a smooth function on $\mathbb{C}^n$ whose first nonzero term in the Taylor expansion about the origin is of order $k$. If $(k+1)(2n-1)=\ell^2$, for some positive integer $\ell$, then the uniquely associated ${\rm{U}}(n)$-invariant non-K\"ahler Hermitian Klsc metric $g_{(\mathscr{F},\mathcal{C})}$ on $(\mathbb{C}^n, J_0)$, of Theorem~\ref{thm_3}, descends via a $\mathbb{Z}/\ell\mathbb{Z}$ quotient to \begin{align*} \overline{g}_{(\mathscr{F},\mathcal{C})}=dr^2+\Big(\frac{k+1}{2n-1}\Big)r^2g_{S^{2n-1}}+h.o.t.. \end{align*} In particular, if $k=2n-2$, then \begin{align*} \overline{g}_{(\mathscr{F},\mathcal{C})}=dr^2+r^2g_{S^{2n-1}}+h.o.t. \end{align*} is a nonsingular Klsc metric. \end{corollary} This is proved in Section \ref{pf_cor_thm_3}. Note that the regularity statements of Theorem \ref{thm_3} hold for these metrics as well and, in particular, when $k=2n-2$ the nonsingular Klsc metric $\overline{g}_{(\mathscr{F},\mathcal{C})}$ is, a priori, of class $C^0$ over the origin. However, once again, these often will have higher regularity, recall Remark \ref{regularity_remark}. See Section \ref{fix_2} for an example.
\begin{remark} {\em Both Theorem \ref{thm_3} and Corollary \ref{cor_thm_3} can be restated to hold for the pair $(\mathscr{F},\mathcal{C})$ being admissible on the annulus $\mathcal{A}_{0,\beta}\subset\mathbb{C}^n$, for any $\beta>0$, as opposed to on $\mathcal{A}_{0,\infty}=\mathbb{C}^n\setminus\{0\}$. In this case, the resulting Klsc metric, $g_{(\mathscr{F},\mathcal{C})}$, will be defined on the open ball of radius $\sqrt{\beta}$ about the origin and the same statements concerning the singularity and regularity hold. The statements of Theorem \ref{thm_3} and Corollary \ref{cor_thm_3} are given for all of $\mathbb{C}^n$ only for simplicity of presentation.} \end{remark}
\subsection{Questions} \label{questions} There are several natural questions motivated by our work here: \begin{enumerate} \item The conformal method used to prove Theorem \ref{thm_1} is simplified by restricting to the ${\rm{U}}(n)$-invariant setting as it reduces the question from a quasi-linear PDE to a non-linear ODE. Does a generalization of Theorem \ref{thm_1} hold when no symmetry is assumed? If so, what can then be said about generalizations of the results of Section \ref{specifying}?
\item Every ${\rm{U}}(n)$-invariant Hermitian metric on an annulus $\mathcal{A}_{\alpha,\beta}\subset(\mathbb{C}^n,J_0)$ is conformal to a ${\rm{U}}(n)$-invariant K\"ahler metric there. However, this is not necessarily true if the ${\rm{U}}(n)$-invariant assumption is dropped. The fact that we begin with a K\"ahler metric and then search for the desired conformal factor simplifies the proof of Theorem \ref{thm_1} since $S=2S_C$ for such an initial metric, which leads to cancellation in \eqref{eq_1}, and also because the Laplacian for a K\"ahler metric here takes the form \eqref{laplacian}, which reduces \eqref{eq_2} to \eqref{eq_3}. If a generalization of Theorem \ref{thm_1} exists when no symmetry is assumed, could it be further generalized to the situation where the initial metric is a non-conformally-K\"ahler Hermitian metric?
\item As mentioned above, the Klsc condition gives a clear geometric meaning to the Chern scalar curvature, as it can replace the Riemannian scalar curvature in the expansion of the volume of geodesic balls \eqref{vol_expansion}. Does the Klsc condition have other geometric implications for, or force restrictions on the scalar curvature of, non-compact non-K\"ahler Hermitian manifolds?
\item \label{singular_yamabe} The singular Yamabe problem is stated as follows: Given a compact Riemannian manifold $(M,g)$ and a closed subset $K\subset M$, find a complete metric $\tilde{g}$ on $M\setminus K$ which has constant scalar curvature and is conformal to $g$. While this problem is not solved completely, there has been considerable progress, for instance, see \cite{LoewnerNirenberg,Schoen_singularYamabe,SchoenYau_singularYamabe,AvilesMcOwen,Mazzeo_singularYamabe,MazzeoPacard_sYamabe1,MazzeoPacard_sYamabe2,Finn1,Finn2}. The Chern-Yamabe problem generalizes naturally to the singular case as well. To the best of the authors' knowledge, this has not been studied. From our work here, we know that there exist non-K\"ahler Klsc metrics in the non-compact setting. Thus, we ask whether there are non-K\"ahler Hermitian Klsc metrics which simultaneously solve both the singular Yamabe and singular Chern-Yamabe problems. \end{enumerate}
\section{Background and conventions} \label{background} In this section we provide a brief description of the complex geometry necessary for this work as well as fix our conventions and notation. In particular, we discuss Hermitian metrics, the Chern connection and the curvature of the Chern connection. For a more thorough exposition of this material see \cite{Huybrechts}.
Let $(M,J)$ be a complex $n$-dimensional manifold. The complexified tangent bundle decomposes as \begin{align} TM\otimes \mathbb{C}=T^{1,0}M\oplus T^{0,1}M, \end{align} into the $\pm\sqrt{-1}$ eigenspaces of $J$ respectively. In terms of a local holomorphic coordinate system $\{z_1,\cdots,z_n\}$, where $z_i=x_i+\sqrt{-1}y_i$, these eigenspaces are \begin{align} \begin{split} T^{1,0}M=\text{span}\Big\{\frac{\partial}{\partial z_1}, \cdots, \frac{\partial}{\partial z_n}\Big\}\phantom{=}\text{and}\phantom{=}T^{0,1}M=\text{span}\Big\{\frac{\partial}{\partial \bar{z}_1}, \cdots, \frac{\partial}{\partial \bar{z}_n}\Big\}, \end{split} \end{align} where \begin{align} \frac{\partial}{\partial z_i}=\frac{1}{2}\Big(\frac{\partial}{\partial x_i}-\sqrt{-1}\frac{\partial}{\partial y_i}\Big)\phantom{=}\text{and}\phantom{=}\frac{\partial}{\partial \bar{z}_i}=\frac{1}{2}\Big(\frac{\partial}{\partial x_i}+\sqrt{-1}\frac{\partial}{\partial y_i}\Big). \end{align} The complexified cotangent bundle decomposes into the $\pm\sqrt{-1}$ eigenspaces of $J$ analogously as \begin{align} T^*M\otimes \mathbb{C}=\Lambda^{1,0}M\oplus \Lambda^{0,1}M, \end{align} which are locally given by \begin{align} \begin{split} \Lambda^{1,0}M=\text{span}\big\{dz_1, \cdots, dz_n\big\}\phantom{=}\text{and}\phantom{=} \Lambda^{0,1}M=\text{span}\big\{d\bar{z}_1, \cdots, d\bar{z}_n\big\}, \end{split} \end{align} where \begin{align} dz_i=dx_i+\sqrt{-1}dy_i\phantom{=}\text{and}\phantom{=}d\bar{z}_i=dx_i-\sqrt{-1}dy_i. \end{align}
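As a quick check of this decomposition, $J$ acts on the coordinate vector fields by $J\frac{\partial}{\partial x_i}=\frac{\partial}{\partial y_i}$ and $J\frac{\partial}{\partial y_i}=-\frac{\partial}{\partial x_i}$, so that
\begin{align*}
J\frac{\partial}{\partial z_i}=\frac{1}{2}\Big(\frac{\partial}{\partial y_i}+\sqrt{-1}\frac{\partial}{\partial x_i}\Big)=\sqrt{-1}\,\frac{\partial}{\partial z_i}\phantom{=}\text{and similarly}\phantom{=}J\frac{\partial}{\partial \bar{z}_i}=-\sqrt{-1}\,\frac{\partial}{\partial \bar{z}_i},
\end{align*}
confirming that $T^{1,0}M$ and $T^{0,1}M$ are indeed the $\pm\sqrt{-1}$ eigenspaces.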
On a complex manifold $(M,J)$, a Riemannian metric $g$ is Hermitian if \begin{align} g(JX,JY)=g(X,Y) \end{align} for all vector fields $X$ and $Y$. Given a Hermitian metric $g$, the associated $2$-form $\omega\in\Lambda^{1,1}$ is defined by $\omega(\cdot,\cdot)=g(J\cdot,\cdot)$ and can be written as \begin{align} \omega=\sqrt{-1}\sum_{i,j}g_{i\bar{j}}dz_i\wedge d\bar{z}_j, \end{align} where $g_{i\bar{j}}=g(\frac{\partial}{\partial z_i},\frac{\partial}{\partial \bar{z}_j})$. We will use a Hermitian metric and its associated $2$-form interchangeably. A Hermitian metric $g$ is K\"ahler if the associated $2$-form $\omega$ is closed, and in this case $\omega$ is called the K\"ahler form.
The Chern connection, on the Hermitian holomorphic tangent bundle $(T^{1,0}M,g)$, is the unique connection that is compatible with both the Hermitian metric and the complex structure. This is in contrast to the Levi-Civita connection, which is the unique symmetric (torsion-free) connection that is compatible with the Riemannian metric. There is a natural isomorphism between $T^{1,0}M$ and $TM$ under which the Chern connection (or, more generally, any Hermitian connection) corresponds to a connection on the underlying Riemannian manifold. However, the Chern connection may have a nonvanishing torsion tensor, and the induced connection is not necessarily the Levi-Civita connection. In fact, under this isomorphism, the Chern and Levi-Civita connections coincide if and only if the metric is K\"ahler. Note that in the non-K\"ahler setting, although these connections differ, one can still look for relationships between their respective induced geometries, which is precisely the focus of this paper.
We conclude this section with a short discussion on the curvature tensor with respect to the Chern connection; for explicit curvature formulas we refer the reader to \cite{Liu-Yang_1}. Write the components of the full curvature tensor with respect to the Chern connection as $\Theta_{i\bar{j}k\bar{\ell}}$. Taking the trace on the $k$ and $\bar{\ell}$ indices yields the components of the Chern-Ricci curvature \begin{align} \Theta_{i\bar{j}}=g^{k\bar{\ell}}\Theta_{i\bar{j}k\bar{\ell}}, \end{align} which has the associated Chern-Ricci form \begin{align} \rho=\sqrt{-1}\Theta_{i\bar{j}}dz_i\wedge d\bar{z}_j=-\sqrt{-1}\partial\bar{\partial}\log\det(g_{i\bar{j}}). \end{align} Then, by taking the trace of the Chern-Ricci curvature, we obtain the Chern scalar curvature \begin{align} S_C=g^{i\bar{j}}\Theta_{i\bar{j}}=g^{i\bar{j}}g^{k\bar{\ell}}\Theta_{i\bar{j}k\bar{\ell}}. \end{align} Lastly, we note that on a complex $n$-dimensional Hermitian manifold $(M,J,g)$, the Chern scalar curvature, Chern-Ricci form and the associated $2$-form satisfy the relationship \begin{align} \label{S_C_relationship_1} \frac{S_C}{n}\omega^n=\rho\wedge\omega^{n-1}. \end{align} This will be useful in finding how the Chern scalar curvature transforms under a conformal change in the metric in Section~\ref{Chern_scalar_change}.
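For the reader's convenience, we recall why \eqref{S_C_relationship_1} holds. At a point, choose coordinates in which $g_{i\bar{j}}=\delta_{ij}$; then for each fixed $j$ one has $\big(\sqrt{-1}\,dz_j\wedge d\bar{z}_j\big)\wedge\omega^{n-1}=\frac{1}{n}\omega^n$, while the off-diagonal terms of $\rho$ wedge to zero against $\omega^{n-1}$, and hence
\begin{align*}
\rho\wedge\omega^{n-1}=\Big(\frac{1}{n}\sum_{i}\Theta_{i\bar{i}}\Big)\omega^n=\frac{S_C}{n}\omega^n.
\end{align*}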
\section{The conformal method} \label{thm_1_proof} In this section we prove Theorem \ref{thm_1}, the idea of which is as follows. Starting with a K\"ahler metric, we examine how both the Riemannian and Chern scalar curvatures change under a conformal transformation. The transformation of the Riemannian scalar curvature under a conformal change in the metric is well known. Let $(M,g)$ be a $2n$-dimensional Riemannian manifold, where $n\geq 2$, with Riemannian scalar curvature $S$. Then the conformal metric $\tilde{g}=u^{\frac{2}{n-1}}g$ has Riemannian scalar curvature \begin{align} \label{riemannian_change} \widetilde{S}=-2\frac{2n-1}{n-1}u^{-\frac{n+1}{n-1}}\Delta u+Su^{-\frac{2}{n-1}}. \end{align} While a general formula for the transformation of the Chern scalar curvature under a conformal change is known, our interest is in how it transforms specifically in the ${\rm{U}}(n)$-invariant setting when the initial metric is K\"ahler. We prove such a formula below in Section \ref{Chern_scalar_change}. This simplification is essential to the rest of the proof. Then, in Section \ref{finding_u}, we equate the formulas for a conformal change in the respective scalar curvatures, with the appropriate scale factor, to obtain a quasi-linear PDE which reduces to a non-linear second order ODE from the ${\rm{U}}(n)$-invariance assumption. Finally, we solve this to find an explicit solution for a conformal factor, which is unique up to scale and constant of integration, that results in a non-K\"ahler Hermitian Klsc metric.
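It is worth noting that \eqref{riemannian_change} is just the familiar conformal transformation law on an $m$-dimensional Riemannian manifold, namely $\widetilde{S}=u^{-\frac{m+2}{m-2}}\big(-4\frac{m-1}{m-2}\Delta u+Su\big)$ for $\widetilde{g}=u^{\frac{4}{m-2}}g$, written in real dimension $m=2n$; indeed,
\begin{align*}
\frac{4}{m-2}=\frac{2}{n-1},\phantom{==}4\,\frac{m-1}{m-2}=2\,\frac{2n-1}{n-1},\phantom{==}\frac{m+2}{m-2}=\frac{n+1}{n-1},
\end{align*}
which recovers \eqref{riemannian_change} term by term.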
Given the complexity of the equations, it is somewhat surprising that we were able to find an explicit solution. Having such an explicit solution is very useful as it allows for the results of Section \ref{specifying}. Also, it is important to remark here that the fact that the initial metric is K\"ahler will be of considerable aid in the analysis.
\subsection{The conformal transformation of Chern scalar curvature} \label{Chern_scalar_change} Let $(\omega,J_0)$ be a ${\rm{U}}(n)$-invariant K\"ahler metric on a spherical shell about the origin in $\mathbb{C}^n$, where \begin{align} \omega=\sqrt{-1}\partial\bar{\partial}\phi(\mathbf{z}) \end{align} for some K\"ahler potential $\phi(\mathbf{z})$. The volume form is given by \begin{align} \frac{\omega^n}{n!}=Vdz_1\wedge d\bar{z}_1\wedge\cdots\wedge dz_n\wedge d\bar{z}_n, \end{align} and the Chern-Ricci form by \begin{align} \rho=-\sqrt{-1}\partial\bar{\partial}\log(V). \end{align}
Consider the conformal metric $(\widetilde{\omega},J_0)$, where $\widetilde{\omega}=u^{\frac{2}{n-1}}\omega$. Recalling \eqref{S_C_relationship_1}, since this conformal metric is Hermitian with respect to the complex structure $J_0$, its Chern scalar curvature, $\widetilde{S_C}$, and Chern-Ricci form, $\widetilde{\rho}$, satisfy \begin{align} \label{S_C_relationship_2} \Bigg(\frac{\widetilde{S_C}}{n}\Bigg)\widetilde{\omega}^n=\widetilde{\rho}\wedge\widetilde{\omega}^{n-1}. \end{align} We will use \eqref{S_C_relationship_2} to extract the transformation of the Chern scalar curvature under a conformal change.
First, observe that since \begin{align} \begin{split} \label{conformal_omega} \widetilde{\omega}^{n-1}=&u^2\omega^{n-1}\\ \text{and}\phantom{====}&\\ \widetilde{\omega}^n=&u^{\frac{2n}{n-1}}\omega^n, \end{split} \end{align} the Chern-Ricci form transforms as \begin{align} \label{conformal_rho} \widetilde{\rho}=-\sqrt{-1}\partial\bar{\partial}\log\Big(u^{\frac{2n}{n-1}}V\Big)=-\sqrt{-1}\Big(\frac{2n}{n-1}\Big)\partial\bar{\partial}\log(u)+\rho. \end{align} Combining \eqref{S_C_relationship_2}, \eqref{conformal_omega} and \eqref{conformal_rho} we find that \begin{align} \Bigg(\frac{\widetilde{S_C}}{n}\Bigg)\widetilde{\omega}^n=u^2\Bigg(-\sqrt{-1}\Big(\frac{2n}{n-1}\Big)\partial\bar{\partial}\log(u)\wedge\omega^{n-1}+\frac{S_C}{n}\omega^n\Bigg). \end{align}
Now, because of the ${\rm{U}}(n)$-invariance, without loss of generality we can restrict our attention to the $z_1$-axis where the K\"ahler form is given by \begin{align} \label{omega_z1} \omega=\sqrt{-1}\partial\bar{\partial}\phi=\sqrt{-1}\Bigg((\mathbf{z}\phi')'dz_1\wedge d\bar{z}_1+\sum_{i=2}^n\phi' dz_i\wedge d\bar{z}_i\Bigg) \end{align} and, following from \eqref{conformal_rho}, the Chern-Ricci form of the conformal metric by \begin{align} \widetilde{\rho}=-\sqrt{-1}\frac{2n}{n-1}\Bigg(\Big(\mathbf{z}\frac{u'}{u}\Big)'dz_1\wedge d\bar{z}_1+\sum_{i=2}^n\frac{u'}{u} dz_i\wedge d\bar{z}_i\Bigg)+\rho. \end{align} On the $z_1$-axis, from \eqref{omega_z1}, observe that \begin{align} \begin{split} \label{omega_n_n-1} \omega^n=&\Big((-1)^{\frac{n}{2}} n! (\phi')^{n-1}(\mathbf{z}\phi')'\Big)dz_1\wedge d\bar{z}_1\wedge \cdots \wedge dz_n\wedge d\bar{z}_n\\ \text{and}\phantom{==}&\\ \omega^{n-1}=&(-1)^{\frac{n-1}{2}}(n-1)!\Bigg((\phi')^{n-1}dz_2\wedge d\bar{z}_2\wedge \cdots \wedge dz_n\wedge d\bar{z}_n\\ &\phantom{========}+(\phi')^{n-2}(\mathbf{z}\phi')'\sum _{i=2}^ndz_1\wedge d\bar{z}_1\wedge \cdots \wedge\widehat{dz_i\wedge d\bar{z}_i}\cdots\wedge dz_n\wedge d\bar{z}_n\Bigg), \end{split} \end{align} where $\widehat{dz_i\wedge d\bar{z}_i}$ denotes the omission of that wedge product.
Therefore, from \eqref{conformal_rho} and \eqref{omega_n_n-1}, we find that \begin{align} \begin{split} \label{chern_change_wedge} \frac{\widetilde{S_C}}{n}\widetilde{\omega}^n =(-1)^{\frac{n}{2}}n!u^2(\phi')^{n-2}\Bigg[&-\Big(\frac{2}{n-1}\Big)\Bigg(\Big(\mathbf{z}\frac{u'}{u}\Big)'\phi'+(n-1)\Big(\frac{u'}{u}\Big)(\mathbf{z}\phi')'\Bigg)\\ &+\frac{S_C}{n}\phi'(\mathbf{z}\phi')'\Bigg]dz_1 \wedge d\bar{z}_1\wedge\cdots \wedge dz_n\wedge d\bar{z}_n. \end{split} \end{align} Finally, from \eqref{conformal_omega}, \eqref{omega_n_n-1} and \eqref{chern_change_wedge}, we find that, under such a conformal change, the Chern scalar curvature transforms as \begin{align} \label{chern_change} \widetilde{S_C}=-\frac{2n}{n-1}\Big(\frac{u^{-\frac{2}{n-1}}}{\phi'(\mathbf{z}\phi')'}\Big)\Bigg(\Big(\mathbf{z}\frac{u'}{u}\Big)'\phi'+(n-1)\Big(\frac{u'}{u}\Big)(\mathbf{z}\phi')'\Bigg)+u^{-\frac{2}{n-1}}S_C. \end{align}
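As a quick consistency check of \eqref{chern_change}, if $u$ is a positive constant then all of the derivative terms vanish and the formula reduces to
\begin{align*}
\widetilde{S_C}=u^{-\frac{2}{n-1}}S_C,
\end{align*}
which is precisely how the Chern scalar curvature must scale under the homothety $\widetilde{\omega}=u^{\frac{2}{n-1}}\omega$.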
\subsection{The Klsc conformal factor} \label{finding_u} Let $(\omega,J_0)$ be a ${\rm{U}}(n)$-invariant K\"ahler metric on a spherical shell about the origin in $\mathbb{C}^n$ with K\"ahler potential $\phi(\mathbf{z})$. Since $(\omega,J_0)$ is K\"ahler, the Riemannian and Chern scalar curvatures here satisfy \begin{align} \label{initial_equivalence} S=2S_C. \end{align} We would like to find a conformal factor $u^{\frac{2}{n-1}}$ so that \begin{align} \widetilde{S}=2\widetilde{S_C}. \end{align} To this end, we divide \eqref{riemannian_change} by $2$ and set it equal to \eqref{chern_change} to obtain the equation \begin{align} \begin{split} \label{eq_1} -\frac{2n-1}{n-1}u^{-\frac{n+1}{n-1}}\Delta u+u^{-\frac{2}{n-1}}\Big(\frac{S}{2}\Big)=&-\frac{(2n)u^{-\frac{2}{n-1}}}{(n-1)\phi'(\mathbf{z}\phi')'}\Bigg(\Big(\mathbf{z}\frac{u'}{u}\Big)'\phi'+(n-1)\Big(\frac{u'}{u}\Big)(\mathbf{z}\phi')'\Bigg)\\ &+u^{-\frac{2}{n-1}}S_C. \end{split} \end{align} It is important to note that, since the initial metric is K\"ahler, from \eqref{initial_equivalence}, the terms $u^{-\frac{2}{n-1}}(S/2)$ and $u^{-\frac{2}{n-1}}S_C$, on the left and right hand sides of \eqref{eq_1} respectively, cancel. This is the first reduction made possible by the fact that the initial metric is K\"ahler.
Equation \eqref{eq_1} then further reduces to the following non-linear second order ODE: \begin{align} \label{eq_2} \Big(\frac{\phi'(\mathbf{z}\phi')'}{u}\Big)\Delta u=\Big(\frac{2n}{2n-1}\Big)\Bigg(\Big(\mathbf{z}\frac{u'}{u}\Big)'\phi'+(n-1)\Big(\frac{u'}{u}\Big)(\mathbf{z}\phi')'\Bigg). \end{align} We exploit once again the fact that the initial metric is K\"ahler to simplify \eqref{eq_2}. For a general K\"ahler metric $(g,J)$, the Riemannian Laplacian is given by $\Delta(\cdot)=2g^{i\bar{j}}\frac{\partial^2}{\partial z_i\partial \bar{z}_j}(\cdot)$. Therefore, here \begin{align} \label{laplacian} \Delta u=2\Bigg(\frac{(\mathbf{z} u')'}{(\mathbf{z}\phi')'}+(n-1)\frac{u'}{\phi'}\Bigg), \end{align} hence \eqref{eq_2} reduces to \begin{align} \label{eq_3} \frac{1}{u}\Big((\mathbf{z} u')'\phi'+(n-1)u'(\mathbf{z} \phi')'\Big)=\frac{n}{2n-1}\Bigg(\Big(\mathbf{z}\frac{u'}{u}\Big)'\phi'+(n-1)\Big(\frac{u'}{u}\Big)(\mathbf{z}\phi')'\Bigg). \end{align} Next, multiply \eqref{eq_3} through by $\mathbf{z}$, and change variables by letting \begin{align} \begin{split} y=&\frac{u'}{u},\phantom{=}\text{so }y'+y^2=\frac{u''}{u},\\ \text{and}\phantom{==}&\\ f=&\mathbf{z}\phi'. \end{split} \end{align} This reduces the equation to \begin{align} \label{eq_4} fy+\mathbf{z} f(y'+y^2)+(n-1)\mathbf{z} f'y =\frac{n}{2n-1}\Big(fy+\mathbf{z} fy'+(n-1) \mathbf{z} f'y\Big). \end{align} Then, isolate the $y^2$ term to obtain the equation \begin{align} \label{eq_5} \mathbf{z} f y^2=-\Big(\frac{n-1}{2n-1}\Big)\Big(f+(n-1)\mathbf{z} f'\Big)y-\Big(\frac{n-1}{2n-1}\Big)\mathbf{z} fy'. \end{align} From multiplying \eqref{eq_5} through by $-\frac{2n-1}{n-1}\big(\mathbf{z} f y^2\big)^{-1}$, we obtain \begin{align} \label{eq_6} -\frac{2n-1}{n-1}=\Big(\frac{f+(n-1)\mathbf{z} f'}{\mathbf{z} f}\Big)\frac{1}{y}+\frac{y'}{y^2}. \end{align} Change variables once again by setting \begin{align} w=\frac{1}{y},\phantom{=}\text{so }w'=-\frac{y'}{y^2}. 
\end{align} Then, \eqref{eq_6} becomes \begin{align} \label{eq_7} w'-\Big(\frac{f+(n-1)\mathbf{z} f'}{\mathbf{z} f}\Big)w=\frac{2n-1}{n-1}. \end{align} By using the integrating factor \begin{align} \exp\Big(-\int \frac{f+(n-1)\mathbf{z} f'}{\mathbf{z} f} d\mathbf{z}\Big)=\frac{1}{\mathbf{z} f^{n-1}}, \end{align} equation \eqref{eq_7} can be solved to find \begin{align} \label{eq_8} w=\frac{2n-1}{n-1}\big(\mathbf{z} f^{n-1}\big)\int\frac{1}{\mathbf{z} f^{n-1}}d\mathbf{z}. \end{align} Here, we suppress the constant of integration. Now, since $w=\frac{u}{u'}$, by inverting both sides of \eqref{eq_8} and integrating, we find that \begin{align} \label{eq_9} \log(u)=\int\frac{1}{w}d\mathbf{z}=\frac{n-1}{2n-1}\int\frac{1}{\mathbf{z} f^{n-1}\big(\int\frac{1}{\mathbf{z} f^{n-1}}d\mathbf{z}\big)}d\mathbf{z}. \end{align} To integrate the right hand side of \eqref{eq_9}, we make the substitution \begin{align} h=\int\frac{1}{\mathbf{z} f^{n-1}}d\mathbf{z},\phantom{=}\text{so}\phantom{=}dh=\frac{1}{\mathbf{z} f^{n-1}}d\mathbf{z}, \end{align} and see that \begin{align} \label{eq_10}
\log(u)=\frac{n-1}{2n-1}\int\frac{1}{h}dh=\frac{n-1}{2n-1}\log\Big|\int\frac{1}{\mathbf{z} f^{n-1}}d\mathbf{z}\Big|+C, \end{align} for some constant $C$. Finally, by exponentiating, we see that \begin{align}
u=C\Big|\int\frac{1}{\mathbf{z} f^{n-1}}d\mathbf{z}\Big|^{\frac{n-1}{2n-1}}=C\Big|\int\frac{1}{\mathbf{z}^n(\phi')^{n-1}}d\mathbf{z}\Big|^{\frac{n-1}{2n-1}}. \end{align} Therefore, up to scale, the desired Klsc conformal factor is \begin{align} v^2=u^\frac{2}{n-1}=\Big(\int\frac{1}{\mathbf{z}^n(\phi')^{n-1}}d\mathbf{z}\Big)^{\frac{2}{2n-1}}. \end{align}
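As an independent sanity check, one can confirm symbolically that $u=\big(\int\frac{1}{\mathbf{z} f^{n-1}}d\mathbf{z}\big)^{\frac{n-1}{2n-1}}$ satisfies \eqref{eq_6}. The SymPy sketch below uses the illustrative choices $n=2$ and $\phi'=\mathbf{z}^{-3}$ (hypothetical data, not from the paper; the reduction is purely algebraic, so no positivity is needed here):

```python
import sympy as sp

z = sp.symbols('z', positive=True)
n = 2                      # complex dimension (illustrative choice)
phi_p = z**(-3)            # hypothetical potential derivative (algebraic check only)
f = z * phi_p              # f = z * phi'

# u = (int 1/(z f^(n-1)) dz)^((n-1)/(2n-1)), suppressing the integration constant
u = sp.integrate(1/(z * f**(n - 1)), z) ** sp.Rational(n - 1, 2*n - 1)
y = sp.diff(u, z) / u

# eq_6:  -(2n-1)/(n-1) = ((f + (n-1) z f')/(z f)) * (1/y) + y'/y^2
rhs = (f + (n - 1)*z*sp.diff(f, z)) / (z*f) / y + sp.diff(y, z) / y**2
assert sp.simplify(rhs + sp.Rational(2*n - 1, n - 1)) == 0
```

The same check goes through for other choices of $n$ and $\phi'$ for which the antiderivative stays positive.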
\begin{remark} {\em The fact that the conformal metric $(v^2\omega,J_0)$ is no longer K\"ahler follows immediately from examining its associated $2$-form restricted to the $z_1$-axis: if it were K\"ahler, there would exist some function $\psi(\mathbf{z})$ with $(\mathbf{z}\psi')'=v^2(\mathbf{z}\phi')'$ and $\psi'=v^2\phi'$, but no such $\psi(\mathbf{z})$ can exist since $v$ is non-constant. Therefore, $(v^2\omega,J_0)$ is a non-K\"ahler Hermitian Klsc metric. } \end{remark}
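In a toy instance, the obstruction in the remark can be checked symbolically: the failure of a potential $\psi(\mathbf{z})$ to exist is measured by $(\mathbf{z} v^2\phi')'-v^2(\mathbf{z}\phi')'=(v^2)'\mathbf{z}\phi'$, which vanishes only for constant $v$. A SymPy sketch with the hypothetical data $n=2$, $\phi'=1$, and the constant of integration set to $2$ so that $v^2$ stays real:

```python
import sympy as sp

z = sp.symbols('z', positive=True)
n = 2                      # complex dimension (illustrative)
phi_p = sp.Integer(1)      # hypothetical Kahler potential derivative
c = 2                      # constant of integration, kept so v^2 stays real

v2 = (sp.integrate(1/(z**n * phi_p**(n - 1)), z) + c) ** sp.Rational(2, 2*n - 1)

# If (v2*omega, J0) were Kahler, some psi would satisfy psi' = v2*phi' and
# (z psi')' = v2*(z phi')'; the discrepancy between the two requirements is:
discrepancy = sp.diff(z*v2*phi_p, z) - v2*sp.diff(z*phi_p, z)
assert sp.simplify(discrepancy - sp.diff(v2, z)*z*phi_p) == 0   # equals (v2)' z phi'
assert sp.simplify(discrepancy) != 0                            # v is non-constant
```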
\section{Moduli space and behavior specification} \label{pf_thm_2} In this section we prove Theorem \ref{thm_2}. We would like to show that there is a ${\rm{U}}(n)$-invariant non-K\"ahler Hermitian Klsc metric whose $g_{\mathbb{CP}^{n-1}}$ component has coefficient $\mathbf{z}\mathscr{F}$, and that, once a further constant is specified, this metric is unique. The proof proceeds in three steps. First, we show that, from such a given function $\mathscr{F}(\mathbf{z})$, we can recover a function $\phi(\mathbf{z})$ so that $\mathscr{F}=\big(\int\frac{1}{\mathbf{z}^n(\phi')^{n-1}}d\mathbf{z}\big)^{\frac{2}{2n-1}}\phi'$. We would like this $\phi(\mathbf{z})$ to be a K\"ahler potential, so next we show that if the pair $(\mathscr{F},\mathcal{C})$ is admissible on the given annulus, then the recovered function $\phi(\mathbf{z})$ is in fact a K\"ahler potential there. Finally, this is used to obtain the other component of the metric.
\begin{proposition} \label{phi'_prop} Let $\mathscr{F}(\mathbf{z})$ be a smooth positive function on the annulus $\mathcal{A}_{\alpha,\beta}\subset{\mathbb{C}^n}$. Then, the function $\phi'(\mathbf{z})$ defined by \begin{align} \label{phi_from_G} \phi'(\mathbf{z})=\mathscr{F}\cdot\Big(\frac{1}{2n-1}\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{-2} \end{align} is such that \begin{align} \label{G_from_phi} \mathscr{F}(\mathbf{z})=\phi'\cdot\Big(\int\frac{1}{\mathbf{z}^n(\phi')^{n-1}}d\mathbf{z}\Big)^{\frac{2}{2n-1}}. \end{align} Furthermore, $\phi'(\mathbf{z})$ is the unique such function up to the constant $\mathcal{C}$. \end{proposition}
\begin{proof} Begin by observing that \begin{align} \mathbf{z}^n\mathscr{F}^{n-1}=\Big(\int\frac{1}{\mathbf{z}^n(\phi')^{n-1}}d\mathbf{z}\Big)^{\frac{2(n-1)}{2n-1}}\mathbf{z}^n(\phi')^{n-1}, \end{align} since $\mathscr{F}=\big(\int\frac{1}{\mathbf{z}^n(\phi')^{n-1}}d\mathbf{z}\big)^{\frac{2}{2n-1}}\phi'$, and thus that \begin{align} \label{int_1/zF_1} \int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}=\int\frac{1}{\mathbf{z}^n(\phi')^{n-1}\big(\int\frac{1}{\mathbf{z}^n(\phi')^{n-1}}d\mathbf{z}\big)^{\frac{2(n-1)}{2n-1}}}d\mathbf{z}. \end{align} To integrate the right hand side of \eqref{int_1/zF_1}, we make the substitution \begin{align} h=\int\frac{1}{\mathbf{z}^n(\phi')^{n-1}}d\mathbf{z},\phantom{=}\text{so}\phantom{=}dh=\frac{1}{\mathbf{z}^n(\phi')^{n-1}}d\mathbf{z}, \end{align} and find that \begin{align} \label{int_1/zF_2} \int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{\widetilde{C}}=(2n-1)h^{\frac{1}{2n-1}}=(2n-1)\Big(\int\frac{1}{\mathbf{z}^n(\phi')^{n-1}}d\mathbf{z}\Big)^{\frac{1}{2n-1}}, \end{align} where $\mathcal{\widetilde{C}}=(2n-1)\mathcal{C}$ is the constant of integration. Raising \eqref{int_1/zF_2} to the $(2n-1)^{th}$ power yields \begin{align} \Big(\frac{1}{2n-1}\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{2n-1}=\int\frac{1}{\mathbf{z}^n(\phi')^{n-1}}d\mathbf{z}. \end{align} Taking a derivative, we see that \begin{align} \frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}\Big(\frac{1}{2n-1}\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{2n-2}=\frac{1}{\mathbf{z}^n(\phi')^{n-1}}. \end{align} Finally, we are able to solve for $\phi'(\mathbf{z})$ in terms of $\mathscr{F}(\mathbf{z})$ as \begin{align} \phi'(\mathbf{z})=\mathscr{F}\cdot\Big(\frac{1}{2n-1}\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{-2}. \end{align} \end{proof}
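The pivotal step of the proof, that $\big(\frac{1}{2n-1}\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\big)^{2n-1}$ is an antiderivative of $\frac{1}{\mathbf{z}^n(\phi')^{n-1}}$ once $\phi'$ is defined by \eqref{phi_from_G}, can be confirmed symbolically. A SymPy sketch with the illustrative (hypothetical) data $n=2$, $\mathscr{F}=\mathbf{z}$, $\mathcal{C}=0$:

```python
import sympy as sp

z = sp.symbols('z', positive=True)
n = 2                      # complex dimension (illustrative)
F = z                      # hypothetical profile function F
C = 0                      # the constant C

# phi' recovered from F via \eqref{phi_from_G}
P = sp.Rational(1, 2*n - 1) * sp.integrate(1/(z**n * F**(n - 1)), z) + C
phi_p = F * P**(-2)

# P^(2n-1) is an antiderivative of 1/(z^n (phi')^(n-1))
assert sp.simplify(sp.diff(P**(2*n - 1), z) - 1/(z**n * phi_p**(n - 1))) == 0
```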
The function $\phi'(\mathbf{z})$ recovered in Proposition \ref{phi'_prop} is a candidate for the derivative of a K\"ahler potential $\phi(\mathbf{z})$. Since we are in the ${\rm{U}}(n)$-invariant setting, in order for $\phi(\mathbf{z})$ to be a K\"ahler potential on the annulus $\mathcal{A}_{\alpha,\beta}$, it is necessary that both $\phi'>0$ and $(\mathbf{z}\phi')'>0$ hold there. In the following proposition, we see exactly when this is satisfied.
\begin{proposition} \label{admissible_proposition} Let $\mathscr{F}(\mathbf{z})$ be a smooth positive function on the annulus $\mathcal{A}_{\alpha,\beta}\subset\mathbb{C}^n$, and let $\mathcal{C}$ be a fixed constant. If the function \begin{align} \label{(G,C)_increasing_2} \mathbf{z} \mathscr{F}\cdot\Big(\frac{1}{2n-1}\int\frac{1}{\mathbf{z}^n \mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{-2} \end{align} is strictly increasing on $\mathcal{A}_{\alpha,\beta}$, then the function $\phi(\mathbf{z})$ defined by \begin{align} \label{phi_from_G_2} \phi(\mathbf{z})=\int\frac{\mathscr{F}}{\big(\frac{1}{2n-1}\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\big)^{2}}d\mathbf{z} \end{align} is a K\"ahler potential there. \end{proposition}
\begin{proof} A function $\phi(\mathbf{z})$ is a K\"ahler potential on the annulus $\mathcal{A}_{\alpha,\beta}$ if and only if both $\phi'>0$ and $(\mathbf{z}\phi')'>0$ there. First, note that since $\mathscr{F}>0$, we have $\phi'>0$ on the annulus directly from \eqref{phi_from_G}. Then, observe, from \eqref{phi_from_G} and \eqref{phi_from_G_2}, that the inequality $(\mathbf{z}\phi')'>0$ is equivalent to the inequality \begin{align} \label{admissible_inequality} \frac{\partial}{\partial\mathbf{z}}\Bigg(\mathbf{z} \mathscr{F}\cdot\Big(\frac{1}{2n-1}\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{-2}\Bigg)>0 \end{align} which is precisely the condition that \eqref{(G,C)_increasing_2} be strictly increasing. \end{proof}
\begin{remark} \label{admissible_remark} {\em We call a pair $(\mathscr{F},\mathcal{C})$ {\em admissible} if the recovered function $\phi(\mathbf{z})$, given by \eqref{phi_from_G_2}, is a K\"ahler potential, as then $\mathscr{F}(\mathbf{z})$ arises from a K\"ahler potential as a part of the conformal Klsc metric. Also, although the admissibility condition may seem complicated, it can in fact be viewed, in some sense, as a weakening of the condition that $(\mathbf{z} \mathscr{F})'>0$. This is seen as follows. If the constant $\mathcal{C}=0$, expanding \eqref{admissible_inequality} and simplifying yields the inequality \begin{align} \label{admissible_inequality_2} (\mathbf{z}\mathscr{F})'-\frac{2}{\mathbf{z}^{n-1}\mathscr{F}^{n-2}}\Big(\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}\Big)^{-1}>0. \end{align} Then, observe that if $(\mathbf{z}\mathscr{F})'>0$, the anti-derivative satisfies $\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}<0$ for nearly all such $\mathscr{F}$, and therefore \eqref{admissible_inequality_2} is satisfied. It is important to note from this that pairs $(\mathscr{F},\mathcal{C})$ satisfying the admissibility condition are plentiful.} \end{remark}
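The passage from \eqref{admissible_inequality} with $\mathcal{C}=0$ to \eqref{admissible_inequality_2} just factors out the positive quantity $\big(\frac{1}{2n-1}\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}\big)^{-2}$, and can be verified symbolically. A SymPy sketch with the illustrative data $n=2$, $\mathscr{F}=\mathbf{z}$ (hypothetical, not from the argument):

```python
import sympy as sp

z = sp.symbols('z', positive=True)
n = 2                      # complex dimension (illustrative)
F = z                      # hypothetical profile F; here C = 0

I = sp.integrate(1/(z**n * F**(n - 1)), z)     # antiderivative, constant suppressed
P = sp.Rational(1, 2*n - 1) * I

# d/dz [ zF P^(-2) ]  ==  P^(-2) [ (zF)' - (2/(z^(n-1) F^(n-2))) I^(-1) ]
lhs = sp.diff(z*F*P**(-2), z)
rhs = P**(-2) * (sp.diff(z*F, z) - (2/(z**(n - 1) * F**(n - 2))) / I)
assert sp.simplify(lhs - rhs) == 0
```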
We have thus far shown that, given an admissible pair $(\mathscr{F},\mathcal{C})$ on an annulus $\mathcal{A}_{\alpha,\beta}$, we can recover a K\"ahler potential $\phi(\mathbf{z})$. Let $(\omega, J_0)$ denote the K\"ahler form of the K\"ahler metric arising from this potential, and consider the corresponding conformal non-K\"ahler Klsc metric as given by Theorem \ref{thm_1}. Because of the ${\rm{U}}(n)$-invariance, we can restrict our attention to examining the associated $2$-form of this metric, $(\widetilde{\omega},J_0)$, on the $z_1$-axis, which, from \eqref{omega_z1} and Theorem \ref{thm_1}, is given by \begin{align} \label{hermitian_form_1} \widetilde{\omega}=\sqrt{-1}\Big(\int\frac{1}{\mathbf{z}^n(\phi')^{n-1}}d\mathbf{z}\Big)^{\frac{2}{2n-1}}\Bigg((\mathbf{z}\phi')'dz_1\wedge d\bar{z}_1+\phi'\sum_{j=2}^ndz_j\wedge d\bar{z}_j\Bigg). \end{align} From Proposition \ref{phi'_prop}, we see both that $\mathscr{F}=\phi'\cdot\Big(\int\frac{1}{\mathbf{z}^n(\phi')^{n-1}}d\mathbf{z}\Big)^{\frac{2}{2n-1}}$ and that \begin{align} \label{F/phi'} \Big(\int\frac{1}{\mathbf{z}^n(\phi')^{n-1}}d\mathbf{z}\Big)^{\frac{2}{2n-1}}=\frac{\mathscr{F}}{\phi'}=\Big(\frac{1}{2n-1}\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{2}, \end{align} from which we find that \begin{align} \begin{split} (\mathbf{z}\phi')'\cdot\Big(&\int\frac{1}{\mathbf{z}^n(\phi')^{n-1}}d\mathbf{z}\Big)^{\frac{2}{2n-1}}\\ &=\Big(\frac{1}{2n-1}\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{2}\cdot\frac{\partial}{\partial\mathbf{z}}\Bigg(\mathbf{z} \mathscr{F}\cdot\Big(\frac{1}{2n-1}\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{-2}\Bigg)\\ &=(\mathbf{z} \mathscr{F})'-\frac{2}{\mathbf{z}^{n-1}\mathscr{F}^{n-2}}\Big(\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{-1}. 
\end{split} \end{align} Therefore, the associated $2$-form of the conformal Klsc metric \eqref{hermitian_form_1} is given by \begin{align} \begin{split} \label{hermitian_form_2} \widetilde{\omega}=\sqrt{-1}\Bigg(\Bigg[(\mathbf{z} \mathscr{F})'-\frac{2}{\mathbf{z}^{n-1}\mathscr{F}^{n-2}}\Big(\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{-1} \Bigg]&dz_1\wedge d\bar{z}_1\\ +\mathscr{F}\sum_{j=2}^n&dz_j\wedge d\bar{z}_j\Bigg). \end{split} \end{align} Recall that the standard metric on $S^{2n-1}$ is decomposed into $g_{\mathbb{CP}^{n-1}}$, the Fubini-Study metric on $\mathbb{CP}^{n-1}$, and $h$, the metric along the Hopf fiber. Therefore, since $\omega(\cdot,\cdot)=g(J_0\cdot, \cdot)$ and we are in the ${\rm{U}}(n)$-invariant setting, from \eqref{hermitian_form_2} we see that the non-K\"ahler Hermitian Klsc metric is explicitly given by \begin{align} \label{gFC_metric} g_{(\mathscr{F},\mathcal{C})}=\Bigg((\mathbf{z}\mathscr{F})'-\frac{2}{\mathbf{z}^{n-1}\mathscr{F}^{n-2}}\Big(\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{-1}\Bigg)\Big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\Big)+(\mathbf{z} \mathscr{F})g_{\mathbb{CP}^{n-1}}. \end{align}
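The coefficient identity behind \eqref{gFC_metric}, namely $(\mathbf{z}\phi')'\cdot\frac{\mathscr{F}}{\phi'}=(\mathbf{z}\mathscr{F})'-\frac{2}{\mathbf{z}^{n-1}\mathscr{F}^{n-2}}\big(\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}\big)^{-1}$, can be confirmed symbolically when the constants of integration are suppressed. A SymPy sketch with the illustrative (hypothetical) data $n=2$, $\mathscr{F}=\mathbf{z}$:

```python
import sympy as sp

z = sp.symbols('z', positive=True)
n = 2                 # complex dimension (illustrative)
F = z                 # hypothetical profile F; constants of integration suppressed

I = sp.integrate(1/(z**n * F**(n - 1)), z)
P = sp.Rational(1, 2*n - 1) * I               # so that phi' = F * P^(-2)
phi_p = F * P**(-2)

# coefficient of ((d sqrt z)^2 + z h) expressed two ways:
lhs = sp.diff(z*phi_p, z) * (F/phi_p)         # (z phi')' * (F/phi') = (z phi')' * P^2
rhs = sp.diff(z*F, z) - (2/(z**(n - 1) * F**(n - 2))) / I
assert sp.simplify(lhs - rhs) == 0
```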
Lastly, we obtain the uniqueness and classification result of Theorem \ref{thm_2} as follows. First, note that due to the admissibility condition, the coefficient of the $\big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\big)$ component of the metric $g_{(\mathscr{F},\mathcal{C})}$ from \eqref{gFC_metric} is smooth on the annulus. Next, we will use the fact that any ${\rm{U}}(n)$-invariant Hermitian metric on an annulus $\mathcal{A}_{\alpha,\beta}\subset (\mathbb{C}^n,J_0)$ is conformal to a K\"ahler metric there, and furthermore that, up to scale, there exists a unique ${\rm{U}}(n)$-invariant K\"ahler metric in each conformal class. This is seen as follows. Let $\Omega$ be the associated $2$-form to a ${\rm{U}}(n)$-invariant non-K\"ahler Hermitian metric on $\mathcal{A}_{\alpha,\beta}\subset(\mathbb{C}^n,J_0)$. Then, there exists some $\mathscr{E}(\mathbf{z}),\mathscr{F}(\mathbf{z})\in C^{\infty}(\mathcal{A}_{\alpha,\beta})$ so that the restriction of $\Omega$ to the $z_1$-axis is given by \begin{align} \Omega=\sqrt{-1}\Big(\mathscr{E}(\mathbf{z})dz_1\wedge d\bar{z}_1+\mathscr{F}(\mathbf{z})\sum_{i=2}^ndz_i\wedge d\bar{z}_i\Big). \end{align} We seek some function $v(\mathbf{z})\in C^{\infty}(\mathcal{A}_{\alpha,\beta})$ so that the conformal metric $e^v\Omega$ is K\"ahler, which would mean that there exists some potential function $\phi(\mathbf{z})$ so that \begin{align} e^v\mathscr{E}=(\mathbf{z}\phi')'\phantom{=}\text{and}\phantom{=}e^v\mathscr{F}=\phi', \end{align} where $\phi'$ is the derivative of the K\"ahler potential. Substituting the second equation into the first, we find that \begin{align} \label{second_conformal} v=\int\frac{\mathscr{E}-(\mathbf{z} \mathscr{F})'}{\mathbf{z} \mathscr{F}} d\mathbf{z}\phantom{=}\text{and}\phantom{=}\phi'=\exp\Big(\int\frac{\mathscr{E}-\mathscr{F}}{\mathbf{z} \mathscr{F}}d\mathbf{z}\Big), \end{align} and hence that there is a unique, up to scale, ${\rm{U}}(n)$-invariant K\"ahler metric in each conformal class. 
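The formulas \eqref{second_conformal} can be tested on a toy instance: choose coefficients $\mathscr{E}$ and $\mathscr{F}$, form $v$ and $\phi'$, and confirm that $e^v\mathscr{F}=\phi'$ and $e^v\mathscr{E}=(\mathbf{z}\phi')'$. A SymPy sketch with the hypothetical choices $\mathscr{E}=2\mathbf{z}^2$ and $\mathscr{F}=\mathbf{z}^2$:

```python
import sympy as sp

z = sp.symbols('z', positive=True)
E = 2*z**2            # hypothetical coefficient E of the 2-form Omega
F = z**2              # hypothetical coefficient F of the 2-form Omega

v = sp.integrate((E - sp.diff(z*F, z))/(z*F), z)
phi_p = sp.exp(sp.integrate((E - F)/(z*F), z))

# e^v * Omega is Kahler:  e^v F = phi'  and  e^v E = (z phi')'
assert sp.simplify(sp.exp(v)*F - phi_p) == 0
assert sp.simplify(sp.exp(v)*E - sp.diff(z*phi_p, z)) == 0
```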
Finally, suppose this non-K\"ahler Hermitian metric $(\mathcal{A}_{\alpha,\beta},J_0,\Omega)$ is a Klsc metric. We will show that it is necessarily of the form $g_{(\mathscr{F},\mathcal{C})}$ in \eqref{gFC_metric}. By equating the formulas \eqref{phi_from_G} and \eqref{second_conformal} for $\phi'(\mathbf{z})$ and taking a derivative, we find that \begin{align} \begin{split} \label{equate_derivative} \Big(\frac{1}{2n-1}\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}_1\Big)^{-2}\Bigg(\mathscr{F}'&-\frac{2}{\mathbf{z}^n\mathscr{F}^{n-2}}\Big(\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}_1\Big)^{-1}\Bigg)\\ =&\Big(\frac{\mathscr{E}-\mathscr{F}}{\mathbf{z} \mathscr{F}}\Big)\exp\Big(\int\frac{\mathscr{E}-\mathscr{F}}{\mathbf{z} \mathscr{F}}d\mathbf{z}+\mathcal{C}_2\Big). \end{split} \end{align} Next, by substituting the formula \eqref{phi_from_G} for the term $\exp\big(\int\frac{\mathscr{E}-\mathscr{F}}{\mathbf{z} \mathscr{F}}d\mathbf{z}+\mathcal{C}_2\big)$ on the right hand side of \eqref{equate_derivative}, since they are equivalent expressions of $\phi'(\mathbf{z})$, and making appropriate cancellations, we obtain the equation \begin{align} \mathscr{F}'-\frac{2}{\mathbf{z}^n\mathscr{F}^{n-2}}\Big(\frac{1}{2n-1}\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}_1\Big)^{-1}=\frac{\mathscr{E}-\mathscr{F}}{\mathbf{z}}. \end{align} This can be solved for $\mathscr{E}$ to find that \begin{align} \mathscr{E}=(\mathbf{z}\mathscr{F})'-\frac{2}{\mathbf{z}^{n-1}\mathscr{F}^{n-2}}\Big(\frac{1}{2n-1}\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}_1\Big)^{-1}, \end{align} which is precisely the same formula as the coefficient of the $\big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\big)$ component of $g_{(\mathscr{F},\mathcal{C})}$ in \eqref{gFC_metric}. Therefore, any non-K\"ahler Hermitian Klsc metric on $(\mathcal{A}_{\alpha,\beta},J_0)$ must take the form $g_{(\mathscr{F},\mathcal{C})}$ for some admissible pair.
\section{Asymptotically canonical singularities} \label{pf_thm_3} In this section we prove Theorem \ref{thm_3} and Corollary \ref{cor_thm_3}; the former occupies Sections \ref{expansion_proof} and \ref{regularity_proof}, and the latter Section \ref{pf_cor_thm_3}.
Let $(\mathscr{F},\mathcal{C})$ be an admissible pair on the annulus $\mathcal{A}_{0,\infty}\subset\mathbb{C}^n$, where the Taylor expansion of $\mathscr{F}(\mathbf{z})\in C^{\infty}(\mathbb{C}^n)$ about the origin is given by \begin{align} \label{G_expansion} \mathscr{F}(\mathbf{z})=\sum_{j=k}^{\infty}\frac{\mathscr{F}^{(j)}(0)}{j!}\mathbf{z}^j. \end{align} Note that the first nonzero term is of order $k$. From Theorem \ref{thm_2}, we have that \begin{align} g_{(\mathscr{F},\mathcal{C})}=\Bigg((\mathbf{z} \mathscr{F})'-\frac{2}{\mathbf{z}^{n-1}\mathscr{F}^{n-2}}\Big(\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{-1}\Bigg)\Big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\Big)+(\mathbf{z} \mathscr{F})g_{\mathbb{CP}^{n-1}} \end{align} is the corresponding unique Klsc metric on $\mathcal{A}_{0,\infty}$. In order to understand the behavior of $g_{(\mathscr{F},\mathcal{C})}$ near the origin, it is necessary to analyze the asymptotic behavior of the terms which comprise the coefficients of $\big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\big)$ and $g_{\mathbb{CP}^{n-1}}$ as $\mathbf{z}\to 0$. In Section \ref{expansion_proof}, we prove the first statement of Theorem \ref{thm_3}, that the Klsc metric extends to all of $\mathbb{C}^n$ and has a canonical asymptotically cone-like singularity at the origin. Then, in Section \ref{regularity_proof}, we prove the regularity conditions at the origin.
Throughout the proof a variety of expansions will be considered and manipulated, and it will only be necessary to keep track of the coefficients of the lowest order terms. Therefore, for brevity we denote the coefficients of the higher order terms of the expansions with capital Latin letters indexed with respect to the appropriate term in the expansion.
\subsection{The expansion of the Klsc metric} \label{expansion_proof} We begin by examining the coefficient of $\big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\big)$. To study the integral term, first observe from \eqref{G_expansion}, the expansion of $\mathscr{F}(\mathbf{z})$, that \begin{align} \label{G^n-1} \mathscr{F}^{n-1}=\Big(\frac{\mathscr{F}^{(k)}(0)}{k!}\Big)^{n-1}\mathbf{z}^{k(n-1)}+\sum_{j=1}^{\infty}A_j\mathbf{z}^{k(n-1)+j}, \end{align} where the constants $A_j$ are the appropriate combination of the coefficients of the Taylor expansion of $\mathscr{F}(\mathbf{z})$. Then, around $\mathbf{z}=0$, we have the expansion \begin{align} \frac{\mathbf{z}^{k(n-1)}}{\mathscr{F}^{n-1}}=\frac{\mathbf{z}^{k(n-1)}}{\big(\frac{\mathscr{F}^{(k)}(0)}{k!}\big)^{n-1}\mathbf{z}^{k(n-1)}+\sum_{j=1}^{\infty}A_j\mathbf{z}^{k(n-1)+j}}=\Big(\frac{\mathscr{F}^{(k)}(0)}{k!}\Big)^{1-n}\Big(1+\sum_{j=1}^{\infty}B_j\mathbf{z}^j\Big), \end{align}
where $B_j=\big(\frac{\mathscr{F}^{(k)}(0)}{k!}\big)^{n-1}\big(\frac{1}{j!}\big)\frac{\partial^j}{\partial\mathbf{z}^j}\big(\frac{\mathbf{z}^{k(n-1)}}{\mathscr{F}^{n-1}}\big)\big|_{\mathbf{z}=0}$, and thus \begin{align} \frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}=\Big(\frac{1}{\mathbf{z}^{k(n-1)+n}}\Big)\Big(\frac{\mathbf{z}^{k(n-1)}}{\mathscr{F}^{n-1}}\Big)=\Big(\frac{\mathscr{F}^{(k)}(0)}{k!}\Big)^{1-n}\Big(\frac{1}{\mathbf{z}^{k(n-1)+n}}\Big)\Big(1+\sum_{j=1}^{\infty}B_j\mathbf{z}^j\Big). \end{align} Therefore, in a small neighborhood of $\mathbf{z}=0$, we find that \begin{align} \begin{split} \label{int_1/zG} \int&\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}=\Big(\frac{\mathscr{F}^{(k)}(0)}{k!}\Big)^{1-n}\int \frac{1+\sum_{j=1}^{\infty}B_j\mathbf{z}^j}{\mathbf{z}^{k(n-1)+n}}d\mathbf{z}\\ &\phantom{=}=-\Big(\frac{\mathscr{F}^{(k)}(0)}{k!}\Big)^{1-n}\Bigg(\frac{1}{(n-1)(k+1)\mathbf{z}^{(n-1)(k+1)}}+\sum_{j=1}^{(n-1)(k+1)-1}C_j\mathbf{z}^{-(n-1)(k+1)+j}\\ &\phantom{============}-B_{(n-1)(k+1)}\log(\mathbf{z})-\sum_{j=(n-1)(k+1)+1}^{\infty}C_j\mathbf{z}^{-(n-1)(k+1)+j}\Bigg), \end{split} \end{align} where $C_j=\frac{B_j}{(n-1)(k+1)-j}$ for $j\neq (n-1)(k+1)$. Notice that the coefficient of the $\log(\mathbf{z})$ term is still just $B_{(n-1)(k+1)}$. From this, we see that \begin{align} \begin{split} \label{int_1/zG^-1} \Big(\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{-1}=&-\frac{(n-1)(k+1)\big(\frac{\mathscr{F}^{(k)}(0)}{k!}\big)^{n-1}\mathbf{z}^{(n-1)(k+1)}}{1+\sum_{j=1}^{\infty}D_j\mathbf{z}^j-E\mathbf{z}^{(n-1)(k+1)}\log(\mathbf{z})}, \end{split} \end{align} where the constants \begin{align} \begin{split} \label{D,E_coeffiecients} D_j&=\begin{cases} (n-1)(k+1)C_j\phantom{=}&j< (n-1)(k+1)\\ \mathcal{C}(n-1)(k+1)\big(\frac{\mathscr{F}^{(k)}(0)}{k!}\big)^{n-1}\phantom{=}&j=(n-1)(k+1)\\ -(n-1)(k+1)C_j\phantom{=}&j> (n-1)(k+1) \end{cases}\\ \text{and}\phantom{=}&\\ E&=(n-1)(k+1)B_{(n-1)(k+1)}. \end{split} \end{align}
Next, similarly to \eqref{G^n-1}, we find that \begin{align} \label{z^n-1G^n-2} \mathbf{z}^{n-1}\mathscr{F}^{n-2}=\mathbf{z}^{k(n-2)+n-1}\Big(\frac{\mathscr{F}^{(k)}(0)}{k!}\Big)^{n-2}\Big(1+\sum_{j=1}^{\infty}F_j\mathbf{z}^j\Big), \end{align} for some appropriately defined constants $F_j$.
Therefore, from \eqref{int_1/zG^-1} and \eqref{z^n-1G^n-2}, we see that in some neighborhood of $\mathbf{z}=0$ \begin{align} \begin{split} \label{coefficient_1_expansion_0} -\frac{2}{\mathbf{z}^{n-1}\mathscr{F}^{n-2}}&\Big(\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{-1}\\ =&\Bigg(\frac{2\big(\frac{\mathscr{F}^{(k)}(0)}{k!}\big)^{2-n}}{\mathbf{z}^{k(n-2)+n-1}\big(1+\sum_{j=1}^{\infty}F_j\mathbf{z}^j\big)}\Bigg)\Bigg(\frac{(n-1)(k+1)\big(\frac{\mathscr{F}^{(k)}(0)}{k!}\big)^{n-1}\mathbf{z}^{(n-1)(k+1)}}{1+\sum_{j=1}^{\infty}D_j\mathbf{z}^j-E\mathbf{z}^{(n-1)(k+1)}\log(\mathbf{z})}\Bigg)\\ =&\frac{2(n-1)(k+1)\big(\frac{\mathscr{F}^{(k)}(0)}{k!}\big)\mathbf{z}^k}{1+\sum_{j=1}^{\infty}G_j\mathbf{z}^j+\mathbf{z}^{(n-1)(k+1)}\log(\mathbf{z})\sum_{j=0}^{\infty}H_j\mathbf{z}^j}, \end{split} \end{align} for some appropriately defined constants $G_j$ and $H_j$. Then, letting \begin{align} \label{G(z)} \mathcal{G}(\mathbf{z})=\sum_{j=1}^{\infty}G_j\mathbf{z}^j+\mathbf{z}^{(n-1)(k+1)}\log(\mathbf{z})\sum_{j=0}^{\infty}H_j\mathbf{z}^j, \end{align} observe that, around $\mathbf{z}=0$, we have the following expansion \begin{equation} \label{1/zF_int} -\frac{2}{\mathbf{z}^{n-1}\mathscr{F}^{n-2}}\Big(\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{-1}=2(n-1)(k+1)\Big(\frac{\mathscr{F}^{(k)}(0)}{k!}\Big)\mathbf{z}^k\Big(1+\sum_{j=1}^{\infty}(-1)^{j}\mathcal{G}(\mathbf{z})^j\Big). \end{equation} Note that, since there are no constant terms in $\mathcal{G}(\mathbf{z})$, the lowest order term of \eqref{1/zF_int} is just $2(n-1)(k+1)\big(\frac{\mathscr{F}^{(k)}(0)}{k!}\big)\mathbf{z}^k$.
The other term in the coefficient of $\big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\big)$ is $(\mathbf{z}\mathscr{F})'$. From \eqref{G_expansion}, we see that this has the expansion \begin{align} \label{zF'_expansion} (\mathbf{z} \mathscr{F})'=\sum_{j=k}^{\infty}(j+1)\frac{\mathscr{F}^{(j)}(0)}{j!}\mathbf{z}^j \end{align} around $\mathbf{z}=0$.
We use the expansions \eqref{1/zF_int} and \eqref{zF'_expansion} together to find the expansion of the coefficient of $\big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\big)$ around $\mathbf{z}=0$. It will be important to keep track of the coefficient of the lowest order term, and since the lowest order terms in both \eqref{1/zF_int} and \eqref{zF'_expansion} are constant multiples of $\mathbf{z}^k$, we combine the higher order terms as \begin{align} \label{higher_order} \mathcal{H}(\mathbf{z})=2(n-1)(k+1)\big(\frac{\mathscr{F}^{(k)}(0)}{k!}\big)\mathbf{z}^k\sum_{j=1}^{\infty}(-1)^j\mathcal{G}(\mathbf{z})^j+\sum_{j=k+1}^{\infty}(j+1)\frac{\mathscr{F}^{(j)}(0)}{j!}\mathbf{z}^j. \end{align} Note that the lowest possible order term of $\mathcal{H}(\mathbf{z})$ is a constant multiple of $\mathbf{z}^{k+1}$. Finally, around the origin, we find that the coefficient of $\big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\big)$ satisfies \begin{align} \label{coefficient_1_expansion} (\mathbf{z} \mathscr{F}(\mathbf{z}))'-\frac{2}{\mathbf{z}^{n-1}\mathscr{F}^{n-2}}\Big(\int\frac{1}{\mathbf{z}^n\mathscr{F}^{n-1}}d\mathbf{z}+\mathcal{C}\Big)^{-1}=&(2n-1)(k+1)\Big(\frac{\mathscr{F}^{(k)}(0)}{k!}\Big)\mathbf{z}^k+\mathcal{H}(\mathbf{z}). \end{align}
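The leading coefficient $(2n-1)(k+1)\big(\frac{\mathscr{F}^{(k)}(0)}{k!}\big)\mathbf{z}^k$ in \eqref{coefficient_1_expansion} can be checked symbolically. The SymPy sketch below uses the illustrative (hypothetical) data $n=2$, $k=1$, $\mathscr{F}=\mathbf{z}+\mathbf{z}^3$ and $\mathcal{C}=0$, for which the coefficient should behave like $6\mathbf{z}$ as $\mathbf{z}\to0$:

```python
import sympy as sp

z = sp.symbols('z', positive=True)
n, k = 2, 1
F = z + z**3               # hypothetical F with F^(k)(0)/k! = 1

I = sp.integrate(sp.apart(1/(z**n * F**(n - 1))), z)   # exact antiderivative, C = 0
coeff = sp.diff(z*F, z) - (2/(z**(n - 1) * F**(n - 2))) / I

# leading behavior: (2n-1)(k+1) (F^(k)(0)/k!) z^k = 6 z
assert sp.limit(coeff/z, z, 0) == 6
```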
Now, we examine the expansion of $\mathbf{z}\mathscr{F}$, the coefficient of $g_{\mathbb{CP}^{n-1}}$. From \eqref{G_expansion}, we see that \begin{align} \label{coefficient_2_expansion} \mathbf{z}\mathscr{F}=\sum_{j=k}^{\infty}\frac{\mathscr{F}^{(j)}(0)}{j!}\mathbf{z}^{j+1}, \end{align} around $\mathbf{z}=0$, and notice that the lowest order term here is just $\big(\frac{\mathscr{F}^{(k)}(0)}{k!}\big)\mathbf{z}^{k+1}$.
Using \eqref{coefficient_1_expansion} and \eqref{coefficient_2_expansion}, we find that, around $\mathbf{z}=0$, the metric $g_{(\mathscr{F},\mathcal{C})}$ has the following expansion. \begin{align} \begin{split} \label{metric_expansion_1} g_{(\mathscr{F},\mathcal{C})}=(2n-1)(k+1)\Big(&\frac{\mathscr{F}^{(k)}(0)}{k!}\Big)\mathbf{z}^k\Big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\Big)+\Big(\frac{\mathscr{F}^{(k)}(0)}{k!}\Big)\mathbf{z}^{k+1}g_{\mathbb{CP}^{n-1}}\\ &+\mathcal{H}(\mathbf{z})\Big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\Big)+\Big(\sum_{j=k+1}^{\infty}\frac{\mathscr{F}^{(j)}(0)}{j!}\mathbf{z}^{j+1}\Big)g_{\mathbb{CP}^{n-1}}. \end{split} \end{align} Denote the terms $\mathcal{H}(\mathbf{z})\big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\big)+\big(\sum_{j=k+1}^{\infty}\frac{\mathscr{F}^{(j)}(0)}{j!}\mathbf{z}^{j+1}\big)g_{\mathbb{CP}^{n-1}}$ by $h.o.t.$, for higher order terms, since they vanish faster at the origin than the first terms in the expansion. Now, make a change in the radial variable by setting \begin{align} \label{r_change} r^2=\Big(\frac{2n-1}{k+1}\Big)\Big(\frac{\mathscr{F}^{(k)}(0)}{k!}\Big)\mathbf{z}^{k+1}. \end{align} In these coordinates, the expansion \eqref{metric_expansion_1} of $g_{(\mathscr{F},\mathcal{C})}$ becomes \begin{align} \label{metric_expansion_2} g_{(\mathscr{F},\mathcal{C})}=dr^2+\Big(\frac{k+1}{2n-1}\Big)r^2\Big(g_{\mathbb{CP}^{n-1}}+(2n-1)(k+1)h\Big)+h.o.t.. \end{align} From \eqref{higher_order} and \eqref{coefficient_2_expansion}, we see that the higher order terms in these coordinates satisfy the decay conditions \begin{align} \label{hot_decay} h.o.t.=\mathcal{O}\Big(r^{\frac{2}{k+1}}\Big)dr^2+\mathcal{O}\Big(r^{\frac{2(k+2)}{k+1}}\Big)h+\mathcal{O}\Big(r^{\frac{2(k+2)}{k+1}}\Big)g_{\mathbb{CP}^{n-1}}. \end{align} Clearly, this metric extends to all of $\mathbb{C}^n$ and has the stated canonical asymptotically cone-like singularity at the origin.
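Since $(d\sqrt{\mathbf{z}})^2=\frac{d\mathbf{z}^2}{4\mathbf{z}}$, the change of variables \eqref{r_change} turns the leading term into $dr^2$ precisely when $(2n-1)(k+1)\big(\frac{\mathscr{F}^{(k)}(0)}{k!}\big)\frac{\mathbf{z}^{k-1}}{4}=\big(\frac{dr}{d\mathbf{z}}\big)^2$. This can be confirmed symbolically; the SymPy sketch below uses the illustrative values $n=3$, $k=2$ and $\frac{\mathscr{F}^{(k)}(0)}{k!}=2$:

```python
import sympy as sp

z = sp.symbols('z', positive=True)
n, k = 3, 2                # illustrative values
A = sp.Integer(2)          # stands for F^(k)(0)/k!

r = sp.sqrt(sp.Rational(2*n - 1, k + 1) * A * z**(k + 1))    # the change of variables

# leading coefficient times (d sqrt z)^2 = dz^2/(4z) gives exactly dr^2
lhs = (2*n - 1)*(k + 1)*A*z**k / (4*z)
assert sp.simplify(lhs - sp.diff(r, z)**2) == 0
```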
\subsection{Regularity at the origin} \label{regularity_proof} Here, we examine the regularity, over the origin, of the coefficients of the higher order terms of the metric $g_{(\mathscr{F},\mathcal{C})}$ in the variable $r$. There are two issues of which to be wary. Firstly, there may be fractional powers of $r$ in the expansion of these coefficients when we change variables by \eqref{r_change} to obtain \eqref{metric_expansion_2} from \eqref{metric_expansion_1}. Secondly, if the $\mathbf{z}^{(n-1)(k+1)}\log(\mathbf{z})\sum_{j=0}^{\infty}H_j\mathbf{z}^j$ term in $\mathcal{G}(\mathbf{z})$ is nonzero, it will cause derivatives of the coefficient of the $\big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\big)$ component of the metric to blow up in both the $r$ and $\mathbf{z}$ variables (recall \eqref{G(z)}).
We first address the issue of fractional powers of $r$ in the expansion of these coefficients. Ignoring the higher order terms whose coefficients involve $\log(\mathbf{z})$, which will be dealt with below, the other components of the higher order terms of $g_{(\mathscr{F},\mathcal{C})}$ are, up to constants, of the form $\mathbf{z}^a(d\sqrt{\mathbf{z}})^2$, $\mathbf{z}^bh$ and $\mathbf{z}^cg_{\mathbb{CP}^{n-1}}$ for some integers $a\geq k+1$ and $b,c\geq k+2$. From the change of variables \eqref{r_change}, observe that for an arbitrary integer $d$ \begin{align} \begin{split} \label{z_l_hot} \mathbf{z}^{d}=&\Big(\frac{2n-1}{k+1}\Big)^{-\frac{d}{k+1}}\Big(\frac{\mathscr{F}^{(k)}(0)}{k!}\Big)^{-\frac{d}{k+1}}r^{\frac{2d}{k+1}},\phantom{=}\text{and}\\ \mathbf{z}^{d}(d\sqrt{\mathbf{z}})^2=&(k+1)^{\frac{d-2k-1}{k+1}}(2n-1)^{-\frac{d+1}{k+1}}\Big(\frac{\mathscr{F}^{(k)}(0)}{k!}\Big)^{-\frac{d+1}{k+1}}r^{\frac{2(d-k)}{k+1}}dr^2. \end{split} \end{align} When $k=0$ or $k=1$, notice that there are no fractional powers of $r$ in the coefficients here, and the only regularity issue to consider is the higher order $\log(\mathbf{z})$ term. However, for $k\geq 2$ there may be fractional powers. Observe, from \eqref{higher_order}, \eqref{coefficient_1_expansion} and \eqref{coefficient_2_expansion}, that the lowest possible such power necessarily comes from a $\mathbf{z}^{a}(d\sqrt{\mathbf{z}})^2$ term. Since it is possible for $a=k+1$, from \eqref{z_l_hot}, we see that in the variable $r$ there may be an $r^{\frac{2}{k+1}}dr^2$ term, which is only of class $C^0$ when $k\geq 2$. Therefore, when $k\geq 2$, we can only guarantee, a priori, that $g_{(\mathscr{F},\mathcal{C})}$ is of class $C^0$ over the origin.
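The conversion of a typical higher order term $\mathbf{z}^{d}(d\sqrt{\mathbf{z}})^2$ into the $r$ variable can be carried out symbolically; the exponent $\frac{2(d-k)}{k+1}$ of $r$ is what drives the $C^0$ statement when $k\geq 2$. A SymPy sketch with the illustrative values $n=2$, $k=2$, $d=k+1=3$ and $\frac{\mathscr{F}^{(k)}(0)}{k!}=1$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
n, k, d = 2, 2, 3          # illustrative values with d = k + 1
A = sp.Integer(1)          # stands for F^(k)(0)/k!

c = sp.Rational(2*n - 1, k + 1) * A          # r^2 = c z^(k+1), so:
z_of_r = (r**2 / c) ** sp.Rational(1, k + 1)

# z^d (d sqrt z)^2 = z^(d-1) dz^2/4; its dr^2-coefficient in the r variable
expr = z_of_r**(d - 1) / 4 * sp.diff(z_of_r, r)**2
ratio = sp.simplify(expr / r**sp.Rational(2*(d - k), k + 1))

assert ratio.is_constant()                   # exponent of r is exactly 2(d-k)/(k+1)
assert sp.Rational(2*(d - k), k + 1) < 1     # fractional power, so only C^0 here
```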
However, when $k\geq 2$, higher regularity is still possible in particular cases, and if further information is known about the expansion of $\mathscr{F}(\mathbf{z})$, this estimate can be improved. For instance, if the order of the second nonzero term in the Taylor expansion of $\mathscr{F}(\mathbf{z})$ about the origin is known, denote this by $\ell$, then the lowest possible order term with fractional powers of $r$ will be a constant multiple of $r^{\frac{2(\ell-k)}{k+1}}dr^2$. If $\ell$ is such that $\frac{2(\ell-k)}{k+1}$ is an integer, then one would look a step further into the expansion, and so on.
Now, we address the issue of the $\log(\mathbf{z})$ term. Observe, from \eqref{G(z)} and \eqref{1/zF_int}, that the lowest order term in the expansion of the coefficient of $\big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\big)$ that contains a $\log(\mathbf{z})$ is a constant multiple of $\mathbf{z}^{(n-1)(k+1)+k}\log(\mathbf{z})\sum_{j=0}^{\infty}H_j\mathbf{z}^j$. Changing variables as in \eqref{r_change}, we see that \begin{align} \label{zlog_expansion} \mathbf{z}^{(n-1)(k+1)+k}\log(\mathbf{z})(d\sqrt{\mathbf{z}})^2=\frac{2}{C^n(k+1)^3}r^{2(n-1)}\log\Big(\frac{r}{\sqrt{C}}\Big)dr^2, \end{align} where the constant $C=\big(\frac{2n-1}{k+1}\big)\big(\frac{\mathscr{F}^{(k)}(0)}{k!}\big)$. Recall, from \eqref{coefficient_1_expansion_0}, that if any of the $H_j$ are nonzero, then necessarily $H_0$ is nonzero. Therefore, if $\sum_{j=0}^{\infty}H_j\mathbf{z}^j$ is nonzero, from \eqref{zlog_expansion}, we see that there are precisely $2n-3$ derivatives in the $r$ variable before the term containing $\log(r)$ blows up at the origin.
The obstruction to regularity of the terms containing $\log(r)$ is removed if and only if $H_j=0$ for all $j$. Observe, from \eqref{int_1/zG}, \eqref{int_1/zG^-1}, \eqref{D,E_coeffiecients} and \eqref{coefficient_1_expansion_0}, that this occurs if and only if \begin{align}
\frac{\partial^{(n-1)(k+1)}}{\partial\mathbf{z}^{(n-1)(k+1)}}\Big(\frac{\mathbf{z}^k}{\mathscr{F}}\Big)^{n-1}\Big|_{\mathbf{z}=0}=0. \end{align} This is equivalent to the $(n-1)(k+1)$-degree term of the Taylor expansion of $\big(\frac{\mathbf{z}^k}{\mathscr{F}}\big)^{n-1}$ around $\mathbf{z}=0$ vanishing.
In the cases that $k=0$ or $k=1$, since there are no fractional powers of $r$ in the expansion of the coefficients of $g_{(\mathscr{F},\mathcal{C})}$ as we saw above, this is the only obstruction to the coefficients of the metric extending smoothly over the origin, and the regularity results of Theorem~\ref{thm_3} follow. When $k\geq2$, recall that, a priori, the metric is only guaranteed to be of class $C^0$ over the origin since there may be an $r^{\gamma}$ term, where $0<\gamma<1$, in a coefficient of the expansion of the higher order terms of the metric. However, given more information about $\mathscr{F}(\mathbf{z})$ and its expansion, the following precise statement can be made. Let $\eta$ denote the lowest non-integral power of $r$ in the coefficients of the expansion of $g_{(\mathscr{F},\mathcal{C})}$. Then, the metric is of class \begin{align} \begin{cases}
C^{\lfloor \eta\rfloor}\phantom{=}&\text{if }\frac{\partial^{(n-1)(k+1)}}{\partial\mathbf{z}^{(n-1)(k+1)}}\big(\frac{\mathbf{z}^k}{\mathscr{F}}\big)^{n-1}\big|_{\mathbf{z}=0}=0\\ C^{\min\{\lfloor \eta\rfloor, 2n-3\}}\phantom{=}&\text{otherwise} \end{cases}, \end{align} where $\lfloor\cdot\rfloor$ is the greatest integer function. \begin{remark} \label{k>2Cinfty}
{\em Note that, even when $k\geq 2$, it is possible that the coefficients of $g_{(\mathscr{F},\mathcal{C})}$ only contain integral powers of $r$. In this case we say that $\eta=\infty$. Then, the metric is of class $C^{\infty}$ if and only if $\frac{\partial^{(n-1)(k+1)}}{\partial\mathbf{z}^{(n-1)(k+1)}}\big(\frac{\mathbf{z}^k}{\mathscr{F}}\big)^{n-1}\big|_{\mathbf{z}=0}=0$. For example, consider the admissible pair $(\mathscr{F},\mathcal{C})=(\mathbf{z}^2+\mathbf{z}^8,0)$ in complex dimension $n=2$. The associated metric $g_{(\mathscr{F},\mathcal{C})}$ will be of class $C^{\infty}$ over the origin. See Section \ref{fix_2}.} \end{remark}
\subsection{Proof of Corollary \ref{cor_thm_3}} \label{pf_cor_thm_3} Consider the ${\rm{U}}(n)$-invariant non-K\"ahler Hermitian Klsc metric $g_{(\mathscr{F},\mathcal{C})}$ uniquely associated to an admissible pair $(\mathscr{F},\mathcal{C})$ on the annulus $\mathcal{A}_{0,\infty}\subset(\mathbb{C}^n,J_0)$, where the first nonzero term of the Taylor expansion of $\mathscr{F}(\mathbf{z})\in C^{\infty}(\mathbb{C}^n)$ about the origin is of order $k$. From Theorem \ref{thm_3}, we see that this extends to a metric on $\mathbb{C}^n$ with an asymptotically cone-like singularity at the origin. If $(2n-1)(k+1)=\ell^2$ for some integer $\ell$, then observe, from Theorem \ref{thm_3}, that this metric is given by \begin{align} g_{(\mathscr{F},\mathcal{C})}=dr^2+\Big(\frac{k+1}{2n-1}\Big)r^2\Big(g_{\mathbb{CP}^{n-1}}+\ell^2h\Big)+h.o.t. \end{align} Then, take the $\mathbb{Z}/\ell\mathbb{Z}$ quotient along the Hopf fibers generated by \begin{align} (z_1,\cdots,z_n)\mapsto (e^{2\pi i/\ell}z_1,\cdots,e^{2\pi i/\ell}z_n), \end{align} and note that this can be identified with $\mathbb{C}^n$ itself. Since the complex structure is compatible with this process, it descends to this quotient. Thus, we see that the metric $g_{(\mathscr{F},\mathcal{C})}$ descends to \begin{align} \overline{g}_{(\mathscr{F},\mathcal{C})}=dr^2+\Big(\frac{k+1}{2n-1}\Big)r^2g_{S^{2n-1}}+h.o.t., \end{align} where $g_{S^{2n-1}}$ is the standard metric on $S^{2n-1}$. Finally, notice that if $k+1=2n-1$, in which case $(2n-1)(k+1)$ is always a square, then \begin{align} \overline{g}_{(\mathscr{F},\mathcal{C})}=dr^2+r^2g_{S^{2n-1}}+h.o.t. \end{align} is a nonsingular metric. These metrics are of the same regularity class over the origin as the initial metric $g_{(\mathscr{F},\mathcal{C})}$; recall Section \ref{regularity_proof}.
\begin{remark} {\em An example of a nonsingular Klsc metric of class $C^{\infty}$ is obtained from the admissible pair $(\mathscr{F},\mathcal{C})=(\mathbf{z}^2+\mathbf{z}^8,0)$ in complex dimension $n=2$, since $3=2n-1=k+1$, and from Remark \ref{k>2Cinfty} we know that $\overline{g}_{(\mathscr{F},\mathcal{C})}$ is of class $C^{\infty}$ over the origin. See Section \ref{fix_2}. } \end{remark}
\section{Examples} In this section we give several examples of non-K\"ahler Hermitian Klsc metrics arising via Theorem~\ref{thm_1}, Theorem \ref{thm_2} and Theorem \ref{thm_3}. In Section \ref{flat}, Section~\ref{burns} and Section \ref{Fubini-Study}, we begin with a K\"ahler metric and obtain the corresponding non-K\"ahler Hermitian Klsc metric via Theorem \ref{thm_1}. We also find the scalar curvature of the resulting Klsc metric in each case and examine its behavior. Then, in Section~\ref{fix_1} and Section \ref{fix_2}, we fix an admissible pair $(\mathscr{F},\mathcal{C})$ on the annulus $\mathcal{A}_{0,\infty}$, and find the unique associated non-K\"ahler Hermitian Klsc metric. We also examine the behavior and regularity of the respective Klsc metrics at the origin as well as find the K\"ahler metrics to which they are conformal. To simplify the computations, we will work in complex dimension $n=2$, except in Section \ref{flat}, as well as take the constant of the admissible pair to be $\mathcal{C}=0$ in Section \ref{fix_1} and Section \ref{fix_2}.
\subsection{The non-K\"ahler Klsc metric conformal to the Euclidean metric on $\mathbb{C}^n$} \label{flat} Consider the flat metric, $g_{Euc}=(d\sqrt{\mathbf{z}})^2+\mathbf{z} g_{S^{2n-1}}$, on $\mathbb{C}^n$ which arises from the K\"ahler potential $\phi=\mathbf{z}$. From Theorem \ref{thm_1}, the conformal metric $g_{Klsc}=v^2g_{Euc}$, where \begin{align} \label{flat_u} v^2=(n-1)^{-\frac{2}{2n-1}}\mathbf{z}^{-\frac{2(n-1)}{2n-1}}, \end{align} is a non-K\"ahler Hermitian Klsc metric. Changing variables by setting \begin{align} r^2=(n-1)^{-\frac{2}{2n-1}}(2n-1)^2\mathbf{z}^{\frac{1}{2n-1}}, \end{align} we see that the Klsc metric, given by \begin{align} g_{Klsc}=dr^2+\frac{r^2}{(2n-1)^2}g_{S^{2n-1}}, \end{align} is a conical metric.
Lastly, we find that the scalar curvature of this Klsc metric $g_{Klsc}$ is given by \begin{align} S_{g_{Klsc}}=2S_{C_{g_{Klsc}}}=\frac{8n}{2n-1}(n-1)^{\frac{4n}{2n-1}}\mathbf{z}^{-\frac{1}{2n-1}}=8n(2n-1)(n-1)^2r^{-2}. \end{align}
\subsection{The non-K\"ahler Klsc metric conformal to the Burns metric on $\mathcal{O}_{\mathbb{P}^1}(-1)$} \label{burns} The Burns metric is a scalar-flat K\"ahler ALE metric on the blow-up of $\mathbb{C}^2$ at the origin \cite{Burns}. It arises from the K\"ahler potential $\phi=\mathbf{z}+\log(\mathbf{z})$, and is given by \begin{align} g_B=(d\sqrt{\mathbf{z}})^2+(\mathbf{z}+1)g_{\mathbb{CP}^1}+\mathbf{z} h. \end{align} Note the $\mathbb{CP}^1$ at the origin. From Theorem \ref{thm_1}, we see that the conformal metric $g_{Klsc}=v^2g_{B}$, where \begin{align} \label{flat_burns} v^2=\log^{\frac{2}{3}}\Big(1+\frac{1}{\mathbf{z}}\Big), \end{align} is a non-K\"ahler Hermitian Klsc metric. Explicitly, we have that \begin{align} g_{Klsc}=\log^{\frac{2}{3}}\Big(1+\frac{1}{\mathbf{z}}\Big)\Big((d\sqrt{\mathbf{z}})^2+(\mathbf{z}+1)g_{\mathbb{CP}^1}+\mathbf{z} h\Big). \end{align} Notice that here, the Klsc metric $g_{Klsc}$ is defined on $\mathbb{C}^2\setminus\{0\}$. In a sense, one can view the conformal factor as causing the metric to blow up along the $\mathbb{CP}^1$ at the origin while the Hopf fibers still shrink.
Lastly, we find that the scalar curvature of this Klsc metric $g_{Klsc}$ is given by \begin{align} S_{g_{Klsc}}=2S_{C_{g_{Klsc}}}=\frac{8}{3}\Bigg(\mathbf{z}(\mathbf{z}+1)^2\log^{\frac{8}{3}}\Big(1+\frac{1}{\mathbf{z}}\Big)\Bigg)^{-1}. \end{align} Notice that the scalar curvature blows up as the origin is approached.
\subsection{The non-K\"ahler Klsc metric conformal to the Fubini-Study metric} \label{Fubini-Study} Here we will see Theorem \ref{thm_1} applied to a compact manifold and obtain non-K\"ahler Klsc metrics on an annular decomposition of the manifold minus closed sets of positive codimension. Consider the Fubini-Study metric, which is K\"ahler Einstein, on $\mathbb{CP}^2$. It arises from the K\"ahler potential $\phi=\log(\mathbf{z}+1)$, and is given by \begin{align} g_{FS}=\frac{1}{(\mathbf{z}+1)^2}\Big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\Big)+\frac{\mathbf{z}}{(\mathbf{z}+1)}g_{\mathbb{CP}^1}. \end{align} Clearly, this extends over the point at $\mathbf{z}=0$ and the $2$-sphere at infinity smoothly. From Theorem \ref{thm_1}, we see that the conformal metric $g_{Klsc}=v^2g_{FS}$, where \begin{align}
v^2=\Big|\log(\mathbf{z})-\frac{1}{\mathbf{z}}\Big|^{\frac{2}{3}}, \end{align} is a non-K\"ahler Klsc metric wherever it is defined. To see where this is, first notice that $\log(\mathbf{z})-\frac{1}{\mathbf{z}}=0$ at $\mathbf{z}=\gamma\approx 1.763$. Next, observe that the conformal factor causes the metric to blow up along the $2$-sphere at infinity, while the Hopf fibers still shrink there. Therefore, removing the $2$-sphere at infinity, the point at the origin and the $S^3$ given by $\mathbf{z}=\gamma$, we find that the conformal metric \begin{align}
g_{Klsc}=\Big|\log(\mathbf{z})-\frac{1}{\mathbf{z}}\Big|^{\frac{2}{3}}\Bigg(\frac{1}{(\mathbf{z}+1)^2}\Big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\Big)+\frac{\mathbf{z}}{(\mathbf{z}+1)}g_{\mathbb{CP}^1}\Bigg) \end{align} is a non-K\"ahler Klsc metric on the disjoint union of annuli $\mathcal{A}_{0,\gamma}\amalg\mathcal{A}_{\gamma,\infty}$.
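For reference, the crossing value $\gamma$ has a closed form in terms of the Lambert $W$ function: \begin{align} \log(\gamma)=\frac{1}{\gamma} \iff \frac{1}{\gamma}e^{\frac{1}{\gamma}}=1 \iff \frac{1}{\gamma}=W(1)=\Omega\approx0.5671, \end{align} so that $\gamma=\Omega^{-1}\approx1.7632$, where $\Omega$ is the omega constant, the unique real solution of $\Omega e^{\Omega}=1$.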
Lastly, we find the scalar curvature of this Klsc metric $g_{Klsc}$. Although it is necessary to separate the computation of the transformation of scalar curvature on each of the respective annuli, since $\log(\mathbf{z})-\frac{1}{\mathbf{z}}$ is negative for $\mathbf{z}<\gamma$ and positive for $\mathbf{z}>\gamma$, we find that the same formula gives the scalar curvature of the non-K\"ahler Klsc metric on both $\mathcal{A}_{0,\gamma}$ and $\mathcal{A}_{\gamma,\infty}$, and this is \begin{align} S_{g_{Klsc}}=2S_{C_{g_{Klsc}}}=\Big(\log(\mathbf{z})+\frac{1}{\mathbf{z}}\Big)^{-\frac{2}{3}}\Bigg(\frac{8}{3}(\mathbf{z}+1)\Big(1+\frac{1}{\mathbf{z}}\Big)^3\Big(\log(\mathbf{z})+\frac{1}{\mathbf{z}}\Big)^{-1}+12\Bigg). \end{align}
\subsection{Obtaining the non-K\"ahler Klsc metric by fixing $\mathscr{F}(\mathbf{z})=\mathbf{z}^2+\mathbf{z}^8$ on $\mathbb{C}^2$} \label{fix_2} Since $(\mathbf{z}^2+\mathbf{z}^8)'>0$ for all $\mathbf{z}>0$, the pair \begin{align} (\mathscr{F},\mathcal{C})=(\mathbf{z}^2+\mathbf{z}^8,0) \end{align} is clearly admissible on the annulus $\mathcal{A}_{0,\infty}\subset\mathbb{C}^2$. Note that the first nonzero term of the Taylor expansion of $\mathscr{F}$ about the origin is of order $2$. To obtain the corresponding Klsc metric, from Theorem \ref{thm_2} and Theorem \ref{thm_3}, in a small neighborhood of the origin we compute that \begin{align} \begin{split} (\mathbf{z}\mathscr{F})'-\frac{2}{\mathbf{z}}\Big(\int\frac{1}{\mathbf{z}^2\mathscr{F}}d\mathbf{z}\Big)^{-1}= & 9\mathbf{z}^2+9\mathbf{z}^8+6\mathbf{z}^2\sum_{n=1}^{\infty}\Big(\sum_{j=1}^{\infty}\frac{(-1)^j}{2j-1}\mathbf{z}^{6j}\Big)^n. \end{split} \end{align} Thus, in a small neighborhood of the origin, the Klsc metric is given by \begin{align} \begin{split} g_{(\mathscr{F},0)}=\Bigg(9\mathbf{z}^2+9\mathbf{z}^8+6\mathbf{z}^2\sum_{n=1}^{\infty}\Big(\sum_{j=1}^{\infty}\frac{(-1)^j}{2j-1}\mathbf{z}^{6j}\Big)^n\Bigg)\Big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\Big)+(\mathbf{z}^3+\mathbf{z}^9)g_{\mathbb{CP}^1}. \end{split} \end{align} Then, by making the change of variables $r^2=\mathbf{z}^3$, we see that \begin{align} g_{(\mathscr{F},0)}=dr^2+r^2(g_{\mathbb{CP}^1}+9h)+\Bigg(r^4+\frac{6}{9}\sum_{n=1}^{\infty}\Big(\sum_{j=1}^{\infty}\frac{(-1)^j}{2j-1}r^{4j}\Big)^n\Bigg)(dr^2+9r^2h)+r^6g_{\mathbb{CP}^1}. \end{align} Notice that there are no fractional powers of $r$ in the coefficients of the metric. Therefore, recalling Remark \ref{k>2Cinfty}, even though $k=2$, this metric is of class $C^{\infty}$ over the origin since \begin{align}
\frac{\partial^3}{\partial\mathbf{z}^3}\Big(\frac{\mathbf{z}^2}{\mathscr{F}}\Big)\Big|_{\mathbf{z}=0}=0. \end{align}
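Indeed, the full Taylor expansion makes the vanishing transparent: \begin{align} \frac{\mathbf{z}^2}{\mathscr{F}}=\frac{\mathbf{z}^2}{\mathbf{z}^2+\mathbf{z}^8}=\frac{1}{1+\mathbf{z}^6}=\sum_{j=0}^{\infty}(-1)^j\mathbf{z}^{6j}, \end{align} so every Taylor coefficient in a degree that is not a multiple of $6$ vanishes; in particular the degree-$(n-1)(k+1)=3$ coefficient is zero.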
Now, as in Corollary \ref{cor_thm_3}, the metric descends via the $\mathbb{Z}/3\mathbb{Z}$ quotient along the Hopf fibers generated by \begin{align} (z_1,z_2)\mapsto(e^{\frac{2\pi i}{3}}z_1,e^{\frac{2\pi i}{3}}z_2) \end{align} to the nonsingular Klsc metric given by \begin{align} \begin{split} \overline{g}_{(\mathscr{F},0)}=dr^2+r^2(g_{\mathbb{CP}^1}+h)+\Bigg(r^4+\frac{6}{9}\sum_{n=1}^{\infty}\Big(\sum_{j=1}^{\infty}\frac{(-1)^j}{2j-1}r^{4j}\Big)^n\Bigg)(dr^2+r^2h)+r^6g_{\mathbb{CP}^1}. \end{split} \end{align} Furthermore, note that this is in fact a smooth nonsingular metric.
Lastly, we are able to recover the K\"ahler metric to which the Klsc metric $g_{(\mathscr{F},0)}$ is conformal. From Proposition \ref{admissible_proposition}, we find that the derivative of the K\"ahler potential corresponding to this Klsc metric is \begin{align} \phi'=(\mathbf{z}^2+\mathbf{z}^8)\Big(\frac{1}{3}\int\frac{1}{\mathbf{z}^2(\mathbf{z}^2+\mathbf{z}^8)}d\mathbf{z}\Big)^{-2}=\frac{81(\mathbf{z}^2+\mathbf{z}^8)}{\big(\tan^{-1}(\frac{1}{\mathbf{z}^3})-\frac{1}{\mathbf{z}^3}\big)^2}. \end{align} Then, using this to compute $(\mathbf{z}\phi')'$, we can recover the K\"ahler form restricted to the $z_1$-axis and, in turn, find that the K\"ahler metric is given by \begin{equation} g=\frac{243\big((3\mathbf{z}^9+\mathbf{z}^3)\tan^{-1}\big(\frac{1}{\mathbf{z}^3}\big)-3(\mathbf{z}^6+1)\big)}{\mathbf{z}\big(\tan^{-1}(\frac{1}{\mathbf{z}^3})-\frac{1}{\mathbf{z}^3}\big)^3}\Big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\Big)+\frac{81(\mathbf{z}^3+\mathbf{z}^9)}{\big(\tan^{-1}(\frac{1}{\mathbf{z}^3})-\frac{1}{\mathbf{z}^3}\big)^2} \big(g_{\mathbb{CP}^1}\big). \end{equation}
\subsection{Obtaining the non-K\"ahler Klsc metric by fixing $\mathscr{F}(\mathbf{z})=\mathbf{z}+\mathbf{z}^2$ on $\mathbb{C}^2$} \label{fix_1} While this choice of specified function may look similar to that of Section \ref{fix_2}, it will illustrate a different type of possible behavior in that it will only be of class $C^1$ over the origin as opposed to class $C^{\infty}$. Furthermore, it will not descend to a nonsingular metric.
Since $(\mathbf{z}+\mathbf{z}^2)'>0$ for all $\mathbf{z}>0$, the pair \begin{align} (\mathscr{F},\mathcal{C})=(\mathbf{z}+\mathbf{z}^2,0) \end{align} is clearly admissible on the annulus $\mathcal{A}_{0,\infty}\subset\mathbb{C}^2$. Note that the first nonzero term of the Taylor expansion of $\mathscr{F}$ about the origin is of order $1$. To obtain the corresponding Klsc metric, from Theorem \ref{thm_2} and Theorem \ref{thm_3}, we compute that \begin{align} \begin{split} (\mathbf{z}\mathscr{F})'-\frac{2}{\mathbf{z}}\Big(\int\frac{1}{\mathbf{z}^2\mathscr{F}}d\mathbf{z}\Big)^{-1}= 2\mathbf{z}+3\mathbf{z}^2+\frac{4\mathbf{z}}{1-2\mathbf{z}+2\mathbf{z}^2\log(1+\frac{1}{\mathbf{z}})}. \end{split} \end{align} Thus, the Klsc metric is given by \begin{align} \begin{split} g_{(\mathscr{F},0)}=\Bigg(2\mathbf{z}+3\mathbf{z}^2+\frac{4\mathbf{z}}{1-2\mathbf{z}+2\mathbf{z}^2\log(1+\frac{1}{\mathbf{z}})}\Bigg)\Big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\Big)+(\mathbf{z}^2+\mathbf{z}^3)g_{\mathbb{CP}^1}. \end{split} \end{align} Then, by making the change of variables $r^2=\frac{3}{2}\mathbf{z}^2$ and taking the appropriate expansions in a small neighborhood of the origin, we see that \begin{align} \begin{split} g_{(\mathscr{F},0)}=&dr^2+\frac{2}{3}r^2(g_{\mathbb{CP}^1}+6h)+\Big(\frac{r}{\sqrt{6}}+\frac{2}{3}\sum_{n=1}^{\infty}H(r)^n\Big)(dr^2+4r^2h)+\Big(\frac{2}{3}\Big)^{\frac{3}{2}}r^3g_{\mathbb{CP}^1}, \end{split} \end{align} where \begin{align} H(r)=\big(\sqrt{8/3}\big)r-\frac{4}{3}r^2\log\Big(1+\big(\sqrt{3/2}\big)r^{-1}\Big). \end{align} Notice that while there are no fractional powers of $r$ in the coefficients of the metric, there are, however, nonvanishing $\log(\cdot)$ terms. Without actually having found this expansion explicitly, we could have seen that this occurs from Theorem \ref{thm_3}, since \begin{align}
\frac{\partial^2}{\partial\mathbf{z}^2}\Big(\frac{\mathbf{z}}{\mathscr{F}}\Big)\Big|_{\mathbf{z}=0}=2\neq0. \end{align} In turn, this metric is only of class $C^1$ over the origin.
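Equivalently, the failure can be read off from the Taylor expansion: \begin{align} \frac{\mathbf{z}}{\mathscr{F}}=\frac{1}{1+\mathbf{z}}=\sum_{j=0}^{\infty}(-1)^j\mathbf{z}^j \quad\Longrightarrow\quad \frac{\partial^2}{\partial\mathbf{z}^2}\Big(\frac{\mathbf{z}}{\mathscr{F}}\Big)\Big|_{\mathbf{z}=0}=2!\,(-1)^2=2, \end{align} so the degree-$(n-1)(k+1)=2$ Taylor coefficient does not vanish.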
Lastly, we are able to recover the K\"ahler metric to which this Klsc metric $g_{(\mathscr{F},0)}$ is conformal. From Proposition \ref{admissible_proposition}, we find that the derivative of the K\"ahler potential corresponding to this Klsc metric is \begin{align} \phi'=(\mathbf{z}+\mathbf{z}^2)\Big(\frac{1}{3}\int\frac{1}{\mathbf{z}^2(\mathbf{z}+\mathbf{z}^2)}d\mathbf{z}\Big)^{-2}=\frac{36(\mathbf{z}^3+\mathbf{z}^4)}{\big(1-2\mathbf{z}+2\mathbf{z}^2\log(1+\frac{1}{\mathbf{z}})\big)^2}. \end{align} Then, using this to compute $(\mathbf{z}\phi')'$, we can recover the K\"ahler form restricted to the $z_1$-axis and, in turn, find that the K\"ahler metric is given by \begin{align} \begin{split} g=\frac{36\mathbf{z}^5\big(6-\mathbf{z}-6\mathbf{z}^2+2(3\mathbf{z}+2)\mathbf{z}^2\log(1+\frac{1}{\mathbf{z}})\big)}{\big(1-2\mathbf{z}+2\mathbf{z}^2\log(1+\frac{1}{\mathbf{z}})\big)^3}&\Big((d\sqrt{\mathbf{z}})^2+\mathbf{z} h\Big)\\ +\frac{36(\mathbf{z}^4+\mathbf{z}^5)}{\big(1-2\mathbf{z}+2\mathbf{z}^2\log(1+\frac{1}{\mathbf{z}})\big)^2} &\big(g_{\mathbb{CP}^1}\big). \end{split} \end{align}
\end{document}
\begin{document}
\title{COMBINATORIAL KNOT FLOER HOMOLOGY AND DOUBLE BRANCHED COVERS}
\author{Fatemeh Douroudian}
\address{Department of Mathematics, Faculty of Mathematical Sciences, Tarbiat Modares University, P.O. Box 14115-137, Tehran, Iran.} \email{douroudian@modares.ac.ir}
\keywords{Heegaard-Floer, knot homology, double branched covers}
\subjclass[2010]{57M25, 57M27}
\begin{abstract} Using a Heegaard diagram for the pullback of a knot $K \subset S^3$ in its double branched cover $\Sigma_2(K)$, we give a combinatorial proof for the invariance of the associated knot Floer homology over $\mathbb{Z}$. \end{abstract}
\maketitle
\section{Introduction} \label{intro}
Heegaard Floer homology, introduced by Ozsv\'ath and Szab\'o, is a collection of invariants for closed oriented three-manifolds. The various versions of Heegaard Floer homology are defined by counting certain holomorphic disks in the symmetric product of a Riemann surface. There is a relative version of the theory for a pair $(Y, K)$, where $K$ is a nullhomologous knot in the three-manifold $Y$, which was developed by Ozsv\'ath and Szab\'o \cite{OS} and independently by Rasmussen \cite{Ras}. However, working with holomorphic disks makes it difficult to compute the homology groups. In~\cite{Sarkar}, Sarkar and Wang gave an algorithm that makes the computation of the hat version combinatorial. In~\cite{nice}, Ozsv\'ath, Stipsicz and Szab\'o gave a combinatorial algorithm for constructing $\widehat{HF}(Y)$ and provided a topological proof of its invariance.
Given a knot $K\subset S^3$, a \emph{grid diagram} $G$ associated with $K$ is an $n\times n$ planar grid, together with two sets $\mathbb{X}=\{ X_i\}_{i=1}^{n}$ and $\mathbb{O}=\{ O_i\}_{i=1}^{n}$ of basepoints. Each column and each row contains exactly one $X$ and one $O$. We view this grid diagram as a torus $T^2 \subset S^3$ by standard edge identifications. Here each horizontal line is an $\alpha$ circle and each vertical line is a $\beta$ circle, and $(T^2, \mathbf \alpha, \mathbf \beta, \mathbb{O}, \mathbb{X})$ is a multi-pointed Heegaard diagram for $(S^3, K)$. Manolescu, Ozsv\'ath and Sarkar~\cite{MOS} showed that such diagrams can be used to compute $\widehat{HFK}(S^3, K)$ combinatorially. In \cite{MOST}, Manolescu, Ozsv\'ath, Szab\'o and Thurston gave a combinatorial proof of the invariance of knot Floer homology and a combinatorial presentation of the basic properties of link Floer homology over $\mathbb{Z}$. In \cite{L}, Levine gave a construction of a Heegaard diagram for the pullback $\widetilde K$ of a knot $K \subset S^3$ in its $m$-fold cyclic branched cover $\Sigma_m(K)$ to compute $\widehat{HFK}(\Sigma_m(K),\widetilde K)$ over $\mathbb{Z}_2$ combinatorially. In this paper we use a recent work of Ozsv\'ath, Stipsicz and Szab\'o \cite{Sign}, where they assign signs to the rectangles and bigons in a nice Heegaard diagram, and we provide a combinatorial proof of the invariance of the knot Floer homology of $(\Sigma_2(K),\widetilde K)$ with $\mathbb{Z}$ coefficients. Note that the invariance of $\widehat{HFK}(\Sigma_2(K),\widetilde K;\mathbb{Z})$ was known by the works of \cite{OS} and \cite{Ras}, but in this paper we give a combinatorial argument which allows us to compute $\widehat{HFK}(\Sigma_2(K),\widetilde K;\mathbb{Z})$ combinatorially.
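As a minimal illustration of these conventions (our example, not taken from the references above), the smallest grid diagram has $n=2$ and represents the unknot: \begin{align} G=\begin{array}{|c|c|} \hline X_1 & O_1 \\ \hline O_2 & X_2 \\ \hline \end{array} \end{align} Each row and each column contains exactly one $X$ and one $O$, and joining each $X$ to the $O$ in its column and each $O$ to the $X$ in its row traces a single unknotted circle on the torus $T^2$.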
The construction of a nice Heegaard diagram for $(\Sigma_m (K),\widetilde K)$, described in \cite{L} is as follows: Let $T^2$ be the torus which describes the grid diagram $G$ of $K$. First isotope $K$ so that it lies in $H_{\alpha}$, the handlebody corresponding to $\alpha$ curves. Next let $F$ be a Seifert surface that is contained entirely in a ball in $H_{\alpha}$. We can see, after isotoping back both of $K$ and $F$ to $H_{\beta}$, that the intersections of $F$ with $T^2$ are the vertical lines that connect the $X$ and the $O$ in each column in $G$. We define $\widetilde T$ to be the surface obtained by gluing together $m$ copies of $T^2$, denoted by $T_0, \dots, T_{m-1}$, along the branch cuts connecting the pairs of $X$ and $O$ in each column such that whenever $X$ is above $O$, the left side of the branch cut in $T_k$ is glued to the right side of the same cut in $T_{k+1}$; if the $O$ is above the $X$, then glue the left side of the branch cut in $T_k$ to the right side of the same cut in $T_{k-1}$ (indices modulo $m$). The projection map $\pi :\widetilde T \rightarrow T^2$ is an $m$-fold cyclic branched cover, branched over the basepoints. Each $\alpha$ and $\beta$ circle in $T^2$ bounds a disk away from branch points in $S^3-K$ so each of them has $m$ distinct lifts to $\Sigma_m(K)$, the $m$-fold cyclic branched cover. Each lift of each $\alpha$ circle intersects exactly one lift of each $\beta$ circle. This can be shown as in Fig.~\ref{grid} with $m$ disjoint \emph{grids}. Denote by $\widetilde \beta _{j}^{i}$ for $i=0, \dots, m-1$ and $j=0, \dots, n-1$ the vertical arcs, and denote by $\widetilde \alpha _{j}^{i}$ the arc which has intersection with $\widetilde \beta _{0}^{i}$. Let us denote this Heegaard diagram by $\widetilde G=(\widetilde T,\widetilde{\bm{\alpha}},\widetilde{\bm{\beta}},\mathbb{O}, \mathbb{X})$, where $\widetilde{\bm{\alpha}}$ and $\widetilde{\bm{\beta}}$ are the lifts of ${\bm{\alpha}}$ and ${\bm{\beta}}$ to $\widetilde T$. 
In the figures for $m=2$, we draw the lifts with superscript zero (i.e. $\widetilde\alpha^0_i$ and $\widetilde\beta^0_j$) with solid lines, and the lifts with superscript one (i.e. $\widetilde\alpha^1_i$ and $\widetilde\beta^1_j$) with dashed lines.
\begin{figure}\label{grid}
\end{figure}
Denote by $\RR(G)$ the set of embedded rectangles in $T^2$ which do not contain any basepoints and whose left and right edges are arcs of $\beta$ circles and whose upper and lower edges are arcs of $\alpha$ circles. Each rectangle in $\RR(G)$ has $m$ disjoint lifts to $\widetilde T$ (possibly passing through the branch cuts); denote the set of such lifts by $\RR(\widetilde G)$.
Let the set of generators $\mathbf{S}(\widetilde G)$ be the set of unordered $mn$-tuples $\mathbf{x}$ of intersection points between $\widetilde \alpha^i_j$ and $\widetilde \beta^i_j$ for $i = 0 , \dots, m-1$ and $j= 0, \dots, n-1$, such that each $\widetilde \alpha^i_j$ and each $\widetilde \beta^i_j$ contains exactly one component of $\mathbf{x}$. Also denote by $\mathbf{S}(G)$ the set of generators for $G$. In \cite{L}, Levine showed that any generator $\mathbf{x} \in \mathbf{S}(\widetilde G)$ can be decomposed (non-uniquely) as $\mathbf{x}= \widetilde\mathbf{x}_1\cup \dots \cup \widetilde\mathbf{x}_m$, where $\mathbf{x}_1, \dots, \mathbf{x}_m$ are generators in $\mathbf{S}(G)$, and $\widetilde\mathbf{x}_i$ is a lift of $\mathbf{x}_i$ to $\widetilde G$.
The Alexander grading of a generator $\mathbf{x} \in \mathbf{S}(\widetilde G)$ is defined as follows: Given two finite sets of points $A$, $B$ in the plane, let $\mathcal{I}(A, B)$ be the number of pairs $\big((a_1,a_2),(b_1,b_2)\big)\in A\times B$ such that $a_1<b_1$ and $a_2<b_2$. Let $\mathcal{J}(A, B)=\dfrac{1}{2}(\mathcal{I}(A, B)+\mathcal{I}(B,A))$. Given $\mathbf{x}_i \in \mathbf{S}(G)$, define \begin{center} $A(\mathbf{x}_i)= \J(\mathbf{x}_i-\dfrac{1}{2}(\mathbb{X}+\mathbb{O}),\mathbb{X}-\mathbb{O})- (\dfrac{n-1}{2})$, \end{center} where we consider $\J$ as a bilinear function of its two variables. For any generator $\mathbf{x} \in \mathbf{S}(\widetilde G)$, consider one of the decompositions $\mathbf{x}= \widetilde\mathbf{x}_1\cup \dots \cup \widetilde\mathbf{x}_m$ and define \begin{center} $A(\mathbf{x})= \dfrac{1}{m} \sum_i A(\mathbf{x}_i)$. \end{center} A simple calculation shows that $A(\mathbf{x})$ is well-defined (i.e. independent of the choice of the decomposition).
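For instance, with singleton sets the conventions are easy to track: for $A=\{(0,0)\}$ and $B=\{(1,1)\}$ we have \begin{center} $\mathcal{I}(A,B)=1,\qquad \mathcal{I}(B,A)=0,\qquad \mathcal{J}(A,B)=\dfrac{1}{2}\big(\mathcal{I}(A,B)+\mathcal{I}(B,A)\big)=\dfrac{1}{2}$. \end{center}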
Let $C({\widetilde G})$ be the $\mathbb{Z}_2$ vector space generated by $\mathbf{S}(\widetilde G)$. Define a differential $\partial$ on $C({\widetilde G})$ by setting a nonzero coefficient for $\mathbf{y}$ in $\partial \mathbf{x}$ if and only if $\mathbf{x}, \mathbf{y} \in \mathbf{S}(\widetilde G)$ agree along all but two vertical circles and there is a rectangle $R \in \RR(\widetilde G)$ whose lower-left and upper-right corners are in $\mathbf{x}$ and whose lower-right and upper-left corners are in $\mathbf{y}$, and which does not contain any components of $\mathbf{x}$ in its interior. We denote by $Rect(\mathbf{x}, \mathbf{y})$ the set of such rectangles.
\begin{center} $\partial \mathbf{x}=\sum\limits_{\mathbf{y} \in \mathbf{S}(\widetilde G)} \sum\limits_{R\in Rect(\mathbf{x},\mathbf{y})} \mathbf{y}$ \end{center}
Note that $\partial$ preserves the Alexander grading.
Let $H_*(\widetilde G)$ be the homology of the chain complex $(C(\widetilde G), \partial)$.
Later we consider $C(\widetilde G)$ as a free $\mathbb{Z}$-module and define the boundary map using the signs that are assigned to each $R\in Rect(\mathbf{x},\mathbf{y})$.
\begin{remark}\label{horver} In this construction we cut along the vertical lines connecting the $X$ and the $O$ in each column, and glue two copies of the resulting surface. Note that we can alternatively consider a different Seifert surface, whose intersection with the torus $T^2$ consists of the horizontal lines connecting the $X$ and the $O$ in each row, and glue two copies of the resulting surface. In this way we get the same Heegaard diagram up to homeomorphism (this follows for example from \cite{Lickorish} Theorem 7.9), so in the definition of $(C(\widetilde G), \partial)$ we could have used this alternative construction without changing the resulting complex, and hence the homology groups. \end{remark}
Other useful notations are as follows: Let $\mathbf{x} , \mathbf{y} \in \mathbf{S}(\widetilde G)$. A \emph{path} $\gamma$ from $\mathbf{x}$ to $\mathbf{y}$ is a closed oriented path composed of arcs on $\alpha$- and $\beta$-circles, which has its corners among $\mathbf{x}$ and $\mathbf{y}$, oriented so that $\partial(\gamma \cap \alpha)=\mathbf{y} -\mathbf{x}$ and $\partial(\gamma \cap \beta)=\mathbf{x} -\mathbf{y}$. Let $D_1 , \dots , D_m$ be the components of $\widetilde T \backslash ({\bm \alpha} \cup {\bm \beta})$. A \emph{domain} $p$ from $\mathbf{x}$ to $\mathbf{y}$ is a two-chain $p = \sum a_i D_i$ in $\widetilde T$ whose boundary $\partial p$ is a path from $\mathbf{x}$ to $\mathbf{y}$. Define $\pi (\mathbf{x}, \mathbf{y})$ to be the set of domains from $\mathbf{x}$ to $\mathbf{y}$. There is a natural composition law
\begin{center} $* : \pi(\mathbf a,\mathbf b) \times \pi(\mathbf b,\mathbf c) \longrightarrow \pi(\mathbf a,\mathbf c)$ \end{center}
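Concretely, on the level of two-chains, composition is addition of coefficients: if $p_1=\sum_i a_iD_i\in\pi(\mathbf a,\mathbf b)$ and $p_2=\sum_i a_i'D_i\in\pi(\mathbf b,\mathbf c)$, then \begin{center} $p_1 * p_2=\sum_i(a_i+a_i')D_i$, \end{center} and the boundary of this sum is a path from $\mathbf a$ to $\mathbf c$.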
In order to state the main theorem of this paper, we need one more definition. Following \cite{OSS}, we define the stable knot Floer homology $\widehat{HFK}_{st}(Y,K)$ of a knot inside a three-manifold $Y$ as follows:
\begin{definition} Two pairs $(V_1, a_1)$ and $(V_2, a_2)$, where $V_1$, $V_2$ are finite dimensional free modules over the ring $\mathbb{F}$ and $a_1,a_2$ are non-negative integers, are called equivalent pairs if either $a_1\geq a_2$ and $V_1\cong V_2 \otimes (\mathbb{F} \oplus \mathbb{F})^{a_1 -a_2}$, or $a_2\geq a_1$ and $V_2\cong V_1 \otimes (\mathbb{F} \oplus \mathbb{F})^{a_2 -a_1}$. If we are working with graded modules then we require the isomorphisms to preserve the grading. \end{definition}
Let $D=(\Sigma, \bm{\alpha}, \bm{\beta}, \mathbf{w}, \mathbf{z})$ be a nice Heegaard diagram of $Y$ compatible with the knot $K \subset Y$ such that $|\mathbf{w} |=|\mathbf{z} |$. Then the equivalence class of the pair $(H_*(D), |\mathbf{z} |)$ is the stable knot Floer homology of $K \subset Y$ (as a graded vector space), denoted by $\widehat{HFK}_{st}(Y,K)$.
Note that $\widehat{HFK}_{st}(Y,K)$ and $\widehat{HFK}(Y,K)$ are in fact equivalent. As shown in \cite{OSS, L, MOST}, we have $H_*(\widetilde G) \cong \widehat{HFK}(Y,K) \otimes (\mathbb{F} \oplus \mathbb{F})^{|\mathbf{z} | -1} $, where $\mathbb{F}$ is either $\mathbb{Z}_2$ or $\mathbb{Z}$ depending on the way we define the chain complex $(C(\widetilde G), \partial)$. The statement of the main theorem is as follows:
\begin{theorem} \label{main} Let $K$ be an oriented knot and $\widetilde K$ be its pullback in the double branched cover $\Sigma_2(K)$. Then the stable knot Floer homology of $\widetilde K \subset \Sigma_2(K)$ over $\mathbb{Z}$ is an invariant of the knot. \end{theorem}
In Section~\ref{assign}, we review some definitions and theorems from~\cite{Sign}. In Section~\ref{invar} we prove the invariance of the knot Floer homology of the pullback of a knot $K \subset S^3$ in its double branched cover $\Sigma_2(K)$ over $\mathbb{Z}$.
\section{Preliminaries} \label{assign}
In~\cite{Sign}, it is shown that there exists a sign assignment for nice Heegaard diagrams. Using this fact, we want to extend the definition of the boundary map from the case of $\mathbb{Z}_2$ coefficients to the case of coefficients in $\mathbb{Z}$. To be self-contained, we review some definitions and theorems; for more details see~\cite{Sign}.
We call an empty rectangle or bigon in a nice Heegaard diagram a \emph{flow}. Fix two sets $\bm \alpha$ and $\bm \beta$ such that $|\bm \alpha|=|\bm \beta|=m$. We define formal generators and formal flows as follows:
\begin{definition} A formal generator of power $m$ is a pair $(\epsilon,\sigma)$, where $\epsilon = (\epsilon_1, \dots , \epsilon_m) \in \{ \pm 1 \}^m$ and $\sigma \in S_m$. A formal generator of power $m$ has the following equivalent definition: a pairing of elements of $\bm \alpha$ and $\bm \beta$ with an assignment of $ \pm 1 $ to each pair. \end{definition}
We can represent this definition pictorially. Draw $m$ oriented crossings and label the oriented arcs at the $i^{th}$ crossing with $\alpha_i$ and $\beta_{\sigma(i)}$. The sign $\epsilon_i$ is the intersection number $\alpha_i\cdot \beta_{\sigma(i)}$, computed with respect to the standard orientation of the plane.
\begin{definition} Consider two balls $U$ and $V$ of radius $1$ embedded in the complex plane, such that $U$ is centered at $0$ and $V$ is centered at the point $\sqrt{2}$. Let $\textbf{B} = U\cap V$; we call the point $\frac{\sqrt{2}}{2} - \frac{\sqrt{2}}{2} i \in \partial U \cap \partial V$ the initial point of $\textbf{B}$ and the point $\frac{\sqrt{2}}{2} + \frac{\sqrt{2}}{2} i \in \partial U \cap \partial V$ the terminal point of $\textbf{B}$. The arc $\partial U \cap \textbf{B}$ is called $\bm a$ and the arc $\partial V \cap \textbf{B}$ is called $\bm b$. We call this setting the \emph{ideal bigon}, and when there is no confusion denote it only by $\textbf{B}$. \end{definition}
\begin{definition} Let $\mathbf{x} = (\epsilon , \sigma)$ and $\mathbf{y} = (\epsilon ' , \sigma ')$ be two formal generators of power $m$ such that for some $i \in \{ 1, \dots , m \rbrace$ we have $\epsilon_i = -\epsilon_i '$ and $\epsilon_j = \epsilon_j '$ for all $j\neq i$, and also $\sigma = \sigma '$. A \emph{formal bigon} of power $m$, from $\mathbf{x}$ to $\mathbf{y}$ is the data of an orientation of the arcs $\bm a$ and $\bm b$ of the ideal bigon $\textbf{B}$ with the following (equivalent) properties: \begin{itemize} \item the local intersection number of $\bm a$ and $\bm b$ at the initial point of $\textbf{B}$ (with respect to the orientations that we fixed) is $\epsilon_i$ \item the local intersection number of $\bm a$ and $\bm b$ at the terminal point of $\textbf{B}$ (with respect to the orientations that we fixed) is $\epsilon_i '$ \end{itemize}
We denote such formal bigon by $\mathcal{B} :\mathbf{x} \rightarrow \mathbf{y}$. \end{definition}
\begin{definition} Consider a square $\textbf{R}$ embedded in the complex plane with corners $0 , 1 , i , 1+i$. We label the upper (resp. lower) edge $N$ (resp. $S$), and the right (resp. left) edge $E$ (resp. $W$). We call the pair $(SW , NE)$ (of intersection points) the initial pair of points, and $(SE , NW)$ the terminal pair of points. We call $\textbf{R}$ an \emph{ideal rectangle}. \end{definition}
\begin{definition}\label{formalrec} Let $\mathbf{x} = (\epsilon , \sigma)$ and $\mathbf{y} = (\epsilon ' , \sigma ')$ be two formal generators of power $m$ such that for distinct $s,n \in \{ 1, \dots , m \}$ and distinct $e,w \in \{ 1, \dots , m \}$ we have: \begin{itemize} \item $\sigma(s)=w$ , $\sigma(n)=e$, \item $\sigma'(s)=e$ , $\sigma '(n)=w$, \item $\sigma(j)=\sigma '(j)$ for all $j\neq s, n$, \item $\epsilon_j = \epsilon_j '$ for all $j\neq n, s.$ \end{itemize} A \emph{formal rectangle} of power $m$ from $\mathbf{x}$ to $\mathbf{y}$, which we denote by $\mathcal{R} :\mathbf{x} \rightarrow \mathbf{y}$, is the data of an ideal rectangle $\textbf{R}$ together with a choice of orientation for the four edges of $\textbf{R}$ with the following properties: the local intersection signs at the initial points $SW$ and $NE$ coincide with the signs $\epsilon_s$ and $\epsilon_n$, and the local intersection signs at the terminal points $SE$ and $NW$ coincide with $\epsilon'_s$ and $\epsilon'_n$. \end{definition}
For such a formal rectangle, we denote the edges $N$, $S$, $E$ and $W$ by $\alpha_n$, $\alpha_s$, $\beta_e$ and $\beta_w$. We represent such a formal rectangle by the sequence $\alpha_n \rightarrow \beta_w \rightarrow \alpha_s \rightarrow \beta_e$. The data of $\sigma(j)$ and $\epsilon_j$ for $j \neq n,s$ are not explicitly mentioned in this \emph{arrow notation} and should be understood from the context.
We call either a formal bigon or a formal rectangle a \emph{formal flow}. Given two formal generators $\mathbf{x} = (\epsilon , \sigma)$ and $\mathbf{y} = (\epsilon ' , \sigma ')$, we denote by $\phi :\mathbf{x} \rightarrow \mathbf{y}$ a formal flow from $\mathbf{x}$ to $\mathbf{y}$. Denote by $\FF_m$ the set of formal flows of power $m$ (with respect to the initial and terminal formal generators). Given two formal flows $\phi_1 :\mathbf{x} \rightarrow \mathbf{y}$ and $\phi_2 :\mathbf{y} \rightarrow \mathbf{z}$, for ease of language we call the pair $(\phi_1 , \phi_2)$ the \emph{composition} of $\phi_1$ and $\phi_2$. Note that although we call the pair $(\phi_1 , \phi_2)$ the composition of $\phi_1$ and $\phi_2$, it is not itself a flow and is not of the same type as $\phi_1$ and $\phi_2$.
\begin{definition} Let $\mathbf{x} = (\epsilon , \sigma)$, $\mathbf{y} = (\epsilon ' , \sigma ')$ and $\mathbf{z} = (\epsilon '', \sigma '')$ be formal generators of the same power, and let $\phi_1 :\mathbf{x} \rightarrow \mathbf{y}$ and $\phi_2 :\mathbf{y} \rightarrow \mathbf{z}$ be two formal flows. If, for some orientations and some labelling of the arcs by $\alpha_i$ and $\beta_j$, the pair $(\phi_1 , \phi_2)$ has one of the forms in Fig.~\ref{dege}, then we say the pair $(\phi_1 , \phi_2)$ is a \emph{boundary degeneration}. Note that in this case $\mathbf{z} =\mathbf{x} $. The boundary degeneration is of Type $\alpha$ (resp. $\beta$) when the circle(s) in Fig.~\ref{dege} is decorated with $\alpha$ (resp. $\beta$). \end{definition}
\begin{figure}\label{dege}
\end{figure}
We define a sign assignment of power $m$ as follows:
\begin{definition} \label{def sign} A sign assignment $\SS$ of power $m$ is a map $\SS:\FF_m \longrightarrow \{\pm 1\}$ that satisfies the following conditions:
(S-1) if the composite flow $(\phi_1 , \phi_2)$ is a degeneration of Type $\alpha$, then \begin{center} $\SS(\phi_1) \cdot \SS(\phi_2)=1$ \end{center}
(S-2) if the composite flow $(\phi_1 , \phi_2)$ is a degeneration of Type $\beta$, then \begin{center} $\SS(\phi_1) \cdot \SS(\phi_2)= -1$ \end{center}
(S-3) given two pairs $(\phi_1, \phi_2)$ and $(\phi_3, \phi_4)$ such that the initial formal generators of $\phi_1$ and $\phi_3$ coincide and the terminal formal generators of $\phi_2$ and $\phi_4$ coincide, we have \begin{center} $\SS(\phi_1) \cdot \SS(\phi_2)+\SS(\phi_3) \cdot\SS(\phi_4)=0$. \end{center} \end{definition}
We can construct new sign assignments from an old one:
\begin{definition} Let $\SS$ be a sign assignment and let $u$ be any map from the set of formal generators of power $m$ to $\{\pm 1\}$. Then we can define a new sign assignment $\SS'$ by setting, for any $\phi:\mathbf{x}\rightarrow\mathbf{y}$ in $\FF_m$, $\SS'(\phi)=u(\mathbf{x})\cdot\SS(\phi)\cdot u(\mathbf{y})$. If $\SS$ and $\SS'$ are related as above, we say that $\SS$ and $\SS'$ are gauge equivalent sign assignments. \end{definition}
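It is immediate from the definitions (though worth spelling out) that such an $\SS'$ is again a sign assignment. For instance, if $(\phi_1,\phi_2)$ with $\phi_1:\mathbf{x}\rightarrow\mathbf{y}$ and $\phi_2:\mathbf{y}\rightarrow\mathbf{x}$ is a boundary degeneration of Type $\alpha$, then since $u$ takes values in $\{\pm 1\}$,
$$\SS'(\phi_1)\cdot \SS'(\phi_2)= u(\mathbf{x})\SS(\phi_1)u(\mathbf{y})\cdot u(\mathbf{y})\SS(\phi_2)u(\mathbf{x})= \SS(\phi_1)\cdot\SS(\phi_2)=1 ,$$
and properties (S-2) and (S-3) follow by the same cancellation, since the gauge factors $u(\mathbf{y})^2=1$ always cancel in products over composable pairs.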
Using these definitions the precise statement of the result of \cite{Sign} that we need is as follows: \begin{theorem} For a given power $m$ there exists a sign assignment, and it is unique up to gauge equivalence. \end{theorem}
If a nice Heegaard diagram has $m$ $\alpha$-curves, we say that the Heegaard diagram is of \emph{power $m$}. Let $\mathcal{D}=(\Sigma, \bm{\alpha}, \bm{\beta}, \mathbf w, \mathbf z)$ be a nice Heegaard diagram of power $m$. Denote by $\mathbf{S}$ the set of generators. We fix an ordering for $\bm{\alpha}$ and $\bm{\beta}$, and an orientation for each $\alpha$- and $\beta$-circle. Then a generator $\mathbf{x}$ of the Heegaard diagram specifies a formal generator $\mathbf{x}_f$ of power $|\bm{\alpha}|$. Let $\mathbf{x},\mathbf{y} \in \mathbf{S}$. An empty bigon or empty rectangle $\phi$ from $\mathbf{x}$ to $\mathbf{y}$ determines a formal bigon or formal rectangle $F(\phi):\mathbf{x}_f \rightarrow \mathbf{y}_f$ of power $|\bm{\alpha}|$. Fix a sign assignment $\SS$ of the same power. For each $\mathbf{x}\in \mathbf{S}$, the boundary operator $\widetilde{\partial}^{\mathbb{Z}}(\mathbf{x})$ is defined as follows: $$\widetilde{\partial}^{\mathbb{Z}}(\mathbf{x})=\sum_{\mathbf{y}\in \mathbf{S}}\sum_{\phi \in \mathrm{Flows}(\mathbf{x},\mathbf{y})} \SS(F(\phi))\mathbf{y} ,$$ \noindent where we denote by $\mathrm{Flows}(\mathbf{x},\mathbf{y})\subset \pi(\mathbf{x},\mathbf{y})$ the set of empty bigons and empty rectangles from $\mathbf{x}$ to $\mathbf{y}$. The following theorem is proved in \cite{Sign}.
\begin{theorem} \label{signassign} \cite{Sign} The map $\widetilde{\partial} ^\mathbb{Z}$ over $\mathbb{Z}$ satisfies $(\widetilde{\partial}^\mathbb{Z})^2=0$. The resulting Floer homology $\widetilde{HF}(\mathcal{D};\mathbb{Z})$ is independent of the choice of $\SS$, the order of the $\alpha$- and $\beta$-curves, and the chosen orientation on each of the $\alpha$- and $\beta$-curves. \end{theorem}
\section{Invariance of Knot Floer Homology} \label{invar} In this section we want to show that the combinatorial knot Floer homology of $\widetilde K \subset \Sigma_2(K)$ is independent of $G$, the grid diagram for $K \subset S^3$. As a result of the work of Cromwell \cite{Cromwell}, any two grid diagrams of a knot $K \subset S^3$ can be connected by a sequence of three elementary moves, each resulting in a new Heegaard diagram for $\widetilde K \subset \Sigma_2(K)$, as follows:
\begin{enumerate}\label{moves} \item $\mathbf{Cyclic\;Permutation}$ This move corresponds to cyclically permuting the rows or the columns of $G$, the grid diagram for the knot $K$, and obtaining a new grid diagram $H$ for $K$. Consequently we obtain a new Heegaard diagram $\widetilde H$ from $\widetilde G$ for $\widetilde{K}$.
\item $\mathbf{Commutation}$ Consider two consecutive columns in a grid diagram $G$ of a knot $K$. The $X$ and $O$ decorations of one of the columns separate the vertical circle into two arcs. If both the $X$ and the $O$ decoration of the other column lie in one of these arcs, switching these two columns is a commutation move for $G$ and leads to another grid diagram $H$ for $K$. For the knot $\widetilde{K}$, we consider the Heegaard diagram $\widetilde{H}$ to be obtained from $\widetilde{G}$ by a commutation move. Commutation can alternatively be performed with the roles of rows and columns reversed.
\item $\mathbf{Stabilization/Destabilization}$ Let $G$ be an $n\times n$ grid diagram of the knot $K$. We want to add two consecutive breaks in $K$ and introduce a new $(n+1)\times (n+1)$ grid diagram $H$ for $K$. Label the decorations of $G$ by ${\lbrace O_i \rbrace }_{i=2}^{n+1}$ and ${\lbrace X_i \rbrace }_{i=2}^{n+1}$. Consider a row with decorations $O_i$ and $X_i$ in the grid diagram $G$. Split this row into two rows, and introduce a new column somewhere between $O_i$ and $X_i$. We copy $O_i$ onto one of the new rows and $X_i$ onto the other row. Then we add $O_1$ and $X_1$ in the two squares which are the intersections of these two rows with the new column, such that $O_1$ is in the same row as $X_i$. We let $\alpha_1$ denote the new horizontal circle in $H$ which separates $O_1$ from $X_1$. The number of $\beta$-circles in $H$ is one more than in $G$. We denote the $\beta$-circle just to the left of $O_1$ and $X_1$ by $\beta_1$, and the $\beta$-circle just to the right of $O_1$ and $X_1$ by $\beta_2$. We call $H$ a stabilization of the grid diagram $G$ of $K$. There is a similar stabilization move where the roles of the rows and the columns are interchanged. Using commutation moves, we only need to consider stabilization moves in which three of $O_1$, $X_1$, $O_i$ and $X_i$ share a common corner. For the knot $\widetilde{K}$, we consider the Heegaard diagram $\widetilde{H}$ to be obtained from $\widetilde{G}$ by stabilization. The Heegaard diagram $\widetilde H$ has two more $\beta$-circles and two more $\alpha$-circles than $\widetilde G$. Destabilization is the reverse of the stabilization move.
Note that the definition of the complex is invariant if we rotate everything $180^{\circ}$. This means that without loss of generality we may assume that $O_1$ is directly above $X_1$.
\end{enumerate}
\begin{remark} \label{horver2} Note that for each elementary move, we have to consider two separate cases. One for applying the move to rows, and the other for applying it to the columns. By Remark~\ref{horver} the chain complex is independent of the choice of the direction of the cuts, so by symmetry we can assume that the cuts are vertical and we always apply the elementary moves to the columns. \end{remark}
\subsection{Cyclic Permutation}
The case in which we cyclically permute the columns and the complex is considered with $\mathbb{Z}_2$ coefficients is tautological: in this case the complex only depends on the topology of the Heegaard diagram, and the cyclic permutation does not change the position of the branch cuts. In the case of coefficients in $\mathbb{Z}$, one should note that the cyclic permutation still does not change the topology of the Heegaard diagram, but it does permute the order of the $\alpha$- and $\beta$-curves. Since the homology is independent of this ordering (see Theorem~\ref{signassign}), the complex and the $\partial$ operator are invariant under cyclic permutation.
By Remark~\ref{horver2}, the case in which we apply the cyclic permutation to the rows follows from the case of the columns, and so we are done.
\subsection{Commutation} \label{comm} Let $\widetilde G$ be a Heegaard diagram for $\widetilde K$, and let $\widetilde H$ be the Heegaard diagram obtained by commutation.
We define a Heegaard diagram $E$ associated to the specific commutation, consisting of two $n\times n$ grids where the opposite sides of each grid are identified. The $X$ and $O$ decorations in the right grid are the same as the decorations for $G$; in the left grid we interchange the decorations of the two columns where we want to make the commutation move. See Fig.~\ref{pentagon}. Let $\widetilde \beta^i_j$ denote the vertical arcs in the Heegaard diagram for $i = 0, 1$ and $j= 0, \dots, n-1$. For the horizontal arcs, we denote by $\widetilde \alpha^i_j$ the arc which has intersection with $\widetilde \beta^i_0$.
Let the set of generators $\mathbf{S}(E)$ be the set of unordered $2n$-tuples $\mathbf{x}$ of intersection points between $\widetilde \alpha^i_j$ and $\widetilde \beta^i_j$ for $i= 0, 1$ and $j= 0, \dots, n-1 $, such that each of $\widetilde \alpha^i_j$ and $\widetilde \beta^i_j$ has exactly one component of $\mathbf{x}$.
Define the set of allowed regions $\RR(E)$ to be the set of rectangles that are topological disks and whose upper and lower edges are arcs in $\alpha$-circles and whose left and right edges are arcs of $\beta$-circles.
Let $C(E)$ be the free $\mathbb{Z}$-module generated by $\mathbf{S}(E)$. Given $\mathbf{x}, \mathbf{y} \in \mathbf{S}(E)$ that agree along all but two vertical circles, let $Rect_E(\mathbf{x},\mathbf{y})$ denote the set of rectangles $R \in \RR(E)$ whose lower-left and upper-right corners are in $\mathbf{x}$, whose lower-right and upper-left corners are in $\mathbf{y}$, and which contain no $X$, $O$, or component of $\mathbf{x}$ in their interiors. Define the differential $\partial_E$ on $C(E)$ as follows: $$\partial_E \mathbf{x}=\sum_{\mathbf{y}\in \mathbf{S}(E)}\sum_{R \in Rect_E(\mathbf{x},\mathbf{y})} \SS(F(R)) \mathbf{y} ,$$ where $F(R)$ denotes the formal rectangle associated with $R$.
It is convenient to draw the new Heegaard diagram $E$ on the same picture as the Heegaard diagram $\widetilde G$, replacing the distinguished vertical circle $\widetilde \beta$ in $\widetilde G$ with a different one, $\gamma$, in $E$. The circles $\widetilde \beta$ and $\gamma$ intersect each other in two points which are not on any horizontal circle. See Fig.~\ref{pentagon}.
Let us define the Alexander grading of a generator $\mathbf{x} \in \mathbf{S}(E)$ as follows:
We can decompose a generator $\mathbf{x}$ of $C(E)$ as $\widetilde\mathbf{x}_1\cup \widetilde\mathbf{x}_2$ (non-uniquely) with the condition that the component of $\mathbf{x}$ on the circle $\gamma$ belongs to $\widetilde\mathbf{x}_1$. This is possible since on each $\beta$-circle we have one component of $\mathbf{x}$ and on each $\alpha$-circle we have two components of $\mathbf{x}$, so we are in the situation of Lemma 3.1 of \cite{L}, and we can decompose $\mathbf{x}$ as the union of two generators, reordered so as to satisfy the above condition. Now consider $\widetilde\mathbf{x}_1$ as an element of $C(H)$ and $\widetilde\mathbf{x}_2$ as an element of $C(G)$, and define the Alexander grading of $\mathbf{x}$ as: $$A(\mathbf{x})=\frac{A_{C(H)}(\widetilde\mathbf{x}_1)+A_{C(G)}(\widetilde\mathbf{x}_2)}{2} .$$
Here $A_{C(H)}(\widetilde\mathbf{x}_1)$ means that we compute the Alexander grading with the markings of $H$, and similarly for $A_{C(G)}(\widetilde\mathbf{x}_2)$ we use the markings of $G$. It is not hard to see that this is well-defined and is independent of the chosen decomposition.
We define a chain map (similar to \cite{MOST}) $\Phi_{\widetilde \beta \gamma} : C(\widetilde G)\longrightarrow C(E)$ by counting pentagons with the sign that we associate to each pentagon. Let $\mathbf{x} \in \mathbf{S}(\widetilde G)$ and $\mathbf{y} \in \mathbf{S}(E)$. Define ${Pent}_{\widetilde \beta \gamma}(\x, \y)$ to be the space of pentagons, where each pentagon is an embedded disk in $\widetilde T$ whose boundary consists of five arcs, each contained in a horizontal or a vertical circle, arranged as follows. We start at the $\widetilde \beta$ component of $\mathbf{x}$ and traverse the boundary with the orientation that comes from the pentagon: go through the arc of a horizontal circle $\alpha_i$, meet its corresponding $\mathbf{y}$ component, continue through an arc of a vertical circle $\beta_k$, meet a component of $\mathbf{x}$, proceed to another horizontal circle $\alpha_j$, meet the component of $\mathbf{y}$ which is on the distinguished circle $\gamma$, continue along an arc in $\gamma$, and meet an intersection point of $\widetilde \beta$ with $\gamma$, which we call $a$. Finally, traverse an arc in $\widetilde \beta$ until we come back to the initial component of $\mathbf{x}$. All the angles here are required to be acute, and all the pentagons contain no $O$ or $X$ decorations and no components of $\mathbf{x}$. For later use, we represent a pentagon as above by the sequence $\alpha_i \rightarrow \beta_k \rightarrow \alpha_j \rightarrow \gamma \rightarrow \widetilde\beta$ to keep track of the boundary arcs. By using such pentagons, we connect $\mathbf{x}$ to $\mathbf{y}$, where the two generators differ from each other in exactly two components. See Fig.~\ref{pentagon}.
\begin{figure}\label{pentagon}
\end{figure}
Fix an order for $\bm{\widetilde\alpha}$ and $\bm{\widetilde\beta}$, and fix an orientation for each $\alpha$- and $\beta$-circle. Let $\SS$ be a sign assignment of power $|\bm{\widetilde\alpha}|$. Note that $\gamma$ has the same index in the ordering as $\widetilde\beta$, and we orient $\gamma$ in the same way as $\widetilde\beta$. In this way the generators and flows in each diagram can be represented naturally by formal generators and formal flows.
In order to define the chain map, we need to assign signs to the pentagons. There are two types of pentagons: either the pentagon is on the left side of the curve $\widetilde \beta$, in which case we call it a left pentagon, or it is on the right side of $\widetilde \beta$, in which case we call it a right pentagon.
We associate to a pentagon $p\in {Pent}_{\widetilde \beta \gamma}(\x, \y)$ a formal rectangle $r(p)$. Let $p$ be represented by the sequence $\alpha_i \rightarrow \beta_k \rightarrow \alpha_j \rightarrow \gamma \rightarrow \widetilde\beta$. If $p$ is a left (resp. right) pentagon then its associated formal rectangle is represented by the sequence $\alpha_i \rightarrow \beta_k \rightarrow \alpha_j \rightarrow \widetilde\beta $ (resp. $\alpha_j \rightarrow \gamma \rightarrow \alpha_i \rightarrow \beta_k $).
Define the sign for a pentagon as follows:
\begin{center} $\varepsilon(p)=$ $\left \{ \begin{array}{ll} \SS(r(p)) & \text{if $p$ is a left pentagon} \\ -\SS(r(p)) & \text{if $p$ is a right pentagon} \end{array} \right. $ \end{center}
Given $\mathbf{x} \in \mathbf{S}(\widetilde G)$, define
\begin{equation} \Phi_{\widetilde \beta \gamma}(\mathbf{x}) = \sum_{\mathbf{y} \in \mathbf{S}(E)} \sum_{p \in {Pent}_{\widetilde \beta \gamma}(\x, \y)} {\varepsilon(p)\mathbf{y}} \in C(E) \end{equation}
Similarly we define $\Phi_{\gamma \widetilde \beta} : C(E)\longrightarrow C(\widetilde G)$. Given $\mathbf{x} \in \mathbf{S}(E)$ and $\mathbf{y} \in \mathbf{S}(\widetilde G)$, we define the space of pentagons $Pent_{\gamma \widetilde \beta}(\mathbf{x},\mathbf{y})$, where each pentagon is an embedded disk in $\widetilde T$ whose boundary consists of five arcs, each contained in a horizontal or a vertical circle, arranged as follows. With the boundary orientation, we start at the $\gamma$ component of $\mathbf{x}$, traverse the arc of a horizontal circle $\alpha_i$, reach its corresponding $\mathbf{y}$ component, traverse an arc of a vertical circle $\beta_k$, meet a component of $\mathbf{x}$, continue through another horizontal circle $\alpha_j$, meet the $\widetilde \beta$ component of $\mathbf{y}$, continue along an arc in $\widetilde \beta$, and meet an intersection point of $\widetilde \beta$ with $\gamma$ (we call it $b$). At last, go through an arc in $\gamma$ to come back to the initial component of $\mathbf{x}$. All the angles here are required to be acute, and all the pentagons contain no components of $\mathbf{x}$ and no $O$ or $X$ decorations. We represent a pentagon as above by the sequence $\alpha_i \rightarrow \beta_k \rightarrow \alpha_j \rightarrow \widetilde\beta \rightarrow \gamma$. We denote by $r(p)$ the formal rectangle associated with $p$, which is represented by the sequence $\alpha_i \rightarrow \beta_k \rightarrow \alpha_j \rightarrow \widetilde\beta$ if $p$ is a left pentagon, and by the sequence $\alpha_j \rightarrow \widetilde\beta \rightarrow \alpha_i \rightarrow \beta_k$ if $p$ is a right pentagon. We define the sign for the pentagon $p$ exactly as in the previous case.
\begin{lemma} The map $\Phi_{\widetilde \beta \gamma} : C(\widetilde G)\longrightarrow C(E)$ preserves the Alexander grading, and is an anti-chain map. \end{lemma}
\begin{proof} The fact that $\Phi_{\widetilde \beta \gamma}$ preserves the Alexander grading is straightforward.
The argument to show that $\Phi_{\widetilde \beta \gamma}$ is an anti-chain map is similar to Lemmas 3.1 and 4.23 in \cite{MOST}. We consider different compositions of a pentagon and a rectangle, whether they are disjoint, have overlapping interiors, or have an edge in common. In most cases the composite region has two different decompositions, and the consistency of the signs follows from property S-3 in Definition~\ref{def sign}. However, there is one special case of composite regions that have a unique decomposition. Such regions can be paired so that each pair has the same initial and terminal points, one of the decompositions representing a term in $\partial \circ \Phi_{\widetilde \beta \gamma}$ and the other a term in $\Phi_{\widetilde \beta \gamma} \circ \partial$; see Fig.~\ref{pentagon2s}. Note that the formal rectangles associated with $r$ and $p$ form a $\beta$-degeneration. Hence from property S-2, we know that if $p$ is a left pentagon, then $\varepsilon(p) \cdot \SS(F(r))=-1$. The formal rectangles associated with $r'$ and $p'$ also form a $\beta$-degeneration; since there is a minus sign in the definition of the sign of right pentagons, $\varepsilon(p') \cdot \SS(F(r'))=1$. Hence the two composite regions have opposite signs and cancel each other out.
\begin{figure}\label{pentagon2s}
\end{figure}
\end{proof}
Following the ideas of \cite{MOST}, in order to define chain homotopy operators, we count hexagons. Given $\mathbf{x}, \mathbf{y} \in \mathbf{S}(\widetilde G)$, we let ${Hex}_{\widetilde \beta \gamma \widetilde \beta}(\x, \y)$ denote the set of embedded hexagons in $\widetilde T$. The boundary of a hexagon consists of six arcs, each contained in a horizontal or a vertical circle. More specifically, under the orientation induced on the boundary of $h$, we start at the $\widetilde \beta$-component of $\mathbf{x}$, traverse the arc of a horizontal circle $\alpha_i$, meet its corresponding component of $\mathbf{y}$, continue through an arc of a vertical circle $\beta_k$, meet its corresponding component of $\mathbf{x}$, proceed to another horizontal circle $\alpha_j$, meet its component of $\mathbf{y}$, which is contained in the distinguished circle $\widetilde \beta$, continue along $\widetilde \beta$ until the intersection point $b$ of $\widetilde \beta$ and $\gamma$, proceed on $\gamma$ to the intersection point $a$ of $\widetilde \beta$ with $\gamma$, and finally, continue on $\widetilde \beta$ to the $\widetilde \beta$-component of $\mathbf{x}$, which was also our initial point. All the angles of our hexagon are required to be acute, and all the hexagons contain no $O$ or $X$ decorations and no components of $\mathbf{x}$. We represent a hexagon as above by the sequence $\alpha_i \rightarrow \beta_k \rightarrow \alpha_j \rightarrow \widetilde\beta \rightarrow \gamma \rightarrow \widetilde\beta$. See Fig.~\ref{hexagon}.
\begin{figure}\label{hexagon}
\end{figure}
We now want to define signs for hexagons such that the maps defined by counting hexagons with respect to these signs are chain homotopy maps. Let $h \in Hex_{\widetilde \beta \gamma \widetilde \beta} (\mathbf{x} , \mathbf{y})$ be a hexagon that is represented by the sequence $\alpha_i \rightarrow \beta_k \rightarrow \alpha_j \rightarrow \widetilde\beta \rightarrow \gamma \rightarrow \widetilde\beta$. We call $h$ a left (resp. right) hexagon if it is on the left (resp. right) side of $\widetilde\beta$. We associate to a left (resp. right) hexagon the formal rectangle $r(h)$ that is represented by the sequence $\alpha_i \rightarrow \beta_k \rightarrow \alpha_j \rightarrow \widetilde\beta$ (resp. $\alpha_j \rightarrow \widetilde\beta \rightarrow \alpha_i \rightarrow \beta_k$). We define the sign of $h$ to be $\varepsilon(h)=\SS(r(h))$.
We define the homotopy operator $H_{\widetilde \beta \gamma \widetilde \beta}: C(\widetilde G)\longrightarrow C(\widetilde G)$ as follows: \begin{equation} H_{\widetilde \beta \gamma \widetilde \beta}(\mathbf{x}) = \sum_{\mathbf{y} \in \mathbf{S}(\widetilde G)} \sum_{h \in {Hex}_{\widetilde \beta \gamma \widetilde \beta}(\x, \y)} \varepsilon(h) \mathbf{y} . \end{equation} Similarly we define $H_{\gamma \widetilde \beta \gamma}: C(E)\longrightarrow C(E)$ over $\mathbb{Z}$.
\begin{proposition} The map ${\Phi}_{\widetilde \beta \gamma} : C(\widetilde G) \longrightarrow C(E)$ is a chain homotopy equivalence with respect to sign assignments. In other words
\begin{center} $\mathbb{I} + {\Phi}_{\gamma \widetilde \beta} \circ {\Phi}_{\widetilde \beta \gamma} + \partial \circ H_{\widetilde \beta \gamma \widetilde \beta} + H_{\widetilde \beta \gamma \widetilde \beta} \circ \partial = 0$
$\mathbb{I} + {\Phi}_{\widetilde \beta \gamma} \circ {\Phi}_{\gamma \widetilde \beta} + \partial \circ H_{\gamma \widetilde \beta \gamma} + H_{\gamma \widetilde \beta \gamma} \circ \partial = 0.$ \end{center}
\end{proposition}
\begin{proof}
The proof is similar to the proofs of \cite[Propositions 3.2 and 4.24]{MOST}. Note that when we have a composite region, we consider the formal rectangle associated with the rectangle, pentagon, or hexagon at hand. Hence the desired result follows from property S-3. \end{proof}
A similar argument applied to the Heegaard diagrams $\widetilde H$ and $E$ proves the desired result.
\subsection{Stabilization} \label{stab}
Now that we have proved commutation invariance, we want to show stabilization invariance. Let $\widetilde G=(\widetilde T, \bm{\widetilde \alpha}, \bm{\widetilde \beta}, \mathbb{O}, \mathbb{X})$ be a Heegaard diagram and denote a stabilization of $\widetilde G$ by $\widetilde H=(\widetilde U, \bm{\widetilde \alpha '}, \bm{\widetilde \beta '}, \mathbb{O} \cup O_1, \mathbb{X} \cup X_1)$, where $\bm{\widetilde \alpha '} = \bm{\widetilde \alpha} \cup \widetilde\alpha^0_1 \cup \widetilde\alpha^1_1$ and $\bm{\widetilde \beta '} = \bm{\widetilde \beta} \cup \widetilde\beta^0_1 \cup \widetilde\beta^1_1$.
Fix an ordering for the $\alpha$- and $\beta$-circles in $\widetilde H$ and orient them. Let $\SS$ be a sign assignment of power $2n+2$. In this way, we can associate to each generator and flow of the diagram $\widetilde H$ a formal generator and a formal flow of power $2n+2$. We need to fix a sign assignment for the Heegaard diagram $\widetilde G$. For each $\mathbf{x}\in S(\widetilde G)$, by adding to $\mathbf{x}$ the two components $w_0$ and $w_1$ (with the intersection signs that come from the orientations of the corresponding arcs in $\widetilde H$), we can also associate a formal generator $\mathbf{x}_f$ of power $2n+2$ to $\mathbf{x}$. Also to each flow $\phi:\mathbf{x}\rightarrow\mathbf{y}$ in $\widetilde G$ we can associate a formal flow $F(\phi):\mathbf{x}_f\rightarrow\mathbf{y}_f$ of power $2n+2$.
Here we consider all complexes with coefficients in $\mathbb{Z}$, and the $\partial$ operator is defined with respect to the sign assignment at the end of Section~\ref{assign}. Note that the sign assignment of power $2n$ that we obtain from a sign assignment of power $2n+2$ for the complex $C(\widetilde G)$ is gauge equivalent to any sign assignment of power $2n$, so it is well-defined to work with the above induced sign assignment.
Let $B = C(\widetilde G)$ and $C=C(\widetilde H)$ and $C'$ be the chain complex $B[1]\oplus B$ ($B[1]$ is the chain complex obtained from $B$ by shifting the Alexander grading by $1$), endowed with the differential
$\partial' : C'\longrightarrow C'$ given by \[ \partial'(a,b)=(\partial a,-\partial b), \] where $\partial$ denotes the differential within $B$. Note that $C'$ is the mapping cone of the zero map between $B$ and itself. Let $\L$ and $\RR\cong B$ be the subgroups of $C'$ of elements of the form $(c,0)$ and $(0,c)$ for $c\in B$, respectively. The module $\RR$ inherits the Alexander grading from its identification with $B$, and $\L$ is given the Alexander grading which is one less than the one it inherits from its identification with $B$. This shows that $H(C')=H(B)\otimes V$, where $V\cong \mathbb{F} \oplus \mathbb{F}$ with generators in gradings $0$ and $-1$, and $\mathbb{F}$ is the coefficient ring, in this case $\mathbb{Z}$.
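As a quick sanity check (immediate from $\partial^2=0$ on $B$), $\partial'$ is indeed a differential: for any $(a,b)\in C'$,
$$(\partial')^2(a,b)=\partial'\big(\partial a,\,-\partial b\big)=\big(\partial^2 a,\,\partial^2 b\big)=(0,0).$$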
Consider the stabilized grid diagram $H$ for the knot $K$. We denote by $X_2$ the element of $\mathbb{X}$ which is placed in the same row as $O_1$. As we mentioned in the definition of the stabilization move, we only need to consider stabilizations such that $X_2$ is placed in the square just to the left or just to the right of $O_1$ (in the grid diagram $H$).
We denote by $w_0$ (resp. $w_1$) the intersection of $\widetilde \beta_1^0$ (resp. $\widetilde \beta_1^1$) with the lifts of $\alpha_1$. Let $(I,I)\subset \mathbf{S}(\widetilde H)$ be the set of those generators which have both of their $\widetilde\alpha^i_1$ components for $i=0,1$ on the lifts of $\beta_1$ (i.e. the generators which have $w_0$ and $w_1$ as components). There is a natural one-to-one correspondence between elements of $\mathbf{S}(\widetilde G)$ and elements of $(I,I)$ with the following property: for $\mathbf{x} \in \mathbf{S}(\widetilde G)$, let $\psi(\mathbf{x}) \in \mathbf{S}(\widetilde H)$ be the associated generator in $(I,I)$. We have \begin{equation} A_{C(\widetilde G)}(\mathbf{x})=A_{C(\widetilde H)}(\psi (\mathbf{x}))+1=A_{C'}(0,\mathbf{x})=A_{C'}(\mathbf{x},0)+1 . \end{equation}
In order to define a map from $C(\widetilde H)$ to either $\L$ or $\RR$ we need to define some combinatorial objects.
Let $\gamma$ be an arc that connects $O_1$ to $X_1$ in the Heegaard diagram $H$ for $K$. Denote by $\widetilde \gamma$ the preimage of $\gamma$ in the Heegaard diagram $\widetilde H$ for $\widetilde K$. Note that $\widetilde \gamma$ is a circle.
\begin{definition} Let $\mathbf{x},\mathbf{y}\in \mathbf{S}(\widetilde H)$. A \emph{pseudo-domain} from $\mathbf{x}$ to $\mathbf{y}$ is a two-chain in the Heegaard diagram $\widetilde H$ whose boundary consists of a path from $\mathbf{x}$ to $\mathbf{y}$ and possibly a number of copies of $\gamma$. We denote by $\sigma(\mathbf{x},\mathbf{y})$ the set of pseudo-domains from $\mathbf{x}$ to $\mathbf{y}$. Note that $\pi(\mathbf{x},\mathbf{y}) \subset \sigma(\mathbf{x},\mathbf{y})$. \end{definition}
\begin{remark} In this definition we allow the two-chains to have a number of copies of $\gamma$ in their boundary, but for most parts of the paper we only work with regions with at most one such copy. The reason that we allow multiples of $\gamma$ is to be able to extend the $*$ operator to $\sigma(\mathbf{x},\mathbf{y})$. \end{remark}
We introduced the $*$ operation earlier. There is a natural extension of $*$ to pseudo-domains. Given $\mathbf{a} , \mathbf{b} ,\mathbf{c} \in S(\widetilde H)$, if $p \in \sigma(\mathbf{a}, \mathbf{b})$ and $p' \in \sigma (\mathbf{b} , \mathbf{c})$, we can add them as two-chains, and by the definition of pseudo-domains the sum is an element of $\sigma(\mathbf{a},\mathbf{c})$. In this way we get $$*:\sigma(\mathbf{a}, \mathbf{b})\times \sigma (\mathbf{b} , \mathbf{c})\longrightarrow \sigma(\mathbf{a},\mathbf{c}) .$$
\begin{definition}\label{prect} Let $\mathbf{x}, \mathbf{y} \in S(\widetilde H)$ be two generators that differ exactly along $\widetilde\beta_1^0$ and $\widetilde\beta_2^0$. A \emph{punctured rectangle} $\mathfrak{a}$ that connects $\mathbf{x}$ to $\mathbf{y}$ is a topologically embedded punctured disk in $\widetilde U$ (the Heegaard surface of $\widetilde H$). The boundary of $\mathfrak{a}$ consists of the circle $\gamma$ and a path from $\mathbf{x}$ to $\mathbf{y}$ consisting of four arcs such that $\partial \mathfrak{a} \cap \bm{\widetilde \beta '} \subset \widetilde\beta_1^0 \cup \widetilde\beta_2^0$ or $\partial \mathfrak{a} \cap \bm{\widetilde \beta '} \subset \widetilde\beta_1^1 \cup \widetilde\beta_2^1$. We denote by $A(\mathbf{x} , \mathbf{y})$ the set of punctured rectangles from $\mathbf{x}$ to $\mathbf{y}$. \end{definition}
Although $\mathfrak{a} \in A(\mathbf{x} , \mathbf{y})$ is not a domain, for our combinatorial purposes we need to connect the generators as above. Note that $A(\mathbf{x} , \mathbf{y})$ has at most one element.
Given $\mathbf{x} , \mathbf{y} \in S(\widetilde H)$ that differ exactly along $\widetilde\beta_1^0$ and $\widetilde\beta_2^0$, let $\mathfrak{a} \in A(\mathbf{x}, \mathbf{y})$ be a punctured rectangle. There is a unique empty rectangle in $Rect(\mathbf{y}, \mathbf{x})$ that we denote by $r_{\mathfrak{a}}$ and we call it the complementary rectangle of $\mathfrak{a}$. Note that the support of the union of ${\mathfrak{a}}$ and $r_{\mathfrak{a}}$ is topologically a sphere with three punctures. Define $\mu(\mathfrak{a}) := -\SS(F(r_{\mathfrak{a}}))$, where $F(r_{\mathfrak{a}})$ is the formal rectangle corresponding to $r_{\mathfrak{a}}$.
There is another way to define the sign of ${\mathfrak{a}}$. We associate to ${\mathfrak{a}}$, the formal rectangle $F({\mathfrak{a}}): \mathbf{x}_f \rightarrow \mathbf{y}_f$ in $\FF_{2n+2}$ that has the same boundary arcs as ${\mathfrak{a}}$, where $\mathbf{x}_f$ is the formal generator associated with $\mathbf{x}$. Define $\mu(\mathfrak{a}) := \SS(F({\mathfrak{a}}))$. This definition gives the same sign for ${\mathfrak{a}}$ as the previous definition, because of the property S-2 of the definition of sign assignments.
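The agreement of the two definitions can be spelled out. The following is a short verification, under the assumption (as in the definition of sign assignments in~\cite{Sign}) that the property S-2 assigns the sign $-1$ to the relevant annular decomposition: the formal rectangles $F(\mathfrak{a}): \mathbf{x}_f \rightarrow \mathbf{y}_f$ and $F(r_{\mathfrak{a}}): \mathbf{y}_f \rightarrow \mathbf{x}_f$ compose to such an annulus, so
\[
\SS(F(\mathfrak{a}))\cdot \SS(F(r_{\mathfrak{a}}))=-1,
\]
and since the signs take values in $\{\pm 1\}$, the two definitions agree:
\[
\SS(F(\mathfrak{a}))=-\SS(F(r_{\mathfrak{a}}))=\mu(\mathfrak{a}).
\]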
\begin{definition} \label{lshape} For $\mathbf{x} \in S(\widetilde H)$, $w_i \in \mathbf{y} \in S(\widetilde H)$, we define an \emph{L-shape} associated with $w_i$ ($i= 0, 1$) that connects $\mathbf{x}$ to $\mathbf{y}$ to be an embedded punctured disk whose interior is empty of the components of $\mathbf{x}$ and of all basepoints except $O_1$ and $X_1$. Its boundary consists of the circle $\gamma$ and a path from $\mathbf{x}$ to $\mathbf{y}$ made up of six arcs, arranged as follows. We start from $w_i$, which is a component of $\mathbf{y}$, and traverse the boundary with the orientation that comes from the L-shape (counterclockwise). Going through $\widetilde \beta _1^i$, we reach a component of $\mathbf{x}$. Then we traverse an $\alpha$-curve to reach the component of $\mathbf{y}$ on $\widetilde\beta_2^i$; going through $\widetilde\beta_2^i$, we meet a component of $\mathbf{x}$. Then we traverse an $\alpha$-curve, namely $\alpha_j$, to meet a component of $\mathbf{y}$; we go through a $\beta$-curve, namely $\beta_k$, and reach a component of $\mathbf{x}$. Finally we go along a lift of $\alpha_1$ to return to $w_i$. See Fig.~\ref{L}. Alternatively, we can define an L-shape $l$ to be the composite region made up of a punctured rectangle $\mathfrak{a} \in A(\mathbf{x} , \mathbf{z})$ and $r\in Rect(\mathbf{z}, \mathbf{y})$, where $\mathbf{z} \in S(\widetilde H)$ differs from $\mathbf{x}$ exactly along $\widetilde \beta_1^i$ and $\widetilde \beta_2^i$, and the upper (resp. lower) edge of $r$ is an arc in $\alpha_j$ (resp. $\alpha_1$), and the right (resp. left) edge of $r$ is an arc in $\widetilde \beta_1^i$ (resp. $\beta_k$), i.e. $l = \mathfrak{a} *r$.
We define the sign of an L-shape $l = \mathfrak{a} *r$ (associated with $w_0$ or $w_1$) that connects $\mathbf{x}$ to $\mathbf{y}$ as follows: $$\mu(l) := \mu(\mathfrak{a}) \cdot \SS(F(r)),$$ \noindent where $F(r)$ is the formal rectangle associated with the empty rectangle $r$. \end{definition}
\begin{definition} \label{Ldomain} Let $\mathbf{x} \in S(\widetilde H)$ and $\mathbf{y} \in (I,I) \subset S(\widetilde H)$. A pseudo-domain $p\in \sigma (\mathbf{x},\mathbf{y})$ is called \emph{Type L} if one of the following is true:
\textbf{L(1)} $\mathbf{x}=\mathbf{y}$, and $p$ is the trivial domain. In this case we define the sign of $p$ as $\mu(p):= 1$.
\textbf{L(2)} $\mathbf{x}$ contains either $w_0$ or $w_1$ but not both, and $p$ is an L-shape. See Fig.~\ref{L}. In this case, the sign of $p$ is $\mu(p)$ as defined above.
\textbf{L(3)} $\mathbf{x}$ contains neither $w_0$ nor $w_1$, and $p$ is the sum of two L-shapes $l_0$ and $l_1$. Note that in this case the branch cuts inside the L-shape regions are glued together to form a domain whose boundary consists of twelve arcs. In this case, we define $\mu(p):= \mu(l_0)\cdot \mu(l_1)$.
\begin{figure}\label{L}
\end{figure}
\end{definition}
\begin{definition} \label{oct} For $\mathbf{x} \in S(\widetilde H)$ and $\mathbf{y} \in (I,I) \subset S(\widetilde H)$, an \emph{octagon} $\theta \in \pi(\mathbf{x} , \mathbf{y})$ is topologically an embedded disk in $\widetilde U$ (the Heegaard surface of $\widetilde H$) whose boundary is a path from $\mathbf{x}$ to $\mathbf{y}$ that consists of eight arcs, arranged as follows. Starting from the component of $\mathbf{x}$ on $\widetilde\beta_1^0$, we traverse the boundary with the orientation that comes from the octagon. We go along an $\alpha$-curve to meet a component of $\mathbf{y}$. We traverse a $\beta$-curve to reach a component of $\mathbf{x}$; then, going through one of the lifts of $\alpha_1$ and passing the branch cut connecting $O_1$ and $X_1$, we meet a component of $\mathbf{y}$ that lies on $\widetilde\beta_1^1$ and is the same as $w_1$. We go through $\widetilde\beta_1^1$ and get to a component of $\mathbf{x}$; after traversing an $\alpha$-curve we reach a component of $\mathbf{y}$, then we go along a $\beta$-curve to meet a component of $\mathbf{x}$. Going through the other lift of $\alpha_1$, we pass the branch cut connecting $O_1$ and $X_1$ and reach a component of $\mathbf{y}$ which lies on $\widetilde\beta_1^0$ and is equal to $w_0$. Finally we go along $\widetilde\beta_1^0$ and come back to the component of $\mathbf{x}$ that we started from. The interior of $\theta$ is empty of the components of $\mathbf{x}$ and of all basepoints other than $X_1$. See Figs.~\ref{R} and \ref{octagon}.
\begin{figure}\label{R}
\end{figure}
\end{definition}
Given $\mathbf{x} \in S(\widetilde H)$ and $\mathbf{y} \in (I,I) \subset S(\widetilde H)$, let $\theta \in \pi(\mathbf{x} , \mathbf{y})$ be an octagon. Consider an arc $\gamma_i$ in the $i^{th}$ grid ($i=0,1$). The arc starts from a point on the lift of $\alpha_1$ just to the right of the branch cut connecting $O_1$ and $X_1$. The arc $\gamma_i$ intersects $\widetilde\beta_1^i$ and goes to the square (or octagon) just to the left of $X_1$ in the $i^{th}$ grid. We isotope each lift of $\alpha_1$ by doing a finger move along the appropriate arc, either $\gamma_0$ or $\gamma_1$. See Fig.~\ref{octagon}.
Without loss of generality, we assume that $\gamma_0$ (resp. $\gamma_1$) starts from a point on $\widetilde\alpha_1^0$ (resp. $\widetilde\alpha_1^1$). Let $R_0 \in Rect(\mathbf{x} , \mathbf{z})$ be the new rectangle that is created after the isotopy of $\widetilde\alpha_1^0$ along $\gamma_0$, and let $R_1 \in Rect(\mathbf{z} , \mathbf{w})$ be the other rectangle which is created after the isotopy along $\gamma_1$, where $\mathbf{z}$ and $\mathbf{w}$ are generators in $S(\widetilde H)$. See Fig.~\ref{octagon}. With the orientations that we fixed for $\bm \alpha$ and $\bm \beta$ in the beginning of this section, we can associate to each generator $\mathbf{x} \in \mathbf{S}(\widetilde H)$ and each empty rectangle $r$, a formal generator $\mathbf{x}_f$ and a formal rectangle $F(r)$.
Denote by $R_{\theta}$ the formal rectangle from $\mathbf{w}_f$ to $\mathbf{y}_f$, such that the sequence $\widetilde\alpha_1^1 \rightarrow \widetilde\beta_1^0 \rightarrow \widetilde\alpha_1^0 \rightarrow \widetilde\beta_1^1$ represents its edges (with the notation of Definition~\ref{formalrec}), and we orient its edges so that the intersection numbers of the lower-left and upper-right corners coincide with the intersection numbers of the components of $\mathbf{w}_f$, and the intersection numbers of the lower-right and upper-left corners coincide with the intersection numbers of the components of $\mathbf{y}_f$. See Fig.~\ref{octagon}.
We define the sign of the octagon $\theta$ as follows: \[ \mu(\theta):=\SS(F(R_0))\cdot \SS(F(R_1))\cdot \SS(R_{\theta}). \]
\begin{figure}\label{octagon}
\end{figure}
\begin{definition} \label{Rdomain} Let $\mathbf{x} \in S(\widetilde H)$ and $\mathbf{y} \in (I,I) \subset S(\widetilde H)$. A pseudo-domain $p\in \sigma(\mathbf{x},\mathbf{y})$ is called \emph{Type R} if one of the following is true:
\textbf{R(1)} There exists an octagon $\theta \in \pi(\mathbf{x}, \mathbf{y})$, and $p=\theta$. In this case, the sign $\mu(p)$ is defined as above.
\textbf{R(2)} There are intermediate generators $\mathbf{u} , \mathbf{v} \in S(\widetilde H)$ such that there exist $\mathfrak{a}\in A(\mathbf{x} , \mathbf{ u})$, $r\in Rect(\mathbf{ u} , \mathbf{v})$ and an octagon $\theta \in \pi (\mathbf{ v} , \mathbf{y})$ with the following properties for $i=0$ or $i=1$: \begin{itemize} \item $\mathbf{u}$ and $\mathbf{x}$ differ exactly along $\widetilde \beta_1^i$ and $\widetilde \beta_2^i$. \item the right edge of $r$ is contained in $\widetilde \beta_1^i$. \item the upper (resp. lower) edge of $r$ is contained in the same $\alpha$-circle as the upper edge of $\mathfrak{a}$ (resp. as one of the edges of $\theta$). \item the lower-right corner of $r$, which is the component of $\mathbf{v}$ along $\widetilde \beta_1^i$, should be contained in the segment of the left edge of $\mathfrak{a}$ between $w_i$ and the lower-left corner of $\mathfrak{a}$. \end{itemize} In this case $p=\mathfrak{a} *r*\theta$ and we define its sign to be $\mu(p) : = \mu(\mathfrak{a}) \cdot \SS(F(r))\cdot \mu(\theta)$. See Fig.~\ref{R(2)}.
\textbf{R(3)}\hspace{5pt} There are intermediate generators $\mathbf{ u} , \mathbf{ u'} ,\mathbf{v} , \mathbf{ v'} \in S(\widetilde H)$ such that there exist $\mathfrak{a} \in A(\mathbf{x} , \mathbf{ u})$, $r\in Rect(\mathbf{u} , \mathbf{v})$, $\mathfrak{a} '\in A(\mathbf{v} , \mathbf{u'})$, $r'\in Rect(\mathbf{u'} , \mathbf{v'})$ and an octagon $\theta \in \pi(\mathbf{v'} , \mathbf{y})$ with the following properties: \begin{itemize} \item $\mathbf{x}$ and $\mathbf{u}$ differ exactly along $\widetilde \beta_1^0$ and $\widetilde \beta_2^0$. \item $\mathbf{v}$ and $\mathbf{u'}$ differ exactly along $\widetilde \beta_1^1$ and $\widetilde \beta_2^1$. \item the right edge of $r$ (resp. $r'$) is contained in $\widetilde \beta_1^0$ (resp. $\widetilde \beta_1^1$). \item the upper (resp. lower) edge of $r$ is contained in the same $\alpha$-circle as the upper edge of $\mathfrak{a}$ (resp. one of the edges of $\theta$). \item the upper (resp. lower) edge of $r'$ is contained in the same $\alpha$-circle as the upper edge of $\mathfrak{a} '$ (resp. one of the edges of $\theta$). \item the lower-right corner of $r$, which is the component of $\mathbf{v}$ along $\widetilde \beta_1^0$, should be contained in the segment of the left edge of $\mathfrak{a}$ between $w_0$ and the lower-left corner of $\mathfrak{a}$. \item the lower-right corner of $r'$, which is the component of $\mathbf{v'}$ along $\widetilde \beta_1^1$, should be contained in the segment of the left edge of $\mathfrak{a} '$ between $w_1$ and the lower-left corner of $\mathfrak{a} '$. \end{itemize} In this case $p=\mathfrak{a} *r* \mathfrak{a} '*r'*\theta$ and we define its sign to be $\mu(p) : = \mu(\mathfrak{a}) \cdot \SS(F(r)) \cdot \mu(\mathfrak{a} ') \cdot \SS(F(r')) \cdot \mu(\theta)$.
\begin{figure}\label{R(2)}
\end{figure}
\end{definition}
We now define the maps \[F^L:C(\widetilde H)\rightarrow\mathcal{L}\] \[F^R:C(\widetilde H)\rightarrow\mathcal{R},\] where $F^L$ (resp. $F^R$) counts pseudo-domains of Type L (resp. Type R); more precisely, define \begin{eqnarray*} F^L(\mathbf{x})= \sum_{\mathbf{y} \in \mathbf{S}(\widetilde H)}\sum_{p\in \sigma^{L}(\mathbf{x},\mathbf{y})} \mu(p) \mathbf{y} \\ F^R(\mathbf{x})= \sum_{\mathbf{y} \in \mathbf{S}(\widetilde H)}\sum_{p\in \sigma^{R}(\mathbf{x},\mathbf{y})} \mu(p) \mathbf{y} , \end{eqnarray*} \noindent where $\sigma^{L}(\mathbf{x},\mathbf{y})$ (resp. $\sigma^{R}(\mathbf{x},\mathbf{y})$) denotes the set of pseudo-domains of Type L (resp. Type R). We set $\sigma^{F}(\mathbf{x},\mathbf{y})=\sigma^{L}(\mathbf{x},\mathbf{y})\cup \sigma^{R}(\mathbf{x},\mathbf{y})$ and define
\[F=\begin{pmatrix} F^L\\ F^R \end{pmatrix}:C(\widetilde H)\rightarrow C'.\]
Now we want to show that $F$ is a chain map. Note that in the boundary map that appears in $\partial\circ F$ we count empty rectangles in $\widetilde G$. Recall that there is a one-to-one correspondence between $\mathbf{S}(\widetilde G)$ and the set $(I,I)\subset \mathbf{S}(\widetilde H)$ that is given by $\psi$.
\begin{definition} Given $\mathbf{x} , \mathbf{y} \in \mathbf{S}(\widetilde G)$ and an empty rectangle $r\in Rect(\mathbf{x},\mathbf{y})$, there is a pseudo-domain $r'$ between the two associated generators $\psi(\mathbf{x}),\psi(\mathbf{y})\in(I,I)$ (not necessarily empty). By abuse of notation we denote $r'$ by $\psi(r)$. If $\psi(r)$ is a topological rectangle in $\widetilde H$, we call it a Type 1 pseudo-rectangle. On the other hand, there are empty rectangles $r$ in $\widetilde G$ for which $\psi(r)$ is an annulus that contains either $w_0$ or $w_1$ in its interior. Its boundary (aside from the circle $\gamma$) is a path from $\psi(\mathbf{x})$ to $\psi(\mathbf{y})$ made of four arcs contained in $\alpha$- and $\beta$-circles other than the circles $\widetilde \alpha_1^i$ and $\widetilde \beta_1^i$ for $i=0, 1$. We call these pseudo-domains Type 2 pseudo-rectangles (see Fig.~\ref{Type2rect}). \end{definition}
The boundary map of $C(\widetilde G)$ can be computed by counting pseudo-rectangles in $\widetilde H$. Note that a Type 2 pseudo-rectangle is a pseudo-domain in $\widetilde H$ of the form $r_1 * \mathfrak{a} * r_2$, where $r_1 \in Rect (\psi(\mathbf{x}) , \mathbf{u})$, $\mathfrak{a} \in A(\mathbf{u} , \mathbf{v})$ and $r_2 \in Rect (\mathbf{v} , \psi(\mathbf{y}))$. Here $\mathbf{u} , \mathbf{v} \in \mathbf{S}(\widetilde H)$ are intermediate generators. Denote by $F(r)$ the formal rectangle that has the same boundary arcs as $r$ (which are the same as the boundary arcs of $\psi(r)$). We define the sign of $\psi(r)$ to be the sign of $F(r)$, i.e. $\SS(\psi(r)):=\SS(F(r))$.
\begin{remark} \label{rem} Let $\mathbf{x}_f$ and $\mathbf{y}_f$ be formal generators of power $m$ and $\phi:\mathbf{x}_f \rightarrow \mathbf{y}_f$ be a formal rectangle that is embedded in a grid diagram with the support $ABCD$, where $A$, $B$, $C$ and $D$ meet in a component of the initial generator in the interior. See Fig.~\ref{pointedrect}. The proof of Proposition~4.4 in~\cite{Sign} shows that the sign of formal rectangles satisfies the following identities: \begin{eqnarray*} \SS(ABCD) &=&\SS(A)\cdot \SS(BC)\cdot \SS(D)\\ &=&\SS(C)\cdot \SS(AD)\cdot \SS(B), \end{eqnarray*} \noindent where the letters denote the supports of the formal rectangles with the appropriate initial generators. For example, in the first equality $A$ denotes the formal rectangle $r_A :\mathbf{x}_f \rightarrow \mathbf{z}_f$, $BC$ is the support of the embedded rectangle associated with $r_{BC}: \mathbf{z}_f \rightarrow \mathbf{w}_f$, and $D$ denotes $r_{D} :\mathbf{w}_f \rightarrow \mathbf{y}_f$, where $\mathbf{z}_f$ and $\mathbf{w}_f$ are formal generators of power $m$. In this case the formal rectangle $\phi$ can alternatively be described as the composition of the three formal rectangles $r_A$, $r_{BC}$ and $r_{D}$.
\begin{figure}\label{pointedrect}
\end{figure}
\end{remark}
\begin{lemma} Let $r$ be a Type~2 pseudo-rectangle that decomposes as $r_1 * \mathfrak{a} * r_2$. Then $$\SS(r)=\SS(r_1)\cdot \mu(\mathfrak{a})\cdot \SS(r_2).$$ \end{lemma}
\begin{proof} The proof follows from~\cite[Proposition~4.4]{Sign} as explained in Remark~\ref{rem}. \end{proof}
\begin{figure}\label{Type2rect}
\end{figure}
We consider only those stabilizations for which the basepoint $X_2$ is placed just to the left or just to the right of $O_1$ in the grid diagram $H$. If $X_2$ is placed just to the left of $O_1$, by definition we cannot have any L-shapes, Type~2 pseudo-rectangles, or pseudo-domains of the form R(2) and R(3). However, if $X_2$ is placed just to the right of $O_1$, then since the pseudo-domains cannot contain $X_2$, there are some restrictions on them.
Given $\mathbf{x} ,\mathbf{y} \in (I,I)\subset S(\widetilde H)$, let $r\in Rect(\mathbf{x}, \mathbf{y})$ be an empty rectangle in $\widetilde G$. Note that the circles $\widetilde\alpha_1^i$ and $\widetilde\beta_1^i$ (for $i=0,1$) are not among the $\alpha$- and $\beta$-circles of the Heegaard diagram $\widetilde G$ (these circles are added in the stabilization move). Hence the arcs in the boundary of $r$ cannot be contained in $\widetilde\alpha_1^i$ or $\widetilde\beta_1^i$ for $i=0,1$. We first show that $F$ is a chain map over $\mathbb{Z}_2$, and later we prove it with $\mathbb{Z}$ coefficients.
\begin{lemma} \label{FZ2} The map $F:C(\widetilde H)\rightarrow C'$ preserves the Alexander grading, and is a chain map over $\mathbb{Z}_2$. \end{lemma}
The idea of the proof is similar to that of~\cite[Lemma 3.5]{MOST}, but we give a complete proof, discussing all the possible configurations and including all the necessary changes to their argument.
\begin{proof}
\begin{figure}\label{I1}
\end{figure}
\begin{figure}\label{newII0}
\end{figure}
The fact that the Alexander grading is preserved comes from the definition of the Alexander grading and the way we have defined $\mathcal{L}$ and $\mathcal{R}$ with the appropriate gradings.
We group together different possibilities for the terms in $\partial \circ F$ and $F\circ\partial$. Using basic planar geometry, we arrange the cases of the composition of an empty rectangle $r$ and a pseudo-domain $p\in \sigma^F$ according to the type of the empty pseudo-rectangle $r$ and the number of common corners of $r$ and $p$. Note that the image of the support of the composite of $r$ and $p$ in the grid diagram $H$ cannot wrap horizontally around the Heegaard surface associated with $H$ (i.e. the torus), because this composition cannot contain basepoints other than possibly $O_1$ and $X_1$.
If $r$ is of Type 1, we have the following cases:
I(0) A composition of a pseudo-rectangle $r$ of Type 1 and a pseudo-domain $p\in \sigma^F$, where they do not have any corners in common. This composition can be counted in either $\partial \circ F$ or $F\circ\partial$.
I(1) A composition of $r$ (a pseudo-rectangle of Type 1) and $p\in \sigma^F$ in either order, where they have one corner in common and $r$ does not contain $w_0$ (or $w_1$) in its boundary. This composition has a unique concave corner; cutting through this concave corner in either of the two possible ways results in a decomposition. See Fig.~\ref{I1}.
I($1'$) A composition of a pseudo-rectangle $r$ of Type 1 and $p\in \sigma^F$, where they share one corner and $w_0$ (or $w_1$) is in the boundary of $r$. In this case, at least one of the edges of $r$ is contained in the circles that are not in $\widetilde G$; hence the composite region is of the form $r*p$ and is counted in $F\circ \partial$. Since $r$ is empty of basepoints, there are two possibilities: either $w_0$ (or $w_1$) lies in the right edge of $r$, or the lower-right corner of $r$ is $w_0$ (or $w_1$).
I(2) A composition in either order of a pseudo-rectangle $r$ of Type 1 and $p\in \sigma^F$, where they share two corners other than possibly $w_0$ (or $w_1$). See Fig.~\ref{newII0}.
I(3) A composition of a pseudo-rectangle $r$ of Type 1 and $p\in \sigma^F$, where they share three corners other than possibly $w_0$ (or $w_1$). In this case we can see that the composite region contains one of the two columns that contain $O_1$ and $X_1$. See Figs.~\ref{newI3L} and \ref{newI3R}.
\begin{figure}\label{newI3L}
\end{figure}
\begin{figure}\label{newI3R}
\end{figure}
If $r$ is of Type 2, it can only appear in the terms of $\partial\circ F$, and we have the following cases:
II(0) The pseudo-rectangle $r$ of Type 2 has no corner in common with $p\in \sigma^F$, and the composite region is of the form $p*r$. See Fig.~\ref{newII0}.
II(1) A composite region of the form $p*r$, where $r$ is a pseudo-rectangle of Type 2 that shares one corner with $p\in \sigma^F$. We can see that in this case the composite region contains one of the two columns that contain $O_1$ and $X_1$. See Fig.~\ref{newII1}.
\begin{figure}\label{newII1}
\end{figure}
Contributions from case I(0) cancel each other out. As we mentioned, we can cut a composite region of the form I(1) in two different ways, so that the contributions from I(1) cancel each other out as well. We have illustrated some examples in Fig.~\ref{I1}.
By a simple exercise in planar geometry, we see that all the possible configurations from the contributions of I(2) and II(0) (up to cyclic permutations) can be illustrated as in Fig.~\ref{newII0}, and they cancel each other out. Note that we draw the pictures in the simplest way; in general there can be some branch cuts, and the regions would pass through the branch cuts into the other grid. Also, in each picture the part of the pseudo-domain that has no corners in common with the empty rectangle can be replaced with the other option. For example, in the left picture in Fig.~\ref{newII0}, $p$ is of the form L(2) and $p'$ is of the form L(3). Alternatively, $p$ can be of the form L(1), i.e. the generator $\mathbf{x}$ can be such that $w_1\in \mathbf{x}$ and we do not need to use the L-shape ${L_1}{L_2}$. In this case the term $p*r$ from II(0) is again cancelled out against a term $r'*p'$ from I(2), but this time $p$ is trivial and $p'$ is of the form L(2), because we eliminated the L-shape ${L_1}{L_2}$ from $p$ and $p'$ in this alternative case.
Contributions from I(1$'$) will cancel out against contributions from I(3) and II(1); this can be seen by adding the appropriate column (in either the first or the second grid) containing $O_1$ and $X_1$ to the composite region of I(1$'$). The configuration that we group with the contribution from I(1$'$) depends on the position of the component of the generator $\mathbf{x}$ on the lift of $\beta_2$ in the added column. All the possible configurations are shown in Figs.~\ref{newI3L}, \ref{newI3R} and \ref{newII1}. Here again, there is an alternative configuration for each picture that we do not draw. Also, the composite regions may pass through some branch cuts, but we illustrate the configurations in the simplest way.
\end{proof}
Now we turn to prove that $F$ is a chain map with coefficients in $\mathbb{Z}$.
\begin{lemma} \label{FZ} The map $F:C(\widetilde H)\rightarrow C'$ preserves the Alexander grading, and is a chain map over $\mathbb{Z}$. \end{lemma}
\begin{proof} This lemma is a generalization of Lemma~\ref{FZ2}, and we only need to show that the configurations that are grouped together in the proof of Lemma~\ref{FZ2} have the right signs. By abuse of notation (for convenience), we denote the formal rectangle $F(r)$ associated with an empty rectangle $r$ simply by $r$.
Depending on the order of the compositions in different cases, we need to prove the following equalities: \begin{itemize} \item If $p,p'\in\sigma^L$ and $p*r=r'*p'$, then $\mu(p)\cdot \SS(r)=\SS(r')\cdot \mu(p')$. \item If $p,p'\in\sigma^R$ and $p*r=r'*p'$, then $\mu(p)\cdot \SS(r)=-\SS(r')\cdot \mu(p')$. \item If $p,p'\in\sigma^F$ and $p*r=p'*r'$, then $\mu(p)\cdot \SS(r)=-\mu(p')\cdot \SS(r')$ (similarly for $r*p=r'*p'$). \end{itemize} \noindent Here $p,p'\in \sigma^F$, and $r$ and $r'$ are empty rectangles.
Note that throughout this proof, for simplicity, we label rectangles and pseudo-domains with letters that indicate their underlying regions. This notation does not specify the initial generators; in other words, two rectangles with the same support but different initial generators are represented by the same letters. In such cases, the initial generator of the rectangles should be understood from the context.
Note that $\SS$ depends on the initial formal generator of a rectangle or pseudo-domain. Let $A$ and $B$ be two formal rectangles. In our notation, when we write $\SS(A)\cdot \SS(B)$, this means that the initial formal generator of $B$ is the same as the terminal formal generator of $A$; hence the terms do not freely commute.
\textbf{Contributions from I(0):}
For terms in I(0), the cancelling pairs are of the form $p*r=r'*p'$, where $r$ and $r'$ are of Type~1 and have the same support, and $p,p'\in \sigma^F$ have the same support but differ in their initial generators. If $p,p'\in \sigma^L$ are of the form L(1), then $\mu(p)=\mu(p')=1$. If $p,p'\in \sigma^L$ are of the form L(2) (resp. L(3)), we need to use the property S-3 two (resp. four) times, and this gives the desired equality.
Suppose that $p,p'\in \sigma^R$ are octagons. The sign of an octagon is defined by the product of the signs of the three associated formal rectangles. So we need to use the property S-3 of the sign assignments three times, which gives $(-1)^3=-1$. If $p,p'\in \sigma^R$ are of the form R(2) (resp. R(3)), we need to use the property S-3 five (resp. seven) times, which again gives a $-1$. Hence we get the desired equality.
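The counts above can be packaged into a single commutation rule. As a sketch, suppose that the empty rectangle $r$ is disjoint from each of the $m$ formal rectangles used in defining $\mu(p)$; then repeated application of the property S-3 gives
\[
\SS(F(r))\cdot \mu(p)=(-1)^{m}\,\mu(p)\cdot \SS(F(r)),
\]
where $m\in\{0,2,4\}$ for pseudo-domains of Type L and $m\in\{3,5,7\}$ for pseudo-domains of Type R. This is exactly the pair of sign identities needed for the contributions from I(0).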
\textbf{Contributions from I(1):}
Consider a rectangle $r$ and a pseudo-domain $p$ that share a corner, where $r$ is disjoint from $w_0$ and $w_1$. Note that $p$ cannot be trivial in this case.
Let $p\in \sigma^F$ be a non-trivial pseudo-domain of Type L or R. Let $m$ be the number of formal rectangles used in defining the sign of $p$, and let $r_i$ ($i=1,\dots,m$) be the $i^{th}$ formal rectangle that we use in defining the sign of $p$.
\begin{figure}\label{octagonjuxta}
\end{figure}
\begin{itemize}
\item Suppose that the composite domain of $r$ and $p$ is of the form $r*p$ and the alternative decomposition is $r'*p'$. Among the pseudo-rectangles $r_i$ ($i=1,\dots,m$) that appear in the decomposition of $p$, one shares a corner with $r$; we call it $r_j$. If we start from the decomposition of $\SS(r)\cdot \mu(p)$, we need to use the property S-3 $j-1$ times to reach a decomposition in which a term of the form $\SS(s)\cdot \SS(r_j)$ appears, where $s$ has the same support as $r$. Then we need to use the property S-3 once more to get $\SS(s')\cdot \SS(r'_j)$, where $s'$ has the same support as $r'$. At last, we need $j-1$ more applications of the property S-3 to get to $\SS(r')\cdot \mu(p')$. In total we used the property S-3 $2(j-1)+1$ times. Hence we obtain $\SS(r)\cdot \mu(p) = - \SS(r')\cdot \mu(p')$. The case $p*r=p'*r'$ is similar.
As an example, see the left picture of Fig.~\ref{octagonjuxta}. Here we illustrated a juxtaposition of a rectangle $C$ and an octagon $ABD$ that has an alternate decomposition $r*p=r'*p'$, where $r=C$, $p=ABD$, $r'=B$ and $p'=ACD$. \begin{eqnarray*} \SS(r)\cdot \mu(p) &=&[\SS(C)\cdot \SS(r_{AB})]\cdot \SS(r_D)\cdot \SS(R_{ABD})\\ &=&-\SS(B)\cdot \SS(r_{AC})\cdot \SS(r_D)\cdot \SS(R_{ABD})\\ &=&-\SS(r')\cdot \mu(p'). \end{eqnarray*} Note that the third formal rectangle in the definition of the sign is the same for the octagons $ABD$ and $ACD$.
\item Suppose that the composite domain of $r$ and $p$ is of the form $r*p$, and the alternative decomposition is of the form $p'*r'$. Among the pseudo-rectangles $r_i$ ($i=1,\dots,m$) that appear in the decomposition of $p$, one shares a corner with $r$; we call it $r_j$. If we start with the decomposition of $\SS(r)\cdot \mu(p)$, we need to use the property S-3 $j-1$ times to reach a decomposition in which a term $\SS(s)\cdot \SS(r_j)$ appears; here $s$ has the same support as $r$. Using the property S-3 once more leads to a term $\SS(r'_j)\cdot \SS(s')$, where $s'$ has the same support as $r'$. At last, by using the property S-3 $m-j$ times, we get to $(-1)^m\mu(p')\cdot \SS(r')$. If $p,p'\in \sigma^L$, $m$ is even. If $p,p'\in \sigma^R$, $m$ is odd. This gives the desired equality. The case $p*r=r'*p'$ is similar.
As an example, see the right picture of Fig.~\ref{octagonjuxta}. The composite region has two decompositions of the form $r*p=p'*r'$, where $p=ABC$, $r=D$, $r'=CD$ and $p'=AB$. In general, we denote the rectangles that we use to define the sign of an octagon $p=AB$ by $r_A$, $r_B$ and $R_{AB}$, respectively (i.e. $\mu(AB)=\SS(r_A)\cdot \SS(r_B)\cdot \SS(R_{AB})$).
Note that the third formal rectangles that we use to define the signs of the octagons $ABC$ and $AB$ have the same boundary arcs, the same orientations, and the same components of the generators at their corners. The only difference is in the components of the initial generators that are not at the corners of the formal rectangles. We denote these formal rectangles by $R_{ABC}$ and $R_{AB}$, respectively. \begin{eqnarray*} \SS(r)\cdot \mu(p) &=&\SS(D)\cdot \SS(r_A)\cdot \SS(r_{BC})\cdot \SS(R_{ABC})\\ &=&-\SS(r_A)\cdot \SS(D)\cdot \SS(r_{BC})\cdot \SS(R_{ABC})\\ &=&\SS(r_A)\cdot \SS(r_B)\cdot \SS(CD)\cdot \SS(R_{ABC})\\ &=&-\SS(r_A)\cdot \SS(r_B)\cdot \SS(R_{AB})\cdot \SS(CD)\\ &=&-\mu(p')\cdot \SS(r'). \end{eqnarray*} In the fourth equality, the rectangle with support $CD$ is disjoint from the formal rectangle $R_{ABC}$. When we switch their order, the initial generator of $R_{ABC}$ changes, and with the new initial generator it becomes $R_{AB}$.
\end{itemize}
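In both cases above the sign bookkeeping reduces to counting applications of the property S-3. Schematically, for the decomposition $r*p=r'*p'$ we use the property S-3 a total of $2(j-1)+1$ times, so that
\[
\SS(r)\cdot \mu(p)=(-1)^{2(j-1)+1}\,\SS(r')\cdot \mu(p')=-\,\SS(r')\cdot \mu(p'),
\]
while for $r*p=p'*r'$ we use it $(j-1)+1+(m-j)=m$ times, so that
\[
\SS(r)\cdot \mu(p)=(-1)^{m}\,\mu(p')\cdot \SS(r'),
\]
with $m$ even in the Type L case and $m$ odd in the Type R case.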
\textbf{Contributions of I(2) and II(0):}
Based on the argument in the proof of Lemma~\ref{FZ2}, we only need to verify the signs in the cases illustrated in Fig.~\ref{newII0}.
The left picture shows a term $p*r$ from II(0), where $p={L_1}{L_2}$ is a Type L pseudo-domain of the form L(2) and $r=ABCD$ is a Type 2 pseudo-rectangle, which cancels out with a term $r'*p'$ from I(2), where $r'=A$ is a Type 1 pseudo-rectangle and $p'=BCD{L_1}{L_2}$ is a Type L pseudo-domain of the form L(3). \begin{eqnarray*} \mu(p)\cdot \SS(r) &=&\SS({L_1})\cdot \SS({L_2})\cdot \SS(A)\cdot \SS(BC)\cdot \SS(D)\\ &=&\SS(A)\cdot [\SS(BC)\cdot \SS(D)\cdot \SS({L_1})\cdot \SS({L_2})]\\ &=&\SS(r')\cdot \mu(p'). \end{eqnarray*} \noindent In the second equality, since the terms with supports $L_1$ and $L_2$ are disjoint from the terms that we used to define the sign of $r$, we can move them to the right by using the property S-3 an even number of times.
The picture in the middle of Fig.~\ref{newII0} shows a contribution $p*r$ from II(0) that cancels out with a contribution $p'*r'$ from I(2), where $p, p'\in \sigma^R$, $p=ABCDH$ is an octagon and $p'=ABDEFGH$ is of the form R(2), $r'=C$ is a Type 1 pseudo-rectangle and $r=AEFG$ is a Type 2 pseudo-rectangle. Note that the formal rectangle associated with $r$ has no edge in common with the three formal rectangles that we use for defining the sign of $p$; hence in the second equality we move $r$ using the property S-3 three times. Note that the terms denoted by $AEFG$ are actually two different rectangles with the same support. \begin{eqnarray*} \mu(p)\cdot \SS(r) &=&\mu(p)\cdot \SS(AEFG)\\ &=&-\SS(AEFG)\cdot \mu(ABCDH)\\ &=&-\SS(AEFG)\cdot \SS(r_{ABCD})\cdot \SS(r_H)\cdot \SS(R_{ABCDH})\\ &=&-\SS(AEFG)\cdot \SS(B)\cdot \SS(CD)\cdot \SS(r_{A})\cdot \SS(r_H)\cdot \SS(R_{ABCDH})\\ &=&\SS(AEFG)\cdot \SS(B)\cdot \SS(r_{AD})\cdot \SS(C)\cdot \SS(r_{H})\cdot \SS(R_{ABCDH})\\ &=&\SS(AEFG)\cdot \SS(B)\cdot \SS(r_{AD})\cdot \SS(r_{H})\cdot \SS(R_{ADH})\cdot \SS(C)\\ &=&\SS(AEFG)\cdot \SS(B)\cdot \mu(ADH)\cdot \SS(C)\\ &=&-\SS(ABG)\cdot \SS(EF)\cdot \mu(ADH)\cdot \SS(C)\\ &=&-\mu(p')\cdot \SS(r'). \end{eqnarray*} \noindent In the fourth equality, we use Remark~\ref{rem} to write the sign of the formal rectangle $r_{ABCD}$ as $\SS(B)\cdot \SS(CD)\cdot \SS(r_A)$. In the sixth equality, when we move the term $\SS(C)$ using the property S-3, the third formal rectangle $R_{ABCDH}$ in the definition of the sign of the octagon ${ABCDH}$ differs from $R_{ADH}$ only in the initial generator.
The right picture in Fig.~\ref{newII0} shows a contribution $p*r$ from II(0) that cancels out with a contribution $r'*p'$ from I(2), where $p, p'\in \sigma^R$, $p=AGH$ is an octagon and $p'=ABDEFGH$ is of the form R(2), $r'=C$ is a Type 1 pseudo-rectangle and $r=ABCDEF$ is a Type 2 pseudo-rectangle. \begin{eqnarray*} \mu(p)\cdot \SS(r) &=&\mu(AGH)\cdot \SS(ABCDEF)\\ &=&-\SS(ABCDEF)\cdot \mu(AGH)\\ &=&\SS(C)\cdot [\SS(ABF)\cdot \SS(DE)\cdot \mu(AGH)]\\ &=&\SS(r')\cdot \mu(p'). \end{eqnarray*} \noindent In the second equality, the formal rectangle associated with $r$ has no edge in common with the three rectangles associated to the octagon $p$; hence we need to use the property S-3 three times. Note that the two terms with the support $ABCDEF$ that appear on the two sides of the second equality are not the same rectangles; they differ in their initial generators.
In the third equality, we can decompose the formal rectangle $ABCDEF$, using Remark~\ref{rem}, as follows: $$\SS(ABCDEF)=\SS(C)\cdot \SS(ABF)\cdot \SS(DE)$$
\textbf{Contributions from I(1$'$) and I(3):}
We discussed in Lemma~\ref{FZ2} that the contributions from I(1$'$) cancel out the contributions from I(3) and II(1). In the following we show the sign consistency in each of the cases in Figs.~\ref{newI3L}, \ref{newI3R} and \ref{newII1}.
The left picture in Fig.~\ref{newI3L} shows a contribution $r*p$ from I(3) that cancels out with a contribution $r'*p'$ from I($1'$), where $p, p'\in \sigma^L$, $p=ACD{L_1}{L_2}$ is of the form L(3) and $p'={L_1}{L_2}$ is of the form L(2), $r=B$, and $r'=D$ has $w_0$ as its lower-right corner. \begin{eqnarray*} \SS(r)\cdot \mu(p) &=&[\SS(B)\cdot \SS(AC)]\cdot \SS(D)\cdot \SS(L_1)\cdot \SS(L_2)\\ &=&-\SS(D)\cdot \SS(L_1)\cdot \SS(L_2)\\ &=&-\SS(r')\cdot \mu(p'). \end{eqnarray*} In the second equality we use the property S-2.
The right picture in Fig.~\ref{newI3L} shows a contribution $p*r$ from I(3) that cancels out with a contribution $r'*p'$ from I($1'$), where $p, p'\in \sigma^L$, $p=ABE$ is of the form L(2) and $p'$ is of the form L(1), $r=CD$ is shaded horizontally and $r'=DE$ has $w_0$ as its lower-right corner. \begin{eqnarray*} \mu(p)\cdot \SS(r) &=&\SS(AB)\cdot [\SS(E)\cdot \SS(CD)]\\ &=&-[\SS(AB)\cdot \SS(C)]\cdot \SS(DE)\\ &=&\SS(DE)\\ &=&\SS(r')\cdot \mu(p'). \end{eqnarray*} Note that $p'$ is of Type L(1) and its sign is defined to be equal to $1$.
The left picture in Fig.~\ref{newI3R} shows a contribution $r*p$ from I(3) that cancels out with a contribution $r'*p'$ from I($1'$), where $p, p'\in \sigma^R$, $p=ABCEFGH$ is of the form R(2) and $p'=ABH$ is an octagon, $r=D$ is shaded horizontally and $r'=FG$ has $w_0$ in the interior of its right edge. \begin{eqnarray*} \SS(r)\cdot \mu(p) &=&[\SS(D)\cdot \SS(ACE)]\cdot \SS(FG)\cdot \mu(ABH)\\ &=&-\SS(FG)\cdot \mu(ABH)\\ &=&-\SS(r')\cdot \mu(p'). \end{eqnarray*}
The right picture in Fig.~\ref{newI3R} shows a contribution $p*r$ from I(3) that cancels out with a contribution $r'*p'$ from I($1'$), where $p, p'\in \sigma^R$, $p=ABDFGHK$ is of the form R(2) and $p'=AHK$ is an octagon, $r=CE$ is shaded horizontally and $r'=EFG$ has $w_0$ in the interior of its right edge. \begin{eqnarray*} \mu(p)\cdot \SS(r) &=&\SS(ABD)\cdot \SS(FG)\cdot [\mu(AHK)\cdot \SS(CE)]\\ &=&-\SS(ABD)\cdot [\SS(FG)\cdot \SS(CE)]\cdot \mu(AHK)\\ &=&[\SS(ABD)\cdot \SS(C)]\cdot \SS(EFG)\cdot \mu(AHK)\\ &=&-\SS(EFG)\cdot \mu(AHK)\\ &=&-\SS(r')\cdot \mu(p'). \end{eqnarray*}
In the second equality, the rectangle $CE$ is disjoint from the octagon $AHK$, hence we use the property S-3 three times.
\textbf{Contributions from I(1$'$) and II(1):}
The left picture in Fig.~\ref{newII1} shows a term $p*r$ from II(1) that cancels out a term $r'*p'$ from I(1$'$), where $p, p' \in \sigma^R$, $p=ABF$ and $p'=AF$ are octagons, $r=ACDE$ is a Type 2 pseudo-rectangle, and $r'=DE$ is shaded gray. Fig.~\ref{octagonII1} is an illustration of the formal flows at hand. Note that the formal rectangles associated with $K$ and $DE$ are the same. \begin{eqnarray*} \mu(p)\cdot \SS(r) &=&\SS(r_F)\cdot \SS(r_{AB})\cdot \SS(R_{ABF})\cdot \SS(ACDE)\\ &=&-\SS(r_F)\cdot [\SS(r_{AB})\cdot \SS(ACDE)]\cdot \SS(R_{AF})\\ &=&\SS(r_F)\cdot \SS(K)\cdot \SS(r_A)\cdot \SS(R_{AF})\\ &=&-\SS(K)\cdot \SS(r_F)\cdot \SS(r_A)\cdot \SS(R_{AF})\\ &=&-\SS(r')\cdot \mu(p'). \end{eqnarray*} In the second equality, the formal rectangle associated with the Type~2 pseudo-rectangle $r$ has no edge in common with the formal rectangle $R_{ABF}$. By using the property S-3, we see that when we change the order of the terms, the terminal generator of the formal rectangle $ACDE$, which is the initial generator of the next term, is the same as the initial generator of the formal rectangle $R_{AF}$. In the third equality, we use the property S-3.
\begin{figure}\label{octagonII1}
\end{figure}
The picture in the middle of Fig.~\ref{newII1} shows a term $p*r$ from II(1) that cancels out a term $r'*p'$ from I(1$'$), where $p, p' \in \sigma^R$, $p=ACDEGHJKLM$ is of the form R(2), $p'=CDEGHJKLM$ is of the form R(2) and is in gray, $r=BCDEFGHI$ is a Type 2 pseudo-rectangle, and $r'=FGHI$ is shaded horizontally. \begin{eqnarray*} \mu(p)\cdot \SS(r) &=&\SS(ACDE)\cdot \SS(GHJK)\cdot [\mu(DLM)\cdot \SS(BCDEFGHI)]\\ &=&-\SS(ACDE)\cdot [\SS(GHJK)\cdot \SS(BCDEFGHI)]\cdot \mu(DLM)\\ &=&\SS(ACDE)\cdot \SS(BCDEFGHI)\cdot \SS(GHJK)\cdot \mu(DLM)\\ &=&\SS(ACDE)\cdot \SS(B)\cdot \SS(FGHI)\cdot \SS(CDE)\cdot \SS(GHJK)\cdot \mu(DLM)\\ &=&-\SS(FGHI)\cdot [\SS(CDE)\cdot \SS(GHJK)\cdot \mu(DLM)]\\ &=&-\SS(r')\cdot \mu(p'). \end{eqnarray*} Note that the formal rectangle associated with the Type~2 pseudo-rectangle $r$ has no edge in common with the formal rectangles associated with the octagon $DLM$. Hence using the property S-3 three times, we obtain the second equality. In the fourth equality, we use Remark~\ref{rem} to write the sign of a formal rectangle $BCDEFGHI$, whose associated embedded rectangle in the Heegaard diagram has a component of the initial generator in its interior, as follows: $$\SS(BCDEFGHI)=\SS(B)\cdot \SS(FGHI)\cdot \SS(CDE).$$ Note that the definition of the sign depends on the place of the component of the initial generator in the interior of the rectangle with support $BCDEFGHI$. Since the term with support $ACDE$ comes first, the rectangles in the decomposition of $r$ meet in $v_0$.
The right picture in Fig.~\ref{newII1} shows a term $p*r$ from II(1) that cancels out a term $r'*p'$ from I(1$'$), where $p, p' \in \sigma^L$ are of the form L(3), $p=ACEG{L_1}{L_2}$, $p'=CEG{L_1}{L_2}$, $r=BCDEF$ is a Type 2 pseudo-rectangle, and $r'=DEF$ is a Type 1 pseudo-rectangle that has $w_1$ in the interior of its right edge. \begin{eqnarray*} \mu(p)\cdot \SS(r) &=&\SS(L_1)\cdot \SS(L_2)\cdot \SS(AC)\cdot [\SS(EG)\cdot \SS(F)]\cdot \SS(BC)\cdot \SS(DE)\\ &=&-\SS(L_1)\cdot \SS(L_2)\cdot \SS(AC)\cdot \SS(EF)\cdot [\SS(G)\cdot \SS(BC)]\cdot \SS(DE)\\ &=&\SS(L_1)\cdot \SS(L_2)\cdot \SS(AC)\cdot \SS(EF)\cdot \SS(BC)\cdot [\SS(G)\cdot \SS(DE)]\\ &=&-\SS(L_1)\cdot \SS(L_2)\cdot \SS(AC)\cdot \SS(EF)\cdot [\SS(BC)\cdot \SS(D)]\cdot \SS(EG)\\ &=&\SS(L_1)\cdot \SS(L_2)\cdot \SS(AC)\cdot \SS(EF)\cdot \SS(BD)\cdot \SS(C)\cdot \SS(EG)\\ &=&\SS(AC)\cdot [\SS(EF)\cdot \SS(BD)]\cdot [\SS(L_1)\cdot \SS(L_2)\cdot \SS(C)\cdot \SS(EG)]\\ &=&-[\SS(AC)\cdot \SS(B)]\cdot \SS(DEF)\cdot \mu(p')\\ &=&\SS(r')\cdot \mu(p'). \end{eqnarray*} Note that using the property S-2 in the last equality, we have $\SS(AC)\cdot \SS(B)=-1$, since the formal rectangle associated with the punctured rectangle $AC$ makes a Type $\beta$ degeneration together with the formal rectangle associated with $B$.
\end{proof}
We decompose the set of generators of the Heegaard diagram $\widetilde H$ according to the position of the components of a generator on $\widetilde\alpha^0_1$ and $\widetilde\alpha^1_1$. We represent the type of each generator with an ordered pair. The first (resp. second) entry is $I$ if the component on $\widetilde\alpha^0_1$ (resp. $\widetilde\alpha^1_1$) is on one of the lifts of $\beta_1$ (either $\widetilde\beta_1^0$ or $\widetilde\beta_1^1$). The first (resp. second) entry is $J$ when the $\widetilde\alpha_1^0$ (resp. $\widetilde\alpha_1^1$) component is on one of the lifts of $\beta_2$. The $N$ in the first (resp. second) entry shows that the generator has its $\widetilde\alpha^0_1$ (resp. $\widetilde\alpha^1_1$) component neither on the lifts of $\beta_1$ nor on the lifts of $\beta_2$. Hence we have the following decomposition for the set of generators: \[\mathbf{S}(\widetilde H)=(I,I)\cup(I,J)\cup(I,N)\cup(J,I)\cup(N,I)\cup(J,J)\cup(J,N)\cup(N,J)\cup(N,N)\]
Having the above decomposition of the set of generators of $C$, we get a decomposition of $C$ as the direct sum of the submodules generated by the associated generators. \[C=C^{I,I}\oplus C^{I,J}\oplus C^{I,N}\oplus C^{J,I}\oplus C^{N,I}\oplus C^{J,J}\oplus C^{J,N}\oplus C^{N,J}\oplus C^{N,N}\]
In order to show that $F$ is a quasi-isomorphism, following the ideas of \cite{MOST}, we define a filtration.
Denote by $\mathbf{S}(\widetilde H,k)$ the set of generators $\mathbf{x}\in S(\widetilde H)$ with Alexander grading equal to $k\in \mathbb{Z}$. Let $C(\widetilde H,k)$ be the complex generated by the elements of $\mathbf{S}(\widetilde H,k)$. Since $\partial$ preserves the Alexander grading, $C(\widetilde H,k)$ is a summand of $C(\widetilde H)$.
Let $\Pi:\widetilde H\rightarrow H$ be the double branched cover map and let $Q\subset H$ be a set consisting of one point in each square of $H$ other than those in the same row or the same column as $\Pi(O_1)$. For $\mathbf{x},\mathbf{y}\in \mathbf{S}(\widetilde H)$ and $p\in \sigma(\mathbf{x},\mathbf{y})$, the image of $p$ under $\Pi$ is a 2-chain (not necessarily a domain) in the Heegaard diagram $H$. We denote by $X_1(p)$ (respectively $O_1(p)$) the number of times that $\Pi(X_1)$ (respectively $\Pi(O_1)$) lies in $\Pi(p)$, counted with multiplicity.
\begin{lemma}\label{2chain} Let $D$ be a 2-chain (not necessarily a domain) in the Heegaard diagram $H$ that contains no basepoints, such that $\partial D$ consists of $\alpha$- and $\beta$-circles. Then $D$ is trivial. (Here ``contains no basepoints'' means that the multiplicity of $D$ at each basepoint is zero.) \end{lemma}
\begin{proof} Let $D$ be as above. We add some number of rows and some number of columns to $D$ to get a 2-chain $D'$ in which each square has non-negative multiplicity. There might be some full rows or columns in $D'$, i.e.\ rows or columns of $H$ all of whose squares have positive multiplicity in $D'$; we subtract them to get a 2-chain $D''$ with non-negative multiplicities that contains no full row or column.
Consider two adjacent squares in $H$. If they have multiplicities $a$ and $b$ in $D''$, then the segment between them occurs in the boundary of $D''$ with multiplicity $a-b$ (the sign of $a-b$ records the direction in which this segment appears in the boundary of $D''$; if $a=b$ the segment has multiplicity $0$ in the boundary). Now suppose $\alpha_1$ is a circle in the boundary of $D''$. Since all the squares have non-negative multiplicity in $D''$, depending on the orientation that $\alpha_1$ inherits from $D''$, one of the two rows bounded by $\alpha_1$ has positive multiplicity everywhere in $D''$, which contradicts the definition of $D''$. More precisely, if the induced orientation of $\alpha_1$ from the boundary of $D''$ is from left to right (resp.\ right to left), then for any segment of $\alpha_1$, if the multiplicities of the squares above and below it are $a$ and $b$, we have $a \geq a-b=\operatorname{mult}(\alpha_1) > 0$ (resp.\ $b \geq b-a=\operatorname{mult}(\alpha_1) > 0$); this holds for the multiplicities of all the squares in the row above (resp.\ below) $\alpha_1$, depending on the orientation of $\alpha_1$. So there is no $\alpha$-circle in $\partial D''$, and similarly there is no $\beta$-circle in the boundary of $D''$. It follows that each pair of squares with a common edge has the same multiplicity in $D''$. From the definition of $D''$ we know that some square has zero multiplicity in $D''$, since $D''$ contains no full row or column. So $D''$ is trivial, and hence $D$ is a sum of a number of rows and a number of columns.
In the representation of this 2-chain as a sum of rows and columns, if a row has multiplicity $c$, then it contains an $O_i$ for some $i$. Since the 2-chain has multiplicity zero at $O_i$, the column through $O_i$ must have multiplicity $-c$. This column contains an $X_j$, so the row through $X_j$ must have multiplicity $c$. Continuing in this way, and using that we work with a knot rather than a link, we see that all the rows have multiplicity $c$ and all the columns have multiplicity $-c$. So the 2-chain is in fact trivial.
\end{proof}
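The last step of the proof is elementary propagation: vanishing multiplicity at every $O_i$ and $X_i$ forces one constant $c$ through all rows and columns. The following Python sketch (purely illustrative; the function name and the $2\times 2$ unknot grid are our own choices, not taken from the text) carries out this propagation for a given grid diagram, treating each marking as the constraint $\mathrm{row\_coef} + \mathrm{col\_coef} = 0$.

```python
from collections import deque

def row_col_coefficients(n, O, X):
    """Propagate coefficients for a 2-chain written as a sum of rows and
    columns, under the constraint that its multiplicity vanishes at every
    O- and X-marking.  A marking in row r, column c forces
    row_coef[r] + col_coef[c] = 0.  For a knot (one link component) the
    markings connect all rows and columns, forcing a single constant c."""
    # Bipartite graph on rows and columns; edges come from the markings.
    adj = {('r', i): [] for i in range(n)}
    adj.update({('c', j): [] for j in range(n)})
    for (r, c) in O + X:
        adj[('r', r)].append(('c', c))
        adj[('c', c)].append(('r', r))
    val = {('r', 0): 1}          # normalise the constant c = 1
    queue = deque([('r', 0)])
    while queue:
        node = queue.popleft()
        for nb in adj[node]:
            if nb not in val:
                val[nb] = -val[node]   # the constraint flips the sign
                queue.append(nb)
            else:
                assert val[nb] == -val[node], "inconsistent 2-chain"
    return val

# A 2x2 grid diagram of the unknot: every row gets +1, every column -1,
# so the 2-chain (all rows) - (all columns) covers each square 0 times.
O = [(0, 0), (1, 1)]
X = [(0, 1), (1, 0)]
coeffs = row_col_coefficients(2, O, X)
```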
For fixed $\mathbf{x} , \mathbf{y} \in \mathbf{S}(\widetilde H,k)$, if $p, p' \in \sigma (\mathbf{x} , \mathbf{y} )$ are two pseudo-domains such that $O_1(p)=X_1(p)=O_1(p')=X_1(p')$ and there are no other basepoints inside them (counted with multiplicity), then by Lemma~\ref{2chain} we have: $$\# (Q\cap \Pi(p))=\# (Q\cap \Pi(p')).$$
We call a pseudo-domain \emph{Q-fine} if its multiplicity at each basepoint except (possibly) $X_1$ is zero. So we can find a function $\QQ$ such that \begin{equation}\label{QQ} \QQ(\mathbf{x})-\QQ(\mathbf{y})=\# (Q\cap \Pi(p)) \end{equation} \noindent for all Q-fine pseudo-domains $p \in \sigma (\mathbf{x}, \mathbf{y})$.
The construction of $\QQ$ is a simple combinatorial process. Fix a generator $\mathbf{x}_{0}\in \mathbf{S}(\widetilde H)$ and assign an arbitrary value to $\QQ(\mathbf{x}_0)$. For $\mathbf{y}\in \mathbf{S}(\widetilde H)$, we can define the value of $\QQ$ at $\mathbf{y}$ if there is a Q-fine pseudo-domain between $\mathbf{x}_0$ and $\mathbf{y}$ (i.e.\ from $\mathbf{x}_0$ to $\mathbf{y}$ or from $\mathbf{y}$ to $\mathbf{x}_0$). If there is a Q-fine pseudo-domain $p\in \sigma(\mathbf{x}_0,\mathbf{y})$, then according to Equation~\ref{QQ} the value of $\QQ$ at $\mathbf{y}$ is defined to be $\QQ(\mathbf{y}) = \QQ(\mathbf{x}_0)-\# (Q\cap \Pi(p))$. If there is a Q-fine pseudo-domain $p'\in\sigma (\mathbf{y} , \mathbf{x}_0 )$, we set $\QQ(\mathbf{y})=\QQ(\mathbf{x}_0)+ \# (Q\cap \Pi(p'))$. Similarly, if there is a chain of Q-fine pseudo-domains between $\mathbf{x}_0$ and $\mathbf{y}$, i.e.\ there are generators $\mathbf{x}_i$ for $i=1,\cdots, n$, where $\mathbf{x}_n=\mathbf{y}$, and there is a Q-fine pseudo-domain between each two consecutive generators $\mathbf{x}_i$ and $\mathbf{x}_{i+1}$, then the value of $\QQ$ at $\mathbf{y}$ can be defined. If there is no such chain between a generator $\mathbf{x}'$ and $\mathbf{x}_0$, we define $\QQ(\mathbf{x}')$ arbitrarily and continue the above procedure. Since the number of generators is finite, this procedure stops at some point.
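This procedure is the standard assignment of a potential function on a graph whose edges carry prescribed differences. The following Python sketch is purely illustrative (the generator names and weights below are invented): generators are nodes, and each Q-fine pseudo-domain from $x$ to $y$ is an edge recording the constraint $\QQ(x)-\QQ(y)=\#(Q\cap\Pi(p))$.

```python
def build_QQ(nodes, edges):
    """Sketch of the combinatorial construction of the function QQ.
    Each edge (x, y, w) records a Q-fine pseudo-domain p from x to y with
    #(Q cap Pi(p)) = w, i.e. the constraint QQ(x) - QQ(y) = w.  Values
    are propagated along chains of pseudo-domains; each new connected
    component gets an arbitrary starting value (here 0), exactly as in
    the text.  An inconsistent loop raises an error (well-definedness)."""
    adj = {x: [] for x in nodes}
    for x, y, w in edges:
        adj[x].append((y, -w))   # QQ(y) = QQ(x) - w
        adj[y].append((x, +w))   # QQ(x) = QQ(y) + w
    QQ = {}
    for start in nodes:          # visit every connected component
        if start in QQ:
            continue
        QQ[start] = 0            # arbitrary value on a new component
        stack = [start]
        while stack:
            u = stack.pop()
            for v, d in adj[u]:
                if v not in QQ:
                    QQ[v] = QQ[u] + d
                    stack.append(v)
                else:
                    assert QQ[v] == QQ[u] + d, "loop with nonzero Q-count"
    return QQ

# Hypothetical example: 'w' admits no chain of Q-fine pseudo-domains
# to 'x0', so it starts its own component with an arbitrary value.
nodes = ['x0', 'y', 'z', 'w']
edges = [('x0', 'y', 2), ('y', 'z', 1)]
QQ = build_QQ(nodes, edges)
```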
In order to show that $\QQ$ is well-defined, we have to consider two such chains between $\mathbf{x}_0$ and $\mathbf{y}$ and show that both chains give the same value for $\QQ$ at $\mathbf{y}$. To put it differently, if we consider the union of these chains, we get a loop starting and ending at $\mathbf{x}_0$, and we want to show that as we follow this path the changes in the value of $\QQ$ add up to zero. For each edge of this loop, the change in the value of $\QQ$ along the edge is equal to $\pm\# (Q\cap \Pi(p_i))$, where $p_i$ is the pseudo-domain associated to that edge. Hence the sum of the changes in the value of $\QQ$ along this path is equal to $\# (Q\cap \sum(-1)^{n_i}\Pi(p_i))$, where $n_i$ is 1 or 0 depending on whether $p_i$ is in $\sigma (\mathbf{x}_i , \mathbf{x}_{i+1} )$ or in $\sigma (\mathbf{x}_{i+1} , \mathbf{x}_{i})$.
Note that $p=\sum(-1)^{n_i} p_i$ is a pseudo-domain from a generator to itself, so its boundary consists of a number of copies of $\alpha$- and $\beta$-circles. Since each $p_i$ contains no basepoints except possibly $X_1$, the same holds for their sum $p$. Now $p$ goes from a generator to itself, so it does not change the Alexander grading, and hence the multiplicity of $X_1$ must also be zero. Therefore the image of $p$ under $\Pi$ is a 2-chain satisfying the conditions of Lemma~\ref{2chain}, so $\Pi(p)$ is trivial; in particular $\# (Q\cap \Pi(p))= \# (Q\cap\sum(-1)^{n_i}\Pi(p_i))=0$. Hence the total change of $\QQ$ along this loop is zero, which shows that $\QQ$ is well-defined. One should note that in this proof the 2-chain obtained by the projection $\Pi$ is trivial; this does not mean that the corresponding domain in the Heegaard diagram $\widetilde H$ is trivial.
Using $\QQ$ we can define a filtration on $C(\widetilde H,k)$. The boundary operator of the associated graded object counts those rectangles that contain no basepoints and no points of $Q$. We denote by $C_Q$ the associated graded object, and omit the index $k$ when there is no confusion.
In order to study the boundary map between different submodules and calculate the homology groups, we need a new definition.
\begin{definition} Given $\mathbf{x}, \mathbf{y} \in S(\widetilde H)$, we call an empty rectangle $r \in Rect(\mathbf{x},\mathbf{y})$ that has no points of $Q$ inside it, an \emph{undone} rectangle with respect to the initial generator $\mathbf{x}$. In this case, the components of $\mathbf{x}$ are the upper-right and lower-left corners of $r$. We call a rectangle $r \in Rect(\mathbf{y},\mathbf{x})$ that has no points of $Q$ inside it, a \emph{done} rectangle with respect to $\mathbf{x}$.
See Fig.~\ref{cNN}.
Each term in $\partial\mathbf{x}$ is obtained by picking an undone rectangle $r_0 \in Rect(\mathbf{x},\mathbf{y})$. We refer to the generator $\mathbf{y}$ as \emph{the generator obtained from $\mathbf{x}$ by using $r_0$}. \end{definition}
\begin{lemma} \label{Homology} $H_*(C_Q)$ is isomorphic to the free $\mathbb{Z}_2$-module generated by the elements of $(I,I)$ and $(J,J)$. \end{lemma}
\begin{proof} There are two cases depending on whether $X_2$ is placed in the square just to the left or just to the right of $O_1$ in the grid diagram $H$ for $K$. First, suppose $X_2$ is in the square just to the left of the square marked $O_1$. Then we have a direct sum splitting ${C}_Q={C}_Q^{I,I} \oplus B_1 \oplus B_2 \oplus B_3$, where $B_1$ is the module generated by all generators of types $(I,N)$ and $(I,J)$, $B_2$ is the module generated by all generators of types $(N,I)$ and $(J,I)$, and $B_3$ is the module generated by all generators of types $(N,N)$, $(J,N)$, $(N,J)$ and $(J,J)$. We also let $B_4$ be the module generated by all generators of types $(N,N)$, $(J,N)$ and $(N,J)$.
The differentials in ${C}_Q^{I,I}$ are trivial, hence its homology is the free $\mathbb{Z}_2$-module generated by elements of $(I,I)$. Also $B_1$, $B_2$ and $B_3$ are chain complexes fitting into these exact sequences
\begin{center} \exact{{C}_Q^{I,N}}{B_1}{{C}_Q^{I,J}}
\exact{{C}_Q^{N,I}}{B_2}{{C}_Q^{J,I}}
\exact{B_4}{B_3}{{C}_Q^{J,J}}\label{seq3} \end{center}
Here $B_4$ fits into the following short exact sequence \begin{center} \exact{{C}_Q^{N,N}}{B_4}{{C}_Q^{J,N}\oplus {C}_Q^{N,J}} \end{center}
For an explanation of these sequences, see the captions of Figs.~\ref{cNN}, \ref{cIJ} and \ref{cJN}. First we show that $H_*({C}_Q^{N,N})$ is zero. Note that for $\mathbf{x} \in S(\widetilde H)$ we can consider all the done and undone rectangles; the terms of $\partial \mathbf{x}$ in ${C}_Q$ are then given by counting undone rectangles with respect to $\mathbf{x}$.
In the first Heegaard diagram in Fig.~\ref{cNN}, there are exactly three disjoint undone rectangles and one done rectangle, with respect to $\mathbf{x}$. We represent $\mathbf{x}$ by its undone rectangles, $e_1\wedge e_{1}' \wedge e_2$. In each term in $\partial \mathbf{x}$ in ${C}_Q$, one of these undone rectangles (with respect to $\mathbf{x}$) will be counted. For example, we denote by $ e_1\wedge e_{1}'$, the term that comes from using $e_2$. In other words, it represents a generator $\mathbf{y}$ such that $e_1$ and $e_{1}'$ are the only undone rectangles with respect to $\mathbf{y}$. Hence $\partial \mathbf{x}$ in ${C}_Q$ can be represented by the following notation (we call this the induced boundary operator): \[ \partial (e_1\wedge e_{1}' \wedge e_2) = e_1\wedge e_{1}' + e_{1}' \wedge e_2 + e_1 \wedge e_2. \]
One can see that each generator in ${C}_Q$ can have at most four undone rectangles. Given a generator $\mathbf{x}$, we construct a set of generators associated with $\mathbf{x}$. This set consists of the generators obtained from $\mathbf{x}$ by using a number of undone rectangles, together with the generators from which $\mathbf{x}$ is obtained by using a number of undone rectangles. In the above notation, this set with the boundary operator induced from $C_Q$ is isomorphic to an exterior algebra over a vector space of dimension at most four, with coefficients in $\mathbb{Z}_2$.
In the above case, the vector space is four dimensional. As another example, see the second Heegaard diagram in Fig.~\ref{cNN}; there are two disjoint undone rectangles with respect to $\mathbf{x}$. Hence we represent the generator $\mathbf{x}$ with $e_1 \wedge e_2$. The induced boundary map is represented as follows: \[ \partial (e_1 \wedge e_2) = e_1 + e_2 \] Note that here the exterior algebra is over a three dimensional vector space.
This shows that we can partition our complex as a union of subcomplexes, each isomorphic to the exterior algebra of a vector space with coefficients in $\mathbb{Z}_2$. Each such subcomplex is isomorphic to the complex computing the reduced homology of an $n$-simplex over $\mathbb{Z}_2$, hence the homology of ${C}_Q^{N,N}$, being equal to the direct sum of the homologies of these subcomplexes, is trivial. The same idea shows that $H_*({C}_Q^{I,N})=0$ and $H_*({C}_Q^{N,I})=0$.
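This acyclicity can be checked directly by machine for small ranks. The following Python sketch (illustrative only; the function names are ours) builds the boundary matrices of the exterior algebra on $n$ generators over $\mathbb{Z}_2$, i.e.\ the augmented chain complex of an $(n-1)$-simplex, and verifies that all homology groups vanish.

```python
from itertools import combinations

def gf2_rank(rows):
    """Rank over GF(2) of a matrix whose rows are integer bitmasks."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            lsb = pivot & -pivot  # eliminate via the lowest set bit
            rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

def homology_dims(n):
    """Betti numbers over Z_2 of the exterior-algebra complex on n
    generators with d(e_S) = sum over i in S of e_{S - i} (the augmented
    chain complex of an (n-1)-simplex).  All entries should be zero."""
    basis = {k: {S: idx for idx, S in enumerate(combinations(range(n), k))}
             for k in range(n + 1)}
    ranks = [0] * (n + 2)  # ranks[k] = rank of d_k : C_k -> C_{k-1}
    for k in range(1, n + 1):
        rows = []
        for S in basis[k]:
            mask = 0
            for i in S:
                face = tuple(j for j in S if j != i)
                mask |= 1 << basis[k - 1][face]
            rows.append(mask)
        ranks[k] = gf2_rank(rows)
    dims = [len(basis[k]) for k in range(n + 1)]
    # dim H_k = dim C_k - rank d_k - rank d_{k+1}
    return [dims[k] - ranks[k] - ranks[k + 1] for k in range(n + 1)]
```

For instance, `homology_dims(4)` returns a list of zeros, matching the four-dimensional case discussed above.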
\begin{figure}\label{cNN}
\end{figure}
\begin{figure}\label{cIJ}
\end{figure}
In order to compute the homology of $B_1$, it is enough to compute the homology of ${C}_Q^{I,J}$ as a quotient. Each generator in ${C}_Q^{I,J}$ has at most two terms in its boundary, one of which is in ${C}_Q^{I,N}$. Hence $H_*(B_1)=H_*({C}_Q^{I,J})=0$; see Fig.~\ref{cIJ}. In the same way we can prove that $H_*(B_2)=H_*({C}_Q^{J,I})=0$ and $H_*(B_4)=H_*({C}_Q^{J,N}\oplus {C}_Q^{N,J})=0$; see Fig.~\ref{cJN}. Finally, the differentials in ${C}_Q^{J,J}$ are trivial, so its homology is the free $\mathbb{Z}_2$-module generated by the elements of $(J,J)$.
\begin{figure}\label{cJN}
\end{figure}
Suppose on the other hand that $X_2$ is just to the right of $O_1$ in the grid diagram $H$ for $K$. Then there is a direct sum splitting ${C}_Q={C}_Q^{J,J} \oplus B'_1 \oplus B'_2 \oplus B'_3$, where $B'_1$ is the module generated by all generators of types $(J,I)$ and $(J,N)$, $B'_2$ is the module generated by all generators of types $(I,J)$ and $(N,J)$, and $B'_3$ is the module generated by all generators of types $(I,I)$, $(I,N)$, $(N,I)$ and $(N,N)$. We also let $B'_4$ be the module generated by all generators of types $(I,I)$, $(I,N)$ and $(N,I)$. Also $B'_1$, $B'_2$ and $B'_3$ are chain complexes fitting into these exact sequences
\begin{center} \exact{{C}_Q^{J,I}}{B'_1}{{C}_Q^{J,N}}
\exact{{C}_Q^{I,J}}{B'_2}{{C}_Q^{N,J}}
\exact{B'_4}{B'_3}{{C}_Q^{N,N}} \label{2seq2} \end{center} Here $B'_4$ fits into the following exact sequence \begin{center} \exact{{C}_Q^{I,I}}{B'_4}{{C}_Q^{I,N}\oplus{C}_Q^{N,I}} \label{2seq3} \end{center} All the differentials in ${C}_Q^{J,J}$ are trivial, so its homology is the free $\mathbb{Z}_2$-module generated by the elements of $(J,J)$. It is easy to see that $H_*({C}_Q^{J,I})=0$. If we consider ${C}_Q^{J,N}$ as a quotient chain complex, it is also easy to show that $H_*({C}_Q^{J,N})=0$. Hence we have $H_*(B'_1)=0$. In the same way we can prove that $H_*(B'_2)=0$. The differentials in ${C}_Q^{I,I}$ are trivial, so its homology is the free $\mathbb{Z}_2$-module generated by the elements of $(I,I)$. Also $H_*({C}_Q^{I,N}\oplus{C}_Q^{N,I})=0$ as a quotient chain complex. A simple calculation shows that $H_*({C}_Q^{N,N})=0$. Considering the above exact sequences, we get the desired result. \end{proof}
\begin{proposition} \label{quasi} The map $F$ is a filtered quasi-isomorphism. \end{proposition}
Note that in order to define the function $\QQ$ for a generator $\mathbf{x}$ of $C(\widetilde G)$, we consider $\psi(\mathbf{x})$ and define $\QQ(\mathbf{x}):= \QQ(\psi(\mathbf{x}))$. For an element $(\mathbf{x}_0,\mathbf{x}_1) \in C'$ we define $$\QQ(\mathbf{x}_0,\mathbf{x}_1):=\max\{\QQ(\mathbf{x}_0),\QQ(\mathbf{x}_1)\}.$$
\begin{proof} First we show that $F$ preserves the $Q$-filtration. Let $\mathbf{x}$ be a generator of $C$; we want to show that $\QQ(\mathbf{x}) \geq \QQ(F(\mathbf{x}))$, i.e.\ that $\QQ(\mathbf{x}) \geq \QQ(F^L(\mathbf{x}))$ and $\QQ(\mathbf{x}) \geq \QQ(F^R(\mathbf{x}))$.
We decompose a pseudo-domain of type $L$ or $R$ as the $*$-product of a number of punctured rectangles, octagons that contain $X_1$, and empty rectangles.
Let $\mathbf{u}, \mathbf{v}\in \mathbf{S}(\widetilde H)$ be such that there is a punctured rectangle $\mathfrak{a} \in A(\mathbf{u} , \mathbf{v})$; then the image of the complementary rectangle $r_{\mathfrak{a}}\in Rect(\mathbf{v}, \mathbf{u})$ under $\Pi$ contains no points of $Q$. Hence by Equation~\ref{QQ} we have $\QQ(\mathbf{u})=\QQ(\mathbf{v})$. For $\mathbf{w}, \mathbf{z}\in \mathbf{S}(\widetilde H)$, if there is an octagon that contains $X_1$ or an empty rectangle from $\mathbf{w}$ to $\mathbf{z}$, then from Equation~\ref{QQ} we have $\QQ(\mathbf{w}) \geq \QQ(\mathbf{z})$. Hence $\QQ(\mathbf{x}) \geq \QQ(F(\mathbf{x}))$, i.e.\ $F$ preserves the $Q$-filtration. \end{proof}
We consider the map induced by $F$: \[F_Q:{C}_Q \longrightarrow{C}_Q '\]
By Lemma~\ref{Homology}, the homology of ${C}_Q$ is carried by the subcomplex ${C}_Q^{(I,I)}\oplus{C}_Q^{(J,J)}\subset{C}_Q$. So we consider the restriction of $F_Q$ to this subcomplex, and will show that it induces an isomorphism. Also by definition we have ${C}_Q '=\L_Q\oplus\RR_Q$. Note that the only pseudo-domains allowed are those that do not intersect the points of $Q$.
Let $\mathbf{x}$ be a generator in ${C}_Q^{(I,I)}$. Since $F(\mathbf{x})$ ends up in $(I,I)$, the only pseudo-domain that contributes to $F(\mathbf{x})$ is the trivial domain of Type $L$. Hence $F_Q^L$ restricted to $C_Q^{(I,I)}$ is an isomorphism.
The restriction of $F_Q^R$ to ${C}_Q^{(J,J)}$ counts pseudo-domains of Type $R$ which are supported in the two rows and the two columns through $O_1$; that is, we only count octagons. But for each element in $(J,J)$ there is a unique way of assigning an octagon. Thus, in this case, $F_Q$ is a quasi-isomorphism.
Now we need the following algebraic lemma; for a proof see Theorem 3.2 from \cite{Mc}.
\begin{lemma} \label{nice} Suppose that $F:C\longrightarrow C'$ is a filtered chain map which induces an isomorphism on the homology of the associated graded object. Then $F$ is a quasi-isomorphism. \end{lemma}
We now use Lemma~\ref{nice}, first for the filtration of $Q$, and then for the Alexander grading to conclude that $F$ is a quasi-isomorphism.
This completes the proof of the fact that $H(C)\cong H(C')\cong H(B)\otimes V$, where $V\cong \mathbb{Z}_2 \oplus \mathbb{Z}_2$ with generators in gradings $0$ and $-1$.
\begin{lemma}\label{gauge} $H_*(C_Q)$ is isomorphic to the free $\mathbb{Z}$-module generated by the elements of $(I,I)$ and $(J,J)$. \end{lemma}
\begin{proof} This lemma is the generalization of Lemma~\ref{Homology} over $\mathbb{Z}$. Here we use the same exterior algebra notation. For example, if we start with the generator $\mathbf{x}$ as in Fig.~\ref{cNN}, denoted by $e_1\wedge e_2 \wedge e_3$, and use the empty rectangle $e_3$, then we get the term $e_1 \wedge e_2$ with the sign $s_3$ of the formal rectangle associated to $e_3$ with initial generator $\mathbf{x}$. So we get an expression of the form: $$\partial (e_1\wedge e_2 \wedge e_3) = s_1\cdot e_2 \wedge e_3 + s_2\cdot e_1 \wedge e_3 +s_3\cdot e_1\wedge e_2 $$ It is not hard to see that, since the sign assignment has the property S-3, with an appropriate gauge transform we can change the sign assignment so that: $$\partial (e_{n_1}\wedge \cdots \wedge e_{n_k})= \displaystyle \sum (-1)^{i} (e_{n_1}\wedge \cdots \wedge \widehat{e_{n_i}}\wedge \cdots\wedge e_{n_k})$$ This means that, similar to the case of $\mathbb{Z}_2$, we split the complex as the direct sum of a number of subcomplexes, each isomorphic to an exterior algebra of a vector space of dimension at most $4$ over $\mathbb{Z}$; as before, they are isomorphic to the complex computing the reduced homology of an $n$-simplex ($n \leq 4$) over $\mathbb{Z}$. Hence the homology of the complex, being the direct sum of the homologies of these subcomplexes, is trivial. \end{proof}
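After such a gauge transform the differential carries the standard alternating signs, and $\partial\circ\partial=0$ can be verified mechanically. The following Python sketch is illustrative only (indices start at $0$, so the sign of the $j$-th factor is $(-1)^j$); it applies the signed boundary to integer chains written as dictionaries.

```python
from itertools import combinations

def d(chain):
    """Signed boundary on the exterior algebra over Z:
    d(e_{i1} ^ ... ^ e_{ik}) = sum_j (-1)^j  e_{i1} ^ ... omit i_j ... ^ e_{ik},
    extended linearly to a chain {subset-tuple: integer coefficient}."""
    out = {}
    for S, c in chain.items():
        for j, i in enumerate(S):
            face = tuple(x for x in S if x != i)
            out[face] = out.get(face, 0) + (-1) ** j * c
    return {T: c for T, c in out.items() if c}  # drop zero coefficients

# d o d = 0 on every basis element of the rank-5 exterior algebra:
for k in range(6):
    for S in combinations(range(5), k):
        assert d(d({S: 1})) == {}
```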
Using Lemma~\ref{nice} and an argument similar to that of Proposition~\ref{quasi}, one shows that $H(C)\cong H(C')$ over $\mathbb{Z}$; this concludes the proof of the fact that $H(C)\cong H(B) \otimes V$, where $V \cong \mathbb{Z} \oplus \mathbb{Z}$ with generators in gradings $0$ and $-1$.
\section*{Acknowledgments} I am grateful to Zolt\'an Szab\'o for suggesting this problem, numerous helpful discussions, continuous advice through the course of this work, and reading a draft of this paper. I would also like to thank Iman Setayesh for helpful conversations and his contribution in defining the $\QQ$ filtration in Section~\ref{stab} and Lemma~\ref{gauge}. This work was done when I was a visiting student research collaborator at Princeton University, and I am grateful for the opportunity. This paper is a part of my Ph.D. thesis as a graduate student in Sharif University of Technology under the supervision of Mohammadreza Razvan. I also thank the referee for the helpful comments and suggestions, and Eaman Eftekhary for reading a draft of this paper.
\end{document}
\begin{document}
\author[Robert Laterveer] {Robert Laterveer}
\address{Institut de Recherche Math\'ematique Avanc\'ee, CNRS -- Universit\'e de Strasbourg,\ 7 Rue Ren\'e Des\-car\-tes, 67084 Strasbourg CEDEX, FRANCE.} \email{robert.laterveer@math.unistra.fr}
\title[Zero--cycles on self--products of surfaces]{Zero--cycles on self--products of surfaces: some new examples verifying Voisin's conjecture}
\begin{abstract} An old conjecture of Voisin describes how $0$--cycles of a surface $S$ should behave when pulled--back to the self--product $S^m$ for $m>p_g(S)$. We exhibit some surfaces with large $p_g$ that verify Voisin's conjecture. \end{abstract}
\keywords{Algebraic cycles, Chow groups, motives, Voisin conjecture, Kimura finite--dimensionality conjecture} \subjclass[2010]{Primary 14C15, 14C25, 14C30.}
\maketitle
\section{Introduction}
Let $X$ be a smooth projective variety over $\mathbb{C}$, and let $A^i(X)_{\mathbb{Z}}:=CH^i(X)$ denote the Chow groups of $X$ (i.e. the groups of codimension $i$ algebraic cycles on $X$ with $\mathbb{Z}$--coefficients, modulo rational equivalence \cite{F}). Let $A^i_{hom}(X)_{\mathbb{Z}}$ (and $A^i_{AJ}(X)_{\mathbb{Z}}$) denote the subgroup of homologically trivial (resp. Abel--Jacobi trivial) cycles.
The Bloch--Beilinson--Murre conjectures present a beautiful and coherent dream--world in which Chow groups are determined by cohomology and the coniveau filtration \cite{J2}, \cite{J4}, \cite{Mur}, \cite{Kim}, \cite{MNP}, \cite{Vo}. The following particular instance of this dream--world was first formulated by Voisin:
\begin{conjecture}[Voisin 1993 \cite{V9}]\label{conj} Let $S$ be a smooth projective surface. Let $m$ be an integer larger than the geometric genus $p_g(S)$. Then for any $0$--cycles $a_1,\ldots,a_m\in A^2_{AJ}(S)_{\mathbb{Z}}$, one has
\[ \sum_{\sigma\in\mathfrak S_m} \hbox{sgn}(\sigma) a_{\sigma(1)}\times\cdots\times a_{\sigma(m)}=0\ \ \ \hbox{in}\ A^{2m}(S^m)_{\mathbb{Z}}\ .\]
(Here $\mathfrak S_m$ is the symmetric group on $m$ elements, and $ \hbox{sgn}(\sigma)$ is the sign of the permutation $\sigma$.)
\end{conjecture}
For surfaces of geometric genus $0$, Conjecture \ref{conj} reduces to Bloch's conjecture \cite{B}. For surfaces $S$ of geometric genus $1$, Conjecture \ref{conj} takes on a particularly simple form: in this case, the conjecture stipulates that any $a_1, a_2\in A^2_{AJ}(S)_{\mathbb{Z}}$ should verify the equality
\[ a_1\times a_2 =a_2\times a_1\ \ \ \hbox{in}\ A^4(S\times S)_{\mathbb{Z}}\ .\]
This conjecture is still open for a general $K3$ surface; examples of surfaces of geometric genus $1$ verifying this conjecture are given in \cite{V9}, \cite{16.5}, \cite{19}, \cite{21}. One can also formulate versions of Conjecture \ref{conj} for higher--dimensional varieties; this is studied in \cite{V9}, \cite{17}, \cite{24.4}, \cite{24.5}, \cite{BLP}, \cite{LV}, \cite{Ch}.
On a historical note, it is interesting to observe that Voisin's Conjecture \ref{conj} antedates Kimura's conjecture ``all varieties have finite--dimensional motive'' \cite{Kim}. Both conjectures have a similar flavour: Chow groups of a surface $S$ should have controlled behaviour when pulled--back to the self--product $S^m$, for large $m$. The difference between Voisin's conjecture and Kimura's conjecture lies in the index $m$ which is much lower in Voisin's conjecture. In fact (as explained in \cite{BLP}), Voisin's conjecture follows from a combination of Kimura's conjecture with a strong form of the generalized Hodge conjecture.
The goal of the present note is to collect some (easy) examples of surfaces with geometric genus larger than $1$ verifying Voisin's conjecture.
\begin{nonumbering}[=Corollaries \ref{main1}, \ref{cor2}, \ref{cor4} and \ref{last}] The following surfaces verify Conjecture \ref{conj}:
\item
{(\rom1)} generalized Burniat type surfaces in the family $\mathcal S_{16}$ of \cite{BCF} ($p_g(S)=3$);
\item
{(\rom2)} the hypersurfaces $S\subset A/\iota$ considered in \cite{LNP}, where $A$ is an abelian threefold and $\iota$ is the $-1$-involution ($p_g(S)=3$);
\item
{(\rom3)} minimal surfaces $S$ of general type with $p_g(S)=q(S)=3$ and $K^2_S=6$;
\item{(\rom4)} the double cover of certain cubic surfaces (among which the Fermat cubic)
branched along the Hessian ($p_g(S)=4$);
\item
{(\rom5)} the Fano surface of lines in a smooth cubic threefold ($p_g(S)=10$);
\item{(\rom6)} the quotient $S=F/\iota$, where $F$ is the Fano surface of conics in a Verra threefold and $\iota$ is a certain involution ($p_g(S)=36$);
\item{(\rom7)} the surface of bitangents $S$ of a general quartic in $\mathbb{P}^3$ ($p_g(S)=45$);
\item{(\rom8)} the singular locus $S$ of a general EPW sextic ($p_g(S)=45$).
\end{nonumbering}
A by--product of the proof is that these surfaces all have finite--dimensional motive, in the sense of Kimura \cite{Kim} (this appears to be a new observation for cases (\rom6)--(\rom8)). Also,
certain instances of the generalized Hodge conjecture are verified:
\begin{nonumberingc}[=Corollary \ref{ghc}] Let $S$ be any of the above surfaces, and let $m>p_g(S)$. Then the sub--Hodge structure
\[ \wedge^m H^2(S,\mathbb{Q})\ \subset\ H^{2m}(S^m,\mathbb{Q}) \]
is supported on a divisor.
\end{nonumberingc}
The surfaces considered in this note have an interesting feature in common (which makes it easy to prove Conjecture \ref{conj} for them): for many of them, intersection product induces a surjection
\[ A^1_{hom}(S)\otimes A^1_{hom}(S)\ \twoheadrightarrow\ A^2_{AJ}(S)\ .\]
In the other cases (cases (\rom2), (\rom4), (\rom6)--(\rom8), which have $q(S)=0$), the surface $S$ is dominated by a surface $T$ with the property that the intersection product map
\[ A^1_{hom}(T)\otimes A^1_{hom}(T)\ \to\ A^2_{AJ}(T)\ \]
surjects onto $\ima \bigl( A^2_{AJ}(S)\to A^2_{AJ}(T)\bigr)$.
Using this feature, to prove Conjecture \ref{conj} for these surfaces one is reduced to a problem concerning $0$--cycles on abelian varieties. This last problem has recently been solved by Vial \cite{Ch}, using a strong version of the generalized Hodge conjecture for generic abelian varieties.
\vskip0.6cm
\begin{convention} In this note, the word {\sl variety\/} will refer to a reduced irreducible scheme of finite type over $\mathbb{C}$. A {\sl subvariety\/} is a (possibly reducible) reduced subscheme which is equidimensional.
{\bf Unless indicated otherwise, all Chow groups will be with rational coefficients}: we will denote by $A_j(X)$ the Chow group of $j$--dimensional cycles on $X$ with $\mathbb{Q}$--coefficients (and by $A_j(X)_{\mathbb{Z}}$ the Chow groups with $\mathbb{Z}$--coefficients); for $X$ smooth of dimension $n$ the notations $A_j(X)$ and $A^{n-j}(X)$ are used interchangeably.
The notations $A^j_{hom}(X)$, $A^j_{AJ}(X)$ will be used to indicate the subgroups of homologically trivial, resp. Abel--Jacobi trivial cycles.
The contravariant category of Chow motives (i.e., pure motives with respect to rational equivalence as in \cite{Sc}, \cite{MNP}) will be denoted $\mathcal M_{\rm rat}$.
We will write $H^j(X)$ to indicate singular cohomology $H^j(X,\mathbb{Q})$. \end{convention}
\section{Generalized Burniat type surfaces with $p_g=3$}
\begin{definition}[\cite{BCF}]\label{gbt} Let $A=E_1\times E_2\times E_3$ be a product of elliptic curves. A {\em generalized Burniat type surface\/} (or ``GBT surface'')
is a quotient $S=Y/G$, where $Y\subset A$ is a smooth hypersurface corresponding to the square of a principal polarization, and $G\cong \mathbb{Z}_2^3$ acts freely.
\end{definition}
\begin{remark} GBT surfaces are minimal surfaces of general type with $p_g(S)=q(S)$ ranging from $0$ to $3$. There are $16$ irreducible families of GBT surfaces, labelled $\mathcal S_1,\ldots,\mathcal S_{16}$ in \cite{BCF}. The families $\mathcal S_1, \mathcal S_2$ have moduli--dimension $4$, the other families are $3$--dimensional.
\end{remark}
\begin{theorem}[Peters \cite{Chris}]\label{Gbt} Let $S$ be a GBT surface with $p_g(S)=3$ (i.e., $S$ is in the family labelled $\mathcal S_{16}$ in \cite{BCF}), and let
$A$ be the abelian threefold as in Definition \ref{gbt}.
\noindent
(\rom1)
$S$ has finite--dimensional motive, and there are natural isomorphisms
\[ A^2_{(2)}(A)\ \xrightarrow{\cong}\ A^2_{AJ}(S)\ \xrightarrow{\cong}\ A^3_{(2)}(A)\ .\]
(Here $A^\ast_{(\ast)}(A)$ refers to Beauville's decomposition \cite{Beau}.)
\noindent
(\rom2) Intersection product induces a surjection
\[ A^1_{hom}(S)\otimes A^1_{hom}(S)\ \twoheadrightarrow\ A^2_{AJ}(S)\ .\]
\end{theorem}
\begin{proof} Part (\rom1) is \cite[Theorem 4.2]{Chris}.
Part (\rom2) follows from (\rom1), in view of the fact that intersection product induces a surjection
\[ A^1_{hom}(A)\otimes A^1_{hom}(A)\ \twoheadrightarrow\ A^2_{(2)}(A) \ \]
\cite[Proposition 4]{Beau}.
\end{proof}
Property (\rom2) of Theorem \ref{Gbt} is relevant to Conjecture \ref{conj}:
\begin{proposition}\label{handy0} Let $S$ be a smooth projective surface, and assume that intersection product induces a surjection
\[ A^1_{hom}(S)\otimes A^1_{hom}(S)\ \twoheadrightarrow\ A^2_{AJ}(S)\ .\]
Then $S$ has finite--dimensional motive.
Also, Conjecture \ref{conj} is true for $S$ with $m>{q(S)\choose 2}$.
(In particular, in case of equality $p_g(S)= {q(S)\choose 2}$ the full Conjecture \ref{conj} is true for $S$.)
\end{proposition}
\begin{proof} Let $\alpha\colon S\to A:=\hbox{Alb}(S)$ be the Albanese map. There is a commutative diagram
\[ \begin{array}[c]{ccc}
A^1_{hom}(S)\otimes A^1_{hom}(S) &\to& A^2_{AJ}(S)\\
&&\\
\ \ \ \ \uparrow{\scriptstyle (\alpha^\ast,\alpha^\ast)}&& \ \ \ \ \uparrow{\scriptstyle \alpha^\ast}\\
&&\\
A^1_{hom}(A)\otimes A^1_{hom}(A) &\to& A^2_{(2)}(A)\\
\end{array} \]
(where horizontal maps are induced by intersection product, and $A^\ast_{(\ast)}(A)$ refers to the Beauville decomposition \cite{Beau} of the Chow ring of any abelian variety). As the left vertical map is an isomorphism, the assumption implies that the right vertical map is surjective. In view of \cite[Theorem 3.11]{V3}, this implies $S$ has finite--dimensional motive. (For an alternative proof of \cite[Theorem 3.11]{V3} in terms of birational motives, cf. \cite[Theorem B.7]{LNP}. For a similar result, cf. \cite[Proposition 2.1]{Diaz}.)
Next, let us consider Conjecture \ref{conj} for $S$. Thanks to Rojtman's result \cite{Ro}, it suffices to establish Conjecture \ref{conj} for $0$--cycles with $\mathbb{Q}$--coefficients.
Because $\alpha^\ast\colon A^2_{(2)}(A)\to A^2_{AJ}(S)$ is surjective, to prove Conjecture \ref{conj} for $S$ it suffices to prove (a version of) Conjecture \ref{conj} for elements $b_1,\ldots,b_m\in A^2_{(2)}(A)$. We now reduce to $0$--cycles on $A$: given $b_j\in A^2_{(2)}(A)$, let
\[ c_j:= b_j\cdot h^{q-2}\ \ \in\ A^q_{(2)}(A)\ ,\ \ \ j=1,\ldots,m\ ,\]
be $0$--cycles, where $q:=q(S)$ is the dimension of $A$ and $h\in A^1(A)$ is a symmetric ample divisor.
Let us consider the $\mathfrak S_m$--invariant ample divisor
\[ H:= \sum_{j=1}^m (pr_j)^\ast(h)\ \ \ \in\ A^1(A^m)\ .\]
From K\"unnemann's hard Lefschetz result \cite{Kun}, we know that the map
\[ \cdot H^{m(q-2)}\colon\ \ A^{2m}_{(2m)}(A^m)\ \to\ A^{qm}_{(2m)}(A^m) \]
is an isomorphism. On the other hand,
\[ \begin{split}
c_{\sigma(1)}\times\cdots\times c_{\sigma(m)}&= \bigl(b_{\sigma(1)}\times\cdots\times b_{\sigma(m)} \bigr)\cdot \bigl( h^{q-2}\times\cdots\times h^{q-2}\bigr)\\
&= \bigl(b_{\sigma(1)}\times\cdots\times b_{\sigma(m)} \bigr)\cdot H^{m(q-2)}\ \ \ \hbox{in}\ A^{qm}_{(2m)}(A^m)\\
\end{split}\]
(since intersecting $A^2(A)$ with a power $h^r, r>q-2$ gives $0$).
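(To spell out the parenthetical claim: for any $b\in A^2(A)$ and $r>q-2$ one has $b\cdot h^{r}\in A^{2+r}(A)=0$, because
\[ 2+r\ >\ q\ =\ \dim A\ ,\]
and Chow groups vanish in codimension above the dimension. Consequently, in the expansion of $H^{m(q-2)}$ every term containing some factor $(pr_j)^\ast(h^{r_j})$ with $r_j>q-2$ dies against $b_{\sigma(j)}$, and only the balanced term $h^{q-2}\times\cdots\times h^{q-2}$ can contribute.)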
We are thus reduced to proving that for any $c_1,\ldots,c_m\in A^q_{(2)}(A)$, where $m>{q\choose 2}$, there is equality
\[ \sum_{\sigma\in\mathfrak S_m} \hbox{sgn}(\sigma) \, c_{\sigma(1)}\times\cdots\times c_{\sigma(m)}=0\ \ \ \hbox{in}\ A^{qm}(A^m)\ .\]
At this point, we can invoke the following general result on $0$--cycles on abelian varieties to conclude:
\begin{theorem}[Vial \cite{Ch}] Let $A$ be an abelian variety of dimension $g$, and let $c_1,\ldots,c_m\in A^g_{(k)}(A)$.
If $k$ is even and $m>{g\choose k}$, there is vanishing
\[ \sum_{\sigma\in\mathfrak S_m} \hbox{sgn}(\sigma) \, c_{\sigma(1)}\times\cdots\times c_{\sigma(m)}=0\ \ \ \hbox{in}\ A^{mg}(A^m)\ .\]
If $k$ is odd and $m>{g\choose k}$, there is vanishing
\[ \sum_{\sigma\in\mathfrak S_m} c_{\sigma(1)}\times\cdots\times c_{\sigma(m)}=0\ \ \ \hbox{in}\ A^{mg}(A^m)\ .\]
\end{theorem}
\begin{proof} This is \cite[Theorem 4.1]{Ch}, whose proof uses the concept of ``generically defined cycles on abelian varieties'', and a strong form of the generalized Hodge conjecture for powers of generic abelian varieties, due to Hazama \cite[Theorem 2.12]{Ch}. The case $k=g$ was proven earlier (and differently) in
\cite[Example 4.40]{Vo}.
\end{proof}
This ends the proof of Proposition \ref{handy0}.
\end{proof}
We can now prove that surfaces in the family $\mathcal S_{16}$ verify Voisin's conjecture:
\begin{corollary}\label{main1} Let $S$ be a GBT surface with $p_g(S)=3$ (i.e., $S$ is in the family labelled $\mathcal S_{16}$ in \cite{BCF}).
Then $S$ verifies Conjecture \ref{conj}: for any $m>3$ and $a_1,\ldots,a_m\in A^2_{AJ}(S)$, there is equality
\[ \sum_{\sigma\in\mathfrak S_m} \hbox{sgn}(\sigma) a_{\sigma(1)}\times\cdots\times a_{\sigma(m)}=0\ \ \ \hbox{in}\ A^{2m}(S^m)\ .\]
\end{corollary}
\begin{proof} This follows from Proposition \ref{handy0}, in view of Theorem \ref{Gbt} plus the fact that $q(S)=p_g(S)=3$.
\end{proof}
We recall that the truth of Conjecture \ref{conj} implies a certain instance of the generalized Hodge conjecture:
\begin{corollary}\label{ghc} Let $S$ be a surface verifying Conjecture \ref{conj}, and let $m>p_g(S)$. Then the sub--Hodge structure
\[ \wedge^m H^2(S,\mathbb{Q})\ \subset\ H^{2m}(S^m,\mathbb{Q}) \]
is supported on a divisor.
\end{corollary}
\begin{proof} This is already observed in \cite{V9}. Consider the Chow motive $\wedge^m h^2(S)$ defined by the idempotent
\[ \Gamma:= \bigl(\sum_{\sigma\in\mathfrak S_m} \hbox{sgn}(\sigma) \Gamma_\sigma\bigr)\circ \bigl(\pi^2_S\times\cdots\times \pi^2_S\bigr)\ \ \ \in\ A^{2m}(S^m\times S^m)\ .\]
Conjecture \ref{conj} is equivalent to saying that $A_0(\wedge^m h^2(S))=0$.
Applying the Bloch--Srinivas argument \cite{BS} to $\Gamma$, one obtains a rational equivalence
\[ \Gamma=\gamma\ \ \ \hbox{in}\ A^{2m}(S^m\times S^m)\ ,\]
where $\gamma$ is a cycle supported on $S^m\times D$ for some divisor $D\subset S^m$.
On the other hand, $\Gamma$ acts on $H^{2m}(S^m,\mathbb{Q})$ as projector on $\wedge^m H^2(S,\mathbb{Q})$. It follows that $ \wedge^m H^2(S,\mathbb{Q})$ is supported on $D$.
\end{proof}
\section{A criterion}
The approach of the last section can be conveniently rephrased as follows:
\begin{proposition}\label{handy} Let $S$ be a smooth projective surface. Assume that $S$ has finite--dimensional motive, and that cup product induces an isomorphism
\[ C\colon\ \ \wedge^2 H^1(S,\mathcal O_S) \ \xrightarrow{\cong}\ H^2(S,\mathcal O_S)\ .\]
Then Conjecture \ref{conj} is true for $S$.
\end{proposition}
\begin{proof} Surjectivity of $C$, combined with finite--dimensionality of the motive of $S$, ensures that intersection product induces a surjection
\[ A^1_{hom}(S)\otimes A^1_{hom}(S)\ \twoheadrightarrow\ A^2_{AJ}(S)\ \]
\cite{moib}. The assumption that $C$ is an isomorphism implies that $p_g(S)={{q(S)}\choose{2}}$. The result now follows from Proposition \ref{handy0}.
\end{proof}
This takes care of two more cases announced in the introduction:
\begin{corollary}\label{cor2} Conjecture \ref{conj} is true for the following surfaces:
\item
{(\rom1)} minimal surfaces of general type with $p_g(S)=q(S)=3$ and $K^2=6$;
\item
{(\rom2)} the Fano surface of lines in a cubic threefold ($p_g(S)=10$).
\end{corollary}
\begin{proof}
In case (\rom1), it is known that $S$ is the symmetric square $S=C^{(2)}$ where $C$ is a genus $3$ curve \cite{CCML} (cf. also \cite[Theorem 9]{BCP}). Thus, the assumptions of Proposition \ref{handy} are clearly satisfied.
As for case (\rom2), it is well--known this satisfies the assumptions of Proposition \ref{handy} (finite--dimensionality is proven in \cite{Diaz} and \cite{22}). Alternatively, one could apply Proposition \ref{handy0} directly (the assumption of Proposition \ref{handy0} is satisfied by the Fano surface thanks to \cite{B}; an alternative proof is sketched in \cite[Remark 20.8]{SV}).
\end{proof}
\section{A variant criterion}
Let us now state a variant version of Proposition \ref{handy0}:
\begin{proposition}\label{handy1} Let $S$ be a smooth projective surface. Assume that $S=S^\prime/<\iota>$, where $\iota$ is an automorphism of a surface $S^\prime$
such that intersection product induces a surjection
\[ A^1_{hom}(S^\prime)\otimes A^1_{hom}(S^\prime) \ \twoheadrightarrow\ A^2_{AJ}(S^\prime)^\iota\ .\]
Then $S$ has finite--dimensional motive.
Also, Conjecture \ref{conj} is true for $S$ with $m>{q(S^\prime)\choose 2}$.
(In particular, if $p_g(S)={q(S^\prime)\choose 2}$ the full Conjecture \ref{conj} is true for $S$.)
\end{proposition}
\begin{proof} This is proven just as Proposition \ref{handy0}.
\end{proof}
This takes care of several more cases announced in the introduction:
\begin{corollary}\label{cor4} Conjecture \ref{conj} is true for the following surfaces:
\item
{(\rom1)} surfaces $S=T/<\iota>$, where $T$ is a smooth divisor in the linear system $\vert 2\Theta\vert$ on a principally polarized abelian threefold, and $\iota$ is the $(-1)$--involution ($p_g(S)=3$);
\item{(\rom2)} the quotient $S=F/\iota$, where $F$ is the Fano surface of conics in a general Verra threefold and $\iota$ is a certain involution ($p_g(S)=36$);
\item{(\rom3)} the surface of bitangents $S$ of a general quartic in $\mathbb{P}^3$ ($p_g(S)=45$);
\item{(\rom4)} the surface $S$ that is the singular locus of a general EPW sextic ($p_g(S)=45$).
\end{corollary}
\begin{proof}
\noindent
\item{(\rom1)} The surface $S$ verifies the assumptions of Proposition \ref{handy1} with $S^\prime=T$, according to \cite[Subsection 7.2]{LNP}.
\noindent
\item{(\rom3)} More generally, one may consider the surface $S$ studied by Welters \cite{Wel} and defined as follows. Let $Y$ be a {\em quartic double solid\/}, i.e.
$Y\to\mathbb{P}^3$ is a double cover branched along a smooth quartic $Q$. Let $T$ be the surface of conics contained in $Y$, and let $\iota\in\aut(T)$ be the involution induced by the covering involution of $Y$.
Then the surface $S:=T/<\iota>$ is a smooth surface of general type with $p_g(S)=45$.
(The generic quartic $K3$ surface $Q$ does not contain a line. In this case, as explained in \cite{Fer} (cf. also \cite[Example 3.5]{Beau1} and \cite[Remark 8.5]{Huy1}), the surface $S$ is (isomorphic to) the so--called ``surface of bitangents'', which is the fixed locus of Beauville's anti--symplectic involution
\[ Q^{[2]}\ \to\ Q^{[2]} \]
first considered in \cite{Beau0}. As noted in \cite[Example 3.5]{Beau1}, this gives another proof of the fact that $p_g(S)=45$.)
Voisin has proven \cite[Corollaire 3.2(b)]{V8} (cf. also \cite[Remarque 3.4]{V8}) that intersection product induces a surjection
\[ A^1_{hom}(T)\otimes A^1_{hom}(T)\ \twoheadrightarrow\ A^2_{AJ}(T)^\iota=A^2_{AJ}(S)\ .\]
Since $p_g(S)=45$ and $q(T)=10$ \cite{Wel}, the assumptions of Proposition \ref{handy1} are met.
\noindent
\item
{(\rom2)} A {\em Verra threefold\/} $Y$ is a divisor of bidegree $(2,2)$ in $\mathbb{P}^2\times\mathbb{P}^2$ (these varieties were introduced in \cite{Ver}). Let $F$ be the Fano surface of conics of bidegree $(1,1)$ contained in $Y$. As observed in \cite[Section 5]{IKKR}, $F$ admits an involution $\iota$ such that $(F,\iota)$ enters into the set--up of Voisin's work \cite{V8}. Thus, \cite[Corollaire 3.2(b)]{V8} implies that intersection product induces a surjection
\[ A^1_{hom}(F)\otimes A^1_{hom}(F)\ \twoheadrightarrow\ A^2_{AJ}(F)^\iota=A^2_{AJ}(S)\ .\]
Since $q(F)=9$ and $p_g(S)=36$ \cite[Proposition 5.1]{IKKR}, the assumptions of Proposition \ref{handy1} are again met.
\noindent
\item
{(\rom4)} Let $Y$ be a transverse intersection of the Grassmannian $Gr(2,5)\subset\mathbb{P}^9$ with a codimension $2$ linear subspace and a quadric (i.e., $Y$ is an {\em ordinary Gushel--Mukai threefold\/}, in the language of \cite{DK}, \cite{DK1}). For generic $Y$, the surface $F$ of conics contained in $Y$ is smooth and irreducible.
There exists a birational involution $\iota\in\hbox{Bir}(F)$, such that intersection product induces a surjection
\[ A^1_{hom}(F)\otimes A^1_{hom}(F)\ \twoheadrightarrow\ A^2_{AJ}(F)^\iota\ \]
\cite[Corollaire 3.2(b)]{V8}. The surface $F$ and the birational involution $\iota$ are also studied in \cite{Lo} and \cite{DIM}. There exists a (geometrically meaningful) birational morphism $F\to F_m$, where $F_m$ is smooth and such that $\iota$ extends to a morphism $\iota_m$ on $F_m$ \cite{Lo}, \cite[Section 6]{DIM}, \cite[Section 5.1]{IM}. For $Y$ generic, the quotient $S:=F_m/<\iota_m>$ is smooth, and it is isomorphic to the singular locus of the EPW sextic associated to $Y$.
(This is contained in \cite{Lo}, \cite{DIM}. The double cover $F_m\to S$ is also described in \cite[Theorem 5.2(2)]{DK3}.)
Since $A^1_{hom}(-)$ and $A^2_{AJ}(-)$ are birational invariants among smooth varieties, Voisin's result implies there is also a surjection
\[ A^1_{hom}(F_m)\otimes A^1_{hom}(F_m)\ \twoheadrightarrow\ A^2_{AJ}(F_m)^{\iota_m}=A^2_{AJ}(S)\ .\]
It is known that $q(F_m)=10$ \cite{Lo} and $p_g(S)=45$ \cite{OG0} (this can also be deduced from \cite{Beau1}), and so Proposition \ref{handy1} applies.
\end{proof}
\begin{remark} In cases (\rom2), (\rom3) and (\rom4) of Corollary \ref{cor4}, the surface $S$ is the fixed locus of an anti--symplectic involution of a hyperk\"ahler fourfold. For the surface of bitangents, this is Beauville's involution on the Hilbert square $Q^{[2]}$.
For the singular locus $S$ of a general EPW sextic, this is (isomorphic to) the fixed locus of the anti--symplectic involution of the associated double EPW sextic.
For the surface $S$ of (\rom2), this is the anti--symplectic involution of the ``double EPW quartic'' (double EPW quartics form a $19$--dimensional family of hyperk\"ahler fourfolds, introduced in \cite{IKKR}).
Is this merely a coincidence, or is there something fundamental going on? Do other two--dimensional fixed loci of anti--symplectic involutions of hyperk\"ahler fourfolds also enter into the set--up of Proposition \ref{handy1}?
\end{remark}
\begin{remark} Inspired by the famous results concerning the Fano surface of the cubic threefold, Voisin \cite{V8} systematically studies the Fano surface $F$ of conics contained in Fano threefolds $Y$. Under certain conditions, she is able to prove \cite[Corollaire 3.2]{V8} that there is a birational involution $\iota$ on $F$, with the property that
\[ A^1_{hom}(F)\otimes A^1_{hom}(F)\ \to\ A^2_{AJ}(F)^{<\iota>} \]
is surjective (and so one could hope to apply Proposition \ref{handy1} to find more examples of surfaces verifying Conjecture \ref{conj}).
Examples given in \cite{V8} (other than those mentioned in Corollary \ref{cor4} above) include:
\noindent
\item{(1)}
Fano threefolds $Y$ of index $1$, Picard number $1$ and genus $g\in[7,10]\cup\{12\}$ \cite[Section 2.4]{V8};
\noindent
\item{(2)}
a general complete intersection of two quadrics in $\mathbb{P}^5$ \cite[Section 2.7]{V8};
\noindent
\item{(3)}
the intersection of the Grassmannian $Gr(2,5)\subset\mathbb{P}^9$ with a general codimension $3$ linear subspace \cite[Section 2.7]{V8}.
(In all these cases, $\iota$ is actually the identity.)
In case (1), the surface of conics $F$ is not very interesting: for $g=12$, $F\cong\mathbb{P}^2$ \cite[Proposition B.4.1]{KPS}; for $g=10$, $F$ is an abelian surface \cite[Proposition B.5.5]{KPS}; for $g=9$, $F$ is a $\mathbb{P}^1$--bundle over a curve \cite[Proposition 2.3.6]{KPS}; for $g=8$, $F$ is isomorphic to the Fano surface of a cubic threefold \cite[Proposition B.6.1]{KPS}; for $g=7$, $F$ is the symmetric product of a curve of genus $7$ \cite{Kuz05}. (These results are also discussed in \cite[Section 3.1]{IM0}.)
The other two cases also turn out to reduce to known cases. Indeed, for case (2) the Fano surface of lines is isomorphic to the Jacobian of a genus $2$ curve \cite[Theorem 2]{DR}. For case (3), the Fano threefold $Y$ is birational to a cubic threefold $Y^\prime$, and the Fano surface of conics on $Y$ is birational to the Fano surface of lines on
$Y^\prime$ \cite[Theorem B and Section 6]{Puts}. Since Conjecture \ref{conj} is obviously a birationally invariant statement, Conjecture \ref{conj} for the Fano surface of case (3) thus reduces to Corollary \ref{cor2}(\rom2).
\end{remark}
\begin{remark} There are interesting relations between the surfaces of Corollary \ref{cor4} and other Fano surfaces:
In case (\rom2), the general Verra threefold $Y$ is birational to a one--nodal ordinary Gushel--Mukai threefold $\bar{X}$, and there is an induced birational map between the Fano surface of lines $F(Y)$ and the Fano surface of conics $F(\bar{X})$ \cite[Section 5.4 and Proposition 6.6]{DIM2}.
In case (\rom3), the general quartic double solid $Y$ is known to be birational to a one--nodal ordinary degree $10$ Fano threefold $\bar{X}$, and there is an induced birational map between the Fano surface of lines $F(Y)$ and the Fano surface of conics $F(\bar{X})$ \cite[Proposition 5.2]{DIM}.
\end{remark}
\section{Double covers of cubic surfaces}
\begin{theorem}[Ikeda \cite{Ike}]\label{ike} Let $Y\subset\mathbb{P}^3$ be a smooth cubic surface, and let $\bar{S}\to Y$ be the double cover of $Y$ branched along its Hessian. Let $S\to\bar{S}$ be a minimal resolution of singularities. The surface $S$ is a minimal surface of general type with $p_g(S)=4$ and $K^2=6$.
\end{theorem}
\begin{remark} The intersection of $Y$ with its Hessian is smooth (and so $S=\bar{S}$) precisely when $Y$ has no Eckardt points. In this case, the Picard rank of $S$ is $28$ \cite[Theorem 6.1]{Ike}. At the other extreme, if $Y$ is the Fermat cubic (which is the only cubic surface attaining the maximal number of Eckardt points) the Picard rank of $S$ is $44$ \cite[Theorem 6.6]{Ike}, and so in this case $S$ is a $\rho$--maximal surface (in the sense of \cite{Beau3}). For more on Eckardt points of cubic surfaces, cf. \cite[Chapter 2 Section 3.6]{Huy}.
\end{remark}
Let us now prove Voisin's conjecture for some of Ikeda's double covers:
\begin{corollary}\label{last} Let $Y\subset\mathbb{P}^3$ be a smooth cubic surface, and let $S$ be a double cover as in Theorem \ref{ike}.
Assume that $Y$ is in the pencil
\[ x_0^3 + x_1^3 +x_2^3 -3\lambda x_0 x_1 x_2 + x_3^3 =0 \ .\]
Then $S$ verifies Conjecture \ref{conj}: for any $m>4$ and $a_1,\ldots,a_m\in A^2_{hom}(S)_{\mathbb{Z}}$, there is equality
\[ \sum_{\sigma\in\mathfrak S_m} \hbox{sgn}(\sigma) a_{\sigma(1)}\times\cdots\times a_{\sigma(m)}=0\ \ \ \hbox{in}\ A^{2m}(S^m)_{\mathbb{Z}}\ .\]
\end{corollary}
\begin{proof} A first part of the argument works for arbitrary smooth cubic surfaces $Y$; only in the last step will we use that $Y$ is of a specific type.
Let us assume $Y\subset\mathbb{P}^3$ is any smooth cubic, defined by a cubic polynomial $f(x_0,\ldots,x_3)$. Let $Z\subset\mathbb{P}^4$ be the smooth cubic threefold defined by
\[ f(x_0,\ldots,x_3)+x_4^3=0\ ,\]
so $Z$ has the structure of a triple cover
\[ \rho\colon\ \ Z\ \to\ \mathbb{P}^3 \]
branched along $Y$.
Let $F(Z)$ denote the Fano surface of lines contained in $Z$. Ikeda \cite{Ike} shows that there is a dominant rational map of degree $3$
\[ f\colon\ \ F(Z)\ \dashrightarrow\ S \ ,\]
and an isomorphism
\[ f^\ast\colon\ \ H^2_{tr}(S,\mathbb{Q})\ \xrightarrow{\cong}\ H^2_{tr}(F(Z),\mathbb{Q})^{Gal(\rho)}\ .\]
This implies that there is an isomorphism of homological motives
\begin{equation}\label{homiso} {}^t \Gamma_f\colon\ \ \ t(S)\ \xrightarrow{\cong}\ t(F(Z))^{Gal(\rho)}:=(F(Z),{1\over 3}\sum_{g\in Gal(\rho)} \Gamma_g\circ \pi^2_{tr},0)\ \ \ \hbox{in}\ \mathcal M_{\rm hom}\ .\end{equation}
(Here for any surface $T$, the motive $t(T):=(T,\pi^2_{tr},0)\in\mathcal M_{\rm rat}$ denotes the {\em transcendental part of the motive\/} as in \cite{KMP}.)
According to \cite{Diaz} and \cite{22}, the Fano surface $F(Z)$ has finite--dimensional motive (in the sense of Kimura \cite{Kim}, \cite{An}, \cite{J4}). The surface $S$, being rationally dominated by $F(Z)$, also has finite--dimensional motive. Thus, one may upgrade (\ref{homiso}) to an isomorphism of Chow motives
\[ {}^t \Gamma_f\colon\ \ \ t(S)\ \xrightarrow{\cong}\ t(F(Z))^{Gal(\rho)}\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\]
In particular, this implies that there is an isomorphism of Chow groups
\[ f^\ast\colon A^2_{hom}(S)=A^2_{AJ}(S)\ \xrightarrow{\cong}\ A^2_{AJ}(F(Z))^{Gal(\rho)}\ .\]
Let $A$ be the 5--dimensional Albanese variety of $F(Z)$ (which is isomorphic to the intermediate Jacobian of $Z$). As observed in \cite{Diaz}, the inclusion $F(Z)\hookrightarrow A$ induces an isomorphism
\[ A^2_{(2)}(A)\cong A^2_{AJ}(F(Z))\ .\]
In particular, there is a restriction--induced isomorphism
\[ A^2_{(2)}(A)^{Gal(\rho)}\cong A^2_{AJ}(F(Z))^{Gal(\rho)}\ ,\]
where we simply use the same letter $\rho$ for the action induced by the triple cover $\rho\colon Z\to\mathbb{P}^3$.
Consequently, it suffices to prove a version of Conjecture \ref{conj} for cycles in $ A^2_{(2)}(A)^{Gal(\rho)}$. Also, using K\"unnemann's hard Lefschetz theorem (for some $Gal(\rho)$--invariant ample divisor), one reduces to a statement for cycles in $ A^5_{(2)}(A)^{Gal(\rho)}$ (i.e., $0$--cycles). This last statement can be proven, subject to some restrictions on the cubic surface $Y$, thanks to the following result:
\begin{proposition}[Vial \cite{Ch}]\label{factors} Let $B$ be an abelian variety of dimension $g$, and assume $B$ is isogenous to $ E_1^{r_1}\times E_2^{r_2}\times E_3^{r_3}$, where the $E_j$ are elliptic curves. Let $\Gamma\in A^g(B\times B)$ be an idempotent which lies in the sub--algebra generated by symmetric divisors. Assume that $\Gamma^\ast H^{j,0}(B)=0$ for all $j$. Then also
\[ \Gamma_\ast A^g(B)=0\ .\]
\end{proposition}
\begin{proof} This is a special case of \cite[Theorem 3.15]{Ch}, whose hypotheses are more general.
\end{proof}
It remains to verify that Proposition \ref{factors} applies to our set--up. If the cubic threefold $Z=Z_\lambda$ is in the pencil
\[ x_0^3 + x_1^3 +x_2^3 -3\lambda x_0 x_1 x_2 + x_3^3 +x_4^3=0 \ ,\]
its intermediate Jacobian $A$ is isogenous to $E_0^3\times E_\lambda^2$, where $E_\lambda$ is the elliptic curve
\[ x_0^3 + x_1^3 +x_2^3 -3\lambda x_0 x_1 x_2=0\ \]
\cite{Rou}.
We can apply Proposition \ref{factors} with $B:=A^m$ and
\[ \Gamma:= \bigl(\sum_{g\in Gal(\rho)} \Gamma_g\times \cdots \times\Gamma_g\bigr) \circ \bigl(\sum_{\sigma\in \mathfrak S_m} \hbox{sgn}(\sigma)\,\Gamma_\sigma\bigr) \circ \bigl( \pi^8_A\times \cdots \times \pi^8_A\bigr) \ \ \ \in A^{5m}(A^m\times A^m)\ .\]
Here $\pi^8_A$ is part of the Chow--K\"unneth decomposition of \cite{DM}, with the property that
\[ A^5_{(2)}(A)=(\pi^8_A)_\ast A^5(A)\ .\]
Since $g\in Gal(\rho)$ and $\sigma\in \mathfrak S_m$ are homomorphisms of abelian varieties, and the $\pi^8_A$ are symmetrically distinguished (in the sense of O'Sullivan \cite{OS}) and generically defined (in the sense of Vial \cite{Ch}), the correspondence $\Gamma$ is in the sub--algebra generated by symmetric divisors \cite[Proposition 3.11]{Ch}. In particular, the correspondence $\Gamma$ is symmetrically distinguished, and so (since it is idempotent in cohomology) idempotent.
The correspondence ${}^t \Gamma$ acts on cohomology as projector on
\[ \wedge^m \bigl( H^2(A)^{Gal(\rho)}\bigr)\ .\]
Since
\[ \dim \hbox{Gr}^0_F H^2(A)^{Gal(\rho)}=p_g(S)=4\ ,\]
we have that $\Gamma^\ast=({}^t \Gamma)_\ast$ is zero on $H^{j,0}(B)$ as soon as $m>4$. Applying Proposition \ref{factors}, we can prove Conjecture \ref{conj} for
$A^5_{(2)}(A)^{Gal(\rho)}$ (and hence, as explained above, also for $A^2_{AJ}(S)$): let $b_1,\ldots,b_m\in A^5_{(2)}(A)^{Gal(\rho)}$, where $m>4$. Then
\[ \sum_{\sigma\in\mathfrak S_m} \hbox{sgn}(\sigma)\, b_{\sigma(1)}\times b_{\sigma(2)}\times\cdots\times b_{\sigma(m)}=\Gamma_\ast (b_1\times b_2\times\cdots\times b_m)=0\ \ \ \hbox{in}\ A^{5m}(A^m)\ .\]
\end{proof}
\begin{remark} The argument of Corollary \ref{last} also applies to double covers of some other cubic surfaces. For instance, let $Y$ be a cubic surface, let $S$ be the double cover as in Theorem \ref{ike}, and let $J(Z)$ be the intermediate Jacobian of the associated cubic threefold. If $J(Z)$ is $\rho$--maximal, then $S$ verifies Conjecture \ref{conj}. Indeed, $\rho$--maximality implies that $J(Z)$ is isogenous to $E^5$ for some elliptic curve $E$ \cite[Proposition 3]{Beau3}, and so Proposition \ref{factors} applies.
\end{remark}
\vskip1cm \begin{nonumberingt} Thanks to the wonderful staff of the Executive Lounge at the Schilik Math Research Institute. \end{nonumberingt}
\vskip1cm
\end{document} |
\begin{document}
\title{Toroidal Embeddings of Right Groups} \begin{abstract} In this note we study embeddings of Cayley graphs of right groups on surfaces. We characterize those right groups which have a toroidal but no planar Cayley graph, such that the generating system of the right group has a minimal generating system of the group as a factor. \end{abstract}
Keywords: Cayley graph, right group, planar, toroidal, embedding
Mathematics Subject Classification: 05C10, 05C25, 20M30, 57M15
\section{Preliminaries} A graph is said to be \textit{(2-cell-)embedded} in a surface $M$ if it is ``drawn'' in $M$ such that edges intersect only at their common vertices and deleting the graph from $M$ yields a disjoint union of disks. A graph is said to be \textit{planar} if it can be embedded in the plane. By the \textit{genus} of a graph $X$ we mean the minimum genus among all surfaces in which $X$ can be embedded. So if $X$ is planar then the genus of $X$ is zero. If a non-planar graph can be embedded on the torus, that is, on the orientable surface of genus 1, it is called \textit{toroidal}. A graph is said to be \textit{outerplanar} if it has an embedding such that one face is incident to every vertex.
It is known that each group can be defined in terms of generators and relations, and that corresponding to each such (non-unique) presentation there is a unique graph, called the Cayley graph of the presentation. A ``drawing'' of this graph gives a ``picture'' of the group from which certain properties of the group can be determined. The same principle can be used for other algebraic systems. So algebraic systems with a given system of generators will be called \textit{planar} or \textit{toroidal} if the respective Cayley graphs can be embedded in the plane or on the torus.
Finite planar groups have been cataloged by Maschke~\cite{M}. On the basis of Maschke's Theorem, in this work we investigate embeddings of certain completely regular semigroups (unions of groups), namely of right groups. This is a continuation of the investigations from~\cite{Xia} where Clifford semigroups were in focus. Here our attention is restricted to a special class of presentations of right groups for which we classify the toroidal right groups. Note that this generally only gives upper bounds on the genus of right groups. The full determination of the genus will be studied in a subsequent paper~\cite{kk}.
We use $K_n$ for the complete graph on $n$ vertices, $C_n$ for the cycle on $n$ vertices, and $K_{n,n}$ for the respective complete bipartite graph. We denote the cyclic group of order $n$ by $\mathbb{Z}_n=\{0,\dots,n-1\}$, and the dihedral, symmetric and alternating groups by $D_n$, $S_n$ and $A_n$, respectively.
We recall that a \textit{right group} is a semigroup of the form $G\times R_r$, where $G$ is a group and $R_r$ is a right zero semigroup, i.e., $R_r=\{r_1,\dots,r_r\}$ with the multiplication $r_ir_j=r_j$ for all $r_i,r_j\in R_r$.
Every semigroup presentation is associated with a \textit{Cayley color graph}: the vertices correspond to the elements of the semigroup; next, imagine the generators of the semigroup to be associated with distinct colors. If vertices $v_{1}$ and $v_{2}$ correspond to semigroup elements $s_{1}$ and $s_{2}$ respectively, then there is a directed edge (of the color of the generator $e$) from $v_{1}$ to $v_{2}$ if and only if $s_{1}e=s_{2}$. It is also possible to construct a Cayley color graph by action from the left. It is clear that for semigroups the structure of this graph may change considerably when the side of the action is changed.
In this note we consider the graph obtained from the Cayley color graph by suppressing all edge directions and all edge colors and deleting loops and multiple edges, that is, the uncolored Cayley graph. It is clear that in passing from the Cayley color graph to the corresponding uncolored graph algebraic information is lost, but the genus is not changed. We call this graph the \textit{Cayley graph} and denote it by ${\it Cay}(S,C)$ for the semigroup $S$ with the set of generators $C\subseteq S$.
The reader is referred to \cite{GT}, \cite{IK}, \cite{Kilp},
\cite{Pe}, \cite{White} and \cite{Xia} for the terminology and notations which are not given in this paper.
We need the following results. \begin{resu}\label{Euler}{\em (Euler, Poincar{\'e} 1758)} A finite graph with $n$ vertices and $m$ edges which is $2$-cell embedded on an orientable surface
$M$ of genus $g$ with $f$ faces satisfies the Euler-Poincar{\'e} formula: $n-m+f=2-2g$. \end{resu}
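The Euler-Poincar{\'e} formula is used repeatedly below to compute how many faces a hypothetical embedding would have to have. The following sketch (our own illustrative helper, not part of the paper's formal apparatus) encodes this computation:

```python
def euler_faces(n: int, m: int, g: int) -> int:
    """Number of faces f of a 2-cell embedding of a graph with n vertices
    and m edges on the orientable surface of genus g, solved from the
    Euler-Poincare formula n - m + f = 2 - 2g."""
    return 2 - 2 * g - n + m

# The well-known toroidal embedding of K_{3,3} (n = 6, m = 9, g = 1)
# has 3 hexagonal faces:
print(euler_faces(6, 9, 1))  # 3
```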
\begin{resu}\label{genuslemm1}{\em (Maschke 1896)} The finite group $G$ is planar if and only if $G=G_{1} \times G_{2}$, where $G_{1}=\mathbb{Z}_{1}$ or $\mathbb{Z}_{2}$ and $G_{2}=\mathbb{Z}_{n}$, $D_{n}$, $S_{4}$, $A_{4}$ or $A_{5}$. \end{resu}
\begin{rema} \label{generatorrema}\rm{It is clear that planarity depends on the set of generators $C$ chosen for the Cayley graph. For example, ${\it Cay}(\mathbb{Z}_6,\{1\})=C_6$ is planar, as is ${\it Cay}(\mathbb{Z}_6,\{2,3\})$, which is the box product $C_3\Box K_2$, but ${\it Cay}(\mathbb{Z}_6,\{1,2,3\})=K_6$ is not. For the planar groups $D_{n}$, $S_{4}$, $A_{4}$ or $A_{5}$ we get various Archimedean solids as Cayley graph representations, with two or three generators \cite{www}}. \end{rema}
\begin{resu}\label{genuslemm2}{\em (Kuratowski 1930)} A finite graph is planar if and only if it does not contain a subgraph that is a subdivision of $K_{5}$ or $K_{3,3}$. \end{resu}
\begin{resu}\label{genuslemm3}{\em (Chartrand, Harary 1967)} A finite graph is outer planar if and only if it does not contain a subgraph that is a subdivision of $K_4$ or $K_{2,3}$. \end{resu}
\section{The $Cay$-functor and right groups} For most of the considerations we can use the following two results which we take from \cite{UX}. However, as far as we know, there do not exist general formulas which relate the genus of a cross product or a lexicographic product of two graphs to the genera of the factors, compare for example \cite{GT}, \cite{IK} or \cite{White}. Some of the difficulties with respect to the lexicographic product can be seen in Example \ref{prop:D3}. We denote by $\times$ the \textit{cross product} for graphs and also the direct product for semigroups and sets. By $X[Y]$ we denote the \textit{lexicographic product} of the graph $X$ with the graph $Y$.
\begin{prop}\label{theo2} For semigroups $S$ and $T$ with subsets $C$ and $D$, respectively, we have ${\it Cay}(S\times T, C\times D)={\it Cay}(S,C)\times {\it Cay}(T,D)$.
\end{prop}
Note that if in the above formula the semigroup $T$ is $R_r$, its graph ${\it Cay}(R_r,R_r)$ has to be considered as $K_r^{(r)}$, i.e., the complete graph $K_r$ with a loop at each of its $r$ vertices.
\begin{prop}\label{theo2a} Let $S$ be a monoid with identity $1_S$, $T$ a semigroup, $C$ and $D$ subsets of $S$ and $T$ respectively. Then $${\it Cay}(S\times T, (C\times T) \cup (\{1_S\}\times D))={\it Cay}(S,C)[{\it Cay}(T,D)]$$ if and only if $tT=T$ for any $t\in T$, that is, if and only if $T$ is a right
group. \end{prop}
\begin{rema} \rm{A formal description, with the help of the $Cay$-functor on semigroups with generators, of the relation between graphs and subgraphs which are subdivisions seems to be difficult. In ${\it Cay}(\mathbb{Z}_6,\{1\})$ we find a subdivision of $K_3$, corresponding to ${\it Cay}(\{0, 2, 4\}, \{2\})$, as a subgraph. But subdivision is not a categorical concept, and there is no inclusion
between $\{0, 2, 4\}\times \{2\}$ and $\mathbb{Z}_6\times \{1\}$. } \end{rema}
\section{The embeddings}
Now we determine the minimal genus among the Cayley graphs ${\it Cay}(G\times R_r, C\times R_r)$ taken over all minimal generating sets $C$ of the group $G$. We do not claim that an embedding of this graph gives the (minimal) genus of the right group considered. Generally $G\times R_r$ may have a generating system $C'\neq C\times R_r$ which yields a Cayley graph with fewer edges and consequently tends to have a smaller genus. A straightforward calculation yields the following lemma. Note that the first equality can also be obtained by applying Proposition \ref{theo2a} in the form ${\it Cay}(G\times R_r, (C\times R_r) \cup (\{1_G\}\times \emptyset))={\it Cay}(G,C)[{\it Cay}(R_r,\emptyset)]$.
\begin{lemm}
Denote by ${\it Cay}(G,C)[\overline K_r]$ the lexicographic product of ${\it Cay}(G,C)$ with $r$ isolated vertices. We have ${\it Cay}(G\times R_r, C\times R_r)={\it Cay}(G,C)[\overline K_r]$. \end{lemm}
Note that this product can be seen as replacing every vertex of ${\it Cay}(G,C)$ by $r$ independent vertices and every edge by a $K_{r,r}$. In particular $K_{k,k}[ \overline K_r]=K_{kr,kr}$.
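The blow-up description above can be checked mechanically. The sketch below (our own helper with hypothetical names, operating on plain edge lists) constructs $X[\overline K_r]$ by replacing every vertex by $r$ independent copies and every edge by a $K_{r,r}$:

```python
from itertools import product

def lex_with_empty(edges, vertices, r):
    """Lexicographic product X[K-bar_r]: replace each vertex v by r copies
    (v, 0), ..., (v, r-1) and each edge {u, v} by a complete bipartite
    K_{r,r} between the copies of u and the copies of v."""
    new_vertices = {(v, i) for v in vertices for i in range(r)}
    new_edges = {frozenset({(u, i), (v, j)})
                 for (u, v) in edges
                 for i, j in product(range(r), repeat=2)}
    return new_vertices, new_edges

# K_{2,2} = C_4 blown up with r = 2 gives K_{4,4}: 8 vertices, 16 edges.
c4_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
V, E = lex_with_empty(c4_edges, range(4), 2)
print(len(V), len(E))  # 8 16
```

In general $|E(X[\overline K_r])|=r^2\,|E(X)|$, which is the edge count used in the propositions that follow.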
\begin{prop} If ${\it Cay}(G,C)$ is not planar then ${\it Cay}(G\times R_r, C\times R_r)$ cannot be embedded on the torus. \end{prop} \begin{proof} Already $K_{3,3}[\overline K_{2}]\cong K_{6,6}$ has genus 4. Moreover, the graph $K_5[\overline K_{2}]$ has 10 vertices and 40 edges. An embedding on the torus would have 30 faces by the formula of Euler-Poincar{\'e}. Even if all faces were triangles, such an embedding would require $3\cdot 30/2=45$ edges, but the graph has only $40$. So the graphs are not toroidal. \end{proof}
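The arithmetic of this counting argument can be verified directly; this is only a sanity check of the numbers, not a replacement for the cited genus facts:

```python
def faces_on_torus(n, m):
    # Euler-Poincare with g = 1: n - m + f = 0, so f = m - n.
    return m - n

# K_5[K-bar_2]: 10 vertices; each of the 10 edges of K_5 becomes a K_{2,2},
# giving 40 edges.
n, m = 10, 10 * 4
f = faces_on_torus(n, m)          # a toroidal embedding would need 30 faces
triangulation_edges = 3 * f // 2  # all-triangle faces would need 3f/2 edges
print(f, triangulation_edges, m)  # 30 45 40 -> 45 > 40, no toroidal embedding
```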
\begin{prop}
If $r\geq 5$ then ${\it Cay}(G\times R_r, C\times R_r)$ cannot be embedded on the torus. \end{prop} \begin{proof}
The resulting graph contains $K_{5,5}$ which has genus 3, compare~\cite{White}. \end{proof}
\begin{prop}\label{prop:K22}
If ${\it Cay}(G,C)$ contains a $K_{2,2}$ subdivision and $r\geq 3$ then ${\it Cay}(G\times R_r, C\times R_r)$ cannot be
embedded on the torus. \end{prop} \begin{proof}
The resulting graph contains $K_{6,6}$ which has genus 4, compare~\cite{White}. \end{proof}
Hence, for the rest of the paper we will check all planar groups $G$ and $1\leq r\leq 4$ to decide when ${\it Cay}(G\times R_r, C\times R_r)$ has genus $1$.
\begin{lemm}\label{lem:threereg}
If the vertex degree of a planar ${\it Cay}(G,C)$ is at least $3$ then ${\it Cay}(G\times R_2,C\times R_2)$ cannot be embedded on the torus. \end{lemm} \begin{proof} Since ${\it Cay}(G,C)$ is at least $3$-regular, ${\it Cay}(G\times R_2,C\times R_2)$ is at least $6$-regular.
Assume that ${\it Cay}(G\times R_2,C\times R_2)$ is embedded on the torus; then the formula of Euler-Poincar{\'e} yields that all faces are triangular. This implies that every edge of ${\it Cay}(G\times R_2,C\times R_2)$ lies in at least two triangles, hence every edge of ${\it Cay}(G,C)$ lies in at least one triangle.
Let $c_1,c_2,c_3\in C$ be the generators corresponding to a triangle $a_1,a_2,a_3$. Then $c_1^{\pm 1}c_2^{\pm 1}c_3^{\pm 1}=1_G$ for some choice of signs, where $1_G$ is the identity in $G$. If any two of the $c_i$ are distinct then one of the two is redundant, hence $C$ was not inclusion-minimal. Thus every $c\in C$ must be of order $3$. Since $G$ is not cyclic we obtain that ${\it Cay}(G,C)$ is at least $4$-regular. The formula of Euler-Poincar{\'e} yields that the at least $8$-regular ${\it Cay}(G\times R_2,C\times R_2)$ cannot be embedded on the torus. \end{proof}
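The two regularity thresholds in this proof come from the same face-length average, which is independent of the number of vertices. A small check (our own sketch, using exact rational arithmetic) makes this explicit: for a $d$-regular graph on $n$ vertices, $m=dn/2$ and a toroidal embedding has $f=m-n$ faces, so the average face length is $2m/f$.

```python
from fractions import Fraction

def avg_face_length_on_torus(degree):
    """Average face length of a hypothetical toroidal 2-cell embedding
    of a degree-regular graph; independent of the number of vertices n:
    m/n = d/2, f/n = d/2 - 1, average = 2m/f."""
    m_per_n = Fraction(degree, 2)   # edges per vertex
    f_per_n = m_per_n - 1           # faces per vertex on the torus
    return 2 * m_per_n / f_per_n

print(avg_face_length_on_torus(6))  # 3   -> every face must be a triangle
print(avg_face_length_on_torus(8))  # 8/3 -> below 3, so impossible
```

For degree $6$ the average is exactly $3$, forcing an all-triangle embedding; for degree $8$ it drops below $3$, which no 2-cell embedding can achieve since every face has at least three sides.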
\begin{figure}\label{fig:Z34R23}
\end{figure}
\begin{prop}\label{prop:main}
The minimum genus of ${\it Cay}(\mathbb{Z}_n\times R_r, C\times R_r)$ among all generating systems $C$ is $1$
iff $(n,r)\in\{(2,3),(2,4),(3,3),(i,2)\}$ for $i\geq 4$. \end{prop} \begin{proof} By Lemma~\ref{lem:threereg} we can assume $C=\{1\}$.
For $n=2$ we have ${\it Cay}(\mathbb{Z}_2\times R_r, C\times R_r)=K_{r,r}$, which has genus 1 exactly for $r\in\{3,4\}$.
Take $n=3$. If $r=2$ we obtain the planar graph ${\it Cay}(\mathbb{Z}_3\times R_2, \{1\}\times R_2)$ shown in Figure~\ref{fig:Z34R23}. If $r=3$ the resulting graph contains $K_{3,3}$, so it cannot be planar. Figure~\ref{fig:Z34R23} shows an embedding as a triangular grid on the torus. If $r=4$ we have the complete tripartite graph $K_{4,4,4}$. Delete the entire set of $16$ edges between two of the partitioning sets. The remaining (non-planar) graph has $12$ vertices, $32$ edges and, assuming a toroidal embedding, $20$ faces. Since this graph is triangle-free, every face would have at least four sides, which requires $2\cdot 32\geq 4\cdot 20$, that is, $64\geq 80$, a contradiction. So for $r\geq 4$ the graph ${\it Cay}(\mathbb{Z}_3\times R_r, C\times R_r)$ is not toroidal.
Take $n\geq 4$. Now the graph ${\it Cay}(\mathbb{Z}_n,\{1\})$ contains a $C_4=K_{2,2}$ subdivision. If $r\geq 3$ then ${\it Cay}(\mathbb{Z}_n\times R_r, \{1\}\times R_r)$ is not toroidal by Proposition~\ref{prop:K22}. If $r=2$ an embedding of ${\it Cay}(\mathbb{Z}_4\times R_2, \{1\}\times R_2)$ as a square grid in the torus is shown in Figure~\ref{fig:Z34R23}. This is instructive for the cases $n\geq 5$. Moreover we see that the vertices $\{0,0',2\}$ and $\{1,1',3\}$ induce a $K_{3,3}$ subgraph of ${\it Cay}(\mathbb{Z}_4\times R_2, \{1\}\times R_2)$. Generally for $n\geq 4$ we have that ${\it Cay}(\mathbb{Z}_n\times R_2, \{1\}\times R_2)$ contains a $K_{3,3}$ subdivision, hence it is not planar. \end{proof}
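The face count in the $n=3$, $r=4$ case above can be double-checked numerically (our own sketch of the arithmetic only):

```python
# Cay(Z_3 x R_4) = K_{4,4,4}; deleting the 16 edges between two of the
# three parts leaves K_{4,8}, which is bipartite and hence triangle-free.
n = 12
m = 48 - 16                  # K_{4,4,4} has 3 * 16 = 48 edges
f = m - n                    # faces of a hypothetical toroidal embedding
# Triangle-free: every face has at least 4 sides, so 2m >= 4f is required.
print(f, 2 * m, 4 * f)       # 20 64 80 -> 64 < 80, no toroidal embedding
```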
\begin{theo} \label{maintheo} Let $G\times R_r$ be a finite right group. The minimal genus of ${\it Cay}(G\times R_r,C\times R_r)$ among all generating sets $C\subseteq G$ of $G$ is $1$ iff $G\times R_r$ is one of the following right groups: \begin{itemize}
\item $\mathbb{Z}_n\times R_r$ with $(n,r)\in\{(2,3),(2,4),(3,3),(i,2)\}$ for $i\geq4$ \item $\mathbb{Z}_2\times\mathbb{Z}_{2n+1}\times R_2$ for $n\geq 1$ \item $D_n\times R_2$ for all $n\geq 2$ \item $\mathbb{Z}_2\times D_n\times R_2$ for all $n\geq 2$ \end{itemize} \end{theo}
\begin{proof}
Since $\mathbb{Z}_2\times\mathbb{Z}_{2n+1}\cong \mathbb{Z}_{4n+2}$, Proposition~\ref{prop:main} proves the first two sets of right groups to have the desired property.
Observe that ${\it Cay}(D_n,C)$, where $C$ consists of two generators $g_1, g_2$ of order 2, is isomorphic to ${\it Cay}(\mathbb{Z}_{2n}, \{1\})$. Thus it is planar and by Proposition~\ref{prop:main} ${\it Cay}(D_n\times R_2,\{g_1,g_2\}\times R_2)$ can be embedded on the torus. Any other generating system for $D_n$ yields ${\it Cay}(D_n,C)$ with degree at least $3$, hence by Lemma~\ref{lem:threereg} it cannot be embedded on the torus and in particular is non-planar.
The only generating system for $\mathbb{Z}_2\times D_n$ which escapes the preconditions of Lemma~\ref{lem:threereg} is $C=\{(1,g_1),(0,g_2)\}$ and indeed ${\it Cay}(\mathbb{Z}_2\times D_n,C)\cong C_{4n}\cong {\it Cay}(\mathbb{Z}_{4n},\{1\})$. Thus ${\it Cay}(\mathbb{Z}_2\times D_n\times R_2,C\times R_2)$ is toroidal by Proposition~\ref{prop:main}.
Let $G\in\{A_4,S_4,A_5, \mathbb{Z}_2\times A_4, \mathbb{Z}_2\times S_4, \mathbb{Z}_2\times A_5, \mathbb{Z}_2\times \mathbb{Z}_{2n}\}$
for $n\geq 2$. It can be checked that $G$ cannot be generated by two elements of order two. Since $G$ is not cyclic we have $|\{g\in C\mid \textmd{ord}(g)=2\}|+2|\{g\in C\mid \textmd{ord}(g)\geq 3\}|\geq 3$ for every generating system $C$ of $G$. Thus, by Lemma~\ref{lem:threereg} we know that ${\it Cay}(G\times R_2,C\times R_2)$ cannot be embedded on the torus. \end{proof}
In the above proofs we make strong use of Lemma~\ref{lem:threereg}, which tells us that $3$-regular planar Cayley graphs will not be embeddable on the torus after taking the direct product with $R_2$. In fact, this operation can increase the genus from $0$ to $3$ already in the following small example.
\begin{figure}
\caption{${\it Cay}(\mathbb{Z}_6\times R_2, \{2,3\}\times R_2)$ in the triple torus with handles $X$, $Y$, $Z$.}
\label{fig:D3R2}
\end{figure}
\begin{exam}\label{prop:D3} The genus of ${\it Cay}(\mathbb{Z}_6\times R_2, \{2,3\}\times R_2)$ is $3$. Note that ${\it Cay}(\mathbb{Z}_6\times R_2, \{2,3\}\times R_2)\cong(C_3\Box K_2)[\overline K_2]$. \end{exam} \begin{proof} To see this we observe that ${\it Cay}(\mathbb{Z}_6\times R_2, \{2,3\}\times R_2)$ consists of two disjoint copies $C_3\Box K_2$ and $(C_3\Box K_2)'$ of ${\it Cay}(\mathbb{Z}_6,\{2,3\})$ with vertex sets $\{0,1,2,3,4,5\}$ and $\{0',1',2',3',4',5'\}$, respectively. Every vertex $v$ of $C_3\Box K_2$ is adjacent to every neighbor of its copy $v'$ in $(C_3\Box K_2)'$. Figure~\ref{fig:D3R2} shows an embedding of ${\it Cay}(\mathbb{Z}_6\times R_2, \{2,3\}\times R_2)$ into the orientable surface of genus $3$ -- \textit{the triple torus}. This graph is $6$-regular with $12$ vertices, so it has $36$ edges.
By Lemma~\ref{lem:threereg} ${\it Cay}(\mathbb{Z}_6\times R_2, \{2,3\}\times R_2)$ cannot be embedded on the torus.
So assume that ${\it Cay}(\mathbb{Z}_6\times R_2, \{2,3\}\times R_2)$ is 2-cell-embedded on the double torus. Delete the $4$ edges between $1,1'$ and $5,5'$ and the $4$ edges between $0,0'$ and $4,4'$. The resulting graph $H$ has $28$ edges. It consists of two graphs
$A$ and $B$, which are copies of $K_{4,4}$, where $A$ has the bipartition $(\{0,0',5,5'\}$, $\{2,2',3,3'\})$ and $B$ has $(\{0,0',1,1'\}$, $\{3,3',4,4'\})$. They are glued at the four vertices with the same numbers and the corresponding $4$ edges are identified. Although $H$ is no longer bipartite, it is still triangle-free. Hence by our assumption it is 2-cell-embedded on the double torus. By the formula of Euler-Poincar{\'e} this gives 14 faces and consequently all of them are quadrangular. So the edges between $1,1'$ and $5,5'$ and between $0,0'$ and $4,4'$, which we have to put back in, have to be diagonals of these quadrangular faces. But then $\{2',4,2,0\}$ and $\{2',4,2,0'\}$, being the only 4-cycles in $H$ which contain the vertices $4,0$ and $4,0'$, respectively, must form faces of $H$. Since they have the common edges $\{2',4\}$ and $\{2,4\}$ we obtain a $K_{2,3}$ with bipartition $(\{2,2'\},\{0,0',4\})$. It is folklore that $K_{2,3}$ is not outer planar. Thus the region consisting of the glued $4$-cycles $\{2',4,2,0\}$ and $\{2',4,2,0'\}$ must contain one of the vertices $0,0'$ or $4$ in its interior. Hence this vertex has only degree $2$ -- a contradiction. \end{proof}
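The Euler-Poincar{\'e} bookkeeping in this proof can be double-checked as follows (our own sketch of the counts only):

```python
# Cay(Z_6 x R_2, {2,3} x R_2) is 6-regular on 12 vertices.
n = 12
m = 6 * n // 2               # 36 edges
# Deleting the 4 edges between 1,1' and 5,5' and the 4 edges between
# 0,0' and 4,4' leaves the graph H with 28 edges.
m_H = m - 8
# Assume H is 2-cell-embedded on the double torus (g = 2):
# n - m_H + f_H = 2 - 2*2 = -2.
f_H = -2 - n + m_H
print(m, m_H, f_H)           # 36 28 14
# 2*m_H = 56 = 4*14 edge-face incidences: every face is quadrangular.
print(2 * m_H == 4 * f_H)    # True
```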
\noindent We thank Xia Zhang for many helpful comments as well as Srichan Arworn, Nirutt Pipattanajinda and several graduate students with whom one of the authors discussed the topic extensively on a research stay at Chiangmai University, Thailand -- supported by Deutsche Forschungsgemeinschaft.
\end{document} |